Given the following text description, write Python code to implement the functionality described below step by step
Description:
B2b example notebook
Example notebook for the B2b, 2 channel 24-bit ADC module. The module contains the same ADCs as the D4 and is identical in hardware to the D4b module
Step1: Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.
Step2: Read back the version of the microcontroller software. This should return 1.6 or higher to be able to use the B2b properly. Also read the temperature and the battery voltages through the C1b; this way we verify that the connection with the SPI Rack is working.
Step3: Create a new B2b module object at the correct module address using the SPI object. If we set calibrate=True, the module will run a calibration routine at initialisation. This takes about 2 seconds, during which the python code will stall all operations.
To see that we have a connection, we read back the firmware version.
Step4: FFT
One useful application of the B2b module is to find interference. The module can be set to run at a high sample rate and store a trace in the local memory. If we run an FFT on this data, we will be able to see all kinds of interference signals present in our setup. To demonstrate this, we will use the measurement setup as shown in the image below.
<img src="Images/Meas_Setup_B2b_FFT.png" alt="Scope Image" title="Scope Image" width="350" />
We will set the function generator to generate a 500 Hz signal of ±100mV and run it through a sample simulator at 10MΩ. The B2b will be connected to a current measurement module, in this case an old M1 I-measure module that was lying around.
First we have to configure the B2b for acquiring long traces of data.
Configuring the B2b for FFT
The B2b module can run from either a local (inside the module) clock or a user provided clock from the backplane. This backplane clock should be 10 MHz and either a square or a sine wave. If there are more modules with microcontrollers in the rack, and they need to run synchronously, it is recommended to use the backplane clock. For a single module it is fine to run it using the local clock.
If the external clock is selected but not present, the user will get an ERROR to the logger and the microcontroller will keep running on the internal clock. Never turn off the external clock if the microcontroller is running on it. This will stop the module from functioning.
In this example we will use the internal clock
Step5: To get the B2b module to do anything, it needs to be triggered. There are three ways of triggering the module
Step6: We'll measure on channel one (zero in software), so we need to enable it. For the FFT we'll take 10000 measurements with filter setting 0 on the sinc5 filter. This will give a datarate of 50 kSPS and a resolution of 16.8 bit. For details on all the filter settings, see the excel sheet for the D4_filter.
Step7: Measurement and plotting
To start a measurement we trigger the B2b via software and keep checking if the module is done measuring.
Step8: We use the periodogram from scipy, which will give the power spectral density. Before we do that we have to take the gain of the M1f module into account. It has a gain of 10 MV/A and a postgain of 10.
Step9: D5a sweep
We're now going to perform a sweep with the D5a and measure synchronously with the B2b module. As the timing of the D5a updates is handled by the PC, the time between the sweep steps is not going to be very accurate. This same issue would arise if we used the software trigger of the B2b. To work around this, we will trigger the B2b using the controller trigger.
The measurement setup is displayed in the figure below.
<img src="Images/Meas_Setup_B2b_D5a.png" alt="Scope Image" title="Scope Image" width="350" />
Create a new D5a module object at the correct module address using the SPI object. By default the module resets the output voltages to 0 Volt. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_voltages = False then the output will not be changed.
Step10: To get nice equidistant voltage steps, we will use integer multiples of the smallest step the DAC can do in the current range setting.
Step11: We now have to tell the B2b module to look out for the controller trigger, with a trigger amount equal to the sweep length. Additionally we will set a holdoff time of 10 ms. This is to compensate for any delays through the circuit (due to line length and/or filters).
Step12: We will keep the filter at sinc5, but the rate at 10
Step13: Here we see how we can synchronise the updating of the DAC with the triggering of the B2b module. Before we set the new output voltage, we arm the spi_rack controller. This means that it will send a trigger on the next SPI command it receives
Step14: Compensating for the gain of the M1 (a factor 10e6), we get the IV curve for our 'sample'. In this case the sample simulator was set to a series resistance of 10 MOhm with all capacitors at minimum value.
Step15: When done with this example, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device.
Python Code:
from spirack import SPI_rack, B2b_module, D5a_module, D4b_module
import logging
from time import sleep
from tqdm import tqdm_notebook
import numpy as np
from scipy import signal
from plotly.offline import init_notebook_mode, iplot, plot
import plotly.graph_objs as go
init_notebook_mode(connected=True)
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)
Explanation: B2b example notebook
Example notebook for the B2b, 2 channel 24-bit ADC module. The module contains the same ADCs as the D4 and is identical in hardware to the D4b module: the only difference being the absence of connectors on the front of the module. Both of them differ from the D4 by the addition of an ARM microcontroller. This allows for operations where exact timing and local storage are needed.
SPI Rack setup
To use the B2b module, we need to import both the B2b_module and the SPI_rack module from the spirack library. All the communication with the SPI Rack runs through the SPI_rack object, which communicates through a virtual COM port. This COM port can only be opened by one instance on the PC, so make sure you close the connection here before you use it somewhere else.
We also import the logging library to be able to display the logging messages, numpy for data manipulation, scipy for the FFT analysis and plotly for visualisation.
End of explanation
COM_port = 'COM4' # COM port of the SPI rack
COM_speed = 1e6 # Baud rate, not of much importance
timeout = 1 # Timeout value in seconds
spi_rack = SPI_rack(COM_port, COM_speed, timeout)
spi_rack.unlock() # Unlock the controller to be able to send data to the rack
Explanation: Open the SPI rack connection and unlock the controller. This is necessary after bootup of the controller module. If not unlocked, no communication with the modules can take place. The virtual COM port baud rate is irrelevant as it doesn't change the actual speed. Timeout can be changed, but 1 second is a good value.
End of explanation
print('Version: ' + spi_rack.get_firmware_version())
print('Temperature: {:.2f} C'.format(spi_rack.get_temperature()))
battery_v = spi_rack.get_battery()
print('Battery: {:.3f}V, {:.3f}V'.format(battery_v[0], battery_v[1]))
Explanation: Read back the version of the microcontroller software. This should return 1.6 or higher to be able to use the B2b properly. Also read the temperature and the battery voltages through the C1b; this way we verify that the connection with the SPI Rack is working.
End of explanation
B2b = B2b_module(spi_rack, module=4, calibrate=False)
print("Firmware version: {}".format(B2b.get_firmware_version()))
Explanation: Create a new B2b module object at the correct module address using the SPI object. If we set calibrate=True, the module will run a calibration routine at initialisation. This takes about 2 seconds, during which the python code will stall all operations.
To see that we have a connection, we read back the firmware version.
End of explanation
B2b.set_clock_source('internal')
print("Clock source: {}".format(B2b.get_clock_source()))
Explanation: FFT
One useful application of the B2b module is to find interference. The module can be set to run at a high sample rate and store a trace in the local memory. If we run an FFT on this data, we will be able to see all kinds of interference signals present in our setup. To demonstrate this, we will use the measurement setup as shown in the image below.
<img src="Images/Meas_Setup_B2b_FFT.png" alt="Scope Image" title="Scope Image" width="350" />
We will set the function generator to generate a 500 Hz signal of ±100mV and run it through a sample simulator at 10MΩ. The B2b will be connected to a current measurement module, in this case an old M1 I-measure module that was lying around.
First we have to configure the B2b for acquiring long traces of data.
Configuring the B2b for FFT
The B2b module can run from either a local (inside the module) clock or a user provided clock from the backplane. This backplane clock should be 10 MHz and either a square or a sine wave. If there are more modules with microcontrollers in the rack, and they need to run synchronously, it is recommended to use the backplane clock. For a single module it is fine to run it using the local clock.
If the external clock is selected but not present, the user will get an ERROR to the logger and the microcontroller will keep running on the internal clock. Never turn off the external clock if the microcontroller is running on it. This will stop the module from functioning.
In this example we will use the internal clock:
End of explanation
B2b.set_trigger_input("None")
B2b.set_trigger_amount(1)
B2b.set_trigger_holdoff_time(0)
Explanation: To get the B2b module to do anything, it needs to be triggered. There are three ways of triggering the module:
Software trigger
Controller generated trigger
D5b generated trigger
The software trigger is generated by the PC, which means that the timing is not very exact. Depending on the user application, this might be acceptable. As an example, it would be perfectly fine for finding interference: take a long trace and run an FFT on the data.
The controller generated trigger eliminates the issue of the software trigger: the timing is now handled by the microcontroller in the controller module. This allows for exact alignment with other operations. There are two ways the controller can generate a trigger: directly by a PC command, or synchronous with another SPI command. The last one is the most interesting: you can, for example, generate a trigger at the moment you're sending a message to update the voltage on the D5a module. This allows for synchronous measurements and takes the PC out of the picture. The controller generated triggers will be on the backplane for all modules to see, so the user can trigger multiple modules at once.
Finally there is also the D5b generated trigger: it generates a trigger every time it toggles the output (in toggling mode). This allows for lock-in type measurements. For more information on that, see the lock-in example notebook.
In this notebook we will be using both the software trigger and the controller generated trigger. First we'll use the software trigger. To do this, we'll set the trigger input to 'None' to make the B2b ignore the trigger lines on the backplane. We only expect one trigger, and we don't need any hold off time. This is a dead time which the B2b will wait after the trigger before it starts measuring. It can be set with a resolution of 100 ns.
End of explanation
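As a side note on that 100 ns resolution: a holdoff time is effectively stored as a number of 100 ns ticks. The following is an illustration of the arithmetic this implies, not the spirack library's internal code:

```python
# Illustration of the 100 ns holdoff resolution mentioned above.
# Not the spirack library's internal code -- just the arithmetic it implies.
RESOLUTION = 100e-9  # seconds per tick

def holdoff_ticks(seconds):
    """Number of 100 ns ticks representing a requested holdoff time."""
    return round(seconds / RESOLUTION)

def quantized_holdoff(seconds):
    """The holdoff time actually realisable at 100 ns resolution."""
    return holdoff_ticks(seconds) * RESOLUTION

print(holdoff_ticks(10e-3))        # a 10 ms holdoff is 100000 ticks
print(quantized_holdoff(1.23e-7))  # 123 ns rounds to one tick: 100 ns
```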
filter_type = 'sinc5'
filter_setting = 0
B2b.set_ADC_enable(0, True)
B2b.set_sample_amount(0, 10000)
B2b.set_filter_type(0, filter_type)
B2b.set_filter_rate(0, filter_setting)
Explanation: We'll measure on channel one (zero in software), so we need to enable it. For the FFT we'll take 10000 measurements with filter setting 0 on the sinc5 filter. This will give a datarate of 50 kSPS and a resolution of 16.8 bit. For details on all the filter settings, see the excel sheet for the D4_filter.
End of explanation
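For a feel of the numbers quoted above, the trace length follows directly from the data rate. The two entries below are the only settings quoted in this notebook's text; the complete table is in the D4 filter sheet mentioned above:

```python
# Data rates quoted in this notebook for two sinc5 settings; the complete
# table lives in the D4 filter sheet referenced above.
sinc5_rate_sps = {0: 50_000, 10: 1_000}  # filter setting -> samples/second

samples = 10_000
trace_seconds = samples / sinc5_rate_sps[0]
print(trace_seconds)  # 10000 samples at 50 kSPS take 0.2 s
```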
B2b.software_trigger()
while B2b.is_running():
    sleep(0.1)
ADC_data, _ = B2b.get_data()
Explanation: Measurement and plotting
To start a measurement we trigger the B2b via software and keep checking if the module is done measuring.
End of explanation
#Calculate periodogram
T = B2b.sample_time[filter_type][filter_setting]
fs = 1/T
N = len(ADC_data)
gain = 10*10e6
f0, Pxx_den0 = signal.periodogram(ADC_data/gain, fs)
#Plot the FFT data
pldata0 = go.Scattergl(x=f0, y=np.sqrt(Pxx_den0), mode='lines+markers', name='ADC1')
plot_data = [pldata0]
layout = go.Layout(
title = dict(text='Spectral Density'),
xaxis = dict(title=r'$\text{Frequency [Hz]}$', type='log'),
yaxis = dict(title=r'$\text{PSD [} \text{A/}\sqrt{\text{Hz}} \text{]} $')
)
fig = go.Figure(data=plot_data, layout=layout)
iplot(fig)
Explanation: We use the periodogram from scipy, which will give the power spectral density. Before we do that we have to take the gain of the M1f module into account. It has a gain of 10 MV/A and a postgain of 10.
End of explanation
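The same analysis can be tried offline on a synthetic trace: a 500 Hz tone (the function generator setting used above) sampled at 50 kSPS and referred back through the assumed 10 MV/A gain with a postgain of 10. The peak of the periodogram lands on the tone:

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for ADC_data: a 500 Hz tone sampled at 50 kSPS,
# as if measured behind a 10 MV/A gain with a postgain of 10.
fs = 50_000
t = np.arange(10_000) / fs
gain = 10 * 10e6                           # total gain in V/A
v = 0.1 * np.sin(2 * np.pi * 500 * t)      # voltage trace seen by the ADC

f, Pxx = signal.periodogram(v / gain, fs)  # PSD of the inferred current
peak_freq = f[np.argmax(Pxx)]
print(peak_freq)  # the 500 Hz tone dominates the spectrum
```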
D5a = D5a_module(spi_rack, module=2, reset_voltages=True)
Explanation: D5a sweep
We're now going to perform a sweep with the D5a and measure synchronously with the B2b module. As the timing of the D5a updates is handled by the PC, the time between the sweep steps is not going to be very accurate. This same issue would arise if we used the software trigger of the B2b. To work around this, we will trigger the B2b using the controller trigger.
The measurement setup is displayed in the figure below.
<img src="Images/Meas_Setup_B2b_D5a.png" alt="Scope Image" title="Scope Image" width="350" />
Create a new D5a module object at the correct module address using the SPI object. By default the module resets the output voltages to 0 Volt. Before it does this, it will read back the current value. If this value is non-zero it will slowly ramp it to zero. If reset_voltages = False then the output will not be changed.
End of explanation
smallest_step = D5a.get_stepsize(0)
sweep_voltages = np.arange(-3000*smallest_step, 3001*smallest_step, 100*smallest_step)
print('Smallest step: {0:.3f} uV'.format(smallest_step*1e6))
print('Start voltage: {0:.4f} V. Stop voltage: {1:.4f} V'.format(sweep_voltages[0], sweep_voltages[-1]))
print('Sweep length: {} steps'.format(len(sweep_voltages)))
Explanation: To get nice equidistant voltage steps, we will use integer multiples of the smallest step the DAC can do in the current range setting.
End of explanation
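To see why integer multiples of the smallest step give exactly equidistant points, here is a sketch with assumed numbers: an 18-bit DAC spanning 8 V (a ±4 V range). The authoritative value is whatever D5a.get_stepsize() returns for the active range:

```python
# Illustration of the 'integer multiples of the smallest step' idea.
# Assumed numbers: an 18-bit DAC spanning 8 V (a +/-4 V range); in practice
# D5a.get_stepsize(0) reports the real value for the active range.
span_v = 8.0
bits = 18
smallest_step = span_v / 2**bits          # one DAC code, ~30.5 uV

start = -3000 * smallest_step
stop = 3000 * smallest_step
step = 100 * smallest_step                # sweep in 100-code increments
n_points = int((stop - start) / step) + 1
print(round(smallest_step * 1e6, 3), n_points)  # step size in uV, 61 points
```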
B2b.set_trigger_input("Controller")
B2b.set_trigger_amount(len(sweep_voltages))
B2b.set_trigger_holdoff_time(10e-3)
Explanation: We now have to tell the B2b module to look out for the controller trigger, with a trigger amount equal to the sweep length. Additionally we will set a holdoff time of 10 ms. This is to compensate for any delays through the circuit (due to line length and/or filters).
End of explanation
filter_type = 'sinc5'
filter_setting = 10
B2b.set_ADC_enable(0, True)
B2b.set_sample_amount(0, 1)
B2b.set_filter_type(0, filter_type)
B2b.set_filter_rate(0, filter_setting)
Explanation: We will keep the filter at sinc5, but the rate at 10: a data rate of 1ksps with a settling time of 1 ms. This gives us a resolution of 20.7 bits. The sample_amount is now set to one: only one sample is taken per trigger.
End of explanation
for value in tqdm_notebook(sweep_voltages):
    spi_rack.trigger_arm()
    D5a.set_voltage(0, value)
    while B2b.is_running():
        sleep(1e-3)
ADC_data_sweep, _ = B2b.get_data()
Explanation: Here we see how we can synchronise the updating of the DAC with the triggering of the B2b module. Before we set the new output voltage, we arm the spi_rack controller. This means that it will send a trigger on the next SPI command it receives: in this case the D5a set_voltage command. We'll then wait a little bit to make sure the measurement and settling are done, and go on to the next step.
End of explanation
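The arm-then-fire behaviour can be modelled in a few lines. This is a conceptual sketch only, not the spirack firmware:

```python
# Toy model of the arm-then-fire behaviour described above: once armed, the
# controller fires a trigger on the next SPI command it sees, then disarms.
class ToyTriggerController:
    def __init__(self):
        self.armed = False

    def trigger_arm(self):
        self.armed = True

    def on_spi_command(self):
        """Returns True if this command caused a backplane trigger."""
        fired = self.armed
        self.armed = False
        return fired

ctrl = ToyTriggerController()
ctrl.trigger_arm()
print(ctrl.on_spi_command())  # True: the set_voltage write fires the trigger
print(ctrl.on_spi_command())  # False: subsequent commands do not
```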
gain = 10e6
pldata = go.Scattergl(x=sweep_voltages, y=ADC_data_sweep/gain, mode='lines+markers', name='ADC_data')
plot_data = [pldata]
layout = go.Layout(
title = dict(text='10 MOhm IV Curve'),
xaxis = dict(title='D5a voltage (V)'),
yaxis = dict(title='Current (A)')
)
fig = go.Figure(data=plot_data, layout=layout)
iplot(fig)
Explanation: Compensating for the gain of the M1 (a factor 10e6), we get the IV curve for our 'sample'. In this case the sample simulator was set to a series resistance of 10 MOhm with all capacitors at minimum value.
End of explanation
spi_rack.close()
Explanation: When done with this example, it is recommended to close the SPI Rack connection. This will allow other measurement scripts to access the device.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score. <br />
Step1: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". <br />
Step2: How to use counter
Step3: Tip
Step4: 4) Print a list of Lil's that are more popular than Lil' Kim. <br />
Step5: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks. <br />
Step6: Tip
Step7: 7) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies? <br />
Step8: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? <br />
Python Code:
import requests
response = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50')
data = response.json()
artists = data['artists']['items']
for artist in artists:
    print(artist['name'], artist['popularity'])
Explanation: 1) With "Lil Wayne" and "Lil Kim" there are a lot of "Lil" musicians. Do a search and print a list of 50 that are playable in the USA (or the country of your choice), along with their popularity score. <br />
End of explanation
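The hand-built query string above can also be assembled from a dict, which avoids escaping mistakes (requests can do the same thing itself via its params argument):

```python
from urllib.parse import urlencode

# Same search URL as above, built from a dict instead of string concatenation.
base = 'https://api.spotify.com/v1/search'
params = {'query': 'artist:lil', 'type': 'artist', 'market': 'us', 'limit': 50}
url = base + '?' + urlencode(params)
print(url)  # the ':' in artist:lil is percent-encoded automatically
```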
for artist in artists:
    if not artist['genres']:
        print(artist['name'], artist['popularity'], 'No genres listed')
    else:
        print(artist['name'], artist['popularity'], ', '.join(artist['genres']))
genre_list = []
for artist in artists:
    for genre in artist['genres']:
        genre_list.append(genre)
sorted_genre = sorted(genre_list)
genre_list_number = range(len(sorted_genre))
genre_count = 0
for number in genre_list_number:
    if sorted_genre[number] != sorted_genre[number - 1]:
        print(sorted_genre[number], genre_list.count(sorted_genre[number]))
    if genre_count < genre_list.count(sorted_genre[number]):
        genre_count = genre_list.count(sorted_genre[number])
        freq_genre = sorted_genre[number]
print('')
print('With', genre_count, 'artists,', freq_genre, 'is the most represented in search results.')
Explanation: 2) What genres are most represented in the search results? Edit your previous printout to also display a list of their genres in the format "GENRE_1, GENRE_2, GENRE_3". If there are no genres, print "No genres listed". <br />
End of explanation
from collections import Counter
numbers = ['hip hop', 'jerk', 'juggalo', 'trap', 'soca', 'soca', 'juggalo', 'juggalo']
counts = Counter(numbers)
print(counts)
print(counts.most_common(2))
Explanation: How to use counter
End of explanation
highest_pop = 0
for artist in artists:
    if artist['name'] != 'Lil Wayne' and highest_pop < artist['popularity']:
        highest_pop = artist['popularity']
        highest_pop_artist = artist['name']
print(highest_pop_artist, 'is the second-most-popular artist with \"Lil\" in his/her name.')
most_followers = 0
for artist in artists:
    if most_followers < artist['followers']['total']:
        most_followers = artist['followers']['total']
        most_followers_artist = artist['name']
print(most_followers_artist, 'has', most_followers, 'followers.')
if highest_pop_artist == most_followers_artist:
    print('The second-most-popular \'Lil\' artist is also the one with the most followers.')
else:
    print('The second-most-popular \'Lil\' artist and the one with the most followers are different people.')
Explanation: Tip: "how to join a list Python" might be a helpful search <br />
3) Use a for loop to determine who BESIDES Lil Wayne has the highest popularity rating. Is it the same artist who has the largest number of followers? <br />
End of explanation
for artist in artists:
    if artist['name'] == 'Lil\' Kim':
        more_popular = artist['popularity']
for artist in artists:
    if more_popular < artist['popularity']:
        print(artist['name'], 'is more popular than Lil\' Kim with a popularity score of', artist['popularity'])
Explanation: 4) Print a list of Lil's that are more popular than Lil' Kim. <br />
End of explanation
wayne_id = '55Aa2cqylxrFIXC767Z865'
wayne_response = requests.get('https://api.spotify.com/v1/artists/' + wayne_id + '/top-tracks?country=us')
wayne_data = wayne_response.json()
print('Lil Wayne\'s top tracks:')
wayne_tracks = wayne_data['tracks']
for track in wayne_tracks:
    print(track['name'])
print('')
kim_id = '5tth2a3v0sWwV1C7bApBdX'
kim_response = requests.get('https://api.spotify.com/v1/artists/' + kim_id + '/top-tracks?country=us')
kim_data = kim_response.json()
print('Lil\' Kim\'s top tracks:')
kim_tracks = kim_data['tracks']
for track in kim_tracks:
    print(track['name'])
Explanation: 5) Pick two of your favorite Lils to fight it out, and use their IDs to print out their top tracks. <br />
End of explanation
print('Lil Wayne\'s explicit top tracks:')
ew_total_pop = 0
ew_total_tracks = 0
ew_playtime = 0
for track in wayne_tracks:
    if track['explicit']:
        ew_total_pop = ew_total_pop + track['popularity']
        ew_total_tracks = ew_total_tracks + 1
        ew_playtime = ew_playtime + track['duration_ms']/60000
if ew_total_tracks == 0:
    print('There are no explicit tracks.')
else:
    print('The average popularity is', ew_total_pop / ew_total_tracks)
    print('He has', ew_playtime, 'minutes of explicit music in his top tracks.')
print('')
print('Lil Wayne\'s non-explicit top tracks:')
nw_total_pop = 0
nw_total_tracks = 0
nw_playtime = 0
for track in wayne_tracks:
    if not track['explicit']:
        nw_total_pop = nw_total_pop + track['popularity']
        nw_total_tracks = nw_total_tracks + 1
        nw_playtime = nw_playtime + track['duration_ms']/60000
if nw_total_tracks == 0:
    print('There are no non-explicit tracks.')
else:
    print('The average popularity is', nw_total_pop / nw_total_tracks)
    print('He has', nw_playtime, 'minutes of non-explicit music in his top tracks.')
print('')
print('Lil\' Kim\'s explicit top tracks:')
ek_total_pop = 0
ek_total_tracks = 0
ek_playtime = 0
for track in kim_tracks:
    if track['explicit']:
        ek_total_pop = ek_total_pop + track['popularity']
        ek_total_tracks = ek_total_tracks + 1
        ek_playtime = ek_playtime + track['duration_ms']/60000
if ek_total_tracks == 0:
    print('There are no explicit tracks.')
else:
    print('The average popularity is', ek_total_pop / ek_total_tracks)
    print('She has', ek_playtime, 'minutes of explicit music in her top tracks.')
print('')
print('Lil\' Kim\'s non-explicit top tracks:')
nk_total_pop = 0
nk_total_tracks = 0
nk_playtime = 0
for track in kim_tracks:
    if not track['explicit']:
        nk_total_pop = nk_total_pop + track['popularity']
        nk_total_tracks = nk_total_tracks + 1
        nk_playtime = nk_playtime + track['duration_ms']/60000
if nk_total_tracks == 0:
    print('There are no non-explicit tracks.')
else:
    print('The average popularity is', nk_total_pop / nk_total_tracks)
    print('She has', nk_playtime, 'minutes of non-explicit music in her top tracks.')
Explanation: Tip: You're going to be making two separate requests, be sure you DO NOT save them into the same variable. <br />
6) Will the world explode if a musicians swears? Get an average popularity for their explicit songs vs. their non-explicit songs. How many minutes of explicit songs do they have? Non-explicit? <br />
End of explanation
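The four near-identical blocks above differ only in the track list and the explicit flag, so they could be folded into one helper. A sketch follows; the demo tracks below are made up, using the same fields as above:

```python
# Helper collapsing the repeated explicit/non-explicit summaries above.
# Sketch only; the sample tracks are invented, with the same fields as above.
def summarize_tracks(tracks, explicit):
    """Average popularity and total minutes for (non-)explicit tracks."""
    selected = [t for t in tracks if t['explicit'] == explicit]
    if not selected:
        return None, 0.0
    avg_pop = sum(t['popularity'] for t in selected) / len(selected)
    minutes = sum(t['duration_ms'] for t in selected) / 60000
    return avg_pop, minutes

demo_tracks = [
    {'explicit': True, 'popularity': 80, 'duration_ms': 180000},
    {'explicit': True, 'popularity': 60, 'duration_ms': 240000},
    {'explicit': False, 'popularity': 50, 'duration_ms': 120000},
]
print(summarize_tracks(demo_tracks, explicit=True))   # (70.0, 7.0)
print(summarize_tracks(demo_tracks, explicit=False))  # (50.0, 2.0)
```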
biggie_response = requests.get('https://api.spotify.com/v1/search?query=artist:biggie&type=artist&market=us&limit=50')
biggie_data = biggie_response.json()
biggie_artists = biggie_data['artists']['items']
total_biggies = 0
for artist in biggie_artists:
    total_biggies = total_biggies + 1
print('There are', total_biggies, 'Biggies on Spotify.')
print('It would take', total_biggies * 5, 'seconds to request all of the Biggies if you were requesting one every five seconds.')
print('')
pages = range(90)
total_lils = 0
for page in pages:
    lil_response = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50&offset=' + str(page * 50))
    lil_data = lil_response.json()
    lil_artists = lil_data['artists']['items']
    for artist in lil_artists:
        total_lils = total_lils + 1
print('There are', total_lils, 'Lils on Spotify.')
print('It would take', round(total_lils / 12), 'minutes to request all of the Lils if you were requesting one every five seconds.')
Explanation: 7) Since we're talking about Lils, what about Biggies? How many total "Biggie" artists are there? How many total "Lil"s? If you made 1 request every 5 seconds, how long would it take to download information on all the Lils vs the Biggies? <br />
End of explanation
biggie_total_pop = 0
for artist in biggie_artists:
    biggie_total_pop = biggie_total_pop + artist['popularity']
biggie_avg_pop = biggie_total_pop / 50
lil_response_pg1 = requests.get('https://api.spotify.com/v1/search?query=artist:lil&type=artist&market=us&limit=50')
lil_data_pg1 = lil_response_pg1.json()
lil_artists_pg1 = lil_data_pg1['artists']['items']
lil_total_pop = 0
for artist in lil_artists_pg1:
    lil_total_pop = lil_total_pop + artist['popularity']
lil_avg_pop = lil_total_pop / 50
if biggie_avg_pop > lil_avg_pop:
    print('The top 50 biggies are more popular.')
elif biggie_avg_pop < lil_avg_pop:
    print('The top 50 lils are more popular.')
else:
    print('They are equally popular.')
Explanation: 8) Out of the top 50 "Lil"s and the top 50 "Biggie"s, who is more popular on average? <br />
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Estimating $\pi$ by Sampling Points
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
A stochastic way to estimate the value of $\pi$ is to sample points from a square area. Some of the points will fall within the area of a circle as defined by $x^2 + y^2 = 1$; we count what percentage of all points fall within this area, which allows us to estimate the area of the circle and therefore $\pi$.
Step1: We can visualize the process to see how it works.
Step2: Finally, let's see how our estimate gets better as we increase $n$. We'll do this by computing the estimate for $\pi$ at each step and plotting that estimate to see how it converges.
Python Code:
# Import libraries
import math
import numpy as np
import matplotlib.pyplot as plt
in_circle = 0
outside_circle = 0
n = 10 ** 4
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
for i in range(n):
    if X[i]**2 + Y[i]**2 > 1:
        outside_circle += 1
    else:
        in_circle += 1
area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
pi_estimate = area_of_circle = area_of_quarter_circle * 4
pi_estimate
Explanation: Estimating $\pi$ by Sampling Points
By Evgenia "Jenny" Nitishinskaya and Delaney Granizo-Mackenzie
Notebook released under the Creative Commons Attribution 4.0 License.
A stochastic way to estimate the value of $\pi$ is to sample points from a square area. Some of the points will fall within the area of a circle as defined by $x^2 + y^2 = 1$; we count what percentage of all points fall within this area, which allows us to estimate the area of the circle and therefore $\pi$.
End of explanation
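Written out, the estimator behind the code above is: since the points are uniform on the unit square, the fraction falling inside the quarter circle estimates its area,

$$\frac{N_{\text{in}}}{N} \approx \frac{\pi}{4} \quad\Longrightarrow\quad \hat{\pi} = 4\,\frac{N_{\text{in}}}{N},$$

with $N_{\text{in}}$ the number of points satisfying $x^2 + y^2 \le 1$.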
# Plot a circle for reference
circle1=plt.Circle((0,0),1,color='r', fill=False, lw=2)
fig = plt.gcf()
fig.gca().add_artist(circle1)
# Set the axis limits so the circle doesn't look skewed
plt.xlim((0, 1.8))
plt.ylim((0, 1.2))
plt.scatter(X, Y)
Explanation: We can visualize the process to see how it works.
End of explanation
in_circle = 0
outside_circle = 0
n = 10 ** 3
# Draw many random points
X = np.random.rand(n)
Y = np.random.rand(n)
# Make a new array
pi = np.ndarray(n)
for i in range(n):
    if X[i]**2 + Y[i]**2 > 1:
        outside_circle += 1
    else:
        in_circle += 1
    area_of_quarter_circle = float(in_circle)/(in_circle + outside_circle)
    pi_estimate = area_of_circle = area_of_quarter_circle * 4
    pi[i] = pi_estimate
plt.plot(range(n), pi)
plt.xlabel('n')
plt.ylabel('pi estimate')
plt.plot(range(n), [math.pi] * n)
Explanation: Finally, let's see how our estimate gets better as we increase $n$. We'll do this by computing the estimate for $\pi$ at each step and plotting that estimate to see how it converges.
End of explanation
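A note on how fast the plot above converges: the Monte Carlo error shrinks roughly like $1/\sqrt{n}$. A seeded experiment illustrates this (illustrative addition, not part of the original notebook):

```python
import numpy as np

# Average |estimate - pi| at two sample sizes, with a fixed seed so the
# result is reproducible; the error should drop by roughly 10x for 100x
# more points.
def pi_error(n, trials=50, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        x, y = rng.random(n), rng.random(n)
        est = 4 * np.mean(x**2 + y**2 <= 1)
        errs.append(abs(est - np.pi))
    return float(np.mean(errs))

err_small, err_large = pi_error(100), pi_error(10_000)
print(err_small, err_large)  # the n=10,000 error is much smaller on average
```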
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Orientation density functions
Step1: In this Python Notebook we will show how to properly run a simulation of a composite material, providing the ODF (orientation density function) of the reinforcements.
Such an identification procedure requires
Step2: In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represents quite well the microstructure of injected plates.
The next step is to discretize the ODF into phases.
The file containing the initial 2-phase microstructure contains the following information
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from simmit import smartplus as sim
from simmit import identify as iden
import os
dir = os.path.dirname(os.path.realpath('__file__'))
Explanation: Orientation density functions
End of explanation
x = np.arange(0,182,2)
path_data = dir + '/data/'
peak_file = 'Npeaks0.dat'
y = sim.get_densities(x, path_data, peak_file, False)
fig = plt.figure()
plt.grid(True)
plt.plot(x,y, c='black')
Explanation: In this Python Notebook we will show how to properly run a simulation of a composite material, providing the ODF (orientation density function) of the reinforcements.
Such an identification procedure requires:
1. Proper ODF peak data
2. Proper composite properties
3. A proper numerical model (here a composite model for laminate constitutive model)
End of explanation
NPhases_file = dir + '/data/Nellipsoids0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]
#Number_of_parameters
n_param = 6
#Number_of_consts
n_consts = 0
#Number_of_files
nfiles = 1
#Number_of_generations
ngen = 200
#Aleatory/Mesh space population : 0=mesh 1=meshlimit 2=random 3=defined
aleaspace = 2
#Space or aleatory population : apop in case of aleatory, spop in case of mesh
apop = 200
#Number of "doped" individual
ngboys = 1
#Max population per subgeneration
maxpop = 50
#stationnarity condition
station_nb = 20
path_data = dir + '/data'
path_keys = dir + '/keys'
path_results = dir + '/results'
outputfile = 'id_params.txt'
materialfile = 'material.dat'
simul_type = 'ODF'
iden.identification(simul_type,n_param,n_consts,nfiles,ngen,aleaspace,apop,ngboys,maxpop,station_nb,path_data,path_keys,path_results,materialfile,outputfile)
umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcements
nstatev = 0
nphases = 2 #The number of phases
num_file = 0 #The num of the file that contains the subphases
int1 = 20
int2 = 20
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
props = np.array([nphases, num_file, int1, int2, 0])
path_data = 'data'
path_results = 'results'
Nfile_init = 'Nellipsoids0.dat'
Nfile_disc = 'Nellipsoids2.dat'
nphases_rve = 36
num_phase_disc = 1
sim.ODF_discretization(nphases_rve, num_phase_disc, 0., 180., umat_name, props, path_data, peak_file, Nfile_init, Nfile_disc, 1)
#Plot the concentration and the angle
NPhases_exp = dir + '/exp_data/Nellipsoids0.dat'
NPhases_iden = dir + '/' + path_data + '/' + Nfile_disc
c_exp, angle_exp = np.loadtxt(NPhases_file, usecols=(4,5), skiprows=2, unpack=True)
c_iden, angle_iden = np.loadtxt(NPhases_iden, usecols=(4,5), skiprows=2, unpack=True)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# the histogram of the data
xs = np.arange(0,180,5)
rects1 = ax1.bar(xs, c_exp, width=5, color='b', align='center')
rects2 = ax1.bar(xs, c_iden, width=3, color='r', align='center')
ax1.set_xlabel('X data')
ax1.set_ylabel('Y1 data', color='g')
ax2.set_ylabel('Y2 data', color='b')
ax1.set_ylim([0,0.025])
ax2.set_ylim([0,0.25])
plt.show()
#plt.grid(True)
#plt.plot(angle,c, c='black')
plt.show()
Explanation: In the previous graph we can see a multi-peak ODF (peaks are modeled using PEARSONVII functions). It actually represents quite well the microstructure of injected plates.
The next step is to discretize the ODF into phases.
The file containing the initial 2-phase microstructure contains the following information
End of explanation |
10,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial about statistical methods
The following contains a sequence of simple exercises, designed to get familiar with using Minuit for maximum likelihood fits and emcee to determine parameters by MCMC. Commands are generally commented, i.e. in order to activate them, simply uncomment them. A few functions are still to be defined... which is part of the exercise. Have fun!
Step1: Generate a dataset to be fitted
Step2: Maximum likelihood fit of a simple power law
First define the negative-log likelihood function for a density proportional to x**(-a) in the range 1 < x < infinity
Step3: Then minimize it using iminuit
Step4: Error analysis
First determine the parabolic errors using hesse() and then do a parameter scan using minos() to determine the 68% confidence level errors.
Step5: Use of an un-normalised PDF
The above example shall be modified such that the normalisation of the likelihood function, which so far was determined analytically, now is determined numerically in the fit. This is the more realistic case, since in many cases no (simple) analytical normalisation exists. As a first step, this requires loading the integration package.
Step6: Then do the same minimization steps as before.
Step7: Extend the fit model by an exponential cutoff
The exponential cutoff is implemented by exp(-b*b*x), i.e. exponential growth is not allowed for real valued parameters b. The implications of this ansatz shall be discussed when looking at the solution. After that, the example can be modified to use exp(-b*x).
Here the likelihood function has no (simple) analytical normalisation anymore, i.e. we directly do the numerical approach.
Step8: As before, use Minuit for minimisation and error analysis, but now in two dimensions. Study parabolic errors and minos errors, the latter both for the single variables and for both together.
Step9: Do the same analysis by MCMC
Step10: emcee requires as input the log-likelihood of the posterior in the parameters a and b. In the following it is composed of the log of the prior and the log-likelihood of the data. Initially use a simple uniform prior in a and b with the constraint b>0. Afterwards one can play with the prior to see how strongly it affects the result.
Step11: Here we'll set up the computation. emcee combines multiple "walkers", each of which is its own MCMC chain. The number of trace results will be nwalkers * nsteps
Step12: run the MCMC (and time it using IPython's %time magic)
Step13: sampler.chain is of shape (nwalkers, nsteps, ndim). Before analysis, throw out the burn-in points and reshape.
Step14: Analyse the results. Plot the projected (marginalized) posteriors for the parameters a and b and also the joint density as sampled by the MCMC.
Step16: As a final step, generate 2-dim Bayesian confidence level contours containing 68.3% and 95.5% probability content. For that define convenient plot functions and use them. Overlay the contours with the scatter plot. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Tutorial about statistical methods
The following contains a sequence of simple exercises, designed to get familiar with using Minuit for maximum likelihood fits and emcee to determine parameters by MCMC. Commands are generally commented, i.e. in order to activate them, simply uncomment them. A few functions are still to be defined... which is part of the exercise. Have fun!
End of explanation
np.random.seed(42)
y = np.random.random(10000)
x = 1./np.sqrt(y)
plt.hist(x, bins=100, range=(1,10), histtype='stepfilled',color='blue')
plt.yscale('log')
Explanation: Generate a dataset to be fitted
End of explanation
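A quick sanity check on this construction (my addition, not part of the original exercises): with y uniform on (0,1), x = 1/sqrt(y) is the inverse-transform recipe for a power-law density proportional to x**(-3) on 1 < x < infinity, so tail probabilities are easy to verify:

```python
import numpy as np

rng = np.random.RandomState(42)
y = rng.random_sample(200000)
x = 1.0 / np.sqrt(y)

# For p(x) = 2*x**(-3) on x >= 1 the CDF is F(x) = 1 - x**(-2),
# so P(x > 2) should be 2**(-2) = 0.25
frac_above_2 = np.mean(x > 2.0)
assert abs(frac_above_2 - 0.25) < 0.01
```

The fits below should therefore recover an exponent close to a = 3.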
def nllp(a):
# here define the function
return 1.
Explanation: Maximum likelihood fit of a simple power law
First define the negative-log likelihood function for a density proportional to x**(-a) in the range 1 < x < infinity
End of explanation
import iminuit
# minp = iminuit.Minuit(nllp,a= ?,error_a=?, errordef=?)
# minp.migrad()
Explanation: Then minimize it using iminuit
End of explanation
# minp.hesse()
# minp.minos()
# minp.draw_profile('a')
Explanation: Error analysis
First determine the parabolic errors using hesse() and then do a parameter scan using minos() to determine the 68% confidence level errors.
End of explanation
from scipy.integrate import quad
def pdfpn(x, a):
return x**(-a)
def pdfpn_norm(a):
# here insert the calculation of the normalisation as a function of a
return 1.
def nllpn(a):
# calculate and return the proper negative-log likelihood function
return 1.
Explanation: Use of an un-normalised PDF
The above example shall be modified such that the normalisation of the likelihood function, which so far was determined analytically, now is determined numerically in the fit. This is the more realistic case, since in many cases no (simple) analytical normalisation exists. As a first step, this requires loading the integration package.
End of explanation
# minpn = iminuit.Minuit(nllpn, a=?, error_a=?, errordef=?)
# minpn.migrad()
Explanation: Then do the same minimization steps as before.
End of explanation
def pdfcn(x, a, b):
return x**(-a)*np.exp(-b*b*x)
def pdfcn_norm(a, b):
# determine the normalization
return 1.
def nllcn(a, b):
# calculate and return the negative-log likelihood function
return 1.
Explanation: Extend the fit model by an exponential cutoff
The exponential cutoff is implemented by exp(-b*b*x), i.e. exponential growth is not allowed for real valued parameters b. The implications of this ansatz shall be discussed when looking at the solution. After that, the example can be modified to use exp(-b*x).
Here the likelihood function has no (simple) analytical normalisation anymore, i.e. we directly do the numerical approach.
End of explanation
# mincn = iminuit.Minuit(nllcn, a=?, b=?, error_a=?, error_b=?, errordef=?)
# mincn.migrad()
# mincn.hesse()
# mincn.minos()
# mincn.draw_profile('a')
# mincn.draw_profile('b')
# mincn.draw_contour('a','b')
Explanation: As before, use Minuit for minimisation and error analysis, but now in two dimensions. Study parabolic errors and minos errors, the latter both for the single variables and for both together.
End of explanation
import emcee
Explanation: Do the same analysis by MCMC
End of explanation
# Define the posterior.
# for clarity the prior and likelihood are separated
# emcee requires log-posterior
def log_prior(theta):
a, b = theta
if b < 0:
return -np.inf # log(0)
else:
return 0.
def log_likelihood(theta, x):
a, b = theta
return np.sum(-a*np.log(x) - b*b*x)
def log_posterior(theta, x):
a , b = theta
# construct and return the log of the posterior
return 1.
Explanation: emcee requires as input the log-likelihood of the posterior in the parameters a and b. In the following it is composed of the log of the prior and the log-likelihood of the data. Initially use a simple uniform prior in a and b with the constraint b>0. Afterwards one can play with the prior to see how strongly it affects the result.
End of explanation
ndim = 2 # number of parameters in the model
nwalkers = 50 # number of MCMC walkers
nburn = 100 # "burn-in" period to let chains stabilize
nsteps = 1000 # number of MCMC steps to take
# random starting point
np.random.seed(0)
starting_guesses = np.random.random((nwalkers, ndim))
Explanation: Here we'll set up the computation. emcee combines multiple "walkers", each of which is its own MCMC chain. The number of trace results will be nwalkers * nsteps
End of explanation
#sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[x])
#%time sampler.run_mcmc(starting_guesses, nsteps)
#print("done")
Explanation: run the MCMC (and time it using IPython's %time magic)
End of explanation
#emcee_trace = sampler.chain[:, nburn:, :].reshape(-1, ndim).T
#len(emcee_trace[0])
Explanation: sampler.chain is of shape (nwalkers, nsteps, ndim). Before analysis, throw out the burn-in points and reshape.
End of explanation
# plt.hist(emcee_trace[0], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.hist(emcee_trace[1], 100, range=(?,?) , histtype='stepfilled', color='cyan')
# plt.plot(emcee_trace[0],emcee_trace[1],',k')
Explanation: Analyse the results. Plot the projected (marginalized) posteriors for the parameters a and b and also the joint density as sampled by the MCMC.
End of explanation
def compute_sigma_level(trace1, trace2, nbins=20):
"""From a set of traces, bin by number of standard deviations"""
L, xbins, ybins = np.histogram2d(trace1, trace2, nbins)
L[L == 0] = 1E-16
logL = np.log(L)
shape = L.shape
L = L.ravel()
# obtain the indices to sort and unsort the flattened array
i_sort = np.argsort(L)[::-1]
i_unsort = np.argsort(i_sort)
L_cumsum = L[i_sort].cumsum()
L_cumsum /= L_cumsum[-1]
xbins = 0.5 * (xbins[1:] + xbins[:-1])
ybins = 0.5 * (ybins[1:] + ybins[:-1])
return xbins, ybins, L_cumsum[i_unsort].reshape(shape)
#xbins, ybins, sigma = compute_sigma_level(emcee_trace[0], emcee_trace[1])
#plt.contour(xbins, ybins, sigma.T, levels=[0.683, 0.955])
#plt.plot(emcee_trace[0], emcee_trace[1], ',k', alpha=0.1)
Explanation: As a final step, generate 2-dim Bayesian confidence level contours containing 68.3% and 95.5% probability content. For that define convenient plot functions and use them. Overlay the contours with the scatter plot.
End of explanation |
10,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="https
Step1: DO NOTICE, this is extremely SLOW!
Step2: IPython parallel
IPython's power is not limited to its advanced shell. Its parallel package includes a framework to setup and run calculations on single and multi-core machines, as well as on multiple nodes connected to a network. IPython is great because it gives an interactive twist to parallel computing and provides a common interface to different
communication protocols.
how to start engines?
type $ ipcluster start -n 12 in terminal to start a cluster with 12 engines
how to use engines?
direct view (specify tasks for engines!)
task-based view (load balanced)
Direct interface
Step3: Engines should be treated as independent IPython sessions, and imports and custom-defined functions must be synchronized over the network. To import some libraries, both locally and in the engines, you can use the DirectView.sync_imports context manager
Step4: DirectView.map_async
Step5: parallel decorator
Step6: DirectView.apply
executed on every engine
Step7: scatter & gather
Step8: task-based interface (load balanced)
Step9: Run the Monte-Carlo for $\pi$ on IPython cluster
1. using @dview.parallel() decorator
Step10: 2. using direct/task-based interface
Step11: 2.1 dview.apply - blocking mode
executed in every engine!
Step12: 2.2 tview.map (load balanced)
Step13: 2.3 dview.map_async
Step14: 2.4 tview.map_async
Step15: 2.5 dview.apply
Step16: 2.6 tview.apply (single engine execution!)
Step17: 2.7 scatter & gather
Step18: view.wait()
can be used to block the async results
Step19: open qtconsole to engines? | Python Code:
%pylab inline
# with plt.xkcd():
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(frameon=False)
plt.xlim(-1.5,1.5)
plt.ylim(-1.5,1.5)
circle = plt.Circle((0.,0.), 1., color='w', fill=False)
rect = plt.Rectangle((-1,-1), 2, 2, color='gray')
plt.gca().add_artist(rect)
plt.gca().add_artist(circle)
plt.arrow(-2., 0., 3.3, 0., head_width=0.1, head_length=0.2)
plt.arrow(0., -2., 0., 3.3, head_width=0.1, head_length=0.2)
randx = np.random.uniform(-1, 1, (100,))
randy = np.random.uniform(-1, 1, (100,))
plot(randx, randy, 'kx')
plt.gca().axis('off')
plt.text(-1.3, -0.1, '(-1, 0)', fontsize=20)
plt.text( 1.1, -0.1, '(+1, 0)', fontsize=20)
plt.text( 0.1, 1.1, '(0, +1)', fontsize=20)
plt.text( 0.1, -1.1, '(0, -1)', fontsize=20);
%%time
import random
samples = 1E5
hits = 0
for i in range(int(samples)):
x = random.uniform(-1.0, 1.0)
y = random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
hits += 1
pi = 4.0*hits/samples
print pi
Explanation: <img src="https://www.python.org/static/img/python-logo.png">
Welcome to my lessons
Bo Zhang (NAOC, bozhang@nao.cas.cn) will have a few lessons on python.
These cover useful knowledge, skills and code styles for using python to process astronomical data.
All materials can be found on my github page.
jupyter notebook (formerly named ipython notebook) is recommended
These lectures are organized as below:
1. install python
2. basic syntax
3. numerical computing
4. scientific computing
5. plotting
6. astronomical data processing
7. high performance computing
8. version control
flowchart
test your code BEFORE you do ANY optimization!
find the bottleneck of your code (ps: learn to use a profiler to find the bottleneck)
use tricks, experience to optimize code
use as many computing resources as possible
parallel computing in multi-CPU/core computer (multiprocessing, ...)
run code on multi-node computer cluster (PBS, ...)
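For the "find the bottleneck" step, a minimal profiling sketch using only the standard library (slow_sum is just a made-up function to profile, not part of the lessons):

```python
import cProfile
import pstats

def slow_sum(n):
    # deliberately naive pure-Python loop, just something to profile
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_sum(100000)
profiler.disable()

# show the 5 most expensive calls, sorted by cumulative time
stats = pstats.Stats(profiler).sort_stats('cumulative')
stats.print_stats(5)
```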
some simple principles for optimization
1. memory vs. speed
2. vectorization
3. type check
4. parallel
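A small sketch of the vectorization principle (the distance function is an arbitrary illustration, not from the lessons): replacing a pure-Python loop by a single NumPy expression over whole arrays usually gives a large speedup:

```python
import numpy as np

# loop version: slow in pure Python
def dist_loop(xs, ys):
    out = []
    for x, y in zip(xs, ys):
        out.append((x**2 + y**2) ** 0.5)
    return out

# vectorized version: one NumPy expression over the whole arrays
def dist_vec(xs, ys):
    return np.sqrt(xs**2 + ys**2)

xs = np.random.uniform(-1, 1, 10000)
ys = np.random.uniform(-1, 1, 10000)
assert np.allclose(dist_loop(xs, ys), dist_vec(xs, ys))
```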
recommended packages
1. numexpr
2. Cython
- parallel
1. multiprocessing (standard library)
2. ipcluster/ipyparallel (supports PBS)
further reading
Parallel Programming with Python
Python High performance Programming
Learning Cython Programming
Parallel computing
threads: shared memory, involves locks
processes: isolated memory for each process, inter-process communication is less efficient
the easiest way to do parallel computing: embarrassingly parallel (no inter-process communication), which is the case we meet most often
Monte Carlo approximation for $\pi$
End of explanation
%%time
import multiprocessing
def sample():
x = random.uniform(-1.0, 1.0)
y = random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
pool = multiprocessing.Pool()
results_async = [pool.apply_async(sample) for i in range(int(samples))]
hits = sum(r.get() for r in results_async)
pool.close()
pi = 4.0*hits/samples
print pi
%%time
import multiprocessing
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
ntasks = 10
chunk_size = int(samples/ntasks)
pool = multiprocessing.Pool()
results_async = [pool.apply_async(sample_multiple, [chunk_size]) for i in range(ntasks)]
hits = sum(r.get() for r in results_async)
pool.close()
pi = 4.0*hits/samples
print pi
Explanation: DO NOTICE, this is extremely SLOW!
End of explanation
# to create an instance of Client, import Client from IPython.parallel
from IPython.parallel import Client
# from ipyparallel import Client
%%bash
#ipcluster start -n 12
rc = Client() # create a Client instance
rc.ids # show IDs of each engine
dview = rc[0] # select the first engine
dview
dview = rc[::2] # select every other engine
dview
dview = rc[:] # select all engines
dview
dview.execute('a = 1')
dview.pull('a').get() # equivalent to dview['a']
dview.push({'a':2}) # equivalent to dview['a'] = 2
dview['a']
res = dview.execute('a = T_T') # got error
res.get()
res = dview.execute('b = a+1')
dview['b']
res = dview.execute('b = b+1')
dview['b']
Explanation: IPython parallel
IPython's power is not limited to its advanced shell. Its parallel package includes a framework to setup and run calculations on single and multi-core machines, as well as on multiple nodes connected to a network. IPython is great because it gives an interactive twist to parallel computing and provides a common interface to different
communication protocols.
how to start engines?
type $ ipcluster start -n 12 in terminal to start a cluster with 12 engines
how to use engines?
direct view (specify tasks for engines!)
task-based view (load balanced)
Direct interface
End of explanation
with dview.sync_imports():
import numpy
# the syntax import _ as _ is not supported
Explanation: Engines should be treated as independent IPython sessions, and imports and custom-defined functions must be synchronized over the network. To import some libraries, both locally and in the engines, you can use the DirectView.sync_imports context manager:
End of explanation
a = range(100)
def square(x):
return x*x
results_async = dview.map_async(square, a)
print results_async.get()
Explanation: DirectView.map_async
End of explanation
@dview.parallel(block=False)
def square(x):
return x * x
print square.map(range(100)).get()
Explanation: parallel decorator
End of explanation
def square(x):
return x*x
result_async = dview.apply(square, 2)
result_async.get()
Explanation: DirectView.apply
executed on every engine
End of explanation
dview.scatter('a', [0, 1, 2, 3])
print dview['a']
dview.scatter('a', np.arange(16))
print dview['a']
dview.execute('a = a**2')
print dview['a']
dview.gather('a').get()
Explanation: scatter & gather
End of explanation
from IPython.parallel import Client
rc = Client()
tview = rc.load_balanced_view()
def square(x):
return x * x
dview.apply(square, 2) # executes in every engine!
tview.apply(square, 2).get() # executed in ONLY 1 engine
tview.apply(square, np.arange(10)).get()
Explanation: task-based interface (load balanced)
End of explanation
def sample():
x = numpy.random.uniform(-1.0, 1.0)
y = numpy.random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
@dview.parallel()
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
results = sample_multiple.map([chunk_size for i in range(ntasks)]) # ntask determines the # of processes-->10
print 'pi: ', sum(results.get())/np.double(samples)*4.
Explanation: Run the Monte-Carlo for $\pi$ on IPython cluster
1. using @dview.parallel() decorator
End of explanation
def sample():
x = numpy.random.uniform(-1.0, 1.0)
y = numpy.random.uniform(-1.0, 1.0)
if x**2 + y**2 <= 1.0:
return 1
else:
return 0
def sample_multiple(samples_partial):
return sum(sample() for i in range(samples_partial))
dview.push({'sample':sample, 'sample_multiple':sample_multiple})
Explanation: 2. using direct/task-based interface
End of explanation
%%time
samples = int(1E8)
ntasks = len(dview)
chunk_size = int(samples/ntasks)
dview.block = True
results = dview.map(sample_multiple, [chunk_size for i in range(ntasks)])
# task should be evenly split on every engine
print 'pi: ', sum(results)/np.double(samples)*4.
Explanation: 2.1 dview.apply - blocking mode
executed in every engine!
End of explanation
%%time
samples = int(1E8)
ntasks = len(tview)
chunk_size = int(samples/ntasks)
tview.block = True
results = tview.map(sample_multiple, [chunk_size for i in range(ntasks)])
# task should be evenly split on every engine
print 'pi: ', sum(results)/np.double(samples)*4.
Explanation: 2.2 tview.map (load balanced)
End of explanation
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
dview.block = False
results_async = dview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)])
print results_async.ready()
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
Explanation: 2.3 dview.map_async
End of explanation
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
tview.block = False
results_async = tview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)]) #determines the tasks for each engine
# print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
print results_async.ready()
Explanation: 2.4 tview.map_async
End of explanation
%%time
samples = 1E8
ntasks = len(dview)
chunk_size = int(samples/ntasks)
dview.block = True
results = dview.apply(sample_multiple, chunk_size)
print 'pi: ', sum(results)/np.double(samples)*4.
Explanation: 2.5 dview.apply
End of explanation
%%time
samples = 1E8
ntasks = len(tview)
chunk_size = int(samples/ntasks)
dview.block = True
results = tview.apply(sample_multiple, chunk_size)
print 'pi: ', sum(results.get())/np.double(samples)*4.
print 'pi: ', sum(results.get())/np.double(samples)*4.*ntasks
Explanation: 2.6 tview.apply (single engine execution!)
End of explanation
samples = 1E8
ntasks = 50
chunk_size = int(samples/ntasks)
dview.scatter('chunk_size', [chunk_size for i in range(ntasks)])
dview.scatter('sum_sample', [0 for i in range(ntasks)])
for cz in dview['chunk_size']:
print cz
dview['sample_multiple']
dview.execute('sum_sample = [sample_multiple(chunk_size_) for chunk_size_ in chunk_size]')
dview['sum_sample']
sum(dview.gather('sum_sample'))/samples*4.
Explanation: 2.7 scatter & gather
End of explanation
%%time
samples = 1E8
ntasks = 10
chunk_size = int(samples/ntasks)
dview.block = False
results_async = dview.map_async(sample_multiple,
[chunk_size for i in range(ntasks)])
results_async.ready()
dview.wait(results_async)
print 'pi: ', sum(results_async.get())/np.double(samples)*4.
Explanation: view.wait()
can be used to block the async results
End of explanation
dview = rc[::4]
dview.execute('qtconsole')
Explanation: open qtconsole to engines?
End of explanation |
10,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Figures
A “figure” in matplotlib means the whole window in the user interface. Within this figure there can be “sub-plots”.
A figure is the window in the GUI that has "Figure #" as title. Figures are numbered starting from 1 as opposed
to the normal Python way starting from 0.
Argument | Default | Description
----|---------|----------------------------
num | 1 | number of figure
figsize | figure.figsize | figure size in inches (width, height)
dpi | figure.dpi | resolution in dots per inch
facecolor | figure.facecolor | color of the drawing background
edgecolor | figure.edgecolor | color of edge around the drawing background
frameon | True | draw figure frame or not
Subplots
With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns and
the number of the plot.
Axes
Axes are very similar to subplots but allow placement of plots at any location in the figure.
So if we want to put a smaller plot inside a bigger one we do so with axes.
Ticks
Well formatted ticks are an important part of publishing-ready figures. Matplotlib provides a totally configurable
system for ticks. There are tick locators to specify where ticks should appear and tick formatters to
give ticks the appearance you want.
Step1: Adding one more subplot
Step2: another way to add subplots
Step3: One more way to add subplots
Step4: Adding some sample plots
Step5: Setting the Plot Range | Python Code:
# generating some data points
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)
# creating a figure
fig = plt.figure(figsize=(4,3), dpi=120)
#plotting
plt.plot(X, C, linestyle='--')
plt.plot(X, S)
# plotting
plt.show()
# creating a figure
fig = plt.figure(figsize=(4,3), dpi=120)
#plotting
plt.plot(X, C, linestyle='--')
plt.plot(X, S, linewidth=2)
# adding labels
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Plotting")
# adding margins, grid and autoscale
plt.margins(0.15)
plt.autoscale(True)
plt.grid(True)
# plotting
plt.show()
Explanation: Figures
A “figure” in matplotlib means the whole window in the user interface. Within this figure there can be “sub-plots”.
A figure is the window in the GUI that has "Figure #" as title. Figures are numbered starting from 1 as opposed
to the normal Python way starting from 0.
Argument | Default | Description
----|---------|----------------------------
num | 1 | number of figure
figsize | figure.figsize | figure size in inches (width, height)
dpi | figure.dpi | resolution in dots per inch
facecolor | figure.facecolor | color of the drawing background
edgecolor | figure.edgecolor | color of edge around the drawing background
frameon | True | draw figure frame or not
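A short sketch using the arguments from the table above (the Agg backend line is only there so the example runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, no GUI window needed
import matplotlib.pyplot as plt

# a figure configured explicitly via the table's arguments
fig = plt.figure(num=2, figsize=(6, 4), dpi=100,
                 facecolor='white', edgecolor='gray', frameon=True)
```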
Subplots
With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns and
the number of the plot.
Axes
Axes are very similar to subplots but allow placement of plots at any location in the figure.
So if we want to put a smaller plot inside a bigger one we do so with axes.
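A minimal sketch of such an inset with fig.add_axes (the rectangle coordinates are arbitrary fractions of the figure, chosen only for illustration):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
import numpy as np

X = np.linspace(-np.pi, np.pi, 256)
fig = plt.figure(figsize=(6, 4))
main_ax = fig.add_axes([0.1, 0.1, 0.8, 0.8])     # [left, bottom, width, height] in figure fractions
inset_ax = fig.add_axes([0.2, 0.6, 0.25, 0.25])  # smaller plot inside the bigger one
main_ax.plot(X, np.sin(X))
inset_ax.plot(X, np.cos(X))
```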
Ticks
Well formatted ticks are an important part of publishing-ready figures. Matplotlib provides a totally configurable
system for ticks. There are tick locators to specify where ticks should appear and tick formatters to
give ticks the appearance you want.
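A short sketch of a tick locator and formatter (the '%d units' label format is just an illustration):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator, FuncFormatter

fig, ax = plt.subplots()
ax.plot([0, 10], [0, 10])
ax.xaxis.set_major_locator(MultipleLocator(2))   # a tick every 2 units
ax.xaxis.set_major_formatter(FuncFormatter(lambda v, pos: '%d units' % v))
fig.canvas.draw()  # realize the tick labels
```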
End of explanation
# creating a figure
fig = plt.figure(figsize=(12,6), dpi=120)
# adding subplot
plt.subplot(1,2,1) # 1 row, 2 columns, 1st plot
plt.plot(X, C, linestyle='--')
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Cos Plotting")
plt.grid(True)
plt.margins(0.15)
plt.legend(["Cos"], loc="upper left")
plt.subplot(1,2,2) # 1 row, 2 columns, 2nd plot
plt.plot(X, S, linewidth=2)
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Sin Plotting")
plt.legend(["Sin"], loc="upper left")
plt.margins(0.15)
plt.autoscale(True)
plt.grid(True)
# plotting
plt.show()
Explanation: Adding one more subplot
End of explanation
# creating a figure
fig = plt.figure(figsize=(12,6), dpi=120)
# adding subplot
ax1 = fig.add_subplot(1,2,1) # 1 row, 2 columns, 1st plot
ax1.plot(X, C, linestyle='--')
ax1.grid(True)
ax1.margins(0.15)
ax2 = fig.add_subplot(1,2,2) # 1 row, 2 columns, 2nd plot
ax2.plot(X, S, linewidth=2)
ax2.margins(0.2)
ax2.autoscale(True)
ax2.grid(True)
# plotting
plt.xlabel("X data")
plt.ylabel("Y data")
plt.title("Sin Plotting")
plt.legend(["Sin"], loc="upper left")
plt.show()
Explanation: another way to add subplots
End of explanation
# creating a figure
plt.figure(figsize=(12,6), dpi=120)
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2')
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2')
ax2.grid(True)
ax2.margins(0.2)
# creating a figure
plt.figure(figsize=(12,6), dpi=120)
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2', label="Sin wave")
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2', label="Cos wave")
ax2.grid(True)
ax2.margins(0.2)
ax1.legend()
ax2.legend()
# creating a figure
plt.figure(figsize=(12,6), dpi=120)
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, sharex=True, sharey=True)
ax1.plot(X,S, color='green', linestyle='--', linewidth='2', label="Sin wave")
ax1.grid(True)
ax1.margins(0.2)
ax2.plot(X,C, color='blue', linestyle='-', linewidth='2', label="Cos wave")
ax2.grid(True)
ax2.margins(0.2)
ax1.legend()
ax1.set_xlabel("X data")
ax1.set_ylabel("Sin data")
ax1.set_title("Sin Chart")
ax2.legend()
ax2.set_xlabel("X data")
ax2.set_ylabel("Sin data")
ax2.set_title("Cos Chart")
Explanation: One more way to add subplots
End of explanation
fig = plt.figure(figsize=(16,12))
plt.subplots(nrows=3, ncols=4)
plt.show()
fig = plt.figure(figsize=(8,6))
ax1 = plt.subplot(1,2,1)
plt.subplot(3,2,2)
plt.subplot(3,2,4)
plt.subplot(3,2,6)
ax1.plot([1,2,3], [0.5,4,1.5])
plt.show()
X = [ (2,1,1), (2,3,4), (2,3,5), (2,3,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
# removing all ticks
X = [ (2,1,1), (2,3,4), (2,3,5), (2,3,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
# removing all ticks
X = [ (1,2,1), (3,2,2), (3,2,4), (3,2,6) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
# removing all ticks
X = [ (4,2,1), (4,2,3), (4,2,5), (4,1,4), (4,2,2), (4,2,(4,6)) ]
for nrows, ncols, plot_number in X:
plt.subplot(nrows, ncols, plot_number)
plt.xticks([])
plt.yticks([])
Explanation: Adding some sample plots
End of explanation
fig, axes = plt.subplots(1, 3, figsize=(10, 4))
x = np.arange(0, 5, 0.25)
axes[0].plot(x, x**2, x, x**3)
axes[0].set_title("default axes ranges")
axes[1].plot(x, x**2, x, x**3)
axes[1].axis('tight')
axes[1].set_title("tight axes")
axes[2].plot(x, x**2, x, x**3)
axes[2].set_ylim([0, 60])
axes[2].set_xlim([2, 5])
axes[2].set_title("custom axes range");
fig, ax1 = plt.subplots()
x = np.arange(1,7,0.1)
ax1.plot(x, 2 * np.pi * x, lw=2, color="blue")
ax1.set_ylabel(r"Circumference $(cm)$", fontsize=16, color="blue")
for label in ax1.get_yticklabels():
label.set_color("blue")
ax2 = ax1.twinx()
ax2.plot(x, np.pi * x ** 2, lw=2, color="darkgreen")
ax2.set_ylabel(r"area $(cm^2)$", fontsize=16, color="darkgreen")
for label in ax2.get_yticklabels():
label.set_color("darkgreen")
#plt.grid(color='b', alpha=1.5, linestyle='dashed', linewidth=0.5)
fig.savefig("filename.png", dpi=200)
def f(t):
return np.exp(-t) * np.cos(2*np.pi*t)
def fp(t):
return -2*np.pi * np.exp(-t) * np.sin(2*np.pi*t) - np.e**(-t)*np.cos(2*np.pi*t)
def g(t):
return np.sin(t) * np.cos(1/(t+0.1))
def g(t):
return np.sin(t) * np.cos(1/(t))
python_course_green = "#476042"
fig = plt.figure(figsize=(6, 4))
t = np.arange(-5.0, 1.0, 0.1)
sub1 = fig.add_subplot(221) # instead of plt.subplot(2, 2, 1)
sub1.set_title('The function f') # non OOP: plt.title('The function f')
sub1.plot(t, f(t))
sub2 = fig.add_subplot(222, axisbg="lightgrey")
sub2.set_title('fp, the derivation of f')
sub2.plot(t, fp(t))
t = np.arange(-3.0, 2.0, 0.02)
sub3 = fig.add_subplot(223)
sub3.set_title('The function g')
sub3.plot(t, g(t))
t = np.arange(-0.2, 0.2, 0.001)
sub4 = fig.add_subplot(224, axisbg="lightgrey")
sub4.set_title('A closer look at g')
sub4.set_xticks([-0.2, -0.1, 0, 0.1, 0.2])
sub4.set_yticks([-0.15, -0.1, 0, 0.1, 0.15])
sub4.plot(t, g(t))
plt.plot(t, g(t))
plt.tight_layout()
plt.show()
Explanation: Setting the Plot Range
End of explanation |
10,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SARIMAX
Step1: ARIMA Example 1
Step2: Thus the maximum likelihood estimates imply that for the process above, we have
Step3: To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model
Step4: ARIMA Example 3
Step5: Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process; however, it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.
The default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.
ARIMA Example 4
Step6: ARIMA Postestimation
Step7: Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).
Step8: The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).
With no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.
Step9: We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.
The dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0.
Here we perform dynamic prediction starting in the first quarter of 1978.
Step10: We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978
Step11: Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import statsmodels.api as sm
import matplotlib.pyplot as plt
from datetime import datetime
import requests
from io import BytesIO
Explanation: SARIMAX: Introduction
This notebook replicates examples from the Stata ARIMA time series estimation and postestimation documentation.
First, we replicate the four estimation examples http://www.stata.com/manuals13/tsarima.pdf:
ARIMA(1,1,1) model on the U.S. Wholesale Price Index (WPI) dataset.
Variation of example 1 which adds an MA(4) term to the ARIMA(1,1,1) specification to allow for an additive seasonal effect.
ARIMA(2,1,0) x (1,1,0,12) model of monthly airline data. This example allows a multiplicative seasonal effect.
ARMA(1,1) model with exogenous regressors; describes consumption as an autoregressive process on which also the money supply is assumed to be an explanatory variable.
Second, we demonstrate postestimation capabilitites to replicate http://www.stata.com/manuals13/tsarimapostestimation.pdf. The model from example 4 is used to demonstrate:
One-step-ahead in-sample prediction
n-step-ahead out-of-sample forecasting
n-step-ahead in-sample dynamic prediction
End of explanation
# Dataset
wpi1 = requests.get('http://www.stata-press.com/data/r12/wpi1.dta').content
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
Explanation: ARIMA Example 1: Arima
As can be seen in the graphs from Example 2, the Wholesale price index (WPI) is growing over time (i.e. is not stationary). Therefore an ARMA model is not a good specification. In this first example, we consider a model where the original time series is assumed to be integrated of order 1, so that the difference is assumed to be stationary, and fit a model with one autoregressive lag and one moving average lag, as well as an intercept term.
The postulated data process is then:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
where $c$ is the intercept of the ARMA model, $\Delta$ is the first-difference operator, and we assume $\epsilon_{t} \sim N(0, \sigma^2)$. This can be rewritten to emphasize lag polynomials as (this will be useful in example 2, below):
$$
(1 - \phi_1 L ) \Delta y_t = c + (1 + \theta_1 L) \epsilon_{t}
$$
where $L$ is the lag operator.
Notice that one difference between the Stata output and the output below is that Stata estimates the following model:
$$
(\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
where $\beta_0$ is the mean of the process $y_t$. This model is equivalent to the one estimated in the Statsmodels SARIMAX class, but the interpretation is different. To see the equivalence, note that:
$$
(\Delta y_t - \beta_0) = \phi_1 ( \Delta y_{t-1} - \beta_0) + \theta_1 \epsilon_{t-1} + \epsilon_{t} \
\Delta y_t = (1 - \phi_1) \beta_0 + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \epsilon_{t}
$$
so that $c = (1 - \phi_1) \beta_0$.
End of explanation
# Dataset
data = pd.read_stata(BytesIO(wpi1))
data.index = data.t
data['ln_wpi'] = np.log(data['wpi'])
data['D.ln_wpi'] = data['ln_wpi'].diff()
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
# Levels
axes[0].plot(data.index._mpl_repr(), data['wpi'], '-')
axes[0].set(title='US Wholesale Price Index')
# Log difference
axes[1].plot(data.index._mpl_repr(), data['D.ln_wpi'], '-')
axes[1].hlines(0, data.index[0], data.index[-1], 'r')
axes[1].set(title='US Wholesale Price Index - difference of logs');
# Graph data
fig, axes = plt.subplots(1, 2, figsize=(15,4))
fig = sm.graphics.tsa.plot_acf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[0])
fig = sm.graphics.tsa.plot_pacf(data.iloc[1:]['D.ln_wpi'], lags=40, ax=axes[1])
Explanation: Thus the maximum likelihood estimates imply that for the process above, we have:
$$
\Delta y_t = 0.1050 + 0.8740 \Delta y_{t-1} - 0.4206 \epsilon_{t-1} + \epsilon_{t}
$$
where $\epsilon_{t} \sim N(0, 0.5226)$. Finally, recall that $c = (1 - \phi_1) \beta_0$, and here $c = 0.1050$ and $\phi_1 = 0.8740$. To compare with the output from Stata, we could calculate the mean:
$$\beta_0 = \frac{c}{1 - \phi_1} = \frac{0.1050}{1 - 0.8740} = 0.83$$
Note: these values are slightly different from the values in the Stata documentation because the optimizer in Statsmodels has found parameters here that yield a higher likelihood. Nonetheless, they are very close.
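As a quick numeric check of this relation, using the rounded estimates quoted above:

```python
# Recover the implied process mean from the quoted estimates:
# beta_0 = c / (1 - phi_1)
c = 0.1050      # estimated intercept (from the summary above)
phi_1 = 0.8740  # estimated AR(1) coefficient
beta_0 = c / (1 - phi_1)
print(round(beta_0, 2))  # -> 0.83
```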
ARIMA Example 2: Arima with additive seasonal effects
This model is an extension of that from example 1. Here the data is assumed to follow the process:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t}
$$
The new part of this model is that there is allowed to be a annual seasonal effect (it is annual even though the periodicity is 4 because the dataset is quarterly). The second difference is that this model uses the log of the data rather than the level.
Before estimating the dataset, graphs showing:
The time series (in logs)
The first difference of the time series (in logs)
The autocorrelation function
The partial autocorrelation function.
From the first two graphs, we note that the original time series does not appear to be stationary, whereas the first-difference does. This supports either estimating an ARMA model on the first-difference of the data, or estimating an ARIMA model with 1 order of integration (recall that we are taking the latter approach). The last two graphs support the use of an ARMA(1,1,1) model.
End of explanation
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['ln_wpi'], trend='c', order=(1,1,1))
res = mod.fit(disp=False)
print(res.summary())
Explanation: To understand how to specify this model in Statsmodels, first recall that from example 1 we used the following code to specify the ARIMA(1,1,1) model:
python
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,1))
The order argument is a tuple of the form (AR specification, Integration order, MA specification). The integration order must be an integer (for example, here we assumed one order of integration, so it was specified as 1. In a pure ARMA model where the underlying data is already stationary, it would be 0).
For the AR specification and MA specification components, there are two possibilities. The first is to specify the maximum degree of the corresponding lag polynomial, in which case the component is an integer. For example, if we wanted to specify an ARIMA(1,1,4) process, we would use:
python
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(1,1,4))
and the corresponding data process would be:
$$
y_t = c + \phi_1 y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2} + \theta_3 \epsilon_{t-3} + \theta_4 \epsilon_{t-4} + \epsilon_{t}
$$
or
$$
(1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_2 L^2 + \theta_3 L^3 + \theta_4 L^4) \epsilon_{t}
$$
When the specification parameter is given as a maximum degree of the lag polynomial, it implies that all polynomial terms up to that degree are included. Notice that this is not the model we want to use, because it would include terms for $\epsilon_{t-2}$ and $\epsilon_{t-3}$, which we don't want here.
What we want is a polynomial that has terms for the 1st and 4th degrees, but leaves out the 2nd and 3rd terms. To do that, we need to provide a tuple for the specification parameter, where the tuple describes the lag polynomial itself. In particular, here we would want to use:
python
ar = 1 # this is the maximum degree specification
ma = (1,0,0,1) # this is the lag polynomial specification
mod = sm.tsa.statespace.SARIMAX(data['wpi'], trend='c', order=(ar,1,ma)))
This gives the following form for the process of the data:
$$
\Delta y_t = c + \phi_1 \Delta y_{t-1} + \theta_1 \epsilon_{t-1} + \theta_4 \epsilon_{t-4} + \epsilon_{t} \
(1 - \phi_1 L)\Delta y_t = c + (1 + \theta_1 L + \theta_4 L^4) \epsilon_{t}
$$
which is what we want.
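To see concretely what a lag polynomial with terms only at lags 1 and 4 does to a noise sequence, the MA part $(1 + \theta_1 L + \theta_4 L^4)\epsilon_t$ can be sketched in plain NumPy as a convolution (the θ values below are arbitrary illustrations, not estimates from this notebook):

```python
import numpy as np

np.random.seed(0)
eps = np.random.randn(200)            # white-noise input
theta1, theta4 = 0.5, -0.3            # illustrative values only
ma_poly = np.array([1.0, theta1, 0.0, 0.0, theta4])  # coefficients of (1 + th1*L + th4*L^4)
# u_t = eps_t + theta1*eps_{t-1} + theta4*eps_{t-4}; lags 2 and 3 are left out.
u = np.convolve(eps, ma_poly)[:len(eps)]
print(u.shape)  # (200,)
```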
End of explanation
# Dataset
air2 = requests.get('http://www.stata-press.com/data/r12/air2.dta').content
data = pd.read_stata(BytesIO(air2))
data.index = pd.date_range(start=datetime(data.time[0], 1, 1), periods=len(data), freq='MS')
data['lnair'] = np.log(data['air'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12), simple_differencing=True)
res = mod.fit(disp=False)
print(res.summary())
Explanation: ARIMA Example 3: Airline Model
In the previous example, we included a seasonal effect in an additive way, meaning that we added a term allowing the process to depend on the 4th MA lag. Instead, we may want to model a seasonal effect in a multiplicative way. We often write the model then as an ARIMA $(p,d,q) \times (P,D,Q)_s$, where the lowercase letters indicate the specification for the non-seasonal component, and the uppercase letters indicate the specification for the seasonal component; $s$ is the periodicity of the seasons (e.g. it is often 4 for quarterly data or 12 for monthly data). The data process can be written generically as:
$$
\phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D y_t = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
where:
$\phi_p (L)$ is the non-seasonal autoregressive lag polynomial
$\tilde \phi_P (L^s)$ is the seasonal autoregressive lag polynomial
$\Delta^d \Delta_s^D y_t$ is the time series, differenced $d$ times, and seasonally differenced $D$ times.
$A(t)$ is the trend polynomial (including the intercept)
$\theta_q (L)$ is the non-seasonal moving average lag polynomial
$\tilde \theta_Q (L^s)$ is the seasonal moving average lag polynomial
sometimes we rewrite this as:
$$
\phi_p (L) \tilde \phi_P (L^s) y_t^* = A(t) + \theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
where $y_t^* = \Delta^d \Delta_s^D y_t$. This emphasizes that just as in the simple case, after we take differences (here both non-seasonal and seasonal) to make the data stationary, the resulting model is just an ARMA model.
As an example, consider the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$, with an intercept. The data process can be written in the form above as:
$$
(1 - \phi_1 L - \phi_2 L^2) (1 - \tilde \phi_1 L^{12}) \Delta \Delta_{12} y_t = c + \epsilon_t
$$
Here, we have:
$\phi_p (L) = (1 - \phi_1 L - \phi_2 L^2)$
$\tilde \phi_P (L^s) = (1 - \tilde \phi_1 L^{12})$
$d = 1, D = 1, s=12$ indicating that $y_t^*$ is derived from $y_t$ by taking first-differences and then taking 12-th differences.
$A(t) = c$ is the constant trend polynomial (i.e. just an intercept)
$\theta_q (L) = \tilde \theta_Q (L^s) = 1$ (i.e. there is no moving average effect)
It may still be confusing to see the two lag polynomials in front of the time-series variable, but notice that we can multiply the lag polynomials together to get the following model:
$$
(1 - \phi_1 L - \phi_2 L^2 - \tilde \phi_1 L^{12} + \phi_1 \tilde \phi_1 L^{13} + \phi_2 \tilde \phi_1 L^{14} ) y_t^* = c + \epsilon_t
$$
which can be rewritten as:
$$
y_t^* = c + \phi_1 y_{t-1}^* + \phi_2 y_{t-2}^* + \tilde \phi_1 y_{t-12}^* - \phi_1 \tilde \phi_1 y_{t-13}^* - \phi_2 \tilde \phi_1 y_{t-14}^* + \epsilon_t
$$
This is similar to the additively seasonal model from example 2, but the coefficients in front of the autoregressive lags are actually combinations of the underlying seasonal and non-seasonal parameters.
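The expansion above is easy to verify numerically with NumPy's polynomial utilities (the coefficient values below are arbitrary placeholders, chosen only for illustration):

```python
import numpy as np
from numpy.polynomial import polynomial as P

phi1, phi2, sphi1 = 0.5, -0.2, 0.3        # stand-ins for phi_1, phi_2 and tilde(phi)_1
nonseasonal = [1.0, -phi1, -phi2]         # 1 - phi1*L - phi2*L^2 (ascending powers of L)
seasonal = [1.0] + [0.0] * 11 + [-sphi1]  # 1 - tilde(phi)_1 * L^12
product = P.polymul(nonseasonal, seasonal)
# Nonzero coefficients appear only at lags 0, 1, 2, 12, 13 and 14,
# matching the multiplied-out polynomial above.
print(np.nonzero(product)[0])
```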
Specifying the model in Statsmodels is done simply by adding the seasonal_order argument, which accepts a tuple of the form (Seasonal AR specification, Seasonal Integration order, Seasonal MA, Seasonal periodicity). The seasonal AR and MA specifications, as before, can be expressed as a maximum polynomial degree or as the lag polynomial itself. Seasonal periodicity is an integer.
For the airline model ARIMA $(2,1,0) \times (1,1,0)_{12}$ with an intercept, the command is:
python
mod = sm.tsa.statespace.SARIMAX(data['lnair'], order=(2,1,0), seasonal_order=(1,1,0,12))
End of explanation
# Dataset
friedman2 = requests.get('http://www.stata-press.com/data/r12/friedman2.dta').content
data = pd.read_stata(BytesIO(friedman2))
data.index = data.time
# Variables
endog = data.loc['1959':'1981', 'consump']
exog = sm.add_constant(data.loc['1959':'1981', 'm2'])
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog, exog, order=(1,0,1))
res = mod.fit(disp=False)
print(res.summary())
Explanation: Notice that here we used an additional argument simple_differencing=True. This controls how the order of integration is handled in ARIMA models. If simple_differencing=True, then the time series provided as endog is literally differenced and an ARMA model is fit to the resulting new time series. This implies that a number of initial periods are lost to the differencing process; however, it may be necessary either to compare results to other packages (e.g. Stata's arima always uses simple differencing) or if the seasonal periodicity is large.
The default is simple_differencing=False, in which case the integration component is implemented as part of the state space formulation, and all of the original data can be used in estimation.
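The loss of initial observations under simple differencing can be seen directly with plain NumPy, independently of statsmodels:

```python
import numpy as np

y = np.arange(20, dtype=float)   # a toy series of length 20
dy = np.diff(y)                  # first difference: one observation is lost
dy_seasonal = y[4:] - y[:-4]     # seasonal difference with period 4: four observations are lost
print(len(y), len(dy), len(dy_seasonal))  # 20 19 16
```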
ARIMA Example 4: ARMAX (Friedman)
This model demonstrates the use of explanatory variables (the X part of ARMAX). When exogenous regressors are included, the SARIMAX module uses the concept of "regression with SARIMA errors" (see http://robjhyndman.com/hyndsight/arimax/ for details of regression with ARIMA errors versus alternative specifications), so that the model is specified as:
$$
y_t = \beta_t x_t + u_t \
\phi_p (L) \tilde \phi_P (L^s) \Delta^d \Delta_s^D u_t = A(t) +
\theta_q (L) \tilde \theta_Q (L^s) \epsilon_t
$$
Notice that the first equation is just a linear regression, and the second equation just describes the process followed by the error component as SARIMA (as was described in example 3). One reason for this specification is that the estimated parameters have their natural interpretations.
This specification nests many simpler specifications. For example, regression with AR(2) errors is:
$$
y_t = \beta_t x_t + u_t \
(1 - \phi_1 L - \phi_2 L^2) u_t = A(t) + \epsilon_t
$$
The model considered in this example is regression with ARMA(1,1) errors. The process is then written:
$$
\text{consump}_t = \beta_0 + \beta_1 \text{m2}_t + u_t \
(1 - \phi_1 L) u_t = (1 - \theta_1 L) \epsilon_t
$$
Notice that $\beta_0$ is, as described in example 1 above, not the same thing as an intercept specified by trend='c'. Whereas in the examples above we estimated the intercept of the model via the trend polynomial, here, we demonstrate how to estimate $\beta_0$ itself by adding a constant to the exogenous dataset. In the output, $\beta_0$ is called const, whereas above the intercept $c$ was called intercept in the output.
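To make the two-equation structure concrete, the data-generating process can be simulated directly in NumPy (all parameter values below are arbitrary choices for illustration, not the estimates from this notebook):

```python
import numpy as np

np.random.seed(0)
n = 500
beta0, beta1 = 1.0, 0.5    # regression coefficients (arbitrary)
phi1, theta1 = 0.7, 0.3    # ARMA(1,1) error parameters (arbitrary)

x = np.random.rand(n)
eps = np.random.randn(n)
u = np.zeros(n)
for t in range(1, n):
    # (1 - phi1*L) u_t = (1 + theta1*L) eps_t
    u[t] = phi1 * u[t - 1] + eps[t] + theta1 * eps[t - 1]
y = beta0 + beta1 * x + u   # the regression equation with ARMA(1,1) errors
print(y.shape)  # (500,)
```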
End of explanation
# Dataset
raw = pd.read_stata(BytesIO(friedman2))
raw.index = raw.time
data = raw.loc[:'1981']
# Variables
endog = data.loc['1959':, 'consump']
exog = sm.add_constant(data.loc['1959':, 'm2'])
nobs = endog.shape[0]
# Fit the model
mod = sm.tsa.statespace.SARIMAX(endog.loc[:'1978-01-01'], exog=exog.loc[:'1978-01-01'], order=(1,0,1))
fit_res = mod.fit(disp=False)
print(fit_res.summary())
Explanation: ARIMA Postestimation: Example 1 - Dynamic Forecasting
Here we describe some of the post-estimation capabilities of Statsmodels' SARIMAX.
First, using the model from example, we estimate the parameters using data that excludes the last few observations (this is a little artificial as an example, but it allows considering performance of out-of-sample forecasting and facilitates comparison to Stata's documentation).
End of explanation
mod = sm.tsa.statespace.SARIMAX(endog, exog=exog, order=(1,0,1))
res = mod.filter(fit_res.params)
Explanation: Next, we want to get results for the full dataset but using the estimated parameters (on a subset of the data).
End of explanation
# In-sample one-step-ahead predictions
predict = res.get_prediction()
predict_ci = predict.conf_int()
Explanation: The predict command is first applied here to get in-sample predictions. We use the full_results=True argument to allow us to calculate confidence intervals (the default output of predict is just the predicted values).
With no other arguments, predict returns the one-step-ahead in-sample predictions for the entire sample.
End of explanation
# Dynamic predictions
predict_dy = res.get_prediction(dynamic='1978-01-01')
predict_dy_ci = predict_dy.conf_int()
Explanation: We can also get dynamic predictions. One-step-ahead prediction uses the true values of the endogenous values at each step to predict the next in-sample value. Dynamic predictions use one-step-ahead prediction up to some point in the dataset (specified by the dynamic argument); after that, the previous predicted endogenous values are used in place of the true endogenous values for each new predicted element.
The dynamic argument is specified to be an offset relative to the start argument. If start is not specified, it is assumed to be 0.
Here we perform dynamic prediction starting in the first quarter of 1978.
End of explanation
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Personal consumption', xlabel='Date', ylabel='Billions of dollars')
# Plot data points
data.loc['1977-07-01':, 'consump'].plot(ax=ax, style='o', label='Observed')
# Plot predictions
predict.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='r--', label='One-step-ahead forecast')
ci = predict_ci.loc['1977-07-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)
predict_dy.predicted_mean.loc['1977-07-01':].plot(ax=ax, style='g', label='Dynamic forecast (1978)')
ci = predict_dy_ci.loc['1977-07-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='g', alpha=0.1)
legend = ax.legend(loc='lower right')
Explanation: We can graph the one-step-ahead and dynamic predictions (and the corresponding confidence intervals) to see their relative performance. Notice that up to the point where dynamic prediction begins (1978:Q1), the two are the same.
End of explanation
# Prediction error
# Graph
fig, ax = plt.subplots(figsize=(9,4))
npre = 4
ax.set(title='Forecast error', xlabel='Date', ylabel='Forecast - Actual')
# In-sample one-step-ahead predictions and 95% confidence intervals
predict_error = predict.predicted_mean - endog
predict_error.loc['1977-10-01':].plot(ax=ax, label='One-step-ahead forecast')
ci = predict_ci.loc['1977-10-01':].copy()
ci.iloc[:,0] -= endog.loc['1977-10-01':]
ci.iloc[:,1] -= endog.loc['1977-10-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], alpha=0.1)
# Dynamic predictions and 95% confidence intervals
predict_dy_error = predict_dy.predicted_mean - endog
predict_dy_error.loc['1977-10-01':].plot(ax=ax, style='r', label='Dynamic forecast (1978)')
ci = predict_dy_ci.loc['1977-10-01':].copy()
ci.iloc[:,0] -= endog.loc['1977-10-01':]
ci.iloc[:,1] -= endog.loc['1977-10-01':]
ax.fill_between(ci.index, ci.iloc[:,0], ci.iloc[:,1], color='r', alpha=0.1)
legend = ax.legend(loc='lower left');
legend.get_frame().set_facecolor('w')
Explanation: Finally, graph the prediction error. It is obvious that, as one would suspect, one-step-ahead prediction is considerably better.
End of explanation |
10,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with data in Python
Notebook version
Step1: 1. Data generation
One of the first things we need to learn is to generate random samples from a given distribution. Most things in life come muddled with random noise. A fundamental part of Detection and Estimation is finding out what the properties of this noise are in order to make better predictions. We assume that this noise can be modeled according to a specific probability distribution (i.e
Step2: Note that the random vectors that are being generated will be different every time we execute the code. However, we often need to make sure we obtain the exact same sequence of random numbers in order to recreate specific experimental results. There are essentially two ways of doing this
Step3: Exercise 1
Step4: Exercise 2.
Step5: Exercise 3.
Step6: Exercise 4.
Step7: 2. Data representation
Step8: Let's analyse how we have created the previous figures and in which things they differ
Step9: In just a single example we have seen a lot of Matplotlib functionality that can be easily tuned. You have all you need to draw decent figures. However, those of you who want to learn more about Matplotlib can take a look at AnatomyOfMatplotlib, a collection of notebooks in which you will explore Matplotlib in more depth.
Now, try to solve the following exercises
Step10: Exercise 6
Step11: Exercise 7
Step12: Exercise 8
Step13: In the above exercises you have plotted a few vectors which are deterministic, that is, a range or a function applied to a range. Let's now consider the case of representing random samples and distributions.
If we have an expression to obtain the density of a given distribution, we can plot it in the same way we plotted functions before.
In a more general case, when we only have access to a limited number of samples, or we are interested in sampling them randomly, we usually make use of a histogram.
Consider x a vector containing samples coming from a 1-dimensional random variable. A histogram is a figure in which we represent the observed frequencies of different ranges of the x domain. We can express them as relative frequencies (summing up to 1) or absolute frequencies (counting events).
We can adapt the number and size of intervals (called bins) to directly affect the resolution of the plot.
When we have a sufficiently high number of random samples coming from the same distribution, its histogram is expected to have a similar shape to the theoretical expression corresponding to the density of this distribution.
In Matplotlib, we have already plotted histograms, with plt.hist(samples,bins=).
Let's see some examples
Step14: Now it's your turn!
Exercise 9
Step15: Exercise 10
Step16: 3. Data storage
Step17: It works in a similar way to lists, being capable of storing arrays, numbers and strings of diferent sizes. In the case of dictionaries, to access to a certain value you just have to use its key.
Step18: Let's now try to apply this knowledge about dictionaries with the following exercise
Step19: 3.2. Saving and Loading
Now that we know how to create and work with dictionaries we can start to save these dictionaries into different file types. In order to work with .mat files, we need to work with the scipy.io library, which provides us with the functions we need
Step20: The csv (Comma Separated Values) files are one of the most common when working with databases. As stated in its name, these format defines the sepparation between elements in the file by a delimiter, typically the comma. Nevertheless, as this files can be defined using any delimiter, it is recommendable to specify which one you would like to use to avoid errors.
In particular, we are going to work with the functions which allow us to save and load data | Python Code:
# Let's import some libraries
import numpy as np
import matplotlib.pyplot as plt
Explanation: Working with data in Python
Notebook version:
* 1.0 (Sep 3, 2018) - First TMDE version
* 1.1 (Sep 14, 2018) - Minor fixes
Authors: Vanessa Gómez Verdejo ([email protected]), Óscar García Hinde ([email protected]),
Simón Roca Sotelo ([email protected]), Carlos Sevilla Salcedo ([email protected])
Throughout this course we're going to work with data consisting of noisy signals, samples from probability distributions, etc. For example, we might need to apply different transformations to our data in order to compute a good predictor.
In this notebook we will learn to use specific tools that will let us load, generate, transform and visualise data. We will expand on the mathematical tools that numpy offers and we will introduce a new library, matplotlib, that will allow us to plot all sorts of graphs from our data.
End of explanation
# Random samplig examples
n = 1000 # number of samples
# Sampling from a standard uniform distribution:
x_unif = np.random.rand(n)
fig1 = plt.figure()
plt.hist(x_unif, bins=100)
plt.title('Samples from a uniform distribution between 0 and 1')
plt.show()
# Sampling from a normal distribution:
x_norm = np.random.randn(n)
fig2 = plt.figure()
plt.hist(x_norm, bins=100)
plt.title('Samples from a normal distribution with 0 mean and unity variance')
plt.show()
# Adding Gaussian noise to a linear function:
n = 30
x = np.linspace(-5, 5, n)
noise = np.random.randn(n)
y = 3*x
y_noise = y + noise
fig3 = plt.figure()
plt.plot(x, y, color='black', linestyle='--', label='Clean signal')
plt.plot(x, y_noise, color='red', label='Noisy signal')
plt.legend(loc=4, fontsize='large')
plt.title('Visualization of a noisy data-set')
plt.show()
Explanation: 1. Data generation
One of the first things we need to learn is to generate random samples from a given distribution. Most things in life come muddled with random noise. A fundamental part of Detection and Estimation is finding out what the properties of this noise are in order to make better predictions. We assume that this noise can be modeled according to a specific probability distribution (e.g., noise generated by a Gaussian distribution), which in turn allows us to make precise estimations of said distribution's parameters.
In Python, random samples can be easily generated with the numpy.random package. Inside it we can find many useful tools to sample from the most important probability distributions.
We have common number generator functions:
* rand(): draws samples uniformly from the interval [0, 1).
* randn(): returns samples from the “standard normal” distribution.
Or more specific ones:
* exponential([scale, size]): draw samples from an exponential distribution with a given scale parameter.
* normal([loc, scale, size]): draw random samples from a normal (Gaussian) distribution with parameters: loc (mean) and scale (standard deviation).
* uniform([low, high, size]): draw samples from a uniform distribution in the range low-high.
In the following examples we will look at different random generation methods and we will visualize the results. For the time being, you can ignore the visualization code. Later on we will learn how these visualization tools work.
End of explanation
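As a quick sanity check of exponential(), for example: with a large sample, the sample mean should approximate the scale parameter (which is the distribution mean):

```python
import numpy as np

np.random.seed(0)
scale = 2.0
x_exp = np.random.exponential(scale, size=100000)
print(x_exp.mean())  # close to the scale parameter, 2.0
```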
# Fixing the random number generator seed:
print("If we don't fix the seed, the sequence will be different each time:\n")
for i in range(3):
    print('Iteration ', str(i))
    print(np.random.rand(3), '\n')
print("\nHowever, if we fix the seed, we will always obtain the same sequence:\n")
for i in range(3):
    print('Iteration ', str(i))
    np.random.seed(0)
    print(np.random.rand(3), '\n')
Explanation: Note that the random vectors that are being generated will be different every time we execute the code. However, we often need to make sure we obtain the exact same sequence of random numbers in order to recreate specific experimental results. There are essentially two ways of doing this:
Store the random sequence in variable and reuse it whenever the need arises.
Fix the seed of the random number generator with numpy.random.seed(int).
See for yourselves:
End of explanation
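A side note: recent NumPy versions also provide a Generator-based API, np.random.default_rng, where each generator carries its own seed instead of relying on the global state:

```python
import numpy as np

rng_a = np.random.default_rng(42)
rng_b = np.random.default_rng(42)
# Two generators built from the same seed produce identical sequences,
# without touching the global np.random state.
print(np.array_equal(rng_a.random(3), rng_b.random(3)))  # True
```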
print('Exercise 1:\n')
# n = <FILL IN>
n = 1000
# x_unif = <FILL IN>
x_unif = np.random.uniform(2, 5, n)
# print('Sample mean = ', <FILL IN>)
print('Sample mean = ', x_unif.mean())
plt.hist(x_unif, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Uniform distribution between 2 and 5')
plt.show()
Explanation: Exercise 1:
Generate 1000 samples from a uniform distribution that spans from 2 to 5. Print the sample mean and check that it approximates its expected value.
Hint: check out the random.uniform() function
End of explanation
print('\nExercise 2:\n')
# n = <FILL IN>
n = 1000
# x_gauss = <FILL IN>
x_gauss = np.random.randn(n)*np.sqrt(2) + 3
# print('Sample mean = ', <FILL IN>)
print('Sample mean = ', x_gauss.mean())
# print('Sample variance = ', <FILL IN>)
print('Sample variance = ', x_gauss.var())
plt.hist(x_gauss, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Gaussian distribution with mean = 3 and variance = 2')
plt.show()
Explanation: Exercise 2.: Generate 1000 samples from a Gaussian distribution with mean 3 and variance 2. Print the sample mean and variance and check that they approximate their expected values.
Hint: check out the random.normal() function. Also, think about the changes you need to apply to a standard normal distribution to modify its mean and variance and try to obtain the same results using the random.randn() function.
End of explanation
print('\nExercise 3:\n')
# n = <FILL IN>
n = 100
# x = <FILL IN>
x = np.linspace(-5, 5, n)
# y = <FILL IN>
y = np.sin(x)
# noise = <FILL IN>
noise = np.random.uniform(-0.5, 0.5, 100)
# y_noise = <FILL IN>
y_noise = y + noise
plt.plot(x, y_noise, color='green', label='Noisy signal')
plt.plot(x, y, color='black', linestyle='--', label='Clean signal')
plt.legend(loc=3, fontsize='large')
plt.title('Sine signal with added uniform noise')
plt.show()
Explanation: Exercise 3.: Generate 100 samples of a sine signal between -5 and 5 and add uniform noise with mean 0 and amplitude 1.
End of explanation
print('\nExercise 4:\n')
# n = <FILL IN>
n = 1000
# mean = <FILL IN>
mean = np.array([2, 3])
# cov = <FILL IN>
cov = np.array([[2, 0], [0, 2]])
# x_2d_gauss = <FILL IN>
x_2d_gauss = np.random.multivariate_normal(mean=mean, cov=cov, size=n)
plt.scatter(x_2d_gauss[:, 0], x_2d_gauss[:, 1], )
plt.title('2d Gaussian Scatter Plot')
plt.show()
Explanation: Exercise 4.: Generate 1000 samples from a 2 dimensional Gaussian distribution with mean [2, 3] and covariance matrix [[2, 0], [0, 2]].
Hint: check out the random.multivariate_normal() function.
End of explanation
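To double-check the draws from Exercise 4, the sample mean and sample covariance can be compared against the requested parameters with np.cov (which expects variables in rows, hence the transpose). A small sketch:

```python
import numpy as np

samples = np.random.multivariate_normal(mean=[2, 3], cov=[[2, 0], [0, 2]], size=1000)
print('Sample mean:', samples.mean(axis=0))        # close to [2, 3]
print('Sample covariance:\n', np.cov(samples.T))   # close to [[2, 0], [0, 2]]
```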
t = np.arange(0.0, 1.0, 0.05) # Time vector, from 0s to 1s in steps of 0.05s.
a1 = np.sin(2*np.pi*t) # Samples of the first signal.
a2 = np.sin(4*np.pi*t) # Samples of the second signal.
# Visualization
# We can create a figure that will contain our plots.
plt.figure()
# We can plot the two signals in different subplots, as in Matlab.
# First signal
ax1 = plt.subplot(211)
ax1.plot(t,a1)
plt.title('First sinusoid.')
plt.xlabel('t (seconds)')
plt.ylabel('a_1(t)')
# Second signal
ax2 = plt.subplot(212)
ax2.plot(t,a2, 'r.')
plt.title('Second sinusoid.')
plt.xlabel('t (seconds)')
plt.ylabel('a_2(t)')
# We ensure the two plots won't overlap, and finally we show the results on the
# screen.
plt.tight_layout()
plt.show()
Explanation: 2. Data representation: Matplotlib
When we work with real data, or even if we generate data following a certain function or random distribution, we often acquire a better understanding by plotting the content of a vector, instead of just looking at a bunch of real numbers. In a plot, we assign each axis a meaning (e.g., y-axis could be a probability, kilograms, euros, etc; and x-axis could be time, index of samples, etc.). It should be clear by now how important data visualization is for us and the people who receive our data. Data analysis wouldn't be Data analysis without a nice visualization.
In Python the simplest plotting library is matplotlib, and its syntax is similar to Matlab's plotting library. As in Matlab, we can plot any set of samples making use of a lot of features. For instance, we can model the sampling of a continuous signal, we can deal with discrete samples of a signal, or we can even plot the histogram of random samples.
Take a look at the following code we use to plot two different sinusoids:
End of explanation
t = np.arange(0.0, 3, 0.05)
a1 = np.sin(2*np.pi*t)+t
a2 = np.ones(a1.shape)*t
plt.figure()
# We are going to plot two signals in the same figure. For each one we can
# specify colors, symbols, width, and the label to be displayed in a legend.
# Use the Matplotlib docs if you want to know all the things you can do.
plt.plot(t, a1, 'r--', linewidth=2, label='Sinusoidal')
plt.plot(t,a2, 'k:', label='Straight Line')
plt.title('Playing with different parameters')
plt.ylabel('Amplitude')
plt.xlabel('Time (seconds)')
# By default, axis limits will coincide with the highest/lowest values in our
# vectors. However, we can specify ranges for x and y.
plt.xlim((-0.5, 3))
plt.ylim((-0.5, 4))
# When plotting more than one curve in a single figure, having a legend is a
# good practice. You can ask Matplotlib to place it in the "best" position
# (trying not to overlap the lines), or you can specify positions like
# "upper left", "lower right"... check the docs!
plt.legend(loc='best')
# We can draw the origin lines, to separate the bidimensional space in four
# quadrants.
plt.axhline(0,color='black')
plt.axvline(0, color='black')
# We can also set a grid with different styles...
plt.grid(color='grey', linestyle='--', linewidth=0.8)
# And specify the "ticks", i.e., the values which are going to be specified in
# the axis, where the grid method is placing lines.
plt.xticks(np.arange(-0.5, 3, 0.5)) # In x, put a value each 0.5.
plt.yticks(np.arange(-0.5, 4, 1)) # In y, put a value each 1.
# Finally, plot all the previous elements.
plt.show()
Explanation: Let's analyse how we have created the previous figures and in which things they differ:
A crucial aspect to consider is that both curves represent a set of discrete samples (the samples we've generated). While the second plot uses red dots to represent the data (specified through 'r.'), the first one will draw the points using the standard blue line. As in Matlab, using lines to plot samples will interpolate them by default. If we don't want Matplotlib to do so, we can specify a different symbol, like dots, squares, etc...
We can label the axes and set titles, enhancing the way in which our data is presented. Moreover, we can improve the clarity of a figure by including or modifying the line width, colours, symbols, legends, and a big etcetera.
Look at the following figure and try to catch which argument and/or piece of code is related with each feature. It's intuitive! You can modify the parameters and see what's the new outcome.
End of explanation
# x = <FILL IN>
x = 4*np.random.rand(200) - 2
# Create a weights vector w, in which w[0] = 2.4, w[1] = -0.8 and w[2] = 1.
# w = <FILL IN>
w = np.array([2.4,-0.8,2])
print('x shape:\n',x.shape)
print('\nw:\n', w)
print('w shape:\n', w.shape)
Explanation: In just a single example we have seen a lot of Matplotlib functionalities that can be easily tuned. You have all you need to draw decent figures. However, those of you who want to learn more about Matplotlib can take a look at AnatomyOfMatplotlib, a collection of notebooks in which you will explore Matplotlib in more depth.
Now, try to solve the following exercises:
Exercise 5: Generate a random vector x, taking 200 samples of a uniform distribution, defined in the [-2,2] interval.
End of explanation
# y = <FILL IN>
y = w[0] + w[1]*x + w[2]*(x**2)
print('y shape:\n',y.shape)
Explanation: Exercise 6: Obtain the vector y whose samples are obtained by the polynomial $w_2 x^2 + w_1 x + w_0$
End of explanation
# X = <FILL IN>
X = np.array([np.ones((len(x),)), x, x**2]).T
# y2 = <FILL IN>
y2 = X @ w
print('y shape:\n',y.shape)
print('y2 shape:\n',y2.shape)
if(np.sum(np.abs(y-y2))<1e-10):
print('\ny and y2 are the same, well done!')
else:
print('\nOops, something went wrong, try again!')
Explanation: Exercise 7: You probably obtained the previous vector as a sum of different terms. If so, try to obtain y again (and name it y2) as a product of a matrix X and a vector w. Then, check that both methods lead to the same result (be careful with shapes).
Hint: w will remain the same, but now X has to be constructed in a way that the dot product of X and w is consistent).
End of explanation
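As an aside, NumPy ships a helper for building exactly this kind of polynomial design matrix: np.vander with increasing=True produces the [1, x, x**2] columns. A quick sketch:

```python
import numpy as np

x = 4 * np.random.rand(200) - 2
w = np.array([2.4, -0.8, 2.0])
# Columns are x**0, x**1, x**2 when increasing=True.
X = np.vander(x, N=3, increasing=True)
y = X @ w
print(X.shape)  # (200, 3)
```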
# x2 = <FILL IN>
x2 = np.arange(-1,2,0.05)
# y3 = <FILL IN>
y3 = w[0] + w[1]*x2 + w[2]*(x2**2)
# Plot
# <SOL>
fig1 = plt.figure()
plt.plot(x2,y3,'r--')
plt.title('y3 = f(x2)')
plt.ylabel('y3')
plt.xlabel('x2')
plt.show()
# </SOL>
Explanation: Exercise 8: Define x2 as a range vector, going from -1 to 2, in steps of 0.05. Then, obtain y3 as the output of polynomial $w_2 x^2 + w_1 x + w_0$ for input x2 and plot the result using a red dashed line (--).
End of explanation
# We take samples from a normalized gaussian distribution, and we change
# mean and variance with an operation.
sigma = 4
mn = 5
x_norm = mn + np.sqrt(sigma)*np.random.randn(5000)
# Let's obtain a histogram with high resolution, that is, a lot of bins.
fig1 = plt.figure()
plt.hist(x_norm, bins=100,label='Samples')
plt.title('Histogram with 100 bins')
# With vertical lines, we plot the mean and the intervals obtain summing one
# standard deviation to the mean.
plt.axvline(x=np.mean(x_norm),color='k',linestyle='--',label='Mean')
plt.axvline(x=np.mean(x_norm)+np.std(x_norm),color='grey',linestyle='--',label='Mean +/- std')
plt.axvline(x=np.mean(x_norm)-np.std(x_norm),color='grey',linestyle='--')
plt.legend(loc='best')
plt.show()
# We check that the mean and variance of the samples are approximately the original ones.
print('Sample mean = ', x_norm.mean())
print('Sample variance = ', x_norm.var())
# Now let's plot a low resolution histogram, with just a few bins.
fig2 = plt.figure()
# Density=True normalizes the histogram.
plt.hist(x_norm, bins=10,label='Samples',density=True)
plt.title('Histogram with 10 bins')
plt.axvline(x=np.mean(x_norm),color='k',linestyle='--',label='Mean')
plt.axvline(x=np.mean(x_norm)+np.std(x_norm),color='grey',linestyle='--',label='Mean +/- std')
plt.axvline(x=np.mean(x_norm)-np.std(x_norm),color='grey',linestyle='--')
plt.legend(loc='best')
plt.show()
# A different resolution leads to different representations, but don't forget
# that we are plotting the same samples.
print('Sample mean = ', x_norm.mean())
print('Sample variance = ', x_norm.var())
Explanation: In the above exercises you have plotted a few vectors which are deterministic, that is, a range or a function applied to a range. Let's now consider the case of representing random samples and distributions.
If we have an expression to obtain the density of a given distribution, we can plot it in the same way we plotted functions before.
In a more general case, when we only have access to a limited number of samples, or we are interested in sampling them randomly, we usually make use of a histogram.
Consider x a vector containing samples coming from a 1-dimensional random variable. A histogram is a figure in which we represent the observed frequencies of different ranges of the x domain. We can express them as relative frequencies (summing up to 1) or absolute frequencies (counting events).
We can adapt the number and size of intervals (called bins) to directly affect the resolution of the plot.
When we have a sufficiently high number of random samples coming from the same distribution, its histogram is expected to have a similar shape to the theoretical expression corresponding to the density of this distribution.
In Matplotlib, we have already plotted histograms, with plt.hist(samples,bins=).
Let's see some examples:
End of explanation
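The same histogram-versus-density comparison can be made explicit for the Gaussian case by overlaying the closed-form normal pdf on a normalized histogram. A minimal sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

mn, sigma2 = 5.0, 4.0
samples = mn + np.sqrt(sigma2) * np.random.randn(5000)
grid = np.linspace(samples.min(), samples.max(), 200)
# Closed-form Gaussian density.
pdf = np.exp(-(grid - mn)**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
plt.hist(samples, bins=50, density=True, label='Normalized histogram')
plt.plot(grid, pdf, 'r--', label='Theoretical density')
plt.legend(loc='best')
plt.show()
```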
# x_exp = <FILL IN>
x_exp = np.random.exponential(10,1000)
# plt.hist(<FILL IN>)
plt.hist(x_exp,bins=50,label="Emp. mean: "+str(np.mean(x_exp)))
plt.legend(loc='best')
plt.show()
Explanation: Now it's your turn!
Exercise 9: Obtain x_exp as 1000 samples of an exponential distribution with scale parameter of 10. Then, plot the corresponding histogram for the previous set of samples, using 50 bins. Obtain the empirical mean and make it appear in the histogram legend. Does it coincide with the theoretical one?
End of explanation
np.random.seed(4) # Keep the same result
x_exp = np.random.exponential(10,10000) # exponential samples
x = np.arange(np.min(x_exp),np.max(x_exp),0.05)
# density = <FILL IN>
density = (1/10)*np.exp(-x/10)
w_n = np.zeros_like(x_exp) + 1. / x_exp.size
plt.hist(x_exp, weights=w_n,label='Histogram.',bins=75)
plt.plot(x,density,'r--',label='Theoretical density.')
plt.legend()
plt.show()
Explanation: Exercise 10: Taking into account that the exponential density can be expressed as:
$f(x;\beta) = \frac{1}{\beta} e^{-\frac{x}{\beta}}, \quad x \geq 0$.
where $\beta$ is the scale factor, fill the variable density using the vector x and apply it the theoretical density for an exponential distribution. Then, take a look at the plot. Do the histogram and the density look alike? How does the number of samples affect the final result?
End of explanation
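To see how the number of samples affects the agreement between histogram and density, a small sketch comparing several sample sizes side by side (the grid limits are an arbitrary choice here):

```python
import numpy as np
import matplotlib.pyplot as plt

beta = 10.0
x = np.arange(0, 60, 0.1)
density = (1.0 / beta) * np.exp(-x / beta)
fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, n in zip(axes, [100, 1000, 100000]):
    samples = np.random.exponential(beta, n)
    ax.hist(samples, bins=40, density=True)
    ax.plot(x, density, 'r--')
    ax.set_title('n = %d' % n)
plt.tight_layout()
plt.show()
```

The larger the sample, the more closely the histogram should hug the dashed theoretical curve.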
# Creating a dictionary with different keys. These keys can be either an
# integer or a string. To separate elements you have to use ','.
my_dict = {'Is she a witch?': 'If... she... weights the same as a duck... she`s made of wood!', 42: 'Can you repeat the question?'}
print (my_dict)
Explanation: 3. Data storage: Saving and loading files
Once we have learned to generate and plot data, the next thing we need to know is how we can store those results for future usage and, subsequently, how to load them.
Python is a programming language commonly used in the context of data analysis. This implies there is a vast number of libraries and functions to work with data. In our case, we will study how to save your data into mat or csv files, although there are some other methods we encourage you to take a look at (pickle, pandas, npz,...).
3.1. Dictionaries
All of these are most usually combined with dictionaries. Dictionaries are a useful data structure implemented in Python which allows you to index its different elements with keys instead of with a range of numbers. This way you can access the different elements of the list using either a number or a string.
End of explanation
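One more detail worth knowing: indexing a missing key raises a KeyError, while the get() method returns a default instead. A quick sketch:

```python
quotes = {42: 'Can you repeat the question?'}
# Direct indexing raises KeyError for missing keys; get() returns a default.
print(quotes.get(42))                     # 'Can you repeat the question?'
print(quotes.get('missing', 'no quote'))  # 'no quote'
print('missing' in quotes)                # False
```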
# We can add a new key to a dictionary and fill in its value.
# Let's add a list of things that float in water:
my_dict['What floats in water?'] = ['Bread','Apples','Very small rocks','Cider','Gravy','Cherries','Mud']
# Now we can access to the key of things that float in water and add some other
# elements to the array in the dictionary:
my_dict['What floats in water?'].append('A duck')
print (my_dict['What floats in water?'])
# Print line by line the keys and elements on the dictionary
print('\nThese are the keys and elements on my list:\n')
for key in my_dict:
print (key,':',my_dict[key])
Explanation: It works in a similar way to lists, being capable of storing arrays, numbers and strings of different sizes. In the case of dictionaries, to access a certain value you just have to use its key.
End of explanation
# alumnos = <FILL IN>
alumnos = {'Pedro Picapiedra':{},'Clark Kent':{}}
clothes = ['Shirt','Dress','Glasses','Shoes']
for alumno in alumnos:
print(alumno)
# <SOL>
for element in clothes:
alumnos[alumno][element] = input(element+': ')
print(alumnos)
# </SOL>
Explanation: Let's now try to apply this knowledge about dictionaries with the following exercise:
Exercise 11: Create a dictionary with your name and a colleague's, and create a dictionary for each of you with what you are wearing. Then print the whole list to see what each of you is wearing.
End of explanation
# Saving the previous dictionary in a mat file:
import scipy.io as sio
sio.savemat('dictionaries_rule.mat', alumnos)
# Load the previously stored mat file:
data = sio.loadmat('dictionaries_rule.mat')
print (data.keys())
Explanation: 3.2. Saving and Loading
Now that we know how to create and work with dictionaries we can start to save these dictionaries into different file types. In order to work with .mat files, we need to work with the scipy.io library, which provides us with the functions we need:
* scipy.io.savemat([filename,mdict]): stores the given dictionary in a mat file with the given file name.
* scipy.io.loadmat([filename, mdict=None]): loads the mat file with the given file name. If a dictionary is given, it loads the data into it.
End of explanation
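Two quirks of loadmat worth knowing: it adds metadata keys (__header__, __version__, __globals__) and returns everything as at-least-2-D arrays; the squeeze_me option drops the singleton dimensions. A small sketch (the file name here is just for illustration):

```python
import numpy as np
import scipy.io as sio

sio.savemat('numbers.mat', {'v': np.arange(3)})
raw = sio.loadmat('numbers.mat')
print(raw['v'].shape)       # (1, 3): loadmat returns at least 2-D arrays
squeezed = sio.loadmat('numbers.mat', squeeze_me=True)
print(squeezed['v'].shape)  # (3,)
```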
# Saving a csv file with some text in it separated by spaces:
import csv
with open('eggs.csv', 'w') as csvfile:
spamwriter = csv.writer(csvfile, delimiter=' ')
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
spamwriter.writerow(['Spam', 'Lovely Spam', 'Wonderful Spam'])
# Loading the csv file and join the elements with commas instead of spaces:
with open('eggs.csv', 'r') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ')
for row in spamreader:
print (', '.join(row))
Explanation: The csv (Comma Separated Values) files are one of the most common when working with databases. As stated in its name, this format defines the separation between elements in the file by a delimiter, typically the comma. Nevertheless, as these files can be defined using any delimiter, it is recommended to specify which one you would like to use to avoid errors.
In particular, we are going to work with the functions which allow us to save and load data:
csv.writer([filename, delimiter]): creates the csv file with the specified filename.
csv.reader([filename, delimiter]): loads the csv file with the specified filename.
End of explanation |
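For csv files with a header row, the csv module also offers DictReader and DictWriter, which map each row to a dictionary keyed by column name. A small sketch (file and column names are made up for illustration):

```python
import csv

with open('menu.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['dish', 'price'])
    writer.writeheader()
    writer.writerow({'dish': 'Spam', 'price': '2.50'})
    writer.writerow({'dish': 'Baked Beans', 'price': '1.75'})

with open('menu.csv', 'r', newline='') as f:
    rows = list(csv.DictReader(f))
print(rows[0]['dish'], rows[0]['price'])  # Spam 2.50
```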
10,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The toehold problem
The "toehold problem" is named after a tech support response from Gurobi. The nature of the problem is that, in order to take advantage of the algebraic constraint modeling provided by gurobipy, the Model.addConstr function needs a "toehold" with which to build a Constr.
(Note that Constr is not part of the public package. You shouldn't try to build it directly, but instead let gurobipy create it for you as part of writing out algebraic constraints).
So what do I mean, specifically? To begin, let's make a function that captures exceptions, since I'm going to be making mistakes and deliberately throwing exceptions.
Step1: Let's make a constraint without creating any problems. (You'll need to understand lambda to understand this code).
Step2: Ok, now let's screw up and make a bad constraint. This might happen to you, so pay attention please.
Step3: The numbers and constraint type aren't important.
Step4: Now, why would you ever try to write a dumb constraint like that? Well, it happens naturally in the real world quite easily. Suppose you were summing over a set of variables that happened to be empty as part of building a constraint.
Step5: How did this happen? It's because we used sum. This returns the number zero if it is passed an empty sequence.
Step6: So what's the solution? Usually, it just involves using gurobipy.quicksum.
Step7: See what happened there? gu.quicksum will give us a toehold. It's not just faster than sum, it's smarter too. So when we use quicksum, the constraint can be added. | Python Code:
def exception_thrown(f):
try:
f()
except Exception as e:
return str(e)
Explanation: The toehold problem
The "toehold problem" is named after a tech support response from Gurobi. The nature of the problem is that, in order to take advantage of the algebraic constraint modeling provided by gurobipy, the Model.addConstr function needs a "toehold" with which to build a Constr.
(Note that Constr is not part of the public package. You shouldn't try to build it directly, but instead let gurobipy create it for you as part of writing out algebraic constraints).
So what do I mean, specifically? To begin, let's make a function that captures exceptions, since I'm going to be making mistakes and deliberately throwing exceptions.
End of explanation
import gurobipy as gu
m = gu.Model()
v = m.addVar(name = "goodstuff")
m.update()
exception_thrown(lambda : m.addConstr(v <= 100, name = "c1"))
m.update()
m.getConstrs()
Explanation: Let's make a constraint without creating any problems. (You'll need to understand lambda to understand this code).
End of explanation
exception_thrown(lambda : m.addConstr(0 <= 300, name = "not_going_to_be_added_to_model"))
Explanation: Ok, now let's screw up and make a bad constraint. This might happen to you, so pay attention please.
End of explanation
exception_thrown(lambda : m.addConstr(10 == 30, name = "not_going_to_be_added_to_model"))
Explanation: The numbers and constraint type aren't important.
End of explanation
exception_thrown(lambda : m.addConstr(sum(_ for x in m.getVars() if "bad" in x.VarName.lower())
<= 100, name = "not_going_to_be_added_either"))
Explanation: Now, why would you ever try to write a dumb constraint like that? Well, it happens naturally in the real world quite easily. Suppose you were summing over a set of variables that happened to be empty as part of building a constraint.
End of explanation
[_ for x in m.getVars() if "bad" in x.VarName.lower()]
sum(_ for x in m.getVars() if "bad" in x.VarName.lower())
Explanation: How did this happen? It's because we used sum. This returns the number zero if it is passed an empty sequence.
End of explanation
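The failure mode can be reproduced without Gurobi at all: the built-in sum of an empty sequence is the plain int 0, so the comparison collapses to an ordinary bool before addConstr ever sees it. A quick sketch:

```python
# sum() of an empty sequence is the int 0, not a linear expression.
empty_total = sum(x for x in [])
print(empty_total, type(empty_total))  # 0 <class 'int'>
# So the "constraint" is evaluated eagerly to a plain bool -- no toehold.
print(empty_total <= 100)              # True
```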
gu.quicksum(_ for x in m.getVars() if "bad" in x.VarName.lower())
Explanation: So what's the solution? Usually, it just involves using gurobipy.quicksum.
End of explanation
exception_thrown(lambda : m.addConstr(gu.quicksum(_ for x in m.getVars()
if "bad" in x.VarName.lower())
<= 100, name = "c2"))
m.update()
m.getConstrs()
Explanation: See what happened there? gu.quicksum will give us a toehold. It's not just faster than sum, it's smarter too. So when we use quicksum, the constraint can be added.
End of explanation |
10,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modelling Career Choices
The model is based on the following research paper
Step1: Load Resources
Step2: Parametrization
Step3: Derived Attributes
Step4: Auxiliary Functions
Step5: Solving the Model
Step6: Analysis
Plot the Optimal Policy
Step7: Formatting | Python Code:
%matplotlib inline
Explanation: Modelling Career Choices
The model is based on the following research paper:
Derek Neal (1999). The Complexity of Job Mobility among Young Men, Journal of Labor Economics, 17(2), 237-261.
The implementation draws heavily from the material provided on the Quantitative Economics website.
Model Features
Individuals choose their career and job within a career to maximize the expected discounted value of lifetime wages. They solve an infinite horizon dynamic programming problem with two state variables.
Objective
$$\mathrm{E}\sum_{t=0}^{\infty}\beta^{t} w_{t}$$
Payoffs
$$w_t = \theta_t + \epsilon_t$$
where:
* $\theta_t$ contribution of current occupation at time t
* $\epsilon_t$ contribution of current job at time t
Decision Problem
At the start of time t, a worker has the following options:
* Stay Put, retain a current (career, job) pair $(\theta_t,\epsilon_t)$
* New Job, retain a current career $\theta_t$ but redraw a job $\epsilon_t$
* New Life, redraw both a career $\theta_t$ and a job $\epsilon_t$
Draws of $\theta$ and $\epsilon$ are independent of each other and past values, with $\theta_t \sim F$ and $\epsilon_t \sim G$.
Value Functions
$$ V_{SP} = \theta+\varepsilon+\beta V(\theta,\varepsilon) \\
V_{NJ} = \theta+\int\varepsilon'G(d\varepsilon')+\beta\int V(\theta,\varepsilon')G(d\varepsilon') \\
V_{NL} = \int\theta'F(d\theta')+\int\varepsilon'G(d\varepsilon')+\beta\int\int V(\theta',\varepsilon')G(d\varepsilon')F(d\theta') $$
Course Registration
Please register for our class ECON41904 by sending an eMail to Brett Baker at: [email protected]
Housekeeping
End of explanation
# libraries
import scipy
import numpy as np
# project library
from support import *
Explanation: Load Resources
End of explanation
# Initialize container
para = dict()
# Preferences
para['beta'] = 0.95 # Time preference
# Distribution Grid
para['B'] = 5.0 # Upper bound for both epsilon and theta
para['N'] = 50 # Number of possible realizations for both epsilon and theta
# Parametrization of Career Distribution
para['F_a'], para['F_b'] = 1.0, 1.0
para['G_a'], para['G_b'] = 1.0, 1.0
Explanation: Parametrization
End of explanation
# Initialize container
attr = dict()
# Grid of random variables
attr['theta'] = np.linspace(0, para['B'], para['N'])
attr['epsilon'] = np.linspace(0, para['B'], para['N'])
# Construct probabilities
attr['F_probs'] = BetaBinomial_pdf(para['N'] - 1, para['F_a'], para['F_b'])
attr['G_probs'] = BetaBinomial_pdf(para['N'] - 1, para['G_a'], para['G_b'])
# Construct means.
attr['F_mean'] = np.sum(attr['theta'] * attr['F_probs'])
attr['G_mean'] = np.sum(attr['epsilon'] * attr['G_probs'])
Explanation: Derived Attributes
End of explanation
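The BetaBinomial_pdf helper comes from the external support module, which is not shown here; assuming its interface is (number of trials, a, b) and that it returns the n+1 probabilities, a minimal stand-in could look like this:

```python
import numpy as np
from scipy.special import comb, beta as beta_fn

def beta_binomial_pdf(n, a, b):
    """Beta-binomial pmf over k = 0, ..., n."""
    k = np.arange(n + 1)
    return comb(n, k) * beta_fn(k + a, n - k + b) / beta_fn(a, b)

# With a = b = 1 the Beta-binomial is uniform over its support.
probs = beta_binomial_pdf(49, 1.0, 1.0)
print(probs.sum())  # probabilities sum to 1
```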
def evaluate_alternative(which, para, attr, v, i, j):
''' Evaluate alternatives
'''
if which == 'Stay Put':
eval_ = attr['theta'][i] + attr['epsilon'][j] + para['beta'] * v[i,j]
elif which == 'New Job':
eval_ = attr['theta'][i] + attr['G_mean'] + para['beta'] * np.dot(v[i,:], attr['G_probs'])
elif which == 'New Life':
eval_ = attr['G_mean'] + attr['F_mean'] + para['beta'] * np.dot(attr['F_probs'], np.dot(v, attr['G_probs']))
else:
raise AssertionError('Alternative misspecified.')
# Finishing
return eval_
def get_greedy(v, para, attr):
''' Compute optimal actions taking v as the value function
'''
# Initialize container
policy = np.empty(v.shape, dtype = int)
# Evaluate cases
for i in range(para['N']):
for j in range(para['N']):
values = []
for which in ['Stay Put', 'New Job', 'New Life']:
values += [evaluate_alternative(which, para, attr, v, i, j)]
# Determine optimal policy
policy[i,j] = np.argmax(values) + 1
# Finishing
return policy
def bellman_operator(v, para, attr):
''' The Bellman operator for the model.
'''
# Initialize container
new_v = np.empty(v.shape, dtype = float)
# Evaluate cases
for i in range(para['N']):
for j in range(para['N']):
values = []
for which in ['Stay Put', 'New Job', 'New Life']:
values += [evaluate_alternative(which, para, attr, v, i, j)]
new_v[i,j] = np.amax(values)
# Finishing
return new_v
def compute_fixed_point(T, v, para, attr, error_tol = 1e-3, max_iter = 50):
''' Compute the fixed point.
'''
# Initialization
error = error_tol + 1
iterate = 0
while True:
new_v = T(v, para, attr)
iterate += 1
error = np.max(np.abs(new_v - v))
v = new_v
# Terminal conditions
if iterate > max_iter: break
if error < error_tol: break
# Finishing
return v
Explanation: Auxiliary Functions
End of explanation
# Starting value
v_init = np.ones((para['N'],para['N']))*100
# Determine fixed point
v = compute_fixed_point(bellman_operator, v_init, para, attr)
# Determine optimal policy
optimal_policy = get_greedy(v, para, attr)
Explanation: Solving the Model
End of explanation
plot_optimal_policy(optimal_policy, attr)
Explanation: Analysis
Plot the Optimal Policy
End of explanation
import urllib.request; from IPython.core.display import HTML
HTML(urllib.request.urlopen('http://bit.ly/1K5apRH').read().decode('utf-8'))
Explanation: Formatting
End of explanation |
10,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General Structured Output Models with Shogun Machine Learning Toolbox
Shell Hu (GitHub ID
Step2: Few examples of the handwritten words are shown below. Note that the first capitalized letter has been removed.
Step3: Define Factor Types and Build Factor Graphs
Let's define 4 factor types, such that a word will be able to be modeled as a chain graph.
The unary factor type will be used to define unary potentials that capture the appearance likelihoods of each letter. In our case, each letter has $16 \times 8$ pixels, thus there are $(16 \times 8 + 1) \times 26$ parameters. Here the additional bits in the parameter vector are bias terms. One for each state.
The pairwise factor type will be used to define pairwise potentials between each pair of letters. This type in fact gives the Potts potentials. There are $26 \times 26$ parameters.
The bias factor type for the first letter is a compensation factor type, since the interaction is one-sided. So there are $26$ parameters to be learned.
The bias factor type for the last letter, which has the same intuition as the last item. There are also $26$ parameters.
Putting all parameters together, the global parameter vector $\mathbf{w}$ has length $4082$.
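The 4082 figure follows directly from the four factor types above; a quick arithmetic check:

```python
n_stats = 26         # alphabet size (letter states)
n_pixels = 16 * 8    # pixels per letter image
unary = (n_pixels + 1) * n_stats  # appearance weights plus one bias per state
pairwise = n_stats * n_stats      # Potts table between neighboring letters
bias_first = n_stats              # one-sided factor for the first letter
bias_last = n_stats               # one-sided factor for the last letter
total = unary + pairwise + bias_first + bias_last
print(total)  # 4082
```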
Step5: Next, we write a function to construct the factor graphs and prepare labels for training. For each factor graph instance, the structure is a chain but the number of nodes and edges depend on the number of letters, where unary factors will be added for each letter, pairwise factors will be added for each pair of neighboring letters. Besides, the first and last letter will get an additional bias factor respectively.
Step6: An example of graph structure is visualized below, from which you may get a better sense of how a factor graph is built. Note that different colors are used to represent different factor types.
Step7: Training
Now we can create the factor graph model and start training. We will use the tree max-product belief propagation to do MAP inference.
Step8: In Shogun, we implemented several batch solvers and online solvers. Let's first try to train the model using a batch solver. We choose the dual bundle method solver (<a href="http
Step9: Let's check the duality gap to see if the training has converged. We aim at minimizing the primal problem while maximizing the dual problem. By the weak duality theorem, the optimal value of the primal problem is always greater than or equal to that of the dual problem. Thus, we could expect the duality gap to decrease over time. A relatively small and stable duality gap may indicate convergence. In fact, the gap doesn't have to become zero, since we know it is not far away from the local minimum.
Step10: There are other statistics that may also be helpful to check if the solution is good or not, such as the number of cutting planes, from which we may get a sense of how tight the piecewise lower bound is. In general, the number of cutting planes should be much less than the dimension of the parameter vector.
Step11: In our case, we have 101 active cutting planes, which is much less than 4082, i.e. the number of parameters. We could expect a good model by looking at these statistics. Now come to the online solvers. Unlike the cutting plane algorithms re-optimizes over all the previously added dual variables, an online solver will update the solution based on a single point. This difference results in a faster convergence rate, i.e. less oracle calls, please refer to Table 1 in [4] for more detail. Here, we use the stochastic subgradient descent (<a href="http
Step12: We compare the SGD and BMRM in terms of the primal objectives versus effective passes. We first plot the training progress (until both algorithms converge) and then zoom in to check the first 100 passes. In order to make a fair comparison, we set the regularization constant to 1e-2 for both algorithms.
Step13: As is shown above, the SGD solver uses fewer oracle calls to converge. Note that the timing is 2 times slower than actually needed, since there are additional computations of the primal objective and training error in each pass. The training errors of both algorithms for each pass are shown below.
Step15: Interestingly, the training errors of the SGD solver are lower than BMRM's in the first 100 passes, but in the end the BMRM solver obtains a better training performance. A probable explanation is that BMRM uses a very limited number of cutting planes at the beginning, which form a poor approximation of the objective function. As the number of cutting planes increases, we get a tighter piecewise lower bound, thus improving the performance. In addition, we would like to show the pairwise weights, which may learn important co-occurrences of letters. The Hinton diagram is a wonderful tool for visualizing 2D data, in which positive and negative values are represented by white and black squares, respectively, and the size of each square represents the magnitude of each value. In our case, a smaller number, i.e. a large black square, indicates the two letters tend to coincide.
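A Hinton diagram can be drawn with a few lines of matplotlib; here is a minimal sketch, using random weights as a stand-in for the learned pairwise table:

```python
import numpy as np
import matplotlib.pyplot as plt

def hinton(matrix, max_weight=None):
    """Draw a Hinton diagram: square area tracks |value|, color tracks sign."""
    if max_weight is None:
        max_weight = np.abs(matrix).max()
    fig, ax = plt.subplots()
    ax.set_facecolor('gray')
    for (i, j), val in np.ndenumerate(matrix):
        color = 'white' if val > 0 else 'black'
        size = np.sqrt(abs(val) / max_weight)
        ax.add_patch(plt.Rectangle([j - size / 2, i - size / 2], size, size,
                                   facecolor=color, edgecolor=color))
    ax.autoscale_view()
    ax.invert_yaxis()
    return ax

ax = hinton(np.random.randn(26, 26))
plt.show()
```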
Step16: Inference
Next, we show how to do inference with the learned model parameters for a given data point.
Step17: Evaluation
In the end, we check average training error and average testing error. The evaluation can be done by two methods. We can either use the apply() function in the structured output machine or use the <a href="http | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import numpy as np
import scipy.io
dataset = scipy.io.loadmat(os.path.join(SHOGUN_DATA_DIR, 'ocr/ocr_taskar.mat'))
# patterns for training
p_tr = dataset['patterns_train']
# patterns for testing
p_ts = dataset['patterns_test']
# labels for training
l_tr = dataset['labels_train']
# labels for testing
l_ts = dataset['labels_test']
# feature dimension
n_dims = p_tr[0,0].shape[0]
# number of states
n_stats = 26
# number of training samples
n_tr_samples = p_tr.shape[1]
# number of testing samples
n_ts_samples = p_ts.shape[1]
Explanation: General Structured Output Models with Shogun Machine Learning Toolbox
Shell Hu (GitHub ID: hushell)
Thanks Patrick Pletscher and Fernando J. Iglesias García for taking time to help me finish the project! Shoguners = awesome! Me = grateful!
Introduction
This notebook illustrates the training of a <a href="http://en.wikipedia.org/wiki/Factor_graph">factor graph</a> model using <a href="http://en.wikipedia.org/wiki/Structured_support_vector_machine">structured SVM</a> in Shogun. We begin by giving a brief outline of factor graphs and <a href="http://en.wikipedia.org/wiki/Structured_prediction">structured output learning</a> followed by the corresponding API in Shogun. Finally, we test the scalability by performing an experiment on a real <a href="http://en.wikipedia.org/wiki/Optical_character_recognition">OCR</a> data set for <a href="http://en.wikipedia.org/wiki/Handwriting_recognition">handwritten character recognition</a>.
Factor Graph
A factor graph explicitly represents the factorization of an undirected graphical model in terms of a set of factors (potentials), each of which is defined on a clique in the original graph [1]. For example, a MRF distribution can be factorized as
$$
P(\mathbf{y}) = \frac{1}{Z} \prod_{F \in \mathcal{F}} \theta_F(\mathbf{y}_F),
$$
where $F$ is the factor index and $\theta_F(\mathbf{y}_F)$ is the energy with respect to the assignment $\mathbf{y}_F$. In this demo, we focus only on the table representation of factors; namely, each factor holds an energy table $\theta_F$, which can be viewed as an unnormalized CPD. According to different factorizations, there are different types of factors. Usually we assume the Markovian property holds, that is, factors have the same parameterization if they belong to the same type, no matter how location or time changes. In addition, there are parameter-free factor types, for which there is nothing to learn. More implementation details will be explained later.
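As a toy illustration of the table representation (a minimal sketch of our own, not Shogun's TableFactorType), a factor is just an energy table indexed by the assignment of its variables:

```python
import numpy as np

# Energy table for a pairwise factor over two 3-state variables (y_u, y_v).
theta = np.arange(9.0).reshape(3, 3)

def factor_energy(theta, assignment):
    # Look up the energy of one joint assignment of the factor's variables.
    return theta[assignment]

e = factor_energy(theta, (1, 2))  # energy of y_u = 1, y_v = 2
```

In the OCR setting below, a unary factor would use a length-26 table; the Markovian assumption means every factor of the same type shares one such table.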
Structured Prediction
Structured prediction typically involves an input $\mathbf{x}$ (can be structured) and a structured output $\mathbf{y}$. A joint feature map $\Phi(\mathbf{x},\mathbf{y})$ is defined to incorporate structure information into the labels, such as chains, trees or general graphs. In general, the linear parameterization will be used to give the prediction rule. We leave the kernelized version for future work.
$$
\hat{\mathbf{y}} = \underset{\mathbf{y} \in \mathcal{Y}}{\operatorname{argmax}} \langle \mathbf{w}, \Phi(\mathbf{x},\mathbf{y}) \rangle
$$
where $\Phi(\mathbf{x},\mathbf{y})$ is the feature vector by mapping local factor features to corresponding locations in terms of $\mathbf{y}$, and $\mathbf{w}$ is the global parameter vector. In factor graph model, parameters are associated with a set of factor types. So $\mathbf{w}$ is a collection of local parameters.
The parameters are learned by regularized risk minimization, where the risk defined by user provided loss function $\Delta(\mathbf{y},\mathbf{\hat{y}})$ is usually non-convex and non-differentiable, e.g. the Hamming loss. So the empirical risk is defined in terms of the surrogate hinge loss $H_i(\mathbf{w}) = \max_{\mathbf{y} \in \mathcal{Y}} \Delta(\mathbf{y}_i,\mathbf{y}) - \langle \mathbf{w}, \Psi_i(\mathbf{y}) \rangle $, which is an upper bound of the user defined loss. Here $\Psi_i(\mathbf{y}) = \Phi(\mathbf{x}_i,\mathbf{y}_i) - \Phi(\mathbf{x}_i,\mathbf{y})$. The training objective is given by
$$
\min_{\mathbf{w}} \frac{\lambda}{2} ||\mathbf{w}||^2 + \frac{1}{N} \sum_{i=1}^N H_i(\mathbf{w}).
$$
In Shogun's factor graph model, the corresponding implemented functions are:
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStructuredModel.html#a15bd99e15bbf0daa8a727d03dbbf4bcd">FactorGraphModel::get_joint_feature_vector()</a> $\longleftrightarrow \Phi(\mathbf{x}_i,\mathbf{y})$
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFactorGraphModel.html#a36665cfdd7ea2dfcc9b3c590947fe67f">FactorGraphModel::argmax()</a> $\longleftrightarrow H_i(\mathbf{w})$
<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFactorGraphModel.html#a17dac99e933f447db92482a6dce8489b">FactorGraphModel::delta_loss()</a> $\longleftrightarrow \Delta(\mathbf{y}_i,\mathbf{y})$
Experiment: OCR
Show Data
First of all, we load the OCR data from a prepared mat file. The raw data can be downloaded from <a href="http://www.seas.upenn.edu/~taskar/ocr/">http://www.seas.upenn.edu/~taskar/ocr/</a>. It has 6876 handwritten words with an average length of 8 letters from 150 different persons. Each letter is rasterized into a binary image of size 16 by 8 pixels. Thus, each $\mathbf{y}$ is a chain, and each node has 26 possible states denoting ${a,\cdots,z}$.
End of explanation
import matplotlib.pyplot as plt
def show_word(patterns, index):
    """show a word with padding"""
    plt.rc('image', cmap='binary')
    letters = patterns[0,index][:128,:]
    n_letters = letters.shape[1]
    for l in xrange(n_letters):
        lett = np.transpose(np.reshape(letters[:,l], (8,16)))
        lett = np.hstack((np.zeros((16,1)), lett, np.zeros((16,1))))
        lett = np.vstack((np.zeros((1,10)), lett, np.zeros((1,10))))
        subplot(1,n_letters,l+1)
        imshow(lett)
        plt.xticks(())
        plt.yticks(())
    plt.tight_layout()
show_word(p_tr, 174)
show_word(p_tr, 471)
show_word(p_tr, 57)
Explanation: A few examples of the handwritten words are shown below. Note that the first capitalized letter has been removed.
End of explanation
from modshogun import TableFactorType
# unary, type_id = 0
cards_u = np.array([n_stats], np.int32)
w_gt_u = np.zeros(n_stats*n_dims)
fac_type_u = TableFactorType(0, cards_u, w_gt_u)
# pairwise, type_id = 1
cards = np.array([n_stats,n_stats], np.int32)
w_gt = np.zeros(n_stats*n_stats)
fac_type = TableFactorType(1, cards, w_gt)
# first bias, type_id = 2
cards_s = np.array([n_stats], np.int32)
w_gt_s = np.zeros(n_stats)
fac_type_s = TableFactorType(2, cards_s, w_gt_s)
# last bias, type_id = 3
cards_t = np.array([n_stats], np.int32)
w_gt_t = np.zeros(n_stats)
fac_type_t = TableFactorType(3, cards_t, w_gt_t)
# all initial parameters
w_all = [w_gt_u,w_gt,w_gt_s,w_gt_t]
# all factor types
ftype_all = [fac_type_u,fac_type,fac_type_s,fac_type_t]
Explanation: Define Factor Types and Build Factor Graphs
Let's define 4 factor types, so that a word can be modeled as a chain graph.
The unary factor type will be used to define unary potentials that capture the appearance likelihoods of each letter. In our case, each letter has $16 \times 8$ pixels, thus there are $(16 \times 8 + 1) \times 26$ parameters. Here the additional bits in the parameter vector are bias terms. One for each state.
The pairwise factor type will be used to define pairwise potentials between each pair of letters. This type in fact gives the Potts potentials. There are $26 \times 26$ parameters.
The bias factor type for the first letter is a compensation factor type, since the interaction is one-sided. So there are $26$ parameters to be learned.
The bias factor type for the last letter, which has the same intuition as the last item. There are also $26$ parameters.
Putting all parameters together, the global parameter vector $\mathbf{w}$ has length $4082$.
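The count above can be verified with a line of arithmetic (recomputing the numbers stated in this section):

```python
n_unary = (16 * 8 + 1) * 26  # per-state pixel weights plus one bias per state
n_pair = 26 * 26             # Potts table between neighboring letters
n_bias = 26 + 26             # first-letter and last-letter bias factors
total = n_unary + n_pair + n_bias  # 4082
```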
End of explanation
def prepare_data(x, y, ftype, num_samples):
    """prepare FactorGraphFeatures and FactorGraphLabels"""
    from modshogun import Factor, TableFactorType, FactorGraph
    from modshogun import FactorGraphObservation, FactorGraphLabels, FactorGraphFeatures
    samples = FactorGraphFeatures(num_samples)
    labels = FactorGraphLabels(num_samples)
    for i in xrange(num_samples):
        n_vars = x[0,i].shape[1]
        data = x[0,i].astype(np.float64)
        vc = np.array([n_stats]*n_vars, np.int32)
        fg = FactorGraph(vc)
        # add unary factors
        for v in xrange(n_vars):
            datau = data[:,v]
            vindu = np.array([v], np.int32)
            facu = Factor(ftype[0], vindu, datau)
            fg.add_factor(facu)
        # add pairwise factors
        for e in xrange(n_vars-1):
            datap = np.array([1.0])
            vindp = np.array([e,e+1], np.int32)
            facp = Factor(ftype[1], vindp, datap)
            fg.add_factor(facp)
        # add bias factor to first letter
        datas = np.array([1.0])
        vinds = np.array([0], np.int32)
        facs = Factor(ftype[2], vinds, datas)
        fg.add_factor(facs)
        # add bias factor to last letter
        datat = np.array([1.0])
        vindt = np.array([n_vars-1], np.int32)
        fact = Factor(ftype[3], vindt, datat)
        fg.add_factor(fact)
        # add factor graph
        samples.add_sample(fg)
        # add corresponding label
        states_gt = y[0,i].astype(np.int32)
        states_gt = states_gt[0,:]  # mat to vector
        loss_weights = np.array([1.0/n_vars]*n_vars)
        fg_obs = FactorGraphObservation(states_gt, loss_weights)
        labels.add_label(fg_obs)
    return samples, labels
# prepare training pairs (factor graph, node states)
n_tr_samples = 350 # choose a subset of training data to avoid time out on buildbot
samples, labels = prepare_data(p_tr, l_tr, ftype_all, n_tr_samples)
Explanation: Next, we write a function to construct the factor graphs and prepare labels for training. For each factor graph instance, the structure is a chain, but the number of nodes and edges depends on the number of letters: unary factors will be added for each letter, and pairwise factors will be added for each pair of neighboring letters. Besides, the first and last letters each get an additional bias factor.
End of explanation
try:
    import networkx as nx # pip install networkx
except ImportError:
    import pip
    pip.main(['install', '--user', 'networkx'])
    import networkx as nx
import matplotlib.pyplot as plt
# create a graph
G = nx.Graph()
node_pos = {}
# add variable nodes, assuming there are 3 letters
G.add_nodes_from(['v0','v1','v2'])
for i in xrange(3):
    node_pos['v%d' % i] = (2*i,1)
# add factor nodes
G.add_nodes_from(['F0','F1','F2','F01','F12','Fs','Ft'])
for i in xrange(3):
    node_pos['F%d' % i] = (2*i,1.006)
for i in xrange(2):
    node_pos['F%d%d' % (i,i+1)] = (2*i+1,1)
node_pos['Fs'] = (-1,1)
node_pos['Ft'] = (5,1)
# add edges to connect variable nodes and factor nodes
G.add_edges_from([('v%d' % i,'F%d' % i) for i in xrange(3)])
G.add_edges_from([('v%d' % i,'F%d%d' % (i,i+1)) for i in xrange(2)])
G.add_edges_from([('v%d' % (i+1),'F%d%d' % (i,i+1)) for i in xrange(2)])
G.add_edges_from([('v0','Fs'),('v2','Ft')])
# draw graph
fig, ax = plt.subplots(figsize=(6,2))
nx.draw_networkx_nodes(G,node_pos,nodelist=['v0','v1','v2'],node_color='white',node_size=700,ax=ax)
nx.draw_networkx_nodes(G,node_pos,nodelist=['F0','F1','F2'],node_color='yellow',node_shape='s',node_size=300,ax=ax)
nx.draw_networkx_nodes(G,node_pos,nodelist=['F01','F12'],node_color='blue',node_shape='s',node_size=300,ax=ax)
nx.draw_networkx_nodes(G,node_pos,nodelist=['Fs'],node_color='green',node_shape='s',node_size=300,ax=ax)
nx.draw_networkx_nodes(G,node_pos,nodelist=['Ft'],node_color='purple',node_shape='s',node_size=300,ax=ax)
nx.draw_networkx_edges(G,node_pos,alpha=0.7)
plt.axis('off')
plt.tight_layout()
Explanation: An example of the graph structure is visualized below, from which you may get a better sense of how a factor graph is built. Note that different colors are used to represent different factor types.
End of explanation
from modshogun import FactorGraphModel, TREE_MAX_PROD
# create model and register factor types
model = FactorGraphModel(samples, labels, TREE_MAX_PROD)
model.add_factor_type(ftype_all[0])
model.add_factor_type(ftype_all[1])
model.add_factor_type(ftype_all[2])
model.add_factor_type(ftype_all[3])
Explanation: Training
Now we can create the factor graph model and start training. We will use the tree max-product belief propagation to do MAP inference.
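As a reminder of what MAP inference computes, here is a toy example of our own (exhaustive search on a two-node chain; tree max-product returns the same argmax without enumerating):

```python
import numpy as np

unary = np.array([[0.1, 0.9],   # node 0 scores for states {0, 1}
                  [0.8, 0.2]])  # node 1 scores
pair = np.array([[0.5, 0.0],
                 [0.0, 0.5]])   # pairwise score favours equal labels

# Score every joint labeling and keep the best one (the MAP assignment).
scores = {(a, b): unary[0, a] + unary[1, b] + pair[a, b]
          for a in range(2) for b in range(2)}
y_map = max(scores, key=scores.get)
```

Here the strong unary score for state 1 at node 0 outweighs the pairwise preference for equal labels.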
End of explanation
from modshogun import DualLibQPBMSOSVM
from modshogun import BmrmStatistics
import pickle
import time
# create bundle method SOSVM, there are few variants can be chosen
# BMRM, Proximal Point BMRM, Proximal Point P-BMRM, NCBM
# usually the default one i.e. BMRM is good enough
# lambda is set to 1e-2
bmrm = DualLibQPBMSOSVM(model, labels, 0.01)
bmrm.set_TolAbs(20.0)
bmrm.set_verbose(True)
bmrm.set_store_train_info(True)
# train
t0 = time.time()
bmrm.train()
t1 = time.time()
w_bmrm = bmrm.get_w()
print "BMRM took", t1 - t0, "seconds."
Explanation: In Shogun, we have implemented several batch and online solvers. Let's first try to train the model using a batch solver. We choose the dual bundle method solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDualLibQPBMSOSVM.html">DualLibQPBMSOSVM</a>) [2], since in practice it is slightly faster than the primal n-slack cutting plane solver (<a href="http://www.shogun-toolbox.org/doc/en/latest/PrimalMosekSOSVM_8h.html">PrimalMosekSOSVM</a>) [3]. However, it will still take a while until convergence. Briefly, in each iteration, a gradually tighter piecewise-linear lower bound of the objective function is constructed by adding more cutting planes (most violated constraints), and then the approximate QP is solved. Finding a cutting plane involves calling the max oracle $H_i(\mathbf{w})$, and on average $N$ calls are required per iteration. This is basically why the training is time consuming.
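The cutting-plane idea can be caricatured in one dimension (a sketch of ours, not DualLibQPBMSOSVM's actual algorithm): each subgradient adds a plane, and we minimize the growing piecewise-linear lower bound.

```python
import numpy as np

def cutting_plane_min(f, grad, w0=5.0, n_iter=10):
    # Minimize a 1-D convex f via a growing piecewise-linear lower bound.
    planes = []                          # (slope a, intercept b): f(w) >= a*w + b
    w = w0
    grid = np.linspace(-10, 10, 2001)
    for _ in range(n_iter):
        a = grad(w)                      # subgradient at the current iterate
        b = f(w) - a * w                 # the plane touches f at w
        planes.append((a, b))
        model = np.max([a_i * grid + b_i for a_i, b_i in planes], axis=0)
        w = grid[np.argmin(model)]       # minimize the current lower bound
    return w

w_star = cutting_plane_min(lambda w: 0.5 * w * w, lambda w: w)
```

Each added plane tightens the model, and the iterate homes in on the true minimizer at 0.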
End of explanation
import matplotlib.pyplot as plt
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
primal_bmrm = bmrm.get_helper().get_primal_values()
dual_bmrm = bmrm.get_result().get_hist_Fd_vector()
len_iter = min(primal_bmrm.size, dual_bmrm.size)
primal_bmrm = primal_bmrm[1:len_iter]
dual_bmrm = dual_bmrm[1:len_iter]
# plot duality gaps
xs = range(dual_bmrm.size)
axes[0].plot(xs, (primal_bmrm-dual_bmrm), label='duality gap')
axes[0].set_xlabel('iteration')
axes[0].set_ylabel('duality gap')
axes[0].legend(loc=1)
axes[0].set_title('duality gaps');
axes[0].grid(True)
# plot primal and dual values
xs = range(dual_bmrm.size-1)
axes[1].plot(xs, primal_bmrm[1:], label='primal')
axes[1].plot(xs, dual_bmrm[1:], label='dual')
axes[1].set_xlabel('iteration')
axes[1].set_ylabel('objective')
axes[1].legend(loc=1)
axes[1].set_title('primal vs dual');
axes[1].grid(True)
Explanation: Let's check the duality gap to see if the training has converged. We aim at minimizing the primal problem while maximizing the dual problem. By the weak duality theorem, the optimal value of the primal problem is always greater than or equal to that of the dual problem. Thus, we can expect the duality gap to decrease over time. A relatively small and stable duality gap may indicate convergence. In fact, the gap doesn't have to become zero, since we know the solution is not far away from the local minimum.
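A practical stopping rule implied by this discussion might look as follows (a hedged sketch of ours; the set_TolAbs tolerance used in the training cell plays this role inside Shogun):

```python
def converged(primal, dual, rel_tol=1e-3):
    # Weak duality guarantees primal >= dual, so the gap is non-negative.
    gap = primal - dual
    return gap <= rel_tol * max(abs(primal), 1.0)
```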
End of explanation
# statistics
bmrm_stats = bmrm.get_result()
nCP = bmrm_stats.nCP
nzA = bmrm_stats.nzA
print 'number of cutting planes: %d' % nCP
print 'number of active cutting planes: %d' % nzA
Explanation: There are other statistics that may also be helpful to check whether the solution is good or not, such as the number of cutting planes, from which we may get a sense of how tight the piecewise lower bound is. In general, the number of cutting planes should be much less than the dimension of the parameter vector.
End of explanation
from modshogun import StochasticSOSVM
# the 3rd parameter is do_weighted_averaging, by turning this on,
# a possibly faster convergence rate may be achieved.
# the 4th parameter controls outputs of verbose training information
sgd = StochasticSOSVM(model, labels, True, True)
sgd.set_num_iter(100)
sgd.set_lambda(0.01)
# train
t0 = time.time()
sgd.train()
t1 = time.time()
w_sgd = sgd.get_w()
print "SGD took", t1 - t0, "seconds."
Explanation: In our case, we have 101 active cutting planes, which is much less than 4082, i.e. the number of parameters. We could expect a good model by looking at these statistics. Now we come to the online solvers. Unlike the cutting plane algorithms, which re-optimize over all the previously added dual variables, an online solver updates the solution based on a single point. This difference results in a faster convergence rate, i.e. fewer oracle calls; please refer to Table 1 in [4] for more detail. Here, we use stochastic subgradient descent (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStochasticSOSVM.html">StochasticSOSVM</a>) to compare with the BMRM algorithm shown before.
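For intuition, one stochastic subgradient step on the SOSVM objective can be sketched as below (a Pegasos-style update of our own; StochasticSOSVM's exact step-size schedule may differ):

```python
import numpy as np

def sgd_step(w, psi_diff, lam, t):
    # psi_diff = phi(x_i, y_i) - phi(x_i, y_hat) for the loss-augmented y_hat.
    eta = 1.0 / (lam * (t + 1))  # decaying step size
    return (1.0 - eta * lam) * w + eta * psi_diff

w = sgd_step(np.zeros(3), np.array([1.0, -1.0, 0.0]), lam=0.01, t=0)
```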
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
primal_sgd = sgd.get_helper().get_primal_values()
xs = range(dual_bmrm.size-1)
axes[0].plot(xs, primal_bmrm[1:], label='BMRM')
axes[0].plot(range(99), primal_sgd[1:100], label='SGD')
axes[0].set_xlabel('effective passes')
axes[0].set_ylabel('primal objective')
axes[0].set_title('whole training progress')
axes[0].legend(loc=1)
axes[0].grid(True)
axes[1].plot(range(99), primal_bmrm[1:100], label='BMRM')
axes[1].plot(range(99), primal_sgd[1:100], label='SGD')
axes[1].set_xlabel('effective passes')
axes[1].set_ylabel('primal objective')
axes[1].set_title('first 100 effective passes')
axes[1].legend(loc=1)
axes[1].grid(True)
Explanation: We compare SGD and BMRM in terms of the primal objectives versus effective passes. We first plot the training progress (until both algorithms converge) and then zoom in to check the first 100 passes. In order to make a fair comparison, we set the regularization constant to 1e-2 for both algorithms.
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12,4))
terr_bmrm = bmrm.get_helper().get_train_errors()
terr_sgd = sgd.get_helper().get_train_errors()
xs = range(terr_bmrm.size-1)
axes[0].plot(xs, terr_bmrm[1:], label='BMRM')
axes[0].plot(range(99), terr_sgd[1:100], label='SGD')
axes[0].set_xlabel('effective passes')
axes[0].set_ylabel('training error')
axes[0].set_title('whole training progress')
axes[0].legend(loc=1)
axes[0].grid(True)
axes[1].plot(range(99), terr_bmrm[1:100], label='BMRM')
axes[1].plot(range(99), terr_sgd[1:100], label='SGD')
axes[1].set_xlabel('effective passes')
axes[1].set_ylabel('training error')
axes[1].set_title('first 100 effective passes')
axes[1].legend(loc=1)
axes[1].grid(True)
Explanation: As shown above, the SGD solver uses fewer oracle calls to converge. Note that the reported timings are about 2 times longer than actually necessary, since there are additional computations of the primal objective and training error in each pass. The training errors of both algorithms for each pass are shown below.
End of explanation
def hinton(matrix, max_weight=None, ax=None):
    """Draw Hinton diagram for visualizing a weight matrix."""
    ax = ax if ax is not None else plt.gca()
    if not max_weight:
        max_weight = 2**np.ceil(np.log(np.abs(matrix).max())/np.log(2))
    ax.patch.set_facecolor('gray')
    ax.set_aspect('equal', 'box')
    ax.xaxis.set_major_locator(plt.NullLocator())
    ax.yaxis.set_major_locator(plt.NullLocator())
    for (x,y),w in np.ndenumerate(matrix):
        color = 'white' if w > 0 else 'black'
        size = np.sqrt(np.abs(w))
        rect = plt.Rectangle([x - size / 2, y - size / 2], size, size,
                             facecolor=color, edgecolor=color)
        ax.add_patch(rect)
    ax.autoscale_view()
    ax.invert_yaxis()
# get pairwise parameters, also accessible from
# w[n_dims*n_stats:n_dims*n_stats+n_stats*n_stats]
model.w_to_fparams(w_sgd) # update factor parameters
w_p = ftype_all[1].get_w()
w_p = np.reshape(w_p,(n_stats,n_stats))
hinton(w_p)
Explanation: Interestingly, the training errors of the SGD solver are lower than BMRM's in the first 100 passes, but in the end the BMRM solver obtains a better training performance. A probable explanation is that BMRM uses a very limited number of cutting planes at the beginning, which form a poor approximation of the objective function. As the number of cutting planes increases, we get a tighter piecewise lower bound, and thus improve the performance. In addition, we would like to show the pairwise weights, which may capture important co-occurrences of letters. The Hinton diagram is a wonderful tool for visualizing 2D data, in which positive and negative values are represented by white and black squares, respectively, and the size of each square represents the magnitude of each value. In our case, a smaller number, i.e. a large black square, indicates that the two letters tend to coincide.
End of explanation
# get testing data
samples_ts, labels_ts = prepare_data(p_ts, l_ts, ftype_all, n_ts_samples)
from modshogun import FactorGraphFeatures, FactorGraphObservation, TREE_MAX_PROD, MAPInference
# get a factor graph instance from test data
fg0 = samples_ts.get_sample(100)
fg0.compute_energies()
fg0.connect_components()
# create a MAP inference using tree max-product
infer_met = MAPInference(fg0, TREE_MAX_PROD)
infer_met.inference()
# get inference results
y_pred = infer_met.get_structured_outputs()
y_truth = FactorGraphObservation.obtain_from_generic(labels_ts.get_label(100))
print y_pred.get_data()
print y_truth.get_data()
Explanation: Inference
Next, we show how to do inference with the learned model parameters for a given data point.
End of explanation
from modshogun import LabelsFactory, SOSVMHelper
# training error of BMRM method
bmrm.set_w(w_bmrm)
model.w_to_fparams(w_bmrm)
lbs_bmrm = bmrm.apply()
acc_loss = 0.0
ave_loss = 0.0
for i in xrange(n_tr_samples):
    y_pred = lbs_bmrm.get_label(i)
    y_truth = labels.get_label(i)
    acc_loss = acc_loss + model.delta_loss(y_truth, y_pred)
ave_loss = acc_loss / n_tr_samples
print('BMRM: Average training error is %.4f' % ave_loss)
# training error of stochastic method
print('SGD: Average training error is %.4f' % SOSVMHelper.average_loss(w_sgd, model))
# testing error
bmrm.set_features(samples_ts)
bmrm.set_labels(labels_ts)
lbs_bmrm_ts = bmrm.apply()
acc_loss = 0.0
ave_loss_ts = 0.0
for i in xrange(n_ts_samples):
    y_pred = lbs_bmrm_ts.get_label(i)
    y_truth = labels_ts.get_label(i)
    acc_loss = acc_loss + model.delta_loss(y_truth, y_pred)
ave_loss_ts = acc_loss / n_ts_samples
print('BMRM: Average testing error is %.4f' % ave_loss_ts)
# testing error of stochastic method
print('SGD: Average testing error is %.4f' % SOSVMHelper.average_loss(sgd.get_w(), model))
Explanation: Evaluation
In the end, we check average training error and average testing error. The evaluation can be done by two methods. We can either use the apply() function in the structured output machine or use the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSOSVMHelper.html">SOSVMHelper</a>.
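The loop-based evaluation in the code amounts to an average normalized Hamming error, which can be sketched independently of Shogun:

```python
def avg_error(preds, truths):
    # Mean over sequences of the per-sequence fraction of wrong letters.
    losses = [sum(p != t for p, t in zip(y_hat, y)) / float(len(y))
              for y_hat, y in zip(preds, truths)]
    return sum(losses) / len(losses)
```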
End of explanation
Description:
Product of 4 consecutive numbers is always 1 less than a perfect square
<p>
<center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center>
</p>
Step1: Let us look at the right hand side of the equation first, i.e. $k^2 - 1$.
This can be rewritten as $\textbf{(k-1)*(k+1)}$
Now, this is where a hint lies.
What the right hand side means is that it is a product of two integers ($k-1$ and $k+1$) which differ by 2.
We can see that this is the case
Step2: More videos to come
<p>
<center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center>
</p>
Step3: Related works
P. Erdös. J. L. Selfridge. "The product of consecutive integers is never a power." Illinois J. Math. 19 (2) 292 - 301, June 1975. https://doi.org/10.1215/ijm/1256050816
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

i_max = 4
nums = np.arange(0, 50)+1
consecutive_nums = np.stack([
np.roll(nums, -i)
for i in range(i_max)
], axis=1)[:-i_max+1]
n_prods = consecutive_nums.prod(axis=1)
df = pd.DataFrame(consecutive_nums, columns=[f"n{i+1}" for i in range(i_max)])
df["prod"] = n_prods
df["k"] = np.sqrt(n_prods+1).astype(int)
df["k^2"] = df["k"]**2
df["k^2 - 1"] = df["k^2"] - 1
df
fig, ax = plt.subplots(1,3, figsize=(18, 6))
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max-1}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-")
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.tight_layout()
Explanation: Product of 4 consecutive numbers is always 1 less than a perfect square
<p>
<center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center>
 
</p>
For every $n \in \mathbb{Z}$, we can have 4 consecutive numbers as follows:
$
n, n+1, n+2, n+3
$
We can complete the proof if we can show that there exists a $k \in \mathbb{Z}$, such that the following equation holds:
$
\begin{equation}
n(n+1)(n+2)(n+3) = (k^2 - 1)
\end{equation}
$
End of explanation
df["k = n^2 + 3n + 1"] = (df["n1"]**2 + 3*df["n1"] + 1)
df
fig, ax = plt.subplots(1,3, figsize=(12, 6))
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max-1}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-", label="$k = \sqrt{y + 1}$")
ax[2].plot(df["n1"], df["k = n^2 + 3n + 1"], "r--", label="$k = n^2 + 3n + 1$")
ax[2].legend(fontsize=14)
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.tight_layout()
Explanation: Let us look at the right hand side of the equation first, i.e. $k^2 - 1$.
This can be rewritten as $\textbf{(k-1)*(k+1)}$
Now, this is where a hint lies.
What the right hand side means is that it is a product of two integers ($k-1$ and $k+1$) which differ by 2.
We can see that this is the case:
$
\begin{equation}
(k+1) - (k-1) \
= k + 1 - k - (-1) \
= k - k + 1 - (-1) \
= 0 + 1 + 1 \
= 2 \
\end{equation}
$
So, if we can somehow show that the left hand side of the original equation, i.e. $n(n+1)(n+2)*(n+3)$:
can be represented as a product of two numbers which differ by 2, then we are done,
as these numbers can then be mapped to $k-1$ and $k+1$ for some $k \in \mathbb{Z}$.
We can group the numbers $\textbf{n, n+1, n+2, n+3}$ into pairs, with the hope of getting $k-1$ and $k+1$.
We can utilize following facts to choose the two pairs:
The difference of the products should be constant, and hence independent of $n$
Knowing that product of two factors of type $(n+i)(n+j) = n^2 + (i+j)n + i*j$,
We can observe that $i+j$ will be same for numbers which are equidistant from the middle of all numbers.
Now we can select our pair of numbers.
The first pair is $n$ and $(n+3)$,
and their product is $\textbf{n * (n+3)}$
which can be expanded as $\color{red}{\textbf{n^2 + 3n}}$
And, the second pair $(n+1)$ and $(n+2)$,
and their product is $\textbf{(n+1)*(n+2)}$
which can be expanded as $\color{red}{\textbf{n^2 + 3n}} + \textbf{2}$
Based on the above pairing we can immediately see that the difference of these pair products is as follows:
$
\begin{equation}
[(n+1)*(n+2)] - [n * (n+3)]\
= [\color{red}{n^2 + 3n} + 2] - [\color{red}{n^2 + 3n}]\
= n^2 + 3n + 2 - n^2 - 3n\
= (n^2 -n^2) + (3n - 3n) + 2\
= 0 + 0 + 2\
= 2
\end{equation}
$
Hence, based on the above simplification, we can map:
$(\color{red}{n^2 + 3n} + 2) \rightarrow (k+1)$, and
$(\color{red}{n^2 + 3n}) \rightarrow (k-1)$.
Now, if we choose $\color{blue}{\textbf{k = (n^2 + 3n + 1)}}$, the following equations hold:
$n^2 + 3n + 2 = \color{blue}{(n^2 + 3n + 1)} + 1 = \color{blue}{k} + 1$
$n^2 + 3n = \color{blue}{(n^2 + 3n + 1)} - 1 = \color{blue}{k} - 1$
Hence, we have proved the following:
$
\begin{equation}
\forall n \in \mathbb{Z}, \
\exists k \in \mathbb{Z} \
n(n+1)(n+2)(n+3) \
= [(n+3)n][(n+1)(n+2)]\
= [\color{red}{n^2 + 3n}][\color{red}{n^2 + 3n} + 2]\
= [\color{blue}{(n^2 + 3n + 1)} - 1][\color{blue}{(n^2 + 3n + 1)} + 1]\
= [\color{blue}{k} - 1]*[\color{blue}{k} + 1]\
= (k^2 - 1)
\end{equation}
$
And this equation can be solved by choosing $\color{blue}{\textbf{k = (n^2 + 3n + 1)}}$.
Hence, proved.
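The identity can also be spot-checked numerically over a range of integers:

```python
# Check n(n+1)(n+2)(n+3) == (n^2 + 3n + 1)^2 - 1 for many integers n.
ok = all(n * (n + 1) * (n + 2) * (n + 3) == (n * n + 3 * n + 1) ** 2 - 1
         for n in range(-50, 51))
```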
End of explanation
fig, ax = plt.subplots(1,3, figsize=(12, 6))
fig.patch.set_facecolor('white')
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max-1}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-", label="$k = \sqrt{y + 1}$")
ax[2].plot(df["n1"], df["k = n^2 + 3n + 1"], "r--", label="$k = n^2 + 3n + 1$")
ax[2].legend(fontsize=14)
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.suptitle(f"Product of 4 consecutive integers is 1 less than a perfect square.", fontsize=20)
fig.tight_layout()
Explanation: More videos to come
<p>
<center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center>
 
</p>
End of explanation
nums = np.arange(10,10+4)
A = np.zeros((nums[0], nums[-1]))
A[:, nums[0]:] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
nums = np.arange(10,10+4)
A = np.zeros((nums[1], nums[2]))
A[:, nums[0]:] = 2
A[nums[0]:, :] = 3
A[nums[0]:, nums[0]:] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
import matplotlib.animation as animation
from IPython.display import HTML
fig, ax = plt.subplots(1,1)
frames = []
nums = np.arange(10,10+4)
A = np.zeros((nums[1], nums[-1]))
im = ax.pcolormesh(A, cmap="inferno", vmin=0, vmax=4)
title = ax.set_title(f"Start")
ax.invert_yaxis()
ax.set_xticks(np.arange(A.shape[1]))
ax.set_yticks(np.arange(A.shape[0]))
ax.grid(which="major", color="w", linestyle='-', linewidth=3)
def init():
    im.set_array(A)
    title.set_text("")
    return im, title

def animate(i):
    text = ""
    if i == 0:
        A[:, nums[0]:] = 4
        A[nums[0]:, :] = 4
        text = "$n * n$"
    if i == 1:
        A[:, nums[0]:] = 2
        A[nums[0]:, ] = 4
        text = "$n * (n+3)$"
    if i == 2:
        A[:, nums[0]:] = 2
        A[:, nums[2]:] = 3
        A[nums[0]:, ] = 4
        text = "$n * (n+3)$"
    if i == 3:
        A[:, nums[2]:] = 4
        A[nums[0]:, :] = 3
        A[nums[0]:, nums[0]:] = 4
        A[nums[0]:, nums[0]:nums[2]] = 4
        text = "$(n+1) * (n+2)$"
    if i == 4:
        A[nums[0]:, nums[0]:nums[2]] = 1
        text = "$n * (n+3) = (n+1)*(n+2) - 2$"
    # print(A)
    im.set_array(A)
    title.set_text(f"Step: {i} | {text}")
    return im, title
# ax = sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
fig.tight_layout()
ani = animation.FuncAnimation(fig,animate,frames=5,interval=2000,blit=True,repeat=True)
HTML(ani.to_html5_video())
# frames
# ax.cla()
nums = np.arange(10,10+4)
A = np.zeros((nums[1], nums[-1]))
A[:, nums[0]:] = 2
A[nums[0]:, ] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)
A[:, nums[2]:] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)
A[nums[0]:, :] = 2
A[nums[0]:, nums[0]:] = 4
A[nums[0]:, nums[0]:nums[2]] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)
nums = np.arange(10,10+4)
A = np.zeros((nums[1], nums[-1]))
A[:, nums[0]:] = 2
A[nums[0]:, :] = 3
A[nums[0]:, nums[0]:] = 1
A[:, nums[2]:] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
Explanation: Related works
P. Erdös. J. L. Selfridge. "The product of consecutive integers is never a power." Illinois J. Math. 19 (2) 292 - 301, June 1975. https://doi.org/10.1215/ijm/1256050816
Visual Proof
End of explanation
Description:
Taxonomy assignment of simulated communities
This notebook demonstrates how to assign taxonomy to communities simulated from natural compositions. These data are stored in the precomputed-results directory in tax-credit, and this notebook does not need to be re-run unless it is being used to test additional simulated communities or taxonomy assignment methods.
Step1: First, set the location of the tax-credit repository, the reference databases, and the simulated community directory
Step2: In the following cell, we define the simulated communities that we want to use for taxonomy assignment. The directory for each dataset is located in sim_dir, and contains the files simulated-seqs.fna that were previously generated in the dataset generation notebook.
Step3: Assign taxonomy to simulated community sequences
First, set the results directory, where we will put temporary results.
Step4: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in Python 2, we must also activate a separate environment in which QIIME 1 has been installed. If any environment variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
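For example, a wrapped command might be built like this (a hypothetical sketch; the environment name, script flags, and file names are illustrative assumptions, not the exact commands generated below):

```python
# Build a single shell string so environment setup and the classifier call
# run in the same bash invocation.
cmd_template = ('source ~/.bashrc; source activate qiime1; '
                'assign_taxonomy.py -i {seqs} -r {ref_seqs} -t {ref_taxa} -o {out}')
cmd = cmd_template.format(seqs='simulated-seqs.fna', ref_seqs='refs.fasta',
                          ref_taxa='taxa.tsv', out='out_dir')
# os.system(cmd) would then hand the string to the shell
```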
Step5: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format
Step6: A quick sanity check...
Step7: ... and finally we are ready to run.
Step8: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
Step9: Move result files to repository
Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
Step10: Add expected composition bioms to repository | Python Code:
from os.path import join, expandvars
from joblib import Parallel, delayed
from glob import glob
from os import system
from tax_credit.simulated_communities import copy_expected_composition
from tax_credit.framework_functions import (parameter_sweep,
generate_per_method_biom_tables,
move_results_to_repository)
Explanation: Taxonomy assignment of simulated communities
This notebook demonstrates how to assign taxonomy to communities simulated from natural compositions. These data are stored in the precomputed-results directory in tax-credit and this notebook does not need to be re-run unless it is being used to test additional simulated communities or taxonomy assignment methods.
End of explanation
# Project directory
project_dir = expandvars("$HOME/Desktop/projects/tax-credit/")
# Directory containing reference sequence databases
reference_database_dir = join(project_dir, 'data', 'ref_dbs')
# simulated communities directory
sim_dir = join(project_dir, "data", "simulated-community")
Explanation: First, set the location of the tax-credit repository, the reference databases, and the simulated community directory
End of explanation
dataset_reference_combinations = [
# (community_name, ref_db)
('sake', 'gg_13_8_otus'),
('wine', 'unite_20.11.2016')
]
reference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/99_otus_clean_515f-806r_trim250.fasta'),
join(reference_database_dir, 'gg_13_8_otus/99_otu_taxonomy_clean.tsv')),
'unite_20.11.2016' : (join(reference_database_dir, 'unite_20.11.2016/sh_refs_qiime_ver7_99_20.11.2016_dev_clean_BITSf-B58S3r_trim250.fasta'),
join(reference_database_dir, 'unite_20.11.2016/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv'))}
Explanation: In the following cell, we define the simulated communities that we want to use for taxonomy assignment. The directory for each dataset is located in sim_dir, and contains the files simulated-seqs.fna that were previously generated in the dataset generation notebook.
End of explanation
results_dir = expandvars("$HOME/Desktop/projects/simulated-community/")
Explanation: Assign taxonomy to simulated community sequences
First, set the results directory, where we will put temporary results.
End of explanation
method_parameters_combinations = { # probabilistic classifiers
'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,
0.6, 0.7, 0.8, 0.9, 1.0]},
# global alignment classifiers
'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.8, 0.9],
'uclust_max_accepts': [1, 3, 5]},
# local alignment classifiers
'sortmerna': {'sortmerna_e_value': [1.0],
'min_consensus_fraction': [0.51, 0.76, 1.0],
'similarity': [0.8, 0.9],
'sortmerna_best_N_alignments ': [1, 3, 5],
'sortmerna_coverage' : [0.8, 0.9]},
'blast' : {'blast_e_value' : [0.0000000001, 0.001, 1, 1000]}
}
Explanation: Preparing the method/parameter combinations and generating commands
Now we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.
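As a purely illustrative aside (this snippet is not part of tax-credit), a parameter grid like the uclust entry above expands into every combination of its values:

```python
from itertools import product

# hypothetical stand-in for one method's entry in method_parameters_combinations
grid = {'min_consensus_fraction': [0.51, 0.76, 1.0],
        'similarity': [0.8, 0.9],
        'uclust_max_accepts': [1, 3, 5]}

# every method/parameter combination the sweep will visit for this method
combos = [dict(zip(grid, values)) for values in product(*grid.values())]
assert len(combos) == 3 * 2 * 3  # 18 uclust parameter combinations
```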
Assignment Using QIIME 1 or Command-Line Classifiers
Here we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in python-2, we must also activate a separate environment in which QIIME 1 has been installed. If any environmental variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.
End of explanation
command_template = "source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 16000"
commands = parameter_sweep(sim_dir, results_dir, reference_dbs,
dataset_reference_combinations,
method_parameters_combinations, command_template,
infile='simulated-seqs.fna')
Explanation: Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().
Fields must adhere to the following format:
{0} = output directory
{1} = input data
{2} = output destination
{3} = reference taxonomy
{4} = method name
{5} = other parameters
End of explanation
print(len(commands))
commands[0]
Explanation: A quick sanity check...
End of explanation
Parallel(n_jobs=4)(delayed(system)(command) for command in commands)
Explanation: ... and finally we are ready to run.
End of explanation
taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'simulated-seqs_tax_assignments.txt')
generate_per_method_biom_tables(taxonomy_glob, sim_dir, biom_input_fn='simulated-composition.biom')
Explanation: Generate per-method biom tables
Modify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.
End of explanation
precomputed_results_dir = join(project_dir, "data", "precomputed-results", "simulated-community")
method_dirs = glob(join(results_dir, '*', '*', '*', '*'))
move_results_to_repository(method_dirs, precomputed_results_dir)
Explanation: Move result files to repository
Add results to the tax-credit directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and methods_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.
End of explanation
copy_expected_composition(sim_dir, dataset_reference_combinations, precomputed_results_dir)
Explanation: Add expected composition bioms to repository
End of explanation |
10,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../images/qiskit-heading.gif" alt="Note
Step1: Quantum Teleportation<a id='teleportation'></a>
Quantum teleportation is a protocol to transmit quantum states from one location to another, assisted by a previously shared entangled state and a classical communication channel. It was devised by Charles H. Bennett (IBM), Gilles Brassard, Claude Crépeau, Richard Jozsa, Asher Peres, and William K. Wootters in 1993. It was first demonstrated with photons in 1997, and has since been realised in atoms, ions, electrons and superconducting circuits. The record distance for quantum teleportation is 143 km via satellite, set in 2012.
<img src="../images/teleportation.png" alt="Note
Step2: Alice then prepares her quantum state to be teleported, $|\psi\rangle_{C} = \alpha|0\rangle_C + \beta|1\rangle_C$. In this experiment, $\alpha = \cos(\frac{\theta}{2})$ and $\beta = \sin(\frac{\theta}{2})$ where $\theta = \frac{\pi}{4}$. This state can be created by applying a rotation around the y axis
Step3: Alice now applies $CNOT$ to her two quantum states $q_A(q_1)$ and $q_C(q_0)$, followed by an $H$, to entangle them and project them into the Bell basis
Step4: She now measures her two quantum states $q_A(q_1)$ and $q_C(q_0)$
Step5: Depending on the results of these measurements, Bob has to apply an $X$ or $Z$, or both, to his quantum state $q_B(q_2)$
Step6: His state is now the same as the state Alice prepared earlier, which can be verified by measurement
Step7: Let's now create and execute the quantum circuits and plot the results
Step8: We must manipulate the data to understand the results better, first only plotting the results of Alice's measurement
Step9: As expected, the probabilities are roughly equal.
Now, manipulate the data to plot the result of Bob's measurement
Step10: As expected, $|\alpha|^2 = |\cos(\frac{\pi}{8})|^2 \approx 0.854$ (the probability of measuring 0) and $|\beta|^2 = |\sin(\frac{\pi}{8})|^2 \approx 0.146$ (the probability of measuring 1). Why don't you try teleporting a different quantum state now?
Quantum Superdense Coding<a id='superdensecoding'></a>
Quantum superdense coding is the dual protocol of quantum teleportation, whereby two classical bits of information are transmitted using only one qubit and a previously shared entangled state. It was devised by Charles Bennett (IBM) and Stephen Wiesner in 1992.
<img src="../images/superdensecoding.png" alt="Note
Step11: Alice now needs to decide what two-bit message she wants to transmit to Bob ($00$, $01$, $10$, or $11$), and perform the corresponding transformation ($I$, $X$, $Z$ or $XZ$, respectively) to her qubit $q_A$ ($q_0$). In this case, she encodes $11$
Step12: Bob now needs to 'decode' the message that Alice sent him. Since measurement in Qiskit is only possible in the standard computational basis, he does this by
Step13: Let's now create, execute the quantum circuits, and plot the results | Python Code:
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, IBMQ, execute
# import basic plot tools
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
Explanation: <img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Quantum Teleportation and Superdense Coding
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
Contributors
Anna Phan, Jay Gambetta, Takashi Imamichi
Introduction
In entanglement, we introduced you to the quantum concept of entanglement, in particular, the maximally entangled quantum state $|\psi\rangle = (|00\rangle + |11\rangle)$. In testing entanglement, we explored these types of states in detail, running various experiments to compare quantum mechanics to hidden variable models. In this notebook, we will explore how this state can be used in two quantum communication protocols:
* Teleportation, where a qubit state is transmitted using two classical bits; and
* Superdense Coding, where two classical bits are transmitted using one qubit.
End of explanation
# Creating registers
tq = QuantumRegister(3)
tc0 = ClassicalRegister(1)
tc1 = ClassicalRegister(1)
tc2 = ClassicalRegister(1)
# Quantum circuit to make the shared entangled state
teleport = QuantumCircuit(tq, tc0,tc1,tc2)
teleport.h(tq[1])
teleport.cx(tq[1], tq[2])
Explanation: Quantum Teleportation<a id='teleportation'></a>
Quantum teleportation is a protocol to transmit quantum states from one location to another, assisted by a previously shared entangled state and a classical communication channel. It was devised by Charles H. Bennett (IBM), Gilles Brassard, Claude Crépeau, Richard Jozsa, Asher Peres, and William K. Wootters in 1993. It was first demonstrated with photons in 1997, and has since been realised in atoms, ions, electrons and superconducting circuits. The record distance for quantum teleportation is 143 km via satellite, set in 2012.
<img src="../images/teleportation.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="600 px" align="center">
As illustrated above, the protocol starts out with a shared entangled state between the sender (Alice) and the receiver (Bob):
$$|\psi\rangle_{AB} = \frac{1}{\sqrt{2}}(|0\rangle_A \otimes |0\rangle_B + |1\rangle_A \otimes |1\rangle_B)$$
The first qubit, denoted by subscript $A$, belongs to Alice, and the second qubit, $B$, belongs to Bob.
Alice has a quantum state that she wants to convey to Bob:
$$|\psi\rangle_{C} = \alpha|0\rangle_C + \beta|1\rangle_C$$
At this point, Alice has two quantum states ($C$, the one she wants to teleport, and $A$, one of the entangled pair), and Bob has one quantum state. The total state of the system is given by:
$$|\psi\rangle_{AB} \otimes |\psi\rangle_C = \frac{1}{\sqrt{2}}(|0\rangle_A \otimes |0\rangle_B + |1\rangle_A \otimes |1\rangle_B) \otimes (\alpha|0_C\rangle + \beta|1_C\rangle)$$
or, in the Bell basis:
$$|\psi\rangle_{AB} \otimes |\psi\rangle_C = \frac{1}{2}[
|\Phi^+\rangle_{AC}\otimes(\alpha|0\rangle_B + \beta|1\rangle_B) +
|\Phi^-\rangle_{AC}\otimes(\alpha|0\rangle_B - \beta|1\rangle_B) +
|\Psi^+\rangle_{AC}\otimes(\alpha|1\rangle_B + \beta|0\rangle_B) +
|\Psi^-\rangle_{AC}\otimes(-\alpha|1\rangle_B + \beta|0\rangle_B) ]$$
where:
$$|0\rangle \otimes |0\rangle = \frac{1}{\sqrt{2}}(|\Phi^+\rangle + |\Phi^-\rangle),
|0\rangle \otimes |1\rangle = \frac{1}{\sqrt{2}}(|\Psi^+\rangle + |\Psi^-\rangle), \\
|1\rangle \otimes |0\rangle = \frac{1}{\sqrt{2}}(|\Psi^+\rangle - |\Psi^-\rangle),
|1\rangle \otimes |1\rangle = \frac{1}{\sqrt{2}}(|\Phi^+\rangle - |\Phi^-\rangle).$$
Alice now measures her two quantum states, $A$ and $C$, in the Bell basis. This will collapse the three-qubit system into one of the following four states with equal probability, with the corresponding measurement outcomes:
- 00: $|\Phi^+\rangle_{AC}\otimes(\alpha|0\rangle_B + \beta|1\rangle_B)$
- 01: $|\Phi^-\rangle_{AC}\otimes(\alpha|0\rangle_B - \beta|1\rangle_B)$
- 10: $|\Psi^+\rangle_{AC}\otimes(\alpha|1\rangle_B + \beta|0\rangle_B)$
- 11: $|\Psi^-\rangle_{AC}\otimes(-\alpha|1\rangle_B + \beta|0\rangle_B)$
Alice now sends the results of her measurements to Bob. Using this information, he performs one of the following transformations on his quantum state to transform it to the desired state $\alpha|0\rangle_B + \beta|1\rangle_B$:
- If he receives 00, he applies $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$
- If he receives 01, he applies $Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
- If he receives 10, he applies $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$
- If he receives 11, he applies $XZ = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
Transmission (teleportation) of $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ is thus achieved.
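The algebra above can be sanity-checked with plain NumPy; this snippet is illustrative and not part of the original notebook:

```python
import numpy as np

alpha, beta = np.cos(np.pi / 8), np.sin(np.pi / 8)  # the state used later on
target = np.array([alpha, beta])                    # alpha|0> + beta|1>

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Bob's state for each of Alice's outcomes (per the list above), paired with
# the correction he applies on receiving those two classical bits.
cases = {
    '00': (np.array([alpha, beta]), I),       # alpha|0> + beta|1>
    '01': (np.array([alpha, -beta]), Z),      # alpha|0> - beta|1>
    '10': (np.array([beta, alpha]), X),       # alpha|1> + beta|0>
    '11': (np.array([beta, -alpha]), X @ Z),  # -alpha|1> + beta|0>
}
for bits, (state, correction) in cases.items():
    assert np.allclose(correction @ state, target), bits
print("every correction recovers alpha|0> + beta|1>")
```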
Recall from entanglement that the steps to make the shared entangled state $|\psi\rangle = \frac{1}{\sqrt{2}}(|0_A 0_B\rangle + |1_A 1_B\rangle)$ are:
1. Start with an initial state $|0_A 0_B\rangle$
2. Apply $H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$ on $q_A$
3. Then a $CNOT = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$ from $q_A$ to $q_B$
With $q_A = q_1$ and $q_B = q_2$, this looks like:
End of explanation
teleport.ry(np.pi/4,tq[0])
Explanation: Alice then prepares her quantum state to be teleported, $|\psi\rangle_{C} = \alpha|0\rangle_C + \beta|1\rangle_C$. In this experiment, $\alpha = \cos(\frac{\theta}{2})$ and $\beta = \sin(\frac{\theta}{2})$ where $\theta = \frac{\pi}{4}$. This state can be created by applying a rotation around the y axis:
$R_y(\theta)$ on $q_C$
With $q_C = q_0$, this looks like:
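As an illustrative check (plain NumPy, not Qiskit), $R_y(\theta)$ applied to $|0\rangle$ produces exactly these amplitudes:

```python
import numpy as np

# R_y(theta) = [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]]
theta = np.pi / 4
Ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
               [np.sin(theta / 2),  np.cos(theta / 2)]])
state = Ry @ np.array([1.0, 0.0])   # R_y(theta)|0>
assert np.allclose(state, [np.cos(np.pi / 8), np.sin(np.pi / 8)])
print(state[0] ** 2, state[1] ** 2)  # ~0.854 and ~0.146, as measured later
```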
End of explanation
teleport.cx(tq[0], tq[1])
teleport.h(tq[0])
teleport.barrier()
Explanation: Alice now applies $CNOT$ to her two quantum states $q_A(q_1)$ and $q_C(q_0)$, followed by an $H$, to entangle them and project them into the Bell basis:
End of explanation
teleport.measure(tq[0], tc0[0])
teleport.measure(tq[1], tc1[0])
Explanation: She now measures her two quantum states $q_A(q_1)$ and $q_C(q_0)$:
End of explanation
teleport.z(tq[2]).c_if(tc0, 1)
teleport.x(tq[2]).c_if(tc1, 1)
Explanation: Depending on the results of these measurements, Bob has to apply an $X$ or $Z$, or both, to his quantum state $q_B(q_2)$:
End of explanation
teleport.measure(tq[2], tc2[0])
circuit_drawer(teleport,style=qx_color_scheme())
Explanation: His state is now the same as the state Alice prepared earlier, which can be verified by measurement:
End of explanation
local_backend = Aer.get_backend('qasm_simulator') # note that this circuit can not be run on an IBM Q device
teleport_job = execute(teleport, local_backend)
teleport_result = teleport_job.result()
Explanation: Let's now create and execute the quantum circuits and plot the results:
End of explanation
data = teleport_result.get_counts(teleport)
alice = {}
alice['00'] = data['0 0 0'] + data['1 0 0']
alice['10'] = data['0 1 0'] + data['1 1 0']
alice['01'] = data['0 0 1'] + data['1 0 1']
alice['11'] = data['0 1 1'] + data['1 1 1']
plot_histogram(alice)
Explanation: We must manipulate the data to understand the results better, first only plotting the results of Alice's measurement:
Note each classical register is separated by a space, and the order is c2 c1 c0.
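For example (illustrative only), a combined key such as '1 0 1' splits into the per-register bits like this:

```python
# registers appear left-to-right as c2 c1 c0, separated by spaces
key = '1 0 1'
c2, c1, c0 = key.split()
assert (c2, c1, c0) == ('1', '0', '1')
```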
End of explanation
bob = {}
bob['0'] = data['0 0 0'] + data['0 1 0'] + data['0 0 1'] + data['0 1 1']
bob['1'] = data['1 0 0'] + data['1 1 0'] + data['1 0 1'] + data['1 1 1']
plot_histogram(bob)
Explanation: As expected, the probabilities are roughly equal.
Now, manipulate the data to plot the result of Bob's measurement:
End of explanation
# Creating registers
sdq = QuantumRegister(2)
sdc = ClassicalRegister(2)
# Quantum circuit to make the shared entangled state
superdense = QuantumCircuit(sdq, sdc)
superdense.h(sdq[0])
superdense.cx(sdq[0], sdq[1])
Explanation: As expected, $|\alpha|^2 = |\cos(\frac{\pi}{8})|^2 \approx 0.854$ (the probability of measuring 0) and $|\beta|^2 = |\sin(\frac{\pi}{8})|^2 \approx 0.146$ (the probability of measuring 1). Why don't you try teleporting a different quantum state now?
Quantum Superdense Coding<a id='superdensecoding'></a>
Quantum superdense coding is the dual protocol of quantum teleportation, whereby two classical bits of information are transmitted using only one qubit and a previously shared entangled state. It was devised by Charles Bennett (IBM) and Stephen Wiesner in 1992.
<img src="../images/superdensecoding.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="600 px" align="center">
As illustrated above, and as in quantum teleportation, the protocol starts out with a shared entangled state between the sender (Alice) and the receiver (Bob):
$$|\psi\rangle_{AB} = \frac{1}{\sqrt{2}}(|0\rangle_A \otimes |0\rangle_B + |1\rangle_A \otimes |1\rangle_B)$$
The first qubit, denoted by subscript $A$, belongs to Alice, and the second qubit, $B$, belongs to Bob.
Alice wants to send a two-bit message to Bob, 00, 01, 10, or 11. She performs a single-qubit operation on her qubit, which transforms the entangled state according to which message she wants to send:
- For a message of 00: Alice applies $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. The resultant state would be $|\psi_{00}\rangle = \frac{1}{\sqrt{2}}(|0_A 0_B\rangle + |1_A 1_B\rangle)$
- For a message of 01: Alice applies $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. The resultant state would be $|\psi_{01}\rangle = \frac{1}{\sqrt{2}}(|1_A 0_B\rangle + |0_A 1_B\rangle)$
- For a message of 10: Alice applies $Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$. The resultant state would be $|\psi_{10}\rangle = \frac{1}{\sqrt{2}}(|0_A 0_B\rangle - |1_A 1_B\rangle)$
- For a message of 11: Alice applies $XZ = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. The resultant state would be $|\psi_{11}\rangle = \frac{1}{\sqrt{2}}(-|1_A 0_B\rangle + |0_A 1_B\rangle)$
The key to superdense coding is that these four states, $|\psi_{00}\rangle, |\psi_{01}\rangle, |\psi_{10}\rangle, |\psi_{11}\rangle$ (otherwise known as the Bell states), are orthonormal and are hence distinguishable by a quantum measurement.
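This orthonormality is easy to confirm numerically; the following check is illustrative and not part of the original notebook:

```python
import numpy as np

s = 1 / np.sqrt(2)
# Rows are |psi_00>, |psi_01>, |psi_10>, |psi_11> in the basis |00>,|01>,|10>,|11>
bell = np.array([
    [s, 0, 0,  s],   # (|00> + |11>)/sqrt(2)
    [0, s, s,  0],   # (|10> + |01>)/sqrt(2)
    [s, 0, 0, -s],   # (|00> - |11>)/sqrt(2)
    [0, s, -s, 0],   # (|01> - |10>)/sqrt(2)
])
gram = bell @ bell.T   # matrix of pairwise inner products
assert np.allclose(gram, np.eye(4))
```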
End of explanation
# For 00, do nothing
# For 01, apply $X$
#superdense.x(sdq[0])
# For 10, apply $Z$
#superdense.z(sdq[0])
# For 11, apply $XZ$
superdense.z(sdq[0])
superdense.x(sdq[0])
superdense.barrier()
Explanation: Alice now needs to decide what two-bit message she wants to transmit to Bob ($00$, $01$, $10$, or $11$), and perform the corresponding transformation ($I$, $X$, $Z$ or $XZ$, respectively) to her qubit $q_A$ ($q_0$). In this case, she encodes $11$:
End of explanation
superdense.cx(sdq[0], sdq[1])
superdense.h(sdq[0])
superdense.measure(sdq[0], sdc[0])
superdense.measure(sdq[1], sdc[1])
circuit_drawer(superdense,style=qx_color_scheme())
Explanation: Bob now needs to 'decode' the message that Alice sent him. Since measurement in Qiskit is only possible in the standard computational basis, he does this by:
1. Applying a $CNOT$ from $q_A$ to $q_B$
2. Then a $H$ on $q_A$
3. And measuring $q_A$ and $q_B$
Recalling that $q_A = q_0$ and $q_B = q_1$, this looks like:
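As an illustrative NumPy check (abstracting away Qiskit's bit-ordering conventions), applying a CNOT and then $H$ on the first qubit sends each Bell state to a distinct computational basis state:

```python
import numpy as np

s = 1 / np.sqrt(2)
H = np.array([[s, s], [s, -s]])
CNOT = np.array([[1, 0, 0, 0],   # control = first qubit, target = second
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
decode = np.kron(H, np.eye(2)) @ CNOT   # CNOT first, then H on the first qubit

bell = {                     # Alice's encodings from the list above
    '00': np.array([s, 0, 0,  s]),
    '01': np.array([0, s, s,  0]),
    '10': np.array([s, 0, 0, -s]),
    '11': np.array([0, s, -s, 0]),
}
for msg, state in bell.items():
    out = decode @ state
    # the output amplitude is concentrated (up to sign) on basis state |msg>
    assert np.isclose(abs(out[int(msg, 2)]), 1.0), msg
```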
End of explanation
backend = Aer.get_backend('qasm_simulator') # run on local simulator by default
# Uncomment the following lines to run on a real device
# IBMQ.load_accounts()
# from qiskit.backends.ibmq import least_busy
# backend = least_busy(IBMQ.backends(operational=True, simulator=False))
# print("the best backend is " + backend.name())
superdense_job = execute(superdense, backend)
superdense_result = superdense_job.result()
plot_histogram(superdense_result.get_counts(superdense))
Explanation: Let's now create, execute the quantum circuits, and plot the results:
End of explanation |
10,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ticdat to build modular engines
The goal of the ticdat package is to facilitate solve engines that are modular and robust. For example, the multicommodity netflow.py engine can read and write from a variety of file types when run from the command line. It can also be run from a Python script that contains embedded static data, or from a script that reads and writes from a system-of-record data source such as an ERP system.
With regards to the latter, we should note that Python is one of the most popular "glue" languages. The market has recognized that Python scripts are easy to write, manage data with intuitive programming syntax, and can be connected to nearly any data source.
The ticdat package can easily be used in any Python glue script. One way to do this is to exploit ticdat's ability to recognize data tables as list-of-lists. The inner lists contain data values in the field order defined by the TicDatFactory (i.e. netflow.input_schema).
For example, suppose the netflow engine needs to connect to an Oracle database for a daily automated solve. The integration engineer can use the cx_Oracle package (or something equivalent) to turn system data into a list-of-lists for each input table. These data structures can then be used to create a TicDat object that can be passed as input data to netflow.solve. The solution TicDat object returned by netflow.solve can then be converted back into a list-of-lists representation of each solution report table. (The list-of-lists strategy is just one approach. It might make sense to convert system-of-record data into pandas.DataFrame objects, and then use these DataFrames to build the TicDat object.)
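A hypothetical sketch of that glue is below — the query, column names, and connection details are invented for illustration, and the database call is stubbed out so the list-of-lists conversion itself is runnable:

```python
# In production the rows would come straight from the database, e.g. with
# cx_Oracle (connection details omitted):
#     import cx_Oracle
#     connection = cx_Oracle.connect(user, password, dsn)
#     rows = connection.cursor().execute(
#         "SELECT name, volume FROM commodities").fetchall()
# Here we stub the fetch so the conversion itself runs anywhere.
rows = [("Pencils", 0.5), ("Pens", 0.2125)]

# ticdat accepts each table as a list-of-lists in the schema's field order.
commodities = [list(row) for row in rows]
assert commodities == [["Pencils", 0.5], ["Pens", 0.2125]]
```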
We demonstrate this approach without explicit references to cx_Oracle. By demonstrating that ticdat is compatible with list-of-list/DataFrame table representations we thus show that ticdat is compatible with any data source that can be connected to Python, and also with human readable static data.
Step1: An integration engineer might prefer to copy system-of-records data into pandas.DataFrame objects. Note that pandas is itself capable of reading directly from various SQL databases, although it usually needs a supporting package like cx_Oracle.
Step2: Next we create a TicDat input data object from the list-of-lists/DataFrame representations.
Step3: We now create a TicDat solution data object by calling solve.
Step4: We now create a list of list representation of the solution data object.
Step5: Here we demonstrate that sln_lists is a dictionary mapping table name to list-of-lists of solution report data.
Step6: Here we demonstrate how DataFrame objects can be generated from the solution data object.
Step7: Using ticdat to build robust engines
The preceding section demonstrated how we can use ticdat to build modular engines. We now demonstrate how we can use ticdat to build engines that check solve pre-conditions, and are thus robust with respect to data integrity problems.
First, let's violate our (somewhat artificial) rule that the commodity volume must be positive.
Step8: The input_schema can not only flag this problem, but give us a useful data structure to examine.
Step9: Next, let's add a Cost record for a non-existent commodity and see how input_schema flags this problem.
commodities = [['Pencils', 0.5], ['Pens', 0.2125]]
# a one column table can just be a simple list
nodes = ['Boston', 'Denver', 'Detroit', 'New York', 'Seattle']
cost = [['Pencils', 'Denver', 'Boston', 10.0],
['Pencils', 'Denver', 'New York', 10.0],
['Pencils', 'Denver', 'Seattle', 7.5],
['Pencils', 'Detroit', 'Boston', 2.5],
['Pencils', 'Detroit', 'New York', 5.0],
['Pencils', 'Detroit', 'Seattle', 15.0],
['Pens', 'Denver', 'Boston', 15.0],
['Pens', 'Denver', 'New York', 17.5],
['Pens', 'Denver', 'Seattle', 7.5],
['Pens', 'Detroit', 'Boston', 5.0],
['Pens', 'Detroit', 'New York', 5.0],
['Pens', 'Detroit', 'Seattle', 20.0]]
inflow = [['Pencils', 'Boston', -200],
['Pencils', 'Denver', 240],
['Pencils', 'Detroit', 200],
['Pencils', 'New York', -200],
['Pencils', 'Seattle', -40],
['Pens', 'Boston', -160],
['Pens', 'Denver', 160],
['Pens', 'Detroit', 240],
['Pens', 'New York', -120],
['Pens', 'Seattle', -120]]
Explanation: Using ticdat to build modular engines
The goal of the ticdat package is to facilitate solve engines that are modular and robust. For example, the multicommodity netflow.py engine can read and write from a variety of file types when run from the command line. It can also be run from a Python script that contains embedded static data, or from a script that reads and writes from a system-of-record data source such as an ERP system.
With regards to the latter, we should note that Python is one of the most popular "glue" languages. The market has recognized that Python scripts are easy to write, manage data with intuitive programming syntax, and can be connected to nearly any data source.
The ticdat package can easily be used in any Python glue script. One way to do this is to exploit ticdat's ability to recognize data tables as list-of-lists. The inner lists contain data values in the field order defined by the TicDatFactory (i.e. netflow.input_schema).
For example, suppose the netflow engine needs to connect to an Oracle database for a daily automated solve. The integration engineer can use the cx_Oracle package (or something equivalent) to turn system data into a list-of-lists for each input table. These data structures can then be used to create a TicDat object that can be passed as input data to netflow.solve. The solution TicDat object returned by netflow.solve can then be converted back into a list-of-lists representation of each solution report table. (The list-of-lists strategy is just one approach. It might make sense to convert system-of-record data into pandas.DataFrame objects, and then use these DataFrames to build the TicDat object.)
We demonstrate this approach without explicit references to cx_Oracle. By demonstrating that ticdat is compatible with list-of-list/DataFrame table representations we thus show that ticdat is compatible with any data source that can be connected to Python, and also with human readable static data.
End of explanation
from pandas import DataFrame
arcs = DataFrame({"Source": ["Denver", "Denver", "Denver", "Detroit", "Detroit", "Detroit",],
"Destination": ["Boston", "New York", "Seattle", "Boston", "New York",
"Seattle"],
"Capacity": [120, 120, 120, 100, 80, 120]}).set_index(["Source", "Destination"])
arcs
Explanation: An integration engineer might prefer to copy system-of-records data into pandas.DataFrame objects. Note that pandas is itself capable of reading directly from various SQL databases, although it usually needs a supporting package like cx_Oracle.
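As an illustrative sketch of that route (using an in-memory SQLite database as a stand-in for the real system of record; the table and column names are invented):

```python
import sqlite3
import pandas as pd

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE arcs (source TEXT, destination TEXT, capacity REAL)")
con.execute("INSERT INTO arcs VALUES ('Denver', 'Boston', 120)")
con.commit()

# pandas reads the table straight off the connection
arcs_df = pd.read_sql("SELECT * FROM arcs", con).set_index(["source", "destination"])
assert arcs_df.loc[("Denver", "Boston"), "capacity"] == 120
```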
End of explanation
from netflow import input_schema, solve, solution_schema
dat = input_schema.TicDat(commodities=commodities, nodes=nodes, cost=cost, arcs=arcs,
inflow=inflow)
Explanation: Next we create a TicDat input data object from the list-of-lists/DataFrame representations.
End of explanation
sln = solve(dat)
Explanation: We now create a TicDat solution data object by calling solve.
End of explanation
from ticdat.jsontd import make_json_dict
sln_lists = make_json_dict(solution_schema, sln)
Explanation: We now create a list of list representation of the solution data object.
End of explanation
import pprint
for sln_table_name, sln_table_data in sln_lists.items():
print "\n\n**\nSolution Table %s\n**"%sln_table_name
pprint.pprint(sln_table_data)
Explanation: Here we demonstrate that sln_lists is a dictionary mapping table name to list-of-lists of solution report data.
End of explanation
sln_pandas = solution_schema.copy_to_pandas(sln)
sln_pandas.flow
Explanation: Here we demonstrate how DataFrame objects can be generated from the solution data object.
End of explanation
dat.commodities["Pens"] = 0
Explanation: Using ticdat to build robust engines
The preceding section demonstrated how we can use ticdat to build modular engines. We now demonstrate how we can use ticdat to build engines that check solve pre-conditions, and are thus robust with respect to data integrity problems.
First, let's violate our (somewhat artificial) rule that the commodity volume must be positive.
End of explanation
input_schema.find_data_type_failures(dat)
Explanation: The input_schema can not only flag this problem, but give us a useful data structure to examine.
End of explanation
dat.cost['Crayons', 'Detroit', 'Seattle'] = 10
input_schema.find_foreign_key_failures(dat, verbosity="Low")
Explanation: Next, let's add a Cost record for a non-existent commodity and see how input_schema flags this problem.
End of explanation |
10,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generating a basic map image in Earth Engine
Install ee-python
Follow the installation directions found here
Step1: Visualize Geographic Data
Step3: Try it with mapclient
This code will run but then nothing happens. I have no idea why!
Step4: Testing Out Jill's Method for Displaying Maps | Python Code:
# Import the Earth Engine Python Package into Python environment.
import ee
import ee.mapclient
# Initialize the Earth Engine object, using the authentication credentials.
ee.Initialize()
Explanation: Generating a basic map image in Earth Engine
Install ee-python
Follow the installation directions found here:
https://github.com/catherinekuhn/CloudtoStreet/blob/master/Python%20API%20directions.ipynb
Check your environment
Make sure that you are in the correct environment. To check your current environment, type the following. The environment you are in will have a star next to it.
conda info --envs
If you are not in the ee-python environment, you can switch into it using
source activate ee-python
Import & Authentication
End of explanation
image = ee.Image('srtm90_v4')
from IPython.display import Image
Image(url=image.getThumbUrl({'min':0, 'max': 3000}))
# Print the information for an image asset. The 'srtm90_v4' file is a digital elevation model
# that is housed in Google's cloud and has an elevation value for every pixel across the whole earth
# at a resolution of 90 meters. That is the map you see below in the static notebook.
print(image.getInfo())
#celebrate the metadata!!
Irene = ee.Image("users/kuhniculous/floodwithnoletters")
from IPython.display import display,Image
test=ee.Image(Irene)
display(Image(url=test.select(['b1']).getThumbUrl({'gamma':2})))
Lparams = {
'min':0.0134,
'max':0.0338,
'palette':'000000,0000ff,00ffff,00ff00,ffff00,ffa500,ff0000',
};
display(Image(url=test.select(["b1"]).getThumbUrl(Lparams)))
Irene = ee.Image("users/kuhniculous/popImage")
from IPython.display import display,Image
test=ee.Image(Irene)
display(Image(url=test.select(['b1']).getThumbUrl({'gamma':2})))
Lparams = {
'min':7,
'max':7.5,
'palette':'000000,ff0000',
};
display(Image(url=test.select(["b1"]).getThumbUrl(Lparams)))
Explanation: Visualize Geographic Data
End of explanation
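The min/max/palette arguments used above amount to a linear stretch of pixel values into a color ramp. A rough pure-Python sketch of that idea (illustrative only; not how Earth Engine actually renders):

```python
# Rough sketch of a linear min/max stretch into a hex palette, mimicking
# the min/max/palette visualization parameters (not Earth Engine's code).
def stretch_to_palette(value, vmin, vmax, palette):
    """Clamp value into [vmin, vmax] and pick the nearest palette color."""
    colors = palette.split(',')
    t = (value - vmin) / (vmax - vmin)   # position in the range
    t = min(max(t, 0.0), 1.0)            # clamp to 0.0 .. 1.0
    index = round(t * (len(colors) - 1))
    return colors[index]

palette = '000000,0000ff,00ffff,00ff00,ffff00,ffa500,ff0000'
print(stretch_to_palette(0, 0, 3000, palette))     # 000000 (lowest)
print(stretch_to_palette(3000, 0, 3000, palette))  # ff0000 (highest)
```

Values at or below min map to the first color, values at or above max to the last.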
# Select rows from a fusion table.
import ee
import ee.mapclient
ee.Initialize()
ee.mapclient.centerMap(-93, 40, 4)
# Select the 'Sonoran desert' feature from the TNC Ecoregions fusion table.
fc = (ee.FeatureCollection('ft:1Ec8IWsP8asxN-ywSqgXWMuBaxI6pPaeh6hC64lA')
.filter(ee.Filter().eq('ECO_NAME', 'Sonoran desert')))
# Paint it into a blank image.
image1 = ee.Image(0).mask(0)
ee.mapclient.addToMap(image1.paint(fc, 0, 5))
Explanation: Try it with mapclient
This code will run but then nothing happens, most likely because ee.mapclient opens a desktop (Tkinter) map window, which cannot render inside a notebook.
End of explanation
%matplotlib inline
from __future__ import print_function # For py 2.7 compat
import datetime
from IPython.html import widgets
from IPython.display import display
from IPython.utils import traitlets
from IPython.core.display import Javascript
%run 'define_google_maps_interactive_widget.ipynb'
Irene = ee.Image("users/kuhniculous/popImage")
map = GoogleMapsWidget(lat=59.5, lng=10.9, zoom=13) # lat, lng and zoom are optional
display(map)
map.addLayer(Irene, {'color': 'FFFFCC'}, name='Irene Map')
Explanation: Testing Out Jill's Method for Displaying Maps
End of explanation |
10,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function h2stats
Synopsis
The h2stats function computes several statistics given an image histogram.
g = h2stats(h)
Output
g
Step1: Examples
Step2: Numeric Example
Step3: Image Example | Python Code:
def h2stats(h):
import numpy as np
import ia898.src as ia
hn = 1.0*h/h.sum() # compute the normalized image histogram
v = np.zeros(11) # number of statistics
# compute statistics
n = len(h) # number of gray values
v[0] = np.sum((np.arange(n)*hn)) # mean
v[1] = np.sum(np.power((np.arange(n)-v[0]),2)*hn) # variance
v[2] = np.sum(np.power((np.arange(n)-v[0]),3)*hn)/(np.power(v[1],1.5))# skewness
v[3] = np.sum(np.power((np.arange(n)-v[0]),4)*hn)/(np.power(v[1],2))-3# kurtosis
v[4] = -(hn[hn>0]*np.log(hn[hn>0])).sum() # entropy
v[5] = np.argmax(h) # mode
v[6:] = ia.h2percentile(h,np.array([1,10,50,90,99])) # 1,10,50,90,99% percentile
return v
Explanation: Function h2stats
Synopsis
The h2stats function computes several statistics given an image histogram.
g = h2stats(h)
Output
g: unidimensional array. Array containing the statistics from
the histogram
Input
h: 1-D ndarray: histogram
Description
The h2stats function extracts some relevant statistics of the images where
the histogram was computed:
[0] Mean (mean grayscale value)
[1] Variance (variance of grayscale values)
[2] Skewness
[3] Kurtosis
[4] entropy
[5] mode (gray scale value with largest occurrence)
[6] Percentile 1%
[7] Percentile 10%
[8] Percentile 50% (This is the median gray scale value)
[9] Percentile 90%
[10] Percentile 99%
Function Code
End of explanation
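As a sanity check on the formulas above without the ia898 package, here is a self-contained numpy version of the first few statistics, with the median read off the cumulative histogram (a sketch, not the library code):

```python
import numpy as np

# Standalone check of the histogram statistics above (mean, variance,
# mode, median) using only numpy; a sketch, not the ia898 implementation.
def h2stats_basic(h):
    hn = h / h.sum()                           # normalized histogram
    levels = np.arange(len(h))
    mean = (levels * hn).sum()
    var = (((levels - mean) ** 2) * hn).sum()
    mode = int(np.argmax(h))
    # median: first gray level whose cumulative mass reaches 50%
    median = int(np.searchsorted(np.cumsum(hn), 0.5))
    return mean, var, mode, median

h = np.array([1, 4, 3, 0, 1])                  # histogram of [0,1,1,1,1,2,2,2,4]
mean, var, mode, median = h2stats_basic(h)
print(mean, mode, median)
```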
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python h2stats.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Examples
End of explanation
if testing:
f = np.array([1,1,1,0,1,2,2,2,1])
h = ia.histogram(f)
print('statistics =', ia.h2stats(h))
Explanation: Numeric Example
End of explanation
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from scipy.stats import mode, kurtosis, skew, entropy
f = mpimg.imread('../data/cameraman.tif')
plt.imshow(f,cmap='gray')
h = ia.histogram(f)
v = ia.h2stats(h)
print('mean =',v[0])
print('variance =',v[1])
print('skewness =',v[2])
print('kurtosis = ',v[3])
print('entropy = ',v[4])
print('mode = ',v[5])
print('percentil 1% = ',v[6])
print('percentil 10% = ',v[7])
print('percentil 50% = ',v[8])
print('percentil 90% = ',v[9])
print('percentil 99% = ',v[10])
Explanation: Image Example
End of explanation |
10,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
```
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold: if the global norm of the gradients is larger than that threshold, we scale them all down so the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
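The top-N trick described above can be sketched in numpy as follows (a hypothetical helper mirroring the description; the notebook's actual sampling code may differ):

```python
import numpy as np

# Zero out everything but the top-n most likely characters, renormalize,
# then sample; this is the noise-reduction trick described above.
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds).copy()
    p[np.argsort(p)[:-top_n]] = 0      # keep only the n largest probabilities
    p = p / np.sum(p)                  # renormalize to a valid distribution
    return np.random.choice(vocab_size, 1, p=p)[0]

probs = np.array([0.05, 0.40, 0.30, 0.05, 0.20])
c = pick_top_n(probs, vocab_size=5, top_n=2)
assert c in (1, 2)                     # only the two most likely chars remain
```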
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
    text = f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
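A quick round-trip check of this encoding scheme on a toy string (same dictionary construction as above, nothing else assumed):

```python
# Round-trip check of the char <-> int mapping on a toy string.
toy = "hello"
toy_vocab = sorted(set(toy))
c2i = {c: i for i, c in enumerate(toy_vocab)}
i2c = dict(enumerate(toy_vocab))

ints = [c2i[c] for c in toy]
decoded = ''.join(i2c[i] for i in ints)
assert decoded == toy                  # the encoding is lossless
print(ints)                            # [1, 0, 2, 2, 3]
```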
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# print("x: ", x)
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
# print("y:", y)
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
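The shift-and-wraparound property described above can be verified on a toy array with a reduced copy of the generator (same logic, hypothetical toy sizes):

```python
import numpy as np

# Reduced copy of get_batches to check the property that targets are the
# inputs shifted left by one step, with the first input wrapped to the end.
def toy_batches(arr, n_seqs, n_steps):
    arr = arr[:(len(arr) // (n_seqs * n_steps)) * n_seqs * n_steps]
    arr = arr.reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        x = arr[:, n:n + n_steps]
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y

x, y = next(toy_batches(np.arange(24), n_seqs=2, n_steps=4))
assert x.tolist() == [[0, 1, 2, 3], [12, 13, 14, 15]]
assert y.tolist() == [[1, 2, 3, 0], [13, 14, 15, 12]]   # shifted by one
```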
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
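The TF cell above hides the gate arithmetic. For intuition, here is a single LSTM step in plain numpy using the textbook gate equations (random weights, shapes only; not BasicLSTMCell's exact internals):

```python
import numpy as np

# One LSTM step with the textbook gate equations, for intuition only
# (random weights; not tf.contrib.rnn.BasicLSTMCell's exact internals).
def lstm_step(x, h, c, W, b):
    z = np.concatenate([x, h]) @ W + b         # all four gates in one matmul
    i, f, o, g = np.split(z, 4)                # input, forget, output, candidate
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # new cell state
    h_new = sigmoid(o) * np.tanh(c_new)                # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(size=(n_in + n_hid, 4 * n_hid))
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
print(h.shape)  # (5,)
```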
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a full connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
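The reshape-plus-shared-weights trick can be checked in numpy with toy sizes (illustrative shapes only):

```python
import numpy as np

# Check the reshape-then-shared-weights trick: (N, M, L) outputs become
# (N*M, L) rows, and one weight matrix maps them to (N*M, C) logits.
N, M, L, C = 2, 3, 4, 5                    # toy sizes
outputs = np.random.rand(N, M, L)          # fake LSTM outputs
softmax_w = np.random.rand(L, C)
softmax_b = np.zeros(C)

x = outputs.reshape(-1, L)                 # one row per (sequence, step)
logits = x @ softmax_w + softmax_b
assert logits.shape == (N * M, C)
# Row k corresponds to sequence k // M, step k % M:
assert np.allclose(logits[4], outputs[1, 1] @ softmax_w + softmax_b)
```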
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
    # One-hot encode targets and reshape to match logits, one row per sequence per step:
    # (batch, steps) int targets -> (batch, steps, classes) one-hot -> (batch*steps, classes)
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    # Softmax cross entropy loss, averaged over all steps to get a scalar
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
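The one-hot-then-cross-entropy step can be mirrored in numpy on toy logits (a sketch of the math, not TF code):

```python
import numpy as np

# Mirror of tf.nn.softmax_cross_entropy_with_logits on toy data:
# one-hot the integer targets, then average -sum(y * log_softmax(logits)).
def softmax_xent(logits, targets, num_classes):
    y = np.eye(num_classes)[targets]                  # one-hot, (MN, C)
    z = logits - logits.max(axis=1, keepdims=True)    # stabilized logits
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -(y * log_softmax).sum(axis=1).mean()

logits = np.array([[2.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]])
loss = softmax_xent(logits, np.array([0, 1]), num_classes=3)
print(loss)
```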
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold: if the global norm of the gradients is larger than that threshold, we scale them all down so the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
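Global-norm clipping, the scheme tf.clip_by_global_norm implements, can be sketched in numpy: if the combined norm of all gradients exceeds the threshold, scale every gradient down by the same factor (a sketch of the semantics, not TF's code):

```python
import numpy as np

# Numpy sketch of global-norm gradient clipping: if the combined norm of
# all gradients exceeds clip_norm, scale them all down uniformly
# (mirrors the semantics of tf.clip_by_global_norm).
def clip_by_global_norm(grads, clip_norm):
    global_norm = np.sqrt(sum((g ** 2).sum() for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)   # 1.0 if already small
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]      # global norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
assert norm == 13.0
assert abs(np.sqrt(sum((g ** 2).sum() for g in clipped)) - 5.0) < 1e-9
```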
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training and validation losses are about equal, then your model is underfitting. Increase the size of your model (either the number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2 or 3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename; low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
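A rough way to estimate the parameter count Karpathy mentions, assuming standard 4-gate LSTM cells and ignoring the input/output layers (the exact number a framework prints will differ a little):

```python
def approx_lstm_params(input_size, lstm_size, num_layers):
    """Approximate trainable parameters in a stack of LSTM layers.

    Each gate has a weight matrix over [input, hidden] plus a bias,
    and an LSTM cell has 4 gates.
    """
    total = 0
    in_dim = input_size
    for _ in range(num_layers):
        total += 4 * ((in_dim + lstm_size) * lstm_size + lstm_size)
        in_dim = lstm_size  # deeper layers consume the previous hidden state
    return total

small = approx_lstm_params(input_size=10, lstm_size=4, num_layers=2)
# e.g. an 83-character vocabulary with the settings above:
big = approx_lstm_params(input_size=83, lstm_size=512, num_layers=2)
```

Comparing this number against the dataset size (roughly one parameter per character, as suggested above) gives a quick feel for whether the model will over- or underfit.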
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
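The checkpoint names referenced above come from a plain format string, so a saved run can be located by iteration count and layer size:

```python
# Reconstruct the name that saver.save() produced for a given step.
counter, lstm_size = 200, 512
checkpoint_name = "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size)
```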
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
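To see numerically what pick_top_n does, here is a standalone sketch of the top-N filter (no TensorFlow or NumPy required):

```python
def top_n_filter(preds, top_n):
    """Zero out all but the top_n probabilities, then renormalize."""
    order = sorted(range(len(preds)), key=lambda i: preds[i], reverse=True)
    keep = set(order[:top_n])
    filtered = [p if i in keep else 0.0 for i, p in enumerate(preds)]
    total = sum(filtered)
    return [p / total for p in filtered]

preds = [0.5, 0.2, 0.15, 0.1, 0.05]
p = top_n_filter(preds, top_n=2)  # only the two most likely characters survive
```

After filtering, the two surviving probabilities are rescaled so they still sum to 1, which is what lets np.random.choice sample from them.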
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
10,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define the Input and Output placeholder variables
The Input is a 28x28 bitmap of pixels
The Output is an array of 10 labels, representing the predicted values for the digits 0-9
Step1: For the cost function, please refer to
Step2: Now we run the actual computation | Python Code:
x = tf.placeholder(tf.float32,shape=[None,28*28])
y = tf.placeholder(tf.float32,shape=[None,10])
# Create model
# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
xw = tf.matmul(x, W)
r = xw + b
a = tf.nn.softmax(r)
Explanation: Define the Input and Output placeholder variables
The Input is a 28x28 bitmap of pixels
The Output is an array of 10 labels, representing the predicted values for the digits 0-9
End of explanation
cost = -tf.reduce_sum(y*tf.log(a))
op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)
Explanation: For the cost function, please refer to: http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
End of explanation
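The cost above is the cross-entropy between the one-hot label y and the softmax output a; a scalar pure-Python sketch of the same computation, for illustration only:

```python
import math

def softmax(logits):
    # No max-subtraction here; fine for small logits, unstable for large ones.
    exps = [math.exp(v) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(y, a):
    """-sum(y * log(a)) for a one-hot label vector y."""
    return -sum(yi * math.log(ai) for yi, ai in zip(y, a))

a = softmax([0.0, 0.0])          # uniform output: [0.5, 0.5]
loss = cross_entropy([1, 0], a)  # -log(0.5) = ln 2
```

Minimizing this loss pushes the softmax probability of the correct class toward 1.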
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
epochs = 100
batch_size = 200
for _ in range(epochs):
avg_cost = 0
input_x , output_y = mnist.train.next_batch(batch_size)
sess.run(op,feed_dict={x:input_x,
y:output_y })
avg_cost += sess.run(cost,feed_dict={x:input_x,
y:output_y })
    print("avg_cost:", avg_cost/batch_size)
predict = tf.argmax(a, 1)
# sess.run(predict,feed_dict={x:mnist.test.images})
ans = tf.argmax(y,1)
# sess.run(ans, feed_dict= {y:mnist.test.labels})
precision = sess.run(tf.reduce_mean(tf.cast(tf.equal(predict,ans),"float")),feed_dict= {x:mnist.test.images,y:mnist.test.labels} )
print(precision)
import random
for img in list(map(lambda _: random.choice(mnist.train.images), range(5))): #mnist.train.images[50:55]:
tmp = img
tmp2 = tmp.reshape((28,28))
plt.imshow(tmp2, cmap = cm.Greys)
plt.show()
    print(sess.run(predict,feed_dict={x:[tmp]})[0])
Explanation: Now we run the actual computation
End of explanation |
10,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two stage neural network implementation for MNIST digits classifier using Start
Overview
This is a step by step implementation of a multilayer neural network for MNIST digit classification using Start. Input images in MNIST database are 28x28 pixels. Images are black and white so one bit is required to represents each pixel. This neural network classifies input image to one of the possible digits (0-9).
Importing packages
Start and mnist packages are in the src directory. "Start install path"/start/src directory needs to be in $PYTHONPATH for these packages to load.
* mnist package is used for loading mnist data and other related functions.
* Start package that has the components to build a neural network.
Step1: Loading MNIST data
load_mnist function returns training data, validation data and test data as three numpy arrays.
Shape of these arrays is Number of samples * 795.
Step2: Neural Network architecture
The MNIST digit classifier net in this example has the following architecture.
Creating the net object
A neural net object is created layer by layer. The first step is to create a net object. Input layer is created automatically when a net is created.
Step3: Adding layers
Layers are added sequentially to the net. Last layer added has to be an output layer.
Step4: Check the network architecture
Step5: Specify L2 loss coefficient
Step6: Set weight update method
Step7: Initialize the network
Step8: Train the network
Step9: Test accuracy | Python Code:
import numpy as np
import mnist.utils.load_mnist as load_mnist
import start.neural_network as nn
import start.layer_dict as ld
import start.weight_update_params as wup
Explanation: Two stage neural network implementation for MNIST digits classifier using Start
Overview
This is a step by step implementation of a multilayer neural network for MNIST digit classification using Start. Input images in MNIST database are 28x28 pixels. Images are black and white so one bit is required to represents each pixel. This neural network classifies input image to one of the possible digits (0-9).
Importing packages
Start and mnist packages are in the src directory. "Start install path"/start/src directory needs to be in $PYTHONPATH for these packages to load.
* mnist package is used for loading mnist data and other related functions.
* Start package that has the components to build a neural network.
End of explanation
# Load the training, validation and test data
# Each data is a numpy array of shape Number of Samples * 795
# 0:783 are inputs, 784:793 are outputs, 794 is classified output
# N is chosen as the first dimension as it is easy to shuffle the training data
# during training
training_data, validation_data, test_data = load_mnist.load_mnist()
validation_x = np.transpose(validation_data[:, 0:784])
validation_y_class = np.transpose(validation_data[:, 794])
val_acc = lambda: net.classification_accuracy(validation_x, validation_y_class)
test_x = np.transpose(test_data[:, 0:784])
test_y_class = np.transpose(test_data[:, 794])
test_acc = lambda: net.classification_accuracy(test_x, test_y_class)
##
Explanation: Loading MNIST data
load_mnist function returns training data, validation data and test data as three numpy arrays.
Shape of these arrays is Number of samples * 795.
End of explanation
# Create Network - specify input layer neurons (28x28=784)
net = nn.NeuralNetwork("test_net", 784)
Explanation: Neural Network architecture
The MNIST digit classifier net in this example has the following architecture.
Creating the net object
A neural net object is created layer by layer. The first step is to create a net object. Input layer is created automatically when a net is created.
End of explanation
# Fully connected layer of 800 neurons
layer = ld.hdict["fc"](800)
net.add_layer(layer)
# Relu activation layer of 800 neurons
layer = ld.hdict["relu"](800)
net.add_layer(layer)
# Fully connected layer of 80 neurons
layer = ld.hdict["fc"](80)
net.add_layer(layer)
# Fully connected layer of 80 neurons
layer = ld.hdict["relu"](80)
net.add_layer(layer)
# Fully connected layer of 10 neurons
layer = ld.hdict["fc"](10)
net.add_layer(layer)
# Add softmax output layer
layer = ld.odict["softmax"](10)
net.add_layer(layer)
Explanation: Adding layers
Layers are added sequentially to the net. Last layer added has to be an output layer.
End of explanation
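As a sanity check on the architecture just built, the trainable parameters of the three fully connected layers can be counted by hand (weights plus biases; the relu and softmax layers add none):

```python
def dense_params(sizes):
    """Parameters of a chain of fully connected layers: weights + biases."""
    return sum(n_in * n_out + n_out for n_in, n_out in zip(sizes, sizes[1:]))

# The 784 -> 800 -> 80 -> 10 net assembled above.
total = dense_params([784, 800, 80, 10])
```

This is the kind of number check_arch() and the training printout let you verify.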
net.check_arch()
Explanation: Check the network architecture
End of explanation
# Specify l2 loss
net.set_l2_loss_coeff(.001)
Explanation: Specify L2 loss coefficient
End of explanation
# Define weight update method
params = wup.GradientDescentParams(.3)
# params = wup.MomentumParams(.3)
# params = wup.AdamParams()
net.set_weight_update_function(params)
Explanation: Set weight update method
End of explanation
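The wizard offers GradientDescentParams, MomentumParams and AdamParams; as a reference point, here is a minimal sketch of the classical momentum update those names suggest (the Start library's actual rule may differ):

```python
def momentum_step(w, grad, velocity, lr=0.3, mu=0.9):
    """One classical momentum update: accumulate velocity, then move the weight."""
    velocity = mu * velocity - lr * grad
    return w + velocity, velocity

# Two steps with a constant gradient of 0.5 and a smaller learning rate.
w, v = 1.0, 0.0
for _ in range(2):
    w, v = momentum_step(w, grad=0.5, velocity=v, lr=0.1)
```

With a constant gradient the velocity grows geometrically toward lr*grad/(1-mu), which is why momentum accelerates along consistent descent directions.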
# For repeatability of results published below
np.random.seed(1)
# Initialize the network
net.initialize_parameters()
Explanation: Initialize the network
End of explanation
# Set training related parameters
mini_batch_size = 32
epochs = 20
verbose = 0
# Train the network
for epoch in range(1, epochs+1):
print("Epoch " + str(epoch))
np.random.shuffle(training_data)
mini_batches = [training_data[k:k + mini_batch_size, :] for k in
range(0, len(training_data), mini_batch_size)]
for count, mini_batch in enumerate(mini_batches, start=1):
x = np.transpose(mini_batch[:, 0:784])
y = np.transpose(mini_batch[:, 784:794])
net.train(x, y)
if ((count%100 == 0) and verbose):
print("Count {0} validation data accuracy = {1} %.".format(count, val_acc()))
print()
print("Epoch {0} validation data accuracy = {1} %.".format(epoch, val_acc()))
print()
Explanation: Train the network
End of explanation
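The mini-batch slicing pattern used in the training loop, isolated on a small list so the chunking is easy to verify:

```python
def make_mini_batches(data, batch_size):
    """Split data into consecutive chunks; the last one may be shorter."""
    return [data[k:k + batch_size] for k in range(0, len(data), batch_size)]

batches = make_mini_batches(list(range(10)), batch_size=4)
```

The loop above does the same thing on the shuffled training array, slicing along the sample dimension.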
print("Test data accuracy = {0} %.".format(test_acc()))
print()
Explanation: Test accuracy
End of explanation |
10,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cluster analysis
Step1: We retrieve the channels found with the cluster analysis
Step2: One sample ttest FDR corrected (per electrode)
Step3: Tests from 280 to 440 ms, in 20 ms windows with 10 ms overlap
Step4: Printing and preparing all time windows
Step5: Tests on each time window
Step6: On a channel subset (ROI) - average over channels
Parietal roi
Step7: Frontal roi | Python Code:
nperm = 1000
T_obs_bin,clusters_bin,clusters_pb_bin,H0_bin = mne.stats.spatio_temporal_cluster_test(X_bin,threshold=None,n_permutations=nperm,out_type='mask')
T_obs_ste,clusters_ste,clusters_pb_ste,H0_ste = mne.stats.spatio_temporal_cluster_test(X_ste,threshold=None,n_permutations=nperm,out_type='mask')
Explanation: Cluster analysis
End of explanation
def extract_electrodes_times(clusters,clusters_pb,tmin_ind=500,tmax_ind=640,alpha=0.005,evoked = ev_bin_dev):
ch_list_temp = []
time_list_temp = []
for clust,pval in zip(clusters,clusters_pb):
if pval < alpha:
for j,curline in enumerate(clust[tmin_ind:tmax_ind]):
for k,el in enumerate(curline):
if el:
ch_list_temp.append(evoked.ch_names[k])
time_list_temp.append(evoked.times[j+tmin_ind])
return np.unique(ch_list_temp),np.unique(time_list_temp)
channels_deviance_ste,times_deviance_ste=extract_electrodes_times(clusters_ste,clusters_pb_ste)
channels_deviance_bin,times_deviance_bin=extract_electrodes_times(clusters_bin,clusters_pb_bin)
print(channels_deviance_bin),print(times_deviance_bin)
print(channels_deviance_ste),print(times_deviance_ste)
times_union = np.union1d(times_deviance_bin,times_deviance_ste)
ch_union = np.unique(np.hstack([channels_deviance_bin,channels_deviance_ste]))
print(ch_union)
#Selecting channels
epochs_bin_dev_ch = epochs_bin_dev.pick_channels(ch_union)
epochs_bin_std_ch = epochs_bin_std.pick_channels(ch_union)
epochs_ste_dev_ch = epochs_ste_dev.pick_channels(ch_union)
epochs_ste_std_ch = epochs_ste_std.pick_channels(ch_union)
X_diff = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),
epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]
X_diff_ste_bin = X_diff[1]-X_diff[0]
epochs_bin_dev_ch.plot_sensors(show_names=True)
plt.show()
roi = ['E117','E116','E108','E109','E151','E139','E141','E152','E110','E131','E143','E154','E142','E153','E140','E127','E118']
roi_frontal = ['E224','E223','E2','E4','E5','E6','E13','E14','E15','E20','E21','E27','E28','E30','E36','E40','E41']
len(roi_frontal),len(roi)
Explanation: We retrieve the channels found with the cluster analysis
End of explanation
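The channel-set union computed above with np.union1d/np.unique can be sketched in plain Python (np.union1d returns the sorted unique union):

```python
def union_sorted(a, b):
    """Sorted union of two channel-name lists, as np.union1d would return."""
    return sorted(set(a) | set(b))

ch_union_demo = union_sorted(["E117", "E116"], ["E116", "E108"])
```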
from scipy.stats import ttest_1samp
from mne.stats import bonferroni_correction,fdr_correction
def ttest_amplitude(X,times_ind,ch_names,times):
# Selecting time points and averaging over time
amps = X[:,times_ind,:].mean(axis=1)
T, pval = ttest_1samp(amps, 0)
alpha = 0.05
n_samples, n_tests= amps.shape
threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)
reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)
threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)
reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')
mask_fdr = pval_fdr < 0.05
mask_bonf = pval_bonferroni < 0.05
print('FDR from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))
for i,curi in enumerate(mask_fdr):
if curi:
print("Channel %s, T = %0.2f, p = %0.3f " % (ch_names[i], T[i],pval_fdr[i]))
    print('Bonferroni from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))
for i,curi in enumerate(mask_bonf):
if curi:
print("Channel %s, T = %0.2f, p = %0.3f " % (ch_names[i], T[i],pval_bonferroni[i]))
    return T,pval,pval_fdr,pval_bonferroni
def ttest_amplitude_roi(X,times_ind,ch_names_roi,times):
print(X.shape)
# Selecting time points and averaging over time
amps = X[:,times_ind,:].mean(axis=1)
# averaging over channels
amps = amps.mean(axis=1)
T, pval = ttest_1samp(amps, 0)
alpha = 0.05
n_samples, _, n_tests= X.shape
print('Uncorrected from %02f to %02f' % ((times[times_ind[0]]),times[times_ind[-1]]))
print("T = %0.2f, p = %0.3f " % (T,pval))
    return T, pval, None, None  # pval_fdr / pval_bonferroni are not computed for a single ROI-averaged test
Explanation: One sample ttest FDR corrected (per electrode)
End of explanation
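The fdr_correction call used here implements the Benjamini-Hochberg procedure; a compact standalone sketch of it (MNE's version also returns adjusted p-values):

```python
def fdr_bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg: reject the k smallest p-values, where k is the
    largest rank with p_(k) <= k/m * alpha (ranks are 1-based over sorted p)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject

mask = fdr_bh_reject([0.01, 0.04, 0.03, 0.50], alpha=0.05)
```

Unlike Bonferroni, the threshold grows with the rank, so BH rejects more tests while still controlling the false discovery rate.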
toi = np.arange(0.28,0.44,0.001)
toi_index = ev_bin_dev.time_as_index(toi)
wsize = 20
wstep = 10
toi
Explanation: Tests from 280 to 440 ms, in 20 ms windows with 10 ms overlap
End of explanation
all_toi_indexes = []
for i in range(14):
print(toi[10*i],toi[10*i + 20])
cur_toi_ind = range(10*i+1,(10*i+21))
all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))
print(toi[10*14],toi[10*14 + 19])
cur_toi_ind = range(10*14+1,(10*14+19))
all_toi_indexes.append(ev_bin_dev.time_as_index(toi[cur_toi_ind]))
Explanation: Printing and preparing all time windows
End of explanation
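The window bookkeeping above (20 ms windows stepped by 10 ms over 280-440 ms) can be written more generally; a sketch in integer milliseconds to avoid floating-point edge cases:

```python
def sliding_windows(start_ms, stop_ms, width_ms, step_ms):
    """All (t0, t1) windows of width_ms between start_ms and stop_ms, stepped by step_ms."""
    return [(t0, t0 + width_ms)
            for t0 in range(start_ms, stop_ms - width_ms + 1, step_ms)]

windows = sliding_windows(280, 440, width_ms=20, step_ms=10)
```

Each window would then be converted to sample indices with time_as_index, as done above (the notebook's final window is one sample shorter to stay inside the array).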
for cur_timewindow in all_toi_indexes:
T,pval,pval_fdr,pval_bonferroni = ttest_amplitude(X_diff_ste_bin,cur_timewindow,epochs_bin_dev_ch.ch_names,times=epochs_bin_dev_ch.times)
Explanation: Tests on each time window
End of explanation
#Selecting channels
epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)
epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)
epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)
epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)
mne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std])
epochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi)
epochs_bin_std_ch = epochs_bin_std.pick_channels(roi)
epochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi)
epochs_ste_std_ch = epochs_ste_std.pick_channels(roi)
X_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),
epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]
X_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0]
for cur_timewindow in all_toi_indexes:
T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times)
grav_bin_dev = epochs_bin_dev_ch.average()
grav_bin_std = epochs_bin_std_ch.average()
grav_ste_dev = epochs_ste_dev_ch.average()
grav_ste_std = epochs_ste_std_ch.average()
evoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std],
weights='equal')
evoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std],
weights='equal')
mne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
plt.show()
mne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
plt.show()
Explanation: On a channel subset (ROI) - average over channels
Parietal roi
End of explanation
#Selecting channels
epochs_bin_dev = _matstruc2mne_epochs(mat_bin_dev).crop(tmax=tcrop)
epochs_bin_std = _matstruc2mne_epochs(mat_bin_std).crop(tmax=tcrop)
epochs_ste_dev = _matstruc2mne_epochs(mat_ste_dev).crop(tmax=tcrop)
epochs_ste_std = _matstruc2mne_epochs(mat_ste_std).crop(tmax=tcrop)
mne.equalize_channels([epochs_bin_dev,epochs_bin_std,epochs_ste_dev,epochs_ste_std])
epochs_bin_dev_ch = epochs_bin_dev.pick_channels(roi_frontal)
epochs_bin_std_ch = epochs_bin_std.pick_channels(roi_frontal)
epochs_ste_dev_ch = epochs_ste_dev.pick_channels(roi_frontal)
epochs_ste_std_ch = epochs_ste_std.pick_channels(roi_frontal)
X_diff_roi = [epochs_bin_dev_ch.get_data().transpose(0, 2, 1) - epochs_bin_std_ch.get_data().transpose(0, 2, 1),
epochs_ste_dev_ch.get_data().transpose(0, 2, 1) - epochs_ste_std_ch.get_data().transpose(0, 2, 1)]
X_diff_ste_bin_roi = X_diff_roi[1]-X_diff_roi[0]
for cur_timewindow in all_toi_indexes:
T,pval,pval_fdr,pval_bonferroni = ttest_amplitude_roi(X_diff_ste_bin_roi,cur_timewindow,roi,times=epochs_bin_dev_ch.times)
grav_bin_dev = epochs_bin_dev_ch.average()
grav_bin_std = epochs_bin_std_ch.average()
grav_ste_dev = epochs_ste_dev_ch.average()
grav_ste_std = epochs_ste_std_ch.average()
evoked_bin = mne.combine_evoked([grav_bin_dev, -grav_bin_std],
weights='equal')
evoked_ste = mne.combine_evoked([grav_ste_dev, -grav_ste_std],
weights='equal')
mne.viz.plot_compare_evokeds([grav_bin_std,grav_bin_dev,grav_ste_std,grav_ste_dev],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
plt.show()
mne.viz.plot_compare_evokeds([evoked_bin,evoked_ste],picks=[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
plt.show()
mne.viz.plot_compare_evokeds?
from scipy import stats
from mne.stats import bonferroni_correction,fdr_correction
T, pval = ttest_1samp(X_diff_ste_bin, 0)
alpha = 0.05
n_samples, n_tests,_ = X_diff_ste_bin.shape
threshold_uncorrected = stats.t.ppf(1.0 - alpha, n_samples - 1)
reject_bonferroni, pval_bonferroni = bonferroni_correction(pval, alpha=alpha)
threshold_bonferroni = stats.t.ppf(1.0 - alpha / n_tests, n_samples - 1)
reject_fdr, pval_fdr = fdr_correction(pval, alpha=alpha, method='indep')
#threshold_fdr = np.min(np.abs(T)[reject_fdr])
masking_mat = pval<0.05
Tbis = np.zeros_like(T)
Tbis[masking_mat] = T[masking_mat]
plt.matshow(Tbis.T,cmap=plt.cm.RdBu_r)
plt.colorbar()
plt.show()
plt.matshow(-np.log10(pval).T)
plt.colorbar()
Explanation: Frontal roi
End of explanation |
10,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Density of States Analysis Example
Given sample and empty-can data, compute phonon DOS
To use this notebook, first click jupyter menu File->Make a copy
Click the title of the copied jupyter notebook and change it to a new title
Start executing cells
Summary of processing steps
Gather experimental information and experimental raw data
Reduce raw data to S(Q,E)
Convert S(Q,E) to DOS
Preparation
Create a new working directory and change into it.
Please modify the following path to suit your need!
Step1: Get tools ready
Step2: Create a context for getdos
Step3: If you want to reuse a previously-saved context, please uncomment the following cell and execute
Step4: Experimental data and condition
Phonon Density of States (DOS) can be obtained from inelastic neutron scattering (INS) spectrum.
This notebook allows for extracting DOS from INS spectrum measured at the ARCS instrument at SNS.
To start, we need data files measured for the sample and the empty can, as well as experimental conditions such as incident energy and sample temperature.
The following wizard helps you go through these steps.
<img src="select_raw_data.png" width="500"/>
Example datasets
Step5: Save configuration so you can reuse it
Step6: Obtain S(Q,E)
S(Q,E) spectra for both the sample and the empty can are the starting point for getdos processing. Here is an example
Step7: Parameters are saved in the work dir. Uncomment the script below to see.
Step8: Plot sample IQE
Step9: You can improve the Q,E grid parameters if you like, by re-executing the above cell of
QEGridWizardStart(context).show()
Plot I(E)
Step10: The plots above provide clues to selecting parameters for the getdos procedure
Save configuration so you can reuse it
Step11: Run GetDOS
DOS will be obtained from SQE by an iterative procedure where multiphonon and multiple scattering corrections are applied to the measured SQE spectrum, assuming
incoherent approximation, and the corrected spectrum
is then converted to DOS.
An example DOS plot
Step12: Save context
Step13: Print context
Step14: Check output
Results are saved in "work" directory
Step15: Plot the final result for DOS
Step16: More plotting utils are available | Python Code:
workdir = '/SNS/users/lj7/reduction/ARCS/getdos-demo-test'
!mkdir -p {workdir}
%cd {workdir}
Explanation: Density of States Analysis Example
Given sample and empty-can data, compute phonon DOS
To use this notebook, first click jupyter menu File->Make a copy
Click the title of the copied jupyter notebook and change it to a new title
Start executing cells
Summary of processing steps
Gather experimental information and experimental raw data
Reduce raw data to S(Q,E)
Convert S(Q,E) to DOS
Preparation
Create a new working directory and change into it.
Please modify the following path to suit your need!
End of explanation
import os, numpy as np
import histogram.hdf as hh, histogram as H
from matplotlib import pyplot as plt
%matplotlib notebook
# %matplotlib inline
from multiphonon.sqe import plot as plot_sqe
from multiphonon.ui.getdos import Context, NxsWizardStart, QEGridWizardStart, GetDOSWizStart
Explanation: Get tools ready
End of explanation
context=Context()
Explanation: Create a context for getdos
End of explanation
# context.from_yaml('./getdos2-context.yaml')
Explanation: If you want to reuse a previously-saved context, please uncomment the following cell and execute
End of explanation
NxsWizardStart(context).show()
Explanation: Experimental data and condition
Phonon Density of States (DOS) can be obtained from inelastic neutron scattering (INS) spectrum.
This notebook allows for extracting DOS from INS spectrum measured at the ARCS instrument at SNS.
To start, we need data files measured for the sample and the empty can, as well as experimental conditions such as incident energy and sample temperature.
The following wizard helps you go through these steps.
<img src="select_raw_data.png" width="500"/>
Example datasets:
samplenxs = "/SNS/ARCS/2014_1_18_CAL/0/47435/NeXus/ARCS_47435_event.nxs"
mtnxs = Skip
Ei=80
T=300
End of explanation
context.to_yaml('./getdos2-context.yaml')
Explanation: Save configuration so you can reuse it
End of explanation
QEGridWizardStart(context).show()
Explanation: Obtain S(Q,E)
S(Q,E) spectra for both the sample and the empty can are the starting point for getdos processing. Here is an example:
<img width="300" src="Al-SQE.png"/>
Run the following wizard to define the E and Q axes so that S(Q,E) spectra can be obtained from the INS raw data.
End of explanation
%%script bash
# ls work/
# cat work/raw2iqe-sample.params
Explanation: Parameters are saved in the work dir. Uncomment the script below to see.
End of explanation
iqe = hh.load('work/iqe.h5')
plt.figure(figsize=(6,4))
plot_sqe(iqe)
# plt.xlim(0, 11)
plt.clim(0, 1e-2)
Explanation: Plot sample IQE
End of explanation
iqe2 = iqe.copy()
I = iqe2.I; I[I!=I] = 0 # remove NaNs
IE = iqe2.sum('Q') # sum over Q
plt.figure(figsize=(6,4))
plt.plot(IE.energy, IE.I)
Explanation: You can improve the Q,E grid parameters if you like, by re-executing the above cell of
QEGridWizardStart(context).show()
Plot I(E)
End of explanation
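The I[I!=I] = 0 idiom used above to zero the NaN bins relies on NaN being the only value not equal to itself; in plain Python:

```python
def zero_nans(values):
    """NaN is the only float for which v != v, so that test detects it."""
    return [0.0 if v != v else v for v in values]

cleaned = zero_nans([1.0, float("nan"), 2.0])
total = sum(cleaned)  # summing over Q would otherwise propagate the NaN
```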
context.to_yaml('./getdos2-context.yaml')
Explanation: The plots above provide clues to selecting parameters for the getdos procedure
Save configuration so you can reuse it
End of explanation
GetDOSWizStart(context).show()
Explanation: Run GetDOS
DOS will be obtained from SQE by an iterative procedure where multiphonon and multiple scattering corrections are applied to the measured SQE spectrum, assuming
incoherent approximation, and the corrected spectrum
is then converted to DOS.
An example DOS plot:
<img width="300" src="Al-DOS.png"/>
End of explanation
context.to_yaml('./getdos2-context.yaml')
Explanation: Save context
End of explanation
print(context)
Explanation: Print context
End of explanation
ls work/
Explanation: Check output
Results are saved in "work" directory
End of explanation
dos = hh.load('work/final-dos.h5')
plt.figure(figsize=(5,3))
plt.plot(dos.E, dos.I)
plt.xlabel('Energy (meV)')
# plt.xlim(0, 30)
Explanation: Plot the final result for DOS
End of explanation
from multiphonon.backward import plotutils as pu
plt.figure(figsize=(5,3))
pu.plot_dos_iteration('work/')
plt.figure(figsize=(6,4))
pu.plot_residual('work/')
plt.figure(figsize=(10, 4))
pu.plot_intermediate_result_se('work/round-3')
Explanation: More plotting utils are available
End of explanation |
10,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>
Download
Step1: Classes
To create a class, use the reserved word class. The name of your class follows the same naming convention
used for functions and variables, but normally the first letter of each word in the class name is
capitalized. | Python Code:
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 5</font>
Download: http://github.com/dsacademybr
End of explanation
# Creating a class called Livro
class Livro():
    # This method initializes each object created from this class
    # The name of this method is __init__
    # (self) is a reference to each attribute of an object created from this class
    def __init__(self):
        # Attributes of each object created from this class.
        # The self indicates that these are attributes of the objects
        self.titulo = 'O Monge e o Executivo'
        self.isbn = 9988888
        print("Constructor called to create an object of this class")
    # Methods are functions that receive the object's attributes as parameters
    def imprime(self):
        print("The book %s with ISBN %d was created" % (self.titulo, self.isbn))
# Creating an instance of the Livro class
Livro1 = Livro()
# Type of the Livro1 object
type(Livro1)
# Attribute of the Livro1 object
Livro1.titulo
# Method of the Livro1 object
Livro1.imprime()
# Creating the Livro class with parameters in the constructor method
class Livro():
    def __init__(self, titulo, isbn):
        self.titulo = titulo
        self.isbn = isbn
        print("Constructor called to create an object of this class")
    def imprime(self, titulo, isbn):
        print("This is the book %s with ISBN %d" % (titulo, isbn))
# Creating the Livro2 object, which is an instance of the Livro class
Livro2 = Livro("A Menina que Roubava Livros", 77886611)
Livro2.titulo
# Method of the Livro2 object
Livro2.imprime("A Menina que Roubava Livros", 77886611)
# Creating the Cachorro class
class Cachorro():
    def __init__(self, raça):
        self.raça = raça
        print("Constructor called to create an object of this class")
# Creating an object from the Cachorro class
Rex = Cachorro(raça='Labrador')
# Creating an object from the Cachorro class
Golias = Cachorro(raça='Huskie')
# Instance attribute of the object created from the Cachorro class
Rex.raça
# Instance attribute of the object created from the Cachorro class
Golias.raça
Explanation: Classes
To create a class, use the reserved word class. The class name follows the same naming convention used
for creating functions and variables, but by convention the first letter of each word in the
class name is capitalized.
End of explanation |
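The notebook above stops at defining classes and creating instances. As a quick extension (not part of the original Data Science Academy material — the `Ebook` subclass is a hypothetical name added here for illustration), the same `Livro` example can be used to sketch inheritance: a subclass reuses the parent constructor via `super()` and can override its methods.

```python
# Illustrative sketch only: extends the notebook's Livro example with inheritance.
# imprime returns a string here (instead of printing) so the result is easy to check.
class Livro():
    def __init__(self, titulo, isbn):
        self.titulo = titulo
        self.isbn = isbn

    def imprime(self):
        return "Book %s with ISBN %d" % (self.titulo, self.isbn)

# A subclass inherits titulo/isbn and adds its own attribute
class Ebook(Livro):
    def __init__(self, titulo, isbn, formato):
        super().__init__(titulo, isbn)   # reuse the parent constructor
        self.formato = formato

    def imprime(self):                   # override the parent method
        return super().imprime() + " (%s)" % self.formato

e = Ebook("A Menina que Roubava Livros", 77886611, "PDF")
print(e.imprime())
```

Every `Ebook` instance is also a `Livro` (`isinstance(e, Livro)` is `True`), so code written against the parent class keeps working with the subclass.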
10,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: GISS-E2-1H
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
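Every cell that follows uses the same pyesdoc pattern: select a CMIP6 property with `DOC.set_id(...)`, then record one or more values for it with `DOC.set_value(...)`. If pyesdoc is unavailable, the flow can be mimicked with a minimal stand-in — a sketch only, and `StubOutput` is a hypothetical helper defined here, not the real `NotebookOutput` API:

```python
# Minimal stand-in illustrating the set_id / set_value flow used throughout
# this notebook. Hypothetical helper class, NOT the real pyesdoc NotebookOutput.
class StubOutput:
    def __init__(self):
        self._values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select which CMIP6 property the next value(s) belong to
        self._current = prop_id

    def set_value(self, value):
        # Record one value for the currently selected property
        # (cardinality 1.N properties call this several times)
        self._values.setdefault(self._current, []).append(value)

doc = StubOutput()
doc.set_id('cmip6.toplevel.key_properties.model_name')
doc.set_value('GISS-E2-1H')
print(doc._values)
```

This is why each cell repeats the `DO NOT EDIT` `set_id` line: it scopes the `set_value` calls that you are asked to fill in below it.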
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
10,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 6
Step1: Problem set #2
Step2: Problem set #3
Step3: Problem set #4
Step4: Problem set #5
Step5: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output
Step6: Paste your code
Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment.
Step7: The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field. | Python Code:
import requests
data = requests.get('http://localhost:5000/lakes').json()
print(len(data), "lakes")
for item in data[:10]:
print(item['name'], "- elevation:", item['elevation'], "m / area:", item['area'], "km^2 / type:", item['type'])
Explanation: Homework 6: Web Applications
For this homework, you're going to write a web API for the lake data in the MONDIAL database. (Make sure you've imported the data as originally outlined in our week 1 tutorial.)
The API should perform the following tasks:
A request to /lakes should return a JSON list of dictionaries, with the information from the name, elevation, area and type fields from the lake table in MONDIAL.
The API should recognize the query string parameter sort. When left blank or set to name, the results should be sorted by the name of the lake (in alphabetical order). When set to area or elevation, the results should be sorted by the requested field, in descending order.
The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field.
You should be able to use both the sort and type parameters in any request.
This notebook contains only test requests to your API. Write the API as a standalone Python program, start the program and then run the code in the cells below to ensure that your API produces the expected output. When you're done, paste the source code in the final cell (so we can check your work, if needed).
Hints when writing your API code:
You'll need to construct the SQL query as a string, piece by piece. This will likely involve a somewhat messy tangle of if statements. Lean into the messy tangle.
Make sure to use parameter placeholders (%s) in the query.
If you're getting SQL errors, print out your SQL statement in the request handler function so you can debug it. (When you use print() in Flask, the results will display in your terminal window.)
When in doubt, return to the test code. Examine it carefully and make sure you know exactly what it's trying to do.
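The hints above can be sketched concretely. A minimal query builder, assuming the MONDIAL `lake` table and a pg8000-style `%s` paramstyle, keeps user input in a parameter list and only ever concatenates whitelisted column names:

```python
# Sketch only: builds the SQL piece by piece, as the hints suggest.
# User-supplied values go into `params`, never into the string itself.
def build_query(sort=None, lake_type=None):
    sql = "SELECT name, area, elevation, type FROM lake"
    params = []
    if lake_type:
        sql += " WHERE type = %s"          # placeholder, filled in by the driver
        params.append(lake_type)
    if sort in ("area", "elevation"):      # whitelisted column names only
        sql += " ORDER BY " + sort + " DESC"
    else:
        sql += " ORDER BY name"            # default and silent fallback
    return sql, params

print(build_query("area", "salt"))
# → ('SELECT name, area, elevation, type FROM lake WHERE type = %s ORDER BY area DESC', ['salt'])
```

Printing the assembled string, as suggested above, makes SQL errors much easier to track down.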
Problem set #1: A list of lakes
Your API should return a JSON list of dictionaries (objects). Use the code below to determine what the keys of the dictionaries should be. (For brevity, this example only prints out the first ten records, but of course your API should return all of them.)
Expected output:
143 lakes
Ammersee - elevation: 533 m / area: 46 km^2 / type: None
Arresoe - elevation: None m / area: 40 km^2 / type: None
Atlin Lake - elevation: 668 m / area: 798 km^2 / type: None
Balaton - elevation: 104 m / area: 594 km^2 / type: None
Barrage de Mbakaou - elevation: None m / area: None km^2 / type: dam
Bodensee - elevation: 395 m / area: 538 km^2 / type: None
Brienzersee - elevation: 564 m / area: 29 km^2 / type: None
Caspian Sea - elevation: -28 m / area: 386400 km^2 / type: salt
Chad Lake - elevation: 250 m / area: 23000 km^2 / type: salt
Chew Bahir - elevation: 520 m / area: 800 km^2 / type: salt
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?type=salt').json()
avg_area = sum([x['area'] for x in data if x['area'] is not None]) / len(data)
avg_elev = sum([x['elevation'] for x in data if x['elevation'] is not None]) / len(data)
print("average area:", int(avg_area))
print("average elevation:", int(avg_elev))
Explanation: Problem set #2: Lakes of a certain type
The following code fetches all lakes of type salt and finds their average area and elevation.
Expected output:
average area: 18880
average elevation: 970
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?sort=elevation').json()
for item in [x['name'] for x in data if x['elevation'] is not None][:15]:
print("*", item)
Explanation: Problem set #3: Lakes in order
The following code fetches lakes in reverse order by their elevation and prints out the name of the first fifteen, excluding lakes with an empty elevation field.
Expected output:
* Licancabur Crater Lake
* Nam Co
* Lago Junin
* Lake Titicaca
* Poopo
* Salar de Uyuni
* Koli Sarez
* Lake Irazu
* Qinghai Lake
* Segara Anak
* Lake Tahoe
* Crater Lake
* Lake Tana
* Lake Van
* Issyk-Kul
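Note that PostgreSQL's `ORDER BY elevation DESC` places NULL elevations first by default (the server-side alternative is `ORDER BY elevation DESC NULLS LAST`), which is why the test code filters them out on the client side. That filtering step, in isolation (with sample values):

```python
# Client-side filtering of rows whose elevation is NULL/None,
# mirroring the list comprehension in the test code above.
data = [
    {"name": "Nam Co", "elevation": 4718},
    {"name": "Arresoe", "elevation": None},   # dropped by the filter
    {"name": "Lake Tahoe", "elevation": 1900},
]
names = [x["name"] for x in data if x["elevation"] is not None]
print(names)  # → ['Nam Co', 'Lake Tahoe']
```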
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes?sort=area&type=caldera').json()
for item in data:
print("*", item['name'])
Explanation: Problem set #4: Order and type
The following code prints the names of the largest caldera lakes, ordered in reverse order by area.
Expected output:
* Lake Nyos
* Lake Toba
* Lago Trasimeno
* Lago di Bolsena
* Lago di Bracciano
* Crater Lake
* Segara Anak
* Laacher Maar
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes', params={'type': "' OR true; --"}).json()
data
Explanation: Problem set #5: Error handling
Your API should work fine even when faced with potential error-causing inputs. For example, the expected output for this statement is an empty list ([]), not every row in the table.
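This works automatically if the `type` value is passed as a query parameter rather than interpolated into the SQL string. A self-contained illustration using the standard library's sqlite3 (whose paramstyle is `?` rather than pg8000's `%s`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lake (name TEXT, type TEXT)")
conn.execute("INSERT INTO lake VALUES ('Caspian Sea', 'salt')")

malicious = "' OR true; --"

# With a placeholder the whole string is treated as one literal value,
# so it matches no row instead of rewriting the query.
rows = conn.execute("SELECT name FROM lake WHERE type = ?", [malicious]).fetchall()
print(rows)  # → []
```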
End of explanation
import requests
data = requests.get('http://localhost:5000/lakes', params={'sort': "florb"}).json()
[x['name'] for x in data[:5]]
Explanation: Specifying a field other than name, area or elevation for the sort parameter should fail silently, defaulting to sorting alphabetically. Expected output: ['Ammersee', 'Arresoe', 'Atlin Lake', 'Balaton', 'Barrage de Mbakaou']
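One way to get that silent fallback is to look the parameter up in a dict of whitelisted clauses, defaulting to the name ordering for anything unrecognized:

```python
# Hypothetical helper: unknown sort values fall back to the default.
ORDER_BY = {
    "name": "ORDER BY name",
    "area": "ORDER BY area DESC",
    "elevation": "ORDER BY elevation DESC",
}

def order_clause(sort_param):
    return ORDER_BY.get(sort_param, ORDER_BY["name"])

print(order_clause("florb"))  # → ORDER BY name
```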
End of explanation
from flask import Flask, request, jsonify
import pg8000

app = Flask(__name__)
conn = pg8000.connect(user="postgres", password="password", database="mondial")

@app.route("/lakes")
def get_lakes():
    sorting = request.args.get('sort', 'name')
    get_type = request.args.get('type', None)
    cursor = conn.cursor()
    # Whitelist the sort clause; anything unrecognized silently falls back to name
    order_clauses = {
        'area': "ORDER BY area DESC",
        'elevation': "ORDER BY elevation DESC",
    }
    sort_by = order_clauses.get(sorting, "ORDER BY name")
    if get_type:
        cursor.execute("SELECT name, area, elevation, type FROM lake WHERE type=%s " + sort_by, [get_type])
    else:
        cursor.execute("SELECT name, area, elevation, type FROM lake " + sort_by)
    def to_number(x):
        # pg8000 returns Decimal for numeric columns; keep NULLs as None
        return int(x) if x is not None else None
    output = []
    for item in cursor.fetchall():
        output.append({
            'name': item[0],
            'area': to_number(item[1]),
            'elevation': to_number(item[2]),
            'type': item[3],
        })
    return jsonify(output)

app.run(port=5000)  # the test cells above request localhost:5000
Explanation: Paste your code
Please paste the code for your entire Flask application in the cell below, in case we want to take a look when grading or debugging your assignment.
End of explanation
conn.rollback()
Explanation: The API should recognize the query string parameter type. When specified, the results should only include rows that have the specified value in the type field.
End of explanation |
10,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
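The cells below all follow the same `DOC.set_id(...)` / `DOC.set_value(...)` pattern. When many answers are already known, it can be convenient to fill them from a plain dict in one loop. The sketch below is illustrative only: `_StubDoc` is a minimal stand-in for the real pyesdoc `NotebookOutput` (it records `(id, value)` pairs and mirrors only the two methods used in this notebook), and the example ids/values are hypothetical.

```python
# Illustrative only: batch-fill several ES-DOC properties from a dict.
# _StubDoc mimics just the set_id/set_value calls used in this notebook;
# in the real notebook you would call these methods on DOC instead.

class _StubDoc:
    """Minimal stand-in recording (id, value) pairs like DOC.set_id/set_value."""
    def __init__(self):
        self.values = {}
        self._current = None

    def set_id(self, prop_id):
        # Select the property that subsequent set_value calls apply to.
        self._current = prop_id

    def set_value(self, value):
        # Properties with cardinality N accept repeated set_value calls,
        # so values are accumulated in a list per property id.
        self.values.setdefault(self._current, []).append(value)


# Hypothetical answers, keyed by the property ids used in this notebook.
answers = {
    'cmip6.atmoschem.key_properties.model_name': 'MY-CHEM-MODEL',
    'cmip6.atmoschem.key_properties.number_of_tracers': 45,
}

doc = _StubDoc()
for prop_id, value in answers.items():
    doc.set_id(prop_id)
    doc.set_value(value)
```

With the real `DOC` object the same loop applies unchanged; only the stub class would be dropped.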
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
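For ENUM properties like the one above, `DOC.set_value` only accepts strings from the listed "Valid Choices". A small pre-check against that list can catch typos before submission. This is a sketch, not part of pyesdoc: the choice list is copied from the comments above (spellings as given in the controlled vocabulary), and the treatment of the `"Other: [Please specify]"` wildcard as accepting any `"Other: ..."` free text is an assumption.

```python
# Illustrative only: validate a candidate answer against the "Valid Choices"
# listed in the cell above before handing it to DOC.set_value.

VALID_SCOPES = [
    "troposhere",   # spelling as given in the controlled vocabulary
    "stratosphere",
    "mesosphere",
    "whole atmosphere",
    "Other: [Please specify]",
]

def is_valid_choice(value, choices):
    """Return True if value is a listed choice or a permitted 'Other: ...' entry."""
    if value in choices:
        return True
    # Assumed behaviour: an "Other: [Please specify]" entry in the choice
    # list permits any free-text value of the form "Other: <text>".
    return any(c.startswith("Other:") for c in choices) and value.startswith("Other:")
```

For example, `is_valid_choice("stratosphere", VALID_SCOPES)` holds, while a misspelled or unlisted scope fails the check.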
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
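For illustration only — the following helper is not part of the notebook's DOC/pyesdoc API — here is a minimal standalone sketch of what "Type: ENUM, Cardinality: 0.N" implies for the values passed to DOC.set_value: any number of values (including none), each drawn from the listed choices.

```python
# Hypothetical validator (not part of the DOC API) for an ENUM property with
# cardinality 0.N: zero or more values, each taken from the valid choices above.
VALID_CHOICES = {
    "Sulphate",
    "Polar stratospheric ice",
    "NAT (Nitric acid trihydrate)",
    "NAD (Nitric acid dihydrate)",
    "STS (supercooled ternary solution aerosol particule))",
}

def check_enum_0N(values):
    # cardinality 0.N: the empty list is allowed, every entry must be a valid choice
    return all(v in VALID_CHOICES for v in values)

print(check_enum_0N(["Sulphate", "NAT (Nitric acid trihydrate)"]))  # True
```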
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Objects
Python is an object oriented language. As such it allows the definition of classes.
For instance lists are also classes, that's why there are methods associated with them (e.g. append()). Here we will see how to create classes and assign them attributes and methods.
Definition and initialization
A class gathers functions (called methods) and variables (called attributes).
The main goal of having this kind of structure is that the methods can share a common
set of inputs and operate on them to produce the outcome the programmer wants.
In Python, classes are defined with the keyword class and are always initialized
with the method __init__, a function whose first input argument must always be
the word self. The arguments that come after self are used to initialize the class attributes.
In the following example we create a class called Circle.
Step1: To create an instance of this class we do it as follows
Step2: We can check that the initialization worked out fine by printing its attributes
Step3: We now redefine the class to add a new method called area that computes the area of the circle
Step4: Exercise 3.1
Redefine the class Circle to include a new method called perimeter that returns the value of the circle's perimeter.
We now want to define a method that returns a new Circle with twice the radius of the input Circle.
Step5: We now add a new method that takes as an input another element of the class Circle
and returns the total area of the two circles
Python Code:
class Circle:
def __init__(self, radius):
self.radius = radius #all attributes must be preceded by "self."
Explanation: Objects
Python is an object oriented language. As such it allows the definition of classes.
For instance lists are also classes, that's why there are methods associated with them (e.g. append()). Here we will see how to create classes and assign them attributes and methods.
Definition and initialization
A class gathers functions (called methods) and variables (called attributes).
The main goal of having this kind of structure is that the methods can share a common
set of inputs and operate on them to produce the outcome the programmer wants.
In Python, classes are defined with the keyword class and are always initialized
with the method __init__, a function whose first input argument must always be
the word self. The arguments that come after self are used to initialize the class attributes.
In the following example we create a class called Circle.
End of explanation
A = Circle(5.0)
Explanation: To create an instance of this class we do it as follows
End of explanation
print(A.radius)
Explanation: We can check that the initialization worked out fine by printing its attributes
End of explanation
class Circle:
def __init__(self, radius):
self.radius = radius #all attributes must be preceded by "self."
def area(self):
import math
return math.pi * self.radius * self.radius
A = Circle(1.0)
print(A.radius)
print(A.area())
Explanation: We now redefine the class to add a new method called area that computes the area of the circle
End of explanation
class Circle:
def __init__(self, radius):
self.radius = radius #all attributes must be preceded by "self."
def area(self):
import math
return math.pi * self.radius * self.radius
def enlarge(self):
return Circle(2.0*self.radius)
A = Circle(5.0) # Create a first circle
B = A.enlarge() # Use the method to create a new Circle
print(B.radius) # Check that the radius is twice the original one.
Explanation: Exercise 3.1
Redefine the class Circle to include a new method called perimeter that returns the value of the circle's perimeter.
We now want to define a method that returns a new Circle with twice the radius of the input Circle.
End of explanation
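One possible solution sketch for the perimeter part of Exercise 3.1 — the method name perimeter comes from the exercise text, and the rest follows the Circle class as defined above:

```python
import math

class Circle:
    def __init__(self, radius):
        self.radius = radius  # all attributes must be preceded by "self."

    def area(self):
        return math.pi * self.radius * self.radius

    def perimeter(self):
        # the perimeter (circumference) of a circle is 2 * pi * r
        return 2.0 * math.pi * self.radius

A = Circle(1.0)
print(A.perimeter())  # 6.283185307179586
```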
class Circle:
def __init__(self, radius):
self.radius = radius #all attributes must be preceded by "self."
def area(self):
import math
return math.pi * self.radius * self.radius
def enlarge(self):
return Circle(2.0*self.radius)
def add_area(self, c):
return self.area() + c.area()
A = Circle(1.0)
B = Circle(2.0)
print(A.add_area(B))
print(B.add_area(A))
Explanation: We now add a new method that takes as an input another element of the class Circle
and returns the total area of the two circles
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise
Step1: Step 1
Step2: Since these are just plain wave files, we can listen to the data using aplay
Step3: This has loaded each of the wave files into sound_files[], one for each of our 5 classes. We must process this into fixed length feature vectors which we can feed to a classifier. This is the major "engineering" of the machine learning process -- good feature selection is essential to getting good performance.
It's important that we can change the parameters of the feature extraction and learning and be able to rerun the entire process in one go. We define a dictionary called params which will hold every adjustable parameter and a function called run_pipeline() which will run our entire pipeline. For now, it does nothing.
Step4: Step 2
Step5: We can also view this in the frequency domain using plt.specgram(). We have to choose an FFT size and overlap (here I used N=256 samples, overlap=128)
Step6: Preprocessing steps
Two things we should do in the pre-processing step
Step7: Testing pre-processing
We can test this and check it working by plotting the time series and spectrogram before and after. We can create a quick function to plot this
Step9: Feature extraction
The next step is to make fixed length feature vectors. This requires some assumptions
Step10: Exercise 1
Exercise
Step11: Feature transform
The raw audio data isn't a great feature for classification. We can apply transformations to the feature vectors to improve the features. The Fourier transform is one way of doing that. To avoid artifacts, each section must have a window function applied to taper off the signal at the ends and avoid a large discontinuity.
Python Code:
# standard imports
import numpy as np
import scipy.io.wavfile as wavfile
import scipy.signal as sig
import matplotlib.pyplot as plt
import sklearn.preprocessing, sklearn.cluster, sklearn.tree, sklearn.neighbors, sklearn.ensemble, sklearn.multiclass, sklearn.feature_selection
import ipy_table
import sklearn.svm, sklearn.cross_validation, sklearn.grid_search, sklearn.metrics, sklearn.datasets, sklearn.decomposition, sklearn.manifold
import pandas as pd
import seaborn
import scipy.ndimage
# force plots to appear inline on this page
%matplotlib inline
Explanation: Exercise: Learning to recognise touch sounds
This exercise will look at recognising the audio registered by a piezo contact microphone on a mobile device when different parts of it are touched by a user. This is data from the Stane project (Paper and video), which used 3D printed surfaces to make super-cheap touch controllers.
<img src="imgs/stane_1.png" width="400px">
<img src="imgs/stane_2.png" width="400px">
The machine learning problem is simple: given a set of recordings of a user rubbing discrete touch zones on this 3D printed case, train a classifier which can distinguish which zone is being touched. This is in essence similar to speech recognition, but with a much simpler acoustic problem and no need to deal with language modeling.
We will use multi-class classification to distinguish the touch zones from the audio alone. We can assume a small number of discrete touch areas, and that there is no model governing how they might be touched (i.e. touches happen at random).
A data processing pipeline
We need to develop a pipeline to process the data. There are several stages common to most supervised learning tasks:
Loading the original data (from files, databases etc.)
Pre-processing (removing outliers, resampling or interpolating, raw normalisation)
Feature extraction (transforming data into fixed length feature vectors)
Feature processing (offset removal and normalisation)
Data splitting (dividing into testing and training sections)
Classification (training the classifier)
Evaluation (testing the classifier performance)
End of explanation
%cd datasets\stane
%ls
Explanation: Step 1: Loading the data
The first thing we need to do is to load the data. The data is in datasets/stane/ and consists of five wave files from scratching five different surfaces, each 60 seconds long, sampled at 4 kHz, 16-bit PCM.
End of explanation
# play the first five seconds of these two files
!aplay stane_2.wav -d 5
!aplay stane_4.wav -d 5
# load each of the files into sound_files
sound_files = []
for texture in "12345":
# load the wavefile
fname = "stane_%s.wav" % texture
sr, data = wavfile.read(fname)
    print("Loaded %s, %s samples at %dHz (%f seconds)" % (fname, len(data), sr, len(data)/float(sr)))
sound_files.append(data)
Explanation: Since these are just plain wave files, we can listen to the data using aplay:
End of explanation
params = {'sample_rate':4096,
}
def run_pipeline(sound_files, params):
# this is the outline of our pipeline
pre_processed = pre_process(sound_files, params)
features, targets = feature_extract(pre_processed, params)
train, validate, test = split_features(features, targets, params)
classifier = train_classifier(features, targets, params)
evaluate(classifier, features, targets, params)
Explanation: This has loaded each of the wave files into sound_files[], one for each of our 5 classes. We must process this into fixed length feature vectors which we can feed to a classifier. This is the major "engineering" of the machine learning process -- good feature selection is essential to getting good performance.
It's important that we can change the parameters of the feature extraction and learning and be able to rerun the entire process in one go. We define a dictionary called params which will hold every adjustable parameter and a function called run_pipeline() which will run our entire pipeline. For now, it does nothing.
End of explanation
one_second = params["sample_rate"]
# plot two of the files
plot_section_0 = sound_files[0][:one_second]
plot_section_1 = sound_files[1][:one_second]
# generate time indices
timebase = np.arange(len(plot_section_0)) / float(params["sample_rate"])
plt.figure()
plt.plot(timebase, plot_section_0)
plt.xlabel("Time (s)")
plt.figure()
plt.plot(timebase, plot_section_1)
plt.xlabel("Time (s)")
Explanation: Step 2: Pre-processing
This data is pretty clean already. We can plot a section of the data to have a look at it:
End of explanation
# the cmap= just selects a prettier heat map
_ = plt.specgram(plot_section_0, NFFT=256, Fs=params["sample_rate"], noverlap=128, cmap="gist_heat")
plt.figure()
_ = plt.specgram(plot_section_1, NFFT=256, Fs=params["sample_rate"], noverlap=128, cmap="gist_heat")
Explanation: We can also view this in the frequency domain using plt.specgram(). We have to choose an FFT size and overlap (here I used N=256 samples, overlap=128)
End of explanation
def bandpass(x, low, high, sample_rate):
# scipy.signal.filtfilt applies a linear filter to data (*without* phase distortion)
# scipy.signal.butter will design a linear Butterworth filter
nyquist = sample_rate / 2
b,a = sig.butter(4, [low/float(nyquist), high/float(nyquist)], btype="band")
return sig.filtfilt(b,a,x)
def pre_process(sound_files, params):
processed = []
for sound_file in sound_files:
normalised = sound_file / 32768.0
p = bandpass(normalised, params["low_cutoff"], params["high_cutoff"], params["sample_rate"])
processed.append(p)
return processed
Explanation: Preprocessing steps
Two things we should do in the pre-processing step:
1. normalise the data to the -1 to 1 range
2. apply bandpass filtering to select frequencies we are interested in
End of explanation
def plot_second(x, params):
one_second = params["sample_rate"]
plot_section = x[:one_second]
# generate time indices
timebase = np.arange(len(plot_section)) / float(params["sample_rate"])
plt.figure()
plt.plot(timebase, plot_section)
plt.ylabel("Amplitude")
plt.xlabel("Time (s)")
plt.figure()
_ = plt.specgram(plot_section, NFFT=256, Fs=params["sample_rate"], noverlap=128, cmap='gist_heat')
plt.ylabel("Freq (Hz)")
plt.xlabel("Time (s)")
# test the filtering; these are example values only
params["low_cutoff"]=100
params["high_cutoff"]=1500
processed = pre_process(sound_files, params)
# plot the results
plot_second(sound_files[0], params)
plot_second(processed[0], params)
Explanation: Testing pre-processing
We can test this and check it working by plotting the time series and spectrogram before and after. We can create a quick function to plot this:
End of explanation
def sliding_window(x, length, overlap):
    '''Split x into windows of the given length, with the specified overlap'''
wins = len(x)//(length-overlap)
windows = []
offset = 0
for i in range(wins):
windows.append(x[offset:offset+length])
offset += length-overlap
return windows
# for example
sliding_window(sound_files[0], 512, 256);
Explanation: Feature extraction
The next step is to make fixed length feature vectors. This requires some assumptions: we have a continuous signal, so how do we split it up? What processing should we apply to transform the data?
The obvious thing to do with a time series is to split it into windows of a fixed length. These windows can be overlapping (i.e. the next window can include part of the previous one). The function sliding_window() below splits up a 1D time series into such overlapping windows.
End of explanation
params['window_overlap'] = -1024
params['window_length'] = 256
features, labels = make_features(processed, params)
print(features.shape)
Explanation: Exercise 1
Exercise: Produce a feature matrix for the sound files using sliding window, and a corresponding label vector.
Hint: you can use np.full(n, x) to generate a vector [x,x,x,x,...] and np.vstack(l) to stack a list of vectors into a matrix.
Make the window size and overlap part of params (window_length and overlap) and write a function features, labels = make_features(processed, params)
End of explanation
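A possible solution sketch for Exercise 1, using the np.full/np.vstack hints from the exercise text. The notebook calls it with the pre-processed sound files; here it is demonstrated on synthetic stand-in signals so the sketch is self-contained (sliding_window is inlined from the cell above):

```python
import numpy as np

def sliding_window(x, length, overlap):
    '''Split x into windows of the given length, with the specified overlap'''
    wins = len(x) // (length - overlap)
    windows, offset = [], 0
    for _ in range(wins):
        windows.append(x[offset:offset + length])
        offset += length - overlap
    return windows

def make_features(processed, params):
    # window each class signal; label every window with its class index
    length, overlap = params["window_length"], params["window_overlap"]
    feature_list, label_list = [], []
    for class_index, signal in enumerate(processed):
        # keep only full-length windows
        windows = [w for w in sliding_window(signal, length, overlap) if len(w) == length]
        feature_list.append(np.vstack(windows))
        label_list.append(np.full(len(windows), class_index))
    return np.vstack(feature_list), np.hstack(label_list)

# synthetic stand-ins for the five pre-processed recordings
demo = [np.random.randn(4096) for _ in range(5)]
X, y = make_features(demo, {"window_length": 256, "window_overlap": -1024})
print(X.shape, y.shape)  # (15, 256) (15,)
```

Note that a negative overlap, as used above, simply leaves a gap between consecutive windows.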
def transform_features(data):
    # window each feature row and compute its magnitude spectrum
    # try different window functions (e.g. Hann, Blackman-Harris)
    window = sig.hamming(data.shape[1])
    fft_features = np.abs(np.fft.fft(data * window))
    return fft_features

print(transform_features(features).shape)
Explanation: Feature transform
The raw audio data isn't a great feature for classification. We can apply transformations to the feature vectors to improve the features. The Fourier transform is one way of doing that. To avoid artifacts, each section must have a window function applied to taper off the signal at the ends and avoid a large discontinuity.
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content and Objectives
Show PSD of ASK for random data
Spectra are determined using FFT and averaging along several realizations
<b> Note: </b> You may extend these lines to include other modulation schemes
Step1: Function for determining the impulse response of an RRC filter
Step2: Parameters
Step3: Signals and their spectra
Step4: Plotting
Python Code:
# importing
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
# showing figures inline
%matplotlib inline
# plotting options
font = {'size' : 20}
plt.rc('font', **font)
plt.rc('text', usetex=True)
matplotlib.rc('figure', figsize=(18, 10) )
Explanation: Content and Objectives
Show PSD of ASK for random data
Spectra are determined using FFT and averaging along several realizations
<b> Note: </b> You may extend these lines to include other modulation schemes
Import
End of explanation
########################
# find impulse response of an RRC filter
########################
def get_rrc_ir(K, n_sps, t_symbol, beta):
'''
Determines coefficients of an RRC filter
Formula out of: J. Huber, Trelliscodierung, Springer, 1992, S. 15
NOTE: roll-off factor must not equal zero
NOTE: Length of the IR has to be an odd number
IN: length of IR, sps factor, symbol time, roll-off factor
OUT: filter coefficients
'''
if beta == 0:
beta = 1e-32
K = int(K)
if ( K%2 == 0):
raise ValueError('Length of the impulse response should be an odd number')
# initialize np.array
rrc = np.zeros( K )
# find sample time and initialize index vector
t_sample = t_symbol / n_sps
time_ind = range( -(K-1)//2, (K-1)//2+1)
# assign values of rrc
for t_i in time_ind:
t = (t_i)* t_sample
if t_i == 0:
rrc[ int( t_i+(K-1)//2 ) ] = (1-beta+4*beta/np.pi)
elif np.abs(t) == t_symbol / ( 4 * beta ):
rrc[ int( t_i+(K-1)//2 ) ] = beta*np.sin( np.pi/(4*beta)*(1+beta) ) \
- 2*beta/np.pi*np.cos(np.pi/(4*beta)*(1+beta))
else:
rrc[ int( t_i+(K-1)//2 ) ] = ( 4 * beta * t / t_symbol * np.cos( np.pi*(1+beta)*t/t_symbol ) \
+ np.sin( np.pi * (1-beta) * t / t_symbol ) ) / ( np.pi * t / t_symbol * (1-(4*beta*t/t_symbol)**2) )
rrc = rrc / np.sqrt(t_symbol)
return rrc
Explanation: Function for determining the impulse response of an RRC filter
End of explanation
########################
# parameters
########################
# number of realizations along which to average the psd estimate
n_real = 100
# modulation scheme and constellation points
M = 16
constellation = 2. * np.arange( M ) - M + 1
#constellation = np.exp( 1j * 2 * np.pi * np.arange(M) / M )
constellation /= np.sqrt( np.linalg.norm( constellation )**2 / M )
# number of symbols
n_symb = int( 1e4 )
t_symb = 1.0
# parameters of the filter
beta = 0.33
n_sps = 4 # samples per symbol
syms_per_filt = 4 # symbols per filter (plus minus in both directions)
K_filt = 2*syms_per_filt * n_sps + 1 # length of the fir filter
Explanation: Parameters
End of explanation
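The note above says these lines may be extended to other modulation schemes, and the commented-out constellation line already hints at M-PSK. As a minimal sketch, here is a QPSK constellation normalised to unit average symbol energy that could be dropped in place of the ASK one (the 45° phase offset is just a common convention):

```python
import numpy as np

M = 4  # QPSK
constellation = np.exp(1j * (2 * np.pi * np.arange(M) / M + np.pi / 4))
constellation /= np.sqrt(np.linalg.norm(constellation)**2 / M)  # unit average energy

print(np.round(constellation, 3))  # the four points +-0.707 +- 0.707j
```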
# define rrc filter response
rrc = get_rrc_ir( K_filt, n_sps, t_symb, beta)
rrc = rrc/ np.linalg.norm(rrc)
# get frequency regime and initialize PSD
omega = np.linspace( -np.pi, np.pi, 512)
psd = np.zeros( (n_real, len(omega) ) )
psd_str = np.zeros( (n_real, len(omega) ) )
# loop for realizations
for k in np.arange(n_real):
    # draw random symbol indices and map them onto the constellation
d = np.random.randint( M, size = n_symb)
s = constellation[ d ]
# prepare sequence to be filtered
s_up = np.zeros(n_symb * n_sps, dtype=complex)
s_up[ : : n_sps ] = s
s_up = np.append( s_up, np.zeros( K_filt - 1 ) )
# apply rrc
#s_filt_rrc = signal.lfilter(rrc, [1], s_up)
s_filt_rrc = np.convolve( rrc, s_up )
x = s_filt_rrc
# get spectrum using Bartlett method
psd[k, :] = np.abs( 1 / n_sps * np.fft.fftshift( np.fft.fft( x, 512 ) ) )**2
# average along realizations
psd_average = np.average(psd, axis=0)
psd_str_average = np.average(psd_str, axis=0)
Explanation: Signals and their spectra
End of explanation
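As a sanity check on the Bartlett-style averaging used above (all names here — h, H, n_avg — are mine, and a short toy FIR stands in for the RRC): for unit-variance white input that is circularly filtered, the averaged periodogram should converge to |H(Omega)|^2.

```python
import numpy as np

np.random.seed(0)
N_fft, n_avg = 512, 2000
h = np.array([0.5, 1.0, 0.5])                 # toy FIR stand-in for the RRC
H = np.fft.fftshift(np.fft.fft(h, N_fft))     # frequency response on the FFT grid

acc = np.zeros(N_fft)
for _ in range(n_avg):
    noise = np.random.randn(N_fft)                    # unit-variance white input
    X = np.fft.fft(noise) * np.fft.fft(h, N_fft)      # circular filtering, no edge transients
    acc += np.abs(np.fft.fftshift(X))**2 / N_fft      # periodogram of the output
bartlett = acc / n_avg

# averaged periodogram should approach |H|^2; print the worst-case relative deviation
print(np.max(np.abs(bartlett - np.abs(H)**2)) / np.max(np.abs(H)**2))
```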
plt.figure()
plt.plot(omega, 10*np.log10(psd_average) )
plt.grid(True);
plt.xlabel(r'$\Omega$');
plt.ylabel(r'$\Phi(\Omega)$ (dB)')
plt.figure()
plt.plot(omega, psd_average )
plt.grid(True);
plt.xlabel(r'$\Omega$');
plt.ylabel(r'$\Phi(\Omega)$')
Explanation: Plotting
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importing Cells in NetPyNE
(1) Clone repository and compile mod files
Determine your location in the directory structure
Step1: Move to (or stay in) the '/content' directory
Step2: Ensure you are in the correct directory --> Expected output
Step3: Install NEURON and NetPyNE, and import matplotlib
Step4: This next line will detect if the directory already exists (i.e. you are re-running this code), and will delete it to prevent future errors.
Step5: Clone repository with the necessary cell and mod files
Step6: Move into the repository with all the necessary files
Step7: Ensure you are in the repository with the 'pwd' command --> Expected output
Step8: Compile the mod files --> Expected output
Step9: (2) Importing cells from different file formats
Set up netParams object
Step10: 2a. Import cell from .json format
Step11: 2b. Import a detailed morphology from a .swc file
Step12: 2c. Import a cell from a .hoc (NEURON) file
Step13: 2d. Import a cell from a .py (python) file
Step14: EXERCISE
Step15: (3) Explore and manipulate cell parameters
Explore the cell types located in the netParams.cellParams dictionary
Step16: EXERCISE
Step17: EXERCISE
Step18: Now we want to explore (and change) the values of a channel parameter in a given cell model
Step19: EXERCISE
Step20: EXERCISE
Step21: Now let's see how these changes affect the cell behavior by plotting the cell's response to current input before and after the parameter changes!
EXERCISE
Step22: EXERCISE
Step23: Add cfg params
Step24: Create network and run simulation
Step25: EXERCISE
Step26: Run the sim
Step27: (4) Plotting Morphology
Step28: EXERCISE
Step29: Now let's set the propagation velocity and length constant
Step30: EXERCISE
Step31: Add some network stimulation parameters
Step32: EXERCISE
Step33: Add cell connectivity rules
EXERCISE
Step34: EXERCISE
Python Code:
!pwd
Explanation: Importing Cells in NetPyNE
(1) Clone repository and compile mod files
Determine your location in the directory structure
End of explanation
%cd /content/
Explanation: Move to (or stay in) the '/content' directory
End of explanation
!pwd
Explanation: Ensure you are in the correct directory --> Expected output: "/content"
End of explanation
!pip install neuron
!pip install netpyne
import matplotlib
import os
import json
%matplotlib inline
Explanation: Install NEURON and NetPyNE, and import matplotlib
End of explanation
if os.path.isdir('/content/cells_netpyne2021'):
!rm -r /content/cells_netpyne2021
Explanation: This next line will detect if the directory already exists (i.e. you are re-running this code), and will delete it to prevent future errors.
End of explanation
!git clone https://github.com/ericaygriffith/cells_netpyne2021.git
Explanation: Clone repository with the necessary cell and mod files
End of explanation
%cd cells_netpyne2021/
Explanation: Move into the repository with all the necessary files
End of explanation
!pwd
Explanation: Ensure you are in the repository with the 'pwd' command --> Expected output: '/content/cells_netpyne2021'
End of explanation
!nrnivmodl
Explanation: Compile the mod files --> Expected output: creation of an 'x86_64' directory
End of explanation
from netpyne import specs, sim
# Network parameters
netParams = specs.NetParams() # object of class NetParams to store the network parameters
Explanation: (2) Importing cells from different file formats
Set up netParams object
End of explanation
netParams.loadCellParamsRule(label='TC_reduced', fileName = 'TC_reduced_cellParams.json')
netParams.cellParams['TC_reduced']
Explanation: 2a. Import cell from .json format
End of explanation
netParams.importCellParams(
label='PYR_HH3D_swc',
conds={'cellType': 'PYR', 'cellModel': 'HH3D_swc'},
fileName='BS0284.swc',
cellName='swc_cell')
netParams.cellParams.keys()
Explanation: 2b. Import a detailed morphology from a .swc file
End of explanation
netParams.importCellParams(
label='PYR_HH3D_hoc',
conds={'cellType': 'PYR', 'cellModel': 'HH3D_hoc'},
fileName='geom.hoc',
cellName='E21',
importSynMechs=False)
netParams.cellParams.keys()
Explanation: 2c. Import a cell from a .hoc (NEURON) file
End of explanation
netParams.importCellParams(
label='sRE_py',
conds={'cellType': 'sRE', 'cellModel': 'HH'},
fileName='sRE.py',
cellName='sRE',
importSynMechs=False)
netParams.cellParams.keys()
Explanation: 2d. Import a cell from a .py (python) file
End of explanation
netParams.importCellParams(
label='mouse_hipp_swc',
conds={'cellType': 'hipp','cellModel': 'HH3D'},
fileName='mouseGABA_hipp.swc',
cellName='swc_hippCell'
)
netParams.cellParams.keys()
Explanation: EXERCISE: import the other swc file contained in the cells_netpyne2021 directory
End of explanation
netParams.cellParams.keys()
Explanation: (3) Explore and manipulate cell parameters
Explore the cell types located in the netParams.cellParams dictionary
End of explanation
netParams.cellParams['TC_reduced']['secs']['soma']['geom']['L']
geom_TC = netParams.cellParams['TC_reduced']['secs']['soma']['geom']
geom_TC['L']
Explanation: EXERCISE: Find the geometry (length & diameter) of the soma compartment for each of the above cells
End of explanation
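One way to answer this exercise for every imported cell at once is to loop over netParams.cellParams. The snippet below is a sketch against a toy dict with the same nesting as cellParams (the L/diam numbers are placeholders, not the real morphologies); note that swc imports name the root section 'soma_0' rather than 'soma', so we match by prefix.

```python
# Toy stand-in for netParams.cellParams; geometry values are placeholders.
cell_params = {
    'TC_reduced': {'secs': {'soma': {'geom': {'L': 38.4, 'diam': 76.9}}}},
    'PYR_HH3D_swc': {'secs': {'soma_0': {'geom': {'L': 20.0, 'diam': 20.0}}}},
}

soma_geom = {}
for label, rule in cell_params.items():
    # swc imports name the root section 'soma_0', so match by prefix
    sec_name = next(s for s in rule['secs'] if s.startswith('soma'))
    geom = rule['secs'][sec_name]['geom']
    soma_geom[label] = (geom.get('L'), geom.get('diam'))
    print(label, sec_name, soma_geom[label])
```

In the notebook, swap `cell_params` for the real `netParams.cellParams` to print the soma geometry of every imported rule.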
netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys()
Explanation: EXERCISE: List all of the channel mechanisms in the soma compartment of the thalamocortical cell model (TC_reduced)
End of explanation
netParams.cellParams['TC_reduced']['secs']['soma']['mechs'].keys()
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas'].keys()
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g'] = 5.0e-05
netParams.cellParams['TC_reduced']['secs']['soma']['mechs']['pas']['g']
Explanation: Now we want to explore (and change) the values of a channel parameter in a given cell model
End of explanation
netParams.cellParams['mouse_hipp_swc']['secs']['soma_0']['mechs']['pas'] = {'g': 0.0000357, 'e': -70}
Explanation: EXERCISE: Change the conductance of the leak channel in the soma compartment of the reticular cell model (sRE.py)
EXERCISE: Insert a passive leak channel ('pas') into the soma compartment of the mouseGABA_hipp.swc cell model
End of explanation
for sec in netParams.cellParams['PYR_HH3D_swc']['secs'].keys():
netParams.cellParams['PYR_HH3D_swc']['secs'][sec]['geom']['cm'] = 1
Explanation: EXERCISE: Change the capacitance of all compartments in the model defined by BS0284.swc (PYR_HH3D_swc)
End of explanation
netParams.popParams['TC_pop'] = {'cellType': 'TC', 'numCells': 1, 'cellModel': 'HH_reduced'}
Explanation: Now let's see how these changes affect the cell behavior by plotting cell's response to current input before and after param changes!
EXERCISE: First create a population of thalamocortical cells
End of explanation
netParams.stimSourceParams['Input'] = {'type': 'IClamp', 'del': 300, 'dur': 500, 'amp': -0.1}  # -0.1 nA hyperpolarizing pulse; onset/duration are illustrative
netParams.stimTargetParams['Input->TC_pop'] = {'source': 'Input', 'sec':'soma', 'loc': 0.5, 'conds': {'pop':'TC_pop'}}
Explanation: EXERCISE: Add hyperpolarizing current clamp stimulation of -0.1 nA to thalamocortical cell pop
End of explanation
## cfg
cfg = specs.SimConfig() # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3 # Duration of the simulation, in ms
cfg.dt = 0.01 # Internal integration timestep to use
cfg.verbose = 1 # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}} # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output' # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True} # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
Explanation: Add cfg params
End of explanation
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
Explanation: Create network and run simulation
End of explanation
## cfg
cfg = specs.SimConfig() # object of class SimConfig to store simulation configuration
cfg.duration = 2*1e3 # Duration of the simulation, in ms
cfg.dt = 0.01 # Internal integration timestep to use
cfg.verbose = 1 # Show detailed messages
cfg.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}} # Dict with traces to record
cfg.recordStep = 0.01
cfg.filename = 'model_output' # Set file output name
cfg.saveJson = False
cfg.analysis['plotTraces'] = {'include': [0], 'saveFig': True} # Plot recorded traces for this list of cells
cfg.hParams['celsius'] = 36
Explanation: EXERCISE: We see a rebound burst! T-type calcium channels are normally considered responsible for this behavior. What happens if we set the conductance of this channel to 0?
cfg params
End of explanation
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
Explanation: Run the sim
End of explanation
netParams.popParams['HH3D_pop_hoc'] = {'cellType': 'PYR', 'numCells': 1, 'cellModel': 'HH3D_hoc'}
sim.createSimulateAnalyze(netParams = netParams, simConfig = cfg)
%matplotlib inline
sim.analysis.plotShape(includePre = [], includePost=['HH3D_pop_hoc'], showSyns=False, figSize=(4,9), dist=0.8, saveFig=True)
Explanation: (4) Plotting Morphology
End of explanation
netParams.sizeX = 200
Explanation: EXERCISE: Try plotting the morphology of other cell models
(5) Making a Network
EXERCISE: To begin creating a network, specify the geometry of the area you would like to model.
End of explanation
netParams.propVelocity = 100.0 # propagation velocity (um/ms)
netParams.probLengthConst = 150.0 # length constant for conn probability (um)
Explanation: Now let's set the propagation velocity and length constant:
End of explanation
netParams.synMechParams['exc'] = {'mod': 'Exp2Syn', 'tau1': 0.8, 'tau2': 5.3, 'e': 0} # NMDA synaptic mechanism
netParams.synMechParams['inh'] = {'mod': 'Exp2Syn', 'tau1': 0.6, 'tau2': 8.5, 'e': -75} # GABA synaptic mechanism
Explanation: EXERCISE: Now establish a few populations of cells
Now we need some synaptic mechanism parameters
End of explanation
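One possible answer for the populations exercise, written against a plain dict so it runs standalone; in the notebook these assignments would go to netParams.popParams. The population names, cell counts, and depth ranges are all assumptions — only the cellType values 'E' and 'I' are taken from the connectivity and stimulation rules used later in this notebook.

```python
# Stand-in for netParams.popParams; numbers and yRange values are illustrative.
popParams = {}
popParams['E2'] = {'cellType': 'E', 'numCells': 50, 'yRange': [100, 300], 'cellModel': 'HH'}
popParams['E4'] = {'cellType': 'E', 'numCells': 50, 'yRange': [300, 600], 'cellModel': 'HH'}
popParams['I2'] = {'cellType': 'I', 'numCells': 25, 'yRange': [100, 300], 'cellModel': 'HH'}

print(sorted(popParams))
```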
netParams.stimSourceParams['bkg'] = {'type': 'NetStim', 'rate': 40, 'noise': 0.3}
Explanation: Add some network stimulation parameters
End of explanation
netParams.stimTargetParams['bkg->all'] = {'source': 'bkg',
'conds': {'cellType': ['E','I']},
'weight': 10.0, 'sec': 'soma',
'delay': 'max(1, normal(5,2))',
'synMech': 'exc'}
Explanation: EXERCISE: modify the line below such that your stim object can target the populations in your network
End of explanation
netParams.connParams['E->all'] = {
'preConds': {'cellType': 'E'}, 'postConds': {'y': [100,1000]}, # E -> all (100-1000 um)
'probability': 0.1 , # probability of connection
'weight': '5.0*post_ynorm', # synaptic weight
'delay': 'dist_3D/propVelocity', # transmission delay (ms)
'synMech': 'exc'} # synaptic mechanism
Explanation: Add cell connectivity rules
EXERCISE: modify the lines below to fit your network
End of explanation
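The rule above only covers excitatory projections, so the 'inh' mechanism defined earlier goes unused. A hypothetical inhibitory rule in the same style might look like the following (the probability expression, weight, and conds are assumptions); in the notebook it would be assigned to netParams.connParams.

```python
# Stand-in for netParams.connParams; all numbers are illustrative.
connParams = {}
connParams['I->E'] = {
    'preConds': {'cellType': 'I'},
    'postConds': {'cellType': 'E'},
    'probability': '0.4*exp(-dist_3D/probLengthConst)',  # decays with distance
    'weight': 1.0,
    'delay': 'dist_3D/propVelocity',
    'synMech': 'inh'}

print(connParams['I->E']['synMech'])
```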
cfg.analysis['plot2Dnet'] = {'saveFig': True} # plot 2D cell positions and connections
cfg.analysis['plotConn'] = {'saveFig': True} # plot connectivity matrix
Explanation: EXERCISE: Add the appropriate line(s) to run the network and plot a 2D representation of your network w/ connectivity between cells
End of explanation |
10,931 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I want to use the pandas apply() instead of iterating through each row of a dataframe, which from my knowledge is the more efficient procedure. | Problem:
import numpy as np
import pandas as pd
a = np.arange(4)
df = pd.DataFrame(np.repeat([1, 2, 3, 4], 4).reshape(4, -1))
df = pd.DataFrame(df.values - a[:, None], df.index, df.columns) |
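For reference, the broadcasting one-liner above is equivalent both to `DataFrame.sub` aligned on the index and to an `apply` over columns — all three subtract `a[i]` from every entry of row `i`:

```python
import numpy as np
import pandas as pd

a = np.arange(4)
df = pd.DataFrame(np.repeat([1, 2, 3, 4], 4).reshape(4, -1))

via_broadcast = pd.DataFrame(df.values - a[:, None], df.index, df.columns)
via_sub = df.sub(a, axis=0)                # align `a` along the index
via_apply = df.apply(lambda col: col - a)  # apply() works column-wise by default

assert via_broadcast.equals(via_sub)
assert via_broadcast.equals(via_apply)
```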
10,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The first objective of this notebook is to implement the next function (to extract sample intervals from the total period).
Step1: Let's define the parameters as constants, just to do some scratch work.
Step3: The amount of samples to be generated would be (train_time - base_time) * num_companies / step. There are days_ahead market days left, only for target values, so the total "used" period is train_time + days_ahead.
The option of training with all, one, or some companies can be done by the user when it inputs the data (just filter data_df to get the companies you want). Anyway, one interesting choice would be to allow the training with multiple companies, targeting only one. That would multiply the features by the number of available companies, but would reduce the samples a lot. By now, I want to keep the complexity low, so I won't implement that idea, yet. A many to many approach could also be implemented (the target would be the vector with all the companies data). I will start with the simple "one to one".
Step4: One important thing to note
Step5: Is that initial date correct?
Step6: Ok, it looks so.
Let's split now!
I should allow for different feature extraction functions to be used, after the time divisions.
Step7: Let's define a function that takes a "sample blob" and produces one sample per symbol, only for the "Close" feature (looks like the easiest to do first). The dates in the base period should be substituted by an index, and the symbols shuffled later (along with their labels).
Step8: It is important to take care of the NaN values. Possibly at this sample_blob level is a good point to do so; just discard too bad samples.
Step9: Let's create the samples divider function
Step10: So, I have everything to define the final function of this notebook
Step11: Let's try the function as it was saved in the package
Step12: Looks good
Sometimes, it may be useful to keep the dates information...
Step13: That would be the way to go
Step14: Let's try the whole function, with shuffle (it's better to do it early, so that I won't forget later and get some artificial results), but keeping the index.
Step15: Let's test the "final" (you never know...) function in its module
Step17: Nice!
I will try to modify the add_market_days function to make it return a shift in real days instead of an index shift (that takes into account the possible duplicates, that are very common in some of the approaches I will follow) | Python Code:
def generate_train_intervals(data_df, train_time, base_time, step, days_ahead, today):
pass
Explanation: The first objective of this notebook is to implement the next function (to extract sample intervals from the total period).
End of explanation
# I will try to keep the convention to name with the "days" suffix,
# to all the variables that represent "market days". The ones that
# represent real time will be named more arbitrarily.
train_time = 365 # In real time days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
today
Explanation: Let's define the parameters as constants, just to do some scratch work.
End of explanation
data_df.index[data_df.index <= today][-(ahead_days + 1)]
def add_market_days(base, delta, data_df):
    """base is in real time.
    delta is in market days.
    """
market_days = data_df.index
if base not in market_days:
raise Exception('The base date is not in the market days list.')
base_index = market_days.tolist().index(base)
if base_index + delta >= len(market_days):
return market_days[-1]
if base_index + delta < 0:
return market_days[0]
return market_days[base_index + delta]
# Remember the last target days are not used for training, but that is a "market days" period.
end_of_training_date = add_market_days(today, -ahead_days, data_df)
start_date = end_of_training_date - dt.timedelta(train_time)
print('Start date: %s. End of training date: %s.' % (start_date, end_of_training_date))
TARGET_FEATURE = 'Close'
Explanation: The number of samples to be generated would be roughly (train_time - base_days) * num_companies / step_days. The last ahead_days market days are reserved for target values only, so the total "used" period is train_time + ahead_days.
The option of training with all, one, or some companies can be chosen by the user when they input the data (just filter data_df to get the companies you want). One interesting choice would be to allow training with multiple companies while targeting only one. That would multiply the features by the number of available companies, but would greatly reduce the number of samples. For now, I want to keep the complexity low, so I won't implement that idea yet. A many-to-many approach could also be implemented (the target would be the vector with all the companies' data). I will start with the simple "one to one".
End of explanation
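A quick back-of-the-envelope check of that sample count, assuming roughly 252 market days inside the one-year train_time (the exact figure depends on the calendar):

```python
def estimate_num_samples(train_market_days, base_days, step_days, num_companies):
    # Number of base windows that fit when sliding by step_days, times companies.
    windows = (train_market_days - base_days) // step_days + 1
    return windows * num_companies

# ~252 market days/year, the constants above, and e.g. 500 symbols:
print(estimate_num_samples(252, 7, 7, 500))  # -> 18000
```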
def print_period(data_df):
print('Period: %s to %s.' % (data_df.index[0], data_df.index[-1]))
data_train_df = data_df[start_date:end_of_training_date]
print_period(data_train_df)
data_train_df.shape
start_target_date = add_market_days(start_date, base_days + ahead_days - 1, data_df)
data_target_df = data_df.loc[start_target_date: today,TARGET_FEATURE]
print_period(data_target_df)
data_target_df.shape
Explanation: One important thing to note: the base time is in "market days", that means that it doesn't represent a period of "real" time (the real time may vary with each base interval).
End of explanation
data_train_df.index[:10]
Explanation: Is that initial date correct?
End of explanation
date_base_ini = start_date
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
sample_blob = (data_train_df[date_base_ini: date_base_end], pd.DataFrame(data_target_df.loc[date_target]))
sample_blob[0]
target = sample_blob[1].T
target
Explanation: Ok, it looks so.
Let's split now!
I should allow for different feature extraction functions to be used, after the time divisions.
End of explanation
feat_close = sample_blob[0][TARGET_FEATURE]
feat_close.index = np.arange(feat_close.shape[0])
feat_close
target.index = ['target']
target
x_y_samples = feat_close.append(target)
x_y_samples
x_y_samples_shuffled = x_y_samples.T.sample(frac=1).reset_index(drop=True)
x_y_samples_shuffled.head()
Explanation: Let's define a function that takes a "sample blob" and produces one sample per symbol, only for the "Close" feature (looks like the easiest to do first). The dates in the base period should be substituted by an index, and the symbols shuffled later (along with their labels).
End of explanation
x_y_samples_shuffled.isnull().sum()
x_y_samples_filtered = x_y_samples_shuffled.dropna(axis=0, how='any')
print(x_y_samples_filtered.shape)
x_y_samples_filtered.isnull().sum()
# At some point I will have to standardize those values... (not now, but just as a reminder...)
std_samples = x_y_samples_shuffled.apply(lambda x: x / np.mean(x), axis=1)
std_samples.head()
features = std_samples.iloc[:,:-1]
features.head()
target = pd.DataFrame(std_samples.iloc[:,-1])
target.head()
Explanation: It is important to take care of the NaN values. Possibly at this sample_blob level is a good point to do so; just discard too bad samples.
End of explanation
TARGET_FEATURE = 'Close'
def feature_close_one_to_one(sample_blob):
target = sample_blob[1].T
feat_close = sample_blob[0][TARGET_FEATURE]
feat_close.index = np.arange(feat_close.shape[0])
target.index = ['target']
x_y_samples = feat_close.append(target)
x_y_samples_shuffled = x_y_samples.T.sample(frac=1).reset_index(drop=True)
x_y_samples_filtered = x_y_samples_shuffled.dropna(axis=0, how='any')
return x_y_samples_filtered
print(feature_close_one_to_one(sample_blob).shape)
feature_close_one_to_one(sample_blob).head()
date_base_ini = start_date
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
feat_tgt_df = pd.DataFrame()
while date_base_end < end_of_training_date:
sample_blob = (data_train_df[date_base_ini: date_base_end],
pd.DataFrame(data_target_df.loc[date_target]))
feat_tgt_blob = feature_close_one_to_one(sample_blob) # TODO: Change for a generic function
feat_tgt_df = feat_tgt_df.append(feat_tgt_blob, ignore_index=True)
date_base_ini = add_market_days(date_base_ini, step_days, data_df)
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
# print('Start: %s, End:%s' % (date_base_ini, date_base_end))
feat_tgt_df = feat_tgt_df.sample(frac=1).reset_index(drop=True)
X_df = feat_tgt_df.iloc[:,:-1]
y_df = pd.DataFrame(feat_tgt_df.iloc[:,-1])
print(X_df.shape)
X_df.head()
print(y_df.shape)
y_df.head()
Explanation: Let's create the samples divider function
End of explanation
def generate_train_intervals(data_df, train_time, base_days, step_days, ahead_days, today, blob_fun):
end_of_training_date = add_market_days(today, -ahead_days, data_df)
start_date = end_of_training_date - dt.timedelta(train_time)
start_target_date = add_market_days(start_date, base_days + ahead_days - 1, data_df)
data_train_df = data_df[start_date:end_of_training_date]
data_target_df = data_df.loc[start_target_date: today,TARGET_FEATURE]
date_base_ini = start_date
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
feat_tgt_df = pd.DataFrame()
while date_base_end < end_of_training_date:
sample_blob = (data_train_df[date_base_ini: date_base_end],
pd.DataFrame(data_target_df.loc[date_target]))
feat_tgt_blob = blob_fun(sample_blob)
feat_tgt_df = feat_tgt_df.append(feat_tgt_blob, ignore_index=True)
date_base_ini = add_market_days(date_base_ini, step_days, data_df)
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
# print('Start: %s, End:%s' % (date_base_ini, date_base_end))
feat_tgt_df = feat_tgt_df.sample(frac=1).reset_index(drop=True)
X_df = feat_tgt_df.iloc[:,:-1]
y_df = pd.DataFrame(feat_tgt_df.iloc[:,-1])
return X_df, y_df
train_time = 365 # In real time days
base_days = 7 # In market days
step_days = 7 # market days
ahead_days = 1 # market days
today = data_df.index[-1] # Real date
X, y = generate_train_intervals(data_df, train_time, base_days, step_days, ahead_days, today, feature_close_one_to_one)
print(X.shape)
X.head()
print(y.shape)
y.head()
%pwd
sys.path.append('../../')
import predictor.feature_extraction as fe
Explanation: So, I have everything to define the final function of this notebook
End of explanation
X, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
feature_close_one_to_one)
print(X.shape)
X.head()
print(y.shape)
y.head()
Explanation: Let's try the function as it was saved in the package:
End of explanation
x_y_samples
target = sample_blob[1].T
feat_close = sample_blob[0][TARGET_FEATURE]
x_y_samples = feat_close.append(target)
x_y_samples
x_y_samples.index = pd.MultiIndex.from_product([[x_y_samples.index[0]], np.arange(x_y_samples.shape[0])])
x_y_samples
Explanation: Looks good
Sometimes, it may be useful to keep the dates information...
End of explanation
x_y_samples.unstack().stack(0).sample(frac=1).reset_index(level=1, drop=True).head()
Explanation: That would be the way to go: the timestamp of the first day of the base period works as a global timestamp for the base period.
End of explanation
TARGET_FEATURE = 'Close'
def feature_close_one_to_one(sample_blob):
target = sample_blob[1].T
feat_close = sample_blob[0][TARGET_FEATURE]
x_y_samples = feat_close.append(target)
x_y_samples.index = pd.MultiIndex.from_product([[x_y_samples.index[0]],
np.arange(x_y_samples.shape[0])])
x_y_samples_shuffled = x_y_samples.unstack().stack(0).sample(frac=1).reset_index(level=1, drop=True)
x_y_samples_filtered = x_y_samples_shuffled.dropna(axis=0, how='any')
return x_y_samples_filtered
print(feature_close_one_to_one(sample_blob).shape)
feature_close_one_to_one(sample_blob).head()
def generate_train_intervals(data_df, train_time, base_days, step_days, ahead_days, today, blob_fun):
end_of_training_date = add_market_days(today, -ahead_days, data_df)
start_date = end_of_training_date - dt.timedelta(train_time)
start_target_date = add_market_days(start_date, base_days + ahead_days - 1, data_df)
data_train_df = data_df[start_date:end_of_training_date]
data_target_df = data_df.loc[start_target_date: today, TARGET_FEATURE]
date_base_ini = start_date
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
feat_tgt_df = pd.DataFrame()
while date_base_end < end_of_training_date:
sample_blob = (data_train_df[date_base_ini: date_base_end],
pd.DataFrame(data_target_df.loc[date_target]))
feat_tgt_blob = blob_fun(sample_blob)
feat_tgt_df = feat_tgt_df.append(feat_tgt_blob)
date_base_ini = add_market_days(date_base_ini, step_days, data_df)
date_base_end = add_market_days(date_base_ini, base_days - 1, data_df)
date_target = add_market_days(date_base_end, ahead_days, data_df)
# print('Start: %s, End:%s' % (date_base_ini, date_base_end))
feat_tgt_df = feat_tgt_df.sample(frac=1)
X_df = feat_tgt_df.iloc[:,:-1]
y_df = pd.DataFrame(feat_tgt_df.iloc[:,-1]).rename(columns={7:'target'})
return X_df, y_df
from time import time
tic = time()
X, y = generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
feature_close_one_to_one)
toc = time()
print('Elapsed time: %i seconds.' % (toc-tic))
print(X.shape)
X.head(10)
print(y.shape)
y.head(10)
Explanation: Let's try the whole function, with shuffle (it's better to do it early, so that I won't forget later and get some artificial results), but keeping the index.
End of explanation
sys.path.append('../../')
import predictor.feature_extraction as fe
X, y = fe.generate_train_intervals(data_df,
train_time,
base_days,
step_days,
ahead_days,
today,
feature_close_one_to_one)
print(X.shape)
X.head(10)
print(y.shape)
y.head(10)
Explanation: Let's test the "final" (you never know...) function in its module
End of explanation
data_df
base = data_df.index[0]
delta = 252
market_days = np.unique(data_df.sort_index().index)
len(market_days)
def add_market_days(base, delta, data_df):
    """base is in real time.
    delta is in market days.
    """
market_days = data_df.index
if base not in market_days:
raise Exception('The base date is not in the market days list.')
base_index = market_days.tolist().index(base)
if base_index + delta >= len(market_days):
return market_days[-1]
if base_index + delta < 0:
return market_days[0]
return market_days[base_index + delta]
Explanation: Nice!
I will try to modify the add_market_days function to make it return a shift in real days instead of an index shift (that takes into account the possible duplicates, that are very common in some of the approaches I will follow)
End of explanation |
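One reading of that planned modification — collapsing duplicate index entries with np.unique (as explored above) so the shift counts distinct market days and returns a calendar date — could look like this sketch; the function name and demo data are assumptions:

```python
import numpy as np
import pandas as pd

def add_market_days_unique(base, delta, data_df):
    """Shift `base` by `delta` distinct market days and return the resulting
    calendar date. Duplicate index entries (e.g. one row per symbol per date)
    are collapsed first, unlike the index-based version above.
    """
    market_days = np.unique(data_df.sort_index().index.values)
    if base not in market_days:
        raise Exception('The base date is not in the market days list.')
    base_index = int(np.searchsorted(market_days, base))
    new_index = int(np.clip(base_index + delta, 0, len(market_days) - 1))
    return market_days[new_index]

# Tiny demo with duplicated dates (two symbols per day):
idx = pd.to_datetime(['2014-01-02', '2014-01-02', '2014-01-03',
                      '2014-01-03', '2014-01-06'])
demo_df = pd.DataFrame({'Close': range(5)}, index=idx)
print(add_market_days_unique(np.datetime64('2014-01-02'), 2, demo_df))
```

Out-of-range shifts clamp to the first/last market day, matching the behavior of the original function.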
10,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Verily Life Sciences LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https
Step1: Helper methods for visualization
Step2: 1. Define the trial
Choose the sites
A trial specification consists a list of sites, together with various properties of the sites.
For this demo, we read demonstration data embedded in the Baseline Site Selection Tool Python package. Specifically, this information is loaded from the file demo_data/site_list1.csv. Each row of this file contains the name of a site, as well as the detailed information about the trial. In this illustrative example, we pick sites in real US counties. Each column contains the following information
Step3: Choose trial parameters
The trial requires a number of parameters that have to be specified to be able to simulate what will happen in the trial
Step4: 2. Load incidence forecasts
We load historical incidence data from COVID-19 Open Data and forecasts from COVID-19 Forecast Hub.
We note that there are a set of caveats when using the CDC models that should be considered when using these for trial planning
Step5: 3. Simulate the trial
Now that we've specified how the trial works, we can compute how the trial will turn out given the incidence forecasts you've specified. We do this by first imagining what sampling what incidence will be at all locations simultaneously. For any given fully-specified scenario, we compute how many participants will be under observation at any given time in any given location (in any given combination of demographic buckets), then based on the specified local incidence we compute how many will become infected, and how many will produce clinical events.
Here we assume that the incidence trajectories of different locations are drawn at random from the available forecasts. Other scenario-generation methods in sim_scenarios support more complex approaches. For example, we may be highly uncertain about the incidence at each site, but believe that if incidence is high at a site, then it will also be high at geographically nearby sites. If this is the case then the simulation should not choose forecasts independently at each site but instead should take these correlations into account. The code scenario-generating methods in sim_scenarios allows us to do that.
Step6: 4. Optimize the trial
The simulations above supposed that all sites are activated as soon as possible (i.e. site_activation is identically 1). Now that we have shown the ability to simulate the outcome of the trial, we can turn it into a mathematical optimization problem.
Given the parameters of the trial within our control, how can we set those parameters to make the trial most likely to succeed or to succeed as quickly as possible?
We imagine the main levers of control are which sites to activate or which sites to prioritize activating, and this is what is implemented here.
However, the framework we have developed is very general and could be extended to just about anything you control which you can predict the impact of. For example,
* If you can estimate the impact of money spent boosting recruitment of high-risk participants, we could use those estimates to help figure out how to best allocate a fixed budget.
* If you had requirements for the number of people infected in different demographic groups, we could use those to help figure out how to best allocate doses between sites with different population characteristics.
The optimization algorithms are implemented in JAX, a python library that makes it possible to differentiate through native python and numpy functions. The flexibility of the language makes it possible to compose a variety of trial optimization scenarios and then to write algorithms that find optima. There are a number of technical details in how the optimization algorithms are written that will be discussed elsewhere.
Example
Step7: Plot the resulting sites
Now we can plot the activations for the resulting sites. Only a subset of the original sites are activated in the optimized plan. Comparing the distributions for the time to success for the optimized sites to those in the original trial plan (all sites activated), the optimized plan will save a bit of time if the vaccine efficacy is low. If the vaccine efficacy is high, then just getting as many participants as possible as quickly as possible is optimal.
Step8: Example
Step9: Plot the resulting sites
This time only 53 of 146 sites are activated. The slower recruitment costs us 1-2 weeks until the trial succeeds (depending on vaccine efficacy). In exchange, we don't need to activate as many sites, and we end up with a greater proportion of participants who are elderly, black, or hispanic (dropping from 55.7% to 45.6% young white).
Step10: Example | Python Code:
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('ticks')
import functools
import importlib.resources
import numpy as np
import os
import pandas as pd
pd.plotting.register_matplotlib_converters()
import xarray as xr
from IPython.display import display
# bsst imports
from bsst import demo_data
from bsst import io as bsst_io
from bsst import util
from bsst import optimization
from bsst import sim
from bsst import sim_scenarios
from bsst import public_data
Explanation: Copyright 2020 Verily Life Sciences LLC
Use of this source code is governed by a BSD-style
license that can be found in the LICENSE file or at
https://developers.google.com/open-source/licenses/bsd
Trial Specification Demo
The first step to use the Baseline Site Selection Tool is to specify your trial.
All data in the Baseline Site Selection Tool is stored in xarray.DataArray datasets. This is a convenient datastructure for storing multidimensional arrays with different labels, coordinates or attributes. You don't need to have any expertise with xr.Datasets to use the Baseline Site Selection Tool. The goal of this notebook is to walk you through the construction of the dataset that contains the specification of your trial.
This notebook has several sections:
1. Define the Trial. In this section you will load all aspects of your trial, including the trial sites, the expected recruitment demographics for each trial site (e.g. from a census) as well as the rules for how the trial will be carried out.
2. Load Incidence Forecasts. In this section you will load forecasts for covid incidence at the locations of your trial. We highly recommend using forecasts that are as local as possible for the sites of the trial. There is significant variation in covid incidence among counties in the same state, and taking the state (province) average can be highly misleading. Here we include code to preload forecasts for county level forecasts from the US Center for Disease Control. The trial planner should include whatever forecasts they find most compelling.
3. Simulate the Trial Given the incidence forecasts and the trial rules, the third section will simulate the trial.
4. Optimize the Trial Given the parameters of the trial within our control, the next section asks whether we can set those parameters to make the trial meet our objective criteria, for example most likely to succeed or to succeed as quickly as possible. We have written a set of optimization routines for optimizing different types of trials.
We write out different trial plans, which you can then examine interactively in the second notebook in the Baseline Site Selection Tool. That notebook lets you visualize how the trial is proceeding at a per site level and experiment with what will happen when you turn up or down different sites.
If you have questions about how to implement these steps for your clinical trial, or there are variations in the trial specification that are not captured with this framework, please contact [email protected] for additional help.
Imports
End of explanation
def plot_participants(participants):
time = participants.time.values
util.sum_all_but_dims(['time'], participants).cumsum('time').plot()
plt.title('Participants recruited (both control and treatment arm)')
plt.xlim(time[0], time[-1])
plt.ylim(bottom=0)
plt.show()
def plot_events(events):
time = events.time.values
events.cumsum('time').plot.line(x='time', color='k', alpha=.02, add_legend=False)
for analysis, num_events in c.needed_control_arm_events.to_series().items():
plt.axhline(num_events, linestyle='--')
plt.text(time[0], num_events, analysis, ha='left', va='bottom')
plt.ylim(0, 120)
plt.xlim(time[0], time[-1])
plt.title(f'Control arm events\n{events.scenario.size} simulated scenarios')
plt.show()
def plot_success(c, events):
time = c.time.values
success_day = xr.DataArray(util.success_day(c.needed_control_arm_events, events),
coords=(events.scenario, c.analysis))
fig, axes = plt.subplots(c.analysis.size, 1, sharex=True)
step = max(1, int(np.timedelta64(3, 'D') / (time[1] - time[0])))
bins = mpl.units.registry[np.datetime64].convert(time[::step], None, None)
for analysis, ax in zip(c.analysis.values, axes):
success_days = success_day.sel(analysis=analysis).values
        success_days = np.where(np.isnat(success_days), np.datetime64('2050-06-01'), success_days)
ax.hist(success_days, bins=bins, density=True)
ax.yaxis.set_visible(False)
# subtract time[0] to make into timedelta64s so that we can take a mean/median
median = np.median(success_days - time[0]) + time[0]
median = pd.to_datetime(median).date()
ax.axvline(median, color='r')
ax.text(time[0], 0, f'{analysis}\n{median} median', ha='left', va='bottom')
plt.xlabel('Date when sufficient statistical power is achieved')
plt.xlim(time[0], time[-1])
plt.xticks(rotation=35)
plt.show()
Explanation: Helper methods for visualization
End of explanation
with importlib.resources.path(demo_data, 'site_list1.csv') as p:
demo_data_file_path = os.fspath(p)
site_df = pd.read_csv(demo_data_file_path, index_col=0)
site_df.index.name = 'location'
site_df['start_date'] = pd.to_datetime(site_df['start_date'])
display(site_df)
# Add in information we have about each county.
site_df = pd.concat([site_df, public_data.us_county_data().loc[site_df.opencovid_key].set_index(site_df.index)], axis=1)
Explanation: 1. Define the trial
Choose the sites
A trial specification consists of a list of sites, together with various properties of each site.
For this demo, we read demonstration data embedded in the Baseline Site Selection Tool Python package. Specifically, this information is loaded from the file demo_data/site_list1.csv. Each row of this file contains the name of a site, as well as detailed information about that site. In this illustrative example, we pick sites in real US counties. Each column contains the following information:
opencovid_key. This is a key that specifies a location within COVID-19 Open Data. It is required by this schema because it is the way we join the incidence forecasts to the site locations.
capacity, the number of participants the site can recruit each week, including both control arm and treatment arms. For simplicity, we assume this is constant over time, but variable recruitment rates are also supported. (See the construction of the site_capacity array below).
start_date. This is the first date on which the site can recruit participants.
The proportion of the population in various demographic categories. For this example, we consider categories for age (over_60), ethnicity (black, hisp_lat), and comorbidities (smokers, diabetes, obese). Here we just fill in demographic information with random numbers. We assume different categories are independent, but the data structure supports complex beliefs about how different categories intersect, how much each site can enrich for different categories, and different infection risks for different categories. These are represented in the factors population_fraction, participant_fraction, incidence_scaler, and incidence_to_event_factor below. In a practical situation, we recommend that the trial planner uses accurate estimates of the populations for the different sites they are drawing from.
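The independence assumption can be sketched as an outer product of per-category fractions (the numbers below are hypothetical, not taken from the site list):

```python
import numpy as np

# Hypothetical single-site fractions; independence means the joint
# distribution is just the outer product of the per-category fractions.
age = np.array([0.7, 0.3])               # under_60, over_60
ethnicity = np.array([0.6, 0.25, 0.15])  # other, black, hisp_lat

joint = np.multiply.outer(age, ethnicity)  # shape (2, 3), sums to 1
# Each cell is e.g. P(over_60 and hisp_lat) = 0.3 * 0.15
```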
End of explanation
start_day = np.datetime64('2021-05-15')
end_day = np.datetime64('2021-10-01')
time_resolution = np.timedelta64(1, 'D')
time = np.arange(start_day, end_day + time_resolution, time_resolution)
c = xr.Dataset(coords=dict(time=time))
c['proportion_control_arm'] = 0.5
# Assume some intermediate analyses.
frac_control = float(c.proportion_control_arm)
efficacy = np.array([.55, .65, .75, .85, .95])
ctrl_events = util.needed_control_arm_events(efficacy, frac_control)
vaccine_events = (1 - efficacy) * ctrl_events * (1 - frac_control) / frac_control
ctrl_events, vaccine_events = np.round(ctrl_events), np.round(vaccine_events)
efficacy = 1 - (vaccine_events / ctrl_events)
total_events = ctrl_events + vaccine_events
analysis_names = [
f'{int(t)} total events @{int(100 * e)}% VE' for t, e in zip(total_events, efficacy)
]
c['needed_control_arm_events'] = xr.DataArray(
ctrl_events, dims=('analysis',)).assign_coords(analysis=analysis_names)
c['recruitment_type'] = 'default'
c['observation_delay'] = int(np.timedelta64(28, 'D') / time_resolution) # 28 days
c['trial_size_cap'] = 30000
# convert weekly capacity to capacity per time step
site_capacity = site_df.capacity.to_xarray() * time_resolution / np.timedelta64(7, 'D')
site_capacity = site_capacity.broadcast_like(c.time).astype('float')
# Can't recruit before the activation date
activation_date = site_df.start_date.to_xarray()
for l in activation_date.location.values:
date = activation_date.loc[l]
site_capacity.loc[site_capacity.time < date, l] = 0.0
c['site_capacity'] = site_capacity.transpose('location', 'time')
c['site_activation'] = xr.ones_like(c.site_capacity)
# For the sake of simplicity, this code assumes black and hisp_lat are
# non-overlapping, and that obese/smokers/diabetes are non-overlapping.
frac_and_scalar = util.fraction_and_incidence_scaler
fraction_scalers = [
frac_and_scalar(site_df, 'age', ['over_60'], [1], 'under_60'),
frac_and_scalar(site_df, 'ethnicity', ['black', 'hisp_lat'], [1, 1],
'other'),
frac_and_scalar(site_df, 'comorbidity', ['smokers', 'diabetes', 'obese'],
[1, 1, 1], 'none')
]
fractions, incidence_scalers = zip(*fraction_scalers)
# We assume that different categories are independent (e.g. the proportion of
# smokers over 60 is the same as the proportion of smokers under 60)
c['population_fraction'] = functools.reduce(lambda x, y: x * y, fractions)
# We assume the participants are drawn uniformly from the population.
c['participant_fraction'] = c['population_fraction']
# Assume some boosted incidence risk for subpopulations. We pick random numbers
# here, but in actual use you'd put your best estimate for the incidence risk
# of each demographic category.
# Since we assume participants are uniformly drawn from the county population,
# this actually doesn't end up affecting the estimated number of clinical events.
c['incidence_scaler'] = functools.reduce(lambda x, y: x * y,
incidence_scalers)
c.incidence_scaler.loc[dict(age='over_60')] = 1 + 2 * np.random.random()
c.incidence_scaler.loc[dict(comorbidity=['smokers', 'diabetes', 'obese'])] = 1 + 2 * np.random.random()
c.incidence_scaler.loc[dict(ethnicity=['black', 'hisp_lat'])] = 1 + 2 * np.random.random()
# We assume a constant incidence_to_event_factor.
c['incidence_to_event_factor'] = 0.6 * xr.ones_like(c.incidence_scaler)
util.add_empty_history(c)
Explanation: Choose trial parameters
The trial requires a number of parameters that must be specified before we can simulate what will happen in the trial. These include:
trial_size_cap: the maximum number of participants in the trial (includes both control and treatment arms)
start_day and end_day: the boundaries of the time period we will simulate.
proportion_control_arm: what proportion of participants are in the control arm. It's assumed that the control arm is uniformly distributed across locations and time (e.g. at each location on each day, half of the recruited participants are assigned to the control arm).
needed_control_arm_events: the number of events required in the control arm of the trial at various intermediate analysis points. For this example we assume intermediate analyses which would demonstrate a vaccine efficacy of about 55%, 65%, 75%, 85%, or 95%.
observation_delay: how long after a participant is recruited before they contribute an event. This is measured in the same time units as your incidence forecasts. Here we assume 28 days.
site_capacity and site_activation: the number of participants each site could recruit if it were activated, and whether each site is activated at any given time. Here we assume each site as a constant weekly capacity, but time dependence can be included (e.g. to model ramp up of recruitment).
population_fraction, participant_fraction, and incidence_scaler: the proportion of the general population and the proportion of participants who fall into different demographic categories at each location, and the infection risk factor for each category. These three are required to translate an overall incidence forecast for the population into the incidence forecast for your control arm.
incidence_to_event_factor: what proportion of infections lead to a clinical event. We assume a constant 0.6, but you can specify different values for different demographic categories.
These factors are specified in the datastructure below.
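The arithmetic behind these intermediate analyses can be sketched directly; the control-arm event counts below are made up (the tool derives the real ones via util.needed_control_arm_events):

```python
import numpy as np

frac_control = 0.5
efficacy = np.array([0.55, 0.75, 0.95])
ctrl_events = np.array([60.0, 30.0, 15.0])  # hypothetical counts

# Under true efficacy e, the vaccine arm sees (1 - e) times the control-arm
# event rate, scaled by the arm-size ratio (here 1:1, so the ratio is 1).
vaccine_events = (1 - efficacy) * ctrl_events * (1 - frac_control) / frac_control
implied_ve = 1 - vaccine_events / ctrl_events  # recovers the efficacy values
```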
End of explanation
# Extrapolate out a bit extra to ensure we're within bounds when we interpolate later.
full_pred = public_data.fetch_cdc_forecasts([('COVIDhub-ensemble', '2021-05-10'),
('COVIDhub-baseline', '2021-05-10')],
end_date=c.time.values[-1] + np.timedelta64(15, 'D'),
num_samples=50)
full_gt = public_data.fetch_opencovid_incidence()
# Suppose we only have ground truth through 2021-05-09.
full_gt = full_gt.sel(time=slice(None, np.datetime64('2021-05-09')))
# Include more historical incidence here for context. It will be trimmed off when
# we construct scenarios to simulate. The funny backwards range is to ensure that if
# we use weekly instead of daily resolution, we use the same day of the week as c.
time = np.arange(c.time.values[-1], np.datetime64('2021-04-01'), -time_resolution)[::-1]
incidence_model = public_data.assemble_forecast(full_gt, full_pred, site_df, time)
locs = np.random.choice(c.location.values, size=5, replace=False)
incidence_model.sel(location=locs).plot.line(x='time', color='k', alpha=.1, add_legend=False, col='location', row='model')
plt.ylim(0.0, 1e-3)
plt.suptitle('Forecast incidence at a sampling of sites', y=1.0)
pass
Explanation: 2. Load incidence forecasts
We load historical incidence data from COVID-19 Open Data and forecasts from COVID-19 Forecast Hub.
We note several caveats that should be considered when using the CDC models for trial planning:
* Forecasts are only available for US counties. Hence, these forecasts will only work for US-only trials. Trials with sites outside the US will need to supplement these forecasts.
* Forecasts only go out for four weeks. Trials take much longer than four weeks to complete, when measured from site selection to logging the required number of cases in the control arm. For simplicity, here we extrapolate incidence as constant after the last point of the forecast. Here we extrapolate out to October 1, 2021.
* The forecasts from the CDC are provided with quantile estimates. Our method depends on getting representative forecasts from the model: we need a set of sample forecasts for each site which represent the set of scenarios that can occur. Ideally these scenarios will be equally probable so that we can compute probabilities by averaging over samples. To get samples from quantiles, we interpolate/extrapolate to get 100 evenly spaced quantile estimates, which we treat as representative samples.
You can of course replace these forecasts with whatever represents your beliefs and uncertainty about what will happen.
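A minimal sketch of that quantile-to-sample step, with made-up quantile levels and incidence values:

```python
import numpy as np

# Hypothetical forecast: a few reported quantiles of daily incidence.
levels = np.array([0.025, 0.25, 0.50, 0.75, 0.975])
values = np.array([1e-4, 2e-4, 3e-4, 4.5e-4, 8e-4])

# 100 evenly spaced quantiles, treated as equally probable "samples".
targets = (np.arange(100) + 0.5) / 100.0
samples = np.interp(targets, levels, values)  # np.interp clamps beyond the ends
```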
End of explanation
# incidence_flattened: rolls together all the models you've included in your ensemble, treating them as independent samples.
incidence_flattened = sim_scenarios.get_incidence_flattened(incidence_model, c)
# incidence_scenarios: chooses scenarios given the incidence curves and your chosen method of scenario-generation.
incidence_scenarios = sim_scenarios.generate_scenarios_independently(incidence_flattened, num_scenarios=100)
# compute the number of participants recruited under your trial rule
participants = sim.recruitment(c)
# compute the number of control arm events under your trial rules and incidence_scenarios.
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
# plot events and label different vaccine efficacies
plot_events(events)
# plot histograms of time to success
plot_success(c, events)
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_all_site_on.nc')
Explanation: 3. Simulate the trial
Now that we've specified how the trial works, we can compute how the trial will turn out given the incidence forecasts you've specified. We do this by first sampling what incidence will be at all locations simultaneously. For any given fully-specified scenario, we compute how many participants will be under observation at any given time in any given location (in any given combination of demographic buckets), then based on the specified local incidence we compute how many will become infected, and how many will produce clinical events.
Here we assume that the incidence trajectories of different locations are drawn at random from the available forecasts. Other scenario-generation methods in sim_scenarios support more complex approaches. For example, we may be highly uncertain about the incidence at each site, but believe that if incidence is high at a site, then it will also be high at geographically nearby sites. If this is the case then the simulation should not choose forecasts independently at each site but instead should take these correlations into account. The scenario-generating methods in sim_scenarios allow us to do that.
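The difference between independent and fully correlated scenario draws can be sketched with plain NumPy (this only illustrates the idea; the real methods live in sim_scenarios, and the sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
num_samples, num_locations, num_scenarios = 50, 6, 4

# Independent: each location draws its own forecast sample per scenario.
independent = rng.integers(num_samples, size=(num_scenarios, num_locations))

# Fully correlated: one shared sample index per scenario for all locations,
# so if one site gets a high-incidence draw, they all do.
shared = rng.integers(num_samples, size=(num_scenarios, 1))
correlated = np.broadcast_to(shared, (num_scenarios, num_locations))
```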
End of explanation
%time optimization.optimize_static_activation(c, incidence_scenarios)
Explanation: 4. Optimize the trial
The simulations above supposed that all sites are activated as soon as possible (i.e. site_activation is identically 1). Now that we have shown the ability to simulate the outcome of the trial, we can turn it into a mathematical optimization problem.
Given the parameters of the trial within our control, how can we set those parameters to make the trial most likely to succeed or to succeed as quickly as possible?
We imagine the main levers of control are which sites to activate or which sites to prioritize activating, and this is what is implemented here.
However, the framework we have developed is very general and could be extended to just about anything you control which you can predict the impact of. For example,
* If you can estimate the impact of money spent boosting recruitment of high-risk participants, we could use those estimates to help figure out how to best allocate a fixed budget.
* If you had requirements for the number of people infected in different demographic groups, we could use those to help figure out how to best allocate doses between sites with different population characteristics.
The optimization algorithms are implemented in JAX, a python library that makes it possible to differentiate through native python and numpy functions. The flexibility of the language makes it possible to compose a variety of trial optimization scenarios and then to write algorithms that find optima. There are a number of technical details in how the optimization algorithms are written that will be discussed elsewhere.
Example: Optimizing Static site activations
Suppose that the only variable we can control is which sites should be activated, and we have to make this decision at the beginning of the trial. This decision is then set in stone for the duration of the trial. To calculate this we proceed as follows:
The optimizer takes in the trial plan, encoded in the xarray c, as well as the incidence_scenarios, and finds the sites that should be activated to minimize the time to success of the trial. The algorithm modifies c in place, so that after it runs, c has its site activations set on or off in accordance with the optimization.
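As a toy illustration of the decision problem (not the actual algorithm, which is gradient-based and simulates full scenarios), a brute-force search over a handful of hypothetical sites looks like this:

```python
import itertools
import numpy as np

# Hypothetical expected control-arm events per day for five sites.
rates = np.array([0.9, 0.4, 0.7, 0.2, 0.5])
needed_events, budget = 60.0, 3  # need 60 events, may activate 3 sites

def days_to_success(sites):
    # Time to accumulate the needed events if only these sites are active.
    return needed_events / rates[list(sites)].sum()

# Try every subset of `budget` sites and keep the fastest one.
best = min(itertools.combinations(range(len(rates)), budget),
           key=days_to_success)  # picks the three highest-rate sites
```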
End of explanation
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]
# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated')
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas()
display(df.style.set_caption('Proportion of participants by age and ethnicity'))
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_optimized_static.nc')
Explanation: Plot the resulting sites
Now we can plot the activations for the resulting sites. Only a subset of the original sites are activated in the optimized plan. Comparing the distributions for the time to success for the optimized sites to those in the original trial plan (all sites activated), the optimized plan will save a bit of time if the vaccine efficacy is low. If the vaccine efficacy is high, then just getting as many participants as possible as quickly as possible is optimal.
End of explanation
def loss_fn(c):
# sum over location, time, comorbidity
# remaining dimensions are [age, ethnicity]
participants = c.participants.sum(axis=0).sum(axis=0).sum(axis=-1)
total_participants = participants.sum()
return (
optimization.negative_mean_successiness(c) # demonstrate efficacy fast
+ 0.2 * c.site_activation.mean() # turning on sites is costly
        - 0.5 * participants[1:, :].sum() / total_participants  # favor participants over 60
        - 0.5 * participants[:, 1:].sum() / total_participants  # favor Black and Hispanic participants
)
%time optimization.optimize_static_activation(c, incidence_scenarios, loss_fn)
Explanation: Example: Custom loss penalizing site activation and promoting diverse participants
Suppose we want to factor in considerations aside from how quickly the trial succeeds. In this example, we assume that activating sites is expensive, so we'd like to activate as few of them as possible, so long as it doesn't delay the success of the trial too much. Similarly, we assume that it's valuable to have a larger proportion of elderly, black, or hispanic participants, and we're willing to activate sites which can recruit from these demographic groups, even if doing so delays success a bit.
End of explanation
all_sites = c.location.values
activated_sites = c.location.values[c.site_activation.mean('time') == 1]
# Simulate the results with this activation scheme.
print(f'\n\n{len(activated_sites)} of {len(all_sites)} activated')
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
df = (participants.sum(['location', 'time', 'comorbidity']) / participants.sum()).to_pandas()
display(df.style.set_caption('Proportion of participants by age and ethnicity'))
Explanation: Plot the resulting sites
This time only 53 of 146 sites are activated. The slower recruitment costs us 1-2 weeks until the trial succeeds (depending on vaccine efficacy). In exchange, we don't need to activate as many sites, and we end up with a greater proportion of participants who are elderly, black, or hispanic (dropping from 55.7% to 45.6% young white).
End of explanation
# We put all sites in one group. We also support prioritizing sites within groupings.
# For example, if you can activate 2 sites per state per week, sites would be grouped
# according to the state they're in.
site_to_group = pd.Series(['all_sites'] * len(site_df), index=site_df.index)
decision_dates = c.time.values[:70:7]
allowed_activations = pd.DataFrame([[20] * len(decision_dates)], index=['all_sites'], columns=decision_dates)
parameterizer = optimization.PivotTableActivation(c, site_to_group, allowed_activations, can_deactivate=False)
optimization.optimize_params(c, incidence_scenarios, parameterizer)
c['site_activation'] = c.site_activation.round() # each site has to be on or off at each time
df = c.site_activation.to_pandas()
df.columns = [pd.to_datetime(x).date() for x in df.columns]
sns.heatmap(df, cbar=False)
plt.title('Which sites are activated when')
plt.show()
participants = sim.recruitment(c)
events = sim.control_arm_events(c, participants, incidence_scenarios)
plot_participants(participants)
plot_events(events)
plot_success(c, events)
sim.add_stuff_to_ville(c, incidence_model, site_df, num_scenarios=100)
!mkdir -p demo_data
bsst_io.write_ville_to_netcdf(c, 'demo_data/site_list1_prioritized.nc')
Explanation: Example: prioritizing sites
Suppose we can activate up to 20 sites each week for 10 weeks. How do we prioritize them?
End of explanation |
10,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding text (and handling it in Python)
Fiona Pigott
A Python notebook to teach you everything you never wanted to know about text encoding (specifically ASCII, UTF-8, and the difference therein but we'll explain what some others mean).
Credit to these sites for a helpful description of different file encodings
Step1: UTF-8 (most commonly used multi-byte encoding, used by Twitter)
You might be familiar with the concept of Huffman Coding (https
Step2: Now look at a multi-byte character, "GRINNING FACE WITH SMILING EYES." This guy doesn't fit in a single byte. In fact, his encoding takes 4 bytes (http
Step3: This is pretty much exactly what you get when you're looking at Tweet data. If you don't believe me, try
Step4: Rolling your own UTF-8 decoder.
This is going to be fun. And by 'fun' I mean "why are you making us do this?"
Step5: Part 2
Step6: The easy way
Step7: Just asking for a list of all of the characters doesn't work, because Python 2 assumes ASCII (1 byte per character) and splits it up appropriately.
We'd have to search all of the bytes to figure out which ones constituted emoji.
I've implemented this, https
Step8: Now if you want to search your code for "😁", you just need to know its code point (which you can find or even, if you're rather determined, derive).
Step9: Appendices
A word on other encodings, with a tiny example
There are many other encodings, such as ISO-8859-1, UTF-16, UTF-32, etc., which are less commonly used on the web; for the most part, don't worry about them. They represent a variety of other ways to map bytes -> code points and back again.
I want to show one quick example of the UTF-32 encoding, which simply assigns 1 code point per 4-byte block. I'm going to show the encoding/decoding in Python, write the encoded data to a file, and read it back.
I'm not showing this because UTF-32 is special or because you should use it. I'm showing it so you understand a little about how to work with other file encodings.
Step10: Just when you thought you knew everything about Emoji
It's worse than it seems! Well, just a little worse.
One thing that I noticed when I was cat-ing a bunch of byte strings to my screen was that some emoji (not all) were followed by either "ef b8 8e" or "ef b8 8f." I felt sad. Had I totally failed to understand how emoji work on Twitter? Was there something I was missing?
The answer is no, not really. Those pesky multibyte characters are non-display characters called "variation selectors (http | Python Code:
!printf "hi\n"
!printf "hi" | xxd -g1
!printf "hi" | xxd -b -g1
# Generate a list of all of the ASCII characters:
# 'unichr' is a built-in Python function to take a number to a unicode code point
# (I'll talk more about this and some other built-ins later)
for i in range(0,128):
print str(i) + " -> " + repr(unichr(i)) + "->" + "'" + unichr(i).encode("ascii") + "'"
# And if you try to use "ascii" encoding on a character whose value is too high:
# Hint: you've definitely seen this error before
unichr(129).encode("ascii")
Explanation: Understanding text (and handling it in Python)
Fiona Pigott
A Python notebook to teach you everything you never wanted to know about text encoding (specifically ASCII, UTF-8, and the difference therein but we'll explain what some others mean).
Credit to these sites for a helpful description of different file encodings:
- https://en.wikipedia.org/wiki/UTF-8
- http://stackoverflow.com/questions/700187/unicode-utf-ascii-ansi-format-differences
- http://csharpindepth.com/Articles/General/Unicode.aspx
And to these pages for a better understanding of emoji specifically:
- http://apps.timwhitlock.info/emoji/tables/unicode
- http://unicode.org/charts/PDF/UFE00.pdf
- http://www.unicode.org/Public/emoji/2.0/emoji-data.txt (a list of all of the official emoji)
And if you got here from Data-Science-45-min intros, check out https://github.com/fionapigott/emoji-counter for this tutorial and (a little) more.
Part 0: What do you mean by "text encoding"?
A text encoding is a scheme that allows us to convert between binary (stored on your computer) and a character that you can display and make sense of. A text encoding does not define a font.
When I say "character" I mean "unicode code point." Code point -> character is a 1->1 mapping of meaning. A font just decides how to display that character. Each emoji has a code point assigned to it by the Unicode Consortium, and "GRINNING FACE WITH SMILING EYES" should be a grinning face with smiley eyes on any platform. Windows Wingdigs, if you remember that regrettable period, is a font.
I'm going to use "code point" and "character" a little bit interchangeably. If you can get the code point represented by a string of bits, you can figure out what character it represents.
Decode = convert binary data to a code point
Encode = convert a code point (a big number) to binary data that you can write somewhere
You code will always:
- Ingest binary data (say, my_tweets.txt)
- Decode that data into characters (whether or not you have to type decode.)
- Encode that data so that you can write it again (whether or not you type encode. You can't write "128513" to a single byte.)
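As a minimal sketch of that decode/encode round trip (written in Python 3 syntax for brevity, while the rest of this notebook uses Python 2):

```python
data = b"\xf0\x9f\x98\x81"           # raw bytes, e.g. read from a file

text = data.decode("utf-8")          # decode: bytes -> code points
assert ord(text) == 0x1F601          # GRINNING FACE WITH SMILING EYES

round_trip = text.encode("utf-8")    # encode: code points -> bytes
assert round_trip == data            # four bytes in, four bytes out
```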
Part 1: Text encodings are not magic
Get the actual data of the file in your terminal with xxd.
I want to spend a few minutes convincing you of what I'm about to say about text encoding in Python.
We're spending time in the terminal to make it painfully, horribly clear that text encoding/decoding is not some Python thing, but rather exactly what every text-display program does every time you convert some binary (stored on your computer) to something that you can read.
ASCII
ASCII is a character-encoding scheme where each character fits in exactly 1 byte--8 bits. ASCII, however, uses only the bottom 7 bits of an 8-bit byte, and thus can take only 2^7 (128) values. The value of the byte that encodes a character is exactly that character's code point.
"h" and "i" are both ascii characters--they fit in one byte in the ascii encoding scheme.
End of explanation
# Example: \xf0 is the leading byte for a 4-character emoji:
print bin(ord('\xf0'))
# And it has 4 1s!
print "Count the 1s at the beginning of the bit string: 4!"
Explanation: UTF-8 (most commonly used multi-byte encoding, used by Twitter)
You might be familiar with the concept of Huffman Coding (https://en.wikipedia.org/wiki/Huffman_coding). Huffman coding is a way of losslessly compressing data by encoding the most common values with the least amount of information. A Huffman coding tree of the English language might, for example, assign "e" a value of a single bit.
UTF-8 encoding is similar to a Huffman encoding. ASCII-compatible characters are encoded exactly the same way (a file that is UTF-8 encoded but contains only the 128 ASCII-compatible characters is effectively ASCII encoded). This way, those common characters occupy only one byte. All further characters are encoded in multiple bytes.
The multibyte encoding scheme works like this:
- The number of leading 1s in the first byte maps to the length of the character in bytes.
- Each following byte in the multibyte sequence begins with '10'
- The value of the unicode code point is encoded in all of the unused bits. That is, every bit that isn't either a leading '1' of the first byte, or a leading '10' of the following bytes.
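That scheme can be written out by hand for a 4-byte character. This sketch uses Python 3 bytes for brevity, and utf8_encode_4byte is just a made-up helper name:

```python
def utf8_encode_4byte(cp):
    # Sketch for code points needing 4 bytes (U+10000 .. U+10FFFF):
    # 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx, x's filled from the code point.
    assert 0x10000 <= cp <= 0x10FFFF
    return bytes([
        0b11110000 | (cp >> 18),           # leading byte: four 1s, top 3 bits
        0b10000000 | ((cp >> 12) & 0x3F),  # continuation bytes: '10' + 6 bits
        0b10000000 | ((cp >> 6) & 0x3F),
        0b10000000 | (cp & 0x3F),
    ])

# GRINNING FACE WITH SMILING EYES, U+1F601 -> f0 9f 98 81
assert utf8_encode_4byte(0x1F601) == b"\xf0\x9f\x98\x81"
```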
End of explanation
!printf "😁\n"
!printf "😁" | xxd -g1
!printf "😁" | xxd -b -g1
# Figure out what some weird emoji is:
# https://twitter.com/jrmontag/status/677621827410255872
!printf "📊" | xxd -g1
# https://twitter.com/SRKCHENNAIFC/status/677894680303017985
!printf "❤️" | xxd -g1
Explanation: Now look at a multi-byte character, "GRINNING FACE WITH SMILING EYES." This guy doesn't fit in a single byte. In fact, his encoding takes 4 bytes (http://apps.timwhitlock.info/unicode/inspect/hex/1F601).
End of explanation
text = !cat test_tweet.json | xxd -g1 | grep "f0 9f 98 81"
for line in text:
print line
# position of the emoji in bytes:
start = int(text[0][0:7],16)
end = int(text[0][0:7],16) + 16
print "Run the following to cat out just the first line of bytes from the hexdump:"
print "!head -c{} test_tweet.json | tail -c{}".format(end, end-start)
Explanation: This is pretty much exactly what you get when you're looking at Tweet data. If you don't believe me, try:
End of explanation
# Get the bits!
!printf "😁" | xxd -b -g1
# We're gonna use this in a minute
byte_string_smiley = !printf "😁" | xxd -b -g1
bytes = byte_string_smiley[0].split(" ")[1:5]
print bytes
first_byte = bytes[0]
print "The 1st byte: {}".format(first_byte)
length_of_char = 0
b = 0
while first_byte[b] == '1':
length_of_char += 1
b += 1
print "The character length in bytes, calculated using the 1st byte: {}".format(length_of_char)
print "The remaining bits in the first byte: {}".format(first_byte[b:])
print "The non-'leading 10' bits in the next 3 bytes: {}".format([x[2:] for x in bytes[1:]])
print "The bits of the code point: {}".format(
[first_byte[b:]]+[x[2:] for x in bytes[1:]])
code_point_bits = "".join([first_byte[b:]]+[x[2:] for x in bytes[1:]])
print "The bit string of the code point: {}".format(code_point_bits)
code_point_int = int(code_point_bits,2)
print "The code point is: {} (or in hex {})".format(code_point_int, hex(code_point_int))
print "And the character is: {}".format(unichr(code_point_int).encode("utf-8"))
print "Phew!"
Explanation: Rolling your own UTF-8 decoder.
This is going to be fun. And by 'fun' I mean "why are you making us do this?"
End of explanation
# The 'rb' option to open (or mode = 'rb' to fileinput.FileInput)
# this means, "read in the file as a byte string." Basically, exactly what you get from
# the xxd hexdump
f = open("test.txt", 'rb')
# read the file (the whole file is one emoji character)
test_emoji = f.read().strip()
bytes = []
bits = []
code_point = test_emoji.decode("utf-8")
print code_point
code_point_integer = ord(code_point)
for byte in test_emoji:
bytes.append(byte)
bits.append(bin(ord(byte)).lstrip("0b"))
print "The Unicode code point: {}".format([code_point])
print "Integer value of the unicode code point: hex: {}, decimal: {}".format(
hex(code_point_integer), code_point_integer)
print "The bytes (hex): {}".format(bytes)
print "The bytes (decimal): {}".format([ord(x) for x in bytes])
print "Each byte represented in bits: {}".format(bits)
f.close()
Explanation: Part 2: But what if I like magic?
Getting Python (2) to help you out with this.
The hard way:
The following should demonstrate to you that what we're about to do is exactly the same as what we just did, but easier.
End of explanation
!cat test.txt
g = open("test.txt")
# read the file (the whole file is one emoji character)
test_emoji = g.read().strip()
# Now, try to get a list of characters
print "list(test_emoji)"
print list(test_emoji)
Explanation: The easy way:
Now, imagine that you didn't want to have to think about bit strings every time you dealt with text data. We live in that brave new world.
The big problem that I (we, I think) have been having with emoji and multibyte characters in general is decoding them in a way that allows us to process one character at a time. I had this problem because I didn't understand what the encoding/decoding steps meant.
End of explanation
# *Now*, try to get a list of characters
print "list(test_emoji.decode('utf-8'))"
print list(test_emoji.decode('utf-8'))
print list(test_emoji.decode('utf-8'))[0]
Explanation: Just asking for a list of all of the characters doesn't work, because Python 2 assumes ASCII (1 byte per character) and splits it up appropriately.
We'd have to search all of the bytes to figure out which ones constituted emoji.
I've implemented this, https://github.com/fionapigott/emoji-counter, because I didn't realize that there was a better way. But there is!
End of explanation
# Get the code point for this weird emoji
"📊".decode("utf-8")
Explanation: Now if you want to search your code for "😁", you just need to know its code point (which you can find or even, if you're rather determined, derive).
End of explanation
print "😁"
# Remember, and this is a bit hard: that thing we just printed was encoded at UTF-8
# (that's why Chrome renders it at all)
print repr("😁")
# Get the code point, so that we can encode it again with a different scheme
code_point = "😁".decode("utf-8")
# You have to print the repr() to look at the code point value,
# otherwise 'print' will automatically encode the character to print it
print repr(code_point)
# Now encode the data as UTF32
utf32_smiley = code_point.encode("utf-32")
print repr(utf32_smiley)
print "The first 4 bytes means 'this file is UTF-32 encoded'. The next 4 are the character."
# That's a byte string--we can write it to a file
utf32_file = open("test_utf32.txt","w")
utf32_file.write(utf32_smiley)
utf32_file.close()
# No nasty Encode errors. That's good.
# Butttt, that file looks like garbage, because nothing is going to automatically
# decode that byte string as UTF-32
!cat test_utf32.txt
print "\n"
# We can still look at the bytes tho! And they should look familiar
!cat test_utf32.txt | xxd -g1
# And we can read in the file as long as we use the right decoder
utf32_file_2 = open("test_utf32.txt","rb")
code_point_back_again = utf32_file_2.read().decode("utf-32")
print code_point_back_again
Explanation: Appendices
A word on other encodings, with a tiny example
There are many other encodings, such as ISO-8859-1, UTF-16, UTF-32 etc, which are less commonly used on the web, and for the most part, don't worry about them. They represent a variety of other ways to map bytes -> code points and back again.
I want to show one quick example of the UTF-32 encoding, which simply assigns 1 code point per 4-byte block. I'm going to show the encoding/decoding in Python, write the encoded data to a file, and read it back.
I'm not showing this because UTF-32 is special or because you should use it. I'm showing it so you understand a little about how to work with other file encodings.
End of explanation
# Shoutout to Josh's RST!
def print_output(function,input_data,kwargs={}):
kwargs_repr = ",".join(["=".join([x[0], str(x[1])]) for x in kwargs.items()])
print "{}({},{}) -> {}".format(function.__name__, repr(input_data), kwargs_repr,
repr(function(input_data,**kwargs)))
# Decimal to hex:
print "Converting decimal to hex string:"
print_output(hex,240)
# hex to decimal
print "\nConverting hex to decimal:"
print_output(int,hex(240),kwargs = {"base":16})
# decimal to binary
print "\nConverting decimal to binary:"
print_output(bin,240)
# binary string to an integer
print "\nConverting binary to decimal:"
print_output(int,"11110000",kwargs = {"base":2})
# byte string representation to ordinal (unicode code point value)
print "\nConverting byte string to ordinal"
print_output(ord,"\x31")
print_output(ord,"\xF0")
# ordinal to unicode code point
print "\nConverting ordinal number to unicode code point"
print_output(unichr,49)
print_output(unichr,240)
Explanation: Just when you thought you knew everything about Emoji
It's worse than it seems! Well, just a little worse.
One thing that I noticed when I was cat-ing a bunch of byte strings to my screen was that some emoji (not all) were followed by either "ef b8 8e" or "ef b8 8f." I felt sad. Had I totally failed to understand how emoji work on Twitter? Was there something I was missing?
The answer is no, not really. Those pesky multibyte characters are non-display characters called "variation selectors (http://unicode.org/charts/PDF/UFE00.pdf)," and they change how emoji are displayed. There are lots of variation selectors (16, I think), but two apply to emoji, and they correspond to "\xef\xb8\x8e, or text style" and "\xef\xb8\x8f, or emoji style" display of the emoji characters, to allow for even more variety in a world that already allows for a normal hotel (🏨) and a "love hotel" (🏩).
Not all emoji have variants for the variation selectors, nor do all platforms bother trying to deal with them, but Twitter does. If you ever find yourself in a position where you care, here's a quick example of what they do.
You will need to open a terminal, because I couldn't find a character that would display in-notebook as both text style and emoji style.
<pre><code>
printf "\xE2\x8C\x9A"
printf "\xE2\x8C\x9A\xef\xb8\x8e"
printf "\xE2\x8C\x9A\xef\xb8\x8f"
</code></pre>
Takeaway: Variation selectors are the difference between an Apple Watch and a Timex.
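The same check can be done from Python; a quick sketch using the byte values from the printf lines above (where \xE2\x8C\x9A is U+231A WATCH):

```python
# The WATCH character followed by each emoji variation selector,
# as raw UTF-8 byte strings (same bytes as the printf examples above)
text_style  = b"\xE2\x8C\x9A\xEF\xB8\x8E"   # watch + VS-15, "text style"
emoji_style = b"\xE2\x8C\x9A\xEF\xB8\x8F"   # watch + VS-16, "emoji style"

# Decoding shows the selector is its own (invisible) code point,
# appended after the base character
print(repr(text_style.decode("utf-8")))
print(len(emoji_style.decode("utf-8")))     # 2 code points, not 1
```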
Python functions for dealing with data representations
Some of the built-in functions that I used to manipulate binary/hex/decimal representations here:
End of explanation |
10,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix into a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes R into two unitary matrices and a diagonal matrix
Step1: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R.
Step2: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
Step3: Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
Step4: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
Step5: Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction. | Python Code:
import pandas as pd
import numpy as np
movies_df = pd.read_csv('movies.csv')
movies_df['movie_id'] = movies_df['movie_id'].apply(pd.to_numeric)
movies_df.head(3)
ratings_df=pd.read_csv('ratings.csv')
ratings_df.head(3)
Explanation: Matrix Factorization via Singular Value Decomposition
Matrix factorization is the breaking down of one matrix into a product of multiple matrices. It's extremely well studied in mathematics, and it's highly useful. There are many different ways to factor matrices, but singular value decomposition is particularly useful for making recommendations.
So what is singular value decomposition (SVD)? At a high level, SVD is an algorithm that decomposes a matrix $R$ into the best lower rank (i.e. smaller/simpler) approximation of the original matrix $R$. Mathematically, it decomposes R into two unitary matrices and a diagonal matrix:
$$\begin{equation}
R = U\Sigma V^{T}
\end{equation}$$
where R is the users' ratings matrix, $U$ is the user "features" matrix, $\Sigma$ is the diagonal matrix of singular values (essentially weights), and $V^{T}$ is the movie "features" matrix. $U$ and $V^{T}$ are orthogonal, and represent different things. $U$ represents how much users "like" each feature and $V^{T}$ represents how relevant each feature is to each movie.
To get the lower rank approximation, we take these matrices and keep only the top $k$ features, which we think of as the underlying tastes and preferences vectors.
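On a toy matrix the whole idea fits in a few lines (made-up numbers here, not the MovieLens data used below):

```python
import numpy as np

# Tiny made-up ratings matrix: 4 users x 3 movies, zeros meaning "not rated"
R = np.array([[5., 4., 0.],
              [4., 0., 1.],
              [1., 1., 5.],
              [0., 2., 4.]])

U, s, Vt = np.linalg.svd(R, full_matrices=False)

# Keeping every singular value reproduces R exactly...
assert np.allclose(U @ np.diag(s) @ Vt, R)

# ...while keeping only the top k gives the best rank-k approximation
k = 2
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(R_k.shape)   # same shape as R
```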
End of explanation
R_df = ratings_df.pivot(index = 'user_id', columns ='movie_id', values = 'rating').fillna(0)
R_df.head()
Explanation: These look good, but I want the format of my ratings matrix to be one row per user and one column per movie. I'll pivot ratings_df to get that and call the new variable R.
End of explanation
R = R_df.values  # .as_matrix() is deprecated/removed in newer pandas; .values gives the numpy array
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
Explanation: The last thing I need to do is de-mean the data (normalize by each user's mean) and convert it from a dataframe to a numpy array.
End of explanation
from scipy.sparse.linalg import svds
U, sigma, Vt = svds(R_demeaned, k = 50)
Explanation: Singular Value Decomposition
Scipy and Numpy both have functions to do the singular value decomposition. I'm going to use the Scipy function svds because it lets me choose how many latent factors I want to use to approximate the original ratings matrix (instead of having to truncate it after).
End of explanation
sigma = np.diag(sigma)
Explanation: Done. The function returns exactly what I detailed earlier in this post, except that the $\Sigma$ returned is just the values instead of a diagonal matrix. This is useful, but since I'm going to leverage matrix multiplication to get predictions I'll convert it to the diagonal matrix form.
End of explanation
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = R_df.columns)
preds_df.head()
def recommend_movies(predictions_df, userID, movies_df, original_ratings_df, num_recommendations):
# Get and sort the user's predictions
user_row_number = userID - 1 # UserID starts at 1, not 0
    sorted_user_predictions = predictions_df.iloc[user_row_number].sort_values(ascending=False)  # use the parameter, not the global preds_df
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.user_id == (userID)]
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movie_id', right_on = 'movie_id').
sort_values(['rating'], ascending=False)
)
print('User {0} has already rated {1} movies.'.format(userID, user_full.shape[0]))
print('Recommending highest {0} predicted ratings movies not already rated.'.format(num_recommendations))
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['movie_id'].isin(user_full['movie_id'])].
merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left',
left_on = 'movie_id',
right_on = 'movie_id').
rename(columns = {user_row_number: 'Predictions'}).
sort_values('Predictions', ascending = False).
iloc[:num_recommendations, :-1]
)
return user_full, recommendations
already_rated, predictions = recommend_movies(preds_df,11, movies_df, ratings_df, 10)
predictions
already_rated.head(10)
Explanation: Making Predictions from the Decomposed Matrices
I now have everything I need to make movie ratings predictions for every user. I can do it all at once by following the math and matrix multiply $U$, $\Sigma$, and $V^{T}$ back to get the rank $k=50$ approximation of $R$.
I also need to add the user means back to get the actual star ratings prediction.
End of explanation |
10,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PYT-DS SAISOFT
Overview 2
Overview 3
Step1: I emphasize this is not really chess. Chess masters do not think in terms of west and east, nor is the checkerboard pattern sufficiently accessible.
Step2: LAB | Python Code:
import numpy as np
import pandas as pd
squares = np.array(list(64 * " "), dtype=str).reshape(8,8)  # np.str is deprecated; the builtin str works in old and new numpy
squares
print('♔♕♖')
squares[0][0] = '♖'
squares[7][0] = '♖'
squares[0][7] = '♖'
squares[7][7] = '♖'
squares
chessboard = pd.DataFrame(squares, index=range(1,9),
columns = ['wR','wKn', 'wB', 'K',
'Q','eB', 'eKn', 'eR' ] )
chessboard
Explanation: PYT-DS SAISOFT
Overview 2
Overview 3
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/27963484878/in/album-72157693427665102/" title="Barry at Large"><img src="https://farm1.staticflickr.com/969/27963484878_b38f0db42a_m.jpg" width="240" height="180" alt="Barry at Large"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
DATA SCIENCE WITH PYTHON
Where Have We Been, What Have We Seen?
My focus in this course is two track:
develop high level intuitions about statistical and machine learning concepts
practice with nuts and bolts tools of the trade, namely pandas, matplotlib, numpy, other visualization tools (seaborn, bokeh...), specialized versions of pandas (geopandas, basemap).
However, these two tracks are not strictly distinct, as navigating one's way through the extensive APIs associated with nuts and bolts tools, requires developing high level intuitions. These tracks are complementary and require each other.
HIGH LEVEL INTUITIONS
What are some examples of high level intuitions?
I talked at some length about long-raging debates between two schools of thought in statistics: frequentist and Bayesian. Some of these debates have been concealed from us, as the successes of Bayesian thinking, also known as subjectivist, tend to feature early electronic computers and prototypical examples of machine learning, as these were emergent in the UK and US during WW2 especially, and highly classified.
Here in 2018, we're getting more of a picture of what went on at Bletchley Park. Neal Stephenson's Cryptonomicon, a work of historical science fiction, helped break the ice around sharing these stories. I learned a lot about cryptography simply from reading about the history of RSA.
Frequentists focus on sampling sufficiently to make reliable estimates regarding a larger population, deemed approachable in the asymptote but with diminishing returns. Why sample a million people if choosing the right few hundred gives the same predictions? Find out what sampling techniques give the most bang for the buck and then consider yourself ready to predict what will happen on the larger scale. The focus is on finding correlating patterns, whether or not causation might be implied.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/40676390102/in/album-72157693427665102/" title="Dog Person"><img src="https://farm5.staticflickr.com/4799/40676390102_fe8495c60e.jpg" width="333" height="500" alt="Dog Person"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">Infrequent!</div>
Alan Turing, famously the feature of the much fictionalized The Imitation Game, was tasked with cracking military grade encryption and enlisted the aid of machines to brute force through more possible permutations in a shorter time, without mistakes, than human computers could match.
However this was not brute force in an entirely mindless sense. They had several clues, more as time went on. One begins with prior or a priori knowledge (axioms if you will, subject to revision), and during the search process itself (at runtime) the process might spontaneously choose the more promising branches for exploration.
Chess and Go players may do something similar, as it's naturally impractical to explore the entire tree of possible moves many moves ahead. A way of culling or limiting one's search, today sometimes called "back propagation" or "response to feedback", makes the term "brute force" too coarse. And yet the raw horsepower that machines bring to computation cannot be denied either.
Turing's machines were sensitive to feedback, in the sense of the children's game, where we say "warmer" or "colder" depending on whether the guided search is getting closer or further from a target. Today we hear a lot about "gradient descent" which, similar to Newton's Method, is a way of finding a local or perhaps global minimum, according to what the rates of change say. "This may be as good as it gets" in terms of face recognition. But then you may always feed in more faces.
Bayesians see themselves working on a belief system or "model" of a system, fine tuning it to match incoming data. To the extent expectations are met, so is the model considered reliable.
Icing on the cake is when a model predicts something no one expected. This proves, or at least adds credibility to the belief, that one's model might be ahead of the curve, in terms of its predictive powers.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/39098593240/in/album-72157693427665102/" title="Library Book"><img src="https://farm1.staticflickr.com/802/39098593240_ac283ee2df_n.jpg" width="213" height="320" alt="Library Book"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
The power of machines to respond to "warmer" and "colder" is the basis for many recent advances in data science, as is the faster speed of GPUs.
Our crystal balls have become better at recognizing things, for example faces in pictures, written characters, spoken words, word patterns in Tweets or scientific articles.
One might say these crystal balls are "predictive" in the sense that we anticipate they'll guess correctly when confronted with new sample data.
However, in English, we don't ordinarily consider "recognizing a dog to be a dog" as anything like divination, as in "seeing the future". From a machine's point of view, "getting it right" is a form a prediction.
PRACTICAL TOOLS
The two tools we started with were the numpy ndarray and the pandas Series type. A Series is really a 2D (or dim 2) addressing scheme, but with only a single vertical vector or column. A DataFrame lines these vertical vectors together, left to right, giving us spreadsheet-like structures already familiar from centuries of working with tabular arrangements of rows and columns, defined to form "cells".
Let's create a kind of chess board with labels reminiscent of standard chess notation. The cells or values will be Unicode, meaning we might use the appropriate chess piece glyphs.
End of explanation
string_board = chessboard.values
binary_tree = string_board.reshape(2,2,2,2,2,2)
Explanation: I emphasize this is not really chess. Chess masters do not think in terms of west and east, nor is the checkerboard pattern sufficiently accessible.
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24462714453/in/album-72157660337424600/" title="A Python Student Deploys Flask App"><img src="https://farm2.staticflickr.com/1593/24462714453_d88f762b00_n.jpg" width="291" height="320" alt="A Python Student Deploys Flask App"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
Our original ndarray (8 x 8) is now "framed" much as a canvas is framed by a possibly elaborately carved metadata apparatus. Indexes may be hierarchical, such as with years divided into months down the side, and animals into phyla across the top.
Suppose we want to stretch our chessboard values back into a string-like spaghetti strand, not even 2D? The string-like state is a base serialization of what is perhaps meant to be much higher dimensional data.
Applying the rule that if dimensions intermultiply to the same total, we may reshape between them, I will take my 8 x 8 chessboard and turn it into some binary tree like 2 x 2 x 2 x 2 x 2 x 2 data structure. Finding a Rook might take patience. I leave it for you in the Lab.
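To see the rule with plain integers instead of chess glyphs:

```python
import numpy as np

# 8 * 8 == 2**6 == 64, so these two shapes are interchangeable
a = np.arange(64).reshape(8, 8)
b = a.reshape(2, 2, 2, 2, 2, 2)

print(b.shape)                        # (2, 2, 2, 2, 2, 2)
print((b.reshape(8, 8) == a).all())   # True: the round trip is lossless
```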
End of explanation
binary_tree[0][0][0][0][0][0][0][0]
Explanation: LAB:
Find the rooks, or at least one of them. The first one is easy and I give you it for free:
End of explanation |
10,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulating Membrane Potential
The simulation scripts described in this chapter are available at STEPS_Example repository.
This chapter introduces the concept of simulating the electric potential
across a membrane in STEPS using a method that calculates electric potentials on tetrahedral meshes called 'E-Field' (see Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI
Step1: Next we define some parameters for the simulation, which are intended to remain constant throughout
the script. We start with the potassium channel and define the single-channel conductance, channel
density and reversal potential, keeping to a conductance of 0.036 S/cm2 (see Simulation with Tetexact for more on converting continuous conductance to discrete conductance)
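As a sketch of the arithmetic (the variable names here are illustrative, since the original script's identifiers are not shown in this excerpt):

```python
# Potassium channel parameters in s.i. units (illustrative names)
K_G   = 20.0e-12    # single-channel conductance: 20 pS
K_ro  = 18.0e12     # channel density per square metre (18 channels per um^2)
K_rev = -77.0e-3    # reversal potential: -77 mV

# density x single-channel conductance should recover the intended
# macroscopic conductance of 0.036 S/cm^2 (1 m^2 = 1e4 cm^2)
g_max = K_ro * K_G / 1.0e4
print(g_max)        # ~0.036 S/cm^2
```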
Step2: The first thing to note is that, as usual in STEPS, units are s.i., which means in the above example the single channel conductance is given in Siemens and
the reversal potential for the ohmic current is in volts.
Similarly, we define parameters for the sodium channel, also choosing a single-channel conductance
of 20pS
Step3: The HH model also includes a leak conductance, which may also be discretised (although another option is to use solver function steps.solver.Tetexact.setMembRes). The overall conductance is
small compared to maximal potassium and sodium conductances, but we choose a similar channel density to give
a good spatial spread of the conductance, which means a fairly low single-channel conductance
Step4: The next parameters require a little explanation. Taking the potassium conductance as an example, the
potassium density will convert to a discrete number of channels that will give (approximately) our intended
maximal conductance of 0.036 S/$cm^2$. In the molecular sense, this means that if all potassium channels
are in the 'open' conducting state then we will reach the maximal conductance. However, in fact
each individual channel can be in any one of 5 states (including the conducting state) (see figure above) and these states are
described by separate objects in the STEPS simulation (as we will see later), where the sum of populations of each state should
be equal to the total number of channels. For example, if the surface of the mesh is 100 square microns,
then by the above density we expect to have a total of 1800 potassium channels in the simulation, but at some time
we might have e.g. 400 in the n0 state, 700 in the n1 state, 500 in the n2 state, 150 in the n3 state
and 50 in the conducting n4 state, and the total at any time will be equal to 1800.
So we intend to initialise our populations of channel states to some starting value. The details of how to
calculate the initial condition will not be given here, but the factors used here are steady-state approximations for
the HH model at an initial potential of -65mV. We then give a table of fractional channel state populations (which
add up to a value of 1). For each channel state the factor multiplied by the channel density and the surface area
of the mesh will give our initial population of channels in that state
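Those fractional factors can be reproduced from the steady-state value of a single gate; a sketch assuming the standard HH 'n' gate rate equations (shifted so that rest is -65 mV):

```python
import math

# Steady-state open probability of one 'n' gate at -65 mV.
# Rates are in /ms with V in mV; any temperature factor cancels in a/(a+b).
V = -65.0
a_n = 0.01 * (10 - (V + 65.0)) / (math.exp((10 - (V + 65.0)) / 10.0) - 1)
b_n = 0.125 * math.exp(-(V + 65.0) / 80.0)
n_inf = a_n / (a_n + b_n)       # ~0.318

# With 4 independent gates per channel, the 5 states n0..n4 follow a
# binomial split, and the fractions sum to 1
K_facs = [math.comb(4, i) * n_inf**i * (1.0 - n_inf)**(4 - i) for i in range(5)]
print([round(f, 3) for f in K_facs])
```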
Step5: We now define some more important parameters for our simulation. The first is temperature assumed for
the gating kinetics, which we will give in units of degrees celsius but is not directly used in simulation
(as we will see). The second is a current clamp that we intend for one end of the mesh. The third is a
voltage-range for simulation. These parameters will all be discussed in more detail later
Step6: Finally we set some simulation control parameters, the number of 'time-points' to run and
the 'time-step' at which we will record data. So we will run for 4ms in increments of 0.1ms
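In other words (names illustrative):

```python
# Run to 4 ms, recording every 0.1 ms (illustrative names)
ENDT = 4.0e-3                              # end time, s
DT   = 1.0e-4                              # recording time-step, s
N_timepoints = int(round(ENDT / DT)) + 1   # include the t = 0 sample
print(N_timepoints)                        # 41 recording points
```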
Step7: Model specification
We move on to the biochemical model description. This is quite different from previous chapters, with
new objects to look at, which are important building blocks of any simulation that includes
voltage-dependent processes in STEPS.
To start, we create a Model container object (steps.model.Model) and one surface system
(steps.model.Surfsys), with no volume system necessary for this relatively simple model
Step8: To make our potassium, sodium and leak channels we need to use two new objects. The steps.model.ChanState
objects are used to describe each separate channel state, and steps.model.Chan objects group a set of
channel states together to form a channel. At present the role of Channel objects (steps.model.Chan)
is mainly conceptual and not functional, with the ChannelState objects (steps.model.ChanState)
playing the important roles in simulation
Step9: steps.model.ChanState object construction looks quite similar to that for steps.model.Spec objects,
with the difference that, as well as the usual string identifier and steps.model.Model container object
arguments, the constructor also expects to see a reference to a steps.model.Chan object that conceptually
groups the channel states together. It is obvious to see here which channel configuration each
state is intended to represent in this model.
Similarly we create the sodium channel objects
Step10: and also the leak channel objects, which only exist in conducting state
Step11: We move on to describing the transitions between channel states. Firstly, we describe the transition rates
in the model, as described in the Markov gating scheme, and we do so for each using a lambda expression, which is
a shorthand way to define a function object in Python. We can use any callable function here (as will be
explained later) so we could just as easily use the more familiar def syntax if we wanted to. We also introduce
temperature dependence and use the previously defined celsius variable to find thi at 20 degrees celsius
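A sketch of two of the rate functions, assuming the textbook HH equations shifted to a -65 mV resting potential (the full script would define similar functions for the m and h gates as well):

```python
import math

celsius = 20.0
# Q10-style temperature factor relative to the original HH data at 6.3 C
thi = math.pow(3.0, (celsius - 6.3) / 10.0)

# Potassium 'n' gate rates: argument in mV, result in /ms
_a_n = lambda mV: thi * (0.01 * (10 - (mV + 65.0)) / (math.exp((10 - (mV + 65.0)) / 10.0) - 1))
_b_n = lambda mV: thi * (0.125 * math.exp(-(mV + 65.0) / 80.0))

# Sanity check: the steady-state 'n' value at rest (thi cancels here)
n_inf = _a_n(-65.0) / (_a_n(-65.0) + _b_n(-65.0))
print(round(n_inf, 3))   # 0.318
```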
Step12: We should bear in mind that these functions will expect a voltage to be given in units of millivolts, and
will return the transition rate in units of /ms.
To define voltage-dependent channel transitions we use a new STEPS object, the 'Voltage-dependent surface reaction'
(steps.model.VDepSReac). This object may be used to define any reaction in STEPS that is voltage-dependent, which
often involves 1st-order voltage-dependent transitions between different channel states, but also supports
higher order interactions which may include interactions between volume-diffusing molecules and surface-bound molecules
and thus allows modelling of, for example, voltage-dependent channel block. Because all
of these processes are only permitted to occur on a surface and not in a volume, we choose the term
'voltage-dependent surface reaction'.
The syntax of creating this object, therefore, shares similarities with steps.model.SReac, but with some
important differences. Let's look at a first example
Step13: The first few arguments to the steps.model.VDepSReac constructor are identical to those for
steps.model.SReac
Step14: where the unit conversions should be clear (recall _a_n expects an argument in mV units, and returns /ms).
The vrange argument requires the voltage-range over which to evaluate the rate-function, as a Python sequence in the order
of: [minimum voltage, maximum voltage, voltage-step]
Step15: In the 'Kn0n1' example the sequence of voltages was given directly to the vrange argument, but in fact at the beginning
of our script we defined a voltage-range as list Vrange, which we pass to all future VDepSReac objects we create in
this script. The rest of our voltage-dependent channel transitions for the Potassium channel are
Step16: The voltage-dependent surface reactions for the Sodium channel follow. Since there are 20 different possible
transitions (see figure above) we need to create 20 steps.model.VDepSReac objects
Step17: The final part of our model specification is to add currents. Presently in STEPS we have the choice of two types of current that have quite different behaviour: ohmic currents (steps.model.OhmicCurr) and GHK currents (steps.model.GHKcurr)
Step18: Now in the STEPS simulation when, for example, the number of potassium channels in state K_n4 is non-zero a potassium conductance will exist equal to the population of K_n4 channel states multiplied by the single channel conductance, and a current will be calculated depending on the local voltage relative to the given reversal potential.
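The resulting current is just Ohm's law scaled by the open-state population; a back-of-envelope check using the parameter values quoted earlier (50 channels in the conducting state is the example figure from the initial-condition discussion):

```python
# Ohmic current carried by the conducting K_n4 population
K_G    = 20.0e-12   # single-channel conductance, S
K_rev  = -77.0e-3   # reversal potential, V
n_open = 50         # channels currently in state K_n4
V      = -65.0e-3   # local membrane potential, V

I_K = n_open * K_G * (V - K_rev)
print(I_K > 0)      # outward current: V is above the potassium reversal potential
```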
Geometry specification
With the model completed we move on to geometry specification. To simulate action potential propagation we'll demonstrate the rather unusual case of using a long cuboid mesh whereas other simulators may typically assume cylindrical geometry. This is partly to demonstrate that the only restriction on geometry used for the membrane potential calculation in STEPS is that it can be represented by a tetrahedral mesh. Since tetrahedral meshes are capable of representing real cellular geometry with high accuracy this opens up many interesting applications, yet for this example we'll stick with a rather basic shape. As in previous sections we'll import a mesh in Abaqus format, which represents a cuboid of length 1000µm in the z-axis, and a diameter of 0.44µm (which is an equivalent cylindrical diameter of 0.5µm) in the x and y axes (as shown in the figure below)
Step19: In the figure above we show a portion of the tetrahedral mesh representing a cuboid of length 1000µm oriented along the z-axis.
The following section of code will not be explained in detail, but simply serves two purposes. Firstly, to find the vertices at one end of the cuboid at which a current pulse will be applied (which will be stored in list injverts). Since the long axis of the cuboid lies along the z-axis, these will be the minimum z vertices. Secondly, to find the corresponding triangles on that face, which will be excluded from the membrane (stored in list facetris) since this end is intended to be an 'open' end
Step20: Now we will use a mesh function to find all the triangles on the surface of the mesh and exclude those on the bottom face
Step21: The following section of code, which will also not be described in full detail, simply serves to bin the surface triangles by distance along the z-axis and to store the total area of the bins, which will be used later in the script to convert recorded current to a current density (current per unit area)
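The binning itself is straightforward once triangle z-positions and areas are in hand; a sketch with stand-in numbers (the real script would query these values from the Tetmesh):

```python
# tri_z: barycenter z-position of each surface triangle (m)
# tri_areas: area of each triangle (m^2) -- stand-in values, not mesh data
tri_z     = [0.5e-6, 3.0e-6, 12.0e-6, 15.5e-6, 27.0e-6]
tri_areas = [1.0e-14, 2.0e-14, 1.5e-14, 0.5e-14, 2.5e-14]

bin_w  = 10.0e-6     # 10 um bins along the z-axis
n_bins = 3
bin_areas = [0.0] * n_bins
for z, area in zip(tri_z, tri_areas):
    bin_areas[int(z / bin_w)] += area

print(bin_areas)     # total membrane area per bin
```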
Step22: The final piece of geometry manipulation is to find a point at every 10µm along the z-axis at which to record potential. In STEPS it is possible to record potential anywhere in the membrane or conduction volume and from vertices, triangles and tetrahedrons. Here we intend to record the potential at intracellular tetrahedrons along the centre of the cuboid, and so find their indices and store in numpy array pot_tet
Step23: Now, much like in previous chapters, we will create a compartment which simply consists of all tetrahedrons in the mesh, and a surface patch which consists of all surface triangles (except those on the minimum z face), which we found earlier and stored in list memb_tris
Step24: And now we create a new and very important object for the membrane potential calculation, the 'membrane' itself. The membrane object, steps.geom.Memb, simply consists of one or more patch objects which must together form one continuous surface, although the membrane may be 'open' or 'closed' ('closed' means all member triangles are directly connected to 3 other membrane triangles and so form a closed surface, and 'open' means some triangles have fewer than 3 neighbours and so the surface contains holes). Any channels that exist in the patch(es) that comprise(s) the membrane are available to conduct a current (specified by steps.model.OhmicCurr or steps.model.GHKcurr objects). The INNER compartment(s) to the membrane patches will comprise the 'conduction volume' representing the intracellular region. The potential at all vertices in the membrane and conduction volume will be calculated and will vary with any channel, capacitive or externally applied currents, relative to the (earthed) extracellular region.
Where the extracellular space is included in simulations the membrane may be comprised of internal mesh triangles, but for this relatively simple model the membrane is formed from triangles on the surface of the mesh and is comprised of only one patch. This patch contains an inner compartment consisting of all tetrahedrons in the mesh, which will form the conduction volume. So we create the membrane
Step25: The steps.geom.Memb constructor requires a string identifier argument and a reference to a steps.geom.Tetmesh object plus a list of the composite steps.geom.TmPatch objects (here there is only one), and finally an optional argument named opt_method. This allows the choice of a method for optimization of the ordering of vertices in the membrane and conduction volume, which is essential to produce an efficient calculation, as discussed in Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI
Step26: And with our model, geometry and random number generator created we are ready to create the solver object. The membrane potential calculation in STEPS is an extension to the steps.solver.Tetexact and steps.solver.TetODE solvers, and creating the solver is much like in previous mesh-based examples, with arguments to the constructor of a steps.model.Model object, a steps.geom.Tetmesh object and a steps.rng.RNG object in that order, plus a simple boolean flag that switches on the membrane potential calculation when set to True (and defaults to False)
Step27: If requested to perform the membrane potential calculation (with the boolean argument set to True) a Tetexact solver requires one (and currently only one) steps.geom.Memb to exist within the geometry description, and will therefore fail to be created if such an object does not exist.
With the steps.solver.Tetexact solver successfully created, with the membrane potential calculation included, it is time to set the simulation initial conditions. Much like in previous examples, this requires injecting molecules into a specific location. In this case we wish to inject a number of molecules represented by steps.model.ChanState objects in the model description into the membrane surface represented by a steps.geom.TmPatch object in the geometry description. As we will see, at the solver stage the Channel State objects behave just like Species objects and any solver method previously used for Species objects may be used for Channel State objects, such as steps.solver.Tetexact.setPatchCount, steps.solver.Tetexact.setCompConc and so on.
At this point we should pause to look at how to specify conductance in STEPS models. Conductance in STEPS comes from steps.model.OhmicCurr objects, which provide a single-channel conductance that will be applied to any Channel State molecule to which that conductance is mapped. For example, recall in this model that we created an Ohmic Current called OC_K to represent the potassium current in the simulation, which will apply to Channel State K_n4, with a single-channel conductance of 20 pS and reversal potential of -77mV, with this statement
Step28: And call solver method steps.solver.Tetexact.setPatchCount for every Channel State in the model (including leak) to set the initial number
Step29: One example run of the above code resulted in potassium Channel State populations of 3135, 5834, 4046, 1245 and 141 respectively giving an initial potassium conductance (from K_n4) of 2.8nS (0.00035 Siemens per square cm) and maximum conductance of 288nS (0.036 Siemens per square cm) as desired.
The next few lines of code set some important new simulation variables, all to do with the membrane potential calculation. The first function (steps.solver.Tetexact.setEfieldDT) sets the time-step period for the potential calculation, specified in seconds. This tells STEPS how often to perform the 'E-Field' calculation to evaluate potential, and update any voltage-dependent processes in the simulation. The optimal value for this time-step will vary for different simulations, so some things should be kept in mind when making the choice. Firstly, the time-step should be short enough that the voltage change occurring during each time-step is small and voltage can be assumed constant during each time-step for any voltage-dependent processes in the model. A large time-step may result in loss of accuracy. Secondly, the shorter the time-step the slower the simulation will be. Thirdly, the time-step must be shorter than or equal to the simulation time-step (this is 0.1ms in our model) so that at least one membrane potential calculation can be carried out per simulation time-step. As a rough guide 0.01ms is usually highly accurate, and it is not recommended to exceed 0.1ms. So for this simulation we choose a calculation time-step of 0.01ms (which happens to be the default value)
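A quick arithmetic sanity check of the two time-steps used in this model (plain Python, not a solver call):

```python
# Sanity check of the time-step relationship
DT_sim = 1.0e-4   # simulation time-step: 0.1 ms
EF_dt = 1.0e-5    # E-Field calculation time-step: 0.01 ms

# The E-Field step must not exceed the simulation step
assert EF_dt <= DT_sim

# Number of potential calculations per simulation time-step
calcs_per_step = round(DT_sim / EF_dt)
print(calcs_per_step)   # 10
```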
Step30: Now we set the initial potential of the membrane with function
Step31: Which also happens to be the default.
And we set the specific capacitance of the membrane, in units of Farad per square meter, with function
steps.solver.Tetexact.setMembCapac. So for 1 microFarad per square cm this is 0.01 (which is also the default setting)
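The unit conversion is worth spelling out (plain arithmetic, independent of STEPS):

```python
# 1 microFarad per square cm in STEPS units of Farad per square meter:
# 1 uF = 1e-6 F and 1 cm^2 = 1e-4 m^2, so the combined factor is 1e-2
C_uF_per_cm2 = 1.0
C_F_per_m2 = C_uF_per_cm2 * 1.0e-2
print(C_F_per_m2)   # 0.01
```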
Step32: Finally we set bulk resistivity, which is actually a property of the conduction volume encompassed by the membrane. We use function
steps.solver.Tetexact.setMembVolRes. STEPS expects units of ohm.metre here
Step33: Again, this is in fact the default setting.
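For readers used to seeing resistivity quoted in ohm.cm, the conversion to STEPS units is simple arithmetic. The value below assumes the commonly quoted cytoplasmic resistivity of 100 ohm.cm, which corresponds to the default of 1 ohm.m:

```python
# Cytoplasmic resistivity: 100 ohm.cm is a commonly assumed value and
# corresponds to the STEPS default of 1 ohm.m (1 ohm.cm = 0.01 ohm.m)
Ra_ohm_cm = 100.0
Ra_ohm_m = Ra_ohm_cm * 1.0e-2
print(Ra_ohm_m)   # 1.0
```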
The last condition to set is something that will remain unchanged throughout our simulation in this example, which is a constant current injection at one end of the long cubic geometry. This will have an effect of inducing action potentials at the depolarised end, which will then propagate, and a constant current at the correct level will ensure a train of action potentials. In STEPS it is possible to inject current to any node in the conduction volume or membrane with solver method steps.solver.Tetexact.setVertIClamp, or any membrane triangle (where current will be shared equally between its 3 nodes) with solver method steps.solver.Tetexact.setTriIClamp. Here, we have already found the vertices at one end of the geometry, the minimum z end, and stored them in list injverts. We now wish to set the current clamp for each of these vertices as a share of the 50pA current we have already defined in variable Iclamp. Note
Step34: The current clamp set will remain in existence throughout the simulation, until we specify otherwise.
Just before running the simulation we need to create empty data structures, much like in previous chapters. Here we intend to record potential, along with sodium and potassium currents, by the 10µm bins we previously arranged
Step35: So finally we are ready to run the simulation. We will use some new methods to record information from the simulation
Step36: If we want to put all this code into a function, we should return the tuple
Step37: Plotting simulation output
We begin by importing some matplotlib plotting functions
Step38: Now we create an array of 'time-points' to be used in the plots
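One plausible construction of this array, using the control parameters defined earlier (the exact form used in the script may differ):

```python
# Recording time-points in seconds: 41 points covering 0 to 4 ms
N_timepoints = 41
DT_sim = 1.0e-4

tpnt = [i * DT_sim for i in range(N_timepoints)]
print(len(tpnt))   # 41
print(tpnt[-1])    # ~0.004, i.e. 4 ms
```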
Step39: And create two functions: one to plot the potential along the z-axis at a given 'time-point'
Step40: and another to plot the sodium and potassium currents (separately) along the z-axis at a given 'time-point'
Step41: Finally, with the simulation finished we can use the plotting functions to plot the potential along the z-axis at 1ms, 2ms and 3ms
Step42: And to plot the membrane currents along the z-axis also at 1ms, 2ms and 3ms
Step43: Simulation with TetOpSplit
The spatial stochastic approximate solver steps.mpi.solver.TetOpSplit, which runs in parallel, also supports the membrane potential calculation. The solver is described in detail in a separate chapter, so usage will only be described briefly here.
Usage is similar to that for Tetexact, however we must import some different STEPS modules
Step44: We must also partition the mesh along the axis based on the number of MPI hosts
Step45: Now we can create the steps.mpi.solver.TetOpSplit solver object, passing the partitioning information
as well as the usual arguments
Step46: This time we only record voltage from the simulation due to the reduced functionality of the solver at present
Step47: And simply plot the data
Step48: Assuming that all this script was written in a file named HH_APprop_tetopsplit.py, to run from the command line with 4 MPI processes we should use
mpirun -n 4 python HH_APprop_tetopsplit.py
Simulation with TetODE
The spatial deterministic solver steps.solver.TetODE, which was introduced in Simulating Diffusion on Surfaces, is also available for membrane potential simulations.
As discussed in Simulating Diffusion on Surfaces, simulations in TetODE share model and geometry construction with solver Tetexact, with a few
differences to the solver to run deterministic simulations, such as the possibility of setting tolerance levels. Coupling with the membrane potential solution introduces some new considerations. Firstly,
since reaction-diffusion solutions are solved in CVODE, this reduces the possibilities of coupling the reaction-diffusion simulation with the membrane
potential calculation. As discussed in Simulating Diffusion on Surfaces, a call to steps.solver.TetODE.run hands control to CVODE until the specified
endtime during which there can be no communication with the membrane potential calculation. For this reason function setEfieldDT is not supported in
TetODE
Step49: And we create a TetODE solver object instead of a Tetexact solver
Step50: And remove the unsupported function
Step51: Finally, since it is unfortunately not possible to record information about the spatial currents in TetODE (functions such as getTriOhmicI are not supported),
we remove anything to do with recording the Na and K currents, which makes our simulation loop rather simple
Step52: And now we return only the information related to the recordings of the spatial membrane potential
Step53: We can finally plot the results as follows | Python Code:
from __future__ import print_function # for backward compatibility with Py2
import steps.model as smodel
import steps.geom as sgeom
import steps.rng as srng
import steps.solver as ssolver
import steps.utilities.meshio as meshio
import numpy
import math
import time
from random import *
Explanation: Simulating Membrane Potential
The simulation scripts described in this chapter are available at STEPS_Example repository.
This chapter introduces the concept of simulating the electric potential
across a membrane in STEPS using a method that calculates electric potentials on tetrahedral meshes called 'E-Field' (see Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129).
We'll be introduced to new objects that
represent phenomena linked to the membrane potential simulation,
such as voltage-dependent channel transitions and currents across the membrane. We will
look at an example based on a very widely-used
model in computational neuroscience, the classical Hodgkin-Huxley model of the
action-potential, in molecular form. To demonstrate some useful techniques for
spatial simulations we will model action potential propagation in a simple mesh. As with previous chapters,
we will briefly introduce the model, then go through Python code used to run the
model in STEPS, with thorough descriptions where necessary.
We will start with spatial stochastic simulation in solvers 'Tetexact' (steps.solver.Tetexact) and 'TetOpSplit' (steps.solver.TetOpSplit), then discuss what modifications are necessary to run the
equivalent spatial deterministic solution in solver 'TetODE' (steps.solver.TetODE).
Markov gating scheme
While many readers may not be familiar with conversion of the classical Hodgkin-Huxley (HH)
model to a Markov gating scheme we will only give a brief description here, though there are many
sources a reader may consult for a more detailed description (for example Hille B. Gating Mechanisms: Kinetic Thinking. In Ion Channels of Excitable Membranes, 3rd ed. Sinauer Associates, Sunderland, MA: 2001:583-589).
In brief, conductances are converted to a population of individual channels (each with single-channel
conductance of typically 20pS), and each individual channel may exist in one of a number of
states with rates described of possible first-order transitions to other states. Certain assumptions,
such as that the the rate constants do not depend on the history of the system (a Markov process),
and with the simplification that states with the same number of 'open' and 'closed' gates behave
identically regardless of specific configuration, lead to gating schemes as shown in the two figures below
for the HH potassium and sodium channels respectively.
In this representation the potassium channel is described by 4 gates which may be in open or closed configuration. State n3, for example, means that any 3 of the 4 gates are in open state. Where all 4 gates are open (state n4) the channel may conduct a current- all other states are non-conducting states.
The sodium channel is represented by 8 possible states- the m3h1 state is the conducting state.
The transition rates ($a_n$, $b_n$ for the potassium channel - $a_m$, $b_m$, $a_h$, $b_h$ for the sodium channel)
should be very familiar to anyone well-acquainted with the HH model:
\begin{equation}
a_n = \frac{0.01\times(10-(V+65))}{\exp\left(\frac{10-(V+65)}{10}\right)-1}
\end{equation}
\begin{equation}
b_n = 0.125\exp\left(\frac{-(V+65)}{80}\right)
\end{equation}
\begin{equation}
a_m = \frac{0.1\times(25-(V+65))}{\exp\left(\frac{25-(V+65)}{10}\right)-1}
\end{equation}
\begin{equation}
b_m = 4\exp\left(\frac{-(V+65)}{18}\right)
\end{equation}
\begin{equation}
a_h = 0.07\exp\left(\frac{-(V+65)}{20}\right)
\end{equation}
\begin{equation}
b_h = \frac{1}{\exp\left(\frac{30-(V+65)}{10}\right)+1}
\end{equation}
Where V is the potential across the membrane (in millivolts). Modelled as a stochastic process where each state is discretely populated, these functions form the basis of the propensity functions for each possible transition at any given voltage (here units are per millisecond). Voltage continuously changes during simulation, yet over a short period of time the change is small enough so that the transition rates may be considered constant and stochastic algorithms applied. The transition rates must then be updated when the voltage change becomes large enough to merit a reevaluation of these functions.
Modelling solution
Organisation of code
As in previous chapters we will go through code line-by-line from a script
used to run this simulation in STEPS, but this time without using the command prompt style.
Readers should note that actual indentation in the Python code and the indentation in the examples
here can be different, and indentation is very important in Python code.
The first thing to do is to import modules from STEPS that we need to run the simulation,
and assign them shorter names to reduce typing (for example smodel refers to steps.model).
In addition we will make use of modules numpy, math, time and random to assist with the simulation:
End of explanation
# Potassium single-channel conductance
K_G = 20.0e-12 # Siemens
# Potassium channel density
K_ro = 18.0e12 # per square meter
# Potassium reversal potential
K_rev = -77e-3 # volts
Explanation: Next we define some parameters for the simulation, which are intended to remain constant throughout
the script. We start with the potassium channel and define the single-channel conductance, channel
density and reversal potential, keeping the conductance at 0.036 S/cm2 (see Simulation with Tetexact for more on converting continuous conductance to discrete conductance):
End of explanation
# Sodium single-channel conductance
Na_G = 20.0e-12 # Siemens
# Sodium channel density
Na_ro = 60.0e12 # per square meter
# Sodium reversal potential
Na_rev = 50e-3 # volts
Explanation: The first thing to note is that, as usual in STEPS, units are s.i., which means in the above example the single channel conductance is given in Siemens and
the reversal potential for the ohmic current is in volts.
Similarly, we define parameters for the sodium channel, also choosing a single-channel conductance
of 20pS:
End of explanation
# Leak single-channel conductance
L_G = 0.3e-12 # Siemens
# Leak density
L_ro = 10.0e12 # per square meter
# Leak reversal potential
leak_rev = -54.4e-3 # volts
Explanation: The HH model also includes a leak conductance, which may also be discretised (although another option is to use solver function steps.solver.Tetexact.setMembRes). The overall conductance is
small compared to maximal potassium and sodium conductances, but we choose a similar channel density to give
a good spatial spread of the conductance, which means a fairly low single-channel conductance:
End of explanation
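As a quick sanity check (pure arithmetic, not STEPS code), multiplying each single-channel conductance by its channel density recovers the classical Hodgkin-Huxley maximal conductance densities:

```python
# Conductance density = single-channel conductance x channel density
# (1 S/m^2 = 1e-4 S/cm^2)
K_G, K_ro = 20.0e-12, 18.0e12
Na_G, Na_ro = 20.0e-12, 60.0e12
L_G, L_ro = 0.3e-12, 10.0e12

gK = K_G * K_ro * 1.0e-4    # S/cm^2
gNa = Na_G * Na_ro * 1.0e-4
gL = L_G * L_ro * 1.0e-4

print(gK, gNa, gL)   # ~0.036, ~0.12, ~0.0003 S/cm^2
```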
# A table of potassium channel population factors:
# n0, n1, n2, n3, n4
K_facs = [ 0.21768, 0.40513, 0.28093, 0.08647, 0.00979 ]
# A table of sodium channel population factors
# m0h0, m1h0, m2h0, m3h0, m0h1, m1h1, m2h1, m3h1:
Na_facs = [ 0.34412, 0.05733, 0.00327, 6.0e-05,\
0.50558, 0.08504, 0.00449, 0.00010 ]
Explanation: The next parameters require a little explanation. Taking the potassium conductance as an example, the
potassium density will convert to a discrete number of channels that will give (approximately) our intended
maximal conductance of 0.036 S/$cm^2$. In the molecular sense, this means that if all potassium channels
are in the 'open' conducting state then we will reach the maximal conductance. However, in fact
each individual channel can be in any one of 5 states (including the conducting state) (see figure above) and these states are
described by separate objects in the STEPS simulation (as we will see later), where the sum of populations of each state should
be equal to the total number of channels. For example, if the surface of the mesh is 100 square microns, then by the above density we expect to have a total of 1800 potassium channels in the simulation, but at some time
we might have e.g. 400 in the n0 state, 700 in the n1 state, 500 in the n2 state, 150 in the n3 state
and 50 in the conducting n4 state, and the total at any time will be equal to 1800.
So we intend to initialise our populations of channel states to some starting value. The details of how to
calculate the initial condition will not be given here, but the factors used here are steady-state approximations for
the HH model at an initial potential of -65mV. We then give a table of fractional channel state populations (which
add up to a value of 1). For each channel state the factor multiplied by the channel density and the surface area
of the mesh will give our initial population of channels in that state:
End of explanation
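Although the derivation of these factors is not given in the text, the potassium entries can be approximately reproduced by computing the steady-state open probability of a single gate at -65 mV and distributing the four independent gates binomially. The sketch below is a hedged reconstruction (small differences from the table may come from rounding or a slightly different evaluation):

```python
import math
from math import comb

V = -65.0   # resting potential in mV

# HH potassium gate rates at V, in /ms (the temperature factor thi
# cancels in the steady-state ratio, so it is omitted here)
a_n = 0.01 * (10 - (V + 65.)) / (math.exp((10 - (V + 65.)) / 10.) - 1)
b_n = 0.125 * math.exp(-(V + 65.) / 80.)
n_inf = a_n / (a_n + b_n)   # steady-state open probability of one gate

# Binomial distribution over 4 independent gates -> n0..n4 fractions
facs = [comb(4, k) * n_inf**k * (1 - n_inf)**(4 - k) for k in range(5)]
print(round(n_inf, 4))               # ~0.3177
print([round(f, 5) for f in facs])   # close to the K_facs table above
```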
# Temperature for gating kinetics
celsius = 20.0
# Current clamp
Iclamp = 50.0e-12 # amps
# Voltage range for gating kinetics in Volts
Vrange = [-100.0e-3, 50e-3, 1e-4]
Explanation: We now define some more important parameters for our simulation. The first is temperature assumed for
the gating kinetics, which we will give in units of degrees celsius but is not directly used in simulation
(as we will see). The second is a current clamp that we intend for one end of the mesh. The third is a
voltage-range for simulation. These parameters will all be discussed in more detail later:
End of explanation
# The number of simulation time-points
N_timepoints = 41
# The simulation dt
DT_sim = 1.0e-4 # seconds
Explanation: Finally we set some simulation control parameters, the number of 'time-points' to run and
the 'time-step' at which we will record data. So we will run for 4ms in increments of 0.1ms:
End of explanation
mdl = smodel.Model()
ssys = smodel.Surfsys('ssys', mdl)
Explanation: Model specification
We move on to the biochemical model description. This is quite different from previous chapters, with
new objects to look at, which are important building blocks of any simulation that includes
voltage-dependent processes in STEPS.
To start, we create a Model container object (steps.model.Model) and one surface system
(steps.model.Surfsys), with no volume system necessary for this relatively simple model:
End of explanation
# Potassium channel
K = smodel.Chan('K', mdl)
K_n0 = smodel.ChanState('K_n0', mdl, K)
K_n1 = smodel.ChanState('K_n1', mdl, K)
K_n2 = smodel.ChanState('K_n2', mdl, K)
K_n3 = smodel.ChanState('K_n3', mdl, K)
K_n4 = smodel.ChanState('K_n4', mdl, K)
Explanation: To make our potassium, sodium and leak channels we need to use two new objects. The steps.model.ChanState
objects are used to describe each separate channel state, and steps.model.Chan objects group a set of
channel states together to form a channel. At present the role of Channel objects (steps.model.Chan)
is mainly conceptual and not functional, with the ChannelState objects (steps.model.ChanState)
playing the important roles in simulation: for example, voltage-dependent transitions occur between channel states
and a channel current object is associated with a channel state, both of which we will see later. As discussed in Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129, Channel states also include
the same functionality as steps.model.Spec objects and so can interact with other molecules and diffuse on the surface
or in a volume, however there is no example of that functionality in this model.
The code to create the potassium channel looks like this:
End of explanation
Na = smodel.Chan('Na', mdl)
Na_m0h0 = smodel.ChanState('Na_m0h0', mdl, Na)
Na_m1h0 = smodel.ChanState('Na_m1h0', mdl, Na)
Na_m2h0 = smodel.ChanState('Na_m2h0', mdl, Na)
Na_m3h0 = smodel.ChanState('Na_m3h0', mdl, Na)
Na_m0h1 = smodel.ChanState('Na_m0h1', mdl, Na)
Na_m1h1 = smodel.ChanState('Na_m1h1', mdl, Na)
Na_m2h1 = smodel.ChanState('Na_m2h1', mdl, Na)
Na_m3h1 = smodel.ChanState('Na_m3h1', mdl, Na)
Explanation: steps.model.ChanState object construction looks quite similar to that for steps.model.Spec objects,
with the difference that, as well as the usual string identifier and steps.model.Model container object
arguments, the constructor also expects to see a reference to a steps.model.Chan object that conceptually
groups the channel states together. It is obvious to see here which channel configuration each
state is intended to represent in this model.
Similarly we create the sodium channel objects:
End of explanation
# Leak channel
L = smodel.Chan('L', mdl)
Leak = smodel.ChanState('Leak', mdl, L)
Explanation: and also the leak channel objects, which only exist in conducting state:
End of explanation
# Temperature dependence
thi = math.pow(3.0, ((celsius-6.3)/10.0))
_a_n = lambda mV: thi*((0.01*(10-(mV+65.))/(math.exp((10-(mV+65.))/10.)-1)))
_b_n = lambda mV: thi*((0.125*math.exp(-(mV+65.)/80.)))
_a_m = lambda mV: thi*((0.1*(25-(mV+65.))/(math.exp((25-(mV+65.))/10.)-1)))
_b_m = lambda mV: thi*((4.*math.exp(-(mV+65.)/18.)))
_a_h = lambda mV: thi*((0.07*math.exp(-(mV+65.)/20.)))
_b_h = lambda mV: thi*((1./(math.exp((30-(mV+65.))/10.)+1)))
Explanation: We move on to describing the transitions between channel states. Firstly, we describe the transition rates
in the model, as described in Markov gating scheme, and we do so for each using lambda expressions, which is
a shorthand way to define a function object in Python. We can use any callable function here (as will be
explained later) so we could just as easily use the more familiar def syntax if we wanted to. We also introduce
temperature dependence and use the previously defined celsius variable to find thi at 20 degrees celsius:
End of explanation
Kn0n1 = smodel.VDepSReac('Kn0n1', ssys, slhs = [K_n0], srhs = [K_n1], \
k=lambda V: 1.0e3 *4.*_a_n(V*1.0e3), vrange = [-100.0e-3, 50e-3, 1e-4])
Explanation: We should bear in mind that these functions will expect a voltage to be given in units of millivolts, and
will return the transition rate in units of /ms.
To define voltage-dependent channel transitions we use a new STEPS object, the 'Voltage-dependent surface reaction'
(steps.model.VDepSReac). This object may be used to define any reaction in STEPS that is voltage-dependent, which
often involves 1st-order voltage-dependent transitions between different channel states, but also supports
higher order interactions which may include interactions between volume-diffusing molecules and surface-bound molecules
and thus allows modelling of, for example, voltage-dependent channel block. Because all
of these processes are only permitted to occur on a surface and not in a volume, we choose the term
'voltage-dependent surface reaction'.
The syntax of creating this object, therefore, shares similarities with steps.model.SReac, but with some
important differences. Let's look at a first example:
End of explanation
k = lambda V: 1.0e3 *4.*_a_n(V*1.0e3)
Explanation: The first few arguments to the steps.model.VDepSReac constructor are identical to those for
steps.model.SReac: in order, a string-identifier is required (which must be unique amongst all objects of the
same type), a reference to a steps.model.Surfsys object, a list of reactants- the 'left-hand side' arguments
(which may exist in the 'inner' volume, the surface, or the 'outer' volume, but not in both volumes) and a list of products- the 'right-hand side' arguments. The syntax up to this point follows exactly as described for
steps.model.SReac in Surface-Volume Reactions (Example: IP3 Model), with one noteworthy difference: now the reactants and products may be
steps.model.ChanState objects, as well as steps.model.Spec objects, or a mixture of both. Indeed,
in the context of reactions in STEPS (voltage-dependent or otherwise) steps.model.ChanState objects
behave exactly like steps.model.Spec objects, with the only difference between the two being that
steps.model.ChanState objects support additional functionality, namely the ability to conduct current, as we
will see later.
The other arguments, keyword arguments k and vrange require some explanation. The macroscopic reaction 'constant' is
of course now not a constant at all, but instead depends on voltage. To describe the voltage-dependence we pass
a function to argument k which returns the reaction rate as a function of voltage. We tell STEPS to evaluate this
function over a voltage range, which we choose so as to easily cover all voltages we expect the membrane potential to
reach during the simulation. As with other reaction objects, all units are specified as s.i. units, with the exception of
higher-order reactions which are based on Molar units. Since this is a 1st-order reaction we must ensure that the
function passed to the k argument returns in units of /second over the range of potentials passed in units of Volts.
Since this particular voltage-dependent surface reaction object is clearly intended to model the forward n0 to n1
transition, as shown in the figure above, we require a factor of 4 to be applied to the _a_n function to cover each possible 0 to 1 transition. To achieve this our
function to k is:
End of explanation
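To make the unit conversions concrete, this standalone sketch re-creates the relevant rate function and evaluates it at the -65 mV resting potential (plain Python, mirroring the lambdas defined earlier):

```python
import math

# Standalone re-creation of the rate function and its unit conversion
celsius = 20.0
thi = math.pow(3.0, (celsius - 6.3) / 10.0)   # ~4.5 at 20 degrees C

_a_n = lambda mV: thi * (0.01 * (10 - (mV + 65.)) /
                         (math.exp((10 - (mV + 65.)) / 10.) - 1))

# The k argument: volts -> millivolts on the way in, /ms -> /s on the way out
k = lambda V: 1.0e3 * 4. * _a_n(V * 1.0e3)

print(_a_n(-65.0))   # rate in /ms at the resting potential
print(k(-65.0e-3))   # same transition in /s, roughly 1.05e3
```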
vrange = [-100.0e-3, 50e-3, 1e-4]
Explanation: where the unit conversions should be clear (recall _a_n expects an argument in mV units, and returns /ms).
The vrange argument requires the voltage-range to evaluate the rate-function as a Python sequence in the order
of: minimum voltage, maximum voltage, voltage-step. We should choose the voltage range to cover
what we expect from the simulation, but not by too much since a smaller range gives faster performance, and the voltage-step
should be chosen to give only a small error from linear interpolation between voltage-points. It is a very important point
that if, during a simulation, the membrane potential goes outside the voltage range for any voltage-dependent surface
reaction object located in that membrane the simulation will fail.
In our example we choose a voltage range of -100mV to +50mV, and tell STEPS to evaluate the voltage every 0.1mV, so
the vrange argument is:
End of explanation
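It can be useful to know roughly how large a look-up table this range implies. The count below is approximate, since exact endpoint handling is an internal detail of the solver:

```python
# Size of the voltage look-up table implied by Vrange
vmin, vmax, vstep = -100.0e-3, 50e-3, 1e-4

n_points = round((vmax - vmin) / vstep) + 1
print(n_points)   # 1501
```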
Kn1n2 = smodel.VDepSReac('Kn1n2', ssys, slhs = [K_n1], srhs = [K_n2], \
k=lambda V: 1.0e3 *3.*_a_n(V*1.0e3), vrange = Vrange)
Kn2n3 = smodel.VDepSReac('Kn2n3', ssys, slhs = [K_n2], srhs = [K_n3], \
k=lambda V: 1.0e3 *2.*_a_n(V*1.0e3), vrange = Vrange)
Kn3n4 = smodel.VDepSReac('Kn3n4', ssys, slhs = [K_n3], srhs = [K_n4], \
k=lambda V: 1.0e3 *1.*_a_n(V*1.0e3), vrange = Vrange)
Kn4n3 = smodel.VDepSReac('Kn4n3', ssys, slhs = [K_n4], srhs = [K_n3], \
k=lambda V: 1.0e3 *4.*_b_n(V*1.0e3), vrange = Vrange)
Kn3n2 = smodel.VDepSReac('Kn3n2', ssys, slhs = [K_n3], srhs = [K_n2], \
k=lambda V: 1.0e3 *3.*_b_n(V*1.0e3), vrange = Vrange)
Kn2n1 = smodel.VDepSReac('Kn2n1', ssys, slhs = [K_n2], srhs = [K_n1], \
k=lambda V: 1.0e3 *2.*_b_n(V*1.0e3), vrange = Vrange)
Kn1n0 = smodel.VDepSReac('Kn1n0', ssys, slhs = [K_n1], srhs = [K_n0], \
k=lambda V: 1.0e3 *1.*_b_n(V*1.0e3), vrange = Vrange)
Explanation: In the 'Kn0n1' example the sequence of voltages was given directly to the vrange argument, but in fact at the beginning
of our script we defined a voltage-range as list Vrange, which we pass to all future VDepSReac objects we create in
this script. The rest of our voltage-dependent channel transitions for the Potassium channel are:
End of explanation
Na_m0h1_m1h1 = smodel.VDepSReac('Na_m0h1_m1h1', ssys, \
slhs=[Na_m0h1], srhs=[Na_m1h1], \
k=lambda V:1.0e3*3.*_a_m(V*1.0e3), vrange=Vrange)
Na_m1h1_m2h1 = smodel.VDepSReac('Na_m1h1_m2h1', ssys, \
slhs=[Na_m1h1], srhs=[Na_m2h1], \
k=lambda V:1.0e3*2.*_a_m(V*1.0e3), vrange=Vrange)
Na_m2h1_m3h1 = smodel.VDepSReac('Na_m2h1_m3h1', ssys, \
slhs=[Na_m2h1], srhs=[Na_m3h1], \
k=lambda V:1.0e3*1.*_a_m(V*1.0e3), vrange=Vrange)
Na_m3h1_m2h1 = smodel.VDepSReac('Na_m3h1_m2h1', ssys, \
slhs=[Na_m3h1], srhs=[Na_m2h1], \
k=lambda V:1.0e3*3.*_b_m(V*1.0e3), vrange=Vrange)
Na_m2h1_m1h1 = smodel.VDepSReac('Na_m2h1_m1h1', ssys, \
slhs=[Na_m2h1], srhs=[Na_m1h1], \
k=lambda V:1.0e3*2.*_b_m(V*1.0e3), vrange=Vrange)
Na_m1h1_m0h1 = smodel.VDepSReac('Na_m1h1_m0h1', ssys, \
slhs=[Na_m1h1], srhs=[Na_m0h1], \
k=lambda V:1.0e3*1.*_b_m(V*1.0e3), vrange=Vrange)
Na_m0h0_m1h0 = smodel.VDepSReac('Na_m0h0_m1h0', ssys, \
slhs=[Na_m0h0], srhs=[Na_m1h0], \
k=lambda V:1.0e3*3.*_a_m(V*1.0e3), vrange=Vrange)
Na_m1h0_m2h0 = smodel.VDepSReac('Na_m1h0_m2h0', ssys, \
slhs=[Na_m1h0], srhs=[Na_m2h0], \
k=lambda V:1.0e3*2.*_a_m(V*1.0e3), vrange=Vrange)
Na_m2h0_m3h0 = smodel.VDepSReac('Na_m2h0_m3h0', ssys, \
slhs=[Na_m2h0], srhs=[Na_m3h0], \
k=lambda V:1.0e3*1.*_a_m(V*1.0e3), vrange=Vrange)
Na_m3h0_m2h0 = smodel.VDepSReac('Na_m3h0_m2h0', ssys, \
slhs=[Na_m3h0], srhs=[Na_m2h0], \
k=lambda V:1.0e3*3.*_b_m(V*1.0e3), vrange=Vrange)
Na_m2h0_m1h0 = smodel.VDepSReac('Na_m2h0_m1h0', ssys, \
slhs=[Na_m2h0], srhs=[Na_m1h0], \
k=lambda V:1.0e3*2.*_b_m(V*1.0e3), vrange=Vrange)
Na_m1h0_m0h0 = smodel.VDepSReac('Na_m1h0_m0h0', ssys, \
slhs=[Na_m1h0], srhs=[Na_m0h0], \
k=lambda V:1.0e3*1.*_b_m(V*1.0e3), vrange=Vrange)
Na_m0h0_m0h1 = smodel.VDepSReac('Na_m0h0_m0h1', ssys, \
slhs=[Na_m0h0], srhs=[Na_m0h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m1h0_m1h1 = smodel.VDepSReac('Na_m1h0_m1h1', ssys, \
slhs=[Na_m1h0], srhs=[Na_m1h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m2h0_m2h1 = smodel.VDepSReac('Na_m2h0_m2h1', ssys, \
slhs=[Na_m2h0], srhs=[Na_m2h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m3h0_m3h1 = smodel.VDepSReac('Na_m3h0_m3h1', ssys, \
slhs=[Na_m3h0], srhs=[Na_m3h1], \
k=lambda V:1.0e3*_a_h(V*1.0e3), vrange=Vrange)
Na_m0h1_m0h0 = smodel.VDepSReac('Na_m0h1_m0h0', ssys, \
slhs=[Na_m0h1], srhs=[Na_m0h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m1h1_m1h0 = smodel.VDepSReac('Na_m1h1_m1h0', ssys, \
slhs=[Na_m1h1], srhs=[Na_m1h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m2h1_m2h0 = smodel.VDepSReac('Na_m2h1_m2h0', ssys, \
slhs=[Na_m2h1], srhs=[Na_m2h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Na_m3h1_m3h0 = smodel.VDepSReac('Na_m3h1_m3h0', ssys, \
slhs=[Na_m3h1], srhs=[Na_m3h0], \
k=lambda V:1.0e3*_b_h(V*1.0e3), vrange=Vrange)
Explanation: The voltage-dependent surface reactions for the Sodium channel follow. Since there are 20 different possible
transitions (see figure above) we need to create 20 steps.model.VDepSReac objects:
End of explanation
OC_K = smodel.OhmicCurr('OC_K', ssys, chanstate=K_n4, g=K_G, erev=K_rev)
OC_Na = smodel.OhmicCurr('OC_Na', ssys, chanstate=Na_m3h1, g=Na_G, erev=Na_rev)
OC_L = smodel.OhmicCurr('OC_L', ssys, chanstate=Leak, g=L_G, erev=leak_rev)
Explanation: The final part of our model specification is to add currents. Presently in STEPS we have the choice of two types of current that have quite different behaviour: Ohmic currents- which are represented by steps.model.OhmicCurr objects- and currents based on the GHK flux equation- represented by steps.model.GHKcurr objects. Since the Hodgkin-Huxley model utilises Ohmic currents we only need to concern ourselves with those objects here.
The assumption made in STEPS is that Ohmic current objects are used to model currents of ions that play no other important role in the system other than in membrane excitability, and so it is not necessary to add, in this example, ions of sodium and potassium diffusing both extra- and intra-cellularly. Because of the relatively large concentration of these ions, simulating their diffusion would slow simulations enormously with no perceptible benefit to accuracy. For these reasons an Ohmic current in STEPS will not result in transport of ions between compartments. The GHK current objects are able to model ion transport and so should always be used when modelling currents of important signalling ions, a good example of which for many systems is calcium.
Because STEPS is primarily a discrete simulator the Current objects in STEPS are based on single-channel currents. A steps.model.OhmicCurr, applied to a specific steps.model.ChanState object will result in an Ohmic current through every single Channel in that specific state located in the Membrane (which we will create later) at any given time. Therefore, to create an Ohmic current in STEPS we need to pass information as to which Channel state the current will be applied to, as well as its single-channel conductance to this current, along with the reversal potential. As usual in STEPS all units are based on s.i. units, and so the single-channel conductance unit is Siemens and reversal potential unit is volts.
The steps.model.OhmicCurr constructor expects 5 arguments: a string identifier (as usual in STEPS this must be unique amongst other Ohmic current objects), a reference to a steps.model.Surfsys object, a reference to a steps.model.ChanState to which this current applies (chanstate argument), a single-channel conductance (g argument), and a reversal potential (erev argument). At the top of our script we already defined conductance and reversal potential for all of our channels in this simulation, i.e. the potassium single-channel conductance K_G = 20.0e-12 Siemens and reversal potential K_rev = -77e-3 volts, the sodium single-channel conductance Na_G = 20.0e-12 Siemens and reversal potential Na_rev = 50e-3 volts, the leak single-channel conductance L_G = 0.3e-12 Siemens and reversal potential leak_rev = -54.4e-3 volts, so we use these values when creating the Ohmic current objects. The conducting states of the potassium, sodium and leak currents respectively are K_n4, Na_m3h1 and Leak:
End of explanation
mesh = meshio.importAbaqus('meshes/axon_cube_L1000um_D443nm_equiv0.5_19087tets.inp', 1e-6)[0]
Explanation: Now in the STEPS simulation when, for example, the number of potassium channels in state K_n4 is non-zero a potassium conductance will exist equal to the population of K_n4 channel states multiplied by the single channel conductance, and a current will be calculated depending on the local voltage relative to the given reversal potential.
Geometry specification
With the model completed we move on to geometry specification. To simulate action potential propagation we'll demonstrate the rather unusual case of using a long cuboid mesh whereas other simulators may typically assume cylindrical geometry. This is partly to demonstrate that the only restriction on geometry used for the membrane potential calculation in STEPS is that it can be represented by a tetrahedral mesh. Since tetrahedral meshes are capable of representing real cellular geometry with high accuracy this opens up many interesting applications, yet for this example we'll stick with a rather basic shape. As in previous sections we'll import a mesh in Abaqus format, which represents a cuboid of length 1000µm in the z-axis, and a diameter of 0.44µm (which is an equivalent cylindrical diameter of 0.5µm) in the x and y axes (as shown in the figure below):
End of explanation
# Find the vertices for the current clamp and store in a list
injverts = []
for i in range(mesh.nverts):
    if mesh.getVertex(i)[2] < (mesh.getBoundMin()[2] + 0.1e-6):
        injverts.append(i)
print("Found ", len(injverts), "I_inject vertices")

facetris = []
for i in range(mesh.ntris):
    tri = mesh.getTri(i)
    if (tri[0] in injverts) and (tri[1] in injverts) and (tri[2] in injverts):
        facetris.append(i)
print("Found ", len(facetris), "triangles on bottom face")
Explanation: In the figure above we show a portion of the tetrahedral mesh representing a cuboid of length 1000µm oriented along the z-axis.
The following section of code will not be explained in detail, but simply serves two purposes. Firstly, to find the vertices at one end of the cuboid at which a current pulse will be applied (which will be stored in list injverts); since the long axis of the cuboid lies along the z-axis these will be the minimum-z vertices. Secondly, to find the corresponding triangles on that face, which will be excluded from the membrane (stored in list facetris) since this end is intended to be an 'open' end:
End of explanation
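The same two-pass selection logic can be illustrated on a toy set of points with plain numpy (the coordinates below are made up purely for illustration):

```python
import numpy as np

# Four vertices of a single tetrahedron; three of them lie on the z = 0 face
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
tris = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])

zmin = verts[:, 2].min()
injverts = [i for i in range(len(verts)) if verts[i, 2] < zmin + 0.1]
facetris = [i for i, t in enumerate(tris)
            if (t[0] in injverts) and (t[1] in injverts) and (t[2] in injverts)]
# injverts -> [0, 1, 2], facetris -> [0]
```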
memb_tris = list(mesh.getSurfTris())
# Remove triangles on bottom face from membrane triangles
for t in facetris: memb_tris.remove(t)
Explanation: Now we will use a mesh function to find all the triangles on the surface of the mesh and exclude those on the bottom face:
End of explanation
# Bin the surface triangles for recording current
bins_n = 100
memb_tris_binned = [None]*bins_n
mtb_area = numpy.zeros(bins_n)
# In m
bin_dz = 1000.0e-6/bins_n
# The centre positions of the bins
bin_pos = numpy.arange((bin_dz/2.0), 1000e-6, bin_dz)
for m in range(bins_n): memb_tris_binned[m]=[]
# Bin the triangles
for t in memb_tris:
barycz = mesh.getTriBarycenter(t)[2]
idx = 0
for p in bin_pos:
if (barycz >= p-(bin_dz/2.0) and barycz < p+(bin_dz/2.0)):
memb_tris_binned[idx].append(t)
mtb_area[idx]+=(mesh.getTriArea(t)*1.0e12)
break
idx +=1
Explanation: The following section of code, which will also not be described in full detail, simply serves to bin the surface triangles by distance along the z-axis and to store the total area of the bins, which will be used later in the script to convert recorded current to a current density (current per unit area):
End of explanation
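The loop above scans bin centres linearly; equivalently, the bin index follows directly from the barycentre z-coordinate. A quick check with hypothetical barycentre positions:

```python
import numpy as np

bins_n = 100
bin_dz = 1000.0e-6 / bins_n              # 10 um bins, as above

barycz = np.array([0.5e-6, 15.0e-6, 995.0e-6])   # made-up barycentres, in m
idx = np.minimum((barycz / bin_dz).astype(int), bins_n - 1)
# -> bins [0, 1, 99]
```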
# The points along (z) axis at which to record potential
pot_pos = numpy.arange(mesh.getBoundMin()[2], mesh.getBoundMax()[2], 10e-6)
pot_n = len(pot_pos)
pot_tet = numpy.zeros(pot_n, dtype = 'uint')
i = 0
for p in pot_pos:
    # Axis is aligned with z-axis
    pot_tet[i] = mesh.findTetByPoint([0.0, 0.0, pot_pos[i]])
    i = i + 1
Explanation: The final piece of geometry manipulation is to find a point at every 10µm along the z-axis at which to record potential. In STEPS it is possible to record potential anywhere in the membrane or conduction volume and from vertices, triangles and tetrahedrons. Here we intend to record the potential at intracellular tetrahedrons along the centre of the cuboid, and so find their indices and store in numpy array pot_tet:
End of explanation
# Create cytosol compartment
cyto = sgeom.TmComp('cyto', mesh, range(mesh.ntets))
# Create the patch and associate with surface system 'ssys'
patch = sgeom.TmPatch('patch', mesh, memb_tris, cyto)
patch.addSurfsys('ssys')
Explanation: Now, much like in previous chapters, we will create a compartment which simply consists of all tetrahedrons in the mesh, and a surface patch which consists of all surface triangles (except those on the minimum z face), which we found earlier and stored in list memb_tris:
End of explanation
# Create the membrane across which the potential will be solved
membrane = sgeom.Memb('membrane', mesh, [patch], opt_method = 1)
Explanation: And now we create a new and very important object for the membrane potential calculation, the 'membrane' itself. The membrane object, steps.geom.Memb, simply consists of one or more patch objects which must together form one continuos surface, although the membrane may be 'open' or 'closed' ('closed' means all member triangles are directly connected to 3 other membrane triangles and so form a closed surface, and 'open' means some triangles have fewer than 3 neighbours and so the surface contains holes). Any channels that exist in the patch(es) that comprise(s) the membrane are available to conduct a current (specified by steps.model.OhmicCurr or steps.model.GHKcurr objects). The INNER compartment(s) to the membrane patches will comprise the 'conduction volume' representing the intracellular region. The potential at all vertices in the membrane and conduction volume will be calculated and will vary with any channel, capacitive or externally applied currents, relative to the (earthed) extracellular region.
Where the extracellular space is included in simulations the membrane may be comprised of internal mesh triangles, but for this relatively simple model the membrane is formed from triangles on the surface of the mesh and is comprised of only one patch. This patch contains an inner compartment consisting of all tetrahedrons in the mesh, which will form the conduction volume. So we create the membrane:
End of explanation
# Create the random number generator
r = srng.create('mt19937',512)
r.initialize(int(time.time()%10000))
Explanation: The steps.geom.Memb constructor requires a string identifier argument and a reference to a steps.geom.Tetmesh object plus a list of the composite steps.geom.TmPatch objects (here there in only one), and finally an optional argument named opt_method. This allows the choice of a method for optimization of the ordering of vertices in the membrane and conduction volume, which is essential to produce an efficient calculation, as discussed in Hepburn I. et al. (2013) Efficient calculation of the quasi-static electrical potential on a tetrahedral mesh. Front Comput Neurosci. DOI: 10.3389/fncom.2013.00129. Two methods are presently available: 1) a fast ordering of vertices by their position along the principle axis, which is suitable if one axis is much longer than an other (as is the case here) and 2) a slower breadth-first tree iteration, which produces a similar result to method (1) in cable-like structures but offers a significant improvement to simulation efficiency in complex geometries. Although the initial search for (2) can be slow it is possible to save an optimisation in a file for a specific membrane with solver function steps.solver.Tetexact.saveMembOpt, and this optimisation file can then be supplied as an argument to the steps.geom.Memb constructor, so each optimisation for any given membrane need only be found once. However, since this example uses a cable-like mesh we can use the faster principle-axis ordering method, though method (2) is recommended when working with complex, realistic geometries.
There is also an optional boolean argument verify, which defaults to False, but if True will verify that the membrane is a suitable surface for the potential calculation- although this verification can take rather a long time for larger meshes, so should only be used when one is not confident in the suitability of the membrane.
Simulation with Tetexact
As always for a stochastic simulation in STEPS, we create the random number generator and provide a random initial seed based on the current time, here with 10,000 possible unique values:
End of explanation
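The seed expression is worth a second look: taking time.time() modulo 10000 keeps the value in [0, 10000), which is where the 10,000 possible unique values mentioned above come from:

```python
import time

seed = int(time.time() % 10000)
# seed is always an integer in the range [0, 9999]
```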
# Create solver object
sim = ssolver.Tetexact(mdl, mesh, r, True)
Explanation: And with our model, geometry and random number generator created we are ready to create the solver object. The membrane potential calculation in STEPS is an extension to the steps.solver.Tetexact and steps.solver.TetODE solvers, and creating the solver is much like in previous mesh-based examples, with arguments to the constructor of a steps.model.Model object, a steps.geom.Tetmesh object and a steps.rng.RNG object in that order, plus a simple boolean flag that switches on the membrane potential calculation when set to True (and defaults to False):
End of explanation
surfarea = sim.getPatchArea('patch')
Explanation: If requested to perform the membrane potential calculation (with the boolean argument set to True) a Tetexact solver requires one (and currently only one) steps.geom.Memb to exist within the geometry description, and will therefore fail to be created if such an object does not exist.
With the steps.solver.Tetexact solver successfully created, with the membrane potential calculation included, it is time to set the simulation initial conditions. Much like in previous examples, this requires injecting molecules into a specific location. In this case we wish to inject a number of molecules represented by steps.model.ChanState objects in the model description into the membrane surface represented by a steps.geom.TmPatch object in the geometry description. As we will see, at the solver stage the Channel State objects behave just like Species objects and any solver method previously used for Species objects may be used for Channel State objects, such as steps.solver.Tetexact.setPatchCount, steps.solver.Tetexact.setCompConc and so on.
At this point we should pause to look at how to specify conductance in STEPS models. Conductance in STEPS comes from steps.model.OhmicCurr objects, which provide a single-channel conductance that will be applied to any Channel State molecule to which that conductance in mapped. For example, recall in this model that we created an Ohmic Current called OC_K to represent the potassium current in the simulation, which will apply to Channel State K_n4, with a single-channel conductance of 20 pS and reversal potential of -77mV, with this statement:
OC_K = smodel.OhmicCurr('OC_K', ssys, chanstate=K_n4, g=20.0e-12, erev=-77e-3)
The overall potassium conductance in the simulation at any time will be equal to the number of K_n4 Channel States in existence multiplied by the single-channel conductance, with a maximum conductance equal to the highest possible number of K_n4 Channel States (the total number of potassium channels).
Other simulators may use different methods from STEPS to specify conductance, and many modellers may be more comfortable working with conductance per unit area, so some care should be taken with the conversion for STEPS models. This typically involves multiplying conductance per unit area by the membrane area to find overall conductance, then injecting the correct amount of channels into the membrane in STEPS to represent this conductance, depending on the single-channel conductance. Since the conducting channels are discrete in STEPS there may be a small discrepancy from the continuous value.
Recall we have specified potassium channel density, K_ro, as 18 per square micron and sodium channel density, Na_ro, as 60 per square micron, previously in our script with statements:
K_ro = 18.0e12 # per square meter
Na_ro = 60.0e12 # per square meter
when multiplied by single-channel conductance to give maximum potassium conductance of 0.036 Siemens per square cm and sodium conductance of 0.120 Siemens per square cm. So when injecting our channels in STEPS we simply need to multiply these densities by the surface area of the membrane to find the number to inject. An added complication for this model is that we want to inject steady-state initial conditions, so all channel states have some initial non-zero proportion, which we specified previously in lists K_facs and Na_facs (and we will not go into the derivation of the steady-state factors here).
So to inject our channels, first we find the membrane surface area, which is the same as the area of its only constituent patch:
End of explanation
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
Explanation: And call solver method steps.solver.Tetexact.setPatchCount for every Channel State in the model (including leak) to set the initial number:
End of explanation
# Set dt for membrane potential calculation to 0.01ms
sim.setEfieldDT(1.0e-5)
Explanation: One example run of the above code resulted in potassium Channel State populations of 3135, 5834, 4046, 1245 and 141 respectively giving an initial potassium conductance (from K_n4) of 2.8nS (0.00035 Siemens per square cm) and maximum conductance of 288nS (0.036 Siemens per square cm) as desired.
The next few lines of code set some important new simulation variables, all to do with the membrane potential calculation. The first function (steps.solver.Tetexact.setEfieldDT) sets the time-step period for the potential calculation, specified in seconds. This tells STEPS how often to perform the 'E-Field' calculation to evaluate potential, and update any voltage-dependent processes in the simulation. The optimal value for this time-step will vary for different simulations, so some things should be kept in mind when making the choice. Firstly, the time-step should be short enough that the voltage change occurring during each time-step is small and voltage can be assumed constant during each time-step for any voltage-dependent processes in the model. A large time-step may result in loss of accuracy. Secondly, the shorter the time-step the slower the simulation will be. Thirdly, the time-step must be shorter than or equal to the simulation time-step (this is 0.1ms in our model) so that at least one membrane potential calculation can be carried out per simulation time-step. As a rough guide 0.01ms is usually highly accurate, and it is not recommended to exceed 0.1ms. So for this simulation we choose a calculation time-step of 0.01ms (which happens to be the default value):
End of explanation
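The conductance figures quoted above follow from simple arithmetic on the constants (numbers copied from the text; no STEPS call is needed to check them):

```python
K_G, K_ro = 20.0e-12, 18.0e12     # single-channel conductance (S), density (1/m^2)
Na_G, Na_ro = 20.0e-12, 60.0e12

gmax_K = K_G * K_ro / 1.0e4       # S/m^2 -> S/cm^2, expect 0.036
gmax_Na = Na_G * Na_ro / 1.0e4    # expect 0.120

g_K_init = 141 * K_G              # 141 channels in state K_n4 -> ~2.8 nS
```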
# Initialise potential to -65mV
sim.setMembPotential('membrane', -65e-3)
Explanation: Now we set the initial potential of the membrane with function steps.solver.Tetexact.setMembPotential with an argument given in volts:
End of explanation
# Set capacitance of the membrane to 1 uF/cm^2 = 0.01 F/m^2
sim.setMembCapac('membrane', 1.0e-2)
Explanation: Which also happens to be the default.
And we set the specific capacitance of the membrane, in units of Farad per square meter, with function
steps.solver.Tetexact.setMembCapac. So for 1 microFarad per square cm this is 0.01 (which is also the default setting):
End of explanation
# Set resistivity of the conduction volume to 100 ohm.cm = 1 ohm.meter
sim.setMembVolRes('membrane', 1.0)
Explanation: Finally we set bulk resistivity, which is actually a property of the conduction volume encompassed by the membrane. We use function
steps.solver.Tetexact.setMembVolRes. STEPS expects units of ohm.metre here:
End of explanation
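The two unit conversions in the code comments above can be spelled out explicitly:

```python
# Specific capacitance: 1 uF/cm^2 = 1e-6 F per 1e-4 m^2 = 0.01 F/m^2
capac_F_per_m2 = 1.0e-6 / 1.0e-4

# Bulk resistivity: 100 ohm.cm = 100 * 0.01 ohm.m = 1.0 ohm.m
res_ohm_m = 100.0 * 1.0e-2
```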
# Set the current clamp
niverts = len(injverts)
for t in injverts:
    sim.setVertIClamp(t, Iclamp/niverts)
Explanation: Again, this is in fact the default setting.
The last condition to set is something that will remain unchanged throughout our simulation in this example, which is a constant current injection at one end of the long cubic geometry. This will have an effect of inducing action potentials at the depolarised end, which will then propagate, and a constant current at the correct level will ensure a train of action potentials. In STEPS it is possible to inject current to any node in the conduction volume or membrane with solver method steps.solver.Tetexact.setVertIClamp, or any membrane triangle (where current will be shared equally between its 3 nodes) with solver method steps.solver.Tetexact.setTriIClamp. Here, we have already found the vertices at one end of the geometry, the minimum z end, and stored them in list injverts. We now wish to set the current clamp for each of these vertices as a share of the 50pA current we have already defined in variable Iclamp. Note: STEPS maintains the convention that the effect of a positive applied current is to make potential more positive, which is the opposite signing convention to channel currents.
End of explanation
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
res_I_Na = numpy.zeros((N_timepoints, bins_n))
res_I_K = numpy.zeros((N_timepoints, bins_n))
Explanation: The current clamp set will remain in existence throughout the simulation, until we specify otherwise.
Just before running the simulation we need to create empty data structures, much like in previous chapters. Here we intend to record potential, along with sodium and potassium currents, by the 10µm bins we previously arranged:
End of explanation
# Run the simulation
for l in range(N_timepoints):
    if l % 10 == 0:
        print("Tpnt: ", l)
    sim.run(DT_sim*l)
    # Loop through membrane triangle bins and record sodium and potassium currents
    for b in range(bins_n):
        for mt in memb_tris_binned[b]:
            res_I_Na[l,b] += sim.getTriOhmicI(mt, 'OC_Na')*1.0e12
            res_I_K[l,b] += sim.getTriOhmicI(mt, 'OC_K')*1.0e12
        res_I_Na[l,b] /= mtb_area[b]
        res_I_K[l,b] /= mtb_area[b]
    # Loop through central tetrahedrons and record potential
    for p in range(pot_n):
        res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
End of explanation
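The unit handling in the recording loop (per-triangle currents in amps summed per bin, scaled to pA, then divided by the bin area in square microns) can be sketched with toy numbers:

```python
tri_currents = [2.0e-12, 3.0e-12]   # hypothetical per-triangle currents, in A
bin_area_um2 = 10.0                 # bin area as accumulated in mtb_area

I_pA = sum(i * 1.0e12 for i in tri_currents)   # -> 5.0 pA
density = I_pA / bin_area_um2                  # -> 0.5 pA/um^2
```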
results = (res, pot_pos, res_I_Na, res_I_K, bin_pos)
Explanation: If we want to put all this code into a function, we should return the tuple
End of explanation
%matplotlib inline
from pylab import *
Explanation: Plotting simulation output
We begin by importing some matplotlib plotting functions:
End of explanation
tpnt = arange(0.0, N_timepoints*DT_sim, DT_sim)
Explanation: Now we create an array of 'time-points' to be used in the plots:
End of explanation
def plotVz(tidx):
    if tidx >= tpnt.size:
        print('Time index out of range')
        return
    plot(results[1]*1e6, results[0][tidx], \
         label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
    legend(numpoints=1)
    xlim(0, 1000)
    ylim(-80, 40)
    xlabel('Z-axis (um)')
    ylabel('Membrane potential (mV)')
Explanation: And create two functions: one to plot potential as along the z-axis at a given 'time-point':
End of explanation
def plotIz(tidx, plotstyles=['-', '--']):
    if tidx >= tpnt.size:
        print('Time index out of range')
        return
    plot(results[4]*1e6, results[2][tidx], plotstyles[0], \
         label='Na: '+str(1e3*tidx*DT_sim)+'ms', linewidth=3)
    plot(results[4]*1e6, results[3][tidx], plotstyles[1], \
         label='K: '+str(1e3*tidx*DT_sim)+'ms', linewidth=3)
    legend(loc='best')
    xlim(0, 1000)
    ylim(-10, 15)
    xlabel('Z-axis (um)')
    ylabel('Current (pA/um^2)')
Explanation: and another to plot the sodium and potassium currents (separately) along the z-axis at a given 'time-point':
End of explanation
figure(figsize=(12,7))
plotVz(10)
plotVz(20)
plotVz(30)
show()
Explanation: Finally, with the simulation finished we can use the plotting functions to plot the potential along the z-axis at 1ms, 2ms and 3ms:
End of explanation
figure(figsize=(12,7))
plotIz(10)
plotIz(20)
plotIz(30)
show()
Explanation: And to plot the membrane currents along the z-axis also at 1ms, 2ms and 3ms:
End of explanation
import steps.mpi
import steps.mpi.solver as mpi_solver
import steps.utilities.geom_decompose as gd
Explanation: Simulation with TetOpSplit
The spatial stochastic approximate solver steps.mpi.solver.TetOpSplit, which runs in parallel, also supports the membrane potential calculation. The solver is described in detail in a separate chapter, so usage will only
be described briefly here.
Usage is similar as for Tetexact, however we must import some different STEPS modules:
End of explanation
tet_hosts = gd.binTetsByAxis(mesh, steps.mpi.nhosts)
tri_hosts = gd.partitionTris(mesh, tet_hosts, mesh.getSurfTris())
Explanation: We must also partition the mesh along the axis based on the number of MPI hosts:
End of explanation
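Conceptually, binTetsByAxis slices the mesh into contiguous sections along its longest axis and assigns each section to one MPI rank. A rough stand-in with plain numpy (hypothetical barycentre coordinates, not the actual STEPS implementation):

```python
import numpy as np

nhosts = 4
zmin, zmax = 0.0, 1000.0e-6
z = np.array([50.0e-6, 300.0e-6, 600.0e-6, 990.0e-6])   # tet barycentres, in m

section = (z - zmin) / (zmax - zmin) * nhosts
tet_hosts = np.minimum(section.astype(int), nhosts - 1)
# -> ranks [0, 1, 2, 3]
```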
sim = mpi_solver.TetOpSplit(mdl, mesh, r, True, tet_hosts, tri_hosts)
Explanation: Now we can create the steps.mpi.solver.TetOpSplit solver object, passing the partitioning information
as well as the usual arguments:
End of explanation
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
sim.setEfieldDT(1.0e-5)
sim.setMembPotential('membrane', -65e-3)
sim.setMembCapac('membrane', 1.0e-2)
sim.setMembVolRes('membrane', 1.0)
# Set the current clamp
niverts = len(injverts)
for t in injverts:
    sim.setVertIClamp(t, Iclamp/niverts)
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
res_I_Na = numpy.zeros((N_timepoints, bins_n))
res_I_K = numpy.zeros((N_timepoints, bins_n))
# Run the simulation
for l in range(N_timepoints):
    if steps.mpi.rank == 0:
        if l % 10 == 0:
            print("Tpnt: ", l)
    sim.run(DT_sim*l)
    if steps.mpi.rank == 0:
        for p in range(pot_n):
            res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
Explanation: This time we only record voltage from the simulation due to the reduced functionality of the solver at present:
End of explanation
if steps.mpi.rank == 0:
    results = (res, pot_pos)
    tpnt = arange(0.0, N_timepoints*DT_sim, DT_sim)
    figure(figsize=(12,7))
    for tidx in (10, 20, 30, 40):
        plot(results[1]*1e6, results[0][tidx], \
             label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
    legend(numpoints=1)
    xlim(0, 1000)
    ylim(-80, 40)
    xlabel('Z-axis (um)')
    ylabel('Membrane potential (mV)')
    show()
Explanation: And simply plot the data:
End of explanation
# The number of simulation time-points
N_timepoints = 401
# The simulation dt, now also the E-Field dt
DT_sim = 1.0e-5 # seconds
Explanation: Assuming that all this script was written in a file named HH_APprop_tetopsplit.py, to run from the command line with 4 MPI processes we should use
mpirun -n 4 python HH_APprop_tetopsplit.py
Simulation with TetODE
The spatial deterministic solver steps.solver.TetODE, which was introduced in Simulating Diffusion on Surfaces, is also available for membrane potential simulations.
As discussed in Simulating Diffusion on Surfaces, simulations in TetODE share model and geometry construction with solver Tetexact, with a few
differences to the solver to run deterministic simulations, such as the possibility of setting tolerance levels. Coupling with the membrane potential solution introduces some new considerations. Firstly,
since reaction-diffusion solutions are solved in CVODE this reduces the possibilities of coupling the reaction-diffusion simulation with the membrane
potential calculation. As discussed in Simulating Diffusion on Surfaces, a call to steps.solver.TetODE.run hands control to CVODE until the specified
endtime during which there can be no communication with the membrane potential calculation. For this reason function setEfieldDT is not supported in
TetODE: rather the E-Field time-step is implicitly taken as the simulation time step, i.e. one E-Field calculation will be performed every time the STEPS simulation
is advanced with a call to steps.solver.TetODE.run. Therefore, in this model, to achieve an E-Field calculation time-step of 0.01ms we need to change
constant DT_sim to $10^{-5}$, which will also of course change how often we record data, so we need to also change constant N_timepoints to 401
to ensure we run the simulation to 4ms. If we create a new script called 'HH_APprop_tetode.py' to run the deterministic simulation then, compared to 'HH_APprop.py'
we need to change the following constants:
End of explanation
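A quick sanity check that the new constants still cover the same 4 ms of simulated time at a 0.01 ms step:

```python
N_timepoints = 401
DT_sim = 1.0e-5      # seconds; for TetODE this is also the implicit E-Field dt

total_time = (N_timepoints - 1) * DT_sim   # -> 4.0e-3 s = 4 ms
```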
sim = ssolver.TetODE(mdl, mesh, r, True)
Explanation: And we create a TetODE solver object instead of a Tetexact solver:
End of explanation
# sim.setEfieldDT(1.0e-5)
Explanation: And remove the unsupported function:
End of explanation
sim.setPatchCount('patch', 'Na_m0h0', Na_ro*surfarea*Na_facs[0])
sim.setPatchCount('patch', 'Na_m1h0', Na_ro*surfarea*Na_facs[1])
sim.setPatchCount('patch', 'Na_m2h0', Na_ro*surfarea*Na_facs[2])
sim.setPatchCount('patch', 'Na_m3h0', Na_ro*surfarea*Na_facs[3])
sim.setPatchCount('patch', 'Na_m0h1', Na_ro*surfarea*Na_facs[4])
sim.setPatchCount('patch', 'Na_m1h1', Na_ro*surfarea*Na_facs[5])
sim.setPatchCount('patch', 'Na_m2h1', Na_ro*surfarea*Na_facs[6])
sim.setPatchCount('patch', 'Na_m3h1', Na_ro*surfarea*Na_facs[7])
sim.setPatchCount('patch', 'K_n0', K_ro*surfarea*K_facs[0])
sim.setPatchCount('patch', 'K_n1', K_ro*surfarea*K_facs[1])
sim.setPatchCount('patch', 'K_n2', K_ro*surfarea*K_facs[2])
sim.setPatchCount('patch', 'K_n3', K_ro*surfarea*K_facs[3])
sim.setPatchCount('patch', 'K_n4', K_ro*surfarea*K_facs[4])
sim.setPatchCount('patch', 'Leak', L_ro * surfarea)
sim.setMembPotential('membrane', -65e-3)
sim.setMembCapac('membrane', 1.0e-2)
sim.setMembVolRes('membrane', 1.0)
# Set the current clamp
niverts = len(injverts)
for t in injverts:
    sim.setVertIClamp(t, Iclamp/niverts)
# Create result structures
res = numpy.zeros((N_timepoints, pot_n))
res_I_Na = numpy.zeros((N_timepoints, bins_n))
res_I_K = numpy.zeros((N_timepoints, bins_n))
# Run the simulation
for l in range(N_timepoints):
    if l % 100 == 0:
        print("Tpnt: ", l)
    sim.run(DT_sim*l)
    for p in range(pot_n):
        res[l,p] = sim.getTetV(int(pot_tet[p]))*1.0e3
Explanation: Finally, since it is unfortunately not possible to record information about the spatial currents in TetODE (functions such as getTriOhmicI are not supported),
we remove anything to do with recording the Na and K currents, which makes our simulation loop rather simple:
End of explanation
results = (res, pot_pos)
Explanation: And now we return only the information related to the recordings of the spatial membrane potential:
End of explanation
figure(figsize=(12,7))
for tidx in (100, 200, 300):
    plot(results[1]*1e6, results[0][tidx], \
         label=str(1e3*tidx*DT_sim)+'ms', linewidth=3)
legend(numpoints=1)
xlim(0, 1000)
ylim(-80,40)
xlabel('Z-axis (um)')
ylabel('Membrane potential (mV)')
show()
Explanation: We can finally plot the results as follows
End of explanation
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear algebra overview projection example
Linear algebra is the study of vectors and linear transformations. This notebook introduces concepts from linear algebra in a bird's-eye overview. The goal is not to get into the details, but to give the reader a taste of the different types of thinking involved.
Step1: Prerequisites
Linear algebra builds upon high school math concepts like
Step2: Vector addition
Step3: Vector length $\|\vec{u}\|$
Step4: Unit-length vectors $\hat{u}$
Step5: Dot product
Definition
The dot product of two vectors is proportional to the lengths of the vectors and to the extent to which they point in the same direction.
If $\vec{u}=(u_1,u_2)$ and $\vec{v}=(v_1,v_2)$, then
Step6: Intuition
Step8: Projections
A projection of the vector $\vec{v}$ in the direction $\vec{d}$ is denoted $\Pi_{\vec{d}}(\vec{v})$. The formula for computing the projections uses the dot product operation
Step9: Projections play an important role in physics. For example, when solving a two-dimensional projectile problem we often decompose vector quantities like forces $\vec{F}$, velocities $\vec{v}$, and momenta $\vec{p}$ into their $x$- and $y$-components
Step11: Take 1
Step13: Vector functions
Observe that the function P is a vector function—a function that takes vectors as inputs and produces vectors as outputs. In mathematical notation we write this as
$$
P
Step15: Take 3
Step16: Equivalence relationship between linear transformations $T$ and matrices $M_T$
Step17: Matrix operations
Addition (denoted $A+B$)
Subtraction, the inverse of addition (denoted $A-B$)
Scaling by a constant $\alpha$ (denoted $\alpha A$)
Matrix-vector product (denoted $A\vec{x}$)
Matrix product (denoted $AB$)
Matrix inverse (denoted $A^{-1}$)
Trace (denoted $\textrm{Tr}(A)$)
Determinant (denoted $\textrm{det}(A)$ or $|A|$)
Matrix-vector product
$$
A \vec{x}
\quad
\Leftrightarrow
\quad
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{bmatrix}
\!\!
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
\equiv
\begin{bmatrix}
x_1a_{11} + x_2a_{12} \\
x_1a_{21} + x_2a_{22} \\
x_1a_{31} + x_2a_{32}
\end{bmatrix}
$$
The matrix-vector product is defined this way so it can represent linear transformations.
Step18: Matrix-matrix product
$$
AB
\quad \Leftrightarrow \quad
\begin{bmatrix}
a_{11} & a_{12} \\[1.5mm]
a_{21} & a_{22} \\[1.5mm]
a_{31} & a_{32}
\end{bmatrix}
\!\!
\begin{bmatrix}
b_{11} & b_{12} \\[1.5mm]
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\[1.5mm]
a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \\[1.5mm]
a_{31}b_{11} + a_{32}b_{21} & a_{31}b_{12} + a_{32}b_{22}
\end{bmatrix}
$$
Step19: The matrix-matrix product implements composition of linear transformations
Step20: Matrix inverse
For an invertible matrix $A$, the matrix inverse $A^{-1}$ acts to undo the effects of $A$
Step21: Matrix equations
Suppose we're given the equation
$$
A\vec{x} = \vec{b}
$$
and we want to solve for $\vec{x}$.
One way to find $\vec{x}$ is to multiply both sides of the equation by $A^{-1}$ | Python Code:
# setup SymPy
from sympy import *
x, y, z, t = symbols('x y z t')
init_printing()
# a vector is a special type of matrix (an n-vector is either an nx1 or a 1xn matrix)
Vector = Matrix # define alias Vector so I don't have to explain this during video
# setup plotting
%matplotlib inline
import matplotlib.pyplot as mpl
from plot_helpers import plot_vec, plot_vecs, plot_line, plot_plane, autoscale_arrows
Explanation: Linear algebra overview projection example
Linear algebra is the study of vectors and linear transformations. This notebook introduces concepts form linear algebra in a birds-eye overview. The goal is not to get into the details, but to give the reader a taste of the different types of thinking: computational, geometrical, and theoretical, that are used in linear algebra.
Plan
Prerequisites
Vectors
Definition
Geometrical interpretation
Dot product
Projections
Projection operation
Example: projection onto the line with equation $x-y=0$
Vector functions
Linear property: $f(a\mathbf{x} + b\mathbf{y}) = af(\mathbf{x}) + bf(\mathbf{y})$
Projection transformation P
Matrix representation of linear transformations
Linear transformation <--> Matrix-vector product equivalence
Show matrix representation M_P of Projection transformation P
Matrices
Definition
Matrix operations
Matrix-vector product
Matrix-matrix product
Trace
Determinant
Matrix inverse
Matrix equations
Reduced row echelon form
End of explanation
# define two vectors
u = Vector([1,1])
v = Vector([1,-1])
u
v
plot_vecs(u, v)
autoscale_arrows()
Explanation: Prerequisites
Linear algebra builds upon high school math concepts like:
- Geometry (lines, curves, areas, triangles)
- Numbers (integers, rationals, reals, complex numbers)
- Functions ($f(x)$ takes an input $x$ and produces an output $y$)
Vectors
End of explanation
# graphical
plot_vecs(u,v)
plot_vec(v, at=u, color='b')
plot_vec(u+v, color='r')
autoscale_arrows()
# algebraic
u+v
Explanation: Vector addition
End of explanation
u.norm()
Explanation: Vector length $\|\vec{u}\|$
End of explanation
uhat = u/u.norm()
plot_vecs(u, uhat)
uhat
Explanation: Unit-length vectors $\hat{u}$
End of explanation
u = Vector([2,2])
v = Vector([3,0])
plot_vecs(u,v)
autoscale_arrows()
u.dot(v)
Explanation: Dot product
Definition
The dot product of two vectors is proportional to the lengths of the vectors and to the extent to which they point in the same direction.
If $\vec{u}=(u_1,u_2)$ and $\vec{v}=(v_1,v_2)$, then:
$$
\vec{u} \cdot \vec{v} = u_1v_1 + u_2v_2 = \|\vec{u}\| \|\vec{v}\| \cos \theta_{uv},
$$
where $\theta_{uv}$ is the angle between the vectors.
End of explanation
# split the vector u into two parts:
u_parallel_to_v = Vector([2,0])
u_perp_to_v = Vector([0,2])
plot_vecs(u, v, u_parallel_to_v, u_perp_to_v)
autoscale_arrows()
u == u_parallel_to_v + u_perp_to_v
# the dot product uses only the part of u that is parallel to v
u.dot(v) == u_parallel_to_v.dot(v) == u_parallel_to_v.norm()*v.norm()
# two vectors that are perpendicular have zero dot product
u_perp_to_v.dot(v)
Explanation: Intuition
End of explanation
def proj(v, d):
"""Computes the projection of vector `v` onto direction `d`."""
return v.dot( d/d.norm() )*( d/d.norm() )
v = Vector([2,2])
d = Vector([3,0])
proj_v_on_d = proj(v,d)
plot_vecs(d, v, proj_v_on_d)
autoscale_arrows()
Explanation: Projections
A projection of the vector $\vec{v}$ in the direction $\vec{d}$ is denoted $\Pi_{\vec{d}}(\vec{v})$. The formula for computing the projections uses the dot product operation:
$$
\Pi_{\vec{d}}(\vec{v})
\ \equiv \
(\vec{v} \cdot \hat{d}) \hat{d}
\ = \
\left(\vec{v} \cdot \frac{\vec{d}}{\|\vec{d}\|} \right) \frac{\vec{d}}{\|\vec{d}\|}.
$$
General projection operation
End of explanation
# The line with equation y = x can also be written as a paramteric equation
# [x,y] = [0,0] + s*[1,1] where d = [1,1] is called the direction vector the line
d = Vector([1,1])
plot_line(d,[0,0])
Explanation: Projections play an important role in physics. For example, when solving a two-dimensional projectile problem we often decompose vector quantities like forces $\vec{F}$, velocities $\vec{v}$, and momenta $\vec{p}$ into their $x$- and $y$-components: $(F_x,F_y)$, $(v_x,v_y)$, and $(p_x,p_y)$. This decomposition of vectors can transform a complicated two-dimensional problem into two simpler one-dimensional problems, which can be solved independently.
Example: projection onto the line with equation $y=x$
End of explanation
# want a function that computes the projection onto the line with equation y = x for any vec
def P(vec):
"""Compute the projection of vector `vec` onto the line y=x."""
return proj(vec, d)
v = Vector([5,0])
plot_line(d,[0,0])
plot_vecs(v, P(v))
P(v)
Explanation: Take 1: using projection operation
End of explanation
ihat = Vector([1,0])
jhat = Vector([0,1])
Pihat = P(ihat)
Pjhat = P(jhat)
Pihat, Pjhat
def P2(vec):
"""Compute the projection of vector `vec` onto the line y=x."""
return vec[0]*Pihat + vec[1]*Pjhat
v = Vector([5,0])
plot_line(d,[0,0])
plot_vecs(v, P2(v))
Explanation: Vector functions
Observe that the function P is a vector function—a function that takes vectors as inputs and produces vectors as outputs. In mathematical notation we write this as
$$
P : \mathbb{R}^2 \to \mathbb{R}^2.
$$
Linear property:
A linear transformation $T$ is a vector function that obeys the linear property:
$$
T(a\vec{x} + b\vec{y}) = aT(\vec{x}) + bT(\vec{y}).
$$
Take 2: projection transformation P
The projection $P$ is a linear transformation, so it obeys:
$$
P\left( \begin{bmatrix}a \\ b \end{bmatrix} \right)
= P(a\hat{\imath} + b\hat{\jmath})
= aP(\hat{\imath}) + bP(\hat{\jmath}).
$$
End of explanation
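As a quick sanity check (an added illustration, not part of the original notebook), the linear property of the projection can be verified symbolically for arbitrary scalar weights:

```python
from sympy import Matrix, symbols

a, b = symbols('a b')
d = Matrix([1, 1])  # direction of the line y = x

def proj(v, d):
    """Projection of vector v onto direction d."""
    return v.dot(d / d.norm()) * (d / d.norm())

u = Matrix([1, 0])
w = Matrix([0, 1])

# linearity: P(a*u + b*w) should equal a*P(u) + b*P(w)
lhs = proj(a * u + b * w, d)
rhs = a * proj(u, d) + b * proj(w, d)
assert (lhs - rhs).expand() == Matrix([0, 0])
print("linear property verified")
```

Since the two sides agree for symbolic $a$ and $b$, the projection really is linear, which is what licenses the matrix representation used next.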
M_P = Matrix([[1,1],
[1,1]])/2
M_P
def P3(vec):
"""Compute the projection of vector `vec` onto the line y=x."""
return M_P*vec
v = Vector([4,0])
plot_line(d, [0,0])
plot_vecs(v, P3(v))
M_P.shape
Explanation: Take 3: linear transformation as matrix-vector product
Matrix definition
$$
\alpha \vec{u} + \beta \vec{v}
=
\alpha
\begin{bmatrix}u_1 \\ u_2 \end{bmatrix}
+
\beta
\begin{bmatrix}v_1 \\ v_2 \end{bmatrix}
=
\begin{bmatrix}u_1 & v_1 \\ u_2 & v_2 \end{bmatrix}
\!
\begin{bmatrix} \alpha \\ \beta \end{bmatrix}.
$$
End of explanation
A = Matrix([[1,2],
[3,4],
[5,6]])
A
A.shape
Explanation: Equivalence relationship between linear transformations $T$ and matrices $M_T$:
$$
T : \mathbb{R}^n \to \mathbb{R}^m
\qquad
\Leftrightarrow
\qquad
M_T \in \mathbb{R}^{m \times n}
$$
Matrices
A matrix is a two-dimensional array of numbers.
Example
End of explanation
a_11, a_12, a_21, a_22, a_31, a_32 = symbols('a_11 a_12 a_21 a_22 a_31 a_32')
x_1, x_2 = symbols('x_1 x_2')
A = Matrix([
[a_11, a_12],
[a_21, a_22],
[a_31, a_32]])
x = Vector([x_1,x_2])
A*x
Explanation: Matrix operations
Addition (denoted $A+B$)
Subtraction, the inverse of addition (denoted $A-B$)
Scaling by a constant $\alpha$ (denoted $\alpha A$)
Matrix-vector product (denoted $A\vec{x}$)
Matrix product (denoted $AB$)
Matrix inverse (denoted $A^{-1}$)
Trace (denoted $\textrm{Tr}(A)$)
Determinant (denoted $\textrm{det}(A)$ or $|A|$)
Matrix-vector product
$$
A \vec{x}
\quad
\Leftrightarrow
\quad
\begin{bmatrix}
a_{11} & a_{12} \\
a_{21} & a_{22} \\
a_{31} & a_{32}
\end{bmatrix}
\!\!
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
\equiv
\begin{bmatrix}
x_1a_{11} + x_2a_{12} \\
x_1a_{21} + x_2a_{22} \\
x_1a_{31} + x_2a_{32}
\end{bmatrix}
$$
The matrix-vector product is defined this way so it can represent linear transformations.
End of explanation
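The trace and determinant listed among the matrix operations above are never demonstrated in the notebook itself; as a small added illustration, both are one-liners in SymPy:

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [3, 9]])

print(A.trace())  # 10, the sum of the diagonal entries
print(A.det())    # 3, since 1*9 - 2*3 = 3
```

A nonzero determinant is also the test for invertibility, which matters for the matrix inverse discussed below.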
b_11, b_12, b_21, b_22 = symbols('b_11 b_12 b_21 b_22')
B = Matrix([[b_11, b_12],
[b_21, b_22]])
A*B
# (AB)_ij = dot product of ith row of A with jth col of B
(A*B)[2,1] == A[2,:].dot( B[:,1])
Explanation: Matrix-matrix product
$$
AB
\quad \Leftrightarrow \quad
\begin{bmatrix}
a_{11} & a_{12} \\[1.5mm]
a_{21} & a_{22} \\[1.5mm]
a_{31} & a_{32}
\end{bmatrix}
\!\!
\begin{bmatrix}
b_{11} & b_{12} \\[1.5mm]
b_{21} & b_{22}
\end{bmatrix}
=
\begin{bmatrix}
a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\[1.5mm]
a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22} \\[1.5mm]
a_{31}b_{11} + a_{32}b_{21} & a_{31}b_{12} + a_{32}b_{22}
\end{bmatrix}
$$
End of explanation
A*(B*x)
expand( A*(B*x) ) == expand( (A*B)*x )
# analogy with ordinary functions...
x = symbols('x')
def f(x):
return 2*x
def g(x):
return 3*x
f(g(x))
def h(x):
return 6*x
h(x)
Explanation: The matrix-matrix product implements composition of linear transformations:
End of explanation
A = Matrix([[1,2],
[3,9]])
A.inv()
A.inv()*A
Explanation: Matrix inverse
For an invertible matrix $A$, the matrix inverse $A^{-1}$ acts to undo the effects of $A$:
$$
A^{-1} A \vec{v} = \vec{v}.
$$
End of explanation
A = Matrix([[1,2],
[3,9]])
b = Vector([5,21])
x = A.inv()*b
x
# verify A*x == b
A*x
Explanation: Matrix equations
Suppose we're given the equation
$$
A\vec{x} = \vec{b}
$$
and we want to solve for $\vec{x}$.
One way to find $\vec{x}$ is to multiply both sides of the equation by $A^{-1}$:
$$
A^{-1}A\vec{x} = A^{-1}\vec{b}
$$
since $A^{-1}$ cancels $A$ we obtain:
$$
\vec{x} = A^{-1}\vec{b}.
$$
Example
$$
\begin{bmatrix}
1 & 2 \\
3 & 9
\end{bmatrix}
\!\!
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
=
\begin{bmatrix}
5 \\
21
\end{bmatrix}
\qquad
\Rightarrow
\qquad
\begin{bmatrix}
x_1 \\
x_2
\end{bmatrix}
=
\begin{bmatrix}
1 & 2 \\
3 & 9
\end{bmatrix}^{-1}
\!\!
\begin{bmatrix}
5 \\
21
\end{bmatrix}
$$
End of explanation |
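SymPy can also solve $A\vec{x} = \vec{b}$ directly, without forming $A^{-1}$ explicitly (an added sketch using the same example system):

```python
from sympy import Matrix

A = Matrix([[1, 2],
            [3, 9]])
b = Matrix([5, 21])

# LU-based solve avoids computing the inverse explicitly
x = A.LUsolve(b)
print(x)  # Matrix([[1], [2]])

# the solution satisfies the original equation
assert A * x == b
```

For larger systems this is the preferred route: explicitly inverting a matrix is more expensive and numerically less stable than a factorization-based solve.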
10,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numbers
On this page we will introduce several sets of numbers, with the help of a symbolic computation system
Step1: $\mathbb{N}$
The set of natural numbers $\mathbb{N} = \lbrace 0, 1, 2, 3, 4, ..., \infty \rbrace$ was defined to solve the problem of counting an arbitrary set.
Step2: When you read the mathematical expression $x + 3 = 2$, it means the following
Step3: you can read the mathematical expression $x^2 + 3x -1 = 0$ as follows
Step4: When you read the mathematical expression $x + 3 = 2$, it means the following
Step5: Does $\mathcal{X} = \lbrace x \in \mathbb{N}
Step6: where in this case both $y=3$ and $z=-2$ are in $\mathbb{Z}$.
$\mathbb{R}$
Now suppose we want to solve
Step7: Does $\mathcal{X} = \lbrace x \in \mathbb{N}
Step8: Does $\mathcal{X} = \lbrace x, y \in \mathbb{Z}
Step9: where in this case both $y=2$ and $z=3$ are in $\mathbb{Z}$.
from sympy import *
init_printing()
x = symbols('x')
x**2
Explanation: Numbers
On this page we will introduce several sets of numbers, with the help of a symbolic computation system:
End of explanation
eq = Eq(x + 3, 2, evaluate=False)
eq
Explanation: $\mathbb{N}$
The set of natural numbers $\mathbb{N} = \lbrace 0, 1, 2, 3, 4, ..., \infty \rbrace$ was defined to solve the problem of counting an arbitrary set.
End of explanation
Eq(x**2 + 3*x -1, 0, evaluate = False)
Explanation: When you read the mathematical expression $x + 3 = 2$, it means the following: we want to find the set $\mathcal{X}$ such that $\mathcal{X} = \lbrace x \in \mathbb{N} : x + 3 = 2 \rbrace$. In this case, $\mathcal{X} = \emptyset$, because no natural number $x$ satisfies the equation.
In the same way, for the equation
End of explanation
solve(eq, [x], dict=True)
Explanation: you can read the mathematical expression $x^2 + 3x -1 = 0$ as follows: we want to find the set $\mathcal{X}$ such that $\mathcal{X} = \lbrace x \in \mathbb{N} : x^2 + 3x -1 = 0 \rbrace$. Here, though, we are getting ahead of the hierarchy of numbers; the point is simply to apply the previous concept.
$\mathbb{Z}$
Since $x = -1$ does not belong to $\mathbb{N}$, one defines the set $\mathbb{Z}$ of the integers as $\mathbb{Z} = \lbrace -\infty, ..., -2, -1, 0, 1, 2, ..., \infty\rbrace$. Indeed, solving the equation for $x$,
End of explanation
eq = Eq(3*x, -2, evaluate=False)
eq
Explanation: When you read the mathematical expression $x + 3 = 2$, it means the following: we want to find the set $\mathcal{X}$ such that $\mathcal{X} = \lbrace x \in \mathbb{Z} : x + 3 = 2 \rbrace$. In this case, $\mathcal{X} = \lbrace -1 \rbrace$.
Interlude on the construction of sets.
The scheme for defining a set $\mathcal{X}$ is this:
<center>$\mathcal{X} = \lbrace x\in\mathcal{Y}: p(x) \text{ is true } \rbrace$</center>
where $\mathcal{Y}$ is a set that has already been defined and $p$ is a predicate, that is, a logical expression on $x$ that can be either true or false. You may then pick any name you like in place of $\mathcal{X}$. Important: whenever this notation is used, the object obtained and defined is always a set.
$\mathbb{Q}$
Now suppose we want to solve
End of explanation
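The set-builder scheme described above has a direct analogue in Python's set comprehensions, which can make the notation concrete (an added illustration, not part of the original notebook):

```python
# {x in Y : p(x) is true}  <->  {x for x in Y if p(x)}
Y = range(-10, 11)                 # a previously defined "set" of candidates
X = {x for x in Y if x + 3 == 2}   # the predicate p(x) is: x + 3 == 2
print(X)  # {-1}

# restricting the candidates to the naturals yields the empty set
empty = {x for x in Y if x >= 0 and x + 3 == 2}
print(empty)  # set()
```

The comprehension always produces a set, mirroring the remark that this notation always defines a set.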
solve(eq, [x], dict=True)
Explanation: Does $\mathcal{X} = \lbrace x \in \mathbb{N} : 3x = -2 \rbrace$ make sense? Yes, it does; in this case $\mathcal{X}=\emptyset$.
Does $\mathcal{X} = \lbrace x \in \mathbb{Z} : 3x = -2 \rbrace$ make sense? Yes, it does; in this case $\mathcal{X}=\emptyset$.
Since $3x = -2$ has no solution in $\mathbb{N}$ or in $\mathbb{Z}$, one defines the set $\mathbb{Q}$ of the rational numbers as $\mathbb{Q} = \left\lbrace z \in\mathbb{Z} \wedge y \in\mathbb{Z}\setminus\lbrace0\rbrace: y\,x = z\right\rbrace$. Indeed, solving the equation for $x$,
End of explanation
eq = Eq(x**2, 3, evaluate=False)
eq
Explanation: where in this case both $y=3$ and $z=-2$ are in $\mathbb{Z}$.
$\mathbb{R}$
Now suppose we want to solve
End of explanation
tentativi = list(map(S, range(-5, 5)))
tentativi
equazioni = [Eq((x/y)**2, S(3), evaluate=False) for x in tentativi for y in tentativi if y]
equazioni
list(map(solve, equazioni))
Explanation: Does $\mathcal{X} = \lbrace x \in \mathbb{N} : x^2 = 3 \rbrace$ make sense? Yes, it does; in this case $\mathcal{X}=\emptyset$.
Does $\mathcal{X} = \lbrace x \in \mathbb{Z} : x^2 = 3 \rbrace$ make sense? Yes, it does; in this case $\mathcal{X}=\emptyset$.
Does $\mathcal{X} = \lbrace x, y \in \mathbb{Z} : \left({{x}\over{y}}\right)^2 = 3 \rbrace$ make sense?
End of explanation
solve(eq, [x], dict=True)
Explanation: Does $\mathcal{X} = \lbrace x, y \in \mathbb{Z} : \left({{x}\over{y}}\right)^2 = 3 \rbrace$ make sense? Yes, it does; in this case $\mathcal{X}=\emptyset$.
Since $x^2 = 3$ has no solution in $\mathbb{N}$, in $\mathbb{Z}$, or in $\mathbb{Q}$, one introduces the set $\mathbb{R}$ of the real numbers, which contains (among others) the solutions of equations of the form $x^y = z$ with $y,z \in\mathbb{Z}$. Indeed, solving the equation for $x$,
End of explanation
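As a quick check (an added illustration), SymPy can confirm that the solution $\sqrt{3}$ really lies outside $\mathbb{Q}$ while still being real:

```python
from sympy import sqrt, Rational

s = sqrt(3)
print(s.is_rational)   # False: sqrt(3) is not in Q
print(s.is_real)       # True: it is in R

# a rational number, by contrast
q = Rational(-2, 3)    # the solution of 3x = -2
print(q.is_rational)   # True
```

This is exactly the jump in the hierarchy: the equation $x^2 = 3$ forces us from $\mathbb{Q}$ up to $\mathbb{R}$.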
0.3 == 3/10
#0.47368754355678678678678678678678678...(678)*... is in Q
pi
list(filter(lambda r: r.is_real, solve(Eq(x**5, 10))))
Explanation: where in this case both $y=2$ and $z=3$ are in $\mathbb{Z}$.
End of explanation |
10,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installing R on WinPython
This procedure applies to WinPython (versions of December 2015 and later)
1 - Downloading R binary
Step1: 2 - Checking and installing the R binary in the right place
Step4: During installation (if you want to be able to move the R installation afterwards)
Choose the non-default option "Yes (customized startup)"
then after 3 screens, select "Don't create a Start Menu Folder"
Un-select "Create a desktop icon"
Un-select "Save version number in registry"
<img src="https
Step5: 4 - Install an R package via an IPython kernel
Step6: 5- Small demo via R magic
Step7: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
import os
import sys
import io
# downloading R may take a few minutes (~80 MB)
try:
import urllib.request as urllib2 # Python 3
except:
import urllib2 # Python 2
# specify R binary and (md5, sha1) hash
# R-3.4.3:
r_url = "https://cran.r-project.org/bin/windows/base/old/3.5.0/R-3.5.0-win.exe"
hashes=("d3f579b3aacfdf45a008df3320cb3615","87cf0f72dcd91ff12627178e2438cb5a8d8a13c0")
# specify target location
# tweak change in recent winpython
tool_base_directory=os.environ["WINPYDIR"]+"\\..\\t\\"
if not os.path.isdir(tool_base_directory):
tool_base_directory=os.environ["WINPYDIR"]+"\\..\\tools\\"
r_installer = tool_base_directory+os.path.basename(r_url)
os.environ["r_installer"] = r_installer
# Download
g = urllib2.urlopen(r_url)
with io.open(r_installer, 'wb') as f:
f.write(g.read())
g.close
g = None
#checking it's there
!dir %r_installer%
Explanation: Installing R on WinPython
This procedure applies to WinPython (versions of December 2015 and later)
1 - Downloading R binary
End of explanation
# checking it's the official R
import hashlib
def give_hash(of_file, with_this):
with io.open(of_file, 'rb') as f:
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-1"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*40)
print ("%s %s %s" % (give_hash(r_installer, hashlib.md5) , give_hash(r_installer, hashlib.sha1),r_installer))
if give_hash(r_installer, hashlib.md5) == hashes[0] and give_hash(r_installer, hashlib.sha1) == hashes[1]:
print("looks good!")
else:
print("problem ! please check")
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# preparing Dos variables
os.environ["R_HOME"] = tool_base_directory+ "R\\"
os.environ["R_HOMEbin"]=os.environ["R_HOME"] + "bin"
# for installation we need this
os.environ["tmp_Rbase"]=os.path.join(os.path.split(os.environ["WINPYDIR"])[0] , 't','R' )
if 'amd64' in sys.version.lower():
r_comp ='/COMPONENTS="main,x64,translations'
else:
r_comp ='/COMPONENTS="main,i386,translations'
os.environ["tmp_R_comp"]=r_comp
# let's install it, if hashes do match
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# If you are "USB life style", or multi-winpython
# ==> CLICK the OPTION "Don't create a StartMenuFolder' <== (when it will show up)
!start cmd /C %r_installer% /DIR=%tmp_Rbase% %tmp_R_comp%
Explanation: 2 - Checking and installing the R binary in the right place
End of explanation
import os
import sys
import io
# let's create a R launcher
r_launcher = r"""
@echo off
call %~dp0env.bat
rscript %*
"""
r_launcher_bat = os.environ["WINPYDIR"]+"\\..\\scripts\\R_launcher.bat"
# let's create a R init script
# in manual command line, you can use repos = c('http://irkernel.github.io/', getOption('repos'))
r_initialization = r"""
install.packages(c('repr', 'IRdisplay', 'stringr', 'crayon', 'pbdZMQ', 'devtools'), repos = c('http://cran.rstudio.com/', 'http://cran.rstudio.com/'))
devtools::install_github('IRkernel/IRkernel')
library('pbdZMQ')
library('repr')
library('IRkernel')
library('IRdisplay')
library('crayon')
library('stringr')
IRkernel::installspec()
"""
r_initialization_r = os.path.normpath(os.environ["WINPYDIR"]+"\\..\\scripts\\R_initialization.r")
for i in [(r_launcher,r_launcher_bat), (r_initialization, r_initialization_r)]:
with io.open(i[1], 'w', encoding = sys.getdefaultencoding() ) as f:
for line in i[0].splitlines():
f.write('%s\n' % line )
#check what we are going to do
print ("!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save " + r_initialization_r)
# Launch Rkernel setup
os.environ["r_initialization_r"] = r_initialization_r
!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save %r_initialization_r%
# make RKernel a movable installation with the rest of WinPython
from winpython import utils
base_winpython = os.path.dirname(os.path.normpath(os.environ["WINPYDIR"]))
rkernel_json=(base_winpython+"\\settings\\kernels\\ir\\kernel.json")
# so we get "argv": ["{prefix}/../tools/R/bin/x64/R"
utils.patch_sourcefile(rkernel_json, base_winpython.replace("\\","/"), r'{prefix}/..', silent_mode=False)
Explanation: During installation (if you want to be able to move the R installation afterwards)
Choose the non-default option "Yes (customized startup)"
then after 3 screens, select "Don't create a Start Menu Folder"
Un-select "Create a desktop icon"
Un-select "Save version number in registry"
<img src="https://raw.githubusercontent.com/stonebig/winpython_afterdoc/master/examples/images/r_setup_unclick_shortcut.GIF">
3 - Create an R_launcher and install IRkernel
End of explanation
%load_ext rpy2.ipython
#vitals: 'dplyr', 'R.utils', 'nycflights13'
# installation takes 2 minutes
%R install.packages(c('dplyr','R.utils', 'nycflights13'), repos='http://cran.rstudio.com/')
Explanation: 4 - Install an R package via an IPython kernel
End of explanation
!echo %R_HOME%
%load_ext rpy2.ipython
# avoid some pandas deprecation warning
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
%%R
library('dplyr')
library('nycflights13')
write.csv(flights, "flights.csv")
%R head(flights)
%R airports %>% mutate(dest = faa) %>% semi_join(flights) %>% head
Explanation: 5- Small demo via R magic
End of explanation
# essentials: 'tidyr', 'shiny', 'ggplot2', 'caret' , 'nnet'
# remaining of Hadley Wickahm "stack" (https://github.com/rstudio)
%R install.packages(c('tidyr', 'ggplot2', 'shiny','caret' , 'nnet'), repos='https://cran.rstudio.com/')
%R install.packages(c('knitr', 'purrr', 'readr', 'readxl'), repos='https://cran.rstudio.com/')
%R install.packages(c('rvest', 'lubridate', 'ggvis', 'readr','base64enc'), repos='https://cran.rstudio.com/')
# TRAINING = online training book http://r4ds.had.co.nz/ (or https://github.com/hadley/r4ds)
Explanation: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
End of explanation |
10,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer Learning on TPUs
In the <a href="3_tf_hub_transfer_learning.ipynb">previous notebook</a>, we learned how to do transfer learning with TensorFlow Hub. In this notebook, we're going to kick up our training speed with TPUs.
Learning Objectives
Know how to set up a TPU strategy for training
Know how to use a TensorFlow Hub Module when training on a TPU
Know how to create and specify a TPU for training
First things first. Configure the parameters below to match your own Google Cloud project details.
Step4: Packaging the Model
In order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the pevious lab copied into <a href="tpu_models/trainer/util.py">util.py</a>.
Similarly, the model building and training functions are pulled into <a href="tpu_models/trainer/model.py">model.py</a>. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new task.py file.
We've added five command line arguments which are standard for cloud training of a TensorFlow model
Step5: The TPU server
Before we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU will not have access to the VM's local file directory since the TPU worker acts as a server. Because of this all data used by our model must be hosted on an outside storage system such as Google Cloud Storage. This makes caching our dataset especially critical in order to speed up training time.
To access MobileNet with these restrictions, we can download a compressed saved version of the model by using the wget command. Adding ?tf-hub-format=compressed at the end of our module handle gives us a download URL.
Step6: This model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.
Step7: Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.
Step8: Spinning up a TPU
Time to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. Say 'Yes' to the prompts to spin up the TPU.
gcloud compute tpus execution-groups create \
--name=my-tpu \
--zone=us-central1-b \
--tf-version=2.3.2 \
--machine-type=n1-standard-1 \
--accelerator-type=v3-8
It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively Compute Engine Interface can be used to SSH in. You'll know you're running on a TPU when the command line starts with your-username@your-tpu-name.
This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the . at the end as it tells gsutil to copy data into the currect directory.
Step9: Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the Tensorflow graph has been built out.
TODO | Python Code:
import os
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
BUCKET = PROJECT
os.environ["BUCKET"] = BUCKET
Explanation: Transfer Learning on TPUs
In the <a href="3_tf_hub_transfer_learning.ipynb">previous notebook</a>, we learned how to do transfer learning with TensorFlow Hub. In this notebook, we're going to kick up our training speed with TPUs.
Learning Objectives
Know how to set up a TPU strategy for training
Know how to use a TensorFlow Hub Module when training on a TPU
Know how to create and specify a TPU for training
First things first. Configure the parameters below to match your own Google Cloud project details.
End of explanation
%%writefile tpu_models/trainer/task.py
"""TPU trainer command line interface."""
import argparse
import sys
import tensorflow as tf
from . import model, util
def _parse_arguments(argv):
"""Parses command-line arguments."""
parser = argparse.ArgumentParser()
parser.add_argument(
"--epochs", help="The number of epochs to train", type=int, default=5
)
parser.add_argument(
"--steps_per_epoch",
help="The number of steps per epoch to train",
type=int,
default=500,
)
parser.add_argument(
"--train_path",
help="The path to the training data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/train_set.csv",
)
parser.add_argument(
"--eval_path",
help="The path to the evaluation data",
type=str,
default="gs://cloud-ml-data/img/flower_photos/eval_set.csv",
)
parser.add_argument(
"--tpu_address",
help="The path to the TPUs we will use in training",
type=str,
required=True,
)
parser.add_argument(
"--hub_path",
help="The path to TF Hub module to use in GCS",
type=str,
required=True,
)
parser.add_argument(
"--job-dir",
help="Directory where to save the given model",
type=str,
required=True,
)
return parser.parse_known_args(argv)
def main():
"""Parses command line arguments and kicks off model training."""
args = _parse_arguments(sys.argv[1:])[0]
# TODO: define a TPU strategy
resolver = # TODO: Your code goes here
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = # TODO: Your code goes here
with strategy.scope():
train_data = util.load_dataset(args.train_path)
eval_data = util.load_dataset(args.eval_path, training=False)
image_model = model.build_model(args.job_dir, args.hub_path)
model_history = model.train_and_evaluate(
image_model,
args.epochs,
args.steps_per_epoch,
train_data,
eval_data,
args.job_dir,
)
return model_history
if __name__ == "__main__":
main()
Explanation: Packaging the Model
In order to train on a TPU, we'll need to set up a python module for training. The skeleton for this has already been built out in tpu_models with the data processing functions from the previous lab copied into <a href="tpu_models/trainer/util.py">util.py</a>.
Similarly, the model building and training functions are pulled into <a href="tpu_models/trainer/model.py">model.py</a>. This is almost entirely the same as before, except the hub module path is now a variable to be provided by the user. We'll get into why in a bit, but first, let's take a look at the new task.py file.
We've added five command line arguments which are standard for cloud training of a TensorFlow model: epochs, steps_per_epoch, train_path, eval_path, and job-dir. There are two new arguments for TPU training: tpu_address and hub_path
tpu_address is going to be our TPU name as it appears in Compute Engine Instances. We can specify this name with the ctpu up command.
hub_path is going to be a Google Cloud Storage path to a downloaded TensorFlow Hub module.
The other big difference is some code to deploy our model on a TPU. To begin, we'll set up a TPU Cluster Resolver, which will help tensorflow communicate with the hardware to set up workers for training (more on TensorFlow Cluster Resolvers). Once the resolver connects to and initializes the TPU system, our Tensorflow Graphs can be initialized within a TPU distribution strategy, allowing our TensorFlow code to take full advantage of the TPU hardware capabilities.
TODO: Complete the code below to setup the resolver and define the TPU training strategy.
End of explanation
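For reference, one possible completion of the TODOs above is sketched below. This is a sketch only, assuming the TF 2.3 distribution API and that `args.tpu_address` holds the TPU name; working it out yourself is the point of the lab, so treat the exact calls as something to verify:

```python
import tensorflow as tf

def make_tpu_strategy(tpu_address):
    """Build a TPUStrategy for the given TPU name/address (sketch, TF >= 2.3).

    Requires a reachable TPU, so it can only run on the cloud VM, not locally.
    """
    # locate the TPU worker and attach this process to its cluster
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=tpu_address)
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    # the strategy scope is what places model variables on the TPU cores
    return tf.distribute.TPUStrategy(resolver)
```

In `task.py` the equivalent inline code would assign `resolver` and `strategy` before entering `strategy.scope()`.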
!wget https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4?tf-hub-format=compressed
Explanation: The TPU server
Before we can start training with this code, we need a way to pull in MobileNet. When working with TPUs in the cloud, the TPU will not have access to the VM's local file directory since the TPU worker acts as a server. Because of this all data used by our model must be hosted on an outside storage system such as Google Cloud Storage. This makes caching our dataset especially critical in order to speed up training time.
To access MobileNet with these restrictions, we can download a compressed saved version of the model by using the wget command. Adding ?tf-hub-format=compressed at the end of our module handle gives us a download URL.
End of explanation
%%bash
rm -r tpu_models/hub
mkdir tpu_models/hub
tar xvzf 4?tf-hub-format=compressed -C tpu_models/hub/
Explanation: This model is still compressed, so lets uncompress it with the tar command below and place it in our tpu_models directory.
End of explanation
!gsutil rm -r gs://$BUCKET/tpu_models
!gsutil cp -r tpu_models gs://$BUCKET/tpu_models
Explanation: Finally, we need to transfer our materials to the TPU. We'll use GCS as a go-between, using gsutil cp to copy everything.
End of explanation
!echo "gsutil cp -r gs://$BUCKET/tpu_models ."
Explanation: Spinning up a TPU
Time to wake up a TPU! Open the Google Cloud Shell and copy the gcloud compute command below. Say 'Yes' to the prompts to spin up the TPU.
gcloud compute tpus execution-groups create \
--name=my-tpu \
--zone=us-central1-b \
--tf-version=2.3.2 \
--machine-type=n1-standard-1 \
--accelerator-type=v3-8
It will take about five minutes to wake up. Then, it should automatically SSH into the TPU, but alternatively the Compute Engine interface can be used to SSH in. You'll know you're running on a TPU when the command line starts with your-username@your-tpu-name.
This is a fresh TPU and still needs our code. Run the below cell and copy the output into your TPU terminal to copy your model from your GCS bucket. Don't forget to include the . at the end as it tells gsutil to copy data into the current directory.
End of explanation
%%bash
export TPU_NAME=my-tpu
echo "export TPU_NAME="$TPU_NAME
echo "python3 -m tpu_models.trainer.task \
# TODO: Your code goes here \
# TODO: Your code goes here \
--job-dir=gs://$BUCKET/flowers_tpu_$(date -u +%y%m%d_%H%M%S)"
Explanation: Time to shine, TPU! Run the below cell and copy the output into your TPU terminal. Training will be slow at first, but it will pick up speed after a few minutes once the TensorFlow graph has been built out.
TODO: Complete the code below by adding flags for tpu_address and the hub_path. Have another look at task.py to see how these flags are used. The tpu_address denotes the TPU you created above and hub_path should denote the location of the TFHub module. (Note that the training code requires a TPU_NAME environment variable, set in the first two lines below -- you may reuse it in your code.)
End of explanation |
10,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions
Searching
Step1: We can also use the IGNORECASE flag
Step2: Extracting parts of a regular expression
Step3: Finding all matches
To find every match we can use the re.findall() method, which stores all matches in a list, where each position holds one match.
Step4: Exercises
Exercise 1
Find all the e-mail addresses in the file emails.txt. Use the findall() function and print each e-mail.
Step5: Exercise 2
Consider the file er-dados.txt. This file contains several strings in the format | Python Code:
import re
texto = 'um exemplo palavra:python!!'
match = re.search('python', texto)
print(match)
if match:
print('encontrou: ' + match.group())
else:
print('não encontrou')
Explanation: Regular Expressions
Searching
End of explanation
texto = "GGATCGGAGCGGATGCC"
match = re.search(r'a[tg]c', texto, re.IGNORECASE)
if match:
print('encontrou: ' + match.group())
else:
print('não encontrou')
match
Explanation: We can also use the IGNORECASE flag
End of explanation
texto = '[email protected]'
match = re.search(r'([\w.-]+)@([\w.-]+)', texto)
if match:
print('email:', match.group())
print('login:', match.group(1))
print('dominio:', match.group(2))
else:
print('não encontrou')
Explanation: Extracting parts of a regular expression
End of explanation
texto = 'teste [email protected] teste123 [email protected], python [email protected]'
emails = re.findall(r'[\w.-]+@[\w.-]+', texto)
for email in emails:
print(email)
Explanation: Finding all matches
To find every match we can use the re.findall() method, which stores all matches in a list, where each position holds one match.
End of explanation
arquivo = open('er-emails.txt', 'r')
Explanation: Exercises
Exercise 1
Find all the e-mail addresses in the file emails.txt. Use the findall() function and print each e-mail.
End of explanation
# Part 1
# Part 2
# Part 3a - years
# Part 3b - months
# Part 3c - days
# Part 3d - time
# Part 4
Explanation: Exercise 2
Consider the file er-dados.txt. This file contains several strings in the format:
Tue Feb 15 10:39:54 2028::[email protected]
Create a regular expression to find all e-mails and save them in a list named emails = [].
Create a regular expression to extract the login and the domain of each item stored in the emails list. Save each item in the lists logins = [] and domínios = [].
Create a regular expression to extract the year, month, day, and time.
Finally, print the values in the following format:
1 - [email protected] | login | dominio | day/month/year | hour:minute
It is important to think about how the file should be loaded. Note that the time in the file includes seconds while the final output does not.
End of explanation |
10,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning for Communications
By Jakob Hoydis,
Contact [email protected]
This code is provided as supplementary material to the tutorial Deep Learning for Communications.
It is licensed under the GPLv2 license. If you in any way use this code for research that results in publications, please cite it appropriately.
Deep Unfolding - Neural Belief Propagation
In this notebook we show how to interpret the existing belief propagation decoding algorithm as an explicit neural network and analyze how to train such a network.
Step1: Problem Description
Step2: Let's analyze the parity-check matrix
When loading the (7,4) BCH code you should see the well-known (7,4) Hamming code structure (see lecture).
Step3: Each black dot at position (j,i) defines a connection (edge) between VN i and CN j in the corresponding Tanner graph
Helper functions
Compute the noise variance for a given SNR and coderate.
Step4: Compute LLRs for BPSK from given channel observations.
Step5: Modeling belief propagation as deep neural network
Now we need to create our neural network
Step6: Define placeholder for batch size and noise_var (SNR)
Step7: We use the all-zero CW x of length n (as it is part of any linear code) to avoid encoding.
The channel is a simple AWGN channel with noise variance "noise_var".
Finally, the channel output y is y=x+n.
Step8: Define VN to CN messages
The LLR values are clipped such that the absolute value of each message is not larger than 10.
In this Notebook we only train weights of messages from VNs to CNs, but extensions are straightforward.
The messages are initialized with 1 (rather than a random initialization).
Define CN to VN messages
The CN update requires message clipping to ensure numerical stability of the CN update function
Step9: Define the neural network
In total, num_bp_iterations iterations are performed. The all-zero CW is used (as it is part of any linear code).
In the first iteration there are no previous messages and, thus, only the channel observations (llr) are used as input (rather than pre-initializing a previous layer with 0).
The last layer performs a final marginalization step, i.e., it sums over all incoming messages at the VN.
Step10: Define the Adam optimizer and apply gradient clipping
Step11: Start new Session
Step12: Evaluate belief propagation decoding (without training)
As all of the weights are initialized with 1, the initial performance of the neural network equals the performance of the conventional BP decoder.
For the Monte-Carlo BER simulations batches of size "samples" are transmitted and decoded one-shot. This is repeated "iterations" times, i.e., in total "samples*iterations" codewords are simulated per SNR point.
See http
Step13: Check whether all weights are properly initialized
Step14: All weights are properly set to 1.
Train the neural network
We now train the neural belief propagation decoder in the same way as we trained previous neural networks, i.e., by using stochastic-gradient descent. We print intermediate BER values to track the progress. However, this information is not used for training at any time.
Step15: And evaluate the performance of the trained decoder
Now we evaluate the BER performance of the trained BP decoder
Step16: As can be seen, the decoder improves by approximately 0.5 dB. Keep in mind that the used LDPC code is very short (n=100; typically n>1000 bits per codeword) and not well-designed. For well-designed LDPC codes from existing standards (to the best of our knowledge) no further gains can be observed.
Plot the Histogram of the Trained Weights | Python Code:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import tensorflow as tf
import numpy as np
from pprint import pprint
%matplotlib inline
import matplotlib.pyplot as plt
seed = 1337
tf.set_random_seed(seed)
np.random.seed(seed)
Explanation: Deep Learning for Communications
By Jakob Hoydis,
Contact [email protected]
This code is provided as supplementary material to the tutorial Deep Learning for Communications.
It is licensed under the GPLv2 license. If you in any way use this code for research that results in publications, please cite it appropriately.
Deep Unfolding - Neural Belief Propagation
In this notebook we show how to interpret the existing belief propagation decoding algorithm as an explicit neural network and analyze how to train such a network.
End of explanation
#load list of predefined matrices
ParityCheckMatrices = [[[1,0,1,1,1,0,0],[0,1,0,1,1,1,0],[0,0,1,0,1,1,1]],[[1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0,0],[0,0,0
,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,1,1,0,0,1,0,0,0,0,0,1,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,0,0,1,0,1,0,1,1,1,1,0,0,1,1]],[[1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0
,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1
,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,1,1,0,0,1,0,0,1,1,1,0,0,0,0,0,0,1,0,0,1,0,0,0,1,1,0,1,1,0,0,0,1,0,1,0,1,1,0,0,1,1,0,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,1,1,0,0,0,1,0,0,0,0,0,0,0,1,0,0,1,1,0,1,0,0,0,0,1,1,1,1,0,1,1,1,0,0,1,1,1,1,1]],[[0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0
,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0],[0,1,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,1,0,0],[1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0],[1,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,1,0,0,0,0,0,0,0
,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],[0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1],[1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0],[0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,2],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0
,1,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0],[0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0
,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0],[0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0],[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0],[0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]]
ParityCheck_id = 3 # decide which parity check matrix should be used (0-2: BCH; 3: LDPC)
ParityCheckMatrix = np.array(ParityCheckMatrices[ParityCheck_id]) # load parity-check matrix
n = ParityCheckMatrix.shape[1] # define number of codeword bits n (codeword length)
k = n-ParityCheckMatrix.shape[0] # define number of information bits k per codeword
coderate = k/n # coderate as fraction of information bits per CWs
num_bp_iterations = 10 # number of bp iterations
#print parity-check matrix
print(ParityCheckMatrix)
print('n:{}, k:{}, coderate:{}'.format(n,k,coderate))
Explanation: Problem Description:
The task is to implement the belief propagation decoding algorithm for channel coding and add trainable weights to the edges in the graph.
Can stochastic gradient descent optimize these weights?
Setting up the scenario
We use pre-generated parity-check matrices (BCH and LDPC); see webdemos (and other references given in the lecture):
http://webdemo.inue.uni-stuttgart.de/webdemos/02_lectures/MEC/LDPC_degree_distribution/
http://webdemo.inue.uni-stuttgart.de/webdemos/03_theses/LDPC_Codes_from_Communication_Standards/
End of explanation
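As a quick sanity check of what the rows of H encode: a valid codeword must satisfy every parity check, i.e. H·cᵀ = 0 (mod 2). A minimal pure-Python check on the first stored matrix (the (7,4) code), with test words chosen for illustration:

```python
# Each row of H is one parity check; a codeword must satisfy H . c^T = 0 (mod 2).
H74 = [[1, 0, 1, 1, 1, 0, 0],
       [0, 1, 0, 1, 1, 1, 0],
       [0, 0, 1, 0, 1, 1, 1]]

def satisfies_checks(H, c):
    # Every row's inner product with the word must be even
    return all(sum(h_i * c_i for h_i, c_i in zip(row, c)) % 2 == 0 for row in H)

print(satisfies_checks(H74, [0] * 7))              # all-zero word: always a codeword
print(satisfies_checks(H74, [1] * 7))              # all-ones: every row has 4 ones, so it passes
print(satisfies_checks(H74, [1, 0, 0, 0, 0, 0, 0]))  # a single flipped bit violates a check
```

This is exactly the property the decoder exploits: the all-zero (and its BPSK image) codeword used below is valid for any linear code.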
plt.figure(figsize=(12, 8));
plt.xlabel('Variable Nodes i');
plt.ylabel('Check Nodes j');
plt.spy(ParityCheckMatrix);
Explanation: Let's analyze the parity-check matrix
When loading the (7,4) BCH code you should see the well-known (7,4) Hamming code structure (see lecture).
End of explanation
def ebnodb2noisevar(ebno_db, coderate):
ebno = 10**(ebno_db/10)
noise_var = 1/(2*coderate*ebno)
return noise_var
Explanation: Each black dot at position (j,i) defines a connection (edge) between VN i and CN j in the corresponding Tanner graph
Helper functions
Compute the noise variance for a given SNR and coderate.
End of explanation
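A quick numeric check of this mapping, restating the helper in plain Python: for BPSK, σ² = 1/(2·R·Eb/N0), with Eb/N0 converted from dB to linear scale.

```python
def ebnodb2noisevar(ebno_db, coderate):
    ebno = 10 ** (ebno_db / 10)        # dB -> linear
    return 1 / (2 * coderate * ebno)   # BPSK: sigma^2 = 1/(2*R*Eb/N0)

# At 0 dB and rate 1/2, linear Eb/N0 is 1, so sigma^2 = 1/(2*0.5*1) = 1.0
print(ebnodb2noisevar(0, 0.5))  # -> 1.0
# 3 dB more SNR roughly halves the noise variance
print(ebnodb2noisevar(3, 0.5))
```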
def compute_llr(y, noise_var):
return 2*y/noise_var
Explanation: Compute LLRs for BPSK from given channel observations.
End of explanation
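The LLR formula L(y) = 2y/σ² can be sanity-checked with a few plain-Python values (the sample observations are arbitrary):

```python
def compute_llr(y, noise_var):
    # For BPSK over AWGN: L(y) = 2*y/sigma^2; positive LLR favors the +1 symbol
    return 2 * y / noise_var

print(compute_llr(1.0, 1.0))    # clean +1 observation -> LLR 2.0
print(compute_llr(-0.5, 1.0))   # noisy negative observation -> LLR -1.0
print(compute_llr(1.0, 0.25))   # lower noise variance -> more confident LLR 8.0
```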
H = tf.Variable(ParityCheckMatrix, trainable=False, dtype=tf.float32)
H_unconnected = tf.cast(tf.equal(H, 0), dtype=tf.float32)
Explanation: Modeling belief propagation as deep neural network
Now we need to create our neural network:
The graph is defined by the parity-check matrix H.
H_unconnected defines all unconnected positions.
Remark: We use dtype=tf.float32 as this is more natural in the GPU environment. However, boolean values (connection yes/no) are sufficient.
End of explanation
batch_size = tf.placeholder_with_default(100, shape=())
noise_var = tf.placeholder_with_default(1.0, shape=())
Explanation: Define placeholder for batch size and noise_var (SNR)
End of explanation
x = tf.ones(shape=[n], dtype=tf.float32)
noise = tf.random_normal(shape=[batch_size, n], stddev=tf.sqrt(noise_var))
y = x + noise
Explanation: We use the all-zero CW x of length n (as it is part of any linear code) to avoid encoding.
The channel is a simple AWGN channel with noise variance "noise_var".
Finally, the channel output y is y=x+n.
End of explanation
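The same channel model can be sketched without TensorFlow: BPSK maps the all-zero codeword to +1, and zero-mean Gaussian noise with variance noise_var is added (the seed, sample count, and noise variance here are arbitrary illustration choices):

```python
import random

random.seed(1337)  # deterministic, in the spirit of the notebook's seeding

n_bits = 10000
noise_var = 0.5
x = [1.0] * n_bits  # all-zero codeword mapped to +1 under BPSK
# random.gauss takes the standard deviation, hence sqrt(noise_var)
y = [xi + random.gauss(0.0, noise_var ** 0.5) for xi in x]

mean_y = sum(y) / n_bits
print(round(mean_y, 2))  # should be close to +1 since the noise is zero-mean
```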
def var_to_check(cv, H, llr, init=False, final=False):
if init: #first layer
# Simply send channel llrs along the edges
vc = tf.transpose(tf.expand_dims(H, axis=0)*tf.expand_dims(llr, axis=1), perm=[0,2,1])
vc = tf.nn.tanh(tf.clip_by_value(1/2*vc, clip_value_min=-9.9, clip_value_max=9.9))
return vc, vc
else:
shape_cv = cv.get_shape().as_list()
shape_llr = llr.get_shape().as_list()
W_c = tf.Variable(initial_value=tf.ones(shape=[1, shape_cv[1], shape_cv[2]]), trainable=True,dtype=tf.float32)
cv *= W_c
if final: # last layer (marginalization)
# Final marginalization
vc = tf.reduce_sum(cv, axis=1)+llr
return vc, vc
else:
# Sum-up messages, add llr, and substract message for relative edge
vc = tf.reduce_sum(cv, axis=1)+llr
vc = tf.expand_dims(H, axis=0)*tf.expand_dims(vc, axis=1) - cv
vc = tf.transpose(vc, perm=[0,2,1])
vc = tf.nn.tanh(tf.clip_by_value(1/2*vc, clip_value_min=-9.9, clip_value_max=9.9))
return vc, tf.reduce_sum(cv, axis=1)+llr
def check_to_var(vc, H, H_unconnected):
vc_tanh = tf.transpose(vc, perm=[0,2,1])
cv = tf.reduce_prod(vc_tanh + H_unconnected, axis=2, keepdims=True)
cv = tf.where(tf.equal(cv,0),tf.ones_like(cv)*1e-12,cv)
cv = (cv/(vc_tanh+H_unconnected))*tf.expand_dims(H, axis=0)
cv = tf.clip_by_value(cv, clip_value_min=-1+1e-6, clip_value_max=1-1e-6)
cv = 2*tf.atanh(cv)
return cv
Explanation: Define VN to CN messages
The LLR values are clipped such that the absolute value of each message is not larger than 10.
In this Notebook we only train weights of messages from VNs to CNs, but extensions are straightforward.
The corresponding weights are initialized with 1 (rather than a random initialization).
Define CN to VN messages
The CN update requires message clipping for the numerical stability of the CN update function (the atanh).
End of explanation
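The same CN rule, written scalar-wise for a single check node — a NumPy sketch of the tanh-rule boxplus that the batched TensorFlow code above implements:

```python
import numpy as np

def check_node_update(incoming_llrs):
    """Tanh-rule CN update: the output on edge i excludes input i."""
    t = np.tanh(0.5 * np.asarray(incoming_llrs, dtype=float))
    total = np.prod(t)
    # leave-one-out product; clipping keeps arctanh finite
    # (the TF code above additionally guards exact zeros via tf.where)
    loo = np.clip(total / t, -1 + 1e-6, 1 - 1e-6)
    return 2.0 * np.arctanh(loo)

cn_out = check_node_update([1.2, -0.8, 2.5])
```

A single negative input flips the sign of every other outgoing message, as expected for a parity check.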
cv = tf.zeros(shape=[batch_size,n-k,n])
llr = compute_llr(y, noise_var)
loss = 0
for i in range(num_bp_iterations):
is_final = (i==num_bp_iterations-1)
is_init = (i==0)
vc, logits = var_to_check(cv, H, llr, init=is_init, final=is_final)
if not is_final:
cv = check_to_var(vc, H, H_unconnected)
if not is_init:
loss += tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.ones(shape=[batch_size, n]), logits=logits)
loss /= num_bp_iterations
x_hat = tf.cast(tf.greater(tf.nn.sigmoid(vc), 0.5), dtype=tf.float32)
ber = tf.reduce_mean(tf.cast(tf.not_equal(x, x_hat), dtype=tf.float32))
W_total = tf.global_variables() #for weight distribution histograms
Explanation: Define the neural network
In total, num_bp_iterations decoding iterations are performed. The all-zero CW is used (as it is part of any linear code).
In the first iteration there are no previous messages and, thus, only the channel observations (llr) are used as input (rather than pre-initializing a previous layer with 0).
The last layer performs a final marginalization step, i.e., it sums up all incoming messages at the VN.
End of explanation
optimizer = tf.train.AdamOptimizer(1e-3)
grads_and_vars = optimizer.compute_gradients(loss)
grads_and_vars = [(tf.clip_by_value(g, -10, 10), v) for g,v in grads_and_vars]
step = optimizer.apply_gradients(grads_and_vars)
Explanation: Define the ADAM optimizer and apply gradient clipping
End of explanation
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
Explanation: Start new Session
End of explanation
samples = 10000
epochs = 10
ebnos_db = np.linspace(1,6, 6)
bers_no_training = np.zeros(shape=[ebnos_db.shape[0]])
for j in range(epochs):
for i in range(ebnos_db.shape[0]):
ebno_db = ebnos_db[i]
bers_no_training[i] += sess.run(ber, feed_dict={
batch_size: samples,
noise_var: ebnodb2noisevar(ebno_db, coderate)
})
bers_no_training /= epochs
plt.figure(figsize=(10, 5))
plt.semilogy(ebnos_db, bers_no_training, '-x')
plt.grid(which='both');
plt.xlabel('EbNo [dB]');
plt.ylabel('BER');
plt.ylim([1e-6, 1e-0]);
plt.xlim([ebnos_db[0], ebnos_db[-1]]);
plt.legend(['No Training - Conventional BP Decoder with %d iterations' % (num_bp_iterations)]);
Explanation: Evaluate belief propagation decoding (without training)
As all of the weights are initialized with 1, the initial performance of the neural network equals the performance of the conventional BP decoder.
For the Monte-Carlo BER simulations, batches of size "samples" are transmitted and decoded one-shot. This is repeated "epochs" times, i.e., in total "samples*epochs" codewords are simulated per SNR point.
See http://webdemo.inue.uni-stuttgart.de/webdemos/08_research/GPU_LDPC_Decoder/index.php for reference curves.
End of explanation
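The `ebnodb2noisevar` helper used in the cell above is defined earlier in the notebook and not shown in this excerpt; a common convention for BPSK with code rate r — shown only as an assumed sketch — is σ² = 1/(2·r·10^(EbNo/10)):

```python
def ebnodb2noisevar_demo(ebno_db, coderate):
    # assumed convention: sigma^2 = 1 / (2 * r * Eb/N0); the notebook's own
    # helper may normalize differently
    return 1.0 / (2.0 * coderate * 10.0 ** (ebno_db / 10.0))

nv_4db = ebnodb2noisevar_demo(4.0, 0.5)
```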
w=sess.run(W_total, feed_dict={}) #get all weights from the graph and use flatten for the histogram function
Wuntrained=w[1].flatten() # skip first variable (as this is the parity-check matrix)
for i in range(2,len(w)): # w[1] is already included above
Wuntrained=np.concatenate((w[i].flatten(),Wuntrained),axis=0)
plt.figure(figsize=(10, 5));
plt.hist(Wuntrained,bins=40,range=(-0.2, 1.4))
plt.grid(which='both');
plt.xlabel('Weight');
plt.ylabel('Occurrence');
plt.xlim([0.5, 1.2]);
plt.legend(['No Training - Weights before Training']);
Explanation: Check whether all weights are properly initialized
End of explanation
train_ebno_db = 4
for it in range(2000):
feed_dict = {
batch_size: 100,
noise_var: ebnodb2noisevar(train_ebno_db, coderate)
}
sess.run(step, feed_dict=feed_dict)
#provide intermediate BER metric (not used for training!)
if (it%100==0):
feed_dict = {
batch_size: 10000,
noise_var: ebnodb2noisevar(train_ebno_db, coderate)
}
l, b = sess.run([loss, ber], feed_dict=feed_dict)
print(it, "Loss: {}, BER: {:.2E}".format(l, b))
Explanation: All weights are properly set to 1.
Train the neural network
We now train the neural belief propagation decoder in the same way as we trained previous neural networks, i.e., by using stochastic-gradient descent. We print intermediate BER values to track the progress. However, this information is not used for training at any time.
End of explanation
samples = 10000
epochs = 10
ebnos_db = np.linspace(1,6, 6)
bers = np.zeros(shape=[ebnos_db.shape[0]])
for j in range(epochs):
for i in range(ebnos_db.shape[0]):
ebno_db = ebnos_db[i]
bers[i] += sess.run(ber, feed_dict={
batch_size: samples,
noise_var: ebnodb2noisevar(ebno_db, coderate)
})
bers /= epochs
plt.figure(figsize=(10, 5))
plt.semilogy(ebnos_db, bers, '-x')
plt.semilogy(ebnos_db, bers_no_training, '-o')#use previously stored BER
plt.grid(which='both');
plt.xlabel('EbNo [dB]');
plt.ylabel('BER');
plt.ylim([1e-6, 1e-0]);
plt.xlim([ebnos_db[0], ebnos_db[-1]]);
plt.legend(['Trained Neural BP Decoder with %d iterations' % (num_bp_iterations),'No Training - Conventional BP Decoder with %d iterations' % (num_bp_iterations)]);
#plt.savefig('ber.pdf', bbox_inches='tight'); #create plots for the lecture slides
Explanation: And evaluate the performance of the trained decoder
Now we evaluate the BER performance of the trained BP decoder
End of explanation
w=sess.run(W_total, feed_dict={}) #get all weights from the graph and use flatten for the histogram function
Wtrained=w[1].flatten() # skip first variable (as this is the parity-check matrix)
for i in range(2,len(w)): # w[1] is already included above
Wtrained=np.concatenate((w[i].flatten(),Wtrained),axis=0)
plt.figure(figsize=(10, 5));
plt.hist(Wuntrained,bins=20, range=(-0.2, 1.4))#use previously stored weights
plt.hist(Wtrained,bins=20, range=(-0.2, 1.4))
plt.grid(which='both');
plt.xlabel('Weight');
plt.ylabel('Occurrence');
plt.xlim([0.5, 1.2]);
plt.legend(['No Training - Weights before Training','Trained Weights']);
#plt.savefig('hist.pdf', bbox_inches='tight'); #create plots for the lecture slides
Explanation: As can be seen, the decoder improves by approximately 0.5 dB. Keep in mind that the used LDPC code is very short (n=100; typically n>1000 bits per codeword) and not well-designed. For well-designed LDPC codes from existing standards (to the best of our knowledge) no further gains can be observed.
Plot the Histogram of the Trained Weights
End of explanation |
10,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Operation-Research-Quick-Intro-Via-Ortools" data-toc-modified-id="Operation-Research-Quick-Intro-Via-Ortools-1"><span class="toc-item-num">1 </span>Operation Research Quick Intro Via Ortools</a></span><ul class="toc-item"><li><span><a href="#Assignment-Problem" data-toc-modified-id="Assignment-Problem-1.1"><span class="toc-item-num">1.1 </span>Assignment Problem</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Operation Research Quick Intro Via Ortools
The way to think about an operation research (optimization) problem is that we want to maximize/minimize some objective while being subject to certain constraints.
For example, say we are deciding whether to buy ice cream or boba tea for dessert. Each type of food has an associated value, and cost, while we have a certain budget that we don't wish to exceed.
\begin{align}
& \text{maximize}
&& \text{value}_{\text{ice_cream}} \cdot \text{ice_cream} + \text{value}_{\text{boba}} \cdot \text{boba} \nonumber \\
& \text{subject to}
&& \text{cost}_{\text{ice_cream}} \cdot \text{ice_cream} + \text{cost}_{\text{boba}} \cdot \text{boba} \leq \text{budget}
\end{align}
Say we are able to replace the value, cost, and budget part with actual numbers (in practice, assigning actual numbers to each of these coefficients is often times core pieces of the work).
\begin{align}
& \text{maximize}
&& 3 \cdot \text{ice_cream} + 2 \cdot \text{boba} \nonumber \\
& \text{subject to}
&& 2 \cdot \text{ice_cream} + 1 \cdot \text{boba} \leq 1
\end{align}
Given this toy problem, we can eyeball the solution and see that we should use our limited budget to buy a boba tea for dessert. Operation research, a.k.a. optimization techniques, helps us algorithmically find solutions for these types of problems at a much larger scale.
The following section, uses ortools library to solve this problem programmatically.
Step2: A couple of important things to note | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
%load_ext watermark
%load_ext autoreload
%autoreload 2
from collections import namedtuple
from ortools.linear_solver import pywraplp
%watermark -a 'Ethen' -d -t -v -p ortools
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Operation-Research-Quick-Intro-Via-Ortools" data-toc-modified-id="Operation-Research-Quick-Intro-Via-Ortools-1"><span class="toc-item-num">1 </span>Operation Research Quick Intro Via Ortools</a></span><ul class="toc-item"><li><span><a href="#Assignment-Problem" data-toc-modified-id="Assignment-Problem-1.1"><span class="toc-item-num">1.1 </span>Assignment Problem</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
budget = 1
DessertInfo = namedtuple('DessertInfo', ['name', 'value', 'cost'])
dessert_infos = [
DessertInfo('ice_cream', 3, 2),
DessertInfo('boba', 2, 1),
]
num_desserts = len(dessert_infos)
dessert_infos
# creates solver
solver = pywraplp.Solver.CreateSolver('GLOP')
# creates variables
variables = [solver.NumVar(0, solver.infinity(), dessert_info.name) for dessert_info in dessert_infos]
# define constraints
constraint_coefficients = [dessert_info.cost for dessert_info in dessert_infos]
constraint = [constraint_coefficients[i] * variables[i] for i in range(num_desserts)]
solver.Add(solver.Sum(constraint) <= budget)
# define objective
objective_coefficients = [dessert_info.value for dessert_info in dessert_infos]
objective = [objective_coefficients[i] * variables[i] for i in range(num_desserts)]
solver.Maximize(solver.Sum(objective))
# solve
status = solver.Solve()
# extract optimal/feasible value
if status == pywraplp.Solver.OPTIMAL or status == pywraplp.Solver.FEASIBLE:
optimal_value = solver.Objective().Value()
print(f'Optimal Value: = {optimal_value}')
for i in range(num_desserts):
print(variables[i].name(), variables[i].solution_value())
Explanation: Operation Research Quick Intro Via Ortools
The way to think about an operation research (optimization) problem is that we want to maximize/minimize some objective while being subject to certain constraints.
For example, say we are deciding whether to buy ice cream or boba tea for dessert. Each type of food has an associated value, and cost, while we have a certain budget that we don't wish to exceed.
\begin{align}
& \text{maximize}
&& \text{value}_{\text{ice_cream}} \cdot \text{ice_cream} + \text{value}_{\text{boba}} \cdot \text{boba} \nonumber \\
& \text{subject to}
&& \text{cost}_{\text{ice_cream}} \cdot \text{ice_cream} + \text{cost}_{\text{boba}} \cdot \text{boba} \leq \text{budget}
\end{align}
Say we are able to replace the value, cost, and budget part with actual numbers (in practice, assigning actual numbers to each of these coefficients is often times core pieces of the work).
\begin{align}
& \text{maximize}
&& 3 \cdot \text{ice_cream} + 2 \cdot \text{boba} \nonumber \\
& \text{subject to}
&& 2 \cdot \text{ice_cream} + 1 \cdot \text{boba} \leq 1
\end{align}
Given this toy problem, we can eyeball the solution and see that we should use our limited budget to buy a boba tea for dessert. Operation research, a.k.a. optimization techniques, helps us algorithmically find solutions for these types of problems at a much larger scale.
The following section, uses ortools library to solve this problem programmatically.
End of explanation
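The eyeballed answer can also be double-checked by brute force on the toy instance (a quick sanity check, not part of the original notebook; the ortools cell above solves the same instance exactly and additionally handles fractional quantities):

```python
from itertools import product

# enumerate 0/1 dessert quantities and keep the feasible ones
feasible = [(ice, boba) for ice, boba in product(range(2), repeat=2)
            if 2 * ice + 1 * boba <= 1]
best_value, best_choice = max((3 * i + 2 * b, (i, b)) for i, b in feasible)
```

`best_choice` comes out as no ice cream and one boba, matching the reasoning above.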
budget = 1000
price = [25, 30, 85, 250]
# retention probability for each customer and channel pair
retention_prob = [
[0.02, 0.27, 0.17, 0.87],
[0.14, 0.21, 0.28, 0.014],
[0.13, 0.003, 0.016, 0.64],
[0.14, 0.04, 0.14, 0.26],
[0.04, 0.24, 0.11, 0.31],
]
num_users = len(retention_prob)
num_channels = len(retention_prob[0])
# creates the solver for the mixed integer programming
solver = pywraplp.Solver.CreateSolver('SCIP')
# variable: assignment problem, creating a dictionary of binary variables
variables = {}
for i in range(num_users):
for j in range(num_channels):
variables[i, j] = solver.IntVar(0, 1, f'prob{i}_{j}')
# constraint: each user is assigned to at most 1 channel.
for i in range(num_users):
solver.Add(solver.Sum([variables[i, j] for j in range(num_channels)]) <= 1)
# constraint: total cost should not exceed budget
constraints = []
for j in range(num_channels):
for i in range(num_users):
constraint = price[j] * variables[i, j]
constraints.append(constraint)
solver.Add(solver.Sum(constraints) <= budget)
# objective
objective_terms = []
for i in range(num_users):
for j in range(num_channels):
objective_terms.append(retention_prob[i][j] * variables[i, j])
solver.Maximize(solver.Sum(objective_terms))
# invokes the solver
status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL or status == pywraplp.Solver.FEASIBLE:
optimal_value = solver.Objective().Value()
print(f'Optimal Value: = {optimal_value}')
for i in range(num_users):
for j in range(num_channels):
# check indicator variable's value, with tolerance for floating point arithmetic
if variables[i, j].solution_value() > 0.5:
print(f'User {i} assigned to Channel {j}, Cost = {price[j]}')
Explanation: A couple of important things to note:
We are solving a Linear Programming problem, where we are computing the best solution to a given problem modeled as a series of linear relationships.
In this article, we won't be diving into the algorithms/solvers that are the workhorse behind the scenes that's finding the solution for us, and focus on how to frame the optimization problem.
We didn't explicitly specify this in our optimization formula, but notice the definition of NumVar specifies that our variables can take on numeric (continuous) solutions. Oftentimes, our problem might require some of the variables to be integers; such problems are called Mixed Integer Programming. e.g. In our example, we probably can't buy 1.5 portions of boba. In these cases, we can specify our variables to be IntVar.
There're other open sourced frameworks other than ortools out there, feel free to pick and choose based on preferences or speed. The exact API might be different, but the main idea revolves around defining the objective, defining the variables, adding the constraints, solving it and extracting the optimal/feasible solution.
Assignment Problem
Continuing with our discussions around Mixed Integer Programming, a closely related problem is the assignment problem, where our variables involves boolean decisions of 0 and 1 values.
We'll use the examples from this blog post, Blog: Towards optimal personalization: synthesisizing machine learning and operations research.
Say we are working in the marketing team, and we have different types of churn prevention channels, each having a different price, while different users/customers' retention rates differ for each channel. Our constraint is not spending above our monthly marketing budget, and the goal is to maximize the total number of retained customers.
\begin{align}
\text{maximize}
& \sum_{u, c} R_{u, c} A_{u, c} \nonumber \\
\text{subject to}
& \sum_{u, c} P_{u, c} A_{u, c} \leq B \\
& \sum_{c} A_{u, c} = 1, \forall u \in U \\
& A_{u, c} \in \{0, 1\}
\end{align}
Where:
$U$: is the set of all users.
$C$: is the set of all channels.
$R_{u, c}$: is the retention probability if we were to notify the user, $u$, using the channel $c$.
$A_{u, c}$: is the assignment boolean decision variable, i.e. it takes on the value of 1 if we decided to reach out to user $u$ with channel $c$, 0 otherwise.
$P_{u, c}$: is the price/cost if we were to notify the user, $u$, using the channel $c$.
We have a constraint saying each customer can only receive the retention message via one channel, to prevent bombarding them.
As well as a constraint saying our cost shouldn't exceed our monthly budget $B$.
Let's say we have 4 channels: email (0.25), push notification (0.3), text message (0.85), and phone call (5.0). Number in parenthesis indicates the cost/price.
As for the retention probability, we will be using some randomly generated numbers, but imagine in real world scenarios where this can come from aggregated historical information, or even generated by some machine learning models.
End of explanation |
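Independent of the solver, any candidate assignment can be hand-checked against the budget and scored; a small helper for that (the pairs used below are just one feasible assignment for the data above, not necessarily the optimum the solver returns):

```python
def evaluate_assignment(assignment, price, retention_prob):
    """Recompute spend and expected retentions for (user, channel) pairs,
    assuming at most one channel per user."""
    spend = sum(price[c] for _, c in assignment)
    expected = sum(retention_prob[u][c] for u, c in assignment)
    return spend, expected

price = [25, 30, 85, 250]
retention_prob = [
    [0.02, 0.27, 0.17, 0.87],
    [0.14, 0.21, 0.28, 0.014],
    [0.13, 0.003, 0.016, 0.64],
    [0.14, 0.04, 0.14, 0.26],
    [0.04, 0.24, 0.11, 0.31],
]
spend, expected = evaluate_assignment(
    [(0, 3), (1, 2), (2, 0), (3, 0), (4, 1)], price, retention_prob)
```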
10,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of FastText and Word2Vec
Facebook Research open sourced a great project recently - fastText, a fast (no surprise) and effective method to learn word representations and perform text classification. I was curious about comparing these embeddings to other commonly used embeddings, so word2vec seemed like the obvious choice, especially considering fastText embeddings are an extension of word2vec.
I've used gensim to train the word2vec models, and the analogical reasoning task (described in Section 4.1 of [2]) for comparing the word2vec and fastText models. I've compared embeddings trained using the skipgram architecture.
Download data
Step1: Train models
For training the models yourself, you'll need to have both Gensim and FastText set up on your machine.
Step2: Comparisons
Step3: Once you have downloaded or trained the models and downloaded questions-words.txt, you're ready to run the comparison.
Step4: The accuracy takes an optional parameter restrict_vocab, which limits the vocabulary of model considered for fast approximate evaluation (default is 30000).
Word2Vec embeddings seem to be slightly better than fastText embeddings at the semantic tasks, while the fastText embeddings do significantly better on the syntactic analogies. Makes sense, since fastText embeddings are trained for understanding morphological nuances, and most of the syntactic analogies are morphology based.
Let me explain that better.
According to the paper [1], embeddings for words are represented by the sum of their n-gram embeddings. This is meant to be useful for morphologically rich languages - so theoretically, the embedding for apparently would include information from both character n-grams apparent and ly (as well as other n-grams), and the n-grams would combine in a simple, linear manner. This is very similar to what most of our syntactic tasks look like.
Example analogy
Step5: A-ha! The results for FastText with no n-grams and Word2Vec look a lot more similar (as they should) - the differences could easily result from differences in implementation between fastText and Gensim, and randomization. Especially telling is that the semantic accuracy for FastText has improved slightly after removing n-grams, while the syntactic accuracy has taken a giant dive. Our hypothesis that the char n-grams result in better performance on syntactic analogies seems fair. It also seems possible that char n-grams hurt semantic accuracy a little. However, the brown corpus is too small to be able to draw any definite conclusions - the accuracies seem to vary significantly over different runs.
Let's try with a larger corpus now - text8 (collection of wiki articles). I'm also curious about the impact on semantic accuracy - for models trained on the brown corpus, the difference in the semantic accuracy and the accuracy values themselves are too small to be conclusive. Hopefully a larger corpus helps, and the text8 corpus likely has a lot more information about capitals, currencies, cities etc, which should be relevant to the semantic tasks.
Step6: With the text8 corpus, we observe a similar pattern. Semantic accuracy falls by a small but significant amount when n-grams are included in FastText, while FastText with n-grams performs far better on the syntactic analogies. FastText without n-grams are largely similar to Word2Vec.
My hypothesis for semantic accuracy being lower for the FastText-with-ngrams model is that most of the words in the semantic analogies are standalone words and are unrelated to their morphemes (eg | Python Code:
import nltk
nltk.download('brown')
# Only the brown corpus is needed in case you don't have it.
# Generate brown corpus text file
with open('brown_corp.txt', 'w+') as f:
for word in nltk.corpus.brown.words():
f.write('{word} '.format(word=word))
# Make sure you set FT_HOME to your fastText directory root
FT_HOME = 'fastText/'
# download the text8 corpus (a 100 MB sample of cleaned wikipedia text)
import os.path
if not os.path.isfile('text8'):
!wget -c http://mattmahoney.net/dc/text8.zip
!unzip text8.zip
# download and preprocess the text9 corpus
if not os.path.isfile('text9'):
!wget -c http://mattmahoney.net/dc/enwik9.zip
!unzip enwik9.zip
!perl {FT_HOME}wikifil.pl enwik9 > text9
Explanation: Comparison of FastText and Word2Vec
Facebook Research open sourced a great project recently - fastText, a fast (no surprise) and effective method to learn word representations and perform text classification. I was curious about comparing these embeddings to other commonly used embeddings, so word2vec seemed like the obvious choice, especially considering fastText embeddings are an extension of word2vec.
I've used gensim to train the word2vec models, and the analogical reasoning task (described in Section 4.1 of [2]) for comparing the word2vec and fastText models. I've compared embeddings trained using the skipgram architecture.
Download data
End of explanation
MODELS_DIR = 'models/'
!mkdir -p {MODELS_DIR}
lr = 0.05
dim = 100
ws = 5
epoch = 5
minCount = 5
neg = 5
loss = 'ns'
t = 1e-4
from gensim.models import Word2Vec
from gensim.models.word2vec import Text8Corpus
# Same values as used for fastText training above
params = {
'alpha': lr,
'size': dim,
'window': ws,
'iter': epoch,
'min_count': minCount,
'sample': t,
'sg': 1,
'hs': 0,
'negative': neg
}
def train_models(corpus_file, output_name):
output_file = '{:s}_ft'.format(output_name)
if not os.path.isfile(os.path.join(MODELS_DIR, '{:s}.vec'.format(output_file))):
print('Training fasttext on {:s} corpus..'.format(corpus_file))
%time !{FT_HOME}fasttext skipgram -input {corpus_file} -output {MODELS_DIR+output_file} -lr {lr} -dim {dim} -ws {ws} -epoch {epoch} -minCount {minCount} -neg {neg} -loss {loss} -t {t}
else:
print('\nUsing existing model file {:s}.vec'.format(output_file))
output_file = '{:s}_ft_no_ng'.format(output_name)
if not os.path.isfile(os.path.join(MODELS_DIR, '{:s}.vec'.format(output_file))):
print('\nTraining fasttext on {:s} corpus (without char n-grams)..'.format(corpus_file))
%time !{FT_HOME}fasttext skipgram -input {corpus_file} -output {MODELS_DIR+output_file} -lr {lr} -dim {dim} -ws {ws} -epoch {epoch} -minCount {minCount} -neg {neg} -loss {loss} -t {t} -maxn 0
else:
print('\nUsing existing model file {:s}.vec'.format(output_file))
output_file = '{:s}_gs'.format(output_name)
if not os.path.isfile(os.path.join(MODELS_DIR, '{:s}.vec'.format(output_file))):
print('\nTraining word2vec on {:s} corpus..'.format(corpus_file))
# Text8Corpus class for reading space-separated words file
%time gs_model = Word2Vec(Text8Corpus(corpus_file), **params); gs_model
# Direct local variable lookup doesn't work properly with magic statements (%time)
locals()['gs_model'].save_word2vec_format(os.path.join(MODELS_DIR, '{:s}.vec'.format(output_file)))
print('\nSaved gensim model as {:s}.vec'.format(output_file))
else:
print('\nUsing existing model file {:s}.vec'.format(output_file))
evaluation_data = {}
train_models('brown_corp.txt', 'brown')
train_models(corpus_file='text8', output_name='text8')
train_models(corpus_file='text9', output_name='text9')
Explanation: Train models
For training the models yourself, you'll need to have both Gensim and FastText set up on your machine.
End of explanation
# download the file questions-words.txt to be used for comparing word embeddings
!wget https://raw.githubusercontent.com/tmikolov/word2vec/master/questions-words.txt
Explanation: Comparisons
End of explanation
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
# Training times in seconds
evaluation_data['brown'] = [(18, 54.3, 32.5)]
evaluation_data['text8'] = [(402, 942, 496)]
evaluation_data['text9'] = [(3218, 6589, 3550)]
def print_accuracy(model, questions_file):
print('Evaluating...\n')
acc = model.accuracy(questions_file)
sem_correct = sum((len(acc[i]['correct']) for i in range(5)))
sem_total = sum((len(acc[i]['correct']) + len(acc[i]['incorrect'])) for i in range(5))
sem_acc = 100*float(sem_correct)/sem_total
print('\nSemantic: {:d}/{:d}, Accuracy: {:.2f}%'.format(sem_correct, sem_total, sem_acc))
syn_correct = sum((len(acc[i]['correct']) for i in range(5, len(acc)-1)))
syn_total = sum((len(acc[i]['correct']) + len(acc[i]['incorrect'])) for i in range(5,len(acc)-1))
syn_acc = 100*float(syn_correct)/syn_total
print('Syntactic: {:d}/{:d}, Accuracy: {:.2f}%\n'.format(syn_correct, syn_total, syn_acc))
return (sem_acc, syn_acc)
word_analogies_file = 'questions-words.txt'
accuracies = []
print('\nLoading Gensim embeddings')
brown_gs = Word2Vec.load_word2vec_format(MODELS_DIR + 'brown_gs.vec')
print('Accuracy for Word2Vec:')
accuracies.append(print_accuracy(brown_gs, word_analogies_file))
print('\nLoading FastText embeddings')
brown_ft = Word2Vec.load_word2vec_format(MODELS_DIR + 'brown_ft.vec')
print('Accuracy for FastText (with n-grams):')
accuracies.append(print_accuracy(brown_ft, word_analogies_file))
Explanation: Once you have downloaded or trained the models and downloaded questions-words.txt, you're ready to run the comparison.
End of explanation
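For intuition about what the accuracy numbers measure: each analogy a:b :: c:d is scored by a nearest-neighbour search around b - a + c (roughly the 3CosAdd rule gensim uses); a toy cosine-similarity sketch with made-up vectors:

```python
import numpy as np

def solve_analogy(emb, a, b, c):
    """Return the word whose vector is closest (cosine) to b - a + c."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -2.0
    for word, vec in emb.items():
        if word in (a, b, c):
            continue
        sim = target @ vec / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

toy = {
    'man':   np.array([1.0, 0.0, 0.1]),
    'woman': np.array([1.0, 1.0, 0.1]),
    'king':  np.array([0.0, 0.0, 1.0]),
    'queen': np.array([0.0, 1.0, 1.0]),
}
```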
print('Loading FastText embeddings')
brown_ft_no_ng = Word2Vec.load_word2vec_format(MODELS_DIR + 'brown_ft_no_ng.vec')
print('Accuracy for FastText (without n-grams):')
accuracies.append(print_accuracy(brown_ft_no_ng, word_analogies_file))
evaluation_data['brown'] += [[acc[0] for acc in accuracies], [acc[1] for acc in accuracies]]
Explanation: The accuracy takes an optional parameter restrict_vocab, which limits the vocabulary of model considered for fast approximate evaluation (default is 30000).
Word2Vec embeddings seem to be slightly better than fastText embeddings at the semantic tasks, while the fastText embeddings do significantly better on the syntactic analogies. Makes sense, since fastText embeddings are trained for understanding morphological nuances, and most of the syntactic analogies are morphology based.
Let me explain that better.
According to the paper [1], embeddings for words are represented by the sum of their n-gram embeddings. This is meant to be useful for morphologically rich languages - so theoretically, the embedding for apparently would include information from both character n-grams apparent and ly (as well as other n-grams), and the n-grams would combine in a simple, linear manner. This is very similar to what most of our syntactic tasks look like.
Example analogy:
amazing amazingly calm calmly
This analogy is marked correct if:
embedding(amazing) - embedding(amazingly) = embedding(calm) - embedding(calmly)
Both these subtractions would result in a very similar set of remaining ngrams.
No surprise the fastText embeddings do extremely well on this.
Let's do a small test to validate this hypothesis - fastText differs from word2vec only in that it uses char n-gram embeddings as well as the actual word embedding in the scoring function to calculate scores and then likelihoods for each word, given a context word. In case char n-gram embeddings are not present, this reduces (at least theoretically) to the original word2vec model. This can be implemented by setting 0 for the max length of char n-grams for fastText.
End of explanation
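The "very similar set of remaining ngrams" claim above can be made concrete with a toy extractor for fastText-style character n-grams (default 3-6 characters with boundary markers; the real model of course sums learned embeddings of these grams rather than comparing the raw sets):

```python
def char_ngrams(word, n_min=3, n_max=6):
    """fastText-style character n-grams with '<' and '>' boundary markers."""
    w = '<' + word + '>'
    return {w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)}

# n-grams gained by the '-ly' form relative to its base form
diff_amazing = char_ngrams('amazingly') - char_ngrams('amazing')
diff_calm = char_ngrams('calmly') - char_ngrams('calm')
shared_suffix = diff_amazing & diff_calm
```

Both differences share the suffix gram 'ly>' — exactly the kind of overlap a linear combination of n-gram embeddings exploits on syntactic analogies.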
accuracies = []
print('Loading Gensim embeddings')
text8_gs = Word2Vec.load_word2vec_format(MODELS_DIR + 'text8_gs.vec')
print('Accuracy for word2vec:')
accuracies.append(print_accuracy(text8_gs, word_analogies_file))
print('Loading FastText embeddings (with n-grams)')
text8_ft = Word2Vec.load_word2vec_format(MODELS_DIR + 'text8_ft.vec')
print('Accuracy for FastText (with n-grams):')
accuracies.append(print_accuracy(text8_ft, word_analogies_file))
print('Loading FastText embeddings')
text8_ft_no_ng = Word2Vec.load_word2vec_format(MODELS_DIR + 'text8_ft_no_ng.vec')
print('Accuracy for FastText (without n-grams):')
accuracies.append(print_accuracy(text8_ft_no_ng, word_analogies_file))
evaluation_data['text8'] += [[acc[0] for acc in accuracies], [acc[1] for acc in accuracies]]
Explanation: A-ha! The results for FastText with no n-grams and Word2Vec look a lot more similar (as they should) - the differences could easily result from differences in implementation between fastText and Gensim, and randomization. Especially telling is that the semantic accuracy for FastText has improved slightly after removing n-grams, while the syntactic accuracy has taken a giant dive. Our hypothesis that the char n-grams result in better performance on syntactic analogies seems fair. It also seems possible that char n-grams hurt semantic accuracy a little. However, the brown corpus is too small to be able to draw any definite conclusions - the accuracies seem to vary significantly over different runs.
Let's try with a larger corpus now - text8 (collection of wiki articles). I'm also curious about the impact on semantic accuracy - for models trained on the brown corpus, the difference in the semantic accuracy and the accuracy values themselves are too small to be conclusive. Hopefully a larger corpus helps, and the text8 corpus likely has a lot more information about capitals, currencies, cities etc, which should be relevant to the semantic tasks.
End of explanation
accuracies = []
print('Loading Gensim embeddings')
text9_gs = Word2Vec.load_word2vec_format(MODELS_DIR + 'text9_gs.vec')
print('Accuracy for word2vec:')
accuracies.append(print_accuracy(text9_gs, word_analogies_file))
print('Loading FastText embeddings (with n-grams)')
text9_ft = Word2Vec.load_word2vec_format(MODELS_DIR + 'text9_ft.vec')
print('Accuracy for FastText (with n-grams):')
accuracies.append(print_accuracy(text9_ft, word_analogies_file))
print('Loading FastText embeddings')
text9_ft_no_ng = Word2Vec.load_word2vec_format(MODELS_DIR + 'text9_ft_no_ng.vec')
print('Accuracy for FastText (without n-grams):')
accuracies.append(print_accuracy(text9_ft_no_ng, word_analogies_file))
evaluation_data['text9'] += [[acc[0] for acc in accuracies], [acc[1] for acc in accuracies]]
%matplotlib inline
import matplotlib.pyplot as plt
def plot(ax, data, corpus_name='brown'):
width = 0.25
pos = [(i, i + width, i + 2*width) for i in range(len(data))]
colors = ['#EE3224', '#F78F1E', '#FFC222']
acc_ax = ax.twinx()
# Training time
ax.bar(pos[0],
data[0],
width,
alpha=0.5,
color=colors
)
# Semantic accuracy
acc_ax.bar(pos[1],
data[1],
width,
alpha=0.5,
color=colors
)
# Syntactic accuracy
acc_ax.bar(pos[2],
data[2],
width,
alpha=0.5,
color=colors
)
ax.set_ylabel('Training time (s)')
acc_ax.set_ylabel('Accuracy (%)')
ax.set_title(corpus_name)
acc_ax.set_xticks([p[0] + 1.5 * width for p in pos])
acc_ax.set_xticklabels(['Training Time', 'Semantic Accuracy', 'Syntactic Accuracy'])
# Proxy plots for adding legend correctly
proxies = [ax.bar([0], [0], width=0, color=c, alpha=0.5)[0] for c in colors]
models = ('Gensim', 'FastText', 'FastText (no-ngrams)')
ax.legend((proxies), models, loc='upper left')
ax.set_xlim(pos[0][0]-width, pos[-1][0]+width*4)
    ax.set_ylim([0, max(data[0])*1.1])
    acc_ax.set_ylim([0, max(data[1] + data[2])*1.1])
plt.grid()
# Plotting the bars
fig = plt.figure(figsize=(10,15))
for corpus, subplot in zip(sorted(evaluation_data.keys()), [311, 312, 313]):
ax = fig.add_subplot(subplot)
plot(ax, evaluation_data[corpus], corpus)
plt.show()
Explanation: With the text8 corpus, we observe a similar pattern. Semantic accuracy falls by a small but significant amount when n-grams are included in FastText, while FastText with n-grams performs far better on the syntactic analogies. FastText without n-grams is largely similar to Word2Vec.
My hypothesis for semantic accuracy being lower for the FastText-with-ngrams model is that most of the words in the semantic analogies are standalone words and are unrelated to their morphemes (e.g. father, mother, France, Paris), so the inclusion of the char n-grams in the scoring function actually makes the embeddings worse.
This trend is observed in the original paper too, where the performance of embeddings with n-grams is worse on semantic tasks than both word2vec cbow and skipgram models.
Let's do a quick comparison on an even larger corpus - text9
End of explanation |
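The role the char n-grams play in the scoring function discussed above can be made concrete with a small sketch. This is an illustrative reimplementation, not gensim's or fastText's actual code: the hash function and bucket count are simplified assumptions, and real fastText also adds the full word's own vector when the word is in-vocabulary.

```python
import numpy as np

def char_ngrams(word, nmin=3, nmax=6):
    # fastText wraps each word in boundary markers before extracting n-grams
    w = "<" + word + ">"
    return [w[i:i + n] for n in range(nmin, nmax + 1) for i in range(len(w) - n + 1)]

def ngram_word_vector(word, ngram_table):
    # Sum the hashed char n-gram vectors and average; an OOV word still gets a vector.
    buckets, dim = ngram_table.shape
    grams = char_ngrams(word)
    vec = np.zeros(dim)
    for g in grams:
        vec += ngram_table[hash(g) % buckets]
    return vec / max(len(grams), 1)
```

An OOV word like "wherein" still shares many n-grams with "where", which is exactly why the n-gram model helps on the morphology-heavy syntactic analogies.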
Description:
FrameNet
I see three ways of getting features from FrameNet
Step2: 1. Does word $j$ evoke frame $i$?
In sum
Step3: Most words evoke one frame, some two, few three.
Step7: 2. Frame relations
In this approach, I represent a word by a bit vector indicating whether or not that word evokes a frame or evokes a frame that inherits from a frame. | Python Code:
import numpy as np
import pandas as pd
from nltk.corpus import framenet as fn
Explanation: FrameNet
I see three ways of getting features from FrameNet:
Does word $j$ evoke frame $i$?
Something with frame relations
Something with frame elements
End of explanation
def get_lus(frame):
    """Helper to get lexemes from frame."""
lus = frame['lexUnit'].keys()
return [k.partition('.')[0] for k in lus]
all_frames = fn.frames('.*')
all_frame_names = [f.name for f in all_frames]
all_lus = [get_lus(f) for f in all_frames]
all_lus = [item for sublist in all_lus for item in sublist]
all_lus = list(set(all_lus))
evoke = pd.DataFrame(0, index=all_frame_names, columns=all_lus)
for frame in all_frames:
name = frame.name
lus = get_lus(frame)
for lu in lus:
        evoke.loc[name, lu] += 1
Explanation: 1. Does word $j$ evoke frame $i$?
In sum: too sparse to use.
End of explanation
evoke.max().value_counts()
evoke.head()
Explanation: Most words evoke one frame, some two, few three.
End of explanation
def evokes(frame):
    """Return words that evoke `frame`."""
lus = frame['lexUnit'].keys()
return [k.partition('.')[0] for k in lus]
def is_inheritance_relation(relation):
return relation['type']['name'] == 'Inheritance'
def is_parent_frame(frame, relation):
return frame.name == relation.superFrameName
def children(frame):
    """Return children of `frame`."""
relations = frame.frameRelations
relations = [r for r in relations if is_inheritance_relation(r)]
relations = [r for r in relations if is_parent_frame(frame, r)]
return [fn.frame(r.subFrameName) for r in relations]
def flatten(lst):
return [item for sublist in lst for item in sublist]
def words(frame):
    """Return all words that evoke `frame`, including words that
    evoke frames that inherit from `frame`."""
kids = children(frame)
if not kids:
return evokes(frame)
evoke_sub_frames = [words(f) for f in kids]
return evokes(frame) + flatten(evoke_sub_frames)
relations = pd.DataFrame(0, index=all_frame_names, columns=all_lus)
for frame in all_frames:
name = frame.name
lus = words(frame)
for lu in lus:
relations.loc[name, lu] += 1
relations.head()
(relations.size - np.count_nonzero(relations.values))/relations.size
relations.sum(axis=1).sort_values(ascending=False).head()
relations.loc['Transitive_action'].sort_values(ascending=False).head()
relations.to_csv('framenet-relations.csv')
normalized_relations = relations / relations.sum()
normalized_relations.to_csv('framenet-normalized-relations.csv')
Explanation: 2. Frame relations
In this approach, I represent a word by a bit vector indicating whether or not that word evokes a frame or evokes a frame that inherits from a frame.
End of explanation |
Description:
MNIST in Keras with Tensorboard
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Datset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
Parameters
Step1: Imports
Step3: TPU/GPU detection
Step4: Colab-only auth for this notebook and the TPU
Step5: tf.data.Dataset
Step6: Let's have a look at the data
Step7: Keras model
Step8: Train and validate the model
Step9: Visualize predictions
Step10: Export the model for serving from ML Engine
Step11: Deploy the trained model to AI Platform
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Cloud Configuration
Step12: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
Step13: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ai-platform"
command line tool, but any tool that can send a JSON payload to a REST endpoint will work.
BATCH_SIZE = 64
LEARNING_RATE = 0.02
# GCS bucket for training logs and for saving the trained model
# You can leave this empty for local saving, unless you are using a TPU.
# TPUs do not have access to your local instance and can only write to GCS.
BUCKET="" # a valid bucket name must start with gs://
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: MNIST in Keras with Tensorboard
This sample trains an "MNIST" handwritten digit
recognition model on a GPU or TPU backend using a Keras
model. Data are handled using the tf.data.Datset API. This is
a very simple sample provided for educational purposes. Do
not expect outstanding TPU performance on a dataset as
small as MNIST.
Parameters
End of explanation
import os, re, math, json, time
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
Explanation: Imports
End of explanation
tpu = None
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection relies on TPU_NAME env var
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu, steps_per_run=100)
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
gpus = tf.config.experimental.list_logical_devices("GPU")
if len(gpus) > 1:
strategy = tf.distribute.MirroredStrategy([gpu.name for gpu in gpus])
print("running on multiple GPUs")
else:
strategy = tf.distribute.get_strategy() # the default strategy works on CPU and single GPU
print("Running on {}".format("a single GPU" if len(gpus)==1 else "CPU"))
# adjust batch size and learning rate for distributed computing
global_batch_size = BATCH_SIZE * strategy.num_replicas_in_sync # num_replicas is 8 on a single TPU, or N when running on N GPUs.
learning_rate = LEARNING_RATE * strategy.num_replicas_in_sync
#@title visualization utilities [RUN ME]
"""This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here."""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='#F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='#a8151a')
plt.rc('figure', facecolor='#F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
if tf.executing_eagerly():
# This is the TF 2.0 "eager execution" way of iterating through a tf.data.Dataset
for v_images, v_labels in validation_dataset:
break
for t_images, t_labels in unbatched_train_ds.batch(N):
break
validation_digits = v_images.numpy()
validation_labels = v_labels.numpy()
training_digits = t_images.numpy()
training_labels = t_labels.numpy()
else:
# This is the legacy TF 1.x way of iterating through a tf.data.Dataset
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
Explanation: TPU/GPU detection
End of explanation
#IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
#if IS_COLAB_BACKEND:
# from google.colab import auth
# auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
Explanation: Colab-only auth for this notebook and the TPU
End of explanation
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, global_batch_size)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
End of explanation
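The `header_bytes=16` and `header_bytes=8` arguments above encode the IDX file layout: a 4-byte big-endian magic number whose low byte gives the number of dimensions, followed by one big-endian uint32 per dimension. A small stand-alone parser makes the byte counts explicit (a sketch, independent of the tf.data pipeline):

```python
import struct

def parse_idx_header(raw):
    # IDX images: magic 0x00000803 + (n_items, rows, cols) -> 16 bytes
    # IDX labels: magic 0x00000801 + (n_items,)            -> 8 bytes
    magic, = struct.unpack(">I", raw[:4])
    ndims = magic & 0xFF                    # low byte encodes the number of dimensions
    dims = struct.unpack(">" + "I" * ndims, raw[4:4 + 4 * ndims])
    return magic, dims

# a synthetic image-file header: exactly the bytes the pipeline skips
header = struct.pack(">IIII", 0x00000803, 60000, 28, 28)
```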
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
# This model trains to 99.4% (sometimes 99.5%) accuracy in 10 epochs (with a batch size of 64)
def make_model():
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=True, activation='relu'),
tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2),
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=True, activation='relu', strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=True, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
with strategy.scope(): # the new way of handling distribution strategies in Tensorflow 1.14+
model = make_model()
# print model layers
model.summary()
# set up Tensorboard logs
timestamp = time.strftime("%Y-%m-%d-%H-%M-%S")
log_dir=os.path.join(BUCKET, 'mnist-logs', timestamp)
tb_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, update_freq=50*global_batch_size)
print("TensorBoard logs written to: ", log_dir)
Explanation: Keras model: 3 convolutional layers, 2 dense layers
End of explanation
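The layer shapes above can be sanity-checked by reproducing the parameter counts that `model.summary()` prints: each conv layer has kernel × kernel × in_channels × filters weights plus one bias per filter, and the two stride-2 'same' convolutions shrink 28 → 14 → 7.

```python
def conv_params(k, c_in, c_out):
    # k x k kernel with one bias per output channel
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

conv1 = conv_params(3, 1, 6)      # 60
conv2 = conv_params(6, 6, 12)     # 2604
conv3 = conv_params(6, 12, 24)    # 10392
flat  = 7 * 7 * 24                # 28 -> 14 -> 7 after the two stride-2 convs
fc1   = dense_params(flat, 200)
fc2   = dense_params(200, 10)
total = conv1 + conv2 + conv3 + fc1 + fc2
```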
EPOCHS = 10
steps_per_epoch = 60000//global_batch_size # 60,000 items in this dataset
print("Step (batches) per epoch: ", steps_per_epoch)
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1,
callbacks=[tb_callback], verbose=1)
Explanation: Train and validate the model
End of explanation
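Note that the `model.compile` comment above says the learning rate "will be set by LearningRateScheduler", but only the TensorBoard callback is actually passed to `model.fit`. A sketch of such a schedule (the decay constants below are illustrative stand-ins, not the original notebook's values):

```python
import math

def lr_schedule(epoch):
    # exponential decay from roughly LEARNING_RATE toward a small floor
    return 0.0001 + 0.02 * math.exp(-epoch / 3.0)

# attach alongside the TensorBoard callback:
#   lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule)
#   model.fit(..., callbacks=[tb_callback, lr_callback], ...)
```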
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
def call(self, inputs):
# When the deployed model is called through its REST API,
# the JSON payload is parsed automatically, transformed into
# a tensor and passed to this input layer. You can perform
# additional transformations, such as decoding JPEGs for example,
# before sending the data to your model. However, you can only
# use tf.xxxx operations.
return inputs
# little wrinkle: must copy the model from TPU to CPU manually. This is a temporary workaround.
restored_model = make_model()
restored_model.set_weights(model.get_weights()) # this copied the weights from TPU, does nothing on GPU
# add the serving input layer
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, 28*28)))
serving_model.add(restored_model)
export_path = os.path.join(BUCKET, 'mnist-export', timestamp)
tf.saved_model.save(serving_model, export_path)
print("Model exported to: ", export_path)
Explanation: Export the model for serving from ML Engine
End of explanation
# Enable model deployment here
DEPLOY = False # #@param {type:"boolean"}
# Create the model only once, after that, create new versions of the same model
CREATE_MODEL = True #@param {type:"boolean"}
# Models are deployed in your cloud project
PROJECT = "" #@param {type:"string"}
MODEL_NAME = "mnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
if DEPLOY:
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', export_path), 'For this part, the model must have been exported to a GCS bucket.'
Explanation: Deploy the trained model to AI Platform
Push your trained model to production on AI Platform for a serverless, autoscaled, REST API experience.
You will need a GCS bucket and a GCP project for this.
Models deployed on AI Platform autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
Cloud Configuration
End of explanation
# Create the model
if DEPLOY and CREATE_MODEL:
!gcloud ai-platform models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# Create a version of this model (you can add --async at the end of the line to make this call non blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
if DEPLOY:
!echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
!gcloud ai-platform versions create {MODEL_VERSION} --model={MODEL_NAME} --origin="{export_path}" --project={PROJECT} --runtime-version=1.13 --python-version=3.5
Explanation: Deploy the model
This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
End of explanation
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because the ServingInput layer was named "serving". Keras appends "_input"
f.write(data+'\n')
if DEPLOY: # Request online predictions from deployed model (REST API) using the "gcloud ai-platform" command line.
predictions = !gcloud ai-platform predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
probabilities = np.stack([json.loads(p) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
predictions = np.argmax(probabilities, axis=1)
display_top_unrecognized(digits, predictions, labels, N, 100//N)
Explanation: Test the deployed model
Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ai-platform"
command line tool, but any tool that can send a JSON payload to a REST endpoint will work.
End of explanation |
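The gcloud call above is a thin wrapper over a REST request. Below is a sketch of building the same request by hand; it assumes the classic AI Platform endpoint shape (`https://ml.googleapis.com/v1/...:predict`) and a bearer token such as the one from `gcloud auth print-access-token`. Treat the URL shape as an assumption to verify against current documentation.

```python
import json
import urllib.request

def build_predict_request(project, model, version, instances, token):
    url = ("https://ml.googleapis.com/v1/projects/%s/models/%s/versions/%s:predict"
           % (project, model, version))
    body = json.dumps({"instances": instances}).encode("utf-8")
    return urllib.request.Request(url, data=body, method="POST",
                                  headers={"Authorization": "Bearer " + token,
                                           "Content-Type": "application/json"})

# req = build_predict_request(PROJECT, MODEL_NAME, MODEL_VERSION,
#                             [{"serving_input": d.tolist()} for d in digits[:2]], token)
# json.load(urllib.request.urlopen(req))   # -> {"predictions": [...]}
```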
Description:
Create a random braid with a 25 node network, and a probability that any given node will produce a block in a given tick of $$\frac{2^{246}}{2^{256}-1} \simeq 0.1\%.$$
With 25 nodes this means that each "tick" there is a $\sim 2.4\%$ chance that the network will generate a new bead. This is small, and the resulting graph is close to a blockchain.
The network has a random topology of nodes, distributed on the surface of a sphere. Each node is connected to 4 other nodes, with a latency given by their arc distance on the sphere. A network with a low target (high difficulty) is blockchain-like, with an occasional diamond in it, which Bitcoin would orphan. TICKSIZE=0.1 so time is incremented by a tenth of a second with each iteration, and beads are propagated to connected nodes and delivered after a time interval given by their latency.
Step1: Keeping the same network n, let's increase the target (decrease the difficulty) to produce a thicker braid. Also, this time let's actually mine, to see that the graphs are the same. The number of iterations until this node mines a bead is given by the geometric distribution. This may give a computational speedup for graphing under the right circumstances. (It's no speedup with this example)
Step2: Let's turn to how to evaluate a rewards algorithm on a network. Let's choose one bead to focus on and examine its sibling structure. Siblings are beads in the same cohort that cannot be decided to come before or after one another, since the network is asynchronous (they were mined by miners on opposite sides of the planet at nearly the same time). These might contain the same transaction.
Siblings are labeled by the number of beads in the (ancestor, descendant) direction (called "rank") one must travel from the sibling to find a common ancestor. The black-circled bead is the one under consideration, and its siblings are labeled by their rank (m,n).
This quantity, or something similar, might be used in evaluating rewards for miners. It gives a measure based on graph structure alone of which beads might have been withheld.
Step3: Now let's play with the reward model. (Modify Braid.rewards(...) to try a different model)
If we assume a fixed reward per cohort, and that within the cohort we distribute rewards proportionally, we end up with a graph like the following. (Note that this is not a very good reward model) The area of each cohort is equal, if you sum the areas of the constituent beads. Area proportionality is done purely for an intuitive visual representation.
Step5: Now let us examine the behavior of cohorts as we vary the target difficulty.
This can be fairly time consuming. We're essentially going to simulate an entire network with 25 nodes for many millions of beads, in order to get enough resolution on the graph. We use joblib to parallelize creation of the following graphs using a map/reduce algorithm. Each job will create ncohorts/cpu_count cohorts.
The relative vertical error in this graph is $\frac{1}{\sqrt{N}}$ with $N$ cohorts as data points, so for a very smooth graph with error ~1% we need $N=10000$. This takes a couple days on a beefy computer. Instead let's just do $N=100$. The resulting curve will be noisy but it's enough to extract the salient parts. Determining the cohorts is roughly $\mathcal{O}(B_C^2)$ in the number of beads per cohort $B_C$, so we will stop adding data points when the computation time rises for large $B_C$.
Go get a coffee now...this takes a few minutes.
Step6: This resulting curve (see below) is extremely well approximated by
$$
T(x) = \frac{1}{\lambda x} + a e^{a \lambda x}
$$
where $T(x)=T_C$ is the cohort time as a function of target $x$ and is measured at constant x within a window of time known as the "retarget window". We assume that an algorithm will select a new target $x^\prime$ based on the data accumulated at $x$. We assume that $\lambda$ and $a$ are constant over the window, and we measure $T_C, T_B, N_C, N_B$ for the chosen $x$.
The parameters can be understood intuitively, $\frac{1}{\lambda x} = T_B$ is the bead time. The parameter $a$ has dimensions of time and can be thought of as the "size" of the network -- the amount of time required for a bead to traverse the network. It is directly related to the orphan or uncle rate, in the blockchain limit, as well as what has been called "miner utilization" by other authors.
The parameter $\lambda$ is the hash rate and can be obtained along with $a$ | Python Code:
try: del n # this is here so that if you re-execute this block, it will create a new network
except: pass
n = Network(25, target=2**246) # A smaller network or lower target makes thinner braids
for dummy in range(500): n.tick(mine=False)
b = n.nodes[0].braids[0]
b.plot(numbervertices=True);
Explanation: Create a random braid with a 25 node network, and a probability that any given node will produce a block in a given tick of $$\frac{2^{246}}{2^{256}-1} \simeq 0.1\%.$$
With 25 nodes this means that each "tick" there is a $\sim 2.4\%$ chance that the network will generate a new bead. This is small, and the resulting graph is close to a blockchain.
The network has a random topology of nodes, distributed on the surface of a sphere. Each node is connected to 4 other nodes, with a latency given by their arc distance on the sphere. A network with a low target (high difficulty) is blockchain-like, with an occasional diamond in it, which Bitcoin would orphan. TICKSIZE=0.1 so time is incremented by a tenth of a second with each iteration, and beads are propagated to connected nodes and delivered after a time interval given by their latency.
End of explanation
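The "latency given by their arc distance on the sphere" can be sketched directly: normalize the two node positions, take the great-circle angle from their dot product, and scale by a propagation constant. The constant below is a made-up illustration, not the simulator's actual value.

```python
import numpy as np

def arc_latency(p, q, seconds_per_radian=0.05):
    # great-circle angle between two node positions on the sphere, scaled to a delay
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    angle = np.arccos(np.clip(np.dot(p, q), -1.0, 1.0))   # in [0, pi]
    return seconds_per_radian * angle
```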
n.reset(target=2**249)
for dummy in range(200): n.tick(mine=True)
b = n.nodes[0].braids[0]
b.plot(numbervertices=True);
Explanation: Keeping the same network n, let's increase the target (decrease the difficulty) to produce a thicker braid. Also, this time let's actually mine, to see that the graphs are the same. The number of iterations until this node mines a bead is given by the geometric distribution. This may give a computational speedup for graphing under the right circumstances. (It's no speedup with this example)
End of explanation
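The geometric-distribution shortcut mentioned above can be checked numerically: if each tick succeeds with probability p, the number of ticks until the first success is geometric with mean 1/p, so a miner can sample one variate instead of looping tick by tick. This is a stand-alone sketch, not the simulator's internal code.

```python
import numpy as np

p = 2**249 / (2**256 - 1)                 # per-tick success probability at this target
rng = np.random.RandomState(42)
waits = rng.geometric(p, size=200_000)    # ticks until first success, support {1, 2, ...}
mean_wait = waits.mean()                  # expected value is 1/p = 128 ticks
```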
b.plot(focusbead=b.vertex(b.num_vertices()/2+2));
Explanation: Let's turn to how to evaluate a rewards algorithm on a network. Let's choose one bead to focus on and examine its sibling structure. Siblings are beads in the same cohort that cannot be decided to come before or after one another, since the network is asychronous (they were mined by miners on opposite sides of the planet at nearly the same time). These might contain the same transaction.
Siblings are labeled by the number of beads in the (ancestor, descendant) direction (called "rank") one must travel from the sibling to find a common ancestor. The black-circled bead is the one under consideratoin, and its siblings are labeled by their rank (m,n).
This quantity, or something similar, might be used in evaluating rewards for miners. It gives a measure based on graph structure alone of which beads might have been withheld.
End of explanation
b.plot(rewards=True, K=1.8);
Explanation: Now let's play with the reward model. (Modify Braid.rewards(...) to try a different model)
If we assume a fixed reward per cohort, and that within the cohort we distribute rewards proportionally, we end up with a graph like the following. (Note that this is not a very good reward model.) The area of each cohort is equal if you sum the areas of the constituent beads. Area proportionality is done purely for an intuitive visual representation.
End of explanation
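The "fixed reward per cohort, distributed proportionally" rule reduces to a simple proportional split. A sketch, with per-bead weights standing in for whatever measure `Braid.rewards` actually uses (the weights are illustrative):

```python
def cohort_rewards(weights, cohort_reward=1.0):
    # split a fixed per-cohort reward proportionally to each bead's weight
    total = float(sum(weights))
    return [cohort_reward * w / total for w in weights]
```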
dctimes = [] # Initialize our results
def job(n, difficulty, ncohorts, njob):
begin = time.process_time()
    np.random.seed((int(time.time()) + njob) % (2**32 - 1))  # We have to seed the random number generator with something
n.reset(target=difficulty*(1<<255)) # different for each job or they will all generate exactly
times = [] # the same results!
lastcohort = frozenset({n.nodes[0].braids[0].vertex(0)})
while len(list(n.nodes[0].braids[0].cohorts())) < ncohorts: # 1% error = sqrt(N)/N
for dummy in range(100): n.tick()
for c in n.nodes[0].braids[0].cohorts(lastcohort): pass
lastcohort = c
b = n.nodes[0].braids[0]
times.append(b.cohort_time())
#print(difficulty, times, b.ncohorts)
return((difficulty,numpy.mean([m for (m,s) in times]), len(b.beads), b.ncohorts, time.process_time()-begin))
def parmap(f, *args):
    """Map the function f with arguments <args>, adding an extra argument which is its job number."""
return Parallel(n_jobs=cpu_count())(delayed(f)(*args, job) for job in range(cpu_count()))
def gettimes(n, difficulty, ncohorts):
def reduce(result):
return (difficulty, numpy.mean([m[1] for m in result]), sum([m[2] for m in result]),
sum([m[3] for m in result]), sum([m[4] for m in result]))
return reduce(parmap(job, n, difficulty, ncohorts/cpu_count()))
print("(target difficulty, mean cohort time, nbeads, ncohorts, CPU time)")
for difficulty in exp(arange(-8.5, -3.375, 0.0625)):
if(any([x[0]==difficulty for x in dctimes])): continue
dctimes.append(gettimes(n, difficulty, 100))
print(dctimes[-1], ",")
if(dctimes[-1][-1] > 5*60): break # If it takes longer than 5 CPU-minutes stop
# re-run this block to add another row to dctimes.
Explanation: Now let us examine the behavior of cohorts as we vary the target difficulty.
This can be fairly time consuming. We're essentially going to simulate an entire network with 25 nodes for many millions of beads, in order to get enough resolution on the graph. We use joblib to parallelize creation of the following graphs using a map/reduce algorithm. Each job will create ncohorts/cpu_count cohorts.
The relative vertical error in this graph is $\frac{1}{\sqrt{N}}$ with $N$ cohorts as data points, so for a very smooth graph with error ~1% we need $N=10000$. This takes a couple days on a beefy computer. Instead let's just do $N=100$. The resulting curve will be noisy but it's enough to extract the salient parts. Determining the cohorts is roughly $\mathcal{O}(B_C^2)$ in the number of beads per cohort $B_C$, so we will stop adding data points when the computation time rises for large $B_C$.
Go get a coffee now...this takes a few minutes.
End of explanation
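The $\frac{1}{\sqrt{N}}$ error scaling quoted above can be sanity-checked with a toy experiment (illustrative only — exponential samples stand in for cohort times here; nothing in this sketch uses the braid simulator):

```python
import numpy as np

rng = np.random.default_rng(0)
spread = {}
for n in (100, 10_000):
    # Spread of the mean-cohort-time estimate over 200 repeated "windows" of n samples;
    # the standard error should scale like 1/sqrt(n): ~0.1 for n=100, ~0.01 for n=10_000.
    spread[n] = np.std([rng.exponential(1.0, n).mean() for _ in range(200)])
print(spread)
```

So going from $N=100$ to $N=10000$ cohorts buys one extra digit of accuracy, at 100× the simulation cost.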
# Imports used below (normally defined in earlier notebook cells)
import numpy
import numpy as np
import scipy.optimize, scipy.special
import matplotlib.pyplot as plt

x = np.array([p[0] for p in dctimes])
def f(x, lam, a): return 1/lam/x + a*np.exp(a*lam*x)
y = np.array([p[1] for p in dctimes])
((lam, a), dummy) = scipy.optimize.curve_fit(f, x, y, [100, 1])
plt.loglog(x, y, label='simulation')
plt.loglog(x, f(x, lam, a), label=r'fit $\lambda=%3.2f$ hashes/s; $a=%1.4f$s'%(lam, a))
plt.xlabel("target difficulty $x$")
plt.ylabel("cohort time $T(x)$")
x0 = numpy.real(2*scipy.special.lambertw(1/2)/a/lam)
plt.axvline(x0, label=r"Fastest cohort time $T_C^\prime=%1.3f$s at target $x_0=%f$"%(f(x0, lam, a), x0), color='red')
plt.xlim(x[0], x[-1])
plt.legend(loc='lower left', frameon=False);
Explanation: This resulting curve (see below) is extremely well approximated by
$$
T(x) = \frac{1}{\lambda x} + a e^{a \lambda x}
$$
where $T(x)=T_C$ is the cohort time as a function of target $x$ and is measured at constant x within a window of time known as the "retarget window". We assume that an algorithm will select a new target $x^\prime$ based on the data accumulated at $x$. We assume that $\lambda$ and $a$ are constant over the window, and we measure $T_C, T_B, N_C, N_B$ for the chosen $x$.
The parameters can be understood intuitively, $\frac{1}{\lambda x} = T_B$ is the bead time. The parameter $a$ has dimensions of time and can be thought of as the "size" of the network -- the amount of time required for a bead to traverse the network. It is directly related to the orphan or uncle rate, in the blockchain limit, as well as what has been called "miner utilization" by other authors.
The parameter $\lambda$ is the hash rate and can be obtained along with $a$:
$$
\lambda = \frac{N_B}{x T_C N_C}; \qquad a = T_C W\left(\frac{T_C}{T_B} - 1\right)
$$
where $N_B$ is the number of beads and $N_C$ is the number of cohorts. $W(z)$ is the
Lambert W function.
With these in hand we can compute the location of the minimum
$$
x_0 = \frac{2 W\left(\frac12\right)}{a \lambda} = \frac{0.7035}{a\lambda}
$$
This is independent of network topology. (See the topology parameter of class Network -- this graph
is for miners distributed on the surface of a sphere, but setting topology to something else generates
a random network)
End of explanation |
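The closed-form minimum $x_0 = 2W(\tfrac12)/(a\lambda)$ can be cross-checked against a direct numerical minimization of $T(x)$. A small sketch with illustrative (not fitted) parameter values:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import minimize_scalar

lam, a = 100.0, 0.01    # illustrative hash rate and "network size" parameters

def T(x):
    # cohort time model: T(x) = 1/(lam*x) + a*exp(a*lam*x)
    return 1.0/(lam*x) + a*np.exp(a*lam*x)

x0 = float(2*lambertw(0.5).real/(a*lam))   # analytic minimum, = 0.7035/(a*lam)
xn = minimize_scalar(T, bounds=(1e-4, 10.0), method='bounded').x
print(x0, xn)   # both ~0.7035 when a*lam = 1
```

Setting $T'(x)=0$ gives $(a\lambda x)^2 e^{a\lambda x}=1$, whose positive root is $a\lambda x = 2W(\tfrac12)$, matching the formula above.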
10,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Start with the Final Design Report - SpaceX Hyperloop Competition II for high level view.
SpaceX Hyperloop Track Specification
Step1: Ball Screws, for the (Eddy current) Brake Mechanism
cf. THK Ball Screw, General Catalog
cf. NSK Ball Screws
Step2: From NSK Ball Screws | Python Code:
import sympy
from sympy import Eq, solve, Symbol, symbols, pi
from sympy import Rational as Rat
from sympy.abc import tau,l,F
Explanation: Start with the Final Design Report - SpaceX Hyperloop Competition II for high level view.
SpaceX Hyperloop Track Specification
End of explanation
eta_1 = Symbol('eta_1',positive='True')
BallScrewThrustfromTorque = Eq(F,Rat(2)*pi*eta_1*tau/l)
Explanation: Ball Screws, for the (Eddy current) Brake Mechanism
cf. THK Ball Screw, General Catalog
cf. NSK Ball Screws: 6mm ⌀ thru 15mm ⌀
cf. FUNdaMENTALS of Design, Topic 6 Power Transmission Elements II
The THK Ball Screw, General Catalog yields the following general relationship for the thrust generated when torque is applied:
$ Fa = \frac{ 2\pi \cdot \eta_1 \cdot T }{ Ph }$ (THK's notation)
$ \boxed{ F = \frac{2\pi \eta_1 \tau}{l} }$ (EY notation)
where $\eta_1$ is the efficiency of converting rotational motion to linear motion (i.e. linear output / rotational input), $l$ is the thread lead (i.e. the distance either the nut or screw moves under 1 full rotation (revolution)), and $\tau$ is the applied input torque. Indeed, I had double-checked the kinematics and thus, using energy conservation, verified this relation (cf. servetheloop_dump)
End of explanation
solve( BallScrewThrustfromTorque.subs(eta_1,0.95).subs(tau,3.*1000).subs(l,4.), F)
Explanation: From NSK Ball Screws: 6mm ⌀ thru 15mm ⌀, given the stated product dimensions for the actual product we had used (Product ID: W1003WF-24P-C3Z4 for the ball screw), $l=4 \, mm$
Supposing the forward or backward efficiency $\eta$ is $ \sim 0.95$ and torque $\tau$ is $3 \, N \cdot m$,
End of explanation |
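The same thrust number can be cross-checked in plain Python (torque entered in N·mm so the result comes out in N; values as in the sympy call above):

```python
import math

eta1, torque_Nmm, lead_mm = 0.95, 3.0*1000, 4.0   # efficiency, 3 N*m in N*mm, 4 mm lead
thrust_N = 2*math.pi*eta1*torque_Nmm/lead_mm       # F = 2*pi*eta_1*tau/l
print(round(thrust_N, 1))  # ~4476.8 N, i.e. roughly 4.5 kN of brake actuation force
```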
10,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color=Teal>ATOMIC STRING FUNCTIONS and SPACETIME QUANTIZATION MODELS (Python Code)</font>
By Sergei Eremenko, PhD, Dr.Eng., Professor, Honorary Professor
https
Step1: <font color=teal>2. Atomic String Function (AString) is an Integral and Composing Branch of Atomic Function up(x) (introduced in 2017 by S. Yu. Eremenko)</font>
AString function is solitary kink function which simultaneously is integral and composing branch of atomic function up(x)
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Step2: Atomic String, Atomic Function (AF) and AF Derivative plotted together
Step3: <font color=teal>3. Properties of Atomic Function Up(x)</font>
3.1. Atomic Function Derivative expressed via Atomic Function itself
Atomic Function Derivative can be exressed via Atomic Function itself - up'(x)= 2up(2x+1)-2up(2x-1) meaning the shape of pulses for derivative function can be represented by shifted and stratched Atomic Function itself - remarkable property
<font color=maroon>up'(x)= 2up(2x+1)-2up(2x-1)</font>
Atomic Function and its Derivative plotted together
Step4: 3.2. Partition of Unity
The superposition of Atomic Function pulses set at points -2, -1, 0, +1, +2... can exactly represent unity (the number 1)
Step5: 3.3. Atomic Function (AF) is a 'finite', 'compactly supported', or 'solitary' function
Like a Spline, Atomic Function (AF) 'compactly supported' not equal to zero only on section |x|<=1
Step6: 3.4 Atomic Function is a non-analytical function (can not be represented by Taylor's series), but with known Fourier Transformation allowing to exactly calculate AF in certain points, with tabular representation provided in script above.
<font color=teal>4. Properties of Atomic String Function</font>
4.1. AString is not only Integral but also Composing Branch of Atomic Function
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Astring is a swing-like function - Integral of Atomic Function (AF) which can be expressed via AF itself
Step7: 4.3. AStrings and Atomic Solitons
Solitonic mathematical properties of AString and Atomic Functions have been explored in author's paper [2,3] (Eremenko, S.Yu. Atomic solitons as a new class of solitons; 2018; https
Step8: 4.6. Partition of Line from Atomic String functions
Combination/summation of Atomic Strings can exactly represent a straight line
Step9: Partition based on AString with certain width and height depending on a size of 'quanta'
Step10: <font color=teal>5. Model of Spacetime composed from AStrings Quanta (AString Metriants)
5.1. This is an abstract of paper "Atomic Strings and Fabric of Spacetime"
Based on a generalization of well-known atomic function this paper introduces an atomic string (AString) as a swing-like function by joining of which on a periodic lattice it is possible to build one model for absolutely flat and curved spacetime and explain gravitational warping effects due to a unique property of all derivatives being expressed via AString itself. Physically AString may represent a soliton-like elementary string as a spacetime warp/distortion composing atomic quanta in multiple dimensions which can grow, shrink, group into atoms and compose smooth continua, solid and biological matter widespread in nature. This makes AString a good candidate for an elementary string as a fundamental block of matter and spacetime fabric. AString can also be used as a generating axiom for a new non-Archimedean atomic calculus to describe nonlinear metric within cosmic strings and build never overflowing atomic computer with the super fast calculation of derivatives. The AString along with closely related atomic function may find new areas of applications in lattice physics, string theory, relativity, quantum gravity, cosmic strings, multiverse, dark matter, solitons, dislocations, computing, evolution of solid and biological matter, finite element methods, new regression analysis and artificial intelligence classification models.
5.2. Representing spacetime expansion via summation of AStrings quanta
Step11: 5.3. Model of Spacetime curvature and gravity based on AStrings
Schematic model of Gravitation explaining General Relativity effects where spacetime Shape, Density and Curvature are deeply related to each other being expressed via shifts and stretches of the same AString Atomic Function
Step12: <font color=teal>6. 'Soliton Nature' book</font>
6.1. The theory is also described in a book 'Soliton Nature'
Soliton Nature book is easy-to-read, pictorial, interactive book which uses beautiful photography, video channel, and computer scripts in R and Python to demonstrate existing and explore new solitons – the magnificent and versatile energy concentration phenomenon of nature. New class of atomic solitons can be used to describe Higgs boson (‘the god particle’) fields, spacetime quanta and other fundamental building blocks of nature. | Python Code:
import numpy as np
import pylab as pl
pl.rcParams["figure.figsize"] = 9,6
###################################################################
##This script calculates the values of Atomic Function up(x) (1971)
###################################################################
################### One Pulse of atomic function
def up1(x: float) -> float:
#Atomic function table
up_y = [0.5, 0.48, 0.460000017,0.440000421,0.420003478,0.400016184, 0.380053256, 0.360139056,
0.340308139, 0.320605107,0.301083436, 0.281802850, 0.262826445, 0.244218000, 0.226041554,
0.208361009, 0.191239338, 0.174736305, 0.158905389, 0.143991189, 0.129427260, 0.115840866,
0.103044024, 0.9110444278e-01, 0.798444445e-01, 0.694444445e-01, 0.598444445e-01,
0.510444877e-01, 0.430440239e-01, 0.358409663e-01, 0.294282603e-01, 0.237911889e-01,
0.189053889e-01, 0.147363055e-01, 0.112393379e-01, 0.836100883e-02, 0.604155412e-02,
0.421800000e-02, 0.282644445e-02, 0.180999032e-02, 0.108343562e-02, 0.605106267e-03,
0.308138660e-03, 0.139055523e-03, 0.532555251e-04, 0.161841328e-04, 0.347816874e-05,
0.420576116e-05, 0.167693347e-07, 0.354008603e-10, 0]
up_x = np.arange(0.5, 1.01, 0.01)
res = 0.
if ((x>=0.5) and (x<=1)):
for i in range(len(up_x) - 1):
if (up_x[i] >= x) and (x < up_x[i+1]):
N1 = 1 - (x - up_x[i])/0.01
res = N1 * up_y[i] + (1 - N1) * up_y[i+1]
return res
return res
############### Atomic Function Pulse with width, shift and scale #############
def upulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
res = 0.
if (x >= 0.5) and (x <= 1):
res = up1(x)
elif (x >= 0.0) and (x < 0.5):
res = 1 - up1(1 - x)
elif (x >= -1 and x <= -0.5):
res = up1(-x)
elif (x > -0.5) and (x < 0):
res = 1 - up1(1 + x)
res = d + res * c
return res
############### Atomic Function Applied to list with width, shift and scale #############
def up(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(upulse(x[i], a, b, c, d))
return res
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic Function up(x)')
pl.plot(x, up(x), label='Atomic Function')
pl.grid(True)
pl.show()
Explanation: <font color=Teal>ATOMIC STRING FUNCTIONS and SPACETIME QUANTIZATION MODELS (Python Code)</font>
By Sergei Eremenko, PhD, Dr.Eng., Professor, Honorary Professor
https://www.researchgate.net/profile/Sergei_Eremenko
https://www.amazon.com/Sergei-Eremenko/e/B082F3MQ4L
https://www.linkedin.com/in/sergei-eremenko-3862079
https://www.facebook.com/SergeiEremenko.Author
The Atomic String (AString) function, whose name reflects the well-known Atomic Functions, is a smooth compactly-supported solitonic kink function; by joining such kinks on a periodic lattice it is possible to build models of flat and curved spacetime and other continua widespread in nature, as well as to introduce the concepts of the AString Spacetime Quantum and the AString Metriant.
It may lead to novel models of quantized spacetime as a superposition of elementary AString kinks which may represent 'spacetime distortions/warps' studied by Stephen Hawking, or 'metriants' by A. Veinik, or 'ripples' of Higgs and other quantum fields permeating space.
Joining/summation of co-located AStrings represents expansion of space while opposite AStrings form 'solitonic atoms' (Atomic Function) composing the fields holding quanta together.
AStrings, with the unique property of derivatives being expressed via AString itself, can be used to create models of quantized gravity and explain general relativity effects where spacetime shape and matter distributions are deeply related to each other, being expressed via shifts and stretches of the same AString function.
AStrings and Atomic Functions may also be used to construct models of different quantum fields as fundamental blocks of matter, anti-matter (solitonic atoms, Higgs fields, waves), cosmic strings and contribute to the theories of dark matter, chronal matter and multiverse where quanta can significantly differ from the ones in ordinary universe.
AStrings and Atomic Functions may find some applications in Spacetime Physics, String theory, General and Special Relativity, Theory of Solitons, Lattice Physics, Quantum Gravity, Cosmology, Dark matter and Multiverse theories as well as Finite Element Methods, Nonarchimedean Computers, Atomic regression analysis, Machine Learning and Artificial Intelligence.
Atomic Strings was introduced in 2017-2020 by the author, Professor Sergei Yu. Eremenko (https://www.researchgate.net/profile/Sergei_Eremenko), in papers "Atomic Strings and Fabric of Spacetime", "Atomic Solitons as a New Class of Solitons", "Atomic Machine Learning", and book "Soliton Nature" [1-8]. AStrings are deeply rooted into the theory of Atomic Functions (https://ru.wikipedia.org/w/index.php?oldid=82669103) developed since 1970s by author's teacher Academician NAS of Ukraine Rvachev V.L. (https://ru.wikipedia.org/w/index.php?oldid=83948367), professor Rvachev V.A. and advanced by many followers, notably professor Kravchenko V.F. (https://ru.wikipedia.org/w/index.php?oldid=84521570).
<font color=teal>1. Atomic Function up(x) (introduced in 1971 by V.L.Rvachev and V.A.Rvachev)</font>
End of explanation
############### Atomic String #############
def AString1(x: float) -> float:
res = 1 * (upulse(x/2.0 - 0.5) - 0.5)
return res
############### Atomic String Pulse with width, shift and scale #############
def AStringPulse(t: float, a = 1., b = 0., c = 1., d = 0.) -> float:
x = (t - b)/a
if (x < -1):
res = -0.5
elif (x > 1):
res = 0.5
else:
res = AString1(x)
res = d + res * c
return res
###### Atomic String Applied to list with width, shift and scale #############
def AString(x: list, a = 1., b = 0., c = 1., d = 0.) -> list:
res = []
for i in range(len(x)):
res.append(AStringPulse(x[i], a, b, c, d))
#res[i] = AStringPulse(x[i], a, b, c)
return res
###### Summation of two lists #############
def Sum(x1: list, x2: list) -> list:
res = []
for i in range(len(x1)):
res.append(x1[i] + x2[i])
return res
x = np.arange(-2.0, 2.0, 0.01)
pl.title('Atomic String Function')
pl.plot(x, AString(x, 1.0, 0, 1, 0), label='Atomic String')
pl.grid(True)
pl.show()
Explanation: <font color=teal>2. Atomic String Function (AString) is an Integral and Composing Branch of Atomic Function up(x) (introduced in 2017 by S. Yu. Eremenko)</font>
The AString function is a solitary kink function which is simultaneously the integral and a composing branch of the atomic function up(x)
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
End of explanation
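This kink/integral relationship can be sanity-checked numerically without the lookup table (a sketch — it rebuilds up(x) from its known Fourier transform $\prod_k \operatorname{sinc}(t/2^k)$ and AString by quadrature, so agreement is only to about 3 digits):

```python
import numpy as np

# up(x) on a grid, computed from its known Fourier transform prod_k sin(t/2^k)/(t/2^k)
t = np.linspace(1e-8, 120.0, 12001)
F = np.ones_like(t)
for k in range(1, 26):
    u = t/2.0**k
    F *= np.sin(u)/u
xs = np.linspace(-1.5, 1.5, 301)
ups = np.trapz(np.cos(np.outer(xs, t))*F, t, axis=1)/np.pi

# AString(x) as the cumulative integral of up from 0 to x
dx = xs[1] - xs[0]
astr = np.concatenate(([0.0], np.cumsum((ups[1:] + ups[:-1])/2)*dx))
astr -= np.interp(0.0, xs, astr)          # shift so AString(0) = 0
A = lambda z: np.interp(z, xs, astr)      # np.interp clamps, matching the +/-0.5 saturation
U = lambda z: np.interp(z, xs, ups)

for x in (-0.6, -0.2, 0.0, 0.3, 0.7):
    # the composing identity: up(x) = AString(2x+1) - AString(2x-1)
    assert abs((A(2*x + 1) - A(2*x - 1)) - U(x)) < 1e-2
print("verified: AString(2x+1) - AString(2x-1) = up(x)")
```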
x = np.arange(-2.0, 2.0, 0.01)
#This Calculates Derivative
dx = x[1] - x[0]
dydx = np.gradient(up(x), dx)
pl.plot(x, up(x), label='Atomic Function')
pl.plot(x, AString(x, 1.0, 0, 1, 0), linewidth=2, label='Atomic String Function')
pl.plot(x, dydx, '--', label='A-Function Derivative')
pl.title('Atomic and AString Functions')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Atomic String, Atomic Function (AF) and AF Derivative plotted together
End of explanation
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, up(x), label='Atomic Function', linewidth=2)
pl.plot(x, dydx, '--', label='Atomic Function Derivative', linewidth=1, color="Green")
pl.title('Atomic Function and Its Derivative')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: <font color=teal>3. Properties of Atomic Function Up(x)</font>
3.1. Atomic Function Derivative expressed via Atomic Function itself
Atomic Function Derivative can be exressed via Atomic Function itself - up'(x)= 2up(2x+1)-2up(2x-1) meaning the shape of pulses for derivative function can be represented by shifted and stratched Atomic Function itself - remarkable property
<font color=maroon>up'(x)= 2up(2x+1)-2up(2x-1)</font>
Atomic Function and its Derivative plotted together
End of explanation
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, up(x, 1, -1), '--', linewidth=1, label='Atomic Function at x=-1')
pl.plot(x, up(x, 1, +0), '--', linewidth=1, label='Atomic Function at x=0')
pl.plot(x, up(x, 1, +1), '--', linewidth=1, label='Atomic Function at x=+1')
pl.plot(x, Sum(up(x, 1, -1), Sum(up(x), up(x, 1, 1))), linewidth=2, label='Atomic Function Compounding')
pl.title('Atomic Function Compounding represent 1')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.2. Partition of Unity
The superposition of Atomic Function pulses set at points -2, -1, 0, +1, +2... can exactly represent unity (the number 1):
1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...
<font color=maroon>1 = ... up(x-3) + up(x-2) + up(x-1) + up(x-0) + up(x+1) + up(x+2) + up(x+3) + ...</font>
End of explanation
x = np.arange(-5.0, 5.0, 0.01)
pl.plot(x, up(x), label='Atomic Function', linewidth=2)
#pl.plot(x, dydx, '--', label='Atomic Function Derivative', linewidth=1, color="Green")
pl.title('Atomic Function is compactly supported')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.3. Atomic Function (AF) is a 'finite', 'compactly supported', or 'solitary' function
Like a Spline, Atomic Function (AF) 'compactly supported' not equal to zero only on section |x|<=1
End of explanation
######### Presentation of Atomic Function via Atomic Strings ##########
x = np.arange(-2.0, 2.0, 0.01)
pl.plot(x, AString(x, 1, 0, 1, 0), '--', linewidth=1, label='AString(x)')
pl.plot(x, AString(x, 0.5, -0.5, +1, 0), '--', linewidth=2, label='+AString(2x+1)')
pl.plot(x, AString(x, 0.5, +0.5, -1, 0), '--', linewidth=2, label='-AString(2x-1)')
#pl.plot(x, up(x, 1.0, 0, 1, 0), '--', linewidth=1, label='Atomic Function')
AS2 = Sum(AString(x, 0.5, -0.5, +1, 0), AString(x, 0.5, +0.5, -1, 0))
pl.plot(x, AS2, linewidth=3, label='Up(x) via Strings')
pl.title('Atomic Function as a Combination of AStrings')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 3.4 Atomic Function is a non-analytical function (can not be represented by Taylor's series), but with known Fourier Transformation allowing to exactly calculate AF in certain points, with tabular representation provided in script above.
<font color=teal>4. Properties of Atomic String Function</font>
4.1. AString is not only Integral but also Composing Branch of Atomic Function
<font color=maroon>AString'(x) = AString(2x+1) - AString(2x-1) = up(x)</font>
Astring is a swing-like function - Integral of Atomic Function (AF) which can be expressed via AF itself:
AString(x) = Integral(0,x)(Up(x)) = Up(x/2 - 1/2) - 1/2
<font color=maroon>AString(x) = Integral(0,x)(Up(x)) = Up(x/2 - 1/2) - 1/2</font>
4.2. Atomic Function is a 'solitonic atom' composed from two opposite AStrings
The concept of 'Solitonic Atoms' (bions) composed from opposite kinks is known in soliton theory [3,5].
<font color=maroon>up(x) = AString(2x + 1) - AString(2x - 1)</font>
End of explanation
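The Fourier route mentioned in 3.4 can be checked directly against a few of the tabulated values in the script above (a sketch; simple trapezoid quadrature, so expect roughly 3-digit agreement):

```python
import numpy as np

def up_fourier(x, K=25, T=120.0, n=12001):
    """up(x) via its known Fourier transform F(t) = prod_k sin(t/2^k)/(t/2^k)."""
    t = np.linspace(1e-8, T, n)
    F = np.ones_like(t)
    for k in range(1, K + 1):
        u = t/2.0**k
        F *= np.sin(u)/u
    return np.trapz(np.cos(x*t)*F, t)/np.pi   # up is even, so a cosine transform suffices

# Compare with values from the up_y table in the script above
for x, tab in [(0.0, 1.0), (0.5, 0.5), (0.6, 0.301083436), (0.75, 0.0694444445)]:
    val = up_fourier(x)
    print(x, val, tab)
    assert abs(val - tab) < 5e-3
```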
x = np.arange(-2, 2.0, 0.01)
pl.title('AString and Fabius Functions')
pl.plot(x, AString(x, 0.5, 0.5, 1, 0.5), label='Fabius Function')
pl.plot(x, AString(x, 1, 0, 1, 0), label='AString Function')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 4.3. AStrings and Atomic Solitons
Solitonic mathematical properties of AString and Atomic Functions have been explored in author's paper [2,3] (Eremenko, S.Yu. Atomic solitons as a new class of solitons; 2018; https://www.researchgate.net/publication/329465767). They both satisfy differential equations with shifted arguments which introduce special kind of <b>nonlinearity</b> typical for all mathematical solitons.
AString belong to the class of <b>Solitonic Kinks</b> similar to sine-Gordon, Frenkel-Kontorova, tanh and others. Unlike other kinks, AStrings are truly solitary (compactly-supported) and also have a unique property of composing of both straight-line and solitonic atoms on lattice resembling particle-like properties of solitons.
Atomic Function up(x) is not actually a mathematical soliton, but a complex object composed from summation of two opposite AString kinks, and in solitonic terminology, is called 'solitonic atoms' (like bions).
4.4. All derivatives of AString can be represented via AString itself
<font color=maroon>AString'(x) = AString(2x + 1) - AString(2x - 1)</font>
It means AString is a smooth (infinitely divisible) function, with fractalic properties.
4.5. AString and Fabius Function
The Fabius function https://en.wikipedia.org/wiki/Fabius_function, with the unique property f'(x) = 2f(2x), published in 1966 but probably known since 1935, is a shifted and stretched AString function. The Fabius function is not directly an integral of the atomic function up(x), and its properties of partitioning a line and representing other functions have not been explored as deeply as in the theory of atomic functions, developed over 50 years in hundreds of papers.
<font color=maroon>Fabius(x) = AString(2x - 1) + 0.5</font>
End of explanation
x = np.arange(-3, 3, 0.01)
pl.plot(x, AString(x, 1, -1.0, 1, 0), '--', linewidth=1, label='AString 1')
pl.plot(x, AString(x, 1, +0.0, 1, 0), '--', linewidth=1, label='AString 2')
pl.plot(x, AString(x, 1, +1.0, 1, 0), '--', linewidth=1, label='AString 3')
AS2 = Sum(AString(x, 1, -1.0, 1, 0), AString(x, 1, +0.0, 1, 0))
AS3 = Sum(AS2, AString(x, 1, +1.0, 1, 0))
pl.plot(x, AS3, label='AStrings Sum', linewidth=2)
pl.title('Atomic Strings compose Line')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 4.6. Partition of Line from Atomic String functions
Combination/summation of Atomic Strings can exactly represent a straight line:
x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)...
<font color=maroon>x = ...Astring(x-2) + Astring(x-1) + AString(x) + Astring(x+1) + Astring(x+2)...</font>
Partition based on AString function with width 1 and height 1
End of explanation
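This partition-of-line property can also be verified numerically (a sketch — up(x) is rebuilt from its known Fourier transform and AString by quadrature; kinks beyond the saturation range contribute cancelling ±0.5 plateaus, so a symmetric handful of shifts suffices):

```python
import numpy as np

# up(x) on a grid from its known Fourier transform prod_k sin(t/2^k)/(t/2^k)
t = np.linspace(1e-8, 120.0, 12001)
F = np.ones_like(t)
for k in range(1, 26):
    u = t/2.0**k
    F *= np.sin(u)/u
xs = np.linspace(-1.5, 1.5, 301)
ups = np.trapz(np.cos(np.outer(xs, t))*F, t, axis=1)/np.pi

# AString as the cumulative integral of up, shifted so AString(0) = 0
dx = xs[1] - xs[0]
astr = np.concatenate(([0.0], np.cumsum((ups[1:] + ups[:-1])/2)*dx))
astr -= np.interp(0.0, xs, astr)
A = lambda z: np.interp(z, xs, astr)   # clamping reproduces the +/-0.5 kink saturation

for x in (-0.8, -0.3, 0.0, 0.4, 0.9):
    line_value = sum(A(x - k) for k in range(-3, 4))   # kinks at ..., -1, 0, +1, ...
    assert abs(line_value - x) < 1e-2                  # ...AString(x-1)+AString(x)+AString(x+1)... = x
print("verified: summed AString kinks reproduce the straight line y = x")
```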
x = np.arange(-40.0, 40.0, 0.01)
width = 10.0
height = 10.0
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, AString(x, width, -3*width/2, height, -3*width/2), '--', linewidth=1, label='AString 1')
pl.plot(x, AString(x, width, -1*width/2, height, -1*width/2), '--', linewidth=1, label='AString 2')
pl.plot(x, AString(x, width, +1*width/2, height, +1*width/2), '--', linewidth=1, label='AString 3')
pl.plot(x, AString(x, width, +3*width/2, height, +3*width/2), '--', linewidth=1, label='AString 4')
AS2 = Sum(AString(x, width, -3*width/2, height, -3*width/2), AString(x, width, -1*width/2, height, -1*width/2))
AS3 = Sum(AS2, AString(x, width,+1*width/2, height, +1*width/2))
AS4 = Sum(AS3, AString(x, width,+3*width/2, height, +3*width/2))
pl.plot(x, AS4, label='AStrings Joins', linewidth=2)
pl.title('Atomic Strings Combinations')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: Partition based on AString with certain width and height depending on a size of 'quanta'
End of explanation
x = np.arange(-30.0, 30.0, 0.01)
#pl.plot(x, ABline (x, 1, 0), label='ABLine 1*x')
pl.plot(x, AString(x, 10.0,-15, 10, -15), '--', linewidth=1, label='AString Quantum 1')
pl.plot(x, AString(x, 10.0, -5, 10, -5), '--', linewidth=1, label='AString Quantum 2')
pl.plot(x, AString(x, 10.0, +5, 10, +5), '--', linewidth=1, label='AString Quantum 3')
pl.plot(x, AString(x, 10.0,+15, 10, +15), '--', linewidth=1, label='AString Quantum 4')
AS2 = Sum(AString(x, 10.0, -15, 10, -15), AString(x, 10., -5, 10, -5))
AS3 = Sum(AS2, AString(x, 10, +5, 10, +5))
AS4 = Sum(AS3, AString(x, 10,+15, 10, +15))
pl.plot(x, AS4, label='Spacetime Dimension', linewidth=2)
pl.title('Representing Spacetime by joining of Atomic Strings')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: <font color=teal>5. Model of Spacetime composed from AStrings Quanta (AString Metriants)
5.1. This is an abstract of paper "Atomic Strings and Fabric of Spacetime"
Based on a generalization of well-known atomic function this paper introduces an atomic string (AString) as a swing-like function by joining of which on a periodic lattice it is possible to build one model for absolutely flat and curved spacetime and explain gravitational warping effects due to a unique property of all derivatives being expressed via AString itself. Physically AString may represent a soliton-like elementary string as a spacetime warp/distortion composing atomic quanta in multiple dimensions which can grow, shrink, group into atoms and compose smooth continua, solid and biological matter widespread in nature. This makes AString a good candidate for an elementary string as a fundamental block of matter and spacetime fabric. AString can also be used as a generating axiom for a new non-Archimedean atomic calculus to describe nonlinear metric within cosmic strings and build never overflowing atomic computer with the super fast calculation of derivatives. The AString along with closely related atomic function may find new areas of applications in lattice physics, string theory, relativity, quantum gravity, cosmic strings, multiverse, dark matter, solitons, dislocations, computing, evolution of solid and biological matter, finite element methods, new regression analysis and artificial intelligence classification models.
5.2. Representing spacetime expansion via summation of AStrings quanta
End of explanation
x = np.arange(-50.0, 50.0, 0.1)
dx = x[1] - x[0]
CS6 = Sum(up(x, 5, -30, 5, 5), up(x, 15, 0, 15, 5))
CS6 = Sum(CS6, up(x, 10, +30, 10, 5))
pl.plot(x, CS6, label='Spacetime Density')
IntC6 = np.cumsum(CS6)*dx/50
pl.plot(x, IntC6, label='Spacetime Shape (Geodesics)')
DerC6 = np.gradient(CS6, dx)
pl.plot(x, DerC6, label='Spacetime Curvature')
LightTrajectory = -10 -IntC6/5
pl.plot(x, LightTrajectory, label='Light Trajectory')
pl.title('Shape of Curved Spacetime model')
pl.legend(loc='best', numpoints=1)
pl.grid(True)
pl.show()
Explanation: 5.3. Model of Spacetime curvature and gravity based on AStrings
Schematic model of Gravitation explaining General Relativity effects where spacetime Shape, Density and Curvature are deeply related to each other being expressed via shifts and stretches of the same AString Atomic Function
End of explanation
#pl.rcParams["figure.figsize"] = 16,12
book = pl.imread('BookSpread_small.png')
pl.imshow(book)
Explanation: <font color=teal>6. 'Soliton Nature' book</font>
6.1. The theory is also described in a book 'Soliton Nature'
Soliton Nature book is easy-to-read, pictorial, interactive book which uses beautiful photography, video channel, and computer scripts in R and Python to demonstrate existing and explore new solitons – the magnificent and versatile energy concentration phenomenon of nature. New class of atomic solitons can be used to describe Higgs boson (‘the god particle’) fields, spacetime quanta and other fundamental building blocks of nature.
End of explanation |
10,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
Step1: Background on SSS and Maxwell filtering
Signal-space separation (SSS)
Step2: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling
Step3: But this algorithm is not perfect. For example, it misses MEG 2313,
which has some flux jumps, because there are not enough flux jumps in the
recording. So it can still be useful to manually inspect and mark bad
channels
Step4: After that, performing SSS and Maxwell filtering is done with a
single call to
Step5: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
Step6: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
Explanation: Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
:depth: 2
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and cropping it to save on memory:
End of explanation
fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif')
Explanation: Background on SSS and Maxwell filtering
Signal-space separation (SSS) :footcite:TauluKajola2005,TauluSimola2006
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal components), and components attributable to sources outside
the measurement volume (the external components). The internal and external
components are linearly independent, so it is possible to simply discard the
external components to reduce environmental noise. Maxwell filtering is a
related procedure that omits the higher-order components of the internal
subspace, which are dominated by sensor noise. Typically, Maxwell filtering
and SSS are performed together (in MNE-Python they are implemented together
in a single function).
Like SSP <tut-artifact-ssp>, SSS is a form of projection. Whereas SSP
empirically determines a noise subspace based on data (empty-room recordings,
EOG or ECG activity, etc) and projects the measurements onto a subspace
orthogonal to the noise, SSS mathematically constructs the external and
internal subspaces from spherical harmonics_ and reconstructs the sensor
signals using only the internal subspace (i.e., does an oblique projection).
<div class="alert alert-danger"><h4>Warning</h4><p>Maxwell filtering was originally developed for Elekta Neuromag® systems,
and should be considered *experimental* for non-Neuromag data. See the
Notes section of the :func:`~mne.preprocessing.maxwell_filter` docstring
for details.</p></div>
The MNE-Python implementation of SSS / Maxwell filtering currently provides
the following features:
Basic bad channel detection
(:func:~mne.preprocessing.find_bad_channels_maxwell)
Bad channel reconstruction
Cross-talk cancellation
Fine calibration correction
tSSS
Coordinate frame translation
Regularization of internal components using information theory
Raw movement compensation (using head positions estimated by MaxFilter)
cHPI subtraction (see :func:mne.chpi.filter_chpi)
Handling of 3D (in addition to 1D) fine calibration files
Epoch-based movement compensation as described in
:footcite:TauluKajola2005 through :func:mne.epochs.average_movements
Experimental processing of data from (un-compensated) non-Elekta
systems
Using SSS and Maxwell filtering in MNE-Python
For optimal use of SSS with data from Elekta Neuromag® systems, you should
provide the path to the fine calibration file (which encodes site-specific
information about sensor orientation and calibration) as well as a crosstalk
compensation file (which reduces interference between Elekta's co-located
magnetometer and paired gradiometer sensor units).
End of explanation
raw.info['bads'] = []
raw_check = raw.copy().pick_types(exclude=()).filter(None, 40)
auto_noisy_chs, auto_flat_chs = mne.preprocessing.find_bad_channels_maxwell(
raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset
raw.info['bads'].extend(auto_noisy_chs + auto_flat_chs)
Explanation: Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
<div class="alert alert-danger"><h4>Warning</h4><p>It is critical to mark bad channels in ``raw.info['bads']`` *before*
calling :func:`~mne.preprocessing.maxwell_filter` in order to prevent
bad channel noise from spreading.</p></div>
Let's see if we can automatically detect it. To do this we need to
operate on a signal without line noise or cHPI signals, which is most
easily achieved using :func:mne.chpi.filter_chpi,
:func:mne.io.Raw.notch_filter, or :meth:mne.io.Raw.filter. For simplicity
we just low-pass filter these data:
End of explanation
raw.info['bads'] += ['MEG 2313'] # from manual inspection
Explanation: But this algorithm is not perfect. For example, it misses MEG 2313,
which has some flux jumps, because there are not enough flux jumps in the
recording. So it can still be useful to manually inspect and mark bad
channels:
End of explanation
raw_sss = mne.preprocessing.maxwell_filter(
raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True)
Explanation: After that, performing SSS and Maxwell filtering is done with a
single call to :func:~mne.preprocessing.maxwell_filter, with the crosstalk
and fine calibration filenames provided (if available):
End of explanation
raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True)
Explanation: To see the effect, we can plot the data before and after SSS / Maxwell
filtering.
End of explanation
head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces')
Explanation: Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation <tut-bad-channels>.
The heartbeat artifact has also been substantially reduced.
The :func:~mne.preprocessing.maxwell_filter function has parameters
int_order and ext_order for setting the order of the spherical
harmonic expansion of the interior and exterior components; the default
values are appropriate for most use cases. Additional parameters include
coord_frame and origin for controlling the coordinate frame ("head"
or "meg") and the origin of the sphere; the defaults are appropriate for most
studies that include digitization of the scalp surface / electrodes. See the
documentation of :func:~mne.preprocessing.maxwell_filter for details.
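As a rough sanity check on those defaults, the size of each basis follows directly from the harmonic expansion: an order-$L$ expansion contributes $\sum_{l=1}^{L}(2l+1)$ components. A pure-Python sketch of that arithmetic (illustrative only; this is not MNE code):

```python
# Count of spherical-harmonic basis components for a given expansion
# order: sum over l = 1..order of (2*l + 1). With the defaults
# int_order=8 and ext_order=3 this gives 80 internal and 15 external
# components.
def n_moments(order):
    return sum(2 * l + 1 for l in range(1, order + 1))

print(n_moments(8), n_moments(3))  # -> 80 15
```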
Spatiotemporal SSS (tSSS)
An assumption of SSS is that the measurement volume (the spherical shell
where the sensors are physically located) is free of electromagnetic sources.
The thickness of this source-free measurement shell should be 4-8 cm for SSS
to perform optimally. In practice, there may be sources falling within that
measurement volume; these can often be mitigated by using Spatiotemporal
Signal Space Separation (tSSS) :footcite:TauluSimola2006.
tSSS works by looking for temporal
correlation between components of the internal and external subspaces, and
projecting out any components that are common to the internal and external
subspaces. The projection is done in an analogous way to
SSP <tut-artifact-ssp>, except that the noise vector is computed
across time points instead of across sensors.
To use tSSS in MNE-Python, pass a time (in seconds) to the parameter
st_duration of :func:~mne.preprocessing.maxwell_filter. This will
determine the "chunk duration" over which to compute the temporal projection.
The chunk duration effectively acts as a high-pass filter with a cutoff
frequency of $\frac{1}{\mathtt{st\_duration}}~\mathrm{Hz}$; this
effective high-pass has an important consequence:
In general, larger values of st_duration are better (provided that your
computer has sufficient memory) because larger values of st_duration
will have a smaller effect on the signal.
If the chunk duration does not evenly divide your data length, the final
(shorter) chunk will be added to the prior chunk before filtering, leading
to slightly different effective filtering for the combined chunk (the
effective cutoff frequency differing at most by a factor of 2). If you need
to ensure identical processing of all analyzed chunks, either:
choose a chunk duration that evenly divides your data length (only
recommended if analyzing a single subject or run), or
include at least 2 * st_duration of post-experiment recording time at
the end of the :class:~mne.io.Raw object, so that the data you intend to
further analyze is guaranteed not to be in the final or penultimate chunks.
Additional parameters affecting tSSS include st_correlation (to set the
correlation value above which correlated internal and external components
will be projected out) and st_only (to apply only the temporal projection
without also performing SSS and Maxwell filtering). See the docstring of
:func:~mne.preprocessing.maxwell_filter for details.
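The effective high-pass cutoff mentioned above is simply the reciprocal of the chunk duration. A small sketch of that arithmetic, with example durations only (the commented call just indicates where ``st_duration`` would be passed):

```python
# Effective high-pass cutoff implied by the tSSS chunk duration:
# cutoff (Hz) = 1 / st_duration (s). The durations below are examples.
for st_duration in (4.0, 10.0, 20.0):
    cutoff_hz = 1.0 / st_duration
    print(f'st_duration={st_duration:5.1f} s -> ~{cutoff_hz:.3f} Hz effective high-pass')
# e.g. raw_tsss = mne.preprocessing.maxwell_filter(raw, st_duration=10.)
```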
Movement compensation
If you have information about subject head position relative to the sensors
(i.e., continuous head position indicator coils, or :term:cHPI <HPI>), SSS
can take that into account when projecting sensor data onto the internal
subspace. Head position data can be computed using
:func:mne.chpi.compute_chpi_locs and :func:mne.chpi.compute_head_pos,
or loaded with the :func:mne.chpi.read_head_pos function. The
example data <sample-dataset> doesn't include cHPI, so here we'll
load a :file:.pos file used for testing, just to demonstrate:
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive time series with time slice retrieval
This notebook shows you how to use interactive plots to select time series for different locations and retrieve the imagery that corresponds with different points on a time series
Step1: retrieve the NBAR and PQ for the spatiotemporal range of interest
Step2: Plotting an image and selecting a location to retrieve a time series
Step3: Click on an interactive time series and pull back an image that corresponds with a point on the time series
%pylab notebook
from __future__ import print_function
import datacube
import xarray as xr
from datacube.storage import masking
from datacube.storage.masking import mask_to_dict
from matplotlib import pyplot as plt
from IPython.display import display
import ipywidgets as widgets
dc = datacube.Datacube(app='Interactive time series analysis')
#### DEFINE SPATIOTEMPORAL RANGE AND BANDS OF INTEREST
#Use this to manually define an upper left/lower right coords
#Define temporal range
start_of_epoch = '2013-01-01'
end_of_epoch = '2016-12-31'
#Define wavelengths/bands of interest, remove this kwarg to retrieve all bands
bands_of_interest = [#'blue',
'green',
#'red',
'nir',
'swir1',
#'swir2'
]
#Define sensors of interest
sensors = ['ls8']#, 'ls7', 'ls5']
query = {'time': (start_of_epoch, end_of_epoch)}
lat_max = -17.42
lat_min = -17.45
lon_max = 140.90522
lon_min = 140.8785
query['x'] = (lon_min, lon_max)
query['y'] = (lat_max, lat_min)
query['crs'] = 'EPSG:4326'
print(query)
Explanation: Interactive time series with time slice retrieval
This notebook shows you how to use interactive plots to select time series for different locations and retrieve the imagery that corresponds with different points on a time series
End of explanation
#Define which pixel quality artefacts you want removed from the results
mask_components = {'cloud_acca':'no_cloud',
'cloud_shadow_acca' :'no_cloud_shadow',
'cloud_shadow_fmask' : 'no_cloud_shadow',
'cloud_fmask' :'no_cloud',
'blue_saturated' : False,
'green_saturated' : False,
'red_saturated' : False,
'nir_saturated' : False,
'swir1_saturated' : False,
'swir2_saturated' : False,
'contiguous':True}
#Retrieve the NBAR and PQ data for sensor n
sensor_clean = {}
for sensor in sensors:
#Load the NBAR and corresponding PQ
sensor_nbar = dc.load(product= sensor+'_nbar_albers', group_by='solar_day', measurements = bands_of_interest, **query)
sensor_pq = dc.load(product= sensor+'_pq_albers', group_by='solar_day', **query)
#grab the projection info before masking/sorting
crs = sensor_nbar.crs
crswkt = sensor_nbar.crs.wkt
affine = sensor_nbar.affine
#This line is to make sure there's PQ to go with the NBAR
sensor_nbar = sensor_nbar.sel(time = sensor_pq.time)
#Apply the PQ masks to the NBAR
cloud_free = masking.make_mask(sensor_pq, **mask_components)
good_data = cloud_free.pixelquality.loc[start_of_epoch:end_of_epoch]
sensor_nbar = sensor_nbar.where(good_data)
sensor_clean[sensor] = sensor_nbar
Explanation: retrieve the NBAR and PQ for the spatiotemporal range of interest
End of explanation
#select time slice of interest - this is trial and error until you get a decent image
time_slice_i = 140
rgb = sensor_clean['ls8'].isel(time =time_slice_i).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
#rgb = nbar_clean.isel(time =time_slice).to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
fake_saturation = 4500
clipped_visible = rgb.where(rgb<fake_saturation).fillna(fake_saturation)
max_val = clipped_visible.max(['y', 'x'])
scaled = (clipped_visible / max_val)
#Click on this image to chose the location for time series extraction
w = widgets.HTML("Event information appears here when you click on the figure")
def callback(event):
global x, y
x, y = int(event.xdata + 0.5), int(event.ydata + 0.5)
w.value = 'X: {}, Y: {}'.format(x,y)
fig = plt.figure(figsize =(12,6))
#plt.scatter(x=trans.coords['x'], y=trans.coords['y'], c='r') #turn this on or off to show location of transect
plt.imshow(scaled, interpolation = 'nearest',
extent=[scaled.coords['x'].min(), scaled.coords['x'].max(),
scaled.coords['y'].min(), scaled.coords['y'].max()])
fig.canvas.mpl_connect('button_press_event', callback)
date_ = sensor_clean['ls8'].time[time_slice_i]
plt.title(date_.astype('datetime64[D]'))
plt.show()
display(w)
#this converts the map x coordinate into image x coordinates
image_coords = ~affine * (x, y)
imagex = int(image_coords[0])
imagey = int(image_coords[1])
#retrieve the time series that corresponds with the location clicked, and drop the no data values
green_ls8 = sensor_clean['ls8'].green.isel(x=[imagex],y=[imagey]).dropna('time', how = 'any')
Explanation: Plotting an image and selecting a location to retrieve a time series
End of explanation
#Use this plot to visualise a time series and select the image that corresponds with a point in the time series
def callback(event):
global time_int, devent
devent = event
time_int = event.xdata
#time_int_ = time_int.astype(datetime64[D])
w.value = 'time_int: {}'.format(time_int)
fig = plt.figure(figsize=(10,5))
fig.canvas.mpl_connect('button_press_event', callback)
plt.show()
display(w)
green_ls8.plot(linestyle= '--', c= 'b', marker = '8', mec = 'b', mfc ='r')
plt.grid()
time_slice = matplotlib.dates.num2date(time_int).date()
rgb2 = sensor_clean['ls8'].sel(time =time_slice, method = 'nearest').to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
fake_saturation = 6000
clipped_visible = rgb2.where(rgb2<fake_saturation).fillna(fake_saturation)
max_val = clipped_visible.max(['y', 'x'])
scaled2 = (clipped_visible / max_val)
#This image shows the time slice of choice and the location of the time series
fig = plt.figure(figsize =(12,6))
#plt.scatter(x=trans.coords['x'], y=trans.coords['y'], c='r')
plt.scatter(x = [x], y = [y], c= 'yellow', marker = 'D')
plt.imshow(scaled2, interpolation = 'nearest',
extent=[scaled.coords['x'].min(), scaled.coords['x'].max(),
scaled.coords['y'].min(), scaled.coords['y'].max()])
plt.title(time_slice)
plt.show()
Explanation: Click on an interactive time series and pull back an image that corresponds with a point on the time series
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyData.Tokyo Tutorial & Hackathon #1
In addition to its monthly meetups for intermediate and advanced users, PyData.Tokyo runs tutorial events aimed at training beginners. This event focuses on the following topics:
Loading data
Data preprocessing and cleaning
Aggregation and statistical analysis
Data visualization
Building classification models with machine learning
Validating model classification results
The goal of this tutorial is to develop practical skills by coding with real data. The case study is building a survivor-prediction model from the Titanic passenger data. By training machine learning algorithms on passengers' age, sex, and other attributes, even beginners can predict survivors with close to 80% accuracy.
Event details: http
Step1: 2. Importing libraries and preparing the data
First, let's import the libraries we need.
Step2: Load the two datasets into pandas DataFrames. train.csv is the training data (labeled data), which includes each passenger's survival status. test.csv is the test data for predicting survival and submitting to Kaggle, so it carries no survival labels.
Step3: Let's inspect the two datasets. You can see that only df_train has the survival label (Survived).
Step4: 3. Predicting survivors with the gender model and evaluating the predictions
The data analysis in the first half of the tutorial showed that women had a higher survival probability than men. As the simplest possible model, let's consider one that predicts survival from sex alone (the gender model).
Selecting the data to use
Extract the sex column and the survival labels from the training data. By convention the features are called x and the targets y; here the feature is sex and the target is survival. When extracting a single feature (sex) we use lowercase x, which denotes a vector; with two or more features we use uppercase X, which denotes a matrix. Uppercase X appears later.
Step5: Prediction with the gender model
Predict survivors with the gender model, which assumes that all women survived (1) and all men died (0). The pred in y_pred is short for prediction. Let's compute it with pandas map.
Step6: Evaluating the predictions
Let's evaluate the predictions. First we compute the accuracy, using accuracy_score.
Step7: We obtained an accuracy of 78.7%. Understanding the data and forming a hypothesis yields high accuracy even from a simple model. Kaggle competitions use different metrics, but the Titanic competition is scored on accuracy.<br>
Other metrics are easy to compute with scikit-learn. Let's compute Precision, Recall, and F1-score with classification_report.
Step8: The confusion matrix is very helpful for understanding prediction results. Let's compute it with scikit-learn's [confusion_matrix](http
Step9: Predicting survivors from the test data
Exercise
Just as with the training data, predict survivors from the test data to be submitted to Kaggle.
Sample solution
Step10: Creating the submission file for Kaggle
Create a CSV file of the predictions to submit to Kaggle. The file needs to contain PassengerId and Survived (the predicted survival). Build a DataFrame for the submission with pandas and save it in CSV format with to_csv.
Step11: Submit the resulting kaggle_gendermodel.csv to Kaggle and check your score and ranking! You are all Kagglers now!
4. Predicting survivors with logistic regression
We will learn to use the machine learning algorithms implemented in scikit-learn, starting with the most basic linear model.
Selecting the data to use
The gender model used only the sex information; let's add other features to raise the accuracy. The analysis in part one of the tutorial showed that survival is higher for women, for younger passengers, and for higher cabin classes. We use this as our hypothesis. In addition to sex, choose age (Age) and cabin class (Pclass) as features.
Step12: Inspect the feature DataFrame.
Step13: Age has missing values. With a sufficiently large training set we could simply drop them, but since this training set is not very large we fill them in. Part one of the tutorial introduced several imputation techniques; here we use the overall mean.
Step14: Also, Sex holds the values male and female, but scikit-learn cannot handle such categorical data, so we must convert female and male to numbers. We map female to 0 and male to 1 and create a new Gender column.
Step15: Next we create a new feature (Pclass_Gender) expressing the hypothesis that women (Gender=0) in higher classes (Pclass=1) are more likely to survive. The smaller Pclass_Gender is, the higher the survival rate.
Step16: This time we use just two features, Pclass_Gender and Age. Drop the features we no longer need with drop.
Step17: Let's visualize the data to check whether the hypothesis "younger, female, and higher-class passengers survive more" holds. The horizontal axis is age and the vertical axis is Pclass_Gender.
Step18: How does it look? The hypothesis holds: survivors cluster in the lower left of the plot.
Splitting the training data
In machine learning, if all the data is used for training, the model cannot be evaluated properly, so we split the data into a training set and a validation set. This is easy with scikit-learn's train_test_split. Here we use 80% of the data for training and 20% for validation. The val in x_val and y_val is short for validation.
Step19: Prediction with logistic regression
We use logistic regression (LogisticRegression), a linear model. clf is short for classifier.
Step20: Use the training split created above.
Step21: Training is now complete; with this little data it finishes in an instant.<br>Next we predict survivors, which is just as easy. Predict on both the training split and the validation split created earlier.
Step22: Let's evaluate the results.
Step23: Let's check what role logistic regression plays, visualizing it with matplotlib.
Step24: Based on the given features, logistic regression determines the boundary (the dashed line in the plot) between passengers who survived and those who did not. This boundary is called a hyperplane or a decision boundary, and finding it can be said to be the goal of machine-learning classification. Different algorithms find the boundary differently and give different results. Let's compare with SVMs (Support Vector Machines), widely used across machine learning. We omit the details of the algorithms.
Step25: Overfitting
As the plot above shows, the SVC with RBF kernel can produce complex boundaries, but this is not always good: performance can become high on the training data while dropping on the validation data. This is called overfitting, and the more complex the algorithm, the more caution it requires.
Step26: 5. Cross-validation
Exercise
We explained that the data is split into training and validation sets to evaluate the model, but do the results stay the same when the split changes? Vary the random_state passed to train_test_split and check for yourself.
Sample solution
Step27: The results differ depending on which part of the data is used for training. Cross-validation is a technique that addresses this. Here we use K-fold cross-validation: the data is split into K parts, K-1 of which are used for training and one for evaluation, repeating K times and averaging the results. For example, 5-fold cross-validation builds five datasets, each holding 20% of the samples, and trains on 80% of the samples and evaluates on 20% five times.
Step28: This is also implemented in scikit-learn as cross_validation. Let's define K-fold cross-validation as a function and run it.
Step29: Using three or more features
With three or more features we can train in the same way and obtain a hyperplane.
Step30: Exercise
As before, fill the missing ages with the mean and encode sex as a number.
Sample solution
Step31: Sex can also be encoded with scikit-learn's LabelEncoder.
Step32: One-hot Encoding
Like Sex, the port of embarkation (Embarked) cannot be used as-is and must be converted to numbers, but it takes three values: S, C, and Q. In such cases we create new features with a technique called one-hot encoding (also known as one-of-K encoding), using pandas get_dummies.
Step33: Drop the features we no longer need.
Step34: Evaluate with logistic regression plus cross-validation.
Step35: The score improved!
6. Predicting survivors with a decision tree
Decision trees are among the most widely used machine-learning methods. They are easy to interpret because the factors behind each classification can be explained as a tree structure.
Step36: Exercise
Change the decision tree's parameters and compare the scores.
Sample solution
Step37: 7. Grid search
Grid search is a convenient feature that varies a classifier's parameters over specified ranges and finds the combination with the best score.
Step38: Check the best score and parameter combination.
Step39: You can also inspect all of the results.
Step40: Run the prediction with the best parameter combination.
Step41: Create the CSV file for submission to Kaggle and check the results.
from IPython.display import Image
Image(url='http://graphics8.nytimes.com/images/section/learning/general/onthisday/big/0415_big.gif')
Explanation: PyData.Tokyo Tutorial & Hackathon #1
In addition to its monthly meetups for intermediate and advanced users, PyData.Tokyo runs tutorial events aimed at training beginners. This event focuses on the following topics:
Loading data
Data preprocessing and cleaning
Aggregation and statistical analysis
Data visualization
Building classification models with machine learning
Validating model classification results
The goal of this tutorial is to develop practical skills by coding with real data. The case study is building a survivor-prediction model from the Titanic passenger data. By training machine learning algorithms on passengers' age, sex, and other attributes, even beginners can predict survivors with close to 80% accuracy.
Event details: http://pydatatokyo.connpass.com/event/11860/
Tutorial repository: https://github.com/PyDataTokyo/pydata-tokyo-tutorial-1
Twitter: @PyDataTokyo
Tutorial Part 2: "Machine Learning"
Goals of Part 2
In Part 2 of the tutorial we use scikit-learn, Python's machine learning library, to learn two things:
- Building classification models with machine learning
- Validating classification results
Packages used
Python 3.6.0
IPython 5.1.0
numpy 1
pandas 0.15.2
matplotlib 1.4.3
scikit-learn 0.15.2
Data used
Titanic passenger data: Titanic: Machine Learning from Disaster
* A Kaggle account is required to download the data.
Instructor
PyData.Tokyo organizer Hideki Tanaka (@atelierhide)
Discovered the appeal of Python x Data in Silicon Valley, then became interested in deep learning; speaking at PyCon JP 2014 led to starting PyData.Tokyo. Alongside work as an optical design engineer for camera lenses, he takes part in the Marsface Project (@marsfaceproject), which searches for structures on the surfaces of Mars and other solar-system planets using image recognition.
Agenda
Background
Importing libraries and preparing the data
Predicting survivors with the gender model and evaluating the predictions
Predicting survivors with logistic regression
Cross-validation
Predicting survivors with a decision tree
Grid search
1. Background - the sinking of the Titanic
On April 15, 1912, the Titanic sank on her maiden voyage after striking an iceberg. It was a disaster in which 1502 of the 2224 passengers and crew lost their lives.
One reason the sinking claimed so many victims is that not enough lifeboats were available. Luck certainly played a large part in survival, but patterns appear among the survivors: for example, women and children (whom the men helped first) and upper-class passengers tended to have higher survival rates.
End of explanation
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split, cross_val_score, KFold, GridSearchCV
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from IPython.display import Image
# Configure pandas
pd.set_option('chained_assignment', None)
# Set the matplotlib style; this makes the plots look a little nicer
plt.style.use('ggplot')
plt.rc('xtick.major', size=0)
plt.rc('ytick.major', size=0)
Explanation: 2. Importing libraries and preparing the data
First, let's import the libraries we need.
End of explanation
df_train = pd.read_csv('data/train.csv')
df_test = pd.read_csv('data/test.csv')
Explanation: Load the two datasets into pandas DataFrames. train.csv is the training data (labeled data), which includes each passenger's survival status. test.csv is the test data for predicting survival and submitting to Kaggle, so it carries no survival labels.
End of explanation
df_train.tail()
df_test.tail()
Explanation: Let's inspect the two datasets. You can see that only df_train has the survival label (Survived).
End of explanation
x = df_train['Sex']
y = df_train['Survived']
Explanation: 3. Predicting survivors with the gender model and evaluating the predictions
The data analysis in the first half of the tutorial showed that women had a higher survival probability than men. As the simplest possible model, let's consider one that predicts survival from sex alone (the gender model).
Selecting the data to use
Extract the sex column and the survival labels from the training data. By convention the features are called x and the targets y; here the feature is sex and the target is survival. When extracting a single feature (sex) we use lowercase x, which denotes a vector; with two or more features we use uppercase X, which denotes a matrix. Uppercase X appears later.
End of explanation
y_pred = x.map({'female': 1, 'male': 0}).astype(int)
Explanation: Prediction with the gender model
Predict survivors with the gender model, which assumes that all women survived (1) and all men died (0). The pred in y_pred is short for prediction. Let's compute it with pandas map.
End of explanation
print('Accuracy: {:.3f}'.format(accuracy_score(y, y_pred)))
Explanation: Evaluating the predictions
Let's evaluate the predictions. First we compute the accuracy, using accuracy_score.
End of explanation
print(classification_report(y, y_pred))
Explanation: We obtained an accuracy of 78.7%. Understanding the data and forming a hypothesis yields high accuracy even from a simple model. Kaggle competitions use different metrics, but the Titanic competition is scored on accuracy.<br>
Other metrics are easy to compute with scikit-learn. Let's compute Precision, Recall, and F1-score with classification_report.
End of explanation
cm = confusion_matrix(y, y_pred)
print(cm)
def plot_confusion_matrix(cm):
fig, ax = plt.subplots()
im = ax.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
ax.set_title('Confusion Matrix')
fig.colorbar(im)
target_names = ['not survived', 'survived']
tick_marks = np.arange(len(target_names))
ax.set_xticks(tick_marks)
ax.set_xticklabels(target_names, rotation=45)
ax.set_yticks(tick_marks)
ax.set_yticklabels(target_names)
ax.set_ylabel('True label')
ax.set_xlabel('Predicted label')
fig.tight_layout()
plot_confusion_matrix(cm)
Explanation: The confusion matrix is very helpful for understanding prediction results. Let's compute it with scikit-learn's [confusion_matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) and visualize the result with matplotlib.
End of explanation
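To make the counts behind the confusion matrix concrete, here is a by-hand tally on toy labels (made-up values, independent of the Titanic data; 1 = survived, 0 = not survived):

```python
# By-hand confusion-matrix counts and accuracy on toy labels
# (illustrative values only; layout matches scikit-learn: rows = true,
# columns = predicted, for labels [0, 1]).
y_true = [1, 0, 1, 1, 0, 0]
y_hat = [1, 0, 0, 1, 0, 1]
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_hat))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_hat))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_hat))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_hat))
accuracy = (tp + tn) / len(y_true)
print([[tn, fp], [fn, tp]], round(accuracy, 3))  # -> [[2, 1], [1, 2]] 0.667
```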
x_test = df_test['Sex']
y_test_pred = x_test.map({'female': 1, 'male': 0}).astype(int)
Explanation: Predicting survivors from the test data
Exercise
Just as with the training data, predict survivors from the test data to be submitted to Kaggle.
Sample solution
End of explanation
df_kaggle = pd.DataFrame({'PassengerId': df_test['PassengerId'], 'Survived':np.array(y_test_pred)})
df_kaggle.to_csv('kaggle_gendermodel.csv', index=False)
df_kaggle.head()
Explanation: Creating the submission file for Kaggle
Create a CSV file of the predictions to submit to Kaggle. The file needs to contain PassengerId and Survived (the predicted survival). Build a DataFrame for the submission with pandas and save it in CSV format with to_csv.
End of explanation
X = df_train[['Age', 'Pclass', 'Sex']]
y = df_train['Survived']
Explanation: Submit the resulting kaggle_gendermodel.csv to Kaggle and check your score and ranking! You are all Kagglers now!
4. Predicting survivors with logistic regression
We will learn to use the machine learning algorithms implemented in scikit-learn, starting with the most basic linear model.
Selecting the data to use
The gender model used only the sex information; let's add other features to raise the accuracy. The analysis in part one of the tutorial showed that survival is higher for women, for younger passengers, and for higher cabin classes. We use this as our hypothesis. In addition to sex, choose age (Age) and cabin class (Pclass) as features.
End of explanation
X.tail()
Explanation: Inspect the feature DataFrame.
End of explanation
X['AgeFill'] = X['Age'].fillna(X['Age'].mean())
X = X.drop(['Age'], axis=1)
Explanation: Age has missing values. With a sufficiently large training set we could simply drop them, but since this training set is not very large we fill them in. Part one of the tutorial introduced several imputation techniques; here we use the overall mean.
End of explanation
X['Gender'] = X['Sex'].map({'female': 0, 'male': 1}).astype(int)
X.tail()
Explanation: Also, Sex holds the values male and female, but scikit-learn cannot handle such categorical data, so we must convert female and male to numbers. We map female to 0 and male to 1 and create a new Gender column.
End of explanation
X['Pclass_Gender'] = X['Pclass'] + X['Gender']
X.tail()
Explanation: Next we create a new feature (Pclass_Gender) expressing the hypothesis that women (Gender=0) in higher classes (Pclass=1) are more likely to survive. The smaller Pclass_Gender is, the higher the survival rate.
End of explanation
X = X.drop(['Pclass', 'Sex', 'Gender'], axis=1)
X.head()
Explanation: This time we use just two features, Pclass_Gender and Age. Drop the features we no longer need with drop.
End of explanation
np.random.seed = 0
xmin, xmax = -5, 85
ymin, ymax = 0.5, 4.5
index_survived = y[y==0].index
index_notsurvived = y[y==1].index
fig, ax = plt.subplots()
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
sc = ax.scatter(X.loc[index_survived, 'AgeFill'],
X.loc[index_survived, 'Pclass_Gender']+(np.random.rand(len(index_survived))-0.5)*0.1,
color='r', label='Not Survived', alpha=0.3)
sc = ax.scatter(X.loc[index_notsurvived, 'AgeFill'],
X.loc[index_notsurvived, 'Pclass_Gender']+(np.random.rand(len(index_notsurvived))-0.5)*0.1,
color='b', label='Survived', alpha=0.3)
ax.set_xlabel('AgeFill')
ax.set_ylabel('Pclass_Gender')
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
ax.legend(bbox_to_anchor=(1.4, 1.03))
plt.show()
Explanation: Let's visualize the data to check whether the hypothesis "younger, female, and higher-class passengers survive more" holds. The horizontal axis is age and the vertical axis is Pclass_Gender.
End of explanation
X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.8, random_state=1)
X_train
print('Num of Training Samples: {}'.format(len(X_train)))
print('Num of Validation Samples: {}'.format(len(X_val)))
Explanation: How does it look? The hypothesis holds: survivors cluster in the lower left of the plot.
Splitting the training data
In machine learning, if all the data is used for training, the model cannot be evaluated properly, so we split the data into a training set and a validation set. This is easy with scikit-learn's train_test_split. Here we use 80% of the data for training and 20% for validation. The val in x_val and y_val is short for validation.
End of explanation
clf = LogisticRegression()
Explanation: Prediction with logistic regression
We use logistic regression (LogisticRegression), a linear model. clf is short for classifier.
End of explanation
clf.fit(X_train, y_train)
Explanation: Use the training split created above.
End of explanation
y_train_pred = clf.predict(X_train)
y_val_pred = clf.predict(X_val)
Explanation: Training is now complete; with this little data it finishes in an instant.<br>Next we predict survivors, which is just as easy. Predict on both the training split and the validation split created earlier.
End of explanation
print('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))
print('Accuracy on Validation Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))
cm = confusion_matrix(y_val, y_val_pred)
print(cm)
plot_confusion_matrix(cm)
Explanation: Let's evaluate the results.
End of explanation
h = 0.02
xmin, xmax = -5, 85
ymin, ymax = 0.5, 4.5
xx, yy = np.meshgrid(np.arange(xmin, xmax, h), np.arange(ymin, ymax, h))
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
fig, ax = plt.subplots()
levels = np.linspace(0, 1.0, 5)
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
contour = ax.contourf(xx, yy, Z, cmap=cm, levels=levels, alpha=0.8)
ax.scatter(X_train.iloc[:, 0], X_train.iloc[:, 1]+(np.random.rand(len(X_train))-0.5)*0.1, c=y_train, cmap=cm_bright)
ax.scatter(X_val.iloc[:, 0], X_val.iloc[:, 1]+(np.random.rand(len(X_val))-0.5)*0.1, c=y_val, cmap=cm_bright, alpha=0.5)
ax.set_xlabel('AgeFill')
ax.set_ylabel('Pclass_Gender')
ax.set_xlim(xmin, xmax)
ax.set_ylim(ymin, ymax)
fig.colorbar(contour)
x1 = xmin
x2 = xmax
y1 = -1*(clf.intercept_[0]+clf.coef_[0][0]*xmin)/clf.coef_[0][1]
y2 = -1*(clf.intercept_[0]+clf.coef_[0][0]*xmax)/clf.coef_[0][1]
ax.plot([x1, x2] ,[y1, y2], 'k--')
plt.show()
Explanation: Let's check what role logistic regression plays, visualizing it with matplotlib.
End of explanation
clf_log = LogisticRegression()
clf_svc_lin = SVC(kernel='linear', probability=True)
clf_svc_rbf = SVC(kernel='rbf', probability=True)
titles = ['Logistic Regression', 'SVC with Linear Kernel', 'SVC with RBF Kernel',]
h = 0.02
xmin, xmax = -5, 85
ymin, ymax = 0.5, 4.5
xx, yy = np.meshgrid(np.arange(xmin, xmax, h), np.arange(ymin, ymax, h))
fig, axes = plt.subplots(1, 3, figsize=(12,4))
levels = np.linspace(0, 1.0, 5)
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
for i, clf in enumerate((clf_log, clf_svc_lin, clf_svc_rbf)):
clf.fit(X, y)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
axes[i].contourf(xx, yy, Z, cmap=cm, levels=levels, alpha=0.8)
axes[i].scatter(X.iloc[:, 0], X.iloc[:, 1], c=y, cmap=cm_bright)
axes[i].set_title(titles[i])
axes[i].set_xlabel('AgeFill')
axes[i].set_ylabel('Pclass_Gender')
axes[i].set_xlim(xmin, xmax)
axes[i].set_ylim(ymin, ymax)
fig.tight_layout()
Explanation: Based on the given features, logistic regression determines the boundary (the dashed line in the plot) between passengers who survived and those who did not. This boundary is called a hyperplane or a decision boundary, and finding it can be said to be the goal of machine-learning classification. Different algorithms find the boundary differently and give different results. Let's compare with SVMs (Support Vector Machines), widely used across machine learning. We omit the details of the algorithms.
End of explanation
clf = SVC(kernel='rbf', probability=True)
clf.fit(X_train, y_train)
y_train_pred = clf.predict(X_train)
y_val_pred = clf.predict(X_val)
print('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))
print('Accuracy on Validation Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))
Explanation: Overfitting
As the plot above shows, the SVC with RBF kernel can produce complex boundaries, but this is not always good: performance can become high on the training data while dropping on the validation data. This is called overfitting, and the more complex the algorithm, the more caution it requires.
End of explanation
X_train, X_val, y_train, y_val = train_test_split(X, y, train_size=0.8, random_state=33)
clf = LogisticRegression()
clf.fit(X_train, y_train)
y_train_pred = clf.predict(X_train)
y_val_pred = clf.predict(X_val)
print('Accuracy on Training Set: {:.3f}'.format(accuracy_score(y_train, y_train_pred)))
print('Accuracy on Test Set: {:.3f}'.format(accuracy_score(y_val, y_val_pred)))
Explanation: 5. Cross-validation
Exercise
We explained that the data is split into training and validation sets to evaluate the model, but do the results stay the same when the split changes? Vary the random_state passed to train_test_split and check for yourself.
Sample solution
End of explanation
Image(url='http://scott.fortmann-roe.com/docs/docs/MeasuringError/crossvalidation.png')
Explanation: The results differ depending on which part of the data is used for training. Cross-validation is a technique that addresses this. Here we use K-fold cross-validation: the data is split into K parts, K-1 of which are used for training and one for evaluation, repeating K times and averaging the results. For example, 5-fold cross-validation builds five datasets, each holding 20% of the samples, and trains on 80% of the samples and evaluates on 20% five times.
End of explanation
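The fold construction described above can be sketched in pure Python with contiguous folds over toy indices (scikit-learn's KFold additionally supports shuffling; this sketch is only illustrative):

```python
# Pure-Python sketch of K-fold index splitting: each fold serves once as
# the validation set while the remaining folds form the training set.
n_samples, k = 10, 5
fold_size = n_samples // k
folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]
for i, val_idx in enumerate(folds):
    train_idx = [j for j in range(n_samples) if j not in val_idx]
    print(f'fold {i}: val={val_idx} train size={len(train_idx)}')
```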
def cross_val(clf, X, y, K=5, random_state=0):
cv = KFold(K, shuffle=True, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cv)
return scores
cv = KFold(5, shuffle=True, random_state=0)
cv
clf = LogisticRegression()
scores = cross_val(clf, X, y)
print('Scores:', scores)
print('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))
Explanation: This is also implemented in scikit-learn as cross_validation. Let's define K-fold cross-validation as a function and run it.
End of explanation
X = df_train[['Age', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']]
y = df_train['Survived']
X_test = df_test[['Age', 'Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']]
X.tail()
Explanation: Using three or more features
With three or more features we can train in the same way and obtain a separating hyperplane.
End of explanation
X['AgeFill'] = X['Age'].fillna(X['Age'].mean())
X_test['AgeFill'] = X_test['Age'].fillna(X['Age'].mean())
X = X.drop(['Age'], axis=1)
X_test = X_test.drop(['Age'], axis=1)
Explanation: Exercise
As before, fill the missing ages with the mean and encode sex as a number.
Sample solution
End of explanation
le = LabelEncoder()
le.fit(X['Sex'])
X['Gender'] = le.transform(X['Sex'])
X_test['Gender'] = le.transform(X_test['Sex'])
classes = {gender: i for (i, gender) in enumerate(le.classes_)}
print(classes)
X.tail()
Explanation: scikit-learn's LabelEncoder can also be used to encode sex numerically.
End of explanation
X = X.join(pd.get_dummies(X['Embarked'], prefix='Embarked'))
X_test = X_test.join(pd.get_dummies(X['Embarked'], prefix='Embarked'))
X.tail()
Explanation: One-hot Encoding
Like Sex, the port of embarkation (Embarked) cannot be used as-is and must be converted to numbers, but here there are three categories: S, C, and Q. In such cases we create new features with a technique called one-hot encoding (also known as one-of-K encoding), using pandas' get_dummies.
End of explanation
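The idea behind get_dummies can be sketched in a few lines of pure Python; the helper name one_hot below is ours, not a pandas API:

```python
def one_hot(values, prefix):
    """Map a list of categorical values to dicts of 0/1 indicator features."""
    categories = sorted(set(values))
    return [{'%s_%s' % (prefix, c): int(v == c) for c in categories}
            for v in values]

rows = one_hot(['S', 'C', 'Q', 'S'], prefix='Embarked')
# rows[0] == {'Embarked_C': 0, 'Embarked_Q': 0, 'Embarked_S': 1}
```

Exactly one indicator is 1 in each row, which is why the technique is also called one-of-K encoding.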
X = X.drop(['Sex', 'Embarked'], axis=1)
X_test = X_test.drop(['Sex', 'Embarked'], axis=1)
Explanation: Let's drop the features we no longer need.
End of explanation
clf = LogisticRegression()
scores = cross_val(clf, X, y)
print('Scores:', scores)
print('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))
Explanation: Evaluate with logistic regression plus cross-validation.
End of explanation
clf = DecisionTreeClassifier(criterion='entropy', max_depth=2, min_samples_leaf=2)
scores = cross_val(clf, X, y, 5)
print('Scores:', scores)
print('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))
Image(url='https://raw.githubusercontent.com/PyDataTokyo/pydata-tokyo-tutorial-1/master/images/titanic_decision_tree.png')
Explanation: The score improved!
6. Predicting survivors with a decision tree
Decision trees are among the most widely used machine-learning methods. Because the factors that determined a classification can be laid out as a tree structure, they are very easy to interpret.
End of explanation
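The criterion='entropy' impurity measure used for these trees can be computed by hand; a minimal sketch (assuming base-2 logarithms, the usual convention):

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

mixed = entropy([0, 1, 0, 1])  # 1.0 bit: a 50/50 split is maximally impure
pure = entropy([1, 1, 1, 1])   # 0.0: a pure node carries no uncertainty
```

A split is chosen to maximize the drop in entropy (the information gain) between a node and its children.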
clf = DecisionTreeClassifier(criterion='entropy', max_depth=3, min_samples_leaf=2)
scores = cross_val(clf, X, y, 5)
print('Scores:', scores)
print('Mean Score: {0:.3f} (+/-{1:.3f})'.format(scores.mean(), scores.std()*2))
Explanation: Exercise
Change the decision-tree parameters and compare the scores.
Sample solution
End of explanation
clf = DecisionTreeClassifier(criterion='entropy', max_depth=2, min_samples_leaf=2)
param_grid = {'max_depth': [2, 3, 4, 5], 'min_samples_leaf': [2, 3, 4, 5]}
cv = KFold(5, shuffle=True, random_state=0)
grid_search = GridSearchCV(clf, param_grid, cv=cv, n_jobs=-1, verbose=1,return_train_score=True)
grid_search.fit(X, y)
Explanation: 7. Grid search
Grid search is a convenient feature that varies the classifier's parameters over specified ranges and finds the combination with the highest score.
End of explanation
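Under the hood a grid search is just an exhaustive loop over the Cartesian product of the candidate values; a minimal pure-Python sketch with a toy scoring function standing in for the cross-validated model score:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Return (best_score, best_params) over all parameter combinations."""
    keys = sorted(param_grid)
    best_score, best_params = float('-inf'), None
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

# Toy objective that peaks at max_depth=4, min_samples_leaf=2.
score = lambda max_depth, min_samples_leaf: -abs(max_depth - 4) - abs(min_samples_leaf - 2)
best_score, best_params = grid_search(
    {'max_depth': [2, 3, 4, 5], 'min_samples_leaf': [2, 3, 4, 5]}, score)
# best_params == {'max_depth': 4, 'min_samples_leaf': 2}
```

GridSearchCV does the same enumeration, but scores each combination by cross-validation and can parallelize the loop (n_jobs).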
print('Scores: {:.3f}'.format(grid_search.best_score_))
print('Best Parameter Choice:', grid_search.best_params_)
Explanation: Check the best score and parameter combination.
End of explanation
grid_search.cv_results_
grid_search.cv_results_['mean_test_score']
scores = grid_search.cv_results_['mean_test_score'].reshape(4, 4)
fig, ax = plt.subplots()
cm = plt.cm.Blues
mat = ax.matshow(scores, cmap=cm)
ax.set_xlabel('min_samples_leaf')
ax.set_ylabel('max_depth')
ax.set_xticklabels(['']+param_grid['min_samples_leaf'])
ax.set_yticklabels(['']+param_grid['max_depth'])
fig.colorbar(mat)
plt.show()
Explanation: We can also inspect all of the results.
End of explanation
y_test_pred = grid_search.predict(X_test)
Explanation: Run the prediction with the best parameter combination.
End of explanation
df_kaggle = pd.DataFrame({'PassengerId': df_test['PassengerId'], 'Survived':np.array(y_test_pred)})
df_kaggle.to_csv('kaggle_decisiontree.csv', index=False)
Explanation: Create a CSV file to submit to Kaggle and check the result.
End of explanation |
10,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Template for test
Step1: Controlling for Random Negatve vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Note
Step2: Y Phosphorylation
Step3: T Phosphorylation | Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
Explanation: Template for test
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_s_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark(j, "S")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_s_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark(j, "S")
del x
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Note: SMOTEENN seems to perform best
End of explanation
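Of the imbalance techniques listed here, random under-sampling is the simplest to illustrate. A minimal pure-Python sketch (the helper below is ours; the Predictor class presumably delegates to a resampling library for these strategies):

```python
import random

def random_under_sample(X, y, seed=0):
    """Downsample every class to the size of the smallest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(rows) for rows in by_class.values())
    X_out, y_out = [], []
    for label, rows in by_class.items():
        for xi in rng.sample(rows, n_min):
            X_out.append(xi)
            y_out.append(label)
    return X_out, y_out

X = [[i] for i in range(10)]
y = [0] * 8 + [1] * 2  # imbalanced: 8 negatives, 2 positives
X_bal, y_bal = random_under_sample(X, y)
# y_bal now holds two samples of each class
```

SMOTEENN and ADASYN go the other way, synthesizing new minority samples instead of discarding majority ones.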
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
try:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_Y_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark(j, "Y")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_Y_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark(j, "Y")
del x
except:
print("Benchmark not relevant")
Explanation: Y Phosphorylation
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
benchmarks = ["Data/Benchmarks/phos_CDK1.csv", "Data/Benchmarks/phos_CK2.csv", "Data/Benchmarks/phos_MAPK1.csv", "Data/Benchmarks/phos_PKA.csv", "Data/Benchmarks/phos_PKC.csv"]
for j in benchmarks:
for i in par:
print("y", i, " ", j)
y = Predictor()
y.load_data(file="Data/Training/clean_t_filtered.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("bagging")
y.benchmark(j, "T")
del y
print("x", i, " ", j)
x = Predictor()
x.load_data(file="Data/Training/clean_t_filtered.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("bagging")
x.benchmark(j, "T")
del x
Explanation: T Phosphorylation
End of explanation |
10,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Issue 12
The purpose of this notebook is to debug issue 12
Here is the Pyramid version info
Step1: Read the data in
Step2: Determining the periodicity
Our issue filer claims that the periodicity of the data is every 2 minutes
Step3: However, m is greater than the number of samples in the data, which is suspect... unless there is LOTS of data, it's unlikely we have a bi-minutely seasonal period...
Step4: On inspection of the data, it's clear that there is a 2 minute interval between each sample, but what is the actual seasonality? That's what we're unsure of. Let's take a look
Step5: We can see that even though the data is separated at a 2-minute interval, the seasonality is VERY DIFFERENT (and much less frequent). Therefore, part of the issue is the assumption that the actual data frequency is equivalent to the seasonality. It is not!! It looks like we have 3 seasons in a 2 month period, which means we likely have 18 seasons over the course of the year (3 * 6). Only 3 are in the sample, however, and the algo is time invariant (i.e., it has no idea whether this is a year or a month; it only knows there are m seasons). Therefore, we'll set m to 3.
This is really important to understand
Step6: nsdiffs shows that we do not need to perform any seasonal differencing, so we can turn our attention to differencing with ndiffs.
Step7: ndiffs asserts that we should only difference once. Let's look at how the time series looks after differencing once
Step8: Note that auto_arima will only ever use lag=1 while differencing, and that these estimations all happen under the covers in auto_arima. There is not a need to difference prior to using auto_arima, as it will find the optimal differencing orders for you.
Let's try fitting an ARIMA using auto_arima.
Step9: Running R's auto.arima found the exact same parameters with almost the same exact AIC, BIC, etc.
```R
library(forecast)
df = read.csv('dummy_data.csv')
head(df)
time occupancy
1 2017-03-01 00 | Python Code:
import pmdarima as pm
print("Pyramid version: %r" % pm.__version__)
Explanation: Issue 12
The purpose of this notebook is to debug issue 12
Here is the Pyramid version info:
End of explanation
import pandas as pd
data = pd.read_csv('dummy_data.csv')
data.head()
Explanation: Read the data in:
End of explanation
n_days = 60
n_hours = 24
n_minutes = 30 # every other minute
# determine m:
m = n_days * n_hours * n_minutes
m
Explanation: Determining the periodicity
Our issue filer claims that the periodicity of the data is every 2 minutes:
I have a seasonal time-series data set which comprises of 2 months of data. Now if I mention the frequency as "2 min" or "365x24x30" then it tries to divide the dataset with that frequency. But my data set is just for 2 months and obviously, it has way lesser values than the freq.
Therefore, rather than use 365 * 24 * 30 = 262800, we can probably use the number of days in the two months (say, 60) instead of the 365: 60 * 24 * 30 = 43200
End of explanation
print("n_periods: %i" % m)
print("n_samples: %i" % data.shape[0])
Explanation: However, m is greater than the number of samples in the data, which is suspect... unless there is LOTS of data, it's unlikely we have a bi-minutely seasonal period...
End of explanation
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
# extract the data we're interested in
n_samples = data.shape[0]
xlab, y = data.time, data.occupancy
plt.plot(np.arange(n_samples), y)
plt.axis([0, n_samples, y.min(), y.max()])
plt.show()
Explanation: On inspection of the data, it's clear that there is a 2 minute interval between each sample, but what is the actual seasonality? That's what we're unsure of. Let's take a look:
End of explanation
from pmdarima.arima.utils import nsdiffs
# Test for best nsdiffs (assuming m is 3, as previously shown)
nsdiffs(y, m=3, max_D=5, test='ch')
Explanation: We can see that even though the data is separated at a 2-minute interval, the seasonality is VERY DIFFERENT (and much less frequent). Therefore, part of the issue is the assumption that the actual data frequency is equivalent to the seasonality. It is not!! It looks like we have 3 seasons in a 2 month period, which means we likely have 18 seasons over the course of the year (3 * 6). Only 3 are in the sample, however, and the algo is time invariant (i.e., it has no idea whether this is a year or a month; it only knows there are m seasons). Therefore, we'll set m to 3.
This is really important to understand: the m parameter must be apriori knowledge!! Even R's auto.arima will have a tough time fitting a model for a ts with unknown frequency.
Stationarity
Stationarity is an important part of ARIMA analysis. There are several avenues we can explore in order to attempt to make a time series stationary, but one of the more common ones is differencing. pyramid provides a differencing method that can be used for this (diff), and also provides a function that can estimate the number of diffs required to reach stationarity (ndiffs). Before we can difference the actual timeseries, however, we need to address the pretty apparent seasons in the data. We can estimate the order of seasonal differencing using nsdiffs.
End of explanation
from pmdarima.arima.utils import ndiffs, diff
# Test for best ndiffs at the p < 0.05 level
ndiffs(y, alpha=0.05, test='kpss', max_d=5)
Explanation: nsdiffs shows that we do not need to perform any seasonal differencing, so we can turn our attention to differencing with ndiffs.
End of explanation
def plot_ts(x, title="Time series"):
n_samples = x.shape[0]
plt.plot(np.arange(n_samples), x)
plt.axis([0, n_samples, x.min(), x.max()])
plt.title(title)
plt.show()
plot_ts(diff(y, differences=1, lag=1), title="diff=1, lag=1")
Explanation: ndiffs asserts that we should only difference once. Let's look at how the time series looks after differencing once:
End of explanation
from pmdarima.arima import auto_arima
# We'll give it a broad range of parameters to search, though it might take a bit.
arima = auto_arima(y, # this is our unlagged data
exogenous=None, # if you have covariates, you can regress against them as well (optionally)
start_p=1, max_p=5,
start_q=1, max_q=5,
start_P=1, max_P=3,
start_Q=1, max_Q=3,
d=1, D=0, # we have already estimated our d and D, so no need to compute it again
max_order=None, # do not limit the order of parameters for this search
stepwise=True, # faster
error_action='ignore', # do not care if we find parameters that fail; skip them
trace=True, seasonal=True, m=3)
arima.summary()
Explanation: Note that auto_arima will only ever use lag=1 while differencing, and that these estimations all happen under the covers in auto_arima. There is not a need to difference prior to using auto_arima, as it will find the optimal differencing orders for you.
Let's try fitting an ARIMA using auto_arima.
End of explanation
print("Last few values: %r" % y[-5:].tolist())
print("Nex three predicted values: %r" % arima.predict(n_periods=3))
Explanation: Running R's auto.arima found the exact same parameters with almost the same exact AIC, BIC, etc.
```R
library(forecast)
df = read.csv('dummy_data.csv')
head(df)
time occupancy
1 2017-03-01 00:02:01 2
2 2017-03-01 00:04:01 3
3 2017-03-01 00:06:01 2
4 2017-03-01 00:08:01 1
5 2017-03-01 00:10:01 4
6 2017-03-01 00:12:01 1
y = ts(df$occupancy, frequency=3)
auto.arima(y)
Series: y
ARIMA(1,1,1)(0,0,1)[3]
Coefficients:
ar1 ma1 sma1
0.1598 -0.8479 0.0557
s.e. 0.0330 0.0198 0.0283
sigma^2 estimated as 3.21: log likelihood=-3220.24
AIC=6448.47 AICc=6448.5 BIC=6470.01
```
To predict your next value, you simply call predict. Note that once you've discovered your parameters, you'll periodically have to refresh your model with new data as it comes in. Since ARIMAs use the most recent values to project into the future, you will not want to predict many values into the future; only several at a time. Here's how we can forecast the next three values:
End of explanation |
10,956 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I get how to use pd.MultiIndex.from_tuples() in order to change something like | Problem:
import pandas as pd
import numpy as np
l = [('A', '1', 'a'), ('A', '1', 'b'), ('A', '2', 'a'), ('A', '2', 'b'), ('B', '1','a'), ('B', '1','b')]
np.random.seed(1)
df = pd.DataFrame(np.random.randn(5, 6), columns=l)
def g(df):
df.columns = pd.MultiIndex.from_tuples(df.columns, names=['Caps','Middle','Lower'])
return df
df = g(df.copy()) |
10,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Average wage in Russia
Step1: For this assignment we need data on the average monthly wage level in Russia
Step2: Testing stationarity and STL decomposition of the series
Step3: Variance stabilization
Let's apply a Box-Cox transformation to stabilize the variance
Step4: Stationarity
The Dickey-Fuller test rejects the hypothesis of non-stationarity, but a trend is visible in the data. Let's try seasonal differencing: we'll run an STL decomposition on the differenced series and test it for stationarity
Step5: The Dickey-Fuller test rejects the hypothesis of non-stationarity, BUT the trend could not be removed completely. Let's add ordinary differencing as well
Step6: The hypothesis of non-stationarity is rejected at an even higher significance level, and the series looks better visually: the trend is gone.
Model selection
Let's look at the ACF and PACF of the resulting series
Step7: Initial approximations
Step8: The best model
Step9: Its residuals
Step10: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and uncorrelated (confirmed by the Ljung-Box test and the correlogram). Let's see how well the model describes the data
Step11: Forecast | Python Code:
from __future__ import division
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
from itertools import product
from datetime import *
from dateutil.relativedelta import *
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
Explanation: Average wage in Russia
End of explanation
#Reading data
wage = pd.read_csv('WAG_C_M.csv', sep=';', index_col='month', parse_dates=True, dayfirst=True)
wage.info()
wage.head()
_ = plt.figure(figsize=(15,7))
_ = wage.WAG_C_M.plot()
_ = plt.title('Average nominal wage')
Explanation: For this assignment we need data on the average monthly wage level in Russia:
End of explanation
_ = sm.tsa.seasonal_decompose(wage.WAG_C_M).plot()
print('Augmented Dickey-Fuller unit root test p=%f' % sm.tsa.stattools.adfuller(wage.WAG_C_M)[1])
Explanation: Testing stationarity and STL decomposition of the series:
End of explanation
wage['WAG_C_M_box'], lmbda = stats.boxcox(wage.WAG_C_M)
_ = plt.figure(figsize=(15,7))
_ = wage.WAG_C_M_box.plot()
_ = plt.title(u'Transformed average nominal wage')
print('Optimal parameter of the Box-Cox power transformation: %f' % lmbda)
print('Augmented Dickey-Fuller unit root test p=%f' % sm.tsa.stattools.adfuller(wage.WAG_C_M_box)[1])
Explanation: Variance stabilization
Let's apply a Box-Cox transformation to stabilize the variance:
End of explanation
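For a single value the Box-Cox transform and its inverse are one-liners; a minimal pure-Python sketch (mirroring the invboxcox helper that appears later in this notebook):

```python
import math

def boxcox(y, lmbda):
    """Box-Cox power transform of one positive value."""
    if lmbda == 0:
        return math.log(y)
    return (y ** lmbda - 1.0) / lmbda

def invboxcox(y, lmbda):
    """Inverse Box-Cox transform."""
    if lmbda == 0:
        return math.exp(y)
    return math.exp(math.log(lmbda * y + 1.0) / lmbda)

# Round trip recovers the original value up to floating-point error.
assert abs(invboxcox(boxcox(42.0, 0.5), 0.5) - 42.0) < 1e-9
assert abs(invboxcox(boxcox(42.0, 0.0), 0.0) - 42.0) < 1e-9
```

scipy.stats.boxcox additionally estimates the lambda that maximizes the log-likelihood, which is the value printed by the cell above.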
wage['WAG_C_M_box_diff'] = wage.WAG_C_M_box - wage.WAG_C_M_box.shift(12)
_ = sm.tsa.seasonal_decompose(wage.WAG_C_M_box_diff.dropna()).plot()
print('Augmented Dickey-Fuller unit root test p=%f' % sm.tsa.stattools.adfuller(wage.WAG_C_M_box_diff.dropna())[1])
Explanation: Stationarity
The Dickey-Fuller test rejects the hypothesis of non-stationarity, but a trend is visible in the data. Let's try seasonal differencing: we'll run an STL decomposition on the differenced series and test it for stationarity:
End of explanation
wage['WAG_C_M_box_diff2'] = wage.WAG_C_M_box_diff - wage.WAG_C_M_box_diff.shift(1)
_ = sm.tsa.seasonal_decompose(wage.WAG_C_M_box_diff2.dropna()).plot()
print('Augmented Dickey-Fuller unit root test p=%f' % sm.tsa.stattools.adfuller(wage.WAG_C_M_box_diff2.dropna())[1])
Explanation: The Dickey-Fuller test rejects the hypothesis of non-stationarity, BUT the trend could not be removed completely. Let's add ordinary differencing as well:
End of explanation
plt.figure(figsize=(15,10))
ax = plt.subplot(211)
sm.graphics.tsa.plot_acf(wage.WAG_C_M_box_diff2.dropna()[12:].squeeze(), lags=50, ax=ax);
ax = plt.subplot(212)
sm.graphics.tsa.plot_pacf(wage.WAG_C_M_box_diff2.dropna()[12:].squeeze(), lags=50, ax=ax);
Explanation: The hypothesis of non-stationarity is rejected at an even higher significance level, and the series looks better visually: the trend is gone.
Model selection
Let's look at the ACF and PACF of the resulting series:
End of explanation
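The sample autocorrelation that plot_acf visualizes can be computed directly; a minimal pure-Python sketch (acf here is our helper, not the statsmodels function):

```python
def acf(x, lag):
    """Sample autocorrelation of a series at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

x = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print(acf(x, 0))  # -> 1.0 by construction
print(acf(x, 1))  # strongly negative: the series alternates sign every step
```

Significant spikes at low lags suggest MA terms, while the PACF plays the analogous role for choosing the AR order.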
ps = range(0, 2)
d=1
qs = range(0, 2)
Ps = range(0, 2)
D=1
Qs = range(0, 1)
parameters = product(ps, qs, Ps, Qs)
parameters_list = list(parameters)
parameters_list
len(parameters_list)
%%time
results = []
best_aic = float("inf")
warnings.filterwarnings('ignore')
for param in parameters_list:
#try/except is needed because the model fails to fit for some parameter combinations
try:
model=sm.tsa.statespace.SARIMAX(wage.WAG_C_M_box, order=(param[0], d, param[1]),
seasonal_order=(param[2], D, param[3], 12)).fit(disp=-1)
#report the parameter sets the model cannot be fitted with and move on to the next set
except ValueError:
print('wrong parameters:', param)
continue
aic = model.aic
#keep the best model, its AIC, and its parameters
if aic < best_aic:
best_model = model
best_aic = aic
best_param = param
results.append([param, model.aic])
warnings.filterwarnings('default')
result_table = pd.DataFrame(results)
result_table.columns = ['parameters', 'aic']
print(result_table.sort_values(by = 'aic', ascending=True).head())
Explanation: Initial approximations: Q=0, q=1, P=1, p=1.
End of explanation
print(best_model.summary())
Explanation: The best model:
End of explanation
_ = plt.figure(figsize=(15,12))
_ = plt.subplot(211)
_ = best_model.resid[13:].plot()
_ = plt.ylabel(u'Residuals')
_ = ax = plt.subplot(212)
_ = sm.graphics.tsa.plot_acf(best_model.resid.values.squeeze(), lags=50, ax=ax)
print("Student's t-test: p=%f" % stats.ttest_1samp(best_model.resid[13:], 0)[1])
print("Dickey-Fuller test: p=%f" % sm.tsa.stattools.adfuller(best_model.resid[13:])[1])
Explanation: Its residuals:
End of explanation
def invboxcox(y,lmbda):
if lmbda == 0:
return(np.exp(y))
else:
return(np.exp(np.log(lmbda*y+1)/lmbda))
wage['model'] = invboxcox(best_model.fittedvalues, lmbda)
_ = plt.figure(figsize=(15,7))
_ = wage.WAG_C_M.plot()
_ = wage.model[13:].plot(color='r')
_ = plt.title('Average nominal wage')
Explanation: The residuals are unbiased (confirmed by Student's t-test), stationary (confirmed by the Dickey-Fuller test and visually), and uncorrelated (confirmed by the Ljung-Box test and the correlogram). Let's see how well the model describes the data:
End of explanation
wage2 = wage[['WAG_C_M']]
date_list = [datetime.strptime("2017-07-01", "%Y-%m-%d") + relativedelta(months=x) for x in range(0,36)]
future = pd.DataFrame(index=date_list, columns=wage2.columns)
wage2 = pd.concat([wage2, future])
wage2['forecast'] = invboxcox(best_model.predict(start=294, end=329), lmbda)
_ = plt.figure(figsize=(15,7))
_ = wage2.WAG_C_M.plot()
_ = wage2.forecast.plot(color='r')
_ = plt.title('Average nominal wage')
Explanation: Forecast
End of explanation |
10,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr align=left><td><img align=left src="https
Step1: Mixed Equations In-Class Project
Consider the reaction-diffusion PDE
$$\begin{aligned}
u_t &= \sigma D_1 \nabla^2 u + f(u, v) \
v_t &= \sigma D_2 \nabla^2 v + g(u, v)
\end{aligned}$$
in two-dimensions (i.e. $\nabla^2 u = u_{xx} + u_{yy}$) and with the source terms
$$\begin{aligned}
f(u,v) &= \alpha u (1 - \tau_1 v^2) + v (1 - \tau_2 u) \
g(u,v) &= \beta v + \alpha \tau_1 u v^2 + u (\gamma + \tau_2 v).
\end{aligned}$$
These equations with the appropriate parameters $\sigma, D_1, D_2, \alpha, \beta, \tau_1, \tau_2, \gamma$ can be used to study emergent patterns from seemingly random initial data which we will investigate numerically.
Step3: Spatial Derivative Discretization
Let's consider the above PDEs on a square domain $\Omega = [-1, 1] \times [-1, 1]$ with periodic boundary conditions. First write a function that uses a five-point stencil to represent the Laplacian operator in 2d and returns the appropriate sparse matrix representation.
Step5: Time Stepping
First let's see if we can make a simple explicit method, in this case forward Euler, work for us. We know this might not be such a great idea due to the diffusion term but maybe the reaction terms will be helpful.
First write a function that uses forward Euler to take a single time step to solve the equations of interest.
Step6: Let's now try to solve the PDE given the parameters
$$
\sigma = 0.0021, ~ \tau_1 = 3.5, ~ \tau_2 = 0.0, ~ \alpha = 0.899, ~ \beta=-0.91, ~\gamma=-\alpha
$$
with the default values of $D_1 = 0.5$ and $D_2 = 1.0$. We will also take a random initial condition.
Note what step-size we might need here. For the two-dimensional heat equation we can show that forward Euler is going to require a step size of
$$
\Delta t \leq \frac{\Delta x^2}{4 \kappa}
$$
where now $\kappa$ is the coefficient out in front of the Laplacian. Here we will take the maximum of the coefficient in front of the Laplacians to remain stable.
Step7: Implicit-Explicit Splitting
The previous approach was clearly very slow so let's try applying one of our splitting techniques to the problem instead. IMEX methods are actually pretty ideal for this case so let's try using backwards Euler for the stiff diffusion term and the forward Euler time step for the explicit reaction terms.
Implicit
Step8: Try playing with the input parameters and see what kind of behavior you see. | Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import scipy.sparse as sparse
import scipy.sparse.linalg as linalg
Explanation: <table>
<tr align=left><td><img align=left src="https://i.creativecommons.org/l/by/4.0/88x31.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td>
</table>
Based on an example from https://github.com/ketch/finite-difference-course
End of explanation
def f_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma):
return alpha * U * (1.0 - tau_1 * V**2) + V * (1.0 - tau_2 * U)
def g_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma):
return beta * V + alpha * tau_1 * U * V**2 + U * (gamma + tau_2 * V)
Explanation: Mixed Equations In-Class Project
Consider the reaction-diffusion PDE
$$\begin{aligned}
u_t &= \sigma D_1 \nabla^2 u + f(u, v) \
v_t &= \sigma D_2 \nabla^2 v + g(u, v)
\end{aligned}$$
in two-dimensions (i.e. $\nabla^2 u = u_{xx} + u_{yy}$) and with the source terms
$$\begin{aligned}
f(u,v) &= \alpha u (1 - \tau_1 v^2) + v (1 - \tau_2 u) \
g(u,v) &= \beta v + \alpha \tau_1 u v^2 + u (\gamma + \tau_2 v).
\end{aligned}$$
These equations with the appropriate parameters $\sigma, D_1, D_2, \alpha, \beta, \tau_1, \tau_2, \gamma$ can be used to study emergent patterns from seemingly random initial data which we will investigate numerically.
End of explanation
def laplacian_discretization(m):
Constructs a sparse matrix that discretizes the 2d Laplacian
Uses a five-point stencil and periodic boundary conditions.
delta_x = 2.0 / (m + 1)
# Primary discretization
e = numpy.ones(m)
T = sparse.spdiags([e, -4.0 * e, e], [-1, 0, 1], m, m)
S = sparse.spdiags([e, e], [-1, 1], m, m)
I = sparse.eye(m)
A = sparse.kron(I, T) + sparse.kron(S, I)
# Construct periodic BCs
e = numpy.ones(m**2)
A_periodic = sparse.spdiags([e, e],[m - m**2, m**2 - m], m**2, m**2).tolil()
# Left & right BCs:
for i in range(m):
A_periodic[i * m, (i + 1) * m - 1] = 1.0
A_periodic[(i + 1) * m - 1, i * m] = 1.0
# Combine two matrices
A = A + A_periodic
A /= delta_x**2
A = A.todia()
return A
A = laplacian_discretization(4)
plt.spy(A)
plt.show()
Explanation: Spatial Derivative Discretization
Let's consider the above PDEs on a square domain $\Omega = [-1, 1] \times [-1, 1]$ with periodic boundary conditions. First write a function that uses a five-point stencil to represent the Laplacian operator in 2d and returns the appropriate sparse matrix representation.
End of explanation
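Before assembling the sparse matrix it can help to see the stencil applied pointwise; a minimal pure-Python sketch of the periodic five-point Laplacian acting on a grid function (the helper name is ours):

```python
def apply_laplacian_periodic(u, delta_x):
    """Apply the 5-point Laplacian with periodic BCs to a 2D grid (list of lists)."""
    m = len(u)
    out = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            out[i][j] = (u[(i + 1) % m][j] + u[(i - 1) % m][j] +
                         u[i][(j + 1) % m] + u[i][(j - 1) % m] -
                         4.0 * u[i][j]) / delta_x**2
    return out

# Sanity check: the Laplacian of a constant field is identically zero.
u_const = [[3.0] * 4 for _ in range(4)]
result = apply_laplacian_periodic(u_const, delta_x=0.5)
```

The sparse matrix built above encodes exactly this operation; the % m wraparound is what the extra periodic entries in A_periodic implement.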
def forward_euler_step(U, V, delta_t, A, sigma, f, g, D1=0.5, D2=1.0):
Take a single forward Euler step on the reaction-diffusion equation
U_new = U + delta_t * (sigma * D1 * A * U + f(U, V))
V_new = V + delta_t * (sigma * D2 * A * V + g(U, V))
return U_new, V_new
Explanation: Time Stepping
First let's see if we can make a simple explicit method, in this case forward Euler, work for us. We know this might not be such a great idea due to the diffusion term but maybe the reaction terms will be helpful.
First write a function that uses forward Euler to take a single time step to solve the equations of interest.
End of explanation
def forward_euler_coupled_solver(sigma, tau_1, tau_2, alpha, beta, gamma, D_1, D_2):
# Alias reaction functions with the above parameters
f = lambda U, V: f_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma)
g = lambda U, V: g_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma)
# Set up grid
m = 150
delta_x = 2.0 / m
x = numpy.linspace(-1.0, 1.0, m)
y = numpy.linspace(-1.0, 1.0, m)
Y, X = numpy.meshgrid(y, x)
# Initial data
U = numpy.random.randn(m, m) / 2.0
V = numpy.random.randn(m, m) / 2.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1, aspect='equal')
plot = axes.pcolor(x, y, U, cmap=plt.get_cmap("viridis"))
fig.colorbar(plot)
# Setup spatial discretization
U = U.reshape(-1)
V = V.reshape(-1)
A = laplacian_discretization(m)
# Time
t = 0.0
t_final = 300.0
delta_t = delta_x**2 / (5.0 * sigma)
num_steps = int(numpy.round(t_final / delta_t))
# Evolve in time
next_output_time = 0.0
for j in range(num_steps):
U, V = forward_euler_step(U, V, delta_t, A, sigma, f, g)
t += delta_t
if t >= next_output_time:
next_output_time += 50.0
U_output = U.reshape((m, m))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1, aspect='equal')
plot = axes.pcolor(x, y, U_output, cmap=plt.get_cmap("viridis"))
fig.colorbar(plot)
axes.set_title("t = %s" % t)
plt.show()
forward_euler_coupled_solver(sigma=0.0021, tau_1=3.5, tau_2=0, alpha=0.899, beta=-0.91, gamma=-0.899, D_1=0.5, D_2=1.0)
Explanation: Let's now try to solve the PDE given the parameters
$$
\sigma = 0.0021, ~ \tau_1 = 3.5, ~ \tau_2 = 0.0, ~ \alpha = 0.899, ~ \beta=-0.91, ~\gamma=-\alpha
$$
with the default values of $D_1 = 0.5$ and $D_2 = 1.0$. We will also take a random initial condition.
Note what step-size we might need here. For the two-dimensional heat equation we can show that forward Euler is going to require a step size of
$$
\Delta t \leq \frac{\Delta x^2}{4 \kappa}
$$
where now $\kappa$ is the coefficient out in front of the Laplacian. Here we will take the maximum of the coefficient in front of the Laplacians to remain stable.
End of explanation
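Plugging in the numbers used by the solver above (m = 150 points on [-1, 1], kappa = sigma * max(D_1, D_2)) confirms the chosen step respects the bound; a quick sanity check:

```python
# Forward Euler stability check for the diffusive part (values from the solver above).
m = 150
delta_x = 2.0 / m                      # domain [-1, 1]
sigma, D_1, D_2 = 0.0021, 0.5, 1.0
kappa = sigma * max(D_1, D_2)          # stiffest diffusion coefficient
dt_max = delta_x**2 / (4.0 * kappa)    # forward Euler stability bound
dt_used = delta_x**2 / (5.0 * sigma)   # the step actually taken in the code
print(dt_used <= dt_max)  # -> True, since 1/(5 sigma) < 1/(4 sigma) when max(D_1, D_2) = 1
```

Because the step scales with the square of the grid spacing, halving the spacing quadruples the number of time steps, which is what makes the fully explicit approach slow.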
def backward_euler_diffusion_step(U, V, A, delta_t, sigma, D_1, D_2):
U = linalg.spsolve((sparse.eye(A.shape[0]) - delta_t * sigma * D_1 * A), U)
V = linalg.spsolve((sparse.eye(A.shape[0]) - delta_t * sigma * D_2 * A), V)
return U, V
def forward_euler_reaction_step(U, V, delta_t, f, g):
U_new = U + delta_t * f(U, V)
V_new = V + delta_t * g(U, V)
return U_new, V_new
def imex_solver(sigma, tau_1, tau_2, alpha, beta, gamma, D_1, D_2):
# Alias reaction functions with the above parameters
f = lambda U, V: f_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma)
g = lambda U, V: g_reaction(U, V, sigma, tau_1, tau_2, alpha, beta, gamma)
# Set up grid
m = 150
delta_x = 2.0 / m
x = numpy.linspace(-1.0, 1.0, m)
y = numpy.linspace(-1.0, 1.0, m)
Y, X = numpy.meshgrid(y, x)
# Initial data
U = numpy.random.randn(m, m) / 2.0
V = numpy.random.randn(m, m) / 2.0
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1, aspect='equal')
plot = axes.pcolor(x, y, U, cmap=plt.get_cmap("viridis"))
fig.colorbar(plot)
# Setup spatial discretization
U = U.reshape(-1)
V = V.reshape(-1)
A = laplacian_discretization(m)
# Time
t = 0.0
t_final = 30.0
delta_t = delta_x / (10.0 * sigma)
num_steps = int(numpy.round(t_final / delta_t))
# Evolve in time
next_output_time = 0.0
for j in range(num_steps):
U, V = backward_euler_diffusion_step(U, V, A, delta_t, sigma, D_1, D_2)
U, V = forward_euler_reaction_step(U, V, delta_t, f, g)  # reaction terms only; diffusion was handled implicitly above
t += delta_t
if t >= next_output_time:
next_output_time += 5.0
U_output = U.reshape((m, m))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1, aspect='equal')
plot = axes.pcolor(x, y, U_output, cmap=plt.get_cmap("viridis"))
fig.colorbar(plot)
axes.set_title("t = %s" % t)
plt.show()
# Parameters
imex_solver(sigma=0.0021, tau_1=3.5, tau_2=0, alpha=0.899, beta=-0.91, gamma=-0.899, D_1=0.5, D_2=1.0)
Explanation: Implicit-Explicit Splitting
The previous approach was clearly very slow so let's try applying one of our splitting techniques to the problem instead. IMEX methods are actually pretty ideal for this case so let's try using backwards Euler for the stiff diffusion term and the forward Euler time step for the explicit reaction terms.
Implicit:
$$\begin{aligned}
u_t &= \sigma D_1 \nabla^2 u \\
v_t &= \sigma D_2 \nabla^2 v
\end{aligned}$$
Explicit:
$$\begin{aligned}
u_t &= f(u, v) \\
v_t &= g(u, v)
\end{aligned}$$
Numerical method:
$$\begin{aligned}
U^\ast &= U^n + \Delta t \sigma D_1 \nabla^2 U^\ast \\
V^\ast &= V^n + \Delta t \sigma D_2 \nabla^2 V^\ast \\
U^{n+1} &= U^\ast + \Delta t f(U^\ast, V^\ast) \\
V^{n+1} &= V^\ast + \Delta t g(U^\ast, V^\ast)
\end{aligned}$$
End of explanation
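A minimal one-dimensional sketch of the IMEX step above may help make the structure concrete. The sizes and parameters here are illustrative, the dense Laplacian stands in for `laplacian_discretization`, and `f` is a placeholder reaction, not the notebook's `f_reaction`/`g_reaction`:

```python
import numpy as np

# Toy 1-D IMEX step: implicit backward Euler for diffusion,
# explicit forward Euler for the reaction. All values are assumptions.
m = 8
delta_t, sigma, D_1 = 0.1, 0.01, 1.0

# Dense 1-D Laplacian stencil standing in for laplacian_discretization(m)
A = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
U = np.ones(m)

# Implicit diffusion: solve (I - dt * sigma * D_1 * A) U* = U^n
U_star = np.linalg.solve(np.eye(m) - delta_t * sigma * D_1 * A, U)
# Explicit reaction: U^{n+1} = U* + dt * f(U*), with placeholder f(U) = U(1 - U)
U_new = U_star + delta_t * U_star * (1.0 - U_star)
```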
sigma=0.0045; tau1=2.02; tau2=0.; alpha=2.0; beta=-0.91; gamma=-alpha;
sigma=0.0005; tau1=2.02; tau2=0.; alpha=2.0; beta=-0.91; gamma=-alpha;
sigma=0.0021; tau1=3.5; tau2=0; alpha=0.899; beta=-0.91; gamma=-alpha;
sigma=0.0045; tau1=0.02; tau2=0.2; alpha=1.9; beta=-0.85; gamma=-alpha;
sigma=0.0001; tau1=0.02; tau2=0.2; alpha=0.899; beta=-0.91; gamma=-alpha;
sigma=0.0045; tau1=0.02; tau2=0.2; alpha=1.9; beta=-0.91; gamma=-alpha;
Explanation: Try playing with the input parameters and see what kind of behavior you see.
End of explanation
Description:
<h1>Daymet Data Download</h1>
Daymet data can be extracted/downloaded in two ways. The nationwide or localized grid can be downloaded; alternately, the data for particular grid cells can be extracted through a web interface.
<h2>Daymet Data Download - Nationwide Dataset</h2>
<h3>Required Python libraries</h3>
Step1: <h3>Parameters</h3>
Range of years to download datasets.
- startYear is the first year of data
- endYear is the last year of data
- the default value for the last year is the current year.
Step2: Local data file location
- This is the base directory of the Daymet data which contains the Daymet Data File Structure.
Step3: Set up the URL template information
- This information is determined by the URL structure of the Oak Ridge National Laboratory (ORLN) file server. This is how we determine the file structure.
- Go to the "Daymet Data Sets List" page
- In the Daymet Data Sets List there is a THREDDS column, the URL for each of the types of data (Annual, Daily, and Monthly) can be discovered.
Annual - https
Step4: <h2>Daymet Data Parameters and Time Frames</h2>
<table>
<tr>
<th>Parameter Abbr</th><th>Data Type</th><th>Annual</th><th>Daily</th><th>Monthly</th>
</tr><tr>
<td>dayl</td><td>day length (s)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>prcp</td><td>precipitation (mm/day)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>srad</td><td>shortwave radiation (W/m<sup>2</sup>)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>swe</td><td>snow water equivalent (kg/m<sup>2</sup>)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>tmax</td><td>maximum temp (°C)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>tmin</td><td>minimum temp (°C)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>vp</td><td>humidity as water vapor pressure (Pa)</td><td>X</td><td>X</td><td>X</td>
</tr>
</table>
<h2><a id='daymetDataStructure'>Daymet Data File Structure</a></h2>
The following is a representation of the data structure for the Daymet data directory. The annual, monthly, and daily directories each contain directories which hold the parametric data for their identified type.
Daymet
Annual
prcp
tmax
tmin
vp
Daily
dayl
prcp
srad
swe
tmax
tmin
vp
Monthly
prcp
tmax
tmin
vp
<h2>Execute Data Download</h2>
This script should only need to be executed once a year. The data files being downloaded are HUGE. A single year's worth of data is about 21.5 GB. The best idea for running this script is overnight or over a weekend. This should minimize limiting Internet access for other users.
The script should be executed from the python directory of the Daymet external drive. To do this
Step5: Change the begin/end date values on the following to allow for download of data. Remember each year of data requires about 21.5 GB of storage and bandwidth. Do everyone a favor and run this over a weekend or at night. If a data file has already been downloaded the system will skip to the next file. Currently the system has all the data from 1980 to 2015.
Python Code:
import urllib
import os
from datetime import date as dt
Explanation: <h1>Daymet Data Download</h1>
Daymet data can be extracted/downloaded in two ways. The nationwide or localized grid can be downloaded; alternately, the data for particular grid cells can be extracted through a web interface.
<h2>Daymet Data Download - Nationwide Dataset</h2>
<h3>Required Python libraries</h3>
End of explanation
startYear = 2016 # First year of data extraction
endYear = dt.today().year # Last year of data extraction Defaults to current year.
Explanation: <h3>Parameters</h3>
Range of years to download datasets.
- startYear is the first year of data
- endYear is the last year of data
- the default value for the last year is the current year.
End of explanation
dataDir = r"..\daymet"  # raw string so backslashes in the Windows path are not treated as escapes
Explanation: Local data file location
- This is the base directory of the Daymet data which contains the Daymet Data File Structure.
End of explanation
urlBase = "http://thredds.daac.ornl.gov/thredds/fileServer/ornldaac"
Explanation: Set up the URL template information
- This information is determined by the URL structure of the Oak Ridge National Laboratory (ORNL) file server. This is how we determine the file structure.
- Go to the "Daymet Data Sets List" page
- In the Daymet Data Sets List there is a THREDDS column, the URL for each of the types of data (Annual, Daily, and Monthly) can be discovered.
Annual - https://thredds.daac.ornl.gov/thredds/catalog/ornldaac/1343/catalog.html
Daily - https://thredds.daac.ornl.gov/thredds/catalog/ornldaac/1328/catalog.html
Monthly - https://thredds.daac.ornl.gov/thredds/catalog/ornldaac/1345/catalog.html
One important portion of these URLs is "https://thredds.daac.ornl.gov/thredds/catalog/ornldaac/". The other important portion is the number which follows (current values Annual - 1343, Daily 1328, and Monthly - 1345.) These will need to be checked and updated as they change.
End of explanation
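The pieces above could be combined into a full file URL along these lines. The dataset numbers are the catalog values quoted above; the file name is an assumption for illustration, so check the THREDDS catalog pages for real names:

```python
# Sketch of assembling a THREDDS file-server URL from urlBase plus the
# dataset number for each data type. The example file name is hypothetical.
urlBase = "http://thredds.daac.ornl.gov/thredds/fileServer/ornldaac"
datasetIds = {"annual": 1343, "daily": 1328, "monthly": 1345}

def buildUrl(dataType, fileName):
    return "{0}/{1}/{2}".format(urlBase, datasetIds[dataType], fileName)

print(buildUrl("daily", "daymet_v3_prcp_2015_na.nc4"))
```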
!conda --version
import daymetFileDownload as dfd
Explanation: <h2>Daymet Data Parameters and Time Frames</h2>
<table>
<tr>
<th>Parameter Abbr</th><th>Data Type</th><th>Annual</th><th>Daily</th><th>Monthly</th>
</tr><tr>
<td>dayl</td><td>day length (s)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>prcp</td><td>precipitation (mm/day)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>srad</td><td>shortwave radiation (W/m<sup>2</sup>)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>swe</td><td>snow water equivalent (kg/m<sup>2</sup>)</td><td></td><td>X</td><td></td>
</tr><tr>
<td>tmax</td><td>maximum temp (°C)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>tmin</td><td>minimum temp (°C)</td><td>X</td><td>X</td><td>X</td>
</tr><tr>
<td>vp</td><td>humidity as water vapor pressure (Pa)</td><td>X</td><td>X</td><td>X</td>
</tr>
</table>
<h2><a id='daymetDataStructure'>Daymet Data File Structure</a></h2>
The following is a representation of the data structure for the Daymet data directory. The annual, monthly, and daily directories each contain directories which hold the parametric data for their identified type.
Daymet
Annual
prcp
tmax
tmin
vp
Daily
dayl
prcp
srad
swe
tmax
tmin
vp
Monthly
prcp
tmax
tmin
vp
<h2>Execute Data Download</h2>
This script should only need to be executed once a year. The data files being downloaded are HUGE. A single year's worth of data is about 21.5 GB. The best idea for running this script is overnight or over a weekend. This should minimize limiting Internet access for other users.
The script should be executed from the python directory of the Daymet external drive. To do this:
- Plug in Daymet external drive
- Check which drive letter the external drive is assigned in Windows Explorer (for this example we will use F:)
- Open a command prompt
- Check that the appropriate version of Python is installed.
- The response should look like <code>conda 4.3.21</code>.
- An error will look like <code>'conda' is not recognized ....</code>
- If an error occurs you will need to install Anaconda; try looking in the G:\Software\Python directory for installation instructions.
End of explanation
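The skip-if-already-downloaded behavior described above can be sketched as follows. This is a hypothetical stand-in, not the actual `daymetFileDownload` implementation; `fetch` replaces the real urllib download call so the logic can be exercised offline:

```python
import os

# Hypothetical sketch: only fetch a file when it is not already on disk,
# which is why re-running the script skips completed years.
def downloadIfMissing(url, localPath, fetch):
    if os.path.exists(localPath):
        return False  # file already present: skip it
    fetch(url, localPath)
    return True
```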
dfd.downloadDaymet(startYear, endYear)
Explanation: Change the begin/end date values on the following to allow for download of data. Remember each year of data requires about 21.5 GB of storage and bandwidth. Do everyone a favor and run this over a weekend or at night. If a data file has already been downloaded the system will skip to the next file. Currently the system has all the data from 1980 to 2015.
End of explanation
Description:
Better Long-Term Stock Forecasts
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous paper showed a strong predictive relationship between the P/Sales ratio and long-term returns of some individual stocks and the S&P 500 stock-market index.
However, there was a considerable amount of noise in those scatter-plots, because we considered fixed investment periods of exactly 10 years, for example. So even though the P/Sales ratio was a strong predictor for the mispricing at the buy-time, it was impossible to predict the mispricing at the sell-time, because the stock-market could be in a bubble or in a crash 10 years into the future, which would distort the estimated returns.
This paper presents a simple solution, which is to consider the average returns for all investment periods between 7 and 15 years, and then make a scatter-plot of the mean returns versus the P/Sales ratio. This produces incredibly smooth curves for estimating the future long-term returns of the S&P 500 and some individual stocks.
Along with the previous paper, this is a very important discovery and it has implications for many areas of both theoretical and applied finance. It means that the U.S. stock-market as a whole is not "efficient" and does not follow a purely "random walk" in the long-term. It is possible to estimate the future long-term return of the stock-market and some individual stocks from just a single indicator variable.
Python Imports
This Jupyter Notebook is implemented in Python v. 3.6 and requires various packages for numerical computations and plotting. See the installation instructions in the README-file.
Step1: Load Data
We now load all the financial data we will be using.
Step4: Plotting Functions
These are helper-functions used for making plots.
Step5: Case Study
Step6: We can forecast the future long-term returns using the fitted "return curve" from the scatter-plot above. Towards the end of 2017, the P/Sales ratio was almost 2.2 for the S&P 500, which was about the previous high point of the "Dot-Com" bubble around year 2000.
Step7: So if you purchased the S&P 500 in December 2017 at this P/Sales ratio and will keep the investment for more than 7 years, while reinvesting all dividends during those years (all taxes are ignored), then the formula forecasts an annualized return of about 1.35%
Step8: Towards the end of 2017 the P/Sales ratio was about 4.9 which is close to the all-time historical highs experienced during the stock-market bubble around year 2000.
Step9: Using the formula for the fitted "return curve" from the scatter-plot above, we get this forecasted long-term return
Step10: When we plot the historical P/Sales ratio, we see that at the end of 2017 it was around 3.5 which was near its all-time high experienced during the bubble around year 2000.
Step11: Using the fitted reciprocal curve from the scatter-plot above, we get a forecasted return of about 6.1% per year, when dividends are reinvested without taxes
Step12: Towards the end of 2017 the P/Sales ratio was about 1.8 which was actually very close to the historical average.
Step13: Using the fitted "return curve" from the scatter-plot above with the P/Sales ratio of 1.8 we get the forecasted return
Python Code:
%matplotlib inline
# Imports from Python packages.
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import pandas as pd
import numpy as np
import os
# Imports from FinanceOps.
from curve_fit import CurveFitReciprocal
from data_keys import *
from data import load_index_data, load_stock_data
from returns import prepare_mean_ann_returns
Explanation: Better Long-Term Stock Forecasts
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
The previous paper showed a strong predictive relationship between the P/Sales ratio and long-term returns of some individual stocks and the S&P 500 stock-market index.
However, there was a considerable amount of noise in those scatter-plots, because we considered fixed investment periods of exactly 10 years, for example. So even though the P/Sales ratio was a strong predictor for the mispricing at the buy-time, it was impossible to predict the mispricing at the sell-time, because the stock-market could be in a bubble or in a crash 10 years into the future, which would distort the estimated returns.
This paper presents a simple solution, which is to consider the average returns for all investment periods between 7 and 15 years, and then make a scatter-plot of the mean returns versus the P/Sales ratio. This produces incredibly smooth curves for estimating the future long-term returns of the S&P 500 and some individual stocks.
Along with the previous paper, this is a very important discovery and it has implications for many areas of both theoretical and applied finance. It means that the U.S. stock-market as a whole is not "efficient" and does not follow a purely "random walk" in the long-term. It is possible to estimate the future long-term return of the stock-market and some individual stocks from just a single indicator variable.
Python Imports
This Jupyter Notebook is implemented in Python v. 3.6 and requires various packages for numerical computations and plotting. See the installation instructions in the README-file.
End of explanation
# Define the ticker-names for the stocks we consider.
ticker_SP500 = "S&P 500"
ticker_JNJ = "JNJ"
ticker_K = "K"
ticker_PG = "PG"
ticker_WMT = "WMT"
# Load the financial data for the stocks.
df_SP500 = load_index_data(ticker=ticker_SP500)
df_JNJ = load_stock_data(ticker=ticker_JNJ)
df_K = load_stock_data(ticker=ticker_K)
df_PG = load_stock_data(ticker=ticker_PG)
df_WMT = load_stock_data(ticker=ticker_WMT)
Explanation: Load Data
We now load all the financial data we will be using.
End of explanation
def plot_psales(df, ticker, start_date=None):
"""
Plot the P/Sales ratio.
:param df: Pandas DataFrame with PSALES.
:param ticker: Ticker-name for the stock or index.
:param start_date: Start-date for the plot.
:return: Nothing.
"""
psales = df[PSALES][start_date:].dropna()
psales.plot(title=ticker + " - P/Sales", grid=True)
def plot_ann_returns(ticker, df, key=PSALES,
min_years=7, max_years=15,
use_colors=True):
"""
Create a single scatter-plot with P/Sales or P/Book
vs. Mean Annualized Returns for e.g. 7-15 years.
:param ticker: Ticker-name for the stock or index.
:param df: Pandas DataFrame containing key and TOTAL_RETURN.
:param key: Name of data-column to use e.g. PSALES or PBOOK.
:param min_years: Min number of years for return periods.
:param max_years: Max number of years for return periods.
:param use_colors: Boolean whether to use colors in plot.
:return: Nothing.
"""
# Prepare the data.
# x is the P/Sales or P/Book and y is the Mean Ann. Returns.
x, y = prepare_mean_ann_returns(df=df, key=key,
min_years=min_years,
max_years=max_years)
# Create a single plot.
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(211)
# Scatter-plot.
if use_colors:
# Give each dot in the scatter-plot a shade of blue
# according to the date of the data-point.
ax.scatter(x, y,
c=list(range(len(x))), cmap='Blues',
alpha=1.0, marker='o')
else:
# Use the same color for all dots.
ax.scatter(x, y, marker='o')
# First part of the title.
title1 = "[{0}] {1} vs. {2}-{3} Years Mean Ann. Return"
title1 = title1.format(ticker, key, min_years, max_years)
# X-values for plotting fitted curves.
x_min = np.min(x)
x_max = np.max(x)
x_range = np.arange(x_min, x_max, (x_max/x_min)/1000)
# Plot reciprocal curve-fit.
curve_fit_reciprocal = CurveFitReciprocal(x=x, y=y)
y_pred = curve_fit_reciprocal.predict(x=x_range)
ax.plot(x_range, y_pred, color='red')
# Title with these curve-fit parameters.
title2 = "Mean Ann. Return = {0:.1%} / " + key + " + {1:.1%}"
title2 = title2.format(*curve_fit_reciprocal.params)
# Combine and set the plot-title.
title = "\n".join([title1, title2])
ax.set_title(title)
# Set axis labels.
ax.set_xlabel(key)
ax.set_ylabel("Mean Ann. Return")
# Convert y-ticks to percentages.
# We use a custom FuncFormatter because PercentFormatter
# is inconsistent with string-formatters used elsewhere.
formatter = FuncFormatter(lambda y, _: '{:.0%}'.format(y))
ax.yaxis.set_major_formatter(formatter)
# Show grid.
ax.grid()
# Show the plot.
plt.show()
Explanation: Plotting Functions
These are helper-functions used for making plots.
End of explanation
plot_ann_returns(ticker=ticker_SP500, df=df_SP500, key=PSALES,
min_years=7, max_years=15, use_colors=True)
Explanation: Case Study: S&P 500
The S&P 500 is a stock-market index consisting of the stocks of 500 of the largest companies in USA. The S&P 500 covers about 80% of the whole U.S. stock-market in terms of size so it is useful as a gauge for the entire U.S. stock-market.
We consider the Total Return of the S&P 500 which is what you would get from investing in the S&P 500 and re-investing all dividends back into the S&P 500. We ignore all taxes here.
The following scatter-plot shows the P/Sales ratio versus the Mean Annualized Returns of the S&P 500 for periods between 7 and 15 years.
For each day we calculate the Total Return of the S&P 500 over the next 7-15 years, then we calculate the Mean Annualized Return from those, and then we put a blue dot in the scatter-plot for that date's P/Sales ratio and the Mean Annualized Return we just calculated. This process is continued for all days in the time-series, until we have calculated and plotted the P/Sales vs. Mean Annualized Return for all days.
As can be seen from this scatter-plot, the P/Sales ratio is a very strong predictor for long investment periods between 7-15 years. We call the fitted red curve for the "return curve".
End of explanation
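The mean-annualized-return idea described above can be sketched in a few lines. This is a stand-in for `prepare_mean_ann_returns`, not its actual implementation; the input data below is hypothetical:

```python
import numpy as np

# For one buy-date, annualize the total-return factor over every holding
# period from 7 to 15 years, then average those annualized rates.
def mean_ann_return(total_return_factor_by_years):
    rates = [total_return_factor_by_years[n] ** (1.0 / n) - 1.0
             for n in range(7, 16)]
    return float(np.mean(rates))

# Hypothetical data: steady 8% growth gives a mean annualized return of 8%.
factors = {n: 1.08 ** n for n in range(7, 16)}
print(mean_ann_return(factors))  # ~0.08
```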
df_SP500[PSALES].dropna().tail(1)
plot_psales(df=df_SP500, ticker=ticker_SP500)
Explanation: We can forecast the future long-term returns using the fitted "return curve" from the scatter-plot above. Towards the end of 2017, the P/Sales ratio was almost 2.2 for the S&P 500, which was about the previous high point of the "Dot-Com" bubble around year 2000.
End of explanation
plot_ann_returns(ticker=ticker_JNJ, df=df_JNJ, key=PSALES,
min_years=7, max_years=15, use_colors=True)
Explanation: So if you purchased the S&P 500 in December 2017 at this P/Sales ratio and will keep the investment for more than 7 years, while reinvesting all dividends during those years (all taxes are ignored), then the formula forecasts an annualized return of about 1.35%:
$$
Annualized\ Return = 14.4\% / (P/Sales) - 5.2\% = 14.4\% / 2.2 - 5.2\% \simeq 1.35\%
$$
The formula cannot predict exactly what will happen in the future, because there might be a stock-market bubble or a crash in any given year. The formula merely predicts an average annualized return for long-term investments of about 7-15 years in the S&P 500.
Case Study: Johnson & Johnson (JNJ)
Now let us consider individual companies instead of a whole stock-market index. The first company we consider is Johnson & Johnson with the ticker symbol JNJ. This is a very large company with over 130.000 employees worldwide that manufacture a wide range of health-care related products.
When we plot the P/Sales ratio versus the mean annualized return for 7-15 year periods, we see that the "return curve" fits quite well although there appears to be a few separate "return curves" for P/Sales ratios roughly between 2 and 3.
The blue shades in the scatter-plot indicate the time of the data-points and suggest that the separate curves belong to different periods of time. More research would be needed to establish why these periods have different "return curves". Perhaps the periods had significantly different profit-margins or sales-growth.
End of explanation
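The forecast arithmetic above can be sketched directly from the fitted "return curve". The parameters below are the rounded values quoted for the S&P 500 fit (14.4% and -5.2%), used here for illustration rather than as the exact fitted coefficients:

```python
# Annualized return forecast from the reciprocal curve a / (P/Sales) + b.
def forecast_ann_return(psales, a=0.144, b=-0.052):
    return a / psales + b

print(forecast_ann_return(2.2))  # ~0.0135, i.e. about 1.35% per year
```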
df_JNJ[PSALES].dropna().tail(1)
plot_psales(df=df_JNJ, ticker=ticker_JNJ)
Explanation: Towards the end of 2017 the P/Sales ratio was about 4.9 which is close to the all-time historical highs experienced during the stock-market bubble around year 2000.
End of explanation
plot_ann_returns(ticker=ticker_PG, df=df_PG, key=PSALES,
min_years=7, max_years=15)
Explanation: Using the formula for the fitted "return curve" from the scatter-plot above, we get this forecasted long-term return:
$$
Annualized\ Return \simeq 77.9\% / (P/Sales) - 8.9\%
\simeq 77.9\% / 4.9 - 8.9\% \simeq 7.0\%
$$
So according to this formula, the annualized return of the JNJ stock will be around 7.0% if you own the stock for at least 7 years, when dividends are reinvested and ignoring taxes.
Again there is the caveat that it is impossible to predict whether there will be a stock-market bubble or crash several years into the future, so the forecasted return is an average for 7-15 year investment periods.
Case Study: Procter & Gamble (PG)
Another very large company is Procter & Gamble with the ticker symbol PG, which sells a wide range of consumer products and has almost 100.000 employees.
If we plot the P/Sales ratio versus the mean annualized return we get an incredibly regular curve of data-points. The red line shows a reciprocal curve-fit, which is apparently not the correct formula for this data, as it doesn't fit so well at the ends. You are encouraged to try and find a better curve-fit and a theoretical explanation why your formula is better.
End of explanation
plot_psales(df=df_PG, ticker=ticker_PG)
Explanation: When we plot the historical P/Sales ratio, we see that at the end of 2017 it was around 3.5 which was near its all-time high experienced during the bubble around year 2000.
End of explanation
plot_ann_returns(ticker=ticker_K, df=df_K, key=PSALES,
min_years=7, max_years=15, use_colors=True)
Explanation: Using the fitted reciprocal curve from the scatter-plot above, we get a forecasted return of about 6.1% per year, when dividends are reinvested without taxes:
$$
Annualized\ Return \simeq 24.4\% / (P/Sales) - 0.9\% \simeq
24.4\% / 3.5 - 0.9\% \simeq 6.1\%
$$
But it should again be noted that this formula doesn't fit so well towards the ends of the data, and looking at the scatter-plot suggests a slightly lower return of maybe 5.5%.
Case Study: Kellogg's (K)
The next company is Kellogg's which trades under the ticker symbol K. The company has about 33.000 employees and is especially known for making breakfast cereals.
When we plot the P/Sales ratio versus the mean annualized return, it shows a strong trend that higher P/Sales ratios give lower long-term returns, although the curve-fit is not as good as for the other companies we studied above, especially for lower P/Sales ratios.
The blue shades show the time of the data-points. It can be hard to see in this plot, but for P/Sales ratios between 1.50 and 1.75, there is a "blob" of light-blue data-points well above the fitted red curve. This clearly indicates that the outlying data-points belong to a specific period in time. But we would have to do more research into the financial data for that period, to uncover the reason why the returns are so different.
End of explanation
df_K[PSALES].dropna().mean()
plot_psales(df=df_K, ticker=ticker_K)
Explanation: Towards the end of 2017 the P/Sales ratio was about 1.8 which was actually very close to the historical average.
End of explanation
plot_ann_returns(ticker=ticker_WMT, df=df_WMT, key=PSALES,
min_years=7, max_years=15, use_colors=True)
Explanation: Using the fitted "return curve" from the scatter-plot above with the P/Sales ratio of 1.8 we get the forecasted return:
$$
Annualized\ Return \simeq 27.5\% / (P/Sales) - 6.2\% \simeq
27.5\% / 1.8 - 6.2\% \simeq 9.1\%
$$
So a forecasted return of about 9.1% per year over the next 7-15 years when dividends are reinvested without taxes. That is about 2% (percentage points) higher than the return forecasted for JNJ and 3% higher than forecasted for PG above.
Case Study: Wal-Mart (WMT)
Now let us consider the company Wal-Mart which trades under the ticker symbol WMT. It is an extremely large retail-company with about 2.3 million employees.
If we plot the P/Sales ratio versus the mean annualized return, we see that the red curve fits very poorly. There seems to be several separate trends in the data, and the blue shades indicate that the trends belong to different periods in time. But more research into the company's financial history would be needed to uncover the reason for this, perhaps it is because of significantly different sales-growth, profit margins, etc.
End of explanation
Description:
No real difference between grain sizes for the parallel vs perpendicular graphite.
Step1: Make a giant comparison plot | Python Code:
ABIG = 1.0
big_sil = SingleGrainPop('Grain', 'Silicate', 'Mie', amax=ABIG, md=MD)
big_gra = SingleGrainPop('Grain', 'Graphite', 'Mie', amax=ABIG, md=MD)
%%time
big_sil.calculate_ext(EVALS, unit='kev', theta=THVALS)
%%time
big_gra.calculate_ext(EVALS, unit='kev', theta=THVALS)
ax = plt.subplot(111)
big_sil.plot_ext(ax, 'all')
plt.loglog()
ax.set_ylim(0.01, 2)
plt.title('Silicate')
ax = plt.subplot(111)
big_gra.plot_ext(ax, 'all')
plt.loglog()
ax.set_ylim(0.01, 2)
plt.title('Graphite')
inds = [0, 50, -10]
ms = dict(zip(inds,['d','o','s']))
for i in inds:
plt.plot(THVALS, big_sil.int_diff[i], 'g', ls='',
marker=ms[i], markersize=10, label='%.2f keV' % EVALS[i])
plt.plot(THVALS, big_gra.int_diff[i], 'b', ls='', marker=ms[i], markersize=10)
plt.loglog()
plt.legend(loc='lower left', frameon=False)
giant_sil = SingleGrainPop('Grain', 'Silicate', 'Mie', amax=A0, md=MD)
giant_gra = SingleGrainPop('Grain', 'Graphite', 'Mie', amax=A0, md=MD)
%%time
giant_sil.calculate_ext(EVALS, unit='kev', theta=THVALS)
%%time
giant_gra.calculate_ext(EVALS, unit='kev', theta=THVALS)
ax = plt.subplot(111)
giant_sil.plot_ext(ax, 'all')
plt.loglog()
ax.set_ylim(0.01, 2)
plt.title('Silicate')
ax = plt.subplot(111)
giant_gra.plot_ext(ax, 'all')
plt.loglog()
ax.set_ylim(0.01, 2)
plt.title('Graphite')
inds = [0, 50, -10]
ms = dict(zip(inds,['d','o','s']))
for i in inds:
plt.plot(THVALS, giant_sil.int_diff[i], 'g', ls='',
marker=ms[i], markersize=10, label='%.2f keV' % EVALS[i])
plt.plot(THVALS, giant_gra.int_diff[i], 'b', ls='', marker=ms[i], markersize=10)
plt.loglog()
plt.legend(loc='lower left', frameon=False)
Explanation: No real difference between grain sizes for the parallel vs perpendicular graphite.
End of explanation
ax = plt.subplot(111)
big_gra.plot_ext(ax, 'abs', color='b', lw=1, label='1 um gra')
big_sil.plot_ext(ax, 'abs', color='g', lw=1, label='1 um sil')
giant_gra.plot_ext(ax, 'abs', color='b', lw=2, label='10 um gra')
giant_sil.plot_ext(ax, 'abs', color='g', lw=2, label='10 um sil')
plt.loglog()
plt.xlim(0.1, 20)
plt.ylim(0.001, 2)
plt.title("Absorption")
plt.legend(loc='lower left', frameon=False)
Explanation: Make a giant comparison plot
End of explanation |
10,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We now move on to Naive Bayes. For text analysis, Naive Bayes works well.
Since this is text analysis, we first need to know the text-processing features.
Python String Encoding
Characters and Encodings
What a character consists of
Byte sequence
Step1: Unicode literals
Prefixing a quote with the letter u makes Python treat the string as unicode
Internally stored as Unicode code points
Step2: Unicode encoding / decoding
encode
a method on the unicode type
unicode -> string (byte sequence)
decode
a method on the str type
str -> unicode
Step3: What happens if you call encode on a str, or decode on a unicode?
Step4: The error occurs because we must decode first and then encode, but that did not happen.
It tried to decode as ASCII, and the ASCII codec raised the error.
Step5: This value is already decoded: a Unicode code point.
The character '가' cannot be encoded as ASCII, because ASCII simply does not contain it.
Step6: Applying the encode method to a str
Python Code:
c = "a"
c
print(c), type(c)
#Python 2 behavior. Due to encoding issues, garbled characters can appear instead of '가'. Under Python 3 strings are unicode,
#so all the Hangul characters will be recognized correctly.
x = "가"
x
print(x)
print(x.__repr__())
x = ["가"]
print(x), type(x)
x = "가"
len(x), type(x)
x = "ABC"
y = "가나다"
print(len(x), len(y))
print(x[0], x[1], x[2])
print(y[0], y[1], y[2])
print(y[0], y[1], y[2], y[3])
Explanation: We now move on to Naive Bayes. For text analysis, Naive Bayes works well.
Since this is text analysis, we first need to know the text-processing features.
Python String Encoding
Characters and Encodings
What a character consists of
Byte sequence: what is actually stored in the computer; each character is assigned a byte sequence
Glyph: the visible picture of the character
http://www.asciitable.com/
http://www.kreativekorp.com/charset/encoding.php?name=CP949
Code point: each character is assigned a number that is independent of any byte sequence (Unicode)
Encodings (schemes)
The scheme that assigns the byte sequence
The basic ASCII encoding
Korean encodings
euc-kr
cp949
utf-8
References
http://d2.naver.com/helloworld/19187
http://d2.naver.com/helloworld/76650
Python 2 strings
string type (the default)
A byte string in whatever encoding the environment specifies
unicode type
Stored internally as Unicode code points
Use the encode/decode commands to convert to and from string (byte string)
In Python 3 the unicode type is the default
string <-> unicode point
You need to understand the concept of a code point.
The nice thing is that a Unicode code point has a fixed size, while encoded byte sequences vary between encodings.
Depending on the environment settings we cannot know how "가나다" is actually stored in bytes, so we have to check the configuration.
So what is utf-8? Before Unicode, encoding and decoding meant mapping "가" directly to bytes such as "b7e9". With several encodings in use, the same "가" could be "b7e9" in one and "a9b0" in another, so each character was instead given an encoding-independent number, like an ID number: that is the Unicode code point, stored as a plain integer. That integer does not line up with legacy encodings such as KS, euc-kr, or CP949. Korea registered its characters early, so Hangul received a large block of code points. C and many other languages have no code-point string type; in Python, writing u"..." or calling .decode produces unicode, and utf-8 is an encoding designed to stay as close as possible to the Unicode code points.
Python's string display
__repr__()
What you see when you just type the variable name
When the value is an element of another object
Characters that cannot be shown with the ASCII table are displayed in string (escape) format
The print() command
Finds an available glyph (font) and prints it
End of explanation
y = u"가"
y
print(y)
y = u"가나다"
print(y[0])
print(y[1])
print(y[2])
y = u"가나다"
print(y[0], y[1], y[2])
Explanation: Unicode literals
Prefixing a quote with the letter u makes Python treat the string as unicode
Internally stored as Unicode code points
End of explanation
print(type(y))
z1 = y.encode("cp949")
print(type(z1))
print(z1)
print(type(y))
z2 = y.encode("utf-8")
print(type(z2))
print(z2)
print(type(z1))
y1 = z1.decode("cp949")
print(type(y1))
print(y1)
print(type(z1))
y2 = z2.decode("utf-8")
print(type(y2))
print(y2)
Explanation: Unicode encoding / decoding
encode
a method on the unicode type
unicode -> string (byte sequence)
decode
a method on the str type
str -> unicode
End of explanation
"가".encode("utf-8")
Explanation: What happens if you call encode on a str, or decode on a unicode?
End of explanation
unicode("가", "ascii").encode("utf-8")
u"가".decode("utf-8")
Explanation: The error occurred because we must decode first and then encode, but that did not happen here.
It tried to decode as ASCII, and the ASCII codec raised the error.
End of explanation
u"가".encode("ascii").decode("utf-8")
Explanation: This value is already decoded: a Unicode code point.
The character '가' cannot be encoded as ASCII, because ASCII simply does not contain it.
End of explanation
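For comparison, here is the same round trip in Python 3, where str already holds code points and bytes is the byte string:

```python
# Python 3: str is the unicode type, bytes is the byte string.
s = "가"                   # a unicode string by default
b = s.encode("utf-8")      # str -> bytes
print(b)                   # b'\xea\xb0\x80'
print(b.decode("utf-8"))   # bytes -> str: 가
```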
u"가".encode("utf-8"), u"가".encode("cp949"), "가"
u"가".encode("ascii")
import sys
print(sys.getdefaultencoding())
print(sys.stdin.encoding)
print(sys.stdout.encoding)
import locale
print(locale.getpreferredencoding())
Explanation: Applying the encode method to a str:
Python first tries to convert it to unicode internally
Applying the decode method to a unicode:
Python simply assumes the byte sequence is a string
The difference between decoding and Unicode: the code point is like an ID number; converting to code points is decoding, and encoding is the reverse
Default encoding
End of explanation
End of explanation |
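For comparison — and as an assumption-flagged aside, since the notebook above is Python 2 — the same round trip in Python 3 looks like this, where str always holds Unicode code points and bytes holds encoded data:

```python
# Python 3 sketch (not part of the original Python 2 notebook).
text = "\uac00"                       # the Hangul syllable "가" as a code point
utf8_bytes = text.encode("utf-8")     # code points -> bytes (encoding)
cp949_bytes = text.encode("cp949")    # same code point, different byte sequence

# Decoding reverses the mapping: bytes -> code points.
assert utf8_bytes.decode("utf-8") == text
assert cp949_bytes.decode("cp949") == text

# ASCII has no slot for this character, so encoding to ASCII fails.
try:
    text.encode("ascii")
    ascii_failed = False
except UnicodeEncodeError:
    ascii_failed = True
```

The byte sequences differ between the two encodings even though both decode back to the same code point — exactly the situation the registration-number analogy above describes.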
10,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adaptive histogram
This type of histogram automatically adapts its bins when new values are added. Note that only the fixed-width continuous binning scheme is currently supported.
Step1: Adding single values
Step2: Adding multiple values at once
Step3: Adding two adaptive histograms together | Python Code:
# Necessary import evil
import physt
from physt import h1, h2, histogramdd
import numpy as np
import matplotlib.pyplot as plt
# Create an empty histogram
h = h1(None, "fixed_width", bin_width=10, name="People height", axis_name="cm", adaptive=True)
h
Explanation: Adaptive histogram
This type of histogram automatically adapts its bins when new values are added. Note that only the fixed-width continuous binning scheme is currently supported.
End of explanation
# Add a first value
h.fill(157)
h.plot()
h
# Add a second value
h.fill(173)
h.plot()
# Add a few more values, including weights
h.fill(173, 2)
h.fill(186, 5)
h.fill(188, 3)
h.fill(193, 1)
h.plot(errors=True, show_stats=True);
Explanation: Adding single values
End of explanation
ha = h1(None, "fixed_width", bin_width=10, adaptive=True)
ha.plot(show_stats=True);
# Beginning
ha.fill_n([10, 11, 34])
ha.plot();
# Add a distant value
ha.fill_n([234], weights=[10])
ha.plot(show_stats=True);
# Let's create a huge dataset
values = np.random.normal(130, 20, 100000)
%%time
# Add lots of values (no loop in Python)
hn = h1(None, "fixed_width", bin_width=10, adaptive=True)
hn.fill_n(values)
# ha.plot()
%%time
# Comparison with Python loop
hp = h1(None, "fixed_width", bin_width=10, adaptive=True)
for value in values:
hp.fill(value)
# Hopefully equal results
print("Equal?", hp == hn)
hp.plot(show_stats=True);
Explanation: Adding multiple values at once
End of explanation
ha1 = h1(None, "fixed_width", bin_width=5, adaptive=True)
ha1.fill_n(np.random.normal(100, 10, 1000))
ha2 = h1(None, "fixed_width", bin_width=5, adaptive=True)
ha2.fill_n(np.random.normal(70, 10, 500))
ha = ha1 + ha2
fig, ax= plt.subplots()
ha1.plot(alpha=0.1, ax=ax, label="1", color="red")
ha2.plot(alpha=0.1, ax=ax, label="2")
ha.plot("scatter", label="sum", ax=ax, errors=True)
ax.legend(loc=2); # TODO? Why don't we show the sum???
Explanation: Adding two adaptive histograms together
End of explanation |
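The core idea behind adaptive fixed-width binning can be sketched in a few lines of plain Python (a hypothetical helper for illustration — not physt's actual implementation): a value lands in bin floor(value / bin_width), and the bin range simply grows to cover whatever indices appear:

```python
import math

def adaptive_bin_counts(values, bin_width):
    """Count values into fixed-width bins, growing the bin range as needed."""
    counts = {}
    for v in values:
        idx = math.floor(v / bin_width)      # bin covers [idx*w, (idx+1)*w)
        counts[idx] = counts.get(idx, 0) + 1
    lo, hi = min(counts), max(counts)
    edges = [i * bin_width for i in range(lo, hi + 2)]
    return edges, [counts.get(i, 0) for i in range(lo, hi + 1)]

# Same heights as above: bins appear (and gaps stay empty) automatically.
edges, counts = adaptive_bin_counts([157, 173, 173, 186, 188, 193], 10)
```

Adding a distant value only extends the edge list; existing counts are untouched, which is why two adaptive histograms with different ranges can still be added together.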
10,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10-python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 2
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
x = np.asarray(x)
result = np.zeros((x.shape[0], 10))
result[np.arange(x.shape[0]), x] = 1
return result
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
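For reference, the same one-hot mapping written out in dependency-free Python (an illustrative sketch, not the graded solution): label k becomes a length-10 vector with a single 1 at index k:

```python
def one_hot(labels, n_classes=10):
    # Each label k maps to a vector with a 1 at position k and 0 elsewhere.
    return [[1 if i == label else 0 for i in range(n_classes)]
            for label in labels]

encoded = one_hot([0, 2, 9])
```

The NumPy fancy-indexing version above does the same thing in one vectorized assignment.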
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, ) + image_shape, name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.uint8, shape=(None, n_classes), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_depth = x_tensor.get_shape().as_list()[-1]
conv_strides = (1,) + conv_strides + (1, )
pool_ksize = (1,) + pool_ksize + (1, )
pool_strides = (1,) + pool_strides + (1, )
weights = tf.Variable(tf.random_normal(list(conv_ksize) + [input_depth, conv_num_outputs]))
bias = tf.Variable(tf.zeros([conv_num_outputs]))
x = tf.nn.conv2d(x_tensor, weights, conv_strides, 'SAME')
x = tf.nn.bias_add(x, bias)
x = tf.nn.relu(x)
x = tf.nn.max_pool(x, pool_ksize, pool_strides, 'SAME')
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
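One consequence of "same" padding worth keeping in mind when choosing strides: each convolution or pooling step shrinks a spatial dimension only by its stride, out = ceil(in / stride). A quick sketch of that arithmetic (illustrative numbers, not values prescribed by the project):

```python
import math

def same_pad_out(size, stride):
    # With SAME padding the output size depends only on the stride.
    return math.ceil(size / stride)

# A 32x32 CIFAR-10 image through a stride-2 conv then a stride-2 max pool:
after_conv = same_pad_out(32, 2)          # 16
after_pool = same_pad_out(after_conv, 2)  # 8
```

This is handy for checking how many conv/pool stages the 32x32 input can survive before a dimension collapses to 1.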
from tensorflow.contrib.layers.python import layers
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return layers.flatten(x_tensor)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
from tensorflow.contrib.layers.python import layers
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
x = layers.fully_connected(x_tensor, num_outputs)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
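The reason no activation is applied here is that tf.nn.softmax_cross_entropy_with_logits applies softmax itself, in a numerically stable way. A small pure-Python sketch of what softmax does to raw logits (for intuition only):

```python
import math

def softmax(logits):
    m = max(logits)                        # subtract the max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])           # larger logit -> larger probability
```

Applying softmax twice (once in the layer, once in the loss) would squash the gradients, which is why the output layer returns raw logits.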
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = (2, 2)
conv_strides = (2, 2)
pool_ksize = (2, 2)
pool_strides = (2, 2)
conv_output = 32
x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)
# x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)
# x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
x = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x = fully_conn(x, 4096)
# x = tf.nn.relu(x)
x = tf.nn.dropout(x, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
num_outputs = 10
x = output(x, num_outputs)
return x
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
accuracy_val = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Cost: %f, Accuracy: %.2f%%' % (cost_val, accuracy_val * 100))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 256
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
10,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Global Imports
Step1: External Package Imports
Step2: Tweaking Display Parameters
Load default custom.css file from ipython profile
Step3: Pandas display parameters
Step4: Tweaking color scheme
Step5: Global Run Variables
<span class='alert alert-success' style='font-size | Python Code:
%pylab inline
Explanation: Global Imports
End of explanation
import os as os
import pickle as pickle
import pandas as pd
print 'changing to source directory'
os.chdir('../src')
import Data.Firehose as FH
Explanation: External Package Imports
End of explanation
from IPython import utils
from IPython.display import HTML
css_file = 'profile_default/static/custom/custom.css'
base = utils.path.get_ipython_dir()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base, css_file),'r').read())
display(HTML(styles))
Explanation: Tweaking Display Parameters
Load default custom.css file from ipython profile
End of explanation
pd.set_option('precision', 3)
pd.set_option('display.width', 300)
plt.rcParams['font.size'] = 12
Explanation: Pandas display parameters
End of explanation
'''Color schemes for paper taken from http://colorbrewer2.org/'''
colors = plt.rcParams['axes.color_cycle']
colors_st = ['#CA0020', '#F4A582', '#92C5DE', '#0571B0']
colors_th = ['#E66101', '#FDB863', '#B2ABD2', '#5E3C99']
Explanation: Tweaking color scheme
End of explanation
OUT_PATH = '../Data'
RUN_DATE = '2014_07_15'
VERSION = 'all'
CANCER = 'HNSC'
FIGDIR = '../Figures/'
if not os.path.isdir(FIGDIR):
os.makedirs(FIGDIR)
DESCRIPTION = '''Updating analysis for updated dataset.'''
PARAMETERS = {'min_patients' : 12,
'pathway_file' : '../ExtraData/c2.cp.v3.0.symbols_edit.csv'
}
#GENE_POS = pd.read_csv('../ExtraData/HGNC_chr_pos.txt')
#GENE_LIST_FILE = '../ExtraData/HGNC_Genes'
#GENES = open(GENE_LIST_FILE, 'rb').read().split('\n')
Explanation: Global Run Variables
<span class='alert alert-success' style='font-size:120%'>Change OUT_PATH to a directory on your machine where you want to store the data</span>
End of explanation |
10,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross-Validation
We follow Rosser et al. and use a maximum-likelihood approach to finding the "best" parameters for the time and space bandwidths.
Use a "training" dataset of 180 days
For each of the next 60 days we compute the "risk" using data from the start of the 180 days up to, but not including, the current day.
Then for the current day, we compute the log likelihood using the actual events which occurred.
Following Rosser et al. if an event occurs at a location which had 0 risk, we convert this to (log value) -27.6
Step1: With the "fast" exact caching predictor
Uses a lot of memory...
Step2: Visualise
We see something a bit different to the paper of Rosser et al.
Plot only values above the 25% percentile
We find a "blob" shape, which seems different from the paper
E.g. 2000m and 100 days seems no better than a very small time/space window, whereas Rosser et al find seemingly no drop-off.
We find the maximum likelihood at 500 m and 55 days, a tighter bandwidth than Rosser et al.
This of course might simply be a genuine difference in the data. Rosser et al use London, UK data, and this is from Chicago.
Step3: With the approximate predictor
This takes a couple of days to run... (single threaded; but without shared memory, this would be hard to multi-process. Damn Python...)
Thought about serialising the actual risks, but a quick estimate shows that this will be around 30 GB in size!
Step4: Try to visualise
Exactly the same methodology as above. We see what we might expect, given that this is an approximation that would only be completely accurate in the absence of loops in the geometry. Namely, the maximum likelihood occurs for a slightly shorter space distance and a slightly longer time distance. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.collections
import numpy as np
import shelve
import open_cp.network
import open_cp.geometry
import open_cp.network_hotspot
import open_cp.logger
open_cp.logger.log_to_true_stdout()
import pickle, lzma
with lzma.open("input_old.pic.xz", "rb") as f:
timed_points = pickle.load(f)
with open("input.graph", "rb") as f:
graph = open_cp.network.PlanarGraph.from_bytes(f.read())
trainer = open_cp.network_hotspot.Trainer()
trainer.graph = graph
trainer.maximum_edge_length = 20
trainer.data = timed_points
predictor = trainer.compile()
def log_likelihood(result, network_timed_points):
logli = 0
for s, e in zip(network_timed_points.start_keys, network_timed_points.end_keys):
edge_index, _ = result.graph.find_edge(s,e)
if result.risks[edge_index] == 0:
logli -= 27.6
else:
logli += np.log(result.risks[edge_index])
return logli
timed_points.time_range
tstart = np.datetime64("2013-01-01")
tend = np.datetime64("2013-01-01") + np.timedelta64(180, "D")
def score(predictor):
out = 0
risks = dict()
for day in range(60):
start = tend + np.timedelta64(1, "D") * day
end = tend + np.timedelta64(1, "D") * (day + 1)
result = predictor.predict(cutoff_time=tstart, predict_time=start)
ntp = predictor.network_timed_points
mask = (ntp.timestamps > start) & (ntp.timestamps <= end)
ntp = ntp[mask]
out += log_likelihood(result, ntp)
risks[start] = result.risks
return out, risks
Explanation: Cross-Validation
We follow Rosser et al. and use a maximum-likelihood approach to finding the "best" parameters for the time and space bandwidths.
Use a "training" dataset of 180 days
For each of the next 60 days we compute the "risk" using data from the start of the 180 days up to, but not including, the current day.
Then for the current day, we compute the log likelihood using the actual events which occurred.
Following Rosser et al. if an event occurs at a location which had 0 risk, we convert this to (log value) -27.6
End of explanation
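The scoring rule described above can be sketched in isolation. This is a minimal stand-in, not the notebook's exact function: the -27.6 floor is the value taken from the text, while the per-event risk values are made up for illustration.

```python
import numpy as np

# A minimal sketch of the scoring rule: sum the log-risk over the day's
# events, substituting -27.6 for events on zero-risk edges (Rosser et al.).
def floored_log_likelihood(event_risks, floor=-27.6):
    total = 0.0
    for r in event_risks:
        total += floor if r == 0 else float(np.log(r))
    return total

# Hypothetical per-event risk values for a single day
example = floored_log_likelihood([0.5, 0.1, 0.0, 0.2])
```

A single zero-risk event therefore costs a fixed penalty instead of sending the whole score to minus infinity.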
predictor = open_cp.network_hotspot.FastPredictor(predictor, 2000)
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
results = dict()
for sl in space_lengths:
predictor.kernel = open_cp.network_hotspot.TriangleKernel(sl)
for tl in time_lengths:
predictor.time_kernel = open_cp.network_hotspot.ExponentialTimeKernel(tl)
results[ (sl, tl) ], _ = score(predictor)
with open("cross_validate.pic", "wb") as f:
pickle.dump(results, f)
Explanation: With the "fast" exact caching predictor
Uses a lot of memory...
End of explanation
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
with open("cross_validate.pic", "rb") as f:
results = pickle.load(f)
data = np.empty((len(space_lengths), len(time_lengths)))
for i, sl in enumerate(space_lengths):
for j, tl in enumerate(time_lengths):
data[i,j] = results[(sl,tl)]
ordered = data.copy().ravel()
ordered.sort()
cutoff = ordered[int(len(ordered) * 0.25)]
data = np.ma.masked_where(data<cutoff, data)
fig, ax = plt.subplots(figsize=(8,6))
mappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap="Blues")
ax.set(xlabel="Time (days)", ylabel="Space (meters)")
fig.colorbar(mappable, ax=ax)
None
print(max(results.values()))
[k for k, v in results.items() if v > -5645]
Explanation: Visualise
We see something a bit different to the paper of Rosser et al.
Plot only values above the 25% percentile
We find a "blob" shape, which seems different from the paper
E.g. 2000m and 100 days seems no better than a very small time/space window, whereas Rosser et al find seemingly no drop-off.
We find the maximum likelihood at 500 m and 55 days, a tighter bandwidth than Rosser et al.
This of course might simply be a genuine difference in the data. Rosser et al use London, UK data, and this is from Chicago.
End of explanation
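The masking used for the plot above can be shown on a toy grid. This is a sketch only; the 2x4 array stands in for the real bandwidth-grid of log-likelihoods.

```python
import numpy as np

# Hide every cell below the value at the 25% rank, so only the better
# bandwidth combinations are drawn (mirrors the cutoff logic above).
values = np.array([[1.0, 2.0, 3.0, 4.0],
                   [5.0, 6.0, 7.0, 8.0]])
ordered = np.sort(values.ravel())
cutoff = ordered[int(len(ordered) * 0.25)]           # value at the 25% rank
masked = np.ma.masked_where(values < cutoff, values)
```

`pcolor` then simply skips masked cells, which is what produces the white region in the heatmap.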
pred = open_cp.network_hotspot.ApproxPredictorCaching(predictor)
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
results = dict()
for sl in space_lengths:
pred.kernel = open_cp.network_hotspot.TriangleKernel(sl)
for tl in time_lengths:
pred.time_kernel = open_cp.network_hotspot.ExponentialTimeKernel(tl)
key = (sl, tl)
results[key], _ = score(pred)
with open("cross_validate_approx.pic", "wb") as f:
pickle.dump(results, f)
Explanation: With the approximate predictor
This takes a couple of days to run... (single threaded; but without shared memory, this would be hard to multi-process. Damn Python...)
Thought about serialising the actual risks, but a quick estimate shows that this will be around 30 GB in size!
End of explanation
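A back-of-envelope check of the "~30 GB" estimate above. Every count here is an assumption for illustration (the true edge count after subdivision is not shown in this notebook), not a measured value.

```python
# Hypothetical sizing: one float64 risk per edge, per prediction day,
# per bandwidth combination.
n_param_combos = 39 * 19      # the space x time bandwidth grid used above
n_days = 60                   # prediction days scored per combination
n_edges = 90_000              # hypothetical number of subdivided edges
bytes_per_risk = 8            # float64
total_gb = n_param_combos * n_days * n_edges * bytes_per_risk / 1e9
```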
time_lengths = list(range(5,100,5))
space_lengths = list(range(50, 2000, 50))
with open("cross_validate_approx.pic", "rb") as f:
results = pickle.load(f)
len(space_lengths), len(time_lengths)
data = np.empty((len(space_lengths), len(time_lengths)))
for i, sl in enumerate(space_lengths):
for j, tl in enumerate(time_lengths):
data[i,j] = results[(sl,tl)]
ordered = data.copy().ravel()
ordered.sort()
cutoff = ordered[int(len(ordered) * 0.25)]
data = np.ma.masked_where(data<cutoff, data)
fig, ax = plt.subplots(figsize=(8,6))
mappable = ax.pcolor(range(5,105,5), range(50,2050,50), data, cmap="Blues")
ax.set(xlabel="Time (days)", ylabel="Space (meters)")
fig.colorbar(mappable, ax=ax)
None
print(max(results.values()))
[k for k, v in results.items() if v > -5758]
Explanation: Try to visualise
Exactly the same methodology as above: we see what we might expect, given that this is an approximation that would only be completely accurate in the absence of loops in the geometry. Namely, the maximum likelihood occurs for a slightly shorter space distance and a slightly longer time distance.
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
tsam - Optimal combination of segments and periods for building supply systems
Date
Step1: Input data
Read in time series from testdata.csv with pandas
Step2: Create a plot function for a visual comparison of the time series
Step3: Plot an example series - in this case the temperature
Step4: Tune a hierarchical aggregation with segments in combination with duration representation
Step5: And determine the Pareto-optimal aggregation up to 100 total time steps. This may take some time...
Step6: And show the results for the last aggregation | Python Code:
%load_ext autoreload
%autoreload 2
import copy
import os
import pandas as pd
import matplotlib.pyplot as plt
import tsam.timeseriesaggregation as tsam
import tsam.hyperparametertuning as tune
import tqdm
%matplotlib inline
Explanation: tsam - Optimal combination of segments and periods for building supply systems
Date: 29.05.2022
Author: Leander Kotzur
Import pandas and the relevant time series aggregation class
End of explanation
raw = pd.read_csv('testdata.csv', index_col = 0)
raw=raw.rename(columns={'T': 'Temperature [°C]', 'Load':'Load [kW]', 'Wind':'Wind [m/s]', 'GHI': 'Solar [W/m²]'})
raw.drop(columns=['Wind [m/s]',], inplace=True)
Explanation: Input data
Read in time series from testdata.csv with pandas
End of explanation
def plotTS(plot_data, raw_data, periodlength=24):
fig, axes = plt.subplots(figsize = [7, 6], dpi = 100, nrows = raw_data.shape[1], ncols = 1)
for i, column in enumerate(raw.columns):
data = plot_data[column]
stacked, timeindex = tsam.unstackToPeriods(copy.deepcopy(data), periodlength)
cax = axes[i].imshow(stacked.values.T, interpolation = 'nearest', vmin = raw_data[column].min(), vmax = raw_data[column].max(), origin='lower')
axes[i].set_aspect('auto')
axes[i].set_ylabel('Hour')
plt.xlabel('Day in the year')
cbar=plt.colorbar(cax, ax=axes[i], pad=0.01, aspect=7)
cbar.set_label(column)
fig.subplots_adjust(right = 1.1, hspace = 0.05)
Explanation: Create a plot function for a visual comparison of the time series
End of explanation
plotTS(raw,raw,periodlength=24)
Explanation: Plot an example series - in this case the temperature
End of explanation
tunedAggregations = tune.HyperTunedAggregations(
tsam.TimeSeriesAggregation(
raw,
hoursPerPeriod=24,
clusterMethod="hierarchical",
representationMethod="durationRepresentation",
distributionPeriodWise=False,
rescaleClusterPeriods=False,
segmentation=True,
)
)
Explanation: Tune a hierarchical aggregation with segments in combination with duration representation
End of explanation
tunedAggregations.identifyParetoOptimalAggregation(untilTotalTimeSteps=100)
Explanation: And determine the Pareto-optimal aggregation up to 100 total time steps. This may take some time...
End of explanation
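The quantity the tuner sweeps is the number of represented time steps, i.e. (typical periods) x (segments per period). The two history lists below are hypothetical stand-ins for the tuner's `_periodHistory` and `_segmentHistory` attributes used in this notebook.

```python
# Hypothetical period/segment combinations along the Pareto front; the
# product must stay at or below the cap of 100 total time steps passed to
# identifyParetoOptimalAggregation above.
period_history = [2, 4, 7, 10]
segment_history = [4, 6, 8, 10]
total_steps = [p * s for p, s in zip(period_history, segment_history)]
```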
predictedPeriods = tunedAggregations.aggregationHistory[-1].predictOriginalData()
plotTS(predictedPeriods,raw,periodlength=24)
tunedAggregations._segmentHistory[-1]
tunedAggregations._periodHistory[-1]
aggregation=tsam.TimeSeriesAggregation(
raw,
hoursPerPeriod=24,
noSegments=8,
noTypicalPeriods=14,
clusterMethod="hierarchical",
rescaleClusterPeriods=False,
segmentation=True,
representationMethod="distributionAndMinMaxRepresentation",
distributionPeriodWise=False
)
plotTS(aggregation.predictOriginalData(), raw,periodlength=24)
Explanation: And show the results for the last aggregation
End of explanation |
Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this
Step2: 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step13: Expected Output
Step15: We have already implemented a 3-layer neural network. You will train it with
Step16: You will now run this 3 layer neural network with each of the 3 optimization methods.
5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
Step17: 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
Step18: 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
import math
import sklearn
import sklearn.datasets
from opt_utils import load_params_and_grads, initialize_parameters, forward_propagation, backward_propagation
from opt_utils import compute_cost, predict, predict_dec, plot_decision_boundary, load_dataset
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Optimization Methods
Until now, you've always used Gradient Descent to update the parameters and minimize the cost. In this notebook, you will learn more advanced optimization methods that can speed up learning and perhaps even get you to a better final value for the cost function. Having a good optimization algorithm can be the difference between waiting days vs. just a few hours to get a good result.
Gradient descent goes "downhill" on a cost function $J$. Think of it as trying to do this:
<img src="images/cost.jpg" style="width:650px;height:300px;">
<caption><center> <u> Figure 1 </u>: Minimizing the cost is like finding the lowest point in a hilly landscape<br> At each step of the training, you update your parameters following a certain direction to try to get to the lowest possible point. </center></caption>
Notations: As usual, $\frac{\partial J}{\partial a } = $ da for any variable a.
To get started, run the following code to import the libraries you will need.
End of explanation
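The "going downhill" picture can be reproduced in one dimension. This sketch applies the update rule from equations (1)-(2) above to J(w) = w², whose gradient is dJ/dw = 2w; the starting point and learning rate are arbitrary choices.

```python
# 1-D gradient descent on J(w) = w**2: repeated steps shrink w toward the
# minimum at w = 0.
learning_rate = 0.1
w = 5.0
for _ in range(100):
    dw = 2 * w                  # gradient of J at the current w
    w = w - learning_rate * dw  # w := w - alpha * dw
```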
# GRADED FUNCTION: update_parameters_with_gd
def update_parameters_with_gd(parameters, grads, learning_rate):
    """
    Update parameters using one step of gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters to be updated:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients to update each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    learning_rate -- the learning rate, scalar.

    Returns:
    parameters -- python dictionary containing your updated parameters
    """
L = len(parameters) // 2 # number of layers in the neural networks
# Update rule for each parameter
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads['dW' + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads['db' + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads, learning_rate = update_parameters_with_gd_test_case()
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 1 - Gradient Descent
A simple optimization method in machine learning is gradient descent (GD). When you take gradient steps with respect to all $m$ examples on each step, it is also called Batch Gradient Descent.
Warm-up exercise: Implement the gradient descent update rule. The gradient descent rule is, for $l = 1, ..., L$:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{1}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{2}$$
where L is the number of layers and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.
End of explanation
# GRADED FUNCTION: random_mini_batches
def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """
np.random.seed(seed) # To make your "random" minibatches the same as ours
m = X.shape[1] # number of training examples
mini_batches = []
# Step 1: Shuffle (X, Y)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
shuffled_Y = Y[:, permutation].reshape((1,m))
# Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
for k in range(0, num_complete_minibatches):
### START CODE HERE ### (approx. 2 lines)
mini_batch_X = shuffled_X[:,k * mini_batch_size:(k + 1) * mini_batch_size]
mini_batch_Y = shuffled_Y[:,k * mini_batch_size:(k + 1) * mini_batch_size]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
# Handling the end case (last mini-batch < mini_batch_size)
if m % mini_batch_size != 0:
### START CODE HERE ### (approx. 2 lines)
end = m - mini_batch_size * math.floor(m / mini_batch_size)
mini_batch_X = shuffled_X[:,num_complete_minibatches * mini_batch_size:]
mini_batch_Y = shuffled_Y[:,num_complete_minibatches * mini_batch_size:]
### END CODE HERE ###
mini_batch = (mini_batch_X, mini_batch_Y)
mini_batches.append(mini_batch)
return mini_batches
X_assess, Y_assess, mini_batch_size = random_mini_batches_test_case()
mini_batches = random_mini_batches(X_assess, Y_assess, mini_batch_size)
print ("shape of the 1st mini_batch_X: " + str(mini_batches[0][0].shape))
print ("shape of the 2nd mini_batch_X: " + str(mini_batches[1][0].shape))
print ("shape of the 3rd mini_batch_X: " + str(mini_batches[2][0].shape))
print ("shape of the 1st mini_batch_Y: " + str(mini_batches[0][1].shape))
print ("shape of the 2nd mini_batch_Y: " + str(mini_batches[1][1].shape))
print ("shape of the 3rd mini_batch_Y: " + str(mini_batches[2][1].shape))
print ("mini batch sanity check: " + str(mini_batches[0][0][0][0:3]))
Explanation: Expected Output:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63535156 -0.62320365 -0.53718766]
[-1.07799357 0.85639907 -2.29470142]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74604067]
[-0.75184921]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32171798 -0.25467393 1.46902454]
[-2.05617317 -0.31554548 -0.3756023 ]
[ 1.1404819 -1.09976462 -0.1612551 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88020257]
[ 0.02561572]
[ 0.57539477]] </td>
</tr>
</table>
A variant of this is Stochastic Gradient Descent (SGD), which is equivalent to mini-batch gradient descent where each mini-batch has just 1 example. The update rule that you have just implemented does not change. What changes is that you would be computing gradients on just one training example at a time, rather than on the whole training set. The code examples below illustrate the difference between stochastic gradient descent and (batch) gradient descent.
(Batch) Gradient Descent:
``` python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
# Forward propagation
a, caches = forward_propagation(X, parameters)
# Compute cost.
cost = compute_cost(a, Y)
# Backward propagation.
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
```
Stochastic Gradient Descent:
python
X = data_input
Y = labels
parameters = initialize_parameters(layers_dims)
for i in range(0, num_iterations):
for j in range(0, m):
# Forward propagation
a, caches = forward_propagation(X[:,j], parameters)
# Compute cost
cost = compute_cost(a, Y[:,j])
# Backward propagation
grads = backward_propagation(a, caches, parameters)
# Update parameters.
parameters = update_parameters(parameters, grads)
In Stochastic Gradient Descent, you use only 1 training example before updating the gradients. When the training set is large, SGD can be faster. But the parameters will "oscillate" toward the minimum rather than converge smoothly. Here is an illustration of this:
<img src="images/kiank_sgd.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : SGD vs GD<br> "+" denotes a minimum of the cost. SGD leads to many oscillations to reach convergence. But each step is a lot faster to compute for SGD than for GD, as it uses only one training example (vs. the whole batch for GD). </center></caption>
Note also that implementing SGD requires 3 for-loops in total:
1. Over the number of iterations
2. Over the $m$ training examples
3. Over the layers (to update all parameters, from $(W^{[1]},b^{[1]})$ to $(W^{[L]},b^{[L]})$)
In practice, you'll often get faster results if you do not use neither the whole training set, nor only one training example, to perform each update. Mini-batch gradient descent uses an intermediate number of examples for each step. With mini-batch gradient descent, you loop over the mini-batches instead of looping over individual training examples.
<img src="images/kiank_minibatch.png" style="width:750px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 2 </u>: <font color='purple'> SGD vs Mini-Batch GD<br> "+" denotes a minimum of the cost. Using mini-batches in your optimization algorithm often leads to faster optimization. </center></caption>
<font color='blue'>
What you should remember:
- The difference between gradient descent, mini-batch gradient descent and stochastic gradient descent is the number of examples you use to perform one update step.
- You have to tune a learning rate hyperparameter $\alpha$.
- With a well-tuned mini-batch size, it usually outperforms either gradient descent or stochastic gradient descent (particularly when the training set is large).
2 - Mini-Batch Gradient descent
Let's learn how to build mini-batches from the training set (X, Y).
There are two steps:
- Shuffle: Create a shuffled version of the training set (X, Y) as shown below. Each column of X and Y represents a training example. Note that the random shuffling is done synchronously between X and Y. Such that after the shuffling the $i^{th}$ column of X is the example corresponding to the $i^{th}$ label in Y. The shuffling step ensures that examples will be split randomly into different mini-batches.
<img src="images/kiank_shuffle.png" style="width:550px;height:300px;">
Partition: Partition the shuffled (X, Y) into mini-batches of size mini_batch_size (here 64). Note that the number of training examples is not always divisible by mini_batch_size. The last mini batch might be smaller, but you don't need to worry about this. When the final mini-batch is smaller than the full mini_batch_size, it will look like this:
<img src="images/kiank_partition.png" style="width:550px;height:300px;">
Exercise: Implement random_mini_batches. We coded the shuffling part for you. To help you with the partitioning step, we give you the following code that selects the indexes for the $1^{st}$ and $2^{nd}$ mini-batches:
python
first_mini_batch_X = shuffled_X[:, 0 : mini_batch_size]
second_mini_batch_X = shuffled_X[:, mini_batch_size : 2 * mini_batch_size]
...
Note that the last mini-batch might end up smaller than mini_batch_size=64. Let $\lfloor s \rfloor$ represent $s$ rounded down to the nearest integer (this is math.floor(s) in Python). If the total number of examples is not a multiple of mini_batch_size=64 then there will be $\lfloor \frac{m}{\text{mini\_batch\_size}} \rfloor$ mini-batches with a full 64 examples, and the number of examples in the final mini-batch will be ($m - \text{mini\_batch\_size} \times \lfloor \frac{m}{\text{mini\_batch\_size}} \rfloor$).
End of explanation
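The shuffle and partition steps above can be walked through on a toy array. This sketch uses m = 5 examples and mini_batch_size = 2, so the final mini-batch holds the 1-example remainder.

```python
import numpy as np

# Shuffle columns synchronously, then slice off fixed-size mini-batches;
# the last batch keeps whatever remains.
np.random.seed(0)
m, mini_batch_size = 5, 2
X = np.arange(10).reshape(2, m)          # 2 features, 5 examples (columns)
permutation = list(np.random.permutation(m))
shuffled_X = X[:, permutation]
num_batches = int(np.ceil(m / mini_batch_size))
mini_batches = [shuffled_X[:, k * mini_batch_size:(k + 1) * mini_batch_size]
                for k in range(num_batches)]
```

In the graded function the same slicing is applied to Y with the same permutation, which keeps labels aligned with their examples.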
# GRADED FUNCTION: initialize_velocity
def initialize_velocity(parameters):
    """
    Initializes the velocity as a python dictionary with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl

    Returns:
    v -- python dictionary containing the current velocity.
                    v['dW' + str(l)] = velocity of dWl
                    v['db' + str(l)] = velocity of dbl
    """
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
# Initialize velocity
for l in range(L):
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0],parameters['W' + str(l+1)].shape[1]))
v["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0],1))
### END CODE HERE ###
return v
parameters = initialize_velocity_test_case()
v = initialize_velocity(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td > **shape of the 1st mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_X** </td>
<td > (12288, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_X** </td>
<td > (12288, 20) </td>
</tr>
<tr>
<td > **shape of the 1st mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 2nd mini_batch_Y** </td>
<td > (1, 64) </td>
</tr>
<tr>
<td > **shape of the 3rd mini_batch_Y** </td>
<td > (1, 20) </td>
</tr>
<tr>
<td > **mini batch sanity check** </td>
<td > [ 0.90085595 -0.7612069 0.2344157 ] </td>
</tr>
</table>
<font color='blue'>
What you should remember:
- Shuffling and Partitioning are the two steps required to build mini-batches
- Powers of two are often chosen to be the mini-batch size, e.g., 16, 32, 64, 128.
3 - Momentum
Because mini-batch gradient descent makes a parameter update after seeing just a subset of examples, the direction of the update has some variance, and so the path taken by mini-batch gradient descent will "oscillate" toward convergence. Using momentum can reduce these oscillations.
Momentum takes into account the past gradients to smooth out the update. We will store the 'direction' of the previous gradients in the variable $v$. Formally, this will be the exponentially weighted average of the gradient on previous steps. You can also think of $v$ as the "velocity" of a ball rolling downhill, building up speed (and momentum) according to the direction of the gradient/slope of the hill.
<img src="images/opt_momentum.png" style="width:400px;height:250px;">
<caption><center> <u><font color='purple'>Figure 3</u><font color='purple'>: The red arrows shows the direction taken by one step of mini-batch gradient descent with momentum. The blue points show the direction of the gradient (with respect to the current mini-batch) on each step. Rather than just following the gradient, we let the gradient influence $v$ and then take a step in the direction of $v$.<br> <font color='black'> </center>
Exercise: Initialize the velocity. The velocity, $v$, is a python dictionary that needs to be initialized with arrays of zeros. Its keys are the same as those in the grads dictionary, that is:
for $l =1,...,L$:
python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
Note that the iterator l starts at 0 in the for loop while the first parameters are v["dW1"] and v["db1"] (that's a "one" on the superscript). This is why we are shifting l to l+1 in the for loop.
End of explanation
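The smoothing effect of the velocity can be seen with a single parameter. This sketch feeds a deliberately oscillating gradient (synthetic values) through the update rule from equation (3) above: the averaged velocity has a much smaller amplitude than the raw gradient.

```python
# Exponentially weighted average of alternating +/-1 gradients.
beta = 0.9
gradients = [1.0, -1.0, 1.0, -1.0]   # oscillating gradients
v = 0.0
velocities = []
for g in gradients:
    v = beta * v + (1 - beta) * g    # v := beta * v + (1 - beta) * dW
    velocities.append(v)
```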
# GRADED FUNCTION: update_parameters_with_momentum
def update_parameters_with_momentum(parameters, grads, v, beta, learning_rate):
    """
    Update parameters using Momentum

    Arguments:
    parameters -- python dictionary containing your parameters:
                    parameters['W' + str(l)] = Wl
                    parameters['b' + str(l)] = bl
    grads -- python dictionary containing your gradients for each parameters:
                    grads['dW' + str(l)] = dWl
                    grads['db' + str(l)] = dbl
    v -- python dictionary containing the current velocity:
                    v['dW' + str(l)] = ...
                    v['db' + str(l)] = ...
    beta -- the momentum hyperparameter, scalar
    learning_rate -- the learning rate, scalar

    Returns:
    parameters -- python dictionary containing your updated parameters
    v -- python dictionary containing your updated velocities
    """
L = len(parameters) // 2 # number of layers in the neural networks
# Momentum update for each parameter
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
# compute velocities
v["dW" + str(l + 1)] = beta * v["dW" + str(l + 1)] + (1 - beta) * grads['dW' + str(l + 1)]
v["db" + str(l + 1)] = beta * v["db" + str(l + 1)] + (1 - beta) * grads['db' + str(l + 1)]
# update parameters
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * v["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * v["db" + str(l + 1)]
### END CODE HERE ###
return parameters, v
parameters, grads, v = update_parameters_with_momentum_test_case()
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta = 0.9, learning_rate = 0.01)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
Exercise: Now, implement the parameters update with momentum. The momentum update rule is, for $l = 1, ..., L$:
$$ \begin{cases}
v_{dW^{[l]}} = \beta v_{dW^{[l]}} + (1 - \beta) dW^{[l]} \
W^{[l]} = W^{[l]} - \alpha v_{dW^{[l]}}
\end{cases}\tag{3}$$
$$\begin{cases}
v_{db^{[l]}} = \beta v_{db^{[l]}} + (1 - \beta) db^{[l]} \
b^{[l]} = b^{[l]} - \alpha v_{db^{[l]}}
\end{cases}\tag{4}$$
where L is the number of layers, $\beta$ is the momentum and $\alpha$ is the learning rate. All parameters should be stored in the parameters dictionary. Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$ (that's a "one" on the superscript). So you will need to shift l to l+1 when coding.
End of explanation
# GRADED FUNCTION: initialize_adam
def initialize_adam(parameters) :
    """
    Initializes v and s as two python dictionaries with:
                - keys: "dW1", "db1", ..., "dWL", "dbL"
                - values: numpy arrays of zeros of the same shape as the corresponding gradients/parameters.

    Arguments:
    parameters -- python dictionary containing your parameters.
                    parameters["W" + str(l)] = Wl
                    parameters["b" + str(l)] = bl

    Returns:
    v -- python dictionary that will contain the exponentially weighted average of the gradient.
                    v["dW" + str(l)] = ...
                    v["db" + str(l)] = ...
    s -- python dictionary that will contain the exponentially weighted average of the squared gradient.
                    s["dW" + str(l)] = ...
                    s["db" + str(l)] = ...
    """
L = len(parameters) // 2 # number of layers in the neural networks
v = {}
s = {}
# Initialize v, s. Input: "parameters". Outputs: "v, s".
for l in range(L):
### START CODE HERE ### (approx. 4 lines)
v["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0],parameters['W' + str(l+1)].shape[1]))
v["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0],1))
s["dW" + str(l+1)] = np.zeros((parameters['W' + str(l+1)].shape[0],parameters['W' + str(l+1)].shape[1]))
s["db" + str(l+1)] = np.zeros((parameters['b' + str(l+1)].shape[0],1))
### END CODE HERE ###
return v, s
parameters = initialize_adam_test_case()
v, s = initialize_adam(parameters)
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td > **W1** </td>
<td > [[ 1.62544598 -0.61290114 -0.52907334]
[-1.07347112 0.86450677 -2.30085497]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.74493465]
[-0.76027113]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.31930698 -0.24990073 1.4627996 ]
[-2.05974396 -0.32173003 -0.38320915]
[ 1.13444069 -1.0998786 -0.1713109 ]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.87809283]
[ 0.04055394]
[ 0.58207317]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]</td>
</tr>
</table>
Note that:
- The velocity is initialized with zeros. So the algorithm will take a few iterations to "build up" velocity and start to take bigger steps.
- If $\beta = 0$, then this just becomes standard gradient descent without momentum.
How do you choose $\beta$?
The larger the momentum $\beta$ is, the smoother the update, because it takes more of the past gradients into account. But if $\beta$ is too big, it could also smooth out the updates too much.
Common values for $\beta$ range from 0.8 to 0.999. If you don't feel inclined to tune this, $\beta = 0.9$ is often a reasonable default.
Tuning the optimal $\beta$ for your model might require trying several values to see what works best in terms of reducing the value of the cost function $J$.
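To see the smoothing effect of $\beta$ concretely, here is a small standalone sketch on synthetic noisy data (illustrative only, not part of the assignment):

```python
import numpy as np

rng = np.random.default_rng(0)
noisy = np.sin(np.linspace(0, 3, 200)) + 0.3 * rng.standard_normal(200)

def ewa(x, beta):
    """Exponentially weighted average -- the same recursion momentum applies to gradients."""
    v, out = 0.0, []
    for g in x:
        v = beta * v + (1 - beta) * g
        out.append(v)
    return np.array(out)

smooth_09 = ewa(noisy, beta=0.9)
smooth_05 = ewa(noisy, beta=0.5)

# A larger beta averages over more of the past, so successive values change less:
print(np.std(np.diff(smooth_09)) < np.std(np.diff(smooth_05)))   # True
```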
<font color='blue'>
What you should remember:
- Momentum takes past gradients into account to smooth out the steps of gradient descent. It can be applied with batch gradient descent, mini-batch gradient descent or stochastic gradient descent.
- You have to tune a momentum hyperparameter $\beta$ and a learning rate $\alpha$.
4 - Adam
Adam is one of the most effective optimization algorithms for training neural networks. It combines ideas from RMSProp (described in lecture) and Momentum.
How does Adam work?
1. It calculates an exponentially weighted average of past gradients, and stores it in variables $v$ (before bias correction) and $v^{corrected}$ (with bias correction).
2. It calculates an exponentially weighted average of the squares of the past gradients, and stores it in variables $s$ (before bias correction) and $s^{corrected}$ (with bias correction).
3. It updates parameters in a direction based on combining information from "1" and "2".
The update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{dW^{[l]}} = \beta_1 v_{dW^{[l]}} + (1 - \beta_1) \frac{\partial \mathcal{J} }{ \partial W^{[l]} } \\
v^{corrected}_{dW^{[l]}} = \frac{v_{dW^{[l]}}}{1 - (\beta_1)^t} \\
s_{dW^{[l]}} = \beta_2 s_{dW^{[l]}} + (1 - \beta_2) \left(\frac{\partial \mathcal{J} }{\partial W^{[l]} }\right)^2 \\
s^{corrected}_{dW^{[l]}} = \frac{s_{dW^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{dW^{[l]}}}{\sqrt{s^{corrected}_{dW^{[l]}}} + \varepsilon}
\end{cases}$$
where:
- t counts the number of steps taken of Adam
- L is the number of layers
- $\beta_1$ and $\beta_2$ are hyperparameters that control the two exponentially weighted averages.
- $\alpha$ is the learning rate
- $\varepsilon$ is a very small number to avoid dividing by zero
As usual, we will store all parameters in the parameters dictionary
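A one-variable numeric sketch of a single Adam step (standalone NumPy, illustrative values) shows what the bias correction buys at $t = 1$:

```python
import numpy as np

beta1, beta2, alpha, eps = 0.9, 0.999, 0.1, 1e-8
w, dw = 0.0, 2.0        # hypothetical parameter and gradient
v = s = 0.0             # both moment estimates start at zero
t = 1

v = beta1 * v + (1 - beta1) * dw          # 0.2   -- biased toward 0 at small t
s = beta2 * s + (1 - beta2) * dw**2       # 0.004
v_hat = v / (1 - beta1**t)                # 2.0   -- bias correction recovers the gradient scale
s_hat = s / (1 - beta2**t)                # 4.0
w = w - alpha * v_hat / (np.sqrt(s_hat) + eps)

print(v_hat, s_hat)   # 2.0 4.0 (up to floating point)
print(w)              # ~ -0.1: the step magnitude is ~alpha, largely independent of gradient scale
```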
Exercise: Initialize the Adam variables $v, s$ which keep track of the past information.
Instruction: The variables $v, s$ are python dictionaries that need to be initialized with arrays of zeros. Their keys are the same as for grads, that is:
for $l = 1, ..., L$:
```python
v["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
v["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
s["dW" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["W" + str(l+1)])
s["db" + str(l+1)] = ... #(numpy array of zeros with the same shape as parameters["b" + str(l+1)])
```
End of explanation
# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameter:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v['dW' + str(l+1)] + (1-beta1) * grads['dW' + str(l+1)]
v["db" + str(l+1)] = beta1 * v['db' + str(l+1)] + (1-beta1) * grads['db' + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v['dW' + str(l+1)] / (1 - np.power(beta1, t))
v_corrected["db" + str(l+1)] = v['db' + str(l+1)] / (1 - np.power(beta1, t))
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s['dW' + str(l+1)] + (1-beta2) * np.power(grads['dW' + str(l+1)], 2)
s["db" + str(l+1)] = beta2 * s['db' + str(l+1)] + (1-beta2) * np.power(grads['db' + str(l+1)], 2)
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s['dW' + str(l+1)] / (1 - np.power(beta2, t))
s_corrected["db" + str(l+1)] = s['db' + str(l+1)] / (1 - np.power(beta2, t))
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters['W' + str(l+1)] - learning_rate * v_corrected['dW' + str(l+1)] / (np.sqrt(s_corrected['dW' + str(l+1)]) + epsilon)
parameters["b" + str(l+1)] = parameters['b' + str(l+1)] - learning_rate * v_corrected['db' + str(l+1)] / (np.sqrt(s_corrected['db' + str(l+1)]) + epsilon)
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td > **v["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 0. 0. 0.]
[ 0. 0. 0.]
[ 0. 0. 0.]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
</table>
Exercise: Now, implement the parameters update with Adam. Recall the general update rule is, for $l = 1, ..., L$:
$$\begin{cases}
v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\
v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\
s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) \left(\frac{\partial J }{\partial W^{[l]} }\right)^2 \\
s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\
W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}} + \varepsilon}
\end{cases}$$
Note that the iterator l starts at 0 in the for loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift l to l+1 when coding.
End of explanation
train_X, train_Y = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td > **W1** </td>
<td > [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]] </td>
</tr>
<tr>
<td > **b1** </td>
<td > [[ 1.75225313]
[-0.75376553]] </td>
</tr>
<tr>
<td > **W2** </td>
<td > [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09245036 -0.16498684]] </td>
</tr>
<tr>
<td > **b2** </td>
<td > [[-0.88529978]
[ 0.03477238]
[ 0.57537385]] </td>
</tr>
<tr>
<td > **v["dW1"]** </td>
<td > [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]] </td>
</tr>
<tr>
<td > **v["db1"]** </td>
<td > [[-0.01228902]
[-0.09357694]] </td>
</tr>
<tr>
<td > **v["dW2"]** </td>
<td > [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]] </td>
</tr>
<tr>
<td > **v["db2"]** </td>
<td > [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]] </td>
</tr>
<tr>
<td > **s["dW1"]** </td>
<td > [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]] </td>
</tr>
<tr>
<td > **s["db1"]** </td>
<td > [[ 1.51020075e-05]
[ 8.75664434e-04]] </td>
</tr>
<tr>
<td > **s["dW2"]** </td>
<td > [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]] </td>
</tr>
<tr>
<td > **s["db2"]** </td>
<td > [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]] </td>
</tr>
</table>
You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.
5 - Model with different optimization algorithms
Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)
End of explanation
def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epoch
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
Explanation: We have already implemented a 3-layer neural network. You will train it with:
- Mini-batch Gradient Descent: it will call your function:
- update_parameters_with_gd()
- Mini-batch Momentum: it will call your functions:
- initialize_velocity() and update_parameters_with_momentum()
- Mini-batch Adam: it will call your functions:
- initialize_adam() and update_parameters_with_adam()
End of explanation
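The harness relies on random_mini_batches (defined earlier in the assignment). The core partitioning idea can be sketched independently; make_minibatches below is a hypothetical helper, not the graded implementation:

```python
import numpy as np

def make_minibatches(X, Y, batch_size, seed=0):
    """Shuffle the columns of (X, Y) together, then slice into consecutive batches (sketch)."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]                      # number of examples (columns)
    perm = rng.permutation(m)
    X_s, Y_s = X[:, perm], Y[:, perm]   # same permutation keeps labels aligned
    return [(X_s[:, k:k + batch_size], Y_s[:, k:k + batch_size])
            for k in range(0, m, batch_size)]

X = np.arange(20, dtype=float).reshape(2, 10)
Y = np.arange(10, dtype=float).reshape(1, 10)
batches = make_minibatches(X, Y, batch_size=4)
print([b[0].shape for b in batches])   # [(2, 4), (2, 4), (2, 2)] -- the last batch is smaller
```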
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: You will now run this 3 layer neural network with each of the 3 optimization methods.
5.1 - Mini-batch Gradient descent
Run the following code to see how the model does with mini-batch gradient descent.
End of explanation
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: 5.2 - Mini-batch gradient descent with momentum
Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.
End of explanation
# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
Explanation: 5.3 - Mini-batch with Adam mode
Run the following code to see how the model does with Adam.
End of explanation |
10,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functional Expansions
OpenMC's general tally system accommodates a wide range of tally filters. While most filters are meant to identify regions of phase space that contribute to a tally, there are a special set of functional expansion filters that will multiply the tally by a set of orthogonal functions, e.g. Legendre polynomials, so that continuous functions of space or angle can be reconstructed from the tallied moments.
In this example, we will determine the spatial dependence of the flux along the $z$ axis by making a Legendre polynomial expansion. Let us represent the flux along the z axis, $\phi(z)$, by the function
$$ \phi(z') = \sum\limits_{n=0}^N a_n P_n(z') $$
where $z'$ is the position normalized to the range [-1, 1]. Since $P_n(z')$ are known functions, our only task is to determine the expansion coefficients, $a_n$. By the orthogonality properties of the Legendre polynomials, one can deduce that the coefficients, $a_n$, are given by
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z').$$
Thus, the problem reduces to finding the integral of the flux times each Legendre polynomial -- a problem which can be solved by using a Monte Carlo tally. By using a Legendre polynomial filter, we obtain stochastic estimates of these integrals for each polynomial order.
Step1: To begin, let us first create a simple model. The model will be a slab of fuel material with reflective boundaries conditions in the x- and y-directions and vacuum boundaries in the z-direction. However, to make the distribution slightly more interesting, we'll put some B<sub>4</sub>C in the middle of the slab.
Step2: For the starting source, we'll use a uniform distribution over the entire box geometry.
Step3: Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the SpatialLegendreFilter class which multiplies tally scores by Legendre polynomials evaluated on normalized spatial positions along an axis.
Step4: The last thing we need to do is create a Tallies collection and export the entire model, which we'll do using the Model convenience class.
Step5: Running a simulation is now as simple as calling the run() method of Model.
Step6: Now that the run is finished, we need to load the results from the statepoint file.
Step7: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
Step8: Since the expansion coefficients are given as
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z')$$
we just need to multiply the Legendre moments by $(2n + 1)/2$.
Step9: To plot the flux distribution, we can use the numpy.polynomial.Legendre class which represents a truncated Legendre polynomial series. Since we really want to plot $\phi(z)$ and not $\phi(z')$ we first need to perform a change of variables. Since
$$ \lvert \phi(z) dz \rvert = \lvert \phi(z') dz' \rvert $$
and, for this case, $z = 10z'$, it follows that
$$ \phi(z) = \frac{\phi(z')}{10} = \sum_{n=0}^N \frac{a_n}{10} P_n(z'). $$
Step10: Let's plot it and see how our flux looks!
Step11: As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.
One more thing we can do is confirm that integrating the distribution gives us the same value as the first moment (since $P_0(z') = 1$). This can easily be done by numerically integrating using the trapezoidal rule
Step12: In addition to being able to tally Legendre moments, there are also functional expansion filters available for spherical harmonics (SphericalHarmonicsFilter) and Zernike polynomials over a unit disk (ZernikeFilter). A separate LegendreFilter class can also be used for determining Legendre scattering moments (i.e., an expansion of the scattering cosine, $\mu$).
Zernike polynomials
Now let's look at an example of functional expansion tallies using Zernike polynomials as the basis functions.
In this example, we will determine the spatial dependence of the flux along the radial direction $r'$ and/or the azimuthal angle $\theta$ by making a Zernike polynomial expansion. Let us represent the flux along the radial and azimuthal directions, $\phi(r', \theta)$, by the function
$$ \phi(r', \theta) = \sum\limits_{n=0}^N \sum\limits_{m=-n}^n a_n^m Z_n^m(r', \theta) $$
where $r'$ is the position normalized to the range [0, r] (r is the radius of the cylindrical geometry), and the azimuthal angle lies within the range [0, $2\pi$].
Since $Z_n^m(r', \theta)$ are known functions, we need to determine the expansion coefficients, $a_n^m$. By the orthogonality properties of the Zernike polynomials, one can deduce that the coefficients, $a_n^m$, are given by
$$ a_n^m = k_n^m \int_{0}^r dr' \int_{0}^{2\pi} d\theta Z_n^m(r',\theta) \phi(r', \theta).$$
$$ k_n^m = \frac{2n + 2}{\pi}, m \ne 0. $$
$$ k_n^m = \frac{n+1}{\pi}, m = 0.$$
Similarly, the problem reduces to finding the integral of the flux times each Zernike polynomial.
To begin with, let us first create a simple model. The model will be a pin-cell fuel material with vacuum boundary conditions in both the radial and axial directions.
Step13: For the starting source, we'll use a uniform distribution over the entire box geometry.
Step14: Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the SpatialLegendreFilter, ZernikeFilter, and ZernikeRadialFilter classes, which multiply tally scores by Legendre, azimuthal Zernike, and radial-only Zernike polynomials evaluated on normalized spatial positions along the radial and axial directions.
Step15: The last thing we need to do is create a Tallies collection and export the entire model, which we'll do using the Model convenience class.
Step16: Running a simulation is now as simple as calling the run() method of Model.
Step17: Now that the run is finished, we need to load the results from the statepoint file.
Step18: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
Step19: Since the scaling factors for the expansion coefficients are provided by the Python API, we do not need to multiply the moments by scaling factors ourselves.
Step20: Loading the coefficients is done by calling the OpenMC Python API as follows
Step21: Let's plot it and see how our flux looks!
Step22: A rough cosine shape is obtained.
One can also numerically integrate the function using the trapezoidal rule.
Step23: The following cases show how to reconstruct the flux distribution from the Zernike polynomial tally results.
Step24: Let's plot the flux in radial direction with specific azimuthal angle ($\theta = 0.0$).
Step25: A polar figure over all azimuthal angles can be plotted like this
Step26: Sometimes, we just need the radial-only Zernike polynomial tallied flux distribution.
Let us extract the tallied coefficients first.
Step27: A plot along with r-axis is also done.
Step28: Similarly, we can also re-construct the polar figure based on radial-only Zernike polynomial coefficients.
Step29: Based on Legendre polynomial coefficients and the azimuthal or radial-only Zernike coefficient, it's possible to reconstruct the flux both on radial and axial directions.
Step30: One can also reconstruct the 3D flux distribution based on Legendre and Zernike polynomial tallied coefficients.
Step31: Let us print out with VTK format.
Step32: Use VisIt or ParaView to plot it as you want. Then, the plot can be loaded and shown as follows. | Python Code:
%matplotlib inline
import openmc
import numpy as np
import matplotlib.pyplot as plt
Explanation: Functional Expansions
OpenMC's general tally system accommodates a wide range of tally filters. While most filters are meant to identify regions of phase space that contribute to a tally, there are a special set of functional expansion filters that will multiply the tally by a set of orthogonal functions, e.g. Legendre polynomials, so that continuous functions of space or angle can be reconstructed from the tallied moments.
In this example, we will determine the spatial dependence of the flux along the $z$ axis by making a Legendre polynomial expansion. Let us represent the flux along the z axis, $\phi(z)$, by the function
$$ \phi(z') = \sum\limits_{n=0}^N a_n P_n(z') $$
where $z'$ is the position normalized to the range [-1, 1]. Since $P_n(z')$ are known functions, our only task is to determine the expansion coefficients, $a_n$. By the orthogonality properties of the Legendre polynomials, one can deduce that the coefficients, $a_n$, are given by
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z').$$
Thus, the problem reduces to finding the integral of the flux times each Legendre polynomial -- a problem which can be solved by using a Monte Carlo tally. By using a Legendre polynomial filter, we obtain stochastic estimates of these integrals for each polynomial order.
End of explanation
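OpenMC estimates these integrals stochastically by tallying; the coefficient formula itself can be sanity-checked with deterministic quadrature on a made-up flux shape (standalone NumPy, not OpenMC output):

```python
import numpy as np
from numpy.polynomial import legendre

def trapz(y, x):
    """Composite trapezoidal rule, written out so it works across NumPy versions."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

zp = np.linspace(-1, 1, 20001)
phi = np.cos(np.pi * zp / 2)         # hypothetical smooth flux shape on z' in [-1, 1]

order = 8
a = np.empty(order + 1)
for n in range(order + 1):
    P_n = legendre.Legendre.basis(n)(zp)
    a[n] = (2*n + 1) / 2 * trapz(P_n * phi, zp)   # a_n = (2n+1)/2 * integral of P_n(z') phi(z') dz'

phi_rec = legendre.Legendre(a)(zp)   # evaluate the truncated series
print(np.max(np.abs(phi_rec - phi)))  # tiny residual: order 8 captures this smooth shape well
```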
# Define fuel and B4C materials
fuel = openmc.Material()
fuel.add_element('U', 1.0, enrichment=4.5)
fuel.add_nuclide('O16', 2.0)
fuel.set_density('g/cm3', 10.0)
b4c = openmc.Material()
b4c.add_element('B', 4.0)
b4c.add_element('C', 1.0)
b4c.set_density('g/cm3', 2.5)
# Define surfaces used to construct regions
zmin, zmax = -10., 10.
box = openmc.model.rectangular_prism(10., 10., boundary_type='reflective')
bottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')
boron_lower = openmc.ZPlane(z0=-0.5)
boron_upper = openmc.ZPlane(z0=0.5)
top = openmc.ZPlane(z0=zmax, boundary_type='vacuum')
# Create three cells and add them to geometry
fuel1 = openmc.Cell(fill=fuel, region=box & +bottom & -boron_lower)
absorber = openmc.Cell(fill=b4c, region=box & +boron_lower & -boron_upper)
fuel2 = openmc.Cell(fill=fuel, region=box & +boron_upper & -top)
geom = openmc.Geometry([fuel1, absorber, fuel2])
Explanation: To begin, let us first create a simple model. The model will be a slab of fuel material with reflective boundaries conditions in the x- and y-directions and vacuum boundaries in the z-direction. However, to make the distribution slightly more interesting, we'll put some B<sub>4</sub>C in the middle of the slab.
End of explanation
settings = openmc.Settings()
spatial_dist = openmc.stats.Box(*geom.bounding_box)
settings.source = openmc.Source(space=spatial_dist)
settings.batches = 210
settings.inactive = 10
settings.particles = 1000
Explanation: For the starting source, we'll use a uniform distribution over the entire box geometry.
End of explanation
# Create a flux tally
flux_tally = openmc.Tally()
flux_tally.scores = ['flux']
# Create a Legendre polynomial expansion filter and add to tally
order = 8
expand_filter = openmc.SpatialLegendreFilter(order, 'z', zmin, zmax)
flux_tally.filters.append(expand_filter)
Explanation: Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the SpatialLegendreFilter class which multiplies tally scores by Legendre polynomials evaluated on normalized spatial positions along an axis.
End of explanation
tallies = openmc.Tallies([flux_tally])
model = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)
Explanation: The last thing we need to do is create a Tallies collection and export the entire model, which we'll do using the Model convenience class.
End of explanation
sp_file = model.run(output=False)
Explanation: Running a simulation is now as simple as calling the run() method of Model.
End of explanation
with openmc.StatePoint(sp_file) as sp:
df = sp.tallies[flux_tally.id].get_pandas_dataframe()
Explanation: Now that the run is finished, we need to load the results from the statepoint file.
End of explanation
df
Explanation: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
End of explanation
n = np.arange(order + 1)
a_n = (2*n + 1)/2 * df['mean']
Explanation: Since the expansion coefficients are given as
$$ a_n = \frac{2n + 1}{2} \int_{-1}^1 dz' P_n(z') \phi(z')$$
we just need to multiply the Legendre moments by $(2n + 1)/2$.
End of explanation
phi = np.polynomial.Legendre(a_n/10, domain=(zmin, zmax))
Explanation: To plot the flux distribution, we can use the numpy.polynomial.Legendre class which represents a truncated Legendre polynomial series. Since we really want to plot $\phi(z)$ and not $\phi(z')$ we first need to perform a change of variables. Since
$$ \lvert \phi(z) dz \rvert = \lvert \phi(z') dz' \rvert $$
and, for this case, $z = 10z'$, it follows that
$$ \phi(z) = \frac{\phi(z')}{10} = \sum_{n=0}^N \frac{a_n}{10} P_n(z'). $$
End of explanation
z = np.linspace(zmin, zmax, 1000)
plt.plot(z, phi(z))
plt.xlabel('Z position [cm]')
plt.ylabel('Flux [n/src]')
Explanation: Let's plot it and see how our flux looks!
End of explanation
np.trapz(phi(z), z)
Explanation: As you might expect, we get a rough cosine shape but with a flux depression in the middle due to the boron slab that we introduced. To get a more accurate distribution, we'd likely need to use a higher order expansion.
One more thing we can do is confirm that integrating the distribution gives us the same value as the first moment (since $P_0(z') = 1$). This can easily be done by numerically integrating using the trapezoidal rule:
End of explanation
# Define fuel
fuel = openmc.Material()
fuel.add_element('U', 1.0, enrichment=5.0)
fuel.add_nuclide('O16', 2.0)
fuel.set_density('g/cm3', 10.0)
# Define surfaces used to construct regions
zmin, zmax, radius = -1., 1., 0.5
pin = openmc.ZCylinder(x0=0.0, y0=0.0, r=radius, boundary_type='vacuum')
bottom = openmc.ZPlane(z0=zmin, boundary_type='vacuum')
top = openmc.ZPlane(z0=zmax, boundary_type='vacuum')
# Create three cells and add them to geometry
fuel = openmc.Cell(fill=fuel, region= -pin & +bottom & -top)
geom = openmc.Geometry([fuel])
Explanation: In addition to being able to tally Legendre moments, there are also functional expansion filters available for spherical harmonics (SphericalHarmonicsFilter) and Zernike polynomials over a unit disk (ZernikeFilter). A separate LegendreFilter class can also be used for determining Legendre scattering moments (i.e., an expansion of the scattering cosine, $\mu$).
Zernike polynomials
Now let's look at an example of functional expansion tallies using Zernike polynomials as the basis functions.
In this example, we will determine the spatial dependence of the flux along the radial direction $r'$ and/or the azimuthal angle $\theta$ by making a Zernike polynomial expansion. Let us represent the flux along the radial and azimuthal directions, $\phi(r', \theta)$, by the function
$$ \phi(r', \theta) = \sum\limits_{n=0}^N \sum\limits_{m=-n}^n a_n^m Z_n^m(r', \theta) $$
where $r'$ is the position normalized to the range [0, r] (r is the radius of the cylindrical geometry), and the azimuthal angle lies within the range [0, $2\pi$].
Since $Z_n^m(r', \theta)$ are known functions, we need to determine the expansion coefficients, $a_n^m$. By the orthogonality properties of the Zernike polynomials, one can deduce that the coefficients, $a_n^m$, are given by
$$ a_n^m = k_n^m \int_{0}^r dr' \int_{0}^{2\pi} d\theta Z_n^m(r',\theta) \phi(r', \theta).$$
$$ k_n^m = \frac{2n + 2}{\pi}, m \ne 0. $$
$$ k_n^m = \frac{n+1}{\pi}, m = 0.$$
Similarly, the problem reduces to finding the integral of the flux times each Zernike polynomial.
To begin with, let us first create a simple model. The model will be a pin-cell fuel material with vacuum boundary condition in both radial direction and axial direction.
End of explanation
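As a small numerical aside (pure NumPy, independent of the OpenMC model here), the orthogonality that these coefficient formulas rely on can be checked directly for two standard low-order Zernike terms, $Z_0^0 = 1$ and $Z_2^0 = 2r'^2 - 1$:

```python
import numpy as np

# Inner product over the unit disk uses the area element r dr dtheta;
# orthogonality means this double integral should vanish.
r = np.linspace(0.0, 1.0, 4001)
t = np.linspace(0.0, 2.0 * np.pi, 201)
R, T = np.meshgrid(r, t)

product = (2.0 * R**2 - 1.0) * 1.0 * R            # Z_2^0 * Z_0^0 * r
inner = np.trapz(np.trapz(product, r, axis=1), t)
print(inner)  # ~0
```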
settings = openmc.Settings()
spatial_dist = openmc.stats.Box(*geom.bounding_box)
settings.source = openmc.Source(space=spatial_dist)
settings.batches = 100
settings.inactive = 20
settings.particles = 100000
Explanation: For the starting source, we'll use a uniform distribution over the entire box geometry.
End of explanation
# Create a flux tally
flux_tally_legendre = openmc.Tally()
flux_tally_legendre.scores = ['flux']
# Create a Legendre polynomial expansion filter and add to tally
order = 10
cell_filter = openmc.CellFilter(fuel)
legendre_filter = openmc.SpatialLegendreFilter(order, 'z', zmin, zmax)
flux_tally_legendre.filters = [cell_filter, legendre_filter]
# Create a Zernike azimuthal polynomial expansion filter and add to tally
flux_tally_zernike = openmc.Tally()
flux_tally_zernike.scores = ['flux']
zernike_filter = openmc.ZernikeFilter(order=order, x=0.0, y=0.0, r=radius)
flux_tally_zernike.filters = [cell_filter, zernike_filter]
# Create a Zernike radial polynomial expansion filter and add to tally
flux_tally_zernike1d = openmc.Tally()
flux_tally_zernike1d.scores = ['flux']
zernike1d_filter = openmc.ZernikeRadialFilter(order=order, x=0.0, y=0.0, r=radius)
flux_tally_zernike1d.filters = [cell_filter, zernike1d_filter]
Explanation: Defining the tally is relatively straightforward. One simply needs to list 'flux' as a score and then add an expansion filter. For this case, we will want to use the SpatialLegendreFilter, ZernikeFilter, and ZernikeRadialFilter classes, which multiply tally scores by Legendre, azimuthal Zernike, and radial-only Zernike polynomials evaluated at normalized spatial positions along the radial and axial directions.
End of explanation
tallies = openmc.Tallies([flux_tally_legendre, flux_tally_zernike, flux_tally_zernike1d])
model = openmc.model.Model(geometry=geom, settings=settings, tallies=tallies)
Explanation: The last thing we need to do is create a Tallies collection and export the entire model, which we'll do using the Model convenience class.
End of explanation
sp_file = model.run(output=False)
Explanation: Running a simulation is now as simple as calling the run() method of Model.
End of explanation
with openmc.StatePoint(sp_file) as sp:
df1 = sp.tallies[flux_tally_legendre.id].get_pandas_dataframe()
Explanation: Now that the run is finished, we need to load the results from the statepoint file.
End of explanation
df1
Explanation: We've used the get_pandas_dataframe() method that returns tally data as a Pandas dataframe. Let's see what the raw data looks like.
End of explanation
a_n = df1['mean']
Explanation: Since the scaling factors for the expansion coefficients are supplied by the Python API, we do not need to multiply the moments by the scaling factors ourselves.
End of explanation
phi = openmc.legendre_from_expcoef(a_n, domain=(zmin, zmax))
Explanation: The coefficients are loaded by calling the OpenMC Python API as follows:
End of explanation
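For reference, here is what evaluating a Legendre series from known coefficients looks like with NumPy alone (a toy case that ignores the tally scaling that legendre_from_expcoef handles for you): on $[-1, 1]$, $x^2 = \tfrac{1}{3}P_0 + \tfrac{2}{3}P_2$.

```python
import numpy as np
from numpy.polynomial import legendre as leg

coefs = [1.0 / 3.0, 0.0, 2.0 / 3.0]   # coefficients of P_0, P_1, P_2
x = np.linspace(-1.0, 1.0, 5)
print(leg.legval(x, coefs))           # matches x**2
```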
z = np.linspace(zmin, zmax, 1000)
plt.plot(z, phi(z))
plt.xlabel('Z position [cm]')
plt.ylabel('Flux [n/src]')
Explanation: Let's plot it and see how our flux looks!
End of explanation
np.trapz(phi(z), z)
Explanation: A rough cosine shape is obtained.
One can also numerically integrate the function using the trapezoidal rule.
End of explanation
with openmc.StatePoint(sp_file) as sp:
df2 = sp.tallies[flux_tally_zernike.id].get_pandas_dataframe()
df2
Explanation: The following cells show how to reconstruct the flux distribution from the tallied Zernike polynomial results.
End of explanation
z_n = df2['mean']
zz = openmc.Zernike(z_n, radius)
rr = np.linspace(0, radius, 100)
plt.plot(rr, zz(rr, 0.0))
plt.xlabel('Radial position [cm]')
plt.ylabel('Flux')
Explanation: Let's plot the flux in radial direction with specific azimuthal angle ($\theta = 0.0$).
End of explanation
z_n = df2['mean']
zz = openmc.Zernike(z_n, radius=radius)
#
# Using linspace so that the endpoint of 360 is included...
azimuths = np.radians(np.linspace(0, 360, 50))
zeniths = np.linspace(0, radius, 100)
r, theta = np.meshgrid(zeniths, azimuths)
values = zz(zeniths, azimuths)
fig, ax = plt.subplots(subplot_kw=dict(projection='polar'))
ax.contourf(theta, r, values, cmap='jet')
plt.show()
Explanation: A polar figure covering all azimuthal angles can be plotted like this:
End of explanation
with openmc.StatePoint(sp_file) as sp:
df3 = sp.tallies[flux_tally_zernike1d.id].get_pandas_dataframe()
df3
Explanation: Sometimes, we just need the radial-only Zernike polynomial tallied flux distribution.
Let us extract the tallied coefficients first.
End of explanation
z_n = df3['mean']
zz = openmc.ZernikeRadial(z_n, radius=radius)
rr = np.linspace(0, radius, 50)
plt.plot(rr, zz(rr))
plt.xlabel('Radial position [cm]')
plt.ylabel('Flux')
Explanation: A plot along the r-axis is also shown.
End of explanation
z_n = df3['mean']
zz = openmc.ZernikeRadial(z_n, radius=radius)
azimuths = np.radians(np.linspace(0, 360, 50))
zeniths = np.linspace(0, radius, 100)
r, theta = np.meshgrid(zeniths, azimuths)
values = np.tile(zz(zeniths), (len(azimuths), 1))  # same radial profile at every azimuth
fig, ax = plt.subplots(subplot_kw=dict(projection='polar'), figsize=(6,6))
ax.contourf(theta, r, values, cmap='jet')
plt.show()
Explanation: Similarly, we can also reconstruct the polar figure based on the radial-only Zernike polynomial coefficients.
End of explanation
# Reconstruct 3-D flux based on radial only Zernike and Legendre polynomials
z_n = df3['mean']
zz = openmc.ZernikeRadial(z_n, radius=radius)
azimuths = np.radians(np.linspace(0, 360, 100)) # azimuthal mesh
zeniths = np.linspace(0, radius, 100) # radial mesh
zmin, zmax = -1.0, 1.0
z = np.linspace(zmin, zmax, 100) # axial mesh
#
# flux = np.matmul(np.matrix(phi(z)).transpose(), np.matrix(zz(zeniths)))
# flux = np.array(flux) # change np.matrix to np.array
# np.matrix is not recommended for use anymore
flux = np.array([phi(z)]).T @ np.array([zz(zeniths)])
#
plt.figure(figsize=(5,10))
plt.title('Flux distribution')
plt.xlabel('Radial Position [cm]')
plt.ylabel('Axial Height [cm]')
plt.pcolor(zeniths, z, flux, cmap='jet')
plt.colorbar()
Explanation: Based on the Legendre polynomial coefficients and the azimuthal or radial-only Zernike coefficients, it is possible to reconstruct the flux in both the radial and axial directions.
End of explanation
# Define needed function first
def cart2pol(x, y):
rho = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)
return(rho, phi)
# Reconstruct 3-D flux based on azimuthal Zernike and Legendre polynomials
z_n = df2['mean']
zz = openmc.Zernike(z_n, radius=radius)
#
xstep = 2.0*radius/20
hstep = (zmax - zmin)/20
x = np.linspace(-radius, radius, 50)
x = np.array(x)
[X,Y] = np.meshgrid(x,x)
h = np.linspace(zmin, zmax, 50)
h = np.array(h)
[r, theta] = cart2pol(X,Y)
flux3d = np.zeros((len(x), len(x), len(h)))
flux3d.fill(np.nan)
#
for i in range(len(x)):
for j in range(len(x)):
if r[i][j]<=radius:
for k in range(len(h)):
flux3d[i][j][k] = phi(h[k]) * zz(r[i][j], theta[i][j])
Explanation: One can also reconstruct the 3D flux distribution based on Legendre and Zernike polynomial tallied coefficients.
End of explanation
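The double loop with the r <= radius test can also be written in vectorized form; here is a sketch with stand-in radial and axial profiles in place of the fitted zz and phi:

```python
import numpy as np

radius = 0.5
x = np.linspace(-radius, radius, 50)
X, Y = np.meshgrid(x, x)
r = np.hypot(X, Y)

radial = 1.0 - (r / radius) ** 2                        # stand-in for zz(r, theta)
axial = np.cos(np.pi * np.linspace(-1.0, 1.0, 50) / 2)  # stand-in for phi(h)

flux3d = radial[:, :, None] * axial[None, None, :]
flux3d[r > radius] = np.nan                             # blank points off the disk
print(flux3d.shape)  # (50, 50, 50)
```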
# You'll need to install pyevtk as a prerequisite
from pyevtk.hl import gridToVTK
import numpy as np
#
# Dimensions
nx, ny, nz = len(x), len(x), len(h)
lx, ly, lz = 2.0*radius, 2.0*radius, (zmax-zmin)
dx, dy, dz = lx/nx, ly/ny, lz/nz
#
ncells = nx * ny * nz
npoints = (nx + 1) * (ny + 1) * (nz + 1)
#
# Coordinates
x = np.arange(0, lx + 0.1*dx, dx, dtype='float64')
y = np.arange(0, ly + 0.1*dy, dy, dtype='float64')
z = np.arange(0, lz + 0.1*dz, dz, dtype='float64')
# Print out
path = gridToVTK("./rectilinear", x, y, z, cellData = {"flux3d" : flux3d})
Explanation: Let us write the data out in VTK format.
End of explanation
f1 = plt.imread('./images/flux3d.png')
plt.imshow(f1, cmap='jet')
Explanation: Use VisIt or ParaView to plot it as you want. Then, the plot can be loaded and shown as follows.
End of explanation |
10,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I. Imports
Step1: I want to import Vgg16 as well because I'll want its low-level features
Step2: Actually, looks like Vgg's ImageNet weights won't be needed.
Step3: II. Load Data
Step4: III. Preprocessing
Keras Convolutional layers expect color channels, so expand an empty dimension in the input data, to account for no colors.
Step5: One-Hot Encoding the outputs
Step6: Since this notebook's models are all mimicking Vgg16, the input data should be preprocessed in the same way
Step7: Create Data Batch Generator
ImageDataGenerator with no arguments will return a generator. Later, when data is augmented, it'll be told how to do so. I don't know what batch-size should be set to
Step8: General workflow, going forward
Step9: 2. Single Dense Layer
This is what people in the 80s & 90s thought of as a 'Neural Network'
Step10: With an accuracy of 0.9823 and validation accuracy of 0.9664, the model's starting to overfit significantly and hit its limits, so it's time to go on to the next technique.
3. Basic 'VGG' style Convolutional Neural Network
I'm specifying an output shape equal to the input shape, to suppress the warnings keras was giving me; and it stated it was defaulting to that anyway. Or maybe I should've written output_shape=input_shape
Aha
Step11: 4. Data Augmentation
Step12: 5. Batch Normalization + Data Augmentation
See this thread for info on BatchNorm axis.
Step13: 6. Dropout + Batch Normalization + Data Augmentation
Step14: 7. Ensembling
Define a function to automatically train a model
Step15: I finally got my GPU running on my workstation. Decided to leave the ghost of Bill Gates alone and put Ubuntu Linux on the second harddrive. This nvidia GTX 870M takes 17 seconds to get through the 60,000 images. The Core i5 on my Mac took an average of 340. A 20x speed up. This also means, at those numbers, a 6-strong ensemble running the regime in train_model() will take about 49 minutes and 18 seconds, instead of 16 hours and 26 minutes. You can see what the motivation was, for me to spend ~9 hours today and get the GPU working. It's a warm feeling, knowing your computer isn't just good for playing DOOM, but'll be doing its share of work real soon.
So, onward
Step16: Save the models' weights -- bc this wasn't computationally cheap
Step17: Create an array of predictions from the models on the test-set. I'm using a batch size of 256 because that's what was done in lecture, and prediction is such an easier task that I think the large size just helps things go faster.
Step18: Finally, take the average of the predictions
Step19: Boom. 0.99699.. ~ 99.7% accuracy. Same as achieved in lecture; took roughly 50 minutes to train. Unfortunately I didn't have the h5py module installed when I ran this, so the weights can't be saved easily -- simple fix of rerunning after install.
Trying the above again, this time having h5py installed. | Python Code:
import keras
import numpy as np
from keras.datasets import mnist
from keras.optimizers import Adam
from keras.models import Sequential
from keras.preprocessing import image
from keras.layers.core import Dense
from keras.layers.core import Lambda
from keras.layers.core import Flatten
from keras.layers.core import Dropout
from keras.layers.pooling import MaxPooling2D
from keras.layers.convolutional import Convolution2D
from keras.layers.normalization import BatchNormalization
from keras.utils.np_utils import to_categorical
Explanation: I. Imports
End of explanation
# import os, sys
# sys.path.insert(1, os.path.join('../utils/'))
Explanation: I want to import Vgg16 as well because I'll want its low-level features
End of explanation
# from vgg16 import Vgg16
# vgg = Vgg16()
Explanation: Actually, looks like Vgg's ImageNet weights won't be needed.
End of explanation
(x_train, y_train), (x_test, y_test) = mnist.load_data()
Explanation: II. Load Data
End of explanation
x_train = np.expand_dims(x_train, 1) # can also enter <axis=1> for <1>
x_test = np.expand_dims(x_test, 1)
x_train.shape
Explanation: III. Preprocessing
Keras Convolutional layers expect color channels, so expand an empty dimension in the input data, to account for no colors.
End of explanation
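A quick shape check on dummy data confirms what this does: a grayscale batch of shape (N, 28, 28) gains a single channel axis and becomes (N, 1, 28, 28), which is what the Keras layers here expect.

```python
import numpy as np

dummy = np.zeros((5, 28, 28))
print(np.expand_dims(dummy, 1).shape)  # (5, 1, 28, 28)
```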
y_train, y_test = to_categorical(y_train), to_categorical(y_test)
Explanation: One-Hot Encoding the outputs:
End of explanation
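For intuition, one-hot encoding amounts to indexing into an identity matrix; here's the same thing done by hand on a few dummy labels:

```python
import numpy as np

labels = np.array([0, 2, 1])
print(np.eye(3)[labels])
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```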
x_mean = x_train.mean().astype(np.float32)
x_stdv = x_train.std().astype(np.float32)
def norm_input(x): return (x - x_mean) / x_stdv
Explanation: Since this notebook's models are all mimicking Vgg16, the input data should be preprocessed in the same way: in this case normalized by subtracting the mean and dividing by the standard deviation. It turns out this is a good idea generally.
End of explanation
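A sanity check of that standardization on synthetic pixel data (using modern NumPy just for the check): after (x - mean) / std, the result should have roughly zero mean and unit standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=(1000, 1, 28, 28)).astype(np.float32)
x_norm = (x - x.mean()) / x.std()
print(float(x_norm.mean()), float(x_norm.std()))  # ~0.0, ~1.0
```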
gen = image.ImageDataGenerator()
trn_batches = gen.flow(x_train, y_train, batch_size=64)
tst_batches = gen.flow(x_test, y_test, batch_size=64)
Explanation: Create Data Batch Generator
ImageDataGenerator with no arguments will return a generator. Later, when data is augmented, it'll be told how to do so. I don't know what batch-size should be set to: in Lecture it was 64.
End of explanation
def LinModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
Linear_model = LinModel()
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.1
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.01
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Linear_model.optimizer.lr=0.001
Linear_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=8,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: General workflow, going forward:
* Define the model's architecture.
* Run 1 Epoch at default learning rate (0.01 ~ 0.001 depending on optimizer) to get it started.
* Jack up the learning to 0.1 (as high as you'll ever want to go) and run 1 Epoch, possibly more if you can get away with it.
* Lower the learning rate by a factor of 10 and run for a number of Epochs -- repeat until model begins to overfit (acc > valacc)
Points on internal architecture:
* Each model will have a data-preprocessing Lambda layer, which normalizes the input and assigns a shape of (1 color-channel x 28 pixels x 28 pixels)
* Activations are flattened before entering FC layers
* Convolutional Layers will come in 2 pairs (because this is similar to the Vgg model).
* Convol layer-pairs will start with 32 3x3 filters and double to 64 3x3 layers
* A MaxPooling Layer comes after each Convol-pair.
* When Batch-Normalization is applied, it is done after every layer but last (excluding MaxPooling).
* Final layer is always an FC softmax layer with 10 outputs for our 10 digits.
* Dropout, when applied, should increase toward later layers.
* Optimizer used in Lecture was Adam(), all layers but last use a ReLU activation, loss function is categorical cross-entropy.
1. Linear Model
aka 'Dense', 'Fully-Connected'
End of explanation
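Since every model below follows this same warm-up/anneal pattern, it can be captured in one helper. This wrapper is my own shorthand (the names are mine, not from the lecture) and assumes the Keras 1 fit_generator API used throughout:

```python
# Each tuple is (learning rate, epochs); None leaves the optimizer's
# default alone for the warm-up epoch.
LR_SCHEDULE = [(None, 1), (0.1, 1), (0.01, 4), (0.001, 8)]

def fit_with_schedule(model, trn, tst, schedule=LR_SCHEDULE):
    for lr, epochs in schedule:
        if lr is not None:
            model.optimizer.lr = lr
        model.fit_generator(trn, trn.n, nb_epoch=epochs,
                            validation_data=tst, nb_val_samples=tst.n)
```

Usage would look like fit_with_schedule(Linear_model, trn_batches, tst_batches), with extra epochs tacked on by hand once validation accuracy stalls.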
def FCModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28)),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
FC_model = FCModel()
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
FC_model.optimizer.lr=0.1
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
FC_model.optimizer.lr=0.01
FC_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: 2. Single Dense Layer
This is what people in the 80s & 90s thought of as a 'Neural Network': a single Fully-Connected hidden layer. I don't yet know why the hidden layer is outputting 512 units. For natural-image recognition it's 4096. I'll see whether a ReLU or Softmax hidden layer works better.
By the way, the training and hyper-parameter tuning process should be automated. I want to use a NN to figure out how to do that for me.
End of explanation
def ConvModel():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
Convolution2D(64, 3, 3, activation='relu'),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_model = ConvModel()
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
CNN_model.optimizer.lr=0.1
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
CNN_model.optimizer.lr=0.01
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# Running again until validation accuracy stops increasing
CNN_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: With an accuracy of 0.9823 and validation accuracy of 0.9664, the model's starting to overfit significantly and hit its limits, so it's time to go on to the next technique.
3. Basic 'VGG' style Convolutional Neural Network
I'm specifying an output shape equal to the input shape, to suppress the warnings keras was giving me; and it stated it was defaulting to that anyway. Or maybe I should've written output_shape=input_shape
Aha: yes it's as I thought. See this thread -- output_shape warnings were added to Keras, and neither vgg16.py (nor I until now) were specifying output_shape. It's fine.
The first time I ran this, I forgot to have 2 pairs of Conv layers. At the third λr=0.01 epoch I had acc/val of 0.9964, 0.9878
Also noticing: in lecture JH was using a GPU which I think was an NVidia Titan X. I'm using an Intel Core i5 CPU on a MacBook Pro. His epochs took on average 6 seconds, mine are taking 180~190. Convolutions are also the most computationally-intensive part of the NN being built here.
Interestingly, the model with 2 Conv-layer pairs is taking avg 160s. Best Acc/Val: 0.9968/0.9944
Final: 0.9975/0.9918 - massive overfitting
End of explanation
gen = image.ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
trn_batches = gen.flow(x_train, y_train, batch_size=64)
tst_batches = gen.flow(x_test, y_test, batch_size=64)
CNN_Aug_model = ConvModel()
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# upping LR
print("Learning Rate, η = 0.1")
CNN_Aug_model.optimizer.lr=0.1
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# brining LR back down for more epochs
print("Learning Rate, η = 0.01")
CNN_Aug_model.optimizer.lr=0.01
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# 4 more epochs at η=0.01
CNN_Aug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: 4. Data Augmentation
End of explanation
def ConvModelBN():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_BNAug_model = ConvModelBN()
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.1")
CNN_BNAug_model.optimizer.lr=0.1
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=2, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNAug_model.optimizer.lr=0.01
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# some more training at 0.1 and 0.01:
print("Learning Rate, η = 0.1")
CNN_BNAug_model.optimizer.lr=0.1
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNAug_model.optimizer.lr=0.01
CNN_BNAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: 5. Batch Normalization + Data Augmentation
See this thread for info on BatchNorm axis.
End of explanation
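For intuition about the axis=1 choice, here is the normalization step of batch norm written out in NumPy for channels-first data (N, C, H, W): each channel is normalized over the batch and spatial axes (the learned scale/shift come after this).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(5.0, 3.0, size=(64, 2, 8, 8))

mean = x.mean(axis=(0, 2, 3), keepdims=True)   # one mean per channel
var = x.var(axis=(0, 2, 3), keepdims=True)
x_bn = (x - mean) / np.sqrt(var + 1e-5)
print(x_bn.mean(axis=(0, 2, 3)))  # ~[0. 0.]
```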
def ConvModelBNDo():
model = Sequential([
Lambda(norm_input, input_shape=(1, 28, 28), output_shape=(1, 28, 28)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(32, 3, 3, activation='relu'),
MaxPooling2D(),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
Convolution2D(64, 3, 3, activation='relu'),
MaxPooling2D(),
Flatten(),
BatchNormalization(),
Dense(512, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(10, activation='softmax')
])
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
return model
CNN_BNDoAug_model = ConvModelBNDo()
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.1")
CNN_BNDoAug_model.optimizer.lr=0.1
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=4, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate, η = 0.01")
CNN_BNDoAug_model.optimizer.lr=0.01
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
# 6 more epochs at 0.01
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=6, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
print("Learning Rate η = 0.001")
CNN_BNDoAug_model.optimizer.lr=0.001
CNN_BNDoAug_model.fit_generator(trn_batches, trn_batches.n, nb_epoch=12, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: 6. Dropout + Batch Normalization + Data Augmentation
End of explanation
# I'll set it to display progress at the start of each LR-change
def train_model():
model = ConvModelBNDo()
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.1
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=3, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.01
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.optimizer.lr=0.001
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
model.fit_generator(trn_batches, trn_batches.n, nb_epoch=11, verbose=0,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
return model
# Running a little test on the GPU now
testmodel = ConvModelBNDo()
testmodel.fit_generator(trn_batches, trn_batches.n, nb_epoch=1, verbose=1,
validation_data=tst_batches, nb_val_samples=tst_batches.n)
Explanation: 7. Ensembling
Define a function to automatically train a model:
End of explanation
# this'll take some time
models = [train_model() for m in xrange(6)]
Explanation: I finally got my GPU running on my workstation. Decided to leave the ghost of Bill Gates alone and put Ubuntu Linux on the second harddrive. This nvidia GTX 870M takes 17 seconds to get through the 60,000 images. The Core i5 on my Mac took an average of 340. A 20x speed up. This also means, at those numbers, a 6-strong ensemble running the regime in train_model() will take about 49 minutes and 18 seconds, instead of 16 hours and 26 minutes. You can see what the motivation was, for me to spend ~9 hours today and get the GPU working. It's a warm feeling, knowing your computer isn't just good for playing DOOM, but'll be doing its share of work real soon.
So, onward:
Create an array of models
End of explanation
from os import getcwd
path = getcwd() + '/data/mnist/'
model_path = path + 'models/'
for i,m in enumerate(models):
m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl')
Explanation: Save the models' weights -- bc this wasn't computationally cheap
End of explanation
ensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models])
Explanation: Create an array of predictions from the models on the test-set. I'm using a batch size of 256 because that's what was done in lecture, and prediction is such an easier task that I think the large size just helps things go faster.
End of explanation
avg_preds = ensemble_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
Explanation: Finally, take the average of the predictions:
End of explanation
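A toy example of why the average helps: if two of three models are confidently right and one is mildly wrong, the mean of their softmax outputs still picks the right class.

```python
import numpy as np

preds = np.array([[[0.8, 0.2]],
                  [[0.7, 0.3]],
                  [[0.4, 0.6]]])   # third "model" votes for the wrong class
avg = preds.mean(axis=0)
print(avg.argmax(axis=1))  # [0]
```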
# this'll take some time
models = [train_model() for m in xrange(6)]
from os import getcwd
import os
path = getcwd() + '/data/mnist/'
model_path = path + 'models/'
if not os.path.exists(path):
os.mkdir('data')
os.mkdir('data/mnist')
if not os.path.exists(model_path): os.mkdir(model_path)
for i,m in enumerate(models):
m.save_weights(model_path + 'MNIST_CNN' + str(i) + '.pkl')
ensemble_preds = np.stack([m.predict(x_test, batch_size=256) for m in models])
avg_preds = ensemble_preds.mean(axis=0)
keras.metrics.categorical_accuracy(y_test, avg_preds).eval()
Explanation: Boom. 0.99699.. ~ 99.7% accuracy. Same as achieved in lecture; took roughly 50 minutes to train. Unfortunately I didn't have the h5py module installed when I ran this, so the weights can't be saved easily -- simple fix of rerunning after install.
Trying the above again, this time having h5py installed.
End of explanation |
10,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial Sources
Using the sources at 007.20321 +14.87119 and RA = 20
Step1: Read in the two data files. Currently, the *id's are in double format. This is different from the original table's long, as .read() was having overflow errors
Step2: Picking out the relevant data into their own arrays to work with.
Step3: Source 1
As each data file had multiple oid's present, I plotted both the raw file and also the individual sources on their own.
Step4: Decomposed Oids
Step5: This oid doesn't seem to have any variability. And, given the plot above, it would seem that these are in fact distinct sources.
Step6: Again, this oid doesn't have any apparent variability.
Source 2
Step7: Decomposed Oids
Step8: This is just a single point so it is likely to be some sort of outlier or misattributed source.
Step9: Folded Lightcurves
For oids 226832060006908 and 26832000005734
Step10: There may be some periodic variability here. A fit of a cosine might be able to reproduce this data. However, it appears to be scattered fairly randomly. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table as tab
Explanation: Initial Sources
Using the sources at 007.20321 +14.87119 and RA = 20:50:00.91, dec = -00:42:23.8 taken from the NASA/IPAC Infrared Science Archive on 6/22/17.
End of explanation
source_1 = tab.read('source1.tbl', format='ipac') #In order for this to compile properly, these filenames will need to reflect
source_2 = tab.read('source2.tbl', format= 'ipac') #the directory of the user.
Explanation: Read in the two data files. Currently, the *id's are in double format. This is different from the original table's long, as .read() was having overflow errors
End of explanation
times_1 = source_1[0][:] #date expressed in Julian days
obs_mag_1 = source_1[1][:] #observed magnitude, auto corrected? correlated?
obs_mag_error_1 = source_1[2][:] #error on the observed magnitude
times_2 = source_2[0][:]
obs_mag_2 = source_2[1][:]
obs_mag_error_2 = source_2[2][:]
Explanation: Picking out the relevant data into their own arrays to work with.
End of explanation
plt.errorbar(times_1, obs_mag_1, yerr = obs_mag_error_1, fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "All Oids"')
Explanation: Source 1
As each data file had multiple oid's present, I plotted both the raw file and also the individual sources on their own.
End of explanation
oid_11 = np.where(source_1[3][:] == 33261000001104)
plt.errorbar(times_1[oid_11], obs_mag_1[oid_11], yerr = obs_mag_error_1[oid_11], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33261000001104')
Explanation: Decomposed Oids
End of explanation
oid_12 = np.where(source_1[3][:] == 33262000001431)
plt.errorbar(times_1[oid_12], obs_mag_1[oid_12], yerr = obs_mag_error_1[oid_12], fmt = 'ro', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed Magnitude')
plt.title('Source 1 Lightcurve "Oid 33262000001431')
Explanation: This oid doesn't seem to have any variability. And, given the plot above, it would seem that these are in fact distinct sources.
End of explanation
plt.errorbar(times_2, obs_mag_2, yerr = obs_mag_error_2, fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "All Oids"')
Explanation: Again, this oid doesn't have any apparent variability.
Source 2
End of explanation
oid_21 = np.where(source_2[3][:] == 226831060005494)
plt.errorbar(times_2[oid_21], obs_mag_2[oid_21], yerr = obs_mag_error_2[oid_21], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226831060005494"')
Explanation: Decomposed Oids
End of explanation
oid_22 = np.where(source_2[3][:] == 226832060006908)
plt.errorbar(times_2[oid_22], obs_mag_2[oid_22], yerr = obs_mag_error_2[oid_22], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 226832060006908"')
oid_23 = np.where(source_2[3][:] == 26832000005734)
plt.errorbar(times_2[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('MJD')
plt.ylabel('Observed mag')
plt.title('Source 2 Lightcurve "Oid 26832000005734"')
Explanation: This is just a single point so it is likely to be some sort of outlier or misattributed source.
End of explanation
primary_period_1 = 0.191486  # taken from the NASA Exoplanet Archive Periodogram Service
phase_21 = (times_2 % primary_period_1) / primary_period_1
plt.errorbar(phase_21[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 226832060006908')
Explanation: Folded Lightcurves
For oids 226832060006908 and 26832000005734
End of explanation
primary_period_2 = 2.440220
phase_22 = (times_2 % primary_period_2) / primary_period_2
plt.errorbar(phase_22[oid_23], obs_mag_2[oid_23], yerr = obs_mag_error_2[oid_23], fmt = 'bo', markersize = 3)
plt.xlabel('Phase')
plt.ylabel('Observed mag')
plt.title('Source 2 Periodic Lightcurve For Oid 26832000005734')
Explanation: There may be some periodic variability here. A fit of a cosine might be able to reproduce this data. However, it appears to be scattered fairly randomly.
End of explanation |
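Following up on the cosine idea mentioned above: at a fixed trial period, a sinusoid can be fit with plain linear least squares, since the model is linear in its coefficients. This is only a sketch on synthetic data — the period, amplitude, mean level, and noise values below are invented for illustration, not taken from the sources analyzed here.

```python
import numpy as np

def fit_fixed_period_sinusoid(t, mag, period):
    """Least-squares fit of mag ~ A*cos(w*t) + B*sin(w*t) + C at a fixed trial period."""
    w = 2.0 * np.pi / period
    # Design matrix: the model is linear in (A, B, C), so lstsq solves it directly.
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(X, mag, rcond=None)
    A, B, C = coeffs
    amplitude = np.hypot(A, B)  # combined amplitude of the cosine + sine terms
    return amplitude, C, X @ coeffs  # amplitude, mean level, model magnitudes

# Synthetic sanity check: can we recover a known amplitude and mean?
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 50.0, 200)
true_period, true_amp, true_mean = 2.44022, 0.3, 15.0
mag = true_mean + true_amp * np.cos(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)
amp, mean, model = fit_fixed_period_sinusoid(t, mag, true_period)
```

If the best-fit amplitude is comparable to the scatter of the folded lightcurve, the "cosine" hypothesis is not supported — which matches the visual impression above.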
10,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding a discharge point source to a LEM
(Greg Tucker, CSDMS / CU Boulder, fall 2020)
This notebook shows how to add one or more discharge point sources to a Landlab-built landscape evolution model (LEM), using the flow routing components. The basic idea is to modify the water__unit_flux_in field to include a large flux (which could be represented as either drainage area or discharge) at one or more locations along the edge of a grid.
Step1: Docstring example from FlowAccumulator
The following is a tiny example from the FlowAccumulator documentation
Step2: We can extend this tiny example to show that you can subsequently modify the rnff array and it will take effect when you re-run the FlowAccumulator
Step3: Larger example
In this example, we create a slightly larger grid, with a surface that slopes down toward the south / bottom boundary. We will introduce a runoff point source at a node in the middle of the top-most non-boundary row.
Start by defining some parameters
Step4: Create grid and topography, and set boundaries
Step5: The FlowAccumulator component takes care of identifying drainage directions (here using the D8 method) and calculating the cumulative drainage area and surface water discharge.
Note that in this case we are assuming a default runoff value of unity, meaning that the calculated surface_water__discharge is actually just drainage area. To introduce the drainage area of a river entering at the top, we will use a large value for runoff. Because we are considering drainage area as the primary variable, with unit "runoff", our input runoff is a dimensionless variable
Step6: Changing the amount and/or location of input
We can change the input drainage area / discharge amount or location simply by modifying the water__unit_flux_in field. Here we will shift it to the left and double its magnitude.
Step7: Note that the drainage_area field does not recognize any runoff input. It continues to track only the local drainage area
Step8: This means that you should use the surface_water__discharge field rather than the drainage_area field, regardless of whether the former is meant to represent discharge (volume per time) or effective drainage area (area).
Combining with a Landscape Evolution Model
Here we'll set up a simple LEM that uses the river input. | Python Code:
from landlab import RasterModelGrid, imshow_grid
from landlab.components import FlowAccumulator
import numpy as np
Explanation: Adding a discharge point source to a LEM
(Greg Tucker, CSDMS / CU Boulder, fall 2020)
This notebook shows how to add one or more discharge point sources to a Landlab-built landscape evolution model (LEM), using the flow routing components. The basic idea is to modify the water__unit_flux_in field to include a large flux (which could be represented as either drainage area or discharge) at one or more locations along the edge of a grid.
End of explanation
mg = RasterModelGrid((5, 4), xy_spacing=(10.0, 10))
topographic__elevation = np.array(
[
0.0,
0.0,
0.0,
0.0,
0.0,
21.0,
10.0,
0.0,
0.0,
31.0,
20.0,
0.0,
0.0,
32.0,
30.0,
0.0,
0.0,
0.0,
0.0,
0.0,
]
)
_ = mg.add_field("topographic__elevation", topographic__elevation, at="node")
mg.set_closed_boundaries_at_grid_edges(True, True, True, False)
fa = FlowAccumulator(mg, "topographic__elevation", flow_director="FlowDirectorSteepest")
runoff_rate = np.arange(mg.number_of_nodes, dtype=float)
rnff = mg.add_field("water__unit_flux_in", runoff_rate, at="node", clobber=True)
fa.run_one_step()
print(mg.at_node["surface_water__discharge"].reshape(5, 4))
# array([ 0., 500., 5200., 0.,
# 0., 500., 5200., 0.,
# 0., 900., 4600., 0.,
# 0., 1300., 2700., 0.,
# 0., 0., 0., 0.])
Explanation: Docstring example from FlowAccumulator
The following is a tiny example from the FlowAccumulator documentation:
End of explanation
rnff[:] = 1.0
fa.run_one_step()
print(mg.at_node["surface_water__discharge"].reshape(5, 4))
Explanation: We can extend this tiny example to show that you can subsequently modify the rnff array and it will take effect when you re-run the FlowAccumulator:
End of explanation
# Parameters
nrows = 41
ncols = 41
dx = 100.0 # grid spacing in m
slope_gradient = 0.01 # gradient of topographic surface
noise_amplitude = 0.2 # amplitude of random noise
input_runoff = 10000.0 # equivalent to a drainage area of 10,000 dx^2 or 10^8 m2
Explanation: Larger example
In this example, we create a slightly larger grid, with a surface that slopes down toward the south / bottom boundary. We will introduce a runoff point source at a node in the middle of the top-most non-boundary row.
Start by defining some parameters:
End of explanation
# Create a grid, and a field for water input
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
# Have just one edge (south / bottom) be open
grid.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Create an elevation field as a ramp with random noise
topo = grid.add_zeros("topographic__elevation", at="node")
topo[:] = slope_gradient * grid.y_of_node
np.random.seed(0)
topo[grid.core_nodes] += noise_amplitude * np.random.randn(grid.number_of_core_nodes)
Explanation: Create grid and topography, and set boundaries:
End of explanation
# Create a FlowAccumulator component
fa = FlowAccumulator(grid, flow_director="FlowDirectorD8")
# Create a runoff input field, and set one of its nodes to have a large input
runoff = grid.add_ones("water__unit_flux_in", at="node", clobber=True)
top_middle_node = grid.number_of_nodes - int(1.5 * ncols)
runoff[top_middle_node] = input_runoff
fa.run_one_step()
imshow_grid(grid, "surface_water__discharge")
Explanation: The FlowAccumulator component takes care of identifying drainage directions (here using the D8 method) and calculating the cumulative drainage area and surface water discharge.
Note that in this case we are assuming a default runoff value of unity, meaning that the calculated surface_water__discharge is actually just drainage area. To introduce the drainage area of a river entering at the top, we will use a large value for runoff. Because we are considering drainage area as the primary variable, with unit "runoff", our input runoff is a dimensionless variable: the number of contributing grid cell equivalents. We will set this to unity at all the nodes in the model except the point-source location.
End of explanation
runoff[top_middle_node] = 1.0 # go back to being a "regular" node
runoff[top_middle_node - 15] = 2 * input_runoff # shift 15 cells left and double amount
fa.run_one_step()
imshow_grid(grid, "surface_water__discharge")
Explanation: Changing the amount and/or location of input
We can change the input drainage area / discharge amount or location simply by modifying the water__unit_flux_in field. Here we will shift it to the left and double its magnitude.
End of explanation
imshow_grid(grid, "drainage_area")
Explanation: Note that the drainage_area field does not recognize any runoff input. It continues to track only the local drainage area:
End of explanation
from landlab.components import StreamPowerEroder, LinearDiffuser
# Parameters
K = 4.0e-5
D = 0.01
uplift_rate = 0.0001
nrows = 51
ncols = 51
dx = 10.0 # grid spacing in m
slope_gradient = 0.01 # gradient of topographic surface
noise_amplitude = 0.04 # amplitude of random noise
input_runoff = 10000.0 # equivalent to a drainage area of 10,000 dx^2 or 10^6 m2
run_duration = 25.0 / uplift_rate
dt = dx / (K * (dx * dx * input_runoff) ** 0.5)
num_steps = int(run_duration / dt)
print(str(num_steps) + " steps.")
# Create a grid, and a field for water input
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
# Have just one edge (south / bottom) be open
grid.set_closed_boundaries_at_grid_edges(True, True, True, False)
# Create an elevation field as a ramp with random noise
topo = grid.add_zeros("topographic__elevation", at="node")
topo[:] = slope_gradient * grid.y_of_node
np.random.seed(0)
topo[grid.core_nodes] += noise_amplitude * np.random.randn(grid.number_of_core_nodes)
# Create components
fa = FlowAccumulator(grid, flow_director="FlowDirectorD8")
sp = StreamPowerEroder(grid, K_sp=K, discharge_field="surface_water__discharge")
ld = LinearDiffuser(grid, linear_diffusivity=D)
runoff = grid.add_ones("water__unit_flux_in", at="node", clobber=True)
top_middle_node = grid.number_of_nodes - int(1.5 * ncols)
runoff[top_middle_node] = input_runoff
for _ in range(num_steps):
topo[grid.core_nodes] += uplift_rate * dt
fa.run_one_step()
ld.run_one_step(dt)
sp.run_one_step(dt)
imshow_grid(grid, topo)
Explanation: This means that you should use the surface_water__discharge field rather than the drainage_area field, regardless of whether the former is meant to represent discharge (volume per time) or effective drainage area (area).
Combining with a Landscape Evolution Model
Here we'll set up a simple LEM that uses the river input.
End of explanation |
10,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedback or issues?
For any feedback or questions, please open an issue.
Vertex SDK for Python
Step1: Enter Your Project and GCS Bucket
Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Step2: Set Your Task Name, and GCS Prefix
If you want to centralize all input and output files under the GCS location.
Step3: HMDB
Step4: Copy AutoML Video Demo Train Data for Creating Managed Dataset
Step5: Run AutoML Video Training with Managed Video Dataset
Initialize Vertex SDK for Python
Initialize the client for Vertex AI.
Step6: Create a Dataset on Vertex AI
We will now create a Vertex AI video dataset using the previously prepared csv files. Choose one of the options below.
Option 1
Step7: Option 2
Step8: Launch a Training Job and Create a Model on Vertex AI
Config a Training Job
Step9: Run the Training Job
Step10: Batch Prediction Job on the Model
Copy AutoML Video Demo Prediction Data for Creating Batch Prediction Job | Python Code:
!pip3 uninstall -y google-cloud-aiplatform
!pip3 install google-cloud-aiplatform
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Feedback or issues?
For any feedback or questions, please open an issue.
Vertex SDK for Python: AutoML Video Action Recognition Example
To use this Jupyter notebook, copy the notebook to a Google Cloud Notebooks instance with Tensorflow installed and open it. You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Jupyter automatically displays the return value of the last line in each cell. For more information about running notebooks in Google Cloud Notebook, see the Google Cloud Notebook guide.
This notebook demonstrates how to create an AutoML Video Action Recognition model with a Vertex AI video dataset, and how to serve the model for batch prediction. It requires you to provide a bucket where the dataset will be stored.
Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK.
Install Vertex SDK for Python
After the SDK installation the kernel will be automatically restarted.
End of explanation
MY_PROJECT = "YOUR PROJECT ID"
MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as ucaip
import sys
if "google.colab" in sys.modules:
import os
from google.colab import auth
auth.authenticate_user()
os.environ["GOOGLE_CLOUD_PROJECT"] = MY_PROJECT
Explanation: Enter Your Project and GCS Bucket
Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
End of explanation
TASK_TYPE = "mbsdk_automl-video-training"
PREDICTION_TYPE = "action_recognition"
MODEL_TYPE = "CLOUD"
TASK_NAME = f"{TASK_TYPE}_{PREDICTION_TYPE}"
BUCKET_NAME = MY_STAGING_BUCKET.split("gs://")[1]
GCS_PREFIX = TASK_NAME
print(f"Bucket Name: {BUCKET_NAME}")
print(f"Task Name: {TASK_NAME}")
Explanation: Set Your Task Name, and GCS Prefix
If you want to centralize all input and output files under the GCS location.
End of explanation
automl_video_demo_train_data = "gs://automl-video-demo-data/hmdb_golf_swing_all.csv"
automl_video_demo_batch_prediction_data = (
"gs://automl-video-demo-data/hmdb_golf_swing_predict.jsonl"
)
Explanation: HMDB: a large human motion database
We prepared some training data and prediction data for the demo using the HMDB Dataset.
The HMDB Dataset is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/
For more information about this dataset please visit: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/
End of explanation
gcs_source_train = f"gs://{BUCKET_NAME}/{TASK_NAME}/data/video_action_recognition.csv"
!gsutil cp $automl_video_demo_train_data $gcs_source_train
Explanation: Copy AutoML Video Demo Train Data for Creating Managed Dataset
End of explanation
from google.cloud import aiplatform
aiplatform.init(project=MY_PROJECT, staging_bucket=MY_STAGING_BUCKET)
Explanation: Run AutoML Video Training with Managed Video Dataset
Initialize Vertex SDK for Python
Initialize the client for Vertex AI.
End of explanation
dataset = aiplatform.VideoDataset.create(
display_name=f"temp-{TASK_NAME}",
gcs_source=gcs_source_train,
import_schema_uri=aiplatform.schema.dataset.ioformat.video.action_recognition,
sync=False,
)
Explanation: Create a Dataset on Vertex AI
We will now create a Vertex AI video dataset using the previously prepared csv files. Choose one of the options below.
Option 1: Using MBSDK VideoDataset class
End of explanation
dataset.wait()
Explanation: Option 2: Using MBSDK Dataset class
dataset = aiplatform.Dataset.create(
display_name=f'temp-{TASK_NAME}',
metadata_schema_uri=aiplatform.schema.dataset.metadata.video,
gcs_source=gcs_source_train,
import_schema_uri=aiplatform.schema.dataset.ioformat.video.action_recognition,
sync=False
)
End of explanation
job = aiplatform.AutoMLVideoTrainingJob(
display_name=f"temp-{TASK_NAME}",
prediction_type=PREDICTION_TYPE,
model_type=MODEL_TYPE,
)
Explanation: Launch a Training Job and Create a Model on Vertex AI
Config a Training Job
End of explanation
model = job.run(
dataset=dataset,
training_fraction_split=0.8,
test_fraction_split=0.2,
model_display_name=f"temp-{TASK_NAME}",
sync=False,
)
model.wait()
Explanation: Run the Training Job
End of explanation
gcs_source_batch_prediction = f"gs://{BUCKET_NAME}/{TASK_NAME}/data/video_action_recognition_batch_prediction.jsonl"
gcs_destination_prefix_batch_prediction = (
f"gs://{BUCKET_NAME}/{TASK_NAME}/batch_prediction"
)
!gsutil cp $automl_video_demo_batch_prediction_data $gcs_source_batch_prediction
batch_predict_job = model.batch_predict(
job_display_name=f"temp-{TASK_NAME}",
gcs_source=gcs_source_batch_prediction,
gcs_destination_prefix=gcs_destination_prefix_batch_prediction,
sync=False,
)
batch_predict_job.wait()
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
import json
import tensorflow as tf
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
break
line
Explanation: Batch Prediction Job on the Model
Copy AutoML Video Demo Prediction Data for Creating Batch Prediction Job
End of explanation |
10,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Housekeeping Unit Test
This notebook contains a Unit Test for the housekeeping information from the Observatory Simulator.
Starting the Observatory Simulator
Remember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.
When you are running this notbook and it has not been power cycled, you should set preload=False.
Step1: As a quick diagnostic, we start and stop capture frames.
Step2: Now we collect the measurements; this will make 10 sets of samples for heater currents that are between 0 and 150 mA, spaced out by 10 mA.
Step3: Make sure to run the following cell so that the heaters can "chill out".
Step4: Plots
Let's first plot the set value on the $x$ axis versus the measured value on the $y$ axis to test how well calibrated the sensors are
Step5: As we can see, the measured heater values from the housekeeping are roughly the same as the values set values, up to an affine transformation. To see this, it is helpful to compute measured - set to see the errors
Step6: From the above we can infer that in order to correct for the error we need to disregard the measured values when the heaters are set to 0 mA, since they are apparently outliars.
Calibration
We next turn to calibrating the heaters.
The above analysis only really looks at 3 heaters; we can see that the error measurements have an outliar at 0 mA, but it is of interest to reproduce this analysis for more than N=3 components. This should be as simple as rerunning this notebook, however.
For the time being we are proceeding with calibration by performing a simple linear transformation on the set value so that it will properly correspond with the desired observed value.
To calculate this, we first compute via linear regression the constants $m$ and $c$ in the equation below
Step7: Once $m$ and $c$ have been calculated in the calibrate function, we have to make a preadjustment function, which is given by
Step8: Now that we can compute the error correction function, we turn to looking at the error again, after calibration.
Step9: Given these error corrections, we now turn to using them as preadjustments and recompute our measurements that we previously took
Step10: Smokey the bear warns you to turn off your heaters when you are done | Python Code:
from tessfpe.dhu.fpe import FPE
from tessfpe.dhu.unit_tests import check_house_keeping_voltages
fpe1 = FPE(1, debug=False, preload=False, FPE_Wrapper_version='6.1.1')
print fpe1.version
if check_house_keeping_voltages(fpe1):
print "Wrapper load complete. Interface voltages OK."
Explanation: Housekeeping Unit Test
This notebook contains a Unit Test for the housekeeping information from the Observatory Simulator.
Starting the Observatory Simulator
Remember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.
When you are running this notebook and it has not been power cycled, you should set preload=False.
End of explanation
fpe1.cmd_start_frames()
fpe1.cmd_stop_frames()
Explanation: As a quick diagnostic, we start and stop capture frames.
End of explanation
from tessfpe.data.operating_parameters import operating_parameters
operating_parameters["heater_1_current"]
high = operating_parameters["heater_1_current"]["high"]
measurements = []
for i in range(5,int(high)*10,10):
a = i % high
b = (i + 100) % high
c = abs((high - i) % high)
fpe1.ops.heater_1_current = a
fpe1.ops.heater_2_current = b
fpe1.ops.heater_3_current = c
fpe1.ops.send()
measurements.append({"set": {"heater_1_current": a,
"heater_2_current": b,
"heater_3_current": c},
"measured": {"heater_1_current": fpe1.house_keeping["analogue"]["heater_1_current"],
"heater_2_current": fpe1.house_keeping["analogue"]["heater_2_current"],
"heater_3_current": fpe1.house_keeping["analogue"]["heater_3_current"]}})
len(measurements)
Explanation: Now we collect the measurements; this will make 10 sets of samples for heater currents that are between 0 and 150 mA, spaced out by 10 mA.
End of explanation
fpe1.ops.heater_1_current = fpe1.ops.heater_1_current.low
fpe1.ops.heater_2_current = fpe1.ops.heater_2_current.low
fpe1.ops.heater_3_current = fpe1.ops.heater_3_current.low
fpe1.ops.send()
Explanation: Make sure to run the following cell so that the heaters can "chill out".
End of explanation
x1 = [i["set"]["heater_1_current"] for i in measurements]
y1 = [j["measured"]["heater_1_current"] for j in measurements]
x2 = [i["set"]["heater_2_current"] for i in measurements]
y2 = [j["measured"]["heater_2_current"] for j in measurements]
x3 = [i["set"]["heater_3_current"] for i in measurements]
y3 = [j["measured"]["heater_3_current"] for j in measurements]
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import numpy as np
import matplotlib.pyplot as plt
f = plt.figure(figsize=(10,10))
ax1 = f.add_subplot(331, aspect='equal')
ax1.set_title('Heater 1')
ax1.scatter(x1,y1,color='blue')
ax1.locator_params(nbins=4)
ax1.set_xlim([0,160])
ax1.set_ylim([0,160])
ax2 = f.add_subplot(332, aspect='equal')
ax2.set_title('Heater 2')
ax2.scatter(x2,y2,color='red')
ax2.locator_params(nbins=4)
ax2.set_xlim([0,160])
ax2.set_ylim([0,160])
ax3 = f.add_subplot(333, aspect='equal')
ax3.set_title('Heater 3')
ax3.scatter(x3,y3,color='green')
ax3.locator_params(nbins=4)
ax3.set_xlim([0,160])
ax3.set_ylim([0,160])
plt.show()
Explanation: Plots
Let's first plot the set value on the $x$ axis versus the measured value on the $y$ axis to test how well calibrated the sensors are:
End of explanation
err1 = map(lambda y,x: y-x, y1, x1)
err2 = map(lambda y,x: y-x, y2, x2)
err3 = map(lambda y,x: y-x, y3, x3)
f = plt.figure(figsize=(10,10))
ax1 = f.add_subplot(331)
ax1.set_title('Error for Heater 1')
ax1.locator_params(nbins=4)
ax1.set_xlim([-20,160])
ax1.scatter(x1,err1,color='blue')
ax2 = f.add_subplot(332)
ax2.set_title('Error for Heater 2')
ax2.locator_params(nbins=4)
ax2.set_xlim([-20,160])
ax2.scatter(x2,err2,color='red')
ax3 = f.add_subplot(333)
ax3.set_title('Error for Heater 3')
ax3.locator_params(nbins=4)
ax3.set_xlim([-20,160])
ax3.scatter(x3,err3,color='green')
plt.show()
Explanation: As we can see, the measured heater values from the housekeeping are roughly the same as the set values, up to an affine transformation. To see this, it is helpful to compute measured - set to see the errors:
End of explanation
def get_set_observed(heater_current, measurements):
return ([i["set"][heater_current] for i in measurements
if i["set"][heater_current] != 0],
[j["measured"][heater_current] for j in measurements
if j["set"][heater_current] != 0])
def calibrate(heater_current, measurements):
from scipy import stats
x,y = get_set_observed(heater_current, measurements)
return stats.linregress(x,y)
Explanation: From the above we can infer that in order to correct for the error we need to disregard the measured values when the heaters are set to 0 mA, since they are apparently outliers.
Calibration
We next turn to calibrating the heaters.
The above analysis only really looks at 3 heaters; we can see that the error measurements have an outlier at 0 mA, but it is of interest to reproduce this analysis for more than N=3 components. This should be as simple as rerunning this notebook, however.
For the time being we are proceeding with calibration by performing a simple linear transformation on the set value so that it will properly correspond with the desired observed value.
To calculate this, we first compute via linear regression the constants $m$ and $c$ in the equation below:
$$ \texttt{measured_value} = m \cdot \texttt{set_value} + c $$
We make sure to avoid $\texttt{set_value} = 0$ where outliers have been observed.
End of explanation
def error_correction_function(heater_current, measurements):
m, c, r_value, p_value, std_err = calibrate(heater_current, measurements)
return (lambda x: (x - c) / m)
Explanation: Once $m$ and $c$ have been calculated in the calibrate function, we have to make a preadjustment function, which is given by:
$$ preadjustment(x) := \frac{x - c}{m} $$
End of explanation
err_corr1 = error_correction_function("heater_1_current", measurements)
err_corr2 = error_correction_function("heater_2_current", measurements)
err_corr3 = error_correction_function("heater_3_current", measurements)
Explanation: Now that we can compute the error correction function, we turn to looking at the error again, after calibration.
End of explanation
preadjusted_measurements = []
for i in range(5,int(high)*10,10):
a = i % high
b = (i + 100) % high
c = abs((high - i) % high)
if not (0 < err_corr1(a) < high) or not (0 < err_corr2(b) < high) or not (0 < err_corr3(c) < high):
continue
fpe1.ops.heater_1_current = err_corr1(a)
fpe1.ops.heater_2_current = err_corr2(b)
fpe1.ops.heater_3_current = err_corr3(c)
fpe1.ops.send()
analogue_house_keeping = fpe1.house_keeping["analogue"]
preadjusted_measurements.append({"set": {"heater_1_current": a,
"heater_2_current": b,
"heater_3_current": c},
"measured": {"heater_1_current":
analogue_house_keeping["heater_1_current"],
"heater_2_current":
analogue_house_keeping["heater_2_current"],
"heater_3_current":
analogue_house_keeping["heater_3_current"]}})
Explanation: Given these error corrections, we now turn to using them as preadjustments and recompute our measurements that we previously took:
End of explanation
fpe1.ops.heater_1_current = fpe1.ops.heater_1_current.low
fpe1.ops.heater_2_current = fpe1.ops.heater_2_current.low
fpe1.ops.heater_3_current = fpe1.ops.heater_3_current.low
fpe1.ops.send()
x1 = [i["set"]["heater_1_current"] for i in measurements]
y1 = [err_corr1(j["measured"]["heater_1_current"]) for j in measurements]
x2 = [i["set"]["heater_2_current"] for i in measurements]
y2 = [err_corr2(j["measured"]["heater_2_current"]) for j in measurements]
x3 = [i["set"]["heater_3_current"] for i in measurements]
y3 = [err_corr3(j["measured"]["heater_3_current"]) for j in measurements]
err1 = map(lambda y,x: y-x, y1, x1)
err2 = map(lambda y,x: y-x, y2, x2)
err3 = map(lambda y,x: y-x, y3, x3)
f = plt.figure(figsize=(10,10))
ax1 = f.add_subplot(331)
ax1.set_title('Error for Heater 1')
ax1.locator_params(nbins=4)
ax1.set_xlim([-20,160])
ax1.scatter(x1,err1,color='blue')
ax2 = f.add_subplot(332)
ax2.set_title('Error for Heater 2')
ax2.locator_params(nbins=4)
ax2.set_xlim([-20,160])
ax2.scatter(x2,err2,color='red')
ax3 = f.add_subplot(333)
ax3.set_title('Error for Heater 3')
ax3.locator_params(nbins=4)
ax3.set_xlim([-20,160])
ax3.scatter(x3,err3,color='green')
plt.show()
Explanation: Smokey the bear warns you to turn off your heaters when you are done:
End of explanation |
10,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS109 Final Project Process Book
Background & Motivation
Social media and entertainment are such pervasive parts of millennials' lives. We want to study the intersection of these two. Is it possible to predict box office success of a film through sentiments expressed on social media? Stay tuned for more!
Table of Contents
CS109 Final Project Process Book
Milestone 1
Step1: Milestone 1
Step2: Secondly, we prepare the data frame movie_df to store the data that we will scrape from BOM. We give this dataframe 9 columns
Step3: Now we write a function rowInfoGrabber that we will call in a loop over the table on the BOM webpage to grab the attributes and save them into the corresponding columns in movie_df.
Step4: This is the image
Step5: Finally we're ready to scrape! <br>
Because IMDB was created in 1990, we will scrape that far back in BOM. So we're scraping the past 26 years (1990 - 2015). Also note that because the HTML was changed starting in 2001, our scraping will be a little different before and after then.
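The row-by-row scraping idea can be illustrated on a toy snippet. To be clear: the HTML below is made up for the sketch (it is not BOM's actual markup, which also changed in 2001 as noted above), and `row_info_grabber` here is a simplified stand-in for the notebook's real `rowInfoGrabber`.

```python
from bs4 import BeautifulSoup

# Toy stand-in for one row of a BOM-style table (invented markup, not the real site's).
html = """
<table>
  <tr><td><a href="/movies/?id=avatar.htm">Avatar</a></td>
      <td>Fox</td><td>$749,766,139</td><td>Dec. 18</td><td>Aug. 12</td></tr>
</table>
"""

def row_info_grabber(row):
    """Pull the fields of interest out of one <tr> of the toy table."""
    cells = row.find_all("td")
    return {
        "title": cells[0].get_text(strip=True),
        "studio": cells[1].get_text(strip=True),
        "total_gross": int(cells[2].get_text(strip=True).lstrip("$").replace(",", "")),
        "open_date": cells[3].get_text(strip=True),
        "close_date": cells[4].get_text(strip=True),
    }

soup = BeautifulSoup(html, "html.parser")
records = [row_info_grabber(r) for r in soup.find_all("tr")]
```

The real scraper would fetch each year's page with `requests` and loop this over every row of the grosses table.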
Step6: Because some films do not have a close date, we will have to be careful with the close date!
Step7: Next, we combine the close_date, open_date and year columns into two columns close_date and open_date that are time series. This will make it easier for us to work with the data in the future.
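The date-combining step can be sketched with the standard library alone. The helper name and the `"Mon. D"` date format below are assumptions for illustration; the notebook itself performs this conversion on the whole dataframe at once.

```python
from datetime import datetime

def parse_bom_date(month_day, year):
    """Combine a BOM-style 'Mon. D' string with a year into a datetime.

    Returns None for a missing close date (e.g. a film still in theaters).
    """
    if not month_day:
        return None
    # BOM abbreviates months like 'Dec. 18'; drop the period so %b can parse it.
    return datetime.strptime(f"{month_day.replace('.', '')} {year}", "%b %d %Y")

open_dt = parse_bom_date("Dec. 18", 2009)
close_dt = parse_bom_date("", 2009)  # film with no close date
```

Having real datetimes makes it easy to compute run lengths (`close_dt - open_dt`) and to filter by release window later on.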
Step8: Let's take a look at the data, now!
Step9: Let's see if we can get the run times for each movie!
Step10: Looks like the data is ready for us to use! Let's save this data so we're ready to use it next time.
Step11: Loading and preparing IMDB review dataset
We have cleaned up our IMDB review dataset in the ipython notebook New.ipynb. From that notebook we have been able to save dictionaries of our data, which we will now call.
Step12: The Stanford group distinguishes between train and test because this was relevant for their project. This may prove to be useful later, so we will keep two separate dataframes. However, for our purposes at the moment, we can combine them since the BOM data will serve as our true data set.
Step13: We want to figure out which movies from IMDB_df are also present in our BOM DF. So let's get all the movie titles in our BOM table.
Step14: Now let's create a mask over IMDB_df, the boolean values of which indicate whether or not a movie is in the BOM list.
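The masking step boils down to a membership test per row. A minimal sketch with toy data (the titles and columns here are invented stand-ins for the real `IMDB_df` and BOM title list):

```python
import pandas as pd

# Toy stand-ins for the two tables.
IMDB_df = pd.DataFrame({"movie_name": ["Avatar", "Obscure Short", "Up"],
                        "stars": [8.0, 6.5, 8.3]})
bom_titles = {"Avatar", "Up", "Titanic"}  # a set gives O(1) lookups

mask = IMDB_df["movie_name"].isin(bom_titles)  # True where the title also appears in BOM
IMDB_dftouse = IMDB_df[mask].reset_index(drop=True)
```

Indexing the dataframe with the boolean mask keeps only the rows whose movie also shows up in the BOM tables.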
Step15: We can now create our final, relevant IMDB data frame, with only those movies that also appear in the BOM tables from 1990 - 2015.
Step16: Finally we want to save our dictionary of IMDB_dftouse into a JSON file for storage.
Step17: Analyzing and Saving Review Attributes Using labMT Happiness Dictionary
Now let's download labMT, a word score list for sentiment analysis containing over 10,000 words. The file contains a "happiness" value, and ranks words by their happiness. It also includes mean and standard deviation, Twitter rank and Google rank.
Step18: Now let's create a happiness dictionary of (word, valence) pairs where each valence is that word's original valence minus the average valence.
Step19: Now let's collect several attributes from a given review's text body, and save all valuable information into a new data frame. First we define a function that removes stop words (all unimportant words from a valence perspective) from a text body.
Step20: Now we'll write a function that returns total happiness, average happiness, total scorable words, and percentage of scorable words in a given review text.
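A minimal sketch of that scoring function, using a tiny made-up happiness dictionary (the real labMT list has over 10,000 scored words, and the notebook's function additionally strips stop words first):

```python
# Hypothetical mini happiness dictionary for illustration only.
happiness = {"love": 3.0, "great": 2.3, "terrible": -2.5, "movie": 0.1}

def score_review(text, happiness):
    """Return (total, average, n_scorable, pct_scorable) for a review body."""
    words = text.lower().split()
    scores = [happiness[w] for w in words if w in happiness]
    n = len(scores)
    total = sum(scores)
    avg = total / n if n else 0.0          # average over scorable words only
    pct = n / len(words) if words else 0.0  # fraction of words we could score
    return total, avg, n, pct

total, avg, n, pct = score_review("a great movie I love", happiness)
```

Tracking the percentage of scorable words alongside the averages helps flag reviews whose valence estimate rests on only a handful of dictionary hits.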
Step21: Now we'll write a function that, given a data frame, returns a new data frame with the concatenation of valence (happiness) info in 4 new columns
Step22: Now let's create a new dataframe valence_df with the valence statistics run on our IMDB_df. This code takes a few minutes to run.
Step23: Milestone 2
Step24: Now we'll import allMoives_new.json. Remember that we still need to convert some of the columns into the datetime type so we'll do that afterwards.
Step25: Great! Let's take a look at all the dataframes we've created
Step26: Now, we want to make a new dataframe flattened_df that we can use to run our regressions on. This dataframe will include all the columns in movie_df and the extra columns
* number of reviews for the movie in IMDB_df
* average stars from the reviews
* overall ranking
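The join described above can be sketched on toy data (a hypothetical mini-example; the real frames are movie_df and IMDB_df):

```python
import pandas as pd

# Toy stand-ins (hypothetical) for movie_df (one row per movie, indexed by title)
# and IMDB_df (one row per review).
movies = pd.DataFrame({'title': ['A', 'B'], 'gross': [100, 200]}).set_index('title')
reviews = pd.DataFrame({'movie_name': ['A', 'A', 'B'], 'stars': [8, 6, 4]})

grouped = reviews.groupby('movie_name')
summary = pd.DataFrame({'review_count': grouped.stars.count(),
                        'star_avg': grouped.stars.mean()})

# Join the per-movie review statistics back onto the movie table.
flattened = movies.join(summary)
print(flattened)
```

The same groupby/join pattern scales directly to the full tables.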
Step27: Let's take a look at what our dataframe looks like now!
Step28: ASK ANDREW WHETHER WE NEED TEST/TRAIN FOR OUR REGRESSION
WORK FLOW 0 GOAL | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
import time
import json
import statsmodels.api as sm
from statsmodels.formula.api import glm, ols
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
import math
Explanation: CS109 Final Project Process Book
Background & Motivation
Social media and entertainment are such pervasive parts of millennials' lives. We want to study the intersection of these two. Is it possible to predict the box office success of films through sentiments expressed on social media? Stay tuned for more!
Table of Contents
CS109 Final Project Process Book
Milestone 1: Scrape and prepare data before thanksgiving
Scraping and cleaning Box Office Mojo
Loading and preparing IMDB review dataset
Loading and preparing AFINN dictionary
Milestone 2: Analysing and visualizing the data
Descriptive statistics
Analysis
Visualization
Milestone 3: Video and finishing touches
Screencast video
Website
Finishing touches
End of explanation
from bs4 import BeautifulSoup
# The "requests" library makes working with HTTP requests easier
# than the built-in urllib libraries.
import requests
Explanation: Milestone 1: Scrape and prepare data before thanksgiving
For our project we will be using data from 3 different sources
<ul><b>Box Office Mojo (BOM)</b> (http://www.boxofficemojo.com) is a website that aggregates, in a table, a list of all movies released in a year and attributes such as how much it grossed in the opening week, how much it grossed in total and how long it aired for </ul>
<ul><b>Large Movie Review Dataset</b> (http://ai.stanford.edu/~amaas/data/sentiment/) is a polarized dataset of movie reviews from IMDB prepared by Maas et al from Stanford. The dataset contains 25,000 entries in the training set and 25,000 entries in the test set. </ul>
<ul><b>AFINN-111 Dictionary</b> (http://www2.imm.dtu.dk/pubdb/views/publication_details.php?id=60100) is a dictionary list of 2477 english words and phrases rated for valence with an integer from -5 to 5. Originally prepared by Finn Årup Nielsen</ul>
<br>
In this first milestone, we will get all the data into a format that we can start running analysis on!
Scraping and cleaning Box Office Mojo
First we import the requests and BeautifulSoup libraries to make working with HTTP requests easier, and then easily transfer HTML content to Python data structures.
End of explanation
movie_df = pd.DataFrame(columns=['close_date', 'gross', 'open_date', 'opening_gross', 'opening_theaters','ranking','title','total_theaters','year'])
Explanation: Secondly, we prepare the data frame movie_df to store the data that we will scrape from BOM. We give this dataframe 9 columns: <br>
* ranking: the ranking of the movie in its release year by gross
* title: movie title
* gross: how much the movie grossed while in theatres
* total_theaters: the total number of theaters that showed this movie
* opening_gross: how much the movie grossed in the opening weekend (Fri-Sun)
* opening_theaters: the total number of theaters that showed this movie in the opening weekend (Fri-Sun)
* open_date: date of opening
* close_date: date of closing
* year: year of release
End of explanation
def rowInfoGrabber(r):
info = []
# Ranking
info.append(int(r.find("font").get_text()))
# Title
info.append(r.find("a").get_text())
# Gross
info.append(int(r.find("td", attrs={"align":"right"}).find("b").get_text().strip("$").replace(",","")))
'''
For the next 3 categories, we need to deal with the 2000 Anomaly "Fantasia" where there are missing numbers.
In this case I have chosen to replace the missing values 'N/A' with the values from 'Final Destination', which
is right above it in the movie table and differs in gross income by about $1 million, which is a small
difference. See the picture below for a snapshot of the anomaly in the movie table from 2000.
'''
# Total number of theaters
if r.find_all("td",attrs={"align":"right"})[1].find("font").get_text().replace(",","") == 'N/A':
info.append(2587)
else:
info.append(int(r.find_all("td",attrs={"align":"right"})[1].find("font").get_text().replace(",","")))
# Opening Gross
if r.find_all("td", attrs={"align":"right"})[2].find("font").get_text().strip("$").replace(",","") == 'N/A':
info.append(10015822)
else:
info.append(int(r.find_all("td", attrs={"align":"right"})[2].find("font").get_text().strip("$").replace(",","")))
# Opening Number of Theaters
if r.find_all("td", attrs={"align":"right"})[3].find("font").get_text().replace(",","") == 'N/A':
info.append(2587)
else:
info.append(int(r.find_all("td", attrs={"align":"right"})[3].find("font").get_text().replace(",","")))
# Date of Opening
info.append(r.find_all("td", attrs={"align":"right"})[4].find("a").get_text())
# Date of Closing: Before 2002 they didn't have a "closing" date in their tables. We must account for this.
if (len(r.find_all("td", attrs={"align":"right"})) <= 5):
info.append('-')
else:
info.append(r.find_all("td", attrs={"align":"right"})[5].find("font").get_text())
return info
Explanation: Now we write a function rowInfoGrabber that we will call in a loop over the table on the BOM webpage to grab the attributes and save them into the corresponding columns in movie_df.
End of explanation
fields = ["ranking", "title", "gross", "total_theaters", "opening_gross", "opening_theaters", "open_date", "close_date"]
Explanation: This is the image: <img src="Example.png">
End of explanation
%%time
years = [1990 + i for i in range(26)]
for year in years:
pageText = requests.get("http://www.boxofficemojo.com/yearly/chart/?yr=%(yr)d&p=.htm" % {'yr':year})
soup = BeautifulSoup(pageText.text, "html.parser")
movieTable = soup.find("td", attrs={"colspan":"3"})
movieRows = movieTable.find("table").find_all("tr")[2:102]
print year
movie_dicts = [dict(zip(fields, rowInfoGrabber(row))) for row in movieRows]
year_df = pd.DataFrame(movie_dicts)
year_df['year'] = year
movie_df = movie_df.append(year_df, ignore_index=True)
time.sleep(1)
movie_df.shape
movie_df.head()
Explanation: Finally we're ready to scrape! <br>
Because IMDB was created in 1990, we will scrape that far back in BOM. So we're scraping the past 26 years (1990 - 2015). Also note that because the HTML was changed starting in 2001, our scraping will be a little different before and after then.
End of explanation
# if we decide it's worth just dumping the movies with no close_date, we can use the code below
# movie_df=movie_df[movie_df.close_date != '-'].reset_index(drop=True)
Explanation: Because some films do not have a close date, we will have to be careful with the close date!
End of explanation
# splitting the close_date and open_date into the respective month and day
movie_df['close_month'] = movie_df['close_date'].map(lambda x: '0' if x=='-' else x[:x.find('/')])
movie_df['close_day'] = movie_df['close_date'].map(lambda x: '0' if x=='-' else x[x.find('/')+1:len(x)])
movie_df['open_month'] = movie_df['open_date'].map(lambda x: x[:x.find('/')])
movie_df['open_day'] = movie_df['open_date'].map(lambda x: x[x.find('/')+1:len(x)])
# dropping the old close_date and open_date
movie_df = movie_df.drop('close_date', 1)
movie_df = movie_df.drop('open_date', 1)
# creating an open_year by turning the year column into a string and getting rid of trailing bits
movie_df['open_year'] = movie_df.year.astype(str)
movie_df['open_year'] = movie_df.open_year.map(lambda x: x[:x.find('.')])
# creating a close_year column, by looking at whether the close month is earlier/later than the open month in the year
close_month = movie_df['close_month'].astype(int)
open_month = movie_df['open_month'].astype(int)
year = movie_df['year'].astype(int)
close_year=[]
for i in range (0, len(year)):
if close_month[i] >= open_month[i]:
close_year.append(year[i])
else:
close_year.append(year[i]+1)
movie_df['close_year'] = close_year
movie_df['close_year'] = movie_df['close_year'].astype(str)
movie_df.head()
Explanation: Next, we combine the close_date, open_date and year columns into two columns close_date and open_date that are time series. This will make it easier for us to work with the data in the future.
End of explanation
movie_df.head()
Explanation: Let's take a look at the data, now!
End of explanation
run_time = []
# Note: this relies on the datetime close_date/open_date columns, which are
# reconstructed from the split year/month/day columns later in the notebook.
for index, row in movie_df.iterrows():
    if row['close_date'] is not None:
        run_time.append(row['close_date'] - row['open_date'])
    else:
        run_time.append(None)
movie_df['run_time'] = run_time
movie_df.head()
Explanation: Let's see if we can compute the run time for each movie!
End of explanation
# Save the movie dictionaries corresponding to each row of the BoxOfficeMojo table.
# (pymongo was considered for storage, but we settled on plain JSON files,
# so the install and imports below are left commented out.)
# !pip install pymongo
import json
# import pymongo
# from bson import json_util
# Make a dictionary out of the dataset for storage in JSON format.
movieSaved = {feature: movie_df[feature].values.tolist() for feature in movie_df.columns.values}
fp = open("allMovies_new.json","w")
json.dump(movieSaved, fp)
fp.close()
Explanation: Looks like the data is ready for us to use! Let's save this data so we're ready to use it next time.
End of explanation
with open("train_df_dict.json", "r") as fd:
train_df_dict = json.load(fd)
with open("test_df_dict.json", "r") as fd:
test_df_dict = json.load(fd)
train_df = pd.DataFrame(train_df_dict)
test_df = pd.DataFrame(test_df_dict)
Explanation: Loading and preparing IMDB review dataset
We have cleaned up our IMDB review dataset in the ipython notebook New.ipynb. From that notebook we have been able to save dictionaries of our data, which we will now call.
End of explanation
IMDB_df = train_df.append(test_df)
Explanation: The Stanford group distinguishes between train and test because this was relevant for their project. This may prove to be useful later, so we will keep two separate dataframes. However, for our purposes at the moment, we can combine them since the BOM data will serve as our true data set.
End of explanation
BOM_movie_list = movie_df.title.values.tolist()
Explanation: We want to figure out which movies from IMDB_df are also present in our BOM DF. So let's get all the movie titles in our BOM table.
End of explanation
movie_mask = [(movie in BOM_movie_list) for movie in IMDB_df.movie_name]
sum(movie_mask)
Explanation: Now let's create a mask over IMDB_df, the boolean values of which indicate whether or not a movie is in the BOM list.
End of explanation
IMDB_dftouse=IMDB_df[movie_mask]
Explanation: We can now create our final, relevant IMDB data frame, with only those movies that also appear in the BOM tables from 1990 - 2015.
End of explanation
IMDB_dftouse_dict = {feature: IMDB_dftouse[feature].values.tolist() for feature in IMDB_dftouse.columns.values}
fp = open("IMDB_dftouse_dict.json","w")
json.dump(IMDB_dftouse_dict, fp)
fp.close()
# Reopen
with open("IMDB_dftouse_dict.json", "r") as fd:
IMDB_dftouse_dict = json.load(fd)
Explanation: Finally we want to save our dictionary of IMDB_dftouse into a JSON file for storage.
End of explanation
url = 'http://www.plosone.org/article/fetchSingleRepresentation.action?uri=info:doi/10.1371/journal.pone.0026752.s001'
labmt = pd.read_csv(url, skiprows=2, sep='\t', index_col=0)
labmt.head()
Explanation: Analyzing and Saving Review Attributes Using labMT Happiness Dictionary
Now let's download labMT, a word score list for sentiment analysis containing over 10,000 words. The file contains a "happiness" value, and ranks words by their happiness. It also includes mean and standard deviation, Twitter rank and Google rank.
End of explanation
average = labmt.happiness_average.mean()
happiness = (labmt.happiness_average - average).to_dict()
print "Score(happy): ", happiness['happy']
print "Score(miserable): ", happiness['miserable']
print "Best score: ", max(happiness.values())
print "Worst score: ", min(happiness.values())
# Save to disc
# fp = open("happiness.json","w")
# json.dump(happiness, fp)
# fp.close()
# Reopen
with open("happiness.json", "r") as fp:
happiness = json.load(fp)
Explanation: Now let's create a happiness dictionary of (word, valence) pairs where each valence is that word's original valence minus the average valence.
End of explanation
from sklearn.feature_extraction import text
stopwords = text.ENGLISH_STOP_WORDS
punctuation = list('.,;:!?()[]{}`''\"@#$%^&*+-|-=~_')
def removeStopWords(text, stopwords = stopwords):
new_text = ""
for word in text.split():
if word not in stopwords:
while len(word) != 0 and word[-1] in punctuation:
word = word[:len(word)-1]
new_text += word + ' '
return new_text
Explanation: Now let's collect several attributes from a given review's text body, and save all valuable information into a new data frame. First we define a function that removes stop words (all non important words from a valence perspective) from a text body.
End of explanation
'''
Name: getValenceInfo()
Inputs: review text, dictionary of happiness
Returns: a 4-tuple of (happiness total, happiness average, total # of scorable words, % of scorable words)
'''
def getValenceInfo(text, valenceDict):
total_words = len(text.split())
happiness_total, count_relevant = 0, 0
for word in text.split():
        if word in valenceDict:
count_relevant += 1
happiness_total += valenceDict[word]
if count_relevant != 0:
avg_valence = 1.*happiness_total/count_relevant
else:
avg_valence = 0
return happiness_total, avg_valence, total_words, 1.*count_relevant / total_words
Explanation: Now we'll write a function that returns total happiness, average happiness, total scorable words, and percentage of scorable words in a given review text.
End of explanation
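As a quick, self-contained sanity check, here is the same scoring logic applied to a tiny made-up valence dictionary (hypothetical scores, not from labMT); the helper mirrors getValenceInfo above:

```python
# Toy valence dictionary (hypothetical scores, not from labMT).
toy_valence = {'love': 2.0, 'great': 1.5, 'terrible': -2.5}

def get_valence_info(text, valence_dict):
    # Sum the scores of the scorable words, as getValenceInfo does.
    words = text.split()
    scores = [valence_dict[w] for w in words if w in valence_dict]
    total = sum(scores)
    avg = total / len(scores) if scores else 0
    return total, avg, len(words), 1. * len(scores) / len(words)

total, avg, n_words, frac = get_valence_info('a great movie with terrible pacing', toy_valence)
print(total, avg, n_words, frac)
```

Here 'great' and 'terrible' are the only scorable words, so the totals come from just those two entries.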
'''
Name: getAllInfo
Input: data frame, happiness dictionary, list of stop words
Returns: a new data frame with 4 new columns: valence_sum, valence_avg, n_scorables, pct_scorables
'''
def getAllInfo(df, valenceDict, stopwords):
valence_suml, valence_avgl, review_lenl, review_fractionl = [], [], [], []
for i, row in df.iterrows():
cleaned_review = removeStopWords(row['text'], stopwords)
valence_sum, valence_avg, review_len, review_fraction = getValenceInfo(cleaned_review, valenceDict)
valence_suml.append(valence_sum)
valence_avgl.append(valence_avg)
review_lenl.append(review_len)
review_fractionl.append(review_fraction)
conc = pd.DataFrame({'valence_sum': valence_suml, 'valence_avg':valence_avgl ,'n_scorables': review_lenl,
'pct_scorables': review_fractionl})
return pd.concat([df, conc], axis=1)
Explanation: Now we'll write a function that, given a data frame, returns a new data frame with the concatenation of valence (happiness) info in 4 new columns: valence sum, valence average, # of scorable words, % of scorable words.
End of explanation
%%time
valence_df = getAllInfo(IMDB_df, happiness, stopwords)
valence_df.head()
# Convert True/False to 1/0: needed to make valence_df JSON serializable, also better practice
valence_df.positive = 1.0*valence_df.positive
# Save to disc
# fp = open("valence_df_dict.json","w")
# json.dump(valence_df.to_dict(), fp)
# fp.close()
# Reopen
with open("valence_df_dict.json", "r") as fp:
valence_df_dict = json.load(fp)
valence_df = pd.DataFrame(valence_df_dict)
Explanation: Now let's create a new dataframe valence_df with the valence statistics run on our IMDB_df. This code takes a few minutes to run.
End of explanation
dictionary=pd.read_csv("dictionary.csv")
with open("IMDB_dftouse_dict.json", "r") as fd:
IMDB = json.load(fd)
IMDB_df = pd.DataFrame(IMDB)
Explanation: Milestone 2: Analysing and visualizing the data
Descriptive statistics
We're going to reimport our prepared datasets - dictionary.csv (contains the dictionary from the AFINN dataset), and allMovies_new.json (which contains all the reviews that cross over with our BOM data). We will import these as panda dataframes dictionary and IMDB_df respectively.
End of explanation
with open("allMovies_new.json", "r") as fd:
movie_df = json.load(fd)
movie_df = pd.DataFrame(movie_df)
# making close_date and open_date by concatenating the year, month and day
import datetime
close_date = []
for index, row in movie_df.iterrows():
if row.close_day != '0':
close_date.append(datetime.datetime(int(row.close_year), int(row.close_month), int(row.close_day)))
else:
close_date.append(None)
movie_df['close_date'] = close_date
movie_df['open_date']=movie_df.open_year + '-' + movie_df.open_month + '-' + movie_df.open_day
movie_df['open_date'] = pd.to_datetime(movie_df['open_date'])  # pd.datetools.parse was removed in later pandas versions
# dropping unnecessary columns
movie_df = movie_df.drop('close_day', 1)
movie_df = movie_df.drop('close_month', 1)
movie_df = movie_df.drop('open_day', 1)
movie_df = movie_df.drop('open_month', 1)
Explanation: Now we'll import allMovies_new.json. Remember that we still need to convert some of the columns into the datetime type so we'll do that afterwards.
End of explanation
movie_df
IMDB_df.head()
Explanation: Great! Let's take a look at all the dataframes we've created
End of explanation
# set index to title
indexed_df = movie_df.set_index("title")
# use groupby to get the review_count and the star_avg
gold = IMDB_df.groupby("movie_name")
review_count = gold.movie_name.count()
star_avg = gold.stars.mean()
positive = gold.positive.mean()
# concatenate the two series into our final dataframe flattened_df
flattened_df = pd.concat([indexed_df, review_count], axis=1, join_axes=[indexed_df.index])
flattened_df.rename(columns={'movie_name': 'review_count'}, inplace=True)
flattened_df = pd.concat([flattened_df, star_avg], axis=1, join_axes=[indexed_df.index])
flattened_df.rename(columns={'stars': 'star_avg'}, inplace=True)
flattened_df = pd.concat([flattened_df, positive], axis=1, join_axes=[indexed_df.index])
Explanation: Now, we want to make a new dataframe flattened_df that we can use to run our regressions on. This dataframe will include all the columns in movie_df and the extra columns
* number of reviews for the movie in IMDB_df
* average stars from the reviews
* overall ranking
End of explanation
flattened_df.head()
dftouse = flattened_df[~flattened_df['review_count'].map(np.isnan)]
dftouse.shape
dftouse.head()
# Define the low- and high-rated subsets before plotting them.
dftouse_four = flattened_df[flattened_df['star_avg'] <= 4]
dftouse_seven = flattened_df[flattened_df['star_avg'] >= 7]
plt.scatter(y=dftouse.opening_gross, x=dftouse.star_avg)
plt.scatter(y=dftouse_four.opening_gross, x=dftouse_four.star_avg, s=dftouse_four.review_count)
plt.scatter(y=dftouse_seven.opening_gross, x=dftouse_seven.star_avg, s=dftouse_seven.review_count)
dftouse_four.head()
bobby_ols = ols('opening_gross ~ star_avg',dftouse).fit()
bobby_ols.summary()
bobby_ols = ols('opening_gross ~ star_avg',dftouse_four).fit()
bobby_ols.summary()
bobby_ols = ols('opening_gross ~ star_avg',dftouse_seven).fit()
bobby_ols.summary()
Explanation: Let's take a look at what our dataframe looks like now!
End of explanation
#getting inflation data and creating a new dataframe
inflation = pd.read_csv("inf.csv")
print inflation
#Creating a dataframe of Cumulative Inflation
years_90 = range(1990,2015)
infdict = {}
infindex = 0
infvalue = 1
testlist = []
for row in inflation.values:
currentval = 1 + (row[1]/100)
cuminf = infvalue*currentval
infdict[years_90[infindex]] = cuminf
infindex += 1
infvalue = cuminf
testlist.append(cuminf)
inframe = pd.DataFrame(data=testlist, index=range(1990,2015))
#infdict exists in case we need it later
inframe
# The line below was left incomplete in the original; a best-guess completion
# (hypothetical): express each movie's gross in end-of-period dollars by scaling
# with the cumulative inflation factors computed above.
adjframe = movie_df.copy()
adjframe['adj_gross'] = adjframe.gross * adjframe.year.map(
    lambda y: testlist[-1] / infdict.get(int(y), testlist[-1]))
adjframe.head()
Explanation: ASK ANDREW WHETHER WE NEED TEST/TRAIN FOR OUR REGRESSION
WORK FLOW 0 GOAL: DESCRIPTIVE STATISTICS <br>
- How many unique movies we have in our IMDB_df
- how many reviews they have each
- which years do they represent
WORK FLOW 1 GOAL: MAKE A VALENCE SCORE THAT CLOSELY MATCHES STARS
1. Create a valence score for each review
* summation
* average
* product
1a. Check how good our valence score is by testing on the stars
WORK FLOW 2 GOAL: FIND A REGRESSION MODEL THAT BEST PREDICTS THE FOLLOWING Y VARIABLES
2. Regress on:
* opening_gross (controlled for opening weekend)
* opening_gross/opening_theater
* total_gross
* total_gross/total_theateer
Just realised that I was meant to take notes.
- try making branches on github: each person should work on their own branch and not push to the origin until you are sure what you're doing is correct
- share andrew on the github
Re: pretty woman question
- useful to consider whether polarizing reviews actually predict the box office performance
Re: regression
- if at the end of the day we choose to do simply box office gross vs. sentiment, this is just a correlation, so it is more on the simpler side of things
- the difficulty should be more commensurate with the pset difficulty
- should take it to the next level, a few possibilities:
  * measures of dispersion on the sentiment analysis
  * different types of distributions you have in a given review
  * how frequently certain words are used
  * average length of words
  * average document lengths
  * ratio between high valence words and low valence words
  * we can do different sentiment scores based on different sentiment dictionaries
  * could use a BOW model (have the corpus of words in the reviews) & shrink down for regularization
We can adopt the hypothesis that stars predict box office gross
Or the hypothesis that very polarized responses predict box office gross
It's okay to have a dud hypothesis
Graded on the effort we put in - low or unimpressive results should not be equated with not having done much work. We can make up for it via lit reviews or extra research.
BY NEXT MONDAY: ANALYSIS SHOULD BE DONE, AND AT THAT POINT JUST TOUCHING UP THE VISUALIZATIONS
1. Do not underestimate the time that it takes to make a good screencast, website, and effective visualizations
- ANDREW IS HUMAN
- If we put in alot of work to our visualizations, it will go a long way :)
- just as much thought put into the end result as with the analysis
Analysis
End of explanation |
10,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic regression
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: Generational Changes
As a second example of logistic regression, we'll use data from the General Social Survey (GSS) to describe generational changes in support for legalization of marijuana.
Since 1972 the GSS has surveyed a representative sample of adults in the U.S., asking about issues like "national spending priorities, crime and punishment, intergroup relations, and confidence in institutions".
I have selected a subset of the GSS data, resampled it to correct for stratified sampling, and made the results available in an HDF file.
The following cell downloads the data.
Step2: We can use Pandas to load the data.
Step3: The result is a DataFrame with one row for each respondent and one column for each variable.
The primary variable we'll explore is grass, which encodes each respondent's answer to this question (details here)
Step4: The value 1.0 represents "yes"; 2.0 represents "no"; NaN represents people who were not asked the question and a small number of respondents who did not respond or said "I don't know".
To explore generational changes in the responses, we will look at the level of support for legalization as a function of birth year, which is encoded in a variable called cohort. Here's a summary of this variable.
Step5: The oldest GSS respondent was born in 1883; the youngest was born in 2000.
Before we analyze this data, I will select the subset of respondents with valid data for grass and cohort
Step6: There are about 37,000 respondents with the data we need.
I'll recode the values of grass so 1 means yes and 0 means no.
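The recoding can be sketched like this (toy responses, assuming the original coding 1.0 = "yes", 2.0 = "no" described above):

```python
import pandas as pd

# Toy responses in the original GSS coding (1.0 = yes, 2.0 = no).
grass = pd.Series([1.0, 2.0, 2.0, 1.0])

# Recode so 1 = yes and 0 = no; then the mean is the fraction saying "yes".
grass = grass.replace(2.0, 0.0)
print(grass.mean())
```

With 0/1 coding, group means and sums directly give support rates and counts.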
Step7: Now, for this problem, I'm going to represent the data in a different format. Rather than one row for each respondent, I am going to group the respondents by birth year and record the number of respondents in each group, count, and the number who support legalization, sum.
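A minimal sketch of that regrouping, on a hypothetical five-respondent frame:

```python
import pandas as pd

# Toy respondent-level data: birth year and recoded response (1 = yes, 0 = no).
valid = pd.DataFrame({'cohort': [1950, 1950, 1950, 1980, 1980],
                      'grass':  [1, 0, 0, 1, 1]})

# One row per birth year: group size ('count') and number of yeses ('sum').
data = valid.groupby('cohort')['grass'].agg(['count', 'sum'])
print(data)
```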
Step9: Here's what the results look like
Step10: There is a strong relationship between birth year and support for legalization. People born before 1920 are the least likely to say "yes"; people born after 1990 are the most likely.
There are substantial departures from the long-term trend for people born in the 1950s and late 1960s. If you want to conjecture about the causes, it might help to think about what was happening when each group turned 18. People born in 1950 turned 18 during the counterculture of the 1960s. People born in the late 1960s turned 18 during the "Just Say No" era of the War on Drugs and the peak in the AIDS epidemic in the U.S.
Point estimates
I'll use StatsModels again to generate point estimates for the slope and intercept of a logistic model.
As we did with the previous problem, I'll center the values of the explanatory variable so the mean is 0.
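Centering is a one-liner; it changes the meaning of the intercept (log odds at the mean birth year) without changing the slope. A sketch with hypothetical birth years:

```python
import numpy as np

# Hypothetical birth years; centering shifts them so the regression intercept
# refers to the mean birth year rather than year 0.
cohort = np.array([1940, 1950, 1960, 1970, 1980])
offset = cohort.mean()
x = cohort - offset
print(offset, x)
```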
Step11: Here are the results from StatsModels.
Step12: To visualize the results, I'll use these parameters to estimate the probability of support in each cohort.
Step13: I'll shift the birth years in data by offset.
Step14: And use expit to compute the probabilities.
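expit is the inverse of the log-odds (logit) transform, so it maps the linear predictor onto probabilities; a sketch with made-up parameter values:

```python
import numpy as np
from scipy.special import expit

# Hypothetical point estimates: intercept = log odds at x = 0, slope per year.
inter, slope = -1.15, 0.025
xs = np.array([-40.0, 0.0, 40.0])   # centered birth years

ps = expit(inter + slope * xs)      # predicted probability of "yes"
print(ps)
```

With a positive slope, the predicted probabilities increase with birth year, as the data suggest.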
Step15: Here's what the model looks like with the data.
Step16: With these parameters, the model captures the long term trend in the data.
Computing likelihoods
Before we do the Bayesian update, let's compute the probability of the data with the estimated parameters.
From the data, we know how many people there are in each group and how many of them support legalization. From the model, we have an estimate for the probability of support in each group.
So we can use the binomial distribution to compute the probability of the data given the estimated probabilities.
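Concretely, for each group the likelihood is binom.pmf(k, n, p), and the data likelihood is the product over groups (toy numbers below, not the real GSS counts):

```python
import numpy as np
from scipy.stats import binom

ns = np.array([100, 80, 120])       # toy group sizes
ks = np.array([25, 30, 70])         # toy counts of "yes"
ps = np.array([0.24, 0.35, 0.55])   # toy model probabilities

likes = binom.pmf(ks, ns, ps)       # per-group likelihoods
likelihood = likes.prod()           # likelihood of all the data
print(likes, likelihood)
```

Each factor is less than 1, so the product shrinks with every group — which is exactly why underflow becomes a concern below.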
Step17: For each group likes contains the probability of the outcome, k, given the group size, n, and the estimated probability, p.
The likelihood of the data is the product of these likelihoods
Step18: This likelihood is very small, for two reasons
Step19: Any number smaller than that "underflows"; that is, it gets rounded down to 0. When that happens, we lose the ability to distinguish between parameters that make the model fit the data or not. In the worst case, if all likelihoods underflow, all probabilities in the posterior distribution would be 0.
In this example, the likelihoods are big enough that we can still do a Bayesian update, so we'll do that next.
Then I will demonstrate a trick we can use to avoid underflow
Step20: I'll make a joint prior.
Step21: And stack it into a Pmf with a two-column index.
Step22: Here's the update, using the binomial distribution to compute the likelihood of the data in each group.
Step23: Again, the likelihoods are small.
Step24: But we can do the update in the usual way.
Step25: And there are enough non-zero elements to get a useful posterior distribution.
Here's what it looks like.
Step26: We can confirm that the parameters with maximum posterior probability are consistent with the point estimates.
Step27: Here are the means of the marginal distributions.
Step28: Recall that the intercept indicates the log odds of the hypothesis at x=0.
To make the distribution of intercepts easier to interpret, I'll use expit to transform the values to probabilities.
Step29: And here's what it looks like.
Step30: The mean of this distribution is about 24%, which is the predicted probability of supporting legalization for someone born around 1949.
Step31: The estimated slope is the log of the likelihood ratio for each additional year of birth. To interpret slopes as likelihood ratios, we can use np.exp to transform the values in the posterior distribution.
Step32: And here's what it looks like.
Step33: The mean of this distribution is about 0.43, which indicates that each additional year is evidence that the respondent will say "yes", with a likelihood ratio (or Bayes factor) of 0.43.
Step34: Later we will use the joint posterior distribution to generate predictions, but first I'll show how to compute likelihoods under a log transform.
Log Likelihood
Because of the problem of underflow, many likelihood computations are done under a log transform. That's why the distributions in SciPy, including binom, provide functions to compute logarithms of PMFs and PDFs.
Here's a loop that uses binom.logpmf to compute the log likelihood of the data for each pair of parameters in joint_pmf
Step35: log_likes is an array that contains the logarithms of the binomial PMFs for each group.
The sum of these logarithms is the log of their product, which is the log-likelihood of the data.
Since the likelihoods are small, their logarithms are negative. The smallest (most negative) is about -610; the largest (least negative) is about -480.
Step36: So the log likelihoods are comfortably within the range we can represent with floating-point numbers.
However, before we can do the update, we have to convert the logarithms back to a linear scale. To do that while minimizing underflow, I am going to shift the logs up toward zero.
Adding a constant to the log_likelihood is the same as multiplying a constant by likelihood.
We can do that without affecting the results because we have to normalize the posterior probabilities, so the multiplicative constant gets normalized away.
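The shift-by-the-max trick in isolation (toy log likelihoods): subtracting the largest log likelihood before exponentiating rescales the biggest likelihood to exactly 1 while preserving all ratios, and normalization cancels the constant:

```python
import numpy as np

log_likelihood = np.array([-610.0, -505.0, -480.0])   # toy log likelihoods

shifted = log_likelihood - log_likelihood.max()       # largest becomes 0
likes = np.exp(shifted)                               # safe to exponentiate now
posterior = likes / likes.sum()                       # the constant cancels here
print(likes, posterior)
```

This is the same idea behind scipy's logsumexp.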
Step37: After subtracting away the largest element in log_likelihood, the range of values in the result is from -127 to 0.
Step38: So the range of likelihoods is from near 0 to 1.
Step39: Now we can use them as likelihoods in a Bayesian update.
Step40: To confirm that we get the same results using likelihoods or log-likelihoods, I'll compute the mean of the marginal posterior distributions
Step41: And compare them to what we got using (non-log) likelihoods.
Step42: They are the same except for small differences due to floating-point approximation.
In this example, we can compute the posterior distribution either way, using likelihoods or log likelihoods.
But if there were more data, the likelihoods would underflow and it would be necessary to use log likelihoods.
Making predictions
As we did with the previous example, we can use the posterior distribution of the parameters to generate predictions, which we can use to see whether the model fits the data and to extrapolate beyond the data.
I'll start with a sample from the posterior distribution.
Step43: And a range of xs that extends 20 years past the observed data.
Step44: We can use the sampled parameters to predict probabilities for each group.
Step45: But that only accounts for uncertainty about the parameters.
We also have to account for variability in the size of the groups. Here's the distribution of group size, dropping the groups smaller than 20.
Step46: To simulate variation in group size, I'll use np.random.choice to resample the group sizes; that is, I'll draw from counts a sample with the same length as xs, sampling with replacement.
Step47: Even if we know how many people are in each group and their probability of saying "yes", there is still uncertainty in the outcome. We can use the binomial distribution to simulate this (final) source of uncertainty.
Putting it all together, the following loop combines these sources of uncertainty to generate predictive distributions for each group.
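A sketch of the whole predictive simulation on toy inputs — a few (inter, slope) draws, resampled group sizes, and binomial outcome noise — followed by the percentile step (all numbers hypothetical):

```python
import numpy as np
from scipy.special import expit
from scipy.stats import binom

np.random.seed(17)
xs = np.arange(-60, 81, 20)                    # centered birth years (toy)
counts = np.array([150, 400, 600, 350, 200])   # observed group sizes (toy)
sample = [(-1.2, 0.024), (-1.1, 0.026), (-1.15, 0.025)]  # toy (inter, slope) draws

pred = np.empty((len(sample), len(xs)))
for i, (inter, slope) in enumerate(sample):
    ps = expit(inter + slope * xs)             # parameter uncertainty
    ns = np.random.choice(counts, len(xs))     # group-size variability
    ks = binom.rvs(ns, ps)                     # binomial outcome noise
    pred[i] = ks / ns                          # simulated fraction saying "yes"

low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
print(median)
```

Each row of pred is one simulated world; the column-wise percentiles give the median prediction and a 90% credible band.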
Step48: The result is an array with one row for each pair of parameters in the sample and one column for each value in xs.
Now we can use np.percentile to compute percentiles in each column.
Step49: And use them to plot a 90% credible interval for the predictions. | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Logistic regression
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
# Load the data file
import os
datafile = 'gss_eda.hdf5'
if not os.path.exists(datafile):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/data/gss_eda.hdf5
Explanation: Generational Changes
As a second example of logistic regression, we'll use data from the General Social Survey (GSS) to describe generational changes in support for legalization of marijuana.
Since 1972 the GSS has surveyed a representative sample of adults in the U.S., asking about issues like "national spending priorities, crime and punishment, intergroup relations, and confidence in institutions".
I have selected a subset of the GSS data, resampled it to correct for stratified sampling, and made the results available in an HDF file.
The following cell downloads the data.
End of explanation
gss = pd.read_hdf(datafile, 'gss')
gss.shape
Explanation: We can use Pandas to load the data.
End of explanation
gss['grass'].value_counts(dropna=False)
Explanation: The result is a DataFrame with one row for each respondent and one column for each variable.
The primary variable we'll explore is grass, which encodes each respondent's answer to this question (details here):
"Do you think the use of marijuana should be made legal or not?"
This question was asked during most years of the survey starting in 1973, so it provides a useful view of changes in attitudes over almost 50 years.
Here are is the distributions of responses:
End of explanation
gss['cohort'].describe()
Explanation: The value 1.0 represents "yes"; 2.0 represents "no"; NaN represents people who were not asked the question and a small number of respondents who did not respond or said "I don't know".
To explore generational changes in the responses, we will look at the level of support for legalization as a function of birth year, which is encoded in a variable called cohort. Here's a summary of this variable.
End of explanation
valid = gss.dropna(subset=['grass', 'cohort']).copy()
valid.shape
Explanation: The oldest GSS respondent was born in 1883; the youngest was born in 2000.
Before we analyze this data, I will select the subset of respondents with valid data for grass and cohort:
End of explanation
valid['y'] = valid['grass'].replace(2, 0)
valid['y'].value_counts()
Explanation: There are about 37,000 respondents with the data we need.
I'll recode the values of grass so 1 means yes and 0 means no.
End of explanation
data = valid.groupby('cohort')['y'].agg(['sum', 'count'])
data
Explanation: Now, for this problem, I'm going to represent the data in a different format. Rather than one row for each respondent, I am going to group the respondents by birth year and record the number of respondents in each group, count, and the number who support legalization, sum.
End of explanation
def plot_data(data):
    """Plot the fraction of yes responses.

    data: DataFrame with columns `sum` and `count`
    """
    fraction = data['sum'] / data['count']
    plt.plot(data.index, fraction, 'o',
             label='GSS data', color='C0', alpha=0.4)
    decorate(xlabel='Year of birth',
             ylabel='Percent in favor',
             title='Support for legal marijuana vs cohort')

plot_data(data)
Explanation: Here's what the results look like:
End of explanation
offset = valid['cohort'].mean()
valid['x'] = valid['cohort'] - offset
Explanation: There is a strong relationship between birth year and support for legalization. People born before 1920 are the least likely to say "yes"; people born after 1990 are the most likely.
There are substantial departures from the long-term trend for people born in the 1950s and late 1960s. If you want to conjecture about the causes, it might help to think about what was happening when each group turned 18. People born in 1950 turned 18 during the counterculture of the 1960s. People born in the late 1960s turned 18 during the "Just Say No" era of the War on Drugs and the peak in the AIDS epidemic in the U.S.
Point estimates
I'll use StatsModels again to generate point estimates for the slope and intercept of a logistic model.
As we did with the previous problem, I'll center the values of the explanatory variable so the mean is 0.
End of explanation
import statsmodels.formula.api as smf
formula = 'y ~ x'
results = smf.logit(formula, data=valid).fit(disp=0)
results.params
Explanation: Here are the results from StatsModels.
End of explanation
inter = results.params['Intercept']
slope = results.params['x']
Explanation: To visualize the results, I'll use these parameters to estimate the probability of support in each cohort.
End of explanation
data['x'] = data.index - offset
data.head()
Explanation: I'll shift the birth years in data by offset.
End of explanation
probs = expit(inter + slope * data['x'])
probs.head()
Explanation: And use expit to compute the probabilities.
End of explanation
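For reference, expit is just the logistic function, which maps log odds to probabilities; a minimal hand-rolled sketch (scipy.special.expit computes the same thing):

```python
import numpy as np

def expit_manual(z):
    # The logistic function: maps log odds z to a probability in (0, 1).
    return 1 / (1 + np.exp(-z))

print(expit_manual(0.0))  # 0.5: log odds of 0 means even odds
```

So a negative intercept corresponds to a probability below 50%, and adding the slope times x shifts the log odds before this mapping.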
probs.plot(label='Logistic model', color='C1')
plot_data(data)
Explanation: Here's what the model looks like with the data.
End of explanation
from scipy.stats import binom
ks = data['sum']
ns = data['count']
likes = binom.pmf(ks, ns, probs)
likes.shape
Explanation: With these parameters, the model captures the long term trend in the data.
Computing likelihoods
Before we do the Bayesian update, let's compute the probability of the data with the estimated parameters.
From the data, we know how many people there are in each group and how many of them support legalization. From the model, we have an estimate for the probability of support in each group.
So we can use the binomial distribution to compute the probability of the data given the estimated probabilities.
End of explanation
likes.prod()
Explanation: For each group likes contains the probability of the outcome, k, given the group size, n, and the estimated probability, p.
The likelihood of the data is the product of these likelihoods:
End of explanation
import sys
sys.float_info.min_exp
Explanation: This likelihood is very small, for two reasons:
The dataset is large, which means that there are many possible outcomes, so the probability of any particular outcome is small.
The data deviate substantially from the model, so the probability of this particular outcome is small.
In theory, it's not a problem if the likelihood of the data is small. We might not get a model that fits the data perfectly, but we'll get the parameters that come as close as possible.
However, in practice small likelihoods can be problematic. With floating-point numbers, the smallest positive normal number we can represent is about $2^{-1021}$, roughly 2e-308; sys.float_info.min_exp reports that exponent.
End of explanation
qs = np.linspace(-0.95, -0.75, num=51)
prior_inter = make_uniform(qs, 'Intercept')
qs = np.linspace(0.025, 0.035, num=51)
prior_slope = make_uniform(qs, 'Slope')
Explanation: Any number smaller than that "underflows"; that is, it gets rounded down to 0. When that happens, we lose the ability to distinguish between parameters that make the model fit the data or not. In the worst case, if all likelihoods underflow, all probabilities in the posterior distribution would be 0.
In this example, the likelihoods are big enough that we can still do a Bayesian update, so we'll do that next.
Then I will demonstrate a trick we can use to avoid underflow: computing likelihoods under a log transformation.
The update
I'll use uniform priors for the parameters, with locations centered around the point estimates.
End of explanation
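Before running the update, here is a minimal sketch (not part of the original analysis) of why small likelihoods are dangerous: the product of many of them underflows to zero, while the sum of their logs stays representable.

```python
import numpy as np

# Many small likelihoods: the product underflows to exactly 0.0,
# but the sum of the logs is a perfectly ordinary float.
likes = np.full(5000, 1e-80)
print(np.prod(likes))       # 0.0 -- underflow
print(np.log(likes).sum())  # about -921034, representable with no trouble
```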
joint = make_joint(prior_inter, prior_slope)
joint.head()
Explanation: I'll make a joint prior.
End of explanation
joint_pmf = Pmf(joint.stack())
joint_pmf.head()
Explanation: And stack it into a Pmf with a two-column index.
End of explanation
likelihood = joint_pmf.copy()
xs = data['x']
ks = data['sum']
ns = data['count']
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
likes = binom.pmf(ks, ns, ps)
likelihood[slope, inter] = likes.prod()
Explanation: Here's the update, using the binomial distribution to compute the likelihood of the data in each group.
End of explanation
likelihood.sum()
Explanation: Again, the likelihoods are small.
End of explanation
posterior_pmf = joint_pmf * likelihood
posterior_pmf.normalize()
Explanation: But we can do the update in the usual way.
End of explanation
joint_posterior = posterior_pmf.unstack()
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution')
Explanation: And there are enough non-zero elements to get a useful posterior distribution.
Here's what it looks like.
End of explanation
print(posterior_pmf.max_prob())
print(results.params.values[::-1])
Explanation: We can confirm that the parameters with maximum posterior probability are consistent with the point estimates.
End of explanation
marginal_inter = marginal(joint_posterior, 0)
marginal_slope = marginal(joint_posterior, 1)
marginal_inter.mean(), marginal_slope.mean()
Explanation: Here are the means of the marginal distributions.
End of explanation
marginal_probs = transform(marginal_inter, expit)
Explanation: Recall that the intercept indicates the log odds of the hypothesis at x=0.
To make the distribution of intercepts easier to interpret, I'll use expit to transform the values to probabilities.
End of explanation
marginal_probs.plot(color='C4')
decorate(xlabel='Probability at x=0',
ylabel='PDF',
title='Posterior distribution of intercept in terms of probability')
Explanation: And here's what it looks like.
End of explanation
marginal_probs.mean(), offset
Explanation: The mean of this distribution is about 30%, which is the predicted probability of supporting legalization for someone born around 1949.
End of explanation
marginal_lr = transform(marginal_slope, np.exp)
Explanation: The estimated slope is the log of the likelihood ratio for each additional year of birth. To interpret slopes as likelihood ratios, we can use np.exp to transform the values in the posterior distribution.
End of explanation
marginal_lr.plot(color='C2')
decorate(xlabel='Likelihood ratio of each additional year',
         ylabel='PDF',
         title='Posterior distribution of slope in terms of likelihood ratio')
Explanation: And here's what it looks like.
End of explanation
marginal_lr.mean()
Explanation: The mean of this distribution is about 1.03, which indicates that each additional year is evidence that the respondent will say "yes", with a likelihood ratio (or Bayes factor) of 1.03.
End of explanation
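To connect the slope to odds more concretely, here is a small sketch; the slope value below is an assumed number near the point estimate, not taken from the posterior. Each additional birth year multiplies the odds of a "yes" by exp(slope):

```python
import numpy as np

slope = 0.028  # assumed log-odds slope per birth year (illustrative)
print(round(np.exp(slope), 3))       # 1.028 -- odds multiplier per year
print(round(np.exp(10 * slope), 2))  # 1.32 -- odds multiplier per decade
```

Compounding a small per-year ratio over a decade or a generation is what produces the large generational differences in the data.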
log_likelihood = joint_pmf.copy()
for slope, inter in joint_pmf.index:
ps = expit(inter + slope * xs)
log_likes = binom.logpmf(ks, ns, ps)
log_likelihood[slope, inter] = log_likes.sum()
Explanation: Later we will use the joint posterior distribution to generate predictions, but first I'll show how to compute likelihoods under a log transform.
Log Likelihood
Because of the problem of underflow, many likelihood computations are done under a log transform. That's why the distributions in SciPy, including binom, provide functions to compute logarithms of PMFs and PDFs.
Here's a loop that uses binom.logpmf to compute the log likelihood of the data for each pair of parameters in joint_pmf:
End of explanation
log_likelihood.min(), log_likelihood.max()
Explanation: log_likes is an array that contains the logarithms of the binomial PMFs for each group.
The sum of these logarithms is the log of their product, which is the log-likelihood of the data.
Since the likelihoods are small, their logarithms are negative. The smallest (most negative) is about -610; the largest (least negative) is about -480.
End of explanation
shifted = log_likelihood - log_likelihood.max()
likelihood2 = np.exp(shifted)
Explanation: So the log likelihoods are comfortably with the range we can represent with floating-point numbers.
However, before we can do the update, we have to convert the logarithms back to a linear scale. To do that while minimizing underflow, I am going to shift the logs up toward zero.
Adding a constant to the log_likelihood is the same as multiplying a constant by likelihood.
We can do that without affecting the results because we have to normalize the posterior probabilities, so the multiplicative constant gets normalized away.
End of explanation
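A minimal sketch of why the shift works (toy numbers, not from this model): subtracting the maximum log likelihood leaves the normalized probabilities unchanged while rescuing them from underflow.

```python
import numpy as np

logp = np.array([-1000.0, -1001.0, -1003.0])
print(np.exp(logp))            # [0. 0. 0.] -- direct exponentiation underflows

p = np.exp(logp - logp.max())  # shift so the largest log likelihood is 0
p /= p.sum()
print(p)                       # valid probabilities, summing to 1 (up to rounding)
```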
shifted.min(), shifted.max()
Explanation: After subtracting away the largest element in log_likelihood, the range of values in the result is from -127 to 0.
End of explanation
likelihood2.min(), likelihood2.max()
Explanation: So the range of likelihoods is from near 0 to 1.
End of explanation
posterior_pmf2 = joint_pmf * likelihood2
posterior_pmf2.normalize()
Explanation: Now we can use them as likelihoods in a Bayesian update.
End of explanation
joint_posterior2 = posterior_pmf2.unstack()
marginal2_inter = marginal(joint_posterior2, 0)
marginal2_slope = marginal(joint_posterior2, 1)
print(marginal2_inter.mean(), marginal2_slope.mean())
Explanation: To confirm that we get the same results using likelihoods or log-likelihoods, I'll compute the mean of the marginal posterior distributions:
End of explanation
print(marginal_inter.mean(), marginal_slope.mean())
Explanation: And compare them to what we got using (non-log) likelihoods.
End of explanation
np.random.seed(42)
sample = posterior_pmf.sample(101)
Explanation: They are the same except for small differences due to floating-point approximation.
In this example, we can compute the posterior distribution either way, using likelihoods or log likelihoods.
But if there were more data, the likelihoods would underflow and it would be necessary to use log likelihoods.
Making predictions
As we did with the previous example, we can use the posterior distribution of the parameters to generate predictions, which we can use to see whether the model fits the data and to extrapolate beyond the data.
I'll start with a sample from the posterior distribution.
End of explanation
xs = np.arange(1880, 2021) - offset
Explanation: And a range of xs that extends 20 years past the observed data.
End of explanation
ps = np.empty((len(sample), len(xs)))
for i, (slope, inter) in enumerate(sample):
ps[i] = expit(inter + slope * xs)
ps.shape
Explanation: We can use the sampled parameters to predict probabilities for each group.
End of explanation
not_small = (data['count'] >= 20)
counts = data.loc[not_small, 'count']
counts.describe()
Explanation: But that only accounts for uncertainty about the parameters.
We also have to account for variability in the size of the groups. Here's the distribution of group size, dropping the groups smaller than 20.
End of explanation
ns = np.random.choice(counts, len(xs), replace=True)
ns[:10]
Explanation: To simulate variation in group size, I'll use np.random.choice to resample the group sizes; that is, I'll draw from counts a sample with the same length as xs, sampling with replacement.
End of explanation
pred = np.empty((len(sample), len(xs)))
for i, (slope, inter) in enumerate(sample):
ps = expit(inter + slope * xs)
ns = np.random.choice(counts, len(xs), replace=True)
ks = binom(ns, ps).rvs(len(xs))
pred[i] = ks / ns
pred.shape
Explanation: Even if we know how many people are in each group and their probability of saying "yes", there is still uncertainty in the outcome. We can use the binomial distribution to simulate this (final) source of uncertainty.
Putting it all together, the following loop combines these sources of uncertainty to generate predictive distributions for each group.
End of explanation
low, median, high = np.percentile(pred, [5, 50, 95], axis=0)
median.shape
Explanation: The result is an array with one row for each pair of parameters in the sample and one column for each value in xs.
Now we can use np.percentile to compute percentiles in each column.
End of explanation
plt.fill_between(xs+offset, low, high,
color='C1', alpha=0.2)
plt.plot(xs+offset, median, label='Logistic model',
color='C1')
plot_data(data)
Explanation: And use them to plot a 90% credible interval for the predictions.
End of explanation |
10,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bernoulli probability distribution
Bernoulli trials
An experiment whose outcome takes one of only two values, success or failure, is called a Bernoulli trial. For example, tossing a coin once so that it lands either heads (H
Step1: Using the pmf method, we can compute the probability mass function (pmf
Step2: To run a simulation, use the rvs method.
Step3: Visualize the result with seaborn's countplot command.
Step4: To show the theoretical distribution and the sample distribution together, use the following code.
With NumPy's bincount command, we count the number of data points equal to 0 and equal to 1 and collect the counts in a DataFrame.
Step5: Visualizing with seaborn's barplot command gives the following.
Step6: Moments of the Bernoulli distribution
The moments of the Bernoulli distribution are as follows.
Expected value
$$\text{E}[X] = \theta$$
(Proof)
$$\text{E}[X] = 1 \cdot \theta + 0 \cdot (1 - \theta) = \theta$$
Variance
$$\text{Var}[X] = \theta(1-\theta)$$
(Proof)
$$\text{Var}[X] = (1 - \theta)^2 \cdot \theta + (0 - \theta)^2 \cdot (1 - \theta) = \theta(1-\theta)$$
In the example above, $\theta = 0.6$, so the theoretical mean and variance are:
$$ \text{E}[X] = 0.6 $$
$$ \text{Var}[X] = 0.6 \cdot (1 - 0.6) = 0.24 $$
The sample mean and sample variance computed from the data are obtained as follows.
Step7: With SciPy's describe command, we can compute them as follows.
Step8: Or, converting to a Pandas Series object and using its describe method:
theta = 0.6
rv = sp.stats.bernoulli(theta)
rv
Explanation: Bernoulli probability distribution
Bernoulli trials
An experiment whose outcome takes one of only two values, success or failure, is called a Bernoulli trial. For example, tossing a coin once so that it lands either heads (H: Head) or tails (T: Tail) is a kind of Bernoulli trial.
When the outcome of a Bernoulli trial is represented by a random variable $X$, success is usually encoded as the integer 1 ($X=1$) and failure as the integer 0 ($X=0$). Occasionally failure is encoded as -1 ($X=-1$) instead of 0.
Bernoulli distribution
A Bernoulli random variable can take only the two values 0 and 1, so it is a discrete random variable. It can therefore be defined by a probability mass function (pmf) and a cumulative distribution function (cdf).
A Bernoulli random variable has a single parameter, $\theta$, the probability that 1 occurs. The probability that 0 occurs is defined as $1 - \theta$.
The probability mass function of the Bernoulli distribution is:
$$
\text{Bern}(x;\theta) =
\begin{cases}
\theta & \text{if }x=1, \\
1-\theta & \text{if }x=0
\end{cases}
$$
This can also be written as a single expression without cases:
$$
\text{Bern}(x;\theta) = \theta^x(1-\theta)^{(1-x)}
$$
If the Bernoulli random variable takes the values 1 and -1, the formula must be written as:
$$ \text{Bern}(x; \theta) = \theta^{(1+x)/2} (1-\theta)^{(1-x)/2} $$
If a random variable $X$ is generated by a Bernoulli distribution, we say "the random variable $X$ follows a Bernoulli distribution" and write:
$$ X \sim \text{Bern}(x;\theta) $$
Simulating the Bernoulli distribution with SciPy
The bernoulli class in SciPy's stats subpackage implements the Bernoulli probability distribution. The parameter $\theta$ of the distribution is set with the p argument.
In the example below, p = 0.6.
End of explanation
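The pmf formula above can also be coded directly; a small sketch (sp.stats.bernoulli computes the same values):

```python
theta = 0.6

def bern_pmf(x, theta):
    # Bern(x; theta) = theta**x * (1 - theta)**(1 - x), for x in {0, 1}
    return theta**x * (1 - theta)**(1 - x)

print(bern_pmf(1, theta))  # 0.6
print(bern_pmf(0, theta))  # 0.4
```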
xx = [0, 1]
plt.bar(xx, rv.pmf(xx), align="center")
plt.xlim(-1, 2)
plt.ylim(0, 1)
plt.xticks([0, 1], ["X=0", "X=1"])
plt.ylabel("P(x)")
plt.title("pmf of Bernoulli distribution")
plt.show()
Explanation: Using the pmf method, we can compute the probability mass function (pmf).
End of explanation
x = rv.rvs(100, random_state=0)
x
Explanation: 시뮬레이션을 하려면 rvs 메서드를 사용한다.
End of explanation
sns.countplot(x)
plt.show()
Explanation: Visualize the result with seaborn's countplot command.
End of explanation
y = np.bincount(x, minlength=2)/float(len(x))
df = pd.DataFrame({"theoretic": rv.pmf(xx), "simulation": y}).stack()
df = df.reset_index()
df.columns = ["value", "type", "ratio"]
df.pivot("value", "type", "ratio")
Explanation: To show the theoretical distribution and the sample distribution together, use the following code.
With NumPy's bincount command, we count the number of data points equal to 0 and equal to 1 and collect the counts in a DataFrame.
End of explanation
sns.barplot(x="value", y="ratio", hue="type", data=df)
plt.show()
Explanation: Visualizing with seaborn's barplot command gives the following.
End of explanation
np.mean(x)
np.var(x, ddof=1)
Explanation: Moments of the Bernoulli distribution
The moments of the Bernoulli distribution are as follows.
Expected value
$$\text{E}[X] = \theta$$
(Proof)
$$\text{E}[X] = 1 \cdot \theta + 0 \cdot (1 - \theta) = \theta$$
Variance
$$\text{Var}[X] = \theta(1-\theta)$$
(Proof)
$$\text{Var}[X] = (1 - \theta)^2 \cdot \theta + (0 - \theta)^2 \cdot (1 - \theta) = \theta(1-\theta)$$
In the example above, $\theta = 0.6$, so the theoretical mean and variance are:
$$ \text{E}[X] = 0.6 $$
$$ \text{Var}[X] = 0.6 \cdot (1 - 0.6) = 0.24 $$
The sample mean and sample variance computed from the data are obtained as follows.
End of explanation
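As a quick check of these formulas by simulation, here is a sketch with a larger sample (the sample size and seed are arbitrary choices):

```python
import numpy as np

# Check E[X] = theta and Var[X] = theta * (1 - theta) by simulation.
rng = np.random.default_rng(0)
theta = 0.6
x = rng.binomial(1, theta, size=100_000)  # Bernoulli(theta) draws
print(x.mean())       # close to 0.6
print(x.var(ddof=1))  # close to 0.24
```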
s = sp.stats.describe(x)
s[2], s[3]
Explanation: With SciPy's describe command, we can compute them as follows.
End of explanation
pd.Series(x).describe()
Explanation: Or, converting to a Pandas Series object and using its describe method:
End of explanation |
10,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The characteristics of data come down to distribution and relationship.
The part that corresponds to shape is the distribution.
For example, for a circle the radius, and for a rectangle the width and height, correspond to its shape.
The reason we draw plots to examine data is to see how many peaks there are,
that is, to determine whether the distribution is uni-modal or multi-modal.
Relationships are examined with scatter plots or other models.
Describing the distribution of data with Python
When we have multiple data points, that is, a dataset, the statistical methods that describe the distribution of its values are called descriptive statistics.
Describing the distribution of a single dataset
When there is a single dataset, the following methods are commonly used to characterize or communicate its distribution.
Summary statistics
Histogram
Kernel density
Summary statistics
Summary statistics compute quantities such as the maximum, minimum, mean, and variance of the values in a dataset.
In Python, the following tools are used.
scipy's describe function
The describe method of a pandas Series/DataFrame
Step1: Histogram
A histogram divides the range of possible values into several bins and counts the number, or relative frequency, of values falling into each bin.
To compute or draw a histogram in Python, the following tools are used.
matplotlib's hist function
seaborn's distplot function
matplotlib's hist function returns the following three values.
n
Step2: Instead of returning only an axis object for the histogram, seaborn's distplot function has extra features such as displaying a rug plot and a kernel density estimate, or fitting a particular probability model.
Step3: Kernel density
The curve in the previous figure is the kernel density. Kernel density estimation describes the overall distribution using a set of functions, called kernels, that each describe the distribution over a small region. With a kernel density estimate, it is easier to grasp the overall shape of the distribution.
For details on kernel density estimation, see the user guide and examples in the scikit-learn package.
http
Step4: If there are two or more sets of variables, use the seaborn package's pairplot. pairplot draws histograms and scatter plots for each combination of the sets in a grid layout.
seaborn's pairplot function
Step5: If both datasets contain discrete or categorical values, the seaborn package's heatmap can be used.
seaborn's heatmap function
Step6: When one of the two datasets contains continuous values and the other contains discrete or categorical values, the following plots provided by seaborn can be used.
boxplot
violinplot
stripplot
swarmplot
pointplot
factorplot
np.random.seed(0)
x = np.random.normal(size=100)
x
sp.stats.describe(x)
pd.Series(x).describe()
Explanation: The characteristics of data come down to distribution and relationship.
The part that corresponds to shape is the distribution.
For example, for a circle the radius, and for a rectangle the width and height, correspond to its shape.
The reason we draw plots to examine data is to see how many peaks there are,
that is, to determine whether the distribution is uni-modal or multi-modal.
Relationships are examined with scatter plots or other models.
Describing the distribution of data with Python
When we have multiple data points, that is, a dataset, the statistical methods that describe the distribution of its values are called descriptive statistics.
Describing the distribution of a single dataset
When there is a single dataset, the following methods are commonly used to characterize or communicate its distribution.
Summary statistics
Histogram
Kernel density
Summary statistics
Summary statistics compute quantities such as the maximum, minimum, mean, and variance of the values in a dataset.
In Python, the following tools are used.
scipy's describe function
The describe method of a pandas Series/DataFrame
End of explanation
n, bins, patches = plt.hist(x, bins=10)
n
bins
patches
Explanation: Histogram
A histogram divides the range of possible values into several bins and counts the number, or relative frequency, of values falling into each bin.
To compute or draw a histogram in Python, the following tools are used.
matplotlib's hist function
seaborn's distplot function
matplotlib's hist function returns the following three values.
n : a list of the counts or frequencies in each bin
bins : a list of the bin edge values
patches : a list of the matplotlib patch objects that draw each bin
End of explanation
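The same binning can be computed without drawing anything; a sketch using np.histogram, which performs essentially the same binning as matplotlib's hist:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
counts, edges = np.histogram(x, bins=10)
print(counts.sum())  # 100: every value lands in exactly one bin
print(len(edges))    # 11: with 10 bins there are 11 bin edges
```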
sns.distplot(x, rug=True);
Explanation: Instead of returning only an axis object for the histogram, seaborn's distplot function has extra features such as displaying a rug plot and a kernel density estimate, or fitting a particular probability model.
End of explanation
np.random.seed(0)
tips = sns.load_dataset("tips")
sns.jointplot(x='total_bill', y='tip', data=tips)
iris = sns.load_dataset('iris')
sns.jointplot("sepal_width", "petal_length", data=iris, kind='kde', space=0, zorder=0, n_levels=6)
Explanation: Kernel density
The curve in the previous figure is the kernel density. Kernel density estimation describes the overall distribution using a set of functions, called kernels, that each describe the distribution over a small region. With a kernel density estimate, it is easier to grasp the overall shape of the distribution.
For details on kernel density estimation, see the user guide and examples in the scikit-learn package.
http://scikit-learn.org/stable/modules/density.html#kernel-density-estimation
Describing the distributions of multiple datasets
When there are two or more datasets rather than a single one, we usually want to know the relationship between them.
If there are two datasets and both contain continuous real values, a scatter plot can be used. To draw a scatter plot, use the seaborn package's jointplot function. The jointplot function draws not only the scatter plot but also the histogram of each variable.
seaborn's jointplot function
End of explanation
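For intuition, here is a minimal Gaussian kernel density sketch (seaborn and scikit-learn use more refined implementations, and the bandwidth below is an arbitrary choice): place a Gaussian bump on each data point and average.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=100)
h = 0.4                                  # bandwidth (assumed)
grid = np.linspace(-4, 4, 201)

# One Gaussian bump per data point, evaluated on the grid, then averaged.
bumps = np.exp(-0.5 * ((grid[:, None] - data[None, :]) / h) ** 2)
density = bumps.sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

# The estimated density integrates to approximately 1.
print(float((density * (grid[1] - grid[0])).sum()))  # close to 1
```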
sns.pairplot(iris, hue="species", markers=["o", "s", "D"], size=2)
Explanation: If there are two or more sets of variables, use the seaborn package's pairplot. pairplot draws histograms and scatter plots for each combination of the sets in a grid layout.
seaborn's pairplot function
End of explanation
flights = sns.load_dataset("flights")
flights = flights.pivot("month", "year", "passengers")
sns.heatmap(flights, annot=True, fmt="d", linewidths=1)
Explanation: If both datasets contain discrete or categorical values, the seaborn package's heatmap can be used.
seaborn's heatmap function
End of explanation
titanic = sns.load_dataset("titanic")
sns.factorplot(x="age", y="embark_town", hue="sex", row="class", data=titanic[titanic.embark_town.notnull()],
orient="h", size=2, aspect=3.5, palette="Set3", kind="violin", split=True, cut=0, bw=.2)
Explanation: When one of the two datasets contains continuous values and the other contains discrete or categorical values, the following plots provided by seaborn can be used.
boxplot
violinplot
stripplot
swarmplot
pointplot
factorplot
End of explanation |
10,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WH Nixalo - 02 Aug 2017
Step1: Aaaahaaaaaaaa. Okay. So the .output parameter for keras.models.Model(..) will take all layers of a model up to and including the layer specified.
It does NOT create a model of only the layer specified. input is a keras tensor (with attributes
Step2: I'm still a bit unclear whether input and output have to be from the same original model. What if I'm making a new model taking the input of one and the output of another? Checking out below | Python Code:
import keras
from keras.models import Model
from keras.layers import Dense, Input, Convolution2D
from keras.applications.imagenet_utils import _obtain_input_shape
from keras import backend as K
input_shape = (224, 224, 3)
img_input = Input(shape=input_shape, name='blah-input')
# x = Convolution2D(64, 3, 3, activation='relu', border_mode='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation="relu", name="block1_conv1", padding="same")(img_input)
x = Dense(1024, activation='relu', name='fc1')(x)
x = Dense(256, activation='relu', name= 'fc2')(x)
img_input
xModel = Model(img_input, x, name='xmodel')
# model = Model(img_input, xModel.get_layer('block1_conv1').output)
model = Model(img_input, xModel.get_layer('fc2').output)
model.summary()
Explanation: WH Nixalo - 02 Aug 2017
End of explanation
xModel.get_layer('fc1').output
model.layers
Explanation: Aaaahaaaaaaaa. Okay. So the .output parameter for keras.models.Model(..) will take all layers of a model up to and including the layer specified.
It does NOT create a model of only the layer specified. input is a keras tensor (with attributes: name, shape, dtype). output is also a keras tensor with attributes (name-&-activation-fn, shape, dtype).
So, a model is created consisting of all layers between and including input and output layers.
End of explanation
input_shape = (224,224,3)
input_shape = _obtain_input_shape(input_shape, default_size=224, min_size=48,
                                  include_top=False, data_format=K.image_data_format())
input_shape
img_input_2 = Input(shape=input_shape, name='blah-2-input')
ჯ = Dense(1024, activation='relu',name='2fc1')(img_input_2)
ჯ = Dense(512, activation='relu', name='2fc2')(ჯ)
ჯ = Dense(256, activation='relu', name='2fc3')(ჯ)
ჯModel = Model(img_input_2, ჯ, name='ჯmodel')
kerlaModel = Model(img_input_2, xModel.get_layer('fc1').output)
# kerlaModel_1 = Model(img_input, xModel.get_layer('fc1').output)
# kerlaModel_2 = Model(img_input_2, ჯModel.get_layer('2fc2').output)
Explanation: I'm still a bit unclear whether input and output have to be from the same original model. What if I'm making a new model taking the input of one and the output of another? Checking out below:
End of explanation |
10,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BERT (from HuggingFace Transformers) for Text Extraction
Author
Step1: Set-up BERT tokenizer
Step2: Load the data
Step3: Preprocess the data
Go through the JSON file and store every record as a SquadExample object.
Go through each SquadExample and create x_train, y_train, x_eval, y_eval.
Step4: Create the Question-Answering Model using BERT and Functional API
Step5: This code should preferably be run on Google Colab TPU runtime.
With Colab TPUs, each epoch will take 5-6 minutes.
Step7: Create evaluation Callback
This callback will compute the exact match score using the validation data
after every epoch.
Step8: Train and Evaluate | Python Code:
import os
import re
import json
import string
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tokenizers import BertWordPieceTokenizer
from transformers import BertTokenizer, TFBertModel, BertConfig
max_len = 384
configuration = BertConfig() # default parameters and configuration for BERT
Explanation: BERT (from HuggingFace Transformers) for Text Extraction
Author: Apoorv Nandan<br>
Date created: 2020/05/23<br>
Last modified: 2020/05/23<br>
Description: Fine tune pretrained BERT from HuggingFace Transformers on SQuAD.
Introduction
This demonstration uses SQuAD (Stanford Question-Answering Dataset).
In SQuAD, an input consists of a question, and a paragraph for context.
The goal is to find the span of text in the paragraph that answers the question.
We evaluate our performance on this data with the "Exact Match" metric,
which measures the percentage of predictions that exactly match any one of the
ground-truth answers.
We fine-tune a BERT model to perform this task as follows:
Feed the context and the question as inputs to BERT.
Take two vectors S and T with dimensions equal to that of
hidden states in BERT.
Compute the probability of each token being the start and end of
the answer span. The probability of a token being the start of
the answer is given by a dot product between S and the representation
of the token in the last layer of BERT, followed by a softmax over all tokens.
The probability of a token being the end of the answer is computed
similarly with the vector T.
Fine-tune BERT and learn S and T along the way.
References:
BERT
SQuAD
Setup
End of explanation
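A toy numpy sketch of the span-prediction head described above (shapes and values are made up, and this is not the notebook's actual Keras model): the start and end probabilities come from dot products of the learned vectors S and T with the per-token hidden states, followed by a softmax over tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(6, 8))  # 6 tokens, hidden size 8 (toy values)
S = rng.normal(size=8)                   # learned "start" vector
T = rng.normal(size=8)                   # learned "end" vector

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

start_probs = softmax(hidden_states @ S)  # one probability per token
end_probs = softmax(hidden_states @ T)
print(start_probs.argmax(), end_probs.argmax())  # predicted span boundaries
```

In the real model the same computation is done with trainable Dense layers on top of BERT's last hidden states, and cross-entropy against the true start/end token indices drives the fine-tuning.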
# Save the slow pretrained tokenizer
slow_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
save_path = "bert_base_uncased/"
if not os.path.exists(save_path):
os.makedirs(save_path)
slow_tokenizer.save_pretrained(save_path)
# Load the fast tokenizer from saved file
tokenizer = BertWordPieceTokenizer("bert_base_uncased/vocab.txt", lowercase=True)
Explanation: Set-up BERT tokenizer
End of explanation
train_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json"
train_path = keras.utils.get_file("train.json", train_data_url)
eval_data_url = "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"
eval_path = keras.utils.get_file("eval.json", eval_data_url)
Explanation: Load the data
End of explanation
class SquadExample:
def __init__(self, question, context, start_char_idx, answer_text, all_answers):
self.question = question
self.context = context
self.start_char_idx = start_char_idx
self.answer_text = answer_text
self.all_answers = all_answers
self.skip = False
def preprocess(self):
context = self.context
question = self.question
answer_text = self.answer_text
start_char_idx = self.start_char_idx
# Clean context, answer and question
context = " ".join(str(context).split())
question = " ".join(str(question).split())
answer = " ".join(str(answer_text).split())
# Find end character index of answer in context
end_char_idx = start_char_idx + len(answer)
if end_char_idx >= len(context):
self.skip = True
return
# Mark the character indexes in context that are in answer
is_char_in_ans = [0] * len(context)
for idx in range(start_char_idx, end_char_idx):
is_char_in_ans[idx] = 1
# Tokenize context
tokenized_context = tokenizer.encode(context)
# Find tokens that were created from answer characters
ans_token_idx = []
for idx, (start, end) in enumerate(tokenized_context.offsets):
if sum(is_char_in_ans[start:end]) > 0:
ans_token_idx.append(idx)
if len(ans_token_idx) == 0:
self.skip = True
return
# Find start and end token index for tokens from answer
start_token_idx = ans_token_idx[0]
end_token_idx = ans_token_idx[-1]
# Tokenize question
tokenized_question = tokenizer.encode(question)
# Create inputs
input_ids = tokenized_context.ids + tokenized_question.ids[1:]
token_type_ids = [0] * len(tokenized_context.ids) + [1] * len(
tokenized_question.ids[1:]
)
attention_mask = [1] * len(input_ids)
# Pad and create attention masks.
# Skip if truncation is needed
padding_length = max_len - len(input_ids)
if padding_length > 0: # pad
input_ids = input_ids + ([0] * padding_length)
attention_mask = attention_mask + ([0] * padding_length)
token_type_ids = token_type_ids + ([0] * padding_length)
elif padding_length < 0: # skip
self.skip = True
return
self.input_ids = input_ids
self.token_type_ids = token_type_ids
self.attention_mask = attention_mask
self.start_token_idx = start_token_idx
self.end_token_idx = end_token_idx
self.context_token_to_char = tokenized_context.offsets
with open(train_path) as f:
raw_train_data = json.load(f)
with open(eval_path) as f:
raw_eval_data = json.load(f)
def create_squad_examples(raw_data):
squad_examples = []
for item in raw_data["data"]:
for para in item["paragraphs"]:
context = para["context"]
for qa in para["qas"]:
question = qa["question"]
answer_text = qa["answers"][0]["text"]
all_answers = [_["text"] for _ in qa["answers"]]
start_char_idx = qa["answers"][0]["answer_start"]
squad_eg = SquadExample(
question, context, start_char_idx, answer_text, all_answers
)
squad_eg.preprocess()
squad_examples.append(squad_eg)
return squad_examples
def create_inputs_targets(squad_examples):
dataset_dict = {
"input_ids": [],
"token_type_ids": [],
"attention_mask": [],
"start_token_idx": [],
"end_token_idx": [],
}
for item in squad_examples:
        if not item.skip:
for key in dataset_dict:
dataset_dict[key].append(getattr(item, key))
for key in dataset_dict:
dataset_dict[key] = np.array(dataset_dict[key])
x = [
dataset_dict["input_ids"],
dataset_dict["token_type_ids"],
dataset_dict["attention_mask"],
]
y = [dataset_dict["start_token_idx"], dataset_dict["end_token_idx"]]
return x, y
train_squad_examples = create_squad_examples(raw_train_data)
x_train, y_train = create_inputs_targets(train_squad_examples)
print(f"{len(train_squad_examples)} training points created.")
eval_squad_examples = create_squad_examples(raw_eval_data)
x_eval, y_eval = create_inputs_targets(eval_squad_examples)
print(f"{len(eval_squad_examples)} evaluation points created.")
Explanation: Preprocess the data
Go through the JSON file and store every record as a SquadExample object.
Go through each SquadExample and create x_train, y_train, x_eval, y_eval.
End of explanation
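The character-to-token alignment used in `preprocess` above can be illustrated with a tiny hand-made example (the offsets below are invented, not produced by the real tokenizer):

```python
# Pretend tokenization of "the cat sat" -> ["the", "cat", "sat"].
offsets = [(0, 3), (4, 7), (8, 11)]   # (start_char, end_char) per token
context_len = 11

# Suppose the answer is "cat", i.e. characters 4..6 inclusive.
is_char_in_ans = [0] * context_len
for idx in range(4, 7):
    is_char_in_ans[idx] = 1

# A token belongs to the answer if any of its characters do.
ans_token_idx = [
    i for i, (s, e) in enumerate(offsets) if sum(is_char_in_ans[s:e]) > 0
]
start_token_idx, end_token_idx = ans_token_idx[0], ans_token_idx[-1]
```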
def create_model():
## BERT encoder
encoder = TFBertModel.from_pretrained("bert-base-uncased")
## QA Model
input_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
token_type_ids = layers.Input(shape=(max_len,), dtype=tf.int32)
attention_mask = layers.Input(shape=(max_len,), dtype=tf.int32)
embedding = encoder(
input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask
)[0]
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(embedding)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(embedding)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation(keras.activations.softmax)(start_logits)
end_probs = layers.Activation(keras.activations.softmax)(end_logits)
model = keras.Model(
inputs=[input_ids, token_type_ids, attention_mask],
outputs=[start_probs, end_probs],
)
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=False)
    optimizer = keras.optimizers.Adam(learning_rate=5e-5)
model.compile(optimizer=optimizer, loss=[loss, loss])
return model
Explanation: Create the Question-Answering Model using BERT and Functional API
End of explanation
use_tpu = True
if use_tpu:
# Create distribution strategy
tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()
strategy = tf.distribute.TPUStrategy(tpu)
# Create model
with strategy.scope():
model = create_model()
else:
model = create_model()
model.summary()
Explanation: This code should preferably be run on Google Colab TPU runtime.
With Colab TPUs, each epoch will take 5-6 minutes.
End of explanation
def normalize_text(text):
text = text.lower()
# Remove punctuations
exclude = set(string.punctuation)
text = "".join(ch for ch in text if ch not in exclude)
# Remove articles
regex = re.compile(r"\b(a|an|the)\b", re.UNICODE)
text = re.sub(regex, " ", text)
# Remove extra white space
text = " ".join(text.split())
return text
class ExactMatch(keras.callbacks.Callback):
    """
    Each `SquadExample` object contains the character level offsets for each token
    in its input paragraph. We use them to get back the span of text corresponding
    to the tokens between our predicted start and end tokens.
    All the ground-truth answers are also present in each `SquadExample` object.
    We calculate the percentage of data points where the span of text obtained
    from model predictions matches one of the ground-truth answers.
    """
def __init__(self, x_eval, y_eval):
self.x_eval = x_eval
self.y_eval = y_eval
def on_epoch_end(self, epoch, logs=None):
pred_start, pred_end = self.model.predict(self.x_eval)
count = 0
        eval_examples_no_skip = [_ for _ in eval_squad_examples if not _.skip]
for idx, (start, end) in enumerate(zip(pred_start, pred_end)):
squad_eg = eval_examples_no_skip[idx]
offsets = squad_eg.context_token_to_char
start = np.argmax(start)
end = np.argmax(end)
if start >= len(offsets):
continue
pred_char_start = offsets[start][0]
if end < len(offsets):
pred_char_end = offsets[end][1]
pred_ans = squad_eg.context[pred_char_start:pred_char_end]
else:
pred_ans = squad_eg.context[pred_char_start:]
normalized_pred_ans = normalize_text(pred_ans)
normalized_true_ans = [normalize_text(_) for _ in squad_eg.all_answers]
if normalized_pred_ans in normalized_true_ans:
count += 1
acc = count / len(self.y_eval[0])
print(f"\nepoch={epoch+1}, exact match score={acc:.2f}")
Explanation: Create evaluation Callback
This callback will compute the exact match score using the validation data
after every epoch.
End of explanation
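As a quick sanity check, the `normalize_text` helper above behaves like this (re-defined here so the snippet stands on its own):

```python
import re
import string

def normalize_text(text):
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

cleaned = normalize_text("The  quick, brown FOX!")
```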
exact_match_callback = ExactMatch(x_eval, y_eval)
model.fit(
x_train,
y_train,
epochs=1, # For demonstration, 3 epochs are recommended
verbose=2,
batch_size=64,
callbacks=[exact_match_callback],
)
Explanation: Train and Evaluate
End of explanation |
10,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
Step2: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https
Step4: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
Step6: Hint
Step8: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
Step10: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha. | Python Code:
from google.cloud import bigquery
Explanation: <h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
End of explanation
compute_alpha = """
#standardSQL
SELECT
  SAFE_DIVIDE(
    SUM(arrival_delay * departure_delay),
    SUM(departure_delay * departure_delay)) AS alpha
FROM
(
  SELECT
    RAND() AS splitfield,
    arrival_delay,
    departure_delay
  FROM
    `bigquery-samples.airline_ontime_data.flights`
  WHERE
    departure_airport = 'DEN'
    AND arrival_airport = 'LAX'
)
WHERE
  splitfield < 0.8
"""
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/bigquery-samples:airline_ontime_data.flights">a BigQuery public dataset</a> of airline arrival data. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is 70 million, and then switch to the Preview tab to look at a few rows.
<p>
We want to predict the arrival delay of an airline based on the departure delay. The model that we will use is a zero-bias linear model:
$$ delay_{arrival} = \alpha * delay_{departure} $$
<p>
To train the model is to estimate a good value for $\alpha$.
<p>
One approach to estimate alpha is to use this formula:
$$ \alpha = \frac{\sum delay_{departure} delay_{arrival} }{ \sum delay_{departure}^2 } $$
Because we'd like to capture the idea that this relationship is different for flights from New York to Los Angeles vs. flights from Austin to Indianapolis (shorter flight, less busy airports), we'd compute a different $\alpha$ for each airport-pair. For simplicity, we'll do this model only for flights between Denver and Los Angeles.
<h2> Naive random split (not repeatable) </h2>
End of explanation
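The closed-form estimator above is just least squares without an intercept; here is a hedged numpy sketch on synthetic delays (made-up numbers, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
departure_delay = rng.normal(10, 5, size=1000)
arrival_delay = 0.97 * departure_delay + rng.normal(0, 2, size=1000)

# alpha = sum(dep * arr) / sum(dep * dep) -- the formula from the text.
alpha = (departure_delay * arrival_delay).sum() / (departure_delay ** 2).sum()

# Cross-check against numpy's least-squares solver (no intercept column).
alpha_lstsq = np.linalg.lstsq(
    departure_delay[:, None], arrival_delay, rcond=None
)[0][0]
```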
compute_rmse = """
#standardSQL
SELECT
  dataset,
  SQRT(
    AVG(
      (arrival_delay - ALPHA * departure_delay) *
      (arrival_delay - ALPHA * departure_delay)
    )
  ) AS rmse,
  COUNT(arrival_delay) AS num_flights
FROM (
  SELECT
    IF (RAND() < 0.8, 'train', 'eval') AS dataset,
    arrival_delay,
    departure_delay
  FROM
    `bigquery-samples.airline_ontime_data.flights`
  WHERE
    departure_airport = 'DEN'
    AND arrival_airport = 'LAX' )
GROUP BY
  dataset
"""
bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe()
Explanation: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
End of explanation
train_and_eval_rand = """
#standardSQL
WITH
  alldata AS (
    SELECT
      IF (RAND() < 0.8, 'train', 'eval') AS dataset,
      arrival_delay,
      departure_delay
    FROM
      `bigquery-samples.airline_ontime_data.flights`
    WHERE
      departure_airport = 'DEN'
      AND arrival_airport = 'LAX' ),
  training AS (
    SELECT
      SAFE_DIVIDE(
        SUM(arrival_delay * departure_delay),
        SUM(departure_delay * departure_delay)) AS alpha
    FROM
      alldata
    WHERE
      dataset = 'train' )
SELECT
  MAX(alpha) AS alpha,
  dataset,
  SQRT(
    AVG(
      (arrival_delay - alpha * departure_delay) *
      (arrival_delay - alpha * departure_delay)
    )
  ) AS rmse,
  COUNT(arrival_delay) AS num_flights
FROM
  alldata,
  training
GROUP BY
  dataset
"""
bigquery.Client().query(train_and_eval_rand).to_dataframe()
Explanation: Hint:
* Are you really getting the same training data in the compute_rmse query as in the compute_alpha query?
* Do you get the same answers each time you rerun the compute_alpha and compute_rmse blocks?
<h3> How do we correctly train and evaluate? </h3>
<br/>
Here's the right way to compute the RMSE using the actual training and held-out (evaluation) data. Note how much harder this feels.
Although the calculations are now correct, the experiment is still not repeatable.
Try running it several times; do you get the same answer?
End of explanation
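The fit-on-train, report-per-split logic of the query above can be sketched in plain numpy on synthetic data (a hedged illustration, not a substitute for the BigQuery version):

```python
import numpy as np

rng = np.random.default_rng(0)
dep = rng.normal(10, 5, size=2000)
arr = 0.95 * dep + rng.normal(0, 3, size=2000)
is_train = rng.random(2000) < 0.8

# Fit alpha on the training split only...
alpha = (dep[is_train] * arr[is_train]).sum() / (dep[is_train] ** 2).sum()

# ...then report RMSE separately for each split.
def rmse(mask):
    resid = arr[mask] - alpha * dep[mask]
    return float(np.sqrt((resid ** 2).mean()))

train_rmse, eval_rmse = rmse(is_train), rmse(~is_train)
```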
compute_alpha = """
#standardSQL
SELECT
  SAFE_DIVIDE(
    SUM(arrival_delay * departure_delay),
    SUM(departure_delay * departure_delay)) AS alpha
FROM
  `bigquery-samples.airline_ontime_data.flights`
WHERE
  departure_airport = 'DEN'
  AND arrival_airport = 'LAX'
  AND ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8
"""
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
End of explanation
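The same deterministic-bucket idea can be sketched in plain Python. Note the hash here is a stand-in: stdlib `hashlib.md5` is not FARM_FINGERPRINT, so the bucket contents will differ, but the repeatability property is the same.

```python
import hashlib

def date_bucket(date_str, buckets=10):
    # Any stable hash gives a repeatable assignment of dates to buckets.
    digest = hashlib.md5(date_str.encode("utf-8")).hexdigest()
    return int(digest, 16) % buckets

def dataset_for(date_str):
    # 80/20 split: buckets 0-7 train, 8-9 eval.
    return "train" if date_bucket(date_str) < 8 else "eval"

# The same date lands in the same split on every run.
split_a = dataset_for("2012-05-01")
split_b = dataset_for("2012-05-01")
```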
compute_rmse = """
#standardSQL
SELECT
  IF(ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8, 'train', 'eval') AS dataset,
  SQRT(
    AVG(
      (arrival_delay - ALPHA * departure_delay) *
      (arrival_delay - ALPHA * departure_delay)
    )
  ) AS rmse,
  COUNT(arrival_delay) AS num_flights
FROM
  `bigquery-samples.airline_ontime_data.flights`
WHERE
  departure_airport = 'DEN'
  AND arrival_airport = 'LAX'
GROUP BY
  dataset
"""
print(bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe().head())
Explanation: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha.
End of explanation |
10,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-supervised contrastive learning with SimSiam
Author
Step1: Define hyperparameters
Step2: Load the CIFAR-10 dataset
Step3: Defining our data augmentation pipeline
As studied in SimCLR, having the right data
augmentation pipeline is critical for SSL systems to work effectively in computer vision.
Two particular augmentation transforms that seem to matter the most are
Step4: It should be noted that an augmentation pipeline is generally dependent on various
properties of the dataset we are dealing with. For example, if images in the dataset are
heavily object-centric then taking random crops with a very high probability may hurt the
training performance.
Let's now apply our augmentation pipeline to our dataset and visualize a few outputs.
Convert the data into TensorFlow Dataset objects
Here we create two different versions of our dataset without any ground-truth labels.
Step5: Notice that the images in sample_images_one and sample_images_two are essentially
the same but are augmented differently.
Defining the encoder and the predictor
We use an implementation of ResNet20 that is specifically configured for the CIFAR10
dataset. The code is taken from the
keras-idiomatic-programmer repository. The hyperparameters of
these architectures have been referred from Section 3 and Appendix A of the original
paper.
Step6: Defining the (pre-)training loop
One of the main reasons behind training networks with these kinds of approaches is to
utilize the learned representations for downstream tasks like classification. This is why
this particular training phase is also referred to as pre-training.
We start by defining the loss function.
Step7: We then define our training loop by overriding the train_step() function of the
tf.keras.Model class.
Step8: Pre-training our networks
In the interest of this example, we will train the model for only 5 epochs. In reality,
this should at least be 100 epochs.
Step9: If your solution gets very close to -1 (minimum value of our loss) very quickly with a
different dataset and a different backbone architecture, that is likely because of
representation collapse. It is a phenomenon where the encoder yields similar output for
all the images. In that case, additional hyperparameter tuning is required, especially in
the following areas | Python Code:
from tensorflow.keras import layers
from tensorflow.keras import regularizers
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
Explanation: Self-supervised contrastive learning with SimSiam
Author: Sayak Paul<br>
Date created: 2021/03/19<br>
Last modified: 2021/03/20<br>
Description: Implementation of a self-supervised learning method for computer vision.
Self-supervised learning (SSL) is an interesting branch of study in the field of
representation learning. SSL systems try to formulate a supervised signal from a corpus
of unlabeled data points. An example is training a deep neural network to predict the
next word from a given set of words. In literature, these tasks are known as pretext
tasks or auxiliary tasks. If we train such a network on a huge dataset (such as
the Wikipedia text corpus) it learns very effective
representations that transfer well to downstream tasks. Language models like
BERT, GPT-3,
ELMo all benefit from this.
Much like the language models we can train computer vision models using similar
approaches. To make things work in computer vision, we need to formulate the learning
tasks such that the underlying model (a deep neural network) is able to make sense of the
semantic information present in vision data. One such task is to a model to contrast
between two different versions of the same image. The hope is that in this way the model
will have learn representations where the similar images are grouped as together possible
while the dissimilar images are further away.
In this example, we will be implementing one such system called SimSiam proposed in
Exploring Simple Siamese Representation Learning. It
is implemented as the following:
We create two different versions of the same dataset with a stochastic data
augmentation pipeline. Note that the random initialization seed needs to be the same
when creating these versions.
We take a ResNet without any classification head (backbone) and we add a shallow
fully-connected network (projection head) on top of it. Collectively, this is known
as the encoder.
We pass the output of the encoder through a predictor which is again a shallow
fully-connected network having an
AutoEncoder like structure.
We then train our encoder to maximize the cosine similarity between the two different
versions of our dataset.
This example requires TensorFlow 2.4 or higher.
Setup
End of explanation
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 128
EPOCHS = 5
CROP_TO = 32
SEED = 26
PROJECT_DIM = 2048
LATENT_DIM = 512
WEIGHT_DECAY = 0.0005
Explanation: Define hyperparameters
End of explanation
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
print(f"Total training examples: {len(x_train)}")
print(f"Total test examples: {len(x_test)}")
Explanation: Load the CIFAR-10 dataset
End of explanation
def flip_random_crop(image):
# With random crops we also apply horizontal flipping.
image = tf.image.random_flip_left_right(image)
image = tf.image.random_crop(image, (CROP_TO, CROP_TO, 3))
return image
def color_jitter(x, strength=[0.4, 0.4, 0.4, 0.1]):
x = tf.image.random_brightness(x, max_delta=0.8 * strength[0])
x = tf.image.random_contrast(
x, lower=1 - 0.8 * strength[1], upper=1 + 0.8 * strength[1]
)
x = tf.image.random_saturation(
x, lower=1 - 0.8 * strength[2], upper=1 + 0.8 * strength[2]
)
x = tf.image.random_hue(x, max_delta=0.2 * strength[3])
# Affine transformations can disturb the natural range of
# RGB images, hence this is needed.
x = tf.clip_by_value(x, 0, 255)
return x
def color_drop(x):
x = tf.image.rgb_to_grayscale(x)
x = tf.tile(x, [1, 1, 3])
return x
def random_apply(func, x, p):
if tf.random.uniform([], minval=0, maxval=1) < p:
return func(x)
else:
return x
def custom_augment(image):
# As discussed in the SimCLR paper, the series of augmentation
# transformations (except for random crops) need to be applied
# randomly to impose translational invariance.
image = flip_random_crop(image)
image = random_apply(color_jitter, image, p=0.8)
image = random_apply(color_drop, image, p=0.2)
return image
Explanation: Defining our data augmentation pipeline
As studied in SimCLR, having the right data
augmentation pipeline is critical for SSL systems to work effectively in computer vision.
Two particular augmentation transforms that seem to matter the most are: 1.) Random
resized crops and 2.) Color distortions. Most of the other SSL systems for computer
vision (such as BYOL,
MoCoV2, SwAV,
etc.) include these in their training pipelines.
End of explanation
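The `random_apply` helper above is essentially a Bernoulli(p) gate; here is the same idea in plain Python with the stdlib `random` module (a sketch, not the TensorFlow graph version):

```python
import random

def random_apply_py(func, x, p, rng):
    # Apply `func` with probability p, otherwise pass x through unchanged.
    return func(x) if rng.random() < p else x

rng = random.Random(0)
hits = sum(random_apply_py(lambda v: v + 1, 0, 0.8, rng) for _ in range(10_000))
rate = hits / 10_000   # empirically close to p = 0.8
```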
ssl_ds_one = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_one = (
ssl_ds_one.shuffle(1024, seed=SEED)
.map(custom_augment, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
ssl_ds_two = tf.data.Dataset.from_tensor_slices(x_train)
ssl_ds_two = (
ssl_ds_two.shuffle(1024, seed=SEED)
.map(custom_augment, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
# We then zip both of these datasets.
ssl_ds = tf.data.Dataset.zip((ssl_ds_one, ssl_ds_two))
# Visualize a few augmented images.
sample_images_one = next(iter(ssl_ds_one))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images_one[n].numpy().astype("int"))
plt.axis("off")
plt.show()
# Ensure that the different versions of the dataset actually contain
# identical images.
sample_images_two = next(iter(ssl_ds_two))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images_two[n].numpy().astype("int"))
plt.axis("off")
plt.show()
Explanation: It should be noted that an augmentation pipeline is generally dependent on various
properties of the dataset we are dealing with. For example, if images in the dataset are
heavily object-centric then taking random crops with a very high probability may hurt the
training performance.
Let's now apply our augmentation pipeline to our dataset and visualize a few outputs.
Convert the data into TensorFlow Dataset objects
Here we create two different versions of our dataset without any ground-truth labels.
End of explanation
!wget -q https://git.io/JYx2x -O resnet_cifar10_v2.py
import resnet_cifar10_v2
N = 2
DEPTH = N * 9 + 2
NUM_BLOCKS = ((DEPTH - 2) // 9) - 1
def get_encoder():
# Input and backbone.
inputs = layers.Input((CROP_TO, CROP_TO, 3))
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(
inputs
)
x = resnet_cifar10_v2.stem(x)
x = resnet_cifar10_v2.learner(x, NUM_BLOCKS)
x = layers.GlobalAveragePooling2D(name="backbone_pool")(x)
# Projection head.
x = layers.Dense(
PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
)(x)
x = layers.BatchNormalization()(x)
x = layers.ReLU()(x)
x = layers.Dense(
PROJECT_DIM, use_bias=False, kernel_regularizer=regularizers.l2(WEIGHT_DECAY)
)(x)
outputs = layers.BatchNormalization()(x)
return tf.keras.Model(inputs, outputs, name="encoder")
def get_predictor():
model = tf.keras.Sequential(
[
# Note the AutoEncoder-like structure.
layers.Input((PROJECT_DIM,)),
layers.Dense(
LATENT_DIM,
use_bias=False,
kernel_regularizer=regularizers.l2(WEIGHT_DECAY),
),
layers.ReLU(),
layers.BatchNormalization(),
layers.Dense(PROJECT_DIM),
],
name="predictor",
)
return model
Explanation: Notice that the images in sample_images_one and sample_images_two are essentially
the same but are augmented differently.
Defining the encoder and the predictor
We use an implementation of ResNet20 that is specifically configured for the CIFAR10
dataset. The code is taken from the
keras-idiomatic-programmer repository. The hyperparameters of
these architectures have been referred from Section 3 and Appendix A of the original
paper.
End of explanation
def compute_loss(p, z):
# The authors of SimSiam emphasize the impact of
# the `stop_gradient` operator in the paper as it
# has an important role in the overall optimization.
z = tf.stop_gradient(z)
p = tf.math.l2_normalize(p, axis=1)
z = tf.math.l2_normalize(z, axis=1)
# Negative cosine similarity (minimizing this is
# equivalent to maximizing the similarity).
return -tf.reduce_mean(tf.reduce_sum((p * z), axis=1))
Explanation: Defining the (pre-)training loop
One of the main reasons behind training networks with these kinds of approaches is to
utilize the learned representations for downstream tasks like classification. This is why
this particular training phase is also referred to as pre-training.
We start by defining the loss function.
End of explanation
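A quick numpy check of the loss above: identical directions reach the minimum of -1, orthogonal directions give 0 (the `stop_gradient` is omitted since this is plain numpy):

```python
import numpy as np

def compute_loss_np(p, z):
    # l2-normalize, then negative mean cosine similarity.
    p = p / np.linalg.norm(p, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return -np.mean(np.sum(p * z, axis=1))

v = np.array([[3.0, 4.0]])
w = np.array([[4.0, -3.0]])   # orthogonal to v
loss_same = compute_loss_np(v, v)
loss_orth = compute_loss_np(v, w)
```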
class SimSiam(tf.keras.Model):
def __init__(self, encoder, predictor):
super(SimSiam, self).__init__()
self.encoder = encoder
self.predictor = predictor
self.loss_tracker = tf.keras.metrics.Mean(name="loss")
@property
def metrics(self):
return [self.loss_tracker]
def train_step(self, data):
# Unpack the data.
ds_one, ds_two = data
# Forward pass through the encoder and predictor.
with tf.GradientTape() as tape:
z1, z2 = self.encoder(ds_one), self.encoder(ds_two)
p1, p2 = self.predictor(z1), self.predictor(z2)
# Note that here we are enforcing the network to match
# the representations of two differently augmented batches
# of data.
loss = compute_loss(p1, z2) / 2 + compute_loss(p2, z1) / 2
# Compute gradients and update the parameters.
learnable_params = (
self.encoder.trainable_variables + self.predictor.trainable_variables
)
gradients = tape.gradient(loss, learnable_params)
self.optimizer.apply_gradients(zip(gradients, learnable_params))
# Monitor loss.
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
Explanation: We then define our training loop by overriding the train_step() function of the
tf.keras.Model class.
End of explanation
# Create a cosine decay learning scheduler.
num_training_samples = len(x_train)
steps = EPOCHS * (num_training_samples // BATCH_SIZE)
lr_decayed_fn = tf.keras.optimizers.schedules.CosineDecay(
initial_learning_rate=0.03, decay_steps=steps
)
# Create an early stopping callback.
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="loss", patience=5, restore_best_weights=True
)
# Compile model and start training.
simsiam = SimSiam(get_encoder(), get_predictor())
simsiam.compile(optimizer=tf.keras.optimizers.SGD(lr_decayed_fn, momentum=0.6))
history = simsiam.fit(ssl_ds, epochs=EPOCHS, callbacks=[early_stopping])
# Visualize the training progress of the model.
plt.plot(history.history["loss"])
plt.grid()
plt.title("Negative Cosine Similairty")
plt.show()
Explanation: Pre-training our networks
In the interest of this example, we will train the model for only 5 epochs. In reality,
this should at least be 100 epochs.
End of explanation
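The `CosineDecay` schedule used above follows (roughly, with its default settings) lr(t) = 0.5 * lr0 * (1 + cos(pi * t / steps)); a hedged numpy sketch:

```python
import numpy as np

def cosine_decay(step, initial_lr=0.03, decay_steps=1000):
    # Smoothly anneal from initial_lr down to 0 over decay_steps.
    step = min(step, decay_steps)
    return 0.5 * initial_lr * (1 + np.cos(np.pi * step / decay_steps))

lr_start = cosine_decay(0)     # 0.03
lr_mid = cosine_decay(500)     # 0.015
lr_end = cosine_decay(1000)    # ~0
```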
# We first create labeled `Dataset` objects.
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test))
# Then we shuffle, batch, and prefetch this dataset for performance. We
# also apply random resized crops as an augmentation but only to the
# training set.
train_ds = (
train_ds.shuffle(1024)
.map(lambda x, y: (flip_random_crop(x), y), num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
test_ds = test_ds.batch(BATCH_SIZE).prefetch(AUTO)
# Extract the backbone ResNet20.
backbone = tf.keras.Model(
simsiam.encoder.input, simsiam.encoder.get_layer("backbone_pool").output
)
# We then create our linear classifier and train it.
backbone.trainable = False
inputs = layers.Input((CROP_TO, CROP_TO, 3))
x = backbone(inputs, training=False)
outputs = layers.Dense(10, activation="softmax")(x)
linear_model = tf.keras.Model(inputs, outputs, name="linear_model")
# Compile model and start training.
linear_model.compile(
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
optimizer=tf.keras.optimizers.SGD(lr_decayed_fn, momentum=0.9),
)
history = linear_model.fit(
train_ds, validation_data=test_ds, epochs=EPOCHS, callbacks=[early_stopping]
)
_, test_acc = linear_model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))
Explanation: If your solution gets very close to -1 (minimum value of our loss) very quickly with a
different dataset and a different backbone architecture, that is likely because of
representation collapse. It is a phenomenon where the encoder yields similar output for
all the images. In that case, additional hyperparameter tuning is required, especially in
the following areas:
Strength of the color distortions and their probabilities.
Learning rate and its schedule.
Architecture of both the backbone and their projection head.
Evaluating our SSL method
The most widely used method to evaluate an SSL method in computer vision (or any other
pre-training method as such) is to learn a linear classifier on the frozen features of
the trained backbone model (in this case it is ResNet20) and evaluate the classifier on
unseen images. Other methods include
fine-tuning on the source dataset or even a
target dataset with 5% or 10% labels present. Practically, we can use the backbone model
for any downstream task such as semantic segmentation, object detection, and so on where
the backbone models are usually pre-trained with pure supervised learning.
End of explanation |
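One quick, hedged diagnostic for the representation collapse discussed earlier: if the per-dimension standard deviation of the l2-normalized embeddings is near zero, the encoder is mapping everything to (almost) the same point. A numpy sketch on fake embeddings:

```python
import numpy as np

def collapse_score(embeddings):
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Healthy d-dimensional features hover around 1/sqrt(d);
    # values near 0 suggest collapse.
    return float(np.mean(np.std(z, axis=0)))

rng = np.random.default_rng(1)
healthy = rng.normal(size=(256, 64))
collapsed = np.tile(rng.normal(size=(1, 64)), (256, 1))
score_healthy = collapse_score(healthy)
score_collapsed = collapse_score(collapsed)
```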
10,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Meta with Big Data Malaysia
Scraping the Big Data Malaysia Facebook group for fun. Profit unlikely.
Hello World
This is an introductory-level notebook demonstrating how to deal with a small, but meaty dataset. Things we will do here include
Step1: Is it big enough?
Now we have all our data loaded into variable big_data, but can we really say it's Big Data?
Step2: Wow! So data! Very big!
Seriously though... it's not big. In fact it's rather small. How small is small? Here's a clue...
Step3: At the time this was written, the file was just about 3MB, and there were fewer than 2k posts... note that excludes comments made on posts, but still, this stuff is small. It is small enough that at no point do we need to do anything clever from a data indexing/caching/storage perspective, so to start we will take the simplistic but often appropriate approach of slicing and dicing our big_data object directly. Later on we'll get into pandas DataFrame objects.
Anyway, size doesn't matter. It's variety that counts.
Fields of gold
Now we know how many elements (rows I guess?) we have, but how much variety do we have in this data? One measure of this may be to look at the number of fields in each of those items
Step4: Are we missing anything? A good way to sanity check things is to actually inspect the data, so let's look at a random item
Step5: From that you should be able to sense that we are missing some things - it isn't simply that there are some number of fields that describe each item, because some of those fields have data hierarchies beneath them, for example
Step6: From that we can see some fields have hierarchies within them, e.g. likes have a list of id dictionaries, which happen to be relatively trivial (names and ids... I wonder why Facebook didn't just post the id and make you look up the name?) but the comment field is a bit more complex, wherein it contains a list of dictionaries with each field potentially being a dictionary of its own, e.g. we can see that the second comment on that post tagged Teuku Faruq
Step7: Data quality annoyances
Actually I'm not even sure why the comments field is a single entry list. Is that always the case?
Step8: Apparently that's not always the case, sometimes there are 2 items in the list, let's see what that looks like...
Step9: Skimming the above it looks as though very long comment threads are split into multiple "pages" in the comments list. This may be an artifact of the paging code in pull_feed.py, which is not ideal. At some point we may fix it there, but for the time being we'll just consider it a data quality inconvenience that we will have to deal with.
Here's a function to work around this annoyance
Step10: Start plotting things already dammit
Now that we're counting comments, it's natural to ask
Step11: This sort of adds up intuitively; posts with long comment threads will be rare, though from experience with this forum it does not seem right to conclude that there is a lot of posting going on with no interaction... the community is a bit more engaged than that.
But since this is Facebook, comments aren't the only way of interacting with a post. There's also the wonderful 'Like'.
Step12: Note that the above does not include Likes on Comments made on posts; only Likes made on posts themselves are counted.
While this paints the picture of a more engaged community, it still doesn't feel quite right. It seems unusual these days to find a post go by without a Like or two.
I have a hunch that the zero-like posts are skewed a bit to the earlier days of the group. To dig into that we'll need to start playing with timestamps. Personally I prefer to deal with time as UTC epoch seconds, and surprisingly it seems I need to write my own helper function for this.
Step13: In general it seems my hunch may have been right, but it will be clearer if we plot it.
Step14: This is looking pretty legit now. We can see that lately there's been a significant uptick in the number of posts, and an uptick in the ratio of posts that receive at least one Like.
As another sanity check, we can revisit the Likes-per-post Histogram, but only include recent posts. While we're at it we might as well do the same for the Comments-per-post Histogram. | Python Code:
# we need this for later:
%matplotlib inline
import json
INPUT_FILE = "all_the_data.json"
with open(INPUT_FILE, "r") as big_data_fd:
big_data = json.load(big_data_fd)
Explanation: Getting Meta with Big Data Malaysia
Scraping the Big Data Malaysia Facebook group for fun. Profit unlikely.
Hello World
This is an introductory-level notebook demonstrating how to deal with a small, but meaty dataset. Things we will do here include:
* Loading a JSON dataset.
* Dealing with a minor data quality issue.
* Handling timestamps.
* Dataset slicing and dicing.
* Plotting histograms.
A "follow the data" approach will be taken. This notebook may appear quite long, but a good portion of the length is pretty-printing of raw data which noone is expected to read in entirety, but it's there for one to skim to get an idea of the structure of our data.
Get all the data
This notebook assumes you have already prepared a flattened JSON file into all_the_data.json, which you would have done by:
* Writing your oauth token into oauth_file according to the instructions in pull_feed.py.
* Running python pull_feed.py to pull down the feed pages into the BigDataMyData directory.
* Running python flatten_saved_data.py > all_the_data.json.
End of explanation
print "We have {} posts".format(len(big_data))
Explanation: Is it big enough?
Now we have all our data loaded into variable big_data, but can we really say it's Big Data?
End of explanation
import os
print "The source file is {} bytes. Pathetic.".format(os.stat(INPUT_FILE).st_size)
Explanation: Wow! So data! Very big!
Seriously though... it's not big. In fact it's rather small. How small is small? Here's a clue...
End of explanation
import itertools
all_the_fields = set(itertools.chain.from_iterable(big_data))
print "We have {} different field names:".format(len(all_the_fields))
print all_the_fields
Explanation: At the time this was written, the file was just about 3MB, and there were fewer than 2k posts... note that excludes comments made on posts, but still, this stuff is small. It is small enough that at no point do we need to do anything clever from a data indexing/caching/storage perspective, so to start we will take the simplistic but often appropriate approach of slicing and dicing our big_data object directly. Later on we'll get into pandas DataFrame objects.
Anyway, size doesn't matter. It's variety that counts.
Fields of gold
Now we know how many elements (rows I guess?) we have, but how much variety do we have in this data? One measure of this may be to look at the number of fields in each of those items:
End of explanation
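The union-of-keys trick above works on any list of dicts, because iterating a dict yields its keys. A self-contained miniature (the records here are synthetic stand-ins, not real feed items):

```python
import itertools

# Synthetic stand-ins for feed items; real posts have many more fields.
records = [
    {"id": "1", "message": "hello"},
    {"id": "2", "likes": {"data": []}},
    {"id": "3", "message": "hi", "comments": {"data": []}},
]

# Union of every key that appears in at least one record.
all_fields = set(itertools.chain.from_iterable(records))
print(sorted(all_fields))  # ['comments', 'id', 'likes', 'message']
```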
import random
import pprint
# re-run this as much as you like to inspect different items
pprint.pprint(random.choice(big_data))
Explanation: Are we missing anything? A good way to sanity check things is to actually inspect the data, so let's look at a random item:
End of explanation
pprint.pprint(big_data[234])
Explanation: From that you should be able to sense that we are missing some things - it isn't simply that there are some number of fields that describe each item, because some of those fields have data hierarchies beneath them, for example:
End of explanation
pprint.pprint(big_data[234]['comments'][0]['data'][1]['message_tags'])
Explanation: From that we can see some fields have hierarchies within them, e.g. likes have a list of id dictionaries, which happen to be relatively trivial (names and ids... I wonder why Facebook didn't just post the id and make you look up the name?) but the comment field is a bit more complex, wherein it contains a list of dictionaries with each field potentially being a dictionary of its own, e.g. we can see that the second comment on that post tagged Teuku Faruq:
End of explanation
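Chaining `['comments'][0]['data'][1]...` raises as soon as any link in the chain is missing. A small defensive helper (my own, not part of the scraper) makes that kind of drilling safer; the field values below are made up:

```python
def deep_get(obj, *keys, default=None):
    # Walk a chain of dict keys / list indices, returning default on any miss.
    for key in keys:
        try:
            obj = obj[key]
        except (KeyError, IndexError, TypeError):
            return default
    return obj

post = {"comments": [{"data": [{"id": "c1"},
                               {"id": "c2", "message_tags": [{"name": "X"}]}]}]}
print(deep_get(post, "comments", 0, "data", 1, "message_tags", 0, "name"))  # X
print(deep_get(post, "likes", "data"))  # None (no likes on this post)
```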
set([len(data['comments']) for data in big_data if 'comments' in data])
Explanation: Data quality annoyances
Actually I'm not even sure why the comments field is a single entry list. Is that always the case?
End of explanation
multi_item_comment_lists = [data['comments'] for data in big_data if ('comments' in data) and (len(data['comments']) > 1)]
print len(multi_item_comment_lists)
pprint.pprint(multi_item_comment_lists[0])
Explanation: Apparently that's not always the case, sometimes there are 2 items in the list, let's see what that looks like...
End of explanation
def flatten_comments_pages(post):
flattened_comments = []
for page in post:
flattened_comments += page['data']
return flattened_comments
post_comments_paged = multi_item_comment_lists[0]
print "Post has {} comments".format(len(flatten_comments_pages(post_comments_paged)))
Explanation: Skimming the above it looks as though very long comment threads are split into multiple "pages" in the comments list. This may be an artifact of the paging code in pull_feed.py, which is not ideal. At some point we may fix it there, but for the time being we'll just consider it a data quality inconvenience that we will have to deal with.
Here's a function to work around this annoyance:
End of explanation
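Here is the same flattening logic exercised on a synthetic two-page comments list, so you can see the workaround in isolation (the structure mimics the Graph API paging output; the ids are invented):

```python
def flatten_comments_pages(post_comments):
    # Concatenate the 'data' lists from every page into one flat list.
    flattened = []
    for page in post_comments:
        flattened += page["data"]
    return flattened

# Two "pages" of comments, as left behind by the paging code.
paged = [
    {"data": [{"id": "c1"}, {"id": "c2"}]},
    {"data": [{"id": "c3"}]},
]
print(len(flatten_comments_pages(paged)))  # 3
```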
comments_threads = [data['comments'] for data in big_data if 'comments' in data]
count_of_posts_with_no_comments = len(big_data) - len(comments_threads)
comments_counts = [0] * count_of_posts_with_no_comments
comments_counts += [len(flatten_comments_pages(thread)) for thread in comments_threads]
import matplotlib.pyplot as plt
plt.hist(comments_counts, bins=max(comments_counts))
plt.title("Comments-per-post Histogram")
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
Explanation: Start plotting things already dammit
Now that we're counting comments, it's natural to ask: what does the number-of-comments-per-post distribution look like?
IMPORTANT NOTE: Beyond this point, we start to "follow the data" as we analyse things, and we do so in a time-relative way (e.g. comparing the last N days of posts to historical data). As Big Data Malaysia is a living breathing group, the data set is a living breathing thing, so things may change, and the conclusions informing the analysis here may suffer logic rot.
End of explanation
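If you want the raw distribution numbers rather than a plot, `collections.Counter` gets you there directly; a quick sketch on made-up per-post counts:

```python
from collections import Counter

comment_counts = [0, 0, 1, 2, 0, 5, 1, 0]  # made-up per-post counts
dist = Counter(comment_counts)
print(dist[0])              # 4 posts with no comments
print(dist.most_common(1))  # [(0, 4)]
```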
likes_threads = [data['likes']['data'] for data in big_data if 'likes' in data]
count_of_posts_with_no_likes = len(big_data) - len(likes_threads)
likes_counts = [0] * count_of_posts_with_no_likes
likes_counts += [len(thread) for thread in likes_threads]
plt.hist(likes_counts, bins=max(likes_counts))
plt.title("Likes-per-post Histogram")
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
Explanation: This sort of adds up intuitively; posts with long comment threads will be rare, though from experience with this forum it does not seem right to conclude that there is a lot of posting going on with no interaction... the community is a bit more engaged than that.
But since this is Facebook, comments aren't the only way of interacting with a post. There's also the wonderful 'Like'.
End of explanation
import datetime
import dateutil.parser  # 'import dateutil' alone does not expose the parser submodule
import pytz
def epoch_utc_s(date_string):
dt_local = dateutil.parser.parse(str(date_string))
dt_utc = dt_local.astimezone(pytz.utc)
nineteenseventy = datetime.datetime(1970,1,1)
epoch_utc = dt_utc.replace(tzinfo=None) - nineteenseventy
return int(epoch_utc.total_seconds())
posts_without_likes = [data for data in big_data if 'likes' not in data]
posts_with_likes = [data for data in big_data if 'likes' in data]
timestamps_of_posts_without_likes = [epoch_utc_s(post['created_time']) for post in posts_without_likes]
timestamps_of_posts_with_likes = [epoch_utc_s(post['created_time']) for post in posts_with_likes]
import numpy
median_epoch_liked = int(numpy.median(timestamps_of_posts_with_likes))
median_epoch_non_liked = int(numpy.median(timestamps_of_posts_without_likes))
print "Median timestamp of posts without likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_non_liked),
median_epoch_non_liked)
print "Median timestamp of posts with likes: {} ({})".format(datetime.datetime.fromtimestamp(median_epoch_liked),
median_epoch_liked)
Explanation: Note that the above does not include Likes on Comments made on posts; only Likes made on posts themselves are counted.
While this paints the picture of a more engaged community, it still doesn't feel quite right. It seems unusual these days to find a post go by without a Like or two.
I have a hunch that the zero-like posts are skewed a bit to the earlier days of the group. To dig into that we'll need to start playing with timestamps. Personally I prefer to deal with time as UTC epoch seconds, and surprisingly it seems I need to write my own helper function for this.
End of explanation
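For what it's worth, on Python 3 the same helper needs nothing beyond the standard library, assuming `created_time` strings are ISO-8601 with a numeric offset (e.g. `+0800`), which appears to be what the Graph API returns:

```python
from datetime import datetime

def epoch_utc_s_stdlib(date_string):
    # %z understands numeric offsets like '+0800'; .timestamp() on an
    # aware datetime is already relative to the UTC epoch.
    dt = datetime.strptime(date_string, "%Y-%m-%dT%H:%M:%S%z")
    return int(dt.timestamp())

# 08:00 at UTC+8 is midnight UTC on 1 Jan 2015.
print(epoch_utc_s_stdlib("2015-01-01T08:00:00+0800"))  # 1420070400
```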
plt.hist(timestamps_of_posts_without_likes, alpha=0.5, label='non-Liked posts')
plt.hist(timestamps_of_posts_with_likes, alpha=0.5, label='Liked posts')
plt.title("Liked vs non-Liked posts")
plt.xlabel("Time (epoch UTC s)")
plt.ylabel("Count of posts")
plt.legend(loc='upper left')
plt.show()
Explanation: In general it seems my hunch may have been right, but it will be clearer if we plot it.
End of explanation
def less_than_n_days_ago(date_string, n):
    query_date = epoch_utc_s(date_string)
    cutoff = epoch_utc_s(datetime.datetime.now(pytz.utc) - datetime.timedelta(days=n))
    return query_date > cutoff
# try changing this variable then re-running this cell...
days_ago = 30
# create a slice of our big_data containing only posts created n days ago
recent_data = [data for data in big_data if less_than_n_days_ago(data['created_time'], days_ago)]
# plot the Likes-per-post Histogram for recent_data
recent_likes_threads = [data['likes']['data'] for data in recent_data if 'likes' in data]
recent_count_of_posts_with_no_likes = len(recent_data) - len(recent_likes_threads)
recent_likes_counts = [0] * recent_count_of_posts_with_no_likes
recent_likes_counts += [len(thread) for thread in recent_likes_threads]
plt.hist(recent_likes_counts, bins=max(recent_likes_counts))
plt.title("Likes-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Likes per post")
plt.ylabel("Frequency")
plt.show()
# plot the Comment-per-post Histogram for recent_data
recent_comments_threads = [data['comments'] for data in recent_data if 'comments' in data]
recent_count_of_posts_with_no_comments = len(recent_data) - len(recent_comments_threads)
recent_comments_counts = [0] * recent_count_of_posts_with_no_comments
recent_comments_counts += [len(flatten_comments_pages(thread)) for thread in recent_comments_threads]
plt.hist(recent_comments_counts, bins=max(recent_comments_counts))
plt.title("Comments-per-post Histogram (last {} days)".format(days_ago))
plt.xlabel("Comments per post")
plt.ylabel("Frequency")
plt.show()
Explanation: This is looking pretty legit now. We can see that lately there's been a significant uptick in the number of posts, and an uptick in the ratio of posts that receive at least one Like.
As another sanity check, we can revisit the Likes-per-post Histogram, but only include recent posts. While we're at it we might as well do the same for the Comments-per-post Histogram.
End of explanation |
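The whole recent-posts slice hinges on a single cutoff comparison. Here it is isolated, with an injectable clock so the result is deterministic (function and variable names here are mine):

```python
from datetime import datetime, timedelta, timezone

def newer_than_n_days(epoch_s, n, now=None):
    # Compare an epoch-seconds timestamp against a cutoff n days back.
    now = now or datetime.now(timezone.utc)
    cutoff = (now - timedelta(days=n)).timestamp()
    return epoch_s > cutoff

# Pin "now" so the example always gives the same answer.
fake_now = datetime(2015, 6, 1, tzinfo=timezone.utc)
recent = datetime(2015, 5, 20, tzinfo=timezone.utc).timestamp()
old = datetime(2015, 1, 1, tzinfo=timezone.utc).timestamp()
print(newer_than_n_days(recent, 30, now=fake_now))  # True
print(newer_than_n_days(old, 30, now=fake_now))     # False
```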
10,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
General Concepts
HOW TO RUN THIS FILE
Step1: Let's get started with some basic imports
Step2: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel).
The levels from most to least information are
Step3: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with "WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory.
Note
Step4: If you ever need to know the type of a Parameter, you can always use python's built-in type functionality
Step5: If we print the parameter object we can see a summary of information
Step6: You can see here that we've defined a few things about the parameter
Step7: and that each of these is available through both a dictionary key and an object attribute. For example
Step8: The 'qualifier' attribute is essentially an abbreviated name for the Parameter.
These tags will be shared across all Parameters, regardless of their type.
Attributes, on the other hand, can be dependent on the type of the Parameter and tell the Parameter its rules and how to interpret its value. You can access a list of available attributes as follows
Step9: and again, each of these are available through both a dictionary key and as an object attribute. For example, all parameters have a 'description' attribute which gives additional information about what the Parameter means
Step10: For the special case of the 'value' attribute, there is also a get method (will become handy later when we want to be able to request the value in a specific unit).
Step11: The value attribute is also the only attribute that you'll likely want to change, so it also has a set method
Step12: The 'visible_if' attribute only comes into play when the Parameter is a member of a ParameterSet, so we'll discuss it at the end of this tutorial when we get to ParameterSets.
The 'copy_for' attribute is only used when the Parameter is in a particular type of ParameterSet called a Bundle (explained at the very end of this tutorial). We'll see the 'copy_for' capability in action later in the Datasets Tutorial, but for now, just know that you can view this property only and cannot change it... and most of the time it will just be an empty string.
StringParameters
We'll just mention StringParameters again for completeness, but we've already seen about all they can do - the value must cast to a valid string but no limits or checks are performed at all on the value.
ChoiceParameters
ChoiceParameters are essentially StringParameters with one very important exception
Step13: FloatParameters
FloatParameters are probably the most common Parameter used in PHOEBE and hold both a float and a unit, with the ability to retrieve the value in any other convertible unit.
Step14: You'll notice here a few new mentions in the summary... "Constrained by", "Constrains", and "Related to" are all referring to constraints which will be discussed in a future tutorial.
Step15: FloatParameters have an attribute which hold the "limits" - whenever a value is set it will be checked to make sure it falls within the limits. If either the lower or upper limit is None, then there is no limit check for that extreme.
Step16: FloatParameters have an attribute which holds the "default_unit" - this is the unit in which the value is stored and the unit that will be provided if not otherwise overriden.
Step17: Calling get_value will then return a float in these units
Step18: But we can also request the value in a different unit, by passing an astropy Unit object or its string representation.
Step19: FloatParameters also have their own method to access an astropy Quantity object that includes both the value and the unit
Step20: The set_value method also accepts a unit - this doesn't change the default_unit internally, but instead converts the provided value before storing.
Step21: If for some reason you want to change the default_unit, you can do so as well
Step22: But note that the limits are still stored as a quantity object in the originally defined default_units
Step23: IntParameters
IntParameters are essentially the same as FloatParameters except they always cast to an Integer and they have no units.
Step24: Like FloatParameters above, IntParameters still have limits
Step25: Note that if you try to set the value to a float it will not raise an error, but will cast that value to an integer (following python rules of truncation, not rounding)
Step26: Bool Parameters
Boolean Parameters are even simpler - they accept True or False.
Step27: Note that, like IntParameters, BoolParameters will attempt to cast anything you give it into True or False.
Step28: As with Python, an empty string will cast to False and a non-empty string will cast to True
Step29: The only exception to this is that (unlike Python), 'true' or 'True' will cast to True and 'false' or 'False' will cast to False.
Step30: FloatArrayParameters
FloatArrayParameters are essentially the same as FloatParameters (in that they have the same unit treatment, although obviously no limits) but hold numpy arrays rather than a single value.
By convention in Phoebe, these will (almost) always have a pluralized qualifier.
Step31: FloatArrayParameters also allow for built-in interpolation... but this requires them to be a member of a Bundle, so we'll discuss this in just a bit.
ParametersSets
ParameterSets are a collection of Parameters that can be filtered by their tags to return another ParameterSet.
For illustration, let's create 3 random FloatParameters and combine them to make a ParameterSet.
Step32: If we print a ParameterSet, we'll see a listing of all the Parameters and their values.
Step33: Twigs
The string notation used for the Parameters is called a 'twig' - it's simply a combination of all the tags joined with the '@' symbol and gives a very convenient way to access any Parameter.
The order of the tags doesn't matter, and you only need to provide enough tags to produce a unique match. Since there is only one parameter with kind='kind1', we do not need to provide the extraneous context='context1' in the twig to get a match.
Step34: Note that this returned the ParameterObject itself, so you can now use any of the Parameter methods or attributes we saw earlier. For example
Step35: But we can also use set and get_value methods from the ParameterSet itself
Step36: Tags
Each Parameter has a number of tags, and the ParameterSet has the same tags - where the value of any given tag is None if not shared by all Parameters in that ParameterSet.
So let's just print the names of the tags again and then describe what each one means.
Step37: Most of these "metatags" act as labels - for example, you can give a component tag to each of the components for easier referencing.
But a few of these tags are fixed and not editable
Step38: This returns None since not all objects in this ParameterSet share a single context. But you can see all the options for a given tag by providing the plural version of that tag name
Step39: Filtering
Any of the tags can also be used to filter the ParameterSet
Step40: Here we were returned a ParameterSet of all Parameters that matched the filter criteria. Since we're returned another ParameterSet, we can chain additional filter calls together.
Step41: Now we see that we have drilled down to a single Parameter. Note that a ParameterSet is still returned - filter will always return a ParameterSet.
We could have accomplished the exact same thing with a single call to filter
Step42: If you want to access the actual Parameter, you must use get instead of (or in addition to) filter. All of the following lines do the exact same thing
Step43: Or we can use those twigs. Remember that twigs are just a combination of these tags separated by the @ symbol. You can use these for dictionary access in a ParameterSet - without needing to provide the name of the tag, and without having to worry about order. And whenever this returns a ParameterSet, these are also chainable, so the following two lines will do the same thing
Step44: You may notice that the final result was a Parameter, not a ParameterSet. Twig dictionary access tries to be smart - if exactly 1 Parameter is found, it will return that Parameter instead of a ParameterSet. Notice the difference between the two following lines
Step45: Of course, once you get the Parameter you can then use dictionary keys to access any attributes of that Parameter.
Step46: So we decided we might as well allow access to those attributes directly from the twig as well
Step47: The Bundle
The Bundle is nothing more than a glorified ParameterSet with some extra methods to compute models, add new components and datasets, etc.
You can initialize an empty Bundle as follows
Step48: and filter just as you would for a ParameterSet
Step49: Visible If
As promised earlier, the 'visible_if' attribute of a Parameter controls whether its visible to a ParameterSet... but it only does anything if the Parameter belongs to a Bundle.
Let's make a new ParameterSet in which the visibility of one parameter is dependent on the value of another.
Step50: It doesn't make much sense to need to define a mass if this thing isn't baryonic. So if we change the value of 'what_is_this' to 'aether' then the 'mass' Parameter will temporarily hide itself.
Step51: FloatArrayParameters
Step52: Now we can interpolate the 'ys' param for any given value of 'xs' | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: General Concepts
HOW TO RUN THIS FILE: if you're running this in a Jupyter notebook or Google Colab session, you can click on a cell and then shift+Enter to run the cell and automatically select the next cell. Alt+Enter will run a cell and create a new cell below it. Ctrl+Enter will run a cell but keep it selected. To restart from scratch, restart the kernel/runtime.
This tutorial introduces all the general concepts of dealing with Parameters, ParameterSets, and the Bundle. This tutorial aims to be quite complete - covering almost everything you can do with Parameters, so on first read you may just want to try to get familiar, and then return here as a reference for any details later.
All of these tutorials assume basic comfort with Python in general - particularly with the concepts of lists, dictionaries, and objects as well as basic comfort with using the numpy and matplotlib packages.
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
Explanation: Let's get started with some basic imports
End of explanation
logger = phoebe.logger(clevel='WARNING', flevel='DEBUG', filename='tutorial.log')
Explanation: If running in IPython notebooks, you may see a "ShimWarning" depending on the version of Jupyter you are using - this is safe to ignore.
PHOEBE 2 uses constants defined in the IAU 2015 Resolution which conflict with the constants defined in astropy. As a result, you'll see the warnings as phoebe.u and phoebe.c "hijacks" the values in astropy.units and astropy.constants.
Whenever providing units, please make sure to use phoebe.u instead of astropy.units, otherwise the conversions may be inconsistent.
Logger
Before starting any script, it is a good habit to initialize a logger and define which levels of information you want printed to the command line (clevel) and dumped to a file (flevel).
The levels from most to least information are:
DEBUG
INFO
WARNING
ERROR
CRITICAL
End of explanation
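Under the hood, phoebe.logger is built on Python's standard logging module. A rough stdlib analogue of the clevel/flevel split (the handler details are my guess at PHOEBE's internals, not its actual code):

```python
import logging
import os
import tempfile

# One handler per destination, each with its own level threshold.
log_path = os.path.join(tempfile.gettempdir(), "tutorial_sketch.log")

logger = logging.getLogger("tutorial_sketch")
logger.setLevel(logging.DEBUG)        # let everything through to the handlers

console = logging.StreamHandler()
console.setLevel(logging.WARNING)     # clevel: WARNING and up go to the screen
logger.addHandler(console)

to_file = logging.FileHandler(log_path)
to_file.setLevel(logging.DEBUG)       # flevel: everything goes to the file
logger.addHandler(to_file)

logger.debug("written to the file only")
logger.warning("written to both the screen and the file")
```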
param = phoebe.parameters.StringParameter(qualifier='myparameter',
description='mydescription',
value='myvalue')
Explanation: All of these arguments are optional and will default to clevel='WARNING' if not provided. There is therefore no need to provide a filename if you don't provide a value for flevel.
So with this logger, anything with "WARNING, ERROR, or CRITICAL levels will be printed to the screen. All messages of any level will be written to a file named 'tutorial.log' in the current directory.
Note: the logger messages are not included in the outputs shown below.
Parameters
Parameters hold a single value, but need to be aware of their own types, limits, and connections to other Parameters (more on this later when we discuss ParameterSets).
Note that generally you won't ever have to "create" or "define" your own Parameters, those will be created for you by helper functions, but we have to start somewhere... so let's create our first Parameter.
We'll start with creating a StringParameter since it is the most generic, and then discuss and specific differences for each type of Parameter.
End of explanation
print type(param)
Explanation: If you ever need to know the type of a Parameter, you can always use python's built-in type functionality:
End of explanation
print param
Explanation: If we print the parameter object we can see a summary of information
End of explanation
print param.meta
Explanation: You can see here that we've defined a few things about the parameter: the qualifier, description, and value (others do exist, they just don't show up in the summary).
These "things" can be split into two groups: tags and attributes (although in a pythonic sense, both can be accessed as attributes). Don't worry too much about this distinction - it isn't really important except for the fact that tags are shared across all Parameters whereas attributes are dependent on the type of the Parameter.
The tags of a Parameter define the Parameter and how it connects to other Parameters (again, more on this when we get to ParameterSets). For now, just know that you can access a list of all the tags as follows:
End of explanation
print param['qualifier'], param.qualifier
Explanation: and that each of these is available through both a dictionary key and an object attribute. For example:
End of explanation
param.attributes
Explanation: The 'qualifier' attribute is essentially an abbreviated name for the Parameter.
These tags will be shared across all Parameters, regardless of their type.
Attributes, on the other hand, can be dependent on the type of the Parameter and tell the Parameter its rules and how to interpret its value. You can access a list of available attributes as follows:
End of explanation
print param['description'], param.description
Explanation: and again, each of these are available through both a dictionary key and as an object attribute. For example, all parameters have a 'description' attribute which gives additional information about what the Parameter means:
End of explanation
print param.get_value(), param['value'], param.value
Explanation: For the special case of the 'value' attribute, there is also a get method (will become handy later when we want to be able to request the value in a specific unit).
End of explanation
param.set_value('newvalue')
print param.get_value()
Explanation: The value attribute is also the only attribute that you'll likely want to change, so it also has a set method:
End of explanation
param = phoebe.parameters.ChoiceParameter(qualifier='mychoiceparameter',
description='mydescription',
choices=['choice1', 'choice2'],
value='choice1')
print param
print param.attributes
print param['choices'], param.choices
print param.get_value()
#param.set_value('not_a_choice') # would raise a ValueError
param.set_value('choice2')
print param.get_value()
Explanation: The 'visible_if' attribute only comes into play when the Parameter is a member of a ParameterSet, so we'll discuss it at the end of this tutorial when we get to ParameterSets.
The 'copy_for' attribute is only used when the Parameter is in a particular type of ParameterSet called a Bundle (explained at the very end of this tutorial). We'll see the 'copy_for' capability in action later in the Datasets Tutorial, but for now, just know that you can view this property only and cannot change it... and most of the time it will just be an empty string.
StringParameters
We'll just mention StringParameters again for completeness, but we've already seen about all they can do - the value must cast to a valid string but no limits or checks are performed at all on the value.
ChoiceParameters
ChoiceParameters are essentially StringParameters with one very important exception: the value must match one of the prescribed choices.
Therefore, they have a 'choices' attribute, and an error will be raised if you try to set the value to any string not in that list.
End of explanation
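The choice check itself is simple to picture; here is a hypothetical miniature of a choice-restricted parameter (purely illustrative, not PHOEBE's implementation):

```python
class MiniChoiceParam:
    # Illustrative sketch: a string value restricted to a fixed choices list.
    def __init__(self, choices, value):
        self.choices = list(choices)
        self.set_value(value)

    def set_value(self, value):
        if value not in self.choices:
            raise ValueError("{!r} is not one of {}".format(value, self.choices))
        self._value = value

    def get_value(self):
        return self._value

mini = MiniChoiceParam(["choice1", "choice2"], "choice1")
mini.set_value("choice2")
print(mini.get_value())  # choice2
```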
param = phoebe.parameters.FloatParameter(qualifier='myfloatparameter',
description='mydescription',
default_unit=u.m,
limits=[None,20],
value=5)
print param
Explanation: FloatParameters
FloatParameters are probably the most common Parameter used in PHOEBE and hold both a float and a unit, with the ability to retrieve the value in any other convertible unit.
End of explanation
print param.attributes
Explanation: You'll notice here a few new mentions in the summary... "Constrained by", "Constrains", and "Related to" are all referring to constraints which will be discussed in a future tutorial.
End of explanation
print param['limits'], param.limits
#param.set_value(30) # would raise a ValueError
param.set_value(2)
print param.get_value()
Explanation: FloatParameters have an attribute which hold the "limits" - whenever a value is set it will be checked to make sure it falls within the limits. If either the lower or upper limit is None, then there is no limit check for that extreme.
End of explanation
print param['default_unit'], param.default_unit
Explanation: FloatParameters have an attribute which holds the "default_unit" - this is the unit in which the value is stored and the unit that will be provided if not otherwise overriden.
End of explanation
print param.get_value()
Explanation: Calling get_value will then return a float in these units
End of explanation
print param.get_value(unit=u.km), param.get_value(unit='km')
Explanation: But we can also request the value in a different unit, by passing an astropy Unit object or its string representation.
End of explanation
print param.get_quantity(), param.get_quantity(unit=u.km)
Explanation: FloatParameters also have their own method to access an astropy Quantity object that includes both the value and the unit
End of explanation
param.set_value(10)
print param.get_quantity()
param.set_value(0.001*u.km)
print param.get_quantity()
param.set_value(10, unit='cm')
print param.get_quantity()
Explanation: The set_value method also accepts a unit - this doesn't change the default_unit internally, but instead converts the provided value before storing.
End of explanation
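One way to picture the default_unit bookkeeping: store the value in the default unit and convert on the way in and out. A toy sketch with hand-rolled conversion factors (again illustrative only; PHOEBE uses astropy-style units):

```python
# Hypothetical factors to the base unit (metres) for a few length units.
FACTORS_TO_M = {"m": 1.0, "km": 1000.0, "cm": 0.01}

class MiniFloatParam:
    def __init__(self, value, default_unit="m"):
        self.default_unit = default_unit
        self.set_value(value, unit=default_unit)

    def set_value(self, value, unit=None):
        unit = unit or self.default_unit
        # Normalise the incoming value to the default unit before storing.
        self._value = value * FACTORS_TO_M[unit] / FACTORS_TO_M[self.default_unit]

    def get_value(self, unit=None):
        unit = unit or self.default_unit
        return self._value * FACTORS_TO_M[self.default_unit] / FACTORS_TO_M[unit]

mp = MiniFloatParam(5, default_unit="m")
mp.set_value(0.001, unit="km")
print(mp.get_value())      # 1.0 (metres, the default unit)
print(mp.get_value("cm"))  # 100.0
```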
param.set_default_unit(u.km)
print param.get_quantity()
Explanation: If for some reason you want to change the default_unit, you can do so as well:
End of explanation
print param.limits
Explanation: But note that the limits are still stored as a quantity object in the originally defined default_units
End of explanation
param = phoebe.parameters.IntParameter(qualifier='myintparameter',
description='mydescription',
limits=[0,None],
value=1)
print param
print param.attributes
Explanation: IntParameters
IntParameters are essentially the same as FloatParameters except they always cast to an Integer and they have no units.
End of explanation
print param['limits'], param.limits
Explanation: Like FloatParameters above, IntParameters still have limits
End of explanation
param.set_value(1.9)
print param.get_value()
Explanation: Note that if you try to set the value to a float it will not raise an error, but will cast that value to an integer (following python rules of truncation, not rounding)
End of explanation
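The truncation behavior mentioned above is just Python's own int() conversion, which drops the fractional part rather than rounding - a quick standalone check:

```python
# int() truncates toward zero; it does not round
print(int(1.9))    # 1
print(int(-1.9))   # -1  (truncation toward zero, not floor)
print(round(1.9))  # 2   (rounding, for contrast)
```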
param = phoebe.parameters.BoolParameter(qualifier='myboolparameter',
description='mydescription',
value=True)
print param
print param.attributes
Explanation: Bool Parameters
Boolean Parameters are even simpler - they accept True or False.
End of explanation
param.set_value(0)
print param.get_value()
param.set_value(None)
print param.get_value()
Explanation: Note that, like IntParameters, BoolParameters will attempt to cast anything you give it into True or False.
End of explanation
param.set_value('')
print param.get_value()
param.set_value('some_string')
print param.get_value()
Explanation: As with Python, an empty string will cast to False and a non-empty string will cast to True
End of explanation
param.set_value('False')
print param.get_value()
param.set_value('false')
print param.get_value()
Explanation: The only exception to this is that (unlike Python), 'true' or 'True' will cast to True and 'false' or 'False' will cast to False.
End of explanation
param = phoebe.parameters.FloatArrayParameter(qualifier='myfloatarrayparameters',
description='mydescription',
default_unit=u.m,
value=np.array([0,1,2,3]))
print param
print param.attributes
print param.get_value(unit=u.km)
Explanation: FloatArrayParameters
FloatArrayParameters are essentially the same as FloatParameters (in that they have the same unit treatment, although obviously no limits) but hold numpy arrays rather than a single value.
By convention in Phoebe, these will (almost) always have a pluralized qualifier.
End of explanation
param1 = phoebe.parameters.FloatParameter(qualifier='param1',
description='param1 description',
default_unit=u.m,
limits=[None,20],
value=5,
context='context1',
kind='kind1')
param2 = phoebe.parameters.FloatParameter(qualifier='param2',
description='param2 description',
default_unit=u.deg,
limits=[0,2*np.pi],
value=0,
context='context2',
kind='kind2')
param3 = phoebe.parameters.FloatParameter(qualifier='param3',
description='param3 description',
default_unit=u.kg,
limits=[0,2*np.pi],
value=0,
context='context1',
kind='kind2')
ps = phoebe.parameters.ParameterSet([param1, param2, param3])
print ps.to_list()
Explanation: FloatArrayParameters also allow for built-in interpolation... but this requires them to be a member of a Bundle, so we'll discuss this in just a bit.
ParameterSets
ParameterSets are a collection of Parameters that can be filtered by their tags to return another ParameterSet.
For illustration, let's create 3 random FloatParameters and combine them to make a ParameterSet.
End of explanation
print ps
Explanation: If we print a ParameterSet, we'll see a listing of all the Parameters and their values.
End of explanation
print ps.get('param1@kind1')
Explanation: Twigs
The string notation used for the Parameters is called a 'twig' - it's simply a combination of all the tags joined with the '@' symbol and gives a very convenient way to access any Parameter.
The order of the tags doesn't matter, and you only need to provide enough tags to produce a unique match. Since there is only one parameter with kind='kind1', we do not need to provide the extraneous context='context1' in the twig to get a match.
End of explanation
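The matching rule described above - any subset of tags in any order, as long as it produces a unique hit - can be illustrated with a small standalone sketch (this is not PHOEBE's internal code; the helper below is invented to show the idea):

```python
def match_twig(twig, parameters):
    """Return the parameters whose tag values include every piece of the twig."""
    pieces = set(twig.split('@'))
    return [tags for tags in parameters if pieces <= set(tags.values())]

params = [
    {'qualifier': 'param1', 'kind': 'kind1', 'context': 'context1'},
    {'qualifier': 'param2', 'kind': 'kind2', 'context': 'context2'},
    {'qualifier': 'param3', 'kind': 'kind2', 'context': 'context1'},
]

# order of tags doesn't matter, and a partial twig is fine if it's unique
print(len(match_twig('param1@kind1', params)))  # 1
print(len(match_twig('kind1@param1', params)))  # 1
print(len(match_twig('kind2', params)))         # 2 (not unique)
```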
print ps.get('param1@kind1').description
Explanation: Note that this returned the Parameter object itself, so you can now use any of the Parameter methods or attributes we saw earlier. For example:
End of explanation
ps.set_value('param1@kind1', 10)
print ps.get_value('param1@kind1')
Explanation: But we can also use set and get_value methods from the ParameterSet itself:
End of explanation
print ps.meta.keys()
Explanation: Tags
Each Parameter has a number of tags, and the ParameterSet has the same tags - where the value of any given tag is None if not shared by all Parameters in that ParameterSet.
So let's just print the names of the tags again and then describe what each one means.
End of explanation
print ps.context
Explanation: Most of these "metatags" act as labels - for example, you can give a component tag to each of the components for easier referencing.
But a few of these tags are fixed and not editable:
qualifier: literally the name of the parameter.
kind: tells what kind a parameter is (ie whether a component is a star or an orbit).
context: tells what context this parameter belongs to
twig: a shortcut to the parameter in a single string.
uniquetwig: the minimal twig needed to reach this parameter.
uniqueid: an internal representation used to reach this parameter
These contexts are (you'll notice that most are represented in the tags):
setting
history
system
component
feature
dataset
constraint
compute
model
fitting [not yet supported]
feedback [not yet supported]
plugin [not yet supported]
One way to distinguish between context and kind is with the following question and answer:
"What kind of [context] is this? It's a [kind] tagged [context]=[tag-with-same-name-as-context]."
In different cases, this will then become:
"What kind of component is this? It's a star tagged component=starA." (context='component', kind='star', component='starA')
"What kind of feature is this? It's a spot tagged feature=spot01." (context='feature', kind='spot', feature='spot01')
"What kind of dataset is this? It's a LC (light curve) tagged dataset=lc01." (context='dataset', kind='LC', dataset='lc01')
"What kind of compute (options) are these? They're phoebe (compute options) tagged compute=preview." (context='compute', kind='phoebe', compute='preview')
As we saw before, these tags can be accessed at the Parameter level as either a dictionary key or as an object attribute. For ParameterSets, the tags are only accessible through object attributes.
End of explanation
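The "None when not shared" rule for ParameterSet tags can be mimicked in a few lines of plain Python (again an illustrative sketch, not the real PHOEBE classes):

```python
def shared_tag(parameters, tag):
    """Return the common value of `tag` across all parameters, or None if mixed."""
    values = {p[tag] for p in parameters}
    return values.pop() if len(values) == 1 else None

params = [
    {'qualifier': 'param1', 'context': 'context1'},
    {'qualifier': 'param3', 'context': 'context1'},
]

print(shared_tag(params, 'context'))    # 'context1' -- shared by all members
print(shared_tag(params, 'qualifier'))  # None -- differs between parameters
```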
print ps.contexts
Explanation: This returns None since not all objects in this ParameterSet share a single context. But you can see all the options for a given tag by providing the plural version of that tag name:
End of explanation
print ps.filter(context='context1')
Explanation: Filtering
Any of the tags can also be used to filter the ParameterSet:
End of explanation
print ps.filter(context='context1').filter(kind='kind1')
Explanation: Here we were returned a ParameterSet of all Parameters that matched the filter criteria. Since we're returned another ParameterSet, we can chain additional filter calls together.
End of explanation
print ps.filter(context='context1', kind='kind1')
Explanation: Now we see that we have drilled down to a single Parameter. Note that a ParameterSet is still returned - filter will always return a ParameterSet.
We could have accomplished the exact same thing with a single call to filter:
End of explanation
print ps.filter(context='context1', kind='kind1').get()
print ps.get(context='context1', kind='kind1')
Explanation: If you want to access the actual Parameter, you must use get instead of (or in addition to) filter. All of the following lines do the exact same thing:
End of explanation
print ps['context1@kind1']
print ps['context1']['kind1']
Explanation: Or we can use those twigs. Remember that twigs are just a combination of these tags separated by the @ symbol. You can use these for dictionary access in a ParameterSet - without needing to provide the name of the tag, and without having to worry about order. And whenever this returns a ParameterSet, these are also chainable, so the following two lines will do the same thing:
End of explanation
print ps['context1']
print ps['context1@kind1']
Explanation: You may notice that the final result was a Parameter, not a ParameterSet. Twig dictionary access tries to be smart - if exactly 1 Parameter is found, it will return that Parameter instead of a ParameterSet. Notice the difference between the two following lines:
End of explanation
print ps['context1@kind1']['description']
Explanation: Of course, once you get the Parameter you can then use dictionary keys to access any attributes of that Parameter.
End of explanation
print ps['description@context1@kind1']
Explanation: So we decided we might as well allow access to those attributes directly from the twig as well
End of explanation
b = phoebe.Bundle()
print b
Explanation: The Bundle
The Bundle is nothing more than a glorified ParameterSet with some extra methods to compute models, add new components and datasets, etc.
You can initialize an empty Bundle as follows:
End of explanation
print b.filter(context='system')
Explanation: and filter just as you would for a ParameterSet
End of explanation
param1 = phoebe.parameters.ChoiceParameter(qualifier='what_is_this',
choices=['matter', 'aether'],
value='matter',
context='context1')
param2 = phoebe.parameters.FloatParameter(qualifier='mass',
default_unit=u.kg,
value=5,
visible_if='what_is_this:matter',
context='context1')
b = phoebe.Bundle([param1, param2])
print b.filter()
Explanation: Visible If
As promised earlier, the 'visible_if' attribute of a Parameter controls whether it's visible to a ParameterSet... but it only does anything if the Parameter belongs to a Bundle.
Let's make a new ParameterSet in which the visibility of one parameter is dependent on the value of another.
End of explanation
b.set_value('what_is_this', 'aether')
print b.filter()
Explanation: It doesn't make much sense to need to define a mass if this thing isn't baryonic. So if we change the value of 'what_is_this' to 'aether' then the 'mass' Parameter will temporarily hide itself.
End of explanation
xparam = phoebe.parameters.FloatArrayParameter(qualifier='xs',
default_unit=u.d,
value=np.linspace(0,1,10),
context='context1')
yparam = phoebe.parameters.FloatArrayParameter(qualifier='ys',
default_unit=u.m,
value=np.linspace(0,1,10)**2,
context='context1')
b = phoebe.Bundle([xparam, yparam])
b.filter('ys').get().twig
b['ys'].get_value()
Explanation: FloatArrayParameters: interpolation
As mentioned earlier, when part of a Bundle, FloatArrayParameters can handle simple linear interpolation with respect to another FloatArrayParameter in the same Bundle.
End of explanation
b['ys'].interp_value(xs=0)
b['ys'].interp_value(xs=0.2)
Explanation: Now we can interpolate the 'ys' param for any given value of 'xs'
End of explanation |
10,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
9 defenses
Low Bar
ALLIANCE selected
Audience selected
ALLIANCE selected
ALLIANCE selected
Data structure choices include
Step1: Analysis Functions | Python Code:
# Object oriented approach, would have to feed csv data into objects
# maybe get rid of this and just use library analysis tools
class Robot(object):
def __init__(self, name, alliance, auto_points, points):
self.name = name
self.alliance = alliance
self.auto_points = auto_points
self.points = points
def points_per_sec(self):
return self.points / 150
def auto_points_per_sec(self):
return self.auto_points / 15
def get_name(self):
return self.name
def get_alliance(self):
return self.alliance
data
def analyze(dataframe, team):
    # Total score: teleop points plus autonomous points
    total_points = dataframe[team]['Points'] + dataframe[team]['Auto Points']
    cumulative_success_rate = 4  # placeholder value, not computed yet
    pps = dataframe[team]['Points'] / 150  # points per second over the 150 s match
    auto_pps = dataframe[team]['Auto Points'] / 15  # points per second over the 15 s autonomous period
    return (total_points, pps, auto_pps)
stuff = analyze(data, 'Cougar Tech')
print stuff
Explanation: 9 defenses
Low Bar
ALLIANCE selected
Audience selected
ALLIANCE selected
ALLIANCE selected
Data structure choices include:
- Pandas dataframes
- Numpy Arrays
- Object oriented
- Dictionary
End of explanation
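Of the options listed above, a plain dictionary is the lightest-weight alternative to the Robot class. A hedged sketch of what that could look like (field names and the second team's numbers are invented for illustration; 'Cougar Tech' is the team queried in the code above):

```python
# One record per robot; derived stats are computed on demand
robots = {
    'Cougar Tech': {'alliance': 'red', 'auto_points': 20, 'points': 45},
    'Team B':      {'alliance': 'blue', 'auto_points': 10, 'points': 60},
}

def points_per_sec(record, match_seconds=150):
    """Scoring rate over the whole match, mirroring Robot.points_per_sec."""
    return record['points'] / match_seconds

print(points_per_sec(robots['Cougar Tech']))  # 0.3
print(points_per_sec(robots['Team B']))       # 0.4
```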
data = pd.read_csv("robodummy.csv")
fig, axs = plt.subplots(1, 4, sharey = True)
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[0], figsize = (16, 8))
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[1])
data.plot(kind='scatter', x = 'x', y = 'y', ax = axs[2])
a = np.array(([1, 4], [6, 5], [9, 3]))
np.sort(a)
Explanation: Analysis Functions:
End of explanation |
10,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load the .json files
Step1: See where exactly in the file the data of interest is located
Step2: Read the data we need as dataframes
Step3: Create separate columns in the dataframes with the data in formats convenient for us.
Step4: Create columns in the "Goal1Completions" dataframe where we will store the number of sessions and the conversion rate
Step5: Copy the number of sessions from the sessions table and compute the conversion rate for every page present in "Goal1Completions"
Step6: Zero out the conversion rate for pages that had no sessions. In this case that is the "(entrance)" page
Step7: Plot the results
Step8: Print the result
path = 'data/Sessions_Page.json'
path2 = 'data/Goal1CompletionLocation_Goal1Completions.json'
with open(path, 'r') as f:
sessions_page = json.loads(f.read())
with open(path2, 'r') as f:
goals_page = json.loads(f.read())
Explanation: Load the .json files
End of explanation
type (sessions_page)
sessions_page.keys()
sessions_page['reports'][0].keys()
sessions_page['reports'][0]['data']['rows']
Explanation: See where exactly in the file the data of interest is located
End of explanation
sessions_df = pd.DataFrame(sessions_page['reports'][0]['data']['rows'])
goals_df = pd.DataFrame(goals_page['reports'][0]['data']['rows'])
Explanation: Read the data we need as dataframes
End of explanation
x=[]
for i in sessions_df.dimensions:
x.append(str(i[0]))
sessions_df.insert(2, 'name', x)
x=[]
for i in goals_df.dimensions:
x.append(str(i[0]))
goals_df.insert(2, 'name', x)
x=[]
for i in sessions_df.metrics:
x.append(float(i[0]['values'][0]))
sessions_df.insert(3, 'sessions', x)
x=[]
for i in goals_df.metrics:
x.append(float(i[0]['values'][0]))
goals_df.insert(3, 'goals', x)
Explanation: Create separate columns in the dataframes with the data in formats convenient for us.
End of explanation
goals_df.insert(4, 'sessions', 0)
goals_df.insert(5, 'convers_rate', 0)
Explanation: Create columns in the "Goal1Completions" dataframe where we will store the number of sessions and the conversion rate
End of explanation
for i in range(7):
goals_df.sessions[i] = sum(sessions_df.sessions[sessions_df.name==goals_df.name[i]])
goals_df.convers_rate = goals_df.goals/goals_df.sessions*100
Explanation: Copy the number of sessions from the sessions table and compute the conversion rate for every page present in "Goal1Completions"
End of explanation
goals_df.convers_rate[goals_df.sessions==0] = 0
goals_df.ix[range(1,7),[2,5]]
Explanation: Zero out the conversion rate for pages that had no sessions. In this case that is the "(entrance)" page
End of explanation
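The guard above avoids a divide-by-zero artifact for pages with goals but no recorded sessions. The same pattern in a minimal standalone form (toy numbers, not the notebook's real analytics data):

```python
def conversion_rate(goals, sessions):
    """Conversion in percent; defined as 0 when there were no sessions."""
    return goals / sessions * 100 if sessions > 0 else 0

print(conversion_rate(5, 200))  # 2.5
print(conversion_rate(3, 0))    # 0  (e.g. the "(entrance)" row)
```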
goals_df.ix[range(1,7),[2,5]].plot(kind="bar", legend=False)
plt.xticks([0, 1, 2, 3, 4, 5], goals_df.name, rotation="vertical")
plt.show()
Explanation: Строим график
End of explanation
name = goals_df.ix[goals_df.convers_rate==max(goals_df.convers_rate),2]
print 'The best converting page on your site is "',str(name)[5:len(name)-28], '" with conversion rate', max(goals_df.convers_rate),'%'
Explanation: Print the result
End of explanation |
10,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Issue 26
Step1: Builtin method to highlight clades
Step2: Or, use toyplot directly
This method is more flexible and you can do just about anything with it.
Step3: More examples
Things can get a little wonky with respect to the tipnames when you add extra elements on the same set of axes, since it affects the extents of the plot. This makes it difficult to create automated functions like the .draw_clade_box() annotation function above that will work generally for any tip names. This is something that I hope to improve in the future... | Python Code:
import toytree
import toyplot
# generate a random tree
tre = toytree.rtree.unittree(ntips=10, treeheight=100, seed=123)
Explanation: Issue 26: highlight clades
I've created an early form of an Annotator class object that can be used to add highlights to a toytree drawing.
Possible extensions:
- apply linear gradient of >1 colors
- box around tipnames
- ...
End of explanation
# draw tree on canvas
canvas, axes, mark = tre.draw(ts='c', layout='r', tip_labels=True);
# get annotator tool
anno = toytree.utils.Annotator(tre, axes, mark)
# annotate clade by selecting names
anno.draw_clade_box(
names=['r0', 'r5'],
style={
"fill": 'red',
"fill-opacity": 0.15,
"stroke-width": 2,
"stroke": 'red',
"stroke-opacity": 0.3,
},
);
Explanation: Builtin method to highlight clades
End of explanation
# draw tree on canvas
canvas, axes, mark = tre.draw(ts='o', layout='r', tip_labels=False);
# draw rectangles next to two clades
axes.rectangle(20, 40, -0.45, 3.45, color=toytree.colors[1], opacity=0.5)
axes.rectangle(20, 40, 3.55, 5.45, color=toytree.colors[2], opacity=0.5)
axes.rectangle(20, 40, 5.55, 9.45, color=toytree.colors[3], opacity=0.5)
axes.text(50, 1.5, "clade A", style={"text-anchor": "start", "fill": toytree.colors[1]})
axes.text(50, 4.5, "clade B", style={"text-anchor": "start", "fill": toytree.colors[2]})
axes.text(50, 7.5, "clade C", style={"text-anchor": "start", "fill": toytree.colors[3]});
Explanation: Or, use toyplot directly
This method is more flexible and you can do just about anything with it.
End of explanation
import numpy as np
import string
tre = toytree.rtree.unittree(ntips=10, treeheight=100, seed=123)
tre = tre.set_node_values(
"name",
{i: str(i) + string.ascii_letters[:np.random.randint(5, 15)] for i in range(10)}
)
color = toytree.colors[1]
# draw tree on canvas
canvas, axes, mark = tre.draw(ts='c', layout='r', tip_labels=True);
# get annotator tool
anno = toytree.utils.Annotator(tre, axes, mark)
# annotate clade
anno.draw_clade_box(
tre.get_tip_labels()[:3],
yspace=tre.treenode.height / 15,
style={
"fill": color,
"fill-opacity": 0.25,
"stroke-width": 2,
"stroke": color,
"stroke-opacity": 0.5,
},
);
Explanation: More examples
Things can get a little wonky with respect to the tipnames when you add extra elements on the same set of axes, since it affects the extents of the plot. This makes it difficult to create automated functions like the .draw_clade_box() annotation function above that will work generally for any tip names. This is something that I hope to improve in the future...
End of explanation |
10,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GaAs-AlAs Bragg Mirror
A simple bragg mirror in the near infrared. As a default, this notebook is setup to work with binder.
Importing libraries
Below we import the library python style, i.e. with the import Library syntax.
Step1: Dielectric Stack inputs
Below are all the input variables of the dielectric mirror. Play around with them and see what happens
Step2: Field computation and reference data loading
In the first cell we compute the dielectric stack reflectance using 1DPyHC, and convert the output to a numpy array;
In the second cell we import the reflectance spectra computed for the same system with py_matrix
Step3: data plotting
Here we plot the data using matplotlib and pyplot. If you want to create new plots with different stack parameters, it's better to remove the first row in the plot command, therefore removing the reference plot. | Python Code:
# importing libraries
import numpy as np # numpy
import matplotlib.pyplot as plt # matplotlib pyplot
import sys # sys to add py_matrix to the path
# adding folder containing 1DPyHC to path: by default is the folder for running in binder
sys.path.append('/home/main/notebooks')
import pyhc as phc # importing 1DPyHC
# useful parameters
f_size=20;
# inline plot magic
%matplotlib inline
Explanation: GaAs-AlAs Bragg Mirror
A simple Bragg mirror in the near infrared. By default, this notebook is set up to work with Binder.
Importing libraries
Below we import the libraries Python-style, i.e. with the import library syntax.
End of explanation
# ref indexes
n2 = 3.49 # GaAs
n1 = 2.95 # AlAs
n_inc = 1.0 # Incidend medium
n_sub = 1.45 # Substrate
# wavelength
wl = 1064
v_wl = np.linspace(950,1200,100)
# thickness
b = wl/(4 * n2)
a = wl/(4 * n1)
# stacks
N=20
Explanation: Dielectric Stack inputs
Below are all the input variables of the dielectric mirror. Play around with them and see what happens
End of explanation
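The thickness assignments above (b = wl/(4 * n2) and a = wl/(4 * n1)) implement the textbook quarter-wave condition at the design wavelength $\lambda_0$ (wl in the code): each layer is optically a quarter wave thick,

```latex
d_i = \frac{\lambda_0}{4\,n_i}
```

so that reflections from successive interfaces add in phase, producing the high-reflectance stop band centered on $\lambda_0$.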
# 1dphc computation
v_R = np.array([phc.rN(b, a, n2, n1, n_inc, n_sub, N, phc.f_omega(l), 0.0) for l in v_wl])
# reference t-matrix loading
ref_data = np.loadtxt('gaas_alas_tmatrix.spt')
v_wl_ref = ref_data[:,0]
v_R_ref = ref_data[:,1]
Explanation: Field computation and reference data loading
In the first cell we compute the dielectric stack reflectance using 1DPyHC, and convert the output to a numpy array;
In the second cell we import the reflectance spectra computed for the same system with py_matrix
End of explanation
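The rN call above is 1DPyHC's own routine; as an independent cross-check, the normal-incidence reflectance of a quarter-wave stack can also be computed with the standard characteristic-matrix method in a few lines of numpy. This is a generic sketch - the layer ordering and sign conventions here are assumptions about the stack, not 1DPyHC internals:

```python
import numpy as np

def stack_reflectance(wl, pairs, n_hi, n_lo, d_hi, d_lo, n_inc, n_sub):
    """Normal-incidence reflectance of an (hi, lo)^pairs stack via 2x2 characteristic matrices."""
    M = np.eye(2, dtype=complex)
    for _ in range(pairs):
        for n, d in ((n_hi, d_hi), (n_lo, d_lo)):
            delta = 2 * np.pi * n * d / wl  # phase thickness of the layer
            layer = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
            M = M @ layer
    B, C = M @ np.array([1, n_sub])
    r = (n_inc - C / B) / (n_inc + C / B)
    return abs(r) ** 2

# the notebook's design point: quarter-wave layers at 1064 nm, 20 pairs
wl0 = 1064.0
R = stack_reflectance(wl0, 20, 3.49, 2.95, wl0 / (4 * 3.49), wl0 / (4 * 2.95), 1.0, 1.45)
print(R)  # close to 1 -- high reflectance at the design wavelength, as expected
```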
# result plot
plt.figure(figsize=(12,10))
plt.plot(v_wl_ref,v_R_ref,'k',
v_wl,v_R,'ro',linewidth=2.0)
# ticks and labels
plt.xticks(fontsize=f_size)
plt.yticks(fontsize=f_size)
plt.xlabel("Wavelength (nm)", fontsize=f_size)
plt.ylabel("R", fontsize=f_size)
# legend
plt.legend(('tmatrix reference','1DPyHC'),frameon=False,fontsize=f_size-5,loc='lower center')
Explanation: data plotting
Here we plot the data using matplotlib and pyplot. If you want to create new plots with different stack parameters, it's better to remove the first row in the plot command, therefore removing the reference plot.
End of explanation |
10,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https
Step1: Overview
Pre-processes COMPAS dataset
Step2: Processing original dataset
Step3: Shuffle and Split into Train (70%) and Test set (30%)
Step4: Computing Invese propensity weights for each subgroup, and writes to directory.
IPS_example_weights_with_label.json
Step5: Construct vocabulary.json, and write to directory.
vocabulary.json
Step6: Construct mean_std.json, and write to directory
mean_std.json | Python Code:
from __future__ import division
import pandas as pd
import numpy as np
import json
import os,sys
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import numpy as np
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
End of explanation
pd.options.display.float_format = '{:,.2f}'.format
dataset_base_dir = './group_agnostic_fairness/data/compas/'
dataset_file_name = 'compas-scores-two-years.csv'
Explanation: Overview
Pre-processes COMPAS dataset:
Download the COMPAS dataset from:
https://github.com/propublica/compas-analysis/blob/master/compas-scores-two-years.csv
and save it in the ./group_agnostic_fairness/data/compas folder.
Input: ./group_agnostic_fairness/data/compas/compas-scores-two-years.csv
Outputs: train.csv, test.csv, mean_std.json, vocabulary.json, IPS_exampleweights_with_label.json, IPS_exampleweights_without_label.json
End of explanation
file_path = os.path.join(dataset_base_dir,dataset_file_name)
with open(file_path, "r") as file_name:
temp_df = pd.read_csv(file_name)
# Columns of interest
columns = ['juv_fel_count', 'juv_misd_count', 'juv_other_count', 'priors_count',
'age',
'c_charge_degree',
'c_charge_desc',
'age_cat',
'sex', 'race', 'is_recid']
target_variable = 'is_recid'
target_value = 'Yes'
# Drop duplicates
temp_df = temp_df[['id']+columns].drop_duplicates()
df = temp_df[columns].copy()
# Convert columns of type ``object`` to ``category``
df = pd.concat([
df.select_dtypes(include=[], exclude=['object']),
df.select_dtypes(['object']).apply(pd.Series.astype, dtype='category')
], axis=1).reindex(df.columns, axis=1)
# Binarize target_variable
df['is_recid'] = df.apply(lambda x: 'Yes' if x['is_recid']==1.0 else 'No', axis=1).astype('category')
# Process protected-column values
race_dict = {'African-American':'Black','Caucasian':'White'}
df['race'] = df.apply(lambda x: race_dict[x['race']] if x['race'] in race_dict.keys() else 'Other', axis=1).astype('category')
df.head()
Explanation: Processing original dataset
End of explanation
train_df, test_df = train_test_split(df, test_size=0.30, random_state=42)
output_file_path = os.path.join(dataset_base_dir,'train.csv')
with open(output_file_path, mode="w") as output_file:
train_df.to_csv(output_file,index=False,columns=columns,header=False)
output_file.close()
output_file_path = os.path.join(dataset_base_dir,'test.csv')
with open(output_file_path, mode="w") as output_file:
test_df.to_csv(output_file,index=False,columns=columns,header=False)
output_file.close()
Explanation: Shuffle and Split into Train (70%) and Test set (30%)
End of explanation
IPS_example_weights_without_label = {
0: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex != 'Female')])), # 00: White Male
1: (len(train_df))/(len(train_df[(train_df.race != 'Black') & (train_df.sex == 'Female')])), # 01: White Female
2: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex != 'Female')])), # 10: Black Male
3: (len(train_df))/(len(train_df[(train_df.race == 'Black') & (train_df.sex == 'Female')])) # 11: Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_without_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_without_label))
output_file.close()
print(IPS_example_weights_without_label)
IPS_example_weights_with_label = {
0: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 000: Negative White Male
1: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 001: Negative White Female
2: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 010: Negative Black Male
3: (len(train_df))/(len(train_df[(train_df[target_variable] != target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 011: Negative Black Female
4: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex != 'Female')])), # 100: Positive White Male
5: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race != 'Black') & (train_df.sex == 'Female')])), # 101: Positive White Female
6: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex != 'Female')])), # 110: Positive Black Male
7: (len(train_df))/(len(train_df[(train_df[target_variable] == target_value) & (train_df.race == 'Black') & (train_df.sex == 'Female')])), # 111: Positive Black Female
}
output_file_path = os.path.join(dataset_base_dir,'IPS_example_weights_with_label.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(IPS_example_weights_with_label))
output_file.close()
print(IPS_example_weights_with_label)
Explanation: Computing Inverse propensity weights for each subgroup, and writing them to the directory.
IPS_example_weights_with_label.json: json dictionary of the format
{subgroup_id : inverse_propensity_score,...}. Used by IPS_reweighting_model approach.
End of explanation
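The weights computed above follow the usual inverse-propensity recipe: each subgroup's weight is the total count divided by that subgroup's count, so after reweighting every subgroup carries the same total mass as the full sample. A toy standalone check (made-up counts, not the COMPAS numbers):

```python
from collections import Counter

subgroups = ['00'] * 50 + ['01'] * 30 + ['10'] * 15 + ['11'] * 5
counts = Counter(subgroups)
total = len(subgroups)

# weight per subgroup = N_total / N_subgroup
ips = {g: total / n for g, n in counts.items()}
print(ips)  # e.g. the rarest subgroup '11' gets the largest weight, 20.0

# after reweighting, every subgroup's total mass equals the full sample size
for g, n in counts.items():
    assert abs(n * ips[g] - total) < 1e-9
print("reweighted masses all equal", total)
```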
cat_cols = train_df.select_dtypes(include='category').columns
vocab_dict = {}
for col in cat_cols:
vocab_dict[col] = list(set(train_df[col].cat.categories))
output_file_path = os.path.join(dataset_base_dir,'vocabulary.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(vocab_dict))
output_file.close()
print(vocab_dict)
Explanation: Construct vocabulary.json, and write to directory.
vocabulary.json: json dictionary of the format {feature_name: [feature_vocabulary]}, containing vocabulary for categorical features.
End of explanation
temp_dict = train_df.describe().to_dict()
mean_std_dict = {}
for key, value in temp_dict.items():
mean_std_dict[key] = [value['mean'],value['std']]
output_file_path = os.path.join(dataset_base_dir,'mean_std.json')
with open(output_file_path, mode="w") as output_file:
output_file.write(json.dumps(mean_std_dict))
output_file.close()
print(mean_std_dict)
Explanation: Construct mean_std.json, and write to directory
mean_std.json: json dictionary of the format feature_name: [mean, std]},
containing mean and std for numerical features.
End of explanation |
10,990 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have two arrays A (len of 3.8million) and B (len of 20k). For the minimal example, lets take this case: | Problem:
import numpy as np
A = np.array([1,1,2,3,3,3,4,5,6,7,8,8])
B = np.array([1,2,8])
C = A[~np.in1d(A,B)] |
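np.in1d returns a boolean mask marking which elements of A appear anywhere in B, so negating it keeps exactly the elements absent from B; for large inputs like the 3.8M/20k case, the assume_unique flag and the newer np.isin spelling are the usual variants. A quick check of the mask logic on the toy arrays:

```python
import numpy as np

A = np.array([1, 1, 2, 3, 3, 3, 4, 5, 6, 7, 8, 8])
B = np.array([1, 2, 8])

mask = np.in1d(A, B)  # True where A's element occurs somewhere in B
C = A[~mask]          # keep elements of A not present in B
print(C)              # [3 3 3 4 5 6 7]
print(np.array_equal(C, A[~np.isin(A, B)]))  # True -- np.isin is the modern spelling
```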
10,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 3 - Sampling with AdaptiveMD
Imports
Step1: Let's open our tutorial project by its name. If you completed the first examples this should all work out of the box.
Step2: Open all connections to the MongoDB and Session so we can get started.
An interesting thing to note here is, that since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in Tutorial 1 or 2, then come back here and check on the change of the project contents.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
Step3: Now restore our old ways to generate tasks by loading the previously used generators.
Step4: Run simulations
You are free to conduct your simulations from a notebook but normally you will use a script. The main point about adaptivity is to make decision about tasks along the way.
To make this happen we need Conditions which are functions that evaluate to True or False and once they are True they cannot change anymore back to False. Like a one time on switch.
These are used to describe the happening of an event. We will now deal with some types of events.
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, we write a function that should be executed. Inside you will create tasks and submit them.
If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the function you return will return True when called. An example event function here with different (some equivalent) conditions described
Step5: and add the event to the project (these cannot be stored!)
This logical layer is implemented by the class adaptivemd.Event
and is used to run the strategy function. Here we can see why
this function is a generator, and needs to yield functions that
return a Boolean value. The blocks between yield statements are
used to generate the workflow as seen above, and the yielded functions
should be used to inspect the state of the workflow.
```python
done = False
proceed = True
while not done:
Step6: What is missing now? The adding of the event triggered the first part of the code. But to recheck if we should continue needs to be done manually.
Still that is no problem, we can do that easily and watch what is happening
Let's see how our project is growing. TODO
Step7: One way to wait for an event is to use a reference to it, returned by the project.add_event method. The event objects are a False condition when completed, and True before this.
Step8: Let's do another round with more loops. This time we will wait using the project's events_done condition. In the prior example, the project is manually triggered until the event is complete. By using wait_until method, the project will trigger itself.
Step9: And some analysis (might have better functions for that)
Step10: Event
And do this with multiple events in parallel.
Step11: See, that we again reused our strategy. | Python Code:
from adaptivemd import Project
Explanation: Tutorial 3 - Sampling with AdaptiveMD
Imports
End of explanation
project = Project('tutorial')
Explanation: Let's open our tutorial project by its name. If you completed the first examples this should all work out of the box.
End of explanation
print(project.files)
print(project.generators)
print(project.models)
Explanation: Open all connections to the MongoDB and Session so we can get started.
An interesting thing to note here is, that since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in Tutorial 1 or 2, then come back here and check on the change of the project contents.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
def strategy(loops=10, trajs_per_loop=4, length=100):
for loop in range(loops):
# submit some trajectory tasks
trajectories = project.new_ml_trajectory(engine, length, trajs_per_loop)
tasks = list(map(engine.run, trajectories))  # list() so the tasks can be iterated more than once
project.queue(tasks)
# continue if ALL of the tasks are done (can be failed)
#yield [task.is_done for task in tasks]
#yield lambda: all([task.is_done() for task in tasks])
yield lambda: all(map(lambda task: task.is_done(), tasks))
# how about ANY of the tasks
# --> some won't be included in model
#yield lambda: any(map(lambda task: task.is_done(), tasks))
# LESS smart since tasks might fail, so we'd get the progress
# with task.is_done but not traj.exists
#yield lambda: all(map(lambda tj: tj.exists, trajectories))
# submit an analysis
task = modeller.execute(list(project.trajectories))
project.queue(task)
# when it is done do next loop
yield task.is_done
Explanation: Run simulations
You are free to conduct your simulations from a notebook but normally you will use a script. The main point about adaptivity is to make decision about tasks along the way.
To make this happen we need Conditions which are functions that evaluate to True or False and once they are True they cannot change anymore back to False. Like a one time on switch.
These are used to describe the happening of an event. We will now deal with some types of events.
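To make the "one-time switch" behavior concrete, here is a small standalone sketch (plain Python, not part of the adaptivemd API) of a condition that latches permanently once its predicate first returns True:

```python
class OneTimeCondition:
    """A callable that stays True forever once its predicate fires."""
    def __init__(self, predicate):
        self._predicate = predicate
        self._fired = False

    def __call__(self):
        if not self._fired and self._predicate():
            self._fired = True
        return self._fired

state = {'checks': 0}
def three_checks_done():
    state['checks'] += 1
    return state['checks'] >= 3   # pretend the work finishes on the third poll

cond = OneTimeCondition(three_checks_done)
results = [cond() for _ in range(5)]
print(results)  # [False, False, True, True, True]
```

Note that after the switch flips, the predicate is never consulted again — exactly the guarantee the project relies on when polling conditions.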
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, we write a function that should be executed. Inside you will create tasks and submit them.
If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the function you return will return True when called. An example event function here with different (some equivalent) conditions described:
End of explanation
event = project.add_event(strategy(loops=2))
Explanation: and add the event to the project (these cannot be stored!)
This logical layer is implemented by the class adaptivemd.Event
and is used to run the strategy function. Here we can see why
this function is a generator, and needs to yield functions that
return a Boolean value. The blocks between yield statements are
used to generate the workflow as seen above, and the yielded functions
should be used to inspect the state of the workflow.
```python
gen = strategy()  # create the generator once, outside the loop
done = False
proceed = True
while not done:
try:
if proceed:
# _next_func is a function reference yielded by the generator
_next_func = next(gen)
proceed = False
proceed = False
# val is Boolean, returned by _next_func
val = _next_func()
if val is True:
proceed = True
time.sleep(5)
except StopIteration:
done = True
```
When the strategy has been exhausted, the workflow is done.
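A runnable toy version of this driver loop (simplified: no project, no real tasks, names invented for illustration) shows the generator/condition handshake end to end:

```python
import time

def drive(strategy_gen, poll=0.0):
    """Minimal stand-in for the Event machinery sketched above."""
    done = False
    proceed = True
    steps = 0
    _next_func = None
    while not done:
        try:
            if proceed:
                _next_func = next(strategy_gen)   # fetch the next condition
                proceed = False
            if _next_func():                      # condition satisfied?
                proceed = True
            steps += 1
            time.sleep(poll)
        except StopIteration:
            done = True
    return steps

log = []
def toy_strategy():
    log.append('submit batch 1')
    checks = {'n': 0}
    def batch_done():
        checks['n'] += 1
        return checks['n'] >= 3        # "tasks finish" on the third poll
    yield batch_done
    log.append('submit batch 2')
    yield lambda: True

steps = drive(toy_strategy())
print(log)    # ['submit batch 1', 'submit batch 2']
```

The code between yields runs only when the previous condition has evaluated to True, which is how the workflow unfolds adaptively.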
End of explanation
import sys, time
from IPython.display import clear_output
Explanation: What is missing now? The adding of the event triggered the first part of the code. But to recheck if we should continue needs to be done manually.
Still that is no problem, we can do that easily and watch what is happening
Let's see how our project is growing. TODO: Add threading.Timer to auto trigger.
End of explanation
try:
while project._events:
print('# of files %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories)))
print('# of models %8d : %s' % (len(project.models), '#' * len(project.models)))
sys.stdout.flush()
project.trigger()
time.sleep(3)
clear_output(wait=True)
except KeyboardInterrupt:
pass
Explanation: One way to wait for an event is to use a reference to it, returned by the project.add_event method. The event objects are a False condition when completed, and True before this.
End of explanation
project.add_event(strategy(loops=2))
project.wait_until(project.events_done)
Explanation: Let's do another round with more loops. This time we will wait using the project's events_done condition. In the prior example, the project is manually triggered until the event is complete. By using wait_until method, the project will trigger itself.
End of explanation
from adaptivemd import File
# find, which frames from which trajectories have been chosen
trajs = project.trajectories
q = {}
ins = {}
for f in trajs:
source = f.frame if isinstance(f.frame, File) else f.frame.trajectory
ind = 0 if isinstance(f.frame, File) else f.frame.index
ins[source] = ins.get(source, []) + [ind]
for a, b in ins.items():
print(a.short, ':', b)
Explanation: And some analysis (might have better functions for that)
End of explanation
def strategy2():
for loop in range(10):
num = len(project.trajectories)
task = modeller.execute(list(project.trajectories))
print(task)
project.queue(task)
yield task.is_done
# continue only when there are at least 2 more trajectories
print("Requiring %d trajectories for strategy2 to complete" % num)
yield project.on_ntraj(num + 2)
project.add_event(strategy(loops=10, trajs_per_loop=2))
project.add_event(strategy2())
Explanation: Event
And do this with multiple events in parallel.
End of explanation
ev=project._events[0]
project._events[0].trigger()
project._events[0]._finish_conditions[0]()
project.wait_until(project.events_done)
# It's hard to catch this because the _events list
# clears when an event's finish_conditions evaluate
# to True
project._events[0]._finish_conditions[0]()
project.workers.all.execute('shutdown')
project.close()
Explanation: See, that we again reused our strategy.
End of explanation |
10,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost by using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found
Our work will be organized in the following order
Step1: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
Step2: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
Step3: Before processing further, we define a function that will help us create XGBoost models and perform cross-validation.
Step4: General Approach for Parameter Tuning
We are going to preform the steps as follows
Step5: Step 2
Step6: Step 3
Step7: Step 4
Step8: Step 5
Step9: Step 6
Step10: Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
Step11: Use final model to predict the given test data set | Python Code:
%matplotlib inline
import pandas as pd
from pandas.tools.plotting import scatter_matrix
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
import matplotlib.colors as colors
import xgboost as xgb
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score
from classification_utilities import display_cm, display_adj_cm
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import validation_curve
from sklearn.datasets import load_svmlight_files
from sklearn.model_selection import StratifiedKFold
from sklearn.datasets import make_classification
from xgboost.sklearn import XGBClassifier
from scipy.sparse import vstack
seed = 123
np.random.seed(seed)
filename = './facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data.head()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data.info()
training_data.describe()
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00','#1B4F72',
'#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS','WS', 'D','PS', 'BS']
facies_counts = training_data['Facies'].value_counts().sort_index()
facies_counts.index = facies_labels
facies_counts.plot(kind='bar',color=facies_colors,title='Distribution of Training Data by Facies')
sns.heatmap(training_data.corr(), vmax=1.0, square=True)
Explanation: In this notebook, we mainly utilize extreme gradient boost to improve the prediction model originally proposed in the TLE 2016 November machine learning tutorial. Extreme gradient boost can be viewed as an enhanced version of gradient boost by using a more regularized model formalization to control over-fitting, and XGB usually performs better. Applications of XGB can be found in many Kaggle competitions. Some recommended tutorials can be found:
Our work will be organized in the following order:
•Background
•Exploratory Data Analysis
•Data Preparation and Model Selection
•Final Results
Background
The dataset we will use comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007).
The dataset we will use is log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a classifier to predict facies types.
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.
The seven predictor variables are:
•Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE.
•Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)
The nine discrete facies (classes of rocks) are:
1.Nonmarine sandstone
2.Nonmarine coarse siltstone
3.Nonmarine fine siltstone
4.Marine siltstone and shale
5.Mudstone (limestone)
6.Wackestone (limestone)
7.Dolomite
8.Packstone-grainstone (limestone)
9.Phylloid-algal bafflestone (limestone)
These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.
Facies/ Label/ Adjacent Facies
1 SS 2
2 CSiS 1,3
3 FSiS 2
4 SiSh 5
5 MS 4,6
6 WS 5,7
7 D 6,8
8 PS 6,7,9
9 BS 7,8
Exploratory Data Analysis
After the background introduction, we start by importing the pandas library for some basic data analysis and manipulation. The matplotlib and seaborn libraries are imported for data visualization.
End of explanation
X_train = training_data.drop(['Facies', 'Well Name','Formation','Depth'], axis = 1 )
Y_train = training_data['Facies' ] - 1
dtrain = xgb.DMatrix(X_train, Y_train)
train = X_train.copy()
train['Facies']=Y_train
train.head()
Explanation: Data Preparation and Model Selection
Now we are ready to test the XGB approach, and will use confusion matrix and f1_score, which were imported, as metric for classification, as well as GridSearchCV, which is an excellent tool for parameter optimization.
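As a quick refresher on these two metrics, here is a toy example (labels invented purely for illustration):

```python
from sklearn.metrics import confusion_matrix, f1_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

cm = confusion_matrix(y_true, y_pred)
print(cm)                                            # rows = true class, cols = predicted class
print(f1_score(y_true, y_pred, average='weighted'))  # ~0.656
```

The weighted average combines the per-class F1 scores in proportion to class support, which matters here since the facies classes are imbalanced.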
End of explanation
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
target='Facies'
Explanation: The accuracy function and accuracy_adjacent function are defined in the following to quantify the prediction correctness.
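For instance, the diagonal-sum computation used by `accuracy` looks like this on a tiny 2×2 confusion matrix (numbers invented):

```python
import numpy as np

conf = np.array([[8, 2],
                 [1, 9]])
total_correct = conf[0][0] + conf[1][1]   # 17 on-diagonal (correct) predictions
acc = total_correct / conf.sum()          # 17 / 20
print(acc)  # 0.85
```

`accuracy_adjacent` follows the same idea but also counts off-diagonal cells belonging to geologically adjacent facies as correct.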
End of explanation
def modelfit(alg, dtrain, features, useTrainCV=True,
cv_fold=10,early_stopping_rounds = 50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgb_param['num_class']=9
xgtrain = xgb.DMatrix(train[features].values,label = train[target].values)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=
alg.get_params()['n_estimators'],nfold=cv_fold,
metrics='merror',early_stopping_rounds = early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(dtrain[features], dtrain[target],eval_metric='merror')
#Predict training set:
dtrain_prediction = alg.predict(dtrain[features])
dtrain_predprob = alg.predict_proba(dtrain[features])[:,1]
#Print model report
print ("\nModel Report")
print ("Accuracy : %.4g" % accuracy_score(dtrain[target],
dtrain_prediction))
print ("F1 score (Train) : %f" % f1_score(dtrain[target],
dtrain_prediction,average='weighted'))
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp.plot(kind='bar',title='Feature Importances')
plt.ylabel('Feature Importance Score')
features =[x for x in X_train.columns]
features
Explanation: Before processing further, we define a function that will help us create XGBoost models and perform cross-validation.
End of explanation
xgb1 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=5,
min_child_weight=1,
gamma = 0,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb1, train, features)
xgb1
Explanation: General Approach for Parameter Tuning
We are going to perform the steps as follows:
1.Choose a relatively high learning rate, e.g., 0.1. Usually somewhere between 0.05 and 0.3 should work for different problems.
2.Determine the optimum number of trees for this learning rate. XGBoost has a very useful function called "cv" which performs cross-validation at each boosting iteration and thus returns the optimum number of trees required.
3.Tune tree-based parameters(max_depth, min_child_weight, gamma, subsample, colsample_bytree) for decided learning rate and number of trees.
4.Tune regularization parameters(lambda, alpha) for xgboost which can help reduce model complexity and enhance performance.
5.Lower the learning rate and decide the optimal parameters.
Step 1: Fix learning rate and number of estimators for tuning tree-based parameters
In order to decide on boosting parameters, we need to set some initial values of other parameters. Let's take the following values:
1.max_depth = 5
2.min_child_weight = 1
3.gamma = 0
4.subsample, colsample_bytree = 0.8 : This is a commonly used start value.
5.scale_pos_weight = 1
Please note that all the above are just initial estimates and will be tuned later. Let's take the default learning rate of 0.1 here and check the optimum number of trees using the cv function of xgboost. The function defined above will do it for us.
End of explanation
param_test1={
'max_depth':range(3,10,2),
'min_child_weight':range(1,6,2)
}
gs1 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5,
min_child_weight=1, n_estimators=290, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test1,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs1.fit(train[features],train[target])
gs1.grid_scores_, gs1.best_params_,gs1.best_score_
param_test2={
'max_depth':[8,9,10],
'min_child_weight':[1,2]
}
gs2 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=5,
min_child_weight=1, n_estimators=290, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test2,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs2.fit(train[features],train[target])
gs2.grid_scores_, gs2.best_params_,gs2.best_score_
gs2.best_estimator_
Explanation: Step 2: Tune max_depth and min_child_weight
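The two grids above follow a common coarse-then-fine pattern: scan a wide range with a large step, then re-search around the winner with step 1. A minimal sketch of that pattern (toy score function standing in for cross-validated accuracy):

```python
def coarse_to_fine(score, coarse_values, step=1):
    """Pick the best value on a coarse grid, then refine around it."""
    best = max(coarse_values, key=score)
    fine = [v for v in (best - step, best, best + step) if v > 0]
    return max(fine, key=score)

score = lambda depth: -(depth - 9) ** 2       # toy objective peaked at depth 9
best_depth = coarse_to_fine(score, list(range(3, 10, 2)))
print(best_depth)  # 9
```

This is exactly what `param_test1` (range(3,10,2)) followed by `param_test2` ([8,9,10]) does via GridSearchCV, at a fraction of the cost of one fine-grained search over the whole range.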
End of explanation
param_test3={
'gamma':[i/10.0 for i in range(0,5)]
}
gs3 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=290, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test3,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs3.fit(train[features],train[target])
gs3.grid_scores_, gs3.best_params_,gs3.best_score_
xgb2 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.8,
colsample_bytree=0.8,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb2,train,features)
xgb2
Explanation: Step 3: Tune gamma
End of explanation
param_test4={
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
gs4 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4.fit(train[features],train[target])
gs4.grid_scores_, gs4.best_params_,gs4.best_score_
param_test4b={
'subsample':[i/10.0 for i in range(5,7)],
}
gs4b = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.8),param_grid=param_test4b,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs4b.fit(train[features],train[target])
gs4b.grid_scores_, gs4b.best_params_,gs4b.best_score_
Explanation: Step 4: Tune subsample and colsample_bytree
End of explanation
param_test5={
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
gs5 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test5,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs5.fit(train[features],train[target])
gs5.grid_scores_, gs5.best_params_,gs5.best_score_
param_test6={
'reg_alpha':[0, 0.001, 0.005, 0.01, 0.05]
}
gs6 = GridSearchCV(XGBClassifier(colsample_bylevel=1, colsample_bytree=0.8,
gamma=0.2, learning_rate=0.1, max_delta_step=0, max_depth=9,
min_child_weight=1, n_estimators=236, nthread=4,
objective='multi:softprob', reg_alpha=0, reg_lambda=1,
scale_pos_weight=1, seed=123,subsample=0.6),param_grid=param_test6,
scoring='accuracy', n_jobs=4,iid=False, cv=5)
gs6.fit(train[features],train[target])
gs6.grid_scores_, gs6.best_params_,gs6.best_score_
xgb3 = XGBClassifier(
learning_rate = 0.1,
n_estimators=1000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb3,train,features)
xgb3
Explanation: Step 5: Tuning Regularization Parameters
End of explanation
xgb4 = XGBClassifier(
learning_rate = 0.01,
n_estimators=5000,
max_depth=9,
min_child_weight=1,
gamma = 0.2,
subsample=0.6,
colsample_bytree=0.8,
reg_alpha=0.05,
objective='multi:softmax',
nthread =4,
scale_pos_weight=1,
seed = seed,
)
modelfit(xgb4,train,features)
xgb4
Explanation: Step 6: Reducing Learning Rate
End of explanation
# Load data
filename = './facies_vectors.csv'
data = pd.read_csv(filename)
# Change to category data type
data['Well Name'] = data['Well Name'].astype('category')
data['Formation'] = data['Formation'].astype('category')
# Leave one well out for cross validation
well_names = data['Well Name'].unique()
f1=[]
for i in range(len(well_names)):
# Split data for training and testing
X_train = data.drop(['Facies', 'Formation','Depth'], axis = 1 )
Y_train = data['Facies' ] - 1
train_X = X_train[X_train['Well Name'] != well_names[i] ]
train_Y = Y_train[X_train['Well Name'] != well_names[i] ]
test_X = X_train[X_train['Well Name'] == well_names[i] ]
test_Y = Y_train[X_train['Well Name'] == well_names[i] ]
train_X = train_X.drop(['Well Name'], axis = 1 )
test_X = test_X.drop(['Well Name'], axis = 1 )
# Final recommended model based on the extensive parameters search
model_final = XGBClassifier(base_score=0.5, colsample_bylevel=1,
colsample_bytree=0.8, gamma=0.2,
learning_rate=0.01, max_delta_step=0, max_depth=9,
min_child_weight=1, missing=None, n_estimators=432, nthread=4,
objective='multi:softmax', reg_alpha=0.05, reg_lambda=1,
scale_pos_weight=1, seed=123, silent=1,
subsample=0.6)
# Train the model based on training data
model_final.fit( train_X , train_Y , eval_metric = 'merror' )
# Predict on the test set
predictions = model_final.predict(test_X)
# Print report
print ("\n------------------------------------------------------")
print ("Validation on the leaving out well " + well_names[i])
conf = confusion_matrix( test_Y, predictions, labels = np.arange(9) )
print ("\nModel Report")
print ("-Accuracy: %.6f" % ( accuracy(conf) ))
print ("-Adjacent Accuracy: %.6f" % ( accuracy_adjacent(conf, adjacent_facies) ))
print ("-F1 Score: %.6f" % ( f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ) ))
f1.append(f1_score ( test_Y , predictions , labels = np.arange(9), average = 'weighted' ))
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
print ("\nConfusion Matrix Results")
from classification_utilities import display_cm, display_adj_cm
display_cm(conf, facies_labels,display_metrics=True, hide_zeros=True)
print ("\n------------------------------------------------------")
print ("Final Results")
print ("-Average F1 Score: %6f" % (sum(f1)/(1.0*len(f1))))
Explanation: Next we use our tuned final model to do cross validation on the training data set. One of the wells will be used as test data and the rest will be the training data. Each iteration, a different well is chosen.
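The manual split in the loop above is a leave-one-group-out scheme, with wells as the groups. Stripped of the modeling, the fold construction looks like this (toy well labels, invented for illustration):

```python
import numpy as np

wells = np.array(['A', 'A', 'B', 'B', 'C', 'C'])
folds = []
for w in np.unique(wells):
    test_idx = np.where(wells == w)[0]    # one held-out well
    train_idx = np.where(wells != w)[0]   # all remaining wells
    folds.append((train_idx, test_idx))

print(len(folds))  # 3 folds, one per well
```

Splitting by well rather than by row prevents depth-adjacent (and therefore highly correlated) samples from the same well appearing in both train and test sets, which would inflate the scores.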
End of explanation
# Load test data
test_data = pd.read_csv('validation_data_nofacies.csv')
test_data['Well Name'] = test_data['Well Name'].astype('category')
X_test = test_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
# Predict facies of unclassified data
Y_predicted = model_final.predict(X_test)
test_data['Facies'] = Y_predicted + 1
# Store the prediction
test_data.to_csv('Prediction2.csv')
test_data
Explanation: Use final model to predict the given test data set
End of explanation |
10,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Chapter 3</font>
Download
Step1: While
Step2: Pass, Break, Continue
Step3: While and For together
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 3</font>
Download: http://github.com/dsacademybr
End of explanation
# Using a while loop to print the values 0 through 9
counter = 0
while counter < 10:
print(counter)
counter = counter + 1
# You can also use an else clause to wrap up a while loop
x = 0
while x < 10:
print('The value of x in this iteration is:', x)
print(' x is still less than 10, adding 1 to x')
x += 1
else:
print('Loop finished!')
Explanation: While
End of explanation
counter = 0
while counter < 100:
if counter == 4:
break
else:
pass
print(counter)
counter = counter + 1
for verificador in "Python":
if verificador == "h":
continue
print(verificador)
Explanation: Pass, Break, Continue
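One detail worth making explicit: pass is a do-nothing placeholder (execution falls through to the next statement), while continue jumps straight to the next iteration. A tiny side-by-side:

```python
kept = []
for ch in "Python":
    if ch == "h":
        continue      # skip the rest of this iteration: 'h' is dropped
    if ch == "o":
        pass          # placeholder: no effect, 'o' is still appended
    kept.append(ch)

print(''.join(kept))  # Pyton
```

break, by contrast, would abandon the loop entirely at the first match.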
End of explanation
for i in range(2,30):
j = 2
counter = 0
while j < i:
if i % j == 0:
counter = 1
j = j + 1
else:
j = j + 1
if counter == 0:
print(str(i) + " is a prime number")
counter = 0
else:
counter = 0
Explanation: While and For together
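The counter flag above works, but Python's for/else gives a tidier version of the same prime check: the else branch runs only when the inner loop completes without hitting break.

```python
primes = []
for i in range(2, 30):
    for j in range(2, i):
        if i % j == 0:
            break          # found a divisor: i is composite
    else:
        primes.append(i)   # no divisor found: i is prime

print(primes)  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This removes the need for the counter variable and the reset branches entirely.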
End of explanation |
10,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
n=np.random.standard_normal?
n=np.random.standard_normal
n=np.random.randn
n=np.random.randn
def random_line(m, b, sigma, size=10):
"""Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the line with `size` points.
"""
x = np.linspace(-1.0, 1.0, size)
if sigma == 0.0:
y = m*x + b
else:
# one noise sample per point; np.random.normal's scale is sigma, not sigma**2
y = m*x + b + np.random.normal(0.0, sigma, size)
return x, y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
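One pitfall worth noting: np.random.normal takes the standard deviation (not the variance) as its scale argument, and needs a size to draw one independent sample per point. A quick sanity check:

```python
import numpy as np

np.random.seed(0)
noise = np.random.normal(0.0, 2.0, size=100000)  # scale is sigma, not sigma**2
print(round(noise.std(), 1))  # ~2.0
```

Passing sigma**2 as the scale would make the measured standard deviation come out as sigma squared, which is what the np.std assertion above is designed to catch.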
End of explanation
def ticks_out(ax):
"""Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
"""Plot a random line with slope m, intercept b and size points."""
x, y = random_line(m, b, sigma, size)
plt.scatter(x, y, color=color)
plt.xlim(-1.1, 1.1)
plt.ylim(-10.0, 10.0)
plt.vlines(0, -10, 10)
plt.hlines(0, -1, 1)
plt.box(False)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_random_line, m=(-10.0,10.0,0.1), b=(-5.0,5.0,0.1), sigma=(0.0,5.0,0.01), size=(10,100,10), color=('red','green','blue'))
#### assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
10,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing features with MusicExtractor
MusicExtractor is a multi-purpose algorithm for music audio feature extraction from files (see the complete list of computed features here). It combines many algorithms and provides the same functionality as the command-line Music Extractor, which is a wrapper of this algorithm.
As an input, the algorithm requires a filename to analyze. All the computed features are stored in two output Pools, for raw frames data and the aggregated statistics across frames. You can access any particular feature in these pools for your needs or store them in files for future use.
You can see one of the previous versions of MusicExtractor in use in the AcousticBrainz database, where it is used to compute the low-level data for each track submission.
Step1: We can then access particular values in the pools
Step2: We can access frame values for audio features computed on frames
Step3: Metadata
The pools include the input audio file metadata in addition to the audio analysis results. This is useful for tracking down details about the analyzed files. MusicExtractor uses the MetadataReader algorithm internally for ID3 tags and similar track metadata. Those are stored inside metadata.tags if they are present.
Step4: Storing results to files
In many situations, we may want to analyze multiple tracks and store results for further processing. We can use the YamlOutput algorithm to store the pools with the analysis results from MusicExtractor to either JSON or YAML formats. | Python Code:
audiofile = '../../../test/audio/recorded/dubstep.mp3'
# This is what the audio we want to process sounds like.
import IPython
IPython.display.Audio(audiofile)
import essentia
import essentia.standard as es
# Compute all features.
# Aggregate 'mean' and 'stdev' statistics for all low-level, rhythm, and tonal frame features.
features, features_frames = es.MusicExtractor(lowlevelStats=['mean', 'stdev'],
rhythmStats=['mean', 'stdev'],
tonalStats=['mean', 'stdev'])(audiofile)
# See all feature names in the pool in a sorted order
print(sorted(features.descriptorNames()))
Explanation: Computing features with MusicExtractor
MusicExtractor is a multi-purpose algorithm for music audio feature extraction from files (see the complete list of computed features here). It combines many algorithms and provides the same functionality as the command-line Music Extractor, which is a wrapper of this algorithm.
As an input, the algorithm requires a filename to analyze. All the computed features are stored in two output Pools, for raw frames data and the aggregated statistics across frames. You can access any particular feature in these pools for your needs or store them in files for future use.
You can see one of the previous versions of MusicExtractor in use in the AcousticBrainz database, where it is used to compute the low-level data for each track submission.
End of explanation
print("Filename:", features['metadata.tags.file_name'])
print("-"*80)
print("Replay gain:", features['metadata.audio_properties.replay_gain'])
print("EBU128 integrated loudness:", features['lowlevel.loudness_ebu128.integrated'])
print("EBU128 loudness range:", features['lowlevel.loudness_ebu128.loudness_range'])
print("-"*80)
print("MFCC mean:", features['lowlevel.mfcc.mean'])
print("-"*80)
print("BPM:", features['rhythm.bpm'])
print("Beat positions (sec.)", features['rhythm.beats_position'])
print("-"*80)
print("Key/scale estimation (using a profile specifically suited for electronic music):",
features['tonal.key_edma.key'], features['tonal.key_edma.scale'])
Explanation: We can then access particular values in the pools:
End of explanation
from pylab import plot, show, figure, imshow
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.colors as colors
plt.rcParams['figure.figsize'] = (15, 6) # set plot sizes to something larger than default
imshow(features_frames['lowlevel.melbands128'].T,
aspect='auto', origin='lower', interpolation='none', norm=colors.LogNorm())
plt.title("Mel-spectrogram (128 bins)")
show()
imshow(features_frames['tonal.hpcp'].T,
aspect='auto', origin='lower', interpolation='none', norm=colors.LogNorm())
plt.title("HPCP (chroma, 36 bins)")
show()
Explanation: We can access frame values for audio features computed on frames:
End of explanation
print("Essentia version:", features['metadata.version.essentia'])
print("Essentia version git SHA:", features['metadata.version.essentia_git_sha'])
print("Essentia MusicExtractor version:", features['metadata.version.extractor'])
print("Filename:", features['metadata.tags.file_name'])
print("MD5 hash for the encoded audio:", features['metadata.audio_properties.md5_encoded'])
print("Audio bit rate:", features['metadata.audio_properties.bit_rate'])
print("Audio codec:", features['metadata.audio_properties.codec'])
print("Duration (seconds):", features['metadata.audio_properties.length'])
print("Number of channels (mono or stereo):", features['metadata.audio_properties.number_channels'])
Explanation: Metadata
The pools include the input audio file metadata in addition to the audio analysis results. This is useful for tracking down details about the analyzed files. MusicExtractor uses the MetadataReader algorithm internally for ID3 tags and similar track metadata. Those are stored inside metadata.tags if they are present.
End of explanation
# Write the aggregated features into a temporary directory.
from tempfile import TemporaryDirectory
temp_dir = TemporaryDirectory()
results_file = temp_dir.name + '/results.json'
es.YamlOutput(filename=results_file, format="json")(features)
# Preview the resulting file.
!cat $results_file
Explanation: Storing results to files
In many situations, we may want to analyze multiple tracks and store results for further processing. We can use the YamlOutput algorithm to store the pools with the analysis results from MusicExtractor to either JSON or YAML formats.
End of explanation |
10,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A quick example of the code for generating the mask for the North Atlantic (to be further used with fpost)
%matplotlib notebook
%load_ext autoreload
%autoreload 2
Step1: selecting the NA mask using multiple boxes | Python Code:
import sys
sys.path.append("../")
import pyfesom as pf
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
%matplotlib notebook
from matplotlib import cm
from netCDF4 import Dataset
meshID='fArc'
# read the mesh
meshpath ='/work/ab0995/a270109/fArc_2400/'
#mesh = pf.load_mesh(meshpath, abg = [50, 15, -90], get3d = False, usepickle=False)
mesh = pf.load_mesh(meshpath, abg = [0., 0., 0.], get3d = False, usepickle=False)
Explanation: A quick example of the code for generating the mask for the North Atlantic (to be further used with fpost)
%matplotlib notebook
%load_ext autoreload
%autoreload 2
End of explanation
plt.plot(mesh.x2, mesh.y2, '.k')
ind1= (np.array(mesh.x2>-100) & np.array(mesh.x2<50)) | np.array(mesh.y2>65)
ind2=~(np.array(mesh.x2>-110) & np.array(mesh.x2<-70) & np.array(mesh.y2>-50) & np.array(mesh.y2<8))
ind3=~(np.array(mesh.x2>-80.41) & np.array(mesh.x2<-78) & np.array(mesh.y2>7) & np.array(mesh.y2<9.02))
ind4=~(np.array(mesh.x2>-110) & np.array(mesh.x2<-81.5) & np.array(mesh.y2>7.9) & np.array(mesh.y2<8.6))
ind5=~(np.array(mesh.x2>-100.75) & np.array(mesh.x2<-83.5) & np.array(mesh.y2>8.5) & np.array(mesh.y2<10))
ind6=~(np.array(mesh.x2>-110) & np.array(mesh.x2<-85) & np.array(mesh.y2>9) & np.array(mesh.y2<15))
ind7=~(np.array(mesh.x2>-110) & np.array(mesh.x2<-91) & np.array(mesh.y2>14.5) & np.array(mesh.y2<17))
ind8=~(np.array(mesh.x2>22.55) & np.array(mesh.x2<55) & np.array(mesh.y2>-35) & np.array(mesh.y2<29.9))
ind9=~(np.array(mesh.x2>47) & np.array(mesh.x2<51) & np.array(mesh.y2>29) & np.array(mesh.y2<31))
ind10=np.array(mesh.y2)>-32
ind=ind1&ind2&ind3&ind4&ind5&ind6&ind7&ind8&ind9&ind10
# CORE2 mesh patch
ind11=~(np.array(mesh.x2>-75) & np.array(mesh.x2<-65) & np.array(mesh.y2>-25) & np.array(mesh.y2<-15))
ind12=~(np.array(mesh.x2>-100.5) & np.array(mesh.x2<-99.5) & np.array(mesh.y2>16) & np.array(mesh.y2<17.5))
ind13=~(np.array(mesh.x2>-83.8) & np.array(mesh.x2<-83.) & np.array(mesh.y2>8.5) & np.array(mesh.y2<9.1))
ind14=~(np.array(mesh.x2>-81.5) & np.array(mesh.x2<-80.) & np.array(mesh.y2>8.0) & np.array(mesh.y2<8.5))
ind15=~(np.array(mesh.x2> 179.) & np.array(mesh.x2<180.4) & np.array(mesh.y2>64.) & np.array(mesh.y2<65.5))
ind16= (np.array(mesh.x2>-80.4) & np.array(mesh.x2<-80.) & np.array(mesh.y2>8.8) & np.array(mesh.y2<9.1))
ind=(ind&ind11&ind12&ind13&ind14&ind15)|(ind16)
plt.plot(mesh.x2[ind], mesh.y2[ind], '.r')
Explanation: selecting the NA mask using multiple boxes
End of explanation |
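The chained `&`/`|` comparisons above are the standard NumPy idiom for box selection. A minimal sketch of that idiom as a reusable helper (the `in_box` name and the toy coordinates below are made up for illustration):

```python
import numpy as np

def in_box(lon, lat, lon_min, lon_max, lat_min, lat_max):
    """Element-wise mask that is True where (lon, lat) lies inside the open box."""
    return (lon > lon_min) & (lon < lon_max) & (lat > lat_min) & (lat < lat_max)

# Toy coordinates: roughly mid-Atlantic, central Europe, Indian Ocean
lon = np.array([-40.0, 10.0, 60.0])
lat = np.array([30.0, 50.0, -40.0])

mask = in_box(lon, lat, -100.0, 50.0, -35.0, 65.0)
print(mask.tolist())  # [True, True, False]
```

Several such box masks can then be combined with `&`, `|` and `~`, exactly as in the cell above.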
10,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear PCA
Step1: Non-linear PCA
http | Python Code:
#xyz=records[['Latitude','Longitude','Magnitude','Depth/Km','deltaT']].values[1:].T
lxyz=xyz.T.copy()
lxyz=lxyz[:,2:]
lxyz/=lxyz.std(axis=0)
"Magnitude,Depth,deltaT"
print lxyz.shape
l,e,MD=pma.pma(lxyz)
X=pma.get_XY(lxyz,e)
sns.plt.plot(np.cumsum(l)/np.sum(l),'o-')
sns.plt.figure()
sns.plt.plot(e[:,:3])
sns.plt.legend("123")
ax=sns.plt.gca()
ax.xaxis.set_ticks([0,1,2])
ax.xaxis.set_ticklabels(["Magnitude","Depth","Dt"])
sns.plt.scatter(X[0],X[1],s=X[2]*10)
# colors=sns.plt.cm.jet(xyz[2]**.25)
#f,a=sns.plt.subplots(1,1,figsize=(10,8))
x1,x2,x3,x4=records['Latitude'].min(),records['Latitude'].max(),records['Longitude'].min(),records['Longitude'].max()
print x1,x2,x3,x4
m=Basemap(projection='mill',llcrnrlat=x1,urcrnrlat=x2,llcrnrlon=x3/2,urcrnrlon=x4,resolution = 'c')
m.drawcoastlines()
m.drawcountries()
#m.bluemarble()
m.fillcontinents(color="#dbc8b2")
print
txyz=xyz[:,xyz[0]<40]
x,y=m(txyz[1],txyz[0])
m.scatter(x,y,alpha=1,lw=0,s=xyz[2]*10,zorder=1e5,c=colors)
import scipy.spatial.distance as ssd
import scipy
print txyz.shape
pdists=ssd.squareform(ssd.pdist(xyz.T[:,:2]))
#zz=np.asarray([xyz[-1],xyz[-1]]).T
#tdists=ssd.squareform(ssd.pdist(zz,'braycurtis'))
print pdists.shape
#print tdists.shape
mx,my=scipy.meshgrid(xyz[-2],xyz[-2])
tdists = mx - my  # pairwise time differences between all event pairs
tdists[tdists<0]=np.nan
print tdists.shape
print (tdists<0).sum()/tdists.shape[0]**2.
d_n_t=pdists/(tdists+1)
print np.isnan(d_n_t).sum()
sns.plt.imshow((d_n_t),origin="bottom")
sns.plt.colorbar()
sns.plt.figure()
#_=sns.plt.hist(np.ma.masked_invalid(d_n_t[:,0]),bins=np.arange(0,6,.2))
#_=sns.plt.hist(np.ma.masked_invalid(pdists[d_n_t[:,0]<2,0]),bins=np.arange(0,6,.2))
# colors=sns.plt.cm.jet(xyz[2]**.25)
#f,a=sns.plt.subplots(1,1,figsize=(10,8))
x1,x2,x3,x4=records['Latitude'].min(),records['Latitude'].max(),records['Longitude'].min(),records['Longitude'].max()
print x1,x2,x3,x4
m=Basemap(projection='mill',llcrnrlat=x1,urcrnrlat=x2,llcrnrlon=x3/2,urcrnrlon=x4,resolution = 'c')
m.drawcoastlines()
m.drawcountries()
#m.bluemarble()
m.fillcontinents(color="#dbc8b2")
print txyz.shape
trs=.25
t=np.where(d_n_t[0]>trs)[0]
t2=np.where(d_n_t[t[0]]>trs)[0]
t3=np.where(d_n_t[t2[0]]<trs)[0]
tfxyz=xyz[:,t3]
x,y=m(tfxyz[1],tfxyz[0])
N=x.shape[0]
colors=sns.plt.cm.BrBG(np.arange(0.,N,1.)/N)
m.scatter(x,y,alpha=1,lw=0,s=30,zorder=1e5,c=colors)
m.scatter(x[0],y[0],alpha=1,lw=0,s=80,zorder=1e5,c=colors)
pl=sns.plt
pl.figure()
pl.scatter(tfxyz[0],tfxyz[1])
Explanation: Linear PCA
End of explanation
from sklearn.decomposition import KernelPCA
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=100, fit_inverse_transform=1)
kpca_in =xyz[:,:2]
kpca_out=scikit_kpca.fit_transform(kpca_in)
kpca_out_inv=scikit_kpca.inverse_transform(kpca_out)
print "doing pca"
l,e,_=pma.pma(kpca_in)
pca_out=np.asarray(pma.get_XY(kpca_in,e))
sns.plt.scatter(*kpca_in.T,c='r')
#sns.plt.scatter(*kpca_out.T,c='r')
sns.plt.scatter(*kpca_out_inv.T,s=xyz[:,3]*20)
#sns.plt.scatter(*pca_out,c='g')
Explanation: Non-linear PCA
http://sebastianraschka.com/Articles/2014_kernel_pca.html#nonlinear-dimensionality-reduction
End of explanation |
10,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from ipywidgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
    """Return phi(x, t) for a soliton wave with constants c and a."""
    # np.cosh broadcasts, so the same expression works for scalars and NumPy arrays
    return 0.5 * c / np.cosh(np.sqrt(c) / 2 * (x - c * t - a))**2
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or the time t is a NumPy array, in which case it should return a NumPy array itself.
End of explanation
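As a quick numerical sanity check on the formula (not part of the exercise): at the peak $x = ct + a$ the sech term equals 1, so $\phi$ should be exactly $c/2$:

```python
import numpy as np

c, a, t = 1.0, 0.0, 2.0
x_peak = c * t + a  # the soliton's center at time t

phi_peak = 0.5 * c / np.cosh(np.sqrt(c) / 2 * (x_peak - c * t - a))**2
print(phi_peak)  # 0.5
```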
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
phi = np.empty((xpoints, tpoints), dtype=float)
for i in range(xpoints):
    for j in range(tpoints):
        phi[i, j] = soliton(x[i], t[j], c, a)
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
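The double loop can also be replaced by a single vectorized call. The sketch below redefines a broadcasting-friendly `soliton` locally so it is self-contained; `np.meshgrid` builds the full (x, t) grid and one evaluation fills the whole array:

```python
import numpy as np

def soliton(x, t, c, a):
    """phi(x, t); works element-wise on arrays thanks to NumPy broadcasting."""
    return 0.5 * c / np.cosh(np.sqrt(c) / 2 * (x - c * t - a))**2

x = np.linspace(0.0, 10.0, 200)
t = np.linspace(0.0, 10.0, 100)

X, T = np.meshgrid(x, t, indexing="ij")  # both have shape (200, 100)
phi = soliton(X, T, 1.0, 0.0)
print(phi.shape)  # (200, 100)
```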
def plot_soliton_data(i=0):
    """Plot the soliton data at t[i] versus x."""
    plt.plot(x, soliton(x, t[i], c, a))
ax = plt.gca()
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.axes.get_yaxis().tick_left()
plt.title('Soliton Wave')
plt.xlabel('X')
plt.ylabel('Psi(x,t)')
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_soliton_data, i=(0, tpoints - 1))
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
10,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Context
Reading data from a software version control system can be pretty useful if you want to answer some evolutionary questions like
* Who are our main committers to the software?
* Are there any areas in the code that only one developer knows about?
* Where were we working on the last months?
In my previous notebook, I showed you how to read a Git repository directly in Python with Pandas and GitPython. As much as I like that approach (because everything is in one place and therefore reproducible), it's (currently) very slow while reading all the statistics information (but I'll work on that!). What I want to have now is a really fast method to read in a complete Git repository.
I take this opportunity to show you how to read any kind of structure, linear data into Pandas' <tt>DataFrame</tt>. The general rule of thumb is
Step1: For each commit, we choose to create a header line with the following commit info (by using <tt>--pretty=format
Step2: Data Wrangling
OK, but now we have a <strike>problem</strike> data wrangling challenge. We have the commit info as well as the statistic for the modified file in one column, but they don't belong together. What we want is to have the commit info along with the file statistics in separate columns to get some serious analysis started.
Commit info
Let's treat the commit info first. Luckily, we set some kind of anchor or marker to identify the commit info
Step3: With this, we can focus on extracting the information of a commit info row. The next command could be looking a little frightening, but don't worry. We go through it step by step.
Step4: We want to extract some data from the <tt>raw</tt> column. For this, we use the <tt>extract</tt> method on the string representation (note the<tt> str</tt>) of all the rows. This method expects a regular expression. We provide our own regex
<pre>
^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$
</pre>
that works as follows
Step5: Luckily, the row's data is just a tab-separated string that we can easily split with the <tt>split</tt> method. We expand the result to get a <tt>DataFrame</tt>, rename the default columns to something that makes more sense and adjust some data types. For the latter, we use the keyword <tt>coerce</tt> that will let <tt>to_numeric</tt> return <tt>NaN</tt>'s for all entries that are not a number.
Step6: Putting it all together
Now we have three parts
Step7: ...and fill the missing values for the file statistics' rows to get the needed structure. Together, this is done like the following
Step8: After filling the file statistics rows, we can throw away the dedicated commit info rows by reusing the index from above (look at the index for seeing this clearly).
Step9: The easy step afterward is to join the <tt>file_stats</tt> <tt>DataFrame</tt> with the <tt>commit_data</tt>.
Step10: We're done!
Complete code block
Too much code to look through? Here is everything from above in a condensed format.
Step11: Just some milliseconds to run through, not bad!
Summary
In this notebook, I showed you how to read some non-perfect structured data via the non-character separator trick. I also showed you how to transform the rows that contain multiple kinds of data into one nicely structured <tt>DataFrame</tt>.
Now that we have the Git repository <tt>DataFrame</tt>, we can do some nice things with it e. g. visualizing the code churn of a project, but that's a story for another notebook! But to give you a short preview | Python Code:
with open(r'data/gitlog_aim42.log') as log:
    for line in log.readlines()[:8]:
        print(line, end='')
Explanation: Context
Reading data from a software version control system can be pretty useful if you want to answer some evolutionary questions like
* Who are our main committers to the software?
* Are there any areas in the code that only one developer knows about?
* Where were we working over the last months?
In my previous notebook, I showed you how to read a Git repository directly in Python with Pandas and GitPython. As much as I like that approach (because everything is in one place and therefore reproducible), it's (currently) very slow while reading all the statistics information (but I'll work on that!). What I want to have now is a really fast method to read in a complete Git repository.
I take this opportunity to show you how to read any kind of structured, linear data into Pandas' <tt>DataFrame</tt>. The general rule of thumb is: As long as you see a pattern in the raw data, Pandas can read and tame it, too!
The idea
We are taking a shortcut for retrieving the commit history by exporting it into a log file. You can use e. g.
<pre>
git log --all --numstat --pretty=format:'--%h--%ad--%aN' --no-renames > git.log
</pre>
to do this. This will output a file with all the log information of a repository.
In this notebook, we analyze the Git repository of aim42 (an open book project about how to improve legacy systems).
The first entries of that file look something like this:
End of explanation
import pandas as pd
commits = pd.read_csv("data/gitlog_aim42.log",
sep="\u0012",
header=None,
names=['raw'])
commits.head()
Explanation: For each commit, we choose to create a header line with the following commit info (by using <tt>--pretty=format:'--%h--%ad--%aN'</tt>):
<pre>
--fa1ca6f--Thu Dec 22 08:04:18 2016 +0100--feststelltaste
</pre>
It contains the SHA key, the timestamp as well as the author's name of the commit, separated by <tt>--</tt>.
For each other row, we got some statistics about the modified files:
<pre>
2 0 src/main/asciidoc/appendices/bibliography.adoc
</pre>
It contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.
Let's get started!
Import the data
First, I'll show you my approach on how to read nearly everything into a <tt>DataFrame</tt>. The key is to use Pandas' <tt>read_csv</tt> for reading "non-character separated values". How to do that? We simply choose a separator that doesn't occur in the file that we want to read. My favorite character for this is the "DEVICE CONTROL TWO" character U+0012. I haven't encountered a situation yet where this character was included in a data set.
We just read our <tt>git.log</tt> file without any headers (because there are none) and give the only column a nice name.
End of explanation
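The separator trick is easy to see on a toy string: with U+0012 as the separator, tabs and dashes in the data stay untouched and every physical line lands in a single `raw` column (toy data for illustration):

```python
import io

import pandas as pd

toy_log = "--abc--2016-12-22--alice\n2\t0\tsrc/readme.adoc"
df = pd.read_csv(io.StringIO(toy_log), sep="\u0012", header=None, names=["raw"])
print(df["raw"].tolist())  # ['--abc--2016-12-22--alice', '2\t0\tsrc/readme.adoc']
```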
commit_marker = commits[
commits['raw'].str.startswith("--")]
commit_marker.head()
Explanation: Data Wrangling
OK, but now we have a <strike>problem</strike> data wrangling challenge. We have the commit info as well as the statistic for the modified file in one column, but they don't belong together. What we want is to have the commit info along with the file statistics in separate columns to get some serious analysis started.
Commit info
Let's treat the commit info first. Luckily, we set some kind of anchor or marker to identify the commit info: Each commit info starts with a <tt>--</tt>. So let's extract all the commit info from the original <tt>commits</tt> <tt>DataFrame</tt>.
End of explanation
commit_info = commit_marker['raw'].str.extract(
r"^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$",
expand=True)
commit_info['date'] = pd.to_datetime(commit_info['date'])
commit_info.head()
Explanation: With this, we can focus on extracting the information of a commit info row. The next command may look a little frightening, but don't worry: we go through it step by step.
End of explanation
file_stats_marker = commits[
~commits.index.isin(commit_info.index)]
file_stats_marker.head()
Explanation: We want to extract some data from the <tt>raw</tt> column. For this, we use the <tt>extract</tt> method on the string representation (note the<tt> str</tt>) of all the rows. This method expects a regular expression. We provide our own regex
<pre>
^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$
</pre>
that works as follows:
<tt>^</tt>: the beginning of the row
<tt>--</tt>: the two dashes that we choose and are used in the git log file as separator between the entries
<tt>(?P<sha>.*?)--</tt>: a named match group (marked by the <tt>(</tt> and <tt>)</tt> ) with the name <tt>sha</tt> for all characters (<tt>.*</tt>) until the next occurrence (<tt>?</tt>) of the <tt>--</tt> separators.
and so on until
<tt>\$</tt>: the marker for the end of the row (actually, <tt>^</tt> and <tt>$</tt> aren't needed, but it looks nicer from a regex string's perspective in my eyes ;-) )
I use these ugly looking, named match groups because then the name of such a group will be used by Pandas for the name of the column (therefore we avoid renaming the columns later on).
The <tt>expand=True</tt> keyword delivers a <tt>DataFrame</tt> with columns for each detected regex group.
We simply store the result into a new <tt>DataFrame</tt> variable <tt>commit_info</tt>.
Because we've worked with the string representation of the row, Pandas didn't recognize the right data types for our newly created columns. That's why we need to cast the <tt>date</tt> column to the right type.
OK, this part is ready, let's have a look at the file statistics!
File statistics
Every row that is not a commit info row is a file statistics row. So we just reuse the index of our already prepared <tt>commit_info</tt> <tt>DataFrame</tt> to get all the other data by saying "give me all commits that are not in the index of the <tt>commit_info</tt>'s <tt>DataFrame</tt>".
End of explanation
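The commit-info regex can also be checked in isolation with the standard-library `re` module, using the sample header line from the beginning of this notebook:

```python
import re

line = "--fa1ca6f--Thu Dec 22 08:04:18 2016 +0100--feststelltaste"
pattern = re.compile(r"^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$")

m = pattern.match(line)
print(m.group("sha"), "|", m.group("date"), "|", m.group("author"))
# fa1ca6f | Thu Dec 22 08:04:18 2016 +0100 | feststelltaste
```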
file_stats = file_stats_marker['raw'].str.split(
"\t", expand=True)
file_stats = file_stats.rename(
columns={ 0: "insertions", 1: "deletions", 2: "filename"})
file_stats['insertions'] = pd.to_numeric(
file_stats['insertions'], errors='coerce')
file_stats['deletions'] = pd.to_numeric(
file_stats['deletions'], errors='coerce')
file_stats.head()
Explanation: Luckily, the row's data is just a tab-separated string that we can easily split with the <tt>split</tt> method. We expand the result to get a <tt>DataFrame</tt>, rename the default columns to something that makes more sense and adjust some data types. For the latter, we use the keyword <tt>coerce</tt> that will let <tt>to_numeric</tt> return <tt>NaN</tt>'s for all entries that are not a number.
End of explanation
commit_info.reindex(commits.index).head(3)
Explanation: Putting it all together
Now we have three parts: all commits, the separated commit info and the file statistics.
We only need to glue the commit info and the file statistics together into a normalized <tt>DataFrame</tt>. For this, we have to make some adjustments to the indexes.
For the commit info, we want to have each info for each file statistics row. That means we reindex the commit info by using the index of the <tt>commits</tt> <tt>DataFrame</tt>...
End of explanation
commit_data = commit_info.reindex(
commits.index).fillna(method="ffill")
commit_data.head()
Explanation: ...and fill the missing values for the file statistics' rows to get the needed structure. Together, this is done like the following:
End of explanation
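The reindex-plus-forward-fill step is easiest to see on toy data (hypothetical values; `.ffill()` is the method form of `fillna(method="ffill")`):

```python
import pandas as pd

# Commit info only exists at the marker rows (here: positions 0 and 3)
info = pd.DataFrame({"sha": ["abc1234", "def5678"]}, index=[0, 3])

filled = info.reindex(range(6)).ffill()
print(filled["sha"].tolist())
# ['abc1234', 'abc1234', 'abc1234', 'def5678', 'def5678', 'def5678']
```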
commit_data = commit_data[~commit_data.index.isin(commit_info.index)]
commit_data.head()
Explanation: After filling the file statistics rows, we can throw away the dedicated commit info rows by reusing the index from above (look at the index to see this clearly).
End of explanation
commit_data = commit_data.join(file_stats)
commit_data.head()
Explanation: The easy step afterward is to join the <tt>file_stats</tt> <tt>DataFrame</tt> with the <tt>commit_data</tt>.
End of explanation
%%time
import pandas as pd
commits = pd.read_csv(r'C:\dev\repos\aim42\git.log', sep="\u0012", header=None, names=['raw'])
commit_marker = commits[commits['raw'].str.startswith("--",na=False)]
commit_info = commit_marker['raw'].str.extract(r"^--(?P<sha>.*?)--(?P<date>.*?)--(?P<author>.*?)$", expand=True)
commit_info['date'] = pd.to_datetime(commit_info['date'])
file_stats_marker = commits[~commits.index.isin(commit_info.index)]
file_stats = file_stats_marker['raw'].str.split("\t", expand=True)
file_stats = file_stats.rename(columns={0: "insertions", 1: "deletions", 2: "filename"})
file_stats['insertions'] = pd.to_numeric(file_stats['insertions'], errors='coerce')
file_stats['deletions'] = pd.to_numeric(file_stats['deletions'], errors='coerce')
commit_data = commit_info.reindex(commits.index).fillna(method="ffill")
commit_data = commit_data[~commit_data.index.isin(commit_info.index)]
commit_data = commit_data.join(file_stats)
Explanation: We're done!
Complete code block
Too much code to look through? Here is everything from above in a condensed format.
End of explanation
%matplotlib inline
timed_commits = commit_data.set_index(pd.DatetimeIndex(commit_data['date']))[['insertions', 'deletions']].resample('1D').sum()
(timed_commits['insertions'] - timed_commits['deletions']).cumsum().fillna(method='ffill').plot()
Explanation: Just some milliseconds to run through, not bad!
Summary
In this notebook, I showed you how to read imperfectly structured data via the non-character separator trick. I also showed you how to transform the rows that contain multiple kinds of data into one nicely structured <tt>DataFrame</tt>.
Now that we have the Git repository <tt>DataFrame</tt>, we can do some nice things with it, e.g. visualizing the code churn of a project, but that's a story for another notebook! But to give you a short preview:
End of explanation |
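The preview cell's resample-and-cumsum pattern, shown on toy numbers for illustration:

```python
import pandas as pd

# Toy churn: insertions minus deletions per commit timestamp
idx = pd.to_datetime(["2016-12-01", "2016-12-01", "2016-12-03"])
churn = pd.Series([10, -2, 5], index=idx)

daily = churn.resample("1D").sum()  # empty days sum to 0
print(daily.cumsum().tolist())  # [8, 8, 13]
```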