Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k) |
---|---|---|
8,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Factors
When we first looked at factors, we explored the set of built-in factors. Frequently, a desired computation isn't included as a built-in factor. One of the most powerful features of the Pipeline API is that it allows us to define our own custom factors. When a desired computation doesn't exist as a built-in, we define a custom factor.
Conceptually, a custom factor is identical to a built-in factor. It accepts inputs, window_length, and mask as constructor arguments, and returns a Factor object each day.
Let's take an example of a computation that doesn't exist as a built-in
Step1: Next, let's define our custom factor to calculate the standard deviation over a trailing window using numpy.nanstd
Step2: Finally, let's instantiate our factor in make_pipeline()
Step3: When this pipeline is run, StdDev.compute() will be called every day with data as follows
Step4: Default Inputs
When writing a custom factor, we can set default inputs and window_length in our CustomFactor subclass. For example, let's define the TenDayMeanDifference custom factor to compute the mean difference between two data columns over a trailing window using numpy.nanmean. Let's set the default inputs to [USEquityPricing.close, USEquityPricing.open] and the default window_length to 10
Step5: <i>Remember in this case that close and open are each 10 x ~8000 2D numpy arrays.</i>
If we call TenDayMeanDifference without providing any arguments, it will use the defaults.
Step6: The defaults can be manually overridden by specifying arguments in the constructor call.
Step7: Further Example
Let's take another example where we build a momentum custom factor and use it to create a filter. We will then use that filter as a screen for our pipeline.
Let's start by defining a Momentum factor to be the division of the most recent close price by the close price from n days ago where n is the window_length.
Step8: Now, let's instantiate our Momentum factor (twice) to create a 10-day momentum factor and a 20-day momentum factor. Let's also create a positive_momentum filter returning True for securities with both a positive 10-day momentum and a positive 20-day momentum.
Step9: Next, let's add our momentum factors and our positive_momentum filter to make_pipeline. Let's also pass positive_momentum as a screen to our pipeline.
Step10: Running this pipeline outputs the standard deviation and each of our momentum computations for securities with positive 10-day and 20-day momentum. | Python Code:
from quantopian.pipeline import CustomFactor
import numpy
Explanation: Custom Factors
When we first looked at factors, we explored the set of built-in factors. Frequently, a desired computation isn't included as a built-in factor. One of the most powerful features of the Pipeline API is that it allows us to define our own custom factors. When a desired computation doesn't exist as a built-in, we define a custom factor.
Conceptually, a custom factor is identical to a built-in factor. It accepts inputs, window_length, and mask as constructor arguments, and returns a Factor object each day.
Let's take an example of a computation that doesn't exist as a built-in: standard deviation. To create a factor that computes the standard deviation over a trailing window, we can subclass quantopian.pipeline.CustomFactor and implement a compute method whose signature is:
def compute(self, today, asset_ids, out, *inputs):
...
*inputs are M x N numpy arrays, where M is the window_length and N is the number of securities (usually ~8000 unless a mask is provided). *inputs are trailing data windows. Note that there will be one M x N array for each BoundColumn provided in the factor's inputs list. The data type of each array will be the dtype of the corresponding BoundColumn.
out is an empty array of length N. out will be the output of our custom factor each day. The job of compute is to write output values into out.
asset_ids will be an integer array of length N containing security ids corresponding to the columns in our *inputs arrays.
today will be a pandas Timestamp representing the day for which compute is being called.
Of these, *inputs and out are most commonly used.
An instance of CustomFactor that’s been added to a pipeline will have its compute method called every day. For example, let's define a custom factor that computes the standard deviation of the close price over the last 5 days. To start, let's add CustomFactor and numpy to our import statements.
End of explanation
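The later cells in this lesson also use Pipeline, USEquityPricing, and run_pipeline, which are assumed to have been imported in earlier lessons. In the Quantopian research environment those imports would look roughly like this (a sketch, not part of the original lesson):
~~~
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.research import run_pipeline
~~~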
class StdDev(CustomFactor):
def compute(self, today, asset_ids, out, values):
# Calculates the column-wise standard deviation, ignoring NaNs
out[:] = numpy.nanstd(values, axis=0)
Explanation: Next, let's define our custom factor to calculate the standard deviation over a trailing window using numpy.nanstd:
End of explanation
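numpy.nanstd computes the column-wise standard deviation while ignoring NaNs, which is why axis=0 is passed above; a quick standalone check with made-up numbers:
~~~
# column 0 holds 1, 3, 5; column 1 holds a single non-NaN value, 4
print(numpy.nanstd([[1.0, numpy.nan], [3.0, 4.0], [5.0, numpy.nan]], axis=0))  # ~ [1.633, 0.0]
~~~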
def make_pipeline():
std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5)
return Pipeline(
columns={
'std_dev': std_dev
}
)
Explanation: Finally, let's instantiate our factor in make_pipeline():
End of explanation
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
Explanation: When this pipeline is run, StdDev.compute() will be called every day with data as follows:
- values: An M x N numpy array, where M is 5 (the window_length), and N is ~8000 (the number of securities in our database on the day in question).
- out: An empty array of length N (~8000). In this example, the job of compute is to populate out with the 5-day close price standard deviation of each security.
End of explanation
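To make those shapes concrete, here is a standalone sketch with made-up data (plain NumPy, using the numpy imported above; real pipelines fill values from USEquityPricing.close):
~~~
values = numpy.random.rand(5, 8000)    # stand-in for the 5 x ~8000 trailing close-price window
out = numpy.empty(8000)                # empty output buffer, one slot per security
out[:] = numpy.nanstd(values, axis=0)  # what StdDev.compute writes into out each day
~~~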
class TenDayMeanDifference(CustomFactor):
# Default inputs.
inputs = [USEquityPricing.close, USEquityPricing.open]
window_length = 10
def compute(self, today, asset_ids, out, close, open):
# Calculates the column-wise mean difference, ignoring NaNs
out[:] = numpy.nanmean(close - open, axis=0)
Explanation: Default Inputs
When writing a custom factor, we can set default inputs and window_length in our CustomFactor subclass. For example, let's define the TenDayMeanDifference custom factor to compute the mean difference between two data columns over a trailing window using numpy.nanmean. Let's set the default inputs to [USEquityPricing.close, USEquityPricing.open] and the default window_length to 10:
End of explanation
# Computes the 10-day mean difference between the daily open and close prices.
close_open_diff = TenDayMeanDifference()
Explanation: <i>Remember in this case that close and open are each 10 x ~8000 2D numpy arrays.</i>
If we call TenDayMeanDifference without providing any arguments, it will use the defaults.
End of explanation
# Computes the 10-day mean difference between the daily high and low prices.
high_low_diff = TenDayMeanDifference(inputs=[USEquityPricing.high, USEquityPricing.low])
Explanation: The defaults can be manually overridden by specifying arguments in the constructor call.
End of explanation
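window_length can be overridden in exactly the same way; a hypothetical call replacing both defaults:
~~~
# hypothetical: 20-day mean difference between the daily high and low prices
high_low_diff_20 = TenDayMeanDifference(
    inputs=[USEquityPricing.high, USEquityPricing.low],
    window_length=20,
)
~~~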
class Momentum(CustomFactor):
# Default inputs
inputs = [USEquityPricing.close]
# Compute momentum
def compute(self, today, assets, out, close):
out[:] = close[-1] / close[0]
Explanation: Further Example
Let's take another example where we build a momentum custom factor and use it to create a filter. We will then use that filter as a screen for our pipeline.
Let's start by defining a Momentum factor to be the division of the most recent close price by the close price from n days ago where n is the window_length.
End of explanation
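For intuition, a standalone sketch with made-up prices (not pipeline data): if a security's close rises from 20.0 to 22.0 over the window, compute writes 22.0 / 20.0 = 1.1 into out, so values above 1 indicate that the price rose.
~~~
# hypothetical single column of closing prices over a 10-day window
close = numpy.array([20.0, 20.5, 21.0, 20.8, 21.5, 21.7, 21.9, 22.2, 21.8, 22.0])
print(close[-1] / close[0])  # 1.1
~~~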
ten_day_momentum = Momentum(window_length=10)
twenty_day_momentum = Momentum(window_length=20)
positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))
Explanation: Now, let's instantiate our Momentum factor (twice) to create a 10-day momentum factor and a 20-day momentum factor. Let's also create a positive_momentum filter returning True for securities with both a positive 10-day momentum and a positive 20-day momentum.
End of explanation
def make_pipeline():
ten_day_momentum = Momentum(window_length=10)
twenty_day_momentum = Momentum(window_length=20)
positive_momentum = ((ten_day_momentum > 1) & (twenty_day_momentum > 1))
std_dev = StdDev(inputs=[USEquityPricing.close], window_length=5)
return Pipeline(
columns={
'std_dev': std_dev,
'ten_day_momentum': ten_day_momentum,
'twenty_day_momentum': twenty_day_momentum
},
screen=positive_momentum
)
Explanation: Next, let's add our momentum factors and our positive_momentum filter to make_pipeline. Let's also pass positive_momentum as a screen to our pipeline.
End of explanation
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
Explanation: Running this pipeline outputs the standard deviation and each of our momentum computations for securities with positive 10-day and 20-day momentum.
End of explanation |
8,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from
Step1: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the
Step2: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
Step3: The results of these mass univariate analyses can be visualised by plotting | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import mne
from mne.channels import find_ch_adjacency, make_1020_channel_selections
from mne.stats import spatio_temporal_cluster_test
np.random.seed(0)
# Load the data
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# These data are quite smooth, so to speed up processing we'll (unsafely!) just
# decimate them
epochs.decimate(4, verbose='error')
name = "NumberOfLetters"
# Split up the data by the median length in letters via the attached metadata
median_value = str(epochs.metadata[name].median())
long_words = epochs[name + " > " + median_value]
short_words = epochs[name + " < " + median_value]
Explanation: Visualising statistical significance thresholds on EEG data
MNE-Python provides a range of tools for statistical hypothesis testing
and the visualisation of the results. Here, we show a few options for
exploratory and confirmatory tests - e.g., targeted t-tests, cluster-based
permutation approaches (here with Threshold-Free Cluster Enhancement);
and how to visualise the results.
The underlying data comes from :footcite:DufauEtAl2015; we contrast long vs.
short words. TFCE is described in :footcite:SmithNichols2009.
End of explanation
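As a quick sanity check (a sketch, not part of the original tutorial), the size of each group can be inspected directly:
~~~
# number of epochs falling into each condition
print(len(long_words), "long-word epochs;", len(short_words), "short-word epochs")
~~~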
time_windows = ((.2, .25), (.35, .45))
elecs = ["Fz", "Cz", "Pz"]
index = ['condition', 'epoch', 'time']
# display the EEG data in Pandas format (first 5 rows)
print(epochs.to_data_frame(index=index)[elecs].head())
report = "{elec}, time: {tmin}-{tmax} s; t({df})={t_val:.3f}, p={p:.3f}"
print("\nTargeted statistical test results:")
for (tmin, tmax) in time_windows:
long_df = long_words.copy().crop(tmin, tmax).to_data_frame(index=index)
short_df = short_words.copy().crop(tmin, tmax).to_data_frame(index=index)
for elec in elecs:
# extract data
A = long_df[elec].groupby("condition").mean()
B = short_df[elec].groupby("condition").mean()
# conduct t test
t, p = ttest_ind(A, B)
# display results
format_dict = dict(elec=elec, tmin=tmin, tmax=tmax,
df=len(epochs.events) - 2, t_val=t, p=p)
print(report.format(**format_dict))
Explanation: If we have a specific point in space and time we wish to test, it can be
convenient to convert the data into Pandas Dataframe format. In this case,
the :class:mne.Epochs object has a convenient
:meth:mne.Epochs.to_data_frame method, which returns a dataframe.
This dataframe can then be queried for specific time windows and sensors.
The extracted data can be submitted to standard statistical tests. Here,
we conduct t-tests on the difference between long and short words.
End of explanation
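Because six targeted tests are run (three electrodes × two time windows), these p-values could additionally be corrected for multiple comparisons; a minimal sketch using Bonferroni correction, with placeholder p-values standing in for the ones printed by the loop above:
~~~
from mne.stats import bonferroni_correction

p_vals = np.array([0.01, 0.20, 0.03, 0.50, 0.04, 0.60])  # placeholders; use the p-values from the loop above
reject, p_corrected = bonferroni_correction(p_vals, alpha=0.05)
print(reject, p_corrected)
~~~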
# Calculate adjacency matrix between sensors from their locations
adjacency, _ = find_ch_adjacency(epochs.info, "eeg")
# Extract data: transpose because the cluster test requires channels to be last
# In this case, inference is done over items. In the same manner, we could
# also conduct the test over, e.g., subjects.
X = [long_words.get_data().transpose(0, 2, 1),
short_words.get_data().transpose(0, 2, 1)]
tfce = dict(start=.4, step=.4) # ideally start and step would be smaller
# Calculate statistical thresholds
t_obs, clusters, cluster_pv, h0 = spatio_temporal_cluster_test(
X, tfce, adjacency=adjacency,
n_permutations=100) # a more standard number would be 1000+
significant_points = cluster_pv.reshape(t_obs.shape).T < .05
print(str(significant_points.sum()) + " points selected by TFCE ...")
Explanation: Absent specific hypotheses, we can also conduct an exploratory
mass-univariate analysis at all sensors and time points. This requires
correcting for multiple tests.
MNE offers various methods for this; amongst them, cluster-based permutation
methods allow deriving power from the spatio-temporal correlation structure
of the data. Here, we use TFCE.
End of explanation
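A quick look at the result (a sketch, not in the original tutorial): the smallest TFCE-corrected p-value gives a rough sense of how strong the overall effect is.
~~~
print("smallest corrected p-value: {:.3f}".format(cluster_pv.min()))
~~~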
# We need an evoked object to plot the image to be masked
evoked = mne.combine_evoked([long_words.average(), short_words.average()],
weights=[1, -1]) # calculate difference wave
time_unit = dict(time_unit="s")
evoked.plot_joint(title="Long vs. short words", ts_args=time_unit,
topomap_args=time_unit) # show difference wave
# Create ROIs by checking channel labels
selections = make_1020_channel_selections(evoked.info, midline="12z")
# Visualize the results
fig, axes = plt.subplots(nrows=3, figsize=(8, 8))
axes = {sel: ax for sel, ax in zip(selections, axes.ravel())}
evoked.plot_image(axes=axes, group_by=selections, colorbar=False, show=False,
mask=significant_points, show_names="all", titles=None,
**time_unit)
plt.colorbar(axes["Left"].images[-1], ax=list(axes.values()), shrink=.3,
label="µV")
plt.show()
Explanation: The results of these mass univariate analyses can be visualised by plotting
:class:mne.Evoked objects as images (via :class:mne.Evoked.plot_image)
and masking points for significance.
Here, we group channels by Regions of Interest to facilitate localising
effects on the head.
End of explanation |
8,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#YOUR-NAME-(NEPTUN)" data-toc-modified-id="YOUR-NAME-(NEPTUN)-1"><span class="toc-item-num">1 </span>YOUR NAME (NEPTUN)</a></div><div class="lev1 toc-item"><a href="#Business-Intelligence---Pandas" data-toc-modified-id="Business-Intelligence---Pandas-2"><span class="toc-item-num">2 </span>Business Intelligence - Pandas</a></div><div class="lev1 toc-item"><a href="#Important-steps-before-starting-anything" data-toc-modified-id="Important-steps-before-starting-anything-3"><span class="toc-item-num">3 </span>Important steps before starting anything</a></div><div class="lev2 toc-item"><a href="#General-information" data-toc-modified-id="General-information-31"><span class="toc-item-num">3.1 </span>General information</a></div><div class="lev3 toc-item"><a href="#Submission" data-toc-modified-id="Submission-311"><span class="toc-item-num">3.1.1 </span>Submission</a></div><div class="lev3 toc-item"><a href="#Tips" data-toc-modified-id="Tips-312"><span class="toc-item-num">3.1.2 </span>Tips</a></div><div class="lev3 toc-item"><a href="#Credits" data-toc-modified-id="Credits-313"><span class="toc-item-num">3.1.3 </span>Credits</a></div><div class="lev3 toc-item"><a href="#Feedback" data-toc-modified-id="Feedback-314"><span class="toc-item-num">3.1.4 </span>Feedback</a></div><div class="lev2 toc-item"><a href="#Code-quality" data-toc-modified-id="Code-quality-32"><span class="toc-item-num">3.2 </span>Code quality</a></div><div class="lev2 toc-item"><a href="#PEP8-style-guide" data-toc-modified-id="PEP8-style-guide-33"><span class="toc-item-num">3.3 </span>PEP8 style guide</a></div><div class="lev2 toc-item"><a href="#Figure-quality" data-toc-modified-id="Figure-quality-34"><span class="toc-item-num">3.4 </span>Figure quality</a></div><div class="lev1 toc-item"><a href="#Main-imports" data-toc-modified-id="Main-imports-4"><span class="toc-item-num">4 </span>Main imports</a></div><div class="lev3 toc-item"><a href="#downloading-the-dataset" data-toc-modified-id="downloading-the-dataset-401"><span class="toc-item-num">4.0.1 </span>downloading the dataset</a></div><div class="lev1 toc-item"><a href="#Loading-the-dataset" data-toc-modified-id="Loading-the-dataset-5"><span class="toc-item-num">5 </span>Loading the dataset</a></div><div class="lev1 toc-item"><a href="#Normalizing-the-dataset" data-toc-modified-id="Normalizing-the-dataset-6"><span class="toc-item-num">6 </span>Normalizing the dataset</a></div><div class="lev2 toc-item"><a href="#the-.dt-and-the-.str-namespaces" data-toc-modified-id="the-.dt-and-the-.str-namespaces-61"><span class="toc-item-num">6.1 </span>the <code>.dt</code> and the <code>.str</code> namespaces</a></div><div class="lev3 toc-item"><a href="#.dt-namespace" data-toc-modified-id=".dt-namespace-611"><span class="toc-item-num">6.1.1 </span><code>.dt</code> namespace</a></div><div class="lev3 toc-item"><a href="#.str-namespace" data-toc-modified-id=".str-namespace-612"><span class="toc-item-num">6.1.2 </span><code>.str</code> namespace</a></div><div class="lev2 toc-item"><a href="#Let's-extract-the-release-year-of-a-movie-into-a-separate" data-toc-modified-id="Let's-extract-the-release-year-of-a-movie-into-a-separate-62"><span class="toc-item-num">6.2 </span>Let's extract the release year of a movie into a separate</a></div><div class="lev3 toc-item"><a href="#the-most-common-years-are" data-toc-modified-id="the-most-common-years-are-621"><span class="toc-item-num">6.2.1 </span>the most common years are</a></div><div class="lev3 
toc-item"><a href="#video_release_date-is-always-NaT-(not-a-time),-let's-drop-it" data-toc-modified-id="video_release_date-is-always-NaT-(not-a-time),-let's-drop-it-622"><span class="toc-item-num">6.2.2 </span><code>video_release_date</code> is always <code>NaT</code> (not a time), let's drop it</a></div><div class="lev1 toc-item"><a href="#Basic-analysis-of-the-dataset" data-toc-modified-id="Basic-analysis-of-the-dataset-7"><span class="toc-item-num">7 </span>Basic analysis of the dataset</a></div><div class="lev1 toc-item"><a href="#Basic-queries" data-toc-modified-id="Basic-queries-8"><span class="toc-item-num">8 </span>Basic queries</a></div><div class="lev1 toc-item"><a href="#Which-movies-were-released-in-1956?" data-toc-modified-id="Which-movies-were-released-in-1956?-9"><span class="toc-item-num">9 </span>Which movies were released in 1956?</a></div><div class="lev1 toc-item"><a href="#How-many-movies-were-released-in-the-80s?" data-toc-modified-id="How-many-movies-were-released-in-the-80s?-10"><span class="toc-item-num">10 </span>How many movies were released in the 80s?</a></div><div class="lev1 toc-item"><a href="#When-were-the-Die-Hard-movies-released?" data-toc-modified-id="When-were-the-Die-Hard-movies-released?-11"><span class="toc-item-num">11 </span>When were the Die Hard movies released?</a></div><div class="lev2 toc-item"><a href="#How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?" data-toc-modified-id="How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?-111"><span class="toc-item-num">11.1 </span>How many movies are both action and romance? What about action or romance?</a></div><div class="lev1 toc-item"><a href="#Problem-Set-1
Step1: General information
This goal of this notebook is to give a brief introduction to the pandas library, a popular data manipulation and analysis tool for Python.
Problems are numbered from Q1 to Q5 with many subproblems such as Q1.1. The scores range from 2 to 5 based on the difficulty of the problem. Grades are determined using this table
Step2: downloading the dataset
Step3: Loading the dataset
pd.read_table loads a tabular dataset. The full function signature is
Step4: A couple improvements
Step5: Normalizing the dataset
the .dt and the .str namespaces
The year in the title seems redundant, let's check if it's always the same as the release date. The .dt namespace has various methods and attributes for handling dates, and the .str namespace has string handling methods.
.dt namespace
Step6: .str namespace
Step7: .str can also be indexed like a string
Step8: Let's extract the release year of a movie into a separate
Step9: the most common years are
Step10: video_release_date is always NaT (not a time), let's drop it
Step11: Basic analysis of the dataset
describe generates descriptive statistics.
Step12: Only numeric columns are included by default. A single column (pd.Series) has a describe function too
Step13: Numeric statistics are available as separate functions too
Step14: Basic queries
Which movies were released in 1956?
Step15: How many movies were released in the 80s?
Let's print 5 examples too.
Step16: When were the Die Hard movies released?
Step17: Die Hard 4 and 5 are missing. This is because the dataset only contains movies released between
Step18: and Die Hard 4 and 5 were released in 2007 and 2013 respectively.
How many movies are both action and romance? What about action or romance?
Make sure you parenthesize the conditions
Step19: Problem Set 1
Step20: Q1.2 Are there thrillers for children? Find an example. (2 points)
Step21: Q1.3 How many movies have title longer than 40 characters? (3 points)
Step22: pd.Series.apply
Step23: Q1.4* How many content words does the average title have? Which title has the most/least words?
Content words are capitalized. The opposite of content words are function words (and, a, an, the, more etc.).
We should not include the release year in the word count. The release year is always the last word of the title.
Step 1. Count words in the title (2 points)
Step24: Step 2. Shortest and longest titles by word count (3 points)
Step25: Q1.5* How many movies have the word 'and' in their title? Write a function that counts movies with a particular word. (3 points)
Disregard case and avoid matching subwords. For example, 'and' should not match 'Holland', nor should it match the movie 'Andrew'.
Step26: Groupby and visualization
How many movies are released each year?
Step27: another option is to use pd.Series.value_counts
Step28: most movies were released in the late 80s and 90s, let's zoom in. Let's also change the figure size.
We create the plot object with one subplot and then specify which axis pandas should use for plotting (ax=ax).
Step29: we can groupby on multiple columns
Back to the romance-action combinations
Step30: we can also group on arbitrary conditions
how many movies were released each decade?
Step31: Problem Set 2
Step32: Q2.2 Plot the number of adventure movies from 1985 to 1999 on a bar chart. Use your group_genre_by_year function. (2 points)
Step33: Q2.3 Plot the distribution of release day (day of month) on a pie chart.
Step 1. groupby (2 points)
Step34: Step 2. pie chart. Add percent values. (3 points)
Step35: Q2.4 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.
Step 1. Compute frequencies. (3 points)
You don't need to perform any preprocessing.
Step36: Step 2. Plot it on a bar chart in descending order. (3 points)
The most common letter should be the first bar.
Step37: Problem Set 3. Handling multiple dataframes
The main table of this dataset is u.data with 100000 ratings.
Step38: The timestamp column is a Unix timestamp, let's convert it to pd.DateTime
Step39: Merging with the movies dataframe
We overwrite ratings
Step40: How many ratings are timestamped before the release date?
Step41: Which movies were rated before the release date?
Step42: Q3.1. How many times was each movie rated?
Step 1. Compute the frequencies of movie ratings. Use titles instead of movie ids. (2 points)
Step43: Step 2. Plot the frequencies on a histogram. (2 points)
pd.Series has a hist function. It uses 10 bins by default, use more.
Step44: Q3.2. How many ratings were submitted by each day of the week? What were their averages?
Step 1. groupby (3 points)
Tip
Step45: Step 2. number of ratings per day (2 points)
Step46: Step 3. mean rating by day (2 points)
Step47: Q3.3** What is the mean of ratings by genre?
If a movie has multiple genres, include it in every genre.
Step 1. Compute the mean scores. (5 points)
There are many ways to solve this problem. Try to do it without explicit for loops.
Step48: Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (3 points)
Step49: Problem Set 4. User demographics
Q4.1 Load the users table from the file u.user. (3 points)
u.users has the following columns
Step50: Q4.2 Merge the users table with ratings. Do not discard any columns. (3 points)
Step51: Q 4.3 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.
Step 1. Compute the averages by occupation. (2 points)
Step52: Step 2. Plot it on a bar chart. (2 points)
Make the bar chart wider and restrict the y-axis to (2.5, 4).
Step53: Q4.4* Plot the averages by occupation and gender on a multiple bar plot. (4 points)
Tip
Step54: Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.
Step 1. Number of ratings by age group (3 points)
Step55: Step 2. Plot it on a bar chart. (2 points)
Step56: Q4.6 What hour of the day do different occupations rate? (3 points)
Create a function that computes the number of ratings per hour for a single occupation.
Step57: Q4.7* Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)
A two-subplot figure is created. ax is an array of the two subplots, use ax[0] for marketing employees and ax[1] for programmers. Set the titles of the subplots accordingly.
Step58: Q4.8* Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).
Step1. compute mean length (4 points)
Tip
Step59: Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)
Step60: Problem Set 5. A simple recommendation system
Let's build a simple recommendation system that finds similar movies based on genre.
Q5.1. Extract genre information as a matrix. (2 points)
The .values attribute represents the underlying values as a Numpy ndarray.
Step61: Q5.2 Run the k-nearest neighbor algorithm on X. (3 points)
Find a usage example in the documentation of NearestNeighbors.
Store the indices in a variable names indices.
K is the number of nearest neighbors. It should be a parameter of your function.
Step62: indices is more convenient as a DataFrame
Step63: Q5.3 Increment by one (3 points)
The index of this DataFrame refers to a particular movie and the rest of the rows are the indices of similar movies. The problem is that this matrix is zero-indexed, while the dataset (movies table) is indexed from 1.
Both the index and all values should be increased by one.
Step64: Q5.4* Find the movies corresponding to these indices (5 points)
You'll need multiple merge operations.
Tip
Step65: Q5.5* Replace the index of the movie by its title. (2 points)
Step66: Q5.6** Improve your recommedation system by adding other columns. (5 points)
Tips
Step67: Q6* Extra (3 points)
Add any extra observations that you find interesting. Did you find any interesting patterns in the dataset? Are certain genres more appealing to particular demographic groups?
You can add multiple observations for extra points.
Please explain your answer in this field (double click on YOUR CODE GOES HERE) | Python Code:
import pandas as pd
if pd.__version__ < '1':
print("WARNING: Pandas version older than 1.0.0: {}".format(pd.__version__))
else:
print("Pandas version OK: {}".format(pd.__version__))
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#YOUR-NAME-(NEPTUN)" data-toc-modified-id="YOUR-NAME-(NEPTUN)-1"><span class="toc-item-num">1 </span>YOUR NAME (NEPTUN)</a></div><div class="lev1 toc-item"><a href="#Business-Intelligence---Pandas" data-toc-modified-id="Business-Intelligence---Pandas-2"><span class="toc-item-num">2 </span>Business Intelligence - Pandas</a></div><div class="lev1 toc-item"><a href="#Important-steps-before-starting-anything" data-toc-modified-id="Important-steps-before-starting-anything-3"><span class="toc-item-num">3 </span>Important steps before starting anything</a></div><div class="lev2 toc-item"><a href="#General-information" data-toc-modified-id="General-information-31"><span class="toc-item-num">3.1 </span>General information</a></div><div class="lev3 toc-item"><a href="#Submission" data-toc-modified-id="Submission-311"><span class="toc-item-num">3.1.1 </span>Submission</a></div><div class="lev3 toc-item"><a href="#Tips" data-toc-modified-id="Tips-312"><span class="toc-item-num">3.1.2 </span>Tips</a></div><div class="lev3 toc-item"><a href="#Credits" data-toc-modified-id="Credits-313"><span class="toc-item-num">3.1.3 </span>Credits</a></div><div class="lev3 toc-item"><a href="#Feedback" data-toc-modified-id="Feedback-314"><span class="toc-item-num">3.1.4 </span>Feedback</a></div><div class="lev2 toc-item"><a href="#Code-quality" data-toc-modified-id="Code-quality-32"><span class="toc-item-num">3.2 </span>Code quality</a></div><div class="lev2 toc-item"><a href="#PEP8-style-guide" data-toc-modified-id="PEP8-style-guide-33"><span class="toc-item-num">3.3 </span>PEP8 style guide</a></div><div class="lev2 toc-item"><a href="#Figure-quality" data-toc-modified-id="Figure-quality-34"><span class="toc-item-num">3.4 </span>Figure quality</a></div><div class="lev1 toc-item"><a href="#Main-imports" data-toc-modified-id="Main-imports-4"><span class="toc-item-num">4 </span>Main imports</a></div><div class="lev3 toc-item"><a href="#downloading-the-dataset" data-toc-modified-id="downloading-the-dataset-401"><span class="toc-item-num">4.0.1 </span>downloading the dataset</a></div><div class="lev1 toc-item"><a href="#Loading-the-dataset" data-toc-modified-id="Loading-the-dataset-5"><span class="toc-item-num">5 </span>Loading the dataset</a></div><div class="lev1 toc-item"><a href="#Normalizing-the-dataset" data-toc-modified-id="Normalizing-the-dataset-6"><span class="toc-item-num">6 </span>Normalizing the dataset</a></div><div class="lev2 toc-item"><a href="#the-.dt-and-the-.str-namespaces" data-toc-modified-id="the-.dt-and-the-.str-namespaces-61"><span class="toc-item-num">6.1 </span>the <code>.dt</code> and the <code>.str</code> namespaces</a></div><div class="lev3 toc-item"><a href="#.dt-namespace" data-toc-modified-id=".dt-namespace-611"><span class="toc-item-num">6.1.1 </span><code>.dt</code> namespace</a></div><div class="lev3 toc-item"><a href="#.str-namespace" data-toc-modified-id=".str-namespace-612"><span class="toc-item-num">6.1.2 </span><code>.str</code> namespace</a></div><div class="lev2 toc-item"><a href="#Let's-extract-the-release-year-of-a-movie-into-a-separate" data-toc-modified-id="Let's-extract-the-release-year-of-a-movie-into-a-separate-62"><span class="toc-item-num">6.2 </span>Let's extract the release year of a movie into a separate</a></div><div class="lev3 toc-item"><a href="#the-most-common-years-are" data-toc-modified-id="the-most-common-years-are-621"><span class="toc-item-num">6.2.1 </span>the most common years are</a></div><div class="lev3 
toc-item"><a href="#video_release_date-is-always-NaT-(not-a-time),-let's-drop-it" data-toc-modified-id="video_release_date-is-always-NaT-(not-a-time),-let's-drop-it-622"><span class="toc-item-num">6.2.2 </span><code>video_release_date</code> is always <code>NaT</code> (not a time), let's drop it</a></div><div class="lev1 toc-item"><a href="#Basic-analysis-of-the-dataset" data-toc-modified-id="Basic-analysis-of-the-dataset-7"><span class="toc-item-num">7 </span>Basic analysis of the dataset</a></div><div class="lev1 toc-item"><a href="#Basic-queries" data-toc-modified-id="Basic-queries-8"><span class="toc-item-num">8 </span>Basic queries</a></div><div class="lev1 toc-item"><a href="#Which-movies-were-released-in-1956?" data-toc-modified-id="Which-movies-were-released-in-1956?-9"><span class="toc-item-num">9 </span>Which movies were released in 1956?</a></div><div class="lev1 toc-item"><a href="#How-many-movies-were-released-in-the-80s?" data-toc-modified-id="How-many-movies-were-released-in-the-80s?-10"><span class="toc-item-num">10 </span>How many movies were released in the 80s?</a></div><div class="lev1 toc-item"><a href="#When-were-the-Die-Hard-movies-released?" data-toc-modified-id="When-were-the-Die-Hard-movies-released?-11"><span class="toc-item-num">11 </span>When were the Die Hard movies released?</a></div><div class="lev2 toc-item"><a href="#How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?" data-toc-modified-id="How-many-movies-are-both-action-and-romance?-What-about-action-or-romance?-111"><span class="toc-item-num">11.1 </span>How many movies are both action and romance? What about action or romance?</a></div><div class="lev1 toc-item"><a href="#Problem-Set-1:-simple-queries" data-toc-modified-id="Problem-Set-1:-simple-queries-12"><span class="toc-item-num">12 </span>Problem Set 1: simple queries</a></div><div class="lev1 toc-item"><a href="#Q1.1-How-many-action-movies-were-release-before-1985-and-in-1985-or-later?-(2-points)" data-toc-modified-id="Q1.1-How-many-action-movies-were-release-before-1985-and-in-1985-or-later?-(2-points)-13"><span class="toc-item-num">13 </span>Q1.1 How many <em>action</em> movies were release before 1985 and in 1985 or later? (2 points)</a></div><div class="lev1 toc-item"><a href="#Q1.2-Are-there-thrillers-for-children?-Find-an-example.-(2-points)" data-toc-modified-id="Q1.2-Are-there-thrillers-for-children?-Find-an-example.-(2-points)-14"><span class="toc-item-num">14 </span>Q1.2 Are there thrillers for children? Find an example. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q1.3-How-many-movies-have-title-longer-than-40-characters?-(3-points)" data-toc-modified-id="Q1.3-How-many-movies-have-title-longer-than-40-characters?-(3-points)-15"><span class="toc-item-num">15 </span>Q1.3 How many movies have title longer than 40 characters? (3 points)</a></div><div class="lev1 toc-item"><a href="#pd.Series.apply:-running-arbitrary-functions-on-each-element" data-toc-modified-id="pd.Series.apply:-running-arbitrary-functions-on-each-element-16"><span class="toc-item-num">16 </span><code>pd.Series.apply</code>: running arbitrary functions on each element</a></div><div class="lev1 toc-item"><a href="#Q1.4*-How-many-content-words-does-the-average-title-have?-Which-title-has-the-most/least-words?" data-toc-modified-id="Q1.4*-How-many-content-words-does-the-average-title-have?-Which-title-has-the-most/least-words?-17"><span class="toc-item-num">17 </span>Q1.4* How many content words does the average title have? 
Which title has the most/least words?</a></div><div class="lev2 toc-item"><a href="#Step-1.-Count-words-in-the-title-(2-points)" data-toc-modified-id="Step-1.-Count-words-in-the-title-(2-points)-171"><span class="toc-item-num">17.1 </span>Step 1. Count words in the title (2 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Shortest-and-longest-titles-by-word-count-(3-points)" data-toc-modified-id="Step-2.-Shortest-and-longest-titles-by-word-count-(3-points)-172"><span class="toc-item-num">17.2 </span>Step 2. Shortest and longest titles by word count (3 points)</a></div><div class="lev1 toc-item"><a href="#Q1.5*-How-many-movies-have-the-word-'and'-in-their-title?-Write-a-function-that-counts-movies-with-a-particular-word.-(3-points)" data-toc-modified-id="Q1.5*-How-many-movies-have-the-word-'and'-in-their-title?-Write-a-function-that-counts-movies-with-a-particular-word.-(3-points)-18"><span class="toc-item-num">18 </span>Q1.5* How many movies have the word 'and' in their title? Write a function that counts movies with a particular word. (3 points)</a></div><div class="lev1 toc-item"><a href="#Groupby-and-visualization" data-toc-modified-id="Groupby-and-visualization-19"><span class="toc-item-num">19 </span>Groupby and visualization</a></div><div class="lev3 toc-item"><a href="#we-can-groupby-on-multiple-columns" data-toc-modified-id="we-can-groupby-on-multiple-columns-1901"><span class="toc-item-num">19.0.1 </span>we can groupby on multiple columns</a></div><div class="lev3 toc-item"><a href="#we-can-also-group-on-arbitrary-conditions" data-toc-modified-id="we-can-also-group-on-arbitrary-conditions-1902"><span class="toc-item-num">19.0.2 </span>we can also group on arbitrary conditions</a></div><div class="lev1 toc-item"><a href="#Problem-Set-2:-Groupby-and-visualization" data-toc-modified-id="Problem-Set-2:-Groupby-and-visualization-20"><span class="toc-item-num">20 </span>Problem Set 2: Groupby and visualization</a></div><div class="lev1 toc-item"><a href="#Q2.1-Write-a-function-that-takes-a-genre-and-groups-movies-of-that-genre-by-year.-Do-not-include-movies-older-than-1985.--(3-points)" data-toc-modified-id="Q2.1-Write-a-function-that-takes-a-genre-and-groups-movies-of-that-genre-by-year.-Do-not-include-movies-older-than-1985.--(3-points)-21"><span class="toc-item-num">21 </span>Q2.1 Write a function that takes a genre and groups movies of that genre by year. Do not include movies older than 1985. (3 points)</a></div><div class="lev1 toc-item"><a href="#Q2.2-Plot-the-number-of-adventure-movies-from-1985-to-1999-on-a-bar-chart.-Use-your-group_genre_by_year-function.-(2-points)" data-toc-modified-id="Q2.2-Plot-the-number-of-adventure-movies-from-1985-to-1999-on-a-bar-chart.-Use-your-group_genre_by_year-function.-(2-points)-22"><span class="toc-item-num">22 </span>Q2.2 Plot the number of adventure movies from 1985 to 1999 on a <em>bar</em> chart. Use your <code>group_genre_by_year</code> function. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q2.3-Plot-the-distribution-of-release-day-(day-of-month)-on-a-pie-chart." data-toc-modified-id="Q2.3-Plot-the-distribution-of-release-day-(day-of-month)-on-a-pie-chart.-23"><span class="toc-item-num">23 </span>Q2.3 Plot the distribution of release day (day of month) on a pie chart.</a></div><div class="lev2 toc-item"><a href="#Step-1.-groupby-(2-points)" data-toc-modified-id="Step-1.-groupby-(2-points)-231"><span class="toc-item-num">23.1 </span>Step 1. 
groupby (2 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-pie-chart.-Add-percent-values.-(3-points)" data-toc-modified-id="Step-2.-pie-chart.-Add-percent-values.-(3-points)-232"><span class="toc-item-num">23.2 </span>Step 2. pie chart. Add percent values. (3 points)</a></div><div class="lev1 toc-item"><a href="#Q2.4-We're-building-a-traditional-lexicon-of-the-titles.-What-is-the-distribution-of-initial-letters-(i.e.-how-many-titles-start-with-S?)?-Plot-it-on-a-bar-chart." data-toc-modified-id="Q2.4-We're-building-a-traditional-lexicon-of-the-titles.-What-is-the-distribution-of-initial-letters-(i.e.-how-many-titles-start-with-S?)?-Plot-it-on-a-bar-chart.-24"><span class="toc-item-num">24 </span>Q2.4 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.</a></div><div class="lev2 toc-item"><a href="#Step-1.-Compute-frequencies.-(3-points)" data-toc-modified-id="Step-1.-Compute-frequencies.-(3-points)-241"><span class="toc-item-num">24.1 </span>Step 1. Compute frequencies. (3 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-it-on-a-bar-chart-in-descending-order.-(3-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart-in-descending-order.-(3-points)-242"><span class="toc-item-num">24.2 </span>Step 2. Plot it on a bar chart in descending order. (3 points)</a></div><div class="lev1 toc-item"><a href="#Problem-Set-3.-Handling-multiple-dataframes" data-toc-modified-id="Problem-Set-3.-Handling-multiple-dataframes-25"><span class="toc-item-num">25 </span>Problem Set 3. Handling multiple dataframes</a></div><div class="lev1 toc-item"><a href="#Merging-with-the-movies-dataframe" data-toc-modified-id="Merging-with-the-movies-dataframe-26"><span class="toc-item-num">26 </span>Merging with the movies dataframe</a></div><div class="lev2 toc-item"><a href="#How-many-ratings-are-timestamped-before-the-release-date?" data-toc-modified-id="How-many-ratings-are-timestamped-before-the-release-date?-261"><span class="toc-item-num">26.1 </span>How many ratings are timestamped <em>before</em> the release date?</a></div><div class="lev2 toc-item"><a href="#Which-movies-were-rated-before-the-release-date?" data-toc-modified-id="Which-movies-were-rated-before-the-release-date?-262"><span class="toc-item-num">26.2 </span>Which movies were rated before the release date?</a></div><div class="lev1 toc-item"><a href="#Q3.1.-How-many-times-was-each-movie-rated?" data-toc-modified-id="Q3.1.-How-many-times-was-each-movie-rated?-27"><span class="toc-item-num">27 </span>Q3.1. How many times was each movie rated?</a></div><div class="lev2 toc-item"><a href="#Step-1.-Compute-the-frequencies-of-movie-ratings.-Use-titles-instead-of-movie-ids.-(2-points)" data-toc-modified-id="Step-1.-Compute-the-frequencies-of-movie-ratings.-Use-titles-instead-of-movie-ids.-(2-points)-271"><span class="toc-item-num">27.1 </span>Step 1. Compute the frequencies of movie ratings. Use titles instead of movie ids. (2 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-the-frequencies-on-a-histogram.-(2-points)" data-toc-modified-id="Step-2.-Plot-the-frequencies-on-a-histogram.-(2-points)-272"><span class="toc-item-num">27.2 </span>Step 2. Plot the frequencies on a histogram. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q3.2.-How-many-ratings-were-submitted-by-each-day-of-the-week?-What-were-their-averages?" 
data-toc-modified-id="Q3.2.-How-many-ratings-were-submitted-by-each-day-of-the-week?-What-were-their-averages?-28"><span class="toc-item-num">28 </span>Q3.2. How many ratings were submitted by each day of the week? What were their averages?</a></div><div class="lev2 toc-item"><a href="#Step-1.-groupby-(3-points)" data-toc-modified-id="Step-1.-groupby-(3-points)-281"><span class="toc-item-num">28.1 </span>Step 1. groupby (3 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-number-of-ratings-per-day-(2-points)" data-toc-modified-id="Step-2.-number-of-ratings-per-day-(2-points)-282"><span class="toc-item-num">28.2 </span>Step 2. number of ratings per day (2 points)</a></div><div class="lev2 toc-item"><a href="#Step-3.-mean-rating-by-day-(2-points)" data-toc-modified-id="Step-3.-mean-rating-by-day-(2-points)-283"><span class="toc-item-num">28.3 </span>Step 3. mean rating by day (2 points)</a></div><div class="lev1 toc-item"><a href="#Q3.3**-What-is-the-mean-of-ratings-by-genre?" data-toc-modified-id="Q3.3**-What-is-the-mean-of-ratings-by-genre?-29"><span class="toc-item-num">29 </span>Q3.3** What is the mean of ratings by genre?</a></div><div class="lev2 toc-item"><a href="#Step-1.-Compute-the-mean-scores.-(5-points)" data-toc-modified-id="Step-1.-Compute-the-mean-scores.-(5-points)-291"><span class="toc-item-num">29.1 </span>Step 1. Compute the mean scores. (5 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-it-on-a-bar-chart-in-descending-order-by-score.-Set-the-limits-of-the-y-axis-to-(2.5,-4).-(3-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart-in-descending-order-by-score.-Set-the-limits-of-the-y-axis-to-(2.5,-4).-(3-points)-292"><span class="toc-item-num">29.2 </span>Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (3 points)</a></div><div class="lev1 toc-item"><a href="#Problem-Set-4.-User-demographics" data-toc-modified-id="Problem-Set-4.-User-demographics-30"><span class="toc-item-num">30 </span>Problem Set 4. User demographics</a></div><div class="lev1 toc-item"><a href="#Q4.1-Load-the-users-table-from-the-file-u.user.-(3-points)" data-toc-modified-id="Q4.1-Load-the-users-table-from-the-file-u.user.-(3-points)-31"><span class="toc-item-num">31 </span>Q4.1 Load the users table from the file <code>u.user</code>. (3 points)</a></div><div class="lev1 toc-item"><a href="#Q4.2-Merge-the-users-table-with-ratings.-Do-not-discard-any-columns.-(3-points)" data-toc-modified-id="Q4.2-Merge-the-users-table-with-ratings.-Do-not-discard-any-columns.-(3-points)-32"><span class="toc-item-num">32 </span>Q4.2 Merge the <code>users</code> table with <code>ratings</code>. Do not discard any columns. (3 points)</a></div><div class="lev1 toc-item"><a href="#Q-4.3-How-strict-are-people-by-occupation?-Compute-the-average-of-ratings-by-occupation.-Plot-it-on-a-bar-chart-in-descending-order." data-toc-modified-id="Q-4.3-How-strict-are-people-by-occupation?-Compute-the-average-of-ratings-by-occupation.-Plot-it-on-a-bar-chart-in-descending-order.-33"><span class="toc-item-num">33 </span>Q 4.3 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.</a></div><div class="lev2 toc-item"><a href="#Step-1.-Compute-the-averages-by-occupation.-(2-points)" data-toc-modified-id="Step-1.-Compute-the-averages-by-occupation.-(2-points)-331"><span class="toc-item-num">33.1 </span>Step 1. Compute the averages by occupation. 
(2 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-it-on-a-bar-chart.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-(2-points)-332"><span class="toc-item-num">33.2 </span>Step 2. Plot it on a bar chart. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q4.4*-Plot-the-averages-by-occupation-and-gender-on-a-multiple-bar-plot.-(4-points)" data-toc-modified-id="Q4.4*-Plot-the-averages-by-occupation-and-gender-on-a-multiple-bar-plot.-(4-points)-34"><span class="toc-item-num">34 </span>Q4.4* Plot the averages by occupation <em>and</em> gender on a multiple bar plot. (4 points)</a></div><div class="lev1 toc-item"><a href="#Q4.5-How-likely-are-different-age-groups-to-rate-movies?-Compute-the-number-of-ratings-by-age-grouped-into-10-19,-20-29,-etc.-Plot-it-on-a-bar-chart." data-toc-modified-id="Q4.5-How-likely-are-different-age-groups-to-rate-movies?-Compute-the-number-of-ratings-by-age-grouped-into-10-19,-20-29,-etc.-Plot-it-on-a-bar-chart.-35"><span class="toc-item-num">35 </span>Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.</a></div><div class="lev2 toc-item"><a href="#Step-1.-Number-of-ratings-by-age-group-(3-points)" data-toc-modified-id="Step-1.-Number-of-ratings-by-age-group-(3-points)-351"><span class="toc-item-num">35.1 </span>Step 1. Number of ratings by age group (3 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-it-on-a-bar-chart.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-(2-points)-352"><span class="toc-item-num">35.2 </span>Step 2. Plot it on a bar chart. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q4.6-What-hour-of-the-day-do-different-occupations-rate?-(3-points)" data-toc-modified-id="Q4.6-What-hour-of-the-day-do-different-occupations-rate?-(3-points)-36"><span class="toc-item-num">36 </span>Q4.6 What hour of the day do different occupations rate? (3 points)</a></div><div class="lev2 toc-item"><a href="#Create-a-function-that-computes-the-number-of-ratings-per-hour-for-a-single-occupation." data-toc-modified-id="Create-a-function-that-computes-the-number-of-ratings-per-hour-for-a-single-occupation.-361"><span class="toc-item-num">36.1 </span>Create a function that computes the number of ratings per hour for a single occupation.</a></div><div class="lev1 toc-item"><a href="#Q4.7*-Plot-the-rating-hours-of-marketing-employees-and-programmers-on-two-pie-charts.-(4-points)" data-toc-modified-id="Q4.7*-Plot-the-rating-hours-of-marketing-employees-and-programmers-on-two-pie-charts.-(4-points)-37"><span class="toc-item-num">37 </span>Q4.7* Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)</a></div><div class="lev1 toc-item"><a href="#Q4.8*-Do-older-people-prefer-movies-with-longer-titles?-Compute-the-average-title-length-by-age-group-(0-10,-10-20)." data-toc-modified-id="Q4.8*-Do-older-people-prefer-movies-with-longer-titles?-Compute-the-average-title-length-by-age-group-(0-10,-10-20).-38"><span class="toc-item-num">38 </span>Q4.8* Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).</a></div><div class="lev2 toc-item"><a href="#Step1.-compute-mean-length-(4-points)" data-toc-modified-id="Step1.-compute-mean-length-(4-points)-381"><span class="toc-item-num">38.1 </span>Step1. 
compute mean length (4 points)</a></div><div class="lev2 toc-item"><a href="#Step-2.-Plot-it-on-a-bar-chart.-Choose-a-reasonable-range-for-the-y-axis.-(2-points)" data-toc-modified-id="Step-2.-Plot-it-on-a-bar-chart.-Choose-a-reasonable-range-for-the-y-axis.-(2-points)-382"><span class="toc-item-num">38.2 </span>Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)</a></div><div class="lev1 toc-item"><a href="#Problem-Set-5.-A-simple-recommendation-system" data-toc-modified-id="Problem-Set-5.-A-simple-recommendation-system-39"><span class="toc-item-num">39 </span>Problem Set 5. A simple recommendation system</a></div><div class="lev1 toc-item"><a href="#Q5.1.-Extract-genre-information-as-a-matrix.-(2-points)" data-toc-modified-id="Q5.1.-Extract-genre-information-as-a-matrix.-(2-points)-40"><span class="toc-item-num">40 </span>Q5.1. Extract genre information as a matrix. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q5.2-Run-the-k-nearest-neighbor-algorithm-on-X.-(3-points)" data-toc-modified-id="Q5.2-Run-the-k-nearest-neighbor-algorithm-on-X.-(3-points)-41"><span class="toc-item-num">41 </span>Q5.2 Run the k-nearest neighbor algorithm on X. (3 points)</a></div><div class="lev1 toc-item"><a href="#Q5.3-Increment-by-one-(3-points)" data-toc-modified-id="Q5.3-Increment-by-one-(3-points)-42"><span class="toc-item-num">42 </span>Q5.3 Increment by one (3 points)</a></div><div class="lev1 toc-item"><a href="#Q5.4*-Find-the-movies-corresponding-to-these-indices-(5-points)" data-toc-modified-id="Q5.4*-Find-the-movies-corresponding-to-these-indices-(5-points)-43"><span class="toc-item-num">43 </span>Q5.4* Find the movies corresponding to these indices (5 points)</a></div><div class="lev1 toc-item"><a href="#Q5.5*-Replace-the-index-of-the-movie-by-its-title.-(2-points)" data-toc-modified-id="Q5.5*-Replace-the-index-of-the-movie-by-its-title.-(2-points)-44"><span class="toc-item-num">44 </span>Q5.5* Replace the index of the movie by its title. (2 points)</a></div><div class="lev1 toc-item"><a href="#Q5.6**-Improve-your-recommedation-system-by-adding-other-columns.-(5-points)" data-toc-modified-id="Q5.6**-Improve-your-recommedation-system-by-adding-other-columns.-(5-points)-45"><span class="toc-item-num">45 </span>Q5.6** Improve your recommedation system by adding other columns. (5 points)</a></div><div class="lev1 toc-item"><a href="#Q6*-Extra-(3-points)" data-toc-modified-id="Q6*-Extra-(3-points)-46"><span class="toc-item-num">46 </span>Q6* Extra (3 points)</a></div><div class="lev1 toc-item"><a href="#Submission" data-toc-modified-id="Submission-47"><span class="toc-item-num">47 </span>Submission</a></div>
# YOUR NAME (NEPTUN)
# Business Intelligence - Pandas
March 11, 2020
# Important steps before starting anything
Pandas is outdated on most lab computers. Upgrading can be done via Anaconda Prompt:
conda upgrade --all
conda upgrade pandas
Both commands ask for confirmation, just press Enter.
If Windows asks for any permissions, you can deny it (allowing would require Administrator privileges).
You should not see a warning here:
End of explanation
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
sns.set_context('notebook')
Explanation: General information
This goal of this notebook is to give a brief introduction to the pandas library, a popular data manipulation and analysis tool for Python.
Problems are numbered from Q1 to Q5 with many subproblems such as Q1.1. The scores range from 2 to 5 based on the difficulty of the problem. Grades are determined using this table:
| score | grade |
| ---- | ----|
| 80+ | 5 |
| 60+ | 4 |
| 40+ | 3 |
| 20+ | 2 |
| 20- | 1 |
Your answer should go in place of YOUR CODE HERE. Please remove raise NotImplementedError.
Most of the tasks are automatically graded using visible and hidden tests.
Visible tests are available in this version, hidden tests are not available to you.
This means that passing all visible tests does not ensure that your answer is correct.
Not passing the visible tests means that your answer is incorrect.
Do not delete or copy cells and do not edit the source code.
You may add cells but they will not be graded.
You will not find the hidden tests in the source code but you can mess up the autograder if you manually edit it.
VERY IMPORTANT Do not edit cell metadata or the raw .ipynb. Autograde will fail if you change something carelessly.
Advanced exercises are marked with *.
More advanced ones have more stars.
Some problems build on each other - it should be obvious how from the code - but advanced problems can safely be skipped.
Completing all non-advanced exercises correctly is worth 63 points.
Submission
You only need to submit this notebook (no separate report).
Make sure that you save the last version of your notebook.
Please rename the notebook to your neptun code (ABC123.ipynb), package it in an archive (.zip) and upload it to the class website.
You are free to continue working on this problem set after class but make sure that you upload it by the end of the week (Sunday).
VERY IMPORTANT Run Kernel->Restart & Run All and make sure that it finishes without errors.
If you skip exercises, you need to manually run the remaining cells.
You can run a single cell and step to the next cell by pressing Shift+Enter.
Skipping exercises won't affect the autograder.
Tips
You generally don't need to leave any DataFrames printed as cell outputs. You can do it for debug purposes and it won't affect the autograder but please don't leave long tables in the output. Use .head() instead.
Be concise. All exercises can be solved with less than 5 lines of code.
Avoid for loops. Almost all tasks can be solved with efficient pandas operations.
Avoid overriding Python built-in functions with your own variables (max = 2).
If you mess up, you can always do one of the following
1. Kernel -> Restart & Run All - this will run all cells from top to bottom until an exception is thrown
1. Kernel -> Restart, Cell -> Run All Above - this will run all cells from top to bottom until the current cell is reached or and exception is thrown
If your notebook runs for longer than a minute, one or more of your solutions is very inefficient.
It's easy to accidentally change the type of a cell from code to Markdown/raw text.
When this happens, you lose syntax highlight in that cell.
You can change it back in the toolbar or in the Cell menu.
Table of Contents
Jupyter has an extension called Table of Contents (2).
Table of Contents lists all headers in a separate frame and makes navigation much easier.
You can install and enable it the following way in Anaconda Prompt:
conda install -c conda-forge jupyter_contrib_nbextensions
python -m jupyter nbextension enable toc2/main
Save and refresh this notebook and you should see a new button in the toolbar that looks like a bullet point list.
That button turns on Table of Contents.
Credits
This assignment was created by Judit Ács using nbgrader.
Feedback
Please fill out this short survey after you completed the problems.
Code quality
You can get 3 extra points for code quality.
PEP8 style guide
You can get 2 extra points for adhering to the PEP8 Python style guide.
Figure quality
You can get 5 extra points for the quality of your figures. Good figures have labeled axes with meaningful names, reasonable figure size and reasonable axes limits.
Extra attention to details also helps.
Pie charts are ugly no matter what you do, do not sweat it too much, they are the Comic Sans of visualization.
Main imports
End of explanation
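# Editor's note: the import cell itself is not shown in this excerpt. Judging by the names
# used later in the notebook (np, pd, plt), it presumably contained something like:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline  # assumption: typical setup for inline plots in Jupyter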
import os
data_dir = os.getenv("MOVIELENS")
if data_dir is None:
data_dir = ""
ml_path = os.path.join(data_dir, "ml.zip")
if os.path.exists(ml_path):
print("File already exists, skipping download step.")
else:
print("Downloading the Movielens dataset")
    import urllib.request
    urllib.request.urlretrieve("http://files.grouplens.org/datasets/movielens/ml-100k.zip", ml_path)
unzip_path = os.path.join(data_dir, "ml-100k")
if os.path.exists(unzip_path):
print("Dataset already unpacked, skipping unpacking step.")
else:
print("Unziping the dataset.")
from zipfile import ZipFile
with ZipFile(ml_path) as myzip:
myzip.extractall(data_dir)
data_dir = unzip_path
Explanation: downloading the dataset
End of explanation
# df = pd.read_table("ml-100k/u.item")  # raises a UnicodeDecodeError because the wrong decoder is used
df = pd.read_table(os.path.join(data_dir, "u.item"), encoding="latin1")
df.head()
Explanation: Loading the dataset
pd.read_table loads a tabular dataset. The full function signature is:
~~~
pandas.read_table(filepath_or_buffer: Union[str, pathlib.Path, IO[~AnyStr]], sep='\t', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, skipfooter=0, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, cache_dates=True, iterator=False, chunksize=None, compression='infer', thousands=None, decimal: str = '.', lineterminator=None, quotechar='"', quoting=0, doublequote=True, escapechar=None, comment=None, encoding=None, dialect=None, error_bad_lines=True, warn_bad_lines=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)
~~~
let's try it with defaults
End of explanation
column_names = [
"movie_id", "title", "release_date", "video_release_date", "imdb_url", "unknown", "action", "adventure", "animation",
"children", "comedy", "crime", "documentary", "drama", "fantasy", "film_noir", "horror", "musical", "mystery",
"romance", "sci_fi", "thriller", "war", "western"]
df = pd.read_table(
os.path.join(data_dir, "u.item"), sep="|",
names=column_names, encoding="latin1", index_col='movie_id',
parse_dates=['release_date', 'video_release_date']
)
df.head()
Explanation: A couple of improvements:
Use a different separator: | instead of \t.
The first line of the file is used as the header. The real names of the columns are listed in the README; they can be specified with the names parameter.
read_table added an index (0..N-1), but the dataset already has an index, so let's use that one (index_col='movie_id').
Two columns, release_date and video_release_date, are dates; pandas can parse them and create its own datetime type.
End of explanation
print(", ".join([d for d in dir(df.release_date.dt) if not d.startswith('_')]))
Explanation: Normalizing the dataset
the .dt and the .str namespaces
The year in the title seems redundant, let's check if it's always the same as the release date. The .dt namespace has various methods and attributes for handling dates, and the .str namespace has string handling methods.
.dt namespace
End of explanation
print(", ".join([d for d in dir(df.title.str) if not d.startswith('_')]))
Explanation: .str namespace
End of explanation
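# Editor's sketch (illustrative only): a few of the .str and .dt accessors listed above
df.title.str.lower().head()       # lowercase titles
df.title.str.len().head()         # title length in characters
df.release_date.dt.month.head()   # month component of the release date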
df.title.str[1].head()
df.title.str[-5:].tail()
Explanation: .str can also be indexed like a string
End of explanation
df['year'] = df.release_date.dt.year
Explanation: Let's extract the release year of each movie into a separate column.
End of explanation
df.year.value_counts().head()
Explanation: the most common years are
End of explanation
df.video_release_date.isnull().value_counts()
df = df.drop('video_release_date', axis=1)
Explanation: video_release_date is always NaT (not a time), let's drop it
End of explanation
df.describe()
Explanation: Basic analysis of the dataset
describe generates descriptive statistics.
End of explanation
df.release_date.describe()
Explanation: Only numeric columns are included by default. A single column (pd.Series) has a describe function too
End of explanation
df.mean()
Explanation: Numeric statistics are available as separate functions too:
count: number of non-NA cells. NA is NOT the same as 0
mean: average
std: standard deviation
var: variance
min, max
etc.
End of explanation
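# Editor's sketch: a few of the separate statistic functions mentioned above
df.year.min(), df.year.max()   # earliest and latest release year
df.year.std()                  # standard deviation of the release years
df.count().head()              # number of non-NA cells per column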
df[df.release_date.dt.year==1956]
Explanation: Basic queries
Which movies were released in 1956?
End of explanation
d = df[(df.release_date.dt.year >= 1980) & (df.release_date.dt.year < 1990)]
print(f"{len(d)} movies were released in the 80s.")
print("\nA few examples:")
print("\n".join(d.sample(5).title))
Explanation: How many movies were released in the 80s?
Let's print 5 examples too.
End of explanation
df[df.title.str.contains('Die Hard')]
Explanation: When were the Die Hard movies released?
End of explanation
df.release_date.min(), df.release_date.max()
Explanation: Die Hard 4 and 5 are missing. This is because the dataset only contains movies released between:
End of explanation
print("Action and romance:", len(df[(df.action==1) & (df.romance==1)]))
print("Action or romance:", len(df[(df.action==1) | (df.romance==1)]))
Explanation: and Die Hard 4 and 5 were released in 2007 and 2013 respectively.
How many movies are both action and romance? What about action or romance?
Make sure you parenthesize the conditions
End of explanation
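# Editor's note on why the parentheses matter: & binds tighter than ==, so without them
# Python evaluates 1 & df.romance first and then tries to interpret a whole Series as a
# single boolean, which raises "The truth value of a Series is ambiguous".
# df[df.action == 1 & df.romance == 1]             # raises ValueError
df[(df.action == 1) & (df.romance == 1)].head()    # correct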
def count_movies_before_1985(df):
# YOUR CODE HERE
raise NotImplementedError()
def count_movies_after_1984(df):
# YOUR CODE HERE
raise NotImplementedError()
before = count_movies_before_1985(df)
assert isinstance(before, int) or isinstance(before, np.integer)
# action movies only, not all!
assert before != 272
after = count_movies_after_1984(df)
assert isinstance(after, int) or isinstance(after, np.integer)
Explanation: Problem Set 1: simple queries
Q1.1 How many action movies were released before 1985 and in 1985 or later? (2 points)
End of explanation
def child_thriller(df):
# YOUR CODE HERE
raise NotImplementedError()
title = child_thriller(df)
assert isinstance(title, str)
Explanation: Q1.2 Are there thrillers for children? Find an example. (2 points)
End of explanation
def long_titles(df):
# YOUR CODE HERE
raise NotImplementedError()
title_cnt = long_titles(df)
assert isinstance(title_cnt, int) or isinstance(title_cnt, np.integer)
Explanation: Q1.3 How many movies have title longer than 40 characters? (3 points)
End of explanation
def number_of_words(field):
return len(field.split(" "))
df.title.apply(number_of_words).value_counts().sort_index()
# or
# df.title.apply(lambda t: len(t.split(" "))).value_counts().sort_index()
df.title.apply(number_of_words).value_counts().sort_index().plot(kind='bar')
Explanation: pd.Series.apply: running arbitrary functions on each element
The apply function allows running arbitrary functions on a pd.Series:
End of explanation
#df['title_word_cnt'] = ...
def count_content_words(title):
# YOUR CODE HERE
raise NotImplementedError()
df['title_word_cnt'] = df.title.apply(count_content_words)
assert 'title_word_cnt' in df.columns
assert df.loc[1424, 'title_word_cnt'] == 5
assert df.loc[1170, 'title_word_cnt'] == 2
df.title_word_cnt.value_counts().sort_index().plot(kind='bar')
Explanation: Q1.4* How many content words does the average title have? Which title has the most/least words?
Content words are capitalized. The opposite of content words are function words (and, a, an, the, more etc.).
We should not include the release year in the word count. The release year is always the last word of the title.
Step 1. Count words in the title (2 points)
End of explanation
#shortest_title = ...
#longest_title = ...
#shortest_title_len = ...
#longest_title_len = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(shortest_title, str)
assert isinstance(longest_title, str)
assert isinstance(shortest_title_len, np.integer) or isinstance(shortest_title_len, int)
assert isinstance(longest_title_len, np.integer) or isinstance(longest_title_len, int)
assert shortest_title_len == 0
assert longest_title == 'Englishman Who Went Up a Hill, But Came Down a Mountain, The (1995)'
assert shortest_title == 'unknown'
assert longest_title_len == 10
Explanation: Step 2. Shortest and longest titles by word count (3 points)
End of explanation
def movies_with_word(df, word):
# YOUR CODE HERE
raise NotImplementedError()
and_movies = movies_with_word(df, "and")
assert isinstance(and_movies, pd.DataFrame)
assert and_movies.shape == (66, 24)
assert 'Mr. Holland\'s Opus (1995)' not in and_movies.title.values
assert movies_with_word(df, "The").shape == (465, 24)
assert movies_with_word(df, "the").shape == (465, 24)
Explanation: Q1.5* How many movies have the word 'and' in their title? Write a function that counts movies with a particular word. (3 points)
Disregard case and avoid matching subwords. For example, 'and' should not match 'Holland', nor should it match the movie 'Andrew'.
End of explanation
df.groupby('year').size().plot()
Explanation: Groupby and visualization
How many movies are released each year?
End of explanation
df.year.value_counts().sort_index().plot()
Explanation: another option is to use pd.Series.value_counts
End of explanation
fig, ax = plt.subplots(1, figsize=(10, 6))
d = df[df.year>1985]
d.groupby('year').size().plot(kind='bar', ax=ax)
Explanation: most movies were released in the late 80s and 90s, let's zoom in. Let's also change the figure size.
We create the plot object with one subplot, then specify which axis pandas should use for plotting (ax=ax).
End of explanation
df.groupby(['action', 'romance']).size()
Explanation: we can groupby on multiple columns
Back to the romance-action combinations
End of explanation
df.groupby(df.year // 10 * 10).size()
Explanation: we can also group on arbitrary conditions
how many movies were released each decade?
End of explanation
def group_genre_by_year(df, genre):
# YOUR CODE HERE
raise NotImplementedError()
crime = group_genre_by_year(df, 'crime')
assert type(crime) == pd.core.groupby.DataFrameGroupBy
assert len(crime) <= 15 # movies between 1985-1999
Explanation: Problem Set 2: Groupby and visualization
Q2.1 Write a function that takes a genre and groups movies of that genre by year. Do not include movies older than 1985. (3 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q2.2 Plot the number of adventure movies from 1985 to 1999 on a bar chart. Use your group_genre_by_year function. (2 points)
End of explanation
def groupby_release_day(df):
# YOUR CODE HERE
raise NotImplementedError()
by_day = groupby_release_day(df)
assert type(by_day) == pd.core.groupby.DataFrameGroupBy
# the longest month is 31 days
assert len(by_day) < 32
# not month but DAY of month
assert len(by_day) != 12
# shouldn't group on day of week
assert len(by_day) > 7
Explanation: Q2.3 Plot the distribution of release day (day of month) on a pie chart.
Step 1. groupby (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. pie chart. Add percent values. (3 points)
End of explanation
def compute_initial_letter_frequencies(df):
# YOUR CODE HERE
raise NotImplementedError()
initial = compute_initial_letter_frequencies(df)
assert type(initial) == pd.Series
# frequency counts should be >= 1
assert initial.min() >= 1
# the largest one cannot be larger than the full dataframe
assert initial.max() <= len(df)
# there are 32 initial letters in the dataset
assert len(initial) == 32
assert 'B' in initial.index
Explanation: Q2.4 We're building a traditional lexicon of the titles. What is the distribution of initial letters (i.e. how many titles start with S?)? Plot it on a bar chart.
Step 1. Compute frequencies. (3 points)
You don't need to perform any preprocessing.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart in descending order. (3 points)
The most common letter should be the first bar.
End of explanation
cols = ['user', 'movie_id', 'rating', 'timestamp']
ratings = pd.read_table(os.path.join(data_dir, "u.data"), names=cols)
ratings.head()
Explanation: Problem Set 3. Handling multiple dataframes
The main table of this dataset is u.data with 100000 ratings.
End of explanation
ratings['timestamp'] = pd.to_datetime(ratings.timestamp, unit='s')
ratings.head()
Explanation: The timestamp column is a Unix timestamp, let's convert it to pd.DateTime:
End of explanation
movies = df
ratings = pd.merge(ratings, movies, left_on='movie_id', right_index=True)
Explanation: Merging with the movies dataframe
We overwrite ratings:
End of explanation
(ratings.timestamp <= ratings.release_date).value_counts()
Explanation: How many ratings are timestamped before the release date?
End of explanation
ratings[ratings.timestamp <= ratings.release_date].title.value_counts()
Explanation: Which movies were rated before the release date?
End of explanation
def compute_movie_rated_frequencies(df):
# YOUR CODE HERE
raise NotImplementedError()
title_freq = compute_movie_rated_frequencies(ratings)
assert isinstance(title_freq, pd.Series)
# use titles
assert 'Lone Star (1996)' in title_freq.index
Explanation: Q3.1. How many times was each movie rated?
Step 1. Compute the frequencies of movie ratings. Use titles instead of movie ids. (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot the frequencies on a histogram. (2 points)
pd.Series has a hist function. It uses 10 bins by default, use more.
End of explanation
def groupby_day_of_week(ratings):
# YOUR CODE HERE
raise NotImplementedError()
by_day = groupby_day_of_week(ratings)
assert isinstance(by_day, pd.core.groupby.generic.DataFrameGroupBy)
# there are 7 days
assert len(by_day) == 7
# use names of the days
assert 'Monday' in by_day.groups
Explanation: Q3.2. How many ratings were submitted by each day of the week? What were their averages?
Step 1. groupby (3 points)
Tip: look around in the .dt namespace.
End of explanation
#number_of_ratings_per_day = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(number_of_ratings_per_day, pd.Series)
assert len(number_of_ratings_per_day) == 7
assert number_of_ratings_per_day.min() > 10000
Explanation: Step 2. number of ratings per day (2 points)
End of explanation
#mean_rating_by_day = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(mean_rating_by_day, pd.Series)
Explanation: Step 3. mean rating by day (2 points)
End of explanation
genres = ['unknown', 'action', 'adventure', 'animation',
'children', 'comedy', 'crime', 'documentary', 'drama', 'fantasy',
'film_noir', 'horror', 'musical', 'mystery', 'romance', 'sci_fi',
'thriller', 'war', 'western']
def compute_mean_rating_by_genre(ratings):
# YOUR CODE HERE
raise NotImplementedError()
genre_rating = compute_mean_rating_by_genre(ratings)
assert len(genre_rating) == len(genres)
# all means are between 3 and 4
assert genre_rating.min() > 3.0
assert genre_rating.max() < 4.0
# film noir is rated highest
assert genre_rating.idxmax() == 'film_noir'
for g in genres:
assert g in genre_rating.index
Explanation: Q3.3** What is the mean of ratings by genre?
If a movie has multiple genres, include it in every genre.
Step 1. Compute the mean scores. (5 points)
There are many ways to solve this problem. Try to do it without explicit for loops.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart in descending order by score. Set the limits of the y-axis to (2.5, 4). (3 points)
End of explanation
# users = ...
# YOUR CODE HERE
raise NotImplementedError()
assert type(users) == pd.DataFrame
# user ids start from 1, not 0
assert 0 not in users.index
assert users.shape == (943, 4)
# user_id should be the index
assert 'user_id' not in users.columns
Explanation: Problem Set 4. User demographics
Q4.1 Load the users table from the file u.user. (3 points)
u.user has the following columns: user_id, age, gender, occupation, zip. Use user_id as the index.
End of explanation
# ratings = ratings.merge...
# YOUR CODE HERE
raise NotImplementedError()
assert type(ratings) == pd.DataFrame
# all movies have ratings (nunique return the number of unique elements)
assert ratings.movie_id.nunique() == 1682
Explanation: Q4.2 Merge the users table with ratings. Do not discard any columns. (3 points)
End of explanation
def compute_mean_by_occupation(ratings):
# YOUR CODE HERE
raise NotImplementedError()
mean_by_occupation = compute_mean_by_occupation(ratings)
assert isinstance(mean_by_occupation, pd.Series)
# ratings are between 1 and 5
assert mean_by_occupation.min() > 1
assert mean_by_occupation.max() < 5
Explanation: Q4.3 How strict are people by occupation? Compute the average of ratings by occupation. Plot it on a bar chart in descending order.
Step 1. Compute the averages by occupation. (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. (2 points)
Make the bar chart wider and restrict the y-axis to (2.5, 4).
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.4* Plot the averages by occupation and gender on a multiple bar plot. (4 points)
Tip: there is an example of a multiple bar plot here
Tip 2: there are many ways to solve this problem, one is a one-liner using DataFrame.unstack.
End of explanation
def count_ratings_by_age_group(ratings):
# YOUR CODE HERE
raise NotImplementedError()
rating_by_age_group = count_ratings_by_age_group(ratings)
assert isinstance(rating_by_age_group, pd.Series)
assert 20 in rating_by_age_group
assert 25 not in rating_by_age_group
Explanation: Q4.5 How likely are different age groups to rate movies? Compute the number of ratings by age grouped into 10-19, 20-29, etc. Plot it on a bar chart.
Step 1. Number of ratings by age group (3 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. (2 points)
End of explanation
def count_rating_by_hour_occupation(ratings, occupation):
# YOUR CODE HERE
raise NotImplementedError()
marketing = count_rating_by_hour_occupation(ratings, "marketing")
assert isinstance(marketing, pd.Series)
# there are only 24 hours
assert len(marketing) < 25
Explanation: Q4.6 What hour of the day do different occupations rate? (3 points)
Create a function that computes the number of ratings per hour for a single occupation.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q4.7* Plot the rating hours of marketing employees and programmers on two pie charts. (4 points)
A two-subplot figure is created. ax is an array of the two subplots, use ax[0] for marketing employees and ax[1] for programmers. Set the titles of the subplots accordingly.
End of explanation
def get_mean_length_by_age_group(ratings):
# YOUR CODE HERE
raise NotImplementedError()
title_len_by_age = get_mean_length_by_age_group(ratings)
assert isinstance(title_len_by_age, pd.Series)
assert len(title_len_by_age) == 8
# titles are long
assert title_len_by_age.min() >= 20
# index should contain the lower bound of the age group
assert 0 in title_len_by_age.index
assert 20 in title_len_by_age.index
# the maximum age in the dataset is 73, there should be no 80-90 age group
assert 80 not in title_len_by_age.index
Explanation: Q4.8* Do older people prefer movies with longer titles? Compute the average title length by age group (0-10, 10-20).
Step1. compute mean length (4 points)
Tip: You should probably create a copy of some of the columns.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Step 2. Plot it on a bar chart. Choose a reasonable range for the y-axis. (2 points)
End of explanation
#X = ...
# YOUR CODE HERE
raise NotImplementedError()
assert isinstance(X, np.ndarray)
# shape should be movies X genres
assert X.shape == (1682, 19)
assert list(np.unique(X)) == [0, 1]
Explanation: Problem Set 5. A simple recommendation system
Let's build a simple recommendation system that finds similar movies based on genre.
Q5.1. Extract genre information as a matrix. (2 points)
The .values attribute represents the underlying values as a Numpy ndarray.
End of explanation
from sklearn.neighbors import NearestNeighbors
def run_knn(X, K):
# YOUR CODE HERE
raise NotImplementedError()
assert run_knn(X, 2).shape == (1682, 2)
K = 4
indices = run_knn(X, K)
assert isinstance(indices, np.ndarray)
assert indices.shape[1] == K
Explanation: Q5.2 Run the k-nearest neighbor algorithm on X. (3 points)
Find a usage example in the documentation of NearestNeighbors.
Store the indices in a variable names indices.
K is the number of nearest neighbors. It should be a parameter of your function.
End of explanation
ind = pd.DataFrame(indices)
ind.head()
Explanation: indices is more convenient as a DataFrame
End of explanation
def increment_table(df):
# YOUR CODE HERE
raise NotImplementedError()
indices = increment_table(ind)
assert indices.shape[1] == 4
assert indices.index[0] == 1
assert indices.index[-1] == len(indices)
indices.head()
Explanation: Q5.3 Increment by one (3 points)
The index of this DataFrame refers to a particular movie and the values in each row are the indices of similar movies. The problem is that this matrix is zero-indexed, while the dataset (movies table) is indexed from 1.
Both the index and all values should be increased by one.
End of explanation
def find_neighbor_titles(movies, indices):
# YOUR CODE HERE
raise NotImplementedError()
neighbors = find_neighbor_titles(movies, indices)
assert isinstance(neighbors, pd.DataFrame)
assert neighbors.shape[1] == K
neighbors.head()
Explanation: Q5.4* Find the movies corresponding to these indices (5 points)
You'll need multiple merge operations.
Tip: the names of the columns in indices are not strings but integers, you can rename the columns of a dataframe:
~~~
df = df.rename(columns={'old': 'new', 'other old': 'other new'})
~~~
You can discard all other columns.
End of explanation
def recover_titles(movies, neighbors):
# YOUR CODE HERE
raise NotImplementedError()
most_similar = recover_titles(movies, neighbors)
assert type(most_similar) == pd.DataFrame
assert "Toy Story (1995)" in most_similar.index
Explanation: Q5.5* Replace the index of the movie by its title. (2 points)
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q5.6** Improve your recommendation system by adding other columns. (5 points)
Tips: you can add the average rating of a movie by occupation/age group/gender
Please fit your solution in one cell.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
Explanation: Q6* Extra (3 points)
Add any extra observations that you find interesting. Did you find any interesting patterns in the dataset? Are certain genres more appealing to particular demographic groups?
You can add multiple observations for extra points.
Please explain your answer in this field (double click on YOUR CODE GOES HERE):
YOUR ANSWER HERE
And the code here:
End of explanation |
8,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3>artcontrol gallery</h3>
Create gallery for artcontrol artwork.
Uses Year / Month / Day format.
Create blog post for each day there is a post.
It will need to list the files for that day and create a markdown file in posts that contains the artwork: the name of the art, followed by each piece of artwork (line, bw, color).
write a message about each piece of artwork.
Step1: check to see if that blog post name already exists; if so, error and ask for something more unique!
input art piece writers. Shows the art then asks for input, appending the input below the artwork. Give a name for the art that is appended above.
Step2: | Python Code:
import os
import arrow
import getpass
raw = arrow.now()
myusr = getpass.getuser()
galpath = ('/home/{}/git/artcontrolme/galleries/'.format(myusr))
popath = ('/home/{}/git/artcontrolme/posts/'.format(myusr))
class DayStuff():
def getUsr():
return getpass.getuser()
    def reTime():
        return raw
def getYear():
return raw.strftime("%Y")
def getMonth():
return raw.strftime("%m")
def getDay():
return raw.strftime("%d")
    def Fullday():
        return (DayStuff.getYear() + '/' + DayStuff.getMonth() + '/' + DayStuff.getDay())
    def fixDay():
        return (raw.strftime('%Y/%m/%d'))
    #def postPath():
    #return ('/home/{}/git/artcontrolme/posts/'.format(myusr))
    def listPath():
        return os.listdir(popath)
    #def galleryPath():
    # return (galpath)
    def galyrPath():
        return ('{}{}'.format(galpath, DayStuff.getYear()))
    def galmonPath():
        return ('{}{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth()))
    def galdayPath():
        return ('{}{}/{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth(), DayStuff.getDay()))
    def galleryList():
        return os.listdir('/home/{}/git/artcontrolme/galleries/'.format(myusr))
    def galyrList():
        return os.listdir(DayStuff.galyrPath())
    def galmonList():
        return os.listdir(DayStuff.galmonPath())
    def galdayList():
        return os.listdir(DayStuff.galdayPath())
    # create the year/month/day gallery folders when they do not exist yet
    def checkYear():
        if DayStuff.getYear() not in DayStuff.galleryList():
            return os.mkdir(DayStuff.galyrPath())
    def checkMonth():
        if DayStuff.getMonth() not in DayStuff.galyrList():
            return os.mkdir(DayStuff.galmonPath())
    def checkDay():
        if DayStuff.getDay() not in DayStuff.galmonList():
            return os.mkdir(DayStuff.galdayPath())
#def makeDay
#DayStuff.getUsr()
#DayStuff.getYear()
#DayStuff.getMonth()
#DayStuff.getDay()
#DayStuf
#DayStuff.Fullday()
#DayStuff.postPath()
#DayStuff.
#DayStuff.galmonPath()
#DayStuff.galdayPath()
#DayStuff.galyrList()
#getDay()
#getMonth()
#galleryList()
#DayStuff.checkDay()
#DayStuff.galyrList()
#DayStuff.galmonList()
#DayStuff.checkDay()
#DayStuff.checl
#DayStuff.checkMonth()
#DayStuff.galyrList()
#listPath()
#if getYear() not in galleryList():
# os.mkdir('{}{}'.format(galleryPath(), getYear()))
#galleryPath()
#fixDay()
#galleryPath()
#Fullday()
#getDay()
#getYear()
#getMonth()
#getusr()
#yraw = raw.strftime("%Y")
#mntaw = raw.strftime("%m")
#dytaw = raw.strftime("%d")
#fulda = yraw + '/' + mntaw + '/' + dytaw
#fultim = fulda + ' ' + raw.strftime('%H:%M:%S')
#arnow = arrow.now()
#curyr = arnow.strftime('%Y')
#curmon = arnow.strftime('%m')
#curday = arnow.strftime('%d')
#galerdir = ('/home/wcmckee/github/artcontrolme/galleries/')
#galdir = os.listdir('/home/wcmckee/github/artcontrolme/galleries/')
#galdir
#mondir = os.listdir(galerdir + curyr)
#daydir = os.listdir(galerdir + curyr + '/' + curmon )
#daydir
#galdir#
#mondir
#daydir
#if curyr in galdir:
# pass
#else:
# os.mkdir(galerdir + curyr)
#if curmon in mondir:
# pass
#else:
# os.mkdir(galerdir + curyr + '/' + curmon)
#fulldaypath = (galerdir + curyr + '/' + curmon + '/' + curday)
#if curday in daydir:
# pass
#else:
# os.mkdir(galerdir + curyr + '/' + curmon + '/' + curday)
#galdir
#mondir
#daydir
#str(arnow.date())
#nameofblogpost = input('Post name: ')
Explanation: <h3>artcontrol gallery</h3>
Create gallery for artcontrol artwork.
Uses Year / Month / Day format.
Create blog post for each day there is a post.
It will need to list the files for that day and create a markdown file in posts that contains the artwork: the name of the art, followed by each piece of artwork (line, bw, color).
write a message about each piece of artwork.
End of explanation
#daypost = open('/home/{}/github/artcontrolme/posts/{}.md'.format(getusr(), nameofblogpost), 'w')
#daymetapost = open('/home/{}/github/artcontrolme/posts/{}.meta'.format(getUsr(), nameofblogpost), 'w')
#daymetapost.write('.. title: ' + nameofblogpost + ' \n' + '.. slug: ' + nameofblogpost + ' \n' + '.. date: ' + fultim + ' \n' + '.. author: wcmckee')
#daymetapost.close()
#todayart = os.listdir(fulldaypath)
#titlewor = list()
#titlewor
Explanation: check to see if that blog post name already exists; if so, error and ask for something more unique!
input art piece writers. Shows the art then asks for input, appending the input below the artwork. Give a name for the art that is appended above.
End of explanation
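# Editor's sketch (hypothetical, not the author's original code): one way to implement the
# flow described above - re-prompt until the post name is unique, then show each artwork
# file name, ask for a caption, and append a heading, the image link and the caption.
# The post_dir / gallery_web_path / artwork_files arguments stand in for the popath,
# galpath and todayart values used in the commented-out cells around here.
def write_day_post(post_dir, gallery_web_path, artwork_files):
    name = input('Post name: ')
    while os.path.exists(os.path.join(post_dir, name + '.md')):
        print('A post with that name already exists - try something more unique!')
        name = input('Post name: ')
    with open(os.path.join(post_dir, name + '.md'), 'w') as post:
        for art in sorted(artwork_files):
            title = art.replace('.png', '')
            caption = input('Write something about ' + title + ': ')
            post.write('## ' + title + '\n\n')
            post.write('![' + title + '](' + gallery_web_path + art + ')\n\n')
            post.write(caption + '\n\n')
    return name

# example call (illustrative values only):
# write_day_post(popath, '/galleries/2016/01/01/', ['sketch-line.png', 'sketch-bw.png', 'sketch-color.png'])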
#galpath = ('/galleries/' + curyr + '/' + curmon + '/' + curday + '/')
#galpath
#todayart.sort()
#todayart
#for toar in todayart:
# daypost.write(('!' + '[' + toar.strip('.png') + '](' + galpath + toar + ')\n'))
#daypost.close()
Explanation:
End of explanation |
8,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step11: Vertex constants
Setup up the following constants for Vertex
Step12: AutoML constants
Set constants unique to AutoML datasets and training
Step13: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify
Step14: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Job Service for batch prediction and custom training.
Step16: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config"
Step17: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
Step18: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step19: Now save the unique dataset identifier for the Dataset resource instance you created.
Step20: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps
Step21: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step22: Now save the unique identifier of the training pipeline you created.
Step23: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step24: Deployment
Training the above model may take upwards of 30 minutes time.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step25: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step26: Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you
Step27: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is only supported for CSV. For CSV file, you make
Step28: Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests
Step29: Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters
Step30: Now get the unique identifier for the batch prediction job you created.
Step31: Get information on a batch prediction job
Use this helper function get_batch_prediction_job, with the following parameter
Step33: Get the predictions with explanations
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally you view the predictions and corresponding explanations stored at the Cloud Storage path you set as output. The explanations will be in a CSV format, which you indicated at the time we made the batch explanation job, under a subfolder starting with the name prediction, and under that folder will be a file called explanations*.csv.
Now display (cat) the contents. You will see one line for each explanation.
The first four fields are the values (features) you did the prediction on.
The remaining fields are the confidence values, between 0 and 1, for each prediction.
Step34: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML tabular classification model for batch prediction with explanation
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_classification_batch_explain.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular classification models and do batch prediction with explanation using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the Iris dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor.
Objective
In this tutorial, you create an AutoML tabular classification model from a Python script, and then do a batch prediction with explainability using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction with explainability.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import time
import google.cloud.aiplatform_v1beta1 as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular classification model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Job Service for batch prediction and custom training.
End of explanation
IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv"
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery query.
CSV
For tabular classification, the CSV file has a few requirements:
The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.
All but one column are features.
One column is the label, which you will specify when you subsequently create the training pipeline.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
Explanation: Quick peek at your data
You will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
For training, you also need to know the heading name of the label column, which is saved as label_column. For this dataset, it is the last column in the CSV file.
End of explanation
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("iris-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Database, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
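For illustration, here is a minimal sketch of polling with these methods instead of blocking on result(). It assumes you have a reference to the operation object (inside the helper above it is the local variable operation), and the 10-second polling interval is an arbitrary choice, not part of the original notebook:
```python
import time

# Sketch: poll the long-running operation instead of blocking on result().
while not operation.done():
    print("Dataset creation still running ...")
    time.sleep(10)  # arbitrary polling interval

print("Operation finished:", operation.result())
```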
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML tabular classification model using your Vertex Dataset resource. To train the model, do the following steps:
Create a Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and run as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's now look deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
TRANSFORMATIONS = [
{"auto": {"column_name": "sepal_width"}},
{"auto": {"column_name": "sepal_length"}},
{"auto": {"column_name": "petal_length"}},
{"auto": {"column_name": "petal_width"}},
]
PIPE_NAME = "iris_pipe-" + TIMESTAMP
MODEL_NAME = "iris_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get the pipeline information for just this training pipeline instance. The helper function gets the information for just this job by calling the pipeline client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance, as the field model_to_upload.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably have only one), we print all the key names for each metric in the evaluation, and for a small subset (logLoss and auPrc) we print the values.
End of explanation
HEADING = "petal_length,petal_width,sepal_length,sepal_width"
INSTANCE_1 = "1.4,1.3,5.1,2.8"
INSTANCE_2 = "1.5,1.2,4.7,2.4"
Explanation: Model deployment for batch prediction
Now deploy the trained Vertex Model resource you created for batch prediction. This differs from deploying a Model resource for on-demand prediction.
For online prediction, you:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Make online prediction requests to the Endpoint resource.
For batch-prediction, you:
Create a batch prediction job.
The job service will provision resources for the batch prediction request.
The results of the batch prediction request are returned to the caller.
The job service will unprovision the resources for the batch prediction request.
Make a batch prediction request
Now do a batch prediction to your deployed model.
Make test items
You will use synthetic data as the test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
f.write(HEADING + "\n")
f.write(str(INSTANCE_1) + "\n")
f.write(str(INSTANCE_2) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is supported only as CSV. For a CSV file, you make:
The first line is the heading with the feature (fields) heading names.
Each remaining line is a separate prediction request with the corresponding feature values.
For example:
"feature_1", "feature_2". ...
value_1, value_2, ...
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests:
Single Instance: The batch prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.
Auto Scaling: The batch prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
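As a quick illustration (the values below are arbitrary examples, not from the original notebook), the three strategies differ only in how you set these two variables:
```python
# Sketch: example settings for each scaling strategy (pick one).
MIN_NODES, MAX_NODES = 1, 1    # single instance
# MIN_NODES, MAX_NODES = 4, 4  # manual scaling: fixed pool of 4 nodes
# MIN_NODES, MAX_NODES = 1, 8  # auto scaling: start at 1 node, scale up to 8 under load
```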
End of explanation
BATCH_MODEL = "iris_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
"generate_explanation": True,
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "csv"
OUT_FORMAT = "csv" # [csv]
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
Explanation: Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function create_batch_prediction_job, with the following parameters:
display_name: The human readable name for the prediction job.
model_name: The Vertex fully qualified identifier for the Model resource.
gcs_source_uri: The Cloud Storage path to the input file -- which you created above.
gcs_destination_output_uri_prefix: The Cloud Storage path that the service will write the predictions to.
parameters: Additional filtering parameters for serving prediction results.
The helper function calls the job client service's create_batch_prediction_job method, with the following parameters:
parent: The Vertex location root path for Dataset, Model and Pipeline resources.
batch_prediction_job: The specification for the batch prediction job.
Let's now dive into the specification for the batch_prediction_job:
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
dedicated_resources: The compute resources to provision for the batch prediction job.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
starting_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
model_parameters: Additional filtering parameters for serving prediction results. Note, image segmentation models do not support additional parameters.
input_config: The input source and format type for the instances to predict.
instances_format: The format of the batch prediction request file: csv only supported.
gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
output_config: The output destination and format for the predictions.
prediction_format: The format of the batch prediction response file: csv only supported.
gcs_destination: The output destination for the predictions.
This call is an asynchronous operation. You will print from the response object a few select fields, including:
name: The Vertex fully qualified identifier assigned to the batch prediction job.
display_name: The human readable name for the prediction batch job.
model: The Vertex fully qualified identifier for the Model resource.
generate_explanation: Whether True/False explanations are provided with the predictions (explainability).
state: The state of the prediction job (pending, running, etc).
Since this call will take a few moments to execute, you will likely get JobState.JOB_STATE_PENDING for state.
End of explanation
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
Explanation: Now get the unique identifier for the batch prediction job you created.
End of explanation
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
Explanation: Get information on a batch prediction job
Use this helper function get_batch_prediction_job, with the following parameter:
job_name: The Vertex fully qualified identifier for the batch prediction job.
The helper function calls the job client service's get_batch_prediction_job method, with the following parameter:
name: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- batch_job_id
The helper function will return the Cloud Storage path to where the predictions are stored -- gcs_destination.
End of explanation
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name."""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/explanation*.csv
! gsutil cat $folder/explanation*.csv
break
time.sleep(60)
Explanation: Get the predictions with explanations
When the batch prediction is done processing, the job state will be JOB_STATE_SUCCEEDED.
Finally, you view the predictions and corresponding explanations stored at the Cloud Storage path you set as output. The explanations are in CSV format (as you specified when creating the batch prediction job), under a subfolder whose name starts with prediction; inside that folder is a file named explanation*.csv.
Now display (cat) the contents. You will see one line for each explanation.
The first four fields are the values (features) you did the prediction on.
The remaining fields are the confidence values, between 0 and 1, for each prediction.
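If you prefer to inspect the results programmatically instead of with gsutil cat, a small sketch along these lines could work. It assumes folder is the latest prediction subfolder found above, and the exact columns depend on your dataset, so treat the column handling as an assumption:
```python
# Sketch: load the explanation CSV file(s) into a pandas DataFrame for inspection.
import pandas as pd
import tensorflow as tf

explanation_files = tf.io.gfile.glob(folder + "/explanation*.csv")
frames = [pd.read_csv(tf.io.gfile.GFile(f, "r")) for f in explanation_files]
df = pd.concat(frames, ignore_index=True)
print(df.head())
```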
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation |
8,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding the vanishing gradient problem through visualization
There are reasons why deep neural networks can work very well, yet few people get promising results simply by making their networks deeper.
Computational power and data have grown tremendously; people need more complex models and faster computers to make deep learning feasible.
Realize and understand the difficulties associated with training a deep model.
In this tutorial, we would like to show you some insights into the techniques that researchers find useful in training a deep model, using MXNet and its visualizing tool -- TensorBoard.
Let's recap some of the relevant issues in training a deep model
Step2: What to expect?
If a setting suffers from a vanishing gradient problem, the gradients passed down from the top should be very close to zero, and the weights of the network barely change/update.
Uniform and Sigmoid
# Uniform and sigmoid
args = parse_args('uniform', 'uniform_sigmoid')
data_shape = (784, )
net = get_mlp("sigmoid")
# train
train_model.fit(args, net, get_iterator(data_shape))
As you've seen, the metrics of fc_backward_weight are very close to zero, and they didn't change much across batches.
```
2017-01-07 15
Step3: Even we have a "poor" initialization, the model could still converge quickly with proper activation function. And its magnitude has significant difference.
```
2017-01-07 15 | Python Code:
import sys
sys.path.append('./mnist/')
from train_mnist import *
Explanation: Understanding the vanishing gradient problem through visualization
There are reasons why deep neural networks can work very well, yet few people get promising results simply by making their networks deeper.
Computational power and data have grown tremendously; people need more complex models and faster computers to make deep learning feasible.
Realize and understand the difficulties associated with training a deep model.
In this tutorial, we would like to show you some insights into the techniques that researchers find useful in training a deep model, using MXNet and its visualizing tool -- TensorBoard.
Let's recap some of the relevant issues in training a deep model:
Weight initialization. If you initialize the network with small random weights and look at the gradients propagated down from the top layer, you will find they get smaller and smaller, so the first layer almost doesn't change because its gradients are too small to make a significant update. Without a chance to learn the first layer effectively, it is impossible to learn a good deep model.
Nonlinear activation. When sigmoid or tanh is used as the activation function, the gradient, as above, gets smaller and smaller -- just recall the formulas for the parameter updates and the gradient. The small numerical sketch below illustrates this effect for the sigmoid.
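A minimal numerical sketch (not part of the original experiment): the sigmoid derivative is at most 0.25, and here we only track the product of the activation derivatives across layers, ignoring the weights:
```python
import numpy as np

def sigmoid_grad(x):
    s = 1.0 / (1.0 + np.exp(-x))
    return s * (1.0 - s)  # maximum value is 0.25, reached at x = 0

np.random.seed(0)
grad = 1.0
for layer in range(10):  # pretend we backpropagate through 10 sigmoid layers
    grad *= sigmoid_grad(np.random.randn())
print(grad)  # on the order of 1e-7 to 1e-8 -- the signal reaching the first layer is tiny
```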
Experiment Setting
Here we create a simple MLP for the MNIST dataset and visualize the learning process through loss/accuracy and gradient distributions, while changing the initialization and activation settings.
General Setting
We adopt an MLP as our model and run our experiment on the MNIST dataset. Then we'll visualize the weights and gradients of a layer using Monitor in MXNet and Histogram in TensorBoard.
Network Structure
Here's the network structure:
```python
def get_mlp(acti="relu"):
    """multi-layer perceptron"""
    data = mx.symbol.Variable('data')
    fc = mx.symbol.FullyConnected(data = data, name='fc', num_hidden=512)
    act = mx.symbol.Activation(data = fc, name='act', act_type=acti)
    fc0 = mx.symbol.FullyConnected(data = act, name='fc0', num_hidden=256)
    act0 = mx.symbol.Activation(data = fc0, name='act0', act_type=acti)
    fc1 = mx.symbol.FullyConnected(data = act0, name='fc1', num_hidden=128)
    act1 = mx.symbol.Activation(data = fc1, name='act1', act_type=acti)
    fc2 = mx.symbol.FullyConnected(data = act1, name = 'fc2', num_hidden = 64)
    act2 = mx.symbol.Activation(data = fc2, name='act2', act_type=acti)
    fc3 = mx.symbol.FullyConnected(data = act2, name='fc3', num_hidden=32)
    act3 = mx.symbol.Activation(data = fc3, name='act3', act_type=acti)
    fc4 = mx.symbol.FullyConnected(data = act3, name='fc4', num_hidden=16)
    act4 = mx.symbol.Activation(data = fc4, name='act4', act_type=acti)
    fc5 = mx.symbol.FullyConnected(data = act4, name='fc5', num_hidden=10)
    mlp = mx.symbol.SoftmaxOutput(data = fc5, name = 'softmax')
    return mlp
```
As you might already have noticed, we intentionally add more layers than usual, because the vanishing gradient problem becomes more severe as the network goes deeper.
Weight Initialization
We compare two weight initializations: uniform and Xavier.
```python
if args.init == 'uniform':
    init = mx.init.Uniform(0.1)
if args.init == 'xavier':
    init = mx.init.Xavier(factor_type="in", magnitude=2.34)
```
Note that we intentionally choose a near-zero scale for the uniform initialization.
Activation Function
We compare two different activations, sigmoid and ReLU.
```python
# acti = sigmoid or relu
act = mx.symbol.Activation(data = fc, name='act', act_type=acti)
```
Logging with TensorBoard and Monitor
In order to monitor the weights and gradients of this network in different settings, we can use MXNet's monitor for logging and TensorBoard for visualization.
Usage
Here's a code snippet from train_model.py:
```python
import mxnet as mx
from tensorboard import summary
from tensorboard import FileWriter
# where to keep your TensorBoard logging file
logdir = './logs/'
summary_writer = FileWriter(logdir)
# mx.mon.Monitor's callback
def get_gradient(g):
# get flatten list
grad = g.asnumpy().flatten()
# logging using tensorboard, use histogram type.
s = summary.histogram('fc_backward_weight', grad)
summary_writer.add_summary(s)
return mx.nd.norm(g)/np.sqrt(g.size)
mon = mx.mon.Monitor(int(args.num_examples/args.batch_size), get_gradient, pattern='fc_backward_weight')  # get the gradient passed to the first fully-connected layer
# training
model.fit(
X = train,
eval_data = val,
eval_metric = eval_metrics,
kvstore = kv,
monitor = mon,
epoch_end_callback = checkpoint)
# close summary_writer
summary_writer.close()
```
End of explanation
# Uniform and relu
args = parse_args('uniform', 'uniform_relu')
data_shape = (784, )
net = get_mlp("relu")
# train
train_model.fit(args, net, get_iterator(data_shape))
Explanation: What to expect?
If a setting suffers from a vanishing gradient problem, the gradients passed down from the top should be very close to zero, and the weights of the network barely change/update.
Uniform and Sigmoid
# Uniform and sigmoid
args = parse_args('uniform', 'uniform_sigmoid')
data_shape = (784, )
net = get_mlp("sigmoid")
# train
train_model.fit(args, net, get_iterator(data_shape))
As you've seen, the metrics of fc_backward_weight are very close to zero, and they didn't change much across batches.
```
2017-01-07 15:44:38,845 Node[0] Batch: 1 fc_backward_weight 5.1907e-07
2017-01-07 15:44:38,846 Node[0] Batch: 1 fc_backward_weight 4.2085e-07
2017-01-07 15:44:38,847 Node[0] Batch: 1 fc_backward_weight 4.31894e-07
2017-01-07 15:44:38,848 Node[0] Batch: 1 fc_backward_weight 5.80652e-07
2017-01-07 15:45:50,199 Node[0] Batch: 4213 fc_backward_weight 5.49988e-07
2017-01-07 15:45:50,200 Node[0] Batch: 4213 fc_backward_weight 5.89305e-07
2017-01-07 15:45:50,201 Node[0] Batch: 4213 fc_backward_weight 3.71941e-07
2017-01-07 15:45:50,202 Node[0] Batch: 4213 fc_backward_weight 8.05085e-07
```
You might wonder why we have 4 different fc_backward_weight entries: that's because we use 4 CPUs.
Uniform and ReLU
End of explanation
# Xavier and sigmoid
args = parse_args('xavier', 'xavier_sigmoid')
data_shape = (784, )
net = get_mlp("sigmoid")
# train
train_model.fit(args, net, get_iterator(data_shape))
Explanation: Even we have a "poor" initialization, the model could still converge quickly with proper activation function. And its magnitude has significant difference.
```
2017-01-07 15:54:12,286 Node[0] Batch: 1 fc_backward_weight 0.000267409
2017-01-07 15:54:12,287 Node[0] Batch: 1 fc_backward_weight 0.00031988
2017-01-07 15:54:12,288 Node[0] Batch: 1 fc_backward_weight 0.000306785
2017-01-07 15:54:12,289 Node[0] Batch: 1 fc_backward_weight 0.000347533
2017-01-07 15:55:25,936 Node[0] Batch: 4213 fc_backward_weight 0.0226081
2017-01-07 15:55:25,937 Node[0] Batch: 4213 fc_backward_weight 0.0039793
2017-01-07 15:55:25,937 Node[0] Batch: 4213 fc_backward_weight 0.0306151
2017-01-07 15:55:25,938 Node[0] Batch: 4213 fc_backward_weight 0.00818676
```
Xavier and Sigmoid
End of explanation |
8,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preparing Data
In this step, we are going to load data from disk into memory and format it properly so that we can process it in the next "preprocessing" stage.
Step1: Loading Tokenised Full Text
In the previous tutorial (Jupyter notebook), we generated a bunch of .json files storing our tokenised full texts. Now we are going to load them.
Step2: Preprocessing Data for Gensim and Finetuning
In this stage, we preprocess the data so it can be read by Gensim. Then we will further clean up the data to better train the model.
First of all, we need a dictionary of our corpus, i.e., the whole collection of our full texts. However, there are documents in our dataset written in some other languages. We need to stay with one language (in the example, English) in order to best train the model, so let's filter them out first.
Language Detection
TextBlob ships with a handy API wrapper of Google's language detection service. We will store the ids of these non-English documents in a list called non_en and save it as a pickled file for later use.
Step3: Although we tried to handle these hyphenations in the previous tutorial, now we still have them for some reasons. The most conveient way to remove them is to remove them in the corpus and rebuild the dictionary. Then re-apply our previous filter.
Step4: Lemmatization
But before building the vocabulary, we need to unify some variants of the same phrases. For example, "technologies" should be mapped to "technology". This process is called lemmatization.
Step5: Then we can create our lemmatized vocabulary.
Step6: Obviously the vocabulary is far too large. This is because the algorithm used in TextBlob's noun phrase extraction is not very robust in complicated scenarios. Let's see what we can do about this.
Filtering Vocabulary
First of all, let's rule out the most obvious ones
Step7: Now we have drastically reduced the size of the vocabulary from 2936116 to 102508. However this is not enough. For example
Step8: We have 752 such meaningless tokens in our vocabulary. Presumably this is because, during the extraction of the PDF, some mathematical equations are parsed as plain text (of course).
Now we are going to remove these
Step9: Removing Names & Locations
There are a lot of citations and references in the PDFs, and they are extremely difficult to recognise given that they come in many variants.
We will demonstrate how to identify these names and locations in another tutorial (see TOC) using a Stanford NLP library, and eventually we can get a list of names and locations in names.json and locations.json respectively.
Step10: Building Corpus in Gensim Format
Since we already have a dictionary, each distinct token can be expressed as an id in the dictionary. Then we can compress the corpus using this new representation and convert each document to a BoW (bag of words).
Step11: Train the LDA Model
Now that we have the dictionary and the corpus, we are ready to train our LDA model. We train an LDA model with 150 topics as an example.
Step12: Visualize the LDA Model
There is a convenient library called pyLDAvis that allows us to visualize our trained LDA model. | Python Code:
# Loading metadata from the training database
con = sqlite3.connect("F:/FMR/data.sqlite")
db_documents = pd.read_sql_query("SELECT * from documents", con)
db_authors = pd.read_sql_query("SELECT * from authors", con)
data = db_documents # just a handy alias
data.head()
Explanation: Preparing Data
In this step, we are going to load data from disk into memory and format it properly so that we can process it in the next "preprocessing" stage.
End of explanation
tokenised = load_json("abstract_tokenised.json")
# Let's have a peek
tokenised["acis2001/1"][:10]
Explanation: Loading Tokenised Full Text
In the previous tutorial (Jupyter notebook), we generated a bunch of .json files storing our tokenised full texts. Now we are going to load them.
End of explanation
from textblob import TextBlob
non_en = [] # a list of ids of the documents in other languages
count = 0
for id_, entry in data.iterrows():
count += 1
try:
lang = TextBlob(entry["title"] + " " + entry["abstract"]).detect_language()
except:
raise
if lang != 'en':
non_en.append(id_)
print(lang, data.iloc[id_]["title"])
if (count % 100) == 0:
print("Progress: ", count)
save_pkl(non_en, "non_en.list.pkl")
non_en = load_pkl("non_en.list.pkl")
# Convert our dict-based structure to a list-based structure that is readable by Gensim and, at the same time,
# filter out those non-English documents
tokenised_list = [tokenised[i] for i in data["submission_path"] if i not in non_en]
Explanation: Preprocessing Data for Gensim and Finetuning
In this stage, we preprocess the data so it can be read by Gensim. Then we will further clean up the data to better train the model.
First of all, we need a dictionary of our corpus, i.e., the whole collection of our full texts. However, there are documents in our dataset written in some other languages. We need to stay with one language (in the example, English) in order to best train the model, so let's filter them out first.
Language Detection
TextBlob ships with a handy API wrapper of Google's language detection service. We will store the ids of these non-English documents in a list called non_en and save it as a pickled file for later use.
End of explanation
def remove_hyphenation(l):
return [i.replace("- ", "").replace("-", "") for i in l]
tokenised_list = [remove_hyphenation(i) for i in tokenised_list]
Explanation: Although we tried to handle these hyphenations in the previous tutorial, now we still have them for some reasons. The most conveient way to remove them is to remove them in the corpus and rebuild the dictionary. Then re-apply our previous filter.
End of explanation
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize(l):
return [" ".join([lemmatizer.lemmatize(token)
for token
in phrase.split(" ")])
for phrase in l]
def lemmatize_all(tokenised):
# Lemmatize the documents.
lemmatized = [lemmatize(entry) for entry in tokenised]
return lemmatized
" ".join([lemmatizer.lemmatize(token)
for token
in 'assistive technologies'.split(" ")])
tokenised_list = lemmatize_all(tokenised_list)
# In case we need it in the future
save_json(tokenised_list, "abstract_lemmatized.json")
# To load it:
tokenised_list = load_json("abstract_lemmatized.json")
Explanation: Lemmatization
But before building the vocabulary, we need to unify some variants of the same phrases. For example, "technologies" should be mapped to "technology". This process is called lemmatization.
End of explanation
from gensim.corpora import Dictionary
# Create a dictionary for all the documents. This might take a while.
dictionary = Dictionary(tokenised_list)
# Let's see what's inside, note the spelling :)
# But there is really nothing we can do with that.
dictionary[0]
len(dictionary)
Explanation: Then we can create our lemmatized vocabulary.
End of explanation
# remove tokens that appear in fewer than 2 documents and tokens that appear in more than 50% of the documents
dictionary.filter_extremes(no_below=2, no_above=0.5, keep_n=None)
len(dictionary)
Explanation: Obviously the vocabulary is far too large. This is because the algorithm used in TextBlob's noun phrase extraction is not very robust in complicated scenarios. Let's see what we can do about this.
Filtering Vocabulary
First of all, let's rule out the most obvious ones: words and phrases that appear in too many documents and ones that appear in only 1-5 documents. Gensim provides a very convenient built-in function to filter them out:
End of explanation
# Helpers
display_limit = 10
def shorter_than(n):
bad = []
count = 0
for i in dictionary:
if len(dictionary[i]) < n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def if_in(symbol):
bad = []
count = 0
for i in dictionary:
if symbol in dictionary[i]:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def more_than(symbol, n):
bad = []
count = 0
for i in dictionary:
if dictionary[i].count(symbol) > n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
bad = shorter_than(3)
Explanation: Now we have drastically reduced the size of the vocabulary from 2936116 to 102508. However this is not enough. For example:
End of explanation
dictionary.filter_tokens(bad_ids=bad)
display_limit = 10
bad = if_in("*")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("<")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in(">")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("%")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("/")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("[")
bad += if_in("]")
bad += if_in("}")
bad += if_in("{")
dictionary.filter_tokens(bad_ids=bad)
display_limit = 20
bad = more_than(" ", 3)
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("- ") # verify that there is no hyphenation problem
bad = if_in("quarter")
dictionary.filter_tokens(bad_ids=bad)
Explanation: We have 752 such meaningless tokens in our vocabulary. Presumably this is because, during the extraction of the PDF, some mathematical equations are parsed as plain text (of course).
Now we are going to remove these:
End of explanation
names = load_json("names.json")
name_ids = [i for i, v in dictionary.iteritems() if v in names]
dictionary.filter_tokens(bad_ids=name_ids)
locations = load_json("locations.json")
location_ids = [i for i, v in dictionary.iteritems() if v in locations]
dictionary.filter_tokens(bad_ids=location_ids)
locations[:10]
names[:15] # not looking good, but it seems like it won't do much harm either
Explanation: Removing Names & Locations
There are a lot of citations and references in the PDFs, and they are extremely difficult to recognise given that they come in many variants.
We will demonstrate how to identify these names and locations in another tutorial (see TOC) using a Stanford NLP library, and eventually we can get a list of names and locations in names.json and locations.json respectively.
End of explanation
corpus = [dictionary.doc2bow(l) for l in tokenised_list]
# Save it for future usage
from gensim.corpora.mmcorpus import MmCorpus
MmCorpus.serialize("aisnet_abstract_np_cleaned.mm", corpus)
# Also save the dictionary
dictionary.save("aisnet_abstract_np_cleaned.ldamodel.dictionary")
# To load the corpus:
from gensim.corpora.mmcorpus import MmCorpus
corpus = MmCorpus("aisnet_abstract_np_cleaned.mm")
# To load the dictionary:
from gensim.corpora import Dictionary
dictionary = Dictionary.load("aisnet_abstract_np_cleaned.ldamodel.dictionary")
Explanation: Building Corpus in Gensim Format
Since we already have a dictionary, each distinct token can be expressed as an id in the dictionary. Then we can compress the corpus using this new representation and convert each document to a BoW (bag of words).
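As a quick sanity check (the output shown in the comments is purely illustrative), you can look at the BoW form of a single document:
```python
# Sketch: inspect the bag-of-words representation of the first document.
bow = dictionary.doc2bow(tokenised_list[0])
print(bow[:5])  # e.g. [(12, 1), (87, 2), ...] -- (token id, count) pairs
print([(dictionary[i], count) for i, count in bow[:5]])  # map ids back to tokens
```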
End of explanation
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 150
chunksize = 2000
passes = 1
iterations = 150
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make a index to word dictionary.
print("Dictionary test: " + dictionary[0]) # This is only to "load" the dictionary.
id2word = dictionary.id2token
model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
alpha='auto', eta='auto', \
iterations=iterations, num_topics=num_topics, \
passes=passes, eval_every=eval_every)
# Save the LDA model
model.save("aisnet_abstract_150_cleaned.ldamodel")
Explanation: Train the LDA Model
Now that we have the dictionary and the corpus, we are ready to train our LDA model. We train an LDA model with 150 topics as an example.
End of explanation
from gensim.models import LdaModel
model = LdaModel.load("aisnet_abstract_150_cleaned.ldamodel")
import pyLDAvis.gensim
vis = pyLDAvis.gensim.prepare(model, corpus, dictionary)
pyLDAvis.display(vis)
Explanation: Visualize the LDA Model
There is a convenient library called pyLDAvis that allows us to visualize our trained LDA model.
End of explanation |
8,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with sEEG data
MNE-Python supports working with more than just MEG and EEG data. Here we show
some of the functions that can be used to facilitate working with
stereoelectroencephalography (sEEG) data.
This example shows how to use
Step1: Let's load some sEEG data with channel locations and make epochs.
Step2: Let use the Talairach transform computed in the Freesurfer recon-all
to apply the Freesurfer surface RAS ('mri') to MNI ('mni_tal') transform.
Step3: Let's check to make sure everything is aligned.
<div class="alert alert-info"><h4>Note</h4><p>The most rostral electrode in the temporal lobe is outside the
fsaverage template brain. This is not ideal but it is the best that
the linear Talairach transform can accomplish. A more complex
transform is necessary for more accurate warping, see
`tut-ieeg-localize`.</p></div>
Step4: Let's also look at which regions of interest are nearby our electrode
contacts.
Step5: Now, let's plot the electrodes and a few regions of interest that the contacts
of the electrode are proximal to.
Step6: Next, we'll get the epoch data and plot its amplitude over time.
Step7: We can visualize this raw data on the fsaverage brain (in MNI space) as
a heatmap. This works by first creating an Evoked data structure
from the data of interest (in this example, it is just the raw LFP).
Then one should generate a stc data structure, which will be able
to visualize source activity on the brain in various different formats.
Step8: Plot 3D source (brain region) visualization | Python Code:
# Authors: Eric Larson <[email protected]>
# Adam Li <[email protected]>
# Alex Rockhill <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import fetch_fsaverage
# paths to mne datasets - sample sEEG and FreeSurfer's fsaverage subject
# which is in MNI space
misc_path = mne.datasets.misc.data_path()
sample_path = mne.datasets.sample.data_path()
subjects_dir = op.join(sample_path, 'subjects')
# use mne-python's fsaverage data
fetch_fsaverage(subjects_dir=subjects_dir, verbose=True) # downloads if needed
Explanation: Working with sEEG data
MNE-Python supports working with more than just MEG and EEG data. Here we show
some of the functions that can be used to facilitate working with
stereoelectroencephalography (sEEG) data.
This example shows how to use:
sEEG data
channel locations in MNI space
projection into a volume
Note that our sample sEEG electrodes are already assumed to be in MNI
space. If you want to map positions from your subject MRI space to MNI
fsaverage space, you must apply the FreeSurfer's talairach.xfm transform
for your dataset. You can take a look at tut-freesurfer-mne for
more information.
For an example that involves ECoG data, channel locations in a
subject-specific MRI, or projection into a surface, see
tut-working-with-ecog. In the ECoG example, we show
how to visualize surface grid channels on the brain.
End of explanation
raw = mne.io.read_raw(op.join(misc_path, 'seeg', 'sample_seeg_ieeg.fif'))
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id, detrend=1, baseline=None)
epochs = epochs['Response'][0] # just process one epoch of data for speed
Explanation: Let's load some sEEG data with channel locations and make epochs.
End of explanation
montage = epochs.get_montage()
# first we need a head to mri transform since the data is stored in "head"
# coordinates, let's load the mri to head transform and invert it
this_subject_dir = op.join(misc_path, 'seeg')
head_mri_t = mne.coreg.estimate_head_mri_t('sample_seeg', this_subject_dir)
# apply the transform to our montage
montage.apply_trans(head_mri_t)
# now let's load our Talairach transform and apply it
mri_mni_t = mne.read_talxfm('sample_seeg', op.join(misc_path, 'seeg'))
montage.apply_trans(mri_mni_t) # mri to mni_tal (MNI Taliarach)
# for fsaverage, "mri" and "mni_tal" are equivalent and, since
# we want to plot in fsaverage "mri" space, we need use an identity
# transform to equate these coordinate frames
montage.apply_trans(
mne.transforms.Transform(fro='mni_tal', to='mri', trans=np.eye(4)))
epochs.set_montage(montage)
Explanation: Let's use the Talairach transform computed in the Freesurfer recon-all
to apply the Freesurfer surface RAS ('mri') to MNI ('mni_tal') transform.
End of explanation
# compute the transform to head for plotting
trans = mne.channels.compute_native_head_t(montage)
# note that this is the same as:
# ``mne.transforms.invert_transform(
# mne.transforms.combine_transforms(head_mri_t, mri_mni_t))``
fig = mne.viz.plot_alignment(epochs.info, trans, 'fsaverage',
subjects_dir=subjects_dir, show_axes=True,
surfaces=['pial', 'head'], coord_frame='mri')
Explanation: Let's check to make sure everything is aligned.
<div class="alert alert-info"><h4>Note</h4><p>The most rostral electrode in the temporal lobe is outside the
fsaverage template brain. This is not ideal but it is the best that
the linear Talairach transform can accomplish. A more complex
transform is necessary for more accurate warping, see
`tut-ieeg-localize`.</p></div>
End of explanation
aseg = 'aparc+aseg' # parcellation/anatomical segmentation atlas
labels, colors = mne.get_montage_volume_labels(
montage, 'fsaverage', subjects_dir=subjects_dir, aseg=aseg)
# separate by electrodes which have names like LAMY 1
electrodes = set([''.join([lttr for lttr in ch_name
if not lttr.isdigit() and lttr != ' '])
for ch_name in montage.ch_names])
print(f'Electrodes in the dataset: {electrodes}')
electrodes = ('LPM', 'LSMA') # choose two for this example
for elec in electrodes:
picks = [ch_name for ch_name in epochs.ch_names if elec in ch_name]
fig = plt.figure(num=None, figsize=(8, 8), facecolor='black')
mne.viz.plot_channel_labels_circle(labels, colors, picks=picks, fig=fig)
fig.text(0.3, 0.9, 'Anatomical Labels', color='white')
Explanation: Let's also look at which regions of interest are nearby our electrode
contacts.
End of explanation
picks = [ii for ii, ch_name in enumerate(epochs.ch_names) if
any([elec in ch_name for elec in electrodes])]
labels = ('ctx-lh-caudalmiddlefrontal', 'ctx-lh-precentral',
'ctx-lh-superiorfrontal', 'Left-Putamen')
fig = mne.viz.plot_alignment(mne.pick_info(epochs.info, picks), trans,
'fsaverage', subjects_dir=subjects_dir,
surfaces=[], coord_frame='mri')
brain = mne.viz.Brain('fsaverage', alpha=0.1, cortex='low_contrast',
subjects_dir=subjects_dir, units='m', figure=fig)
brain.add_volume_labels(aseg='aparc+aseg', labels=labels)
brain.show_view(azimuth=120, elevation=90, distance=0.25)
brain.enable_depth_peeling()
Explanation: Now, let's plot the electrodes and a few regions of interest that the contacts
of the electrode are proximal to.
End of explanation
epochs.plot()
Explanation: Next, we'll get the epoch data and plot its amplitude over time.
End of explanation
# get standard fsaverage volume (5mm grid) source space
fname_src = op.join(subjects_dir, 'fsaverage', 'bem',
'fsaverage-vol-5-src.fif')
vol_src = mne.read_source_spaces(fname_src)
evoked = epochs.average()
stc = mne.stc_near_sensors(
evoked, trans, 'fsaverage', subjects_dir=subjects_dir, src=vol_src,
verbose='error') # ignore missing electrode warnings
stc = abs(stc) # just look at magnitude
clim = dict(kind='value', lims=np.percentile(abs(evoked.data), [10, 50, 75]))
Explanation: We can visualize this raw data on the fsaverage brain (in MNI space) as
a heatmap. This works by first creating an Evoked data structure
from the data of interest (in this example, it is just the raw LFP).
Then one should generate a stc data structure, which will be able
to visualize source activity on the brain in various different formats.
End of explanation
brain = stc.plot_3d(
src=vol_src, subjects_dir=subjects_dir,
view_layout='horizontal', views=['axial', 'coronal', 'sagittal'],
size=(800, 300), show_traces=0.4, clim=clim,
add_data_kwargs=dict(colorbar_kwargs=dict(label_font_size=8)))
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=3, interpolation='linear', framerate=5,
# time_viewer=True, filename='./mne-test-seeg.m4')
Explanation: Plot 3D source (brain region) visualization:
By default, stc.plot_3d() <mne.VolSourceEstimate.plot_3d> will show a time
course of the source with the largest absolute value across any time point.
In this example, it is simply the source with the largest raw signal value.
Its location is marked on the brain by a small blue sphere.
End of explanation |
8,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q1
In this question, you'll be introduced to the scikit-image package. Only a small portion of the package will be explored; you're encouraged to check it out if this interests you!
A
scikit-image is a pretty awesome all-purpose lightweight image analysis package for Python. In this question, you'll work with its data submodule.
Scikit-image comes prepackaged with a great deal of sample data for you to experiment with. In the skimage.data submodule, you'll find quite a few functions that return this sample data. Go the documentation page (linked in the previous paragraph), pick a sample function, call it, and display it using matplotlib's imshow() function.
Step1: B
Now, we'll measure some properties of the image you chose. This will require use of the measure submodule.
You'll need to call two functions from this module. First, you'll need to label discrete regions of your image--effectively, identify "objects" in it. This will use the skimage.measure.label() function, which takes two arguments | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import skimage.data
### BEGIN SOLUTION
### END SOLUTION
Explanation: Q1
In this question, you'll be introduced to the scikit-image package. Only a small portion of the package will be explored; you're encouraged to check it out if this interests you!
A
scikit-image is a pretty awesome all-purpose lightweight image analysis package for Python. In this question, you'll work with its data submodule.
Scikit-image comes prepackaged with a great deal of sample data for you to experiment with. In the skimage.data submodule, you'll find quite a few functions that return this sample data. Go to the documentation page (linked in the previous paragraph), pick a sample function, call it, and display it using matplotlib's imshow() function.
End of explanation
import skimage.measure
import skimage.color
### BEGIN SOLUTION
### END SOLUTION
Explanation: B
Now, we'll measure some properties of the image you chose. This will require use of the measure submodule.
You'll need to call two functions from this module. First, you'll need to label discrete regions of your image--effectively, identify "objects" in it. This will use the skimage.measure.label() function, which takes two arguments: your image, and the pixel value to be considered background. You'll need to experiment a little with this second argument to find the "right" value for your image!
(IMPORTANT: If your image is RGB, you'll need to convert it to grayscale before calling label(); you can do this with the skimage.color.rgb2gray() function)
Second, you'll need to give the labels you get from the label() function to skimage.measure.regionprops. This will return a dictionary with a bunch of useful information about your image. You'll use that information to answer some questions in part C below.
End of explanation |
8,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Effect Size
Credits
Step1: To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
Step2: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
Step3: Here's what the two distributions look like.
Step4: Let's assume for now that those are the true distributions for the population. Of course, in real life we never observe the true population distribution. We generally have to work with a random sample.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
Step5: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
Step6: The sample mean is close to the population mean, but not exact, as expected.
Step7: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means
Step8: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems
Step9: But a problem with relative differences is that you have to choose which mean to express them relative to.
Step10: Part Two
An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means
Step11: A better, but slightly more complicated threshold is the place where the PDFs cross.
Step12: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold
Step13: And how many women are above it
Step14: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
Step15: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex
Step16: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
Step18: Overlap (or misclassification rate) and "probability of superiority" have two good properties
Step19: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
Step21: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
Step23: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
Step24: Here's an example that demonstrates the function
Step25: And an interactive widget you can use to visualize what different values of $d$ mean | Python Code:
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from IPython.html.widgets import interact, fixed
from IPython.html import widgets
# seed the random number generator so we all get the same results
numpy.random.seed(17)
# some nice colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
Explanation: Effect Size
Credits: Forked from CompStats by Allen Downey. License: Creative Commons Attribution 4.0 International.
End of explanation
mu1, sig1 = 178, 7.7
male_height = scipy.stats.norm(mu1, sig1)
mu2, sig2 = 163, 7.3
female_height = scipy.stats.norm(mu2, sig2)
Explanation: To explore statistics that quantify effect size, we'll look at the difference in height between men and women. I used data from the Behavioral Risk Factor Surveillance System (BRFSS) to estimate the mean and standard deviation of height in cm for adult women and men in the U.S.
I'll use scipy.stats.norm to represent the distributions. The result is an rv object (which stands for random variable).
End of explanation
def eval_pdf(rv, num=4):
mean, std = rv.mean(), rv.std()
xs = numpy.linspace(mean - num*std, mean + num*std, 100)
ys = rv.pdf(xs)
return xs, ys
Explanation: The following function evaluates the normal (Gaussian) probability density function (PDF) within 4 standard deviations of the mean. It takes an rv object and returns a pair of NumPy arrays.
End of explanation
xs, ys = eval_pdf(male_height)
pyplot.plot(xs, ys, label='male', linewidth=4, color=COLOR2)
xs, ys = eval_pdf(female_height)
pyplot.plot(xs, ys, label='female', linewidth=4, color=COLOR3)
pyplot.xlabel('height (cm)')
None
Explanation: Here's what the two distributions look like.
End of explanation
male_sample = male_height.rvs(1000)
female_sample = female_height.rvs(1000)
Explanation: Let's assume for now that those are the true distributions for the population. Of course, in real life we never observe the true population distribution. We generally have to work with a random sample.
I'll use rvs to generate random samples from the population distributions. Note that these are totally random, totally representative samples, with no measurement error!
End of explanation
mean1, std1 = male_sample.mean(), male_sample.std()
mean1, std1
Explanation: Both samples are NumPy arrays. Now we can compute sample statistics like the mean and standard deviation.
End of explanation
mean2, std2 = female_sample.mean(), female_sample.std()
mean2, std2
Explanation: The sample mean is close to the population mean, but not exact, as expected.
End of explanation
difference_in_means = male_sample.mean() - female_sample.mean()
difference_in_means # in cm
Explanation: And the results are similar for the female sample.
Now, there are many ways to describe the magnitude of the difference between these distributions. An obvious one is the difference in the means:
End of explanation
# Exercise: what is the relative difference in means, expressed as a percentage?
relative_difference = difference_in_means / male_sample.mean()
relative_difference * 100 # percent
Explanation: On average, men are 14--15 centimeters taller. For some applications, that would be a good way to describe the difference, but there are a few problems:
Without knowing more about the distributions (like the standard deviations) it's hard to interpret whether a difference like 15 cm is a lot or not.
The magnitude of the difference depends on the units of measure, making it hard to compare across different studies.
There are a number of ways to quantify the difference between distributions. A simple option is to express the difference as a percentage of the mean.
End of explanation
relative_difference = difference_in_means / female_sample.mean()
relative_difference * 100 # percent
Explanation: But a problem with relative differences is that you have to choose which mean to express them relative to.
End of explanation
simple_thresh = (mean1 + mean2) / 2
simple_thresh
Explanation: Part Two
An alternative way to express the difference between distributions is to see how much they overlap. To define overlap, we choose a threshold between the two means. The simple threshold is the midpoint between the means:
End of explanation
thresh = (std1 * mean2 + std2 * mean1) / (std1 + std2)
thresh
Explanation: A better, but slightly more complicated threshold is the place where the PDFs cross.
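If you want the point where the PDFs literally intersect, one option (a quick sketch that reuses the male_height, female_height, mu1, and mu2 objects defined above) is to find the sign change of their difference numerically and compare it with the threshold just computed:
from scipy.optimize import brentq
crossing = brentq(lambda x: male_height.pdf(x) - female_height.pdf(x), mu2, mu1)
print(crossing)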
End of explanation
male_below_thresh = sum(male_sample < thresh)
male_below_thresh
Explanation: In this example, there's not much difference between the two thresholds.
Now we can count how many men are below the threshold:
End of explanation
female_above_thresh = sum(female_sample > thresh)
female_above_thresh
Explanation: And how many women are above it:
End of explanation
overlap = male_below_thresh / len(male_sample) + female_above_thresh / len(female_sample)
overlap
Explanation: The "overlap" is the total area under the curves that ends up on the wrong side of the threshold.
End of explanation
misclassification_rate = overlap / 2
misclassification_rate
Explanation: Or in more practical terms, you might report the fraction of people who would be misclassified if you tried to use height to guess sex:
End of explanation
# Exercise: suppose I choose a man and a woman at random.
# What is the probability that the man is taller?
sum(x > y for x, y in zip(male_sample, female_sample)) / len(male_sample)
Explanation: Another way to quantify the difference between distributions is what's called "probability of superiority", which is a problematic term, but in this context it's the probability that a randomly-chosen man is taller than a randomly-chosen woman.
End of explanation
def CohenEffectSize(group1, group2):
    """Compute Cohen's d.

    group1: Series or NumPy array
    group2: Series or NumPy array

    returns: float
    """
    diff = group1.mean() - group2.mean()
    n1, n2 = len(group1), len(group2)
    var1 = group1.var()
    var2 = group2.var()
    pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
    d = diff / numpy.sqrt(pooled_var)
    return d
Explanation: Overlap (or misclassification rate) and "probability of superiority" have two good properties:
As probabilities, they don't depend on units of measure, so they are comparable between studies.
They are expressed in operational terms, so a reader has a sense of what practical effect the difference makes.
There is one other common way to express the difference between distributions. Cohen's $d$ is the difference in means, standardized by dividing by the standard deviation. Here's a function that computes it:
End of explanation
CohenEffectSize(male_sample, female_sample)
Explanation: Computing the denominator is a little complicated; in fact, people have proposed several ways to do it. This implementation uses the "pooled standard deviation", which is a weighted average of the standard deviations of the two groups.
And here's the result for the difference in height between men and women.
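As an added cross-check (not part of the original notebook): for two normal distributions with roughly equal spread whose standardized means differ by d, the misclassification rate and the probability of superiority have closed forms, so we can see what a d of about 1.9 implies. The formulas are approximate here because the two groups' standard deviations are close but not identical.
d = CohenEffectSize(male_sample, female_sample)
misclassification = scipy.stats.norm.cdf(-d / 2)        # approximate fraction on the wrong side of the midpoint threshold
superiority = scipy.stats.norm.cdf(d / numpy.sqrt(2))   # approximate P(random man taller than random woman)
print(misclassification, superiority)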
End of explanation
def overlap_superiority(control, treatment, n=1000):
    """Estimates overlap and superiority based on a sample.

    control: scipy.stats rv object
    treatment: scipy.stats rv object
    n: sample size
    """
    control_sample = control.rvs(n)
    treatment_sample = treatment.rvs(n)
    thresh = (control.mean() + treatment.mean()) / 2
    control_above = sum(control_sample > thresh)
    treatment_below = sum(treatment_sample < thresh)
    overlap = (control_above + treatment_below) / n
    superiority = sum(x > y for x, y in zip(treatment_sample, control_sample)) / n
    return overlap, superiority
Explanation: Most people don't have a good sense of how big $d=1.9$ is, so let's make a visualization to get calibrated.
Here's a function that encapsulates the code we already saw for computing overlap and probability of superiority.
End of explanation
def plot_pdfs(cohen_d=2):
    """Plot PDFs for distributions that differ by some number of stds.

    cohen_d: number of standard deviations between the means
    """
    control = scipy.stats.norm(0, 1)
    treatment = scipy.stats.norm(cohen_d, 1)
    xs, ys = eval_pdf(control)
    pyplot.fill_between(xs, ys, label='control', color=COLOR3, alpha=0.7)
    xs, ys = eval_pdf(treatment)
    pyplot.fill_between(xs, ys, label='treatment', color=COLOR2, alpha=0.7)
    o, s = overlap_superiority(control, treatment)
    print('overlap', o)
    print('superiority', s)
Explanation: Here's the function that takes Cohen's $d$, plots normal distributions with the given effect size, and prints their overlap and superiority.
End of explanation
plot_pdfs(2)
Explanation: Here's an example that demonstrates the function:
End of explanation
slider = widgets.FloatSliderWidget(min=0, max=4, value=2)
interact(plot_pdfs, cohen_d=slider)
None
Explanation: And an interactive widget you can use to visualize what different values of $d$ mean:
End of explanation |
8,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diffusion
Class
Step1: Self-diffusion of water
The self-diffusion coefficient of water (in micrometers<sup>2</sup>/millisecond) is dependent on the temperature and pressure. Several groups have derived quantitative descriptions of the relationship between temperature, pressure, and the diffusion coefficient of water. Here we use the formula presented in
Step2: The self-diffusion of water at body temperature and standard pressure, in micrometers<sup>2</sup>/millisecond, is
Step3: Now we'll plot D for a biologically meaningful range of temperatures
Step4: Question 1
a. The average atmospheric pressure in Denver Colorado is about 84 kPa. How different (in percent) is the self-diffusion coefficient of water at body temperature in Denver relative to that in Palo Alto, which is about at sea level?
b. Suppose you are running a fever of 40 deg Centigrade. Compared to someone without a fever, how much higher (in percent) is the water diffusion coefficient in your body?
Step5: Brownian motion
Set up the diffusion simulation
Here we simulate the brownian motion in a small chunk of tissue. First, we define some parameters, including the size of the simulated voxel (in micrometers), the time step (in milliseconds), and the Apparent Diffusion Coefficient (ADC) to simulate (in micrometers<sup>2</sup>/millisecond). Our tissue model will include simple barriers that will roughly approximate the effects of cell membranes, which are relatively impermeable to the free diffusion of water. So we will also define the barrier spacing (in micrometers). Finally, we'll specify the number of particles and time-steps to run.
Step6: Run the diffusion simulation
In this loop, we update all the particle positions at each time step. The diffusion equation tells us that the final position of a particle moving in Brownian motion can be described by a Gaussian distribution with a standard deviation of sqrt(2 * ADC * timeStep). So we update the current particle position by drawing numbers from a Gaussian with this standard deviation.
Step7: Question 2
a. What is the average position change of each particle in the X dimension? In Y? (Hint
Step8: By comparing the particle ending positions (in xy) with their starting positions (in start_xy), we can compute the diffusion tensor. This is essentially just a 2-d Gaussian fit to the position differences, computed using the covariance function (cov). We also need to normalize the positions by the total time that we diffused.
The eigensystem of the diffusion tensor (computed using 'eig') describes an isoprobability ellipse through the data points.
Step9: Question 3
a. What are the units of the ADC?
b. What are the units of the PDD?
Step10: Now let's show the particle starting positions with a little line segment showing where each moved to.
Step11: Question 4
a. Run the simulation with and without barriers by adjusting the 'barrierSpacing' variable. How does the diffusion tensor change?
b. Adjust the barrier spacing. What effect does this have on the principal diffusion direction? On the estimatedADC values?
c. With barriers in place, reduce the number of time steps (nTimeSteps). How does this affect the estimated ADC values? Explore the interaction between the barrier spacing and the number of time steps.
Step12: The effect of diffusion on the MR signal
We'll simulate an image that represents a vial of water in a 3 Tesla magnetic field. The image intensity at each point will represent the local magnetic field strength, expressed as the Larmor frequency difference between that region and the frequency at 3 T.
First let's define some parameters, such as the simulated field strength (B0) and the gyromagnetic ratio for Hydrogen
Step13: Question 4
a. What is the Larmor frequency of Hydrogen spins at 3T?
b. What is it at 7T?
Step14: Simulate spins in an MR experiment
We start by defining the size of our simulated voxel, in micrometers. For the diffusion gradients, we'll start with the Y gradient turned off and the X gradient set to 5e-8 Tesla per micrometer (that's 50 mT/m, a typical gradient strength for clinical MR scanners). We'll also quantize space into 100 regions and use meshgrid to lay these out into 2d arrays that can be used to compute a 100x100 image. Finally, we compute the relative field strength at each spatial location across our simulated voxel. Gradient strengths are symmetric about the center. To understand the following expression, work through the units
Step15: Calculate the relative spin frequency at each spin location. Our gradient strengths are expressed as T/micrometer and are symmetric about the center of the voxel. To understand the following expression, work through the units
Step16: Note that including the B0 field in this equation simply adds an offset to the spin frequency. For most purposes, we usually only care about the spin frequency relative to the B0 frequency (i.e., the rotating frame of reference), so we can leave that last term off and compute relative frequencies (in Hz)
Step17: Question 5
a. Speculate on why the relative spin frequency is the most important value to calculate here.
b. Do you think the B0 field strength will play a role in the calculation of the diffusion tensor?
Step18: Display the relative frequencies (in Hz)
When we first apply an RF pulse, all the spins will precess in phase. If they are all experiencing the same magnetic field, they will remain in phase. However, if some spins experience a different local field, they will become out of phase with the others. Let's show this with a movie, where the phase will be represented with color. Our timestep is 1 millisecond. | Python Code:
%pylab inline
rcParams["figure.figsize"] = (8, 6)
rcParams["axes.grid"] = True
from IPython.display import display, clear_output
from mpl_toolkits.axes_grid1 import make_axes_locatable
from time import sleep
from __future__ import division
def cart2pol(x, y):
theta = arctan2(y, x)
r = sqrt(x ** 2 + y ** 2)
return theta, r
def pol2cart(theta, r):
x = r * cos(theta)
y = r * sin(theta)
return x, y
Explanation: Diffusion
Class: Psych 204a
Tutorial: Diffusion
Author: Dougherty
Date: 2001.10.31
Duration: 90 minutes
Copyright: Stanford University, Robert F. Dougherty
Translated to Python by Bob Dougherty, 11/2012 and Grace Tang 10/13
The purpose of this tutorial is to illustrate the nature of the data acquired in a diffusion-weighted imaging scan. The computational methods available for interpreting these data are also introduced.
First, we'll set up the python environment and define some utility functions that will be used below.
End of explanation
def selfDiffusionOfWater(T, P=101.325):
# Implements the Krynicki formula; returns the self-diffusion of water (in micrometers^2/millisec)
# given the temperature T (in Centigrade) and the pressure P (in kPa).
d = 12.5 * exp(P * -5.22 * 1e-6) * sqrt(T + 273.15) * exp(-925 * exp(P * -2.6 * 1e-6)/(T + 273.15 - (95 + P * 2.61 * 1e-4)))
return d
Explanation: Self-diffusion of water
The self-diffusion coefficient of water (in micrometers<sup>2</sup>/millisecond) is dependent on the temperature and pressure. Several groups have derived quantitative descriptions of the relationship between temperature, pressure, and the diffusion coefficient of water. Here we use the formula presented in:
Krynicki, Green & Sawyer (1978) Pressure and temperature dependence of self-diffusion in water. Faraday Discuss. Chem. Soc., 66, 199 - 208.
Mills (1973) Self-Diffusion in Normal and Heavy Water. JPhysChem 77(5), pg. 685 - 688.
Also see http://www.lsbu.ac.uk/water/explan5.html.
Let's start by defining a function that implements the Krynicki formula. The default value for the pressure parameter will be set to the standard atmospheric pressure at sea level: 101.325 kilo Pascals (kPa).
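As a quick sanity check (an added aside; roughly 2.3 micrometers^2/ms is the commonly quoted literature value for water at 25 deg C), the function should reproduce that number closely:
D25 = selfDiffusionOfWater(25)
print("%f micrometers^2/millisecond at 25 deg C" % D25)   # expect a value close to 2.3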
End of explanation
D = selfDiffusionOfWater(37)
print("%f micrometers^2/millisecond" % D)
Explanation: The self-diffusion of water at body temperature and standard pressure, in micrometers<sup>2</sup>/millisecond, is:
End of explanation
T = arange(25,41)
D = selfDiffusionOfWater(T)
figure()
plot(T, D, 'k')
xlabel('Temperature (Centigrade)', fontsize=14)
ylabel('Self-diffusion ($\mu$m$^2$/ms)', fontsize=14)
plot([37,37], [2,3.4], 'r-')
text(37, 3.45, 'Body Temperature', ha='center', color='r', fontsize=12)
Explanation: Now we'll plot D for a biologically meaningful range of temperatures
End of explanation
# compute your answer here
Explanation: Question 1
a. The average atmospheric pressure in Denver Colorado is about 84 kPa. How different (in percent) is the self-diffusion coefficient of water at body temperature in Denver relative to that in Palo Alto, which is about at sea level?
b. Suppose you are running a fever of 40 deg Centigrade. Compared to someone without a fever, how much higher (in percent) is the water diffusion coefficient in your body?
End of explanation
voxel_size = 50.0 # micrometers
ADC = 2.0 # micrometers^2/millisecond
barrier_spacing = 10.0 # micrometers (set this to 0 for no barriers)
num_particles = 500
def draw_particles(ax, title, xy, particle_color, voxel_size, barrier_spacing):
ax.set_xlabel('X position $(\mu m)$')
ax.set_ylabel('Y position $(\mu m)$')
ax.axis('equal')
ax.set_title(title)
ax.set_xlim(-voxel_size/2, voxel_size/2)
ax.set_ylim(-voxel_size/2, voxel_size/2)
if barrier_spacing > 0:
compartments = unique(np.round(xy[1,:] / barrier_spacing))
for c in range(compartments.size):
ax.hlines(compartments[c]*barrier_spacing, -voxel_size/2, voxel_size/2, linewidth=4, colors=[.7, .7, .8], linestyles='solid')
particles = []
for p in range(xy.shape[1]):
particles.append(Circle(xy[:,p], 0.3, color=particle_color[p]))
ax.add_artist(particles[p])
return particles
# Place some particles randomly distributed in the volume
xy = random.rand(2, num_particles) * voxel_size - voxel_size/2.
start_xy = xy
particle_color = [((xy[0,p] + voxel_size/2) / voxel_size, (xy[1,p] + voxel_size/2) / voxel_size, .5) for p in range(num_particles)]
# draw 'em
fig,ax = subplots(1, 1, figsize=(6, 6))
particles = draw_particles(ax, 'initial particle positions', xy, particle_color, voxel_size, barrier_spacing)
Explanation: Brownian motion
Set up the diffusion simulation
Here we simulate the brownian motion in a small chunk of tissue. First, we define some parameters, including the size of the simulated voxel (in micrometers), the time step (in milliseconds), and the Apparent Diffusion Coefficient (ADC) to simulate (in micrometers<sup>2</sup>/millisecond). Our tissue model will include simple barriers that will roughly approximate the effects of cell membranes, which are relatively impermeable to the free diffusion of water. So we will also define the barrier spacing (in micrometers). Finally, we'll specify the number of particles and time-steps to run.
End of explanation
import time, sys
from IPython.core.display import clear_output
time_step = 0.1 # milliseconds
nTimeSteps = 100
fig,ax = subplots(1, 1, figsize=(6, 6))
total_time = 0
for t_i in range(nTimeSteps):
dxy = np.random.standard_normal(xy.shape) * sqrt(2 * ADC * time_step)
new_xy = xy + dxy
if barrier_spacing>0:
curCompartment = np.round(xy[1,:]/barrier_spacing)
newCompartment = np.round(new_xy[1,:]/barrier_spacing)
for p in range(newCompartment.size):
if newCompartment[p]!=curCompartment[p]:
# approximate particles reflecting off the impermeable barrier
new_xy[1,p] = xy[1,p]
xy = new_xy
title = 'elapsed time: %5.2f ms' % (time_step * t_i)
particles = draw_particles(ax, title, xy, particle_color, voxel_size, barrier_spacing)
clear_output(wait=True)
display(fig,ax)
ax.cla()
total_time += time_step
close()
Explanation: Run the diffusion simulation
In this loop, we update all the particle positions at each time step. The diffusion equation tells us that the final position of a particle moving in Brownian motion can be described by a Gaussian distribution with a standard deviation of sqrt(2 * ADC * timeStep). So we update the current particle position by drawing numbers from a Gaussian with this standard deviation.
End of explanation
# compute your answer here
Explanation: Question 2
a. What is the average position change of each particle in the X dimension? In Y? (Hint: start_xy contains the starting positions.)
b. What is the average distance that each particle moved? (Hint: compute the Euclidean distance that each moved.)
End of explanation
Dt = cov(start_xy - xy) / (2 * total_time)
[val,vec] = eig(Dt)
estimatedADC = val / total_time
principalDiffusionDirection = vec[0]
print('ADC = ' + str(estimatedADC))
print('Principal Diffusion Direction (PDD) = ' + str(principalDiffusionDirection))
Explanation: By comparing the particle ending positions (in xy) with their starting positions (in start_xy), we can compute the diffusion tensor. This is essentially just a 2-d Gaussian fit to the position differences, computed using the covariance function (cov). We also need to normalize the positions by the total time that we diffused.
The eigensystem of the diffusion tensor (computed using 'eig') describes an isoprobability ellipse through the data points.
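To see that this covariance-based estimator behaves as expected, here is a small self-contained check (an added illustration, not part of the original tutorial) on synthetic displacements generated with known, different ADCs along x and y:
import numpy as np
t = 10.0                                    # total diffusion time (ms), an arbitrary choice
true_adc = np.array([2.0, 0.5])             # micrometers^2/ms along x and y
disp = np.random.standard_normal((2, 5000)) * np.sqrt(2 * true_adc[:, None] * t)
D_est = np.cov(disp) / (2 * t)              # estimated 2-D diffusion tensor
print(np.linalg.eigvalsh(D_est))            # eigenvalues should come out near 0.5 and 2.0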
End of explanation
# compute your answer here
Explanation: Question 3
a. What are the units of the ADC?
b. What are the units of the PDD?
End of explanation
fig,ax = subplots(1, 1, figsize=(6, 6))
start_p = draw_particles(ax, 'initial particle positions', start_xy, particle_color, voxel_size, barrier_spacing)
for p in range(num_particles):
ax.plot((start_xy[0,p], xy[0,p]), (start_xy[1,p], xy[1,p]), linewidth=1, color=[.5, .5, .5], linestyle='solid')
Explanation: Now let's show the particle starting positions with a little line segment showing where each moved to.
End of explanation
# compute your answer here
Explanation: Question 4
a. Run the simulation with and without barriers by adjusting the 'barrierSpacing' variable. How does the diffusion tensor change?
b. Adjust the barrier spacing. What effect does this have on the principal diffusion direction? On the estimatedADC values?
c. With barriers in place, reduce the number of time steps (nTimeSteps). How does this affect the estimated ADC values? Explore the interaction between the barrier spacing and the number of time steps.
End of explanation
B0 = 3.0 # Magnetic field strength (Tesla)
gyromagneticRatio = 42.58e+6 # Gyromagnetic constant for hydrogen (Hz / Tesla)
# The Larmor frequency (in Hz) of Hydrogen spins in this magnet is:
spinFreq = gyromagneticRatio * B0
Explanation: The effect of diffusion on the MR signal
We'll simulate an image that represents a vial of water in a 3 Tesla magnetic field. The image intensity at each point will represent the local magnetic field strength, expressed as the Larmor frequency difference between that region and the frequency at 3 T.
First let's define some parameters, such as the simulated field strength (B0) and the gyromagnetic ratio for Hydrogen:
End of explanation
# compute your answer here
Explanation: Question 4
a. What is the Larmor frequency of Hydrogen spins at 3T?
b. What is it at 7T?
End of explanation
voxelSize = 100.0 # micrometers
gx = 5e-8 # Tesla / micrometer
gy = 0.0 # Tesla / micrometer
def draw_spins(ax, title, field_image, im_unit, sx, sy, px, py):
# a function to draw spin-packets
# draw the relative magnetic field map image
im = ax.imshow(field_image, extent=im_unit+im_unit, cmap=matplotlib.cm.bone)
ax.set_ylabel('Y position (micrometers)')
ax.set_xlabel('X position (micrometers)')
ax.set_title(title)
# Place some spin packets in there:
ax.scatter(x=sx+px, y=sy+py, color='r', s=3)
ax.scatter(x=sx, y=sy, color='c', s=3)
ax.set_xlim(im_unit)
ax.set_ylim(im_unit)
# add a colorbar
divider = make_axes_locatable(ax)
cax = divider.append_axes("bottom", size="7%", pad=0.5)
cbl = fig.colorbar(im, cax=cax, orientation='horizontal')
cbl.set_label('Relative field strength (micro Tesla)')
im_unit = (-voxelSize/2, voxelSize/2)
x = linspace(im_unit[0], im_unit[1], 100)
y = linspace(im_unit[0], im_unit[1], 100)
[X,Y] = meshgrid(x,y)
# micrometers * Tesla/micrometer * 1e6 = microTesla
relative_field_image = (X*gx + Y*gy) * 1e6
locations = linspace(-voxelSize/2+voxelSize/10, voxelSize/2-voxelSize/10, 5)
sx,sy = meshgrid(locations, locations);
sx = sx.flatten()
sy = sy.flatten()
# set the phase/magnitude to be zero
px = zeros(sx.shape)
py = zeros(sy.shape)
fig,ax = subplots(1, 1)
draw_spins(ax, 'Spin packets at rest in a gradient', relative_field_image, im_unit, sx, sy, px, py)
Explanation: Simulate spins in an MR experiment
We start by defining the size of our simulated voxel, in micrometers. For the diffusion gradients, we'll start with the Y gradient turned off and the X gradient set to 5e-8 Tesla per micrometer (that's 50 mT/m, a typical gradient strength for clinical MR scanners). We'll also quantize space into 100 regions and use meshgrid to lay these out into 2d arrays that can be used to compute a 100x100 image. Finally, we compute the relative field strength at each spatial location across our simulated voxel. Gradient strengths are symmetric about the center. To understand the following expression, work through the units: (micrometers * T/um + micrometers * T/um) leaves us with T. We scale this by 1e6 to express the resulting image in micro-Teslas
End of explanation
spinFreq = (sx * gx + sy * gy) * gyromagneticRatio + B0 * gyromagneticRatio
print(spinFreq)
Explanation: Calculate the relative spin frequency at each spin location. Our gradient strengths are expressed as T/micrometer and are symmetric about the center of the voxel. To understand the following expression, work through the units: (micrometers * T/micrometer + micrometers * T/micrometer) * Hz/Tesla + Tesla * Hz/Tesla leaves us with Hz
End of explanation
relativeSpinFreq = (sx * gx + sy * gy) * gyromagneticRatio
print(relativeSpinFreq)
Explanation: Note that including the B0 field in this equation simply adds an offset to the spin frequency. For most purposes, we usually only care about the spin frequency relative to the B0 frequency (i.e., the rotating frame of reference), so we can leave that last term off and compute relative frequencies (in Hz):
End of explanation
# compute your answer here
Explanation: Question 5
a. Speculate on why the relative spin frequency is the most important value to calculate here.
b. Do you think the B0 field strength will play a role in the calculation of the diffusion tensor?
End of explanation
fig,ax = subplots(1, 1)
timeStep = .001
# Initialize the transverse magnetization to reflect a 90 deg RF pulse.
# The scale here is arbitrary and is selected simply to make a nice plot.
Mxy0 = 5
# Set the T2 value of the spins (in seconds)
T2 = 0.07
curPhase = zeros(sx.size)
t = 0.
nTimeSteps = 50
for ti in range(nTimeSteps):
# Update the current phase based on the spin's precession rate, which is a function
# of its local magnetic field.
curPhase = curPhase + 2*pi*gyromagneticRatio * (sx*gx+sy*gy) * timeStep
# Do a 180 flip at the TE/2:
if ti==round(nTimeSteps/2.):
curPhase = -curPhase
# The transverse magnetization magnitude decays with the T2:
curMagnitude = Mxy0 * exp(-t/T2)
px = sin(curPhase) * curMagnitude
py = cos(curPhase) * curMagnitude
# Summarize the total (relative) MR signal for this iteration
S = sqrt(sum(px**2 + py**2)) / sx.size
title = 'elapsed time: %5.1f/%5.1f ms' % (t*1000., timeStep*(nTimeSteps-1)*1000)
draw_spins(ax, title, relative_field_image, im_unit, sx, sy, px, py)
clear_output(wait=True)
display(fig,ax)
ax.cla()
t = t+timeStep
close()
Explanation: Display the relative frequencies (in Hz)
When we first apply an RF pulse, all the spins will precess in phase. If they are all experiencing the same magnetic field, they will remain in phase. However, if some spins experience a different local field, they will become out of phase with the others. Let's show this with a movie, where the phase will be represented with color. Our timestep is 1 millisecond.
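Before running the full movie, here is a stripped-down sketch of the same dephase-then-refocus idea (an added illustration with made-up off-resonance frequencies): the net signal collapses as the spins fan out and returns at the echo because the 180 degree pulse negates the accumulated phase.
import numpy as np
freqs = np.linspace(-200.0, 200.0, 25)       # assumed off-resonance frequencies (Hz) across the voxel
dt, n = 0.001, 50                            # 1 ms steps, 50 steps
phase = np.zeros_like(freqs)
signal = []
for step in range(n):
    phase += 2 * np.pi * freqs * dt
    if step == n // 2 - 1:                   # 180 degree refocusing pulse halfway through
        phase = -phase
    signal.append(abs(np.exp(1j * phase).sum()) / freqs.size)
print(min(signal), signal[-1])               # dephased minimum, then a recovered echo near 1.0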
End of explanation |
8,911 | Given the following text description, write Python code to implement the functionality described.
Description:
Add two numbers x and y
This is how the function will work:
add(2, 3)
5
This is how the function will work:
add(5, 7)
12
| Python Code:
def add(x: int, y: int):
return x + y |
8,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cleaning
Step1: 2. Print data summaries including the number of null values. Should we drop or try to correct any of the null values?
Step2: Gender and year of birth have nulls. I don't think we should drop them because we would lose over 100,000 rows; instead, we could use the median or mean to replace nulls for the year of birth. Regarding gender, it's not possible to make any replacement, but it should be noted that most of the entries are male.
Step3: 3. Create a column in the trip table that contains only the date (no time)
Step4: 4. Merge weather data with trip data and be sure not to lose any trip data
Step5: 5. Drop records that are completely duplicated (all values). Check for and inspect any duplicate trip_id values that remain. Remove if they exist.
Step6: 6. Create columns for lat & long values for the from- and to- stations
Step7: 7. Write a function to round all tripduration values to the nearest half second increment and then round all the values in the data
Step8: 8. Verify that trip_duration matches the timestamps to within 60 seconds
Step9: 9. Something is wrong with the Max_Gust_Speed_MPH column. Identify and correct the problem, then save the data.
Step10: Cleaning
Step11: 2. Create a function that detects and lists non-numeric columns containing values with leading or trailing whitespace. Remove the whitespace in these columns.
Step12: 3. Remove duplicate records. Inspect any remaining duplicate movie titles.
Step13: 4. Create a function that returns two arrays
Step14: 5. Alter the names of duplicate titles that are different movies so each is unique. Then drop all duplicate rows based on movie title.
Step15: 6. Create a series that ranks actors by proportion of movies they have appeared in
Step16: 7. Create a table that contains the first and last years each actor appeared, and their length of history. Then include columns for the actors proportion and total number of movies.
length is number of years they have appeared in movies
Step17: 8. Create a column that gives each movie an integer ranking based on gross sales
1 should indicate the highest gross
If more than one movie has equal sales, assign all the lowest rank in the group
The next rank after this group should increase only by 1 | Python Code:
import pandas as pd
import numpy as np
sets = ['station', 'trip', 'weather']
cycle = {}
for s in sets:
cycle[s] = pd.read_csv('cycle_share/' + s + '.csv')
cycle['trip'].head()
Explanation: Cleaning: Cycle Share
There are 3 datasets that provide data on the stations, trips, and weather from 2014-2016.
Station dataset
station_id: station ID number
name: name of station
lat: station latitude
long: station longitude
install_date: date that station was placed in service
install_dockcount: number of docks at each station on the installation date
modification_date: date that station was modified, resulting in a change in location or dock count
current_dockcount: number of docks at each station on 8/31/2016
decommission_date: date that station was placed out of service
Trip dataset
trip_id: numeric ID of bike trip taken
starttime: day and time trip started, in PST
stoptime: day and time trip ended, in PST
bikeid: ID attached to each bike
tripduration: time of trip in seconds
from_station_name: name of station where trip originated
to_station_name: name of station where trip terminated
from_station_id: ID of station where trip originated
to_station_id: ID of station where trip terminated
usertype: "Short-Term Pass Holder" is a rider who purchased a 24-Hour or 3-Day Pass; "Member" is a rider who purchased a Monthly or an Annual Membership
gender: gender of rider
birthyear: birth year of rider
Weather dataset contains daily weather information in the service area
1. Import all sets into a dictionary and correct any errors
The trip file had the header row repeated after the values of one line; I simply got rid of it by deleting those stray values from the file. I also noticed that the first several rows were repeated and the line with the headers was one of those, so I used the values from the original line to fill in the missing ones.
End of explanation
for df in cycle:
print(df)
print(cycle[df].describe(include='all'))
print('\n')
Explanation: 2. Print data summaries including the number of null values. Should we drop or try to correct any of the null values?
End of explanation
cycle['trip'].groupby('gender')['trip_id'].count()
Explanation: Gender and year of birth have nulls. I don't think we should drop them because we would lose over 100,000 rows; instead, we could use the median or mean to replace nulls for the year of birth. Regarding gender, it's not possible to make any replacement, but it should be noted that most of the entries are male.
End of explanation
#cycle['trip']['date'] = cycle['trip']['starttime'].apply(lambda x: pd.to_datetime(x[0:x.find(' ')], format='%m/%d/%Y'))
cycle['trip']['date'] = cycle['trip']['starttime'].apply(lambda x: x[0:x.find(' ')])
cycle['trip'].head()
Explanation: 3. Create a column in the trip table that contains only the date (no time)
End of explanation
trip_weather = pd.merge(cycle['trip'], cycle['weather'], left_on='date', right_on='Date', how='left')
trip_weather.head()
Explanation: 4. Merge weather data with trip data and be sure not to lose any trip data
End of explanation
print(len(trip_weather))
trip_weather = trip_weather.drop_duplicates()
print(len(trip_weather))
print(len(trip_weather['trip_id']))
print(len(trip_weather['trip_id'].unique()))
Explanation: 5. Drop records that are completely duplicated (all values). Check for and inspect any duplicate trip_id values that remain. Remove if they exist.
End of explanation
trip_weather = pd.merge(trip_weather, cycle['station'][['station_id', 'lat', 'long']], left_on='from_station_id', right_on='station_id', how='left').drop('station_id', axis=1)
trip_weather = pd.merge(trip_weather, cycle['station'][['station_id', 'lat', 'long']], left_on='to_station_id', right_on='station_id', how='left', suffixes=['_from_station', '_to_station']).drop('station_id', axis=1)
trip_weather.head()
Explanation: 6. Create columns for lat & long values for the from- and to- stations
End of explanation
def round_trips(duration):
roundings = np.array([np.floor(duration), np.floor(duration)+0.5, np.ceil(duration)])
return roundings[np.argmin(np.abs(duration - roundings))]
trip_weather['tripduration'] = trip_weather['tripduration'].apply(round_trips)
trip_weather['tripduration'].head(10)
Explanation: 7. Write a function to round all tripduration values to the nearest half second increment and then round all the values in the data
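An equivalent vectorized way to do the half-second rounding is sketched below (note that numpy's round applies banker's rounding at exact boundaries, so a handful of edge-case values can differ from the apply-based version; the 'rounded' name is just for illustration):
rounded = np.round(trip_weather['tripduration'] * 2) / 2
rounded.head(10)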
End of explanation
trip_weather[np.abs(((pd.to_datetime(trip_weather['stoptime']) - pd.to_datetime(trip_weather['starttime'])) / np.timedelta64(1, 's')) - trip_weather['tripduration']) > 60]
Explanation: 8. Verify that trip_duration matches the timestamps to within 60 seconds
End of explanation
# not an int, let's convert it
trip_weather['Max_Gust_Speed_MPH'] = trip_weather['Max_Gust_Speed_MPH'].replace('-', np.NaN).astype('float')
trip_weather['Max_Gust_Speed_MPH'].describe()
trip_weather.to_csv('cycle_share/trip_weather.csv')
Explanation: 9. Something is wrong with the Max_Gust_Speed_MPH column. Identify and correct the problem, then save the data.
End of explanation
movies = pd.read_csv('movies/movies_data.csv')
movies.head()
movies.dtypes
movies.describe(include='all')
print(movies['color'].unique())
movies['color'] = movies['color'].apply(lambda x: 'Color' if x == 'color' else 'Black and White' if x == 'black and white' else x)
print(movies['color'].unique())
Explanation: Cleaning: Movies
This data set contains 28 attributes related to various movie titles that have been scraped from IMDb. The set is supposed to contain unique titles for each record, where each record has the following attributes:
"movie_title" "color" "num_critic_for_reviews" "movie_facebook_likes" "duration" "director_name" "director_facebook_likes" "actor_3_name" "actor_3_facebook_likes" "actor_2_name" "actor_2_facebook_likes" "actor_1_name" "actor_1_facebook_likes" "gross" "genres" "num_voted_users" "cast_total_facebook_likes" "facenumber_in_poster" "plot_keywords" "movie_imdb_link" "num_user_for_reviews" "language" "country" "content_rating" "budget" "title_year" "imdb_score" "aspect_ratio"
The original set is available on Kaggle (here)
1. Check for and correct similar values in color, language, and country
End of explanation
def find_spaces(df):
cols = []
for index, value in df.dtypes[df.dtypes == 'object'].iteritems():
if df[index].str.startswith(' ').any() | df[index].str.endswith(' ').any():
cols.append(index)
return cols
find_spaces(movies)
for col in find_spaces(movies):
movies[col] = movies[col].str.lstrip().str.rstrip()
find_spaces(movies)
Explanation: 2. Create a function that detects and lists non-numeric columns containing values with leading or trailing whitespace. Remove the whitespace in these columns.
End of explanation
print(len(movies))
movies = movies.drop_duplicates()
print(len(movies))
title_duplicates = list(movies['movie_title'].value_counts()[movies['movie_title'].value_counts() > 1].index)
movies[movies['movie_title'].isin(title_duplicates)].sort_values(by='movie_title')
print(movies.loc[337])
print(movies.loc[4584])
Explanation: 3. Remove duplicate records. Inspect any remaining duplicate movie titles.
End of explanation
true_dup = []
false_dup = []
for title in title_duplicates:
for index, value in movies[movies['movie_title'] == title]['movie_imdb_link'].value_counts().iteritems():
if value > 1:
true_dup.append(title)
else:
false_dup.append(title)
break
print(true_dup)
print(false_dup)
Explanation: 4. Create a function that returns two arrays: one for titles that are truly duplicated, and one for duplicated titles that are not the same movie.
hint: do this by comparing the imdb link values
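A more pandas-idiomatic way to get essentially the same split (a sketch; the *_alt names are just for illustration) is to count distinct IMDb links per duplicated title: one distinct link means a true duplicate, more than one means different movies sharing a title.
links_per_title = movies[movies['movie_title'].isin(title_duplicates)].groupby('movie_title')['movie_imdb_link'].nunique()
true_dup_alt = list(links_per_title[links_per_title == 1].index)
false_dup_alt = list(links_per_title[links_per_title > 1].index)
print(true_dup_alt)
print(false_dup_alt)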
End of explanation
movies['movie_title'] = movies.apply(lambda x: x['movie_title'] + ' (' + str(int(x['title_year'])) + ')' if str(x['title_year']) != 'nan' and x['movie_title'] in false_dup else x['movie_title'], axis=1)
print(len(movies))
movies = movies.drop_duplicates('movie_title')
print(len(movies))
Explanation: 5. Alter the names of duplicate titles that are different movies so each is unique. Then drop all duplicate rows based on movie title.
End of explanation
actors = movies.groupby(['actor_1_name'])['movie_title'].count()
actors = actors.add(movies.groupby(['actor_2_name'])['movie_title'].count(), fill_value=0)
actors = actors.add(movies.groupby(['actor_3_name'])['movie_title'].count(), fill_value=0)
(actors / len(movies)).sort_values(ascending=False).head(20)
Explanation: 6. Create a series that ranks actors by proportion of movies they have appeared in
End of explanation
actor_years = movies.groupby(['actor_1_name'])['title_year'].aggregate({'min_year_1': np.min, 'max_year_1': np.max})
actor_years = actor_years.add(movies.groupby(['actor_2_name'])['title_year'].aggregate({'min_year_2': np.min, 'max_year_2': np.max}), fill_value=0)
actor_years = actor_years.add(movies.groupby(['actor_3_name'])['title_year'].aggregate({'min_year_3': np.min, 'max_year_3': np.max}), fill_value=0)
actor_years['first_year'] = np.min(actor_years[['min_year_1', 'min_year_2', 'min_year_3']], axis=1)
actor_years['last_year'] = np.max(actor_years[['max_year_1', 'max_year_2', 'max_year_3']], axis=1)
actor_years.drop(['min_year_1', 'min_year_2', 'min_year_3', 'max_year_1', 'max_year_2', 'max_year_3'], axis=1, inplace=True)
actor_years['history_length'] = actor_years['last_year'] - actor_years['first_year']
actor_years['movie_number'] = actors
actor_years['movie_proportion'] = actors / len(movies)
actor_years
Explanation: 7. Create a table that contains the first and last years each actor appeared, and their length of history. Then include columns for the actors proportion and total number of movies.
length is number of years they have appeared in movies
End of explanation
movies['gross_sales_rank'] = movies['gross'].rank(method='dense', ascending=False, na_option='bottom')
movies[['movie_title', 'gross', 'gross_sales_rank']].sort_values(by='gross_sales_rank').head(20)
Explanation: 8. Create a column that gives each movie an integer ranking based on gross sales
1 should indicate the highest gross
If more than one movie has equal sales, assign all the lowest rank in the group
The next rank after this group should increase only by 1
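That combination of requirements is exactly what method='dense' provides; a tiny illustration with made-up sales numbers shows how it differs from method='min', which would skip ranks after a tie:
s = pd.Series([300, 200, 200, 100])
print(s.rank(method='dense', ascending=False).values)   # ranks 1, 2, 2, 3
print(s.rank(method='min', ascending=False).values)     # ranks 1, 2, 2, 4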
End of explanation |
8,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CHAPTER 4
4.2 Algorithms
Step1: Imports, logging, and data
On top of doing the things we already know, we now additionally import also the CollaborativeFiltering algorithm, which is, as should be obvious by now, accessible through the bestPy.algorithms subpackage.
Step2: Creating a new CollaborativeFiltering object with data
Again, this is as straightforward as you would expect. This time, we will attach the data to the algorithm right away.
Step3: Parameters of the collaborative filtering algorithm
Inspecting the new recommendation object with Tab completion again reveals binarize as a first attribute.
Step4: It has the same meaning as in the baseline recommendation
Step5: Indeed, collaborative filtering cannot necessarily provide recommendations for all customers. Specifically, it fails to do so if the customer in question only bought articles that no other customer has bought. For these cases, we need a fallback solution, which is provided by the algorithm specified through the baseline attribute. As you can see, that algorithm is currently a Baseline instance. We could, of course, also provide the baseline algorithm manually.
Step6: More about that later. There is one more parameter to be explored first.
Step7: In short, collaborative filtering (as it is implemented in bestPy) works by recommending articles that are most similar to the articles the target customer has already bought. What exactly similar means, however, is not set in stone and quite a few similarity measures are available.
+ Dice (dice)
+ Jaccard (jaccard)
+ Kulsinski (kulsinski)
+ Sokal-Sneath (sokalsneath)
+ Russell-Rao (russellrao)
+ cosine (cosine)
+ binary cosine (cosine_binary)
In the last option, we recognize again our concept of binarize where, to compute the cosine similarity between two articles, we do not count how often they have been bought by any particular user but only if they have been bought.
It is not obvious which similarity measure is best in which case, so some experimentation is required. If we want to set the similarity to something other than the default choice of kulsinski, we have to import what we need from the logically located subsubpackage.
Step8: And that's it for the parameters of the collaborative filtering algorithm.
Making a recommendation for a target customer
Now that everything is set up and we have data attached to the algorithm, its for_one() method is available and can be called with the internal integer index of the target customer as argument.
Step9: And, voilà, your recommendation. Again, a higher number means that the article with the same index as that number is more highly recommended for the target customer.
To appreciate the necessity for this fallback solution, we try to get a recommendation for the customer with ID '4' next. | Python Code:
import sys
sys.path.append('../..')
Explanation: CHAPTER 4
4.2 Algorithms: Collaborative filtering
Having understood the basics of how an algorithm is configured, married with data, and deployed in bestPy, we are now ready to move from a baseline recommendation to something more involved. In particular, we are going to discuss the implementation and use of collaborative filtering without, however, going too deep into the technical details of how the algorithm works.
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
End of explanation
from bestPy import write_log_to
from bestPy.datastructures import Transactions
from bestPy.algorithms import Baseline, CollaborativeFiltering # Additionally import CollaborativeFiltering
logfile = 'logfile.txt'
write_log_to(logfile, 20)
file = 'examples_data.csv'
data = Transactions.from_csv(file)
Explanation: Imports, logging, and data
On top of doing the things we already know, we now additionally import also the CollaborativeFiltering algorithm, which is, as should be obvious by now, accessible through the bestPy.algorithms subpackage.
End of explanation
recommendation = CollaborativeFiltering().operating_on(data)
recommendation.has_data
Explanation: Creating a new CollaborativeFiltering object with data
Again, this is as straightforward as you would expect. This time, we will attach the data to the algorithm right away.
End of explanation
recommendation.binarize
Explanation: Parameters of the collaborative filtering algorithm
Inspecting the new recommendation object with Tab completion again reveals binarize as a first attribute.
End of explanation
recommendation.baseline
Explanation: It has the same meaning as in the baseline recommendation: True means we only care whether or not a customer bought an article and False means we also take into account how often a customer bought an article.
Speaking about baseline, you will notice that the recommendation object we just created actually has an attribute baseline.
End of explanation
recommendation.baseline = Baseline()
recommendation.baseline
Explanation: Indeed, collaborative filtering cannot necessarily provide recommendations for all customers. Specifically, it fails to do so if the customer in question only bought articles that no other customer has bought. For these cases, we need a fallback solution, which is provided by the algorithm specified through the baseline attribute. As you can see, that algorithm is currently a Baseline instance. We could, of course, also provide the baseline algorithm manually.
End of explanation
recommendation.similarity
Explanation: More about that later. There is one more parameter to be explored first.
End of explanation
from bestPy.algorithms.similarities import dice, jaccard, sokalsneath, russellrao, cosine, cosine_binary
recommendation.similarity = dice
recommendation.similarity
Explanation: In short, collaborative filtering (as it is implemented in bestPy) works by recommending articles that are most similar to the articles the target customer has already bought. What exactly similar means, however, is not set in stone and quite a few similarity measures are available.
+ Dice (dice)
+ Jaccard (jaccard)
+ Kulsinski (kulsinski)
+ Sokal-Sneath (sokalsneath)
+ Russell-Rao (russellrao)
+ cosine (cosine)
+ binary cosine (cosine_binary)
In the last option, we recognize again our concept of binarize where, to compute the cosine similarity between two articles, we do not count how often they have been bought by any particular user but only if they have been bought.
It is not obvious which similarity measure is best in which case, so some experimentation is required. If we want to set the similarity to something other than the default choice of kulsinski, we have to import what we need from the logically located subsubpackage.
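To build some intuition for what such a measure computes, here is a hand-rolled illustration of binary cosine similarity between two articles represented as who-bought-what vectors (this is only a sketch of the concept with made-up data, not how bestPy implements it internally):
import numpy as np
article_a = np.array([1, 0, 1, 1, 0])   # which of 5 customers bought article A
article_b = np.array([1, 0, 0, 1, 0])   # which of the same customers bought article B
similarity = article_a @ article_b / (np.linalg.norm(article_a) * np.linalg.norm(article_b))
print(similarity)                        # 2 / (sqrt(3) * sqrt(2)) ~ 0.82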
End of explanation
customer = data.user.index_of['5']
recommendation.for_one(customer)
Explanation: And that's it for the parameters of the collaborative filtering algorithm.
Making a recommendation for a target customer
Now that everything is set up and we have data attached to the algorithm, its for_one() method is available and can be called with the internal integer index of the target customer as argument.
End of explanation
customer = data.user.index_of['4']
recommendation.for_one(customer)
Explanation: And, voilà, your recommendation. Again, a higher number means that the article with the same index as that number is more highly recommended for the target customer.
To appreciate the necessity for this fallback solution, we try to get a recommendation for the customer with ID '4' next.
End of explanation |
8,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pima Indian Diabetes Prediction (with Model Reload)
Import some basic libraries.
* Pandas - provides data frames
* matplotlib.pyplot - plotting support
Use Magic %matplotlib to display graphics inline instead of in a popup window.
Step1: Loading and Reviewing the Data
Step2: Definition of features
From the metadata on the data source we have the following definition of the features.
| Feature | Description | Comments |
|--------------|-------------|--------|
| num_preg | number of pregnancies |
| glucose_conc | Plasma glucose concentration at 2 hours in an oral glucose tolerance test |
| diastolic_bp | Diastolic blood pressure (mm Hg) |
| thickness | Triceps skin fold thickness (mm) |
|insulin | 2-Hour serum insulin (mu U/ml) |
| bmi | Body mass index (weight in kg/(height in m)^2) |
| diab_pred | Diabetes pedigree function |
| Age (years) | Age (years)|
| skin | ???? | What is this? |
| diabetes | Class variable (1=True, 0=False) | Why is our data boolean (True/False)? |
Check for null values
Step4: Correlated Feature Check
Helper function that displays correlation by color. Red is most correlated, Blue least.
Step5: The skin and thickness columns are correlated 1 to 1. Dropping the skin column
Step6: Check for additional correlations
Step7: The correlations look good. There appear to be no correlated columns.
Mold Data
Data Types
Inspect data types to see if there are any issues. Data should be numeric.
Step8: Change diabetes from boolean to integer, True=1, False=0
Step9: Verify that the diabetes data type has been changed.
Step10: Check for null values
Step11: No obvious null values.
Check class distribution
Rare events are hard to predict
Step12: Good distribution of true and false cases. No special work needed.
Splitting the data
70% for training, 30% for testing
Step13: We check to ensure we have the desired 70% train, 30% test split of the data
Step14: Verifying predicted value was split correctly
Step15: Post-split Data Preparation
Hidden Missing Values
Step16: Are these 0 values possible?
How many rows have unexpected 0 values?
Step17: Impute with the mean
Step18: Training Initial Algorithm - Naive Bayes
Step19: Performance on Training Data
Step20: Performance on Testing Data
Step21: Metrics
Step22: Random Forest
Step23: Predict Training Data
Step24: Predict Test Data
Step25: Logistic Regression
Step26: Setting regularization parameter
Step27: Logistic regression with class_weight='balanced'
Step28: LogisticRegressionCV
Step29: Predict on Test data
Step30: Using your trained Model
Save trained model to file
Step31: Load trained model from file
Step32: Test Prediction on data
Once the model is loaded we can use it to predict on some data. In this case the data file contains a few rows from the original Pima CSV file.
Step33: The truncated file contained 4 rows from the original CSV.
The data is in the same format as the original CSV file's data. Therefore, just like the original data, we need to transform it before we can make predictions on it.
Note
Step34: We need to drop the diabetes column since that is what we are predicting.
Store the data without that column, using the prefix X as we did with X_train and X_test, to indicate that it contains only the columns we are predicting from.
Step35: Data has 0 in places it should not.
Just like the train and test datasets, we will use imputation to fix this.
Step36: At this point our data is ready to be used for prediction.
Predict diabetes with the prediction data. Returns 1 if True, 0 if false | Python Code:
import pandas as pd # pandas is a dataframe library
import matplotlib.pyplot as plt # matplotlib.pyplot plots data
%matplotlib inline
Explanation: Pima Indian Diabetes Prediction (with Model Reload)
Import some basic libraries.
* Pandas - provides data frames
* matplotlib.pyplot - plotting support
Use Magic %matplotlib to display graphics inline instead of in a popup window.
End of explanation
df = pd.read_csv("./data/pima-data.csv")
df.shape
df.head(5)
df.tail(5)
Explanation: Loading and Reviewing the Data
End of explanation
df.isnull().values.any()
Explanation: Definition of features
From the metadata on the data source we have the following definition of the features.
| Feature | Description | Comments |
|--------------|-------------|--------|
| num_preg | number of pregnancies |
| glucose_conc | Plasma glucose concentration at 2 hours in an oral glucose tolerance test |
| diastolic_bp | Diastolic blood pressure (mm Hg) |
| thickness | Triceps skin fold thickness (mm) |
|insulin | 2-Hour serum insulin (mu U/ml) |
| bmi | Body mass index (weight in kg/(height in m)^2) |
| diab_pred | Diabetes pedigree function |
| Age (years) | Age (years)|
| skin | ???? | What is this? |
| diabetes | Class variable (1=True, 0=False) | Why is our data boolean (True/False)? |
Check for null values
End of explanation
def plot_corr(df, size=11):
    """Function plots a graphical correlation matrix for each pair of columns in the dataframe.

    Input:
        df: pandas DataFrame
        size: vertical and horizontal size of the plot

    Displays:
        matrix of correlation between columns. Blue-cyan-yellow-red-darkred => less to more correlated
        0 ------------------> 1
        Expect a darkred line running from top left to bottom right
    """
    corr = df.corr()  # data frame correlation function
    fig, ax = plt.subplots(figsize=(size, size))
    ax.matshow(corr)  # color code the rectangles by correlation value
    plt.xticks(range(len(corr.columns)), corr.columns)  # draw x tick marks
    plt.yticks(range(len(corr.columns)), corr.columns)  # draw y tick marks
plot_corr(df)
df.corr()
df.head(5)
Explanation: Correlated Feature Check
Helper function that displays correlation by color. Red is most correlated, Blue least.
End of explanation
del df['skin']
df.head(5)
Explanation: The skin and thickness columns are correlated 1 to 1. Dropping the skin column
End of explanation
plot_corr(df)
Explanation: Check for additional correlations
End of explanation
df.head(5)
Explanation: The correlations look good. There appear to be no other correlated columns.
Mold Data
Data Types
Inspect data types to see if there are any issues. Data should be numeric.
End of explanation
diabetes_map = {True : 1, False : 0}
df['diabetes'] = df['diabetes'].map(diabetes_map)
Explanation: Change diabetes from boolean to integer, True=1, False=0
End of explanation
df.head(5)
Explanation: Verify that the diabetes data type has been changed.
End of explanation
df.isnull().values.any()
Explanation: Check for null values
End of explanation
num_obs = len(df)
num_true = len(df.loc[df['diabetes'] == 1])
num_false = len(df.loc[df['diabetes'] == 0])
print("Number of True cases: {0} ({1:2.2f}%)".format(num_true, (num_true/num_obs) * 100))
print("Number of False cases: {0} ({1:2.2f}%)".format(num_false, (num_false/num_obs) * 100))
Explanation: No obvious null values.
Check class distribution
Rare events are hard to predict
End of explanation
from sklearn.cross_validation import train_test_split
feature_col_names = ['num_preg', 'glucose_conc', 'diastolic_bp', 'thickness', 'insulin', 'bmi', 'diab_pred', 'age']
predicted_class_names = ['diabetes']
X = df[feature_col_names].values # predictor feature columns (8 X m)
y = df[predicted_class_names].values # predicted class (1=true, 0=false) column (1 X m)
split_test_size = 0.30
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=split_test_size, random_state=42)
# test_size = 0.3 is 30%, 42 is the answer to everything
Explanation: Good distribution of true and false cases. No special work needed.
Splitting the data
70% for training, 30% for testing
End of explanation
print("{0:0.2f}% in training set".format((len(X_train)/len(df.index)) * 100))
print("{0:0.2f}% in test set".format((len(X_test)/len(df.index)) * 100))
Explanation: We check to ensure we have the desired 70% train, 30% test split of the data
End of explanation
print("Original True : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 1]), (len(df.loc[df['diabetes'] == 1])/len(df.index)) * 100.0))
print("Original False : {0} ({1:0.2f}%)".format(len(df.loc[df['diabetes'] == 0]), (len(df.loc[df['diabetes'] == 0])/len(df.index)) * 100.0))
print("")
print("Training True : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 1]), (len(y_train[y_train[:] == 1])/len(y_train) * 100.0)))
print("Training False : {0} ({1:0.2f}%)".format(len(y_train[y_train[:] == 0]), (len(y_train[y_train[:] == 0])/len(y_train) * 100.0)))
print("")
print("Test True : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 1]), (len(y_test[y_test[:] == 1])/len(y_test) * 100.0)))
print("Test False : {0} ({1:0.2f}%)".format(len(y_test[y_test[:] == 0]), (len(y_test[y_test[:] == 0])/len(y_test) * 100.0)))
Explanation: Verifying predicted value was split correctly
End of explanation
df.head()
Explanation: Post-split Data Preparation
Hidden Missing Values
End of explanation
print("# rows in dataframe {0}".format(len(df)))
print("# rows missing glucose_conc: {0}".format(len(df.loc[df['glucose_conc'] == 0])))
print("# rows missing diastolic_bp: {0}".format(len(df.loc[df['diastolic_bp'] == 0])))
print("# rows missing thickness: {0}".format(len(df.loc[df['thickness'] == 0])))
print("# rows missing insulin: {0}".format(len(df.loc[df['insulin'] == 0])))
print("# rows missing bmi: {0}".format(len(df.loc[df['bmi'] == 0])))
print("# rows missing diab_pred: {0}".format(len(df.loc[df['diab_pred'] == 0])))
print("# rows missing age: {0}".format(len(df.loc[df['age'] == 0])))
Explanation: Are these 0 values possible?
How many rows have unexpected 0 values?
End of explanation
from sklearn.preprocessing import Imputer
#Impute with mean all 0 readings
fill_0 = Imputer(missing_values=0, strategy="mean", axis=0)
X_train = fill_0.fit_transform(X_train)
X_test = fill_0.fit_transform(X_test)
Explanation: Impute with the mean
End of explanation
from sklearn.naive_bayes import GaussianNB
# create Gaussian Naive Bayes model object and train it with the data
nb_model = GaussianNB()
nb_model.fit(X_train, y_train.ravel())
Explanation: Training Initial Algorithm - Naive Bayes
End of explanation
# predict values using the training data
nb_predict_train = nb_model.predict(X_train)
# import the performance metrics library
from sklearn import metrics
# Accuracy
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_train, nb_predict_train)))
print()
Explanation: Performance on Training Data
End of explanation
# predict values using the testing data
nb_predict_test = nb_model.predict(X_test)
from sklearn import metrics
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, nb_predict_test)))
Explanation: Performance on Testing Data
End of explanation
print("Confusion Matrix")
print("{0}".format(metrics.confusion_matrix(y_test, nb_predict_test)))
print("")
print("Classification Report")
print(metrics.classification_report(y_test, nb_predict_test))
Explanation: Metrics
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf_model = RandomForestClassifier(random_state=42) # Create random forest object
rf_model.fit(X_train, y_train.ravel())
Explanation: Random Forest
End of explanation
rf_predict_train = rf_model.predict(X_train)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_train, rf_predict_train)))
Explanation: Predict Training Data
End of explanation
rf_predict_test = rf_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, rf_predict_test)))
print(metrics.confusion_matrix(y_test, rf_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, rf_predict_test))
Explanation: Predict Test Data
End of explanation
from sklearn.linear_model import LogisticRegression
lr_model =LogisticRegression(C=0.7, random_state=42)
lr_model.fit(X_train, y_train.ravel())
lr_predict_test = lr_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_predict_test)))
print(metrics.confusion_matrix(y_test, lr_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_predict_test))
Explanation: Logistic Regression
End of explanation
C_start = 0.1
C_end = 5
C_inc = 0.1
C_values, recall_scores = [], []
C_val = C_start
best_recall_score = 0
while (C_val < C_end):
C_values.append(C_val)
lr_model_loop = LogisticRegression(C=C_val, random_state=42)
lr_model_loop.fit(X_train, y_train.ravel())
lr_predict_loop_test = lr_model_loop.predict(X_test)
recall_score = metrics.recall_score(y_test, lr_predict_loop_test)
recall_scores.append(recall_score)
if (recall_score > best_recall_score):
best_recall_score = recall_score
best_lr_predict_test = lr_predict_loop_test
C_val = C_val + C_inc
best_score_C_val = C_values[recall_scores.index(best_recall_score)]
print("1st max value of {0:.3f} occurred at C={1:.3f}".format(best_recall_score, best_score_C_val))
%matplotlib inline
plt.plot(C_values, recall_scores, "-")
plt.xlabel("C value")
plt.ylabel("recall score")
Explanation: Setting regularization parameter
End of explanation
C_start = 0.1
C_end = 5
C_inc = 0.1
C_values, recall_scores = [], []
C_val = C_start
best_recall_score = 0
while (C_val < C_end):
C_values.append(C_val)
lr_model_loop = LogisticRegression(C=C_val, class_weight="balanced", random_state=42)
lr_model_loop.fit(X_train, y_train.ravel())
lr_predict_loop_test = lr_model_loop.predict(X_test)
recall_score = metrics.recall_score(y_test, lr_predict_loop_test)
recall_scores.append(recall_score)
if (recall_score > best_recall_score):
best_recall_score = recall_score
best_lr_predict_test = lr_predict_loop_test
C_val = C_val + C_inc
best_score_C_val = C_values[recall_scores.index(best_recall_score)]
print("1st max value of {0:.3f} occurred at C={1:.3f}".format(best_recall_score, best_score_C_val))
%matplotlib inline
plt.plot(C_values, recall_scores, "-")
plt.xlabel("C value")
plt.ylabel("recall score")
from sklearn.linear_model import LogisticRegression
lr_model =LogisticRegression( class_weight="balanced", C=best_score_C_val, random_state=42)
lr_model.fit(X_train, y_train.ravel())
lr_predict_test = lr_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_predict_test)))
print(metrics.confusion_matrix(y_test, lr_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_predict_test))
print(metrics.recall_score(y_test, lr_predict_test))
Explanation: Logistic regression with class_weight='balanced'
End of explanation
from sklearn.linear_model import LogisticRegressionCV
lr_cv_model = LogisticRegressionCV(n_jobs=-1, random_state=42, Cs=3, cv=10, refit=False, class_weight="balanced") # set number of jobs to -1 which uses all cores to parallelize
lr_cv_model.fit(X_train, y_train.ravel())
Explanation: LogisticRegressionCV
End of explanation
lr_cv_predict_test = lr_cv_model.predict(X_test)
# training metrics
print("Accuracy: {0:.4f}".format(metrics.accuracy_score(y_test, lr_cv_predict_test)))
print(metrics.confusion_matrix(y_test, lr_cv_predict_test) )
print("")
print("Classification Report")
print(metrics.classification_report(y_test, lr_cv_predict_test))
Explanation: Predict on Test data
End of explanation
from sklearn.externals import joblib
joblib.dump(lr_cv_model, "./data/pima-trained-model.pkl")
Explanation: Using your trained Model
Save trained model to file
End of explanation
lr_cv_model = joblib.load("./data/pima-trained-model.pkl")
Explanation: Load trained model from file
End of explanation
# get data from truncated pima data file
df_predict = pd.read_csv("./data/pima-data-trunc.csv")
print(df_predict.shape)
df_predict
Explanation: Test Prediction on data
Once the model is loaded we can use it to predict on some data. In this case the data file contains a few rows from the original Pima CSV file.
End of explanation
del df_predict['skin']
df_predict
Explanation: The truncated file contained 4 rows from the original CSV.
The data is in the same format as the original CSV file's data. Therefore, just like the original data, we need to transform it before we can make predictions on it.
Note: If the data had been previously "cleaned up" this would not be necessary.
We do this by executing the same transformations as we did on the original data
Start by dropping the "skin" which is the same as thickness, with different units.
End of explanation
X_predict = df_predict
del X_predict['diabetes']
Explanation: We need to drop the diabetes column since that is what we are predicting.
Store the data without that column using the prefix X, as we did with X_train and X_test, to indicate that it contains only the predictor columns.
End of explanation
#Impute with mean all 0 readings
fill_0 = Imputer(missing_values=0, strategy="mean", axis=0)
X_predict = fill_0.fit_transform(X_predict)
Explanation: Data has 0 in places it should not.
Just like the train and test datasets, we will use imputation to fix this.
End of explanation
lr_cv_model.predict(X_predict)
Explanation: At this point our data is ready to be used for prediction.
Predict diabetes with the prediction data. Returns 1 if True, 0 if false
End of explanation |
8,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One Port Tiered Calibration
Intro
A one-port network analyzer can be used to measure a two-port device, provided that the device is reciprocal. This is accomplished by performing two calibrations, which is why it's called a tiered calibration.
First, the VNA is calibrated at the test-port like normal. This is called the first tier. Next, the device is connected to the test-port, and a calibration is performed at the far end of the device, the second tier. A diagram is shown below,
Step1: This notebook will demonstrate how to use skrf to do a two-tiered one-port calibration. We'll use data that was taken to characterize a waveguide-to-CPW probe. So, for this specific example the diagram above looks like
Step2: Some Data
The data available is in the folders 'tier1/' and 'tier2/'.
Step3: (if you don't have the git repo for these examples, the data for this notebook can be found here)
In each folder you will find the two sub-folders, called 'ideals/' and 'measured/'. These contain touchstone files of the calibration standards ideal and measured responses, respectively.
Step4: The first tier is at the waveguide interface, and consisted of the following set of standards
short
delay short
load
radiating open (literally an open waveguide)
Step5: Creating Calibrations
Tier 1
First defining the calibration for Tier 1
Step6: Because we saved corresponding ideal and measured standards with identical names, the Calibration will automatically align our standards upon initialization. (More info on creating Calibration objects can be found in the docs.)
Similarly for the second tier,
Tier 2
Step7: Error Networks
Each one-port Calibration contains a two-port error network, that is determined from the calculated error coefficients. The error network for tier1 models the VNA, while the error network for tier2 represents the VNA and the DUT. These can be visualized through the parameter 'error_ntwk'.
For tier 1,
Step8: Similarly for tier 2,
Step9: De-embedding the DUT
As previously stated, the error network for tier1 models the VNA, and the error network for tier2 represents the VNA+DUT. So to determine the DUT's response, we cascade the inverse S-parameters of the VNA with the VNA+DUT.
$$ DUT = VNA^{-1}\cdot (VNA \cdot DUT)$$
In skrf, this is done as follows
Step10: You may want to save this to disk, for future use,
Step11: formatting junk | Python Code:
from IPython.display import SVG
SVG('images/boxDiagram.svg')
Explanation: One Port Tiered Calibration
Intro
A one-port network analyzer can be used to measure a two-port device, provided that the device is reciprocal. This is accomplished by performing two calibrations, which is why it's called a tiered calibration.
First, the VNA is calibrated at the test-port like normal. This is called the first tier. Next, the device is connected to the test-port, and a calibration is performed at the far end of the device, the second tier. A diagram is shown below,
End of explanation
SVG('images/probe.svg')
Explanation: This notebook will demonstrate how to use skrf to do a two-tiered one-port calibration. We'll use data that was taken to characterize a waveguide-to-CPW probe. So, for this specific example the diagram above looks like:
End of explanation
ls
Explanation: Some Data
The data available is in the folders 'tier1/' and 'tier2/'.
End of explanation
ls tier1/
Explanation: (if you don't have the git repo for these examples, the data for this notebook can be found here)
In each folder you will find the two sub-folders, called 'ideals/' and 'measured/'. These contain touchstone files of the calibration standards ideal and measured responses, respectively.
End of explanation
ls tier1/measured/
Explanation: The first tier is at the waveguide interface, and consisted of the following set of standards
short
delay short
load
radiating open (literally an open waveguide)
End of explanation
from skrf.calibration import OnePort
import skrf as rf
# enable in-notebook plots
%matplotlib inline
# title/ylim are called directly in later cells, so pull them in explicitly
from matplotlib.pyplot import title, ylim
rf.stylely()
tier1_ideals = rf.read_all_networks('tier1/ideals/')
tier1_measured = rf.read_all_networks('tier1/measured/')
tier1 = OnePort(measured = tier1_measured,
ideals = tier1_ideals,
name = 'tier1',
sloppy_input=True)
tier1
Explanation: Creating Calibrations
Tier 1
First defining the calibration for Tier 1
End of explanation
tier2_ideals = rf.read_all_networks('tier2/ideals/')
tier2_measured = rf.read_all_networks('tier2/measured/')
tier2 = OnePort(measured = tier2_measured,
ideals = tier2_ideals,
name = 'tier2',
sloppy_input=True)
tier2
Explanation: Because we saved corresponding ideal and measured standards with identical names, the Calibration will automatically align our standards upon initialization. (More info on creating Calibration objects can be found in the docs.)
Similarly for the second tier,
Tier 2
End of explanation
tier1.error_ntwk.plot_s_db()
title('Tier 1 Error Network')
Explanation: Error Networks
Each one-port Calibration contains a two-port error network, that is determined from the calculated error coefficients. The error network for tier1 models the VNA, while the error network for tier2 represents the VNA and the DUT. These can be visualized through the parameter 'error_ntwk'.
For tier 1,
End of explanation
tier2.error_ntwk.plot_s_db()
title('Tier 2 Error Network')
Explanation: Similarly for tier 2,
End of explanation
dut = tier1.error_ntwk.inv ** tier2.error_ntwk
dut.name = 'probe'
dut.plot_s_db()
title('Probe S-parameters')
ylim(-60,10)
Explanation: De-embedding the DUT
As previously stated, the error network for tier1 models the VNA, and the error network for tier2 represents the VNA+DUT. So to determine the DUT's response, we cascade the inverse S-parameters of the VNA with the VNA+DUT.
$$ DUT = VNA^{-1}\cdot (VNA \cdot DUT)$$
In skrf, this is done as follows
End of explanation
dut.write_touchstone()
ls probe*
Explanation: You may want to save this to disk, for future use,
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/plotly.css", "r").read()
return HTML(styles)
css_styling()
Explanation: formatting junk
End of explanation |
8,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image compression with K-means
K-means is a clustering algorithm which defines K cluster centroids in the feature space and, by making use of an appropriate distance function, iteratively assigns each example to the closest cluster centroid and each cluster centroid to the mean of points previously assigned to it.
In the following example we will make use of K-means clustering to reduce the number of colors contained in an image stored using 24-bit RGB encoding.
Overview
The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. In a 24-bit encoding, each pixel is represented as three 8-bit unsigned integers (ranging from 0 to 255) that specify the red, green and blue intensity values, resulting in a total of 256*256*256=16,777,216 possible colors.
To compress the image, we will reduce this number to 16, assign each color to an index and then each pixel to an index. This process will significantly decrease the amount of space occupied by the image, at the cost of introducing some computational effort.
For a 128x128 image
Step1: Note that the PIL (or Pillow) library is also needed to successfully import the image, so pip install it if you do not have it installed.
Now let's take a look at the image by plotting it with matplotlib
Step2: The image is stored in a 3-dimensional matrix, where the first and second dimension represent the pixel location on the 2-dimensional plan and the third dimension the RGB intensities
Step3: Preprocessing
We want to flatten the matrix, in order to give it to the clustering algorithm
Step4: Fitting
Now by fitting the KMeans estimator in scikit-learn we can identify the best clusters for the flattened matrix
Step5: We can verify that each pixel has been assigned to a cluster
Step6: And we can visualize each cluster centroid
Step7: Note that cluster centroids are computed as the mean of the features, so we easily end up on decimal values, which are not admitted in a 24 bit representation (three 8-bit unsigned integers ranging from 0 to 255) of the colors. We decide to round them with a floor operation. Furthermore we have to invert the sign of the clusters to visualize them
Step8: Reconstructing
Step9: The data contained in clusters and labels define the compressed image and should be stored in a proper format, in order to effectively realize the data compression
Step10: At the cost of some deterioration in color quality, the space occupied by the image will be significantly smaller. We can compare the original and the compressed image in the following figure
from scipy import misc
pic = misc.imread('media/irobot.png')
Explanation: Image compression with K-means
K-means is a clustering algorithm which defines K cluster centroids in the feature space and, by making use of an appropriate distance function, iteratively assigns each example to the closest cluster centroid and each cluster centroid to the mean of points previously assigned to it.
In the following example we will make use of K-means clustering to reduce the number of colors contained in an image stored using 24-bit RGB encoding.
Overview
The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. In a 24-bit encoding, each pixel is represented as three 8-bit unsigned integers (ranging from 0 to 255) that specify the red, green and blue intensity values, resulting in a total of 256*256*256=16,777,216 possible colors.
To compress the image, we will reduce this number to 16, assign each color to an index and then each pixel to an index. This process will significantly decrease the amount of space occupied by the image, at the cost of introducing some computational effort.
For a 128x128 image:
* Uncompressed format: 16,384 px * 24 bits/px = 393,216 bits
* Compressed format: 16,384 px * 4 bits/px + 16 clusters * 24 bits/cluster = 65,536 + 384 bits = 65,920 bits (17%)
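As a quick sanity check, this arithmetic can be reproduced in a couple of lines (4 bits per pixel because 2**4 = 16 colors):
n_px = 128 * 128                       # 16,384 pixels
uncompressed_bits = n_px * 24          # 393,216 bits
compressed_bits = n_px * 4 + 16 * 24   # 65,536 + 384 = 65,920 bits
print(compressed_bits / uncompressed_bits)  # roughly 0.17, i.e. about 17% of the original size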
Note that we won't implement the K-means algorithm directly, as we are primarily interested in showing its application in a common scenario; we'll delegate it to the scikit-learn library.
Implementation
Import
First, let's import the image with scipy:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(pic)
Explanation: Note that the PIL (or Pillow) library is also needed to successfully import the image, so pip install it if you do not have it installed.
Now let's take a look at the image by plotting it with matplotlib:
End of explanation
pic.shape
Explanation: The image is stored in a 3-dimensional matrix, where the first and second dimension represent the pixel location on the 2-dimensional plan and the third dimension the RGB intensities:
End of explanation
w = pic.shape[0]
h = pic.shape[1]
X = pic.reshape((w*h,3))
X.shape
Explanation: Preprocessing
We want to flatten the matrix, in order to give it to the clustering algorithm:
End of explanation
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=16)
kmeans.fit(X)
Explanation: Fitting
Now by fitting the KMeans estimator in scikit-learn we can identify the best clusters for the flattened matrix:
End of explanation
import numpy as np  # numpy is needed here but was only imported in a later cell
kmeans.labels_
np.unique(kmeans.labels_)
Explanation: We can verify that each pixel has been assigned to a cluster:
End of explanation
kmeans.cluster_centers_
Explanation: And we can visualize each cluster centroid:
End of explanation
import numpy as np
plt.imshow(np.floor(kmeans.cluster_centers_.reshape((1,16,3))) * (-1))
Explanation: Note that cluster centroids are computed as the mean of the features, so we easily end up on decimal values, which are not admitted in a 24 bit representation (three 8-bit unsigned integers ranging from 0 to 255) of the colors. We decide to round them with a floor operation. Furthermore we have to invert the sign of the clusters to visualize them:
End of explanation
labels = kmeans.labels_
clusters = np.floor(kmeans.cluster_centers_) * (-1)
Explanation: Reconstructing
End of explanation
# Assigning RGB to clusters and reshaping
pic_recovered = clusters[labels,:].reshape((w,h,3))
plt.imshow(pic_recovered)
Explanation: The data contained in clusters and labels define the compressed image and should be stored in a proper format, in order to effectively realize the data compression:
* clusters: 16 clusters * 24 bits/cluster
* labels: (width x height) px * 4 bits/px
To reconstruct the image we assign RGB values of the cluster centroids to the pixels and we reshape the matrix in the original form:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2,figsize=(10,5))
axes[0].imshow(pic)
axes[1].imshow(pic_recovered)
Explanation: At the cost of some deterioration in color quality, the space occupied by the image will be significantly smaller. We can compare the original and the compressed image in the following figure:
End of explanation |
8,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return x / 255
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
tf.reset_default_graph()
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
num_channels = tensor_shape[3]
weights = tf.get_variable('weights',
shape=[conv_ksize[0], conv_ksize[1], num_channels, conv_num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[conv_num_outputs],
initializer=tf.constant_initializer(0.0))
conv = tf.nn.conv2d(x_tensor, weights, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME')
conv = tf.nn.bias_add(conv, biases)
conv_relu = tf.nn.relu(conv)
pooled = tf.nn.max_pool(conv_relu, ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')
return pooled
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tf.shape(x_tensor)[0]
flat_image_size = np.product(tensor_shape[1:])
return tf.reshape(x_tensor, shape=tf.stack([batch_size, flat_image_size]))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
tf.reset_default_graph()
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tensor_shape[0]
num_features = tensor_shape[1]
weights = tf.get_variable('weights',
shape=[num_features, num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[num_outputs],
initializer=tf.constant_initializer(0.0))
fc = tf.matmul(x_tensor, weights)
fc = tf.nn.bias_add(fc, biases)
fc_relu = tf.nn.relu(fc)
return fc_relu
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
tf.reset_default_graph()
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
tensor_shape = x_tensor.get_shape().as_list()
batch_size = tensor_shape[0]
num_features = tensor_shape[1]
weights = tf.get_variable('weights',
shape=[num_features, num_outputs],
initializer=tf.random_normal_initializer(stddev=0.1))
biases = tf.get_variable('biases',
shape=[num_outputs],
initializer=tf.constant_initializer(0.0))
out = tf.matmul(x_tensor, weights)
out = tf.nn.bias_add(out, biases)
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
with tf.variable_scope("conv1"):
conv1_out = conv2d_maxpool(x,
conv_num_outputs=32,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
with tf.variable_scope("conv2"):
conv2_out = conv2d_maxpool(conv1_out,
conv_num_outputs=64,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
with tf.variable_scope("conv3"):
conv3_out = conv2d_maxpool(conv2_out,
conv_num_outputs=128,
conv_ksize=(5,5),
conv_strides=(1,1),
pool_ksize=(3,3),
pool_strides=(2,2))
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
conv3_flat = flatten(conv3_out)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
with tf.variable_scope("fc1"):
fc1_out = fully_conn(conv3_flat, num_outputs=512)
fc1_out = tf.nn.dropout(fc1_out, keep_prob)
with tf.variable_scope("fc2"):
fc2_out = fully_conn(fc1_out, num_outputs=64)
fc2_out = tf.nn.dropout(fc2_out, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
with tf.variable_scope("out"):
logits = output(fc2_out, 10)
# TODO: return output
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
#Moved the test so it doesn't interfere with the variable scopes
tf.reset_default_graph()
tests.test_conv_net(conv_net)
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict={x: feature_batch,
y: label_batch,
keep_prob: keep_probability})
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
pass
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 30
batch_size = 512
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
8,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Guided Project 1
Learning Objectives
Step1: Step 1. Environment setup
skaffold tool setup
Step2: Modify the PATH environment variable so that skaffold is available
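A minimal sketch of what this can look like in the notebook (it assumes skaffold was downloaded into ~/bin; adjust the directory to wherever you placed the binary):
import os
PATH = os.environ['PATH']
os.environ['PATH'] = PATH + ':' + os.path.expanduser('~/bin')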
Step3: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
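One way to do this from the notebook is to read it from the active gcloud configuration (a sketch; it assumes the gcloud CLI is configured in this environment):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT = shell_output[0]
print('Project ID:', GOOGLE_CLOUD_PROJECT)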
Step4: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is https://&lt;endpoint-hostname&gt;/#/start, the ENDPOINT value is just &lt;endpoint-hostname&gt;.
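Then store just that hostname in a Python variable, for example:
ENDPOINT = ''  # paste the KFP cluster endpoint hostname here (no 'https://', no path)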
Step5: Set the image name as tfx-pipeline under the current GCP project
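A sketch of that image name, reusing the project ID from above (the gcr.io path is the usual Container Registry location; adjust it if you use a different registry):
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'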
Step6: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
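For example (the exact project directory is an assumption; place it wherever you keep your work):
import os
PIPELINE_NAME = 'guided_project_1'
PROJECT_DIR = os.path.join(os.path.expanduser('~'), PIPELINE_NAME)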
Step7: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
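A sketch of the copy command (treat the exact flag spellings as version-dependent and check tfx template copy --help on your installation):
!tfx template copy --model=taxi --pipeline_name={PIPELINE_NAME} --destination_path={PROJECT_DIR}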
Step8: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the Chicago Taxi dataset.
Here is a brief introduction to each of the Python files
Step9: Let's quickly go over the structure of a test file to test Tensorflow code
Step10: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features
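A minimal sketch of that structure (the constant names asserted here follow the taxi template's features.py and are assumptions to adapt to your copy):
import tensorflow as tf
from models import features  # import the module under test

class FeaturesTest(tf.test.TestCase):

    def test_bucket_feature_counts_match(self):
        # each bucketized feature should declare a bucket count (names from the taxi template)
        self.assertEqual(len(features.BUCKET_FEATURE_KEYS),
                         len(features.BUCKET_FEATURE_BUCKET_COUNT))

if __name__ == '__main__':
    tf.test.main()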
Step11: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
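A sketch of the upload, assuming the default bucket that AI Platform Pipelines creates (named &lt;project-id&gt;-kubeflowpipelines-default); substitute your own bucket if needed:
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
!gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/data.csv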
Step12: Let's create a TFX pipeline using the tfx pipeline create command.
Note
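A sketch of the create command, run from the project directory (flag names vary a little between TFX versions, so check tfx pipeline create --help):
!tfx pipeline create --engine=kubeflow --pipeline_path=kubeflow_dag_runner.py --endpoint={ENDPOINT} --build_target_image={CUSTOM_TFX_IMAGE}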
Step13: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz if the name of your pipeline is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note
Step14: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip
Step15: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard
Step16: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip
Step17: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note
Step18: Step 8. Try Dataflow with KFP
Several TFX Components uses Apache Beam to implement data-parallel pipelines, and it means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
Step19: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6. | Python Code:
import os
Explanation: Guided Project 1
Learning Objectives:
Learn how to generate a standard TFX template pipeline using tfx template
Learn how to modify and run a templated TFX pipeline
Note: This guided project is adapted from Create a TFX pipeline using templates.
End of explanation
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
Explanation: Step 1. Environment setup
skaffold tool setup
End of explanation
!which skaffold
Explanation: Modify the PATH environment variable so that skaffold is available:
At this point, you should see the skaffold tool with the command which:
End of explanation
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
Explanation: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
End of explanation
ENDPOINT = ''  # Enter your ENDPOINT here, e.g. '1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com'
Explanation: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is
<a href="https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start">https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start</a>,
ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com.
End of explanation
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
Explanation: Set the image name as tfx-pipeline under the current GCP project:
End of explanation
PIPELINE_NAME = "guided_project_1"
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
Explanation: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
End of explanation
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
Explanation: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
!python -m models.features_test
!python -m models.keras.model_test
Explanation: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the Chicago Taxi dataset.
Here is a brief introduction to each of the Python files:
pipeline - This directory contains the definition of the pipeline
* configs.py — defines common constants for pipeline runners
* pipeline.py — defines TFX components and a pipeline
models - This directory contains ML model definitions.
* features.py, features_test.py — defines features for the model
* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform
models/estimator - This directory contains an Estimator based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using TF estimator
models/keras - This directory contains a Keras based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using Keras
beam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine
Running the tests:
You might notice that there are some files with _test.py in their name.
These are unit tests of the pipeline and it is recommended to add more unit
tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag.
You can usually get a module name by deleting .py extension and replacing / with ..
For example:
End of explanation
!tail -26 models/features_test.py
Explanation: Let's quickly go over the structure of a test file to test Tensorflow code:
End of explanation
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil mb gs://{GCS_BUCKET_NAME}
Explanation: First of all, notice that you start by importing the module whose code you want to test. Here we want to test the code in features.py so we import the module features:
python
from models import features
To implement test cases start by defining your own test class inheriting from tf.test.TestCase:
python
class FeaturesTest(tf.test.TestCase):
When you execute the test file with
bash
python -m models.features_test
the main method
python
tf.test.main()
will parse your test class (here: FeaturesTest) and execute every method whose name starts with test. Here we have two such methods, for instance:
python
def testNumberOfBucketFeatureBucketCount(self):
def testTransformedNames(self):
So when you want to add a test case, just add a method to that test class whose name starts with test. The body of these test methods is where the actual testing takes place. In this case, for instance, testTransformedNames tests the function features.transformed_name and makes sure it outputs what is expected.
Since your test class inherits from tf.test.TestCase, it has a number of helper methods you can use to create tests, such as
python
self.assertEqual(expected_outputs, obtained_outputs)
which will fail the test case if obtained_outputs do not match the expected_outputs.
Typical examples of test cases you may want to implement for machine learning code include tests ensuring that your model builds correctly, that your preprocessing function preprocesses raw data as expected, or that your model can train successfully on a few mock examples. When writing tests, make sure that their execution is fast (we just want to check that the code works, not actually train a performant model). For that you may have to create synthetic data in your test files. For more information, read the tf.test.TestCase documentation and the Tensorflow testing best practices.
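To make this concrete, a minimal additional test following this pattern might look like the sketch below (hedged: the assertion about the transformed name is an assumption — adjust it to whatever features.transformed_name actually returns in your copy of features.py):
python
import tensorflow as tf
from models import features

class ExtraFeaturesTest(tf.test.TestCase):

    # tf.test.main() runs every method whose name starts with "test".
    def testTransformedNameDiffersFromRawName(self):
        # Assumption: transformed_name decorates the raw feature name
        # (e.g. appends a suffix), so the output should differ from the input.
        self.assertNotEqual('trip_miles', features.transformed_name('trip_miles'))

if __name__ == '__main__':
    tf.test.main()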
Step 4. Run your first TFX pipeline
Components in the TFX pipeline will generate outputs for each run as
ML Metadata Artifacts, and they need to be stored somewhere.
You can use any storage which the KFP cluster can access, and for this example we
will use Google Cloud Storage (GCS).
Let us create this bucket. Its name will be <YOUR_PROJECT>-kubeflowpipelines-default.
End of explanation
!gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/data.csv
Explanation: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
End of explanation
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
Explanation: Let's create a TFX pipeline using the tfx pipeline create command.
Note: When creating a pipeline for KFP, we need a container image which will
be used to run our pipeline. And skaffold will build the image for us. Because skaffold
pulls base images from the docker hub, it will take 5~10 minutes when we build
the image for the first time, but it will take much less time from the second build.
End of explanation
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz if the name of your pipeline is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note: You may see the following error Error importing tfx_bsl_extension.coders. Please ignore it.
Debugging tip: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems.
Please make sure your KFP cluster has permissions to access Google Cloud APIs.
This can be configured when you create a KFP cluster in GCP,
or see Troubleshooting document in GCP.
End of explanation
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.
You now need to update the existing pipeline with the modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
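For orientation, the three uncommented lines wire the components together following the standard TFX pattern, roughly as below (a sketch — the exact variable names in your copy of pipeline.py may differ slightly):
python
# Computes statistics over the ingested examples.
statistics_gen = StatisticsGen(examples=example_gen.outputs['examples'])

# Generates a schema based on the computed statistics.
schema_gen = SchemaGen(
    statistics=statistics_gen.outputs['statistics'],
    infer_feature_shape=False)

# Performs anomaly detection based on the statistics and the schema.
example_validator = ExampleValidator(
    statistics=statistics_gen.outputs['statistics'],
    schema=schema_gen.outputs['schema'])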
End of explanation
print('https://' + ENDPOINT)
Explanation: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
print("https://" + ENDPOINT)
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
Verify that the pipeline DAG has changed accordingly in the Kubeflow UI:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note: You MUST set your GCP region in the configs.py file before proceeding
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
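As a rough sketch of what gets uncommented (illustrative placeholders only — use your own region and bucket, and keep the query that ships with the template):
python
# In pipeline/configs.py (illustrative placeholders):
GOOGLE_CLOUD_REGION = 'us-central1'

BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--temp_location=gs://' + GCS_BUCKET_NAME + '/tmp',
]

# The template ships with a full query over the public
# `bigquery-public-data.chicago_taxi_trips.taxi_trips` table; keep that one.
BIG_QUERY_QUERY = "SELECT ... FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`"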
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 8. Try Dataflow with KFP
Several TFX Components uses Apache Beam to implement data-parallel pipelines, and it means that you can distribute data processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is guided_project_1 if you didn't change.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
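The Dataflow settings follow the standard Apache Beam flags; the uncommented list looks roughly like this (illustrative placeholders for project, bucket and region):
python
# In pipeline/configs.py (illustrative placeholders):
DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--runner=DataflowRunner',
    '--temp_location=gs://' + GCS_BUCKET_NAME + '/tmp',
    '--region=' + GOOGLE_CLOUD_REGION,
]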
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6.
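For orientation, the two dictionaries being uncommented have roughly this shape (a sketch — check your copy of configs.py for the exact keys and values):
python
# In pipeline/configs.py (illustrative placeholders):
GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GOOGLE_CLOUD_PROJECT,
    'region': GOOGLE_CLOUD_REGION,
    # Same custom image used for the pipeline itself (CUSTOM_TFX_IMAGE above).
    'masterConfig': {'imageUri': 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'},
}

GCP_AI_PLATFORM_SERVING_ARGS = {
    'model_name': PIPELINE_NAME,
    'project_id': GOOGLE_CLOUD_PROJECT,
    'regions': [GOOGLE_CLOUD_REGION],
}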
End of explanation |
8,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Puerto Rico water quality measurements-results data
Adapted from ODM2 API
Step1: odm2api version used to run this notebook
Step2: Connect to the ODM2 SQLite Database
This example uses an ODM2 SQLite database file loaded with water quality sample data from multiple monitoring sites in the iUTAH Gradients Along Mountain to Urban Transitions (GAMUT) water quality monitoring network. Water quality samples have been collected and analyzed for nitrogen, phosphorus, total coliform, E-coli, and some water isotopes. The database (iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite) contains "measurement"-type results.
The example database is located in the data sub-directory.
Step3: Run Some Basic Queries on the ODM2 Database
This section shows some examples of how to use the API to run both simple and more advanced queries on the ODM2 database, as well as how to examine the query output in convenient ways thanks to Python tools.
Simple query functions like getVariables( ) return objects similar to the entities in ODM2, and individual attributes can then be retrieved from the objects returned.
Get all Variables
A simple query with simple output.
Step4: Get all People
Another simple query.
Step5: Site Sampling Features
Step6: Since we know this is a geospatial dataset (Sites, which have latitude and longitude), we can use more specialized Python tools like GeoPandas (geospatially enabled Pandas) and Folium interactive maps.
Step7: A site has a SiteTypeCV. Let's examine the site type distribution, and use that information to create a new GeoDataFrame column to specify a map marker color by SiteTypeCV.
Step8: Now we'll create an interactive and helpful Folium map of the sites. This map features
Step9: You can also drill down and get objects linked by foreign keys. The API returns related objects in a nested hierarchy so they can be interrogated in an object oriented way. So, if I use the getResults( ) function to return a Result from the database (e.g., a "Measurement" Result), I also get the associated Action that created that Result (e.g., a "Specimen analysis" Action).
Step10: Get a Result and its Attributes
Because all of the objects are returned in a nested form, if you retrieve a result, you can interrogate it to get all of its related attributes. When a Result object is returned, it includes objects that contain information about Variable, Units, ProcessingLevel, and the related Action that created that Result.
Step11: The last block of code returns a particular Measurement Result. From that I can get the SamplingFeatureID (in this case 26) for the Specimen from which the Result was generated. But, if I want to figure out which Site the Specimen was collected at, I need to query the database to get the related Site SamplingFeature. I can use getRelatedSamplingFeatures( ) for this. Once I've got the SamplingFeature for the Site, I could get the rest of the SamplingFeature attributes.
Retrieve the "Related" Site at which a Specimen was collected
Step12: Return Results and Data Values for a Particular Site/Variable
From the list of Variables returned above and the information about the SamplingFeature I queried above, I know that VariableID = 2 for Total Phosphorus and SiteID = 1 for the Red Butte Creek site at 1300E. I can use the getResults( ) function to get all of the Total Phosphorus results for this site by passing in the VariableID and the SiteID.
Step13: Retrieve the Result (Data) Values, Then Create a Quick Time Series Plot of the Data
Now I can retrieve all of the data values associated with the list of Results I just retrieved. In ODM2, water chemistry measurements are stored as "Measurement" results. Each "Measurement" Result has a single data value associated with it. So, for convenience, the getResultValues( ) function allows you to pass in a list of ResultIDs so you can get the data values for all of them back in a Pandas data frame object, which is easier to work with. Once I've got the data in a Pandas data frame object, I can use the plot( ) function directly on the data frame to create a quick visualization.
10/5/2018. NOTE CURRENT ISSUE REGARDING ValueDateTime RETURNED BY read.getResultValues. There seems to be an unexpected behavior with the data type returned for ValueDateTime for SQLite databases. It should be a datetime, but it's currently a string. This is being investigated. For now, we are converting to a datetime manually in cells 25 and 27 via the statement | Python Code:
%matplotlib inline
import os
import matplotlib.pyplot as plt
from shapely.geometry import Point
import pandas as pd
import geopandas as gpd
import folium
from folium.plugins import MarkerCluster
import odm2api
from odm2api.ODMconnection import dbconnection
import odm2api.services.readService as odm2rs
pd.__version__, gpd.__version__, folium.__version__
Explanation: Puerto Rico water quality measurements-results data
Adapted from ODM2 API: Retrieve, manipulate and visualize ODM2 water quality measurement-type data
This example shows how to use the ODM2 Python API (odm2api) to connect to an ODM2 database, retrieve data, and analyze and visualize the data. The database (iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite) contains "measurement"-type results.
This example uses SQLite for the database because it doesn't require a server. However, the ODM2 Python API demonstrated here can also be used with ODM2 databases implemented in MySQL, PostgreSQL or Microsoft SQL Server.
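For example, a server-based connection would look roughly like the sketch below (hedged: the engine name, host, database name and credentials are placeholders, and the exact createConnection signature should be checked against the odm2api documentation):
python
# Hypothetical PostgreSQL connection (placeholders for host/db/user/password):
session_factory = dbconnection.createConnection(
    'postgresql', 'localhost', 'odm2_db', 'odm2_user', 'odm2_password')
read = odm2rs.ReadODM2(session_factory)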
More details on the ODM2 Python API and its source code and latest development can be found at https://github.com/ODM2/ODM2PythonAPI
Emilio Mayorga. Last updated 2019-3-27.
End of explanation
odm2api.__version__
Explanation: odm2api version used to run this notebook:
End of explanation
# Assign directory paths and SQLite file name
dbname_sqlite = "MariaWaterQualityData.sqlite"
# sqlite_pth = os.path.join("data", dbname_sqlite)
sqlite_pth = dbname_sqlite
try:
session_factory = dbconnection.createConnection('sqlite', sqlite_pth)
read = odm2rs.ReadODM2(session_factory)
print("Database connection successful!")
except Exception as e:
print("Unable to establish connection to the database: ", e)
Explanation: Connect to the ODM2 SQLite Database
This example uses an ODM2 SQLite database file loaded with water quality sample data from multiple monitoring sites in the iUTAH Gradients Along Mountain to Urban Transitions (GAMUT) water quality monitoring network. Water quality samples have been collected and analyzed for nitrogen, phosphorus, total coliform, E-coli, and some water isotopes. The database (iUTAHGAMUT_waterquality_measurementresults_ODM2.sqlite) contains "measurement"-type results.
The example database is located in the data sub-directory.
End of explanation
# Get all of the Variables from the ODM2 database then read the records
# into a Pandas DataFrame to make it easy to view and manipulate
allVars = read.getVariables()
variables_df = pd.DataFrame.from_records([vars(variable) for variable in allVars],
index='VariableID')
variables_df.head(10)
Explanation: Run Some Basic Queries on the ODM2 Database
This section shows some examples of how to use the API to run both simple and more advanced queries on the ODM2 database, as well as how to examine the query output in convenient ways thanks to Python tools.
Simple query functions like getVariables( ) return objects similar to the entities in ODM2, and individual attributes can then be retrieved from the objects returned.
Get all Variables
A simple query with simple output.
End of explanation
allPeople = read.getPeople()
pd.DataFrame.from_records([vars(person) for person in allPeople]).head()
Explanation: Get all People
Another simple query.
End of explanation
# Get all of the SamplingFeatures from the ODM2 database that are Sites
siteFeatures = read.getSamplingFeatures(sftype='Site')
# Read Sites records into a Pandas DataFrame
# "if sf.Latitude" is used only to instantiate/read Site attributes)
df = pd.DataFrame.from_records([vars(sf) for sf in siteFeatures if sf.Latitude])
Explanation: Site Sampling Features: pass arguments to the API query
Some of the API functions accept arguments that let you subset what is returned. For example, I can query the database using the getSamplingFeatures( ) function and pass it a SamplingFeatureType of "Site" to return a list of those SamplingFeatures that are Sites.
End of explanation
# Create a GeoPandas GeoDataFrame from Sites DataFrame
ptgeom = [Point(xy) for xy in zip(df['Longitude'], df['Latitude'])]
gdf = gpd.GeoDataFrame(df, geometry=ptgeom, crs={'init': 'epsg:4326'})
gdf.head(5)
# Number of records (features) in GeoDataFrame
len(gdf)
# A trivial but easy-to-generate GeoPandas plot
gdf.plot();
Explanation: Since we know this is a geospatial dataset (Sites, which have latitude and longitude), we can use more specialized Python tools like GeoPandas (geospatially enabled Pandas) and Folium interactive maps.
End of explanation
gdf['SiteTypeCV'].value_counts()
Explanation: A site has a SiteTypeCV. Let's examine the site type distribution, and use that information to create a new GeoDataFrame column to specify a map marker color by SiteTypeCV.
End of explanation
gdf["color"] = gdf.apply(lambda feat: 'green' if feat['SiteTypeCV'] == 'Stream' else 'red', axis=1)
m = folium.Map(tiles='CartoDB positron')
marker_cluster = MarkerCluster().add_to(m)
for idx, feature in gdf.iterrows():
folium.Marker(location=[feature.geometry.y, feature.geometry.x],
icon=folium.Icon(color=feature['color']),
popup="{0} ({1}): {2}".format(
feature['SamplingFeatureCode'], feature['SiteTypeCV'],
feature['SamplingFeatureName'])
).add_to(marker_cluster)
# Set the map extent (bounds) to the extent of the ODM2 sites
m.fit_bounds(m.get_bounds())
# Done with setup. Now render the map
m
# Get the SamplingFeature object for a particular SamplingFeature by passing its SamplingFeatureCode
site_sf_code = 'SystemD1_PV'
sf = read.getSamplingFeatures(codes=[site_sf_code])[0]
type(sf)
# Simple way to examine the content (properties) of a Python object, as if it were a dictionary
vars(sf)
Explanation: Now we'll create an interactive and helpful Folium map of the sites. This map features:
- Automatic panning to the location of the sites (no hard wiring, except for the zoom scale), based on GeoPandas functionality and information from the ODM2 Site Sampling Features
- Color coding by SiteTypeCV
- Marker clustering
- Simple marker pop ups with content from the ODM2 Site Sampling Features
End of explanation
try:
# Call getResults, but return only the first Result
firstResult = read.getResults()[0]
frfa = firstResult.FeatureActionObj
frfaa = firstResult.FeatureActionObj.ActionObj
print("The FeatureAction object for the Result is: ", frfa)
print("The Action object for the Result is: ", frfaa)
# Print some Action attributes in a more human readable form
print("\nThe following are some of the attributes for the Action that created the Result: ")
print("ActionTypeCV: {}".format(frfaa.ActionTypeCV))
print("ActionDescription: {}".format(frfaa.ActionDescription))
print("BeginDateTime: {}".format(frfaa.BeginDateTime))
print("EndDateTime: {}".format(frfaa.EndDateTime))
print("MethodName: {}".format(frfaa.MethodObj.MethodName))
print("MethodDescription: {}".format(frfaa.MethodObj.MethodDescription))
except Exception as e:
print("Unable to demo Foreign Key Example: {}".format(e))
vars(frfaa)
vars(frfaa.MethodObj)
Explanation: You can also drill down and get objects linked by foreign keys. The API returns related objects in a nested hierarchy so they can be interrogated in an object oriented way. So, if I use the getResults( ) function to return a Result from the database (e.g., a "Measurement" Result), I also get the associated Action that created that Result (e.g., a "Specimen analysis" Action).
End of explanation
print("------- Example of Retrieving Attributes of a Result -------")
try:
firstResult = read.getResults()[0]
frfa = firstResult.FeatureActionObj
print("The following are some of the attributes for the Result retrieved: ")
print("ResultID: {}".format(firstResult.ResultID))
print("ResultTypeCV: {}".format(firstResult.ResultTypeCV))
print("ValueCount: {}".format(firstResult.ValueCount))
print("ProcessingLevel: {}".format(firstResult.ProcessingLevelObj.Definition))
print("SampledMedium: {}".format(firstResult.SampledMediumCV))
print("Variable: {}: {}".format(firstResult.VariableObj.VariableCode,
firstResult.VariableObj.VariableNameCV))
print("Units: {}".format(firstResult.UnitsObj.UnitsName))
print("SamplingFeatureID: {}".format(frfa.SamplingFeatureObj.SamplingFeatureID))
print("SamplingFeatureCode: {}".format(frfa.SamplingFeatureObj.SamplingFeatureCode))
except Exception as e:
print("Unable to demo example of retrieving Attributes of a Result: {}".format(e))
vars(firstResult)
vars(frfa)
Explanation: Get a Result and its Attributes
Because all of the objects are returned in a nested form, if you retrieve a result, you can interrogate it to get all of its related attributes. When a Result object is returned, it includes objects that contain information about Variable, Units, ProcessingLevel, and the related Action that created that Result.
End of explanation
specimen_sf_id = frfa.SamplingFeatureObj.SamplingFeatureID
# specimen-entity attributes only show up after first printing one out explicitly
frfa.SamplingFeatureObj.SpecimenTypeCV
vars(frfa.SamplingFeatureObj)
# Pass the Sampling Feature ID of the specimen, and the relationship type
relatedSite = read.getRelatedSamplingFeatures(sfid=specimen_sf_id,
relationshiptype='Was Collected at')[0]
vars(relatedSite)
Explanation: The last block of code returns a particular Measurement Result. From that I can get the SamplingFeatureID (in this case 26) for the Specimen from which the Result was generated. But, if I want to figure out which Site the Specimen was collected at, I need to query the database to get the related Site SamplingFeature. I can use getRelatedSamplingFeatures( ) for this. Once I've got the SamplingFeature for the Site, I could get the rest of the SamplingFeature attributes.
Retrieve the "Related" Site at which a Specimen was collected
End of explanation
siteID = relatedSite.SamplingFeatureID
results_all_at_site = read.getResults(siteid=siteID, restype="Measurement")
len(results_all_at_site)
vars(results_all_at_site[0])
res_vars = []
for r in results_all_at_site:
res_vars.append([r.ResultID, r.ResultDateTime, r.VariableID,
r.VariableObj.VariableCode, r.VariableObj.VariableNameCV, r.VariableObj.VariableDefinition])
# print out a count of number of results for each variable, plus the date range
# Do this by ingesting into a data frame first
res_vars_df = pd.DataFrame(res_vars, columns=['ResultID', 'ResultDateTime', 'VariableID', 'VariableCode', 'VariableNameCV', 'VariableDefinition'])
res_vars_df.head()
res_vars_df.VariableCode.value_counts()
Explanation: Return Results and Data Values for a Particular Site/Variable
From the list of Variables returned above and the information about the SamplingFeature I queried above, I know that VariableID = 2 for Total Phosphorus and SiteID = 1 for the Red Butte Creek site at 1300E. I can use the getResults( ) function to get all of the Total Phosphorus results for this site by passing in the VariableID and the SiteID.
End of explanation
# function that encapsulates the `VariableID`, `getResults` and `getResultValues` queries
def get_results_and_values(siteid, variablecode):
v = variables_df[variables_df['VariableCode'] == variablecode]
variableID = v.index[0]
results = read.getResults(siteid=siteid, variableid=variableID, restype="Measurement")
resultIDList = [x.ResultID for x in results]
# Get all of the data values for the Results in the list created above
# Call getResultValues, which returns a Pandas Data Frame with the data
resultValues = read.getResultValues(resultids=resultIDList, lowercols=False)
resultValues['ValueDateTime'] = pd.to_datetime(resultValues['ValueDateTime'])
return resultValues, results
resultValues, results = get_results_and_values(siteID, 'pH')
resultValues.head()
result_select = results[0]
# Plot the time sequence of Measurement Result Values
ax = resultValues.plot(x='ValueDateTime', y='DataValue', title=relatedSite.SamplingFeatureName,
kind='line', use_index=True, style='o')
ax.set_ylabel("{0} ({1})".format(result_select.VariableObj.VariableNameCV,
result_select.UnitsObj.UnitsAbbreviation))
ax.set_xlabel('Date/Time')
ax.legend().set_visible(False)
# results_faam = lambda results, i: results[i].FeatureActionObj.ActionObj.MethodObj
method = result_select.FeatureActionObj.ActionObj.MethodObj
print("METHOD: {0} ({1})".format(method.MethodName, method.MethodDescription))
Explanation: Retrieve the Result (Data) Values, Then Create a Quick Time Series Plot of the Data
Now I can retrieve all of the data values associated with the list of Results I just retrieved. In ODM2, water chemistry measurements are stored as "Measurement" results. Each "Measurement" Result has a single data value associated with it. So, for convenience, the getResultValues( ) function allows you to pass in a list of ResultIDs so you can get the data values for all of them back in a Pandas data frame object, which is easier to work with. Once I've got the data in a Pandas data frame object, I can use the plot( ) function directly on the data frame to create a quick visualization.
10/5/2018. NOTE CURRENT ISSUE REGARDING ValueDateTime RETURNED BY read.getResultValues. There seems to be an unexpected behavior with the data type returned for ValueDateTime for SQLite databases. It should be a datetime, but it's currently a string. This is being investigated. For now, we are converting to a datetime manually in cells 25 and 27 via the statement:
python
resultValues['ValueDateTime'] = pd.to_datetime(resultValues['ValueDateTime'])
This problem is present in odm2api version 0.7.1, but was not present in Nov. 2017
End of explanation |
8,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TF-Agents Authors.
Step1: Replay Buffers
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Replay Buffer API
The replay buffer class has the following definition and methods.
```python
class ReplayBuffer(tf.Module)
Step12: Writing to the buffer
Step13: Reading from the buffer
There are three ways to read data from the TFUniformReplayBuffer:
get_next() - returns one sample from the buffer. The sample batch size and the number of timesteps returned can be specified via arguments to this method.
as_dataset() - returns the replay buffer as a tf.data.Dataset. You can then create a dataset iterator and iterate through samples of the items in the buffer.
gather_all() - returns all the items in the buffer as a tensor with shape [batch, time, data_spec].
Below are examples of how to read from the replay buffer using each of these methods.
Step14: PyUniformReplayBuffer
PyUniformReplayBuffer has the same functionality as TFUniformReplayBuffer, but its data is stored in numpy arrays instead of tf variables. This buffer is used for out-of-graph data collection. Having the backing storage in numpy can make it easier for some applications to manipulate the data (such as indexing for updating priorities) without using Tensorflow variables. However, this implementation does not benefit from graph optimizations with Tensorflow.
Below is an example of instantiating a PyUniformReplayBuffer from the agent's policy trajectory specs.
Step15: Using replay buffers during training
Now that we know how to create a replay buffer and read and write items, we can use it to store trajectories during training of our agents.
Data collection
First, let's look at how to use the replay buffer during data collection.
In TF-Agents we use a Driver (see the Driver tutorial for more details) to collect experience in an environment. To use a Driver, we specify an Observer, which is a function the Driver executes when it receives a trajectory.
So, to add trajectory elements to the replay buffer, we add an observer that calls add_batch(items) to add the batch of items to the replay buffer.
Below is an example of this with TFUniformReplayBuffer. We first create an environment, a network and an agent. Then we create a TFUniformReplayBuffer. Note that the specs of the trajectory elements in the replay buffer are equal to the agent's collect data spec. We then set its add_batch method as the observer for the driver that will perform data collection during training.
Step16: Reading data for a train step
After adding trajectory elements to the replay buffer, we can read batches of trajectories from it to use as input data for a train step.
Here is an example of how to train on trajectories from the replay buffer in a training loop. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TF-Agents Authors.
End of explanation
!pip install tf-agents
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
from tf_agents import specs
from tf_agents.agents.dqn import dqn_agent
from tf_agents.drivers import dynamic_step_driver
from tf_agents.environments import suite_gym
from tf_agents.environments import tf_py_environment
from tf_agents.networks import q_network
from tf_agents.replay_buffers import py_uniform_replay_buffer
from tf_agents.replay_buffers import tf_uniform_replay_buffer
from tf_agents.specs import tensor_spec
from tf_agents.trajectories import time_step
Explanation: Replay Buffers
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/agents/tutorials/5_replay_buffers_tutorial"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/5_replay_buffers_tutorial.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/agents/tutorials/5_replay_buffers_tutorial.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/agents/tutorials/5_replay_buffers_tutorial.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
Introduction
Reinforcement learning algorithms use replay buffers to store trajectories of experience when executing a policy in an environment. During training, the replay buffer is queried for a subset of the trajectories (either a sequential subset or a sample) in order to "replay" the agent's experience.
In this colab we explore two types of replay buffers (python-backed and tensorflow-backed) that share a common API. The following sections describe the API, each of the buffer implementations, and how to use them during data-collection training.
Setup
Install tf-agents if you haven't already.
End of explanation
data_spec = (
tf.TensorSpec([3], tf.float32, 'action'),
(
tf.TensorSpec([5], tf.float32, 'lidar'),
tf.TensorSpec([3, 2], tf.float32, 'camera')
)
)
batch_size = 32
max_length = 1000
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
data_spec,
batch_size=batch_size,
max_length=max_length)
Explanation: Replay Buffer API
The replay buffer class has the following definition and methods:
```python
class ReplayBuffer(tf.Module):
Abstract base class for TF-Agents replay buffer.
def init(self, data_spec, capacity):
Initializes the replay buffer.
Args:
data_spec: A spec or a list/tuple/nest of specs describing
a single item that can be stored in this buffer
capacity: number of elements that the replay buffer can hold.
@property
def data_spec(self):
Returns the spec for items in the replay buffer.
@property
def capacity(self):
Returns the capacity of the replay buffer.
def add_batch(self, items):
Adds a batch of items to the replay buffer.
def get_next(self,
sample_batch_size=None,
num_steps=None,
time_stacked=True):
Returns an item or batch of items from the buffer.
def as_dataset(self,
sample_batch_size=None,
num_steps=None,
num_parallel_calls=None):
Creates and returns a dataset that returns entries from the buffer.
def gather_all(self):
Returns all the items in buffer.
return self._gather_all()
def clear(self):
Resets the contents of replay buffer
```
When a replay buffer object is initialized, it requires the data_spec of the elements that it will store. This spec corresponds to the TensorSpec of the trajectory elements that will be added to the buffer. It is usually acquired by looking at the agent's agent.collect_data_spec, which defines the shapes, types and structures the agent expects when training (more on that later).
TFUniformReplayBuffer
TFUniformReplayBuffer is the most commonly used replay buffer in TF-Agents, so we will use it in this tutorial. In TFUniformReplayBuffer the backing buffer storage is done by tensorflow variables and is thus part of the compute graph.
The buffer stores batches of elements, with a maximum capacity of max_length elements per batch segment. Thus, the total buffer capacity is batch_size x max_length elements. The elements stored in the buffer must all have a matching data spec. When the replay buffer is used for data collection, the spec is the agent's collect data spec.
Creating the buffer:
To create a TFUniformReplayBuffer we pass in:
the spec of the data elements that the buffer will store
the batch size corresponding to the batch size of the buffer
the max_length number of elements per batch segment
Below is an example of creating a TFUniformReplayBuffer with sample data specs, batch_size 32 and max_length 1000.
End of explanation
action = tf.constant(1 * np.ones(
data_spec[0].shape.as_list(), dtype=np.float32))
lidar = tf.constant(
2 * np.ones(data_spec[1][0].shape.as_list(), dtype=np.float32))
camera = tf.constant(
3 * np.ones(data_spec[1][1].shape.as_list(), dtype=np.float32))
values = (action, (lidar, camera))
values_batched = tf.nest.map_structure(lambda t: tf.stack([t] * batch_size),
values)
replay_buffer.add_batch(values_batched)
Explanation: Writing to the buffer:
To add elements to the replay buffer, we use the add_batch(items) method, where items is a list/tuple/nest of tensors representing the batch of items to be added to the buffer. Each element of items must have an outer dimension equal to batch_size, and the remaining dimensions must conform to the data spec of the item (the same data spec that was passed to the replay buffer constructor).
Here is an example of adding a batch of items.
End of explanation
# add more items to the buffer before reading
for _ in range(5):
replay_buffer.add_batch(values_batched)
# Get one sample from the replay buffer with batch size 10 and 1 timestep:
sample = replay_buffer.get_next(sample_batch_size=10, num_steps=1)
# Convert the replay buffer to a tf.data.Dataset and iterate through it
dataset = replay_buffer.as_dataset(
sample_batch_size=4,
num_steps=2)
iterator = iter(dataset)
print("Iterator trajectories:")
trajectories = []
for _ in range(3):
t, _ = next(iterator)
trajectories.append(t)
print(tf.nest.map_structure(lambda t: t.shape, trajectories))
# Read all elements in the replay buffer:
trajectories = replay_buffer.gather_all()
print("Trajectories from gather all:")
print(tf.nest.map_structure(lambda t: t.shape, trajectories))
Explanation: Reading from the buffer
There are three ways to read data from the TFUniformReplayBuffer:
get_next() - returns one sample from the buffer. The sample batch size and the number of timesteps returned can be specified via arguments to this method.
as_dataset() - returns the replay buffer as a tf.data.Dataset. You can then create a dataset iterator and iterate through samples of the items in the buffer.
gather_all() - returns all the items in the buffer as a tensor with shape [batch, time, data_spec].
Below are examples of how to read from the replay buffer using each of these methods.
End of explanation
replay_buffer_capacity = 1000*32 # same capacity as the TFUniformReplayBuffer
py_replay_buffer = py_uniform_replay_buffer.PyUniformReplayBuffer(
capacity=replay_buffer_capacity,
data_spec=tensor_spec.to_nest_array_spec(data_spec))
Explanation: PyUniformReplayBuffer
PyUniformReplayBuffer has the same functionality as TFUniformReplayBuffer, but its data is stored in numpy arrays instead of tf variables. This buffer is used for out-of-graph data collection. Having the backing storage in numpy can make it easier for some applications to manipulate the data (such as indexing for updating priorities) without using Tensorflow variables. However, this implementation does not benefit from graph optimizations with Tensorflow.
Below is an example of instantiating a PyUniformReplayBuffer from the agent's policy trajectory specs.
End of explanation
env = suite_gym.load('CartPole-v0')
tf_env = tf_py_environment.TFPyEnvironment(env)
q_net = q_network.QNetwork(
tf_env.time_step_spec().observation,
tf_env.action_spec(),
fc_layer_params=(100,))
agent = dqn_agent.DqnAgent(
tf_env.time_step_spec(),
tf_env.action_spec(),
q_network=q_net,
optimizer=tf.compat.v1.train.AdamOptimizer(0.001))
replay_buffer_capacity = 1000
replay_buffer = tf_uniform_replay_buffer.TFUniformReplayBuffer(
agent.collect_data_spec,
batch_size=tf_env.batch_size,
max_length=replay_buffer_capacity)
# Add an observer that adds to the replay buffer:
replay_observer = [replay_buffer.add_batch]
collect_steps_per_iteration = 10
collect_op = dynamic_step_driver.DynamicStepDriver(
tf_env,
agent.collect_policy,
observers=replay_observer,
num_steps=collect_steps_per_iteration).run()
Explanation: Using replay buffers during training
Now that we know how to create a replay buffer and read and write items, we can use it to store trajectories during training of our agents.
Data collection
First, let's look at how to use the replay buffer during data collection.
In TF-Agents we use a Driver (see the Driver tutorial for more details) to collect experience in an environment. To use a Driver, we specify an Observer, which is a function the Driver executes when it receives a trajectory.
So, to add trajectory elements to the replay buffer, we add an observer that calls add_batch(items) to add the batch of items to the replay buffer.
Below is an example of this with TFUniformReplayBuffer. We first create an environment, a network and an agent. Then we create a TFUniformReplayBuffer. Note that the specs of the trajectory elements in the replay buffer are equal to the agent's collect data spec. We then set its add_batch method as the observer for the driver that will perform data collection during training.
End of explanation
# Read the replay buffer as a Dataset,
# read batches of 4 elements, each with 2 timesteps:
dataset = replay_buffer.as_dataset(
sample_batch_size=4,
num_steps=2)
iterator = iter(dataset)
num_train_steps = 10
for _ in range(num_train_steps):
trajectories, _ = next(iterator)
loss = agent.train(experience=trajectories)
Explanation: Reading data for a train step
After adding trajectory elements to the replay buffer, we can read batches of trajectories from it to use as input data for a train step.
Here is an example of how to train on trajectories from the replay buffer in a training loop.
End of explanation |
8,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create an example dataframe
Step2: Create a column of letter grades with a for loop | Python Code:
import pandas as pd
import numpy as np
Explanation: Title: Create A Pandas Column With A For Loop
Slug: pandas_create_column_with_loop
Summary: Create A Pandas Column With A For Loop
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
raw_data = {'student_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze', 'Jacon', 'Ryaner', 'Sone', 'Sloan', 'Piger', 'Riani', 'Ali'],
'test_score': [76, 88, 84, 67, 53, 96, 64, 91, 77, 73, 52, np.NaN]}
df = pd.DataFrame(raw_data, columns = ['student_name', 'test_score'])
Explanation: Create an example dataframe
End of explanation
# Create a list to store the data
grades = []
# For each row in the column,
for row in df['test_score']:
# if more than a value,
if row > 95:
# Append a letter grade
grades.append('A')
# else, if more than a value,
elif row > 90:
# Append a letter grade
grades.append('A-')
# else, if more than a value,
elif row > 85:
# Append a letter grade
grades.append('B')
# else, if more than a value,
elif row > 80:
# Append a letter grade
grades.append('B-')
# else, if more than a value,
elif row > 75:
# Append a letter grade
grades.append('C')
# else, if more than a value,
elif row > 70:
# Append a letter grade
grades.append('C-')
# else, if more than a value,
elif row > 65:
# Append a letter grade
grades.append('D')
# else, if more than a value,
elif row > 60:
# Append a letter grade
grades.append('D-')
# otherwise,
else:
# Append a failing grade
grades.append('Failed')
# Create a column from the list
df['grades'] = grades
# View the new dataframe
df
Explanation: Create a column of letter grades with a for loop
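A note on the design choice: building a Python list in a loop and assigning it works fine for small frames; for larger ones a vectorized approach such as pd.cut gives the same grades more concisely. A hedged sketch (bin edges taken from the thresholds above):
python
# Vectorized alternative using pd.cut (right-closed bins match the > comparisons above).
# Note: NaN test scores stay NaN here, whereas the loop labels them 'Failed'.
bins = [-np.inf, 60, 65, 70, 75, 80, 85, 90, 95, np.inf]
letters = ['Failed', 'D-', 'D', 'C-', 'C', 'B-', 'B', 'A-', 'A']
df['grades_cut'] = pd.cut(df['test_score'], bins=bins, labels=letters)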
End of explanation |
8,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SED fitting with naima
In this notebook we will carry out a fit of an IC model to the HESS spectrum of RX J1713.7-3946 with the naima wrapper around emcee. This tutorial will follow loosely the tutorial found on the naima documentation.
The first step is to load the data, which we can find in the same directory as this notebook. The data format required by naima for the data files can be found in the documentation.
Step1: Then we define the model to be fit. The model function must take a tuple of free parameters as first argument and a data table as second. It must return the model flux at the energies given by data['energy'] as its first element, and any extra objects will be saved with the MCMC chain.
emcee does not accept astropy Quantities as parameters, so we have to give them units before setting the attributes of the particle distribution function.
Here we define an IC model with an Exponential Cutoff Power-Law with the amplitude, index, and cutoff energy as free parameters. Because the amplitude and cutoff energy may be considered to have a uniform prior in log-space, we sample their decimal logarithms (we could also use a log-uniform prior). We also place a uniform prior on the particle index with limits between -1 and 5.
Step2: We take the data, model, prior, parameter vector, and labels and call the main fitting procedure
Step3: Simultaneous fitting of two radiative components
Step4: Note that in all naima functions (including run_sampler) you can provide a list of spectra, so you can consider both the HESS and Suzaku spectra
Step5: Below is the model, labels, parameters and prior defined above for the IC-only fit. Modify it as needed and feed it to naima.run_sampler to obtain an estimate of the magnetic field strength. | Python Code:
import naima
import numpy as np
from astropy.io import ascii
import astropy.units as u
%matplotlib inline
import matplotlib.pyplot as plt
hess_spectrum = ascii.read('RXJ1713_HESS_2007.dat', format='ipac')
fig = naima.plot_data(hess_spectrum)
Explanation: SED fitting with naima
In this notebook we will carry out a fit of an IC model to the HESS spectrum of RX J1713.7-3946 with the naima wrapper around emcee. This tutorial will follow loosely the tutorial found on the naima documentation.
The first step is to load the data, which we can find in the same directory as this notebook. The data format required by naima for the data files can be found in the documentation.
End of explanation
from naima.models import ExponentialCutoffPowerLaw, InverseCompton
from naima import uniform_prior
ECPL = ExponentialCutoffPowerLaw(1e36/u.eV, 5*u.TeV, 2.7, 50*u.TeV)
IC = InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 30*u.K, 0.4*u.eV/u.cm**3]])
# define labels and initial vector for the parameters
labels = ['log10(norm)', 'index', 'log10(cutoff)']
p0 = np.array((34, 2.7, np.log10(30)))
# define the model function
def model(pars, data):
ECPL.amplitude = (10**pars[0]) / u.eV
ECPL.alpha = pars[1]
ECPL.e_cutoff = (10**pars[2]) * u.TeV
return IC.flux(data['energy'], distance=2.0*u.kpc), IC.compute_We(Eemin=1*u.TeV)
from naima import uniform_prior
def lnprior(pars):
lnprior = uniform_prior(pars[1], -1, 5)
return lnprior
Explanation: Then we define the model to be fit. The model function must take a tuple of free parameters as its first argument and a data table as its second. It must return the model flux at the energies given by data['energy'] as its first output; any extra objects returned will be saved with the MCMC chain.
emcee does not accept astropy Quantities as parameters, so we have to give them units before setting the attributes of the particle distribution function.
Here we define an IC model with an Exponential Cutoff Power-Law with the amplitude, index, and cutoff energy as free parameters. Because the amplitude and cutoff energy may be considered to have a uniform prior in log-space, we sample their decimal logarithms (we could also use a log-uniform prior). We also place a uniform prior on the particle index with limits between -1 and 5.
End of explanation
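# An optional extension (illustrative sketch only, not used in the fit below): log-priors
# add, so extra constraints can be combined by summing uniform_prior terms. The limits on
# log10(cutoff) shown here are assumptions, not values from the original tutorial.
# def lnprior(pars):
#     lnprior = uniform_prior(pars[1], -1, 5) + uniform_prior(pars[2], 0, 3)
#     return lnprior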
sampler, pos = naima.run_sampler(data_table=hess_spectrum, model=model, prior=lnprior, p0=p0, labels=labels,
nwalkers=32, nburn=50, nrun=100, prefit=True, threads=4)
# inspect the chains stored in the sampler for the three free parameters
f = naima.plot_chain(sampler, 0)
f = naima.plot_chain(sampler, 1)
f = naima.plot_chain(sampler, 2)
# make a corner plot of the parameters to show covariances
f = naima.plot_corner(sampler)
# Show the fit
f = naima.plot_fit(sampler)
f.axes[0].set_ylim(bottom=1e-13)
# Inspect the metadata blob saved
f = naima.plot_blob(sampler,1, label='$W_e (E_e>1$ TeV)')
# There is also a convenience function that will plot all the above files to pngs or a single pdf
naima.save_diagnostic_plots('RXJ1713_naima_fit', sampler, blob_labels=['Spectrum','$W_e (E_e>1$ TeV)'])
Explanation: We take the data, model, prior, parameter vector, and labels and call the main fitting procedure: naima.run_sampler. This function is a wrapper around emcee, and the details of the MCMC run can be configured through its arguments:
nwalkers: number of emcee walkers.
nburn: number of steps to take for the burn-in period. These steps will be discarded in the final results.
nrun: number of steps to take and save to the sampler chain.
prefit: whether to do a Nelder-Mead fit before starting the MCMC run (reduces the burn-in steps required).
interactive: whether to launch an interactive model fitter before starting the run to set the initial vector. This will only work if matplotlib is using a GUI backend (qt4, qt5, gtkagg, tkagg, etc.). The final parameters when you close the window will be used as the starting point for the run.
threads: How many different threads (CPU cores) to use when computing the likelihood.
End of explanation
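# A quick sanity check (assuming the returned object behaves like a standard emcee sampler,
# which is not shown explicitly in the original tutorial): the stored chain should have
# shape (nwalkers, nrun, number of free parameters).
print(sampler.chain.shape)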
suzaku_spectrum = ascii.read('RXJ1713_Suzaku-XIS.dat')
f=naima.plot_data(suzaku_spectrum)
Explanation: Simultaneous fitting of two radiative components: Synchrotron and IC.
Use the Suzaku XIS spectrum of RX J1713 to do a simultaneous fit of the synchrotron and inverse Compton spectra and derive an estimate of the magnetic field strength under the assumption of a leptonic scenario.
End of explanation
f=naima.plot_data([suzaku_spectrum, hess_spectrum], sed=True)
Explanation: Note that in all naima functions (including run_sampler) you can provide a list of spectra, so you can consider both the HESS and Suzaku spectra:
End of explanation
#from naima.models import ExponentialCutoffPowerLaw, InverseCompton
#from naima import uniform_prior
#ECPL = ExponentialCutoffPowerLaw(1e36/u.eV, 10*u.TeV, 2.7, 50*u.TeV)
#IC = InverseCompton(ECPL, seed_photon_fields=['CMB', ['FIR', 30*u.K, 0.4*u.eV/u.cm**3]])
## define labels and initial vector for the parameters
#labels = ['log10(norm)', 'index', 'log10(cutoff)']
#p0 = np.array((34, 2.7, np.log10(30)))
## define the model function
#def model(pars, data):
# ECPL.amplitude = (10**pars[0]) / u.eV
# ECPL.alpha = pars[1]
# ECPL.e_cutoff = (10**pars[2]) * u.TeV
# return IC.flux(data['energy'], distance=2.0*u.kpc), IC.compute_We(Eemin=1*u.TeV)
#from naima import uniform_prior
#def lnprior(pars):
# lnprior = uniform_prior(pars[1], -1, 5)
# return lnprior
Explanation: Below is the model, labels, parameters and prior defined above for the IC-only fit. Modify it as needed and feed it to naima.run_sampler to obtain an estimate of the magnetic field strength.
End of explanation |
8,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Impedance or reflectivity
Trying to see how to combine G with a derivative operator to get from the impedance model to the data with one forward operator.
Step2: Construct the model m
Step3: But I really want to use the reflectivity, so let's compute that
Step4: I don't know how best to control the magnitude of the coefficients or how to combine this matrix with G, so for now we'll stick to the model m being the reflectivity, calculated the normal way.
Step5: Forward operator
Step6: Forward model the data d
Now we can perform the forward problem
Step8: Let's visualize these components for fun...
Step9: Note that G * m gives us exactly the same result as np.convolve(w_, m). This is just another way of implementing convolution that lets us use linear algebra to perform the operation, and its inverse. | Python Code:
import numpy as np
import numpy.linalg as la
import matplotlib.pyplot as plt
from utils import plot_all
%matplotlib inline
from scipy import linalg as spla
def convmtx(h, n):
"""Equivalent of MATLAB's convmtx function, http://www.mathworks.com/help/signal/ref/convmtx.html.
Makes the convolution matrix, C. The product C.x is the convolution of h and x.
Args
h (ndarray): a 1D array, the kernel.
n (int): the number of rows to make.
Returns
ndarray. Size m+n-1
"""
col_1 = np.r_[h[0], np.zeros(n-1)]
row_1 = np.r_[h, np.zeros(n-1)]
return spla.toeplitz(col_1, row_1)
Explanation: Impedance or reflectivity
Trying to see how to combine G with a derivative operator to get from the impedance model to the data with one forward operator.
End of explanation
# Impedance: imp = VP * RHO
imp = np.ones(50) * 2550 * 2650
imp[10:15] = 2700 * 2750
imp[15:27] = 2400 * 2450
imp[27:35] = 2800 * 3000
plt.plot(imp)
Explanation: Construct the model m
End of explanation
D = convmtx([-1, 1], imp.size)[:, :-1]
D
r = D @ imp
plt.plot(r[:-1])
Explanation: But I really want to use the reflectivity, so let's compute that:
End of explanation
m = (imp[1:] - imp[:-1]) / (imp[1:] + imp[:-1])
plt.plot(m)
Explanation: I don't know how best to control the magnitude of the coefficients or how to combine this matrix with G, so for now we'll stick to the model m being the reflectivity, calculated the normal way.
End of explanation
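# A side check (illustrative, not part of the original notebook): for small contrasts the
# reflectivity is approximately half the difference of the natural log of impedance,
# r ~ 0.5 * diff(ln(imp)). This hints at one way to tie a derivative operator to the
# impedance model while keeping the coefficients at reflectivity scale.
r_log = 0.5 * np.diff(np.log(imp))
plt.plot(m, label='reflectivity m')
plt.plot(r_log, '--', label='0.5 * diff(log(imp))')
plt.legend()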
from scipy.signal import ricker
wavelet = ricker(40, 2)
plt.plot(wavelet)
# Downsampling: set to 1 to use every sample.
s = 2
# Make G.
G = convmtx(wavelet, m.size)[::s, 20:70]
plt.imshow(G, cmap='viridis', interpolation='none')
# Or we can use bruges (pip install bruges)
# from bruges.filters import ricker
# wavelet = ricker(duration=0.04, dt=0.001, f=100)
# G = convmtx(wavelet, m.size)[::s, 21:71]
# f, (ax0, ax1) = plt.subplots(1, 2)
# ax0.plot(wavelet)
# ax1.imshow(G, cmap='viridis', interpolation='none', aspect='auto')
Explanation: Forward operator: convolution with wavelet
Now we make the kernel matrix G, which represents convolution.
End of explanation
d = G @ m
Explanation: Forward model the data d
Now we can perform the forward problem: computing the data.
End of explanation
def add_subplot_axes(ax, rect, axisbg='w'):
"""Facilitates the addition of a small subplot within another plot.
From: http://stackoverflow.com/questions/17458580/
embedding-small-plots-inside-subplots-in-matplotlib
License: CC-BY-SA
Args:
ax (axis): A matplotlib axis.
rect (list): A rect specifying [left pos, bot pos, width, height]
Returns:
axis: The sub-axis in the specified position.
"""
def axis_to_fig(axis):
fig = axis.figure
def transform(coord):
a = axis.transAxes.transform(coord)
return fig.transFigure.inverted().transform(a)
return transform
fig = plt.gcf()
left, bottom, width, height = rect
trans = axis_to_fig(ax)
x1, y1 = trans((left, bottom))
x2, y2 = trans((left + width, bottom + height))
subax = fig.add_axes([x1, y1, x2 - x1, y2 - y1])
x_labelsize = subax.get_xticklabels()[0].get_size()
y_labelsize = subax.get_yticklabels()[0].get_size()
x_labelsize *= rect[2] ** 0.5
y_labelsize *= rect[3] ** 0.5
subax.xaxis.set_tick_params(labelsize=x_labelsize)
subax.yaxis.set_tick_params(labelsize=y_labelsize)
return subax
from matplotlib import gridspec, spines
fig = plt.figure(figsize=(12, 6))
gs = gridspec.GridSpec(5, 8)
# Set up axes.
axw = plt.subplot(gs[0, :5]) # Wavelet.
axg = plt.subplot(gs[1:4, :5]) # G
axm = plt.subplot(gs[:, 5]) # m
axe = plt.subplot(gs[:, 6]) # =
axd = plt.subplot(gs[1:4, 7]) # d
cax = add_subplot_axes(axg, [-0.08, 0.05, 0.03, 0.5])
params = {'ha': 'center',
'va': 'bottom',
'size': 40,
'weight': 'bold',
}
axw.plot(G[5], 'o', c='r', mew=0)
axw.plot(G[5], 'r', alpha=0.4)
axw.locator_params(axis='y', nbins=3)
axw.text(1, 0.6, "wavelet", color='k')
im = axg.imshow(G, cmap='viridis', aspect='1', interpolation='none')
axg.text(45, G.shape[0]//2, "G", color='w', **params)
axg.axhline(5, color='r')
plt.colorbar(im, cax=cax)
y = np.arange(m.size)
axm.plot(m, y, 'o', c='r', mew=0)
axm.plot(m, y, c='r', alpha=0.4)
axm.text(0, m.size//2, "m", color='k', **params)
axm.invert_yaxis()
axm.locator_params(axis='x', nbins=3)
axe.set_frame_on(False)
axe.set_xticks([])
axe.set_yticks([])
axe.text(0.5, 0.5, "=", color='k', **params)
y = np.arange(d.size)
axd.plot(d, y, 'o', c='b', mew=0)
axd.plot(d, y, c='b', alpha=0.4)
axd.plot(d[5], y[5], 'o', c='r', mew=0, ms=10)
axd.text(0, d.size//2, "d", color='k', **params)
axd.invert_yaxis()
axd.locator_params(axis='x', nbins=3)
for ax in fig.axes:
ax.xaxis.label.set_color('#888888')
ax.tick_params(axis='y', colors='#888888')
ax.tick_params(axis='x', colors='#888888')
for child in ax.get_children():
if isinstance(child, spines.Spine):
child.set_color('#aaaaaa')
# For some reason this doesn't work...
for _, sp in cax.spines.items():
sp.set_color('w')
# But this does...
cax.xaxis.label.set_color('#ffffff')
cax.tick_params(axis='y', colors='#ffffff')
cax.tick_params(axis='x', colors='#ffffff')
fig.tight_layout()
plt.show()
Explanation: Let's visualize these components for fun...
End of explanation
plt.plot(np.convolve(wavelet, m, mode='same')[::s], 'blue', lw=3)
plt.plot(G @ m, 'red')
Explanation: Note that G * m gives us exactly the same result as np.convolve(w_, m). This is just another way of implementing convolution that lets us use linear algebra to perform the operation, and its inverse.
End of explanation |
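# A minimal sketch (not in the original notebook) of the inverse problem hinted at above:
# recover an estimate of m from d by least squares. With a band-limited, downsampled G this
# is ill-posed, so treat the result as illustrative; practical use would add regularization.
m_est = la.lstsq(G, d)[0]
plt.plot(m, 'b', lw=3, label='true m')
plt.plot(m_est, 'r', label='least-squares estimate')
plt.legend()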
8,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Week 8 - Advanced Machine Learning
During the course we have covered a variety of different tasks and algorithms. These were chosen for their broad applicability and ease of use with many important techniques and areas of study skipped. The goal of this class is to provide a brief overview of some of the latest advances and areas that could not be covered due to our limited time.
Deep learning
Glosser.ca via wikimedia.
Although a neural network has been added to scikit learn relatively recently, it only runs on the CPU, making the large neural networks now popular prohibitively slow. Fortunately, there are a number of different packages available for python that can run on a GPU.
Theano is the GPGPU equivalent of numpy. It implements all the core functionality needed to build a deep neural network and run it on the GPGPU, but it does not come with ready-made network implementations.
A variety of packages have been built on top of Theano that enable neural networks to be implemented in a relatively straightforward manner. Parallels can be drawn with the relationship between numpy and scikit learn. Pylearn2 was perhaps the first major package built on Theano but has now been superseded by a number of new packages, including blocks, keras, and lasagne.
You may have also heard of TensorFlow that was released by Google a year or two ago. TensorFlow lies somewhere between the low-level Theano and the high-level packages such as blocks, keras, and lasagne. Currently only keras supports TensorFlow as an alternative backend. Keras will also be included with TensorFlow soon.
Installing these packages with support for executing code on the GPU is more challenging than simply conda install ... or pip install .... In addition to installing these packages it is also necessary to install the CUDA packages.
Beyond the advances due to the greater computational capacity available on the GPU there have been a number of other important approaches utilized
Step1: The performance here is very poor. We really need to train with more samples and for more epochs. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
plt.gray()
from keras.datasets import mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
fig, axes = plt.subplots(3,5, figsize=(12,8))
for i, ax in enumerate(axes.flatten()):
ax.imshow(X_train[i], interpolation='nearest')
plt.show()
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.utils import np_utils
batch_size = 512
nb_classes = 10
nb_epoch = 3
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28)
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28)
X_train = X_train.astype("float32")
X_test = X_test.astype("float32")
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
# CAUTION: Without utilizing a GPU even this very short example is incredibly slow to run.
model = Sequential()
#model.add(Convolution2D(8, 1, 3, 3, input_shape=(1,28,28), activation='relu'))
model.add(Convolution2D(4, 3, 3, input_shape=(1,28,28), activation='relu'))
#model.add(Convolution2D(4, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(4, input_dim=4*28*28*0.25, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, input_dim=4, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy'])
model.fit(X_train[:1024], Y_train[:1024], batch_size=batch_size, nb_epoch=nb_epoch, verbose=1,
validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score)
predictions = model.predict_classes(X_test)
fig, axes = plt.subplots(3,5, figsize=(12,8))
for i, ax in enumerate(axes.flatten()):
ax.imshow(X_test[predictions == 7][i].reshape((28,28)), interpolation='nearest')
plt.show()
Explanation: Week 8 - Advanced Machine Learning
During the course we have covered a variety of different tasks and algorithms. These were chosen for their broad applicability and ease of use with many important techniques and areas of study skipped. The goal of this class is to provide a brief overview of some of the latest advances and areas that could not be covered due to our limited time.
Deep learning
Glosser.ca via wikimedia.
Although a neural network has been added to scikit learn relatively recently, it only runs on the CPU, making the large neural networks now popular prohibitively slow. Fortunately, there are a number of different packages available for python that can run on a GPU.
Theano is the GPGPU equivalent of numpy. It implements all the core functionality needed to build a deep neural network and run it on the GPGPU, but it does not come with ready-made network implementations.
A variety of packages have been built on top of Theano that enable neural networks to be implemented in a relatively straightforward manner. Parallels can be drawn with the relationship between numpy and scikit learn. Pylearn2 was perhaps the first major package built on Theano but has now been superseded by a number of new packages, including blocks, keras, and lasagne.
You may have also heard of TensorFlow that was released by Google a year or two ago. TensorFlow lies somewhere between the low-level Theano and the high-level packages such as blocks, keras, and lasagne. Currently only keras supports TensorFlow as an alternative backend. Keras will also be included with TensorFlow soon.
Installing these packages with support for executing code on the GPU is more challenging than simply conda install ... or pip install .... In addition to installing these packages it is also necessary to install the CUDA packages.
Beyond the advances due to the greater computational capacity available on the GPU there have been a number of other important approaches utilized:
Convolutional neural nets
Recurrent neural nets
Dropout
Early stopping
Data augmentation
Aphex34 via wikimedia.
End of explanation
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, predictions)
np.fill_diagonal(cm, 0)
plt.bone()
plt.matshow(cm)
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
Explanation: The performance here is very poor. We really need to train with more samples and for more epochs.
End of explanation |
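# A rough sketch (not from the original notebook) of one way to act on that: train on the
# full training set for more epochs and stop once the validation loss stops improving.
# EarlyStopping and its monitor/patience arguments are standard Keras callbacks; the epoch
# count and patience chosen here are illustrative assumptions.
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor='val_loss', patience=2)
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=20, verbose=1, validation_data=(X_test, Y_test), callbacks=[early_stop])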
8,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data frames 3
Step1: lang
Step2: lang
Step3: lang
Step4: lang
Step5: lang
Step7: 予習課題
Step8: lang | Python Code:
# データをCSVファイルから読み込みます。 Read the data from CSV file.
df = pd.read_csv('data/15-July-2019-Tokyo-hourly.csv')
print("データフレームの行数は %d" % len(df))
print(df.dtypes)
df.head()
Explanation: Data frames 3: 簡単なデータの変換 (Simple data manipulation)
```
ASSIGNMENT METADATA
assignment_id: "DataFrame3"
```
lang:en
In this unit, we will get acquainted with a couple of simple techniques to change the data:
Filter rows based on a condition
Create new columns as a transformation of other columns
Drop columns that are no longer needed
Let's start with reading the data.
lang:ja
この講義では、簡単なデータの変換を紹介します。
行を条件によりフィルター(抽出)します
データ変換によって新しい列を作ります
必要だけの列を抽出します
まずはデータを読み込みましょう。
End of explanation
# This is an example of filtering rows by a condition
# that is computed over variables in the dataframe.
# 条件によってデータフレームをフィルターします。
df2 = df[df['Precipitation_mm'] > 0]
len(df2)
Explanation: lang:en Let's consider the question of how one should hold an umbrella when it rains.
Depending on the wind direction, it's better to slant the umbrella towards the direction
the rain is coming from. Therefore, one needs to know the wind direction when it rains.
The first step is to limit the data to the hours when there was rain. To accomplish that,
we filter the data set by using a condition. The condition is placed in square brackets
after the dataframe.
Technical details:
* The inner df['Precipitation_mm'] extracts a single column as a pandas Series object.
* The comparison df['Precipitation_mm'] > 0 is evaluated as a vector expression that computes
the condition element-wise, resulting in a Series object of the same length with boolean elements
(true or false).
* Finally, the indexing of a data frame by the boolean series performs the filtering of the rows
in the dataframe, keeping only the rows whose corresponding element is True. Note that the original
data frame is left unmodified. Instead, a new copy of the data frame is created, and we store the filtered result in a new variable.
lang:ja雨の中の傘の持ち方について考えましょう。風の向きによって、適切な持ち方が変わります。風が来ている方向に傾けると傘の効率がよくなります。
したがって、雨のときの風の向きを調べなければいけません。
まずは雨のなかったデータを除きましょう。そのために条件をつけてデータをフィルターします。
条件はデータフレームの参照の後に角括弧に入ります。
詳しく述べると:
角括弧に入っているdf['Precipitation_mm']は一つの列を抽出します。それはpandasのSeriesオブジェクトになります。
比較表現 df['Precipitation_mm'] > 0' は各行ごとに評価されます、真理値のベクターになります。それもSeriesです。長さはデータフレームの行数です。
データフレームの後に角括弧に真理値ベクターを入れるとFalseの行が除かれます。
結果のデータフレームは新しいデータフレームです。既存のデータフレームは変わらないままで、フィルターされたデータフレームを新しい変数に保存します。
End of explanation
px.histogram(df2, x='WindDirection_16compasspoints')
Explanation: lang:en So the rain was falling during 11 of the 24 hours in the day. Let's see what the distribution of wind directions was.
lang:ja 一日24時間のうち、雨が降っていたのは11時間でした。 風の向きを可視化しましょう。 px.histogramはxの値を数えて、個数を棒グラフとして可視化します。
End of explanation
px.histogram(df, x='WindDirection_16compasspoints')
Explanation: lang:en Now we can clearly see that NE was the prevailing wind direction while it rained.
Note that the result may have been different if we did not filter for the hours with rain:
lang:ja雨が降ったときに風はNEの方向に吹いたことがわかります。雨だけのデータにフィルターしなければ、グラフは異なる結果がえられます。
以下はdfは元のデータフレームで、フィルターされたデータフレームはdf2です。
End of explanation
# This creates a new column named "rained" that is a boolean variable
# indicating whether it was raining in that hour.
# 新しい真理値の列'rained'を追加します。
df['rained'] = df['Precipitation_mm'] > 0
px.histogram(df, x='WindDirection_16compasspoints', color='rained')
Explanation: lang:en We can plot all of the data and use the color dimension to distinguish hours when it rained from hours when it did not, by using a different technique: instead of filtering rows by some condition, we can introduce the condition
as a new boolean variable. This is done by assigning to a new column in the data frame:
lang:jaフィルターの代わりに、可視化によって同じデータを確認できます。たとえば、雨が降ったかどうかを色で表現します。
そのために新しい真理値の列を作らなければなりません。以下の例はdfのデータフレームに新しい列を追加します。
End of explanation
# そのままだとデータが多すぎて混乱しやすい。
# その表を見せてなにがいいたいのか分かりづらい。
df
# 列の名前の一覧を見ましょう。
df.dtypes
# Indexing by list of column names returns a copy of the data frame just with the named
# columns.
# 列の名前を二重角括弧に入れると、列の抽出ができます。 列の名前は以上の`dtypes`の一覧によって確認できます。
df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
Explanation: lang:en Now let's consider how we could present the same data in a tabular form. If we do not do anything,
all existing columns in the data frame would be shown, which may make it hard for the reader
to see the point of the author. To make reading the data easier, we can limit the data output
just to columns we are interested in.
lang:ja 今まで解析してきたデータを表の形に表示について考えましょう。 dfのデータフレームをそのまま表示するとたくさんの列が出て、
どのデータを見せたかったのはとてもわかりにくくなります。 それを解決するために、見せたい列だけを抽出しましょう。
End of explanation
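# A related option (illustrative, not used in the rest of this lesson): instead of listing
# the columns to keep, pandas can drop the ones that are no longer needed.
df.drop(columns=['Precipitation_mm']).head()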
%%solution
# BEGIN PROMPT
# Note: you can do multiple steps to get the data frame you need.
# 複数の段階に分けてデータ処理してもよい。
df['rained'] = df[...]
sunny_df = df[...]
sunny_df = sunny_df[...]
# END PROMPT
# BEGIN SOLUTION
df['rained'] = df['Precipitation_mm'] > 0
sunny_df = df[df['SunshineDuration_h'] > 0]
sunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
# END SOLUTION
Explanation: 予習課題: データの変換 (Data manipulation)
```
EXERCISE METADATA
exercise_id: "DataManipulation"
```
lang:en
Starting with the weather data frame df defined above, extract a data set consisting only of the daytime hours when the sun was shining (i.e. variable SunshineDuration_h > 0), and containing only the following columns:
* Time_Hour -- extracted from the original data frame.
* WindDirection_16compasspoints -- extracted from the original data frame.
* rained -- the boolean indicator of whether it was raining or not (Precipitation_mm > 0). This is a new column that is not present in the original data, so it should be added.
lang:ja
以上に定義したdfのデータフレームを使って、以下のデータの表を抽出しましょう。
* 日が出ていた時間帯のみ (すなわち、SunshineDuration_h > 0)
以下の列だけを抽出しましょう。
* Time_Hour -- 元のデータフレームから抽出しましょう。
* WindDirection_16compasspoints -- 元のデータフレームから抽出しましょう。
* rained -- 雨があったかどうかの真理値列 (すなわち、Precipitation_mm > 0)。こちらの列は元のデータに入ってないため、追加しなければなりません。
End of explanation
# Inspect the data frame
sunny_df
%%studenttest StudentTest
# Test your solution
assert len(sunny_df) == 2, "The result data frame should only have 2 rows, yours has %d" % len(sunny_df)
assert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], "Sunshine was during 13h,14h, but you got %s" % sunny_df['Time_Hour']
assert np.all(sunny_df['rained'] == False), "It was not raining during sunshine hours!"
%%inlinetest AutograderTest
# This cell will not be present in the students notebook.
assert 'sunny_df' in globals(), "Did you define the data frame named 'sunny_df' in the solution cell?"
assert sunny_df.__class__ == pd.core.frame.DataFrame, "Did you define a data frame named 'sunny_df'? 'sunny_df' was a %s instead" % sunny_df.__class__
assert len(sunny_df) == 2, "The data frame should have 2 rows, but you have %d" % len(sunny_df)
assert np.sort(np.unique(sunny_df['Time_Hour'])).tolist() == [13, 14], "Sunshine was during 13h,14h, but you got %s" % sunny_df['Time_Hour']
assert np.all(sunny_df['rained'] == False), "It was not raining during sunshine hours!"
assert np.all(np.sort(np.unique(sunny_df.columns)) == ['Time_Hour', 'WindDirection_16compasspoints', 'rained']), ("Expected to see 3 columns: rained, Time_Hour, WindDirection_16compasspoints, but got %d: %s" % (len(np.unique(sunny_df.columns)), np.sort(np.unique(sunny_df.columns))) )
%%submission
df['rained'] = df['Precipitation_mm'] > 0
sunny_df = df[df['SunshineDuration_h'] > 0]
#sunny_df = sunny_df[['Time_Hour', 'WindDirection_16compasspoints', 'rained']]
import re
result, logs = %autotest AutograderTest
assert re.match(r'Expected to see 3 columns.*', str(result.results['error']))
report(AutograderTest, results=result.results, source=submission_source.source)
Explanation: lang:en Note: if you see a warning SettingWithCopyWarning, it means that you are trying to apply a transformation
to a data frame that is a copy or a slice of a different data frame. This is an optimization that the pandas
library may apply on filtering steps to reduce memory use. To avoid this warning, you can either move the new column computation before the filtering step, or add a .copy() call to the filtered data frame to force
creation of a full data frame object.
lang:jaもしSettingWithCopyWarningのエラーが出たら、データフレームのコピーに変更を行うという意味なのです。pandasは、データ抽出のときに
自動的にコピーしないような最適化の副作用です。解決のために、データ変更は先にするか、抽出の後に.copy()を呼び出すことができます。
End of explanation |
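# A short illustration (assumed, not part of the graded exercise) of the two fixes described
# above: either add the new column before filtering, or take an explicit copy of the
# filtered frame before assigning to it.
df['rained'] = df['Precipitation_mm'] > 0                  # option 1: add the column first
sunny_copy = df[df['SunshineDuration_h'] > 0].copy()       # option 2: explicit copy
sunny_copy['rained'] = sunny_copy['Precipitation_mm'] > 0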
8,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Gaussian Mixture Models with EM
In this assignment you will
* implement the EM algorithm for a Gaussian mixture model
* apply your implementation to cluster images
* explore clustering results and interpret the output of the EM algorithm
Note to Amazon EC2 users
Step2: Implementing the EM algorithm for Gaussian mixture models
In this section, you will implement the EM algorithm. We will take the following steps
Step3: After specifying a particular set of clusters (so that the results are reproducible across assignments), we use the above function to generate a dataset.
Step4: Checkpoint
Step5: Now plot the data you created above. The plot should be a scatterplot with 100 points that appear to roughly fall into three clusters.
Step8: Log likelihood
We provide a function to calculate log likelihood for mixture of Gaussians. The log likelihood quantifies the probability of observing a given set of data under a particular setting of the parameters in our model. We will use this to assess convergence of our EM algorithm; specifically, we will keep looping through EM update steps until the log likehood ceases to increase at a certain rate.
Step9: Implementation
You will now complete an implementation that can run EM on the data you just created. It uses the loglikelihood function we provided above.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
Hint
Step10: Testing the implementation on the simulated data
Now we'll fit a mixture of Gaussians to this data using our implementation of the EM algorithm. As with k-means, it is important to ask how we obtain an initial configuration of mixing weights and component parameters. In this simple case, we'll take three random points to be the initial cluster means, use the empirical covariance of the data to be the initial covariance in each cluster (a clear overestimate), and set the initial mixing weights to be uniform across clusters.
Step11: Checkpoint. For this particular example, the EM algorithm is expected to terminate in 30 iterations. That is, the last line of the log should say "Iteration 29". If your function stopped too early or too late, you should re-visit your code.
Our algorithm returns a dictionary with five elements
Step12: Quiz Question
Step13: Quiz Question
Step14: Plot progress of parameters
One useful feature of testing our implementation on low-dimensional simulated data is that we can easily visualize the results.
We will use the following plot_contours function to visualize the Gaussian components over the data at three different points in the algorithm's execution
Step15: Fill in the following code block to visualize the set of parameters we get after running EM for 12 iterations.
Step16: Quiz Question
Step17: Fitting a Gaussian mixture model for image data
Now that we're confident in our implementation of the EM algorithm, we'll apply it to cluster some more interesting data. In particular, we have a set of images that come from four categories
Step18: We need to come up with initial estimates for the mixture weights and component parameters. Let's take three images to be our initial cluster centers, and let's initialize the covariance matrix of each cluster to be diagonal with each element equal to the sample variance from the full data. As in our test on simulated data, we'll start by assuming each mixture component has equal weight.
This may take a few minutes to run.
Step19: The following sections will evaluate the results by asking the following questions
Step20: The log likelihood increases so quickly on the first few iterations that we can barely see the plotted line. Let's plot the log likelihood after the first three iterations to get a clearer view of what's going on
Step21: Evaluating uncertainty
Next we'll explore the evolution of cluster assignment and uncertainty. Remember that the EM algorithm represents uncertainty about the cluster assignment of each data point through the responsibility matrix. Rather than making a 'hard' assignment of each data point to a single cluster, the algorithm computes the responsibility of each cluster for each data point, where the responsibility corresponds to our certainty that the observation came from that cluster.
We can track the evolution of the responsibilities across iterations to see how these 'soft' cluster assignments change as the algorithm fits the Gaussian mixture model to the data; one good way to do this is to plot the data and color each point according to its cluster responsibilities. Our data are three-dimensional, which can make visualization difficult, so to make things easier we will plot the data using only two dimensions, taking just the [R G], [G B] or [R B] values instead of the full [R G B] measurement for each observation.
Step22: To begin, we will visualize what happens when each data has random responsibilities.
Step23: We now use the above plotting function to visualize the responsibilites after 1 iteration.
Step24: We now use the above plotting function to visualize the responsibilites after 20 iterations. We will see there are fewer unique colors; this indicates that there is more certainty that each point belongs to one of the four components in the model.
Step25: Plotting the responsibilities over time in [R B] space shows a meaningful change in cluster assignments over the course of the algorithm's execution. While the clusters look significantly better organized at the end of the algorithm than they did at the start, it appears from our plot that they are still not very well separated. We note that this is due in part our decision to plot 3D data in a 2D space; everything that was separated along the G axis is now "squashed" down onto the flat [R B] plane. If we were to plot the data in full [R G B] space, then we would expect to see further separation of the final clusters. We'll explore the cluster interpretability more in the next section.
Interpreting each cluster
Let's dig into the clusters obtained from our EM implementation. Recall that our goal in this section is to cluster images based on their RGB values. We can evaluate the quality of our clustering by taking a look at a few images that 'belong' to each cluster. We hope to find that the clusters discovered by our EM algorithm correspond to different image categories - in this case, we know that our images came from four categories ('cloudy sky', 'rivers', 'sunsets', and 'trees and forests'), so we would expect to find that each component of our fitted mixture model roughly corresponds to one of these categories.
If we want to examine some example images from each cluster, we first need to consider how we can determine cluster assignments of the images from our algorithm output. This was easy with k-means - every data point had a 'hard' assignment to a single cluster, and all we had to do was find the cluster center closest to the data point of interest. Here, our clusters are described by probability distributions (specifically, Gaussians) rather than single points, and our model maintains some uncertainty about the cluster assignment of each observation.
One way to phrase the question of cluster assignment for mixture models is as follows
Step26: We'll use the 'assignments' SFrame to find the top images from each cluster by sorting the datapoints within each cluster by their score under that cluster (stored in probs). We can plot the corresponding images in the original data using show().
Create a function that returns the top 5 images assigned to a given category in our data (HINT
Step27: Use this function to show the top 5 images in each cluster. | Python Code:
import graphlab as gl
import numpy as np
import matplotlib.pyplot as plt
import copy
from scipy.stats import multivariate_normal
%matplotlib inline
Explanation: Fitting Gaussian Mixture Models with EM
In this assignment you will
* implement the EM algorithm for a Gaussian mixture model
* apply your implementation to cluster images
* explore clustering results and interpret the output of the EM algorithm
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
End of explanation
def generate_MoG_data(num_data, means, covariances, weights):
"""Creates a list of data points"""
num_clusters = len(weights)
data = []
for i in range(num_data):
# Use np.random.choice and weights to pick a cluster id greater than or equal to 0 and less than num_clusters.
k = np.random.choice(len(weights), 1, p=weights)[0]
# Use np.random.multivariate_normal to create data from this cluster
x = np.random.multivariate_normal(means[k], covariances[k])
data.append(x)
return data
Explanation: Implementing the EM algorithm for Gaussian mixture models
In this section, you will implement the EM algorithm. We will take the following steps:
Create some synthetic data.
Provide a log likelihood function for this model.
Implement the EM algorithm.
Visualize the progress of the parameters during the course of running EM.
Visualize the convergence of the model.
Dataset
To help us develop and test our implementation, we will generate some observations from a mixture of Gaussians and then run our EM algorithm to discover the mixture components. We'll begin with a function to generate the data, and a quick plot to visualize its output for a 2-dimensional mixture of three Gaussians.
Now we will create a function to generate data from a mixture of Gaussians model.
End of explanation
# Model parameters
init_means = [
[5, 0], # mean of cluster 1
[1, 1], # mean of cluster 2
[0, 5] # mean of cluster 3
]
init_covariances = [
[[.5, 0.], [0, .5]], # covariance of cluster 1
[[1., .7], [0, .7]], # covariance of cluster 2
[[.5, 0.], [0, .5]] # covariance of cluster 3
]
init_weights = [1/4., 1/2., 1/4.] # weights of each cluster
# Generate data
np.random.seed(4)
data = generate_MoG_data(100, init_means, init_covariances, init_weights)
Explanation: After specifying a particular set of clusters (so that the results are reproducible across assignments), we use the above function to generate a dataset.
End of explanation
assert len(data) == 100
assert len(data[0]) == 2
print 'Checkpoint passed!'
Explanation: Checkpoint: To verify your implementation above, make sure the following code does not return an error.
End of explanation
plt.figure()
d = np.vstack(data)
plt.plot(d[:,0], d[:,1],'ko')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Now plot the data you created above. The plot should be a scatterplot with 100 points that appear to roughly fall into three clusters.
End of explanation
def log_sum_exp(Z):
"""Compute log(\sum_i exp(Z_i)) for some array Z."""
return np.max(Z) + np.log(np.sum(np.exp(Z - np.max(Z))))
def loglikelihood(data, weights, means, covs):
"""Compute the loglikelihood of the data for a Gaussian mixture model with the given parameters."""
num_clusters = len(means)
num_dim = len(data[0])
ll = 0
for d in data:
Z = np.zeros(num_clusters)
for k in range(num_clusters):
# Compute (x-mu)^T * Sigma^{-1} * (x-mu)
delta = np.array(d) - means[k]
exponent_term = np.dot(delta.T, np.dot(np.linalg.inv(covs[k]), delta))
# Compute loglikelihood contribution for this data point and this cluster
Z[k] += np.log(weights[k])
Z[k] -= 1/2. * (num_dim * np.log(2*np.pi) + np.log(np.linalg.det(covs[k])) + exponent_term)
# Increment loglikelihood contribution of this data point across all clusters
ll += log_sum_exp(Z)
return ll
Explanation: Log likelihood
We provide a function to calculate the log likelihood for a mixture of Gaussians. The log likelihood quantifies the probability of observing a given set of data under a particular setting of the parameters in our model. We will use this to assess convergence of our EM algorithm; specifically, we will keep looping through EM update steps until the log likelihood ceases to increase at a certain rate.
End of explanation
def EM(data, init_means, init_covariances, init_weights, maxiter=1000, thresh=1e-4):
# Make copies of initial parameters, which we will update during each iteration
means = init_means[:]
covariances = init_covariances[:]
weights = init_weights[:]
# Infer dimensions of dataset and the number of clusters
num_data = len(data)
num_dim = len(data[0])
num_clusters = len(means)
# Initialize some useful variables
resp = np.zeros((num_data, num_clusters))
ll = loglikelihood(data, weights, means, covariances)
ll_trace = [ll]
for i in range(maxiter):
if i % 5 == 0:
print("Iteration %s" % i)
# E-step: compute responsibilities
# Update resp matrix so that resp[j, k] is the responsibility of cluster k for data point j.
# Hint: To compute likelihood of seeing data point j given cluster k, use multivariate_normal.pdf.
for j in range(num_data):
for k in range(num_clusters):
# YOUR CODE HERE
resp[j, k] = multivariate_normal.pdf(data[j], means[k], covariances[k])
row_sums = resp.sum(axis=1)[:, np.newaxis]
resp = resp / row_sums # normalize over all possible cluster assignments
# M-step
# Compute the total responsibility assigned to each cluster, which will be useful when
# implementing M-steps below. In the lectures this is called N^{soft}
counts = np.sum(resp, axis=0)
for k in range(num_clusters):
Nsoft = resp[:, k].sum()
# Update the weight for cluster k using the M-step update rule for the cluster weight, \hat{\pi}_k.
# YOUR CODE HERE
weights[k] = Nsoft/num_data
# Update means for cluster k using the M-step update rule for the mean variables.
# This will assign the variable means[k] to be our estimate for \hat{\mu}_k.
weighted_sum = 0
for j in range(num_data):
# YOUR CODE HERE
weighted_sum += resp[j,k]*data[j]
# YOUR CODE HERE
means[k] = weighted_sum/Nsoft
# Update covariances for cluster k using the M-step update rule for covariance variables.
# This will assign the variable covariances[k] to be the estimate for \hat{\Sigma}_k.
weighted_sum = np.zeros((num_dim, num_dim))
for j in range(num_data):
# YOUR CODE HERE (Hint: Use np.outer on the data[j] and this cluster's mean)
weighted_sum += np.outer(data[j] - means[k], data[j] - means[k]) * resp[j, k]
# YOUR CODE HERE
covariances[k] = weighted_sum / Nsoft
# Compute the loglikelihood at this iteration
# YOUR CODE HERE
ll_latest = loglikelihood(data, weights, means, covariances)
ll_trace.append(ll_latest)
# Check for convergence in log-likelihood and store
if (ll_latest - ll) < thresh and ll_latest > -np.inf:
break
ll = ll_latest
if i % 5 != 0:
print("Iteration %s" % i)
out = {'weights': weights, 'means': means, 'covs': covariances, 'loglik': ll_trace, 'resp': resp}
return out
Explanation: Implementation
You will now complete an implementation that can run EM on the data you just created. It uses the loglikelihood function we provided above.
Fill in the places where you find ## YOUR CODE HERE. There are seven places in this function for you to fill in.
Hint: Some useful functions
multivariate_normal.pdf: lets you compute the likelihood of seeing a data point in a multivariate Gaussian distribution.
np.outer: comes in handy when estimating the covariance matrix from data.
End of explanation
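# A quick illustration (not part of the assignment template) of the np.outer hint above:
# for a single point x and cluster mean mu, the weighted contribution to the covariance
# update is resp * np.outer(x - mu, x - mu).
x_example = np.array([1.0, 2.0])
mu_example = np.array([0.5, 1.5])
print np.outer(x_example - mu_example, x_example - mu_example)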
np.random.seed(4)
# Initialization of parameters
chosen = np.random.choice(len(data), 3, replace=False)
initial_means = [data[x] for x in chosen]
initial_covs = [np.cov(data, rowvar=0)] * 3
initial_weights = [1/3.] * 3
# Run EM
results = EM(data, initial_means, initial_covs, initial_weights)
Explanation: Testing the implementation on the simulated data
Now we'll fit a mixture of Gaussians to this data using our implementation of the EM algorithm. As with k-means, it is important to ask how we obtain an initial configuration of mixing weights and component parameters. In this simple case, we'll take three random points to be the initial cluster means, use the empirical covariance of the data to be the initial covariance in each cluster (a clear overestimate), and set the initial mixing weights to be uniform across clusters.
End of explanation
# Your code here
Explanation: Checkpoint. For this particular example, the EM algorithm is expected to terminate in 30 iterations. That is, the last line of the log should say "Iteration 29". If your function stopped too early or too late, you should re-visit your code.
Our algorithm returns a dictionary with five elements:
* 'loglik': a record of the log likelihood at each iteration
* 'resp': the final responsibility matrix
* 'means': a list of K means
* 'covs': a list of K covariance matrices
* 'weights': the weights corresponding to each model component
Quiz Question: What is the weight that EM assigns to the first component after running the above codeblock?
End of explanation
# Your code here
Explanation: Quiz Question: Using the same set of results, obtain the mean that EM assigns the second component. What is the mean in the first dimension?
End of explanation
# Your code here
Explanation: Quiz Question: Using the same set of results, obtain the covariance that EM assigns the third component. What is the variance in the first dimension?
End of explanation
import matplotlib.mlab as mlab
def plot_contours(data, means, covs, title):
plt.figure()
plt.plot([x[0] for x in data], [y[1] for y in data],'ko') # data
delta = 0.025
k = len(means)
x = np.arange(-2.0, 7.0, delta)
y = np.arange(-2.0, 7.0, delta)
X, Y = np.meshgrid(x, y)
col = ['green', 'red', 'indigo']
for i in range(k):
mean = means[i]
cov = covs[i]
sigmax = np.sqrt(cov[0][0])
sigmay = np.sqrt(cov[1][1])
sigmaxy = cov[0][1]/(sigmax*sigmay)
Z = mlab.bivariate_normal(X, Y, sigmax, sigmay, mean[0], mean[1], sigmaxy)
plt.contour(X, Y, Z, colors = col[i])
plt.title(title)
plt.rcParams.update({'font.size':16})
plt.tight_layout()
# Parameters after initialization
plot_contours(data, initial_means, initial_covs, 'Initial clusters')
# Parameters after running EM to convergence
results = EM(data, initial_means, initial_covs, initial_weights)
plot_contours(data, results['means'], results['covs'], 'Final clusters')
Explanation: Plot progress of parameters
One useful feature of testing our implementation on low-dimensional simulated data is that we can easily visualize the results.
We will use the following plot_contours function to visualize the Gaussian components over the data at three different points in the algorithm's execution:
At initialization (using initial_mu, initial_cov, and initial_weights)
After running the algorithm to completion
After just 12 iterations (using parameter estimates returned when setting maxiter=12)
End of explanation
# YOUR CODE HERE
results = ...
plot_contours(data, results['means'], results['covs'], 'Clusters after 12 iterations')
Explanation: Fill in the following code block to visualize the set of parameters we get after running EM for 12 iterations.
End of explanation
results = EM(data, initial_means, initial_covs, initial_weights)
# YOUR CODE HERE
loglikelihoods = ...
plt.plot(range(len(loglikelihoods)), loglikelihoods, linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Quiz Question: Plot the loglikelihood that is observed at each iteration. Is the loglikelihood plot monotonically increasing, monotonically decreasing, or neither [multiple choice]?
End of explanation
images = gl.SFrame('images.sf')
gl.canvas.set_target('ipynb')
import array
images['rgb'] = images.pack_columns(['red', 'green', 'blue'])['X4']
images.show()
Explanation: Fitting a Gaussian mixture model for image data
Now that we're confident in our implementation of the EM algorithm, we'll apply it to cluster some more interesting data. In particular, we have a set of images that come from four categories: sunsets, rivers, trees and forests, and cloudy skies. For each image we are given the average intensity of its red, green, and blue pixels, so we have a 3-dimensional representation of our data. Our goal is to find a good clustering of these images using our EM implementation; ideally our algorithm would find clusters that roughly correspond to the four image categories.
To begin with, we'll take a look at the data and get it in a form suitable for input to our algorithm. The data are provided in SFrame format:
End of explanation
np.random.seed(1)
# Initialize parameters
init_means = [images['rgb'][x] for x in np.random.choice(len(images), 4, replace=False)]
cov = np.diag([images['red'].var(), images['green'].var(), images['blue'].var()])
init_covariances = [cov, cov, cov, cov]
init_weights = [1/4., 1/4., 1/4., 1/4.]
# Convert rgb data to numpy arrays
img_data = [np.array(i) for i in images['rgb']]
# Run our EM algorithm on the image data using the above initializations.
# This should converge in about 125 iterations
out = EM(img_data, init_means, init_covariances, init_weights)
Explanation: We need to come up with initial estimates for the mixture weights and component parameters. Let's take four images to be our initial cluster centers, and let's initialize the covariance matrix of each cluster to be diagonal with each element equal to the sample variance from the full data. As in our test on simulated data, we'll start by assuming each mixture component has equal weight.
This may take a few minutes to run.
End of explanation
ll = out['loglik']
plt.plot(range(len(ll)),ll,linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The following sections will evaluate the results by asking the following questions:
Convergence: How did the log likelihood change across iterations? Did the algorithm achieve convergence?
Uncertainty: How did cluster assignment and uncertainty evolve?
Interpretability: Can we view some example images from each cluster? Do these clusters correspond to known image categories?
Evaluating convergence
Let's start by plotting the log likelihood at each iteration - we know that the EM algorithm guarantees that the log likelihood can only increase (or stay the same) after each iteration, so if our implementation is correct then we should see an increasing function.
End of explanation
plt.figure()
plt.plot(range(3,len(ll)),ll[3:],linewidth=4)
plt.xlabel('Iteration')
plt.ylabel('Log-likelihood')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: The log likelihood increases so quickly on the first few iterations that we can barely see the plotted line. Let's plot the log likelihood after the first three iterations to get a clearer view of what's going on:
End of explanation
import colorsys
def plot_responsibilities_in_RB(img, resp, title):
N, K = resp.shape
HSV_tuples = [(x*1.0/K, 0.5, 0.9) for x in range(K)]
RGB_tuples = map(lambda x: colorsys.hsv_to_rgb(*x), HSV_tuples)
R = img['red']
B = img['blue']
resp_by_img_int = [[resp[n][k] for k in range(K)] for n in range(N)]
cols = [tuple(np.dot(resp_by_img_int[n], np.array(RGB_tuples))) for n in range(N)]
plt.figure()
for n in range(len(R)):
plt.plot(R[n], B[n], 'o', c=cols[n])
plt.title(title)
plt.xlabel('R value')
plt.ylabel('B value')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: Evaluating uncertainty
Next we'll explore the evolution of cluster assignment and uncertainty. Remember that the EM algorithm represents uncertainty about the cluster assignment of each data point through the responsibility matrix. Rather than making a 'hard' assignment of each data point to a single cluster, the algorithm computes the responsibility of each cluster for each data point, where the responsibility corresponds to our certainty that the observation came from that cluster.
We can track the evolution of the responsibilities across iterations to see how these 'soft' cluster assignments change as the algorithm fits the Gaussian mixture model to the data; one good way to do this is to plot the data and color each point according to its cluster responsibilities. Our data are three-dimensional, which can make visualization difficult, so to make things easier we will plot the data using only two dimensions, taking just the [R G], [G B] or [R B] values instead of the full [R G B] measurement for each observation.
End of explanation
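# A small sanity check (illustrative, not required): each row of the responsibility matrix
# holds the soft assignment weights for one image, so it should sum to 1.
print out['resp'][0], out['resp'][0].sum()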
N, K = out['resp'].shape
random_resp = np.random.dirichlet(np.ones(K), N)
plot_responsibilities_in_RB(images, random_resp, 'Random responsibilities')
Explanation: To begin, we will visualize what happens when each data point has random responsibilities.
End of explanation
out = EM(img_data, init_means, init_covariances, init_weights, maxiter=1)
plot_responsibilities_in_RB(images, out['resp'], 'After 1 iteration')
Explanation: We now use the above plotting function to visualize the responsibilities after 1 iteration.
End of explanation
out = EM(img_data, init_means, init_covariances, init_weights, maxiter=20)
plot_responsibilities_in_RB(images, out['resp'], 'After 20 iterations')
Explanation: We now use the above plotting function to visualize the responsibilities after 20 iterations. We will see there are fewer unique colors; this indicates that there is more certainty that each point belongs to one of the four components in the model.
End of explanation
means = out['means']
covariances = out['covs']
rgb = images['rgb']
N = len(images)
K = len(means)
assignments = [0]*N
probs = [0]*N
for i in range(N):
# Compute the score of data point i under each Gaussian component:
p = np.zeros(K)
for k in range(K):
# YOUR CODE HERE (Hint: use multivariate_normal.pdf and rgb[i])
p[k] = ...
# Compute assignments of each data point to a given cluster based on the above scores:
# YOUR CODE HERE
assignments[i] = ...
# For data point i, store the corresponding score under this cluster assignment:
# YOUR CODE HERE
probs[i] = ...
assignments = gl.SFrame({'assignments':assignments, 'probs':probs, 'image': images['image']})
Explanation: Plotting the responsibilities over time in [R B] space shows a meaningful change in cluster assignments over the course of the algorithm's execution. While the clusters look significantly better organized at the end of the algorithm than they did at the start, it appears from our plot that they are still not very well separated. We note that this is due in part to our decision to plot 3D data in a 2D space; everything that was separated along the G axis is now "squashed" down onto the flat [R B] plane. If we were to plot the data in full [R G B] space, then we would expect to see further separation of the final clusters. We'll explore the cluster interpretability more in the next section.
Interpreting each cluster
Let's dig into the clusters obtained from our EM implementation. Recall that our goal in this section is to cluster images based on their RGB values. We can evaluate the quality of our clustering by taking a look at a few images that 'belong' to each cluster. We hope to find that the clusters discovered by our EM algorithm correspond to different image categories - in this case, we know that our images came from four categories ('cloudy sky', 'rivers', 'sunsets', and 'trees and forests'), so we would expect to find that each component of our fitted mixture model roughly corresponds to one of these categories.
If we want to examine some example images from each cluster, we first need to consider how we can determine cluster assignments of the images from our algorithm output. This was easy with k-means - every data point had a 'hard' assignment to a single cluster, and all we had to do was find the cluster center closest to the data point of interest. Here, our clusters are described by probability distributions (specifically, Gaussians) rather than single points, and our model maintains some uncertainty about the cluster assignment of each observation.
One way to phrase the question of cluster assignment for mixture models is as follows: how do we calculate the distance of a point from a distribution? Note that simple Euclidean distance might not be appropriate since (non-scaled) Euclidean distance doesn't take direction into account. For example, if a Gaussian mixture component is very stretched in one direction but narrow in another, then a data point one unit away along the 'stretched' dimension has much higher probability (and so would be thought of as closer) than a data point one unit away along the 'narrow' dimension.
In fact, the correct distance metric to use in this case is known as Mahalanobis distance. For a Gaussian distribution, this distance is proportional to the square root of the negative log likelihood. This makes sense intuitively - reducing the Mahalanobis distance of an observation from a cluster is equivalent to increasing that observation's probability according to the Gaussian that is used to represent the cluster. This also means that we can find the cluster assignment of an observation by taking the Gaussian component for which that observation scores highest. We'll use this fact to find the top examples that are 'closest' to each cluster.
Quiz Question: Calculate the likelihood (score) of the first image in our data set (images[0]) under each Gaussian component through a call to multivariate_normal.pdf. Given these values, what cluster assignment should we make for this image?
Now we calculate cluster assignments for the entire image dataset using the result of running EM for 20 iterations above:
End of explanation
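# An illustrative check (not required by the assignment) of the Mahalanobis distance idea
# described above, for the first image and the first mixture component learned by EM.
delta0 = np.array(rgb[0]) - means[0]
mahalanobis0 = np.sqrt(np.dot(delta0, np.dot(np.linalg.inv(covariances[0]), delta0)))
print 'Mahalanobis distance to component 0:', mahalanobis0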
def get_top_images(assignments, cluster, k=5):
# YOUR CODE HERE
images_in_cluster = ...
top_images = images_in_cluster.topk('probs', k)
return top_images['image']
Explanation: We'll use the 'assignments' SFrame to find the top images from each cluster by sorting the datapoints within each cluster by their score under that cluster (stored in probs). We can plot the corresponding images in the original data using show().
Create a function that returns the top 5 images assigned to a given category in our data (HINT: use the GraphLab Create function topk(column, k) to find the k top values according to specified column in an SFrame).
End of explanation
gl.canvas.set_target('ipynb')
for component_id in range(4):
get_top_images(assignments, component_id).show()
Explanation: Use this function to show the top 5 images in each cluster.
End of explanation |
8,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: 52N IOOS SOS Stable Demo -- network offering (multi-station) data request
Create pyoos "collector" that connects to the SOS end point and parses GetCapabilities (offerings)
Step2: Set up request filters (selections), issue "collect" request (GetObservation -- get time series data), and examine the response
Step3: Read time series data for two stations, ingest into Pandas DataFrames, then create plots | Python Code:
from datetime import datetime, timedelta
import pandas as pd
from pyoos.collectors.ioos.swe_sos import IoosSweSos
# convenience function to build record style time series representation
def flatten_element(p):
rd = {'time':p.time}
for m in p.members:
rd[m['standard']] = m['value']
return rd
Explanation: https://www.wakari.io/sharing/bundle/emayorga/pyoos_ioos_sos_demo1
Using Pyoos to access Axiom 52North IOOS SOS "Stable Demo"
Examine the offerings; then query, parse and examine a "network offering" (all stations)
Use the same approach to access time series data from an ncSOS end point and a 52North IOOS SOS end point (the Axiom stable demo). ncSOS can only return one station in the response, while 52North SOS can return multiple stations when a network-offering request is made.
Emilio Mayorga, 2/12/2014
(2/20/2014: Updated bundled Wakari environment to pyoos 0.6)
End of explanation
url52n = 'http://ioossos.axiomalaska.com/52n-sos-ioos-stable/sos/kvp'
collector52n = IoosSweSos(url52n)
offerings52n = collector52n.server.offerings
# Examine the first offering
of0 = offerings52n[0]
of0.id, of0.name, of0.begin_position, of0.end_position, of0.observed_properties, of0.procedures
# Examine the second offering
of1 = offerings52n[1]
of1.id, of1.name, of1.begin_position, of1.end_position, of1.observed_properties, of1.procedures
vars(of1)
Explanation: 52N IOOS SOS Stable Demo -- network offering (multi-station) data request
Create pyoos "collector" that connects to the SOS end point and parses GetCapabilities (offerings)
End of explanation
# Use the network:test:all offering to query across all stations
# Set and apply filters, then "collect"
collector52n.start_time = of0.begin_position
collector52n.end_time = of0.end_position
#collector52n.variables=of0.observed_properties # 2 obsprops, each of a different feature type
# For now, query only the variable that returns timeSeries; pyoos can't handle timeSeriesProfile yet
collector52n.variables=['http://mmisw.org/ont/cf/parameter/air_temperature']
offeringname = ['urn:ioos:network:test:all']
respfrmt = 'text/xml; subtype="om/1.0.0/profiles/ioos_sos/1.0"'
obs52n=collector52n.collect(offerings=offeringname, responseFormat=respfrmt)
obs52n
# 'stations' should be a Paegan 'StationCollection' with list of Paegan 'Station' elements
stations=obs52n[0].feature
print 'Station Object:', type(stations)
print 'Feature Type:', obs52n[0].feature_type
print 'Number of station in StationCollection:', len(stations.elements)
# stations returned in the network offering response
stations.elements
# Examine one station; list its unique observed properties (variables)
station52n_0 = stations.elements['urn:ioos:station:test:0']
station52n_0.get_unique_members()
# List and show one data value element
station52n_0.elements[0].time, station52n_0.elements[0].members
# Extract and parse units string
unitsraw = station52n_0.elements[0].members[0]['units']
units = unitsraw.split(':')[-1]
print unitsraw, ' | ', units
Explanation: Set up request filters (selections), issue "collect" request (GetObservation -- get time series data), and examine the response
End of explanation
# First station
flattened52n_0 = map(flatten_element, station52n_0.elements)
n52n_0df=pd.DataFrame.from_records(flattened52n_0, index=['time'])
n52n_0df.head()
# A second station
station52n_5 = stations.elements['urn:ioos:station:test:5']
flattened52n_5 = map(flatten_element, station52n_5.elements)
n52n_5df=pd.DataFrame.from_records(flattened52n_5, index=['time'])
# Plot data from first station using plot method on DataFrame
# Note the datetime x-axis labels are much nicer here (pandas)
# than on the next plot (bare matplotlib)
n52n_0df.plot(figsize=(12,5))
ylabel(units);
# Joint plot of time series from two stations in the network response, using matplotlib
op = n52n_0df.columns[0]
plot(n52n_0df.index, n52n_0df[op], '-b',
n52n_5df.index, n52n_5df[op], '-r')
ylabel(units)
legend([station52n_0.uid, station52n_5.uid]);
Explanation: Read time series data for two stations, ingest into Pandas DataFrames, then create plots
End of explanation |
8,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
                # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
xmin = 0
xmax = 255
a = 0.1
b = 0.9
retArr = []
for x in image_data:
xdash = a + ((x - xmin)*(b-a)/ (xmax - xmin))
retArr.append(xdash)
return retArr
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
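For reference, the formula above can also be written as one vectorized NumPy expression (a sketch, not the graded solution; it assumes image_data is array-like raw grayscale values):
import numpy as np

def normalize_grayscale_vectorized(image_data, a=0.1, b=0.9, x_min=0.0, x_max=255.0):
    # X' = a + (X - X_min) * (b - a) / (X_max - X_min)
    return a + (np.asarray(image_data, dtype=np.float32) - x_min) * (b - a) / (x_max - x_min)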
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 1
learning_rate = 0.5
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
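One systematic way to work through these options is to wrap the training cell above in a helper and sweep the candidates (train_and_validate below is a hypothetical helper returning the final validation accuracy; this is a sketch, not part of the lab):
# Hypothetical sweep over the Problem 3 configurations.
for epochs, learning_rate in [(1, 0.8), (1, 0.5), (1, 0.1), (1, 0.05), (1, 0.01),
                              (1, 0.2), (2, 0.2), (3, 0.2), (4, 0.2), (5, 0.2)]:
    accuracy = train_and_validate(epochs=epochs, learning_rate=learning_rate)
    print('epochs={}, lr={}: validation accuracy {:.3f}'.format(epochs, learning_rate, accuracy))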
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
8,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab
Source: Yann LeCun
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: TODO: Implement LeNet-5
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab
Source: Yan LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
layer_1_filter_shape = [5,5,1,6]
layer_1_weights = tf.Variable(tf.truncated_normal(shape = layer_1_filter_shape, mean = mu, stddev = sigma))
layer_1_bias = tf.Variable(tf.zeros(6))
layer_1_strides = [1, 1, 1, 1]
layer_1_padding = 'VALID'
layer_1 = tf.nn.conv2d(x, layer_1_weights, layer_1_strides, layer_1_padding) + layer_1_bias
# Activation.
layer_1 = tf.nn.relu(layer_1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
p1_filter_shape = [1, 2, 2, 1]
p1_strides = [1, 2, 2, 1]
p1_padding = 'VALID'
layer_1 = tf.nn.max_pool(layer_1, p1_filter_shape, p1_strides, p1_padding)
# Layer 2: Convolutional. Output = 10x10x16.
layer_2_filter_shape = [5,5,6,16]
layer_2_weights = tf.Variable(tf.truncated_normal(shape = layer_2_filter_shape , mean = mu, stddev = sigma))
layer_2_bias = tf.Variable(tf.zeros(16))
layer_2_strides = [1, 1, 1, 1]
layer_2_padding = 'VALID'
layer_2 = tf.nn.conv2d(layer_1, layer_2_weights, layer_2_strides, layer_2_padding) + layer_2_bias
# Activation.
layer_2 = tf.nn.relu(layer_2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
p2_filter_shape = [1, 2, 2, 1]
p2_strides = [1, 2, 2, 1]
p2_padding = 'VALID'
layer_2 = tf.nn.max_pool(layer_2, p2_filter_shape, p2_strides, p2_padding)
# Flatten. Input = 5x5x16. Output = 400.
layer_2 = flatten(layer_2)
# Layer 3: Fully Connected. Input = 400. Output = 120.
layer_3_filter_shape = [400,120]
layer_3_weights = tf.Variable(tf.truncated_normal(shape = layer_3_filter_shape, mean = mu, stddev = sigma))
layer_3_bias = tf.Variable(tf.zeros(120))
layer_3 = tf.matmul(layer_2, layer_3_weights) + layer_3_bias
# Activation.
layer_3 = tf.nn.relu(layer_3)
# Layer 4: Fully Connected. Input = 120. Output = 84.
layer_4_filter_shape = [120, 84]
layer_4_weights = tf.Variable(tf.truncated_normal(shape = layer_4_filter_shape, mean = mu, stddev = sigma))
layer_4_bias = tf.Variable(tf.zeros(84))
layer_4 = tf.matmul(layer_3, layer_4_weights) + layer_4_bias
# Activation.
layer_4 = tf.nn.relu(layer_4)
# Layer 5: Fully Connected. Input = 84. Output = 10.
layer_5_filter_shape = [84, 10]
layer_5_weights = tf.Variable(tf.truncated_normal(shape = layer_5_filter_shape, mean = mu, stddev = sigma))
layer_5_bias = tf.Variable(tf.zeros(10))
logits = tf.matmul(layer_4, layer_5_weights) + layer_5_bias
return logits
Explanation: TODO: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the result of the 2nd fully connected layer.
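A quick, optional sanity check of those shapes (a sketch, not part of the lab; it assumes the LeNet function above is defined):
check_x = tf.placeholder(tf.float32, (None, 32, 32, 1))
check_logits = LeNet(check_x)
print(check_logits.get_shape())   # expect (?, 10)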
End of explanation
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_y, logits=logits)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
8,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <a href="https://colab.research.google.com/github/always-newbie161/probml-notebooks/blob/jax_vdvae/notebooks/vdvae_jax_cifar_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Step2: Model
(for cifar10)
Setting up hyperparams
Step3: This model is a hierarchical model with multiple stochastic blocks with multiple deterministic layers. You can know about model skeleton by observing the encoder and decoder "strings"
How to understand the string
Step4: Downloading cifar10 dataset
Step5: Setting up the model, data and the preprocess fn.
Step6: Evaluation
Step7: Function to save and show of batch of images given as a numpy array.
Step8: Generations
Step9: Images will be saved in the following dir
Step10: As the model params are replicated over multiple devices, unreplicated copy of them is made to use it for sampling and generations.
Step11: Reconstructions
Step12: Preprocessing images before getting the latents
Step13: Getting the partial functions from the model methods
Step14: Getting latents of different levels.
Step15: No of latent observations used depends on H.num_variables_visualize, altering it gives different resolutions of the reconstructions.
Step16: Original Images
Step17: Reconstructions. | Python Code:
from google.colab import auth
auth.authenticate_user()
project_id = "probml"
!gcloud config set project {project_id}
"""
this should be the format of the checkpoint filetree:
checkpoint_path >> model(optimizer)_checkpoint_file.
checkpoint_path_ema >> ema_checkpoint_file
"""
checkpoint_path = "/content/vdvae_cifar10_2.86/latest_cifar10"
# checkpoints are downloaded at these paths.
# vdvae_cifar10_2.86/latest_cifar10 - optimizer+mode
# vdvae_cifar10_2.86/latest_cifar10_ema - ema_params'
# @title Download checkpoints
!gsutil cp -r gs://gsoc_bucket/vdvae_cifar10_2.86 ./
!ls -l /content/vdvae_cifar10_2.86/latest_cifar10
!ls -l /content/vdvae_cifar10_2.86/latest_cifar10_ema
!git clone https://github.com/j-towns/vdvae-jax.git
%cd vdvae-jax
!pip install --quiet flax
import os
try:
os.environ["COLAB_TPU_ADDR"]
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
except:
pass
import jax
jax.local_devices()
Explanation: <a href="https://colab.research.google.com/github/always-newbie161/probml-notebooks/blob/jax_vdvae/notebooks/vdvae_jax_cifar_demo.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook shows demo working with vdvae in jax and the code used is from vdvae-jax from Jamie Townsend
Setup
End of explanation
from hps import HPARAMS_REGISTRY, Hyperparams, add_vae_arguments
from train_helpers import setup_save_dirs
import argparse
import dataclasses
H = Hyperparams()
parser = argparse.ArgumentParser()
parser = add_vae_arguments(parser)
parser.set_defaults(hps="cifar10", conv_precision="highest")
H = dataclasses.replace(H, **vars(parser.parse_args([])))
hparam_sets = [x for x in H.hps.split(",") if x]
for hp_set in hparam_sets:
hps = HPARAMS_REGISTRY[hp_set]
parser.set_defaults(**hps)
H = dataclasses.replace(H, **vars(parser.parse_args([])))
H = setup_save_dirs(H)
Explanation: Model
(for cifar10)
Setting up hyperparams
End of explanation
hparams = dataclasses.asdict(H)
for k in ["enc_blocks", "dec_blocks", "zdim", "n_batch", "device_count"]:
print(f"{k}:{hparams[k]}")
from utils import logger
from jax.interpreters.xla import DeviceArray
log = logger(H.logdir)
if H.log_wandb:
import wandb
def logprint(*args, pprint=False, **kwargs):
if len(args) > 0:
log(*args)
wandb.log({k: np.array(x) if type(x) is DeviceArray else x for k, x in kwargs.items()})
wandb.init(config=dataclasses.asdict(H))
else:
logprint = log
import numpy as np
from jax import lax
import torch
import imageio
from PIL import Image
import glob
from torch.utils.data import DataLoader
from torchvision import transforms
np.random.seed(H.seed)
torch.manual_seed(H.seed)
H = dataclasses.replace(
H,
conv_precision={"default": lax.Precision.DEFAULT, "high": lax.Precision.HIGH, "highest": lax.Precision.HIGHEST}[
H.conv_precision
],
seed_init=H.seed,
seed_sample=H.seed + 1,
seed_train=H.seed + 2 + H.host_id,
seed_eval=H.seed + 2 + H.host_count + H.host_id,
)
print("training model on ", H.dataset)
Explanation: This model is a hierarchical model with multiple stochastic blocks, each with multiple deterministic layers. You can get a sense of the model skeleton by looking at the encoder and decoder "strings".
How to understand the string:
* blocks are comma separated
* axb means there are b res blocks (sets of Conv layers) at dimensions axa
* amb means a mixin block, which increases the inter-image dims from a to b using nearest-neighbour upsampling (used in the decoder)
* adb means a block with an avg-pooling layer, which reduces the dims from a to b (used in the encoder)
For more details, refer to this paper.
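As a small illustration (not from the original code base), a toy spec string can be split on commas and each token inspected for its 'x', 'm' or 'd' separator:
# Toy example of reading a block-spec string (illustration only).
def parse_blocks(spec):
    parsed = []
    for token in spec.split(','):
        for sep in 'xmd':
            if sep in token:
                a, b = token.split(sep)
                parsed.append((int(a), sep, int(b)))
                break
    return parsed

print(parse_blocks('4x2,8m4,8x5'))   # [(4, 'x', 2), (8, 'm', 4), (8, 'x', 5)]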
End of explanation
!./setup_cifar10.sh
Explanation: Downloading cifar10 dataset
End of explanation
from data import set_up_data
H, data_train, data_valid_or_test, preprocess_fn = set_up_data(H)
from train_helpers import load_vaes
H = dataclasses.replace(H, restore_path=checkpoint_path)
optimizer, ema_params, start_epoch = load_vaes(H, logprint)
start_epoch # no.of.epochs trained
# Hparams for the current model
hparams = dataclasses.asdict(H)
for i, k in enumerate(sorted(hparams)):
logprint(f"type=hparam, key={k}, value={getattr(H, k)}")
Explanation: Setting up the model, data and the preprocess fn.
End of explanation
from train import run_test_eval
run_test_eval(H, ema_params, data_valid_or_test, preprocess_fn, logprint)
Explanation: Evaluation
End of explanation
def zoom_in(fname, shape):
im = Image.open(fname)
resized_im = im.resize(shape)
resized_im.save(fname)
def save_n_show(images, order, image_shape, fname, zoom=True, show=False):
n_rows, n_images = order
im = (
images.reshape((n_rows, n_images, *image_shape))
.transpose([0, 2, 1, 3, 4])
.reshape([n_rows * image_shape[0], n_images * image_shape[1], 3])
)
print(f"printing samples to {fname}")
imageio.imwrite(fname, im)
if zoom:
zoom_in(fname, (640, 64)) # w=640, h=64
if show:
display(Image.open(fname))
Explanation: Function to save and show a batch of images given as a numpy array.
End of explanation
n_images = 10
num_temperatures = 3
image_shape = [H.image_size, H.image_size, H.image_channels]
H = dataclasses.replace(H, num_images_visualize=n_images, num_temperatures_visualize=num_temperatures)
Explanation: Generations
End of explanation
H.save_dir
Explanation: Images will be saved in the following dir
End of explanation
from jax import random
from vae import VAE
from flax import jax_utils
from functools import partial
rng = random.PRNGKey(H.seed_sample)
ema_apply = partial(VAE(H).apply, {"params": jax_utils.unreplicate(ema_params)})
forward_uncond_samples = partial(ema_apply, method=VAE(H).forward_uncond_samples)
temperatures = [1.0, 0.9, 0.8, 0.7]
for t in temperatures[: H.num_temperatures_visualize]:
im = forward_uncond_samples(n_images, rng, t=t)
im = np.asarray(im)
save_n_show(im, [1, n_images], image_shape, f"{H.save_dir}/generations-tem-{t}.png")
for t in temperatures[: H.num_temperatures_visualize]:
print("=" * 25)
print(f"Generation of {n_images} new images for t={t}")
print("=" * 25)
fname = f"{H.save_dir}/generations-tem-{t}.png"
display(Image.open(fname))
Explanation: As the model params are replicated over multiple devices, an unreplicated copy of them is made and used for sampling and generation.
End of explanation
n_images = 10
image_shape = [H.image_size, H.image_size, H.image_channels]
Explanation: Reconstructions
End of explanation
from train import get_sample_for_visualization
viz_batch_original, viz_batch_processed = get_sample_for_visualization(
data_valid_or_test, preprocess_fn, n_images, H.dataset
)
Explanation: Preprocessing images before getting the latents
End of explanation
forward_get_latents = partial(ema_apply, method=VAE(H).forward_get_latents)
forward_samples_set_latents = partial(ema_apply, method=VAE(H).forward_samples_set_latents)
Explanation: Getting the partial functions from the model methods
End of explanation
zs = [s["z"] for s in forward_get_latents(viz_batch_processed, rng)]
Explanation: Getting latents of different levels.
End of explanation
recons = []
lv_points = np.floor(np.linspace(0, 1, H.num_variables_visualize + 2) * len(zs)).astype(int)[1:-1]
for i in lv_points:
recons.append(forward_samples_set_latents(n_images, zs[:i], rng, t=0.1))
Explanation: No of latent observations used depends on H.num_variables_visualize, altering it gives different resolutions of the reconstructions.
End of explanation
orig_im = np.array(viz_batch_original)
print("Original test images")
save_n_show(orig_im, [1, n_images], image_shape, f"{H.save_dir}/orig_test.png", show=True)
Explanation: Original Images
End of explanation
for i, r in enumerate(recons):
r = np.array(r)
print("=" * 25)
print(f"Generation of {n_images} new images for {i+1}x resolution")
print("=" * 25)
fname = f"{H.save_dir}/recon_test-res-{i+1}x.png"
save_n_show(r, [1, n_images], image_shape, fname, show=True)
Explanation: Reconstructions.
End of explanation |
8,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List Comprehensions
List comprehensions are a quick and concise way to create lists. A list comprehension consists of an expression, followed by a for clause and then zero or more for or if clauses. The result of a list comprehension is a new list.
It is generally in the form of
returned_list = [<expression> <for x in current_list> <if filter(x)>]
In other programming languages, this is generally equivalent to
Step1: Example 2
Step2: Example 3
Step3: Example 4
Step4: Example 5
Step5: Example 6
Step6: Example 7 | Python Code:
# Simple List Comprehension (use a name other than `list` so the built-in isn't shadowed)
nums = [x for x in range(5)]
print(nums)
Explanation: List Comprehensions
List comprehensions are a quick and concise way to create lists. A list comprehension consists of an expression, followed by a for clause and then zero or more for or if clauses. The result of a list comprehension is a new list.
It is generally in the form of
returned_list = [<expression> <for x in current_list> <if filter(x)>]
In other programming languages, this is generally equivalent to:
for <item> in <list>
if (<condition>):
<expression>
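A concrete illustration of that equivalence in Python:
# Explicit loop
squares = []
for x in range(10):
    if x % 2 == 0:
        squares.append(x**2)

# The same result as a list comprehension
squares = [x**2 for x in range(10) if x % 2 == 0]
print(squares)   # [0, 4, 16, 36, 64]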
Example 1
End of explanation
# Generate Squares for 10 numbers
list1 = [x**2 for x in range(10)]
print(list1)
Explanation: Example 2
End of explanation
# List comprehension with a filter condition
list2 = [x**2 for x in range(10) if x%2 == 0]
print(list2)
Explanation: Example 3
End of explanation
# Use list comprehension to filter out numbers
words = "Hello 12345 World".split()
numbers = [w for w in words if w.isdigit()]
print(numbers)
Explanation: Example 4
End of explanation
words = "An apple a day keeps the doctor away".split()
vowels = [w.upper() for w in words if w.lower().startswith(('a','e','i','o','u'))]
for vowel in vowels:
print(vowel)
Explanation: Example 5
End of explanation
list5 = [x + y for x in [1,2,3,4,5] for y in [10,11,12,13,14]]
print(list5)
Explanation: Example 6
End of explanation
# create 3 lists
list_1 = [1,2,3]
list_2 = [3,4,5]
list_3 = [7,8,9]
# create a matrix
matrix = [list_1,list_2,list_3]
# get the first column
first_col = [row[0] for row in matrix]
print(first_col)
Explanation: Example 7
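As a follow-up sketch (not part of the original example), a nested list comprehension can pull out every column at once, i.e. transpose the matrix defined above:
# Transpose the matrix with a nested list comprehension
transposed = [[row[i] for row in matrix] for i in range(len(matrix[0]))]
print(transposed)   # [[1, 3, 7], [2, 4, 8], [3, 5, 9]]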
End of explanation |
8,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 7: Error Handling
Step1: Listing 7.1
Step2: Listing 7.2
Step3: Listing 7.3
Step4: Listing 7.4
Step5: Listing 7.5
Step6: Listing 7.6 | Python Code:
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 7: Error Handling
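Before the numbered listings, here is a generic sketch (not one of the book's listings) of the full try/except/else/finally structure the chapter builds toward:
def to_fraction(text):
    try:
        value = int(text)             # may raise ValueError
        result = 1 / value            # may raise ZeroDivisionError
    except ValueError:
        print('Not an integer:', text)
    except ZeroDivisionError:
        print('Cannot divide by zero')
    else:
        print('1/{} = {}'.format(text, result))
    finally:
        print('done with', text)      # always runs

to_fraction('10')
to_fraction('0')
to_fraction('spam')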
End of explanation
with open('myfile.csv') as fh:
line = fh.readline()
value = line.split('\t')[0]
with open('other.txt',"w") as fw:
fw.write(str(int(value)*.2))
Explanation: Listing 7.1: wotest.py: Program with no error checking
End of explanation
import os
iname = input("Enter input filename: ")
oname = input("Enter output filename: ")
if os.path.exists(iname):
with open(iname) as fh:
line = fh.readline()
if "\t" in line:
value = line.split('\t')[0]
if os.access(oname, os.W_OK) == 0:
with open(oname, 'w') as fw:
if value.isdigit():
fw.write(str(int(value)*.2))
else:
print("Can’t be converted to int")
else:
print("Output file is not writable")
else:
print("There is no TAB. Check the input file")
else:
print("The file doesn’t exist")
Explanation: Listing 7.2: LBYL.py: Error handling LBYL version
End of explanation
try:
iname = input("Enter input filename: ")
oname = input("Enter output filename: ")
with open(iname) as fh:
line = fh.readline()
if '\t' in line:
value = line.split('\t')[0]
with open(oname, 'w') as fw:
fw.write(str(int(value)*.2))
except NameError:
print("There is no TAB. Check the input file")
except FileNotFoundError:
print("File not exist")
except PermissionError:
print("Can’t write to outfile.")
except ValueError:
print("The value can’t be converted to int")
else:
print("Thank you!. Everything went OK.")
Explanation: Listing 7.3: exception.py: Similar to 7.2 but with exception handling.
End of explanation
iname = input("Enter input filename: ")
oname = input("Enter output filename: ")
try:
with open(iname) as fh:
line = fh.readline()
except FileNotFoundError:
print("File not exist")
if '\t' in line:
value = line.split('\t')[0]
try:
with open(oname, 'w') as fw:
fw.write(str(int(value)*.2))
except NameError:
print("There is no TAB. Check the input file")
except PermissionError:
print("Can’t write to outfile.")
except ValueError:
print("The value can’t be converted to int")
else:
print("Thank you!. Everything went OK.")
d = {"A":"Adenine","C":"Cytosine","T":"Timine","G":"Guanine"}
try:
print(d[input("Enter letter: ")])
except:
print("No such nucleotide")
d = {"A":"Adenine", "C":"Cytosine", "T":"Timine", "G":"Guanine"}
try:
print(d[input("Enter letter: ")])
except EOFError:
print("Good bye!")
except KeyError:
print("No such nucleotide")
Explanation: Listing 7.4: nested.py: Code with nested exceptions
End of explanation
import sys
try:
0/0
except:
a,b,c = sys.exc_info()
print('Error name: {0}'.format(a.__name__))
print('Message: {0}'.format(b))
print('Error in line: {}'.format(c.tb_lineno))
Explanation: Listing 7.5: sysexc.py: Using sys.exc_info()
End of explanation
import sys
try:
x = open('random_filename')
except:
a, b = sys.exc_info()[:2]
print('Error name: {}'.format(a.__name__))
print('Error code: {}'.format(b.args[0]))
print('Error message: {}'.format(b.args[1]))
def avg(numbers):
return sum(numbers)/len(numbers)
avg([])
def avg(numbers):
if not numbers:
raise ValueError("Please enter at least one element")
return sum(numbers)/len(numbers)
avg([])
Explanation: Listing 7.6: sysexc2.py: Another use of sys.exc_info()
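A short follow-up (not a listing from the book) showing how a caller can catch the ValueError raised by the second version of avg defined above:
try:
    mean = avg([])
except ValueError as e:
    print('Could not compute the average:', e)
else:
    print('Average:', mean)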
End of explanation |
8,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
mosasaurus example
This notebook shows how to run mosasaurus to extract spectra from a sample dataset. In this example, there's a small sample dataset of raw LDSS3C images stored in the directory /Users/zkbt/Cosmos/Data/mosaurusexample/data/ut140809. These data come from a transit observation of WASP-94Ab, and contain one of every expected filetype for LDSS3C.
instrument
First we create an instrument object for LDSS3C. This contains all the instrument-specific information mosasaurus needs to be aware of (how to calibrate CCD images, what information to pull out of headers, where to find wavelength calibration information, etc...). We can also use it to set up some of the basics for how we should conduct the extraction for this instrument (how big of a subarray to think about for each target, parameters for extraction apertures, is it worth trying to zap cosmic rays?).
Step1: target
Next, we create a target object for the star we were looking at. This is mostly to pull out the RA and Dec for calculating barycentric corrections to the observation times.
Step2: night
We create a night object to store information related to this night of observations. It will be connected to a data directory. For this night, ut140809, it expects a group of FITS images to be stored inside the base directory in data/ut140809. One thing a night object can do is set up a log of all the files, pulling information from their FITS headers.
Step3: observation
The core unit of a mosasaurus analysis is an observation, pointing to a specific target with a specific instrument on a specific night. An observation will need to set up text files that indicate which file prefixes are associated with which type of file needed for a reduction.
Step4: reducer
The reducer object will go in and extract spectra from that observation. Frankly, I'm not entirely sure anymore why this is distinct from an observation -- maybe we should just move the (very few) features of the reducer into observation, so we can just say o.reduce() straight away?
Step5: cubes
This is where it starts getting particularly kludgy. First, we create an unshifted cube, with every spectrum resample onto a uniform wavelength grid (although with a rough accuracy for the wavelength calibration). | Python Code:
%matplotlib auto
# create an instrument with the appropriate settings
from mosasaurus.instruments import LDSS3C
i = LDSS3C(grism='vph-all')
# set up the basic directory structure, where `data/` should be found
path = '/Users/zkbt/Cosmos/Data/mosaurusexample'
i.setupDirectories(path)
# set the extraction defaults
i.extractiondefaults['spatialsubarray'] = 200
i.extractiondefaults['narrowest'] = 4
i.extractiondefaults['widest'] = 20
i.extractiondefaults['numberofapertures'] = 5
i.extractiondefaults['zapcosmics'] = False
# print out a summary of this instrument
i.summarize()
Explanation: mosasaurus example
This notebook shows how to run mosasaurus to extract spectra from a sample dataset. In this example, there's a small sample dataset of raw LDSS3C images stored in the directory /Users/zkbt/Cosmos/Data/mosaurusexample/data/ut140809. These data come from a transit observation of WASP-94Ab, and contain one of every expected filetype for LDSS3C.
instrument
First we create an instrument object for LDSS3C. This contains all the instrument-specific information mosasaurus needs to be aware of (how to calibrate CCD images, what information to pull out of headers, where to find wavelength calibration information, etc...). We can also use it to set up some of the basics for how we should conduct the extraction for this instrument (how big of a subarray to think about for each target, parameters for extraction apertures, is it worth trying to zap cosmic rays?).
End of explanation
# create a target, pulling values from simbad
from mosasaurus.Target import Target
import astropy.units as u
t = Target(starname='WASP-94A', name='WASP-94Ab')
t.summarize()
t.star.summarize()
Explanation: target
Next, we create a target object for the star we were looking at. This is mostly to pull out the RA and Dec for calculating barycentric corrections to the observation times.
End of explanation
# create a night to analyze
from mosasaurus.Night import Night
n = Night('ut140809', instrument=i)
n.createNightlyLog(remake=False)
Explanation: night
We create a night object to store information related to this night of observations. It will be connected to a data directory. For this, the night of ut140809, it expects a group of FITS images to be stored inside the base directory in data/ut140809. One thing this a night can do is set up a log of all the files, pulling information from their FITS headers.
End of explanation
# create an observation
from mosasaurus.Observation import Observation
o = Observation(t, i, n)
o.setupFilePrefixes(science=['wasp94'], reference=['wasp94 thru mask'], flat=['flat'])
Explanation: observation
The core unit of a mosasaurus analysis is an observation, pointing to a specific target with a specific instrument on a specific night. An observation will need to set up text files that indicate which file prefixes are associated with which type of file needed for a reduction.
End of explanation
# create a reducer to analyze this observation
from mosasaurus.Reducer import Reducer
r = Reducer(o, visualize=False)
r.reduce()
Explanation: reducer
The reducer object will go in and extract spectra from that observation. Frankly, I'm not entirely sure anymore why this is distinct from an observation -- maybe we should just move the (very few) features of the reducer into observation, so we can just say o.reduce() straight away?
End of explanation
from mosasaurus.Cube import Cube
# create a cube, using 16 pixel apertures
c = Cube(o, width=16)
# define which is the target, and which are comparisons
c.setStars(target='aperture_713_1062', comparisons='aperture_753_1062')
# populate the spectra in the cube, and save it
c.populate(shift=False, max=None)
c.save()
c.imageCube(keys=['raw_counts'], stars=[c.target])
# estimate the required shifts for each exposures
from mosasaurus.WavelengthRecalibrator import WavelengthRecalibrator
wr = WavelengthRecalibrator(c)
# fix up the wavelength calibration for each exposure
r.mask.setup()
r.mask.addWavelengthCalibration(shift=True)
# repopulate the cube
c.populate(shift=True, remake=True)
c.imageCube(keys=['raw_counts'], stars=[c.target])
c.save()
# make movie of the cube
c.movieCube(stride=1, remake=True)
Explanation: cubes
This is where it starts getting particularly kludgy. First, we create an unshifted cube, with every spectrum resampled onto a uniform wavelength grid (although with a rough accuracy for the wavelength calibration).
End of explanation |
8,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training weight quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
</table>
Step2: Train and export the model
Step3: For this example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.
Convert to a TFLite model
The savedmodel directory is named with a timestamp. Select the most recent one
Step4: Using the python TFLiteConverter, the saved model can be converted into a TFLite model.
First load the model using the TFLiteConverter
Step5: Write it out to a tflite file
Step6: To quantize the model on export, set the optimizations flag to optimize for size
Step7: Note how the resulting file is approximately 1/4 the size.
Step8: Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
load the test data
First let's load the mnist test data to feed to it
Step9: Load the model into an interpreter
Step10: Test the model on one image
Step11: Evaluate the models
Step12: Repeat the evaluation on the weight quantized model to obtain
Step13: In this example, the compressed model has no difference in the accuracy.
Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available at the
Tensorflow Lite model repository.
You can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by
Step14: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
! git clone --depth 1 https://github.com/tensorflow/models
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
Explanation: Post-training weight quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 8 bit precision as part of model conversion from
tensorflow graphdefs to TensorFlow Lite's flat buffer format. Weight quantization
achieves a 4x reduction in the model size. In addition, TFLite supports on the
fly quantization and dequantization of activations to allow for:
Using quantized kernels for faster implementation when available.
Mixing of floating-point kernels with quantized kernels for different parts
of the graph.
The activations are always stored in floating point. For ops that
support quantized kernels, the activations are quantized to 8 bits of precision
dynamically prior to processing and are de-quantized to float precision after
processing. Depending on the model being converted, this can give a speedup over
pure floating point computation.
In contrast to
quantization aware training
, the weights are quantized post training and the activations are quantized dynamically
at inference in this method.
Therefore, the model weights are not retrained to compensate for quantization
induced errors. It is important to check the accuracy of the quantized model to
ensure that the degradation is acceptable.
This tutorial trains an MNIST model from scratch, checks its accuracy in
TensorFlow, and then converts the saved model into a Tensorflow Lite flatbuffer
with weight quantization. Finally, it checks the
accuracy of the converted model and compares it to the original saved model. The training script, mnist.py, is from
Tensorflow official mnist tutorial.
Build an MNIST model
Setup
End of explanation
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
# Note: channels_last is required here or the conversion may fail.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
Explanation: Train and export the model
End of explanation
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
Explanation: For the example, since you trained the model for just a single epoch, it only trains to ~96% accuracy.
Convert to a TFLite model
The savedmodel directory is named with a timestamp. Select the most recent one:
End of explanation
import tensorflow as tf
tf.enable_eager_execution()
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
Explanation: Using the python TFLiteConverter, the saved model can be converted into a TFLite model.
First load the model using the TFLiteConverter:
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a tflite file:
End of explanation
# Note: If you don't have a recent tf-nightly installed, the
# "optimizations" line will have no effect.
tf.logging.set_verbosity(tf.logging.INFO)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
Explanation: To quantize the model on export, set the optimizations flag to optimize for size:
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting file is approximately 1/4 the size.
End of explanation
import numpy as np
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.cast(mnist_test[0], tf.float32)/255.0, mnist_test[1]
# Note: If you change the batch size, then use
# `tf.lite.Interpreter.resize_tensor_input` to also change it for
# the interpreter.
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
Explanation: Run the TFLite models
Run the TensorFlow Lite model using the Python TensorFlow Lite
Interpreter.
load the test data
First let's load the mnist test data to feed to it:
End of explanation
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
tf.logging.set_verbosity(tf.logging.DEBUG)
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
input_index = interpreter_quant.get_input_details()[0]["index"]
output_index = interpreter_quant.get_output_details()[0]["index"]
Explanation: Load the model into an interpreter
End of explanation
for img, label in mnist_ds.take(1):
break
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0])))
plt.grid(False)
Explanation: Test the model on one image
End of explanation
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
print(eval_model(interpreter, mnist_ds))
Explanation: Evaluate the models
End of explanation
print(eval_model(interpreter_quant, mnist_ds))
Explanation: Repeat the evaluation on the weight quantized model to obtain:
End of explanation
archive_path = tf.keras.utils.get_file("resnet_v2_101.tgz", "https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/resnet_v2_101.tgz", extract=True)
archive_path = pathlib.Path(archive_path)
archive_dir = str(archive_path.parent)
Explanation: In this example, the compressed model shows no loss in accuracy.
Optimizing an existing model
Resnets with pre-activation layers (Resnet-v2) are widely used for vision applications.
Pre-trained frozen graph for resnet-v2-101 is available at the
Tensorflow Lite model repository.
You can convert the frozen graph to a TensorFLow Lite flatbuffer with quantization by:
End of explanation
! cat {archive_dir}/resnet_v2_101_299_info.txt
graph_def_file = pathlib.Path(archive_path).parent/"resnet_v2_101_299_frozen.pb"
input_arrays = ["input"]
output_arrays = ["output"]
converter = tf.lite.TFLiteConverter.from_frozen_graph(
str(graph_def_file), input_arrays, output_arrays, input_shapes={"input":[1,299,299,3]})
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
resnet_tflite_file = graph_def_file.parent/"resnet_v2_101_quantized.tflite"
resnet_tflite_file.write_bytes(converter.convert())
!ls -lh {archive_dir}/*.tflite
Explanation: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph.
End of explanation |
8,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function Approximation with a Multilayer Perceptron
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
Step1: Function definitions. Here we consider a hard-coded two-layer perceptron with one hidden layer, using the rectified linear unit (ReLU) as activation function, and a linear output layer. The output of the perceptron can hence be written as
\begin{equation}
\hat{f}(x,\boldsymbol{\theta}) = \sum_{i=1}^m v_i\sigma(x+b_i)
\end{equation}
where $\sigma(x) = \text{ReLU}(x) = \begin{cases}0 & \text{if }x < 0 \\ x & \text{otherwise}\end{cases}$.
Instead of specifying all the parameters individually, we group them in a single vector $\boldsymbol{\theta}$ with
\begin{equation}
\boldsymbol{\theta}=\begin{pmatrix}
v_1 & b_1 & v_2 & b_2 & \cdots & v_m & b_m\end{pmatrix}
\end{equation}
Step2: The cost function is the mean-squared error, i.e.,
\begin{equation}
J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)^2
\end{equation}
The gradient of the cost function can be computed by hand as
\begin{equation}
\nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)\begin{pmatrix}
\sigma(x_i^{[\text{train}]}+\theta_2) \\
\theta_1\sigma^\prime(x_i^{[\text{train}]}+\theta_2) \\
\sigma(x_i^{[\text{train}]}+\theta_4) \\
\theta_3\sigma^\prime(x_i^{[\text{train}]}+\theta_4) \\
\vdots \\
\sigma(x_i^{[\text{train}]}+\theta_{2m}) \\
\theta_{2m-1}\sigma^\prime(x_i^{[\text{train}]}+\theta_{2m})\end{pmatrix}
\end{equation}
where
\begin{equation}
\sigma^\prime(x) = \left\{\begin{array}{ll}
0 & \text{if }x < 0 \\
1 & \textrm{otherwise}\end{array}\right.
\end{equation}
Step3: Here, we use the Adam optimizer algorithm [1] to find the best configuration of parameters $\boldsymbol{\theta}$. See also the notebook MLP_introduction.ipynb for a description of the Adam optimizer.
[1] D. P. Kingma and J. L. Ba, "Adam
Step4: Carry out the optimization using 10000 iterations with Adam and specify the number of components $m$. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
function_select = 4
def myfun(x):
functions = {
1: np.power(x,2), # quadratic function
2: np.sin(x), # sinus
3: np.sign(x), # signum
4: np.exp(x), # exponential function
5: np.abs(x)
}
return functions.get(function_select)
# Generate training data.
N = 32
x_train = np.linspace(-2, 2, num=N).reshape(-1,1)
# Generate the evaluation data.
# (can exceed the range of the training data to evaluate the prediction capabilities)
x_eval = np.linspace(-4, 4, num=4*N).reshape(-1,1)
Explanation: Function Approximation with a Multilayer Perceptron
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates:
* training of neural networks by hand
* approximation of a function using a multilayer perceptron consisting of 3 hidden units with ReLU activation function and a linear output unit
* generation of Video illustrating piecewise linear approximation of a function by ReLU
End of explanation
def sigma(x):
return np.maximum(0,x)
# First order derivative of sigma (here ReLU)
def sigma_prime(x):
retval = np.zeros(x.shape)
retval[x >= 0] = 1
return retval
def MLP(x,theta):
y = np.zeros(x.shape)
for k in range(0,len(theta),2):
y += theta[k]*sigma(x+theta[k+1])
return y
Explanation: Function definitions. Here we consider a hard-coded two-layer perceptron with one hidden layer, using the rectified linear unit (ReLU) as activation function, and a linear output layer. The output of the perceptron can hence be written as
\begin{equation}
\hat{f}(x,\boldsymbol{\theta}) = \sum_{i=1}^m v_i\sigma(x+b_i)
\end{equation}
where $\sigma(x) = \text{ReLU}(x) = \begin{cases}0 & \text{if }x < 0 \\ x & \text{otherwise}\end{cases}$.
Instead of specifying all the parameters individually, we group them in a single vector $\boldsymbol{\theta}$ with
\begin{equation}
\boldsymbol{\theta}=\begin{pmatrix}
v_1 & b_1 & v_2 & b_2 & \cdots & v_m & b_m\end{pmatrix}
\end{equation}
End of explanation
def cost_function(x, y, theta):
# cost function is the mean-squared error between the MLP outputs on the training set x and the targets y
difference = np.array([MLP(e, theta) for e in x]) - y
return np.dot(difference.T, difference)/len(x)
# gradient of the cost function
def cost_function_gradient(x, y, theta):
gradient = np.zeros(len(theta))
for k in range(len(x)):
ig = np.zeros(len(theta))
for j in range(0,len(theta),2):
ig[j] = sigma(x[k]+theta[j+1])
ig[j+1] = theta[j]*sigma_prime(x[k]+theta[j+1])
gradient += 2*(MLP(x[k],theta) - y[k])*ig
return gradient / len(x)
Explanation: The cost function is the mean-squared error, i.e.,
\begin{equation}
J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)^2
\end{equation}
The gradient of the cost function can be computed by hand as
\begin{equation}
\nabla_{\boldsymbol{\theta}}J(\boldsymbol{\theta},\mathbb{X}^{[\text{train}]},\mathbb{Y}^{[\text{train}]}) = \frac{1}{N}\sum_{i=1}^N\left(\hat{f}(x_i^{[\text{train}]},\boldsymbol{\theta}) - y_i^{[\text{train}]}\right)\begin{pmatrix}
\sigma(x_i^{[\text{train}]}+\theta_2) \\
\theta_1\sigma^\prime(x_i^{[\text{train}]}+\theta_2) \\
\sigma(x_i^{[\text{train}]}+\theta_4) \\
\theta_3\sigma^\prime(x_i^{[\text{train}]}+\theta_4) \\
\vdots \\
\sigma(x_i^{[\text{train}]}+\theta_{2m}) \\
\theta_{2m-1}\sigma^\prime(x_i^{[\text{train}]}+\theta_{2m})\end{pmatrix}
\end{equation}
where
\begin{equation}
\sigma^\prime(x) = \left\{\begin{array}{ll}
0 & \text{if }x < 0 \\
1 & \textrm{otherwise}\end{array}\right.
\end{equation}
End of explanation
def approx_1d_function_adam(x_train, theta_initial, epochs):
y_train = myfun(x_train)
theta = theta_initial
beta1 = 0.9
beta2 = 0.999
alpha = 0.001
epsilon = 1e-8
m = np.zeros(theta.shape)
t = 0
v = np.zeros(theta.shape)
for k in range(epochs):
t += 1
g = cost_function_gradient(x_train, y_train, theta)
m = beta1*m + (1-beta1)*g
v = beta2*v + (1-beta2)*(g**2)
mhat = m/(1-beta1**t)
vhat = v/(1-beta2**t)
theta = theta - alpha*mhat/(np.sqrt(vhat)+epsilon)
return theta
Explanation: Here, we use the Adam optimizer algorithm [1] to find the best configuration of parameters $\boldsymbol{\theta}$. See also the notebook MLP_introduction.ipynb for a description of the Adam optimizer.
[1] D. P. Kingma and J. L. Ba, "Adam: A Method for Stochastic Optimization," published at ICLR 2015, available at https://arxiv.org/pdf/1412.6980.pdf
End of explanation
epochs = 10000
m = 10
theta_initial = np.random.randn(2*m)
theta_adam = approx_1d_function_adam(x_train, theta_initial, epochs)
# compute evaluation
predictions = MLP(x_eval, theta_adam)
fig = plt.figure(1, figsize=(18,6))
font = {'size' : 14}
plt.rc('font', **font)
import matplotlib  # needed for checkdep_usetex below
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.rc('text.latex', preamble=r'\usepackage{amsmath}\usepackage{amssymb}\usepackage{bm}')
ax = fig.add_subplot(1, 2, 1)
plt.plot(x_eval, myfun(x_eval), '-', color='royalblue', linewidth=1.0)
plt.plot(x_eval, predictions, '-', label='output', color='darkorange', linewidth=2.0)
plt.plot(x_train, myfun(x_train), '.', color='royalblue',markersize=14)
plt.xlim((min(x_train),max(x_train)))
plt.ylim((-0.5,8))
plt.grid(which='both');
plt.rcParams.update({'font.size': 14})
plt.xlabel('$x$');
plt.ylabel('$y$')
plt.title('%d ReLU-neurons in hidden layer with %d iterations of Adam' % (m,epochs))
plt.legend(['Function $f(x)$', r'MLP output $\hat{f}(x,\bm{\theta})$', 'Training set'])
ax = fig.add_subplot(1, 2, 2)
for k in range(0,len(theta_adam),2):
plt.plot(x_eval, [theta_adam[k]*sigma(x + theta_adam[k+1]) for x in x_eval], '--', label='Relu %d' % (k//2), linewidth=2.0)
plt.grid(which='both');
plt.xlim((min(x_train),max(x_train)))
plt.xlabel('$x$');
plt.ylabel('$y$')
plt.title('Weighted output of the %d neurons' % m)
#plt.savefig('MLP_ReLU_m%d_fun%d.pdf' % (m,function_select),bbox_inches='tight')
plt.show()
from matplotlib import animation, rc
from IPython.display import HTML
from matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.
fig, ax = plt.subplots(1, figsize=(8,6))
# Animation function. This is called sequentially.
def animate(i):
ax.clear()
ax.plot(x_eval, myfun(x_eval), '-', color='royalblue', linewidth=1.0)
ax.plot(x_train, myfun(x_train), '.', color='royalblue',markersize=8)
function_agg = np.zeros(len(x_eval))
for k in range(0,i):
part_relu = np.array([theta_adam[2*k]*sigma(x[0] + theta_adam[2*k+1]) for x in x_eval])
ax.plot(x_eval, part_relu, '--', color='gray', linewidth=0.5)
function_agg += part_relu
ax.plot(x_eval, function_agg, '-', color='darkorange', linewidth=3.0)
ax.grid(which='both')
ax.set_title("%d components" % i, fontsize=16)
ax.set_xlim((min(x_eval),max(x_eval)))
ax.set_ylim((-2,8))
ax.set_xlim((-4,4))
ax.set_xlabel(r'$x$')
ax.set_ylabel(r'$y=f(x)$')
return fig,
# Call the animator.
anim = animation.FuncAnimation(fig, animate, frames=1+len(theta_adam)//2, interval=2000, blit=True, repeat=False)
# If you want to save the animation, use the following line.
#anim.save('basic_animation_test_fun%d.gif' % function_select, writer=PillowWriter(fps=.4))
HTML(anim.to_html5_video())
Explanation: Carry out the optimization using 10000 iterations with Adam and specify the number of components $m$.
End of explanation |
8,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC
Step1: Adversarial Learning
Step2: Main Objective — Building an Apparel Classifier & Performing Adversarial Learning
We will keep things simple here with regard to the key objective. We will build a simple apparel classifier by training models on the very famous Fashion MNIST dataset based on Zalando's article images — consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. The task is to classify these images into one of the 10 apparel categories on which we will be training our models.
The second main objective here is to perturb and add some intentional noise to these apparel images to try and fool our classification model
The third main objective is to build an adversarial regularized model on top of our base model by training it on perturbed images to try and perform better on adversarial attacks
Here's an example of how the data looks (each class takes three rows)
Step3: Fine-tuning a pre-trained VGG-19 CNN Model - Base Model
Here, we will use a VGG-19 model which was pre-trained on the ImageNet dataset by fine-tuning it on the Fashion-MNIST dataset.
Model Architecture Details
<font size=2>Source
Step4: Resizing Image Data for Modeling
The minimum image size expected by the VGG model is 32x32 so we need to resize our images
Step5: View Sample Data
Step6: Build CNN Model Architecture
We will now build our CNN model architecture customizing the VGG-19 model.
Build Cut-VGG19 Model
Step7: Set layers to trainable to enable fine-tuning
Step8: Build CNN model on top of VGG19
Step9: Train CNN Model
Step10: Plot Learning Curves
Step11: Evaluate Model Performance on Organic Test Data
Here we check the performance of our pre-trained CNN model on the organic test data (without introducing any perturbations)
Step12: Adversarial Attacks with Fast Gradient Sign Method (FGSM)
What is an adversarial example?
Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such attacks, however, here the focus is on the fast gradient sign method attack, which is a white box attack whose goal is to ensure misclassification. A white box attack is where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image shown below is taken from the aforementioned paper.
<font size=2>Source
Step13: Get Loss Function for our problem
We use Sparse Categorical Crossentropy here as we focus on a multi-class classification problem
Step14: Adversarial Attack Examples
Here we look at a few examples of applying the FGSM adversarial attack on sample apparel images and how it affects our model predictions. We create a simple wrapper over our perform_adversarial_attack_fgsm function to try it out on sample images.
Step15: Adversarial Learning with Neural Structured Learning
We will now leverage Neural Structured Learning (NSL) to train an adversarial-regularized VGG-19 model.
Install NSL Dependency
Step16: Adversarial Learning Configs
adv_multiplier
Step17: Feel free to play around with the hyperparameters and observe model performance
Fine-tuning VGG-19 CNN Model with Adversarial Learning - Adversarial Model
Create Base Model Architecture
Step18: Setup Adversarial Model with Adversarial Regularization on Base Model
Step19: Format Training / Validation data into TF Datasets
Step20: Train Model
Step21: Visualize Learning Curves
Step22: VGG-19 Adversarial Model Performance on Organic Test Dataset
Here we check the performance of our adversarially-trained CNN model on the organic test data (without introducing any perturbations)
Step23: Almost the same performance as our non-adversarially-trained CNN model!
Generate Adversarial Attacks (FGSM) on Test Data to create Perturbed Test Dataset
Here we create a helper function to help us create a perturbed dataset using a specific adversarial epsilon multiplier.
Step24: Generate a Perturbed Test Dataset
We generate a perturbed version of the test dataset using an epsilon multiplier of 0.05 to test the performance of our base VGG model and adversarially-trained VGG model shortly.
Step25: VGG-19 Base Model performance on Perturbed Test Dataset
Let's look at the performance of our base VGG-19 model on the perturbed dataset.
Step26: We can see that the performance of the base VGG-19 (non adversarial-trained) model reduces by almost 50% on the perturbed test dataset, bringing a powerful ImageNet winning model to its knees!
VGG-19 Adversarial Model performance on Perturbed Test Dataset
Evaluating our adversarially-trained CNN model on the perturbed test dataset, we see an approx. 38% jump in performance!
Step27: Compare Model Performances on Sample Perturbed Test Examples | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC
End of explanation
# To prevent unnecessary warnings (e.g. FutureWarnings in TensorFlow)
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# TensorFlow and tf.keras
import tensorflow as tf
# Helper libraries
import numpy as np
import matplotlib.pyplot as plt
import os
import subprocess
import json
import requests
from tqdm import tqdm
print(tf.__version__)
Explanation: Adversarial Learning: Building Robust Image Classifiers
<br>
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
In this tutorial, we will explore the use of adversarial learning
(Goodfellow et al., 2014) for image
classification using Neural Structured Learning (NSL).
Adversarial attacks intentionally introduce some noise in the form of perturbations to input images to fool the deep learning model. For example, in a classification system, by adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the loss function with respect to the input, we can change the model's classification of the image.
CNN Classifier
The most popular deep learning models leveraged for computer vision problems are convolutional neural networks (CNNs)!
<font size=2>Created by: Dipanjan Sarkar</font>
We will look at how we can build, train and evaluate a multi-class CNN classifier in this notebook and also perform adversarial learning.
Transfer Learning
The idea is to leverage a pre-trained model instead of building a CNN from scratch in our image classification problem
<font size=2>Source: CNN Essentials</font>
Tutorial Outline
In this tutorial, we illustrate the following procedure of applying adversarial learning to obtain robust models using the Neural Structured Learning framework on a CNN model:
Create a neural network as a base model. In this tutorial, the base model is
created with the tf.keras sequential API by wrapping a pre-trained VGG19 model which we use for fine-tuning using transfer learning
Train and evaluate the base model performance on organic FashionMNIST data
Perform perturbations using the fast gradient sign method (FSGM) technique and look at model weaknesses
Wrap the base model with the nsl.keras.AdversarialRegularization wrapper class,
which is provided by the NSL framework, to create a new tf.keras.Model
instance. This new model will include the adversarial loss as a
regularization term in its training objective.
Convert examples in the training data to a tf.data.Dataset to train.
Train and evaluate the adversarial-regularized model
Generate perturbed dataset from the test data using FGSM and evaluate base model performance
Evaluate adversarial model performance on organic and perturbed test datasets
Load Dependencies
This leverages the tf.keras API style and hence it is recommended you try this out on TensorFlow 2.x
End of explanation
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print('\nTrain_images.shape: {}, of {}'.format(train_images.shape, train_images.dtype))
print('Test_images.shape: {}, of {}'.format(test_images.shape, test_images.dtype))
Explanation: Main Objective — Building an Apparel Classifier & Performing Adversarial Learning
We will keep things simple here with regard to the key objective. We will build a simple apparel classifier by training models on the very famous Fashion MNIST dataset based on Zalando's article images — consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. The task is to classify these images into one of the 10 apparel categories on which we will be training our models.
The second main objective here is to perturb and add some intentional noise to these apparel images to try and fool our classification model
The third main objective is to build an adversarial regularized model on top of our base model by training it on perturbed images to try and perform better on adversarial attacks
Here's an example of how the data looks (each class takes three rows):
<table>
<tr><td>
<img src="https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png"
alt="Fashion MNIST sprite" width="600">
</td></tr>
<tr><td align="center">
<a href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>
</td></tr>
</table>
Fashion MNIST is intended as a drop-in replacement for the classic MNIST dataset—often used as the "Hello, World" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow.
Note: Although these are really images, they are loaded as NumPy arrays and not binary image objects.
We will build the following two deep learning CNN (Convolutional Neural Network) classifiers in this notebook.
- Fine-tuned pre-trained VGG-19 CNN (Base Model)
- Adversarial Regularization Trained VGG-19 CNN Model (Adversarial Model)
The idea is to look at how to use transfer learning where you fine-tune a pre-trained model to adapt it to classify images based on your dataset and then build a robust classifier which can handle adversarial attacks using adversarial learning.
Load Dataset
End of explanation
train_images_3ch = np.stack([train_images]*3, axis=-1)
test_images_3ch = np.stack([test_images]*3, axis=-1)
print('\nTrain_images.shape: {}, of {}'.format(train_images_3ch.shape, train_images_3ch.dtype))
print('Test_images.shape: {}, of {}'.format(test_images_3ch.shape, test_images_3ch.dtype))
Explanation: Fine-tuning a pre-trained VGG-19 CNN Model - Base Model
Here, we will use a VGG-19 model which was pre-trained on the ImageNet dataset by fine-tuning it on the Fashion-MNIST dataset.
Model Architecture Details
<font size=2>Source: CNN Essentials</font>
Reshaping Image Data for Modeling
We do need to reshape our data before we train our model. Here we will convert the images to 3-channel images (image pixel tensors) as the VGG model was originally trained on RGB images
End of explanation
def resize_image_array(img, img_size_dims):
img = tf.image.resize(
img, img_size_dims, method=tf.image.ResizeMethod.BICUBIC)
img = np.array(img, dtype=np.float32)
return img
%%time
IMG_DIMS = (32, 32)
train_images_3ch = np.array([resize_image_array(img, img_size_dims=IMG_DIMS) for img in train_images_3ch])
test_images_3ch = np.array([resize_image_array(img, img_size_dims=IMG_DIMS) for img in test_images_3ch])
print('\nTrain_images.shape: {}, of {}'.format(train_images_3ch.shape, train_images_3ch.dtype))
print('Test_images.shape: {}, of {}'.format(test_images_3ch.shape, test_images_3ch.dtype))
Explanation: Resizing Image Data for Modeling
The minimum image size expected by the VGG model is 32x32 so we need to resize our images
End of explanation
fig, ax = plt.subplots(2, 5, figsize=(12, 6))
c = 0
for i in range(10):
idx = i // 5
idy = i % 5
ax[idx, idy].imshow(train_images_3ch[i]/255.)
ax[idx, idy].set_title(class_names[train_labels[i]])
Explanation: View Sample Data
End of explanation
# define input shape
INPUT_SHAPE = (32, 32, 3)
# get the VGG19 model
vgg_layers = tf.keras.applications.vgg19.VGG19(weights='imagenet', include_top=False,
input_shape=INPUT_SHAPE)
vgg_layers.summary()
Explanation: Build CNN Model Architecture
We will now build our CNN model architecture customizing the VGG-19 model.
Build Cut-VGG19 Model
End of explanation
# Fine-tune all the layers
for layer in vgg_layers.layers:
layer.trainable = True
# Check the trainable status of the individual layers
for layer in vgg_layers.layers:
print(layer, layer.trainable)
Explanation: Set layers to trainable to enable fine-tuning
End of explanation
# define sequential model
model = tf.keras.models.Sequential()
# Add the vgg convolutional base model
model.add(vgg_layers)
# add flatten layer
model.add(tf.keras.layers.Flatten())
# add dense layers with some dropout
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=0.3))
model.add(tf.keras.layers.Dense(256, activation='relu'))
model.add(tf.keras.layers.Dropout(rate=0.3))
# add output layer
model.add(tf.keras.layers.Dense(10))
# compile model
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# view model layers
model.summary()
Explanation: Build CNN model on top of VGG19
End of explanation
EPOCHS = 100
train_images_3ch_scaled = train_images_3ch / 255.
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=2,
restore_best_weights=True,
verbose=1)
history = model.fit(train_images_3ch_scaled, train_labels,
batch_size=32,
callbacks=[es_callback],
validation_split=0.1, epochs=EPOCHS,
verbose=1)
Explanation: Train CNN Model
End of explanation
import pandas as pd
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
history_df = pd.DataFrame(history.history)
history_df[['loss', 'val_loss']].plot(kind='line',
ax=ax[0])
history_df[['accuracy', 'val_accuracy']].plot(kind='line',
ax=ax[1]);
Explanation: Plot Learning Curves
End of explanation
test_images_3ch_scaled = test_images_3ch / 255.
predictions = model.predict(test_images_3ch_scaled)
predictions[:5]
prediction_labels = np.argmax(predictions, axis=1)
prediction_labels[:5]
from sklearn.metrics import confusion_matrix, classification_report
print(classification_report(test_labels, prediction_labels,
target_names=class_names))
pd.DataFrame(confusion_matrix(test_labels, prediction_labels),
index=class_names, columns=class_names)
Explanation: Evaluate Model Performance on Organic Test Data
Here we check the performance of our pre-trained CNN model on the organic test data (without introducing any perturbations)
End of explanation
def get_model_preds(input_image, class_names_map, model):
preds = model.predict(input_image)
# Convert logits to probabilities by taking softmax.
probs = np.exp(preds) / np.sum(np.exp(preds))
top_idx = np.argsort(-probs)[0][0]
top_prob = -np.sort(-probs)[0][0]
top_class = np.array(class_names_map)[top_idx]
return top_class, top_prob
def generate_adversarial_pattern(input_image, image_label_idx, model, loss_func):
with tf.GradientTape() as tape:
tape.watch(input_image)
prediction = model(input_image)
loss = loss_func(image_label_idx, prediction)
# Get the gradients of the loss w.r.t to the input image.
gradient = tape.gradient(loss, input_image)
# Get the sign of the gradients to create the perturbation
signed_grad = tf.sign(gradient)
return signed_grad
def perform_adversarial_attack_fgsm(input_image, image_label_idx, cnn_model, class_names_map, loss_func, eps=0.01):
# basic image shaping
input_image = np.array([input_image])
tf_img = tf.convert_to_tensor(input_image)
# predict class before adversarial attack
ba_pred_class, ba_pred_prob = get_model_preds(tf_img, class_names_map, cnn_model)
# generate adversarial image
adv_pattern = generate_adversarial_pattern(tf_img, image_label_idx, model, loss_func)
clip_adv_pattern = tf.clip_by_value(adv_pattern, clip_value_min=0., clip_value_max=1.)
perturbed_img = tf_img + (eps * adv_pattern)
perturbed_img = tf.clip_by_value(perturbed_img, clip_value_min=0., clip_value_max=1.)
# predict class after adversarial attack
aa_pred_class, aa_pred_prob = get_model_preds(perturbed_img, class_names_map, cnn_model)
# visualize results
fig, ax = plt.subplots(1, 3, figsize=(15, 4))
ax[0].imshow(tf_img[0].numpy())
ax[0].set_title('Before Adversarial Attack\nTrue:{} Pred:{} Prob:{:.3f}'.format(class_names_map[image_label_idx],
ba_pred_class,
round(ba_pred_prob, 3)))
ax[1].imshow(clip_adv_pattern[0].numpy())
ax[1].set_title('Adversarial Pattern - EPS:{}'.format(eps))
ax[2].imshow(perturbed_img[0].numpy())
ax[2].set_title('After Adversarial Attack\nTrue:{} Pred:{} Prob:{:.3f}'.format(class_names_map[image_label_idx],
aa_pred_class,
aa_pred_prob))
Explanation: Adversarial Attacks with Fast Gradient Sign Method (FGSM)
What is an adversarial example?
Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such attacks, however, here the focus is on the fast gradient sign method attack, which is a white box attack whose goal is to ensure misclassification. A white box attack is where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image shown below is taken from the aforementioned paper.
<font size=2>Source: Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014</font>
Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.
Fast gradient sign method
The fast gradient sign method works by using the gradients of the neural network to create an adversarial example. For an input image, the method uses the gradients of the loss with respect to the input image to create a new image that maximises the loss. This new image is called the adversarial image. This can be summarised using the following expression:
$$adv_x = x + \epsilon*\text{sign}(\nabla_xJ(\theta, x, y))$$
where
adv_x : Adversarial image.
x : Original input image.
y : Original input label.
$\epsilon$ : Multiplier to ensure the perturbations are small.
$\theta$ : Model parameters.
$J$ : Loss.
The gradients are taken with respect to the input image because the objective is to create an image that maximizes the loss. A method to accomplish this is to find how much each pixel in the image contributes to the loss value, and add a perturbation accordingly. This works pretty fast because it is easy to find how much each input pixel contributes to the loss by using the chain rule and finding the required gradients. Since our goal here is to attack a model that has already been trained, the gradient is not taken with respect to the trainable variables, i.e., the model parameters, which are now frozen.
So let's try and fool our pretrained VGG19 model.
Utility Functions for FGSM
get_model_preds(...): Helps in getting the top predicted class label and probability of an input image based on a specific trained CNN model
generate_adversarial_pattern(...): Helps in getting the gradients and the sign of the gradients w.r.t the input image and the trained CNN model
perform_adversarial_attack_fgsm(...): Create perturbations which will be used to distort the original image resulting in an adversarial image by adding epsilon to the gradient signs (can be added to gradients also) and then showcase model performance on the same
End of explanation
scc = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
Explanation: Get Loss Function for our problem
We use Sparse Categorical Crossentropy here as we focus on a multi-class classification problem
End of explanation
def show_adv_attack_example(image_idx, image_dataset,
image_labels, cnn_model,
class_names, loss_fn, eps):
sample_apparel_img = image_dataset[image_idx]
sample_apparel_labelidx = image_labels[image_idx]
perform_adversarial_attack_fgsm(input_image=sample_apparel_img,
image_label_idx=sample_apparel_labelidx,
cnn_model=cnn_model,
class_names_map=class_names,
loss_func=loss_fn, eps=eps)
show_adv_attack_example(6, test_images_3ch_scaled,
test_labels, model,
class_names, scc, 0.05)
show_adv_attack_example(60, test_images_3ch_scaled,
test_labels, model,
class_names, scc, 0.05)
show_adv_attack_example(500, test_images_3ch_scaled,
test_labels, model,
class_names, scc, 0.05)
show_adv_attack_example(560, test_images_3ch_scaled,
test_labels, model,
class_names, scc, 0.05)
Explanation: Adversarial Attack Examples
Here we look at a few examples of applying the FGSM adversarial attack on sample apparel images and how it affects our model predictions. We create a simple wrapper over our perform_adversarial_attack_fgsm function to try it out on sample images.
End of explanation
!pip install neural-structured-learning
import neural_structured_learning as nsl
Explanation: Adversarial Learning with Neural Structured Learning
We will now leverage Neural Structured Learning (NSL) to train an adversarial-regularized VGG-19 model.
Install NSL Dependency
End of explanation
adv_multiplier = 0.45
adv_step_size = 0.95
adv_grad_norm = 'l2'
adversarial_config = nsl.configs.make_adv_reg_config(
multiplier=adv_multiplier,
adv_step_size=adv_step_size,
adv_grad_norm=adv_grad_norm
)
adversarial_config
Explanation: Adversarial Learning Configs
adv_multiplier: The weight of adversarial loss in the training
objective, relative to the labeled loss.
adv_step_size: The magnitude of adversarial perturbation.
adv_grad_norm: The norm to measure the magnitude of adversarial
perturbation.
Adversarial Neighbors are created leveraging the above config settings.
adv_neighbor = input_features + adv_step_size * gradient where adv_step_size is the step size (analogous to learning rate) for searching/calculating adversarial neighbors.
End of explanation
vgg_layers = tf.keras.applications.vgg19.VGG19(weights='imagenet', include_top=False,
input_shape=INPUT_SHAPE)
# Fine-tune all the layers
for layer in vgg_layers.layers:
layer.trainable = True
# Check the trainable status of the individual layers
for layer in vgg_layers.layers:
print(layer, layer.trainable)
# define sequential model
base_model = tf.keras.models.Sequential()
# Add the vgg convolutional base model
base_model.add(vgg_layers)
# add flatten layer
base_model.add(tf.keras.layers.Flatten())
# add dense layers with some dropout
base_model.add(tf.keras.layers.Dense(256, activation='relu'))
base_model.add(tf.keras.layers.Dropout(rate=0.3))
base_model.add(tf.keras.layers.Dense(256, activation='relu'))
base_model.add(tf.keras.layers.Dropout(rate=0.3))
# add output layer
base_model.add(tf.keras.layers.Dense(10))
Explanation: Feel free to play around with the hyperparameters and observe model performance
Fine-tuning VGG-19 CNN Model with Adversarial Learning - Adversarial Model
Create Base Model Architecture
End of explanation
adv_model = nsl.keras.AdversarialRegularization(
base_model,
label_keys=['label'],
adv_config=adversarial_config
)
adv_model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Setup Adversarial Model with Adversarial Regularization on Base Model
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(train_images_3ch_scaled,
train_labels,
test_size=0.1,
random_state=42)
batch_size = 256
train_data = tf.data.Dataset.from_tensor_slices(
{'input': X_train,
'label': tf.convert_to_tensor(y_train, dtype='float32')}).batch(batch_size)
val_data = tf.data.Dataset.from_tensor_slices(
{'input': X_val,
'label': tf.convert_to_tensor(y_val, dtype='float32')}).batch(batch_size)
val_steps = X_val.shape[0] / batch_size
Explanation: Format Training / Validation data into TF Datasets
End of explanation
EPOCHS = 100
es_callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=2,
restore_best_weights=False,
verbose=1)
history = adv_model.fit(train_data, validation_data=val_data,
validation_steps=val_steps,
batch_size=batch_size,
callbacks=[es_callback],
epochs=EPOCHS,
verbose=1)
Explanation: Train Model
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
history_df = pd.DataFrame(history.history)
history_df[['loss', 'val_loss']].plot(kind='line',
ax=ax[0])
history_df[['sparse_categorical_accuracy',
'val_sparse_categorical_accuracy']].plot(kind='line',
ax=ax[1]);
Explanation: Visualize Learning Curves
End of explanation
predictions = adv_model.base_model.predict(test_images_3ch_scaled)
prediction_labels = np.argmax(predictions, axis=1)
print(classification_report(test_labels, prediction_labels,
target_names=class_names))
pd.DataFrame(confusion_matrix(test_labels, prediction_labels),
index=class_names, columns=class_names)
Explanation: VGG-19 Adversarial Model Performance on Organic Test Dataset
Here we check the performance of our adversarially-trained CNN model on the organic test data (without introducing any perturbations)
End of explanation
def generate_perturbed_images(input_images, image_label_idxs, model, loss_func, eps=0.01):
perturbed_images = []
# don't use list on large data - used just to view fancy progress-bar
for image, label in tqdm(list(zip(input_images, image_label_idxs))):
image = tf.convert_to_tensor(np.array([image]))
adv_pattern = generate_adversarial_pattern(image, label, model, loss_func)
perturbed_img = image + (eps * adv_pattern)
perturbed_img = tf.clip_by_value(perturbed_img, clip_value_min=0., clip_value_max=1.)[0]
perturbed_images.append(perturbed_img)
return tf.convert_to_tensor(perturbed_images)
Explanation: Almost the same performance as our non-adversarially-trained CNN model!
Generate Adversarial Attacks (FGSM) on Test Data to create Perturbed Test Dataset
Here we create a helper function to help us create a perturbed dataset using a specific adversarial epsilon multiplier.
End of explanation
perturbed_test_imgs = generate_perturbed_images(input_images=test_images_3ch_scaled,
image_label_idxs=test_labels, model=model,
loss_func=scc, eps=0.05)
Explanation: Generate a Perturbed Test Dataset
We generate a perturbed version of the test dataset using an epsilon multiplier of 0.05 to test the performance of our base VGG model and adversarially-trained VGG model shortly.
End of explanation
predictions = model.predict(perturbed_test_imgs)
prediction_labels = np.argmax(predictions, axis=1)
print(classification_report(test_labels, prediction_labels,
target_names=class_names))
pd.DataFrame(confusion_matrix(test_labels, prediction_labels),
index=class_names, columns=class_names)
Explanation: VGG-19 Base Model performance on Perturbed Test Dataset
Let's look at the performance of our base VGG-19 model on the perturbed dataset.
End of explanation
predictions = adv_model.base_model.predict(perturbed_test_imgs)
prediction_labels = np.argmax(predictions, axis=1)
print(classification_report(test_labels, prediction_labels,
target_names=class_names))
pd.DataFrame(confusion_matrix(test_labels, prediction_labels),
index=class_names, columns=class_names)
Explanation: We can see that the performance of the base VGG-19 (non adversarial-trained) model reduces by almost 50% on the perturbed test dataset, bringing a powerful ImageNet winning model to its knees!
VGG-19 Adversarial Model performance on Perturbed Test Dataset
Evaluating our adversarially-trained CNN model on the perturbed test dataset, we see an approx. 38% jump in performance!
End of explanation
f, ax = plt.subplots(2, 5, figsize=(30, 15))
for idx, i in enumerate([6, 7, 8 , 9, 10, 11, 95, 99, 29, 33]):
idx_x = idx // 5
idx_y = idx % 5
sample_apparel_idx = i
sample_apparel_img = tf.convert_to_tensor([perturbed_test_imgs[sample_apparel_idx]])
sample_apparel_labelidx = test_labels[sample_apparel_idx]
bm_pred = get_model_preds(input_image=sample_apparel_img,
class_names_map=class_names,
model=model)[0]
am_pred = get_model_preds(input_image=sample_apparel_img,
class_names_map=class_names,
model=adv_model.base_model)[0]
ax[idx_x, idx_y].imshow(sample_apparel_img[0])
ax[idx_x, idx_y].set_title('True Label:{}\nBase VGG Model Pred:{}\nAdversarial Reg. Model Pred:{}'.format(class_names[sample_apparel_labelidx],
bm_pred,
am_pred))
Explanation: Compare Model Performances on Sample Perturbed Test Examples
End of explanation |
8,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diffusion Boundary
The simulation script described in this chapter is available at STEPS_Example repository.
In some systems it may be a convenient simulation feature to be able to localize certain chemical species in one particular region of a volume without diffusion to neighboring regions even if they are not separated by a physical boundary. For example, in some biological systems certain proteins may exist only in local regions and though the structural features are simplified in a model the proteins are assumed to diffuse in a local region to meet and react with each other. So it is sometimes important to restrict the diffusional space of some proteins and not the others from a biologically feasible perspective. Similarly, it may be convenient to separate a large simulation volume into a number of compartments that are not physically separated by a membrane and so are connected to other compartments by chemical diffusion. Such an approach allows for different biochemical behavior in different regions of the volume to be specified and may simplify simulation initialization and data recording considerably. In this brief chapter we'll introduce an object termed the “Diffusion Boundary“ (steps.geom.DiffBoundary) which allows for this important simulation convenience
Step1: Model specification
We'll go straight into our function that will set up the biochemical model. Here we will create two chemical species objects, 'X' and 'Y', and describe their diffusion rules. Notice that this time we use separate volume systems for the two compartments we will create, as is our option. We intend volume system 'vsysA' to be added to a compartment 'A' and 'vsysB' to be added to compartment 'B', the reason for which will become clear as we progress
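As a rough illustration of what that model function might look like, here is a minimal sketch using the steps.model classes from earlier chapters. The names gen_model, diff_X, diff_Y_A, diff_Y_B and the diffusion constant DCST are assumptions for illustration only, not the chapter's exact code; note that 'X' is given a diffusion rule only in 'vsysA', consistent with the later discussion where 'X' is undefined in compartment 'B'.
import steps.model as smodel
DCST = 0.1e-9   # assumed diffusion constant (m^2/s), for illustration only
def gen_model():
    mdl = smodel.Model()
    X = smodel.Spec('X', mdl)
    Y = smodel.Spec('Y', mdl)
    # Separate volume systems, one intended for each compartment
    vsysA = smodel.Volsys('vsysA', mdl)
    vsysB = smodel.Volsys('vsysB', mdl)
    # Diffusion rules: 'X' only in 'vsysA'; 'Y' in both volume systems
    diff_X = smodel.Diff('diff_X', vsysA, X, DCST)
    diff_Y_A = smodel.Diff('diff_Y_A', vsysA, Y, DCST)
    diff_Y_B = smodel.Diff('diff_Y_B', vsysB, Y, DCST)
    return mdl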
Step2: Note that if our model were set up with the following code instead, diffusion would NOT be defined for species 'X' in compartment 'B' (if we add only volume system 'B' to compartment 'B' as we intend)
Step3: Now we'll create our two compartments. We'll split our cylinder down the middle of the z-axis creating two compartments of (approximately) equal volume. Since our cylinder is oriented on the z-axis we simply need to separate tetrahedrons by those that are below the centre point on the z-axis and those that are above.
Firstly we count the number of tetrahedrons using method steps.geom.Tetmesh.countTets
Step4: And create empty lists to group the tetrahedrons as those that belong to compartment 'A' and those to 'B'
Step5: And similarly create empty sets to group all the triangles in compartments 'A' and 'B'. All tetrahedrons are comprised of 4 triangles, and we store all triangles belonging to all tetrahedrons in the compartment (in a set so as not to store more than once). The reason for doing so will become apparent soon
Step6: Next we find the bounds of the mesh, and the mid point (on the z-axis)- the point at which we want our Diffusion Boundary to appear
Step7: Now we'll run a for loop over all tetrahedrons to sort tetrahedrons and triangles into the compartments. The criterion is that tetrahedrons with their barycenter less than the mid point on the z-axis will belong to compartment A and those with their barycenter greater than the mid point will belong to compartment B
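Pulling the last few paragraphs together, a sketch of the counting, grouping and sorting could look like the following. Here mesh is the steps.geom.Tetmesh object; getTetBarycenter and getTetTriNeighb follow the steps.geom.Tetmesh API used in earlier chapters, but treat the exact calls as assumptions if your STEPS version differs.
ntets = mesh.countTets()
tets_compA = []
tets_compB = []
tris_compA = set()
tris_compB = set()
z_min = mesh.getBoundMin()[2]
z_max = mesh.getBoundMax()[2]
z_mid = z_min + (z_max - z_min) / 2.0
for t in range(ntets):
    # z coordinate of this tetrahedron's barycenter
    barycz = mesh.getTetBarycenter(t)[2]
    # the four triangle indices that make up this tetrahedron
    tris = mesh.getTetTriNeighb(t)
    if barycz < z_mid:
        tets_compA.append(t)
        tris_compA.update(tris)
    else:
        tets_compB.append(t)
        tris_compB.update(tris)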
Step8: With our tetrahedrons sorted in this way we can create our mesh compartments. As we have seen in the previous chapter, a steps.geom.TmComp requires the following arguments to its constructor, in order
Step9: And add volume system 'vsysA' to compartment 'A' and volume system 'vsysB' to compartment 'B'
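A sketch of these two steps, assuming steps.geom has been imported as sgeom; the constructor takes a string identifier, the parent Tetmesh container and the list of tetrahedron indices, and the identifiers 'compA' and 'compB' used here are illustrative choices rather than the chapter's exact names.
compA = sgeom.TmComp('compA', mesh, tets_compA)
compB = sgeom.TmComp('compB', mesh, tets_compB)
compA.addVolsys('vsysA')
compB.addVolsys('vsysB')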
Step10: Now comes our diffusion boundary as part of our geometry description and therefore the Diffusion Boundary class is to be found in module steps.geom which we have imported with name sgeom. Recall that, to create a diffusion boundary, we must have a sequence of all the triangle indices that comprise the diffusion boundary and all of the triangles must connect the same two compartments. The reason that the user has to explicitly declare which triangles to use is that the diffusion boundary between compartments may not necessarily form the whole surface between the two compartments and may comprise a smaller area. However here we will use the entire surface between the two compartments.
The way that we find the triangle indices is very straightforward- they are simply the common triangles to both compartments. We have the triangle indices of both compartments stored in Python sets, the common triangles are therefore the intersection
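For example, with the two sets built in the sorting loop above (tris_compA and tris_compB, assumed names), the shared triangles can be found with Python's built-in set intersection:
tris_DB = tris_compA.intersection(tris_compB)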
Step11: If this point is not very clear, consider the simple example where two tetrahedrons are connected at a surface (triangle). Let's say tetrahedron A is comprised of triangles (0,1,2,3) and tetrahedron B is comprised of triangles (0,4,5,6). That would mean that their common triangle (0) forms their connection. The common triangle could be found by finding the intersection of two sets of the triangles, that is the intersection of (0,1,2,3) and (0,4,5,6) is (0). That is what the above code does on a larger scale where the sets contain all triangles in the entire compartment and the intersection therefore gives the entire surface connection between the two compartments.
Now we have to convert the set to a list (or other sequence such as a tuple or NumPy array) as this is what the diffusion boundary constructor requires
Step12: Finally we can create the diffusion boundary between compartment 'A' and compartment 'B'. The object constructor looks similar to that for a mesh compartment or patch, but with some important differences. That is the constructor expects, in order
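A minimal sketch of these two remaining steps, converting the set to a list and constructing the boundary; the string identifier 'diffb' matches the one referred to later in this chapter, while the variable names are assumptions.
tris_DB = list(tris_DB)
diffb = sgeom.DiffBoundary('diffb', mesh, tris_DB)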
Step13: And that is basically all we need to do to create our diffusion boundary. As usual we should note the string identifier because that is what we will need to control the diffusion boundary during simulation. The technique of finding the common triangles between two compartments is very useful and may be applied or adapted when creating diffusion boundaries in most STEPS simulations.
We return the parent steps.geom.Tetmesh object, along with the lists of tetrahedrons by compartment at the end of our function body.
Our entire function code is
Step14: Simulation with Tetexact
So now we come to our example simulation run. As in the previous chapter we will create the 3 important objects required by the solver constructor, which are
Step15: Note that, as well as the steps.geom.Tetmesh container object, the gen_geom function also returns the indices of the tetrahedrons for both compartments, which we will store in variables tets_compA and tets_compB
Step16: As in previous chapters, create our random number generator and initialise with some seed value
Step17: And create our solver object, using steps.solver.Tetexact for a mesh-based diffusion simulation with the usual object references to the solver constructor, which to recall are (in order)
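To recall, those constructor arguments are the model, the geometry description and the random number generator. A sketch of creating the generator and the solver, following the pattern of earlier chapters (the 'mt19937' generator type, buffer size and seed value are illustrative assumptions; mdl and mesh are the model and mesh objects returned by the setup functions):
import steps.rng as srng
import steps.solver as ssolver
rng = srng.create('mt19937', 256)
rng.initialize(654321)   # arbitrary seed for this example
sim = ssolver.Tetexact(mdl, mesh, rng)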
Step18: Now to create the data structures for running our simulation and storing data. There are many ways to achieve our aims here, but we will follow a pattern set by previous chapters which is first to create a NumPy array for “time-points“ to run the simulation, and find how many “time-points“ we have
Step19: And now create our structures for storing data, again NumPy arrays, but this time of a size to record data from every tetrahedron in the mesh. We record how many tetrahedrons there are by using method countTets on our mesh object (steps.geom.Tetmesh.countTets). We also separate our results arrays, one to record from compartment 'A' and one for compartment 'B'
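A sketch of these data structures; the particular time range and step size are illustrative only, and resX and resY are assumed names for the two result arrays.
import numpy
tpnts = numpy.arange(0.0, 0.101, 0.001)   # e.g. 0 to 100 ms in 1 ms steps (assumed values)
ntpnts = tpnts.shape[0]
ntets = mesh.countTets()
resX = numpy.zeros((ntpnts, ntets))
resY = numpy.zeros((ntpnts, ntets))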
Step20: Next, let's assume we wish to inject our molecules at the two ends of the cylinder, that is, the points at which the z coordinate is at its minimum and maximum. From creating our mesh (or finding out through methods getBoundMax and getBoundMin on our steps.geom.Tetmesh object) we know that our boundaries are at z = -5 microns and +5 microns. To find the tetrahedrons at the centre points of the two boundaries (i.e. at x=0 and y=0) we use steps.geom.Tetmesh method findTetByPoint, which will return the index of the tetrahedron that encompasses any point given in 3D cartesian coordinates as a Python sequence. We give points slightly inside the boundary so as to be sure that our point is inside the mesh (the method will return -1 if not)
Step21: Let's set our initial conditions by injecting 1000 molecules of species 'X' into the lower Z boundary tetrahedron (which will be contained in compartment 'A') and 500 molecules of species 'Y' into the upper z boundary tetrahedron (which will be contained in compartment 'B')
Step22: Now for the main focus of this chapter, which is to allow diffusion between the compartments joined by a Diffusion Boundary. During our geometry construction we already created our steps.geom.DiffBoundary object (named rather unimaginatively 'diffb') which will be included in the simulation, with the default behaviour to block diffusion between compartment 'A' and 'B' completely for all molecules. We now wish to allow diffusion of species 'Y' through the boundary which we achieve with one simple solver method call. Importantly, we would be unable to activate diffusion through the boundary for species 'X'; this is because 'X' is undefined in compartment 'B' since it does not appear in any reaction or diffusion rules there.
To activate diffusion through the boundary we call the rather wordy steps.solver.Tetexact solver method setDiffBoundaryDiffusionActive (steps.solver.Tetexact.setDiffBoundaryDiffusionActive), with 3 arguments to the function; the string identifier of the diffusion boundary ('diffb'), the string identifier of the species ('Y') and a bool as to whether diffusion through the boundary is active or not (True)
Step23: And that is all we need to do to activate diffusion of species 'Y' through the diffusion boundary 'diffb' and therefore allow diffusion of 'Y' between compartments 'A' and 'B'. To inactivate diffusion (which is incidentally the default behaviour for all species) we would call the same function with boolean False.
So now a simple for loop to run our simulation. We have already constructed our NumPy arrays for this purpose
Step24: Plotting simulation output
Having run our simulation it now comes to visualizing and analyzing the output of the simulation. One way to do this is to plot the data, once again using the plotting capability from Matplotlib.
Let's assume we want a spatial plot- distance on the z-axis vs concentration- but don't want to plot every tetrahedron individually. In other words we want to split the cylinder into bins with equal width on the z-axis. Then we record counts from a tetrahedron and add it to the bin that the tetrahedron belongs to. We could, of course, have set up structures to record data from bins before and during our simulation, but instead we will use the data that we have recorded in all individual tetrahedrons (in the code above) to read and split into bins for plotting. And that is exactly what is achieved in the following function, which won't contain a detailed step-by-step explanation as it is not strictly STEPS code, but is included for the user to see how such tasks may be achieved. This function does use some structures defined outside of the function, such as tpnts, so would have to appear after the previous code in a Python script to work as it is
Step25: This function will plot the bin concentration of both species 'X' and 'Y' along the z-axis for any of our “time-points“, with the default number of bins being 100. We run our simulation up to 100ms, which was the 100th time-point so lets plot that with call | Python Code:
import steps.model as smodel
import steps.geom as sgeom
import steps.rng as srng
import steps.solver as solvmod
import steps.utilities.meshio as meshio
import numpy
import pylab
Explanation: Diffusion Boundary
The simulation script described in this chapter is available at STEPS_Example repository.
In some systems it may be a convenient simulation feature to be able to localize certain chemical species in one particular region of a volume without diffusion to neighboring regions even if they are not separated by a physical boundary. For example, in some biological systems certain proteins may exist only in local regions and though the structural features are simplified in a model the proteins are assumed to diffuse in a local region to meet and react with each other. So it is sometimes important to restrict the diffusional space of some proteins and not the others from a biologically feasible perspective. Similarly, it may be convenient to separate a large simulation volume into a number of compartments that are not physically separated by a membrane and so are connected to other compartments by chemical diffusion. Such an approach allows for different biochemical behavior in different regions of the volume to be specified and may simplify simulation initialization and data recording considerably. In this brief chapter we'll introduce an object termed the “Diffusion Boundary“ (steps.geom.DiffBoundary) which allows for this important simulation convenience: optional chemical diffusion between connected mesh compartments.
The Diffusion Boundary of course can only be added to a mesh-based (i.e. not a well-mixed) simulation and is described by a collection of triangles. These triangles must form some or all of the connection between two (and only two) compartments, and none of the triangles may already be described as part of a “Patch“ (steps.geom.TmPatch). It is not possible for two compartments to be connected in the same area by a Patch and a Diffusion Boundary since a Patch is intended to model a membrane and a Diffusion Boundary is just some internal area within a volume that may block diffusion, and it would be unrealistic to allow surface reactions and free diffusion to occur in the same area. Once a Diffusion Boundary is in place the modeler may specify which chemical species (if any) may freely diffuse through the boundary. Diffusion boundaries are currently supported in solvers steps.solver.Tetexact and steps.mpi.solver.TetOpSplit, but for this chapter we will only demonstrate usage in Tetexact. For approximate MPI simulations with TetOpSplit please see later chapters.
For the example we'll set up a simple system to introduce the steps.geom.DiffBoundary object and expand on our mesh manipulation in previous chapters through STEPS methods provided in the Python interface. The simple examples here may of course be expanded and built on for more complex mesh manipulations in detailed, realistic simulations, though greater complexity is beyond the scope of this chapter.
Modeling solution
Organisation of code
To run our simulation we'll, as usual, create a Python script, following a similar structure to previous chapters. Again, for clarity, we'll show Python code as if typed at the prompt and go through the code step by step looking at some statements in detail as we go.
To get started we import STEPS and outside packages as usual:
End of explanation
def gen_model():
# Create the model container object
mdl = smodel.Model()
# Create the chemical species
X = smodel.Spec('X', mdl)
Y = smodel.Spec('Y', mdl)
# Create separate volume systems for compartments A and B
vsysA = smodel.Volsys('vsysA', mdl)
vsysB = smodel.Volsys('vsysB', mdl)
# Describe diffusion of molecules in compartments A and B
diff_X_A = smodel.Diff('diff_X_A', vsysA, X, dcst = 0.1e-9)
diff_X_B = smodel.Diff('diff_X_B', vsysB, X, dcst = 0.1e-9)
diff_Y_A = smodel.Diff('diff_Y_A', vsysA, Y, dcst = 0.1e-9)
diff_Y_B = smodel.Diff('diff_Y_B', vsysB, Y, dcst = 0.1e-9)
# Return the container object
return mdl
Explanation: Model specification
We'll go straight into our function that will set up the biochemical model. Here we will create two chemical species objects, 'X' and 'Y', and describe their diffusion rules. Notice that this time we use separate volume systems for the two compartments we will create, as is our option. We intend volume system 'vsysA' to be added to a compartment 'A' and 'vsysB' to be added to compartment 'B', the reason for which will become clear as we progress:
End of explanation
mesh = meshio.loadMesh('meshes/cyl_len10_diam1')[0]
Explanation: Note that if our model were set up with the following code instead, diffusion would NOT be defined for species 'X' in compartment 'B' (if we add only volume system 'B' to compartment 'B' as we intend):
# Describe diffusion of molecules in compartments A and B
# NOTE: diffusion is not defined for species X in compartment B
diff_X = smodel.Diff('diff_X', vsysA, X, dcst = 0.1e-9)
diff_Y_A = smodel.Diff('diff_Y_A', vsysA, Y, dcst = 0.1e-9)
diff_Y_B = smodel.Diff('diff_Y_B', vsysB, Y, dcst = 0.1e-9)
This is an important point because if a species does not react or diffuse within a compartment (as is the case for 'X' in compartment 'B' here) it is undefined in the compartment by the solver- it does nothing in the compartment so memory and simulation time is not wasted by including the species in that compartment during simulation. For this reason if we were to later try to allow diffusion of 'X' across the diffusion boundary during our simulation in this example we would receive an error message because it may not diffuse into compartment 'B' since it is undefined there.
Geometry specification
Next we define our geometry function. Because some of the operations are new we'll look at the code in more detail.
First we import our mesh, a cylinder of axial length 10 microns (on the z-axis) which we have previously imported and saved in STEPS format (with method steps.utilities.meshio.saveMesh) in folder 'meshes' in the current directory.
Here, the object that is returned to us and stored by mesh will be a steps.geom.Tetmesh object, which is the zeroth element of the tuple returned by the function:
End of explanation
ntets = mesh.countTets()
Explanation: Now we'll create our two compartments. We'll split our cylinder down the middle of the z-axis creating two compartments of (approximately) equal volume. Since our cylinder is oriented on the z-axis we simply need to separate tetrahedrons by those that are below the centre point on the z-axis and those that are above.
Firstly we count the number of tetrahedrons using method steps.geom.Tetmesh.countTets:
End of explanation
tets_compA = []
tets_compB = []
Explanation: And create empty lists to group the tetrahedrons as those that belong to compartment 'A' and those to 'B':
End of explanation
tris_compA = set()
tris_compB = set()
Explanation: And similarly create empty sets to group all the triangles in compartments 'A' and 'B'. Every tetrahedron is composed of 4 triangles, and we store all triangles belonging to all tetrahedrons in the compartment (in a set so as not to store any triangle more than once). The reason for doing so will become apparent soon:
End of explanation
z_max = mesh.getBoundMax()[2]
z_min = mesh.getBoundMin()[2]
z_mid = z_min+(z_max-z_min)/2.0
Explanation: Next we find the bounds of the mesh, and the mid point (on the z-axis)- the point at which we want our Diffusion Boundary to appear:
End of explanation
for t in range(ntets):
# Fetch the z coordinate of the barycenter
barycz = mesh.getTetBarycenter(t)[2]
# Fetch the triangle indices of the tetrahedron, a tuple of length 4:
tris = mesh.getTetTriNeighb(t)
if barycz < z_mid:
tets_compA.append(t)
tris_compA.add(tris[0])
tris_compA.add(tris[1])
tris_compA.add(tris[2])
tris_compA.add(tris[3])
else:
tets_compB.append(t)
tris_compB.add(tris[0])
tris_compB.add(tris[1])
tris_compB.add(tris[2])
tris_compB.add(tris[3])
Explanation: Now we'll run a for loop over all tetrahedrons to sort tetrahedrons and triangles into the compartments. The criterion is that tetrahedrons with their barycenter less than the mid point on the z-axis will belong to compartment A and those with their barycenter greater than the mid point will belong to compartment B:
End of explanation
compA = sgeom.TmComp('compA', mesh, tets_compA)
compB = sgeom.TmComp('compB', mesh, tets_compB)
Explanation: With our tetrahedrons sorted in this way we can create our mesh compartments. As we have seen in the previous chapter, the steps.geom.TmComp constructor requires, in order: a unique string identifier, a reference to the parent steps.geom.Tetmesh container object, and all the indices of the tetrahedrons that comprise the compartment in a Python sequence such as a list or tuple
End of explanation
compA.addVolsys('vsysA')
compB.addVolsys('vsysB')
Explanation: And add volume system 'vsysA' to compartment 'A' and volume system 'vsysB' to compartment 'B':
End of explanation
tris_DB = tris_compA.intersection(tris_compB)
Explanation: Now comes our diffusion boundary as part of our geometry description and therefore the Diffusion Boundary class is to be found in module steps.geom which we have imported with name sgeom. Recall that, to create a diffusion boundary, we must have a sequence of all the triangle indices that comprise the diffusion boundary and all of the triangles must connect the same two compartments. The reason that the user has to explicitly declare which triangles to use is that the diffusion boundary between compartments may not necessarily form the whole surface between the two compartments and may comprise a smaller area. However here we will use the entire surface between the two compartments.
Finding the triangle indices is straightforward: they are simply the triangles common to both compartments. Since we have the triangle indices of both compartments stored in Python sets, the common triangles are simply their intersection:
End of explanation
tris_DB = list(tris_DB)
Explanation: If this point is not very clear, consider the simple example where two tetrahedrons are connected at a surface (triangle). Let's say tetrahedron A is composed of triangles (0,1,2,3) and tetrahedron B is composed of triangles (0,4,5,6). That would mean that their common triangle (0) forms their connection. The common triangle could be found by taking the intersection of the two sets of triangles, that is the intersection of (0,1,2,3) and (0,4,5,6) is (0). That is what the above code does on a larger scale where the sets contain all triangles in the entire compartment and the intersection therefore gives the entire surface connection between the two compartments.
Now we have to convert the set to a list (or other sequence such as a tuple or NumPy array) as this is what the diffusion boundary constructor requires:
End of explanation
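# As a quick illustration of the set-intersection idea described above, with
# made-up triangle indices rather than real mesh data:
tetA_tris = {0, 1, 2, 3}
tetB_tris = {0, 4, 5, 6}
print(tetA_tris.intersection(tetB_tris))   # {0} - the single shared triangle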
diffb = sgeom.DiffBoundary('diffb', mesh, tris_DB)
Explanation: Finally we can create the diffusion boundary between compartment 'A' and compartment 'B'. The object constructor looks similar to that for a mesh compartment or patch, but with some important differences. That is the constructor expects, in order: a unique string identifier, a reference to the parent steps.geom.Tetmesh container, and a sequence of all triangles that comprise the boundary. Note that no references to the compartments that the boundary connects are required- these are found internally and checked to be common amongst all triangles in the diffusion boundary:
End of explanation
def gen_geom():
mesh = meshio.loadMesh('meshes/cyl_len10_diam1')[0]
ntets = mesh.countTets()
tets_compA = []
tets_compB = []
tris_compA = set()
tris_compB = set()
z_max = mesh.getBoundMax()[2]
z_min = mesh.getBoundMin()[2]
z_mid = z_min+(z_max-z_min)/2.0
for t in range(ntets):
# Fetch the z coordinate of the barycenter
barycz = mesh.getTetBarycenter(t)[2]
# Fetch the triangle indices of the tetrahedron, which is a tuple of length 4
tris = mesh.getTetTriNeighb(t)
if (barycz < z_mid):
tets_compA.append(t)
tris_compA.add(tris[0])
tris_compA.add(tris[1])
tris_compA.add(tris[2])
tris_compA.add(tris[3])
else:
tets_compB.append(t)
tris_compB.add(tris[0])
tris_compB.add(tris[1])
tris_compB.add(tris[2])
tris_compB.add(tris[3])
compA = sgeom.TmComp('compA', mesh, tets_compA)
compB = sgeom.TmComp('compB', mesh, tets_compB)
compA.addVolsys('vsysA')
compB.addVolsys('vsysB')
tris_DB = tris_compA.intersection(tris_compB)
tris_DB = list(tris_DB)
diffb = sgeom.DiffBoundary('diffb', mesh, tris_DB)
return mesh, tets_compA, tets_compB
Explanation: And that is basically all we need to do to create our diffusion boundary. As usual we should note the string identifier because that is what we will need to control the diffusion boundary during simulation. The technique of finding the common triangles between two compartments is very useful and may be applied or adapted when creating diffusion boundaries in most STEPS simulations.
We return the parent steps.geom.Tetmesh object, along with the lists of tetrahedrons by compartment at the end of our function body.
Our entire function code is:
End of explanation
mdl = gen_model()
Explanation: Simulation with Tetexact
So now we come to our example simulation run. As in the previous chapter we will create the 3 important objects required by the solver constructor, which are: a steps.model.Model object (returned by our gen_model function), a steps.geom.Tetmesh object (for a mesh-based simulation; returned by our gen_geom function) and a steps.rng.RNG object that we will create.
We generate our steps.model.Model container object with a call to function gen_model and store it in variable mdl:
End of explanation
mesh, tets_compA, tets_compB = gen_geom()
Explanation: Note that, as well as the steps.geom.Tetmesh container object, the gen_geom function also returns the indices of the tetrahedrons for both compartments, which we will store in variables tets_compA and tets_compB:
End of explanation
rng = srng.create('mt19937', 256)
rng.initialize(654)
Explanation: As in previous chapters, create our random number generator and initialise with some seed value:
End of explanation
sim = solvmod.Tetexact(mdl, mesh, rng)
sim.reset()
Explanation: And create our solver object, using steps.solver.Tetexact for a mesh-based diffusion simulation with the usual object references to the solver constructor, which to recall are (in order): a steps.model.Model object, a steps.geom.Tetmesh object, and a steps.rng.RNG object:
End of explanation
tpnts = numpy.arange(0.0, 0.101, 0.001)
ntpnts = tpnts.shape[0]
Explanation: Now to create the data structures for running our simulation and storing data. There are many ways to achieve our aims here, but we will follow a pattern set by previous chapters which is first to create a NumPy array for “time-points“ to run the simulation, and find how many “time-points“ we have:
End of explanation
ntets = mesh.countTets()
resX = numpy.zeros((ntpnts, ntets))
resY = numpy.zeros((ntpnts, ntets))
Explanation: And now create our structures for storing data, again NumPy arrays, but this time of a size to record data from every tetrahedron in the mesh. We record how many tetrahedrons there are by using method countTets on our mesh object (steps.geom.Tetmesh.countTets). We also separate our results arrays, one to record from compartment 'A' and one for compartment 'B':
End of explanation
tetX = mesh.findTetByPoint([0, 0, -4.99e-6])
tetY = mesh.findTetByPoint([0, 0, 4.99e-6])
Explanation: Next, let's assume we wish to inject our molecules at the two ends of the cylinder, that is, the points at which the z coordinate is at its minimum and maximum. From creating our mesh (or finding out through methods getBoundMax and getBoundMin on our steps.geom.Tetmesh object) we know that our boundaries are at z = -5 microns and +5 microns. To find the tetrahedrons at the centre points of the two boundaries (i.e. at x=0 and y=0) we use steps.geom.Tetmesh method findTetByPoint, which will return the index of the tetrahedron that encompasses any point given in 3D cartesian coordinates as a Python sequence. We give points slightly inside the boundary so as to be sure that our point is inside the mesh (the method will return -1 if not):
End of explanation
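# Optional sanity check (not part of the original script): findTetByPoint returns -1
# for a point outside the mesh, so confirm that both injection tetrahedrons were found
assert tetX >= 0 and tetY >= 0, "Injection point lies outside the mesh"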
sim.setTetCount(tetX , 'X', 1000)
sim.setTetCount(tetY, 'Y', 500)
Explanation: Let's set our initial conditions by injecting 1000 molecules of species 'X' into the lower Z boundary tetrahedron (which will be contained in compartment 'A') and 500 molecules of species 'Y' into the upper z boundary tetrahedron (which will be contained in compartment 'B'):
End of explanation
sim.setDiffBoundaryDiffusionActive('diffb', 'Y', True)
Explanation: Now for the main focus of this chapter, which is to allow diffusion between the compartments joined by a Diffusion Boundary. During our geometry construction we already created our steps.geom.DiffBoundary object (named rather unimaginatively 'diffb') which will be included in the simulation, with the default behaviour to block diffusion between compartment 'A' and 'B' completely for all molecules. We now wish to allow diffusion of species 'Y' through the boundary which we achieve with one simple solver method call. Importantly, we would be unable to activate diffusion through the boundary for species 'X'; this is because 'X' is undefined in compartment 'B' since it does not appear in any reaction or diffusion rules there.
To activate diffusion through the boundary we call the rather wordy steps.solver.Tetexact solver method setDiffBoundaryDiffusionActive (steps.solver.Tetexact.setDiffBoundaryDiffusionActive), with 3 arguments to the function; the string identifier of the diffusion boundary ('diffb'), the string identifier of the species ('Y') and a bool as to whether diffusion through the boundary is active or not (True):
End of explanation
for i in range(ntpnts):
sim.run(tpnts[i])
for k in range(ntets):
resX[i, k] = sim.getTetCount(k, 'X')
resY[i, k] = sim.getTetCount(k, 'Y')
Explanation: And that is all we need to do to activate diffusion of species 'Y' through the diffusion boundary 'diffb' and therefore allow diffusion of 'Y' between compartments 'A' and 'B'. To inactivate diffusion (which is incidentally the default behaviour for all species) we would call the same function with boolean False.
So now a simple for loop to run our simulation. We have already constructed our NumPy arrays for this purpose: tpnts stores the times that we run our simulation and collect our data for (we chose 1ms increments up to 100ms) and ntpnts stores how many of these 'time-points' there are, which is 101 (including time=0). At every time-point we will collect our data, here recording the number of molecules of 'X' and 'Y' in every tetrahedron in the mesh.
End of explanation
from __future__ import print_function # for backward compatibility with Py2
def plot_binned(t_idx, bin_n = 100, solver='unknown'):
if (t_idx > tpnts.size):
print("Time index is out of range.")
return
# Create structure to record z-position of tetrahedron
z_tets = numpy.zeros(ntets)
zbound_min = mesh.getBoundMin()[2]
# Now find the distance of the centre of the tets to the Z lower face
for i in range(ntets):
baryc = mesh.getTetBarycenter(i)
z = baryc[2] - zbound_min
# Convert to microns and save
z_tets[i] = z*1.0e6
# Find the maximum and minimum z of all tetrahedrons
z_max = z_tets.max()
z_min = z_tets.min()
# Set up the bin structures, recording the individual bin volumes
z_seg = (z_max-z_min)/bin_n
bin_mins = numpy.zeros(bin_n+1)
z_tets_binned = numpy.zeros(bin_n)
bin_vols = numpy.zeros(bin_n)
# Now sort the counts into bins for species 'X'
z = z_min
for b in range(bin_n + 1):
bin_mins[b] = z
if (b!=bin_n): z_tets_binned[b] = z +z_seg/2.0
z+=z_seg
bin_counts = [None]*bin_n
for i in range(bin_n): bin_counts[i] = []
for i in range((resX[t_idx].size)):
i_z = z_tets[i]
for b in range(bin_n):
if(i_z>=bin_mins[b] and i_z<bin_mins[b+1]):
bin_counts[b].append(resX[t_idx][i])
bin_vols[b]+=sim.getTetVol(i)
break
# Convert to concentration in arbitrary units
bin_concs = numpy.zeros(bin_n)
for c in range(bin_n):
for d in range(bin_counts[c].__len__()):
bin_concs[c] += bin_counts[c][d]
bin_concs[c]/=(bin_vols[c]*1.0e18)
t = tpnts[t_idx]
# Plot the data
pylab.scatter(z_tets_binned, bin_concs, label = 'X', color = 'blue')
# Repeat the process for species 'Y'- separate from 'X' for clarity:
z = z_min
for b in range(bin_n + 1):
bin_mins[b] = z
if (b!=bin_n): z_tets_binned[b] = z +z_seg/2.0
z+=z_seg
bin_counts = [None]*bin_n
for i in range(bin_n): bin_counts[i] = []
for i in range((resY[t_idx].size)):
i_z = z_tets[i]
for b in range(bin_n):
if(i_z>=bin_mins[b] and i_z<bin_mins[b+1]):
bin_counts[b].append(resY[t_idx][i])
break
bin_concs = numpy.zeros(bin_n)
for c in range(bin_n):
for d in range(bin_counts[c].__len__()):
bin_concs[c] += bin_counts[c][d]
bin_concs[c]/=(bin_vols[c]*1.0e18)
pylab.scatter(z_tets_binned, bin_concs, label = 'Y', color = 'red')
pylab.xlabel('Z axis (microns)', fontsize=16)
pylab.ylabel('Bin concentration (N/m^3)', fontsize=16)
pylab.ylim(0)
pylab.xlim(0, 10)
pylab.legend(numpoints=1)
pylab.title('Simulation with '+ solver)
pylab.show()
Explanation: Plotting simulation output
Having run our simulation it now comes to visualizing and analyzing the output of the simulation. One way to do this is to plot the data, once again using the plotting capability from Matplotlib.
Let's assume we want a spatial plot- distance on the z-axis vs concentration- but don't want to plot every tetrahedron individually. In other words we want to split the cylinder into bins with equal width on the z-axis. Then we record counts from a tetrahedron and add it to the bin that the tetrahedron belongs to. We could, of course, have set up structures to record data from bins before and during our simulation, but instead we will use the data that we have recorded in all individual tetrahedrons (in the code above) to read and split into bins for plotting. And that is exactly what is achieved in the following function, which won't contain a detailed step-by-step explanation as it is not strictly STEPS code, but is included for the user to see how such tasks may be achieved. This function does use some structures defined outside of the function, such as tpnts, so would have to appear after the previous code in a Python script to work as it is:
End of explanation
pylab.figure(figsize=(10,7))
plot_binned(100, 50)
Explanation: This function will plot the bin concentration of both species 'X' and 'Y' along the z-axis for any of our "time-points", with the default number of bins being 100. We run our simulation up to 100ms, which was the 100th time-point, so let's plot that with the call shown above.
End of explanation |
8,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
Step1: NumPy
NumPy is the fundamental package for scientific computing with Python. It contains among other things
Step2: shape
np.zeros
Step3: np.ones
Step4: np.empty
Step5: np.arange | Python Code:
test = "Hello World"
print ("test: " + test)
Explanation: About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
End of explanation
#
Explanation: NumPy
NumPy is the fundamental package for scientific computing with Python. It contains among other things:
a powerful N-dimensional array object
sophisticated (broadcasting) functions
useful linear algebra, Fourier transform, and random number capabilities
Besides its obvious scientific uses, NumPy can also be used as an efficient multi-dimensional container of generic data. Arbitrary data-types can be defined. This allows NumPy to seamlessly and speedily integrate with a wide variety of databases.
Library documentation: <a>http://www.numpy.org/</a>
NumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes. The number of axes is rank.
For example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. In the example pictured below, the array has rank 2 (it is 2-dimensional). The first dimension (axis) has a length of 2, the second dimension has a length of 3.
NumPy’s array class is called ndarray. It is also known by the alias array. Note that numpy.array is not the same as the Standard Python Library class array.array, which only handles one-dimensional arrays and offers less functionality. The more important attributes of an ndarray object are:
np.array
list as argument to array
End of explanation
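# A minimal sketch of np.array with a list (of lists) as argument; the values are arbitrary examples
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)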
#The function zeros creates an array full of zeros
Explanation: shape
np.zeros
End of explanation
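# A short sketch (assumes numpy is imported as np); shape reports the size along each axis
print(np.array([[1, 2, 3], [4, 5, 6]]).shape)   # (2, 3)
print(np.zeros((3, 4)))                         # a 3x4 array filled with zeros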
# function ones creates an array full of ones
Explanation: np.ones
End of explanation
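# A one-line sketch (assumes numpy is imported as np)
print(np.ones((2, 3)))    # a 2x3 array filled with ones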
#function empty creates an array whose initial content is random and depends on the state of the memory
Explanation: np.empty
End of explanation
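# A one-line sketch (assumes numpy is imported as np); the printed values are
# whatever happened to be in memory, so do not rely on them
print(np.empty((2, 3)))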
# To create sequences of numbers, NumPy provides a function analogous to range that returns arrays instead of lists.
Explanation: np.arange
End of explanation |
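# A short sketch (assumes numpy is imported as np)
print(np.arange(0, 10, 2))        # array([0, 2, 4, 6, 8])
print(np.arange(0.0, 1.0, 0.25))  # float steps also work: array([0., 0.25, 0.5, 0.75])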
8,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a brief sketch of how to use Simon's algorithm.
We start by declaring all necessary imports.
Step1: Simon's algorithm can be used to find the mask $m$ of a 2-to-1 periodic Boolean function defined by
$$f(x) = f(x \oplus m)$$
where $\oplus$ is the bit-wise XOR operator. To create one we can define a mask as a string and call a utility to generate a map. To assert the correct result we check it against an expected map
Step2: To understand what a 2-to-1 function is, let us invert the map and collect all keys that point to the same value. As the assertion shows, all values have 2 distinct origins
Step3: To use Simon's algorithm on quantum hardware we need to define the connection to the QVM or QPU. However, we don't have a real connection in this notebook, so we just mock out the response. If you run this notebook, be sure to replace cxn with a pyQuil connection object.
Step4: Now let's run Simon's algorithm. We instantiate the Simon object and then call its find_mask method with the connection object and the 2-to-1 function whose mask we wish to find.
Finally we assert its correctness by checking with the mask we used to generate the function | Python Code:
from collections import defaultdict
import numpy as np
from mock import patch
from grove.simon.simon import Simon, create_valid_2to1_bitmap
Explanation: This notebook is a brief sketch of how to use Simon's algorithm.
We start by declaring all necessary imports.
End of explanation
mask = '110'
bm = create_valid_2to1_bitmap(mask, random_seed=42)
expected_map = {
'000': '001',
'001': '101',
'010': '000',
'011': '111',
'100': '000',
'101': '111',
'110': '001',
'111': '101'
}
for k, v in bm.items():
assert v == expected_map[k]
Explanation: Simon's algorithm can be used to find the mask $m$ of a 2-to-1 periodic Boolean function defined by
$$f(x) = f(x \oplus m)$$
where $\oplus$ is the bit-wise XOR operator. To create one we can define a mask as a string and call a utility to generate a map. To assert the correct result we check it against an expected map
End of explanation
reverse_bitmap = defaultdict(list)
for k, v in bm.items():
reverse_bitmap[v].append(k)
expected_reverse_bitmap = {
'001': ['000', '110'],
'101': ['001', '111'],
'000': ['010', '100'],
'111': ['011', '101']
}
for k, v in reverse_bitmap.items():
assert sorted(v) == sorted(expected_reverse_bitmap[k])
Explanation: To understand what a 2-to-1 function is, let us invert the map and collect all keys that point to the same value. As the assertion shows, all values have 2 distinct origins
End of explanation
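# An extra (optional) check of the defining property f(x) == f(x XOR m),
# reusing the bm map and mask defined above:
for x, fx in bm.items():
    x_xor_m = format(int(x, 2) ^ int(mask, 2), '03b')
    assert fx == bm[x_xor_m]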
with patch("pyquil.api.QuantumComputer") as qc:
# Need to mock multiple returns as an iterable
qc.run.side_effect = [
(np.asarray([0, 1, 1], dtype=int), ),
(np.asarray([1, 1, 1], dtype=int), ),
(np.asarray([1, 1, 1], dtype=int), ),
(np.asarray([1, 0, 0], dtype=int), ),
]
Explanation: To use Simon's algorithm on quantum hardware we need to define the connection to the QVM or QPU. However, we don't have a real connection in this notebook, so we just mock out the response. If you run this notebook, be sure to replace cxn with a pyQuil connection object.
End of explanation
sa = Simon()
found_mask = sa.find_mask(qc, bm)
assert ''.join([str(b) for b in found_mask]) == mask, "Found mask is not expected mask"
Explanation: Now let's run Simon's algorithm. We instantiate the Simon object and then call its find_mask method with the connection object and the 2-to-1 function whose mask we wish to find.
Finally we assert its correctness by checking with the mask we used to generate the function
End of explanation |
8,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Survival Curve, S(t) maps from a duration, t, to the probability of surviving longer than t.
$$
S(t) = 1-\text{CDF}(t)
$$
where CDF(t) is the probability of a lifetime less than or equal to t
Step3: Hazard Function - maps from a time, t, to the fraction of pregnancies that continue until t and then end at t. Numerator is equal to PMF(t)
$$
\lambda(t) = \frac{S(t)-S(t+1)}{S(t)}
$$
Step5: The goal of this chapter is to use NSFG data to quantify how long respondents "survive" until they get married for the first time. The range of respondents is 14 to 44 years. For women who have been married, the date of the first marriage is known. For women not married, all we know is their age at the time of interview.
Kaplan-Meier estimation - we can use the data to estimate the hazard function then convert the hazard function to a survival curve. For each age we consider
Step7: Estimating the survival curve
Step8: Cohort Effects
The left part of the graph has data for all respondents, but the right part of the graph only has the oldest respondents. If the relevant characteristics of respondents are not changing over time, that's fine, but in this case there are probably generational shifts.
We can investigate this effect by grouping respondents according to their decade of birth. These groups are called cohorts.
Step9: To extrapolate, we can "borrow" data from a previous cohort...
Step11: To plot expected remaining lifetime, we make a pmf from the survival function and then for each value of t, we cut off previous values and get the mean of the remaining values in the PMF.
memoryless - when the remaining lifetime function levels out completely, so the past has no effect on the predictions. "Any day now." If you're still in the game now, anything can happen.
NBUE - "New better than used in expectation." Young women have decreasing remaining "lifetimes." New parts expected to last longer
UBNE - "Used better than new in expectation." The older the part, the longer it is expected to last. Also newborns and cancer patients.
Exercise
cmdivorcx - date of divorce for first marriage
Compute the duration of marriages that ended in divorce and the duration of ongoing marriages. Estimate the hazard and survival functions for the duration of marriage.
Use resampling to take the sampling weights into account.
Consider dividing into decades of birth and age at first marriage | Python Code:
preg = nsfg.ReadFemPreg()
complete = preg.query('outcome in [1,3,4]').prglngth
cdf = thinkstats2.Cdf(complete, label='cdf')
##note: property is a method that can be invoked as if
##it were a variable.
class SurvivalFunction(object):
def __init__(self, cdf, label=''):
self.cdf = cdf
self.label = label or cdf.label
@property
def ts(self):
        # Sequence of lifetimes
return self.cdf.xs
@property
def ss(self):
        # Survival curve
return 1 - self.cdf.ps
def __getitem__(self, t):
return self.Prob(t)
def Prob(self, t):
return 1 - self.cdf.Prob(t)
sf = survival.SurvivalFunction(cdf)
##fraction of pregs that proceed past the first trimester.
sf[13]
thinkplot.Plot(cdf)
thinkplot.Plot(sf)
thinkplot.Show()
Explanation: Survival Curve, S(t) maps from a duration, t, to the probability of surviving longer than t.
$$
S(t) = 1-\text{CDF}(t)
$$
where CDF(t) is the probability of a lifetime less than or equal to t
End of explanation
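# A quick numerical sketch of the relation above, using the cdf and sf objects already defined:
t = 13
print(1 - cdf.Prob(t))   # 1 - CDF(t)
print(sf[t])             # S(t) from the survival function - should match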
hf = sf.MakeHazard()
##of all pregnancies that proceed until week 39, hf[39] end
##in week 39
hf[39]
thinkplot.Plot(hf)
Explanation: Hazard Function - maps from a time, t, to the fraction of pregnancies that continue until t and then end at t. Numerator is equal to PMF(t)
$$
\lambda(t) = \frac{S(t)-S(t+1)}{S(t)}
$$
End of explanation
def EstimateHazardFunction(complete, ongoing, label=''):
    # complete: set of complete observations (marriage ages)
    # ongoing: set of incomplete observations (age of respondent at interview)
n = len(complete)
hist_complete = thinkstats2.Hist(complete)
sf_complete = SurvivalFunction(thinkstats2.Cdf(complete))
m = len(ongoing)
sf_ongoing = SurvivalFunction(thinkstats2.Cdf(ongoing))
lams = {}
##at_risk measures number of resps whose outcomes are not known at t
##ended = number of respondents married at age t
##n * sf_complete[t], num respondents married after age t
    ##m * sf_ongoing[t], num of unmarried resps interviewed after t
for t, ended in sorted(hist_complete.Items()):
at_risk = ended + n * sf_complete[t] + m * sf_ongoing[t]
lams[t] = ended / at_risk
return survival.HazardFunction(lams, label=label)
resp = chap01soln.ReadFemResp()
resp.cmmarrhx.replace([9997, 9998, 9999], np.nan, inplace=True)
resp['agemarry'] = (resp.cmmarrhx - resp.cmbirth) / 12.0
resp['age'] = (resp.cmintvw - resp.cmbirth) / 12.0
complete = resp[resp.evrmarry==1].agemarry
ongoing = resp[resp.evrmarry==0].age
hf = EstimateHazardFunction(complete, ongoing)
thinkplot.Plot(hf)
Explanation: The goal of this chapter is to use NSFG data to quantify how long respondents "survive" until they get married for the first time. The range of respondents is 14 to 44 years. For women who have been married, the date of the first marriage is known. For women not married, all we know is their age at the time of interview.
Kaplan-Meier estimation - we can use the data to estimate the hazard function then convert the hazard function to a survival curve. For each age we consider:
1. the number of women who got married at that age
2. the number of women 'at risk' of getting married, which includes women who were not married at an earlier age.
End of explanation
# HazardFunction:
def MakeSurvival(self):
    # ts: sequence of times where the hazard function is estimated
    # ss: cumulative product of the complementary hazard function
ts = self.series.index
ss = (1 - self.series).cumprod()
cdf = thinkstats2.Cdf(ts, 1 - ss)
sf = SurvivalFunction(cdf)
return sf
sf = hf.MakeSurvival()
thinkplot.Plot(sf)
##Sampling Error:
def ResampleSurvival(resp, iters=101):
low, high = resp.agemarry.min(), resp.agemarry.max()
ts = np.arange(low, high, 1/12.0)
ss_seq = []
for i in range(iters):
sample = thinkstats2.ResampleRowsWeighted(resp)
hf, sf = survival.EstimateSurvival(sample)
ss_seq.append(sf.Probs(ts))
low, high = thinkstats2.PercentileRows(ss_seq, [5,95])
thinkplot.FillBetween(ts, low, high)
sf = hf.MakeSurvival()
ResampleSurvival(resp)
thinkplot.Plot(sf)
##discrepancy indicates that sample weights
##have a substantial effect
Explanation: Estimating the survival curve:
the chance of surviving past time t is the chance of surviving all times up through t, which is the cumulative product of the complementary hazard function:
$$
[1-\lambda(0)][1-\lambda(1)]...[1-\lambda(t)]
$$
End of explanation
month0 = pandas.to_datetime('1899-12-15')
dates = [month0 + pandas.DateOffset(months=cm)
for cm in resp.cmbirth]
resp['decade'] = (pandas.DatetimeIndex(dates).year - 1900) // 10
def EstimateSurvivalByDecade(groups):
for name, group, in groups:
hf, sf = survival.EstimateSurvival(group)
thinkplot.Plot(sf)
for i in range(1):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pandas.concat(samples, ignore_index=True)
groups = sample.groupby('decade')
EstimateSurvivalByDecade(groups)
survival.PlotResampledByDecade(resps)
Explanation: Cohort Effects
The left part of the graph has data for all respondents, but the right part of the graph only has the oldest respondents. If the relevant characteristics of respondents are not changing over time, that's fine, but in this case there are probably generational shifts.
We can investigate this effect by grouping respondents according to their decade of birth. These groups are called cohorts.
End of explanation
# class HazardFunction
def Extend(self, other):
last = self.series.index[-1]
more = other.series[other.series.index > last]
    self.series = pandas.concat([self.series, more])
survival.PlotResampledByDecade(resps, predict_flag=True)
Explanation: To extrapolate, we can "borrow" data from a previous cohort...
End of explanation
resp5 = survival.ReadFemResp1995()
resp6 = survival.ReadFemResp2002()
resp7 = survival.ReadFemResp2010()
resps = [resp5, resp6, resp7]
def EstimateDivorceSS(resp, cleanData=False):
if cleanData:
resp.cmmarrhx.replace([9997, 9998, 9999], np.nan, inplace=True)
resp.cmdivorcx.replace([9998,9999], np.nan, inplace=True)
resp['agemarry'] = (resp.cmmarrhx - resp.cmbirth) / 12.0
resp['age'] = (resp.cmintvw - resp.cmbirth) / 12.0
resp['durmarr'] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0
resp['sincemarr'] = (resp.age - resp.agemarry)
complete = resp[resp.durmarr.notnull()].durmarr
ongoing = resp[resp.durmarr.isnull()][resp.evrmarry==1].sincemarr
hf = EstimateHazardFunction(complete, ongoing)
ss = hf.MakeSurvival()
return hf, ss
resp = chap01soln.ReadFemResp()
hf, ss = EstimateDivorceSS(resp, cleanData=True)
thinkplot.Plot(ss)
def PlotConfidenceIntervalSS(resp, iters=101, func=EstimateDivorceSS):
low = 0
high = resp[resp.durmarr.isnull()].sincemarr.max()
    print('high', high)
ts = np.arange(low, high, 1/12.0)
ss_seq = []
for i in range(iters):
sample = thinkstats2.ResampleRowsWeighted(resp)
sample = sample.reset_index()
hf, sf = func(sample)
ss_seq.append(sf.Probs(ts))
low, high = thinkstats2.PercentileRows(ss_seq, [5,95])
thinkplot.FillBetween(ts, low, high)
thinkplot.Plot(ss)
PlotConfidenceIntervalSS(resp)
resp5 = survival.ReadFemResp1995()
resp6 = survival.ReadFemResp2002()
resp7 = survival.ReadFemResp2010()
resps = [resp5, resp6, resp7]
def EstimateSurvivalByDecade(groups, **options):
thinkplot.PrePlot(len(groups))
for _, group in groups:
_, sf = EstimateDivorceSS(group, cleanData=True)
thinkplot.Plot(sf, **options)
def PlotPredictionsByDecade(groups, **options):
hfs = []
for _, group in groups:
hf, sf = EstimateDivorceSS(group, cleanData=True)
hfs.append(hf)
thinkplot.PrePlot(len(hfs))
for i, hf in enumerate(hfs):
if i > 0:
hf.Extend(hfs[i-1])
sf = hf.MakeSurvival()
thinkplot.Plot(sf, **options)
def PlotResampledDivorceByDecade(resps, iters=11, predict_flag=False, omit=None):
for i in range(iters):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pandas.concat(samples, ignore_index=True)
groups = sample.groupby('decade')
if omit:
groups = [(name, group) for name, group in groups
if name not in omit]
if i == 0:
survival.AddLabelsByDecade(groups, alpha=0.7)
if predict_flag:
EstimateSurvivalByDecade(groups, alpha=0.2)
try:
PlotPredictionsByDecade(groups, alpha=0.2)
except IndexError:
pass
else:
            print("not predicting")
EstimateSurvivalByDecade(groups, alpha=0.2)
thinkplot.Config(title="without predictions")
PlotResampledDivorceByDecade(resps, predict_flag=False)
thinkplot.Show()
thinkplot.Config(title='with predictions')
PlotResampledDivorceByDecade(resps, predict_flag=True)
thinkplot.Show()
def CleanData2(resps):
for r in resps:
r['marrgroup'] = r.agemarry // 5
def marr_AddLabelsByDecade(groups, **options):
    # Draws fake points in order to add labels to the legend.
    # groups: GroupBy object
thinkplot.PrePlot(len(groups))
for name, _ in groups:
label = '%d' % ((name + 1) * 5)
thinkplot.Plot([15], [1], label=label, **options)
def Marr_PlotResampledDivorceByDecade(resps, iters=11, predict_flag=False, omit=None):
CleanData2(resps)
for i in range(iters):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pandas.concat(samples, ignore_index=True)
groups = sample.groupby('marrgroup')
if omit:
groups = [(name, group) for name, group in groups
if name not in omit]
if i == 0:
marr_AddLabelsByDecade(groups, alpha=0.7)
if predict_flag:
EstimateSurvivalByDecade(groups, alpha=0.2)
try:
PlotPredictionsByDecade(groups, alpha=0.2)
except IndexError:
pass
else:
EstimateSurvivalByDecade(groups, alpha=0.2)
Marr_PlotResampledDivorceByDecade(resps)
thinkplot.Config(title="without prediction")
thinkplot.Show()
Marr_PlotResampledDivorceByDecade([resp7], omit=[0,1,2,3,4,5])
thinkplot.Config(title="without prediction")
thinkplot.Show()
hf, sf = EstimateDivorceSS(resp, cleanData=True)
func = lambda pmf: pmf.Percentile(50)
rem_life = sf.RemainingLifetime(func=func)
thinkplot.Plot(rem_life)
Explanation: To plot expected remaining lifetime, we make a pmf from the survival function and then for each value of t, we cut off previous values and get the mean of the remaining values in the PMF.
memoryless - when the remaining lifetime function levels out completely, so the past has no effect on the predictions. "Any day now." If you're still in the game now, anything can happen.
NBUE - "New better than used in expectation." Young women have decreasing remaining "lifetimes." New parts expected to last longer
UBNE - "Used better than new in expectation." The older the part, the longer it is expected to last. Also newborns and cancer patients.
Exercise
cmdivorcx - date of divorce for first marriage
Compute the duration of marriages that ended in divorce and the duration of ongoing marriages. Estimate the hazard and survival functions for the duration of marriage.
Use resampling to take the sampling weights into account.
Consider dividing into decades of birth and age at first marriage
End of explanation |
8,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras models are serialized in a JSON format.
Step1: Getting the weights
Weights can be retrieved either directly from the model or from each individual layer.
Step2: Moreover, the respective backend variables that store the weights can be retrieved.
Step3: Getting the activations and net inputs
Intermediary computation results, i.e. results that are not part of the prediction, cannot be directly retrieved from Keras. It is possible to build a new model for which the intermediary result is the prediction, but this approach makes computation rather inefficient when several intermediary results are to be retrieved. Instead it is better to reach directly into the backend for this purpose.
Activations are still fairly straightforward, as the relevant tensors can be retrieved as the output of the layer.
Step4: Net input is a little more complicated as we have to reach heuristically into the TensorFlow graph to find the relevant tensors. However, it can be safely assumed most of the time that the net input tensor is the input to the activation op.
Step5: Getting layer properties
Each Keras layer object provides the relevant properties as attributes
Step6: Layer type information can only be retrieved through the class name | Python Code:
model.get_config()
Explanation: Keras models are serialized in a JSON format.
End of explanation
# Weights and biases of the entire model.
model.get_weights()
# Weights and bias for a single layer.
conv_layer = model.get_layer('conv2d_1')
conv_layer.get_weights()
Explanation: Getting the weights
Weights can be retrieved either directly from the model or from each individual layer.
End of explanation
conv_layer.weights
Explanation: Moreover, the respective backend variables that store the weights can be retrieved.
End of explanation
# Getting the Tensorflow session and the input tensor.
sess = keras.backend.get_session()
network_input_tensor = model.layers[0].input
network_input_tensor
# Getting the tensor that holds the activations as the output of a layer.
activation_tensor = conv_layer.output
activation_tensor
activations = sess.run(activation_tensor, feed_dict={network_input_tensor: test_data[0:1]})
activations.shape
for i in range(32):
plt.imshow(activations[0, ..., i])
plt.show()
Explanation: Getting the activations and net inputs
Intermediary computation results, i.e. results that are not part of the prediction, cannot be directly retrieved from Keras. It is possible to build a new model for which the intermediary result is the prediction, but this approach makes computation rather inefficient when several intermediary results are to be retrieved. Instead it is better to reach directly into the backend for this purpose.
Activations are still fairly straightforward, as the relevant tensors can be retrieved as the output of the layer.
End of explanation
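# For comparison, the intermediate-model approach mentioned above could look roughly
# like this (a sketch; activation_model is a name introduced here, and a separate
# model has to be built for each intermediary output we want):
activation_model = keras.models.Model(inputs=model.input, outputs=conv_layer.output)
activations_via_model = activation_model.predict(test_data[0:1])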
net_input_tensor = activation_tensor.op.inputs[0]
net_input_tensor
net_inputs = sess.run(net_input_tensor, feed_dict={network_input_tensor: test_data[0:1]})
net_inputs.shape
for i in range(32):
plt.imshow(net_inputs[0, ..., i])
plt.show()
Explanation: Net input is a little more complicated as we have to reach heuristically into the TensorFlow graph to find the relevant tensors. However, it can be safely assumed most of the time that the net input tensor is the input to the activation op.
End of explanation
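# A quick, optional sanity check that the op we reached into is really an activation:
print(activation_tensor.op.type)   # e.g. 'Relu' for a ReLU-activated conv layer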
conv_layer = model.get_layer('conv2d_1')
conv_layer
conv_layer.input_shape
conv_layer.output_shape
conv_layer.kernel_size
conv_layer.strides
max_pool_layer = model.get_layer('max_pooling2d_1')
max_pool_layer
max_pool_layer.strides
max_pool_layer.pool_size
Explanation: Getting layer properties
Each Keras layer object provides the relevant properties as attributes
End of explanation
conv_layer.__class__.__name__
Explanation: Layer type information can only be retrieved through the class name
End of explanation |
8,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manual Neural Network
In this notebook we will manually build out a neural network that mimics the TensorFlow API. This will greatly help your understanding when working with the real TensorFlow!
Quick Note on Super() and OOP
Step1: The child class will use its own initialization method, if not specified otherwise.
Step2: If we want to use initialization from the parent class, we can do that using
Step6: Operation
Step7: Example Operations
Addition
Step8: Multiplication
Step9: Matrix Multiplication
Step11: Placeholders
Step13: Variables
Step15: Graph
Step16: A Basic Graph
$$ z = Ax + b $$
With A=10 and b=1
$$ z = 10x + 1 $$
Just need a placeholder for x and then once x is filled in we can solve it!
Step17: Session
Step20: Traversing Operation Nodes
More details about tree post order traversal
Step21: The result should look like
Step22: Looks like we did it!
Step23: Activation Function
Step24: Sigmoid as an Operation
Step25: Classification Example
Step26: Defining the Perceptron
$$ y = mx + b $$
$$ y = -x + 5 $$
$$ f1 = mf2 + b , m = 1$$
$$ f1 = -f2 + 5 $$
$$ f1 + f2 - 5 = 0 $$
Convert to a Matrix Representation of Features
$$ w^Tx + b = 0 $$
$$ \Big(1, 1\Big)f - 5 = 0 $$
Then if the result is > 0 it is label 1; if it is less than 0, it is label 0
Example Point
Let's say we have the point f1=8, f2=10, otherwise stated as (8,10). Then we have
Step27: Or if we have (2,-10)
Step28: Using an Example Session Graph | Python Code:
class SimpleClass():
def __init__(self, str_input):
print("SIMPLE" + str_input)
class ExtendedClass(SimpleClass):
def __init__(self):
print('EXTENDED')
Explanation: Manual Neural Network
In this notebook we will manually build out a neural network that mimics the TensorFlow API. This will greatly help your understanding when working with the real TensorFlow!
Quick Note on Super() and OOP
End of explanation
s = ExtendedClass()
Explanation: The child class will use its own initialization method, if not specified otherwise.
End of explanation
class ExtendedClass(SimpleClass):
def __init__(self):
super().__init__(" My String")
print('EXTENDED')
s = ExtendedClass()
Explanation: If we want to use initialization from the parent class, we can do that using:
python
super().__init__()
End of explanation
class Operation():
    # An Operation is a node in a "Graph". TensorFlow will also use this concept of a Graph.
    # This Operation class will be inherited by other classes that actually compute the specific
    # operation, such as adding or matrix multiplication.
def __init__(self, input_nodes = []):
        # Initialize an Operation
# The list of input nodes
self.input_nodes = input_nodes
# Initialize list of nodes consuming this node's output
self.output_nodes = []
# For every node in the input, we append this operation (self) to the list of
# the consumers of the input nodes
for node in input_nodes:
node.output_nodes.append(self)
# There will be a global default graph (TensorFlow works this way)
# We will then append this particular operation
# Append this operation to the list of operations in the currently active default graph
_default_graph.operations.append(self)
def compute(self):
        # This is a placeholder function. It will be overwritten by the actual specific operation
        # that inherits from this class.
pass
Explanation: Operation
End of explanation
class add(Operation):
def __init__(self, x, y):
super().__init__([x, y])
def compute(self, x_var, y_var):
self.inputs = [x_var, y_var]
return x_var + y_var
Explanation: Example Operations
Addition
End of explanation
class multiply(Operation):
def __init__(self, a, b):
super().__init__([a, b])
def compute(self, a_var, b_var):
self.inputs = [a_var, b_var]
return a_var * b_var
Explanation: Multiplication
End of explanation
class matmul(Operation):
def __init__(self, a, b):
super().__init__([a, b])
def compute(self, a_mat, b_mat):
self.inputs = [a_mat, b_mat]
return a_mat.dot(b_mat)
Explanation: Matrix Multiplication
End of explanation
class Placeholder():
    # A placeholder is a node that needs to be provided a value for computing the output in the Graph.
    # In case of supervised learning, X (input) and Y (output) will require placeholders.
def __init__(self):
self.output_nodes = []
_default_graph.placeholders.append(self)
Explanation: Placeholders
End of explanation
class Variable():
    # This variable is a changeable parameter of the Graph.
    # For a simple neural network, these will be the weights and biases.
def __init__(self, initial_value = None):
self.value = initial_value
self.output_nodes = []
_default_graph.variables.append(self)
Explanation: Variables
End of explanation
class Graph():
def __init__(self):
self.operations = []
self.placeholders = []
self.variables = []
def set_as_default(self):
        # Sets this Graph instance as the global default graph
global _default_graph
_default_graph = self
Explanation: Graph
End of explanation
g = Graph()
g.set_as_default()
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
A = Variable(10)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
b = Variable(1)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
# Will be filled out later
x = Placeholder()
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
y = multiply(A,x)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
z = add(y, b)
print("Operations:")
print(g.operations)
print("Placeholders:")
print(g.placeholders)
print("Variables:")
print(g.variables)
Explanation: A Basic Graph
$$ z = Ax + b $$
With A=10 and b=1
$$ z = 10x + 1 $$
Just need a placeholder for x and then once x is filled in we can solve it!
End of explanation
import numpy as np
Explanation: Session
End of explanation
def traverse_postorder(operation):
    # Post-order traversal of nodes.
    # Basically makes sure computations are done in the correct order (Ax first, then Ax + b).
nodes_postorder = []
def recurse(node):
if isinstance(node, Operation):
for input_node in node.input_nodes:
recurse(input_node)
nodes_postorder.append(node)
recurse(operation)
return nodes_postorder
class Session:
    def run(self, operation, feed_dict = {}):
        """
        operation: the operation to compute
        feed_dict: dictionary mapping placeholders to input values (the data)
        """
# Puts nodes in correct order
nodes_postorder = traverse_postorder(operation)
print("Post Order:")
print(nodes_postorder)
for node in nodes_postorder:
if type(node) == Placeholder:
node.output = feed_dict[node]
elif type(node) == Variable:
node.output = node.value
else: # Operation
node.inputs = [input_node.output for input_node in node.input_nodes]
node.output = node.compute(*node.inputs)
# Convert lists to numpy arrays
if type(node.output) == list:
node.output = np.array(node.output)
# Return the requested node value
return operation.output
sess = Session()
result = sess.run(operation = z,
feed_dict = {x : 10})
Explanation: Traversing Operation Nodes
More details about tree post order traversal: https://en.wikipedia.org/wiki/Tree_traversal#Post-order_(LRN)
End of explanation
result
10 * 10 + 1
# Running just y = Ax
# The post order should be only up to
result = sess.run(operation = y,
feed_dict = {x : 10})
result
Explanation: The result should look like:
Variable (A), Placeholder (x), multiply operation (Ax), Variable (b), add operation (Ax + b)
End of explanation
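Before moving on to matrices, the traversal helper can also be inspected directly; this small sketch reuses the scalar z node defined above and should list each node only after its inputs:
for node in traverse_postorder(z):
    print(type(node).__name__)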
g = Graph()
g.set_as_default()
A = Variable([[10, 20], [30, 40]])
b = Variable([1, 1])
x = Placeholder()
y = matmul(A,x)
z = add(y,b)
sess = Session()
result = sess.run(operation = z,
feed_dict = {x : 10})
result
Explanation: Looks like we did it!
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
# Defining sigmoid function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
sample_z = np.linspace(-10, 10, 100)
sample_a = sigmoid(sample_z)
plt.figure(figsize = (8, 8))
plt.title("Sigmoid")
plt.plot(sample_z, sample_a)
Explanation: Activation Function
End of explanation
class Sigmoid(Operation):
def __init__(self, z):
        # z is the input node
super().__init__([z])
def compute(self, z_val):
return 1 / (1 + np.exp(-z_val))
Explanation: Sigmoid as an Operation
End of explanation
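As a quick sanity check (a small sketch reusing the classes defined above, not part of the original notebook), the Sigmoid operation should reproduce the plain sigmoid function at z = 0:
g_check = Graph()
g_check.set_as_default()
x_in = Placeholder()
sig = Sigmoid(x_in)
sess_check = Session()
print(sess_check.run(operation = sig, feed_dict = {x_in : 0}))  # expected 0.5
print(sigmoid(0))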
from sklearn.datasets import make_blobs
# Creating 50 samples divided into 2 blobs with 2 features
data = make_blobs(n_samples = 50,
n_features = 2,
centers = 2,
random_state = 75)
data
features = data[0]
plt.scatter(features[:,0],features[:,1])
labels = data[1]
plt.scatter(x = features[:,0],
y = features[:,1],
c = labels,
cmap = 'coolwarm')
# DRAW A LINE THAT SEPARATES THE CLASSES
x = np.linspace(0, 11 ,10)
y = -x + 5
plt.scatter(features[:,0],
features[:,1],
c = labels,
cmap = 'coolwarm')
plt.plot(x,y)
Explanation: Classification Example
End of explanation
z = np.array([1, 1]).dot(np.array([[8], [10]])) - 5
print(z)
a = 1 / (1 + np.exp(-z))
print(a)
Explanation: Defining the Perceptron
$$ y = mx + b $$
$$ y = -x + 5 $$
$$ f1 = mf2 + b , m = -1$$
$$ f1 = -f2 + 5 $$
$$ f1 + f2 - 5 = 0 $$
Convert to a Matrix Representation of Features
$$ w^Tx + b = 0 $$
$$ \Big(1, 1\Big)f - 5 = 0 $$
Then if the result is > 0, it's label 1; if it is less than 0, it is label 0.
Example Point
Let's say we have the point f1=8 , f2=10, otherwise stated as (8,10). Then we have:
$$
\begin{pmatrix}
1 , 1
\end{pmatrix}
\begin{pmatrix}
8 \\
10
\end{pmatrix} - 5 = $$
End of explanation
z = np.array([1,1]).dot(np.array([[2],[-10]])) - 5
print(z)
a = 1 / (1 + np.exp(-z))
print(a)
Explanation: Or if we have the point (2,-10)
End of explanation
g = Graph()
g.set_as_default()
x = Placeholder()
w = Variable([1,1])
b = Variable(-5)
z = add(matmul(w,x),b)
a = Sigmoid(z)
sess = Session()
sess.run(operation = a,
feed_dict = {x : [8, 10]})
sess.run(operation = a,
feed_dict = {x : [2, -10]})
Explanation: Using an Example Session Graph
End of explanation |
8,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reference
Tokenizer
The tokenizer in Yargy is built on regular expressions. For every token type there is a rule with a regex
Step1: The tokenizer is initialized with a list of rules. By default this is RULES
Step2: The user can remove some rules from the list or add new ones. Let's drop the newline tokens
Step3: To do that, remove the EOL rule
Step4: Yargy ships with primitive rules for tokenizing emails and phone numbers. They are disabled by default
Step5: Let's add our own rule for extracting domains
Step6: By default Yargy uses MorphTokenizer rather than Tokenizer. For every token of type 'RU' it runs Pymorphy2 and adds a forms field with the morphology
Step7: Gazetteer
A dictionary of professions or geographic objects can be written with the standard means: rule, or_, normalized, caseless
Step8: This is inconvenient and error-prone. Yargy uses pipeline for building dictionaries. Two kinds of gazetteers are implemented
Step9: caseless_pipeline looks words up without normalization. For example, let's find Arabic names in a text
Step10: Predicates
Step11: Interpretation
The object produced by interpretation is described by the fact constructor. attribute sets a default value for a field. For example, in Date the year defaults to 2017
Step12: For dates the parse trees look simple
Step13: How does the interpretation algorithm behave when a constructor's child is not an attribute but another constructor? Or when an attribute's child is another attribute? Or when there is more than one token node under a constructor or an attribute? Let's go from simple to complex. When there are several tokens under an attribute node, they are joined
Step14: Money.value contains two words
Step15: When an attribute node has a mix of tokens and constructor nodes underneath, interpretation raises a TypeError
Step16: If an attribute node sits under another attribute node, the lower one simply disappears
Step17: "X" ends up in A.y, not in A.x
Step18: What if a constructor node has several identical attribute nodes underneath? The rightmost attribute overwrites all the others
Step19: A.x receives "3"
Step20: Sometimes you need to keep the contents of all repeated attribute nodes, not only the rightmost one. Mark the field as repeatable
Step21: "Дядя Ваня" does not overwrite "Каштанка"; both end up in Item.titles
Step22: The last non-obvious case is when a constructor node's child is another constructor node. This happens with recursive grammars. In the example, the child of an Item node is another Item node
Step23: During interpretation two objects appear
Step24: Normalization
Yargy implements four main normalization methods
Step25: With normalized the word "июня" is changed to "июнь"
Step26: If several tokens fall under normalized, each one is put into its normal form without agreement
Step27: normalized behaves specially after a gazetteer. The normalization result is the gazetteer key
Step28: inflected inflects the word; it corresponds to the inflect method in Pymorphy2
Step29: inflected accepts a set of grammemes
Step30: custom applies an arbitrary function to the word
Step31: custom can be combined with normalized. The word is first put into its normal form and then the function is applied to it
Step32: const simply replaces a word or phrase with a fixed value
Step33: Agreement
Yargy implements four types of agreement
Step34: main points at the head word of a phrase. By default the head word is the leftmost one | Python Code:
from yargy.tokenizer import RULES
RULES
Explanation: Reference
Tokenizer
The tokenizer in Yargy is built on regular expressions. For every token type there is a rule with a regex:
End of explanation
from yargy.tokenizer import Tokenizer
text = '[email protected]'
tokenizer = Tokenizer()
list(tokenizer(text))
Explanation: The tokenizer is initialized with a list of rules. By default this is RULES:
End of explanation
tokenizer = Tokenizer()
text = '''
abc
123
'''
list(tokenizer(text))
Explanation: The user can remove some rules from the list or add new ones. Let's drop the newline tokens:
End of explanation
tokenizer = Tokenizer().remove_types('EOL')
list(tokenizer(text))
Explanation: To do that, remove the EOL rule:
End of explanation
from yargy.tokenizer import EMAIL_RULE, PHONE_RULE
text = 'email: [email protected] call: 8 915 132 54 76'
tokenizer = Tokenizer().add_rules(EMAIL_RULE, PHONE_RULE)
list(tokenizer(text))
Explanation: Yargy ships with primitive rules for tokenizing emails and phone numbers. They are disabled by default:
End of explanation
from yargy.tokenizer import TokenRule
DOMAIN_RULE = TokenRule('DOMAIN', '[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+')
text = 'на сайте www.VKontakte.ru'
tokenizer = Tokenizer().add_rules(DOMAIN_RULE)
list(tokenizer(text))
Explanation: Let's add our own rule for extracting domains:
End of explanation
from yargy.tokenizer import MorphTokenizer
tokenizer = MorphTokenizer()
list(tokenizer('X век стал'))
Explanation: By default Yargy uses MorphTokenizer rather than Tokenizer. For every token of type 'RU' it runs Pymorphy2 and adds a forms field with the morphology:
End of explanation
from yargy import rule, or_
from yargy.predicates import normalized, caseless
POSITION = or_(
rule(normalized('генеральный'), normalized('директор')),
rule(normalized('бухгалтер'))
)
GEO = or_(
rule(normalized('Ростов'), '-', caseless('на'), '-', caseless('Дону')),
rule(normalized('Москва'))
)
Explanation: Gazetteer
A dictionary of professions or geographic objects can be written with the standard means: rule, or_, normalized, caseless:
End of explanation
from yargy import Parser
from yargy.pipelines import morph_pipeline
TYPE = morph_pipeline(['электронный дневник'])
parser = Parser(TYPE)
text = 'электронным дневником, электронные дневники, электронное дневнику'
for match in parser.findall(text):
print([_.value for _ in match.tokens])
Explanation: This is inconvenient and error-prone. Yargy uses pipeline for building dictionaries. Two kinds of gazetteers are implemented: morph_pipeline and caseless_pipeline. morph_pipeline normalizes words before matching:
End of explanation
from yargy.pipelines import caseless_pipeline
NAME = caseless_pipeline([
'Абд Аль-Азиз Бин Мухаммад',
'Абд ар-Рахман Наср ас-Са ди'
])
parser = Parser(NAME)
text = 'Абд Аль-Азиз Бин Мухаммад, АБД АР-РАХМАН НАСР АС-СА ДИ'
for match in parser.findall(text):
print([_.value for _ in match.tokens])
Explanation: caseless_pipeline looks words up without normalization. For example, let's find the Arabic names "Абд Аль-Азиз Бин Мухаммад" and "Абд ар-Рахман Наср ас-Са ди" in a text:
End of explanation
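A pipeline behaves like any other rule, so it can also be embedded inside a larger grammar. A small sketch (not part of the original tutorial; the sample text and STATION rule are made up for illustration):
from yargy import rule, Parser
from yargy.pipelines import morph_pipeline
from yargy.predicates import type

STATION = rule(
    morph_pipeline(['станция метро']),
    type('RU')
)
parser = Parser(STATION)
for match in parser.findall('возле станции метро Арбатская'):
    print([_.value for _ in match.tokens])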
from IPython.display import HTML
from yargy.predicates import bank
def html():
for name in bank.__all__:
yield f'<h3>{name} <a name="predicates.{name}"><a href="#predicates.{name}">#</a></h3>'
doc = getattr(bank, name).__doc__.strip()
yield f'<br/><pre> {doc}</pre>'
HTML('\n'.join(html()))
Explanation: Predicates
End of explanation
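For a quick feel of how a single predicate is used, a one-token grammar can be built directly from it (a minimal sketch with a made-up sentence, not from the original tutorial):
from yargy import rule, Parser
from yargy.predicates import gram

NOUN = rule(gram('NOUN'))
parser = Parser(NOUN)
print([match.tokens[0].value for match in parser.findall('синий шар летит')])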
from IPython.display import display
from yargy import Parser, rule, and_, or_
from yargy.interpretation import fact, attribute
from yargy.predicates import dictionary, gte, lte
Date = fact(
'Date',
[attribute('year', 2017), 'month', 'day']
)
MONTHS = {
'январь',
'февраль',
'март',
'апрель',
'мая',
'июнь',
'июль',
'август',
'сентябрь',
'октябрь',
'ноябрь',
'декабрь'
}
MONTH_NAME = dictionary(MONTHS)
DAY = and_(
gte(1),
lte(31)
)
YEAR = and_(
gte(1900),
lte(2100)
)
DATE = rule(
DAY.interpretation(
Date.day
),
MONTH_NAME.interpretation(
Date.month
),
YEAR.interpretation(
Date.year
).optional()
).interpretation(
Date
)
text = '''18 июля 2016
15 марта
'''
parser = Parser(DATE)
for line in text.splitlines():
match = parser.match(line)
display(match.fact)
Explanation: Interpretation
The object produced by interpretation is described by the fact constructor. attribute sets a default value for a field. For example, in Date the year defaults to 2017:
End of explanation
parser = Parser(DATE)
for line in text.splitlines():
match = parser.match(line)
display(match.tree.as_dot)
Explanation: For dates the parse trees look simple: a constructor node with a few attribute children:
End of explanation
from yargy.predicates import eq, type, dictionary
Money = fact(
'Money',
['value', 'currency']
)
MONEY = rule(
rule(
type('INT'),
dictionary({
'тысяча',
'миллион'
})
).interpretation(
Money.value
),
eq('$').interpretation(
Money.currency
)
).interpretation(
Money
)
parser = Parser(MONEY)
match = parser.match('5 тысяч$')
match.tree.as_dot
Explanation: How does the interpretation algorithm behave when a constructor's child is not an attribute but another constructor? Or when an attribute's child is another attribute? Or when there is more than one token node under a constructor or an attribute? Let's go from simple to complex. When there are several tokens under an attribute node, they are joined:
End of explanation
match.fact
Explanation: Money.value contains two words:
End of explanation
from yargy.predicates import true
A = fact(
'A',
['x']
)
B = fact(
'B',
['y']
)
RULE = rule(
true(),
true().interpretation(
B.y
).interpretation(
B
)
).interpretation(
A.x
).interpretation(
A
)
parser = Parser(RULE)
match = parser.match('X Y')
match.tree.as_dot
# match.fact  would raise a TypeError here
Explanation: When an attribute node has a mix of tokens and constructor nodes underneath, interpretation raises a TypeError:
End of explanation
from yargy.predicates import true
A = fact(
'A',
['x', 'y']
)
RULE = true().interpretation(
A.x
).interpretation(
A.y
).interpretation(A)
parser = Parser(RULE)
match = parser.match('X')
match.tree.as_dot
Explanation: If an attribute node sits under another attribute node, the lower one simply disappears:
End of explanation
match.fact
Explanation: "X" попадёт в A.y, не в A.x:
End of explanation
A = fact(
'A',
['x']
)
RULE = true().interpretation(
A.x
).repeatable().interpretation(
A
)
parser = Parser(RULE)
match = parser.match('1 2 3')
match.tree.normalized.as_dot
Explanation: What if a constructor node has several identical attribute nodes underneath? The rightmost attribute overwrites all the others:
End of explanation
match.fact
Explanation: A.x receives "3":
End of explanation
from yargy import not_
Item = fact(
'Item',
[attribute('titles').repeatable()]
)
TITLE = rule(
'«',
not_(eq('»')).repeatable(),
'»'
)
ITEM = rule(
TITLE.interpretation(
Item.titles
),
eq(',').optional()
).repeatable().interpretation(
Item
)
parser = Parser(ITEM)
text = '«Каштанка», «Дядя Ваня»'
match = parser.match(text)
match.tree.as_dot
Explanation: Sometimes you need to keep the contents of all repeated attribute nodes, not only the rightmost one. Mark the field as repeatable:
End of explanation
match.fact
Explanation: "Дядя Ваня" does not overwrite "Каштанка"; both end up in Item.titles:
End of explanation
from yargy import forward, or_
Item = fact(
'Item',
['title', 'date']
)
ITEM = forward().interpretation(
Item
)
ITEM.define(or_(
TITLE.interpretation(
Item.title
),
rule(ITEM, TITLE),
rule(
ITEM,
DATE.interpretation(
Item.date
)
)
))
parser = Parser(ITEM)
text = '«Каштанка» 18 июня'
match = parser.match(text)
match.tree.as_dot
Explanation: The last non-obvious case is when a constructor node's child is another constructor node. This happens with recursive grammars. In the example, the child of an Item node is another Item node:
End of explanation
match.fact
Explanation: During interpretation two objects appear: Item(title='«Каштанка»', date=None) and Item(title=None, date=Date('18', 'июня')). At the end they are merged:
End of explanation
DATE = rule(
DAY.interpretation(
Date.day
),
MONTH_NAME.interpretation(
Date.month
),
YEAR.interpretation(
Date.year
)
).interpretation(
Date
)
parser = Parser(DATE)
match = parser.match('8 июня 2015')
match.fact
Explanation: Normalization
Yargy implements four main normalization methods: normalized, inflected, custom and const. normalized returns the normal form of a word and corresponds to normal_form in Pymorphy2:
End of explanation
DATE = rule(
DAY.interpretation(
Date.day
),
MONTH_NAME.interpretation(
Date.month.normalized()
),
YEAR.interpretation(
Date.year
)
).interpretation(
Date
)
parser = Parser(DATE)
match = parser.match('8 июня 2015')
match.fact
Explanation: With normalized the word "июня" is changed to "июнь":
End of explanation
from yargy.interpretation import fact
from yargy.predicates import normalized
from IPython.display import display
Geo = fact(
'Geo',
['name']
)
RULE = rule(
normalized('Красная'),
normalized('площадь')
).interpretation(
Geo.name.normalized()
).interpretation(
Geo
)
parser = Parser(RULE)
for match in parser.findall('на Красной площади'):
display(match.fact)
Explanation: If several tokens fall under normalized, each one is put into its normal form without agreement:
End of explanation
from yargy.pipelines import morph_pipeline
RULE = morph_pipeline([
'красная площадь',
'первомайская улица'
]).interpretation(
Geo.name.normalized()
).interpretation(
Geo
)
parser = Parser(RULE)
for match in parser.findall('c Красной площади на Первомайскую улицу'):
display(match.fact)
Explanation: normalized behaves specially after a gazetteer. The normalization result is the gazetteer key:
End of explanation
from yargy.interpretation import fact
from yargy.predicates import gram
Name = fact(
'Name',
['first']
)
NAME = gram('Name').interpretation(
Name.first.inflected()
).interpretation(
Name
)
parser = Parser(NAME)
for match in parser.findall('Саше, Маше, Вадиму'):
display(match.fact)
Explanation: inflected inflects the word; it corresponds to the inflect method in Pymorphy2:
End of explanation
NAME = gram('Name').interpretation(
    Name.first.inflected({'accs', 'plur'})  # accusative case, plural
).interpretation(
Name
)
parser = Parser(NAME)
for match in parser.findall('Саша, Маша, Вадим'):
display(match.fact)
Explanation: inflected accepts a set of grammemes:
End of explanation
from yargy.interpretation import fact
from yargy.predicates import type
Float = fact(
'Float',
['value']
)
INT = type('INT')
FLOAT = rule(
INT,
'.',
INT
).interpretation(
Float.value.custom(float)
).interpretation(
Float
)
parser = Parser(FLOAT)
match = parser.match('3.1415')
match.fact
Explanation: custom applies an arbitrary function to the word:
End of explanation
MONTHS = {
'январь': 1,
'февраль': 2,
'март': 3,
'апрель': 4,
'мая': 5,
'июнь': 6,
'июль': 7,
'август': 8,
'сентябрь': 9,
'октябрь': 10,
'ноябрь': 11,
'декабрь': 12
}
DATE = rule(
DAY.interpretation(
Date.day.custom(int)
),
MONTH_NAME.interpretation(
Date.month.normalized().custom(MONTHS.__getitem__)
),
YEAR.interpretation(
Date.year.custom(int)
)
).interpretation(
Date
)
parser = Parser(DATE)
match = parser.match('8 июня 2015')
match.fact
Explanation: custom can be combined with normalized. The word is first put into its normal form and then the function is applied to it:
End of explanation
Era = fact(
'Era',
['value']
)
BC = morph_pipeline([
'до нашей эры',
'до н.э.'
]).interpretation(
Era.value.const('BC')
)
AD = morph_pipeline([
'наша эра',
'н.э.'
]).interpretation(
Era.value.const('AD')
)
ERA = or_(
BC,
AD
).interpretation(
Era
)
parser = Parser(ERA)
for match in parser.findall('наша эра, до н.э.'):
display(match.fact)
Explanation: const simply replaces a word or phrase with a fixed value:
End of explanation
from yargy.relations import gnc_relation
Name = fact(
'Name',
['first', 'last']
)
gnc = gnc_relation()
NAME = rule(
gram('Name').interpretation(
Name.first.inflected()
).match(gnc),
gram('Surn').interpretation(
Name.last.inflected()
).match(gnc)
).interpretation(
Name
)
parser = Parser(NAME)
match = parser.match('Сашу Иванову')
display(match.fact)
display(match.tree.as_dot)
Explanation: Agreement
Yargy implements four types of agreement: gender_relation (gender), number_relation (number), case_relation (case) and gnc_relation (gender, number and case). The match method attaches the agreement:
End of explanation
from yargy.relations import main
POSITION = rule(
normalized('главный'),
main(normalized('бухгалтер'))
)
POSITION.as_dot
from yargy.relations import case_relation
case = case_relation()
PERSON = rule(
POSITION.match(case),
NAME.match(case)
)
parser = Parser(PERSON)
assert not parser.match('главного бухгалтер марину игореву')
match = parser.match('главного бухгалтера марину игореву')
match.tree.as_dot
Explanation: main points at the head word of a phrase. By default the head word is the leftmost one:
End of explanation |
8,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simulation of the METIS scenario with rooms in one floor
This notebook simulates a scenario with one access point in each room on a single floor of a building.
Some Initialization Code
First we do some initializations and import the required modules.
Step1: Simulation Configuration
Now we set the simulation configuration.
Step2: Perform the Simulation
calculate the SINRs
Step3: Print Min/Mean/Max SIR values (no noise)
Step4: Create the Plots for the different cases
First we will create the plots for a noise variance equal to zero.
Plot case without path loss (only wall loss)
Step5: Plot case with 3GPP path loss
Step6: Case with Free Space Path Loss
Step7: Plot case with METIS PS7 path loss
Step8: Create the plots with interact
Here we repeat the plots, but now using IPython interact. This allows us to change input parameters and see the result in the plot. | Python Code:
%matplotlib inline
# xxxxxxxxxx Add the parent folder to the python path. xxxxxxxxxxxxxxxxxxxx
import sys
import os
sys.path.append('../')
# xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.html import widgets
from IPython.display import display_latex
# Import the simulation runner
from apps.metis_scenarios.simulate_metis_scenario import *
Explanation: Simulation of the METIS scenario with rooms in one floor
This notebook simulates a scenario with one access point in each room on a single floor of a building.
Some Initialization Code
First we do some initializations and import the required modules.
End of explanation
scenario_params = {
'side_length': 10, # 10 meters side length
'single_wall_loss_dB': 5,
'num_rooms_per_side': 12,
'ap_decimation': 1}
power_params = {
'Pt_dBm': 20, # 20 dBm transmit power
'noise_power_dBm': -300 # Very low noise power
}
Explanation: Simulation Configuration
Now we set the simulation configuration.
End of explanation
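As a side note, the dBm values above convert to linear scale as P[W] = 10**((P[dBm] - 30) / 10); a tiny helper (not part of the simulator, added here only for illustration) makes the configured transmit power easy to sanity-check:
def dBm_to_watts(p_dBm):
    # 0 dBm corresponds to 1 mW
    return 10 ** ((p_dBm - 30) / 10.)

print(dBm_to_watts(power_params['Pt_dBm']))  # 20 dBm -> 0.1 W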
out = perform_simulation_SINR_heatmap(scenario_params, power_params)
(sinr_array_pl_nothing_dB,
sinr_array_pl_3gpp_dB,
sinr_array_pl_free_space_dB,
sinr_array_pl_metis_ps7_dB) = out
num_discrete_positions_per_room = 15
sinr_array_pl_nothing_dB2 = prepare_sinr_array_for_color_plot(
sinr_array_pl_nothing_dB,
scenario_params['num_rooms_per_side'],
num_discrete_positions_per_room)
sinr_array_pl_3gpp_dB2 = prepare_sinr_array_for_color_plot(
sinr_array_pl_3gpp_dB,
scenario_params['num_rooms_per_side'],
num_discrete_positions_per_room)
sinr_array_pl_free_space_dB2 = prepare_sinr_array_for_color_plot(
sinr_array_pl_free_space_dB,
scenario_params['num_rooms_per_side'],
num_discrete_positions_per_room)
sinr_array_pl_metis_ps7_dB2 = prepare_sinr_array_for_color_plot(
sinr_array_pl_metis_ps7_dB,
scenario_params['num_rooms_per_side'],
num_discrete_positions_per_room)
Explanation: Perform the Simulation
calculate the SINRs
End of explanation
print(("Min/Mean/Max SINR value (no PL):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_nothing_dB.min(),
sinr_array_pl_nothing_dB.mean(),
sinr_array_pl_nothing_dB.max()))
print(("Min/Mean/Max SINR value (3GPP):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_3gpp_dB.min(),
sinr_array_pl_3gpp_dB.mean(),
sinr_array_pl_3gpp_dB.max()))
print(("Min/Mean/Max SINR value (Free Space):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_free_space_dB.min(),
sinr_array_pl_free_space_dB.mean(),
sinr_array_pl_free_space_dB.max()))
print(("Min/Mean/Max SINR value (METIS PS7):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_metis_ps7_dB.min(),
sinr_array_pl_metis_ps7_dB.mean(),
sinr_array_pl_metis_ps7_dB.max()))
Explanation: Print Min/Mean/Max SIR values (no noise)
End of explanation
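Note that the means above are taken directly over dB values; averaging in linear scale first gives a different (usually larger) number. A quick sketch of the alternative for the 3GPP case:
mean_linear = np.mean(10 ** (sinr_array_pl_3gpp_dB / 10.))
print("Mean SINR (averaged in linear scale): {0} dB".format(10 * np.log10(mean_linear)))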
fig1, ax1 = plt.subplots(figsize=(10, 8))
print("Max SINR: {0}".format(sinr_array_pl_nothing_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_nothing_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_nothing_dB.mean()))
im1 = ax1.imshow(sinr_array_pl_nothing_dB2, interpolation='nearest', vmax=-1.5, vmin=-5)
fig1.colorbar(im1)
plt.show()
Explanation: Create the Plots for the different cases
First we will create the plots for a noise variance equal to zero.
Plot case without path loss (only wall loss)
End of explanation
fig2, ax2 = plt.subplots(figsize=(10, 8))
print("Max SINR: {0}".format(sinr_array_pl_3gpp_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_3gpp_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_3gpp_dB.mean()))
im2 = ax2.imshow(sinr_array_pl_3gpp_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig2.colorbar(im2)
plt.show()
Explanation: Plot case with 3GPP path loss
End of explanation
fig3, ax3 = plt.subplots(figsize=(10, 8))
print("Max SINR: {0}".format(sinr_array_pl_free_space_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_free_space_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_free_space_dB.mean()))
im3 = ax3.imshow(sinr_array_pl_free_space_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig3.colorbar(im3)
plt.show()
Explanation: Case with Free Space Path Loss
End of explanation
fig4, ax4 = plt.subplots(figsize=(10, 8))
print("Max SINR: {0}".format(sinr_array_pl_metis_ps7_dB.max()))
print("Min SINR: {0}".format(sinr_array_pl_metis_ps7_dB.min()))
print("Mean SINR: {0}".format(sinr_array_pl_metis_ps7_dB.mean()))
im4 = ax4.imshow(sinr_array_pl_metis_ps7_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig4.colorbar(im4)
plt.show()
Explanation: Plot case with METIS PS7 path loss
End of explanation
@interact(Pt_dBm=(0., 40., 5.), noise_power_dBm=(-160., 0.0, 5.), pl_model=['nothing', '3gpp', 'free_space', 'metis'], ap_decimation=['1', '2', '4', '9'])
def plot_SINRs(Pt_dBm=30., noise_power_dBm=-160, pl_model='3gpp', ap_decimation=1):
scenario_params = {
'side_length': 10, # 10 meters side length
'single_wall_loss_dB': 5,
'num_rooms_per_side': 12,
'ap_decimation': int(ap_decimation)}
power_params = {
'Pt_dBm': Pt_dBm, # 20 dBm transmit power
'noise_power_dBm': noise_power_dBm # Very low noise power
}
out = perform_simulation_SINR_heatmap(scenario_params, power_params)
(sinr_array_pl_nothing_dB,
sinr_array_pl_3gpp_dB,
sinr_array_pl_free_space_dB,
sinr_array_pl_metis_ps7_dB) = out
#sinr_array_pl_nothing_dB, sinr_array_pl_3gpp_dB, sinr_array_pl_free_space_dB, sinr_array_pl_metis_ps7_dB = calc_SINRs(Pt_dBm, noise_var)
fig, ax = plt.subplots(figsize=(10, 8))
if pl_model == 'nothing':
im = ax.imshow(sinr_array_pl_nothing_dB2, interpolation='nearest', vmax=-1.5, vmin=-5.)
fig.colorbar(im)
plt.show()
print(("Min/Mean/Max SINR value (no PL):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_nothing_dB.min(),
sinr_array_pl_nothing_dB.mean(),
sinr_array_pl_nothing_dB.max()))
elif pl_model == '3gpp':
im = ax.imshow(sinr_array_pl_3gpp_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig.colorbar(im)
ax.set_title('ka')
plt.show()
print(("Min/Mean/Max SINR value (3GPP):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_3gpp_dB.min(),
sinr_array_pl_3gpp_dB.mean(),
sinr_array_pl_3gpp_dB.max()))
elif pl_model == 'free_space':
im = ax.imshow(sinr_array_pl_free_space_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig.colorbar(im)
plt.show()
print(("Min/Mean/Max SINR value (Free Space):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_free_space_dB.min(),
sinr_array_pl_free_space_dB.mean(),
sinr_array_pl_free_space_dB.max()))
elif pl_model == 'metis':
im = ax.imshow(sinr_array_pl_metis_ps7_dB2, interpolation='nearest', vmax=30, vmin=-2.5)
fig.colorbar(im)
plt.show()
print(("Min/Mean/Max SINR value (METIS PS7):"
"\n {0}\n {1}\n {2}").format(
sinr_array_pl_metis_ps7_dB.min(),
sinr_array_pl_metis_ps7_dB.mean(),
sinr_array_pl_metis_ps7_dB.max()))
else:
raise ValueError('Invalid path loss model: {0}'.format(pl_model))
Explanation: Create the plots with interact
Here we repeat the plots, but now using IPython interact. This allows us to change input parameters and see the result in the plot.
End of explanation |
8,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation d'un distribution Python
Il existe plusieurs distributions de Python à destination des scientifiques
Step1: Scalaires
Les types numériques int et float
Step2: Nombres complexes
Step3: Question
Step4: Pourquoi
python
a == b
retourne vrai alors que
python
a is b
retourne faux?
Test "If"
Step5: Boucle "For"
Step6: Boucle "While"
Step7: Exercice
Step8: Le concaténation des caractères se fait avec le signe +
Step9: Les caractères disposent de méthodes de manipulation, remplacement et de recherche
Step10: Les chaînes de caractères sont en fait des listes de caractères
Step11: Une version plus efficace de la lecture de fichier est d'utiliser le mot clef with.
Celui-ci crée un sous bloc de code qui, quoi qu'il arrive, exécute une action avant la sortie de code.
Utilisé en conjonction avec open, with fermera toujours le fichier en fin de code.
Step12: Conversion de nombres en textes
Step13: Il existe une autre syntaxe qui permet des expressions plus complexes
Step14: Pour aller plus loin avec les chaînes de caractères
Step15: La liste est mutable, il est possible de lui ajouter des éléments
Step16: Attention les variables en Python sont des pointeurs, c'est à dire une adresse qui pointe vers un objet dans la mémoire.
Step17: Pour être sûr de créer un nouvel objet, il faut utiliser le module copy
Step18: La liste n'est qu'un container, elle ne réalise peu ou pas d'opérations numériques.
La liste n'est pas l'équivalent du vecteur (on verra plus tard le vecteur dans numpy)
Step19: Il existe des fonctions de création de liste comme range
Step20: Les listes possèdent également des méthodes de tri et de recherche d'éléments
Step21: Listes en compréhension
(comprehensive list)
Il s'agit d'une syntaxe spécifique pour créer des listes
Step22: Les tuples
Les tuples sont des listes immutables
Step23: Les dictionnaires
Step24: On peut créer un dictionnaire directement
Step25: Ou bien à partir d'une liste de (clefs, valeurs)
Step26: Exercice
Step27: Exercice
Step28: Les dictionnaires en compréhension
(comprehensive dictionnary)
Step29: Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Les variables et les objets
Une variable Python est un pointeur, c'est à dire une adresse vers un objet dans la mémoire vive de votre OS.
En Python, tout est objet (une liste est un objet, un module est un objet, une classe est un objet...).
Une variable peut être de plusieurs types
Step30: Un objet Python possède
Step31: Les méthodes et les attributs commençant et finissant par "__" sont des méthodes spéciales.
Par exemple, Python interprete toujours la méthode __add__ comme la méthode permettant d'additioner un objet avec un autre.
La méthode __doc__ est appelée sur un objet lorsque que l'on invoque la commande help(objet)
Pour acceder à la liste des méthodes et des attributs d'un objet, vous pouvez taper dir(objet). En mode interactif dans Ipython, il suffit de taper un point à la suite du nom d'un objet et de faire tab.
<img src="tab.png"/>
Les importations de packages
Python est modulaire, il repose sur un écosystème de packages permettant d'enrichir ses fonctionnalités (ie
Step32: Pour alléger les notations, il est possible de faire de imports relatifs
Step33: Exercice
Step35: Notez que a n'était pas passé en argument...
Les variables à l'extérieur des fonctions sont accessible dans la fonction... mais pas l'inverse
fun1 ne reroune aucune valeur, son résultat est
Python
None
Step36: On peut aussi définir des fonctions avec une liste d'arguments variable
Step38: Documentation de fonction avec le docstring (les triples quotes au début de la définition de la fonction)
Step39: regarder l'aide de empty_documented_function
<img src="data_type.png"> </img>
Mise en page des équations | Python Code:
print "Hello world"
print '1', # la virgule empèche le saut de ligne après le print
print '2'
Explanation: Installation d'un distribution Python
Il existe plusieurs distributions de Python à destination des scientifiques :
Anaconda
Canopy
Python(x,y)
Chacune de ces distributions est disponible pour un grand nombre d'architectures 32 et 64 bits (Linux, Windows, Mac OS...).
Courant 2014, Anaconda semble être l'offre la plus simple d'accès et qui offre la meilleure compatibilité multi-plateformes.
Il s'agit de la distribution installée par défaut sur les machines de la salle de TP.
Installation de packages supplémentaires :
Chacune de ces distributions comprend un ensemble de packages scientifiques (numpy, scipy, pandas...) et d'outils de développement (Spyder, Ipython). Python étant un langage ouvert, il est possible d'installer d'autres packages issus de la communauté ou de collègues.
Plusieurs outils de gestion des paquets existent. Par exemple, pour installer seaborn :
bash
pip install seaborn
ou
conda install seaborn (uniquement pour la distribution Anaconda)
Ces outils se connectent à des dépots en ligne et gèrent intégralement le téléchargement des sources et leur installation.
Si un paquet n'est pas disponible dans les dépôts officiels, il existe plusieurs alternatives :
Télécharger les sources et les installer manuellement avec la commande
bash
python setup.py install
Prise en main de Spyder
Spyder est un environnement graphique de développement inspiré de l'interface Matlab.
<img src='spyder_.png'/>
Ses fonctionnalités comprennent :
Edition et éxecution de fichiers sources
Exploration de fichiers et de variables
Profiling du code
Affichage de l'aide
Gestion des variables d'environnement
...
Création de fichier :
Créez un nouveau fichier en cliquant sur l'icône en haut à gauche ou avec le raccourci CTRL+N
Tapez
python
print "hello world"
Exécutez-le en tapant F5...
... ou n'exécutez que la ligne en cours de sélection avec F9
Le résultat de l'exécution s'affiche dans la console
TP 1 : Types, variables et opérations de contrôle :
Ouvrez le fichier data_type.py contenu dans l'archive avec Spyder
Print
End of explanation
a = 1 # int
b = 1. # float
print a
print b
# Les variables sont sensibles à la casse
print A
print(1+1.)
print 1+1
print 1/2
print 1./2
Explanation: Scalaires
Les types numériques int et float
End of explanation
c = 1+2j
print c.real # real et un attribut de l'objet c
print c.imag # imag et un attribut de l'objet c
print c.conjugate() # Pourquoi des parenthèses ici? conjugate est une méthode de c
print c*c
print a+c
Explanation: Nombres complexes
End of explanation
print True + True #True vaut 1, False vaut 0
print True & False
print (1 | False)
print not False
print a == b
print a is b
Explanation: Question :
calculer le module de c
Booléens
Les booléens sont un sous ensemble des entiers
End of explanation
if a == 1 :
print "a == 1" # Notez les 4 espaces après le retour à la ligne.
# Un groupe d'instruction est marqué par un même niveau d'indentation
else :
print "a <> 1"
if a == 1 :
if type(a) == type(1.0) :
print "a is a float"
elif type(a) == type(1):
print 'a is an integer'
else :
print "a <> 1"
Explanation: Pourquoi
python
a == b
retourne vrai alors que
python
a is b
retourne faux?
Test "If"
End of explanation
for i in [0,1,2,'a','test'] :
print i
range(3)
Explanation: Boucle "For"
End of explanation
i = 0
while i < 5:
i += 1 # identique à i = i + 1
print i
Explanation: Boucle "While"
End of explanation
print "a"
print 'b' # on peut aussi bien utiliser ' que "
print "aujourd'hui"
Explanation: Exercice :
Ecrire les multiples de 7 et de 3 de 0 à 500 (utiliser l'opérateur // pour le modulo)
Caractères et chaînes de caractères
End of explanation
print 'a'+'b'
print 'a'*5+'b'*4
Explanation: Le concaténation des caractères se fait avec le signe +
End of explanation
print 'banana'.split('a')
print 'banana'.replace('n','l')
print 'banana'.capitalize()
print 'banana'.upper()
print '--'.join(['1','2','3','4'])
print 'banana'.count('na')
print "Python".find('on')
Explanation: Les caractères disposent de méthodes de manipulation, remplacement et de recherche :
End of explanation
e = 'anticonstitutionnellement'
print len(e)
print e[0] # Comme en C, les listes commencent à 0
print e[-1] # l'élément -1 est le dernier élément, le -2 l'avant dernier...
print e[15:] # imprime tous les éléments après le 15ème
for i,char in enumerate(e) : # enumerate liste les éléments d'une liste et leur position
print i,char
e[0] = 'A' # les chaînes de charactères sont immutables
### Lecture d'un fichier ligne par ligne :
f = open('zen.txt','r')
print 'Ligne 0: '+ f.readline()
print 'Ligne 1: '+ f.readline()
print 'Ligne 2: '+ f.readline()
print 'Ligne 3: '+ f.readline()
f.close() # fermeture du fichier
# Si le fichier reste ouvert, il sera indisponible pour les autres programmes de l'OS
Explanation: Les chaînes de caractères sont en fait des listes de caractères :
End of explanation
with open('zen.txt','r') as f :
s = f.read()
print s
# à la fin execute f.__exit__() qui ferme le fichier
f.read()
Explanation: Une version plus efficace de la lecture de fichier est d'utiliser le mot clef with.
Celui-ci crée un sous bloc de code qui, quoi qu'il arrive, exécute une action avant la sortie de code.
Utilisé en conjonction avec open, with fermera toujours le fichier en fin de code.
End of explanation
print "%d"%42
print "La variable %s vaut %2.3f à l'itération %d"%('toto',3.14159,12.) # arguments positionnels
Explanation: Conversion de nombres en textes
End of explanation
print '{0}, {1}, {2}'.format('a', 'b', 'c')
print 'Coordinates: {latitude}N, {longitude}W {text}'.format(latitude=37.24, longitude=-115.81,text='Zen')
Explanation: Il existe une autre syntaxe qui permet des expressions plus complexes :
End of explanation
g = [1,2,3]
print g
print g[0] # Comme en C, les listes commencent par 0
g[1]='deux' # les listes sont modifiables ("mutable")
print g
print g[1][0] # accède au premier élément de l'élément 2 de la liste
print g[-1]
Explanation: Pour aller plus loin avec les chaînes de caractères :
Expressions régulières avec le module re (import re)
Les listes
Ce sont des containers, c'est à dire que les listes contiennent d'autres objets, peu importe leur type.
Les listes sont des objets mutables, c'est à dire qu'elles sont modifiables après leur création
End of explanation
g.append(4)
print g
g.extend([5,6,7])
print g
Explanation: La liste est mutable, il est possible de lui ajouter des éléments
End of explanation
print g
g2 = g
g2[0] = 'un'
print g2
print g # La modification de la liste g2 entraine la modification de la liste g ... g et g2 pointent toutes les deux vers la même variable
Explanation: Attention les variables en Python sont des pointeurs, c'est à dire une adresse qui pointe vers un objet dans la mémoire.
End of explanation
from copy import copy
g2 = copy(g)
print g
g2[-1] = 'sept'
print g2
print g
Explanation: Pour être sûr de créer un nouvel objet, il faut utiliser le module copy
End of explanation
print g+g
print g*3
# La liste se comporte comme la chaine de caractère. + est une opération de concaténation
Explanation: La liste n'est qu'un container, elle ne réalise peu ou pas d'opérations numériques.
La liste n'est pas l'équivalent du vecteur (on verra plus tard le vecteur dans numpy)
End of explanation
h = range(1,4)
print h
Explanation: Il existe des fonctions de création de liste comme range
End of explanation
h.append(0)
print h
h.sort()
print h
h.reverse()
print h
# Exercice : fabriquer une liste d'entier allant de 42 à 99 par pas de 4
Explanation: Les listes possèdent également des méthodes de tri et de recherche d'éléments
End of explanation
cl = [ i**2 for i in range(5) ]
print cl
Explanation: Listes en compréhension
(comprehensive list)
Il s'agit d'une syntaxe spécifique pour créer des listes :
End of explanation
# Les tuples
g = (1,2,3) # on utilise des parenthèses à la place des crochets pour la création
print g
print g[0] # mais on indice des la même manière
g[1]='test' # les tuples ne sont pas modifiables ("immutable")
print g+g
print g*3
print g == [1,2,3]
print g == (1,2,3)
g.sort()
Explanation: Les tuples
Les tuples sont des listes immutables
End of explanation
k = {} # dictionnaire vide
k['A'] = [0,1,2,'trois',['toto']]
k["text"] = 'zen'
k[(0,1)] = 'tuple'
print k
k[[0,1]] = 'list' # renvoie une erreur
print k['A'] # retrouve la valeur associée à la clef 'A'
k.keys() # retourne la liste des clefs
k.values() # retourne la liste des valeurs
k.items() #retourne la liste des paires clefs values sous la forme d'une liste de tuples
Explanation: Les dictionnaires :
Les dictionnaires sont des containers qui relient une clef à une valeur.
Les dictionnaires reposent sur des tables de hachage et ont une complexité en temps de l'ordre de $\mathcal{O}(1)$, c'est à dire que quelle que soit la taille d'un dictionnaire, le temps pour retrouver un élément est constant.
Cette performance vient avec une limitation : les clefs d'un dictionnaire sont nécessairement immutables (pour être précis, elles doivent être hashable). On ne peut donc pas utiliser de listes comme clef. On peut cependant utiliser un tuple.
Il n'y a pas de limitation sur les valeurs. Tout les objets peuvent être stockés dans un dictionnaire.
End of explanation
k2 = {1:'A',
2:'B',
3:'C'}
print k2
Explanation: On peut créer un dictionnaire directement :
End of explanation
k3 = dict( [('T','Température'),
('P','Puissance'),
('H','humidité')])
print k3
print 'T' in k3 # T est une clefs de k3
print 't' in k3
Explanation: Ou bien à partir d'une liste de (clefs, valeurs)
End of explanation
import string
print string.ascii_lowercase
Explanation: Exercice :
En utilisant la fonction enumerate, créez un dictionnaire qui associe à chaque chiffre de 0 à 25 la lettre de l'alphabet correspondante
End of explanation
from this import d
print d
Explanation: Exercice :
Décryptez le texte stocké dans le fichier zen.txt à l'aide du dictionnaire d
End of explanation
k4 = {k: k**2 for k in range(10)}
print k4
Explanation: Les dictionnaires en compréhension
(comprehensive dictionnary)
End of explanation
a = 1
b = 2
print a
print b
a,b = b,a # permutation de variables
print a,b
a+b
Explanation: Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Les variables et les objets
Une variable Python est un pointeur, c'est à dire une adresse vers un objet dans la mémoire vive de votre OS.
En Python, tout est objet (une liste est un objet, un module est un objet, une classe est un objet...).
Une variable peut être de plusieurs types : modifiable (mutable) ou non modifiable (immutable).
End of explanation
a.__doc__ # La documentation d'un objet est disponible dans son attribut __doc__
print a.__add__ # la méthode __add__ permet d'additionner des variables entre elles.
print a.__add__(b)
Explanation: Un objet Python possède :
des attributs : càd des informations qui le définissent
des méthodes : càd des fonctions permettant de le manipuler
End of explanation
import datetime
start = datetime.datetime(2015,3,4,8,30)
end = datetime.datetime(2015,3,4,18,30)
t = end-start
print start, end
print t
print start.isoweekday()
print start.strftime('%A %B %Y %H::%m::%S')
print t.seconds
Explanation: Les méthodes et les attributs commençant et finissant par "__" sont des méthodes spéciales.
Par exemple, Python interprete toujours la méthode __add__ comme la méthode permettant d'additioner un objet avec un autre.
La méthode __doc__ est appelée sur un objet lorsque que l'on invoque la commande help(objet)
Pour acceder à la liste des méthodes et des attributs d'un objet, vous pouvez taper dir(objet). En mode interactif dans Ipython, il suffit de taper un point à la suite du nom d'un objet et de faire tab.
<img src="tab.png"/>
Les importations de packages
Python est modulaire, il repose sur un écosystème de packages permettant d'enrichir ses fonctionnalités (ie : toolbox matlab gratuite)
Exemple : Le package datetime permet la manipulation des dates et du temps
End of explanation
from datetime import datetime, timedelta
dt = timedelta(hours=2,minutes=7,seconds=18)
print datetime.utcnow()
print datetime.utcnow()+dt
Explanation: Pour alléger les notations, il est possible de faire de imports relatifs :
End of explanation
def fun2(a,b):
return a*b
print fun2('fun',5)
print fun2(12,5) # appel de fonction avec argument positionnel
print fun2(b=5,a=12) # appel de fonction avec argument par mot clef
print a # la valeur de la variable a n'a pas changé
print a
def fun1():
print a
pass
print fun1()
Explanation: Exercice :
En utilisant les sous-modules datetime et timedelta, calculer le nombre de secondes entre le 1er juin 2012 et le début de cette formation.
Combien de jours cela fait-il?
Les fonctions
End of explanation
def fun3(a=0):
return a
print fun3() # Argument par défault
print fun3(4)
def fun4(param0, param1 = 1, param2 = 3.14 ):
Documentation ici
return param0*param1*param2, param0*param1
res1,res2 = fun4(1)
print res1, res2
print fun4(param0 = 1)
print fun4(param0=1, param1 = 1, param2 = 3.14 )
# Toutes les lignes sont strictement équivalentes
help(fun4)
Explanation: Notez que a n'était pas passé en argument...
Les variables à l'extérieur des fonctions sont accessible dans la fonction... mais pas l'inverse
fun1 ne reroune aucune valeur, son résultat est
Python
None
End of explanation
def fun5(req0,req1,opt0=0.0, opt1='c',opt2={},**kwargs):
print req0
print req1
print opt0
print opt1
print opt2
print kwargs # key worded arguments
return None
print fun5(req0='a',req1='b',opt0=1,opt1=2,opt2=3,**{'toto':7,'tata':8})
Explanation: On peut aussi définir des fonctions avec une liste d'arguments variable
End of explanation
def empty_documented_function(required_int,optional_float=0.0, optional_string='CSTB', *kargs ,**kwargs):
u
Mettre ici la description de ce que la fonction fait.
Des équations en ligne ici :math:`T_{ext}(t)\\geq0^{\\circ}C`
Des équations centrées ici :
.. math:: e_{sat}(t) = 610.5e^{\\frac{17.269.T_{ext}(t)}{273.3+T_{ext}(t)}}
Parameters
----------
required_int: int
required_int how great is the CSTB [1]_
optional_float : float
optional_float is an optional float. Default value is 0.0.
optional_string : string
optional_string is an optional string of character. Its default value is
*kargs : a list
Output shape. If the given shape is, e.g., ``(m, n, k)``, then
``m * n * k`` samples are drawn.
See Also
--------
myModule.myOtherFunctionDoingSimilarThing : also great.
References
----------
.. [1] CSTB, "Web site",
http://www.cstb.fr/
Examples
--------
>>> s = empty_function(2)
>>> len(s)
s = ' '.join(['CSTB is']+['great']*required_int+['!'])
print s
return s
if __name__ == "__main__" :
print(u'Le module est executé en tant que script')
else :
print(u'Le module est importé')
Explanation: Documentation de fonction avec le docstring (les triples quotes au début de la définition de la fonction)
End of explanation
%%javascript
console.log("Hello World!")
Explanation: regarder l'aide de empty_documented_function
<img src="data_type.png"> </img>
Mise en page des équations :
http://martinkeefe.com/math/mathjax1
Packages par thèmes :
Matrice, algère linéaire, optimization, traitement du signal :
numpy http://wiki.scipy.org/NumPy_for_Matlab_Users
scipy https://docs.scipy.org/doc/scipy-0.14.0/reference/tutorial/index.html
Séries temporelles, données tabulaires : Pandas http://pandas.pydata.org/pandas-docs/dev/tutorials.html
Statistiques : statsmodels http://statsmodels.sourceforge.net/
Apprentissage machine : scikitlearn http://scikit-learn.org/stable/
Processeur solaire : Pysolar https://github.com/pingswept/pysolar/tree/0.6
Graphes :
matplotlib http://matplotlib.org/gallery.html
seaborn http://stanford.edu/~mwaskom/software/seaborn/examples/index.html
bokeh http://bokeh.pydata.org/en/latest/tutorial/
plotly (online) https://plot.ly/python/
Cartographie :
basemap http://matplotlib.org/basemap/
folium http://bl.ocks.org/wrobstory/5609856
Analyse de sensibilité et propagation d'incertitude : OpenTurns (outils EDF) http://www.openturns.org/
Inversion Bayesienne : Pymc http://nbviewer.ipython.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter1_Introduction/Chapter1.ipynb
Calcul symbolique : Sympy http://docs.sympy.org/latest/tutorial/intro.html
Automatisation du travail sous EnergyPlus : EPPY, http://www.datadrivenbuilding.org/
Lecture / écriture de fichier xml : lxml http://lxml.de/tutorial.html
Requête http : requests http://docs.python-requests.org/en/latest/user/quickstart/
Traitement de document html : beautifulSoup http://www.crummy.com/software/BeautifulSoup/bs4/doc/
End of explanation |
8,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read in the data
Step1: Read in the surveys
Step2: Add DBN columns
Step3: Convert columns to numeric
Step4: Condense datasets
Step5: Convert AP scores to numeric
Step6: Combine the datasets
Step7: Add a school district column for mapping
Step8: Find correlations
Step9: Survey Correlations
Step10: From the survey fields, two stand out due to their significant positive correlations
Step11: So a high saf_s_11 student safety and respect score doesn't really have any predictive value regarding SAT score. However, a low saf_s_11 has a very strong correlation with low SAT scores.
Map out Safety Scores
Step12: So it looks like the safest schools are in Manhattan, while the least safe schools are in Brooklyn.
This jives with crime statistics by borough
Race and SAT Scores
There are a few columns that indicate the percentage of each race at a given school
Step13: A higher percentage of white and asian students correlates positively with SAT scores and a higher percentage of black or hispanic students correlates negatively with SAT scores. I wouldn't say any of this is surprising. My guess would be that there is an underlying economic factor which is the cause - white and asian neighborhoods probably have a higher median household income and more well funded schools than black or hispanic neighborhoods.
Step14: The above scatterplot shows that a low hispanic percentage isn't particularly predictive of SAT score. However, a high hispanic percentage is highly predictive of a low SAT score.
Step15: The above schools appear to contain a lot of international schools focused on recent immigrants who are learning English as a 2nd language. It makes sense that they would have a harder time on the SAT which is given soley in English.
Step16: Most of the schools above appear to be specialized science and technology schools which receive extra funding and require students to do well on a standardized test before being admitted. So it is reasonable that students at these schools would have a high average SAT score.
Gender and SAT Scores
There are two columns that indicate the percentage of each gender at a school
Step17: In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong.
More data would be required before I was willing to say that this is a significant effect.
Step18: The above plot appears to show that either very low or very high percentage of females in a school leads to a low average SAT score. However, a percentage in the range 40 to 80 or so can lead to good scores. There doesn't appear to be a strong overall correlation.
Step19: These schools appear to be very selective liberal arts schools that have high academic standards.
AP Scores vs SAT Scores
The Advanced Placement (AP) exams are exams that high schoolers take in order to gain college credit. AP exams can be taken in many different subjects, and passing the AP exam means that colleges may grant you credits.
It makes sense that the number of students who took the AP exam in a school and SAT scores would be highly correlated. Let's dig into this relationship more.
Since total_enrollment is highly correlated with sat_score, we don't want to bias our results, so we'll instead look at the percentage of students in each school who took at least one AP exam. | Python Code:
import pandas as pd
import numpy as np
import re
data_files = ["ap_2010.csv",
"class_size.csv",
"demographics.csv",
"graduation.csv",
"hs_directory.csv",
"sat_results.csv"]
data = {}
for f in data_files:
d = pd.read_csv("../data/schools/{0}".format(f))
data[f.replace(".csv", "")] = d
Explanation: Read in the data
End of explanation
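A quick sanity check on what was loaded (this assumes the CSV files are present under ../data/schools/):
for key, df in data.items():
    print(key, df.shape)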
all_survey = pd.read_csv("../data/schools/survey_all.txt", delimiter="\t", encoding='windows-1252')
d75_survey = pd.read_csv("../data/schools/survey_d75.txt", delimiter="\t", encoding='windows-1252')
survey = pd.concat([all_survey, d75_survey], axis=0)
survey["DBN"] = survey["dbn"]
survey_fields = [
"DBN",
"rr_s",
"rr_t",
"rr_p",
"N_s",
"N_t",
"N_p",
"saf_p_11",
"com_p_11",
"eng_p_11",
"aca_p_11",
"saf_t_11",
"com_t_11",
"eng_t_10",
"aca_t_11",
"saf_s_11",
"com_s_11",
"eng_s_11",
"aca_s_11",
"saf_tot_11",
"com_tot_11",
"eng_tot_11",
"aca_tot_11",
]
survey = survey.loc[:,survey_fields]
data["survey"] = survey
Explanation: Read in the surveys
End of explanation
data["hs_directory"]["DBN"] = data["hs_directory"]["dbn"]
def pad_csd(num):
string_representation = str(num)
if len(string_representation) > 1:
return string_representation
else:
return "0" + string_representation
data["class_size"]["padded_csd"] = data["class_size"]["CSD"].apply(pad_csd)
data["class_size"]["DBN"] = data["class_size"]["padded_csd"] + data["class_size"]["SCHOOL CODE"]
Explanation: Add DBN columns
End of explanation
cols = ['SAT Math Avg. Score', 'SAT Critical Reading Avg. Score', 'SAT Writing Avg. Score']
for c in cols:
data["sat_results"][c] = pd.to_numeric(data["sat_results"][c], errors="coerce")
data['sat_results']['sat_score'] = data['sat_results'][cols[0]] + data['sat_results'][cols[1]] + data['sat_results'][cols[2]]
def find_lat(loc):
coords = re.findall("\(.+, .+\)", loc)
lat = coords[0].split(",")[0].replace("(", "")
return lat
def find_lon(loc):
coords = re.findall("\(.+, .+\)", loc)
lon = coords[0].split(",")[1].replace(")", "").strip()
return lon
data["hs_directory"]["lat"] = data["hs_directory"]["Location 1"].apply(find_lat)
data["hs_directory"]["lon"] = data["hs_directory"]["Location 1"].apply(find_lon)
data["hs_directory"]["lat"] = pd.to_numeric(data["hs_directory"]["lat"], errors="coerce")
data["hs_directory"]["lon"] = pd.to_numeric(data["hs_directory"]["lon"], errors="coerce")
Explanation: Convert columns to numeric
End of explanation
class_size = data["class_size"]
class_size = class_size[class_size["GRADE "] == "09-12"]
class_size = class_size[class_size["PROGRAM TYPE"] == "GEN ED"]
class_size = class_size.groupby("DBN").agg(np.mean)
class_size.reset_index(inplace=True)
data["class_size"] = class_size
data["demographics"] = data["demographics"][data["demographics"]["schoolyear"] == 20112012]
data["graduation"] = data["graduation"][data["graduation"]["Cohort"] == "2006"]
data["graduation"] = data["graduation"][data["graduation"]["Demographic"] == "Total Cohort"]
Explanation: Condense datasets
End of explanation
cols = ['AP Test Takers ', 'Total Exams Taken', 'Number of Exams with scores 3 4 or 5']
for col in cols:
data["ap_2010"][col] = pd.to_numeric(data["ap_2010"][col], errors="coerce")
Explanation: Convert AP scores to numeric
End of explanation
combined = data["sat_results"]
combined = combined.merge(data["ap_2010"], on="DBN", how="left")
combined = combined.merge(data["graduation"], on="DBN", how="left")
to_merge = ["class_size", "demographics", "survey", "hs_directory"]
for m in to_merge:
combined = combined.merge(data[m], on="DBN", how="inner")
combined = combined.fillna(combined.mean())
combined = combined.fillna(0)
Explanation: Combine the datasets
End of explanation
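Before moving on, it is worth confirming that the merges kept a reasonable number of schools and that no missing values remain after the fills:
print(combined.shape)
print(combined.isnull().sum().sum())  # should be 0 after the fillna calls above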
def get_first_two_chars(dbn):
return dbn[0:2]
combined["school_dist"] = combined["DBN"].apply(get_first_two_chars)
Explanation: Add a school district column for mapping
End of explanation
correlations = combined.corr()
correlations = correlations["sat_score"]
correlations = correlations.dropna()
correlations.sort_values(ascending=False, inplace=True)
# Interesting correlations tend to have r value > .25 or < -.25
interesting_correlations = correlations[abs(correlations) > 0.25]
print(interesting_correlations)
# Setup Matplotlib to work in Jupyter notebook
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Find correlations
End of explanation
# Make a bar plot of the correlations between survey fields and sat_score
correlations[survey_fields].plot.bar(figsize=(9,7))
Explanation: Survey Correlations
End of explanation
# Make a scatterplot of the saf_s_11 column vs the sat-score in combined
combined.plot.scatter(x='sat_score', y='saf_s_11', figsize=(9,5))
Explanation: From the survey fields, four stand out due to their significant positive correlations:
* N_s - Number of student respondents
* N_p - Number of parent respondents
* aca_s_11 - Academic expectations score based on student responses
* saf_s_11 - Safety and Respect score based on student responses
What are some possible reasons that N_s and N_p could matter?
1. Higher numbers of students and parents responding to the survey may be an indicator that students and parents care more about the school and about academics in general.
1. Maybe larger schools do better on the SAT and higher numbers of respondents are just indicative of a larger overall student population.
1. Maybe there is a hidden underlying correlation, say that rich students/parents or white students/parents are more likely to both respond to surveys and to have the students do well on the SAT.
1. Maybe parents who care more will fill out the surveys and get their kids to fill out the surveys and these same parents will push their kids to study for the SAT.
Safety and SAT Scores
Both student and teacher perception of safety and respect at school correlate significantly with SAT scores. Let's dig more into this relationship.
End of explanation
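# A quick numeric check of the safety/SAT relationship discussed above, using the
# saf_s_11 column already present in combined (Pearson r via pandas).
print("Pearson r, saf_s_11 vs sat_score:", combined['saf_s_11'].corr(combined['sat_score']))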
# Find the average values for each column for each school_dist in combined
districts = combined.groupby('school_dist').agg(np.mean)
# Reset the index of districts, making school_dist a column again
districts.reset_index(inplace=True)
# Make a map that shows safety scores by district
from mpl_toolkits.basemap import Basemap
plt.figure(figsize=(8,8))
# Setup the Matplotlib Basemap centered on New York City
m = Basemap(projection='merc',
llcrnrlat=40.496044,
urcrnrlat=40.915256,
llcrnrlon=-74.255735,
urcrnrlon=-73.700272,
resolution='i')
m.drawmapboundary(fill_color='white')
m.drawcoastlines(color='blue', linewidth=.4)
m.drawrivers(color='blue', linewidth=.4)
# Convert the lat and lon columns of districts to lists
longitudes = districts['lon'].tolist()
latitudes = districts['lat'].tolist()
# Plot the locations
m.scatter(longitudes, latitudes, s=50, zorder=2, latlon=True,
c=districts['saf_s_11'], cmap='summer')
# Add colorbar
cbar = m.colorbar(location='bottom',pad="5%")
cbar.set_label('saf_s_11')
Explanation: So a high saf_s_11 student safety and respect score doesn't really have any predictive value regarding SAT score. However, a low saf_s_11 has a very strong correlation with low SAT scores.
Map out Safety Scores
End of explanation
# Make a plot of the correlations between racial cols and sat_score
race_cols = ['white_per', 'asian_per', 'black_per', 'hispanic_per']
race_corr = correlations[race_cols]
race_corr.plot(kind='bar')
Explanation: So it looks like the safest schools are in Manhattan, while the least safe schools are in Brooklyn.
This jibes with crime statistics by borough.
Race and SAT Scores
There are a few columns that indicate the percentage of each race at a given school:
* white_per
* asian_per
* black_per
* hispanic_per
By plotting out the correlations between these columns and sat_score, we can see if there are any racial differences in SAT performance.
End of explanation
# Explore schools with low SAT scores and a high hispanic_per
combined.plot.scatter(x='hispanic_per', y='sat_score')
Explanation: A higher percentage of white and asian students correlates positively with SAT scores, and a higher percentage of black or hispanic students correlates negatively with SAT scores. I wouldn't say any of this is surprising. My guess would be that there is an underlying economic factor which is the cause - white and asian neighborhoods probably have a higher median household income and more well-funded schools than black or hispanic neighborhoods.
End of explanation
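# One way to probe the economic hypothesis above; this assumes the merged demographics
# data kept a free/reduced-price lunch column (frl_percent is a guess at the name), so
# the check is guarded to stay runnable either way.
frl_corr = combined['frl_percent'].corr(combined['sat_score']) if 'frl_percent' in combined.columns else None
print("Pearson r, frl_percent vs sat_score:", frl_corr)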
# Research any schools with a greater than 95% hispanic_per
high_hispanic = combined[combined['hispanic_per'] > 95]
# Find the names of schools from the data
high_hispanic['SCHOOL NAME']
Explanation: The above scatterplot shows that a low hispanic percentage isn't particularly predictive of SAT score. However, a high hispanic percentage is highly predictive of a low SAT score.
End of explanation
# Research any schools with less than 10% hispanic_per and greater than
# 1800 average SAT score
high_sat_low_hispanic = combined[(combined['hispanic_per'] < 10) &
(combined['sat_score'] > 1800)]
high_sat_low_hispanic['SCHOOL NAME']
Explanation: The above schools appear to contain a lot of international schools focused on recent immigrants who are learning English as a second language. It makes sense that they would have a harder time on the SAT, which is given solely in English.
End of explanation
# Investigate gender differences in SAT scores
gender_cols = ['male_per', 'female_per']
gender_corr = correlations[gender_cols]
gender_corr
# Make a plot of the gender correlations
gender_corr.plot.bar()
Explanation: Most of the schools above appear to be specialized science and technology schools which receive extra funding and require students to do well on a standardized test before being admitted. So it is reasonable that students at these schools would have a high average SAT score.
Gender and SAT Scores
There are two columns that indicate the percentage of each gender at a school:
* male_per
* female_per
End of explanation
# Investigate schools with high SAT scores and a high female_per
combined.plot.scatter(x='female_per', y='sat_score')
Explanation: In the plot above, we can see that a high percentage of females at a school positively correlates with SAT score, whereas a high percentage of males at a school negatively correlates with SAT score. Neither correlation is extremely strong.
More data would be required before I was willing to say that this is a significant effect.
End of explanation
# Research any schools with a greater than 60% female_per, and greater
# than 1700 average SAT score.
high_female_high_sat = combined[(combined['female_per'] > 60) &
(combined['sat_score'] > 1700)]
high_female_high_sat['SCHOOL NAME']
Explanation: The above plot appears to show that either very low or very high percentage of females in a school leads to a low average SAT score. However, a percentage in the range 40 to 80 or so can lead to good scores. There doesn't appear to be a strong overall correlation.
End of explanation
# Compute the percentage of students in each school that took the AP exam
combined['ap_per'] = combined['AP Test Takers '] / combined['total_enrollment']
# Investigate the relationship between AP scores and SAT scores
combined.plot.scatter(x='ap_per', y='sat_score')
Explanation: These schools appear to be very selective liberal arts schools that have high academic standards.
AP Scores vs SAT Scores
The Advanced Placement (AP) exams are exams that high schoolers take in order to gain college credit. AP exams can be taken in many different subjects, and passing the AP exam means that colleges may grant you credits.
It makes sense that the number of students who took the AP exam in a school and SAT scores would be highly correlated. Let's dig into this relationship more.
Since total_enrollment is highly correlated with sat_score, we don't want to bias our results, so we'll instead look at the percentage of students in each school who took at least one AP exam.
End of explanation |
8,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BigQuery query magic
Jupyter magics are notebook-specific shortcuts that allow you to run commands with minimal syntax. Jupyter notebooks come with many built-in commands. The BigQuery client library, google-cloud-bigquery, provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame.
Run a query on a public dataset
The following example queries the BigQuery usa_names public dataset. usa_names is a Social Security Administration dataset that contains all names from Social Security card applications for births that occurred in the United States after 1879.
The following example shows how to invoke the magic (%%bigquery), and how to pass in a standard SQL query in the body of the code cell. The results are displayed below the input cell as a pandas DataFrame.
Step1: Display verbose output
As the query job is running, status messages below the cell update with the query job ID and the amount of time the query has been running. By default, this output is erased and replaced with the results of the query. If you pass the --verbose flag, the output will remain below the cell after query completion.
Step2: Explicitly specify a project
By default, the %%bigquery magic command uses your default project to run the query. You may also explicitly provide a project ID using the --project flag. Note that your credentials must have permissions to create query jobs in the project you specify.
Step3: Assign the query results to a variable
To save the results of your query to a variable, provide a variable name as a parameter to %%bigquery. The following example saves the results of the query to a variable named df. Note that when a variable is provided, the results are not displayed below the cell that invokes the magic command.
Step4: Run a parameterized query
Parameterized queries are useful if you need to run a query with certain parameters that are calculated at run time. Note that the value types must be JSON serializable. The following example defines a parameters dictionary and passes it to the --params flag. The key of the dictionary is the name of the parameter, and the value of the dictionary is the value of the parameter. | Python Code:
%%bigquery
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
Explanation: BigQuery query magic
Jupyter magics are notebook-specific shortcuts that allow you to run commands with minimal syntax. Jupyter notebooks come with many built-in commands. The BigQuery client library, google-cloud-bigquery, provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame.
Run a query on a public dataset
The following example queries the BigQuery usa_names public dataset. usa_names is a Social Security Administration dataset that contains all names from Social Security card applications for births that occurred in the United States after 1879.
The following example shows how to invoke the magic (%%bigquery), and how to pass in a standard SQL query in the body of the code cell. The results are displayed below the input cell as a pandas DataFrame.
End of explanation
%%bigquery --verbose
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
Explanation: Display verbose output
As the query job is running, status messages below the cell update with the query job ID and the amount of time the query has been running. By default, this output is erased and replaced with the results of the query. If you pass the --verbose flag, the output will remain below the cell after query completion.
End of explanation
project_id = "your-project-id"
%%bigquery --project $project_id
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
Explanation: Explicitly specify a project
By default, the %%bigquery magic command uses your default project to run the query. You may also explicitly provide a project ID using the --project flag. Note that your credentials must have permissions to create query jobs in the project you specify.
End of explanation
%%bigquery df
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
df
Explanation: Assign the query results to a variable
To save the results of your query to a variable, provide a variable name as a parameter to %%bigquery. The following example saves the results of the query to a variable named df. Note that when a variable is provided, the results are not displayed below the cell that invokes the magic command.
End of explanation
params = {"limit": 10}
%%bigquery --params $params
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT @limit
Explanation: Run a parameterized query
Parameterized queries are useful if you need to run a query with certain parameters that are calculated at run time. Note that the value types must be JSON serializable. The following example defines a parameters dictionary and passes it to the --params flag. The key of the dictionary is the name of the parameter, and the value of the dictionary is the value of the parameter.
End of explanation |
8,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
\title{myHDL Implementation of a CIC Filter}
\author{Steven K Armour}
\maketitle
Step1: References
Christopher Felton who designed the original myHDL CIC filter found here https
Step2: Code to read back the generated Verilog(VHDL) from myHDL back into the Jupyter Notebook
Step3: Z-Transform preliminary math and Z-Plane graphing code
Step4: To improve
Step5: myHDL implementation
<img style='float
Step6: myHDL Testing
Step7: To Verilog
Step8: Comb
Theory
Step9: myHDL implementation
<img style='float
Step10: myHDL Testing
Step11: To Verilog
Step12: Integrator
Theory
Step13: myHDL implementation
<img style='float
Step14: myHDL Testing
Step15: To Verilog
Step16: Decimator
Theory
$$y(n)=x(Rn)$$
Step17: myHDL implementation
<img style='float
Step18: Testing
Step19: To Verilog
Step20: Interpolator
Theory
$$y(n)=\sum_{k=-\infty}^{\infty}x(k)\delta(n-kR)$$
Step21: myHDL implementation
<img style='float
Step22: muHDL Testing
Step23: To Verilog
Step24: Pass Through
myHDL implementation
Step25: myHDL Testing
Step26: To Verilog
Step27: Complete CIC Filter
Theory
Step28: myHDL implementation
<img style='float
Step29: myHDL Testing
(Need to come up with a test bench)
Chris | Python Code:
import numpy as np
np.seterr(divide='ignore', invalid='ignore')
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
#import plotly.plotly as py
#import plotly.graph_objs as go
from sympy import *
from sympy import S; Zero=S.Zero
init_printing()
import scipy.signal as sig
%matplotlib notebook
import ipywidgets as widg
from myhdl import *
from myhdlpeek import Peeker
Explanation: \title{myHDL Implementation of a CIC Filter}
\author{Steven K Armour}
\maketitle
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#References" data-toc-modified-id="References-1"><span class="toc-item-num">1 </span>References</a></span></li><li><span><a href="#Preliminaries" data-toc-modified-id="Preliminaries-2"><span class="toc-item-num">2 </span>Preliminaries</a></span></li><li><span><a href="#To-improve:" data-toc-modified-id="To-improve:-3"><span class="toc-item-num">3 </span>To improve:</a></span></li><li><span><a href="#Delay" data-toc-modified-id="Delay-4"><span class="toc-item-num">4 </span>Delay</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-4.1"><span class="toc-item-num">4.1 </span>Theory</a></span></li><li><span><a href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-4.2"><span class="toc-item-num">4.2 </span>myHDL implementation</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-4.3"><span class="toc-item-num">4.3 </span>myHDL Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-4.4"><span class="toc-item-num">4.4 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Comb" data-toc-modified-id="Comb-5"><span class="toc-item-num">5 </span>Comb</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-5.1"><span class="toc-item-num">5.1 </span>Theory</a></span></li><li><span><a href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-5.2"><span class="toc-item-num">5.2 </span>myHDL implementation</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-5.3"><span class="toc-item-num">5.3 </span>myHDL Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-5.4"><span class="toc-item-num">5.4 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Integrator" data-toc-modified-id="Integrator-6"><span class="toc-item-num">6 </span>Integrator</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-6.1"><span class="toc-item-num">6.1 </span>Theory</a></span></li><li><span><a href="#myHDL--implementation" data-toc-modified-id="myHDL--implementation-6.2"><span class="toc-item-num">6.2 </span>myHDL implementation</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-6.3"><span class="toc-item-num">6.3 </span>myHDL Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-6.4"><span class="toc-item-num">6.4 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Decimator" data-toc-modified-id="Decimator-7"><span class="toc-item-num">7 </span>Decimator</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-7.1"><span class="toc-item-num">7.1 </span>Theory</a></span></li><li><span><a href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-7.2"><span class="toc-item-num">7.2 </span>myHDL implementation</a></span></li><li><span><a href="#Testing" data-toc-modified-id="Testing-7.3"><span class="toc-item-num">7.3 </span>Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-7.4"><span class="toc-item-num">7.4 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Interpolator" data-toc-modified-id="Interpolator-8"><span class="toc-item-num">8 </span>Interpolator</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-8.1"><span class="toc-item-num">8.1 </span>Theory</a></span></li><li><span><a 
href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-8.2"><span class="toc-item-num">8.2 </span>myHDL implementation</a></span></li><li><span><a href="#muHDL-Testing" data-toc-modified-id="muHDL-Testing-8.3"><span class="toc-item-num">8.3 </span>muHDL Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-8.4"><span class="toc-item-num">8.4 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Pass-Through" data-toc-modified-id="Pass-Through-9"><span class="toc-item-num">9 </span>Pass Through</a></span><ul class="toc-item"><li><span><a href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-9.1"><span class="toc-item-num">9.1 </span>myHDL implementation</a></span></li><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-9.2"><span class="toc-item-num">9.2 </span>myHDL Testing</a></span></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-9.3"><span class="toc-item-num">9.3 </span>To Verilog</a></span></li></ul></li><li><span><a href="#Complete-CIC-Filter" data-toc-modified-id="Complete-CIC-Filter-10"><span class="toc-item-num">10 </span>Complete CIC Filter</a></span><ul class="toc-item"><li><span><a href="#Theory" data-toc-modified-id="Theory-10.1"><span class="toc-item-num">10.1 </span>Theory</a></span></li></ul></li><li><span><a href="#myHDL-implementation" data-toc-modified-id="myHDL-implementation-11"><span class="toc-item-num">11 </span>myHDL implementation</a></span><ul class="toc-item"><li><span><a href="#myHDL-Testing" data-toc-modified-id="myHDL-Testing-11.1"><span class="toc-item-num">11.1 </span>myHDL Testing</a></span><ul class="toc-item"><li><span><a href="#(Need-to-come-up-with-a-test-bench)" data-toc-modified-id="(Need-to-come-up-with-a-test-bench)-11.1.1"><span class="toc-item-num">11.1.1 </span>(Need to come up with a test bench)</a></span></li></ul></li><li><span><a href="#To-Verilog" data-toc-modified-id="To-Verilog-11.2"><span class="toc-item-num">11.2 </span>To Verilog</a></span></li></ul></li></ul></div>
End of explanation
BitWidth=32
Explanation: References
Christopher Felton who designed the original myHDL CIC filter found here https://github.com/jandecaluwe/site-myhdl-retired/blob/master/_ori/pages/projects/gciccomplete.txt
West Coast DSP blog article on CIC filters
https://westcoastdsp.wordpress.com/2015/09/07/cascaded-integrator-comb-filters/
Demystifying Hogenauer filters
Preliminaries
Target Architecture size
End of explanation
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog modual from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL modual from {loc}.vhd***\n\n', VerilogText)
return VerilogText
Explanation: Code to read back the generated Verilog(VHDL) from myHDL back into the Jupyter Notebook
End of explanation
z, r, DisAng=symbols('z, r, Omega')
zFunc=Eq(z, r*exp(1j*DisAng)); zFunc
zFuncN=lambdify((r, DisAng), zFunc.rhs, dummify=False)
zr=np.arange(-1.5, 1.5+.03, .03); zi=np.copy(zr)
zR, zI=np.meshgrid(zr, zi)
zN=zR+1j*zI
rN=1.0
AngThetaN=np.arange(0, 1+.005, .005)*2*np.pi
zAtR1=zFuncN(rN, AngThetaN)
%matplotlib notebook
def Zplot(zR, zI, HzNMag, HzAtR1NMag, HzNPhase, HzAtR1NPhase, title):
fig = plt.figure()
#plot the z space mag
axZmag = fig.add_subplot(221, projection='3d')
Mags=axZmag.plot_surface(zR, zI, HzNMag, cmap=plt.get_cmap('tab20'))
axZmag.plot(np.real(zAtR1), np.imag(zAtR1), HzAtR1NMag, 'r-', label='r=1')
axZmag.set_xlabel('Re'); axZmag.set_ylabel('Im'); axZmag.set_zlabel('Mag')
axZmag.legend(loc='best')
fig.colorbar(Mags)
#plot the z space phase
axZph = fig.add_subplot(222, projection='3d')
Phase=axZph.plot_surface(zR, zI, HzNPhase, cmap=plt.get_cmap('tab20'))
axZph.plot(np.real(zAtR1), np.imag(zAtR1), HzAtR1NPhase, 'r-', label='r=1')
axZph.set_xlabel('Re'); axZph.set_ylabel('Im'); axZph.set_zlabel('Phase')
axZph.legend(loc='best')
fig.colorbar(Phase)
axBodeM=fig.add_subplot(212)
Mline=axBodeM.plot(AngThetaN, HzAtR1NMag, label='FTMag')
axBodeM.set_ylabel('Mag')
axBodeP=axBodeM.twinx()
Pline=axBodeP.plot(AngThetaN, np.rad2deg(HzAtR1NPhase), 'g--', label='FTPhase')
axBodeP.set_ylabel('Phase [deg]')
axBodeP.set_xlabel('Ang')
lns = Mline+Pline
labs = [l.get_label() for l in lns]
axBodeP.legend(lns, labs, loc='best')
fig.suptitle(title)
fig.show()
Explanation: Z-Transform preliminary math and Z-Plane graphing code
End of explanation
M=symbols('M')
DelayH=z**(-M); DelayH
DelaySupH=simplify(DelayH.subs(zFunc.lhs, zFunc.rhs)); DelaySupH
DelaySupH=simplify(DelaySupH.subs(r, 1)); DelaySupH
DelayHN=lambdify((z, M), DelayH, dummify=False)
Mvalue=2
HzN=DelayHN(zN, M=Mvalue); HzN.shape
HzNMag=np.abs(HzN); HzNPhase=np.angle(HzN)
HAtR1=zFuncN(rN, AngThetaN)
HzAtR1N=DelayHN(zAtR1, M=Mvalue)
HzAtR1NMag=np.abs(HzAtR1N); HzAtR1NPhase=np.angle(HzAtR1N)
def DelayExplorer(M=1):
Mvalue=M
HzN=DelayHN(zN, M=Mvalue); HzN.shape
HzNMag=np.abs(HzN); HzNPhase=np.angle(HzN)
HAtR1=zFuncN(rN, AngThetaN)
HzAtR1N=DelayHN(zAtR1, M=Mvalue)
HzAtR1NMag=np.abs(HzAtR1N); HzAtR1NPhase=np.angle(HzAtR1N)
Zplot(zR, zI, HzNMag, HzAtR1NMag, HzNPhase, HzAtR1NPhase,
f'z Delay order M={M}')
# will add widgets for interactivity later
DelayExplorer(M=1)
Explanation: To improve:
Come up with a good test bench for the CIC filter
Record all tests to a DataFrame along with the z-transform results
add widgets
Clean up theory
Delay
Theory
End of explanation
def Delay(x, y, ena_in, clk):
'''
Z delay building block for a CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the y(n+1) output of y(n+1)=x(n)
'''
@always(clk. posedge)
def logic():
if ena_in:
y.next=x
return logic
Explanation: myHDL implementation
<img style='float: center;' src='Delay.png'>
End of explanation
Peeker.clear()
x=Signal(modbv(0)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(0)[BitWidth:]); Peeker(y, 'y')
ena_in, clk=[Signal(bool(0)) for _ in range(2)]
Peeker(ena_in, 'ena_in'); Peeker(clk, 'clk')
DUT=Delay(x, y, ena_in, clk)
DateCol=pd.DataFrame(columns=['x', 'y', 'ena_in'])
def Delay_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
Tested_ena=False
count=0
while 1:
if Tested_ena==False and count<=2:
print(f'Tested_ena: {Tested_ena}, count:{count}')
elif Tested_ena==False and count>2:
print(f'Tested_ena: {Tested_ena}, count:{count}')
ena_in.next=True
Tested_ena=True
x.next=0
if Tested_ena and count>2:
x.next=x+1
if count> 2*BitWidth:
raise StopSimulation
DateCol.loc[count]=[int(x),int(y), int(ena_in)]
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Delay_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: myHDL Testing
End of explanation
x=Signal(modbv(0)[BitWidth:])
y=Signal(modbv(0)[BitWidth:])
ena_in, clk=[Signal(bool(0)) for _ in range(2)]
toVerilog(Delay, x, y, ena_in, clk)
VerilogTextReader('Delay');
Explanation: To Verilog
End of explanation
CombH=1-z**(-M); CombH
CombSupH=simplify(CombH.subs(zFunc.lhs, zFunc.rhs)); CombSupH
CombSupH=simplify(CombSupH.subs(r, 1)); CombSupH
CombHN=lambdify((z, M), CombH, dummify=False)
def CombEx(M=2):
Mvalue=M
HzN=CombHN(zN, M=Mvalue); HzN.shape
HzNMag=np.abs(HzN); HzNPhase=np.angle(HzN)
HAtR1=zFuncN(rN, AngThetaN)
HzAtR1N=DelayHN(zAtR1, M=Mvalue)
HzAtR1NMag=np.abs(HzAtR1N); HzAtR1NPhase=np.angle(HzAtR1N)
Zplot(zR, zI, HzNMag, HzAtR1NMag, HzNPhase, HzAtR1NPhase,
f'Comb of Order M={Mvalue}')
CombEx(M=2)
Explanation: Comb
Theory
End of explanation
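# Quick numerical cross-check of the comb response above: H(z) = 1 - z**-M is the FIR
# filter with taps [1, 0, ..., 0, -1], whose magnitude is 2*|sin(M*w/2)| with nulls at
# multiples of 2*pi/M (including DC) and peaks of 2 in between.
Mcheck = 2
b_comb = np.zeros(Mcheck + 1); b_comb[0] = 1.0; b_comb[-1] = -1.0
w_chk, h_chk = sig.freqz(b_comb)
print("|H| at DC and at the pi/M peak:", np.abs(h_chk[0]), np.abs(h_chk[len(h_chk)//Mcheck]))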
def Comb(x, y, ena_in, ena_out, clk, rst, M=2):
'''
the comb section of a CIC filter, relying on `Delay` to create
the Z delay blocks
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the y(n) output, y(n) = x(n) - x(n-M)
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
Parm:
M: the number of Z delays for this comb section
'''
# stage the Z delays
Zdelay_i=[None for i in range(M)]
# parameters for sizing the 2's complement inter-delay wires
WordLen_1=len(x)-1
WireGuage=2**WordLen_1
# Create the wiring between the delays
Zwire_ij=[Signal(modbv(0, min=-WireGuage, max=WireGuage)) for j in range(M)]
# instantiate and wire together the Z delays
for i in range(M):
if i==0:
Zdelay_i[i]=Delay(x, Zwire_ij[i], ena_in, clk)
else:
Zdelay_i[i]=Delay(Zwire_ij[i-1], Zwire_ij[i], ena_in, clk)
# the output of the last delay is x(n-M)
subx=Zwire_ij[M-1]
@always(clk.posedge)
def logc():
if rst:
y.next=0
else:
if ena_in:
# y(n) = x(n) - x(n-M)
y.next=x-subx
ena_out.next=True
else:
ena_out.next=False
return instances()
Explanation: myHDL implementation
<img style='float: center;' src='Comb.png'>
End of explanation
Peeker.clear()
x=Signal(modbv(1)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(5)[BitWidth:]); Peeker(y, 'y')
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
Peeker(ena_in, 'ena_in'); Peeker(ena_out, 'ena_out'); Peeker(clk, 'clk'); Peeker(rst, 'rst')
DUT=Comb(x, y, ena_in, ena_out, clk, rst, M=2)
def Comb_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
Tested_ena=False
Tested_rst=False
count=0
while 1:
if Tested_ena==False and count<=2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
elif Tested_ena==False and count>2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
ena_in.next=True
Tested_ena=True
if Tested_ena and Tested_rst==False:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=True
Tested_rst=True
elif Tested_ena and Tested_rst and count<=4:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=False
Tested_rst=True
x.next=1
if Tested_ena and Tested_rst and count>4:
x.next=2*(x+1)
if count> 2*BitWidth:
raise StopSimulation
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Comb_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: myHDL Testing
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(5)[BitWidth:])
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
toVerilog(Comb, x, y, ena_in, ena_out, clk, rst, M=2)
VerilogTextReader('Comb');
Explanation: To Verilog
End of explanation
IntegratorH=1/(1-z**-1); IntegratorH
IntegratorSupH=simplify(IntegratorH.subs(zFunc.lhs, zFunc.rhs))
IntegratorSupH
IntegratorSupH=simplify(IntegratorSupH.subs(r, 1))
IntegratorSupH
IntegratorHN=lambdify(z, IntegratorH, dummify=False)
HzN=IntegratorHN(zN); HzN.shape
HzNMag=np.abs(HzN); HzNPhase=np.angle(HzN)
HAtR1=zFuncN(rN, AngThetaN)
HzAtR1N=IntegratorHN(zAtR1)
HzAtR1NMag=np.abs(HzAtR1N); HzAtR1NPhase=np.angle(HzAtR1N)
Zplot(zR, zI, HzNMag, HzAtR1NMag, HzNPhase, HzAtR1NPhase, 'Integrator')
Explanation: Integrator
Theory
End of explanation
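# Software reference for the block below: y(n) = y(n-1) + x(n) is a running sum,
# which numpy's cumsum reproduces directly.
print("Integrator reference:", np.cumsum(np.ones(8)))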
def Integrator(x, y, ena_in, ena_out, clk, rst):
'''
Simple integrator/accumulator with exterior hold controls as part
of the building blocks of a CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the y(n) output of y(n)=y(n-1)+x(n)
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
'''
@always(clk.posedge)
def logic():
if rst:
y.next=0
else:
if ena_in:
#y(n)=y(n-1)+x(n)
y.next=y+x
ena_out.next=True
else:
ena_out.next=False
return logic
Explanation: myHDL implementation
<img style='float: center;' src='Integrator.png'>
End of explanation
Peeker.clear()
x=Signal(modbv(1)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(5)[BitWidth:]); Peeker(y, 'y')
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
Peeker(ena_in, 'ena_in'); Peeker(ena_out, 'ena_out'); Peeker(clk, 'clk'); Peeker(rst, 'rst')
DUT=Integrator(x, y, ena_in, ena_out, clk, rst)
def Integrator_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
Tested_ena=False
Tested_rst=False
count=0
while 1:
if Tested_ena==False and count<=2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
elif Tested_ena==False and count>2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
ena_in.next=True
Tested_ena=True
if Tested_ena and Tested_rst==False:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=True
Tested_rst=True
elif Tested_ena and Tested_rst and count<=4:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=False
Tested_rst=True
x.next=0
if Tested_ena and Tested_rst and count>4:
x.next=x+1
if count> 2*BitWidth:
raise StopSimulation
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Integrator_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: myHDL Testing
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(5)[BitWidth:])
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
toVerilog(Integrator, x, y, ena_in, ena_out, clk, rst)
VerilogTextReader('Integrator');
Explanation: To Verilog
End of explanation
R, k=symbols('R, k')
X=Function('X')(z); X
Decimator=summation(X.subs(z, z**(1/R) *exp(2j*pi*k/R)), (k, 0, R-1))/R
Decimator
Decimator=simplify(Decimator.subs(zFunc.lhs, zFunc.rhs))
Decimator
Decimator.subs(r, 1)
Explanation: Decimator
Theory
$$y(n)=x(Rn)$$
End of explanation
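# Software reference for the block below: y(n) = x(R*n) is plain sample dropping,
# i.e. stride slicing in numpy.
print("Keep every 8th sample:", np.arange(16)[::8])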
def Decimator(x, y, ena_in, ena_out, clk, rst, R=8):
'''
A decimation (down sampling) section for a CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the y(n) output
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
Parm:
R: the decimation ratio
'''
countSize=2**np.ceil(np.log2(R))
count=Signal(intbv(0, max=countSize, min=0))
@always(clk.posedge)
def PassControl():
if rst:
y.next=0
else:
if count==0:
y.next=x
ena_out.next=True
else:
y.next=0
ena_out.next=False
@always(clk.posedge)
def CountControl():
if rst:
count.next=0
else:
if count==R-1:
count.next=0
else:
count.next=count+1
return instances()
Explanation: myHDL implementation
<img style='float: center;' src='Decimator.png'>
End of explanation
Peeker.clear()
x=Signal(modbv(1)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(0)[BitWidth:]); Peeker(y, 'y')
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
Peeker(ena_in, 'ena_in'); Peeker(ena_out, 'ena_out'); Peeker(clk, 'clk'); Peeker(rst, 'rst')
DUT=Decimator(x, y, ena_in, ena_out, clk, rst, R=2)
def Integrator_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
Tested_ena=False
Tested_rst=False
count=0
while 1:
if Tested_ena==False and count<=2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
elif Tested_ena==False and count>2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
ena_in.next=True
Tested_ena=True
if Tested_ena and Tested_rst==False:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=True
Tested_rst=True
elif Tested_ena and Tested_rst and count<=4:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=False
Tested_rst=True
x.next=1
if Tested_ena and Tested_rst and count>4:
x.next=x+1
if count> 2*BitWidth:
raise StopSimulation
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Integrator_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: Testing
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(0)[BitWidth:])
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
toVerilog(Decimator, x, y, ena_in, ena_out, clk, rst, R=2)
VerilogTextReader('Decimator');
Explanation: To Verilog
End of explanation
InterpolatorTheory=X.subs(z, exp(1j*DisAng*R)); InterpolatorTheory
Explanation: Interpolator
Theory
$$y(n)=\sum_{k=-\infty}^{\infty}x(k)\delta(n-kR)$$
End of explanation
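# Software reference for the block below: the interpolator inserts R-1 zeros between
# input samples (zero stuffing).
R_chk = 4
x_chk = np.arange(1, 5)
x_up = np.zeros(len(x_chk)*R_chk)
x_up[::R_chk] = x_chk
print("Zero-stuffed:", x_up)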
def Interpolator(x, y, ena_in, ena_out, clk, rst, R=8):
'''
An interpolation section for a CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the up-sampled (zero-stuffed) y(n) output
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
Parm:
R: the up-sampling ratio
'''
countSize=2**np.ceil(np.log2(R))
count=Signal(intbv(0, max=countSize, min=0))
@always(clk.posedge)
def PassControl():
if rst:
y.next=0
else:
if ena_in:
y.next=x
ena_out.next=True
elif count>0:
y.next=0
ena_out.next=True
else:
y.next=0
ena_out.next=False
@always(clk.posedge)
def CountControl():
if rst:
count.next=0
else:
if ena_in:
count.next=R-1
elif count>0:
count.next=count-1 # count down through the R-1 stuffed zeros
return instances()
Explanation: myHDL implementation
<img style='float: center;' src='Interpolator.png'>
End of explanation
Peeker.clear()
x=Signal(modbv(1)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(0)[BitWidth:]); Peeker(y, 'y')
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
Peeker(ena_in, 'ena_in'); Peeker(ena_out, 'ena_out'); Peeker(clk, 'clk'); Peeker(rst, 'rst')
DUT=Interpolator(x, y, ena_in, ena_out, clk, rst, R=2)
def Integrator_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
Tested_ena=False
Tested_rst=False
count=0
while 1:
if Tested_ena==False and count<=2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
elif Tested_ena==False and count>2:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
ena_in.next=True
Tested_ena=True
if Tested_ena and Tested_rst==False:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=True
Tested_rst=True
elif Tested_ena and Tested_rst and count<=4:
print(f'Tested_ena: {Tested_ena}, Tested_rst:{Tested_rst}, count:{count}')
rst.next=False
Tested_rst=True
x.next=1
if Tested_ena and Tested_rst and count>4:
x.next=x+1
if count> 2*BitWidth:
raise StopSimulation
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Integrator_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: muHDL Testing
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(0)[BitWidth:])
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
toVerilog(Interpolator, x, y, ena_in, ena_out, clk, rst, R=2)
VerilogTextReader('Interpolator');
Explanation: To Verilog
End of explanation
def PassThrough(x, y, ena_in, ena_out):
'''
A pass-through (do-nothing) section for a CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
Outputs:
y (data): the y(n) output, y(n) = x(n)
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
'''
@always_comb
def logic():
y.next=x
ena_out.next=ena_in
return logic
Explanation: Pass Through
myHDL implementation
End of explanation
Peeker.clear()
x=Signal(modbv(1)[BitWidth:]); Peeker(x, 'x')
y=Signal(modbv(0)[BitWidth:]); Peeker(y, 'y')
ena_in, ena_out, clk=[Signal(bool(0)) for _ in range(3)]
Peeker(ena_in, 'ena_in'); Peeker(ena_out, 'ena_out'); Peeker(clk, 'clk')
DUT=PassThrough(x, y, ena_in, ena_out)
def Integrator_TB():
@always(delay(1)) ## delay in nano seconds
def clkGen():
clk.next = not clk
@instance
def stimulus():
count=0
while 1:
if count<=5:
ena_in.next=True
elif count<=10:
ena_in.next=False
elif count>=15:
ena_in.next=True
if count> 2*BitWidth:
raise StopSimulation
x.next=x+1
count+=1
yield clk.posedge
return instances()
sim = Simulation(DUT, Integrator_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom(start_time=0, stop_time=40, tock=True)
Explanation: myHDL Testing
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(0)[BitWidth:])
ena_in, ena_out, clk=[Signal(bool(0)) for _ in range(3)]
toVerilog(PassThrough, x, y, ena_in, ena_out)
VerilogTextReader('PassThrough');
Explanation: To Verilog
End of explanation
N=symbols('N')
CIC=simplify(((1-z**(-R*M))**N)/((1-z**(-1))**N)); CIC
CICsub=simplify(CIC.subs(zFunc.lhs, zFunc.rhs)); CICsub
CICN=lambdify((z, M, R, N), CIC, dummify=False)
def CICEx(N=2, M=2, R=2):
Mvalue=M; Rvalue=R; Nvalue=N
HzN=CICN(zN, M=Mvalue, R=Rvalue, N=Nvalue); HzN.shape
HzNMag=np.abs(HzN); HzNPhase=np.angle(HzN)
HAtR1=zFuncN(rN, AngThetaN)
HzAtR1N=CICN(zAtR1, M=Mvalue, R=Rvalue, N=Nvalue)
HzAtR1NMag=np.abs(HzAtR1N); HzAtR1NPhase=np.angle(HzAtR1N)
Zplot(zR, zI, HzNMag, HzAtR1NMag, HzNPhase, HzAtR1NPhase,
f'CIC Dec of N={Nvalue}, M={Mvalue}, R={Rvalue}')
CICEx()
Explanation: Complete CIC Filter
Theory
End of explanation
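# Two practical numbers implied by the H(z) above (treated here as a rough estimate):
# the DC gain of the cascade is (R*M)**N, and for the decimating case Hogenauer's result
# says the internal registers must grow by ceil(N*log2(R*M)) bits over the input width
# to avoid overflow. With the defaults used below (N=3, M=2, R=8):
N_c, M_c, R_c = 3, 2, 8
print("DC gain (R*M)**N:", (R_c*M_c)**N_c)
print("Register growth [bits]:", int(np.ceil(N_c*np.log2(R_c*M_c))))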
def CICFilter(x, y, ena_in, ena_out, clk, rst, N=3, M=2, R=8, Type=None):
'''
The complete CIC filter
Inputs:
x (data): the x(n) data in feed
------------------------
ena_in (bool): the exterior calc-hold input; calc is done only if
`ena_in` is True
clk(bool): clock feed
rst(bool): reset feed
Outputs:
y (data): the filtered y(n) output
----------------------
ena_out: the exterior calc-hold output; will be False if
`ena_in` is False
Parm:
N: the number of stages
M: the Z delay order
R: the decimation/Interpolation ratio
Type (None, 'Dec', 'Interp'): type of CIC filter
'''
# parameters for sizing the 2's complement data wires
#assumes len(x)==len(y)
WordLen_1=len(x)-1
WireGuage=2**WordLen_1
#------------------------------------------------
# create the wires from the interpolator (or its pass-through stand-in) to the comb section
#data wire
Interp_Comb_DWire=Signal(intbv(0, min=-WireGuage, max=WireGuage))
#enable wire
Interp_Comp_EWire=Signal(bool(0))
# instantiate the interpolator (or a pass-through in its place)
if Type=='Interp':
# x, y, ena_in, ena_out, clk, rst, R
Interp=Interpolator(x, Interp_Comb_DWire, ena_in, Interp_Comp_EWire, clk, rst, R)
else:
#x, y, ena_in, ena_out
Interp=PassThrough(x, Interp_Comb_DWire, ena_in, Interp_Comp_EWire)
#------------------------------------------------------------
#Data Wires
Comb_DWireIJ=[Signal(intbv(0, min=-WireGuage, max=WireGuage)) for i in range(N)]
#Enable Wires
Comb_EWireIJ=[Signal(bool(0)) for i in range(N)]
# instantiate the comb sections and wire them
Comb_i=[]
for i in range(N):
if i==0:
# x, y, ena_in, ena_out, clk, rst, M
Comb_i.append(Comb(Interp_Comb_DWire, Comb_DWireIJ[i] , Interp_Comp_EWire, Comb_EWireIJ[i], clk, rst, M))
else:
# x, y, ena_in, ena_out, clk, rst, M
Comb_i.append(Comb(Comb_DWireIJ[i-1], Comb_DWireIJ[i], Comb_EWireIJ[i-1], Comb_EWireIJ[i], clk, rst, M))
#------------------------------------------------------------
#Data Wires
Integrator_DWireIJ=[Signal(intbv(0, min=-WireGuage, max=WireGuage)) for i in range(N)]
#Enable Wires
Integrator_EWireIJ=[Signal(bool(0)) for i in range(N)]
# instantiate the integrator sections and wire them
Integrtor_i=[]
for i in range(N):
if i==0:
# x, y, ena_in, ena_out, clk, rst
Integrtor_i.append(Integrator(Comb_DWireIJ[N-1], Integrator_DWireIJ[i], Comb_EWireIJ[N-1], Integrator_EWireIJ[i], clk, rst))
else:
# x, y, ena_in, ena_out, clk, rst
Integrtor_i.append(Integrator(Integrator_DWireIJ[i-1], Integrator_DWireIJ[i], Integrator_EWireIJ[i-1], Integrator_EWireIJ[i], clk, rst))
#-------------------------------------------------------------
# instantiate the decimator (or a pass-through in its place)
if Type == 'Dec':
# x, y, ena_in, ena_out, clk, rst, R
Dec=Decimator(Integrator_DWireIJ[N-1], y, Integrator_EWireIJ[N-1], ena_out, clk, rst, R)
else:
# x, y, ena_in, ena_out
Dec=PassThrough(Integrator_DWireIJ[N-1], y, Integrator_EWireIJ[N-1], ena_out)
return instances()
Explanation: myHDL implementation
<img style='float: center;' src='CICStruct.png'>
End of explanation
x=Signal(modbv(1)[BitWidth:])
y=Signal(modbv(0)[BitWidth:])
ena_in, ena_out, clk, rst=[Signal(bool(0)) for _ in range(4)]
toVerilog(CICFilter, x, y, ena_in, ena_out, clk, rst, N=3, M=2, R=8, Type=None)
VerilogTextReader('CICFilter');
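# A possible minimal test bench sketch (untested, mirroring the per-section test benches
# above): drive a small unit step through the filter and watch y settle toward the DC
# gain of the pass-through cascade, which is M**N here. Nothing below is from the
# original notebook; the Simulation call is left commented out.
DUT_CIC = CICFilter(x, y, ena_in, ena_out, clk, rst, N=3, M=2, R=8, Type=None)
def CICFilter_TB():
    @always(delay(1))
    def clkGen():
        clk.next = not clk
    @instance
    def stimulus():
        count = 0
        while 1:
            if count == 2:
                ena_in.next = True
                x.next = 1   # small unit-step input
            if count > 8*BitWidth:
                raise StopSimulation
            count += 1
            yield clk.posedge
    return instances()
#sim = Simulation(DUT_CIC, CICFilter_TB()).run()  # left commented; sketch is untested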
Explanation: myHDL Testing
(Need to come up with a test bench)
Chris: Would a square unit pulse be the best test of this?
To Verilog
End of explanation |
8,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deque
a collection similar to a list
optimal operations at both ends, suboptimal indexing
the implementation is based on a doubly linked list
Step1: Tuples
immutable
similar to a list
hashable, while lists are not
Step2: NamedTuples | Python Code:
from collections import deque
a = deque([1, 2, 3], maxlen=5)
a.append(4)
a.append(5)
a.append(6)
print(a)
Explanation: Deque
a collection similar to a list
optimal operations at both ends, suboptimal indexing
the implementation is based on a doubly linked list
End of explanation
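# The cheap end operations mentioned above in action: appendleft/popleft are O(1),
# which a plain list cannot match at index 0.
d = deque([1, 2, 3])
d.appendleft(0) # O(1) at the left end
print(d.popleft(), d) # pops the 0 in O(1), leaving deque([1, 2, 3])
d.rotate(1) # rotate right by one
print(d)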
a = (1, 2, 3)
b = (1, )
c = ()
print(a, b, c)
a = tuple([1, 2, 3])
print(a)
print( (1, 2, 3)[0] )
print( (1, 'a', c)[:2] )
print( len((1, 2, 3)) )
a = (1, 'a', c)
a[0] = 5 # raises TypeError: tuples are immutable
a = 1, 2, 3
print(a)
a, b, c = (1, 2, 3)
print(a, b, c)
Explanation: Tuples
immutable
similar to a list
hashable, while lists are not
End of explanation
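# Hashability in practice: a tuple can be a dict key, a list cannot.
points = {(1, 2): "a"} # a tuple works as a dict key
print(points[(1, 2)]) # a list in the same place would raise TypeError: unhashable type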
from collections import namedtuple
Point = namedtuple('Point', ['x', 'y'])
punkt = Point(x=5, y=10)
print(punkt)
print(punkt[0])
print(punkt[1])
print(punkt.x)
print(punkt.y)
print(punkt[:])
from collections import namedtuple
Point = namedtuple('Point', ['def', 'y', 'y'], rename=False) # raises ValueError: 'def' is a keyword and 'y' repeats
print( Point(1, 2, 2) )
from collections import namedtuple
Point = namedtuple('Point', ['def', 'y', 'y'], rename=True)
print(Point._source) # note: _source was removed in later Python 3 releases
from collections import namedtuple
Point = namedtuple('Point', ['def', 'y', 'y'], rename=True)
print( Point._make([1,2,3]) )
print( Point(1, 2, 3)._asdict() ) # what type of dict is returned?
print( Point(1, 2, 3)._replace(y=5) )
print( Point._fields )
Explanation: NamedTuples
End of explanation |
8,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Get the data
2MASS => J, H K, angular resolution ~4"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
Step1: Matching coordinates
Step2: Plot $W_1-J$ vs $W_1$
Step3: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Filter all Cats
Step4: Collect relevant data
Step5: Analysis
we can try
Step6: DBSCAN
Step7: Plot $W_1 - J$ vs $J$
Step8: t-SNE | Python Code:
#obj = ["3C 454.3", 343.49062, 16.14821, 1.0]
obj = ["PKS J0006-0623", 1.55789, -6.39315, 1.0]
#obj = ["M87", 187.705930, 12.391123, 1.0]
#### name, ra, dec, radius of cone
obj_name = obj[0]
obj_ra = obj[1]
obj_dec = obj[2]
cone_radius = obj[3]
obj_coord = coordinates.SkyCoord(ra=obj_ra, dec=obj_dec, unit=(u.deg, u.deg), frame="icrs")
# Query data
data_2mass = Irsa.query_region(obj_coord, catalog="fp_psc", radius=cone_radius * u.deg)
data_wise = Irsa.query_region(obj_coord, catalog="allwise_p3as_psd", radius=cone_radius * u.deg)
__data_galex = Vizier.query_region(obj_coord, catalog='II/335', radius=cone_radius * u.deg)
data_galex = __data_galex[0]
num_2mass = len(data_2mass)
num_wise = len(data_wise)
num_galex = len(data_galex)
print("Number of object in (2MASS, WISE, GALEX): ", num_2mass, num_wise, num_galex)
Explanation: Get the data
2MASS => J, H K, angular resolution ~4"
WISE => 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) with an angular resolution of 6.1", 6.4", 6.5", & 12.0"
GALEX imaging => Five imaging surveys in a Far UV band (1350-1750Å) and Near UV band (1750-2800Å) with 6-8 arcsecond resolution (80% encircled energy) and 1 arcsecond astrometry, and a cosmic UV background map.
End of explanation
# use only coordinate columns
ra_2mass = data_2mass['ra']
dec_2mass = data_2mass['dec']
c_2mass = coordinates.SkyCoord(ra=ra_2mass, dec=dec_2mass, unit=(u.deg, u.deg), frame="icrs")
ra_wise = data_wise['ra']
dec_wise = data_wise['dec']
c_wise = coordinates.SkyCoord(ra=ra_wise, dec=dec_wise, unit=(u.deg, u.deg), frame="icrs")
ra_galex = data_galex['RAJ2000']
dec_galex = data_galex['DEJ2000']
c_galex = coordinates.SkyCoord(ra=ra_galex, dec=dec_galex, unit=(u.deg, u.deg), frame="icrs")
####
sep_min = 1.0 * u.arcsec # minimum separation in arcsec
# Only 2MASS and WISE matching
#
idx_2mass, idx_wise, d2d, d3d = c_wise.search_around_sky(c_2mass, sep_min)
# select only one nearest if there are more in the search reagion (minimum seperation parameter)!
print("Only 2MASS and WISE: ", len(idx_2mass))
Explanation: Matching coordinates
End of explanation
# from matching of 2 cats (2MASS and WISE) coordinate
data_2mass_matchwith_wise = data_2mass[idx_2mass]
data_wise_matchwith_2mass = data_wise[idx_wise] # WISE dataset
w1 = data_wise_matchwith_2mass['w1mpro']
j = data_2mass_matchwith_wise['j_m']
w1j = w1-j
cutw1j = -1.7 # https://academic.oup.com/mnras/article/448/2/1305/1055284
# WISE galaxy data -> from cut
galaxy = data_wise_matchwith_2mass[w1j < cutw1j]
print("Number of galaxy from cut W1-J:", len(galaxy))
w1j_galaxy = w1j[w1j<cutw1j]
w1_galaxy = w1[w1j<cutw1j]
plt.scatter(w1j, w1, marker='o', color='blue')
plt.scatter(w1j_galaxy, w1_galaxy, marker='.', color="red")
plt.axvline(x=cutw1j) # https://academic.oup.com/mnras/article/448/2/1305/1055284
Explanation: Plot $W_1-J$ vs $W_1$
End of explanation
# GALEX
###
# coord of object in 2mass which match wise (first objet/nearest in sep_min region)
c_2mass_matchwith_wise = c_2mass[idx_2mass]
c_wise_matchwith_2mass = c_wise[idx_wise]
#Check with 2mass cut
idx_2mass_wise_galex, idx_galex1, d2d, d3d = c_galex.search_around_sky(c_2mass_matchwith_wise, sep_min)
num_galex1 = len(idx_galex1)
#Check with wise cut
idx_wise_2mass_galex, idx_galex2, d2d, d3d = c_galex.search_around_sky(c_wise_matchwith_2mass, sep_min)
num_galex2 = len(idx_galex2)
print("Number of GALEX match in 2MASS cut (with WISE): ", num_galex1)
print("Number of GALEX match in WISE cut (with 2MASS): ", num_galex2)
# diff/average
print("Confusion level: ", abs(num_galex1 - num_galex2)/np.mean([num_galex1, num_galex2])*100, "%")
# Choose which one is smaller!
if num_galex1 < num_galex2:
select_from_galex = idx_galex1
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# 2MASS from GALEX_selected
_idx_galex1, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(c_selected_galex, sep_min)
match_2mass = data_2mass[_idx_2mass]
# WISE from 2MASS_selected
_ra_match_2mass = match_2mass['ra']
_dec_match_2mass = match_2mass['dec']
_c_match_2mass = coordinates.SkyCoord(ra=_ra_match_2mass, dec=_dec_match_2mass, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_wise, d2d, d3d = c_wise.search_around_sky(_c_match_2mass, sep_min)
match_wise = data_wise[_idx_wise]
else:
select_from_galex = idx_galex2
match_galex = data_galex[select_from_galex]
c_selected_galex = c_galex[select_from_galex]
# WISE from GALEX_selected
_idx_galex1, _idx_wise, d2d, d3d = c_wise.search_around_sky(c_selected_galex, sep_min)
match_wise = data_wise[_idx_wise]
# 2MASS from WISE_selected
_ra_match_wise = match_wise['ra']
_dec_match_wise = match_wise['dec']
_c_match_wise = coordinates.SkyCoord(ra=_ra_match_wise, dec=_dec_match_wise, unit=(u.deg, u.deg), frame="icrs")
_idx, _idx_2mass, d2d, d3d = c_2mass.search_around_sky(_c_match_wise, sep_min)
match_2mass = data_2mass[_idx_2mass]
print("Number of match in GALEX: ", len(match_galex))
print("Number of match in 2MASS: ", len(match_2mass))
print("Number of match in WISE : ", len(match_wise))
Explanation: W1-J < -1.7 => galaxy
W1-J > -1.7 => stars
Filter all Cats
End of explanation
joindata = np.array([match_2mass['j_m'],
match_2mass['j_m']-match_2mass['h_m'],
match_2mass['j_m']-match_2mass['k_m'],
match_2mass['j_m']-match_wise['w1mpro'],
match_2mass['j_m']-match_wise['w2mpro'],
match_2mass['j_m']-match_wise['w3mpro'],
match_2mass['j_m']-match_wise['w4mpro'],
match_2mass['j_m']-match_galex['NUVmag']])
joindata = joindata.T
Explanation: Collect relevant data
End of explanation
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
X = joindata
pca = PCA(n_components=4)
X_r = pca.fit(X).transform(X)
print(pca.components_)
print(pca.explained_variance_)
# plot PCA result
# Plot data using PC1 vs PC2
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC3
plt.scatter(X_r[:,0], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC1 vs PC4
plt.scatter(X_r[:,0], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC3
plt.scatter(X_r[:,1], X_r[:,2], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,2], marker=".", color="red")
# plot PCA result
# Plot data using PC2 vs PC4
plt.scatter(X_r[:,1], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,1], X_r[i,3], marker=".", color="red")
# plot PCA result
# Plot data using PC3 vs PC4
plt.scatter(X_r[:,2], X_r[:,3], marker='o', color='blue')
# overplot galaxy selected using cut W1-J
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,2], X_r[i,3], marker=".", color="red")
Explanation: Analysis
we can try:
- dimensionality reduction
- clustering
- classification
- data embedding
PCA
End of explanation
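# The magnitude and color columns in joindata have very different spreads, and both PCA
# and DBSCAN are scale sensitive, so a standardized variant is worth trying alongside
# the raw-feature runs in this notebook.
from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(joindata)
print(X_scaled.mean(axis=0).round(2), X_scaled.std(axis=0).round(2))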
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler
X = joindata
db = DBSCAN(eps=1, min_samples=3).fit(X)
core_samples_mask = np.zeros_like(db.labels_, dtype=bool)
core_samples_mask[db.core_sample_indices_] = True
labels = db.labels_
# Number of clusters in labels, ignoring noise if present.
n_clusters_ = len(set(labels)) - (1 if -1 in labels else 0)
print('Estimated number of clusters: %d' % n_clusters_)
#print(labels)
Explanation: DBSCAN
End of explanation
# Black removed and is used for noise instead.
unique_labels = set(labels)
colors = [plt.cm.Spectral(each) for each in np.linspace(0, 1, len(unique_labels))]
for k, col in zip(unique_labels, colors):
if k == -1:
# Black used for noise.
col = [0, 0, 0, 1]
class_member_mask = (labels == k)
## J vs J-W1
xy = X[class_member_mask & core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=14)
xy = X[class_member_mask & ~core_samples_mask]
plt.plot(xy[:, 3], xy[:, 0], 'o', markerfacecolor=tuple(col), markeredgecolor='k', markersize=8)
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.plot(X[i,3], X[i,0], marker="X", markerfacecolor='red', markeredgecolor='none', markersize=8)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
Explanation: Plot $J$ vs $J-W_1$
End of explanation
from sklearn.manifold import TSNE
X = joindata #scale(joindata)
X_r = TSNE(n_components=2).fit_transform(X)
plt.scatter(X_r[:,0], X_r[:,1], marker='o', color="blue")
for i, name in enumerate(match_wise['designation']):
for galaxyname in galaxy['designation']:
if name == galaxyname:
plt.scatter(X_r[i,0], X_r[i,1], marker='.', color="red")
Explanation: t-SNE
End of explanation |
8,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Phoenix BT-Settl Bolometric Corrections
Figuring out the best method of handling Phoenix bolometric correction files.
Step1: Change to directory containing bolometric correction files.
Step2: Load a bolometric correction table, say for the Cousins AB photometric system.
Step3: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is strucutred such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment.
Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
Step4: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
Step5: This agrees with data in the bolometric correciton table.
Teff logg [Fe/H] [a/Fe] B V R I
1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291
Now, let's raise the temperature.
Step6: Again, we have a good match to tabulated values,
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
Step7: This appears consistent. What about progressing to lower metallicity values?
Step8: For reference, at [Fe/H] = $-0.5$ dex, we have
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolate values lie between values at the two respective nodes.
Now let's import an isochrone and calcuate colors for stellar models for comparison against MARCS bolometric corrections.
Step9: Make sure there are magnitudes and colors associated with this isochrone.
Step10: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
Step11: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
Step12: Let's try something different
Step13: Create an interpolation surface from the magnitude table.
Step14: Compute magnitudes for a Dartmouth isochrone.
Step15: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
Step16: Now compare against MARCS values.
Step17: Load an isochrone from the Lyon-Phoenix series.
Step18: Export a new isochrone with colors from AGSS09 (PHX)
Step19: Separate Test Case
These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate as scint
Explanation: Phoenix BT-Settl Bolometric Corrections
Figuring out the best method of handling Phoenix bolometric correction files.
End of explanation
cd /Users/grefe950/Projects/starspot/starspot/color/tab/phx/
Explanation: Change to directory containing bolometric correction files.
End of explanation
bc_table = np.genfromtxt('colmag.BT-Settl.server.JOHNSON.AB.bolcor', comments='!')
Explanation: Load a bolometric correction table, say for the Cousins AB photometric system.
End of explanation
test_surface = scint.LinearNDInterpolator(bc_table[:, :3], bc_table[:, 4:])
Explanation: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is structured such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment.
Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
End of explanation
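One convenience worth noting (this is standard scipy LinearNDInterpolator behavior, shown here only as a sketch): the surface can be evaluated at many (Teff, logg, [Fe/H]) points in a single call by passing an (npoints, 3) array, which keeps the isochrone-wide queries used later in this notebook cheap.
test_surface(np.array([[1500., 5.0, 0.0], [3000., 5.0, 0.0]]))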
test_surface(np.array([1500., 5.0, 0.0]))
Explanation: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
End of explanation
test_surface(np.array([3000., 5.0, 0.0]))
Explanation: This agrees with data in the bolometric correction table.
Teff logg [Fe/H] [a/Fe] B V R I
1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291
Now, let's raise the temperature.
End of explanation
test_surface(np.array([3000., 5.0, 0.1]))
Explanation: Again, we have a good match to tabulated values,
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
End of explanation
test_surface(np.array([3000., 5.0, -0.2]))
Explanation: This appears consistent. What about progressing to lower metallicity values?
End of explanation
iso = np.genfromtxt('/Users/grefe950/evolve/dmestar/iso/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
Explanation: For reference, at [Fe/H] = $-0.5$ dex, we have
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolated values lie between values at the two respective nodes.
Now let's import an isochrone and calculate colors for stellar models for comparison against MARCS bolometric corrections.
End of explanation
iso.shape
Explanation: Make sure there are magnitudes and colors associated with this isochrone.
End of explanation
test_bcs = test_surface(10**iso[:,1], iso[:, 2], 0.0)
test_bcs.shape
Explanation: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
End of explanation
bol_mags = 4.74 - 2.5*iso[:, 3]
for i in range(test_bcs.shape[1]):
bcs = -1.0*np.log10(10**iso[:, 1]/5777.) + test_bcs[:, i] - 5.0*iso[:, 4]
if i == 0:
test_mags = bol_mags - bcs
else:
test_mags = np.column_stack((test_mags, bol_mags - bcs))
iso[50, 0:4], iso[50, 6:], test_mags[50]
Explanation: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
End of explanation
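For reference, the relation being applied above is the usual bolometric-correction definition, $M_X = M_{\rm bol} - BC_X$ with $M_{\rm bol} = 4.74 - 2.5\,\log_{10}(L/L_\odot)$, under the assumption that column 3 of the isochrone is $\log(L/L_\odot)$ and column 1 is $\log T_{\rm eff}$. The extra $\log T_{\rm eff}$ and $\log R$ terms folded into bcs presumably reflect how these particular BT-Settl tables are normalized, which is worth double-checking against the table header.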
col_table = np.genfromtxt('colmag.BT-Settl.server.COUSINS.AB', comments='!')
Explanation: Let's try something different: using the color tables provided by the Phoenix group, from which the bolometric corrections are calculated.
End of explanation
col_surface = scint.LinearNDInterpolator(col_table[:, :3], col_table[:, 4:8])
Explanation: Create an interpolation surface from the magnitude table.
End of explanation
phx_mags = col_surface(10.0**iso[:, 1], iso[:, 2], 0.0)
Explanation: Compute magnitudes for a Dartmouth isochrone.
End of explanation
for i in range(phx_mags.shape[1]):
phx_mags[:, i] = phx_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
Explanation: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
End of explanation
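Spelled out, the cell above applies the distance modulus $M = m - 5\log_{10}(d/10\,{\rm pc})$ with the "distance" set to the stellar radius, i.e. it treats the tabulated values as surface magnitudes: $10^{{\rm iso}[:,4]}$ is taken to be $R/R_\odot$, converted with $6.956\times10^{10}$ cm per $R_\odot$ and $3.086\times10^{18}$ cm per pc. If column 4 of the isochrone is not $\log(R/R_\odot)$, this conversion would need to change accordingly.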
iso[40, :5], iso[40, 6:], phx_mags[40]
Explanation: Now compare against MARCS values.
End of explanation
phx_iso = np.genfromtxt('/Users/grefe950/Notebook/Projects/ngc2516_spots/data/phx_isochrone_120myr.txt')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)
ax[0].set_xlim(0.0, 2.0)
ax[1].set_xlim(0.0, 4.0)
ax[0].set_ylim(16, 2)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c="#b22222")
ax[0].plot(phx_mags[:, 0] - phx_mags[:, 1], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[0].plot(phx_iso[:, 7] - phx_iso[:, 8], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c="#b22222")
ax[1].plot(phx_mags[:, 1] - phx_mags[:, 3], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[1].plot(phx_iso[:, 8] - phx_iso[:, 10], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
Explanation: Load an isochrone from the Lyon-Phoenix series.
End of explanation
new_isochrone = np.column_stack((iso[:, :6], phx_mags))
np.savetxt('/Users/grefe950/Notebook/Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso',
new_isochrone, fmt='%16.8f')
Explanation: Export a new isochrone with colors from AGSS09 (PHX)
End of explanation
tmp = -10.*np.log10(3681./5777.) + test_surface(3681., 4.78, 0.0) #+ 5.0*np.log10(0.477)
tmp
4.74 - 2.5*(-1.44) - tmp
Explanation: Separate Test Case
These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first.
End of explanation |
8,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WikiData
From here
Step1: Add the 590 MetAtlas compounds that are missing from all these databases
Step2: miBIG (using notebook and pubchem API)
Step3: get names for ones that are missing names (determined from first pass through)
Step4: get names for missing from inchikey (no CID, determined from first pass through)
Step5: IMG-ABC
https
Step6: ENZO Library
502 compounds
501 with unique structures
request table from here
Step7: MSMLS-Library
Step8: MetaCyc
12,370 compounds
11,256 compounds with unique structure
Login as [email protected]
Smart table is called
Step9: Dr. Dukes Phytochemical and Ethnobotanical Database
29585 Compounds
No structural information or ids from external databases.
Step10: LipidMaps
40716 compounds
40145 with a unique structural description
wget http
Step11: HMDB
41895
41722 neutral and unique structures
wget http
Step12: ChEMBL
ftp
Step13: ChEBI
wget ftp
Step14: GNPS
9312 spectra
3971 chemical structures
wget ftp
Step15: BNICE
http
Step16: Load the dataframe and make an rdkit mol for each comound
Step17: Replace float(nan) with ''
Step18: Missing the "InChI=" at the start
This is how most of the wikidata inchis are stored. GNPS has some like this too.
Step19: Missing the "I" in "InChI"
Step20: has an inchikey instead of inchi
Step21: Has an "N/A"
Step22: Has something else in InChI field. Usually this is SMILES string
Step23: Take a look at the rows that don't have an rdkit mol.
They don't have a SMILES or InChI
RDKit could not parse their description
Desalt and remove disconnected components
Step24: neutralize those that are charged
Step25: make venn
Step26: create consolidate table from unique compounds
Step27: Remove URLS from various fields throughout
Step28: Add old MetAtlas compounds that are not in any of these databases
Step29: Mass values are not high enough precision
Mass might not be includign the charge | Python Code:
terms_to_keep = ['smiles','inchi','source_database','ROMol','common_name','Definition', 'synonyms','pubchem_compound_id','lipidmaps_id','metacyc_id','hmdb_id','img_abc_id','chebi_id','kegg_id']
import_compounds = reload(import_compounds)
wikidata = import_compounds.get_wikidata(terms_to_keep)
df = wikidata[terms_to_keep]
wikidata.head()
# compoundLabel
Explanation: WikiData
From here: https://github.com/yuvipanda/python-wdqs/blob/master/wdqs/client.py
Compound properties table
From here: https://www.wikidata.org/wiki/Wikidata:List_of_properties/Terms
24305 compounds
15413 with a unique structural identifier
Has relationships that link to the largest database on earth
Often no structural description exists, but a PubChemCI does.
End of explanation
# See below, I did it at the end.
# metatlas_added = pd.read_csv('metatlas_compounds_not_found.csv',index_col=0)
# metatlas_added = metatlas_added[['inchi','name']]
# df = pd.concat([df,metatlas_added], axis=0, ignore_index=True)
# metatlas_added.head()
Explanation: Add the 590 MetAtlas compounds that are missing from all these databases
End of explanation
def strip_whitespace(x):
return x.strip()
miBIG = pd.read_pickle('miBIG.pkl')
miBIG = miBIG[~miBIG.common_name.str.contains('Status: 404')] #catch any defunct ids and throw them away here
miBIG.inchi = miBIG.inchi.apply(strip_whitespace)
miBIG['source_database'] = 'miBIG'
df = pd.concat([df,miBIG], axis=0, ignore_index=True)
miBIG.head()
Explanation: miBIG (using notebook and pubchem API)
End of explanation
no_name_cid = pd.read_pickle('no_names_with_cid.pkl')
no_name_cid = no_name_cid[~no_name_cid.common_name.str.contains('Status: 404')] #catch any defunct ids and throw them away here
no_name_cid.inchi = no_name_cid.inchi.apply(strip_whitespace)
df = pd.concat([df,no_name_cid], axis=0, ignore_index=True)
no_name_cid.head()
Explanation: get names for ones that are missing names (determined from first pass through)
End of explanation
no_name_inchikey = pd.read_pickle('no_names_with_inchikey.pkl')
no_name_inchikey = no_name_inchikey[~no_name_inchikey.common_name.str.contains('Status: 404')] #catch any defunct ids and throw them away here
no_name_inchikey.inchi = no_name_inchikey.inchi.apply(strip_whitespace)
df = pd.concat([df,no_name_inchikey], axis=0, ignore_index=True)
no_name_inchikey.head()
Explanation: get names for missing from inchikey (no CID, determined from first pass through)
End of explanation
img_abc = import_compounds.get_img(terms_to_keep)
df = pd.concat([df,img_abc[terms_to_keep]], axis=0, ignore_index=True)
img_abc.head()
# Secondary Metabolite (SM) Name
# SM ID
Explanation: IMG-ABC
https://img.jgi.doe.gov/cgi-bin/abc/main.cgi?section=NaturalProd&page=smListing
Exported as tab delimited table
1109 Compounds
666 Compounds have a unique structure
End of explanation
enzo = import_compounds.get_enzo(terms_to_keep)
df = pd.concat([df,enzo[terms_to_keep]], axis=0, ignore_index=True)
enzo.head()
# Name
Explanation: ENZO Library
502 compounds
501 with unique structures
request table from here:
http://www.enzolifesciences.com/BML-2865/screen-well-natural-product-library/
BML-2865.xlsx
I had to save as tab delimited text. The Excel JChem structures messed up the Excel import.
End of explanation
import_compounds = reload(import_compounds)
msmls = import_compounds.get_msmls(terms_to_keep)
df = pd.concat([df,msmls[terms_to_keep]], axis=0, ignore_index=True)
msmls.keys()
msmls.head()# CNAME
# PC_CID
Explanation: MSMLS-Library
End of explanation
import_compounds = reload(import_compounds)
metacyc = import_compounds.get_metacyc(terms_to_keep)
df = pd.concat([df,metacyc[terms_to_keep]], axis=0, ignore_index=True)
metacyc.head()
# KEGG
# PubChem
# Common-Name
# Names
# Object ID
Explanation: MetaCyc
12,370 compounds
11,256 compounds with unique structure
Login as [email protected]
Smart table is called: "MetAtlas Export MetaCyc Compounds"
Has mapping to reactions
Has compound ontology
Save as spreadsheet from their website. Prior to saving, I deleted the first row because I had to open it in Excel and save as "xlsx" for pandas to read it. The first row has columns that are longer than Excel's maximum number of characters and they get wrapped into new rows.
End of explanation
# dr_dukes = import_compounds.get_dr_dukes()
Explanation: Dr. Dukes Phytochemical and Ethnobotanical Database
29585 Compounds
No structural information or ids from external databases.
End of explanation
print df.shape
print df.source_database.unique().shape
print df.keys()
df.head()
import_compounds = reload(import_compounds)
lipid_maps = import_compounds.get_lipid_maps(terms_to_keep)
df = pd.concat([df,lipid_maps[terms_to_keep]], axis=0, ignore_index=True)
lipid_maps.head()
# PUBCHEM_CID
# KEGG_ID
# COMMON_NAME
# SYNONYMS
# ID
Explanation: LipidMaps
40716 compounds
40145 with a unique structural description
wget http://lipidmaps.org/resources/downloads/LMSDFDownload28Jun15.tar.gz
It's probably possible to get the enzyme-reaction mapping for these!
End of explanation
hmdb = import_compounds.get_hmdb(terms_to_keep)
df = pd.concat([df,hmdb[terms_to_keep]], axis=0, ignore_index=True)
hmdb.head()
# GENERIC_NAME
# SYNONYMS
# HMDB_ID
Explanation: HMDB
41895
41722 neutral and unique structures
wget http://www.hmdb.ca/system/downloads/current/structures.zip
unzip to structures.sdf
has calculated physical properties from JChem
RDKit can't convert the mol for Cyanocobalamin and Hydroxocobalamin.
End of explanation
mz = 116.070608
delta_mz = mz*5/1e6
min_mz = mz - delta_mz
max_mz = mz + delta_mz
print mz
print delta_mz
print min_mz
print max_mz
#This needs a metadata file to accompany it.
#chembl = pd.read_pickle('/project/projectdirs/openmsi/projects/compound_data/chembl/chembl.pkl')
#These are the fields in the pkl
#(1583839, 3)
#Index([u'ID', u'ROMol', u'chembl_id'], dtype='object')
# chembl = import_compounds.get_chembl()
# df = pd.concat([df,chembl[terms_to_keep]], axis=0, ignore_index=True)
# has 1.583 million compounds
# tackle this next time
Explanation: ChEMBL
ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_21/chembl_21.sdf.gz
End of explanation
chebi = import_compounds.get_chebi(terms_to_keep)
df = pd.concat([df,chebi[terms_to_keep]], axis=0, ignore_index=True)
chebi.head()
# ChEBI Name
# ChEBI ID
# KEGG COMPOUND Database Links
# Synonyms
Explanation: ChEBI
wget ftp://ftp.ebi.ac.uk/pub/databases/chebi/SDF/ChEBI_complete_3star.sdf.gz
wget ftp://ftp.ebi.ac.uk/pub/databases/chebi/SDF/ChEBI_complete.sdf.gz
End of explanation
gnps = import_compounds.get_gnps(terms_to_keep)
df = pd.concat([df,gnps[terms_to_keep]], axis=0, ignore_index=True)
gnps.head()
# name
Explanation: GNPS
9312 spectra
3971 chemical structures
wget ftp://ccms-ftp.ucsd.edu/Spectral_Libraries/ALL_GNPS.mgf
Many of the named compounds don't have structural identifiers or ids that map to other databases. Many of the structural identifiers are formatted wrong or in the wrong field (example: SMILES are used in the InChI field).
End of explanation
df.source_database.unique()
Explanation: BNICE
http://pubs.acs.org/doi/abs/10.1021/acssynbio.6b00054
http://bioinformatics.oxfordjournals.org/content/21/8/1603.short
SDF Files are here:
http://lincolnpark.chem-eng.northwestern.edu/release/
End of explanation
df.to_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/compounds_fixedStereo_notchembl.pkl')
# df = pd.read_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/compounds_fixedStereo_notchembl.pkl')
Explanation: Load the dataframe and make an rdkit mol for each compound
End of explanation
df = df.where((pd.notnull(df)), None)
Explanation: Replace float(nan) with None (the cell above uses None rather than '')
End of explanation
idx = (df['inchi'].str.startswith('1')) & (df['inchi'])
print sum(idx)
print df.source_database[idx].unique()
df['inchi'][idx] = 'InChI=' + df['inchi']
Explanation: Missing the "InChI=" at the start
This is how most of the wikidata inchis are stored. GNPS has some like this too.
End of explanation
idx = (df['inchi'].str.startswith('nChI')) & (df['inchi'])
print sum(idx)
print df.source_database[idx].unique()
df['inchi'][idx] = 'I' + df['inchi']
Explanation: Missing the "I" in "InChI"
End of explanation
idx = (df['inchi'].str.endswith('-N')) & (df['inchi'])
print sum(idx)
print df.source_database[idx].unique()
df['inchi'][idx] = ''
Explanation: has an inchikey instead of inchi
End of explanation
idx = (df['inchi'].str.startswith('N/A')) & (df['inchi'])
print sum(idx)
print df.source_database[idx].unique()
df['inchi'][idx] = ''
idx = (df['smiles'].str.startswith('N/A')) & (df['smiles'])
print sum(idx)
print df.source_database[idx].unique()
df['smiles'][idx] = ''
Explanation: Has an "N/A"
End of explanation
idx = (df['inchi'].str.contains('^((?!InChI).)*$')) & df['inchi']
print sum(idx)
print df.source_database[idx].unique()
df['smiles'][idx] = df['inchi'][idx]
df['inchi'][idx] = ''
df[idx].head()
def make_mol_from_smiles_and_inchi(row):
if not row['ROMol']:
mol= []
if row['inchi']:
mol = Chem.MolFromInchi(row['inchi'].encode('utf-8'))
elif row['smiles']:
mol = Chem.MolFromSmiles(row['smiles'].encode('utf-8'))
if mol:
return mol
else:
return row['ROMol']
df.ROMol = df.apply(make_mol_from_smiles_and_inchi, axis=1)
Explanation: Has something else in InChI field. Usually this is SMILES string
End of explanation
def desalt_compounds_in_dataframe(x):
if x:
if x.GetNumAtoms()>1:
c = import_compounds.desalt(x)
if c[1]:
return c[0]
else:
return x
df.ROMol = df.ROMol.apply(desalt_compounds_in_dataframe)
Explanation: Take a look at the rows that don't have an rdkit mol.
They don't have a SMILES or InChI
RDKit could not parse their description
Desalt and remove disconnected components
End of explanation
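For readers who do not have the import_compounds module handy, a minimal stand-in for the desalting step (an illustrative sketch only -- not necessarily what import_compounds.desalt does) is to keep the largest disconnected fragment and flag whether anything was dropped:
from rdkit import Chem

def desalt_sketch(mol):
    # split into disconnected fragments (salts, counter-ions, solvents, ...)
    frags = Chem.GetMolFrags(mol, asMols=True, sanitizeFrags=False)
    if len(frags) <= 1:
        return mol, False
    # keep the fragment with the most atoms, mirroring the (mol, changed_flag) pattern used above
    biggest = max(frags, key=lambda f: f.GetNumAtoms())
    return biggest, True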
import_compounds = reload(import_compounds)
def neutralize_compounds_in_dataframe(x):
if x:
if x.GetNumAtoms()> 0:
neutral_mol = []
try:
c = import_compounds.NeutraliseCharges(x)
neutral_mol = c[0]
except:
pass
if neutral_mol:
return neutral_mol
df.ROMol = df.ROMol.apply(neutralize_compounds_in_dataframe)
def calculate_num_radicals_in_dataframe(x):
num_radicals = 0.0
if x:
num_radicals = Descriptors.NumRadicalElectrons(x)
return num_radicals
def calculate_formula_in_dataframe(x):
formula = ''
if x:
formula = rdMolDescriptors.CalcMolFormula(x)
return formula
def calculate_monoisotopic_mw_in_dataframe(x):
mw = 0.0
if x:
mw = Descriptors.ExactMolWt(x)
return mw
def calculate_inchi_in_dataframe(x):
inchi = ''
if x:
try:
inchi = Chem.MolToInchi(x)
except:
pass#This fails when can't kekulize mol
return inchi
def calculate_flattened_inchi_in_dataframe(x):
flattened_inchi = ''
if x:
sm = Chem.MolToSmiles(x).replace('@','')
flattened_rdkit_mol = Chem.MolFromSmiles(sm)
try:
flattened_inchi = Chem.MolToInchi(flattened_rdkit_mol)
except:
pass#This fails when can't kekulize mol
return flattened_inchi
def calculate_inchikey_in_dataframe(x):
ik = ''
if x:
try:
ik = Chem.InchiToInchiKey(x)
except:
pass#This fails when can't kekulize mol. Carbo-cations are the culprit usually.
return ik
def calculate_charge_in_dataframe(x):
if x:
my_charge = Chem.GetFormalCharge(x)
return my_charge
df['charge'] = df.ROMol.apply(calculate_charge_in_dataframe)
df['formula'] = df.ROMol.apply(calculate_formula_in_dataframe)
df['monoisotopic_mw'] = df.ROMol.apply(calculate_monoisotopic_mw_in_dataframe)
df['num_radicals'] = df.ROMol.apply(calculate_num_radicals_in_dataframe)
df['metatlas_inchi'] = df.ROMol.apply(calculate_inchi_in_dataframe)
df['metatlas_inchi_key'] = df.metatlas_inchi.apply(calculate_inchikey_in_dataframe)
df['flat_inchi'] = df.ROMol.apply(calculate_flattened_inchi_in_dataframe)
df['flat_inchikey'] = df.flat_inchi.apply(calculate_inchikey_in_dataframe)
Explanation: neutralize those that are charged
End of explanation
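As with desalting, the real charge neutralization lives in import_compounds (NeutraliseCharges is presumably a pattern-based routine along the lines of the RDKit cookbook recipe). A deliberately simple, atom-based sketch of the same idea -- an assumption-laden illustration, not the module's implementation -- looks like this:
from rdkit import Chem

def neutralize_sketch(mol):
    # naive approach: (de)protonate simple charged centers such as carboxylates and ammonium groups;
    # it will fail on centers that cannot simply gain or lose a proton (e.g. quaternary N+)
    for atom in mol.GetAtoms():
        q = atom.GetFormalCharge()
        if q == 0:
            continue
        atom.SetFormalCharge(0)
        atom.SetNumExplicitHs(max(atom.GetTotalNumHs() - q, 0))
    Chem.SanitizeMol(mol)
    return mol, True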
# dbs = df.source_database.unique().tolist()
# M = np.zeros((len(dbs),len(dbs)))
# for i in range(len(dbs)):
# # df1 = df.loc[df['source_database'] == dbs[i]]
# # M[i,0] = df1.shape[0]
# # print dbs
# for j in range(0,len(dbs)):
# #i is row, j is column
# u1 = df.loc[df['source_database'] == dbs[i],'metatlas_inchi_key'].unique().tolist()
# u2 = df.loc[df['source_database'] == dbs[j],'metatlas_inchi_key'].unique().tolist()
# u1u2 = set(u1).intersection(u2)
# M[i,j] = len(u1u2)
# M.astype(int)
Explanation: make venn
End of explanation
df.to_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/all_stereo_compounds_temp.pkl')
# df.to_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/all_compounds_temp.pkl')
df = pd.read_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/all_stereo_compounds_temp.pkl')
df.source_database.unique()
def strip_non_ascii(string):
''' Returns the string without non ASCII characters'''
stripped = (c for c in string if 0 < ord(c) < 127)
return ''.join(stripped)
def make_strings_consistent(x):
if x:
if type(x) == list:
x = '///'.join([strip_non_ascii(s) for s in x])
else:
x = strip_non_ascii(x)
return x
df = df[df.metatlas_inchi_key != '']
# compound.name = unicode(df_row.common_name, "utf-8",errors='ignore')
# def make_strings_consistent(x):
# try:
# if type(x) == list:
# x = [str(s.encode('utf-8')) for x in x]
# else:
# x = str(x.encode('utf-8'))
# except:
# if type(x) == list:
# x = [str(s) for x in x]
# else:
# x = str(x)
# return x
df['common_name'] = df['common_name'].apply(make_strings_consistent)
df['synonyms'] = df['synonyms'].apply(make_strings_consistent)
my_keys = df.keys().tolist()
my_keys.remove('smiles')
my_keys.remove('inchi')
my_keys.remove('ROMol')
for k in my_keys:
print k
df[k] = df[k].astype(str)#apply(str)
Explanation: create consolidate table from unique compounds
End of explanation
for k in my_keys:
print k
df[k] = df[k].str.replace(r'<[^>]*>', '')
df.synonyms[100000]
#TODO: define my_keys once! then list the ones to pull out that need special attention
my_agg_keys = my_keys
my_agg_keys.remove('pubchem_compound_id') #has to be handled special...
my_agg_keys.remove('metatlas_inchi_key') #its the grouby key
my_agg_dict = {}
def pubchem_combine_fun(x):
new_list = []
for d in x:
if (d) and (d != 'None'):
d = str(int(float(d)))
new_list.append(d)
return '///'.join(list(set(new_list)))
my_agg_dict['pubchem_compound_id'] = pubchem_combine_fun
for k in my_agg_keys:
my_agg_dict[k] = lambda x: '///'.join(list(set([d for d in x if (d) and (d != '') and (d.lower() != 'none') and (d.lower() != 'nan')])))
gb = df.groupby('metatlas_inchi_key').agg(my_agg_dict)
gb.reset_index(inplace=True)
# import itertools
# import operator
# def most_common(dL):
# # get an iterable of (item, iterable) pairs
# L = dL.split('///')
# SL = sorted((x, i) for i, x in enumerate(L))
# # print 'SL:', SL
# groups = itertools.groupby(SL, key=operator.itemgetter(0))
# # auxiliary function to get "quality" for an item
# def _auxfun(g):
# item, iterable = g
# count = 0
# min_index = len(L)
# for _, where in iterable:
# count += 1
# min_index = min(min_index, where)
# # print 'item %r, count %r, minind %r' % (item, count, min_index)
# return count, -min_index
# # pick the highest-count/earliest item
# return max(groups, key=_auxfun)[0]
# def take_first_common_name(x):
# return sorted(x.split('///'),key=len)[0]
# gb['common_name'] = gb['common_name'].apply(take_first_common_name)
# gb['common_name'] = gb['common_name'].apply(most_common)
gb.head(10)
gb.synonyms.unique()
no_name = gb[gb.common_name == '']
no_name[no_name.pubchem_compound_id != ''].pubchem_compound_id.to_csv('pubchem_cid_no_name.csv')
no_name[no_name.pubchem_compound_id == ''].metatlas_inchi_key.to_csv('inchi_key_no_name_no_pubchem.csv')
gb = pd.read_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/unique_compounds.pkl')
Explanation: Remove URLs and other HTML-style markup from various fields throughout
End of explanation
metatlas_not_found = pd.read_pickle('metatlas_compounds_not_found.pkl')
metatlas_not_found.drop('inchi',axis=1,inplace=True)
metatlas_not_found.rename(columns = {'name':'common_name'}, inplace = True)
metatlas_not_found['source_database'] = 'MetAtlas'
for c in set(gb.keys()) - set(metatlas_not_found.keys()):
metatlas_not_found[c] = ''
metatlas_not_found.head()
print gb.keys()
print ' '
print metatlas_not_found.keys()
print ' '
print set(gb.keys()) - set(metatlas_not_found.keys())
combo = pd.concat([metatlas_not_found, gb], ignore_index=True)
print combo.shape, gb.shape, metatlas_not_found.shape
combo.shape
combo.to_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/unique_compounds.pkl')
combo.to_csv('/project/projectdirs/metatlas/projects/uniquecompounds.csv')
combo.to_csv('uniquecompounds.csv')
Explanation: Add old MetAtlas compounds that are not in any of these databases
End of explanation
combo = pd.read_pickle('/project/projectdirs/openmsi/projects/ben_run_pactolus/unique_compounds.pkl')
combo.drop('monoisotopoic_mw', axis=1, inplace=True)
def calculate_monoisotopic_mw_from_inchi(row):
mw = ''
rdk_mol = Chem.MolFromInchi(row['metatlas_inchi'])
if rdk_mol:
mw = str(float(Descriptors.ExactMolWt(rdk_mol)))
return mw
combo['monoisotopic_mw'] = combo.apply(calculate_monoisotopic_mw_from_inchi, axis=1)
combo.keys()
combo = combo[(combo.monoisotopic_mw!='') ]
# my_str = 'radical'
# gb[gb.common_name.str.contains(r'(?:\s|^)%s(?:\s|$)'%my_str,case=False,regex=True)].head(20)
# from rdkit.Chem import Descriptors
# print Descriptors.NumRadicalElectrons(Chem.MolFromInchi('InChI=1S/C9H11NO3/c10-8(9(12)13)5-6-1-3-7(11)4-2-6/h1-4,8,11H,5,10H2,(H,12,13)/q+1'))
# print Descriptors.NumRadicalElectrons(Chem.MolFromInchi('InChI=1S/C40H46N8S4/c1-25-19-35-33(21-29(25)23-49-39(45(3)4)46(5)6)41-37(51-35)27-11-15-31(16-12-27)43-44-32-17-13-28(14-18-32)38-42-34-22-30(26(2)20-36(34)52-38)24-50-40(47(7)8)48(9)10/h11-22H,23-24H2,1-10H3/q+2/b44-43+'))
# import sys
# sys.path.append('/global/project/projectdirs/openmsi/jupyterhub_libs/anaconda/lib/python2.7/site-packages')
# from rdkit import Chem
# import numpy as np
# import pandas as pd
# from rdkit.Chem import Draw
# from rdkit.Chem import PandasTools
# %matplotlib inline
# inchis = ['InChI=1S/C3H7NO2/c1-2(4)3(5)6/h2H,4H2,1H3,(H,5,6)/p+1/t2-/m1/s1',
# 'InChI=1S/C18H18N4O6S4/c1-3-21-13-7-5-11(31(23,24)25)9-15(13)29-17(21)19-20-18-22(4-2)14-8-6-12(32(26,27)28)10-16(14)30-18/h5-10H,3-4H2,1-2H3,(H,23,24,25)(H,26,27,28)/q+1/p-1',
# 'InChI=1S/C3H7NO2.H3N/c1-2(4)3(5)6;/h2H,4H2,1H3,(H,5,6);1H3/p+1/t2-;/m0./s1',
# 'InChI=1S/C3H7NO2/c1-2(4)3(5)6/h2H,4H2,1H3,(H,5,6)/t2-/m0/s1',
# 'InChI=1S/C3H7NO2/c1-2(4)3(5)6/h2H,4H2,1H3,(H,5,6)/t2-/m1/s1']
# labels = ['D-Alanine Cation','2,2''-azino-bis-(3-ethylbenzothiazoline-6-sulfonate) radical cation', 'ammonium alanine','L-alanine','D-Alanine']
# mols = [Chem.MolFromInchi(ii) for ii in inchis]
# de_salt = [m[0] for m in [desalt(m) for m in mols]]
# neutralized = [m[0] for m in [NeutraliseCharges(m) for m in de_salt]]
# # sm1 = Chem.MolToSmiles(m)#,isomericSmiles=True)
# # m2 = Chem.MolFromSmiles(sm1)
# # sm1 = Chem.MolToInchi(m2)
# # m3 = Chem.MolFromInchi(sm1)
# Chem.Draw.MolsToGridImage(mols+de_salt+neutralized,
# legends = labels+['desalted %s'%m for m in labels]+['neutralized %s'%m for m in labels],
# molsPerRow = len(inchis))
Explanation: Mass values are not high enough precision
Mass might not be including the charge
End of explanation |
8,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OpenCV Filters Webcam
In this notebook, several filters will be applied to webcam images.
Those input sources and applied filters will then be displayed either directly in the notebook or on HDMI output.
To run all cells in this notebook a webcam and HDMI output monitor are required.
1. Start HDMI output
Step 1
Step1: Step 2
Step2: 2. Applying OpenCV filters on Webcam input
Step 1
Step3: Step 2
Step4: Step 3
Step5: Step 4
Step6: Step 5
Step7: Step 6
Step8: Step 7 | Python Code:
from pynq import Overlay
Overlay("base.bit").download()
Explanation: OpenCV Filters Webcam
In this notebook, several filters will be applied to webcam images.
Those input sources and applied filters will then be displayed either directly in the notebook or on HDMI output.
To run all cells in this notebook a webcam and HDMI output monitor are required.
1. Start HDMI output
Step 1: Load the overlay
End of explanation
from pynq.drivers.video import HDMI
hdmi_out = HDMI('out')
hdmi_out.start()
Explanation: Step 2: Initialize HDMI I/O
End of explanation
# monitor configuration: 640*480 @ 60Hz
hdmi_out.mode(HDMI.VMODE_640x480)
hdmi_out.start()
# monitor (output) frame buffer size
frame_out_w = 1920
frame_out_h = 1080
# camera (input) configuration
frame_in_w = 640
frame_in_h = 480
Explanation: 2. Applying OpenCV filters on Webcam input
Step 1: Initialize Webcam and set HDMI Out resolution
End of explanation
from pynq.drivers.video import Frame
import cv2
videoIn = cv2.VideoCapture(0)
videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);
videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);
print("capture device is open: " + str(videoIn.isOpened()))
Explanation: Step 2: Initialize camera from OpenCV
End of explanation
import numpy as np
ret, frame_vga = videoIn.read()
if (ret):
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
frame_1080p[0:480,0:640,:] = frame_vga[0:480,0:640,:]
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
raise RuntimeError("Error while reading from camera.")
Explanation: Step 3: Send webcam input to HDMI output
End of explanation
import time
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
num_frames = 20
readError = 0
start = time.time()
for i in range (num_frames):
# read next image
ret, frame_vga = videoIn.read()
if (ret):
laplacian_frame = cv2.Laplacian(frame_vga, cv2.CV_8U)
# copy to frame buffer / show on monitor reorder RGB (HDMI = GBR)
frame_1080p[0:480,0:640,[0,1,2]] = laplacian_frame[0:480,0:640,[1,0,2]]
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
readError += 1
end = time.time()
print("Frames per second: " + str((num_frames-readError) / (end - start)))
print("Number of read errors: " + str(readError))
Explanation: Step 4: Edge detection
Detecting edges on webcam input and display on HDMI out.
End of explanation
frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)
num_frames = 20
readError = 0
start = time.time()
for i in range (num_frames):
# read next image
ret, frame_webcam = videoIn.read()
if (ret):
frame_canny = cv2.Canny(frame_webcam,100,110)
frame_1080p[0:480,0:640,0] = frame_canny[0:480,0:640]
frame_1080p[0:480,0:640,1] = frame_canny[0:480,0:640]
frame_1080p[0:480,0:640,2] = frame_canny[0:480,0:640]
# copy to frame buffer / show on monitor
hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8)))
else:
readError += 1
end = time.time()
print("Frames per second: " + str((num_frames-readError) / (end - start)))
print("Number of read errors: " + str(readError))
Explanation: Step 5: Canny edge detection
Detecting edges on webcam input and display on HDMI out.
Any edges with intensity gradient more than maxVal are sure to be edges and those below minVal are sure to be non-edges, so discarded. Those who lie between these two thresholds are classified edges or non-edges based on their connectivity. If they are connected to “sure-edge” pixels, they are considered to be part of edges. Otherwise, they are also discarded.
End of explanation
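The two numbers passed to cv2.Canny above are the minVal/maxVal thresholds described in this step; 100 and 110 form a very narrow hysteresis band, so it can be instructive to experiment with wider settings (the values below are just examples):
frame_canny_wide = cv2.Canny(frame_webcam, 50, 150)  # compare the resulting edge map with the 100/110 version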
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
plt.figure(1, figsize=(10, 10))
frame_vga = np.zeros((480,640,3)).astype(np.uint8)
frame_vga[0:480,0:640,0] = frame_canny[0:480,0:640]
frame_vga[0:480,0:640,1] = frame_canny[0:480,0:640]
frame_vga[0:480,0:640,2] = frame_canny[0:480,0:640]
plt.imshow(frame_vga[:,:,[2,1,0]])
plt.show()
Explanation: Step 6: Show results
Now use matplotlib to show filtered webcam input inside notebook
End of explanation
videoIn.release()
hdmi_out.stop()
del hdmi_out
Explanation: Step 7: Release camera and HDMI
End of explanation |
8,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style='color
Step1: Part A - Calculate features for an individual source
To demonstrate how the FATS library works, we will begin by calculating features for the source with $\alpha_{\rm J2000} = 312.23854988, \delta_{\rm J2000} = -0.89670553$. The data structure for FATS is a little different from how we have structured data in other portions of this class. In short, FATS is looking for a 2-d array that contains time, mag, and mag uncertainty. To get the required formatting, we can preprocess the dats as follows
Step2: Problem A2 What do you notice about the points that are flagged and removed for this source?
Answer type your answer here
This particular source shows the (potential) danger of preprocessing. Is the "flare" from this source real, or is it the result of incorrectly calibrated observations? In practice, determining the answer to this question would typically involve close inspection of the actual PTF image where the flare is detected, and possibly additional observations as well. In this particular case, given that there are other observations showing the decay of the flare, and that $(g - r) = 1.7$, which is consistent with an M-dwarf, the flare is likely real. In sum, preprocessing probably would lead to an incorrect classification for this source. Nevertheless, for most applications it is necessary to produce reasonable classifications.
Part B - Calculate features and check that they are reasonable
Now we will focus on the source with $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$ , to calculate features using FATS. Once the data have been preprocessed, features can be calcualted using the FeatureSpace module (note - FATS is designed to handle data in multiple passbands, and as such the input arrays passed to FATS must be specified)
Step3: The features object, feats can be retrieved in three different ways
Step4: For now, we will ignore the precise definition of these 59 (59!) features. But, let's focus on the first feature in the dictionary, Amplitude, to perform a quick check that the feature calculation is proceeding as we would expect.
Problem B2 Plot the light curve the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, and check if the amplitude agrees with that calculated by FATS.
Note - the amplitude as calculated by FATS is actually the half amplitude, and it is calculated by taking half the difference between the median of the brightest 5% and the median of the faintest 5% of the observations. A quick eyeball test is sufficient.
Step5: Now, let's check one other feature to see if these results make sense. The best fit lomb-scargle period is stored as PeriodLS in the FATS feature dictionary. The feature period_fit reports the false alarm probability for a given light curve. This sources has period_fit $\sim 10^{-6}$, so it's fairly safe to say this is a periodic variable, but this should be confirmed.
Problem B3 Plot the phase folded light curve the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, using the period determined by FATS.
Step6: Does this light curve look familiar at all? Why, it's our favorite star!
Part C - Calculate features for all PTF sources with light curves
Finally, and this is the most important portion of this exercise, we need to calculate features for all of the PTF sources for which we have light curves. Essentially, a for loop needs to be created to cover every light curve in the shelf file, but there are a few things you must keep in mind
Step7: Problem 2) Build the machine learning model
In many ways, the most difficult steps are now complete. We will now build a machine learning model (one that is significantly more complicated than the model we built yesterday, but the mechanics are nearly identical), and then predict the classification of our sources based on their light curves.
The training set is stored in a csv file that you have already downloaded
Step8: As is immediately clear - this dataset is more complicated than the one we had yesterday. We are now calcualting 59 different features to characterize the light curves. 59 is a large number, and it would prove cumbersome to actually plot histograms for each of them. Also, if some of the features are uniformative, which often happens in problems like this, plotting everything can actually be a waste of time.
Part A - Construct the Random Forest Model
We will begin by constructing a random forest model from the training set, and then we will infer which features are the most important based on the rankings provided by random forest. [Note - this was the challenge problem from the end of yesterday's session. Refer to that if you do not know how to calculate the feature importances.]
Problem A1 Construct a random forest using the training set, and determine the three most important features as measured by the random forest.
Hint - it may be helpful to figure out the indicies corresponding to the most important features. This can be done using np.argsort() which returns the indicies associated with the sorted argument to the call. The sorting goes from smallest number to highest. We want the most important features, so the indicies corresponding to that can be obtained by using [
Step9: As before, we are going to ignore the meaning of the three most important features for now (it is highly recommended that you check out Nun et al. 2015 to learn the definition of the features, but we don't have time for that at the moment).
To confirm that these three features actually help to separate the different classes, we will examine the histogram distributions for the three classes and these three features.
Problem A2 Plot the histogram distribution of each class for the three most important features, as determined by random forest.
Hint - this is very similar to the histogram plots that were created yesterday, consult your answer there for helpful hints about weighting the entries so that the sum of all entries = 1. It also helps to plot the x-axis on a log scale.
Step10: Part B - Evaluate the accuracy of the model
Like yesterday, we are going to evaluate how well the model performs via cross validation. While we have looked at the feature distribution for this data set, we have not closely examined the classes thus far. We will begin with that prior to estimating the accuracy of the model via cross validation.
Recall that a machine learning model is only as good as the model training set. Since you were handed the training set without details as to how it was constructed, you cannot easily evaluate whether or not it is biased (that being said, the training set is almost certainly biased). We can attempt to determine if this training set is representative of the field, however. In particular, yesterday we constructed a model to determine whether sources were stars, RRL variables, or QSOs based on their SDSS colors. This model provides a rough, but certainly not perfect, estimate of the ratio of these three classes to each other.
Problem B1 Calculate the number of RRL, QSOs, and stars in the training set and compare these numbers to the ratios for these classes as determined by the predictions from the SDSS colors-based classifier that was constructed yseterday.
Step11: While the ratios between the classes are far from identical to the distribution we might expect based on yesterday's classification, it is fair to say that the distribution is reasonable. In particular, there are more QSOs than RRL, and there are significantly more stars than either of the other two classes. Nevertheless, the training set is not large. Note - large in this sense is very relative, if you are searching for something that is extremely rare, such as the EM counterpart to a gravitational wave event, a training set of 2 might be large, but there are thousands of known QSOs and RRL, and a $\sim$billion known stars. Thus, this training set is small.
Problem B2 Do you think the training set is representative of the diversity in each of the three classes?
Answer type your response to this question here
At this point, it should be clear that the training set for this exercise is far from perfect. But guess what? That means this training set has something in common with every other astronomical study that utilizes machine learning. At the risk of beating the horse well beyond the grave, it is important to highlight once again that building a training set is extremely difficult work. There are almost always going to be (known and unknown) biases present in any sample used to train a model, and as a result predicted accuracies from cross validation are typically going to be over-optimistic. Nevertheless, cross validation remains one of the best ways to quantitatively examine the quality of a model.
Problem B3 Determine the overall accuracy of the time-domain machine learning model using 5-fold cross validation.
Step12: As noted earlier - this accuracy is quite high, and it is likely over-optimistic. While the overall accuracy provides a nice summary statistic, it is always useful to know where the classifier is making mistakes. As noted yesterday, this is most easily accomplished with a confusion matrix.
Problem B4 Plot the normalized confusion matrix for the new time-domain classifier. Identify the most common error made by the classifier.
Hint - you will (likely) need to run cross-validation again so you can produce cross-validated class predictions for each source in the training set.
Step13: Problem 3) Use the model to identify RRL and QSO candidates in the PTF field
Now, virtually all of the hard work is done, we simply need to make some final predictions on our dataset.
Part A - Apply the machine learning model
Problem A1 Using the features that you calculated in Problem 1, predict the class of all the PTF sources.
Step14: Problem A2 Determine the number of candidate sources belonging to each class. | Python Code:
shelf_file = " " # complete the path to the appropriate shelf file here
shelf = shelve.open(shelf_file)
shelf.keys()
Explanation: <span style='color:red'>An essential note in preparation for this exercise.</span> We will use scikit-learn to provide classifications of the PTF sources that we developed on the first day of the summer school. Calculating the features for these light curves can be computationally intensive and may require you to run your computer for several hours. It is essential that you complete this portion of the exercise prior to Friday afternoon. Or, in other words, this is homework for Thursday night.
Fortunately, there is an existing library that will calculate all the light curve features for you. It is not included in the anaconda python distribution, but you can easily install the library using pip, from the command line (i.e. outside the notebook):
pip install FATS
After a short download, you should have the FATS (Feature Analysis for Time Series) library loaded and ready to use.
Note within a note: The FATS library is not compatible with Python 3, thus if you are using Python 3 feel free to ignore this and we will give you an array with the necessary answers.
Hands-On Exercise 6: Building a Machine Learning Classifier to Identify RRL and QSO Candidates Via Light Curves
Version 0.1
We have spent a lot of time discussing RR Lyrae stars and QSOs. The importance of these sources will not be re-hashed here, instead we will jump right into the exercise.
Today, we will measure a large number of light curve features for each PTF source with a light curve. This summer school has been dedicated to the study of time-variable phenomena, and today, finally, everything will come together. We will use machine learning tools to classify variable sources.
By AA Miller (c) 2015 Jul 30
Problem 1) Calculate the features
The training set for today has already been made. The first step is to calculate the features for the PTF sources we are hoping to classify. We will do this using the FATS library in python. The basic steps are simple: the light curve, i.e. time, mag, uncertainty on the mag, is passed to FATS, and features are calculated and returned. Prior to calculating the features, FATS preprocesses the data by removing $5\sigma$ outliers and observations with anomalously large uncertainties. After this, features are calculated.
We begin by reading in the data from the first day.
End of explanation
import FATS
reference_catalog = '../data/PTF_Refims_Files/PTF_d022683_f02_c06_u000114210_p12_sexcat.ctlg'
outfile = reference_catalog.split('/')[-1].replace('ctlg','shlv')
lc_mjd, lc_mag, lc_magerr = source_lightcurve("../data/"+outfile, # complete
[mag, time, error] = FATS.Preprocess_LC( # complete
plt.errorbar( # complete
plt.errorbar( # complete
Explanation: Part A - Calculate features for an individual source
To demonstrate how the FATS library works, we will begin by calculating features for the source with $\alpha_{\rm J2000} = 312.23854988, \delta_{\rm J2000} = -0.89670553$. The data structure for FATS is a little different from how we have structured data in other portions of this class. In short, FATS is looking for a 2-d array that contains time, mag, and mag uncertainty. To get the required formatting, we can preprocess the data as follows:
import FATS
[mag, time, error] = FATS.Preprocess_LC(lc_mag, lc_mjd, lc_magerr).Preprocess()
where the result from this call is a 2d array ready for feature calculations. lc_mag, lc_mjd, and lc_magerr are individual arrays for the source in question that we will pass to FATS.
Problem A1 Perform preprocessing on the source with $\alpha_{\rm J2000} = 312.23854988, \delta_{\rm J2000} = -0.89670553$ from the shelf file, then plot the light curve both before and after preprocessing using different colors to see which epochs are removed during preprocessing.
Hint - this won't actually affect your code, because FATS properly understands NumPy masks, but recall that each source in the shelf file has a different mask array, while none of the MJDs in that file have a mask.
End of explanation
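One possible completion of the Problem A1 skeleton above (hedged: it assumes source_lightcurve takes the shelf file plus the source coordinates, as it did earlier in the week, and it reuses the Preprocess_LC call quoted in the problem statement):
lc_mjd, lc_mag, lc_magerr = source_lightcurve("../data/"+outfile, 312.23854988, -0.89670553)
[mag, time, error] = FATS.Preprocess_LC(lc_mag, lc_mjd, lc_magerr).Preprocess()
# plot all epochs in grey, the epochs that survive preprocessing in color
plt.errorbar(lc_mjd, lc_mag, lc_magerr, fmt='o', color='0.7', label='all epochs')
plt.errorbar(time, mag, error, fmt='o', color='red', label='after preprocessing')
plt.legend()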
lc_mjd, lc_mag, lc_magerr = # complete
# complete
# complete
lc = # complete
feats = FATS.FeatureSpace( # complete
Explanation: Problem A2 What do you notice about the points that are flagged and removed for this source?
Answer type your answer here
This particular source shows the (potential) danger of preprocessing. Is the "flare" from this source real, or is it the result of incorrectly calibrated observations? In practice, determining the answer to this question would typically involve close inspection of the actual PTF image where the flare is detected, and possibly additional observations as well. In this particular case, given that there are other observations showing the decay of the flare, and that $(g - r) = 1.7$, which is consistent with an M-dwarf, the flare is likely real. In sum, preprocessing probably would lead to an incorrect classification for this source. Nevertheless, for most applications it is necessary to produce reasonable classifications.
Part B - Calculate features and check that they are reasonable
Now we will focus on the source with $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, to calculate features using FATS. Once the data have been preprocessed, features can be calculated using the FeatureSpace module (note - FATS is designed to handle data in multiple passbands, and as such the input arrays passed to FATS must be specified):
lc = np.array([mag, time, error])
feats = FATS.FeatureSpace(Data=['magnitude', 'time', 'error']).calculateFeature(lc)
Following these commands, we now have an object feats that contains the features of our source. As there is only one filter with a light curve for this source, FATS will not be able to calculate the full library of features.
Problem B1 Preprocess the light curve for the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, and use FATS to calculate the features for this source.
End of explanation
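A hedged completion of the Problem B1 skeleton above, reusing the FeatureSpace snippet quoted in the problem statement (the source_lightcurve signature is again an assumption):
lc_mjd, lc_mag, lc_magerr = source_lightcurve("../data/"+outfile, 312.50395, -0.70654)
[mag, time, error] = FATS.Preprocess_LC(lc_mag, lc_mjd, lc_magerr).Preprocess()
lc = np.array([mag, time, error])
feats = FATS.FeatureSpace(Data=['magnitude', 'time', 'error']).calculateFeature(lc)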
# execute this cell
print('There are a total of {:d} features for single band LCs'.format(len(feats.result(method='array'))))
print('Here is a dictionary showing the features:')
feats.result(method='dict')
Explanation: The features object, feats can be retrieved in three different ways: dict returns a dictionary with the feature names and their corresponding values, array returns the feature values, and features returns an array with the names of the individual features.
Execute the cell below to determine how many features there are, and to examine the features calculated by FATS.
End of explanation
plt.errorbar( # complete
Explanation: For now, we will ignore the precise definition of these 59 (59!) features. But, let's focus on the first feature in the dictionary, Amplitude, to perform a quick check that the feature calculation is proceeding as we would expect.
Problem B2 Plot the light curve of the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, and check if the amplitude agrees with that calculated by FATS.
Note - the amplitude as calculated by FATS is actually the half amplitude, and it is calculated by taking half the difference between the median of the brightest 5% and the median of the faintest 5% of the observations. A quick eyeball test is sufficient.
End of explanation
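A quick numerical cross-check of the convention described in the note (half the difference between the medians of the brightest and faintest 5% of epochs) can be written directly from the preprocessed arrays -- a sketch, not the FATS source:
n5 = max(int(round(0.05*len(mag))), 1)
sorted_mag = np.sort(mag)
# brightest epochs have the smallest magnitudes, faintest the largest
half_amp = 0.5*(np.median(sorted_mag[-n5:]) - np.median(sorted_mag[:n5]))
print(half_amp, feats.result(method='dict')['Amplitude'])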
plt.errorbar( # complete
Explanation: Now, let's check one other feature to see if these results make sense. The best fit lomb-scargle period is stored as PeriodLS in the FATS feature dictionary. The feature period_fit reports the false alarm probability for a given light curve. This source has period_fit $\sim 10^{-6}$, so it's fairly safe to say this is a periodic variable, but this should be confirmed.
Problem B3 Plot the phase folded light curve of the source at $\alpha_{\rm J2000} = 312.50395, \delta_{\rm J2000} = -0.70654$, using the period determined by FATS.
End of explanation
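One possible completion of the phase-folding cell above, using the PeriodLS value from the feature dictionary (key names as quoted in the text; inverting the y-axis is just the usual convention for magnitudes):
period = feats.result(method='dict')['PeriodLS']
phase = (time % period)/period
plt.errorbar(phase, mag, error, fmt='o')
plt.gca().invert_yaxis()
plt.xlabel('phase')
plt.ylabel('mag')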
%%capture
# not too many hints this time
Xfeats = # complete or incorporate elsewhere
# for loop goes here
# 2 lines below show an example of how to create an astropy table and then save the feature calculation as a csv file
# -- if you use these lines, be sure that the variable names match your for loop
#feat_table = Table(Xfeats, names = tuple(feats.result(method='features')))
#feat_table.write('PTF_feats.csv', format='csv')
Explanation: Does this light curve look familiar at all? Why, it's our favorite star!
Part C - Calculate features for all PTF sources with light curves
Finally, and this is the most important portion of this exercise, we need to calculate features for all of the PTF sources for which we have light curves. Essentially, a for loop needs to be created to cover every light curve in the shelf file, but there are a few things you must keep in mind:
It is essential that the features be stored in a sensible way that can be passed to scikit-learn. Recall that features are represented as 2d arrays where each row corresponds to one source and each column corresponds to a given feature.
Finally, if you can easily figure it out, it would be good to store your data in a file of some kind so the features can be easily read by your machine in the future.
Problem C1 Measure features for all the sources in the PTF field that we have been studying this week. Store the results in an array Xfeats that can be used by scikit-learn.
Note - FATS will produce warnings for every single light curve in the loop, which results in a lot of output text. Thus, we have employed %%capture at the start of this cell to suppress that output.
Hint - you may find it helpful to include a progress bar since this loop will take $\sim$2 hr to run. See the Making_a_Lightcurve notebook for an example. This is not necessary, however.
End of explanation
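A sketch of how the Part C loop might be organized (the way each light curve is unpacked from the shelf is an assumption -- adapt it to however the shelf was written on Day 1):
Xfeats = []
for key in shelf.keys():
    lc_mjd, lc_mag, lc_magerr = shelf[key]   # hypothetical unpacking of one source's arrays
    mag, time, error = FATS.Preprocess_LC(lc_mag, lc_mjd, lc_magerr).Preprocess()
    feats = FATS.FeatureSpace(Data=['magnitude', 'time', 'error']).calculateFeature(np.array([mag, time, error]))
    Xfeats.append(feats.result(method='array'))
Xfeats = np.array(Xfeats)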
ts = Table.read("../data/TS_PTF_feats.csv")
ts
Explanation: Problem 2) Build the machine learning model
In many ways, the most difficult steps are now complete. We will now build a machine learning model (one that is significantly more complicated than the model we built yesterday, but the mechanics are nearly identical), and then predict the classification of our sources based on their light curves.
The training set is stored in a csv file that you have already downloaded: ../data/TS_PTF_feats.csv. We will begin by reading in the training set to an astropy Table.
End of explanation
# the trick here is to remember that both the features and the class labels are included in ts
y = # complete
X = # complete
# complete
# complete
from sklearn.ensemble import RandomForestClassifier
RFmod = RandomForestClassifier(n_estimators = 100)
RFmod.fit( # complete
feat_order = np.array(ts.colnames)[np.argsort(RFmod.feature_importances_)[::-1]]
print('The 3 most important features are: {:s}, {:s}, {:s}'.format( # complete
Explanation: As is immediately clear - this dataset is more complicated than the one we had yesterday. We are now calculating 59 different features to characterize the light curves. 59 is a large number, and it would prove cumbersome to actually plot histograms for each of them. Also, if some of the features are uninformative, which often happens in problems like this, plotting everything can actually be a waste of time.
Part A - Construct the Random Forest Model
We will begin by constructing a random forest model from the training set, and then we will infer which features are the most important based on the rankings provided by random forest. [Note - this was the challenge problem from the end of yesterday's session. Refer to that if you do not know how to calculate the feature importances.]
Problem A1 Construct a random forest using the training set, and determine the three most important features as measured by the random forest.
Hint - it may be helpful to figure out the indices corresponding to the most important features. This can be done using np.argsort() which returns the indices associated with the sorted argument to the call. The sorting goes from smallest number to highest. We want the most important features, so the indices corresponding to that can be obtained by using [::-1], which flips the order of a NumPy array. Thus, you can obtain indices sorting the features from most important to least important with the following command:
np.argsort(RFmod.feature_importances_)[::-1]
This may, or may not depending on your approach, help you identify the 3 most important features.
End of explanation
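If it helps, here is one hedged way to unpack the training table for the fit; the label column name below is purely hypothetical -- check ts.colnames for the real one:
label_col = 'class'   # hypothetical name for the column holding the QSO/RRL/star labels
feat_cols = [c for c in ts.colnames if c != label_col]
y = np.array(ts[label_col])
X = np.array([ts[c] for c in feat_cols]).T
RFmod.fit(X, y)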
Nqso =
Nrrl =
Nstar =
for feat in feat_order[0:3]:
plt.figure()
plt.hist( # complete
# complete
# complete
plt.xscale('log')
plt.legend(fancybox = True)
Explanation: As before, we are going to ignore the meaning of the three most important features for now (it is highly recommended that you check out Nun et al. 2015 to learn the definition of the features, but we don't have time for that at the moment).
To confirm that these three features actually help to separate the different classes, we will examine the histogram distributions for the three classes and these three features.
Problem A2 Plot the histogram distribution of each class for the three most important features, as determined by random forest.
Hint - this is very similar to the histogram plots that were created yesterday, consult your answer there for helpful hints about weighting the entries so that the sum of all entries = 1. It also helps to plot the x-axis on a log scale.
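A minimal sketch for a single feature is given below; it assumes X and y were built from ts as in Problem A1 (with y a NumPy array of class labels) and normalizes each histogram so its entries sum to 1:
python
feat = feat_order[0]  # repeat for feat_order[1] and feat_order[2]
for cls in np.unique(y):
    vals = np.array(ts[feat])[y == cls]
    plt.hist(vals, bins=50, weights=np.ones_like(vals, dtype=float)/len(vals),
             histtype='step', label=str(cls))
plt.xscale('log')
plt.legend(fancybox=True)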
End of explanation
print('There are {:d} QSOs, {:d} RRL, and {:d} stars in the training set'.format(# complete
Explanation: Part B - Evaluate the accuracy of the model
Like yesterday, we are going to evaluate how well the model performs via cross validation. While we have looked at the feature distribution for this data set, we have not closely examined the classes thus far. We will begin with that prior to estimating the accuracy of the model via cross validation.
Recall that a machine learning model is only as good as the model training set. Since you were handed the training set without details as to how it was constructed, you cannot easily evaluate whether or not it is biased (that being said, the training set is almost certainly biased). We can attempt to determine if this training set is representative of the field, however. In particular, yesterday we constructed a model to determine whether sources were stars, RRL variables, or QSOs based on their SDSS colors. This model provides a rough, but certainly not perfect, estimate of the ratio of these three classes to each other.
Problem B1 Calculate the number of RRL, QSOs, and stars in the training set and compare these numbers to the ratios for these classes as determined by the predictions from the SDSS colors-based classifier that was constructed yesterday.
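One way to get the counts, assuming y holds the training-set class labels as above, is sketched here:
python
labels, counts = np.unique(y, return_counts=True)
for lab, cnt in zip(labels, counts):
    print('{}: {}'.format(lab, cnt))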
End of explanation
from sklearn import cross_validation # recall that this will only work with sklearn v0.16+
RFmod = RandomForestClassifier( # complete
cv_accuracy = # complete
print("The cross-validation accuracy is {:.1f}%".format(100*np.mean(cv_accuracy)))
Explanation: While the ratios between the classes are far from identical to the distribution we might expect based on yesterday's classification, it is fair to say that the distribution is reasonable. In particular, there are more QSOs than RRL, and there are significantly more stars than either of the other two classes. Nevertheless, the training set is not large. Note - large in this sense is very relative, if you are searching for something that is extremely rare, such as the EM counterpart to a gravitational wave event, a training set of 2 might be large, but there are thousands of known QSOs and RRL, and a $\sim$billion known stars. Thus, this training set is small.
Problem B2 Do you think the training set is representative of the diversity in each of the three classes?
Answer type your response to this question here
At this point, it should be clear that the training set for this exercise is far from perfect. But guess what? That means this training set has something in common with every other astronomical study that utilizes machine learning. At the risk of beating the horse well beyond the grave, it is important to highlight once again that building a training set is extremely difficult work. There are almost always going to be (known and unknown) biases present in any sample used to train a model, and as a result predicted accuracies from cross validation are typically going to be over-optimistic. Nevertheless, cross validation remains one of the best ways to quantitatively examine the quality of a model.
Problem B3 Determine the overall accuracy of the time-domain machine learning model using 5-fold cross validation.
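A minimal sketch of the call, using the cross_validation module imported above (the choice of 100 trees is an assumption):
python
RFmod = RandomForestClassifier(n_estimators=100)
cv_accuracy = cross_validation.cross_val_score(RFmod, X, y, cv=5)
print("The cross-validation accuracy is {:.1f}%".format(100*np.mean(cv_accuracy)))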
End of explanation
from sklearn.metrics import confusion_matrix
y_cv_preds = # complete
cm = # complete
plt.imshow( # complete
plt.colorbar()
plt.ylabel( # complete
plt.xlabel( # complete
plt.tight_layout()
Explanation: As noted earlier - this accuracy is quite high, and it is likely over-optimistic. While the overall accuracy provides a nice summary statistic, it is always useful to know where the classifier is making mistakes. As noted yesterday, this is most easily accomplished with a confusion matrix.
Problem B4 Plot the normalized confusion matrix for the new time-domain classifier. Identify the most common error made by the classifier.
Hint - you will (likely) need to run cross-validation again so you can produce cross-validated class predictions for each source in the training set.
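A sketch of one approach (cross_val_predict lives in the same cross_validation module for sklearn 0.16+; the normalization divides each row by the number of true members of that class):
python
from sklearn.metrics import confusion_matrix
y_cv_preds = cross_validation.cross_val_predict(RFmod, X, y, cv=5)
cm = confusion_matrix(y, y_cv_preds)
norm_cm = cm.astype(float) / cm.sum(axis=1)[:, np.newaxis]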
End of explanation
RFmod = # complete
# complete
PTF_classes = # complete
Explanation: Problem 3) Use the model to identify RRL and QSO candidates in the PTF field
Now, virtually all of the hard work is done; we simply need to make some final predictions on our dataset.
Part A - Apply the machine learning model
Problem A1 Using the features that you calculated in Problem 1, predict the class of all the PTF sources.
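A minimal sketch, assuming Xfeats from Problem 1 holds one row of features per PTF source:
python
RFmod = RandomForestClassifier(n_estimators=100)
RFmod.fit(X, y)
PTF_classes = RFmod.predict(Xfeats)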
End of explanation
print('There are {:d} candidate QSOs, {:d} candidate RRL, and {:d} candidate stars.'.format(Nqso_cand, Nrrl_cand, Nstar_cand))
Explanation: Problem A2 Determine the number of candidate sources belonging to each class.
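A sketch of the counting step; the label strings 'QSO', 'RRL', and 'star' are assumptions about how the classes are encoded, so substitute whatever labels your training set actually uses:
python
Nqso_cand = np.sum(PTF_classes == 'QSO')    # assumed label string
Nrrl_cand = np.sum(PTF_classes == 'RRL')    # assumed label string
Nstar_cand = np.sum(PTF_classes == 'star')  # assumed label string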
End of explanation |
8,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
On-the-fly training using ASE
Yu Xie ([email protected])
This is a quick introduction to setting up our ASE-OTF interface to train a force field. We will train a force field model for diamond. To run the on-the-fly training, we will need to
Create a supercell with ASE Atoms object
Set up FLARE ASE calculator, including the kernel functions, hyperparameters, cutoffs for Gaussian process, and mapping parameters (if Mapped Gaussian Process is used)
Set up DFT ASE calculator. Here we will give an example of Quantum Espresso
Set up on-the-fly training with ASE MD engine
Please make sure you are using the LATEST FLARE code in our master branch.
Step 1
Step1: Step 2
Step2: Optional
If you want to use Mapped Gaussian Process (MGP), then set up MGP as follows
Step3: Now let's set up FLARE's ASE calculator. If you want to use MGP model, then set use_mapping = True and mgp_model = mgp_model below.
Step4: Step 3
Step5: Optional
Step6: Step 4
Step7: Set up parameters for On-The-Fly (OTF) training. The descriptions of the parameters are in ASE OTF module.
Set up the ASE_OTF training engine, and run
Note
Step8: Then check the *.out file for the training log.
Step 5 (Optional) | Python Code:
import numpy as np
from ase import units
from ase.spacegroup import crystal
from ase.build import bulk
np.random.seed(12345)
a = 3.52678
super_cell = bulk('C', 'diamond', a=a, cubic=True)
Explanation: On-the-fly training using ASE
Yu Xie ([email protected])
This is a quick introduction to setting up our ASE-OTF interface to train a force field. We will train a force field model for diamond. To run the on-the-fly training, we will need to
Create a supercell with ASE Atoms object
Set up FLARE ASE calculator, including the kernel functions, hyperparameters, cutoffs for Gaussian process, and mapping parameters (if Mapped Gaussian Process is used)
Set up DFT ASE calculator. Here we will give an example of Quantum Espresso
Set up on-the-fly training with ASE MD engine
Please make sure you are using the LATEST FLARE code in our master branch.
Step 1: Set up supercell with ASE
Here we create a cubic diamond carbon cell (the cell uses a lattice constant of 3.52678 Å). The atomic positions should also be randomly perturbed slightly so that the MD run starts with non-zero forces; that perturbation step is sketched below.
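A minimal sketch of the perturbation (the 0.05 Å standard deviation is an arbitrary assumption):
python
# randomly displace every atom so the MD run starts with non-zero forces
super_cell.rattle(stdev=0.05, seed=12345)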
End of explanation
from flare.gp import GaussianProcess
from flare.utils.parameter_helper import ParameterHelper
# set up GP hyperparameters
kernels = ['twobody', 'threebody'] # use 2+3 body kernel
parameters = {'cutoff_twobody': 5.0,
'cutoff_threebody': 3.5}
pm = ParameterHelper(
kernels = kernels,
random = True,
parameters=parameters
)
hm = pm.as_dict()
hyps = hm['hyps']
cut = hm['cutoffs']
print('hyps', hyps)
gp_model = GaussianProcess(
kernels = kernels,
component = 'mc', # If you are using ASE, please set to "mc" no matter for single-component or multi-component
hyps = hyps,
cutoffs = cut,
hyp_labels = ['sig2','ls2','sig3','ls3','noise'],
opt_algorithm = 'L-BFGS-B',
n_cpus = 1
)
Explanation: Step 2: Set up FLARE calculator
Now let’s set up our Gaussian process model in the same way as introduced before
End of explanation
from flare.mgp import MappedGaussianProcess
grid_params = {'twobody': {'grid_num': [64]},
'threebody': {'grid_num': [16, 16, 16]}}
mgp_model = MappedGaussianProcess(grid_params,
unique_species = [6],
n_cpus = 1,
var_map=None)
Explanation: Optional
If you want to use Mapped Gaussian Process (MGP), then set up MGP as follows
End of explanation
from flare.ase.calculator import FLARE_Calculator
flare_calculator = FLARE_Calculator(gp_model,
par = True,
mgp_model = None,
use_mapping = False)
super_cell.set_calculator(flare_calculator)
Explanation: Now let's set up FLARE's ASE calculator. If you want to use MGP model, then set use_mapping = True and mgp_model = mgp_model below.
End of explanation
from ase.calculators.lj import LennardJones
lj_calc = LennardJones()
Explanation: Step 3: Set up DFT calculator
For DFT calculator, you can use any calculator provided by ASE, e.g. Quantum Espresso (QE), VASP, etc.
For a quick illustration of our interface, we use the Lennard-Jones (LJ) potential as an example.
End of explanation
import os
from ase.calculators.espresso import Espresso
# ---------------- set up executable ----------------
label = 'C'
input_file = label+'.pwi'
output_file = label+'.pwo'
no_cpus = 32
npool = 32
pw_loc = 'path/to/pw.x'
# serial
os.environ['ASE_ESPRESSO_COMMAND'] = f'{pw_loc} < {input_file} > {output_file}'
## parallel qe using mpirun
# os.environ['ASE_ESPRESSO_COMMAND'] = f'mpirun -np {no_cpus} {pw_loc} -npool {npool} < {input_file} > {output_file}'
## parallel qe using srun (for slurm system)
# os.environ['ASE_ESPRESSO_COMMAND'] = f'srun -n {no_cpus} --mpi=pmi2 {pw_loc} -npool {npool} < {input_file} > {output_file}'
# -------------- set up input parameters --------------
input_data = {'control': {'prefix': label,
'pseudo_dir': './',
'outdir': './out',
'calculation': 'scf'},
'system': {'ibrav': 0,
'ecutwfc': 60,
'ecutrho': 360},
'electrons': {'conv_thr': 1.0e-9,
'electron_maxstep': 100,
'mixing_beta': 0.7}}
# ---------------- pseudo-potentials -----------------
ion_pseudo = {'C': 'C.pz-rrkjus.UPF'}
# -------------- create ASE calculator ----------------
dft_calc = Espresso(pseudopotentials=ion_pseudo, label=label,
tstress=True, tprnfor=True, nosym=True,
input_data=input_data, kpts=(8, 8, 8))
Explanation: Optional: alternatively, set up Quantum Espresso calculator
We also give the code below for setting up the ASE quantum espresso calculator, following the instruction on ASE website.
First, we need to set up our environment variable ASE_ESPRESSO_COMMAND to our QE executable, so that ASE can find this calculator. Then set up our input parameters of QE and create an ASE calculator
End of explanation
from ase import units
from ase.md.velocitydistribution import (MaxwellBoltzmannDistribution,
Stationary, ZeroRotation)
temperature = 500
MaxwellBoltzmannDistribution(super_cell, temperature * units.kB)
Stationary(super_cell) # zero linear momentum
ZeroRotation(super_cell) # zero angular momentum
md_engine = 'VelocityVerlet'
md_kwargs = {}
Explanation: Step 4: Set up On-The-Fly MD engine
Finally, our OTF is compatible with
5 MD engines that ASE supports: VelocityVerlet, NVTBerendsen, NPTBerendsen, NPT and Langevin,
and 1 MD engine implemented by FLARE: NoseHoover.
We can choose any of them, and set up the parameters based on ASE requirements. After everything is set up, we can run the on-the-fly training.
Note 1: Currently, only VelocityVerlet is tested on real systems; NPT may have issues with pressure and stress.
Set up ASE_OTF training engine:
Initialize the velocities of the atoms from a Maxwell-Boltzmann distribution at 500 K
Set up MD arguments as a dictionary based on ASE MD parameters. For VelocityVerlet, we don't need to set up extra parameters.
E.g. for NVTBerendsen, we can set md_kwargs = {'temperature': 500, 'taut': 0.5e3 * units.fs}
Note 2: For some tricks and tips related to the on-the-fly training (e.g. how to set up temperatures, how to optimize hyperparameters), see FAQs
End of explanation
from flare.ase.otf import ASE_OTF
otf_params = {'init_atoms': [0, 1, 2, 3],
'output_name': 'otf',
'std_tolerance_factor': 2,
'max_atoms_added' : 4,
'freeze_hyps': 10,
'write_model': 3} # If you will probably resume the training, please set to 3
test_otf = ASE_OTF(super_cell,
timestep = 1 * units.fs,
number_of_steps = 3,
dft_calc = lj_calc,
md_engine = md_engine,
md_kwargs = md_kwargs,
**otf_params)
test_otf.run()
Explanation: Set up parameters for On-The-Fly (OTF) training. The descriptions of the parameters are in ASE OTF module.
Set up the ASE_OTF training engine, and run
Note: the ASE Trajectory is supported, but NOT recommended.
Check otf.out after the training is done.
End of explanation
new_otf = ASE_OTF.from_checkpoint("<output_name>_checkpt.json")
new_otf.run()
Explanation: Then check the *.out file for the training log.
Step 5 (Optional): Resume Interrupted Training
At the end of each OTF training step, several checkpoint files are dumped:
<output_name>_checkpt.json: checkpoint of the current MD step of OTF. In the example above (output_name='otf'), this is otf_checkpt.json.
<output_name>_flare.json: If you've set write_model=3, then there will be another file saving the trained FLARE calculator, which will be loaded when restarting OTF.
<output_name>_atoms.json: The ASE Atoms of the current MD step in the format of json
<output_name>_dft.pickle: The DFT calculator saved in the format of .pickle.
Then, use ASE_OTF.from_checkpoint(<output_name>_checkpt.json) to load the OTF state, and resume the training by run().
End of explanation |
8,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Head model and forward computation
The aim of this tutorial is to serve as a getting-started guide for forward computation.
For more extensive details and a presentation of the general concepts of forward modeling, see ch_forward.
Step1: Computing the forward operator
To compute a forward operator we need
Step2: Visualizing the coregistration
The coregistration is the operation that allows positioning of the head and the
sensors in a common coordinate system. In the MNE software, the transformation
to align the head and the sensors is stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
Step3: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces
Step4: The surface based source space src contains two parts, one for the left
hemisphere (4098 locations) and one for the right hemisphere
(4098 locations). Sources can be visualized on top of the BEM surfaces
in purple.
Step5: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0)
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
Step6: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the
Step7: With the surface-based source space only sources that lie in the plotted MRI
slices are shown. Let's write a few lines of mayavi to see all sources in 3D.
Step8: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head the conductivities of the different tissues.
Step9: Note that the
Step10: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
Step11: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following
Step12: This is equivalent to the following code that explicitly applies the
forward operator to a source estimate composed of the identity operator | Python Code:
import os.path as op
import mne
from mne.datasets import sample
data_path = sample.data_path()
# the raw file containing the channel location + types
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
# The paths to Freesurfer reconstructions
subjects_dir = data_path + '/subjects'
subject = 'sample'
Explanation: Head model and forward computation
The aim of this tutorial is to serve as a getting-started guide for forward computation.
For more extensive details and a presentation of the general concepts of forward modeling, see ch_forward.
End of explanation
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', orientation='coronal')
Explanation: Computing the forward operator
To compute a forward operator we need:
a -trans.fif file that contains the coregistration info.
a source space
the :term:BEM surfaces
Compute and visualize BEM surfaces
The :term:BEM surfaces are the triangulations of the interfaces between
different tissues needed for forward computation. These surfaces are for
example the inner skull surface, the outer skull surface and the outer skin
surface, a.k.a. scalp surface.
Computing the BEM surfaces requires FreeSurfer and makes use of either of
the two following command line tools:
gen_mne_watershed_bem
gen_mne_flash_bem
Or by calling in a Python script one of the functions
:func:mne.bem.make_watershed_bem or :func:mne.bem.make_flash_bem.
Here we'll assume it's already computed. It takes a few minutes per subject.
For EEG we use 3 layers (inner skull, outer skull, and skin) while for
MEG 1 layer (inner skull) is enough.
Let's look at these surfaces. The function :func:mne.viz.plot_bem
assumes that the bem folder of your subject's FreeSurfer
reconstruction contains the necessary files.
End of explanation
# The transformation file obtained by coregistration
trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
info = mne.io.read_info(raw_fname)
# Here we look at the dense head, which isn't used for BEM computations but
# is useful for coregistration.
mne.viz.plot_alignment(info, trans, subject=subject, dig=True,
meg=['helmet', 'sensors'], subjects_dir=subjects_dir,
surfaces='head-dense')
Explanation: Visualizing the coregistration
The coregistration is the operation that allows positioning of the head and the
sensors in a common coordinate system. In the MNE software, the transformation
to align the head and the sensors is stored in a so-called trans file.
It is a FIF file that ends with -trans.fif. It can be obtained with
:func:mne.gui.coregistration (or its convenient command line
equivalent gen_mne_coreg), or mrilab if you're using a Neuromag
system.
For the Python version see :func:mne.gui.coregistration
Here we assume the coregistration is done, so we just visually check the
alignment with the following code.
End of explanation
src = mne.setup_source_space(subject, spacing='oct6',
subjects_dir=subjects_dir, add_dist=False)
print(src)
Explanation: Compute Source Space
The source space defines the position and orientation of the candidate source
locations. There are two types of source spaces:
surface-based source space, when the candidates are confined to a
surface.
volumetric or discrete source space when the candidates are discrete,
arbitrarily located source points bounded by the surface.
Surface-based source space is computed using
:func:mne.setup_source_space, while volumetric source space is computed
using :func:mne.setup_volume_source_space.
We will now compute a surface-based source space with an OCT-6 resolution.
See setting_up_source_space for details on source space definition
and spacing parameter.
End of explanation
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', src=src, orientation='coronal')
Explanation: The surface based source space src contains two parts, one for the left
hemisphere (4098 locations) and one for the right hemisphere
(4098 locations). Sources can be visualized on top of the BEM surfaces
in purple.
End of explanation
sphere = (0.0, 0.0, 40.0, 90.0)
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
sphere=sphere)
print(vol_src)
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', src=vol_src, orientation='coronal')
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside a sphere of radius 90mm centered at (0.0, 0.0, 40.0)
you can use the following code.
Obviously here, the sphere is not perfect. It is not restricted to the
brain and it can miss some parts of the cortex.
End of explanation
surface = op.join(subjects_dir, subject, 'bem', 'inner_skull.surf')
vol_src = mne.setup_volume_source_space(subject, subjects_dir=subjects_dir,
surface=surface)
print(vol_src)
mne.viz.plot_bem(subject=subject, subjects_dir=subjects_dir,
brain_surfaces='white', src=vol_src, orientation='coronal')
Explanation: To compute a volume based source space defined with a grid of candidate
dipoles inside the brain (requires the :term:BEM surfaces) you can use the
following.
End of explanation
import numpy as np # noqa
from mayavi import mlab # noqa
from surfer import Brain # noqa
brain = Brain('sample', 'lh', 'inflated', subjects_dir=subjects_dir)
surf = brain.geo['lh']
vertidx = np.where(src[0]['inuse'])[0]
mlab.points3d(surf.x[vertidx], surf.y[vertidx],
surf.z[vertidx], color=(1, 1, 0), scale_factor=1.5)
Explanation: With the surface-based source space only sources that lie in the plotted MRI
slices are shown. Let's write a few lines of mayavi to see all sources in 3D.
End of explanation
conductivity = (0.3,) # for single layer
# conductivity = (0.3, 0.006, 0.3) # for three layers
model = mne.make_bem_model(subject='sample', ico=4,
conductivity=conductivity,
subjects_dir=subjects_dir)
bem = mne.make_bem_solution(model)
Explanation: Compute forward solution
We can now compute the forward solution.
To reduce computation we'll just compute a single layer BEM (just inner
skull) that can then be used for MEG (not EEG).
We specify if we want a one-layer or a three-layer BEM using the
conductivity parameter.
The BEM solution requires a BEM model which describes the geometry
of the head the conductivities of the different tissues.
End of explanation
fwd = mne.make_forward_solution(raw_fname, trans=trans, src=src, bem=bem,
meg=True, eeg=False, mindist=5.0, n_jobs=2)
print(fwd)
Explanation: Note that the :term:BEM does not involve any use of the trans file. The BEM
only depends on the head geometry and conductivities.
It is therefore independent from the MEG data and the head position.
Let's now compute the forward operator, commonly referred to as the
gain or leadfield matrix.
See :func:mne.make_forward_solution for details on the meaning of the parameters.
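If you want to reuse the operator later without recomputing it, a small optional sketch (the output file name here is arbitrary, but MNE expects it to end in -fwd.fif):
python
mne.write_forward_solution('sample_audvis-meg-oct6-fwd.fif', fwd, overwrite=True)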
End of explanation
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: We can explore the content of fwd to access the numpy array that contains
the gain matrix.
End of explanation
fwd_fixed = mne.convert_forward_solution(fwd, surf_ori=True, force_fixed=True,
use_cps=True)
leadfield = fwd_fixed['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
Explanation: To extract the numpy array containing the forward operator corresponding to
the source space fwd['src'] with cortical orientation constraint
we can use the following:
End of explanation
n_dipoles = leadfield.shape[1]
vertices = [src_hemi['vertno'] for src_hemi in fwd_fixed['src']]
stc = mne.SourceEstimate(1e-9 * np.eye(n_dipoles), vertices, tmin=0., tstep=1)
leadfield = mne.apply_forward(fwd_fixed, stc, info).data / 1e-9
Explanation: This is equivalent to the following code that explicitly applies the
forward operator to a source estimate composed of the identity operator:
End of explanation |
8,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <i class="fa fa-diamond"></i> First, pimp out your notebook!
Step2: <i class="fa fa-book"></i> First, the libraries
Step3: <i class="fa fa-database"></i> Let's create some toy data
Create several "blobs"
remember the scikit-learn function datasets.make_blobs()
Also try
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
Step4: <i class="fa fa-tree"></i> Now let's create a tree model
we can use DecisionTreeClassifier as the classifier
Step5: <i class="fa fa-question-circle"></i> What parameters and functions does the classifier have?
Hint
Step6: let's fit our model with fit and get its score with score
Step7: <i class="fa fa-question-circle"></i>
Why don't we want 100%?
This problem is called "overfitting"
<i class="fa fa-list"></i> Steps of a typical ML algorithm
Step8: what are the shapes of these new data sets?
and now let's train our model and check the error
Step9: <i class="fa fa-question-circle"></i>
What does our model look like?
What was most important for making a decision?
How can we improve and control how we split our data?
Step10: Cross-validation and
K-fold
And best of all, we can do everything in one shot with scikit-learn!
We need to use cross_val_score
Step11: <i class="fa fa-question-circle"></i>
And how can we improve on a decision tree?
RandomForestClassifier(n_estimators=n_estimators) to the rescue!
Step12: let's try it!
Did it improve?
But now we have a new parameter: how many trees do we want to use?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
How about we try a for loop and check the error as a function of the number of trees?
Activity!
We need to
Step13: <i class="fa fa-pagelines"></i> The Iris dataset
A multi-dimensional model
Step14: Actividad | Python Code:
from IPython.core.display import HTML
import os
def css_styling():
Load default custom.css file from ipython profile
base = os.getcwd()
styles = "<style>\n%s\n</style>" % (open(os.path.join(base,'files/custom.css'),'r').read())
return HTML(styles)
css_styling()
Explanation: <i class="fa fa-diamond"></i> First, pimp out your notebook!
End of explanation
import numpy as np
import sklearn as sk
import matplotlib.pyplot as plt
import sklearn.datasets as datasets
import seaborn as sns
%matplotlib inline
Explanation: <i class="fa fa-book"></i> First, the libraries
End of explanation
X,Y = datasets.make_blobs()
Explanation: <i class="fa fa-database"></i> Let's create some toy data
Create several "blobs"
remember the scikit-learn function datasets.make_blobs()
Also try
python
centers = [[1, 1], [-1, -1], [1, -1]]
X,Y = datasets.make_blobs(n_samples=10000, centers=centers, cluster_std=0.6)
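A quick sketch to look at what we generated (with the default call, X has shape (n_samples, 2)):
python
plt.scatter(X[:, 0], X[:, 1], c=Y, s=20)
plt.show()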
End of explanation
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
Explanation: <i class="fa fa-tree"></i> Now let's create a tree model
we can use DecisionTreeClassifier as the classifier
End of explanation
help(clf)
Explanation: <i class="fa fa-question-circle"></i> What parameters and functions does the classifier have?
Hint: use help(something)!
End of explanation
clf.fit(X, Y)
clf.score(X,Y)
Explanation: let's fit our model with fit and get its score with score
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=0.33)
Explanation: <i class="fa fa-question-circle"></i>
Why don't we want 100%?
This problem is called "overfitting"
<i class="fa fa-list"></i> Steps of a typical ML algorithm:
Create a model
Partition your data into different chunks (10% to train and 90% to test)
Train your model on each chunk of the data
Pick the best model, or the average of the models
Predict!
First, let's partition the data using
End of explanation
clf.fit(X_train, Y_train)
clf.score(X_test,Y_test)
Explanation: what are the shapes of these new data sets?
and now let's train our model and check the error
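A one-line sketch to answer the size question:
python
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)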
End of explanation
clf.feature_importances_
Explanation: <i class="fa fa-question-circle"></i>
What does our model look like?
What was most important for making a decision?
How can we improve and control how we split our data?
End of explanation
from sklearn.cross_validation import cross_val_score
resultados =cross_val_score(clf, X, Y, cv=10)
Explanation: Cross-validation and
K-fold
And best of all, we can do everything in one shot with scikit-learn!
We need to use cross_val_score
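A short sketch to summarize the ten scores stored in resultados:
python
print('CV accuracy: {:.3f} +/- {:.3f}'.format(resultados.mean(), resultados.std()))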
End of explanation
from sklearn.ensemble import RandomForestClassifier
Explanation: <i class="fa fa-question-circle"></i>
And how can we improve on a decision tree?
RandomForestClassifier(n_estimators=n_estimators) to the rescue!
End of explanation
ks=np.arange(2,40)
scores=[]
for k in ks:
clf = RandomForestClassifier(n_estimators=k)
scores.append(cross_val_score(clf, X, Y, cv=10).mean())
plt.plot(ks,scores)
Explanation: let's try it!
Did it improve?
But now we have a new parameter: how many trees do we want to use?
<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i>,<i class="fa fa-tree"></i> ...
How about we try a for loop and check the error as a function of the number of trees?
Activity!
We need to:
Define the range of trees to try in an array
loop over this array with a for loop
For each element, train a forest and get its score
Store the score in a list
plot it!
End of explanation
iris = sns.load_dataset("iris")  # load the iris data as a tidy DataFrame so PairGrid can use the "species" column
g = sns.PairGrid(iris, hue="species")
g = g.map(plt.scatter)
g = g.add_legend()
Explanation: <i class="fa fa-pagelines"></i> The Iris dataset
A multi-dimensional model
End of explanation
iris = datasets.load_iris()
X = iris.data
Y = iris.target
Explanation: Activity:
Objective: train a tree to predict the species of the plant (a sketch of one possible solution follows below)
Check the plots: which variables could be the most important?
Grab the data: what are its dimensions?
Split it into chunks and train your models
What scores do you get? What turned out to be important?
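A minimal sketch of one possible solution (train/test split plus a random forest; 50 trees is an arbitrary choice):
python
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33)
forest = RandomForestClassifier(n_estimators=50)
forest.fit(X_train, Y_train)
print('test accuracy: {:.3f}'.format(forest.score(X_test, Y_test)))
print('feature importances: {}'.format(forest.feature_importances_))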
End of explanation |
8,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Now it's time for you to demonstrate your new skills with a project of your own!
In this exercise, you will work with a dataset of your choosing. Once you've selected a dataset, you'll design and create your own plot to tell interesting stories behind the data!
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
Step1: The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
Step2: Step 1
Step3: Step 2
Step4: Step 3
Step5: After the code cell above is marked correct, run the code cell below without changes to view the first five rows of the data.
Step6: Step 4 | Python Code:
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
Explanation: Now it's time for you to demonstrate your new skills with a project of your own!
In this exercise, you will work with a dataset of your choosing. Once you've selected a dataset, you'll design and create your own plot to tell interesting stories behind the data!
Setup
Run the next cell to import and configure the Python libraries that you need to complete the exercise.
End of explanation
# Set up code checking
from learntools.core import binder
binder.bind(globals())
from learntools.data_viz_to_coder.ex7 import *
print("Setup Complete")
Explanation: The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
# Check for a dataset with a CSV file
step_1.check()
Explanation: Step 1: Attach a dataset to the notebook
Begin by selecting a CSV dataset from Kaggle Datasets. If you're unsure how to do this, please revisit the instructions in the previous tutorial.
Once you have selected a dataset, click on the [+ Add data] option in the top right corner. This will generate a pop-up window that you can use to search for your chosen dataset.
Once you have found the dataset, click on the [Add] button to attach it to the notebook. You can check that it was successful by looking at the Data dropdown menu to the right of the notebook -- look for an input folder containing a subfolder that matches the name of the dataset.
<center>
<img src="https://i.imgur.com/nMYc1Nu.png" width=30%><br/>
</center>
You can click on the carat to the left of the name of the dataset to double-check that it contains a CSV file. For instance, the image below shows that the example dataset contains two CSV files: (1) dc-wikia-data.csv, and (2) marvel-wikia-data.csv.
<center>
<img src="https://i.imgur.com/B4sJkVA.png" width=30%><br/>
</center>
Once you've uploaded a dataset with a CSV file, run the code cell below without changes to receive credit for your work!
End of explanation
# Fill in the line below: Specify the path of the CSV file to read
my_filepath = ____
# Check for a valid filepath to a CSV file in a dataset
step_2.check()
#%%RM_IF(PROD)%%
my_filepath = "../input/abcdefg1234.csv"
step_2.assert_check_failed()
#%%RM_IF(PROD)%%
my_filepath = "../input/candy.csv"
step_2.assert_check_passed()
Explanation: Step 2: Specify the filepath
Now that the dataset is attached to the notebook, you can find its filepath. To do this, begin by clicking on the CSV file you'd like to use. This will open the CSV file in a tab below the notebook. You can find the filepath towards the top of this new tab.
After you find the filepath corresponding to your dataset, fill it in as the value for my_filepath in the code cell below, and run the code cell to check that you've provided a valid filepath. For instance, in the case of this example dataset, we would set
my_filepath = "../input/fivethirtyeight-comic-characters-dataset/dc-wikia-data.csv"
Note that you must enclose the filepath in quotation marks; otherwise, the code will return an error.
Once you've entered the filepath, you can close the tab below the notebook by clicking on the [X] at the top of the tab.
End of explanation
# Fill in the line below: Read the file into a variable my_data
my_data = ____
# Check that a dataset has been uploaded into my_data
step_3.check()
#%%RM_IF(PROD)%%
my_data = pd.read_csv(my_filepath, index_col="id")
step_3.assert_check_passed()
Explanation: Step 3: Load the data
Use the next code cell to load your data file into my_data. Use the filepath that you specified in the previous step.
End of explanation
# Print the first five rows of the data
my_data.head()
Explanation: After the code cell above is marked correct, run the code cell below without changes to view the first five rows of the data.
End of explanation
# Create a plot
____ # Your code here
# Check that a figure appears below
step_4.check()
#%%RM_IF(PROD)%%
sns.regplot(x=my_data['sugarpercent'], y=my_data['winpercent'])
step_4.assert_check_passed()
Explanation: Step 4: Visualize the data
Use the next code cell to create a figure that tells a story behind your dataset. You can use any chart type (line chart, bar chart, heatmap, etc) of your choosing!
End of explanation |
8,959 | Given the following text description, write Python code to implement the functionality described.
Description:
Largest number in the Array having frequency same as value
Function to find the largest number whose frequency is equal to itself . ; Adding 65536 to keep the count of the current number ; Right shifting by 16 bits to find the count of the number i ; Driver code
| Python Code:
def findLargestNumber(arr, n):
    # Store the frequency of each value v (1 <= v <= n) in the upper 16 bits
    # of arr[v - 1]; the lower 16 bits still hold the original values.
    # Values are assumed to be positive integers, as in the driver code below.
    for i in range(n):
        val = arr[i] & 0xFFFF  # original value stored at index i
        if val <= n:
            arr[val - 1] += 0x10000  # add 65536 to count one occurrence of val
    # Scan candidates from largest to smallest; the frequency of the value
    # (i + 1) is recovered by right shifting arr[i] by 16 bits.
    for i in range(n - 1, -1, -1):
        if (arr[i] >> 16) == i + 1:
            return i + 1
    return -1

if __name__ == '__main__':
    arr = [3, 2, 5, 5, 2, 4, 5]
    n = len(arr)
    print(findLargestNumber(arr, n))  # 2 occurs twice, so this prints 2
|
8,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to
Step1: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
Step2: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
Step3: You have
Step4: Expected Output
Step5: You can now plot the decision boundary of these models. Run the code below.
Step7: Expected Output
Step9: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step17: Expected output
Step19: Expected Output
Step21: Expected Output
Step22: Expected Output
Step23: Expected Output
Step24: Expected Output
Step25: Interpretation | Python Code:
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
Explanation: Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to:
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- sklearn provides simple and efficient tools for data mining and data analysis.
- matplotlib is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
End of explanation
X, Y = load_planar_dataset()
Explanation: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
End of explanation
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = shape_X[1]# training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
Explanation: You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Lets first get a better sense of what our data is like.
Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y?
Hint: How do you get the shape of a numpy array? (help)
End of explanation
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
3 - Simple Logistic Regression
Before building a full neural network, let's first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
End of explanation
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
Explanation: You can now plot the decision boundary of these models. Run the code below.
End of explanation
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
Interpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
Here is our model:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
Mathematically:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.
4.1 - Defining the neural network structure
Exercise: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
Hint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros(shape = (n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros(shape = (n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
4.2 - Initialize the model's parameters
Exercise: Implement the function initialize_parameters().
Instructions:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1,X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
4.3 - The Loop
Question: Implement forward_propagation().
Instructions:
- Look above at the mathematical representation of your classifier.
- You can use the function sigmoid(). It is built-in (imported) in the notebook.
- You can use the function np.tanh(). It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of initialize_parameters()) by using parameters[".."].
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
    logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
    cost = - np.sum(logprobs) / m
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
Exercise: Implement compute_cost() to compute the value of the cost $J$.
Instructions:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{2})$:
python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
(you can use either np.multiply() and then np.sum() or directly np.dot()).
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache["A1"]
A2 = cache["A2"]
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = 1/m * np.dot(dZ2, np.transpose(A1))
db2 = 1/m * np.sum(dZ2, axis = 1, keepdims = True)
dZ1 = np.dot(np.transpose(W2), dZ2) * (1 - np.power(A1, 2))
dW1 = 1/m * np.dot(dZ1, np.transpose(X))
db1 = 1/m * np.sum(dZ1, axis =1, keepdims = True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
Question: Implement the function backward_propagation().
Instructions:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
Tips:
To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
Question: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
General gradient descent rule: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
Illustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
End of explanation
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()
Question: Build your neural network model in nn_model().
Instructions: The neural network model has to use the previous functions in the right order.
End of explanation
# GRADED FUNCTION: predict
def predict(parameters, X):
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = A2 > 0.5
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
4.5 Predictions
Question: Use your model to predict by building predict().
Use forward propagation to predict results.
Reminder: predictions = $y_{prediction} = \mathbb{1}_{\{activation > 0.5\}} = \begin{cases} 1 & \text{if } activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)
End of explanation
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
End of explanation
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
End of explanation
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
4.6 - Tuning hidden layer size (optional/ungraded exercise)
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
End of explanation
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Interpretation:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without also incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
Optional questions:
Note: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
You've learnt to:
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
End of explanation |
8,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Initialization of setup
Step2: 2. Finite Differences setup
Step3: 3. Finite Volumes setup
Step4: 4. Initial condition
Step5: 4. Solution for the inhomogeneous problem
Upwind finite volume scheme
We decompose the solution into right propagating $\mathbf{\Lambda}_i^{+}$ and left propagating eigenvalues $\mathbf{\Lambda}_i^{-}$ where
\begin{equation}
\mathbf{\Lambda}_i^{+}=
\begin{pmatrix}
-c_i & 0 \\
0 & 0
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{\Lambda}_i^{-}=
\begin{pmatrix}
0 & 0 \\
0 & c_i
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{A}_i^{\pm} = \mathbf{R}^{-1}\mathbf{\Lambda}_i^{\pm}\mathbf{R}
\end{equation}
This strategy allows us to formulate an upwind finite volume scheme for any hyperbolic system as
\begin{equation}
\mathbf{Q}_{i}^{n+1} = \mathbf{Q}_{i}^{n} - \frac{dt}{dx}(\mathbf{A}_i^{+}\Delta\mathbf{Q}_{l} - \mathbf{A}_i^{-}\Delta\mathbf{Q}_{r})
\end{equation}
with corresponding flux term given by
\begin{equation}
\mathbf{F}_{l} = \mathbf{A}_i^{+}\Delta\mathbf{Q}_{l}
\qquad\text{,}\qquad
\mathbf{F}_{r} = \mathbf{A}_i^{-}\Delta\mathbf{Q}_{r}
\end{equation}
Lax-Wendroff finite volume scheme
The upwind solution presents a strong diffusive behavior. In this sense, the Lax-Wendroff scheme performs better, with the advantage that the eigenvalues do not need to be decomposed into right and left propagating parts. Here the matrix $\mathbf{A}_i$ can be used in its original form. The Lax-Wendroff scheme follows
\begin{equation}
\mathbf{Q}_{i}^{n+1} = \mathbf{Q}_{i}^{n} - \frac{dt}{2dx}\mathbf{A}_i(\Delta\mathbf{Q}_{l} + \Delta\mathbf{Q}_{r}) + \frac{1}{2}\Big(\frac{dt}{dx}\Big)^2\mathbf{A}_i^2(\Delta\mathbf{Q}_{l} - \Delta\mathbf{Q}_{r})
\end{equation} | Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Finite Volume Method - 1D Elastic Wave Equation. Heterogeneous case</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
The source free elastic wave equation can be written in terms of a coupled first-order system
\begin{align}
\partial_t \sigma - \mu \partial_x v & = 0 \\
\partial_t v - \frac{1}{\rho} \partial_x \sigma & = 0
\end{align}
with $\rho$ the density and $\mu$ the shear modulus. This equation in matrix-vector notation follows
\begin{equation}
\partial_t \mathbf{Q} + \mathbf{A} \partial_x \mathbf{Q} = 0
\end{equation}
where $\mathbf{Q} = (\sigma, v)$ is the vector of unknowns and the matrix $\mathbf{A}$ contains the parameters $\rho$ and $\mu$. The above matrix equation is analogous to the advection equation $\partial_t q + a \partial_x q = 0$. Although it is a coupled system, diagonalization of $\mathbf{A} = \mathbf{R}^{-1}\mathbf{\Lambda}\mathbf{R}$ allows us to implement all elements developed for the solution of the advection equation in terms of fluxes. It turns out that the decoupled version is
\begin{equation}
\partial_t \mathbf{W} + \mathbf{\Lambda} \partial_x \mathbf{W} = 0
\end{equation}
with $\mathbf{W} = \mathbf{R}^{-1}\mathbf{Q}$, where the eigenvector matrix $\mathbf{R}$ and the diagonal matrix of eigenvalues $\mathbf{\Lambda}$ are given for the heterogeneous case
\begin{equation}
\mathbf{\Lambda}_i=
\begin{pmatrix}
-c_i & 0 \\
0 & c_i
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{A}_i=
\begin{pmatrix}
0 & -\mu_i \\
-1/\rho_i & 0
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{R} =
\begin{pmatrix}
Z_i & -Z_i \\
1 & 1
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{R}^{-1} = \frac{1}{2Z_i}
\begin{pmatrix}
1 & Z_i \\
-1 & Z_i
\end{pmatrix}
\end{equation}
Here $Z_i = \rho_i c_i$ represents the seismic impedance. In comparison with the homogeneous case, the coefficients of the matrix $\mathbf{A}$ are now allowed to vary from element to element.
This notebook implements the Lax-Wendroff scheme for solving the source-free version of the elastic wave equation in a heterogeneous medium. The solution is compared with the one obtained using the finite difference scheme. To keep the problem simple, we use as the spatial initial condition a Gauss function with half-width $\sigma$
\begin{equation}
Q(x,t=0) = e^{-1/\sigma^2 (x - x_{o})^2}
\end{equation}
End of explanation
# Initialization of setup
# --------------------------------------------------------------------------
nx = 800 # number of grid points
c0 = 2500 # acoustic velocity in m/s
rho = 2500 # density in kg/m^3
Z0 = rho*c0 # impedance
mu = rho*c0**2 # shear modulus
rho0 = rho # density
mu0 = mu # shear modulus
xmax = 10000 # in m
eps = 0.5 # CFL
tmax = 1.5 # simulation time in s
isnap = 10 # plotting rate
sig = 200 # argument in the initial condition
x0 = 2500 # position of the initial condition
Explanation: 1. Initialization of setup
End of explanation
# Finite Differences setup
# --------------------------------------------------------------------------
dx = xmax/(nx-1) # calculate space increment
xfd = np.arange(0, nx)*dx # initialize space
mufd = np.zeros(xfd.size) + mu0 # initialize shear modulus
rhofd = np.zeros(xfd.size) + rho0 # initialize density
# Introduce inhomogeneity
mufd[int((nx-1)/2) + 1:nx] = mufd[int((nx-1)/2) + 1:nx]*4
# initialize fields
s = np.zeros(xfd.size)
v = np.zeros(xfd.size)
dv = np.zeros(xfd.size)
ds = np.zeros(xfd.size)
s = np.exp(-1./sig**2 * (xfd-x0)**2) # Initial condition
Explanation: 2. Finite Differences setup
End of explanation
# Finite Volumes setup
# --------------------------------------------------------------------------
A = np.zeros((2,2,nx))
Z = np.zeros((1,nx))
c = np.zeros((1,nx))
# Initialize velocity
c = c + c0
c[int(nx/2):nx] = c[int(nx/2):nx]*2
Z = rho*c
# Initialize A for each cell
for i in range(1,nx):
A0 = np.array([[0, -mu], [-1/rho, 0]])
if i > nx/2:
A0= np.array([[0, -4*mu], [-1/rho, 0]])
A[:,:,i] = A0
# Initialize Space
x, dx = np.linspace(0,xmax,nx,retstep=True)
# use wave based CFL criterion
dt = eps*dx/np.max(c) # calculate time step from stability criterion
# Simulation time
nt = int(np.floor(tmax/dt))
# Initialize wave fields
Q = np.zeros((2,nx))
Qnew = np.zeros((2,nx))
Explanation: 3. Finite Volumes setup
End of explanation
# Initial condition
#----------------------------------------------------------------
sx = np.exp(-1./sig**2 * (x-x0)**2)
Q[0,:] = sx
# ---------------------------------------------------------------
# Plot initial condition
# ---------------------------------------------------------------
plt.plot(x, sx, color='r', lw=2, label='Initial condition')
plt.ylabel('Amplitude', size=16)
plt.xlabel('x', size=16)
plt.legend()
plt.grid(True)
plt.show()
Explanation: 4. Initial condition
End of explanation
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
ax1.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b')
ax2.axvspan(((nx-1)/2+1)*dx, nx*dx, alpha=0.2, facecolor='b')
ax1.set_xlim([0, xmax])
ax2.set_xlim([0, xmax])
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
line1 = ax1.plot(x, Q[0,:], 'k', x, s, 'r--')
line2 = ax2.plot(x, Q[1,:], 'k', x, v, 'r--')
plt.suptitle('Heterogeneous F. volume - Lax-Wendroff method', size=16)
ax1.text(0.1*xmax, 0.8*max(sx), '$\mu$ = $\mu_{o}$')
ax1.text(0.8*xmax, 0.8*max(sx), '$\mu$ = $4\mu_{o}$')
plt.ion() # set interective mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for j in range(nt):
# Finite Volume Extrapolation scheme-------------------------
for i in range(1,nx-1):
# Lax-Wendroff method
dQl = Q[:,i] - Q[:,i-1]
dQr = Q[:,i+1] - Q[:,i]
Qnew[:,i] = Q[:,i] - dt/(2*dx)*A[:,:,i] @ (dQl + dQr)\
+ 1/2*(dt/dx)**2 *A[:,:,i] @ A[:,:,i] @ (dQr - dQl)
# Absorbing boundary conditions
Qnew[:,0] = Qnew[:,1]
Qnew[:,nx-1] = Qnew[:,nx-2]
Q, Qnew = Qnew, Q
# Finite Difference Extrapolation scheme---------------------
# Stress derivative
for i in range(1, nx-1):
ds[i] = (s[i+1] - s[i])/dx
# Velocity extrapolation
v = v + dt*ds/rhofd
# Velocity derivative
for i in range(1, nx-1):
dv[i] = (v[i] - v[i-1])/dx
# Stress extrapolation
s = s + dt*mufd*dv
# --------------------------------------
# Animation plot. Display solutions
if not j % isnap:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
line1 = ax1.plot(x, Q[0,:], 'k', x, s, 'r--')
line2 = ax2.plot(x, Q[1,:], 'k', x, v, 'r--')
plt.legend(iter(line2), ('F. Volume', 'f. Diff'))
plt.gcf().canvas.draw()
Explanation: 4. Solution for the inhomogeneous problem
Upwind finite volume scheme
We decompose the solution into right propagating $\mathbf{\Lambda}_i^{+}$ and left propagating eigenvalues $\mathbf{\Lambda}_i^{-}$ where
\begin{equation}
\mathbf{\Lambda}_i^{+}=
\begin{pmatrix}
-c_i & 0 \\
0 & 0
\end{pmatrix}
\qquad\text{,}\qquad
\mathbf{\Lambda}_i^{-}=
\begin{pmatrix}
0 & 0 \\
0 & c_i
\end{pmatrix}
\qquad\text{and}\qquad
\mathbf{A}_i^{\pm} = \mathbf{R}^{-1}\mathbf{\Lambda}_i^{\pm}\mathbf{R}
\end{equation}
This strategy allows us to formulate an upwind finite volume scheme for any hyperbolic system as
\begin{equation}
\mathbf{Q}_{i}^{n+1} = \mathbf{Q}_{i}^{n} - \frac{dt}{dx}(\mathbf{A}_i^{+}\Delta\mathbf{Q}_{l} - \mathbf{A}_i^{-}\Delta\mathbf{Q}_{r})
\end{equation}
with corresponding flux term given by
\begin{equation}
\mathbf{F}_{l} = \mathbf{A}_i^{+}\Delta\mathbf{Q}_{l}
\qquad\text{,}\qquad
\mathbf{F}_{r} = \mathbf{A}_i^{-}\Delta\mathbf{Q}_{r}
\end{equation}
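A minimal per-cell sketch of this upwind update, written directly from the formula above (Ap and Am are hypothetical arrays holding the split matrices $\mathbf{A}_i^{+}$ and $\mathbf{A}_i^{-}$, e.g. built from an eigendecomposition of A[:,:,i]; Q, Qnew, dt, dx and nx are the quantities defined in the accompanying code cell):
for i in range(1, nx - 1):
    dQl = Q[:, i] - Q[:, i - 1]   # left difference
    dQr = Q[:, i + 1] - Q[:, i]   # right difference
    Qnew[:, i] = Q[:, i] - dt / dx * (Ap[:, :, i] @ dQl - Am[:, :, i] @ dQr)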
Lax-Wendroff finite volume scheme
The upwind solution presents a strong diffusive behavior. In this sense, the Lax-Wendroff scheme performs better, with the advantage that the eigenvalues do not need to be decomposed into right and left propagating parts. Here the matrix $\mathbf{A}_i$ can be used in its original form. The Lax-Wendroff scheme follows
\begin{equation}
\mathbf{Q}_{i}^{n+1} = \mathbf{Q}_{i}^{n} - \frac{dt}{2dx}\mathbf{A}_i(\Delta\mathbf{Q}_{l} + \Delta\mathbf{Q}_{r}) + \frac{1}{2}\Big(\frac{dt}{dx}\Big)^2\mathbf{A}_i^2(\Delta\mathbf{Q}_{l} - \Delta\mathbf{Q}_{r})
\end{equation}
End of explanation |
8,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Science, Data, Tools
or 'Tips and tricks for your everyday workflow'
Matteo Guzzo
Prologue
AKA My Research
The cumulant expansion
The struggle has been too long, but we are starting to see the light...
As you might recall...
We are trying to go beyond the GW approximation
and succeed in describing satellites.
The spectral function
It looks like this
Step2: <img src="files/graphics/xkcd1.png" width=700>
See https
Step4: <img src="./graphics/scatter_demo.png" width=700>
http | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
with plt.xkcd():
plt.rcParams['figure.figsize'] = (6., 4.)
x = np.linspace(-5, 5, 50)
gauss = np.exp(-(x**2) / 2)/np.sqrt(2 * np.pi)
ax = plt.subplot(111)
ax.plot(x, gauss, label="Best curve ever")
cdf = np.array([np.trapz(gauss[:i], x[:i]) for i, _ in enumerate(gauss)])
plt.plot(x, cdf, label="Bestest curve ever")
plt.xlim(-3, 3)
ax.set_xticks([])
ax.set_yticks([])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.xlabel('most exciting independent variable')
plt.ylabel("I'm not trolling")
plt.legend(loc='best')
plt.annotate("I CAN ALSO\n DO MATH:\n"+
" "+
r"$\frac{e^{-\frac{x^2}{2}}}{\sqrt{2\pi}}$",
xy=(0.1, 0.4), arrowprops=dict(arrowstyle='->'), xytext=(2, 0.6))
fname = "./graphics/xkcd1.png"
plt.savefig(fname, dpi=300)
Explanation: Science, Data, Tools
or 'Tips and tricks for your everyday workflow'
Matteo Guzzo
Prologue
AKA My Research
The cumulant expansion
The struggle has been too long, but we are starting to see the light...
As you might recall...
We are trying to go beyond the GW approximation
and succeed in describing satellites.
The spectral function
It looks like this:
\begin{equation}
A(\omega) = \mathrm{Im}|G(\omega)|
\end{equation}
GW vs Cumulant
Mathematically very different:
\begin{equation}
G^{GW} (\omega) = \frac1{ \omega - \epsilon - \Sigma (\omega) }
\end{equation}
\begin{equation}
G^C(t_1, t_2) = G^0(t_1, t_2) e^{ i \int_{t_1}^{t_2} \int_{t'}^{t_2} dt' dt'' W (t', t'') }
\end{equation}
BUT they connect through $\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )$.
Guzzo et al., Phys. Rev. Lett. 107, 166401, 2011
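A toy numerical illustration of this connection (everything below is a made-up placeholder, not data from the talk):
import numpy as np
omega = np.linspace(-60.0, 20.0, 801)                    # hypothetical frequency grid
im_sigma = 1.0 / ((omega + 20.0)**2 + 1.0)               # toy Im Sigma(omega) with structure near -20
eps = -5.0                                               # toy quasiparticle energy
im_w = np.interp(eps - omega, omega, im_sigma) / np.pi   # Im W(omega) = (1/pi) Im Sigma(eps - omega)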
Implementation
Using a multi-pole representation for $\Sigma^{GW}$:
\begin{equation}
\mathrm{Im} W (\omega) = \frac1\pi \mathrm{Im} \Sigma ( \epsilon - \omega )
\end{equation}
\begin{equation}
W (\tau) = - i \lambda \bigl[ e^{ i \omega_p \tau } \theta ( - \tau ) + e^{ - i \omega_p \tau } \theta ( \tau ) \bigr]
\end{equation}
GW vs Cumulant
GW:
\begin{equation}
A(\omega) = \frac1\pi \frac{\mathrm{Im}\Sigma (\omega)}
{ [ \omega - \epsilon - \mathrm{Re}\Sigma (\omega) ]^2 +
[ \mathrm{Im}\Sigma (\omega) ]^2}
\end{equation}
Cumulant (plasmon-pole):
\begin{equation}
A(\omega) = \frac1\pi \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{\Gamma}{ (\omega - \epsilon + n \omega_p)^2 + \Gamma^2 }
\end{equation}
Problems
<br>
\begin{equation}
A(\omega) = \frac1\pi \sum_{n=0}^{\infty} \frac{a^n}{n!} \frac{\Gamma}{ (\omega - \epsilon + n \omega_p)^2 + \Gamma^2 }
\end{equation}
Alternative: full numerical
With $ t = t_2 - t_1 $:
\begin{equation}
G^C (t) = G^0(t) e^{ i \int d\omega
\frac{W (\omega)} {\omega^2} \left( e^{i \omega t} -i \omega t - 1 \right) }
\end{equation}
<br>
Kas et al., Phys. Rev. B 90, 085112, 2014
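A rough numerical sketch of this time-domain expression (all grids and parameters below are hypothetical placeholders, not results from the references):
import numpy as np
omega = np.linspace(0.01, 60.0, 2000)           # frequency grid, hypothetical units
W = np.exp(-(omega - 15.0)**2)                  # toy W(omega), e.g. a single broad plasmon peak
t = np.linspace(0.0, 20.0, 400)                 # time grid
eps = -5.0                                      # toy single-particle energy for G^0
# cumulant C(t) = int d(omega) W(omega)/omega^2 * (exp(i*omega*t) - i*omega*t - 1)
phase = np.exp(1j * np.outer(t, omega)) - 1j * np.outer(t, omega) - 1.0
C = np.trapz(W / omega**2 * phase, omega, axis=1)
Gc = np.exp(-1j * eps * t) * np.exp(1j * C)     # G^C(t) = G^0(t) * exp(i*C(t)), with a toy G^0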
To be continued... --> https://github.com/teoguso/SF.git
<img src="./graphics/githubSF.png" alt="test" width=600>
Acknowledgements
<img src="graphics/sky.png" width=60>
Jiangqiang "Sky" Zhou - Ecole Polytechnique, France
<img src="graphics/gatti.png" width=60>
Matteo Gatti - Ecole Polytechnique, France
<img src="graphics/andris.png" width=60>
Andris Gulans - HU Berlin
Let's talk about tools (My actual Outline)
How do you go about your everyday work?
What do you use to:
Write and maintain code
Review and analyse data
Present data and results
My answer
Git version control system:
https://git-scm.com/
Python + matplotlib plotting library:
https://www.python.org/
http://matplotlib.org/
Jupyter notebooks:
http://jupyter.org/
Time to take a breath
(Questions?)
Git
QUESTION: Who knows what Git is?
State of the art for distributed version control
Version control?
<img src="graphics/vc-xkcd.jpg" width=700>
Git in a nutshell
<img src=http://i.stack.imgur.com/9IW5z.png width=500>
Git in a nutshell
Very powerful (essential) for collaborative work (e.g. software, articles)
Very useful for personal work
Ok, but what should I use it for?
Source code
Scripts
Latex documents
In short:
Any text-based (ASCII) project
Fortran, Python, Latex, Bash, ...
Python + matplotlib
End of explanation
Simple demo of a scatter plot.
# import numpy as np
# import matplotlib.pyplot as plt
N = 50
x = np.random.rand(N)
y = np.random.rand(N)
colors = np.random.rand(N)
area = np.pi * (15 * np.random.rand(N))**2 # 0 to 15 point radiuses
plt.scatter(x, y, s=area, c=colors, alpha=0.5)
plt.savefig("./graphics/scatter_demo.png", dpi=200)
Explanation: <img src="files/graphics/xkcd1.png" width=700>
See https://www.xkcd.com
End of explanation
This shows an example of the "fivethirtyeight" styling, which
tries to replicate the styles from FiveThirtyEight.com.
from matplotlib import pyplot as plt
import numpy as np
x = np.linspace(0, 10)
with plt.style.context('fivethirtyeight'):
plt.plot(x, np.sin(x) + x + np.random.randn(50))
plt.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
plt.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
plt.savefig("./graphics/fivethirtyeight_demo.png", dpi=200)
Explanation: <img src="./graphics/scatter_demo.png" width=700>
http://matplotlib.org/gallery.html
End of explanation |
8,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step7: Implementation
Step8: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer
Step9: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step10: Implementation
Step11: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step12: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
Step13: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer
Step14: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step15: Implementation
Step16: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step17: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [0, 100, 400]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
import seaborn as sns
sns.heatmap((samples-data.mean())/data.std(ddof=0), annot=True, cbar=False, square=True)
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
removed = 'Delicatessen'
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = pd.DataFrame.copy(data)
new_data.drop([removed], axis = 1, inplace = True)
# TODO: Split the data into training and testing sets using the given feature as the target
X_train, X_test, y_train, y_test = train_test_split(new_data, data[removed], test_size=0.33, random_state=42)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print "Score: ", score
# reviewer code
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
def calculate_r_2_for_feature(data,feature):
new_data = data.drop(feature, axis=1)
X_train, X_test, y_train, y_test = train_test_split(
new_data,data[feature],test_size=0.25, random_state=42 # random_state to reproduce
)
regressor = DecisionTreeRegressor(random_state=42) # update to repeat result
regressor.fit(X_train,y_train)
score = regressor.score(X_test,y_test)
return score
def r_2_mean(data,feature,runs=200):
return np.array([calculate_r_2_for_feature(data,feature)
for _ in range(200) ]).mean().round(4)
print "{0:17} {1}".format("Fresh: ", r_2_mean(data,'Fresh'))
print "{0:17} {1}".format("Milk: ", r_2_mean(data,'Milk'))
print "{0:17} {1}".format("Grocery: ", r_2_mean(data,'Grocery'))
print "{0:17} {1}".format("Frozen: ", r_2_mean(data,'Frozen'))
print "{0:17} {1}".format("Detergents_Paper: ", r_2_mean(data,'Detergents_Paper'))
print "{0:17} {1}".format("Delicatessen: ", r_2_mean(data,'Delicatessen'))
zip(new_data, regressor.feature_importances_)
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer:
Comparing each sample with the mean of the data (reviewer tip), sample 0 spends more on Fresh, Milk and Grocery and almost nothing on Frozen, so it looks like a bakery.
Sample 1 (index 100) is above the mean in every feature except Fresh, and well above it in Detergents_Paper and Grocery, so to me it looks like a hotel.
Sample 2 (index 400) is below the mean in every feature except Frozen. Considering this low overall spending and the relatively high spending on Frozen, it looks like a restaurant.
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
# reviewer feedback
import matplotlib.pyplot as plt
corr = data.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
with sns.axes_style("white"):
ax = sns.heatmap(corr, mask=mask, square=True, annot=True,
cmap='RdBu', fmt='+.3f')
plt.xticks(rotation=45, ha='center')
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer:
The score was -15.76, which means that the model fails to predict the feature 'Delicatessen'. So predicting this feature isn't a good idea.
See the table that shows each predicted feature and its score:
| Predicted Feature | Predictor | Score |
|------------------- |---------------------------------------- |------------------ |
| Fresh | DecisionTreeRegressor(random_state=42) | -0.462688879814 |
| Milk | DecisionTreeRegressor(random_state=42) | 0.182318577665 |
| Grocery | DecisionTreeRegressor(random_state=42) | 0.636215878713 |
| Frozen | DecisionTreeRegressor(random_state=42) | -0.0438047351883 |
| Detergents_Paper | DecisionTreeRegressor(random_state=42) | 0.468649611154 |
| Delicatessen | DecisionTreeRegressor(random_state=42) | -15.7652569193 |
Without any transformation or selection, the features Fresh, Frozen and Delicatessen achieved negative scores, which means that these features aren't well predicted by the others, so they are the most important features for identifying customers' spending habits. Grocery received a 0.636, so perhaps we can remove this feature to simplify the data and the model. The others have a positive score, which means that we can predict them with relative accuracy, but Milk has a low value, so let's try to predict Grocery. With this score I would say that it is necessary for identifying the customers' spending habits.
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = np.log(pd.DataFrame.copy(data))
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(pd.DataFrame.copy(samples))
# reviewer feedback
import matplotlib.pyplot as plt
corr = log_data.corr()
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask, 1)] = True
with sns.axes_style("white"):
ax = sns.heatmap(corr, mask=mask, square=True, annot=True,
cmap='RdBu', fmt='+.3f')
plt.xticks(rotation=45, ha='center')
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer:
The reviewer's feedback code gave me much more information!
By examining the diagonal, the data does not appear normally distributed, so some transformations can be applied to improve the results in this matrix. Visually, Milk and Grocery are similar in their correlation with the other features, but to me it looks like Grocery still remains the better option.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
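If you wanted to try the Box-Cox transformation mentioned above instead of the natural logarithm, a sketch could look like the following (it assumes scipy is available and that all feature values are strictly positive; it is not used anywhere else in this project):
from scipy import stats
# Estimate a separate Box-Cox power transform for every product category
boxcox_data = data.copy()
for feature in data.keys():
    boxcox_data[feature], lmbda = stats.boxcox(data[feature])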
End of explanation
# Hint: Change the code to show both sample dataset (better to see)
# Display the log-transformed sample data
print "Original sample dataset:"
display(samples)
print "Log-transformed sample dataset:"
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
# For each feature find the data points with extreme high or low values
from collections import Counter
c = Counter()
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = 1.5*(Q3-Q1)
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
data_filter = ~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))
o = log_data[data_filter]
# get the index of outliers
i = log_data.index[data_filter] # there is some pythonic way?
c.update(i)
display(o)
# OPTIONAL: Select the indices for data points you wish to remove
outliers = c.keys()
print "Outliers for more then one feature:"
print sorted(list(k for k, v in c.items() if v > 1))
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
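# A possibly more "pythonic" alternative to the Counter bookkeeping above (a sketch,
# kept commented out so it does not run twice): collect the per-feature outlier frames
# and count how often each customer index shows up.
# frames = []
# for feature in log_data.keys():
#     Q1, Q3 = np.percentile(log_data[feature], [25, 75])
#     step = 1.5 * (Q3 - Q1)
#     frames.append(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# counts = pd.concat(frames).index.value_counts()
# print sorted(counts[counts > 1].index.tolist())   # outliers flagged for more than one feature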
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6, random_state=43)
pca.fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer:
The list [65, 66, 75, 128, 154] contains the outliers for more than one feature. All points marked as an outlier at least once are removed. Outliers don't represent the data's behavior, and so they can't be used to predict the data.
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
Answer:
After removing the outliers, the weights invert their sign in the first and second dimensions, but the amount of variance didn't change much. The first and second PCs explain 0.4993 + 0.2259 = 0.7252 of the variance. For the first four dimensions this number increases to 0.7252 + 0.1049 + 0.0978 = 0.9279, which is an increase of 27% relative to the first two dimensions. This is a relatively low increase, which suggests that the first two dimensions are the more important ones.
In other words, the first four PCs cover most of the variation, but the first two are responsible for the majority of the variance.
The first component has a strong positive correlation with Detergents_Paper, Grocery and Milk, so their score is increased by the first component. This suggests that these features vary together. Furthermore, we see that this component correlates most strongly with Detergents_Paper, so it can be viewed mainly as a measure of Detergents_Paper together with a lack of spending in the other features.
The second component increases with increasing Frozen, Fresh and Delicatessen.
The third component increases with decreasing Fresh (a lot) and increasing Delicatessen.
From the first four dimensions we can see that Grocery and Milk decrease their negative weight, which means they have a bigger influence. Fresh decreases its positive weight and then keeps a negative weight. Frozen switches from a negative to a positive weight, and Delicatessen shows the inverse behavior. The first four dimensions show that Milk and Grocery have a strong influence on customer spending, so they can be used in the prediction model.
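These totals can also be read directly from the fitted PCA object instead of being added by hand; for example (a small sketch reusing the pca object fitted above):
# Cumulative explained variance of the principal components
print "Cumulative explained variance:", np.cumsum(pca.explained_variance_ratio_).round(4)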
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2, random_state=43)
pca.fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Explanation: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
End of explanation
from sklearn import mixture
from sklearn.metrics import silhouette_score
# TODO: Apply your clustering algorithm of choice to the reduced data
n_components = 2
clusterer = mixture.GaussianMixture(n_components = n_components, random_state = 44)
clusterer.fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
print score
Explanation: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
K-Means has the advantages of being easy to interpret and of producing good clusters if both the distance function and the number of clusters are chosen correctly. The GMM's advantages are the flexibility to choose the component distribution, the fact that it provides a density estimate for each cluster, and the availability of well-studied statistical inference techniques.
I'll use the GMM algorithm, because the statistical machinery helps our prediction and I believe that for this data and purpose it's better to use a model-based clustering.
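For comparison, the K-Means alternative would only require swapping the estimator; a sketch (not used in the rest of the notebook) might look like:
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
kmeans = KMeans(n_clusters=2, random_state=44)
kmeans_preds = kmeans.fit_predict(reduced_data)
kmeans_centers = kmeans.cluster_centers_
print "K-Means silhouette score:", silhouette_score(reduced_data, kmeans_preds)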
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
| # clusters | Score |
|------------|----------------|
| 2 | 0.447411995571 |
| 3 | 0.361193625039 |
| 4 | 0.298435230841 |
| 5 | 0.328904464926 |
| 10 | 0.325824425705 |
| 20 | 0.240609727557 |
| 50 | 0.295497507145 |
| 100 | 0.325141869116 |
| 250 | 0.291387775963 |
| 339 | 0.143005278551 |
After removing the outliers the best score was for 2 clusters.
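For reference, the sweep behind this table can be reproduced with a short loop like the sketch below. It is an addition for illustration only: reduced_data is defined earlier in the notebook, so a make_blobs dataset stands in for it here to keep the snippet self-contained.
# Illustrative sketch (not part of the original analysis): sweep n_components and score each clustering.
from sklearn.datasets import make_blobs
from sklearn import mixture
from sklearn.metrics import silhouette_score
demo_data, _ = make_blobs(n_samples=400, centers=2, n_features=2, random_state=44)
for n in [2, 3, 4, 5, 10]:
    gmm = mixture.GaussianMixture(n_components=n, random_state=44)
    labels = gmm.fit(demo_data).predict(demo_data)
    print('{} clusters: {:.4f}'.format(n, silhouette_score(demo_data, labels)))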
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
true_centers = true_centers.append(data.describe().ix['50%'])
true_centers.plot(kind = 'bar', figsize = (16, 4))
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
print "Clusters"
display(true_centers)
print "Sample Data"
display(samples)
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
print 'The distance between sample point {} and center of cluster {}:'.format(i, pred)
print (samples.iloc[i] - true_centers.iloc[pred])
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
Answer:
Each segment refers to an average over establishments. That means if an establishment buys an amount of products that classifies it into one of the segments, then it will spend approximately the amounts of that segment. So to predict the features we must classify the establishment into one of these segments.
In segment 0, Fresh and Frozen are well above the average while the other features are below it.
In segment 1, Milk, Grocery and Detergents_Paper are above the average (Grocery is much further above the average) and the others are below it.
Thus, with these characteristics it looks like segment 0 represents the Restaurant category and segment 1 the set of Hotel and Cafe.
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Explanation: Answer:
From the distance of each sample point to the cluster centers we can see that:
1. For Sample 0 (predicted to Cluster 1):
1. Fresh is above the average and resembles Cluster 0, not 1;
1. Milk is above the average and resembles Cluster 1;
1. Grocery is above the average and resembles Cluster 1;
1. Frozen is below the average and resembles Cluster 1;
1. Detergents_Paper is above the average and resembles Cluster 1;
1. Delicatessen is above the average and resembles Cluster 1;
1. For Sample 1 (predicted to Cluster 1):
1. Fresh is above the average and resembles Cluster 0, not 1;
1. Milk is above the average and resembles Cluster 1;
1. Grocery is above the average and resembles Cluster 1;
1. Frozen is above the average and resembles Cluster 0, not 1;
1. Detergents_Paper is above the average and resembles Cluster 1;
1. Delicatessen is above the average and resembles Cluster 1;
1. For Sample 2 (predicted to Cluster 0):
1. Fresh is below the average and resembles Cluster 1, not 0;
1. Milk is below the average and resembles Cluster 0;
1. Grocery is below the average and resembles Cluster 0;
1. Frozen is above the average and resembles Cluster 0;
1. Detergents_Paper is below the average and resembles Cluster 0;
1. Delicatessen is above the average and resembles Cluster 1, not 0;
Now, looking at the distances, for the samples predicted to Cluster 1 the Milk and Delicatessen features are consistent with that cluster, but Fresh points the wrong way in both cases. The sample predicted to Cluster 0 is consistent for Milk and Detergents_Paper, but inconsistent for Fresh and Delicatessen.
Overall, the predictions seem to be consistent.
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
An A/B test can be applied where the hypothesis is that the change in delivery service will affect the establishments positively. For versions A and B we can sample (~5% of each segment, for example) and collect the feedback to calculate the increase (or decrease). Some statistical precautions, such as sampling each segment correctly, should be taken to make the test more accurate. In the end the distributor will have information about acceptance and will decide whether or not to apply the change to each segment.
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
Once the segments are successfully defined, a supervised learning model can use the establishments' segments as labels to learn and predict the segment for a new establishment. Once the new establishment is labeled, the distributor can predict its estimated product spending for each feature.
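As a rough illustration of this idea (an addition, not from the original project), one could fit any off-the-shelf classifier on the spending features with the cluster assignments as labels; synthetic arrays stand in for data and preds here so the sketch runs on its own.
# Illustrative sketch: label new customers with the segment learned from existing ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
rng = np.random.RandomState(0)
spending = rng.lognormal(mean=8, sigma=1, size=(440, 6))      # stand-in for `data`
segments = (spending[:, 1] > spending[:, 0]).astype(int)      # stand-in for `preds`
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(spending, segments)
new_customers = rng.lognormal(mean=8, sigma=1, size=(10, 6))  # the ten new customers
print(clf.predict(new_customers))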
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
8,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of DenseCRF with non-RGB data
This notebook goes through an example of how to use DenseCRFs on non-RGB data.
At the same time, it will explain basic concepts and walk through an example, so it could be useful even if you're dealing with RGB data, though do have a look at PyDenseCRF's README too!
Basic setup
It is highly recommended you install PyDenseCRF through pip, for example pip install git+https
Step1: Unary Potential
The unary potential consists of per-pixel class-probabilities. This could come from any kind of model such as a random-forest or the softmax of a deep neural network.
Create unary potential
Step2: Run inference with unary potential
We can already run a DenseCRF with only a unary potential.
This doesn't account for neighborhoods at all, so it's not the greatest idea, but we can do it
Step3: Pairwise terms
The whole point of DenseCRFs is to use some form of content to smooth out predictions. This is done via "pairwise" terms, which encode relationships between elements.
Add (non-RGB) pairwise term
For example, in image processing, a popular pairwise relationship is the "bilateral" one, which roughly says that pixels with either a similar color or a similar location are likely to belong to the same class.
Step4: Run inference of complete DenseCRF
Now we can create a dense CRF with both unary and pairwise potentials and run inference on it to get our final result. | Python Code:
#import sys
#sys.path.insert(0,'/path/to/pydensecrf/')
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax, create_pairwise_bilateral
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
Explanation: Example of DenseCRF with non-RGB data
This notebook goes through an example of how to use DenseCRFs on non-RGB data.
At the same time, it will explain basic concepts and walk through an example, so it could be useful even if you're dealing with RGB data, though do have a look at PyDenseCRF's README too!
Basic setup
It is highly recommended you install PyDenseCRF through pip, for example pip install git+https://github.com/lucasb-eyer/pydensecrf.git, but if for some reason you couldn't, you can always use it like so after compiling it:
End of explanation
from scipy.stats import multivariate_normal
H, W, NLABELS = 400, 512, 2
# This creates a gaussian blob...
pos = np.stack(np.mgrid[0:H, 0:W], axis=2)
rv = multivariate_normal([H//2, W//2], (H//4)*(W//4))
probs = rv.pdf(pos)
# ...which we project into the range [0.4, 0.6]
probs = (probs-probs.min()) / (probs.max()-probs.min())
probs = 0.5 + 0.2 * (probs-0.5)
# The first dimension needs to be equal to the number of classes.
# Let's have one "foreground" and one "background" class.
# So replicate the gaussian blob but invert it to create the probability
# of the "background" class to be the opposite of "foreground".
probs = np.tile(probs[np.newaxis,:,:],(2,1,1))
probs[1,:,:] = 1 - probs[0,:,:]
# Let's have a look:
plt.figure(figsize=(15,5))
plt.subplot(1,2,1); plt.imshow(probs[0,:,:]); plt.title('Foreground probability'); plt.axis('off'); plt.colorbar();
plt.subplot(1,2,2); plt.imshow(probs[1,:,:]); plt.title('Background probability'); plt.axis('off'); plt.colorbar();
Explanation: Unary Potential
The unary potential consists of per-pixel class-probabilities. This could come from any kind of model such as a random-forest or the softmax of a deep neural network.
Create unary potential
End of explanation
# Inference without pair-wise terms
U = unary_from_softmax(probs) # note: num classes is first dim
d = dcrf.DenseCRF2D(W, H, NLABELS)
d.setUnaryEnergy(U)
# Run inference for 10 iterations
Q_unary = d.inference(10)
# The Q is now the approximate posterior, we can get a MAP estimate using argmax.
map_soln_unary = np.argmax(Q_unary, axis=0)
# Unfortunately, the DenseCRF flattens everything, so get it back into picture form.
map_soln_unary = map_soln_unary.reshape((H,W))
# And let's have a look.
plt.imshow(map_soln_unary); plt.axis('off'); plt.title('MAP Solution without pairwise terms');
Explanation: Run inference with unary potential
We can already run a DenseCRF with only a unary potential.
This doesn't account for neighborhoods at all, so it's not the greatest idea, but we can do it:
End of explanation
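Side note (an addition): the unary energy built by unary_from_softmax above is essentially the negative log of the class probabilities, flattened to shape (n_labels, n_pixels). A quick stand-alone check along those lines:
# Manual negative-log-probability unary on a tiny toy array (illustrative only).
import numpy as np
toy_probs = np.array([[[0.6, 0.4]], [[0.4, 0.6]]])   # (2 classes, 1, 2) toy probabilities
manual_unary = -np.log(toy_probs).reshape(2, -1).astype(np.float32)
print(manual_unary)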
NCHAN=1
# Create simple image which will serve as bilateral.
# Note that we put the channel dimension last here,
# but we could also have it be the first dimension and
# just change the `chdim` parameter to `0` further down.
img = np.zeros((H,W,NCHAN), np.uint8)
img[H//3:2*H//3,W//4:3*W//4,:] = 1
plt.imshow(img[:,:,0]); plt.title('Bilateral image'); plt.axis('off'); plt.colorbar();
# Create the pairwise bilateral term from the above image.
# The two `s{dims,chan}` parameters are model hyper-parameters defining
# the strength of the location and image content bilaterals, respectively.
pairwise_energy = create_pairwise_bilateral(sdims=(10,10), schan=(0.01,), img=img, chdim=2)
# pairwise_energy now contains as many dimensions as the DenseCRF has features,
# which in this case is 3: (x,y,channel1)
img_en = pairwise_energy.reshape((-1, H, W)) # Reshape just for plotting
plt.figure(figsize=(15,5))
plt.subplot(1,3,1); plt.imshow(img_en[0]); plt.title('Pairwise bilateral [x]'); plt.axis('off'); plt.colorbar();
plt.subplot(1,3,2); plt.imshow(img_en[1]); plt.title('Pairwise bilateral [y]'); plt.axis('off'); plt.colorbar();
plt.subplot(1,3,3); plt.imshow(img_en[2]); plt.title('Pairwise bilateral [c]'); plt.axis('off'); plt.colorbar();
Explanation: Pairwise terms
The whole point of DenseCRFs is to use some form of content to smooth out predictions. This is done via "pairwise" terms, which encode relationships between elements.
Add (non-RGB) pairwise term
For example, in image processing, a popular pairwise relationship is the "bilateral" one, which roughly says that pixels with either a similar color or a similar location are likely to belong to the same class.
End of explanation
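Conceptually (an illustrative addition, not part of the original notebook), a bilateral pairwise feature vector stacks scaled (x, y) coordinates with scaled image content, so that pixels close in location or in appearance end up with similar features:
# Build the (x, y, channel) feature triples for a toy image by hand.
import numpy as np
h, w = 4, 5
toy_img = np.random.rand(h, w)
yy, xx = np.mgrid[0:h, 0:w]
sdims, schan = 10.0, 0.01
features = np.stack([xx / sdims, yy / sdims, toy_img / schan]).reshape(3, -1)
print(features.shape)   # (3, h*w): one feature triple per pixel, matching the layout above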
d = dcrf.DenseCRF2D(W, H, NLABELS)
d.setUnaryEnergy(U)
d.addPairwiseEnergy(pairwise_energy, compat=10) # `compat` is the "strength" of this potential.
# This time, let's do inference in steps ourselves
# so that we can look at intermediate solutions
# as well as monitor KL-divergence, which indicates
# how well we have converged.
# PyDenseCRF also requires us to keep track of two
# temporary buffers it needs for computations.
Q, tmp1, tmp2 = d.startInference()
for _ in range(5):
d.stepInference(Q, tmp1, tmp2)
kl1 = d.klDivergence(Q) / (H*W)
map_soln1 = np.argmax(Q, axis=0).reshape((H,W))
for _ in range(20):
d.stepInference(Q, tmp1, tmp2)
kl2 = d.klDivergence(Q) / (H*W)
map_soln2 = np.argmax(Q, axis=0).reshape((H,W))
for _ in range(50):
d.stepInference(Q, tmp1, tmp2)
kl3 = d.klDivergence(Q) / (H*W)
map_soln3 = np.argmax(Q, axis=0).reshape((H,W))
img_en = pairwise_energy.reshape((-1, H, W)) # Reshape just for plotting
plt.figure(figsize=(15,5))
plt.subplot(1,3,1); plt.imshow(map_soln1);
plt.title('MAP Solution with DenseCRF\n(5 steps, KL={:.2f})'.format(kl1)); plt.axis('off');
plt.subplot(1,3,2); plt.imshow(map_soln2);
plt.title('MAP Solution with DenseCRF\n(20 steps, KL={:.2f})'.format(kl2)); plt.axis('off');
plt.subplot(1,3,3); plt.imshow(map_soln3);
plt.title('MAP Solution with DenseCRF\n(75 steps, KL={:.2f})'.format(kl3)); plt.axis('off');
Explanation: Run inference of complete DenseCRF
Now we can create a dense CRF with both unary and pairwise potentials and run inference on it to get our final result.
End of explanation |
8,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
If the DNA species distribution is truly Gaussian in a buoyant density gradient, then what sigma would be needed to reproduce the detection of all taxa > 0.1% in abundance throughout the entire gradient?
If 1e10 16S rRNA copies in community, then 0.1% abundant taxon = 1e7
If detection limit = 1 molecule, then probability density of normal distribution across the entire gradient that we sequence must be >= 1e-7
i.e., at least 1 of the 1e7 16S rRNA DNA molecules in every gradient fraction
Method
assess PDF across gradient for different levels of sigma
Setting parameters
Step1: Init
Step2: GC min-max
Step3: How big must sigma be to detect throughout the gradient?
Step4: Notes
sigma must be >= 18 to have taxon detected in all gradients
assuming mean GC of taxon fragments is 30%
How small would the fragments need to be to explain this just from diffusion (Clay et al., 2003)?
How small of fragments would be needed to get the observed detection threshold?
sigma distribution of fragment GC for the reference dataset genomes
Step5: Percent of taxa that would be detected in all fraction depending on the fragment BD stdev with accounting for diffusion | Python Code:
%load_ext rpy2.ipython
workDir = '/home/nick/notebook/SIPSim/dev/fullCyc/frag_norm_9_2.5_n5/default_run/'
%%R
sigmas = seq(1, 50, 1)
means = seq(25, 100, 1) # mean GC content of 30 to 70%
## max 13C shift
max_13C_shift_in_BD = 0.036
## min BD (that we care about)
min_GC = 13.5
min_BD = min_GC/100.0 * 0.098 + 1.66
## max BD (that we care about)
max_GC = 80
max_BD = max_GC / 100.0 * 0.098 + 1.66 # 80.0% G+C
max_BD = max_BD + max_13C_shift_in_BD
%%R
max_BD
Explanation: Goal
If the DNA species distribution is truly Gaussian in a buoyant density gradient, then what sigma would be needed to reproduce the detection of all taxa > 0.1% in abundance throughout the entire gradient?
If 1e10 16S rRNA copies in community, then 0.1% abundant taxon = 1e7
If detection limit = 1 molecule, then probability density of normal distribution across the entire gradient that we sequence must be >= 1e-7
i.e., at least 1 of the 1e7 16S rRNA DNA molecules in every gradient fraction
Method
assess PDF across gradient for different levels of sigma
Setting parameters
End of explanation
%%R
library(dplyr)
library(tidyr)
library(ggplot2)
library(gridExtra)
import numpy as np
import pandas as pd
import scipy.stats as stats
import dill
%%R
GC2BD = function(GC) GC / 100.0 * 0.098 + 1.66
GC2BD(50) %>% print
BD2GC = function(BD) (BD - 1.66) / 0.098 * 100
BD2GC(1.709) %>% print
Explanation: Init
End of explanation
%%R
min_GC = BD2GC(min_BD)
max_GC = BD2GC(max_BD)
cat('Min-max GC:', min_GC, max_GC, '\n')
Explanation: GC min-max
End of explanation
%%R
# where is density > X
detect_thresh = function(mean, sd, min_GC=13, max_GC=117){
GCs = min_GC:max_GC
dens = dnorm(GCs, mean=mean, sd=sd)
all(dens > 1e-9)
}
df = expand.grid(means, sigmas)
colnames(df) = c('mean', 'sigma')
df$detect = mapply(detect_thresh, mean=df$mean, sd=df$sigma)
df %>% head(n=4)
%%R -w 600
# plotting
ggplot(df, aes(mean, sigma, fill=detect)) +
geom_tile(color='black') +
scale_y_continuous(expand=c(0,0)) +
scale_x_continuous(expand=c(0,0)) +
labs(title='Detection probability of >1e-9 across the entire gradient') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: How big must sigma be to detect throughout the gradient?
End of explanation
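For readers not using rpy2, a plain Python/SciPy equivalent of the detect_thresh check above might look like the following sketch (the 1e-9 threshold and the GC range follow the R code):
import numpy as np
import scipy.stats as stats
def detect_thresh_py(mean_gc, sigma, min_gc=13, max_gc=117, thresh=1e-9):
    # True if the normal density stays above the threshold across the whole GC range
    gcs = np.arange(min_gc, max_gc + 1)
    dens = stats.norm.pdf(gcs, loc=mean_gc, scale=sigma)
    return bool(np.all(dens > thresh))
print(detect_thresh_py(30, 10), detect_thresh_py(30, 18))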
# loading fragments
F = os.path.join(workDir, '1', 'fragsParsed.pkl')
with open(F, 'rb') as inFH:
frags = dill.load(inFH)
stds = []
for x in frags:
otu = x[0]
for scaf,arr in x[1].items():
arr = np.array(arr)
sd = np.std(arr[:,2]) # fragment GC
stds.append([otu, scaf, sd])
stds = np.array(stds)
%%R -i stds -w 500 -h 300
stds = stds %>% as.data.frame
colnames(stds) = c('taxon', 'scaffold', 'sigma')
stds = stds %>%
mutate(sigma = sigma %>% as.character %>% as.numeric)
ggplot(stds, aes(sigma)) +
geom_histogram() +
theme_bw() +
theme(
text = element_text(size=16)
)
%%R
# using 10% quantile
## a relatively small, but not totally outlier of a sigma
## this will require a lot of diffusion
q10 = quantile(stds$sigma, probs=c(0.1)) %>% as.vector
q10
%%R
# function for sigma diffusion (Clay et al., 2003)
sigma_dif = function(L){
sqrt(44.5 / L)
}
# function for calculating total sigma (fragment buoyant density) based on mean fragment length
total_sigma = function(L, sigma_start){
# L = fragment length (kb)
# start_sigma = genome sigma prior to diffusion
sigma_D = sigma_dif(L)
sqrt(sigma_D**2 + sigma_start**2)
}
frag_lens = seq(0.1, 20, 0.1)
total_sd = sapply(frag_lens, total_sigma, sigma_start=q10)
df = data.frame('length__kb' = frag_lens, 'sigma' = total_sd)
df %>% head
%%R -w 600 -h 350
# plotting
ggplot(df, aes(length__kb, sigma)) +
geom_point() +
geom_line() +
geom_hline(yintercept=18, linetype='dashed', alpha=0.7) +
labs(x='Fragment length (kb)', y='Standard deviation of fragment BD\n(+diffusion)') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Notes
sigma must be >= 18 to have taxon detected in all gradients
assuming mean GC of taxon fragments is 30%
How small would the fragments need to be to explain this just from diffusion (Clay et al., 2003)?
How small of fragments would be needed to get the observed detection threshold?
sigma distribution of fragment GC for the reference dataset genomes
End of explanation
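A Python restatement (sketch only) of the fragment-length relations used here: following the Clay et al. 2003 expression quoted in the R code, diffusion contributes sqrt(44.5 / L) for a fragment of length L (kb), combined in quadrature with the starting sigma.
import numpy as np
def total_sigma_py(frag_len_kb, sigma_start):
    sigma_diffusion = np.sqrt(44.5 / frag_len_kb)        # diffusion term
    return np.sqrt(sigma_diffusion**2 + sigma_start**2)  # combined in quadrature
print(total_sigma_py(4.0, 5.0))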
%%R
sigma_thresh = 18
frag_lens = seq(0.1, 20, 0.1)
df = expand.grid(stds$sigma, frag_lens)
colnames(df) = c('sigma', 'length__kb')
df$total_sd = mapply(total_sigma, df$length__kb, df$sigma)
df$detect = ifelse(df$total_sd >= sigma_thresh, 1, 0)
df = df %>%
group_by(length__kb) %>%
summarize(n = n(),
detected = sum(detect),
detect_perc = detected / n * 100)
df %>% head(n=3)
%%R -w 600 -h 350
# plotting
ggplot(df, aes(length__kb, detect_perc)) +
geom_point() +
geom_line() +
labs(x='Fragment length (kb)', y='% of taxa detected in all\ngradient fractions') +
theme_bw() +
theme(
text = element_text(size=16)
)
Explanation: Percent of taxa that would be detected in all fractions depending on the fragment BD stdev when accounting for diffusion
End of explanation |
8,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Try tsfresh
tsfresh rolling ts
Step1: Generate features for rolling & expanding windows
roll_time_series
By default it's expanding window
For rolling window, set max_timeshift value and make sure it's positive (forward moving) to avoid lookahead
In this case, I need to shift y when appending y to generated ts features
Expanding Window
Step2: Rolling Window
Step3: Generate Specified Features
Step4: DIY Feature
You need to update tsfresh source code to do this
Follow this guidance, no need to send Git pull request
Step5: tsfresh Built-in Feature Selection
tsfresh feature selection module | Python Code:
import pandas as pd
# mock up ts data
df = pd.DataFrame({
"group": ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'b'],
"time": [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4, 5],
"x": [1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21, 23],
"y": [2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24],
})
df
Explanation: Try tsfresh
tsfresh rolling ts: https://tsfresh.readthedocs.io/en/latest/text/forecasting.html
rolling ts settings: https://tsfresh.readthedocs.io/en/latest/api/tsfresh.utilities.html
All features: https://tsfresh.readthedocs.io/en/latest/text/list_of_features.html
tsfresh <b>feature calculators</b>: https://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_extraction.html
tsfresh <b>feature extraction choices</b>: https://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_extraction.html#module-tsfresh.feature_extraction.settings
tsfresh <b>params settings for feature generation</b>: https://tsfresh.readthedocs.io/en/latest/text/feature_extraction_settings.html
How to add your customized features: https://tsfresh.readthedocs.io/en/latest/text/how_to_add_custom_feature.html
Git pull request is not mandatory, but you need to modify tsfresh source code
Summary
Tried feature generation in expanding & rolling windows, without bringing in lookahead
Besides selecting the default feature extraction lists, you can also specify your own list of features
To add customized features, you need to update the tsfresh code in:
https://github.com/blue-yonder/tsfresh/blob/main/tsfresh/feature_extraction/feature_calculators.py
https://github.com/blue-yonder/tsfresh/blob/main/tsfresh/feature_extraction/settings.py
End of explanation
from tsfresh.utilities.dataframe_functions import roll_time_series
from tsfresh import extract_features
# expanding window
df_expanded = roll_time_series(df, column_id="group", column_sort="time")
df_expanded
df_expanded_features = extract_features(df_expanded[['id', 'time', 'x']], column_id="id", column_sort="time")
df_expanded_features
df_expanded_features = df_expanded_features.reset_index()
df_expanded_features
# Append y to generated features
y_df = df[['group', 'y']].groupby('group').shift()
df_expanded_features['y'] = y_df['y']
df_expanded_features = df_expanded_features.drop(['level_0', 'level_1'], axis=1)
df_expanded_features
Explanation: Generate features for rolling & expanding windows
roll_time_series
By default it's expanding window
For rolling window, set max_timeshift value and make sure it's positive (forward moving) to avoid lookahead
In this case, I need to shift y when appending y to generated ts features
Expanding Window
End of explanation
# rolling window
df_rolled = roll_time_series(df, column_id="group", column_sort="time", max_timeshift=3)
df_rolled
df_rolled_features = extract_features(df_rolled[['id', 'time', 'x']], column_id="id", column_sort="time")
df_rolled_features
df_rolled
df_rolled_features = df_rolled_features.reset_index()
df_rolled_features
# Append y to generated features
y_df = df[['group', 'y']].groupby('group').shift()
df_rolled_features['y'] = y_df['y']
df_rolled_features = df_rolled_features.drop(['level_0', 'level_1'], axis=1)
df_rolled_features.head()
Explanation: Rolling Window
End of explanation
df_rolled_features_test = extract_features(df_rolled[['id', 'time', 'x']], column_id="id", column_sort="time",
default_fc_parameters = {
"length": None,
"hanhan_test_feature": None,
"large_standard_deviation": [{"r": 0.05}, {"r": 0.1}],
})
df_rolled_features_test
Explanation: Generate Specified Features
End of explanation
from tsfresh.feature_extraction import ComprehensiveFCParameters
settings = ComprehensiveFCParameters()
df_rolled_features_test = extract_features(df_rolled[['id', 'time', 'x']], column_id="id", column_sort="time",
default_fc_parameters=settings)
df_rolled_features_test
Explanation: DIY Feature
You need to update tsfresh source code to do this
Follow this guidance, no need to send Git pull request: https://tsfresh.readthedocs.io/en/latest/text/how_to_add_custom_feature.html
Find your local tsfresh code by typing pip show tsfresh and modify the code there
Your DIY feature implementation should be in feature_calculators.py: https://github.com/blue-yonder/tsfresh/blob/main/tsfresh/feature_extraction/feature_calculators.py
To add your DIY feature in a list, add it in settings.py: https://github.com/blue-yonder/tsfresh/blob/main/tsfresh/feature_extraction/settings.py
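The calculator itself is just a function from one time series to one number; a minimal sketch is shown below (the decorator and registration details are covered by the guides linked above and are not reproduced here).
import numpy as np
def hanhan_test_feature(x):
    # toy definition: range of the series
    x = np.asarray(x, dtype=float)
    return x.max() - x.min()
print(hanhan_test_feature([1, 3, 5, 7]))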
End of explanation
from tsfresh.feature_selection.relevance import calculate_relevance_table
from tsfresh import defaults
FDR_LEVEL = defaults.FDR_LEVEL
HYPOTHESES_INDEPENDENT = defaults.HYPOTHESES_INDEPENDENT
df_expanded_features.head()
df_features = df_expanded_features.fillna(0) # neither features nor target may contain nulls when using tsfresh feature selection
y = df_features['y']
X = df_features.drop('y', axis=1)
X.head()
df_pvalues = calculate_relevance_table(X, y, test_for_real_target_real_feature='kendall', test_for_real_target_binary_feature='mann')
df_pvalues.head()
print("# total \t", len(df_pvalues))
print("# relevant \t", (df_pvalues["relevant"] == True).sum())
print("# irrelevant \t", (df_pvalues["relevant"] == False).sum(), "( # constant", (df_pvalues["type"] == "const").sum(), ")")
def calc_rejection_line(df_pvalues, hypothesis_independent, fdr_level):
m = len(df_pvalues.loc[~(df_pvalues.type == "const")])
K = list(range(1, m + 1))
if hypothesis_independent:
C = [1] * m
else:
C = [sum([1.0 / k for k in K])] * m
return [fdr_level * k / m * 1.0 / c for k, c in zip(K, C)]
rejection_line = calc_rejection_line(df_pvalues, HYPOTHESES_INDEPENDENT, FDR_LEVEL)
rejection_line[0:10]
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams["figure.figsize"] = [16, 6]
matplotlib.rcParams["font.size"] = 14
matplotlib.style.use('seaborn-darkgrid')
df_pvalues.index = pd.Series(range(0, len(df_pvalues.index)))
df_pvalues.p_value.where(df_pvalues.relevant).plot(style=".", label="relevant features")
df_pvalues.p_value.where(~df_pvalues.relevant & (df_pvalues.type != "const")).plot(style=".", label="irrelevant features")
df_pvalues.p_value.fillna(1).where(df_pvalues.type == "const").plot(style=".", label="irrelevant (constant) features")
plt.plot(rejection_line, label="rejection line (FDR = " + str(FDR_LEVEL) + ")")
plt.xlabel("Feature #")
plt.ylabel("p-value")
plt.title("tsfresh Feature Selection Overview")
plt.legend()
plt.plot()
import numpy as np
last_rejected_index = (df_pvalues["relevant"] == True).sum() - 1
margin = 20
a = max(last_rejected_index - margin, 0)
b = min(last_rejected_index + margin, len(df_pvalues) - 1)
df_pvalues[a:b].p_value.where(df_pvalues[a:b].relevant)\
.plot(style=".", label="relevant features")
df_pvalues[a:b].p_value.where(~df_pvalues[a:b].relevant)\
.plot(style=".", label="irrelevant features")
plt.plot(np.arange(a, b), rejection_line[a:b], label="rejection line (FDR = " + str(FDR_LEVEL) + ")")
plt.xlabel("Feature #")
plt.ylabel("p-value")
plt.title("tsfresh Feature Selection Overview - Zoomed Plot")
plt.legend()
plt.plot()
Explanation: tsfresh Built-in Feature Selection
tsfresh feature selection module: https://tsfresh.readthedocs.io/en/latest/api/tsfresh.feature_selection.html
For every feature, the influence on the target is evaluated by a univariate test and the p-value is calculated
It uses Benjamini Hochberg procedure to decide which features to keep solely based on p-value
H_0 = the Feature is not relevant and should not be added
When p-value is smaller than FDR_LEVEL, reject H0 and the feature will be kept
It supports both real number and binary number targets
When the feature is binary, you can choose between Mann-Whitney-U test (mann) and Kolmogorov-Smirnov test (smir)
What is <b>Benjamini–Hochberg procedure</b>
https://www.statisticshowto.com/benjamini-hochberg-procedure/#:~:text=What%20is%20the%20Benjamini%2DHochberg,reject%20the%20true%20null%20hypotheses.
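As a stand-alone illustration (an addition, not tsfresh code), the Benjamini-Hochberg step on a vector of p-values looks like this: sort the p-values, compare each to FDR_LEVEL * rank / m, and keep every feature up to the largest passing rank.
import numpy as np
def benjamini_hochberg_keep(p_values, fdr_level=0.05):
    p = np.asarray(p_values, dtype=float)
    order = np.argsort(p)
    thresholds = fdr_level * np.arange(1, len(p) + 1) / len(p)
    passing = np.nonzero(p[order] <= thresholds)[0]
    keep = np.zeros(len(p), dtype=bool)
    if len(passing):
        keep[order[:passing.max() + 1]] = True   # keep everything up to the largest passing rank
    return keep
print(benjamini_hochberg_keep([0.001, 0.008, 0.039, 0.041, 0.2, 0.6]))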
End of explanation |
8,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Michaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.
Exercise 5
Step1: 1 Graph
Goal
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: 2 Fourier Basis
Compute the eigendecomposition $L=U \Lambda U^t$ of the Laplacian, where $\Lambda$ is the diagonal matrix of eigenvalues $\Lambda_{\ell\ell} = \lambda_\ell$ and $U = [u_1, \ldots, u_n]^t$ is the graph Fourier basis.
Hint
Step8: Visualize the eigenvectors $u_\ell$ corresponding to the first eight non-zero eigenvalues $\lambda_\ell$.
Can you explain what you observe and relate it to the structure of the graph ?
Step9: 3 Graph Signals
Let $f(u)$ be a positive and non-increasing function of $u$.
Compute the graph signal $x$ whose graph Fourier transform satisfies $\hat{x}(\ell) = f(\lambda_\ell)$.
Visualize the result.
Can you interpret it ? How does the choice of $f$ influence the result ? | Python Code:
import numpy as np
import scipy.spatial
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: A Network Tour of Data Science
Michaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.
Exercise 5: Graph Signals and Fourier Transform
The goal of this exercise is to experiment with the notions of graph signals, graph Fourier transform and smoothness and illustrate these concepts in the light of clustering.
End of explanation
d = 2 # Dimensionality.
n = 100 # Number of samples.
c = 1 # Number of communities.
# Data matrix, structured in communities.
X = np.random.uniform(0, 1, (n, d))
X += np.linspace(0, 2, c).repeat(n//c)[:, np.newaxis]
fig, ax = plt.subplots(1, 1, squeeze=True)
ax.scatter(X[:n//c, 0], X[:n//c, 1], c='b', s=40, linewidths=0, label='class 0');
ax.scatter(X[n//c:, 0], X[n//c:, 1], c='r', s=40, linewidths=0, label='class 1');
lim1 = X.min() - 0.5
lim2 = X.max() + 0.5
ax.set_xlim(lim1, lim2)
ax.set_ylim(lim1, lim2)
ax.set_aspect('equal')
ax.legend(loc='upper left');
Explanation: 1 Graph
Goal: compute the combinatorial Laplacian $L$ of a graph formed with $c=2$ clusters.
Step 1: construct and visualize a fabricated data matrix $X = [x_1, \ldots, x_n]^t \in \mathbb{R}^{n \times d}$ whose lines are $n$ samples embedded in a $d$-dimensional Euclidean space.
End of explanation
# Pairwise distances.
dist = scipy.spatial.distance.pdist(X, metric='euclidean')
dist = scipy.spatial.distance.squareform(dist)
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
Explanation: Step 2: compute all $n^2$ pairwise euclidean distances $\operatorname{dist}(i, j) = \|x_i - x_j\|_2$.
Hint: you may use the function scipy.spatial.distance.pdist() and scipy.spatial.distance.squareform().
End of explanation
k = 10 # Minimum number of edges per node.
idx = np.argsort(dist)[:, 1:k+1]
dist.sort()
dist = dist[:, 1:k+1]
assert dist.shape == (n, k)
Explanation: Step 3: order the distances and, for each sample, solely keep the $k=10$ closest samples to form a $k$ nearest neighbor ($k$-NN) graph.
Hint: you may sort a numpy array with np.sort() or np.argsort().
End of explanation
# Scaling factor.
sigma2 = np.mean(dist[:, -1])**2
# Weights with Gaussian kernel.
dist = np.exp(- dist**2 / sigma2)
plt.figure(figsize=(15, 5))
plt.hist(dist.flatten(), bins=40);
Explanation: Step 4: compute the weights using a Gaussian kernel, i.e. $$\operatorname{weight}(i, j) = \exp\left(-\frac{\operatorname{dist}(i,j)^2}{\sigma^2}\right) = \exp\left(-\frac{\|x_i - x_j\|_2^2}{\sigma^2}\right).$$
Hint: you may use the below definition of $\sigma^2$.
End of explanation
# Weight matrix.
I = np.arange(0, n).repeat(k)
J = idx.reshape(n*k)
V = dist.reshape(n*k)
W = scipy.sparse.coo_matrix((V, (I, J)), shape=(n, n))
# No self-connections.
W.setdiag(0)
# Non-directed graph.
bigger = W.T > W
W = W - W.multiply(bigger) + W.T.multiply(bigger)
assert type(W) == scipy.sparse.csr_matrix
print('n = |V| = {}, k|V| < |E| = {}'.format(n, W.nnz))
plt.spy(W, markersize=2, color='black');
import scipy.io
import os.path
scipy.io.mmwrite(os.path.join('datasets', 'graph_inpainting', 'embedding.mtx'), X)
scipy.io.mmwrite(os.path.join('datasets', 'graph_inpainting', 'graph.mtx'), W)
Explanation: Step 5: construct and visualize the sparse weight matrix $W_{ij} = \operatorname{weight}(i, j)$.
Hint: you may use the function scipy.sparse.coo_matrix() to create a sparse matrix.
End of explanation
# Degree matrix.
D = W.sum(axis=0)
D = scipy.sparse.diags(D.A.squeeze(), 0)
# Laplacian matrix.
L = D - W
fig, axes = plt.subplots(1, 2, squeeze=True, figsize=(15, 5))
axes[0].spy(L, markersize=2, color='black');
axes[1].plot(D.diagonal(), '.');
Explanation: Step 6: compute the combinatorial graph Laplacian $L = D - W$ where $D$ is the diagonal degree matrix $D_{ii} = \sum_j W_{ij}$.
End of explanation
lamb, U = np.linalg.eigh(L.toarray())
#print(lamb)
plt.figure(figsize=(15, 5))
plt.plot(lamb, '.-');
Explanation: 2 Fourier Basis
Compute the eigendecomposition $L=U \Lambda U^t$ of the Laplacian, where $\Lambda$ is the diagonal matrix of eigenvalues $\Lambda_{\ell\ell} = \lambda_\ell$ and $U = [u_1, \ldots, u_n]^t$ is the graph Fourier basis.
Hint: you may use the function np.linalg.eigh().
End of explanation
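Side note (an addition, not required for the exercise): for much larger graphs a full dense eigendecomposition becomes expensive, and one would typically compute only a few eigenpairs of the sparse Laplacian, e.g. with scipy.sparse.linalg.eigsh:
import numpy as np
import scipy.sparse
from scipy.sparse.linalg import eigsh
rng = np.random.RandomState(0)
A = (rng.rand(50, 50) < 0.1).astype(float)
A = np.maximum(A, A.T)                         # symmetric toy adjacency
np.fill_diagonal(A, 0)
L_toy = scipy.sparse.csr_matrix(np.diag(A.sum(axis=0)) - A)
vals, vecs = eigsh(L_toy, k=5, which='SM')     # only the 5 smallest eigenpairs
print(vals)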
def scatter(ax, x):
ax.scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax.set_xlim(lim1, lim2)
ax.set_ylim(lim1, lim2)
ax.set_aspect('equal')
fig, axes = plt.subplots(2, 4, figsize=(15, 6))
for i, ax in enumerate(axes.flatten()):
u = U[:, i+1]
scatter(ax, u)
ax.set_title('u_{}'.format(i+1))
Explanation: Visualize the eigenvectors $u_\ell$ corresponding to the first eight non-zero eigenvalues $\lambda_\ell$.
Can you explain what you observe and relate it to the structure of the graph ?
End of explanation
def f1(u, a=2):
y = np.zeros(n)
y[:a] = 1
return y
def f2(u):
return f1(u, a=3)
def f3(u):
return f1(u, a=n//4)
def f4(u):
return f1(u, a=n)
def f5(u, m=4):
return np.maximum(1 - m * u / u[-1], 0)
def f6(u):
return f5(u, 2)
def f7(u):
return f5(u, 1)
def f8(u):
return f5(u, 1/2)
def f9(u, a=1/2):
return np.exp(-u / a)
def f10(u):
return f9(u, a=1)
def f11(u):
return f9(u, a=2)
def f12(u):
return f9(u, a=4)
def plot(F):
plt.figure(figsize=(15, 5))
for f in F:
plt.plot(lamb, eval(f)(lamb), '.-', label=f)
plt.xlim(0, lamb[-1])
plt.legend()
F = ['f{}'.format(i+1) for i in range(12)]
plot(F[0:4])
plot(F[4:8])
plot(F[8:12])
fig, axes = plt.subplots(3, 4, figsize=(15, 9))
for f, ax in zip(F, axes.flatten()):
xhat = eval(f)(lamb)
x = U.dot(xhat) # U @ xhat
#x = U.dot(xhat * U.T[:,2])
scatter(ax, x)
ax.set_title(f)
Explanation: 3 Graph Signals
Let $f(u)$ be a positive and non-increasing function of $u$.
Compute the graph signal $x$ whose graph Fourier transform satisfies $\hat{x}(\ell) = f(\lambda_\ell)$.
Visualize the result.
Can you interpret it ? How does the choice of $f$ influence the result ?
End of explanation |
8,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate Two Networks with Different Spacing
Step1: Position Networks Appropriately, then Stitch Together
Step2: Quickly Visualize the Network
Let's just make sure things are working as planned using OpenPNMs basic visualization tools
Step3: Create Geometry Objects for Each Layer
Step4: Add Geometrical Properties to the Small Domain
The small domain will be treated as a continua, so instead of assigning pore sizes we want the 'pore' to be same size as the lattice cell.
Step5: Add Geometrical Properties to the Large Domain
Step6: Create Phase and Physics Objects
Step7: Add pore-scale models for diffusion to each Physics
Step8: For the small layer we've used a normal diffusive conductance model, which when combined with the diffusion coefficient of air will be equivalent to open-air diffusion. If we want the small layer to have some tortuosity we must account for this
Step9: Note that this extra line is NOT a pore-scale model, so it will be over-written when the phys_sm object is regenerated.
Add a Reaction Term to the Small Layer
A standard n-th order chemical reaction is $ r=k \cdot x^b $, or more generally
Step10: Perform a Diffusion Calculation
Step11: Visualize the Concentration Distribution
And the result would look something like this | Python Code:
spacing_lg = 0.00006
layer_lg = op.network.Cubic(shape=[10, 10, 1], spacing=spacing_lg)
spacing_sm = 0.00002
layer_sm = op.network.Cubic(shape=[30, 5, 1], spacing=spacing_sm)
Explanation: Generate Two Networks with Different Spacing
End of explanation
# Start by assigning labels to each network for identification later
layer_sm.set_label("small", pores=layer_sm.Ps, throats=layer_sm.Ts)
layer_lg.set_label("large", pores=layer_lg.Ps, throats=layer_lg.Ts)
# Next manually offset CL one full thickness relative to the GDL
layer_sm['pore.coords'] -= [0, spacing_sm*5, 0]
layer_sm['pore.coords'] += [0, 0, spacing_lg/2 - spacing_sm/2] # And shift up by 1/2 a lattice spacing
# Finally, send both networks to stitch which will stitch CL onto GDL
from openpnm.topotools import stitch
stitch(network=layer_lg, donor=layer_sm,
P_network=layer_lg.pores('left'),
P_donor=layer_sm.pores('right'),
len_max=0.00005)
combo_net = layer_lg
combo_net.name = 'combo'
Explanation: Position Networks Appropriately, then Stitch Together
End of explanation
fig = op.topotools.plot_connections(network=combo_net)
Explanation: Quickly Visualize the Network
Let's just make sure things are working as planned using OpenPNMs basic visualization tools:
End of explanation
Ps = combo_net.pores('small')
Ts = combo_net.throats('small')
geom_sm = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)
Ps = combo_net.pores('large')
Ts = combo_net.throats('small', mode='not')
geom_lg = op.geometry.GenericGeometry(network=combo_net, pores=Ps, throats=Ts)
Explanation: Create Geometry Objects for Each Layer
End of explanation
geom_sm['pore.diameter'] = spacing_sm
geom_sm['pore.area'] = spacing_sm**2
geom_sm['throat.diameter'] = spacing_sm
geom_sm['throat.area'] = spacing_sm**2
geom_sm['throat.length'] = 1e-12 # A very small number to represent nearly 0-length
geom_sm.add_model(propname='throat.endpoints',
model=gm.throat_endpoints.circular_pores)
geom_sm.add_model(propname='throat.length',
model=gm.throat_length.piecewise)
geom_sm.add_model(propname='throat.conduit_lengths',
model=gm.throat_length.conduit_lengths)
Explanation: Add Geometrical Properties to the Small Domain
The small domain will be treated as a continuum, so instead of assigning pore sizes we want each 'pore' to be the same size as the lattice cell.
End of explanation
geom_lg['pore.diameter'] = spacing_lg*np.random.rand(combo_net.num_pores('large'))
geom_lg.add_model(propname='pore.area',
model=gm.pore_area.sphere)
geom_lg.add_model(propname='throat.diameter',
model=mm.from_neighbor_pores,
pore_prop='pore.diameter', mode='min')
geom_lg.add_model(propname='throat.area',
model=gm.throat_area.cylinder)
geom_lg.add_model(propname='throat.endpoints',
model=gm.throat_endpoints.circular_pores)
geom_lg.add_model(propname='throat.length',
model=gm.throat_length.piecewise)
geom_lg.add_model(propname='throat.conduit_lengths',
model=gm.throat_length.conduit_lengths)
Explanation: Add Geometrical Properties to the Large Domain
End of explanation
air = op.phases.Air(network=combo_net, name='air')
phys_lg = op.physics.GenericPhysics(network=combo_net, geometry=geom_lg, phase=air)
phys_sm = op.physics.GenericPhysics(network=combo_net, geometry=geom_sm, phase=air)
Explanation: Create Phase and Physics Objects
End of explanation
phys_lg.add_model(propname='throat.diffusive_conductance',
model=pm.diffusive_conductance.ordinary_diffusion)
phys_sm.add_model(propname='throat.diffusive_conductance',
model=pm.diffusive_conductance.ordinary_diffusion)
Explanation: Add pore-scale models for diffusion to each Physics:
End of explanation
porosity = 0.5
tortuosity = 2
phys_sm['throat.diffusive_conductance'] *= (porosity/tortuosity)
Explanation: For the small layer we've used a normal diffusive conductance model, which when combined with the diffusion coefficient of air will be equivalent to open-air diffusion. If we want the small layer to have some tortuosity we must account for this:
End of explanation
# Set Source Term
air['pore.A1'] = -1e-10 # Reaction pre-factor
air['pore.A2'] = 1 # Reaction order
air['pore.A3'] = 0 # A generic offset that is not needed so set to 0
phys_sm.add_model(propname='pore.reaction',
model=pm.generic_source_term.power_law,
A1='pore.A1', A2='pore.A2', A3='pore.A3',
X='pore.concentration',
regen_mode='deferred')
Explanation: Note that this extra line is NOT a pore-scale model, so it will be over-written when the phys_sm object is regenerated.
Add a Reaction Term to the Small Layer
A standard n-th order chemical reaction is $ r=k \cdot x^b $, or more generally: $ r = A_1 \cdot x^{A_2} + A_3 $. This model is available in OpenPNM.Physics.models.generic_source_terms, and we must specify values for each of the constants.
End of explanation
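Quick numeric check (an addition): with the values chosen below, A1=-1e-10, A2=1 and A3=0, the power-law source term reduces to a first-order sink, r = -1e-10 * concentration.
import numpy as np
A1, A2, A3 = -1e-10, 1, 0
concentration = np.array([0.0, 0.5, 1.0])
print(A1 * concentration**A2 + A3)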
Deff = op.algorithms.ReactiveTransport(network=combo_net, phase=air)
Ps = combo_net.pores(['large', 'right'], mode='intersection')
Deff.set_value_BC(pores=Ps, values=1)
Ps = combo_net.pores('small')
Deff.set_source(propname='pore.reaction', pores=Ps)
Deff.settings['conductance'] = 'throat.diffusive_conductance'
Deff.settings['quantity'] = 'pore.concentration'
Deff.run()
Explanation: Perform a Diffusion Calculation
End of explanation
fig = op.topotools.plot_coordinates(network=combo_net, c=Deff['pore.concentration'], cmap='jet')
Explanation: Visualize the Concentration Distribution
And the result would look something like this:
End of explanation |
8,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import the libraries
Step1: Create an empty network
Step2: Create a new species S0
S0 is a reference to quickly access the newly created species later in the code. Note that one can add attributes to a species by adding elements to a parameter array that is passed to the new_species method.
<font color='red'>ATTENTION
Step3: Adding a gene
A species by itself is of no use for the evolution algorithm. The architecture of a network associates a TModule and a CorePromoter with a species to build a cluster representing a gene for the program. The TModule is there so that other transcription factors can bind to it and regulate S0.
Step4: ### Add complexation between S0 and S1.
It creates a ppi interaction and a new species S4 corresponding to the complex.
Step5: Add a phosphorylation of S2 by S4
This creates the phosphorylated version of S2, named S5, and a phosphorylation interaction, phospho.
Step6: Regulate the production of S1 by S3 and S5
Note that since the parameter L.fixed_activity_for_TF is True, the activity setting in new_TFHill is not taken into account. Only the TF activity counts.
Step7: Add a regulation of The production of S0 by S5 and S3 | Python Code:
from phievo.Networks import mutation,deriv2
import random
Explanation: Import the libraries
End of explanation
g = random.Random(20160225) # This define a new random number generator
L = mutation.Mutable_Network(g) # Create an empty network
Explanation: Create an empty network
End of explanation
parameters=[['Degradable',0.5]] ## The species is degradable with a rate 0.5
parameters.append(['Input',0]) ## The species cannot serve as an input for the evolution algorithm
parameters.append(['Complexable']) ## The species can be involved in a complex
parameters.append(['Kinase']) ## The species can phosphorylate another species.
parameters.append(['TF',1]) ## 1 for activator 0 for repressor
S0 = L.new_Species(parameters)
Explanation: Create a new species S0
S0 is a reference to quickly access the newly created species later in the code. Note that one can add attributes to a species by adding elements to a parameter array that is passed to the new_species method.
<font color='red'>ATTENTION : This way of creating a species is not recommended as it does not handle the interactions between the network's different species (see next section). It is shown here only to give a feeling of how the internal code works.</font>
End of explanation
L = mutation.Mutable_Network(g) ## Clear the network
## Gene 0
parameters=[['Degradable',0.5]]
parameters.append(['TF',1])
parameters.append(['Complexable'])
TM0,prom0,S0 = L.new_gene(0.5,5,parameters) ## Adding a new gene creates a TModule, a CorePromoter and a species
# Gene 1
parameters=[['Degradable',0.5]]
parameters.append(['TF',0])
parameters.append(['Complexable'])
TM1,prom1,S1 = L.new_gene(0.5,5,parameters)
# Gene 2
parameters=[['Degradable',0.5]]
parameters.append(['TF',1])
parameters.append(['Phosphorylable'])
TM2,prom2,S2 = L.new_gene(0.5,5,parameters)
# Gene 3
parameters=[['Degradable',0.5]]
parameters.append(['TF',0])
TM3,prom3,S3 = L.new_gene(0.5,5,parameters)
Explanation: Adding a gene
A species by itself is of no use for the evolution algorithm. The architecture of a network associates a TModule and a CorePromoter with a species to build a cluster representing a gene for the program. The TModule is there so that other transcription factors can bind to it and regulate S0.
End of explanation
parameters.append(['Kinase'])
ppi,S4 = L.new_PPI(S0 , S1 , 2.0 , 1.0 , parameters)
Explanation: ### Add complexation between S0 and S1.
It creates a ppi interaction and a new species S4 corresponding to the complex.
End of explanation
S5,phospho = L.new_Phosphorylation(S4,S2,2.0,0.5,1.0,3)
Explanation: Add a phosphorylation of S2 by S4
This creates the phosphorylated version of S2, named S5, and a phosphorylation interaction, phospho.
End of explanation
S5.change_type("TF",[1]) # Note this is already the default value for a phosphorilated species
tfhill1 = L.new_TFHill( S3, 1, 0.5, TM1,activity=1)
tfhill2 = L.new_TFHill( S5, 1, 0.5, TM1,activity=1)
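The regulation of S0 announced in the last section of the description is not shown in the original cells; by analogy with the calls above it would presumably target TM0, the TModule created for gene 0. The two lines below are a sketch only and reuse objects defined earlier in this notebook.
# Sketch (assumption): regulate the production of S0 by S3 and S5 via its module TM0.
tfhill3 = L.new_TFHill(S3, 1, 0.5, TM0, activity=1)
tfhill4 = L.new_TFHill(S5, 1, 0.5, TM0, activity=1)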
Explanation: Regulate the production of S1 by S3 and S5
Note that since the parameter L.fixed_activity_for_TF is True, the activity setting in new_TFHill is not taken into account. Only the TF activity counts.
End of explanation
L.draw()
Explanation: Add a regulation of the production of S0 by S5 and S3
End of explanation |
8,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We can classify in two ways: by discrimination or by assigning probabilities. When discriminating, we assign each $x$ to one of the $K$ classes $C_k$. In contrast, from a probabilistic point of view, we assign to each $x$ the probability of belonging to class $C_k$. Which type of classification we perform is at the user's discretion and will often depend on the distribution of the data or on the requirements imposed by the client. For example, there are Kaggle competitions where the task is to identify the class (Digit Recognizer), but determining the probability of belonging to a class can also be a requirement (Otto Group Product Classification Challenge).
In scikit-learn we can obtain classifications in both ways once the model is trained.
modelo.predict(), to assign a category.
modelo.predict_proba(), to determine the probability of membership.
Here we will focus on the probabilistic side, which I hope gives us a broader view, and which in turn will let us assign a category if we define a hyperplane.
For probabilistic models the most convenient choice, when we have two categories, is the binary representation with a single target variable $t \in {0,1}$ such that $t=0$ represents class $C_1$ and $t=1$ represents class $C_2$. We can interpret the value of $t$ as the probability that the class is $C_2$, with probability values ranging between $0$ and $1$.
Let's see an example.
Step1: With scikit-learn's make_classification function we create a dataset to classify. To start we will have only one attribute or feature and two classes or categories. The categories will be separated, but we allow a certain degree of overlap through the class_sep parameter; that way, probabilistic classification makes more sense.
Step2: In logistic regression what we are going to do is compute the probabilities $p(C_k|x)$. The logistic or sigmoid function will let us define those probabilities, and it is defined as
$$f(x) = \frac{1}{1 + \exp(-k(x-x_0))} $$
As we will see below, the sigmoid is S-shaped, and the logistic function plays a very important role in many classification algorithms. But it is not the only function of this kind; we can also find the arctangent, hyperbolic tangent or softmax functions, among others.
As usual in scikit-learn, we first define the model we are going to use, which will be LogisticRegression. We load it with the default parameters and train it.
Step3: By default it will print the parameters the model was trained with. Once trained, we can predict the probabilities of belonging to each category. For that, as we said, we will use the predict_proba() function, which takes the attributes X as input.
The predict_proba() function returns an array of shape (n samples, n classes). We are only interested in plotting the second column, that is, $p(C_1|x)$, since we know that $p(C_1|x) = 1 - p(C_0|x)$.
Step4: The S-shaped curve of the logistic function, which is what we were looking for, is clearly visible. It tells us that a point with $x=0$ has roughly a 50 % probability of belonging to either of the two categories.
If we wanted to turn the probabilities into a categorical classification, we would only have to define a threshold value. That is, when the logistic function assigns a probability greater than, for example, 0.5, we assign that category. That is basically what predict does, as we can see below. | Python Code:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: We can classify in two ways: by discrimination or by assigning probabilities. When discriminating, we assign each $x$ to one of the $K$ classes $C_k$. In contrast, from a probabilistic point of view, we assign to each $x$ the probability of belonging to class $C_k$. Which type of classification we perform is at the user's discretion and will often depend on the distribution of the data or on the requirements imposed by the client. For example, there are Kaggle competitions where the task is to identify the class (Digit Recognizer), but determining the probability of belonging to a class can also be a requirement (Otto Group Product Classification Challenge).
In scikit-learn we can obtain classifications in both ways once the model is trained.
modelo.predict(), to assign a category.
modelo.predict_proba(), to determine the probability of membership.
Here we will focus on the probabilistic side, which I hope gives us a broader view, and which in turn will let us assign a category if we define a hyperplane.
For probabilistic models the most convenient choice, when we have two categories, is the binary representation with a single target variable $t \in {0,1}$ such that $t=0$ represents class $C_1$ and $t=1$ represents class $C_2$. We can interpret the value of $t$ as the probability that the class is $C_2$, with probability values ranging between $0$ and $1$.
Let's see an example.
End of explanation
X, y = make_classification(n_features=1, n_informative=1, n_redundant=0, n_clusters_per_class=1,
class_sep=0.9, random_state=27)
plt.scatter(X, y, alpha=0.4)
plt.xlabel('X')
plt.ylabel('Probabilidad')
Explanation: With scikit-learn's make_classification function we create a dataset to classify. To start we will have only one attribute or feature and two classes or categories. The categories will be separated, but we allow a certain degree of overlap through the class_sep parameter; that way, probabilistic classification makes more sense.
End of explanation
lr = LogisticRegression()
lr.fit(X, y)
Explanation: In logistic regression what we do is compute the probabilities $p(C_k|x)$. The logistic or sigmoid function lets us define those probabilities, and it is given by
$$f(x) = \frac{1}{1 + \exp(-k(x-x_0))} $$
As we will see below, the sigmoid is S-shaped, and the logistic function plays a very important role in many classification algorithms. It is not the only function of this kind, though; we can also find the arctangent, the hyperbolic tangent, or softmax, among others.
As is customary in scikit-learn, we first define the model we are going to use, which will be LogisticRegression. We load it with the default parameters and train it.
End of explanation
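As a quick sanity check (a small sketch using the fitted lr, X and np from above), we can rebuild the class-1 probability by hand from the fitted coefficients and compare it with predict_proba():
# Rebuild p(C_1|x) from the linear part and the logistic function
z = X @ lr.coef_.T + lr.intercept_             # linear part, shape (n_samples, 1)
p_manual = 1.0 / (1.0 + np.exp(-z))            # logistic / sigmoid
p_sklearn = lr.predict_proba(X)[:, 1]          # scikit-learn's probability of class 1
print(np.allclose(p_manual.ravel(), p_sklearn))    # expected: True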
plt.scatter(X, y, alpha=0.4, label='real')
plt.plot(np.sort(X, axis=0), lr.predict_proba(np.sort(X, axis=0))[:,1], color='r', label='sigmoide')
plt.legend(loc=2)
plt.xlabel('X')
plt.ylabel('Probabilidad')
Explanation: By default it will print the parameters the model was trained with. Once trained, we can predict the probability of belonging to each class. To do this, as we have already said, we will use the predict_proba() function, which takes the attributes X as input.
What the predict_proba() function returns is an array of shape (n samples, n classes). We are only interested in plotting the second column, that is, $p(C_1|x)$, since we know that $p(C_1|x) = 1 - p(C_0|x)$.
End of explanation
plt.scatter(X, y, alpha=0.4, label='real')
plt.plot(np.sort(X, axis=0), lr.predict(np.sort(X, axis=0)), color='r', label='categoría')
plt.legend(loc=2)
plt.xlabel('X')
plt.ylabel('Probabilidad')
Explanation: The S-shaped curve of the logistic function is clearly visible, which is exactly what we were looking for. It tells us that a point with $x=0$ has roughly a 50 % probability of belonging to either of the two classes.
If we wanted to turn these probabilities into a class assignment, we would only need to define a threshold value: whenever the logistic function assigns a probability greater than, say, 0.5, we assign that class. That is essentially what predict does, as we can see below.
End of explanation |
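A small follow-up check (using the lr and X defined above): for a binary problem, predict is just predict_proba thresholded at 0.5.
labels_from_proba = (lr.predict_proba(X)[:, 1] > 0.5).astype(int)   # threshold the class-1 probability
print(np.array_equal(labels_from_proba, lr.predict(X)))             # expected: True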
8,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learn to Throw
In this notebook, we will train a fully-connected neural network to solve an inverse ballistics problem.
We will compare supervised training to differentiable physics training, both numerically and visually.
Φ-Flow
Documentation
API
Demos
Step1: To define the physics problem, we write a function to simulate the forward physics. This function takes the initial position, height, speed and angle of a thrown object and computes where the object will land, assuming the object follows a parabolic trajectory. Neglecting friction, can compute this by solving a quadratic equation.
Step2: Let's plot the trajectory! We define y(x) and sample it on a grid. We have to drop invalid values, such as negative flight times and below-ground points.
Step3: Before we train neural networks on this problem, let's perform a classical optimization using gradient descent in the initial velocity vel. We need to define a loss function to optimize. Here we desire the object to hit at x=0.
Step4: Φ<sub>Flow</sub> uses the selected library (PyTorch/TensorFlow/Jax) to derive analytic derivatives.
By default, the gradient function also returns the function value.
Step5: Now we can just subtract the gradient times a learning rate $\eta = 0.2$ until we converge.
Step6: Next, we generate a training set and test by sampling random values.
Step7: Now, let's create a fully-connected neural network with three hidden layers. We can reset the seed to make the weight initialization predictable.
Step8: For the differentiable physics network we do the same thing again.
Step9: Indeed, the We see that the networks were initialized identically! Alternatively, we could have saved and loaded the state.
Supervised Training
Now we can train the network. We feed the desired hit position into the network and predict a possible initial state.
For supervised training, we compare the network prediction to the ground truth from our training set.
Step10: What? That's almost no progress! Feel free to run more iterations but there is a deeper problem at work here. Before we get into that, let's train a the network again but with a differentiable physics loss function.
Training with Differentiable Physics
For the differentiable physics loss, we simulate the trajectory given the initial conditions predicted by the network. Then we can measure how close to the desired location the network got.
Step11: This looks even worse! The differentiable physics network seems to stray even further from the ground truth.
Well, we're not trying to match the ground truth, though. Let's instead measure how close to the desired location the network threw the object.
Step12: Now this is much more promissing. The diff.phys. network seems to hit the desired location very accurately considering it was only trained for 100 iterations. With more training steps, this loss will go down even further, unlike the supervised network.
So what is going on here? Why does the supervised network perform so poorly?
The answer lies in the problem itself. The task is multi-modal, i.e. there are many initial states that will hit the same target.
The network only gets the target position and must decide on a single initial state. With supervised training, there is no way to know which ground truth solution occurs in the test set. The best the network can do is average nearby solutions from the training set. But since the problem is non-linear, this will give only a rough guess.
The diff.phys. network completely ignores the ground truth solutions which are not even passed to the physics_loss function. Instead, it learns to hit the desired spot, which is exactly what we want.
We can visualize the difference by looking at a couple of trajectories. | Python Code:
# !pip install phiflow
# from phi.tf.flow import *
from phi.torch.flow import *
# from phi.jax.stax.flow import *
Explanation: Learn to Throw
In this notebook, we will train a fully-connected neural network to solve an inverse ballistics problem.
We will compare supervised training to differentiable physics training, both numerically and visually.
Φ-Flow
Documentation
API
Demos
End of explanation
def simulate_hit(pos, height, vel, angle, gravity=1.):
vel_x, vel_y = math.cos(angle) * vel, math.sin(angle) * vel
height = math.maximum(height, .5)
hit_time = (vel_y + math.sqrt(vel_y**2 + 2 * gravity * height)) / gravity
return pos + vel_x * hit_time, hit_time, height, vel_x, vel_y
simulate_hit(10, 1, 1, 0)[0]
Explanation: To define the physics problem, we write a function to simulate the forward physics. This function takes the initial position, height, speed and angle of a thrown object and computes where the object will land, assuming the object follows a parabolic trajectory. Neglecting friction, we can compute this by solving a quadratic equation.
End of explanation
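A quick plain-NumPy sanity check of the closed-form flight time used above (assumed example values: height 1, speed 1, angle 0, gravity 1). Plugging the root back into the height equation should give approximately zero.
import numpy as np
g, h, v_y = 1.0, 1.0, 0.0                            # angle = 0, so the vertical launch speed is 0
t_hit = (v_y + np.sqrt(v_y**2 + 2 * g * h)) / g      # same closed-form root as simulate_hit
print(h + v_y * t_hit - g / 2 * t_hit**2)            # expected: ~0.0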
def sample_trajectory(pos, height, vel, angle, gravity=1.):
hit, hit_time, height, vel_x, vel_y = simulate_hit(pos, height, vel, angle, gravity)
def y(x):
t = (x.vector[0] - pos) / vel_x
y_ = height + vel_y * t - gravity / 2 * t ** 2
return math.where((y_ > 0) & (t > 0), y_, NAN)
return CenteredGrid(y, x=2000, bounds=Box(x=(min(pos.min, hit.min), max(pos.max, hit.max))))
vis.plot(sample_trajectory(tensor(10), 1, 1, math.linspace(-PI/4, 1.5, 7)), title="Varying Angle")
Explanation: Let's plot the trajectory! We define y(x) and sample it on a grid. We have to drop invalid values, such as negative flight times and below-ground points.
End of explanation
vel = 1
def loss_function(vel):
return math.l2_loss(simulate_hit(10, 1, vel, 0)[0] - 0)
loss_function(0)
Explanation: Before we train neural networks on this problem, let's perform a classical optimization using gradient descent in the initial velocity vel. We need to define a loss function to optimize. Here we desire the object to hit at x=0.
End of explanation
gradient = math.functional_gradient(loss_function)
gradient(0)
Explanation: Φ<sub>Flow</sub> uses the selected library (PyTorch/TensorFlow/Jax) to derive analytic derivatives.
By default, the gradient function also returns the function value.
End of explanation
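As a rough cross-check (a sketch, not part of the original workflow), the analytic gradient can be compared against a central finite difference of loss_function:
_, (grad_analytic,) = gradient(0)
eps = 1e-3
fd = (loss_function(eps) - loss_function(-eps)) / (2 * eps)   # central finite difference at vel = 0
print(fd, grad_analytic)                                      # the two estimates should agree closely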
trj = [vel]
for i in range(10):
loss, (grad,) = gradient(vel)
vel = vel - .2 * grad
trj.append(vel)
print(f"vel={vel} - loss={loss}")
trj = math.stack(trj, channel('opt'))
vis.plot(sample_trajectory(tensor(10), 1, trj, 0))
Explanation: Now we can just subtract the gradient times a learning rate $\eta = 0.2$ until we converge.
End of explanation
def generate_data(shape):
pos = math.random_normal(shape)
height = math.random_uniform(shape) + .5
vel = math.random_uniform(shape)
angle = math.random_uniform(shape) * PI/2
return math.stack(dict(pos=pos, height=height, vel=vel, angle=angle), channel('vector'))
x_train = generate_data(batch(examples=1000))
x_test = generate_data(batch(examples=1000))
y_train = simulate_hit(*x_train.vector)[0]
y_test = simulate_hit(*x_test.vector)[0]
Explanation: Next, we generate a training set and a test set by sampling random values.
End of explanation
math.seed(0)
net_sup = dense_net(1, 4, [32, 64, 32])
net_sup
# net_sup.trainable_weights[0] # TensorFlow
net_sup.state_dict()['linear0.weight'].flatten() # PyTorch
# net_sup.parameters[0][0] # Stax
Explanation: Now, let's create a fully-connected neural network with three hidden layers. We can reset the seed to make the weight initialization predictable.
End of explanation
math.seed(0)
net_dp = dense_net(1, 4, [32, 64, 32])
# net_dp.trainable_weights[0] # TensorFlow
net_dp.state_dict()['linear0.weight'].flatten() # PyTorch
# net_dp.parameters[0][0] # Stax
Explanation: For the differentiable physics network we do the same thing again.
End of explanation
opt_sup = adam(net_sup)
def supervised_loss(x, y, net=net_sup):
prediction = math.native_call(net, y)
return math.l2_loss(prediction - x)
print(f"Supervised loss (training set): {supervised_loss(x_train, y_train)}")
print(f"Supervised loss (test set): {supervised_loss(x_test, y_test)}")
for i in range(100):
update_weights(net_sup, opt_sup, supervised_loss, x_train, y_train)
print(f"Supervised loss (training set): {supervised_loss(x_train, y_train)}")
print(f"Supervised loss (test set): {supervised_loss(x_test, y_test)}")
Explanation: Indeed, we see that the networks were initialized identically! Alternatively, we could have saved and loaded the state.
Supervised Training
Now we can train the network. We feed the desired hit position into the network and predict a possible initial state.
For supervised training, we compare the network prediction to the ground truth from our training set.
End of explanation
def physics_loss(y, net=net_dp):
prediction = math.native_call(net, y)
y_sim = simulate_hit(*prediction.vector)[0]
return math.l2_loss(y_sim - y)
opt_dp = adam(net_dp)
for i in range(100):
update_weights(net_dp, opt_dp, physics_loss, y_train)
print(f"Supervised loss (training set): {supervised_loss(x_train, y_train, net=net_dp)}")
print(f"Supervised loss (test set): {supervised_loss(x_test, y_test, net=net_dp)}")
Explanation: What? That's almost no progress! Feel free to run more iterations, but there is a deeper problem at work here. Before we get into that, let's train the network again, but with a differentiable physics loss function.
Training with Differentiable Physics
For the differentiable physics loss, we simulate the trajectory given the initial conditions predicted by the network. Then we can measure how close to the desired location the network got.
End of explanation
print(f"Supervised network (test set): {physics_loss(y_test, net=net_sup)}")
print(f"Diff.Phys. network (test set): {physics_loss(y_test, net=net_dp)}")
Explanation: This looks even worse! The differentiable physics network seems to stray even further from the ground truth.
Well, we're not trying to match the ground truth, though. Let's instead measure how close to the desired location the network threw the object.
End of explanation
predictions = math.stack({
"Ground Truth": x_test.examples[:4],
"Supervised": math.native_call(net_sup, y_test.examples[:4]),
"Diff.Phys": math.native_call(net_dp, y_test.examples[:4]),
}, channel('curves'))
vis.plot(sample_trajectory(*predictions.vector), size=(16, 4))
Explanation: Now this is much more promising. The diff.phys. network seems to hit the desired location very accurately, considering it was only trained for 100 iterations. With more training steps, this loss will go down even further, unlike the supervised network.
So what is going on here? Why does the supervised network perform so poorly?
The answer lies in the problem itself. The task is multi-modal, i.e. there are many initial states that will hit the same target.
The network only gets the target position and must decide on a single initial state. With supervised training, there is no way to know which ground truth solution occurs in the test set. The best the network can do is average nearby solutions from the training set. But since the problem is non-linear, this will give only a rough guess.
The diff.phys. network completely ignores the ground truth solutions which are not even passed to the physics_loss function. Instead, it learns to hit the desired spot, which is exactly what we want.
We can visualize the difference by looking at a couple of trajectories.
End of explanation |
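To make the multi-modality concrete, here is a small plain-NumPy sketch (independent of the networks above, with assumed values pos=0, height=1, gravity=1) showing several different angle/speed pairs that land at exactly the same spot:
import numpy as np
def landing_x(height, vel, angle, gravity=1.0):
    # same parabolic landing point as simulate_hit, starting at x = 0
    vx, vy = np.cos(angle) * vel, np.sin(angle) * vel
    t_hit = (vy + np.sqrt(vy**2 + 2 * gravity * height)) / gravity
    return vx * t_hit
def speed_for_target(target, height, angle, gravity=1.0):
    # solve height + target*tan(angle) = gravity*target**2 / (2*vx**2) for the launch speed
    vx = target * np.sqrt(gravity / (2 * (height + target * np.tan(angle))))
    return vx / np.cos(angle)
for angle in [0.0, np.pi / 6, np.pi / 4]:
    v = speed_for_target(2.0, 1.0, angle)
    print(f"angle={angle:.2f}  vel={v:.3f}  lands at x={landing_x(1.0, v, angle):.3f}")   # all land at x=2.0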
8,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nodes and Edges
Step1: Basic Network Statistics
Let's first understand how many students and friendships are represented in the network.
Step2: Exercise
Can you write a single line of code that returns the number of nodes in the graph? (1 min.)
Step3: Let's now figure out who is connected to who in the network
Step4: Exercise
Can you write a single line of code that returns the number of relationships represented? (1 min)
Step5: Concept
A network, more technically known as a graph, is comprised of
Step6: Exercise
Can you count how many males and females are represented in the graph? (3 min.)
Hint
Step7: Edges can also store attributes in their attribute dictionary.
Step8: In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the "count" variable.
Exercise
Can you figure out the maximum times any student rated another student as their favourite? (3 min.)
Step9: Exercise
We found out that there are two individuals that we left out of the network, individual no. 30 and 31. They are one male (30) and one female (31), and they are a pair that just love hanging out with one another and with individual 7 (count=3), in both directions per pair. Add this information to the graph. (5 min.)
If you need more help, check out https
Step10: Verify that you have added in the edges and nodes correctly by running the following cell.
Step11: Exercise (break-time)
If you would like a challenge during the break, try figuring out which students have "unrequited" friendships, that is, they have rated another student as their favourite at least once, but that other student has not rated them as their favourite at least once.
Specifically, get a list of edges for which the reverse edge is not present.
Hint
Step12: In a previous session at ODSC East 2018, a few other class participants provided the following solutions.
This one by @schwanne is the list comprehension version of the above solution
Step13: This one by @end0 is a unique one involving sets.
Step14: Tests
A note about the tests
Step15: If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.
Step16: However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.
Matrix Plot
Instead, we can use a matrix to represent them. The nodes are on the x- and y- axes, and a filled square represent an edge between the nodes. This is done by using the MatrixPlot object from nxviz.
Step17: Arc Plot
The Arc Plot is the basis of the next set of rational network visualizations.
Step18: Circos Plot
Let's try another visualization, the Circos plot. We can order the nodes in the Circos plot according to the node ID, but any other ordering is possible as well. Edges are drawn between two nodes.
Credit goes to Justin Zabilansky (MIT) for the implementation, Jon Charest for subsequent improvements, and nxviz contributors for further development.
Step19: This visualization helps us highlight nodes that there are poorly connected, and others that are strongly connected.
Hive Plot
Next up, let's try Hive Plots. HivePlots are not yet implemented in nxviz just yet, so we're going to be using the old hiveplot API for this. When HivePlots have been migrated over to nxviz, its API will resemble that of the CircosPlot's. | Python Code:
G = cf.load_seventh_grader_network()
Explanation: Nodes and Edges: How do we represent relationships between individuals using NetworkX?
As mentioned earlier, networks, also known as graphs, are comprised of individual entities and the relationships between them. The technical terms for these are nodes and edges, and when we draw them we typically use circles (nodes) and lines (edges).
In this notebook, we will work with a social network of seventh graders, in which nodes are individual students, and edges represent their relationships. Edges between individuals show how often the seventh graders indicated other seventh graders as their favourite.
Data credit: http://konect.cc/networks/moreno_seventh
Data Representation
In the networkx implementation, graph objects store their data in dictionaries.
Nodes are part of the attribute Graph.node, which is a dictionary where the key is the node ID and the values are a dictionary of attributes.
Edges are part of the attribute Graph.edge, which is a nested dictionary. Data are accessed as such: G.edge[node1, node2]['attr_name'].
Because of the dictionary implementation of the graph, any hashable object can be a node. This means strings and tuples, but not lists and sets.
Load Data
Let's load some real network data to get a feel for the NetworkX API. This dataset comes from a study of 7th grade students.
This directed network contains proximity ratings between 29 seventh grade students from a school in Victoria. Among other questions the students were asked to nominate their preferred classmates for three different activities. A node represents a student. An edge between two nodes shows that the left student picked the right student as his answer. The edge weights are between 1 and 3 and show how often the left student chose the right student as his favourite.
End of explanation
len(G.nodes())
# Who are represented in the network?
list(G.nodes())[0:5]
Explanation: Basic Network Statistics
Let's first understand how many students and friendships are represented in the network.
End of explanation
len(G.nodes())
# len(G)
Explanation: Exercise
Can you write a single line of code that returns the number of nodes in the graph? (1 min.)
End of explanation
# Who is connected to who in the network?
# list(G.edges())[0:5]
list(G.edges())[0:5]
Explanation: Let's now figure out who is connected to who in the network
End of explanation
len(G.edges())
Explanation: Exercise
Can you write a single line of code that returns the number of relationships represented? (1 min)
End of explanation
# Let's get a list of nodes with their attributes.
list(G.nodes(data=True))[0:5]
# G.nodes(data=True)
# NetworkX will return a list of tuples in the form (node_id, attribute_dictionary)
Explanation: Concept
A network, more technically known as a graph, is comprised of:
a set of nodes
joined by a set of edges
They can be represented as two lists:
A node list: a list of 2-tuples where the first element of each tuple is the representation of the node, and the second element is a dictionary of metadata associated with the node.
An edge list: a list of 3-tuples where the first two elements are the nodes that are connected together, and the third element is a dictionary of metadata associated with the edge.
Since this is a social network of people, there'll be attributes for each individual, such as a student's gender. We can grab that data off from the attributes that are stored with each node.
End of explanation
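A minimal sketch of those two representations (a toy graph, separate from the seventh-grader data):
# Build a tiny directed graph from a node list and an edge list with attribute dictionaries
import networkx as nx
toy_nodes = [(1, {'gender': 'male'}), (2, {'gender': 'female'})]
toy_edges = [(1, 2, {'count': 3})]
toy = nx.DiGraph()
toy.add_nodes_from(toy_nodes)
toy.add_edges_from(toy_edges)
print(list(toy.nodes(data=True)), list(toy.edges(data=True)))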
from collections import Counter
mf_counts = Counter([d['gender']
for n, d in G.nodes(data=True)])
def test_answer(mf_counts):
assert mf_counts['female'] == 17
assert mf_counts['male'] == 12
test_answer(mf_counts)
Explanation: Exercise
Can you count how many males and females are represented in the graph? (3 min.)
Hint: You may want to use the Counter object from the collections module.
End of explanation
list(G.edges(data=True))[0:5]
Explanation: Edges can also store attributes in their attribute dictionary.
End of explanation
# Answer
counts = [d['count'] for n1, n2, d in G.edges(data=True)]
maxcount = max(counts)
def test_maxcount(maxcount):
assert maxcount == 3
test_maxcount(maxcount)
Explanation: In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the "count" variable.
Exercise
Can you figure out the maximum times any student rated another student as their favourite? (3 min.)
End of explanation
# Answer
G.add_node(30, gender='male')
G.add_node(31, gender='female')
G.add_edge(30, 31, count=3)
G.add_edge(31, 30, count=3) # reverse is optional in undirected network
G.add_edge(30, 7, count=3) # but this network is directed
G.add_edge(7, 30, count=3)
G.add_edge(31, 7, count=3)
G.add_edge(7, 31, count=3)
Explanation: Exercise
We found out that there are two individuals that we left out of the network, individual no. 30 and 31. They are one male (30) and one female (31), and they are a pair that just love hanging out with one another and with individual 7 (count=3), in both directions per pair. Add this information to the graph. (5 min.)
If you need more help, check out https://networkx.github.io/documentation/stable/tutorial.html
End of explanation
def test_graph_integrity(G):
assert 30 in G.nodes()
assert 31 in G.nodes()
assert G.nodes[30]['gender'] == 'male'
assert G.nodes[31]['gender'] == 'female'
assert G.has_edge(30, 31)
assert G.has_edge(30, 7)
assert G.has_edge(31, 7)
assert G.edges[30, 7]['count'] == 3
assert G.edges[7, 30]['count'] == 3
assert G.edges[31, 7]['count'] == 3
assert G.edges[7, 31]['count'] == 3
assert G.edges[30, 31]['count'] == 3
assert G.edges[31, 30]['count'] == 3
print('All tests passed.')
test_graph_integrity(G)
Explanation: Verify that you have added in the edges and nodes correctly by running the following cell.
End of explanation
unrequitted_friendships = []
for n1, n2 in G.edges():
if not G.has_edge(n2, n1):
unrequitted_friendships.append((n1, n2))
assert len(unrequitted_friendships) == 124
Explanation: Exercise (break-time)
If you would like a challenge during the break, try figuring out which students have "unrequited" friendships, that is, they have rated another student as their favourite at least once, but that other student has not rated them as their favourite at least once.
Specifically, get a list of edges for which the reverse edge is not present.
Hint: You may need the class method G.has_edge(n1, n2). This returns whether a graph has an edge between the nodes n1 and n2.
End of explanation
len([(n1, n2) for n1, n2 in G.edges() if not G.has_edge(n2, n1)])
Explanation: In a previous session at ODSC East 2018, a few other class participants provided the following solutions.
This one by @schwanne is the list comprehension version of the above solution:
End of explanation
links = ((n1, n2) for n1, n2, d in G.edges(data=True))
reverse_links = ((n2, n1) for n1, n2, d in G.edges(data=True))
len(list(set(links) - set(reverse_links)))
Explanation: This one by @end0 is a unique one involving sets.
End of explanation
nx.draw(G)
Explanation: Tests
A note about the tests: Testing is good practice when writing code. Well-crafted assertion statements help you program defensively, by forcing you to explicitly state your assumptions about the code or data.
For more references on defensive programming, check out Software Carpentry's website: http://swcarpentry.github.io/python-novice-inflammation/08-defensive/
For more information on writing tests for your data, check out these slides from a lightning talk I gave at Boston Python and SciPy 2015: http://j.mp/data-test
Coding Patterns
These are some recommended coding patterns when doing network analysis using NetworkX, which stem from my roughly two years of experience with the package.
Iterating using List Comprehensions
I would recommend that you use the following for compactness:
[d['attr'] for n, d in G.nodes(data=True)]
And if the node is unimportant, you can do:
[d['attr'] for _, d in G.nodes(data=True)]
Iterating over Edges using List Comprehensions
A similar pattern can be used for edges:
[n2 for n1, n2, d in G.edges(data=True)]
or
[n2 for _, n2, d in G.edges(data=True)]
If the graph you are constructing is a directed graph, with a "source" and "sink" available, then I would recommend the following pattern:
[(sc, sk) for sc, sk, d in G.edges(data=True)]
or
[d['attr'] for sc, sk, d in G.edges(data=True)]
Drawing Graphs
As illustrated above, we can draw graphs using the nx.draw() function. The most popular format for drawing graphs is the node-link diagram.
Hairballs
Nodes are circles and lines are edges. Nodes more tightly connected with one another are clustered together. Large graphs end up looking like hairballs.
End of explanation
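Applying those iteration patterns to the graph at hand (a small sketch):
node_genders = [d['gender'] for _, d in G.nodes(data=True)]     # node-attribute pattern
edge_counts = [d['count'] for _, _, d in G.edges(data=True)]    # edge-attribute pattern
print(len(node_genders), len(edge_counts))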
nx.draw(G, with_labels=True)
Explanation: If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.
End of explanation
from nxviz import MatrixPlot
m = MatrixPlot(G)
m.draw()
plt.show()
Explanation: However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.
Matrix Plot
Instead, we can use a matrix to represent them. The nodes are on the x- and y-axes, and a filled square represents an edge between the nodes. This is done by using the MatrixPlot object from nxviz.
End of explanation
from nxviz import ArcPlot
a = ArcPlot(G, node_color='gender', node_grouping='gender')
a.draw()
Explanation: Arc Plot
The Arc Plot is the basis of the next set of rational network visualizations.
End of explanation
from nxviz import CircosPlot
c = CircosPlot(G, node_color='gender', node_grouping='gender')
c.draw()
plt.savefig('images/seventh.png', dpi=300)
Explanation: Circos Plot
Let's try another visualization, the Circos plot. We can order the nodes in the Circos plot according to the node ID, but any other ordering is possible as well. Edges are drawn between two nodes.
Credit goes to Justin Zabilansky (MIT) for the implementation, Jon Charest for subsequent improvements, and nxviz contributors for further development.
End of explanation
from hiveplot import HivePlot
nodes = dict()
nodes['male'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'male']
nodes['female'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'female']
edges = dict()
edges['group1'] = G.edges(data=True)
nodes_cmap = dict()
nodes_cmap['male'] = 'blue'
nodes_cmap['female'] = 'red'
edges_cmap = dict()
edges_cmap['group1'] = 'black'
h = HivePlot(nodes, edges, nodes_cmap, edges_cmap)
h.draw()
Explanation: This visualization helps us highlight nodes that are poorly connected, and others that are strongly connected.
Hive Plot
Next up, let's try Hive Plots. HivePlots are not implemented in nxviz just yet, so we're going to be using the old hiveplot API for this. When HivePlots have been migrated over to nxviz, their API will resemble that of the CircosPlot.
End of explanation |
8,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forward Modeling the X-ray Image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.
Step1: The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
Step2: im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
Step3: A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters
Step4: The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
Step5: The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
Step6: Masking out the other sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions in to zero.
Step7: As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
Step8: A Generative Model for the X-ray Image
All of the discussion above was in terms of predicting the expected number of counts in each pixel, $\mu_k$. This is not what we observe | Python Code:
from __future__ import print_function
import astropy.io.fits as pyfits
import astropy.visualization as viz
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
Explanation: Forward Modeling the X-ray Image data
In this notebook, we'll take a closer look at the X-ray image data products, and build a simple, generative, forward model for the observed data.
End of explanation
imfits = pyfits.open('a1835_xmm/P0098010101M2U009IMAGE_3000.FTZ')
im = imfits[0].data
Explanation: The XMM Image Data
Recall that we downloaded some XMM data in the "First Look" notebook.
We downloaded three files, and just looked at one - the "science" image.
End of explanation
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
Explanation: im is the image, our observed data, presented after some "standard processing." The numbers in the pixels are counts (i.e. numbers of photoelectrons recorded by the CCD during the exposure).
We display the image on a log scale, which allows us to simultaneously see both the cluster of galaxies in the center, and the much fainter background and other sources in the field.
End of explanation
pbfits = pyfits.open('a1835_xmm/P0098010101M2X000BKGMAP3000.FTZ')
pb = pbfits[0].data
exfits = pyfits.open('a1835_xmm/P0098010101M2U009EXPMAP3000.FTZ')
ex = exfits[0].data
Explanation: A Model for the Cluster of Galaxies
We will use a common parametric model for the surface brightness of galaxy clusters: the azimuthally symmetric beta model:
$S(r) = S_0 \left[1.0 + \left(\frac{r}{r_c}\right)^2\right]^{-3\beta + 1/2}$,
where $r$ is projected distance from the cluster center.
The parameters of this model are:
$x_0$, the $x$ coordinate of the cluster center
$y_0$, the $y$ coordinate of the cluster center
$S_0$, the normalization, in surface brightness units
$r_c$, a radial scale (called the "core radius")
$\beta$, which determines the slope of the profile
Note that this model describes a 2D surface brightness distribution, since $r^2 = x^2 + y^2$
Let's draw a cartoon of this model on the whiteboard
Planning an Expected Counts Map
Our data are counts, i.e. the number of times a physical pixel in the camera was activated while pointing at the area of sky corresponding to a pixel in our image. We can think of different sky pixels as having different effective exposure times, as encoded by an exposure map, ex.
We expect to see counts due to a number of sources:
X-rays from the galaxy cluster
X-rays from other detected sources in the field
X-rays from unresolved sources (the Cosmic X-ray Background)
Diffuse X-rays from the Galactic halo and the local bubble (the local X-ray foreground)
Soft protons from the solar wind, cosmic rays, and other undesirables (the particle background)
Let's go through these in turn.
1. Counts from the Cluster
Since our data are counts in each pixel, our model needs to first predict the expected counts in each pixel. Physical models predict intensity (counts per second per pixel per unit effective area of the telescope). The spatial variation of the effective area relative to the aimpoint is one of the things accounted for in the exposure map, and we can leave the overall area to one side when fitting (although we would need it to turn our results into physically interesting conclusions about, e.g. the luminosity of the cluster).
Since the X-rays from the cluster are transformed according to the exposure map, the units of $S_0$ are counts/s/pixel, and the model prediction for the expected number of counts from the cluster is CL*ex, where CL is an image with pixel values computed from $S(r)$.
2-4. X-ray background model
The X-ray background will be "vignetted" in the same way as X-rays from the cluster. We can lump sources 2-4 together, to extend our model so that it is composed of a galaxy cluster, plus an X-ray background.
The simplest assumption we can make about the X-ray background is that it is spatially uniform, on average. The model must account for the varying effective exposure as a function of position, however. So the model prediction associated with this component is b*ex, where b is a single number with units of counts/s/pixel.
We can circumvent the problem of the other detected sources in the field by masking them out, leaving us with the assumption that any remaining counts are not due to the masked sources. This could be a source of systematic error, so we'll note it down for later.
5. Particle background model
The particle background represents a flux of particles that either do not traverse the telescope optics at all, or follow a different optical path than X-rays - so the exposure map (and its vignetting correction) does not apply.
Instead, we're given, from a black box, a prediction for the expected counts/pixel due to particles, so the extension to our model is simply to add this image, pb.
Full model
Combining these three components, the model (CL+b)*ex + pb gives us an expected number of counts/pixel across the field.
A Look at the Other XMM Products
The "exposure map" and the "particle background map" were supplied to us by the XMM reduction pipeline, along with the science image. Let's take a look at them now.
End of explanation
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: The "Exposure Map"
The ex image is in units of seconds, and represents the effective exposure time at each pixel position.
This is actually the product of the exposure time that the detector was exposed for, and a relative sensitivity map accounting for the vignetting of the telescope, dithering, and bad pixels whose data have been excised.
Displaying the exposure map on a linear scale makes the vignetting pattern and other features clear.
End of explanation
plt.imshow(pb, cmap='gray', origin='lower');
Explanation: The "Particle Background Map"
pb is not data at all, but rather a model for the expected counts/pixel in this specific observation due to the "quiescent particle background."
This map comes out of a blackbox in the processing pipeline. Even though there are surely uncertainties in it, we have no quantitative description of them to work with.
Note that the exposure map above does not apply to the particle backround; some particles are vignetted by the telescope optics, but not to the same degree as X-rays. The resulting spatial pattern and the total exposure time are accounted for in pb.
End of explanation
mask = np.loadtxt('a1835_xmm/M2ptsrc.txt')
for reg in mask:
# this is inefficient but effective
for i in np.round(reg[1]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
for j in np.round(reg[0]+np.arange(-np.ceil(reg[2]),np.ceil(reg[2]))):
if (i-reg[1])**2 + (j-reg[0])**2 <= reg[2]**2:
ex[np.int(i-1), np.int(j-1)] = 0.0
Explanation: Masking out the other sources
There are non-cluster sources in this field. To simplify the model-building exercise, we will crudely mask them out for the moment.
A convenient way to do this is by setting the exposure map to zero in these locations - as if a set of tiny little shutters in front of each of those pixels had not been opened. "Not observed" is different from "observed zero counts."
Let's read in a text file encoding a list of circular regions in the image, and set the exposure map pixels within each of those regions to zero.
End of explanation
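As a quick check of the masking step (a sketch), we can count how many exposure-map pixels are now zero; note this includes both the newly masked regions and any pixels that were already zero:
print((ex == 0.0).sum(), "zero-exposure pixels out of", ex.size)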
plt.imshow(ex, cmap='gray', origin='lower');
Explanation: As a sanity check, let's have a look at the modified exposure map.
Compare the location of the "holes" to the science image above.
End of explanation
# import cluster_pgm
# cluster_pgm.forward()
from IPython.display import Image
Image(filename="cluster_pgm_forward.png")
def beta_model_profile(r, S0, rc, beta):
'''
The fabled beta model, radial profile S(r)
'''
return S0 * (1.0 + (r/rc)**2)**(-3.0*beta + 0.5)
def beta_model_image(x, y, x0, y0, S0, rc, beta):
'''
Here, x and y are arrays ("meshgrids" or "ramps") containing x and y pixel numbers,
and the other arguments are galaxy cluster beta model parameters.
Returns a surface brightness image of the same shape as x and y.
'''
r = np.sqrt((x-x0)**2 + (y-y0)**2)
return beta_model_profile(r, S0, rc, beta)
def model_image(x, y, ex, pb, x0, y0, S0, rc, beta, b):
'''
Here, x, y, ex and pb are images, all of the same shape, and the other args are
cluster model and X-ray background parameters. ex is the (constant) exposure map
and pb is the (constant) particle background map.
'''
return (beta_model_image(x, y, x0, y0, S0, rc, beta) + b) * ex + pb
# Set up the ramp images, to enable fast array calculations:
nx,ny = ex.shape
x = np.outer(np.ones(ny),np.arange(nx))
y = np.outer(np.arange(ny),np.ones(nx))
fig,ax = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(x, cmap='gray', origin='lower')
ax[0].set_title('x')
fig.colorbar(left,ax=ax[0],shrink=0.9)
right = ax[1].imshow(y, cmap='gray', origin='lower')
ax[1].set_title('y')
fig.colorbar(right,ax=ax[1],shrink=0.9)
# Now choose parameters, compute model and plot, compared to data!
x0,y0 = 328,348 # The center of the image is 328,328
S0,b = 0.01,5e-7 # Cluster and background surface brightness, arbitrary units
beta = 2.0/3.0 # Canonical value is beta = 2/3
rc = 4 # Core radius, in pixels
# Realize the expected counts map for the model:
mu = model_image(x,y,ex,pb,x0,y0,S0,rc,beta,b)
# Draw a *sample image* from the Poisson sampling distribution:
mock = np.random.poisson(mu,mu.shape)
# The difference between the mock and the real data should be symmetrical noise if the model
# is a good match...
diff = im - mock
# Plot three panels:
fig,ax = plt.subplots(nrows=1, ncols=3)
fig.set_size_inches(15, 6)
plt.subplots_adjust(wspace=0.2)
left = ax[0].imshow(viz.scale_image(mock, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[0].set_title('Mock (log, rescaled)')
fig.colorbar(left,ax=ax[0],shrink=0.6)
center = ax[1].imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower')
ax[1].set_title('Data (log, rescaled)')
fig.colorbar(center,ax=ax[1],shrink=0.6)
right = ax[2].imshow(diff, vmin=-40, vmax=40, cmap='gray', origin='lower')
ax[2].set_title('Difference (linear)')
fig.colorbar(right,ax=ax[2],shrink=0.6)
Explanation: A Generative Model for the X-ray Image
All of the discussion above was in terms of predicting the expected number of counts in each pixel, $\mu_k$. This is not what we observe: we observe counts.
To be able to generate a mock dataset, we need to make an assumption about the form of the sampling distribution for the counts $N$ in each pixel, ${\rm Pr}(N_k|\mu_k)$.
Let's assume that this distribution is Poisson, since we expect X-ray photon arrivals to be "rare events."
${\rm Pr}(N_k|\mu_k) = \frac{{\rm e}^{-\mu_k} \mu_k^{N_k}}{N_k !}$
Here, $\mu_k(\theta)$ is the expected number of counts in the $k$th pixel:
$\mu_k(\theta) = \left( S(r_k;\theta) + b \right) \cdot$ ex + pb
Note that writing the sampling distribution like this contains the assumption that the pixels are independent (i.e., there is no cross-talk between the cuboids of silicon that make up the pixels in the CCD chip). (Also note that this assumption is different from the assumption that the expected numbers of counts are independent! They are explicitly not independent: we wrote down a model for a cluster surface brightness distribution that is potentially many pixels in diameter.)
At this point we can draw the PGM for a forward model of this dataset, using the exposure and particle background maps supplied, and some choices for the model parameters.
Then, we can go ahead and simulate some mock data and compare with the image we have.
End of explanation |
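One more rough check (a sketch, not part of the original notebook): with a sensible parameter choice, the total counts in the mock image should be of the same order as in the data.
print("total counts - data:", im.sum(), " mock:", mock.sum())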
8,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Relevant Parameters
An l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux.
Since this is passband dependent and only used for flux measurments - it does not yet exist for a new empty Bundle.
Step3: So let's add a LC dataset
Step4: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
Step5: l3_mode = 'flux'
When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.
Step6: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
Step7: l3_mode = 'fraction'
When l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter.
Step8: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
Step9: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
Step10: As expected, adding 5 W/m^3 of third light simply shifts the light curve up by that exact same amount.
Step11: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: "Third" Light
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.filter(qualifier='l3_mode')
Explanation: Relevant Parameters
An l3_mode parameter exists for each LC dataset, which determines whether third light will be provided in flux units, or as a fraction of the total flux.
Since this is passband dependent and only used for flux measurements - it does not yet exist for a new empty Bundle.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
Explanation: So let's add a LC dataset
End of explanation
print(b.filter(qualifier='l3*'))
Explanation: We now see that the LC dataset created an 'l3_mode' parameter, and since l3_mode is set to 'flux' the 'l3' parameter is also visible.
End of explanation
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3'))
Explanation: l3_mode = 'flux'
When l3_mode is set to 'flux', the l3 parameter defines (in flux units) how much extraneous light is added to the light curve in that particular passband/dataset.
End of explanation
print(b.compute_l3s())
Explanation: To compute the fractional third light from the provided value in flux units, call b.compute_l3s. This assumes that the flux of the system is the sum of the extrinsic passband luminosities (see the pblum tutorial for more details on intrinsic vs extrinsic passband luminosities) divided by $4\pi$ at t0@system, and according to the compute options.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
End of explanation
b.set_value('l3_mode', 'fraction')
print(b.filter(qualifier='l3*'))
print(b.get_parameter('l3_frac'))
Explanation: l3_mode = 'fraction'
When l3_mode is set to 'fraction', the l3 parameter is now replaced by an l3_frac parameter.
End of explanation
print(b.compute_l3s())
Explanation: Similarly to above, we can convert to actual flux units (under the same assumptions), by calling b.compute_l3s.
Note that calling compute_l3s is not necessary, as the backend will handle the conversion automatically.
End of explanation
b.run_compute(irrad_method='none', model='no_third_light')
b.set_value('l3_mode', 'flux')
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light')
Explanation: Influence on Light Curves (Fluxes)
"Third" light is simply additional flux added to the light curve from some external source - whether it be crowding from a background object, light from the sky, or an extra component in the system that is unaccounted for in the system hierarchy.
To see this we'll compare a light curve with and without "third" light.
End of explanation
afig, mplfig = b['lc01'].plot(model='no_third_light')
afig, mplfig = b['lc01'].plot(model='with_third_light', legend=True, show=True)
Explanation: As expected, adding 5 W/m^2 of third light simply shifts the light curve up by that exact same amount.
End of explanation
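As a numerical check (a sketch using the two models computed above), the synthetic light curves should differ by a constant offset equal to the l3 value:
fluxes_no = b.get_value(qualifier='fluxes', dataset='lc01', model='no_third_light')
fluxes_l3 = b.get_value(qualifier='fluxes', dataset='lc01', model='with_third_light')
print(np.allclose(fluxes_l3 - fluxes_no, 5.0))   # expected: True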
b.add_dataset('mesh', times=[0], dataset='mesh01', columns=['intensities@lc01', 'abs_intensities@lc01'])
b.set_value('l3', 0.0)
b.run_compute(irrad_method='none', model='no_third_light', overwrite=True)
b.set_value('l3', 5)
b.run_compute(irrad_method='none', model='with_third_light', overwrite=True)
print("no_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light abs_intensities: ", np.nanmean(b.get_value(qualifier='abs_intensities', component='primary', dataset='lc01', model='with_third_light')))
print("no_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='no_third_light')))
print("with_third_light intensities: ", np.nanmean(b.get_value(qualifier='intensities', component='primary', dataset='lc01', model='with_third_light')))
Explanation: Influence on Meshes (Intensities)
"Third" light does not affect the intensities stored in the mesh (including those in relative units). In other words, like distance, "third" light only scales the fluxes.
NOTE: this is different than pblums which DO affect the relative intensities. Again, see the pblum tutorial for more details.
To see this we can run both of our models again and look at the values of the intensities in the mesh.
End of explanation |
8,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Catapult Project - Australian Geoscience Datacube API
Perform band maths and produce a Normalised Difference Vegetation Index (NDVI) file.
Step1: Select the first time index and plot the first timeslice (only one timeslice in this example).
Step2: Plot a band just for fun
Step3: We can also select a range on the spatial dimensions.
Insert some rsgislib segmentation magic below
The xarray can be turned into a Dataset, which can then be saved across all of the timeseries to a NetCDF file.
Step4: Transposing dimensions
Merging two datasets
Step5: We will clip and scale the image to improve the contrast of the visible bands.
Step6: Behind the scenes
The ndvi result is performed by chaining a series of operations together on a per-chunk basis. The execution tree, including the masking on the nodata value done by the API, ends up being the same for each chunk. The graph can be read from the bottom up, with the ouput array chunks at the top.
(Double-click the tree-graph image below to zoom)
Step7: If we look at a single chunk, the NDVI calculation can be seen where the lines cross over to the add and sub circles.
Some optimizarion has taken place | Python Code:
from pprint import pprint
%matplotlib inline
from matplotlib import pyplot as plt
import xarray
import datacube.api
dc = datacube.api.API()
alos2 = dc.get_dataset(product='gamma0', platform='ALOS_2',
y=(-42.55,-42.57), x=(147.55,147.57),
variables=['hh_gamma0', 'hv_gamma0'])
s1a = dc.get_dataset(product='gamma0', platform='SENTINEL_1A',
y=(-42.55,-42.57), x=(147.55,147.57),
variables=['vh_gamma0', 'vv_gamma0'])
hh = alos2.hh_gamma0
hv = alos2.hv_gamma0
vh = s1a.vv_gamma0
vv = s1a.vv_gamma0
Explanation: Catapult Project - Australian Geoscience Datacube API
Perform band maths and produce a Normalised Difference Vegetation Index (NDVI) file.
End of explanation
hv.isel(time=0).plot.hist(bins=256, range=(0.0,1.0))
Explanation: Select the first time index and plot the first timeslice (only one timeslice in this example).
End of explanation
vv.isel(time=0).plot.imshow(cmap="spectral", clim=(0.0, 0.5))
Explanation: Plot a band just for fun
End of explanation
ds_vv = vv[:100,:100,:100].to_dataset(name='vv')
ds_vv.to_netcdf('vv.nc')
Explanation: We can also select a range on the spatial dimensions.
Insert some rsgislib segmentation magic below
The xarray can be turned into a Dataset, which can then be saved across all of the timeseries to a NetCDF file.
End of explanation
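A small sketch to confirm the round trip (xarray was imported above; this assumes vv.nc was written to the working directory):
reloaded = xarray.open_dataset('vv.nc')
print(reloaded)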
alos2_array = dc.get_dataset(product='gamma0', platform='ALOS_2',
y=(-42.55,-42.57), x=(147.55,147.57),
variables=['hh_gamma0', 'hv_gamma0'])
s1a_array = dc.get_dataset(product='gamma0', platform='SENTINEL_1A',
y=(-42.55,-42.57), x=(147.55,147.57),
variables=['vh_gamma0', 'vv_gamma0'])
sar_array = s1a_array.merge(alos2_array)
print (sar_array)
Explanation: Transposing dimensions
Merging two datasets
End of explanation
fake_saturation = 0.05
clipped_sar = sar_array.where(sar_array<fake_saturation).fillna(fake_saturation)
max_val = clipped_sar.max(['y', 'x'])
scaled = (clipped_sar / max_val)
rgb = scaled.transpose('time', 'y', 'x')
rgb.dims
plt.imshow(rgb.isel(time=0))
import matplotlib.image
matplotlib.image.imsave('vv_hh_vh.png', rgb.isel(time=16))
Explanation: We will clip and scale the image to improve the contrast of the visible bands.
End of explanation
vv.data.visualize()  # render the dask task graph behind the lazily evaluated array
Explanation: Behind the scenes
The ndvi result is computed by chaining a series of operations together on a per-chunk basis. The execution tree, including the masking on the nodata value done by the API, ends up being the same for each chunk. The graph can be read from the bottom up, with the output array chunks at the top.
(Double-click the tree-graph image below to zoom)
End of explanation
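The per-chunk structure the graph is built from can also be inspected directly (a sketch, assuming the arrays are dask-backed as described above):
print(vv.data)         # the lazy dask array behind the xarray object
print(vv.data.chunks)  # chunk layout that determines the shape of the task graph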
partial = vv[0,0,0]
partial.data.visualize(optimize_graph=True)
Explanation: If we look at a single chunk, the NDVI calculation can be seen where the lines cross over to the add and sub circles.
Some optimization has taken place: the div operation has been combined with another inline function, and the other chunks have been discarded.
End of explanation |
8,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: After the import command, we now have access to a large number of pre-built classes and functions. This assumes the library is installed; in our lab environment all the necessary libraries are installed. One way pandas allows you to work with data is a dataframe. Let's go through the process to go from a comma separated values (.csv ) file to a dataframe. This variable csv_path stores the path of the .csv ,that is used as an argument to the read_csv function. The result is stored in the object df, this is a common short form used for a variable referring to a Pandas dataframe.
Step2: <div class="alert alert-block alert-info" style="margin-top
Step3: The process for loading an excel file is similar, we use the path of the excel file and the function read_excel. The result is a data frame as before
Step4: We can access the column "Length" and assign it a new dataframe 'x'
Step5: The process is shown in the figure
Step6: You can also assign different columns, for example, we can assign the column 'Artist'
Step7: Assign the variable 'q' to the dataframe that is made up of the column 'Rating'
Step8: <div align="right">
<a href="#q1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q1" class="collapse">
```
q=df[['Rating']]
q
```
</div>
You can do the same thing for multiple columns; we just put the dataframe name, in this case, df, and the name of the multiple column headers enclosed in double brackets. The result is a new dataframe comprised of the specified columns
Step9: The process is shown in the figure
Step10: Assign the variable 'q' to the dataframe that is made up of the column 'Released' and 'Artist'
Step11: You can access the 2nd row and first column as follows
Step12: You can access the 1st row 3rd column as follows
Step13: Access the 2nd row 3rd column
Step14: <div align="right">
<a href="#q3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q3" class="collapse">
```
df.ix[1,2]
or df.iloc[0,2]
```
</div>
You can access the column using the name as well, the following are the same as above
Step15: You can perform slicing using both the index and the name of the column | Python Code:
import pandas as pd
Explanation: <a href="http://cocl.us/topNotebooksPython101Coursera"><img src = "https://ibm.box.com/shared/static/yfe6h4az47ktg2mm9h05wby2n7e8kei3.png" width = 750, align = "center"></a>
<a href="https://www.bigdatauniversity.com"><img src = "https://ibm.box.com/shared/static/ugcqz6ohbvff804xp84y4kqnvvk3bq1g.png" width = 300, align = "center"></a>
<h1 align=center><font size = 5>Introduction to Pandas Python</font></h1>
Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
<li><a href="#ref0">About the Dataset</a></li>
<li><a href="#ref1">Importing Data</a></p></li>
<li><a href="#ref2">Viewing Data and Accessing Data </a></p></li>
<br>
<p></p>
Estimated Time Needed: <strong>15 min</strong>
</div>
<hr>
<a id="ref0"></a>
<h2 align=center>About the Dataset</h2>
The table has one row for each album and several columns
artist - Name of the artist
album - Name of the album
released_year - Year the album was released
length_min_sec - Length of the album (hours,minutes,seconds)
genre - Genre of the album
music_recording_sales_millions - Music recording sales (millions in USD) on SONG://DATABASE
claimed_sales_millions - Album's claimed sales (millions in USD) on SONG://DATABASE
date_released - Date on which the album was released
soundtrack - Indicates if the album is the movie soundtrack (Y) or (N)
rating_of_friends - Indicates the rating from your friends from 1 to 10
<br>
You can see the dataset here:
<font size="1">
<table font-size:xx-small style="width:25%">
<tr>
<th>Artist</th>
<th>Album</th>
<th>Released</th>
<th>Length</th>
<th>Genre</th>
<th>Music recording sales (millions)</th>
<th>Claimed sales (millions)</th>
<th>Released</th>
<th>Soundtrack</th>
<th>Rating (friends)</th>
</tr>
<tr>
<td>Michael Jackson</td>
<td>Thriller</td>
<td>1982</td>
<td>00:42:19</td>
<td>Pop, rock, R&B</td>
<td>46</td>
<td>65</td>
<td>30-Nov-82</td>
<td></td>
<td>10.0</td>
</tr>
<tr>
<td>AC/DC</td>
<td>Back in Black</td>
<td>1980</td>
<td>00:42:11</td>
<td>Hard rock</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td></td>
<td>8.5</td>
</tr>
<tr>
<td>Pink Floyd</td>
<td>The Dark Side of the Moon</td>
<td>1973</td>
<td>00:42:49</td>
<td>Progressive rock</td>
<td>24.2</td>
<td>45</td>
<td>01-Mar-73</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Whitney Houston</td>
<td>The Bodyguard</td>
<td>1992</td>
<td>00:57:44</td>
<td>Soundtrack/R&B, soul, pop</td>
<td>26.1</td>
<td>50</td>
<td>25-Jul-80</td>
<td>Y</td>
<td>7.0</td>
</tr>
<tr>
<td>Meat Loaf</td>
<td>Bat Out of Hell</td>
<td>1977</td>
<td>00:46:33</td>
<td>Hard rock, progressive rock</td>
<td>20.6</td>
<td>43</td>
<td>21-Oct-77</td>
<td></td>
<td>7.0</td>
</tr>
<tr>
<td>Eagles</td>
<td>Their Greatest Hits (1971-1975)</td>
<td>1976</td>
<td>00:43:08</td>
<td>Rock, soft rock, folk rock</td>
<td>32.2</td>
<td>42</td>
<td>17-Feb-76</td>
<td></td>
<td>9.5</td>
</tr>
<tr>
<td>Bee Gees</td>
<td>Saturday Night Fever</td>
<td>1977</td>
<td>1:15:54</td>
<td>Disco</td>
<td>20.6</td>
<td>40</td>
<td>15-Nov-77</td>
<td>Y</td>
<td>9.0</td>
</tr>
<tr>
<td>Fleetwood Mac</td>
<td>Rumours</td>
<td>1977</td>
<td>00:40:01</td>
<td>Soft rock</td>
<td>27.9</td>
<td>40</td>
<td>04-Feb-77</td>
<td></td>
<td>9.5</td>
</tr>
</table>
</font>
<a id="ref1"></a>
<h2 align=center> Importing Data </h2>
We can import a library (or dependency) like Pandas using the following command:
End of explanation
csv_path='https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/top_selling_albums.csv'
df = pd.read_csv(csv_path)
Explanation: After the import command, we now have access to a large number of pre-built classes and functions. This assumes the library is installed; in our lab environment all the necessary libraries are installed. One way pandas allows you to work with data is a dataframe. Let's go through the process to go from a comma separated values (.csv ) file to a dataframe. This variable csv_path stores the path of the .csv ,that is used as an argument to the read_csv function. The result is stored in the object df, this is a common short form used for a variable referring to a Pandas dataframe.
End of explanation
df.head()
Explanation: <div class="alert alert-block alert-info" style="margin-top: 20px">
<a href="http://cocl.us/object_storage_corsera"><img src = "https://ibm.box.com/shared/static/6qbj1fin8ro0q61lrnmx2ncm84tzpo3c.png" width = 750, align = "center"></a>
We can use the method **head()** to examine the first five rows of a dataframe:
End of explanation
#dependency needed to read the excel file
!pip install xlrd
xlsx_path='https://ibm.box.com/shared/static/mzd4exo31la6m7neva2w45dstxfg5s86.xlsx'
df = pd.read_excel(xlsx_path)
df.head()
Explanation: The process for loading an excel file is similar, we use the path of the excel file and the function read_excel. The result is a data frame as before:
End of explanation
x=df[['Length']]
x
Explanation: We can access the column "Length" and assign it a new dataframe 'x':
End of explanation
x=df['Length']
x
Explanation: The process is shown in the figure:
<img src = "https://ibm.box.com/shared/static/bz800py5ui4w0kpb0k09lq3k5oegop5v.png" width = 750, align = "center"></a>
<a id="ref2"></a>
<h2 align=center> Viewing Data and Accessing Data </h2>
You can also assign the value to a series; you can think of a Pandas series as a 1-D dataframe. Just use one bracket:
End of explanation
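The practical difference between the single- and double-bracket forms is the type of the result. A short sketch, using a small toy dataframe rather than the album data so it runs on its own:
```
# Single brackets return a Series; double brackets return a DataFrame
import pandas as pd

df_demo = pd.DataFrame({'Length': ['00:42:19', '00:42:11'],
                        'Artist': ['Michael Jackson', 'AC/DC']})

s = df_demo['Length']      # pandas Series (1-D)
d = df_demo[['Length']]    # pandas DataFrame with one column (2-D)

print(type(s))   # <class 'pandas.core.series.Series'>
print(type(d))   # <class 'pandas.core.frame.DataFrame'>
```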
x=df[['Artist']]
x
Explanation: You can also assign different columns, for example, we can assign the column 'Artist':
End of explanation
q = df[['Rating']]
q
Explanation: Assign the variable 'q' to the dataframe that is made up of the column 'Rating':
End of explanation
y=df[['Artist','Length','Genre']]
y
Explanation: <div align="right">
<a href="#q1" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q1" class="collapse">
```
q=df[['Rating']]
q
```
</div>
You can do the same thing for multiple columns; we just put the dataframe name, in this case, df, and the name of the multiple column headers enclosed in double brackets. The result is a new dataframe comprised of the specified columns:
End of explanation
print(df[['Album','Released','Length']])
q = df[['Album','Released']]
q
Explanation: The process is shown in the figure:
<img src = "https://ibm.box.com/shared/static/dh9duk3ucuhmmmbixa6ugac6g384m5sq.png" width = 1100, align = "center"></a>
End of explanation
#**ix** will be deprecated, use **iloc** for integer indexes
#df.ix[0,0]
df.iloc[0,0]
Explanation: Assign the variable 'q' to the dataframe that is made up of the column 'Released' and 'Artist':
<div align="right">
<a href="#q2" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q2" class="collapse">
```
q=df[['Released','Artist']]
q
```
</div>
One way to access unique elements is the 'ix' method (deprecated in favour of 'iloc' for integer positions), where you can access the 1st row and 1st column as follows:
End of explanation
#**ix** will be deprecated, use **iloc** for integer indexes
#df.ix[1,0]
df.iloc[1,0]
Explanation: You can access the 2nd row and first column as follows:
End of explanation
#**ix** will be deprecated, use **iloc** for integer indexes
#df.ix[0,2]
df.iloc[0,2]
Explanation: You can access the 1st row 3rd column as follows:
End of explanation
df.iloc[1, 2]
Explanation: Access the 2nd row 3rd column:
End of explanation
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[0,'Artist']
df.loc[0,'Artist']
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[1,'Artist']
df.loc[1,'Artist']
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[0,'Released']
df.loc[0,'Released']
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[1,'Released']
df.loc[1,'Released']
#**ix** will be deprecated, use **iloc** for integer indexes
#df.ix[1,2]
df.iloc[1,2]
Explanation: <div align="right">
<a href="#q3" class="btn btn-default" data-toggle="collapse">Click here for the solution</a>
</div>
<div id="q3" class="collapse">
```
df.ix[1,2]
or df.iloc[1,2]
```
</div>
You can access the column using the name as well; the following are the same as above:
End of explanation
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[0:2, 0:3]
df.iloc[0:2, 0:3]
#**ix** will be deprecated, use **loc** for label-location based indexer
#df.ix[0:2, 'Artist':'Released']
df.loc[0:2, 'Artist':'Released']
Explanation: You can perform slicing using both the index and the name of the column:
End of explanation |
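One subtlety worth noting in the two slices above: iloc follows Python's usual half-open slicing (the stop position is excluded), while loc slices by label and includes the end label. A quick sketch with a toy frame:
```
import pandas as pd

demo = pd.DataFrame({'Artist': ['A', 'B', 'C'],
                     'Album': ['x', 'y', 'z'],
                     'Released': [1980, 1982, 1973]})

print(demo.iloc[0:2])                      # positional: rows 0 and 1 (stop index 2 excluded)
print(demo.loc[0:2])                       # label-based: rows 0, 1 AND 2 (end label included)
print(demo.loc[0:2, 'Artist':'Released'])  # column labels are also inclusive of the end label
```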
8,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyJHTDB fails to compile on Windows. One alternative is to use the zeep package.
More details can be found at http://turbulence.pha.jhu.edu/service/turbulence.asmx
Step1: In GetData_Python, Function_name could be
GetVelocity, GetMagneticField, GetVectorPotential,
GetVelocityGradient, GetMagneticFieldGradient, GetVectorPotentialGradient,
GetVelocityHessian, GetMagneticHessian, GetVectorPotentialHessian,
GetVelocityLaplacian, GetMagneticFieldLaplacian, GetVectorPotentialLaplacian,
GetPressure, GetTemperature, GetDensity,
GetPressureGradient, GetTemperatureGradient, GetDensityGradient,
GetPressureHessian, GetTemperatureHessian, GetDensityHessian,
GetVelocityAndPressure, GetVelocityAndTemperature, GetForce, GetInvariant
Step2: In GetPosition_Python, Function_name could be
GetPosition only.
Step3: In GetFilter_Python, Function_name could be
GetBoxFilter, GetBoxFilterSGSscalar, GetBoxFilterSGSvector,
GetBoxFilterSGSsymtensor, GetBoxFilterSGStensor, GetBoxFilterGradient. | Python Code:
import zeep
import numpy as np
client = zeep.Client('http://turbulence.pha.jhu.edu/service/turbulence.asmx?WSDL')
ArrayOfFloat = client.get_type('ns0:ArrayOfFloat')
ArrayOfArrayOfFloat = client.get_type('ns0:ArrayOfArrayOfFloat')
SpatialInterpolation = client.get_type('ns0:SpatialInterpolation')
TemporalInterpolation = client.get_type('ns0:TemporalInterpolation')
token="edu.jhu.pha.turbulence.testing-201406" #replace with your own token
nnp=5 #number of points
points=np.random.rand(nnp,3)
# convert to JHTDB structures
x_coor=ArrayOfFloat(points[:,0].tolist())
y_coor=ArrayOfFloat(points[:,1].tolist())
z_coor=ArrayOfFloat(points[:,2].tolist())
point=ArrayOfArrayOfFloat([x_coor,y_coor,z_coor]);
print(points)
Explanation: pyJHTDB fails to compile on Windows. One alternative is to use the zeep package.
More details can be found at http://turbulence.pha.jhu.edu/service/turbulence.asmx
End of explanation
Function_name="GetVelocityGradient"
time=0.6
number_of_component=9 # change this based on function_name, see http://turbulence.pha.jhu.edu/webquery/query.aspx
result=client.service.GetData_Python(Function_name, token,"isotropic1024coarse", 0.6,
SpatialInterpolation("None_Fd4"), TemporalInterpolation("None"), point)
result=np.array(result).reshape((-1, number_of_component))
print(result)
Explanation: In GetData_Python, Function_name could be
GetVelocity, GetMagneticField, GetVectorPotential,
GetVelocityGradient, GetMagneticFieldGradient, GetVectorPotentialGradient,
GetVelocityHessian, GetMagneticHessian, GetVectorPotentialHessian,
GetVelocityLaplacian, GetMagneticFieldLaplacian, GetVectorPotentialLaplacian,
GetPressure, GetTemperature, GetDensity,
GetPressureGradient, GetTemperatureGradient, GetDensityGradient,
GetPressureHessian, GetTemperatureHessian, GetDensityHessian,
GetVelocityAndPressure, GetVelocityAndTemperature, GetForce, GetInvariant
End of explanation
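The number_of_component value has to match the quantity being queried. The exact counts are documented on the JHTDB web query page linked above; the small helper below is a sketch with a few commonly used values, which should be treated as an assumption rather than a verified, exhaustive table.
```
# Hypothetical helper: map a few Function_name values to components per point
# (values assumed from the JHTDB web query documentation).
components = {
    "GetVelocity": 3,
    "GetVelocityGradient": 9,
    "GetPressure": 1,
    "GetPressureGradient": 3,
    "GetVelocityAndPressure": 4,
}

Function_name = "GetVelocity"
number_of_component = components[Function_name]
print(Function_name, "->", number_of_component, "components per point")
```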
Function_name="GetPosition"
startTime=0.1
endTime=0.2
dt=0.02
number_of_component=3 # change this based on function_name, see http://turbulence.pha.jhu.edu/webquery/query.aspx
result=client.service.GetPosition_Python(Function_name, token,"isotropic1024coarse", startTime, endTime, dt,
SpatialInterpolation("None"), point)
result=np.array(result).reshape((-1, number_of_component))
print(result)
Explanation: In GetPosition_Python, Function_name could be
GetPosition only.
End of explanation
Function_name="GetBoxFilter" #could also be
field="u"
time=0.6
filterwidth=0.05
spacing=0 #spacing is only used in GetBoxFilterGradient, but always provide it.
number_of_component=3 # change this based on function_name, see http://turbulence.pha.jhu.edu/webquery/query.aspx
result=client.service.GetFilter_Python("GetBoxFilter",token,"isotropic1024coarse", field,
time, filterwidth, SpatialInterpolation("None"), point, spacing)
result=np.array(result).reshape((-1, number_of_component))
print(result)
import struct
import base64
field="u"
timestep=1
x_start=1
y_start=1
z_start=1
x_end=2
y_end=5
z_end=8
x_step=1
y_step=1
z_step=1
filter_width=0
result=client.service.GetAnyCutoutWeb(token,"isotropic1024coarse", field, timestep,
x_start, y_start, z_start, x_end, y_end, z_end,
x_step, y_step, z_step, filter_width, "") # put empty string for the last parameter
# transfer base64 format to numpy
number_of_component=3 # change this based on the field
nx=len(range(x_start, x_end+1, x_step))
ny=len(range(y_start, y_end+1, y_step))
nz=len(range(z_start, z_end+1, z_step))
base64_len=int(nx*ny*nz*number_of_component)
base64_format='<'+str(base64_len)+'f'
result=struct.unpack(base64_format, result)
result=np.array(result).reshape((nz, ny, nx, number_of_component))
print(result.shape) # see the shape of the result and compare it with nx, ny, nz and number of component
Explanation: In GetFilter_Python, Function_name could be
GetBoxFilter, GetBoxFilterSGSscalar, GetBoxFilterSGSvector,
GetBoxFilterSGSsymtensor, GetBoxFilterSGStensor, GetBoxFilterGradient.
End of explanation |
8,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Material Science Tensile Tests
In an engineering tensile stress test, a given specimen of cross-sectional area $A_{o}$ is subjected to a given load $P$ under tension. The stress $\sigma$ generated by load $P$ over cross-sectional area $A_{o}$ is expressed as follows
Step1: Steel 4142
Step2: $\epsilon$ Data for Steel 4142
Step3: _$\sigma$ Data for Steel 4142 _
Step4: _$E$ Value for Steel 4142 _
Step5: Figure 1
Step6: $\sigma_{y}$ and $\sigma_{u}$ values for Steel 4142
Step7: _Ductility Value for Steel 4142 _
Step8: Aluminum 6061
Step9: $\epsilon$ Data for Aluminium 6061
Step10: $\sigma$ Data for Aluminum 6061
Step11: $E$ Value for Aluminum 6061
Step12: Figure 2
Step13: $\sigma_{y}$ and $\sigma_{u}$ values for Aluminum 6061
Step14: Ductility Value for Aluminum 6061
Step15: Table 1
| | <center>Steel 4142</center> | <center>Reference Steel 4142</center> | <center>Aluminum 6061</center> | <center>Reference Aluminum 6061</center> |
|----------------------------|------------------------|------------------------------|------------------------|------------------------------|
|Elastic Modulus (Ksi) | <center>29334</center>| <center>29900</center> | <center>9731</center> | <center>10000</center>|
|Yield Strength (Ksi) | <center>91.7</center> | <center>140</center> | <center>46.2</center> | <center>40.0</center> |
|Tensile Strength (Ksi)| <center>95.6</center> | <center>204.8</center> | <center>47.0</center> | <center>45.0</center> |
|%Ductility | <center>17.4</center> | <center>16.0</center> | <center>19.6</center> | <center>17.0</center> |
Figure 3 | Python Code:
import numpy as np # imports the numpy package and creates the alias np for broader control of vector arrays
import pandas as pd # imports the pandas package and creates the alias pd to work with data tables and lists
import matplotlib.pyplot as plt # imports the matplotlib.pyplot package and creates the alias plt for plotting superiority
import matplotlib.gridspec as gridspec # imports the matplotlib.gridspec package as the alias gridspec for more customizable subplot arrangement
import statistics as st # optional import: imports the statistics package as the alias st for statistical analysis
from scipy.stats import linregress # imports linregress as the means to generate the slope between two given vectors of the same size
from IPython.display import Markdown # optional line: imports markdown formatting tools
df_st = pd.read_excel('Steel_4142.xls') # uses the .read_excel method to extract the Steel data for excel
df_al = pd.read_excel('Al_6061.xls') # same but now its aluminum instead
Explanation: Material Science Tensile Tests
In an engineering tensile stress test, a given specimen of cross-sectional area $A_{o}$ is subjected to a given load $P$ under tension. The stress $\sigma$ generated by load $P$ over cross-sectional area $A_{o}$ is expressed as follows:
$$\sigma = \frac{P}{A_{o}}$$
When the stresses on the test sample do not exceed the elastic limit${^1}$ the resulting strain $\epsilon$ is proportional to the stress $\sigma$:
$$\sigma = E\epsilon$$
However, if the stresses on the member do exceed the yield strength${^2}$ of the material, the linear relationship between stress and strain no longer holds and the specimen will begin to plastically deform. If the load $P$ continues to increase, the ultimate tensile strength${^3}$ will be reached and ultimately the specimen will fracture.
<br>
${^1}$The region in which a material will undergo strain without plastically deforming. No permanent deformation is observed.
${^2}$The strength of the material denoting the end of the elastic region and the beginning of the plastic region. Tensile stress at this point is $\sigma_{y}$
${^3}$The maximum strength of the material. Tensile stress at that point is called the ultimate tensile stress and denoted by $\sigma_{u}$, $\sigma_{TS}$ or $\sigma_{UTS}$
End of explanation
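As a quick worked example of the first formula, here is a short sketch that uses the same 0.5071 in specimen diameter that appears in the code below and an assumed load of 19 kip (an illustrative value only, not a measurement from this test):
```
# Sketch: engineering stress for an assumed load on the steel specimen
import numpy as np

P = 19.0                      # assumed load in kip (1000 lbf) -- illustrative value only
d = 0.5071                    # specimen diameter in inches (same as d_steel below)
Ao = np.pi * (d / 2) ** 2     # cross-sectional area in in^2
sigma = P / Ao                # engineering stress in ksi
print('Ao = %.4f in^2, sigma = %.1f ksi' % (Ao, sigma))
```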
#print(df_st.to_string()) # optional execution. prints the entire excel steel data sheet on the screen
Explanation: Steel 4142
End of explanation
# for steel: generating and storing the strain vector
strain_steel = df_st['EXT'][2:166]*0.01 # extracts the strain data from the excel data sheet
st_strain_value = pd.Series([0.195509],index=[166]) # adds the ultimate strain value to the list
strain_steel = strain_steel.append(st_strain_value) # generates a new strain variable with all the strain data needed to generate the plot
#strain_steel # optional execution, check to see the value/s store in the variable
Explanation: $\epsilon$ Data for Steel 4142
End of explanation
# generating and storing the stress vector
d_steel = 0.5071 # diameter of specimen
Ao_steel = np.pi*(d_steel/2)**2 # Area of the cross-section
#print(A0_steel) # optional execution, check to see the value/s store in the variable
stress_steel = df_st['FORCE'][2:166]*(0.001/Ao_steel) # extracts the force data from the excel data sheet and converts it to stress
st_stress_value = pd.Series([55.854765],index=[166]) # adds the breaking stress value to the list
stress_steel = stress_steel.append(st_stress_value) # generates a new strain variable with all the stress data needed to generate the plot
#stress_steel # optional execution, check to see the value/s store in the variable
Explanation: _$\sigma$ Data for Steel 4142_
End of explanation
# extracting the slope (elastic modulus E) from the stress vs strain plot in the elastic region
ms = linregress(strain_steel[35:102], stress_steel[35:102]) # the linregress function yields the value of the slope as the first index in the vector
#print(ms) # optional execution, check to see the value/s store in the variable
Es = ms[0] # calls the value of the first index in the vector ms and stores it in Es
#print(Es) # optional execution, check to see the value/s store in the variable
stress_steel_offs = Es*(strain_steel - 0.002) # yields the linear relationship that denotes the elastic region in the form y = m(x-a)
print('The Elastic modulus of Steel 4142 is %d ksi' % Es ) # prints the input line
Explanation: _$E$ Value for Steel 4142_
End of explanation
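A note on linregress: besides indexing the result like a tuple (ms[0] for the slope, as above), the returned object also exposes named attributes, which can read more clearly. A small self-contained sketch with made-up data:
```
# linregress returns (slope, intercept, rvalue, pvalue, stderr); named access is equivalent
from scipy.stats import linregress
import numpy as np

x = np.linspace(0, 0.01, 50)
y = 29000 * x + np.random.normal(0, 0.5, x.size)   # synthetic elastic-region data

fit = linregress(x, y)
print(fit.slope, fit.intercept, fit.rvalue**2)     # same values as fit[0], fit[1], fit[2]**2
```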
# generating a plot containing the elastic region
#%matplotlib inline # optional execution: keeps the plot inline instead of exporting it to a separate window
%matplotlib nbagg
plt.rcParams['figure.figsize'] = (9,7) # determines the dimensions (size of the figure)
plt.plot(strain_steel, stress_steel, # plots the stress-strain curve and the linear offset (dashed line) dennoting the elastic reagion
strain_steel, stress_steel_offs,'k--')
plt.xlabel('Strain $\epsilon$ (in/in)') # labels the x axis
plt.ylabel('Stress $\sigma$ (ksi)') # labels the y axis
plt.title('Steel 4142 Elastic Region') # titles the plot
plt.axis([0,0.03,0,110]) # sets the x and y axis to enphasize the elastic reagion
plt.legend(["Steel 4142 ($\sigma_{y}$)","0.2% off-set"], # generates a legend for the plot
loc='lower right')
plt.tight_layout() # it ensures the plot will print within the grid. Prevents the plot from getting cut-off
plt.savefig('steel.png') # saves a picture of the graph in .png or .jpeg format
Explanation: Figure 1
End of explanation
# assigning the values for yield and tensile stress
steel_yield = 91.6517 # stores the value of the yield stress in the steel_yield variable
steel_ystrain = 0.00512437 # stores the value of the strain corresponding to the yield strength
steel_tensile = max(stress_steel) # the max() function yields the max value of stress in the stress_steel vector
print('The Yield and Tensile Strength of Steel 4142 are %.1f ksi and %.1f ksi respectively' # prints the input line
%(steel_yield, steel_tensile))
Explanation: $\sigma_{y}$ and $\sigma_{u}$ values for Steel 4142
End of explanation
# generating and storing the value for ductility
steel_lo =1.9783 # stores the value of the initial length between the reference marks on the test member
steel_lf =2.3216 # stores the value of the final lenght between the reference marks after fracture
Ductility_steel =((steel_lf - steel_lo)/steel_lo)*100 # calculates and stores the value of ductility
print('The ductility of Steel 4142 is %.1f '% (Ductility_steel)) # prints the input line
Explanation: _Ductility Value for Steel 4142_
End of explanation
#print(df_al.to_string()) # optional execution. prints the entire excel aluminum data sheet on the screen
Explanation: Aluminum 6061
End of explanation
# for aluminum: generating and storing the strain vector
strain_al = df_al["EXT"][1:172]*0.01 # extracts the strain data from the excel data sheet
al_strain_value = pd.Series([0.173532],index=[172]) # adds the ultimate strain value to the list
strain_al = strain_al.append(al_strain_value) # generates a new strain variable with all the strain data needed to genreate the plot
#strain_al # optional execution, check to see the value/s store in the variable
Explanation: $\epsilon$ Data for Aluminium 6061
End of explanation
# generating and storing the stress vector
d_al = 0.5065 # diameter of specimen
Ao_al = np.pi*(d_al/2)**2 # Area of the cross-section
stress_al = df_al["FORCE"][1:172]*(0.001/Ao_al) # extracts the force data from the excel data sheet and converts it to stress
al_stress_value = pd.Series([29.107393],index=[172]) # adds the breaking stress value to the list
stress_al = stress_al.append(al_stress_value) # generates a new strain variable with all the stress data needed to generate the plot
#stress_al # optional execution, check to see the value/s store in the variable
Explanation: $\sigma$ Data for Aluminum 6061
End of explanation
#extracting the slope (elastic modulus E) from the stress vs strain plot in the elastic region
ma = linregress(strain_al[25:85], stress_al[25:85]) # the linregress function yields the value of the slope as the first index in the vector
#print(ma) # optional execution, check to see the value/s store in the variable
Ea = ma[0] # calls the value of the first index in the vector ms and stores it in Es
#print(Ea) # optional execution, check to see the value/s store in the variable
stress_al_offs = Ea*(strain_al - 0.002) # yields the linear relationship that denotes the elastic region in the form y = m(x-a)
print('The Elastic Modulus of Aluminum 6061 is %d ksi' %Ea) # prints the input line
Explanation: $E$ Value for Aluminum 6061
End of explanation
# generating a plot containing the elastic region
#%matplotlib inline # optional execution: keeps the plot inline instead of exporting it to a separate window
%matplotlib nbagg
plt.rcParams['figure.figsize'] = (9,7) # determines the dimensions (size of the figure)
plt.plot(strain_al, stress_al,'C1', strain_al, stress_al_offs,'k--') # plots the stress-strain curve and the linear offset (dashed line) dennoting the elastic reagion
plt.xlabel('Strain $\epsilon$ (in/in)') # labels the x axis
plt.ylabel('Stress $\sigma$ (ksi)') # labels the y axis
plt.title('Aluminum 6061 Elastic Region') # titles the plot
plt.legend(["Aluminum 6061 ($\sigma_{y}$)","0.2% off-set"], # generates a legend for the plot
loc='lower right')
plt.axis([0,0.03,0,55]) # sets the x and y axis to enphasize the elastic reagion
plt.tight_layout() # it ensures the plot will print within the grid. Prevents the plot from getting cut-off
plt.savefig('alu.png') # saves a picture of the graph in .png or .jpeg format
Explanation: Figure 2
End of explanation
# assigning the values for yield and tensile stress
al_yield = 46.1668 # stores the value of the yield stress in the al_yield variable
al_ystrain = 0.00674403 # stores the value of the strain corresponding to the yield strength
al_tensile = max(stress_al) # the max() function yields the max value of stress in the stress_al vector
print('The Yield and Tensile strength of Aluminum 6061 are %.1f ksi and %d ksi respectively' # prints the input line
%(al_yield, al_tensile))
Explanation: $\sigma_{y}$ and $\sigma_{u}$ values for Aluminum 6061
End of explanation
al_Lo =1.9866 # in # stores the value of the initial length between the reference marks on the test member
al_Lf =2.375 # in # stores the value of the final lenght between the reference marks after fracture
Ductility_al =((al_Lf - al_Lo)/al_Lo)*100 # calculates and stores the value of ductility
print('The ductility of Aluminum 6061 is %.1f percent' # prints the input line
% (Ductility_al))
Explanation: Ductility Value for Aluminum 6061
End of explanation
%matplotlib inline
plt.rcParams['figure.figsize'] = (16,7) # determines the dimensions (size of the figure)
fig = plt.figure()
grid = plt.GridSpec(2, 3, wspace=0.4, hspace=0.5) #create a 2 row, 3 col grid
ax1 = plt.subplot(grid[:2, :2]) # subplot occupies rows 1-2, cols 1-2
ax2 = plt.subplot(grid[0, 2]) # subplot occupies row 1, col 3
ax3 = plt.subplot(grid[1, 2]) # subplot occupies row 2, col 3
ax1.plot(strain_steel, stress_steel, # plots the stress-strain curves and highlights the yield strength with red dots ( using the flag 'ro')
strain_al, stress_al,
steel_ystrain, steel_yield, 'ro',
al_ystrain, al_yield, 'ro') # marks the aluminum yield point with a red dot ('ro')
ax1.xaxis.set_label_text('Strain $\epsilon$ (in/in)') # labels the x axis
ax1.yaxis.set_label_text('Stress $\sigma$ (ksi)') # labels the y axis
ax1.set_title('Stress-Strain Curve Of Steel 4142 And Aluminum 6061') # titles the plot
ax1.legend(['Steel 4142','Aluminum 6061',"Yield ($\sigma_{y}$)"]) # generates a legend
ax2.plot(strain_steel, stress_steel, strain_steel, stress_steel_offs,'k--') # plots the steel stress-strain curve with the 0.2% off-set as a dashed line (using the flag'k--')
ax2.xaxis.set_label_text('Strain $\epsilon$ (in/in)') # labels the x axis
ax2.yaxis.set_label_text('Stress $\sigma$ (ksi)') # labels the y axis
ax2.set_title('Steel 4142 Elastic Region') # titles the plot
ax2.axis([0,0.02,0,110]) # sets the x and y limits to enphasize the elastic reagion
ax2.legend(["Steel 4142 ($\sigma_{y}$)","0.2% off-set"], # generates a legend for the plot
loc='lower right')
ax3.plot(strain_al, stress_al,'C1', strain_al, stress_al_offs,'k--') # plots the aluminum stress-strain curve with the 0.2% off-set as a dashed line (using the flag'k--')
ax3.xaxis.set_label_text('Strain $\epsilon$ (in/in)') # labels the x axis
ax3.yaxis.set_label_text('Stress $\sigma$ (ksi)') # labels the y axis
ax3.set_title('Aluminum 6061 Elastic Region') # titles the plot
ax3.axis([0,0.02,0,55]) # sets the x and y limits to enphasize the elastic reagion
ax3.legend(["Aluminum 6061 ($\sigma_{y}$)","0.2% off-set"], # generates a legend for the plot
loc='lower right')
#plt.savefig("run.png")
plt.show()
Explanation: Table 1
| | <center>Steel 4142</center> | <center>Reference Steel 4142</center> | <center>Aluminum 6061</center> | <center>Reference Aluminum 6061</center> |
|----------------------------|------------------------|------------------------------|------------------------|------------------------------|
|Elastic Modulus (Ksi) | <center>29334</center>| <center>29900</center> | <center>9731</center> | <center>10000</center>|
|Yield Strength (Ksi) | <center>91.7</center> | <center>140</center> | <center>46.2</center> | <center>40.0</center> |
|Tensile Strength (Ksi)| <center>95.6</center> | <center>204.8</center> | <center>47.0</center> | <center>45.0</center> |
|%Ductility | <center>17.4</center> | <center>16.0</center> | <center>19.6</center> | <center>17.0</center> |
Figure 3
End of explanation |
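A short sketch of how the comparison in Table 1 can be quantified as a percent difference from the reference values (the numbers are taken directly from the steel column of the table above):
```
# Percent difference of measured vs reference steel values from Table 1
measured = {'E (ksi)': 29334, 'Yield (ksi)': 91.7, 'Tensile (ksi)': 95.6, 'Ductility (%)': 17.4}
reference = {'E (ksi)': 29900, 'Yield (ksi)': 140.0, 'Tensile (ksi)': 204.8, 'Ductility (%)': 16.0}

for key in measured:
    diff = 100 * (measured[key] - reference[key]) / reference[key]
    print('%-14s measured %8.1f, reference %8.1f, difference %6.1f%%'
          % (key, measured[key], reference[key], diff))
```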
8,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tag Key Value Uploader
A tool for bulk editing key value pairs for CM placements.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter Tag Key Value Uploader Recipe Parameters
Add this card to a recipe and save it.
Then click Run Now to deploy.
Follow the instructions in the sheet for setup.
Modify the values below for your use case; this can be done multiple times. Then click play.
Step3: 4. Execute Tag Key Value Uploader
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: Tag Key Value Uploader
A tool for bulk editing key value pairs for CM placements.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
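If the recipe were run with service-account auth instead, the same Configuration constructor shown above is used with the service field populated. The sketch below is an assumption-laden illustration: the file path and project id are placeholders, and the JSON is loaded into a dict to mirror the service={} parameter in the cell above.
```
# Hypothetical service-account variant of the Configuration above
import json

with open('/content/service.json') as f:        # placeholder path to downloaded credentials
    service_credentials = json.load(f)

CONFIG = Configuration(
    project="my-cloud-project",                 # placeholder project identifier
    service=service_credentials,
    verbose=True
)
```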
FIELDS = {
'recipe_name':'', # Name of document to deploy to.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter Tag Key Value Uploader Recipe Parameters
Add this card to a recipe and save it.
Then click Run Now to deploy.
Follow the instructions in the sheet for setup.
Modify the values below for your use case; this can be done multiple times. Then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'drive':{
'auth':'user',
'hour':[
],
'copy':{
'source':'https://docs.google.com/spreadsheets/d/19Sxy4BDtK9ocq_INKTiZ-rZHgqhfpiiokXOTsYzmah0/',
'destination':{'field':{'name':'recipe_name','prefix':'Key Value Uploader For ','kind':'string','order':1,'description':'Name of document to deploy to.','default':''}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute Tag Key Value Uploader
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
8,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Data Analysis and Visualization Using Python
by Yanal Kashou
This is a free dataset of Porsche asking prices from the RDatasets collection.
The data was obtained from the URLs below of the .csv data file and the .html documentation file, respectively
Step1: Explore the Dataset
Step2: Plotting Using Seaborn and PyPlot
Step3: Pairplot
Step4: Radial Visualization
Step5: Vertical Barchart
Step6: Horizontal Barchart
Step7: Histogram
Step8: Andrews Curves | Python Code:
import pandas as pd
porsche = pd.read_csv("PorschePrice.csv")
Explanation: Basic Data Analysis and Visualization Using Python
by Yanal Kashou
This is a free dataset of Porsche asking prices from the RDatasets collection.
The data was obtained from the URLs below of the .csv data file and the .html documentation file, respectively:
https://raw.github.com/vincentarelbundock/Rdatasets/master/csv/Stat2Data/PorschePrice.csv
https://raw.github.com/vincentarelbundock/Rdatasets/master/doc/Stat2Data/PorschePrice.html
It contains 30 observations on the following 3 variables.
* Price Asking price for the car (in $1,000's)
* Age Age of the car (in years)
* Mileage Previous miles driven (in 1,000's)
Load the Dataset
End of explanation
porsche.shape
porsche.head(5)
porsche = porsche.rename(columns = {'Unnamed: 0':'Number'})
porsche.head(5)
porsche.describe()
Explanation: Explore the Dataset
End of explanation
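Before plotting, a quick numeric check of how the three variables relate can be useful. A one-line sketch using the dataframe loaded above:
```
# Pairwise correlations between price, age and mileage
print(porsche[['Price', 'Age', 'Mileage']].corr())
```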
import seaborn as sns
import matplotlib.pyplot as plt
Explanation: Plotting Using Seaborn and PyPlot
End of explanation
sns.pairplot(porsche[["Price", "Age", "Mileage"]])
plt.show()
Explanation: Pairplot
End of explanation
from pandas.plotting import radviz  # pandas.tools.plotting has been removed in newer pandas releases
plt.figure()
radviz(porsche, 'Age')
plt.show()
Explanation: Radial Visualization
End of explanation
plt.figure();
porsche.plot(kind = 'bar', stacked = True);
plt.show()
Explanation: Vertical Barchart
End of explanation
porsche.plot(kind='barh', stacked=True);
plt.show()
Explanation: Horizontal Barchart
End of explanation
plt.figure();
porsche['Mileage'].diff().hist(bins = 7)
plt.show()
Explanation: Histogram
End of explanation
from pandas.plotting import andrews_curves  # pandas.tools.plotting has been removed in newer pandas releases
plt.figure()
andrews_curves(porsche, 'Age', colormap = 'autumn')
plt.show()
Explanation: Andrews Curves
End of explanation |
8,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 2
Step1: Intermediate Level
This exercise is designed for those who are already somewhat comfortable with python and want to learn more about exploiting its capabilities. It asks you to read in a file containing 10 time series, each containing a gaussian radio pulse. Then, using numpy and matplotlib, it asks you to plot the pulse, measure the pulse's signal to noise ratio, and output values in a nicely formatted table.
Read in the file "intermediate.txt" in this directory.
The file contains 10 rows of comma separated numbers. Each row represents the amount of signal output from a radio antenna as a function of time (in 1 second time intervals). Loop through the lines in the file (f.readlines() will be useful here). For each line, do the following
Step2: Exercise with APLpy and plotting fits images
In this exercise, you will use aplpy ("apple pie") to make an image of a field of ALFALFA data.
Read in the fits file "HI1020_21.mapPYM2.5b.fits" in this directory, and plot it in inverted greyscale.
Overplot a contour at 0.13 mJy/beam.
There are two groups of galaxies in the image. Put a box around each one.
Label the lower left group NGC 3227 group, and the upper right group the NGC 3190 group
Make your axis labels bold, and give the figure a thick border
Save a .png and .eps version of the figure
This is a piece of code used to make a figure from Leisman et al. 2016! | Python Code:
# Put your code here
pass
# only run this cell after you finished writing your code
%load beginner_soln.py
Explanation: Part 2: Demonstration Exercises
Here are some sample exercises to work through. They demonstrate many techniques that we use all the time.
Beginner Level
This exercise is designed for those who are fairly new to python and coding in general. It asks you to read in a list of numbers from a file and to write an algorithm to sort the list.
Using the techniques described in the example code above, read in the "beginner.txt" file in this directory and store its contents as a list (f.readlines() will be useful). Print your list.
The list you've read in will be a list of strings. Write a for loop that converts each string in the list to an integer (using range(len(list))...). Print your updated list.
Next, create a second, empty list to store the sorted data in.
Now write a for loop that loops over the list you read in from file and:
stores the first entry
looks at each successive entry in the list and compares it to the stored entry.
If an entry is less than the stored entry, replace the stored entry with this new lowest value.
Congratulations, you've now found the lowest value in the list. Take the value stored in your for loop and add it to your second list (using the list.append() method). Use the list.remove(x) method to remove the value you've just added to the second list from the first list.
Now repeat the process in steps 4 and 5 for each value in the initial list (do this by embedding steps 4 and 5 in a for loop; the syntax range(len(list)) will be useful here). [Note, you also could use a while statement, but we'll stick with for loops].
Print out your newly sorted list to make sure your algorithm worked.
If time permits, add a variable verbose, that when it's true you print out the list at each step of the way.
If time permits, come up with a more efficient method for sorting the list (there are many: it's fine to use google to see what sorting algorithms are out there. And of course, there's a python sort command - see if you can figure out how it works).
End of explanation
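For reference, one possible shape of the loop described in steps 4-6 is sketched below, using a hard-coded list so it runs without the data file (the real exercise reads its integers from beginner.txt):
```
# Selection-sort sketch matching the steps above, on a stand-in list of integers
numbers = [42, 7, 19, 3, 25]       # stand-in for the integers read from beginner.txt
sorted_numbers = []

for _ in range(len(numbers)):
    lowest = numbers[0]
    for value in numbers:          # find the smallest remaining value
        if value < lowest:
            lowest = value
    sorted_numbers.append(lowest)  # store it ...
    numbers.remove(lowest)         # ... and remove it from the original list

print(sorted_numbers)
```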
# Put your code here
pass
# only run this cell after you finished writing your code
%load beginner_soln.py
Explanation: Intermediate Level
This exercise is designed for those who are already somewhat comfortable with python and want to learn more about exploiting its capabilities. It asks you to read in a file containing 10 time series, each containing a gaussian radio pulse. Then, using numpy and matplotlib, it asks you to plot the pulse, measure the pulse's signal to noise ratio, and output values in a nicely formatted table.
Read in the file "intermediate.txt" in this directory.
The file contains 10 rows of comma separated numbers. Each row represents the amount of signal output from a radio antenna as a function of time (in 1 second time intervals). Loop through the lines in the file (f.readlines() will be useful here). For each line, do the following:
Convert the line from one long string into a numpy array of floats.
Using matplotlib.pyplot make a plot of the data you just read in as a function of time (hint: you'll have to figure out how many time steps are present in the data).
Using the capabilities of numpy, find the value of the maximum flux in your time series.
Excluding your pulse, (the pulse is in the first half of the time series, so you can cheat and just limit yourself to the second half of the time series) calculate the rms noise in your spectrum. (Recall that the rms is the root mean square - find the mean of the squares of all the points, then take the square root. You might also use np.std() and compare the results (and think about why they are different, if they are different)).
Do a simple estimate of the signal to noise ratio of the pulse as peakflux/rms.
Using a formatted string, print the output signal to noise, peakflux and rms to a descriptive table, rounding each number to two decimal places.
If time permits figure out how to display all your time series on top of one another at the end, rather than having the plots pop up one at a time.
If time permits mess around with fitting the gaussian pulse and come up with other estimates of the signal to noise ratio.
End of explanation
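A sketch of the signal-to-noise estimate described above, run on a synthetic time series (a gaussian pulse plus noise generated in place of a line from intermediate.txt):
```
# Peak / rms signal-to-noise estimate on a synthetic pulse
import numpy as np

t = np.arange(512.0)                                    # 1-second time steps
series = np.random.normal(0, 1.0, t.size)               # noise
series += 10.0 * np.exp(-0.5 * ((t - 100) / 5.0) ** 2)  # gaussian pulse in the first half

peak = series.max()
rms = np.sqrt(np.mean(series[t.size // 2:] ** 2))       # rms of the pulse-free second half
print('peak = %.2f, rms = %.2f, S/N = %.2f' % (peak, rms, peak / rms))
```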
# put your code here....
Explanation: Exercise with APLpy and plotting fits images
In this exercise, you will use aplpy ("apple pie") to make an image of a field of ALFALFA data.
Read in the fits file "HI1020_21.mapPYM2.5b.fits" in this directory, and plot it in inverted greyscale.
Overplot a contour at 0.13 mJy/beam.
There are two groups of galaxies in the image. Put a box around each one.
Label the lower left group NGC 3227 group, and the upper right group the NGC 3190 group
Make your axis labels bold, and give the figure a thick border
Save a .png and .eps version of the figure
This is a piece of code used to make a figure from Leisman et al. 2016!
End of explanation |
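A sketch of how such a figure could be assembled with aplpy is shown below. The method names are standard aplpy calls, but the box positions, box sizes and label coordinates are placeholders rather than the values used in the published figure.
```
# Hypothetical aplpy sketch; coordinates and box sizes are placeholders
import aplpy

fig = aplpy.FITSFigure('HI1020_21.mapPYM2.5b.fits')
fig.show_grayscale(invert=True)                      # inverted greyscale
fig.show_contour(levels=[0.13])                      # contour at 0.13 mJy/beam
fig.show_rectangles([155.5, 154.5], [19.8, 21.8], 0.8, 0.8, edgecolor='black')  # placeholder boxes
fig.add_label(155.5, 19.3, 'NGC 3227 group')         # placeholder label positions
fig.add_label(154.5, 22.3, 'NGC 3190 group')
fig.axis_labels.set_font(weight='bold')              # bold axis labels
fig.frame.set_linewidth(2)                           # thick border
fig.save('groups.png')
fig.save('groups.eps')
```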
8,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
vocab_to_int =
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints =
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
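One common way to build the mapping (a sketch of one possible approach, reusing the words and reviews lists from the cells above): rank words by frequency so the most common word gets integer 1, then translate each review.
```
# Possible solution sketch: frequency-ranked vocabulary starting at 1
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(vocab, 1)}   # 0 is reserved for padding

reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
```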
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels =
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints =
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features =
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
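One way to build the array (a sketch, assuming the zero-length review was already removed in the previous step): start from a zero matrix and right-align each truncated review, which produces the left padding automatically.
```
# Possible solution sketch: left-pad with zeros, truncate to seq_len
import numpy as np

features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, review in enumerate(reviews_ints):
    truncated = review[:seq_len]                 # keep at most the first seq_len words
    features[i, -len(truncated):] = truncated    # right-align; leading zeros are the padding
```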
features[:10,:100]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
train_x, val_x =
train_y, val_y =
val_x, test_x =
val_y, test_y =
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
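A sketch of one straightforward split, assuming labels was converted to a numpy array of 0s and 1s in the earlier exercise: slice off the first split_frac of the rows for training, then cut the remainder in half for validation and test.
```
# Possible solution sketch for the 80/10/10 split
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

test_idx = len(val_x) // 2
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
```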
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ =
labels_ =
keep_prob =
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
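A sketch of the three placeholders using the TensorFlow 1.x API assumed throughout this notebook:
```
# Possible solution sketch (TensorFlow 1.x placeholders)
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
```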
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding =
embed =
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, which we'll be using as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
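A quick, optional sanity check of the generator above (it assumes train_x and train_y from the split step exist):
for x_batch, y_batch in get_batches(train_x, train_y, batch_size=500):
    print(x_batch.shape, y_batch.shape)  # expect (500, 200) for the reviews and (500,) for the labels
    break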
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
8,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
Note to Amazon EC2 users
Step1: Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
Step2: Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
Step3: Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
Step4: Let's look at the top 10 nearest neighbors by performing the following query
Step6: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages
Step7: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data
Step8: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
Step9: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
Step10: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint
Step11: Checkpoint. Check your has_top_words function on two random articles
Step12: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint
Step13: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
Step14: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama
Step15: Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schiliro's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
Step16: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
Step17: The first 10 words should say
Step18: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint
Step19: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability
Step20: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
Step21: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
Step22: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
Note
Step23: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
Step24: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Moral of the story
Step25: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
Step26: Now, compute the cosine distance between the Barack Obama article and this tweet
Step27: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors | Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
As usual we need to first import the Python packages that we will need.
End of explanation
wiki = graphlab.SFrame('people_wiki.gl')
wiki
Explanation: Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
End of explanation
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
Explanation: Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
End of explanation
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
Explanation: Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
End of explanation
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
Explanation: Let's look at the top 10 nearest neighbors by performing the following query:
End of explanation
def top_words(name):
    """Get a table of the most frequent words in the given person's wikipedia page."""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
Explanation: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
End of explanation
combined_words = obama_words.join(barrio_words, on='word')
combined_words
Explanation: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details.
For instance, running
obama_words.join(barrio_words, on='word')
will extract the rows from both tables that correspond to the common words.
End of explanation
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
Explanation: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
End of explanation
combined_words.sort('Obama', ascending=False)
Explanation: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
End of explanation
common_words = combined_words.sort('Obama', ascending=False)['word'][0:5] # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum() # YOUR CODE HERE
wiki.head(5)
Explanation: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task.
- Convert the list of top 5 words into set using the syntax
set(common_words)
where common_words is a Python list. See this link if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the keys() method.
- Convert the list of keys into a set as well.
- Use issubset() method to check if all 5 words are among the keys.
* Now apply the has_top_words function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
End of explanation
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
Explanation: Checkpoint. Check your has_top_words function on two random articles:
End of explanation
wiki['word_count'][wiki['name']=='Barack Obama'][0]
print graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='Barack Obama'][0],
wiki['word_count'][wiki['name']=='George W. Bush'][0])
print graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='Barack Obama'][0],
wiki['word_count'][wiki['name']=='Joe Biden'][0])
print graphlab.distances.euclidean(wiki['word_count'][wiki['name']=='George W. Bush'][0],
wiki['word_count'][wiki['name']=='Joe Biden'][0])
Explanation: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.
End of explanation
def get_common_words(name1, name2, num_of_words=10):
words1 = top_words(name1)
words2 = top_words(name2)
combined_words = words1.join(words2, on='word')
return combined_words.sort('count', ascending=False)[0:num_of_words]
get_common_words('Barack Obama', 'George W. Bush')
Explanation: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
End of explanation
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Explanation: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
End of explanation
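As an aside, here is a minimal sketch of the idea behind a TF-IDF weight (GraphLab's exact formula and smoothing may differ, and the corpus numbers below are made up for illustration):
def tfidf_weight(term_count_in_doc, num_docs, num_docs_containing_term):
    return term_count_in_doc * np.log(float(num_docs) / num_docs_containing_term)
print(tfidf_weight(40, 100000, 98000))  # very frequent word: small weight despite 40 occurrences
print(tfidf_weight(5, 100000, 300))     # rare word: much larger weight with only 5 occurrences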
wiki.head(3)
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
Explanation: Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schiliro's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
End of explanation
combined_tf_idf_words = obama_tf_idf.join(schiliro_tf_idf, on='word')
combined_tf_idf_words.sort('weight', ascending=False)
Explanation: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
End of explanation
common_words = combined_tf_idf_words.sort('weight', ascending=False)['word'][0:5] # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = set(word_count_vector.keys()) # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
wiki['has_top_words'].sum() # YOUR CODE HERE
Explanation: The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
End of explanation
print graphlab.distances.euclidean(wiki['tf_idf'][wiki['name']=='Barack Obama'][0],
wiki['tf_idf'][wiki['name']=='Joe Biden'][0])
Explanation: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
End of explanation
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Explanation: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
End of explanation
def compute_length(row):
    return len(row['text'].split(' '))
wiki['length'] = wiki.apply(compute_length)
wiki.head(3)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
Explanation: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
End of explanation
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
End of explanation
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
Explanation: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 2000 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many Wikipedia articles are 2500 words or more, and both Obama and Biden are over 2500 words long.
Note: Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to cosine distances:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
End of explanation
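For intuition, a minimal NumPy sketch of the cosine distance formula above (GraphLab computes the same quantity on its sparse TF-IDF vectors):
def cosine_distance(x, y):
    return 1. - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(cosine_distance(np.array([1., 2., 0.]), np.array([2., 4., 0.])))  # 0.0: same direction, regardless of length
print(cosine_distance(np.array([1., 0.]), np.array([0., 1.])))          # 1.0: orthogonal vectors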
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([1000, 5500, 0, 0.004])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
Explanation: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
End of explanation
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
Explanation: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Moral of the story: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
End of explanation
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
Explanation: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
End of explanation
obama = wiki[wiki['name'] == 'Barack Obama']
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
Explanation: Now, compute the cosine distance between the Barack Obama article and this tweet:
End of explanation
model2_tf_idf.query(obama, label='name', k=10)
Explanation: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
End of explanation |
8,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The
Step1:
Step2: Now, we can create an
Step3: Epochs behave similarly to
Step4: You can select subsets of epochs by indexing the
Step5: It is also possible to iterate through
Step6: You can manually remove epochs from the Epochs object by using
Step7: If you wish to save the epochs as a file, you can do it with
Step8: Later on you can read the epochs with
Step9: If you wish to look at the average across trial types, then you may do so,
creating an | Python Code:
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
Explanation: The :class:Epochs <mne.Epochs> data structure: epoched data
:class:Epochs <mne.Epochs> objects are a way of representing continuous
data as a collection of time-locked trials, stored in an array of shape
(n_events, n_channels, n_times). They are useful for many statistical
methods in neuroscience, and make it easy to quickly overview what occurs
during a trial.
End of explanation
data_path = mne.datasets.sample.data_path()
# Load a dataset that contains events
raw = mne.io.read_raw_fif(
op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
# If your raw object has a stim channel, you can construct an event array
# easily
events = mne.find_events(raw, stim_channel='STI 014')
# Show the number of events (number of rows)
print('Number of events:', len(events))
# Show all unique event codes (3rd column)
print('Unique event codes:', np.unique(events[:, 2]))
# Specify event codes of interest with descriptive labels.
# This dataset also has visual left (3) and right (4) events, but
# to save time and memory we'll just look at the auditory conditions
# for now.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2}
Explanation: :class:Epochs <mne.Epochs> objects can be created in three ways:
1. From a :class:Raw <mne.io.Raw> object, along with event times
2. From an :class:Epochs <mne.Epochs> object that has been saved as a
.fif file
3. From scratch using :class:EpochsArray <mne.EpochsArray>. See
tut_creating_data_structures
End of explanation
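As a side note on option 3 above, here is a minimal, self-contained EpochsArray sketch with synthetic data (the channel names and shapes are made up for illustration):
info = mne.create_info(ch_names=['EEG 001', 'EEG 002'], sfreq=100., ch_types='eeg')
synthetic_data = np.random.randn(5, 2, 100)  # (n_epochs, n_channels, n_times)
synthetic_epochs = mne.EpochsArray(synthetic_data, info)
print(synthetic_epochs)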
epochs = mne.Epochs(raw, events, event_id, tmin=-0.1, tmax=1,
baseline=(None, 0), preload=True)
print(epochs)
Explanation: Now, we can create an :class:mne.Epochs object with the events we've
extracted. Note that epochs constructed in this manner will not have their
data available until explicitly read into memory, which you can do with
:func:get_data <mne.Epochs.get_data>. Alternatively, you can use
preload=True.
Expose the raw data as epochs, cut from -0.1 s to 1.0 s relative to the event
onsets
End of explanation
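Since preload=True was used above, the data are already in memory; in general you can materialize the epochs array explicitly like this:
data = epochs.get_data()  # shape (n_epochs, n_channels, n_times)
print(data.shape)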
print(epochs.events[:3])
print(epochs.event_id)
Explanation: Epochs behave similarly to :class:mne.io.Raw objects. They have an
:class:info <mne.Info> attribute that has all of the same
information, as well as a number of attributes unique to the events contained
within the object.
End of explanation
print(epochs[1:5])
print(epochs['Auditory/Right'])
Explanation: You can select subsets of epochs by indexing the :class:Epochs <mne.Epochs>
object directly. Alternatively, if you have epoch names specified in
event_id then you may index with strings instead.
End of explanation
# These will be epochs objects
for i in range(3):
print(epochs[i])
# These will be arrays
for ep in epochs[:2]:
print(ep)
Explanation: It is also possible to iterate through :class:Epochs <mne.Epochs> objects
in this way. Note that behavior is different if you iterate on Epochs
directly rather than indexing:
End of explanation
epochs.drop([0], reason='User reason')
epochs.drop_bad(reject=dict(grad=2500e-13, mag=4e-12, eog=200e-6), flat=None)
print(epochs.drop_log)
epochs.plot_drop_log()
print('Selection from original events:\n%s' % epochs.selection)
print('Removed events (from numpy setdiff1d):\n%s'
% (np.setdiff1d(np.arange(len(events)), epochs.selection).tolist(),))
print('Removed events (from list comprehension -- should match!):\n%s'
% ([li for li, log in enumerate(epochs.drop_log) if len(log) > 0]))
Explanation: You can manually remove epochs from the Epochs object by using
:func:epochs.drop(idx) <mne.Epochs.drop>, or by using rejection or flat
thresholds with :func:epochs.drop_bad(reject, flat) <mne.Epochs.drop_bad>.
You can also inspect the reason why epochs were dropped by looking at the
list stored in epochs.drop_log or plot them with
:func:epochs.plot_drop_log() <mne.Epochs.plot_drop_log>. The indices
from the original set of events are stored in epochs.selection.
End of explanation
epochs_fname = op.join(data_path, 'MEG', 'sample', 'sample-epo.fif')
epochs.save(epochs_fname)
Explanation: If you wish to save the epochs as a file, you can do it with
:func:mne.Epochs.save. To conform to MNE naming conventions, the
epochs file names should end with '-epo.fif'.
End of explanation
epochs = mne.read_epochs(epochs_fname, preload=False)
Explanation: Later on you can read the epochs with :func:mne.read_epochs. For reading
EEGLAB epochs files see :func:mne.read_epochs_eeglab. We can also use
preload=False to save memory, loading the epochs from disk on demand.
End of explanation
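With preload=False the data stay on disk until needed; they can still be loaded explicitly later, for example before operations that require the full array in memory:
epochs.load_data()
print(epochs.get_data().shape)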
ev_left = epochs['Auditory/Left'].average()
ev_right = epochs['Auditory/Right'].average()
f, axs = plt.subplots(3, 2, figsize=(10, 5))
_ = f.suptitle('Left / Right auditory', fontsize=20)
_ = ev_left.plot(axes=axs[:, 0], show=False, time_unit='s')
_ = ev_right.plot(axes=axs[:, 1], show=False, time_unit='s')
plt.tight_layout()
Explanation: If you wish to look at the average across trial types, then you may do so,
creating an :class:Evoked <mne.Evoked> object in the process. Instances
of Evoked are usually created by calling :func:mne.Epochs.average. For
creating Evoked from other data structures see :class:mne.EvokedArray and
tut_creating_data_structures.
End of explanation |
8,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exporting CSV data from the server
This process is slightly cumbersome because of Unix permissions. Remember - nine times out of ten, on Unix, it's probably a permissions problem.
In this case the user 'postgres' which runs the PostgreSQL server doesn't have write permissions to your home directory /home/jovyan/work. To work around it, we write to a shared space, /tmp, from PostgreSQL, then copy the file to your own directory.
Standard db setup
Step1: A little database example
Step2: Exporting to CSV
Now that you see we have a real table with real data, we export using the same COPY command we use for import. The main differences are
Step3: We can see that the file correctly exported.
Step4: Now move the file to a space you can reach from Jupyter | Python Code:
!echo 'redspot' | sudo -S service postgresql restart
%load_ext sql
!createdb -U dbuser test
%sql postgresql://dbuser@localhost:5432/test
Explanation: Exporting CSV data from the server
This process is slightly cumbersome because of Unix permissions. Remember - nine times out of ten, on Unix, it's probably a permissions problem.
In this case the user 'postgres' which runs the PostgreSQL server doesn't have write permissions to your home directory /home/jovyan/work. To work around it, we write to a shared space, /tmp, from PostgreSQL, then copy the file to your own directory.
Standard db setup
End of explanation
%%sql
DROP TABLE IF EXISTS foo;
CREATE TABLE foo (
id SERIAL,
s TEXT
);
%%sql
INSERT INTO foo (s) VALUES
('hi'),
('bye'),
('yo')
;
%%sql
SELECT * FROM foo;
Explanation: A little database example
End of explanation
%%sql
COPY
(SELECT * FROM foo ORDER BY s)
TO '/tmp/testout.csv'
WITH
CSV
HEADER
DELIMITER ','
QUOTE '"';
Explanation: Exporting to CSV
Now that you see we have a real table with real data, we export using the same COPY command we use for import. The main differences are:
COPY ... TO instead of COPY ... FROM
You may specify an arbitrarily complex query, using multiple tables, etc.
Note the /tmp/ location of the output file; this is our shared space.
Read all the details about pgsql's non-standard-SQL COPY function at https://www.postgresql.org/docs/9.5/static/sql-copy.html.
End of explanation
!cat /tmp/testout.csv
!csvlook /tmp/testout.csv
Explanation: We can see that the file correctly exported.
End of explanation
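As an optional check from the Python side, the same rows can be read back with pandas before copying the file:
import pandas as pd
pd.read_csv('/tmp/testout.csv')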
!cp /tmp/testout.csv /home/jovyan/work/testout.csv
Explanation: Now move the file to a space you can reach from Jupyter:
End of explanation |
8,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Upload a gist via the GitHub API
Our OAuthenticator config has passed GitHub information via environment variables.
We can use these to publish gists to GitHub.
Get the GitHub username and token from environment variables
Step1: Create a requests.Session for holding our oauth token
Step2: Verify that we have the scopes we expect
Step3: Now we can make a gist! | Python Code:
import os
gh_user = os.environ['GITHUB_USER']
gh_token = os.environ['GITHUB_TOKEN']
Explanation: Upload a gist via the GitHub API
Our OAuthenticator config has passed GitHub information via environment variables.
We can use these to publish gists to GitHub.
Get the GitHub username and token from environment variables
End of explanation
import requests
s = requests.session()
s.headers['Authorization'] = 'token ' + gh_token
Explanation: Create a requests.Session for holding our oauth token
End of explanation
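Optionally, you can also pin the API media type via the Accept header, as GitHub's REST API documentation recommends:
s.headers['Accept'] = 'application/vnd.github.v3+json'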
r = s.get('https://api.github.com/user')
r.raise_for_status()
r.headers['X-OAuth-Scopes']
Explanation: Verify that we have the scopes we expect:
End of explanation
import json
r = s.post('https://api.github.com/gists',
data=json.dumps({
'files': {
'test.md': {
'content': '# JupyterHub gist\n\nThis file was created from JupyterHub.',
},
},
'description': 'test uploading a gist from JupyterHub',
}),
)
r.raise_for_status()
print("Created gist: %s" % r.json()['html_url'])
Explanation: Now we can make a gist!
End of explanation |
8,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
[PUBLIC] Analysis of CLBlast client multiple sizes
<a id="overview"></a>
Overview
This Jupyter Notebook analyses the performance that CLBlast (single configuration) achieves across a range of sizes.
<a id="data"></a>
Get the experimental data from DropBox
NB
Step1: Scientific
If some of the scientific packages are missing, please install them using
Step2: Collective Knowledge
If CK is not installed, please install it using
Step3: Define helper functions
Step4: Access the experimental data | Python Code:
import os
import sys
import json
import re
Explanation: [PUBLIC] Analysis of CLBlast client multiple sizes
<a id="overview"></a>
Overview
This Jupyter Notebook analyses the performance that CLBlast (single configuration) achieves across a range of sizes.
<a id="data"></a>
Get the experimental data from DropBox
NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
The experimental data was collected on the experimental platform and archived as follows:
$ cd `ck find ck-math:script:<...>`
$ python <...>.py
$ ck zip local:experiment:* --archive_name=<...>.zip
It can be downloaded and extracted as follows:
$ wget <...>.zip
$ ck add repo:<....> --zip=<....>.zip --quiet
<a id="code"></a>
Data wrangling code
NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
Includes
Standard
End of explanation
import IPython as ip
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mp
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Seaborn version: %s' % sns.__version__) # apt install python-tk
print ('Matplotlib version: %s' % mp.__version__)
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from IPython.display import Image
from IPython.core.display import HTML
Explanation: Scientific
If some of the scientific packages are missing, please install them using:
```
pip install jupyter pandas numpy matplotlib
```
End of explanation
import ck.kernel as ck
print ('CK version: %s' % ck.__version__)
Explanation: Collective Knowledge
If CK is not installed, please install it using:
```
pip install ck
```
End of explanation
# Return the number of floating-point operations for C = alpha * A * B + beta * C,
# where A is a MxK matrix and B is a KxN matrix.
def xgemm_flops(alpha, beta, M, K, N):
flops_AB = 2*M*N*K if alpha!=0 else 0
flops_C = 2*M*N if beta!=0 else 0
flops = flops_AB + flops_C
return flops
# Return GFLOPS (Giga floating-point operations per second) for a known kernel and -1 otherwise.
def GFLOPS(kernel, run_characteristics, time_ms):
if kernel.lower().find('xgemm') != -1:
time_ms = np.float64(time_ms)
alpha = np.float64(run_characteristics['arg_alpha'])
beta = np.float64(run_characteristics['arg_beta'])
M = np.int64(run_characteristics['arg_m'])
K = np.int64(run_characteristics['arg_k'])
N = np.int64(run_characteristics['arg_n'])
return (1e-9 * xgemm_flops(alpha, beta, M, K, N)) / (1e-3 * time_ms)
else:
return (-1.0)
def convert2int(s):
if s[-1]=='K':
return np.int64(s[0:-1])*1024
else:
return np.int64(s)
def args_str(kernel, run):
args = ''
if kernel.lower().find('xgemm') != -1:
args = 'alpha=%s, beta=%s, M=%s, K=%s, N=%s' % \
(run['arg_alpha'], run['arg_beta'], run['arg_m'], run['arg_k'], run['arg_n'])
return args
Explanation: Define helper functions
End of explanation
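A quick sanity check of the helpers above, using made-up arguments for a hypothetical 1024x1024x1024 SGEMM that took 50 ms:
example_run = {'arg_alpha': '1', 'arg_beta': '0', 'arg_m': '1024', 'arg_k': '1024', 'arg_n': '1024'}
print ('%.2f GFLOPS' % GFLOPS('xgemm', example_run, 50.0))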
def get_experimental_results(repo_uoa='explore-matrix-size-acl-sgemm-opencl-odroid-xu3', tags=''):
module_uoa = 'experiment'
r = ck.access({'action':'search', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'tags':tags})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
experiments = r['lst']
dfs = []
for experiment in experiments:
data_uoa = experiment['data_uoa']
r = ck.access({'action':'list_points', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
print experiment
for point in r['points']:
with open(os.path.join(r['path'], 'ckp-%s.0001.json' % point)) as point_file:
point_data_raw = json.load(point_file)
characteristics_list = point_data_raw['characteristics_list']
num_repetitions = len(characteristics_list)
print characteristics_list
# Obtain column data.
data = [
{
'repetition_id': repetition_id,
'm': convert2int(characteristics['run']['m']),
'n': convert2int(characteristics['run']['n']),
'k': convert2int(characteristics['run']['k']),
#'mnk': convert2int(characteristics['run']['m'][0]) * convert2int(characteristics['run']['n'][0]) * convert2int(characteristics['run']['k'][0]),
'G': np.float32(characteristics['run']['GFLOPS_1'])
#'strategy' : tuner_output['strategy'],
#'config_id': config_id,
#'config' : config['parameters'],
#'kernel' : config['kernel']
#'args_id' : args_str(config['kernel'], characteristics['run']),
#'ms' : np.float64(config['time']),
#'GFLOPS' : GFLOPS(config['kernel'], characteristics['run'], config['time'])
}
for (repetition_id, characteristics) in zip(range(num_repetitions), characteristics_list)
#for (m,n,k,G,) in characteristics['run']
#for (config_id, config) in zip(range(len(tuner_output['result'])), tuner_output['result'])
]
#print data
#Construct a DataFrame.
df = pd.DataFrame(data)
# Set columns and index names.
df.columns.name = 'characteristics'
df.index.name = 'index'
df = df.set_index(['m', 'n', 'k', 'repetition_id'])
# Append to the list of similarly constructed DataFrames.
dfs.append(df)
# Concatenate all constructed DataFrames (i.e. stack on top of each other).
result = pd.concat(dfs)
return result.sortlevel(result.index.names)
df = get_experimental_results(tags='acl-sgemm-opencl')
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
df
df = df.sortlevel(df.index.names[3])
#df.sort_value(level=df.index.names[3])
#df = df.sort_values('mnk')
#pd.options.display.max_columns=2
#df = df.reset_index('mnk').sort_values('mnk')
df_mean = df.groupby(level=df.index.names[:-1]).mean()
df_std = df.groupby(level=df.index.names[:-1]).std()
df_mean.T \
.plot(yerr=df_std.T, title='GFLOPS',
kind='bar', rot=0, ylim=[0,20], figsize=[20, 12], grid=True, legend=True, colormap=cm.autumn, fontsize=16)
kernel = df.iloc[0].name[0]
kernel
Explanation: Access the experimental data
End of explanation |
8,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pattern Recognition in Radio Measurement Data
Task 1
Step1: We open the database and print the keys of the individual tables.
Step2: Task 2
Step3: Next, we examine the attribute composition for two receiver-sender groups as examples.
Step4: For the analysis of the frames we define a few helper functions.
Step5: Now determine the column composition of df_x1_t1_trx_1_4.
Step6: Look at the contents of the "target" column of df_x1_t1_trx_1_4.
Step7: Next, we load the frame x3_t2_trx_3_1 and look at its dimensions.
Step8: Followed by an analysis of its column composition and its "target" values.
Step9: Question
Step10: We look at how different color schemes highlight different characteristics of our raw data.
Step11: Task 3
Step12: Check the newly labeled dataframe "/x1/t1/trx_3_1". As results, we expect "Empty" (i.e. 0) for measurement 5 at the beginning of the experiment and "Not Empty" (i.e. 1) for measurement 120 in the middle of the experiment.
Step13: Task 4
Step14: Opening the HDF store with pandas
Step15: Example recognizer
Prepare the datasets
Step16: Closing the HDF store | Python Code:
# imports
import re
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pprint as pp
Explanation: Pattern Recognition in Radio Measurement Data
Task 1: Loading the database in a Jupyter notebook
End of explanation
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')
print(hdf.keys())
Explanation: We open the database and print the keys of the individual tables.
End of explanation
df_x1_t1_trx_1_4 = hdf.get('/x1/t1/trx_1_4')
print("Rows:", df_x1_t1_trx_1_4.shape[0])
print("Columns:", df_x1_t1_trx_1_4.shape[1])
Explanation: Task 2: Inspecting a single dataframe
We load the frame x1_t1_trx_1_4 and look at its dimensions.
End of explanation
# first inspection of columns from df_x1_t1_trx_1_4
df_x1_t1_trx_1_4.head(5)
Explanation: Next, we examine the attribute composition for two receiver-sender groups as examples.
End of explanation
# Little function to retrieve sender-receiver tuples from df columns
def extract_snd_rcv(df):
regex = r"trx_[1-4]_[1-4]"
# creates a set containing the different pairs
snd_rcv = {x[4:7] for x in df.columns if re.search(regex, x)}
return [(x[0],x[-1]) for x in snd_rcv]
# Sums the number of columns for each sender-receiver tuple
def get_column_counts(snd_rcv, df):
col_counts = {}
for snd,rcv in snd_rcv:
col_counts['Columns for pair {} {}:'.format(snd, rcv)] = len([i for i, word in enumerate(list(df.columns)) if word.startswith('trx_{}_{}'.format(snd, rcv))])
return col_counts
# Analyze the column composition of a given measurement.
def analyse_columns(df):
df_snd_rcv = extract_snd_rcv(df)
cc = get_column_counts(df_snd_rcv, df)
for x in cc:
print(x, cc[x])
print("Sum of pair related columns: %i" % sum(cc.values()))
print()
print("Other columns are:")
for att in [col for col in df.columns if 'ifft' not in col and 'ts' not in col]:
print(att)
# Analyze the values of the target column.
def analyze_target(df):
print(df['target'].unique())
print("# Unique values in target: %i" % len(df['target'].unique()))
Explanation: For the analysis of the frames we define a few helper functions.
End of explanation
analyse_columns(df_x1_t1_trx_1_4)
Explanation: Now determine the column composition of df_x1_t1_trx_1_4.
End of explanation
analyze_target(df_x1_t1_trx_1_4)
Explanation: Look at the contents of the "target" column of df_x1_t1_trx_1_4.
End of explanation
df_x3_t2_trx_3_1 = hdf.get('/x3/t2/trx_3_1')
print("Rows:", df_x3_t2_trx_3_1.shape[0])
print("Columns:", df_x3_t2_trx_3_1.shape[1])
Explanation: Next, we load the frame x3_t2_trx_3_1 and look at its dimensions.
End of explanation
analyse_columns(df_x3_t2_trx_3_1)
analyze_target(df_x3_t2_trx_3_1)
Explanation: Followed by an analysis of its column composition and its "target" values.
End of explanation
vals = df_x1_t1_trx_1_4.loc[:,'trx_2_4_ifft_0':'trx_2_4_ifft_1999'].values
# one big heatmap
plt.figure(figsize=(14, 12))
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
plt.show()
Explanation: Question: What do you notice about the "receiver-number_sender-number" combinations? Are they the same? Which distinct values do you find in the "target" column?
Answer: We see that whenever one pair transmits, the other two nodes listen and measure their links to the currently transmitting nodes (i.e. 6 pairs in each dataframe). For example, when pair 3 1 transmits, node 1 measures link 1-3, node 3 measures link 3-1, and nodes 2 and 4 measure links 2-1 and 2-3 respectively 4-1 and 4-3. The 10 distinct values of the "target" column are shown above.
Task 3: Visualizing the measurement series of the dataset
We visualize the raw data with different heatmaps in order to visually validate the integrity of the data and to develop ideas for possible features. Here we show the data of frame df_x1_t1_trx_1_4 as an example.
End of explanation
# compare different heatmaps
plt.figure(1, figsize=(12,10))
# nipy_spectral_r scheme
plt.subplot(221)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='nipy_spectral_r')
# terrain scheme
plt.subplot(222)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='terrain')
# Vega10 scheme
plt.subplot(223)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Vega10')
# Wistia scheme
plt.subplot(224)
plt.title('trx_2_4_ifft')
plt.xlabel("ifft of frequency")
plt.ylabel("measurement")
ax = sns.heatmap(vals, xticklabels=200, yticklabels=20, vmin=0, vmax=1, cmap='Wistia')
# Adjust the subplot layout, because the logit one may take more space
# than usual, due to y-tick labels like "1 - 10^{-3}"
plt.subplots_adjust(top=0.92, bottom=0.08, left=0.10, right=0.95, hspace=0.25,
wspace=0.2)
plt.show()
Explanation: We look at how different color schemes highlight different characteristics of our raw data.
End of explanation
# Iterating over hdf data and creating interim data presentation stored in data/interim/testmessungen_interim.hdf
# Interim data representation contains additional binary class (binary_target - encoding 0=empty and 1=not empty)
# and multi class target (multi_target - encoding 0-9 for each possible class)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
interim_path = '../../data/interim/01_testmessungen.hdf'
def binary_mapper(df):
def map_binary(target):
if target.startswith('Empty'):
return 0
else:
return 1
df['binary_target'] = pd.Series(map(map_binary, df['target']))
def multiclass_mapper(df):
le.fit(df['target'])
df['multi_target'] = le.transform(df['target'])
for key in hdf.keys():
df = hdf.get(key)
binary_mapper(df)
multiclass_mapper(df)
df.to_hdf(interim_path, key)
hdf.close()
Explanation: Task 3: Adjusting the ground-truth labels
End of explanation
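As a side note (an illustrative sketch, not part of the original notebook), this is how sklearn's LabelEncoder maps string labels to integer codes; the class names below are made up and need not match the labels in the measurement files.
from sklearn.preprocessing import LabelEncoder
demo_labels = ['Empty_1', 'Empty_1', 'Walk_1_2', 'Stand_3', 'Walk_1_2']  # hypothetical labels
demo_le = LabelEncoder()
demo_codes = demo_le.fit_transform(demo_labels)
print(list(demo_le.classes_))  # sorted unique class names
print(list(demo_codes))        # integer code assigned to each entry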
hdf = pd.HDFStore('../../data/interim/01_testmessungen.hdf')
df_x1_t1_trx_3_1 = hdf.get('/x1/t1/trx_3_1')
print("binary_target for measurement 5:", df_x1_t1_trx_3_1['binary_target'][5])
print("binary_target for measurement 120:", df_x1_t1_trx_3_1['binary_target'][120])
hdf.close()
Explanation: Check the newly labeled dataframe "/x1/t1/trx_3_1". As results we expect "Empty" (i.e., 0) for measurement 5 at the beginning of the experiment and "Not Empty" (i.e., 1) for measurement 120 in the middle of the experiment.
End of explanation
from evaluation import *
from filters import *
from utility import *
from features import *
Explanation: Task 4: A simple recognizer with hold-out validation
We follow the steps in Task 4 and test a simple recognizer.
End of explanation
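For orientation, here is a minimal, generic hold-out sketch with scikit-learn (illustrative only -- the notebook itself uses the project's hold_out_val() helper from evaluation.py, and the column names are assumptions):
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
def simple_hold_out(train_df, test_df, label='target'):
    # train on one measurement, evaluate on a held-out measurement
    clf = RandomForestClassifier(random_state=1)
    clf.fit(train_df.drop(label, axis=1), train_df[label])
    predictions = clf.predict(test_df.drop(label, axis=1))
    return accuracy_score(test_df[label], predictions)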
# raw data, used to retrieve the target values
hdf = pd.HDFStore('../../data/raw/TestMessungen_NEU.hdf')
Explanation: Opening the HDF store with pandas
End of explanation
# generate datasets
tst = ['1','2','3']
tst_ds = []
for t in tst:
df_tst = hdf.get('/x1/t'+t+'/trx_3_1')
lst = df_tst.columns[df_tst.columns.str.contains('_ifft_')]
#df_tst_cl,_ = distortion_filter(df_tst_cl)
groups = get_trx_groups(df_tst)
df_std = rf_grouped(df_tst, groups=groups, fn=rf_std_single, label='target')
df_mean = rf_grouped(df_tst, groups=groups, fn=rf_mean_single)
df_p2p = rf_grouped(df_tst, groups=groups, fn=rf_ptp_single) # added p2p feature
df_all = pd.concat( [df_std, df_mean, df_p2p], axis=1 ) # added p2p feature
df_all = cf_std_window(df_all, window=4, label='target')
df_tst_sum = generate_class_label_presence(df_all, state_variable='target')
# remove index column
df_tst_sum = df_tst_sum[df_tst_sum.columns.values[~df_tst_sum.columns.str.contains('index')].tolist()]
print('Columns in Dataset:',t)
print(df_tst_sum.columns)
tst_ds.append(df_tst_sum.copy())
# holdout validation
print(hold_out_val(tst_ds, target='target', include_self=False, cl='rf', verbose=False, random_state=1))
Explanation: Example recognizer
Preparing the datasets
End of explanation
hdf.close()
Explanation: Closing the HDF store
End of explanation |
8,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topology
This caterogy of questions is intended to retrieve the network topology
used by Batfish. This topology is a combination of information in the
snapshot and inference logic (e.g., which interfaces are layer3 neighbors).
Currently, Layer 3 topology can be retrieved.
User Provided Layer 1 Topology
Layer 3 Topology
Step1: User Provided Layer 1 Topology
Returns normalized Layer 1 edges that were input to Batfish.
Lists Layer 1 edges after potentially normalizing node and interface names. All node names are lower-cased, and for nodes that appear in the snapshot, interface names are canonicalized based on the vendor. All input edges are in the output, including nodes and interfaces that do not appear in the snapshot.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include edges whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
Step2: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
Remote_Interface | Interface at which the edge terminates | Interface
Print the first 5 rows of the returned Dataframe
Step3: Print the first row of the returned Dataframe
Step4: Layer 3 Topology
Returns Layer 3 links.
Lists all Layer 3 edges in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include edges whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
Step5: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
IPs | IPs | Set of str
Remote_Interface | Interface at which the edge terminates | Interface
Remote_IPs | Remote IPs | Set of str
Print the first 5 rows of the returned Dataframe
Step6: Print the first row of the returned Dataframe | Python Code:
bf.set_network('generate_questions')
bf.set_snapshot('aristaevpn')
Explanation: Topology
This category of questions is intended to retrieve the network topology
used by Batfish. This topology is a combination of information in the
snapshot and inference logic (e.g., which interfaces are layer3 neighbors).
Currently, Layer 3 topology can be retrieved.
User Provided Layer 1 Topology
Layer 3 Topology
End of explanation
result = bf.q.userProvidedLayer1Edges().answer().frame()
Explanation: User Provided Layer 1 Topology
Returns normalized Layer 1 edges that were input to Batfish.
Lists Layer 1 edges after potentially normalizing node and interface names. All node names are lower-cased, and for nodes that appear in the snapshot, interface names are canonicalized based on the vendor. All input edges are in the output, including nodes and interfaces that do not appear in the snapshot.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include edges whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
Remote_Interface | Interface at which the edge terminates | Interface
Print the first 5 rows of the returned Dataframe
End of explanation
result.iloc[0]
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
Explanation: Print the first row of the returned Dataframe
End of explanation
result = bf.q.layer3Edges().answer().frame()
Explanation: Layer 3 Topology
Returns Layer 3 links.
Lists all Layer 3 edges in the network.
Inputs
Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
nodes | Include edges whose first node matches this name or regex. | NodeSpec | True | .
remoteNodes | Include edges whose second node matches this name or regex. | NodeSpec | True | .
Invocation
End of explanation
result.head(5)
Explanation: Return Value
Name | Description | Type
--- | --- | ---
Interface | Interface from which the edge originates | Interface
IPs | IPs | Set of str
Remote_Interface | Interface at which the edge terminates | Interface
Remote_IPs | Remote IPs | Set of str
Print the first 5 rows of the returned Dataframe
End of explanation
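Because the answer is returned as a regular pandas DataFrame, it can be filtered with ordinary pandas operations; for example (a hypothetical filter -- the hostname below is only a placeholder and may not exist in this snapshot):
edges_from_host = result[result["Interface"].astype(str).str.startswith("as1border1")]
edges_from_host.head()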
result.iloc[0]
Explanation: Print the first row of the returned Dataframe
End of explanation |
8,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Support Vector Machine - Basics
Support Vector Machine (SVM) is one of the most commonly used algorithms.
It can be used for both classification and regression.
Today we will walk through the basics of using SVM for classification.
There are three sub modules in the svm module of scikit-learn.
LinearSVC, the linear version of SVC, same as SVC with linear kernel (more efficient).
SVC, use svm for classification.
SVR, use svm for regression.
The layout of the session will be as follows
Step1: We start with a simple example of linear SVC; here we use the 'linear' kernel of SVC.
We'll introduce the concept of a kernel later on.
We start by creating a data set that is a collection of two groups of points separated by a plane.
Step2: We initialize the SVC and fit our data as simply as the following.
Step3: Let's examine how SVC tries to classify the points.
Before that, we want to first introduce a few attributes of the SVC class.
hyperplane
Step4: In the above example, linear SVM attempts to find the line that separates the two classes
and ensures maximum margin.
maximum margin - The distance from the points on either side of the group to the line is maximized. This is determined by the hyperplanes passing through the supporting vectors.
The selected support vectors can be used to define the margins.
The plane found by SVM would usually be slightly different from the logistic regression one since they optimize different properties.
Introduce the Kernels and their parameters
Step5: We apply each of the above methods to the three datasets and plot the decision boundaries.
We scale each dataset first using the StandardScaler() function from scikit-learn.
By default, this shifts the dataset to a mean of 0 and a standard deviation of 1.
Step6: There is a lot of information in the above figures, let's break it down step by step | Python Code:
# import all the needed packages
import numpy as np
import scipy as sp
import pandas as pd
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split,cross_val_score
from sklearn import metrics
from sklearn.datasets import make_moons, make_circles, make_classification
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.svm import SVC
Explanation: Support Vector Machine - Basics
Support Vector Machine (SVM) is one of the most commonly used algorithms.
It can be used for both classification and regression.
Today we will walk through the basics of using SVM for classification.
There are three sub modules in the svm module of scikit-learn.
LinearSVC, the linear version of SVC, same as SVC with linear kernel (more efficient).
SVC, use svm for classification.
SVR, use svm for regression.
The layout of the session will be as follows:
linear SVM
kernels
End of explanation
X = np.r_[np.random.randn(20,2) - [2,2],np.random.randn(20,2)+[2,2]]
Y = [0]*20+[1]*20
plt.scatter(X[:,0],X[:,1],c=Y,cmap=plt.cm.Paired)
plt.axis('tight')
plt.show()
Explanation: We start with a simple example of linear SVC; here we use the 'linear' kernel of SVC.
We'll introduce the concept of a kernel later on.
We start by creating a data set that is a collection of two groups of points separated by a plane.
End of explanation
clf=SVC(kernel='linear', C=0.001)
clf.fit(X,Y)
Explanation: We initialize the SVC and fit our data as simply as the following.
End of explanation
# get the separating hyper plane.
w=clf.coef_[0]
a=-w[0]/w[1]
xx=np.linspace(-5,5)
yy = a*xx-(clf.intercept_[0])/w[1]
# plot the paralles to the separating hyperplane that pass through the supporting vectors.
b=clf.support_vectors_[0]
yy_down = a*xx+(b[1]-a*b[0])
b=clf.support_vectors_[-1]
yy_up = a*xx + (b[1]-a*b[0])
#to compare with logistic regression
logistic = LogisticRegression()
logistic.fit(X,Y)
yy_log= -logistic.coef_[0][0]/logistic.coef_[0][1]*xx+logistic.intercept_[0]/logistic.coef_[0][1]
# let's look at the relation of these planes to our cluster of points.
plt.plot(xx,yy,'k-')
plt.plot(xx,yy_down,'k--')
plt.plot(xx,yy_up,'k--')
plt.plot(xx,yy_log,'r-')
plt.scatter(clf.support_vectors_[:,0],clf.support_vectors_[:,1],s=80,facecolors='none')
plt.scatter(X[:,0],X[:,1],c=Y,cmap=plt.cm.Paired)
plt.axis('tight')
plt.show()
Explanation: Let's examine how SVC tries to classify the points.
Before that, we want to first introduce a few attributes of the SVC class.
hyperplane: the plane used to separate the classes.
coef_: the weight vector w of the separating hyperplane (available for linear kernels).
intercept_: the intercept (bias) term of the hyperplane.
support vectors: the training examples that are closest to the hyperplane are called support vectors.
decision function: the signed distance of the samples X to the separating hyperplane.
End of explanation
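As a quick illustration of these attributes (a small sketch reusing the clf and X fitted above), the weights, support vectors and signed distances can be inspected directly:
print("w =", clf.coef_[0])            # weight vector of the separating hyperplane
print("b =", clf.intercept_[0])       # intercept (bias) term
print("number of support vectors:", len(clf.support_vectors_))
print(clf.decision_function(X[:3]))   # signed distance of the first 3 samples to the hyperplane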
names = ['LinearSVC','LinearSVC, C=0.025','SVCRBF,gamma=2','SVCRBF,gamma=10','SVCPOLY,degree=2','SVCPOLY,degree=4,coef=10,gamma=0.1']
classifiers=[
SVC(kernel="linear"),
SVC(kernel="linear", C=0.025),
SVC(gamma=2),
SVC(gamma=10),
SVC(kernel="poly",degree=2),
SVC(kernel="poly",degree=4,coef0=10,gamma=0.1)
]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.4, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
Explanation: In the above example, linear SVM attempts to find the line that separates the two classes
and ensures maximum margin.
maximum margin - The distance from the points on either side of the group to the line is maximized. This is determined by the hyperplanes passing through the supporting vectors.
The selected support vectors can be used to define the margins.
The plane found by SVM would usually be slightly different from the logistic regression one since they optimize different properties.
Introduce the Kernels and their parameters:
- linear: C
- RBF: gamma, C
- Polynomial: degree, coef0, gamma
C>0 is the penalty parameter of the error term (data points that fall into the wrong class), and linear SVM with a small C behaves more like logistic regression.
Let's compare the properties of the three kernels using three examples.
We use the make_moons, make_circles and make_classification functions from scikit-learn to create three different
types of data sets:
- two classes that make a circle together.
- two classes that can be separated by a circle.
- two linearly separable classes.
End of explanation
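To isolate the effect of the penalty parameter C, here is a small standalone sketch (its toy blobs are generated locally so it does not disturb the datasets above); a smaller C softens the margin and typically keeps more support vectors:
rng = np.random.RandomState(0)
X_demo = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y_demo = [0] * 20 + [1] * 20
for C_value in (0.001, 1.0, 100.0):
    clf_demo = SVC(kernel="linear", C=C_value).fit(X_demo, y_demo)
    print("C=%g -> %d support vectors" % (C_value, len(clf_demo.support_vectors_)))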
from matplotlib.colors import ListedColormap
figure = plt.figure(figsize=(27, 9))
i = 1
h = 0.1
# iterate over datasets
for ds in datasets:
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.3)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = plt.subplot(len(datasets), len(classifiers) + 1, i)
score=np.mean(cross_val_score(clf,X,y,cv=3,scoring='accuracy'))
# Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
clf.fit(X, y)
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
v = np.linspace(-6, 6., 9, endpoint=True)
CS=ax.contourf(xx, yy, Z,v, cmap=cm, alpha=.8)
if name=='SVCPOLY,degree=4,coef=10,gamma=0.1':
plt.colorbar(CS,ticks=v)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
Explanation: We apply each of the above methods to the three datasets and plot the decision boundaries.
We scale each dataset first using the StandardScaler() function from scikit-learn.
By default, this shifts each feature to a mean of 0 and a standard deviation of 1.
End of explanation
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.grid_search import GridSearchCV
X,Y=datasets[0]
X = StandardScaler().fit_transform(X)
C_range = np.logspace(-3, 3, 10)
gamma_range = np.logspace(-3, 3, 10)
param_grid = dict(gamma=gamma_range, C=C_range)
cv = StratifiedShuffleSplit(Y, n_iter=5, test_size=0.3, random_state=42)
grid = GridSearchCV(SVC(kernel='rbf'), scoring="accuracy", param_grid=param_grid, cv=cv)
grid.fit(X, Y)
print "The best parameters are %s with a score of %0.4f" % (grid.best_params_, grid.best_score_)
Explanation: There is a lot of information in the above figures; let's break it down step by step:
Decreasing C means more regularization and a smoother decision function.
Increasing gamma in RBF kernels and increasing the degree in polynomial kernels means a more complicated decision function.
RBF kernels can be easily used in general situations.
Linear kernels and polynomial kernels do best when the underlying hyperplane can be modeled with linear/polynomial functions.
Use the grid search method from scikit-learn to fine-tune the SVM algorithms.
End of explanation |
8,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Determining the worst winter ever in Chicago
The object of this exercise is to take weather observations from past winters in Chicago and determine which of them could be considered the worst winter ever. Various criteria will be used, such as the number of days below zero degrees (F) and the number of days with heavy snowfall, and a Badness Index will be assigned to each winter using these criteria.
Collecting data from NOAA National Climate Data Center
NOAA has some great weather records. These data points come from the weather station at Midway airport, starting in 1928 (measurements at O'Hare start around 1962). I pulled the dataset from NOAA-NCDC at http
Step1: Definition of variables
Here are the variables used to determine which Chicago winter was the worst. In the future I'd love to include others related to wind chill, cloud cover, high gusts, freezing rain, and other wintery hazards, but this monthly NCDC dataset didn't include them. Perhaps the daily figures are worth looking into.
The units are American
Step2: The Badness Index of each winter
To determine the badness index for a particular winter, first we assign to each of its variables a score from 0 to 100. A score of 100 means it was the worst or coldest recorded value (for example, more snowfall than any other winter) and a score of 0 means it was the least bad or warmest recorded value (for example, less snowfall than any winter); otherwise the variable will get a score somewhere in between (on a linear scale). Then each winter is assigned a badness index equal to the average of each of its variable scores, ranging from 0 to 100.
Step3: There you have it! Some candidates for Worst Winter Ever can be determined by the highest peaks. The winter of 2013-14 was pretty bad, but it paled in comparison to the winter of 1978-79.
Bonus Statistical Analysis [UNDER CONSTRUCTION]
Here we'll dive into some Principal Component Analysis, which basically extracts the most prominent trends from all the variables for further inspection. Check Wikipedia for more info. | Python Code:
import pandas as pd
# Read data, sort by year & month
dateparse = lambda x: pd.datetime.strptime(x, '%Y%m%d')
noaa_monthly = pd.read_csv('chicago-midway-noaa.csv', index_col=2,
parse_dates=True, date_parser=dateparse, na_values=-9999)
noaa_monthly = noaa_monthly.groupby([noaa_monthly.index.year, noaa_monthly.index.month]).sum()
# Fix "suspicious" entry in January 1930, based on a NOAA source
noaa_monthly.loc[(1930, 1), 'MXSD'] = 268 # conversion: 268 mm == 11 in
# Sum seasonal totals
winter_vars = ['MNTM','EMNT','DT00','DX32','MXSD','EMXP','TSNW','DP10']
year_start = 1928
year_end = 2014
season_start = 11 #November
season_end = 3 #March
noaa_winters = pd.concat(
[noaa_monthly.loc[(year, season_start):(year+1, season_end), winter_vars].sum(axis=0)
for year in range(year_start, year_end+1)], axis=1).transpose()
noaa_winters.index = range(year_start, year_end+1)
# Fix variables that should have been handled differently
noaa_winters['TSNW'] /= 24.4
for year in noaa_winters.index:
noaa_winters.loc[year, 'MNTM'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'MNTM'].mean() * 0.18 + 32
noaa_winters.loc[year, 'EMNT'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'EMNT'].min() * 0.18 + 32
noaa_winters.loc[year, 'MXSD'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'MXSD'].max() / 24.4
noaa_winters.loc[year, 'EMXP'] = \
noaa_monthly.loc[(year, season_start):(year+1, season_end), 'EMXP'].max() / 24.4
Explanation: Determining the worst winter ever in Chicago
The object of this exercise is to take weather observations from past winters in Chicago and determine which of them could be considered the worst winter ever. Various criteria will be used, such as the number of days below zero degrees (F) and the number of days with heavy snowfall, and a Badness Index will be assigned to each winter using these criteria.
Collecting data from NOAA National Climate Data Center
NOAA has some great weather records. These data points come from the weather station at Midway airport, starting in 1928 (measurements at O'Hare start around 1962). I pulled the dataset from NOAA-NCDC at http://www.ncdc.noaa.gov/cdo-web/datatools, specifically the Monthly Summaries data from CHICAGO MIDWAY AIRPORT 3 SW. The data is directly available here: https://github.com/MarcKjerland/Worst-Winter-Ever/blob/master/chicago-midway-noaa.csv.
Here I've defined winter as November through March. Your definition may vary! Some of the variables would translate well to an expanded winter season. Further criteria could be added to highlight painfully long winters or miserable holiday travel conditions, for example.
In this first code section I do some data wrangling to prepare it for the analysis.
End of explanation
acronym = { 'DP10': 'Number of days with greater than or equal to 1.0 inch of precipitation',
'MXSD': 'Maximum snow depth, inches',
'EMXP': 'Extreme maximum daily precipitation, inches',
'DT00': 'Number days with minimum temperature less than or equal to 0.0 F',
'DX32': 'Number days with maximum temperature less than or equal to 32.0 F',
'EMNT': 'Extreme minimum daily temperature',
'TSNW': 'Total snow fall, inches',
'MNTM': 'Mean temperature'}
# Plot variables
import matplotlib.pyplot as plt
%matplotlib inline
for v in noaa_winters.columns:
noaa_winters[v].plot(figsize=(13,3), color='skyblue');
pd.rolling_mean(noaa_winters[v], 20).plot(color='blue')
plt.title(acronym[v])
plt.legend(["observed data", "20-year rolling average"], loc='best')
plt.show()
Explanation: Definition of variables
Here are the variables used to determine which Chicago winter was the worst. In the future I'd love to include others related to wind chill, cloud cover, high gusts, freezing rain, and other wintery hazards, but this monthly NCDC dataset didn't include them. Perhaps the daily figures are worth looking into.
The units are American: inches and Fahrenheit.
(Note: the max snow depth in 1929-30 appears to be incorrect, although there was a lot of snow that winter.)
End of explanation
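As a quick sanity check on that suspicious value (an illustrative one-liner, not part of the original analysis), the winters with the largest maximum snow depth can be listed directly:
# Show the five winters with the largest maximum snow depth
noaa_winters['MXSD'].nlargest(5)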
# Find the best & worst for each variable
winter_coldest = pd.Series(index=noaa_winters.columns)
winter_warmest = pd.Series(index=noaa_winters.columns)
# For these variables, big is bad
for v in ['MXSD','EMXP','DT00','DX32','TSNW','DP10']:
winter_coldest[v] = noaa_winters[v].max()
winter_warmest[v] = noaa_winters[v].min()
# For these variables, small (or negative) is bad
for v in ['MNTM','EMNT']:
winter_coldest[v] = noaa_winters[v].min()
winter_warmest[v] = noaa_winters[v].max()
# Assign scores to each year
winter_score = 100 * (noaa_winters-winter_warmest).abs() / (winter_coldest-winter_warmest).abs()
badness = winter_score.mean(axis=1)
# Plot the Badness Index
badness.plot(figsize=(13,6), marker='s', color='skyblue', xticks=badness.index[2::5])
pd.rolling_mean(badness, 20).plot(color='blue')
plt.title("Badness Index of each Chicago winter")
plt.ylabel("Badness index")
plt.xlabel("Year (start of winter)")
plt.legend(["Computed Badness", "20-year rolling average"])
plt.show()
Explanation: The Badness Index of each winter
To determine the badness index for a particular winter, first we assign to each of its variables a score from 0 to 100. A score of 100 means it was the worst or coldest recorded value (for example, more snowfall than any other winter) and a score of 0 means it was the least bad or warmest recorded value (for example, less snowfall than any winter); otherwise the variable will get a score somewhere in between (on a linear scale). Then each winter is assigned a badness index equal to the average of each of its variable scores, ranging from 0 to 100.
End of explanation
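For concreteness, the scoring rule described above can be written as a tiny helper; the numbers in the example call are made up, purely to illustrate the linear scaling:
def variable_score(value, warmest, coldest):
    # 0 = least bad (warmest) recorded value, 100 = worst (coldest) recorded value
    return 100.0 * abs(value - warmest) / abs(coldest - warmest)
# e.g. hypothetical snowfall: least 10 in, most 90 in, this winter 50 in -> score 50.0
print(variable_score(50, warmest=10, coldest=90))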
z = (noaa_winters - noaa_winters.mean()) / noaa_winters.std()
from sklearn.decomposition import PCA
pca = PCA(n_components=4)
pca.fit(z)
pca_components = pd.DataFrame(pca.components_, index=['PC'+str(i) for i in range(1,pca.n_components_+1)], \
columns=z.columns)
pca_scores = pd.DataFrame(pca.transform(z), index=z.index, columns=pca_components.index )
print "Explained variance ratios:", pca.explained_variance_ratio_
pca_scores.plot(figsize=(13,8))
plt.legend(loc='best')
plt.title('Principal component scores')
plt.show()
# Cluster analysis
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, dendrogram
dissimilarity = 1 - noaa_winters.corr().abs()
row_distance = np.clip(squareform(dissimilarity),0,1)
L = linkage(row_distance, method='average')
plt.figure(figsize=(13,9), dpi=100)
plt.subplot(212)
R = dendrogram(L, orientation='bottom')
plt.ylabel('Cluster distance (UPGMA)')
# Matched up with PC loadings (scaled by corresponding PC variances)
leaves = [pca_components.columns[i] for i in R['leaves']]
plt.subplot(211)
(pca_components[leaves].iloc[0] * pca.explained_variance_[0]).plot(kind='bar', color='blue')
(pca_components[leaves].iloc[1] * pca.explained_variance_[1]).plot(kind='bar', color='green')
(pca_components[leaves].iloc[2] * pca.explained_variance_[2]).plot(kind='bar', color='red')
(pca_components[leaves].iloc[3] * pca.explained_variance_[3]).plot(kind='bar', color='cyan')
plt.ylabel('PC loadings times PC variance')
plt.legend(loc='best')
plt.title('Components of each variable: PC loadings scaled by corresponding PC variances')
plt.show()
Explanation: There you have it! Some candidates for Worst Winter Ever can be determined by the highest peaks. The winter of 2013-14 was pretty bad, but it paled in comparison to the winter of 1978-79.
Bonus Statistical Analysis [UNDER CONSTRUCTION]
Here we'll dive into some Principal Component Analysis, which basically extracts the most prominent trends from all the variables for further inspection. Check Wikipedia for more info.
End of explanation |
8,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
InfluxDB Logger Example
This notebook is a small demo of how to use gpumon in Jupyter notebooks and some convenience methods for working with GPUs
You will need to have PyTorch and Torchvision installed to run this as well as the python InfluxDB client
To install PyTorch and associated requirements run the following
Step1: Let's create a simple CNN and run the CIFAR dataset against it to see the load on our GPU
Step2: Be careful that you specify the correct host and credentials in the context below
Step3: If you had your Grafana dashboard running you should have seen the measurements there. You can also pull the data from the database using the InfluxDB python client | Python Code:
from gpumon import device_count, device_name
device_count() # Returns the number of GPUs available
device_name() # Returns the type of GPU available
Explanation: InfluxDB Logger Example
This notebook is a small demo of how to use gpumon in Jupyter notebooks and some convenience methods for working with GPUs
You will need to have PyTorch and Torchvision installed to run this as well as the python InfluxDB client
To install PyTorch and associated requirements run the following:
bash
conda install pytorch torchvision cuda80 -c pytorch
To install python InfluxDB client
bash
pip install influxdb
see here for more details on the InfluxDB client
End of explanation
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64,
shuffle=True, num_workers=4)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
net.cuda()
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
from gpumon.influxdb import log_context
display_every_minibatches=100
Explanation: Let's create a simple CNN and run the CIFAR dataset against it to see the load on our GPU
End of explanation
with log_context('localhost', 'admin', 'password', 'gpudb', 'gpuseries'):
for epoch in range(20): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
print('[%d] loss: %.3f' %
(epoch + 1, running_loss / (i+1)))
print('Finished Training')
Explanation: Be careful that you specify the correct host and credentials in the context below
End of explanation
from influxdb import InfluxDBClient, DataFrameClient
client = InfluxDBClient(host='localhost', username='admin', password='password', database='gpudb')
client.get_list_measurements()
data = client.query('select * from gpuseries limit 10;')
type(data)
data
df_client = DataFrameClient(host='localhost', username='admin', password='password', database='gpudb')
df = df_client.query('select * from gpuseries limit 100;')['gpuseries']
df.head(100)
Explanation: If you had your Grafana dashboard running you should have seen the measurements there. You can also pull the data from the database using the InfluxDB python client
End of explanation |
8,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Histograms are a useful type of statistics plot for engineers. A histogram is a type of bar plot that shows the frequency or number of values compared to a set of value ranges. Histogram plots can be created with Python and the plotting package matplotlib. The plt.hist() function creates histogram plots.
Before matplotlib can be used, matplotlib must first be installed. To install matplotlib open the Anaconda Prompt (or use a terminal and pip) and type
Step1: For our dataset, let's define a mean (average) mu = 80 and a standard deviation (spread) sigma = 7. Then we'll use numpy's np.random.normal() function to produce an array of random numbers with a normal distribution. 200 random numbers is a sufficient quantity to plot. The general format of the np.random.normal() function is below
Step2: Matplotlib's plt.hist() function produces histogram plots. The first positional argument passed to plt.hist() is a list or array of values, the second positional argument denotes the number of bins on the histogram.
python
plt.hist(values, num_bins)
Similar to matplotlib line plots, bar plots and pie charts, a set of keyword arguments can be included in the plt.hist() function call. Specifying values for the keyword arguments customizes the histogram. Some keyword arguments we can use with plt.hist() are
Step3: Our next histogram example involves a list of commute times. Suppose the following commute times were recorded in a survey
Step4: Now we'll call plt.hist() and include our commute_times list and specify 5 bins.
Step5: If we want our bins to have specific bin ranges, we can specify a list or array of bin edges in the keyword argument bins=. Let's also add some axis labels and a title to the histogram. A table of some keyword arguments used with plt.hist() is below | Python Code:
import matplotlib.pyplot as plt
import numpy as np
# if using a Jupyter notebook, include:
%matplotlib inline
Explanation: Histograms are a useful type of statistics plot for engineers. A histogram is a type of bar plot that shows the frequency or number of values compared to a set of value ranges. Histogram plots can be created with Python and the plotting package matplotlib. The plt.hist() function creates histogram plots.
Before matplotlib can be used, matplotlib must first be installed. To install matplotlib open the Anaconda Prompt (or use a terminal and pip) and type:
```text
conda install matplotlib
```
or
text
$ pip install matplotlib
If you are using the Anaconda distribution of Python, matplotlib is already installed.
To create a histogram with matplotlib, first import matplotlib with the standard line:
python
import matplotlib.pyplot as plt
The alias plt is commonly used for matplotlib's pyplot library and will look familiar to other programmers.
In our first example, we will also import numpy with the line import numpy as np. We'll use numpy's random number generator to create a dataset for us to plot. If using a Jupyter notebook, include the line %matplotlib inline below the imports.
End of explanation
mu = 80
sigma = 7
x = np.random.normal(mu, sigma, size=200)
Explanation: For our dataset, let's define a mean (average) mu = 80 and a standard deviation (spread) sigma = 7. Then we'll use numpy's np.random.normal() function to produce an array of random numbers with a normal distribution. 200 random numbers is a sufficient quantity to plot. The general format of the np.random.normal() function is below:
python
var = np.random.normal(mean, stdev, size=<number of values>)
End of explanation
plt.hist(x, 20,
density=True,
histtype='bar',
facecolor='b',
alpha=0.5)
plt.show()
Explanation: Matplotlib's plt.hist() function produces histogram plots. The first positional argument passed to plt.hist() is a list or array of values, the second positional argument denotes the number of bins on the histogram.
python
plt.hist(values, num_bins)
Similar to matplotlib line plots, bar plots and pie charts, a set of keyword arguments can be included in the plt.hist() function call. Specifying values for the keyword arguments customizes the histogram. Some keyword arguments we can use with plt.hist() are:
* density=
* histtype=
* facecolor=
* alpha=(opacity).
End of explanation
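As a small variation on the call above (same random x data; the styling values are arbitrary), an unfilled 'step' histogram can be drawn like this:
plt.hist(x, 20,
         density=True,
         histtype='step',
         color='r',
         linewidth=2)
plt.show()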
import matplotlib.pyplot as plt
# if using a Jupyter notebook, include:
%matplotlib inline
commute_times = [23, 25, 40, 35, 36, 47, 33, 28, 48, 34,
20, 37, 36, 23, 33, 36, 20, 27, 50, 34,
47, 18, 28, 52, 21, 44, 34, 13, 40, 49]
Explanation: Our next histogram example involves a list of commute times. Suppose the following commute times were recorded in a survey:
text
23, 25, 40, 35, 36, 47, 33, 28, 48, 34,
20, 37, 36, 23, 33, 36, 20, 27, 50, 34,
47, 18, 28, 52, 21, 44, 34, 13, 40, 49
Let's plot a histogram of these commute times. First, import matplotlib as in the previous example, and include %matplotlib inline if using a Jupyter notebook. Then build a Python list of commute times from the survey data above.
End of explanation
plt.hist(commute_times, 5)
plt.show()
Explanation: Now we'll call plt.hist() and include our commute_times list and specify 5 bins.
End of explanation
bin_edges = [0,15,30,45,60]
plt.hist(commute_times,
bins=bin_edges,
density=False,
histtype='bar',
color='b',
edgecolor='k',
alpha=0.5)
plt.xlabel('Commute time (min)')
plt.xticks([0,15,30,45,60])
plt.ylabel('Number of commuters')
plt.title('Histogram of commute times')
plt.show()
Explanation: If we want our bins to have specific bin ranges, we can specify a list or array of bin edges in the keyword argument bins=. Let's also add some axis labels and a title to the histogram. A table of some keyword arguments used with plt.hist() is below:
| keyword argument | description | example |
| --- | --- | --- |
| bins= | list of bin edges | bins=[5, 10, 20, 30] |
| density= | if True, data is normalized | density=False |
| histtype= | type of histogram: 'bar', 'barstacked', 'step' or 'stepfilled' | histtype='bar' |
| color= | bar color | color='b' |
| edgecolor= | bar edge color | edgecolor='k' |
| alpha= | bar opacity | alpha=0.5 |
Let's specify our bins in 15 min increments. This means our bin edges are [0,15,30,45,60]. We'll also specify density=False, color='b'(blue), edgecolor='k'(black), and alpha=0.5(half transparent). The lines plt.xlabel(), plt.ylabel(), and plt.title() give our histogram axis labels and a title. plt.xticks() defines the location of the x-axis tick labels. If the bins are spaced out at 15 minute intervals, it makes sense to label the x-axis at these same intervals.
End of explanation |
8,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
Step2: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient)
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Now, run backward propagation.
Step12: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still | Python Code:
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
Explanation: Gradient Checking
Welcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking.
You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker.
But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".
Let's do it!
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
### START CODE HERE ### (approx. 1 line)
J = np.dot(theta, x)
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
Explanation: 1) How does gradient checking work?
Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.
Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$.
Let's look back at the definition of a derivative (or gradient):
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."
We know the following:
$\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly.
You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct.
Let's use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct!
2) 1-dimensional gradient checking
Consider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.
You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct.
<img src="images/1Dgrad_kiank.png" style="width:600px;height:250px;">
<caption><center> <u> Figure 1 </u>: 1D linear model<br> </center></caption>
The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation").
Exercise: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
End of explanation
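As a quick numerical sanity check of formula (1) (an illustrative aside, not part of the graded exercise), the two-sided difference for J(theta) = theta * x with x = 2 and theta = 4 already recovers the true derivative x = 2:
x_demo, theta_demo, eps = 2.0, 4.0, 1e-7
grad_approx_demo = ((theta_demo + eps) * x_demo - (theta_demo - eps) * x_demo) / (2 * eps)
print(grad_approx_demo)  # approximately 2.0, i.e. dJ/dtheta = x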
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
Explanation: Expected Output:
<table style=>
<tr>
<td> ** J ** </td>
<td> 8</td>
</tr>
</table>
Exercise: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
End of explanation
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon=1e-7):
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2 * epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print("The gradient is correct!")
else:
print("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
Explanation: Expected Output:
<table>
<tr>
<td> ** dtheta ** </td>
<td> 2 </td>
</tr>
</table>
Exercise: To show that the backward_propagation() function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.
Instructions:
- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow:
1. $\theta^{+} = \theta + \varepsilon$
2. $\theta^{-} = \theta - \varepsilon$
3. $J^{+} = J(\theta^{+})$
4. $J^{-} = J(\theta^{-})$
5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$
- Then compute the gradient using backward propagation, and store the result in a variable "grad"
- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:
$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$
You will need 3 Steps to compute this formula:
- 1'. compute the numerator using np.linalg.norm(...)
- 2'. compute the denominator. You will need to call np.linalg.norm(...) twice.
- 3'. divide them.
- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
End of explanation
def forward_propagation_n(X, Y, parameters):
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1. / m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
Explanation: Expected Output:
The gradient is correct!
<table>
<tr>
<td> ** difference ** </td>
<td> 2.9193358103083e-10 </td>
</tr>
</table>
Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in backward_propagation().
Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it!
3) N-dimensional gradient checking
The following figure describes the forward and backward propagation of your fraud detection model.
<img src="images/NDgrad_kiank.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: deep neural network<br>LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID</center></caption>
Let's look at your implementations for forward propagation and backward propagation.
End of explanation
def backward_propagation_n(X, Y, cache):
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1. / m * np.dot(dZ3, A2.T)
db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1. / m * np.dot(dZ2, A1.T) * 2 # Should not multiply by 2
db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1. / m * np.dot(dZ1, X.T)
db1 = 4. / m * np.sum(dZ1, axis=1, keepdims=True) # Should not multiply by 4
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
Explanation: Now, run backward propagation.
End of explanation
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 1e-7:
print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
Explanation: You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct.
How does gradient checking work?
As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:
$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$
However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "dictionary_to_vector()" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.
The inverse function is "vector_to_dictionary" which outputs back the "parameters" dictionary.
<img src="images/dictionary_to_vector.png" style="width:600px;height:400px;">
<caption><center> <u> Figure 2 </u>: dictionary_to_vector() and vector_to_dictionary()<br> You will need these functions in gradient_check_n()</center></caption>
We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.
Exercise: Implement gradient_check_n().
Instructions: Here is pseudo-code that will help you implement the gradient check.
For each i in num_parameters:
- To compute J_plus[i]:
1. Set $\theta^{+}$ to np.copy(parameters_values)
2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$
3. Calculate $J^{+}_i$ using to forward_propagation_n(x, y, vector_to_dictionary($\theta^{+}$ )).
- To compute J_minus[i]: do the same thing with $\theta^{-}$
- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$
Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to parameter_values[i]. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute:
$$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
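As a quick illustrative sanity check (my own toy example, not part of the exercise), formulas (1) and (3) in one dimension for $J(\theta) = \theta^2$, whose true gradient is $2\theta$:
import numpy as np
theta, epsilon = 3.0, 1e-7
J = lambda t: t ** 2                                   # toy cost function
grad = 2 * theta                                       # analytic gradient
gradapprox = (J(theta + epsilon) - J(theta - epsilon)) / (2 * epsilon)
difference = abs(grad - gradapprox) / (abs(grad) + abs(gradapprox))
print(difference)                                      # very small (around 1e-10), so the gradient matches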
End of explanation |
8,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Trace Analysis Examples
Kernel Functions Profiling
Details on functions profiling are given in Plot Functions Profiling Data below.
Step1: Import required modules
Step2: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
Step3: Workload Execution and Functions Profiling Data Collection
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Step4: Parse Trace and Profiling Data
Step5: Report Functions Profiling Data
Step6: Plot Functions Profiling Data
The only method of the FunctionsAnalysis class that is used for functions profiling is plotProfilingStats. This method is used to plot functions profiling metrics for the specified kernel functions. For each specified metric a barplot is generated which reports the value of the metric when the kernel function has been executed on each CPU.
The default metric is avg if not otherwise specified. A list of kernel functions to plot can also be passed to plotProfilingStats. Otherwise, by default, all the kernel functions are plotted. | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
Explanation: Trace Analysis Examples
Kernel Functions Profiling
Details on functions profiling are given in Plot Functions Profiling Data below.
End of explanation
# Generate plots inline
%matplotlib inline
import json
import os
# Support to access the remote target
import devlib
from env import TestEnv
from executor import Executor
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
# Support for trace events analysis
from trace import Trace
Explanation: Import required modules
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
"password" : 'juno',
# Folder where all the results will be collected
"results_dir" : "TraceAnalysis_FunctionsProfiling",
# Define devlib modules to load
"modules": ['cpufreq'],
"exclude_modules" : [ 'hwmon' ],
    # FTrace events to collect for all the test configurations which have
# the "ftrace" flag enabled
"ftrace" : {
"functions" : [
"pick_next_task_fair",
"select_task_rq_fair",
"enqueue_task_fair",
"update_curr_fair",
"dequeue_task_fair",
],
"buffsize" : 100 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
# "rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
def experiment(te):
    # Create an RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# FTrace the execution of this workload
te.ftrace.start()
rtapp.run(out_dir=te.res_dir)
te.ftrace.stop()
# Collect and keep track of the trace
trace_file = os.path.join(te.res_dir, 'trace.dat')
te.ftrace.get_trace(trace_file)
# Collect and keep track of the Kernel Functions performance data
stats_file = os.path.join(te.res_dir, 'trace.stats')
te.ftrace.get_stats(stats_file)
# Dump platform descriptor
te.platform_dump(te.res_dir)
experiment(te)
Explanation: Workload Execution and Functions Profiling Data Collection
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
End of explanation
# Base folder where tests folder are located
res_dir = te.res_dir
logging.info('Content of the output folder %s', res_dir)
!tree {res_dir}
with open(os.path.join(res_dir, 'platform.json'), 'r') as fh:
platform = json.load(fh)
print(json.dumps(platform, indent=4))
logging.info('LITTLE cluster max capacity: %d',
platform['nrg_model']['little']['cpu']['cap_max'])
trace = Trace(platform, res_dir, events=[])
Explanation: Parse Trace and Profiling Data
End of explanation
# Get the DataFrame for the specified list of kernel functions
df = trace.data_frame.functions_stats(['enqueue_task_fair', 'dequeue_task_fair'])
df
# Get the DataFrame for the single specified kernel function
df = trace.data_frame.functions_stats('select_task_rq_fair')
df
Explanation: Report Functions Profiling Data
End of explanation
# Plot Average and Total execution time for the specified
# list of kernel functions
trace.analysis.functions.plotProfilingStats(
functions = [
'select_task_rq_fair',
'enqueue_task_fair',
'dequeue_task_fair'
],
metrics = [
# Average completion time per CPU
'avg',
# Total execution time per CPU
'time',
]
)
# Plot Average execution time for the single specified kernel function
trace.analysis.functions.plotProfilingStats(
functions = 'update_curr_fair',
)
Explanation: Plot Functions Profiling Data
The only method of the FunctionsAnalysis class that is used for functions profiling is plotProfilingStats. This method is used to plot functions profiling metrics for the specified kernel functions. For each specified metric a barplot is generated which reports the value of the metric when the kernel function has been executed on each CPU.
The default metric is avg if not otherwise specified. A list of kernel functions to plot can also be passed to plotProfilingStats. Otherwise, by default, all the kernel functions are plotted.
End of explanation |
8,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 2 – Lists, conditionals and loops
Recap on Variables
A variable
named cell of memory storing a single value
Assigned a value using the equals symbol
Can be given any name you like
Except no spaces and cannot start with a number
Variables can hold
Step1: Lists can be accessed by numerical position (aka index)
value
Step2: Slicing can also be used with strings in the same way. Just imagine that each character in the string is the same as a list element
Changing a list value
Changing a value in a specific position is similar to defining a variable
Step3: Print all values in a list
Step4: Note
Step5: Other useful list methods
Step6: Sometimes we may want to capture the popped value.
Step7: Sorting list
sorted() – is a built-in function and will create a new list object
Step8: sort() – is a method and will replace the original list object
Step9: Sorting lists numerically
Step10: Sorting big to small
Step11: Finding the length of a list
Step12: CONDITIONALS
Often you will want to add logic to your programs
This is done with simple statements (in nearly all languages)
Step13: Examples of logical operators in action
Step14: Boolean logic
Boolean logic is used for combining conditional statements
and
or
not
Used with logical operators, these can ask questions of almost unlimited complexity
Important to make sure you understand the logic of your question to combine conditions correctly
Using and
Code to test if both marks are greater than or equal to 70
Code
Step15: Using or
Code to test if either mark is less than 50
Step16: Converting English to Boolean
“If the day is not Monday or Friday, I am researching”
Step17: This is wrong!
Statement really is
Step18: i.e. with negatives both conditions have to be true
Nested statements
Step19: Note
Step20: This tests the if statement first, then the first elif statement and so on.
If no condition is matched, the else condition will be evaluated.
exit
exit() is very useful for forcing a program to finish
Step21: exit is very useful for error checking in your program e.g. if you want to check that something must happen to avoid a runtime error (remember example of dividing by zero last week...)
LOOPS
Looping structures
Two main structures in Python
for
while
for
Used for repeating code a number of times or iterating over items in an object such as a list
while
Used for looping in a conditional manner
for loops
Step22: Note
Step23: while loops
while loops are completely flexible
Step24: while loops are used a lot for reading from files (lecture 5) – the conditional statement is that you keep reading until there are no lines left in the file
Loops that never end...
Need to be careful that your loops actually finish | Python Code:
exam_scores = [67,78,94,45,55,66]
print("scores: " ,exam_scores)
Explanation: Lecture 2 – Lists, conditionals and loops
Recap on Variables
A variable
named cell of memory storing a single value
Assigned a value using the equals symbol
Can be given any name you like
Except no spaces and cannot start with a number
Variables can hold:
a string (i.e. text): message = "Hello, World!"
an integer: x = 5
floating-point: y = 5.3741
You can perform basic operations on variables
z = x + y
You can call functions on your variables
length = len(message)
Overview
Lists
Very useful structure for storing data
Can store multiple data types together
Appending, slicing, sorting, printing
Conditionals
Can apply logic to data
Boolean logic (True/False)
issues with converting plain language into logical statement
Loops
for and while can be used for all sorts of applications
for generally used for working with lists
LISTS
A list of numbers: 67 78 94 45 55
End of explanation
exam_scores = [67,78,94,45,55]
print("score 2: " ,exam_scores[1])
print("score 3: " ,exam_scores[2])
print("score 2 & 3: " ,exam_scores[1:3])
Explanation: Lists can be accessed by numerical position (aka index)
value: 67 78 94 45 55
position: 0 1 2 3 4
You can also take "slices" using:
End of explanation
exam_scores = [67,78,94,45,55]
exam_scores[2] = 90
print("score: " ,exam_scores[2])
Explanation: Slicing can also be used with strings in the same way. Just imagine that each character in the string is the same as a list element
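For example (a quick illustration, not from the original slides):
message = "Hello, World!"
print(message[0])       # 'H' - a single character, like a single list element
print(message[0:5])     # 'Hello' - a slice of characters 0 to 4
print(message[-6:])     # 'World!' - negative indices count from the end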
Changing a list value
Changing a value in a specific position is similar to defining a variable
End of explanation
exam_scores = [67,78,94,45,55]
print ("Here are the scores:",exam_scores)
Explanation: Print all values in a list
End of explanation
exam_scores = [67,78,94,45,55]
exam_scores.append(32) #add a variable to this list
print ("Here are the scores:",exam_scores)
Explanation: Note: If you want to print all elements individually (and do something else with them), you should loop through them – later in lecture...
Adding to lists
You can add values to the end of a list using the append() method
Append is a method so it must be called on a list
End of explanation
# extend(list) - appends another list
exam_scores = [67,78,94,45,55]
exam_scores2 = [54,73,65]
exam_scores.extend(exam_scores2)
print (exam_scores)
# insert(index,item)-insert an item at a given index
exam_scores = [67,78,94,45,55]
exam_scores.insert(4,90)
print (exam_scores)
exam_scores = [67,78,94,45,55]
exam_scores.pop(2)
print (exam_scores)
Explanation: Other useful list methods
End of explanation
exam_scores = [67,78,94,45,55]
popped_value = exam_scores.pop(2)
print (exam_scores)
print("Popped value=", popped_value)
exam_scores = [67,78,94,45,55]
exam_scores.reverse()
print (exam_scores)
Explanation: Sometimes we may want to capture the popped value.
End of explanation
names = ["James", "John", "Andy", "Ben", "Chris", "Thomas"]
sorted_names = sorted(names)
print("Sorted names:" ,sorted_names)
Explanation: Sorting list
sorted() – is a built-in function and will create a new list object
End of explanation
names = ["James", "John", "Andy", "Ben", "Chris", "Thomas"]
names.sort()
print("Sorted names:",names)
Explanation: sort() – is a method and will replace the original list object
End of explanation
values = [5, 7, 4, 6, 1, 2]
sorted_values = sorted(values)
print("Sorted values:", sorted_values)
Explanation: Sorting lists numerically
End of explanation
values = [5, 7, 4, 6, 1, 2, 45, 12]
sorted_values = sorted(values,reverse=True)
print("Sorted values:", sorted_values)
values = [5, 7, 4, 6, 1, 2, 45, 12]
values.sort(reverse=True)
print("Sorted values:",values)
Explanation: Sorting big to small
End of explanation
exam_scores = [67,78,94,45,55]
length = len(exam_scores)
print("number of scores:",length)
Explanation: Finding the length of a list
End of explanation
x = 5
y = 5
if x == y :
print("x and y are the same")
Explanation: CONDITIONALS
Often you will want to add logic to your programs
This is done with simple statements (in nearly all languages):
if
else
elif (aka else if)
In Python, code within a conditional or a loop is denoted by a : followed by indentation (this will become clearer with examples)
Comparisons (logical operators)
An if statement is either true or false (synonymous with 1 or 0), these are known as Boolean values.
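For reference, the usual comparison operators look like this (a summary added here for convenience, not an exhaustive list from the slides):
print(5 == 5)    # True  - equal to
print(5 != 4)    # True  - not equal to
print(3 < 7)     # True  - less than
print(3 >= 7)    # False - greater than or equal to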
End of explanation
a = 5
b = 4
if a < b:
print("a is less than b") #conditional block must be indented four spaces or a tab
else:
print("a is not less than b")
name1 = "Alistair"
name2 = "Alastair"
if name1 == name2:
print("names are the same")
else:
print("names are not the same")
Explanation: Examples of logical operators in action
End of explanation
biol733_mark = 75
biol734_mark = 72
if biol733_mark >= 70 and biol734_mark >= 70:
print("You're getting a distinction :-)")
else:
print("You're not getting a distinction :-(")
Explanation: Boolean logic
Boolean logic is used for combining conditional statements
and
or
not
Used with logical operators, these can ask questions of almost unlimited complexity
Important to make sure you understand the logic of your question to combine conditions correctly
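As a small added illustration (not from the original slides), not simply flips a condition:
day = "Saturday"
if not (day == "Saturday" or day == "Sunday"):
    print("It is a weekday")
else:
    print("It is the weekend")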
Using and
Code to test if both marks are greater than or equal to 70
Code
End of explanation
biol733_mark = 34
biol734_mark = 55
if biol733_mark <50 or biol734_mark < 50:
print("You've failed a module :-( ")
else:
print("You've passed your modules :-) ")
Explanation: Using or
Code to test if either mark is less than 50
End of explanation
day = "Monday"
if day != "Monday" or day != "Friday":
print("Alistair is researching")
else:
print("Alistair is not researching")
Explanation: Converting English to Boolean
“If the day is not Monday or Friday, I am researching”
End of explanation
day = "Monday"
if day != "Monday" and day != "Friday":
print("Alistair is researching\n")
else:
print("Alistair is not researching\n")
Explanation: This is wrong!
Statement really is:
“If the day is not Monday and the day is not Friday, I am researching”
End of explanation
a = 4
b = 4
c = 6
if a == b:
print("a equals b")
if b >= c:
print("and b is greater or equal to c")
else:
print("and b is less than c")
else:
print("a doesn't equal b")
Explanation: i.e. with negatives both conditions have to be true
Nested statements
End of explanation
module_code = "BIOL734"
if module_code == "BIOL007":
module_name = "Statistics for Life Science"
elif module_code == "BIOL733":
module_name = "Perl Programming"
elif module_code == "BIOL734":
module_name = "Post-genomic technologies"
else:
module_name = "Unknown module code"
print("The module is " + module_name + "\n")
Explanation: Note: you can make statements as nested and complicated as you need. BUT...
1. Your indentation must be correct
2. Always work out your logic in advance and add comments to your code (otherwise your code won’t do what you want it to do)
Using elif
elif is very useful if you want to test lots of conditions in order
End of explanation
a = 4
b = 5
if a == b:
print("a equals b\n")
else:
exit("Error - a doesn't equal b\n")
print("Program continues...\n")
Explanation: This tests the if statement first, then the first elif statement and so on.
If no condition is matched, the else condition will be evaluated.
exit
exit() is very useful for forcing a program to finish
End of explanation
for x in range(1, 6):
print("Row number " + str(x))
Explanation: exit is very useful for error checking in your program e.g. if you want to check that something must happen to avoid a runtime error (remember example of dividing by zero last week...)
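A minimal sketch of such a check (my own example, not from the slides), guarding against dividing by zero:
total = 100
count = 0
if count == 0:
    exit("Error - cannot divide by zero\n")
average = total / count
print("Average is", average)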
LOOPS
Looping structures
Two main structures in Python
for
while
for
Used for repeating code a number of times or iterating over items in an object such as a list
while
Used for looping in a conditional manner
for loops
End of explanation
exam_scores = [67,78,94,45,55]
counter = 1
for x in exam_scores:
print("score " + str(counter) + ": " + str(x))
counter +=1 # “+=“ is the same as counter = counter + 1 incrementing
Explanation: Note: As with conditionals, indentation is crucial to denote the code that you want to run within the loop
for initiates the loop
x is a variable that represents the current loop item (in this case an integer representing the number of the loop)
range() produces the sequence of numbers between two given integers (in Python 3 it returns a range object rather than a list)
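For instance (a quick added illustration):
print(list(range(1, 6)))   # [1, 2, 3, 4, 5] - the end value 6 is not included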
Loop through a list
End of explanation
a = 1
while a < 5:
print("The value of a is ",a)
a+=1
Explanation: while loops
while loops are completely flexible:
you can use whatever conditional statement you like
you need to ensure that you have the correct logic for making sure the loop ends at some point
End of explanation
# a = 1
# while 4<5:
# print("The value of a is ",a)
# a+=1
Explanation: while loops are used a lot for reading from files (lecture 5) – the conditional statement is that you keep reading until there are no lines left in the file
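A rough sketch of that pattern (my own example with a hypothetical file name; file handling is covered properly in lecture 5):
f = open("scores.txt")        # hypothetical file name
line = f.readline()
while line != "":             # readline() returns an empty string at the end of the file
    print(line.strip())
    line = f.readline()
f.close()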
Loops that never end...
Need to be careful that your loops actually finish
End of explanation |
8,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Scope
1. Representation - 2D, 3D plots
2. Provide idioms
-- "Business graphics" line, bar, scatter plots
-- "Statistics plots" whisker
-- "Higher dimensioned data" - heatmap
3. In-class problem solving for a task
Step2: Visualization in Python
There are many Python packages for visualization. We'll start with the most popular package, matplotlib, and we'll use the trip data.
Step3: Now let's consider the popularity of the stations.
Step4: Our initial task is comparison - which stations are most popular. A bar plot seems appropriate.
Step5: Now let's plot the to counts
Step6: We want to know if there is a general movement of bikes from one station to another. That is, are from and to counts out of balance? This is a comparison task. One approach is to combine the two bar plots in the same figure.
Step7: But this is deceptive since the two plots have different x-axes.
Step8: But this is awkward since it's difficult to find a specific station, so we prefer to sort.
We'd like to compare the "from" and "to" counts for each station, so we need to order the x-axis consistently.
Step9: To find the imbalance, compare the difference between "from" and "to"
Step10: We can make this readable by only looking at stations with large outflows, either positive or negative. | Python Code:
def mm(s_conc, vmax, km):
    """
    :param np.array s_conc: substrate concentrations
    :param float vmax: maximum reaction rate
    :param float km: substrate concentration at half-maximal rate
    :return np.array: reaction rates
    """
result = vmax*s_conc/(s_conc+km)
return result
s_conc = np.array([m+0.1 for m in range(100)])
plt.plot(s_conc,mm(s_conc, 4, .4), 'b.-')
s_conc = np.array([m+0.1 for m in range(100)])
params = [(5,0.5), (5, 20), (10, 0.5), (15, 20)]
ymax = max([x for (x,y) in params])
nplots = len(params)
fig = plt.figure()
yticks = np.arange(0, ymax)
cur = 0
for (vmax, km) in params:
cur += 1
ax = fig.add_subplot(nplots, 1, cur)
ax.axis([0, len(s_conc), 0, ymax])
ax.set_yticks([0, ymax])
ax.set_yticklabels([0, ymax])
plt.plot(s_conc, mm(s_conc, vmax, km), 'b.-')
plt.show()
# Parameter plot
km = [y for (x,y) in params]
vmax = [x for (x,y) in params]
plt.axis([0, max(km)+2, 0, max(vmax)+ 2])
plt.xlabel('K_M')
plt.ylabel('V_MAX')
plt.plot(km, vmax, 'bo ')
Explanation: Scope
1. Representation - 2D, 3D plots
2. Provide idioms
-- "Business graphics" line, bar, scatter plots
-- "Statistics plots" whisker
-- "Higher dimensioned data" - heatmap
3. In-class problem solving for a task
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
# The following ensures that the plots are in the notebook
%matplotlib inline
# We'll also use capabilities in numpy
import numpy as np
df = pd.read_csv("2015_trip_data.csv")
df.head()
Explanation: Visualization in Python
There are many Python packages for visualization. We'll start with the most popular package, matplotlib, and we'll use the trip data.
End of explanation
from_counts = pd.value_counts(df.from_station_id)
to_counts = pd.value_counts(df.to_station_id)
Explanation: Now let's consider the popularity of the stations.
End of explanation
from_counts.plot.bar()
Explanation: Our initial task is comparison - which stations are most popular. A bar plot seems appropriate.
End of explanation
to_counts.plot.bar()
Explanation: Now let's plot the to counts
End of explanation
plt.subplot(3,1,1)
from_counts.plot.bar()
plt.subplot(3,1,3)
to_counts.plot.bar()
# Note the use of an empty second plot to provide space between the plots
Explanation: We want to know if there is a general movement of bikes from one station to another. That is, are from and to counts out of balance? This is a comparison task. One approach is to combine the two bar plots in the same figure.
End of explanation
count_list = [to_counts[x] for x in from_counts.index]
ordered_to_counts = pd.Series(count_list, index=from_counts.index)
plt.subplot(3,1,1)
from_counts.plot.bar()
plt.subplot(3,1,3)
ordered_to_counts.plot.bar()
Explanation: But this is deceptive since the two plots have different x-axes.
End of explanation
df_counts = pd.DataFrame({'from': from_counts.values, 'to': ordered_to_counts.values}, index=from_counts.index)
df_counts.head()
df_counts.sort_index(inplace=True) # Modifies the calling dataframe
df_counts.head()
Explanation: But this is awkward since it's difficult to find a specific station, so we prefer to sort.
We'd like to compare the "from" and "to" counts for each station, so we need to order the x-axis consistently.
End of explanation
df_outflow = pd.DataFrame({'outflow':df_counts.to - df_counts['from']}, index=df_counts.index)
df_outflow.plot.bar(legend=False)
Explanation: To find the imbalance, compare the difference between "from" and "to"
End of explanation
min_outflow = 500
sel = abs(df_outflow.outflow) > min_outflow
df_outflow_small = df_outflow[sel]
df_outflow_small.plot.bar(legend=False)
Explanation: We can make this readable by only looking at stations with large outflows, either positive or negative.
End of explanation |
8,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The MIT License (MIT)<br>
Copyright (c) 2018 Massachusetts Institute of Technology<br>
Authors
Step1: TESS End-to-End 6 Simulated Light Curve Time Series<br>
Source
Step2: Normalize flux
Step3: Plot Relative PDCSAP Flux vs time | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
Explanation: The MIT License (MIT)<br>
Copyright (c) 2018 Massachusetts Institute of Technology<br>
Authors: Cody Rude<br>
This software has been created in projects supported by the US National<br>
Science Foundation and NASA (PI: Pankratius)<br>
Permission is hereby granted, free of charge, to any person obtaining a copy<br>
of this software and associated documentation files (the "Software"), to deal<br>
in the Software without restriction, including without limitation the rights<br>
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell<br>
copies of the Software, and to permit persons to whom the Software is<br>
furnished to do so, subject to the following conditions:<br>
The above copyright notice and this permission notice shall be included in<br>
all copies or substantial portions of the Software.<br>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR<br>
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,<br>
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE<br>
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER<br>
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,<br>
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN<br>
THE SOFTWARE.<br>
End of explanation
from skdaccess.astro.tess.simulated.cache import DataFetcher as TESS_DF
from skdaccess.framework.param_class import *
import numpy as np
tess_fetcher = TESS_DF([AutoList([376664523])])
tess_dw = tess_fetcher.output()
label, data = next(tess_dw.getIterator())
Explanation: TESS End-to-End 6 Simulated Light Curve Time Series<br>
Source: https://archive.stsci.edu/tess/ete-6.html
End of explanation
valid_index = data['PDCSAP_FLUX'] != 0.0
data.loc[valid_index, 'RELATIVE_PDCSAP_FLUX'] = data.loc[valid_index, 'PDCSAP_FLUX'] / np.median(data.loc[valid_index, 'PDCSAP_FLUX'])
Explanation: Normalize flux
End of explanation
plt.gcf().set_size_inches(6,2);
plt.scatter(data.loc[valid_index, 'TIME'], data.loc[valid_index, 'RELATIVE_PDCSAP_FLUX'], s=2, edgecolor='none');
plt.xlabel('Time');
plt.ylabel('Relative PDCSAP Flux');
plt.title('Simulated Data TID: ' + str(int(label)));
Explanation: Plot Relative PDCSAP Flux vs time
End of explanation |
8,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing and comparing several pitch detection methods on sample files
For simplicity I am using the Anaconda distribution on my Macbook Pro for this notebook.
The purpose is to first experiment here with sample WAV files. Each file comes from a database of free samples provided free of rights by the Philharmonia Orchestra at http
Step1: We will use scipy from the Anaconda distribution to read the WAV sample files
Step2: We define the length we want to record in seconds and the sampling rate to the source file sample rate (44100 Hz)
Step3: Let's plot a section of this array to look at it first
We notice a pretty periodic signal with a clear fundamental frequency
Step4: First method
Step5: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency
Step6: We notice already things are not going to be that easy. There are different harmonics picked here, and 2 of the most important ones are comparable in amplitude.
We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
Step7: This method detects a fundamental frequency of 248Hz, which is wrong.
We notice that as suspected by looking at the chart of the Fourier transform, the 3rd harmonic of the expected fundamental is detected with this naive method
Step8: WIP | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Implementing and comparing several pitch detection methods on sample files
For simplicity I am using the Anaconda distribution on my Macbook Pro for this notebook.
The purpose is to first experiment here with sample WAV files. Each file comes from a database of royalty-free samples provided by the Philharmonia Orchestra at http://www.philharmonia.co.uk/explore/sound_samples/.
We will use 6 samples representing a long Forte string pick of each of the 6 strings of an acoustic guitar tuned in Standard E.
Note: I have converted the sample files myself from their original mp3 format to wav format with 32bit, 44100Hz and mono channel.
We will use two different methods for detecting the pitch and compare their results.
For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar:
String | Frequency | Scientific pitch notation | Sample
--- | --- | --- | ---
1 (E) | 329.63 Hz | E4 | Sample file
2 (B) | 246.94 Hz | B3 | Sample file
3 (G) | 196.00 Hz | G3 | Sample file
4 (D) | 146.83 Hz | D3 | Sample file
5 (A) | 110.00 Hz | A2 | Sample file
6 (E) | 82.41 Hz | E2 | Sample file
End of explanation
from scipy.io import wavfile
# Let's start with the first sample corresponding to the lower string E2
rate, myrecording = wavfile.read("samples/guitar_E2_very-long_forte_normal.wav")
print(rate, myrecording.size)
Explanation: We will use scipy from the Anaconda distribution to read the WAV sample files
End of explanation
duration = 1 # seconds
fs = rate # samples by second
# Let's restrict our sample to 1 second of the recording, after 0.5 second of sound to avoid the string picking
array = myrecording[int(0.5*fs):int((0.5 + duration)*fs)]
print(array.size)
Explanation: We define the length we want to keep (in seconds) and set the sampling rate to the source file sample rate (44100 Hz)
End of explanation
df = pd.DataFrame(array)
df.loc[25000:35000].plot()
Explanation: Let's plot a section of this array to look at it first
We notice a pretty periodic signal with a clear fundamental frequency, which makes sense since a guitar string vibrates producing an almost purely sinusoidal wave
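As a quick sanity check (my own arithmetic, not in the original notebook), one period of the E2 fundamental should span about fs / f0 samples:
print(fs / 82.41)   # roughly 535 samples per period at 44100 Hz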
End of explanation
fourier = np.fft.fft(array)
Explanation: First method: Naive pitch detection using Fast Fourier Transform
One first naive idea would be to "simply" take the (discrete) Fourier transform of the signal to find the fundamental frequency of the recording.
Let's try that out and see what result we get.
We use numpy to compute the discrete Fourier transform of the signal:
End of explanation
plt.plot(abs(fourier[:len(fourier)//10]))
Explanation: We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency:
End of explanation
f_max_index = np.argmax(abs(fourier[:fourier.size//2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
Explanation: We notice already things are not going to be that easy. There are different harmonics picked here, and 2 of the most important ones are comparable in amplitude.
We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
End of explanation
# Work in progress: coming soon
Explanation: This method detects a fundamental frequency of 248Hz, which is wrong.
We notice that, as suspected by looking at the chart of the Fourier transform, the 3rd harmonic of the expected fundamental is detected with this naive method: 248 Hz ≈ 3 × 82.41 Hz, where 82.41 Hz was the expected fundamental frequency for this sample of the E2 note.
Applying a Hamming window to the sample before the FFT
One traditional way to deal with this issue is to first apply a window function, such as the Hamming window, to the sample (multiplying the signal element-wise by the window) before taking the FFT
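Since this part is still marked as work in progress, here is only a minimal sketch of the idea (my own illustration, not the author's implementation); windowing mainly reduces spectral leakage and may not, by itself, stop a strong harmonic from dominating the peak:
window = np.hamming(len(array))
windowed = array * window                          # element-wise windowing of the signal
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(windowed), d=1.0/fs)
print(freqs[np.argmax(spectrum)])                  # frequency of the strongest peak, in Hz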
End of explanation
rec = array
rec = rec[15000:35000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
Explanation: WIP: Using Autocorrelation method for pitch detection
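One possible way to finish this section (my own sketch, not the author's final method) is to take the lag of the strongest autocorrelation peak after lag 0 and convert it to a frequency:
half = autocorr[len(autocorr)//2:]                        # non-negative lags (lag 0 sits at the centre for mode='same')
start = np.argmax(half < 0) if np.any(half < 0) else 1    # skip the large peak at lag 0
peak_lag = start + np.argmax(half[start:])
print(fs / peak_lag)                                      # estimated fundamental frequency in Hz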
End of explanation |