markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Step 7: Fit a curve to the data (**For undergraduates, this section is optional.**)Based on macroscopic arguments, the energy of a cluster could scale with both surface area (via a surface tension) and volume (via an energy density for bulk) of the cluster. So we could model the minimum energy as depending on the cluster size in this way:\begin{equation}U_{min} \propto a + b N^{2/3} + cN\end{equation}Fit this equation to your data in the K=10000 case. You can do this using a least-squares fit, for example using fitting functions within SciPy (`optimize.leastsq`, for example, or similar functions in `scipy.stats`). A fairly dated tutorial is [here](http://www.tau.ac.il/~kineret/amit/scipy_tutorial) (sec 5.4), or see [stack overflow](https://stackoverflow.com/questions/19791581/how-to-use-leastsq-function-from-scipy-optimize-in-python-to-fit-both-a-straight).**Once you perform the fit, plot the difference between the true minimum energy and the expected energy from this equation as a function of N. Can you identify the magic numbers from this curve?** | #Your code here | _____no_output_____ | CC-BY-4.0 | uci-pharmsci/assignments/energy_minimization/energy_minimization_assignment.ipynb | matthagy/drug-computing |
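The cell above leaves the fit as an exercise. Purely as an illustration, one possible least-squares fit of the model above is sketched below; it assumes NumPy arrays `sizes` (the cluster sizes N) and `min_energies` (the minimum energies found with K=10000) were built in the earlier steps of the assignment, and both names are hypothetical.

```python
# A minimal sketch of the fit, assuming `sizes` and `min_energies` are NumPy
# arrays produced in earlier steps; both names are placeholders.
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def model(N, a, b, c):
    # U_min ~ a + b*N^(2/3) + c*N
    return a + b * N**(2.0 / 3.0) + c * N

params, cov = curve_fit(model, sizes, min_energies)
residuals = min_energies - model(sizes, *params)

plt.plot(sizes, residuals, 'o-')
plt.xlabel('N')
plt.ylabel('U_min - fit')   # pronounced dips suggest especially stable ("magic") cluster sizes
plt.show()
```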
Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis β with focus instead on the six product categories recorded for customers.Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported. | # Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?" | Wholesale customers dataset has 440 samples with 6 features each.
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Data ExplorationIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase. | # Display a description of the dataset
display(data.describe()) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Implementation: Selecting SamplesTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another. | # Select three indices of your choice you wish to sample from the dataset
indices = [25, 186, 220]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples) | Chosen samples of wholesale customers dataset:
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 1Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. *What kind of establishment (customer) could each of the three samples you've chosen represent?* **Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant. | samples_value_mean = data.describe().loc['mean']
samples_mean = samples.append(samples_value_mean)
foo = samples_mean.plot(kind='bar', figsize=(12,6)) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
1st one: A grocery store, as Fresh is above the mean, Grocery is around the mean, and Detergents_Paper purchase costs are above the mean. 2nd one: Possibly a restaurant or cafe, because Frozen and Fresh are the highest, followed by a decent amount of Deli as well; only the Frozen item is higher than the mean. 3rd one: A retailer or a store that sells fresh food - the highest category is Fresh, above the mean, and everything else is less than the mean. Implementation: Feature RelevanceOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.In the code block below, you will need to implement the following: - Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function. - Use `sklearn.cross_validation.train_test_split` to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`. - Import a decision tree regressor, set a `random_state`, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's `score` function. | # Make a copy of the DataFrame, using the 'drop' function to drop the given feature
#feature_name = 'Grocery'
def do_feat(feature_name, data):
new_data = data.drop(feature_name, 1)
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_data, data[feature_name], test_size=0.25, random_state=42)
# Create a decision tree regressor and fit it to the training set
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor(random_state=42)
regressor.fit(X_train, y_train)
feat_imp = zip(new_data, regressor.feature_importances_)
# Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print ("prediction score for " + feature_name + " is: " + str(score))
feature_list = [ "Grocery", "Detergents_Paper", "Delicatessen", "Frozen", "Milk", "Fresh" ]
for feature_n in feature_list:
do_feat(feature_n, data) | prediction score for Grocery is: 0.681884008544
prediction score for Detergents_Paper is: 0.271666980627
prediction score for Delicatessen is: -2.2547115372
prediction score for Frozen is: -0.210135890125
prediction score for Milk is: 0.156275395017
prediction score for Fresh is: -0.385749710204
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 2*Which feature did you attempt to predict? What was the reported prediction score? Is this feature is necessary for identifying customers' spending habits?* **Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data. Attempted to predict "Grocery". Prediction score using DecisionTreeRegressor was 0.68188. Intuitively, The feature is too generalized to identify customer habits. If you look at the R^2 score , it seems to be the easiest to predict from the balance of the data. Based on this it is the least necessary to identify spending habits. Visualize Feature DistributionsTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix. | # Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
import seaborn as sns
sns.heatmap(data.corr(), annot=True) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 3*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?* **Hint:** Is the data normally distributed? Where do most of the data points lie? There is some degree of correlation between Grocery and Detergents_Paper. Similarly, there is some degree of correlation between Milk and Detergents_Paper. Also , note a small correlation between Grocery and Milk. The data is not normally distributed. Most of data points in each feature lie within 10%-20% range. Data PreprocessingIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature ScalingIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling β particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.In the code block below, you will need to implement the following: - Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this. - Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`. | # Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).Run the code below to see how the sample data has changed after having the natural logarithm applied to it. | # Display the log-transformed sample data
display(log_samples)
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 3)
axes = axes.flatten()
fig.set_size_inches(18, 6)
fig.suptitle('Distribution of Features')
for i, col in enumerate(data.columns):
feature = data[col]
sns.distplot(feature, label=col, ax=axes[i]).set(xlim=(-1000, 20000),)
axes[i].axvline(feature.mean(),linewidth=1)
axes[i].axvline(feature.median(),linewidth=1, color='r')
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 3)
axes = axes.flatten()
fig.set_size_inches(18, 6)
fig.suptitle('Distribution of Features for Log Data')
for i, col in enumerate(log_data.columns):
feature = log_data[col]
sns.distplot(feature, label=col, ax=axes[i])
axes[i].axvline(feature.mean(),linewidth=1)
axes[i].axvline(feature.median(),linewidth=1, color='r')
import matplotlib.pyplot as plt
import seaborn as sns
# set plot style & color scheme
sns.set_style('ticks')
with sns.color_palette("Reds_r"):
# plot densities of log data
plt.figure(figsize=(8,4))
for col in data.columns:
sns.kdeplot(log_data[col], shade=True)
plt.legend(loc='best') | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identfying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this. - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`. - Assign the calculation of an outlier step for the given feature to `step`. - Optionally remove data points from the dataset by adding indices to the `outliers` list.**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable `good_data`. | outliers_list = np.array([], dtype='int64')
#cnt = Counter()
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature],25)
# Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1)*1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
found_outliers = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
outliers_list = np.append(outliers_list, found_outliers.index.values.astype('int64'))
display(found_outliers)
# OPTIONAL: Select the indices for data points you wish to remove
outliers_list, counts = np.unique(outliers_list, return_counts=True)
outliers = outliers_list[counts>1]
print outliers
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True) | Data points considered outliers for the feature 'Fresh':
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 4*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.* Data Points considered for more than one feature (Observation in output above )65, 66, 75, 128, 154Decided to drop these outliers in the good_data because:- they appeared as outlier in more than one feature in the Tukey Method of outlier detection.- data point looks little out of range e.g. 65 on Detergents_Paper- It appears the outliers cause significant association between features Feature TransformationIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCANow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension β how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.In the code block below, you will need to implement the following: - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`. | # Apply PCA by fitting the good data with the same number of dimensions as features
from sklearn.decomposition import PCA
pca = PCA(n_components=6).fit(good_data)
# Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
print np.cumsum(pca.explained_variance_ratio_)
# create an x-axis variable for each pca component
x = np.arange(1,7)
# plot the cumulative variance
plt.plot(x, np.cumsum(pca.explained_variance_ratio_), '-o', color='black')
# plot the components' variance
plt.bar(x, pca.explained_variance_ratio_, align='center', alpha=0.5)
# plot styling
plt.ylim(0, 1.05)
plt.annotate('Cumulative\nexplained\nvariance',
xy=(3.7, .88), arrowprops=dict(arrowstyle='->'), xytext=(4.5, .6))
for i,j in zip(x, np.cumsum(pca.explained_variance_ratio_)):
plt.annotate(str(j.round(4)),xy=(i+.2,j-.02))
plt.xticks(range(1,7))
plt.xlabel('PCA components')
plt.ylabel('Explained Variance')
plt.show() | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 5*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.* **Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the indivdual feature weights. First principal component explains 44.3% variance and the second principal component explains 26.38% variance. The first and the second principal component combined explains 70.68 % variance. The first four principal components explain 93.11% of the variance. In terms of customer spending:- Dimension 1: Highest for "Detergent_Paper" then heavy on "Milk" and "Grocery" The three most correlated features. This(PCA1) represents that a increase is associated with increase in "Milk", "Grocery" and "Detergent_Paper" - Dimension 2: Very heavy on "Fresh", "Frozen" and "Deli". This(PCA2) shows larger increases in "Fresh", "Frozen" and "Deli". - Dimension 3: Low on "Fresh" and "Detergents_Paper". High on "Deli" An increase in this (PCA3) increases Deli but decreases Fresh. Also notes some increase in "Frozen", and decrease in "Detergents_Paper" - Dimension 4: Low on Deli and Fresh, but very high on Frozen Increase in this (PCA4) is associated with a large increase in "Frozen" and a large decrease in "Delicatessen" customer spending.As described above, PCA largely describes the customer spending as it changes. ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points. | # Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values)) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data β in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`. - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`. | # Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
vr = np.cumsum(pca.explained_variance_ratio_)
print vr | [ 0.44302505 0.70681723]
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions. | # Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2'])) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Visualizing a BiplotA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.Run the code cell below to produce a biplot of the reduced-dimension data. | # Create a biplot
vs.biplot(good_data, reduced_data, pca) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
ObservationOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories. From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier? ClusteringIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6*What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?* K-Means clustering algorithms is computationally faster and is easier to understand. Gaussian Mixture Model (GMM) is a superset of K-Means , is more flexible and handles mixed cluster membership better. Likely to use K-Means algorithm because because it might make it easier to understand the wholesale customer data in multiple segments clustered using K-Means. GMM is a good classification algorithm for static non-time oriented data. GMM may not work well if the dimensions are high ( 6 or more). GMM works well with non-linear geometry , and does not bias to cluster size of specific structure. This allows GMM to any shape cluster. K-Means , however, is computationally faster and produces disjoint flat cluster in a unsupervised iterative method. Disadvantage with K-Means is there are fixed number of clusters and it is difficult to predict the value of K. In GMM, the Gaussian blob is is allowed to be of different sizes and stretched in different directions. However, K-Means requires that each blob to be of a fixed size and completely symmetrical. The K-Means uses a lot of hard assignments and GMM has a lot of soft assignment flexibility. In K-Means, there is no stretching in different directions like GMM. This means that the results in lower number of dimensions in GMM may turn out better than K-Means, however, K-Means will be faster for larger amounts of data. In the wholesale customer data, there are 6 dimensions and we will use K-Means . Implementation: Creating ClustersDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data β if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). 
Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.In the code block below, you will need to implement the following: - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`. - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`. - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`. - Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`. - Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`. - Assign the silhouette score to `score` and print the result. | # TODO: Apply your clustering algorithm of choice to the reduced data
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
reduced_samples = pd.DataFrame(pca_samples, columns = ['Dimension 1', 'Dimension 2'])
clusterer = KMeans(n_clusters=2, random_state=42).fit(reduced_data)
# Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# Find the cluster centers
centers = clusterer.cluster_centers_
# Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(reduced_samples)
# Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, clusterer.labels_, metric='euclidean')
print("K-Means silhouette score: ",score) | ('K-Means silhouette score: ', 0.42628101546910835)
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 7*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?* Cluster of 2: 0.4219; Cluster of 4: 0.3326; Cluster of 6: 0.3654; Cluster of 8: 0.3644; Cluster of 10: 0.3505. The best silhouette score was obtained for a cluster of 2. Cluster VisualizationOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters. | # Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
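The scores quoted in Question 7 come from refitting the clustering with different cluster counts. One way to reproduce that comparison, reusing `reduced_data` from the cells above, is the loop sketched below; the variable names with the `_n` suffix are chosen here so the original `clusterer` and `preds` are not overwritten.

```python
# Refit K-Means for several cluster counts and report the mean silhouette coefficient.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for n_clusters in [2, 4, 6, 8, 10]:
    clusterer_n = KMeans(n_clusters=n_clusters, random_state=42).fit(reduced_data)
    preds_n = clusterer_n.predict(reduced_data)
    score_n = silhouette_score(reduced_data, preds_n, metric='euclidean')
    print("n_clusters = {}, silhouette score = {:.4f}".format(n_clusters, score_n))
```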
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.In the code block below, you will need to implement the following: - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`. - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`. | def ndprint(a, format_string ='{0:.2f}'):
print [format_string.format(v,i) for i,v in enumerate(a)]
# Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# Exponentiate the centers
true_centers = np.exp(log_centers)
print "centers:"
#display(true_centers)
for i,v in enumerate(true_centers):
print v
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
true_centers = true_centers.append(data.describe().ix['50%'])
true_centers = true_centers.append(data.describe().ix['25%'])
true_centers.plot(kind = 'bar', figsize = (16, 4))
# had to re-drop to get relatively good range data (??)
some_good_data = data.drop(data.index[outliers]).reset_index(drop = True)
print "mean:"
ndprint(np.around(some_good_data.mean().values, decimals=1))
print "median:"
ndprint(np.around(some_good_data.median().values, decimals=1))
print ""
print "Centers offset from mean"
display(true_centers - np.around(some_good_data.mean().values))
print "Centers offset from median"
display(true_centers - np.around(some_good_data.median().values))
| centers:
[ 8866.53752613 1896.60679852 2476.59086525 2088.23564999 293.89989089
681.28963817]
[ 4005.0815051 7899.91306923 12103.82742157 952.34560465
4561.39226009 1035.53326638]
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Question 8Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?* **Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`. Customer segments are represented as the first segment 0 is the one with high Fresh but low on Detergents_Paper and Deli, but moderate Milk, Grocery and likeley going to a Restaurant.Segment 1 is very high on Grocery , low on Frozen and Deli so is likely going to be a Retail store. Now let's look at quartile, mean and median and comparing with centers.Segment 0:With outliers removed, slightly above median in Fresh and Frozen. Much lower than median on Grocery but slightly lower in Detergents and Deli. Segment 1:With outliers removed, above the median in Grocery, Milk and Detergents.Really below the mean in Fresh and Frozen. Based on this my choices are:- Segment 0: Restaurant , Cafes, Food prepare businesses- Segment 1: Retail stores, Grocery, supermarkets Question 9*For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?*Run the code block below to find which cluster each sample point is predicted to be. | # Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
print 'The distance between sample point {} and center of cluster {}:'.format(i, pred)
print (samples.iloc[i] - true_centers.iloc[pred])
| Sample point 0 predicted to be in Cluster 1
The distance between sample point 0 and center of cluster 1:
Fresh 12160.0
Milk -3670.0
Grocery -4509.0
Frozen -751.0
Detergents_Paper -558.0
Delicatessen -979.0
dtype: float64
Sample point 1 predicted to be in Cluster 0
The distance between sample point 1 and center of cluster 0:
Fresh -5858.0
Milk -1376.0
Grocery -1623.0
Frozen 1382.0
Detergents_Paper 655.0
Delicatessen 46.0
dtype: float64
Sample point 2 predicted to be in Cluster 0
The distance between sample point 2 and center of cluster 0:
Fresh 5888.0
Milk -998.0
Grocery -1095.0
Frozen -323.0
Detergents_Paper -238.0
Delicatessen 68.0
dtype: float64
| MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Segment 0 is very high on Detergent and Grocery and Segment 1 is very high on Fresh and fairly low on detergent. If we look at the sample points, the predictions of points 0 and 1 should be swapped. Conclusion In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships. Question 10Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?* **Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most? It is unclear from the data the frequency of the delivery for each product segment. "Fresh" is the highest in terms of purchase so that would be a good category to understand the change in delivery service with possibly a lower impact on profit margin, and based on the data they are probably more frequest buyers. However, if we would like to do A/B testing we should select small groups of "sample customers" from both Segment 0 and Segment 1 that look statistically significant. The remainder customers in each segment become the "second variant group".The 3 day schedule is applied to the "sample customers" in small groups in each segment. The response of service is received and evaluated for these "sample customers" as well as the "second variant group". This will give us 2 separate categories in each segment. A/B testing is usually done in two variants, and we have two variant in each segment.The response would help us determine a suitable group customers from the two variantswith those who respond positively. Because of the multiple segments, and two variants, it may not affect the customers equally , but you will probablistically obtain a larger number of customers that are likely to respond positively or negatively to the change. Question 11Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service. 
*How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?* **Hint:** A supervised learner could be used to train on the original customers. What would be the target variable? We would first use the clustering technique to create multiple segments of customers. Supervised learner could be used on each of these segments, so when a new customer joins he's be categorized based on product spending and type of purchase. Product type of purchase , frequency and product spending would be the target variables. Visualizing Underlying DistributionsAt the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling. | # Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples) | _____no_output_____ | MIT | Udacity/MachineLearning/customer_segments_project/customer_segments_nabin_proj.ipynb | nacharya/notebooks |
Paraphrase This tutorial is available as an IPython notebook at [Malaya/example/paraphrase](https://github.com/huseinzol05/Malaya/tree/master/example/paraphrase). This module was only trained on standard language structure, so it is not safe to use it for local language structure. | %%time
import malaya
from pprint import pprint
malaya.paraphrase.available_transformer() | INFO:root:tested on 1k paraphrase texts.
| MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Load T5 models ```pythondef transformer(model: str = 't2t', quantized: bool = False, **kwargs): """ Load Malaya transformer encoder-decoder model to generate a paraphrase given a string. Parameters ---------- model : str, optional (default='t2t') Model architecture supported. Allowed values: * ``'t2t'`` - Malaya Transformer BASE parameters. * ``'small-t2t'`` - Malaya Transformer SMALL parameters. * ``'t5'`` - T5 BASE parameters. * ``'small-t5'`` - T5 SMALL parameters. quantized : bool, optional (default=False) if True, will load 8-bit quantized model. Quantized model not necessary faster, totally depends on the machine. Returns ------- result: malaya.model.tf.Paraphrase class """``` | t5 = malaya.paraphrase.transformer(model = 't5', quantized = True) | WARNING:root:Load quantized model will cause accuracy drop.
| MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Paraphrase simple stringWe only provide `greedy_decoder` method for T5 models,```python@check_typedef greedy_decoder(self, string: str, split_fullstop: bool = True): """ paraphrase a string. Decoder is greedy decoder with beam width size 1, alpha 0.5 . Parameters ---------- string: str split_fullstop: bool, (default=True) if True, will generate paraphrase for each strings splitted by fullstop. Returns ------- result: str """``` | string = "Beliau yang juga saksi pendakwaan kesembilan berkata, ia bagi mengelak daripada wujud isu digunakan terhadap Najib."
pprint(string)
pprint(t5.greedy_decoder([string]))
string = """
PELETAKAN jawatan Tun Dr Mahathir Mohamad sebagai Pengerusi Parti Pribumi Bersatu Malaysia (Bersatu) ditolak di dalam mesyuarat khas Majlis Pimpinan Tertinggi (MPT) pada 24 Februari lalu.
Justeru, tidak timbul soal peletakan jawatan itu sah atau tidak kerana ia sudah pun diputuskan pada peringkat parti yang dipersetujui semua termasuk Presiden, Tan Sri Muhyiddin Yassin.
Bekas Setiausaha Agung Bersatu Datuk Marzuki Yahya berkata, pada mesyuarat itu MPT sebulat suara menolak peletakan jawatan Dr Mahathir.
"Jadi ini agak berlawanan dengan keputusan yang kita sudah buat. Saya tak faham bagaimana Jabatan Pendaftar Pertubuhan Malaysia (JPPM) kata peletakan jawatan itu sah sedangkan kita sudah buat keputusan di dalam mesyuarat, bukan seorang dua yang buat keputusan.
"Semua keputusan mesti dibuat melalui parti. Walau apa juga perbincangan dibuat di luar daripada keputusan mesyuarat, ini bukan keputusan parti.
"Apa locus standy yang ada pada Setiausaha Kerja untuk membawa perkara ini kepada JPPM. Seharusnya ia dibawa kepada Setiausaha Agung sebagai pentadbir kepada parti," katanya kepada Harian Metro.
Beliau mengulas laporan media tempatan hari ini mengenai pengesahan JPPM bahawa Dr Mahathir tidak lagi menjadi Pengerusi Bersatu berikutan peletakan jawatannya di tengah-tengah pergolakan politik pada akhir Februari adalah sah.
Laporan itu juga menyatakan, kedudukan Muhyiddin Yassin memangku jawatan itu juga sah.
Menurutnya, memang betul Dr Mahathir menghantar surat peletakan jawatan, tetapi ditolak oleh MPT.
"Fasal yang disebut itu terpakai sekiranya berhenti atau diberhentikan, tetapi ini mesyuarat sudah menolak," katanya.
Marzuki turut mempersoal kenyataan media yang dibuat beberapa pimpinan parti itu hari ini yang menyatakan sokongan kepada Perikatan Nasional.
"Kenyataan media bukanlah keputusan rasmi. Walaupun kita buat 1,000 kenyataan sekali pun ia tetap tidak merubah keputusan yang sudah dibuat di dalam mesyuarat. Kita catat di dalam minit apa yang berlaku di dalam mesyuarat," katanya.
"""
import re
# minimum cleaning, just simply to remove newlines.
def cleaning(string):
string = string.replace('\n', ' ')
string = re.sub(r'[ ]+', ' ', string).strip()
return string
string = cleaning(string)
splitted = malaya.text.function.split_into_sentences(string)
' '.join(splitted[:2])
t5.greedy_decoder([' '.join(splitted[:2])]) | _____no_output_____ | MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Load TransformerTo load an 8-bit quantized model, simply pass `quantized = True`; the default is `False`.We can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it depends entirely on the machine. | model = malaya.paraphrase.transformer(model = 'small-t2t')
quantized_model = malaya.paraphrase.transformer(model = 'small-t2t', quantized = True) | WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/function/__init__.py:112: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.
| MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Predict using greedy decoder```pythondef greedy_decoder(self, strings: List[str], **kwargs): """ Paraphrase strings using greedy decoder. Parameters ---------- strings: List[str] Returns ------- result: List[str] """``` | ' '.join(splitted[:2])
model.greedy_decoder([' '.join(splitted[:2])])
quantized_model.greedy_decoder([' '.join(splitted[:2])]) | _____no_output_____ | MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Predict using beam decoder```pythondef beam_decoder(self, strings: List[str], **kwargs): """ Paraphrase strings using beam decoder, beam width size 3, alpha 0.5 . Parameters ---------- strings: List[str] Returns ------- result: List[str] """``` | model.beam_decoder([' '.join(splitted[:2])])
quantized_model.beam_decoder([' '.join(splitted[:2])]) | _____no_output_____ | MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
Predict using nucleus decoder```pythondef nucleus_decoder(self, strings: List[str], top_p: float = 0.7, **kwargs): """ Paraphrase strings using nucleus sampling. Parameters ---------- strings: List[str] top_p: float, (default=0.7) cumulative distribution and cut off as soon as the CDF exceeds `top_p`. Returns ------- result: List[str] """``` | model.nucleus_decoder([' '.join(splitted[:2])])
quantized_model.nucleus_decoder([' '.join(splitted[:2])], top_p = 0.5) | _____no_output_____ | MIT | example/paraphrase/load-paraphrase.ipynb | zulkiflizaki/malaya |
[](https://colab.research.google.com/drive/1-mKbU9Yz9vaaQz6cr13SrvvMy_4hXkk7?usp=sharing) Investment Management 1--- Assignment 1**You have to use this Colab notebook to complete the assignment. To get started, create a copy of the notebook and save it on your Google drive.** **Deadline:** See C@mpus. **Total:** 100 Points **Late submission penalty:** there is a penalty-free grace period of two hours past the deadline. Any work that is submitted between 2 hour and 24 hours past the deadline will receive a 20% grade deduction. No other work will be accepted after that. C@mpus submission time will be used, not your local computer time. You can submit your completed assignment as many times as required before the deadline. Consider submitting your work early. This assignment is a warm up to get you used to the Colab/ Jupyter notebook environment used in the course, and also to help you acquaint yourself with Python and relevant Python libraries. The assignment must be completed individually. The TBS plagarism rules apply.By the end of this assignment, you should be able to:* fetch or load financial time series data into Colab * load data into `pandas` dataframe* perform basic operations with `pandas` dataframes* perform EDA (exploratory data analysis) on a given dataset You will need to use the `numpy` and `pandas` libraries for necessary data manipulation. For more information, see:https://numpy.org/doc/stable/reference/https://pandas.pydata.org/You can use a financial data library of your choice to access historical asset prices. One example is `yfinance`. It is used to access the financial data available on Yahoo Finance. Other widely used libraries are `pandas_datareader`, `yahoo_fin`, `ffn` (highly recommended), and `PyNance`. You are also free to use any Python data visualisation library of your choice (default is `matplotlib`). Some of the available options include: `Seaborn`, `Bokeh`, `ggplot`, `pygal`, and `Plotly`. **What to submit**Submit a PDF file containing your code, outputs, and write-up from parts 1-4. You can produce a PDF of your Colab file by going to `File >>> Print` and selecting `save as PDF`. See the Python Workspace document in the course GitHub repository for more information. **Do not submit any other data files produced by your code.** You also need to provide a link to your completed Colab file in your submission - see the **"Colab link"** section below.Please note that you have to use Google Colab to complete this assignment. If you want to use Jupyter Notebook, complete the assignment and upload your Jupyter Notebook file to Google Colab for submission. **Colab link**Before submitting your work, make sure to include a link to your colab file below.**Colab Link:** _ _ _ _ _ _ _ _ _ _ _ _ **Part 1: Obtaining financial data [10 pt]**The purpose of this section is to get you used to the basics of Python and the Colab notebook environment. This includes importing data and working with variables, lists, dataframes, and functions.Your code will be checked for clarity and efficiency. If you have trouble with this part of the assignment, please review the introductory Colab notebooks stored in the GitHub course repository. Part 1.1. Loading historical stock prices (6pt)Using any Python financial data library (e.g. `yfinance`) download daily adjusted close prices for 5 U.S. stocks of your choice for the last 5 years and store them in a `pandas` DataFrame object named `stock_prices`. Only stocks that are current constituents of the S&P 500 should be considered. 
As the financial data library you use is not pre-installed in Google Colab by default, make sure to install it first by executing the following code:```!pip install library_name```The !pip install command looks for the latest version of the package and installs it. This only needs to be done once per session.If you are unable to install the required library to fetch the data, you can prepare a separate CSV file containing the necessary data and use the following code to read it into a `pandas` dataframe object:```from google.colab import filesfiles.upload()```followed by:```import pandas as pdstock_prices = pd.read_csv('filename.csv')```Note that `filename.csv` should be changed to the exact name of your CSV file. | # step 1: install required libraries using "!pip install"
# YOUR CODE HERE
# step 2: import required libraries using "import"
# YOUR CODE HERE
# step 3: fetch historical asset prices
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
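Purely as an illustration of steps 1-3, one possible sketch using `yfinance` is shown below. The five tickers are an arbitrary, hypothetical example of current S&P 500 constituents, and any other financial data library mentioned in the assignment would work just as well.

```python
# Install and use yfinance; the five tickers below are only an illustrative choice.
!pip install yfinance

import yfinance as yf
import pandas as pd

tickers = ['AAPL', 'MSFT', 'JNJ', 'JPM', 'XOM']           # hypothetical selection
raw = yf.download(tickers, period='5y', auto_adjust=False)
stock_prices = raw['Adj Close']                            # keep the adjusted close prices only
print("Downloaded {} rows for {} stocks.".format(stock_prices.shape[0], stock_prices.shape[1]))
stock_prices.head()
```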
Part 1.2. Obtaining data on risk-free asset (4pt)Using a financial data library (e.g. `yfinance`) of your choice, obtain daily data on the U.S. risk-free (1-month Treasury Bill) rate for the last 5 years and store them in a `pandas` DataFrame object named `rf`.If you are unable to obtain the risk-free data using your chosen data library, you can prepare a separate CSV file containing the necessary data and use the steps discussed above to read it into a `pandas` dataframe object `rf`. | # step 4: fetch historical risk-free rate
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
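A hedged sketch for step 4: one option is to pull the 1-month Treasury constant-maturity rate (FRED series `DGS1MO`) through `pandas_datareader`. The series choice is an assumption, and the values come back quoted in percent per annum.

```python
# Pull the FRED 1-month Treasury constant-maturity series with pandas_datareader.
!pip install pandas_datareader

import datetime
import pandas_datareader.data as web

end = datetime.date.today()
start = end - datetime.timedelta(days=5 * 365)
rf = web.DataReader('DGS1MO', 'fred', start, end).dropna()   # percent per annum
rf.head()
```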
**Part 2: Visualising historical asset prices [10pt]**In this part of the assignment, you will be manipulating dataframes containing historical asset prices using Pandas, and visualising them using a Python plotting library of your choice. The purpose of these visualisations is to help you explore the data and identify any patterns. One robust visualisation library you may want to consider is `Matplotlib`. It is one of the most popular, and certainly the most widely used, multi-platform data visualisation library built on NumPy arrays in Python. It is used to generate simple yet powerful visualisations with just a few lines of code. It can be used in both interactive and non-interactive scripts.Make sure you import the required libraries first. Part 2.1. Raw stock prices (4pt)Plot the adjusted daily close prices for your stocks on the same diagram using a Python data visualisation library of your choice (default is matplotlib). Use the historical price data stored in the `stock_prices` dataframe created earlier. | # step 5: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
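A minimal sketch for step 5, assuming the `stock_prices` dataframe from Part 1 exists.

```python
# Plot the adjusted close prices stored in `stock_prices` on one diagram.
import matplotlib.pyplot as plt

stock_prices.plot(figsize=(12, 6))
plt.title('Adjusted close prices, last 5 years')
plt.ylabel('Price (USD)')
plt.xlabel('Date')
plt.show()
```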
Part 2.2. Rebased stock prices (6pt)To make comparing and plotting different asset price series together easier, we often "rebase" all prices to a given initial value - e.g. 100. In this section, you need to rebase the adjusted close prices for your stocks and plot them on the same diagram using a visualisation library of your choice (default is matplotlib). Note that some financial data libraries have handy built-in functions to perform this kind of task. Have a look at the `ffn` library documentation. | # step 6: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
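A minimal sketch for step 6: dividing each series by its first observation and multiplying by 100 rebases the prices manually (the `ffn` library mentioned above offers a built-in helper for the same task). It assumes `stock_prices` from Part 1.

```python
# Rebase every price series to 100 at its first observation, then plot.
import matplotlib.pyplot as plt

rebased = stock_prices / stock_prices.iloc[0] * 100

rebased.plot(figsize=(12, 6))
plt.title('Adjusted close prices rebased to 100')
plt.ylabel('Rebased price')
plt.show()
```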
**Part 3: Absolute return and risk measures [40pt]**In this part of the assignment, you will work with basic financial calculations and functions, such as computing and compounding investment returns, calculating averages, and computing measures of investment risk.I suggest you use `pandas` dataframes to store all necessary data. Colab includes an extension that renders Pandas dataframes into interactive tables that can be filtered, sorted, and explored dynamically.The extension can be enabled by executing `%load_ext google.colab.data_table` in any code cell and disabled with `%unload_ext google.colab.data_table`. 3.1. Stock returns (6pt)In asset management, we are often interested in the returns of a given time series. Therefore, in this part of the assignment, you need to compute **daily**, **weekly**, and **monthly** **arithmetic and logarithmic** returns for each chosen stock and store them in separate `pandas` dataframe objects named `returns` and `log_returns`, respectively.Make sure to drop any missing values and display the first 5 lines of the resulting dataframes. | # step 7: compute arithmetic and logarithmic returns
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
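A sketch of the return calculations, assuming `stock_prices` is indexed by a `DatetimeIndex`; here `returns`/`log_returns` hold the daily series and the weekly/monthly frequencies are kept in separate frames for simplicity:

```python
# daily, weekly and monthly arithmetic returns, plus daily log returns
import numpy as np

returns = stock_prices.pct_change().dropna()                               # daily arithmetic
weekly_returns = stock_prices.resample('W').last().pct_change().dropna()   # weekly arithmetic
monthly_returns = stock_prices.resample('M').last().pct_change().dropna()  # monthly arithmetic

log_returns = np.log(stock_prices / stock_prices.shift(1)).dropna()        # daily logarithmic
returns.head()
```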
3.2. Distribution of returns (5pt)Check what the return distributions look like by plotting a histogram of daily returns calculated in the previous section. You can use any Python visualisation library of your choice.Plot returns distributions for both, arithmetic and logarithmic returns. Discuss whether there are significant differences between the two. Also, provide a short explanation on when and why we use log returns, rather than normal returns. | # step 8: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
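A quick sketch of the comparison for a single ticker, assuming the `returns` and `log_returns` frames from the sketch above:

```python
# overlay histograms of arithmetic vs. log daily returns for one stock
import matplotlib.pyplot as plt

ticker = returns.columns[0]
plt.hist(returns[ticker], bins=50, alpha=0.6, label='arithmetic')
plt.hist(log_returns[ticker], bins=50, alpha=0.6, label='logarithmic')
plt.legend()
plt.title(f'Daily return distribution: {ticker}')
plt.show()
```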
**Your response/ short explanation:** ________HERE_________ 3.3. Correlation matrix (5pt)Using daily arithmetic stock returns, compute pairwise correlations between your 5 assets and plot a correlation matrix. (optional) You may want to have a look at the `heatmap()` method in the `Seaborn` visualisation library. It allows you to create elegant correlation heatmaps easily. | # step 9: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
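A possible sketch using `seaborn`, assuming the daily `returns` frame from above:

```python
# pairwise correlations of daily arithmetic returns shown as a heatmap
import seaborn as sns
import matplotlib.pyplot as plt

corr = returns.corr()
sns.heatmap(corr, annot=True, cmap='coolwarm', vmin=-1, vmax=1)
plt.title('Correlation of daily returns')
plt.show()
```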
3.4. Cumulative returns (8pt)Using the arithmetic daily returns, compute cumulative returns for each stock over the last 1β, 3-, and 5- year periods and display them as values. Once done, annualise the resulting cumulative daily returns for each stock and display them as well. | # step 10: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
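One way to sketch this, approximating the 1-, 3- and 5-year windows with 252 trading days per year and using the daily `returns` frame from above:

```python
# cumulative and annualised returns over trailing 1-, 3- and 5-year windows
import pandas as pd

summary = {}
for years, days in [(1, 252), (3, 3 * 252), (5, 5 * 252)]:
    window = returns.tail(days)                 # trailing daily arithmetic returns
    cum = (1 + window).prod() - 1               # cumulative return over the window
    ann = (1 + cum) ** (252 / len(window)) - 1  # annualised equivalent
    summary[f'{years}y cumulative'] = cum
    summary[f'{years}y annualised'] = ann

pd.DataFrame(summary)
```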
3.5. Arithmetic average returns (8pt)Compute arithmetic average daily returns for each stock, annualise them, and display the resulting values. As there are typically 252 trading days in a year, to annualise a daily return $r_d$ we use:$$ (1+r_d)^{252} - 1$$ | # step 11: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
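A short sketch applying the annualisation rule from the brief to the daily `returns` frame:

```python
# annualise the arithmetic average daily return: (1 + r_d)**252 - 1
avg_daily = returns.mean()
annualised_avg = (1 + avg_daily) ** 252 - 1
annualised_avg
```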
3.6. Standard deviation (8pt)Using the stock returns calculated earlier, compute standard deviations of daily returns for each stock over the last 1β, 3-, and 5- year periods and display them.Once done, repeat the calculation of standard deviations but using monthly returns instead. Display the resulting values.Explain what the best way to annualise standard deviations is. | # step 12: import required data visualisation library
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
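A sketch assuming the `returns` and `monthly_returns` frames from the earlier sketch; a common convention is to annualise with the square-root-of-time rule (√252 for daily figures, √12 for monthly figures):

```python
# standard deviations of daily and monthly returns, annualised with sqrt-of-time
import numpy as np

std_daily = {}
for years, days in [(1, 252), (3, 3 * 252), (5, 5 * 252)]:
    std_daily[f'{years}y'] = returns.tail(days).std()   # trailing-window daily std

annualised_from_daily = {k: v * np.sqrt(252) for k, v in std_daily.items()}
annualised_from_monthly = monthly_returns.std() * np.sqrt(12)
annualised_from_daily, annualised_from_monthly
```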
**Your response/ short explanation:** ________HERE_________ **Part 4: Risk-adjusted performance evaluation [40pt]**As part of the course we considered several risk-adjusted performance evaluation measures. In this section of the assignment you are asked to compute one of them - the Sharpe ratio: $$Sharpe\ ratio = \frac{E[{r_p}-{r_f}]}{\sqrt{Var[{r_p}-{r_f}]}}$$ Part 4.1. Calculating the Sharpe measure [10pt]Using previously calculated monthly stock returns and the corresponding risk-free interest rates, compute Sharpe ratios for your selected stocks for the last 1-, 3-, and 5-years. Annualise the calculated Sharpe measures and report them as values. | # step 13: compute and annualise the Sharpe ratios
# YOUR CODE HERE | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
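A rough sketch using the `monthly_returns` frame and the `rf` frame from the earlier sketches (decimal annual rates in a column named `'rf'`); dividing the annual rate by 12 as a monthly proxy is a deliberate simplification:

```python
# annualised Sharpe ratio from monthly excess returns
import numpy as np

rf_monthly = rf.resample('M').last()['rf'] / 12            # crude monthly risk-free proxy
excess = monthly_returns.sub(rf_monthly, axis=0).dropna()  # excess returns, aligned on dates
sharpe_annualised = excess.mean() / excess.std() * np.sqrt(12)
sharpe_annualised
```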
Part 4.2. Sharpe measure function [30pt]Define a new Python function `sharpe(ticker_1, ticker_2, ticker_3)` which:* accepts 3 stock tickers as the only arguments;* fetches historical daily prices for the 3 selected tickers over the last 3 years;* fetches U.S. treasury bill (1-month T-Bill rates) rates over the corresponding 3 year period;* computes daily returns and excess returns for each stock;* computes daily average excess returns for each stock;* computes standard deviations of excess daily returns for each stock;* compute Sharpe ratios based on the daily average excess returns and standard deviations of excess retunrs;* annualises the resulting Sharpe ratio (by multiplying the daily Sharpe by $\sqrt[2]{252}$);* returns the annualised Sharpe ratios for the 3 stocks.Assume that all libraries required by your function are already preinstalled and imported (i.e. do not include any `import` statements within your function). However, make sure to import all the required libraries in the code cell below, directly before the function. | # step 14: install required libraries and import as needed
def sharpe(ticker_1, ticker_2, ticker_3):
"""This function returns annualised Sharpe
ratios for the entered tickers using last 3 years
of stock data from Yahoo finance"""
# YOUR CODE HERE
# YOUR CODE HERE
return # YOUR CODE HERE
# execute your functions using AAPL, MSFT, and JPM as arguments | _____no_output_____ | MIT | 2_assignment_1/1_PM_assignment_1.ipynb | MathildeBreton/TBS_portfolio_management_2022 |
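Below is a possible implementation sketch, not a reference solution: it assumes `yfinance` and `pandas_datareader` are installed, uses FRED's `DGS1MO` series as the 1-month T-bill rate, and divides the annual rate by 252 as a rough daily risk-free proxy. It is named `sharpe_sketch` so it does not overwrite the graded stub above.

```python
# sketch of the requested Sharpe-ratio helper (assumptions noted above)
import datetime as dt
import numpy as np
import yfinance as yf
import pandas_datareader.data as web

def sharpe_sketch(ticker_1, ticker_2, ticker_3):
    """Annualised Sharpe ratios for three tickers over the last 3 years."""
    tickers = [ticker_1, ticker_2, ticker_3]
    end = dt.date.today()
    start = end - dt.timedelta(days=3 * 365)

    # daily adjusted close prices for the three tickers
    prices = yf.download(tickers, start=start, end=end, auto_adjust=False)['Adj Close']

    # 1-month T-bill rate from FRED: percent p.a. -> decimal -> rough daily rate
    rf_annual = web.DataReader('DGS1MO', 'fred', start, end)['DGS1MO'] / 100
    rf_daily = rf_annual / 252

    daily_returns = prices.pct_change().dropna()
    excess = daily_returns.sub(rf_daily, axis=0).dropna()  # excess daily returns

    daily_sharpe = excess.mean() / excess.std()            # Sharpe on daily excess returns
    return daily_sharpe * np.sqrt(252)                     # annualised

# example call with the tickers from the brief:
# sharpe_sketch('AAPL', 'MSFT', 'JPM')
```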
Preprocessing captions | # Turn a Unicode string to plain ASCII, thanks to
# https://stackoverflow.com/a/518232/2809427
def unicodeToAscii(s):
return ''.join(
c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn'
)
# Lowercase, trim, and remove non-letter characters
def normalizeString(s):
s = unicodeToAscii(s.lower().strip())
s = re.sub(r"([.!?])", r" \1", s)
s = re.sub(r"[^a-zA-Z.!?]+", r" ", s)
return s
SOS_token = 1
EOS_token = 2
PAD_token = 0
UNK_token = 3
class Vocabulary:
def __init__(self,name):
self.word2index = {"SOS":1,"EOS":2,"UNK":3,"PAD":0}
self.index2word = {1:"SOS",2:"EOS",3:"UNK",0:"PAD"}
self.word2count = {}
self.nwords = 4
def addSentence(self,sentence):
for word in sentence.split(' '):
self.addWord(word)
def addWord(self,word):
if word not in list(self.word2index.keys()):
self.word2index[word] = self.nwords
self.index2word[self.nwords] = word
self.word2count[word] = 1
self.nwords += 1
else:
self.word2count[word] += 1
def save(self,word2index_dic = 'word2index_dic', index2word_dic = 'index2word_dic',
word2count_dic = 'word2count_dic'):
with open('/content/Save/'+word2index_dic+'.p', 'wb') as fp:
pickle.dump(self.word2index, fp, protocol=pickle.HIGHEST_PROTOCOL)
with open('/content/Save/'+index2word_dic+'.p', 'wb') as fp:
pickle.dump(self.index2word, fp, protocol=pickle.HIGHEST_PROTOCOL)
with open('/content/Save/'+word2count_dic+'.p', 'wb') as fp:
pickle.dump(self.word2count, fp, protocol=pickle.HIGHEST_PROTOCOL)
def load(self, word2index_dic = 'word2index_dic', index2word_dic = 'index2word_dic',
word2count_dic = 'word2count_dic'):
with open('/content/Save/'+word2index_dic+'.p', 'rb') as fp:
self.word2index = pickle.load(fp)
with open('/content/Save/'+index2word_dic+'.p', 'rb') as fp:
self.index2word = pickle.load(fp)
with open('/content/Save/'+word2count_dic+'.p', 'rb') as fp:
self.word2count = pickle.load(fp)
self.nwords = len(self.word2index)
voc = Vocabulary('COCO_TRAIN')
#voc.load()
import pickle
# for _,target in tqdm.tqdm(coco_train_caption):
# for sen in target:
# voc.addSentence(normalizeString(sen))
# voc.save()
voc.load()
class COCO14Dataset(Dataset):
def __init__(self,coco_caption,coco_detection,voc,transforms=None):
self.coco_caption = coco_caption
self.coco_detection = coco_detection
self.voc = voc
self.transforms = transforms
def __len__(self):
return len(self.coco_caption)
def __getitem__(self,idx):
img,target = self.coco_caption[idx]
img_detection,detection_target = self.coco_detection[idx]
original_shape = np.array(img_detection).shape
lbl = normalizeString(random.choice(target))
label = []
        for s in lbl.split(' '):
            # look the word up in the vocabulary held by this dataset, not the global one
            if s in self.voc.word2count:
                label.append(self.voc.word2index[s])
            else:
                label.append(UNK_token)
label = [SOS_token]+label +[EOS_token]
bounding_box = []
bounding_box_category = []
wratio = (224*1.0)/original_shape[0]
hratio = (224*1.0)/original_shape[1]
for i in range(len(detection_target)):
bnding_box = detection_target[i]['bbox']
b_category = detection_target[i]['category_id']
bbox_plt ,_ = bbox_transform_coco2cv(bnding_box,hratio,wratio)
bounding_box.append(list(bbox_plt))
bounding_box_category.append(b_category)
return img, label,bounding_box,bounding_box_category
batch_size = 32
train_dset = COCO14Dataset(coco_train_caption,coco_train_detection,voc,transforms=data_transform)
def collate_fn(batch):
data = [item[0] for item in batch]
images = torch.stack(data,0)
label = [item[1] for item in batch]
max_target_len = max([len(indexes) for indexes in label])
padList = list(itertools.zip_longest(*label, fillvalue = 0))
lengths = torch.tensor([len(p) for p in label])
padVar = torch.LongTensor(padList)
bounding_box = [item[2] for item in batch]
bounding_box_category = [item[3] for item in batch]
m = []
for i, seq in enumerate(padVar):
#m.append([])
tmp = []
for token in seq:
if token == 0:
tmp.append(int(0))
else:
tmp.append(1)
m.append(tmp)
m = torch.tensor(m)
return images, padVar, m, max_target_len,bounding_box,bounding_box_category
train_loader=DataLoader(train_dset,batch_size = batch_size, num_workers = 8,shuffle = False,
collate_fn = collate_fn, drop_last=True) | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Exploratory Data Analysis on COCO datasets
| import json
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import pdb
import os
from pycocotools.coco import COCO
from skimage import io
font = {'family' : 'Arial',
'weight' : 'normal',
'size' : 12}
matplotlib.rc('font', **font) | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Setting the root directory and annotation json file |
# src_root = '../../../../datasets/voc2012/'
# src_subset = 'images/'
# src_file = src_root+'annotations/instances_train2012.json'
# src_desc = 'train2017_voc12'
src_root = '../../../../datasets/coco/'
src_subset = '/content/opt/cocoapi/images/train2014'
src_file = '/content/opt/cocoapi/annotations/instances_train2014.json'
src_desc = 'train2014_coco' # a name (identifier) for the dataset
annotation_file = '/content/opt/cocoapi/annotations/captions_train2014.json'
coco_obj = COCO(src_file)
# Reading the json file
with open(src_file, 'r') as f:
root = json.load(f)
root.keys()
with open(annotation_file, 'r') as f:
caption = json.load(f)
caption.keys()
# caption['annotations']
# root['annotations']
root['categories'] | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Basic High Level Information | # Basic High Level Information
n_images = len(root['images'])
n_boxes = len(root['annotations'])
n_categ = len(root['categories'])
# height, width
heights = [x['height'] for x in root['images']]
widths = [x['width'] for x in root['images']]
print('Dataset Name: ',src_desc)
print('Number of images: ',n_images)
print('Number of bounding boxes: ', n_boxes)
print('Number of classes: ', n_categ)
print('Max min avg height: ', max(heights), min(heights), int(sum(heights)/len(heights)))
print('Max min avg width: ', max(widths), min(widths), int(sum(widths)/len(widths))) | Dataset Name: train2014_coco
Number of images: 82783
Number of bounding boxes: 604907
Number of classes: 80
Max min avg height: 640 51 483
Max min avg width: 640 59 578
| MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Distribution of objects across images | # Objects per image distribution
img2nboxes = {} # mapping "image id" to "number of boxes"
for ann in root['annotations']:
img_id = ann['image_id']
if img_id in img2nboxes.keys():
img2nboxes[img_id] += 1
else:
img2nboxes[img_id] = 1
nboxes_list = list(img2nboxes.values())
min_nboxes = min(nboxes_list)
max_nboxes = max(nboxes_list)
avg_nboxes = int(sum(nboxes_list)/len(nboxes_list))
out = pd.cut(nboxes_list, bins=np.arange(0,max_nboxes+10,10), include_lowest=True)
counts = out.value_counts().values
labels = [(int(i.left),int(i.right)) for i in out.value_counts().index.categories]
graph_xind = [i[0] for i in labels]
ticks = [ '('+str(i[0])+','+ str(i[1])+')' for i in labels]
plt.figure(figsize=(10,5))
plt.bar(graph_xind, counts, tick_label=ticks, width=5)
plt.xlabel('Number of objects')
plt.ylabel('Number of images')
plt.title('Number of objects distribution over the dataset')
plt.show()
print("Number of images having atleast one box: ", len(nboxes_list))
print("Min number of boxes per image: ", min_nboxes)
print("Max number of boxes per image: ", max_nboxes)
print("Avg number of boxes per image: ", avg_nboxes) | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Class wise distribution of objects | # Class distribution
class2nboxes = {}
for ann in root['annotations']:
cat_id = ann['category_id']
if cat_id in class2nboxes.keys():
class2nboxes[cat_id] += 1
else:
class2nboxes[cat_id] = 1
sorted_c2nb = [(k,v)for k, v in sorted(class2nboxes.items(), reverse=True, key=lambda item: item[1])]
# top 20 classes
top = min(len(sorted_c2nb),20)
# to plot
y = [i[1] for i in sorted_c2nb[:top]]
x = [i[0] for i in sorted_c2nb[:top]]
plt.figure(figsize=(10,5))
plt.bar(np.arange(len(y)),y, width=0.5,tick_label=x,color='orange')
plt.xlim(-0.5,len(y))
plt.xlabel('Category Id')
plt.ylabel('Number of boxes')
plt.title('Class distribution (decreasing order)')
plt.show()
categ_map = {x['id']: x['name'] for x in root['categories']}
for k in categ_map.keys():
print(k,'->',categ_map[k]) | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Class wise bounding box area distribution | # Class wise bounding box area distribution
bbox_areas = {} # key: class index, value -> a list of bounding box areas
for ann in root['annotations']:
area = ann['area']
cat_id = ann['category_id']
if area <= 0.0:
continue
if cat_id in bbox_areas.keys():
bbox_areas[cat_id].append(area)
else:
bbox_areas[cat_id] = [area]
bbox_avg_areas = []
for cat in bbox_areas.keys():
areas = bbox_areas[cat]
avg_area = int(sum(areas)/len(areas))
bbox_avg_areas.append((cat,avg_area))
bbox_avg_areas = sorted(bbox_avg_areas, key = lambda x: x[1])
top = min(10, len(bbox_avg_areas))
plt.figure(figsize=(10,10))
y = [item[1] for item in bbox_avg_areas[-top:]]
x = [item[0] for item in bbox_avg_areas[-top:]]
y2 = [item[1] for item in bbox_avg_areas[:top]]
x2 = [item[0] for item in bbox_avg_areas[:top]]
plt.subplot(211)
plt.bar(np.arange(len(y)),y, width=0.5,tick_label=x,color='green')
plt.xlim(-0.5,len(y))
plt.xlabel('Category Id')
plt.ylabel('Average bounding box area in pixel squared')
plt.title('Top '+str(top)+' Classes with highest avg bounding box size')
plt.subplot(212)
plt.bar(np.arange(len(y2)),y2, width=0.5,tick_label=x2,color='red')
plt.xlim(-0.5,len(y2))
plt.xlabel('Category Id')
plt.ylabel('Average bounding box area in pixel squared')
plt.title('Top '+str(top)+' Classes with lowest avg bounding box size')
plt.show()
categ_map = {x['id']: x['name'] for x in root['categories']}
for k in categ_map.keys():
print(k,'->',categ_map[k])
# root['annotations'] | _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Class wise 'occurrence' in Captions
| # caption['annotations']
caption_list=[]
for x in caption['annotations']:
caption_list.append(x['caption'])
caption_list[1:8]
len(caption_list)
caption_list[9]
if 'toilet' in caption_list[9]:
print("bro")
class_list=[]
for x in root['categories']:
class_list.append(x['name'])
# class_list
cnt_frequency=[]
for class_name in class_list:
cnt=0;
for caption in caption_list:
if class_name in caption:
cnt +=1
cnt_frequency.append(cnt);
print(class_name + ' : ' + str(cnt))
cat_list=[]
for k in categ_map.keys():
cat_list.append(k)
word_cnt_list=[]
for k in cnt_frequency:
word_cnt_list.append(k)
word_cnt_tuple_list=[]
for i in range(len(cat_list)):
a = cat_list[i]
b = word_cnt_list[i]
word_cnt_tuple_list.append((a,b))
word_cnt_tuple_list = sorted(word_cnt_tuple_list, key = lambda x: x[1])
top = min(10, len(word_cnt_tuple_list))
plt.figure(figsize=(10,10))
y = [item[1] for item in word_cnt_tuple_list[-top:]]
x = [item[0] for item in word_cnt_tuple_list[-top:]]
y2 = [item[1] for item in word_cnt_tuple_list[:top]]
x2 = [item[0] for item in word_cnt_tuple_list[:top]]
plt.subplot(211)
plt.bar(np.arange(len(y)),y, width=0.5,tick_label=x,color='green')
plt.xlim(-0.5,len(y))
plt.xlabel('Category Id')
plt.ylabel('nos. of captions')
plt.title('Top '+str(top)+' Classes with highest occurrence in captions')
plt.subplot(212)
plt.bar(np.arange(len(y2)),y2, width=0.5,tick_label=x2,color='red')
plt.xlim(-0.5,len(y2))
plt.xlabel('Category Id')
plt.ylabel('nos. of captions')
plt.title('Top '+str(top)+' Classes with lowest occurrence in captions')
plt.show()
categ_map = {x['id']: x['name'] for x in root['categories']}
for k in categ_map.keys():
print(k,'->',categ_map[k])
| _____no_output_____ | MIT | 12_case_study_of_evaluating_attention_in_COCO_dataset/codes/EDA_COCO_Dataset.ipynb | lnpandey/DL_explore_synth_data |
Scala RepresentationsScala objects are integrated with HoTT by using wrappers, combinators and implicit based convenience methods. In this note we look at the basic representations. The main power of this is to provide automatically (through implicits) types and scala bindings for functions from the basic ones.A more advanced form of Scala representations also makes symbolic algebra simplifications. The basic form should be used, for example, for group presentations, where simplifications are not expected. | load.jar("/home/gadgil/code/ProvingGround/core/.jvm/target/scala-2.11/ProvingGround-Core-assembly-0.8.jar")
import provingground._
import HoTT._
import ScalaRep._ | _____no_output_____ | MIT | notes/.ipynb_checkpoints/ScalaRep-checkpoint.ipynb | alf239/ProvingGround |
We consider the type of Natural numbers formed from Integers. This is defined in ScalaRep as:```scalacase object NatInt extends ScalaTyp[Int]```**Warning:** This is an unsafe type, as Integers can overflow, and there is no checking for positivity. | NatInt | _____no_output_____ | MIT | notes/.ipynb_checkpoints/ScalaRep-checkpoint.ipynb | alf239/ProvingGround |
Conversion using the term methodThe term method converts a scala object, with scala type T say, into a Term, provided there is an implicit representation with scala type T. | import NatInt.rep
1.term | _____no_output_____ | MIT | notes/.ipynb_checkpoints/ScalaRep-checkpoint.ipynb | alf239/ProvingGround |
Functions to FuncTermsGiven the representation of Int, there are combinators that give representations of, for instance Int => Int => Int. Note also that the type of the resulting term is a type parameter of the scala representations, so we get a refined compile time type | val sum = ((n: Int) => (m: Int) => n + m).term
sum(1.term)(2.term)
val n = "n" :: NatInt
sum(n)(2.term)
val s = lmbda(n)(sum(n)(2.term))
s(3.term) | _____no_output_____ | MIT | notes/.ipynb_checkpoints/ScalaRep-checkpoint.ipynb | alf239/ProvingGround |
We will also define the product | val prod = ((n : Int) => (m: Int) => n * m).term
prod(2.term)(4.term) | _____no_output_____ | MIT | notes/.ipynb_checkpoints/ScalaRep-checkpoint.ipynb | alf239/ProvingGround |
Options | # parse options
problem = 'twelve_pieces_process.json' # 'pavilion_process.json' # 'twelve_pieces_process.json'
problem_subdir = 'results'
should_recompute_actions = False  # flag controlling whether actions/states are recomputed below
load_external_movements = False
from collections import namedtuple
PlanningArguments = namedtuple('PlanningArguments', ['problem', 'viewer', 'debug', 'diagnosis', 'id_only', 'solve_mode', 'viz_upon_found',
'save_now', 'write', 'plan_impacted', 'watch', 'step_sim', 'verbose'])
# args = PlanningArguments(problem, viewer, debug, diagnosis, id_only, solve_mode, viz_upon_found, save_now, write, plan_impacted, watch, step_sim, verbose) | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Parse process from json | import os
from termcolor import cprint
import pybullet_planning as pp
from integral_timber_joints.planning.parsing import parse_process, save_process_and_movements, get_process_path, save_process
process = parse_process(problem, subdir=problem_subdir)
result_path = get_process_path(problem, subdir='results')
if len(process.movements) == 0:
    cprint('No movements found in process, trigger recompute actions.', 'red')
    should_recompute_actions = True
if should_recompute_actions:
    cprint('Recomputing Actions and States', 'cyan')
    # recompute_action_states() is assumed to be imported/defined elsewhere in the notebook;
    # the flag is renamed so it no longer shadows that function
    recompute_action_states(process)
from integral_timber_joints.process import RoboticMovement
moves = process.get_movements_by_beam_id('b4')
cnt = 0
for m in moves:
if isinstance(m, RoboticMovement):
cnt+=1
print('{}/{}'.format(cnt, len(moves))) | 45/72
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
read jsons | # from collections import defaultdict
import json
# file_name = 'b4_runtime_data_w_TC_final_nonlinear.json'
notc_file_name = 'b4_runtime_data_No_TC_final_all.json'
tc_file_name = 'b4_runtime_data_w_TC_nonlinear_40_trials.json'
tc_file_name2 = 'b4_runtime_data_w_TC_final_linear.json'
# b4_runtime_data_No_TC_21-07-06_11-54-15.json, on-going
# 'b1_runtime_data_w_TC_21-07-06_07-35-29.json'
#'b4_runtime_data_No_TC_21-07-06_00-04-45.json', 600 timeout, before bug fixed
# 'b4_runtime_data_No_TC_21-07-05_19-59-42.json' 1800 timeout, before bug fixed
runtime_data = {}
with open('figs/{}'.format(notc_file_name), 'r') as f:
runtime_data['notc'] = json.load(f)
runtime_data['tc'] = {}
with open('figs/{}'.format(tc_file_name), 'r') as f:
runtime_data['tc'].update(json.load(f))
with open('figs/{}'.format(tc_file_name2), 'r') as f:
runtime_data['tc'].update(json.load(f))
print(runtime_data['notc'].keys())
print(runtime_data['tc'].keys()) | dict_keys(['nonlinear', 'linear_forward', 'linear_backward'])
dict_keys(['nonlinear', 'linear_forward', 'linear_backward'])
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
B4-Histogram | from collections import defaultdict
# aggregate all success/failure trials
agg_data = {'notc':{}, 'tc':{}}
for tc_flag in runtime_data:
for solve_mode_ in runtime_data[tc_flag]:
agg_data[tc_flag][solve_mode_] = defaultdict(list)
cnt = 0
for outer_trial_i, tdata in runtime_data[tc_flag][solve_mode_].items():
for inner_trial_j_data in tdata.values():
runtime_per_move = [sum(inner_trial_j_data['profiles'][mid]['plan_time']) \
for mid in inner_trial_j_data['profiles']]
runtime_key = 'success' if inner_trial_j_data['success'] else 'failure'
agg_data[tc_flag][solve_mode_]['history'].append((inner_trial_j_data['success'], sum(runtime_per_move)))
# if cnt < sample_num:
agg_data[tc_flag][solve_mode_][runtime_key].append(sum(runtime_per_move))
cnt += 1
# agg_data['tc']
fig, ax = plt.subplots()
history = agg_data['tc']['linear_forward']['history'][:37]
print(len(history))
ax.plot(range(len(history)), [h[1] for h in history])
ax.scatter(range(len(history)), [h[1] for h in history], c=['g' if h[0] else 'r' for h in history])
import numpy as np
total_attempts = 37
plot_data = {'notc':{}, 'tc':{}}
for tc_flag in agg_data:
for solve_mode, solve_data in agg_data[tc_flag].items():
history = solve_data['history'][0:total_attempts]
success_runs = [h[1] for h in history if h[0]]
failed_runs = [h[1] for h in history if not h[0]]
success_rate = len(success_runs) / len(history)
success_mean = np.mean(success_runs)
success_std = np.std(success_runs)
failure_mean = np.mean(failed_runs)
failure_std = np.std(failed_runs)
plot_data[tc_flag][solve_mode] = {}
plot_data[tc_flag][solve_mode]['total_attempts'] = len(history)
plot_data[tc_flag][solve_mode]['success_rate'] = success_rate
plot_data[tc_flag][solve_mode]['success_mean'] = success_mean
plot_data[tc_flag][solve_mode]['success_std'] = success_std
plot_data[tc_flag][solve_mode]['failure_mean'] = failure_mean
plot_data[tc_flag][solve_mode]['failure_std'] = failure_std
plot_data[tc_flag][solve_mode]['success_range'] = (success_mean-np.min(success_runs), np.max(success_runs)-success_mean) \
if success_runs else (0,0)
plot_data[tc_flag][solve_mode]['failed_range'] = (failure_mean-np.min(failed_runs), np.max(failed_runs)-failure_mean)
plot_data
# set a sans-serif font (Arial); the link below covers a Helvetica-like setup
# https://felix11h.github.io/blog/matplotlib-tgheros
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
# rc('font', **{'family': 'sans-serif', 'sans-serif': ['Arial']})
import matplotlib.pyplot as plt
import numpy as np
constriant_type = 'notc'
pp_data = plot_data[constriant_type]
x = np.arange(len(pp_data)) # the label locations
width = 0.3 # the width of the bars
success_green = '#caffbf'
failure_red = '#ffadad'
average_color = '#a0c4ff'
scatter_size = 5
fig, ax = plt.subplots(1,3,figsize=(14,4)) # plt.figaspect(2)
# ! First figure
s_height = 40
rate_x = x
rate_alpha = 1.0
success_heights = [(pp_data[s]['success_rate'])*s_height for s in pp_data]
failed_heights = [(1-pp_data[s]['success_rate'])*s_height for s in pp_data]
rects1_1 = ax[0].bar(rate_x, success_heights, width, color=success_green, alpha=rate_alpha)
rects1_2 = ax[0].bar(rate_x, failed_heights, width, bottom=success_heights, color=failure_red, alpha=rate_alpha)
ax[0].bar_label(rects1_1, labels=['{:.1f}%'.format(pp_data[s]['success_rate']*100) for s in pp_data],
label_type='center') #padding=3)
ax[0].bar_label(rects1_2, labels=['{:.1f}%'.format((1-pp_data[s]['success_rate'])*100) for s in pp_data],
label_type='center') #padding=3)
ax[0].set_ylabel('number of trials')
ax[0].set_xticks(x)
ax[0].set_xticklabels(pp_data)
# ax[0].legend()
ax[0].set_title('Success rate')
elinewidth = 0.5
ecapsize = 2
# ! Second Figure
rects2 = ax[1].bar(x - width/2, [pp_data[s]['success_mean'] for s in pp_data], width,
yerr=[[pp_data[s]['success_range'][0] for s in pp_data], [pp_data[s]['success_range'][1] for s in pp_data]],
label='success runtime', error_kw={'elinewidth':elinewidth},
color=success_green, ecolor='black', capsize=ecapsize)
rects3 = ax[1].bar(x + width/2, [pp_data[s]['failure_mean'] for s in pp_data], width,
yerr=[[pp_data[s]['failed_range'][0] for s in pp_data], [pp_data[s]['failed_range'][1] for s in pp_data]],
label='failure runtime', error_kw={'elinewidth':elinewidth},
color=failure_red, ecolor='black', capsize=ecapsize)
ax[1].set_ylabel('planning time (s)')
ax[1].set_xticks(x)
ax[1].set_xticklabels(pp_data)
ax[1].set_ylim([0,420])
ax[1].legend(loc='upper center')
ax[1].set_title('Average runtime for each attempt')
# average time to obtain a successful result
# ! Third Figure
timeout = 600*3
data_summary = {}
for solve_mode, solve_mode_data in runtime_data[constriant_type].items():
runtime_per_trial = []
for outer_trial_data in solve_mode_data.values():
runtime_per_inner = []
for inner_trial_j_data in outer_trial_data.values():
runtime_per_inner.append(sum([sum(inner_trial_j_data['profiles'][mid]['plan_time']) \
for mid in inner_trial_j_data['profiles']]))
runtime_per_trial.append(sum(runtime_per_inner))
num_bts = [len(solve_mode_data[str(at)])-1 for at in range(len(solve_mode_data))]
data_summary[solve_mode] = (np.average(runtime_per_trial), runtime_per_trial, num_bts)
bars = ax[2].bar(x, [d[0] for _, d in data_summary.items()], width, align='center', zorder=1, color=average_color)
# scatter points
for tx, rdata in zip(x, data_summary.values()):
inner_runtimes = rdata[1]
ax[2].scatter([tx for _ in inner_runtimes], inner_runtimes, c=['black' if rt < timeout else '#ef476f' \
for rt in inner_runtimes], s=scatter_size, zorder=2)
# for t, bt in zip(inner_runtimes, rdata[2]):
# ax[2].annotate(bt, (tx+0.05, t), fontsize=7)
# timeout
ax[2].plot(x, [timeout for _ in x], c=failure_red, dashes=[6, 2], label='timeout', zorder=2, lw=1)
# ax[2].set_ylim([0,1850])
ax[2].set_xticks(x)
ax[2].set_xticklabels(data_summary)
ax[2].set_ylabel('planning time (s)')
ax[2].set_title('Average runtime until success/timeout')
ax[2].set_ylim([0,2100])
ax[2].legend(loc='upper right')
fig.tight_layout()
# all: comparison between linear and nonlinear planning b4's all movements (xx robot movements)
# a: success rate, b: runtime for each attempt, c: average runtime until success
plt.savefig(os.path.join('figs','10_beam4_runtime_without_TC.svg'))
plt.savefig(os.path.join('figs','10_beam4_runtime_without_TC.png'))
plt.show()
# ! Third Figure
tc_file_name = 'b4_runtime_data_w_TC_final_nonlinear.json'
tc_file_name2 = 'b4_runtime_data_w_TC_final_linear.json'
tc_runtime_data = {}
with open('figs/{}'.format(tc_file_name), 'r') as f:
tc_runtime_data.update(json.load(f))
with open('figs/{}'.format(tc_file_name2), 'r') as f:
tc_runtime_data.update(json.load(f))
import matplotlib.pyplot as plt
import numpy as np
constriant_type = 'tc'
pp_data = plot_data[constriant_type]
x = np.arange(len(pp_data)) # the label locations
width = 0.3 # the width of the bars
success_green = '#caffbf'
failure_red = '#ffadad'
average_color = '#a0c4ff'
scatter_size = 5
fig, ax = plt.subplots(1,3,figsize=(14,4)) # plt.figaspect(2)
# ! First figure
s_height = 40
rate_x = x
rate_alpha = 1.0
success_heights = [(pp_data[s]['success_rate'])*s_height for s in pp_data]
failed_heights = [(1-pp_data[s]['success_rate'])*s_height for s in pp_data]
rects1_1 = ax[0].bar(rate_x, success_heights, width, color=success_green, alpha=rate_alpha)
rects1_2 = ax[0].bar(rate_x, failed_heights, width, bottom=success_heights, color=failure_red, alpha=rate_alpha)
ax[0].bar_label(rects1_1, labels=['{:.1f}%'.format(pp_data[s]['success_rate']*100) for s in pp_data],
label_type='center') #padding=3)
ax[0].bar_label(rects1_2, labels=['{:.1f}%'.format((1-pp_data[s]['success_rate'])*100) for s in pp_data],
label_type='center') #padding=3)
ax[0].set_ylabel('number of trials')
ax[0].set_xticks(x)
ax[0].set_xticklabels(pp_data)
# ax[0].legend()
ax[0].set_title('Success rate')
# ! Second Figure
rects2 = ax[1].bar(x - width/2, [pp_data[s]['success_mean'] for s in pp_data], width,
yerr=[[pp_data[s]['success_range'][0] for s in pp_data], [pp_data[s]['success_range'][1] for s in pp_data]],
label='success runtime', error_kw={'elinewidth':elinewidth},
color=success_green, ecolor='black', capsize=ecapsize)
rects3 = ax[1].bar(x + width/2, [pp_data[s]['failure_mean'] for s in pp_data], width,
yerr=[[pp_data[s]['failed_range'][0] for s in pp_data], [pp_data[s]['failed_range'][1] for s in pp_data]],
label='failure runtime', error_kw={'elinewidth':elinewidth},
color=failure_red, ecolor='black', capsize=ecapsize)
ax[1].set_ylabel('planning time (s)')
ax[1].set_xticks(x)
ax[1].set_xticklabels(pp_data)
ax[1].set_ylim([0,420])
ax[1].legend(loc='upper center')
ax[1].set_title('Average runtime for each attempt')
# average time to obtain a successful result
# ! Third Figure
timeout = 600*3
data_summary = {}
solve_mode = 'nonlinear'
solve_mode_data = tc_runtime_data[solve_mode]
runtime_per_trial = []
for outer_trial_data in solve_mode_data.values():
runtime_per_inner = []
for inner_trial_j_data in outer_trial_data.values():
runtime_per_inner.append(sum([sum(inner_trial_j_data['profiles'][mid]['plan_time']) \
for mid in inner_trial_j_data['profiles']]))
runtime_per_trial.append(sum(runtime_per_inner))
num_bts = [len(solve_mode_data[str(at)])-1 for at in range(len(solve_mode_data))]
data_summary[solve_mode] = (np.average(runtime_per_trial), runtime_per_trial, num_bts)
# 455, 68
lf_bts = 500
data_summary['linear_forward'] = (timeout, [timeout/lf_bts for i in range(lf_bts)], lf_bts)
lb_bts = 500
data_summary['linear_backward'] = (timeout, [timeout/lb_bts for i in range(lb_bts)], lb_bts)
bars = ax[2].bar(x, [d[0] for _, d in data_summary.items()], width, align='center', zorder=1, color=average_color)
# scatter points
tx = 0
rdata = data_summary['nonlinear']
inner_runtimes = rdata[1]
ax[2].scatter([tx for _ in inner_runtimes], inner_runtimes, c=['black' if rt < timeout else '#ef476f' \
for rt in inner_runtimes], s=scatter_size, zorder=2) # label
ax[2].scatter([1,2], [timeout, timeout], c='#ef476f', s=scatter_size, zorder=2)
# for t, bt in zip(inner_runtimes, rdata[2]):
# ax[2].annotate(bt, (tx+0.05, t), fontsize=7)
# timeout
ax[2].plot(x, [timeout for _ in x], c=failure_red, dashes=[6, 2], label='timeout', zorder=2, lw=1)
ax[2].set_xticks(x)
ax[2].set_xticklabels(data_summary)
ax[2].set_ylabel('planning time (s)')
ax[2].set_title('Average runtime until success/timeout')
ax[2].set_ylim([0,2100])
ax[2].legend(loc='upper right')
fig.tight_layout()
# all: comparison between linear and nonlinear planning b4's all movements (xx robot movements)
# a: success rate, b: runtime for each attempt, c: average runtime until success
plt.savefig(os.path.join('figs','11_beam4_runtime_with_TC.svg'))
plt.savefig(os.path.join('figs','11_beam4_runtime_with_TC.png'))
plt.show()
import matplotlib.pyplot as plt
import numpy as np
timeout = 600*3
fig, ax = plt.subplots()
data_summary = {}
for solve_mode, solve_mode_data in runtime_data.items():
runtime_per_trial = []
for outer_trial_data in solve_mode_data.values():
runtime_per_inner = []
for inner_trial_j_data in outer_trial_data.values():
runtime_per_inner.append(sum([sum(inner_trial_j_data['profiles'][mid]['plan_time']) \
for mid in inner_trial_j_data['profiles']]))
runtime_per_trial.append(sum(runtime_per_inner))
num_bts = [len(solve_mode_data[str(at)])-1 for at in range(len(solve_mode_data))]
data_summary[solve_mode] = (np.average(runtime_per_trial), runtime_per_trial, num_bts)
width = 0.35 # the width of the bars
x_pos = np.arange(len(data_summary))
bars = ax.bar(x_pos, [d[0] for _, d in data_summary.items()], width, align='center', zorder=1)
# scatter points
for x, rdata in zip(x_pos, data_summary.values()):
inner_runtimes = rdata[1]
ax.scatter([x for _ in inner_runtimes], inner_runtimes, c='red', s=2.0, zorder=2) # label
for t, bt in zip(inner_runtimes, rdata[2]):
plt.annotate(bt, (x, t))
ax.plot(x_pos, [timeout for _ in x_pos], c='r', dashes=[6, 2], label='timeout', zorder=2)
leg = ax.legend(loc='right')
ax.set_xticks(x_pos)
ax.set_xticklabels(data_summary)
ax.set_ylabel('runtime (s)')
# import os
# plt.savefig(os.path.join('figs',file_name.split('.json')[0]+'_hist.png'))
fig.tight_layout()
plt.show() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
All beam scatter plot | # from collections import defaultdict
import json
file_names = {'b{}'.format(bid) : 'b{}_runtime_data_w_TC_{}.json'.format(bid, '21-07-06_23-04-03') \
for bid in list(range(0,25)) + list(range(26,32))}
file_names.update(
{'b{}'.format(bid) : 'b{}_runtime_data_w_TC_{}.json'.format(bid, '21-07-07_07-55-05') for bid in range(32,40)}
)
all_runtime_data = {}
for bid, fn in file_names.items():
with open('figs/{}'.format(fn), 'r') as f:
all_runtime_data[bid] = json.load(f)
print(all_runtime_data.keys())
all_runtime_data['b1']['nonlinear'].keys()
# ['nonlinear', 'linear_forward', 'linear_backward']
for bid, beam_data in all_runtime_data.items():
for solve_mode_ in beam_data:
print('='*20)
for i, tdata in beam_data[solve_mode_].items():
print('{} | #{}-T#{}:'.format(bid, solve_mode_, i))
sc = all([d['success'] for di, d in tdata.items()])
total_runtime = []
for i, trial_data in tdata.items():
trial_profiles = trial_data['profiles']
runtime_per_move = [sum(trial_profiles[mid]['plan_time']) for mid in trial_profiles]
total_runtime.append(sum(runtime_per_move))
tdata['total_runtime'] = sum(total_runtime)
cprint('- {} - BT {} | time {:.2f}'.format(sc, len(tdata), sum(total_runtime)), 'green' if sc else 'red')
print('---')
import matplotlib.pyplot as plt
import numpy as np
timeout = 600*3
fig, ax = plt.subplots()
data_summary = {}
for bid, beam_data in all_runtime_data.items():
    for solve_mode, solve_mode_data in beam_data.items():
x = range(len(solve_mode_data))
runtime_per_trial = [solve_mode_data[str(at)]['total_runtime'] for at in x]
num_bts = [len(solve_mode_data[str(at)])-1 for at in x]
data_summary[solve_mode] = (np.average(runtime_per_trial), runtime_per_trial, num_bts)
x_pos = np.arange(len(data_summary))
bars = ax.bar(x_pos, [d[0] for _, d in data_summary.items()], align='center', zorder=1)
# scatter points
for x, rdata in zip(x_pos, data_summary.values()):
inner_runtimes = rdata[1]
ax.scatter([x for _ in inner_runtimes], inner_runtimes, c='red', s=2.0, zorder=2) # label
for t, bt in zip(inner_runtimes, rdata[2]):
plt.annotate(bt, (x, t))
ax.plot(x_pos, [timeout for _ in x_pos], c='r', dashes=[6, 2], label='timeout', zorder=2)
leg = ax.legend(loc='right')
ax.set_xticks(x_pos)
ax.set_xticklabels(data_summary)
ax.set_ylabel('runtime (s)')
plt.draw() # Draw the figure so you can find the positon of the legend.
import os
plt.savefig(os.path.join('figs',file_name.split('.json')[0]+'_hist.png'))
plt.show() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
All-beam scatter plot | # from collections import defaultdict
import json
file_names = {bid : 'b{}_runtime_data_w_TC_{}.json'.format(bid, '21-07-06_23-04-03') \
for bid in list(range(0,25)) + list(range(26,32))}
file_names.update(
{bid : 'b{}_runtime_data_w_TC_{}.json'.format(bid, '21-07-07_07-55-05') for bid in range(32,40)}
)
for bid in [6,29,36]:
file_names[bid] = 'b{}_runtime_data_w_TC_21-07-08_17-21-22.json'.format(bid)
for bid in [34]:
file_names[bid] = 'b{}_runtime_data_w_TC_21-07-07_21-58-12.json'.format(bid)
# file_names[bid] = 'b{}_runtime_data_w_TC_21-07-07_07-55-05.json'.format(bid)
for bid in [25]:
file_names[bid] = 'b{}_runtime_data_w_TC_21-07-08_21-12-56.json'.format(bid)
for bid in [4, 13, 32, 37, 38, 39]:
file_names[bid] = 'b{}_runtime_data_w_TC_21-07-08_19-58-46.json'.format(bid)
runtime_per_beam = {}
for bid, fn in file_names.items():
runtime_data = {}
with open('figs/{}'.format(fn), 'r') as f:
runtime_data = json.load(f)
# ['nonlinear', 'linear_forward', 'linear_backward']
for solve_mode_ in runtime_data:
print('='*20)
for i, tdata in runtime_data[solve_mode_].items():
print('b{} | #{}-T#{}:'.format(bid, solve_mode_, i))
sc = any([d['success'] for di, d in tdata.items()])
total_runtime = []
for i, trial_data in tdata.items():
trial_profiles = trial_data['profiles']
runtime_per_move = [sum(trial_profiles[mid]['plan_time']) for mid in trial_profiles]
total_runtime.append(sum(runtime_per_move))
tdata['total_runtime'] = sum(total_runtime)
# !
runtime_per_beam[bid] = (sum(total_runtime), len(tdata))
cprint('{} - BT {} | time {:.2f}'.format(sc, len(tdata), sum(total_runtime)), 'green' if sc else 'red')
print('---')
import matplotlib.pyplot as plt
import numpy as np
timeout = 600*3
fig, ax = plt.subplots(figsize=(16,4))
beams = list(sorted(runtime_per_beam.keys()))
# plt.scatter(beams, [runtime_per_beam[b][0] for b in beams], s=5)
b_runtimes = [runtime_per_beam[b][0] for b in beams]
bar_chart = ax.bar(np.array(beams)+1, b_runtimes, color=average_color) #, edgecolor='black'
ax.bar_label(bar_chart, fontsize=7, padding=3, fmt='%.f')
# plt.plot(beams, [runtime_per_beam[b][0] for b in beams])
# for b in beams:
# plt.annotate(runtime_per_beam[b][1], (b, runtime_per_beam[b][0]))
# plt.plot(beams, [timeout for _ in beams], c='r', label='timeout')
ax.set_xlabel('timber element sequence')
ax.set_ylabel('planning time (s)')
ax.set_xlim([0,41])
ax.set_ylim([0,1700])
# import os
plt.savefig(os.path.join('figs','12_all_beam_runtime.svg'))
plt.savefig(os.path.join('figs','12_all_beam_runtime.png'))
fig.tight_layout()
plt.show()
# all beams, until success, no timeout
import matplotlib.pyplot as plt
import numpy as np
timeout = 600*3
fig, ax = plt.subplots()
markers = ['o', '^', (5, 0)]
mcolors = ['g', 'r', 'b']
for marker, mcolor, (solve_m, solve_mode_data) in zip(markers, mcolors, runtime_data.items()):
x = range(len(solve_mode_data))
runtime_per_trial = [solve_mode_data[str(at)]['total_runtime'] for at in x]
num_bts = [len(solve_mode_data[str(at)])-1 for at in x]
plt.scatter(x, runtime_per_trial, marker=marker, c=mcolor, label=solve_m)
for i in x:
plt.annotate(num_bts[i], (i, runtime_per_trial[i]))
plt.plot(x, [timeout for _ in x], c='r', label='timeout')
ax.set_xlabel('random trials')
ax.set_ylabel('runtime(s)')
# ax.set_title('Runtime by sovl')
leg = ax.legend(loc='upper right')
plt.draw() # Draw the figure so you can find the positon of the legend.
# Get the bounding box of the original legend
bb = leg.get_bbox_to_anchor().transformed(ax.transAxes.inverted())
# Change to location of the legend.
xOffset = 0.4
bb.x0 += xOffset
bb.x1 += xOffset
leg.set_bbox_to_anchor(bb, transform = ax.transAxes)
# import os
# plt.savefig(os.path.join('figs',file_name.split('.json')[0]+'.png'))
plt.show() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Single result | fn = 'b6_runtime_data_w_TC_21-07-07_21-58-12.json'
single_runtime_data = {}
with open('figs/{}'.format(fn), 'r') as f:
single_runtime_data = json.load(f)
# ['nonlinear', 'linear_forward', 'linear_backward']
for bid in [25]: # [4, 13, 32, 37, 38, 39]: #[6,25,29,34,36]:
# fn = 'b{}_runtime_data_w_TC_21-07-08_19-58-46.json'.format(bid)
fn = 'b{}_runtime_data_w_TC_21-07-08_21-12-56.json'.format(bid)
single_runtime_data = {}
with open('figs/{}'.format(fn), 'r') as f:
single_runtime_data = json.load(f)
beam_id = fn.split('_')[0]
for solve_mode_ in single_runtime_data:
print('='*20)
for i, tdata in single_runtime_data[solve_mode_].items():
print('{} | #{}-T#{}:'.format(beam_id, solve_mode_, i))
sc = any([d['success'] for di, d in tdata.items()])
total_runtime = []
for i, trial_data in tdata.items():
trial_profiles = trial_data['profiles']
runtime_per_move = [sum(trial_profiles[mid]['plan_time']) for mid in trial_profiles]
total_runtime.append(sum(runtime_per_move))
# tdata['total_runtime'] = sum(total_runtime)
cprint('{} - BT {} | time {:.2f}'.format(sc, len(tdata), sum(total_runtime)), 'green' if sc else 'red')
print('---') | ====================
b25 | #nonlinear-T#0:
[32mTrue - BT 9 | time 542.60[0m
---
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Detailed diagram | from plotly.subplots import make_subplots
import plotly.graph_objects as go
from integral_timber_joints.process import RoboticFreeMovement, RoboticLinearMovement, RoboticClampSyncLinearMovement
# solve_mode_ = 'linear_forward' # linear_backward | linear_forward | nonlinear
beam_id = file_name.split('_runtime_data')[0]
# total_rows = 0
# for i, d in runtime_data[solve_mode_].items():
# total_rows += len(d)+1
max_inner_loop_displayed = 11
for solve_mode_ in runtime_data:
for attempt_i, s_rdata in runtime_data[solve_mode_].items():
if 'total_runtime' in s_rdata:
del s_rdata['total_runtime']
if len(s_rdata) > max_inner_loop_displayed:
num_rows = max_inner_loop_displayed+1
half = int(max_inner_loop_displayed/2)
selected_inners = list(range(0,half)) + list(range(len(s_rdata)-half,len(s_rdata)))
else:
num_rows = len(s_rdata)+1
selected_inners = list(range(len(s_rdata)))
fig = make_subplots(rows=num_rows, cols=2)
success = any([d['success'] for di, d in s_rdata.items()])
total_runtime = []
failed_m_id = []
for i in s_rdata.keys():
trial_data = s_rdata[i]
trial_profiles = trial_data['profiles']
mid_keys = sorted(trial_profiles.keys(), key=int)
runtime_per_move = [sum(trial_profiles[mid]['plan_time']) for mid in mid_keys]
total_runtime.append(sum(runtime_per_move))
for mid in mid_keys:
if not any(trial_profiles[mid]['plan_success']):
movement = process.get_movement_by_movement_id(trial_profiles[mid]['movement_id'][0])
m_color = '#ff1b6b' if isinstance(movement, RoboticFreeMovement) else '#45caff'
failed_m_id.append((mid, movement.short_summary, m_color))
break
else:
failed_m_id.append((-1, 'success!', '#00ff87'))
if i in selected_inners or int(i) in selected_inners:
success_colors = ['#99C24D' if any(trial_profiles[mid]['plan_success']) else '#F18F01' for mid in mid_keys]
row_id = selected_inners.index(int(i))+1
fig.append_trace(go.Scatter(x=mid_keys,
y=runtime_per_move,
mode='markers',
marker_color=success_colors,
text=[process.get_movement_by_movement_id(trial_profiles[mid]['movement_id'][0]).short_summary \
for mid in mid_keys], # hover text goes here
name='#{}-feasibility'.format(i),
),
row=row_id, col=1
)
fig.append_trace(go.Scatter(x=mid_keys,
y=runtime_per_move,
mode='markers',
marker=dict(
size=5,
color=[trial_profiles[mid]['sample_order'][0] for mid in mid_keys], #set color equal to a variable
colorscale='Viridis', # one of plotly colorscales
showscale=True
),
text=['S#{}-{}'.format(trial_profiles[mid]['sample_order'][0], process.get_movement_by_movement_id(trial_profiles[mid]['movement_id'][0]).short_summary) \
for mid in mid_keys], # hover text goes here
name='#{}-sample order'.format(i),),
row=row_id, col=2
)
if row_id == 1:
fig.update_xaxes(title_text="m_id",row=row_id, col=1)
fig.update_yaxes(title_text="runtime(s)",row=row_id, col=1)
fig.append_trace(go.Scatter(x=list(range(len(s_rdata))),y=total_runtime),
row=num_rows, col=1)
fig.update_xaxes(title_text="trials",row=num_rows, col=1)
fig.update_yaxes(title_text="runtime(s)",row=num_rows, col=1)
fig.append_trace(go.Scatter(x=list(range(len(failed_m_id))),y=[int(tt[0]) for tt in failed_m_id],
mode='markers',
marker_color=[tt[2] for tt in failed_m_id],
text=[tt[1] for tt in failed_m_id],
), row=num_rows, col=2)
fig.update_xaxes(title_text="trials",row=num_rows, col=2)
fig.update_yaxes(title_text="failed_movement_id",row=num_rows, col=2)
title = "figs/{}-{}-trail_{}_success-{}_BT-{}_time-{:.1f}".format(beam_id, solve_mode_,
attempt_i, success, len(s_rdata), sum(total_runtime))
fig.update_layout(title=title)
fig.write_html(title + ".html")
# fig.show()
len(failed_m_id) | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Save runtime data | runtime_data.keys() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Start client | from integral_timber_joints.planning.robot_setup import load_RFL_world
from integral_timber_joints.planning.run import set_initial_state
# * Connect to path planning backend and initialize robot parameters
# viewer or diagnosis or view_states or watch or step_sim,
client, robot, _ = load_RFL_world(viewer=False, verbose=False)
set_initial_state(client, robot, process, disable_env=disable_env, reinit_tool=False)
client.disconnect() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Visualize traj | from integral_timber_joints.planning.state import set_state
from integral_timber_joints.planning.visualization import visualize_movement_trajectory
altered_ms = [process.get_movement_by_movement_id('A43_M2')]
set_state(client, robot, process, process.initial_state)
for altered_m in altered_ms:
visualize_movement_trajectory(client, robot, process, altered_m, step_sim=False, step_duration=0.05) | ===
Viz:[0m
[33mNo traj found for RoboticLinearMovement(#A43_M2, Linear Approach 2 of 2 to place CL3 ('c2') in storage., traj 0)
-- has_start_conf False, has_end_conf True[0m
Press enter to continue
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Disconnect client | client.disconnect() | _____no_output_____ | MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Plan only one movement | # if id_only:
# beam_id = process.get_beam_id_from_movement_id(id_only)
# process.get_movement_summary_by_beam_id(beam_id)
from integral_timber_joints.planning.stream import compute_free_movement, compute_linear_movement
from integral_timber_joints.planning.solve import compute_movement
chosen_m = process.get_movement_by_movement_id(id_only)
compute_movement(client, robot, process, chosen_m, options=lm_options, diagnosis=diagnosis)
from integral_timber_joints.planning.visualization import visualize_movement_trajectory
with pp.WorldSaver():
visualize_movement_trajectory(client, robot, process, chosen_m, step_sim=True) | ===
Viz:[0m
[32mRoboticLinearMovement(#A2_M1, Linear Advance to Final Frame of Beam ('b0'), traj 1)[0m
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Debug | prev_m = process.get_movement_by_movement_id('A40_M6')
start_state = process.get_movement_start_state(prev_m)
end_state = process.get_movement_end_state(prev_m)
# v = end_state['robot'].current_frame.point - start_state['robot'].current_frame.point
# list(v)
set_state(client, robot, process, end_state)
print(end_state['tool_changer'].current_frame)
print(client.get_object_frame('^tool_changer$', scale=1e3)[75])
client.set_robot_configuration(robot, end_state['robot'].kinematic_config)
print(client.get_object_frame('^tool_changer$', scale=1e3)[75])
from compas_fab_pychoreo.backend_features.pychoreo_configuration_collision_checker import PyChoreoConfigurationCollisionChecker
set_state(client, robot, process, end_state, options=options)
# set_state(client, robot, process, start_state, options=options)
pychore_collision_fn = PyChoreoConfigurationCollisionChecker(client)
# end_state['robot'].kinematic_config
options['diagnosis'] = True
pychore_collision_fn.check_collisions(robot, prev_m.trajectory.points[-2], options=options)
tc_body = client.pychoreo_attachments['tool_changer']
from compas_fab_pychoreo.conversions import pose_from_frame, frame_from_pose
frame_from_pose(pp.get_pose(75))
client.get_object_frame('^tool_changer$')
print(end_state['robot'])
print(end_state['tool_changer']) | State: current frame: {
"point": [
16365.989685058594,
5373.808860778809,
1185.4075193405151
],
"xaxis": [
-0.25802939931448104,
0.6277901217809272,
0.7343707456616834
],
"yaxis": [
-0.9661370648091927,
-0.16763997964096333,
-0.1961530250285612
]
} | config: JointTrajectoryPoint((15.468, -4.130, -2.020, 2.159, -0.587, -2.805, 0.492, -2.039, 0.908), (2, 2, 2, 0, 0, 0, 0, 0, 0), (0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000), (0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000), (0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000), Duration(11, 0)) | attached to robot: False
State: current frame: {
"point": [
16366.001562499872,
5373.822840010225,
1185.408652972277
],
"xaxis": [
-0.2580290176609404,
0.6277482599146081,
0.7344066640622972
],
"yaxis": [
-0.9661371673033442,
-0.16765452393882985,
-0.19614008911467898
]
} | config: None | attached to robot: True
| MIT | examples/notebooks/SCF_diagram.ipynb | gramaziokohler/integral_timber_joints |
Ungraded Lab: GradCAMThis lab will walk you through generating gradient-weighted class activation maps (GradCAMs) for model predictions. - This is similar to the CAMs you generated before except: - GradCAMs use gradients instead of the global average pooling weights to weight the activations. Imports | %tensorflow_version 2.x
import warnings
warnings.filterwarnings("ignore")
import os
import glob
import cv2
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from skimage.io import imread, imsave
from skimage.transform import resize
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Model
from tensorflow.keras import layers
from tensorflow.keras.applications import vgg16
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.optimizers import SGD, Adam, RMSprop
import tensorflow as tf
import tensorflow.keras.backend as K
import tensorflow_datasets as tfds
import tensorflow_hub as hub
import imgaug as ia
from imgaug import augmenters as iaa | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
Download and Prepare the DatasetYou will use the Cats vs Dogs dataset again for this exercise. The following will prepare the train, test, and eval sets. | tfds.disable_progress_bar()
splits = ['train[:80%]', 'train[80%:90%]', 'train[90%:]']
# load the dataset given the splits defined above
splits, info = tfds.load('cats_vs_dogs', with_info=True, as_supervised=True, split = splits)
(train_examples, validation_examples, test_examples) = splits
num_examples = info.splits['train'].num_examples
num_classes = info.features['label'].num_classes
BATCH_SIZE = 32
IMAGE_SIZE = (224, 224)
# resizes the image and normalizes the pixel values
def format_image(image, label):
image = tf.image.resize(image, IMAGE_SIZE) / 255.0
return image, label
# prepare batches
train_batches = train_examples.shuffle(num_examples // 4).map(format_image).batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE)
validation_batches = validation_examples.map(format_image).batch(BATCH_SIZE).prefetch(tf.data.experimental.AUTOTUNE)
test_batches = test_examples.map(format_image).batch(1) | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
ModellingYou will use a pre-trained VGG16 network as your base model for the classifier. This will be followed by a global average pooling (GAP) and a 2-neuron Dense layer with softmax activation for the output. The earlier VGG blocks will be frozen and we will just fine-tune the final layers during training. These steps are shown in the utility function below. | def build_model():
# load the base VGG16 model
base_model = vgg16.VGG16(input_shape=IMAGE_SIZE + (3,),
weights='imagenet',
include_top=False)
# add a GAP layer
output = layers.GlobalAveragePooling2D()(base_model.output)
# output has two neurons for the 2 classes (cats and dogs)
output = layers.Dense(2, activation='softmax')(output)
# set the inputs and outputs of the model
model = Model(base_model.input, output)
# freeze the earlier layers
for layer in base_model.layers[:-4]:
layer.trainable=False
# choose the optimizer
optimizer = tf.keras.optimizers.RMSprop(0.001)
# configure the model for training
model.compile(loss='sparse_categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
# display the summary
model.summary()
return model
model = build_model() | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
You can now train the model. This will take around 10 minutes to run. | EPOCHS = 3
model.fit(train_batches,
epochs=EPOCHS,
validation_data=validation_batches) | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
Model InterpretabilityLet's now go through the steps to generate the class activation maps. You will start by specifying the layers you want to visualize. | # select all the layers for which you want to visualize the outputs and store it in a list
outputs = [layer.output for layer in model.layers[1:18]]
# Define a new model that generates the above output
vis_model = Model(model.input, outputs)
# store the layer names we are interested in
layer_names = []
for layer in outputs:
layer_names.append(layer.name.split("/")[0])
print("Layers that will be used for visualization: ")
print(layer_names) | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
Class activation maps (GradCAM)We'll define a few more functions to output the maps. `get_CAM()` is the function highlighted in the lectures and takes care of generating the heatmap of gradient weighted features. `show_random_sample()` takes care of plotting the results. | def get_CAM(processed_image, actual_label, layer_name='block5_conv3'):
model_grad = Model([model.inputs],
[model.get_layer(layer_name).output, model.output])
with tf.GradientTape() as tape:
conv_output_values, predictions = model_grad(processed_image)
# watch the conv_output_values
tape.watch(conv_output_values)
## Use binary cross entropy loss
## actual_label is 0 if cat, 1 if dog
# get prediction probability of dog
# If model does well,
# pred_prob should be close to 0 if cat, close to 1 if dog
pred_prob = predictions[:,1]
# make sure actual_label is a float, like the rest of the loss calculation
actual_label = tf.cast(actual_label, dtype=tf.float32)
# add a tiny value to avoid log of 0
smoothing = 0.00001
# Calculate loss as binary cross entropy
loss = -1 * (actual_label * tf.math.log(pred_prob + smoothing) + (1 - actual_label) * tf.math.log(1 - pred_prob + smoothing))
print(f"binary loss: {loss}")
# get the gradient of the loss with respect to the outputs of the last conv layer
grads_values = tape.gradient(loss, conv_output_values)
grads_values = K.mean(grads_values, axis=(0,1,2))
conv_output_values = np.squeeze(conv_output_values.numpy())
grads_values = grads_values.numpy()
# weight the convolution outputs with the computed gradients
for i in range(512):
conv_output_values[:,:,i] *= grads_values[i]
heatmap = np.mean(conv_output_values, axis=-1)
heatmap = np.maximum(heatmap, 0)
heatmap /= heatmap.max()
del model_grad, conv_output_values, grads_values, loss
return heatmap
def show_sample(idx=None):
# if image index is specified, get that image
if idx:
for img, label in test_batches.take(idx):
sample_image = img[0]
sample_label = label[0]
# otherwise if idx is not specified, get a random image
else:
for img, label in test_batches.shuffle(1000).take(1):
sample_image = img[0]
sample_label = label[0]
sample_image_processed = np.expand_dims(sample_image, axis=0)
activations = vis_model.predict(sample_image_processed)
pred_label = np.argmax(model.predict(sample_image_processed), axis=-1)[0]
sample_activation = activations[0][0,:,:,16]
sample_activation-=sample_activation.mean()
sample_activation/=sample_activation.std()
sample_activation *=255
sample_activation = np.clip(sample_activation, 0, 255).astype(np.uint8)
heatmap = get_CAM(sample_image_processed, sample_label)
heatmap = cv2.resize(heatmap, (sample_image.shape[0], sample_image.shape[1]))
heatmap = heatmap *255
heatmap = np.clip(heatmap, 0, 255).astype(np.uint8)
heatmap = cv2.applyColorMap(heatmap, cv2.COLORMAP_HOT)
converted_img = sample_image.numpy()
super_imposed_image = cv2.addWeighted(converted_img, 0.8, heatmap.astype('float32'), 2e-3, 0.0)
f,ax = plt.subplots(2,2, figsize=(15,8))
ax[0,0].imshow(sample_image)
ax[0,0].set_title(f"True label: {sample_label} \n Predicted label: {pred_label}")
ax[0,0].axis('off')
ax[0,1].imshow(sample_activation)
ax[0,1].set_title("Random feature map")
ax[0,1].axis('off')
ax[1,0].imshow(heatmap)
ax[1,0].set_title("Class Activation Map")
ax[1,0].axis('off')
ax[1,1].imshow(super_imposed_image)
ax[1,1].set_title("Activation map superimposed")
ax[1,1].axis('off')
plt.tight_layout()
plt.show()
return activations | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
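If you only need the raw heatmap rather than the full four-panel figure, you can call `get_CAM()` directly on a single preprocessed image. A minimal sketch, reusing `test_batches` from earlier in the notebook: | # compute and display a single class activation map without the plotting helper (sketch)
for img, label in test_batches.take(1):
    single_heatmap = get_CAM(np.expand_dims(img[0], axis=0), label[0])
plt.matshow(single_heatmap, cmap='hot')
plt.show()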
Time to visualize the results | # Choose an image index to show, or leave it as None to get a random image
activations = show_sample(idx=None) | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
Intermediate activations of layers
You can use the utility function below to visualize the activations in the intermediate layers you defined earlier. This plots the feature maps side by side for each convolution layer, starting from the earliest layer all the way to the final convolution layer. | def visualize_intermediate_activations(layer_names, activations):
assert len(layer_names)==len(activations), "Make sure layers and activation values match"
images_per_row=16
for layer_name, layer_activation in zip(layer_names, activations):
nb_features = layer_activation.shape[-1]
size= layer_activation.shape[1]
nb_cols = nb_features // images_per_row
grid = np.zeros((size*nb_cols, size*images_per_row))
for col in range(nb_cols):
for row in range(images_per_row):
feature_map = layer_activation[0,:,:,col*images_per_row + row]
feature_map -= feature_map.mean()
feature_map /= feature_map.std()
feature_map *=255
feature_map = np.clip(feature_map, 0, 255).astype(np.uint8)
grid[col*size:(col+1)*size, row*size:(row+1)*size] = feature_map
scale = 1./size
plt.figure(figsize=(scale*grid.shape[1], scale*grid.shape[0]))
plt.title(layer_name)
plt.grid(False)
plt.axis('off')
plt.imshow(grid, aspect='auto', cmap='viridis')
plt.show()
visualize_intermediate_activations(activations=activations,
layer_names=layer_names) | _____no_output_____ | Apache-2.0 | Copy_of_C3_W4_Lab_4_GradCam.ipynb | nafiul-araf/TensorFlow-Advanced-Techniques-Specialization |
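Plotting every captured layer produces a lot of figures. If you only want to look at the early feature detectors, you can slice both lists before calling the function: | # visualize only the first four captured layers (sketch)
visualize_intermediate_activations(activations=activations[:4],
                                   layer_names=layer_names[:4])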
Ensembling several predictions
Ensembling is about combining the forecasts produced by several models in order to obtain a final (and hopefully better) forecast. | models = [NaiveSeasonal(12), NaiveSeasonal(24), NaiveDrift()]
model_predictions = [m.historical_forecasts(series,
start=pd.Timestamp('20170101'),
forecast_horizon=12,
stride=12,
last_points_only=False,
verbose=True)
for m in models]
model_predictions = [reduce((lambda a, b: a.append(b)), model_pred) for model_pred in model_predictions]
model_predictions_stacked = model_predictions[0]
for model_prediction in model_predictions[1:]:
model_predictions_stacked = model_predictions_stacked.stack(model_prediction)
""" We build the regression model, and tell it to use the current predictions
"""
regr_model = RegressionModel(lags=None, lags_future_covariates=[0])
""" Our target series is what we want to predict (the actual data)
It has to have the same time index as the features series:
"""
series_target = series.slice_intersect(model_predictions[0])
""" Here we backtest our regression model
"""
ensemble_pred = regr_model.historical_forecasts(
series=series_target, future_covariates=model_predictions_stacked,
start=pd.Timestamp('20180101'), forecast_horizon=12, stride=12, verbose=True
)
fig, ax = plt.subplots(2,2,figsize=(12,6))
ax = ax.ravel()
for i, m in enumerate(models):
series.plot(label='actual', ax=ax[i])
model_predictions[i].plot(label=str(m), ax=ax[i])
# intersect last part, to compare all the methods over the duration of the ensemble forecast
model_pred = model_predictions[i].slice_intersect(ensemble_pred)
mape_model = mape(series, model_pred)
ax[i].set_title('\nMAPE: {:.2f}%'.format(mape_model))
ax[i].legend()
series.plot(label='actual', ax=ax[3])
ensemble_pred.plot(label='Ensemble', ax=ax[3])
ax[3].set_title('\nMAPE, ensemble: {:.2f}%'.format(mape(series, ensemble_pred)))
ax[3].legend()
print('\nRegression coefficients for the individual models:')
for i, m in enumerate(models):
print('Learned coefficient for {}: {:.2f}'.format(m, regr_model.model.coef_[i]))
plt.tight_layout(); | _____no_output_____ | Apache-2.0 | emsembling_predictions.ipynb | siyue-zhang/time-series-forecast-Darts |
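As a point of comparison for the learned weights, you can also try a plain equal-weight average of the three backtest forecasts. This sketch assumes the element-wise arithmetic that darts `TimeSeries` objects provide in recent versions: | # equal-weight average of the three backtest forecasts, for comparison (sketch)
avg_pred = reduce((lambda a, b: a + b), model_predictions) / len(model_predictions)
print('MAPE, equal-weight average: {:.2f}%'.format(mape(series, avg_pred)))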
RegressionEnsembleModel approach | ensemble_model = RegressionEnsembleModel(
forecasting_models=[NaiveSeasonal(12), NaiveSeasonal(24), NaiveDrift()],
regression_train_n_points=12)
ensemble_model.fit(train)
ensemble_pred = ensemble_model.predict(48)
series.plot(label='actual')
ensemble_pred.plot(label='Ensemble forecast')
plt.title('MAPE = {:.2f}%'.format(mape(ensemble_pred, series)))
plt.legend();
val[:48].plot(label='actual')
ensemble_pred.plot(label='Ensemble forecast')
plt.title('MAPE = {:.2f}%'.format(mape(ensemble_pred, series)))
plt.legend(); | _____no_output_____ | Apache-2.0 | emsembling_predictions.ipynb | siyue-zhang/time-series-forecast-Darts |
A single naive seasonal model, for comparison | naive_model = NaiveSeasonal(K=24)
naive_model.fit(train)
naive_forecast = naive_model.predict(48)
val[:48].plot(label='actual')
naive_forecast.plot(label='Naive forecast')
plt.title('MAPE = {:.2f}%'.format(mape(naive_forecast, series)))
plt.legend(); | _____no_output_____ | Apache-2.0 | emsembling_predictions.ipynb | siyue-zhang/time-series-forecast-Darts |
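To wrap up, you can print the error of the two approaches side by side using the forecasts already computed above: | # side-by-side MAPE summary of the last two forecasts (sketch)
for name, pred in [('RegressionEnsembleModel', ensemble_pred), ('NaiveSeasonal(K=24)', naive_forecast)]:
    print('{}: MAPE = {:.2f}%'.format(name, mape(pred, series)))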
Ex1 - Getting and knowing your Data
Check out the [World Food Facts Exercises Video Tutorial](https://youtu.be/_jCSK4cMcVw) to watch a data scientist go through the exercises.
Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data
Step 2. Download the dataset to your computer and unzip it. | import pandas as pd
import numpy as np | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 3. Use the tsv file and assign it to a dataframe called food | food = pd.read_csv('~/Desktop/en.openfoodfacts.org.products.tsv', sep='\t') | //anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2717: DtypeWarning: Columns (0,3,5,19,20,24,25,26,27,28,36,37,38,39,48) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
| BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 4. See the first 5 entries | food.head() | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 5. What is the number of observations in the dataset? | food.shape #will give you both (observations/rows, columns)
food.shape[0] #will give you only the observations/rows number | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 6. What is the number of columns in the dataset? | print(food.shape) #will give you both (observations/rows, columns)
print(food.shape[1]) #will give you only the columns number
#OR
food.info() #Columns: 163 entries | (356027, 163)
163
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 356027 entries, 0 to 356026
Columns: 163 entries, code to water-hardness_100g
dtypes: float64(107), object(56)
memory usage: 442.8+ MB
| BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 7. Print the name of all the columns. | food.columns | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 8. What is the name of 105th column? | food.columns[104] | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 9. What is the type of the observations of the 105th column? | food.dtypes['-glucose_100g'] | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 10. How is the dataset indexed? | food.index | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Step 11. What is the product name of the 19th observation? | food.values[18][7] | _____no_output_____ | BSD-3-Clause | 01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | iamoespana92/pandas_exercises |
Testing is good
Testing is one of the most important things we can do in our infrastructure to make sure that things are configured the way we expect them to be and that the system as a whole is operating the way we want it to.
In a world of dynamic protocols that are **designed** to continue operating in the face of multiple failures, it's always good to make sure you know when the system has gone through a failure. If routing works the way it's supposed to, you may not even be aware you have a failure until the last bandaid finally falls off and you have a total meltdown.
More importantly, testing can be used to help gain confidence in your changes, not just for you, but for your peers, managers, and the business that depends on the network to get things done.
We're going to start as usual by grabbing all the imports we need. ** *Note: I'm going to fly through some of these steps as I've covered them pretty thoroughly in previous blogs, please feel free to ask/comment if there's something that you'd like me to explain in further detail* ** | from pyhpeimc.auth import *
from pyhpeimc.plat.icc import *
from pyhpeimc.plat.device import *
import jtextfsm as textfsm
import yaml
#import githubuser
import mygithub #file not in github repo
auth = IMCAuth("http://", "10.101.0.203", "8080", "admin", "admin") | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
First we've got to grab the devID of the switch we wish to test | devid = get_dev_details('10.101.0.221', auth.creds, auth.url)['id']
devid | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Now we need to create the list of commands whose output we want to gather. For this example, we want to make sure that OSPF, as a system, is still working. So we want to gather the **display ospf peer** output so that we can take a look at the peers and make sure that all expected peers are still present and in a **Full/BDR** state. | cmd_list = ['display ospf peer']
Now that we've got the command list, we're going to use the **run_dev_cmd** function from the **pyhpeimc** library to gather this for the devid of the switch we specified above. We'll also take a quick look at the contents of the **content** key of the object that is returned by the run_dev_cmd function. | raw_text_data = run_dev_cmd(devid, cmd_list, auth.creds, auth.url)
raw_text_data['content'] | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Just to be sure, we'll print this out and confirm that it's the response we would actually expect for this command on that specific device OS. | print (raw_text_data['content'])
Neighbor Brief Information
Area: 0.0.0.0
Router ID Address Pri Dead-Time Interface State
10.101.16.1 10.101.0.1 1 36 Vlan1 Full/BDR
10.101.16.1 10.101.15.1 1 32 Vlan15 Full/BDR
10.20.1.1 10.20.1.1 1 37 GE2/0/23 Full/BDR
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
We will now run this through a TextFSM template to transform the raw string into structured data, which will be much easier to work with. | template = open("./Textfsm/Templates/displayospf.textfsm")
re_table = textfsm.TextFSM(template)
fsm_results = re_table.ParseText(raw_text_data['content'])
ospf_peer = [ { 'area': i[0], 'router_id' :i[1], 'address':i[2], 'pri' :i[3], 'deadtime': i[4], 'interface': i[5], 'state': i[6]} for i in fsm_results]
print ( "There are currently " + str(len(ospf_peer)) + ' OSPF peers on this device')
ospf_peer[0] | There are currently 4 OSPF peers on this device
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Now that we've got an object with all the OSPF peers in it, let's write some quick code to see if one specific peer, 10.20.1.1, is present in the OSPF peer table and whether its current state is Full/BDR. This will let us know that the OSPF peer we expect to be in the table is, in fact, still there and in the Full/BDR state, which tells us there's a pretty good chance it's passing traffic.
I've also added an **else** clause to print out the peers that don't match. | for peer in ospf_peer:
if (peer['address']) == '10.20.1.1' and peer['state'] == "Full/BDR":
print ( peer['address'] + " was the peer I was looking for and it's Full")
else:
print (peer['address'] + ' was not the peer I was looking for') | 10.101.0.1 was not the peer I was looking for
10.101.15.1 was not the peer I was looking for
10.20.1.1 was the peer I was looking for and it's Full
was not the peer I was looking for
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
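Notice the blank entry in the output above: the parse produced a fourth record with empty fields, most likely an artifact of the TextFSM template matching a trailing line. To turn this into a repeatable test, you can filter those records out and wrap the check in a small helper. A sketch built on the ospf_peer list above: | # drop parsed records with an empty address, then make the peer check reusable (sketch)
ospf_peer = [peer for peer in ospf_peer if peer['address']]

def check_ospf_peer(peers, address, state='Full/BDR'):
    # return True if the given address shows up as a peer in the expected state
    return any(p['address'] == address and p['state'] == state for p in peers)

assert check_ospf_peer(ospf_peer, '10.20.1.1'), "OSPF peer 10.20.1.1 is missing or not Full/BDR"
print("OSPF peer test passed")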
Checking IP Routes
What about checking the routing table of a remote peer? | cmd_list = ['display ip routing-table']
raw_text_data = run_dev_cmd(devid, cmd_list, auth.creds, auth.url)
raw_text_data['content']
print (raw_text_data['content'])
template = open("./Textfsm/Templates/displayiproutingtable.textfsm")
re_table = textfsm.TextFSM(template)
fsm_results = re_table.ParseText(raw_text_data['content'])
ip_routes = [ { 'DestinationMask': i[0], 'Proto' :i[1], 'Pre':i[2], 'Cost' :i[3], 'NextHop': i[4], 'Interface': i[5]} for i in fsm_results]
ip_routes[0]
for route in ip_routes:
if route['DestinationMask'] == "10.20.10.0/24":
print (json.dumps(route, indent =4))
| {
"Proto": "Direct",
"Interface": "GE1/0/22",
"NextHop": "10.20.10.1 ",
"Pre": "0",
"DestinationMask": "10.20.10.0/24",
"Cost": "0"
}
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
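The same pattern works for routes: once the routing table is parsed into a list of dictionaries, checking for an expected prefix becomes a one-liner. Note that the parsed NextHop values keep their trailing whitespace, so strip them before comparing. A sketch against the ip_routes list above: | # assert that the expected prefix is present in the parsed routing table (sketch)
expected_route = '10.20.10.0/24'
assert any(r['DestinationMask'] == expected_route for r in ip_routes), expected_route + " is missing"
next_hops = [r['NextHop'].strip() for r in ip_routes if r['DestinationMask'] == expected_route]
print(expected_route + " is present via " + ", ".join(next_hops))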
Checking VLANs | devid = get_dev_details('10.20.10.10', auth.creds, auth.url)['id']
devid
cmd_list = ['display vlan brief']
raw_text_data = run_dev_cmd(devid, cmd_list, auth.creds, auth.url)
raw_text_data['content']
print (raw_text_data['content'])
template = open("./TextFSM/Templates/displayvlanbrief.textfsm")
re_table = textfsm.TextFSM(template)
fsm_results = re_table.ParseText(raw_text_data['content'])
fsm_results
dev_vlans = [ {'vlanId': i[0], 'vlanName' : i[1]} for i in fsm_results]
dev_vlans | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Checking our work
Now that we've captured the VLANs present on the device, we can easily compare this back to the GitHub YAML file where we originally defined which VLANs should be on the device. First we'll create the git_vlans object from the file vlans.yaml directly from GitHub. | gitauth = mygithub.gitcreds() #you didn't think I was going to give you my password did you?
git_vlans = yaml.load(requests.get('https://raw.githubusercontent.com/netmanchris/Jinja2-Network-Configurations-Scripts/master/vlans.yaml', auth=gitauth).text) | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Cleaning up a bit
If we take a look at the git_vlans variable, we can see it's a little too deep for what we want to do. We're going to perform two transformations on the data here to get it to where we want it to be:
- grab the contents of the git_vlans['vlans'] key, which is just the list of vlans
- use the .pop() method on each of the vlans to get rid of the **vlanStatus** key, which we don't want here
For the comparison in the next step, both lists need to contain dictionaries with exactly the same keys. | git_vlans = git_vlans['vlans']
for vlan in git_vlans:
vlan.pop('vlanStatus')
git_vlans | _____no_output_____ | Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Comparing git_vlans and dev_vlans
Now that we've got two different lists, each containing a vlan dictionary for every VLAN with exactly the same keys, we can do a bit of boolean magic to see if each of the VLANs is present in the other object. We'll first check whether all of the VLANs from the **git_vlans** object **are** actually on the device. The git_vlans object was loaded from a YAML file on GitHub where we defined what VLANs **should** be on the device, remember? | for vlan in git_vlans:
if vlan in dev_vlans:
print (vlan['vlanId'] + " is there")
elif vlan not in dev_vlans:
print (devv['vlanId'] + " is not there") | 1 is there
2 is there
3 is there
10 is there
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
Comparing dev_vlans to git_vlans
You didn't think we were done, did you?
For the last step here, we'll do the exact opposite and check whether all of the VLANs that are actually present on the device are also defined in the vlans.yaml file on GitHub. We want to make sure that nobody snuck in and configured a VLAN in our production environment when we weren't looking, right? | for vlan in dev_vlans:
if vlan in git_vlans:
print ( "VLAN " + vlan['vlanId'] + " should be there")
elif vlan not in git_vlans:
print ( "\nSomebody added VLAN " + vlan['vlanId'] + " when we weren't looking. \n \nGo slap them please.\n\n") | VLAN 1 should be there
VLAN 2 should be there
VLAN 3 should be there
Somebody added VLAN 5 when we weren't looking.
Go slap them please.
VLAN 10 should be there
| Apache-2.0 | Events/ETSS June 2016/DevOps Networking Model/Testing Your Changes.ipynb | manpowertw/leafspine-ops |
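A more compact way to express the same audit is to turn each VLAN dictionary into a tuple and use set differences, which reports missing and unexpected VLANs in one pass. A sketch built on the two lists above: | # set-based comparison of intended vs. actual VLANs (sketch)
git_set = {(v['vlanId'], v['vlanName']) for v in git_vlans}
dev_set = {(v['vlanId'], v['vlanName']) for v in dev_vlans}
print("Missing from the device:", git_set - dev_set)
print("Not defined in vlans.yaml:", dev_set - git_set)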