We can plot the data for this location on top of the general data cloud.

fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, sharey=True, figsize=(16,9))
# All data
ax1.scatter(data["qc [MPa]"], data["z [m]"], s=5)
ax2.scatter(data["Blowcount [Blows/m]"], data["z [m]"], s=5)
ax3.scatter(data["Normalised ENTRHU [-]"], data["z [m]"], s=5)
# Location-specific data
ax1.plot(location_data["qc [MPa]"], location_data["z [m]"], color='red')
ax2.plot(location_data["Blowcount [Blows/m]"], location_data["z [m]"], color='red')
ax3.plot(location_data["Normalised ENTRHU [-]"], location_data["z [m]"], color='red')
for ax in (ax1, ax2, ax3):
    ax.xaxis.tick_top()
    ax.xaxis.set_label_position('top')
    ax.grid()
    ax.set_ylim(50, 0)
ax1.set_xlabel(r"Cone tip resistance (MPa)")
ax1.set_xlim(0, 120)
ax2.set_xlabel(r"Blowcount (Blows/m)")
ax2.set_xlim(0, 200)
ax3.set_xlabel(r"Normalised ENTRHU (-)")
ax3.set_xlim(0, 1)
ax1.set_ylabel(r"Depth below mudline (m)")
plt.show()
We can see that pile driving started from 5m depth and continued until a depth of 30m, when the pile tip reached a sand layer with $ q_c $ > 60MPa. Feel free to investigate the soil profile and driving data for the other locations by changing the location ID.

For the purpose of the prediction event, we are interested in the variation of blowcount with $ q_c $, hammer energy, ... We can also generate plots to see the correlations. The data shows significant scatter and non-linear behaviour. We will take this into account for our machine learning model.

fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, figsize=(15,6))
# All data
ax1.scatter(data["qc [MPa]"], data["Blowcount [Blows/m]"], s=5)
ax2.scatter(data["Normalised ENTRHU [-]"], data["Blowcount [Blows/m]"], s=5)
ax3.scatter(data["z [m]"], data["Blowcount [Blows/m]"], s=5)
# Location-specific data
ax1.scatter(location_data["qc [MPa]"], location_data["Blowcount [Blows/m]"], color='red')
ax2.scatter(location_data["Normalised ENTRHU [-]"], location_data["Blowcount [Blows/m]"], color='red')
ax3.scatter(location_data["z [m]"], location_data["Blowcount [Blows/m]"], color='red')
for ax in (ax1, ax2, ax3):
    ax.grid()
    ax.set_ylim(0, 200)
    ax.set_ylabel(r"Blowcount (Blows/m)")
ax1.set_xlabel(r"Cone tip resistance (MPa)")
ax1.set_xlim(0, 120)
ax2.set_xlabel(r"Normalised ENTRHU (-)")
ax2.set_xlim(0, 1)
ax3.set_xlabel(r"Depth below mudline (m)")
ax3.set_xlim(0, 50)
plt.show()
3. Basics of machine learning

The goal of the prediction exercise is to define a model relating the input (soil data, hammer energy, pile data) with the output (blowcount). In ML terminology, we call the inputs (the columns of the dataset except for the blowcount) features. The blowcount is the target variable. Each row in the dataframe represents a sample, a combination of feature values for which the output is known. Data for which a value of the target variable is not yet available is called unseen data. Before we dive into the code for generating ML models, let's discuss some of the concepts in more detail.

3.1. Machine learning techniques

ML combines several data science techniques under one general denominator. We can discern the following families:

- Classification: Predict the value of a discrete target variable of a data point based on its features
- Regression: Predict the value of a continuous target variable based on its features
- Clustering: Identify groups of similar data points based on their features
- Dimensionality reduction: Identify the features with the greatest influence on the data

The first two techniques (classification and regression) are examples of supervised learning. We will use data where the output has been observed and use that to train the ML model. Training a model is essentially the optimisation of the coefficients of a mathematical model to minimise the difference between model predictions and observed values. Such a trained algorithm is then capable of making predictions for unseen data. This concept is not fundamentally different from any other type of data-driven modelling. The main advantage of the ML approach is the speed at which the models can be trained and the many types of models available to the engineer.

In our example of pile driving, we have a regression problem where we are training a model to relate features (soil data, hammer energy and pile data) with a continuous target variable (blowcount).

3.2. Model fitting

Machine learning has disadvantages which can lead to problematic situations if the techniques are misused. One of these disadvantages is that the ML algorithm will always find a fit, even if it is a poor one. The figure below shows an example with data showing a non-linear trend between input and output with some scatter around a trend. We can identify the following situations:

- Underfitting: If we use a linear model for this data, we are not capturing the trend. The model predictions will be poor;
- Good fit: If we formulate a model (quadratic in this case) which captures the general trend but allows variations around the trend, we obtain a good fit. In geotechnical problems, we will never get a perfect fit, but if we identify the influence of the input parameters in a consistent manner, we can build good-quality models;
- Overfitting: If we have a model which perfectly fits all known data points, the prediction for an unseen data point will be poor. The influence of each measurement on the model is too important. The model overfits the data and does not capture the general trends. It just represents the data on which it was trained.

3.3. Model metrics

To prevent misuse of ML models, we will look at certain model metrics to check the quality. There are several model metrics. Two of the more common ones are the Mean Squared Error (MSE) and the coefficient of determination ($ R^2 $). The MSE-value is the normalised sum of quadratic differences.
The closer it is to 0, the better the fit.

$$ \text{MSE}(y, \hat{y}) = \frac{1}{n_\text{samples}} \sum_{i=0}^{n_\text{samples} - 1} (y_i - \hat{y}_i)^2 $$

$ \hat{y}_i $ is the predicted value of the i-th sample and $ y_i $ is the true (measured) value.

The coefficient of determination ($ R^2 $) is a measure of how well future samples are likely to be predicted by the model. A good model has an $ R^2 $-value which is close to 1.

$$ R^2(y, \hat{y}) = 1 - \frac{\sum_{i=0}^{n_{\text{samples}} - 1} (y_i - \hat{y}_i)^2}{\sum_{i=0}^{n_\text{samples} - 1} (y_i - \bar{y})^2} \quad \text{where} \ \bar{y} = \frac{1}{n_{\text{samples}}} \sum_{i=0}^{n_{\text{samples}} - 1} y_i $$

In the example, we will see how we can easily calculate these metrics from the data using the functions available in the ML Python package ```scikit-learn```.

3.4. Model validation

When building a ML model, we will only use a subset of the data for training the model. The other subset is deliberately excluded from the learning process and used to validate the model. The trained model is applied on the unseen data of the validation dataset and the accuracy of the predictions is checked, resulting in a validation score representing the accuracy of the model for the validation dataset. If our trained model is of good quality, the predictions for the validation dataset will be close to the measured values.

We will partition our data in a training dataset and a validation dataset. For the validation dataset, we use seven piles. The other piles will be used as the training dataset.

validation_ids = ['EL', 'CB', 'AV', 'BV', 'EF', 'DL', 'BM']
# Training data - ID not in validation_ids
training_data = data[~data['Location ID'].isin(validation_ids)]
# Validation data - ID in validation_ids
validation_data = data[data['Location ID'].isin(validation_ids)]
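As a quick sanity check on the split (the seven validation piles versus the rest), a minimal sketch printing the size of both subsets:

```python
# quick check that the partition looks sensible
print('Training locations:', training_data['Location ID'].nunique())
print('Validation locations:', validation_data['Location ID'].nunique())
print('Training rows:', len(training_data), '- validation rows:', len(validation_data))
```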
With these concepts in mind, we can start building up a simple ML model.

4. Basic machine learning example: Linear modelling

The most basic type of ML model is a linear model. We are already using linear models in a variety of applications and often fit them without making use of ML techniques. The general equation for a linear model is given below for a model with $ N $ features:

$$ y = a_0 + a_1 \cdot x_1 + a_2 \cdot x_2 + ... + a_N \cdot x_N + \epsilon $$

where $ \epsilon $ is the estimation error. Based on the training dataset, the value of the coefficients ($ a_0, a_1, ..., a_N $) is determined using optimisation techniques to minimise the difference between measured and predicted values. As the equation shows, a good fit will be obtained when the relation between output and inputs is truly linear. If there are non-linearities in the data, the fit will be less good. We will illustrate how a linear regression machine learning model is generated from the available driving data.

4.1. Linear model based on normalised ENTHRU only

The simplest linear model depends on only one feature. We can select the normalised energy transmitted to the pile (ENTRHU) as the only feature for illustration purposes. The mathematical form of the model can be written as:

$$ BLCT = a_0 + a_1 \cdot \text{ENTRHU}_{norm} + \epsilon $$

We will create a dataframe $ X $ with only the normalised ENTHRU feature data and we will put the observed values of the target variable (blowcount) in the vector $ y $. Note that machine learning algorithms will raise errors when NaN values are provided. We need to ensure that we remove such values. We can create a dataframe ```cleaned_training_data``` which only contains rows with no NaN values.

features = ['Normalised ENTRHU [-]']
cleaned_training_data = training_data.dropna() # Remove NaN values
X = cleaned_training_data[features]
y = cleaned_training_data["Blowcount [Blows/m]"]
We can now create a linear model. We need to import this type of model from the scikit-learn package. We can fit the linear model to the data using the ```fit()``` method.

from sklearn.linear_model import LinearRegression
model_1 = LinearRegression().fit(X,y)
At this point, our model has been trained with the data and the coefficients are known. $ a_0 $ is called the intercept and $ a_1 $ to $ a_n $ are stored in ```coef_```. Because we only have one feature, ```coef_``` only returns a single value.

model_1.coef_, model_1.intercept_
We can plot the data with our trained fit. We can see that the fit follows a general trend but the quality is not great.

plt.scatter(X, y)
x = np.linspace(0.0, 1, 50)
plt.plot(x, model_1.intercept_ + model_1.coef_ * x, color='red')
plt.xlabel("Normalised ENTHRU (-)")
plt.ylabel("Blowcount (Blows/m)")
plt.show()
We can also calculate the $ R^2 $ score for our training data. The score is below 0.5 and it goes without saying that this model needs improvement.

model_1.score(X,y)
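The Mean Squared Error introduced above can be computed just as easily; a minimal sketch using ```scikit-learn```'s metric functions on the training data (reusing ```model_1```, ```X``` and ```y``` from the cells above):

```python
from sklearn.metrics import mean_squared_error, r2_score

y_pred = model_1.predict(X)                   # predictions for the training samples
print('MSE:', mean_squared_error(y, y_pred))
print('R2 :', r2_score(y, y_pred))            # identical to model_1.score(X, y)
```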
In the following sections, we will explore ways to improve our model.

4.2. Linearizing features

When using ENTRHU as our model feature, we can see that a linear model is not the most appropriate choice as the relation between blowcount and ENTRHU is clearly non-linear. However, we can linearize features. For example, we can propose a relation using a hyperbolic tangent law, which seems to fit better with the data.

plt.scatter(training_data["Normalised ENTRHU [-]"], training_data["Blowcount [Blows/m]"])
x = np.linspace(0, 1, 100)
plt.plot(x, 80 * np.tanh(5 * x - 0.5), color='red')
plt.xlabel("Normalised ENTHRU (-)")
plt.ylabel("Blowcount (Blows/m)")
plt.ylim([0.0, 175.0])
plt.show()
We can create a linearized feature:

$$ (\text{ENTHRU})_{lin} = \tanh(5 \cdot \text{ENTHRU}_{norm} - 0.5) $$

Xlin = np.tanh(5 * cleaned_training_data[["Normalised ENTRHU [-]"]] - 0.5)
When plotting the linearized data against the blowcount, we can see that a linear relation is much more appropriate.

plt.scatter(Xlin, y)
plt.xlabel(r"$ \tanh(5 \cdot ENTRHU_{norm} - 0.5) $")
plt.ylabel("Blowcount (Blows/m)")
plt.ylim([0.0, 175.0])
plt.show()
We can fit another linear model using this linearized feature.

model_2 = LinearRegression().fit(Xlin, y)
We can check the intercept and the model coefficient:

model_2.coef_, model_2.intercept_
The model with the linearized feature can then be written as:

$$ BLCT = a_0 + a_1 \cdot (\text{ENTHRU})_{lin} $$

We can visualize the fit.

plt.scatter(X, y)
x = np.linspace(0.0, 1, 50)
plt.plot(x, model_2.intercept_ + model_2.coef_ * (np.tanh(5*x - 0.5)), color='red')
plt.xlabel("Normalised ENTHRU (-)")
plt.ylabel("Blowcount (Blows/m)")
plt.ylim([0.0, 175])
plt.show()
We can check the $ R^2 $ model score. By linearizing the normalised ENTHRU energy, we have improved our $ R^2 $ score and are thus fitting a model which better describes our data.

model_2.score(Xlin, y)
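As a design note, the manual linearization step can also be bundled with the regression so that raw features go in and predictions come out; a minimal sketch using ```FunctionTransformer``` and ```make_pipeline``` (the tanh constants are the same assumed values used above):

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# tanh linearization followed by ordinary least squares, fitted on the raw ENTRHU feature
linearizer = FunctionTransformer(lambda values: np.tanh(5 * values - 0.5))
pipe = make_pipeline(linearizer, LinearRegression())
pipe.fit(X, y)
print(pipe.score(X, y))   # should match model_2.score(Xlin, y)
```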
4.3. Using engineering knowledge

We know from engineering considerations on the pile driving problem that the soil resistance to driving (SRD) can be expressed as the sum of shaft friction and end bearing resistance. The shaft friction can be expressed as the integral of the unit shaft friction over the pile circumference and length. If we make the simplifying assumption that there is a proportionality between the cone resistance and the unit shaft friction ($ f_s = \alpha \cdot q_c $), we can write the shaft resistance as follows:

$$ R_s = \int_{0}^{L} \alpha \cdot q_c \cdot \pi \cdot D \cdot dz \approx \alpha \cdot \pi \cdot D \cdot \sum q_{c,i} \cdot \Delta z $$

We can create an additional feature for this. Creating features based on our engineering knowledge will often help us to introduce experience into a machine learning algorithm. To achieve this, we will create a new dataframe using our training data. We will iterate over all locations in the training data and calculate the $ R_s $ feature using a cumulative sum function. We will then put this data together for all locations.

enhanced_data = pd.DataFrame() # Create a dataframe for the data enhanced with the shaft friction feature
for location in training_data['Location ID'].unique():   # Loop over all unique locations
    locationdata = training_data[training_data['Location ID']==location].copy()   # Select the location-specific data
    # Calculate the shaft resistance feature
    locationdata["Rs [kN]"] = \
        (np.pi * locationdata["Diameter [m]"] * locationdata["z [m]"].diff() * locationdata["qc [MPa]"]).cumsum()
    enhanced_data = pd.concat([enhanced_data, locationdata])   # Combine data for the different locations in 1 dataframe
We can plot the data to see that the clustering of our SRD shaft resistance feature vs blowcount is much better than the clustering of $ q_c $ vs blowcount. We can also linearize the relation between shaft resistance and blowcount. We can propose the following relation:

$$ BLCT = 85 \cdot \tanh \left( \frac{R_s}{1000} - 1 \right) $$

fig, ((ax1, ax2)) = plt.subplots(1, 2, sharey=True, figsize=(12,6))
ax1.scatter(enhanced_data["qc [MPa]"], enhanced_data["Blowcount [Blows/m]"])
ax2.scatter(enhanced_data["Rs [kN]"], enhanced_data["Blowcount [Blows/m]"])
x = np.linspace(0.0, 12000, 50)
ax2.plot(x, 85 * (np.tanh(0.001*x-1)), color='red')
ax1.set_xlabel("Cone tip resistance (MPa)")
ax2.set_xlabel("Shaft resistance (kN)")
ax1.set_ylabel("Blowcount (Blows/m)")
ax2.set_ylabel("Blowcount (Blows/m)")
ax1.set_ylim([0.0, 175])
plt.show()
We then proceed to filter the NaN values from the data and fit a linear model.

features = ["Rs [kN]"]
X = enhanced_data.dropna()[features]
y = enhanced_data.dropna()["Blowcount [Blows/m]"]
Xlin = np.tanh((0.001 * X) - 1)
model_3 = LinearRegression().fit(Xlin, y)
We can print the coefficients of the linear model and visualise the fit.

model_3.intercept_, model_3.coef_
plt.scatter(X, y)
x = np.linspace(0.0, 12000, 50)
plt.plot(x, model_3.intercept_ + model_3.coef_ * (np.tanh(0.001*x - 1)), color='red')
plt.xlabel("Shaft resistance (kN)")
plt.ylabel("Blowcount (Blows/m)")
plt.ylim([0.0, 175])
plt.show()
The fit looks reasonable and this is also reflected in the $ R^2 $ score which is just greater than 0.6. We have shown that using engineering knowledge can greatly improve model quality.

model_3.score(Xlin, y)
4.4. Using multiple features

The power of machine learning algorithms is that you can experiment with adding multiple features. Adding a feature can improve your model if it has a meaningful relation with the output. We can use our linearized relation with normalised ENTHRU, the shaft resistance, and we can also linearize the variation of blowcount with depth:

$$ BLCT = 100 \cdot \tanh \left( \frac{z}{10} - 0.5 \right) $$

plt.scatter(data["z [m]"], data["Blowcount [Blows/m]"])
z = np.linspace(0,35,100)
plt.plot(z, 100 * np.tanh(0.1 * z - 0.5), color='red')
plt.ylim([0, 175])
plt.xlabel("Depth (m)")
plt.ylabel("Blowcount (Blows/m)")
plt.show()
Our model with the combined features will take the following mathematical form:

$$ BLCT = a_0 + a_1 \cdot \tanh \left( 5 \cdot \text{ENTHRU}_{norm} - 0.5 \right) + a_2 \cdot \tanh \left( \frac{R_s}{1000} - 1 \right) + a_3 \cdot \tanh \left( \frac{z}{10} - 0.5 \right) $$

We can create the necessary features in our dataframe:

enhanced_data["linearized ENTHRU"] = np.tanh(5 * enhanced_data["Normalised ENTRHU [-]"] - 0.5)
enhanced_data["linearized Rs"] = np.tanh(0.001 * enhanced_data["Rs [kN]"] - 1)
enhanced_data["linearized z"] = np.tanh(0.1 * enhanced_data["z [m]"] - 0.5)
linearized_features = ["linearized ENTHRU", "linearized Rs", "linearized z"]
We can now fit a linear model with three features. The matrix $ X $ is now an $ n \times 3 $ matrix ($ n $ samples and 3 features).

X = enhanced_data.dropna()[linearized_features]
y = enhanced_data.dropna()["Blowcount [Blows/m]"]
model_4 = LinearRegression().fit(X,y)
We can calculate the $ R^2 $ score. The score is slightly better compared to our previous model. Given the scatter in the data, this score is already a reasonable value.

model_4.score(X, y)
4.5. Model predictions

The linear regression model always allows us to write down the mathematical form of the model. We can do so here by filling in the intercept ($ a_0 $) and the coefficients $ a_1 $, $ a_2 $ and $ a_3 $ in the equation above.

model_4.intercept_, model_4.coef_
However, we don't need to explicitly write down the mathematical shape of the model to use it in the code. We can make predictions using the fitted model straightaway.

predictions = model_4.predict(X)
predictions
We can plot these predictions together with the data. We can see that the model follows the general trend of the data fairly well. There is still significant scatter around the trend.

fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, figsize=(15,6))
# Measurements
ax1.scatter(enhanced_data["Rs [kN]"], enhanced_data["Blowcount [Blows/m]"], s=5)
ax2.scatter(enhanced_data["Normalised ENTRHU [-]"], enhanced_data["Blowcount [Blows/m]"], s=5)
ax3.scatter(enhanced_data["z [m]"], enhanced_data["Blowcount [Blows/m]"], s=5)
# Predictions
ax1.scatter(enhanced_data.dropna()["Rs [kN]"], predictions, color='red')
ax2.scatter(enhanced_data.dropna()["Normalised ENTRHU [-]"], predictions, color='red')
ax3.scatter(enhanced_data.dropna()["z [m]"], predictions, color='red')
for ax in (ax1, ax2, ax3):
    ax.grid()
    ax.set_ylim(0, 175)
    ax.set_ylabel(r"Blowcount (Blows/m)")
ax1.set_xlabel(r"Shaft resistance (kN)")
ax1.set_xlim(0, 12000)
ax2.set_xlabel(r"Normalised ENTRHU (-)")
ax2.set_xlim(0, 1)
ax3.set_xlabel(r"Depth below mudline (m)")
ax3.set_xlim(0, 50)
plt.show()
During the prediction event, the goal is to fit a machine learning model which further refines the model developed above.

4.6. Model validation

At the start of the exercise, we excluded seven locations from the fitting to check how well the model would perform for these unseen locations. We can now perform this validation exercise by calculating the shaft resistance and linearizing the model features. We can then make predictions with our model developed above. We will illustrate this for location CB.

# Create a copy of the dataframe with location-specific data
validation_data_CB = validation_data[validation_data["Location ID"] == "CB"].copy()
# Calculate the shaft resistance feature and put it in the column 'Rs [kN]'
validation_data_CB["Rs [kN]"] = \
(np.pi * validation_data_CB["Diameter [m]"] * \
validation_data_CB["z [m]"].diff() * validation_data_CB["qc [MPa]"]).cumsum()
# Calculate linearized ENTHRU, Rs and z
validation_data_CB["linearized ENTHRU"] = np.tanh(5 * validation_data_CB["Normalised ENTRHU [-]"] - 0.5)
validation_data_CB["linearized Rs"] = np.tanh(0.001 * validation_data_CB["Rs [kN]"] - 1)
validation_data_CB["linearized z"] = np.tanh(0.1 * validation_data_CB["z [m]"] - 0.5)
# Create the matrix with n samples and 3 features
X_validation = validation_data_CB.dropna()[linearized_features]
# Create the vector with n observations of blowcount
y_validation = validation_data_CB.dropna()["Blowcount [Blows/m]"]
Given our fitted model, we can now calculate the $ R^2 $ score for our validation data. The score is relatively high and we can conclude that the model generalises well. If this validation score were low, we would have to re-evaluate our feature selection.

# Calculate the R2 score for the validation data
model_4.score(X_validation, y_validation)
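Checking one held-out location at a time can also be automated; a minimal sketch using ```GroupKFold``` so that complete locations are held out in every fold (variable and column names follow the training cells above):

```python
from sklearn.model_selection import GroupKFold, cross_val_score

cv_data = enhanced_data.dropna()
cv_scores = cross_val_score(
    LinearRegression(),
    cv_data[linearized_features],
    cv_data["Blowcount [Blows/m]"],
    groups=cv_data["Location ID"],
    cv=GroupKFold(n_splits=5))
print(cv_scores)   # one R2 score per held-out group of locations
```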
We can calculate the predicted blowcounts for our validation data.

validation_predictions = model_4.predict(X_validation)
The predictions (red dots) can be plotted against the actual observed blowcounts. The cone resistance and normalised ENTHRU are also plotted for information. The predictions are reasonable and follow the general trend fairly well. In the layer with lower cone resistance below (10-15m depth), there is an overprediction of blowcount. This is due to the relatively limited amount of datapoints with low cone resistance in the training data. Further model refinement could address this issue.

fig, ((ax1, ax2, ax3)) = plt.subplots(1, 3, figsize=(15,6))
# All data
ax1.plot(validation_data_CB["qc [MPa]"], validation_data_CB["z [m]"])
ax2.plot(validation_data_CB["Normalised ENTRHU [-]"], validation_data_CB["z [m]"])
ax3.plot(validation_data_CB["Blowcount [Blows/m]"], validation_data_CB["z [m]"])
# Location-specific data
ax3.scatter(validation_predictions, validation_data_CB.dropna()["z [m]"], color='red')
for ax in (ax1, ax2, ax3):
    ax.grid()
    ax.xaxis.tick_top()
    ax.xaxis.set_label_position('top')
    ax.set_ylim(30, 0)
    ax.set_ylabel(r"Depth below mudline (m)")
ax1.set_xlabel(r"Cone tip resistance (MPa)")
ax1.set_xlim(0, 120)
ax2.set_xlabel(r"Normalised ENTRHU (-)")
ax2.set_xlim(0, 1)
ax3.set_xlabel(r"Blowcount (Blows/m)")
ax3.set_xlim(0, 175)
plt.show()
The process of validation can be automated. The [scikit-learn documentation](https://scikit-learn.org/stable/modules/cross_validation.html) has further details on this.

5. Prediction event submission

While a number of locations are held out during the training process to check if the model generalises well, the model will have to be applied to unseen data and predictions will need to be submitted. The validation data which will be used for the ranking of submissions is provided in the file ```validation_data.csv```.

final_data = pd.read_csv("/kaggle/input/validation_data.csv")
final_data.head()
We can see that the target variable (```Blowcount [Blows/m]```) is not provided and we need to predict it. Similarly to the previous process, we will calculate the shaft resistance to enhance our data.

enhanced_final_data = pd.DataFrame() # Create a dataframe for the final data enhanced with the shaft friction feature
for location in final_data['Location ID'].unique():   # Loop over all unique locations
    locationdata = final_data[final_data['Location ID']==location].copy()   # Select the location-specific data
    # Calculate the shaft resistance feature
    locationdata["Rs [kN]"] = \
        (np.pi * locationdata["Diameter [m]"] * locationdata["z [m]"].diff() * locationdata["qc [MPa]"]).cumsum()
    enhanced_final_data = pd.concat(
        [enhanced_final_data, locationdata])   # Combine data for the different locations in 1 dataframe
A NaN value is generated at the pile top, so we can remove any NaN values using the ```dropna``` method on the DataFrame.

enhanced_final_data.dropna(inplace=True) # Drop the rows containing NaN values and overwrite the dataframe
We can then linearize the features as before:

enhanced_final_data["linearized ENTHRU"] = np.tanh(5 * enhanced_final_data["Normalised ENTRHU [-]"] - 0.5)
enhanced_final_data["linearized Rs"] = np.tanh(0.001 * enhanced_final_data["Rs [kN]"] - 1)
enhanced_final_data["linearized z"] = np.tanh(0.1 * enhanced_final_data["z [m]"] - 0.5) | _____no_output_____ | Apache-2.0 | analysis/isfog-2020-linear-model-demo.ipynb | alexandershires/offshore-geo |
We can extract the linearized features which are required for the predictions:

# Create the matrix with n samples and 3 features
X = enhanced_final_data[linearized_features]
We can make the predictions using our final model:

final_predictions = model_4.predict(X)
We can assign these predictions to the column ```Blowcount [Blows/m]``` in our resulting dataframe.

enhanced_final_data["Blowcount [Blows/m]"] = final_predictions
We can write this dataframe to a csv file. For the submission, we only need the ```ID``` and ```Blowcount [Blows/m]``` columns.

enhanced_final_data[["ID", "Blowcount [Blows/m]"]].to_csv("sample_submission_linearmodel.csv", index=False)
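A short sanity check on the file we just wrote can catch formatting problems before submitting; a minimal sketch:

```python
submission_check = pd.read_csv("sample_submission_linearmodel.csv")
print(submission_check.shape)
print(submission_check.columns.tolist())   # expect ['ID', 'Blowcount [Blows/m]']
submission_check.head()
```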
Table of Contents
1 Name
2 Search
2.1 Load Cached Results
2.2 Build Model From Google Images
3 Analysis
3.1 Gender cross validation
3.2 Face Sizes
3.3 Screen Time Across All Shows
3.4 Appearances on a Single Show
3.5 Other People Who Are On Screen
4 Persist to Cloud
4.1 Save Model to Google Cloud Storage
4.2 Save Labels to DB
4.2.1 Commit the person and labeler
4.2.2 Commit the FaceIdentity labels

from esper.prelude import *
from esper.identity import *
from esper.topics import *
from esper.plot_util import *
from esper import embed_google_images
Name

Please add the person's name and their expected gender below (Male/Female).

name = 'Syed Rizwan Farook'
gender = 'Male'
Search

Load Cached Results

Reads the cached identity model from local disk. Run this if the person has been labelled before and you only wish to regenerate the graphs. Otherwise, if you have never created a model for this person, please see the next section.

assert name != ''
results = FaceIdentityModel.load(name=name)
imshow(tile_images([cv2.resize(x[1][0], (200, 200)) for x in results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(results)
Build Model From Google Images

Run this section if you do not have a cached model and precision curve estimates. This section will grab images using Google Image Search and score each of the faces in the dataset. We will interactively build the precision vs score curve. It is important that the images that you select are accurate. If you make a mistake, rerun the cell below.

assert name != ''
# Grab face images from Google
img_dir = embed_google_images.fetch_images(name)
# If the images returned are not satisfactory, rerun the above with extra params:
# query_extras='' # additional keywords to add to search
# force=True # ignore cached images
face_imgs = load_and_select_faces_from_images(img_dir)
face_embs = embed_google_images.embed_images(face_imgs)
assert(len(face_embs) == len(face_imgs))
reference_imgs = tile_imgs([cv2.resize(x[0], (200, 200)) for x in face_imgs if x], cols=10)
def show_reference_imgs():
    print('User selected reference images for {}.'.format(name))
    imshow(reference_imgs)
    plt.show()
show_reference_imgs()
# Score all of the faces in the dataset (this can take a minute)
face_ids_by_bucket, face_ids_to_score = face_search_by_embeddings(face_embs)
precision_model = PrecisionModel(face_ids_by_bucket)
Now we will validate which of the images in the dataset are of the target identity. __Hover over with mouse and press S to select a face. Press F to expand the frame.__

show_reference_imgs()
print(('Mark all images that ARE NOT {}. Thumbnails are ordered by DESCENDING distance '
'to your selected images. (The first page is more likely to have non "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
lower_widget = precision_model.get_lower_widget()
lower_widget
show_reference_imgs()
print(('Mark all images that ARE {}. Thumbnails are ordered by ASCENDING distance '
'to your selected images. (The first page is more likely to have "{}" images.) '
'There are a total of {} frames. (CLICK THE DISABLE JUPYTER KEYBOARD BUTTON '
'BEFORE PROCEEDING.)').format(
name, name, precision_model.get_lower_count()))
upper_widget = precision_model.get_upper_widget()
upper_widget
Run the following cell after labelling to compute the precision curve. Do not forget to re-enable jupyter shortcuts.

# Compute the precision from the selections
lower_precision = precision_model.compute_precision_for_lower_buckets(lower_widget.selected)
upper_precision = precision_model.compute_precision_for_upper_buckets(upper_widget.selected)
precision_by_bucket = {**lower_precision, **upper_precision}
results = FaceIdentityModel(
name=name,
face_ids_by_bucket=face_ids_by_bucket,
face_ids_to_score=face_ids_to_score,
precision_by_bucket=precision_by_bucket,
model_params={
'images': list(zip(face_embs, face_imgs))
}
)
plot_precision_and_cdf(results)
The next cell persists the model locally.

results.save()
Analysis

Gender cross validation

Situations where the identity model disagrees with the gender classifier may be cause for alarm. We would like to check that instances of the person have the expected gender as a sanity check. This section shows the breakdown of the identity instances and their labels from the gender classifier.

gender_breakdown = compute_gender_breakdown(results)
print('Expected counts by gender:')
for k, v in gender_breakdown.items():
    print(' {} : {}'.format(k, int(v)))
print()
print('Percentage by gender:')
denominator = sum(v for v in gender_breakdown.values())
for k, v in gender_breakdown.items():
    print(' {} : {:0.1f}%'.format(k, 100 * v / denominator))
print()
Situations where the identity detector returns high confidence, but where the gender is not the expected gender, indicate either an error on the part of the identity detector or the gender detector. The following visualization shows randomly sampled images, where the identity detector returns high confidence, grouped by the gender label.

high_probability_threshold = 0.8
show_gender_examples(results, high_probability_threshold)
Face Sizes

Faces shown on-screen vary in size. For a person such as a host, they may be shown in a full body shot or as a face in a box. Faces in the background or those part of side graphics might be smaller than the rest. When calculating screentime for a person, we would like to know whether the results represent the time the person was featured as opposed to merely in the background or as a tiny thumbnail in some graphic. The next cell plots the distribution of face sizes. Some possible anomalies include there only being very small faces or large faces.

plot_histogram_of_face_sizes(results)
The histogram above shows the distribution of face sizes, but not how those sizes occur in the dataset. For instance, one might ask why some faces are so large or whether the small faces are actually errors. The following cell groups example faces, which are of the target identity with high probability, by their sizes in terms of screen area.

high_probability_threshold = 0.8
show_faces_by_size(results, high_probability_threshold, n=10)
Screen Time Across All Shows

One question that we might ask about a person is whether they received a significantly different amount of screentime on different shows. The following section visualizes the amount of screentime by show in total minutes and also in proportion of the show's total time. For a celebrity or political figure such as Donald Trump, we would expect significant screentime on many shows. For a show host such as Wolf Blitzer, we expect that the screentime be high for shows hosted by Wolf Blitzer.

screen_time_by_show = get_screen_time_by_show(results)
plot_screen_time_by_show(name, screen_time_by_show)
We might also wish to validate these findings by comparing them to whether the person's name is mentioned in the subtitles. This might be helpful in determining whether extra or lack of screentime for a person may be due to a show's aesthetic choices. The following plots compare the screen time with the number of caption mentions.

caption_mentions_by_show = get_caption_mentions_by_show([name.upper()])
plot_screen_time_and_other_by_show(name, screen_time_by_show, caption_mentions_by_show,
    'Number of caption mentions', 'Count')
Appearances on a Single Show

For people such as hosts, we would like to examine in greater detail the screen time allotted for a single show. First, fill in a show below.

show_name = 'FOX and Friends'
# Compute the screen time for each video of the show
screen_time_by_video_id = compute_screen_time_by_video(results, show_name)
One question we might ask about a host is how long they are shown on screen in an episode. Likewise, we might also ask in how many episodes the host is not present due to being on vacation or on assignment elsewhere. The following cell plots a histogram of the distribution of the length of the person's appearances in videos of the chosen show.

plot_histogram_of_screen_times_by_video(name, show_name, screen_time_by_video_id)
For a host, we expect screentime over time to be consistent as long as the person remains a host. For figures such as Hillary Clinton, we expect the screentime to track events in the real world such as the lead-up to the 2016 election and then to drop afterwards. The following cell plots a time series of the person's screentime over time. Each dot is a video of the chosen show. Red Xs are videos for which the face detector did not run.

plot_screentime_over_time(name, show_name, screen_time_by_video_id)
We hypothesized that a host is more likely to appear at the beginning of a video and then also appear throughout the video. The following plot visualizes the distribution of shot beginning times for videos of the show.

plot_distribution_of_appearance_times_by_video(results, show_name)
In section 3.3, we see that some shows may have much larger variance in the screen time estimates than others. This may be because a host or frequent guest appears similar to the target identity. Alternatively, the images of the identity may be consistently low quality, leading to lower scores. The next cell plots a histogram of the probabilities for faces in a show.

plot_distribution_of_identity_probabilities(results, show_name)
Other People Who Are On Screen

For some people, we are interested in who they are often portrayed on screen with. For instance, the White House press secretary might routinely be shown with the same group of political pundits. A host of a show might be expected to be on screen with their co-host most of the time. The next cell takes an identity model with high probability faces and displays clusters of faces that are on screen with the target person.

get_other_people_who_are_on_screen(results, k=25, precision_thresh=0.8)
Persist to Cloud

The remaining code in this notebook uploads the built identity model to Google Cloud Storage and adds the FaceIdentity labels to the database.

Save Model to Google Cloud Storage

gcs_model_path = results.save_to_gcs()
To ensure that the model stored to Google Cloud is valid, we load it and print the precision and cdf curve below.

gcs_results = FaceIdentityModel.load_from_gcs(name=name)
imshow(tile_imgs([cv2.resize(x[1][0], (200, 200)) for x in gcs_results.model_params['images']], cols=10))
plt.show()
plot_precision_and_cdf(gcs_results)
Save Labels to DB

If you are satisfied with the model, we can commit the labels to the database.

from django.core.exceptions import ObjectDoesNotExist
def standardize_name(name):
    return name.lower()
person_type = ThingType.objects.get(name='person')
try:
    person = Thing.objects.get(name=standardize_name(name), type=person_type)
    print('Found person:', person.name)
except ObjectDoesNotExist:
    person = Thing(name=standardize_name(name), type=person_type)
    print('Creating person:', person.name)
labeler = Labeler(name='face-identity:{}'.format(person.name), data_path=gcs_model_path)
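As an aside, Django's ORM also provides ```get_or_create```, which collapses the try/except pattern above into a single call; a minimal sketch (note that, unlike the code above, ```get_or_create``` saves a newly created person immediately):

```python
person, created = Thing.objects.get_or_create(name=standardize_name(name), type=person_type)
print('Creating person:' if created else 'Found person:', person.name)
```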
Commit the person and labeler

The labeler and person have been created but not yet saved to the database. If a person was created, please make sure that the name is correct before saving.

person.save()
labeler.save()
Commit the FaceIdentity labels

Now, we are ready to add the labels to the database. We will create a FaceIdentity for each face whose probability exceeds the minimum threshold.

commit_face_identities_to_db(results, person, labeler, min_threshold=0.001)
print('Committed {} labels to the db'.format(FaceIdentity.objects.filter(labeler=labeler).count()))
Implementing the Gradient Descent Algorithm

In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
#Some helper functions for plotting and drawing lines
def plot_points(X, y):
    admitted = X[np.argwhere(y==1)]
    rejected = X[np.argwhere(y==0)]
    plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
    plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')

def display(m, b, color='g--'):
    plt.xlim(-0.05,1.05)
    plt.ylim(-0.05,1.05)
    x = np.arange(-10, 10, 0.1)
    plt.plot(x, m*x+b, color)
Reading and plotting the data

data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
TODO: Implementing the basic functions

Here is your turn to shine. Implement the following formulas, as explained in the text.

- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$

# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
    return 1/(1 + np.exp(-x))

# Output (prediction) formula
def output_formula(features, weights, bias):
    y = np.dot(features, weights) + bias
    y_hat = sigmoid(y)
    return y_hat

# Error (log-loss) formula
def error_formula(y, output):
    error = -y*np.log(output) - (1-y)*np.log(1-output)
    return error

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
    y_hat = output_formula(x, weights, bias)
    learned_error = learnrate * (y-y_hat)
    weights = weights + learned_error * x
    bias = bias + learned_error
    return weights, bias
Training function

This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.

np.random.seed(44)
epochs = 1000
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
    errors = []
    n_records, n_features = features.shape
    last_loss = None
    weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
    bias = 0
    for e in range(epochs):
        del_w = np.zeros(weights.shape)
        for x, y in zip(features, targets):
            output = output_formula(x, weights, bias)
            error = error_formula(y, output)
            weights, bias = update_weights(x, y, weights, bias, learnrate)
        # Printing out the log-loss error on the training set
        out = output_formula(features, weights, bias)
        loss = np.mean(error_formula(targets, out))
        errors.append(loss)
        if e % (epochs / 10) == 0:
            print("\n========== Epoch", e,"==========")
            if last_loss and last_loss < loss:
                print("Train loss: ", loss, "  WARNING - Loss Increasing")
            else:
                print("Train loss: ", loss)
            last_loss = loss
            predictions = out > 0.5
            accuracy = np.mean(predictions == targets)
            print("Accuracy: ", accuracy)
        if graph_lines and e % (epochs / 100) == 0:
            display(-weights[0]/weights[1], -bias/weights[1])

    # Plotting the solution boundary
    plt.title("Solution boundary")
    display(-weights[0]/weights[1], -bias/weights[1], 'black')

    # Plotting the data
    plot_points(features, targets)
    plt.show()

    # Plotting the error
    plt.title("Error Plot")
    plt.xlabel('Number of epochs')
    plt.ylabel('Error')
    plt.plot(errors)
    plt.show()
Time to train the algorithm!

When we run the function, we'll obtain the following:

- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.

train(X, y, epochs, learnrate, True)
========== Epoch 0 ==========
Train loss: 0.7135845195381634
Accuracy: 0.4
========== Epoch 100 ==========
Train loss: 0.3235511002047678
Accuracy: 0.94
========== Epoch 200 ==========
Train loss: 0.2445014537977157
Accuracy: 0.94
========== Epoch 300 ==========
Train loss: 0.21128008952075578
Accuracy: 0.93
========== Epoch 400 ==========
Train loss: 0.19288993789458375
Accuracy: 0.93
========== Epoch 500 ==========
Train loss: 0.18118268826379075
Accuracy: 0.91
========== Epoch 600 ==========
Train loss: 0.17307306304520367
Accuracy: 0.92
========== Epoch 700 ==========
Train loss: 0.16712852408679463
Accuracy: 0.92
========== Epoch 800 ==========
Train loss: 0.16259061436092043
Accuracy: 0.92
========== Epoch 900 ==========
Train loss: 0.15901909628351343
Accuracy: 0.92
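The train function above plots its progress but does not return the fitted parameters. A minimal sketch of classifying new points with the learned boundary, assuming train is modified to end with `return weights, bias` (a hypothetical change, not part of the original lab):

```python
# hypothetical: requires adding `return weights, bias` at the end of train()
weights, bias = train(X, y, epochs, learnrate, False)

def predict(features, weights, bias):
    # sigmoid output thresholded at 0.5 gives the predicted class
    return output_formula(features, weights, bias) > 0.5

print(predict(X[:5], weights, bias))
```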
import

from tqdm.notebook import tqdm
raw_genre_gn_all = pd.read_json('./raw_data/genre_gn_all.json', typ = 'series')
raw_song_meta = pd.read_json('./raw_data/song_meta.json')
raw_test = pd.read_json('./raw_data/test.json')
raw_train = pd.read_json('./raw_data/train.json')
raw_val = pd.read_json('./raw_data/val.json')
์ฅ๋ฅด | genre_gn_all = pd.DataFrame(raw_genre_gn_all, columns = ['genre_name']).reset_index().rename(columns={"index" : "genre_code"})
genre_gn_all.head()
genre_gn_all['genre_name'].unique()
genre_code : ๋๋ถ๋ฅ | genre_code = genre_gn_all[genre_gn_all['genre_code'].str[-2:] == "00"]
genre_code.head()
import requests
from bs4 import BeautifulSoup
dtl_genre_code : ์๋ถ๋ฅ | dtl_genre_code = genre_gn_all[genre_gn_all['genre_code'].str[-2:] != "00"]
dtl_genre_code.columns = ['dtl_genre_code','dtl_genre_name']
dtl_genre_code.head()
genre : ์ฅ๋ฅด ์ ์ฒด df | genre_code['join_code'] = genre_code['genre_code'].str[:4]
dtl_genre_code['join_code'] = dtl_genre_code['dtl_genre_code'].str[:4]
genre = pd.merge(genre_code, dtl_genre_code, how = 'left', on = 'join_code')
genre = genre[['genre_code','genre_name','dtl_genre_code','dtl_genre_name']]
genre
๊ณก- list์์ ๋ค์ด์๋ ๊ฐ๋ค์ ์ ๋ํฌํ ๊ฐ์ด ์๋ | raw_song_meta.head()
# ์ฅ๋ฅด ๋ถ๋ฅ๊ฐ ์ด์ํ๋ฏ
raw_song_meta[raw_song_meta['song_name']=="๊ทธ๋จ์ ๊ทธ์ฌ์"]
raw_song_meta.info()
# ๊ณก ์์ด๋(id)์ ๋๋ถ๋ฅ ์ฅ๋ฅด์ฝ๋ ๋ฆฌ์คํธ(song_gn_gnr_basket) ์ถ์ถ
song_gnr_map = raw_song_meta.loc[:, ['id', 'song_gn_gnr_basket']]
# ๋น list์ None๊ฐ์ ๋ฃ์ด์ค
song_gnr_map['song_gn_gnr_basket'] = song_gnr_map.song_gn_gnr_basket.apply(lambda x: x if len(x) >= 1 else [None])
# unnest song_gn_gnr_basket
song_gnr_map_unnest = np.dstack(
(
np.repeat(song_gnr_map.id.values, list(map(len, song_gnr_map.song_gn_gnr_basket))),
np.concatenate(song_gnr_map.song_gn_gnr_basket.values)
)
)
# unnested ๋ฐ์ดํฐํ๋ ์ ์์ฑ : song_gnr_map
song_gnr_map = pd.DataFrame(data = song_gnr_map_unnest[0], columns = song_gnr_map.columns)
song_gnr_map['id'] = song_gnr_map['id'].astype(str)
song_gnr_map.rename(columns = {'id' : 'song_id', 'song_gn_gnr_basket' : 'gnr_code'}, inplace = True)
# unnest ๊ฐ์ฒด ์ ๊ฑฐ
del song_gnr_map_unnest
song_gnr_map
# 1. ๊ณก ๋ณ ์ฅ๋ฅด ๊ฐ์ count ํ
์ด๋ธ ์์ฑ : song_gnr_count
song_gnr_count = song_gnr_map.groupby('song_id').gnr_code.nunique().reset_index(name = 'mapping_gnr_cnt')
# 2. 1๋ฒ์์ ์์ฑํ ํ
์ด๋ธ์ ๊ฐ์ง๊ณ ๋งคํ๋ ์ฅ๋ฅด ๊ฐ์ ๋ณ ๊ณก ์ count ํ
์ด๋ธ ์์ฑ : gnr_song_count
gnr_song_count = song_gnr_count.groupby('mapping_gnr_cnt').song_id.nunique().reset_index(name = '๋งคํ๋ ๊ณก ์')
# 3. 2๋ฒ ํ
์ด๋ธ์ ๋น์จ ๊ฐ ์ถ๊ฐ
gnr_song_count.loc[:,'๋น์จ(%)'] = round(gnr_song_count['๋งคํ๋ ๊ณก ์']/sum(gnr_song_count['๋งคํ๋ ๊ณก ์'])*100, 2)
gnr_song_count = gnr_song_count.reset_index().rename(columns = {'mapping_gnr_cnt' : '์ฅ๋ฅด ์'})
gnr_song_count[['์ฅ๋ฅด ์', '๋งคํ๋ ๊ณก ์', '๋น์จ(%)']]
raw_song_meta[(raw_song_meta['song_gn_gnr_basket'].apply(len) == 0)]
train & test data

raw_song_meta['song_gn_dtl_gnr']= raw_song_meta['song_gn_dtl_gnr_basket'].apply(','.join)
raw_song_meta['artist_name']= raw_song_meta['artist_name_basket'].apply(','.join)
id_gnr_df = raw_song_meta[['id','song_gn_dtl_gnr']]
id_gnr_df.head()
raw_train
ls2 = []
for i in range(len(raw_train)):
    ls2.append(len(raw_train['songs'][i]))
ls = []
for i in range(len(raw_train)):
    ls.append(len(raw_train['tags'][i]))
a = pd.DataFrame(ls)
a[0].describe()
b = pd.DataFrame(ls2)
b[0].describe()
raw_train.sort_values(by="like_cnt",ascending=False)[:10]
raw_test.sort_values(by='like_cnt',ascending=False)
validation

raw_val.tail()
import missingno as msno
msno.matrix(raw_val)
ls = []
for i in range(len(raw_val['plylst_title'])):
    ls.append(len(raw_val['plylst_title'][i]))
pd.DataFrame(ls).describe(percentiles=[0.805])
songs to gnr

raw_song_meta['song_gn_dtl_gnr']= raw_song_meta['song_gn_dtl_gnr_basket'].apply(','.join)
raw_song_meta['artist_name']= raw_song_meta['artist_name_basket'].apply(','.join)
id_gnr_df = raw_song_meta[['id','song_gn_dtl_gnr_basket']]
id_gnr_df.head()
song_tag = pd.read_csv('./raw_data/song_tags.csv')
song_tag
raw_train['tags']
raw_train['songs'].apply(lambda x : [ for i in x])
song_tag
song_tag.iloc[615137]
song_tag[song_tag['tags'] == "['์๋๋ฎค์ง']"]
raw_train
# ํ๋ ์ด๋ฆฌ์คํธ ์์ด๋(id)์ ๋งคํ๋ ํ๊ทธ(tags) ์ถ์ถ
plylst_tag_map = raw_train[['id', 'tags']]
# unnest tags
plylst_tag_map_unnest = np.dstack(
(
np.repeat(plylst_tag_map.id.values, list(map(len, plylst_tag_map.tags))),
np.concatenate(plylst_tag_map.tags.values)
)
)
# unnested ๋ฐ์ดํฐํ๋ ์ ์์ฑ : plylst_tag_map
plylst_tag_map = pd.DataFrame(data = plylst_tag_map_unnest[0], columns = plylst_tag_map.columns)
plylst_tag_map['id'] = plylst_tag_map['id'].astype(str)
# unnest ๊ฐ์ฒด ์ ๊ฑฐ
del plylst_tag_map_unnest
plylst_tag_map
plylst_tag_map.drop_duplicates('tags')
# train_uniq_song_cnt = plylst_song_map.songs.nunique() # number of unique songs
train_uniq_tag_cnt = plylst_tag_map.tags.nunique() # number of unique tags
# print('Number of songs : %s' %train_uniq_song_cnt)
print('Number of tags : %s' %train_uniq_tag_cnt)
ls = []
for i in raw_train['songs'].iloc[:5]:
    for j in i:
        ls.append(id_gnr_df[id_gnr_df['id'] == j]['song_gn_dtl_gnr_basket'])
pd.DataFrame(ls)
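Filtering id_gnr_df once per song becomes slow for full playlists; as a design note, a minimal sketch of the same lookup through a dictionary built once (not part of the original notebook):

```python
# map each song id to its detailed genre list, then look songs up in O(1)
id_to_gnr = dict(zip(id_gnr_df['id'], id_gnr_df['song_gn_dtl_gnr_basket']))
song_genres = raw_train['songs'].iloc[:5].apply(lambda songs: [id_to_gnr.get(s, []) for s in songs])
song_genres
```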
Developing a Complex Model for Regression Testing

The purpose of this notebook is to establish a complex model from which we can generate training and test data to practice our regression skills. We want some number of inputs, which can range from 0 to 10. Some are more important than others. Some will have dependence. Let's start with importing the necessary modules.

%matplotlib inline
import numpy as np
import scipy.stats as st
import matplotlib.pyplot as plt
Our first parameter will be alpha. It varies from 0 to 10. It will be the first order parameter for the model. A Weibull distribution is added for some 'spice'.

xa = np.linspace(0,10,100)
w1 = st.weibull_min(1.79, loc=6.0, scale=2.0)
def alpha(x):
    return(0.1 * x - 0.5 * w1.cdf(x))
f = plt.plot(xa,alpha(xa)) | _____no_output_____ | MIT | ComplexModel.ipynb | sawyerap/PythonDataViz |
Now we'll introduce beta, another parameter. | xb = np.linspace(0,10,100)
n1 = st.norm(loc=5.0, scale=2.0)
def beta(y):
    return(1.5 * n1.pdf(y))
f = plt.plot(xb,beta(xb))
xx, yy = np.meshgrid(xa,xb)
z = alpha(xx) + beta(yy)
fig, ax = plt.subplots()
CS = ax.contour(xa, xb, z)
l = ax.clabel(CS, inline=1, fontsize=10)
plt.plot(xa,alpha(xa)+beta(9))
plt.plot(xa,alpha(xa)+beta(5)) | _____no_output_____ | MIT | ComplexModel.ipynb | sawyerap/PythonDataViz |
Now to add a third variable, gamma. | xg = np.linspace(0,10,100)
def gamma(z):
    return((np.exp(0.036*z) - 1.0) * np.cos(2*z/np.pi))
plt.plot(xg, gamma(xg)) | _____no_output_____ | MIT | ComplexModel.ipynb | sawyerap/PythonDataViz |
The Response
Now we have our function. | def response(a,b,g):
    out = alpha(a) + beta(b) + gamma(g)
    return(out)
plt.plot(xa,response(xa,8,0))
plt.plot(xa,response(xa,8,5)) | _____no_output_____ | MIT | ComplexModel.ipynb | sawyerap/PythonDataViz |
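With the response function in place, one way to actually generate the training and test data mentioned at the top of this notebook is to sample the three inputs uniformly and add a little Gaussian noise. This is only a sketch: the sample sizes, the seed and the noise level below are arbitrary choices, not part of the original model.
```# Sketch: random (alpha, beta, gamma) samples with a noisy response
rng = np.random.default_rng(42)            # assumed seed, for reproducibility

n_train, n_test = 500, 100                 # arbitrary sample counts
X_train = rng.uniform(0, 10, size=(n_train, 3))
X_test = rng.uniform(0, 10, size=(n_test, 3))

noise_sd = 0.05                            # arbitrary noise standard deviation
y_train = response(X_train[:, 0], X_train[:, 1], X_train[:, 2]) + rng.normal(0, noise_sd, n_train)
y_test = response(X_test[:, 0], X_test[:, 1], X_test[:, 2]) + rng.normal(0, noise_sd, n_test)```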
Programming with Python Episode 1b - Introduction to Plotting
Teaching: 60 min, Exercises: 30 min
Objectives
- Perform operations on arrays of data.
- Plot simple graphs from data.
Array operations
Often, we want to do more than add, subtract, multiply, and divide array elements. NumPy knows how to do more complex operations, too. If we want to find the average inflammation for all patients on all days, for example, we can ask NumPy to compute data's mean value:```print(numpy.mean(data))``` | import numpy
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(numpy.mean(data))
print(data) | 6.14875
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
`mean()` is a function that takes an array as an argument. However, not all functions have input. Generally, a function uses inputs to produce outputs, but some functions produce outputs without needing any input. For example, checking the current time doesn't require any input.```import time
print(time.ctime())``` | import time
print(time.ctime()) | Tue Dec 3 02:26:33 2019
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
For functions that don't take in any arguments, we still need parentheses `()` to tell Python to go and do something for us. NumPy has lots of useful functions that take an array as input. Let's use three of those functions to get some descriptive values about the dataset. We'll also use *multiple assignment*, a convenient Python feature that will enable us to do this all in one line.```maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data)``` | maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data) | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Here we've assigned the return value from `numpy.max(data)` to the variable `maxval`, the return value from `numpy.min(data)` to `minval`, and so on. Let's have a look at the results:```print('maximum inflammation:', maxval)
print('minimum inflammation:', minval)
print('standard deviation:', stdval)``` | print('maximum inflammation:', maxval)
print('minimum inflammation:', minval)
print('standard deviation:', stdval) | maximum inflammation: 20.0
minimum inflammation: 0.0
standard deviation: 4.613833197118566
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Mystery Functions in IPython
How did we know what functions NumPy has and how to use them? If you are working in IPython or in a Jupyter Notebook (which we are), there is an easy way to find out. If you type the name of something followed by a dot `.`, then you can use `Tab` completion (e.g. type `numpy.` and then press `tab`) to see a list of all functions and attributes that you can use. (Note: press Tab and wait a few seconds for the completion list to appear.) | # numpy.  <- type `numpy.` here and press Tab to list the available functions and attributes | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
After selecting one, you can also add a question mark `?` (e.g. `numpy.cumprod?`), and IPython will return an explanation of the method! This is the same as running `help(numpy.cumprod)`. | #help(numpy.cumprod) | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
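If Tab completion isn't available (for example when running a plain script), a rough equivalent, not part of the original lesson, is to list an object's attributes programmatically:```# Sketch: list the NumPy functions whose names contain 'mean'
print([name for name in dir(numpy) if 'mean' in name])```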
When analysing data, though, we often want to look at variations in statistical values, such as the maximum inflammation per patient or the average inflammation per day. One way to do this is to create a new temporary array of the data we want, then ask it to do the calculation:```patient_0 = data[0, :]  # Comment: 0 on the first axis (rows), everything on the second (columns)
print('maximum inflammation for patient 0:', numpy.max(patient_0))``` | patient_0 = data[0, :]
print('maximum inflammation for patient 0:', numpy.max(patient_0)) | maximum inflammation for patient 0: 18.0
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Everything in a line of code following the `#` symbol is a comment that is ignored by Python. Comments allow programmers to leave explanatory notes for other programmers or their future selves. We don't actually need to store the row in a variable of its own. Instead, we can combine the selection and the function call:```print('maximum inflammation for patient 2:', numpy.max(data[2, :]))``` | print('maximum inflammation for patient 2:', numpy.max(data[2, :])) | maximum inflammation for patient 2: 19.0
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Operations Across Axes
What if we need the maximum inflammation for each patient over all days or the average for each day? In other words, we want to perform the operation across a different axis. To support this functionality, most array functions allow us to specify the axis we want to work on. If we ask for the average across axis 0 (rows in our 2D example), we get:```print(numpy.mean(data, axis=0))``` | print(numpy.mean(data, axis=0))
print(numpy.mean(data, axis=0).shape) | (40,)
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
As a quick check, we can ask this array what its shape is:```print(numpy.mean(data, axis=0).shape)``` The result (40,) tells us we have an N×1 vector, so this is the average inflammation per day across all patients (one value for each of the 40 days). If we average across axis 1 (columns in our example), we use:```print(numpy.mean(data, axis=1))``` | print(numpy.mean(data, axis=1).shape)
# each patient - mean | (60,)
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
which is the average inflammation per patient across all days. And if you are now confused, here's a simpler example:```tiny = [[1, 2, 3, 4],
        [10, 20, 30, 40],
        [100, 200, 300, 400]]
print(tiny)
print('Sum the entire matrix: ', numpy.sum(tiny))``` | tiny = [[1, 2, 3, 4],
[10, 20, 30, 40],
[100, 200, 300, 400]]
print(tiny)
print('Sum the entire matrix: ', numpy.sum(tiny)) | [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
Sum the entire matrix: 1110
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Now let's add the rows (first axis, i.e. zeroth)```print('Sum the columns (i.e. add the rows): ', numpy.sum(tiny, axis=0))``` | print('Sum the columns (i.e. add the rows): ', numpy.sum(tiny, axis=0))
# axis=0 means 'sum of the columns'
# 1+10+100, 2+20+200, 3+30+300, 4+40+400 | Sum the columns (i.e. add the rows): [111 222 333 444]
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
and now on the other dimension (axis=1, i.e. the second dimension)```print('Sum the rows (i.e. add the columns): ', numpy.sum(tiny, axis=1))``` | print('Sum the rows (i.e. add the columns): ', numpy.sum(tiny, axis=1))
# 1+2+3+4, 10+20+30+40, 100+200+300+400 | Sum the rows (i.e. add the columns): [ 10 100 1000]
| MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Here's a diagram to demonstrate how array axes work in NumPy:
- `numpy.sum(data)` --> Sum all elements in data
- `numpy.sum(data, axis=0)` --> Sum vertically (down, axis=0)
- `numpy.sum(data, axis=1)` --> Sum horizontally (across, axis=1)
Visualising data
The mathematician Richard Hamming once said, "The purpose of computing is insight, not numbers," and the best way to develop insight is often to visualise data. Visualisation deserves an entire workshop of its own, but we can explore a few features of Python's `matplotlib` library here. While there is no official plotting library, `matplotlib` is the de facto standard. First, we will import the `pyplot` module from `matplotlib` and use two of its functions to create and display a heat map of our data:```import matplotlib.pyplot
plot = matplotlib.pyplot.imshow(data)``` | import matplotlib.pyplot
plot = matplotlib.pyplot.imshow(data)
# heat map | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
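A small extension (a sketch, not in the original lesson): adding a colour bar makes it possible to read the inflammation values off the heat map.```# Sketch: the same heat map with a colour bar showing the inflammation scale
plot = matplotlib.pyplot.imshow(data)
matplotlib.pyplot.colorbar(plot)
matplotlib.pyplot.show()```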
Heatmap of the Data
Blue pixels in this heat map represent low values, while yellow pixels represent high values. As we can see, inflammation rises and falls over a 40-day period.
Some IPython Magic
If you're using a Jupyter notebook, you'll need to execute the following command in order for your matplotlib images to appear in the notebook when show() is called:```%matplotlib inline``` | %matplotlib inline
# magic function only in the notebook | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
The `%` indicates an IPython magic function - a function that is only valid within the notebook environment. Note that you only have to execute this function once per notebook. Let's take a look at the average inflammation over time:```ave_inflammation = numpy.mean(data, axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)``` | ave_inflammation = numpy.mean(data, axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation) | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
Here, we have put the average per day across all patients in the variable `ave_inflammation`, then asked `matplotlib.pyplot` to create and display a line graph of those values. The result is a roughly linear rise and fall, which is suspicious: we might instead expect a sharper rise and slower fall. Let's have a look at two other statistics, the maximum inflammation of all the patients each day:```max_plot = matplotlib.pyplot.plot(numpy.max(data, axis=0))``` | max_plot = matplotlib.pyplot.plot(numpy.max(data, axis=0)) | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
... and the minimum inflammation across all patients each day ...```min_plot = matplotlib.pyplot.plot(numpy.min(data, axis=0))
matplotlib.pyplot.show()``` | min_plot = matplotlib.pyplot.plot(numpy.min(data, axis=0))
matplotlib.pyplot.show() | _____no_output_____ | MIT | lessons/python/ep1b-plotting-intro.ipynb | emichan14/2019-12-03-intro-to-python-workshop |
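As a quick recap (a sketch, not part of the original lesson), the three daily statistics can also be overlaid on a single figure with a legend:```# Sketch: daily mean, maximum and minimum inflammation on one plot
matplotlib.pyplot.plot(numpy.mean(data, axis=0), label='mean')
matplotlib.pyplot.plot(numpy.max(data, axis=0), label='max')
matplotlib.pyplot.plot(numpy.min(data, axis=0), label='min')
matplotlib.pyplot.legend()
matplotlib.pyplot.xlabel('day')
matplotlib.pyplot.ylabel('inflammation')
matplotlib.pyplot.show()```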