markdown | code | output | license | path | repo_name
---|---|---|---|---|---
First, drop the price column from the features to create x_data: | x_data=df.drop('price',axis=1) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
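The target vector y_data is used by train_test_split in the next cell, but the cell that creates it is not shown in this excerpt. A minimal sketch, assuming the target is the "price" column that was just dropped from the features:

```python
# Assumption: the target is the 'price' column of the original dataframe df
y_data = df['price']
```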
Now we randomly split our data into training and testing data using the function train_test_split. | from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.10, random_state=1)
print("number of test samples :", x_test.shape[0])
print("number of training samples:",x_train.shape[0])
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
The test_size parameter sets the proportion of the data that goes into the testing set; in the above, the testing set is 10% of the total dataset. Question 1): Use the function "train_test_split" to split up the data set such that 40% of the data samples are used for testing, and set the parameter "random_state" equal to zero. The output of the function should be: "x_train1", "x_test1", "y_train1" and "y_test1". | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size=0.4, random_state=0)
print("number of test samples :", x_test1.shape[0])
print("number of training samples:", x_train1.shape[0])
```

Let's import LinearRegression from the module linear_model. | from sklearn.linear_model import LinearRegression | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We create a Linear Regression object: | lre=LinearRegression() | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
we fit the model using the feature horsepower | lre.fit(x_train[['horsepower']], y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's Calculate the R^2 on the test data: | lre.score(x_test[['horsepower']], y_test) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
we can see the R^2 is much smaller using the test data. | lre.score(x_train[['horsepower']], y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Question 2): Find the R^2 on the test data using 40% of the data for testing. | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
x_train1, x_test1, y_train1, y_test1 = train_test_split(x_data, y_data, test_size=0.4, random_state=0)
lre.fit(x_train1[['horsepower']], y_train1)
lre.score(x_test1[['horsepower']], y_test1)
```

Sometimes you do not have sufficient testing data; as a result, you may want to perform cross-validation. Let's go over several methods that you can use for cross-validation. Cross-validation Score: Let's import the function cross_val_score from the module model_selection. | from sklearn.model_selection import cross_val_score | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We input the object, the feature in this case ' horsepower', the target data (y_data). The parameter 'cv' determines the number of folds; in this case 4. | Rcross = cross_val_score(lre, x_data[['horsepower']], y_data, cv=4) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
The default scoring is R^2; each element in the array has the average R^2 value in the fold: | Rcross | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We can calculate the average and standard deviation of our estimate: | print("The mean of the folds are", Rcross.mean(), "and the standard deviation is" , Rcross.std()) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We can use negative squared error as a score by setting the parameter 'scoring' metric to 'neg_mean_squared_error'. | -1 * cross_val_score(lre,x_data[['horsepower']], y_data,cv=4,scoring='neg_mean_squared_error') | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Question 3): Calculate the average R^2 using two folds, find the average R^2 for the second fold utilizing the horsepower as a feature : | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
Rc = cross_val_score(lre, x_data[['horsepower']], y_data, cv=2)
Rc.mean()
```

You can also use the function 'cross_val_predict' to predict the output. The function splits the data into the specified number of folds, using one fold for testing and the other folds for training. First import the function: | from sklearn.model_selection import cross_val_predict | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We input the object, the feature in this case 'horsepower' , the target data y_data. The parameter 'cv' determines the number of folds; in this case 4. We can produce an output: | yhat = cross_val_predict(lre,x_data[['horsepower']], y_data,cv=4)
yhat[0:5] | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Part 2: Overfitting, Underfitting and Model Selection. It turns out that the test data, sometimes referred to as the out-of-sample data, is a much better measure of how well your model performs in the real world. One reason for this is overfitting; let's go over some examples. These differences are more apparent in Multiple Linear Regression and Polynomial Regression, so we will explore overfitting in that context. Let's create a Multiple Linear Regression object and train the model using 'horsepower', 'curb-weight', 'engine-size' and 'highway-mpg' as features. | lr = LinearRegression()
lr.fit(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Prediction using training data: | yhat_train = lr.predict(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_train[0:5] | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Prediction using test data: | yhat_test = lr.predict(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
yhat_test[0:5] | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's perform some model evaluation using our training and testing data separately. First we import the matplotlib and seaborn libraries for plotting. | import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's examine the distribution of the predicted values of the training data. | Title = 'Distribution Plot of Predicted Value Using Training Data vs Training Data Distribution'
DistributionPlot(y_train, yhat_train, "Actual Values (Train)", "Predicted Values (Train)", Title) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Figure 1: Plot of predicted values using the training data compared to the training data. So far the model seems to be doing well in learning from the training dataset. But what happens when the model encounters new data from the testing dataset? When the model generates new values from the test data, we see the distribution of the predicted values is much different from the actual target values. | Title='Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test,yhat_test,"Actual Values (Test)","Predicted Values (Test)",Title) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Figure 2: Plot of predicted values using the test data compared to the test data. Comparing Figure 1 and Figure 2, it is evident that the predicted distribution fits the training data in Figure 1 much better than it fits the test data in Figure 2. The difference in Figure 2 is most apparent in the range from roughly 5,000 to 15,000, where the distribution shapes differ the most. Let's see if polynomial regression also exhibits a drop in prediction accuracy when analysing the test dataset. | from sklearn.preprocessing import PolynomialFeatures | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Overfitting: Overfitting occurs when the model fits the noise rather than the underlying process. Therefore, when evaluated on the test set, the model does not perform well, since it has modelled noise instead of the underlying process that generated the relationship. Let's create a degree-5 polynomial model, using 55 percent of the data for training and the rest for testing: | x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.45, random_state=0) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We will perform a degree-5 polynomial transformation on the feature 'horsepower'. | pr = PolynomialFeatures(degree=5)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
pr | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Now let's create a linear regression model "poly" and train it. | poly = LinearRegression()
poly.fit(x_train_pr, y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We can see the output of our model using the method "predict." then assign the values to "yhat". | yhat = poly.predict(x_test_pr)
yhat[0:5] | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's take the first five predicted values and compare it to the actual targets. | print("Predicted values:", yhat[0:4])
print("True values:", y_test[0:4].values) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We will use the function "PollyPlot" that we defined at the beginning of the lab to display the training data, testing data, and the predicted function. | PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train, y_test, poly,pr) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Figure 4: A polynomial regression model; red dots represent training data, green dots represent test data, and the blue line represents the model prediction. We see that the estimated function appears to track the data, but around 200 horsepower the function begins to diverge from the data points. R^2 of the training data: | poly.score(x_train_pr, y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
R^2 of the test data: | poly.score(x_test_pr, y_test) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We see the R^2 for the training data is about 0.5567, while the R^2 on the test data is about -29.87. The lower the R^2, the worse the model; a negative R^2 is a sign of overfitting (see the short note after the next code cell). Let's see how the R^2 on the test data changes for polynomials of different order and plot the results: | Rsqu_test = []
order = [1, 2, 3, 4]
for n in order:
pr = PolynomialFeatures(degree=n)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
lr.fit(x_train_pr, y_train)
Rsqu_test.append(lr.score(x_test_pr, y_test))
plt.plot(order, Rsqu_test)
plt.xlabel('order')
plt.ylabel('R^2')
plt.title('R^2 Using Test Data')
plt.text(3, 0.75, 'Maximum R^2 ') | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
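A short note on the negative R^2 flagged above: since R^2 = 1 - SS_res/SS_tot, any model whose squared prediction error exceeds that of simply predicting the mean of the test targets ends up with an R^2 below zero. A minimal illustration with made-up numbers (not the car data):

```python
# Hypothetical illustration: R^2 < 0 means the model is worse than predicting the mean.
import numpy as np

y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([40.0, 0.0, 80.0])   # deliberately poor predictions
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print(1 - ss_res / ss_tot)             # prints -18.0, i.e. far worse than the mean
```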
We see the R^2 gradually increases until an order three polynomial is used. Then the R^2 dramatically decreases at four. The following function will be used in the next section; please run the cell. | def f(order, test_data):
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data, test_size=test_data, random_state=0)
pr = PolynomialFeatures(degree=order)
x_train_pr = pr.fit_transform(x_train[['horsepower']])
x_test_pr = pr.fit_transform(x_test[['horsepower']])
poly = LinearRegression()
poly.fit(x_train_pr,y_train)
PollyPlot(x_train[['horsepower']], x_test[['horsepower']], y_train,y_test, poly, pr) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
The following interface allows you to experiment with different polynomial orders and different amounts of data. | interact(f, order=(0, 6, 1), test_data=(0.05, 0.95, 0.05)) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Question 4a):We can perform polynomial transformations with more than one feature. Create a "PolynomialFeatures" object "pr1" of degree two? | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
pr1 = PolynomialFeatures(degree=2)
```

Question 4b): Transform the training and testing samples for the features 'horsepower', 'curb-weight', 'engine-size' and 'highway-mpg'. Hint: use the method "fit_transform" ? | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
x_train_pr1 = pr1.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
x_test_pr1 = pr1.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']])
```

Question 4c): How many dimensions does the new feature have? Hint: use the attribute "shape" | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
x_train_pr1.shape  # there are now 15 features
```

Question 4d): Create a linear regression model "poly1" and train the object using the method "fit" on the polynomial features. | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
poly1 = LinearRegression().fit(x_train_pr1, y_train)
```

Question 4e): Use the method "predict" to predict an output on the polynomial features, then use the function "DistributionPlot" to display the distribution of the predicted output vs the test data? | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
yhat_test1 = poly1.predict(x_test_pr1)
Title = 'Distribution Plot of Predicted Value Using Test Data vs Data Distribution of Test Data'
DistributionPlot(y_test, yhat_test1, "Actual Values (Test)", "Predicted Values (Test)", Title)
```

Question 4f): Using the distribution plot above, explain in words the two regions where the predicted prices are less accurate than the actual prices. | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
# The predicted values are higher than the actual values for cars priced around $10,000;
# conversely, the predicted prices are lower than the actual prices in the $30,000 to
# $40,000 range. As such, the model is not as accurate in these ranges.
```

Part 3: Ridge Regression. In this section, we will review Ridge Regression and see how the parameter alpha changes the model. Just a note: here our test data will be used as validation data. Let's perform a degree-two polynomial transformation on our data. | pr=PolynomialFeatures(degree=2)
x_train_pr=pr.fit_transform(x_train[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg','normalized-losses','symboling']])
x_test_pr=pr.fit_transform(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg','normalized-losses','symboling']]) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's import Ridge from the module linear models. | from sklearn.linear_model import Ridge | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's create a Ridge regression object, setting the regularization parameter to 0.1 | RigeModel=Ridge(alpha=0.1) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Like regular regression, you can fit the model using the method fit. | RigeModel.fit(x_train_pr, y_train) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Similarly, you can obtain a prediction: | yhat = RigeModel.predict(x_test_pr) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Let's compare the first five predicted samples to our test set | print('predicted:', yhat[0:4])
print('test set :', y_test[0:4].values) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We select the value of Alpha that minimizes the test error, for example, we can use a for loop. | Rsqu_test = []
Rsqu_train = []
dummy1 = []
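# Note: the loop below assumes numpy was imported earlier as np (import numpy as np);
# Alpha takes the values 0, 10, 20, ..., 9990, and dummy1 is not used afterwards.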
Alpha = 10 * np.array(range(0,1000))
for alpha in Alpha:
RigeModel = Ridge(alpha=alpha)
RigeModel.fit(x_train_pr, y_train)
Rsqu_test.append(RigeModel.score(x_test_pr, y_test))
Rsqu_train.append(RigeModel.score(x_train_pr, y_train)) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We can plot out the value of R^2 for different Alphas | width = 12
height = 10
plt.figure(figsize=(width, height))
plt.plot(Alpha,Rsqu_test, label='validation data ')
plt.plot(Alpha,Rsqu_train, 'r', label='training Data ')
plt.xlabel('alpha')
plt.ylabel('R^2')
plt.legend() | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
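As an optional aside (not part of the original lab), the alpha that maximizes the validation R^2 can be read off the arrays built in the loop above; a minimal sketch assuming Alpha, Rsqu_test and Rsqu_train still exist:

```python
# Pick the alpha with the highest R^2 on the validation (test) data.
import numpy as np

best_idx = int(np.argmax(Rsqu_test))
print("best alpha:", Alpha[best_idx])
print("validation R^2 at best alpha:", Rsqu_test[best_idx])
print("training R^2 at best alpha:", Rsqu_train[best_idx])
```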
**Figure 6**: The blue line represents the R^2 of the validation data, and the red line represents the R^2 of the training data; the x-axis represents the different values of alpha. The red line corresponds to the case where the model is evaluated on the same data it was fit on: as alpha increases, the training R^2 decreases, so the model performs progressively worse on the training data. The blue line represents the R^2 on the validation data: as alpha increases, the validation R^2 rises and then converges to a point. Question 5): Perform Ridge regression and calculate the R^2 using the polynomial features; use the training data to train the model and the test data to test the model. The parameter alpha should be set to 10. | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Click here for the solution

```python
RigeModel = Ridge(alpha=10)
RigeModel.fit(x_train_pr, y_train)
RigeModel.score(x_test_pr, y_test)
```

Part 4: Grid Search. The term alpha is a hyperparameter; sklearn has the class GridSearchCV to make the process of finding the best hyperparameter simpler. Let's import GridSearchCV from the module model_selection. | from sklearn.model_selection import GridSearchCV | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We create a dictionary of parameter values: | parameters1= [{'alpha': [0.001,0.1,1, 10, 100, 1000, 10000, 100000, 100000]}]
parameters1 | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Create a Ridge regression object: | RR=Ridge()
RR | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Create a ridge grid search object | Grid1 = GridSearchCV(RR, parameters1,cv=4) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Fit the model | Grid1.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
The object finds the best parameter values on the validation data. We can obtain the estimator with the best parameters and assign it to the variable BestRR as follows: | BestRR=Grid1.best_estimator_
BestRR | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
We now test our model on the test data | BestRR.score(x_test[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_test) | _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
Question 6): Perform a grid search for the alpha parameter and the normalization parameter, then find the best values of the parameters | # Write your code below and press Shift+Enter to execute
| _____no_output_____ | MIT | DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb | alekhaya99/IBM-CLOUD-SQL-AND-PYTHON |
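No solution cell for Question 6 appears in this excerpt. One possible sketch: note that the 'normalize' keyword was only accepted by Ridge in older scikit-learn releases (it has since been removed), so treat this as version-dependent and standardize the features yourself on newer versions.

```python
# Sketch only: 'normalize' works on older scikit-learn versions; adjust for newer releases.
parameters2 = [{'alpha': [0.001, 0.1, 1, 10, 100, 1000, 10000, 100000],
                'normalize': [True, False]}]
Grid2 = GridSearchCV(Ridge(), parameters2, cv=4)
Grid2.fit(x_data[['horsepower', 'curb-weight', 'engine-size', 'highway-mpg']], y_data)
print(Grid2.best_estimator_)
```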
DESU IA4 HEALTH Info-PROF, Introduction to Python programming for health data. Session 2: Introduction to PANDAS. Learning objectives: 1. Learning the different data types in pandas: DataFrame and Series. 2. Importing and exporting data into a data frame. 3. Subsetting data frames. 4. Doing transformations with dataframes. What is Pandas? Pandas is a Python library used for working with data sets. It has functions for analyzing, cleaning, exploring, and manipulating data. Pandas on-line documentation: https://pandas.pydata.org/docs/reference/index.html | #Importing Pandas and verifying the version
import pandas as pd # as allows to create an alias
import numpy as np
print(pd.__version__) #allow to verify the pandas function | 1.1.5
| Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Data types on Pandas :1. **Series :** It is a one-dimensional array holding data of any type.2. **Dataframes :** Multidimensional data tables holding data of any type. We can think that the series are like the columns of a dataframe whereas the whole table is the dataframe. | # Example series with labels
a = [1, 7, 2]
myvar = pd.Series(a, index = ["x", "y", "z"])
print(myvar) | x 1
y 7
z 2
dtype: int64
| Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Dataframes: dataframes are multidimensional matrices that can store data of different types. |
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45],
"category" : ['a','b','c']
}
df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
print(df)
students = [ ('jack', 34, 'Sydeny') ,
('Riti', 30, 'Delhi' ) ,
('Aadi', 16, 'New York') ]
# Create a DataFrame object
dfObj = pd.DataFrame(students, columns = ['Name' , 'Age', 'City'], index=['a', 'b', 'c']) | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
**Exercise :** Create a dataframe that stores in one row the person ID, height, weight, sex and birthdate. Add at least three examples [DataFrame attributes](https://pandas.pydata.org/docs/reference/frame.html) Exercise : For the dataframe previously created, go to dataframe attributes and show the following information : 1. Number of elements2. Name of the columns3. Name of the rows4. Number of rows and columns5. Show the first rows of the dataframe Acces the elements of a dataframe :Access by columns: | df['calories']
| _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
DataFrame.loc | Select Column & Rows by NameDataFrame provides indexing label loc for selecting columns and rows by names dataFrame.loc[ROWS RANGE , COLUMNS RANGE] | df.loc['day1',:]
df.loc[:,'calories'] | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
DataFrame.iloc | Select Column Indexes & Rows Index PositionsDataFrame provides indexing label iloc for accessing the column and rows by index positions i.e.*dataFrame.iloc[ROWS INDEX RANGE , COLUMNS INDEX RANGE]*It selects the columns and rows from DataFrame by index position specified in range. If ‘:’ is given in rows or column Index Range then all entries will be included for corresponding row or column. | df.iloc[:,[0,2]] | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Variable conversion : | df_petit = pd.DataFrame({ 'Country': ['France','Spain','Germany', 'Spain','Germany', 'France', 'Italy'], 'Age': [50,60,40,20,40,30, 20] })
df_petit
| _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Label encoding : Label Encoding refers to converting the labels into a numeric form so as to convert them into the machine-readable form. Machine learning algorithms can then decide in a better way how those labels must be operated. It is an important pre-processing step for the structured dataset in supervised learning. | df_petit['Country_cat'] = df_petit['Country'].astype('category').cat.codes
df_petit | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
One hot encoding | help(pd.get_dummies)
df_petit = pd.get_dummies(df_petit,prefix=['Country'], columns = ['Country'], drop_first=True)
df_petit.head() | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
**Exercise :** Create a dataframe with 3 columns with the characteristics : ID, sex (M or F), frailty degree (FB, M, F). Convert the categorical variables using label encoding and one-hot-encoding. Dealing with dates https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html | #Library to deeal with dates
import datetime
dti = pd.to_datetime(
["1/1/2018", np.datetime64("2018-01-01"), datetime.datetime(2018, 1, 1)]
)
dti
df = pd.DataFrame({'date': ['3/10/2000', '3/11/2000', '3/12/2000'],
'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'])
df | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Cutomize the date format | df = pd.DataFrame({'date': ['2016-6-10 20:30:0',
'2016-7-1 19:45:30',
'2013-10-12 4:5:1'],
'value': [2, 3, 4]})
df['date'] = pd.to_datetime(df['date'], format="%Y-%d-%m %H:%M:%S")
df | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
**Exercise :** Check the Pandas documentation and create a dataframe with a columns with dates and try different datetypes. Access date elements dt. accessor :The dt. accessor is an object that allows to access the different data and time elements in a datatime object.https://pandas.pydata.org/docs/reference/api/pandas.Series.dt.html | df['date_only'] = df['date'].dt.date
df['time_only'] = df['date'].dt.time
df['hour_only'] = df['date'].dt.hour
df | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Importing datasetshttps://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html | df = pd.read_csv("https://raw.githubusercontent.com/rakelup/EPICLIN2021/master/diabetes.csv", sep=",",error_bad_lines=False)
df.head() | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Data overview | # Data overview
print ('Rows : ', df.shape[0])
print ('Coloumns : ', df.shape[1])
print ('\nFeatures : \n', df.columns.tolist())
print ('\nNumber of Missing values: ', df.isnull().sum().values.sum())
print ('\nNumber of unique values : \n', df.nunique())
df.describe()
df.columns | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Cleaning data in a dataframe: 1. Dealing with missing values2. Data in wrong format3. Wrong data4. Duplicates Dealing with missing values : Handling missing values is an essential part of data cleaning and preparation process since almost all data in real life comes with some missing values. Check for missing values | df.info()
df.isnull().sum() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 768 entries, 0 to 767
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pregnancies 768 non-null int64
1 Glucose 768 non-null int64
2 BloodPressure 768 non-null int64
3 SkinThickness 768 non-null int64
4 Insulin 768 non-null int64
5 BMI 768 non-null float64
6 DiabetesPedigreeFunction 768 non-null float64
7 Age 768 non-null int64
8 Outcome 768 non-null int64
dtypes: float64(2), int64(7)
memory usage: 54.1 KB
| Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Let's create a dataframe with missing values. | df2 = df
df2.Glucose.replace(99, np.nan, inplace=True)
df2.BloodPressure.replace(74, np.nan, inplace=True)
print ('\nNumber of Missing values: ', df2.isnull().sum())
print ('\nTotal number of missing values : ', df2.isnull().sum().values.sum())
|
Valeurs manquantes: Pregnancies 0
Glucose 17
BloodPressure 52
SkinThickness 0
Insulin 0
BMI 0
DiabetesPedigreeFunction 0
Age 0
Outcome 0
dtype: int64
Valeurs manquantes total: 69
| Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
First strategy : Removing the whole row that contains a missing value | # Removing the whole row
df3 = df2.dropna()
print ('\nMissing values: ', df3.isnull().sum())
print ('\nTotal missing values: ', df3.isnull().sum().values.sum())
##Replace the missing values
df2.Glucose.replace(np.nan, df['Glucose'].median(), inplace=True)
df2.BloodPressure.replace(np.nan, df['BloodPressure'].median(), inplace=True)
| _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Sorting the data: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.sort_values.html | # Sort the data
b = df.sort_values('Pregnancies')
b.head() | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
**Exercise :** Sort the data in descending order according to the insulin level and store the result in a new dataframe. How would you store the result in the same dataframe? Subsetting the data | df[df['BloodPressure'] >70].count() # Filter by value
df_court = df[['Insulin','Glucose']]
df_court.drop('Insulin', inplace= True, axis = 1)
df_court.head() | /usr/local/lib/python3.7/dist-packages/pandas/core/frame.py:4174: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
errors=errors,
| Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
Statistics applied to dataframesDataFrame.aggregate(func=None, axis=0, *args, **kwargs)Aggregate using one or more operations over the specified axis.https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.aggregate.html | _____no_output_____ | Apache-2.0 | Introduction_to_Pandas.ipynb | Jokos-git/Covid19VaccineAesiDiagnostics |
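The DataFrame.aggregate description above has no accompanying code cell; a minimal sketch applied to the diabetes dataframe df loaded earlier (column names taken from the dataset overview above):

```python
# Aggregate several statistics over two numeric columns of the diabetes dataframe.
df[['Glucose', 'BloodPressure']].aggregate(['min', 'max', 'mean'])
```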
|
We're going to create a convolutional neural network that trains to 100% accuracy on the images downloaded below, and which cancels training upon hitting a training accuracy of >.999 | DESIRED_ACCURACY = 0.999
class StopTrainingCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
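        # Note: depending on the TensorFlow/Keras version (an assumption about the runtime),
        # the metrics key in `logs` may be 'accuracy' rather than 'acc'; if logs.get('acc')
        # returns None here, switch the key accordingly.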
if logs.get('acc') >= DESIRED_ACCURACY:
print(f'\nReached {DESIRED_ACCURACY} accuracy so canceling training!')
self.model.stop_training = True
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, kernel_size=(3, 3), padding='same', activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu'),
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Conv2D(64, kernel_size=(3, 3), padding='same', activation='relu'),
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=128, activation='relu'),
tf.keras.layers.Dense(units=1, activation='sigmoid')
])
model.summary()
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
%%time
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'/tmp/h-or-s',
target_size=(300, 300),
batch_size=2,
class_mode='binary')
history = model.fit_generator(train_generator, steps_per_epoch=8, epochs=15, callbacks=[StopTrainingCallback()])
# help(tf.keras.models.Model.fit_generator)
| _____no_output_____ | MIT | Course 1 - Introduction to TensorFlow for AI, ML and DL/Week 4 - Using Real-world Images/Exercise4-Question.ipynb | dksifoua/TensorFlow-in-Practice |
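The cells above assume the happy-or-sad images have already been extracted to /tmp/h-or-s, but the download step is not shown in this excerpt. A minimal sketch of what such a setup cell could look like; the URL is a placeholder assumption, not taken from the original notebook, and this would run before the cells above:

```python
# Placeholder sketch: replace HAPPY_OR_SAD_ZIP_URL with the real dataset location.
import zipfile
import urllib.request

HAPPY_OR_SAD_ZIP_URL = "https://example.com/happy-or-sad.zip"  # hypothetical URL
zip_path = "/tmp/happy-or-sad.zip"
urllib.request.urlretrieve(HAPPY_OR_SAD_ZIP_URL, zip_path)
with zipfile.ZipFile(zip_path, "r") as zip_ref:
    zip_ref.extractall("/tmp/h-or-s")
```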
Semantic Segmentation and Datasets. In the object detection problems discussed in the previous sections, we always used rectangular bounding boxes to label and predict objects in images. This section explores semantic segmentation, which is concerned with dividing an image into regions belonging to different semantic categories. Notably, both the labels and the predictions of these semantic regions are at the pixel level. Figure 9.10 shows the semantic segmentation labels of an image containing a dog, a cat and the background; compared with object detection, the pixel-level borders labelled in semantic segmentation are clearly much finer. Image segmentation and instance segmentation: computer vision has two other important problems that are similar to semantic segmentation, namely image segmentation and instance segmentation. We briefly distinguish them from semantic segmentation here: * Image segmentation divides an image into several constituent regions. Methods for this problem usually exploit the correlations between pixels in the image. They require no label information about the image pixels during training, and at prediction time there is no guarantee that the segmented regions carry the semantics we want. Taking the image of Figure 9.10 as input, image segmentation might split the dog into two regions: one covering the mostly black mouth and eyes, and the other covering the mostly yellow rest of the body. * Instance segmentation is also called simultaneous detection and segmentation. It studies how to identify the pixel-level regions of each object instance in an image. Unlike semantic segmentation, instance segmentation needs to distinguish not only semantics but also different object instances: if there are two dogs in the image, instance segmentation must decide which of the two dogs each pixel belongs to. The Pascal VOC2012 semantic segmentation dataset: an important dataset for semantic segmentation is Pascal VOC2012 [1]. To get a better sense of this dataset, we first import the packages and modules needed for the experiment. | %matplotlib inline
import d2lzh as d2l
from mxnet import gluon, image, nd
from mxnet.gluon import data as gdata, utils as gutils
import os
import sys
import tarfile | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
We download the compressed archive of this dataset to the `../data` path. The archive is about 2 GB, so downloading takes some time. After decompression, the dataset will be located under `../data/VOCdevkit/VOC2012`. | # This function is saved in the d2lzh package for later use
def download_voc_pascal(data_dir='../data'):
voc_dir = os.path.join(data_dir, 'VOCdevkit/VOC2012')
url = ('http://host.robots.ox.ac.uk/pascal/VOC/voc2012'
'/VOCtrainval_11-May-2012.tar')
sha1 = '4e443f8a2eca6b1dac8a6c57641b67dd40621a49'
fname = gutils.download(url, data_dir, sha1_hash=sha1)
with tarfile.open(fname, 'r') as f:
f.extractall(data_dir)
return voc_dir
voc_dir = download_voc_pascal() | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
After entering the path `../data/VOCdevkit/VOC2012`, we can access the different parts of the dataset. The `ImageSets/Segmentation` path contains text files that specify the training and testing examples, while the `JPEGImages` and `SegmentationClass` paths contain the input images and the labels of the examples, respectively. The labels are also in image format, with the same size as the input images they annotate; pixels with the same colour in a label belong to the same semantic category. Below we define the `read_voc_images` function to read all input images and labels into memory. | # This function is saved in the d2lzh package for later use
def read_voc_images(root=voc_dir, is_train=True):
txt_fname = '%s/ImageSets/Segmentation/%s' % (
root, 'train.txt' if is_train else 'val.txt')
with open(txt_fname, 'r') as f:
images = f.read().split()
features, labels = [None] * len(images), [None] * len(images)
for i, fname in enumerate(images):
features[i] = image.imread('%s/JPEGImages/%s.jpg' % (root, fname))
labels[i] = image.imread(
'%s/SegmentationClass/%s.png' % (root, fname))
return features, labels
train_features, train_labels = read_voc_images() | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
We plot the first five input images and their labels. In the label images, white and black represent borders and background respectively, while the other colours correspond to different categories. | n = 5
imgs = train_features[0:n] + train_labels[0:n]
d2l.show_images(imgs, 2, n); | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Next, we list every RGB colour value used in the labels and the category it annotates. | # This constant is saved in the d2lzh package for later use
VOC_COLORMAP = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0],
[0, 0, 128], [128, 0, 128], [0, 128, 128], [128, 128, 128],
[64, 0, 0], [192, 0, 0], [64, 128, 0], [192, 128, 0],
[64, 0, 128], [192, 0, 128], [64, 128, 128], [192, 128, 128],
[0, 64, 0], [128, 64, 0], [0, 192, 0], [128, 192, 0],
[0, 64, 128]]
# This constant is saved in the d2lzh package for later use
VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat',
'bottle', 'bus', 'car', 'cat', 'chair', 'cow',
'diningtable', 'dog', 'horse', 'motorbike', 'person',
'potted plant', 'sheep', 'sofa', 'train', 'tv/monitor'] | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
With the two constants defined above, we can easily look up the category index of every pixel in a label. | colormap2label = nd.zeros(256 ** 3)
for i, colormap in enumerate(VOC_COLORMAP):
colormap2label[(colormap[0] * 256 + colormap[1]) * 256 + colormap[2]] = i
# This function is saved in the d2lzh package for later use
def voc_label_indices(colormap, colormap2label):
colormap = colormap.astype('int32')
idx = ((colormap[:, :, 0] * 256 + colormap[:, :, 1]) * 256
+ colormap[:, :, 2])
return colormap2label[idx] | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
For example, in the first sample image the category index of the aircraft's nose region is 1, while the background is all 0. | y = voc_label_indices(train_labels[0], colormap2label)
y[105:115, 130:140], VOC_CLASSES[1] | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Preprocessing the data: in earlier chapters we rescaled images to fit the input shape of the model. In semantic segmentation, however, doing so would require mapping the predicted pixel categories back to the original-size input image. Such a mapping is hard to do precisely, especially in segmented regions with different semantics. To avoid this problem, we crop the images to a fixed size instead of rescaling them. Specifically, we use the random cropping from image augmentation and crop the same region from both the input image and the label. | # This function is saved in the d2lzh package for later use
def voc_rand_crop(feature, label, height, width):
feature, rect = image.random_crop(feature, (width, height))
label = image.fixed_crop(label, *rect)
return feature, label
imgs = []
for _ in range(n):
imgs += voc_rand_crop(train_features[0], train_labels[0], 200, 300)
d2l.show_images(imgs[::2] + imgs[1::2], 2, n); | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
A custom semantic segmentation dataset class: by inheriting from the `Dataset` class provided by Gluon, we define a custom semantic segmentation dataset class `VOCSegDataset`. By implementing the `__getitem__` function, we can access the input image with index `idx` in the dataset and the category index of each of its pixels. Since some images in the dataset may be smaller than the output size specified for random cropping, such examples are removed with a custom `filter` function. In addition, we define a `normalize_image` function to standardize each of the three RGB channels of the input images. | # This class is saved in the d2lzh package for later use
class VOCSegDataset(gdata.Dataset):
def __init__(self, is_train, crop_size, voc_dir, colormap2label):
self.rgb_mean = nd.array([0.485, 0.456, 0.406])
self.rgb_std = nd.array([0.229, 0.224, 0.225])
self.crop_size = crop_size
features, labels = read_voc_images(root=voc_dir, is_train=is_train)
self.features = [self.normalize_image(feature)
for feature in self.filter(features)]
self.labels = self.filter(labels)
self.colormap2label = colormap2label
print('read ' + str(len(self.features)) + ' examples')
def normalize_image(self, img):
return (img.astype('float32') / 255 - self.rgb_mean) / self.rgb_std
def filter(self, imgs):
return [img for img in imgs if (
img.shape[0] >= self.crop_size[0] and
img.shape[1] >= self.crop_size[1])]
def __getitem__(self, idx):
feature, label = voc_rand_crop(self.features[idx], self.labels[idx],
*self.crop_size)
return (feature.transpose((2, 0, 1)),
voc_label_indices(label, self.colormap2label))
def __len__(self):
return len(self.features) | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Reading the dataset: we use the custom `VOCSegDataset` class to create instances of the training and testing sets. Suppose we specify that the shape of the randomly cropped output image is $320\times 480$. Below we can check the number of examples retained in the training and testing sets. | crop_size = (320, 480)
voc_train = VOCSegDataset(True, crop_size, voc_dir, colormap2label)
voc_test = VOCSegDataset(False, crop_size, voc_dir, colormap2label) | read 1114 examples
| Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Set the batch size to 64 and define the iterators for the training and testing sets. | batch_size = 64
num_workers = 0 if sys.platform.startswith('win32') else 4
train_iter = gdata.DataLoader(voc_train, batch_size, shuffle=True,
last_batch='discard', num_workers=num_workers)
test_iter = gdata.DataLoader(voc_test, batch_size, last_batch='discard',
num_workers=num_workers) | _____no_output_____ | Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Print the shape of the first minibatch. Unlike in image classification and object recognition, the label here is a three-dimensional array. | for X, Y in train_iter:
print(X.shape)
print(Y.shape)
break | (64, 3, 320, 480)
(64, 320, 480)
| Apache-2.0 | chapter_computer-vision/semantic-segmentation-and-dataset.ipynb | femj007/d2l-zh |
Questions: 1. **Did age determine the chances of survival?** 2. **What is the size of a surviving family?** 3. **Based on the passenger classes, compare them and identify the relationships between them.** Data Analysis. Data description: - **survival:** Survival (0 = No; 1 = Yes) - **pclass:** Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd) - **name:** Name - **sex:** Sex - **age:** Age - **sibsp:** Number of Siblings/Spouses Aboard - **parch:** Number of Parents/Children Aboard - **ticket:** Ticket Number - **fare:** Passenger Fare - **cabin:** Cabin - **embarked:** Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton) | # Matplotlib inline
%matplotlib inline
# Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Read the csv and create the dataframe
titanic_data = pd.read_csv('titanic-data-6.csv')
# Print the first records to inspect the data
titanic_data.head()
# Print the last records to inspect the data
titanic_data.tail() | _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
**Note:** Some values for Age are NaN, and the ticket and cabin values are alphanumeric, also with missing values as NaN. Because of that, the ticket and cabin data will not be needed. Data cleaning: from the data description and the questions to be answered, we note that some columns will not be used in the analysis and can therefore be removed. This will help the processing performance on the dataset. - PassengerId - Name - Ticket - Cabin - Fare - Embarked. Steps used for cleaning: 1. Identify and remove any duplicate entries 2. Remove the unnecessary columns 3. Fix format and data problems. 1 - Identify and remove any duplicate entries: there are no duplicate entries, as can be seen below: | # Identify and remove any duplicate entries
titanic_duplicados = titanic_data.duplicated()
sum(titanic_duplicados) | _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
2 - Remove the unnecessary columns: the columns listed in the **data cleaning** step above are removed. | # Create a new dataset without those columns
to_drop = [
'PassengerId',
'Name',
'Ticket',
'Cabin',
'Fare',
'Embarked'
]
def clean_data(to_drop):
"""
Function clean_data.
Arguments:
to_drop: list of the columns to remove.
Returns:
A new dataframe without the to_drop columns.
"""
titanic_dados_limpos = titanic_data.drop(to_drop, axis=1)
return titanic_dados_limpos
titanic_dados_limpos = clean_data(to_drop)
titanic_dados_limpos.head() | _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
3 - Fix format and data problems | # Sum of missing values
titanic_dados_limpos.isnull().sum()
# Review the Age column to check for NaN values
coluna_idade_faltante = pd.isnull(titanic_dados_limpos['Age'])
titanic_dados_limpos[coluna_idade_faltante].head()
# View the data types
titanic_dados_limpos.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 6 columns):
Survived 891 non-null int64
Pclass 891 non-null int64
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
dtypes: float64(1), int64(4), object(1)
memory usage: 41.8+ KB
| MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
We can see that the **Age** column will affect the questions, so, for plotting purposes, we will treat the null ages as 0. Data Exploration and Visualization | # Description of the data
titanic_dados_limpos.describe() | _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
Question 1: Did age determine the chances of survival? | # First, identify the total number of null Age values
idade_feminino_vazio = titanic_dados_limpos[coluna_idade_faltante]['Sex'] == 'female'
idade_masculino_vazio = titanic_dados_limpos[coluna_idade_faltante]['Sex'] == 'male'
print("Total of null ages for female passengers: {}".format(idade_feminino_vazio.sum()))
print("Total of null ages for male passengers: {}".format(idade_masculino_vazio.sum()))
# Clean the dataset by removing the NaN rows
titanic_data_age_limpo = titanic_dados_limpos.dropna()
# Find the total number of survivors and the total number of deaths
num_sobreviventes = titanic_data_age_limpo[titanic_data_age_limpo['Survived'] == True]['Survived'].count()
num_mortes = titanic_data_age_limpo[titanic_data_age_limpo['Survived'] == False]['Survived'].count()
# Find the mean age of survivors and of deaths
idade_media_sobreviventes = titanic_data_age_limpo[titanic_data_age_limpo['Survived'] == True]['Age'].mean()
idade_media_mortes = titanic_data_age_limpo[titanic_data_age_limpo['Survived'] == False]['Age'].mean()
# Print the results found
print ("Total de sobreviventes: {}".format(num_sobreviventes))
print ("Total de mortes: {}".format(num_mortes))
print ("Idade aproximada da media de sobreviventes: {}".format(round(idade_media_sobreviventes)))
print ("Idade aproximada da media de mortes: {}".format(round(idade_media_mortes)))
# Plot - Passenger age and sex versus survival
g = sns.factorplot(x="Survived", y="Age", hue='Sex', data=titanic_data_age_limpo, kind="box", size=7, aspect=.8)
# Add a title
g.fig.suptitle('Sexo e Idade x Sobrevivência')
# Rename the labels
(
g.set_axis_labels('Sobreviventes', 'Idade').set_xticklabels(["False", "True"])
)
# Plot - Passenger age and sex versus survival, from a different angle
h = sns.swarmplot(x="Survived", y="Age", hue="Sex", data=titanic_data_age_limpo);
# Add a title
(
h.set_title('Sexo e Idade x Sobrevivência (gráfico 2)')
) | _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
Based on the data shown above: - We can conclude that **age is not a decisive factor for the survival rate**. Question 2: What is the size of a surviving family? | # Add FamilySize to our table
titanic_data_age_limpo['FamilySize'] = titanic_data_age_limpo['SibSp'] + titanic_data_age_limpo['Parch']
# Group by family size, ordering by the Survived column
titanic_data_age_limpo[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False) | /home/allan/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
| MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
After analysing the data, we observe that families with **1 to 3 members** have a higher survival rate than families with **4 to 7 members**. Question 3: Based on the passenger classes, compare them and identify the relationships between them. | # Linear plot of age x survived x class
g = sns.lmplot('Age','Survived',hue='Pclass',data=titanic_data_age_limpo,palette='winter')
# Access the figure
fig = g.fig
# Add a title
fig.suptitle("Classe x Sobrevivência")
| _____no_output_____ | MIT | TitanicUdacity.ipynb | AllanKDeveloper/titanic_data |
Feature Selection Tutorial | import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso, LinearRegression, lasso_path, lasso_stability_path, lars_path
import warnings
from scipy import linalg
from sklearn.linear_model import (RandomizedLasso, lasso_stability_path,
LassoLarsCV)
from sklearn.feature_selection import f_regression
from sklearn.preprocessing import StandardScaler, scale
from sklearn.metrics import auc, precision_recall_curve, mean_squared_error
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.utils.extmath import pinvh
from sklearn.exceptions import ConvergenceWarning
from sklearn.svm import SVR
import pandas as pd
%matplotlib inline | _____no_output_____ | MIT | 2016/tutorial_final/75/Feature Selection Tutorial.ipynb | zeromtmu/practicaldatascience.github.io |
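Only the imports of this tutorial survive in this excerpt; as a small hedged illustration of where it is headed, here is a minimal feature-selection sketch using SelectFromModel with a Lasso estimator on synthetic data (the data and the alpha value are assumptions, not the tutorial's own example):

```python
# Minimal illustration: keep the features whose Lasso coefficients are non-negligible.
rng = np.random.RandomState(0)
X = rng.randn(200, 10)
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.randn(200)   # only features 0 and 3 matter

selector = SelectFromModel(Lasso(alpha=0.05)).fit(X, y)
print("selected feature indices:", np.where(selector.get_support())[0])
```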