In practice, we'll almost never perform manual standardization, because preprocessing steps will live inside **model pipelines**. So let's import the make_pipeline() function from Scikit-Learn.
# Function for creating model pipelines
from sklearn.pipeline import make_pipeline
Source notebook: Day_7/Lesson 4 - Real Estate Model Training.ipynb (SoftStackFactory/DSML_Primer, MIT license)
Now let's import the StandardScaler, which is used for standardization.
# For standardization
from sklearn.preprocessing import StandardScaler
Next, create a pipelines dictionary.
* It should include 3 keys: 'lasso', 'ridge', and 'enet'.
* The corresponding values should be pipelines that first standardize the data.
* For the algorithm in each pipeline, set random_state=123 to ensure replicable results.
# Algorithms for the pipelines (imported here in case they weren't imported earlier in the notebook)
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Create pipelines dictionary
pipeline_dict = {
    'lasso' : make_pipeline(StandardScaler(), Lasso(random_state=123)),
    'ridge' : make_pipeline(StandardScaler(), Ridge(random_state=123)),
    'enet'  : make_pipeline(StandardScaler(), ElasticNet(random_state=123))
}
In the next exercise, you'll add pipelines for tree ensembles.

Exercise 5.2

**Add pipelines for RandomForestRegressor and GradientBoostingRegressor to your pipeline dictionary.**
* Name them 'rf' for random forest and 'gb' for gradient boosted tree.
* Both pipelines should standardize the data first.
* For both, set random_state=123 to ensure replicable results.
# Tree ensemble algorithms (imported here in case they weren't imported earlier in the notebook)
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Add a pipeline for 'rf'
pipeline_dict['rf'] = make_pipeline(StandardScaler(), RandomForestRegressor(random_state=123))

# Add a pipeline for 'gb'
pipeline_dict['gb'] = make_pipeline(StandardScaler(), GradientBoostingRegressor(random_state=123))
Let's make sure our dictionary has pipelines for each of our algorithms.

**Run this code to confirm that you have all 5 algorithms, each part of a pipeline.**
# Check that we have all 5 algorithms, and that they are all pipelines
for key, value in pipeline_dict.items():
    print( key, type(value) )
lasso <class 'sklearn.pipeline.Pipeline'>
ridge <class 'sklearn.pipeline.Pipeline'>
enet <class 'sklearn.pipeline.Pipeline'>
rf <class 'sklearn.pipeline.Pipeline'>
gb <class 'sklearn.pipeline.Pipeline'>
Now that we have our pipelines, we're ready to move on to declaring hyperparameters to tune.

[**Back to Contents**](toc)

3. Declare hyperparameters to tune

Up to now, we've been casually talking about "tuning" models, but now it's time to treat the topic more formally.

First, list all the tunable hyperparameters for your Lasso regression pipeline.
# List tunable hyperparameters of our Lasso pipeline
pipeline_dict['lasso'].get_params()
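The output isn't saved in this copy of the notebook, but on a pipeline built with make_pipeline(StandardScaler(), Lasso(random_state=123)) you should see a dictionary whose keys are prefixed by the lowercased step names. An abridged illustration (exact keys and default values depend on your scikit-learn version):

{'standardscaler': StandardScaler(...),
 'lasso': Lasso(alpha=1.0, random_state=123, ...),
 'standardscaler__with_mean': True,
 'standardscaler__with_std': True,
 'lasso__alpha': 1.0,
 'lasso__max_iter': 1000,
 'lasso__random_state': 123,
 ...}

The step__parameter naming convention (step name, two underscores, parameter name) is exactly what we'll use in the hyperparameter grids below.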
Next, declare hyperparameters to tune for Lasso and Ridge regression.
* Try values between 0.001 and 10 for alpha.
# Lasso hyperparameters
lasso_hyperparameters = {
    'lasso__alpha' : [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10]
}

# Ridge hyperparameters
ridge_hyperparameters = {
    'ridge__alpha' : [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10]
}
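As an optional aside (not part of the original lesson), a grid like this spans four orders of magnitude, which is the usual pattern for regularization strengths. If you prefer a purely log-spaced grid, numpy can generate one:

import numpy as np

# 9 values evenly spaced on a log scale from 10^-3 to 10^1
alpha_grid = list(np.logspace(-3, 1, num=9))

Either style works; the hand-picked list above just keeps the values easy to read.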
Now declare a hyperparameter grid for Elastic-Net.
* You should tune the l1_ratio in addition to alpha.
# Elastic Net hyperparameters
enet_hyperparameters = {
    'elasticnet__alpha'    : [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10],
    'elasticnet__l1_ratio' : [0.1, 0.3, 0.5, 0.7, 0.9]
}
Exercise 5.3

Let's start by declaring the hyperparameter grid for our random forest.

**Declare a hyperparameter grid for RandomForestRegressor.**
* Name it rf_hyperparameters.
* Set 'randomforestregressor__n_estimators': [100, 200]
* Set 'randomforestregressor__max_features': ['auto', 'sqrt', 0.33]
# Random forest hyperparameters
rf_hyperparameters = {
    'randomforestregressor__n_estimators' : [100, 200],
    'randomforestregressor__max_features' : ['auto', 'sqrt', 0.33],
}
Next, let's declare settings to try for our boosted tree.

**Declare a hyperparameter grid for GradientBoostingRegressor.**
* Name it gb_hyperparameters.
* Set 'gradientboostingregressor__n_estimators': [100, 200]
* Set 'gradientboostingregressor__learning_rate': [0.05, 0.1, 0.2]
* Set 'gradientboostingregressor__max_depth': [1, 3, 5]
# Boosted tree hyperparameters
gb_hyperparameters = {
    'gradientboostingregressor__n_estimators'  : [100, 200],
    'gradientboostingregressor__learning_rate' : [0.05, 0.1, 0.2],
    'gradientboostingregressor__max_depth'     : [1, 3, 5]
}
Now that we have all of our hyperparameters declared, let's store them in a dictionary for ease of access.

**Create a hyperparameters dictionary.**
* Use the same keys as in the pipelines dictionary.
  * If you forgot what those keys were, you can insert a new code cell and call pipeline_dict.keys() for a reminder.
* Set the values to the corresponding **hyperparameter grids** we've been declaring throughout this module.
  * e.g. 'rf' : rf_hyperparameters
  * e.g. 'lasso' : lasso_hyperparameters
# Create hyperparameters dictionary
hyperparameters = {
    'rf'    : rf_hyperparameters,
    'gb'    : gb_hyperparameters,
    'lasso' : lasso_hyperparameters,
    'ridge' : ridge_hyperparameters,
    'enet'  : enet_hyperparameters
}
**Finally, run this code to check that hyperparameters is set up correctly.**
for key in ['enet', 'gb', 'ridge', 'rf', 'lasso']:
    if key in hyperparameters:
        if type(hyperparameters[key]) is dict:
            print( key, 'was found in hyperparameters, and it is a grid.' )
        else:
            print( key, 'was found in hyperparameters, but it is not a grid.' )
    else:
        print( key, 'was not found in hyperparameters')
enet was found in hyperparameters, and it is a grid.
gb was found in hyperparameters, and it is a grid.
ridge was found in hyperparameters, and it is a grid.
rf was found in hyperparameters, and it is a grid.
lasso was found in hyperparameters, and it is a grid.
[**Back to Contents**](toc)

4. Fit and tune models with cross-validation

Now that we have our pipelines and hyperparameters dictionaries declared, we're ready to tune our models with cross-validation.

First, let's import a helper for cross-validation called GridSearchCV.
# Helper for cross-validation
from sklearn.model_selection import GridSearchCV
Next, to see an example, set up cross-validation for Lasso regression.
# Create cross-validation object from Lasso pipeline and Lasso hyperparameters
model = GridSearchCV(pipeline_dict['lasso'], hyperparameters['lasso'], cv=10, n_jobs=-1)
Pass X_train and y_train into the .fit() function to tune hyperparameters.
# Fit and tune model
model.fit(X_train, y_train)
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
  ConvergenceWarning)
(the warning above is repeated many times, once for each cross-validation fit that fails to converge)
By the way, don't worry if you get the message:

    ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations

We'll dive into some of the under-the-hood nuances later.

In the next exercise, we'll write a loop that tunes all of our models.

Exercise 5.4

**Create a dictionary of models named fitted_models that have been tuned using cross-validation.**
* The keys should be the same as those in the pipelines and hyperparameters dictionaries.
* The values should be GridSearchCV objects that have been fitted to X_train and y_train.
* After fitting each model, print '{name} has been fitted.' just to track the progress.

This step can take a few minutes, so please be patient.
# Create empty dictionary called fitted_models
fitted_models = {}

# Loop through model pipelines, tuning each one and saving it to fitted_models
for name, pipeline in pipeline_dict.items():
    # Create cross-validation object from pipeline and hyperparameters
    model = GridSearchCV(pipeline, hyperparameters[name], cv=10, n_jobs=-1)

    # Fit model on X_train, y_train
    model.fit(X_train, y_train)

    # Store model in fitted_models[name]
    fitted_models[name] = model

    # Print '{name} has been fitted'
    print(name, 'has been fitted.')
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/coordinate_descent.py:491: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Fitting data with very small alpha may cause precision problems.
  ConvergenceWarning)
(the warning above is repeated many times during the Lasso and Elastic-Net fits)
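A side note that isn't part of the original lesson: if you'd rather silence these ConvergenceWarnings than ignore them, the usual fix is to give the coordinate-descent solver more iterations when you build the Lasso and Elastic-Net pipelines, then re-run the tuning loop. A sketch:

# Sketch only: raise max_iter so the smallest-alpha fits have time to converge
pipeline_dict['lasso'] = make_pipeline(StandardScaler(), Lasso(random_state=123, max_iter=10000))
pipeline_dict['enet']  = make_pipeline(StandardScaler(), ElasticNet(random_state=123, max_iter=10000))

This usually doesn't change which hyperparameters win; it just lets the solver run longer on the hardest settings.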
**Run this code to check that the models are of the correct type.**
# Check that we have 5 cross-validation objects
for key, value in fitted_models.items():
    print( key, type(value) )
lasso <class 'sklearn.model_selection._search.GridSearchCV'>
ridge <class 'sklearn.model_selection._search.GridSearchCV'>
enet <class 'sklearn.model_selection._search.GridSearchCV'>
rf <class 'sklearn.model_selection._search.GridSearchCV'>
gb <class 'sklearn.model_selection._search.GridSearchCV'>
**Finally, run this code to check that the models have been fitted correctly.**
from sklearn.exceptions import NotFittedError

for name, model in fitted_models.items():
    try:
        pred = model.predict(X_test)
        print(name, 'has been fitted.')
    except NotFittedError as e:
        print(repr(e))
lasso has been fitted.
ridge has been fitted.
enet has been fitted.
rf has been fitted.
gb has been fitted.
Nice. Now we're ready to evaluate how our models performed!

[**Back to Contents**](toc)

5. Evaluate models and select winner

Finally, it's time to evaluate our models and pick the best one.

Let's display the holdout $R^2$ score for each fitted model.
# Display best_score_ for each fitted model
for name, model in fitted_models.items():
    print(name, model.best_score_)
lasso 0.3074411588306972
ridge 0.3155067536069877
enet 0.3422914802767902
rf 0.4833888008561866
gb 0.48722517575886765
You should see something similar to the scores below:

    enet 0.342759786956
    lasso 0.309321321129
    ridge 0.316805719351
    gb 0.48873808731
    rf 0.480576134721

If your numbers are way off, check that you've set random_state=123 correctly for each of the models.

Next, import the r2_score() and mean_absolute_error() functions.
# Import r2_score and mean_absolute_error functions
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
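For reference (this aside isn't in the original notebook), both metrics are easy to compute by hand, which makes their meaning concrete: MAE is the average size of the errors in the target's own units (dollars here), and $R^2$ is one minus the ratio of the residual sum of squares to the total sum of squares around the mean. A small numpy sketch:

import numpy as np

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of the mistakes
    return np.mean(np.abs(y_true - y_pred))

def r2(y_true, y_pred):
    # R^2: 1 - (residual sum of squares / total sum of squares)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot

scikit-learn's r2_score and mean_absolute_error implement the same formulas, so we'll use those directly.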
Finally, let's see how the fitted models perform on our test set!

First, access your fitted random forest and display the object.
# Display fitted random forest object
fitted_models['rf']
Predict the test set using the fitted random forest.
# Predict test set using fitted random forest
pred = fitted_models['rf'].predict(X_test)
Finally, we use the scoring functions we imported to calculate and print $R^2$ and MAE.
# Calculate and print R^2 and MAE
print('R^2: ', r2_score(y_test, pred))
print('MAE: ', mean_absolute_error(y_test, pred))
R^2:  0.566278620200386
MAE:  68497.58
In the next exercise, we'll evaluate all of our fitted models on the test set and pick the winner.

Exercise 5.5

**Using a for loop, print the performance of each model in fitted_models on the test set.**
* Print both r2_score and mean_absolute_error.
* Those functions each take two arguments:
  * The actual values for your target variable (y_test)
  * Predicted values for your target variable
* Label the output with the name of the algorithm. For example:

    lasso
    --------
    R^2: 0.409313458932
    MAE: 84963.5598922
# Code here
for name, model in fitted_models.items():
    pred_var = model.predict(X_test)
    print(name)
    print('R^2: ', r2_score(y_test, pred_var))
    print('MAE: ', mean_absolute_error(y_test, pred_var))
    print('===================================')
lasso
R^2:  0.4093410739690313
MAE:  84957.9784492079
===================================
ridge
R^2:  0.40978386776640285
MAE:  84899.82281275438
===================================
enet
R^2:  0.40415614629545416
MAE:  86465.82558534491
===================================
rf
R^2:  0.566278620200386
MAE:  68497.58
===================================
gb
R^2:  0.5416475698153993
MAE:  70505.20969788785
===================================
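You can also pick the winner programmatically rather than eyeballing the printout. A small optional sketch using the objects already defined above:

# Sketch only: rank the fitted models by test-set R^2
test_r2 = {name: r2_score(y_test, model.predict(X_test))
           for name, model in fitted_models.items()}
winner = max(test_r2, key=test_r2.get)
print('Winner:', winner, 'with R^2 =', test_r2[winner])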
**Next, ask yourself these questions to pick the winning model:**
* Which model had the highest $R^2$ on the test set?
  > Random forest
* Which model had the lowest mean absolute error?
  > Random forest
* Are these two models the same one?
  > Yes
* Did it also have the best holdout $R^2$ score from cross-validation?
  > Not quite: the gradient boosted tree had a slightly higher cross-validation score (about 0.487 vs. 0.483), but the random forest wins on the test set.
* **Does it satisfy our win condition?**
  > Yes, its mean absolute error is less than \$70,000!

**Finally, let's plot the performance of the winning model on the test set. Run the code below.**
* It draws a scatter plot of the test-set predictions, with predicted transaction price on the x-axis and actual transaction price on the y-axis.
# Plot predicted vs. actual transaction prices for the winning (random forest) model
import matplotlib.pyplot as plt   # likely already imported earlier in the notebook

rf_pred = fitted_models['rf'].predict(X_test)

plt.scatter(rf_pred, y_test)
plt.xlabel('predicted')
plt.ylabel('actual')
plt.show()
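To make the visual check described next a little easier, you can overlay the 45-degree reference line on the same plot. An optional sketch, not in the original notebook:

# Sketch only: scatter plot plus the y = x reference line (perfect predictions fall on it)
rf_pred = fitted_models['rf'].predict(X_test)
lims = [min(rf_pred.min(), y_test.min()), max(rf_pred.max(), y_test.max())]

plt.scatter(rf_pred, y_test)
plt.plot(lims, lims, linestyle='--')
plt.xlabel('predicted')
plt.ylabel('actual')
plt.show()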
This last visual check is a nice way to confirm our model's performance.
* Are the points scattered around the 45 degree diagonal?

[**Back to Contents**](toc)

Finally, let's save the winning model.

Great job! You've created a pretty kick-ass model for real-estate valuation. Now it's time to save your hard work.

First, let's take a look at the data type of your winning model.

***Run each code cell below after completing the exercises above.***
type(fitted_models['rf'])
It looks like this is still the GridSearchCV data type.
* You can actually save this object directly if you want, because it will use the winning model pipeline by default.
* However, what we really care about is the actual winning model Pipeline, right?

In that case, we can use the best_estimator_ attribute to access it:
type(fitted_models['rf'].best_estimator_)
If we output that object directly, we can also see the winning values for our hyperparameters.
fitted_models['rf'].best_estimator_
See? The winning values for our hyperparameters are:
* n_estimators: 200
* max_features: 'auto'

Great, now let's import a helpful package called pickle, which saves Python objects to disk.
import pickle
Let's save the winning Pipeline object into a pickle file.
with open('saved_models/final_model_employee.pkl', 'wb') as f:
    pickle.dump(fitted_models['rf'].best_estimator_, f)
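To reuse the saved pipeline later, for example in a separate scoring script, you can load it back with pickle. A small sketch, assuming the same file path and that the new data has the same columns as X_train:

import pickle

# Sketch only: load the saved pipeline and predict on new observations
with open('saved_models/final_model_employee.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

# loaded_model is the full Pipeline, so it standardizes new data before predicting
new_predictions = loaded_model.predict(X_test)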
Title of the Project

__Enter Subtitle here if any__

Overview

__What?__
- Tell us about the problem you are about to solve.

__When?__
- Tell us when and how you will determine that this project is successful (metrics).

__Why?__
- Tell us why this problem is interesting.

__Who?__
- Tell us who might be interested in your project.

__Background and Research__
- What has already been done on the problem you are working on?

Get the data

__Who?__
- Who collected the original data?

__When?__
- When was the data collected?

__What?__
- What does the data look like?
- Number of columns, rows, missing values.
- Size of the data.

__Links__
- Link to data if available.
- Link to data dictionary if available.

__Connect__
- Connect this part to the overview.
- If this is a supervised learning problem, what is the target column? Which columns will be important in your discussion?

Explore the Data
- Show us the head of the data and the shape of the data.
- Missing values, data types, distributions, interesting statistics, etc.
- You don't have to show us all the code here, but make sure that the work you show us is connected to the problem you are trying to solve.
- Don't share a scrapbook with me.
- If you are showing a plot, make sure that it has a title, the axes are labeled, and you explain why you are showing it (its connection to the problem and solution).
- If you are working with a supervised learning problem, talk about the target variable: its distribution, class imbalance, etc.

Prepare Data
- Don't change the original dataset.
- Don't necessarily show me the functions you wrote.
- Use the utils.py script and call the utility functions if necessary.
- Explain rather than show, and only mention a piece of work if it is relevant for the later parts of the project.

Modeling
- What models do you use and why?
- What is a good baseline?
- Which metric will you be focusing on, and why?
## Decision tree model

## Logistic regression model

## Confusion matrices for both models
Source notebook: assignments/Final Project Report Template.ipynb (mikev6/UMBC_Data601, MIT license)
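One possible way to fill in the Modeling placeholders above is sketched below. It is only an illustration: the names X_train, X_test, y_train, y_test are assumed to come from the student's own data preparation, and the hyperparameters are defaults.

from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Decision tree model
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

# Logistic regression model
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Confusion matrices for both models
print('Decision tree:\n', confusion_matrix(y_test, tree.predict(X_test)))
print('Logistic regression:\n', confusion_matrix(y_test, logreg.predict(X_test)))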
Fine Tune
- Make sure that you played with the hyper-parameters and fine-tuned the models.
- Use grid search and cross validation to compare different models.
- Don't show all of your work here; only mention it if it is necessary to understand your results.
## Your code here, if necessary
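A minimal sketch of what that fine-tuning could look like, reusing the hypothetical decision tree from the previous sketch (again an illustration, not a prescribed solution):

from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Sketch only: tune the decision tree's depth and leaf size with 5-fold cross-validation
param_grid = {'max_depth': [3, 5, 10, None], 'min_samples_leaf': [1, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

print('Best parameters:', search.best_params_)
print('Best CV score:  ', search.best_score_)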
Copyright by Pierian Data Inc. Created by Jose Marcial Portilla.

Keras Syntax Basics

With TensorFlow 2.0, Keras is now the main API choice. Let's work through a simple regression project to understand the basics of the Keras syntax and adding layers.

The Data

To learn the basic syntax of Keras, we will use a very simple fake data set; in subsequent lectures we will focus on real datasets, along with feature engineering! For now, let's focus on the syntax of TensorFlow 2.0.

Let's pretend this data consists of measurements of some rare gem stones, with 2 measurement features and a sale price. Our final goal is to predict the sale price of a new gem stone we just mined from the ground, in order to set a fair price in the market.

Load the Data
import pandas as pd

df = pd.read_csv('../DATA/fake_reg.csv')
df.head()
Source notebook: FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb (tanuja333/Tensorflow_Keras, Apache-2.0 license)
Explore the data

Let's take a quick look; we should see strong correlation between the features and the "price" of this made-up product.
import seaborn as sns
import matplotlib.pyplot as plt

sns.pairplot(df)
Feel free to visualize more, but this data is fake, so we will focus on feature engineering and exploratory data analysis later on in the course in much more detail!

Test/Train Split
from sklearn.model_selection import train_test_split

# Convert Pandas to Numpy for Keras

# Features
X = df[['feature1','feature2']].values

# Label
y = df['price'].values

# Split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

X_train.shape
X_test.shape
y_train.shape
y_test.shape
Normalizing/Scaling the Data

We scale the feature data.

[Why we don't need to scale the label](https://stats.stackexchange.com/questions/111467/is-it-necessary-to-scale-the-target-value-in-addition-to-scaling-features-for-re)
from sklearn.preprocessing import MinMaxScaler

help(MinMaxScaler)

scaler = MinMaxScaler()

# Notice: to prevent data leakage from the test set, we only fit our scaler to the training set
scaler.fit(X_train)

X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
TensorFlow 2.0 Syntax

Import Options

There are several ways you can import Keras from TensorFlow (this is largely a personal style choice; use whichever import method you prefer). We will use the method shown in the **official TF documentation**.
import tensorflow as tf
from tensorflow.keras.models import Sequential

help(Sequential)
Help on class Sequential in module tensorflow.python.keras.engine.sequential: class Sequential(tensorflow.python.keras.engine.training.Model) | Sequential(layers=None, name=None) | | Linear stack of layers. | | Arguments: | layers: list of layers to add to the model. | | Example: | | ```python | # Optionally, the first layer can receive an `input_shape` argument: | model = Sequential() | model.add(Dense(32, input_shape=(500,))) | # Afterwards, we do automatic shape inference: | model.add(Dense(32)) | | # This is identical to the following: | model = Sequential() | model.add(Dense(32, input_dim=500)) | | # And to the following: | model = Sequential() | model.add(Dense(32, batch_input_shape=(None, 500))) | | # Note that you can also omit the `input_shape` argument: | # In that case the model gets built the first time you call `fit` (or other | # training and evaluation methods). | model = Sequential() | model.add(Dense(32)) | model.add(Dense(32)) | model.compile(optimizer=optimizer, loss=loss) | # This builds the model for the first time: | model.fit(x, y, batch_size=32, epochs=10) | | # Note that when using this delayed-build pattern (no input shape specified), | # the model doesn't have any weights until the first call | # to a training/evaluation method (since it isn't yet built): | model = Sequential() | model.add(Dense(32)) | model.add(Dense(32)) | model.weights # returns [] | | # Whereas if you specify the input shape, the model gets built continuously | # as you are adding layers: | model = Sequential() | model.add(Dense(32, input_shape=(500,))) | model.add(Dense(32)) | model.weights # returns list of length 4 | | # When using the delayed-build pattern (no input shape specified), you can | # choose to manually build your model by calling `build(batch_input_shape)`: | model = Sequential() | model.add(Dense(32)) | model.add(Dense(32)) | model.build((None, 500)) | model.weights # returns list of length 4 | ``` | | Method resolution order: | Sequential | tensorflow.python.keras.engine.training.Model | tensorflow.python.keras.engine.network.Network | tensorflow.python.keras.engine.base_layer.Layer | tensorflow.python.module.module.Module | tensorflow.python.training.tracking.tracking.AutoTrackable | tensorflow.python.training.tracking.base.Trackable | builtins.object | | Methods defined here: | | __init__(self, layers=None, name=None) | | add(self, layer) | Adds a layer instance on top of the layer stack. | | Arguments: | layer: layer instance. | | Raises: | TypeError: If `layer` is not a layer instance. | ValueError: In case the `layer` argument does not | know its input shape. | ValueError: In case the `layer` argument has | multiple output tensors, or is already connected | somewhere else (forbidden in `Sequential` models). | | build(self, input_shape=None) | Builds the model based on input shapes received. | | This is to be used for subclassed models, which do not know at instantiation | time what their inputs look like. | | This method only exists for users who want to call `model.build()` in a | standalone way (as a substitute for calling the model on real data to | build it). It will never be called by the framework (and thus it will | never throw unexpected errors in an unrelated workflow). | | Args: | input_shape: Single tuple, TensorShape, or list of shapes, where shapes | are tuples, integers, or TensorShapes. | | Raises: | ValueError: | 1. In case of invalid user-provided data (not of type tuple, | list, or TensorShape). | 2. 
If the model requires call arguments that are agnostic | to the input shapes (positional or kwarg in call signature). | 3. If not all layers were properly built. | 4. If float type inputs are not supported within the layers. | | In each of these cases, the user should build their model by calling it | on real tensor data. | | call(self, inputs, training=None, mask=None) | Calls the model on new inputs. | | In this case `call` just reapplies | all ops in the graph to the new inputs | (e.g. build a new computational graph from the provided inputs). | | Arguments: | inputs: A tensor or list of tensors. | training: Boolean or boolean scalar tensor, indicating whether to run | the `Network` in training mode or inference mode. | mask: A mask or list of masks. A mask can be | either a tensor or None (no mask). | | Returns: | A tensor if there is a single output, or | a list of tensors if there are more than one outputs. | | compute_mask(self, inputs, mask) | Computes an output mask tensor. | | Arguments: | inputs: Tensor or list of tensors. | mask: Tensor or list of tensors. | | Returns: | None or a tensor (or list of tensors, | one per output tensor of the layer). | | compute_output_shape(self, input_shape) | Computes the output shape of the layer. | | If the layer has not been built, this method will call `build` on the | layer. This assumes that the layer will later be used with inputs that | match the input shape provided here. | | Arguments: | input_shape: Shape tuple (tuple of integers) | or list of shape tuples (one per output tensor of the layer). | Shape tuples can include None for free dimensions, | instead of an integer. | | Returns: | An input shape tuple. | | get_config(self) | Returns the config of the layer. | | A layer config is a Python dictionary (serializable) | containing the configuration of a layer. | The same layer can be reinstantiated later | (without its trained weights) from this configuration. | | The config of a layer does not include connectivity | information, nor the layer class name. These are handled | by `Network` (one layer of abstraction above). | | Returns: | Python dictionary. | | pop(self) | Removes the last layer in the model. | | Raises: | TypeError: if there are no layers in the model. | | predict_classes(self, x, batch_size=32, verbose=0) | Generate class predictions for the input samples. | | The input samples are processed batch by batch. | | Arguments: | x: input data, as a Numpy array or list of Numpy arrays | (if the model has multiple inputs). | batch_size: integer. | verbose: verbosity mode, 0 or 1. | | Returns: | A numpy array of class predictions. | | predict_proba(self, x, batch_size=32, verbose=0) | Generates class probability predictions for the input samples. | | The input samples are processed batch by batch. | | Arguments: | x: input data, as a Numpy array or list of Numpy arrays | (if the model has multiple inputs). | batch_size: integer. | verbose: verbosity mode, 0 or 1. | | Returns: | A Numpy array of probability predictions. | | ---------------------------------------------------------------------- | Class methods defined here: | | from_config(config, custom_objects=None) from builtins.type | Instantiates a Model from its config (output of `get_config()`). | | Arguments: | config: Model config dictionary. | custom_objects: Optional dictionary mapping names | (strings) to custom classes or functions to be | considered during deserialization. | | Returns: | A model instance. 
| | Raises: | ValueError: In case of improperly formatted config dict. | | ---------------------------------------------------------------------- | Data descriptors defined here: | | dynamic | | input_spec | Gets the network's input specs. | | Returns: | A list of `InputSpec` instances (one per input to the model) | or a single instance if the model has only one input. | | layers | | ---------------------------------------------------------------------- | Methods inherited from tensorflow.python.keras.engine.training.Model: | | compile(self, optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None, distribute=None, **kwargs) | Configures the model for training. | | Arguments: | optimizer: String (name of optimizer) or optimizer instance. | See `tf.keras.optimizers`. | loss: String (name of objective function), objective function or | `tf.losses.Loss` instance. See `tf.losses`. If the model has | multiple outputs, you can use a different loss on each output by | passing a dictionary or a list of losses. The loss value that will | be minimized by the model will then be the sum of all individual | losses. | metrics: List of metrics to be evaluated by the model during training | and testing. Typically you will use `metrics=['accuracy']`. | To specify different metrics for different outputs of a | multi-output model, you could also pass a dictionary, such as | `metrics={'output_a': 'accuracy', 'output_b': ['accuracy', 'mse']}`. | You can also pass a list (len = len(outputs)) of lists of metrics | such as `metrics=[['accuracy'], ['accuracy', 'mse']]` or | `metrics=['accuracy', ['accuracy', 'mse']]`. | loss_weights: Optional list or dictionary specifying scalar | coefficients (Python floats) to weight the loss contributions | of different model outputs. | The loss value that will be minimized by the model | will then be the *weighted sum* of all individual losses, | weighted by the `loss_weights` coefficients. | If a list, it is expected to have a 1:1 mapping | to the model's outputs. If a tensor, it is expected to map | output names (strings) to scalar coefficients. | sample_weight_mode: If you need to do timestep-wise | sample weighting (2D weights), set this to `"temporal"`. | `None` defaults to sample-wise weights (1D). | If the model has multiple outputs, you can use a different | `sample_weight_mode` on each output by passing a | dictionary or a list of modes. | weighted_metrics: List of metrics to be evaluated and weighted | by sample_weight or class_weight during training and testing. | target_tensors: By default, Keras will create placeholders for the | model's target, which will be fed with the target data during | training. If instead you would like to use your own | target tensors (in turn, Keras will not expect external | Numpy data for these targets at training time), you | can specify them via the `target_tensors` argument. It can be | a single tensor (for a single-output model), a list of tensors, | or a dict mapping output names to target tensors. | distribute: NOT SUPPORTED IN TF 2.0, please create and compile the | model under distribution strategy scope instead of passing it to | compile. | **kwargs: Any additional arguments. | | Raises: | ValueError: In case of invalid arguments for | `optimizer`, `loss`, `metrics` or `sample_weight_mode`. 
| | evaluate(self, x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False) | Returns the loss value & metrics values for the model in test mode. | | Computation is done in batches. | | Arguments: | x: Input data. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A dict mapping input names to the corresponding array/tensors, | if the model has named inputs. | - A `tf.data` dataset. | - A generator or `keras.utils.Sequence` instance. | y: Target data. Like the input data `x`, | it could be either Numpy array(s) or TensorFlow tensor(s). | It should be consistent with `x` (you cannot have Numpy inputs and | tensor targets, or inversely). | If `x` is a dataset, generator or | `keras.utils.Sequence` instance, `y` should not be specified (since | targets will be obtained from the iterator/dataset). | batch_size: Integer or `None`. | Number of samples per gradient update. | If unspecified, `batch_size` will default to 32. | Do not specify the `batch_size` is your data is in the | form of symbolic tensors, dataset, | generators, or `keras.utils.Sequence` instances (since they generate | batches). | verbose: 0 or 1. Verbosity mode. | 0 = silent, 1 = progress bar. | sample_weight: Optional Numpy array of weights for | the test samples, used for weighting the loss function. | You can either pass a flat (1D) | Numpy array with the same length as the input samples | (1:1 mapping between weights and samples), | or in the case of temporal data, | you can pass a 2D array with shape | `(samples, sequence_length)`, | to apply a different weight to every timestep of every sample. | In this case you should make sure to specify | `sample_weight_mode="temporal"` in `compile()`. This argument is not | supported when `x` is a dataset, instead pass | sample weights as the third element of `x`. | steps: Integer or `None`. | Total number of steps (batches of samples) | before declaring the evaluation round finished. | Ignored with the default value of `None`. | If x is a `tf.data` dataset and `steps` is | None, 'evaluate' will run until the dataset is exhausted. | This argument is not supported with array inputs. | callbacks: List of `keras.callbacks.Callback` instances. | List of callbacks to apply during evaluation. | See [callbacks](/api_docs/python/tf/keras/callbacks). | max_queue_size: Integer. Used for generator or `keras.utils.Sequence` | input only. Maximum size for the generator queue. | If unspecified, `max_queue_size` will default to 10. | workers: Integer. Used for generator or `keras.utils.Sequence` input | only. Maximum number of processes to spin up when using | process-based threading. If unspecified, `workers` will default | to 1. If 0, will execute the generator on the main thread. | use_multiprocessing: Boolean. Used for generator or | `keras.utils.Sequence` input only. If `True`, use process-based | threading. If unspecified, `use_multiprocessing` will default to | `False`. Note that because this implementation relies on | multiprocessing, you should not pass non-picklable arguments to | the generator as they can't be passed easily to children processes. | | Returns: | Scalar test loss (if the model has a single output and no metrics) | or list of scalars (if the model has multiple outputs | and/or metrics). 
The attribute `model.metrics_names` will give you | the display labels for the scalar outputs. | | Raises: | ValueError: in case of invalid arguments. | | evaluate_generator(self, generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0) | Evaluates the model on a data generator. | | The generator should return the same kind of data | as accepted by `test_on_batch`. | | Arguments: | generator: Generator yielding tuples (inputs, targets) | or (inputs, targets, sample_weights) | or an instance of `keras.utils.Sequence` | object in order to avoid duplicate data | when using multiprocessing. | steps: Total number of steps (batches of samples) | to yield from `generator` before stopping. | Optional for `Sequence`: if unspecified, will use | the `len(generator)` as a number of steps. | callbacks: List of `keras.callbacks.Callback` instances. | List of callbacks to apply during evaluation. | See [callbacks](/api_docs/python/tf/keras/callbacks). | max_queue_size: maximum size for the generator queue | workers: Integer. Maximum number of processes to spin up | when using process-based threading. | If unspecified, `workers` will default to 1. If 0, will | execute the generator on the main thread. | use_multiprocessing: Boolean. | If `True`, use process-based threading. | If unspecified, `use_multiprocessing` will default to `False`. | Note that because this implementation relies on multiprocessing, | you should not pass non-picklable arguments to the generator | as they can't be passed easily to children processes. | verbose: Verbosity mode, 0 or 1. | | Returns: | Scalar test loss (if the model has a single output and no metrics) | or list of scalars (if the model has multiple outputs | and/or metrics). The attribute `model.metrics_names` will give you | the display labels for the scalar outputs. | | Raises: | ValueError: in case of invalid arguments. | | Raises: | ValueError: In case the generator yields data in an invalid format. | | fit(self, x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None, validation_split=0.0, validation_data=None, shuffle=True, class_weight=None, sample_weight=None, initial_epoch=0, steps_per_epoch=None, validation_steps=None, validation_freq=1, max_queue_size=10, workers=1, use_multiprocessing=False, **kwargs) | Trains the model for a fixed number of epochs (iterations on a dataset). | | Arguments: | x: Input data. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A dict mapping input names to the corresponding array/tensors, | if the model has named inputs. | - A `tf.data` dataset. Should return a tuple | of either `(inputs, targets)` or | `(inputs, targets, sample_weights)`. | - A generator or `keras.utils.Sequence` returning `(inputs, targets)` | or `(inputs, targets, sample weights)`. | y: Target data. Like the input data `x`, | it could be either Numpy array(s) or TensorFlow tensor(s). | It should be consistent with `x` (you cannot have Numpy inputs and | tensor targets, or inversely). If `x` is a dataset, generator, | or `keras.utils.Sequence` instance, `y` should | not be specified (since targets will be obtained from `x`). | batch_size: Integer or `None`. | Number of samples per gradient update. | If unspecified, `batch_size` will default to 32. 
| Do not specify the `batch_size` if your data is in the | form of symbolic tensors, datasets, | generators, or `keras.utils.Sequence` instances (since they generate | batches). | epochs: Integer. Number of epochs to train the model. | An epoch is an iteration over the entire `x` and `y` | data provided. | Note that in conjunction with `initial_epoch`, | `epochs` is to be understood as "final epoch". | The model is not trained for a number of iterations | given by `epochs`, but merely until the epoch | of index `epochs` is reached. | verbose: 0, 1, or 2. Verbosity mode. | 0 = silent, 1 = progress bar, 2 = one line per epoch. | Note that the progress bar is not particularly useful when | logged to a file, so verbose=2 is recommended when not running | interactively (eg, in a production environment). | callbacks: List of `keras.callbacks.Callback` instances. | List of callbacks to apply during training. | See `tf.keras.callbacks`. | validation_split: Float between 0 and 1. | Fraction of the training data to be used as validation data. | The model will set apart this fraction of the training data, | will not train on it, and will evaluate | the loss and any model metrics | on this data at the end of each epoch. | The validation data is selected from the last samples | in the `x` and `y` data provided, before shuffling. This argument is | not supported when `x` is a dataset, generator or | `keras.utils.Sequence` instance. | validation_data: Data on which to evaluate | the loss and any model metrics at the end of each epoch. | The model will not be trained on this data. | `validation_data` will override `validation_split`. | `validation_data` could be: | - tuple `(x_val, y_val)` of Numpy arrays or tensors | - tuple `(x_val, y_val, val_sample_weights)` of Numpy arrays | - dataset | For the first two cases, `batch_size` must be provided. | For the last case, `validation_steps` must be provided. | shuffle: Boolean (whether to shuffle the training data | before each epoch) or str (for 'batch'). | 'batch' is a special option for dealing with the | limitations of HDF5 data; it shuffles in batch-sized chunks. | Has no effect when `steps_per_epoch` is not `None`. | class_weight: Optional dictionary mapping class indices (integers) | to a weight (float) value, used for weighting the loss function | (during training only). | This can be useful to tell the model to | "pay more attention" to samples from | an under-represented class. | sample_weight: Optional Numpy array of weights for | the training samples, used for weighting the loss function | (during training only). You can either pass a flat (1D) | Numpy array with the same length as the input samples | (1:1 mapping between weights and samples), | or in the case of temporal data, | you can pass a 2D array with shape | `(samples, sequence_length)`, | to apply a different weight to every timestep of every sample. | In this case you should make sure to specify | `sample_weight_mode="temporal"` in `compile()`. This argument is not | supported when `x` is a dataset, generator, or | `keras.utils.Sequence` instance, instead provide the sample_weights | as the third element of `x`. | initial_epoch: Integer. | Epoch at which to start training | (useful for resuming a previous training run). | steps_per_epoch: Integer or `None`. | Total number of steps (batches of samples) | before declaring one epoch finished and starting the | next epoch. 
When training with input tensors such as | TensorFlow data tensors, the default `None` is equal to | the number of samples in your dataset divided by | the batch size, or 1 if that cannot be determined. If x is a | `tf.data` dataset, and 'steps_per_epoch' | is None, the epoch will run until the input dataset is exhausted. | This argument is not supported with array inputs. | validation_steps: Only relevant if `validation_data` is provided and | is a `tf.data` dataset. Total number of steps (batches of | samples) to draw before stopping when performing validation | at the end of every epoch. If validation_data is a `tf.data` dataset | and 'validation_steps' is None, validation | will run until the `validation_data` dataset is exhausted. | validation_freq: Only relevant if validation data is provided. Integer | or `collections_abc.Container` instance (e.g. list, tuple, etc.). | If an integer, specifies how many training epochs to run before a | new validation run is performed, e.g. `validation_freq=2` runs | validation every 2 epochs. If a Container, specifies the epochs on | which to run validation, e.g. `validation_freq=[1, 2, 10]` runs | validation at the end of the 1st, 2nd, and 10th epochs. | max_queue_size: Integer. Used for generator or `keras.utils.Sequence` | input only. Maximum size for the generator queue. | If unspecified, `max_queue_size` will default to 10. | workers: Integer. Used for generator or `keras.utils.Sequence` input | only. Maximum number of processes to spin up | when using process-based threading. If unspecified, `workers` | will default to 1. If 0, will execute the generator on the main | thread. | use_multiprocessing: Boolean. Used for generator or | `keras.utils.Sequence` input only. If `True`, use process-based | threading. If unspecified, `use_multiprocessing` will default to | `False`. Note that because this implementation relies on | multiprocessing, you should not pass non-picklable arguments to | the generator as they can't be passed easily to children processes. | **kwargs: Used for backwards compatibility. | | Returns: | A `History` object. Its `History.history` attribute is | a record of training loss values and metrics values | at successive epochs, as well as validation loss values | and validation metrics values (if applicable). | | Raises: | RuntimeError: If the model was never compiled. | ValueError: In case of mismatch between the provided input data | and what the model expects. | | fit_generator(self, generator, steps_per_epoch=None, epochs=1, verbose=1, callbacks=None, validation_data=None, validation_steps=None, validation_freq=1, class_weight=None, max_queue_size=10, workers=1, use_multiprocessing=False, shuffle=True, initial_epoch=0) | Fits the model on data yielded batch-by-batch by a Python generator. | | The generator is run in parallel to the model, for efficiency. | For instance, this allows you to do real-time data augmentation | on images on CPU in parallel to training your model on GPU. | | The use of `keras.utils.Sequence` guarantees the ordering | and guarantees the single use of every input per epoch when | using `use_multiprocessing=True`. | | Arguments: | generator: A generator or an instance of `Sequence` | (`keras.utils.Sequence`) | object in order to avoid duplicate data | when using multiprocessing. | The output of the generator must be either | - a tuple `(inputs, targets)` | - a tuple `(inputs, targets, sample_weights)`. | This tuple (a single output of the generator) makes a single batch. 
| Therefore, all arrays in this tuple must have the same length (equal | to the size of this batch). Different batches may have different | sizes. | For example, the last batch of the epoch is commonly smaller than | the | others, if the size of the dataset is not divisible by the batch | size. | The generator is expected to loop over its data | indefinitely. An epoch finishes when `steps_per_epoch` | batches have been seen by the model. | steps_per_epoch: Total number of steps (batches of samples) | to yield from `generator` before declaring one epoch | finished and starting the next epoch. It should typically | be equal to the number of samples of your dataset | divided by the batch size. | Optional for `Sequence`: if unspecified, will use | the `len(generator)` as a number of steps. | epochs: Integer, total number of iterations on the data. | verbose: Verbosity mode, 0, 1, or 2. | callbacks: List of callbacks to be called during training. | validation_data: This can be either | - a generator for the validation data | - a tuple (inputs, targets) | - a tuple (inputs, targets, sample_weights). | validation_steps: Only relevant if `validation_data` | is a generator. Total number of steps (batches of samples) | to yield from `generator` before stopping. | Optional for `Sequence`: if unspecified, will use | the `len(validation_data)` as a number of steps. | validation_freq: Only relevant if validation data is provided. Integer | or `collections_abc.Container` instance (e.g. list, tuple, etc.). | If an integer, specifies how many training epochs to run before a | new validation run is performed, e.g. `validation_freq=2` runs | validation every 2 epochs. If a Container, specifies the epochs on | which to run validation, e.g. `validation_freq=[1, 2, 10]` runs | validation at the end of the 1st, 2nd, and 10th epochs. | class_weight: Dictionary mapping class indices to a weight | for the class. | max_queue_size: Integer. Maximum size for the generator queue. | If unspecified, `max_queue_size` will default to 10. | workers: Integer. Maximum number of processes to spin up | when using process-based threading. | If unspecified, `workers` will default to 1. If 0, will | execute the generator on the main thread. | use_multiprocessing: Boolean. | If `True`, use process-based threading. | If unspecified, `use_multiprocessing` will default to `False`. | Note that because this implementation relies on multiprocessing, | you should not pass non-picklable arguments to the generator | as they can't be passed easily to children processes. | shuffle: Boolean. Whether to shuffle the order of the batches at | the beginning of each epoch. Only used with instances | of `Sequence` (`keras.utils.Sequence`). | Has no effect when `steps_per_epoch` is not `None`. | initial_epoch: Epoch at which to start training | (useful for resuming a previous training run) | | Returns: | A `History` object. | | Example: | | ```python | def generate_arrays_from_file(path): | while 1: | f = open(path) | for line in f: | # create numpy arrays of input data | # and labels, from each line in the file | x1, x2, y = process_line(line) | yield ({'input_1': x1, 'input_2': x2}, {'output': y}) | f.close() | | model.fit_generator(generate_arrays_from_file('/my_file.txt'), | steps_per_epoch=10000, epochs=10) | ``` | Raises: | ValueError: In case the generator yields data in an invalid format. | | get_weights(self) | Retrieves the weights of the model. | | Returns: | A flat list of Numpy arrays. 
| | load_weights(self, filepath, by_name=False) | Loads all layer weights, either from a TensorFlow or an HDF5 file. | | predict(self, x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False) | Generates output predictions for the input samples. | | Computation is done in batches. | | Arguments: | x: Input samples. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A `tf.data` dataset. | - A generator or `keras.utils.Sequence` instance. | batch_size: Integer or `None`. | Number of samples per gradient update. | If unspecified, `batch_size` will default to 32. | Do not specify the `batch_size` is your data is in the | form of symbolic tensors, dataset, | generators, or `keras.utils.Sequence` instances (since they generate | batches). | verbose: Verbosity mode, 0 or 1. | steps: Total number of steps (batches of samples) | before declaring the prediction round finished. | Ignored with the default value of `None`. If x is a `tf.data` | dataset and `steps` is None, `predict` will | run until the input dataset is exhausted. | callbacks: List of `keras.callbacks.Callback` instances. | List of callbacks to apply during prediction. | See [callbacks](/api_docs/python/tf/keras/callbacks). | max_queue_size: Integer. Used for generator or `keras.utils.Sequence` | input only. Maximum size for the generator queue. | If unspecified, `max_queue_size` will default to 10. | workers: Integer. Used for generator or `keras.utils.Sequence` input | only. Maximum number of processes to spin up when using | process-based threading. If unspecified, `workers` will default | to 1. If 0, will execute the generator on the main thread. | use_multiprocessing: Boolean. Used for generator or | `keras.utils.Sequence` input only. If `True`, use process-based | threading. If unspecified, `use_multiprocessing` will default to | `False`. Note that because this implementation relies on | multiprocessing, you should not pass non-picklable arguments to | the generator as they can't be passed easily to children processes. | | | Returns: | Numpy array(s) of predictions. | | Raises: | ValueError: In case of mismatch between the provided | input data and the model's expectations, | or in case a stateful model receives a number of samples | that is not a multiple of the batch size. | | predict_generator(self, generator, steps=None, callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False, verbose=0) | Generates predictions for the input samples from a data generator. | | The generator should return the same kind of data as accepted by | `predict_on_batch`. | | Arguments: | generator: Generator yielding batches of input samples | or an instance of `keras.utils.Sequence` object in order to | avoid duplicate data when using multiprocessing. | steps: Total number of steps (batches of samples) | to yield from `generator` before stopping. | Optional for `Sequence`: if unspecified, will use | the `len(generator)` as a number of steps. | callbacks: List of `keras.callbacks.Callback` instances. | List of callbacks to apply during prediction. | See [callbacks](/api_docs/python/tf/keras/callbacks). | max_queue_size: Maximum size for the generator queue. | workers: Integer. Maximum number of processes to spin up | when using process-based threading. | If unspecified, `workers` will default to 1. 
If 0, will | execute the generator on the main thread. | use_multiprocessing: Boolean. | If `True`, use process-based threading. | If unspecified, `use_multiprocessing` will default to `False`. | Note that because this implementation relies on multiprocessing, | you should not pass non-picklable arguments to the generator | as they can't be passed easily to children processes. | verbose: verbosity mode, 0 or 1. | | Returns: | Numpy array(s) of predictions. | | Raises: | ValueError: In case the generator yields data in an invalid format. | | predict_on_batch(self, x) | Returns predictions for a single batch of samples. | | Arguments: | x: Input data. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A `tf.data` dataset. | | Returns: | Numpy array(s) of predictions. | | Raises: | ValueError: In case of mismatch between given number of inputs and | expectations of the model. | | reset_metrics(self) | Resets the state of metrics. | | test_on_batch(self, x, y=None, sample_weight=None, reset_metrics=True) | Test the model on a single batch of samples. | | Arguments: | x: Input data. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A dict mapping input names to the corresponding array/tensors, | if the model has named inputs. | - A `tf.data` dataset. | y: Target data. Like the input data `x`, | it could be either Numpy array(s) or TensorFlow tensor(s). | It should be consistent with `x` (you cannot have Numpy inputs and | tensor targets, or inversely). If `x` is a dataset `y` should | not be specified (since targets will be obtained from the iterator). | sample_weight: Optional array of the same length as x, containing | weights to apply to the model's loss for each sample. | In the case of temporal data, you can pass a 2D array | with shape (samples, sequence_length), | to apply a different weight to every timestep of every sample. | In this case you should make sure to specify | sample_weight_mode="temporal" in compile(). This argument is not | supported when `x` is a dataset. | reset_metrics: If `True`, the metrics returned will be only for this | batch. If `False`, the metrics will be statefully accumulated across | batches. | | Returns: | Scalar test loss (if the model has a single output and no metrics) | or list of scalars (if the model has multiple outputs | and/or metrics). The attribute `model.metrics_names` will give you | the display labels for the scalar outputs. | | Raises: | ValueError: In case of invalid user-provided arguments. | | train_on_batch(self, x, y=None, sample_weight=None, class_weight=None, reset_metrics=True) | Runs a single gradient update on a single batch of data. | | Arguments: | x: Input data. It could be: | - A Numpy array (or array-like), or a list of arrays | (in case the model has multiple inputs). | - A TensorFlow tensor, or a list of tensors | (in case the model has multiple inputs). | - A dict mapping input names to the corresponding array/tensors, | if the model has named inputs. | - A `tf.data` dataset. | y: Target data. Like the input data `x`, it could be either Numpy | array(s) or TensorFlow tensor(s). It should be consistent with `x` | (you cannot have Numpy inputs and tensor targets, or inversely). 
If | `x` is a dataset, `y` should not be specified | (since targets will be obtained from the iterator). | sample_weight: Optional array of the same length as x, containing | weights to apply to the model's loss for each sample. In the case of | temporal data, you can pass a 2D array with shape (samples, | sequence_length), to apply a different weight to every timestep of | every sample. In this case you should make sure to specify | sample_weight_mode="temporal" in compile(). This argument is not | supported when `x` is a dataset. | class_weight: Optional dictionary mapping class indices (integers) to a | weight (float) to apply to the model's loss for the samples from this | class during training. This can be useful to tell the model to "pay | more attention" to samples from an under-represented class. | reset_metrics: If `True`, the metrics returned will be only for this | batch. If `False`, the metrics will be statefully accumulated across | batches. | | Returns: | Scalar training loss | (if the model has a single output and no metrics) | or list of scalars (if the model has multiple outputs | and/or metrics). The attribute `model.metrics_names` will give you | the display labels for the scalar outputs. | | Raises: | ValueError: In case of invalid user-provided arguments. | | ---------------------------------------------------------------------- | Data descriptors inherited from tensorflow.python.keras.engine.training.Model: | | metrics | Returns the model's metrics added using `compile`, `add_metric` APIs. | | metrics_names | Returns the model's display labels for all outputs. | | run_eagerly | Settable attribute indicating whether the model should run eagerly. | | Running eagerly means that your model will be run step by step, | like Python code. Your model might run slower, but it should become easier | for you to debug it by stepping into individual layer calls. | | By default, we will attempt to compile your model to a static graph to | deliver the best execution performance. | | Returns: | Boolean, whether the model should run eagerly. | | sample_weights | | ---------------------------------------------------------------------- | Methods inherited from tensorflow.python.keras.engine.network.Network: | | __setattr__(self, name, value) | Support self.foo = trackable syntax. | | get_layer(self, name=None, index=None) | Retrieves a layer based on either its name (unique) or index. | | If `name` and `index` are both provided, `index` will take precedence. | Indices are based on order of horizontal graph traversal (bottom-up). | | Arguments: | name: String, name of layer. | index: Integer, index of layer. | | Returns: | A layer instance. | | Raises: | ValueError: In case of invalid layer name or index. | | reset_states(self) | | save(self, filepath, overwrite=True, include_optimizer=True, save_format=None, signatures=None, options=None) | Saves the model to Tensorflow SavedModel or a single HDF5 file. | | The savefile includes: | - The model architecture, allowing to re-instantiate the model. | - The model weights. | - The state of the optimizer, allowing to resume training | exactly where you left off. | | This allows you to save the entirety of the state of a model | in a single file. | | Saved models can be reinstantiated via `keras.models.load_model`. | The model returned by `load_model` | is a compiled model ready to be used (unless the saved model | was never compiled in the first place). | | Arguments: | filepath: String, path to SavedModel or H5 file to save the model. 
| overwrite: Whether to silently overwrite any existing file at the | target location, or provide the user with a manual prompt. | include_optimizer: If True, save optimizer's state together. | save_format: Either 'tf' or 'h5', indicating whether to save the model | to Tensorflow SavedModel or HDF5. The default is currently 'h5', but | will switch to 'tf' in TensorFlow 2.0. The 'tf' option is currently | disabled (use `tf.keras.experimental.export_saved_model` instead). | signatures: Signatures to save with the SavedModel. Applicable to the 'tf' | format only. Please see the `signatures` argument in | `tf.saved_model.save` for details. | options: Optional `tf.saved_model.SaveOptions` object that specifies | options for saving to SavedModel. | | Example: | | ```python | from keras.models import load_model | | model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' | del model # deletes the existing model | | # returns a compiled model | # identical to the previous one | model = load_model('my_model.h5') | ``` | | save_weights(self, filepath, overwrite=True, save_format=None) | Saves all layer weights. | | Either saves in HDF5 or in TensorFlow format based on the `save_format` | argument. | | When saving in HDF5 format, the weight file has: | - `layer_names` (attribute), a list of strings | (ordered names of model layers). | - For every layer, a `group` named `layer.name` | - For every such layer group, a group attribute `weight_names`, | a list of strings | (ordered names of weights tensor of the layer). | - For every weight in the layer, a dataset | storing the weight value, named after the weight tensor. | | When saving in TensorFlow format, all objects referenced by the network are | saved in the same format as `tf.train.Checkpoint`, including any `Layer` | instances or `Optimizer` instances assigned to object attributes. For | networks constructed from inputs and outputs using `tf.keras.Model(inputs, | outputs)`, `Layer` instances used by the network are tracked/saved | automatically. For user-defined classes which inherit from `tf.keras.Model`, | `Layer` instances must be assigned to object attributes, typically in the | constructor. See the documentation of `tf.train.Checkpoint` and | `tf.keras.Model` for details. | | While the formats are the same, do not mix `save_weights` and | `tf.train.Checkpoint`. Checkpoints saved by `Model.save_weights` should be | loaded using `Model.load_weights`. Checkpoints saved using | `tf.train.Checkpoint.save` should be restored using the corresponding | `tf.train.Checkpoint.restore`. Prefer `tf.train.Checkpoint` over | `save_weights` for training checkpoints. | | The TensorFlow format matches objects and variables by starting at a root | object, `self` for `save_weights`, and greedily matching attribute | names. For `Model.save` this is the `Model`, and for `Checkpoint.save` this | is the `Checkpoint` even if the `Checkpoint` has a model attached. This | means saving a `tf.keras.Model` using `save_weights` and loading into a | `tf.train.Checkpoint` with a `Model` attached (or vice versa) will not match | the `Model`'s variables. See the [guide to training | checkpoints](https://www.tensorflow.org/alpha/guide/checkpoints) for details | on the TensorFlow format. | | Arguments: | filepath: String, path to the file to save the weights to. When saving | in TensorFlow format, this is the prefix used for checkpoint files | (multiple files are generated). Note that the '.h5' suffix causes | weights to be saved in HDF5 format. 
| overwrite: Whether to silently overwrite any existing file at the | target location, or provide the user with a manual prompt. | save_format: Either 'tf' or 'h5'. A `filepath` ending in '.h5' or | '.keras' will default to HDF5 if `save_format` is `None`. Otherwise | `None` defaults to 'tf'. | | Raises: | ImportError: If h5py is not available when attempting to save in HDF5 | format. | ValueError: For invalid/unknown format arguments. | | summary(self, line_length=None, positions=None, print_fn=None) | Prints a string summary of the network. | | Arguments: | line_length: Total length of printed lines | (e.g. set this to adapt the display to different | terminal window sizes). | positions: Relative or absolute positions of log elements | in each line. If not provided, | defaults to `[.33, .55, .67, 1.]`. | print_fn: Print function to use. Defaults to `print`. | It will be called on each line of the summary. | You can set it to a custom function | in order to capture the string summary. | | Raises: | ValueError: if `summary()` is called before the model is built. | | to_json(self, **kwargs) | Returns a JSON string containing the network configuration. | | To load a network from a JSON save file, use | `keras.models.model_from_json(json_string, custom_objects={})`. | | Arguments: | **kwargs: Additional keyword arguments | to be passed to `json.dumps()`. | | Returns: | A JSON string. | | to_yaml(self, **kwargs) | Returns a yaml string containing the network configuration. | | To load a network from a yaml save file, use | `keras.models.model_from_yaml(yaml_string, custom_objects={})`. | | `custom_objects` should be a dictionary mapping | the names of custom losses / layers / etc to the corresponding | functions / classes. | | Arguments: | **kwargs: Additional keyword arguments | to be passed to `yaml.dump()`. | | Returns: | A YAML string. | | Raises: | ImportError: if yaml module is not found. | | ---------------------------------------------------------------------- | Data descriptors inherited from tensorflow.python.keras.engine.network.Network: | | non_trainable_weights | | state_updates | Returns the `updates` from all layers that are stateful. | | This is useful for separating training updates and | state updates, e.g. when we need to update a layer's internal state | during prediction. | | Returns: | A list of update ops. | | stateful | | trainable_weights | | weights | Returns the list of all layer variables/weights. | | Returns: | A list of variables. | | ---------------------------------------------------------------------- | Methods inherited from tensorflow.python.keras.engine.base_layer.Layer: | | __call__(self, inputs, *args, **kwargs) | Wraps `call`, applying pre- and post-processing steps. | | Arguments: | inputs: input tensor(s). | *args: additional positional arguments to be passed to `self.call`. | **kwargs: additional keyword arguments to be passed to `self.call`. | | Returns: | Output tensor(s). | | Note: | - The following optional keyword arguments are reserved for specific uses: | * `training`: Boolean scalar tensor of Python boolean indicating | whether the `call` is meant for training or inference. | * `mask`: Boolean input mask. | - If the layer's `call` method takes a `mask` argument (as some Keras | layers do), its default value will be set to the mask generated | for `inputs` by the previous layer (if `input` did come from | a layer that generated a corresponding mask, i.e. if it came from | a Keras layer with masking support. 
| | Raises: | ValueError: if the layer's `call` method returns None (an invalid value). | | __delattr__(self, name) | Implement delattr(self, name). | | add_loss(self, losses, inputs=None) | Add loss tensor(s), potentially dependent on layer inputs. | | Some losses (for instance, activity regularization losses) may be dependent | on the inputs passed when calling a layer. Hence, when reusing the same | layer on different inputs `a` and `b`, some entries in `layer.losses` may | be dependent on `a` and some on `b`. This method automatically keeps track | of dependencies. | | This method can be used inside a subclassed layer or model's `call` | function, in which case `losses` should be a Tensor or list of Tensors. | | Example: | | ```python | class MyLayer(tf.keras.layers.Layer): | def call(inputs, self): | self.add_loss(tf.abs(tf.reduce_mean(inputs)), inputs=True) | return inputs | ``` | | This method can also be called directly on a Functional Model during | construction. In this case, any loss Tensors passed to this Model must | be symbolic and be able to be traced back to the model's `Input`s. These | losses become part of the model's topology and are tracked in `get_config`. | | Example: | | ```python | inputs = tf.keras.Input(shape=(10,)) | x = tf.keras.layers.Dense(10)(inputs) | outputs = tf.keras.layers.Dense(1)(x) | model = tf.keras.Model(inputs, outputs) | # Actvity regularization. | model.add_loss(tf.abs(tf.reduce_mean(x))) | ``` | | If this is not the case for your loss (if, for example, your loss references | a `Variable` of one of the model's layers), you can wrap your loss in a | zero-argument lambda. These losses are not tracked as part of the model's | topology since they can't be serialized. | | Example: | | ```python | inputs = tf.keras.Input(shape=(10,)) | x = tf.keras.layers.Dense(10)(inputs) | outputs = tf.keras.layers.Dense(1)(x) | model = tf.keras.Model(inputs, outputs) | # Weight regularization. | model.add_loss(lambda: tf.reduce_mean(x.kernel)) | ``` | | The `get_losses_for` method allows to retrieve the losses relevant to a | specific set of inputs. | | Arguments: | losses: Loss tensor, or list/tuple of tensors. Rather than tensors, losses | may also be zero-argument callables which create a loss tensor. | inputs: Ignored when executing eagerly. If anything other than None is | passed, it signals the losses are conditional on some of the layer's | inputs, and thus they should only be run where these inputs are | available. This is the case for activity regularization losses, for | instance. If `None` is passed, the losses are assumed | to be unconditional, and will apply across all dataflows of the layer | (e.g. weight regularization losses). | | add_metric(self, value, aggregation=None, name=None) | Adds metric tensor to the layer. | | Args: | value: Metric tensor. | aggregation: Sample-wise metric reduction function. If `aggregation=None`, | it indicates that the metric tensor provided has been aggregated | already. eg, `bin_acc = BinaryAccuracy(name='acc')` followed by | `model.add_metric(bin_acc(y_true, y_pred))`. If aggregation='mean', the | given metric tensor will be sample-wise reduced using `mean` function. | eg, `model.add_metric(tf.reduce_sum(outputs), name='output_mean', | aggregation='mean')`. | name: String metric name. | | Raises: | ValueError: If `aggregation` is anything other than None or `mean`. | | add_update(self, updates, inputs=None) | Add update op(s), potentially dependent on layer inputs. 
(deprecated arguments) | | Warning: SOME ARGUMENTS ARE DEPRECATED: `(inputs)`. They will be removed in a future version. | Instructions for updating: | `inputs` is now automatically inferred | | Weight updates (for instance, the updates of the moving mean and variance | in a BatchNormalization layer) may be dependent on the inputs passed | when calling a layer. Hence, when reusing the same layer on | different inputs `a` and `b`, some entries in `layer.updates` may be | dependent on `a` and some on `b`. This method automatically keeps track | of dependencies. | | The `get_updates_for` method allows to retrieve the updates relevant to a | specific set of inputs. | | This call is ignored when eager execution is enabled (in that case, variable | updates are run on the fly and thus do not need to be tracked for later | execution). | | Arguments: | updates: Update op, or list/tuple of update ops, or zero-arg callable | that returns an update op. A zero-arg callable should be passed in | order to disable running the updates by setting `trainable=False` | on this Layer, when executing in Eager mode. | inputs: Deprecated, will be automatically inferred. | | add_variable(self, *args, **kwargs) | Deprecated, do NOT use! Alias for `add_weight`. (deprecated) | | Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. | Instructions for updating: | Please use `layer.add_weight` method instead. | | add_weight(self, name=None, shape=None, dtype=None, initializer=None, regularizer=None, trainable=None, constraint=None, partitioner=None, use_resource=None, synchronization=<VariableSynchronization.AUTO: 0>, aggregation=<VariableAggregation.NONE: 0>, **kwargs) | Adds a new variable to the layer. | | Arguments: | name: Variable name. | shape: Variable shape. Defaults to scalar if unspecified. | dtype: The type of the variable. Defaults to `self.dtype` or `float32`. | initializer: Initializer instance (callable). | regularizer: Regularizer instance (callable). | trainable: Boolean, whether the variable should be part of the layer's | "trainable_variables" (e.g. variables, biases) | or "non_trainable_variables" (e.g. BatchNorm mean and variance). | Note that `trainable` cannot be `True` if `synchronization` | is set to `ON_READ`. | constraint: Constraint instance (callable). | partitioner: Partitioner to be passed to the `Trackable` API. | use_resource: Whether to use `ResourceVariable`. | synchronization: Indicates when a distributed a variable will be | aggregated. Accepted values are constants defined in the class | `tf.VariableSynchronization`. By default the synchronization is set to | `AUTO` and the current `DistributionStrategy` chooses | when to synchronize. If `synchronization` is set to `ON_READ`, | `trainable` must not be set to `True`. | aggregation: Indicates how a distributed variable will be aggregated. | Accepted values are constants defined in the class | `tf.VariableAggregation`. | **kwargs: Additional keyword arguments. Accepted values are `getter` and | `collections`. | | Returns: | The created variable. Usually either a `Variable` or `ResourceVariable` | instance. If `partitioner` is not `None`, a `PartitionedVariable` | instance is returned. | | Raises: | RuntimeError: If called with partitioned variable regularization and | eager execution is enabled. | ValueError: When giving unsupported dtype and no initializer or when | trainable has been set to True with synchronization set as `ON_READ`. | | apply(self, inputs, *args, **kwargs) | Deprecated, do NOT use! 
(deprecated) | | Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. | Instructions for updating: | Please use `layer.__call__` method instead. | | This is an alias of `self.__call__`. | | Arguments: | inputs: Input tensor(s). | *args: additional positional arguments to be passed to `self.call`. | **kwargs: additional keyword arguments to be passed to `self.call`. | | Returns: | Output tensor(s). | | compute_output_signature(self, input_signature) | Compute the output tensor signature of the layer based on the inputs. | | Unlike a TensorShape object, a TensorSpec object contains both shape | and dtype information for a tensor. This method allows layers to provide | output dtype information if it is different from the input dtype. | For any layer that doesn't implement this function, | the framework will fall back to use `compute_output_shape`, and will | assume that the output dtype matches the input dtype. | | Args: | input_signature: Single TensorSpec or nested structure of TensorSpec | objects, describing a candidate input for the layer. | | Returns: | Single TensorSpec or nested structure of TensorSpec objects, describing | how the layer would transform the provided input. | | Raises: | TypeError: If input_signature contains a non-TensorSpec object. | | count_params(self) | Count the total number of scalars composing the weights. | | Returns: | An integer count. | | Raises: | ValueError: if the layer isn't yet built | (in which case its weights aren't yet defined). | | get_input_at(self, node_index) | Retrieves the input tensor(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. | E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A tensor (or list of tensors if the layer has multiple inputs). | | Raises: | RuntimeError: If called in Eager mode. | | get_input_mask_at(self, node_index) | Retrieves the input mask tensor(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. | E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A mask tensor | (or list of tensors if the layer has multiple inputs). | | get_input_shape_at(self, node_index) | Retrieves the input shape(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. | E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A shape tuple | (or list of shape tuples if the layer has multiple inputs). | | Raises: | RuntimeError: If called in Eager mode. | | get_losses_for(self, inputs) | Retrieves losses relevant to a specific set of inputs. | | Arguments: | inputs: Input tensor or list/tuple of input tensors. | | Returns: | List of loss tensors of the layer that depend on `inputs`. | | get_output_at(self, node_index) | Retrieves the output tensor(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. | E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A tensor (or list of tensors if the layer has multiple outputs). | | Raises: | RuntimeError: If called in Eager mode. | | get_output_mask_at(self, node_index) | Retrieves the output mask tensor(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. 
| E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A mask tensor | (or list of tensors if the layer has multiple outputs). | | get_output_shape_at(self, node_index) | Retrieves the output shape(s) of a layer at a given node. | | Arguments: | node_index: Integer, index of the node | from which to retrieve the attribute. | E.g. `node_index=0` will correspond to the | first time the layer was called. | | Returns: | A shape tuple | (or list of shape tuples if the layer has multiple outputs). | | Raises: | RuntimeError: If called in Eager mode. | | get_updates_for(self, inputs) | Retrieves updates relevant to a specific set of inputs. | | Arguments: | inputs: Input tensor or list/tuple of input tensors. | | Returns: | List of update ops of the layer that depend on `inputs`. | | set_weights(self, weights) | Sets the weights of the layer, from Numpy arrays. | | Arguments: | weights: a list of Numpy arrays. The number | of arrays and their shape must match | number of the dimensions of the weights | of the layer (i.e. it should match the | output of `get_weights`). | | Raises: | ValueError: If the provided weights list does not match the | layer's specifications. | | ---------------------------------------------------------------------- | Data descriptors inherited from tensorflow.python.keras.engine.base_layer.Layer: | | activity_regularizer | Optional regularizer function for the output of this layer. | | dtype | | inbound_nodes | Deprecated, do NOT use! Only for compatibility with external Keras. | | input | Retrieves the input tensor(s) of a layer. | | Only applicable if the layer has exactly one input, | i.e. if it is connected to one incoming layer. | | Returns: | Input tensor or list of input tensors. | | Raises: | RuntimeError: If called in Eager mode. | AttributeError: If no inbound nodes are found. | | input_mask | Retrieves the input mask tensor(s) of a layer. | | Only applicable if the layer has exactly one inbound node, | i.e. if it is connected to one incoming layer. | | Returns: | Input mask tensor (potentially None) or list of input | mask tensors. | | Raises: | AttributeError: if the layer is connected to | more than one incoming layers. | | input_shape | Retrieves the input shape(s) of a layer. | | Only applicable if the layer has exactly one input, | i.e. if it is connected to one incoming layer, or if all inputs | have the same shape. | | Returns: | Input shape, as an integer shape tuple | (or list of shape tuples, one tuple per input tensor). | | Raises: | AttributeError: if the layer has no defined input_shape. | RuntimeError: if called in Eager mode. | | losses | Losses which are associated with this `Layer`. | | Variable regularization tensors are created when this property is accessed, | so it is eager safe: accessing `losses` under a `tf.GradientTape` will | propagate gradients back to the corresponding variables. | | Returns: | A list of tensors. | | name | Returns the name of this module as passed or determined in the ctor. | | NOTE: This is not the same as the `self.name_scope.name` which includes | parent module names. | | non_trainable_variables | | outbound_nodes | Deprecated, do NOT use! Only for compatibility with external Keras. | | output | Retrieves the output tensor(s) of a layer. | | Only applicable if the layer has exactly one output, | i.e. if it is connected to one incoming layer. | | Returns: | Output tensor or list of output tensors. 
| | Raises: | AttributeError: if the layer is connected to more than one incoming | layers. | RuntimeError: if called in Eager mode. | | output_mask | Retrieves the output mask tensor(s) of a layer. | | Only applicable if the layer has exactly one inbound node, | i.e. if it is connected to one incoming layer. | | Returns: | Output mask tensor (potentially None) or list of output | mask tensors. | | Raises: | AttributeError: if the layer is connected to | more than one incoming layers. | | output_shape | Retrieves the output shape(s) of a layer. | | Only applicable if the layer has one output, | or if all outputs have the same shape. | | Returns: | Output shape, as an integer shape tuple | (or list of shape tuples, one tuple per output tensor). | | Raises: | AttributeError: if the layer has no defined output shape. | RuntimeError: if called in Eager mode. | | trainable | | trainable_variables | Sequence of variables owned by this module and it's submodules. | | Note: this method uses reflection to find variables on the current instance | and submodules. For performance reasons you may wish to cache the result | of calling this method if you don't expect the return value to change. | | Returns: | A sequence of variables for the current module (sorted by attribute | name) followed by variables from all submodules recursively (breadth | first). | | updates | | variables | Returns the list of all layer variables/weights. | | Alias of `self.weights`. | | Returns: | A list of variables. | | ---------------------------------------------------------------------- | Class methods inherited from tensorflow.python.module.module.Module: | | with_name_scope(method) from builtins.type | Decorator to automatically enter the module name scope. | | ``` | class MyModule(tf.Module): | @tf.Module.with_name_scope | def __call__(self, x): | if not hasattr(self, 'w'): | self.w = tf.Variable(tf.random.normal([x.shape[1], 64])) | return tf.matmul(x, self.w) | ``` | | Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose | names included the module name: | | ``` | mod = MyModule() | mod(tf.ones([8, 32])) | # ==> <tf.Tensor: ...> | mod.w | # ==> <tf.Variable ...'my_module/w:0'> | ``` | | Args: | method: The method to wrap. | | Returns: | The original method wrapped such that it enters the module's name scope. | | ---------------------------------------------------------------------- | Data descriptors inherited from tensorflow.python.module.module.Module: | | name_scope | Returns a `tf.name_scope` instance for this class. | | submodules | Sequence of all sub-modules. | | Submodules are modules which are properties of this module, or found as | properties of modules which are properties of this module (and so on). | | ``` | a = tf.Module() | b = tf.Module() | c = tf.Module() | a.b = b | b.c = c | assert list(a.submodules) == [b, c] | assert list(b.submodules) == [c] | assert list(c.submodules) == [] | ``` | | Returns: | A sequence of all submodules. | | ---------------------------------------------------------------------- | Data descriptors inherited from tensorflow.python.training.tracking.base.Trackable: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined)
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Creating a Model

There are two ways to create models through the TF 2 Keras API: either pass in a list of layers all at once, or add them one by one. Let's show both methods (it's up to you to choose which method you prefer).
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Model - as a list of layers
model = Sequential([
    Dense(units=2),
    Dense(units=2),
    Dense(units=2)
])
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Model - adding in layers one by one
model = Sequential()
model.add(Dense(2))
model.add(Dense(2))
model.add(Dense(2))
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Let's go ahead and build a simple model, then compile it by defining the optimizer and loss function.
model = Sequential()

model.add(Dense(4, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(4, activation='relu'))

# Final output node for prediction
model.add(Dense(1))

model.compile(optimizer='rmsprop', loss='mse')
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Choosing an optimizer and loss

Keep in mind what kind of problem you are trying to solve:

    # For a multi-class classification problem
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

    # For a binary classification problem
    model.compile(optimizer='rmsprop',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])

    # For a mean squared error regression problem
    model.compile(optimizer='rmsprop',
                  loss='mse')

Training

Below are some common definitions that are necessary to know and understand to correctly utilize Keras:

* Sample: one element of a dataset.
    * Example: one image is a sample in a convolutional network.
    * Example: one audio file is a sample for a speech recognition model.
* Batch: a set of N samples. The samples in a batch are processed independently, in parallel. In training, a batch results in only one update to the model. A batch generally approximates the distribution of the input data better than a single input. The larger the batch, the better the approximation; however, a larger batch also takes longer to process and still results in only one update. For inference (evaluate/predict), it is recommended to pick a batch size that is as large as you can afford without running out of memory (since larger batches will usually result in faster evaluation/prediction).
* Epoch: an arbitrary cutoff, generally defined as "one pass over the entire dataset", used to separate training into distinct phases, which is useful for logging and periodic evaluation.
* When using validation_data or validation_split with the fit method of Keras models, evaluation will be run at the end of every epoch.
* Keras also supports callbacks specifically designed to run at the end of an epoch. Examples of these are learning rate changes and model checkpointing (saving).
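For instance, `validation_split` and an end-of-epoch callback can be combined in a single `fit` call. The sketch below is illustrative only (the training cell that follows uses the plain call); the `EarlyStopping` callback and its settings are assumptions, not part of the original notebook.

```python
# Illustrative only -- the notebook itself trains with the plain fit() call below.
from tensorflow.keras.callbacks import EarlyStopping

# Stop training when the validation loss has not improved for 10 epochs (assumed setting)
early_stop = EarlyStopping(monitor='val_loss', patience=10)

model.fit(X_train, y_train,
          epochs=250,
          validation_split=0.2,   # hold out 20% of the training data for end-of-epoch evaluation
          callbacks=[early_stop])
```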
model.fit(X_train,y_train,epochs=250)
Train on 700 samples Epoch 1/250 700/700 [==============================] - 1s 1ms/sample - loss: 256678.6899 Epoch 2/250 700/700 [==============================] - 0s 67us/sample - loss: 256557.3328 Epoch 3/250 700/700 [==============================] - 0s 67us/sample - loss: 256435.2685 Epoch 4/250 700/700 [==============================] - 0s 69us/sample - loss: 256297.5242 Epoch 5/250 700/700 [==============================] - 0s 67us/sample - loss: 256139.6521 Epoch 6/250 700/700 [==============================] - 0s 89us/sample - loss: 255959.0959 Epoch 7/250 700/700 [==============================] - 0s 56us/sample - loss: 255751.4558 Epoch 8/250 700/700 [==============================] - 0s 89us/sample - loss: 255515.1171 Epoch 9/250 700/700 [==============================] - 0s 67us/sample - loss: 255240.5993 Epoch 10/250 700/700 [==============================] - 0s 89us/sample - loss: 254925.4916 Epoch 11/250 700/700 [==============================] - 0s 69us/sample - loss: 254567.7298 Epoch 12/250 700/700 [==============================] - 0s 67us/sample - loss: 254163.5860 Epoch 13/250 700/700 [==============================] - 0s 67us/sample - loss: 253711.2249 Epoch 14/250 700/700 [==============================] - 0s 57us/sample - loss: 253207.9388 Epoch 15/250 700/700 [==============================] - 0s 89us/sample - loss: 252649.8949 Epoch 16/250 700/700 [==============================] - 0s 67us/sample - loss: 252035.8005 Epoch 17/250 700/700 [==============================] - 0s 89us/sample - loss: 251361.9668 Epoch 18/250 700/700 [==============================] - 0s 69us/sample - loss: 250630.4323 Epoch 19/250 700/700 [==============================] - 0s 89us/sample - loss: 249834.5367 Epoch 20/250 700/700 [==============================] - 0s 67us/sample - loss: 248964.4419 Epoch 21/250 700/700 [==============================] - 0s 89us/sample - loss: 248029.2328 Epoch 22/250 700/700 [==============================] - 0s 67us/sample - loss: 247016.8577 Epoch 23/250 700/700 [==============================] - 0s 89us/sample - loss: 245919.6555 Epoch 24/250 700/700 [==============================] - 0s 67us/sample - loss: 244745.7887 Epoch 25/250 700/700 [==============================] - 0s 89us/sample - loss: 243485.6529 Epoch 26/250 700/700 [==============================] - 0s 67us/sample - loss: 242129.3484 Epoch 27/250 700/700 [==============================] - 0s 89us/sample - loss: 240689.1388 Epoch 28/250 700/700 [==============================] - 0s 67us/sample - loss: 239153.4667 Epoch 29/250 700/700 [==============================] - 0s 89us/sample - loss: 237520.4308 Epoch 30/250 700/700 [==============================] - 0s 67us/sample - loss: 235783.5987 Epoch 31/250 700/700 [==============================] - 0s 89us/sample - loss: 233942.2699 Epoch 32/250 700/700 [==============================] - 0s 69us/sample - loss: 231982.6838 Epoch 33/250 700/700 [==============================] - 0s 67us/sample - loss: 229905.5206 Epoch 34/250 700/700 [==============================] - 0s 89us/sample - loss: 227726.2409 Epoch 35/250 700/700 [==============================] - 0s 67us/sample - loss: 225433.7657 Epoch 36/250 700/700 [==============================] - 0s 91us/sample - loss: 223007.5024 Epoch 37/250 700/700 [==============================] - 0s 67us/sample - loss: 220470.3121 Epoch 38/250 700/700 [==============================] - 0s 67us/sample - loss: 217800.4992 Epoch 39/250 700/700 [==============================] - 0s 89us/sample - loss: 
215000.5040 Epoch 40/250 700/700 [==============================] - 0s 67us/sample - loss: 212070.4630 Epoch 41/250 700/700 [==============================] - 0s 89us/sample - loss: 209021.6112 Epoch 42/250 700/700 [==============================] - 0s 67us/sample - loss: 205820.6153 Epoch 43/250 700/700 [==============================] - 0s 67us/sample - loss: 202485.9254 Epoch 44/250 700/700 [==============================] - 0s 89us/sample - loss: 199032.7301 Epoch 45/250 700/700 [==============================] - 0s 67us/sample - loss: 195436.0692 Epoch 46/250 700/700 [==============================] - 0s 89us/sample - loss: 191699.3609 Epoch 47/250 700/700 [==============================] - 0s 67us/sample - loss: 187801.8943 Epoch 48/250 700/700 [==============================] - 0s 67us/sample - loss: 183781.5669 Epoch 49/250 700/700 [==============================] - 0s 89us/sample - loss: 179660.2206 Epoch 50/250 700/700 [==============================] - 0s 67us/sample - loss: 175374.3602 Epoch 51/250 700/700 [==============================] - 0s 89us/sample - loss: 170959.2488 Epoch 52/250 700/700 [==============================] - 0s 67us/sample - loss: 166390.8793 Epoch 53/250 700/700 [==============================] - 0s 89us/sample - loss: 161693.8322 Epoch 54/250 700/700 [==============================] - 0s 67us/sample - loss: 156896.2863 Epoch 55/250 700/700 [==============================] - 0s 67us/sample - loss: 151958.7138 Epoch 56/250 700/700 [==============================] - 0s 67us/sample - loss: 146943.3821 Epoch 57/250 700/700 [==============================] - 0s 67us/sample - loss: 141799.3351 Epoch 58/250 700/700 [==============================] - 0s 67us/sample - loss: 136534.7192 Epoch 59/250 700/700 [==============================] - 0s 67us/sample - loss: 131191.1925 Epoch 60/250 700/700 [==============================] - 0s 67us/sample - loss: 125746.5604 Epoch 61/250 700/700 [==============================] - 0s 89us/sample - loss: 120214.6602 Epoch 62/250 700/700 [==============================] - 0s 67us/sample - loss: 114611.5430 Epoch 63/250 700/700 [==============================] - 0s 67us/sample - loss: 108961.6057 Epoch 64/250 700/700 [==============================] - 0s 89us/sample - loss: 103238.3104 Epoch 65/250 700/700 [==============================] - 0s 67us/sample - loss: 97488.6292 Epoch 66/250 700/700 [==============================] - 0s 67us/sample - loss: 91736.5993 Epoch 67/250 700/700 [==============================] - 0s 78us/sample - loss: 85975.4235 Epoch 68/250 700/700 [==============================] - 0s 67us/sample - loss: 80189.9361 Epoch 69/250 700/700 [==============================] - 0s 89us/sample - loss: 74465.9286 Epoch 70/250 700/700 [==============================] - 0s 67us/sample - loss: 68733.6601 Epoch 71/250 700/700 [==============================] - 0s 69us/sample - loss: 63123.0146 Epoch 72/250 700/700 [==============================] - 0s 89us/sample - loss: 57568.7673 Epoch 73/250 700/700 [==============================] - 0s 67us/sample - loss: 52143.8000 Epoch 74/250 700/700 [==============================] - 0s 67us/sample - loss: 46841.6530 Epoch 75/250 700/700 [==============================] - 0s 67us/sample - loss: 41664.3811 Epoch 76/250 700/700 [==============================] - 0s 89us/sample - loss: 36710.3025 Epoch 77/250 700/700 [==============================] - 0s 67us/sample - loss: 31980.2638 Epoch 78/250 700/700 [==============================] - 0s 67us/sample - loss: 27490.0044 Epoch 
79/250 700/700 [==============================] - 0s 67us/sample - loss: 23295.2193 Epoch 80/250 700/700 [==============================] - 0s 89us/sample - loss: 19399.2424 Epoch 81/250 700/700 [==============================] - 0s 67us/sample - loss: 15821.4121 Epoch 82/250 700/700 [==============================] - 0s 67us/sample - loss: 12634.9319 Epoch 83/250 700/700 [==============================] - 0s 67us/sample - loss: 9866.9726 Epoch 84/250 700/700 [==============================] - 0s 67us/sample - loss: 7541.5573 Epoch 85/250 700/700 [==============================] - 0s 67us/sample - loss: 5719.8526 Epoch 86/250 700/700 [==============================] - 0s 67us/sample - loss: 4370.8675 Epoch 87/250 700/700 [==============================] - 0s 67us/sample - loss: 3482.8717 Epoch 88/250 700/700 [==============================] - 0s 67us/sample - loss: 3081.3459 Epoch 89/250 700/700 [==============================] - 0s 89us/sample - loss: 2955.0584 Epoch 90/250 700/700 [==============================] - 0s 67us/sample - loss: 2919.0084 Epoch 91/250 700/700 [==============================] - 0s 67us/sample - loss: 2878.1071 Epoch 92/250 700/700 [==============================] - 0s 85us/sample - loss: 2835.3627 Epoch 93/250 700/700 [==============================] - 0s 67us/sample - loss: 2793.9308 Epoch 94/250 700/700 [==============================] - 0s 89us/sample - loss: 2754.3078 Epoch 95/250 700/700 [==============================] - 0s 69us/sample - loss: 2718.6959 Epoch 96/250 700/700 [==============================] - 0s 67us/sample - loss: 2676.1233 Epoch 97/250 700/700 [==============================] - 0s 67us/sample - loss: 2641.5044 Epoch 98/250 700/700 [==============================] - 0s 67us/sample - loss: 2600.8627 Epoch 99/250 700/700 [==============================] - 0s 67us/sample - loss: 2567.6836 Epoch 100/250 700/700 [==============================] - 0s 67us/sample - loss: 2527.4432 Epoch 101/250 700/700 [==============================] - 0s 67us/sample - loss: 2494.8283 Epoch 102/250 700/700 [==============================] - 0s 89us/sample - loss: 2459.9839 Epoch 103/250 700/700 [==============================] - 0s 80us/sample - loss: 2422.0237 Epoch 104/250 700/700 [==============================] - 0s 67us/sample - loss: 2385.5557 Epoch 105/250 700/700 [==============================] - 0s 67us/sample - loss: 2352.6271 Epoch 106/250 700/700 [==============================] - 0s 67us/sample - loss: 2315.9826 Epoch 107/250 700/700 [==============================] - 0s 69us/sample - loss: 2275.5747 Epoch 108/250 700/700 [==============================] - 0s 67us/sample - loss: 2240.5681 Epoch 109/250 700/700 [==============================] - 0s 67us/sample - loss: 2202.7267 Epoch 110/250 700/700 [==============================] - 0s 78us/sample - loss: 2164.8818 Epoch 111/250 700/700 [==============================] - 0s 67us/sample - loss: 2128.8680 Epoch 112/250 700/700 [==============================] - 0s 67us/sample - loss: 2093.5601 Epoch 113/250 700/700 [==============================] - 0s 89us/sample - loss: 2059.8525 Epoch 114/250 700/700 [==============================] - 0s 67us/sample - loss: 2027.5212 Epoch 115/250 700/700 [==============================] - 0s 69us/sample - loss: 1993.6040 Epoch 116/250 700/700 [==============================] - 0s 67us/sample - loss: 1956.8016 Epoch 117/250 700/700 [==============================] - 0s 89us/sample - loss: 1925.7439 Epoch 118/250 700/700 [==============================] - 0s 
67us/sample - loss: 1893.9992 Epoch 119/250 700/700 [==============================] - 0s 67us/sample - loss: 1859.5495 Epoch 120/250 700/700 [==============================] - 0s 67us/sample - loss: 1829.7004 Epoch 121/250 700/700 [==============================] - 0s 67us/sample - loss: 1794.5159 Epoch 122/250 700/700 [==============================] - 0s 89us/sample - loss: 1762.4011 Epoch 123/250 700/700 [==============================] - 0s 67us/sample - loss: 1731.3614 Epoch 124/250 700/700 [==============================] - 0s 67us/sample - loss: 1694.8818 Epoch 125/250 700/700 [==============================] - 0s 67us/sample - loss: 1660.6659 Epoch 126/250 700/700 [==============================] - 0s 69us/sample - loss: 1628.8121 Epoch 127/250 700/700 [==============================] - 0s 67us/sample - loss: 1596.7363 Epoch 128/250 700/700 [==============================] - 0s 89us/sample - loss: 1561.3069 Epoch 129/250 700/700 [==============================] - 0s 67us/sample - loss: 1525.3697 Epoch 130/250 700/700 [==============================] - 0s 67us/sample - loss: 1501.4490 Epoch 131/250 700/700 [==============================] - 0s 67us/sample - loss: 1471.8032 Epoch 132/250 700/700 [==============================] - 0s 69us/sample - loss: 1441.8526 Epoch 133/250 700/700 [==============================] - 0s 67us/sample - loss: 1411.3840 Epoch 134/250 700/700 [==============================] - 0s 67us/sample - loss: 1375.3392 Epoch 135/250 700/700 [==============================] - 0s 67us/sample - loss: 1344.4005 Epoch 136/250 700/700 [==============================] - 0s 67us/sample - loss: 1316.0051 Epoch 137/250 700/700 [==============================] - 0s 67us/sample - loss: 1286.1575 Epoch 138/250 700/700 [==============================] - 0s 67us/sample - loss: 1258.5466 Epoch 139/250 700/700 [==============================] - 0s 89us/sample - loss: 1231.0350 Epoch 140/250 700/700 [==============================] - 0s 67us/sample - loss: 1202.8353 Epoch 141/250 700/700 [==============================] - 0s 67us/sample - loss: 1171.3123 Epoch 142/250 700/700 [==============================] - 0s 67us/sample - loss: 1145.8823 Epoch 143/250 700/700 [==============================] - 0s 67us/sample - loss: 1117.1228 Epoch 144/250 700/700 [==============================] - 0s 67us/sample - loss: 1091.9406 Epoch 145/250 700/700 [==============================] - 0s 67us/sample - loss: 1066.3266 Epoch 146/250 700/700 [==============================] - 0s 67us/sample - loss: 1034.5236 Epoch 147/250 700/700 [==============================] - 0s 67us/sample - loss: 1009.6341 Epoch 148/250 700/700 [==============================] - 0s 89us/sample - loss: 982.0937 Epoch 149/250 700/700 [==============================] - 0s 67us/sample - loss: 954.0501 Epoch 150/250 700/700 [==============================] - 0s 67us/sample - loss: 926.7213 Epoch 151/250 700/700 [==============================] - 0s 67us/sample - loss: 903.3459 Epoch 152/250 700/700 [==============================] - 0s 67us/sample - loss: 873.8258 Epoch 153/250 700/700 [==============================] - 0s 89us/sample - loss: 846.7390 Epoch 154/250 700/700 [==============================] - 0s 67us/sample - loss: 822.1480 Epoch 155/250 700/700 [==============================] - 0s 67us/sample - loss: 795.3657 Epoch 156/250 700/700 [==============================] - 0s 88us/sample - loss: 770.9504 Epoch 157/250 700/700 [==============================] - 0s 89us/sample - loss: 744.3620 Epoch 158/250 700/700 
[==============================] - 0s 67us/sample - loss: 719.1004 Epoch 159/250 700/700 [==============================] - 0s 113us/sample - loss: 696.3267 Epoch 160/250 700/700 [==============================] - 0s 89us/sample - loss: 671.8435 Epoch 161/250 700/700 [==============================] - 0s 100us/sample - loss: 649.7230 Epoch 162/250 700/700 [==============================] - 0s 97us/sample - loss: 627.0320 Epoch 163/250 700/700 [==============================] - 0s 89us/sample - loss: 605.2505 Epoch 164/250 700/700 [==============================] - 0s 89us/sample - loss: 582.2282 Epoch 165/250 700/700 [==============================] - 0s 134us/sample - loss: 561.1635 Epoch 166/250 700/700 [==============================] - 0s 89us/sample - loss: 541.3536 Epoch 167/250 700/700 [==============================] - 0s 89us/sample - loss: 522.3132 Epoch 168/250 700/700 [==============================] - 0s 69us/sample - loss: 503.2385 Epoch 169/250 700/700 [==============================] - 0s 89us/sample - loss: 481.9888 Epoch 170/250 700/700 [==============================] - 0s 89us/sample - loss: 461.5032 Epoch 171/250 700/700 [==============================] - 0s 89us/sample - loss: 442.1222 Epoch 172/250 700/700 [==============================] - 0s 67us/sample - loss: 423.0606 Epoch 173/250 700/700 [==============================] - 0s 89us/sample - loss: 403.8695 Epoch 174/250 700/700 [==============================] - 0s 84us/sample - loss: 386.0664 Epoch 175/250 700/700 [==============================] - 0s 70us/sample - loss: 370.9212 Epoch 176/250 700/700 [==============================] - 0s 89us/sample - loss: 352.6306 Epoch 177/250 700/700 [==============================] - 0s 67us/sample - loss: 333.7979 Epoch 178/250 700/700 [==============================] - 0s 67us/sample - loss: 316.0235 Epoch 179/250 700/700 [==============================] - 0s 67us/sample - loss: 296.4844 Epoch 180/250 700/700 [==============================] - 0s 69us/sample - loss: 280.1557 Epoch 181/250 700/700 [==============================] - 0s 67us/sample - loss: 263.3886 Epoch 182/250
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Evaluation. Let's evaluate our performance on our training set and our test set. We can compare these two performances to check for overfitting.
model.history.history loss = model.history.history['loss'] sns.lineplot(x=range(len(loss)),y=loss) plt.title("Training Loss per Epoch");
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Compare final evaluation (MSE) on training set and test set. These should hopefully be fairly close to each other.
model.metrics_names training_score = model.evaluate(X_train,y_train,verbose=0) test_score = model.evaluate(X_test,y_test,verbose=0) training_score test_score
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
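To make the check for overfitting concrete, the two scores can be printed side by side and the relative gap computed. A minimal sketch, assuming `training_score` and `test_score` above hold the MSE values returned by `model.evaluate`:
```python
# Compare the final MSE on the training and test sets (sketch; uses the variables above)
print(f"Train MSE: {training_score:.2f}")
print(f"Test MSE:  {test_score:.2f}")

# A large positive relative gap would hint at overfitting to the training data
gap = (test_score - training_score) / training_score
print(f"Relative gap: {gap:.1%}")
```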
Further Evaluations
test_predictions = model.predict(X_test) test_predictions pred_df = pd.DataFrame(y_test,columns=['Test Y']) pred_df test_predictions = pd.Series(test_predictions.reshape(300,)) test_predictions pred_df = pd.concat([pred_df,test_predictions],axis=1) pred_df.columns = ['Test Y','Model Predictions'] pred_df
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Let's compare to the real test labels!
sns.scatterplot(x='Test Y',y='Model Predictions',data=pred_df) pred_df['Error'] = pred_df['Test Y'] - pred_df['Model Predictions'] sns.distplot(pred_df['Error'],bins=50) from sklearn.metrics import mean_absolute_error,mean_squared_error mean_absolute_error(pred_df['Test Y'],pred_df['Model Predictions']) mean_squared_error(pred_df['Test Y'],pred_df['Model Predictions']) # Essentially the same thing, difference just due to precision test_score #RMSE test_score**0.5
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
Predicting on brand new data. What if we just saw a brand new gemstone from the ground? What should we price it at? This is the **exact** same procedure as predicting on new test data!
# [[Feature1, Feature2]] new_gem = [[998,1000]] # Don't forget to scale! scaler.transform(new_gem) new_gem = scaler.transform(new_gem) model.predict(new_gem)
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
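Because every new sample must pass through the same fitted scaler before prediction, it can help to wrap the two steps in one small helper. A sketch, assuming the `scaler` and `model` objects defined earlier; `predict_price` is a hypothetical name:
```python
def predict_price(model, scaler, raw_features):
    """Scale raw feature rows with the fitted scaler, then return model predictions."""
    scaled = scaler.transform(raw_features)
    return model.predict(scaled)

# Example usage with the same hypothetical gemstone as above
print(predict_price(model, scaler, [[998, 1000]]))
```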
Saving and Loading a Model
from tensorflow.keras.models import load_model model.save('my_model.h5') # creates a HDF5 file 'my_model.h5' later_model = load_model('my_model.h5') later_model.predict(new_gem)
_____no_output_____
Apache-2.0
FINAL-TF2-FILES/TF_2_Notebooks_and_Data/03-ANNs/00-Keras-Syntax-Basics.ipynb
tanuja333/Tensorflow_Keras
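Note that `model.save` only persists the network weights and architecture; the fitted scaler is not part of the HDF5 file. If the model will be reused later, the scaler can be saved alongside it. A sketch using joblib (an assumption; pickle works the same way):
```python
import joblib

# Persist the fitted scaler so new data can be transformed identically later
joblib.dump(scaler, 'my_scaler.pkl')

# Later: reload both pieces before predicting on fresh samples
later_scaler = joblib.load('my_scaler.pkl')
later_model = load_model('my_model.h5')
later_model.predict(later_scaler.transform([[998, 1000]]))
```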
Basic example. Let's present what classo does when using its default parameters on synthetic data.
from classo import classo_problem, random_data import numpy as np
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Generate the data. This code snippet generates a problem instance with a sparse ß in dimension d=100 (sparsity d_nonzero=5). The design matrix X comprises n=100 samples generated from an i.i.d. standard normal distribution. The constraint matrix C has dimension d x k. The noise level is σ=0.5. The input `zerosum=True` implies that C is the all-ones vector and Cß=0. The n-dimensional outcome vector y and the regression vector ß are then generated to satisfy the given constraints.
m, d, d_nonzero, k, sigma = 100, 200, 5, 1, 0.5 (X, C, y), sol = random_data(m, d, d_nonzero, k, sigma, zerosum=True, seed=1)
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
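Since `zerosum=True` makes C the all-ones vector, the generated coefficient vector should satisfy Cß = 0, i.e. its entries sum to zero. A quick sanity check on the returned arrays (a sketch; names follow the cell above):
```python
# Shapes of the generated problem instance
print(X.shape, C.shape, y.shape)

# zerosum=True makes C the all-ones vector, so the entries of the true ß should sum to zero
print(np.isclose(np.sum(sol), 0))
```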
Remark: one can see the parameters that should be selected:
print(np.nonzero(sol))
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Define the classo instance. Next we can define a default c-lasso problem instance with the generated data:
problem = classo_problem(X, y, C)
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Check parameters. You can look at the generated problem instance by typing:
print(problem)
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Solve optimization problems. We only use stability selection as the default model selection strategy. The command also allows you to inspect the computed stability profile for all variables at the theoretical λ.
problem.solve()
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Visualisation. After completion, the results of the optimization and model selection routines can be visualized using:
print(problem.solution)
_____no_output_____
MIT
docs/source/auto_examples/plot_basic_example.ipynb
wendazhou/c-lasso
Tag Analysis
!pip install loguru from loguru import logger from collections import Counter, defaultdict def get_counts(keywords, level=0): kws = map(lambda x: x[level if level<len(x) else len(x)-1], keywords) kws = list(kws) # kws = list(map(str.lower, kws)) counter = Counter(kws) return counter def analyze_kws(keywords, topn=10): plt.figure(figsize=(15, 8)) for level in [0, 1, 2, 3, -1]: _ = get_counts(KEYWORDS, level=level) logger.debug(f"[Level={level}, NKWs={len(_)}] : {_.most_common(10)}") df = pd.DataFrame(_.most_common(topn), columns=["kw", "frequency"]) ax = sns.barplot( x="frequency", y="kw", data=df, linewidth=2.5, facecolor=(1, 1, 1, 0), errcolor=".2", edgecolor=".2" ) plt.title(f"Level={level}, topn={topn}") plt.figure(figsize=(15, 8)) ", ".join(list(get_counts(KEYWORDS, level=1).keys())) analyze_kws(KEYWORDS, topn=20)
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Data Analysis
def parse_kws(kw_str, level=2): res = kw_str.split(",") res = map(lambda kw: [_.strip().lower() for _ in kw.split(">")], res) res = map(lambda x: x[level if level<len(x) else len(x)-1], res) return list(set(res)) def load_data(path, level=0): logger.info(f"Loading data from {path}. [KW Level={level}]") df = pd.read_csv(path) df["desc"] = df["desc"].apply(str.strip) df["labels"] = df["keywords"].apply(lambda x: parse_kws(x, level)) df["textlen"] = df["desc"].apply(len) return df DATA = load_data(DATA_PATH, level=1) DATA.shape DATA.head(10) def analyze_labels(df): df = df.copy() labels = [l for ls in df["labels"] for l in ls] uniques = set(labels) logger.info(f"{len(uniques)} unique labels") analyze_labels(DATA) # idx = 2 # _data.iloc[2].keywords_processed _data = DATA.copy() _data = _data[_data["textlen"]>0] _data.shape # BERT can only process 512 tokens at once len(_data[_data["textlen"] <= 512]) / len(_data), len(_data[_data["textlen"] <= 1024]) / len(_data) plt.figure(figsize=(20, 15)) sns.histplot(data=_data, x="textlen", bins=100).set(xlim=(0, 3000))
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
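To see what `parse_kws` returns at a given hierarchy level, it can be run on a single raw keyword string. A small illustration with a made-up GCMD-style string (hypothetical input, not taken from the dataset):
```python
# Hypothetical keyword string: comma-separated keywords, '>' separating hierarchy levels
sample = "Earth Science > Atmosphere > Clouds, Earth Science > Oceans > Sea Surface Temperature"

print(parse_kws(sample, level=0))  # top level, e.g. ['earth science']
print(parse_kws(sample, level=1))  # second level, e.g. ['atmosphere', 'oceans'] (set, so order may vary)
print(parse_kws(sample, level=5))  # deeper than available, falls back to the last level of each keyword
```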
Baseline Model: Encode Labels
from sklearn.preprocessing import MultiLabelBinarizer DATA_TO_USE = DATA.copy() DATA_TO_USE = DATA_TO_USE[DATA_TO_USE["textlen"]<=500] DATA_TO_USE.shape DATA_TO_USE.head() analyze_labels(DATA_TO_USE) LE = MultiLabelBinarizer() LABELS_ENCODED = LE.fit_transform(DATA_TO_USE["labels"]) LABELS_ENCODED.shape LE.classes_ LE.inverse_transform(LABELS_ENCODED[0].reshape(1,-1)) DATA_TO_USE["labels_encoded"] = list(LABELS_ENCODED) DATA_TO_USE.head()
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
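The binarizer maps each label set to a 0/1 indicator row over the sorted vocabulary of classes, and `inverse_transform` reverses the mapping. A toy round-trip with made-up labels:
```python
from sklearn.preprocessing import MultiLabelBinarizer

toy = MultiLabelBinarizer()
encoded = toy.fit_transform([["atmosphere", "oceans"], ["land surface"]])
print(toy.classes_)                        # ['atmosphere' 'land surface' 'oceans']
print(encoded)                             # [[1 0 1] [0 1 0]]
print(toy.inverse_transform(encoded[:1]))  # [('atmosphere', 'oceans')]
```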
Split Dataset
from sklearn.model_selection import train_test_split X_train, X_test, Y_train, Y_test = train_test_split(DATA_TO_USE["desc"].to_numpy(), LABELS_ENCODED, test_size=0.1, random_state=42) X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size=0.1, random_state=42) X_train.shape, X_val.shape, X_test.shape Y_train.shape, Y_val.shape, Y_test.shape X_test
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Create Dataset
! pip install pytorch_lightning import torch from torch.utils.data import DataLoader, Dataset import pytorch_lightning as pl class TagDataset (Dataset): def __init__(self,texts, tags, tokenizer, max_len=512): self.tokenizer = tokenizer self.texts = texts self.labels = tags self.max_len = max_len def __len__(self): return len(self.texts) def __getitem__(self, item_idx): text = self.texts[item_idx] inputs = self.tokenizer.encode_plus( text, None, add_special_tokens=True, max_length= self.max_len, padding = 'max_length', return_token_type_ids= False, return_attention_mask= True, truncation=True, return_tensors = 'pt' ) input_ids = inputs['input_ids'].flatten() attn_mask = inputs['attention_mask'].flatten() return { 'input_ids': input_ids , 'attention_mask': attn_mask, 'label': torch.tensor(self.labels[item_idx], dtype=torch.float) } class TagDataModule (pl.LightningDataModule): def __init__(self, x_train, y_train, x_val, y_val, x_test, y_test,tokenizer, batch_size=16, max_token_len=512): super().__init__() self.train_text = x_train self.train_label = y_train self.val_text = x_val self.val_label = y_val self.test_text = x_test self.test_label = y_test self.tokenizer = tokenizer self.batch_size = batch_size self.max_token_len = max_token_len def setup(self): self.train_dataset = TagDataset(texts=self.train_text, tags=self.train_label, tokenizer=self.tokenizer,max_len = self.max_token_len) self.val_dataset = TagDataset(texts=self.val_text,tags=self.val_label,tokenizer=self.tokenizer,max_len = self.max_token_len) self.test_dataset = TagDataset(texts=self.test_text,tags=self.test_label,tokenizer=self.tokenizer,max_len = self.max_token_len) def train_dataloader(self): return DataLoader (self.train_dataset, batch_size = self.batch_size,shuffle = True , num_workers=2) def val_dataloader(self): return DataLoader (self.val_dataset, batch_size= 16) def test_dataloader(self): return DataLoader (self.test_dataset, batch_size= 16)
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Transformers
!pip install transformers from transformers import AutoTokenizer, AutoModel TOKENIZER = AutoTokenizer.from_pretrained("bert-base-uncased") # BASE_MODEL = AutoModel.from_pretrained("bert-base-uncased") BASE_MODEL = None # Initialize the parameters that will be use for training EPOCHS = 10 BATCH_SIZE = 4 MAX_LEN = 512 LR = 1e-03 TAG_DATA_MODULE = TagDataModule( X_train, Y_train, X_val, Y_val, X_test, Y_test, TOKENIZER, BATCH_SIZE, MAX_LEN ) TAG_DATA_MODULE.setup()
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
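Before training, it is worth pulling one batch out of the data module to confirm that the token IDs, attention masks, and label vectors have the expected shapes. A quick check, using the `TAG_DATA_MODULE` set up above:
```python
# Grab a single batch from the training dataloader and inspect the tensor shapes
batch = next(iter(TAG_DATA_MODULE.train_dataloader()))
print(batch["input_ids"].shape)       # (BATCH_SIZE, MAX_LEN)
print(batch["attention_mask"].shape)  # (BATCH_SIZE, MAX_LEN)
print(batch["label"].shape)           # (BATCH_SIZE, number of classes)
```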
Model
from pytorch_lightning.callbacks import ModelCheckpoint from transformers import AdamW, get_linear_schedule_with_warmup class TagClassifier(pl.LightningModule): # Set up the classifier def __init__(self, base_model=None, n_classes=10, steps_per_epoch=None, n_epochs=5, lr=1e-5 ): super().__init__() self.model = base_model or AutoModel.from_pretrained("bert-base-uncased", return_dict=True) self.classifier = torch.nn.Linear(self.model.config.hidden_size,n_classes) self.steps_per_epoch = steps_per_epoch self.n_epochs = n_epochs self.lr = lr self.criterion = torch.nn.BCEWithLogitsLoss() def forward(self,input_ids, attn_mask): output = self.model(input_ids = input_ids ,attention_mask = attn_mask) output = self.classifier(output.pooler_output) return output def training_step(self,batch,batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['label'] outputs = self(input_ids,attention_mask) loss = self.criterion(outputs,labels) self.log('train_loss',loss , prog_bar=True,logger=True) return {"loss" :loss, "predictions":outputs, "labels": labels } def validation_step(self,batch,batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['label'] outputs = self(input_ids,attention_mask) loss = self.criterion(outputs,labels) self.log('val_loss',loss , prog_bar=True,logger=True) return loss def test_step(self,batch,batch_idx): input_ids = batch['input_ids'] attention_mask = batch['attention_mask'] labels = batch['label'] outputs = self(input_ids,attention_mask) loss = self.criterion(outputs,labels) self.log('test_loss',loss , prog_bar=True,logger=True) return loss def configure_optimizers(self): optimizer = AdamW(self.parameters() , lr=self.lr) warmup_steps = self.steps_per_epoch//3 total_steps = self.steps_per_epoch * self.n_epochs - warmup_steps scheduler = get_linear_schedule_with_warmup(optimizer,warmup_steps,total_steps) return [optimizer], [scheduler] steps_per_epoch = len(X_train)//BATCH_SIZE MODEL = TagClassifier(BASE_MODEL, n_classes=22, steps_per_epoch=steps_per_epoch,n_epochs=EPOCHS,lr=LR) # # saves a file like: input/QTag-epoch=02-val_loss=0.32.ckpt # checkpoint_callback = ModelCheckpoint( # monitor='val_loss',# monitored quantity # filename='QTag-{epoch:02d}-{val_loss:.2f}', # save_top_k=3, # save the top 3 models # mode='min', # mode of the monitored quantity for optimization # ) trainer = pl.Trainer(max_epochs = EPOCHS , gpus = 1, callbacks=[], progress_bar_refresh_rate = 30) trainer.fit(MODEL, TAG_DATA_MODULE) !nvidia-smi trainer.save_checkpoint("model-10.ckpt") !mkdir "$DRIVE_BASE/checkpoints/" ! cp "/content/model-10.ckpt" "$DRIVE_BASE/checkpoints" !ls "$DRIVE_BASE/checkpoints"
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
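Because the checkpoint above is saved without calling `self.save_hyperparameters()`, restoring it later requires passing the constructor arguments again. A hedged sketch of how that reload might look:
```python
# Sketch: restore the trained classifier from the saved Lightning checkpoint.
# The constructor arguments are not stored in this checkpoint, so they are supplied explicitly.
restored = TagClassifier.load_from_checkpoint(
    "model-10.ckpt",
    base_model=None,                  # lets __init__ rebuild the bert-base-uncased backbone
    n_classes=22,
    steps_per_epoch=steps_per_epoch,
    n_epochs=EPOCHS,
    lr=LR,
)
restored.eval()
```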
Test
trainer.test(MODEL,datamodule=TAG_DATA_MODULE)
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Inference
MODEL.eval() import pickle with open("le.pkl", "wb") as f: pickle.dump(LE, f) from torch.utils.data import TensorDataset, SequentialSampler DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") MODEL.to(DEVICE) def inference(model, texts, tokenizer, batch_size=2): # model.eval() if isinstance(texts, str): texts = [texts] input_ids, attention_masks = [], [] for text in texts: text_encoded = tokenizer.encode_plus( text, None, add_special_tokens=True, max_length= MAX_LEN, padding = 'max_length', return_token_type_ids= False, return_attention_mask= True, truncation=True, return_tensors = 'pt' ) input_ids.append(text_encoded["input_ids"]) attention_masks.append(text_encoded["attention_mask"]) input_ids = torch.cat(input_ids, dim=0) attention_masks = torch.cat(attention_masks, dim=0) pred_data = TensorDataset(input_ids, attention_masks) pred_sampler = SequentialSampler(pred_data) pred_dataloader = DataLoader(pred_data, sampler=pred_sampler, batch_size=batch_size) pred_outs = [] for batch in pred_dataloader: # Add batch to GPU batch = tuple(t.to(DEVICE) for t in batch) # Unpack the inputs from our dataloader b_input_ids, b_attn_mask = batch with torch.no_grad(): # Forward pass, calculate logit predictions pred_out = model(b_input_ids,b_attn_mask) pred_out = torch.sigmoid(pred_out) # Move predicted output and labels to CPU pred_out = pred_out.detach().cpu().numpy() pred_outs.append(pred_out) return pred_outs _texts = X_test[:10] _pred_outs = inference(MODEL, _texts, TOKENIZER) _pred_outs _texts thresh = 0.3 for _txt, _yt, _p in zip(_texts, Y_test, _pred_outs.copy()): _p = _p.flatten() confs = _p[_p>thresh] _p[_p<thresh] = 0 _p[_p>=thresh] = 1 print(confs) pred_tag = LE.inverse_transform(np.array([_p]))[0] gt_tag = LE.inverse_transform(np.array([_yt]))[0] print(_txt[:50], gt_tag, pred_tag)
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Custom Evaluation
def inference2(model, tokenizer, texts, gts, threshold=0.3): _pred_outs = inference(model, texts, tokenizer, batch_size=1) res = [] for txt, gt, pred in zip(texts, gts, _pred_outs): p = pred.flatten().copy() confs = p[p>threshold] p[p<threshold] = 0 p[p>=threshold] = 1 p = np.array([p]) gt = np.array([gt]) pred_tags = LE.inverse_transform(p)[0] gt_tags = LE.inverse_transform(gt)[0] res.append({"gts": gt_tags, "preds": pred_tags, "text": txt}) return res def compute_jaccard(tokens1, tokens2): if not tokens1 or not tokens2: return 0 intersection = set(tokens1).intersection(tokens2) union = set(tokens1).union(tokens2) return len(intersection)/len(union) compute_jaccard([1, 2], [1, 2, 3]) import json !mkdir "$DRIVE_BASE/outputs/" def evaluate_jaccard(model, tokenizer, texts, gts, threshold=0.3): """ Jaccard Evaluation. SIimlar to IoU """ predictions = inference2(model, tokenizer, texts, gts, threshold) with open("inference.json", "w") as f: json.dump(predictions, f) metrics = [] for pmap in predictions: metrics.append(compute_jaccard(pmap["gts"], pmap["preds"])) return metrics _ = evaluate_jaccard(MODEL, TOKENIZER, X_test[:50], Y_test[:50], threshold=0.3) _ !cp "inference.json" "$DRIVE_BASE/outputs/"
_____no_output_____
MIT
colab-analysis-training.ipynb
NISH1001/earth-science-text-classification
Assignment: (1) Using Adam as an example, adjust batch_size and epochs and observe the changes in accuracy and loss. (2) Using the same model, compare the accuracy obtained with the SGD, Adam, and RMSprop optimizers.
import keras #from keras.datasets import cifar10 from keras.datasets import mnist from keras.preprocessing.image import ImageDataGenerator from keras.models import Sequential, load_model from keras.layers import Dense, Dropout, Activation, Flatten from keras.layers import Conv2D, MaxPooling2D from keras import optimizers from keras.callbacks import EarlyStopping, ModelCheckpoint import numpy # 第一步:選擇模型, 順序模型是多個網絡層的線性堆疊 model = Sequential() # 第二步:構建網絡層 model.add(Dense( 500,input_shape=(784,),kernel_initializer='uniform')) # 輸入層,28*28=784 model.add(Activation('relu')) # 激活函數是relu model.add(Dense( 500, kernel_initializer='uniform')) # 隱藏層節點500個 model.add(Activation('relu')) model.add(Dense( 500, kernel_initializer='uniform')) # 隱藏層節點500個 model.add(Activation('relu')) model.add(Dense( 500, kernel_initializer='uniform')) # 隱藏層節點500個 model.add(Activation('relu')) model.add(Dense( 10, kernel_initializer='uniform')) # 輸出結果是10個類別,所以維度是10 model.add(Activation('softmax')) # 最後一層用softmax作為激活函數 # 模型建立完成後,統計參數總量 print("Total Parameters:%d" % model.count_params()) # 輸出模型摘要資訊 model.summary() ''' SGD(隨機梯度下降) - Arguments lr: float >= 0. Learning rate. momentum: float >= 0. Parameter that accelerates SGD in the relevant direction and dampens oscillations. decay: float >= 0. Learning rate decay over each update. nesterov: boolean. Whether to apply Nesterov momentum. ''' opt = keras.optimizers.SGD(lr=0.1, momentum=0.9, decay=0.95, nesterov=True) ''' RMSprop- Arguments lr: float >= 0. Learning rate. rho: float >= 0. epsilon: float >= 0. Fuzz factor. If None, defaults to K.epsilon(). decay: float >= 0. Learning rate decay over each update. ''' opt = keras.optimizers.RMSprop(lr=0.001) # 第三步:編譯, model.compile(optimizer=opt, loss = 'binary_crossentropy', metrics = ['accuracy']) # 第四步:資料分割 # 使用Keras自帶的mnist工具讀取數據(第一次需要聯網) (X_train, y_train), (X_test, y_test) = mnist.load_data() # 由於mist的輸入數據維度是(num, 28 , 28),這裡需要把後面的維度直接拼起來變成784維 X_train = (X_train.reshape(X_train.shape[0], X_train.shape[1] * X_train.shape[2])).astype('float32') / 255. X_test = (X_test.reshape(X_test.shape[0], X_test.shape[1] * X_test.shape[2])).astype('float32') / 255. 
Y_train = (numpy.arange(10) == y_train[:, None]).astype(int) Y_test = (numpy.arange(10) == y_test[:, None]).astype(int) ''' 宣告並設定 batch_size:對總的樣本數進行分組,每組包含的樣本數量 epochs :訓練次數 ''' batch_size = 256 epochs = 20 # 第五步:訓練, 修正 model 參數 #Blas GEMM launch failed , 避免動態分配GPU / CPU, 出現問題 import tensorflow as tf gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.999) sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) history = model.fit(X_train,Y_train,batch_size=batch_size, epochs=epochs, shuffle=True,verbose=2,validation_split=0.3) # 第六步:輸出 print ( " test set " ) scores = model.evaluate(X_test,Y_test,batch_size=200,verbose= 0) print ( "" ) #print ( " The test loss is %f " % scores) print ( " The test loss is %f ", scores) result = model.predict(X_test,batch_size=200,verbose= 0) result_max = numpy.argmax(result, axis = 1 ) test_max = numpy.argmax(Y_test, axis = 1 ) result_bool = numpy.equal(result_max, test_max) true_num = numpy.sum(result_bool) print ( "" ) print ( " The accuracy of the model is %f " % (true_num/len(result_bool))) import matplotlib.pyplot as plt %matplotlib inline # history = model.fit(x, y, validation_split=0.25, epochs=50, batch_size=16, verbose=1) # Plot training & validation accuracy values plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Model accuracy') plt.ylabel('Accuracy') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show() # Plot training & validation loss values plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Model loss') plt.ylabel('Loss') plt.xlabel('Epoch') plt.legend(['Train', 'Test'], loc='upper left') plt.show()
_____no_output_____
MIT
homeworks/D076/Day76-Optimizer_HW.ipynb
peteryuX/100Day-ML-Marathon
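For part (2) of the assignment, one way to compare optimizers is to rebuild the same architecture once per optimizer and overlay the validation curves. A sketch under the same setup as above (note it uses `categorical_crossentropy`, which matches the one-hot labels and the softmax output):
```python
def build_model():
    m = Sequential()
    m.add(Dense(500, input_shape=(784,), activation='relu'))
    m.add(Dense(500, activation='relu'))
    m.add(Dense(10, activation='softmax'))
    return m

histories = {}
for name, opt in [('SGD', optimizers.SGD(lr=0.1, momentum=0.9)),
                  ('Adam', optimizers.Adam(lr=0.001)),
                  ('RMSprop', optimizers.RMSprop(lr=0.001))]:
    m = build_model()
    m.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
    histories[name] = m.fit(X_train, Y_train, batch_size=256, epochs=20,
                            verbose=0, validation_split=0.3)

for name, h in histories.items():
    plt.plot(h.history['val_acc'], label=name)
plt.xlabel('Epoch')
plt.ylabel('Validation accuracy')
plt.legend()
plt.show()
```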
Probabilistic Multiple Cracking Model of Brittle-Matrix Composite: One-by-One Crack Tracing Algorithm. Interactive application for the fragmentation model presented in the paper [citation and link will be added upon paper publication]. - Change the material parameters to trigger the recalculation. - Inspect the cracking history by changing the crack slider. - Visit the annotated source code of the implementation [here](../notebooks/annotated_fragmentation.ipynb)
%%html <style> .output_wrapper button.btn.btn-default, .output_wrapper .ui-dialog-titlebar { display: none; } </style> %matplotlib notebook import numpy as np from scipy.optimize import newton import matplotlib.pylab as plt Em=25e3 # [MPa] matrix modulus Ef=180e3 # [MPa] fiber modulus vf=0.01 # [-] reinforcement ratio T=12. # [N/mm^3] bond intensity sig_cu=10.0 # [MPa] composite strength sig_mu=3.0 # [MPa] matrix strength m=10000 # Weibull shape modulus ## Crack bridge with constant bond def get_sig_m(z, sig_c): # matrix stress (*\label{sig_m}*) sig_m = np.minimum(z * T * vf / (1 - vf), Em * sig_c / (vf * Ef + (1 - vf) * Em)) return sig_m def get_eps_f(z, sig_c): # reinforcement strain (*\label{sig_f}*) sig_m = get_sig_m(z, sig_c) eps_f = (sig_c - sig_m * (1 - vf)) / vf / Ef return eps_f ## Specimen discretization def get_z_x(x, XK): # distance to the closest crack (*\label{get_z_x}*) z_grid = np.abs(x[:, np.newaxis] - np.array(XK)[np.newaxis, :]) return np.amin(z_grid, axis=1) import warnings # (*\label{error1}*) warnings.filterwarnings("error", category=RuntimeWarning) # (*\label{error2}*) def get_sig_c_z(sig_mu, z, sig_c_pre): # crack initiating load at a material element fun = lambda sig_c: sig_mu - get_sig_m(z, sig_c) try: # search for the local crack load level return newton(fun, sig_c_pre) except (RuntimeWarning, RuntimeError): # solution not found (shielded zone) return the ultimate composite strength return sig_cu def get_sig_c_K(z_x, x, sig_c_pre, sig_mu_x): # crack initiating loads over the whole specimen get_sig_c_x = np.vectorize(get_sig_c_z) sig_c_x = get_sig_c_x(sig_mu_x, z_x, sig_c_pre) y_idx = np.argmin(sig_c_x) return sig_c_x[y_idx], x[y_idx] ## Crack tracing algorithm n_x=5000 L_x=500 def get_cracking_history(update_progress=None): x = np.linspace(0, L_x, n_x) # specimen discretization (*\label{discrete}*) sig_mu_x = sig_mu * np.random.weibull(m, size=n_x) # matrix strength (*\label{m_strength}*) Ec = Em * (1-vf) + Ef*vf # [MPa] mixture rule XK = [] # recording the crack postions sig_c_K = [0.] # recording the crack initating loads eps_c_K = [0.] 
# recording the composite strains CS = [L_x, L_x/2] # crack spacing sig_m_x_K = [np.zeros_like(x)] # stress profiles for crack states idx_0 = np.argmin(sig_mu_x) XK.append(x[idx_0]) # position of the first crack sig_c_0 = sig_mu_x[idx_0] * Ec / Em sig_c_K.append(sig_c_0) eps_c_K.append(sig_mu_x[idx_0] / Em) while True: z_x = get_z_x(x, XK) # distances to the nearest crack sig_m_x_K.append(get_sig_m(z_x, sig_c_K[-1])) # matrix stress sig_c_k, y_i = get_sig_c_K(z_x, x, sig_c_K[-1], sig_mu_x) # identify next crack if sig_c_k == sig_cu: # (*\label{no_crack}*) break if update_progress: # callback to user interface update_progress(sig_c_k) XK.append(y_i) # record crack position sig_c_K.append(sig_c_k) # corresponding composite stress eps_c_K.append( # composite strain - integrate the strain field np.trapz(get_eps_f(get_z_x(x, XK), sig_c_k), x) / np.amax(x)) # (*\label{imple_avg_strain}*) XK_arr = np.hstack([[0], np.sort(np.array(XK)), [L_x]]) CS.append(np.average(XK_arr[1:]-XK_arr[:-1])) # crack spacing sig_c_K.append(sig_cu) # the ultimate state eps_c_K.append(np.trapz(get_eps_f(get_z_x(x, XK), sig_cu), x) / np.amax(x)) CS.append(CS[-1]) if update_progress: update_progress(sig_c_k) return np.array(sig_c_K), np.array(eps_c_K), sig_mu_x, x, np.array(CS), np.array(sig_m_x_K) sig_c_K, eps_c_K, sig_mu_x, x, CS, sig_m_x_K = get_cracking_history() fig, (ax, ax_sig_x) = plt.subplots(1, 2, figsize=(8, 3), tight_layout=True) ax_cs = ax.twinx() sig_c_K, eps_c_K, sig_mu_x, x, CS, sig_m_x_K = get_cracking_history() n_c = len(eps_c_K) - 2 # numer of cracks ax.plot(eps_c_K, sig_c_K, marker='o', label='%d cracks:' % n_c) ax.set_xlabel(r'$\varepsilon_\mathrm{c}$ [-]'); ax.set_ylabel(r'$\sigma_\mathrm{c}$ [MPa]') ax_sig_x.plot(x, sig_mu_x, color='orange') ax_sig_x.fill_between(x, sig_mu_x, 0, color='orange', alpha=0.1) ax_sig_x.set_xlabel(r'$x$ [mm]'); ax_sig_x.set_ylabel(r'$\sigma$ [MPa]') ax.legend() eps_c_KK = np.array([eps_c_K[:-1], eps_c_K[1:]]).T.flatten() CS_KK = np.array([CS[:-1], CS[:-1]]).T.flatten() ax_cs.plot(eps_c_KK, CS_KK, color='gray') ax_cs.fill_between(eps_c_KK, CS_KK, color='gray', alpha=0.2) ax_cs.set_ylabel(r'$\ell_\mathrm{cs}$ [mm]'); plt.interactive(False) plt.show() print('two')
_____no_output_____
MIT
pmcm/fragmentation2.ipynb
bmcs-group/bmcs_fragmentation
Copyright 2022 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Text Searcher with TensorFlow Lite Model Maker View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook See TF Hub model In this colab notebook, you can learn how to use the [TensorFlow Lite Model Maker](https://www.tensorflow.org/lite/guide/model_maker) library to create a TFLite Searcher model. You can use a text Searcher model to build Sematic Search or Smart Reply for your app. This type of model lets you take a text query and search for the most related entries in a text dataset, such as a database of web pages. The model returns a list of the smallest distance scoring entries in the dataset, including metadata you specify, such as URL, page title, or other text entry identifiers. After building this, you can deploy it onto devices (e.g. Android) using [Task Library Searcher API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/text_searcher) to run inference with just a few lines of code.This tutorial leverages CNN/DailyMail dataset as an instance to create the TFLite Searcher model. You can try with your own dataset with the compatible input comma separated value (CSV) format. Text search using Scalable Nearest Neighbor This tutorial uses the publicly available CNN/DailyMail non-anonymized summarization dataset, which was produced from the [GitHub repo](https://github.com/abisee/cnn-dailymail). This dataset contains over 300k news articles, which makes it a good dataset to build the Searcher model, and return various related news during model inference for a text query.The text Searcher model in this example uses a [ScaNN](https://github.com/google-research/google-research/tree/master/scann) (Scalable Nearest Neighbors) index file that can search for similar items from a predefined database. ScaNN achieves state-of-the-art performance for efficient vector similarity search at scale.Highlights and urls in this dataset are used in this colab to create the model:1. Highlights are the text for generating the embedding feature vectors and then used for search.2. Urls are the returned result shown to users after searching the related highlights.This tutorial saves these data into the CSV file and then uses the CSV file to build the model. Here are several examples from the dataset.| Highlights | Urls| ---------- |----------|Hawaiian Airlines again lands at No. 1 in on-time performance. The Airline Quality Rankings Report looks at the 14 largest U.S. airlines. ExpressJet and American Airlines had the worst on-time performance. Virgin America had the best baggage handling; Southwest had lowest complaint rate. | http://www.cnn.com/2013/04/08/travel/airline-quality-report| European football's governing body reveals list of countries bidding to host 2020 finals. The 60th anniversary edition of the finals will be hosted by 13 countries. Thirty-two countries are considering bids to host 2020 matches. UEFA will announce host cities on September 25. | http://edition.cnn.com:80/2013/09/20/sport/football/football-euro-2020-bid-countries/index.html?| Once octopus-hunter Dylan Mayer has now also signed a petition of 5,000 divers banning their hunt at Seacrest Park. Decision by Washington Department of Fish and Wildlife could take months. | http://www.dailymail.co.uk:80/news/article-2238423/Dylan-Mayer-Washington-considers-ban-Octopus-hunting-diver-caught-ate-Puget-Sound.html?| Galaxy was observed 420 million years after the Big Bang. found by NASA’s Hubble Space Telescope, Spitzer Space Telescope, and one of nature’s own natural 'zoom lenses' in space. 
| http://www.dailymail.co.uk/sciencetech/article-2233883/The-furthest-object-seen-Record-breaking-image-shows-galaxy-13-3-BILLION-light-years-Earth.html Setup Start by installing the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker).
!sudo apt -y install libportaudio2 !pip install -q tflite-model-maker !pip install gdown
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Import the required packages.
from tflite_model_maker import searcher
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Prepare the dataset. This tutorial uses the CNN / Daily Mail summarization dataset from the [GitHub repo](https://github.com/abisee/cnn-dailymail). First, download the text and urls of cnn and dailymail and unzip them. If it fails to download from Google Drive, please wait a few minutes and try again, or download it manually and then upload it to the colab.
!gdown https://drive.google.com/uc?id=0BwmD_VLjROrfTHk4NFg2SndKcjQ !gdown https://drive.google.com/uc?id=0BwmD_VLjROrfM1BxdkxVaTY2bWs !wget -O all_train.txt https://raw.githubusercontent.com/abisee/cnn-dailymail/master/url_lists/all_train.txt !tar xzf cnn_stories.tgz !tar xzf dailymail_stories.tgz
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Then, save the data into the CSV file that can be loaded into the `tflite_model_maker` library. The code is based on the logic used to load this data in [`tensorflow_datasets`](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/cnn_dailymail.py). We can't use `tensorflow_datasets` directly since it doesn't contain the urls, which are used in this colab. Since it takes a long time to process the data into embedding feature vectors for the whole dataset, only the first 5% of stories from the CNN and Daily Mail datasets are selected by default for demo purposes. You can adjust the fraction, or try the pre-built TFLite [model](https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/cnn_daily_text_searcher.tflite) built with 50% of the CNN and Daily Mail stories as well.
#@title Save the highlights and urls to the CSV file #@markdown Load the highlights from the stories of CNN / Daily Mail, map urls with highlights, and save them to the CSV file. CNN_FRACTION = 0.05 #@param {type:"number"} DAILYMAIL_FRACTION = 0.05 #@param {type:"number"} import csv import hashlib import os import tensorflow as tf dm_single_close_quote = u"\u2019" # unicode dm_double_close_quote = u"\u201d" END_TOKENS = [ ".", "!", "?", "...", "'", "`", '"', dm_single_close_quote, dm_double_close_quote, ")" ] # acceptable ways to end a sentence def read_file(file_path): """Reads lines in the file.""" lines = [] with tf.io.gfile.GFile(file_path, "r") as f: for line in f: lines.append(line.strip()) return lines def url_hash(url): """Gets the hash value of the url.""" h = hashlib.sha1() url = url.encode("utf-8") h.update(url) return h.hexdigest() def get_url_hashes_dict(urls_path): """Gets hashes dict that maps the hash value to the original url in file.""" urls = read_file(urls_path) return {url_hash(url): url[url.find("id_/") + 4:] for url in urls} def find_files(folder, url_dict): """Finds files corresponding to the urls in the folder.""" all_files = tf.io.gfile.listdir(folder) ret_files = [] for file in all_files: # Gets the file name without extension. filename = os.path.splitext(os.path.basename(file))[0] if filename in url_dict: ret_files.append(os.path.join(folder, file)) return ret_files def fix_missing_period(line): """Adds a period to a line that is missing a period.""" if "@highlight" in line: return line if not line: return line if line[-1] in END_TOKENS: return line return line + "." def get_highlights(story_file): """Gets highlights from a story file path.""" lines = read_file(story_file) # Put periods on the ends of lines that are missing them # (this is a problem in the dataset because many image captions don't end in # periods; consequently they end up in the body of the article as run-on # sentences) lines = [fix_missing_period(line) for line in lines] # Separate out article and abstract sentences highlight_list = [] next_is_highlight = False for line in lines: if not line: continue # empty line elif line.startswith("@highlight"): next_is_highlight = True elif next_is_highlight: highlight_list.append(line) # Make highlights into a single string. highlights = "\n".join(highlight_list) return highlights url_hashes_dict = get_url_hashes_dict("all_train.txt") cnn_files = find_files("cnn/stories", url_hashes_dict) dailymail_files = find_files("dailymail/stories", url_hashes_dict) # The size to be selected. cnn_size = int(CNN_FRACTION * len(cnn_files)) dailymail_size = int(DAILYMAIL_FRACTION * len(dailymail_files)) print("CNN size: %d"%cnn_size) print("Daily Mail size: %d"%dailymail_size) with open("cnn_dailymail.csv", "w") as csvfile: writer = csv.DictWriter(csvfile, fieldnames=["highlights", "urls"]) writer.writeheader() for file in cnn_files[:cnn_size] + dailymail_files[:dailymail_size]: highlights = get_highlights(file) # Gets the filename which is the hash value of the url. filename = os.path.splitext(os.path.basename(file))[0] url = url_hashes_dict[filename] writer.writerow({"highlights": highlights, "urls": url})
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Build the text Searcher model. Create a text Searcher model by loading a dataset, creating a model with the data and exporting the TFLite model. Step 1. Load the dataset. Model Maker takes the text dataset and the corresponding metadata of each text string (such as urls in this example) in the CSV format. It embeds the text strings into feature vectors using the user-specified embedder model. In this demo, we build the Searcher model using [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder-lite/2), a state-of-the-art sentence embedding model which is already retrained from [colab](https://github.com/tensorflow/tflite-support/blob/master/tensorflow_lite_support/examples/colab/on_device_text_to_image_search_tflite.ipynb). The model is optimized for on-device inference performance, and only takes 6ms to embed a query string (measured on Pixel 6). Alternatively, you can use [this](https://tfhub.dev/google/lite-model/universal-sentence-encoder-qa-ondevice/1?lite-format=tflite) quantized version, which is smaller but takes 38ms for each embedding.
!wget -O universal_sentence_encoder.tflite https://storage.googleapis.com/download.tensorflow.org/models/tflite_support/searcher/text_to_image_blogpost/text_embedder.tflite
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Create a `searcher.TextDataLoader` instance and use `data_loader.load_from_csv` method to load the dataset. It takes ~10 minutes for thisstep since it generates the embedding feature vector for each text one by one. You can try to upload your own CSV file and load it to build the customized model as well.Specify the name of text column and metadata column in the CSV file.* Text is used to generate the embedding feature vectors.* Metadata is the content to be shown when you search the certain text.Here are the first 4 lines of the CNN-DailyMail CSV file generated above.| highlights| urls| ---------- |----------|Syrian official: Obama climbed to the top of the tree, doesn't know how to get down. Obama sends a letter to the heads of the House and Senate. Obama to seek congressional approval on military action against Syria. Aim is to determine whether CW were used, not by whom, says U.N. spokesman.|http://www.cnn.com/2013/08/31/world/meast/syria-civil-war/|Usain Bolt wins third gold of world championship. Anchors Jamaica to 4x100m relay victory. Eighth gold at the championships for Bolt. Jamaica double up in women's 4x100m relay.|http://edition.cnn.com/2013/08/18/sport/athletics-bolt-jamaica-gold|The employee in agency's Kansas City office is among hundreds of "virtual" workers. The employee's travel to and from the mainland U.S. last year cost more than $24,000. The telecommuting program, like all GSA practices, is under review.|http://www.cnn.com:80/2012/08/23/politics/gsa-hawaii-teleworking|NEW: A Canadian doctor says she was part of a team examining Harry Burkhart in 2010. NEW: Diagnosis: "autism, severe anxiety, post-traumatic stress disorder and depression" Burkhart is also suspected in a German arson probe, officials say. Prosecutors believe the German national set a string of fires in Los Angeles.|http://edition.cnn.com:80/2012/01/05/justice/california-arson/index.html?
data_loader = searcher.TextDataLoader.create("universal_sentence_encoder.tflite", l2_normalize=True) data_loader.load_from_csv("cnn_dailymail.csv", text_column="highlights", metadata_column="urls")
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
For image use cases, you can create a `searcher.ImageDataLoader` instance and then use `data_loader.load_from_folder` to load images from the folder. The `searcher.ImageDataLoader` instance needs to be created by a TFLite embedder model because it will be leveraged to encode queries to feature vectors and be exported with the TFLite Searcher model. For instance:
```python
data_loader = searcher.ImageDataLoader.create("mobilenet_v2_035_96_embedder_with_metadata.tflite")
data_loader.load_from_folder("food/")
```
Step 2. Create the Searcher model. * Configure ScaNN options. See [api doc](https://www.tensorflow.org/lite/api_docs/python/tflite_model_maker/searcher/ScaNNOptions) for more details. * Create the Searcher model from data and ScaNN options. You can see the [in-depth examination](https://ai.googleblog.com/2020/07/announcing-scann-efficient-vector.html) to learn more about the ScaNN algorithm.
scann_options = searcher.ScaNNOptions( distance_measure="dot_product", tree=searcher.Tree(num_leaves=140, num_leaves_to_search=4), score_ah=searcher.ScoreAH(dimensions_per_block=1, anisotropic_quantization_threshold=0.2)) model = searcher.Searcher.create_from_data(data_loader, scann_options)
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
In the above example, we define the following options: * `distance_measure`: we use "dot_product" to measure the distance between two embedding vectors. Note that we actually compute the **negative** dot product value to preserve the notion that "smaller is closer". * `tree`: the dataset is divided into 140 partitions (roughly the square root of the data size), and 4 of them are searched during retrieval, which is roughly 3% of the dataset. * `score_ah`: we quantize the float embeddings to int8 values with the same dimension to save space. Step 3. Export the TFLite model. Then you can export the TFLite Searcher model.
model.export( export_filename="searcher.tflite", userinfo="", export_format=searcher.ExportFormat.TFLITE)
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Test the TFLite model on your query. You can test the exported TFLite model using custom query text. To query text using the Searcher model, initialize the model and run a search with a text phrase, as follows:
from tflite_support.task import text # Initializes a TextSearcher object. searcher = text.TextSearcher.create_from_file("searcher.tflite") # Searches the input query. results = searcher.search("The Airline Quality Rankings Report looks at the 14 largest U.S. airlines.") print(results)
_____no_output_____
Apache-2.0
tensorflow/lite/g3doc/models/modify/model_maker/text_searcher.ipynb
QS-L-1992/tensorflow
Scattering. Wave vector of the incident beam: $\vec {k}^{(i)}$. Wave vector of the scattered beam: $\vec {k}^{(s)}$. Momentum transfer: $\vec Q =\vec {k}^{(i)} - \vec {k}^{(s)}$. Laue condition. Prerequisite: elastic scattering ($|\vec {k}^{(i)}| = |\vec {k}^{(s)}| = \frac{2\pi}{\lambda}$): $\vec Q = \Delta \vec{k} = \vec {k}^{(i)} - \vec {k}^{(s)} = \vec G$
import ipywidgets as iw import matplotlib import matplotlib.pyplot as plt from math import * import numpy as np class Vector: def __init__(self, x, y): self.x = x self.y = y def length(self): return sqrt(self.x*self.x + self.y * self.y) def length_sq(self): return self.x*self.x + self.y * self.y def __mul__(self, s): return Vector(self.x * s, self.y * s) def __rmul__(self, s): return self * s def __add__(self, other): return Vector(self.x + other.x, self.y + other.y) def rotate(self, phi): return Vector(self.x * cos(phi) + self.y * (-sin(phi)), self.x * sin(phi) + self.y * cos(phi)) @staticmethod def fromAbsPhi(r, phi): x = r * cos(phi) y = r * sin(phi) return Vector(x, y) #b reziprokes gitter 2d abbildung b1 = Vector(2,0) b2 = Vector(0,2) angle = 50*pi/180 lambda_ = 0.5 light_vec = Vector.fromAbsPhi(1/lambda_, 0) # plot def draw(alpha, lambda_, b2_mag, b2_phi): light_vec = Vector.fromAbsPhi(1/lambda_, 0) angle = alpha / 180.0 * pi b2 = Vector.fromAbsPhi(b2_mag, b2_phi/180.0 * pi) fig, ax = plt.subplots() fig.set_size_inches(9, 9, forward=True) b1r = b1.rotate(angle) b2r = b2.rotate(angle) epsilon = 1e-4 size = 8 num_1points = int(ceil(2*size / max(abs(b1.y) + abs(b2.y), 0.2))) num_2points = int(ceil(2*size / max(abs(b1.x) + abs(b2.x), 0.2))) num_2points = max(num_1points, num_2points) num_1points = num_2points num1 = len(range(-num_1points, num_1points)) num2 = len(range(-num_2points, num_2points)) points_x = np.zeros((num1, num2)) points_y = np.zeros((num1, num2)) points_all = np.empty((num1, num2), dtype=Vector) for col in range(-num_1points, num_1points): row = (np.arange(-num_2points, num_2points) * b2r) points = row + (b1r * col) points_x[col,:] = [p.x for p in points] points_y[col,:] = [p.y for p in points] points_all[col,:] = points points_all = points_all + light_vec l = abs(light_vec.length_sq()) def distance(p): if abs(p.x - light_vec.x) < epsilon and abs(p.y - light_vec.y) < epsilon: return float("inf") return abs(abs(p.length_sq()) - l) closest = min(points_all.flatten(), key= distance) print(f"x = {closest.x}") print(f"y = {closest.y}") print(f"delta = {closest.length() - sqrt(l)}") ax.arrow(0, 0, light_vec.x, light_vec.y, head_width=0.25, head_length=0.3, length_includes_head=True, fc='maroon', ec='maroon') ax.arrow(0, 0, closest.x, closest.y, head_width=0.25, head_length=0.3, length_includes_head=True, fc='b', ec='b') ax.add_patch(matplotlib.patches.Circle( (0, 0), light_vec.length(), fill = False, ec = "black", lw=1)) ax.scatter(points_x + light_vec.x, points_y + light_vec.y, s=[4]) ax.scatter([0], [0], s=[4], c = ["r"]) ax.set(xlim=(-size, size), ylim=(-size, size)) ax.set_aspect(1) #ax.set_axis_off() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show() #alpha= 103, lambda = 0.3: good fit w = iw.interactive(draw, alpha = iw.IntSlider(min = 0, max = 360, value = 45), lambda_ = iw.FloatSlider(min = 0.2, max = 2, value = 0.5), b2_mag = iw.FloatSlider(min = 0.5, max = 4, value = 2.0, continuous_update=False), b2_phi = iw.IntSlider(min = 0, max = 360, value = 90, continuous_update=False)) output = w.children[-1] output.layout.height = '650px' display(w)
_____no_output_____
MIT
Ewaldkugel.ipynb
moosbruggerj/ewaldkugel-sim
AI for Medicine Course 1 Week 1 lecture exercises. Counting labels. As you saw in the lecture videos, one way to avoid having class imbalance impact the loss function is to weight the losses differently. To choose the weights, you first need to calculate the class frequencies. For this exercise, you'll just get the count of each label. Later on, you'll use the concepts practiced here to calculate frequencies in the assignment!
# Import the necessary packages import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline # Read csv file containing training data train_df = pd.read_csv("nih/train-small.csv") # Count up the number of instances of each class (drop non-class columns from the counts) class_counts = train_df.sum().drop(['Image','PatientId']) for column in class_counts.keys(): print(f"The class {column} has {train_df[column].sum()} samples") # Plot up the distribution of counts sns.barplot(class_counts.values, class_counts.index, color='b') plt.title('Distribution of Classes for Training Dataset', fontsize=15) plt.xlabel('Number of Patients', fontsize=15) plt.ylabel('Diseases', fontsize=15) plt.show()
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Weighted Loss function. Below is an example of calculating weighted loss. In the assignment, you will calculate a weighted loss function. This sample code will give you some intuition for what the weighted loss function is doing, and also help you practice some syntax you will use in the graded assignment. For this example, you'll first define a hypothetical set of true labels and then a set of predictions. Run the next cell to create the 'ground truth' labels.
# Generate an array of 4 binary label values, 3 positive and 1 negative y_true = np.array( [[1], [1], [1], [0]]) print(f"y_true: \n{y_true}")
y_true: [[1] [1] [1] [0]]
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Two models. To better understand the loss function, you will pretend that you have two models. - Model 1 always outputs a 0.9 for any example that it's given. - Model 2 always outputs a 0.1 for any example that it's given.
# Make model predictions that are always 0.9 for all examples y_pred_1 = 0.9 * np.ones(y_true.shape) print(f"y_pred_1: \n{y_pred_1}") print() y_pred_2 = 0.1 * np.ones(y_true.shape) print(f"y_pred_2: \n{y_pred_2}")
y_pred_1: [[0.9] [0.9] [0.9] [0.9]] y_pred_2: [[0.1] [0.1] [0.1] [0.1]]
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Problems with the regular loss function. The learning goal here is to notice that with a regular loss function (not a weighted loss), the model that always outputs 0.9 has a smaller loss (performs better) than model 2. - This is because there is a class imbalance, where 3 out of the 4 labels are 1. - If the data were perfectly balanced (two labels were 1, and two labels were 0), model 1 and model 2 would have the same loss. Each would get two examples correct and two examples incorrect. - However, since the data is not balanced, the regular loss function implies that model 1 is better than model 2. Notice the shortcomings of a regular non-weighted loss. See what loss you get from these two models (model 1 always predicts 0.9, and model 2 always predicts 0.1), and see what the regular (unweighted) loss function is for each model.
loss_reg_1 = -1 * np.sum(y_true * np.log(y_pred_1)) + \ -1 * np.sum((1 - y_true) * np.log(1 - y_pred_1)) print(f"loss_reg_1: {loss_reg_1:.4f}") loss_reg_2 = -1 * np.sum(y_true * np.log(y_pred_2)) + \ -1 * np.sum((1 - y_true) * np.log(1 - y_pred_2)) print(f"loss_reg_2: {loss_reg_2:.4f}") print(f"When the model 1 always predicts 0.9, the regular loss is {loss_reg_1:.4f}") print(f"When the model 2 always predicts 0.1, the regular loss is {loss_reg_2:.4f}")
When the model 1 always predicts 0.9, the regular loss is 2.6187 When the model 2 always predicts 0.1, the regular loss is 7.0131
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Notice that the loss function gives a greater loss when the predictions are always 0.1, because the data is imbalanced, and has three labels of `1` but only one label for `0`.Given a class imbalance with more positive labels, the regular loss function implies that the model with the higher prediction of 0.9 performs better than the model with the lower prediction of 0.1 How a weighted loss treats both models the sameWith a weighted loss function, you will get the same weighted loss when the predictions are all 0.9 versus when the predictions are all 0.1. - Notice how a prediction of 0.9 is 0.1 away from the positive label of 1.- Also notice how a prediction of 0.1 is 0.1 away from the negative label of 0- So model 1 and 2 are "symmetric" along the midpoint of 0.5, if you plot them on a number line between 0 and 1. Weighted Loss EquationCalculate the loss for the zero-th label (column at index 0)- The loss is made up of two terms. To make it easier to read the code, you will calculate each of these terms separately. We are giving each of these two terms a name for explanatory purposes, but these are not officially called $loss_{pos}$ or $loss_{neg}$ - $loss_{pos}$: we'll use this to refer to the loss where the actual label is positive (the positive examples). - $loss_{neg}$: we'll use this to refer to the loss where the actual label is negative (the negative examples). $$ loss^{(i)} = loss_{pos}^{(i)} + los_{neg}^{(i)} $$$$loss_{pos}^{(i)} = -1 \times weight_{pos}^{(i)} \times y^{(i)} \times log(\hat{y}^{(i)})$$$$loss_{neg}^{(i)} = -1 \times weight_{neg}^{(i)} \times (1- y^{(i)}) \times log(1 - \hat{y}^{(i)})$$ Since this sample dataset is small enough, you can calculate the positive weight to be used in the weighted loss function. To get the positive weight, count how many NEGATIVE labels are present, divided by the total number of examples.In this case, there is one negative label, and four total examples.Similarly, the negative weight is the fraction of positive labels.Run the next cell to define positive and negative weights.
# calculate the positive weight as the fraction of negative labels w_p = 1/4 # calculate the negative weight as the fraction of positive labels w_n = 3/4 print(f"positive weight w_p: {w_p}") print(f"negative weight w_n {w_n}")
positive weight w_p: 0.25 negative weight w_n 0.75
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
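Written compactly, the weighted loss over a batch is just the sum of these two terms. The cells below evaluate each term separately for the two models; for reuse, the same calculation can also be wrapped in one small helper (a sketch following the formulas above):
```python
def weighted_loss(y_true, y_pred, w_p, w_n):
    """Weighted binary cross-entropy: w_p scales the positive term, w_n the negative term."""
    loss_pos = -1 * np.sum(w_p * y_true * np.log(y_pred))
    loss_neg = -1 * np.sum(w_n * (1 - y_true) * np.log(1 - y_pred))
    return loss_pos + loss_neg

# Both constant models score the same under the weighted loss
print(f"{weighted_loss(y_true, y_pred_1, w_p, w_n):.4f}")
print(f"{weighted_loss(y_true, y_pred_2, w_p, w_n):.4f}")
```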
Model 1 weighted loss. Run the next two cells to calculate the two loss terms separately. Here, `loss_1_pos` and `loss_1_neg` are calculated using the `y_pred_1` predictions.
# Calculate and print out the first term in the loss function, which we are calling 'loss_pos' loss_1_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_1 )) print(f"loss_1_pos: {loss_1_pos:.4f}") # Calculate and print out the second term in the loss function, which we're calling 'loss_neg' loss_1_neg = -1 * np.sum(w_n * (1 - y_true) * np.log(1 - y_pred_1 )) print(f"loss_1_neg: {loss_1_neg:.4f}") # Sum positive and negative losses to calculate total loss loss_1 = loss_1_pos + loss_1_neg print(f"loss_1: {loss_1:.4f}")
loss_1: 1.8060
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Model 2 weighted loss. Now do the same calculations for when the predictions are from `y_pred_2`. Calculate the two terms of the weighted loss function and add them together.
# Calculate and print out the first term in the loss function, which we are calling 'loss_pos' loss_2_pos = -1 * np.sum(w_p * y_true * np.log(y_pred_2)) print(f"loss_2_pos: {loss_2_pos:.4f}") # Calculate and print out the second term in the loss function, which we're calling 'loss_neg' loss_2_neg = -1 * np.sum(w_n * (1 - y_true) * np.log(1 - y_pred_2)) print(f"loss_2_neg: {loss_2_neg:.4f}") # Sum positive and negative losses to calculate total loss when the prediction is y_pred_2 loss_2 = loss_2_pos + loss_2_neg print(f"loss_2: {loss_2:.4f}")
loss_2: 1.8060
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Compare model 1 and model 2 weighted loss
print(f"When the model always predicts 0.9, the total loss is {loss_1:.4f}") print(f"When the model always predicts 0.1, the total loss is {loss_2:.4f}")
When the model always predicts 0.9, the total loss is 1.8060 When the model always predicts 0.1, the total loss is 1.8060
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
What do you notice? Since you used a weighted loss, the calculated loss is the same whether the model always predicts 0.9 or always predicts 0.1. You may have also noticed that when you calculate each term of the weighted loss separately, there is a bit of symmetry when comparing between the two sets of predictions.
print(f"loss_1_pos: {loss_1_pos:.4f} \t loss_1_neg: {loss_1_neg:.4f}") print() print(f"loss_2_pos: {loss_2_pos:.4f} \t loss_2_neg: {loss_2_neg:.4f}")
loss_1_pos: 0.0790 loss_1_neg: 1.7269 loss_2_pos: 1.7269 loss_2_neg: 0.0790
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
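A quick numerical check of that symmetry, reusing the loss terms already computed above: the positive term of model 1 should match the negative term of model 2, and vice versa.

```python
# Both checks should print True: the two models sit 0.1 away from opposite labels
print(np.isclose(loss_1_pos, loss_2_neg))
print(np.isclose(loss_1_neg, loss_2_pos))
```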
Even though there is a class imbalance, with 3 positive labels but only one negative label, the weighted loss accounts for this by giving more weight to the negative label than to the positive label.

Weighted Loss for more than one class

In this week's assignment, you will calculate the multi-class weighted loss (when there is more than one disease class that your model is learning to predict). Here, you can practice working with 2D numpy arrays, which will help you implement the multi-class weighted loss in the graded assignment. You will work with a dataset that has two disease classes (two columns).
# View the labels (true values) that you will practice with y_true = np.array( [[1,0], [1,0], [1,0], [1,0], [0,1] ]) y_true
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Choosing axis=0 or axis=1

You will use `numpy.sum` to count the number of times column `0` has the value 0. First, notice the difference between setting axis=0 and axis=1.
# See what happens when you set axis=0 print(f"using axis = 0 {np.sum(y_true,axis=0)}") # Compare this to what happens when you set axis=1 print(f"using axis = 1 {np.sum(y_true,axis=1)}")
using axis = 0 [4 1] using axis = 1 [1 1 1 1 1]
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Notice that if you choose `axis=0`, the sum is taken for each of the two columns. This is what you want to do in this case. If you set `axis=1`, the sum is taken for each row.

Calculate the weights

Previously, you visually inspected the data to calculate the fraction of negative and positive labels. Here, you can do this programmatically.
# set the positive weights as the fraction of negative labels (0) for each class (each column) w_p = np.sum(y_true == 0,axis=0) / y_true.shape[0] w_p # set the negative weights as the fraction of positive labels (1) for each class w_n = np.sum(y_true == 1, axis=0) / y_true.shape[0] w_n
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
In the assignment, you will train a model to try and make useful predictions. In order to make this example easier to follow, you will pretend that your model always predicts the same value for every example.
# Set model predictions where all predictions are the same y_pred = np.ones(y_true.shape) y_pred[:,0] = 0.3 * y_pred[:,0] y_pred[:,1] = 0.7 * y_pred[:,1] y_pred
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
As before, calculate the two terms that make up the loss function. Notice that you are now working with more than one class (each class is a column). In this case, there are two classes. Start by calculating the loss for class `0`.

$$ loss^{(i)} = loss_{pos}^{(i)} + loss_{neg}^{(i)} $$

$$loss_{pos}^{(i)} = -1 \times weight_{pos}^{(i)} \times y^{(i)} \times log(\hat{y}^{(i)})$$

$$loss_{neg}^{(i)} = -1 \times weight_{neg}^{(i)} \times (1- y^{(i)}) \times log(1 - \hat{y}^{(i)})$$

View the zero column of the weights, true values, and predictions that you will use to calculate the loss from the positive predictions.
# Print and view column zero of the weight print(f"w_p[0]: {w_p[0]}") print(f"y_true[:,0]: {y_true[:,0]}") print(f"y_pred[:,0]: {y_pred[:,0]}") # calculate the loss from the positive predictions, for class 0 loss_0_pos = -1 * np.sum(w_p[0] * y_true[:, 0] * np.log(y_pred[:, 0]) ) print(f"loss_0_pos: {loss_0_pos:.4f}")
loss_0_pos: 0.9632
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
View the zero column for the weights, true values, and predictions that you will use to calculate the loss from the negative predictions.
# Print and view column zero of the weight print(f"w_n[0]: {w_n[0]}") print(f"y_true[:,0]: {y_true[:,0]}") print(f"y_pred[:,0]: {y_pred[:,0]}") # Calculate the loss from the negative predictions, for class 0 loss_0_neg = -1 * np.sum( w_n[0] * (1 - y_true[:, 0]) * np.log(1 - y_pred[:, 0]) ) print(f"loss_0_neg: {loss_0_neg:.4f}") # add the two loss terms to get the total loss for class 0 loss_0 = loss_0_neg + loss_0_pos print(f"loss_0: {loss_0:.4f}")
loss_0: 1.2485
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Now you are familiar with the array slicing that you would use when there are multiple disease classes stored in a two-dimensional array.

Now it's your turn!
* Can you calculate the loss for class (column) `1`?
# calculate the loss from the positive predictions, for class 1
loss_1_pos = -1 * np.sum(w_p[1] * y_true[:, 1] * np.log(y_pred[:, 1]))
print(f"loss_1_pos: {loss_1_pos:.4f}")
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Expected output
```
loss_1_pos: 0.2853
```
# Calculate the loss from the negative predictions, for class 1
loss_1_neg = -1 * np.sum(w_n[1] * (1 - y_true[:, 1]) * np.log(1 - y_pred[:, 1]))
print(f"loss_1_neg: {loss_1_neg:.4f}")
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
Expected output
```
loss_1_neg: 0.9632
```
# Add the two loss terms to get the total loss for class 1
loss_1 = loss_1_pos + loss_1_neg
print(f"loss_1: {loss_1:.4f}")
_____no_output_____
Apache-2.0
1_diagnosis/week1/lecture_exercise/AI4M_C1_W1_lecture_ex_02.ipynb
amitbcp/ai_for_medicine_specialisation
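Putting the per-class steps together, the same calculation can be written as one vectorized function over a 2-D label matrix. This is an illustrative sketch only (the graded assignment defines its own function and signature), and the small `epsilon` term for numerical stability is our addition.

```python
import numpy as np

def multi_class_weighted_loss(y_true, y_pred, epsilon=1e-7):
    """Sum of weighted binary cross-entropy losses over all classes (columns)."""
    n = y_true.shape[0]
    w_p = np.sum(y_true == 0, axis=0) / n   # positive weight per class: fraction of negative labels
    w_n = np.sum(y_true == 1, axis=0) / n   # negative weight per class: fraction of positive labels
    loss_pos = -np.sum(w_p * y_true * np.log(y_pred + epsilon), axis=0)
    loss_neg = -np.sum(w_n * (1 - y_true) * np.log(1 - y_pred + epsilon), axis=0)
    return np.sum(loss_pos + loss_neg)

# Applied to the y_true and y_pred arrays above, this returns approximately loss_0 + loss_1.
```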
LightGBM Model
Author: 艾宏峰
Created: 2020.11.15
import gc import pandas as pd import lightgbm as lgb import numpy as np from datetime import datetime from tqdm import tqdm from sklearn.model_selection import StratifiedKFold, TimeSeriesSplit from sklearn.metrics import accuracy_score import copy import warnings warnings.filterwarnings("ignore")
_____no_output_____
MIT
lightgbm_cu_20201123.ipynb
AlvinAi96/serverless_prediction
The following LGBMRegressor parameters can be tuned (default values in parentheses):
- boosting_type (gbdt): boosting method, default gbdt. Four options: (1) gbdt: traditional gradient boosting decision tree; (2) rf: random forest; (3) dart: Dropouts meet Multiple Additive Regression Trees; (4) goss: Gradient-based One-Side Sampling.
- num_leaves (31): maximum number of leaves per tree.
- max_depth (-1): maximum tree depth.
- learning_rate (0.1): learning rate.
- n_estimators (100): number of boosted trees.
- subsample_for_bin (200000): number of samples used to construct bins.
- objective: defaults to regression for LGBMRegressor.
- class_weight (None): for multi-class tasks; not needed here.
- min_split_gain (0): minimum loss reduction required to further split a leaf node.
- min_child_weight (1e-3): minimum sum of instance weights required in a child leaf.
- min_child_samples (20): minimum number of samples required in a child leaf.
- subsample (1): row subsampling rate of the training set.
- subsample_freq (0): subsampling frequency.
- colsample_bytree (1): column subsampling rate when constructing each tree.
- reg_alpha (0): L1 regularization term on weights.
- reg_lambda (0): L2 regularization term on weights.
- random_state (None): random seed.
- importance_type ('split'): the type of feature importance stored in feature_importances_. If 'split', the result contains the number of times the feature is used in the model; if 'gain', the result contains the total gain of the splits that use the feature.
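As a minimal illustration of the parameters listed above, an LGBMRegressor can be configured as shown below. The values here are arbitrary examples for illustration only, not the tuned settings used later in this notebook.

```python
from lightgbm import LGBMRegressor

# Example configuration; placeholder values, not the settings used in this notebook
model = LGBMRegressor(
    boosting_type='gbdt',    # gbdt / rf / dart / goss
    num_leaves=31,           # maximum number of leaves per tree
    max_depth=-1,            # no depth limit
    learning_rate=0.1,
    n_estimators=100,
    subsample=0.8,           # row subsampling rate
    colsample_bytree=0.75,   # column subsampling rate per tree
    reg_lambda=10,           # L2 regularization
    random_state=2022,
    importance_type='split'
)
```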
# LightGBM参数 params = { 'metric':'mse', 'objective':'regression', 'seed':2022, 'boosting_type':'gbdt', # 也可用其他的,一个个试着先,dart不支持early stopping 'early_stopping_rounds':10, 'subsample':0.8, 'feature_fraction':0.75, 'bagging_fraction': 0.75, 'reg_lambda': 10 } verbose_flag = False # 是否展示模型训练验证详细信息 folds = 5 # 5折交叉验证 # 由eda.ipynb得到含缺失值特征的队列 # miss_qids = [297, 298, 20889, 21487, 21671, 21673, 81221, 82695, 82697, 82929, 83109, 83609] miss_qids = [] # 导入数据 data_path = r'/media/alvinai/Documents/serverless/data/' # 训练集非自变量特征:'QUEUE_ID', 'NEXT_5_CPU_USAGE', 'NEXT_5_LAUNCHING_JOB_NUMS' # 测试集非自变量特征:'ID', 'QUEUE_ID', 'NEXT_5_CPU_USAGE', 'NEXT_5_LAUNCHING_JOB_NUMS' df_train = pd.read_csv(data_path + 'train_v30b1.csv') df_test = pd.read_csv(data_path + 'test_v30b1.csv') sub_sample = pd.read_csv(data_path + 'submit_example.csv') df_train.drop(['DOTTING_MINUTE_4','CPU_USAGE_3_std'], axis = 1, inplace = True) df_test.drop(['DOTTING_MINUTE_4','CPU_USAGE_3_std'], axis = 1, inplace = True) # # 导入lightgbm_ljn.ipynb预测好的NEXT_5_LAUNCHING_JOB_NUMS结果 # ljn_predictions = pd.read_csv(r'/media/alvinai/Documents/serverless/result/lgb_ljn_sub_20201108_2156.csv') def cu_error(y, y_pred): '''根据官网提供对CPU_USAGE的误差测评公式进行打分''' return np.abs(y - y_pred) * 0.9 # def get_import_feats(X_train, Y_train, X_val, Y_val, import_feat_num, params): # model = lgb.LGBMRegressor(**params) # lgb_model = model.fit(X_train, # Y_train, # eval_names=['train', 'valid'], # eval_set=[(X_train, Y_train), (X_val, Y_val)], # verbose=0, # eval_metric=params['metric'], # early_stopping_rounds=params['early_stopping_rounds']) # import_feat_df = pd.DataFrame({ # 'feature': list(X_train), # 'importance': lgb_model.feature_importances_, # }).sort_values(by='importance',ascending=False) # import_feats = list(import_feat_df['feature'].values)[:import_feat_num] # # print(import_feat_df['feature'].values) # # print(import_feats) # # print(Y_train[import_feats]) # return X_train[import_feats], Y_train, X_val[import_feats], Y_val, import_feats def run_lgb_qid(df_train, df_test, target, qid, params): '''针对给定预测目标和队列进行LGB训练、验证和评估 输入: 1. df_train (pd.DataFrame):训练集 2. df_test (pd.DataFrame):测试集 3. target (str) : 当前预测目标变量名 4. qid (int) : 当前针对的队列id 5. params (dict) : 模型参数字典 输出: 1. prediction (pd.DataFrame): 测试集预测结果 2. 
score (float) : 验证集MSE分数 ''' if qid not in miss_qids: # 正常队列:过滤不相干特征,得到训练模型的输入特征 feature_names = list( filter(lambda x: x not in ['QUEUE_ID'] + [f'cpu_{i}' for i in range(1,6)], df_train.columns)) else: # 对含缺失值特征的队列:过滤掉含缺失值的特征,得到训练模型的输入特征 feature_names = list( filter(lambda x: x not in ['QUEUE_ID'] + [f'cpu_{i}' for i in range(1,6)] + [f for f in df_train.columns if f.startswith('DISK_USAGE')], df_train.columns)) # 提取 QUEUE_ID 对应的数据集 df_train = df_train[df_train['QUEUE_ID'] == qid] df_test = df_test[df_test['QUEUE_ID'] == qid] # # 打印当前训练信息 # if verbose_flag == True: # print(f"QUEUE_ID:{qid}, target:{target}, train样本量:{len(df_train)}, test样本量:{len(df_test)}") # 构建模型 model = lgb.LGBMRegressor(**params) prediction = df_test[['ID', 'QUEUE_ID']] # 用于存放不同折下预测结果的平均值 prediction['pred_' + target] = 0 # 初始化 scores = [] # 用于存放不同折下的预测分数 pred_valid = np.zeros((len(df_train),)) # 初始化验证集预测结果 kfold = StratifiedKFold(n_splits=folds, shuffle=True, random_state=params['seed']) for fold_id, (trn_idx, val_idx) in enumerate(kfold.split(df_train, df_train[target])): # 划分数据集 X_train = df_train.iloc[trn_idx][feature_names] Y_train = df_train.iloc[trn_idx][target] X_val = df_train.iloc[val_idx][feature_names] Y_val = df_train.iloc[val_idx][target] # # 获取特征重要性 # import_feat_num = 50 # X_train, Y_train, X_val, Y_val, import_feats = get_import_feats(X_train, Y_train, X_val, Y_val, import_feat_num, params) # feature_names = import_feats # 训练模型 lgb_model = model.fit(X_train, Y_train, eval_names=['train', 'valid'], eval_set=[(X_train, Y_train), (X_val, Y_val)], verbose=0, eval_metric=params['metric'], early_stopping_rounds=params['early_stopping_rounds']) # 预测划分后的测试集和验证集 pred_test = lgb_model.predict(df_test[feature_names], num_iteration = lgb_model.best_iteration_) pred_valid[val_idx] = lgb_model.predict(X_val, num_iteration = lgb_model.best_iteration_) # 记录每次fold下的模型原始分数 scores.append(lgb_model.best_score_['valid']['l2']) # 追加当前第k折下模型的最佳分数 # 追加预测结果 prediction['pred_' + target] += pred_test / kfold.n_splits # # 打印特征重要性 # print(pd.DataFrame({ # 'feature': list(X_train), # 'importance': lgb_model.feature_importances_, # }).sort_values(by='importance',ascending=False)) # 删除冗余变量 del lgb_model, pred_test, X_train, Y_train, X_val, Y_val gc.collect() # 计算测评分数 formal_score = np.mean([cu_error(y_true, y_pred) for y_true, y_pred in zip(df_train[target].values.ravel(), pred_valid)]) if verbose_flag == True: print("每折下的MSE分数:{}, 平均每折MSE分数:{:.4f}".format([np.round(v,2) for v in scores], np.mean(scores))) print("-"*60) return prediction, np.mean(scores), formal_score predictions = list() scores = list() formal_scores = list() for qid in tqdm(df_test['QUEUE_ID'].unique()): df = pd.DataFrame() for t in [f'cpu_{i}' for i in range(1,6)]: prediction, score, formal_score = run_lgb_qid(df_train, df_test, t, qid, params) if t == 'cpu_1': df = prediction.copy() else: df = pd.merge(df, prediction, on=['ID', 'QUEUE_ID'], how='left') scores.append(score) formal_scores.append(formal_score) predictions.append(df) print('mean MSE score: ', np.mean(scores)) print('mean 测评 score:', np.mean(formal_scores)) sub = pd.concat(predictions) sub = sub.sort_values(by='ID').reset_index(drop=True) sub.drop(['QUEUE_ID'], axis=1, inplace=True) sub.columns = ['ID'] + [f'CPU_USAGE_{i}' for i in range(1,6)] # 全置 0 都比训练出来的结果好 for col in [f'LAUNCHING_JOB_NUMS_{i}' for i in range(1,6)]: sub[col] = 0 sub = sub[['ID', 'CPU_USAGE_1', 'LAUNCHING_JOB_NUMS_1', 'CPU_USAGE_2', 'LAUNCHING_JOB_NUMS_2', 'CPU_USAGE_3', 'LAUNCHING_JOB_NUMS_3', 'CPU_USAGE_4', 'LAUNCHING_JOB_NUMS_4', 
'CPU_USAGE_5', 'LAUNCHING_JOB_NUMS_5']] print(sub.shape) sub.head() # 注意: 提交要求预测结果需为非负整数, 包括 ID 也需要是整数 sub['ID'] = sub['ID'].astype(int) for col in [i for i in sub.columns if i != 'ID']: sub[col] = sub[col].round() sub[col] = sub[col].apply(np.floor) sub[col] = sub[col].apply(lambda x: 0 if x<0 else x) sub[col] = sub[col].apply(lambda x: 100 if x>100 else x) sub[col] = sub[col].astype(int) sub.head(10) # 保存最终结果 current_time = datetime.now() current_time = current_time.strftime('%Y%m%d_%H%M') result_name = 'lgb_cu_ljn_sub_' + current_time + '_seed2022.csv' sub.to_csv(r'/media/alvinai/Documents/serverless/result/' + result_name, index = False)
_____no_output_____
MIT
lightgbm_cu_20201123.ipynb
AlvinAi96/serverless_prediction
Process an image and run detection
import cv2 import numpy as np import torch import torchvision import os.path as osp from mot_neural_solver.path_cfg import OUTPUT_PATH, DATA_PATH from torchvision.models.detection.faster_rcnn import FastRCNNPredictor import matplotlib.pyplot as plt num_classes = 2 _config = {'dataset_dir': 'synthShrimps/test', 'train_params': {'num_epochs': 27, 'batch_size': 16, 'start_ckpt': 'trained_models/frcnn/mot20_frcnn_epoch_27-30mar21.pt.tar', 'save_only_last_ckpt': True}, 'optimizer_params': {'lr': 0.0001, 'momentum': 0.9, 'weight_decay': 0.0005}, 'seed': 620124203} model_path = osp.join(OUTPUT_PATH, _config['train_params']['start_ckpt']) model_path img_path='/mnt/gpu_storage/hugo/mot_neural_solver/data/synthShrimps/test/SHRIMP_0009/img1/0001.jpg' img = cv2.imread(img_path) img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)/255 img.shape imgT = torch.tensor(np.transpose(img, (2, 0, 1))).type(torch.cuda.FloatTensor) # channel first imgT.shape imgT def get_detection_model(num_classes): # load an instance segmentation model pre-trained on COCO model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) # get the number of input features for the classifier in_features = model.roi_heads.box_predictor.cls_score.in_features # replace the pre-trained head with a new one model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes) model.roi_heads.nms_thresh = 0.3 return model def plot(img, boxes): fig, ax = plt.subplots(1, dpi=96) img = img.mul(255).permute(1, 2, 0).byte().numpy() width, height, _ = img.shape ax.imshow(img, cmap='gray') fig.set_size_inches(width / 80, height / 80) for box in boxes: rect = plt.Rectangle( (box[0], box[1]), box[2] - box[0], box[3] - box[1], fill=False, linewidth=1.0) ax.add_patch(rect) plt.axis('off') plt.show() device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') # get the model using our helper function model = get_detection_model(num_classes) model.to(device) model_state_dict = torch.load(model_path) model.load_state_dict(model_state_dict) # put the model in evaluation mode model.eval() with torch.no_grad(): prediction = model([imgT.to(device)])[0] print("prediction") plot(imgT.cpu(), prediction['boxes']) print(prediction)
prediction
MIT
test_object_detection.ipynb
OpenSuze/mot_neural_solver
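The prediction dictionary returned by torchvision detection models also contains per-box confidence scores, so a natural follow-up is to keep only high-confidence boxes before plotting. A small sketch reusing the `prediction`, `imgT`, and `plot` objects defined above; the 0.5 threshold is an arbitrary example, not a value taken from this notebook.

```python
# Keep only detections whose confidence score exceeds a threshold before plotting
score_threshold = 0.5
keep = prediction['scores'] > score_threshold
plot(imgT.cpu(), prediction['boxes'][keep])
print(f"kept {int(keep.sum())} of {len(prediction['boxes'])} boxes")
```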
Inheritance
from random import randrange # Here's the original Pet class class Pet(): boredom_decrement = 4 hunger_decrement = 6 boredom_threshold = 5 hunger_threshold = 10 sounds = ['Mrrp'] def __init__(self, name = "Kitty"): self.name = name self.hunger = randrange(self.hunger_threshold) self.boredom = randrange(self.boredom_threshold) self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class def clock_tick(self): self.boredom += 1 self.hunger += 1 def mood(self): if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold: return "happy" elif self.hunger > self.hunger_threshold: return "hungry" else: return "bored" def __str__(self): state = " I'm " + self.name + ". " state += " I feel " + self.mood() + ". " # state += "Hunger %d Boredom %d Words %s" % (self.hunger, self.boredom, self.sounds) return state def hi(self): print(self.sounds[randrange(len(self.sounds))]) self.reduce_boredom() def teach(self, word): self.sounds.append(word) self.reduce_boredom() def feed(self): self.reduce_hunger() def reduce_hunger(self): self.hunger = max(0, self.hunger - self.hunger_decrement) def reduce_boredom(self): self.boredom = max(0, self.boredom - self.boredom_decrement) # Here's the new definition of class Cat, a subclass of Pet. class Cat(Pet): # the class name that the new class inherits from goes in the parentheses, like so. sounds = ['Meow'] def chasing_rats(self): return "What are you doing, Pinky? Taking over the world?!" p1 = Pet("Fido") print(p1) # we've seen this stuff before! p1.feed() p1.hi() print(p1) cat1 = Cat("Fluffy") print(cat1) # this uses the same __str__ method as the Pets do cat1.feed() # Totally fine, because the cat class inherits from the Pet class! cat1.hi() print(cat1) print(cat1.chasing_rats()) #print(p1.chasing_rats()) # This line will give us an error. The Pet class doesn't have this method! class Cheshire(Cat): # this inherits from Cat, which inherits from Pet def smile(self): # this method is specific to instances of Cheshire print(":D :D :D") # Let's try it with instances. cat1 = Cat("Fluffy") cat1.feed() # Totally fine, because the cat class inherits from the Pet class! cat1.hi() # Uses the special Cat hello. print(cat1) print(cat1.chasing_rats()) new_cat = Cheshire("Pumpkin") # create a Cheshire cat instance with name "Pumpkin" new_cat.hi() # same as Cat! new_cat.chasing_rats() # OK, because Cheshire inherits from Cat new_cat.smile() # Only for Cheshire instances (and any classes that you make inherit from Cheshire) # cat1.smile() # This line would give you an error, because the Cat class does not have this method! # None of the subclass methods can be used on the parent class, though. p1 = Pet("Teddy") p1.hi() # just the regular Pet hello #p1.chasing_rats() # This will give you an error -- this method doesn't exist on instances of the Pet class. #p1.smile() # This will give you an error, too. This method does not exist on instances of the Pet class. 
CurrentYear = 2019 class Students(): def __init__(self,name,year): self.name = name self.year = year def getYear(self): return CurrentYear - self.year def __str__(self): return "{} ({})".format(self.name , self.getYear()) class Details(Students): def __init__(self,name,year): Students.__init__(self,name,year) self.knowledge = 0 def study(self): self.knowledge = + 1 Final = Details("kajal",2) Final.study() print(Final.getYear()) print(Final.knowledge) print(Final) class Book(): def __init__(self , bookname , author): self.bookname = bookname self.author = author def BookPages(self): self.pages def __str__(self): return "'{}' by {}".format(self.bookname,self.author) class EBook(Book): def __init__(self , bookname , author ,totalpages): Book.__init__(self,bookname,author) self.totalpages = totalpages class Number_of_book(Book): def __init__(self,bookname,author,NumBook): Book.__init__(self,bookname,author) self.NumBook = NumBook class Library: def __init__(self): self.book = [] def addBook(self,books): self.book.append(books) def sizeBook(self): return len(self.book) def __str__(self): return "{} {}".format(self.author , self.name , self.sizeBook) addL = Library() addL.addBook(Number) addL.addBook(NumTotal) print(addL.sizeBook) Number = Number_of_book("jungke Book" ,"Kishore Kumar" , 2) NumTotal = EBook("jungle mumma" , "kajal singh",500) print(NumTotal.totalpages) print(Number.NumBook) from random import randrange # Here's the original Pet class class Pet(): boredom_decrement = 4 hunger_decrement = 6 boredom_threshold = 5 hunger_threshold = 10 sounds = ['Mrrp'] def __init__(self, name = "Kitty"): self.name = name self.hunger = randrange(self.hunger_threshold) self.boredom = randrange(self.boredom_threshold) self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class def clock_tick(self): self.boredom += 1 self.hunger += 1 def mood(self): if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold: return "happy" elif self.hunger > self.hunger_threshold: return "hungry" else: return "bored" def __str__(self): state = " I'm " + self.name + ". " state += " I feel " + self.mood() + ". 
" # state += "Hunger %d Boredom %d Words %s" % (self.hunger, self.boredom, self.sounds) return state def hi(self): print(self.sounds[randrange(len(self.sounds))]) self.reduce_boredom() def teach(self, word): self.sounds.append(word) self.reduce_boredom() def feed(self): self.reduce_hunger() def reduce_hunger(self): self.hunger = max(0, self.hunger - self.hunger_decrement) def reduce_boredom(self): self.boredom = max(0, self.boredom - self.boredom_decrement) class Cat(Pet): sounds = ['Meow'] def mood(self): if self.hunger > self.hunger_threshold: return "hungry" if self.boredom <2: return "grumpy; leave me alone" elif self.boredom > self.boredom_threshold: return "bored" elif randrange(2) == 0: return "randomly annoyed" else: return "happy" class Dog(Pet): sounds = ['Woof', 'Ruff'] def mood(self): if (self.hunger > self.hunger_threshold) and (self.boredom > self.boredom_threshold): return "bored and hungry" else: return "happy" c1 = Cat("Fluffy") d1 = Dog("Astro") c1.boredom = 1 print(c1.mood()) c1.boredom = 3 for i in range(10): print(c1.mood()) print(d1.mood()) from random import randrange # Here's the original Pet class class Pet(): boredom_decrement = 4 hunger_decrement = 6 boredom_threshold = 5 hunger_threshold = 10 sounds = ['Mrrp'] def __init__(self, name = "Kitty"): self.name = name self.hunger = randrange(self.hunger_threshold) self.boredom = randrange(self.boredom_threshold) self.sounds = self.sounds[:] # copy the class attribute, so that when we make changes to it, we won't affect the other Pets in the class def clock_tick(self): self.boredom += 1 self.hunger += 1 def mood(self): if self.hunger <= self.hunger_threshold and self.boredom <= self.boredom_threshold: return "happy" elif self.hunger > self.hunger_threshold: return "hungry" else: return "bored" def __str__(self): state = " I'm " + self.name + ". " state += " I feel " + self.mood() + ". " # state += "Hunger %d Boredom %d Words %s" % (self.hunger, self.boredom, self.sounds) return state def hi(self): print(self.sounds[randrange(len(self.sounds))]) self.reduce_boredom() def teach(self, word): self.sounds.append(word) self.reduce_boredom() def feed(self): self.reduce_hunger() def reduce_hunger(self): self.hunger = max(0, self.hunger - self.hunger_decrement) def reduce_boredom(self): self.boredom = max(0, self.boredom - self.boredom_decrement) from random import randrange class Dog(Pet): sounds = ['Woof', 'Ruff'] def feed(self): Pet.feed(self) print("Arf! Thanks!") d1 = Dog("Astro") d1.feed() class Bird(Pet): sounds = ["chirp"] def __init__(self, name="Kitty", chirp_number=2): Pet.__init__(self, name) # call the parent class's constructor # basically, call the SUPER -- the parent version -- of the constructor, with all the parameters that it needs. self.chirp_number = chirp_number # now, also assign the new instance variable def hi(self): for i in range(self.chirp_number): print(self.sounds[randrange(len(self.sounds))]) self.reduce_boredom() b1 = Bird('tweety', 5) b1.teach("Polly wanna cracker") b1.hi() class Pokemon(object): attack = 12 defense = 10 health = 15 p_type = "Normal" def __init__(self, name, level = 5): self.name = name self.level = level def train(self): self.update() self.attack_up() self.defense_up() self.health_up() self.level = self.level + 1 if self.level%self.evolve == 0: return self.level, "Evolved!" 
else: return self.level def attack_up(self): self.attack = self.attack + self.attack_boost return self.attack def defense_up(self): self.defense = self.defense + self.defense_boost return self.defense def health_up(self): self.health = self.health + self.health_boost return self.health def update(self): self.health_boost = 5 self.attack_boost = 3 self.defense_boost = 2 self.evolve = 10 def __str__(self): return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level) class Grass_Pokemon(Pokemon): attack = 15 defense = 14 health = 12 p_type = "Grass" attack_boost=10 def update(self): self.health_boost = 6 self.attack_boost = 2 self.defense_boost = 3 self.evolve = 12 def moves(self): self.p_moves = ["razor leaf", "synthesis", "petal dance"] p2=Grass_Pokemon("Bulby") p3=Grass_Pokemon("Pika") class Pokemon(object): attack = 12 defense = 10 health = 15 p_type = "Normal" def __init__(self, name, level = 5): self.name = name self.level = level def train(self): self.update() self.attack_up() self.defense_up() self.health_up() self.level = self.level + 1 if self.level%self.evolve == 0: return self.level, "Evolved!" else: return self.level def attack_up(self): self.attack = self.attack + self.attack_boost return self.attack def defense_up(self): self.defense = self.defense + self.defense_boost return self.defense def health_up(self): self.health = self.health + self.health_boost return self.health def update(self): self.health_boost = 5 self.attack_boost = 3 self.defense_boost = 2 self.evolve = 10 def __str__(self): return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level) class Grass_Pokemon(Pokemon): attack = 15 defense = 14 health = 12 p_type = "Grass" def update(self): self.health_boost = 6 self.attack_boost = 2 self.defense_boost = 3 self.evolve = 12 def moves(self): self.p_moves = ["razor leaf", "synthesis", "petal dance"] p2 = Grass_Pokemon("Bulby") p3 =Grass_Pokemon("Pika") print(p2) print(p3) class Pokemon(): attack = 12 defense = 10 health = 15 p_type = "Normal" def __init__(self, name,level = 5): self.name = name self.level = level self.weak = "Normal" self.strong = "Normal" def train(self): self.update() self.attack_up() self.defense_up() self.health_up() self.level = self.level + 1 if self.level%self.evolve == 0: return self.level, "Evolved!" 
else: return self.level def attack_up(self): self.attack = self.attack + self.attack_boost return self.attack def defense_up(self): self.defense = self.defense + self.defense_boost return self.defense def health_up(self): self.health = self.health + self.health_boost return self.health def update(self): self.health_boost = 5 self.attack_boost = 3 self.defense_boost = 2 self.evolve = 10 def __str__(self): self.update() return "Pokemon name: {}, Type: {}, Level: {}".format(self.name, self.p_type, self.level) def opponent(self): return self.weak, self.strong class Grass_Pokemon(Pokemon): attack = 15 defense = 14 health = 12 p_type = "Grass" def __init__(self, name,level = 5): self.name = name self.level = level self.weak = "Fire" self.strong = "Water" def update(self): self.health_boost = 6 self.attack_boost = 2 self.defense_boost = 3 self.evolve = 12 class Ghost_Pokemon(Pokemon): p_type = "Ghost" def __init__(self, name,level = 5): self.name = name self.level = level self.weak = "Dark" self.strong = "Psychic" def update(self): self.health_boost = 3 self.attack_boost = 4 self.defense_boost = 3 class Fire_Pokemon(Pokemon): p_type = "Fire" def __init__(self, name,level = 5): self.name = name self.level = level self.weak = "Water" self.strong = "Grass" class Flying_Pokemon(Pokemon): p_type = "Flying" def __init__(self, name,level = 5): self.name = name self.level = level self.weak = "Electric" self.strong = "Fighting" def Square(x): return x*x import test test.testEqual(Square(10),100) x = 3 y = 4 if x < y: z = x print(z) else: if x > y: z = y print(z) else: ## x must be equal to y assert x==y z = 0 nums = [1, 5, 8] accum = 0 for w in nums: accum = accum + w assert accum == 14 print(accum) nums = [] accum = 0 for w in nums: accum = accum + w assert accum == None nums = [] if len(nums) == 0: accum = None else: accum = 0 for w in nums: accum = accum + w assert accum == None def distance(x1, y1, x2, y2): return 0 import test test.testEqual(distance(1, 2, 1, 2), 0) test.testEqual(distance(1,2, 4,6),5) class Point: """ Point class for representing and manipulating x,y coordinates. """ def __init__(self, initX, initY): self.x = initX self.y = initY def distanceFromOrigin(self): return ((self.x ** 2) + (self.y ** 2)) ** 0.5 def move(self, dx, dy): self.x = self.x + dx self.y = self.y + dy import test #testing class constructor (__init__ method) p = Point(3, 4) test.testEqual(p.y,4) test.testEqual(p.x,3) #testing the distance method p = Point(3, 4) test.testEqual(p.distanceFromOrigin(),5.0) #testing the move method p = Point(3, 4) p.move(-2, 3) test.testEqual(p.x ,1) test.testEqual(p.y,7) try: items = ['a', 'b'] third = items[2] print("This won't print") except Exception: print("got an error") print("continuing") try: items = ['a', 'b'] third = items[2] print("This won't print") except IndexError: print("error 1") print("continuing") try: x = 5 y = x/0 print("This won't print, either") except IndexError: print("error 2") except ZeroDivisionError: print("continuing again") try: items = ['a', 'b'] third = items[2] print("This won't print") except Exception as e: print("got an error") print(e) print("continuing") d = [1,2,3,45,6] if somekey in d: # it's there; extract the data extract_data(d) else: skip_this_one(d) try: extract_data(d) except: skip_this_one(d)
_____no_output_____
MIT
Class ,Constractor,Inheritance,overriding,Exception.ipynb
Ks226/upgard_code
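The final snippet above is schematic: `somekey`, `extract_data`, and `skip_this_one` are never defined, so it will not run as written. A self-contained, runnable version of the same "check first" versus "try/except" comparison might look like this; all names here are our own illustrations.

```python
d = [1, 2, 3, 45, 6]

def extract_data(data):
    # a stand-in for some operation that can fail on bad input
    return sum(data) / len(data)

def skip_this_one(data):
    return None

# Style 1: look before you leap, i.e. check the condition first
result = extract_data(d) if len(d) > 0 else skip_this_one(d)
print(result)

# Style 2: just try it, and handle the exception if the operation fails
try:
    result = extract_data(d)
except ZeroDivisionError:
    result = skip_this_one(d)
print(result)
```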