Columns: markdown (string), code (string), output (string), license (string), path (string), repo_name (string)
TODO: Backpropagate the error. Now it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
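The training cell below also calls `sigmoid`, `sigmoid_prime` and `error_formula`, which are defined in earlier cells of the notebook that are not shown in this excerpt. A minimal sketch of what those helpers typically look like (the log-loss form of `error_formula` is an assumption based on the course notebook):

```python
import numpy as np

def sigmoid(x):
    # logistic activation
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(x):
    # derivative of the sigmoid evaluated at x
    return sigmoid(x) * (1 - sigmoid(x))

def error_formula(y, output):
    # log-loss used for monitoring (assumed definition, not shown in this excerpt)
    return -y * np.log(output) - (1 - y) * np.log(1 - output)
```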
# TODO: Write the error term formula def error_term_formula(x, y, output): return (y - output) * sigmoid_prime(x) # Neural Network hyperparameters epochs = 1000 learnrate = 0.5 # Training function def train_nn(features, targets, epochs, learnrate): # Use to same seed to make debugging easier np.random.seed(42) n_records, n_features = features.shape last_loss = None # Initialize weights weights = np.random.normal(scale=1 / n_features**.5, size=n_features) for e in range(epochs): del_w = np.zeros(weights.shape) for x, y in zip(features.values, targets): # Loop through all records, x is the input, y is the target # Activation of the output unit # Notice we multiply the inputs and the weights here # rather than storing h as a separate variable output = sigmoid(np.dot(x, weights)) # The error, the target minus the network output error = error_formula(y, output) # The error term error_term = error_term_formula(x, y, output) # The gradient descent step, the error times the gradient times the inputs del_w += error_term * x # Update the weights here. The learning rate times the # change in weights, divided by the number of records to average weights += learnrate * del_w / n_records # Printing out the mean square error on the training set if e % (epochs / 10) == 0: out = sigmoid(np.dot(features, weights)) loss = np.mean((out - targets) ** 2) print("Epoch:", e) if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss print("=========") print("Finished training!") return weights weights = train_nn(features, targets, epochs, learnrate)
Epoch: 0 Train loss: 0.27336783372760837 ========= Epoch: 100 Train loss: 0.2144589591438936 ========= Epoch: 200 Train loss: 0.21248210601845877 ========= Epoch: 300 Train loss: 0.21145849287875826 ========= Epoch: 400 Train loss: 0.2108945778573249 ========= Epoch: 500 Train loss: 0.21055121998038537 ========= Epoch: 600 Train loss: 0.21031564296367067 ========= Epoch: 700 Train loss: 0.2101342506838123 ========= Epoch: 800 Train loss: 0.20998112157065615 ========= Epoch: 900 Train loss: 0.20984348241982478 ========= Finished training!
MIT
Introduction to Neural Networks/StudentAdmissions.ipynb
kushkul/Facebook-Pytorch-Scholarship-Challenge
Calculating the Accuracy on the Test Data
# Calculate accuracy on test data test_out = sigmoid(np.dot(features_test, weights)) predictions = test_out > 0.5 accuracy = np.mean(predictions == targets_test) print("Prediction accuracy: {:.3f}".format(accuracy))
Prediction accuracy: 0.800
MIT
Introduction to Neural Networks/StudentAdmissions.ipynb
kushkul/Facebook-Pytorch-Scholarship-Challenge
Lambda School Data Science*Unit 2, Sprint 2, Module 2*--- Random Forests Assignment- [ ] Read ["Adopting a Hypothesis-Driven Workflow"](https://outline.com/5S5tsB), a blog post by a Lambda DS student about the Tanzania Waterpumps challenge.- [ ] Continue to participate in our Kaggle challenge.- [ ] Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.- [ ] Try Ordinal Encoding.- [ ] Try a Random Forest Classifier.- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)- [ ] Commit your notebook to your fork of the GitHub repo. Stretch Goals Doing- [ ] Add your own stretch goal(s)!- [ ] Do more exploratory data analysis, data cleaning, feature engineering, and feature selection.- [ ] Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/).- [ ] Get and plot your feature importances.- [ ] Make visualizations and share on Slack. Reading: Top recommendations in _**bold italic:**_ Decision Trees- A Visual Introduction to Machine Learning, [Part 1: A Decision Tree](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/), and _**[Part 2: Bias and Variance](http://www.r2d3.us/visual-intro-to-machine-learning-part-2/)**_- [Decision Trees: Advantages & Disadvantages](https://christophm.github.io/interpretable-ml-book/tree.html#advantages-2)- [How a Russian mathematician constructed a decision tree — by hand — to solve a medical problem](http://fastml.com/how-a-russian-mathematician-constructed-a-decision-tree-by-hand-to-solve-a-medical-problem/)- [How decision trees work](https://brohrer.github.io/how_decision_trees_work.html)- [Let's Write a Decision Tree Classifier from Scratch](https://www.youtube.com/watch?v=LDRbO9a6XPU) Random Forests- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/), Chapter 8: Tree-Based Methods- [Coloring with Random Forests](http://structuringtheunstructured.blogspot.com/2017/11/coloring-with-random-forests.html)- _**[Random Forests for Complete Beginners: The definitive guide to Random Forests and Decision Trees](https://victorzhou.com/blog/intro-to-random-forests/)**_ Categorical encoding for trees- [Are categorical variables getting lost in your random forests?](https://roamanalytics.com/2016/10/28/are-categorical-variables-getting-lost-in-your-random-forests/)- [Beyond One-Hot: An Exploration of Categorical Variables](http://www.willmcginnis.com/2015/11/29/beyond-one-hot-an-exploration-of-categorical-variables/)- _**[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)**_- _**[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)**_- [Mean (likelihood) encodings: a comprehensive study](https://www.kaggle.com/vprokopev/mean-likelihood-encodings-a-comprehensive-study)- [The Mechanics of Machine Learning, Chapter 6: Categorically Speaking](https://mlbook.explained.ai/catvars.html) Imposter Syndrome- [Effort Shock and Reward Shock (How The Karate Kid Ruined The Modern World)](http://www.tempobook.com/2014/07/09/effort-shock-and-reward-shock/)- [How to manage impostor syndrome in data
science](https://towardsdatascience.com/how-to-manage-impostor-syndrome-in-data-science-ad814809f068)- ["I am not a real data scientist"](https://brohrer.github.io/imposter_syndrome.html)- _**[Imposter Syndrome in Data Science](https://caitlinhudon.com/2018/01/19/imposter-syndrome-in-data-science/)**_ More Categorical Encodings**1.** The article **[Categorical Features and Encoding in Decision Trees](https://medium.com/data-design/visiting-categorical-features-and-encoding-in-decision-trees-53400fa65931)** mentions 4 encodings:- **"Categorical Encoding":** This means using the raw categorical values as-is, not encoded. Scikit-learn doesn't support this, but some tree algorithm implementations do. For example, [Catboost](https://catboost.ai/), or R's [rpart](https://cran.r-project.org/web/packages/rpart/index.html) package.- **Numeric Encoding:** Synonymous with Label Encoding, or "Ordinal" Encoding with random order. We can use [category_encoders.OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html).- **One-Hot Encoding:** We can use [category_encoders.OneHotEncoder](http://contrib.scikit-learn.org/categorical-encoding/onehot.html).- **Binary Encoding:** We can use [category_encoders.BinaryEncoder](http://contrib.scikit-learn.org/categorical-encoding/binary.html).**2.** The short video **[Coursera — How to Win a Data Science Competition: Learn from Top Kagglers — Concept of mean encoding](https://www.coursera.org/lecture/competitive-data-science/concept-of-mean-encoding-b5Gxv)** introduces an interesting idea: use both X _and_ y to encode categoricals. Category Encoders has multiple implementations of this general concept:- [CatBoost Encoder](http://contrib.scikit-learn.org/categorical-encoding/catboost.html)- [James-Stein Encoder](http://contrib.scikit-learn.org/categorical-encoding/jamesstein.html)- [Leave One Out](http://contrib.scikit-learn.org/categorical-encoding/leaveoneout.html)- [M-estimate](http://contrib.scikit-learn.org/categorical-encoding/mestimate.html)- [Target Encoder](http://contrib.scikit-learn.org/categorical-encoding/targetencoder.html)- [Weight of Evidence](http://contrib.scikit-learn.org/categorical-encoding/woe.html) Category Encoders' mean encoding implementations work for regression problems or binary classification problems. For multi-class classification problems, you will need to temporarily reformulate it as binary classification. For example:```python encoder = ce.TargetEncoder(min_samples_leaf=..., smoothing=...) # Both parameters > 1 to avoid overfitting X_train_encoded = encoder.fit_transform(X_train, y_train=='functional') X_val_encoded = encoder.transform(X_val, y_val=='functional')``` For this reason, mean encoding won't work well within pipelines for multi-class classification problems.**3.** The **[dirty_cat](https://dirty-cat.github.io/stable/)** library has a Target Encoder implementation that works with multi-class classification. ```python dirty_cat.TargetEncoder(clf_type='multiclass-clf')``` It also implements an interesting idea called ["Similarity Encoder" for dirty categories](https://www.slideshare.net/GaelVaroquaux/machine-learning-on-non-curated-data-154905090). However, it seems like dirty_cat doesn't handle missing values or unknown categories as well as category_encoders does. And you may need to use it with one column at a time, instead of with your whole dataframe.**4.
[Embeddings](https://www.kaggle.com/learn/embeddings)** can work well with sparse / high cardinality categoricals._**I hope it’s not too frustrating or confusing that there’s not one "canonical" way to encode categoricals. It’s an active area of research and experimentation! Maybe you can make your own contributions!**_ Setup: You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab (run the code cell below).
%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/' import pandas as pd from sklearn.model_selection import train_test_split train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'), pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv')) test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv') sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv') train.shape, test.shape
_____no_output_____
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
1. Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.
import pandas as pd import numpy as np %matplotlib inline # Splitting the train into a train & val train, val = train_test_split(train, train_size=0.80, test_size=0.02, stratify=train['status_group'], random_state=42) def wrangle(X): X = X.copy() X['latitude'] = X['latitude'].replace(-2e-08, 0) cols_with_zeros = ['longitude', 'latitude', 'construction_year', 'gps_height', 'population'] for col in cols_with_zeros: X[col] = X[col].replace(0, np.nan) X[col+'_MISSING'] = X[col].isnull() duplicates = ['quantity_group', 'payment_type'] X = X.drop(columns=duplicates) unusable_variance = ['recorded_by', 'id'] X = X.drop(columns=unusable_variance) X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True) X['year_recorded'] = X['date_recorded'].dt.year X['month_recorded'] = X['date_recorded'].dt.month X['day_recorded'] = X['date_recorded'].dt.day X = X.drop(columns='date_recorded') X['years'] = X['year_recorded'] - X['construction_year'] X['years_MISSING'] = X['years'].isnull() return X train = wrangle(train) val = wrangle(val) test = wrangle(test) # The target is status_group target = 'status_group' train_features = train.drop(columns=[target]) # Getting list of the numeric features numeric_features = train_features.select_dtypes(include='number').columns.tolist() cardinality = train_features.select_dtypes(exclude='number').nunique() categorical_features = cardinality[cardinality <= 50].index.tolist() # Combined the lists features = numeric_features + categorical_features # Arranging the data into X features matrix & y target vector X_train = train[features] y_train = train[target] X_val = val[features] y_val = val[target] X_test = test[features] # 38 features print(X_train.shape, X_val.shape)
(47520, 38) (1188, 38)
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
2. Try Ordinal Encoding
pip install category_encoders %%time import category_encoders as ce from sklearn.impute import SimpleImputer from sklearn.ensemble import RandomForestClassifier from sklearn.pipeline import make_pipeline pipeline = make_pipeline( ce.OneHotEncoder(use_cat_names=True), SimpleImputer(strategy='mean'), RandomForestClassifier(n_jobs=-1, random_state=42) ) pipeline.fit(X_train, y_train) print('Validation Accuracy:', round(pipeline.score(X_val, y_val), 4)) encoder = pipeline.named_steps['onehotencoder'] encoded_df = encoder.transform(X_train) print('X_train shape after encoding', encoded_df.shape) # Now there are 182 features %matplotlib inline import matplotlib.pyplot as plt # Get feature importances rf = pipeline.named_steps['randomforestclassifier'] importances = pd.Series(rf.feature_importances_, encoded_df.columns) n = 25 plt.figure(figsize=(10, n/2)) plt.title(f'Top {n} Features') importances.sort_values()[-n:].plot.barh(color='grey'); # My Submission CSV y_pred = pipeline.predict(X_test) submission = sample_submission.copy() submission['status_group'] = y_pred submission.to_csv('ALB_submission_2.csv', index=False) submission.head()
_____no_output_____
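The heading says ordinal encoding, but the cell above actually fits a one-hot-encoded pipeline. A minimal sketch of the ordinal-encoded variant, reusing the same `X_train`/`y_train` and `X_val`/`y_val` from the wrangling step (the imputation strategy and forest size are illustrative assumptions):

```python
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

ordinal_pipeline = make_pipeline(
    ce.OrdinalEncoder(),               # integer-codes each categorical column
    SimpleImputer(strategy='median'),  # fill the NaNs introduced during wrangling
    RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)
ordinal_pipeline.fit(X_train, y_train)
print('Validation Accuracy (ordinal):', round(ordinal_pipeline.score(X_val, y_val), 4))
```

Because ordinal encoding keeps one column per feature, the encoded matrix stays at 38 columns instead of growing to 182 as with one-hot encoding, which usually makes the forest noticeably faster to fit.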
MIT
module2-random-forests/Ahvi_Blackwell_LS_DS_222_assignment.ipynb
ahvblackwelltech/DS-Unit-2-Kaggle-Challenge
1. Load the pre-trained VGG-16 (only the feature extractor)
import numpy as np import matplotlib.pyplot as plt import tensorflow as tf # Load the ImageNet VGG-16 model, ***excluding*** the latter part regarding the classifier # Default of input_shape is 224x224x3 for VGG-16 img_w,img_h = 32,32 vgg_extractor = tf.keras.applications.vgg16.VGG16(weights = "imagenet", include_top=False, input_shape = (img_w, img_h, 3)) vgg_extractor.summary()
Metal device set to: Apple M1 systemMemory: 16.00 GB maxCacheSize: 5.33 GB
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
2. Extend VGG-16 to match our requirement
# Freeze all layers in VGG-16 for i,layer in enumerate(vgg_extractor.layers): print( f"Layer {i}: name = {layer.name} , trainable = {layer.trainable} => {False}" ) layer.trainable = False # freeze this layer x = vgg_extractor.output # Add our custom layer(s) to the end of the existing model x = tf.keras.layers.Flatten()(x) x = tf.keras.layers.Dense(1024, activation="relu")(x) x = tf.keras.layers.Dropout(0.5)(x) new_outputs = tf.keras.layers.Dense(10, activation="softmax")(x) # Create the final model model = tf.keras.models.Model(inputs=vgg_extractor.input, outputs=new_outputs) model.summary()
Model: "model" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ block1_conv1 (Conv2D) (None, 32, 32, 64) 1792 _________________________________________________________________ block1_conv2 (Conv2D) (None, 32, 32, 64) 36928 _________________________________________________________________ block1_pool (MaxPooling2D) (None, 16, 16, 64) 0 _________________________________________________________________ block2_conv1 (Conv2D) (None, 16, 16, 128) 73856 _________________________________________________________________ block2_conv2 (Conv2D) (None, 16, 16, 128) 147584 _________________________________________________________________ block2_pool (MaxPooling2D) (None, 8, 8, 128) 0 _________________________________________________________________ block3_conv1 (Conv2D) (None, 8, 8, 256) 295168 _________________________________________________________________ block3_conv2 (Conv2D) (None, 8, 8, 256) 590080 _________________________________________________________________ block3_conv3 (Conv2D) (None, 8, 8, 256) 590080 _________________________________________________________________ block3_pool (MaxPooling2D) (None, 4, 4, 256) 0 _________________________________________________________________ block4_conv1 (Conv2D) (None, 4, 4, 512) 1180160 _________________________________________________________________ block4_conv2 (Conv2D) (None, 4, 4, 512) 2359808 _________________________________________________________________ block4_conv3 (Conv2D) (None, 4, 4, 512) 2359808 _________________________________________________________________ block4_pool (MaxPooling2D) (None, 2, 2, 512) 0 _________________________________________________________________ block5_conv1 (Conv2D) (None, 2, 2, 512) 2359808 _________________________________________________________________ block5_conv2 (Conv2D) (None, 2, 2, 512) 2359808 _________________________________________________________________ block5_conv3 (Conv2D) (None, 2, 2, 512) 2359808 _________________________________________________________________ block5_pool (MaxPooling2D) (None, 1, 1, 512) 0 _________________________________________________________________ flatten (Flatten) (None, 512) 0 _________________________________________________________________ dense (Dense) (None, 1024) 525312 _________________________________________________________________ dropout (Dropout) (None, 1024) 0 _________________________________________________________________ dense_1 (Dense) (None, 10) 10250 ================================================================= Total params: 15,250,250 Trainable params: 535,562 Non-trainable params: 14,714,688 _________________________________________________________________
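A quick sanity check that only the new classification head is trainable after freezing; the counts should match the 535,562 trainable and 14,714,688 non-trainable parameters reported in the summary above (a small sketch using standard Keras utilities):

```python
import numpy as np
import tensorflow as tf

trainable_params = int(np.sum([tf.keras.backend.count_params(w) for w in model.trainable_weights]))
frozen_params = int(np.sum([tf.keras.backend.count_params(w) for w in model.non_trainable_weights]))
print(f"trainable={trainable_params:,}  frozen={frozen_params:,}")
```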
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
3. Prepare our own dataset
# Load CIFAR-10 color image dataset (x_train , y_train), (x_test , y_test) = tf.keras.datasets.cifar10.load_data() # Inspect the dataset print( f"x_train: type={type(x_train)} dtype={x_train.dtype} shape={x_train.shape} max={x_train.max(axis=None)} min={x_train.min(axis=None)}" ) print( f"y_train: type={type(y_train)} dtype={y_train.dtype} shape={y_train.shape} max={max(y_train)} min={min(y_train)}" ) print( f"x_test: type={type(x_test)} dtype={x_test.dtype} shape={x_test.shape} max={x_test.max(axis=None)} min={x_test.min(axis=None)}" ) print( f"y_test: type={type(y_test)} dtype={y_test.dtype} shape={y_test.shape} max={max(y_test)} min={min(y_test)}" ) y_train[0:5] cifar10_labels = [ 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck' ] # Visualize the first five images in x_train plt.figure(figsize=(15,5)) for i in range(5): plt.subplot(150 + 1 + i).set_title( f"class no. {y_train[i]}: {cifar10_labels[ int(y_train[i]) ]}" ) plt.imshow( x_train[i] ) plt.setp( plt.gcf().get_axes(), xticks=[], yticks=[]) # remove all tick marks plt.show() # Preprocess CIFAR-10 dataset to match VGG-16's requirements x_train_vgg = tf.keras.applications.vgg16.preprocess_input(x_train) x_test_vgg = tf.keras.applications.vgg16.preprocess_input(x_test) print( x_train_vgg.dtype, x_train_vgg.shape, np.min(x_train_vgg), np.max(x_train_vgg) ) print( x_test_vgg.dtype, x_test_vgg.shape, np.min(x_test_vgg), np.max(x_test_vgg) )
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
4. Transfer learning
# Set loss function, optimizer and evaluation metric model.compile( loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["acc"] ) history = model.fit( x_train_vgg, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(x_test_vgg,y_test) ) # Summarize history for accuracy plt.figure(figsize=(15,5)) plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('Train accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.grid() plt.show() # Summarize history for loss plt.figure(figsize=(15,5)) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.title('Train loss') plt.ylabel('loss') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper left') plt.grid() plt.show()
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
5. Evaluate and test the model
# Evaluate the trained model on the test set results = model.evaluate(x_test_vgg, y_test, batch_size=128) print("test loss, test acc:", results) # Test using the model on x_test_vgg[0] i = 0 y_pred = model.predict( x_test_vgg[i].reshape(1,32,32,3) ) plt.imshow( x_test[i] ) plt.title( f"x_test[{i}]: predict=[{np.argmax(y_pred)}]{cifar10_labels[np.argmax(y_pred)]}, true={y_test[i]}{cifar10_labels[int(y_test[i])]}" ) plt.show() # Test using the model on the first 20 images in x_test for i in range(20): y_pred = model.predict( x_test_vgg[i].reshape(1,32,32,3) ) plt.imshow( x_test[i] ) plt.title( f"x_test[{i}]: predict=[{np.argmax(y_pred)}]{cifar10_labels[np.argmax(y_pred)]}, true={y_test[i]}{cifar10_labels[int(y_test[i])]}" ) plt.show()
_____no_output_____
MIT
object-detection/ex12_05_keras_VGG16_transfer.ipynb
farofang/thai-traffic-signs
This notebook serves as a refresher with some basic Python code and functions. 1) Define a variable called x with an initial value of 5. Multiply it by 2 four times and print the value each time.
x = 5 for i in range(4): x = x*2 print(i, x)
0 10 1 20 2 40 3 80
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
2) Define a list
p = [9, 4, -5, 0, 10.9] # Get length of list len(p) # index of a specific element p.index(0) # first element in list p[0] print(sum(p))
18.9
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
3) Create a numpy array
import numpy as np a = np.array([5, -19, 30, 10]) # Get first element a[0] # Get last element a[-1] # Get first 3 elements print(a[0:3]) print(a[:3]) # Get size of the array a.shape
_____no_output_____
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
4) Define a dictionary that stores the age of three students. Mark: 26, Camilla: 23, Jason: 30
students = {'Mark':26, 'Camilla': 23, 'Jason':30} students['Mark'] students.keys()
_____no_output_____
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
5) Create a square function
def square_number(x): x2 = x**2 return x2 x_squared = square_number(5) print(x_squared)
25
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
6) List comprehension
# add 2 to every element in the numpy array number_array = np.arange(10, 21) print("original array:", number_array) number_array_plus_two = [x+2 for x in number_array] print("array plus 2:", number_array_plus_two) # select only even numbers even_numbers = [x for x in number_array if x%2==0] print(even_numbers)
[10, 12, 14, 16, 18, 20]
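Since `number_array` is already a NumPy array, the same results can be obtained without a list comprehension by using vectorized arithmetic and boolean masking; a small sketch:

```python
import numpy as np

number_array = np.arange(10, 21)
print("array plus 2:", number_array + 2)                      # element-wise addition
print("even numbers:", number_array[number_array % 2 == 0])   # boolean mask
```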
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
7) Random numbers
np.random.seed(42) rand_number = np.random.random(size =5) print(rand_number) np.random.seed(42) rand_number2 = np.random.random(size =5) print(rand_number2)
[0.37454012 0.95071431 0.73199394 0.59865848 0.15601864]
MIT
.ipynb_checkpoints/Python_101-checkpoint.ipynb
abdelrahman-ayad/MiCM-StatsPython-F21
Imports and Paths
import torch from torch import nn import torch.nn.functional as F import torch.optim as optim import numpy as np import pandas as pd import os import shutil from skimage import io, transform import torchvision import torchvision.transforms as transforms from torch.utils.data import Dataset, DataLoader import matplotlib.pyplot as plt PATH = '/data/msnow/nih_cxr/'
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Load the Data
df = pd.read_csv(f'{PATH}Data_Entry_2017.csv') df.shape df.head() img_list = os.listdir(f'{PATH}images') len(img_list)
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Collate the data
df_pa = df.loc[df.view=='PA',:] df_pa.reset_index(drop=True, inplace=True) trn_sz = int(df_pa.shape[0]/2) df_pa_trn = df_pa.loc[:trn_sz,:] df_pa_tst = df_pa.loc[trn_sz:,:] df_pa_tst.shape pneumo = [] for i,v in df_pa_trn.iterrows(): if "pneumo" in v['labels'].lower(): pneumo.append('pneumo') else: pneumo.append('no pneumo') df_pa_trn['pneumo'] = pneumo pneumo = [] for i,v in df_pa_tst.iterrows(): if "pneumo" in v['labels'].lower(): pneumo.append('pneumo') else: pneumo.append('no pneumo') df_pa_tst['pneumo'] = pneumo df_pa_trn.shape
_____no_output_____
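The row-by-row loops above can be replaced by a vectorized string test, which is faster and avoids accidentally appending the wrong object inside the loop. A sketch using the same `labels` column referenced in the loop (column name taken from the cell above):

```python
import numpy as np

for frame in (df_pa_trn, df_pa_tst):
    has_pneumo = frame['labels'].str.lower().str.contains('pneumo')
    frame['pneumo'] = np.where(has_pneumo, 'pneumo', 'no pneumo')
```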
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Copy images to train and test folders
# dst = os.path.join(PATH,'trn') # src = os.path.join(PATH,'images') # for i,v in df_pa_trn.iterrows(): # src2 = os.path.join(src,v.image) # shutil.copy2(src2,dst) # dst = os.path.join(PATH,'tst') # src = os.path.join(PATH,'images') # for i,v in df_pa_tst.iterrows(): # src2 = os.path.join(src,v.image) # shutil.copy2(src2,dst)
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Create the Dataset and Dataloader
class TDataset(Dataset): def __init__(self, df, root_dir, transform=None): """ Args: df (dataframe): df with all the annotations. root_dir (string): Directory with all the images. transform (callable, optional): Optional transform to be applied on a sample. """ # self.landmarks_frame = pd.read_csv(csv_file) self.df = df self.root_dir = root_dir self.transform = transform def __len__(self): return len(self.df) def __getitem__(self, idx): img_name = os.path.join(self.root_dir,self.df.image[idx]) image = io.imread(img_name) categ = self.df.pneumo[idx] return image, categ trainset = TDataset(df_pa_trn,f'{PATH}trn') testset = TDataset(df_pa_tst,f'{PATH}tst') trainloader = DataLoader(trainset, batch_size=4,shuffle=True, num_workers=4) testloader = DataLoader(testset, batch_size=4,shuffle=False, num_workers=4) aa = trainset[0] aa[0].shape
_____no_output_____
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Define and train a CNN
class Net(nn.Module): def __init__(self): super(Net, self).__init__() # 1 input image channel, 6 output channels, 5x5 square convolution kernel self.conv1 = nn.Conv2d(1, 6, 5) self.pool = nn.MaxPool2d(2, 2) # 6 input image channel, 16 output channels, 5x5 square convolution kernel self.conv2 = nn.Conv2d(6, 16, 5) self.fc1 = nn.Linear(16 * 5 * 5, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) def forward(self, x): print(f'input shape {x.shape}') x = self.pool(F.relu(self.conv1(x))) print(f'Lin (1,6,5) + relu + pool shape {x.shape}') x = self.pool(F.relu(self.conv2(x))) print(f'Lin (6,16,5) + relu + pool shape {x.shape}') x = x.view(x.shape[0],-1) print(f'reshape shape {x.shape}') # x = x.view(-1, 16 * 5 * 5) x = F.relu(self.fc1(x)) # x = F.relu(self.fc2(x)) # x = self.fc3(x) return x net = Net() input = torch.randn(1, 1, 1024,1024) out = net(input) # print(out) criterion = nn.CrossEntropyLoss() optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9) for i, data in enumerate(trainloader, 0): break inputs, labels = data tst = inputs.view(-1,1,1024,1024) tst = tst.type('torch.FloatTensor') out = net(tst) 16*253*253 tst.shape # tst = inputs[:,None,:,:] tst.type(torch.FloatTensor) type(tst) list(net.parameters())[0].size() net(tst) conv1_tst(tst) for epoch in range(2): # loop over the dataset multiple times running_loss = 0.0 for i, data in enumerate(trainloader, 0): # get the inputs inputs, labels = data # zero the parameter gradients optimizer.zero_grad() # forward + backward + optimize outputs = net(inputs) loss = criterion(outputs, labels) loss.backward() optimizer.step() # print statistics running_loss += loss.item() if i % 2000 == 1999: # print every 2000 mini-batches print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 2000)) running_loss = 0.0 print('Finished Training')
_____no_output_____
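The cell above is left in an exploratory state: with 1024x1024 single-channel inputs, the tensor reaching the first fully connected layer has shape (batch, 16, 253, 253), so `nn.Linear(16 * 5 * 5, 120)` cannot accept it (hence the stray `16*253*253` probe). Below is a hedged sketch of a head that matches this input size; the two-class output is an assumption based on the pneumo / no pneumo labels built earlier, not the author's final model:

```python
import torch
from torch import nn
import torch.nn.functional as F

class NetFixed(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # 1024 -> conv(5) -> 1020 -> pool -> 510 -> conv(5) -> 506 -> pool -> 253
        self.fc1 = nn.Linear(16 * 253 * 253, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)   # assumed two classes: pneumo / no pneumo

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(x.shape[0], -1)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

# quick shape check (memory-heavy: fc1 alone holds >100M weights)
out = NetFixed()(torch.randn(1, 1, 1024, 1024))
print(out.shape)  # torch.Size([1, 2])
```

In practice one would downsample the images first to keep `fc1` reasonable, and the string labels would still need to be mapped to integer class indices before `nn.CrossEntropyLoss` can be used.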
MIT
notebooks/pytorch/pytorch_benchmarking.ipynb
MichoelSnow/data_science
Compute LCMV inverse solution on evoked data in volume source space. Compute LCMV inverse solution on an auditory evoked dataset in a volume source space. It stores the solution in a nifti file for visualisation, e.g. with Freeview.
# Author: Alexandre Gramfort <[email protected]> # # License: BSD (3-clause) import numpy as np import matplotlib.pyplot as plt import mne from mne.datasets import sample from mne.beamformer import lcmv from nilearn.plotting import plot_stat_map from nilearn.image import index_img print(__doc__) data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif' fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
_____no_output_____
BSD-3-Clause
0.12/_downloads/plot_lcmv_beamformer_volume.ipynb
drammock/mne-tools.github.io
Get epochs
event_id, tmin, tmax = 1, -0.2, 0.5 # Setup for reading the raw data raw = mne.io.read_raw_fif(raw_fname, preload=True, proj=True) raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels events = mne.read_events(event_fname) # Set up pick list: EEG + MEG - bad channels (modify to your needs) left_temporal_channels = mne.read_selection('Left-temporal') picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True, exclude='bads', selection=left_temporal_channels) # Pick the channels of interest raw.pick_channels([raw.ch_names[pick] for pick in picks]) # Re-normalize our empty-room projectors, so they are fine after subselection raw.info.normalize_proj() # Read epochs proj = False # already applied epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, 0), preload=True, proj=proj, reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6)) evoked = epochs.average() forward = mne.read_forward_solution(fname_fwd) # Read regularized noise covariance and compute regularized data covariance noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk') data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15, method='shrunk') # Run free orientation (vector) beamformer. Source orientation can be # restricted by setting pick_ori to 'max-power' (or 'normal' but only when # using a surface-based source space) stc = lcmv(evoked, forward, noise_cov, data_cov, reg=0.01, pick_ori=None) # Save result in stc files stc.save('lcmv-vol') stc.crop(0.0, 0.2) # Save result in a 4D nifti file img = mne.save_stc_as_volume('lcmv_inverse.nii.gz', stc, forward['src'], mri_resolution=False) t1_fname = data_path + '/subjects/sample/mri/T1.mgz' # Plotting with nilearn ###################################################### plot_stat_map(index_img(img, 61), t1_fname, threshold=0.8, title='LCMV (t=%.1f s.)' % stc.times[61]) # plot source time courses with the maximum peak amplitudes plt.figure() plt.plot(stc.times, stc.data[np.argsort(np.max(stc.data, axis=1))[-40:]].T) plt.xlabel('Time (ms)') plt.ylabel('LCMV value') plt.show()
_____no_output_____
BSD-3-Clause
0.12/_downloads/plot_lcmv_beamformer_volume.ipynb
drammock/mne-tools.github.io
Speeding-up gradient boosting. In this notebook, we present a modified version of gradient boosting which uses a reduced number of splits when building the different trees. This algorithm is called "histogram gradient boosting" in scikit-learn. We previously mentioned that random forest is an efficient algorithm since each tree of the ensemble can be fitted at the same time independently. Therefore, the algorithm scales efficiently with both the number of cores and the number of samples. In gradient boosting, the algorithm is sequential: it requires the `N-1` trees to have been fit to be able to fit the tree at stage `N`. Therefore, the algorithm is quite computationally expensive. The most expensive part of this algorithm is the search for the best split in the tree, which is a brute-force approach: all possible splits are evaluated and the best one is picked. We explained this process in the notebook "tree in depth", which you can refer to. To accelerate the gradient-boosting algorithm, one could reduce the number of splits to be evaluated. As a consequence, the statistical performance of such a tree would be reduced. However, since we are combining several trees in a gradient boosting, we can add more estimators to overcome this issue. We will make a naive implementation of such an algorithm using building blocks from scikit-learn. First, we will load the California housing dataset.
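To make the "fewer splits to evaluate" argument concrete, here is a small sketch (NumPy only, illustrative numbers) comparing the number of candidate thresholds for one continuous feature before and after quantile binning:

```python
import numpy as np

rng = np.random.RandomState(0)
feature = rng.normal(size=20_000)

# exact search: one candidate threshold between each pair of distinct sorted values
n_candidates_exact = len(np.unique(feature)) - 1

# histogram trick: bin the feature into at most 256 quantile bins first
bin_edges = np.quantile(feature, np.linspace(0, 1, 257)[1:-1])
binned = np.digitize(feature, bin_edges)
n_candidates_binned = len(np.unique(binned)) - 1

print(n_candidates_exact, "candidate splits without binning")
print(n_candidates_binned, "candidate splits with 256 bins")
```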
from sklearn.datasets import fetch_california_housing data, target = fetch_california_housing(return_X_y=True, as_frame=True) target *= 100 # rescale the target in k$
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Note: If you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC. We will make a quick benchmark of the original gradient boosting.
from sklearn.model_selection import cross_validate from sklearn.ensemble import GradientBoostingRegressor gradient_boosting = GradientBoostingRegressor(n_estimators=200) cv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1) print("Gradient Boosting Decision Tree") print(f"R2 score via cross-validation: " f"{cv_results_gbdt['test_score'].mean():.3f} +/- " f"{cv_results_gbdt['test_score'].std():.3f}") print(f"Average fit time: " f"{cv_results_gbdt['fit_time'].mean():.3f} seconds") print(f"Average score time: " f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
We recall that a way of accelerating the gradient boosting is to reduce the number of splits considered within the tree building. One way is to bin the data before feeding it to the gradient boosting. A transformer called `KBinsDiscretizer` performs such a transformation. Thus, we can pipeline this preprocessing with the gradient boosting. We can first demonstrate the transformation done by the `KBinsDiscretizer`.
import numpy as np from sklearn.preprocessing import KBinsDiscretizer discretizer = KBinsDiscretizer( n_bins=256, encode="ordinal", strategy="quantile") data_trans = discretizer.fit_transform(data) data_trans
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Note: The code cell above will generate a couple of warnings. Indeed, for some of the features, we requested too many bins with regard to the data dispersion of those features; the smallest bins will be removed. We see that the discretizer transforms the original data into integers. Each integer represents the bin index when the binning by quantile is performed. We can check the number of bins per feature.
[len(np.unique(col)) for col in data_trans.T]
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
After this transformation, we see that we have at most 256 unique values per feature. Now, we will use this transformer to discretize the data before training the gradient boosting regressor.
from sklearn.pipeline import make_pipeline gradient_boosting = make_pipeline( discretizer, GradientBoostingRegressor(n_estimators=200)) cv_results_gbdt = cross_validate(gradient_boosting, data, target, n_jobs=-1) print("Gradient Boosting Decision Tree with KBinsDiscretizer") print(f"R2 score via cross-validation: " f"{cv_results_gbdt['test_score'].mean():.3f} +/- " f"{cv_results_gbdt['test_score'].std():.3f}") print(f"Average fit time: " f"{cv_results_gbdt['fit_time'].mean():.3f} seconds") print(f"Average score time: " f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Here, we see that the fit time has been drastically reduced but that the statistical performance of the model is identical. Scikit-learn provides specific classes which are even more optimized for large datasets, called `HistGradientBoostingClassifier` and `HistGradientBoostingRegressor`. Each feature in the dataset `data` is first binned by computing histograms, which are later used to evaluate the potential splits. The number of splits to evaluate is then much smaller. This algorithm becomes much more efficient than gradient boosting when the dataset has over 10,000 samples. Below we will give an example for a large dataset and we will compare computation times with the experiment of the previous section.
from sklearn.experimental import enable_hist_gradient_boosting from sklearn.ensemble import HistGradientBoostingRegressor histogram_gradient_boosting = HistGradientBoostingRegressor( max_iter=200, random_state=0) cv_results_hgbdt = cross_validate(histogram_gradient_boosting, data, target, n_jobs=-1) print("Histogram Gradient Boosting Decision Tree") print(f"R2 score via cross-validation: " f"{cv_results_hgbdt['test_score'].mean():.3f} +/- " f"{cv_results_hgbdt['test_score'].std():.3f}") print(f"Average fit time: " f"{cv_results_hgbdt['fit_time'].mean():.3f} seconds") print(f"Average score time: " f"{cv_results_hgbdt['score_time'].mean():.3f} seconds")
_____no_output_____
CC-BY-4.0
notebooks/ensemble_hist_gradient_boosting.ipynb
ThomasBourgeois/scikit-learn-mooc
Select only numeric columns
train_data_raw2 = clean_func(train_data_raw) train_data = train_data_raw2.iloc[:, train_data_raw2.columns != target] train_data_target = train_data_raw2[target].values X_train,X_test,Y_train,Y_test = train_test_split(train_data ,train_data_target ,test_size=0.3 ,random_state=42)
_____no_output_____
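The heading above mentions selecting only numeric columns, but the cell keeps every column produced by `clean_func`. If an explicit numeric subset is intended, a minimal sketch (assuming `train_data_raw2` and `target` are defined as in the cell above):

```python
# keep only numeric feature columns, leaving the target column out
numeric_cols = train_data_raw2.drop(columns=[target]).select_dtypes(include='number').columns
train_data = train_data_raw2[numeric_cols]
```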
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Models - logreg - random forest. Random forest (naive)
model_rf = RandomForestClassifier( n_estimators=100 ) model_rf.fit(X_train, Y_train) # Cross Validation RF scores = cross_val_score(model_rf, X_train, Y_train, cv=10) print(scores) pred_rf = model_rf.predict(X_test) metrics.accuracy_score(Y_test,pred_rf)
_____no_output_____
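The model list above mentions logistic regression, but only the random-forest cells appear in this excerpt. A hedged sketch of the corresponding logreg baseline on the same split (solver defaults and `max_iter` are illustrative assumptions):

```python
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

model_lr = LogisticRegression(max_iter=1000)
model_lr.fit(X_train, Y_train)
pred_lr = model_lr.predict(X_test)
print("logreg accuracy:", metrics.accuracy_score(Y_test, pred_lr))
```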
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Random Forest Grid Search
model_rf_gs = RandomForestClassifier() # parmeter dict param_grid = dict( n_estimators=np.arange(60,101,20) , min_samples_leaf=np.arange(2,4,1) #, criterion = ["gini","entropy"] #, max_features = np.arange(0.1,0.5,0.1) ) print(param_grid) grid = GridSearchCV(model_rf_gs,param_grid=param_grid,scoring = "accuracy", cv = 5) grid.fit(train_data, train_data_target) "" # model_rf.fit(train_data, train_data[target]) # print(grid) # for i in ['params',"mean_train_score","mean_test_score"]: # print(i) # print(grid.cv_results_[i]) # grid.cv_results_ print(grid.best_params_) print(grid.best_score_) model_rf_gs_best = RandomForestClassifier(**grid.best_params_) model_rf_gs_best.fit(X_train,Y_train) ## print feture importance model = model_rf_gs_best feature_names = X_train.columns.values feature_importance2 = sorted(zip(map(lambda x: round(x, 4), model.feature_importances_), feature_names), reverse=True) print(len(feature_importance2)) for feature in feature_importance2: print('%f:%s' % feature ) ### # Recursive feature elimination from sklearn.feature_selection import RFECV model = model_rf_gs_best rfecv = RFECV(estimator=model, step=1, cv=3, scoring='accuracy') rfecv.fit(X_train,Y_train) import matplotlib.pyplot as plt from sklearn import base model = model_rf_gs_best print("Optimal number of features : %d" % rfecv.n_features_) plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_) plt.title('Recursive feature elemination') plt.xlabel('Nr of features') plt.ylabel('Acc') feature_short = feature_names[rfecv.support_] print('== Feature short list ==') print(feature_short) model_simple = base.clone(model) model_simple.fit(X_train[feature_short],Y_train)
Optimal number of features : 16 == Feature short list == ['PassengerId' 'Pclass' 'Age' 'SibSp' 'Parch' 'Fare' 'female' 'male' 'embarked_cobh' 'embark_queenstown' 'embark_southampton' 'Cabin_letter_B' 'Cabin_letter_C' 'Cabin_letter_D' 'Cabin_letter_E' 'Cabin_letter_empty']
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
- RFECV converges at about 16 features - let us compare the 16-feature model vs. the full feature set on the test set
Y_pred = model.predict(X_test) model_score = metrics.accuracy_score(Y_test,Y_pred) Y_pred_simple = model_simple.predict(X_test[feature_short]) model_simple_score = metrics.accuracy_score(Y_test,Y_pred_simple) print("model acc: %.3f" % model_score) print("simple model acc: %.3f" % model_simple_score)
model acc: 0.806 simple model acc: 0.825
MIT
notebooks/titanic_explore4_recursive_feature_elimination.ipynb
EmilMachine/kaggle_titanic
Amazon Sentiment Data
%load_ext autoreload %autoreload 2 import lxmls.readers.sentiment_reader as srs from lxmls.deep_learning.utils import AmazonData corpus = srs.SentimentCorpus("books") data = AmazonData(corpus=corpus)
_____no_output_____
MIT
labs/notebooks/non_linear_classifiers/exercise_4.ipynb
mpc97/lxmls
Implement the PyTorch forward pass. As the final exercise today, implement the log `forward()` method in lxmls/deep_learning/pytorch_models/mlp.py. Use the previous exercise as a reference. After you have completed this, you can run both systems for comparison.
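The solution itself belongs in lxmls/deep_learning/pytorch_models/mlp.py, so it is not reproduced here. Below is only a rough sketch of the kind of computation a log-forward pass performs for the geometry used in this notebook (sigmoid hidden layers followed by a log-softmax output); the parameter layout (lists of weight and bias tensors, weights shaped `(n_inputs, n_outputs)`) is an assumption, not the repository's actual API:

```python
import torch
import torch.nn.functional as F

def log_forward_sketch(x, weights, biases):
    """Sigmoid hidden layers followed by a log-softmax output layer.

    x       : float tensor of shape (batch, n_features)
    weights : list of weight tensors, one per layer, shaped (n_inputs, n_outputs)
    biases  : list of bias tensors, one per layer
    """
    tilde_z = x
    # all layers except the last use a sigmoid activation
    for W, b in zip(weights[:-1], biases[:-1]):
        tilde_z = torch.sigmoid(torch.matmul(tilde_z, W) + b)
    # last layer: linear transformation followed by log-softmax over classes
    z = torch.matmul(tilde_z, weights[-1]) + biases[-1]
    return F.log_softmax(z, dim=1)
```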
# Model geometry = [corpus.nr_features, 20, 2] activation_functions = ['sigmoid', 'softmax'] # Optimization learning_rate = 0.05 num_epochs = 10 batch_size = 30 import numpy as np from lxmls.deep_learning.pytorch_models.mlp import PytorchMLP model = PytorchMLP( geometry=geometry, activation_functions=activation_functions, learning_rate=learning_rate ) # Get batch iterators for train and test train_batches = data.batches('train', batch_size=batch_size) test_set = data.batches('test', batch_size=None)[0] # Epoch loop for epoch in range(num_epochs): # Batch loop for batch in train_batches: model.update(input=batch['input'], output=batch['output']) # Prediction for this epoch hat_y = model.predict(input=test_set['input']) # Evaluation accuracy = 100*np.mean(hat_y == test_set['output']) # Inform user print("Epoch %d: accuracy %2.2f %%" % (epoch+1, accuracy))
_____no_output_____
MIT
labs/notebooks/non_linear_classifiers/exercise_4.ipynb
mpc97/lxmls
Explore endangered languages from the UNESCO Atlas of the World's Languages in Danger. Input: Endangered languages - https://www.kaggle.com/the-guardian/extinct-languages/version/1 (updated in 2016) - original data: http://www.unesco.org/languages-atlas/index.php?hl=en&page=atlasmap (published in 2010). Countries of the world - https://www.ethnologue.com/sites/default/files/CountryCodes.tab Output: `endangered_languages_europe.csv` Imports
import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Load data
df = pd.read_csv("../../data/endangerment/extinct_languages.csv") print(df.shape) print(df.dtypes) df.head() df.columns ENDANGERMENT_MAP = { "Vulnerable": 1, "Definitely endangered": 2, "Severely endangered": 3, "Critically endangered": 4, "Extinct": 5, } df["Endangerment code"] = df["Degree of endangerment"].apply(lambda x: ENDANGERMENT_MAP[x]) df[["Degree of endangerment", "Endangerment code"]]
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Distribution of the degree of endangerment
plt.xticks(fontsize=16) plt.yticks(fontsize=16) df["Degree of endangerment"].hist(figsize=(15,5)).get_figure().savefig('endangered_hist.png', format="png")
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Show distribution on map
countries_map = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres")) countries_map.head() # Plot Europe fig, ax = plt.subplots(figsize=(20, 10)) countries_map.plot(color='lightgrey', ax=ax) plt.xlim([-30, 50]) plt.ylim([30, 75]) df.plot( x="Longitude", y="Latitude", kind="scatter", title="Endangered languages in Europe (1=Vulnerable, 5=Extinct)", c="Endangerment code", colormap="YlOrRd", ax=ax, ) plt.show()
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Get endangered languages only for Europe
countries = pd.read_csv("../../data/general/country_codes.tsv", sep="\t") europe = countries[countries["Area"] == "Europe"] europe europe_countries = set(europe["Name"].to_list()) europe_countries df[df["Countries"].isna()] df = df[df["Countries"].notna()] df[df["Countries"].isna()] df["In Europe"] = df["Countries"].apply(lambda x: len(europe_countries.intersection(set(x.split(",")))) > 0) df_europe = df.loc[df["In Europe"] == True] print(df_europe.shape) df_europe.head(20) # Plot only European endangered languages fig, ax = plt.subplots(figsize=(20, 10)) countries_map.plot(color='lightgrey', ax=ax) plt.xlim([-30, 50]) plt.ylim([30, 75]) df_europe.plot( x="Longitude", y="Latitude", kind="scatter", title="Endangered languages in Europe", c="Endangerment code", colormap="YlOrRd", ax=ax, ) plt.xticks(fontsize=16) plt.yticks(fontsize=16) plt.xlabel('Longitude', fontsize=18) plt.ylabel('Latitude', fontsize=18) plt.title("Endangered languages in Europe (1=Vulnerable, 5=Extinct)", fontsize=18) plt.show() fig.savefig("endangered_languages_in_europe.png", format="png", bbox_inches="tight")
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Save output
df_europe.to_csv("../../data/endangerment/endangered_languages_europe.csv", index=False)
_____no_output_____
MIT
notebooks/Eszti/unesco_endangered_lang_europe.ipynb
e8725144/lang-changes
Polynomials Class
from sympy import * import numpy as np x = Symbol('x') class polinomio: def __init__(self, coefficienti: list): self.coefficienti = coefficienti self.grado = 0 if len(self.coefficienti) == 0 else len( self.coefficienti) - 1 i = 0 while i < len(self.coefficienti): if self.coefficienti[0] == 0: self.coefficienti.pop(0) i += 1 # scrittura del polinomio: def __str__(self): output = "" for i in range(0, len(self.coefficienti)): # and x[grado_polinomio]!=0): if (((self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i == 1)): output += "x " if self.grado-i == 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0): output += "{}x ".format(self.coefficienti[i]) if self.coefficienti[i] == 0: pass # continue if self.grado-i != 0 and self.grado-i != 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0): output += "{}x^{} ".format( self.coefficienti[i], self.grado-i) # continue #print(x[i], "$x^", grado_polinomio-i, "$ + ") if (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i != 1 and self.grado-i != 0: output += "x^{} ".format(self.grado-i) # continue elif (self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i != 1 and self.grado-i != 0: output += "- x^{} ".format(self.grado-i) # continue elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] != 1 or self.coefficienti[i] != 1.0): output += "{} ".format(self.coefficienti[i]) elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0): output += "1 " if ((self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i == 1): output += "- x " if (i != self.grado and self.grado-i != 0) and self.coefficienti[i+1] > 0: output += "+ " continue return output def latex(self): latex_polinomio = 0 for i in range(0, len(self.coefficienti)): # and x[grado_polinomio]!=0): if (((self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i == 1)): latex_polinomio += x if self.grado-i == 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0): latex_polinomio += self.coefficienti[i]*x if self.coefficienti[i] == 0: pass # continue if self.grado-i != 0 and self.grado-i != 1 and (self.coefficienti[i] != 0 and self.coefficienti[i] != 1 and self.coefficienti[i] != -1 and self.coefficienti[i] != 1.0 and self.coefficienti[i] != -1.0): latex_polinomio += self.coefficienti[i]*x**(self.grado-i) # continue #print(x[i], "$x^", grado_polinomio-i, "$ + ") if (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0) and self.grado-i != 1 and self.grado-i != 0: latex_polinomio += x**(self.grado-i) # continue elif (self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i != 1 and self.grado-i != 0: latex_polinomio += -x**(self.grado-i) # continue elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] != 1 or self.coefficienti[i] != 1.0): latex_polinomio += self.coefficienti[i] elif self.coefficienti[i] != 0 and self.grado-i == 0 and (self.coefficienti[i] == 1 or self.coefficienti[i] == 1.0): latex_polinomio += 1 if ((self.coefficienti[i] == -1 or self.coefficienti[i] == -1.0) and self.grado-i == 1): latex_polinomio += -x # if (i != self.grado and 
self.grado-i != 0) and self.coefficienti[i+1] > 0: # latex_polinomio += + # continue return latex_polinomio def __add__(self, y): if type(y).__name__ != "polinomio": raise Exception( f"You are trying to sum a polinomio with a {type(y).__name__}") c = [] n = min(len(self.coefficienti), len(y.coefficienti)) m = max(len(self.coefficienti), len(y.coefficienti)) d = [] if m == len(self.coefficienti): d = self.coefficienti else: d = y.coefficienti for i in range(0, m-n): c.append(d[i]) if m == len(self.coefficienti): for j in range(m-n, m): z = self.coefficienti[j] + y.coefficienti[j-m+n] c.append(z) else: for j in range(m-n, m): z = self.coefficienti[j-m+n] + y.coefficienti[j] c.append(z) i = 0 while i < len(c): if c[0] == 0: c.pop(0) i += 1 d = polinomio(c) return d def __sub__(self, y): c = [] for i in y.coefficienti: c.append(-i) f = self + polinomio(c) return f def __mul__(self, y): grado_prodotto = self.grado + y.grado d = [[], []] for i in range(len(self.coefficienti)): for j in range(len(y.coefficienti)): d[0].append(self.coefficienti[i]*y.coefficienti[j]) d[1].append(i+j) # grado del monomio d[1] = d[1][::-1] # print(d) for i in range(grado_prodotto+1): if d[1].count(grado_prodotto-i) > 1: j = d[1].index(grado_prodotto - i) #print("j vale: ", j) z = j+1 while z < len(d[1]): if d[1][z] == d[1][j]: #print("z vale:", z) d[0][j] = d[0][j]+d[0][z] d[1].pop(z) d[0].pop(z) # print(d) z += 1 i = 0 while i < len(d[0]): if d[0][0] == 0: d[0].pop(0) i += 1 return polinomio(d[0]) def __pow__(self, var: int): p = self i = 0 while i < var-1: p *= self i += 1 return p def __truediv__(self, y, c=[]): d = [] s = self.grado v = y.grado grado_polinomio_risultante = s-v output = 0 if grado_polinomio_risultante > 0: d.append(self.coefficienti[0]/y.coefficienti[0]) i = 0 while i < grado_polinomio_risultante: d.append(0) i += 1 c.append(d[0]) a = polinomio(d) g = a*y f = self - g if (f.grado - y.grado) == 0 and (len(f.coefficienti)-len(c)) > 1: c.append(0) if (f.grado-y.grado) < 0 and f.grado != 0: j = 0 while j < y.grado-f.grado: c.append(0) self = f return f.__truediv__(y, c) elif grado_polinomio_risultante == 0: d.append(self.coefficienti[0]/y.coefficienti[0]) c.append(d[0]) a = polinomio(d) g = a*y f = self - g if f.grado == 0 and (f.coefficienti == [] or f.coefficienti[0] == 0): return polinomio(c).latex() elif f.grado >= 0: self = f return f.__truediv__(y, c) elif grado_polinomio_risultante < 0: output += polinomio(c).latex() + self.latex()/y.latex() # output += self.latex()/y.latex() # output += y.latex() # if polinomio(c).grado != 0: # output += "+" # output += "(" + str(self) + ")/(" # output += str(y) + ")" return output elif s == 0: return polinomio(c).latex() def __eq__(self, y): equality = 0 if len(self.coefficienti) != len(y.coefficienti): return False for i in range(len(self.coefficienti)): if self.coefficienti[i] == y.coefficienti[i]: equality += 1 if equality == len(self.coefficienti): return True else: return False def __ne__(self, y): inequality = 0 if len(self.coefficienti) != len(y.coefficienti): return True else: for i in range(len(self.coefficienti)): if self.coefficienti[i] != y.coefficienti[i]: inequality += 1 if inequality == len(self.coefficienti): return True else: return False a = [1, 1, 2, 1, 1] b = [1, 1, 2, 1, 1] c = polinomio(a) d = polinomio(b) (c+d).latex() # a = [1, 0, 2, 0, 1] # b = [1, 0, 1] # c = polinomio(a) # d = polinomio(b) # c/d a = [1,1,1] b = [1,0] c = polinomio(a) d = polinomio(b) (c*d).latex() a = [1] b = [1,1] c = polinomio(a) d = polinomio(b) c/d # a = [3,3,3] 
# b = [3] # c = polinomio(a) # d = polinomio(b) # c/d a = [1, 1, 2, 1, 1] b = [1, 1, 2, 1, 1] c = polinomio(a) d = polinomio(b) print(c+d)
2x^4 + 2x^3 + 4x^2 + 2x + 2
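A short usage example cross-checking the class against sympy's own expansion (the defining cell already does `from sympy import *` and binds `x = Symbol('x')`, so `expand` and `x` are available):

```python
p = polinomio([1, 1, 1])   # x^2 + x + 1
q = polinomio([1, 0])      # x
print((p * q).latex())              # expected: x**3 + x**2 + x
print(expand((x**2 + x + 1) * x))   # sympy cross-check: x**3 + x**2 + x
print(p == polinomio([1, 1, 1]))    # True: coefficient-wise equality
```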
MIT
Polinomi.ipynb
RiccardoTancredi/Polynomials
Function to list overlapping Landsat 8 scenes. This function is based on the following tutorial: http://geologyandpython.com/get-landsat-8.html This function uses the area of interest (AOI) to retrieve overlapping Landsat 8 scenes. It will also output only the scenes with the largest portion of overlap and with less than 5% cloud cover.
def landsat_scene_list(aoi, start_date, end_date): '''Creates a list of Landsat 8, level 1, tier 1 scenes that overlap with an aoi and are captured within a specified date range. Parameters ---------- aoi : str The path to a shape file of an aoi with geometry. start-date : str The first date from which to start looking for Landsat image capture in the format yyyy-mm,dd, e.g. '2017-10-01'. end-date : str The last date from which to looking for Landsat image capture in the format yyyy-mm,dd, e.g. '2017-10-31'. Returns ------- wrs : shapefile A catalog of Landsat 8 scenes. scenes : geopandas.geodataframe.GeoDataFrame A dataframe containing the information of Landsat 8, Level 1, Tier 1 scenes that overlap with the aoi. ''' # Download Landsat 8 catalog from USGS (get_data auto unzips) USGS_url = 'https://landsat.usgs.gov/sites/default/files/documents/WRS2_descending.zip' et.data.get_data(url=USGS_url, replace=True) # Open Landsat catalog wrs = gpd.GeoDataFrame.from_file(os.path.join('data', 'earthpy-downloads', 'WRS2_descending', 'WRS2_descending.shp')) # Find polygons that intersect Landsat catalog and aoi wrs_intersection = wrs[wrs.intersects(aoi.geometry[0])] # Calculated paths and rows paths, rows = wrs_intersection['PATH'].values, wrs_intersection['ROW'].values # Iterate through each Polygon of paths and rows intersecting the area for i, row in wrs_intersection.iterrows(): # Create a string for the name containing the path and row of this Polygon name = 'path: %03d, row: %03d' % (row.PATH, row.ROW) # Removing scenes with small amounts of overlap using threshold of intersection area b = (paths > 23) & (paths < 26) paths = paths[b] rows = rows[b] # # Path(s) and row(s) covering the intersection # ############################ WHY NOT PRINTING? ################################### # for i, (path, row) in enumerate(zip(paths, rows)): # print('Image', i+1, ' - path:', path, 'row:', row) # Check scene availability in Amazon S3 bucket list of Landsat scenes s3_scenes = pd.read_csv('http://landsat-pds.s3.amazonaws.com/c1/L8/scene_list.gz', compression='gzip', parse_dates=['acquisitionDate'], index_col=['acquisitionDate']) # Capture only Landsat T1 scenes within dates of interest scene_mask = (s3_scenes.index > start_date) & (s3_scenes.index <= end_date) scene_dates = s3_scenes.loc[scene_mask] scene_product = scene_dates[scene_dates['productId'].str.contains("_T1")] # Geodataframe of scenes with <5% cloud cover, the url to retrieve them #############################row.ROW and row.PATH will need to be fixed################## scenes = scene_product[(scene_product.path == row.PATH) & (scene_product.row == row.ROW) & (scene_product.cloudCover <= 5)] return wrs, scenes
_____no_output_____
BSD-3-Clause
notebooks/testing/previously in ignored file/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb
sarahmjaffe/sagebrush-ecosystem-modeling-with-landsat8
TEST**Can DELETE everything below once tested and approved!**
# WILL DELETE WHEN FUNCTIONS ARE SEPARATED OUT def NEON_site_extent(path_to_NEON_boundaries, site): '''Extracts a NEON site extent from an individual site as long as the original NEON site extent shape file contains a column named 'siteID'. Parameters ---------- path_to_NEON_boundaries : str The path to a shape file that contains the list of all NEON site extents, also known as field sampling boundaries (can be found at NEON and ESRI sites) site : str One siteID contains 4 capital letters, e.g. CPER, HARV, ONAQ or SJER. Returns ------- site_boundary : geopandas.geodataframe.GeoDataFrame A vector containing a single polygon per the site specified. ''' NEON_boundaries = gpd.read_file(path_to_NEON_boundaries) boundaries_indexed = NEON_boundaries.set_index(['siteID']) site_boundary = boundaries_indexed.loc[[site]] site_boundary.reset_index(inplace=True) return site_boundary # Import packages import os from glob import glob import requests import matplotlib.pyplot as plt import numpy as np import pandas as pd import folium import geopandas as gpd import rasterio as rio #from bs4 import BeautifulSoup import shutil import earthpy as et # Set working directory os.chdir(os.path.join(et.io.HOME, 'earth-analytics')) # Download shapefile of all NEON site boundaries url = 'https://www.neonscience.org/sites/default/files/Field_Sampling_Boundaries_2020.zip' et.data.get_data(url=url, replace=True) # Create path to shapefile terrestrial_sites = os.path.join( 'data', 'earthpy-downloads', 'Field_Sampling_Boundaries_2020', 'terrestrialSamplingBoundaries.shp') # Retrieving the boundaries of CPER aoi = NEON_site_extent(terrestrial_sites, 'ONAQ') # Test out new landsat retrieval process scene_catalog, scene_df = landsat_scene_list(aoi, '2017-10-01', '2017-10-31') # Visualize the catalog scene_catalog.head(3) # Visualize the scenes of interest based on the input parameters scene_df
_____no_output_____
BSD-3-Clause
notebooks/testing/previously in ignored file/f-Find-Overlapping-Landsat-Scenes-TEST.ipynb
sarahmjaffe/sagebrush-ecosystem-modeling-with-landsat8
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. #@title MIT License # # Copyright (c) 2017 François Chollet # # Permission is hereby granted, free of charge, to any person obtaining a # copy of this software and associated documentation files (the "Software"), # to deal in the Software without restriction, including without limitation # the rights to use, copy, modify, merge, publish, distribute, sublicense, # and/or sell copies of the Software, and to permit persons to whom the # Software is furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL # THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING # FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER # DEALINGS IN THE SOFTWARE.
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Saving and loading models View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook

Model progress can be saved during and after training. This means a model can resume training from the same state it was in when training was interrupted, avoiding a long training run from scratch. Saving also means you can share your model, and others can recreate your work. When publishing research models and techniques, most machine learning practitioners share:
* the code to create the model, and
* the trained weights, or parameters, of the model.
Sharing this data helps others understand how the model works and try it out themselves with new data.

Caution: Be careful with untrusted code: TensorFlow models are code. See [Using TensorFlow Securely](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md) for details.

Options There are different ways to save TensorFlow models, depending on the API you are using. This guide uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow. For other approaches, see the TensorFlow [Save and Restore](https://www.tensorflow.org/guide/saved_model) guide or [Saving in eager](https://www.tensorflow.org/guide/eagerobject-based_saving).

Setup Install and import Install and import TensorFlow and dependencies:
try: # %tensorflow_version only exists in Colab. %tensorflow_version 2.x except Exception: pass !pip install pyyaml h5py # Required to save models in HDF5 format from __future__ import absolute_import, division, print_function, unicode_literals import os import tensorflow as tf from tensorflow import keras print(tf.version.VERSION)
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Get an example dataset To demonstrate how to save and load weights, you will use the [MNIST dataset](http://yann.lecun.com/exdb/mnist/). To speed up these runs, use only the first 1000 examples:
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data() train_labels = train_labels[:1000] test_labels = test_labels[:1000] train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0 test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Define a model Start by building a simple sequential model:
# Define a simple sequential model def create_model(): model = tf.keras.models.Sequential([ keras.layers.Dense(512, activation='relu', input_shape=(784,)), keras.layers.Dropout(0.2), keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) return model # Create a basic model instance model = create_model() # Display the model's architecture model.summary()
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Save checkpoints during training You can use a trained model without having to retrain it, or pick up training where you left off in case the training process was interrupted. The `tf.keras.callbacks.ModelCheckpoint` callback allows you to continually save the model both *during* and at *the end* of training.

Checkpoint callback usage Create a `tf.keras.callbacks.ModelCheckpoint` callback that saves weights only during training:
checkpoint_path = "training_1/cp.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create a callback that saves the model's weights cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, save_weights_only=True, verbose=1) # Train the model with the new callback model.fit(train_images, train_labels, epochs=10, validation_data=(test_images,test_labels), callbacks=[cp_callback]) # Pass callback to training # This may generate warnings related to saving the state of the optimizer. # These warnings (and similar warnings throughout this notebook) # are in place to discourage outdated usage, and can be ignored.
_____no_output_____
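A common variation on the callback above is to keep only the best-performing weights. This is a minimal sketch using the standard `save_best_only` and `monitor` arguments of `tf.keras.callbacks.ModelCheckpoint`; the path name is illustrative, and this cell is an addition rather than part of the original notebook.

import tensorflow as tf

# Hypothetical path for the "best so far" checkpoint
best_path = "training_best/cp-best.ckpt"

# Save weights only when the validation loss improves on the best value seen so far
best_cb = tf.keras.callbacks.ModelCheckpoint(filepath=best_path,
                                             save_weights_only=True,
                                             monitor='val_loss',
                                             save_best_only=True,
                                             verbose=1)

# Assumes `model` and the data arrays come from the cells above
model.fit(train_images, train_labels,
          epochs=10,
          validation_data=(test_images, test_labels),
          callbacks=[best_cb])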
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
This creates a single collection of TensorFlow checkpoint files that are updated at the end of each epoch:
!ls {checkpoint_dir}
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Create a new, untrained model. When restoring a model from weights-only, you must have a model with the same architecture as the original model. Since it's the same model architecture, you can share weights despite that it's a different *instance* of the model.Now rebuild a fresh, untrained model, and evaluate it on the test set. An untrained model will perform at chance levels (~10% accuracy):
# Create a basic model instance model = create_model() # Evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Then load the weights from the checkpoint and re-evaluate:
# Loads the weights model.load_weights(checkpoint_path) # Re-evaluate the model loss,acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Checkpoint callback optionsThe callback provides several options to provide unique names for checkpoints and adjust the checkpointing frequency.Train a new model, and save uniquely named checkpoints once every five epochs:
# Include the epoch in the file name (uses `str.format`) checkpoint_path = "training_2/cp-{epoch:04d}.ckpt" checkpoint_dir = os.path.dirname(checkpoint_path) # Create a callback that saves the model's weights every 5 epochs cp_callback = tf.keras.callbacks.ModelCheckpoint( filepath=checkpoint_path, verbose=1, save_weights_only=True, period=5) # Create a new model instance model = create_model() # Save the weights using the `checkpoint_path` format model.save_weights(checkpoint_path.format(epoch=0)) # Train the model with the new callback model.fit(train_images, train_labels, epochs=50, callbacks=[cp_callback], validation_data=(test_images,test_labels), verbose=0)
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Now, look at the resulting checkpoints and choose the latest:
!ls {checkpoint_dir} latest = tf.train.latest_checkpoint(checkpoint_dir) latest
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Note: by default, the TensorFlow format only keeps the 5 most recent checkpoints. To test, reset the model and load the latest checkpoint:
# Create a new model instance model = create_model() # Load the previously saved weights model.load_weights(latest) # Re-evaluate the model loss, acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
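The note above says that only the five most recent checkpoints are kept by default. If more history is needed, the lower-level `tf.train.CheckpointManager` API exposes a `max_to_keep` argument directly. A sketch (an addition, not part of the original notebook), assuming `model` is the Keras model defined above and using a made-up directory name:

import tensorflow as tf

# Wrap the model's variables in a tf.train.Checkpoint object
ckpt = tf.train.Checkpoint(model=model)

# Keep up to 10 checkpoints instead of the default 5
manager = tf.train.CheckpointManager(ckpt, directory='training_manager', max_to_keep=10)

# Save a checkpoint and list what is currently kept
manager.save()
print(manager.checkpoints)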
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
What are these files? The code above stores the weights in a collection of [checkpoint](https://www.tensorflow.org/guide/saved_modelsave_and_restore_variables)-formatted files that contain only the trained weights in a binary format. Checkpoints contain:
* One or more shards that contain your model's weights.
* An index file that indicates which weights are stored in which shard.
If you are only training a model on a single machine, you will have one shard with the suffix `.data-00000-of-00001`

Manually save weights You saw how to load previously saved weights into a model. Saving them manually is just as easy with the `Model.save_weights` method. By default, `tf.keras` and `save_weights` in particular use the TensorFlow [checkpoint](../../guide/keras/checkpoints) format with a `.ckpt` extension (saving in [HDF5](https://js.tensorflow.org/tutorials/import-keras.html) with a `.h5` extension is covered in the guide [Save and serialize models](../../guide/keras/save_and_serializeweights-only_saving_in_savedmodel_format)):
# Save the weights model.save_weights('./checkpoints/my_checkpoint') # Create a new model instance model = create_model() # Restore the weights model.load_weights('./checkpoints/my_checkpoint') # Evaluate the model loss,acc = model.evaluate(test_images, test_labels, verbose=2) print("Restored model, accuracy: {:5.2f}%".format(100*acc))
_____no_output_____
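The parenthetical note above mentions that weights can also be written in HDF5. A minimal sketch (the file name is illustrative; `save_format` is a standard argument of `Model.save_weights`):

# Write the weights in HDF5 instead of the TensorFlow checkpoint format
model.save_weights('./checkpoints/my_checkpoint.h5', save_format='h5')

# Restore them into a freshly built model with the same architecture
model = create_model()
model.load_weights('./checkpoints/my_checkpoint.h5')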
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Save the entire model Use [`model.save`](https://www.tensorflow.org/api_docs/python/tf/keras/Modelsave) to save a model's architecture, weights, and training configuration in a single file/folder. This allows you to export a model so it can be used without access to the original Python code*. Since the optimizer state is recovered, you can resume training from exactly where you left off.

Saving a fully functional model is very useful: you can load it in TensorFlow.js ([HDF5](https://js.tensorflow.org/tutorials/import-keras.html), [Saved Model](https://js.tensorflow.org/tutorials/import-saved-model.html)) and then train and run it in a web browser, or convert it to run on mobile devices using TensorFlow Lite ([HDF5](https://www.tensorflow.org/lite/convert/python_apiexporting_a_tfkeras_file_), [Saved Model](https://www.tensorflow.org/lite/convert/python_apiexporting_a_savedmodel_))

\*Custom objects (subclassed models or layers) require special attention when saving and loading. See the **Saving custom objects** section below.

HDF5 format Keras provides a basic saving format using the [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) standard.
# Create and train a new model instance.
model = create_model()
model.fit(train_images, train_labels, epochs=5)

# Save the entire model to a HDF5 file.
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Now, recreate the model from that file:
# Recreate the exact same model, including its weights and the optimizer new_model = tf.keras.models.load_model('my_model.h5') # Show the model architecture new_model.summary()
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Check the accuracy of the model:
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2) print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
This technique saves everything:
* The weight values
* The model's configuration (architecture)
* The optimizer configuration

Keras saves models by inspecting their architecture. Currently, it is not able to save TensorFlow optimizers (from `tf.train`). When using those you will need to re-compile the model after loading, and you will lose the state of the optimizer.

SavedModel format The SavedModel format is another way to serialize models. Models saved in this format can be restored using `tf.keras.models.load_model` and are compatible with TensorFlow Serving. The [SavedModel guide](https://www.tensorflow.org/guide/saved_model) goes into detail about how to serve/inspect a SavedModel. The code below illustrates the steps taken to save and load the model again.
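As a side note on the re-compilation caveat just mentioned: one way to reload a saved model and compile it again by hand is sketched below. The optimizer, loss, and metrics shown are illustrative assumptions, not taken from the original notebook; the SavedModel example then continues in the next cell.

import tensorflow as tf

# Load the architecture and weights only, skipping the saved training configuration
reloaded = tf.keras.models.load_model('my_model.h5', compile=False)

# Re-compile manually before further training or evaluation
reloaded.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])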
# Create and train a new model instance. model = create_model() model.fit(train_images, train_labels, epochs=5) # Save the entire model as a SavedModel. !mkdir -p saved_model model.save('saved_model/my_model')
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
The SavedModel format is a directory containing a protobuf binary and a TensorFlow checkpoint. Inspect the saved model directory:
# my_model directory !ls saved_model # Contains an assets folder, saved_model.pb, and variables folder. !ls saved_model/my_model
_____no_output_____
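The SavedModel guide referenced above also describes the `saved_model_cli` tool for inspecting a saved model from the command line. A minimal sketch (an addition, not part of the original notebook, and assuming the tool is available in the environment):

# Show the tag-sets, signatures and tensor shapes stored in the SavedModel
!saved_model_cli show --dir saved_model/my_model --all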
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Reload a fresh Keras model from the saved model:
new_model = tf.keras.models.load_model('saved_model/my_model') # Check its architecture new_model.summary()
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
The restored model is compiled with the same arguments as the original model. Try running evaluation and prediction with it:
# Evaluate the restored model loss, acc = new_model.evaluate(test_images, test_labels, verbose=2) print('Restored model, accuracy: {:5.2f}%'.format(100*acc)) print(new_model.predict(test_images).shape)
_____no_output_____
Apache-2.0
site/id/tutorials/keras/save_and_load.ipynb
NarimaneHennouni/docs-l10n
Expanding the dataset size through augmentation 1. Mount Google Drive
from google.colab import drive drive.mount("/content/gdrive") path = "gdrive/'My Drive'/'Colab Notebooks'/CNN" !ls gdrive/'My Drive'/'Colab Notebooks'/CNN/datasets
cats_and_dogs_small
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
2. λͺ¨λΈ 생성
from tensorflow.keras import layers, models, optimizers
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
0. Create a Sequential object
1. conv layer (32 filters, kernel size (3,3), activation 'relu', input_shape (150,150,3))
2. pooling layer (pool_size (2,2))
3. conv layer (64 filters, kernel size (3,3), activation 'relu')
4. pooling layer (pool_size (2,2))
5. conv layer (128 filters, kernel size (3,3), activation 'relu')
6. pooling layer (pool_size (2,2))
7. conv layer (128 filters, kernel size (3,3), activation 'relu')
8. pooling layer (pool_size (2,2))
-------
9. flatten layer
10. Dense layer 512, relu
11. Dense layer 1, sigmoid
model = models.Sequential() model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3))) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(64, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Conv2D(128, (3, 3), activation='relu')) model.add(layers.MaxPooling2D((2, 2))) model.add(layers.Flatten()) model.add(layers.Dropout(0.5)) model.add(layers.Dense(512, activation='relu')) model.add(layers.Dense(1, activation='sigmoid')) model.summary() from tensorflow.keras import optimizers model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=1e-4), metrics=['accuracy']) from tensorflow.keras.preprocessing.image import ImageDataGenerator
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
3. Data preprocessing
import os

base_dir = '/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small'

train_dir = os.path.join(base_dir,'train')
validation_dir = os.path.join(base_dir,'validation')
test_dir=os.path.join(base_dir,'test')

# [Code to write]
# Create an ImageDataGenerator object named train_datagen
# Augmentation options for train_datagen
# 1. scale : 0~1
# 2. rotation angle range : -40~+40
# 3. horizontal shift range : 20% of the total width
# 4. vertical shift range : 20% of the total height
# 5. shear angle range : 10%
# 6. zoom range : 20%
# 7. flip images horizontally : True
# 8. strategy to fill pixels newly created by rotation or horizontal/vertical shifts : 'nearest'
train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.1,
                                   zoom_range=0.2,
                                   horizontal_flip=True,
                                   fill_mode='nearest')

validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        train_dir, target_size=(150,150), batch_size=20,class_mode='binary')

validation_generator = validation_datagen.flow_from_directory(
        validation_dir, target_size=(150,150), batch_size=20,class_mode='binary')

test_generator = test_datagen.flow_from_directory(
        test_dir, target_size=(150,150), batch_size=20,class_mode='binary')
Found 2000 images belonging to 2 classes. Found 1000 images belonging to 2 classes. Found 1000 images belonging to 2 classes.
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
4. λͺ¨λΈ ν›ˆλ ¨
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=30, validation_data=validation_generator, validation_steps=50)
WARNING:tensorflow:From <ipython-input-16-c480ae1e8dcf>:5: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators. Epoch 1/30 100/100 [==============================] - 526s 5s/step - loss: 0.6970 - accuracy: 0.5140 - val_loss: 0.6874 - val_accuracy: 0.5410 Epoch 2/30 100/100 [==============================] - 23s 229ms/step - loss: 0.6865 - accuracy: 0.5545 - val_loss: 0.6869 - val_accuracy: 0.5320 Epoch 3/30 100/100 [==============================] - 22s 225ms/step - loss: 0.6742 - accuracy: 0.5895 - val_loss: 0.6685 - val_accuracy: 0.5890 Epoch 4/30 100/100 [==============================] - 23s 226ms/step - loss: 0.6668 - accuracy: 0.6040 - val_loss: 0.6377 - val_accuracy: 0.6310 Epoch 5/30 100/100 [==============================] - 23s 227ms/step - loss: 0.6580 - accuracy: 0.6155 - val_loss: 0.6327 - val_accuracy: 0.6370 Epoch 6/30 100/100 [==============================] - 23s 229ms/step - loss: 0.6377 - accuracy: 0.6310 - val_loss: 0.6227 - val_accuracy: 0.6420 Epoch 7/30 100/100 [==============================] - 23s 233ms/step - loss: 0.6299 - accuracy: 0.6495 - val_loss: 0.6394 - val_accuracy: 0.6020 Epoch 8/30 100/100 [==============================] - 23s 225ms/step - loss: 0.6145 - accuracy: 0.6640 - val_loss: 0.5787 - val_accuracy: 0.6870 Epoch 9/30 100/100 [==============================] - 22s 225ms/step - loss: 0.6018 - accuracy: 0.6680 - val_loss: 0.6183 - val_accuracy: 0.6330 Epoch 10/30 100/100 [==============================] - 23s 227ms/step - loss: 0.5918 - accuracy: 0.6940 - val_loss: 0.6036 - val_accuracy: 0.6570 Epoch 11/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5928 - accuracy: 0.6690 - val_loss: 0.6776 - val_accuracy: 0.6090 Epoch 12/30 100/100 [==============================] - 22s 223ms/step - loss: 0.5764 - accuracy: 0.6960 - val_loss: 0.5405 - val_accuracy: 0.7140 Epoch 13/30 100/100 [==============================] - 22s 222ms/step - loss: 0.5730 - accuracy: 0.7060 - val_loss: 0.5361 - val_accuracy: 0.7180 Epoch 14/30 100/100 [==============================] - 22s 225ms/step - loss: 0.5678 - accuracy: 0.6925 - val_loss: 0.5781 - val_accuracy: 0.6880 Epoch 15/30 100/100 [==============================] - 23s 229ms/step - loss: 0.5682 - accuracy: 0.6990 - val_loss: 0.5299 - val_accuracy: 0.7190 Epoch 16/30 100/100 [==============================] - 23s 225ms/step - loss: 0.5564 - accuracy: 0.7070 - val_loss: 0.5587 - val_accuracy: 0.6990 Epoch 17/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5492 - accuracy: 0.7155 - val_loss: 0.5078 - val_accuracy: 0.7350 Epoch 18/30 100/100 [==============================] - 23s 227ms/step - loss: 0.5573 - accuracy: 0.7065 - val_loss: 0.5177 - val_accuracy: 0.7370 Epoch 19/30 100/100 [==============================] - 23s 229ms/step - loss: 0.5490 - accuracy: 0.7140 - val_loss: 0.6353 - val_accuracy: 0.6600 Epoch 20/30 100/100 [==============================] - 24s 236ms/step - loss: 0.5443 - accuracy: 0.7075 - val_loss: 0.4860 - val_accuracy: 0.7640 Epoch 21/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5408 - accuracy: 0.7230 - val_loss: 0.5099 - val_accuracy: 0.7400 Epoch 22/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5256 - accuracy: 0.7405 - val_loss: 0.5177 - val_accuracy: 0.7410 Epoch 23/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5321 - accuracy: 
0.7390 - val_loss: 0.5267 - val_accuracy: 0.7250 Epoch 24/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5251 - accuracy: 0.7385 - val_loss: 0.6157 - val_accuracy: 0.6870 Epoch 25/30 100/100 [==============================] - 22s 224ms/step - loss: 0.5353 - accuracy: 0.7305 - val_loss: 0.5214 - val_accuracy: 0.7300 Epoch 26/30 100/100 [==============================] - 23s 225ms/step - loss: 0.5129 - accuracy: 0.7460 - val_loss: 0.5402 - val_accuracy: 0.7170 Epoch 27/30 100/100 [==============================] - 22s 223ms/step - loss: 0.5099 - accuracy: 0.7550 - val_loss: 0.4921 - val_accuracy: 0.7560 Epoch 28/30 100/100 [==============================] - 23s 226ms/step - loss: 0.5256 - accuracy: 0.7350 - val_loss: 0.4778 - val_accuracy: 0.7600 Epoch 29/30 100/100 [==============================] - 23s 228ms/step - loss: 0.5287 - accuracy: 0.7350 - val_loss: 0.5037 - val_accuracy: 0.7450 Epoch 30/30 100/100 [==============================] - 23s 228ms/step - loss: 0.5062 - accuracy: 0.7485 - val_loss: 0.5342 - val_accuracy: 0.7140
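The deprecation warning at the top of the log above suggests using `Model.fit`, which accepts generators directly. A sketch of the equivalent call (an addition based on that warning, not part of the original notebook):

# Equivalent training call without the deprecated fit_generator
history = model.fit(train_generator,
                    steps_per_epoch=100,
                    epochs=30,
                    validation_data=validation_generator,
                    validation_steps=50)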
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
5. Visualize performance
import matplotlib.pyplot as plt acc = history.history['accuracy'] val_acc = history.history['val_accuracy'] loss = history.history['loss'] val_loss = history.history['val_loss'] epochs = range(1, len(acc) +1) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.legend() plt.show()
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
* Since both acc and val_acc tend to increase together, no overfitting has occurred
plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.legend() plt.show()
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
6. λͺ¨λΈ ν‰κ°€ν•˜κΈ°
test_loss, test_accuracy = model.evaluate_generator(test_generator, steps=50) print(test_loss) print(test_accuracy)
0.5713947415351868 0.7160000205039978
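`evaluate_generator` carries the same deprecation note as `fit_generator`. A sketch of the equivalent `Model.evaluate` call (an assumption, not part of the original notebook):

# Equivalent evaluation call without the deprecated evaluate_generator
test_loss, test_accuracy = model.evaluate(test_generator, steps=50)
print(test_loss)
print(test_accuracy)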
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
7. λͺ¨λΈ μ €μž₯
model.save('/content/gdrive/My Drive/Colab Notebooks/CNN/datasets/cats_and_dogs_small_augmentation.h5')
_____no_output_____
MIT
03_CNN/04_2_CatAndDog_Augmentation.ipynb
seungbinahn/START_AI
Start Julia environment
# Install any required python packages here # !pip install <packages> # Here we install Julia %%capture %%shell if ! command -v julia 3>&1 > /dev/null then wget -q 'https://julialang-s3.julialang.org/bin/linux/x64/1.6/julia-1.6.2-linux-x86_64.tar.gz' \ -O /tmp/julia.tar.gz tar -x -f /tmp/julia.tar.gz -C /usr/local --strip-components 1 rm /tmp/julia.tar.gz fi julia -e 'using Pkg; pkg"add IJulia; precompile;"' echo 'Done'
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
After you run the first cell (the cell directly above this text), go to Colab's menu bar, select **Edit**, and then select **Notebook settings** from the drop down. Select *Julia 1.6* as the Runtime type. You can also select your preferred hardware acceleration (defaults to GPU). You should see something like this:> ![Colab Img](https://raw.githubusercontent.com/Dsantra92/Julia-on-Colab/master/misc/julia_menu.png) Click on SAVE. **We are ready to get going**
VERSION
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
**The next three cells are for GPU benchmarking. If you are using this notebook for the first time and have GPU enabled, you can give it a try.** Import all the Julia Packages Here, we first import all the required packages. CUDA is used to offload some of the processing to the gpu, Flux is the package for putting together the NN. MLDatasets contains the MNIST dataset which we will use in this example. Images contains some functionality to actually view images. Makie, and CairoMakie are used for plotting.
import Pkg Pkg.add(["CUDA","Flux","MLDatasets","Images","Makie","CairoMakie","ImageMagick"]) using CUDA, Flux, MLDatasets, Images, Makie, Statistics, CairoMakie,ImageMagick using Base.Iterators: partition
 Resolving package versions...  No Changes to `~/.julia/environments/v1.6/Project.toml`  No Changes to `~/.julia/environments/v1.6/Manifest.toml`
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's look at the functions we can call from the MNIST set itself
names(MNIST)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's assume we want to get the training data from the MNIST package. Now, let's see what we get returned if we call that function
Base.return_types(MNIST.traindata)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
This does not mean a heck of a lot to me initially, but we can basically see we get 2 tuples returned. So let's go ahead and assign some x and y to each of the tuples so we can probe further.
x, y = MNIST.traindata();
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Let's now further investigate the x
size(x)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
We know that the MNIST dataset contains 60,000 training images, each of size 28x28, so clearly we are looking at the images themselves. This is our input. Let's plot an example to make sure.
i = rand(1:60000) heatmap(x[:,:,i],colormap = :grays)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
Similarly, let's have a quick look at y; I expect it holds the labels associated with the images, so y[i] should be the label for the image we just plotted.
y[i]
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
And then let's check that the image above is labelled as what we expect.
y[i]

show(names(Images))

?imshow

names(ImageShow)
_____no_output_____
MIT
MNIST.ipynb
coenarrow/MNistTests
μ»΄νŒŒμΌλŸ¬μ—μ„œ λ³€μˆ˜, 쑰건문 λ‹€λ£¨κΈ°λ³€μˆ˜λ₯Ό 닀루기 μœ„ν•΄μ„œλŠ” κΈ°κ³„μƒνƒœμ— λ©”λͺ¨λ¦¬λ₯Ό μΆ”κ°€ν•˜κ³  λ©”λͺ¨λ¦¬ 연산을 μœ„ν•œ μ €κΈ‰μ–Έμ–΄ λͺ…령을 μΆ”κ°€ν•œλ‹€.쑰건문을 닀루기 μœ„ν•΄μ„œλŠ” μ‹€ν–‰μ½”λ“œλ₯Ό 순차적으둜만 μ‹€ν–‰ν•˜λŠ” 것이 μ•„λ‹ˆλΌ νŠΉμ • μ½”λ“œ μœ„μΉ˜λ‘œ μ΄λ™ν•˜μ—¬ μ‹€ν–‰ν•˜λŠ” μ €κΈ‰μ–Έμ–΄ λͺ…령을 μΆ”κ°€ν•œλ‹€.
data Expr = Var Name -- x | Val Value -- n | Add Expr Expr -- e1 + e2 -- | Sub Expr Expr -- | Mul Expr Expr -- | Div Expr Expr | If Expr Expr Expr -- if e then e1 else e0 deriving Show type Name = String -- λ³€μˆ˜μ˜ 이름은 λ¬Έμžμ—΄λ‘œ ν‘œν˜„ type Value = Int -- μƒμˆ˜κ°’μ€ μ •μˆ˜ type Stack = [Value] data Inst = ADD | PUSH Value -- μŠ€νƒ λͺ…λ Ή | GOTO Code | JMPZ Code -- μ‹€ν–‰μ½”λ“œ λͺ…λ Ή | READ Addr -- λ©”λͺ¨λ¦¬ λͺ…λ Ή deriving Show type Code = [Inst] -- type Env = [ (Name, Value) ] λΌλŠ” 인터프리터 μ‹€ν–‰ ν™˜κ²½μ„ -- 두 λ‹¨κ³„λ‘œ μ•„λž˜μ™€ 같이 λ‚˜λˆˆλ‹€ type SymTbl = [ (Name, Addr) ] -- 컴파일 λ‹¨κ³„μ—μ„œ μ‚¬μš©ν•˜λŠ” 심볼 ν…Œμ΄λΈ” type Memory = [ (Addr, Value) ] -- 기계(가상머신) μ‹€ν–‰ λ‹¨κ³„μ—μ„œ μ‚¬μš©ν•˜λŠ” λ©”λͺ¨λ¦¬ type Addr = Int -- μ£Όμ†ŒλŠ” μ •μˆ˜λ‘œ ν‘œν˜„ -- 이제 KontλŠ” μŠ€νƒλ§Œμ΄ μ•„λ‹ˆλΌ μ„Έ μš”μ†Œλ‘œ 이루어진 κΈ°κ³„μƒνƒœλ₯Ό λ³€ν™”μ‹œν‚€λŠ” ν•¨μˆ˜ νƒ€μž…μ΄λ‹€ type Kont = (Stack,Memory,Code) -> (Stack,Memory,Code) -- 더 이상 μ‹€ν–‰ν•  μ½”λ“œκ°€ μ—†λŠ” κΈ°κ³„μƒνƒœλ‘œ λ³€ν™”μ‹œν‚€λŠ” ν•¨μˆ˜ haltK :: Kont haltK (s, mem, _) = (s, mem, []) -- μŠ€νƒ λͺ…령을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•œ κΈ°κ³„μƒνƒœλ³€ν™” ν•¨μˆ˜λ“€ pushK :: Int -> Kont pushK n (s, mem, code) = (n:s, mem, code) addK :: Kont addK (n2:n1:s, mem, code) = ((n1+n2):s, mem, code) -- μ‹€ν–‰μ½”λ“œ λͺ…령을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•œ κΈ°κ³„μƒνƒœλ³€ν™” ν•¨μˆ˜λ“€ jmpzK :: Code -> Kont jmpzK code (0:s, mem, _) = (s, mem, code) -- μŠ€νƒ 맨 μœ„ 값이 0이면 μƒˆλ‘œμš΄ code μœ„μΉ˜λ‘œ 점프 jmpzK _ (_:s, mem, c) = (s, mem, c) -- μŠ€νƒ 맨 μœ„κ°€ 0이 μ•„λ‹ˆλ©΄ μ›λž˜ μ‹€ν–‰ν•˜λ˜ μ½”λ“œ cμ‹€ν–‰ gotoK :: Code -> Kont gotoK code (s, mem, _) = (s, mem, code) -- 무쑰건 μƒˆλ‘œμš΄ code μœ„μΉ˜λ‘œ 이동 -- λ©”λͺ¨λ¦¬ λͺ…령을 μ‹€ν–‰μ‹œν‚€κΈ° μœ„ν•œ κΈ°κ³„μƒνƒœλ³€ν™” ν•¨μˆ˜ -- (λ©”λͺ¨λ¦¬μ—μ„œ 값을 읽어 μŠ€νƒ 맨 μœ„μ— μŒ“λŠ”λ‹€) readK a (s, mem, code) = case lookup a mem of Nothing -> error (show a ++ " uninitialized memory address") Just v -> (v:s, mem, code) compile :: SymTbl -> Expr -> Code compile tbl (Var x) = case lookup x tbl of Nothing -> error (x ++ " not found") Just a -> [READ a] compile tbl (Val n) = [PUSH n] compile tbl (Add e1 e2) = compile tbl e1 ++ compile tbl e2 ++ [ADD] compile tbl (If e e1 e0) = compile tbl e ++ [JMPZ c0] ++ c1 ++ [GOTO []] ++ c0 where c1 = compile tbl e1 c0 = compile tbl e0 step :: Inst -> Kont step (PUSH n) = pushK n step ADD = addK step (GOTO c) = gotoK c step (JMPZ c) = jmpzK c step (READ a) = readK a run :: Kont run (s, mem, []) = (s, mem, []) run (s, mem, c:cs) = run (step c (s, mem, cs)) import Data.List (union) vars (Var x) = [x] vars (Val _) = [] vars (Add e1 e2) = vars e1 `union` vars e2 vars (If e e1 e0) = vars e `union` vars e1 `union` vars e0 -- μΈν„°ν”„λ¦¬ν„°μ—μ„œλŠ” μ•„λž˜ 식을 μ‹€ν–‰ν•˜λ €λ©΄ [("x",2),("y",3)]와 같은 -- μ‹€ν–‰ν™˜κ²½μ„ λ§Œλ“€μ–΄ ν•œλ°©μ— μ‹€ν–‰ν•˜λ©΄ λ˜μ§€λ§Œ μ»΄νŒŒμΌλŸ¬μ—λŠ” 두 단계 e0 = Add (Add (Var "x") (Var "y")) (Val 100) e0 -- μ»΄νŒŒμΌν•  λ•ŒλŠ” λ³€μˆ˜λ₯Ό λ©”λͺ¨λ¦¬ μ£Όμ†Œμ— λŒ€μ‘μ‹œν‚€λŠ” μ‹¬λ³Όν…Œμ΄λΈ”μ΄ ν•„μš” code0 = compile [("x",102),("y",103)] e0 code0 -- μ‹€ν–‰ν•  λ•ŒλŠ” ν•΄λ‹Ή μ£Όμ†Œμ— μ μ ˆν•œ 값을 ν• λ‹Ήν•œ λ©”λͺ¨λ¦¬κ°€ ν•„μš” vm0 = ([], [(102,7), (103,3)], code0) run vm0 {- b = 2, x = 12, y = 123 -} -- if b then (x + 3) else y e1 = If (Var "b") (Add (Var "x") (Val 3)) (Var "y") -- (if b then (x + 3) else y) + 1000 e2 = e1 `Add` Val 1000 tbl0 = [("b",101),("x",102),("y",103)] tbl0 mem0 = [(101,2), (102,12), (103,123)] mem0 code1 = compile tbl0 e1 code1 code2 = compile tbl0 e2 code2 {- import GHC.HeapView putStr =<< ppHeapGraph <$> buildHeapGraph 15 code2 (asBox code2) -} -- μ˜ˆμƒλŒ€λ‘œ e1의 계산 κ²°κ³Ό 
μŠ€νƒ 맨 μœ„μ— 15κ°€ λ‚˜μ˜¨λ‹€ run ([], mem0, code1) -- e2의 계산 κ²°κ³ΌλŠ” 1015이어야 ν•˜λŠ”λ° e1κ³Ό λ§ˆμ°¬κ°€μ§€λ‘œ 15κ°€ λ˜μ–΄λ²„λ¦°λ‹€ run ([], mem0, code2)
_____no_output_____
MIT
0917 Compilers with variables and conditionals.ipynb
hnu-pl/compiler2019fall
Below, to examine more closely why running code2 (the compiled form of e2) does not give the desired result, the step function is called one step at a time and the machine states vm0,...,vm6 before and after each instruction are inspected.
vm0@(s0, _,c0:cs0) = ([], mem0, code2) vm0 vm1@(s1,mem1,c1:cs1) = step c0 (s0,mem0,cs0) vm1 vm2@(s2,mem2,c2:cs2) = step c1 (s1,mem1,cs1) vm2 vm3@(s3,mem3,c3:cs3) = step c2 (s2,mem2,cs2) vm3 vm4@(s4,mem4,c4:cs4) = step c3 (s3,mem3,cs3) vm4 vm5@(s5,mem5,c5:cs5) = step c4 (s4,mem4,cs4) vm5 vm6 = step c5 (s5,mem5,cs5) vm6
_____no_output_____
MIT
0917 Compilers with variables and conditionals.ipynb
hnu-pl/compiler2019fall
BASIC PYTHON FOR RESEARCHERS _by_ [**_Megat Harun Al Rashid bin Megat Ahmad_**](https://www.researchgate.net/profile/Megat_Harun_Megat_Ahmad) last updated: April 14, 2016 ------- _8. Database and Data Analysis_ ---$Pandas$ is an open source library for data analysis in _Python_. It gives _Python_ capabilities similar to the _R_ programming language, and even though it is possible to run _R_ in _Jupyter Notebook_, it is often more practical to do data analysis with a _Python_-friendly syntax. As with other libraries, the first step in using $Pandas$ is to import the library, usually together with the $Numpy$ library.
import pandas as pd import numpy as np
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.1 Data Structures_** The data structures of $Pandas$ (similar to _Sequence_ types in _Python_) revolve around the **_Series_** and **_DataFrame_** structures. Both are fast as they are built on top of $Numpy$. A **_Series_** is a one-dimensional object with many properties similar to a _Python_ list or dictionary. Each element or item in a **_Series_** is by default assigned an index label from _0_ to _N-1_ (where _N_ is the length of the **_Series_**), and it can contain the various _Python_ data types.
# Creating a series (with different type of data) s1 = pd.Series([34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW']) s1
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
The index of a **_Series_** can be specified when it is created, giving it behaviour similar to a dictionary.
# Creating a series with specified index lt = [34, 'Material', 4*np.pi, 'Reactor', [100,250,500,750], 'kW'] s2 = pd.Series(lt, index = ['b1', 'r1', 'solid angle', 18, 'reactor power', 'unit']) s2
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Data can be extracted by specifying the element position or index (similar to list/dictionary).
s1[3], s2['solid angle']
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
**_Series_** can also be constructed from a dictionary.
pop_cities = {'Kuala Lumpur':1588750, 'Seberang Perai':818197, 'Kajang':795522, 'Klang':744062, 'Subang Jaya':708296} cities = pd.Series(pop_cities) cities
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
The elements can be sorted using the $Series.order()$ function. This does not change the structure of the original variable.
cities.order(ascending=False) cities
_____no_output_____
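In more recent pandas releases, `Series.order()` and `Series.sort()` were removed in favor of `sort_values()`. A sketch of the equivalent calls, assuming a modern pandas version (this cell is an addition, not part of the original notebook):

# Descending sort that returns a new Series and leaves `cities` unchanged
cities.sort_values(ascending=False)

# In-place equivalent of the old sort() behaviour
cities.sort_values(ascending=False, inplace=True)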
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Another sorting function is $sort()$, but this one changes the structure of the **_Series_** variable in place.
# Sorting with descending values cities.sort(ascending=False) cities cities
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Conditions can be applied to the elements.
# cities with population less than 800,000 cities[cities<800000] # cities with population between 750,000 and 800,000 cities[cities<800000][cities>750000]
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
---A **_DataFrame_** is a 2-dimensional data structure with named rows and columns. It is similar to _R_'s _data.frame_ object and functions like a spreadsheet. A **_DataFrame_** can be thought of as a collection of **_Series_**, one per column name. A **_DataFrame_** can be created by passing a 2-dimensional array of data and specifying the row and column names.
# Creating a DataFrame by passing a 2-D numpy array of random number # Creating first the date-time index using date_range function # and checking it. dates = pd.date_range('20140801', periods = 8, freq = 'D') dates # Creating the column names as list Kedai = ['Kedai A', 'Kedai B', 'Kedai C', 'Kedai D', 'Kedai E'] # Creating the DataFrame with specified rows and columns df = pd.DataFrame(np.random.randn(8,5),index=dates,columns=Kedai) df
_____no_output_____
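As noted above, a DataFrame can also be viewed as a collection of Series keyed by column name, so it can be built directly from a dictionary of Series. A small sketch reusing the `dates` index and imports from the cells above; the store names are made up for illustration:

# Two Series sharing the same date index become two columns of a DataFrame
s_a = pd.Series(np.random.randn(8), index=dates)
s_b = pd.Series(np.random.randn(8), index=dates)

pd.DataFrame({'Kedai X': s_a, 'Kedai Y': s_b})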
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
---Some of the useful functions that can be applied to a **_DataFrame_** include:
df.head() # Displaying the first five (default) rows df.head(3) # Displaying the first three (specified) rows df.tail(2) # Displaying the last two (specified) rows df.index # Showing the index of rows df.columns # Showing the fields of columns df.values # Showing the data only in its original 2-D array df.describe() # Simple statistical data for each column df.T # Transposing the DataFrame (index becomes column and vice versa) df.sort_index(axis=1,ascending=False) # Sorting with descending column df.sort(columns='Kedai D') # Sorting according to ascending specific column df['Kedai A'] # Extract specific column (using python list syntax) df['Kedai A'][2:4] # Slicing specific column (using python list syntax) df[2:4] # Slicing specific row data (using python list syntax) # Slicing specific index range df['2014-08-03':'2014-08-05'] # Slicing specific index range for a particular column df['2014-08-03':'2014-08-05']['Kedai B'] # Using the loc() function # Slicing specific index and column ranges df.loc['2014-08-03':'2014-08-05','Kedai B':'Kedai D'] # Slicing specific index range with specific column names df.loc['2014-08-03':'2014-08-05',['Kedai B','Kedai D']] # Possibly not yet to have something like this df.loc[['2014-08-01','2014-08-03':'2014-08-05'],['Kedai B','Kedai D']] # Using the iloc() function df.iloc[3] # Specific row location df.iloc[:,3] # Specific column location (all rows) df.iloc[2:4,1:3] # Python like slicing for range df.iloc[[2,4],[1,3]] # Slicing with python like list # Conditionals on the data df>0 # Array values > 0 OR df[df>0] # Directly getting the value
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
**_NaN_** indicates an empty, missing, or unavailable value.
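A minimal sketch of how such missing values can then be handled; the fill value of 0 is an arbitrary choice for illustration:

masked = df[df>0]      # entries <= 0 become NaN

masked.isnull().sum()  # count the NaN entries per column
masked.fillna(0)       # replace NaN with a fixed value
masked.dropna()        # or drop any row that contains a NaN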
df[df['Kedai B']<0] # With reference to specific value in a column (e.g. Kedai B) df2 = df.copy() # Made a copy of a database df2 # Adding column df2['Tambah'] = ['satu','satu','dua','tiga','empat','tiga','lima','enam'] df2 # Adding row using append() function. The previous loc() is possibly deprecated. # Assign a new name to the new row (with the same format) new_row_name = pd.date_range('20140809', periods = 1, freq = 'D') # Appending new row with new data df2.append(list(np.random.randn(5))+['sembilan']) # Renaming the new row (here actually is a reassignment) df2 = df2.rename(index={10: new_row_name[0]}) df2 # Assigning new data to a row df2.loc['2014-08-05'] = list(np.random.randn(5))+['tujuh'] df2 # Assigning new data to a specific element df2.loc['2014-08-05','Tambah'] = 'lapan' df2 # Using the isin() function (returns boolean data frame) df2.isin(['satu','tiga']) # Select specific row based on additonal column df2[df2['Tambah'].isin(['satu','tiga'])] # Use previous command - select certain column based on selected additional column df2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D'] # Select > 0 from previous cell... (df2[df2['Tambah'].isin(['satu','tiga'])].loc[:,'Kedai B':'Kedai D']>0)
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
*** **_8.2 Data Operations_** We have already seen a few operations on **_Series_** and **_DataFrame_**; here they will be explored further.
df.mean() # Statistical mean (column) - same as df.mean(0), 0 means column df.mean(1) # Statistical mean (row) - 1 means row df.mean()['Kedai C':'Kedai E'] # Statistical mean (range of columns) df.max() # Statistical max (column) df.max()['Kedai C'] # Statistical max (specific column) df.max(1)['2014-08-04':'2014-08-07'] # Statistical max (specific row) df.max(1)[dates[3]] # Statistical max (specific row by variable)
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
---Other statistical functions can be explored by typing `df.` followed by Tab completion. The data in a **_DataFrame_** can be represented by a variable declared with the $lambda$ operator and operated on through $apply()$.
df.apply(lambda x: x.max() - x.min()) # Operating array values with function df.apply(lambda z: np.log(z)) # Operating array values with function
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher
Replacing, rearranging, and operating on data across columns can be done much like in a spreadsheet.
df3 = df.copy() df3[r'Kedai A^2/Kedai E'] = df3['Kedai A']**2/df3['Kedai E'] df3
_____no_output_____
Artistic-2.0
.ipynb_checkpoints/Tutorial 8 - Database and Data Analysis-checkpoint.ipynb
megatharun/basic-python-for-researcher