| column | type | min length | max length |
|---|---|---|---|
| markdown | string | 0 | 1.02M |
| code | string | 0 | 832k |
| output | string | 0 | 1.02M |
| license | string | 3 | 36 |
| path | string | 6 | 265 |
| repo_name | string | 6 | 127 |
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Custom layers

We recommend using `tf.keras` as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
!pip install tf-nightly-2.0-preview

import tensorflow as tf
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Layers: common sets of useful operations

Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables. Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers, either from scratch or as the composition of existing layers. TensorFlow includes the full [Keras](https://keras.io) API in the tf.keras package, and the Keras layers are very useful when building your own models.
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
The full list of pre-existing layers can be seen in [the documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers). It includes Dense (a fully-connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
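As a quick illustration of the list above, each of these layers is constructed like the `Dense` layer from the previous cell; the argument values below are arbitrary examples, not recommendations.

# Constructing a few of the built-in layers named above; the filter counts,
# unit counts and dropout rate are illustrative example values only.
conv = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation='relu')
lstm = tf.keras.layers.LSTM(units=64)
bn = tf.keras.layers.BatchNormalization()
dropout = tf.keras.layers.Dropout(rate=0.5)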
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))

# Layers have many useful methods. For example, you can inspect all variables
# in a layer by calling layer.variables. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables

# The variables are also accessible through nice accessors
layer.kernel, layer.bias
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Implementing custom layers

The best way to implement your own layer is to extend the tf.keras.layers.Layer class and implement:
* `__init__`, where you can do all input-independent initialization
* `build`, where you know the shapes of the input tensors and can do the rest of the initialization
* `call`, where you do the forward computation

Note that you don't have to wait until `build` is called to create your variables; you can also create them in `__init__`. However, the advantage of creating them in `build` is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in `__init__` means that the shapes required to create the variables must be specified explicitly.
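For contrast with the `build`-based example in the next cell, here is a minimal sketch of the `__init__` variant mentioned above; the class name and arguments are hypothetical, used only to show that the input dimension must then be passed in explicitly.

# Hypothetical variant that creates its variables in __init__, so the
# input dimension (num_inputs) has to be specified up front.
class MyDenseLayerEager(tf.keras.layers.Layer):
    def __init__(self, num_inputs, num_outputs):
        super(MyDenseLayerEager, self).__init__()
        self.kernel = self.add_variable("kernel",
                                        shape=[num_inputs, num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel)

layer = MyDenseLayerEager(5, 10)
print(layer(tf.zeros([10, 5])))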
class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_variable("kernel",
                                        shape=[int(input_shape[-1]),
                                               self.num_outputs])

    def call(self, input):
        return tf.matmul(input, self.kernel)

layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.variables)
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Note that you don't have to wait until `build` is called to create your variables; you can also create them in `__init__`. Overall, code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a [github issue](http://github.com/tensorflow/tensorflow/issues/new) or, even better, sending us a pull request!

Models: composing layers

Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut. The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.
class ResnetIdentityBlock(tf.keras.Model):
    def __init__(self, kernel_size, filters):
        super(ResnetIdentityBlock, self).__init__(name='')
        filters1, filters2, filters3 = filters

        self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
        self.bn2a = tf.keras.layers.BatchNormalization()

        self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
        self.bn2b = tf.keras.layers.BatchNormalization()

        self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
        self.bn2c = tf.keras.layers.BatchNormalization()

    def call(self, input_tensor, training=False):
        x = self.conv2a(input_tensor)
        x = self.bn2a(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv2b(x)
        x = self.bn2b(x, training=training)
        x = tf.nn.relu(x)

        x = self.conv2c(x)
        x = self.bn2c(x, training=training)

        x += input_tensor
        return tf.nn.relu(x)

block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.variables])
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential.
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),
                              tf.keras.layers.BatchNormalization(),
                              tf.keras.layers.Conv2D(2, 1, padding='same'),
                              tf.keras.layers.BatchNormalization(),
                              tf.keras.layers.Conv2D(3, (1, 1)),
                              tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
Next steps

Now you can go back to the previous notebook and adapt the linear regression example to use layers and models so that it is better structured.
_____no_output_____
Apache-2.0
site/en/2/tutorials/eager/custom_layers.ipynb
allenlavoie/docs
check_list = [1, 1, 5, 7, 9, 6, 4]
sub_list = [1, 1, 5]

print("original list : " + str(check_list))
print("original sublist : " + str(sub_list))

flag = 0
if set(sub_list).issubset(set(check_list)):
    flag = 1

if flag:
    print("Its a Match.")
else:
    print("Its Gone")
original list : [1, 1, 5, 7, 9, 6, 4] original sublist : [1, 1, 5] Its a Match.
Apache-2.0
Day5 Assignment1.ipynb
Tulasi-ummadipolu/LetsUpgrade-Python-B7
Planewave propagation in a Whole-space (frequency-domain)

Purpose

We visualize a downward-propagating plane wave in a homogeneous earth medium. With the three apps, a) Plane wave app, b) Profile app, and c) Polarization ellipse app, we explore the fundamental concepts of plane wave propagation.

Set up

The plane wave EM equation can be written as

$$\frac{\partial^2 \mathbf{E}}{\partial z^2} + k^2 \mathbf{E} = 0.$$

For a homogeneous earth, the solution can be derived directly:

$$\mathbf{E} = \mathbf{E}_0 e^{ikz}$$
$$\mathbf{H} = - \frac{1}{i \omega \mu} \nabla \times (\mathbf{E}_0 e^{ikz}),$$

where the complex wavenumber $k$ is

$$ k = \sqrt{\mu \epsilon \omega^2 - i \mu \sigma \omega}.$$

In the time domain, the wave travelling in the negative z-direction has the form:

$$ \mathbf{e} = \mathbf{e}_0^- e^{i(k z + \omega t)}.$$
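As a rough numerical illustration (not part of the original apps), the complex wavenumber and the resulting decay length can be evaluated directly from the definition above; the conductivity and frequency values here are arbitrary placeholders.

import numpy as np

mu0 = 4 * np.pi * 1e-7   # permeability of free space (H/m)
eps0 = 8.854e-12         # permittivity of free space (F/m)
sigma = 1e-2             # assumed conductivity (S/m)
f = 1e3                  # assumed frequency (Hz)
omega = 2 * np.pi * f

# complex wavenumber k = sqrt(mu*eps*omega^2 - i*mu*sigma*omega)
k = np.sqrt(mu0 * eps0 * omega**2 - 1j * mu0 * sigma * omega)

# skin depth: distance over which |E| in e^{ikz} decays by a factor 1/e
skin_depth = 1.0 / np.abs(k.imag)
print("k =", k, " skin depth =", skin_depth, "m")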
ax = plotObj3D()
_____no_output_____
MIT
notebooks/em/FDEM_Planewave_Wholespace.ipynb
jcapriot/geosci-labs
Planewave app

Parameters:
- Field: Type of EM fields ("Ex": electric field, "Hy": magnetic field)
- AmpDir: Type of the vectorial EM fields
  - None: $F_x$ or $F_y$ or $F_z$
  - Amp: $\mathbf{F} \cdot \mathbf{F}^* = |\mathbf{F}|^2$
  - Dir: Real part of a vectorial EM field, $\Re[\mathbf{F}]$
- ComplexNumber: Type of complex data ("Re", "Im", "Amp", "Phase")
- Frequency: Transmitting frequency (Hz)
- Sigma: Conductivity of homogeneous earth (S/m)
- Scale: Choose "log" or "linear" scale
- Time:
dwidget = PlanewaveWidget()
Q = dwidget.InteractivePlaneWave()
display(Q)
_____no_output_____
MIT
notebooks/em/FDEM_Planewave_Wholespace.ipynb
jcapriot/geosci-labs
Profile app

We visualize EM fields along a vertical profile (marked as red dots in the above app).

Parameters:
- **Field**: Ex, Hy, and Impedance
- **$\sigma$**: Conductivity (S/m)
- **Scale**: Log10 or Linear scale
- **Fixed**: Fix the scale or not
- **$f$**: Frequency
- **$t$**: Time
display(InteractivePlaneProfile())
_____no_output_____
MIT
notebooks/em/FDEM_Planewave_Wholespace.ipynb
jcapriot/geosci-labs
Polarization Ellipse app
Polarwidget = PolarEllipse();
Polarwidget.Interactive()
_____no_output_____
MIT
notebooks/em/FDEM_Planewave_Wholespace.ipynb
jcapriot/geosci-labs
The 1cycle policy
from fastai.gen_doc.nbdoc import *
from fastai import *
from fastai.vision import *
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
What is 1cycle?

This Callback allows us to easily train a network using Leslie Smith's 1cycle policy. To learn more about the 1cycle technique for training neural networks, check out [Leslie Smith's paper](https://arxiv.org/pdf/1803.09820.pdf), and for a more graphical and intuitive explanation check out [Sylvain Gugger's post](https://sgugger.github.io/the-1cycle-policy.html).

To use our 1cycle policy we will need an [optimum learning rate](https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html). We can find this learning rate by using a learning rate finder, which can be called by using [`lr_finder`](/callbacks.lr_finder.htmlcallbacks.lr_finder). It will do a mock training by going over a large range of learning rates, then plot them against the losses. We will pick a value a bit before the minimum, where the loss still improves. Our graph would look something like this:

![onecycle_finder](imgs/onecycle_finder.png)

Here anything between `3x10^-2` and `10^-2` is a good idea.

Next we will apply the 1cycle policy with the chosen learning rate as the maximum learning rate. The original 1cycle policy has three steps:

1. We progressively increase our learning rate from lr_max/div_factor to lr_max and at the same time we progressively decrease our momentum from mom_max to mom_min.
2. We do the exact opposite: we progressively decrease our learning rate from lr_max to lr_max/div_factor and at the same time we progressively increase our momentum from mom_min to mom_max.
3. We further decrease our learning rate from lr_max/div_factor to lr_max/(div_factor x 100) and we keep momentum steady at mom_max.

This gives the following form:

Unpublished work has shown even better results by using only two phases: the same phase 1, followed by a second phase where we do a cosine annealing from lr_max to 0. The momentum goes from mom_min to mom_max by following the symmetric cosine (see graph a bit below).

Basic Training

The one cycle policy allows us to train very quickly, a phenomenon termed [_superconvergence_](https://arxiv.org/abs/1708.07120). To see this in practice, we will first train a CNN and see how our results compare when we use the [`OneCycleScheduler`](/callbacks.one_cycle.htmlOneCycleScheduler) with [`fit_one_cycle`](/train.htmlfit_one_cycle).
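For intuition, here is a minimal numerical sketch of the two-phase schedule described above (linear warm-up, then cosine annealing to zero); the `lr_max`, `div_factor`, momentum range and phase split are illustrative values only, not fastai's internals.

import numpy as np

def one_cycle_schedule(n_steps, lr_max=5e-2, div_factor=25.0,
                       moms=(0.95, 0.85), pct_start=0.3):
    """Return (lrs, momentums) for a two-phase 1cycle-style schedule."""
    n_up = int(n_steps * pct_start)
    n_down = n_steps - n_up
    # phase 1: learning rate rises linearly, momentum falls linearly
    lrs_up = np.linspace(lr_max / div_factor, lr_max, n_up)
    moms_up = np.linspace(moms[0], moms[1], n_up)
    # phase 2: cosine annealing of the lr to 0, momentum anneals back up
    t = np.linspace(0, np.pi, n_down)
    lrs_down = lr_max * (1 + np.cos(t)) / 2
    moms_down = moms[1] + (moms[0] - moms[1]) * (1 - np.cos(t)) / 2
    return np.concatenate([lrs_up, lrs_down]), np.concatenate([moms_up, moms_down])

lrs, momentums = one_cycle_schedule(100)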
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
First let's find the optimum learning rate for our comparison by doing an LR range test.
learn.lr_find()
learn.recorder.plot()
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Here 5e-2 looks like a good value, a tenth of the minimum of the curve. That's going to be the highest learning rate in 1cycle so let's try a constant training at that value.
learn.fit(2, 5e-2)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
We can also see what happens when we train at a lower learning rate
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit(2, 5e-3)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Training with the 1cycle policy

Now to do the same thing with 1cycle, we use [`fit_one_cycle`](/train.htmlfit_one_cycle).
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit_one_cycle(2, 5e-2)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
This gets the best of both worlds, and we can see how we get a far better accuracy and a far lower loss in the same number of epochs. It's possible to reach the same results by training at constant learning rates that we progressively diminish, but it takes far longer. Here is the schedule of the learning rates (left) and momentum (right) that the new 1cycle policy uses.
learn.recorder.plot_lr(show_moms=True)

show_doc(OneCycleScheduler, doc_string=False)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Create a [`Callback`](/callback.htmlCallback) that handles the hyperparameter settings following the 1cycle policy for `learn`. `lr_max` should be picked with the [`lr_find`](/train.htmllr_find) test. In phase 1, the learning rate goes from `lr_max/div_factor` to `lr_max` linearly while the momentum goes from `moms[0]` to `moms[1]` linearly. In phase 2, the learning rate follows a cosine annealing from `lr_max` to 0, while the momentum goes from `moms[1]` to `moms[0]` with the same annealing.
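Assuming the standard fastai v1 keyword arguments, the momentum range and divide factor described above can be passed directly to `fit_one_cycle`; the exact values below are placeholders, with `max_lr` as found with `lr_find` earlier.

# Hypothetical example values for the 1cycle hyperparameters.
learn.fit_one_cycle(2, max_lr=5e-2, div_factor=25.0, moms=(0.95, 0.85))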
show_doc(OneCycleScheduler.steps, doc_string=False)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Build the [`Stepper`](/callback.htmlStepper) for the [`Callback`](/callback.htmlCallback) according to `steps_cfg`.
show_doc(OneCycleScheduler.on_train_begin, doc_string=False)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Initiate the parameters of a training for `n_epochs`.
show_doc(OneCycleScheduler.on_batch_end, doc_string=False)
_____no_output_____
Apache-2.0
docs_src/callbacks.one_cycle.ipynb
fmgonzales/fastai
Machine learning with Python

Michael Gfeller, Computasdag 3.2.2018

![Computas](img/logo_blue_small.jpg)

----

_(Notebook based on https://www.kaggle.com/futurist/pima-data-visualisation-and-machine-learning, [Apache 2.0 license](http://www.apache.org/licenses/LICENSE-2.0))_

Define and understand the task
from IPython.display import YouTubeVideo
YouTubeVideo("pN4HqWRybwk")
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Insight into, and prediction of, whether a woman from the Pima people will develop diabetes within 5 years.

Load libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

%matplotlib inline
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Load and explore the data
pima = pd.read_csv("diabetes.csv") # pandas.core.frame.DataFrame pima.head(4) pima.shape pima.info() pima.describe() pima.groupby("Outcome").size()
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Visualize the data

Histogram
pima.hist(figsize=(10,10))
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Boxplot
pima.plot(kind='box', subplots=True, layout=(3,3), sharex=False, sharey=False, figsize=(8,8))

X_columns = pima.columns[0:len(pima.columns) - 1]
pima[X_columns].plot(kind='box', subplots=False, figsize=(20,8))
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Correlation between the variables
correlations = pima[pima.columns].corr()
sns.heatmap(correlations, annot=True)
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Select input variables (features, givens, independent variables)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2

X = pima.iloc[:,0:8]
Y = pima.iloc[:,8]

select_top_4 = SelectKBest(score_func=chi2, k=4)
fit = select_top_4.fit(X, Y)
features = fit.transform(X)

feature_cols = pima.columns[fit.get_support('indices')]
feature_cols
features[0:3]
pima.head(3)

X_features = pd.DataFrame(data=features, columns=feature_cols)
X_features.head(3)
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Prepare the data with standardization
from sklearn.preprocessing import StandardScaler  # One of several scalers.

X_features_scaled = StandardScaler().fit_transform(X_features)
X = pd.DataFrame(data=X_features_scaled, columns=X_features.columns)
X.head(3)
X.hist()
X.plot(kind='box', subplots=False, figsize=(20,8))
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Try out different models - binary classification
from sklearn.model_selection import train_test_split

random_seed = 22
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, random_state=random_seed, test_size=0.2)

from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB  # Gaussian Naive Bayes
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.svm import LinearSVC

models = []
models.append(("LR", LogisticRegression()))
models.append(("NB", GaussianNB()))
models.append(("KNN", KNeighborsClassifier()))
models.append(("DT", DecisionTreeClassifier()))
models.append(("SVM", SVC()))
models.append(("LSVM", LinearSVC()))

results = []
names = []
for name, model in models:
    kfold = KFold(n_splits=10, random_state=random_seed)
    cv_result = cross_val_score(model, X_train, Y_train, cv=kfold, scoring="accuracy")
    names.append(name)
    results.append(cv_result)

for i in range(len(names)):
    print("%-5s: %.2f%% +/- %.2f%%" % (names[i], results[i].mean()*100, results[i].std()*100))
LR : 77.69% +/- 5.23% NB : 76.05% +/- 5.94% KNN : 74.59% +/- 4.68% DT : 70.36% +/- 3.79% SVM : 77.69% +/- 5.15% LSVM : 77.85% +/- 5.24%
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Visualize the results
ax = sns.boxplot(data=results)
ax.set_xticklabels(names)
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Train and validate the best models

Logistic regression and (L)SVM gave the best results.
X_train.describe()
Y_train_df = pd.DataFrame(data=Y_train, columns=['Outcome'])
Y_train_df.groupby("Outcome").size()

X_test.describe()
Y_test_df = pd.DataFrame(data=Y_test, columns=['Outcome'])
Y_test_df.groupby("Outcome").size()
_____no_output_____
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Logistic regression
lr = LogisticRegression()
lr.fit(X_train, Y_train)
predictions = lr.predict(X_test)

from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix

print("%-5s: %.2f%%" % ("LR", accuracy_score(Y_test, predictions)*100))
LR : 71.43%
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Support Vector Classifier
svm = SVC()
svm.fit(X_train, Y_train)
predictions = svm.predict(X_test)

print("%-5s: %.2f%%" % ("SVM", accuracy_score(Y_test, predictions)*100))
print(classification_report(Y_test, predictions))

# https://en.wikipedia.org/wiki/Confusion_matrix
confusion = confusion_matrix(Y_test, predictions)
# print(confusion)
tn, fp, fn, tp = confusion.ravel()
print("True negatives: %4d" % tn)
print("True positives: %4d" % tp)
print("False negatives: %4d" % fn)
print("False positives: %4d" % fp)
print("Accuracy: %4.0f%%" % (100*(tp+tn)/(tn + fp + fn + tp)))
print("Precision: %4.0f%%" % (100*tp/(tp+fp)))
print("Recall: %4.0f%%" % (100*tp/(tp+fn)))
True negatives: 92 True positives: 21 False negatives: 33 False positives: 8 Accuracy: 73% Precision: 72% Recall: 39%
Apache-2.0
ml-presentation/cx-pima-diabetes.ipynb
mgfeller/tensorflow
Table of Contents
1  Load Data
2  Demo of Cleaning Functions
2.1  Columns
2.2  Outliers
2.3  Transformations
import datetime as dt
import sys
from pathlib import Path

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

print(sys.executable)
print(sys.version)
print(f"Pandas {pd.__version__}")
print(f"Seaborn {sns.__version__}")

sys.path.append(str(Path.cwd().parent / 'src' / 'codebook'))

%load_ext autoreload
%autoreload 2
%matplotlib inline
# %config InlineBackend.figure_format = 'svg'

plt.style.use('raph-base')

from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'

pd.set_option('precision', 2)
pd.set_option('display.max_columns', 30)
pd.set_option('display.expand_frame_repr', False)
pd.set_option('max_colwidth', 800)

import src.codebook.EDA as EDA
import src.codebook.clean as clean
_____no_output_____
MIT
demo/dev_clean.ipynb
rbuerki/codebook
Load Data
df = pd.read_csv("../data/realWorldTestData.csv", low_memory=False, nrows=1000, usecols=[2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 18] ) df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1000 entries, 0 to 999 Data columns (total 14 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 target_event 1000 non-null object 1 NUM_CONSEC_SERVICES 1000 non-null int64 2 SUM_INVOICE_AMOUNT_IN_SERVICE 1000 non-null float64 3 SUM_INVOICE_AMOUNT 1000 non-null float64 4 NUM_EVENTS 1000 non-null int64 5 FIRST_EVT 1000 non-null object 6 LAST_EVT 1000 non-null object 7 LAST_MILEAGE 1000 non-null float64 8 MEAN_MILEAGE_PER_MNTH 1000 non-null float64 9 AVG_DIFF_MNTH 1000 non-null int64 10 age_mnth 1000 non-null int64 11 KANTON_LICENCE_PLATE 991 non-null object 12 INSPECTION_INTERVAL_UID 1000 non-null object 13 CAR_BRAND_UID 1000 non-null object dtypes: float64(4), int64(4), object(6) memory usage: 109.5+ KB
MIT
demo/dev_clean.ipynb
rbuerki/codebook
Demo of Cleaning Functions

Columns
# Prettify the column names
df = clean.prettify_column_names(df)
# Check result
df.columns

# Delete columns
df_del = clean.delete_columns(df, cols_to_delete=["target_event", "first_evt"])
assert df_del.shape[1] == (df.shape[1] - 2)

# Downcast dtypes
df_lean = clean.downcast_dtypes(df)
# Check result
df_lean.dtypes
Original df size before downcasting: 0.46 MB New df size after downcasting:0.16 MB
MIT
demo/dev_clean.ipynb
rbuerki/codebook
A word of warning: downcasting the numerical dtypes this way can lead to problems with the power transforms that are demonstrated below. That's why we continue with the original frame here.

Outliers
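For reference, here is a minimal pandas sketch of the IQR rule that the `clean` helpers used below are based on; the factor `iqr_dist=2` matches the calls in the next cell, but the internals of `clean.*` may differ from this sketch.

import pandas as pd

def iqr_bounds(series: pd.Series, iqr_dist: float = 2.0):
    """Return (lower, upper) outlier thresholds using the IQR rule."""
    q1, q3 = series.quantile(0.25), series.quantile(0.75)
    iqr = q3 - q1
    return q1 - iqr_dist * iqr, q3 + iqr_dist * iqr

lower, upper = iqr_bounds(df["avg_diff_mnth"])
outlier_mask = (df["avg_diff_mnth"] < lower) | (df["avg_diff_mnth"] > upper)
print(outlier_mask.sum())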
# Count outliers using the IQR method, with a distance of X
clean.count_outliers_IQR_method(df, iqr_dist=2)

# Remove outliers in two selected columns
outlier_cols = ["avg_diff_mnth", "mean_mileage_per_mnth"]
df_outliers, deleted_idx = clean.remove_outliers_IQR_method(
    df, outlier_cols=outlier_cols, iqr_dist=2, return_idx_deleted=True
)

# Because we have set the `return_idx_deleted` param to true
# we have also received a list of the removed outliers' index values
print(deleted_idx)

# (As a sidenote) There is also a helper function that simply returns the outlier values
# with the lower, upper threshold for an iterable
outliers, lower_thx, upper_thx = clean.get_outlier_values_with_iqr_method(
    df["avg_diff_mnth"], iqr_dist=2
)
print(len(outliers))
print(f"Lower threshold: {lower_thx}")
print(f"Upper threshold: {upper_thx}")

# Winsorize outliers in two more columns
# (We use the IQR-method output from above to define the quantiles)
w_dict = {
    "sum_invoice_amount": (None, 0.035),
    "last_mileage": (0.0005, 0.014),
}
df_w = clean.winsorize_outliers(df, w_dict)

# Check results
print(list(zip(
    df_w[list(w_dict.keys())].min(),
    df_w[list(w_dict.keys())].max()
)))
[(0.0, 4889.35), (1296.0, 171053.0)]
MIT
demo/dev_clean.ipynb
rbuerki/codebook
Transformations
df_transform = df[["last_mileage", "sum_invoice_amount", "mean_mileage_per_mnth"]]. copy() EDA.plot_distr_histograms(df_transform) df_log = clean.transform_data(df_transform, method="log") EDA.plot_distr_histograms(df_log) df_log10 = clean.transform_data(df_transform, method="log10") EDA.plot_distr_histograms(df_log10) df_bc = clean.transform_data(df_transform, method="box_cox") EDA.plot_distr_histograms(df_bc) df_jy = clean.transform_data(df_transform, method="yeo_johnson") EDA.plot_distr_histograms(df_jy)
_____no_output_____
MIT
demo/dev_clean.ipynb
rbuerki/codebook
Getting Data

First, we want to grab some graphs and subject covariates from a web-accessible url. We've given this to you on google drive rather than having you set up aws s3 credentials in the interest of saving time. The original data is hosted at m2g.io

Below, you will be getting the following dataset:

| Property | Value |
|:--------:|:-----:|
| Dataset | SWU4 |
| N-Subjects | 454 |
| Scans-per-subjects | 2 |
| Atlases | Desikan, CPAC200 |
| Desikan Nodes | 70 |
| CPAC200 Nodes | 200 |

The covariates you have are: `SUBID, SESSION, AGE_AT_SCAN_1, SEX, RESTING_STATE_INSTRUCTION, TIME_OF_DAY, SEASON, SATIETY, LMP`. There are other columns in the `.csv` file (downloaded in the next step) but they are populated with a `` meaning that the value was not recorded.

There are several other atlases available - you can change which one you use.

Running the cell below will get you the data. **Please note, you only have to run these two cells once!!!**

Loading Graphs + Covariates

Run the following cells of code to load the graphs into your computer, as well as the covariates.
!pip install networkx==1.9 #networkx broke backwards compatibility with these graph files import numpy as np import networkx as nx import scipy as sp import matplotlib.pyplot as plt import os import csv import networkx.algorithms.centrality as nac from collections import OrderedDict # Initializing dataset names dataset_names = ('SWU4') basepath = 'data' # change which atlas you use, here! atlas = 'desikan' # 'desikan' # or 'CPAC200', or 'Talairach' dir_names = basepath + '/' + dataset_names + '/' + atlas #basepath = "/" #dir_names = basepath print(dir_names) fs = OrderedDict() fs[dataset_names] = [root + "/" + fl for root, dirs, files in os.walk(dir_names) for fl in files if fl.endswith(".gpickle")] ps = "data/SWU4/SWU4.csv" print("Datasets: " + ", ".join([fkey + " (" + str(len(fs[fkey])) + ")" for fkey in fs])) print("Total Subjects: %d" % (sum([len(fs[key]) for key in fs]))) def loadGraphs(filenames, verb=False): """ Given a list of files, returns a dictionary of graphs Required parameters: filenames: - List of filenames for graphs Optional parameters: verb: - Toggles verbose output statements """ # Initializes empty dictionary gstruct = OrderedDict() for idx, files in enumerate(filenames): if verb: print("Loading: " + files) # Adds graphs to dictionary with key being filename fname = os.path.basename(files) gstruct[fname] = nx.read_gpickle(files) return gstruct def constructGraphDict(names, fs, verb=False): """ Given a set of files and a directory to put things, loads graphs. Required parameters: names: - List of names of the datasets fs: - Dictionary of lists of files in each dataset Optional parameters: verb: - Toggles verbose output statements """ # Loads graphs into memory for all datasets graphs = OrderedDict() if verb: print("Loading Dataset: " + names) # The key for the dictionary of graphs is the dataset name graphs[names] = loadGraphs(fs[names], verb=verb) return graphs graphs = constructGraphDict(dataset_names, fs, verb=False) import csv # This gets age and sex, respecitvely. tmp = csv.reader(open(ps,newline='')) # this is the whole phenotype file pheno = OrderedDict() triple = [[t[0].strip(), t[2], int(t[3] == '2')] for t in tmp if t[3] != '#' and t[2] != '#'][1:] # female=1->0, male=2->1 for idx, trip in enumerate(triple): pheno[trip[0]] = trip[1:] ## replace with this k = sorted(list(graphs['SWU4'].keys())) k_id = list(key[6:11] for key in k) k_id = k_id[0::2] k_g1 = k[0::2] g1 = [] for xx in k_g1: g1.append(graphs['SWU4'][xx]) #Create vectors of labels age = list() sex = list() for key in k_id: sex.append(pheno[key][1]) age.append(pheno[key][0])
_____no_output_____
Apache-2.0
projects/graphexplorer/submissions/RonanDariusHamilton/graphexplore.ipynb
wrgr/intersession2018
ASSIGNMENT: (Code above used to get data in the correct format. Below is a simple example test string with kind of silly features)
#Combine features, separate training and test data X = [] for i in range(len(g1)): featvec = [] matrix = nx.to_numpy_matrix(g1[i], nodelist=sorted(g1[i].nodes())) #this is how you go to a matrix logmatrix = np.log10(np.sum(matrix,0) + 1) logmatrix = np.ravel(logmatrix) covariate1 = nx.degree_centrality(g1[i]) covariate1 = covariate1.values() covariate2 = nac.betweenness_centrality(g1[i]) covariate2 = covariate2.values() #dict covariate3 = nx.average_clustering(g1[i]) covariate3 = np.ravel(covariate3) #float covariate4 = nac.closeness_centrality(g1[i]) covariate4 = covariate4.values() #dict covariate5 = nac.eigenvector_centrality(g1[i]) covariate5 = covariate5.values() #dict for ii in logmatrix: featvec.append(ii) for iii in covariate1: featvec.append(iii) for iv in covariate2: featvec.append(iv) for v in covariate3: featvec.append(v) for vi in covariate4: featvec.append(vi) for vii in covariate5: featvec.append(vii) xsum = np.asarray(np.sum(matrix)) featvec.append(xsum) np.shape(featvec) X.append(featvec) X_train = X[0:100] Y_train = sex[0:100] X_test = X[100:200] Y_test = sex[100:200] from sklearn.ensemble import RandomForestClassifier accuracy = [] for ii in range(10): #performance will change over time clf = RandomForestClassifier(n_estimators=100) clf.fit(X_train, Y_train) acc = (clf.predict(X_test) == Y_test) #print(acc) accval = (float(np.sum(acc))/float(len(Y_test))) accuracy.append(accval) print('Accuracy:',accval) print('Overall Accuracy:',str(np.mean(accuracy))) # plot a graph import matplotlib.pyplot as plt %matplotlib inline # mean connectome matrix = np.zeros([70, 70]) n = 0 for i in range(len(g1)): matrix += nx.to_numpy_matrix(g1[i], nodelist=sorted(g1[i].nodes())) #this is how you go to a matrix n += 1 matrix /= n plt.imshow(np.log10(matrix+1)) plt.colorbar() plt.title('Mean Connectome') plt.show() # mean female connectome matrix = np.zeros([70, 70]) n = 0 for i in range(len(g1)): if sex[i] == 0: matrix += nx.to_numpy_matrix(g1[i], nodelist=sorted(g1[i].nodes())) #this is how you go to a matrix n += 1 matrix /= n mFC = nx.DiGraph(matrix) plt.imshow(np.log10(matrix+1)) plt.colorbar() plt.title('Mean Female Connectome') plt.show() # mean male connectome matrix = np.zeros([70, 70]) n = 0 for i in range(len(g1)): if sex[i] == 1: matrix += nx.to_numpy_matrix(g1[i], nodelist=sorted(g1[i].nodes())) #this is how you go to a matrix n += 1 matrix /= n mMC = nx.DiGraph(matrix) plt.imshow(np.log10(matrix+1)) plt.colorbar() plt.title('Mean Male Connectome') plt.show() # mean connectome difference diff = nx.algorithms.difference(mMC, mFC) matrix += nx.to_numpy_matrix(diff, nodelist=sorted(diff.nodes())) #this is how you go to a matrix plt.imshow(np.log10(matrix+1)) plt.colorbar() plt.title('Mean Connectome Difference') plt.show()
_____no_output_____
Apache-2.0
projects/graphexplorer/submissions/RonanDariusHamilton/graphexplore.ipynb
wrgr/intersession2018
Raumluftqualität 2.0 (Indoor Air Quality 2.0)

Time evolution of the CO_2 concentration in rooms

In a well-ventilated, empty room there will initially be exactly as much CO_2 as in the outside air. When people then enter the room and release CO_2, the CO_2 concentration will slowly increase. The value at which it eventually settles depends on the outdoor air volume flow with which the room is ventilated.

In a completely unventilated room, the CO_2 produced by the occupants accumulates more and more in the room air, with the same amount of CO_2 released per unit of time.

Example:

In a room with a floor area of $15 \rm m^2$ and a storey height of $2.5 \rm m$ there are 2 people, each exhaling $30\,{\frac{\ell}{h}}$ of CO_2. The CO_2 concentration of the outside air is 400 ppM. In the room, 1200 ppM of CO_2 is considered permissible.

Plot the time evolution of the CO_2 concentration in a diagram.

Given:

Room volume: $V_{\rm ra} = 15 {\rm m^2}\cdot 2.5 {\rm m} = 37.5 {\rm m^3}$

CO_2 production: $\dot V_{\rm sch} = 2\cdot 30 {\rm \dfrac{\ell}{h}} = 60\,000\, {\rm\dfrac{cm^3}{h}}$

This gives the rate of change $ \dot k = \cfrac{\dot V_{\rm sch}}{V_{\rm ra}} = {\rm\dfrac{60\,000\,cm^3}{37.5\, m^3\cdot h}} = {\rm 1600 \dfrac{ppM}{h}}$

For the CO_2 concentration in the room this yields:

\begin{align} k(t)&= 400 {\rm ppM} + 1600\,{\rm\dfrac{ppM}{h}}\, t,\end{align}

so the 1200 ppM limit is reached after $(1200 - 400)/1600\,{\rm h} = 0.5\,{\rm h} = 30\,{\rm min}$.

This result is shown in a diagram in the following lines:
import matplotlib.pyplot as plt
%config InlineBackend.figure_format = 'retina'

import pandas as pd
import numpy as np

lt = np.linspace(0, 120, 13)  # 10-minute steps

df = pd.DataFrame(
    {
        't': lt,
        'k': 400 + 1600*lt/60  # 60 min = 1 h
    }
)

display(df.T)

ax = df.plot(x='t', y='k', label='$k = k(t)$')
ax.axhline(1200, c='r')
ax.grid()
ax.set(
    xlim=(0,120), xlabel='Zeit $t$ in $min$',
    ylabel='CO_2-Konzentration $k$ in $\mathrm{ppM}$'
);
_____no_output_____
MIT
Notebooks/Notebook_2.ipynb
w-meiners/rlt-rlq
__Word Alignment Assignment__

Your task is to learn word alignments for the data provided with this Python Notebook. Start by running the 'train' function below and implementing the assertions which will fail. Then consider the following improvements to the baseline model:

* Is the TranslationModel parameterized efficiently?
* What form of PriorModel would help here? (Currently the PriorModel is uniform.)
* How could you use a Hidden Markov Model to model word alignment indices? (There's an implementation of a simple HMM below to help you start.)
* How could you initialize more complex models from simpler ones?
* How could you model words that are not aligned to anything?

Grades will be assigned as follows:

| Maximum AER on dev and test | Grade |
|----------|-------------|
| 0.5 - 0.6 | 1 |
| 0.4 - 0.5 | 2 |
| 0.35 - 0.4 | 3 |
| 0.3 - 0.35 | 4 |
| 0.25 - 0.3 | 5 |

You should save the notebook with the final scores for the 'dev' and 'test' test sets.
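This is not the official solution, but a minimal sketch of the standard IBM Model 1 style E-step that `infer_posteriors` below asks for: the posterior over source indices is the elementwise product of the prior and translation parameters, normalized over source positions for each target word.

import numpy as np

def model1_posteriors(P, T):
    """P[i, j] = prior p(a_j = i), T[i, j] = p(f_j | e_i).
    Returns posteriors[i, j] = p(a_j = i | e, f) and the sentence log-likelihood."""
    joint = P * T                      # p(a_j = i, f_j | e)
    marginals = joint.sum(axis=0)      # p(f_j | e), one value per target word
    posteriors = joint / marginals     # normalize over source indices i
    log_likelihood = np.sum(np.log(marginals))
    return posteriors, log_likelihood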
# This cell contains the generative models that you may want to use for word alignment. # Currently only the TranslationModel is at all functional. import numpy as np from collections import defaultdict class TranslationModel: "Models conditional distribution over trg words given a src word." def __init__(self, src_corpus, trg_corpus): self._trg_given_src_probs = defaultdict(lambda : defaultdict(lambda : 1.0)) self._src_trg_counts = defaultdict(lambda : defaultdict(lambda : 0.0)) def get_params(self): return self._trg_given_src_probs def get_conditional_prob(self, src_token, trg_token): "Return the conditional probability of trg_token given src_token." return self._trg_given_src_probs[src_token][trg_token] def get_parameters_for_sentence_pair(self, src_tokens, trg_tokens): "Returns matrix with t[i][j] = p(f_j|e_i)." return np.array([[self._trg_given_src_probs[src_token][trg_token] for trg_token in trg_tokens] for src_token in src_tokens]) def collect_statistics(self, src_tokens, trg_tokens, posterior_matrix): "Accumulate counts of translations from: posterior_matrix[j][i] = p(a_j=i|e, f)" assert posterior_matrix.shape == (len(src_tokens), len(trg_tokens)) assert False, "Implement collection of statistics here." def recompute_parameters(self): "Reestimate parameters and reset counters." self._trg_given_src_probs = defaultdict(lambda : defaultdict(lambda : 0.0)) assert False, "Implement reestimation of parameters from counters here." class PriorModel: "Models the prior probability of an alignment given only the sentence lengths and token indices." def __init__(self, src_corpus, trg_corpus): "Add counters and parameters here for more sophisticated models." self._distance_counts = {} self._distance_probs = {} def get_parameters_for_sentence_pair(self, src_length, trg_length): return np.ones((src_length, trg_length)) * 1.0 / src_length def get_prior_prob(self, src_index, trg_index, src_length, trg_length): "Returns a uniform prior probability." return 1.0 / src_length def collect_statistics(self, src_length, trg_length, posterior_matrix): "Extract the necessary statistics from this matrix if needed." pass def recompute_parameters(self): "Reestimate the parameters and reset counters." pass class TransitionModel: "Models the prior probability of an alignment conditioned on previous alignment." def __init__(self, src_corpus, trg_corpus): "Add counters and parameters here for more sophisticated models." pass def get_parameters_for_sentence_pair(self, src_length): "Retrieve the parameters for this sentence pair: A[k, i] = p(a_{j} = i|a_{j-1} = k)" pass def collect_statistics(self, src_length, bigram_posteriors): "Extract statistics from the bigram posterior[i][j]: p(a_{t-1} = i, a_{t} = j| e, f)" pass def recompute_parameters(self): "Recompute the transition matrix" pass # This cell contains the framework for training and evaluating a model using EM. from utils import read_parallel_corpus, extract_test_set_alignments, score_alignments, write_aligned_corpus def infer_posteriors(src_tokens, trg_tokens, prior_model, translation_model): "Compute the posterior probability p(a_j=i | f, e) for each target token f_j given e and f." # HINT: An HMM will require more complex statistics over the hidden alignments. P = prior_model.get_parameters_for_sentence_pair(len(src_tokens), len(trg_tokens)) T = translation_model.get_parameters_for_sentence_pair(src_tokens, trg_tokens) # t[i][j] = P(f_j|e_i) assert False, "Compute the posterior distribution over src indices for each trg word." 
# log_likelihood = np.sum(np.log(marginals)) return posteriors, log_likelihood def collect_expected_statistics(src_corpus, trg_corpus, prior_model, translation_model): "E-step: infer posterior distribution over each sentence pair and collect statistics." corpus_log_likelihood = 0.0 for src_tokens, trg_tokens in zip(src_corpus, trg_corpus): # Infer posterior posteriors, log_likelihood = infer_posteriors(src_tokens, trg_tokens, prior_model, translation_model) # Collect statistics in each model. prior_model.collect_statistics(src_tokens, trg_tokens, posteriors) translation_model.collect_statistics(src_tokens, trg_tokens, posteriors) # Update log prob corpus_log_likelihood += log_likelihood return corpus_log_likelihood def estimate_models(src_corpus, trg_corpus, prior_model, translation_model, num_iterations): "Estimate models iteratively using EM." for iteration in range(num_iterations): # E-step corpus_log_likelihood = collect_expected_statistics(src_corpus, trg_corpus, prior_model, translation_model) # M-step prior_model.recompute_parameters() translation_model.recompute_parameters() if iteration > 0: print("corpus log likelihood: %1.3f" % corpus_log_likelihood) return prior_model, translation_model def get_alignments_from_posterior(posteriors): "Returns the MAP alignment for each target word given the posteriors." # HINT: If you implement an HMM, you may want to implement a better algorithm here. alignments = {} for trg_index, src_index in enumerate(np.argmax(posteriors, axis=0)): if trg_index not in alignments: alignments[trg_index] = {} alignments[trg_index][src_index] = '*' return alignments def align_corpus(src_corpus, trg_corpus, prior_model, translation_model): "Align each sentence pair in the corpus in turn." aligned_corpus = [] for src_tokens, trg_tokens in zip(src_corpus, trg_corpus): posteriors, _ = infer_posteriors(src_tokens, trg_tokens, prior_model, translation_model) alignments = get_alignments_from_posterior(posteriors) aligned_corpus.append((src_tokens, trg_tokens, alignments)) return aligned_corpus def initialize_models(src_corpus, trg_corpus): prior_model = PriorModel(src_corpus, trg_corpus) translation_model = TranslationModel(src_corpus, trg_corpus) return prior_model, translation_model def normalize(src_corpus, trg_corpus): assert False, "Apply some normalization here to reduce the numbers of parameters." return normalized_src, normalized_trg def train(num_iterations): src_corpus, trg_corpus, _ = read_parallel_corpus('en-cs.all') src_corpus, trg_corpus = normalize(src_corpus, trg_corpus) prior_model, translation_model = initialize_models(src_corpus, trg_corpus) prior_model, translation_model = estimate_models(src_corpus, trg_corpus, prior_model, translation_model, num_iterations) aligned_corpus = align_corpus(src_corpus, trg_corpus, prior_model, translation_model) return aligned_corpus, extract_test_set_alignments(aligned_corpus) def evaluate(candidate_alignments): src_dev, trg_dev, wa_dev = read_parallel_corpus('en-cs-wa.dev', has_alignments=True) src_test, trg_test, wa_test = read_parallel_corpus('en-cs-wa.test', has_alignments=True) print('recall %1.3f; precision %1.3f; aer %1.3f' % score_alignments(wa_dev, candidate_alignments['dev'])) print('recall %1.3f; precision %1.3f; aer %1.3f' % score_alignments(wa_test, candidate_alignments['test'])) aligned_corpus, test_alignments = train(5) evaluate(test_alignments) # To visualize aligned corpus: # 1. call write_aligned_corpus(aligned_corpus, 'out') # 2. 
run python corpus_browser.py en-cs-wa.out (in working directory) # Discrete HMM with scaling. You may want to use this if you decide to implement an HMM. # The parameters for this HMM will still need to be provided by the models above. def forward(pi, A, O): S, T = O.shape alpha = np.zeros((S, T)) scaling_factors = np.zeros(T) # base case alpha[:, 0] = pi * O[:, 0] scaling_factors[0] = np.sum(alpha[:, 0]) alpha[:, 0] /= scaling_factors[0] # recursive case for t in range(1, T): alpha[:, t] = np.dot(alpha[:, t-1], A[:, :]) * O[:, t] # Normalize at each step to prevent underflow. scaling_factors[t] = np.sum(alpha[:, t]) alpha[:, t] /= scaling_factors[t] return (alpha, scaling_factors) def backward(pi, A, O, forward_scaling_factors): S, T = O.shape beta = np.zeros((S, T)) # base case beta[:, T-1] = 1 / forward_scaling_factors[T-1] # recursive case for t in range(T-2, -1, -1): beta[:, t] = np.sum(beta[:, t+1] * A[:, :] * O[:, t+1], 1) / forward_scaling_factors[t] return beta def forward_backward(pi, A, O): alpha, forward_scaling_factors = forward(pi, A, O) beta = backward(pi, A, O, forward_scaling_factors) return alpha, beta, np.sum(np.log(forward_scaling_factors))
_____no_output_____
MIT
week09_mt/homework/word_alignment_assignment.ipynb
Holemar/nlp_course
Data site: http://quotes.money.163.com/stock

Download the historical trade data from http://quotes.money.163.com/cjmx/2019/20191120/1300127.xls to obtain a CSV file. Its structure is:

成交时间 (time), 成交价 (price), 价格变动 (price change), 成交量(手) (volume, lots), 成交额(元) (amount, yuan), 性质 (side)
09:30:06,17.2,-0.05,50,86011,卖盘 (sell)
09:30:09,17.21,0.01,887,1525626,买盘 (buy)

There is roughly one record every 3 seconds.

Library
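The cells below import `sliding_windows`, `train`, `predict` and `LSTM` from a local `stock` module that is not shown in this notebook. As an assumption about what the window preparation does, here is a minimal sketch of a typical implementation that matches the call `sliding_windows(training_data, seq_length, num_classes)` used later.

import numpy as np

def sliding_windows(data, seq_length, num_classes):
    """Hypothetical sketch: turn a 1-D price series into (X, y) pairs where
    X is a window of seq_length values and y is the next num_classes values."""
    x, y = [], []
    for i in range(len(data) - seq_length - num_classes + 1):
        x.append(data[i:i + seq_length])
        y.append(data[i + seq_length:i + seq_length + num_classes])
    return np.array(x), np.array(y)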
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd

import torch
import torch.nn as nn
from torch.autograd import Variable

from sklearn.preprocessing import MinMaxScaler
from datetime import datetime
_____no_output_____
MIT
lstm-ashare-live.ipynb
sillyemperor/mypynotebook
Data Plot
data = pd.read_csv('data/ashare/30012720191120.csv', usecols = [0, 1, 3], converters={ 0:lambda x:datetime.strptime(x, '%H:%M:%S') }) # print(data) training_set = data.iloc[:,1].values timeline = data.iloc[:,0].values plt.plot(timeline, training_set, ) plt.show() def local_price(file): data = pd.read_csv(file, usecols=[0, 1, 3], converters={ 0: lambda x: datetime.strptime(x, '%H:%M:%S') }) for i in data.iloc[:,1].values: yield i from stock import train, sliding_windows, predict, LSTM import time loader = local_price('data/ashare/30012720191120.csv') num_epochs = 100 num_classes = 3 seq_length = 12 input_size = 1 hidden_size = 2 num_layers = 1 lstm = LSTM(num_classes, input_size, hidden_size, num_layers, seq_length) sc = MinMaxScaler() bucket = [] data = [] predict_y = None aloss_list = [] loss_list = [] x_list = [] y_list = [] for price in loader: bucket.append([float(price)]) # print(bucket, data) if len(bucket) >= seq_length: data.append(bucket) if len(data) > 1: if predict_y is not None: x = torch.tensor(predict_y) y = torch.tensor(bucket[:num_classes]).view(-1) loss = y - x aloss = loss.sum()/num_classes loss_list += list(loss.view(-1).numpy()) x_list += list(x.view(-1).numpy()) y_list += list(y.view(-1).numpy()) aloss_list.append(aloss) # print(x) # print(y) # print(aloss, elapsed) # print() t1 = time.time() training_data = torch.Tensor(data) training_data = sc.fit_transform(training_data.view(-1, 1)) # training_data = torch.Tensor([training_data]) x, y = sliding_windows(training_data, seq_length, num_classes) trainX = torch.Tensor(np.array(x)) trainY = torch.Tensor(np.array(y)) loss = train(lstm, num_epochs, num_classes, trainX, trainY) elapsed = time.time() - t1 predict_data = data[-1] predict_y = predict(lstm, predict_data) # print(predict_y) bucket = bucket[num_classes:] plt.subplot(2,1,1) plt.plot(loss_list, ) plt.subplot(2,1,2) plt.plot(x_list, ) plt.plot(y_list, ) plt.show()
_____no_output_____
MIT
lstm-ashare-live.ipynb
sillyemperor/mypynotebook
1. Area plots are stacked by default.

Ans: True.

2. The following code uses the artist layer to create a stacked area plot of the data in the pandas dataframe, area_df.
ax = series_df.plot(kind='area', figsize=(20, 10))
ax.title('Plot Title')
ax.ylabel('Vertical Axis Label')
ax.xlabel('Horizontal Axis Label')
_____no_output_____
MIT
Coursera/Data Visualization with Python-IBM/Week-2/Quiz/Basic-Visualization-Tools.ipynb
manipiradi/Online-Courses-Learning
Ans: False.

3. The following code will create an unstacked area plot of the data in the pandas dataframe, area_df, with a transparency value of 0.35?
import matplotlib.pyplot as plt

transparency = 0.35
area_df.plot(kind='area', alpha=transparency, figsize=(20, 10))
plt.title('Plot Title')
plt.ylabel('Vertical Axis Label')
plt.xlabel('Horizontal Axis Label')
plt.show()
_____no_output_____
MIT
Coursera/Data Visualization with Python-IBM/Week-2/Quiz/Basic-Visualization-Tools.ipynb
manipiradi/Online-Courses-Learning
Ans: False.

4. The following code will create a histogram of a pandas series, series_data, and align the bin edges with the horizontal tick marks.
count, bin_edges = np.histogram(series_data)
series_data.plot(kind='hist', xticks=bin_edges)
_____no_output_____
MIT
Coursera/Data Visualization with Python-IBM/Week-2/Quiz/Basic-Visualization-Tools.ipynb
manipiradi/Online-Courses-Learning
Using fuzzy wuzzy
# identieke notes die meerdere keren voorkomen from collections import Counter c = Counter() num_lines = 0 for note in notes: c[note] += 1 repeated = [] ns = [] for k, v in c.most_common(): if v > 1: num_lines += v print(repr(k), v) repeated.append(k) else: ns.append(k) print('total number of lines:', num_lines) %%time # Calculate overlap between repeated notes and lines from fuzzywuzzy import fuzz line_data = pd.DataFrame() for i, n in enumerate(repeated): line_data[i] = [fuzz.partial_ratio(line, n) for line in lines] line_data for i in line_data.columns: print(i) print(line_data[line_data[i].sort_values(ascending=False) > 90].shape) def get_lines(column, t): #print(list(column[column < t].index)) return list(column[column > t].index) t = 90 to_remove = line_data.apply(lambda c: get_lines(c, t)) print(to_remove) to_remove = list(set([item for sublist in to_remove for item in sublist])) print(len(to_remove)) print(len(lines)) ls = lines lines = [] for i, line in enumerate(ls): if i not in to_remove: lines.append(line) print(len(lines)) %%time # Calculate overlap between remaining notes and lines from fuzzywuzzy import fuzz line_data = pd.DataFrame() for i, n in enumerate(notes): line_data[i] = [fuzz.partial_ratio(line, n) for line in lines] %%time # Calculate overlap between remaining notes and lines from py_stringmatching.similarity_measure.partial_ratio import PartialRatio pr = PartialRatio() line_data = pd.DataFrame() for i, n in enumerate(notes): line_data[i] = [pr.get_raw_score(line, n) for line in lines] %%time # Calculate edit distance between remaining notes and lines import edlib # get initial edit distances per line line_data = pd.DataFrame() for i, n in enumerate(notes): line_data[i] = [edlib.align(n, line)['editDistance'] for line in lines] def extend_lines(indexes, length, num=3): #print('start') #print(indexes) #print('num:', num) result = [] for i in indexes: #print('i', i) result.append(i) # lines before i start = max(0, i-num) to_add = list(range(start, i)) for n in to_add: result.append(n) #print(to_add) # lines after i end = min(length, i+num+1) #print('end', end) to_add = list(range(i, end)) for n in to_add: result.append(n) #print(to_add) #print('---') result = list(set(result)) result.sort() return(result) extend_lines([5], 6) extend_lines([1, 5], 10, 1) list(range(4,3)) l = list(line_data[0].sort_values().index[:100]) r = extend_lines(l, len(lines)) print(len(l), len(r)) %%time ns = [{'note': n, 'lines': extend_lines(list(line_data[i].sort_values().index[:100]), len(lines)), 'selected': [], 'scores': []} for i, n in enumerate(notes)] for n in ns: print(len(n['lines'])) %%time for n in ns: ls = n['lines'] ls.sort() note = n['note'] for idx in ls: r = fuzz.partial_ratio(lines[idx], note) n['scores'].append(r) if r > 90: n['selected'].append(idx) added = 0 for i, note in enumerate(ns): #print(note['selected']) #print(note['lines']) if note['selected'] != []: n = note['selected'][-1] + 1 add = range(n, n+3) for a in add: if a in note['lines']: idx = note['lines'].index(a) #print(idx) #print(note['scores'][idx], lines[add]) #print(note['scores'][idx+1], lines[add+1]) #print(note['scores'][idx+2], lines[add+2]) if note['scores'][idx] > 80: note['selected'].append(a) print(i) added += 1 print(added) ns[173] lines[2456] for i, n in enumerate(ns): print(i) print(n['note']) print('---') print(n['selected']) for idx in n['selected']: print(lines[idx]) print('-'*80) plt.figure(figsize=(15,10)) plt.plot(ns[11]['lines'][:100], ns[11]['scores'][:100], 100) note = ns[11] 
#print(note['selected']) #print(note['lines']) add = note['selected'][-1] + 1 if add in note['lines']: idx = note['lines'].index(add) #print(idx) #print(note['scores'][idx], lines[add]) #print(note['scores'][idx+1], lines[add+1]) #print(note['scores'][idx+2], lines[add+2]) if note['scores'][idx] > 80: note['selected'].append(add) print(note['selected']) fuzz.partial_ratio('hooren van Aiols naam (Fransche tekst vs. 7190 vlgg.). ', 'hooren van Aiols naam (Fransche tekst vs. 7190 vlgg.).') fuzz.partial_ratio('side. - 179 Hs. IIII.', 'side. — 179 GIs. IIII.')
_____no_output_____
Apache-2.0
notebooks/dbnl_remove_notes.ipynb
KBNLresearch/ochre
Using fuzzy wuzzy on all the notes at the same time
notes_text = ''.join(notes) print(notes_text) %%time from fuzzywuzzy import fuzz result = pd.DataFrame() result['pratio'] = [fuzz.partial_ratio(l, notes_text) for l in lines] result.head() result.hist(bins=100) n = 42 print(lines[n]) print(result.loc[n]) print(notes[0]) fuzz.partial_ratio(lines[42], notes[0]) fuzz.partial_ratio(lines[42], notes_text) from fuzzywuzzy import fuzz, StringMatcher import difflib #As long as python-Levenshtein is available, that will be used for the following: print(fuzz.partial_ratio(lines[42], notes[0])) print(fuzz.partial_ratio(lines[42], notes_text)) #Switch to difflib: fuzz.SequenceMatcher = difflib.SequenceMatcher print(fuzz.partial_ratio(lines[42], notes[0])) print(fuzz.partial_ratio(lines[42], notes_text)) for idx in list(result[result['pratio'] > 80].index): print(idx, lines[idx])
_____no_output_____
Apache-2.0
notebooks/dbnl_remove_notes.ipynb
KBNLresearch/ochre
Putting it all together
%%time from nlppln.utils import create_dirs, out_file_name in_file = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr-with-title-page/_aio001jver01_01.txt' # remove selected lines with open(in_file) as f: text = f.read() for n in ns: for idx in n['selected']: #print(idx) l = lines[idx] text = text.replace(l, '') out_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr' create_dirs(out_dir) out = out_file_name(out_dir, in_file) print(out) with open(out, 'w') as f: f.write(text) print(to_remove) lines[43] %%time import edlib import pandas as pd from collections import Counter from fuzzywuzzy import fuzz def split_notes(notes): c = Counter() num_lines = 0 for note in notes: c[note] += 1 repeated = [] ns = [] for k, v in c.most_common(): if v > 1: num_lines += v #print(repr(k), v) repeated.append(k) else: ns.append(k) #print('total number of lines:', num_lines) return ns, repeated def get_lines(column, threshold): return list(column[column > threshold].index) def extend_lines(indexes, length, num=3): #print('start') #print(indexes) #print('num:', num) result = [] for i in indexes: #print('i', i) result.append(i) # lines before i start = max(0, i-num) to_add = list(range(start, i)) for n in to_add: result.append(n) #print(to_add) # lines after i end = min(length, i+num+1) #print('end', end) to_add = list(range(i, end)) for n in to_add: result.append(n) #print(to_add) #print('---') result = list(set(result)) result.sort() return(result) def remove_notes(ocr_file, notes_file, out_dir, topn=100): with open(ocr_file) as f: ls = f.readlines() with open(notes_file) as f: notes = f.readlines() # remove empty lines lines = [] for line in ls: if line.strip() != '': lines.append(line) print('The text contains {} lines.'.format(len(lines))) # get repeated notes ns, repeated = split_notes(notes) print('Processing repeated notes ({})'.format(len(repeated))) # Calculate overlap between repeated notes and lines line_data = pd.DataFrame() for i, n in enumerate(repeated): line_data[i] = [fuzz.partial_ratio(line, n) for line in lines] # get the line numbers of the repeated notes that should be removed t = 90 to_remove_repeated = line_data.apply(lambda c: get_lines(c, t)) to_remove_repeated = list(set([item for sublist in to_remove_repeated for item in sublist])) print('Processing other notes ({})'.format(len(ns))) # get initial edit distances per line # uses edlib for speed print('Calculating distances with edlib') line_data = pd.DataFrame() for i, n in enumerate(notes): line_data[i] = [edlib.align(n, line)['editDistance'] for line in lines] # select the topn lines with smallest edit distances for further processing ns = [{'note': n, 'lines': extend_lines(list(line_data[i].sort_values().index[:topn]), len(lines)), 'selected': [], 'scores': []} for i, n in enumerate(notes)] num_lines = 0 for i, n in enumerate(ns): num_lines += len(n['lines']) #print(n['note']) #print('-') #r = list(line_data[i].sort_values().index[:topn]) #for j in r: # print(j, lines[j]) #print('-') #print(r) #print(n['lines']) #print('---') print('Calculating distances with fuzzywuzzy ({} lines)'.format(num_lines)) # use partial_ratio to select the lines that should be deleted for n in ns: ls = n['lines'] ls.sort() note = n['note'] for idx in ls: r = fuzz.partial_ratio(lines[idx], note) n['scores'].append(r) if r > 90: n['selected'].append(idx) print('Adding missing lines') # add missing (usually short) lines at the end of selected pieces of text added = 0 for i, note in enumerate(ns): #print(note['selected']) #print(note['lines']) if note['selected'] != []: n = 
note['selected'][-1] + 1 add = range(n, n+3) for a in add: if a in note['lines']: idx = note['lines'].index(a) #print(idx) #print(note['scores'][idx], lines[add]) #print(note['scores'][idx+1], lines[add+1]) #print(note['scores'][idx+2], lines[add+2]) if note['scores'][idx] > 80: note['selected'].append(a) #print(i) added += 1 if added > 0: print('{} lines added to be removed.'.format(added)) print('Removing notes') removed = [] for idx in to_remove_repeated: removed.append(idx) for n in ns: for idx in n['selected']: removed.append(idx) # get the ocr text with open(ocr_file) as f: text = f.read() removed = list(set(removed)) for idx in removed: l = lines[idx] text = text.replace(l, '') # save result create_dirs(out_dir) out = out_file_name(out_dir, ocr_file) #print(out) with open(out, 'w') as f: f.write(text) return removed r = remove_notes('/home/jvdzwaan/data/dbnl_ocr/raw/ocr-with-title-page/_aio001jver01_01.txt', '/home/jvdzwaan/data/dbnl_ocr/raw/notes/_aio001jver01_01.txt', '/home/jvdzwaan/data/dbnl_ocr/raw/ocr', 5) #print(r) import random import os from nlppln.utils import get_files, out_file_name in_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr-without-title-page/' notes_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/notes/' out_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr' in_files = get_files(notes_dir) random.shuffle(in_files) in_files = [os.path.basename(f) for f in in_files[:15]] in_files import os from tqdm import tqdm_notebook as tqdm from nlppln.utils import get_files, out_file_name in_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr-without-title-page/' notes_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/notes/' out_dir = '/home/jvdzwaan/data/dbnl_ocr/raw/ocr' in_files = ['rade001gera01_01.txt', '_zev001198901_01.txt', '_tir001196201_01.txt', 'looy001wond03_01.txt', 'potg001jczi10_01.txt', 'berg050jaro01_01.txt', '_tsj002195001_01.txt', '_jaa006199901_01.txt', '_taa006189101_01.txt', '_sep001197201_01.txt', 'oltm003memo05_01.txt', '_noo001189201_01.txt', 'koni057heil01_01.txt', '_vla016197401_01.txt', '_bij005195501_01.txt'] in_files = [os.path.join(in_dir, f) for f in in_files] for in_file in tqdm(in_files): # needs to be prcessed? out = out_file_name(out_dir, in_file) if not os.path.isfile(out): # is there a notes file? notes_file = os.path.join(notes_dir, os.path.basename(in_file)) if os.path.isfile(notes_file): print('processing', in_file) with open('lines_removed_100.txt', 'a') as f: removed = remove_notes(in_file, notes_file, out_dir, 100) f.write(os.path.basename(out)) f.write('\t') removed = [str(r) for r in removed] f.write(','.join(removed)) f.write('\n')
_____no_output_____
Apache-2.0
notebooks/dbnl_remove_notes.ipynb
KBNLresearch/ochre
Replacing the table with a function: use a simple neural network to replace V
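As a rough, hedged illustration of the idea (not part of the original notebook; the layer sizes, the 7x7 padded grid and the helper name `v_value` are assumptions), a tabular V(s) can be replaced by a small Keras network that maps a one-hot encoded cell to a single value:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

GRID_CELLS = 7 * 7  # 5x5 grid plus a border, as in the GridWorld below (assumed)

# Small value network: one-hot position in, scalar V(s) out
V = Sequential()
V.add(Dense(64, input_shape=(GRID_CELLS,), activation="relu"))
V.add(Dense(1, activation="tanh"))  # rewards are +/-1, so tanh keeps V in [-1, 1]
V.compile(loss="mse", optimizer="sgd")

def v_value(i, j, size=7):
    """Predicted value of cell (i, j) from a one-hot board encoding."""
    board = np.zeros((size, size))
    board[i, j] = 1
    return V.predict(np.array([board.ravel()]))[0, 0]
```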
import numpy as np from keras.models import Sequential from keras.layers import Dense from gridworld import GridWorld blocks={(1,1), (3,3)} gw = GridWorld(size=(5,5), start=(0,0), exit=(4,4), blocks=blocks) from ipywidgets import widgets as W from IPython.display import display gw_html = W.HTML(value=gw._repr_html_()) gw.restart() def gw_move(i): def func(b=None): gw_html.value = gw.move(i)._repr_html_() return func def gw_restart(b=None): gw.restart() gw_html.value = gw._repr_html_() buttons = [] for i, bn in enumerate(['arrow-right', 'arrow-up', 'arrow-left', 'arrow-down', 'refresh']): b = W.Button(icon='fa-'+bn, layout=W.Layout(width='5em')) b.on_click(gw_move(i) if i<4 else gw_restart) buttons.append(b) W.HBox([gw_html, W.VBox(buttons)])
_____no_output_____
MIT
RL/Grid World-Function.ipynb
PinmanHuang/CrashCourseML
Using Q-learning, with Q defined by a simple neural network
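For reference, the update implemented in the code below follows the standard Q-learning target

$$Q(s, a) \leftarrow r + \gamma \max_{a'} Q(s', a'),$$

where $r$ is the immediate reward and $\gamma$ is the discount factor (0.95 below); when a move produces a non-zero reward the episode ends and the target is simply $r$.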
Q = Sequential() Q.add(Dense(128, input_shape=((gw.size[0]+2)*(gw.size[1]+2)+4,), activation="relu" )) # 輸入是 i, j 座標和 a Q.add(Dense(1, activation="tanh")) # 因為輸出是 +-1 Q.compile(loss='mse',optimizer='sgd', metrics=['accuracy']) avectors = [[0]* 4 for i in range(4)] for i in range(4): avectors[i][i]=1 def Qfunc(i,j): ij = np.zeros( (gw.size[0]+2, gw.size[1]+2)) ij[i, j]=1 ij = list(ij.ravel()) return np.array([Q.predict(np.array([ij+avectors[a]]))[0,0] for a in range(4)]) def Qupdate(i, j, a, v): ij = np.zeros( (gw.size[0]+2, gw.size[1]+2)) ij[i, j]=1 ij = list(ij.ravel()) return Q.train_on_batch(np.array([ij+avectors[a]]), np.array([[v]])) Qfunc(1,3) from random import randint, random, shuffle, choice from time import sleep gw_html = W.HTML() display(gw_html) def update_VA(gw, Qfunc): if gw.A is None: gw.A = np.full( (gw.size[0]+2, gw.size[1]+2), -1) if gw.V is None: gw.V = np.full( (gw.size[0]+2, gw.size[1]+2), 2.) for i in range(gw.size[0]): for j in range(gw.size[1]): Qij = Qfunc(i,j) if Qij.min() == 2: gw.A[i,j]=-1 gw.V[i,j]= 2 else: gw.A[i,j] = np.argmax(np.where( Qij > 1, -2, Qij)) gw.V[i,j] = np.max(np.where( Qij > 1, -2, Qij)) gw_html.value = gw._repr_html_() def Qlearn(g): actions = [0,1,2,3] while True: gw_html.value = g._repr_html_() if g.is_end(): break sleep(0.02) src_pos = g.pos Q_src = Qfunc(*src_pos) if random()< ϵ: a = choice(actions) else: a = np.argmax(Q_src) g.move(a) dst_pos = g.pos r = g.score if r: v=r else: v=r+γ*Qfunc(*dst_pos).max() Qupdate(*src_pos, a, v) α = 0.1 γ = 0.95 ϵ = 5. for i in range(500): gw.restart() gw_html.value = gw._repr_html_() Qlearn(gw) update_VA(gw, Qfunc) ϵ *= 0.99 ## 另外一種網路 Q = Sequential() Q.add(Dense(128, input_shape=((gw.size[0]+2)*(gw.size[1]+2),), activation="relu" )) # 輸入是 i, j Q.add(Dense(4, activation="tanh")) # 因為輸出是 +-1 Q.compile(loss='mse',optimizer='sgd', metrics=['accuracy']) # 輸出 a def Qfunc(i,j): ij = np.zeros( (gw.size[0]+2, gw.size[1]+2)) ij[i, j]=1 ij = list(ij.ravel()) return Q.predict(np.array([ij]))[0] def Qupdate(i, j, a, v): ij = np.zeros( (gw.size[0]+2, gw.size[1]+2)) ij[i, j]=1 ij = list(ij.ravel()) Y = Q.predict(np.array([ij])) Y[0][a] = v return Q.train_on_batch(np.array([ij]), Y) Qfunc(1,3)
_____no_output_____
MIT
RL/Grid World-Function.ipynb
PinmanHuang/CrashCourseML
The part-of-speech tagging task (Part-Of-Speech Tagger, POS). We will solve the task of determining the parts of speech of words (POS tagging).
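As a quick illustration of what a POS tagger produces (an aside, not part of the graded assignment; it assumes the `punkt` and `averaged_perceptron_tagger` resources are downloaded):

```python
import nltk
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('universal_tagset')

tokens = nltk.word_tokenize("The cat sat on the mat")
print(nltk.pos_tag(tokens, tagset='universal'))
# e.g. [('The', 'DET'), ('cat', 'NOUN'), ('sat', 'VERB'), ('on', 'ADP'), ('the', 'DET'), ('mat', 'NOUN')]
```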
import nltk import pandas as pd import numpy as np from nltk.corpus import brown import matplotlib.pyplot as plt
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
To help you: http://www.nltk.org/book/ Let's load the Brown corpus
nltk.download('brown')
[nltk_data] Downloading package brown to /root/nltk_data... [nltk_data] Unzipping corpora/brown.zip.
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
There is more than one tagging scheme, so be careful when you predict tags for the words in a text and compute the quality of the prediction; otherwise you may get an unfairly low score for your solution. For now we will use the universal tagging scheme, universal_tagset
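To see why the tagset matters (an illustrative aside), compare the original Brown tags with the universal ones for the same words:

```python
from nltk.corpus import brown

print(brown.tagged_words()[:5])                    # Brown tagset, e.g. ('The', 'AT')
print(brown.tagged_words(tagset='universal')[:5])  # universal tagset, e.g. ('The', 'DET')
```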
nltk.download('universal_tagset')
[nltk_data] Downloading package universal_tagset to /root/nltk_data... [nltk_data] Unzipping taggers/universal_tagset.zip.
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
We have an array of sentences consisting of (word, tag) pairs
brown_tagged_sents = brown.tagged_sents(tagset="universal") brown_tagged_sents
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
The first sentence
brown_tagged_sents[0]
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
All (word, tag) pairs
brown_tagged_words = brown.tagged_words(tagset='universal') brown_tagged_words
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Analyze the data you are working with. Use `nltk.FreqDist()` to count how frequently each tag and each word occurs in our corpus. By the frequency of an element we mean the number of times that element appears in the corpus.
# Convert the words to lowercase brown_tagged_words = list(map(lambda x: (x[0].lower(), x[1]), brown_tagged_words)) print('Number of sentences: ', len(brown_tagged_sents)) tags = [tag for (word, tag) in brown_tagged_words] # our tags words = [word for (word, tag) in brown_tagged_words] # our words tag_num = pd.Series(nltk.FreqDist(tags)).sort_values(ascending=False) # tag - count of this tag in the corpus word_num = pd.Series(nltk.FreqDist(words)).sort_values(ascending=False) # word - count of this word in the corpus tag_num plt.figure(figsize=(12, 5)) plt.bar(tag_num.index, tag_num.values) plt.title("Tag_frequency") plt.show() word_num[:5] plt.figure(figsize=(12, 5)) plt.bar(word_num.index[:10], word_num.values[:10]) plt.title("Word_frequency") plt.show()
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Question 1: * How many occurrences of the word `cat` are in the corpus? **(0.5 points)**
word_num["cat"]
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Question 2: * What is the most popular word with the most popular tag? **(0.5 points)**
# First select the words that carry the most popular tag, then pick the most popular word among them. lst = [word for (word, tag) in brown_tagged_words if tag == "NOUN"] popular = pd.Series(nltk.FreqDist(lst)).sort_values(ascending=False) print(popular) # time - the most popular word with the most popular tag "NOUN"
time 1597 man 1203 af 995 years 949 way 899 ... anti-communists 1 peace-treaty 1 malinovsky 1 eleventh-floor 1 boucle 1 Length: 30246, dtype: int64
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Split the data into training and test parts in a 9:1 ratio. **(0.5 points)**
brown_tagged_sents = brown.tagged_sents(tagset="universal") # Convert the words to lowercase my_brown_tagged_sents = [] for sent in brown_tagged_sents: my_brown_tagged_sents.append(list(map(lambda x: (x[0].lower(), x[1]), sent))) my_brown_tagged_sents = np.array(my_brown_tagged_sents) from sklearn.model_selection import train_test_split train_sents, test_sents = train_test_split(my_brown_tagged_sents, test_size=0.1, random_state=0) len(train_sents), len(test_sents)
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
DefaultTagger Question 3: * What accuracy would you get if you predicted every tag as the most popular tag in the training set (round to one decimal place)? **(0.5 points)** You can use DefaultTagger (its tag method predicts the parts of speech for a sentence).
from nltk.tag import DefaultTagger default_tagger = DefaultTagger("NOUN") true_pred = 0 num_pred = 0 for sent in test_sents: tags = np.array([tag for (word, tag) in sent]) words = np.array([word for (word, tag) in sent]) tagged_sent = default_tagger.tag(words) outputs = [tag for token, tag in tagged_sent] true_pred += sum(outputs == tags) num_pred += len(words) print("Accuracy:", true_pred / num_pred * 100, '%')
Accuracy: 23.47521651004238 %
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
If we had predicted every tag as the most popular tag in the training set: 15.86% - VERB LSTMTagger Data preparation Let's change the structure of the data
pos_data = [list(zip(*sent)) for sent in brown_tagged_sents] print(pos_data[0])
[('The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.'), ('DET', 'NOUN', 'NOUN', 'ADJ', 'NOUN', 'VERB', 'NOUN', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADJ', 'NOUN', 'NOUN', 'VERB', '.', 'DET', 'NOUN', '.', 'ADP', 'DET', 'NOUN', 'VERB', 'NOUN', '.')]
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Time to put PyTorch to work!
from torchtext.data import Field, BucketIterator import torchtext # our fields WORD = Field(lower=True) TAG = Field(unk_token=None) # all tag tokens are known to us, so no unk token # create the examples examples = [] for words, tags in pos_data: examples.append(torchtext.data.Example.fromlist([list(words), list(tags)], fields=[('words', WORD), ('tags', TAG)]))
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Here is one of our examples:
print(vars(examples[0]))
{'words': ['the', 'fulton', 'county', 'grand', 'jury', 'said', 'friday', 'an', 'investigation', 'of', "atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.'], 'tags': ['DET', 'NOUN', 'NOUN', 'ADJ', 'NOUN', 'VERB', 'NOUN', 'DET', 'NOUN', 'ADP', 'NOUN', 'ADJ', 'NOUN', 'NOUN', 'VERB', '.', 'DET', 'NOUN', '.', 'ADP', 'DET', 'NOUN', 'VERB', 'NOUN', '.']}
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Now we form our dataset
# put the examples into our dataset dataset = torchtext.data.Dataset(examples, fields=[('words', WORD), ('tags', TAG)]) train_data, valid_data, test_data = dataset.split(split_ratio=[0.8, 0.1, 0.1]) print(f"Number of training examples: {len(train_data.examples)}") print(f"Number of validation examples: {len(valid_data.examples)}") print(f"Number of testing examples: {len(test_data.examples)}")
Number of training examples: 45872 Number of validation examples: 5734 Number of testing examples: 5734
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Let's build the vocabularies. Choose the `min_freq` parameter yourself. When building the vocabulary, use only the **train** data **(0.5 points)**
WORD.build_vocab(train_data, min_freq=10) TAG.build_vocab(train_data) print(f"Unique tokens in source (ru) vocabulary: {len(WORD.vocab)}") print(f"Unique tokens in target (en) vocabulary: {len(TAG.vocab)}") print(WORD.vocab.itos[::200]) print(TAG.vocab.itos)
Unique tokens in source (ru) vocabulary: 7316 Unique tokens in target (en) vocabulary: 13 ['<unk>', 'number', 'available', 'miles', 'clearly', 'corps', 'quickly', 'b.', 'resolution', 'review', 'orchestra', 'occasionally', 'warfare', 'bread', "nation's", 'tested', 'visitors', 'accident', 'sovereign', 'gesture', 'sharpe', '70', 'attacks', 'ada', 'workshop', 'sank', 'label', "doctor's", 'walker', 'mailed', 'blade', 'modernization', 'arriving', 'judged', 'adventures', 'generated', 'rolls'] ['<pad>', 'NOUN', 'VERB', '.', 'ADP', 'DET', 'ADJ', 'ADV', 'PRON', 'CONJ', 'PRT', 'NUM', 'X']
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Here you will see the `unk` and `pad` tokens. The first one marks words that are not in our vocabulary. The second one is used so that all sequences in a batch have the same length.
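A small check (illustrative, assuming the legacy torchtext `Field` API used above) shows both behaviours: out-of-vocabulary words fall back to the `<unk>` index, and `process` pads shorter sentences so a batch becomes a rectangular tensor:

```python
# Unknown words map to the <unk> index (0 by default)
print(WORD.vocab.stoi['the'])           # index of a known word
print(WORD.vocab.stoi['qwertyuiop'])    # unseen word -> same index as '<unk>'
print(WORD.vocab.stoi[WORD.unk_token])

# Padding: the shorter sentence is filled with '<pad>' up to the batch maximum
batch = WORD.process([['the', 'cat', 'sat'], ['hello']])
print(batch.shape)  # torch.Size([3, 2]) -> [max sent len, batch size]
```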
print(vars(train_data.examples[9]))
{'words': ['there', 'was', 'a', 'contorted', 'ugliness', 'now', ';', ';'], 'tags': ['PRT', 'VERB', 'DET', 'VERB', 'NOUN', 'ADV', '.', '.']}
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Let's look at how long the sentences we are dealing with are
length = map(len, [vars(x)['words'] for x in train_data.examples]) plt.figure(figsize=[8, 4]) plt.title("Length distribution in Train data") plt.hist(list(length), bins=20);
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
For training the `LSTM` it is better to use Colab
import torch from torch import nn import torch.nn.functional as F import torch.optim as optim device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') device
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
For faster and more stable training, we group our data into batches
# split the data into batches, not forgetting to sort the examples by length first def _len_sort_key(x): return len(x.words) BATCH_SIZE = 64 train_iterator, valid_iterator, test_iterator = BucketIterator.splits( (train_data, valid_data, test_data), batch_size = BATCH_SIZE, device = device, sort_key=_len_sort_key ) # look at the number of batches list(map(len, [train_iterator, valid_iterator, test_iterator]))
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
The model and its training Let's initialize our model. Read about dropout [here](https://habr.com/ru/company/wunderfund/blog/330814/). **(3 points)**
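As a brief aside on what dropout does (illustrative, not part of the graded code): during training `nn.Dropout(p)` zeroes each element with probability p and rescales the survivors by 1/(1-p), while in eval mode it is the identity:

```python
import torch
from torch import nn

drop = nn.Dropout(p=0.5)
x = torch.ones(8)

drop.train()
print(drop(x))  # roughly half the entries become 0, the rest are scaled to 2.0

drop.eval()
print(drop(x))  # unchanged: dropout is disabled at evaluation time
```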
class LSTMTagger(nn.Module): def __init__(self, input_dim, emb_dim, hid_dim, output_dim, dropout): super().__init__() self.embeddings = nn.Embedding(num_embeddings=input_dim, embedding_dim=emb_dim) self.dropout = nn.Dropout(p=dropout) self.rnn = nn.LSTM(emb_dim, hid_dim) self.tag = nn.Linear(hid_dim, output_dim) def forward(self, sent): #sent = [sent len, batch size] # don't forget to apply dropout to the embeddings embedded = self.dropout(self.embeddings(sent)) output, _ = self.rnn(embedded) #output = [sent len, batch size, hid dim * n directions] prediction = self.tag(output) return prediction # model parameters INPUT_DIM = len(WORD.vocab) OUTPUT_DIM = len(TAG.vocab) EMB_DIM = 10 HID_DIM = 10 DROPOUT = 0.5 model = LSTMTagger(input_dim=INPUT_DIM, emb_dim=EMB_DIM, hid_dim=HID_DIM, output_dim=OUTPUT_DIM, dropout=DROPOUT).to(device) # initialize the weights def init_weights(m): for name, param in m.named_parameters(): nn.init.uniform_(param, -0.08, 0.08) model.apply(init_weights)
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Let's count the number of trainable parameters of our model. Use the `numel()` method. **(1 point)**
def count_parameters(model): return sum(p.numel() for p in model.parameters() if p.requires_grad) print(f'The model has {count_parameters(model):,} trainable parameters')
The model has 37,403 trainable parameters
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Let's get training **(2 points)**
PAD_IDX = TAG.vocab.stoi['<pad>'] optimizer = optim.Adam(model.parameters()) criterion = nn.CrossEntropyLoss(ignore_index = PAD_IDX) def train(model, iterator, optimizer, criterion, clip, train_history=None, valid_history=None): model.train() epoch_loss = 0 history = [] for i, batch in enumerate(iterator): words = batch.words tags = batch.tags optimizer.zero_grad() output = model(words) #tags = [sent len, batch size] #output = [sent len, batch size, output dim] output = output.view(-1, output.shape[-1]) tags = tags.view(-1) #tags = [sent len * batch size] #output = [sent len * batch size, output dim] loss = criterion(output, tags) loss.backward() # Gradient clipping(решение проблемы взрыва граденты), clip - максимальная норма вектора torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=clip) optimizer.step() epoch_loss += loss.item() history.append(loss.cpu().data.numpy()) if (i+1)%10==0: fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(12, 8)) clear_output(True) ax[0].plot(history, label='train loss') ax[0].set_xlabel('Batch') ax[0].set_title('Train loss') if train_history is not None: ax[1].plot(train_history, label='general train history') ax[1].set_xlabel('Epoch') if valid_history is not None: ax[1].plot(valid_history, label='general valid history') plt.legend() plt.show() return epoch_loss / len(iterator) def evaluate(model, iterator, criterion): model.eval() epoch_loss = 0 history = [] with torch.no_grad(): for i, batch in enumerate(iterator): words = batch.words tags = batch.tags output = model(words) #tags = [sent len, batch size] #output = [sent len, batch size, output dim] output = output.view(-1, output.shape[-1]) tags = tags.view(-1) #tags = [sent len * batch size] #output = [sent len * batch size, output dim] loss = criterion(output, tags) epoch_loss += loss.item() return epoch_loss / len(iterator) def epoch_time(start_time, end_time): elapsed_time = end_time - start_time elapsed_mins = int(elapsed_time / 60) elapsed_secs = int(elapsed_time - (elapsed_mins * 60)) return elapsed_mins, elapsed_secs import time import math import matplotlib matplotlib.rcParams.update({'figure.figsize': (16, 12), 'font.size': 14}) import matplotlib.pyplot as plt %matplotlib inline from IPython.display import clear_output train_history = [] valid_history = [] N_EPOCHS = 15 CLIP = 32 best_valid_loss = float('inf') for epoch in range(N_EPOCHS): start_time = time.time() train_loss = train(model, train_iterator, optimizer, criterion, CLIP, train_history, valid_history) valid_loss = evaluate(model, valid_iterator, criterion) end_time = time.time() epoch_mins, epoch_secs = epoch_time(start_time, end_time) if valid_loss < best_valid_loss: best_valid_loss = valid_loss torch.save(model.state_dict(), 'best-val-model.pt') train_history.append(train_loss) valid_history.append(valid_loss) print(f'Epoch: {epoch+1:02} | Time: {epoch_mins}m {epoch_secs}s') print(f'\tTrain Loss: {train_loss:.3f} | Train PPL: {math.exp(train_loss):7.3f}') print(f'\t Val. Loss: {valid_loss:.3f} | Val. PPL: {math.exp(valid_loss):7.3f}')
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Applying the model **(1 point)**
def accuracy_model(model, iterator): model.eval() true_pred = 0 num_pred = 0 with torch.no_grad(): for i, batch in enumerate(iterator): words = batch.words tags = batch.tags output = model(words) #output = [sent len, batch size, output dim] # For each word, pick the index of the tag with the highest score output = output.argmax(-1) #output = [sent len, batch size] predict_tags = output.cpu().numpy() true_tags = tags.cpu().numpy() true_pred += np.sum((true_tags == predict_tags) & (true_tags != PAD_IDX)) num_pred += np.prod(true_tags.shape) - (true_tags == PAD_IDX).sum() return round(true_pred / num_pred * 100, 3) print("Accuracy:", accuracy_model(model, test_iterator), '%')
Accuracy: 92.797 %
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
You can improve the quality by changing the model's parameters. You need to reach an accuracy of at least `accuracy = 92 %`.
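One possible direction (an assumption, not the author's reference solution) is simply to give the model more capacity than the tiny 10-unit configuration above, for example:

```python
# Hypothetical larger configuration; the exact values are guesses, not the graded answer
EMB_DIM = 100
HID_DIM = 128
DROPOUT = 0.25

model = LSTMTagger(INPUT_DIM, EMB_DIM, HID_DIM, OUTPUT_DIM, DROPOUT).to(device)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters())
# ...then rerun the training loop above and re-check accuracy_model on the test iterator
```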
best_model = LSTMTagger(INPUT_DIM, EMB_DIM, HID_DIM, OUTPUT_DIM, DROPOUT).to(device) best_model.load_state_dict(torch.load('/content/best-val-model.pt')) assert accuracy_model(best_model, test_iterator) >= 92
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
**If the network's accuracy is below 92 percent, half of all earned points are deducted; in that case the maximum for this assignment is 5 points.** An example solution to our task:
def print_tags(model, data): model.eval() with torch.no_grad(): words, _ = data example = torch.LongTensor([WORD.vocab.stoi[elem] for elem in words]).unsqueeze(1).to(device) output = model(example).argmax(dim=-1).cpu().numpy() tags = [TAG.vocab.itos[int(elem)] for elem in output] for token, tag in zip(words, tags): print(f'{token:15s}{tag}') print_tags(model, pos_data[-1])
From NOUN what DET I NOUN was VERB able ADJ to PRT gauge NOUN in ADP a DET swift ADJ , . greedy NOUN glance NOUN , . the DET figure NOUN inside ADP the DET coral-colored NOUN boucle NOUN dress NOUN was VERB stupefying VERB . .
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
Conclusion: **(0.5 points)** Proper parameter selection gives higher accuracy, and a sufficient number of epochs also makes it possible to reach good accuracy; however, the model may overfit.
_____no_output_____
MIT
FastStart/module_3_pos_tag.ipynb
Xrenya/RuCode2020
UI for your Machine Learning model Install Gradio
pip install gradio
Requirement already satisfied: gradio in d:\anaconda3\lib\site-packages (1.2.3) Requirement already satisfied: flask in d:\anaconda3\lib\site-packages (from gradio) (1.1.2) Requirement already satisfied: numpy in d:\anaconda3\lib\site-packages (from gradio) (1.18.5) Requirement already satisfied: analytics-python in d:\anaconda3\lib\site-packages (from gradio) (1.2.9) Requirement already satisfied: requests in d:\anaconda3\lib\site-packages (from gradio) (2.24.0) Requirement already satisfied: scikit-image in d:\anaconda3\lib\site-packages (from gradio) (0.16.2) Requirement already satisfied: paramiko in d:\anaconda3\lib\site-packages (from gradio) (2.7.2) Requirement already satisfied: pandas in d:\anaconda3\lib\site-packages (from gradio) (1.1.1) Requirement already satisfied: IPython in d:\anaconda3\lib\site-packages (from gradio) (7.18.1) Requirement already satisfied: scipy in d:\anaconda3\lib\site-packages (from gradio) (1.5.0) Requirement already satisfied: itsdangerous>=0.24 in d:\anaconda3\lib\site-packages (from flask->gradio) (1.1.0) Requirement already satisfied: click>=5.1 in d:\anaconda3\lib\site-packages (from flask->gradio) (7.1.2) Requirement already satisfied: Werkzeug>=0.15 in d:\anaconda3\lib\site-packages (from flask->gradio) (1.0.1) Requirement already satisfied: Jinja2>=2.10.1 in d:\anaconda3\lib\site-packages (from flask->gradio) (2.11.2) Requirement already satisfied: six>=1.5 in d:\anaconda3\lib\site-packages (from analytics-python->gradio) (1.15.0) Requirement already satisfied: python-dateutil>2.1 in d:\anaconda3\lib\site-packages (from analytics-python->gradio) (2.8.1) Requirement already satisfied: certifi>=2017.4.17 in d:\anaconda3\lib\site-packages (from requests->gradio) (2020.6.20) Requirement already satisfied: idna<3,>=2.5 in d:\anaconda3\lib\site-packages (from requests->gradio) (2.10) Requirement already satisfied: chardet<4,>=3.0.2 in d:\anaconda3\lib\site-packages (from requests->gradio) (3.0.4) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in d:\anaconda3\lib\site-packages (from requests->gradio) (1.25.10) Requirement already satisfied: PyWavelets>=0.4.0 in d:\anaconda3\lib\site-packages (from scikit-image->gradio) (1.1.1) Requirement already satisfied: networkx>=2.0 in d:\anaconda3\lib\site-packages (from scikit-image->gradio) (2.5) Requirement already satisfied: matplotlib!=3.0.0,>=2.0.0 in d:\anaconda3\lib\site-packages (from scikit-image->gradio) (3.3.1) Requirement already satisfied: pillow>=4.3.0 in d:\anaconda3\lib\site-packages (from scikit-image->gradio) (7.2.0) Requirement already satisfied: imageio>=2.3.0 in d:\anaconda3\lib\site-packages (from scikit-image->gradio) (2.9.0) Requirement already satisfied: cryptography>=2.5 in d:\anaconda3\lib\site-packages (from paramiko->gradio) (3.1) Requirement already satisfied: pynacl>=1.0.1 in d:\anaconda3\lib\site-packages (from paramiko->gradio) (1.4.0) Requirement already satisfied: bcrypt>=3.1.3 in d:\anaconda3\lib\site-packages (from paramiko->gradio) (3.2.0) Requirement already satisfied: pytz>=2017.2 in d:\anaconda3\lib\site-packages (from pandas->gradio) (2020.1) Requirement already satisfied: pygments in d:\anaconda3\lib\site-packages (from IPython->gradio) (2.6.1) Requirement already satisfied: setuptools>=18.5 in d:\anaconda3\lib\site-packages (from IPython->gradio) (49.6.0.post20200814) Requirement already satisfied: colorama; sys_platform == "win32" in d:\anaconda3\lib\site-packages (from IPython->gradio) (0.4.3) Requirement already satisfied: 
prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0 in d:\anaconda3\lib\site-packages (from IPython->gradio) (3.0.7) Requirement already satisfied: backcall in d:\anaconda3\lib\site-packages (from IPython->gradio) (0.2.0) Requirement already satisfied: pickleshare in d:\anaconda3\lib\site-packages (from IPython->gradio) (0.7.5) Requirement already satisfied: traitlets>=4.2 in d:\anaconda3\lib\site-packages (from IPython->gradio) (4.3.3) Requirement already satisfied: decorator in d:\anaconda3\lib\site-packages (from IPython->gradio) (4.4.2) Requirement already satisfied: jedi>=0.10 in d:\anaconda3\lib\site-packages (from IPython->gradio) (0.17.1) Requirement already satisfied: MarkupSafe>=0.23 in d:\anaconda3\lib\site-packages (from Jinja2>=2.10.1->flask->gradio) (1.1.1) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.3 in d:\anaconda3\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->gradio) (2.4.7) Requirement already satisfied: kiwisolver>=1.0.1 in d:\anaconda3\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->gradio) (1.2.0) Requirement already satisfied: cycler>=0.10 in d:\anaconda3\lib\site-packages (from matplotlib!=3.0.0,>=2.0.0->scikit-image->gradio) (0.10.0) Requirement already satisfied: cffi!=1.11.3,>=1.8 in d:\anaconda3\lib\site-packages (from cryptography>=2.5->paramiko->gradio) (1.14.2) Requirement already satisfied: wcwidth in d:\anaconda3\lib\site-packages (from prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0->IPython->gradio) (0.2.5) Requirement already satisfied: ipython-genutils in d:\anaconda3\lib\site-packages (from traitlets>=4.2->IPython->gradio) (0.2.0)Note: you may need to restart the kernel to use updated packages. Requirement already satisfied: parso<0.8.0,>=0.7.0 in d:\anaconda3\lib\site-packages (from jedi>=0.10->IPython->gradio) (0.7.0) Requirement already satisfied: pycparser in d:\anaconda3\lib\site-packages (from cffi!=1.11.3,>=1.8->cryptography>=2.5->paramiko->gradio) (2.20)
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Import the required libraries
import gradio as gr # for creating the UI import numpy as np # for preprocessing images import requests # for downloading human readable labels from keras.applications.vgg16 import VGG16 # VGG16 model from keras.applications.vgg16 import preprocess_input # VGG16 preprocessing function
_____no_output_____
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Loading the model
vgg_model = VGG16()
_____no_output_____
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Download the human readable labels
response = requests.get("https://raw.githubusercontent.com/gradio-app/mobilenet-example/master/labels.txt") labels = response.text.split("\n")
_____no_output_____
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Creating the classification pipeline
# this pipeline returns a dictionary with key as label and # values as the predicted confidence for that label def classify_image(image): image = image.reshape((-1, 224, 224, 3)) # reshaping the image image = preprocess_input(image) # prepare the image for the VGG16 model prediction = vgg_model.predict(image).flatten() # predicting the output return {labels[i]: float(prediction[i]) for i in range(1000)} # finding the predicted labels from the 1000 labels
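A hypothetical sanity check of `classify_image` (not in the original notebook) is to feed a random image-shaped array and print the three highest-confidence labels:

```python
# Illustrative only: random pixels will just produce some arbitrary ImageNet classes
dummy = np.random.randint(0, 255, size=(224, 224, 3)).astype("float32")
scores = classify_image(dummy)
top3 = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:3]
print(top3)
```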
_____no_output_____
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Initializing the input and output components
image = gr.inputs.Image(shape = (224, 224, 3)) label = gr.outputs.Label(num_top_classes = 3) # predicts the top 3 classes
_____no_output_____
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Launching the Gradio interface with our VGG16 model
gr.Interface(fn = classify_image, inputs = image, outputs = label, capture_session = True).launch()
Running locally at: http://127.0.0.1:7860/ To get a public link for a hosted model, set Share=True Interface loading below...
MIT
ui-for-ml-using-gradio.ipynb
rajtilak82/how-machines-learn
Activation Function
# In the previous lecture we saw neurons in action, but what exactly is a neuron? # Every neuron combines its weighted inputs and passes the result through an activation function. # In this notebook you will see several different activation functions.
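In symbols (a standard formulation, added here for clarity): a neuron combines its inputs with learned weights and a bias, then applies an activation function $\varphi$:

$$a = \varphi\Big(\sum_i w_i x_i + b\Big)$$

The functions below are different choices of $\varphi$.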
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Threshold Function
import numpy as np import matplotlib.pyplot as plt def binaryStep(x): ''' Returns 0 if the input is less than zero, otherwise returns 1. ''' return np.heaviside(x,1) x = np.linspace(-10, 10) plt.plot(x, binaryStep(x)) plt.axis('tight') plt.title('Activation Function (Threshold Function)') plt.show() ## A simple yes/no step function: the output jumps straight from 0 to 1.
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Sigmoid Function
def sigmoid(x): ''' Returns 1/(1+exp(-x)), so the values lie between zero and one. ''' return 1/(1+np.exp(-x)) x = np.linspace(-10, 10) plt.plot(x, sigmoid(x)) plt.axis('tight') plt.title('Activation Function (Sigmoid)') plt.show() ## The output equals 1/(1+np.exp(-x)). Unlike the threshold function, this gives a smooth progression, which is useful in the output layer.
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Rectifier or ReLU (Rectified Linear Unit)
def RELU(x): ''' Returns zero if the input is less than zero, otherwise returns the input itself. ''' x1=[] for i in x: if i<0: x1.append(0) else: x1.append(i) return x1 x = np.linspace(-10, 10) plt.plot(x, RELU(x)) plt.axis('tight') plt.title('Activation Function (RELU)') plt.show() ## Negative inputs are clipped to 0; positive inputs grow linearly with no upper bound. Usually used in hidden layers.
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Hyperbolic Tangent (Tanh) Function
def tanh(x): ''' Returns (1-exp(-2x))/(1+exp(-2x)); the returned values lie between -1 and 1. ''' return np.tanh(x) x = np.linspace(-10, 10) plt.plot(x, tanh(x)) plt.axis('tight') plt.title('Activation Function (Tanh)') plt.show() ## The output ranges from -1 to +1, and the curve is smooth like the sigmoid.
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Softmax Function
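For reference, the standard definition that the code below implements is

$$\mathrm{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}},$$

so all outputs are positive and sum to 1, which is why softmax is used to turn scores into class probabilities.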
def softmax(x): ''' Compute softmax values for each set of scores in x. ''' return np.exp(x) / np.sum(np.exp(x), axis=0) x = np.linspace(-10, 10) plt.plot(x, softmax(x)) plt.axis('tight') plt.title('Activation Function (Softmax)') plt.show() ## Softmax produces a smooth curve like the sigmoid, but its outputs sum to 1. Often used for multiclass classification.
_____no_output_____
Apache-2.0
Day_2_Activation_Function.ipynb
LukasPurbaW/100_Days_of_Deep_Learning
Model Evaluation and Refinement. Estimated time needed: **30** minutes. Objectives: After completing this lab you will be able to: - Evaluate and refine prediction models. Table of contents: Model Evaluation; Over-fitting, Under-fitting and Model Selection; Ridge Regression; Grid Search. This dataset is hosted on IBM Cloud Object Storage; click HERE for free storage.
import pandas as pd import numpy as np # Import clean data path = 'https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/module_5_auto.csv' df = pd.read_csv(path) df.to_csv('module_5_auto.csv')
_____no_output_____
MIT
DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb
alekhaya99/IBM-CLOUD-SQL-AND-PYTHON
First, let's use only the numeric data
df=df._get_numeric_data() df.head()
_____no_output_____
MIT
DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb
alekhaya99/IBM-CLOUD-SQL-AND-PYTHON
Libraries for plotting
%%capture ! pip install ipywidgets from ipywidgets import interact, interactive, fixed, interact_manual
_____no_output_____
MIT
DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb
alekhaya99/IBM-CLOUD-SQL-AND-PYTHON
Functions for plotting
def DistributionPlot(RedFunction, BlueFunction, RedName, BlueName, Title): width = 12 height = 10 plt.figure(figsize=(width, height)) ax1 = sns.distplot(RedFunction, hist=False, color="r", label=RedName) ax2 = sns.distplot(BlueFunction, hist=False, color="b", label=BlueName, ax=ax1) plt.title(Title) plt.xlabel('Price (in dollars)') plt.ylabel('Proportion of Cars') plt.show() plt.close() def PollyPlot(xtrain, xtest, y_train, y_test, lr,poly_transform): width = 12 height = 10 plt.figure(figsize=(width, height)) #training data #testing data # lr: linear regression object #poly_transform: polynomial transformation object xmax=max([xtrain.values.max(), xtest.values.max()]) xmin=min([xtrain.values.min(), xtest.values.min()]) x=np.arange(xmin, xmax, 0.1) plt.plot(xtrain, y_train, 'ro', label='Training Data') plt.plot(xtest, y_test, 'go', label='Test Data') plt.plot(x, lr.predict(poly_transform.fit_transform(x.reshape(-1, 1))), label='Predicted Function') plt.ylim([-10000, 60000]) plt.ylabel('Price') plt.legend()
_____no_output_____
MIT
DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb
alekhaya99/IBM-CLOUD-SQL-AND-PYTHON
Part 1: Training and Testing. An important step in evaluating your model is to split your data into training and testing sets. We will place the target data, price, in a separate dataframe y:
y_data = df['price']
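A minimal sketch of the split itself (the feature-frame name, test fraction and random seed here are illustrative assumptions, not necessarily what the lab uses):

```python
from sklearn.model_selection import train_test_split

x_data = df.drop('price', axis=1)  # all columns except the target
x_train, x_test, y_train, y_test = train_test_split(
    x_data, y_data, test_size=0.10, random_state=1)

print("number of test samples:", x_test.shape[0])
print("number of training samples:", x_train.shape[0])
```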
_____no_output_____
MIT
DA0101EN/.ipynb_checkpoints/model-evaluation-and-refinement-checkpoint.ipynb
alekhaya99/IBM-CLOUD-SQL-AND-PYTHON