Columns: markdown, code, output, license, path, repo_name
In order to parse this file, you need to install pyyaml first:
```sh
pip install pyyaml
```
or
```sh
pip3 install pyyaml
```
import yaml authors = yaml.load(authorSpec, Loader=yaml.FullLoader) authors
_____no_output_____
MIT
pyling/detectAuthors.ipynb
dirkroorda/explore
We need to compile the authors specification in such a way that we can use the triggers
triggers = {} for (key, authorInfo) in authors.items(): for trigger in authorInfo['triggers']: triggers[trigger] = key triggers def fillInAuthorDetails(text): normalized = normalize(text) output = None for trigger in triggers: if trigger in normalized: authorKey = triggers[trigger] authorFull = authors[authorKey]["full"] output = f"""<author><name key="{authorKey}">{authorFull}</name></author>""" break if output is None: print(f"!!! {normalized:<36} => NO AUTHOR DETECTED") return output for text in (testTexts): result = fillInAuthorDetails(text) if result is not None: print(f"{text:<40} => {result}")
Calderón de la Barca, Pedro => <author><name key="cald">Pedro Calderón de la Barca</name></author> CCCCCalderón => <author><name key="cald">Pedro Calderón de la Barca</name></author> !!! caldeeeeeeron => NO AUTHOR DETECTED Pedro Barca => <author><name key="cald">Pedro Calderón de la Barca</name></author> Pedro Barca => <author><name key="cald">Pedro Calderón de la Barca</name></author> Agustin Moreto => <author><name key="more">Agustín Moreto</name></author> A. Moreto => <author><name key="more">Agustín Moreto</name></author> Agustin => <author><name key="more">Agustín Moreto</name></author> Augustine => <author><name key="more">Agustín Moreto</name></author>
MIT
pyling/detectAuthors.ipynb
dirkroorda/explore
Cross-Validation

1. We read the data from the npy files
2. We combine the QUBICC and NARVAL data
3. We set up cross-validation

During cross-validation:

1. We scale the data and convert it to tf data
2. We plot training progress and model biases
3. We write losses and epochs into a file
# Ran with 800GB (750GB should also be fine) import sys import numpy as np import time import pandas as pd import matplotlib.pyplot as plt import os import copy import gc #Import sklearn before tensorflow (static Thread-local storage) from sklearn.preprocessing import StandardScaler import tensorflow as tf from tensorflow.keras.models import load_model from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, BatchNormalization from tensorflow.keras.regularizers import l1_l2 from tensorflow.keras import backend as K from tensorflow.keras.layers import Activation # For Leaky_ReLU: from tensorflow import nn t0 = time.time() path = '/pf/b/b309170' # Add path with my_classes to sys.path sys.path.insert(0, path + '/workspace_icon-ml/cloud_cover_parameterization/') # Reloading custom file to incorporate changes dynamically import importlib import my_classes importlib.reload(my_classes) from my_classes import read_mean_and_std from my_classes import TimeOut # Minutes per fold timeout = 2120 # For logging purposes days = 'all_days' # Maximum amount of epochs for each model epochs = 30 # Set seed for reproducibility seed = 10 tf.random.set_seed(seed) # For store_mean_model_biases VERT_LAYERS = 31 gpus = tf.config.experimental.list_physical_devices('GPU') # tf.config.experimental.set_visible_devices(gpus[3], 'GPU') # Cloud Cover or Cloud Area? output_var = 'cl_area' # Set output_var to one of {'clc', 'cl_area'} # QUBICC only or QUBICC+NARVAL training data? Always True for the paper qubicc_only = True path_base = os.path.join(path, 'workspace_icon-ml/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05') path_data = os.path.join(path, 'my_work/icon-ml_data/cloud_cover_parameterization/grid_cell_based_QUBICC_R02B05/based_on_var_interpolated_data') if output_var == 'clc': full_output_var_name = 'cloud_cover' elif output_var == 'cl_area': full_output_var_name = 'cloud_area' if qubicc_only: output_folder = '%s_R2B5_QUBICC'%full_output_var_name else: output_folder = '%s_R2B5_QUBICC+NARVAL'%full_output_var_name path_model = os.path.join(path_base, 'saved_models', output_folder) path_figures = os.path.join(path_base, 'figures', output_folder) narval_output_file = '%s_output_narval.npy'%full_output_var_name qubicc_output_file = '%s_output_qubicc.npy'%full_output_var_name # Prevents crashes of the code physical_devices = tf.config.list_physical_devices('GPU') tf.config.set_visible_devices(physical_devices[0], 'GPU') # Allow the growth of memory Tensorflow allocates (limits memory usage overall) for gpu in gpus: tf.config.experimental.set_memory_growth(gpu, True) scaler = StandardScaler()
_____no_output_____
MIT
q1_cell_based_qubicc_r2b5/source_code/commence_training_cross_validation-fold_2.ipynb
agrundner24/iconml_clc
Load the data
# input_narval = np.load(path_data + '/cloud_cover_input_narval.npy') # input_qubicc = np.load(path_data + '/cloud_cover_input_qubicc.npy') # output_narval = np.load(path_data + '/cloud_cover_output_narval.npy') # output_qubicc = np.load(path_data + '/cloud_cover_output_qubicc.npy') input_data = np.concatenate((np.load(path_data + '/cloud_cover_input_narval.npy'), np.load(path_data + '/cloud_cover_input_qubicc.npy')), axis=0) output_data = np.concatenate((np.load(os.path.join(path_data, narval_output_file)), np.load(os.path.join(path_data, qubicc_output_file))), axis=0) samples_narval = np.load(path_data + '/cloud_cover_output_narval.npy').shape[0] if qubicc_only: input_data = input_data[samples_narval:] output_data = output_data[samples_narval:] (samples_total, no_of_features) = input_data.shape (samples_total, no_of_features)
_____no_output_____
MIT
q1_cell_based_qubicc_r2b5/source_code/commence_training_cross_validation-fold_2.ipynb
agrundner24/iconml_clc
*Temporal cross-validation*

Split into two-week increments (when working with 3 months of data); with 5 months of data the increments are 25 days each.

1. Validate on increments 1 and 4
2. Validate on increments 2 and 5
3. Validate on increments 3 and 6

This yields 2/3 training data and 1/3 validation data.
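To make the split concrete, here is a minimal sketch of the fold indices using a toy `samples_total` of 12; the construction mirrors the notebook's own cell below.

```python
import numpy as np

samples_total = 12                     # toy value; the notebook uses the full sample count
incr = samples_total // 6              # size of one increment

for i in range(3):
    first_incr = np.arange(incr * i, incr * (i + 1))
    second_incr = np.arange(incr * (i + 3), incr * (i + 4))
    valid = np.append(first_incr, second_incr)
    train = np.delete(np.arange(samples_total), valid)
    print(f"fold {i}: validation={valid}, training={train}")

# Each fold validates on 2 of the 6 increments (1/3 of the data)
# and trains on the remaining 4 increments (2/3 of the data).
```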
training_folds = [] validation_folds = [] two_week_incr = samples_total//6 for i in range(3): # Note that this is a temporal split since time was the first dimension in the original tensor first_incr = np.arange(samples_total//6*i, samples_total//6*(i+1)) second_incr = np.arange(samples_total//6*(i+3), samples_total//6*(i+4)) validation_folds.append(np.append(first_incr, second_incr)) training_folds.append(np.arange(samples_total)) training_folds[i] = np.delete(training_folds[i], validation_folds[i])
_____no_output_____
MIT
q1_cell_based_qubicc_r2b5/source_code/commence_training_cross_validation-fold_2.ipynb
agrundner24/iconml_clc
Define the model

Activation function for the last layer:
def lrelu(x): return nn.leaky_relu(x, alpha=0.01) # Create the model model = Sequential() # First hidden layer model.add(Dense(units=64, activation='tanh', input_dim=no_of_features, kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # Second hidden layer model.add(Dense(units=64, activation=nn.leaky_relu, kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # model.add(Dropout(0.221)) # We drop 18% of the hidden nodes model.add(BatchNormalization()) # Third hidden layer model.add(Dense(units=64, activation='tanh', kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732))) # model.add(Dropout(0.221)) # We drop 18% of the hidden nodes # Output layer model.add(Dense(1, activation='linear', kernel_regularizer=l1_l2(l1=0.004749, l2=0.008732)))
_____no_output_____
MIT
q1_cell_based_qubicc_r2b5/source_code/commence_training_cross_validation-fold_2.ipynb
agrundner24/iconml_clc
3-fold cross-validation
# By decreasing timeout we make sure every fold gets the same amount of time # After all, data-loading took some time (Have 3 folds, 60 seconds/minute) # timeout = timeout - 1/3*1/60*(time.time() - t0) timeout = timeout - 1/60*(time.time() - t0) t0 = time.time() #We loop through the folds for i in range(3): filename = 'cross_validation_cell_based_fold_%d'%(i+1) #Standardize according to the fold scaler.fit(input_data[training_folds[i]]) #Load the data for the respective fold and convert it to tf data input_train = scaler.transform(input_data[training_folds[i]]) input_valid = scaler.transform(input_data[validation_folds[i]]) output_train = output_data[training_folds[i]] output_valid = output_data[validation_folds[i]] # Clear memory (Reduces memory requirement to 151 GB) del input_data, output_data, first_incr, second_incr, validation_folds, training_folds gc.collect() # Column-based: batchsize of 128 # Cell-based: batchsize of at least 512 # Shuffle is actually very important because we start off with the uppermost layers with clc=0 basically throughout # This can push us into a local minimum, preferrably yielding clc=0. # The size of the shuffle buffer significantly impacts RAM requirements! Do not increase to above 10000. # Possibly better to use .apply(tf.data.experimental.copy_to_device("/gpu:0")) before prefetch # We might want to cache before shuffling, however it seems to slow down training # We do not repeat after shuffle, because the validation set should be evaluated after each epoch train_ds = tf.data.Dataset.zip((tf.data.Dataset.from_tensor_slices(input_train), tf.data.Dataset.from_tensor_slices(output_train))) \ .shuffle(10**5, seed=seed) \ .batch(batch_size=1028, drop_remainder=True) \ .prefetch(1) # Clear memory del input_train, output_train gc.collect() # No need to add prefetch. # tf data with batch_size=10**5 makes the validation evaluation 10 times faster valid_ds = tf.data.Dataset.zip((tf.data.Dataset.from_tensor_slices(input_valid), tf.data.Dataset.from_tensor_slices(output_valid))) \ .batch(batch_size=10**5, drop_remainder=True) # Clear memory (Reduces memory requirement to 151 GB) del input_valid, output_valid gc.collect() #Feed the model. 
Increase the learning rate by a factor of 2 when increasing the batch size by a factor of 4 model.compile( optimizer=tf.keras.optimizers.Adam(learning_rate=0.000433, epsilon=0.1), loss=tf.keras.losses.MeanSquaredError() ) #Train the model # time_callback = TimeOut(t0, timeout*(i+1)) time_callback = TimeOut(t0, timeout) history = model.fit(train_ds, validation_data=valid_ds, epochs=epochs, verbose=2, callbacks=[time_callback]) # history = model.fit(train_ds, epochs=epochs, validation_data=valid_ds, callbacks=[time_callback]) #Save the model #Serialize model to YAML model_yaml = model.to_yaml() with open(os.path.join(path_model, filename+".yaml"), "w") as yaml_file: yaml_file.write(model_yaml) #Serialize model and weights to a single HDF5-file model.save(os.path.join(path_model, filename+'.h5'), "w") print('Saved model to disk') #Plot the training history if len(history.history['loss']) > len(history.history['val_loss']): del history.history['loss'][-1] pd.DataFrame(history.history).plot(figsize=(8,5)) plt.grid(True) plt.ylabel('Mean Squared Error') plt.xlabel('Number of epochs') plt.savefig(os.path.join(path_figures, filename+'.pdf')) with open(os.path.join(path_model, filename+'.txt'), 'a') as file: file.write('Results from the %d-th fold\n'%(i+1)) file.write('Training epochs: %d\n'%(len(history.history['val_loss']))) file.write('Weights restored from epoch: %d\n\n'%(1+np.argmin(history.history['val_loss'])))
_____no_output_____
MIT
q1_cell_based_qubicc_r2b5/source_code/commence_training_cross_validation-fold_2.ipynb
agrundner24/iconml_clc
Performance metrics of Buy & Hold Strategy

The purpose of this notebook is to calculate performance metrics for the benchmark and compare them with the results obtained in other papers. I will compare my results with two papers:

- Hybrid Investment Strategy Based on Momentum and Macroeconomic Approach (Kamil Korzeń, Robert Ślepaczuk)
- Predicting prices of S&P500 index using classical methods and recurrent neural networks (Mateusz Kijewski, Robert Ślepaczuk)
# Settings for notebook visualization from IPython.core.interactiveshell import InteractiveShell InteractiveShell.ast_node_interactivity = 'all' %matplotlib inline from IPython.core.display import HTML HTML("""<style>.output_png img {display: block;margin-left: auto;margin-right: auto;text-align: center;vertical-align: middle;} </style>""") # Necessary imports import os import numpy as np import pandas as pd import matplotlib as plt import quantstats as qs print("Libraries imported correctly") os.chdir("/Users/Sergio/Documents/Master_QF/Thesis/Code/Algorithmic Strategies") %run Functions.ipynb
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Load data
%run Functions.ipynb df = get_sp500_data(from_local_file=True, save_to_file=False) df['Market_daily_ret'] = df['Close'].pct_change() df = df.loc['1990':'2020', ['Close', 'Market_daily_ret']] df.head() df['Close'].plot(title='SP500')
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Paper from Kamil: "Hybrid Investment Strategy Based on Momentum and Macroeconomic Approach"

- Data from 1991-01-03 to 2018-01-03
- Uses daily returns to calculate the metrics
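The metrics compared below (AbsRet, ARC, IR, aSD, MD) are produced by `calculate_performance_metrics` from `Functions.ipynb`, which is not shown in this notebook. As a rough point of reference, here is a minimal sketch of how such metrics are commonly computed from daily returns; the exact formulas used in the paper (shown in the image below) and in `Functions.ipynb` may differ, e.g. in the annualization factor, so the function name and formulas here are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

def sketch_metrics(daily_ret: pd.Series, periods_per_year: int = 252) -> dict:
    """Rough, assumed formulas for the metrics compared below (not the paper's own code)."""
    equity = (1 + daily_ret.fillna(0)).cumprod()              # cumulative equity curve starting at 1
    abs_ret = (equity.iloc[-1] - 1) * 100                      # AbsRet: total return over the period, in %
    n_years = len(daily_ret) / periods_per_year
    arc = (equity.iloc[-1] ** (1 / n_years) - 1) * 100         # ARC: annualized (compounded) return, in %
    asd = daily_ret.std() * np.sqrt(periods_per_year) * 100    # aSD: annualized standard deviation, in %
    ir = arc / asd if asd > 0 else np.nan                      # IR: information ratio, ARC / aSD
    md = abs((equity / equity.cummax() - 1).min()) * 100       # MD: maximum drawdown, in %
    return {'AbsRet': abs_ret, 'ARC': arc, 'IR': ir, 'aSD': asd, 'MD': md}

# Example usage on the backtested series (column name taken from backtest_strat below):
# sketch_metrics(df_1['Market_daily_ret'])
```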
from IPython.display import Image Image(filename='/Users/Sergio/Documents/Master_QF/Thesis/Papers/Performance metrics/K-Formulas.png') # Data from 1991-01-03:2018-01-03
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
We backtest the buy_and_hold strategy and compare its metrics with the ones from the paper:
%run Functions.ipynb df_1 = df.loc['1991-01-03':'2018-01-03', ['Close', 'Market_daily_ret']].copy() df_1 = backtest_strat(df_1, buy_and_hold(df_1), commision=0)[0] df_1.head(4) #df_1.tail(2) #df_1['Close'].plot(title='SP500', legend=True)
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
In this paper, the return from the first day of 1991 (January 2nd) does not seem to be included.

Metrics from the paper:
from IPython.display import Image Image(filename='/Users/Sergio/Documents/Master_QF/Thesis/Papers/Performance metrics/K-Table.png') metrics = ['AbsRet', 'ARC', 'IR', 'aSD', 'MD'] paper_data = [[742.801, 8.222, 0.466, 17.652, 56.775]] df_metrics = pd.DataFrame(data=paper_data, index=['Paper metrics'], columns=metrics) metrics_row = calculate_performance_metrics(df_1, strat_name='Buy and Hold') df_metrics = pd.concat([df_metrics, metrics_row], axis=0).drop_duplicates().round(3) df_metrics
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Paper: "Predicting prices of S&P500 index using classical methods and recurrent neural networks"

- Data from 2000-01-01 to 2020-05-02
- Uses log returns to calculate the metrics
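Unlike the first paper, this one works with log returns. As a small aside (a sketch only, assuming the `df` with a `Close` column loaded earlier), the difference from the simple daily returns used above is:

```python
import numpy as np

simple_ret = df['Close'].pct_change()     # arithmetic daily returns, as in the first comparison
log_ret = np.log(df['Close']).diff()      # log returns, as used by this paper; they sum over time
```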
from IPython.display import Image Image(filename='/Users/Sergio/Documents/Master_QF/Thesis/Papers/Performance metrics/M-Formulas.png') # Data from 2000 : 2020-05-02
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
We backtest the buy_and_hold strategy and compare its metrics with the ones from the paper:
df_2 = df.loc['2000-01-01':'2020-05-02', ['Close', 'Market_daily_ret']].copy() df_2 = backtest_strat(df_2, buy_and_hold(df_2), commision=0)[0] df_2.head(4) #df_2.tail(2) #df_2['Close'].plot(title='SP500', legend=True)
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Metrics from the paper:
from IPython.display import Image Image(filename='/Users/Sergio/Documents/Master_QF/Thesis/Papers/Performance metrics/M-Table.png') # Data from 2000 : 2020-05-02 metrics = ['ARC', 'IR', 'aSD', 'MD', 'AMD', 'MLD', 'All Risk', 'ARCMD', 'ARCAMD', 'Num Trades', 'No signal'] paper_data = [[3.23, 0.16, 19.95, 64.33, 17.34, 7.155, 15.92, 0.05, 0.18, 1, 0]] df_metrics = pd.DataFrame(data=paper_data, index=['Paper metrics'], columns=metrics) metrics_row = calculate_performance_metrics(df_2, strat_name='Buy and Hold') df_metrics = pd.concat([df_metrics, metrics_row], axis=0).drop_duplicates().round(3) df_metrics
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Demonstration of MD, MLD and AMD using the quantstats library

Using data from paper 1 (1991-01-03 to 2018-01-03). The following code checks the drawdowns.

- Paper 2 gave an MD of 64.33%, which seems to be wrong
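As a quick cross-check of the 64.33% figure, the maximum drawdown over the 2000-01-01 to 2020-05-02 window can also be computed directly from the daily returns with quantstats (mirroring how `qs.stats.max_drawdown` is used further below; this snippet is an addition for verification, not part of the original notebook):

```python
# Maximum drawdown of buy & hold over the paper-2 window, from daily returns.
md_check = abs(qs.stats.max_drawdown(df_2['Market_daily_ret'])) * 100
print("Buy & hold MD over the paper-2 window: {:.2f} %".format(md_check))
```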
dd = qs.stats.drawdown_details(qs.stats.to_drawdown_series(df_1['Market_cum_ret'])).sort_values(by='max drawdown', ascending=True) dd.head()
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Maximum Loss Duration (in years):
dd = qs.stats.drawdown_details(qs.stats.to_drawdown_series(df_1['Market_cum_ret'])).sort_values(by='days', ascending=False) dd.insert(4, 'years', dd['days']/365.25) dd.head(5)
_____no_output_____
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
For MLD in years, I believe I should divide the number of days of the MLD by 365.25, but the result is closer to the one from the paper if I divide the number of days by 366. Kamil, how was it calculated in the paper?
from datetime import datetime max_loss_dur = datetime(2007, 5, 30) - datetime(2000, 3, 27) print(max_loss_dur.days) print("{:.4f}".format(max_loss_dur.days / 365)) print("{:.4f}".format(max_loss_dur.days / 365.25)) print("{:.4f}".format(max_loss_dur.days / 366))
2620 7.1781 7.1732 7.1585
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
To calculate AMD, I group the returns by year and take the mean of the MD of each year:
print("AMD = {:.3f} %".format(abs(df_2['Market_daily_ret'].groupby(by=df_2.index.year).apply(qs.stats.max_drawdown).mean()*100))) df_2['Market_daily_ret'].groupby(by=df_2.index.year).apply(qs.stats.max_drawdown).mul(100).to_frame(name='MD (%)').abs().round(3).T
AMD = 16.520 %
MIT
Buy & Hold/Buy&Hold.ipynb
scastellanog/Walk-forward-optimization
Model Prediction Verification

This script demonstrates how to train a single model class, embed the model, and solve the optimization problem for *regression* problems (i.e., continuous outcome prediction). We fix a sample from our generated data and solve the optimization problem with all elements of $\mathbf{x}$ equal to our data. In general, we might have some elements of $\mathbf{x}$ that are fixed, called our "conceptual variables," and the remaining indices are our decision variables. By fixing all elements of $\mathbf{x}$, we can verify that the model prediction matches the original sklearn model.

Load the relevant packages
import pandas as pd import numpy as np import math from sklearn.utils.extmath import cartesian import time import sys import os import time from sklearn.metrics import roc_auc_score, r2_score, mean_squared_error from sklearn.cluster import KMeans import opticl from pyomo import environ from pyomo.environ import *
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Initialize data

We will work with a basic dataset from `sklearn`.
from sklearn.datasets import make_regression from sklearn.model_selection import train_test_split X, y = make_regression(n_samples=200, n_features = 20, random_state=1) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1) X_train = pd.DataFrame(X_train).add_prefix('col') X_test = pd.DataFrame(X_test).add_prefix('col') X_train
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Train the chosen model type
# alg = 'rf' alg = 'gbm' task_type = 'continuous'
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
The user can optionally select a manual parameter grid for the cross-validation procedure. We implement a default parameter grid; see **run_MLmodels.py** for details on the tuned parameters. If you wish to use the default, leave ```parameter_grid = None``` (or do not specify any grid).
parameter_grid = None # parameter_grid = {'hidden_layer_sizes': [(5,),(10,)]} s = 1 version = 'test' outcome = 'temp' model_save = 'results/%s/%s_%s_model.csv' % (alg, version, outcome) alg_run = alg if alg != 'rf' else 'rf_shallow' m, perf = opticl.run_model(X_train, y_train, X_test, y_test, alg_run, outcome, task = task_type, seed = s, cv_folds = 5, # The user can manually specify the parameter grid for cross-validation if desired parameter_grid = parameter_grid, save_path = model_save, save = False)
------------- Initialize grid ---------------- ------------- Running model ---------------- Algorithm = gbm, metric = None saving... results/gbm_temp_trained.pkl ------------- Model evaluation ---------------- -------------------training evaluation----------------------- Train MSE: 4314.00082576947 Train R2: 0.8939305706172604 -------------------testing evaluation----------------------- Test MSE: 17814.940522763252 Test R2: 0.6170544988313675
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
After training the model, we will save the trained model in the format needed for embedding the constraints. See **constraint_learning.py** for the specific format that is extracted per method. We also save the performance of the model to use in the automated model selection pipeline (if desired). We also create the save directory if it does not exist.
if not os.path.exists('results/%s/' % alg): os.makedirs('results/%s/' % alg) constraintL = opticl.ConstraintLearning(X_train, y_train, m, alg) constraint_add = constraintL.constraint_extrapolation(task_type) constraint_add.to_csv(model_save, index = False) perf.to_csv('results/%s/%s_%s_performance.csv' % (alg, version, outcome), index= False) constraint_add
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Check: what should the result be for our sample observation, if all x are fixed?

Choose sample to test

This will be the observation ("patient") that we feed into the optimization model.
sample_id = 1 sample = X_train.loc[sample_id:sample_id,:].reset_index(drop = True)
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Calculate model prediction directly in sklearn.
m.predict(sample)
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Optimization formulation

We will embed the model trained above. The model could also be selected using the model selection pipeline, which we demonstrate in the WFP example script. If manually specifying the model, as we are here, the key elements of the ``model_master`` dataframe are:

- model_type: algorithm name.
- outcome: name of the outcome of interest; this is relevant in the case of multiple learned outcomes.
- save_path: file name of the extracted model.
- objective: the weight of the objective if it should be included as an additive term in the objective. A weight of 0 omits it from the objective entirely.
- lb/ub: the lower (or upper) bound that we wish to apply to the learned outcome. If there is no bound, it should be set to ``None``.

In this case, we set the outcome to be our only objective term, which will allow us to verify that the predictions are consistent between the embedded model and the sklearn prediction function.
model_master = pd.DataFrame(columns = ['model_type','outcome','save_path','lb','ub','objective']) model_master.loc[0,'model_type'] = alg model_master.loc[0,'save_path'] = 'results/%s/%s_%s_model.csv' % (alg, version, outcome) model_master.loc[0,'outcome'] = outcome model_master.loc[0,'objective'] = 1 model_master.loc[0,'ub'] = None model_master.loc[0,'lb'] = None model_master.loc[0,'task'] = task_type model_master['SCM_counterfactuals'] = None model_master['features'] = [[col for col in X_train.columns]]
_____no_output_____
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Solve with Pyomo
model_pyo = ConcreteModel() ## We will create our x decision variables, and fix them all to our sample's values for model verification. N = X_train.columns model_pyo.x = Var(N, domain=Reals) def fix_value(model_pyo, index): return model_pyo.x[index] == sample.loc[0,index] model_pyo.Constraint1 = Constraint(N, rule=fix_value) ## Specify any non-learned objective components - none here model_pyo.OBJ = Objective(expr=0, sense=minimize) final_model_pyo = opticl.optimization_MIP(model_pyo, model_pyo.x, model_master, X_train, tr = False) # final_model_pyo.pprint() opt = SolverFactory('gurobi') results = opt.solve(final_model_pyo)
Embedding objective function for temp
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
Check for equality between sklearn and embedded models
print("True outcome: %.3f" % m.predict(sample)[0]) print("Pyomo output: %.3f" % final_model_pyo.OBJ())
True outcome: 182.759 Pyomo output: 182.759
MIT
notebooks/Model_Verification/Model_Verification_Regression.ipynb
hwiberg/OptiCL
[View in Colaboratory](https://colab.research.google.com/github/renatopcamara/Colaboratory/blob/master/colab_drive.ipynb)
!apt-get install -y -qq software-properties-common python-software-properties module-init-tools !add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null !apt-get update -qq 2>&1 > /dev/null !apt-get -y install -qq google-drive-ocamlfuse fuse from google.colab import auth auth.authenticate_user() from oauth2client.client import GoogleCredentials creds = GoogleCredentials.get_application_default() import getpass !google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL vcode = getpass.getpass() !echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} # Create a directory and mount Google Drive using that directory. !mkdir -p drive !google-drive-ocamlfuse drive print ('Files in Drive:') !ls drive/ # Create a file in Drive. !echo "This newly created file will appear in your Drive file list." > drive/created.txt
_____no_output_____
MIT
colab_drive.ipynb
renatopcamara/Colaboratory
Awari - Data Science

Exercises Unit 4 - Part 1

In this Jupyter notebook you will solve exercises using the Python language and the Pandas library. All datasets used in the exercises are saved in the *datasets* folder. All your code must be executed in this Jupyter Notebook. Finally, if you wish, review your answers with your mentor.
import pandas as pd
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 1. Importing the data

Load the data saved in the file ***datasets/users_dataset.txt***. This file contains a dataset of workers with 5 columns separated by the "|" (pipe) symbol and 943 rows.

*Hint: use the read_csv function with the sep and index_col parameters*
users = pd.read_csv('users_dataset.txt',sep='|', index_col='user_id')
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 2. Show the first 25 rows of the dataset. *Hint: use the DataFrame's head function*
users.head(25)
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 3. Show the last 10 rows.
users.tail(10)
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 4. What is the number of rows and columns of the DataFrame?
users.shape
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 5. Show the names of all the columns.
users.columns
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 6. What is the data type of each column?
users.dtypes
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 7. Show the data in the *occupation* column.
users['occupation']
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 8. How many different occupations are there in this dataset?
len(users['occupation'].unique())
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 9. What is the most frequent occupation?
users['occupation'].value_counts()
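`value_counts()` sorts the occupations by frequency, so the first row above already answers the question. To extract just the single most frequent occupation, one option (a small addition, not part of the original answer) is:

```python
users['occupation'].value_counts().idxmax()
```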
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Step 10. What is the average age of the users?
users['age'].mean()
_____no_output_____
MIT
Exercicios_unidade_4_Manipulacao_de_Dados_Parte_01.ipynb
felipemoreia/Data-Science-com-Python---Awari
Generating mutually exclusive n-hot coding

Suppose the number of categories is $C$ and the number of output neurons is $m$ ($ n \cdot C \leq m$). To generate mutually exclusive $n$-hot code vectors of size $m$, we went from the first category to the last: for each category $c \in \{0,1,\cdots,C-1\}$ we initialized its code vector with zero elements, then randomly selected $n$ out of the $m - c \cdot n$ elements that were not equal to $1$ in any of the $c$ previously coded category vectors, and set them equal to $1$.
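As a toy illustration of the procedure described above (small $C$, $n$ and $m$; a sketch independent of the notebook's own `get_n_hot_coding_map` below):

```python
import random
import numpy as np

C, n, m = 3, 2, 10            # categories, ones per code vector, output neurons (n * C <= m)
free = list(range(m))          # indices not yet set to 1 in any previous code vector
codes = np.zeros((C, m))
for c in range(C):
    chosen = random.sample(free, n)        # pick n of the remaining m - c*n free indices
    codes[c, chosen] = 1
    free = [i for i in free if i not in chosen]

print(codes)
assert (codes.sum(axis=0) <= 1).all()      # mutually exclusive: no index is 1 for two categories
```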
import random import numpy as np import torch dtype = torch.float device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') def Diff(li1, li2): return list(set(li1) - set(li2)) + list(set(li2) - set(li1)) # coding_layers should be a list of number of neurons in each layer # ones_in_layes should be a list of number of ones in each coding vector def get_n_hot_coding_map(coding_layers , ones_in_layes , number_of_categories ): if(type(coding_layers) != list ): raise Exception('coding_layers is not a list') if(type(ones_in_layes) != list ): raise Exception('ones_in_layes is not a list') if(type(number_of_categories) != int ): raise Exception('number_of_categories is not int') if( len(coding_layers) != len(ones_in_layes) ): raise Exception('inputs len mismatch') coding = [] for i in range(len(coding_layers)): if ( number_of_categories*(ones_in_layes[i] ) > coding_layers[i] ): raise Exception('no a valide coding') coding.append( np.zeros([number_of_categories,coding_layers[i]]) ) initial_indices_list = list(range(coding_layers[i])) for j in range(number_of_categories): indice = random.sample( initial_indices_list , ones_in_layes[i] ) coding[i][j,indice] = 1 k=0 initial_indices_list = Diff(initial_indices_list, indice) coding[i] = torch.tensor(coding[i] , device=device, dtype=dtype, requires_grad=False ) return coding # print(device) # coding = get_n_hot_coding_map([100] , [ 10 ] , 10 ) # # # print(coding) # m = np.zeros( len(coding[0][0]) ) # for i in range(len(coding[0])): # m = m + np.array(coding[0][i]) # print(m) # m = np.ones(len(coding[0][0])) # for i in range(len(coding[0])): # m = m * np.array(coding[0][i]) # print(m) # # print( np.array(coding[1][0]) + np.array(coding[1][1]) + np.array(coding[1][2]) ) # # print( np.array(coding[1][0]) * np.array(coding[1][1]) * np.array(coding[1][2]) ) # # print( np.array([0,2,3]) * np.array([3,2,5]) ) # # i=0 # # print( torch.matmul( coding[i] , torch.transpose(coding[i],0, 1) ) ) # # i=1 # # print( torch.matmul( coding[i] , torch.transpose(coding[i],0, 1) ) ) # coding : list of tensors with size : ( number_of_categories , coding_layers[i]) , coding_layers[i] is number of neurons in layer i # category : 1:can be (Batch size , 1) 2:can be (Batch size ) def code_category(coding , category): if(type(coding) != list ): raise Exception('coding is not a list') if(not torch.is_tensor(category) ): raise Exception('category is not a torch tensor') coded = [] if (len(category.shape) == 1 or (len(category.shape) == 2 and (category.shape[1]) == 1)): category1 = category.view(-1).to(torch.int64 ) for i in range( len(coding) ): coded.append(coding[i][ category1 , :]) #if its one hot code else : raise Exception('category size mismach') return coded # for i in range(len(coding)): # output.append() # a=np.array( [2,1,1,1,1,1,1]) # # print(a.shape) # category = torch.tensor(a).view([-1,1]) # category = torch.tensor([ [0,0,1] , [1,0,0] , [1,0,0] , [1,0,1] ] , dtype=dtype ) # # category = torch.tensor(a) # # category = a # # print(category.shape) # print(category) # # print(category.shape) # code = get_distributed_coding_map([5,3] , [ 2,1] , [1,0] , 3 ) # print(code) # code_category(code , category) # coding : list of tensors with size : ( number_of_categories , coding_layers[i]) , coding_layers[i] is number of neurons in layer i # activity : list of activity of layers with size : ( batch_size , coding_layers[i]) # return : tesor of size : ( batch_size , top ) def decode_category(coding , activity , top=1 , coef=None): if(type(coding) != list ): raise 
Exception('coding is not a list') if(type(activity) != list ): raise Exception('activity is not a list') result = torch.zeros( [ activity[0].shape[0] , coding[0].shape[0] ] , device = device , dtype=dtype, requires_grad=False ) for i in range(len(coding)): if (coef==None): result = result + torch.matmul( activity[i] , torch.transpose( coding[i] , 0 , 1 ) ) else: result = result + coef[i] * torch.matmul( activity[i] , torch.transpose( coding[i] , 0 , 1 ) ) result2 = torch.zeros( [activity[0].shape[0] , top ] , device = device , dtype=dtype, requires_grad=False ) for i in range(top): a,indece = torch.max(result ,dim=1) result2[np.arange(activity[0].shape[0]) , i ] = indece.view([-1]).to(dtype) result[np.arange(result.shape[0]) ,indece.view([-1]) ] = -1 return result2 # code = get_distributed_coding_map([5,3] , [ 2,1] , [1,0] , 3 ) # print(code[0]) # print(code[1]) # code1 = torch.tensor( [[1,0,0,1,0] , [0,1,0,0,0] , [0,0,1,0,0] ] ) # code2 = torch.tensor( [[1,0,0] , [0,1,0] , [0,0,1] ] ) # activity1 = torch.tensor( [[1,1,1,1,0] , [0,1,0,1,0] , [0,0,0,1,0] , [0,0,1,0,0] , [0,0,1,0,0] , [0,1,0,1,0]] ) # activity2 = torch.tensor( [[1,0,1] , [0,0,0] , [0,1,0] , [0,0,1] , [0,1,0] , [0,1,0]] ) # res = decode_category([code1,code2] , [activity1,activity2] ,top=2 , coef = [0.1 , 0.9] ) # print(res) # true_labels size : (batch size , 1) or (batch size) def get_accuracy( true_labels , coding , activity , top=1 , coef=None ): if(type(activity) != list ): raise Exception('activity is not a list') if(true_labels.shape[0] != activity[0].shape[0] or len(true_labels.shape) > 1): raise Exception('true_labels shape mismatch') if(not torch.is_tensor(true_labels) ): raise Exception('true_labels is not a torch tensor') true_labels = true_labels.view([-1,1]) predicted = decode_category(coding , activity , top=top , coef=coef) res = torch.sum(torch.clamp( torch.sum(predicted == true_labels , dim=1) , 0 ,1 )).to(dtype)/true_labels.shape[0] return res.item() # code1 = torch.tensor( [[1,0,0,1,0] , [0,1,0,0,0] , [0,0,1,0,0] ] ) # code2 = torch.tensor( [[1,0,0] , [0,1,0] , [0,0,1] ] ) # activity1 = torch.tensor( [[1,1,1,1,0] , [0,1,0,1,0] , [0,0,0,1,0] , [0,0,1,0,0] , [0,0,1,0,0] , [0,1,0,1,0]] ) # activity2 = torch.tensor( [[1,0,1] , [0,0,0] , [0,1,0] , [0,0,1] , [0,1,0] , [0,1,0]] ) # true_labels = torch.tensor([1,1,0,2,1,2]) # get_accuracy( true_labels , [code1,code2] , [activity1,activity2] ,top=2) # x list of size (batch , coding[0].shape[1] + coding[1].shape[1] + ... ) def seprate(coding , x): if(not torch.is_tensor(x) ): raise Exception('x is not a torch tensor') if(type(coding) != list ): raise Exception('coding is not a list') mlist = [] form_ = 0 til=0 for i in range(len(coding)): til = til + coding[i].shape[1] mlist.append(x[: , form_ : til]) form_ = form_ + coding[i].shape[1] if(til > x.shape[1]): raise Exception('x size mismatch') if(til != x.shape[1]): raise Exception('x size mismatch') return mlist # coding = get_distributed_coding_map([3,4] , [ 1 ,1 ] , [2,1] , 3 ) # x = torch.rand([5,7]) # y =seprate(coding , x) # print(x) # print(y) # x = torch.tensor([[1,2,3.3],[5.4,1,0.2]]) # print(x) # a,b = torch.max(x ,dim=1) # x[np.arange(x.shape[0]) , b ] = -1 # print(b) # print(x) # a,b = torch.max(x ,dim=1) # x[np.arange(x.shape[0]) , b ] = -1 # print(b) # print(x)
_____no_output_____
Apache-2.0
my_modules/my_coding.ipynb
ARahmansetayesh/FeedbackAlignmentWithWeightNormalization
Train solar models
# import packages import json import logging import matplotlib.pyplot as plt import matplotlib.image as mpimg %matplotlib inline import imp import numpy as np import os import random import rasterio import shapely import tensorflow as tf import descarteslabs as dl # Import local modules import train import generator import transforms # Define parameters # Note, setting epochs, steps to 2 for demonstration # For full training, use: # params = train.params # For testing, define the parameters here params = { 'seed': 21, # for train/val data split # Training data specifications # DATASET METADATA # 'data_metadata': { 'products': ['airbus:oneatlas:spot:v2'], 'bands': ['red', 'green', 'blue', 'nir'], 'resolution': 1.5, 'start_datetime': '2016-01-01', 'end_datetime': '2018-12-31', 'tilesize': 512, 'pad': 0, }, # GLOBAL METADATA # 'global_metadata': { 'local_ground': 'ground/', # directory containing image-target pairs 'local_model': 'model/', # directory to write this model }, # MODEL METADATA 'model_name': 'solar_pv_airbus_spot_rgbn_v5', # TRAINING METADATA # # Metadata to define the training stage 'training_kwargs': { 'datalist': 'train_keys.txt', 'batchsize': 16, 'val_datalist': 'val_keys.txt', 'val_batchsize': 16, 'epochs': 1, #150, 'steps_per_epoch': 2, 'image_dim': (512, 512, 4) # This is the size of the training images }, 'transforms': [ transforms.CastTransform(feature_type='float32', target_type='bool'), transforms.SquareImageTransform(), transforms.AdditiveNoiseTransform(additive_noise=30.), transforms.MultiplicativeNoiseTransform(multiplicative_noise=0.3), transforms.NormalizeFeatureTransform(mean=128., std=1.), transforms.FlipFeatureTargetTransform(), ], } print(params['training_kwargs']) # Train the model train.train_from_document(params=params) !cat 'model/train_solar_pv_airbus_spot_rgbn_v5.log'
_____no_output_____
MIT
solarpv/training/spot/train_solar_unet.ipynb
shivareddyiirs/solar-pv-global-inventory
Load the model and predict on one training image
model = tf.keras.models.load_model('model/solar_pv_airbus_spot_rgbn_v5.hdf5') trf = [ transforms.CastTransform(feature_type='float32', target_type='bool'), transforms.SquareImageTransform(), transforms.NormalizeFeatureTransform(mean=128., std=1.), ] kw_train = params['training_kwargs'] data_list = os.path.join(params['global_metadata']['local_ground'], kw_train['datalist']) trn_generator = generator.DataGenerator(data_list, batch_size=2, dim=(512,512, 4), shuffle=False, augment=True, transforms=trf, ) img, trg = trn_generator.__getitem__(0) def img_plt(img): return np.clip((img+128).astype('uint8'), 0, 255) ii=0 fig, ax = plt.subplots(1,2, figsize=(10,8)) ax[0].imshow(img_plt(img[ii,:,:,:3])) ax[1].imshow(img_plt(trg[ii,:,:,:].squeeze())) proba = model.predict(img) proba.shape plt.imshow(proba[0,...,0].squeeze())
_____no_output_____
MIT
solarpv/training/spot/train_solar_unet.ipynb
shivareddyiirs/solar-pv-global-inventory
$\tau$ and delayed-$\tau$ model sanity checks

In this notebook I will check that the SFHs are sensible and integrate to 1. I will also check that the average SSFR does not exceed $1/dt$.
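Before running the checks, a short justification of the $1/dt$ bound (assuming the SFH is normalized per unit total stellar mass formed and expressed in Gyr$^{-1}$, which is an interpretation on my part): since $\int_0^{t_{\rm age}} {\rm SFH}(t)\,{\rm d}t = 1$ (verified below), the average SSFR over a lookback window of length $dt$ satisfies

$$
\overline{\rm SSFR}(dt) \;=\; \frac{1}{dt}\int_{0}^{dt} {\rm SFH}(t)\,{\rm d}t \;\leq\; \frac{1}{dt}\int_{0}^{t_{\rm age}} {\rm SFH}(t)\,{\rm d}t \;=\; \frac{1}{dt}.
$$

So for $dt = 0.1$ Gyr the log SSFR cannot exceed $-8$ and for $dt = 1$ Gyr it cannot exceed $-9$ (in yr$^{-1}$), which is what the dashed lines in the histograms below mark.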
import numpy as np from provabgs import infer as Infer from provabgs import models as Models from astropy.cosmology import Planck13 # --- plotting --- import corner as DFM import matplotlib as mpl import matplotlib.pyplot as plt mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'serif' mpl.rcParams['axes.linewidth'] = 1.5 mpl.rcParams['axes.xmargin'] = 1 mpl.rcParams['xtick.labelsize'] = 'x-large' mpl.rcParams['xtick.major.size'] = 5 mpl.rcParams['xtick.major.width'] = 1.5 mpl.rcParams['ytick.labelsize'] = 'x-large' mpl.rcParams['ytick.major.size'] = 5 mpl.rcParams['ytick.major.width'] = 1.5 mpl.rcParams['legend.frameon'] = False tau_model = Models.FSPS(name='tau') # tau model dtau_model = Models.FSPS(name='delayed_tau') # delayed tau model zred = 0.01 tage = Planck13.age(zred).value prior = Infer.load_priors([ Infer.UniformPrior(0., 0.), Infer.UniformPrior(0.3, 1e1), # tau SFH Infer.UniformPrior(0., 0.2), # constant SFH Infer.UniformPrior(0., tage-2.), # start time Infer.UniformPrior(0., 0.5), # fburst Infer.UniformPrior(0., tage), # tburst Infer.UniformPrior(1e-6, 1e-3), # metallicity Infer.UniformPrior(0., 4.)])
_____no_output_____
MIT
nb/tests/models_taus.ipynb
kgb0255/provabgs
Check that the SFH is sensible
np.random.seed(2) theta = prior.sample() print('tau = %.2f' % theta[1]) print('tstart = %.2f' % theta[3]) print('tburst = %.2f' % theta[5]) t1, sfh1 = tau_model.SFH(theta, zred) t2, sfh2 = dtau_model.SFH(theta, zred) fig = plt.figure(figsize=(10,5)) sub = fig.add_subplot(111) sub.plot(t1, sfh1, label=r'$\tau$ model') sub.plot(t2, sfh2, label=r'delayed-$\tau$ model') sub.legend(loc='upper left', fontsize=20) sub.set_xlabel(r'$t_{\rm lookback}$', fontsize=25) sub.set_xlim(0., tage)
_____no_output_____
MIT
nb/tests/models_taus.ipynb
kgb0255/provabgs
check SFH normalization
for i in range(100): theta = prior.sample() t1, sfh1 = tau_model.SFH(theta, zred) t2, sfh2 = dtau_model.SFH(theta, zred) assert np.abs(np.trapz(sfh1, t1) - 1) < 1e-4, ('int(SFH) = %f' % np.trapz(sfh1, t1)) assert np.abs(np.trapz(sfh2, t2) - 1) < 1e-4, ('int(SFH) = %f' % np.trapz(sfh2, t2))
_____no_output_____
MIT
nb/tests/models_taus.ipynb
kgb0255/provabgs
check average SFR calculation
thetas = np.array([prior.sample() for i in range(50000)]) avgsfr1 = tau_model.avgSFR(thetas, zred, dt=0.1) avgsfr2 = dtau_model.avgSFR(thetas, zred, dt=0.1) fig = plt.figure(figsize=(10,5)) sub = fig.add_subplot(111) sub.hist(np.log10(avgsfr1), range=(-13, -7), bins=100, alpha=0.5) sub.hist(np.log10(avgsfr2), range=(-13, -7), bins=100, alpha=0.5) sub.axvline(-8, color='k', linestyle='--') sub.set_xlabel(r'$\log{\rm SSFR}$', fontsize=25) sub.set_xlim(-13., -7.) avgsfr1 = tau_model.avgSFR(thetas, zred, dt=1) avgsfr2 = dtau_model.avgSFR(thetas, zred, dt=1) fig = plt.figure(figsize=(10,5)) sub = fig.add_subplot(111) sub.hist(np.log10(avgsfr1), range=(-13, -7), bins=100, alpha=0.5) sub.hist(np.log10(avgsfr2), range=(-13, -7), bins=100, alpha=0.5) sub.axvline(-9, color='k', linestyle='--') sub.set_xlabel(r'$\log{\rm SSFR}$', fontsize=25) sub.set_xlim(-13., -7.)
_____no_output_____
MIT
nb/tests/models_taus.ipynb
kgb0255/provabgs
Methods - Text Feature Extraction with Bag-of-Words

In many tasks, like in the classical spam detection, your input data is text. Free text with variable length is very far from the fixed-length numeric representation that we need to do machine learning with scikit-learn. However, there is an easy and effective way to go from text data to a numeric representation using the so-called bag-of-words model, which provides a data structure that is compatible with the machine learning algorithms in scikit-learn.

Let's assume that each sample in your dataset is represented as one string, which could be just a sentence, an email, or a whole news article or book. To represent the sample, we first split the string into a list of tokens, which correspond to (somewhat normalized) words. A simple way to do this is to just split by whitespace and then lowercase each word. Then, we build a vocabulary of all tokens (lowercased words) that appear in our whole dataset. This is usually a very large vocabulary. Finally, looking at our single sample, we could show how often each word in the vocabulary appears. We represent our string by a vector, where each entry is how often a given word in the vocabulary appears in the string. As each sample will only contain very few words, most entries will be zero, leading to a very high-dimensional but sparse representation. The method is called "bag-of-words," as the order of the words is lost entirely.
X = ["Some say the world will end in fire,", "Some say in ice."] len(X) from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer() vectorizer.fit(X) vectorizer.vocabulary_ X_bag_of_words = vectorizer.transform(X) X_bag_of_words.shape X_bag_of_words X_bag_of_words.toarray() vectorizer.get_feature_names() vectorizer.inverse_transform(X_bag_of_words)
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
tf-idf Encoding

A useful transformation that is often applied to the bag-of-words encoding is the so-called term-frequency inverse-document-frequency (tf-idf) scaling, which is a non-linear transformation of the word counts. The tf-idf encoding rescales words that are common to have less weight:
from sklearn.feature_extraction.text import TfidfVectorizer tfidf_vectorizer = TfidfVectorizer() tfidf_vectorizer.fit(X) import numpy as np np.set_printoptions(precision=2) print(tfidf_vectorizer.transform(X).toarray())
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
tf-idfs are a way to represent documents as feature vectors. tf-idfs can be understood as a modification of the raw term frequencies (`tf`); the `tf` is the count of how often a particular word occurs in a given document. The concept behind the tf-idf is to downweight terms proportionally to the number of documents in which they occur. Here, the idea is that terms that occur in many different documents are likely unimportant or don't contain any useful information for Natural Language Processing tasks such as document classification. If you are interested in the mathematical details and equations, see this [external IPython Notebook](http://nbviewer.jupyter.org/github/rasbt/pattern_classification/blob/master/machine_learning/scikit-learn/tfidf_scikit-learn.ipynb) that walks you through the computation.

Bigrams and N-Grams

In the example illustrated in the figure at the beginning of this notebook, we used the so-called 1-gram (unigram) tokenization: each token represents a single element with regard to the splitting criterion. Entirely discarding word order is not always a good idea, as composite phrases often have specific meaning, and modifiers like "not" can invert the meaning of words. A simple way to include some word order is n-grams, which don't only look at a single token, but at all pairs of neighboring tokens. For example, in 2-gram (bigram) tokenization, we would group words together with an overlap of one word; in 3-gram (trigram) splits we would create an overlap of two words, and so forth:

- original text: "this is how you get ants"
- 1-gram: "this", "is", "how", "you", "get", "ants"
- 2-gram: "this is", "is how", "how you", "you get", "get ants"
- 3-gram: "this is how", "is how you", "how you get", "you get ants"

Which "n" we choose for "n-gram" tokenization to obtain the optimal performance in our predictive model depends on the learning algorithm, dataset, and task. In other words, we have to consider "n" in "n-grams" as a tuning parameter, and in later notebooks, we will see how we deal with these.

Now, let's create a bag-of-words model of bigrams using scikit-learn's `CountVectorizer`:
# look at sequences of tokens of minimum length 2 and maximum length 2 bigram_vectorizer = CountVectorizer(ngram_range=(2, 2)) bigram_vectorizer.fit(X) bigram_vectorizer.get_feature_names() bigram_vectorizer.transform(X).toarray()
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
Often we want to include unigrams (single tokens) AND bigrams, which we can do by passing the following tuple as an argument to the `ngram_range` parameter of the `CountVectorizer` function:
gram_vectorizer = CountVectorizer(ngram_range=(1, 2)) gram_vectorizer.fit(X) gram_vectorizer.get_feature_names() gram_vectorizer.transform(X).toarray()
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
Character n-grams

Sometimes it is also helpful not only to look at words, but to consider single characters instead. That is particularly useful if we have very noisy data and want to identify the language, or if we want to predict something about a single word. We can simply look at characters instead of words by setting ``analyzer="char"``. Looking at single characters is usually not very informative, but looking at longer n-grams of characters could be:
X char_vectorizer = CountVectorizer(ngram_range=(2, 2), analyzer="char") char_vectorizer.fit(X) print(char_vectorizer.get_feature_names())
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
EXERCISE:

- Compute the bigrams from the "zen of python" as given below (or by ``import this``), and find the most common trigram. We want to treat each line as a separate document. You can achieve this by splitting the string by newlines (``\n``).
- Compute the tf-idf encoding of the data. Which words have the highest tf-idf score? Why?
- What changes if you use ``TfidfVectorizer(norm="none")``?
zen = """Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those!""" # %load solutions/11_ngrams.py
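One possible approach to the exercise (a sketch only, not necessarily what ``solutions/11_ngrams.py`` does; it reuses the ``zen`` string defined in the cell above):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

lines = zen.split('\n')                          # each line of the zen is one document

# Most common trigram across all lines:
trigram_vec = CountVectorizer(ngram_range=(3, 3))
counts = trigram_vec.fit_transform(lines)
totals = np.asarray(counts.sum(axis=0)).ravel()
print("most common trigram:", trigram_vec.get_feature_names()[totals.argmax()])

# Tf-idf encoding; words concentrated in only a few lines get the highest scores:
tfidf_vec = TfidfVectorizer()
tfidf = tfidf_vec.fit_transform(lines)
```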
_____no_output_____
CC0-1.0
notebooks/11.Text_Feature_Extraction.ipynb
ogrisel/euroscipy-2019-scikit-learn-tutorial
Concept Drift

In the context of data streams, it is assumed that data can change over time. The change in the relationship between the data (features) and the target to learn is known as **Concept Drift**. As examples we can mention the electricity demand across the year, the stock market, and the likelihood of a new movie to be successful. Let's consider the movie example: two movies can have similar features such as popular actors/directors, storyline, production budget, marketing campaigns, etc., yet it is not certain that both will be similarly successful. What the target audience *considers* worth watching (and their money) is constantly changing, and production companies must adapt accordingly to avoid "box office flops".

Impact of drift on learning

Concept drift can have a significant impact on predictive performance if not handled properly. Most batch learning models will fail in the presence of concept drift as they are essentially trained on different data. On the other hand, stream learning methods continuously update themselves and adapt to new concepts. Furthermore, drift-aware methods use change detection methods (a.k.a. drift detectors) to trigger *mitigation mechanisms* if a change in performance is detected.

Detecting concept drift

Multiple drift detection methods have been proposed. The goal of a drift detector is to signal an alarm in the presence of drift. A good drift detector maximizes the number of true positives while keeping the number of false positives to a minimum. It must also be resource-wise efficient to work in the context of infinite data streams.

For this example, we will generate a synthetic data stream by concatenating 3 distributions of 1000 samples each:

- $dist_a$: $\mu=0.8$, $\sigma=0.05$
- $dist_b$: $\mu=0.4$, $\sigma=0.02$
- $dist_c$: $\mu=0.6$, $\sigma=0.1$
import numpy as np import matplotlib.pyplot as plt from matplotlib import gridspec # Generate data for 3 distributions random_state = np.random.RandomState(seed=42) dist_a = random_state.normal(0.8, 0.05, 1000) dist_b = random_state.normal(0.4, 0.02, 1000) dist_c = random_state.normal(0.6, 0.1, 1000) # Concatenate data to simulate a data stream with 2 drifts stream = np.concatenate((dist_a, dist_b, dist_c)) # Auxiliary function to plot the data def plot_data(dist_a, dist_b, dist_c, drifts=None): fig = plt.figure(figsize=(7,3), tight_layout=True) gs = gridspec.GridSpec(1, 2, width_ratios=[3, 1]) ax1, ax2 = plt.subplot(gs[0]), plt.subplot(gs[1]) ax1.grid() ax1.plot(stream, label='Stream') ax2.grid(axis='y') ax2.hist(dist_a, label=r'$dist_a$') ax2.hist(dist_b, label=r'$dist_b$') ax2.hist(dist_c, label=r'$dist_c$') if drifts is not None: for drift_detected in drifts: ax1.axvline(drift_detected, color='red') plt.show() plot_data(dist_a, dist_b, dist_c)
_____no_output_____
BSD-3-Clause
docs/examples/concept-drift-detection.ipynb
online-ml/creme
Drift detection test

We will use the ADaptive WINdowing (`ADWIN`) drift detection method. Remember that the goal is to indicate that drift has occurred after samples **1000** and **2000** in the synthetic data stream.
from river import drift drift_detector = drift.ADWIN() drifts = [] for i, val in enumerate(stream): drift_detector.update(val) # Data is processed one sample at a time if drift_detector.change_detected: # The drift detector indicates after each sample if there is a drift in the data print(f'Change detected at index {i}') drifts.append(i) drift_detector.reset() # As a best practice, we reset the detector plot_data(dist_a, dist_b, dist_c, drifts)
Change detected at index 1055 Change detected at index 2079
BSD-3-Clause
docs/examples/concept-drift-detection.ipynb
online-ml/creme
Extracting ORGs from papers using SpaCy

This notebook is based on the documentation on the [SpaCy Linguistic Features page](https://spacy.io/usage/linguistic-features#section-named-entities). We try to extract ORG named entities from our papers dataset. These are likely to be universities and commercial research groups.
import os import re import spacy DATA_DIR = "../data" TEXTFILES_ORG_DIR = os.path.join(DATA_DIR, "textfiles_org") ORGS_SPACY_DIR = os.path.join(DATA_DIR, "orgs_spacy")
_____no_output_____
Apache-2.0
notebooks/13-org-ner-spacy.ipynb
sujitpal/content-engineering-tutorial
Entity Extractor

The SpaCy entity extractor is __much faster__ compared to NLTK+Stanford.
def extract_entities(tagger, text): entities = [] if text is None: return entities doc = tagger(text) for ent in doc.ents: if ent.label_ == "ORG": entities.append(ent.text) return entities text = """Yann Le Cun, a native of France was not even 30 when he joined AT&T Bell Laboratories in New Jersey. At Bell Labs, LeCun developed a number of new machine learning methods, including the convolutional neural network—modeled after the visual cortex in animals. Today, he serves as chief AI scientist at Facebook, where he works tirelessly towards new breakthroughs.""" text = text.replace("\n", " ") text = re.sub("\s+", " ", text) print(text) nlp = spacy.load("en") entities = extract_entities(nlp, text) print(entities)
Yann Le Cun, a native of France was not even 30 when he joined AT&T Bell Laboratories in New Jersey. At Bell Labs, LeCun developed a number of new machine learning methods, including the convolutional neural network—modeled after the visual cortex in animals. Today, he serves as chief AI scientist at Facebook, where he works tirelessly towards new breakthroughs. ['AT&T Bell Laboratories', 'Bell Labs', 'Facebook']
Apache-2.0
notebooks/13-org-ner-spacy.ipynb
sujitpal/content-engineering-tutorial
Apply to all (preprocessed) text files

The preprocessing was done in the `12-org-ner-nltk-stanford` notebook. It pulls the first 50 lines of the original file in an attempt to focus on the parts of the text that are most likely to contain the ORGs we are interested in, i.e., the affiliations of the authors.
if not os.path.exists(ORGS_SPACY_DIR): os.mkdir(ORGS_SPACY_DIR) def get_text(textfile): lines = [] f = open(textfile, "r") for line in f: lines.append(line.strip()) f.close() text = "\n".join(lines) return text num_written = 0 for textfile in os.listdir(TEXTFILES_ORG_DIR): if num_written % 1000 == 0: print("orgs extracted from {:d} files".format(num_written)) doc_id = int(textfile.split(".")[0]) orgfile = os.path.join(ORGS_SPACY_DIR, "{:d}.org".format(doc_id)) if os.path.exists(orgfile): continue else: text = get_text(os.path.join(TEXTFILES_ORG_DIR, "{:d}.txt".format(doc_id))) entities = extract_entities(nlp, text) entities = list(set(entities)) forgs = open(orgfile, "w") for entity in entities: forgs.write("{:s}\n".format(entity)) forgs.close() num_written += 1 print("orgs extracted from {:d} files, COMPLETE".format(num_written))
orgs extracted from 0 files orgs extracted from 1000 files orgs extracted from 2000 files orgs extracted from 3000 files orgs extracted from 4000 files orgs extracted from 5000 files orgs extracted from 6000 files orgs extracted from 7000 files orgs extracted from 7238 files, COMPLETE
Apache-2.0
notebooks/13-org-ner-spacy.ipynb
sujitpal/content-engineering-tutorial
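Since speed is the selling point here, it may be worth noting that spaCy can also process texts in batches with `nlp.pipe`, which is usually faster than calling the pipeline once per document. A rough sketch of how the per-file loop could be adapted; the function name and batch size are assumptions for illustration, and file I/O is left out:

```python
def extract_org_entities_batch(nlp, texts, batch_size=50):
    """Yield a sorted list of unique ORG entity strings for each input text."""
    for doc in nlp.pipe(texts, batch_size=batch_size):
        orgs = {ent.text for ent in doc.ents if ent.label_ == "ORG"}
        yield sorted(orgs)
```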
# Dictionaries

## Working with Dictionaries

* A collection of key-value pairs where each key is connected to a value.
* Any object you can create in Python can be used as a value in a dictionary.
* Defined with `{}`, using `:` to match keys with values and `,` to separate pairs:
alien_0 = {'color': 'green', 'points': 5}
print(alien_0)
{'color': 'green', 'points': 5}
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
## Accessing Values in a Dictionary

* Access a value by indexing to its key (only if key exists!):
print(alien_0['color'])

# Error: 'origin' is not a key in the dictionary, so this raises a KeyError
print(alien_0['origin'])
_____no_output_____
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
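If you do need to index directly, another standard Python pattern (not part of the book excerpt above) is to catch the `KeyError` yourself:

```python
alien_0 = {'color': 'green', 'points': 5}

try:
    print(alien_0['origin'])
except KeyError:
    print("This alien has no origin!")
```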
* Can also use `get()` with the key as an argument; it will return `None` if the key doesn't exist:
print(alien_0.get('origin'))
None
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
* `get()` also accepts a second argument, which, if provided, will be returned when the key given as the first argument does not exist:
print(alien_0.get('origin','This alien has no origin!'))
This alien has no origin!
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
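The default argument makes `get()` handy for counting without checking for the key first; a small illustrative example (standard Python, not from the book):

```python
word_counts = {}
for word in ['spam', 'eggs', 'spam', 'spam']:
    # get() returns 0 the first time a word is seen, so the tally always works
    word_counts[word] = word_counts.get(word, 0) + 1

print(word_counts)  # {'spam': 3, 'eggs': 1}
```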
* Can add to a dictionary by indexing to a new key and assigning it a value:
alien_0['x_position'] = 0
alien_0['y_position'] = 25
print(alien_0)
{'color': 'green', 'points': 5, 'x_position': 0, 'y_position': 25}
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
* Same to modify a value:
alien_0['x_position'] = 5
print(alien_0)
{'color': 'green', 'points': 5, 'x_position': 5, 'y_position': 25}
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
* Remove a key-value pair with `del`:
del alien_0['points']
print(alien_0)
{'color': 'green', 'x_position': 5, 'y_position': 25}
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
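A related option, again standard Python rather than part of the excerpt, is `pop()`, which removes a key-value pair and returns the value, with an optional default if the key is missing:

```python
alien_0 = {'color': 'green', 'points': 5}

points = alien_0.pop('points')           # removes the key and returns 5
speed = alien_0.pop('speed', 'unknown')  # no 'speed' key, so the default is returned

print(points, speed)
print(alien_0)
```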
## Style

* Multiline dictionaries:
    * Are created with the opening bracket on the first line
    * Have key-value pairs each on their own line and indented 1 level
    * Closing bracket is at the same indent level.
    * Include a comma after the last key-value pair too

```python
favorite_languages = {
    'jen': 'python',
    'sarah': 'c',
    'edward': 'ruby',
    'phil': 'python',
    }
```

Matthes, Eric. Python Crash Course, 2nd Edition (p. 97). No Starch Press. Kindle Edition.

## Looping Through a Dictionary

* Can loop through key-value pairs, keys, or values
* To loop through key-value pairs, use `items()`, which creates a list of key-value pairs; assign 2 variables to iterate:
user_0 = {
    'username': 'dkong',
    'first': 'donkey',
    'last': 'kong',
    }

for key, value in user_0.items():
    print(f"\nKey: {key}")
    print(f"Value: {value}")
Key: username Value: dkong Key: first Value: donkey Key: last Value: kong
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
* To loop through the keys of a dictionary, use `keys()`:
for key in user_0.keys(): print(key)
username first last
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
* OR, simply loop through the dictionary as if it were a list, since looping through the keys is the default behavior in Python:
for key in user_0: print(key)
username first last
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
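To visit keys in a particular order, a common idiom is to wrap the dictionary in `sorted()`; a short aside using the same `user_0` dictionary:

```python
user_0 = {'username': 'dkong', 'first': 'donkey', 'last': 'kong'}

for key in sorted(user_0):
    print(key, '->', user_0[key])
```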
* To loop through values, use the `values()` method:
for value in user_0.values(): print(value)
dkong donkey kong
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
## Sets

* Sets are collections where the elements must be unique
* Can use `set()` to return a copy of a list without duplicates
* No specific order.
languages = {'python', 'ruby', 'python', 'c'}
print(set(languages))
{'ruby', 'c', 'python'}
MIT
Jupyter/PythonCrashCourse2ndEdition/ch6_dictionaries.ipynb
awakun/LearningPython
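One easy pitfall worth noting, a standard Python detail rather than something from the book excerpt: `{}` creates an empty *dictionary*, so an empty set has to be created with `set()`:

```python
empty_dict = {}
empty_set = set()

print(type(empty_dict))  # <class 'dict'>
print(type(empty_set))   # <class 'set'>
```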
Bayesian Statistics Seminar
===

Copyright 2017 Allen Downey

MIT License: https://opensource.org/licenses/MIT
from __future__ import print_function, division

%matplotlib inline

import warnings
warnings.filterwarnings('ignore')

import math
import numpy as np

from thinkbayes2 import Pmf, Suite
import thinkplot
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Working with Pmfs
---

Create a Pmf object to represent a six-sided die.
d6 = Pmf()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
A Pmf is a map from possible outcomes to their probabilities.
for x in [1,2,3,4,5,6]: d6[x] = 1
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Initially the probabilities don't add up to 1.
d6.Print()
1 1 2 1 3 1 4 1 5 1 6 1
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
`Normalize` adds up the probabilities and divides through. The return value is the total probability before normalizing.
d6.Normalize()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Now the Pmf is normalized.
d6.Print()
1 0.166666666667 2 0.166666666667 3 0.166666666667 4 0.166666666667 5 0.166666666667 6 0.166666666667
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
And we can compute its mean (which only works if it's normalized).
d6.Mean()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
`Random` chooses a random value from the Pmf.
d6.Random()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
`thinkplot` provides methods for plotting Pmfs in a few different styles.
thinkplot.Hist(d6)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
**Exercise 1:** The Pmf object provides `__add__`, so you can use the `+` operator to compute the Pmf of the sum of two dice.

Compute and plot the Pmf of the sum of two 6-sided dice.
# Solution

thinkplot.Hist(d6 + d6)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
**Exercise 2:** Suppose I roll two dice and tell you the result is greater than 3.

Plot the Pmf of the remaining possible outcomes and compute its mean.
# Solution

pmf = d6 + d6
pmf[2] = 0
pmf[3] = 0
pmf.Normalize()

thinkplot.Hist(pmf)
pmf.Mean()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
The cookie problem
---

Create a Pmf with two equally likely hypotheses.
cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie.Print()
Bowl 1 0.5 Bowl 2 0.5
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Update each hypothesis with the likelihood of the data (a vanilla cookie).
cookie['Bowl 1'] *= 0.75
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Print the posterior probabilities.
cookie.Print()
Bowl 1 0.6 Bowl 2 0.4
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
**Exercise 3:** Suppose we put the first cookie back, stir, choose again from the same bowl, and get a chocolate cookie.

Hint: The posterior (after the first cookie) becomes the prior (before the second cookie).
# Solution

cookie['Bowl 1'] *= 0.25
cookie['Bowl 2'] *= 0.5
cookie.Normalize()
cookie.Print()
Bowl 1 0.428571428571 Bowl 2 0.571428571429
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
**Exercise 4:** Instead of doing two updates, what if we collapse the two pieces of data into one update?

Re-initialize `Pmf` with two equally likely hypotheses and perform one update based on two pieces of data, a vanilla cookie and a chocolate cookie.

The result should be the same regardless of how many updates you do (or the order of updates).
# Solution

cookie = Pmf(['Bowl 1', 'Bowl 2'])
cookie['Bowl 1'] *= 0.75 * 0.25
cookie['Bowl 2'] *= 0.5 * 0.5
cookie.Normalize()
cookie.Print()
Bowl 1 0.428571428571 Bowl 2 0.571428571429
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
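The same collapsed update can also be written with a `Suite` subclass and a `Likelihood` method, the pattern used for the Euro problem below. A sketch, assuming `Suite` from `thinkbayes2` is imported as above; the `mixes` table simply restates the 0.75/0.25 and 0.5/0.5 likelihoods already used:

```python
class Cookie(Suite):

    mixes = {
        'Bowl 1': {'vanilla': 0.75, 'chocolate': 0.25},
        'Bowl 2': {'vanilla': 0.5,  'chocolate': 0.5},
    }

    def Likelihood(self, data, hypo):
        """data is a cookie flavor; hypo is the bowl it came from."""
        return self.mixes[hypo][data]

cookie = Cookie(['Bowl 1', 'Bowl 2'])
for flavor in ['vanilla', 'chocolate']:
    cookie.Update(flavor)
cookie.Print()
```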
STOP HERE

The Euro problem

**Exercise 5:** Write a class definition for `Euro`, which extends `Suite` and defines a likelihood function that computes the probability of the data (heads or tails) for a given value of `x` (the probability of heads).

Note that `hypo` is in the range 0 to 100. Here's an outline to get you started.
class Euro(Suite):

    def Likelihood(self, data, hypo):
        """
        hypo is the prob of heads (0-100)
        data is a string, either 'H' or 'T'
        """
        return 1

# Solution

class Euro(Suite):

    def Likelihood(self, data, hypo):
        """
        hypo is the prob of heads (0-100)
        data is a string, either 'H' or 'T'
        """
        x = hypo / 100
        if data == 'H':
            return x
        else:
            return 1-x
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
We'll start with a uniform distribution from 0 to 100.
euro = Euro(range(101))
thinkplot.Pdf(euro)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Now we can update with a single heads:
euro.Update('H')
thinkplot.Pdf(euro)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Another heads:
euro.Update('H')
thinkplot.Pdf(euro)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
And a tails:
euro.Update('T')
thinkplot.Pdf(euro)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Starting over, here's what it looks like after 7 heads and 3 tails.
euro = Euro(range(101))

for outcome in 'HHHHHHHTTT':
    euro.Update(outcome)

thinkplot.Pdf(euro)
euro.MaximumLikelihood()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
The maximum posterior probability is 70%, which is the observed proportion.

Here are the posterior probabilities after 140 heads and 110 tails.
euro = Euro(range(101))
evidence = 'H' * 140 + 'T' * 110

for outcome in evidence:
    euro.Update(outcome)

thinkplot.Pdf(euro)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
The posterior mean is about 56%.
euro.Mean()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
So is the value with maximum aposteriori probability (MAP).
euro.MAP()
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
The posterior credible interval has a 90% chance of containing the true value (provided that the prior distribution truly represents our background knowledge).
euro.CredibleInterval(90)
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
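For reference, a similar interval can be read off the posterior by hand. The sketch below assumes `Pmf.MakeCdf` and `Cdf.Percentile` behave as they do in `thinkbayes2`; treat the exact method names as an assumption:

```python
cdf = euro.MakeCdf()
low = cdf.Percentile(5)    # 5th percentile of the posterior
high = cdf.Percentile(95)  # 95th percentile of the posterior
print(low, high)
```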
**Exercise 6:** The following function makes a `Euro` object with a triangle prior.
def TrianglePrior():
    """Makes a Suite with a triangular prior."""
    suite = Euro(label='triangle')
    for x in range(0, 51):
        suite.Set(x, x)
    for x in range(51, 101):
        suite.Set(x, 100-x)
    suite.Normalize()
    return suite
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
And here's what it looks like.
euro1 = Euro(range(101), label='uniform')
euro2 = TrianglePrior()

thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Priors')
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
Update `euro1` and `euro2` with the same data we used before (140 heads and 110 tails) and plot the posteriors.
# Solution

evidence = 'H' * 140 + 'T' * 110

for outcome in evidence:
    euro1.Update(outcome)
    euro2.Update(outcome)

thinkplot.Pdfs([euro1, euro2])
thinkplot.Config(title='Posteriors')
_____no_output_____
MIT
seminar01soln.ipynb
AllenDowney/BayesSeminar
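A quick numerical follow-up using only methods already demonstrated in this notebook: after 250 spins the two posterior means are nearly identical, even though the priors were quite different, which is the point of the exercise (the data swamp the priors):

```python
print(euro1.Mean())  # posterior mean under the uniform prior
print(euro2.Mean())  # posterior mean under the triangle prior
print(abs(euro1.Mean() - euro2.Mean()))
```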