Persist the model

Scikit-learn makes model persistence extraordinarily easy. Everything can be pickled via the "joblib" submodule. There are some exceptions:

1. Classes that contain unbound methods
2. Classes that contain instances of loggers
3. Others...

**In general, this is why we design our transformers to take string args as keys for callables rather than callables themselves!!!**
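Below is a minimal sketch (not from this notebook) of the string-key pattern: the transformer stores only a pickle-safe string and resolves the callable at transform time. The registry name and keys here are hypothetical.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

# hypothetical registry mapping pickle-safe string keys to callables
_FUNCTIONS = {'log1p': np.log1p, 'sqrt': np.sqrt}

class FunctionTransformerByKey(BaseEstimator, TransformerMixin):
    def __init__(self, func_key='log1p'):
        self.func_key = func_key  # a string pickles cleanly; a lambda may not

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        # resolve the callable only when needed, keeping the instance picklable
        return _FUNCTIONS[self.func_key](np.asarray(X))
```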
from sklearn.externals import joblib
import pickle
import os

model_location = "heart_disease_model.pkl"
with open(model_location, "wb") as mod:
    joblib.dump(lgr_search.best_estimator_, mod, protocol=pickle.HIGHEST_PROTOCOL)
assert os.path.exists(model_location)

# demo how we can load and predict in one line!
is_certain_class(joblib.load(model_location).predict_proba(X_test))
_____no_output_____
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
We can also use a Jupyter shell command (the "!" prefix) to see that the pkl file exists in the file system:
!ls | grep "heart_disease_model"
heart_disease_model.pkl
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Accessing the REST API

Once the Flask app is live, we can test its `predict` endpoint:
import requests

# if you have a proxy...
os.environ['NO_PROXY'] = 'localhost'

# test if it's running
url = "http://localhost:5000/predict"

# print the GET result
response = requests.get(url)
print(response.json()['message'])
Send me a valid POST! I accept JSON data only: {data=[...]}
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
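The notebook only shows the client side. For context, here is a minimal sketch of what the server's `predict` endpoint might look like; it is an illustration under assumptions (the app layout and messages are inferred from the GET/POST responses shown here), not the book's actual app code:

```python
from flask import Flask, jsonify, request
import numpy as np
from sklearn.externals import joblib  # legacy import path, matching the notebook

app = Flask(__name__)
model = joblib.load("heart_disease_model.pkl")

@app.route("/predict", methods=["GET", "POST"])
def predict():
    if request.method == "GET":
        # mirror the message the client receives above
        return jsonify({"message": "Send me a valid POST! I accept JSON data only: {data=[...]}"})
    X = np.asarray(request.get_json()["data"])
    preds = model.predict(X).tolist()
    return jsonify({"message": f"Valid POST (n_samples={len(preds)})",
                    "predictions": preds})

if __name__ == "__main__":
    app.run(port=5000)
```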
Sending data

Let's create a function that accepts a chunk of data, converts it to JSON, and ships it to the REST API:
import json
import numpy as np

headers = {'Content-Type': 'application/json'}

def get_predictions(data, url, headers):
    data = np.asarray(data)

    # if data is a vector and not a matrix, we need a vec...
    if len(data.shape) == 1:
        data = np.asarray([data.tolist()])

    # make a JSON out of it
    jdata = json.dumps({'data': data.tolist()})
    response = requests.post(url, data=jdata, headers=headers).json()

    print(response['message'])
    return response['predictions']

# ship last few for X_test
print(get_predictions(X_test[-10:], url, headers))
Valid POST (n_samples=10)
[1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
MIT
code/Heart Disease.ipynb
PacktPublishing/Hands-On-Machine-Learning-with-Python-and-Scikit-Learn
Protein structure prediction with AlphaFold2 and MMseqs2

Easy-to-use version of AlphaFold 2 (Jumper et al. 2021, Nature) using an API hosted at the Södinglab, based on the MMseqs2 server (Mirdita et al. 2019, Bioinformatics), for multiple sequence alignment creation.

**Quickstart**
1. Change the runtime type to GPU at "Runtime" -> "Change runtime type" (improves speed)
2. Paste your protein sequence in the input field below
3. Press "Runtime" -> "Run all"
4. The pipeline has 8 steps. The currently running step is indicated by a circle with a stop sign next to it.

**Result**
We produce two result files: (1) a PDB-formatted structure and (2) a plot of the model quality. At the end of the computation a download modal box will pop up with a `result.tar.gz` file.

**Troubleshooting**
* Try to restart the session: "Runtime" -> "Factory reset runtime"
* Check your input sequence

**Limitations**
* MSAs: MMseqs2 might not find as many hits compared to HHblits/HMMer searched against BFD and Mgnify.
* Templates: Currently we do not use template information. But this is work in progress.
* Computing resources: MMseqs2 can probably handle >20k requests per day since we run it on only 16 cores.

For best results, we recommend using the full pipeline: https://github.com/deepmind/alphafold

Most of the Python code was written by Sergey Ovchinnikov (@sokrypton). The API is hosted at the Södinglab (@SoedingL) and maintained by Milot Mirdita (@milot_mirdita). Martin Steinegger (@thesteinegger) integrated everything.
#@title Input protein sequence here before you "Run all"
query_sequence = 'MAKTIKITQTRSAIGRLPKHKATLLGLGLRRIGHTVEREDTPAIRGMINAVSFMVKVEE' #@param {type:"string"}
# remove whitespaces
query_sequence = "".join(query_sequence.split())

jobname = 'RL30_ECOLI' #@param {type:"string"}
# remove whitespaces
jobname = "".join(jobname.split())

with open(f"{jobname}.fasta", "w") as text_file:
    text_file.write(">1\n%s" % query_sequence)

# number of models to use
#@markdown ---
#@markdown ### Advanced settings
num_models = 5 #@param [1,2,3,4,5] {type:"raw"}
use_amber = True #@param {type:"boolean"}
use_msa = True #@param {type:"boolean"}
#@markdown ---

#@title Install dependencies
%%bash -s "$use_amber"

if [ ! -f AF2_READY ]; then
  # install dependencies
  apt-get -qq -y update 2>&1 1>/dev/null
  apt-get -qq -y install jq curl zlib1g gawk 2>&1 1>/dev/null
  pip -q install biopython 2>&1 1>/dev/null
  pip -q install dm-haiku 2>&1 1>/dev/null
  pip -q install ml-collections 2>&1 1>/dev/null
  pip -q install py3Dmol 2>&1 1>/dev/null
  touch AF2_READY
fi

# download model
if [ ! -d "alphafold/" ]; then
  git clone https://github.com/deepmind/alphafold.git --quiet
  mv alphafold alphafold_
  mv alphafold_/alphafold .
fi

# download model params (~1 min)
if [ ! -d "params/" ]; then
  wget -qnc https://storage.googleapis.com/alphafold/alphafold_params_2021-07-14.tar
  mkdir params
  tar -xf alphafold_params_2021-07-14.tar -C params/
  rm alphafold_params_2021-07-14.tar
fi

# install openmm for refinement
if [ $1 == "True" ] && [ ! -f "alphafold/common/stereo_chemical_props.txt" ]; then
  wget -qnc https://git.scicore.unibas.ch/schwede/openstructure/-/raw/7102c63615b64735c4941278d92b554ec94415f8/modules/mol/alg/src/stereo_chemical_props.txt
  mv stereo_chemical_props.txt alphafold/common/
  wget -qnc https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  bash Miniconda3-latest-Linux-x86_64.sh -bfp /usr/local 2>&1 1>/dev/null
  conda install -y -q -c conda-forge openmm=7.5.1 python=3.7 pdbfixer 2>&1 1>/dev/null
  (cd /usr/local/lib/python3.7/site-packages; patch -s -p0 < /content/alphafold_/docker/openmm.patch)
fi

#@title Build MSA
%%bash -s "$use_msa" "$jobname"

if [ $1 == "True" ]; then
  if [ -f $2.result.tar.gz ]; then
    echo "looks done"
    tar xzf $2.result.tar.gz
    tr -d '\000' < uniref.a3m > $2.a3m
  else
    # build msa using the MMseqs2 search server
    echo "submitting job"
    ID=$(curl -s -F q=@$2.fasta -F mode=all https://a3m.mmseqs.com/ticket/msa | jq -r '.id')
    STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
    while [ "${STATUS}" == "RUNNING" ]; do
      STATUS=$(curl -s https://a3m.mmseqs.com/ticket/${ID} | jq -r '.status')
      sleep 1
    done
    if [ "${STATUS}" == "COMPLETE" ]; then
      curl -s https://a3m.mmseqs.com/result/download/${ID} > $2.result.tar.gz
      tar xzf $2.result.tar.gz
      tr -d '\000' < uniref.a3m > $2.a3m
    else
      echo "MMseqs2 server did not return a valid result."
      exit 1
    fi
  fi
  echo "Found $(grep -c ">" $2.a3m) sequences (after redundancy filtering)"
else
  cp $2.fasta $2.a3m
fi

#@title Setup model
# the following code is written by Sergey Ovchinnikov

# setup the model
if "model" not in dir():
  import warnings
  warnings.filterwarnings('ignore')
  import os
  import sys
  os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
  import tensorflow as tf
  import numpy as np
  import pickle
  import py3Dmol
  import matplotlib.pyplot as plt
  from alphafold.common import protein
  from alphafold.data import pipeline
  from alphafold.data import templates
  from alphafold.model import data
  from alphafold.model import config
  from alphafold.model import model
  import ipywidgets
  from ipywidgets import interact, fixed
  tf.get_logger().setLevel('ERROR')

if use_amber and "relax" not in dir():
  sys.path.insert(0, '/usr/local/lib/python3.7/site-packages/')
  from alphafold.relax import relax

if "model_params" not in dir(): model_params = {}
for model_name in ["model_1","model_2","model_3","model_4","model_5"][:num_models]:
  if model_name not in model_params:
    model_config = config.model_config(model_name)
    model_config.data.eval.num_ensemble = 1
    model_params[model_name] = data.get_model_haiku_params(model_name=model_name, data_dir=".")
    if model_name == "model_1":
      model_runner_1 = model.RunModel(model_config, model_params[model_name])
    if model_name == "model_3":
      model_runner_3 = model.RunModel(model_config, model_params[model_name])

def mk_mock_template(query_sequence):
  # since alphafold's model requires a template input
  # we create a blank example w/ zero input, confidence -1
  ln = len(query_sequence)
  output_templates_sequence = "-"*ln
  output_confidence_scores = np.full(ln,-1)
  templates_all_atom_positions = np.zeros((ln, templates.residue_constants.atom_type_num, 3))
  templates_all_atom_masks = np.zeros((ln, templates.residue_constants.atom_type_num))
  templates_aatype = templates.residue_constants.sequence_to_onehot(output_templates_sequence,
                                                                    templates.residue_constants.HHBLITS_AA_TO_ID)
  template_features = {'template_all_atom_positions': templates_all_atom_positions[None],
                       'template_all_atom_masks': templates_all_atom_masks[None],
                       'template_sequence': [f'none'.encode()],
                       'template_aatype': np.array(templates_aatype)[None],
                       'template_confidence_scores': output_confidence_scores[None],
                       'template_domain_names': [f'none'.encode()],
                       'template_release_date': [f'none'.encode()]}
  return template_features

def set_bfactor(pdb_filename, bfac):
  I = open(pdb_filename,"r").readlines()
  O = open(pdb_filename,"w")
  for line in I:
    if line[0:6] == "ATOM  ":
      seq_id = int(line[23:26].strip()) - 1
      O.write("{prefix}{bfac:6.2f}{suffix}".format(prefix=line[:60], bfac=bfac[seq_id], suffix=line[66:]))
  O.close()

def predict_structure(prefix, feature_dict, do_relax=True, random_seed=0):
  """Predicts structure using AlphaFold for the given sequence."""

  # Run the models.
  plddts = []
  unrelaxed_pdb_lines = []
  relaxed_pdb_lines = []

  for model_name, params in model_params.items():
    print(f"running {model_name}")
    # swap params to avoid recompiling
    # note: models 1,2 have diff number of params compared to models 3,4,5
    if any(str(m) in model_name for m in [1,2]): model_runner = model_runner_1
    if any(str(m) in model_name for m in [3,4,5]): model_runner = model_runner_3
    model_runner.params = params

    processed_feature_dict = model_runner.process_features(feature_dict, random_seed=random_seed)
    prediction_result = model_runner.predict(processed_feature_dict)
    unrelaxed_protein = protein.from_prediction(processed_feature_dict, prediction_result)
    unrelaxed_pdb_lines.append(protein.to_pdb(unrelaxed_protein))
    plddts.append(prediction_result['plddt'])

    if do_relax:
      # Relax the prediction.
      amber_relaxer = relax.AmberRelaxation(max_iterations=0, tolerance=2.39,
                                            stiffness=10.0, exclude_residues=[],
                                            max_outer_iterations=20)
      relaxed_pdb_str, _, _ = amber_relaxer.process(prot=unrelaxed_protein)
      relaxed_pdb_lines.append(relaxed_pdb_str)

  # rerank models based on predicted lddt
  lddt_rank = np.mean(plddts,-1).argsort()[::-1]
  plddts_ranked = {}
  for n,r in enumerate(lddt_rank):
    print(f"model_{n+1} {np.mean(plddts[r])}")

    unrelaxed_pdb_path = f'{prefix}_unrelaxed_model_{n+1}.pdb'
    with open(unrelaxed_pdb_path, 'w') as f: f.write(unrelaxed_pdb_lines[r])
    set_bfactor(unrelaxed_pdb_path, plddts[r]/100)

    if do_relax:
      relaxed_pdb_path = f'{prefix}_relaxed_model_{n+1}.pdb'
      with open(relaxed_pdb_path, 'w') as f: f.write(relaxed_pdb_lines[r])
      set_bfactor(relaxed_pdb_path, plddts[r]/100)

    plddts_ranked[f"model_{n+1}"] = plddts[r]

  return plddts_ranked

#@title Predict structure
a3m_lines = "".join(open(f"{jobname}.a3m","r").readlines())
msa, deletion_matrix = pipeline.parsers.parse_a3m(a3m_lines)
query_sequence = msa[0]

feature_dict = {
    **pipeline.make_sequence_features(sequence=query_sequence, description="none", num_res=len(query_sequence)),
    **pipeline.make_msa_features(msas=[msa], deletion_matrices=[deletion_matrix]),
    **mk_mock_template(query_sequence)
}

plddts = predict_structure(jobname, feature_dict, do_relax=use_amber)

#@title Plot lDDT per residue
# confidence per position
plt.figure(dpi=100)
for model_name, value in plddts.items():
  plt.plot(value, label=model_name)
plt.legend()
plt.ylim(0, 100)
plt.ylabel("predicted lDDT")
plt.xlabel("positions")
plt.savefig(jobname+"_lDDT.png")
plt.show()

#@title Plot Number of Sequences per Position
# confidence per position
plt.figure(dpi=100)
plt.plot((feature_dict["msa"] != 21).sum(0))
plt.xlabel("positions")
plt.ylabel("number of sequences")
plt.show()

#@title Show 3D structure
def show_pdb(model_name, show_sidechains=False, show_mainchain=False, color="None"):

  def mainchain(p, color="white", model=0):
    BB = ['C','O','N','CA']
    p.addStyle({"model":model,'atom':BB},
               {'stick':{'colorscheme':f"{color}Carbon",'radius':0.4}})

  def sidechain(p, model=0):
    HP = ["ALA","GLY","VAL","ILE","LEU","PHE","MET","PRO","TRP","CYS","TYR"]
    BB = ['C','O','N']
    p.addStyle({"model":model,'and':[{'resn':HP},{'atom':BB,'invert':True}]},
               {'stick':{'colorscheme':"yellowCarbon",'radius':0.4}})
    p.addStyle({"model":model,'and':[{'resn':"GLY"},{'atom':'CA'}]},
               {'sphere':{'colorscheme':"yellowCarbon",'radius':0.4}})
    p.addStyle({"model":model,'and':[{'resn':"PRO"},{'atom':['C','O'],'invert':True}]},
               {'stick':{'colorscheme':"yellowCarbon",'radius':0.4}})
    p.addStyle({"model":model,'and':[{'resn':HP,'invert':True},{'atom':BB,'invert':True}]},
               {'stick':{'colorscheme':"whiteCarbon",'radius':0.4}})

  if use_amber:
    pdb_filename = f"{jobname}_relaxed_{model_name}.pdb"
  else:
    pdb_filename = f"{jobname}_unrelaxed_{model_name}.pdb"

  p = py3Dmol.view(js='https://3dmol.org/build/3Dmol.js')
  p.addModel(open(pdb_filename,'r').read(),'pdb')

  if color == "lDDT":
    p.setStyle({'cartoon': {'colorscheme': {'prop':'b','gradient': 'roygb','min':0,'max':1}}})
  elif color == "rainbow":
    p.setStyle({'cartoon': {'color':'spectrum'}})
  else:
    p.setStyle({'cartoon':{}})

  if show_sidechains: sidechain(p)
  if show_mainchain: mainchain(p)

  p.zoomTo()
  return p.show()

interact(show_pdb,
         model_name=ipywidgets.Dropdown(options=model_params.keys(), value='model_1'),
         show_sidechains=ipywidgets.Checkbox(value=False),
         show_mainchain=ipywidgets.Checkbox(value=False),
         color=ipywidgets.Dropdown(options=['None', 'rainbow', 'lDDT'], value='lDDT'))

#@title Download result
!tar cfz $jobname".result.tar.gz" $jobname"_"*"relaxed_model_"*".pdb" $jobname"_lDDT.png"
from google.colab import files
files.download(f"{jobname}.result.tar.gz")
_____no_output_____
MIT
AlphaFold2PredictStructure.ipynb
hongqin/alphafold-local-sandbox
PySINDy Package Feature Overview

This notebook provides a simple overview of the basic functionality of the PySINDy software package. In addition to demonstrating the basic usage for fitting a SINDy model, we demonstrate several means of customizing the SINDy fitting procedure. These include different forms of input data, different optimization methods, different differentiation methods, and custom feature libraries.
import warnings

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
from scipy.integrate import odeint
from sklearn.linear_model import Lasso

import pysindy as ps

%matplotlib inline
warnings.filterwarnings('ignore')
_____no_output_____
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Basic usage
def lorenz(z, t):
    return [
        10 * (z[1] - z[0]),
        z[0] * (28 - z[2]) - z[1],
        z[0] * z[1] - (8 / 3) * z[2]
    ]
_____no_output_____
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Train the model
dt = .002

t_train = np.arange(0, 10, dt)
x0_train = [-8, 8, 27]
x_train = odeint(lorenz, x0_train, t_train)

model = ps.SINDy()
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Assess results on a test trajectory
t_test = np.arange(0, 15, dt)
x0_test = np.array([8, 7, 15])
x_test = odeint(lorenz, x0_test, t_test)

x_test_sim = model.simulate(x0_test, t_test)

x_dot_test_computed = model.differentiate(x_test, t=dt)
x_dot_test_predicted = model.predict(x_test)

print('Model score: %f' % model.score(x_test, t=dt))
Model score: 1.000000
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Predict derivatives with learned model
fig, axs = plt.subplots(x_test.shape[1], 1, sharex=True, figsize=(7, 9))
for i in range(x_test.shape[1]):
    axs[i].plot(t_test, x_dot_test_computed[:, i], 'k', label='numerical derivative')
    axs[i].plot(t_test, x_dot_test_predicted[:, i], 'r--', label='model prediction')
    axs[i].legend()
    axs[i].set(xlabel='t', ylabel='$\dot x_{}$'.format(i))
fig.show()
_____no_output_____
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Simulate forward in time
fig, axs = plt.subplots(x_test.shape[1], 1, sharex=True, figsize=(7, 9))
for i in range(x_test.shape[1]):
    axs[i].plot(t_test, x_test[:, i], 'k', label='true simulation')
    axs[i].plot(t_test, x_test_sim[:, i], 'r--', label='model simulation')
    axs[i].legend()
    axs[i].set(xlabel='t', ylabel='$x_{}$'.format(i))

fig = plt.figure(figsize=(10, 4.5))
ax1 = fig.add_subplot(121, projection='3d')
ax1.plot(x_test[:, 0], x_test[:, 1], x_test[:, 2], 'k')
ax1.set(xlabel='$x_0$', ylabel='$x_1$', zlabel='$x_2$', title='true simulation')

ax2 = fig.add_subplot(122, projection='3d')
ax2.plot(x_test_sim[:, 0], x_test_sim[:, 1], x_test_sim[:, 2], 'r--')
ax2.set(xlabel='$x_0$', ylabel='$x_1$', zlabel='$x_2$', title='model simulation')
fig.show()
_____no_output_____
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Different forms of input data

Single trajectory, pass in collection times
model = ps.SINDy()
model.fit(x_train, t=t_train)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Single trajectory, pass in pre-computed derivatives
x_dot_true = np.zeros(x_train.shape)
for i in range(t_train.size):
    x_dot_true[i] = lorenz(x_train[i], t_train[i])

model = ps.SINDy()
model.fit(x_train, t=t_train, x_dot=x_dot_true)
model.print()
x0' = -10.000 x0 + 10.000 x1
x1' = 28.000 x0 + -1.000 x1 + -1.000 x0 x2
x2' = -2.667 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Multiple trajectories
n_trajectories = 20
x0s = np.array([36, 48, 41]) * (np.random.rand(n_trajectories, 3) - 0.5) + np.array([0, 0, 25])
x_train_multi = []
for i in range(n_trajectories):
    x_train_multi.append(odeint(lorenz, x0s[i], t_train))

model = ps.SINDy()
model.fit(x_train_multi, t=dt, multiple_trajectories=True)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Multiple trajectories, different lengths of time
n_trajectories = 20
x0s = np.array([36, 48, 41]) * (np.random.rand(n_trajectories, 3) - 0.5) + np.array([0, 0, 25])
x_train_multi = []
t_train_multi = []
for i in range(n_trajectories):
    n_samples = np.random.randint(500, 1500)
    t = np.arange(0, n_samples * dt, dt)
    x_train_multi.append(odeint(lorenz, x0s[i], t))
    t_train_multi.append(t)

model = ps.SINDy()
model.fit(x_train_multi, t=t_train_multi, multiple_trajectories=True)
model.print()
x0' = -10.000 x0 + 10.000 x1
x1' = 27.993 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Discrete time dynamical system (map)
def f(x):
    return 3.6 * x * (1 - x)

n_steps = 1000
eps = 0.001

x_train_map = np.zeros((n_steps))
x_train_map[0] = 0.5
for i in range(1, n_steps):
    x_train_map[i] = f(x_train_map[i - 1]) + eps * np.random.randn()

model = ps.SINDy(discrete_time=True)
model.fit(x_train_map)
model.print()
x0[k+1] = 3.600 x0[k] + -3.600 x0[k]^2
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Optimization options

STLSQ - change parameters
stlsq_optimizer = ps.STLSQ(threshold=.01, alpha=.5)

model = ps.SINDy(optimizer=stlsq_optimizer)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
SR3
sr3_optimizer = ps.SR3(threshold=0.1, nu=1)

model = ps.SINDy(optimizer=sr3_optimizer)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
LASSO
lasso_optimizer = Lasso(alpha=100, fit_intercept=False)

model = ps.SINDy(optimizer=lasso_optimizer)
model.fit(x_train, t=dt)
model.print()
x0' = -0.310 x0 x2 + 0.342 x1 x2 + -0.002 x2^2
x1' = 15.952 x1 + 0.009 x0 x1 + -0.219 x0 x2 + -0.474 x1 x2 + 0.007 x2^2
x2' = 0.711 x0^2 + 0.533 x0 x1 + -0.005 x1 x2 + -0.119 x2^2
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Differentiation options

Pass in pre-computed derivatives
x_dot_precomputed = ps.FiniteDifference()._differentiate(x_train, t_train)

model = ps.SINDy()
model.fit(x_train, t=t_train, x_dot=x_dot_precomputed)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Drop end points from finite difference computation
fd_dropEndpoints = ps.FiniteDifference(drop_endpoints=True)

model = ps.SINDy(differentiation_method=fd_dropEndpoints)
model.fit(x_train, t=t_train)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.998 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Smoothed finite difference
smoothedFD = ps.SmoothedFiniteDifference()

model = ps.SINDy(differentiation_method=smoothedFD)
model.fit(x_train, t=t_train)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 27.992 x0 + -0.998 x1 + -1.000 x0 x2
x2' = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Feature libraries

Custom feature names
feature_names = ['x', 'y', 'z']

model = ps.SINDy(feature_names=feature_names)
model.fit(x_train, t=dt)
model.print()
x' = -9.999 x + 9.999 y
y' = 27.992 x + -0.999 y + -1.000 x z
z' = -2.666 z + 1.000 x y
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Custom left hand side when printing the model
model = ps.SINDy()
model.fit(x_train, t=dt)
model.print(lhs=['dx0/dt', 'dx1/dt', 'dx2/dt'])
dx0/dt = -9.999 x0 + 9.999 x1
dx1/dt = 27.992 x0 + -0.999 x1 + -1.000 x0 x2
dx2/dt = -2.666 x2 + 1.000 x0 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Customize polynomial library
poly_library = ps.PolynomialLibrary(include_interaction=False)

model = ps.SINDy(feature_library=poly_library)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = -72.092 1 + -13.015 x0 + 9.230 x1 + 9.452 x2 + 0.598 x0^2 + -0.289 x1^2 + -0.247 x2^2
x2' = -41.053 1 + 0.624 x0 + -0.558 x1 + 2.866 x2 + 1.001 x0^2 + 0.260 x1^2 + -0.176 x2^2
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Fourier library
fourier_library = ps.FourierLibrary(n_frequencies=3)

model = ps.SINDy(feature_library=fourier_library)
model.fit(x_train, t=dt)
model.print()
x0' = 0.361 sin(1 x0) + 1.015 cos(1 x0) + 6.068 cos(1 x1) + -2.618 sin(1 x2) + 4.012 cos(1 x2) + -0.468 cos(2 x0) + -0.326 sin(2 x1) + -0.883 cos(2 x1) + 0.353 sin(2 x2) + 0.281 cos(2 x2) + 0.436 sin(3 x0) + 0.134 cos(3 x0) + 2.860 sin(3 x1) + 0.780 cos(3 x1) + 2.413 sin(3 x2) + -1.869 cos(3 x2)
x1' = -2.693 sin(1 x0) + -4.096 cos(1 x0) + -0.425 sin(1 x1) + 0.466 cos(1 x1) + -4.697 sin(1 x2) + 7.946 cos(1 x2) + -1.108 sin(2 x0) + -5.631 cos(2 x0) + -1.089 sin(2 x1) + -1.079 cos(2 x1) + 3.288 sin(2 x2) + 1.151 cos(2 x2) + 1.949 sin(3 x0) + 4.829 cos(3 x0) + -0.555 sin(3 x1) + 0.635 cos(3 x1) + 4.257 sin(3 x2) + -3.065 cos(3 x2)
x2' = 5.015 sin(1 x0) + 4.775 cos(1 x0) + 5.615 sin(1 x1) + -3.112 cos(1 x1) + -1.019 sin(1 x2) + 0.345 cos(1 x2) + 2.535 sin(2 x0) + 2.008 cos(2 x0) + -4.250 sin(2 x1) + -13.784 cos(2 x1) + -0.797 sin(2 x2) + 2.578 sin(3 x0) + -0.399 cos(3 x0) + 5.208 sin(3 x1) + -8.286 cos(3 x1) + 0.720 sin(3 x2) + -0.229 cos(3 x2)
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Fully custom library
library_functions = [
    lambda x: np.exp(x),
    lambda x: 1./x,
    lambda x: x,
    lambda x, y: np.sin(x + y)
]
library_function_names = [
    lambda x: 'exp(' + x + ')',
    lambda x: '1/' + x,
    lambda x: x,
    lambda x, y: 'sin(' + x + ',' + y + ')'
]
custom_library = ps.CustomLibrary(
    library_functions=library_functions, function_names=library_function_names
)

model = ps.SINDy(feature_library=custom_library)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = 1.407 1/x0 + -48.091 1/x2 + -12.472 x0 + 9.296 x1 + 0.381 x2 + 0.879 sin(x0,x1) + 1.896 sin(x0,x2) + -0.468 sin(x1,x2)
x2' = 1.094 1/x0 + -7.674 1/x2 + 0.102 x0 + 0.157 x1 + 3.603 sin(x0,x1) + -3.323 sin(x0,x2) + -3.047 sin(x1,x2)
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Fully custom library, default function names
library_functions = [
    lambda x: np.exp(x),
    lambda x: 1./x,
    lambda x: x,
    lambda x, y: np.sin(x + y)
]
custom_library = ps.CustomLibrary(library_functions=library_functions)

model = ps.SINDy(feature_library=custom_library)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 f2(x0) + 9.999 f2(x1)
x1' = 1.407 f1(x0) + -48.091 f1(x2) + -12.472 f2(x0) + 9.296 f2(x1) + 0.381 f2(x2) + 0.879 f3(x0,x1) + 1.896 f3(x0,x2) + -0.468 f3(x1,x2)
x2' = 1.094 f1(x0) + -7.674 f1(x2) + 0.102 f2(x0) + 0.157 f2(x1) + 3.603 f3(x0,x1) + -3.323 f3(x0,x2) + -3.047 f3(x1,x2)
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Identity library
identity_library = ps.IdentityLibrary()

model = ps.SINDy(feature_library=identity_library)
model.fit(x_train, t=dt)
model.print()
x0' = -9.999 x0 + 9.999 x1
x1' = -12.451 x0 + 9.314 x1 + 0.299 x2
x2' = 0.159 x0 + 0.101 x1
MIT
example/feature_overview.ipynb
eigensteve/pysindy
Data cleaning

* Data cleaning is the process of fixing or removing incorrect, corrupted, incorrectly formatted, duplicate, or incomplete data within a dataset...
# Task:
# Delete unnecessary rows and columns
# Clean Rank with value '0'
# Clean '-' values with column mean
# Clean '.' with column mean
# Convert Lender Asset Size to a category like 0, 1, 2 etc. based on levels available
# Make sure every column is in its respective data type
# Save the cleaned data as a csv file

import pandas as pd

df = pd.read_csv(r'C:\Users\Jyotiranjan padhi\Desktop\bepec files\Lending_Data.csv')
df.head()
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-1
# Delete unnecessary rows and columns
del df['Unnamed: 10']  # here all values are NaN

# Use drop() to delete the rows: pass the index labels and delete row-wise with axis=0
df = df.drop([0, 94, 95, 96], axis=0)
df.head()
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-2
# Clean Rank with value '0'
df["Rank"] = df["Rank"].replace("NR", 0)
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-3
# Clean '-' values with column mean
import numpy as np

df = df.replace('-', np.nan)
df = df.replace(' - ', np.nan)
df.tail()

df['Amount ($1,000)'].unique()
# There are commas in between the integers; they need to be removed

df.columns
columns = ['TA Ratio1', 'TBL Ratio1', 'Amount ($1,000)', 'Number ',
           'Amount ($1,000).1', 'Number .1', 'Amount ($1,000).2', 'Number .2']
for i in columns:
    df[i] = df[i].replace(to_replace='[^0-9]', value="", regex=True)

df['Amount ($1,000)'].unique()
# Now there are no commas in between the integers, so we can move on

# Convert everything to numeric format
for i in columns:
    df[i] = pd.to_numeric(df[i])

# Fill all null values with the column mean
for i in columns:
    mean = df[i].mean()
    df[i] = df[i].fillna(mean)
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-4
# Clean '.' with column mean
df['CC Amount/TA1'].unique()
df['CC Amount/TA1'] = df['CC Amount/TA1'].replace(' . ', np.nan)
df.tail()

df['CC Amount/TA1'] = pd.to_numeric(df['CC Amount/TA1'])
df['CC Amount/TA1'] = df['CC Amount/TA1'].fillna(df['CC Amount/TA1'].mean())
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-5
# Convert Lender Asset Size to a category like 0, 1, 2 etc. based on levels available
df['Lender Asset Size'].unique()

# Use a lambda function to convert these to a category
df['Lender Asset Size'] = df['Lender Asset Size'].apply(
    lambda x: 0 if x == '>$10B ' else (0 if x == ' >$10B ' else 1 if x == ' $10B-$50B ' else 2))
df.head()

# Another way:
"""
Lending_data['Lender Asset Size'] = Lending_data['Lender Asset Size'].replace(
    [' >$50B ', ' $10B-$50B ', ' >$10B ', '>$50B ', '>$10B '], [2, 1, 0, 2, 0])
"""

# Another way: we can use label encoding as well
"""
from sklearn.preprocessing import LabelEncoder
Label_encoder = LabelEncoder()
df = Label_encoder.fit(Lending_data['Lender Asset Size'])
Lending_data['Lender Asset Size'] = df.transform(Lending_data['Lender Asset Size'])
"""

df.head()
df.tail()
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-6
# Make sure every column is in its respective data type
df.info()

df['Rank'] = pd.to_numeric(df['Rank'])
df['Lender Asset Size'] = df['Lender Asset Size'].astype("object")
df.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 93 entries, 1 to 93
Data columns (total 13 columns):
 #   Column                       Non-Null Count  Dtype  
---  ------                       --------------  -----  
 0   Name of Lending Institution  93 non-null     object 
 1   HQ State                     93 non-null     object 
 2   Rank                         93 non-null     int64  
 3   TA Ratio1                    93 non-null     float64
 4   TBL Ratio1                   93 non-null     float64
 5   Amount ($1,000)              93 non-null     float64
 6   Number                       93 non-null     float64
 7   Lender Asset Size            93 non-null     object 
 8   Amount ($1,000).1            93 non-null     float64
 9   Number .1                    93 non-null     float64
 10  Amount ($1,000).2            93 non-null     float64
 11  Number .2                    93 non-null     float64
 12  CC Amount/TA1                93 non-null     float64
dtypes: float64(9), int64(1), object(3)
memory usage: 12.7+ KB
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Task-7
# Save the cleaned data as a csv file
df.to_csv(r"C:\Users\Jyotiranjan padhi\Desktop\data folder\updated_Lending_Data.csv")
_____no_output_____
Apache-2.0
Data Cleaning(Part-1).ipynb
Jyotiranjan404/data_cleaning-part-1-
Some Experimentation with Tensorflow Probability
import sys

import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import edward2 as ed

sys.executable

data = pd.read_csv('fatal_airline_accidents.csv')
y = np.array(data['accidents'])
y = tf.convert_to_tensor(y, dtype=tf.float32)

alpha = tf.convert_to_tensor(np.array([5.0]*10), dtype=tf.float32)
beta = tf.convert_to_tensor(np.array([2.0]*10), dtype=tf.float32)
tfd = tfp.distributions
alpha, beta

def accidents_model_prior_predictive_dist(alpha, beta):
    theta = ed.Gamma(concentration=alpha, rate=beta, name="theta")
    accidents = ed.Poisson(rate=theta, name="accidents")
    return accidents

accidents_gen = accidents_model_prior_predictive_dist(alpha=alpha, beta=beta)

# initial state
initial_state = tf.random_gamma([10], alpha=5.0, beta=2.0, dtype=tf.float32)

result = []
with tf.Session() as sess:
    for i in range(10000):
        accidents_ = sess.run(accidents_gen)
        result.append(accidents_)

import matplotlib.pyplot as plt
%matplotlib inline

print('Mean: %s Var: %s ' % (np.mean(result), np.var(result)))
plt.hist(np.array(result).flatten(), bins=15)
plt.show()

log_joint = ed.make_log_joint_fn(accidents_model_prior_predictive_dist)

def target_log_prob_fn(theta):
    """Target log-probability as a function of states."""
    return log_joint(alpha, beta, theta=theta, accidents=y)

# Initialize the HMC transition kernel.
num_samples = int(10e3)
num_burnin_steps = int(1e3)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target_log_prob_fn,
    num_leapfrog_steps=5,
    step_size=0.01)

samples, kernel_results = tfp.mcmc.sample_chain(
    num_results=num_samples,
    current_state=[initial_state],
    num_burnin_steps=num_burnin_steps,
    kernel=hmc)

# sample from posterior
with tf.Session() as sess:
    samples, is_accepted_ = sess.run([samples, kernel_results.is_accepted])
    accepted = np.sum(is_accepted_)
    print("pct accepted: %s" % (accepted / num_samples))

print(samples)
pct accepted: 0.9999
[array([[ 6.746139 ,  7.760075 ,  9.008763 , ...,  5.072694 ,  4.4772816,  6.278378 ],
       [ 6.7090974,  7.7435775,  8.994854 , ...,  5.026912 ,  4.548915 ,  6.35964  ],
       [ 6.674753 ,  7.727055 ,  9.020736 , ...,  5.053764 ,  4.6378536,  6.350527 ],
       ...,
       [10.115327 , 11.060875 , 10.204517 , ...,  8.380159 ,  8.087912 ,  8.548069 ],
       [10.151785 , 11.03296  , 10.163288 , ...,  8.402973 ,  8.165112 ,  8.4707155],
       [10.17069  , 10.995087 , 10.170074 , ...,  8.397022 ,  8.152656 ,  8.500627 ]],
      dtype=float32)]
MIT
exploring_tensorflow_probability.ipynb
sselonick/tensorflow-probability-fun
The ``JPG`` pane embeds a ``.jpg`` or ``.jpeg`` image file in a panel if provided a local path, or it will link to a remote image if provided a URL.

Parameters:

For layout and styling related parameters see the [customization user guide](../../user_guide/Customization.ipynb).

* **``embed``** (boolean, default=False): If given a URL to an image this determines whether the image will be embedded as base64 or merely linked to.
* **``object``** (str or object): The JPEG file to display. Can be a string pointing to a local or remote file, or an object with a ``_repr_jpg_`` method.
* **``style``** (dict): Dictionary specifying CSS styles

___

The ``JPG`` pane can be pointed at any local or remote ``.jpg`` file. If given a URL starting with ``http`` or ``https``, the ``embed`` parameter determines whether the image will be embedded or linked to:
jpg_pane = pn.pane.JPG('https://upload.wikimedia.org/wikipedia/commons/b/b2/JPEG_compression_Example.jpg', width=500)

jpg_pane
_____no_output_____
BSD-3-Clause
examples/reference/panes/JPG.ipynb
rupakgoyal/panel-
Like any other pane, the ``JPG`` pane can be updated by setting the ``object`` parameter:
jpg_pane.object = 'https://upload.wikimedia.org/wikipedia/commons/3/38/JPEG_example_JPG_RIP_001.jpg'
_____no_output_____
BSD-3-Clause
examples/reference/panes/JPG.ipynb
rupakgoyal/panel-
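As a small illustration (assuming `panel` is imported as `pn`, as in the examples above), the ``embed`` parameter described earlier can be used to base64-encode a remote image into the output rather than merely link to it:

```python
import panel as pn
pn.extension()

# embed=True downloads the remote file and embeds it as base64,
# so the rendered output no longer depends on the remote URL
embedded = pn.pane.JPG(
    'https://upload.wikimedia.org/wikipedia/commons/b/b2/JPEG_compression_Example.jpg',
    embed=True, width=500)
embedded
```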
Lecture 10: Solving equations

[Download on GitHub](https://github.com/NumEconCopenhagen/lectures-2021)

[](https://mybinder.org/v2/gh/NumEconCopenhagen/lectures-2021/master?urlpath=lab/tree/10/Solving_equations.ipynb)

1. [Systems of linear equations](#Systems-of-linear-equations)
2. [Symbolically](#Symbolically)
3. [Non-linear equations - one dimensional](#Non-linear-equations---one-dimensional)
4. [Solving non-linear equations (multi-dimensional)](#Solving-non-linear-equations-(multi-dimensional))
5. [Summary](#Summary)

You will learn about working with matrices and linear algebra (**scipy.linalg**), including solving systems of linear equations. You will learn to find roots of linear and non-linear equations both numerically (**scipy.optimize**) and symbolically (**sympy**).

**Note:** The algorithms written here are meant to be illustrative. The scipy implementations are always both the *fastest* and the *safest* choice.

**Links:**

1. **scipy.linalg:** [overview](https://docs.scipy.org/doc/scipy/reference/linalg.html) + [tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/linalg.html)
2. **sympy:** [overview](https://docs.sympy.org/latest/index.html) + [tutorial](https://docs.sympy.org/latest/tutorial/index.html#tutorial)
3. **scipy.optimize:** [overview](https://docs.scipy.org/doc/scipy/reference/optimize.html) + [tutorial](https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html)
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import ipywidgets as widgets
import time

from scipy import linalg
from scipy import optimize
import sympy as sm

from IPython.display import display

# local module for linear algebra
%load_ext autoreload
%autoreload 2
import numecon_linalg
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
1. Systems of linear equations

1.1 Introduction

We consider **matrix equations** with $n$ equations and $n$ unknowns:

$$
\begin{aligned}
Ax = b \Leftrightarrow
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n}\\
a_{21} & a_{22} & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
\cdot
\begin{bmatrix}
x_{1}\\
x_{2}\\
\vdots\\
x_{n}
\end{bmatrix}
& =
\begin{bmatrix}
b_{1}\\
b_{2}\\
\vdots\\
b_{n}
\end{bmatrix}
\end{aligned}
$$

where $A$ is a square parameter matrix, $b$ is a parameter vector, and $x$ is the vector of unknowns.

A specific **example** could be:

$$
\begin{aligned}
Ax = b \Leftrightarrow
\begin{bmatrix}
3 & 2 & 0 \\
1 & -1 & 0 \\
0 & 5 & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
x_1 \\ x_2 \\ x_3
\end{bmatrix}
\,=\,
\begin{bmatrix}
2 \\ 4 \\ -1
\end{bmatrix}
\end{aligned}
$$

**How to solve this?**
A = np.array([[3.0, 2.0, 0.0], [1.0, -1.0, 0], [0.0, 5.0, 1.0]])
b = np.array([2.0, 4.0, -1.0])
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
Trial-and-error:
Ax = A@[2,-1,9] # @ is matrix multiplication
print('A@x: ',Ax)

if np.allclose(Ax,b):
    print('solution found')
else:
    print('solution not found')
A@x:  [4. 3. 4.]
solution not found
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Various matrix operations:**
A.T # transpose
np.diag(A) # diagonal
np.tril(A) # lower triangular matrix
np.triu(A) # upper triangular matrix

B = A.copy()
np.fill_diagonal(B,0) # fill diagonal with zeros
print(B)

linalg.inv(A) # inverse
linalg.eigvals(A) # eigen values
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
1.2 Direct solution with Gauss-Jordan elimination

Consider the column stacked matrix:

$$
X=[A\,|\,b]=
\begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} & b_{1}\\
a_{21} & a_{22} & \cdots & a_{2n} & b_{2}\\
\vdots & \vdots & \ddots & \vdots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn} & b_{n}
\end{bmatrix}
$$

Find the **row reduced echelon form** by performing row operations, i.e.

1. Multiply a row with a constant
2. Swap rows
3. Add one row to another row

until the $A$ part of the matrix is the identity matrix.

**Manually:**
# a. stack
X = np.column_stack((A,b))
print('stacked:\n',X)

# b. row operations
X[0,:] += 2*X[1,:]
X[0,:] /= 5.0
X[1,:] -= X[0,:]
X[1,:] *= -1
X[2,:] -= 5*X[1,:]
print('row reduced echelon form:\n',X)

# c. print result (the last column in X in row reduced echelon form)
print('solution',X[:,-1])
stacked:
 [[ 3.  2.  0.  2.]
 [ 1. -1.  0.  4.]
 [ 0.  5.  1. -1.]]
row reduced echelon form:
 [[ 1.  0.  0.  2.]
 [-0.  1. -0. -2.]
 [ 0.  0.  1.  9.]]
solution [ 2. -2.  9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**General function:**
Y = np.column_stack((A,b))
numecon_linalg.gauss_jordan(Y)
print('solution',Y[:,-1])
solution [ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
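The local `numecon_linalg` module is not included here. A minimal sketch of what `gauss_jordan` might look like, written for illustration (with partial pivoting added for stability) rather than as a copy of the course module:

```python
import numpy as np

def gauss_jordan(X):
    """Reduce the stacked matrix X = [A|b] to row reduced echelon form in-place."""
    n = X.shape[0]
    for i in range(n):
        # partial pivoting: swap in the row with the largest pivot candidate
        p = i + np.argmax(np.abs(X[i:, i]))
        X[[i, p]] = X[[p, i]]
        X[i, :] /= X[i, i]            # scale the pivot row so the pivot is 1
        for j in range(n):            # eliminate the pivot column in all other rows
            if j != i:
                X[j, :] -= X[j, i] * X[i, :]
```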
which can also be used to find the inverse if we stack with the identity matrix instead,
# a. construct stacked matrix
Z = np.hstack((A,np.eye(3)))
print('stacked:\n',Z)

# b. apply gauss jordan elimination
numecon_linalg.gauss_jordan(Z)

# c. find inverse
inv_Z = Z[:,3:] # last 3 columns of Z in row reduced echelon form
print('inverse:\n',inv_Z)
assert np.allclose(Z[:,3:]@A,np.eye(3))
stacked:
 [[ 3.  2.  0.  1.  0.  0.]
 [ 1. -1.  0.  0.  1.  0.]
 [ 0.  5.  1.  0.  0.  1.]]
inverse:
 [[ 0.2  0.4  0. ]
 [ 0.2 -0.6  0. ]
 [-1.   3.   1. ]]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
1.3 Iterative Gauss-Seidel (+)

We can always decompose $A$ into additive lower and upper triangular matrices,

$$
A=L+U=
\begin{bmatrix}
a_{11} & 0 & \cdots & 0\\
a_{21} & a_{22} & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{bmatrix}
+
\begin{bmatrix}
0 & a_{12} & \cdots & a_{1n}\\
0 & 0 & \cdots & a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & 0
\end{bmatrix}
$$

such that

$$
Ax=b \Leftrightarrow Lx=b-Ux
$$

**Algorithm:** `gauss_seidel()`

1. Choose tolerance $\epsilon > 0$, guess on $x_0$, and set $n=1$.
2. Find $x_n$ by solving $Lx_n = y \equiv (b-Ux_{n-1})$.
3. If $|x_n-x_{n-1}|_{\infty} < \epsilon$ stop, else set $n=n+1$ and return to step 2.

> **Note:** Step 2 is very easy because the equation can be solved directly by *forward substitution*:
>
> $x_1 = \frac{y_1}{a_{11}}$
>
> $x_2 = \frac{y_2 - a_{21} x_1}{a_{22}}$
>
> $x_3 = \frac{y_3 - a_{31} x_1 - a_{32} x_2}{a_{33}}$
>
> etc.

**Apply Gauss-Seidel:**
x0 = np.array([1,1,1])
x = numecon_linalg.gauss_seidel(A,b,x0)
print('solution',x)
solution [ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
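Since `numecon_linalg.gauss_seidel` is not shown, here is a minimal sketch of how it might be implemented, following the algorithm and forward-substitution formulas above (an illustration, not the course module itself):

```python
import numpy as np

def gauss_seidel(A, b, x0, max_iter=500, tol=1e-8, do_print=False):
    """Solve Ax = b iteratively via the splitting L x_n = b - U x_{n-1}."""
    L = np.tril(A)       # lower triangular part including the diagonal
    U = A - L            # strictly upper triangular part
    x = x0.astype(float)
    for n in range(max_iter):
        x_prev = x.copy()
        y = b - U @ x_prev
        # forward substitution solves the lower-triangular system directly
        for i in range(len(b)):
            x[i] = (y[i] - L[i, :i] @ x[:i]) / L[i, i]
        if do_print:
            print(f'{n}: {x}')
        if np.max(np.abs(x - x_prev)) < tol:
            break
    return x
```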
> **Note:** Convergence is not ensured unless the matrix is *diagonally dominant* or *symmetric* and *positive definite*.
x = numecon_linalg.gauss_seidel(A,b,x0,do_print=True)
[1 1 1]
 0: [ 0.00000000 -4.00000000 19.00000000]
 1: [ 3.33333333 -0.66666667  2.33333333]
 2: [ 1.11111111 -2.88888889 13.44444444]
 3: [ 2.59259259 -1.40740741  6.03703704]
 4: [ 1.60493827 -2.39506173 10.97530864]
 5: [ 2.26337449 -1.73662551  7.68312757]
 6: [ 1.82441701 -2.17558299  9.87791495]
 7: [ 2.11705533 -1.88294467  8.41472337]
 8: [ 1.92196312 -2.07803688  9.39018442]
 9: [ 2.05202459 -1.94797541  8.73987705]
10: [ 1.96531694 -2.03468306  9.17341530]
11: [ 2.02312204 -1.97687796  8.88438980]
12: [ 1.98458531 -2.01541469  9.07707347]
13: [ 2.01027646 -1.98972354  8.94861769]
14: [ 1.99314903 -2.00685097  9.03425487]
15: [ 2.00456732 -1.99543268  8.97716342]
16: [ 1.99695512 -2.00304488  9.01522439]
17: [ 2.00202992 -1.99797008  8.98985041]
18: [ 1.99864672 -2.00135328  9.00676639]
19: [ 2.00090219 -1.99909781  8.99548907]
20: [ 1.99939854 -2.00060146  9.00300729]
21: [ 2.00040097 -1.99959903  8.99799514]
22: [ 1.99973269 -2.00026731  9.00133657]
23: [ 2.00017821 -1.99982179  8.99910895]
24: [ 1.99988119 -2.00011881  9.00059403]
25: [ 2.00007920 -1.99992080  8.99960398]
26: [ 1.99994720 -2.00005280  9.00026401]
27: [ 2.00003520 -1.99996480  8.99982399]
28: [ 1.99997653 -2.00002347  9.00011734]
29: [ 2.00001565 -1.99998435  8.99992177]
30: [ 1.99998957 -2.00001043  9.00005215]
31: [ 2.00000695 -1.99999305  8.99996523]
32: [ 1.99999536 -2.00000464  9.00002318]
33: [ 2.00000309 -1.99999691  8.99998455]
34: [ 1.99999794 -2.00000206  9.00001030]
35: [ 2.00000137 -1.99999863  8.99999313]
36: [ 1.99999908 -2.00000092  9.00000458]
37: [ 2.00000061 -1.99999939  8.99999695]
38: [ 1.99999959 -2.00000041  9.00000203]
39: [ 2.00000027 -1.99999973  8.99999864]
40: [ 1.99999982 -2.00000018  9.00000090]
41: [ 2.00000012 -1.99999988  8.99999940]
42: [ 1.99999992 -2.00000008  9.00000040]
43: [ 2.00000005 -1.99999995  8.99999973]
44: [ 1.99999996 -2.00000004  9.00000018]
45: [ 2.00000002 -1.99999998  8.99999988]
46: [ 1.99999998 -2.00000002  9.00000008]
47: [ 2.00000001 -1.99999999  8.99999995]
48: [ 1.99999999 -2.00000001  9.00000004]
49: [ 2.00000000 -2.00000000  8.99999998]
50: [ 2.00000000 -2.00000000  9.00000002]
51: [ 2.00000000 -2.00000000  8.99999999]
52: [ 2.00000000 -2.00000000  9.00000001]
53: [ 2.00000000 -2.00000000  9.00000000]
54: [ 2.00000000 -2.00000000  9.00000000]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
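A quick check (an illustration, not part of the lecture code) confirms that $A$ is not strictly diagonally dominant, which is consistent with the slow, oscillating convergence seen above:

```python
import numpy as np

def is_diagonally_dominant(A):
    """Check strict row-wise diagonal dominance (a sufficient convergence condition)."""
    D = np.abs(np.diag(A))
    off = np.sum(np.abs(A), axis=1) - D  # sum of off-diagonal magnitudes per row
    return np.all(D > off)

print(is_diagonally_dominant(A))  # False for the A used above
```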
1.4 Scipy functions

**Option 1:** Use `.solve()` (scipy chooses what happens).
x1 = linalg.solve(A, b)
print(x1)
assert np.all(A@x1 == b)
[ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Option 2:** Compute `.inv()` first and then solve.
Ainv = linalg.inv(A)
x2 = Ainv@b
print(x2)
[ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
> **Note:** Computing the inverse is normally not a good idea due to numerical stability.

**Option 3:** Compute LU decomposition and then solve.
LU,piv = linalg.lu_factor(A) # decomposition (factorization)
x3 = linalg.lu_solve((LU,piv),b)
print(x3)
[ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Detail:** `piv` contains information on a numerically stable reordering.

1.5 Comparisons

1. `linalg.solve()` is the best choice for solving once.
2. `linalg.lu_solve()` is the best choice when solving for multiple $b$'s for a fixed $A$ (the LU decomposition only needs to be done once).
3. Gauss-Seidel is an alternative when e.g. only an approximate solution is needed.

1.6 Details on LU factorization (+)

When $A$ is *regular* (invertible), we can decompose it into a *lower unit triangular matrix*, $L$, and an *upper triangular matrix*, $U$:

$$
A= L\cdot U =
\begin{bmatrix}
1 & 0 & \cdots & 0\\
l_{21} & 1 & \cdots & 0\\
\vdots & \vdots & \ddots & \vdots\\
l_{n1} & l_{n2} & \cdots & 1
\end{bmatrix}
\cdot
\begin{bmatrix}
u_{11} & u_{12} & \cdots & u_{1n}\\
0 & u_{22} & \cdots & u_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
0 & 0 & \cdots & u_{nn}
\end{bmatrix}
$$

where it can be shown that we can compute the elements by

$$
\begin{aligned}
u_{ij} &= a_{ij} - \sum_{k=1}^{i-1} u_{kj} l_{ik} \\
l_{ij} &= \frac{1}{u_{jj}} \big( a_{ij} - \sum_{k=1}^{j-1} u_{kj} l_{ik} \big)
\end{aligned}
$$

This implies that the equation system can be written

$$
L(Ux) = b
$$

**Algorithm:** `lu_solve()`

1. Perform LU decomposition (factorization)
2. Solve $Ly = b$ for $y$ (by *forward substitution*) where $y = Ux$
3. Solve $Ux = y$ for $x$ (by *backward substitution*)
L,U = numecon_linalg.lu_decomposition(A) # step 1
y = numecon_linalg.solve_with_forward_substitution(L,b) # step 2
x = numecon_linalg.solve_with_backward_substitution(U,y) # step 3

print('L:\n',L)
print('\nU:\n',U)
print('\nsolution:',x)
L:
 [[ 1.          0.          0.        ]
 [ 0.33333333  1.          0.        ]
 [ 0.         -3.          1.        ]]

U:
 [[ 3.          2.          0.        ]
 [ 0.         -1.66666667  0.        ]
 [ 0.          0.          1.        ]]

solution: [ 2. -2.  9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
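The `numecon_linalg` helpers used above are not shown. A minimal sketch consistent with the formulas in 1.6 (Doolittle decomposition without pivoting, unlike scipy), written for illustration:

```python
import numpy as np

def lu_decomposition(A):
    """Doolittle LU decomposition (no pivoting), following the formulas above."""
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):                          # row i of U
            U[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        for j in range(i + 1, n):                      # column i of L
            L[j, i] = (A[j, i] - L[j, :i] @ U[:i, i]) / U[i, i]
    return L, U

def solve_with_forward_substitution(L, b):
    """Solve Ly = b for y, top row down."""
    y = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def solve_with_backward_substitution(U, y):
    """Solve Ux = y for x, bottom row up."""
    x = np.zeros_like(y, dtype=float)
    for i in reversed(range(len(y))):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```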
**Relation to scipy:**

1. Scipy uses pivoting to improve numerical stability.
2. Scipy's implementation is much better than the one here.

1.7 Sparse matrices (+)

**Sparse matrix:** A matrix with many zeros. Letting the computer know where they are is extremely valuable.

**Documentation:** [basics](https://docs.scipy.org/doc/scipy/reference/sparse.html) + [linear algebra](https://docs.scipy.org/doc/scipy/reference/sparse.linalg.html#module-scipy.sparse.linalg)

**Create a sparse matrix**, where most elements are on the diagonal:
from scipy import sparse
import scipy.sparse.linalg

S = sparse.lil_matrix((1000, 1000)) # 1000x1000 matrix with zeroes
S.setdiag(np.random.rand(1000)) # some values on the diagonal
S[200, :100] = np.random.rand(100) # some values in a row
S[200:210, 100:200] = S[200, :100] # and the same values in some other rows
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
Create a plot of the values in the matrix:
S_np = S.toarray() # conversion to numpy

fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.matshow(S_np,cmap=plt.cm.binary);
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Solve it in four different ways:**

1. As if it were not sparse
2. Using the sparsity
3. Using the sparsity + explicit factorization
4. Iterative solver (similar to Gauss-Seidel)
k = np.random.rand(1000) # random RHS

# a. solve
t0 = time.time()
x = linalg.solve(S_np,k)
print(f'{"solve":12s}: {time.time()-t0:.5f} secs')

# b. solve with spsolve
t0 = time.time()
x_alt = sparse.linalg.spsolve(S.tocsr(), k)
print(f'{"spsolve":12s}: {time.time()-t0:.5f} secs')
assert np.allclose(x,x_alt)

# c. solve with explicit factorization
t0 = time.time()
S_solver = sparse.linalg.factorized(S.tocsc())
x_alt = S_solver(k)
print(f'{"factorized":12s}: {time.time()-t0:.5f} secs')
assert np.allclose(x,x_alt)

# d. solve with iterative solver (bicgstab)
t0 = time.time()
x_alt,_info = sparse.linalg.bicgstab(S,k,x0=1.001*x,tol=10**(-8))
print(f'{"bicgstab":12s}: {time.time()-t0:.5f} secs')
assert np.allclose(x,x_alt),x-x_alt
solve       : 0.02707 secs
spsolve     : 0.00201 secs
factorized  : 0.00100 secs
bicgstab    : 0.12334 secs
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Conclusion:**

1. Using the sparsity can be very important.
2. Iterative solvers can be very slow.

2. Symbolically

2.1 Solve consumer problem

Consider solving the following problem:

$$ \max_{x_1,x_2} x_1^{\alpha} x_2^{\beta} \text{ s.t. } p_1x_1 + p_2x_2 = I $$

Define all symbols:
x1 = sm.symbols('x_1') # x1 is a Python variable representing the symbol x_1
x2 = sm.symbols('x_2')
alpha = sm.symbols('alpha')
beta = sm.symbols('beta')
p1 = sm.symbols('p_1')
p2 = sm.symbols('p_2')
I = sm.symbols('I')
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
Define objective and budget constraint:
objective = x1**alpha*x2**beta
objective

budget_constraint = sm.Eq(p1*x1+p2*x2,I)
budget_constraint
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
Solve in **four steps**:1. **Isolate** $x_2$ from the budget constraint2. **Substitute** in $x_2$3. **Take the derivative** wrt. $x_1$4. **Solve the FOC** for $x_1$ **Step 1: Isolate**
x2_from_con = sm.solve(budget_constraint,x2)
x2_from_con[0]
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Step 2: Substitute**
objective_subs = objective.subs(x2,x2_from_con[0])
objective_subs
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Step 3: Take the derivative**
foc = sm.diff(objective_subs,x1)
foc
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Step 4: Solve the FOC**
sol = sm.solve(sm.Eq(foc,0),x1)
sol[0]
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
> An alternative is `sm.solveset()`, which will be the default in the future, but it is still a bit immature in my view.

**Task:** Solve the consumer problem with quasi-linear preferences,

$$ \max_{x_1,x_2} \sqrt{x_1} + \gamma x_2 \text{ s.t. } p_1x_1 + p_2x_2 = I $$
# write your code here
gamma = sm.symbols('gamma')
objective_alt = sm.sqrt(x1) + gamma*x2
objective_alt_subs = objective_alt.subs(x2,x2_from_con[0])
foc_alt = sm.diff(objective_alt_subs,x1)
sol_alt = sm.solve(foc_alt,x1)
sol_alt[0]
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
2.2 Use solution

**LaTeX:** Print in LaTeX format:
print(sm.latex(sol[0]))
\frac{I \alpha}{p_{1} \left(\alpha + \beta\right)}
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Turn into Python function:**
_sol_func = sm.lambdify((p1,I,alpha,beta),sol[0])

def sol_func(p1,I=10,alpha=1,beta=1):
    return _sol_func(p1,I,alpha,beta)

# test
p1_vec = np.array([1.2,3,5,9])
demand_p1 = sol_func(p1_vec)
print(demand_p1)
[4.16666667 1.66666667 1. 0.55555556]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Is demand always positive?** Give the computer the **information** we have. I.e. that $p_1$, $p_2$, $\alpha$, $\beta$, $I$ are all strictly positive:
for var in [p1,p2,alpha,beta,I]:
    sm.assumptions.assume.global_assumptions.add(sm.Q.positive(var))
sm.assumptions.assume.global_assumptions
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Ask** the computer a **question**:
answer = sm.ask(sm.Q.positive(sol[0]))
print(answer)
True
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
We need the assumption that $p_1 > 0$:
sm.assumptions.assume.global_assumptions.remove(sm.Q.positive(p1))
answer = sm.ask(sm.Q.positive(sol[0]))
print(answer)
None
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
To clear all assumptions we can use:
sm.assumptions.assume.global_assumptions.clear()
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
2.3 Solving matrix equations (+)

$$ Ax = b $$

**Remember:**
print('A:\n',A)
print('b:',b)
A:
 [[ 3.  2.  0.]
 [ 1. -1.  0.]
 [ 0.  5.  1.]]
b: [ 2.  4. -1.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Construct symbolic matrix:**
A_sm = numecon_linalg.construct_sympy_matrix(['11','12','21','22','32','33']) # somewhat complicated function
A_sm
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
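Since `construct_sympy_matrix` lives in the unshown `numecon_linalg` module, here is a guess at a minimal version, for illustration only (the symbol naming is an assumption inferred from the position strings passed above):

```python
import sympy as sm

def construct_sympy_matrix(positions, n=3):
    """Build an n-by-n symbolic matrix with a symbol a_ij at each given '(ij)'
    position string and zeros elsewhere."""
    A = sm.zeros(n, n)
    for ij in positions:
        i, j = int(ij[0]), int(ij[1])
        A[i-1, j-1] = sm.symbols(f'a_{{{ij}}}')
    return A
```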
**Find the inverse symbolically:**
A_sm_inv = A_sm.inv()
A_sm_inv
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Fill in the numeric values:**
A_inv_num = numecon_linalg.fill_sympy_matrix(A_sm_inv,A) # somewhat complicated function
x = A_inv_num@b
print('solution:',x)
solution: [ 2. -2. 9.]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Note:** The inverse multiplied by the determinant looks nicer...
A_sm_det = A_sm.det()
A_sm_det

A_sm_inv_raw = sm.simplify(A_sm_inv*A_sm_det)
A_sm_inv_raw
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
2.4 More features (mixed goodies)
x = sm.symbols('x')
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Derivatives:** Higher order derivatives are also available
sm.Derivative('x**4',x,x)
sm.diff('x**4',x,x)
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
Alternatively,
expr = sm.Derivative('x**4',x,x)
expr.doit()
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Integrals:**
sm.Integral(sm.exp(-x), (x, 0, sm.oo))
sm.integrate(sm.exp(-x), (x, 0, sm.oo))
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Limits:**
c = sm.symbols('c')
rho = sm.symbols('rho')
sm.Limit((c**(1-rho)-1)/(1-rho),rho,1)
sm.limit((c**(1-rho)-1)/(1-rho),rho,1)
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Integers:**
X = sm.Integer(7)/sm.Integer(3)
Y = sm.Integer(3)/sm.Integer(8)
display(X)
display(Y)

Z = 3
(X*Y)**Z
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Simplify:**
expr = sm.sin(x)**2 + sm.cos(x)**2
display(expr)
sm.simplify(expr)
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Solve multiple equations at once:**
x = sm.symbols('x')
y = sm.symbols('y')

Eq1 = sm.Eq(x**2+y-2,0)
Eq2 = sm.Eq(y**2-4,0)
sol = sm.solve([Eq1,Eq2],[x,y])

# print all solutions
for xy in sol:
    print(f'(x,y) = ({xy[0]},{xy[1]})')
(x,y) = (-2,-2)
(x,y) = (0,2)
(x,y) = (0,2)
(x,y) = (2,-2)
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
3. Non-linear equations - one dimensional

3.1 Introduction

We consider **solving non-linear equations** of the form,

$$ f(x) = 0, \quad x \in \mathbb{R} $$

This is also called **root-finding**. A specific **example** is:

$$ f(x) = 10x^3 - x^2 - 1 $$

3.2 Derivative based methods

**Newton methods:** Assume you know the function value and derivatives at $x_0$. A **first order** approximation of the function at $x_1$ then is:

$$ f(x_1) \approx f(x_0) + f^{\prime}(x_0)(x_1-x_0) $$

implying

$$ f(x_1) = 0 \Leftrightarrow x_1 = x_0 - \frac{f(x_0)}{f^{\prime}(x_0)} $$

This is called **Newton's method**. An alternative is **Halley's method** (see [derivation](https://mathworld.wolfram.com/HalleysMethod.html)), which uses

$$ x_1 = x_0 - \frac{f(x_0)}{f^{\prime}(x_0)} \Big[ 1-\frac{f(x_0)}{f^{\prime}(x_0)}\frac{f^{\prime\prime}(x_0)}{2f^{\prime}(x_0)} \Big]^{-1} $$

making use of information from the **second derivative**.

**Algorithm:** `find_root()`

1. Choose tolerance $\epsilon > 0$, guess on $x_0$ and set $n = 0$.
2. Calculate $f(x_n)$, $f^{\prime}(x_n)$, and perhaps $f^{\prime\prime}(x_n)$.
3. If $|f(x_n)| < \epsilon$ stop.
4. Calculate $x_{n+1}$ using Newton's or Halley's formula (see above).
5. Set $n = n + 1$ and return to step 2.
def find_root(x0,f,fp,fpp=None,method='newton',max_iter=500,tol=1e-8,full_info=False):
    """ find root

    Args:

        x0 (float): initial value
        f (callable): function
        fp (callable): derivative
        fpp (callable): second derivative
        method (str): newton or halley
        max_iter (int): maximum number of iterations
        tol (float): tolerance
        full_info (bool): controls information returned

    Returns:

        x (float/ndarray): root (if full_info, all x tried)
        i (int): number of iterations used
        fx (ndarray): function values used (if full_info)
        fpx (ndarray): derivative values used (if full_info)
        fppx (ndarray): second derivative values used (if full_info)

    """

    # initialize
    x = np.zeros(max_iter)
    fx = np.zeros(max_iter)
    fpx = np.zeros(max_iter)
    fppx = np.zeros(max_iter)

    # iterate
    x[0] = x0
    i = 0
    while True:

        # step 2: evaluate function and derivatives
        fx[i] = f(x[i])
        fpx[i] = fp(x[i])
        if method == 'halley':
            fppx[i] = fpp(x[i])

        # step 3: check convergence
        if abs(fx[i]) < tol or i >= max_iter:
            break

        # step 4: update x
        if method == 'newton':
            x[i+1] = x[i] - fx[i]/fpx[i]
        elif method == 'halley':
            a = fx[i]/fpx[i]
            b = a*fppx[i]/(2*fpx[i])
            x[i+1] = x[i] - a/(1-b)

        # step 5: increment counter
        i += 1

    # return
    if full_info:
        return x,i,fx,fpx,fppx
    else:
        return x[i],i
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Note:** The cell below contains a function for plotting the convergence.
def plot_find_root(x0,f,fp,fpp=None,method='newton',xmin=-8,xmax=8,xn=100):

    # a. find root and return all information
    x,max_iter,fx,fpx,fppx = find_root(x0,f,fp,fpp=fpp,method=method,full_info=True)

    # b. compute function on grid
    xvec = np.linspace(xmin,xmax,xn)
    fxvec = f(xvec)

    # c. figure
    def _figure(i):

        # i. approximation
        if method == 'newton':
            fapprox = fx[i] + fpx[i]*(xvec-x[i])
        elif method == 'halley':
            fapprox = fx[i] + fpx[i]*(xvec-x[i]) + fppx[i]/2*(xvec-x[i])**2

        # ii. figure
        fig = plt.figure()
        ax = fig.add_subplot(1,1,1)

        ax.plot(xvec,fxvec,label='function') # on grid
        ax.plot(x[i],fx[i],'o',color='black',label='current') # now
        ax.plot(xvec,fapprox,label='approximation') # approximation
        ax.axvline(x[i+1],ls='--',lw=1,color='black') # cross zero
        ax.plot(x[i+1],fx[i+1],'o',color='black',mfc='none',label='next') # next

        ax.legend(loc='lower right',facecolor='white',frameon=True)
        ax.set_ylim([fxvec[0],fxvec[-1]])

    widgets.interact(_figure,
        i=widgets.IntSlider(description="iterations", min=0, max=max_iter-1, step=1, value=0)
    );
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
3.3 Example
f = lambda x: 10*x**3 - x**2 - 1
fp = lambda x: 30*x**2 - 2*x
fpp = lambda x: 60*x - 2

# newton
x,i = find_root(-5,f,fp,method='newton')
print(i,x,f(x))
plot_find_root(-5,f,fp,method='newton')

# halley (also uses the second derivative)
x,i = find_root(-5,f,fp,fpp,method='halley')
print(i,x,f(x))
plot_find_root(-5,f,fp,fpp,method='halley')
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
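**Note:** Newton's method converges quadratically near the root, while Halley's method converges cubically — which is why Halley typically needs fewer iterations, at the cost of evaluating the second derivative.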
3.4 Numerical derivative Sometimes you do not have the **analytical derivative**. You can then use a **numerical derivative** instead (here a forward difference).
# a. function
f = lambda x: 10*x**3 - x**2 - 1

# b. numerical derivative (forward difference)
stepsize = 1e-8
fp_approx = lambda x: (f(x+stepsize)-f(x))/stepsize

# c. find root
x0 = -5
x,i = find_root(x0,f,fp_approx,method='newton')
print(i,x,f(x))
17 0.5000000000000091 5.928590951498336e-14
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
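**The stepsize trade-off:** a large stepsize increases the truncation error of the forward difference, while a very small one is dominated by floating-point cancellation. A quick sketch comparing stepsizes at $x = 0.5$ (the functions are redefined so the cell is self-contained):
f = lambda x: 10*x**3 - x**2 - 1
fp = lambda x: 30*x**2 - 2*x # analytical derivative for comparison

x = 0.5
for stepsize in [1e-1,1e-4,1e-8,1e-12]:
    fp_approx = (f(x+stepsize)-f(x))/stepsize
    print(f'stepsize = {stepsize:.0e}: abs. error = {abs(fp_approx-fp(x)):.2e}')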
**Question:** What happens if you increase the stepsize? 3.5 Another example
g = lambda x: np.sin(x) gp = lambda x: np.cos(x) gpp = lambda x: -np.sin(x) x0 = -4.0 plot_find_root(x0,g,gp,gpp,method='newton')
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Question:** Is the initial value important? **Sympy** can tell us that there are in fact infinitely many solutions, $x = n\pi$ for any integer $n$:
x = sm.symbols('x')
sm.solveset(sm.sin(x))
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
3.6 Derivative free methods: Bisection **Algorithm:** `bisection()`1. Set $a_0 = a$ and $b_0 = b$ where $f(a)$ and $f(b)$ have opposite signs, $f(a_0)f(b_0)<0$.2. Compute $f(m_0)$ where $m_0 = (a_0 + b_0)/2$ is the midpoint.3. Determine the next sub-interval $[a_1,b_1]$: * If $f(a_0)f(m_0) < 0$ (different signs) then $a_1 = a_0$ and $b_1 = m_0$ (i.e. focus on the range $[a_0,m_0]$). * If $f(m_0)f(b_0) < 0$ (different signs) then $a_1 = m_0$ and $b_1 = b_0$ (i.e. focus on the range $[m_0,b_0]$).4. Repeat steps 2 and 3 until $|f(m_n)| < \epsilon$.
def bisection(f,a,b,max_iter=500,tol=1e-6,full_info=False):
    """ bisection
    
    Solve equation f(x) = 0 for a <= x <= b.
    
    Args:
    
        f (callable): function
        a (float): left bound
        b (float): right bound
        max_iter (int): maximum number of iterations
        tol (float): tolerance on solution
        full_info (bool): controls information returned
    
    Returns:
    
        m (float/ndarray): root (if full_info, all midpoints tried)
        i (int): number of iterations used
        a (ndarray): left bounds used
        b (ndarray): right bounds used
        fm (ndarray): function values at midpoints
        
    """
    
    # test inputs
    if f(a)*f(b) >= 0:
        print("bisection method fails.")
        return None
    
    # step 1: initialize
    _a = a
    _b = b
    a = np.zeros(max_iter)
    b = np.zeros(max_iter)
    m = np.zeros(max_iter)
    fm = np.zeros(max_iter)
    
    a[0] = _a
    b[0] = _b
    
    # step 2-4: main
    i = 0
    while True:
        
        # step 2: midpoint and associated value
        m[i] = (a[i]+b[i])/2
        fm[i] = f(m[i])
        
        # step 3: determine sub-interval (stop one step early to avoid indexing past the arrays)
        if abs(fm[i]) < tol or i >= max_iter-1:
            break
        elif f(a[i])*fm[i] < 0:
            a[i+1] = a[i]
            b[i+1] = m[i]
        elif f(b[i])*fm[i] < 0:
            a[i+1] = m[i]
            b[i+1] = b[i]
        else:
            print("bisection method fails.")
            return None
        
        i += 1
        
    if full_info:
        return m,i,a,b,fm
    else:
        return m[i],i
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Same root** as before, but note the trade-off: bisection needs more iterations, in return for never evaluating any derivatives.
m,i = bisection(f,-8,7) print(i,m,f(m))
23 0.4999999403953552 -3.8743014130204756e-07
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
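The iteration count is predictable: the bracket halves each step, so roughly $\log_2((b-a)/\epsilon)$ iterations are needed. A back-of-the-envelope check (note the implementation above stops on $|f(m_n)| < \epsilon$ rather than on the bracket width, so this is only a rough estimate):
a,b,tol = -8,7,1e-6
print(np.ceil(np.log2((b-a)/tol))) # close to the 23 iterations observed above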
**Note:** The cell below contains a function for plotting the convergence.
def plot_bisection(f,a,b,xmin=-8,xmax=8,xn=100):
    
    # a. find root and return all information
    res = bisection(f,a,b,full_info=True)
    if res is None:
        return
    m,n_iter,a,b,fm = res
    
    # b. compute function on grid
    xvec = np.linspace(xmin,xmax,xn)
    fxvec = f(xvec)
    
    # c. figure
    def _figure(i):
        
        fig = plt.figure()
        ax = fig.add_subplot(1,1,1)
        
        ax.plot(xvec,fxvec) # on grid
        ax.plot(m[i],fm[i],'o',color='black',label='current') # midpoint
        ax.plot([a[i],b[i]],[fm[i],fm[i]],'--',color='black',label='range') # range
        ax.axvline(a[i],ls='--',color='black')
        ax.axvline(b[i],ls='--',color='black')
        
        ax.legend(loc='lower right',facecolor='white',frameon=True)
        ax.set_ylim([fxvec[0],fxvec[-1]])
    
    widgets.interact(_figure,
        i=widgets.IntSlider(description="iterations", min=0, max=n_iter-1, step=1, value=0)
    );
    
plot_bisection(f,-8,3)
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Note:** Bisection converges slowly in the final steps; this is generally true for methods that do not use derivatives. 3.7 Scipy Scipy naturally has more robust and efficient implementations of the algorithms above. **Newton:**
result = optimize.root_scalar(f,x0=-4,fprime=fp,method='newton') print(result)
converged: True flag: 'converged' function_calls: 30 iterations: 15 root: 0.5
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Halley:**
result = optimize.root_scalar(f,x0=-4,fprime=fp,fprime2=fpp,method='halley') print(result)
converged: True flag: 'converged' function_calls: 27 iterations: 9 root: 0.5
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Bisect:**
result = optimize.root_scalar(f,bracket=[-8,7],method='bisect') print(result)
converged: True flag: 'converged' function_calls: 45 iterations: 43 root: 0.5000000000007958
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
The **best default choice** is typically the more advanced **Brent method**, which combines the robustness of bisection with faster interpolation steps:
result = optimize.root_scalar(f,bracket=[-8,7],method='brentq') print(result)
converged: True flag: 'converged' function_calls: 16 iterations: 15 root: 0.5000000000002526
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
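**Comparing the runs above:** Newton used 30 function calls, Halley 27, bisection 45, and Brent only 16 — bracketing gives robustness, and Brent adds fast interpolation steps on top of it.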
4. Solving non-linear equations (multi-dimensional) 4.1 Introduction We consider **solving non-linear equations** of the form,$$ f(\boldsymbol{x}) = f(x_1,x_2,\dots,x_k) = \boldsymbol{0}, \quad \boldsymbol{x} \in \mathbb{R}^k$$ A specific **example** is:$$ h(\boldsymbol{x})=h(x_{1},x_{2})=\begin{bmatrix}h_{1}(x_{1},x_{2})\\h_{2}(x_{1},x_{2})\end{bmatrix}=\begin{bmatrix}x_{1}+0.5(x_{1}-x_{2})^{3}-1\\x_{2}+0.5(x_{2}-x_{1})^{3}\end{bmatrix}\in\mathbb{R}^{2} $$where the **Jacobian** is$$ \nabla h(\boldsymbol{x})=\begin{bmatrix}\frac{\partial h_{1}}{\partial x_{1}} & \frac{\partial h_{1}}{\partial x_{2}}\\\frac{\partial h_{2}}{\partial x_{1}} & \frac{\partial h_{2}}{\partial x_{2}}\end{bmatrix}=\begin{bmatrix}1+1.5(x_{1}-x_{2})^{2} & -1.5(x_{1}-x_{2})^{2}\\-1.5(x_{2}-x_{1})^{2} & 1+1.5(x_{2}-x_{1})^{2}\end{bmatrix}$$
def h(x): y = np.zeros(2) y[0] = x[0]+0.5*(x[0]-x[1])**3-1.0 y[1] = x[1]+0.5*(x[1]-x[0])**3 return y def hp(x): y = np.zeros((2,2)) y[0,0] = 1+1.5*(x[0]-x[1])**2 y[0,1] = -1.5*(x[0]-x[1])**2 y[1,0] = -1.5*(x[1]-x[0])**2 y[1,1] = 1+1.5*(x[1]-x[0])**2 return y
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
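A quick sanity check of the analytical Jacobian against a forward-difference approximation (a small sketch):
x = np.array([0.5,0.2])
eps = 1e-6
num_jac = np.empty((2,2))
for j in range(2):
    xp = x.copy()
    xp[j] += eps # perturb one coordinate at a time
    num_jac[:,j] = (h(xp)-h(x))/eps
print(np.max(np.abs(num_jac-hp(x)))) # should be close to zero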
4.2 Newton's method Same as Newton's method in one dimension, but with the following **update step**:$$ \boldsymbol{x}_{n+1} = \boldsymbol{x}_n - [ \nabla h(\boldsymbol{x}_n)]^{-1} h(\boldsymbol{x}_n)$$
def find_root_multidim(x0,f,fp,max_iter=500,tol=1e-8):
    """ find root
    
    Args:
    
        x0 (ndarray): initial values
        f (callable): function
        fp (callable): Jacobian
        max_iter (int): maximum number of iterations
        tol (float): tolerance
        
    Returns:
    
        x (ndarray): root
        i (int): number of iterations used
        
    """
    
    # initialize
    x = x0
    i = 0
    
    # iterate
    while i < max_iter:
        
        # step 2: function and derivatives
        fx = f(x)
        fpx = fp(x)
        
        # step 3: check convergence
        if max(abs(fx)) < tol:
            break
        
        # step 4: update x (inverting the Jacobian; see the note below for an alternative)
        fpx_inv = linalg.inv(fpx)
        x = x - fpx_inv@fx
        
        # step 5: increment counter
        i += 1
        
    return x,i
_____no_output_____
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Test algorithm:**
x0 = np.array([0,0]) x,i = find_root_multidim(x0,h,hp) print(i,x,h(x))
5 [0.8411639 0.1588361] [ 1.41997525e-10 -1.41997469e-10]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
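**Note:** explicitly inverting the Jacobian is fine for a 2-by-2 system, but for larger systems it is cheaper and more numerically stable to solve the linear system $\nabla h(\boldsymbol{x}_n)\boldsymbol{d} = h(\boldsymbol{x}_n)$ for the step $\boldsymbol{d}$ instead. A sketch of the alternative update, reusing h and hp from above (and assuming linalg is scipy.linalg or numpy.linalg — both provide solve):
x = np.array([0.0,0.0])
for _ in range(5): # 5 iterations sufficed above
    x = x - linalg.solve(hp(x),h(x)) # solve for the Newton step instead of inverting
print(x,h(x))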
4.3 Scipy There exist many efficient algorithms for finding roots in multiple dimensions. The scipy default is *hybr*, a modified Powell hybrid method from MINPACK. **With the Jacobian:**
result = optimize.root(h,x0,jac=hp) print(result) print('\nx =',result.x,', h(x) =',h(result.x))
fjac: array([[ 0.89914291, -0.43765515], [ 0.43765515, 0.89914291]]) fun: array([-1.11022302e-16, 0.00000000e+00]) message: 'The solution converged.' nfev: 10 njev: 1 qtf: array([-1.19565972e-11, 4.12770392e-12]) r: array([ 2.16690469, -1.03701789, 1.10605417]) status: 1 success: True x: array([0.8411639, 0.1588361]) x = [0.8411639 0.1588361] , h(x) = [-1.11022302e-16 0.00000000e+00]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021
**Without the Jacobian:**
result = optimize.root(h,x0) print(result) print('\nx =',result.x,', h(x) =',h(result.x))
fjac: array([[-0.89914291, 0.43765515], [-0.43765515, -0.89914291]]) fun: array([-1.11022302e-16, 0.00000000e+00]) message: 'The solution converged.' nfev: 12 qtf: array([ 1.19565972e-11, -4.12770392e-12]) r: array([-2.16690469, 1.03701789, -1.10605417]) status: 1 success: True x: array([0.8411639, 0.1588361]) x = [0.8411639 0.1588361] , h(x) = [-1.11022302e-16 0.00000000e+00]
MIT
web/10/Solving_equations.ipynb
lnc394/lectures-2021