prompt | completion | api
---|---|---|
stringlengths 19 to 879k | stringlengths 3 to 53.8k | stringlengths 8 to 59
import numpy as np
import sklearn.datasets as skdata
from sklearn.linear_model import Perceptron
'''
Name: Doe, John (Please write names in <Last Name, First Name> format)
Collaborators: Doe, Jane (Please write names in <Last Name, First Name> format)
Collaboration details: Discussed <function name> implementation details with <NAME>.
Summary:
You should answer the questions:
1) Implemented the multi-class Perceptron algorithm
2) Initialized the per-class weights to 0 in the fit function,
predicted the class with the maximum score in the predict function,
computed the loss (starting from an initial previous loss of 2.0) and updated the weights of the predicted class and of the actual class until convergence in the update function (see the sketch after this docstring),
trained (80%), validated (10%) and tested (10%) the model in the main function
3) Constants are T=100 and tol in the fit function; hyperparameters are (tols, train_steps) in the main function
Error in predicting:
The algorithm always predicts class 0; I couldn't figure out where I am going wrong
Scores:
Results on the iris dataset using scikit-learn Perceptron model
Training set mean accuracy: 0.8512
Validation set mean accuracy: 0.7333
Testing set mean accuracy: 0.9286
Results on the iris dataset using our Perceptron model trained with 60 steps and tolerance of 0.01
Training set mean accuracy: 0.3306
Validation set mean accuracy: 0.3333
Results on the iris dataset using our Perceptron model trained with 100 steps and tolerance of 0.01
Training set mean accuracy: 0.3306
Validation set mean accuracy: 0.3333
Results on the iris dataset using our Perceptron model trained with 200 steps and tolerance of 0.01
Training set mean accuracy: 0.3306
Validation set mean accuracy: 0.3333
Using best model trained with 200 steps and tolerance of 0.01
Testing set mean accuracy: 0.3571
Results on the wine dataset using scikit-learn Perceptron model
Training set mean accuracy: 0.5625
Validation set mean accuracy: 0.4118
Testing set mean accuracy: 0.4706
Results on the wine dataset using our Perceptron model trained with 60 steps and tolerance of 1
Training set mean accuracy: 0.3889
Validation set mean accuracy: 0.4706
Results on the wine dataset using our Perceptron model trained with 80 steps and tolerance of 1
Training set mean accuracy: 0.3889
Validation set mean accuracy: 0.4706
Results on the wine dataset using our Perceptron model trained with 100 steps and tolerance of 1
Training set mean accuracy: 0.3889
Validation set mean accuracy: 0.4706
Using best model trained with 100 steps and tolerance of 1
Testing set mean accuracy: 0.4118
'''
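# A minimal illustrative sketch of the multi-class perceptron update summarized
# above (an assumption added for clarity, not part of the graded solution):
# score each class with its own weight column, predict the argmax, and on a
# mistake pull the true class's weights toward x_n and push the predicted
# class's weights away. The helper name _toy_perceptron_update is hypothetical.
def _toy_perceptron_update(weights, x_n, y_n):
    # weights: (d, C) matrix of per-class weight columns
    # x_n: (d,) feature vector, y_n: integer ground-truth label
    c_hat = int(np.argmax(weights.T.dot(x_n)))  # predicted class
    if c_hat != y_n:
        weights[:, c_hat] -= x_n  # move the wrongly predicted class away from x_n
        weights[:, y_n] += x_n    # move the true class toward x_n
    return weights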
'''
Implementation of Perceptron for multi-class classification
'''
class PerceptronMultiClass(object):
def __init__(self):
# Define private variables, weights and number of classes
self.__weights = None
self.__n_class = 3
def __predict_label_n_class(self, x_n):
w_c = [np.expand_dims(self.__weights[:, c], axis=-1) for c in range(self.__n_class)]
predict_class = [np.matmul(w.T, x_n) for w in w_c]
max_val = max(predict_class)
predictions = predict_class.index(max_val)
return predictions
def __update(self, x, y):
'''
Update the weight vector during each training iteration
Args:
x : numpy
d x N feature vector
y : numpy
1 x N ground-truth label
'''
# TODO: Implement the member update function
#for c in range(self.__n_class):
#weights_c = np.expand_dims(self.__weights[:,c],axis =0)
threshold = 1.0/self.__n_class * np.ones([1, x.shape[1]])
x = np.concatenate([threshold, x], axis = 0)
for n in range(x.shape[1]):
x_n = np.expand_dims(x[:, n],axis=-1)
predictions = self.__predict_label_n_class(x_n)
if predictions != y[n]:
self.__weights[:, predictions] = self.__weights[:, predictions] - np.squeeze(x_n, axis=-1)
self.__weights[:, y[n]] = self.__weights[:, y[n]] + np.squeeze(x_n, axis=-1)
def fit(self, x, y, T=100, tol=1e-3):
'''
Fits the model to x and y by updating the weight vector
based on mis-classified examples for up to T iterations or until convergence
Args:
x : numpy
d x N feature vector
y : numpy
1 x N ground-truth label
T : int
maximum number of iterations to optimize the perceptron
tol : float
loss-change tolerance; if the current loss exceeds the previous loss plus tol, stop
'''
# TODO: Implement the fit function
#Initialize the weights to zero
self.__n_class= len(np.unique(y)) #number of classes
#print(x.shape[0],x.shape[1],self.__n_class) #(d+1,C)
self.__weights = np.zeros([x.shape[0]+1, self.__n_class])
self.__weights[0, :] = -1.0
#print(self.__weights)
#print(self.__weights.shape[0],self.__weights.shape[1])
#finding misclassified examples
# c_hat = h(x^n(t)) , c_star = y^n ---> unique values determine the __n_class
#Initialize loss and weights
prev_loss = 2.0
pre_weights = np.copy(self.__weights)
for t in range(T):
predictions = self.predict(x)
#loss = (1/N) * sum_n 1[prediction_n != y_n], i.e. the mean misclassification rate
loss = np.mean(np.where(predictions !=y, 1.0, 0.0))
#stopping convergence
if loss == 0.0:
break
elif loss > prev_loss + tol and t > 2:
self.__weights = pre_weights
break
prev_loss = loss
pre_weights = np.copy(self.__weights)
#updating weight vector and class
self.__update(x,y)
def predict(self, x):
'''
Predicts the label for each feature vector x
Args:
x : numpy
d x N feature vector
Returns:
numpy : 1 x N label vector
'''
# TODO: Implement the predict function
#compute weights (d+1,N)
#threshold shape is (1,N)
threshold = 1.0/self.__n_class * np.ones([1, x.shape[1]])
#print('threshold',threshold.shape)
#x is (d,N), thus concatenate threshold and # X
x = np.concatenate([threshold,x],axis=0) #--> (d+1,N)
#print('Size of x',x.shape)
#predict w^T(d+1,N)^T . (d+1,N) --> (1,N)
predictions = np.zeros([1, x.shape[1]])
for n in range(x.shape[1]):
x_n = np.expand_dims(x[:, n], axis=-1)
predictions[0, n] = self.__predict_label_n_class(x_n)
return predictions
def score(self, x, y):
'''
Predicts labels based on feature vector x and computes the mean accuracy
of the predictions
Args:
x : numpy
d x N feature vector
y : numpy
1 x N ground-truth label
Returns:
float : mean accuracy
'''
# TODO: Implement the score function
predictions = self.predict(x)
#accuracy score
scores = np.where(predictions == y, 1.0, 0.0)
return np.mean(scores)
def split_dataset(x, y, n_sample_train_to_val_test=8):
'''
Helper function to split the dataset into training, validation and testing sets
Args:
x : numpy
d x N feature vector
y : numpy
1 x N ground-truth label
n_sample_train_to_val_test : int
number of training samples for every validation, testing sample
Returns:
x_train : numpy
d x n feature vector
y_train : numpy
1 x n ground-truth label
x_val : numpy
d x m feature vector
y_val : numpy
1 x m ground-truth label
x_test : numpy
d x m feature vector
y_test : numpy
1 x m ground-truth label
'''
n_sample_interval = n_sample_train_to_val_test + 2
train_idx = []
val_idx = []
test_idx = []
for idx in range(x.shape[0]):
if idx and idx % n_sample_interval == (n_sample_interval - 1):
val_idx.append(idx)
elif idx and idx % n_sample_interval == 0:
test_idx.append(idx)
else:
train_idx.append(idx)
x_train, x_val, x_test = x[train_idx, :], x[val_idx, :], x[test_idx, :]
y_train, y_val, y_test = y[train_idx], y[val_idx], y[test_idx]
return x_train, y_train, x_val, y_val, x_test, y_test
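# A hypothetical usage sketch of split_dataset (the name _demo_split is an
# assumption, not part of the assignment template). With
# n_sample_train_to_val_test=8 the interval is 10, so indices 9, 19, 29, ... go
# to validation, indices 10, 20, 30, ... go to testing, and everything else
# (including index 0) stays in training, i.e. roughly an 80/10/10 split.
def _demo_split():
    x_demo = np.arange(40).reshape(20, 2)  # 20 samples, 2 features
    y_demo = np.arange(20)
    x_tr, y_tr, x_va, y_va, x_te, y_te = split_dataset(
        x=x_demo, y=y_demo, n_sample_train_to_val_test=8)
    # y_va -> [9, 19], y_te -> [10], the remaining 17 labels stay in y_tr
    return x_tr, y_tr, x_va, y_va, x_te, y_te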
if __name__ == '__main__':
iris_data = skdata.load_iris()
wine_data = skdata.load_wine()
datasets = [iris_data, wine_data]
tags = ['iris', 'wine']
# TODO: Experiment with 3 different max training steps (T) for each dataset
train_steps_iris = [50,500,1000]
train_steps_wine = [60, 500, 2000]
train_steps = [train_steps_iris, train_steps_wine]
# TODO: Set a tolerance for each dataset
tol_iris = 1.0
tol_wine = 1.0
tols = [tol_iris, tol_wine]
for dataset, steps, tol, tag in zip(datasets, train_steps, tols, tags):
# Split dataset into 80% training, 10% validation, 10% testing
x = dataset.data
y = dataset.target
x_train, y_train, x_val, y_val, x_test, y_test = split_dataset(
x=x,
y=y,
n_sample_train_to_val_test=8)
'''
Trains and tests Perceptron model from scikit-learn
'''
model = Perceptron(penalty=None, alpha=0.0, tol=1e-3)
# Trains scikit-learn Perceptron model
model.fit(x_train, y_train)
print('Results on the {} dataset using scikit-learn Perceptron model'.format(tag))
# Test model on training set
scores_train = model.score(x_train, y_train)
print('Training set mean accuracy: {:.4f}'.format(scores_train))
# Test model on validation set
scores_val = model.score(x_val, y_val)
print('Validation set mean accuracy: {:.4f}'.format(scores_val))
# Test model on testing set
scores_test = model.score(x_test, y_test)
print('Testing set mean accuracy: {:.4f}'.format(scores_test))
'''
Trains, validates, and tests our Perceptron model for multi-class classification
'''
# TODO: obtain dataset in correct shape (d x N)
x_train = np.transpose(x_train, axes=(1, 0))
x_val = | np.transpose(x_val, axes=(1, 0)) | numpy.transpose |
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Class to do trained model inference in beam."""
import importlib
import os
import struct
import subprocess as sp
import time
import numpy as np
import tensorflow as tf
from tensorflow.contrib import framework as contrib_framework
# LDIF is an internal package, should be imported last.
# pylint: disable=g-bad-import-order
from ldif.datasets import preprocess
from ldif.datasets import shapenet
from ldif.inference import experiment as experiments
from ldif.inference import extract_mesh
from ldif.inference import metrics
from ldif.model import model as sdf_model
from ldif.representation import structured_implicit_function
from ldif.util import camera_util
from ldif.util import file_util
from ldif.util import gaps_util
from ldif.util import geom_util
from ldif.util import geom_util_np
from ldif.util import gpu_util
from ldif.util import path_util
from ldif.util import py_util
from ldif.util import sdf_util
from ldif.util import np_util
from ldif.util.file_util import log
# pylint: enable=g-bad-import-order
importlib.reload(extract_mesh)
importlib.reload(structured_implicit_function)
importlib.reload(sdf_model)
importlib.reload(geom_util)
class TrainedNetwork(object):
"""A base class for all networks trained in XManager."""
def __init__(self, job, ckpt, use_gpu, **kwargs): # pylint: disable=unused-argument
self.job = job
self.ckpt = ckpt
self.graph = tf.Graph()
self.use_gpu = use_gpu
@classmethod
def from_experiment(cls,
experiment,
xid,
ckpt_idx,
use_temp_ckpts=None,
overrides=None,
use_gpu=True,
**kwargs):
"""Instantiates a TrainedNetwork from an experiment object."""
job = experiment.job_from_xmanager_id(xid, must_be_visible=True)
if use_temp_ckpts is not None:
job.set_use_temp_ckpts(use_temp_ckpts)
if overrides is not None:
for k, v in overrides.items():
setattr(job.model_config.hparams, k, v)
if ckpt_idx == 0:
log.error('Please select a checkpoint and rerun. Valid checkpoints:')
log.error(str(job.all_checkpoint_indices))
return
must_equal = ckpt_idx != -1
ckpt = job.latest_checkpoint_before(ckpt_idx, must_equal=must_equal)
log.info(f'Loading checkpoint {ckpt.abspath}')
return cls(job, ckpt, use_gpu, **kwargs)
@classmethod
def from_modeldir(cls,
model_directory,
model_name,
experiment_name,
xid,
ckpt_idx,
overrides=None,
use_temp_ckpts=True,
use_gpu=True,
**kwargs):
"""Creates a TrainedModel from a model directory root and name."""
experiment = experiments.Experiment(model_directory, model_name,
experiment_name)
return cls.from_experiment(experiment, xid, ckpt_idx, use_temp_ckpts,
overrides, use_gpu, **kwargs)
@classmethod
def from_identifiers(cls,
user,
model_name,
experiment_name,
xid,
ckpt_idx,
overrides=None,
use_temp_ckpts=None,
charged_user='viscam',
use_gpu=True,
**kwargs):
"""Creates a trained network from experiment identifiers."""
raise ValueError('No longer supported.')
def restore(self):
"""Creates a session with restored model variables."""
with self.graph.as_default():
if self.use_gpu:
# For now these are disabled since it is difficult to work on
# all GPUs.
#allowable_frac = gpu_util.get_allowable_fraction_without(
# mem_to_reserve=1024 + 512, cuda_device_index=0) # ~1GB
#gpu_options = tf.GPUOptions(
# per_process_gpu_memory_fraction=allowable_frac)
#config = tf.ConfigProto(gpu_options=gpu_options)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
else:
config = tf.ConfigProto(device_count={'GPU': 0})
self.session = tf.Session(config=config)
saver = tf.train.Saver()
saver.restore(self.session, self.ckpt.abspath)
def conform_prediction(vector):
"""Forces an arbitrary vector to be a valid (D)SIF."""
vector = vector.copy()
if vector.shape[-1] not in [10, 42]:
raise ValueError('Unimplemented.')
consts, centers, radii_aa, radii_cov = np.split(
vector[..., :10], [1, 4, 7], axis=-1)
consts = np.minimum(consts, 0.0)
radii_aa = np.maximum(radii_aa, 1e-9)
radii_cov = np.clip(radii_cov, -np.pi / 4., np.pi / 4.)
log.verbose(
repr([
x.shape
for x in [consts, centers, radii_aa, radii_cov, vector[..., 10:]]
]))
return np.concatenate(
[consts, centers, radii_aa, radii_cov, vector[..., 10:]], axis=-1)
class SingleViewDepthEncoder(TrainedNetwork):
"""Maps from a single depth image (max-0) to a shape representation."""
def __init__(self, job, ckpt, use_gpu, **kwargs):
super(SingleViewDepthEncoder, self).__init__(job, ckpt, use_gpu, **kwargs)
with self.graph.as_default():
model_config = self.job.model_config
model_config.inputs = shapenet.build_placeholder_interface(
model_config, proto='ShapeNetOneImXyzPC')
training_example = preprocess.preprocess(model_config)
self.depth_input = model_config.inputs['dataset'].depth_render
self.xyz_input = model_config.inputs['dataset'].xyz_render
self.points_input = model_config.inputs['dataset'].surface_point_samples
training_example = preprocess.preprocess(model_config)
observation = sdf_model.Observation(model_config, training_example)
imp_net = sdf_model.StructuredImplicitModel(model_config, 'imp_net')
prediction = imp_net.forward(observation)
structured_implicit = prediction.structured_implicit
self.packed_vector = structured_implicit.vector
self.restore()
def run(self, depth, points, xyz):
"""Runs the network on the input data, returning a (D)SIF."""
h, w = np.squeeze(depth).shape
depth = np.reshape(depth, [1, h, w, 1])
points = np.reshape(points, [1, 10000, 6])
xyz = np.reshape(xyz, [1, h, w, 3])
with self.graph.as_default():
packed_vector = self.session.run(
self.packed_vector,
feed_dict={
self.depth_input: depth,
self.points_input: points,
self.xyz_input: xyz
})
packed_vector = np.reshape(packed_vector,
[self.job.model_config.hparams.sc, -1])
return packed_vector
def run_example(self, ex):
return self.run(ex.max_depth_224[0, ...] * 1000.0,
ex.get_max_world_pts_from_idx(0), ex.max_world_xyz_224[0,
...])
def run_example_bts(self, ex):
return self.run(ex.bts_depth_224[0, ...] * 1000.0,
ex.get_bts_world_pts_from_idx(0), ex.bts_world_xyz_224[0,
...])
class DepthEncoder(TrainedNetwork):
"""Maps from a dodecahedron of depth images to shape elements."""
def __init__(self, job, ckpt, use_gpu, **kwargs):
super(DepthEncoder, self).__init__(job, ckpt, use_gpu, **kwargs)
with self.graph.as_default():
model_config = self.job.model_config
model_config.hparams.bs = 1
model_config.inputs = shapenet.build_placeholder_interface(model_config)
training_example = preprocess.preprocess(model_config)
self.depth_input = model_config.inputs['dataset'].depth_renders
self.points_input = model_config.inputs['dataset'].surface_point_samples
self.nss_input = model_config.inputs['dataset'].near_surface_samples
training_example = preprocess.preprocess(model_config)
if hasattr(training_example, '_tx'):
self.tx = training_example._tx
else:
self.tx = None
observation = sdf_model.Observation(model_config, training_example)
imp_net = sdf_model.StructuredImplicitModel(model_config, 'imp_net')
prediction = imp_net.forward(observation)
structured_implicit = prediction.structured_implicit
self.packed_vector = structured_implicit.vector
# *phew* we have set up the graph... now we need to pull the weights.
self.restore()
def run(self, dodeca, points, nss=None):
"""Runs the network on the input data, returning a (D)SIF."""
dodeca = np.reshape(dodeca, [1, 20, 224, 224, 1])
points = np.reshape(points, [1, 10000, 6])
with self.graph.as_default():
feed_dict = {self.depth_input: dodeca, self.points_input: points}
if nss is not None:
feed_dict[self.nss_input] = np.reshape(nss, [1, 100000, 4])
if self.tx is not None:
packed_vector, tx = self.session.run([self.packed_vector, self.tx],
feed_dict=feed_dict)
else:
packed_vector = self.session.run(
self.packed_vector, feed_dict=feed_dict)
packed_vector = np.reshape(packed_vector,
[self.job.model_config.hparams.sc, -1])
if self.tx is not None:
return packed_vector, np.reshape(tx, [4, 4])
return packed_vector
def run_example(self, ex):
return self.run(ex.depth_images, ex.precomputed_surface_samples_from_dodeca)
class Decoder(TrainedNetwork):
"""A SIF -> Mesh decoder."""
def __init__(self, job, ckpt, use_gpu, **kwargs):
super(Decoder, self).__init__(job, ckpt, use_gpu, **kwargs)
with self.graph.as_default():
self.sif_input = tf.placeholder(tf.float32, self.batched_vector_shape)
# TODO(kgenova) Maybe the net should be handled entirely by the structured
# implicit function? Although there is a difference between the network
# that can give a result from a vector and a simple wrapper for models
# that don't need variables. Maybe it's just intelligent about creating
# the net only when really needed.
if 'silence_implicits' in kwargs and kwargs['silence_implicits']:
self.job.model_config.hparams.ipc = 'f'
log.info('Silencing implicits.')
net = sdf_model.StructuredImplicitModel(
self.job.model_config, name='imp_net')
structured_implicit = (
structured_implicit_function.StructuredImplicit.from_packed_vector(
self.job.model_config, self.sif_input, net))
self.structured_implicit = structured_implicit
self.block_res = 32
self.native_point_count = self.block_res**3
self.sample_locations_ph = tf.placeholder(
tf.float32, shape=[self.block_res, self.block_res, self.block_res, 3])
samples = tf.reshape(self.sample_locations_ph, [1, self.block_res**3, 3])
predicted_alg, predicted_locals = structured_implicit.class_at_samples(
samples, apply_class_transfer=False)
predicted_class = sdf_util.apply_class_transfer(
predicted_alg,
self.job.model_config,
soft_transfer=True,
offset=self.job.model_config.hparams.lset)
vol_shape = [self.block_res, self.block_res, self.block_res]
self.predicted_alg_grid = tf.reshape(predicted_alg, vol_shape)
self.predicted_class_grid = tf.reshape(predicted_class, vol_shape)
effective_element_count = (
structured_implicit_function.get_effective_element_count(
self.job.model_config))
self.local_decisions = tf.reshape(predicted_locals[0], [
effective_element_count, self.block_res, self.block_res,
self.block_res
])
self.base_grid = np_util.make_coordinate_grid_3d(
length=self.block_res,
height=self.block_res,
width=self.block_res,
is_screen_space=False,
is_homogeneous=False).astype(np.float32)
self._world2local = structured_implicit.world2local
self._use_inference_kernel = True
# Influence samples
self.true_sample_count = 10000
self.generic_sample_ph = tf.placeholder(
tf.float32, shape=[self.true_sample_count, 3])
self.predicted_influences = structured_implicit.rbf_influence_at_samples(
tf.expand_dims(self.generic_sample_ph, axis=0))
# Optimizer stuff
self.optimizer_pc = 5000
self.optimizer_samples = tf.placeholder(
tf.float32, shape=[self.optimizer_pc, 3])
optimizer_samples = tf.reshape(self.optimizer_samples,
[1, self.optimizer_pc, 3])
self.predicted_class, _ = structured_implicit.class_at_samples(
optimizer_samples)
self.predicted_class = tf.reshape(self.predicted_class,
[self.optimizer_pc, 1])
self.target_class_ph = tf.placeholder(tf.float32, [self.optimizer_pc, 1])
loss = 'crossentropy'
if loss == 'crossentropy':
clipped_pred = tf.clip_by_value(self.predicted_class, 1e-05, 1 - 1e-05)
self.optimizer_elt_loss = tf.where(self.target_class_ph > 0.5,
-tf.log(clipped_pred),
-tf.log(1 - clipped_pred))
elif loss == 'l1':
self.optimizer_elt_loss = tf.abs(self.target_class_ph -
self.predicted_class)
elif loss == 'l2':
self.optimizer_elt_loss = tf.square(self.target_class_ph -
self.predicted_class)
apply_where_agree = True
if not apply_where_agree:
gt_outside = self.target_class_ph > 0.5
pred_outside = self.predicted_class > 0.5
gt_inside = tf.logical_not(gt_outside)
pred_inside = tf.logical_not(pred_outside)
agree = tf.logical_or(
tf.logical_and(gt_outside, pred_outside),
tf.logical_and(gt_inside, pred_inside))
self.optimizer_elt_loss = tf.where_v2(agree, 0.0,
self.optimizer_elt_loss)
self.optimizer_loss = tf.reduce_mean(self.optimizer_elt_loss)
self.ldif_gradients = tf.gradients(self.optimizer_loss, self.sif_input)
# TODO(kgenova) Currently disabled since it's in testing and hardcodes
# some values.
# self.coords_ph = tf.placeholder(tf.float32, shape=[3])
# self.am_image_ph = tf.placeholder(tf.int32, shape=[224, 224])
# pose_cam2world, pose_eye = self._spherical_to_4x4(self.coords_ph)
# self.pose_error = self._evaluate_pose_error(pose_cam2world, pose_eye,
# self.am_image_ph)
# self.pose3_gradients = tf.gradients(self.pose_error, self.coords_ph)
try:
self.restore()
except ValueError:
log.warning('No variables to restore or restoration otherwise failed.')
@property
def unbatched_vector_shape(self):
shape_count = self.job.model_config.hparams.sc
shape_size = structured_implicit_function.element_dof(self.job.model_config)
return [shape_count, shape_size]
@property
def batched_vector_shape(self):
return [1] + self.unbatched_vector_shape
@property
def use_inference_kernel(self):
return self._use_inference_kernel
@use_inference_kernel.setter
def use_inference_kernel(self, should_use):
self._use_inference_kernel = bool(should_use)
# TODO(kgenova) The intermediate vector should really be its own class...
def savetxt(self, sif_vector, path=None, version='v1'):
"""Saves a (D)SIF as ASCII text in the SIF file format.
Args:
sif_vector: A numpy array containing the ldif to write to disk. Has shape
(element_count, element_length).
path: A string containing the path to the file to write to, if provided.
If none, no file is written.
version: A string with the version identifier. Must equal 'v1'.
Returns:
A string encoding of the (D)SIF.
"""
if version == 'v0':
raise ValueError('SIF v0 files are no longer supported.')
elif version == 'v1':
s = self.encode_sif_v1(sif_vector)
else:
raise ValueError(f'Unrecognized SIF file format: {version}.')
if path is not None:
file_util.writetxt(path, s)
return s
def encode_sif_v1(self, sif_vector):
"""Encodes a ldif to a string, and optionally writes it to disk.
A description of the file format:
Line 1: SIF
Line 2: Three ints separated by spaces. In order:
1) The number of blobs.
2) The version ID for the blob types. I added this to be safe since
last time when we updated to add rotation it broke all the old txt
files. For now it will always be zero, which means the following
eleven explicit parameters will be given per blob (in order):
1 constant. float.
3 centers (XYZ). float.
3 radii (XYZ diagonals). float.
3 radii (roll-pitch-yaw rotations). float.
1 symmetry ID type. int. For now it will be either 0 or 1:
Zero: Not symmetric.
One: Left-right (XY-plane) symmetry.
3) The number of implicit parameters per blob. So it will likely
be between 0-256.
After the first two lines, there is a line for each blob.
Each line will have the explicit parameters followed by the implicit
parameters. They are space separated.
Args:
sif_vector: The SIF vector to encode as a np array. Has shape
(element_count, element_length).
Returns:
A string encoding of v in the ldif v1 file format.
"""
sif_vector = sif_vector.copy()
shape_count = sif_vector.shape[-2]
shape_len = sif_vector.shape[-1]
if shape_len == 7:
off_axis = np.zeros([shape_count, 3])
sif_vector = np.concatenate([sif_vector, off_axis], axis=1)
shape_len = 10
explicit_len = 10
implicit_len = shape_len - explicit_len
sif_vector = np.reshape(sif_vector, [shape_count, shape_len])
has_implicits = implicit_len > 0
if not has_implicits:
assert shape_len == 10
implicit_len = 0
sif_vector[:, 4:7] = np.sqrt(np.maximum(sif_vector[:, 4:7], 0))
header = 'SIF\n%i %i %i\n' % (shape_count, 0, implicit_len)
out = header
for row_idx in range(shape_count):
row = ' '.join(10 * ['%.9g']) % tuple(sif_vector[row_idx, :10].tolist())
symmetry = int(row_idx < self.job.model_config.hparams.lyr)
row += ' %i' % symmetry
if has_implicits:
implicit_params = ' '.join(implicit_len * ['%.9g']) % (
tuple(sif_vector[row_idx, 10:].tolist()))
row += ' ' + implicit_params
row += '\n'
out += row
return out
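# Illustrative example (hypothetical values) of the v1 text layout produced by
# encode_sif_v1 above, for two blobs with no implicit parameters:
#   SIF
#   2 0 0
#   -0.05 0.1 0.2 0.3 0.4 0.5 0.6 0 0 0 1
#   -0.02 -0.1 0 0.3 0.2 0.2 0.2 0 0 0 0
# i.e. one constant, three centers, three radii, three rotations, then the
# symmetry flag, all space separated, one line per blob.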
def render_ellipsoids(self, sif_vector):
"""Renders an ellipsoid image visualizing the (D)SIF RBFs."""
with py_util.py2_temporary_directory() as d:
qpath = d + '/q.txt'
self.savetxt(sif_vector, qpath)
impath = d + '/im.png'
camera = ('1.0451 1.17901 0.630437 '
'-0.614259 -0.695319 -0.373119 '
'-0.547037 0.715996 -0.433705')
with py_util.x11_server():
cmd = '%s/qview %s -camera %s -image %s' % (path_util.gaps_path(),
qpath, camera, impath)
sp.check_output(cmd, shell=True)
im = file_util.read_image(impath)
return im
def interactive_viewer(self, sif_vector, mesh=None):
"""Opens a GAPS viewer that can display the SIF blobs alongside a mesh."""
with py_util.py2_temporary_directory() as d:
qpath = d + '/q.txt'
self.savetxt(sif_vector, qpath)
init_camera = ('1.0451 1.17901 0.630437 '
'-0.614259 -0.695319 -0.373119 '
'-0.547037 0.715996 -0.433705')
mstr = ''
if mesh is not None:
mpath = d + '/m.ply'
file_util.write_mesh(mpath, mesh)
mstr = f' -input_mesh {mpath}'
cmd = f'{path_util.gaps_path()}/qview {qpath} -camera {init_camera}{mstr}'
sp.check_output(cmd, shell=True)
def world2local(self, sif_vector):
if sif_vector.shape[0] != 1:
sif_vector = np.expand_dims(sif_vector, axis=0)
m = self.session.run(
self._world2local, feed_dict={self.sif_input: sif_vector})
return m
def interactive_mesh_viewer(self, sif_vector, resolution):
"""Opens up an OpenGL session viewing the mesh defined by the SIF/LDIF."""
with py_util.py2_temporary_directory() as d:
mpath = d + '/m.ply'
m = self.extract_mesh(sif_vector, resolution)
file_util.write_mesh(mpath, m)
init_camera = ('1.0451 1.17901 0.630437 '
'-0.614259 -0.695319 -0.373119 '
'-0.547037 0.715996 -0.433705')
cmd = '%s/mshview %s -camera %s' % (path_util.gaps_path(), mpath,
init_camera)
sp.check_output(cmd, shell=True)
def interactive_gridview(self, sif_vector, resolution, extent=0.75):
volume = self._grid_eval(
sif_vector, resolution, extent, extract_parts=False, world2local=None)
return gaps_util.grdview(volume)
def _spherical_to_4x4(self, coords):
"""Turns spherical coords into a 4x4 affine transformation matrix."""
r = coords[0]
theta = coords[1]
phi = coords[2]
st = tf.sin(theta)
x = r * st * tf.cos(phi)
y = r * st * tf.sin(phi)
z = r * tf.cos(theta)
eye = tf.stack([x, y, z], axis=0)
eye = tf.reshape(eye, [1, 3])
center = tf.zeros([1, 3], dtype=tf.float32)
world_up = tf.constant([[0., 1., 0.]], dtype=tf.float32)
world2cam = camera_util.look_at(eye, center, world_up)
cam2world = tf.linalg.inv(world2cam)
cam2world = tf.constant(
[[-9.9398971e-01, 2.7342862e-03, -4.7837296e-03, 1.4993416e-04],
[1.6200442e-09, 8.6298174e-01, 4.9326313e-01, 7.1943283e-01],
[5.5100261e-03, 4.9325553e-01, -8.6296844e-01, -1.2277470e+00],
[0.0000000e+00, 0.0000000e+00, 0.0000000e+00, 1.0000000e+00]],
dtype=tf.float32)
return tf.reshape(cam2world, [4, 4]), eye
def _evaluate_pose_error(self, cam2world, eye, am_image):
"""Evaluates the error of an estimated 4x4 pose matrix."""
# TODO(kgenova) This is a hack that only works for 3d-r2n2
ray_directions = gaps_util.gaps_depth_image_to_cam_image(
np.ones((224, 224)), xfov=0.422204).astype(np.float32)
tc = 15
t_vals = tf.constant(np.arange(0.75, 2.25, .1), dtype=tf.float32)
t_vals = tf.reshape(t_vals, [1, tc, 1])
ray_count = int(np.prod(ray_directions.shape[:-1]))
ray_directions = tf.reshape(ray_directions, [ray_count, 1, 3])
eye = tf.reshape(eye, [1, 1, 3])
cam_rays = ray_directions * t_vals + eye
world_pts = geom_util.apply_4x4(
cam_rays, cam2world, are_points=True, batch_rank=0, sample_rank=2)
world_pts = tf.reshape(world_pts, [1, ray_count * tc, 3])
self.cam_3dof_pts = world_pts
world_rbfs = self.structured_implicit.rbf_influence_at_samples(world_pts)
eec = world_rbfs.get_shape().as_list()[-1]
assert len(am_image.get_shape().as_list()) == 2
is_bg = tf.reshape(
tf.logical_not(tf.equal(am_image, eec)), [1, ray_count, 1])
am_image = tf.tile(tf.expand_dims(am_image, axis=-1), [1, 1, tc])
flat_am = tf.reshape(am_image, [ray_count * tc, 1])
flat_am = tf.where_v2(tf.equal(flat_am, 45), 0, flat_am)
world_rbfs = tf.reshape(world_rbfs, [ray_count * tc, 45])
max_val = tf.gather(world_rbfs, flat_am, batch_dims=1)
max_val = tf.reshape(max_val, [1, ray_count, tc])
max_val = tf.reduce_max(max_val, axis=-1)
is_bg_mult = tf.cast(is_bg, dtype=tf.float32)
max_val = is_bg_mult * max_val
error = -1.0 * tf.reduce_sum(max_val)
return error
def optimize_3dof_pose(self, sif_vector, am_image, e, step_count=10, lr=1e-6):
"""Tries to fit a pose given a SIF in 3D and a SIF segmentation image."""
if len(sif_vector.shape) == 2:
sif_vector = np.expand_dims(sif_vector, axis=0)
# Now rays is an array of shape [h, w, 3]. The origin is currently [0,0,0]
# because the rays are in camera space (for now).
lr = np.array([0.0, lr, lr], dtype=np.float32)
# Just worry about a single step for now:
# The pose is 3-dof: distance, phi, theta.
coords = np.array([0.812717413913 / 1.75, 0.0, 0.0], dtype=np.float32)
# cam2world, eye = self._spherical_to_4x4(coords)
for i in range(step_count):
log.verbose('Step %i: (%0.4f, %0.4f, %0.4f)' %
(i, coords[0], coords[1], coords[2]))
grad, err, pts = self.session.run(
[self.pose3_gradients, self.pose_error, self.cam_3dof_pts],
feed_dict={
self.am_image_ph: am_image,
self.sif_input: sif_vector,
self.coords_ph: coords
})
grad = grad[0]
log.verbose('Error: %0.2f' % err)
log.verbose('grad: %s' % repr(grad))
log.verbose('pts.shape: ', repr(pts.shape))
assert len(grad.shape) == 1
assert grad.shape[0] == 3
update = lr * grad
log.verbose('Update: ', str(update))
gaps_util.ptsview(pts, mesh=e.v1_gt_mesh)
coords = coords - lr * grad
return coords
def optimize_to_gt(self,
sif_vector,
example,
step_count=1,
lr=0.01,
vis=0,
verbosity=0,
target='all',
samps='nss'):
"""Iteratively optimizes a SIF or LDIF to fit ground truth in/out values."""
if samps == 'nss':
all_samples = example.near_surface_samples.copy()
np.random.shuffle(all_samples)
elif samps == 'uni':
all_samples = example.uniform_samples.copy()
elif samps == 'nssuni':
all_samples = np.concatenate(
[example.near_surface_samples, example.uniform_samples], axis=0)
elif samps == 'dodeca':
depth_ims = example.depth_images / 1000.0
all_samples = geom_util.depth_dodeca_to_samples(depth_ims)
elif samps == 'depth':
depth_idx = 1 # TODO(kgenova) Make this the one in the observation.
depth_ims = example.depth_images / 1000.0
depth_im = depth_ims[0, depth_idx, :, :, :]
cam2world = geom_util.get_dodeca_camera_to_worlds()[depth_idx, :, :]
assert depth_im.shape[0] == 224
assert cam2world.shape[0] == 4
log.verbose('Depth im shape: ', depth_im.shape)
all_samples = geom_util.depth_image_to_samples(depth_im, cam2world)
if verbosity >= 2:
gaps_util.ptsview(all_samples[..., :], self.extract_mesh(sif_vector, 128))
np.random.shuffle(all_samples)
cl = all_samples[:, 3]
all_samples[cl < 0, 3] = 0
all_samples[cl > 0, 3] = 1
samples, gt_class = np.split(all_samples, [3], axis=-1)
samples = samples[:self.optimizer_pc, :]
gt_class = gt_class[:self.optimizer_pc, :]
def print_sat_count(vec):
"""Prints the number of contraints that are satisfied and the total."""
pred = self.class_at_samples(vec, np.reshape(samples, [-1, 3]))
pred_is_out = pred > 0.5
gt_is_out = gt_class > 0.5
log.verbose(pred_is_out.shape, gt_is_out.shape)
agree = np.logical_or(
np.logical_and(pred_is_out, gt_is_out),
np.logical_and(
np.logical_not(pred_is_out), np.logical_not(gt_is_out)))
sat_count = np.count_nonzero(agree)
log.info('%i/%i constraints are satisfied.' %
(sat_count, self.optimizer_pc))
if verbosity >= 1:
log.info('Beginning optimization.')
print_sat_count(sif_vector)
assert gt_class.shape[-1] == 1
sif_vector = sif_vector.copy()
sif_vector = np.expand_dims(sif_vector, axis=0)
cur_vector = sif_vector.copy()
ret_best = False
if ret_best:
min_loss = np.inf
best_vec = cur_vector.copy()
momentum = 0.9
velocity = np.zeros_like(cur_vector)
cur_batch_idx = 0
for i in range(step_count):
batch_start = cur_batch_idx
batch_end = cur_batch_idx + self.optimizer_pc
if batch_end > all_samples.shape[0]:
np.random.shuffle(all_samples)
batch_start = 0
batch_end = self.optimizer_pc
cur_batch_idx = 0
batch_all_samples = all_samples[batch_start:batch_end, :]
cur_batch_idx += self.optimizer_pc
batch_samples, batch_gt_class = np.split(batch_all_samples, [3], axis=-1)
grad = self.session.run(
self.ldif_gradients,
feed_dict={
self.target_class_ph: batch_gt_class,
self.sif_input: cur_vector,
self.optimizer_samples: batch_samples
})[0]
vis_this_time = vis >= 2 or (vis >= 1 and (i == 0 or i == step_count - 1))
print_this_time = verbosity >= 2 or (verbosity >= 1 and not i % 1000)
if vis_this_time or print_this_time:
loss = self.session.run(
self.optimizer_elt_loss,
feed_dict={
self.target_class_ph: batch_gt_class,
self.sif_input: cur_vector,
self.optimizer_samples: batch_samples
})
if ret_best:
lsum = np.sum(loss)
if lsum < min_loss:
min_loss = lsum
best_vec = cur_vector.copy()
# Assuming the loss is zero if a constraint is satisfied:
is_sat = self.optimizer_pc - np.count_nonzero(loss)
if print_this_time:
log.info('Step %i: Total loss: %s. Constraints %i/%i' %
(i, repr(np.sum(loss)), is_sat, self.optimizer_pc))
if vis_this_time:
self.vis_loss(
cur_vector,
gt_at_loss=gt_class,
loss=loss,
loss_positions=samples)
if target == 'all-eq':
mults = 42 * [1]
elif target == 'all':
mults = [0.001] + 3 * [0.001] + 6 * [0.0000001] + 32 * [50]
elif target == 'centers':
mults = [0.000] + 3 * [0.001] + 6 * [0.0000000] + 32 * [0]
elif target == 'radii':
mults = [0.000] + 3 * [0.000] + 6 * [0.0000001] + 32 * [0]
elif target == 'features':
mults = [0.000] + 3 * [0.000] + 6 * [0.0000000] + 32 * [50]
elif target == 'constants':
mults = [0.001] + 3 * [0.000] + 6 * [0.0000000] + 32 * [0]
else:
assert False
mults = np.array(mults).reshape([1, 1, 42])
velocity = momentum * velocity + mults * lr * grad
cur_vector = cur_vector - velocity
if verbosity >= 1:
log.info('Finished optimization.')
print_sat_count(cur_vector)
if ret_best:
cur_vector = best_vec
return np.reshape(cur_vector, self.unbatched_vector_shape)
def vis_loss(self, sif_vector, gt_at_loss, loss, loss_positions):
"""Visualizes the loss mid-optimization."""
loss = np.reshape(loss, [-1, 1])
gt_at_loss = np.reshape(gt_at_loss, [-1, 1])
assert gt_at_loss.shape[0] == loss.shape[0]
loss[gt_at_loss <= 0.5] = -loss[gt_at_loss <= 0.5]
loss_positions = np.reshape(loss_positions, [-1, 3])
arr = np.concatenate([loss_positions, loss], axis=1)
with py_util.py2_temporary_directory() as d:
sdf_path = f'{d}/a.sdf'
with file_util.open_file(sdf_path, 'wb') as f:
arr = arr.astype(np.float32)
arr.tofile(f)
m = self.extract_mesh(sif_vector, resolution=128)
m_path = f'{d}/m.ply'
file_util.write_mesh(m_path, m)
init_camera = ('1.0451 1.17901 0.630437 '
'-0.614259 -0.695319 -0.373119 '
'-0.547037 0.715996 -0.433705')
cmd = '%s/ptsview %s %s -camera %s' % (path_util.gaps_path(), sdf_path,
m_path, init_camera)
sp.check_output(cmd, shell=True)
def _grid_eval_cuda(self, sif_vector, resolution, extent):
"""Evaluates a SIF/LDIF densely on a voxel grid."""
log.verbose('Using custom CUDA kernel for evaluation.')
# First step: Get the path where the serialized occnet should be.
# The serialized occnet should be at whatever the checkpoint path is,
# but replace model.ckpt-[idx] with serialized-occnet-[idx].occnet
checkpoint_path = self.ckpt.abspath
log.info(f'Using checkpoint {checkpoint_path} to write OccNet file.')
assert 'model.ckpt-' in checkpoint_path
occnet_path = checkpoint_path.replace('model.ckpt-', 'serialized-occnet-')
occnet_path = occnet_path + '.occnet'
# Second step: If it isn't there, write it to disk.
if not os.path.isfile(occnet_path):
assert os.path.isdir(os.path.dirname(occnet_path))
if self.job.model_config.hparams.ipe == 't':
self.write_occnet_file(occnet_path)
else:
occnet_path = path_util.get_path_to_ldif_root(
) + '/ldif2mesh/extracted.occnet'
# Third step: open a temporary directory, and write the embedding.
# Make sure that the temp directories are deleted afterwards.
with py_util.py2_temporary_directory() as d:
rep_path = f'{d}/ldif.txt'
self.savetxt(sif_vector, rep_path)
# Pick the path to the output grd file:
grd_path = f'{d}/grid.grd'
# Fourth step: Get the path to the kernel
kernel_path = os.path.join(path_util.get_path_to_ldif_root(),
'ldif2mesh/ldif2mesh')
if not os.path.isfile(kernel_path):
raise ValueError(
f'There is no compiled CUDA executable at {kernel_path}.')
cmd = (f'CUDA_VISIBLE_DEVICES=0 {kernel_path} {rep_path} {occnet_path} '
f'{grd_path} -resolution {resolution}')
log.verbose(f'Executing command {cmd}')
# TODO(kgenova) Support extent as a flag
if extent != 0.75:
raise ValueError(
'Currently only 0.75 extent is supported on the '
'custom kernel. Please set use_inference_kernel to false for an'
f' extent of {extent}.')
# Fifth step: Invoke the kernel.
try:
cmd_result = sp.check_output(cmd, shell=True)
log.info(cmd_result.decode('utf-8').replace('\n', ''))
except sp.CalledProcessError as e:
if 'out of memory' in e.output.decode('utf-8'):
raise ValueError(
'The GPU does not have enough free memory left for the'
' inference kernel. Please reduce the fraction'
' reserved by tensorflow.')
elif 'no kernel image is available' in e.output.decode('utf-8'):
raise ValueError(
'It appears that the CUDA kernel was not built to your '
'gpu\'s architecture. Hopefully this is an easy fix. '
'Please go to developer.nvidia.com/cuda-gpus, and find '
'your gpu from the list. Then, modify ./build_kernel.sh '
'by adding compute_XX and sm_XX for whatever your GPU '
'compute capability is according to the website. For '
'example, a 2080 Ti would use compute_75 and sm_75. '
'Note that if your card supports below 35, it likely '
'will fail to compile using this method. If you are '
'seeing this error, please feel free to open up an issue '
'and report it. We would like to support as many gpus as '
'possible.')
else:
raise ValueError(f'Unrecognized error code {e.returncode} occurred'
f' during inference kernel evaluation: {e.output}')
# Seventh step: Read the grid file.
_, grd = file_util.read_grd(grd_path)
# Eighth step: Verify the grid shape and return the grid.
log.verbose(f'The output CUDA grid has shape {grd.shape}.')
# gaps_util.grdview(grd)
return grd
def _grid_eval(self,
sif_vector,
resolution,
extent,
extract_parts,
world2local=None):
"""Evalutes the LDIF/SIF on a grid."""
log.verbose('Evaluating SDF grid for mesh.')
if self.use_inference_kernel and not extract_parts:
return self._grid_eval_cuda(sif_vector, resolution, extent)
if extract_parts or world2local:
log.warning('Part extraction and world2local are not supported with the'
' custom kernel.')
log.warning('Using pure tensorflow for grid evaluation, this will be slow.')
t = time.time()
sif_vector = np.reshape(sif_vector, self.batched_vector_shape)
assert not resolution % self.block_res
block_count = resolution // self.block_res
block_size = (2.0 * extent) / block_count
l_block = []
i = 0
dim_offset = 1 if extract_parts else 0
grid = self.local_decisions if extract_parts else self.predicted_alg_grid
for li in range(block_count):
l_min = -extent + (li) * block_size - 0.5 / resolution
h_block = []
for hi in range(block_count):
h_min = -extent + (hi) * block_size - 0.5 / resolution
w_block = []
for wi in range(block_count):
w_min = -extent + (wi) * block_size - 0.5 / resolution
offset = np.reshape(
np.array([w_min, l_min, h_min], dtype=np.float32), [1, 1, 1, 3])
sample_locations = block_size * self.base_grid + offset
if world2local is not None:
sample_locations = geom_util_np.apply_4x4(
sample_locations, world2local, are_points=True)
grid_out_np = self.session.run(
grid,
feed_dict={
self.sif_input: sif_vector,
self.sample_locations_ph: sample_locations
})
i += 1
w_block.append(grid_out_np)
h_block.append(np.concatenate(w_block, axis=2 + dim_offset))
l_block.append(np.concatenate(h_block, axis=0 + dim_offset))
grid_out = np.concatenate(l_block, axis=1 + dim_offset)
# log.verbose(f'Grid extent: {np.min(grid_out)}, {np.max(grid_out)}')
# grid_out -= 0.5
grid_out_time = time.time()
log.verbose(f'Grid Eval Time: {grid_out_time - t}')
return grid_out
def extract_mesh(self,
sif_vectors,
resolution=128,
extent=0.75,
return_success=False,
world2local=None):
"""Extracts a mesh that is the sum of one or more SIF meshes."""
extract_start_time = time.time()
if isinstance(sif_vectors, list):
volumes = []
if world2local is not None:
assert isinstance(world2local, list)
for i, v in enumerate(sif_vectors):
volumes.append(
self._grid_eval(
v,
resolution,
extent,
extract_parts=False,
world2local=world2local[i]
if world2local is not None else None))
volume = | np.sum(volumes, axis=0) | numpy.sum |
#!/usr/bin/env python
from copy import copy
import rasterio
import matplotlib.pyplot as plt
import numpy as np
import numpy.ma as ma
import pandas as pd
from rasterio.plot import show
import re
import pdb
import projections.pd_utils as pd_utils
from projections.lu.luh2 import LU
shape = (567, 1440)
bounds = (-180, -58, 180, 83.75)
palette = copy(plt.cm.viridis)
#palette.set_over('g', 1.0)
palette.set_under('r', 1.0)
palette.set_bad('k', 1.0)
palette2 = copy(plt.cm.viridis)
palette2.set_over('b', 1.0)
palette2.set_under('r', 1.0)
palette2.set_bad('k', 1.0)
def rcs(height, res, left, bottom, right, top):
er = 6378137.0
lats = np.linspace(top, bottom + res[1], height)
vec = ((np.sin(np.radians(lats + res[1] / 2.0)) -
np.sin(np.radians(lats - res[1] / 2.0))) *
(res[0] * np.pi/180) * er ** 2 / 1e6)
return vec.reshape((vec.shape[0], 1))
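# A rough worked example (approximate figures, added for illustration only):
# rcs returns one value per latitude row, the area of a grid cell in km^2.
# For the 0.25 deg grid used below, a cell centred on the equator covers about
# (sin(0.125 deg) - sin(-0.125 deg)) * (0.25 * pi/180) * er**2 / 1e6, roughly
# 774 km^2, and the value shrinks toward the poles roughly with cos(latitude).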
def check_hpd(df):
scale = rcs(shape[0], (0.25, 0.25), *bounds)
hpd = ma.masked_invalid(df['hpd'].values.reshape(shape))
total = (hpd * scale).sum()
#pdb.set_trace()
print("hpd: %10.2e" % total)
def check(lu, df):
if lu == 'timber':
return
if lu + '_minimal' in df.columns:
minimal = ma.masked_invalid(df[lu + '_minimal'].values.reshape(shape))
else:
minimal = 0
#if lu + '_light' in df.columns:
light = ma.masked_invalid(df[lu + '_light'].values.reshape(shape))
#else:
# light = 0
#if lu + '_intense' in df.columns:
intense = ma.masked_invalid(df[lu + '_intense'].values.reshape(shape))
#else:
# intense = 0
data = ma.masked_invalid(df[lu].values.reshape(shape))
total = (minimal + light + intense)
print('checking: %s [%6.4f | %8.3f]' % (lu, total.max(), (data - total).sum()))
assert np.all(data - total > -0.01)
if (data - total).sum() > 2:
#pdb.set_trace()
pass
#assert total.max() > 0.9
assert np.isclose(total.min(), 0)
pass
def check_sum(lus, df):
total = ma.masked_invalid(df[lus[0]].values.reshape(shape))
for lu in lus[1:]:
total += ma.masked_invalid(df[lu].values.reshape(shape))
print("%6.4f" % total.max())
print(map(lambda x: "%s, %6.4f" % (x,
df[x].values.reshape(shape)[444, 1208]),
LU.keys()))
pdb.set_trace()
#assert np.allclose(total, 1, equal_nan=True)
pass
def area(lu, df):
pass
def doit():
df1950 = pd_utils.load_pandas('/Volumes/Vagrant 155/playground/1950.pyd')
df2009 = pd_utils.load_pandas('/Volumes/Vagrant 155/playground/2009.pyd')
assert | np.all(df1950.columns == df2009.columns) | numpy.all |
#!/usr/bin/python
import os
import math
import re
import struct
import numpy as np
import matplotlib.pyplot as plt
def readstamp(f):
pgmoffset=17
bs=f.read(pgmoffset+4)
x=struct.unpack("<I", bs[pgmoffset:pgmoffset+4])[0] # reverse byte reading order
t = (x>>0) & 0xffffffff
t = ((t >> 16) & 0xffff) | ((t << 16) & 0xffff0000)
secs = (t >> 25) & 0x7f
cycles = (t >> 12) & 0x1fff
offset = (t >> 0) & 0xfff
return secs + ((cycles + (offset / 3072.0)) / 8000.0)
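# The bit layout above appears consistent with the IEEE 1394 cycle-timer format
# embedded by the camera: 7 bits of seconds, 13 bits of cycle count (8000 cycles
# per second) and 12 bits of cycle offset (3072 ticks per cycle). A worked
# example with hypothetical values: secs=3, cycles=4000, offset=1536 decodes to
# 3 + (4000 + 1536/3072.0) / 8000.0 = 3.5000625 seconds.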
def getTime(gps_pt):
return gps_pt[10]*60 + gps_pt[11]
gps_pts = np.array(np.loadtxt('gps_data.txt'));
#vo_pts = np.array(np.loadtxt('viso_points_bb.txt'));
vo_pts = np.array(np.loadtxt("/home/kivan/Projects/cv-stereo/build/vo_batch_debug/release/results/bb2_tracker_freak_7_1/bb.txt"))
src_folder = '/home/kivan/Projects/datasets/bumblebee/20121031/'
t_prev=-1
deltas=[]
for name in sorted(os.listdir(src_folder)):
m=re.match(r'fc2.*pgm', name)
if m:
w,n,t=readstamp(open(src_folder+name, mode='rb'))
delta=t-t_prev if t_prev>=0 else 0
if delta<0:
delta+=65536
# print('{} {:01x} {:04x} {}'.format(name, n,t, delta))
t_prev=t
deltas.append(delta)
# cycle_time = num of secs / num of cycles
cycle_time = 111 / sum(deltas)
print("Sum of deltas: ", sum(deltas))
print("Cycle time: ", cycle_time)
# convert delta stamps to delta time
deltas=[x*cycle_time for x in deltas[1:]]
# set odometry start time (0 is start time of first gps point)
vo_start = 3.0 # 2.8 3.3 3.0
vo_times = [vo_start]
# get precise time from timestamps in each frame
for i in range(len(deltas)):
vo_times.append(deltas[i] + vo_times[i])
# we use every 3rd frame in odometry
print("Number of frames: ", len(vo_times))
vo_times=vo_times[::3]
vo_pts=vo_pts[::]
print("Number of frames after sampling: ", len(vo_times))
#vo_pts2D=np.ndarray((vo_pts.shape[0], 2))
vo_pts2D=np.zeros((vo_pts.shape[0], 2))
for i in range(len(vo_pts)):
vo_pts2D[i,0]=vo_pts[i,3]
vo_pts2D[i,1]=vo_pts[i,11]
# the time of the first GPS point must be bigger than the visual odometry start time
# otherwise we don't have data to interpolate it
# print(len(gps_pts), gps_pts.shape, len(vo_pts))
t0 = getTime(gps_pts[0])
for i in range(len(gps_pts)):
# cut and break in first gps point with bigger time
if getTime(gps_pts[i])-t0 > vo_times[0]:
gps_pts=gps_pts[i:]
break
# interpolate the visual odometry at the times
# of the GPS points
vo_inter = np.zeros((gps_pts.shape[0], 2))
for i in range(len(gps_pts)):
pt = gps_pts[i]
t = getTime(pt) - t0
#print(t)
for j in range(len(vo_pts2D)):
if vo_times[j] >= t:
if i == 0:
vo_pts_crop = vo_pts2D[j-1:,:]
assert j>0
# print(" -> ", vo_times[j])
alfa = (t - vo_times[j-1]) / (vo_times[j] - vo_times[j-1])
vo_inter[i] = (1-alfa) * vo_pts2D[j-1] + alfa * vo_pts2D[j]
# print(i, vo_inter[i])
break
else:
vo_inter=vo_inter[:i,:]
gps_pts=gps_pts[:i,:]
break
gps_pts = gps_pts[:,0:2]
#print(gps_pts)
#print(vo_pts2D)
#print(vo_inter)
#plt.plot(vo_pts2D[:,0], vo_pts2D[:,1], marker='.', color='r', label="VO_orig")
#plt.plot(vo_pts_crop[:,0], vo_pts_crop[:,1], marker='.', color='b', label="VO_orig")
#plt.plot(vo_inter[:,0], vo_inter[:,1], marker='.', color='b', label="VO_inter")
#plt.show()
#exit(0)
# angle between 2 vectors defined by 3 points using dot product
def calcphi(pt1,pt2,pt3):
v1=pt2-pt1
v2=pt3-pt2
return math.degrees(math.acos(np.dot(v1,v2)/np.linalg.norm(v1)/np.linalg.norm(v2)))
# angle between 2 vectors using vector product (-90, 90)
def calcphi2vec(v1,v2):
return math.degrees(math.asin((v1[0]*v2[1]-v1[1]*v2[0])/
np.linalg.norm(v1)/np.linalg.norm(v2)))
def calcphi2(pt1,pt2,pt3):
v1=pt2-pt1
v2=pt3-pt2
return calcphi2vec(v1,v2)
# angular movement data
gps_phis=np.array([calcphi2(gps_pts[i-1],gps_pts[i],gps_pts[i+1]) for i in range(1,len(vo_inter)-1)])
vo_phis=np.array([calcphi2(vo_inter[i-1],vo_inter[i],vo_inter[i+1]) for i in range(1,len(vo_inter)-1)])
# angular movement difference between GPS and visual odometry
# can't do this until the VO and GPS paths are aligned by starting point and rotation offset
#gps_vo_phis=[calcphi2vec(gps_pts[i]-gps_pts[i-1], vo_inter[i]-vo_inter[i-1]) for i in range(1,len(vo_inter))]
# speed movement data
gps_speed=np.array([np.linalg.norm(gps_pts[i]-gps_pts[i-1]) for i in range(1,len(vo_inter))])
vo_speed=np.array([np.linalg.norm(vo_inter[i]-vo_inter[i-1]) for i in range(1,len(vo_inter))])
#print (gps_phis[0:10])
#print (vo_phis[0:10])
#print([gps_pts[i] for i in range(0,10)])
#print([vo_inter[i] for i in range(0,10)])
#print([vo_inter[i]-vo_inter[i-1] for i in range(1,10)])
#print(calcphi(vo_inter[2-2],vo_inter[2-1],vo_inter[2]))
#plt.plot(gps_pts[:10,0], gps_pts[:10,1], marker='o', color='r')
#plt.plot(vo_inter[:10,0], vo_inter[:10,1], marker='o', color='b')
trans_mse = np.mean(np.square(gps_speed - vo_speed))
trans_mae = np.mean(np.abs(gps_speed - vo_speed))
print("translation error MSE: ", trans_mse)
print("translation error MAE: ", trans_mae)
fig_speed = plt.figure(figsize=(12,8))
plt.plot(range(1,len(vo_inter)), gps_speed, marker='o', color='r', label="GPS")
plt.plot(range(1,len(vo_inter)), vo_speed, marker='o', color='b', label="visual odometry")
plt.title("MSE = " + str(trans_mse)[:5] + ", MAE = " + str(trans_mae)[:5], fontsize=20)
#plt.title('Speed', fontsize=14)
plt.xlabel('time (s)', fontsize=14)
plt.ylabel('distance (m)', fontsize=14)
plt.legend()
# plot scale error of visual odometry
fig_scale = plt.figure(figsize=(12,8))
scale_err = np.array(gps_speed) / np.array(vo_speed)
plt.plot(scale_err, marker='o', color='r')
plt.plot([0,120], [1.0,1.0], ls="--", color="k")
#fig_scale.suptitle('Scale error', fontsize=18)
plt.xlabel('time (s)', fontsize=14)
plt.ylabel('scale error (gps / odometry)', fontsize=14)
#print(gps_phis)
#print(vo_phis)
#print(np.square(gps_phis - vo_phis))
#print((gps_phis - vo_phis))
#print(np.square(gps_phis - vo_phis))
rot_mse = np.mean(np.square(gps_phis - vo_phis))
rot_mae = np.mean(np.abs(gps_phis - vo_phis))
print("rotation error MSE: ", rot_mse)
print("rotation error MAE: ", rot_mae)
fig_rot = plt.figure(figsize=(12,8))
plt.plot(range(1,len(vo_inter)-1), gps_phis, marker='o', color='r', label="GPS rotation angles")
plt.plot(range(1,len(vo_inter)-1), vo_phis, marker='o', color='b', label="odometry rotation angles")
#plt.plot(range(1,len(vo_inter)-1), gps_vo_phis[:-1], marker='o', color='b', label="TODO")
plt.xlabel('time (s)', fontsize=14)
plt.ylabel('angle (deg): <0 (right), >0 (left)', fontsize=14)
#plt.text(45, 20, "average error = " + str(rot_avgerr)[:5], color='b', fontsize=16)
plt.title("MSE = " + str(rot_mse)[:5] + ", MAE = " + str(rot_mae)[:5], fontsize=20)
plt.legend()
fig_path = plt.figure(figsize=(8,8))
#plt.axis('equal')
plt.axis([-50, 200, -100, 150], 'equal')
#gps_pts[:,1] += 40.0
# translate gps to (0,0)
gps_pts[:,0] -= gps_pts[0,0]
gps_pts[:,1] -= gps_pts[0,1]
vo_inter[:,0] -= vo_inter[0,0]
vo_inter[:,1] -= vo_inter[0,1]
vo_pts_crop[:,0] -= vo_pts_crop[0,0]
vo_pts_crop[:,1] -= vo_pts_crop[0,1]
angle = -2.0 #-2.02
#angle = -2.1 # alan calib
R_gps = np.array([[ | np.cos(angle) | numpy.cos |
# Change: Modifying so that the ends are straight coarse bricks
import matplotlib.pyplot as plt
import numpy as np
from scipy import spatial
import csv
import os
def NodeGen2DV45(x0,xl,y0,yl,z0,elemLenX,elemLenY,numElemX,numElemY,shiftX,shiftY):
# Nodal coordinates
nodeX1=np.linspace(x0,xl,numElemX);
nodeY1=y0+np.zeros(np.shape(nodeX1)) #np.arange(0,specLenY+elemLenY,elemLenY);
#
nodeX2=np.linspace(x0+shiftX,xl-shiftX,numElemX-1);
nodeY2=y0+np.zeros(np.shape(nodeX2))+shiftY
#
# Create all nodes
count=1;
Node=np.array([[0,0,0,0]])
for j in range(0,int(numElemY)-1):
for i in range(0,len(nodeX1)):
Node=np.append(Node,[[int(count+i),nodeX1[i],nodeY1[i]+j*elemLenY,z0]],axis=0)
count=len(Node)
for i in range(0,len(nodeX2)):
Node=np.append(Node,[[int(count+i),nodeX2[i],nodeY2[i]+j*elemLenY,z0]],axis=0)
count=len(Node)
# last line
for i in range(0,len(nodeX1)):
Node=np.append(Node,[[int(count+i),nodeX1[i],nodeY1[i]+(j+1)*elemLenY,z0]],axis=0)
Node=Node[1:len(Node)]
return Node
def NodeGen2DV90(x0,xl,y0,yl,z0,elemLenX,elemLenY,numElemX,numElemY):
# Nodal coordinates
nodeX1=np.linspace(x0,xl,numElemX);
nodeY1=y0+np.zeros(np.shape(nodeX1)) #np.arange(0,specLenY+elemLenY,elemLenY)
# Create all nodes
count=1;
Node=np.array([[0,0,0,0]])
for j in range(0,int(numElemY)):
for i in range(0,len(nodeX1)):
Node=np.append(Node,[[int(count+i),nodeX1[i],nodeY1[i]+j*elemLenY,z0]],axis=0)
count=len(Node)
#
Node=Node[1:len(Node)]
elemLenX=nodeX1[1]-nodeX1[0]
return Node,elemLenX,elemLenY
def FindNodes(loc,Node):
NCorners=[[0,0,0,0]]
for i in range(len(loc)):
NCornersTmp=Node[(Node[:,1]==loc[i,0])]
NCornersTmp=NCornersTmp[(NCornersTmp[:,2]==loc[i,1])]
NCorners=np.append(NCorners,NCornersTmp, axis=0)
NCorners=NCorners[1:len(NCorners)]
return NCorners
def FindNodeRange(loc,Node,elemLenX,elemLenY):
loc=[loc[0]-1e-5,loc[1]-1e-5]
NCornersTmp=Node
NCornersTmp=Node[(Node[:,1]>=loc[0])]
NCornersTmp=NCornersTmp[(NCornersTmp[:,1]<=loc[0]+1.5*elemLenX)]
NCornersTmp=NCornersTmp[(NCornersTmp[:,2]>=loc[1])]
NCornersTmp=NCornersTmp[(NCornersTmp[:,2]<=loc[1]+1.5*elemLenY)]
return NCornersTmp
def FindBoundariesV2(Node,x0,xl,y0,yl,numElemX,numElemY):
# Find corners
#loc=np.array([[x0,y0],[xl,0],[0,yl],[xl,yl]])
loc=np.array([[x0,y0]])
NCorners= FindNodes(loc,Node)
# Find bottom edge
Xrange=np.linspace(x0,xl,numElemX)
Yrange=np.ones(np.shape(Xrange))*y0
loc=np.transpose(np.array([Xrange,Yrange]))
NBtmEdge= FindNodes(loc,Node)
# Find top edge
Xrange=np.linspace(x0,xl,numElemX)
Yrange=np.ones(np.shape(Xrange))*yl
loc=np.transpose(np.array([Xrange,Yrange]))
NTopEdge= FindNodes(loc,Node)
# Find left edge
Yrange=np.linspace(y0,yl,numElemY)
Xrange=np.ones(np.shape(Yrange))*x0
loc=np.transpose(np.array([Xrange,Yrange]))
NLeftEdge= FindNodes(loc,Node)
# Find right edge
Yrange=np.linspace(y0,yl,numElemY)
Xrange=np.ones(np.shape(Yrange))*xl
loc=np.transpose(np.array([Xrange,Yrange]))
NRightEdge= FindNodes(loc,Node)
NBoundary=np.append(NBtmEdge,NRightEdge,axis=0)
NBoundary=np.append(NBoundary,NTopEdge,axis=0)
NBoundary=np.append(NBoundary,NLeftEdge,axis=0)
return NCorners,NBtmEdge,NTopEdge,NLeftEdge,NRightEdge,NBoundary
def FindBoundaries(Node,specLenX,specLenY,elemLenX,elemLenY):
# Find corners
loc=np.array([[0,0],[specLenX,0],[0,specLenY],[specLenX,specLenY]])
NCorners= FindNodes(loc,Node)
# Find bottom edge
Xrange=np.arange(0,specLenX,elemLenX)
Yrange=np.ones(np.shape(Xrange))*0
loc=np.transpose(np.array([Xrange,Yrange]))
NBtmEdge= FindNodes(loc,Node)
# Find top edge
Xrange=np.arange(0,specLenX,elemLenX)
Yrange=np.ones(np.shape(Xrange))*specLenY
loc=np.transpose(np.array([Xrange,Yrange]))
NTopEdge= FindNodes(loc,Node)
# Find left edge
Yrange=np.arange(0,specLenY,elemLenY)
Xrange=np.ones(np.shape(Yrange))*0
loc=np.transpose(np.array([Xrange,Yrange]))
NLeftEdge= FindNodes(loc,Node)
# Find right edge
Yrange=np.arange(0,specLenY,elemLenY)
Xrange=np.ones(np.shape(Yrange))*specLenX
loc=np.transpose(np.array([Xrange,Yrange]))
NRightEdge= FindNodes(loc,Node)
NBoundary=np.append(NBtmEdge,NRightEdge,axis=0)
NBoundary=np.append(NBoundary,NTopEdge,axis=0)
NBoundary=np.append(NBoundary,NLeftEdge,axis=0)
return NCorners,NBtmEdge,NTopEdge,NLeftEdge,NRightEdge,NBoundary
def DefineElem2D45(Node,NBtmEdge,NTopEdge,NLeftEdge,NRightEdge,shiftX,shiftY):
A=spatial.cKDTree(Node[:,1:3])
# Find nearest
XYPnt=np.array([0.0,0.0])
ElemQuad=np.array([[0,0,0,0,0]])
ElemPyrd=np.array([[0,0,0,0]])
eleCount=1
for i in range(0,len(NBtmEdge)):
idx=np.ones([1,3])*-1
XYPnt=NBtmEdge[i,1:3]
distance1,idx1 = A.query([XYPnt[0]+shiftX,XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0]+2*shiftX,XYPnt[1]],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3]
idxTmp=np.unique(idx)
if len(idxTmp)==3:
ElemPyrd=np.append(ElemPyrd,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0] ]],axis=0)
eleCount=eleCount+1
for i in range(0,len(NTopEdge)):
idx=np.ones([1,3])*-1
XYPnt=NTopEdge[i,1:3]
distance1,idx1 = A.query([XYPnt[0]+shiftX,XYPnt[1]-shiftY],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0]+2*shiftX,XYPnt[1]],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3]
idxTmp=np.unique(idx)
if len(idxTmp)==3:
ElemPyrd=np.append(ElemPyrd,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0] ]],axis=0)
eleCount=eleCount+1
for i in range(0,len(NLeftEdge)):
idx=np.ones([1,3])*-1
XYPnt=NLeftEdge[i,1:3]
distance1,idx1 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0]+shiftX,XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0],XYPnt[1]+2*shiftY],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3]
idxTmp=np.unique(idx)
if len(idxTmp)==3:
ElemPyrd=np.append(ElemPyrd,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0] ]],axis=0)
eleCount=eleCount+1
for i in range(0,len(NRightEdge)):
idx=np.ones([1,3])*-1
XYPnt=NRightEdge[i,1:3]
distance1,idx1 = A.query([XYPnt[0],XYPnt[1]+2*shiftY],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0]-shiftX,XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3]
idxTmp=np.unique(idx)
if len(idxTmp)==3:
ElemPyrd=np.append(ElemPyrd,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0] ]],axis=0)
eleCount=eleCount+1
for i in range(0,len(Node)):
idx=np.ones([1,4])*-1
XYPnt=Node[i,1:3]
distance1,idx1 = A.query([XYPnt[0]+shiftX,XYPnt[1]-shiftY],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0]+2*shiftX,XYPnt[1]],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0]+shiftX,XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
distance4,idx4 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3,idx4]
idxTmp=np.unique(idx)
if len(idxTmp)==4:
ElemQuad=np.append(ElemQuad,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0],Node[idx[3],0] ]],axis=0)
eleCount=eleCount+1
ElemQuad=ElemQuad[1:len(ElemQuad)]
ElemPyrd=ElemPyrd[1:len(ElemPyrd)]
return ElemQuad,ElemPyrd,eleCount
def DefineElem2D90(Node,shiftX,shiftY):
A=spatial.cKDTree(Node[:,1:3])
# Find nearest
XYPnt=np.array([0.0,0.0])
ElemQuad=np.array([[0,0,0,0,0]])
eleCount=1
for i in range(0,len(Node)):
idx=np.ones([1,4])*-1
XYPnt=Node[i,1:3]
distance1,idx1 = A.query([XYPnt[0],XYPnt[1]],k=1,distance_upper_bound=2)
distance2,idx2 = A.query([XYPnt[0]+shiftX,XYPnt[1]],k=1,distance_upper_bound=2)
distance3,idx3 = A.query([XYPnt[0]+shiftX,XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
distance4,idx4 = A.query([XYPnt[0],XYPnt[1]+shiftY],k=1,distance_upper_bound=2)
idx=[idx1,idx2,idx3,idx4]
idxTmp=np.unique(idx)
if len(idxTmp)==4:
ElemQuad=np.append(ElemQuad,[[eleCount,Node[idx[0],0],Node[idx[1],0],Node[idx[2],0],Node[idx[3],0] ]],axis=0)
eleCount=eleCount+1
ElemQuad=ElemQuad[1:len(ElemQuad)]
return ElemQuad,eleCount
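# Hedged usage sketch (not part of the original source): DefineElem2D90 links each
# node to its neighbours at (+shiftX, 0), (+shiftX, +shiftY) and (0, +shiftY) to
# form 4-node quads, so shiftX/shiftY should equal the element spacing used when
# the Node array was generated, e.g.
#   ElemQuad, eleCount = DefineElem2D90(Node, elemLenX, elemLenY)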
def NodeGen3D(Node,specLenZ1):
jmp=10**(np.ceil(np.log10(np.abs(max(Node[:,0]) + 1))))
# Creating 3D Node points
Node3D=Node
for i in range(1,len(specLenZ1)):
NodeTmp=np.ones(np.shape(Node))
NodeTmp[:,0]=Node[:,0]+np.ones(np.shape(Node[:,0]))*i*jmp
NodeTmp[:,1:3]=Node[:,1:3]
NodeTmp[:,3]=specLenZ1[i]
Node3D=np.append(Node3D,NodeTmp,axis=0)
return Node3D,jmp
#def NodeGen3DV2(Node,zt,maxNode):
# jmp=10**(np.ceil(np.log10(np.abs(maxNode + 1))))
#
# # Creating 3D Node points
# Node3D=Node
# NodeTmp=np.ones(np.shape(Node))
# NodeTmp[:,0]=Node[:,0]+np.ones(np.shape(Node[:,0]))*jmp
# NodeTmp[:,1:3]=Node[:,1:3]
# NodeTmp[:,3]=zt
# Node3D=np.append(Node3D,NodeTmp,axis=0)
#
# return Node3D,jmp
def DefineElem3D(ElemQuad,ElemPyrd,jmpNode,specLenZ1,plyStack):
# Creating 3D pyramid elements points - 1st ply
EleTmp=ElemPyrd[:,1:len(ElemPyrd)]
EleTmp=EleTmp+np.ones(np.shape(ElemPyrd[:,1:len(ElemPyrd)]))*jmpNode
ElemPyrd3D=np.append(ElemPyrd,EleTmp,axis=1)
ElemPyrd3DPly=ElemPyrd3D
# Generate dummy initial interface
ElemPyrd3DInt=np.zeros(np.shape(ElemPyrd3DPly[0,:]))
ElemPyrd3DInt=ElemPyrd3DInt.reshape(1,len(ElemPyrd3DInt))
    # Generate dummy initial interface CZM
ElemPyrd3DCzm=np.zeros(np.shape(ElemPyrd3DPly[0,:]))
ElemPyrd3DCzm=ElemPyrd3DCzm.reshape(1,len(ElemPyrd3DCzm))
# Creating 3D quad elements points - 1st ply
EleTmp=ElemQuad[:,1:len(ElemQuad)]
EleTmp=EleTmp+np.ones(np.shape(ElemQuad[:,1:len(ElemQuad)]))*jmpNode
ElemQuad3D=np.append(ElemQuad,EleTmp,axis=1)
ElemQuad3DPly=ElemQuad3D
# Generate dummy initial interface
ElemQuad3DInt=np.zeros(np.shape(ElemQuad3DPly[0,:]))
ElemQuad3DInt=ElemQuad3DInt.reshape(1,len(ElemQuad3DInt))
# Generate dummy initial interface CZM
ElemQuad3DCzm=np.zeros(np.shape(ElemQuad3DPly[0,:]))
ElemQuad3DCzm=ElemQuad3DCzm.reshape(1,len(ElemQuad3DCzm))
ElemSetPly=[]
ElemSetInt=[]
ElemSetCzm=[]
jmpElem=10**(np.ceil(np.log10(np.abs(max(ElemQuad3D[:,0]) + 1))))
for i in range(1,len(specLenZ1)):
ElemSet=[]
# Pyramid elements
EleTmpNds=ElemPyrd3D[:,1:len(ElemPyrd3D)]
EleTmpNds=EleTmpNds+np.ones(np.shape(ElemPyrd3D[:,1:len(ElemPyrd3D)]))*(i-1)*jmpNode
EleTmpNums=ElemPyrd3D[:,0]
EleTmpNums=EleTmpNums+np.ones(np.shape(ElemPyrd3D[:,0]))*(i-1)*jmpElem
EleTmpNums=EleTmpNums.reshape(len(EleTmpNums),1)
EleTmpAdd=np.append(EleTmpNums,EleTmpNds,axis=1)
if plyStack[i-1]==-1:
ElemPyrd3DInt=np.append(ElemPyrd3DInt,EleTmpAdd,axis=0)
ElemSetInt=np.append(ElemSetInt,ElemPyrd3DInt[:,0])
elif plyStack[i-1]==-2:
ElemPyrd3DCzm=np.append(ElemPyrd3DCzm,EleTmpAdd,axis=0)
ElemSetCzm=np.append(ElemSetCzm,ElemPyrd3DCzm[:,0])
else:
ElemPyrd3DPly=np.append(ElemPyrd3DPly,EleTmpAdd,axis=0)
ElemSetPly=np.append(ElemSetPly,ElemPyrd3DPly[:,0])
ElemSet=np.append(ElemSet,EleTmpAdd[:,0])
# Quad element
EleTmpNds=ElemQuad3D[:,1:len(ElemQuad3D)]
EleTmpNds=EleTmpNds+np.ones(np.shape(ElemQuad3D[:,1:len(ElemQuad3D)]))*(i-1)*jmpNode
EleTmpNums=ElemQuad3D[:,0]
EleTmpNums=EleTmpNums+np.ones(np.shape(ElemQuad3D[:,0]))*(i-1)*jmpElem
EleTmpNums=EleTmpNums.reshape(len(EleTmpNums),1)
EleTmpAdd=np.append(EleTmpNums,EleTmpNds,axis=1)
if plyStack[i-1]==-1:
ElemQuad3DInt=np.append(ElemQuad3DInt,EleTmpAdd,axis=0)
ElemSetInt=np.append(ElemSetInt,ElemQuad3DInt[:,0])
elif plyStack[i-1]==-2:
ElemQuad3DCzm=np.append(ElemQuad3DCzm,EleTmpAdd,axis=0)
ElemSetCzm=np.append(ElemSetCzm,ElemQuad3DCzm[:,0])
else:
ElemQuad3DPly=np.append(ElemQuad3DPly,EleTmpAdd,axis=0)
ElemSetPly=np.append(ElemSetPly,ElemQuad3DPly[:,0])
ElemSet=np.append(ElemSet,EleTmpAdd[:,0])
writeEleSetV2(ElemSet,i)
writeSecOriV2(ElemSet,plyStack[i-1],i)
# Delete initial row
ElemPyrd3DInt=ElemPyrd3DInt[1:len(ElemPyrd3DInt)]
ElemQuad3DInt=ElemQuad3DInt[1:len(ElemQuad3DInt)]
ElemPyrd3DCzm=ElemPyrd3DCzm[1:len(ElemPyrd3DCzm)]
ElemQuad3DCzm=ElemQuad3DCzm[1:len(ElemQuad3DCzm)]
return ElemPyrd3DPly,ElemQuad3DPly,ElemPyrd3DInt,ElemQuad3DInt,ElemPyrd3DCzm,ElemQuad3DCzm,ElemSetPly,ElemSetInt,ElemSetCzm
#def DefineElem3DV2(ElemQuad,ElemPyrd,jmpNode):
# # Creating 3D pyramid elements points - 1st ply
# EleTmp=ElemPyrd[:,1:len(ElemPyrd)]
# EleTmp=EleTmp+np.ones(np.shape(ElemPyrd[:,1:len(ElemPyrd)]))*jmpNode
# ElemPyrd3D=np.append(ElemPyrd,EleTmp,axis=1)
# ElemPyrd3D=ElemPyrd3D
#
# # Creating 3D quad elements points - 1st ply
# EleTmp=ElemQuad[:,1:len(ElemQuad)]
# EleTmp=EleTmp+np.ones(np.shape(ElemQuad[:,1:len(ElemQuad)]))*jmpNode
# ElemQuad3D=np.append(ElemQuad,EleTmp,axis=1)
# ElemQuad3D=ElemQuad3D
#
# # initialize element set
# ElemSet=[]
# # increment in element number
# jmpElem=10**(np.ceil(np.log10(np.abs(max(ElemQuad3D[:,0]) + 1))))
#
# # Pyramid elements
# EleTmpNds=ElemPyrd3D[:,1:len(ElemPyrd3D)]
# EleTmpNds=EleTmpNds+np.ones(np.shape(ElemPyrd3D[:,1:len(ElemPyrd3D)]))*jmpNode
# EleTmpNums=ElemPyrd3D[:,0]
# EleTmpNums=EleTmpNums+np.ones(np.shape(ElemPyrd3D[:,0]))*jmpElem
# EleTmpNums=EleTmpNums.reshape(len(EleTmpNums),1)
# EleTmpAdd=np.append(EleTmpNums,EleTmpNds,axis=1)
# ElemPyrd3D=np.append(ElemPyrd3D,EleTmpAdd,axis=0)
# ElemSet=np.append(ElemSet,ElemPyrd3D[:,0])
#
# # Quad element
# EleTmpNds=ElemQuad3D[:,1:len(ElemQuad3D)]
# EleTmpNds=EleTmpNds+np.ones(np.shape(ElemQuad3D[:,1:len(ElemQuad3D)]))*jmpNode
# EleTmpNums=ElemQuad3D[:,0]
# EleTmpNums=EleTmpNums+np.ones(np.shape(ElemQuad3D[:,0]))*jmpElem
# EleTmpNums=EleTmpNums.reshape(len(EleTmpNums),1)
# EleTmpAdd=np.append(EleTmpNums,EleTmpNds,axis=1)
# ElemQuad3D=np.append(ElemQuad3D,EleTmpAdd,axis=0)
# ElemSet=np.append(ElemSet,ElemQuad3D[:,0])
#
# return ElemPyrd3D,ElemQuad3D,ElemSet
def DefineThk(specLenZ0,PlyStack,thkPly,thkInt,thkCzm):
specLenZ1=np.array([specLenZ0])
thk=specLenZ0
for i in range(0,len(PlyStack)):
if PlyStack[i]==-1:
thk=thk+thkInt
elif PlyStack[i]==-2:
thk=thk+thkCzm
else:
thk=thk+thkPly
specLenZ1=np.append(specLenZ1,thk)
return specLenZ1
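# Hedged workflow sketch (not part of the original source): a through-thickness
# build could chain DefineThk, NodeGen3D and DefineElem3D. In plyStack, -1 marks
# an interface layer, -2 a cohesive (CZM) layer, and any other value a ply angle:
#   plyStack = [45, -2, -45, -1, 90]            # illustrative stacking only
#   specLenZ1 = DefineThk(0.0, plyStack, thkPly, thkInt, thkCzm)
#   Node3D, jmpNode = NodeGen3D(Node, specLenZ1)
#   (ElemPyrd3DPly, ElemQuad3DPly, ElemPyrd3DInt, ElemQuad3DInt, ElemPyrd3DCzm,
#    ElemQuad3DCzm, ElemSetPly, ElemSetInt, ElemSetCzm) = DefineElem3D(
#       ElemQuad, ElemPyrd, jmpNode, specLenZ1, plyStack)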
#def writeEleSet(ElemSet):
# f = open('EleSetFile.inp', 'w')
# for i in range(0,len(ElemSet)):
# elemTmp='*ELSET, GENERATE, ELSET=SET'+str(int(ElemSet[i,0]))
# f.write("%s\n" % elemTmp) #
# elemTmp=str(int(ElemSet[i,1]))+','+str(int(ElemSet[i,2]))+str(int(ElemSet[i,1]))+','+str(int(ElemSet[i,2]))+str(int(ElemSet[i,1]))+','+str(int(ElemSet[i,2]))+str(int(ElemSet[i,1]))+','+str(int(ElemSet[i,2]))
# f.write("%s\n" % elemTmp)
# f.close()
def writeEleSetV2(ElemSet,idt):
ElemSet=ElemSet.astype(int)
f = open('EleSetFile.inp', 'a+')
elemTmp='*ELSET, ELSET=SET'+str(idt)
f.write("%s\n" % elemTmp) #
f.close()
ElemSetTmp1=ElemSet[0:len(ElemSet)//8*8].reshape(len(ElemSet)//8,8)
with open("EleSetFile.inp", "a") as f:
writer = csv.writer(f)
writer.writerows(ElemSetTmp1)
f.close()
if len(ElemSet)%8>0:
ElemSetTmp2=ElemSet[len(ElemSet)//8*8:len(ElemSet)]
with open("EleSetFile.inp", "a") as f:
writer = csv.writer(f)
writer.writerow(ElemSetTmp2)
f.close()
def writeNodeSetV2(NodeSet,idt):
NodeSet=NodeSet.astype(int)
f = open('NodeSetFile.inp', 'a+')
    elemTmp='*NSET, NSET=NSET'+str(idt)
f.write("%s\n" % elemTmp) #
f.close()
NodeSetTmp1=NodeSet[0:len(NodeSet)//8*8].reshape(len(NodeSet)//8,8)
with open("NodeSetFile.inp", "a") as f:
writer = csv.writer(f)
writer.writerows(NodeSetTmp1)
f.close()
if len(NodeSet)%8>0:
NodeSetTmp2=NodeSet[len(NodeSet)//8*8:len(NodeSet)]
with open("NodeSetFile.inp", "a") as f:
writer = csv.writer(f)
writer.writerow(NodeSetTmp2)
f.close()
#def writeSecOri(ElemSet,PlyStack):
# f = open('SecOri.inp', 'w+')
# for i in range(0,len(ElemSet)):
# if PlyStack[i]==-1:
# txtTmp1='*Orientation, name=PlyOri-'+str(int(ElemSet[i,0]))
# txtTmp2='1., 0., 0., 0., 1., 0.,'
# txtTmp3='3, 0'
# txtTmp4='*Solid Section, elset=SET'+str(int(ElemSet[i,0]))+', orientation=PlyOri-'+str(int(ElemSet[i,0]))+', material=matInt'
# elif PlyStack[i]==-2:
# txtTmp1='*Orientation, name=PlyOri-'+str(int(ElemSet[i,0]))
# txtTmp2='1., 0., 0., 0., 1., 0.,'
# txtTmp3='3, 0'
# txtTmp4='*Solid Section, elset=SET'+str(int(ElemSet[i,0]))+', orientation=PlyOri-'+str(int(ElemSet[i,0]))+', material=matCzm'
# else:
# txtTmp1='*Orientation, name=PlyOri-'+str(int(ElemSet[i,0]))
# txtTmp2='1., 0., 0., 0., 1., 0.,'
# txtTmp3='3,'+ str(PlyStack[i])
# txtTmp4='*Solid Section, elset=SET'+str(int(ElemSet[i,0]))+', orientation=PlyOri-'+str(int(ElemSet[i,0]))+', material=matLamina'
# txtTmp5=','
#
# f.write("%s\n" % txtTmp1) #
# f.write("%s\n" % txtTmp2) #
# f.write("%s\n" % txtTmp3) #
# f.write("%s\n" % txtTmp4) #
# f.write("%s\n" % txtTmp5) #
# f.close()
def writeSecOriV2(ElemSet,PlyStack,idt):
f = open('SecOri.inp', 'a+')
if PlyStack==-1:
txtTmp1='*Orientation, name=PlyOri-'+str(idt)
txtTmp2='1., 0., 0., 0., 1., 0.,'
txtTmp3='3, 0'
txtTmp4='*Solid Section, elset=SET'+str(idt)+', orientation=PlyOri-'+str(idt)+', material=matInt'
elif PlyStack==-2:
txtTmp1='*Orientation, name=PlyOri-'+str(idt)
txtTmp2='1., 0., 0., 0., 1., 0.,'
txtTmp3='3, 0'
txtTmp4='*Solid Section, elset=SET'+str(idt)+', orientation=PlyOri-'+str(idt)+', material=matCzm'
else:
txtTmp1='*Orientation, name=PlyOri-'+str(idt)
txtTmp2='1., 0., 0., 0., 1., 0.,'
txtTmp3='3,'+ str(PlyStack)
txtTmp4='*Solid Section, elset=SET'+str(idt)+', orientation=PlyOri-'+str(idt)+', material=matLamina'
txtTmp5=','
f.write("%s\n" % txtTmp1) #
f.write("%s\n" % txtTmp2) #
f.write("%s\n" % txtTmp3) #
f.write("%s\n" % txtTmp4) #
f.write("%s\n" % txtTmp5) #
f.close()
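# Hedged note (not part of the original source): writeSecOriV2 appends one
# orientation plus solid-section block per element set to SecOri.inp; for idt=3
# and a 45-degree ply it would emit roughly
#   *Orientation, name=PlyOri-3
#   1., 0., 0., 0., 1., 0.,
#   3,45
#   *Solid Section, elset=SET3, orientation=PlyOri-3, material=matLamina
#   ,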
def plotElem(Elem,Node):
Elem=Elem.astype(int)
for i in range(0,len(Elem)):
size=len(Elem[i])
x=[]
y=[]
for k in range(1,size):
x=np.append(x,Node[Node[:,0]==Elem[i,k],1], axis=0)
y= | np.append(y,Node[Node[:,0]==Elem[i,k],2], axis=0) | numpy.append |
import numpy as np
import pandas as pd
import math
from scipy.signal import lfilter
from pensimpy.constants import NUM_STEPS, STEP_IN_HOURS
def pid_controller(uk1, ek, ek1, yk, yk1, yk2, u_min, u_max, Kp, Ti, Td, h):
"""
    Discrete PID controller in incremental (velocity) form.
    :param uk1: control signal at the previous step
    :param ek: current error
    :param ek1: error at the previous step
    :param yk: process output at the current step
    :param yk1: process output one step back
    :param yk2: process output two steps back
    :param u_min: lower saturation limit of the control signal
    :param u_max: upper saturation limit of the control signal
    :param Kp: proportional gain
    :param Ti: integral time constant
    :param Td: derivative time constant
    :param h: sampling interval
    :return: saturated control signal for the current step
"""
# proportional component
P = ek - ek1
# checks if the integral time constant is defined
I = ek * h / Ti if Ti > 1e-7 else 0
# derivative component
D = -Td / h * (yk - 2 * yk1 + yk2) if Td > 0.001 else 0
# computes and saturates the control signal
uu = uk1 + Kp * (P + I + D)
uu = u_max if uu > u_max else uu
uu = u_min if uu < u_min else uu
return uu
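# Hedged usage sketch (not part of the original source): one step of the
# incremental PID, with purely illustrative gains and limits.
#   u_prev, e_prev = 0.0, 0.0
#   yk, yk1, yk2 = 0.0, 0.0, 0.0
#   e = setpoint - yk                      # `setpoint` assumed defined elsewhere
#   u = pid_controller(u_prev, e, e_prev, yk, yk1, yk2,
#                      u_min=0.0, u_max=100.0, Kp=1.5, Ti=30.0, Td=0.0,
#                      h=STEP_IN_HOURS)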
def smooth(y, width):
"""
Realize Matlab smooth() func.
:param y: list
    :param width: span (window length) of the moving average, as in Matlab smooth()
:return: list
"""
n = len(y)
b1 = np.ones(width) / width
c = lfilter(b1, [1], y, axis=0)
cbegin = np.cumsum(y[0:width - 2])
cbegin = cbegin[::2] / np.arange(1, width - 1, 2)
cend = | np.cumsum(y[n - width + 2:n][::-1]) | numpy.cumsum |
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License
"""Unit tests for the PlanDevices pass. We check:
- The pass alone given the expected AST, though we need to manually run InferTypes.
- The pass is idempotent.
- Execution on the VM backend yields the correct result."""
import tvm
from tvm import relay
import tvm.testing
import numpy as np
HOST_DEVICE = tvm.device("cpu")
HOST_TARGET = tvm.target.Target("llvm")
CPU_DEVICE = tvm.device("cpu")
CPU_TARGET = tvm.target.Target("llvm").with_host(HOST_TARGET)
GPU_DEVICE = tvm.device("cuda")
GPU_TARGET = tvm.target.Target("cuda").with_host(HOST_TARGET)
TARGETS = {
tvm.tir.IntImm("int32", CPU_DEVICE.device_type): CPU_TARGET,
tvm.tir.IntImm("int32", GPU_DEVICE.device_type): GPU_TARGET,
}
HOST = tvm.target.make_se_scope(HOST_DEVICE, HOST_TARGET) # device_type=1
CPU = tvm.target.make_se_scope(CPU_DEVICE, CPU_TARGET) # device_type=1
GPU = tvm.target.make_se_scope(GPU_DEVICE, GPU_TARGET) # device_type=2
DEFAULT = GPU
CTXT = tvm.transform.PassContext(config={"relay.fallback_device_type": DEFAULT.device_type_int})
core = tvm.IRModule()
core.import_from_std("core.rly")
def rewrite_and_assert(in_mod, expected_mod):
"""Manually run the pass and assert it's structurally equals to the expected."""
config = tvm.target.make_compilation_config(CTXT, TARGETS, HOST_TARGET)
actual_mod = relay.transform.InferType()(in_mod)
actual_mod = relay.transform.PlanDevices(config)(actual_mod)
actual_mod = relay.transform.InferType()(actual_mod)
expected_mod = relay.transform.InferType()(expected_mod)
if not tvm.ir.structural_equal(actual_mod, expected_mod, True):
# Print everything in full so we can see what's going on when things fail.
print("Input module:")
print(in_mod)
print("Expected module:")
print(expected_mod)
print("Actual module:")
print(actual_mod)
# Assert again so as to see the actual disagreeing sub-expressions.
tvm.ir.assert_structural_equal(actual_mod, expected_mod, True)
def eval_and_assert(in_mod: tvm.IRModule, reference_func, args):
"""Test the standard compilation flow gives us a function which agrees with the Numpy
reference implementation."""
if not tvm.runtime.enabled("cuda"):
print("Not evaluating since GPU is not available")
return
with tvm.transform.PassContext(opt_level=3):
compiled = relay.create_executor(
"vm", mod=in_mod, device=GPU_DEVICE, target=GPU_TARGET
).evaluate()
actual = compiled(*args).numpy()
expected = reference_func(*args)
tvm.testing.assert_allclose(actual, expected)
def rand(shape):
return np.random.rand(*shape).astype("float32")
def rands(shape, n):
return [rand(shape) for i in range(n)]
def exercise(in_mod: tvm.IRModule, expected_mod: tvm.IRModule, reference_func, args):
"""Test in_mod against expected_mod and reference_func using args."""
# Correctness
rewrite_and_assert(in_mod, expected_mod)
# Idempotence
rewrite_and_assert(expected_mod, expected_mod)
# The VM can compile and possibly even run the module
if not (reference_func is None) and not (args is None):
eval_and_assert(in_mod, reference_func, args)
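# The tests below all follow the same pattern: build `input()` and `expected()`
# IRModules from Relay text (with SEScope metadata), define a NumPy reference
# function, then call exercise(input(), expected(), ref, rands(shape, n)).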
def test_plain():
metatable = {"SEScope": [CPU, GPU]}
# Everything defaults to GPU
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
%0 = add(%a, %b);
%1 = add(%c, %d);
subtract(%0, %1)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][1], meta[SEScope][1], meta[SEScope][1], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
%1 = add(%c, %d);
subtract(%0, %1)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_left_add_on_cpu():
metatable = {"SEScope": [CPU, GPU]}
# Force some args to be on CPU, rest default to GPU.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = add(%c, %d);
subtract(%1, %2)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][1], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%3 = add(%c, %d);
subtract(%2, %3)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_left_add_on_cpu_via_copy():
metatable = {"SEScope": [CPU, GPU]}
# As for test_left_add_on_cpu, but with an explicit device_copy.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
%0 = add(%a, %b);
%1 = device_copy(%0, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%2 = add(%c, %d);
subtract(%1, %2)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][1], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%3 = add(%c, %d);
subtract(%2, %3)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_both_adds_on_cpu():
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
%0 = add(%a, %b);
%1 = add(%c, %d);
%2 = on_device(%0, se_scope=meta[SEScope][0]);
%3 = on_device(%1, se_scope=meta[SEScope][0]);
subtract(%2, %3)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][0], meta[SEScope][0]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = add(%c, %d);
%3 = on_device(%2, se_scope=meta[SEScope][0], is_fixed=True);
%4 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%5 = device_copy(%3, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
subtract(%4, %5)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_sharing():
metatable = {"SEScope": [CPU, GPU]}
# The same add sub-expression is annotated twice.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = on_device(%0, se_scope=meta[SEScope][0]);
subtract(%1, %2)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%3 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%4 = device_copy(%2, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
subtract(%3, %4)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b):
x = np.add(a, b)
return np.subtract(x, x)
exercise(input(), expected(), ref, rands((5, 7), 2))
def test_let_on_cpu():
metatable = {"SEScope": [CPU, GPU]}
# The device for a let-bound expression can flow from uses of the let-bound var.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
let %l = add(%a, %b);
let %r = add(%c, %d);
%0 = on_device(%l, se_scope=meta[SEScope][0]);
subtract(%0, %r)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][1], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%a, %b);
let %l = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
let %r = add(%c, %d);
%1 = device_copy(%l, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
subtract(%1, %r)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_func_param_on_cpu():
metatable = {"SEScope": [CPU, GPU]}
# Devices for function parameters flow to call sites.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
let %f = fn (%x, %y) {
%0 = add(%x, %y);
on_device(%0, se_scope=meta[SEScope][0])
};
%1 = %f(%a, %b);
%2 = add(%c, %d);
subtract(%1, %2)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][0], meta[SEScope][0]],
result_se_scope=meta[SEScope][0]) {
let %f = fn (%x, %y,
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
add(%x, %y)
};
%0 = %f(%a, %b);
%1 = add(%c, %d);
subtract(%0, %1)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_func_result_on_cpu():
metatable = {"SEScope": [CPU, GPU]}
# Devices for call sites flow to function results.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32]) {
let %f = fn (%x, %y) {
add(%x, %y)
};
%0 = %f(%a, %b);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = add(%c, %d);
subtract(%1, %2)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
%c: Tensor[(5, 7), float32], %d: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][1], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
let %f = fn (%x, %y,
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
add(%x, %y)
};
%1 = %f(%a, %b);
%2 = on_device(%1, se_scope=meta[SEScope][0], is_fixed=True);
%3 = device_copy(%2, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%4 = add(%c, %d);
subtract(%3, %4)
}
""",
"from_string",
None,
metatable,
)
def ref(a, b, c, d):
return np.subtract(np.add(a, b), np.add(c, d))
exercise(input(), expected(), ref, rands((5, 7), 4))
def test_higher_order():
metatable = {"SEScope": [CPU, GPU]}
# The constraint on %a flows back to %y via %f and %h
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32]) {
let %f = fn (%g) {
fn (%a) {
%0 = on_device(%a, se_scope=meta[SEScope][0]);
%1 = %g(%0);
add(%1, %x)
}
};
let %h = fn (%b) {
negative(%b)
};
%2 = %f(%h);
%3 = %2(%y);
subtract(%x, %3)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][1], meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
let %f = fn (%g, param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][1]) {
fn (%a, param_se_scopes=[meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
%0 = device_copy(%a, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%1 = %g(%0);
add(%1, %x)
}
};
let %h = fn (%b, param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][1]) {
negative(%b)
};
%2 = %f(%h);
%3 = %2(%y);
subtract(%x, %3)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y):
def f(g):
return lambda a: np.add(g(a), x)
def h(b):
return np.negative(b)
return np.subtract(x, f(h)(y))
exercise(input(), expected(), ref, rands((5, 7), 2))
def test_function_in_tuple():
metatable = {"SEScope": [CPU, GPU]}
# Since %f ends up in a tuple its argument and result is forced to be on the CPU
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32]) {
let %f = fn (%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32]) {
%0 = on_device(%b, se_scope=meta[SEScope][0]);
add(%a, %0)
};
let %t = (%f, %x);
%1 = %t.1;
%2 = %t.0;
%2(%1, %y)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
let %f = fn (%a: Tensor[(5, 7), float32], %b: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
add(%a, %b)
};
let %t = (%f, %x);
%0 = %t.1;
%1 = %t.0;
%1(%0, %y)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y):
return np.add(x, y)
exercise(input(), expected(), ref, rands((5, 7), 2))
def test_device_copy():
const = rand((5, 7))
metatable = {"SEScope": [CPU, GPU], "relay.Constant": [relay.const(const)]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32]) {
%0 = device_copy(%x, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
add(%0, meta[relay.Constant][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
%0 = device_copy(%x, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
add(%0, meta[relay.Constant][0])
}
""",
"from_string",
None,
metatable,
)
def ref(x):
return np.add(x, const)
exercise(input(), expected(), ref, rands((5, 7), 1))
def test_shape_func():
metatable = {"SEScope": [HOST, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(?), float32], %s: Tensor[(1), int64]) {
%0 = fn (%y: Tensor[(?), float32]) {
nn.relu(%y)
};
let %p = on_device(%0, se_scope=meta[SEScope][1], is_fixed=True);
%1 = on_device(%x, se_scope=meta[SEScope][1], is_fixed=True);
%2 = vm.shape_of(%1, dtype="int64");
%3 = (%2,);
%4 = (%s,);
vm.shape_func(%p, %3, %4, is_input=[False])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(?), float32], %s: Tensor[(1), int64],
param_se_scopes=[meta[SEScope][1], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
let %p = fn (%y: Tensor[(?), float32],
param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][1]) {
nn.relu(%y)
};
%1 = vm.shape_of(%x, dtype="int64");
%2 = (%1,);
%3 = (%s,);
vm.shape_func(%p, %2, %3, is_input=[False])
}
""",
"from_string",
None,
metatable,
)
# Don't try to execute, too fiddly to setup.
exercise(input(), expected(), None, None)
def test_shape_of():
metatable = {"SEScope": [HOST, GPU]}
# We need to use is_fixed=True in the on_device call so that the tensor will be on the GPU. Otherwise the
# result defaults to the result device for @main which is the CPU, thus forcing a copy.
# TODO(mbs): Perhaps the defaulting heuristics are being too clever?
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(?, ?), float32]) {
%0 = on_device(%x, se_scope=meta[SEScope][1], is_fixed=True);
vm.shape_of(%0, dtype="int64")
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(?, ?), float32],
param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][0]) {
vm.shape_of(%x, dtype="int64")
}
""",
"from_string",
None,
metatable,
)
def ref(x):
return x.shape
exercise(input(), expected(), ref, rands((5, 7), 1))
def test_alloc_storage():
metatable = {"SEScope": [HOST, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%size: int64, %alignment: int64) {
memory.alloc_storage(%size, %alignment, se_scope=meta[SEScope][1])
}
""",
"from_string",
core,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%size: int64, %alignment: int64,
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
memory.alloc_storage(%size, %alignment, se_scope=meta[SEScope][1])
}
""",
"from_string",
core,
metatable,
)
# Don't try to execute, too fiddly to setup.
exercise(input(), expected(), None, None)
def test_alloc_tensor():
shape = np.array([3, 2])
metatable = {"SEScope": [HOST, GPU], "relay.Constant": [relay.const(shape, dtype="int64")]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%sto: Storage[]) {
memory.alloc_tensor(%sto, 0, meta[relay.Constant][0],
const_shape=meta[relay.Constant][0], assert_shape=[])
}
""",
"from_string",
core,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%sto: Storage[], param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][1]) {
%0 = on_device(0, se_scope=meta[SEScope][0], is_fixed=True);
%1 = on_device(meta[relay.Constant][0], se_scope=meta[SEScope][0], is_fixed=True);
memory.alloc_tensor(%sto, %0, %1, const_shape=meta[relay.Constant][0], assert_shape=[])
}
""",
"from_string",
core,
metatable,
)
# Don't try to execute, too fiddly to setup.
exercise(input(), expected(), None, None)
def test_reshape_tensor():
newshape = [2, 4, 2]
metatable = {"SEScope": [HOST, GPU], "relay.Constant": [relay.const(newshape, dtype="int64")]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(2, 8), float32]) {
vm.reshape_tensor(%x, meta[relay.Constant][0], newshape=[2, 4, 2])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(2, 8), float32],
param_se_scopes=[meta[SEScope][1]], result_se_scope=meta[SEScope][1]) {
%0 = on_device(meta[relay.Constant][0], se_scope=meta[SEScope][0], is_fixed=True);
vm.reshape_tensor(%x, %0, newshape=[2, 4, 2])
}
""",
"from_string",
None,
metatable,
)
def ref(x):
return np.reshape(x, newshape)
exercise(input(), expected(), ref, rands((2, 8), 1))
def test_dynamic_input():
metatable = {"SEScope": [GPU]}
# There's nothing special about inferring devices for partially unknown types.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x0: Tensor[(?, ?), float32], %x1: Tensor[(?, ?), float32]) {
add(%x0, %x1)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x0: Tensor[(?, ?), float32], %x1: Tensor[(?, ?), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
add(%x0, %x1)
}
""",
"from_string",
None,
metatable,
)
def ref(x0, x1):
return np.add(x0, x1)
exercise(input(), expected(), ref, rands((5, 7), 2))
def test_redundant_annotation():
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = subtract(%1, %z);
%3 = on_device(%0, se_scope=meta[SEScope][0]);
add(%2, %3)
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][1]],
result_se_scope=meta[SEScope][1]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%3 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%4 = subtract(%2, %z);
%5 = device_copy(%3, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
add(%4, %5)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y, z):
a = np.add(x, y)
return np.add(np.subtract(a, z), a)
exercise(input(), expected(), ref, rands((5, 7), 3))
def test_annotate_expr():
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][1]);
%2 = subtract(%1, %z);
on_device(%2, se_scope=meta[SEScope][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][1], meta[SEScope][1], meta[SEScope][0]],
result_se_scope=meta[SEScope][0]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][1], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][1], dst_se_scope=meta[SEScope][0]);
subtract(%2, %z)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y, z):
return np.subtract(np.add(x, y), z)
exercise(input(), expected(), ref, rands((5, 7), 3))
def test_annotate_all():
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = subtract(%1, %z);
on_device(%2, se_scope=meta[SEScope][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32], %z: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][0]],
result_se_scope=meta[SEScope][0]) {
%0 = add(%x, %y);
subtract(%0, %z)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y, z):
return np.subtract(np.add(x, y), z)
exercise(input(), expected(), ref, rands((5, 7), 3))
def test_conv_network():
r"""The network and devices are as follows:
data1 data2 <--- CPU
| |
conv2d conv2d <--- CPU
\ /
\ /
add <--- GPU
|
conv2d <--- CPU
|
<result> <--- CPU
"""
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%data1: Tensor[(1, 64, 56, 56), float32], %data2: Tensor[(1, 64, 56, 56), float32],
%weight: Tensor[(64, 64, 3, 3), float32]) {
%0 = nn.conv2d(%data1, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]);
%1 = nn.conv2d(%data2, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]);
%2 = on_device(%0, se_scope=meta[SEScope][0]);
%3 = on_device(%1, se_scope=meta[SEScope][0]);
%4 = add(%2, %3);
%5 = on_device(%4, se_scope=meta[SEScope][1]);
%6 = nn.conv2d(%5, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]);
on_device(%6, se_scope=meta[SEScope][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%data1: Tensor[(1, 64, 56, 56), float32], %data2: Tensor[(1, 64, 56, 56), float32],
%weight: Tensor[(64, 64, 3, 3), float32],
param_se_scopes=[meta[SEScope][0], meta[SEScope][0], meta[SEScope][0]],
result_se_scope=meta[SEScope][0]) {
%0 = nn.conv2d(%data1, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = nn.conv2d(%data2, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]);
%3 = on_device(%2, se_scope=meta[SEScope][0], is_fixed=True);
%4 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%5 = device_copy(%3, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%6 = add(%4, %5);
%7 = on_device(%6, se_scope=meta[SEScope][1], is_fixed=True);
%8 = device_copy(%7, src_se_scope=meta[SEScope][1], dst_se_scope=meta[SEScope][0]);
nn.conv2d(%8, %weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3])
}
""",
"from_string",
None,
metatable,
)
# Don't try to execute, we don't have a reference conv2d
exercise(input(), expected(), None, None)
def test_tuple_get_item():
metatable = {"SEScope": [CPU, GPU]}
# Note that the device copy should be placed after projection rather than before. This is handled by
# a heuristic in the pass.
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(3, 3, 4), float32]) {
let %t = split(%x, indices_or_sections=3);
%0 = on_device(%t, se_scope=meta[SEScope][0]);
%1 = on_device(%t, se_scope=meta[SEScope][0]);
%2 = %0.0;
%3 = %1.1;
%4 = subtract(%2, %3);
on_device(%4, se_scope=meta[SEScope][1])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(3, 3, 4), float32],
param_se_scopes=[meta[SEScope][0]], result_se_scope=meta[SEScope][1]) {
%0 = split(%x, indices_or_sections=3);
let %t = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%1 = %t.0;
%2 = on_device(%1, se_scope=meta[SEScope][0], is_fixed=True);
%3 = %t.1;
%4 = on_device(%3, se_scope=meta[SEScope][0], is_fixed=True);
%5 = device_copy(%2, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%6 = device_copy(%4, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
subtract(%5, %6)
}
""",
"from_string",
None,
metatable,
)
def ref(x):
t = np.split(x, 3)
return np.subtract(t[0], t[1])
exercise(input(), expected(), ref, rands((3, 3, 4), 1))
def test_propogation():
r""" The network and devices are as follows:
x <--- CPU
|
negative <--- CPU
/ \
negative negative <--- GPU
\ /
add <--- GPU
|
negative <--- CPU
|
<result> <--- CPU
"""
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32]) {
%0 = negative(%x);
%1 = on_device(%0, se_scope=meta[SEScope][0]);
%2 = negative(%1);
%3 = on_device(%0, se_scope=meta[SEScope][0]);
%4 = negative(%3);
%5 = on_device(%2, se_scope=meta[SEScope][1]);
%6 = on_device(%4, se_scope=meta[SEScope][1]);
%7 = add(%5, %6);
%8 = on_device(%7, se_scope=meta[SEScope][1]);
%9 = negative(%8);
on_device(%9, se_scope=meta[SEScope][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][0]], result_se_scope=meta[SEScope][0]) {
%0 = negative(%x);
%1 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%3 = on_device(%0, se_scope=meta[SEScope][0], is_fixed=True);
%4 = device_copy(%3, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%5 = negative(%2);
%6 = negative(%4);
%7 = add(%5, %6);
%8 = on_device(%7, se_scope=meta[SEScope][1], is_fixed=True);
%9 = device_copy(%8, src_se_scope=meta[SEScope][1], dst_se_scope=meta[SEScope][0]);
negative(%9)
}
""",
"from_string",
None,
metatable,
)
def ref(x):
y = np.negative(x)
return np.negative(np.add(np.negative(y), np.negative(y)))
exercise(input(), expected(), ref, rands((5, 7), 1))
def test_fusible_network():
r""" The network is as follows:
x y <--- GPU
\ /
add <--- GPU
/ \
negative \ <--- CPU
\ \
\ negative <--- GPU
\ /
add <--- GPU
|
negative <--- CPU
|
<result> <--- CPU
"""
metatable = {"SEScope": [CPU, GPU]}
def input():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][1]);
%2 = negative(%1);
%3 = on_device(%2, se_scope=meta[SEScope][0]);
%4 = negative(%0);
%5 = add(%3, %4);
%6 = on_device(%5, se_scope=meta[SEScope][1]);
%7 = negative(%6);
on_device(%7, se_scope=meta[SEScope][0])
}
""",
"from_string",
None,
metatable,
)
def expected():
return tvm.parser.parse(
"""
#[version = "0.0.5"]
def @main(%x: Tensor[(5, 7), float32], %y: Tensor[(5, 7), float32],
param_se_scopes=[meta[SEScope][1], meta[SEScope][1]], result_se_scope=meta[SEScope][0]) {
%0 = add(%x, %y);
%1 = on_device(%0, se_scope=meta[SEScope][1], is_fixed=True);
%2 = device_copy(%1, src_se_scope=meta[SEScope][1], dst_se_scope=meta[SEScope][0]);
%3 = negative(%2);
%4 = on_device(%3, se_scope=meta[SEScope][0], is_fixed=True);
%5 = device_copy(%4, src_se_scope=meta[SEScope][0], dst_se_scope=meta[SEScope][1]);
%6 = negative(%0);
%7 = add(%5, %6);
%8 = on_device(%7, se_scope=meta[SEScope][1], is_fixed=True);
%9 = device_copy(%8, src_se_scope=meta[SEScope][1], dst_se_scope=meta[SEScope][0]);
negative(%9)
}
""",
"from_string",
None,
metatable,
)
def ref(x, y):
z = np.add(x, y)
return np.negative(np.add(np.negative(z), | np.negative(z) | numpy.negative |
#!/usr/bin/env python
# Copyright 2018-2021 <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import nbconvert
import numpy as np
with open("project1.ipynb") as f:
exporter = nbconvert.PythonExporter()
python_file, _ = exporter.from_file(f)
with open("project1.py", "w") as f:
f.write(python_file)
from project1 import *
class TestSolution(unittest.TestCase):
def test_extrapolate_depth(self):
depth = extrapolate_depth('Nechelik_Data.csv', 54, 22)
ans = np.array([[ 2.70173155, 14.61847619, 24.75209183, 34.67826221,
45.44437714, 57.59245966, 68.78224487, 80.74127656,
93.33218919, 104.92247446],
[ 65.56019807, 82.074415 , 91.4114576 , 101.08788026,
112.58602273, 124.47371216, 133.36619646, 140.26372697,
144.79608252, 144.38027237],
[130.12883762, 142.95904286, 152.57926161, 155.04714833,
154.48411818, 152.21118072, 150.96466227, 148.46526713,
145.4597082 , 139.93757709],
[155.24926496, 155.37301106, 154.11428633, 149.92800778,
146.03594214, 142.35462312, 140.11571568, 135.07962928,
128.332499 , 122.3394009 ],
[151.15995179, 148.28292214, 143.97502747, 139.15637031,
135.05617018, 130.7246469 , 126.57847504, 122.21429303,
118.62278665, 115.57720736],
[143.73023011, 140.65271043, 136.55451279, 133.21773345,
128.99558116, 125.16930013, 121.93717074, 118.81040867,
115.54214496, 112.75619709],
[137.47723739, 134.19496031, 131.3469027 , 127.93072662,
124.63255187, 122.15669922, 120.11714503, 118.291131 ,
115.54858713, 113.84711935],
[132.06298958, 129.22726407, 126.36815605, 123.6668177 ,
121.3084949 , 119.12047552, 117.95847689, 115.73521596,
114.05345113, 112.13128615],
[125.62886418, 123.10016293, 121.16497259, 119.45967825,
117.40247364, 115.58775299, 113.97347575, 111.66651658,
110.59510072, 110.09450219],
[119.23080888, 117.58698671, 115.68735178, 113.77367335,
112.5836805 , 111.55229405, 110.34970856, 108.52301829,
106.98599511, 105.21573825]])
np.testing.assert_allclose(depth[10:20,10:20], ans, atol=10)
def test_nans(self):
depth = extrapolate_depth('Nechelik_Data.csv', 54, 22)
ans = np.array([[ np.nan, np.nan],
[ np.nan, np.nan],
[ np.nan, np.nan] ])
| np.testing.assert_allclose(depth[:3,:2], ans) | numpy.testing.assert_allclose |
import sys
import os
import shutil
import numpy
import soundfile
import librosa
import librosa.display
import matplotlib
from matplotlib import pyplot
def main():
src_dir = "../datasets/room2reverb/test_A/"
data_dir = "../datasets/room2reverb/test_B/"
if not os.path.isdir("output/images/"):
os.makedirs("output/images/")
set_style()
f = sys.argv[1]
example = os.path.join(data_dir, "%s_img.wav" % f)
src = os.path.join(src_dir, "%s_label.jpg" % f)
output = "output/images/%s_spec.png" % f
src_output = "output/images/%s_input.jpg" % f
y, sr = soundfile.read(example)
shutil.copy2(src, src_output)
y /= numpy.abs(y).max()
t = numpy.where(numpy.abs(y) > 0.00001)
y = y[t[0][0]:t[0][-1]]
m = librosa.feature.melspectrogram(y)
m = | numpy.log(m + 1e-8) | numpy.log |
# -*- coding: utf-8 -*-
# !/usr/bin/python
from .calculator import Calculator
from scipy.stats import norm
from ..util import convert_dataset, eval_pdf
import numpy as np
class AsymptoticCalculator(Calculator):
"""
    Class for asymptotic calculators. Can be used only with one parameter
    of interest.
    See <NAME>, <NAME>, <NAME> and <NAME>: Asymptotic formulae for
    likelihood-based tests of new physics. Eur. Phys. J., C71:1–19, 2011
"""
def __init__(self, config, nbins=100):
"""
__init__ function
"""
super(AsymptoticCalculator, self).__init__(config)
self._asymov_dataset = {}
self._asymov_loss = {}
self._asymov_nll = {}
self._nbins = nbins
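    # Hedged usage sketch (not part of the original source): assuming a calculator
    # configuration built elsewhere and POI wrappers for the tested values,
    #   calc = AsymptoticCalculator(config, nbins=100)
    #   pvalues = calc.pvalue(poinull, poialt, qtilde=False, onesided=True)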
def asymov_dataset(self, poi):
if poi not in self._asymov_dataset.keys():
models = self.config.models
minimizer = self.config.minimizer
oldverbose = minimizer.verbosity
minimizer.verbosity = 5
loss = self.config.obsloss()
poiparam = poi.parameter
poivalue = poi.value
msg = "\nGet fit best values for nuisance parameters for the"
msg += " alternative hypothesis!"
print(msg)
with poiparam.set_value(poivalue):
poiparam.floating = False
asymin = minimizer.minimize(loss=loss)
poiparam.floating = True
minimizer.verbosity = oldverbose
values = asymin.params
values[poiparam] = {"value": poivalue}
asydatasets = []
for m in models:
space = m.space
asydatasets.append(generate_asymov_dataset(m, values, space, self._nbins))
self._asymov_dataset[poi] = asydatasets
return self._asymov_dataset[poi]
def asymov_loss(self, poi):
if poi not in self._asymov_loss.keys():
config = self.config
models = config.models
obsdata = config.datasets
datasets = []
for i, ad in enumerate(self.asymov_dataset(poi)):
data = convert_dataset(obsdata[i], ad[0], ad[1])
datasets.append(data)
loss = config.lossbuilder(models, datasets)
self._asymov_loss[poi] = loss
return self._asymov_loss[poi]
def asymov_nll(self, poi, poialt):
config = self.config
minimizer = config.minimizer
ret = np.empty(len(poi))
for i, p in enumerate(poi):
if p not in self._asymov_nll.keys():
loss = self.asymov_loss(poialt)
nll = config.pll(minimizer, loss, p.parameter, p.value)
self._asymov_nll[p] = nll
ret[i] = self._asymov_nll[p]
return ret
def pvalue(self, poinull, poialt=None, qtilde=False, onesided=True,
onesideddiscovery=False):
qobs = self.qobs(poinull, onesided=onesided, qtilde=qtilde,
onesideddiscovery=onesideddiscovery)
sqrtqobs = np.sqrt(qobs)
needpalt = poialt is not None
if needpalt:
nll_poinull_asy = self.asymov_nll(poinull, poialt)
nll_poialt_asy = self.asymov_nll(poialt, poialt)
qalt = self.q(nll_poinull_asy, nll_poialt_asy)
qalt = self.qdist(qalt, 0, poinull.value, onesided=onesided,
onesideddiscovery=onesideddiscovery)
sqrtqalt = | np.sqrt(qalt) | numpy.sqrt |
import os
import argparse
from copy import deepcopy
DATA_DIR = '../data'
parser = argparse.ArgumentParser()
parser.add_argument("--seed", type=int, default=0)
parser.add_argument("--epochs", nargs='+', type=int, default=[10, 10, 10, 10, 10],
help='Epoch number for each task')
parser.add_argument("--batch_size", type=int, default=8,
help='training batch size')
parser.add_argument("--bert_learning_rate", type=float, default=3e-5,
help='learning rate for pretrained Bert')
parser.add_argument("--learning_rate", type=float, default=3e-5,
help='learning rate for Class Classifier/General Space Encoder/Specific Space Encoder')
parser.add_argument("--task_learning_rate", type=float, default=5e-4,
help='learning rate for Task ID Classifier')
parser.add_argument("--replay_freq", type=int, default=10,
help='frequency of replaying, i.e. replay one batch from memory'
' every replay_freq batches')
parser.add_argument('--kmeans', type=bool, default=False,
help='whether applying Kmeans when choosing examples to store')
parser.add_argument("--dump", type=bool, default=False,
help='dump the model or not')
parser.add_argument('--gpu', default='0', type=str,
help='id(s) for CUDA_VISIBLE_DEVICES')
parser.add_argument('--n-labeled', type=int, default=2000,
help='Number of training data for each class')
parser.add_argument('--n-val', type=int, default=2000,
help='Number of validation data for each class')
parser.add_argument("--nspcoe", type=float, default=1.0,
help='Coefficient for Next Sentence Prediction Loss')
parser.add_argument("--tskcoe", type=float, default=1.0,
help='Coefficient for task ID Prediction Loss')
parser.add_argument("--disen", type=bool, default=False,
help='Apply Information Disentanglement or not')
parser.add_argument("--hidden_size", type=int, default=128,
help='size of General/Specific Space')
parser.add_argument("--model_path", type=str, default="./dump",
help='where to dump the model')
parser.add_argument("--reg", type=bool, default=False,
help='Apply Regularization or Not')
parser.add_argument("--regcoe", type=float, default=0.5,
help='Regularization Coefficient when not replaying')
parser.add_argument("--regcoe_rply", type=float, default=5.0,
help='Regularization Coefficient when replaying')
parser.add_argument("--reggen", type=float, default=0.5,
help='Regularization Coefficient on General Space')
parser.add_argument("--regspe", type=float, default=0.5,
help='Regularization Coefficient on Specific Space')
parser.add_argument("--store_ratio", type=float, default=0.01,
help='how many samples to store for replaying')
parser.add_argument('--tasks', nargs='+', type=str,
default=['ag', 'yelp', 'amazon', 'yahoo', 'dbpedia'], help='Task Sequence')
parser.add_argument('--select_best', nargs='+', type=bool,
default=[True, True, True, True, True],
help='whether picking the model with best val acc on each task')
args = parser.parse_args()
os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
import numpy as np
import torch
from tqdm import tqdm
from transformers import AdamW, get_constant_schedule_with_warmup
from sklearn.cluster import KMeans, MiniBatchKMeans
from model import Model, Predictor
from read_data import compute_class_offsets, prepare_dataloaders
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
args.device = device
n_gpu = torch.cuda.device_count()
dataset_classes = {
'amazon' : 5,
'yelp' : 5,
'yahoo' : 10,
'ag' : 4,
'dbpedia' : 14,
}
class Memory(object):
def __init__(self):
self.examples = []
self.masks = []
self.labels = []
self.tasks = []
self.features = []
def append(self, example, mask, label, task):
self.examples.append(example)
self.masks.append(mask)
self.labels.append(label)
self.tasks.append(task)
def store_features(self, model):
"""
        Store features of the memorized examples using the model trained on the
        previous task, before training starts on a new task.
        Args:
            model: The model trained just after the previous task
        Returns: None
"""
self.features = []
length = len(self.labels)
model.eval()
with torch.no_grad():
for i in range(length):
x = torch.tensor(self.examples[i]).view(1, -1).to(device)
mask = torch.tensor(self.masks[i]).view(1, -1).to(device)
g_fea, s_fea, _, _, _ = model(x, mask)
fea = torch.cat([g_fea, s_fea], dim=1).view(-1).data.cpu().numpy()
self.features.append(fea)
print(len(self.features))
print(len(self.labels))
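    # Hedged usage sketch (not part of the original source): during training each
    # stored example is added with memory.append(example, mask, label, task_id);
    # after finishing a task, memory.store_features(model) caches its features, and
    # replay batches are later drawn with memory.get_random_batch(batch_size).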
def get_random_batch(self, batch_size, task_id=None):
if task_id is None:
permutations = np.random.permutation(len(self.labels))
index = permutations[:batch_size]
mini_examples = [self.examples[i] for i in index]
mini_masks = [self.masks[i] for i in index]
mini_labels = [self.labels[i] for i in index]
mini_tasks = [self.tasks[i] for i in index]
mini_features = [self.features[i] for i in index]
else:
index = [i for i in range(len(self.labels)) if self.tasks[i] == task_id]
np.random.shuffle(index)
index = index[:batch_size]
mini_examples = [self.examples[i] for i in index]
mini_masks = [self.masks[i] for i in index]
mini_labels = [self.labels[i] for i in index]
mini_tasks = [self.tasks[i] for i in index]
mini_features = [self.features[i] for i in index]
return torch.tensor(mini_examples), torch.tensor(mini_masks), torch.tensor(mini_labels), \
torch.tensor(mini_tasks), torch.tensor(mini_features)
def get_minibatch(self, batch_size):
length = len(self.labels)
permutations = | np.random.permutation(length) | numpy.random.permutation |
'''
###############################################################################
"MajoranaNanowire" Python3 Module
v 1.0 (2020)
Created by <NAME> (2018)
###############################################################################
"H_class/Lutchyn_Oreg/builders" submodule
This sub-package builds Lutchyn-Oreg Hamiltonians.
###############################################################################
'''
#%%############################################################################
######################## Required Packages ############################
###############################################################################
import numpy as np
import scipy.sparse
import scipy.sparse.linalg
import scipy.linalg
import scipy.constants as cons
from MajoranaNanowires.Functions import diagonal
#%%
def LO_1D_builder(N,dis,m_eff,mu,B,aR,d, space='position', k_vec=np.nan ,sparse='no'):
"""
1D Lutchy-Oreg Hamiltonian builder. It obtaines the Hamiltoninan for a 1D
Lutchy-Oreg chain with superconductivity.
Parameters
----------
N: int or arr
Number of sites.
dis: int or arr
Distance (in nm) between sites.
m_eff: int or arr
Effective mass. If it is an array, each element is the on-site
effective mass.
mu: float or arr
Chemical potential. If it is an array, each element is the on-site
chemical potential
B: float or arr
Zeeman splitting. If it is an array, each element is the Zeeman
splitting in each direction.
aR: float or arr
Rashba coupling.
-If aR is a float, aR is the Rashba coupling along the z-direction,
with the same value in every site.
-If aR is a 1D array with length=3, each element of the array is
the rashba coupling in each direction.
-If aR is an array of arrays (3 x N), each element of aR[i] is
an array with the on-site Rashba couplings in the direction i.
    d: float or arr
        Superconductor pairing amplitude.
        -If d is a float, d is the superconducting pairing amplitude,
        with the same value in every site.
        -If d is an array, each element of the array is the on-site
        superconducting pairing amplitude.
space: {"position","momentum"}
Space in which the Hamiltonian is built. "position" means
real-space (r-space). In this case the boundary conditions are open.
On the other hand, "momentum" means reciprocal space (k-space). In
this case the built Hamiltonian corresponds to the Hamiltonian of
the unit cell, with periodic boundary conditions along the
x-direction.
k_vec: arr
If space=='momentum', k_vec is the (discretized) momentum vector,
usually in the First Brillouin Zone.
sparse: {"yes","no"}
Sparsity of the built Hamiltonian. "yes" builds a dok_sparse matrix,
while "no" builds a dense matrix.
Returns
-------
H: arr
Hamiltonian matrix.
"""
#Make sure that the onsite parameters are arrays:
if np.isscalar(m_eff)
"""Perform normalization on inputs or rewards.
"""
import numpy as np
import torch
from gym.spaces import Box
def normalize_angle(x):
"""Wraps input angle to [-pi, pi].
"""
return ((x + np.pi) % (2 * np.pi)) - np.pi
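# Worked example (illustrative): angles just outside the range wrap around, e.g.
#   normalize_angle(3 * np.pi / 2)  -> -np.pi / 2
#   normalize_angle(-3 * np.pi / 2) ->  np.pi / 2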
class RunningMeanStd():
"""Calulates the running mean and std of a data stream.
Attributes:
mean (np.array): mean of data stream.
var (np.array): variance of data stream.
count (float): total count of data steam.
"""
def __init__(self, epsilon=1e-4, shape=()):
"""Initializes containers for data mean and variance.
Args:
epsilon (float): helps with arithmetic issues.
shape (tuple): the shape of the data stream's output.
"""
self.mean = np.zeros(shape, np.float64)
self.var = np.ones(shape, np.float64)
self.count = epsilon
def update(self, arr):
"""Update current stats with a new stream of data.
Args:
arr (np.array): array of data with shape (batch_size, *shape).
"""
batch_mean = np.mean(arr, axis=0)
batch_var = np.var(arr, axis=0)
batch_count = arr.shape[0]
self.update_from_moments(batch_mean, batch_var, batch_count)
def update_from_moments(self, batch_mean, batch_var, batch_count):
"""Util function for `update` method.
"""
delta = batch_mean - self.mean
tot_count = self.count + batch_count
new_mean = self.mean + delta * batch_count / tot_count
m_a = self.var * self.count
m_b = batch_var * batch_count
m_2 = m_a + m_b + np.square(delta) * self.count * batch_count / (self.count + batch_count)
new_var = m_2 / (self.count + batch_count)
new_count = batch_count + self.count
self.mean = new_mean
self.var = new_var
self.count = new_count
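# Usage sketch (illustrative, not from the original source). The merge above is the
# standard parallel-variance update, so feeding two half-batches matches one big batch:
#   rms = RunningMeanStd(shape=(3,))
#   rms.update(np.random.randn(64, 3))
#   rms.update(np.random.randn(64, 3))
#   # rms.mean / rms.var now approximate the statistics of all 128 samples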
class BaseNormalizer(object):
"""Template/default normalizer.
Attributes:
read_only (bool): whether to freeze the current stats being tracked.
"""
def __init__(self, read_only=False):
self.read_only = read_only
def set_read_only(self):
self.read_only = True
def unset_read_only(self):
self.read_only = False
def __call__(self, x, *args, **kwargs):
"""Invokes normalization on the given input.
"""
return x
def state_dict(self):
"""Returns snapshot of current stats.
"""
return {}
def load_state_dict(self, _):
"""Restores the stats from a snapshot.
"""
pass
class MeanStdNormalizer(BaseNormalizer):
"""Normalize by the running average.
"""
def __init__(self, shape=(), read_only=False, clip=10.0, epsilon=1e-8):
"""Initializes the data stream tracker.
Args:
shape (tuple): shape of data being tracked.
read_only (bool): whether to freeze the tracker.
clip (float): bounds on the data.
epsilon (float): offset to prevent divide-by-zero.
"""
super().__init__(read_only)
self.read_only = read_only
self.rms = RunningMeanStd(shape=shape)
self.clip = clip
self.epsilon = epsilon
def __call__(self, x):
"""Update tracker given data, optionally normalize the data.
"""
x = np.asarray(x)
if not self.read_only:
self.rms.update(x)
return np.clip(
(x - self.rms.mean) / np.sqrt(self.rms.var + self.epsilon),
-self.clip, self.clip)
def state_dict(self):
return {'mean': self.rms.mean, 'var': self.rms.var}
def load_state_dict(self, saved):
self.rms.mean = saved['mean']
self.rms.var = saved['var']
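# Usage sketch (illustrative assumption, not from the original source):
#   obs_norm = MeanStdNormalizer(shape=(4,), clip=5.0)
#   normed = obs_norm(np.random.randn(8, 4))   # updates stats, returns clipped z-scores
#   snapshot = obs_norm.state_dict()           # {'mean': ..., 'var': ...}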
class RewardStdNormalizer(MeanStdNormalizer):
"""Reward normalization by running average of returns.
Papers:
* arxiv.org/pdf/1808.04355.pdf
* arxiv.org/pdf/1810.12894.pdf
Also see:
* github.com/openai/baselines/issues/538
"""
def __init__(self, gamma=0.99, read_only=False, clip=10.0, epsilon=1e-8):
"""Initializes the data stream tracker.
Args:
gamma (float): discount factor for rewards.
read_only (bool): whether to freeze the tracker.
clip (float): bounds on the data.
epsilon (float): offset to prevent divide-by-zero.
"""
# Reward has default shape (1,) or just ().
super().__init__((), read_only, clip, epsilon)
self.gamma = gamma
self.ret = None
def __call__(self, x, dones):
"""Update tracker given reward, optionally normalize the reward (only scaling).
"""
x = np.asarray(x)
from __future__ import division
import numpy as np
import scipy.sparse as sp
import numpy.polynomial.legendre as leg
from scipy.linalg import lu
import scipy.interpolate as intpl
from pymg.collocation_base import CollBase
class CollGaussLegendre(CollBase):
"""
Implements Gauss-Legendre Quadrature by deriving from CollBase and implementing Gauss-Legendre nodes
-> actually already part of CollBase, this is just for consistency
"""
def __init__(self, num_nodes, tleft, tright):
super(CollGaussLegendre, self).__init__(num_nodes, tleft, tright)
assert num_nodes > 1, "Number of nodes should be at least 2 for Gauss-Legendre, but is %d" % num_nodes
self.order = 2 * self.num_nodes
self.nodes = self._getNodes
self.weights = self._getWeights(tleft, tright)
self.Qmat = self._gen_Qmatrix
self.Smat = self._gen_Smatrix
self.QDmat = self._gen_QDmatrix
self.delta_m = self._gen_deltas
self.left_is_node = False
self.right_is_node = False
@property
def _getNodes(self):
"""
Computes nodes for the Gauss-Legendre quadrature of order :math:`n>1` on :math:`[-1,+1]`.
(ported from MATLAB code; reference see below; original comment from the MATLAB code:)
.. epigraph::
Unlike many publicly available functions, this function is valid for :math:`n>=46`.
This is due to the fact that it does not rely on MATLAB's built-in 'roots' routines to determine the roots
of the Legendre polynomial, but finds the roots by looking for the eigenvalues of an alternative version of
the companion matrix of the n'th degree Legendre polynomial.
The companion matrix is constructed as a symmetrical matrix, guaranteeing that all the eigenvalues (roots)
will be real.
On the contrary, MATLAB's 'roots' function uses a general form for the companion matrix, which becomes
unstable at higher orders :math:`n`, leading to complex roots.
-- original MATLAB function by: <NAME> <<EMAIL>> (February 21, 2010)
Python version by <NAME>, 2014
:return: Gauss-Legendre nodes
"""
M = self.num_nodes
a = self.tleft
b = self.tright
# Building the companion matrix comp_mat with det(nodes*I-comp_mat)=P_n(nodes), where P_n is the
# Legendre polynomial under consideration. comp_mat will be constructed in such a way that it is symmetric.
linspace = np.linspace(1, M - 1, M - 1)
v = linspace / np.sqrt(4.0 * linspace ** 2 - 1.0)
comp_mat = np.diag(v, 1) + np.diag(v, -1)
# Determining the abscissas (nodes) - since det(nodesI-comp_mat)=P_n(nodes), the abscissas are the roots
# of the characteristic polynomial, i.e. the eigenvalues of comp_mat
[eig_vals, _] = np.linalg.eig(comp_mat)
indizes = np.argsort(eig_vals)
nodes = eig_vals[indizes]
# take real part and shift from [-1,1] to [a,b]
nodes = nodes.real
nodes = (a * (1 - nodes) + b * (1 + nodes)) / 2
return nodes
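# Illustrative check (not part of the original class): Gauss-Legendre with M nodes
# integrates polynomials up to degree 2M-1 exactly, so on [0, 1]:
#   coll = CollGaussLegendre(num_nodes=3, tleft=0.0, tright=1.0)
#   approx = np.dot(coll.weights, coll.nodes ** 5)   # ~ 1/6 = integral of x^5 over [0, 1]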
class CollGaussLobatto(CollBase):
"""
Implements Gauss-Lobatto Quadrature by deriving from CollBase and implementing Gauss-Lobatto nodes
"""
def __init__(self, num_nodes, tleft, tright):
super(CollGaussLobatto, self).__init__(num_nodes, tleft, tright)
assert num_nodes > 1, "Number of nodes should be at least 2 for Gauss-Lobatto, but is %d" % num_nodes
self.order = 2 * self.num_nodes - 2
self.nodes = self._getNodes
self.weights = self._getWeights(tleft, tright)
self.Qmat = self._gen_Qmatrix
self.Smat = self._gen_Smatrix
self.delta_m = self._gen_deltas
self.left_is_node = True
self.right_is_node = True
self.QDmat = self._gen_QDmatrix
@property
def _getNodes(self):
"""
Copyright by <NAME>, 2014
Computes Gauss-Lobatto integration nodes.
Calculates the Gauss-Lobatto integration nodes via a root calculation of derivatives of the legendre
polynomials. Note that the precision of float64 is not guaranteed.
"""
M = self.num_nodes
a = self.tleft
b = self.tright
roots = leg.legroots(leg.legder(np.array([0] * (M - 1) + [1], dtype=np.float64)))
nodes = np.array(np.append([-1.0], np.append(roots, [1.0])), dtype=np.float64)
nodes = (a * (1 - nodes) + b * (1 + nodes)) / 2
return nodes
class CollGaussRadau_Right(CollBase):
"""
Implements Gauss-Radau Quadrature by deriving from CollBase and implementing Gauss-Radau nodes
"""
def __init__(self, num_nodes, tleft, tright):
super(CollGaussRadau_Right, self).__init__(num_nodes, tleft, tright)
assert num_nodes > 1, "Number of nodes should be at least 2 for Gauss-Radau, but is %d" % num_nodes
self.order = 2 * self.num_nodes - 1
self.nodes = self._getNodes
self.weights = self._getWeights(tleft, tright)
self.Qmat = self._gen_Qmatrix
self.Smat = self._gen_Smatrix
self.delta_m = self._gen_deltas
self.left_is_node = False
self.right_is_node = True
self.QDmat = self._gen_QDmatrix
@property
def _getNodes(self):
"""
Copyright by <NAME> (who copied this from somewhere else), 2014
Computes Gauss-Radau integration nodes with right point included.
"""
M = self.num_nodes
a = self.tleft
b = self.tright
alpha = 1.0
beta = 0.0
diag = np.zeros(M - 1)
subdiag = np.zeros(M - 2)
diag[0] = (beta - alpha) / (2 + alpha + beta)
for jj in range(1, M - 1):
diag[jj] = (beta - alpha) * (alpha + beta) / (2 * jj + 2 + alpha + beta) / (2 * jj + alpha + beta)
subdiag[jj - 1] = np.sqrt(4 * jj * (jj + alpha) * (jj + beta) * (jj + alpha + beta)) \
/ np.sqrt(
(2 * jj - 1 + alpha + beta) * (2 * jj + alpha + beta) ** 2 * (2 * jj + 1 + alpha + beta))
subdiag1 = np.zeros(M - 1)
subdiag2 = np.zeros(M - 1)
subdiag1[0:-1] = subdiag
subdiag2[1:] = subdiag
Mat = sp.spdiags(data=[subdiag1, diag, subdiag2], diags=[-1, 0, 1], m=M - 1, n=M - 1).todense()
x = np.sort(np.linalg.eigvals(Mat))
nodes = np.concatenate((x, [1.0]))
nodes = (a * (1 - nodes) + b * (1 + nodes)) / 2
return nodes
class CollGaussRadau_Left(CollBase):
"""
Implements Gauss-Radau Quadrature by deriving from CollBase and implementing Gauss-Radau nodes
"""
def __init__(self, num_nodes, tleft, tright):
super(CollGaussRadau_Left, self).__init__(num_nodes, tleft, tright)
assert num_nodes > 1, "Number of nodes should be at least 2 for Gauss-Radau, but is %d" % num_nodes
self.order = 2 * self.num_nodes - 1
self.nodes = self._getNodes
self.weights = self._getWeights(tleft, tright)
self.Qmat = self._gen_Qmatrix
self.Smat = self._gen_Smatrix
self.delta_m = self._gen_deltas
self.left_is_node = True
self.right_is_node = False
self.QDmat = self._gen_QDmatrix
@property
def _getNodes(self):
"""
Copyright by <NAME> (who copied this from somewhere else), 2014
Computes Gauss-Radau integration nodes with left point included.
"""
M = self.num_nodes
a = self.tleft
b = self.tright
alpha = 0.0
beta = 1.0
diag = np.zeros(M - 1)
subdiag = np.zeros(M - 2)
diag[0] = (beta - alpha) / (2 + alpha + beta)
for jj in range(1, M - 1):
diag[jj] = (beta - alpha) * (alpha + beta) / (2 * jj + 2 + alpha + beta) / (2 * jj + alpha + beta)
subdiag[jj - 1] = np.sqrt(4 * jj * (jj + alpha) * (jj + beta) * (jj + alpha + beta)) \
/ np.sqrt(
(2 * jj - 1 + alpha + beta) * (2 * jj + alpha + beta) ** 2 * (2 * jj + 1 + alpha + beta))
subdiag1 = np.zeros(M - 1)
subdiag2 = np.zeros(M - 1)
subdiag1[0:-1] = subdiag
subdiag2[1:] = subdiag
Mat = sp.spdiags(data=[subdiag1, diag, subdiag2], diags=[-1, 0, 1], m=M - 1, n=M - 1).todense()
x = np.sort(np.linalg.eigvals(Mat))
nodes = np.concatenate(([-1.0], x))
nodes = (a * (1 - nodes) + b * (1 + nodes)) / 2
print('WARNING: GaussRadau_Left untested, use at own risk!')
return nodes
class CollGaussRadau_Right_LU_Trick(CollGaussRadau_Right):
"""
Implements Gauss-Radau Quadrature by deriving from CollBase and implementing Gauss-Radau nodes and as
preconditioner we implement the LU_Trick
"""
def __init__(self, num_nodes, tleft, tright):
super(CollGaussRadau_Right_LU_Trick, self).__init__(num_nodes, tleft, tright)
Q = self.Qmat
p, l, u = lu(Q[1:, 1:].transpose())
# print np.diag(l)
self.QDmat = u.transpose()
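# Note (illustrative, not from the original source): the "LU trick" takes the transpose
# of the trimmed Q matrix, Q^T = P L U, and uses U^T as QDmat, i.e. a lower-triangular
# approximation of Q:
#   coll = CollGaussRadau_Right_LU_Trick(3, 0.0, 1.0)
#   assert np.allclose(coll.QDmat, np.tril(coll.QDmat))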
class CollSplineRight(CollBase):
"""
If a spectral quadrature method is used, an order higher than 15 is not practical,
because the underlying interpolation numerically loses stability. This collocation class
uses spline functions to achieve arbitrarily large Q matrices with a band structure.
"""
def __init__(self, num_nodes, tleft, tright, order=3):
super(CollSplineRight, self).__init__(num_nodes, tleft, tright)
self.Q = np.zeros((num_nodes, num_nodes))
self.nodes = self._getNodes
# get the defining tck's for each spline basis function
circ_one = np.zeros(self.num_nodes)
"""
Factor class and necessary functions
"""
# from __future__ import annotations
import numpy as np
from sortedcontainers import SortedSet
from collections import namedtuple
from typing import List, Tuple, Union, overload, Optional, Callable, Iterable
from .helper import reduce_tuples
import functools
Variable = namedtuple('Variable', ['vid', 'dim', 'type'])
Assignment = namedtuple('Assignment', ['vid', 'val'])
# def reduce_tuples(input_list: List, pos: int=0)-> List:
# return [el[pos] for el in input_list]
class Factor:
fid = 0
def __init__(self, scope: List[Variable]=None, table: Union[List[float], float]=None, factor_type: str= ''):
self.fid = Factor.fid # later maintain this fid in a SortedSet
Factor.fid += 1
# self.logscale = log_scale
self.type = factor_type
self.scope_vars = SortedSet(scope) if scope else SortedSet()
self.scope_vids = SortedSet(reduce_tuples(scope, pos=0)) if scope else SortedSet()
if not scope:
self.table = np.array(table, dtype=np.float64)
import numpy as np
import json
from detail import param
from detail import mask as maskUtils
class instsegEval:
def __init__(self,details):
self.params = param.Params(iouType='segm')
self.details = details
self.params.useCat = True
self.params.imgIds = list(details.imgs.keys())
self.params.catIds = list(details.cats.keys())
self.params.maxDets = [1, 10, 100, np.inf]
self.params.iouThrs = \
np.linspace(.5, 0.95, np.round((0.95 - .5) / .05)
"""Modified Whole-History Rating model for climbing"""
# Copyright 2019 <NAME>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import itertools
import numpy as np
from .bradley_terry import get_bt_derivatives
from .climber import Climber
from .log_normal_distribution import LogNormalDistribution
def expand_to_slices(values, slices, dtype=None):
"""Expand normalized values to contiguous blocks.
Parameters
----------
values : ndarray
The normalized values.
slices : list of pairs
The (start, end) pairs corresponding to a slice in the output. The
implied slices must be contiguous and in ascending order.
"""
_, n = slices[-1]
expanded = np.empty([n], dtype=dtype)
for i, (start, end) in enumerate(slices):
expanded[start:end] = values[i]
return expanded
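# Worked example (illustrative, not from the original source):
#   expand_to_slices(np.array([10.0, 20.0]), [(0, 2), (2, 5)])
#   -> array([10., 10., 20., 20., 20.])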
class Ascents(
collections.namedtuple(
"Ascents", ["wins", "slices", "adversary", "clean"], defaults=[None]
)
):
"""Stores ascents organized into contiguous slices.
Ascents are organized into player-order, where the player is a route or
a page. Hence ascents with the same player are contiguous and can be
addressed by a slice.
Attributes
----------
wins : ndarray
Count of wins for each player.
slices : list of pairs
(start, end) pairs defining the slice in the player-ordered ascents,
for each player.
adversary : ndarray of intp
The index of the adversary for each player-ordered ascent.
clean : ndarray or None
Each element is 1 if the ascent was clean, 0 otherwise for each ascent.
"""
def make_route_ascents(ascents_clean, ascents_page_slices, ascents_route, num_routes):
"""Create a permutation of ascents in route-order.
Parameters
----------
ascents_clean : array_like of float
1 if the ascent was clean, 0 otherwise.
ascents_route : ndarray of intp
Route index of each ascent.
ascents_page_slices : list of pairs
Each (start, end) entry defines the slice of the ascents for a page.
num_routes : integer
Number of routes. Route indices must be in the interval
[0, num_routes). Routes may have zero ascents.
Returns
-------
Ascents
Ascents ordered by (and sliced by) route. The "slices" list will have
length num_routes. The "clean" attribute is unpopulated.
"""
num_ascents = len(ascents_route)
route_wins = []
rascents_route_slices = []
rascents_page = [0] * num_ascents
permutation = [(route, a) for a, route in enumerate(ascents_route)]
permutation.sort()
ascent_to_rascent = [0] * num_ascents
# Add an additional ascent so the loop adds all routes.
permutation = itertools.chain(permutation, [(num_routes, -1)])
start = end = 0
i = 0
wins = 0.0
for j, (route, a) in enumerate(permutation):
if 0 <= a:
ascent_to_rascent[a] = j
if i < route:
rascents_route_slices.append((start, end))
route_wins.append(wins)
# Routes with no ascents:
rascents_route_slices.extend([(end, end)] * (route - i - 1))
route_wins.extend([0.0] * (route - i - 1))
i = route
start = j
wins = 0.0
end = j + 1
wins += 1.0 - ascents_clean[a]
for page, (start, end) in enumerate(ascents_page_slices):
for a in range(start, end):
rascents_page[ascent_to_rascent[a]] = page
return Ascents(
np.array(route_wins, dtype=np.float64),
rascents_route_slices,
np.array(rascents_page, dtype=np.intp),
)
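# Usage sketch (values are illustrative assumptions): three ascents on two routes,
# all belonging to a single page.
#   route_ascents = make_route_ascents(
#       ascents_clean=[1.0, 0.0, 1.0],
#       ascents_page_slices=[(0, 3)],
#       ascents_route=[0, 1, 0],
#       num_routes=2)
#   # route_ascents.wins -> [0.0, 1.0], the number of non-clean ascents per route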
class WholeHistoryRating:
"""Performs optimization for route and climber ratings.
Initializes models for climbers and routes from raw ascent data.
Stores an estimate for each rating and performs optimization using Newton's
method.
We use two orderings for ascents:
- page-order: ascents are ordered by page index
- route-order: ascents are ordered by route index
Attributes
----------
page_ratings : ndarray
Current estimate of the gamma rating of each page.
route_ratings : ndarray
Current estimate of the gamma rating of each route.
page_var : ndarray
Estimate of the variance of the natural rating of each page.
page_cov : ndarray
Estimate of the covariance between the natural rating of each page and
the next page. The covariance for the last page of each climber is
not meaningful.
route_var : ndarray
Estimate of the variance of the natural rating of each route.
"""
climber_mean = 0.0
climber_variance = 1.0
route_variance = 1.0
# Private Attributes
# ------------------
# _page_ascents : Ascents
# Ascents in page order.
# _route_ascents : Ascents
# Ascents in route order (no clean).
# _pages_climber_slices : list of pairs
# Start and end indices in _page_ratings for each climber.
# _pages_gap : ndarray
# Interval of time between consecutive pages of a climber.
# _route_priors : GammaDistribution
# Distributions for the gamma prior on each route's rating.
# _climbers : list of Climber
# Climbers (in the same order as _pages_climber_slices).
def __init__(
self,
ascents_route,
ascents_clean,
ascents_page_slices,
pages_climber_slices,
routes_gamma,
pages_gap,
):
"""Initialize a WHR model.
Parameters
----------
ascents_route : array_like of int
The 0-based ID of the route for each ascent. The implied ascents
must be in page order.
ascents_clean : array_like of float
1 for a clean ascent, 0 otherwise, for each ascent. The implied
ascents must be in page order.
ascents_page_slices : list of pairs
Each (start, end) entry defines the slice of the ascents for a page.
pages_climber_slices : list of pairs
Each (start, end) entry defines the slice of the pages for a
climber.
routes_gamma : list
Initial gamma ratings for each route.
pages_gap : array_like of float
Interval of time between each page and the next page. The gap for
the last page of each climber is not used.
"""
num_pages = len(ascents_page_slices)
self.route_ratings = np.array(routes_gamma, dtype=np.float64)
self.page_ratings = np.full(num_pages, 1.0)
#!/usr/bin/env python
import numpy as np
import sys
import traceback
import pandas as pd
def customMutation(individual, attrs_list, indpb=0.2, continuous_scale=0.1, discrete_scale=0.1):
"""Mutation
Parameters
----------
indpb : float
Independent probability for each attribute to be mutated.
"""
assert len(individual) == len(attrs_list)
for i, attr in enumerate(attrs_list):
# determine whether we are performing a mutation
if np.random.random() < indpb:
vartype = attr.__name__
if "continuous" in vartype:
# Gaussian perturbation with scale being 0.1 of domain range
bound_low = attr.args[0]
bound_high = attr.args[1]
scale = (bound_high - bound_low) * continuous_scale
individual[i] += np.random.normal(loc=0.0, scale=scale)
individual[i] = _project_bounds(individual[i], bound_low, bound_high)
elif "discrete" in vartype:
# add/substract an integer by rounding Gaussian perturbation
# scale is 0.1 of domain range
bound_low = attr.args[0]
bound_high = attr.args[1]
scale = (bound_high - bound_low) * discrete_scale
delta = np.random.normal(loc=0.0, scale=scale)
individual[i] += np.round(delta, decimals=0)
individual[i] = _project_bounds(individual[i], bound_low, bound_high)
elif "categorical" in vartype:
# resample a random category
individual[i] = attr()
else:
raise ValueError()
else:
continue
return individual,
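# Usage sketch (illustrative; `param_space` and `parent` are assumed to exist and to
# follow the format used by create_deap_toolbox / random_sampling below):
#   toolbox, attrs_list = create_deap_toolbox(param_space)
#   child, = customMutation(list(parent), attrs_list, indpb=0.2)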
def cxDummy(ind1, ind2):
"""Dummy crossover that does nothing. This is used when we have a single gene in the chromosomes, such that
crossover would not change the population.
"""
return ind1, ind2
def create_deap_toolbox(param_space):
from deap import base
toolbox = base.Toolbox()
attrs_list = []
for i, param in enumerate(param_space):
vartype = param['type']
if vartype in 'continuous':
toolbox.register(f"x{i}_{vartype}", np.random.uniform, param['low'], param['high'])
elif vartype in 'discrete':
toolbox.register(f"x{i}_{vartype}", np.random.randint, param['low'], param['high'])
elif vartype in 'categorical':
toolbox.register(f"x{i}_{vartype}", np.random.choice, param['categories'])
attr = getattr(toolbox, f"x{i}_{vartype}")
attrs_list.append(attr)
return toolbox, attrs_list
def _project_bounds(x, x_low, x_high):
if x < x_low:
return x_low
elif x > x_high:
return x_high
else:
return x
def random_sampling(param_space):
X_next = []
for param in param_space:
vartype = param['type']
if vartype in 'continuous':
x = np.random.uniform(low=param['low'], high=param['high'])
elif vartype in 'discrete':
x = np.random.randint(low=param['low'], high=param['high'])
elif vartype in 'categorical':
x = np.random.choice(param['categories'])
X_next.append(x)
return X_next
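# Usage sketch (illustrative assumption of the expected param_space format):
#   param_space = [
#       {'type': 'continuous', 'low': 0.0, 'high': 1.0},
#       {'type': 'discrete', 'low': 0, 'high': 10},
#       {'type': 'categorical', 'categories': ['a', 'b', 'c']},
#   ]
#   X_next = random_sampling(param_space)   # e.g. [0.42, 7, 'b']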
def second_sample(X, param_space):
"""Rule to generate second sample"""
if len(np.shape(X)) > 1:
# remove one dimension
if isinstance(X, list) or isinstance(X, np.ndarray):
X = X[0]
elif isinstance(X, pd.DataFrame):
X = X.to_numpy()[0]
else:
raise NotImplementedError
X = list(X)
X_next = []
for xi, param in zip(X, param_space):
vartype = param['type']
if vartype in 'continuous':
if (xi - param['low']) > (param['high'] - xi):
x = xi - (xi - param['low']) / 2.
else:
x = xi + (param['high'] - xi) / 2.
elif vartype in 'discrete':
if (xi - param['low']) > (param['high'] - xi):
x = int(xi - (xi - param['low']) / 2.)
else:
x = int(xi + (param['high'] - xi) / 2.)
elif vartype in 'categorical':
x = np.random.choice(param['categories'])
import pyopencl as cl
import numpy
from pyPaSWAS.Core.SmithWaterman import SmithWaterman
from pyPaSWAS.Core import STOP_DIRECTION, LEFT_DIRECTION, NO_DIRECTION, UPPER_DIRECTION, UPPER_LEFT_DIRECTION
from pyPaSWAS.Core.PaSWAS import CPUcode
from pyPaSWAS.Core.PaSWAS import GPUcode
from pyPaSWAS.Core.StartingPoint import StartingPoint
class SmithWatermanOcl(SmithWaterman):
'''
classdocs
'''
def __init__(self, logger, score, settings):
'''
Constructor
'''
SmithWaterman.__init__(self, logger, score, settings)
#self.oclcode = OCLcode(self.logger)
# platforms: A single ICD on a computer
self.platform = None
# device: device which will perform computation (for example a CPU or GPU)
self.device = None
# context: manages a command-queue, memory, program and kernel objects
self.ctx = None
# queue: stores instructions for the device
self.queue = None
# program: the compiled kernel program
self.program = None
# device_type: type of device to run computations on
self.device_type = 0
self._set_device_type(self.settings.device_type)
self._set_platform(self.settings.platform_name)
self._initialize_device(int(self.settings.device_number))
self.always_reallocate_memory = False
def _init_oclcode(self):
# Compiling part of the OpenCL code in advance
self.oclcode.set_shared_xy_code(self.shared_x, self.shared_y)
self.oclcode.set_direction_code(NO_DIRECTION, UPPER_LEFT_DIRECTION,
UPPER_DIRECTION, LEFT_DIRECTION,
STOP_DIRECTION)
def _execute_calculate_score_kernel(self, number_of_blocks, idx, idy):
''' Executes a single run of the calculate score kernel'''
pass
def _execute_traceback_kernel(self, number_of_blocks, idx, idy):
''' Executes a single run of the traceback kernel'''
pass
def _get_direction_byte_array(self):
'''
Get the resulting directions
@return gives the resulting direction array as byte array
'''
pass
def __del__(self):
'''Destructor. Removes the current running context'''
del self.program
del self.queue
del self.ctx
del self.device
del self.platform
self.device_type = 0
def _set_device_type(self, device_type):
'''Sets the device type'''
if device_type.upper() == 'ACCELERATOR':
self.device_type = cl.device_type.ACCELERATOR
elif device_type.upper() == 'GPU':
self.device_type = cl.device_type.GPU
elif device_type.upper() == 'CPU':
self.device_type = cl.device_type.CPU
else:
self.logger.warning("Warning: device type is set to default: GPU")
self.device_type = cl.device_type.GPU
def _set_platform(self, platform_name):
found_platform = False
for platform in cl.get_platforms():
for device in platform.get_devices():
if (platform_name.upper() in str(platform).upper()
and device.get_info(cl.device_info.TYPE) == self.device_type):
self.platform = platform
found_platform = True
break
if(found_platform):
self.logger.debug("Found platform {}".format(str(self.platform)))
break
if not (self.platform):
for platform in cl.get_platforms():
for device in platform.get_devices():
if (device.get_info(cl.device_info.TYPE) == self.device_type):
self.platform = platform
found_platform = True
break
if(found_platform):
self.logger.debug('Found platform {}, however this is not the platform indicated by the user'.format(str(self.platform)))
break
if not (self.platform):
raise RuntimeError('Failed to find platform')
def _initialize_device(self, device_number):
'''
Initializes a device and verifies its computational abilities.
@param device_number: int value representing the device to use
'''
self.logger.debug('Initializing device {0}'.format(device_number))
self.device = self.platform.get_devices(device_type=self.device_type)[device_number]
if int(self.settings.number_of_compute_units) > 0:
self.device = self.device.create_sub_devices([cl.device_partition_property.EQUALLY,int(self.settings.number_of_compute_units)])[int(self.settings.sub_device)]
self.ctx = cl.Context(devices=[self.device])
self.queue = cl.CommandQueue(self.ctx)
#self.logger.debug("context:{}".format(self.ctx) )
def _device_global_mem_size(self):
#return clCharacterize.usable_local_mem_size(self.device)
# GLOBAL_MEM_SIZE
return self.device.get_info(cl.device_info.MAX_MEM_ALLOC_SIZE)
def _clear_memory(self):
'''Clears the claimed memory on the device.'''
if not self.always_reallocate_memory:
return
self.logger.debug('Clearing device memory.')
self._clear_normal_memory()
self._clear_zero_copy_memory()
try:
self.queue.finish()
except:
pass
def _clear_normal_memory(self):
self.logger.debug('Clearing normal device memory.')
if (self.d_sequences is not None):
try:
self.d_sequences.finish()
except:
pass
self.d_sequences.release()
if (self.d_targets is not None):
try:
self.d_targets.finish()
except:
pass
self.d_targets.release()
if (self.d_matrix is not None):
try:
self.d_matrix.finish()
except:
pass
self.d_matrix.release()
if (self.gap_extension and self.d_matrix_i is not None):
try:
self.d_matrix_i.finish()
except:
pass
self.d_matrix_i.release()
if (self.gap_extension and self.d_matrix_j is not None):
try:
self.d_matrix_j.finish()
except:
pass
self.d_matrix_j.release()
if (self.d_global_maxima is not None):
try:
self.d_global_maxima.finish()
except:
pass
self.d_global_maxima.release()
if (self.d_index_increment is not None):
try:
self.d_index_increment.finish()
except:
pass
self.d_index_increment.release()
def _clear_zero_copy_memory(self):
self.logger.debug('Clearing zero-copy device memory.')
if (self.d_starting_points_zero_copy is not None):
try:
self.d_starting_points_zero_copy.finish()
except:
pass
self.d_starting_points_zero_copy.release()
if (self.d_max_possible_score_zero_copy is not None):
try:
self.d_max_possible_score_zero_copy.finish()
except:
pass
self.d_max_possible_score_zero_copy.release()
def _need_reallocation(self, buffer, size):
if self.always_reallocate_memory:
return True
if buffer is None:
return True
if buffer.get_info(cl.mem_info.SIZE) < size:
try:
buffer.finish()
except:
pass
buffer.release()
return True
return False
def _init_normal_memory(self):
'''
#_init_memory will initialize all required memory on the device based on the current settings.
Make sure to initialize these values!
'''
# Sequence device memory
self.logger.debug('Initializing normal device memory.')
memory = self.length_of_x_sequences * self.number_of_sequences
if self._need_reallocation(self.d_sequences, memory):
self.d_sequences = cl.Buffer(self.ctx, cl.mem_flags.READ_ONLY, size=memory)
mem_size = memory
# Target device memory
memory = self.length_of_y_sequences * self.number_targets
if self._need_reallocation(self.d_targets, memory):
self.d_targets = cl.Buffer(self.ctx, cl.mem_flags.READ_ONLY, size=memory)
mem_size += memory
if self._need_reallocation(self.d_index_increment, SmithWaterman.int_size):
self.d_index_increment = cl.Buffer(self.ctx, cl.mem_flags.WRITE_ONLY, size=SmithWaterman.int_size)
return mem_size
def _init_zero_copy_memory(self):
self.logger.debug('Initializing zero-copy memory.')
# Starting points host memory allocation and device copy
memory = (self.size_of_startingpoint * self.maximum_number_starting_points * self.number_of_sequences *
self.number_targets)
if self._need_reallocation(self.d_starting_points_zero_copy, memory):
self.d_starting_points_zero_copy = cl.Buffer(self.ctx, cl.mem_flags.WRITE_ONLY | cl.mem_flags.ALLOC_HOST_PTR, size=memory)
mem_size = memory
# Maximum zero copy memory allocation and device copy
memory = (self.number_of_sequences * self.number_of_targets * SmithWaterman.float_size)
#self.d_max_possible_score_zero_copy = cl.Buffer(self.ctx, cl.mem_flags.READ_ONLY | cl.mem_flags.ALLOC_HOST_PTR, size=memory)
mem_size += memory
return mem_size
def _init_memory(self):
mem_size = self._init_normal_memory()
mem_size += self._init_zero_copy_memory()
self.logger.debug('Allocated: {}MB of memory'.format(str(mem_size / 1024.0 / 1024.00)))
def _init_zero_copy(self):
''' Initializes the index used for the 'zero copy' of the found starting points '''
index = numpy.zeros((1), dtype=numpy.int32)
cl.enqueue_copy(self.queue, self.d_index_increment, index)
def _compile_code(self):
"""Compile the device code with current settings"""
self.logger.debug('Compiling OpenCL code.')
code = self.oclcode.get_code(self.score, self.number_of_sequences, self.number_targets, self.length_of_x_sequences, self.length_of_y_sequences)
#self.logger.debug('Code: \n{}'.format(code))
self.program = cl.Program(self.ctx, code).build()
self.calculateScoreAffineGap_kernel = self.program.calculateScoreAffineGap
self.calculateScore_kernel = self.program.calculateScore
self.tracebackAffineGap_kernel = self.program.tracebackAffineGap
self.traceback_kernel = self.program.traceback
def copy_sequences(self, h_sequences, h_targets):
'''
Copy the sequences and targets to the device
@param h_sequences: the sequences to be copied. Should be a single string containing all sequences
@param h_targets: the targets to be copied. Should be a single string containing all sequences
'''
cl.enqueue_copy(self.queue, self.d_sequences, h_sequences, is_blocking=False)
cl.enqueue_copy(self.queue, self.d_targets, h_targets, is_blocking=False)
def _get_number_of_starting_points(self):
''' Returns the number of startingpoints. '''
self.logger.debug('Getting number of starting points.')
self.index = numpy.zeros((1), dtype=numpy.int32)
cl.enqueue_copy(self.queue, self.index, self.d_index_increment)
return self.index[0]
def _fill_max_possible_score(self, target_index, targets, i, index, records_seqs):
for tI in range(self.number_of_targets):
if tI+target_index < len(targets) and i+index < len(records_seqs):
self.set_minimum_score(tI*self.max_sequences + i, float(self.score.highest_score) * (len(records_seqs[i+index])
if len(records_seqs[i+index]) < len(targets[tI+target_index])
else len(targets[tI+target_index])) * float(self.filter_factor))
def _copy_min_score(self):
if self._need_reallocation(self.d_max_possible_score_zero_copy, self.min_score_np.nbytes):
self.d_max_possible_score_zero_copy = cl.Buffer(self.ctx, cl.mem_flags.READ_ONLY | cl.mem_flags.ALLOC_HOST_PTR, size=self.min_score_np.nbytes)
cl.enqueue_copy(self.queue, self.d_max_possible_score_zero_copy, self.min_score_np, is_blocking=False)
def _set_max_possible_score(self, target_index, targets, i, index, records_seqs):
'''fills the max_possible_score datastructure on the host'''
# self.h_max_possible_score_zero_copy = cl.enqueue_map_buffer(self.queue, self.d_max_possible_score_zero_copy,
# cl.map_flags.WRITE, 0,
# self.number_of_sequences * self.number_targets ,
# dtype=numpy.float32)[0]
self._fill_max_possible_score(target_index, targets, i, index, records_seqs)
#Unmap memory object
# del self.h_max_possible_score_zero_copy
def _get_starting_point_byte_array(self, number_of_starting_points):
'''
Get the resulting starting points
@return gives the resulting starting point array as byte array
'''
if self.h_starting_points_zero_copy is not None and len(self.h_starting_points_zero_copy) > 0 :
self.h_starting_points_zero_copy.base.release()
self.h_starting_points_zero_copy = cl.enqueue_map_buffer(self.queue, self.d_starting_points_zero_copy, cl.map_flags.READ, 0,
(self.size_of_startingpoint *
number_of_starting_points, 1), dtype=numpy.byte)[0]
return self.h_starting_points_zero_copy
class SmithWatermanCPU(SmithWatermanOcl):
'''
classdocs
'''
def __init__(self, logger, score, settings):
'''
Constructor
'''
SmithWatermanOcl.__init__(self, logger, score, settings)
self.oclcode = CPUcode(self.logger)
self.workload_x = 4
self.workload_y = 4
self.workgroup_x = self.shared_x // self.workload_x
self.workgroup_y = self.shared_y // self.workload_y
self.d_semaphores = None
self._init_oclcode()
def _init_normal_memory(self):
mem_size = SmithWatermanOcl._init_normal_memory(self)
# Input matrix device memory
memory = (SmithWaterman.float_size * (self.length_of_x_sequences + 1) * self.number_of_sequences *
(self.length_of_y_sequences + 1) * self.number_targets)
if self._need_reallocation(self.d_matrix, memory):
self.d_matrix = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE, size=memory)
mem_size += memory
pattern = numpy.zeros((1),dtype=numpy.float32)
cl.enqueue_fill_buffer(self.queue, self.d_matrix, pattern, 0, size = memory)
if self.gap_extension:
if self._need_reallocation(self.d_matrix_i, memory):
self.d_matrix_i = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE, size=memory)
mem_size += memory
if self._need_reallocation(self.d_matrix_j, memory):
self.d_matrix_j = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE, size=memory)
mem_size += memory
pattern = numpy.array([-1E10],dtype=numpy.float32)
cl.enqueue_fill_buffer(self.queue, self.d_matrix_i, pattern, 0, size = memory)
cl.enqueue_fill_buffer(self.queue, self.d_matrix_j, pattern, 0, size = memory)
# Maximum global device memory
memory = (SmithWaterman.float_size * self.x_div_shared_x * self.number_of_sequences *
self.y_div_shared_y * self.number_targets * self.workload_x * self.workload_y)
if self._need_reallocation(self.d_global_maxima, memory):
self.d_global_maxima = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE, size=memory)
mem_size += memory
memory = (SmithWaterman.int_size *
self.length_of_x_sequences *
self.number_of_sequences *
self.length_of_y_sequences *
self.number_targets)
if self._need_reallocation(self.d_semaphores, memory):
self.d_semaphores = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE, size=memory)
pattern = numpy.zeros((1),dtype=numpy.int32)
cl.enqueue_fill_buffer(self.queue, self.d_semaphores, pattern, 0, size=memory)
mem_size += memory
return mem_size
def _init_zero_copy_memory(self):
mem_size = SmithWatermanOcl._init_zero_copy_memory(self)
# Global directions host memory allocation and device copy
memory = (self.length_of_x_sequences * self.number_of_sequences * self.length_of_y_sequences * self.number_targets)
if self._need_reallocation(self.d_global_direction_zero_copy, memory):
self.d_global_direction_zero_copy = cl.Buffer(self.ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.ALLOC_HOST_PTR, size=memory)
mem_size += memory
return mem_size
def _clear_normal_memory(self):
SmithWatermanOcl._clear_normal_memory(self)
if (self.d_semaphores is not None):
try:
self.d_semaphores.finish()
except:
pass
self.d_semaphores.release()
def _clear_zero_copy_memory(self):
SmithWatermanOcl._clear_zero_copy_memory(self)
if (self.d_global_direction_zero_copy is not None):
try:
self.d_global_direction_zero_copy.finish()
except:
pass
self.d_global_direction_zero_copy.release()
def _get_direction_byte_array(self):
'''
Get the resulting directions
@return gives the resulting direction array as byte array
'''
h_global_direction_zero_copy = cl.enqueue_map_buffer(self.queue, self.d_global_direction_zero_copy, cl.map_flags.READ, 0,
(self.number_of_sequences,
self.number_targets,
self.length_of_x_sequences,
self.length_of_y_sequences), dtype=numpy.byte)[0]
return h_global_direction_zero_copy
def _get_direction(self, direction_array, sequence, target, block_x, block_y, value_x, value_y):
return direction_array[sequence][target][block_x*self.shared_x + value_x][block_y*self.shared_y + value_y]
def _set_direction(self, direction, direction_array, sequence, target, block_x, block_y, value_x, value_y):
direction_array[sequence][target][block_x*self.shared_x + value_x][block_y*self.shared_y + value_y] = direction
def _execute_calculate_score_kernel(self, number_of_blocks, idx, idy):
''' Executes a single run of the calculate score kernel'''
dim_block = (self.workgroup_x, self.workgroup_y)
dim_grid_sw = (self.number_of_sequences * self.workgroup_x, self.number_targets * number_of_blocks * self.workgroup_y)
if self.gap_extension:
self.calculateScoreAffineGap_kernel(self.queue,
dim_grid_sw,
dim_block,
self.d_matrix,
self.d_matrix_i,
self.d_matrix_j,
numpy.int32(idx),
numpy.int32(idy),
numpy.int32(number_of_blocks),
self.d_sequences,
self.d_targets,
self.d_global_maxima,
self.d_global_direction_zero_copy)
else:
self.calculateScore_kernel(self.queue,
dim_grid_sw,
dim_block,
self.d_matrix,
numpy.int32(idx),
numpy.int32(idy),
numpy.int32(number_of_blocks),
self.d_sequences,
self.d_targets,
self.d_global_maxima,
self.d_global_direction_zero_copy)
def _execute_traceback_kernel(self, number_of_blocks, idx, idy):
''' Executes a single run of the traceback kernel'''
dim_block = (self.workgroup_x, self.workgroup_y)
dim_grid_sw = (self.number_of_sequences * self.workgroup_x, self.number_targets * number_of_blocks * self.workgroup_y)
if self.gap_extension:
self.tracebackAffineGap_kernel(self.queue, dim_grid_sw, dim_block,
self.d_matrix,
self.d_matrix_i,
self.d_matrix_j,
numpy.int32(idx),
numpy.int32(idy),
numpy.int32(number_of_blocks),
self.d_global_maxima,
self.d_global_direction_zero_copy,
self.d_index_increment,
self.d_starting_points_zero_copy,
self.d_max_possible_score_zero_copy,
self.d_semaphores)
else:
self.traceback_kernel(self.queue, dim_grid_sw, dim_block,
self.d_matrix,
numpy.int32(idx)
# Import packages for plotting and system
import getopt
import random
import sys
from collections import deque
import matplotlib.pyplot as plt
import numpy as np
import torch
from flatland.envs.observations import TreeObsForRailEnv
from flatland.envs.predictions import ShortestPathPredictorForRailEnv
from flatland.envs.rail_env import RailEnv
from flatland.envs.rail_generators import complex_rail_generator
# Import Flatland/ Observations and Predictors
from flatland.envs.schedule_generators import complex_schedule_generator
from importlib_resources import path
# Import Torch and utility functions to normalize observation
import torch_training.Nets
from torch_training.dueling_double_dqn import Agent
from utils.observation_utils import norm_obs_clip, split_tree_into_feature_groups
def main(argv):
try:
opts, args = getopt.getopt(argv, "n:", ["n_episodes="])
except getopt.GetoptError:
print('training_navigation.py -n <n_episodes>')
sys.exit(2)
for opt, arg in opts:
if opt in ('-n', '--n_episodes'):
n_episodes = int(arg)
## Initialize the random
random.seed(1)
np.random.seed(1)
"""
script to create pandas Dataframes with all results from multiple runs of the EQL stored inside.
"""
__author__ = "<NAME> (GMi)"
__version__ = "1.2.0"
__date__ = "07.09.2020"
__email__ = "<EMAIL>"
__status__ = "Development"
import numpy as np
#from matplotlib import pyplot as plt
import pandas
import sympy
import pandas as pd
from os import path, walk
# import holoviews as hv
# import bokeh
# from bokeh.io import show
# from holoviews import opts
# hv.extension('bokeh','matplotlib')
from graphviz import Source
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'xx-large',
'figure.figsize': (15, 15),
'axes.labelsize': 'xx-large',
'axes.titlesize':'xx-large',
'xtick.labelsize':'xx-large',
'ytick.labelsize':'xx-large'}
pylab.rcParams.update(params)
################### functions from original EQL - modified ########################
def select_instance(df=None, file=None, use_extrapolation=True):
"""
Expects a file with one row per network and columns reporting the parameters and complexity and performance
First line should be the column names, col1 col2 col3..., then one additional comments line which can be empty.
Third line should be the values for each column.
:param df: pandas dataframe containing data about model performance
:param file: file containing data about model performance, only used if dataframe is none
:param use_extrapolation: flag to determine if extrapolation data should be used
:return: pandas dataframe containing id and performance data of best model.
"""
if df is not None and file is not None:
raise ValueError('Both results_df and file specified. Only specify one.')
if df is None:
if file is None:
raise ValueError('Either results_df or file have to be specified.')
df = pd.read_csv(file)
if 'extr_error' in df.keys():
extr_available = not df['extr_error'].isnull().values.any()
else:
extr_available = False
if use_extrapolation and not extr_available:
raise ValueError("use_extrapolation flag is set to True but no extrapolation results were found.")
if use_extrapolation:
df['extr_normed'] = normalize_to_max_one(df['extr_error'])
df['val_normed'] = normalize_to_max_one(df['val_error'])
df['complexity_normed'] = normalize_to_max_one(df['complexity'], defensive=False)
if use_extrapolation:
print('Extrapolation data used.')
df['score'] = np.sqrt(df['extr_normed'] ** 2 + df['val_normed'] ** 2)
import numpy as np
from prox_tv import tv1_1d, tv1w_1d, tv2_1d, tv1_2d, tvp_1d, \
tv1w_2d, tvp_2d, tvgen
def test_tv1w_1d():
methods = ('tautstring', 'pn')
for _ in range(20):
dimension = np.random.randint(1e1, 1e3)
x = 100*np.random.randn(dimension)
w = 20*np.random.rand(dimension-1)
solutions = [tv1w_1d(x, w, method=method) for method in methods]
for i in range(len(solutions)-1):
assert np.allclose(solutions[i], solutions[i+1])
def test_tv1w_1d_uniform_weights():
_test_tv1w_1d_uniform_weights(1e1, 1e4)
def test_tv1w_1d_uniform_weights_small_input():
_test_tv1w_1d_uniform_weights(2, 4)
def _test_tv1w_1d_uniform_weights(min, max):
for _ in range(1000):
dimension = np.random.randint(min, max)
x = 100*np.random.randn(dimension)
w1 = np.random.rand()
w = np.ones(dimension-1) * w1
solw = tv1w_1d(x, w)
sol = tv1_1d(x, w1)
assert np.allclose(solw, sol)
def test_tv1_1d():
"""Tests 1-dimensional TV methods"""
for _ in range(20):
dimension = np.random.randint(1e1, 3e1)
x = 100*np.random.randn(dimension)
w = 20*np.random.rand()
_test_tv1_methods(x, w)
def test_tv1_1d_int():
"""Tests 1-dimensional TV methods for integer inputs"""
for _ in range(20):
dimension = np.random.randint(1e1, 3e1)
x = (100*np.random.randn(dimension)).astype('int')
w = 20*np.random.rand()
_test_tv1_methods(x, w)
def _test_tv1_methods(x, w):
"""For given input signal and weight, all TV1 methods must be similar"""
methods = ('classictautstring', 'linearizedtautstring', 'hybridtautstring',
'pn', 'condat', 'dp', 'condattautstring', 'kolmogorov')
solutions = [tv1_1d(x, w, method=method) for method in methods]
solutions.append(tv1_1d(x, w, method='hybridtautstring', maxbacktracks=1.2))
for i in range(1, len(solutions)):
assert np.allclose(solutions[0], solutions[i], atol=1e-3)
def test_tvp_1d():
"""Test that all 1D-lp-TV methods produce equivalent results"""
# Some of these methods are kind of unstable, so we ensure only that most
# of the times the results are similar
methods = ('gp', 'fw', 'gpfw')
errors = 0
for _ in range(20):
dimension = np.random.randint(1e1, 3e1)
x = 100*np.random.randn(dimension)
w = 20*np.random.rand()
p = 1 + 10 * np.random.rand()
solutions = [tvp_1d(x, w, p, method=method, max_iters=100000)
for method in methods]
for i in range(1, len(solutions)):
try:
assert np.allclose(solutions[0], solutions[i], atol=1e-3)
except AssertionError:
errors += 1
assert(errors < 10)
def test_tv2_1d():
methods = ('ms', 'pg', 'mspg')
for _ in range(20):
dimension = np.random.randint(1e1, 3e1)
x = 100*np.random.randn(dimension)
w = 20*np.random.rand()
solutions = [tv2_1d(x, w, method=method) for method in methods]
for i in range(len(solutions)-1):
assert np.allclose(solutions[i], solutions[i+1], atol=1e-3)
def _generate2D():
"""Generates a 2D array for the test"""
rows = np.random.randint(1e1, 3e1)
cols = np.random.randint(1e1, 3e1)
return 100*np.random.randn(rows, cols)
def test_tv1_2d():
"""Tests that all 2D-TV methods produce equivalent results"""
methods = ('yang', 'condat', 'chambolle-pock', 'kolmogorov', 'pd', 'dr')
for _ in range(20):
x = _generate2D()
w = 20*np.random.rand()
solutions = [tv1_2d(x, w, method=method, max_iters=5000)
for method in methods]
for i in range(1, len(solutions)):
print(methods[i], ":", solutions[i])
assert np.allclose(solutions[i], solutions[0], atol=1e-3)
def test_tv1_tvp_2d():
"""Tests that 2D-TVp == 2D-TV1 when p=1"""
for _ in range(20):
x = _generate2D()
w = 20*np.random.rand()
solution1 = tv1_2d(x, w, max_iters=5000)
solutionp = tvp_2d(x, w, w, 1, 1, max_iters=5000)
assert np.allclose(solution1, solutionp, atol=1e-3)
def test_tv1_tv1w_2d():
"""Tests that 2D-TV1w == 2D-TV1 for unit weights"""
for _ in range(20):
x = _generate2D()
rows = len(x)
cols = len(x[0])
w = 20*np.random.rand()
w_cols = w * np.ones((rows-1, cols))
w_rows = w * np.ones((rows, cols-1))
solution1 = tv1_2d(x, w, max_iters=5000)
solutionp = tv1w_2d(x, w_cols, w_rows, max_iters=5000)
assert np.allclose(solution1, solutionp, atol=1e-3)
def test_tv1w_2d_uniform_weights():
for _ in range(20):
x = _generate2D()
rows = len(x)
cols = len(x[0])
w1 = np.random.rand()
w_rows = np.ones([rows-1, cols]) * w1
w_cols = np.ones([rows, cols-1]) * w1
solw = tv1w_2d(x, w_rows, w_cols, max_iters=5000)
solw1 = tv1_2d(x, w1, max_iters=5000)
assert np.allclose(solw, solw1, atol=1e-3)
def test_tv1w_2d_uniform_weights_small_input():
for _ in range(1000):
rows = np.random.randint(2, 4)
cols = np.random.randint(2, 4)
x = 100*np.random.randn(rows, cols)
w1 = np.random.rand()
w_rows = np.ones([rows-1, cols]) * w1
w_cols = np.ones([rows, cols-1]) * w1
solw = tv1w_2d(x, w_rows, w_cols, max_iters=5000)
solw1 = tv1_2d(x, w1, max_iters=5000)
assert np.allclose(solw, solw1, atol=1e-3)
def test_tv1w_2d_emengd():
r"""Issue reported by emengd
Make the solver fail due to missing checks on integer arguments
"""
a = -np.array([[1,2,3],[4,5,6],[7,8,9]])/10.
sol1 = tv1w_2d(a, np.array([[1,1,1],[1,1,1]]),
np.array([[1,1],[1,1],[1,1]]), max_iters=100)
sol2 = tv1_2d(a, 1)
assert np.allclose(sol1, sol2, atol=1e-3)
def test_tvgen_1d():
"""Tests that the general solver returns correct 1d solutions"""
for _ in range(20):
dimension = np.random.randint(1e1, 3e1)
x = 100*np.random.randn(dimension)
w = 20*np.random.rand()
specific = tv1_1d(x, w)
general = tvgen(x, [w], [1], [1])
assert np.allclose(specific, general, atol=1e-3)
def test_tvgen_2d():
"""Tests that the general solver returns correct 2d solutions"""
for _ in range(20):
x = _generate2D()
w = 20*np.random.rand()
specific = tv1_2d(x, w, max_iters=1000)
general = tvgen(x, [w, w], [1, 2], [1, 1], max_iters=1000)
assert np.allclose(specific, general, atol=1e-2)
def test_tvgen_nd():
"""Test that the general solver does not crash for high-d tensors"""
for _ in range(20):
dims = np.random.randint(3, 5)
shape = np.random.randint(2, 10, size=dims)
x = np.random.randn(*shape)
w = np.random.randn(dims)
tvgen(x, w, list(range(1,dims+1)), np.ones(dims))
def test_tvgen_multireg():
"""Test applying several regularizers on same dimension"""
for _ in range(20):
x = _generate2D()
w = 20*np.random.rand()
specific = tv1_2d(x, w, max_iters=1000)
general = tvgen(
x,
[w/2., w/2., w/3., w/3., w/3.],
[1, 1, 2, 2, 2],
[1, 1, 1, 1, 1],
max_iters=1000
)
print("Max diff: " + str((specific-general).max()))
assert np.allclose(specific, general, atol=1e-2)
import torch
import numpy as np
import torch.nn.parallel
from KDEpy import FFTKDE
# Code from
# https://github.com/zhang64-llnl/Mix-n-Match-Calibration/blob/master/demo_calibration.py
def mirror_1d(d, xmin=None, xmax=None):
"""If necessary apply reflecting boundary conditions."""
if xmin is not None and xmax is not None:
xmed = (xmin+xmax)/2
return np.concatenate(((2*xmin-d[d < xmed]).reshape(-1,1), d, (2*xmax-d[d >= xmed]).reshape(-1,1)))
elif xmin is not None:
return np.concatenate((2*xmin-d, d))
elif xmax is not None:
return np.concatenate((d, 2*xmax-d))
else:
return d
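# Worked example (illustrative): reflecting a column of values about a lower bound of 0.
#   mirror_1d(np.array([[0.1], [0.4]]), xmin=0.0)
#   -> array([[-0.1], [-0.4], [0.1], [0.4]])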
def get_kde_ece(p, y, order=1):
def ece_kde_binary(p,label,p_int=None,order=1):
# points from numerical integration
if p_int is None:
p_int = np.copy(p)
p = np.clip(p,1e-256,1-1e-256)
p_int = np.clip(p_int,1e-256,1-1e-256)
x_int = np.linspace(-0.6, 1.6, num=2**14)
import os
import gym
import numpy as np
import tensorflow as tf
from baselines.common.distributions import make_pdtype
from baselines.a2c.utils import fc, batch_to_seq, seq_to_batch, lstm, ortho_init
def make_env(env_id, process_idx=0, outdir=None):
import sunblaze_envs
from .sunblaze_monitor import MonitorParameters
env = sunblaze_envs.make(env_id)
if outdir:
env = MonitorParameters(
env,
output_filename=os.path.join(outdir, 'env-parameters-{}.json'.format(process_idx))
)
return env
def gru(xs, ms, s, scope, nh, init_scale=1.0, activ='tanh'):
""" Implements a gated recurrent unit """
nbatch, nin = [v.value for v in xs[0].get_shape()]
nsteps = len(xs)
with tf.variable_scope(scope):
wx1 = tf.get_variable("wx1", [nin, nh*2], initializer=ortho_init(init_scale))
wh1 = tf.get_variable("wh1", [nh, nh*2], initializer=ortho_init(init_scale))
b1 = tf.get_variable("b1", [nh*2], initializer=tf.constant_initializer(0.0))
wx2 = tf.get_variable("wx2", [nin, nh], initializer=ortho_init(init_scale))
wh2 = tf.get_variable("wh2", [nh, nh], initializer=ortho_init(init_scale))
b2 = tf.get_variable("b2", [nh], initializer=tf.constant_initializer(0.0))
for idx, (x, m) in enumerate(zip(xs, ms)):
s = s*(1-m) # resets hidden state of RNN
y = tf.matmul(x, wx1) + tf.matmul(s, wh1) + b1
z, r = tf.split(axis=1, num_or_size_splits=2, value=y)
z = tf.nn.sigmoid(z)
r = tf.nn.sigmoid(r)
h = tf.matmul(x, wx2) + tf.matmul(s*r, wh2) + b2
if activ == 'tanh':
h = tf.tanh(h)
elif activ == 'relu':
h = tf.nn.relu(h)
else:
raise ValueError(activ)
s = (1-z)*h + z*s
xs[idx] = s
return xs, s
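# Note (illustrative summary of the loop above): per step t it computes
#   z_t, r_t = sigmoid(x_t W_x1 + s_{t-1} W_h1 + b_1)   (split into update / reset gates)
#   h_t      = activ(x_t W_x2 + (s_{t-1} * r_t) W_h2 + b_2)
#   s_t      = (1 - z_t) * h_t + z_t * s_{t-1}
# with the mask m zeroing the hidden state at episode boundaries.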
class lstm_policy(object):
""" Creates policy and value LSTM networks, with parameter sharing.
In addition to the observation, the networks also take the action, reward, and done as inputs.
There is one hidden layer with nlstm units, default 256.
Environments with a discrete action space have a softmax policy, while environments with
a continuous action space have Gaussian with diagonal covariance. """
def __init__(self, sess, ob_space, ac_space, nbatch, nsteps, nlstm=256, reuse=False, feature_mlp=True):
nenv = nbatch // nsteps
# assume that inputs are vectors and reward is a scalar
if len(ac_space.shape) == 0:
# discrete set of actions, input as one-hot encodings
nact = ac_space.n
discrete = True
input_length = ob_space.shape[0] + nact + 2
else:
actdim = ac_space.shape[0]
discrete = False
input_length = ob_space.shape[0] + actdim + 2
input_shape = (nbatch, input_length)
X = tf.placeholder(tf.float32, input_shape, name="Input")
M = tf.placeholder(tf.float32, [nbatch]) # mask (done with a trial at time t-1)
S = tf.placeholder(tf.float32, [nenv, nlstm*2]) # states of the recurrent policy
with tf.variable_scope("model", reuse=reuse):
activ = tf.tanh
if feature_mlp:
print("Using feature network in front of LSTM")
h1 = activ(fc(X, "fc1", nh=nlstm, init_scale=np.sqrt(2)))
h2 = activ(fc(h1, "fc2", nh=nlstm, init_scale=np.sqrt(2)))
xs = batch_to_seq(h2, nenv, nsteps)
else:
print("No feature network in front of LSTM")
xs = batch_to_seq(X, nenv, nsteps)
ms = batch_to_seq(M, nenv, nsteps)
h5, snew = lstm(xs, ms, S, "lstm1", nh=nlstm)
h5 = seq_to_batch(h5)
vf = fc(h5, "vf", 1)
if discrete:
pi = fc(h5, "pi", nact, init_scale=0.01)
else:
pi = fc(h5, "pi", actdim, init_scale=0.01)
logstd = tf.get_variable(name="logstd", shape=[1, actdim], initializer=tf.zeros_initializer())
self.pdtype = make_pdtype(ac_space)
if discrete:
self.pd = self.pdtype.pdfromflat(pi)
else:
pdparam = tf.concat([pi, pi*0.0+logstd], axis=1)
self.pd = self.pdtype.pdfromflat(pdparam)
v0 = vf[:,0]
a0 = self.pd.sample()
neglogp0 = self.pd.neglogp(a0)
self.initial_state = np.zeros((nenv, nlstm*2), dtype=np.float32)
def step(ob, state, ac, rew, done, mask):
# if discrete action space, convert ac to one-hot encoding and done to int
rew = np.reshape(np.asarray([rew]), (nbatch, 1))
done = np.reshape(np.asarray([done], dtype=float), (nbatch, 1))
if discrete:
if ac[0] == -1:
ac = np.zeros((nbatch, nact), dtype=np.int)
else:
ac = np.reshape(np.asarray([ac]), (nbatch,))
ac = np.eye(nact)[ac]
x = np.concatenate((ob, ac, rew, done), axis=1)
else:
ac = np.reshape(np.asarray([ac]), (nbatch, actdim))
x = np.concatenate((ob, ac, rew, done), axis=1)
return sess.run([a0, v0, snew, neglogp0], {X:x, S:state, M:mask})
def value(ob, state, ac, rew, done, mask):
rew = np.reshape(np.asarray([rew]), (nbatch, 1))
done = np.reshape(np.asarray([done], dtype=float), (nbatch, 1))
if discrete:
if ac[0] == -1:
ac = np.zeros((nbatch, nact), dtype=np.int)
else:
ac = np.reshape(np.asarray([ac]), (nbatch,))
ac = np.eye(nact)[ac]
x = np.concatenate((ob, ac, rew, np.array(done, dtype=float)
"""
Evaluate lfw accuracy.
Reference: https://github.com/clcarwin/sphereface_pytorch/blob/master/lfw_eval.py
Note:
- To keep consistent with face SR task, the faces are not aligned.
- Flipped features are used.
"""
import os,sys,cv2,random,datetime
import numpy as np
import zipfile
from PIL import Image
from utils import utils
import multiprocessing as mp
from time import time
import skimage.transform as tf
from lfw.matlab_cp2tform import get_similarity_transform_for_cv2
from models.trainer import normalization
from data.dataloader_mask_verification import Mask_Data
from PIL import Image
from pretrain.model_ir_se50 import ir_se_50_512
import torch
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
class LFWData(Dataset):
"""
img_size: cropped image size, (W, H)
"""
def __init__(self, data_root, img_size=(96, 112)):
self.data_root = data_root
self.img_size = img_size
self.img_dir = os.path.join(data_root, 'images')
self.pair_txt = os.path.join(data_root, 'pairs.txt')
self.landmark_txt = os.path.join(data_root, 'lfw_landmark.txt')
self.get_pair_info()
def get_pair_info(self):
# Read landmarks
self.landmark = {}
with open(self.landmark_txt) as f:
landmark_lines = f.readlines()
for line in landmark_lines:
l = line.replace('\n','').split('\t')
self.landmark[l[0]] = [int(k) for k in l[1:]]
# Read image information
with open(self.pair_txt) as f:
pairs_lines = f.readlines()[1:]
self.pair_names = []
self.label = []
for i in pairs_lines:
p = i.strip().split()
if 3==len(p):
sameflag = 1
name1 = p[0]+'/'+p[0]+'_'+'{:04}.jpg'.format(int(p[1]))
name2 = p[0]+'/'+p[0]+'_'+'{:04}.jpg'.format(int(p[2]))
if 4==len(p):
sameflag = 0
name1 = p[0]+'/'+p[0]+'_'+'{:04}.jpg'.format(int(p[1]))
name2 = p[2]+'/'+p[2]+'_'+'{:04}.jpg'.format(int(p[3]))
self.pair_names.append([name1, name2])
self.label.append(sameflag)
def gen_occlusion_mask(self, size):
w, h = self.img_size
mask = np.ones((h, w, 1))
        pos_y = np.random.randint(0, h - size[0])
        pos_x = np.random.randint(0, w - size[1])
mask[pos_y: pos_y+size[0], pos_x : pos_x+size[1]] = 0
return mask
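    # Illustrative note (hypothetical size): gen_occlusion_mask((30, 30)) returns an
    # (H, W, 1) = (112, 96, 1) array of ones containing one randomly placed 30x30 block of
    # zeros; __getitem__ multiplies the first image of a pair by it to occlude that region.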
def align(self, src_img, src_pts):
src_img = np.array(src_img)
ref_pts = [ [30.2946, 51.6963],[65.5318, 51.5014],
[48.0252, 71.7366],[33.5493, 92.3655],[62.7299, 92.2041] ]
src_pts = np.array(src_pts).reshape(5,2)
s = np.array(src_pts).astype(np.float32)
r = np.array(ref_pts).astype(np.float32)
        # trans = tf.SimilarityTransform()
# trans.estimate(s, r)
# face_img = cv2.warpAffine(src_img, trans.params[:2], self.img_size)
tfm = get_similarity_transform_for_cv2(s, r)
face_img = cv2.warpAffine(src_img, tfm, self.img_size)
return face_img
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
imgs_tensor = []
mask = self.gen_occlusion_mask(MASK_SIZE)
for i in range(2):
img_name = self.pair_names[idx][i]
img = cv2.imread(os.path.join(self.img_dir, img_name))
img = self.align(img, self.landmark[img_name])
if i == 0:
img = img * mask
img = ((img - 127.5)/128).transpose(2, 0, 1)
imgs_tensor.append(torch.from_numpy(img).float())
label = self.label[idx]
mask = mask.transpose(2, 0, 1)
mask = torch.from_numpy(mask).float()
return imgs_tensor[0], imgs_tensor[1], mask, label
def KFold(n=6000, n_folds=10, shuffle=False):
folds = []
base = list(range(n))
if shuffle: random.shuffle(base)
for i in range(n_folds):
test = base[i*n//n_folds:(i+1)*n//n_folds]
train = list(set(base)-set(test))
folds.append([train,test])
return folds
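# Illustrative note: KFold(n=6000, n_folds=10) yields ten [train, test] index splits over the
# 6000 LFW pairs, each test fold holding 600 consecutive indices (unless shuffle=True); in the
# standard LFW protocol a verification threshold is typically tuned on each train split and
# evaluated on the held-out fold.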
def save_wrong_imgs(wrong_idx, new):
face_root = '../mask_data'
data = Mask_Data(face_root)
Tensor2Image = transforms.ToPILImage()
for i in wrong_idx:
sample = data[i]
img1 = sample['img1']*0.5+0.5
img2 = sample['img2']*0.5+0.5
img1 = Tensor2Image(img1).convert('RGB')
img2 = Tensor2Image(img2).convert('RGB')
if new == 0:
img1.save('./wrong_images/{:4d}_1.png'.format(i))
img2.save('./wrong_images/{:4d}_2.png'.format(i))
if new == 1:
img1.save('./wrong_images_new/{:4d}_1.png'.format(i))
img2.save('./wrong_images_new/{:4d}_2.png'.format(i))
def eval_acc(threshold, diff, save_wrong, new=0):
y_true = []
y_predict = []
y_idx = []
for d in diff:
same = 1 if float(d[0]) > threshold else 0
y_predict.append(same)
y_true.append(int(d[1]))
y_idx.append(int(d[2]))
y_true = np.array(y_true)
y_predict = np.array(y_predict)
accuracy = 1.0*np.count_nonzero(y_true==y_predict)/len(y_true)
if save_wrong == 1:
        y_idx = np.array(y_idx)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rc #, colors, colorbar, cm, tri
import pathlib
from Particle import Particle
params = {'text.usetex' : True}#, 'font.size' : 28, 'font.family' : 'lmodern'}
plt.rcParams.update(params)
def plots():
print("\nMaking Plots...\n")
# aspect ratio of plots
aspect = 9/16
# bin density for histograms
bin_density = 50
for l in Particle.l:
# sl histogram
if Particle.isotropic==False:
# Not plotting ql of particles with fewer than 2 neighbors.
# 0 neighbors will always give ql=0.0, and 1 neighbor gives ql=1.0
maxsl = max([Particle.data[k].sl[l] for k in Particle.centers if Particle.data[k].sl[l]!=None and len(Particle.data[k].neighs) > 1])
minsl = min([Particle.data[k].sl[l] for k in Particle.centers if Particle.data[k].sl[l]!=None and len(Particle.data[k].neighs) > 1])
step = 0.02
binbounds = np.arange(minsl-step, maxsl+step, step)
vals = [Particle.data[k].sl[l] for k in Particle.centers if Particle.data[k].sl[l]!=None and len(Particle.data[k].neighs) > 1]
fig2, ax2 = plt.subplots()
bins = int(bin_density*(max(vals) - min(vals)))
if bins < 2:
bins = 10
n, bins, patches = ax2.hist(vals, bins=bins, align='mid')
plt.close()
area = sum([(bins[i]-bins[i-1])*n[i-1] for i in range(1,len(bins))])
n = [i/area for i in n]
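            # Note: this manual rescaling by the total histogram area turns the counts into a
            # probability density (integral over the bins equals 1), equivalent to passing
            # density=True to ax2.hist above.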
fig, ax = plt.subplots()
# Plot the resulting histogram
center = (bins[:-1]+bins[1:])/2
width = 0.95*(bins[1]-bins[0])
ax.bar(center, n, width)#, align="center")
if minsl > 0.0:
if maxsl < 1.0:
ax.set_xlim(left=-step,right=1.0+step)
else:
ax.set_xlim(left=-step,right=maxsl+step)
else:
ax.set_xlim(left=minsl-step,right=maxsl+step)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
ax.set_xlabel(r"$s_"+str(l)+"$", fontsize=14)
ax.set_ylabel(r"Probability", fontsize=14)
plt.grid(axis="x")
if Particle.writefiles==True:
plt.savefig(pathlib.Path(str(Particle.OUT)+"/s"+str(l)+"/s"+str(l)+"_histogram.png"),dpi=300, transparent=True)
if Particle.show_plots==True:
plt.show()
plt.close()
# ql histogram
if Particle.orientational_only==False:
# Not plotting ql of particles with fewer than 2 neighbors.
# 0 neighbors will always give ql=0.0, and 1 neighbor gives ql=1.0
maxql = max([Particle.data[k].ql[l] for k in Particle.centers if Particle.data[k].ql[l]!=None and len(Particle.data[k].neighs) > 1])
minql = min([Particle.data[k].ql[l] for k in Particle.centers if Particle.data[k].ql[l]!=None and len(Particle.data[k].neighs) > 1])
step = 0.02
binbounds = np.arange(minql-step, maxql+step, step)
vals = [Particle.data[k].ql[l] for k in Particle.centers if Particle.data[k].ql[l]!=None and len(Particle.data[k].neighs) > 1]
fig2, ax2 = plt.subplots()
bins = int(bin_density*(max(vals) - min(vals)))
if bins < 2:
bins = 10
n, bins, patches = ax2.hist(vals, bins=bins, align='mid')
plt.close()
area = sum([(bins[i]-bins[i-1])*n[i-1] for i in range(1,len(bins))])
n = [i/area for i in n]
fig, ax = plt.subplots()
# Plot the resulting histogram
center = (bins[:-1]+bins[1:])/2
width = 0.95*(bins[1]-bins[0])
ax.bar(center, n, width)#, align="center")
if minql > 0.0:
if maxql < 1.0:
ax.set_xlim(left=-step,right=1.0+step)
else:
ax.set_xlim(left=-step,right=maxql+step)
else:
ax.set_xlim(left=minql-step,right=maxql+step)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
ax.set_xlabel(r"$q_"+str(l)+"$", fontsize=14)
ax.set_ylabel(r"Probability", fontsize=14)
plt.grid(axis="x")
if Particle.writefiles==True:
plt.savefig(pathlib.Path(str(Particle.OUT)+"/q"+str(l)+"/q"+str(l)+"_histogram.png"),dpi=300, transparent=True)
if Particle.show_plots==True:
plt.show()
plt.close()
# qlmtilde dot qlmtilde histogram
if Particle.orientational_only==False:
maxqlmdotqlm = max([np.real(Particle.data[k].qlmdotqlm[l][n]) for k in Particle.centers for n in Particle.data[k].qlmdotqlm[l] if Particle.data[k].qlmdotqlm[l][n]!=None and len(Particle.data[k].neighs) > 1])
minqlmdotqlm = min([np.real(Particle.data[k].qlmdotqlm[l][n]) for k in Particle.centers for n in Particle.data[k].qlmdotqlm[l] if Particle.data[k].qlmdotqlm[l][n]!=None and len(Particle.data[k].neighs) > 1])
step = 0.02
            binbounds = np.arange(minqlmdotqlm-step, maxqlmdotqlm+step, step)
import json
import matplotlib
from transformers import AutoModelForMaskedLM
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
import random
from sklearn.preprocessing import StandardScaler
from adjustText import adjust_text
def plot_embedding_weights():
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131']
epoch2s = ['501','501','501','251','181'] # books
# epoch2s = ['501','501','501','501','501'] # clothing
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
# MODEL_CAT2='Clothing_Shoes_and_Jewelry'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}-mlm/epoch{epoch1}'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}-mlm/epoch{epoch2}'
model1 = AutoModelForMaskedLM.from_pretrained(model_path1)
embeddings_1 = model1.bert.embeddings.word_embeddings.weight.data
embedding1_numpy = np.array(embeddings_1)
model2 = AutoModelForMaskedLM.from_pretrained(model_path2)
embeddings_2 = model2.bert.embeddings.word_embeddings.weight.data
embedding2_numpy = np.array(embeddings_2)
print(embedding1_numpy.shape)
X = np.concatenate((embedding1_numpy,embedding2_numpy),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = X_3d[:30522]
data["control"] = X_3d[30522:]
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general', 'control'], ['3', (5,2)], ["blue", "red"]):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1]]
h = [h[0], h[1]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=17.5,
framealpha=0.6,
markerscale=2)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=1200,
transparent=True)
plt.clf()
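# Note: concatenating the two 30522 x d embedding matrices before StandardScaler and PCA puts
# both vocabularies into a single shared 2-D projection, so the "general" and "control" point
# clouds plotted above are directly comparable; the same pattern is reused in the functions below.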
def plot_embedding_weights1():
plt.figure(dpi=600)
exp_names1 = ['10_model_10_data','10_model_100_data','100_model_10_data','100_model_100_data']
exp_names2 = ['new_control_10_model_10_data','new_control_10_model_100_data','new_control_100_model_10_data','new_control_100_model_100_data']
epoch1s = ['501','501','251','131']
epoch2s = ['501','501','101','181'] # books
specific_words = [(7592, 'hello'), (2646, 'toward'), (7615, 'comment'), (4952, 'listen'), (3071, 'everyone')]
general_words = [(22524, 'appendix'), (8544,'publishers'), (8882, 'curriculum'), (24402, 'grammatical'), (18534, 'autobiographical')]
for i in range(4):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
# MODEL_CAT2='Clothing_Shoes_and_Jewelry'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}-mlm/epoch{epoch1}'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}-mlm/epoch{epoch2}'
model1 = AutoModelForMaskedLM.from_pretrained(model_path1)
embeddings_1 = model1.bert.embeddings.word_embeddings.weight.data
embedding1_numpy = np.array(embeddings_1)
model2 = AutoModelForMaskedLM.from_pretrained(model_path2)
embeddings_2 = model2.bert.embeddings.word_embeddings.weight.data
embedding2_numpy = np.array(embeddings_2)
print(embedding1_numpy.shape)
X = np.concatenate((embedding1_numpy,embedding2_numpy),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = X_3d[:30522]
data["control"] = X_3d[30522:]
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general', 'control'], ['3', (5,2)], ["blue", "red"]):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
# if label == 'general':
# texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'g*', fontsize=12.5, color='black') for idx,word in specific_words]
# texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'g', fontsize=12.5, color='black') for idx,word in general_words]
# else:
# texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'c*', fontsize=12.5, color='black') for idx,word in specific_words]
# texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'c', fontsize=12.5, color='black') for idx,word in general_words]
# adjust_text(texts)
if i==3:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1]]
h = [h[0], h[1]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=12,
framealpha=0.6,
markerscale=1)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial-"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=1200,
transparent=True)
plt.clf()
def plot_embedding_weights2():
plt.figure(dpi=600)
exp_names1 = ['10_model_10_data','10_model_100_data','100_model_10_data','100_model_100_data']
exp_names2 = ['new_control_10_model_10_data','new_control_10_model_100_data','new_control_100_model_10_data','new_control_100_model_100_data']
epoch1s = ['501','501','251','131']
epoch2s = ['501','501','101','181'] # books
specific_words = [(7592, 'hello'), (2646, 'toward'), (7615, 'comment'), (4952, 'listen'), (3071, 'everyone')]
general_words = [(22524, 'appendix'), (8544,'publishers'), (8882, 'curriculum'), (24402, 'grammatical'), (18534, 'autobiographical')]
for i in range(4):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
# MODEL_CAT2='Clothing_Shoes_and_Jewelry'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}-mlm/epoch{epoch1}'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}-mlm/epoch{epoch2}'
model1 = AutoModelForMaskedLM.from_pretrained(model_path1)
embeddings_1 = model1.bert.embeddings.word_embeddings.weight.data
embedding1_numpy = np.array(embeddings_1)
model2 = AutoModelForMaskedLM.from_pretrained(model_path2)
embeddings_2 = model2.bert.embeddings.word_embeddings.weight.data
embedding2_numpy = np.array(embeddings_2)
print(embedding1_numpy.shape)
X = np.concatenate((embedding1_numpy,embedding2_numpy),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = X_3d[:30522]
data["control"] = X_3d[30522:]
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general'], ['3'], ["blue"]):
# for label, marker, color in zip(['control'], [(5,2)], ["red"]):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if label == 'general':
texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 's', fontsize=12.5, color='black') for idx,word in specific_words]
texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'g', fontsize=12.5, color='black') for idx,word in general_words]
else:
texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 's', fontsize=12.5, color='black') for idx,word in specific_words]
texts=[ax.text(X_temp[idx,0], X_temp[idx,1], 'g', fontsize=12.5, color='black') for idx,word in general_words]
adjust_text(texts)
if i==3:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0]]
h = [h[0]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=12,
framealpha=0.6,
markerscale=1)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial-g-"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=1200,
transparent=True)
plt.clf()
def plot_five_embedding_weights():
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
exp_names3 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
exp_names4 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
exp_names5 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
exp_names6 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131'] # general
epoch2s = ['501','501','501','251','181'] # books
epoch3s = ['501','501','501','501','501'] # clothing
epoch4s = ['501','501','501','251','151'] # electronics
epoch5s = ['501','501','501','251','151'] # home
epoch6s = ['501','501','501','251','181'] # movie
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
EXP_NAME3=exp_names3[i]
MODEL_CAT3='Clothing_Shoes_and_Jewelry'
epoch3=epoch3s[i]
EXP_NAME4=exp_names4[i]
MODEL_CAT4='Electronics'
epoch4=epoch4s[i]
EXP_NAME5=exp_names5[i]
MODEL_CAT5='Home_and_Kitchen'
epoch5=epoch5s[i]
EXP_NAME6=exp_names6[i]
MODEL_CAT6='Movies_and_TV'
epoch6=epoch6s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}-mlm/epoch{epoch1}'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}-mlm/epoch{epoch2}'
model_path3 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME3}/{MODEL_CAT3}-mlm/epoch{epoch3}'
model_path4 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME4}/{MODEL_CAT4}-mlm/epoch{epoch4}'
model_path5 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME5}/{MODEL_CAT5}-mlm/epoch{epoch5}'
model_path6 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME6}/{MODEL_CAT6}-mlm/epoch{epoch6}'
model1 = AutoModelForMaskedLM.from_pretrained(model_path1)
embeddings_1 = model1.bert.embeddings.word_embeddings.weight.data
embedding1_numpy = np.array(embeddings_1)
model2 = AutoModelForMaskedLM.from_pretrained(model_path2)
embeddings_2 = model2.bert.embeddings.word_embeddings.weight.data
embedding2_numpy = np.array(embeddings_2)
model3 = AutoModelForMaskedLM.from_pretrained(model_path3)
embeddings_3 = model3.bert.embeddings.word_embeddings.weight.data
embedding3_numpy = np.array(embeddings_3)
model4 = AutoModelForMaskedLM.from_pretrained(model_path4)
embeddings_4 = model4.bert.embeddings.word_embeddings.weight.data
embedding4_numpy = np.array(embeddings_4)
model5 = AutoModelForMaskedLM.from_pretrained(model_path5)
embeddings_5 = model5.bert.embeddings.word_embeddings.weight.data
embedding5_numpy = np.array(embeddings_5)
model6 = AutoModelForMaskedLM.from_pretrained(model_path6)
embeddings_6 = model6.bert.embeddings.word_embeddings.weight.data
embedding6_numpy = np.array(embeddings_6)
print(embedding1_numpy.shape)
X = np.concatenate((embedding1_numpy,embedding2_numpy,embedding3_numpy,embedding4_numpy,embedding5_numpy,embedding6_numpy),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = X_3d[:30522]
data["book"] = X_3d[30522:30522*2]
data["clothing"] = X_3d[30522*2:30522*3]
data["electronics"] = X_3d[30522*3:30522*4]
data["home"] = X_3d[30522*4:30522*5]
data["movie"] = X_3d[30522*5:30522*6]
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general', 'book', 'clothing', 'electronics', 'home', 'movie'], ['3', (5,2), '+', 'x', '1', '2'], ["blue", "red", 'green', 'cyan', 'yellow', 'magenta']):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1], l[2], l[3], l[4], l[5]]
h = [h[0], h[1], h[2], h[3], h[4], h[5]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=17.5,
framealpha=0.6,
markerscale=2)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_all"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=1200,
transparent=True)
plt.clf()
def plot_embedding_layer_representation():
random.seed(30)
indices = (random.sample(range(0,430923),k=2500)) # books: 430923 clothing: 117499
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131']
epoch2s = ['501','501','501','251','181'] # books
# epoch2s = ['501','501','501','501','501'] # clothing
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books' # Clothing_Shoes_and_Jewelry
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}/epoch{epoch1}/Books_layer_0_hidden_state.npy'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}/epoch{epoch2}/Books_layer_0_hidden_state.npy'
acts1 = np.load(model_path1) # data points x number of hidden dimension
acts2 = np.load(model_path2)
print(acts1.shape)
        # general_mask_file is not defined earlier in this function; the path below is assumed to
        # mirror the mask files used by the mask-based plotting functions further down.
        general_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.general'
        with open(general_mask_file) as f:
word_mask_list = []
for line in f.readlines():
word_mask_list += [int(x) for x in line.strip().split()]
word_mask = np.array(word_mask_list, dtype=bool)
assert len(word_mask) == acts1.shape[1] # sanity check
assert len(word_mask) == acts2.shape[1] # sanity check
acts1 = acts1[:,word_mask]
acts2 = acts2[:,word_mask]
X = np.concatenate((acts1, acts2),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = np.take(X_3d[:430923], indices, axis=0) # books: 430923 clothing: 117499
data["control"] = np.take(X_3d[430923:], indices, axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general', 'control'], ['3', (5,2)], ["blue", "red"]):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1]]
h = [h[0], h[1]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=17.5,
framealpha=0.6,
markerscale=2)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_embedding_layer_representation"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=600,
transparent=True)
plt.clf()
def plot_embedding_layer_representation_with_mask1():
# random.seed(30)
# indices = (random.sample(range(0,117499),k=2500)) # books: 430923 clothing: 117499
plt.figure(dpi=600)
exp_names1 = ['10_model_10_data','10_model_100_data','100_model_10_data','100_model_100_data']
exp_names2 = ['new_control_10_model_10_data','new_control_10_model_100_data','new_control_100_model_10_data','new_control_100_model_100_data']
epoch1s = ['501','501','251','131']
epoch2s = ['501','501','101','181'] # books
# epoch2s = ['501','501','501','501','501'] # clothing
for i in range(4):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}/epoch{epoch1}/Books_layer_0_hidden_state.npy'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}/epoch{epoch2}/Books_layer_0_hidden_state.npy'
acts1 = np.load(model_path1) # data points x number of hidden dimension
acts2 = np.load(model_path2)
print(acts1.shape)
X = np.concatenate((acts1, acts2),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
size = X_3d.shape[0]//2
general_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.general'
specific_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.specific'
with open(general_mask_file) as f:
word_mask_list_gen = []
for line in f.readlines():
word_mask_list_gen += [int(x) for x in line.strip().split()]
word_mask_gen = np.array(word_mask_list_gen, dtype=bool)
assert len(word_mask_gen) == acts1.shape[0] # sanity check
with open(specific_mask_file) as f:
word_mask_list_spe = []
for line in f.readlines():
word_mask_list_spe += [int(x) for x in line.strip().split()]
word_mask_spe = np.array(word_mask_list_spe, dtype=bool)
assert len(word_mask_spe) == acts1.shape[0] # sanity check
general_data = X_3d[:size]
general_data_gen = general_data[word_mask_gen]
general_data_spe = general_data[word_mask_spe]
control_data = X_3d[size:]
control_data_gen = control_data[word_mask_gen]
control_data_spe = control_data[word_mask_spe]
random.seed(30)
indices_gen = (random.sample(range(0,general_data_gen.shape[0]), k=1000))
indices_spe = (random.sample(range(0,general_data_spe.shape[0]), k=1000))
data = {}
data["general-general"] = np.take(general_data_gen, indices_gen, axis=0) # books: 430923 clothing: 117499
data["control-general"] = np.take(control_data_gen, indices_gen, axis=0)
data["general-specific"] = np.take(general_data_spe, indices_spe, axis=0) # books: 430923 clothing: 117499
data["control-specific"] = np.take(control_data_spe, indices_spe, axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general-general', 'control-general', 'general-specific', 'control-specific'], ['3', (5,2), '+', '1'], ["blue", 'red', 'cyan', 'magenta']):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==3:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1], l[2], l[3]]
h = [h[0], h[1], h[2], h[3]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=12,
framealpha=0.6,
markerscale=1)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_embedding_layer_representation_mask-"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=600,
transparent=True)
plt.clf()
def plot_embedding_layer_representation_with_mask_model():
# random.seed(30)
# indices = (random.sample(range(0,117499),k=2500)) # books: 430923 clothing: 117499
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131']
epoch2s = ['501','501','501','251','181'] # books
# epoch2s = ['501','501','501','501','501'] # clothing
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}/epoch{epoch1}/Books_layer_0_hidden_state.npy'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}/epoch{epoch2}/Books_layer_0_hidden_state.npy'
acts1 = np.load(model_path1) # data points x number of hidden dimension
acts2 = np.load(model_path2)
print(acts1.shape)
X = np.concatenate((acts1, acts2),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
size = X_3d.shape[0]//2
general_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.general'
specific_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.specific'
with open(general_mask_file) as f:
word_mask_list_gen = []
for line in f.readlines():
word_mask_list_gen += [int(x) for x in line.strip().split()]
word_mask_gen = np.array(word_mask_list_gen, dtype=bool)
assert len(word_mask_gen) == acts1.shape[0] # sanity check
with open(specific_mask_file) as f:
word_mask_list_spe = []
for line in f.readlines():
word_mask_list_spe += [int(x) for x in line.strip().split()]
word_mask_spe = np.array(word_mask_list_spe, dtype=bool)
assert len(word_mask_spe) == acts1.shape[0] # sanity check
general_data = X_3d[:size]
general_data_gen = general_data[word_mask_gen]
general_data_spe = general_data[word_mask_spe]
control_data = X_3d[size:]
control_data_gen = control_data[word_mask_gen]
control_data_spe = control_data[word_mask_spe]
random.seed(30)
indices_gen = (random.sample(range(0,general_data_gen.shape[0]), k=1000))
indices_spe = (random.sample(range(0,general_data_spe.shape[0]), k=1000))
data = {}
data["general-gen"] = np.take(general_data_gen, indices_gen, axis=0) # books: 430923 clothing: 117499
data["control-gen"] = np.take(control_data_gen, indices_gen, axis=0)
data["general-spe"] = np.take(general_data_spe, indices_spe, axis=0) # books: 430923 clothing: 117499
data["control-spe"] = np.take(control_data_spe, indices_spe, axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general-gen', 'control-gen', 'general-spe', 'control-spe'], ['3', (5,2), '+', '1'], ["blue", 'red', 'cyan', 'magenta']):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1], l[2], l[3]]
h = [h[0], h[1], h[2], h[3]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=12.5,
framealpha=0.6,
markerscale=1)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_embedding_layer_representation_mask"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=600,
transparent=True)
plt.clf()
def plot_embedding_layer_representation_with_mask_data():
# random.seed(30)
# indices = (random.sample(range(0,117499),k=2500)) # books: 430923 clothing: 117499
plt.figure(dpi=600)
exp_names1 = ['100_model_10_data','100_model_50_data','100_model_100_data','100_model_200_data']
exp_names2 = ['new_control_100_model_10_data','new_control_100_model_50_data','new_control_100_model_100_data','new_control_100_model_200_data']
epoch1s = ['251','151','131','131']
epoch2s = ['101','231','181','151'] # books
# epoch2s = ['501','501','501','501','501'] # clothing
for i in range(4):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}/epoch{epoch1}/Books_layer_0_hidden_state.npy'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}/epoch{epoch2}/Books_layer_0_hidden_state.npy'
acts1 = np.load(model_path1) # data points x number of hidden dimension
acts2 = np.load(model_path2)
print(acts1.shape)
X = np.concatenate((acts1, acts2),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
size = X_3d.shape[0]//2
general_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.general'
specific_mask_file = f'/disk/ocean/zheng/summarization_svcca/data/AmazonReviews/{MODEL_CAT2}/Test_2500_{MODEL_CAT2}.txt.specific'
with open(general_mask_file) as f:
word_mask_list_gen = []
for line in f.readlines():
word_mask_list_gen += [int(x) for x in line.strip().split()]
word_mask_gen = np.array(word_mask_list_gen, dtype=bool)
assert len(word_mask_gen) == acts1.shape[0] # sanity check
with open(specific_mask_file) as f:
word_mask_list_spe = []
for line in f.readlines():
word_mask_list_spe += [int(x) for x in line.strip().split()]
word_mask_spe = np.array(word_mask_list_spe, dtype=bool)
assert len(word_mask_spe) == acts1.shape[0] # sanity check
general_data = X_3d[:size]
general_data_gen = general_data[word_mask_gen]
general_data_spe = general_data[word_mask_spe]
control_data = X_3d[size:]
control_data_gen = control_data[word_mask_gen]
control_data_spe = control_data[word_mask_spe]
random.seed(30)
indices_gen = (random.sample(range(0,general_data_gen.shape[0]), k=1000))
indices_spe = (random.sample(range(0,general_data_spe.shape[0]), k=1000))
data = {}
data["general-gen"] = np.take(general_data_gen, indices_gen, axis=0) # books: 430923 clothing: 117499
data["control-gen"] = np.take(control_data_gen, indices_gen, axis=0)
data["general-spe"] = np.take(general_data_spe, indices_spe, axis=0) # books: 430923 clothing: 117499
data["control-spe"] = np.take(control_data_spe, indices_spe, axis=0)
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general-gen', 'control-gen', 'general-spe', 'control-spe'], ['3', (5,2), '+', '1'], ["blue", 'red', 'cyan', 'magenta']):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1], l[2], l[3]]
h = [h[0], h[1], h[2], h[3]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=12.5,
framealpha=0.6,
markerscale=1)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_embedding_layer_representation_mask_data"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=600,
transparent=True)
plt.clf()
def plot_final_layer_weights():
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131']
epoch2s = ['501','501','501','251','181']
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}-mlm/epoch{epoch1}'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/checkpoints/bert_base_uncased/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}-mlm/epoch{epoch2}'
model1 = AutoModelForMaskedLM.from_pretrained(model_path1)
embeddings_1 = model1.bert.encoder.layer[11].output.dense.weight.data
embedding1_numpy = np.array(embeddings_1)
embedding1_numpy = embedding1_numpy.T
model2 = AutoModelForMaskedLM.from_pretrained(model_path2)
embeddings_2 = model2.bert.encoder.layer[11].output.dense.weight.data
embedding2_numpy = np.array(embeddings_2)
embedding2_numpy = embedding2_numpy.T
print(embedding2_numpy.shape)
X = np.concatenate((embedding1_numpy,embedding2_numpy),axis=0)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = X_3d[:3072]
data["control"] = X_3d[3072:]
fig = plt.figure()
ax = fig.add_subplot(111)
for label, marker, color in zip(['general', 'control'], ['3', (5,2)], ["blue", "red"]):
X_temp = data[label]
ax.scatter(x=X_temp[:, 0], y=X_temp[:, 1],
label=label,
marker=marker,
color=color,
alpha=0.5)
if i==4:
legend = ax.legend()
h, l = ax.get_legend_handles_labels()
l = [l[0], l[1]]
h = [h[0], h[1]]
legend = ax.legend(h,
l,
loc='upper right',
fontsize=17.5,
framealpha=0.6,
markerscale=2)
for lh in legend.legendHandles:
lh.set_alpha(1)
ax.set_xticklabels([])
ax.set_yticklabels([])
# ax.axis('off')
fig.savefig("trial_cls_decoder"+str(i)+".pdf",
format='pdf',
bbox_inches='tight',
dpi=1200,
transparent=True)
plt.clf()
def plot_final_layer_representation():
random.seed(30)
indices = (random.sample(range(0,430923),k=2500))
plt.figure(dpi=600)
exp_names1 = ['10_model_100_data','25_model_100_data','50_model_100_data','75_model_100_data','100_model_100_data']
exp_names2 = ['new_control_10_model_100_data','new_control_25_model_100_data','new_control_50_model_100_data','new_control_75_model_100_data','new_control_100_model_100_data']
epoch1s = ['501','501','501','201','131']
epoch2s = ['501','501','501','251','181']
for i in range(5):
EXP_NAME1=exp_names1[i]
MODEL_CAT1='top5'
epoch1=epoch1s[i]
EXP_NAME2=exp_names2[i]
MODEL_CAT2='Books'
epoch2=epoch2s[i]
model_path1 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME1}/{MODEL_CAT1}/epoch{epoch1}/Books_layer_12_hidden_state.npy'
model_path2 = f'/disk/ocean/zheng/summarization_svcca/out/activations/amazon_reviews/seed1/{EXP_NAME2}/{MODEL_CAT2}/epoch{epoch2}/Books_layer_12_hidden_state.npy'
acts1 = np.load(model_path1) # data points x number of hidden dimension
acts2 = np.load(model_path2)
print(acts1.shape)
X = np.concatenate((acts1, acts2),axis=0)
X = StandardScaler().fit_transform(X)
X_3d = PCA(n_components=2).fit_transform(X)
print(X_3d.shape)
data = {}
data["general"] = np.take(X_3d[:430923], indices, axis=0)
        data["control"] = np.take(X_3d[430923:], indices, axis=0)
"""
Discrete ZOO search algorithm
==========================
"""
import random
import numpy as np
import torch
from torch.nn.functional import softmax
from textattack.shared import AttackedText, utils
from textattack.search_methods import SearchMethod
from textattack.goal_function_results import GoalFunctionResultStatus
from textattack.constraints.semantics.sentence_encoders import UniversalSentenceEncoder
class DiscreteZOO(SearchMethod):
"""Reimplementation of the DiscreteZOO attack in the textattack framework.
"""
def __init__(self,
word_embeddings,
candidates=10,
neighborhood_multiplier=5,
max_changes_per_word=1,
max_changes_per_sentence=0,
wir_method="unk",
normalize_displacements=True,
normalize_differences=False,
average_displacements=False,
sample_cos_nn=False,
threshold_samples=False,
threshold_value=0.0,
min_neighbors=1,
short_circuit=False,
discretize_furthest=False,
logging=False):
self.word_embeddings = word_embeddings
self.candidates = candidates
self.max_gradient_steps = max_changes_per_word
self.max_changes_per_sentence = max_changes_per_sentence
self.wir_method = wir_method
self.normalize_displacements = normalize_displacements
self.normalize_differences = normalize_differences
self.average_displacements = average_displacements
self.neighborhood_multiplier = neighborhood_multiplier
self.sample_cos_nn = sample_cos_nn
self.threshold_samples = threshold_samples
self.threshold_value = threshold_value
self.min_neighbors = min_neighbors
self.short_circuit = short_circuit
self.discretize_furthest = discretize_furthest
self.logging = logging
def extra_repr_keys(self):
return [
'candidates', 'max_gradient_steps', 'normalize_displacements',
'normalize_differences', 'average_displacements',
'neighborhood_multiplier', 'sample_cos_nn', 'threshold_samples',
'threshold_value', 'min_neighbors',
]
def _check_constraints(self, transformed_text, current_text, original_text):
"""Check if `transformted_text` still passes the constraints with
respect to `current_text` and `original_text`.
    This method is required because many population-based methods do their
own transformations apart from the actual `transformation`. Examples include
`crossover` from `GeneticAlgorithm` and `move` from
`ParticleSwarmOptimization`.
Args:
transformed_text (AttackedText): Resulting text after transformation
current_text (AttackedText): Recent text from which `transformed_text` was
produced from.
original_text (AttackedText): Original text
Returns
      `True` if constraints are satisfied and `False` otherwise.
"""
filtered = self.filter_transformations([transformed_text],
current_text,
original_text=original_text)
return True if filtered else False
def _get_index_order(self, initial_text):
"""Returns word indices of ``initial_text`` in descending order of
importance."""
len_text = len(initial_text.words)
if self.wir_method == "unk":
leave_one_texts = [
initial_text.replace_word_at_index(i, "[UNK]")
for i in range(len_text)
]
leave_one_results, search_over = self.get_goal_results(leave_one_texts)
index_scores = np.array([result.score for result in leave_one_results])
elif self.wir_method == "weighted-saliency":
# first, compute word saliency
leave_one_texts = [
initial_text.replace_word_at_index(i, "[UNK]")
for i in range(len_text)
]
leave_one_results, search_over = self.get_goal_results(leave_one_texts)
saliency_scores = np.array([result.score for result in leave_one_results])
softmax_saliency_scores = softmax(torch.Tensor(saliency_scores),
dim=0).numpy()
# compute the largest change in score we can find by swapping each word
delta_ps = []
for idx in range(len_text):
transformed_text_candidates = self.get_transformations(
initial_text,
original_text=initial_text,
indices_to_modify=[idx],
)
if not transformed_text_candidates:
# no valid synonym substitutions for this word
delta_ps.append(0.0)
continue
swap_results, _ = self.get_goal_results(transformed_text_candidates)
score_change = [result.score for result in swap_results]
max_score_change = np.max(score_change)
delta_ps.append(max_score_change)
index_scores = softmax_saliency_scores * np.array(delta_ps)
elif self.wir_method == "delete":
leave_one_texts = [
initial_text.delete_word_at_index(i) for i in range(len_text)
]
leave_one_results, search_over = self.get_goal_results(leave_one_texts)
index_scores = np.array([result.score for result in leave_one_results])
elif self.wir_method == "random":
index_order = np.arange(len_text)
np.random.shuffle(index_order)
search_over = False
else:
index_order = None
raise ValueError(f"Unsupported WIR method {self.wir_method}")
if self.wir_method != "random":
index_order = (-index_scores).argsort()
return index_order, search_over
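  # Illustrative example (hypothetical input): with wir_method="unk" and the text
  # "the movie was great", each of the four words is replaced by "[UNK]" in turn, the goal
  # function is queried on the four resulting texts, and words are attacked in descending
  # order of the resulting scores.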
def _get_candidates(self, current_attack, target_index):
"""This samples tokens nearby in embedding space, ignoring constraints.
In order for the algorithm to work, we want to sample tokens nearby without
being constrained because even words that don't fit can still be informative
when calculating displacements. self.get_transformations filters words
after finding neighbors in embedding space.
Args:
current_attack: An AttackedText with our current in-progress attack.
target_index: The index of the word we want to replace.
"""
indices_to_change = {target_index}
for constraint in self.pre_transformation_constraints:
indices_to_change = indices_to_change & constraint(
current_attack, self.transformation)
if len(indices_to_change) == 0:
return []
token_to_change = current_attack.words[target_index]
embedding = self.word_embeddings[token_to_change]
if embedding is None:
return []
if self.sample_cos_nn:
candidate_list, distance_list = self.word_embeddings.get_cos_nn(
embedding, 1 + self.candidates * self.neighborhood_multiplier)
candidate_list = candidate_list[1:]
distance_list = distance_list[1:]
if self.threshold_samples:
candidate_list = [
candidate
for candidate, distance in zip(candidate_list, distance_list)
if distance >= self.threshold_value
]
else:
candidate_list, distance_list = self.word_embeddings.get_euc_nn(
embedding, 1 + self.candidates * self.neighborhood_multiplier)
candidate_list = candidate_list[1:]
distance_list = distance_list[1:]
if self.threshold_samples:
candidate_list = [
candidate
for candidate, distance in zip(candidate_list, distance_list)
if distance <= self.threshold_value
]
#utils.logger.info("There are " + str(len(candidate_list)) + " acceptable replacement tokens.")
if len(candidate_list) < self.min_neighbors:
return []
if len(candidate_list) < self.candidates:
return candidate_list
candidate_list = random.sample(candidate_list, k=self.candidates)
return candidate_list
def _transform_text(self, previous_result, original_text, target_index,
**kwargs):
"""Creates possible texts given our current attack.
Args:
      previous_result: The GoalFunctionResult holding the current in-progress attack.
      original_text: The original AttackedText.
target_index: The index of the word that we are currently attacking.
"""
search_over = False
updated_result = previous_result
for i in range(self.max_gradient_steps):
current_attack = previous_result.attacked_text
candidate_tokens = self._get_candidates(current_attack, target_index)
if candidate_tokens == []:
return previous_result, search_over
candidates = [
current_attack.replace_word_at_index(target_index, token)
for token in candidate_tokens
]
current_embedding = (
self.word_embeddings[current_attack.words[target_index]])
#remove
#changed_tokens = []
#for candidate in candidates:
# changed_tokens.append(candidate.words[target_index])
new_results, search_over = self.get_goal_results(candidates)
if self.short_circuit:
for result in new_results:
if result.goal_status == GoalFunctionResultStatus.SUCCEEDED:
if self.logging:
print("success by short circuiting")
return result, True
changed_tokens_embeddings = np.array(
[self.word_embeddings[token] for token in candidate_tokens])
displacement_vectors = changed_tokens_embeddings - current_embedding
eps = np.finfo(displacement_vectors.dtype).eps
normalizers = np.maximum(np.linalg.norm(displacement_vectors, axis=-1),
eps)
if self.logging:
        print("Mean Displacement Norm: " + str(np.mean(normalizers)))
new_result_scores = np.array([result.score for result in new_results])
result_score_diffs = np.expand_dims(
new_result_scores - previous_result.score, 1)
if self.normalize_displacements:
displacement_vectors = displacement_vectors / np.expand_dims(
normalizers, 1)
if self.normalize_differences:
        result_score_diffs = result_score_diffs / np.expand_dims(normalizers, 1)
"""
This module defines various classes that can serve as the `input` to an interface. Each class must inherit from
`InputComponent`, and each class must define a path to its template. All of the subclasses of `InputComponent` are
automatically added to a registry, which allows them to be easily referenced in other parts of the code.
"""
import json
import warnings
from gradio.component import Component
import numpy as np
import PIL
from gradio import processing_utils, test_data
import pandas as pd
from ffmpy import FFmpeg
import math
import tempfile
import csv
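# Usage sketch (illustrative; `gradio.Interface` is part of the wider library, not this file):
# components can be passed by instance or referenced by shortcut string, e.g.
#   gradio.Interface(fn, inputs=Textbox(lines=7), outputs="text")
#   gradio.Interface(fn, inputs="textbox", outputs="text")
# where "textbox" is resolved through Textbox.get_shortcut_implementations() to Textbox(lines=7).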
class InputComponent(Component):
"""
Input Component. All input components subclass this.
"""
def __init__(self, label, requires_permissions=False):
self.set_interpret_parameters()
super().__init__(label, requires_permissions)
def preprocess(self, x):
"""
Any preprocessing needed to be performed on function input.
"""
return x
def serialize(self, x, called_directly):
"""
        Convert a human-readable version of the input (path of an image, URL of a video, etc.) into a serialized version (e.g. base64) to pass into an API. May behave differently depending on whether the interface is called() directly or used via the GUI.
        Parameters:
        x (Any): Input to interface
        called_directly (bool): if True, the interface was called() directly; otherwise, it is being used via the GUI
"""
return x
def preprocess_example(self, x):
"""
Any preprocessing needed to be performed on an example before being passed to the main function.
"""
return x
def set_interpret_parameters(self):
'''
Set any parameters for interpretation.
'''
return self
def get_interpretation_neighbors(self, x):
'''
Generates values similar to input to be used to interpret the significance of the input in the final output.
Parameters:
x (Any): Input to interface
Returns: (neighbor_values, interpret_kwargs, interpret_by_removal)
            neighbor_values (List[Any]): Neighboring values to input x, used to compute the interpretation
            interpret_kwargs (Dict[str, Any]): Keyword arguments to be passed to get_interpretation_scores
interpret_by_removal (bool): If True, returned neighbors are values where the interpreted subsection was removed. If False, returned neighbors are values where the interpreted subsection was modified to a different value.
'''
pass
def get_interpretation_scores(self, x, neighbors, scores, **kwargs):
'''
Arrange the output values from the neighbors into interpretation scores for the interface to render.
Parameters:
x (Any): Input to interface
neighbors (List[Any]): Neighboring values to input x used for interpretation.
scores (List[float]): Output value corresponding to each neighbor in neighbors
kwargs (Dict[str, Any]): Any additional arguments passed from get_interpretation_neighbors.
Returns:
(List[Any]): Arrangement of interpretation scores for interfaces to render.
'''
pass
def generate_sample(self):
"""
Returns a sample value of the input that would be accepted by the api. Used for api documentation.
"""
pass
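# Note (inferred, not stated in this file): the surrounding library is assumed to drive
# interpretation roughly as follows -- set_interpret_parameters() configures the component; for
# value-based components, get_interpretation_neighbors(x) supplies neighbors whose outputs are
# scored and passed to get_interpretation_scores(x, neighbors, scores, **kwargs); components
# with interpret_by_tokens=True instead expose tokenize() and get_masked_inputs().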
class Textbox(InputComponent):
"""
Component creates a textbox for user to enter input. Provides a string as an argument to the wrapped function.
Input type: str
Demos: hello_world, diff_texts
"""
def __init__(self, lines=1, placeholder=None, default="", numeric=False, type="str", label=None):
"""
Parameters:
lines (int): number of line rows to provide in textarea.
placeholder (str): placeholder hint to provide behind textarea.
default (str): default text to provide in textarea.
numeric (bool): DEPRECATED. Whether the input should be parsed as a number instead of a string.
type (str): DEPRECATED. Type of value to be returned by component. "str" returns a string, "number" returns a float value. Use Number component in place of number type.
label (str): component name in interface.
"""
self.lines = lines
self.placeholder = placeholder
self.default = default
if numeric or type == "number":
warnings.warn(
"The 'numeric' type has been deprecated. Use the Number input component instead.", DeprecationWarning)
self.type = "number"
else:
self.type = type
if default == "":
self.test_input = {
"str": "the quick brown fox jumped over the lazy dog",
"number": 786.92,
}.get(type)
else:
self.test_input = default
self.interpret_by_tokens = True
super().__init__(label)
def get_template_context(self):
return {
"lines": self.lines,
"placeholder": self.placeholder,
"default": self.default,
**super().get_template_context()
}
@classmethod
def get_shortcut_implementations(cls):
return {
"text": {},
"textbox": {"lines": 7},
}
def preprocess(self, x):
"""
Parameters:
x (str): text input
"""
if self.type == "str":
return x
elif self.type == "number":
return float(x)
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'str', 'number'.")
def preprocess_example(self, x):
"""
Returns:
(str): Text representing function input
"""
return x
def set_interpret_parameters(self, separator=" ", replacement=None):
"""
        Calculates the interpretation score of each token in the input by splitting the input into tokens, then using a "leave one out" method: each token is removed (or replaced by `replacement`) in turn and the change in the output value is measured.
Parameters:
separator (str): Separator to use to split input into tokens.
replacement (str): In the "leave one out" step, the text that the token should be replaced with.
"""
self.interpretation_separator = separator
self.interpretation_replacement = replacement
return self
def tokenize(self, x):
"""
Tokenizes an input string by dividing into "words" delimited by self.interpretation_separator
"""
tokens = x.split(self.interpretation_separator)
leave_one_out_strings = []
for index in range(len(tokens)):
leave_one_out_set = list(tokens)
if self.interpretation_replacement is None:
leave_one_out_set.pop(index)
else:
leave_one_out_set[index] = self.interpretation_replacement
leave_one_out_strings.append(
self.interpretation_separator.join(leave_one_out_set))
return tokens, leave_one_out_strings, None
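    # Example: tokenize("good fun movie") returns tokens ["good", "fun", "movie"] and
    # leave-one-out strings ["fun movie", "good movie", "good fun"], assuming the default
    # separator " " and interpretation_replacement=None.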
def get_masked_inputs(self, tokens, binary_mask_matrix):
"""
Constructs partially-masked sentences for SHAP interpretation
"""
masked_inputs = []
for binary_mask_vector in binary_mask_matrix:
masked_input = np.array(tokens)[np.array(
binary_mask_vector, dtype=bool)]
masked_inputs.append(
self.interpretation_separator.join(masked_input))
return masked_inputs
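    # Example: get_masked_inputs(["good", "fun", "movie"], [[1, 0, 1], [0, 1, 1]]) returns
    # ["good movie", "fun movie"]; each binary row keeps only the tokens marked with 1.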
def get_interpretation_scores(self, x, neighbors, scores, tokens, masks=None):
"""
Returns:
            (List[Tuple[str, float]]): Each tuple contains a token (or the separator) and its corresponding interpretation score.
"""
result = []
for token, score in zip(tokens, scores):
result.append((token, score))
result.append((self.interpretation_separator, 0))
return result
def generate_sample(self):
return "Hello World"
class Number(InputComponent):
"""
Component creates a field for user to enter numeric input. Provides a number as an argument to the wrapped function.
Input type: float
Demos: tax_calculator, titanic_survival
"""
def __init__(self, default=None, label=None):
'''
Parameters:
default (float): default value.
label (str): component name in interface.
'''
self.default = default
self.test_input = default if default is not None else 1
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"default": self.default,
**super().get_template_context()
}
@classmethod
def get_shortcut_implementations(cls):
return {
"number": {},
}
def preprocess(self, x):
"""
Parameters:
x (number): numeric input
Returns:
(float): number representing function input
"""
return float(x)
def preprocess_example(self, x):
"""
Returns:
(float): Number representing function input
"""
return x
def set_interpret_parameters(self, steps=3, delta=1, delta_type="percent"):
"""
Calculates interpretation scores of numeric values close to the input number.
Parameters:
steps (int): Number of nearby values to measure in each direction (above and below the input number).
delta (float): Size of step in each direction between nearby values.
delta_type (str): "percent" if delta step between nearby values should be a calculated as a percent, or "absolute" if delta should be a constant step change.
"""
self.interpretation_steps = steps
self.interpretation_delta = delta
self.interpretation_delta_type = delta_type
return self
def get_interpretation_neighbors(self, x):
x = float(x)
neighbors = []
        if self.interpretation_delta_type == "percent":
            delta = 1.0 * self.interpretation_delta * x / 100
        elif self.interpretation_delta_type == "absolute":
            delta = self.interpretation_delta
        else:
            raise ValueError("Unknown delta_type: " + str(self.interpretation_delta_type) +
                             ". Please choose from: 'percent', 'absolute'.")
negatives = (x + np.arange(-self.interpretation_steps, 0)
* delta).tolist()
positives = (x + np.arange(1, self.interpretation_steps+1)
* delta).tolist()
return negatives + positives, {}
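    # Example: with the defaults steps=3, delta=1, delta_type="percent" and x=100, the
    # neighbors are [97.0, 98.0, 99.0, 101.0, 102.0, 103.0] (three values on each side,
    # spaced by 1% of x).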
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(List[Tuple[float, float]]): Each tuple set represents a numeric value near the input and its corresponding interpretation score.
"""
interpretation = list(zip(neighbors, scores))
interpretation.insert(int(len(interpretation) / 2), [x, None])
return interpretation
def generate_sample(self):
return 1.0
class Slider(InputComponent):
"""
Component creates a slider that ranges from `minimum` to `maximum`. Provides a number as an argument to the wrapped function.
Input type: float
Demos: sentence_builder, generate_tone, titanic_survival
"""
def __init__(self, minimum=0, maximum=100, step=None, default=None, label=None):
'''
Parameters:
minimum (float): minimum value for slider.
maximum (float): maximum value for slider.
step (float): increment between slider values.
default (float): default value.
label (str): component name in interface.
'''
self.minimum = minimum
self.maximum = maximum
if step is None:
difference = maximum - minimum
power = math.floor(math.log10(difference) - 2)
step = 10 ** power
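            # e.g. minimum=0, maximum=100 gives difference=100, power=0 and step=1; the default
            # step is the largest power of ten that is at most 1/100 of the slider range.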
self.step = step
self.default = minimum if default is None else default
self.test_input = self.default
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"minimum": self.minimum,
"maximum": self.maximum,
"step": self.step,
"default": self.default,
**super().get_template_context()
}
@classmethod
def get_shortcut_implementations(cls):
return {
"slider": {},
}
def preprocess(self, x):
"""
Parameters:
x (number): numeric input
Returns:
(number): numeric input
"""
return x
def preprocess_example(self, x):
"""
Returns:
(float): Number representing function input
"""
return x
def set_interpret_parameters(self, steps=8):
"""
Calculates interpretation scores of numeric values ranging between the minimum and maximum values of the slider.
Parameters:
steps (int): Number of neighboring values to measure between the minimum and maximum values of the slider range.
"""
self.interpretation_steps = steps
return self
def get_interpretation_neighbors(self, x):
return np.linspace(self.minimum, self.maximum, self.interpretation_steps).tolist(), {}
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(List[float]): Each value represents the score corresponding to an evenly spaced range of inputs between the minimum and maximum slider values.
"""
return scores
def generate_sample(self):
return self.maximum
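# --- Illustrative note (not part of the original source) ---
# Example of the automatic step heuristic above, assuming minimum=0, maximum=100
# and step=None: difference = 100, power = floor(log10(100) - 2) = 0, so step = 1.
# A 0-1 slider gives power = floor(log10(1) - 2) = -2 and step = 0.01. With
# set_interpret_parameters(steps=8), the interpretation neighbors are simply
# np.linspace(minimum, maximum, 8), i.e. eight evenly spaced values of the range.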
class Checkbox(InputComponent):
"""
Component creates a checkbox that can be set to `True` or `False`. Provides a boolean as an argument to the wrapped function.
Input type: bool
Demos: sentence_builder, titanic_survival
"""
def __init__(self, default=False, label=None):
"""
Parameters:
label (str): component name in interface.
default (bool): default value.
"""
self.test_input = True
self.default = default
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"default": self.default,
**super().get_template_context()
}
@classmethod
def get_shortcut_implementations(cls):
return {
"checkbox": {},
}
def preprocess(self, x):
"""
Parameters:
x (bool): boolean input
Returns:
(bool): boolean input
"""
return x
def preprocess_example(self, x):
"""
Returns:
(bool): Boolean representing function input
"""
return x
def set_interpret_parameters(self):
"""
Calculates interpretation score of the input by comparing the output against the output when the input is the inverse boolean value of x.
"""
return self
def get_interpretation_neighbors(self, x):
return [not x], {}
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(Tuple[float, float]): The first value represents the interpretation score if the input is False, and the second if the input is True.
"""
if x:
return scores[0], None
else:
return None, scores[0]
def generate_sample(self):
return True
class CheckboxGroup(InputComponent):
"""
Component creates a set of checkboxes of which a subset can be selected. Provides a list of strings representing the selected choices as an argument to the wrapped function.
Input type: Union[List[str], List[int]]
Demos: sentence_builder, titanic_survival, fraud_detector
"""
def __init__(self, choices, default=[], type="value", label=None):
'''
Parameters:
choices (List[str]): list of options to select from.
default (List[str]): default selected list of options.
        type (str): Type of value to be returned by component. "value" returns the list of strings of the choices selected, "index" returns the list of indices of the choices selected.
label (str): component name in interface.
'''
self.choices = choices
self.default = default
self.type = type
self.test_input = self.choices
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"choices": self.choices,
"default": self.default,
**super().get_template_context()
}
def preprocess(self, x):
"""
Parameters:
x (List[str]): list of selected choices
Returns:
(Union[List[str], List[int]]): list of selected choices as strings or indices within choice list
"""
if self.type == "value":
return x
elif self.type == "index":
return [self.choices.index(choice) for choice in x]
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'value', 'index'.")
def set_interpret_parameters(self):
"""
Calculates interpretation score of each choice in the input by comparing the output against the outputs when each choice in the input is independently either removed or added.
"""
return self
def get_interpretation_neighbors(self, x):
leave_one_out_sets = []
for choice in self.choices:
leave_one_out_set = list(x)
if choice in leave_one_out_set:
leave_one_out_set.remove(choice)
else:
leave_one_out_set.append(choice)
leave_one_out_sets.append(leave_one_out_set)
return leave_one_out_sets, {}
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(List[Tuple[float, float]]): For each tuple in the list, the first value represents the interpretation score if the input is False, and the second if the input is True.
"""
final_scores = []
for choice, score in zip(self.choices, scores):
if choice in x:
score_set = [score, None]
else:
score_set = [None, score]
final_scores.append(score_set)
return final_scores
def save_flagged(self, dir, label, data, encryption_key):
"""
        Returns: (List[str])
"""
return json.dumps(data)
def restore_flagged(self, dir, data, encryption_key):
return json.loads(data)
def generate_sample(self):
return self.choices
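# --- Illustrative note (not part of the original source) ---
# Worked example of CheckboxGroup's leave-one-out neighbors, assuming
# choices=["a", "b", "c"] and a selected input x=["a"]:
#   for "a" (already selected) -> []           ("a" removed)
#   for "b" (not selected)     -> ["a", "b"]   ("b" added)
#   for "c" (not selected)     -> ["a", "c"]   ("c" added)
# get_interpretation_scores then reports each choice's score as [score, None]
# if the choice was selected and [None, score] otherwise.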
class Radio(InputComponent):
"""
Component creates a set of radio buttons of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function.
Input type: Union[str, int]
Demos: sentence_builder, tax_calculator, titanic_survival
"""
def __init__(self, choices, type="value", default=None, label=None):
'''
Parameters:
choices (List[str]): list of options to select from.
type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
default (str): default value.
label (str): component name in interface.
'''
self.choices = choices
self.type = type
self.test_input = self.choices[0]
self.default = default if default is not None else self.choices[0]
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"choices": self.choices,
"default": self.default,
**super().get_template_context()
}
def preprocess(self, x):
"""
Parameters:
x (str): selected choice
Returns:
(Union[str, int]): selected choice as string or index within choice list
"""
if self.type == "value":
return x
elif self.type == "index":
return self.choices.index(x)
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'value', 'index'.")
def set_interpret_parameters(self):
"""
Calculates interpretation score of each choice by comparing the output against each of the outputs when alternative choices are selected.
"""
return self
def get_interpretation_neighbors(self, x):
choices = list(self.choices)
choices.remove(x)
return choices, {}
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(List[float]): Each value represents the interpretation score corresponding to each choice.
"""
scores.insert(self.choices.index(x), None)
return scores
def generate_sample(self):
return self.choices[0]
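# --- Illustrative note (not part of the original source) ---
# Worked example of Radio's interpretation neighbors, assuming
# choices=["cat", "dog", "bird"] and x="dog": the neighbors are the remaining
# choices ["cat", "bird"], and get_interpretation_scores re-inserts None at the
# index of "dog" so the returned list lines up with the original choice order:
#   [score_cat, None, score_bird]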
class Dropdown(InputComponent):
"""
Component creates a dropdown of which only one can be selected. Provides string representing selected choice as an argument to the wrapped function.
Input type: Union[str, int]
Demos: sentence_builder, filter_records, titanic_survival
"""
def __init__(self, choices, type="value", default=None, label=None):
'''
Parameters:
choices (List[str]): list of options to select from.
type (str): Type of value to be returned by component. "value" returns the string of the choice selected, "index" returns the index of the choice selected.
default (str): default value.
label (str): component name in interface.
'''
self.choices = choices
self.type = type
self.test_input = self.choices[0]
self.default = default if default is not None else self.choices[0]
self.interpret_by_tokens = False
super().__init__(label)
def get_template_context(self):
return {
"choices": self.choices,
"default": self.default,
**super().get_template_context()
}
def preprocess(self, x):
"""
Parameters:
x (str): selected choice
Returns:
(Union[str, int]): selected choice as string or index within choice list
"""
if self.type == "value":
return x
elif self.type == "index":
return self.choices.index(x)
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'value', 'index'.")
def set_interpret_parameters(self):
"""
Calculates interpretation score of each choice by comparing the output against each of the outputs when alternative choices are selected.
"""
return self
def get_interpretation_neighbors(self, x):
choices = list(self.choices)
choices.remove(x)
return choices, {}
def get_interpretation_scores(self, x, neighbors, scores):
"""
Returns:
(List[float]): Each value represents the interpretation score corresponding to each choice.
"""
scores.insert(self.choices.index(x), None)
return scores
def generate_sample(self):
return self.choices[0]
class Image(InputComponent):
"""
Component creates an image upload box with editing capabilities.
Input type: Union[numpy.array, PIL.Image, file-object]
Demos: image_classifier, image_mod, webcam, digit_classifier
"""
def __init__(self, shape=None, image_mode='RGB', invert_colors=False, source="upload", tool="editor", type="numpy", label=None, optional=False):
'''
Parameters:
shape (Tuple[int, int]): (width, height) shape to crop and resize image to; if None, matches input image size.
image_mode (str): "RGB" if color, or "L" if black and white.
invert_colors (bool): whether to invert the image as a preprocessing step.
source (str): Source of image. "upload" creates a box where user can drop an image file, "webcam" allows user to take snapshot from their webcam, "canvas" defaults to a white image that can be edited and drawn upon with tools.
tool (str): Tools used for editing. "editor" allows a full screen editor, "select" provides a cropping and zoom tool.
type (str): Type of value to be returned by component. "numpy" returns a numpy array with shape (width, height, 3) and values from 0 to 255, "pil" returns a PIL image object, "file" returns a temporary file object whose path can be retrieved by file_obj.name, "filepath" returns the path directly.
label (str): component name in interface.
optional (bool): If True, the interface can be submitted with no uploaded image, in which case the input value is None.
'''
self.shape = shape
self.image_mode = image_mode
self.source = source
requires_permissions = source == "webcam"
self.tool = tool
self.type = type
self.optional = optional
self.invert_colors = invert_colors
self.test_input = test_data.BASE64_IMAGE
self.interpret_by_tokens = True
super().__init__(label, requires_permissions)
@classmethod
def get_shortcut_implementations(cls):
return {
"image": {},
"webcam": {"source": "webcam"},
"sketchpad": {"image_mode": "L", "source": "canvas", "shape": (28, 28), "invert_colors": True},
}
def get_template_context(self):
return {
"image_mode": self.image_mode,
"shape": self.shape,
"source": self.source,
"tool": self.tool,
"optional": self.optional,
**super().get_template_context()
}
def preprocess(self, x):
"""
Parameters:
x (str): base64 url data
Returns:
(Union[numpy.array, PIL.Image, file-object]): image in requested format
"""
if x is None:
return x
im = processing_utils.decode_base64_to_image(x)
fmt = im.format
with warnings.catch_warnings():
warnings.simplefilter("ignore")
im = im.convert(self.image_mode)
if self.shape is not None:
im = processing_utils.resize_and_crop(im, self.shape)
if self.invert_colors:
im = PIL.ImageOps.invert(im)
if self.type == "pil":
return im
elif self.type == "numpy":
return np.array(im)
elif self.type == "file" or self.type == "filepath":
file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=(
"."+fmt.lower() if fmt is not None else ".png"))
im.save(file_obj.name)
if self.type == "file":
warnings.warn(
"The 'file' type has been deprecated. Set parameter 'type' to 'filepath' instead.", DeprecationWarning)
return file_obj
else:
return file_obj.name
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'numpy', 'pil', 'filepath'.")
def preprocess_example(self, x):
return processing_utils.encode_file_to_base64(x)
def serialize(self, x, called_directly=False):
# if called directly, can assume it's a URL or filepath
if self.type == "filepath" or called_directly:
return processing_utils.encode_url_or_file_to_base64(x)
elif self.type == "file":
return processing_utils.encode_url_or_file_to_base64(x.name)
elif self.type in ("numpy", "pil"):
if self.type == "numpy":
x = PIL.Image.fromarray(np.uint8(x)).convert('RGB')
fmt = x.format
file_obj = tempfile.NamedTemporaryFile(delete=False, suffix=(
"."+fmt.lower() if fmt is not None else ".png"))
x.save(file_obj.name)
return processing_utils.encode_url_or_file_to_base64(file_obj.name)
else:
raise ValueError("Unknown type: " + str(self.type) +
". Please choose from: 'numpy', 'pil', 'filepath'.")
def set_interpret_parameters(self, segments=16):
"""
Calculates interpretation score of image subsections by splitting the image into subsections, then using a "leave one out" method to calculate the score of each subsection by whiting out the subsection and measuring the delta of the output value.
Parameters:
segments (int): Number of interpretation segments to split image into.
"""
self.interpretation_segments = segments
return self
def _segment_by_slic(self, x):
"""
Helper method that segments an image into superpixels using slic.
Parameters:
x: base64 representation of an image
"""
x = processing_utils.decode_base64_to_image(x)
if self.shape is not None:
x = processing_utils.resize_and_crop(x, self.shape)
resized_and_cropped_image = np.array(x)
try:
from skimage.segmentation import slic
except (ImportError, ModuleNotFoundError):
raise ValueError(
"Error: running this interpretation for images requires scikit-image, please install it first.")
try:
segments_slic = slic(
resized_and_cropped_image, self.interpretation_segments, compactness=10,
sigma=1, start_label=1)
except TypeError: # For skimage 0.16 and older
segments_slic = slic(
resized_and_cropped_image, self.interpretation_segments, compactness=10,
sigma=1)
return segments_slic, resized_and_cropped_image
def tokenize(self, x):
"""
Segments image into tokens, masks, and leave-one-out-tokens
Parameters:
x: base64 representation of an image
Returns:
tokens: list of tokens, used by the get_masked_input() method
leave_one_out_tokens: list of left-out tokens, used by the get_interpretation_neighbors() method
masks: list of masks, used by the get_interpretation_neighbors() method
"""
segments_slic, resized_and_cropped_image = self._segment_by_slic(x)
tokens, masks, leave_one_out_tokens = [], [], []
replace_color = np.mean(resized_and_cropped_image, axis=(0, 1))
for (i, segment_value) in enumerate(np.unique(segments_slic)):
mask = (segments_slic == segment_value)
image_screen = np.copy(resized_and_cropped_image)
image_screen[segments_slic == segment_value] = replace_color
leave_one_out_tokens.append(
processing_utils.encode_array_to_base64(image_screen))
token = np.copy(resized_and_cropped_image)
token[segments_slic != segment_value] = 0
tokens.append(token)
masks.append(mask)
return tokens, leave_one_out_tokens, masks
def get_masked_inputs(self, tokens, binary_mask_matrix):
masked_inputs = []
for binary_mask_vector in binary_mask_matrix:
            masked_input = np.zeros_like(tokens[0], dtype=int)
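# --- Illustrative sketch (not part of the original source) ---
# get_masked_inputs is cut off above. A minimal sketch of how the masked input
# is typically assembled from the segment tokens, assuming each binary mask
# vector selects which superpixel tokens to keep (hypothetical completion, not
# necessarily this library's exact code):
#
#     for token, keep in zip(tokens, binary_mask_vector):
#         masked_input = masked_input + token * int(keep)
#     masked_inputs.append(processing_utils.encode_array_to_base64(masked_input))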
"""
Definition of the fundamental class of functions.
"""
import copy as cp
import numpy as np
from .basis1010utils import compute_Qd1_block
from .basis1010utils import compute_Qd1_dtau_block
from .basis1010utils import compute_Qd3_block
from .basis1010utils import compute_Qd3_dtau_block
class cBasis1010(object):
dim_ = 6
def __init__(self, _params):
self.Dmat_ = np.array(
[[1, -1, 0, 0, 0, 0], [1, 1, 0, 0, 0, 0], [0, 0, -1, -1, 0, 0],
[0, 0, 1, -1, 0, 0], [0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 0]],
            dtype=float)
self.buff_ = np.ones((6, ))
self.buff_[5] = 1.0
self.dim_ = 6
self.params_ = cp.deepcopy(_params)
def derivMatrixOnWindow(self, _tau, _deg):
alpha = self.params_
k = np.sqrt(2) / 4.0 * np.power(alpha, 0.25) / \
np.power((1.0 - alpha), 0.25)
tauk = k * 2.0
res = np.linalg.matrix_power(tauk * self.Dmat_, _deg)
return res
def evalDerivOnWindow(self, _s, _tau, _deg):
assert np.isscalar(_s)
alpha = self.params_
k = np.sqrt(2) / 4.0 * np.power(alpha, 0.25) / \
np.power((1.0 - alpha), 0.25)
        # remember ds/dt = 2.0/tau
aux = np.power(2.0 * k, _deg)
v = self.evalOnWindow(_s, _tau)
return np.ravel(aux * np.linalg.matrix_power(self.Dmat_, _deg).dot(v))
def evalOnWindow(self, _s, _tau):
"""
        Evaluate the basis functions at a point _s inside the window [-1, 1]
        for a segment of duration _tau.
"""
assert np.isscalar(_s)
alpha = self.params_
k = np.sqrt(2) / 4.0 * np.power(alpha, 0.25) / \
np.power((1.0 - alpha), 0.25)
p = _tau * k * _s
        expp = np.exp(p)
import random
import pandas as pd
import numpy as np
import module_PvA
import module_predict
random.seed(23)
from module_dep import DataStream
data_stream = DataStream()
print("Initialize data...")
data_stream.initialize_data()
print("Testing models on 7 random days ...")
dates = pd.Series(pd.date_range(start='2021-05-01', end='2021-07-31'))
random_dates = random.choices(dates.apply(lambda x: x.date()), k=14)
bac_new, rec_new, con_per_new = [], [], []
for prediction_date in random_dates:
print(f"Report evaluation for ...{prediction_date}")
df_p_new = module_predict.predict_cp(data_stream, prediction_date)
eval_results = module_PvA.evaluate_report(data_stream, prediction_date)
bac_new.append(eval_results[0])
rec_new.append(eval_results[1])
con_per_new.append(eval_results[2])
df_eval = pd.DataFrame({'date': random_dates,
'BAC': bac_new,
'REC': rec_new,
'conversion%': con_per_new
})
print(df_eval.to_string())
print('SDC_f1_s_jlib.pkl')
print()
print(f"BAC mean: {np.mean(bac_new)}")
print(f"REC mean: {np.mean(rec_new)}")
print(f"Conversion % : {np.mean(con_per_new)}")
print(f"BAC var: {np.var(bac_new)}")
print(f"REC var: {np.var(rec_new)}")
print(f"Conversion % var: { | np.var(con_per_new) | numpy.var |
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Trajectory model based on Equations of Motion Karlgaard
Created on Mon Nov 28 15:50:50 2016
@author: tr1010 (<NAME>)
"""
import numpy as np
import atmospheres as atmo
import aerodynamics as aero
#from pprint import pprint
def traj_uvw(x, t, earth, mass, areas, normals, centroids, I, scLS, aero_params):
"""
traj_uvw calculates the time derivative of the system state vector. See
See documentation for a full description of the state vector and the
trajectory equations which are being solved, as well as the different frames
of reference used.
Inputs:
x: state vector. Contains:
r, phi, theta, u, v, w, e[0], e[1], e[2], e[3], angvel[0],
angvel[1], angvel[2] = x
t: time variable (necessary for scipy.odeint)
earth: python tuple of earth parameters: mu, RE, J2, omega
mass: mass of spacecraft
areas: n-element array of the spacecraft surface areas
normals: 3xn element array of the outward-pointing unit normal vectors
centroids: n-element array of the centroids of the spacecraft surfaces
I: 3x3 spacecraft inertia tensor
scLS: length-scale associated with spacecraft. By default, it is the
longest of the spacecraft's three dimensions
aero_params: python tuple describing a number of parameters for the
aerodynamics solver in the following order:
KnFM, KnCont, a1, a2, SigN, SigT = aero_params
Outputs:
dxdt: Time derivative of the state vector
"""
# Unpack
e = np.zeros(4)
angvel = np.zeros(3)
r, phi, theta, u, v, w, e[0], e[1], e[2], e[3], angvel[0], angvel[1], angvel[2] = x
mu, RE, J2, ome = earth
cphi = np.cos(phi)
tphi = np.tan(phi)
# Geodetic latitude to Declination transformation
phigd = phi
# Atmosphere calculation at new altitude and calculate useful aerodynamic
# quantities
#rho, P, T, mfp, eta, MolW, SoS = atmo.US62_76(r,earth[1])
#R = 287.058
rho, P, T, R, mfp, eta, MolW, SoS = atmo.nrlmsise00(172,0,29000,r-earth[1],phi,theta,16,150,150,4)
Vinf = np.linalg.norm(np.array([u,v,w])) #speed of s/c
Tw = 287.0
Ma = Vinf/SoS
Kn = mfp/scLS
q_inf = 0.5*rho*Vinf**2
if r-earth[1] <= 80e3 and Vinf/SoS < 3:
        # Check if Mach number falls below 3 (Newtonian theory no longer valid) or
# S/C hits ground
# There is a better way of doing this which I should implement
dxdt = np.zeros((13,))
else:
# Geocentric to body rotation matrix
Gd = np.array([[e[0]**2 + e[1]**2 - e[2]**2 - e[3]**2, 2*(e[1]*e[2] + e[0]*e[3]), 2*(e[1]*e[3] - e[0]*e[2])],
[2*(e[1]*e[2] - e[0]*e[3]), e[0]**2 - e[1]**2 + e[2]**2 - e[3]**2, 2*(e[0]*e[1] + e[2]*e[3])],
[2*(e[1]*e[3] + e[0]*e[2]), 2*(e[2]*e[3] - e[0]*e[1]), e[0]**2 - e[1]**2 - e[2]**2 + e[3]**2]])
#Geodetic to Geocentric transformation matrix
Gphi = np.array([[np.cos(phi-phigd), 0, np.sin(phi-phigd)],
[0, 1, 0],
[-np.sin(phi-phigd), 0, np.cos(phi-phigd)]])
#Geodetic to body rotation matrix
Gcb = np.dot(Gd,Gphi)
# freestream velocity in the body frame of reference
VinfB = np.dot(Gcb,-np.array([u,v,w]))
# Aerodynamics Calculations -- forces and moments in the body frame of reference
AeroF, AeroM = aero.aero_calc(VinfB, areas, normals, centroids, Ma, Kn, R, T, q_inf, P, Tw, aero_params)
# Transform Aerodynamic forces to Geocentric FoR
AeroF_GC = np.dot(np.linalg.inv(Gcb),np.transpose(AeroF))
# Solve eqns of rotational motion to Calculate angular velocity in body frame
temp = np.transpose(np.subtract(AeroM,np.cross(angvel,np.dot(I,angvel))))
        angvel_dot = np.dot(np.linalg.inv(I), temp)
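# --- Illustrative note (not part of the original source) ---
# The two lines above implement Euler's rigid-body equation: with AeroM the
# aerodynamic moment and I the inertia tensor,
#     angvel_dot = I^{-1} (AeroM - angvel x (I . angvel))
# `temp` holds the bracketed term and the final line left-multiplies it by the
# inverse inertia tensor to obtain the angular acceleration in the body frame.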
import numpy as np
import cv2
from .math_functions import euclidean_dist_2_pts
def extract_img_markers(img, workspace_ratio=1.0):
"""
Extract working area from an image thanks to 4 Niryo's markers
:param img: OpenCV image which contain 4 Niryo's markers
:param workspace_ratio: Ratio between the width and the height of the area represented by the markers
:return: extracted and warped working area image
"""
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_thresh = cv2.adaptiveThreshold(gray, maxValue=255, adaptiveMethod=cv2.ADAPTIVE_THRESH_MEAN_C,
thresholdType=cv2.THRESH_BINARY, blockSize=15, C=25)
list_good_candidates = find_markers_from_img_thresh(img_thresh)
if not list_good_candidates or len(list_good_candidates) > 6:
return None
if len(list_good_candidates) == 4:
list_good_candidates = sort_markers_detection(list_good_candidates)
else:
list_good_candidates = complicated_sort_markers(list_good_candidates, workspace_ratio=workspace_ratio)
if list_good_candidates is None:
return None
im_cut = extract_sub_img(img, list_good_candidates, ratio_w_h=workspace_ratio)
return im_cut
def extract_sub_img(img, list_corners, ratio_w_h=1.0):
"""
    Extract a small image from a big one using a Perspective Warp
:param img: Big image from which the small one will be extracted
:param list_corners: corners list of the small image
:param ratio_w_h: Width over Height ratio of the area. It helps to not stretch the working area image
:return: extracted and warped image
"""
if list_corners is None or len(list_corners) != 4:
return None
if ratio_w_h >= 1.0:
target_w_area = int(round(ratio_w_h * 200))
target_h_area = 200
else:
ratio_w_h = 1.0 / ratio_w_h
target_h_area = int(round(ratio_w_h * 200))
target_w_area = 200
points_grid = []
for marker in list_corners:
points_grid.append(marker.get_center())
    points_grid = np.array(points_grid, dtype=np.float32)
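# --- Illustrative sketch (not part of the original source) ---
# extract_sub_img is cut off above. A minimal sketch of how such a perspective
# warp is typically finished with OpenCV (hypothetical completion; the target
# corner ordering is an assumption):
#
#     final_pts = np.array([[0, 0], [target_w_area, 0],
#                           [target_w_area, target_h_area], [0, target_h_area]],
#                          dtype=np.float32)
#     transform = cv2.getPerspectiveTransform(points_grid, final_pts)
#     return cv2.warpPerspective(img, transform, (target_w_area, target_h_area))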
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
'''
@File : hacker_dataloader.py.py
@Contact : <EMAIL>
@License : (C)Copyright 2019-2020, DRL_Lab-Cheng-CASIA
@Modify Time @Author @Version @Description
------------ ------- -------- -----------
2019/9/22 3:22 AM Cheng 1.0 init
'''
import random
import torch
import numpy as np
import dataloaders.transforms as transforms
from dataloaders.dataloader import MyDataloader
from PIL import Image
import os
import numpy as np
from PIL import Image
from torch.utils.data import Dataset
def pil_loader(path, rgb=True):
# open path as file to avoid ResourceWarning (https://github.com/python-pillow/Pillow/issues/835)
with open(path, 'rb') as f:
img = Image.open(f)
if rgb:
img = img.convert('RGB')
img = np.asarray(img, dtype=np.float32)
return img
else:
img = img.convert('L')
img = np.asarray(img, dtype=np.float32)
return img
iheight, iwidth = 720, 1280
# iheight, iwidth = 360, 640
alpha, beta = 0.02, 12.0 # NYU Depth, min depth is 0.02m, max depth is 10.0m
K = 80 # NYU is 68, but in paper, 80 is good
class HackerDataloader(MyDataloader):
def __init__(self, root, type, sparsifier=None, modality='rgb'):
super(HackerDataloader, self).__init__(root, type, sparsifier, modality)
self.output_size = (256, 480) # TODO
data = type + '.txt'
# data = 'test.txt' or 'val.txt'
with open(os.path.join(root, data), 'r') as f:
lines = f.readlines()
self.root = root
self.data = lines
self.type = type
self.loader = pil_loader
self.size = (480, 256) # (1280, 720)
self.output_size = (480, 256)
def __getitem__(self, index):
sample = self.data[index].strip('\n').split()
# im_path, gt_path
im_path = os.path.join(self.root, sample[0])
gt_path = os.path.join(self.root, sample[1])
# print(im_path, gt_path)
im = self.loader(im_path)
gt = self.loader(gt_path, rgb=False)
print('gt_path', gt_path)
if self.type == 'train':
input_np, depth_np = self.train_transform(im, gt)
elif self.type == 'val':
input_np, depth_np = self.val_transform(im, gt)
elif self.type == 'test':
input_np, depth_np = self.val_transform(im, gt)
else:
print("Error whih input type")
exit(1)
return input_np, depth_np
# input_tensor = to_tensor(input_np)
# """Convert a ``numpy.ndarray`` to tensor.
# Converts a numpy.ndarray (H x W x C) to a torch.FloatTensor of shape (C x H x W)."""
#
# while input_tensor.dim() < 3:
# input_tensor = input_tensor.unsqueeze(0)
# depth_tensor = to_tensor(depth_np)
# depth_tensor = depth_tensor.unsqueeze(0)
# return input_tensor, depth_tensor
# print('input_tensor.size', input_tensor.size(), 'depth_tensor.size', depth_tensor.size())
# print('depth_tensor---------------------------------------------', np.shape(depth_tensor))
# rgb_np = np.asarray(im, dtype=np.float32)
# depth_np = np.asarray(gt, dtype=np.float32)
# if self.modality == 'rgb':
# input_np = rgb_np
# elif self.modality == 'rgbd':
# input_np = self.create_rgbd(rgb_np, depth_np)
# elif self.modality == 'd':
# input_np = self.create_sparse_depth(rgb_np, depth_np)
# else:
# print('input_np is use before define')
# exit(-1)
# print('input_np.size', np.shape(input_np))
def train_transform(self, im, gt):
# im = np.array(im).astype(np.float32)
# gt = np.array(gt).astype(np.float32)
# s = np.random.uniform(1.0, 1.5) # random scaling
angle = np.random.uniform(-17.0, 17.0) # random rotation degrees
do_flip = np.random.uniform(0.0, 1.0) < 0.5 # random horizontal flip
# Upper_cor = random.randint(1, 464)
# Left_cor = random.randint(1, 800)
Upper_cor = random.randint(1, 360)
Left_cor = random.randint(1, 640)
# color_jitter = my_transforms.ColorJitter(0.4, 0.4, 0.4)
transform = my_transforms.Compose([
# my_transforms.Crop(Upper_cor, Left_cor, 256, 480),
my_transforms.Crop(Upper_cor, Left_cor, 360, 640),
# my_transforms.Resize(460 / 240, interpolation='bilinear'),
my_transforms.Rotate(angle),
# my_transforms.Resize(s),
# my_transforms.CenterCrop(self.size),
my_transforms.HorizontalFlip(do_flip)
])
im_ = transform(im)
# im_ = color_jitter(im_)
gt_ = transform(gt)
im_ = np.array(im_).astype(np.float32)
gt_ = np.array(gt_).astype(np.float32)
im_ /= 255.0
gt_ /= 1000.0 # mm -> m
im_ = to_tensor(im_)
gt_ = to_tensor(gt_)
gt_ = gt_.unsqueeze(0)
return im_, gt_
def val_transform(self, im, gt):
# im = np.array(im).astype(np.float32)
# gt = np.array(gt).astype(np.float32)
# Upper_cor = random.randint(1, 464)
# Left_cor = random.randint(1, 800)
Upper_cor = random.randint(1, 360)
Left_cor = random.randint(1, 640)
transform = my_transforms.Compose([
my_transforms.Crop(Upper_cor, Left_cor, 360, 640),
# my_transforms.Crop(Upper_cor, Left_cor, 256, 480),
# my_transforms.Resize(460 / 240, interpolation='bilinear'),
# my_transforms.CenterCrop(self.size)
])
im_ = transform(im)
gt_ = transform(gt)
im_ = np.array(im_).astype(np.float32)
gt_ = np.array(gt_).astype(np.float32)
im_ /= 255.0
gt_ /= 1000.0
im_ = to_tensor(im_)
gt_ = to_tensor(gt_)
gt_ = gt_.unsqueeze(0)
return im_, gt_
# def train_transform(self, rgb, depth):
# s = np.random.uniform(1.0, 1.5) # random scaling
# depth_np = depth / s
# angle = np.random.uniform(-5.0, 5.0) # random rotation degrees
# do_flip = np.random.uniform(0.0, 1.0) < 0.5 # random horizontal flip
#
# # perform 1st step of data augmentation
# transform = transforms.Compose([
# transforms.Resize(288.0 / iheight), # this is for computational efficiency, since rotation can be slow
# transforms.Rotate(angle),
# transforms.Resize(s),
# transforms.CenterCrop(self.output_size),
# transforms.HorizontalFlip(do_flip)
# ])
# # rgb_np = transform(rgb)
# # rgb_np = self.color_jitter(rgb_np) # random color jittering
# # rgb_np = np.asfarray(rgb_np, dtype='float') / 255
# rgb_np = np.asfarray(rgb, dtype='float') / 255
# depth_np = transform(depth_np)
#
# return rgb_np, depth_np
#
# def val_transform(self, rgb, depth):
# depth_np = depth
# transform = transforms.Compose([
# transforms.Resize(288.0 / iheight),
# transforms.CenterCrop(self.output_size),
# ])
# rgb_np = transform(rgb)
# rgb_np = np.asfarray(rgb_np, dtype='float') / 255
# depth_np = transform(depth_np)
#
# return rgb_np, depth_np
# array to tensor
from dataloaders import transforms as my_transforms
to_tensor = my_transforms.ToTensor()
class HackerFolder(Dataset):
"""
root/
train: xxx/xxx.jpg, xxxx/xxxx.png
val: xxx/xxx.jpg, xxxx/xxxx.png
test.txt
val.txt
"""
def __init__(self, root, data, transform=None):
# data = 'test.txt' or 'val.txt'
with open(os.path.join(root, data), 'r') as f:
lines = f.readlines()
self.root = root
self.data = lines
self.transform = transform
self.loader = pil_loader
self.size = (1280, 720)
def __getitem__(self, idx):
sample = self.data[idx].strip('\n').split()
# im_path, gt_path
im_path = os.path.join(self.root, sample[0])
gt_path = os.path.join(self.root, sample[1])
im = self.loader(im_path)
gt = self.loader(gt_path, rgb=False)
if self.data == 'train.txt':
im, gt = self.train_transform(im, gt)
else:
im, gt = self.val_transform(im, gt)
return im, gt
def __len__(self):
return len(self.data)
def train_transform(self, im, gt):
im = np.array(im).astype(np.float32)
gt = np.array(gt).astype(np.float32)
s = np.random.uniform(1.0, 1.5) # random scaling
angle = np.random.uniform(-5.0, 5.0) # random rotation degrees
        do_flip = np.random.uniform(0.0, 1.0) < 0.5  # random horizontal flip
import os
from dataclasses import dataclass
import datetime
import tempfile
import warnings
import isce3
import numpy as np
from osgeo import gdal
# Other functionalities
def compute_az_carrier(burst, orbit, offset, position):
'''
Estimate azimuth carrier and store in numpy arrary
Parameters
----------
burst: Sentinel1BurstSlc
Sentinel1 burst object
orbit: isce3.core.Orbit
Sentinel1 orbit ephemerides
offset: float
Offset between reference and secondary burst
position: tuple
Tuple of locations along y and x directions
Returns
-------
carr: np.ndarray
Azimuth carrier
'''
# Get burst sensing mid relative to orbit reference epoch
fmt = "%Y-%m-%dT%H:%M:%S.%f"
orbit_ref_epoch = datetime.datetime.strptime(orbit.reference_epoch.__str__()[:-3], fmt)
t_mid = burst.sensing_mid - orbit_ref_epoch
_, v = orbit.interpolate(t_mid.total_seconds())
vs = np.linalg.norm(v)
ks = 2 * vs * burst.azimuth_steer_rate / burst.wavelength
y, x = position
n_lines, _ = burst.shape
eta = (y - (n_lines // 2) + offset) * burst.azimuth_time_interval
rng = burst.starting_range + x * burst.range_pixel_spacing
f_etac = np.array(
burst.doppler.poly1d.eval(rng.flatten().tolist())).reshape(rng.shape)
ka = np.array(
burst.azimuth_fm_rate.eval(rng.flatten().tolist())).reshape(rng.shape)
eta_ref = (burst.doppler.poly1d.eval(
burst.starting_range) / burst.azimuth_fm_rate.eval(
burst.starting_range)) - (f_etac / ka)
kt = ks / (1.0 - ks / ka)
carr = np.pi * kt * ((eta - eta_ref) ** 2)
return carr
def polyfit(xin, yin, zin, azimuth_order, range_order,
sig=None, snr=None, cond=1.0e-12,
max_order=True):
"""
Fit 2-D polynomial
Parameters:
xin: np.ndarray
Array locations along x direction
yin: np.ndarray
Array locations along y direction
    zin: np.ndarray
        Data values to fit at the (x, y) sample locations
    azimuth_order: int
        Azimuth polynomial order
    range_order: int
        Slant range polynomial order
    sig: np.ndarray, optional
        Per-sample uncertainty used to weight the fit (mutually exclusive with snr)
    snr: float, optional
        Signal to noise ratio used to weight the fit (mutually exclusive with sig)
    cond: float
        Cutoff ratio for small singular values passed to np.linalg.lstsq
    max_order: bool
        If True, keep only terms whose combined order does not exceed
        max(azimuth_order, range_order)
Returns:
poly: isce3.core.Poly2D
class represents a polynomial function of range
'x' and azimuth 'y'
"""
x = np.array(xin)
xmin = np.min(x)
xnorm = np.max(x) - xmin
if xnorm == 0:
xnorm = 1.0
x = (x - xmin) / xnorm
y = np.array(yin)
ymin = np.min(y)
ynorm = np.max(y) - ymin
if ynorm == 0:
ynorm = 1.0
y = (y - ymin) / ynorm
z = np.array(zin)
big_order = max(azimuth_order, range_order)
arr_list = []
for ii in range(azimuth_order + 1):
yfact = np.power(y, ii)
for jj in range(range_order + 1):
xfact = np.power(x, jj) * yfact
if max_order:
if ((ii + jj) <= big_order):
arr_list.append(xfact.reshape((x.size, 1)))
else:
arr_list.append(xfact.reshape((x.size, 1)))
A = np.hstack(arr_list)
if sig is not None and snr is not None:
raise Exception('Only one of sig / snr can be provided')
if sig is not None:
snr = 1.0 + 1.0 / sig
if snr is not None:
A = A / snr[:, None]
z = z / snr
return_val = True
val, res, _, eigs = np.linalg.lstsq(A, z, rcond=cond)
if len(res) > 0:
print('Chi squared: %f' % (np.sqrt(res / (1.0 * len(z)))))
else:
print('No chi squared value....')
print('Try reducing rank of polynomial.')
return_val = False
coeffs = []
count = 0
for ii in range(azimuth_order + 1):
row = []
for jj in range(range_order + 1):
if max_order:
if (ii + jj) <= big_order:
row.append(val[count])
count = count + 1
else:
row.append(0.0)
else:
row.append(val[count])
count = count + 1
coeffs.append(row)
poly = isce3.core.Poly2d(coeffs, xmin, ymin, xnorm, ynorm)
return poly
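# --- Illustrative note (not part of the original source) ---
# A small usage sketch of polyfit above (assuming isce3 is installed): fitting
# the plane z = 2 + 3*x + 5*y sampled on a coarse grid with first-order terms
# should recover it exactly, and evaluating the returned Poly2d at the original
# (x, y) sample points reproduces z up to numerical error.
#
#     xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
#     zs = 2 + 3 * xs + 5 * ys
#     poly = polyfit(xs.ravel(), ys.ravel(), zs.ravel(),
#                    azimuth_order=1, range_order=1)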
@dataclass(frozen=True)
class Doppler:
poly1d: isce3.core.Poly1d
lut2d: isce3.core.LUT2d
@dataclass(frozen=True)
class Sentinel1BurstSlc:
'''Raw values extracted from SAFE XML.
'''
sensing_start: datetime.datetime
radar_center_frequency: float
wavelength: float
azimuth_steer_rate: float
azimuth_time_interval: float
slant_range_time: float
starting_range: float
iw2_mid_range: float
range_sampling_rate: float
range_pixel_spacing: float
shape: tuple()
azimuth_fm_rate: isce3.core.Poly1d
doppler: Doppler
range_bandwidth: float
polarization: str # {VV, VH, HH, HV}
burst_id: str # t{track_number}_iw{1,2,3}_b{burst_index}
platform_id: str # S1{A,B}
center: tuple # {center lon, center lat} in degrees
border: list # list of lon, lat coordinate tuples (in degrees) representing burst border
orbit: isce3.core.Orbit
orbit_direction: str
# VRT params
tiff_path: str # path to measurement tiff in SAFE/zip
i_burst: int
first_valid_sample: int
last_valid_sample: int
first_valid_line: int
last_valid_line: int
# window parameters
range_window_type: str
range_window_coefficient: float
rank: int # The number of PRI between transmitted pulse and return echo.
prf_raw_data: float # Pulse repetition frequency (PRF) of the raw data [Hz]
def as_isce3_radargrid(self):
'''Init and return isce3.product.RadarGridParameters.
Returns:
--------
_ : RadarGridParameters
RadarGridParameters constructed from class members.
'''
prf = 1 / self.azimuth_time_interval
length, width = self.shape
time_delta = datetime.timedelta(days=2)
ref_epoch = isce3.core.DateTime(self.sensing_start - time_delta)
# sensing start with respect to reference epoch
sensing_start = time_delta.total_seconds()
# init radar grid
return isce3.product.RadarGridParameters(sensing_start,
self.wavelength,
prf,
self.starting_range,
self.range_pixel_spacing,
isce3.core.LookSide.Right,
length,
width,
ref_epoch)
def slc_to_file(self, out_path: str, fmt: str = 'ENVI'):
'''Write burst to GTiff file.
Parameters:
-----------
out_path : string
Path of output GTiff file.
'''
if not self.tiff_path:
            warn_str = 'Unable to write SLC to file. Burst does not contain image data; only metadata.'
warnings.warn(warn_str)
return
# get output directory of out_path
dst_dir, _ = os.path.split(out_path)
# create VRT; make temporary if output not VRT
if fmt != 'VRT':
temp_vrt = tempfile.NamedTemporaryFile(dir=dst_dir)
vrt_fname = temp_vrt.name
else:
vrt_fname = out_path
self.slc_to_vrt_file(vrt_fname)
if fmt == 'VRT':
return
# open temporary VRT and translate to GTiff
src_ds = gdal.Open(vrt_fname)
gdal.Translate(out_path, src_ds, format=fmt)
# clean up
src_ds = None
def slc_to_vrt_file(self, out_path):
'''Write burst to VRT file.
Parameters:
-----------
out_path : string
Path of output VRT file.
'''
if not self.tiff_path:
            warn_str = 'Unable to write SLC to file. Burst does not contain image data; only metadata.'
warnings.warn(warn_str)
return
line_offset = self.i_burst * self.shape[0]
inwidth = self.last_valid_sample - self.first_valid_sample + 1
inlength = self.last_valid_line - self.first_valid_line + 1
outlength, outwidth = self.shape
yoffset = line_offset + self.first_valid_line
localyoffset = self.first_valid_line
xoffset = self.first_valid_sample
gdal_obj = gdal.Open(self.tiff_path, gdal.GA_ReadOnly)
fullwidth = gdal_obj.RasterXSize
fulllength = gdal_obj.RasterYSize
# TODO maybe cleaner to write with ElementTree
tmpl = f'''<VRTDataset rasterXSize="{outwidth}" rasterYSize="{outlength}">
<VRTRasterBand dataType="CFloat32" band="1">
<NoDataValue>0.0</NoDataValue>
<SimpleSource>
<SourceFilename relativeToVRT="1">{self.tiff_path}</SourceFilename>
<SourceBand>1</SourceBand>
<SourceProperties RasterXSize="{fullwidth}" RasterYSize="{fulllength}" DataType="CInt16"/>
<SrcRect xOff="{xoffset}" yOff="{yoffset}" xSize="{inwidth}" ySize="{inlength}"/>
<DstRect xOff="{xoffset}" yOff="{localyoffset}" xSize="{inwidth}" ySize="{inlength}"/>
</SimpleSource>
</VRTRasterBand>
</VRTDataset>'''
with open(out_path, 'w') as fid:
fid.write(tmpl)
def get_az_carrier_poly(self, offset=0.0, xstep=500, ystep=50,
az_order=5, rg_order=3, index_as_coord=False):
"""
Estimate burst azimuth carrier polymonials
Parameters
----------
offset: float
Offset between reference and secondary bursts
xstep: int
Spacing along x direction
ystep: int
Spacing along y direction
az_order: int
Azimuth polynomial order
rg_order: int
Slant range polynomial order
index_as_coord: bool
If true, polyfit with az/range indices. Else, polyfit with az/range.
Returns
-------
poly: isce3.core.Poly2D
class represents a polynomial function of range
'x' and azimuth 'y'
"""
rdr_grid = self.as_isce3_radargrid()
lines, samples = self.shape
x = np.arange(0, samples, xstep, dtype=int)
y = np.arange(0, lines, ystep, dtype=int)
x_mesh, y_mesh = np.meshgrid(x, y)
# Estimate azimuth carrier
az_carrier = compute_az_carrier(self, self.orbit,
offset=offset,
position=(y_mesh, x_mesh))
# Fit azimuth carrier polynomial with x/y or range/azimuth
if index_as_coord:
az_carrier_poly = polyfit(x_mesh.flatten()+1, y_mesh.flatten()+1,
az_carrier.flatten(), az_order,
rg_order)
else:
# Convert x/y to range/azimuth
rg = self.starting_range + (x + 1) * self.range_pixel_spacing
az = rdr_grid.sensing_start + (y + 1) * self.azimuth_time_interval
rg_mesh, az_mesh = np.meshgrid(rg, az)
# Estimate azimuth carrier polynomials
az_carrier_poly = polyfit(rg_mesh.flatten(), az_mesh.flatten(),
az_carrier.flatten(), az_order,
rg_order)
return az_carrier_poly
def as_dict(self):
"""
Return SLC class attributes as dict
Returns
-------
self_as_dict: dict
Dict representation as a dict
"""
self_as_dict = {}
for key, val in self.__dict__.items():
if key == 'sensing_start':
val = str(val)
elif key == 'center':
val = val.coords[0]
elif isinstance(val, np.float64):
val = float(val)
elif key == 'azimuth_fm_rate':
temp = {}
temp['order'] = val.order
temp['mean'] = val.mean
temp['std'] = val.std
temp['coeffs'] = val.coeffs
val = temp
elif key == 'border':
val = self.border[0].wkt
elif key == 'doppler':
temp = {}
temp['poly1d'] = {}
temp['poly1d']['order'] = val.poly1d.order
temp['poly1d']['mean'] = val.poly1d.mean
temp['poly1d']['std'] = val.poly1d.std
temp['poly1d']['coeffs'] = val.poly1d.coeffs
temp['lut2d'] = {}
temp['lut2d']['x_start'] = val.lut2d.x_start
temp['lut2d']['x_spacing'] = val.lut2d.x_spacing
temp['lut2d']['y_start'] = val.lut2d.y_start
temp['lut2d']['y_spacing'] = val.lut2d.y_spacing
temp['lut2d']['length'] = val.lut2d.length
temp['lut2d']['width'] = val.lut2d.width
temp['lut2d']['data'] = val.lut2d.data.flatten().tolist()
val = temp
elif key in ['orbit']:
temp = {}
temp['ref_epoch'] = str(val.reference_epoch)
temp['time'] = {}
temp['time']['first'] = val.time.first
temp['time']['spacing'] = val.time.spacing
temp['time']['last'] = val.time.last
temp['time']['size'] = val.time.size
temp['position_x'] = val.position[:,0].tolist()
temp['position_y'] = val.position[:,1].tolist()
temp['position_z'] = val.position[:,2].tolist()
temp['velocity_x'] = val.velocity[:,0].tolist()
temp['velocity_y'] = val.velocity[:,1].tolist()
temp['velocity_z'] = val.velocity[:,2].tolist()
val = temp
self_as_dict[key] = val
return self_as_dict
def bistatic_delay(self, xstep=1, ystep=1):
'''Computes the bistatic delay correction in azimuth direction
due to the movement of the platform between pulse transmission and echo reception
as described in equation (21) in Gisinger et al. (2021, TGRS).
References
-------
<NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., et al.
(2021). In-Depth Verification of Sentinel-1 and TerraSAR-X Geolocation Accuracy Using
the Australian Corner Reflector Array. IEEE Trans. Geosci. Remote Sens., 59(2), 1154-
1181. doi:10.1109/TGRS.2019.2961248
ETAD-DLR-DD-0008, Algorithm Technical Baseline Document. Available: https://sentinels.
copernicus.eu/documents/247904/4629150/Sentinel-1-ETAD-Algorithm-Technical-Baseline-
Document.pdf
Parameters
-------
xstep : int
spacing along x direction (range direction) in units of pixels
ystep : int
spacing along y direction (azimuth direction) in units of pixels
Returns
-------
LUT2D object of bistatic delay correction in seconds as a function
        of the range and azimuth indices. This correction needs to be added
to the SLC tagged azimuth time to get the corrected azimuth times.
'''
pri = 1.0 / self.prf_raw_data
tau0 = self.rank * pri
tau_mid = self.iw2_mid_range * 2.0 / isce3.core.speed_of_light
nx = np.ceil(self.width / xstep).astype(int)
ny = np.ceil(self.length / ystep).astype(int)
        x = np.arange(0, nx*xstep, xstep, dtype=int)
"""Test `seals.util`."""
import collections
import numpy as np
from seals import util
def test_sample_distribution():
"""Test util.sample_distribution."""
distr_size = 5
distr = np.random.rand(distr_size)
distr /= distr.sum()
n_samples = 1000
rng = np.random.RandomState()
sample_count = collections.Counter(
util.sample_distribution(distr, rng) for _ in range(n_samples)
)
empirical_distr = np.array([sample_count[i] for i in range(distr_size)]) / n_samples
# Empirical distribution matches real distribution
l1_err = np.sum(np.abs(empirical_distr - distr))
assert l1_err < 0.1
# Same seed gives same samples
assert all(
util.sample_distribution(distr, random=np.random.RandomState(seed))
== util.sample_distribution(distr, random=np.random.RandomState(seed))
for seed in range(20)
)
def test_one_hot_encoding():
"""Test util.one_hot_encoding."""
Case = collections.namedtuple("Case", ["pos", "size", "encoding"])
cases = [
Case(pos=0, size=1, encoding=np.array([1.0])),
Case(pos=1, size=5, encoding=np.array([0.0, 1.0, 0.0, 0.0, 0.0])),
        Case(pos=3, size=4, encoding=np.array([0.0, 0.0, 0.0, 1.0])),
from .base import BaseDynamics
import networkx as nx
import numpy as np
class SingleUnbiasedRandomWalker(BaseDynamics):
def __init__(self):
self.results={}
def simulate(self,G,L,initial_node=None):
"""
Simulate single random-walker dynamics on a ground truth network.
Generates an N x L time series TS; TS[j,t]==1 if the walker is at
node j at time t, and TS[j,t]==0 otherwise.
Example Usage:
#######
G = nx.ring_of_cliques(4,16)
L = 2001
dynamics = SingleUnbiasedRandomWalker()
TS = dynamics.simulate(G, L)
#######
Params
------
G (nx.Graph): the input (ground-truth) graph with $N$ nodes.
L (int): the length of the desired time series.
Returns
-------
TS (np.ndarray): an $N \times L$ array of synthetic time series data.
"""
# get adjacency matrix and set up vector of indices
A=nx.to_numpy_matrix(G)
N=G.number_of_nodes()
W=np.zeros(L,dtype=int)
# place walker at initial location
if initial_node:
W[0]=initial_node
else:
W[0]=np.random.randint(N)
# run dynamical process
for t in range(L-1):
            W[t+1]=np.random.choice(np.where(A[W[t],:])[1])
#!/usr/bin/env python
# bucket1.py
import numpy as np
from mpi4py import MPI
comm = MPI.COMM_WORLD
rank = comm.Get_rank(); size = comm.Get_size(); N = 16
unsorted = np.zeros(N, dtype="int")
final_sorted = np.zeros(N, dtype="int")
# harmonypy - A data alignment algorithm.
# Copyright (C) 2018 <NAME>
# 2019 <NAME> <<EMAIL>>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
import pandas as pd
import numpy as np
from scipy.cluster.vq import kmeans
import logging
# create logger
logger = logging.getLogger('harmonypy')
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
# from IPython.core.debugger import set_trace
def run_harmony(
data_mat: np.ndarray,
meta_data: pd.DataFrame,
vars_use,
theta = None,
lamb = None,
sigma = 0.1,
nclust = None,
tau = 0,
block_size = 0.05,
max_iter_harmony = 10,
max_iter_kmeans = 20,
epsilon_cluster = 1e-5,
epsilon_harmony = 1e-4,
plot_convergence = False,
verbose = True,
reference_values = None,
cluster_prior = None,
random_state = 0
):
"""Run Harmony.
"""
# theta = None
# lamb = None
# sigma = 0.1
# nclust = None
# tau = 0
# block_size = 0.05
# epsilon_cluster = 1e-5
# epsilon_harmony = 1e-4
# plot_convergence = False
# verbose = True
# reference_values = None
# cluster_prior = None
# random_state = 0
N = meta_data.shape[0]
if data_mat.shape[1] != N:
data_mat = data_mat.T
assert data_mat.shape[1] == N, \
"data_mat and meta_data do not have the same number of cells"
if nclust is None:
nclust = np.min([np.round(N / 30.0), 100]).astype(int)
if type(sigma) is float and nclust > 1:
sigma = np.repeat(sigma, nclust)
if isinstance(vars_use, str):
vars_use = [vars_use]
phi = pd.get_dummies(meta_data[vars_use]).to_numpy().T
phi_n = meta_data[vars_use].describe().loc['unique'].to_numpy().astype(int)
if theta is None:
theta = np.repeat([1] * len(phi_n), phi_n)
elif isinstance(theta, float) or isinstance(theta, int):
theta = np.repeat([theta] * len(phi_n), phi_n)
elif len(theta) == len(phi_n):
theta = np.repeat([theta], phi_n)
assert len(theta) == np.sum(phi_n), \
"each batch variable must have a theta"
if lamb is None:
lamb = np.repeat([1] * len(phi_n), phi_n)
elif isinstance(lamb, float) or isinstance(lamb, int):
lamb = np.repeat([lamb] * len(phi_n), phi_n)
elif len(lamb) == len(phi_n):
lamb = np.repeat([lamb], phi_n)
assert len(lamb) == np.sum(phi_n), \
"each batch variable must have a lambda"
# Number of items in each category.
N_b = phi.sum(axis = 1)
# Proportion of items in each category.
Pr_b = N_b / N
if tau > 0:
theta = theta * (1 - np.exp(-(N_b / (nclust * tau)) ** 2))
lamb_mat = np.diag(np.insert(lamb, 0, 0))
phi_moe = np.vstack((np.repeat(1, N), phi))
np.random.seed(random_state)
ho = Harmony(
data_mat, phi, phi_moe, Pr_b, sigma, theta, max_iter_harmony, max_iter_kmeans,
epsilon_cluster, epsilon_harmony, nclust, block_size, lamb_mat, verbose
)
return ho
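# --- Illustrative note (not part of the original source) ---
# A minimal usage sketch of run_harmony, assuming `embedding` is a cells-by-PCs
# matrix and `meta_data` is a pandas DataFrame with a categorical "batch"
# column (both names are placeholders, not from this file):
#
#     ho = run_harmony(embedding, meta_data, vars_use=["batch"])
#     corrected = ho.Z_corr.T   # batch-corrected embedding, cells by PCs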
class Harmony(object):
def __init__(
self, Z, Phi, Phi_moe, Pr_b, sigma,
theta, max_iter_harmony, max_iter_kmeans,
epsilon_kmeans, epsilon_harmony, K, block_size,
lamb, verbose
):
self.Z_corr = np.array(Z)
        self.Z_orig = np.array(Z)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import sys
# add python path of PadleDetection to sys.path
parent_path = os.path.abspath(os.path.join(__file__, *(['..'] * 3)))
if parent_path not in sys.path:
sys.path.append(parent_path)
import argparse
import time
import yaml
import ast
from functools import reduce
import cv2
import numpy as np
import paddle
import paddle.fluid as fluid
from preprocess import preprocess, Resize, Normalize, Permute, PadStride
from visualize import visualize_box_mask, lmk2out
# Global dictionary
SUPPORT_MODELS = {
'YOLO',
'SSD',
'RetinaNet',
'EfficientDet',
'RCNN',
'Face',
'TTF',
'FCOS',
'SOLOv2',
}
class Detector(object):
"""
Args:
config (object): config of model, defined by `Config(model_dir)`
model_dir (str): root path of __model__, __params__ and infer_cfg.yml
device (str): Choose the device you want to run, it can be: CPU/GPU/XPU, default is CPU
run_mode (str): mode of running(fluid/trt_fp32/trt_fp16)
threshold (float): threshold to reserve the result for output.
enable_mkldnn (bool): whether use mkldnn with CPU.
enable_mkldnn_bfloat16 (bool): whether use mkldnn bfloat16 with CPU.
"""
def __init__(self,
config,
model_dir,
device='CPU',
run_mode='fluid',
threshold=0.5,
trt_calib_mode=False,
enable_mkldnn=False,
enable_mkldnn_bfloat16=False):
self.config = config
if self.config.use_python_inference:
self.executor, self.program, self.fecth_targets = load_executor(
model_dir, device=device)
else:
self.predictor = load_predictor(
model_dir,
run_mode=run_mode,
min_subgraph_size=self.config.min_subgraph_size,
device=device,
trt_calib_mode=trt_calib_mode,
enable_mkldnn=enable_mkldnn,
enable_mkldnn_bfloat16=enable_mkldnn_bfloat16)
def preprocess(self, im):
preprocess_ops = []
for op_info in self.config.preprocess_infos:
new_op_info = op_info.copy()
op_type = new_op_info.pop('type')
if op_type == 'Resize':
new_op_info['arch'] = self.config.arch
preprocess_ops.append(eval(op_type)(**new_op_info))
im, im_info = preprocess(im, preprocess_ops)
inputs = create_inputs(im, im_info, self.config.arch)
return inputs, im_info
def postprocess(self, np_boxes, np_masks, np_lmk, im_info, threshold=0.5):
# postprocess output of predictor
results = {}
if np_lmk is not None:
results['landmark'] = lmk2out(np_boxes, np_lmk, im_info, threshold)
if self.config.arch in ['SSD', 'Face']:
w, h = im_info['origin_shape']
np_boxes[:, 2] *= h
np_boxes[:, 3] *= w
np_boxes[:, 4] *= h
np_boxes[:, 5] *= w
expect_boxes = (np_boxes[:, 1] > threshold) & (np_boxes[:, 0] > -1)
np_boxes = np_boxes[expect_boxes, :]
for box in np_boxes:
print('class_id:{:d}, confidence:{:.4f},'
'left_top:[{:.2f},{:.2f}],'
' right_bottom:[{:.2f},{:.2f}]'.format(
int(box[0]), box[1], box[2], box[3], box[4], box[5]))
results['boxes'] = np_boxes
if np_masks is not None:
np_masks = np_masks[expect_boxes, :, :, :]
results['masks'] = np_masks
return results
def predict(self,
image,
threshold=0.5,
warmup=0,
repeats=1,
run_benchmark=False):
'''
Args:
image (str/np.ndarray): path of image/ np.ndarray read by cv2
threshold (float): threshold of predicted box' score
Returns:
results (dict): include 'boxes': np.ndarray: shape:[N,6], N: number of box,
            matrix element:[class, score, x_min, y_min, x_max, y_max]
MaskRCNN's results include 'masks': np.ndarray:
shape:[N, class_num, mask_resolution, mask_resolution]
'''
inputs, im_info = self.preprocess(image)
np_boxes, np_masks, np_lmk = None, None, None
if self.config.use_python_inference:
for i in range(warmup):
outs = self.executor.run(self.program,
feed=inputs,
fetch_list=self.fecth_targets,
return_numpy=False)
t1 = time.time()
for i in range(repeats):
outs = self.executor.run(self.program,
feed=inputs,
fetch_list=self.fecth_targets,
return_numpy=False)
t2 = time.time()
ms = (t2 - t1) * 1000.0 / repeats
print("Inference: {} ms per batch image".format(ms))
np_boxes = np.array(outs[0])
if self.config.mask_resolution is not None:
np_masks = np.array(outs[1])
else:
input_names = self.predictor.get_input_names()
for i in range(len(input_names)):
input_tensor = self.predictor.get_input_tensor(input_names[i])
input_tensor.copy_from_cpu(inputs[input_names[i]])
for i in range(warmup):
self.predictor.zero_copy_run()
output_names = self.predictor.get_output_names()
boxes_tensor = self.predictor.get_output_tensor(output_names[0])
np_boxes = boxes_tensor.copy_to_cpu()
if self.config.mask_resolution is not None:
masks_tensor = self.predictor.get_output_tensor(
output_names[1])
np_masks = masks_tensor.copy_to_cpu()
if self.config.with_lmk is not None and self.config.with_lmk == True:
face_index = self.predictor.get_output_tensor(output_names[
1])
landmark = self.predictor.get_output_tensor(output_names[2])
prior_boxes = self.predictor.get_output_tensor(output_names[
3])
np_face_index = face_index.copy_to_cpu()
np_prior_boxes = prior_boxes.copy_to_cpu()
np_landmark = landmark.copy_to_cpu()
np_lmk = [np_face_index, np_landmark, np_prior_boxes]
t1 = time.time()
for i in range(repeats):
self.predictor.zero_copy_run()
output_names = self.predictor.get_output_names()
boxes_tensor = self.predictor.get_output_tensor(output_names[0])
np_boxes = boxes_tensor.copy_to_cpu()
if self.config.mask_resolution is not None:
masks_tensor = self.predictor.get_output_tensor(
output_names[1])
np_masks = masks_tensor.copy_to_cpu()
if self.config.with_lmk is not None and self.config.with_lmk == True:
face_index = self.predictor.get_output_tensor(output_names[
1])
landmark = self.predictor.get_output_tensor(output_names[2])
prior_boxes = self.predictor.get_output_tensor(output_names[
3])
np_face_index = face_index.copy_to_cpu()
np_prior_boxes = prior_boxes.copy_to_cpu()
np_landmark = landmark.copy_to_cpu()
np_lmk = [np_face_index, np_landmark, np_prior_boxes]
t2 = time.time()
ms = (t2 - t1) * 1000.0 / repeats
print("Inference: {} ms per batch image".format(ms))
# do not perform postprocess in benchmark mode
results = []
if not run_benchmark:
if reduce(lambda x, y: x * y, np_boxes.shape) < 6:
                print('[WARNING] No object detected.')
results = {'boxes': np.array([])}
else:
results = self.postprocess(
np_boxes, np_masks, np_lmk, im_info, threshold=threshold)
return results
class DetectorSOLOv2(Detector):
def __init__(self,
config,
model_dir,
device='CPU',
run_mode='fluid',
threshold=0.5,
trt_calib_mode=False,
enable_mkldnn=False,
enable_mkldnn_bfloat16=False):
super(DetectorSOLOv2, self).__init__(
config=config,
model_dir=model_dir,
device=device,
run_mode=run_mode,
threshold=threshold,
trt_calib_mode=trt_calib_mode,
            enable_mkldnn=enable_mkldnn,
enable_mkldnn_bfloat16=enable_mkldnn_bfloat16)
def predict(self,
image,
threshold=0.5,
warmup=0,
repeats=1,
run_benchmark=False):
inputs, im_info = self.preprocess(image)
np_label, np_score, np_segms = None, None, None
if self.config.use_python_inference:
for i in range(warmup):
outs = self.executor.run(self.program,
feed=inputs,
fetch_list=self.fecth_targets,
return_numpy=False)
t1 = time.time()
for i in range(repeats):
outs = self.executor.run(self.program,
feed=inputs,
fetch_list=self.fecth_targets,
return_numpy=False)
t2 = time.time()
ms = (t2 - t1) * 1000.0 / repeats
print("Inference: {} ms per batch image".format(ms))
np_label, np_score, np_segms = np.array(outs[0]), np.array(outs[
1]), np.array(outs[2])
else:
input_names = self.predictor.get_input_names()
for i in range(len(input_names)):
input_tensor = self.predictor.get_input_tensor(input_names[i])
input_tensor.copy_from_cpu(inputs[input_names[i]])
for i in range(warmup):
self.predictor.zero_copy_run()
output_names = self.predictor.get_output_names()
np_label = self.predictor.get_output_tensor(output_names[
0]).copy_to_cpu()
np_score = self.predictor.get_output_tensor(output_names[
1]).copy_to_cpu()
np_segms = self.predictor.get_output_tensor(output_names[
2]).copy_to_cpu()
t1 = time.time()
for i in range(repeats):
self.predictor.zero_copy_run()
output_names = self.predictor.get_output_names()
np_label = self.predictor.get_output_tensor(output_names[
0]).copy_to_cpu()
np_score = self.predictor.get_output_tensor(output_names[
1]).copy_to_cpu()
np_segms = self.predictor.get_output_tensor(output_names[
2]).copy_to_cpu()
t2 = time.time()
ms = (t2 - t1) * 1000.0 / repeats
print("Inference: {} ms per batch image".format(ms))
# do not perform postprocess in benchmark mode
results = []
if not run_benchmark:
return dict(segm=np_segms, label=np_label, score=np_score)
return results
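# Illustrative usage sketch, kept as comments so module behaviour is unchanged; the
# config object, model directory and image path below are hypothetical:
#   detector = DetectorSOLOv2(config, model_dir='./solov2_model_dir', device='GPU')
#   result = detector.predict('demo.jpg', threshold=0.5)
#   # result is a dict with 'segm', 'label' and 'score' arrays (see predict above)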
def create_inputs(im, im_info, model_arch='YOLO'):
"""generate input for different model type
Args:
im (np.ndarray): image (np.ndarray)
im_info (dict): info of image
model_arch (str): model type
Returns:
inputs (dict): input of model
"""
inputs = {}
inputs['image'] = im
origin_shape = list(im_info['origin_shape'])
resize_shape = list(im_info['resize_shape'])
pad_shape = list(im_info['pad_shape']) if im_info[
'pad_shape'] is not None else list(im_info['resize_shape'])
scale_x, scale_y = im_info['scale']
if 'YOLO' in model_arch:
im_size = np.array([origin_shape]).astype('int32')
inputs['im_size'] = im_size
elif 'RetinaNet' in model_arch or 'EfficientDet' in model_arch:
scale = scale_x
im_info = np.array([pad_shape + [scale]]).astype('float32')
inputs['im_info'] = im_info
elif ('RCNN' in model_arch) or ('FCOS' in model_arch):
scale = scale_x
im_info = np.array([pad_shape + [scale]]).astype('float32')
im_shape = np.array([origin_shape + [1.]]).astype('float32')
inputs['im_info'] = im_info
inputs['im_shape'] = im_shape
elif 'TTF' in model_arch:
scale_factor = np.array([scale_x, scale_y] * 2).astype('float32')
inputs['scale_factor'] = scale_factor
elif 'SOLOv2' in model_arch:
scale = scale_x
        im_info = np.array([resize_shape + [scale]])
import numpy as np
import torch
import os
import json
from sklearn.metrics import precision_recall_fscore_support
import argparse
import copy
from collections import deque
import pandas as pd
import seaborn as sn
import matplotlib.pyplot as plt
def compute_accuracy(y_pred, y_true):
size_pred = np.shape(y_pred)
size_true = np.shape(y_true)
if len(size_pred) == 1:
n = size_pred[0]
acc = np.sum(y_pred==y_true) * 1.0 / n
elif len(size_pred) == 2:
n = size_pred[0]
acc = np.sum(np.all(y_pred==y_true, 1)) * 1.0 / n
return acc
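# Minimal sketch of compute_accuracy (values are made up):
#   compute_accuracy(np.array([0, 1, 2, 1]), np.array([0, 1, 1, 1]))  # 0.75, element-wise match
#   compute_accuracy(np.eye(3), np.eye(3))                            # 1.0, row-wise match for 2-D labels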
def update_confusion_matrix(y_pred, y_true, conf_matrix, normalise=True):
for y, y_p in zip(y_true, y_pred):
conf_matrix[y, y_p] += 1
if normalise:
conf_matrix /= conf_matrix.sum()
return conf_matrix
def update_class_freq(labels, class_freq, normalise=False):
for y in labels:
        class_freq[y] += 1
if normalise:
class_freq /= class_freq.sum()
    return class_freq
def update_confusion_matrix_with_checks(y_pred, y_true, conf_matrix, normalise=True, max_lbl=None):
if max_lbl is None:
max_lbl = max(max(y_pred), max(y_true))
conf_len = np.shape(conf_matrix)[0]
if conf_len <= max_lbl:
new_conf_matrix=np.zeros((max_lbl+1, max_lbl+1))
new_conf_matrix[:conf_len, :conf_len] = conf_matrix
cm = update_confusion_matrix(y_pred, y_true, new_conf_matrix)
else:
cm = update_confusion_matrix(y_pred, y_true, conf_matrix)
if normalise:
cm = cm / cm.sum()
return cm
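# Minimal sketch: the matrix is expanded automatically when a label exceeds its current
# size (values are made up; note that the inner update normalises the counts in place):
#   cm = np.zeros((2, 2))
#   cm = update_confusion_matrix_with_checks([0, 1], [0, 1], cm)
#   cm = update_confusion_matrix_with_checks([3], [3], cm)  # cm is now 4x4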
def confusion_matrix_combine(conf_matrices, normalise=True):
lens = [np.shape(cm)[0] for cm in conf_matrices]
max_len = max(lens)
new_cm = np.zeros((max_len, max_len))
for i, cm in enumerate(conf_matrices):
cm_len = lens[i]
new_cm[:cm_len, :cm_len] += cm
if normalise:
new_cm /= np.sum(new_cm)
return new_cm
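# Minimal sketch: smaller matrices are zero-padded to the largest size before summing
# (values are made up):
#   combined = confusion_matrix_combine([np.eye(2), np.eye(3)])  # shape (3, 3), sums to 1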
def save_conf_matrix_visualisation(conf_matrix, filepath):
"""
Saves visualisation of the confusion matrix saved in the task performance
"""
n_row, n_col = np.shape(conf_matrix)
    df_cm = pd.DataFrame(conf_matrix, index=np.arange(n_row), columns=np.arange(n_col))
plt.figure(figsize = (35,35))
ax = sn.heatmap(df_cm, annot=True)
fig = ax.get_figure()
fig.savefig(filepath)
def class_freq_combine(class_freqs, normalise=False):
lens = [np.shape(cf)[0] for cf in class_freqs]
max_len = max(lens)
new_cf = np.zeros(max_len)
for i, cf in enumerate(class_freqs):
cf_len = lens[i]
        new_cf[:cf_len] += cf
if normalise:
new_cf /= np.sum(new_cf)
return new_cf
def subtract_confusion_matrices(conf_matrix1, conf_matrix2):
shape1 = np.shape(conf_matrix1)
shape2 = np.shape(conf_matrix2)
if shape1[0] > shape2[0]:
conf_matrix1 = conf_matrix1[:shape2[0], :shape2[0]]
elif shape1[0] < shape2[0]:
conf_matrix2 = conf_matrix2[:shape1[0], :shape1[0]]
return conf_matrix1 - conf_matrix2
def compute_errors(conf_matrix):
    TP = np.diag(conf_matrix)
#!/usr/bin/env python
# coding: utf-8
# In[1]:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import seaborn as sns
import os
from scipy import stats
import argparse
# In[2]:
parser = argparse.ArgumentParser(description='GAN-SODE')
parser.add_argument('--GPU', type=int, default=0, help='GPU ID')
parser.add_argument('-prb', '--problem', choices=['1e4', '1e5', '1e6', '1e7', 'Inf'])
parser.add_argument('-trs', '--train_size', type=int, default=10000)
parser.add_argument('-dim', '--dim', type=int, default=1)
parser.add_argument('-its', '--iterations', type=int, default=100000)
parser.add_argument('-res', '--restore', type=int, default=-1)
parser.add_argument('--seed',type=int, default=0, help='random seed')
parser.add_argument('--lasso', type=float, default = 0.0, help='use L1 penalty on the terms, not for nn')
# parser.add_argument('--GAN',help='version of GAN')
parser.add_argument('--grad', action= 'store_true')
parser.add_argument('--drift', choices=['2term', '4term', 'nn'], help='the format of the drift')
parser.add_argument('--float64', action= 'store_true')
parser.add_argument('--diff', choices=['known','const'], default='known')
parser.add_argument('--lr', type=float, default=1e-4)
parser.add_argument('--bs', type=int, default= 1000)
args = parser.parse_args()
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID' # see issue #152
os.environ['CUDA_VISIBLE_DEVICES']= str(args.GPU)
bs = args.bs
seed = args.seed
lamda = 0.1
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
if args.float64:
dtype = tf.float64
else:
dtype = tf.float32
dim = args.dim
zdim = args.dim
dt = 0.01
steps = [20, 50, 100]
ref_steps = [0, 500]
total_steps = 500
frames = len(steps)
ref_frames = len(ref_steps)
ref = {i: np.load('data1D/ref_{}.npz'.format(i))['ref'] for i in ref_steps + steps}
Qdata = [ref[A][np.random.choice(len(ref[A]),args.train_size,False),:] for A in steps]
# In[3]:
def feed_NN(X, W, b, act = tf.nn.tanh):
A = X
L = len(W)
for i in range(L-1):
A = act(tf.add(tf.matmul(A, W[i]), b[i]))
return tf.add(tf.matmul(A, W[-1]), b[-1])
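# Illustrative sketch of how feed_NN is meant to be called (commented out so the graph
# built by this script is unchanged; the layer widths below are assumptions):
#   widths = [zdim, 128, 128, dim]
#   W = [tf.Variable(tf.random.normal([widths[i], widths[i + 1]], stddev=0.1, dtype=dtype))
#        for i in range(len(widths) - 1)]
#   b = [tf.Variable(tf.zeros([1, widths[i + 1]], dtype=dtype)) for i in range(len(widths) - 1)]
#   y = feed_NN(tf.random.normal([bs, zdim], dtype=dtype), W, b)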
def initgenerator(X, W, b):
y = feed_NN(X,W,b, act= tf.nn.tanh)
return y
# In[4]:
def fun_diff(x):
if args.diff == 'known':
diff = 1
elif args.diff == 'const':
diff = tf.nn.softplus(s_W[0])
else:
raise NotImplementedError
return diff
def fun_drift(x):
if args.drift == '2term':
drift = d_W[0] * x + d_W[1] * x**3
elif args.drift == '4term':
drift = d_W[0] + d_W[1] * x + d_W[2] * x**2 + d_W[3] * x**3
elif args.drift == 'nn':
drift = feed_NN(x, d_W, d_b, act= tf.nn.tanh)
if args.grad:
drift = tf.gradients(drift, x)[0]
else:
raise NotImplementedError
return drift
def generator(x, steps, dt, bs = bs):
'''
x shape: [bs, dim]
'''
u = [None for i in range(steps + 1)]
u[0] = x
print(0, end = ' ', flush = True)
for i in range(steps):
drift = fun_drift(u[i])
diff = fun_diff(u[i])
u[i+1] = u[i] + dt * drift + 1 * np.sqrt(dt) * diff * tf.random.normal([bs, dim], mean=0.0, stddev=1.0, dtype = dtype)
print(i+1, end = ' ', flush = True)
return u[-1], u
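# For reference, a single Euler-Maruyama step of the same SDE in plain NumPy
# (illustrative only; drift_fn / diff_fn stand in for fun_drift / fun_diff above):
#   x_next = x + dt * drift_fn(x) + np.sqrt(dt) * diff_fn(x) * np.random.randn(*x.shape)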
def mkfigure_train_1D(title):
plt.figure(figsize=(10,6 * frames))
plotid = 0
for plotid in range(frames):
s = steps[plotid]
plt.subplot(frames,1,plotid + 1)
init = np.concatenate([sess.run(Gs[s]) for i in range(10)], axis = 0)
sns.kdeplot(init[:,0], label = '10,000 \n generated sample')
sns.kdeplot(Qdata[plotid][:,0], label = '{} \n training samples'.format(len(Qdata[plotid])))
sns.kdeplot(ref[s][np.random.choice(len(ref[s]),10000,False),0], label = '10,000 \n MC samples')
plt.title('t = {}'.format(s/100))
plt.legend()
plt.xlim(-5,5)
plt.savefig(savedir+ '/' + title + '.eps', format = 'eps')
def mkfigure_ref_1D(title):
plt.figure(figsize=(10, 6 * ref_frames))
plotid = 0
for plotid in range(ref_frames):
s = ref_steps[plotid]
plt.subplot(ref_frames,1,plotid + 1)
init = np.concatenate([sess.run(Gs[s]) for i in range(10)], axis = 0)
sns.kdeplot(init[:,0], label = '10,000 \n generated sample')
sns.kdeplot(ref[s][np.random.choice(len(ref[s]),10000,False),0], label = '10,000 \n MC samples')
plt.title('t = {}'.format(s/100))
plt.legend()
plt.xlim(-5,5)
plt.savefig(savedir+ '/' + title + '.eps', format = 'eps')
def save_sample(title, steps, repeat = 100):
init = []
for s in steps:
init.append(np.concatenate([sess.run(Gs[s]) for i in range(repeat)], axis = 0))
    np.savez(savedir + '/' + title + '.npz', steps = np.array(steps), Gdata = np.array(init))
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
@author: yunnaidan
@time: 2019/11/24
@file: DVI.py
"""
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
import os
import json
from matplotlib import rc
plt.rcParams['savefig.dpi'] = 300
rc('text', usetex=True)
rc('font', size=15)
rc('xtick', labelsize=10)
rc('ytick', labelsize=10)
import gaussian_variables as gv
import utils as u
import plot_utils as pu
import bayes_layers as bnn
from bayes_models import MLP, PointMLP, AdaptedMLP
from dataset.UCIdataset import UCIDataset
from dataset.Facedataset import FaceDataset
def make_model(hypers):
if hypers['method'].lower().strip() == 'bayes':
MLP_factory = MLP
def prediction(y): return tf.reshape(y.mean[:, 0], [-1])
loss = bnn.regression_loss
else:
MLP_factory = PointMLP
def prediction(y): return tf.reshape(y.mean[:, 0], [-1])
loss = bnn.point_regression_loss
mlp = MLP_factory(hypers['x_dim'], hypers['y_dim'], hypers)
mlp = AdaptedMLP(mlp)
mlp.make_placeholders()
ipt = mlp.placeholders['ipt_mean']
y = mlp(ipt)
target = tf.placeholder(tf.float32, [None])
mlp.placeholders['target'] = target
global_step = tf.Variable(0, trainable=False, name='global_step')
loss, logprob, all_surprise = loss(y, target, mlp, hypers, global_step)
accuracy = tf.reduce_mean(tf.abs(target - prediction(y)))
return {
'model': mlp,
'metrics': {
'accuracy': accuracy, 'loss': loss,
'logprob': logprob, 'all_surprise': all_surprise
},
'global_step': global_step}
def train_test(
Xtrain,
Ytrain,
Xtest,
Ytest,
paras,
outpath):
train_no, x_dim = Xtrain.shape
try:
test_no, y_dim = Ytest.shape
except:
        test_no = Ytest.shape[0]
y_dim = 1
hypers = {
"x_dim": x_dim,
"y_dim": y_dim,
"hidden_dims": paras["hidden_dims"],
"nonlinearity": "relu",
"adapter": {'in':paras['in'],'out':paras['out']},
"method": "bayes",
"style": "heteroskedastic",
"homo_logvar_scale": 2 * np.log(0.2),
"prior_type": [
"empirical",
"wider_he",
"wider_he"],
"n_epochs": paras['n_epochs'],
# "batch_size": 32,
"batch_size": train_no,
"learning_rate": paras['learning_rate'],
"lambda": 1.0,
"warmup_updates": {
'lambda': 14000.0},
"anneal_updates": {
'lambda': 1000.0},
"optimizer": "adam",
"gradient_clip": 0.1,
"data_fraction": 1.0,
"sections_to_run": [
"train",
'test']}
data = [[Xtrain, Ytrain.reshape(-1)],
[Xtest, Ytest.reshape(-1)]]
restricted_training_set = u.restrict_dataset_size(
data[0], hypers['data_fraction'])
hypers['dataset_size'] = len(restricted_training_set[0])
device_id = 1
device_string = u.get_device_string(device_id)
print(hypers)
with tf.device(device_string):
if True:
model_and_metrics = make_model(hypers)
train_op = u.make_optimizer(model_and_metrics, hypers)
sess = u.get_session()
saver = tf.train.Saver()
all_summaries = []
best_valid_accuracy = np.inf
for epoch in range(1, hypers['n_epochs'] + 1):
verbose = (epoch % 20 == 0)
if verbose:
print("Epoch %i: " % epoch, end='')
epoch_summary, accuracies = u.train_valid_test(
{
'train': restricted_training_set,
'test': data[1]
},
sess, model_and_metrics, train_op, hypers, verbose)
# dump log file
all_summaries.append(epoch_summary)
if epoch % 5000 == 0:
saver.save(
sess,
os.path.join(
outpath,
'model.ckpt'),
global_step=epoch)
with open(os.path.join(outpath, "summaries.json"), 'w') as f:
json.dump(all_summaries, f, indent=4, cls=u.NumpyEncoder)
return None
def run_(dataset_name, dataset_path, times, paras):
np.random.seed(123)
for time in range(times):
outpath = os.path.join(dataset_path, str(time))
if not os.path.exists(outpath):
os.makedirs(outpath)
if dataset_name == 'face':
data = FaceDataset("./dataset", 0.9)
else:
data = UCIDataset(dataset_name, 0.9)
print(
data.Xtrain.shape,
data.Ytrain.shape,
data.Xtest.shape,
data.Ytest.shape)
train_test(
data.Xtrain,
data.Ytrain,
data.Xtest,
data.Ytest,
paras,
outpath)
return None
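# Illustrative invocation (commented out; all values are hypothetical and the adapter
# dicts must match whatever AdaptedMLP in bayes_models expects):
#   paras = {"hidden_dims": [50, 50],
#            "in": {},    # adapter settings for the input side (placeholder)
#            "out": {},   # adapter settings for the output side (placeholder)
#            "n_epochs": 2000, "learning_rate": 1e-3}
#   run_('face', './results/face', times=5, paras=paras)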
def show_(datasets, root_path, times, epoch_list, shape):
fig = plt.figure()
for i in range(len(datasets)):
dataset_name = datasets[i]
print (dataset_name)
data_path = os.path.join(root_path, dataset_name)
b_epoch, e_epoch = epoch_list[i]
ax = fig.add_subplot(2, 2, i + 1)
test_mean, test_std = pu.UCI_result_plot(
dataset_name,
data_path,
times,
ax,
b_epoch=b_epoch,
e_epoch=e_epoch,
shape=shape)
if i+1 in [1, 3]:
ax.set_ylabel(shape)
if i+1 in [3, 4]:
ax.set_xlabel('Epoch')
        print(np.min(test_mean),
              0.5 * test_std[np.where(test_mean == np.min(test_mean))])
import os
import sys
import math
import pickle
import pdb
import argparse
import random
from tqdm import tqdm
from shutil import copy
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import ReduceLROnPlateau
import numpy as np
import scipy.io
from scipy.linalg import qr
import igraph
from random import shuffle
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from util import *
from models import *
from sklearn import manifold
# from dataset import *
parser = argparse.ArgumentParser(description='Train Variational Autoencoders for DAGs')
# general settings
parser.add_argument('--data-name', default='threeStageOpamp', help='graph dataset name')
parser.add_argument('--save-appendix', default='',
help='what to append to data-name as save-name for results')
parser.add_argument('--only-test', action='store_true', default=False,
help='if True, perform some experiments without training the model')
parser.add_argument('--backup', action='store_true', default=True,
help='if True, copy current py files to result dir')
parser.add_argument('--save-interval', type=int, default=1, metavar='N',
help='how many epochs to wait each time to save model states')
parser.add_argument('--sample-number', type=int, default=10, metavar='N',
help='how many samples to generate each time')
parser.add_argument('--gpu', type=int, default=3, help='which gpu to use')
# training settings
# parser.add_argument('--model', default='DVAE_hybirdLoss', help='model to use')
parser.add_argument('--model', default='DVAE', help='model to use')
# parser.add_argument('--data_file', type=str, default='dataset_withoutY', help='dataset original file to use')
parser.add_argument('--trainSet_size', type=int, default=2000, help='control the size of training set')
parser.add_argument('--hs', type=int, default=501, metavar='N',
help='hidden size of GRUs')
parser.add_argument('--nz', type=int, default=10, metavar='N',
help='number of dimensions of latent vectors z')
parser.add_argument('--load_model_path', default='', help='model path to loaded')
parser.add_argument('--load_model_name', default='500', help='model name to loaded')
# optimization settings
parser.add_argument('--lr', type=float, default=5e-4, metavar='LR',
help='learning rate (default: 1e-4)')
parser.add_argument('--epochs', type=int, default=500, metavar='N',
help='number of epochs to train')
parser.add_argument('--batch_size', type=int, default=16, metavar='N',
help='batch size during training')
parser.add_argument('--infer-batch-size', type=int, default=128, metavar='N',
help='batch size during inference')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
args = parser.parse_args()
torch.manual_seed(args.seed)
gpu = 'cuda:' + str(args.gpu)
device = torch.device(gpu if torch.cuda.is_available() else 'cpu')
np.random.seed(args.seed)
random.seed(args.seed)
print(args)
'''Prepare data'''
args.file_dir = os.getcwd()
args.res_dir = os.path.join(args.file_dir, 'results/{}{}'.format(args.data_name,
args.save_appendix))
if not os.path.exists(args.res_dir):
os.makedirs(args.res_dir)
pkl_name = os.path.join(args.res_dir, args.data_name + '.pkl')
# check whether to load pre-stored pickle data
if os.path.isfile(pkl_name):
with open(pkl_name, 'rb') as f:
train_data, test_data, graph_args = pickle.load(f)
# otherwise process the raw data and save to .pkl
else:
# data_file = args.data_file
# train_data, test_data, graph_args = load_CIRCUIT_graphs(data_file)
train_data, test_data, graph_args = load_CIRCUIT_graphs()
train_data = train_data[:args.trainSet_size]
with open(pkl_name, 'wb') as f:
pickle.dump((train_data, test_data, graph_args), f)
if args.backup:
# backup current .py files
copy('train.py', args.res_dir)
copy('models.py', args.res_dir)
copy('util.py', args.res_dir)
# save command line input
cmd_input = 'python ' + ' '.join(sys.argv) + '\n'
with open(os.path.join(args.res_dir, 'cmd_input.txt'), 'a') as f:
f.write(cmd_input)
print('Command line input: ' + cmd_input + ' is saved.')
'''Prepare the model'''
# model
model = eval(args.model)(
max_n=graph_args.max_n,
fs=graph_args.edge_feature,
nvt=graph_args.nvt,
START_TYPE=0,
END_TYPE=1,
hs=args.hs,
nz=args.nz
)
# optimizer and scheduler
optimizer = optim.Adam(model.parameters(), lr=args.lr)
scheduler = ReduceLROnPlateau(optimizer, 'min', factor=0.1, patience=10, verbose=True)
model.to(device)
'''
# plot sample train/test graphs
if not (os.path.exists(os.path.join(args.res_dir, 'train_graph_id0.pdf')) or os.path.exists(os.path.join(args.res_dir, 'train_graph_id0.png'))):
for data in ['train_data', 'test_data']:
G = [g for g, y in eval(data)[:10]]
for i, g in enumerate(G):
name = '{}_graph_id{}'.format(data[:-5], i)
plot_DAG(g, args.res_dir, name)
'''
'''Define some train/test functions'''
def train(epoch):
model.train()
train_loss = 0
recon_loss = 0
kld_loss = 0
pred_loss = 0
pbar = tqdm(train_data)
g_batch = []
y_batch = []
min_dist = 1
max_dist = 0
for i, (g, y) in enumerate(pbar):
g_batch.append(g)
y_batch.append(y)
if len(g_batch) == args.batch_size or i == len(train_data) - 1:
optimizer.zero_grad()
g_batch = model._collate_fn(g_batch)
'''
mu, logvar = model.encode(g_batch)
loss, recon, kld = model.loss(mu, logvar, g_batch)
'''
loss, recon, kld = model(g_batch)
# if epoch % 100 ==0 and i == len(train_data) - 1:
# Hv
for vi in range(0, model.max_n):
# print("vi:", vi)
Hvi = model._get_vertex_state(g_batch, vi)
'''
for j in range(Hvi.size()[0]):
for k in range(j+1, Hvi.size()[0]):
dist = torch.cosine_similarity(Hvi[j], Hvi[k], dim=0)
min_dist = min(dist, min_dist)
max_dist = max(dist, max_dist)
'''
# print("min_dist:", min_dist)
# print("max_dist:", max_dist)
# print(Hvi.size()[0])
# print(i, Hvi)
pbar.set_description('Epoch: %d, loss: %0.4f, recon: %0.4f, kld: %0.4f' % (
epoch, loss.item() / len(g_batch), recon.item() / len(g_batch), kld.item() / len(g_batch)))
loss.backward()
# train_loss += float(loss)
# recon_loss += float(recon)
# kld_loss += float(kld)
train_loss += loss.item()
recon_loss += recon.item()
kld_loss += kld.item()
optimizer.step()
g_batch = []
y_batch = []
print('====> Epoch: {} Average loss: {:.4f}'.format(epoch, train_loss / len(train_data)))
return train_loss, recon_loss, kld_loss
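# Illustrative outer loop (commented out; the script's real main loop is not part of
# this excerpt and the checkpoint file name below is hypothetical):
#   for epoch in range(1, args.epochs + 1):
#       train_loss, recon_loss, kld_loss = train(epoch)
#       scheduler.step(train_loss)
#       if epoch % args.save_interval == 0:
#           torch.save(model.state_dict(),
#                      os.path.join(args.res_dir, 'model_checkpoint{}.pth'.format(epoch)))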
def test():
# test recon accuracy
test_model.eval()
encode_times = 1
decode_times = 1
Nll = 0
n_perfect = 0
print('Testing begins...')
print('Performance on the train data: ')
pbar1 = tqdm(train_data)
g_batch = []
y_batch = []
for i, (g, y) in enumerate(pbar1):
g_batch.append(g)
y_batch.append(y)
if len(g_batch) == args.infer_batch_size or i == len(train_data) - 1:
g = test_model._collate_fn(g_batch)
mu, logvar = test_model.encode(g)
_, nll, _ = test_model.loss(mu, logvar, g)
pbar1.set_description('recon loss: {:.4f}'.format(nll.item() / len(g_batch)))
Nll += nll.item()
# construct igraph g from tensor g to check recon quality
for _ in range(encode_times):
z = test_model.reparameterize(mu, logvar)
for _ in range(decode_times):
g_recon = test_model.decode(z)
n_perfect += sum(is_same_DAG(g0, g1) for g0, g1 in zip(g, g_recon))
g_batch = []
y_batch = []
Nll /= len(train_data)
acc = n_perfect / (len(train_data) * encode_times * decode_times)
print('Trainset average recon loss: {0}, recon accuracy: {1:.4f}'.format(Nll, acc))
    print('Performance on the test data: ')
pbar = tqdm(test_data)
g_batch = []
y_batch = []
Nll = 0
n_perfect = 0
for i, (g, y) in enumerate(pbar):
g_batch.append(g)
y_batch.append(y)
if len(g_batch) == args.infer_batch_size or i == len(test_data) - 1:
g = test_model._collate_fn(g_batch)
mu, logvar = test_model.encode(g)
print("mu", mu)
print("logvar", logvar)
_, nll, _ = test_model.loss(mu, logvar, g)
pbar.set_description('recon loss: {:.4f}'.format(nll.item() / len(g_batch)))
# Nll += nll.item()
Nll += float(nll)
# construct igraph g from tensor g to check recon quality
for _ in range(encode_times):
z = test_model.reparameterize(mu, logvar)
for _ in range(decode_times):
g_recon = test_model.decode(z)
n_perfect += sum(is_same_DAG(g0, g1) for g0, g1 in zip(g, g_recon))
if i == len(test_data) - 1:
for j in range(g_batch[-1].vcount()):
print("True paramaters of graph node ", j)
print(g_batch[-1].vs[j]['param'])
print("Decoded paramaters of graph node ", j)
print(g_recon[-1].vs[j]['param'])
g_batch = []
y_batch = []
Nll /= len(test_data)
acc = n_perfect / (len(test_data) * encode_times * decode_times)
print('Testset average recon loss: {0}, recon accuracy: {1:.4f}'.format(Nll, acc))
# return Nll, acc
def visualize_recon(epoch, current_model):
current_model.eval()
# draw some reconstructed train/test graphs to visualize recon quality
for i, (g, y) in enumerate(test_data[:10] + train_data[:10]):
g_recon = current_model.encode_decode(g)[0] # remove []
name0 = 'graph_epoch{}_id{}_original'.format(epoch, i)
plot_DAG(g, args.res_dir, name0)
name1 = 'graph_epoch{}_id{}_recon'.format(epoch, i)
plot_DAG(g_recon, args.res_dir, name1)
def extract_latent(data):
model.eval()
Z = []
Y = []
g_batch = []
for i, (g, y) in enumerate(tqdm(data)):
# copy igraph
# otherwise original igraphs will save the H states and consume more GPU memory
g_ = g.copy()
g_batch.append(g_)
if len(g_batch) == args.infer_batch_size or i == len(data) - 1:
g_batch = model._collate_fn(g_batch)
mu, _ = model.encode(g_batch)
mu = mu.cpu().detach().numpy()
Z.append(mu)
g_batch = []
Y.append(y)
    return np.concatenate(Z, 0)
#!/usr/bin/env python3
"""
Changelog:
New in version 1_0:
- Create script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Author:
<NAME>
Email:
<EMAIL>
Github:
@The-SS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This script does the following:
- Import the `NodeListData` pickle file that RRT* generates
- Extract the optimal trajectory and print its cost
- Shorten the trajectory so that each segment between sampled points is given a proportional horizon to its length (both
in linear and angular distance)
Tested platform:
- Python 3.6.9 on Ubuntu 18.04 LTS (64 bit)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
###############################################################################
###############################################################################
# Import all the required libraries
import math
import numpy as np
import matplotlib.pyplot as plt
import time
import pickle
from casadi.tools import *
import copy
import numpy.linalg as la
import sys
import os
sys.path.insert(0, '../')
from rrtstar import DR_RRTStar_Node
from matplotlib.patches import Rectangle, Ellipse
from matplotlib.collections import EllipseCollection
from matplotlib.offsetbox import AnnotationBbox, AuxTransformBox
# from drrrts_nmpc import *
from drrrts_nmpc import SetUpSteeringLawParametersBigM, nonlinsteerBigM
from drrrts_nmpc import SetUpSteeringLawParametersNoColAvoid, nonlinsteerNoColAvoid
from drrrts_nmpc import find_dr_padding, get_padded_edges
import UKF_Estimator as UKF_Estimator
from collision_check import PtObsColFlag, LineObsColFlag
# Global variables
import config
DT = config.DT # timestep between controls
SAVEPATH = config.SAVEPATH # path where RRT* data is located and where this data will be stored
GOALAREA = config.GOALAREA # goal area [xmin,xmax,ymin,ymax]
OBSTACLELIST = config.OBSTACLELIST
import file_version
FILEVERSION = file_version.FILEVERSION # version of this file
STEER_TIME = config.STEER_TIME # Maximum Steering Time Horizon
DT = config.DT # timestep between controls
VELMIN = config.VELMIN
VELMAX = config.VELMAX
ANGVELMIN = config.ANGVELMIN
ANGVELMAX = config.ANGVELMAX
SIGMAW = config.SIGMAW
SIGMAV = config.SIGMAV
CROSSCOR = config.CROSSCOR
QLL = config.QLL
RLL = config.RLL
QTLL = config.QTLL
ROBRAD = config.ROBRAD # radius of robot (added as padding to environment bounds and the obstacles
OBSTACLELIST = copy.copy(config.OBSTACLELIST) # [ox,oy,wd,ht]
RANDAREA = copy.copy(config.RANDAREA) # [xmin,xmax,ymin,ymax]
ALFA = config.ALFA
DRRRT = config.DRRRT
lastalfa = ALFA[-1]
obsalfa = ALFA[0:-4]
obsalfa.insert(0, lastalfa)
ALFA = obsalfa
# def SetUpSteeringLawParametersBigM(N, T, v_max, v_min, omega_max, omega_min):
# """
# Sets up a BONMIN MINLP solver using Casadi Opti
# Collision avoidance is encoded with Big-M formulation
#
# Inputs:
# N: horizon
# T: time step (sec)
# v_max, v_min: maximum and minimum linear velocities in m/s
# omega_max, omega_min: maximum and minimum angular velocities in rad/s
# Outputs:
# solver, f, n_states, n_controls, U, X, P, delta
# solver: Casadi NLP solver using bonmin
# f: Casadi continuous time dynamics function
# n_states, n_controls: number of states and controls
# U, X: Casadi input and state variables (N x n_controls and (N+1)x n_states matrices)
# P: Casadi desired state parameters ((N+1) x n_states matrix)
# Delta: Casadi 0-1 variables for constraints (4*num_obs vector)
# """
#
# # Define state and input cost matrices
# Q = QLL
# R = RLL
# QT = QTLL
#
#
# opti = casadi.Opti()
#
# # Define symbolic states using Casadi Opti
# x = opti.variable()
# y = opti.variable()
# theta = opti.variable()
# states = vertcat(x, y, theta) # all three states
# n_states = states.size()[0] # number of symbolic states
#
# # Define symbolic inputs using Cadadi SX
# v = opti.variable()
# omega = opti.variable()
# controls = vertcat(v, omega) # both controls
# n_controls = controls.size()[0] # number of symbolic inputs
#
# # RHS of nonlinear unicycle dynamics (continuous time model)
# rhs = horzcat(v * cos(theta), v * sin(theta), omega)
#
# # Unicycle continuous time dynamics function
# f = Function('f', [states, controls], [rhs], ['input_state', 'control_input'], ['rhs'])
#
# # Casadi Opti trajectory variables/parameters for multiple shooting
# U = opti.variable(N, n_controls)
# X = opti.variable(N+1, n_states)
# P = opti.parameter(N+1, n_states)
# discrete = [False]*(N*n_controls + (N+1)*n_states) # specify U and X to be continuous variables
#
# # Cost function
# obj = 0 # objective/cost
# opti.subject_to(X[0, :].T == P[0, :].T)
# for i in range(N):
# # add to the cost the quadratic stage cost: (x-x_des)*Q*(x-x_des)^T + u*R*u^T
# obj += mtimes([U[i, :], R, U[i, :].T]) # quadratic penalty on control effort
# obj += mtimes([X[i, :] - P[i, :], Q, X[i, :].T - P[i, :].T]) # quadratic penalty on deviation from reference state
#
# # compute the next state from the dynamics
# x_next_ = f(X[i, :], U[i, :]) * T + X[i, :]
#
# # make the dynamics' next state the same as the i+1 trajectory state (multiple shooting) (satisfy dynamics)
# opti.subject_to(X[i + 1, :].T == x_next_.T)
#
# # we might not be able to get back to the original target goal state
# # alternatively, we have a large penalty of being away from it
# obj += mtimes([X[N, :] - P[N, :], QT, X[N, :].T - P[N, :].T])
#
# # minimize this objective
# opti.minimize(obj)
#
# # state environment constraints
# opti.subject_to(opti.bounded(-casadi.inf, X[:,2], casadi.inf)) # theta only now (x,y states added later)
# # input constraints
# opti.subject_to(opti.bounded(v_min, U[:,0], v_max))
# opti.subject_to(opti.bounded(omega_min, U[:,1], omega_max))
#
#
# # obstacle constraints using Big-M formulation TODO: TRY THE CONVEX-HULL REFORMULATION https://optimization.mccormick.northwestern.edu/index.php/Disjunctive_inequalities (it might be faster)
# obs_edges, env_edges = get_padded_edges()
# x_max_env = env_edges["right"]
# x_min_env = env_edges["left"]
# y_max_env = env_edges["top"]
# y_min_env = env_edges["bottom"]
#
# num_obs = len(obs_edges)
# DELTA = opti.variable(4 * num_obs) # 0-1 variables to indicate if an obstacle is hit
# opti.subject_to(opti.bounded(0, DELTA, 1))
# discrete += [True] * (4 * num_obs) # specify the delta variables to be discrete (with above bound --> 0-1 variables)
# M = max(x_max_env-x_min_env, y_max_env-y_min_env) + 1 # 10 # a large upper bound on x and y
# STARTIDX = opti.parameter(1) # specify which points in the horizon should have collision avoidance enforced
# # DR padding values
# OBSPAD = opti.parameter(N+1, 4 * num_obs) # for each time step, each obstacle edge has its own dr padding (right, left, top, bottom)
# ENVPAD = opti.parameter(N+1, 4) # for each time step, the four environment edges have their own dr padding (xmax, xmin, ymax, ymin) = (right, left, top, bottom)
#
# opti.subject_to(opti.bounded(x_min_env + ENVPAD[:,1], X[:, 0], x_max_env - ENVPAD[:,0]))
# opti.subject_to(opti.bounded(y_min_env + ENVPAD[:,3], X[:, 1], y_max_env - ENVPAD[:,2]))
#
# for obs_num, obs in enumerate(obs_edges):
# # for every obstacle
# top = obs["top"]
# bottom = obs["bottom"]
# right = obs["right"]
# left = obs["left"]
#
# # add Big-M formulation disjunctive constraints
# opti.subject_to(opti.bounded(-M * (1 - DELTA[4 * obs_num + 0]) + right + OBSPAD[:, 0],
# X[:, 0],
# x_max_env - ENVPAD[:, 0] + M * (1 - DELTA[4 * obs_num + 0]))) # be to the right of the obstacle
# opti.subject_to(opti.bounded(-M * (1 - DELTA[4 * obs_num + 1]) + x_min_env + ENVPAD[:, 1],
# X[:, 0],
# left - OBSPAD[:, 1] + M * (1 - DELTA[4 * obs_num + 1]))) # be to the left of the obstacle
# opti.subject_to(opti.bounded(-M * (1 - DELTA[4 * obs_num + 2]) + top + OBSPAD[:, 2],
# X[:, 1],
# y_max_env - ENVPAD[:, 2] + M * (1 - DELTA[4 * obs_num + 2]))) # be to the top of the obstacle
# opti.subject_to(opti.bounded(-M * (1 - DELTA[4 * obs_num + 3]) + y_min_env + ENVPAD[:, 3],
# X[:, 1],
# bottom - OBSPAD[:, 3] + M * (1 - DELTA[4 * obs_num + 3]))) # be to the bottom of the obstacle
#
# # require at least one of these constraints to be true
# opti.subject_to(
# 1 <= DELTA[4 * obs_num + 0] + DELTA[4 * obs_num + 1] + DELTA[4 * obs_num + 2] + DELTA[4 * obs_num + 3])
#
# # create a dict of the discrete flags
# args = dict(discrete=discrete)
# # specify the solver
# opti.solver("bonmin", args)
#
# solver = opti # solver instance to return
#
# return solver, f, n_states, n_controls, U, X, P, DELTA, STARTIDX, OBSPAD, ENVPAD
# def nonlinsteerBigM(solver, x0, xT, n_states, n_controls, N, T, U, X, P, DELTA, STARTIDX, OBSPAD, ENVPAD, current_ref_traj, current_ref_inputs, start_idx, obs_pad, env_pad):
# """
# Solves the nonlinear steering problem using the solver from SetUpSteeringLawParametersBigM
# Inputs:
# solver: Casadi NLP solver from SetUpSteeringLawParameters
# x0, xT: initial and final states as (n_states)x1 ndarrays e.g. [[2.], [4.], [3.14]]
# n_states, n_controls: number of states and controls
# N: horizon
# T: time step
# lbg, lbx, ubg, ubx: lower and upper (l,u) state and input (x,g) bounds
# current_ref_traj, current_ref_inputs: reference trajectory and reference inputs as Nx(n_states) ndarrays# TODO: add shapes
# Outputs:
# x_casadi, u_casadi: trajectory states and inputs returned by Casadi
# if solution found:
# states: (N+1)x(n_states) ndarray e.g. [[1 2 0], [1.2 2.4 0], [2 3.5 0]]
# controls: (N)x(n_controls) ndarray e.g. [[0.5 0], [1 0.01], [1.2 -0.01]]
# else, [],[] returned
# """
#
# # Create an initial state trajectory that roughly accomplishes the desired state transfer (by interpolating)
# init_states_param = np.linspace(0, 1, N + 1)
# init_states = np.zeros([N + 1, n_states])
# dx = xT - x0
# for i in range(N + 1):
# init_states[i] = (x0 + init_states_param[i] * dx).flatten()
#
# # Create an initial input trajectory that roughly accomplishes the desired state transfer
# # (using interpolated states to compute rough estimate of controls)
# dist = la.norm(xT[0:2] - x0[0:2])
# ang_dist = xT[2][0] - x0[2][0]
# total_time = N * T
# const_vel = dist / total_time
# const_ang_vel = ang_dist / total_time
# init_inputs = np.array([const_vel, const_ang_vel] * N).reshape(-1, 2)
#
# ## set parameter
# constraint_states = []
# # constraint_states.append(x0.reshape(n_states))
#
#
# for ref_state in current_ref_traj:
# constraint_states.append(ref_state.reshape(n_states))
# constraint_states = np.array(constraint_states)
#
# init_inputs = []
# for ref_input in current_ref_inputs:
# init_inputs.append(ref_input.reshape(n_controls))
# init_inputs = np.array(init_inputs)
#
# solver.set_value(P, constraint_states)
# solver.set_value(STARTIDX, start_idx)
# solver.set_value(OBSPAD, obs_pad)
# solver.set_value(ENVPAD, env_pad)
# solver.set_initial(X, constraint_states)
# solver.set_initial(U, init_inputs)
# try:
# res = solver.solve()
# except:
# print('Steering NLP Failed')
# return [], []
#
# # Update the cost_total
# # cost_total = res.value(self.obj) # self.opti.debug.value(self.obj)
# # Obtain the optimal control input sequence
# u_casadi = res.value(U) # shape: (N, n_controls)
# # Get the predicted state trajectory for N time steps ahead
# x_casadi = res.value(X) # shape: # (N+1, n_states)
#
# print('delta', res.value(DELTA))
#
# return x_casadi, u_casadi
# def get_padded_edges():
# '''
# Finds the left, right, top, and bottom padded (by robot radius) edges for the obstacles and the environment
# Outputs:
# obs_edges = edges of obstacles in the form of a list where each element is a dictionary with "top","bottom", "right", and "left"
# env_edges = edges of environment in the form of a dictionary with "top","bottom", "right", and "left"
# obs_edges should be used as (x < "left") or (x > "right") or (y < "bottom") or (y > "top")
# env_edges should be used as (x > "left") and (x < "right") and (y > "bottom") and (y < "top")
# '''
# randArea1 = copy.copy(RANDAREA) # [xmin,xmax,ymin,ymax]
# obstacleList1 = copy.copy(OBSTACLELIST) # [ox,oy,wd,ht]
#
# # environment bounds
# xmin = randArea1[0]
# xmax = randArea1[1]
# ymin = randArea1[2]
# ymax = randArea1[3]
# # thickness of env edges (doesn't matter much, anything > 0 works)
# thickness = 0.1
# # original environment area - width and height
# width = xmax - xmin
# height = ymax - ymin
#
# env_edges = {"left": xmin+ROBRAD, "right": xmax-ROBRAD, "bottom": ymin+ROBRAD, "top": ymax-ROBRAD} # environment edges
# obs_edges = []
#
# # add enough padding for obstacles for robot radius
# for obs in obstacleList1:
# xmin = obs[0] - ROBRAD
# xmax = xmin + obs[2] + (2 * ROBRAD)
# ymin = obs[1] - ROBRAD
# ymax = ymin + obs[3] + (2 * ROBRAD)
# edges = {"left": xmin, "right": xmax, "bottom": ymin, "top": ymax}
# obs_edges.append(edges)
#
# return obs_edges, env_edges
# def find_dr_padding(alfa, N, obs_edges, horizon_covars):
# '''
# Finds DR padding value for each environment and obstacle edge
# '''
# xDir = np.array([1, 0, 0]) # x direction
# yDir = np.array([0, 1, 0]) # x direction
# num_obs = len(obs_edges)
#
# env_pad = np.zeros([N + 1, 4]) # for each time step, the four environment edges have their own dr padding (right, left, top, bottom)
# obs_pad = np.zeros([N + 1, 4 * num_obs]) # for each time step, each obstacle edge has its own dr padding (right, left, top, bottom)
#
# # find tightening value for all alfa values delta = sqrt((1-alfa)/alfa)
# alpha = np.array(alfa, float)
# delta = (1-alpha) / alpha
# delta = delta**(0.5)
#
# for n in range(1,N+1): # skip the first time step (no DR padding there - it is already realized)
# sigma = horizon_covars[n-1] # this step's covariance
#
# # environment dr padding
# rl_pad = delta[0] * math.sqrt(xDir.T @ sigma @ xDir) # padding along right/left direction
# tb_pad = delta[0] * math.sqrt(yDir.T @ sigma @ yDir) # padding along top/bottom direction
# env_pad[n, 0] = rl_pad # right
# env_pad[n, 1] = rl_pad # left
# env_pad[n, 2] = tb_pad # top
# env_pad[n, 3] = tb_pad # bottom
#
# # obstacle padding
# for ob in range(num_obs): # for every obstacle, do the above
# rl_pad = delta[ob+1] * math.sqrt(xDir.T @ sigma @ xDir) # padding along right/left direction
# tb_pad = delta[ob+1] * math.sqrt(yDir.T @ sigma @ yDir) # padding along top/bottom direction
# obs_pad[n, 4 * ob + 0] = rl_pad # right
# obs_pad[n, 4 * ob + 1] = rl_pad # left
# obs_pad[n, 4 * ob + 2] = tb_pad # top
# obs_pad[n, 4 * ob + 3] = tb_pad # bottom
#
# return env_pad, obs_pad
###############################################################################
###############################################################################
def load_pickle_file(filename):
'''
loads pickle file containing pathNodesList
'''
with open(filename, 'rb') as f:
pathNodesList = pickle.load(f)
return pathNodesList
def get_full_opt_traj_and_ctrls(pathNodesList):
'''
Extract the full state and control sequence from pathNodesList
'''
tree_node_inputs = [] # full optimal trajectory inputs
tree_node_states = [] # full optimal trajectory states
opt_traj_nodes = [] # only has the optimal trajectory using the sampled points
found_node_in_goal = False
if pathNodesList is not None:
print("Sampled paths exist")
least_cost_node = []
found_first_node = False
# find a node in the goal region
for node in pathNodesList:
x = node.means[-1, 0, :][0]
y = node.means[-1, 1, :][0]
xmin_goal = GOALAREA[0]
xmax_goal = GOALAREA[1]
ymin_goal = GOALAREA[2]
ymax_goal = GOALAREA[3]
if (x > xmin_goal) and (x < xmax_goal) and (y > ymin_goal) and (y < ymax_goal):
found_node_in_goal = True
if not found_first_node:
least_cost_node = node
found_first_node = True
elif node.cost < least_cost_node.cost:
least_cost_node = copy.copy(node)
goal_node = least_cost_node
# if a node in the goal region is not found, return
if not found_node_in_goal:
print("No node in goal region found")
return
else:
print('Found path with cost: ', goal_node.cost)
# if a node in the goal region is found, construct the optimal trajectory
traj = []
ctrl_inputs = []
num_traj_states = len(goal_node.means)
node_pt = [goal_node.means[-1, 0, :][0], goal_node.means[-1, 1, :][0], goal_node.means[-1, 2, :][0]]
for i in range(num_traj_states-1):
pt = [goal_node.means[i, 0, :][0], goal_node.means[i, 1, :][0], goal_node.means[i, 2, :][0]]
traj.append(pt)
ctrl = [goal_node.inputCommands[i, 0], goal_node.inputCommands[i, 1]]
ctrl_inputs.append(ctrl)
opt_traj_nodes = [node_pt] + opt_traj_nodes
tree_node_states = traj + tree_node_states #+ [node_pt]
tree_node_inputs = ctrl_inputs + tree_node_inputs
# find index of parent
idx_of_parent_node = goal_node.parent
while idx_of_parent_node != None: # if parent found
parent_node = pathNodesList[idx_of_parent_node] # get parent node
# add parent node info to data
traj = []
ctrl_inputs = []
node_pt = [parent_node.means[-1, 0, :][0], parent_node.means[-1, 1, :][0], parent_node.means[-1, 2, :][0]]
num_traj_states = len(parent_node.means)
for i in range(num_traj_states-2):
pt = [parent_node.means[i, 0, :][0], parent_node.means[i, 1, :][0], parent_node.means[i, 2, :][0]]
traj.append(pt)
ctrl = [parent_node.inputCommands[i, 0], parent_node.inputCommands[i, 1]]
ctrl_inputs.append(ctrl)
opt_traj_nodes = [node_pt] + opt_traj_nodes
tree_node_states = traj + tree_node_states
tree_node_inputs = ctrl_inputs + tree_node_inputs
# find index of parent
idx_of_parent_node = parent_node.parent
print('Number of steps: ', len(np.array(tree_node_states)))
return [np.array(opt_traj_nodes), np.array(tree_node_states), np.array(tree_node_inputs)]
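# Illustrative usage (commented out; the pickle file name is hypothetical and should be
# whatever the RRT* run saved under SAVEPATH):
#   pathNodesList = load_pickle_file(os.path.join(SAVEPATH, 'NodeListData'))
#   opt_nodes, opt_states, opt_inputs = get_full_opt_traj_and_ctrls(pathNodesList)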
def get_sampled_traj_and_ctrls(pathNodesList):
'''
Extract the nodes and control sequence at each node from pathNodesList
'''
start_state = []
control_inputs = []
state_trajectory = []
for k, node in enumerate(pathNodesList):
point = [node.means[-1, 0, :][0], node.means[-1, 1, :][0], node.means[-1, 2, :][0]]
ctrls = node.inputCommands
if k == 0:
start_state.append(point)
state_trajectory.append(point)
control_inputs.append(ctrls)
else:
state_trajectory.append(point)
control_inputs.append(ctrls)
tree_states, tree_ctrl = reshape_data(state_trajectory, control_inputs, 3) # 3 = num states
return [start_state, tree_states, tree_ctrl]
def reshape_data(state_trajectory, control_inputs, numstates):
'''
Reshapes the data of get_sampled_traj_and_ctrls
'''
    traj = np.array(state_trajectory)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
test_sfg2d
----------------------------------
Tests for `sfg2d` module.
"""
import sys
import unittest
from contextlib import contextmanager
from click.testing import CliRunner
import numpy as np
from datetime import timedelta
from sfg2d import SfgRecord
class TestQuartz(unittest.TestCase):
def setUp(self):
self.data = SfgRecord(
'../sfg2d/data/00_sp_quarz_w650_gcm_e20s_pr3000.dat')
self.result_dict = {
'shape_of_data' : (1, 1, 3, 1600),
'metadata' : {
'central_wl' : 650,
'material' : 'quarz',
'sp_type' : 'sp',
'gain' : -1,
'exposure_time' : timedelta(0, 20),
},
'some_row' : ([0, 0, 1, slice(None, None)],
np.load('../data/00_quarts_row_1.npy')),
'type' : 'sp',
'pixel' : np.arange(1600),
'times' : [timedelta(0)],
'frames' : 1,
'pp_delays' : np.array([0]),
'wavelength' : np.load('../data/00_quartz_wavelength.npy'),
'wavenumber' : np.load('../data/00_quartz_wavenumber.npy'),
}
def tearDown(self):
del self.data
def test_pp_delays_is_numpy_array(self):
assert isinstance(self.data.pp_delays, type(np.zeros(1)))
def test_data_is_numpy_array(self):
assert isinstance(self.data.data, type(np.zeros(1)))
def test_shape_of_data(self):
assert self.data.data.shape == self.result_dict['shape_of_data']
def test_metadata(self):
md = self.data.metadata
for key in self.result_dict['metadata']:
assert self.data.metadata[key] == self.result_dict['metadata'][key]
def test_some_row(self):
ind, data = self.result_dict['some_row']
assert np.all(self.data.data[ind] == data)
def test_data_pixel(self):
assert all(self.data.pixel == self.result_dict['pixel'])
def test_data_times(self):
assert self.data.times == self.result_dict['times']
def test_data_frames(self):
assert self.data.frames == self.result_dict['frames']
def test_data_ppdelays(self):
assert self.data.pp_delays == self.result_dict['pp_delays']
def test_data_wavelength(self):
wl = self.result_dict['wavelength']
        # Must allow for small machine precision differences
small_values = np.abs(wl - self.data.wavelength)
        assert np.any(small_values < 10**(-12))
"""
Class definition for the Branch_and_Bound subdriver.
This pseudo-driver can only be run when plugged into the AMIEGO driver's minlp slot.
This is the branch and bound algorithm that maximizes the constrained
expected improvement function and returns an integer infill point. The
algorithm uses the relaxation techniques proposed by Jones et.al. on their
paper on EGO,1998. This enables the algorithm to use any gradient-based
approach to obtain a global solution. Also, to satisfy the integer
constraints, a new branching scheme has been implemented.
Developed by <NAME>
School of Aeronautics & Astronautics
Purdue University, West Lafayette, IN 47906
July, 2016
Implemented in OpenMDAO, Aug 2016, <NAME>
"""
from collections import OrderedDict
import os
from time import time
import numpy as np
from scipy.special import erf
from pyDOE2 import lhs
from openmdao.core.driver import Driver
from openmdao.drivers.genetic_algorithm_driver import GeneticAlgorithm
from openmdao.utils.concurrent import concurrent_eval, concurrent_eval_lb
from openmdao.utils.general_utils import set_pyoptsparse_opt
from amiego.optimize_function import snopt_opt
# check that pyoptsparse is installed
# if it is, try to use SNOPT but fall back to SLSQP
_, OPTIMIZER = set_pyoptsparse_opt('SNOPT')
class Branch_and_Bound(Driver):
"""
Class definition for the Branch_and_Bound driver.
This pseudo-driver can only be run when plugged into the AMIEGO driver's minlp slot.
This is the branch and bound algorithm that maximizes the constrained
expected improvement function and returns an integer infill point. The
algorithm uses the relaxation techniques proposed by Jones et.al. on
their paper on EGO,1998. This enables the algorithm to use any
gradient-based approach to obtain a global solution. Also, to satisfy the
integer constraints, a new branching scheme has been implemented.
Attributes
----------
dvs : list
Cache of integer design variable names.
eflag_MINLPBB : bool
This is set to True when we find a local minimum.
fopt : ndarray
Objective value at optimal design.
obj_surrogate : <AMIEGOKrigingSurrogate>
Surrogate model of the objective as a function of the integer design vars.
xI_lb : ndarray
Lower bound of the integer design variables.
xI_ub : ndarray
Upper bound of the integer design variables.
xopt : ndarray
Optimal design.
_randomstate : np.random.RandomState, int
Random state (or seed-number) which controls the seed and random draws.
"""
def __init__(self):
"""
Initialize the Branch_and_Bound driver.
"""
super(Branch_and_Bound, self).__init__()
# What we support
self.supports['inequality_constraints'] = True
self.supports['equality_constraints'] = False
self.supports['multiple_objectives'] = False
self.supports['two_sided_constraints'] = False
self.supports['active_set'] = False
self.supports['linear_constraints'] = False
self.supports['gradients'] = False
self.supports['integer_design_vars'] = True
self.dvs = []
self.i_idx_cache = {}
self.obj_surrogate = None
# We will set this to True if we have found a minimum.
self.eflag_MINLPBB = False
# Amiego retrieves optimal design and optimum upon completion.
self.xopt = None
self.fopt = None
# Experimental Options. TODO: could go into Options
self.load_balance = True
self.aggressive_splitting = False
# Random state can be set for predictability during testing
if 'SimpleGADriver_seed' in os.environ:
self._randomstate = int(os.environ['SimpleGADriver_seed'])
else:
self._randomstate = None
def _declare_options(self):
"""
Declare options before kwargs are processed in the init method.
"""
opt = self.options
opt.declare('active_tol', 1.0e-6, lower=0.0,
desc='Tolerance (2-norm) for triggering active set '
'reduction.')
opt.declare('atol', 0.1, lower=0.0,
desc='Absolute tolerance (inf-norm) of upper minus '
'lower bound for termination.')
opt.declare('con_tol', 1.0e-6, lower=0.0,
desc='Constraint thickness.')
opt.declare('disp', True,
desc='Set to False to prevent printing of iteration '
'messages.')
opt.declare('ftol', 1.0e-4, lower=0.0,
desc='Absolute tolerance for sub-optimizations.')
opt.declare('maxiter', 100000, lower=0.0,
desc='Maximum number of iterations.')
opt.declare('trace_iter', 5,
desc='Number of generations to trace back for ubd.')
opt.declare('trace_iter_max', 10,
desc='Maximum number of generations to trace back for ubd.')
opt.declare('maxiter_ubd', 10000,
desc='Number of generations ubd stays the same')
opt.declare('local_search', 0, values=[0, 1, 2],
desc='Local search type. Set to 0 for GA, 1 for LHS, 2 for LHS + SQP '
'(Default = 0)')
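    # Illustrative configuration when plugging this subdriver into AMIEGO (the driver
    # attribute name below is hypothetical; option values are examples only):
    #   minlp = Branch_and_Bound()
    #   minlp.options['local_search'] = 2   # LHS + SQP local search
    #   minlp.options['atol'] = 0.05
    #   amiego_driver.minlp = minlp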
def run(self):
"""
Execute the Branch_and_Bound method.
Returns
-------
boolean
Failure flag; True if failed to converge, False if successful.
"""
problem = self._problem()
obj_surrogate = self.obj_surrogate
atol = self.options['atol']
disp = self.options['disp']
maxiter = self.options['maxiter']
maxiter_ubd = self.options['maxiter_ubd']
self.iter_count = 1
self.eflag_MINLPBB = False
obj_surrogate.p = 2
obj_surrogate.y_best = np.min(obj_surrogate.Y)
# ----------------------------------------------------------------------
# Step 1: Initialize
# ----------------------------------------------------------------------
num_des = len(self.xI_lb)
node_num = 0
itercount = 0
ubd_count = 0
# Initial B&B bounds are infinite.
UBD = np.inf
LBD = -np.inf
LBD_prev = -np.inf
# Copy our desvars' user specified upper and lower bounds
xL_iter = self.xI_lb.copy()
xU_iter = self.xI_ub.copy()
num_init_sam = num_des
init_sam = lhs(num_des, samples=num_init_sam, criterion='center',
random_state=self._randomstate)
for ii in range(num_init_sam):
xopt_ii = np.round(xL_iter + init_sam[ii] * (xU_iter - xL_iter)).reshape(num_des)
fopt_ii = self.objective_callback(xopt_ii)
if fopt_ii < UBD:
self.eflag_MINLPBB = True
UBD = fopt_ii
fopt = fopt_ii
xopt = xopt_ii
# This stuff is just for printing.
par_node = 0
# Active set fields: (Updated!)
# Aset = [[NodeNumber, lb, ub, LBD, UBD, nodeHist], [], ..]
active_set = []
nodeHist = NodeHist()
UBD_term = UBD
comm = problem.model.comm
if self.load_balance:
# Master/Worker config
n_proc = comm.size - 1
if n_proc < 2:
comm = None
n_proc = 1
else:
# Each proc has its own jobs
n_proc = comm.size
if n_proc < 2:
comm = None
# Initial node. This is the data structure we pass into the concurrent evaluator.
if self.aggressive_splitting:
# Initial number of nodes based on number of available procs
args = init_nodes(n_proc, xL_iter, xU_iter, par_node, LBD_prev, LBD,
UBD, fopt, xopt, nodeHist, ubd_count)
else:
# Start with 1 node.
args = [(xL_iter, xU_iter, par_node, LBD_prev, LBD, UBD, fopt,
xopt, node_num, nodeHist, ubd_count)]
# Main Loop
terminate = False
while not terminate:
# Branch and Bound evaluation of a set of nodes, starting with the initial one.
# When executed in serial, only a single node is evaluted.
cases = [(arg, None) for arg in args]
if self.load_balance:
results = concurrent_eval_lb(self.evaluate_node, cases,
comm, broadcast=True)
else:
results = concurrent_eval(self.evaluate_node, cases,
comm, allgather=True)
itercount += len(args)
if UBD < -1.0e-3:
ubd_count += len(args)
# Put all the new nodes into active set.
for result in results:
# Print the traceback if it fails
if not result[0]:
print(result[1])
new_UBD, new_fopt, new_xopt, new_nodes = result[0]
# Save stats for the best case.
if new_UBD < UBD:
UBD = new_UBD
fopt = new_fopt
xopt = new_xopt
# Look for substantial change in UBD to reset the counter
if abs(new_UBD - UBD_term) > 0.001:
ubd_count = 1
UBD_term = new_UBD
# TODO: Should we extend the active set with all the cases we
# ran, or just the best one. All for now.
active_set.extend(new_nodes)
node_num += len(new_nodes)
# Update active set: Removes all nodes worse than the best new node.
if len(active_set) >= 1:
active_set = update_active_set(active_set, UBD)
# Termination
if len(active_set) >= 1:
# Update LBD and select the current rectangle
args = []
# Grab the best nodes, as many as we have processors.
n_nodes = np.min((n_proc, len(active_set)))
for j in range(n_nodes):
# a. Set LBD as lowest in the active set
all_LBD = [item[3] for item in active_set]
LBD = min(all_LBD)
ind_LBD = all_LBD.index(LBD)
LBD_prev = LBD
# b. Select the lowest LBD node as the current node
par_node, xL_iter, xU_iter, _, _, nodeHist = active_set[ind_LBD]
self.iter_count += 1
args.append((xL_iter, xU_iter, par_node, LBD_prev, LBD, UBD, fopt,
xopt, node_num, nodeHist, ubd_count))
# c. Delete the selected node from the Active set of nodes
del active_set[ind_LBD]
# --------------------------------------------------------------
# Step 7: Check for convergence
# --------------------------------------------------------------
diff = np.abs(UBD - LBD)
if diff < atol:
terminate = True
if disp:
print("=" * 85)
                        print("Terminating! Absolute difference between the upper " +
                              "and lower bound is below the tolerance limit.")
else:
terminate = True
if disp:
print("=" * 85)
print("Terminating! No new node to explore.")
print("Max Node", node_num)
if itercount > maxiter or ubd_count > maxiter_ubd:
terminate = True
# Finalize by putting optimal value back into openMDAO
self.xopt = xopt
self.fopt = fopt
return False
def evaluate_node(self, xL_iter, xU_iter, par_node, LBD_prev, LBD, UBD, fopt, xopt, node_num,
nodeHist, ubd_count):
"""
Perform Branch and Bound step on a single node.
This function encapsulates the portion of the code that runs in parallel.
Parameters
----------
xL_iter : ndarray
Lower bound of design variables.
xU_iter : ndarray
Upper bound of design variables.
par_node : int
Index of parent node for this child node.
LBD_prev : float
Previous iteration value of LBD.
LBD : float
Current value of lower bound estimate.
UBD : float
Current value of upper bound esimate.
fopt : float
Current best objective value
xopt : ndarray
Current best design values.
node_num : int
Index of this current node
nodeHist : <NodeHist>
Data structure containing information about this node.
ubd_count : int
Counter for number of generations.
Returns
-------
float
New upper bound estimate.
float
New best objective value.
        ndarray
New design variables.
list
List of parameters for new node.
"""
if OPTIMIZER == 'SNOPT':
options = {'Major optimality tolerance': 1.0e-5}
elif OPTIMIZER == 'SLSQP':
options = {'ACC': 1.0e-5}
elif OPTIMIZER == 'CONMIN':
options = {'DABFUN': 1.0e-5}
active_tol = self.options['active_tol']
local_search = self.options['local_search']
disp = self.options['disp']
trace_iter = self.options['trace_iter']
trace_iter_max = self.options['trace_iter_max']
obj_surrogate = self.obj_surrogate
num_des = len(self.xI_lb)
new_nodes = []
# Keep this to 0.49 to always round towards bottom-left
xloc_iter = np.round(xL_iter + 0.49 * (xU_iter - xL_iter))
floc_iter = self.objective_callback(xloc_iter)
# Genetic Algorithm
if local_search == 0:
# --------------------------------------------------------------
# Step 2: Obtain a local solution using a GA.
# --------------------------------------------------------------
ga = GeneticAlgorithm(self.obj_for_GA)
bits = np.ceil(np.log2(xU_iter - xL_iter + 1)).astype(int)
bits[bits <= 0] = 1
vub_vir = (2**bits - 1) + xL_iter
# More important nodes get a higher population size and number of generations.
if nodeHist.priority_flag == 1:
max_gen = 300
mfac = 6
else:
max_gen = 200
mfac = 4
L = np.sum(bits)
pop_size = mfac * L
t0 = time()
self.xU_iter = xU_iter
xloc_iter_new, floc_iter_new, nfit = \
ga.execute_ga(xL_iter, xL_iter, vub_vir, vub_vir, bits, pop_size, max_gen,
self._randomstate)
t_GA = time() - t0
if floc_iter_new < floc_iter:
floc_iter = floc_iter_new
xloc_iter = xloc_iter_new
# LHS Sampling or SNOPT
else:
# TODO Future research on sampling here
num_samples = np.round(np.max([10, np.min([50, num_des / nodeHist.priority_flag])]))
init_sam_node = lhs(num_des, samples=num_samples, criterion='center',
random_state=self._randomstate)
t_GA = 0.
for ii in range(int(num_samples)):
xloc_iter_new = np.round(xL_iter + init_sam_node[ii] * (xU_iter - xL_iter))
floc_iter_new = self.objective_callback(xloc_iter_new)
# SNOPT
if local_search == 2:
# TODO: did we lose a tol check here?
# active_tol: #Perform at non-flat starting point
if np.abs(floc_iter_new) > -np.inf:
# --------------------------------------------------------------
# Step 2: Obtain a local solution
# --------------------------------------------------------------
# Using a gradient-based method here.
# TODO: Make it more pluggable.
def _objcall(dv_dict):
"""
Compute objective for SNOPT.
"""
fail = 0
x = dv_dict['x']
# Objective
func_dict = {}
func_dict['obj'] = self.objective_callback(x)[0]
return func_dict, fail
xC_iter = xloc_iter_new
opt_x, opt_f, succ_flag, msg = snopt_opt(_objcall, xC_iter, xL_iter,
xU_iter, title='LocalSearch',
options=options)
xloc_iter_new = np.round(np.asarray(opt_x).flatten())
floc_iter_new = self.objective_callback(xloc_iter_new)
if floc_iter_new < floc_iter:
floc_iter = floc_iter_new
xloc_iter = xloc_iter_new
# Do some prechecks before commencing for partitioning.
ubdloc_best = nodeHist.ubdloc_best
if nodeHist.ubdloc_best > floc_iter + 1.0e-6:
ubd_track = np.concatenate((nodeHist.ubd_track, np.array([0])), axis=0)
ubdloc_best = floc_iter
else:
ubd_track = np.concatenate((nodeHist.ubd_track, np.array([1])), axis=0)
# diff_LBD = abs(LBD_prev - LBD_NegConEI)
if len(ubd_track) >= trace_iter_max or \
(len(ubd_track) >= trace_iter and np.sum(ubd_track[-trace_iter:]) == 0):
            # TODO : Did we lose this? -> #and UBD<=-1.0e-3:
child_info = np.array([[par_node, np.inf, floc_iter], [par_node, np.inf, floc_iter]])
# Fathomed due to no change in UBD_loc for 'trace_iter' generations
dis_flag = ['Y', 'Y']
else:
# --------------------------------------------------------------------------
# Step 3: Partition the current rectangle as per the new branching scheme.
# --------------------------------------------------------------------------
child_info = np.zeros([2, 3])
dis_flag = [' ', ' ']
            # Choose the branching dimension: the one with the largest range.
l_iter = (xU_iter - xL_iter).argmax()
if xloc_iter[l_iter] < xU_iter[l_iter]:
delta = 0.5 # 0<delta<1
else:
delta = -0.5 # -1<delta<0
for ii in range(2):
lb = xL_iter.copy()
ub = xU_iter.copy()
if ii == 0:
ub[l_iter] = np.floor(xloc_iter[l_iter] + delta)
elif ii == 1:
lb[l_iter] = np.ceil(xloc_iter[l_iter] + delta)
if np.linalg.norm(ub - lb) > active_tol: # Not a point
# --------------------------------------------------------------
# Step 4: Obtain an LBD of f in the newly created node
# --------------------------------------------------------------
S4_fail = False
x_comL, x_comU, Ain_hat, bin_hat = gen_coeff_bound(lb, ub, obj_surrogate)
sU, eflag_sU = self.maximize_S(x_comL, x_comU, Ain_hat, bin_hat)
if eflag_sU:
yL, eflag_yL = self.minimize_y(x_comL, x_comU, Ain_hat, bin_hat)
if eflag_yL:
NegEI = calc_conEI_norm([], obj_surrogate, SSqr=sU, y_hat=yL)
else:
S4_fail = True
else:
S4_fail = True
# Convex approximation failed!
if S4_fail:
LBD_NegConEI = LBD_prev
dis_flag[ii] = 'F'
else:
LBD_NegConEI = max(NegEI, LBD_prev)
# --------------------------------------------------------------
# Step 5: Store any new node inside the active set that has LBD
# lower than the UBD.
# --------------------------------------------------------------
priority_flag = 0
if LBD_NegConEI < np.inf and LBD_prev > -np.inf:
if np.abs((LBD_prev - LBD_NegConEI) / LBD_prev) < 0.005:
priority_flag = 1
nodeHist_new = NodeHist()
nodeHist_new.ubd_track = ubd_track
nodeHist_new.ubdloc_best = ubdloc_best
nodeHist_new.priority_flag = priority_flag
if LBD_NegConEI < UBD - 1.0e-6:
node_num += 1
new_node = [node_num, lb, ub, LBD_NegConEI, floc_iter, nodeHist_new]
new_nodes.append(new_node)
child_info[ii] = np.array([node_num, LBD_NegConEI, floc_iter])
else:
child_info[ii] = np.array([par_node, LBD_NegConEI, floc_iter])
# Flag for child created but not added to active set. (fathomed)
dis_flag[ii] = 'X'
else:
if ii == 1:
xloc_iter = ub
floc_iter = self.objective_callback(xloc_iter)
child_info[ii] = np.array([par_node, np.inf, floc_iter])
# Flag for No child created
dis_flag[ii] = 'x'
# Update the active set whenever better solution found
if floc_iter < UBD:
UBD = floc_iter
fopt = floc_iter
xopt = xloc_iter.reshape(num_des)
if disp:
if (self.iter_count - 1) % 25 == 0:
# Display output in a tabular format
print("=" * 95)
print("%19s%12s%14s%21s" % ("Global", "Parent", "Child1", "Child2"))
template = "%s%8s%10s%8s%9s%11s%10s%11s%11s%11s"
print(template % ("Iter", "LBD", "UBD", "Node", "Node1", "LBD1",
"Node2", "LBD2", "Flocal", "GA time"))
print("=" * 95)
template = "%3d%10.2f%10.2f%6d%8d%1s%13.2f%8d%1s%13.2f%9.2f%9.2f"
print(template % (self.iter_count, LBD, UBD, par_node, child_info[0, 0],
dis_flag[0], child_info[0, 1], child_info[1, 0],
dis_flag[1], child_info[1, 1], child_info[1, 2], t_GA))
return UBD, fopt, xopt, new_nodes
def objective_callback(self, xI):
"""
        Evaluate main problem objective at the requested point.
Objective is the expected improvement function with modifications to make it concave.
Parameters
----------
xI : ndarray
Value of design variables.
Returns
-------
float
Objective value
"""
obj_surrogate = self.obj_surrogate
# Normalized as per the convention in openmdao_Alpha:Kriging.
xval = (xI - obj_surrogate.X_mean) / obj_surrogate.X_std
NegEI = calc_conEI_norm(xval, obj_surrogate)
# print(xI, f)
return NegEI
def maximize_S(self, x_comL, x_comU, Ain_hat, bin_hat):
"""
Maximize the SigmaSqr Error.
This method finds an upper bound to the SigmaSqr Error, and scales up 'r' to provide a
smooth design space for gradient-based approach.
Parameters
----------
x_comL : ndarray
Full lower bounds vector
x_comU : ndarray
Full upper bounds vector.
Ain_hat : ndarray
Matrix Ain_hat for linear model of constraints.
bin_hat : ndarray
Vector bin_hat for linear model of constraints.
Returns
-------
float
Maximized upper bound for sigma squared error.
bool
Success flag True if successful.
"""
if OPTIMIZER == 'SNOPT':
options = {'Major optimality tolerance': 1.0e-5}
elif OPTIMIZER == 'SLSQP':
options = {'ACC': 1.0e-5}
elif OPTIMIZER == 'CONMIN':
options = {'DABFUN': 1.0e-5}
surrogate = self.obj_surrogate
R_inv = surrogate.R_inv
SigmaSqr = surrogate.SigmaSqr
X = surrogate.X
n, k = X.shape
one = np.ones([n, 1])
xhat_comL = x_comL.copy()
xhat_comU = x_comU.copy()
xhat_comL[k:] = 0.0
xhat_comU[k:] = 1.0
# Calculate the convexity factor alpha
rL = x_comL[k:]
rU = x_comU[k:]
dr_drhat = np.diag(rU[:, 0] - rL[:, 0])
T2_num = np.dot(np.dot(R_inv, one), np.dot(R_inv, one).T)
T2_den = np.dot(one.T, np.dot(R_inv, one))
d2S_dr2 = 2.0 * SigmaSqr * (R_inv - (T2_num / T2_den))
H_hat = np.dot(np.dot(dr_drhat, d2S_dr2), dr_drhat)
# Use Gershgorin's circle theorem to find a lower bound of the
# min eigen value of the hessian
eig_lb = np.zeros([n, 1])
for ii in range(n):
dia_ele = H_hat[ii, ii]
sum_rw = 0.0
sum_col = 0.0
for jj in range(n):
if ii != jj:
sum_rw += np.abs(H_hat[ii, jj])
sum_col += np.abs(H_hat[jj, ii])
eig_lb[ii] = dia_ele - np.min(np.array([sum_rw, sum_col]))
eig_min = np.min(eig_lb)
alpha = np.max(np.array([0.0, -0.5 * eig_min]))
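        # The term alpha*(rhat - rhat_L)^T (rhat - rhat_U) added in calc_SSqr_convex
        # contributes 2*alpha*I to the Hessian, so alpha = max(0, -0.5*eig_min) is just
        # enough to keep the relaxed subproblem convex.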
# Maximize S
x0 = 0.5 * (xhat_comL + xhat_comU)
# Just storing stuff here to pull it out in the callback.
surrogate._alpha = alpha
self.x_comL = x_comL
self.x_comU = x_comU
self.xhat_comL = xhat_comL
self.xhat_comU = xhat_comU
self.Ain_hat = Ain_hat
self.bin_hat = bin_hat
opt_x, opt_f, succ_flag, msg = snopt_opt(self.calc_SSqr_convex, x0, xhat_comL,
xhat_comU, ncon=len(bin_hat),
title='Maximize_S',
options=options,
jac=Ain_hat,
sens=self.calc_SSqr_convex_grad)
Neg_sU = opt_f
# if not succ_flag:
# eflag_sU = False
# else:
# eflag_sU = True
eflag_sU = True
tol = self.options['con_tol']
for ii in range(2 * n):
if np.dot(Ain_hat[ii, :], opt_x) > (bin_hat[ii] + tol):
eflag_sU = False
break
sU = - Neg_sU
return sU, eflag_sU
def calc_SSqr_convex(self, dv_dict):
"""
Callback function for minimization of mean squared error.
Parameters
----------
dv_dict : dict
Dictionary of design variable values.
Returns
-------
func_dict : dict
Dictionary of all functional variables evaluated at design point.
fail : int
0 for successful function evaluation
1 for unsuccessful function evaluation
"""
fail = 0
x_com = dv_dict['x']
surrogate = self.obj_surrogate
R_inv = surrogate.R_inv
SigmaSqr = surrogate.SigmaSqr
alpha = surrogate._alpha
n, k = surrogate.X.shape
one = np.ones([n, 1])
rL = self.x_comL[k:]
rU = self.x_comU[k:]
rhat = x_com[k:].reshape(n, 1)
r = rL + rhat * (rU - rL)
rhat_L = self.xhat_comL[k:]
rhat_U = self.xhat_comU[k:]
term0 = np.dot(R_inv, r)
term1 = -SigmaSqr * (1.0 - r.T.dot(term0) +
((1.0 - one.T.dot(term0))**2 / (one.T.dot(np.dot(R_inv, one)))))
term2 = alpha * (rhat - rhat_L).T.dot(rhat - rhat_U)
S2 = term1 + term2
# Objectives
func_dict = {}
func_dict['obj'] = S2[0, 0]
# Constraints
Ain_hat = self.Ain_hat
bin_hat = self.bin_hat
func_dict['con'] = np.dot(Ain_hat, x_com) - bin_hat
# print('x', dv_dict)
# print('obj', func_dict['obj'])
return func_dict, fail
def calc_SSqr_convex_grad(self, dv_dict, func_dict):
"""
Callback function for gradient of mean squared error.
Parameters
----------
dv_dict : dict
Dictionary of design variable values.
func_dict : dict
Dictionary of all functional variables evaluated at design point.
Returns
-------
sens_dict : dict
Dictionary of dictionaries for gradient of each dv/func pair
fail : int
0 for successful function evaluation
1 for unsuccessful function evaluation
"""
fail = 0
x_com = dv_dict['x']
surrogate = self.obj_surrogate
X = surrogate.X
R_inv = surrogate.R_inv
SigmaSqr = surrogate.SigmaSqr
alpha = surrogate._alpha
n, k = X.shape
nn = len(x_com)
one = np.ones([n, 1])
rL = self.x_comL[k:]
rU = self.x_comU[k:]
rhat = x_com[k:].reshape(n, 1)
r = rL + rhat * (rU - rL)
rhat_L = self.xhat_comL[k:]
rhat_U = self.xhat_comU[k:]
dr_drhat = np.diag((rU - rL).flat)
term0 = np.dot(R_inv, r)
term1 = ((1.0 - one.T.dot(term0)) / (one.T.dot(np.dot(R_inv, one)))) * np.dot(R_inv, one)
term = 2.0 * SigmaSqr * (term0 + term1)
dterm1 = np.dot(dr_drhat, term)
dterm2 = alpha * (2.0 * rhat - rhat_L - rhat_U)
dobj_dr = (dterm1 + dterm2).T
# Objectives
sens_dict = OrderedDict()
sens_dict['obj'] = OrderedDict()
sens_dict['obj']['x'] = np.zeros((1, nn))
sens_dict['obj']['x'][:, k:] = dobj_dr
# Constraints
Ain_hat = self.Ain_hat
sens_dict['con'] = OrderedDict()
sens_dict['con']['x'] = Ain_hat
# print('obj deriv', sens_dict['obj']['x'] )
# print('con deriv', sens_dict['con']['x'])
return sens_dict, fail
def minimize_y(self, x_comL, x_comU, Ain_hat, bin_hat):
"""
Minimize the lower bound.
Parameters
----------
x_comL : ndarray
Full lower bounds vector
x_comU : ndarray
Full upper bounds vector.
Ain_hat : ndarray
Matrix Ain_hat for linear model of constraints.
bin_hat : ndarray
Vector bin_hat for linear model of constraints.
Returns
-------
float
            Minimized lower bound of the objective y_hat.
bool
Success flag True if successful.
"""
if OPTIMIZER == 'SNOPT':
options = {'Major optimality tolerance': 1.0e-8}
elif OPTIMIZER == 'SLSQP':
options = {'ACC': 1.0e-8}
elif OPTIMIZER == 'CONMIN':
options = {'DABFUN': 1.0e-8}
# 1- Formulates y_hat as LP (weaker bound)
# 2- Uses non-convex relaxation technique (stronger bound) [Future release]
app = 1
surrogate = self.obj_surrogate
X = surrogate.X
n, k = X.shape
xhat_comL = x_comL.copy()
xhat_comU = x_comU.copy()
xhat_comL[k:] = 0.0
xhat_comU[k:] = 1.0
if app == 1:
x0 = 0.5 * (xhat_comL + xhat_comU)
# Just storing stuff here to pull it out in the callback.
self.x_comL = x_comL
self.x_comU = x_comU
self.Ain_hat = Ain_hat
self.bin_hat = bin_hat
opt_x, opt_f, succ_flag, msg = snopt_opt(self.calc_y_hat_convex, x0, xhat_comL,
xhat_comU, ncon=len(bin_hat),
title='minimize_y',
options=options,
jac=Ain_hat,
sens=self.calc_y_hat_convex_grad)
yL = opt_f
# if not succ_flag:
# eflag_yL = False
# else:
# eflag_yL = True
eflag_yL = True
tol = self.options['con_tol']
for ii in range(2 * n):
if np.dot(Ain_hat[ii, :], opt_x) > (bin_hat[ii] + tol):
eflag_yL = False
break
return yL, eflag_yL
def calc_y_hat_convex(self, dv_dict):
"""
Callback function for objective during minimization of y_hat.
Parameters
----------
dv_dict : dict
Dictionary of design variable values.
Returns
-------
func_dict : dict
Dictionary of all functional variables evaluated at design point.
fail : int
0 for successful function evaluation
1 for unsuccessful function evaluation
"""
fail = 0
x_com = dv_dict['x']
surrogate = self.obj_surrogate
X = surrogate.X
c_r = surrogate.c_r
mu = surrogate.mu
n, k = X.shape
rL = self.x_comL[k:]
rU = self.x_comU[k:]
rhat = np.array([x_com[k:]]).reshape(n, 1)
r = rL + rhat * (rU - rL)
y_hat = mu + np.dot(r.T, c_r)
# Objective
func_dict = {}
func_dict['obj'] = y_hat[0, 0]
# Constraints
Ain_hat = self.Ain_hat
bin_hat = self.bin_hat
func_dict['con'] = np.dot(Ain_hat, x_com) - bin_hat
# print('x', dv_dict)
# print('obj', func_dict['obj'])
return func_dict, fail
def calc_y_hat_convex_grad(self, dv_dict, func_dict):
"""
Callback function for gradient during minimization of y_hat.
Parameters
----------
dv_dict : dict
Dictionary of design variable values.
func_dict : dict
Dictionary of all functional variables evaluated at design point.
Returns
-------
sens_dict : dict
Dictionary of dictionaries for gradient of each dv/func pair
fail : int
0 for successful function evaluation
1 for unsuccessful function evaluation
"""
fail = 0
x_com = dv_dict['x']
surrogate = self.obj_surrogate
X = surrogate.X
c_r = surrogate.c_r
n, k = X.shape
nn = len(x_com)
rL = self.x_comL[k:]
rU = self.x_comU[k:]
dobj_dr = c_r * (rU - rL)
# Objectives
sens_dict = OrderedDict()
sens_dict['obj'] = OrderedDict()
sens_dict['obj']['x'] = np.zeros((1, nn))
sens_dict['obj']['x'][:, k:] = dobj_dr.T
# Constraints
Ain_hat = self.Ain_hat
sens_dict['con'] = OrderedDict()
sens_dict['con']['x'] = Ain_hat
# print('obj deriv', sens_dict['obj']['x'] )
# print('con deriv', sens_dict['con']['x'])
return sens_dict, fail
def obj_for_GA(self, x, icase):
"""
        Evaluate main problem objective at the requested point.
Objective is the expected improvement function with modifications to make it concave.
Parameters
----------
x : ndarray
Value of design variables.
icase : int
Case number, used for identification when run in parallel.
Returns
-------
float
Objective value
bool
Success flag, True if successful
int
Case number, used for identification when run in parallel.
"""
surrogate = self.obj_surrogate
xU_iter = self.xU_iter
num_des = len(x)
P = 0.0
rp = 100.0
g = x / xU_iter - 1.0
idx = np.where(g > 0.0)
if len(idx) > 0:
P = np.einsum('i->', g[idx]**2)
xval = (x - surrogate.X_mean) / surrogate.X_std
NegEI = calc_conEI_norm(xval, surrogate)
f = NegEI + rp * P
return f, True, icase
def update_active_set(active_set, ubd):
"""
Update the active set.
Remove variables from the active set data structure if their current upper bound exceeds the
given value.
Parameters
----------
active_set : list of lists of floats
Active set data structure of form [[NodeNumber, lb, ub, LBD, UBD], [], ..]
ubd : float
Maximum for bounds test.
Returns
-------
list of list of floats
New active_set
"""
return [a for a in active_set if a[3] < ubd]
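# Illustrative usage (values are made up): each entry is [NodeNumber, lb, ub, LBD, UBD];
# nodes whose lower bound (index 3) is not below the incumbent upper bound are fathomed.
# active_set = [[1, lb1, ub1, -2.0, 0.5], [2, lb2, ub2, 0.3, 0.5]]
# active_set = update_active_set(active_set, ubd=0.0)  # keeps only the first node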
def gen_coeff_bound(xI_lb, xI_ub, surrogate):
"""
Generate upper and lower bounds for r.
    This function generates the upper and lower bounds of the artificial
    variable r and the coefficients for the linearized under-estimator
    constraints. This version accepts design bounds in the original design
    space and converts them to the normalized design space.
Parameters
----------
xI_lb : ndarray
Lower bound of the integer design variables.
xI_ub : ndarray
Upper bound of the integer design variables.
surrogate : <AMIEGOKrigingSurrogate>
Surrogate model of optimized objective with respect to integer design variables.
Returns
-------
ndarray
Full lower bounds vector
ndarray
Full upper bounds vector.
ndarray
Matrix Ain_hat for linear model of constraints.
ndarray
Vector bin_hat for linear model of constraints.
"""
mean = surrogate.X_mean
std = surrogate.X_std
# Normalized as per Openmdao kriging model
xL_hat = (xI_lb - mean) / std
xU_hat = (xI_ub - mean) / std
rL, rU = interval_analysis(xL_hat, xU_hat, surrogate)
    # Combined design variables for subproblem
num = len(xL_hat) + len(rL)
x_comL = np.append(xL_hat, rL).reshape(num, 1)
x_comU = np.append(xU_hat, rU).reshape(num, 1)
# Coefficients of the linearized constraints of the subproblem
Ain_hat, bin_hat = lin_underestimator(x_comL, x_comU, surrogate)
return x_comL, x_comU, Ain_hat, bin_hat
def interval_analysis(lb_x, ub_x, surrogate):
"""
Predict lower and upper bounds for r.
    The module predicts the lower and upper bounds of the artificial variable 'r' from the bounds
    of the design variable x. The variable r is related to x by the following equation:
r_i = exp(-sum(theta_h*(x_h - x_h_i)^2))
Parameters
----------
lb_x : ndarray
Lower bound of the integer design variables.
ub_x : ndarray
Upper bound of the integer design variables.
surrogate : <AMIEGOKrigingSurrogate>
Surrogate model of optimized objective with respect to integer design variables.
Returns
-------
ndarray
Predicted lower bound for r
ndarray
Predicted upper bound for r
"""
p = surrogate.p
if p % 2 == 0:
X = surrogate.X
thetas = surrogate.thetas
n, k = X.shape
t3L = np.empty([n, k])
t3U = np.empty([n, k])
t1L = lb_x - X
t1U = ub_x - X
fac1 = t1L * t1L
fac2 = t1L * t1U
fac3 = t1U * t1U
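        # For p = 2, the extrema of (x_h - X_ih)^2 over the box lie among the three corner
        # products above; clipping at zero handles intervals that straddle x_h = X_ih.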
for i in range(n):
for h in range(k):
fact = np.array([fac1[i, h], fac2[i, h], fac3[i, h]])
t2L = np.max(np.array([0, np.min(fact)]))
t2U = np.max(np.array([0, np.max(fact)]))
fact = -thetas[h] * np.array([t2L, t2U])
t3L[i, h] = np.min(fact)
t3U[i, h] = np.max(fact)
lb_r = np.exp(np.sum(t3L, axis=1))
ub_r = np.exp(np.sum(t3U, axis=1))
    else:
        print("\nWarning! Value of p should be 2. Cannot perform interval analysis")
        print("\nReturning global bound of the r variable")
        n, k = surrogate.X.shape
        lb_r = np.zeros([n, k])
        ub_r = np.zeros([n, k])
return lb_r, ub_r
def lin_underestimator(lb, ub, surrogate):
"""
Compute the coefficients of the linearized underestimator constraints.
Parameters
----------
lb : ndarray
Lower bound vector.
ub : ndarray
Upper bound vector
surrogate : <AMIEGOKrigingSurrogate>
Surrogate model of optimized objective with respect to integer design variables.
Returns
-------
ndarray
Matrix Ain_hat for linear model of constraints.
ndarray
Vector bin_hat for linear model of constraints.
"""
X = surrogate.X
thetas = surrogate.thetas
p = surrogate.p
n, k = X.shape
lb_x = lb[:k]
ub_x = ub[:k]
lb_r = lb[k:]
ub_r = ub[k:]
a1_hat = np.zeros([n, n])
a3_hat = np.zeros([n, n])
a2 = np.empty([n, k])
a4 = np.empty([n, k])
b2 = np.empty([n, k])
b4 = np.empty([n, k])
b1_hat = np.empty([n, ])
b3_hat = np.empty([n, ])
dist_r = ub_r - lb_r
dist_x = ub_x - lb_x
x_m = 0.5 * (ub_x + lb_x)
r_m = 0.5 * (lb_r + ub_r)
ub_fact = (ub_x - X.T)**p
lb_fact = (lb_x - X.T)**p
fact_p = (x_m - X.T)**p
fact_pm1 = (x_m - X.T)**(p - 1)
for i in range(n):
# T1: Linearize under-estimator of ln[r_i] = a1*r[i] + b1
if ub_r[i] <= lb_r[i]:
a1 = 0.0
else:
a1 = (np.log(ub_r[i]) - np.log(lb_r[i])) / dist_r[i]
        b1 = np.log(ub_r[i]) - a1 * ub_r[i]
import numpy as np
DEFAULT_PRNG = np.random
# def transformBbox(matrix, translate, center, box):
#
# '''
# Note that in SimpleITK transform parameters are applied from output sapce to input space.
# xi = A(Xo - C) + T + C
# where A:linear transform matrix, C:center, T:translate, which implies:
# xo = A^-1(xi - C - T) + C
# '''
# x1,x2,y1,y2,z1,z2 = box
# points = np.array([
# [x1, x2, x1, x2, x1, x2, x1, x2],
# [y1, y1, y2, y2, y1, y1, y2, y2],
# [z1, z1, z1, z1, z2, z2, z2, z2]
# ])
# matrix = np.array(matrix.copy())
# translate = np.array(translate)
# center = np.array(center)
#
# points -= np.expand_dims(translate + center, axis=-1)
# inv_ma = np.linalg.inv(matrix)
# points = inv_ma.dot(points)
# points += np.expand_dims(center, axis=-1)
#
# min_corner = points.min(axis=1)
# max_corner = points.max(axis=1)
# print('min_corner shape',min_corner.shape)
#
# transformed = np.zeros(6)
# transformed[::2] = min_corner[:]
# transformed[1::2] = max_corner[:]
# return transformed
def transformBbox(box, transform):
'''
    Note that in SimpleITK transform parameters are applied from output space to input space.
xi = A(Xo - C) + T + C
where A:linear transform matrix, C:center, T:translate, which implies:
xo = A^-1(xi - C - T) + C
'''
x1, x2, y1, y2, z1, z2 = box
points = np.array([
[x1, x2, x1, x2, x1, x2, x1, x2],
[y1, y1, y2, y2, y1, y1, y2, y2],
[z1, z1, z1, z1, z2, z2, z2, z2]
], dtype=np.float64) # double for TransformPoint
inverse = transform.GetInverse()
for i in range(points.shape[1]):
points[:, i] = np.array(inverse.TransformPoint(points[:, i]))
# matrix = np.array(matrix.copy())
# translate = np.array(translate)
# center = np.array(center)
#
# points -= np.expand_dims(translate + center, axis=-1)
# inv_ma = np.linalg.inv(matrix)
# points = inv_ma.dot(points)
# points += np.expand_dims(center, axis=-1)
min_corner = points.min(axis=1)
max_corner = points.max(axis=1)
# print('min_corner shape',min_corner.shape)
transformed = np.zeros(6)
transformed[::2] = min_corner[:]
transformed[1::2] = max_corner[:]
return transformed
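# Usage sketch (illustrative values; assumes SimpleITK is available as sitk):
# import SimpleITK as sitk
# tf = sitk.Euler3DTransform()
# tf.SetRotation(0.0, 0.0, 0.1)      # small rotation about z
# box = (10, 40, 20, 50, 5, 15)      # x1, x2, y1, y2, z1, z2
# new_box = transformBbox(box, tf)   # axis-aligned box enclosing the transformed corners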
def _randomVector(min, max, prng=DEFAULT_PRNG):
min = np.array(min)
max = np.array(max)
assert min.shape == max.shape
assert len(min.shape) == 1
return prng.uniform(min, max)
def randomTranslation(min, max, prng=DEFAULT_PRNG):
return _randomVector(min, max, prng)
def scaling(factor):
return np.array([
[factor[0], 0, 0],
[0, factor[1], 0],
[0, 0, factor[2]]
])
def randomScaling(min, max, prng=DEFAULT_PRNG):
return scaling(_randomVector(min, max, prng))
def horizontalRotation(angle):
return np.array([
[np.cos(angle), -np.sin(angle), 0],
[np.sin(angle), np.cos(angle), 0],
[0, 0, 1]
])
def randomHorRotation(min, max, prng=DEFAULT_PRNG):
return horizontalRotation(prng.uniform(min, max))
def randomFlip(flip_x_chance, flip_y_chance, prng=DEFAULT_PRNG):
flip_x = prng.uniform(0, 1) < flip_x_chance
flip_y = prng.uniform(0, 1) < flip_y_chance
'''
1 - 2 * flip_x == -1 if flip_x else 1
1 - 2 * flip_y ...
1 - 2 * (flip_x ^ flip_y)) ...
'''
return scaling((1 - 2 * flip_x, 1 - 2 * flip_y, -2 * (flip_x ^ flip_y) + 1))
def randomTransform(
min_scaling=(1, 1, 1),
max_scaling=(1, 1, 1),
min_horizontal_rotation=0,
max_horizontal_rotation=0,
flip_x_chance=0,
flip_y_chance=0,
min_translation=(0, 0, 0),
max_translation=(0, 0, 0),
prng=DEFAULT_PRNG
):
linear = np.linalg.multi_dot([
randomScaling(min_scaling, max_scaling, prng),
randomHorRotation(min_horizontal_rotation, max_horizontal_rotation, prng),
randomFlip(flip_x_chance, flip_y_chance, prng)
])
translation = randomTranslation(min_translation, max_translation)
return linear, translation
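# Usage sketch (illustrative bounds): the (linear, translation) pair can be wrapped into a
# SimpleITK affine transform before being passed to transformBbox above.
# import SimpleITK as sitk
# linear, translation = randomTransform(min_scaling=(0.9, 0.9, 0.9), max_scaling=(1.1, 1.1, 1.1),
#                                       min_translation=(-2, -2, -2), max_translation=(2, 2, 2))
# tf = sitk.AffineTransform(3)
# tf.SetMatrix(linear.flatten().tolist())
# tf.SetTranslation(translation.tolist())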
def randomTransformGenerator(
prng=None,
min_scaling=(1, 1, 1),
max_scaling=(1, 1, 1),
min_horizontal_rotation=0,
max_horizontal_rotation=0,
flip_x_chance=0,
flip_y_chance=0,
min_translation=(0, 0, 0),
max_translation=(0, 0, 0),
):
if prng is None:
        prng = np.random.RandomState()
# -*- coding: utf-8 -*-
"""
Created on Thu Jul 2 16:15:15 2020
@author: YuKaiyu
"""
from keras import models
from keras import layers
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from keras.utils import plot_model
from ann_visualizer.visualize import ann_viz
#======================================== Data loading ========================================
'''
This section predicts the median price of homes in a Boston suburb in the mid-1970s, given data points about the suburb such as the crime rate and the local property-tax rate.
There are only 506 samples, split into 404 training samples and 102 test samples. Each input feature (e.g., crime rate) has a different value range: some features are proportions in 0-1, some range from 1 to 12, and others from 0 to 100.
'''
boston = load_boston()
x = boston.data
print(x.shape)
y = boston.target
print(y.shape)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=33)
print(x_train.shape)
print(x_test.shape)
#======================================== Data loading END ========================================
#======================================== Data normalization ========================================
# A neural network is used, so the data needs to be normalized
train_data = x_train
train_targets = y_train
test_data = x_test
test_targets = y_test
# The test data is standardized with the train data's mean and std; is this reasonable?
mean = train_data.mean(axis = 0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std
# Standardization of the test data must also use only the training data's mean and std
test_data -= mean
test_data /= std
#======================================== Data normalization END ========================================
#======================================== Build the network ========================================
'''
In general, the less training data you have, the worse overfitting becomes, and a smaller network mitigates overfitting. Network body: two hidden layers with 64 units each and ReLU activations; the final layer outputs a scalar through a linear layer with no activation, so it can predict values in any range.
Notes: the loss is MSE (mean squared error), the squared difference between prediction and target, which is the usual loss for regression; we also monitor MAE (mean absolute error), the absolute difference between prediction and target.
'''
def build_model():
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],)))
model.add(layers.Dense(64, activation='relu'))
    # Note there is no activation: this is a linear layer, since we regress a single scalar
model.add(layers.Dense(1))
    # mse: mean squared error
    # mae: mean absolute error
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
model.summary()
plot_model(model, to_file='model.png', show_shapes=True)
ann_viz(model, title="model structure")
return model
#======================================== Build the network END ========================================
#======================================== K-fold cross-validation ========================================
'''
The code uses a numpy function for stacking data: np.concatenate.
The validation score varies considerably from run to run, from 2.6 to 3.2. The average score (3.0) is a more reliable metric than any single score; that is the key idea of K-fold cross-validation.
'''
# Set K to 4: the data is split into 4 folds, so the loop runs 4 times
k_flod = 4
num_val_samples = len(train_data) // k_flod
num_epochs = 100
all_scores = []
for i in range(k_flod):
print('Processing fold #', i)
val_data = train_data[i*num_val_samples : (i+1)*num_val_samples]
val_targets = train_targets[i*num_val_samples : (i+1)*num_val_samples]
    # Assemble the training data from the remaining folds
    partial_train_data = np.concatenate([train_data[:i * num_val_samples], train_data[(i + 1) * num_val_samples:]], axis=0)
from bs4 import BeautifulSoup as bs
from label_navi import calLinkChaotic as clc
from label_navi import calLinkRatio as clr
import scipy as sp
import numpy as np
import sys
import matplotlib.pyplot as plt
fileName = sys.argv[1]
dom = bs(open('../data/en-original/' + fileName))
nodeList = []
for des in dom.descendants:
try:
if des.name != None and des.find_all('a'):
if ~np.isnan(clc(des)) and clc(des) >= 0:
nodeList.append([clc(des), clr(des), len(des.find_all('a')), des.name])
except:
print(des.name)
break
dataList = []
for node in nodeList:
if node[2] > 1:
dataList.append( [node[0], node[1] ])
dataList = np.array(dataList)
import argparse
import matplotlib.pyplot as plt
from agent.agent import Agent
from agent.model_selection import ModelSelector
import numpy as np
from autolab_core import YamlConfig
from carbongym_utils.draw import draw_transforms
from env.block_push_ig_env import GymFrankaBlockPushEnv
from planning.blockpushpolicy import BlockPushPolicy
from planning.transition_models import LearnedTransitionModel, BlockPushSimpleTransitionModel
from planning.operators import *
from planning.planner import Planner, Node
from pillar_state_py import State
def make_block_push_env(two_d = False):
parser = argparse.ArgumentParser()
#parser.add_argument('--cfg', '-c', type=str, default='cfg/franka_block_push_two_d.yaml')
#args = parser.parse_args()
if two_d:
cfg ='cfg/franka_block_push_two_d.yaml'
else:
cfg = 'cfg/franka_block_push.yaml'
cfg = YamlConfig(cfg)
vec_env = GymFrankaBlockPushEnv(cfg)
vec_env.reset()
if not two_d:
vec_env.goto_start(teleport=False)
def custom_draws(scene):
franka = scene.get_asset('franka0')
for env_ptr in scene.env_ptrs:
ee_transform = franka.get_ee_transform(env_ptr, 'franka0')
draw_transforms(scene.gym, scene.viewer, [env_ptr], [ee_transform])
[(vec_env._scene.step(), vec_env.render(custom_draws=custom_draws)) for i in range(5)]
return vec_env, custom_draws
def move_robot_to_start(vec_env, custom_draws):
policy = BlockPushPolicy()
policy.go_to_push_start(vec_env)
[(vec_env._scene.step(), vec_env.render(custom_draws=custom_draws)) for i in range(100)]
policy.go_to_block(vec_env)
[(vec_env._scene.step(), vec_env.render(custom_draws=custom_draws)) for i in range(50)]
return policy
def test_delta_pose():
vec_env, custom_draws = make_block_push_env()
block_goal = vec_env.get_delta_goal(-0.1)
import time
for i in range(100):
vec_env.render(custom_draws=custom_draws)
time.sleep(0.1)
def test_go_to_start():
vec_env, custom_draws = make_block_push_env()
policy = BlockPushPolicy()
[(vec_env._scene.step(), vec_env.render(custom_draws=custom_draws)) for i in range(10)]
block_goal = vec_env.get_delta_goal(-0.02)
move_robot_to_start(vec_env, custom_draws)
def test_short_goal():
"""
    robot can nudge the block by some delta
"""
vec_env, custom_draws = make_block_push_env()
block_goal = vec_env.get_delta_goal(-0.08, visualize=False)
#policy = move_robot_to_start(vec_env, custom_draws)
policy = BlockPushPolicy()
obs = vec_env._compute_obs(None)
start_state = obs["observation"]
_, actions, _ = policy.plan(start_state, block_goal, delta = 0.00005, horizon=40)
for t in range(actions.shape[-1]):
[(vec_env.step(actions[:,t]), vec_env.render(custom_draws=custom_draws)) for i in range(1)]
dists = vec_env.dists_to_goal(block_goal)
tol = 0.01
print(dists, "distances")
if not ((np.abs(dists) < tol).all()):
print("Test failed")
else:
print("test passed")
def test_action_sampling():
vec_env, custom_draws = make_block_push_env()
block_goal = vec_env.get_delta_goal(-0.08, visualize=False)
policy = move_robot_to_start(vec_env, custom_draws)
delta = 0.05
start_states = vec_env.get_states()
actions = policy.sample_actions(start_states, block_goal, delta = delta)
next_states = start_states.copy()
next_states[:,:3] = next_states[:,:3] + actions[:,:3]
#actions go toward goal
dist_to_goal_before = np.linalg.norm(start_states-block_goal)
dist_to_goal_after = np.linalg.norm(next_states-block_goal)
assert(dist_to_goal_after < dist_to_goal_before)
assert(np.allclose(np.linalg.norm(actions[:,:3]), delta))
print("test passed")
def test_model_selection():
model_selector = ModelSelector()
tm = BlockPushSimpleTransitionModel()
lm = LearnedTransitionModel()
model_selector.add(tm)
model_selector.add(lm, model_type="learned")
N=5
good_states = np.random.uniform(low=-0.1, high = 0.1, size=(N,7))
bad_states = np.random.uniform(low=0.1, high = 0.2, size=(N,7))
states = np.vstack([good_states, bad_states])
errors = [0.02,0.01,0,0,0,1,3,1,5,1]
model_selector.add_history(states, errors, tm)
null_action = np.array([0,0,0,1,0,0,0,10])
tol = 0.5
low_error_model = model_selector.select_model(states[0], null_action, tol)
assert(low_error_model == tm)
high_error_model = model_selector.select_model(states[8], null_action, tol)
assert(high_error_model == lm)
#pick one where it should pick the manual one
# and where is should pick the learned one
def test_learned_transition_model():
N = 500
stiffness = 100
random_states = np.random.uniform(low = -0.1, high = 0.1, size=(N,7))
random_actions = np.random.uniform(low = -0.0001, high = 0.0001, size=(N,1))
actions_rest =np.multiply(np.ones((random_states.shape[0], 5)), np.array([1,0,0,0,stiffness]) )
random_actions = np.hstack([np.zeros((N,2)),random_actions, actions_rest])
tm = BlockPushSimpleTransitionModel()
next_states = tm.predict(random_states, random_actions)
lm = LearnedTransitionModel()
lm.train(random_states, random_actions, next_states)
predicted_next_states = lm.predict(random_states, random_actions)[0]
mean_dist = np.mean(np.abs(predicted_next_states[:,:3] - next_states[:,:3]))
print("mean distance", mean_dist)
assert(mean_dist) < 0.01
print("test passed")
def test_learned_transition_model_real_data():
N = 40
states = np.load("data/states.npy")
actions = np.load("data/actions.npy")
next_states = np.load("data/next_states.npy")
lm = LearnedTransitionModel()
mm = BlockPushSimpleTransitionModel()
lm.train(states, actions, next_states)
random_noise = np.random.uniform(low=-0.00001, high=0.00001, size=states.shape )
predicted_next_states = lm.predict(states+random_noise, actions, flatten=False)
max_dist = np.max(np.linalg.norm(predicted_next_states - next_states, axis=0))
plt.plot(predicted_next_states[:,1], label="predicted by GP")
plt.plot(next_states[:,1], label="actual next states")
plt.legend()
plt.show()
assert(max_dist) < 0.03
test_states = np.array([states[3]+[0.0001,0.015]])
test_actions = np.array([[-0.02, 300]])
next_states_pred = lm.predict(test_states, test_actions)
next_states_manual = mm.predict(test_states.flatten(), test_actions.flatten())
distance = np.linalg.norm(next_states_pred-next_states_manual)
    gp_test_distance = np.linalg.norm(next_states_pred - test_states)
import math
from typing import Any
import torch
import torch.nn as nn
from torch.nn import functional as f
import numpy as np
BETA_START = 0.4
BETA_FRAMES = 100000
V_MAX = 10
V_MIN = -10
N_ATOMS = 51
DELTA_Z = (V_MAX - V_MIN) / (N_ATOMS - 1)
class NoisyLinear(nn.Linear):
def __init__(self, in_features, out_features, sigma_init=0.017, bias=True):
super(NoisyLinear, self).__init__(in_features, out_features, bias=bias)
w = torch.full((out_features, in_features), sigma_init)
self.sigma_weight = nn.Parameter(w)
z = torch.zeros(out_features, in_features)
self.register_buffer("epsilon_weight", z)
if bias:
w = torch.full((out_features,), sigma_init)
self.sigma_bias = nn.Parameter(w)
z = torch.zeros(out_features)
self.register_buffer("epsilon_bias", z)
self.reset_parameters()
def reset_parameters(self):
std = math.sqrt(3 / self.in_features)
self.weight.data.uniform_(-std, std)
self.bias.data.uniform_(-std, std)
def forward(self, x):
self.epsilon_weight.normal_()
bias = self.bias
if bias is not None:
self.epsilon_bias.normal_()
bias = bias + self.sigma_bias * self.epsilon_bias.data
v = self.sigma_weight * self.epsilon_weight.data + self.weight
return f.linear(x, v, bias)
def _forward_unimplemented(self, *input_forward: Any) -> None:
pass
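# Quick usage sketch (illustrative shapes): NoisyLinear behaves like nn.Linear but perturbs
# its weights and bias with freshly sampled Gaussian noise on every forward pass.
# layer = NoisyLinear(4, 2)
# out_a = layer(torch.randn(3, 4))
# out_b = layer(torch.randn(3, 4))  # uses a different noise sample than out_a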
class NoisyDQN(nn.Module):
def __init__(self, input_shape, num_actions):
super(NoisyDQN, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(input_shape[0], 32, kernel_size=8, stride=4),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1),
nn.ReLU()
)
conv_out_size = self._get_conv_out(input_shape)
self.noisy_layers = [
NoisyLinear(conv_out_size, 512),
NoisyLinear(512, num_actions)
]
self.fc = nn.Sequential(
self.noisy_layers[0],
nn.ReLU(),
self.noisy_layers[1]
)
def _get_conv_out(self, shape):
o = self.conv(torch.zeros(1, *shape))
return int(np.prod(o.size()))
def forward(self, x):
fx = x.float() / 256
conv_out = self.conv(fx).view(fx.size()[0], -1)
return self.fc(conv_out)
def noisy_layers_sigma_snr(self):
return [
((layer.weight ** 2).mean().sqrt() / (layer.sigma_weight ** 2).mean().sqrt()).item()
for layer in self.noisy_layers
]
def _forward_unimplemented(self, *input_forward: Any) -> None:
pass
class PrioritizedReplayBuffer:
def __init__(self, exp_source, buf_size, prob_alpha=0.6):
self.exp_source_iter = iter(exp_source)
self.prob_alpha = prob_alpha
self.capacity = buf_size
self.pos = 0
self.buffer = []
self.priorities = np.zeros((buf_size,), dtype=np.float32)
self.beta = BETA_START
def update_beta(self, idx):
v = BETA_START + idx * (1.0 - BETA_START) / BETA_FRAMES
self.beta = min(1.0, v)
return self.beta
def __len__(self):
return len(self.buffer)
def populate(self, count):
max_priority = self.priorities.max(initial=1.0) if self.buffer else 1.0
for _ in range(count):
sample = next(self.exp_source_iter)
if len(self.buffer) < self.capacity:
self.buffer.append(sample)
else:
self.buffer[self.pos] = sample
self.priorities[self.pos] = max_priority
self.pos = (self.pos + 1) % self.capacity
def sample(self, batch_size):
if len(self.buffer) == self.capacity:
priorities = self.priorities
else:
priorities = self.priorities[:self.pos]
probabilities = priorities ** self.prob_alpha
probabilities /= probabilities.sum()
indices = np.random.choice(len(self.buffer), batch_size, p=probabilities)
samples = [self.buffer[idx] for idx in indices]
total = len(self.buffer)
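        # Importance-sampling weights: w_i = (N * P(i))^(-beta), normalized by the largest
        # weight so that gradient updates are only ever scaled down.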
weights = (total * probabilities[indices]) ** (-self.beta)
weights /= weights.max()
return samples, indices, np.array(weights, dtype=np.float32)
def update_priorities(self, batch_indices, batch_priorities):
for idx, priority in zip(batch_indices, batch_priorities):
self.priorities[idx] = priority
class DuelingDQN(nn.Module):
def __init__(self, input_shape, num_actions):
super(DuelingDQN, self).__init__()
self.conv = nn.Sequential(
nn.Conv2d(input_shape[0], 32,
kernel_size=8, stride=4),
nn.ReLU(),
nn.Conv2d(32, 64, kernel_size=4, stride=2),
nn.ReLU(),
nn.Conv2d(64, 64, kernel_size=3, stride=1),
nn.ReLU()
)
conv_out_size = self._get_conv_out(input_shape)
self.fc_adv = nn.Sequential(
nn.Linear(conv_out_size, 256),
nn.ReLU(),
nn.Linear(256, num_actions)
)
self.fc_val = nn.Sequential(
nn.Linear(conv_out_size, 256),
nn.ReLU(),
nn.Linear(256, 1)
)
def _get_conv_out(self, shape):
o = self.conv(torch.zeros(1, *shape))
return int(np.prod(o.size()))
def forward(self, x):
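        # Dueling aggregation: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a)); subtracting the
        # mean advantage keeps the value/advantage decomposition identifiable.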
adv, val = self.adv_val(x)
return val + (adv - adv.mean(dim=1, keepdim=True))
def adv_val(self, x):
fx = x.float() / 256
conv_out = self.conv(fx).view(fx.size()[0], -1)
return self.fc_adv(conv_out), self.fc_val(conv_out)
def _forward_unimplemented(self, *input_forward: Any) -> None:
pass
def distr_projection(next_distr, rewards, dones, gamma):
batch_size = len(rewards)
proj_distr = np.zeros((batch_size, N_ATOMS), dtype=np.float32)
delta_z = (V_MAX - V_MIN) / (N_ATOMS - 1)
for atom in range(N_ATOMS):
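        # Project each Bellman-updated atom back onto the fixed support [V_MIN, V_MAX];
        # b_j is its (fractional) position, and l/u are the neighbouring atom indices.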
v = rewards + (V_MIN + atom * delta_z) * gamma
tz_j = np.minimum(V_MAX, np.maximum(V_MIN, v))
b_j = (tz_j - V_MIN) / delta_z
l = np.floor(b_j).astype(np.int64)
        u = np.ceil(b_j).astype(np.int64)
import numpy as np
import matplotlib.pyplot as plt
import itertools
import time
import os
from numpy.fft import fft, ifft, fft2, ifft2, fftn, ifftn, fftshift, ifftshift
from IPython import display
from scipy.ndimage import uniform_filter
from concurrent.futures import ProcessPoolExecutor
from .util import *
from .optics import *
from .background_estimator import *
def intensity_mapping(img_stack):
img_stack_out = np.zeros_like(img_stack)
img_stack_out[0] = img_stack[0].copy()
img_stack_out[1] = img_stack[4].copy()
img_stack_out[2] = img_stack[3].copy()
img_stack_out[3] = img_stack[1].copy()
img_stack_out[4] = img_stack[2].copy()
return img_stack_out
def instrument_matrix_and_source_calibration(I_cali_mean, handedness = 'RCP'):
_, N_cali = I_cali_mean.shape
# Source intensity
I_tot = np.sum(I_cali_mean,axis=0)
# Calibration matrix
theta = np.r_[0:N_cali]/N_cali*2*np.pi
C_matrix = np.array([np.ones((N_cali,)), np.cos(2*theta), np.sin(2*theta)])
# offset calibration
I_cali_norm = I_cali_mean/I_tot
offset_est = np.transpose(np.linalg.pinv(C_matrix.transpose()).dot(np.transpose(I_cali_norm[0,:])))
alpha = np.arctan2(-offset_est[2], offset_est[1])/2
# Source calibration
C_matrix_offset = np.array([np.ones((N_cali,)), np.cos(2*(theta+alpha)), np.sin(2*(theta+alpha))])
S_source = np.linalg.pinv(C_matrix_offset.transpose()).dot(I_tot[:,np.newaxis])
S_source_norm = S_source/S_source[0]
Ax = np.sqrt((S_source_norm[0]+S_source_norm[1])/2)
Ay = np.sqrt((S_source_norm[0]-S_source_norm[1])/2)
del_phi = np.arccos(S_source_norm[2]/2/Ax/Ay)
if handedness == 'RCP':
E_in = np.array([Ax, Ay*np.exp(1j*del_phi)])
elif handedness == 'LCP':
E_in = np.array([Ax, Ay*np.exp(-1j*del_phi)])
else:
raise TypeError("handedness type must be 'LCP' or 'RCP'")
# Instrument matrix calibration
A_matrix = np.transpose(np.linalg.pinv(C_matrix_offset.transpose()).dot(np.transpose(I_cali_norm)))
theta_fine = np.r_[0:360]/360*2*np.pi
C_matrix_offset_fine = np.array([np.ones((360,)), np.cos(2*(theta_fine+alpha)), np.sin(2*(theta_fine+alpha))])
print('Calibrated source field:\n' + str(np.round(E_in,4)))
print('Calibrated instrument matrix:\n' + str(np.round(A_matrix,4)))
fig,ax = plt.subplots(2,2,figsize=(20,20))
ax[0,0].plot(theta/np.pi*180,np.transpose(I_cali_mean))
ax[0,0].legend(['$I_0$', '$I_{45}$', '$I_{90}$', '$I_{135}$'])
ax[0,0].set_title('Calibration curve without normalization')
ax[0,0].set_xlabel('Orientation of LP (deg)')
ax[0,0].set_ylabel('Raw intensity')
ax[0,1].plot(theta/np.pi*180,I_tot)
ax[0,1].plot(theta_fine/np.pi*180,np.transpose(C_matrix_offset_fine).dot(S_source))
ax[0,1].legend(['Mean source intensity', 'Fitted source intensity'])
ax[0,1].set_title('Source calibration curve')
ax[0,1].set_xlabel('Orientation of LP (deg)')
ax[0,1].set_ylabel('Mean intensity from 4 linear channels')
ax[1,0].plot(theta/np.pi*180,np.transpose(I_cali_mean/I_tot))
ax[1,0].legend(['$I_0$', '$I_{45}$', '$I_{90}$', '$I_{135}$'])
ax[1,0].set_title('Normalized calibration curve')
ax[1,0].set_xlabel('Orientation of LP (deg)')
ax[1,0].set_ylabel('Normalized intensity')
ax[1,1].plot(theta/np.pi*180,np.transpose(I_cali_norm))
ax[1,1].plot(theta_fine/np.pi*180,np.transpose(A_matrix.dot(C_matrix_offset_fine)))
ax[1,1].legend(['$I_0$', '$I_{45}$', '$I_{90}$', '$I_{135}$'])
ax[1,1].set_xlabel('Orientation of LP (deg)')
ax[1,1].set_ylabel('Normalized intensity')
ax[1,1].set_title('Fitted calibration curves')
return E_in, A_matrix, np.transpose(A_matrix.dot(C_matrix_offset_fine))
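# Usage sketch (illustrative shapes): I_cali_mean is a (4, N_cali) array of mean intensities
# from the four linear channels recorded while the calibration polarizer sweeps a full rotation.
# E_in, A_matrix, I_fit = instrument_matrix_and_source_calibration(I_cali_mean, handedness='RCP')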
def instrument_matrix_calibration(I_cali_norm, I_meas):
_, N_cali = I_cali_norm.shape
theta = np.r_[0:N_cali]/N_cali*2*np.pi
S_matrix = np.array([np.ones((N_cali,)), np.cos(2*theta), np.sin(2*theta)])
A_matrix = np.transpose(np.linalg.pinv(S_matrix.transpose()).dot(np.transpose(I_cali_norm)))
if I_meas.ndim == 3:
I_mean = np.mean(I_meas,axis=(1,2))
elif I_meas.ndim == 4:
I_mean = np.mean(I_meas,axis=(1,2,3))
I_tot = np.sum(I_mean)
A_matrix_S3 = I_mean/I_tot-A_matrix[:,0]
I_corr = (I_tot/4)*(A_matrix_S3)/np.mean(A_matrix[:,0])
print('Calibrated instrument matrix:\n' + str(np.round(A_matrix,4)))
print('Last column of instrument matrix:\n' + str(np.round(A_matrix_S3.reshape((4,1)),4)))
plt.plot(np.transpose(I_cali_norm))
plt.plot(np.transpose(A_matrix.dot(S_matrix)))
plt.xlabel('Orientation of LP (deg)')
plt.ylabel('Normalized intensity')
plt.title('Fitted calibration curves')
plt.legend(['$I_0$', '$I_{45}$', '$I_{90}$', '$I_{135}$'])
return A_matrix, I_corr
class waveorder_microscopy:
'''
waveorder_microscopy contains methods to compute weak object transfer function
for label-free image reconstruction with various types of dataset:
1) 2D/3D phase reconstruction with a single brightfield defocused stack (Transport of intensity, TIE)
2) 2D/3D phase reconstruction with intensities of asymetric illumination
(differential phase contrast, DPC)
3) 2D/3D joint phase and polarization (2D orientation) reconstruction
with brightfield-illuminated polarization-sensitive intensities (QLIPP)
4) 2D/3D joint phase and polarization (uniaxial permittivity tensor) reconstruction
with asymmetrically-illuminated polarization-sensitive intensities (uPTI)
Parameters
----------
img_dim : tuple
shape of the computed 2D space with size of (N, M)
lambda_illu : float
wavelength of the incident light
ps : float
xy pixel size of the image space
psz : float
z step size of the image space
NA_obj : float
numerical aperture of the detection objective
NA_illu : float
numerical aperture of the illumination condenser
z_defocus : numpy.ndarray
1D array of defocused z position corresponds to the intensity stack
(matters for 2D reconstruction, the direction positive z matters for 3D reconstruction)
chi : float
swing of the illumination or detection polarization state (in radian)
n_media : float
refractive index of the immersing media
cali : bool
'True' for the orientation convention of QLIPP data,
'False' for the orientation convention of uPTI data
bg_option : str
'local' for estimating background with scipy uniform filter
'local_fit' for estimating background with polynomial fit
other string for normal background subtraction with the provided background
A_matrix : numpy.ndarray
self-provided instrument matrix converting polarization-sensitive intensity images into Stokes parameters
with shape of (N_channel, N_Stokes)
If None is provided, the instrument matrix is determined by the QLIPP convention with swing specify by chi
QLIPP_birefringence_only : bool
'True' to skip pre-processing functions for phase/uPTI reconstruction
'False' to continue with pre-processing functions for phase/uPTI reconstruction
bire_in_plane_deconv : str
string contains the dimension of 2D birefringence deconvolution
'2D' for 2D deconvolution of 2D birefringence
'3D' for 3D deconvolution of 2D birefringence
inc_recon : str
option for constructing settings for 3D orientation reconstruction
'2D-vec-WOTF' for 2D diffractive reconstruction of 3D anisotropy
'3D' for 3D for diffractive reconstruction of 3D anisotropy
phase_deconv : str
string contains the phase reconstruction dimension
'2D' for 2D phase deconvolution
'3D' for 3D phase deconvolution
ph_deconv_layer : int
number of layers included for each layer of semi-3D phase reconstruction
illu_mode : str
string to set the pattern of illumination source
'BF' for brightfield illumination with source pattern specified by NA_illu
'PH' for phase contrast illumination with the source pattern specify by NA_illu and NA_illu_in
'Arbitrary' for self-defined source pattern of dimension (N_pattern, N, M)
    NA_illu_in : float
numerical aperture of the inner circle for phase contrast ring illumination
Source : numpy.ndarray
illumination source pattern with dimension of (N_pattern, N, M)
Source_PolState : numpy.ndarray
illumination polarization states (Ex, Ey) for each illumination pattern with dimension of (N_pattern, 2)
If provided with size of (2,), a single state is used for all illumination patterns
pad_z : int
number of z-layers to pad (reflection boundary condition) for 3D deconvolution
use_gpu : bool
option to use gpu or not
gpu_id : int
number refering to which gpu will be used
'''
def __init__(self, img_dim, lambda_illu, ps, NA_obj, NA_illu, z_defocus, chi=None,\
n_media=1, cali=False, bg_option='global',
A_matrix=None, QLIPP_birefringence_only = False, bire_in_plane_deconv=None, inc_recon=None,
phase_deconv=None, ph_deconv_layer = 5,
illu_mode='BF', NA_illu_in=None, Source=None, Source_PolState=np.array([1, 1j]),
pad_z=0, use_gpu=False, gpu_id=0):
'''
initialize the system parameters for phase and orders microscopy
'''
t0 = time.time()
# GPU/CPU
self.use_gpu = use_gpu
self.gpu_id = gpu_id
if self.use_gpu:
globals()['cp'] = __import__("cupy")
cp.cuda.Device(self.gpu_id).use()
# Basic parameter
self.N, self.M = img_dim
self.n_media = n_media
self.lambda_illu = lambda_illu/n_media
self.ps = ps
self.z_defocus = z_defocus.copy()
if len(z_defocus) >= 2:
self.psz = np.abs(z_defocus[0] - z_defocus[1])
self.G_tensor_z_upsampling = np.ceil(self.psz/(self.lambda_illu/2))
self.pad_z = pad_z
self.NA_obj = NA_obj/n_media
self.NA_illu = NA_illu/n_media
self.N_defocus = len(z_defocus)
self.N_defocus_3D = self.N_defocus + 2*self.pad_z
self.chi = chi
self.cali = cali
self.bg_option = bg_option
self.phase_deconv = phase_deconv
if QLIPP_birefringence_only == False:
# setup microscocpe variables
self.xx, self.yy, self.fxx, self.fyy = gen_coordinate((self.N, self.M), ps)
self.Pupil_obj = gen_Pupil(self.fxx, self.fyy, self.NA_obj, self.lambda_illu)
self.Pupil_support = self.Pupil_obj.copy()
# illumination setup
self.illumination_setup(illu_mode, NA_illu_in, Source, Source_PolState)
# Defocus kernel initialization
self.Hz_det_setup(self.phase_deconv, ph_deconv_layer, bire_in_plane_deconv, inc_recon)
# select either 2D or 3D model for phase deconvolution
self.phase_deconv_setup(self.phase_deconv)
# instrument matrix for polarization detection
self.instrument_matrix_setup(A_matrix)
# select either 2D or 3D model for 2D birefringence deconvolution
self.bire_in_plane_deconv_setup(bire_in_plane_deconv)
# inclination reconstruction model selection
self.inclination_recon_setup(inc_recon)
else:
# instrument matrix for polarization detection
self.instrument_matrix_setup(A_matrix)
############## constructor function group ##############
def illumination_setup(self, illu_mode, NA_illu_in, Source, Source_PolState):
'''
setup illumination source function for transfer function computing
Parameters
----------
illu_mode : str
string to set the pattern of illumination source
'BF' for brightfield illumination with source pattern specified by NA_illu
'PH' for phase contrast illumination with the source pattern specify by NA_illu and NA_illu_in
'Arbitrary' for self-defined source pattern of dimension (N_pattern, N, M)
        NA_illu_in : float
numerical aperture of the inner circle for phase contrast ring illumination
Source : numpy.ndarray
illumination source pattern with dimension of (N_pattern, N, M)
Source_PolState : numpy.ndarray
illumination polarization states (Ex, Ey) for each illumination pattern with dimension of (N_pattern, 2)
'''
if illu_mode == 'BF':
self.Source = gen_Pupil(self.fxx, self.fyy, self.NA_illu, self.lambda_illu)
self.N_pattern = 1
elif illu_mode == 'PH':
if NA_illu_in == None:
                raise ValueError('No inner rim NA specified in the PH illumination mode')
else:
self.NA_illu_in = NA_illu_in/self.n_media
inner_pupil = gen_Pupil(self.fxx, self.fyy, self.NA_illu_in/self.n_media, self.lambda_illu)
self.Source = gen_Pupil(self.fxx, self.fyy, self.NA_illu, self.lambda_illu)
self.Source -= inner_pupil
Pupil_ring_out = gen_Pupil(self.fxx, self.fyy, self.NA_illu/self.n_media, self.lambda_illu)
Pupil_ring_in = gen_Pupil(self.fxx, self.fyy, self.NA_illu_in/self.n_media, self.lambda_illu)
self.Pupil_obj = self.Pupil_obj*np.exp((Pupil_ring_out-Pupil_ring_in)*(np.log(0.7)-1j*(np.pi/2 - 0.0*np.pi)))
self.N_pattern = 1
elif illu_mode == 'Arbitrary':
self.Source = Source.copy()
if Source.ndim == 2:
self.N_pattern = 1
else:
self.N_pattern = len(Source)
self.Source_PolState = np.zeros((self.N_pattern, 2), complex)
if Source_PolState.ndim == 1:
for i in range(self.N_pattern):
self.Source_PolState[i] = Source_PolState/(np.sum(np.abs(Source_PolState)**2))**(1/2)
else:
if len(Source_PolState) != self.N_pattern:
                raise ValueError('The length of Source_PolState needs to be either 1 or the same as N_pattern')
for i in range(self.N_pattern):
self.Source_PolState[i] = Source_PolState[i]/(np.sum(np.abs(Source_PolState[i])**2))**(1/2)
def Hz_det_setup(self, phase_deconv, ph_deconv_layer, bire_in_plane_deconv, inc_recon):
'''
setup defocus kernels for deconvolution with the corresponding dimensions
Parameters
----------
phase_deconv : str
string contains the dimension of the phase reconstruction
'2D' for 2D phase deconvolution
'3D' for 3D phase deconvolution
ph_deconv_layer : int
number of layers included for each layer of semi-3D phase reconstruction
bire_in_plane_deconv : str
string contains the dimension of 2D birefringence deconvolution
'2D' for 2D deconvolution of 2D birefringence
'3D' for 3D deconvolution of 2D birefringence
inc_recon : str
option for constructing settings for 3D orientation reconstruction
'2D-geometric' for 2D non-diffractive reconstruction of 3D anisotropy
'2D-vec-WOTF' for 2D diffractive reconstruction of 3D anisotropy
'3D' for 3D for diffractive reconstruction of 3D anisotropy
'''
if phase_deconv == '2D' or bire_in_plane_deconv == '2D' or inc_recon == '2D-vec-WOTF':
# generate defocus kernel based on Pupil function and z_defocus
self.Hz_det_2D = gen_Hz_stack(self.fxx, self.fyy, self.Pupil_support, self.lambda_illu, self.z_defocus)
if phase_deconv == 'semi-3D':
self.ph_deconv_layer = ph_deconv_layer
if self.z_defocus[0] - self.z_defocus[1] >0:
z_deconv = -(np.r_[:self.ph_deconv_layer]-self.ph_deconv_layer//2)*self.psz
else:
z_deconv = (np.r_[:self.ph_deconv_layer]-self.ph_deconv_layer//2)*self.psz
self.Hz_det_semi_3D = gen_Hz_stack(self.fxx, self.fyy, self.Pupil_support, self.lambda_illu, z_deconv)
self.G_fun_z_semi_3D = gen_Greens_function_z(self.fxx, self.fyy, self.Pupil_support, self.lambda_illu, z_deconv)
if phase_deconv == '3D' or bire_in_plane_deconv == '3D' or inc_recon == '3D':
# generate defocus kernel and Green's function
if self.z_defocus[0] - self.z_defocus[1] >0:
z = -ifftshift((np.r_[0:self.N_defocus_3D]-self.N_defocus_3D//2)*self.psz)
else:
z = ifftshift((np.r_[0:self.N_defocus_3D]-self.N_defocus_3D//2)*self.psz)
self.Hz_det_3D = gen_Hz_stack(self.fxx, self.fyy, self.Pupil_support, self.lambda_illu, z)
self.G_fun_z_3D = gen_Greens_function_z(self.fxx, self.fyy, self.Pupil_support, self.lambda_illu, z)
def phase_deconv_setup(self, phase_deconv):
'''
setup transfer functions for phase deconvolution with the corresponding dimensions
Parameters
----------
phase_deconv : str
string contains the dimension of the phase reconstruction
'2D' for 2D phase deconvolution
'3D' for 3D phase deconvolution
ph_deconv_layer : int
number of layers included for each layer of semi-3D phase reconstruction
'''
if phase_deconv == '2D':
# compute 2D phase transfer function
self.gen_WOTF()
elif phase_deconv == 'semi-3D':
self.gen_semi_3D_WOTF()
elif phase_deconv == '3D':
# compute 3D phase transfer function
self.gen_3D_WOTF()
def bire_in_plane_deconv_setup(self, bire_in_plane_deconv):
'''
setup transfer functions for 2D birefringence deconvolution with the corresponding dimensions
Parameters
----------
bire_in_plane_deconv : str
string contains the dimension of 2D birefringence deconvolution
'2D' for 2D deconvolution of 2D birefringence
'3D' for 3D deconvolution of 2D birefringence
'''
if bire_in_plane_deconv == '2D':
# generate 2D vectorial transfer function for 2D birefringence deconvolution in 2D space
self.gen_2D_vec_WOTF(False)
elif bire_in_plane_deconv == '3D':
# generate 3D vectorial transfer function for 2D birefringence deconvolution in 3D space
self.gen_3D_vec_WOTF(False)
def inclination_recon_setup(self, inc_recon):
'''
setup transfer functions for uPTI reconstruction
Parameters
----------
phase_deconv : str
string contains the phase reconstruction dimension
'2D' for 2D phase deconvolution
'3D' for 3D phase deconvolution
inc_recon : str
option for constructing settings for 3D orientation reconstruction
'2D-geometric' for 2D non-diffractive reconstruction of 3D anisotropy
'2D-vec-WOTF' for 2D diffractive reconstruction of 3D anisotropy
'3D' for 3D for diffractive reconstruction of 3D anisotropy
'''
if inc_recon is not None and inc_recon != '3D':
if inc_recon == '2D-geometric':
wave_vec_norm_x = self.lambda_illu*self.fxx
wave_vec_norm_y = self.lambda_illu*self.fyy
wave_vec_norm_z = (np.maximum(0,1 - wave_vec_norm_x**2 - wave_vec_norm_y**2))**(0.5)
incident_theta = np.arctan2((wave_vec_norm_x**2 + wave_vec_norm_y**2)**(0.5), wave_vec_norm_z)
incident_phi = np.arctan2(wave_vec_norm_y,wave_vec_norm_x)
self.geometric_inc_matrix, self.geometric_inc_matrix_inv = gen_geometric_inc_matrix(incident_theta, incident_phi, self.Source)
elif inc_recon == '2D-vec-WOTF':
# generate 2D vectorial transfer function for 2D uPTI
self.gen_2D_vec_WOTF(True)
# compute the AHA matrix for later 2D inversion
self.inc_AHA_2D_vec = np.zeros((7,7,self.N,self.M),complex)
for i,j,p in itertools.product(range(7), range(7), range(self.N_Stokes)):
self.inc_AHA_2D_vec[i,j] += np.sum(np.conj(self.H_dyadic_2D_OTF[p,i])*self.H_dyadic_2D_OTF[p,j],axis=2)
elif inc_recon == '3D':
# generate 3D vectorial transfer function for 3D uPTI
self.gen_3D_vec_WOTF(True)
self.inc_AHA_3D_vec = np.zeros((7,7,self.N,self.M,self.N_defocus_3D), dtype='complex64')
# compute the AHA matrix for later 3D inversion
for i,j,p in itertools.product(range(7), range(7), range(self.N_Stokes)):
self.inc_AHA_3D_vec[i,j] += np.sum(np.conj(self.H_dyadic_OTF[p,i])*self.H_dyadic_OTF[p,j],axis=0)
def instrument_matrix_setup(self, A_matrix):
'''
setup instrument matrix
Parameters
----------
A_matrix : numpy.ndarray
self-provided instrument matrix converting polarization-sensitive intensity images into Stokes parameters
with shape of (N_channel, N_Stokes)
If None is provided, the instrument matrix is determined by the QLIPP convention with swing specify by chi
'''
if A_matrix is None:
self.N_channel = 5
self.N_Stokes = 4
self.A_matrix = 0.5*np.array([[1,0,0,-1], \
[1, np.sin(self.chi), 0, -np.cos(self.chi)], \
                                          [1, 0, np.sin(self.chi), -np.cos(self.chi)], \
import time
import joblib
import numpy as np
from sklearn.base import clone
from .base import Explainer
from .parsers import util
class DShap(Explainer):
"""
    Explainer that approximates data Shapley values using
the TMC-Shapley algorithm.
Local-Influence Semantics
- Inf.(x_i, x_t) = Avg. L(y_i, f_{w/o x_i}(x_t)) - L(y_i, f(x_t))
over all possible permutations of the training data.
- Pos. value means a decrease in test loss (a.k.a. proponent, helpful).
- Neg. value means an increase in test loss (a.k.a. opponent, harmful).
Reference
- https://github.com/amiratag/DataShapley
Paper
- http://proceedings.mlr.press/v97/ghorbani19c.html
Note
- Supports both GBDTs and RFs.
    - No validation set; we compute the loss on training or ONE test example,
      thus there is no average loss score and no `tolerance` parameter
      for early truncation.
* However, we can use a hard truncation limit via `trunc_frac`.
"""
def __init__(self, trunc_frac=0.25, n_jobs=1,
check_every=100, random_state=1, logger=None):
"""
Input
trunc_frac: float, fraction of instances to compute marginals for per iter.
n_jobs: int, no. iterations / processes to run in parallel.
check_every: int, no. iterations to run between checking convergence.
random_state: int, random seed to enhance reproducibility.
logger: object, If not None, output to logger.
"""
self.trunc_frac = trunc_frac
self.n_jobs = n_jobs
self.check_every = check_every
self.random_state = random_state
self.logger = logger
def fit(self, model, X, y):
"""
- Convert model to internal standardized tree structures.
- Perform any initialization necessary for the chosen method.
Input
model: tree ensemble.
X: 2d array of train data.
y: 1d array of train targets.
"""
super().fit(model, X, y)
X, y = util.check_data(X, y, objective=self.model_.objective)
self.original_model_ = model
self.objective_ = self.model_.objective
self.n_class_ = self.model_.n_class_
self.X_train_ = X.copy()
self.y_train_ = y.copy()
self.loss_fn_ = util.get_loss_fn(self.objective_, self.n_class_, self.model_.factor)
self.random_loss_ = self._get_random_loss()
return self
def get_local_influence(self, X, y):
"""
- Compute influence of each training instance on the test loss.
Input
X: 2d array of test examples.
y: 1d array of test targets.
Return
- 2d array of shape=(no. train, X.shape[0]).
* Arrays are returned in the same order as the training data.
"""
X, y = util.check_data(X, y, objective=self.model_.objective)
return self._run_tmc_shapley(X_test=X, y_test=y, inf='local')
# private
def _run_tmc_shapley(self, X_test=None, y_test=None, batch=False, inf='global', stability_tol=0.1):
"""
- Run the TMC-Shapley algorithm until marginal contributions converge.
Return
- 2d array of average marginals, shape=(no. train, 1 or X_test.shape[0]).
            * Arrays are returned in the same order as the training data.
"""
# extract parameters
original_model = self.original_model_
X_train = self.X_train_
y_train = self.y_train_
loss_fn = self.loss_fn_
random_loss = self.random_loss_
truncation_frac = self.trunc_frac
objective = self.objective_
n_class = self.n_class_
random_state = self.random_state
check_every = self.check_every
# select no. processes to run in parallel
if self.n_jobs == -1:
n_jobs = joblib.cpu_count()
else:
assert self.n_jobs >= 1
n_jobs = min(self.n_jobs, joblib.cpu_count())
start = time.time()
if self.logger:
self.logger.info('\n[INFO] computing approx. data Shapley values...')
self.logger.info(f'[INFO] no. cpus: {n_jobs:,}...')
# run TMC-Shapley alg. until convergence
with joblib.Parallel(n_jobs=n_jobs) as parallel:
# result container
if inf == 'local':
marginals = np.zeros((0, self.X_train_.shape[0], X_test.shape[0]), dtype=util.dtype_t)
result = np.zeros((self.X_train_.shape[0], X_test.shape[0]), dtype=util.dtype_t)
stable = np.zeros(X_test.shape[0], dtype=util.dtype_t)
else:
assert inf == 'global'
                marginals = np.zeros((0, self.X_train_.shape[0], 1), dtype=util.dtype_t)  # shape=(0, no. train, 1)
result = np.zeros((self.X_train_.shape[0], 1), dtype=util.dtype_t)
stable = np.zeros(1, dtype=util.dtype_t)
iteration = 0
while True:
# shape=(check_every, no. train, 1 or no. test)
results = parallel(joblib.delayed(_run_iteration)
(original_model, X_train, y_train, loss_fn,
random_loss, truncation_frac, objective, n_class,
random_state, iteration, i, X_test, y_test,
batch, inf) for i in range(check_every))
iteration += check_every
# synchronization barrier
marginals = np.vstack([marginals, results]) # shape=(check_every + (1), no. train, 1 or X.shape[0])
# check convergence
# - add up all marginals using axis=0, then divide by their iteration
# - diff. between last `check_every` runs and last run, divide by last run, average over all points
errors = np.zeros(marginals.shape[2], dtype=util.dtype_t) # shape=(X.shape[0],)
for i in range(marginals.shape[2]):
                    divisor = np.arange(1, iteration + 1)[-check_every:].reshape(-1, 1)  # shape=(check_every, 1)
v = (np.cumsum(marginals[:, :, i], axis=0)[-check_every:] / divisor) # (check_every, no. train)
errors[i] = np.max(np.mean( | np.abs(v - v[-1:]) | numpy.abs |
# -*- coding: utf-8 -*-
"""
Created on Thu Apr 15 18:58:22 2021
@author: <NAME>
Code to build a model of neurons by analyzing their images and score trajectories.
Depends on the `Visual_Neuron_Modelling` repository.
"""
# %load_ext autoreload
# %autoreload 2
#%%
# backup_dir = r"C:\Users\Ponce lab\Documents\ml2a-monk\generate_BigGAN\2021-07-23-12-23-21"
# backup_dir = r"C:\Users\Poncelab-ML2a\Documents\monkeylogic2\generate_integrated\2021-10-25-11-05-37"
backup_dir = r"E:\Network_Data_Sync\Stimuli\2021-10-27-Beto-01\2021-10-27-12-15-46"
# r"C:\Users\Ponce lab\Documents\ml2a-monk\generate_BigGAN\2021-06-28-12-34-03"
# r"C:\Users\Ponce lab\Documents\ml2a-monk\generate_BigGAN\2021-06-04-11-54-42"
# backup_dir = r"N:\Stimuli\2021-EvolDecomp\2021-04-27-Alfa-03\2021-04-27-13-07-55"
threadid = 1
Animal = "Beto"
exptime = backup_dir.split("\\")[-1]
#%%
from time import time
import os
import re
from os.path import join
import sys
if os.environ['COMPUTERNAME'] == 'PONCELAB-ML2B':
Python_dir = r"C:\Users\Ponce lab\Documents\Python"
elif os.environ['COMPUTERNAME'] == 'PONCELAB-ML2A':
Python_dir = r"C:\Users\Poncelab-ML2a\Documents\Python"
elif os.environ['COMPUTERNAME'] == 'DESKTOP-MENSD6S':
Python_dir = r"E:\Github_Projects"
elif os.environ['COMPUTERNAME'] == 'DESKTOP-9DDE2RH':
Python_dir = r"D:\Github"
sys.path.append(join(Python_dir,"Visual_Neuro_InSilico_Exp"))
sys.path.append(join(Python_dir,"Visual_Neuron_Modelling"))
# sys.path.append(join(Python_dir,"PerceptualSimilarity"))
import numpy as np
from scipy.stats import sem
import torch
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample
from GAN_utils import upconvGAN
from skimage.io import imsave, imread
from torchvision.utils import make_grid
from build_montages import build_montages
from scipy.io import loadmat
import matplotlib.pylab as plt
def visualize_cctsr_simple(featFetcher, layers2plot, imgcol, savestr="Evol", titstr="Alfa_Evol", figdir=""):
"""
Demo
ExpType = "EM_cmb"
layers2plot = ['conv3_3', 'conv4_3', 'conv5_3']
figh = visualize_cctsr(featFetcher, layers2plot, ReprStats, Expi, Animal, ExpType, )
figh.savefig(join("S:\corrFeatTsr","VGGsummary","%s_Exp%d_%s_corrTsr_vis.png"%(Animal,Expi,ExpType)))
"""
nlayer = max(4, len(layers2plot))
figh, axs = plt.subplots(3,nlayer,figsize=[10/3*nlayer,8])
if imgcol is not None:
for imgi in range(len(imgcol)):
axs[0,imgi].imshow(imgcol[imgi])
axs[0,imgi].set_title("Highest Score Evol Img")
axs[0,imgi].axis("off")
for li, layer in enumerate(layers2plot):
chanN = featFetcher.cctsr[layer].shape[0]
tmp=axs[1,li].matshow(np.nansum(featFetcher.cctsr[layer].abs().numpy(),axis=0) / chanN)
plt.colorbar(tmp, ax=axs[1,li])
axs[1,li].set_title(layer+" mean abs cc")
tmp=axs[2,li].matshow(np.nanmax(featFetcher.cctsr[layer].abs().numpy(),axis=0))
plt.colorbar(tmp, ax=axs[2,li])
axs[2,li].set_title(layer+" max abs cc")
figh.suptitle("%s Exp Corr Feat Tensor"%(titstr))
plt.show()
figh.savefig(join(figdir, "%s_corrTsr_vis.png" % (savestr)))
figh.savefig(join(figdir, "%s_corrTsr_vis.pdf" % (savestr)))
return figh
def ind2xy(ind, div_n, pH, pW):
yi, xi = np.divmod(ind, div_n)
return yi * pH, xi * pW
def roll_image(img, yroll, xroll):
imggxshift = np.zeros(img.shape, img.dtype)
imggyshift = np.zeros(img.shape, img.dtype)
#assert xroll * yroll is not 0
    if xroll != 0:
imggxshift[xroll:,:] = img[:-xroll,:]
imggxshift[:xroll,:] = img[-xroll:,:] # roll the lower edge up to fill the blank!
else:
imggxshift = img.copy()
    if yroll != 0:
        imggyshift[:,yroll:] = imggxshift[:,:-yroll] # same wrap-around roll along the second axis
imggyshift[:,:yroll] = imggxshift[:,-yroll:]
else:
imggyshift = imggxshift
return imggyshift
def patch_shuffle(img, div_n=8):
"""div_n, how many patch do you want along each axis"""
# div_n = 16
H, W = img.shape
#assert (H%div_n is 0) and (W%div_n is 0), "`div_n` should divide both W and H of image, like 1,2,4,8,16"
patch_n = div_n * div_n
pH = int(H / div_n)
pW = int(W / div_n)
perm_p_id = np.random.permutation(patch_n)
imgpshf = np.zeros(img.shape, img.dtype)
imgpshf[:,:] = img[:,:]
for ind in range(patch_n):
targ_y, targ_x = ind2xy(ind, div_n, pH, pW)
src_y, src_x = ind2xy(perm_p_id[ind], div_n, pH, pW)
imgpshf[targ_y:targ_y + pH, targ_x:targ_x + pW] = img[src_y:src_y + pH, src_x:src_x + pW]
return imgpshf
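#%% Quick sanity check of the scrambling helpers (illustrative addition, not part of the
# original analysis pipeline): patch shuffling must preserve the pixel values while only
# permuting patch locations, and rolling must preserve the image shape.
_demo_img = np.random.rand(64, 64)
_demo_shuffled = patch_shuffle(_demo_img, div_n=8)
assert np.allclose(np.sort(_demo_img, axis=None), np.sort(_demo_shuffled, axis=None))
_demo_rolled = roll_image(_demo_img, 5, 9)
assert _demo_rolled.shape == _demo_img.shape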
#%% Load basic information
data = loadmat(join(backup_dir, "Evol_ScoreImgTraj.mat"))
imgfp_col = data.get("imgfp_col")
score_col = data.get("score_col")
imgsize = data.get('imgsize').astype('float')
imgpos = data.get('imgpos').astype('float')
pref_chan = data.get('prefchan').astype('int')
imgsize = imgsize[threadid-1, 0]
imgpos = imgpos[threadid-1, :]
pref_chan = pref_chan[threadid-1, 0]
scorevec_thread = score_col[0, threadid-1][:,0]
imgfp_thread = imgfp_col[0, threadid-1]
imgpatt = re.compile(r"block(\d*)_thread")
blockvec_thread = np.array([int(imgpatt.findall(imgfn)[0]) for imgfn in imgfp_thread])
blockarr = range(min(blockvec_thread),max(blockvec_thread)+1)
meanarr = np.array([np.mean(scorevec_thread[blockvec_thread==blocki]) for blocki in blockarr])
semarr = np.array([sem(scorevec_thread[blockvec_thread==blocki]) for blocki in blockarr])
#%
if os.environ['COMPUTERNAME'] == 'DESKTOP-9DDE2RH':
backup_dir_old, _ = os.path.split(imgfp_thread[0])
imgfp_thread = np.array([fp.replace(backup_dir_old, backup_dir) for fp in imgfp_thread])
figh = plt.figure(figsize=[6,5]);
plt.scatter(blockvec_thread,scorevec_thread,alpha=0.5)
plt.plot(blockarr, meanarr, 'k-')
plt.fill_between(blockarr, meanarr-semarr, meanarr+semarr,alpha=0.4)
plt.ylabel("Spike rate");
plt.xlabel("Generations");
plt.title("Evolution Trajectory prefchan %02d, %.1f deg pos [%.1f %.1f], thread %d"%\
(pref_chan,imgsize,imgpos[0],imgpos[1],threadid))
plt.show()
#%% Collect some best images
score_idx = np.argsort(-scorevec_thread)
score_examp = scorevec_thread[score_idx[:4]]
imgfp_examp = imgfp_thread[score_idx[:4]]
imgcol_examp = [imread(fp) for fp in imgfp_examp]
#%%
from torchvision import models
from CorrFeatTsr_lib import Corr_Feat_Machine, Corr_Feat_pipeline, loadimg_preprocess, visualize_cctsr
from CorrFeatTsr_predict_lib import score_images, softplus, fitnl_predscore
from featvis_lib import rectify_tsr, tsr_factorize, vis_featmap_corr, vis_feattsr, \
vis_feattsr_factor, vis_featvec, vis_featvec_wmaps, vis_featvec_point, load_featnet, \
tsr_posneg_factorize, posneg_sep
#%%
# netname = "alexnet";layers2plot = ["conv2", "conv3", "conv4", "conv5",]
# netname = "vgg16";layers2plot = ["conv2_2", "conv3_3", "conv4_3", "conv5_3", ]
# netname = "resnet50";layers2plot = ["layer2", "layer3", "layer4", ]
netname = "resnet50_linf8";layers2plot = ["layer2", "layer3", "layer4", ]
ccdir = join(backup_dir, "CCFactor_%s"%netname)
# ccdir = "debug_tmp_%s"%netname
os.makedirs(join(ccdir, "img"), exist_ok=True)
figh.savefig(join(ccdir,"ExpEvolTraj.png"))
featnet, net = load_featnet(netname)
G = upconvGAN("fc6")
G.requires_grad_(False).cuda().eval();
#%% Create correlation online
imgpix = int(imgsize * 40) #%224 #
# titstr = "Driver Chan %d, %.1f deg [%s]"%(pref_chan, imgsize, tuple(imgpos))
featFetcher = Corr_Feat_Machine()
featFetcher.register_hooks(net, layers2plot, netname=netname,)
featFetcher.init_corr()
Corr_Feat_pipeline(featnet, featFetcher, scorevec_thread, imgfp_thread,
lambda x:loadimg_preprocess(x, borderblur=True, imgpix=imgpix), online_compute=True,
batchsize=100, savedir=ccdir, savenm="Evol" ) # % (Animal, Expi, expsuffix),
corrDict = np.load(join(ccdir, "%s_corrTsr.npz" % ("Evol")), allow_pickle=True)
figh = visualize_cctsr_simple(featFetcher, layers2plot, imgcol_examp, savestr="%s_Evol%s_%s"%(Animal,exptime,netname),
titstr="%s_Evol%s_%s"%(Animal,exptime,netname), figdir=ccdir)
cctsr_dict = corrDict.get("cctsr").item()
Ttsr_dict = corrDict.get("Ttsr").item()
stdtsr_dict = corrDict.get("featStd").item()
featFetcher.clear_hook()
#%% OK, start the decomposition.
# layer = "conv4"; bdr = 1;
# layer = "conv3_3"; bdr = 2;
layer = "layer3"; bdr = 1;
vis_score_mode = "cosine" # "corr"
ccdir = join(backup_dir, "CCFactor_%s-%s"%(netname,layer))
os.makedirs(join(ccdir, "img"), exist_ok=True)
NF = 3; rect_mode = "Tthresh"; thresh = (None, 3)#"pos"
Ttsr = Ttsr_dict[layer]
cctsr = cctsr_dict[layer]
stdtsr = stdtsr_dict[layer]
covtsr = cctsr * stdtsr
covtsr = np.nan_to_num(covtsr)
cctsr = np.nan_to_num(cctsr)
# Ttsr_pp = rectify_tsr(Ttsr, rect_mode) #
covtsr_pp = rectify_tsr(covtsr, mode=rect_mode, thr=thresh, Ttsr=Ttsr) # add thresholding to T tsr
# Hmat, Hmaps, Tcomponents, ccfactor = tsr_factorize(Ttsr_pp, cctsr, bdr=bdr, Nfactor=NF, figdir=ccdir, savestr="%s-%s"%(netname, layer))
Hmat, Hmaps, ccfactor, FactStat = tsr_posneg_factorize(covtsr_pp, bdr=bdr, Nfactor=NF,
figdir=ccdir, savestr="%s-%s"%(netname, layer))
Tcomponents = None
#%%
torchseed = int(time())
torch.manual_seed(torchseed)
print("Visualizing featuer vectors with differeent spatial masks (using %s scoring)"%vis_score_mode)
finimgs, mtg, score_traj = vis_feattsr(cctsr, net, G, layer, netname=netname, score_mode=vis_score_mode,
featnet=featnet, Bsize=5, figdir=ccdir, savestr="corr", saveimg=True)
# finimgs, mtg, score_traj = vis_feattsr(covtsr_pp, net, G, layer, netname=netname, score_mode=vis_score_mode,
# featnet=featnet, Bsize=5, figdir=ccdir, savestr="cov_pp", saveimg=True)
# finimgs, mtg, score_traj = vis_feattsr(covtsr, net, G, layer, netname=netname, score_mode=vis_score_mode,
# featnet=featnet, Bsize=5, figdir=ccdir, savestr="cov", saveimg=True)
finimgs, mtg, score_traj = vis_feattsr_factor(ccfactor, Hmaps, net, G, layer, netname=netname, score_mode=vis_score_mode,
featnet=featnet, Bsize=5, bdr=bdr, figdir=ccdir, savestr="corr", saveimg=True)
finimgs_col, mtg_col, score_traj_col = vis_featvec_wmaps(ccfactor, Hmaps, net, G, layer, netname=netname, score_mode=vis_score_mode, \
featnet=featnet, bdr=bdr, Bsize=10, saveImgN=5, figdir=ccdir, savestr="corr", imshow=False, saveimg=True)
finimgs_col, mtg_col, score_traj_col = vis_featvec(ccfactor, net, G, layer, netname=netname, score_mode=vis_score_mode,
featnet=featnet, Bsize=10, saveImgN=5, figdir=ccdir, savestr="corr", imshow=False, saveimg=True)
finimgs_col, mtg_col, score_traj_col = vis_featvec_point(ccfactor, Hmaps, net, G, layer, netname=netname, score_mode=vis_score_mode,\
featnet=featnet, bdr=bdr, Bsize=10, saveImgN=5, figdir=ccdir, savestr="corr", imshow=False, saveimg=True, pntsize=4)
#%%
print("Saving the Evolved images with highest scores")
score_examp = scorevec_thread[score_idx[:5]]
imgfp_examp = imgfp_thread[score_idx[:5]]
imgcol_examp = [imread(fp) for fp in imgfp_examp]
for i, img in enumerate(imgcol_examp):
imgid = imgfp_examp[i].split("\\")[-1].split(".")[0]
imsave(join(ccdir, "img", "evol_best_%02d_%s.png"%(i, imgid)), img)
#%%
print("Saving record for the factorization method")
np.savez(join(ccdir, "factor_record.npz"), Hmat=Hmat, Hmaps=Hmaps, Tcomponents=Tcomponents, ccfactor=ccfactor,
netname=netname, layer=layer, bdr=bdr, NF=NF, rect_mode=rect_mode, torchseed=torchseed, vis_score_mode=vis_score_mode)
#%% Scramble the feature vectors
print("Shuffling feature vectors")
ccfactor_shfl = np.concatenate(tuple([ccfactor[np.random.permutation(ccfactor.shape[0]),ci:ci+1]
for ci in range(ccfactor.shape[1])]),axis=1)
#%%
print("Visualizing models with shuffled feature vectors. (using %s scoring)"%vis_score_mode)
finimgs_col, mtg_col, score_traj_col = vis_featvec(ccfactor_shfl, net, G, layer, netname=netname, score_mode=vis_score_mode,
featnet=featnet, Bsize=10, saveImgN=5, figdir=ccdir, savestr="shuffle", imshow=False, saveimg=True)
finimgs_col, mtg_col, score_traj_col = vis_featvec_point(ccfactor_shfl, Hmaps, net, G, layer, netname=netname, score_mode=vis_score_mode,\
featnet=featnet, bdr=bdr, Bsize=10, saveImgN=5, figdir=ccdir, savestr="shuffle", imshow=False, saveimg=True)
finimgs_col, mtg_col, score_traj_col = vis_featvec_wmaps(ccfactor_shfl, Hmaps, net, G, layer, netname=netname, score_mode=vis_score_mode,\
featnet=featnet, bdr=bdr, Bsize=10, saveImgN=5, figdir=ccdir, savestr="shuffle", imshow=False, saveimg=True)
#%% Patch shuffling + rolling to scramble the spatial masks for factors.
print("Shuffling spatial masks")
PatchN = 6
H_H, H_W = Hmaps.shape[0], Hmaps.shape[1]
Hmaps_patchshffule = np.concatenate(tuple(roll_image(patch_shuffle(Hmaps[:,:,ci], div_n=PatchN), \
| np.random.randint(H_H) | numpy.random.randint |
# -*- coding: latin-1 -*-
# Copyright CERFACS (http://cerfacs.fr/)
# Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
#
# Author: <NAME>
"""
Types of temporal aggregations (slice_mode):
- 'month' (all months of year)
- 'year'
- 'DJF'
- 'MAM'
- 'JJA'
- 'SON'
- 'ONDJFM'
- 'AMJJAS'
- user selected months
- user defined seasons
- None : whole selected period will be processed
Note: DJF 2000: December 2000 + January 2001 + February 2001
"""
import cftime
import numpy
import pdb
from datetime import datetime, timedelta
from collections import OrderedDict
import sys
from .util import util_dt
from .icclim_exceptions import *
## This function creates a dictionary with centroid day and centroid month for each type of temporal aggregation
## except for slice_mode=None
def get_map_info_slice(slice_mode):
map_slices={}
map_slices[str(slice_mode)]={}
if slice_mode=='year':
months=None
centroid_day=1
centroid_month=7
elif slice_mode=='month':
months=range(1,13)
centroid_day=16
elif slice_mode=='DJF':
months=([12], [1,2])
elif slice_mode=='MAM':
months=[3,4,5]
elif slice_mode=='JJA':
months=[6,7,8]
elif slice_mode=='SON':
months=[9,10,11]
elif slice_mode=='ONDJFM':
months=([10,11,12], [1,2,3])
elif slice_mode=='AMJJAS':
months=[4,5,6,7,8,9]
elif type(slice_mode) is list:
months=slice_mode[1]
else:
raise InvalidIcclimArgumentError("slice_mode", "Invalid value = " + slice_mode)
map_slices[str(slice_mode)]['months']=months
if type(months) is list: # simple season like 'MAM' [3,4,5]
months=months
elif type(months) is tuple: # composed season like 'DJF' ([12], [1,2]) or 'ONDJFM' ([10,11,12], [1,2,3])
months=months[0]+months[1]
try:
# centroid day
if len(months) % 2 == 0 and slice_mode!='month': # nb of months in season is even
centroid_day=1
else: # nb of months in season is odd
centroid_day=16
# centroid month
if slice_mode=='month' or slice_mode[0]=='month': #i.e. only for months
centroid_month=None
else:
            # Cast to int: in Python 3, 3/2 = 1.5 (true division), whereas Python 2 gave 1
centroid_month=months[int(len(months)/2)]
except:
pass
map_slices[str(slice_mode)]['centroid_day']=centroid_day
map_slices[str(slice_mode)]['centroid_month']=centroid_month
return map_slices
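## For instance (illustrative): get_map_info_slice('DJF') returns
## {'DJF': {'months': ([12], [1, 2]), 'centroid_day': 16, 'centroid_month': 1}},
## i.e. a season spanning two calendar years is centred on 16 January of the later year.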
def get_dict_temporal_slices(dt_arr, values_arr, fill_value, calend='gregorian', temporal_subset_mode=None, time_range=None):
'''
This function returns a dictionary with temporal slices.
:param dt_arr: Datetime vector.
:type dt_arr: numpy.ndarray (1D) of datetime.datetime objects
:param values_arr: Corresponding to ``dt_arr`` array of values.
:type values_arr: numpy.ndarray (3D)
:param temporal_subset_mode: Type of temporal aggregation: the same set of possible values as ``slice_mode``.
:type temporal_subset_mode: str
:param time_range: Time range.
:type time_range: [datetime.datetime, datetime.datetime]
:rtype: dict, where key is (``temporal_subset_mode``, year) and values are grouped in a tuple with 5 elements: (dt_centroid, dt_bounds, dt_arr, values_arr, fill_value).
.. note:: To view all keys of the returned dict:
>>> my_dict.keys()
.. note:: structure of the returned dictionary:
>>> all_slices = my_dict.keys()
dt_centroid = my_dict['any_slice'][0]
dt_bounds = my_dict['any_slice'][1]
dt_arr = my_dict['any_slice'][2]
values_arr = my_dict['any_slice'][3]
fill_val = my_dict['any_slice'][4]
##################################################
##################################################
Example:
>>> import time_subset
>>> from netCDF4 import Dataset
>>> from datetime import datetime
>>> import numpy
>>> import icclim
>>>
>>> f = '/data/tasmax_day_EC-EARTH_rcp26_r8i1p1_20760101-21001231.nc'
>>> nc = Dataset(f, 'r')
>>>
>>> v_arr = nc.variables['tasmax'][:,:,:]
>>> t_arr = nc.variables['time'][:]
>>> dt_arr = numpy.array([icclim.util_dt.num2date(dt, calend='gregorian', units='days since 2006-1-1') for dt in t_arr])
>>>
    >>> dict_temp_subset = time_subset.get_dict_temporal_slices(dt_arr=dt_arr, values_arr=v_arr, calend='gregorian', temporal_subset_mode='DJF', time_range=[datetime(2080, 1, 1), datetime(2085, 12, 31)])
>>>
>>> for key in dict_temp_subset.keys():
    >>> print(key, '======', dict_temp_subset[key][0], '======', dict_temp_subset[key][1])
('DJF', 2080) ====== 2081-01-16 00:00:00 ====== [datetime.datetime(2080, 12, 1, 12, 0) datetime.datetime(2081, 3, 1, 12, 0)]
('DJF', 2081) ====== 2082-01-16 00:00:00 ====== [datetime.datetime(2081, 12, 1, 12, 0) datetime.datetime(2082, 3, 1, 12, 0)]
('DJF', 2082) ====== 2083-01-16 00:00:00 ====== [datetime.datetime(2082, 12, 1, 12, 0) datetime.datetime(2083, 3, 1, 12, 0)]
('DJF', 2083) ====== 2084-01-16 00:00:00 ====== [datetime.datetime(2083, 12, 1, 12, 0) datetime.datetime(2084, 3, 1, 12, 0)]
('DJF', 2084) ====== 2085-01-16 00:00:00 ====== [datetime.datetime(2084, 12, 1, 12, 0) datetime.datetime(2085, 3, 1, 12, 0)]
('DJF', 2085) ====== 2086-01-16 00:00:00 ====== [datetime.datetime(2085, 12, 1, 12, 0) datetime.datetime(2086, 3, 1, 12, 0)]
    >>> dict_temp_subset = time_subset.get_dict_temporal_slices(dt_arr=dt_arr, values_arr=v_arr, temporal_subset_mode='JJA', time_range=[datetime(2080, 1, 1), datetime(2085, 12, 31)])
>>> for key in dict_temp_subset.keys():
    >>> print(key, '======', dict_temp_subset[key][0], '======', dict_temp_subset[key][1])
('JJA', 2080) ====== 2080-07-16 00:00:00 ====== [datetime.datetime(2080, 6, 1, 12, 0) datetime.datetime(2080, 9, 1, 12, 0)]
('JJA', 2081) ====== 2081-07-16 00:00:00 ====== [datetime.datetime(2081, 6, 1, 12, 0) datetime.datetime(2081, 9, 1, 12, 0)]
('JJA', 2082) ====== 2082-07-16 00:00:00 ====== [datetime.datetime(2082, 6, 1, 12, 0) datetime.datetime(2082, 9, 1, 12, 0)]
('JJA', 2083) ====== 2083-07-16 00:00:00 ====== [datetime.datetime(2083, 6, 1, 12, 0) datetime.datetime(2083, 9, 1, 12, 0)]
('JJA', 2084) ====== 2084-07-16 00:00:00 ====== [datetime.datetime(2084, 6, 1, 12, 0) datetime.datetime(2084, 9, 1, 12, 0)]
('JJA', 2085) ====== 2085-07-16 00:00:00 ====== [datetime.datetime(2085, 6, 1, 12, 0) datetime.datetime(2085, 9, 1, 12, 0)]
    >>> dict_temp_subset = time_subset.get_dict_temporal_slices(dt_arr=dt_arr, values_arr=v_arr, calend='gregorian', temporal_subset_mode='month', time_range=[datetime(2080, 1, 1), datetime(2082, 12, 31)])
>>>
>>> for key in dict_temp_subset.keys():
    >>> print(key, '======', dict_temp_subset[key][0], '======', dict_temp_subset[key][1])
(1, 2080) ====== 2080-01-16 00:00:00 ====== [datetime.datetime(2080, 1, 1, 12, 0) datetime.datetime(2080, 2, 1, 12, 0)]
(2, 2080) ====== 2080-02-16 00:00:00 ====== [datetime.datetime(2080, 2, 1, 12, 0) datetime.datetime(2080, 3, 1, 12, 0)]
(3, 2080) ====== 2080-03-16 00:00:00 ====== [datetime.datetime(2080, 3, 1, 12, 0) datetime.datetime(2080, 4, 1, 12, 0)]
(4, 2080) ====== 2080-04-16 00:00:00 ====== [datetime.datetime(2080, 4, 1, 12, 0) datetime.datetime(2080, 5, 1, 12, 0)]
(5, 2080) ====== 2080-05-16 00:00:00 ====== [datetime.datetime(2080, 5, 1, 12, 0) datetime.datetime(2080, 6, 1, 12, 0)]
(6, 2080) ====== 2080-06-16 00:00:00 ====== [datetime.datetime(2080, 6, 1, 12, 0) datetime.datetime(2080, 7, 1, 12, 0)]
(7, 2080) ====== 2080-07-16 00:00:00 ====== [datetime.datetime(2080, 7, 1, 12, 0) datetime.datetime(2080, 8, 1, 12, 0)]
(8, 2080) ====== 2080-08-16 00:00:00 ====== [datetime.datetime(2080, 8, 1, 12, 0) datetime.datetime(2080, 9, 1, 12, 0)]
(9, 2080) ====== 2080-09-16 00:00:00 ====== [datetime.datetime(2080, 9, 1, 12, 0) datetime.datetime(2080, 10, 1, 12, 0)]
(10, 2080) ====== 2080-10-16 00:00:00 ====== [datetime.datetime(2080, 10, 1, 12, 0) datetime.datetime(2080, 11, 1, 12, 0)]
(11, 2080) ====== 2080-11-16 00:00:00 ====== [datetime.datetime(2080, 11, 1, 12, 0) datetime.datetime(2080, 12, 1, 12, 0)]
(12, 2080) ====== 2080-12-16 00:00:00 ====== [datetime.datetime(2080, 12, 1, 12, 0) datetime.datetime(2081, 1, 1, 12, 0)]
(1, 2081) ====== 2081-01-16 00:00:00 ====== [datetime.datetime(2081, 1, 1, 12, 0) datetime.datetime(2081, 2, 1, 12, 0)]
(2, 2081) ====== 2081-02-16 00:00:00 ====== [datetime.datetime(2081, 2, 1, 12, 0) datetime.datetime(2081, 3, 1, 12, 0)]
(3, 2081) ====== 2081-03-16 00:00:00 ====== [datetime.datetime(2081, 3, 1, 12, 0) datetime.datetime(2081, 4, 1, 12, 0)]
...
'''
seconds_per_day = 86400.0
tunits = "seconds since 1600-01-01 00:00:00"
if type(values_arr)==list: # case of anomalies
values_arr=values_arr[0]
assert(values_arr.ndim == 3)
assert(dt_arr.ndim == 1)
assert(values_arr.shape[0] == dt_arr.shape[0])
return_dict = OrderedDict()
if temporal_subset_mode != None:
map_info_slice=get_map_info_slice(slice_mode=temporal_subset_mode)
###########################
## step 1: list of all years
if time_range == None:
years = util_dt.get_year_list(dt_arr)
else:
year_begin = time_range[0].year
year_end = time_range[1].year
all_years = numpy.array( util_dt.get_year_list(dt_arr) )
if temporal_subset_mode in ['DJF', 'ONDJFM']:
# if time_range is from 1995 to 2000: the "DJF" season of 2000 will be: December 2000 + January 2001 + February 2001
mask_years = numpy.logical_and(all_years >= year_begin, all_years <= year_end+1)
else:
mask_years = numpy.logical_and(all_years >= year_begin, all_years <= year_end)
years = all_years[mask_years]
years.sort()
## step 2: subset
# whole selected time range will be processed
if temporal_subset_mode is None:
dummy_time_units = "hours since 1901-01-01 12:00 UTC"
# cdftime = cftime.utime(dummy_time_units, calendar=calend)
if calend == '360_day':
if time_range[0].day > 30:
time_range[0] = cftime.datetime(time_range[0].year, time_range[0].month, 30, time_range[0].hour)
print("Warning: Specified time range is invalid: there are only 30 days in every month with the 360_day calendar. Truncating to 30 days per month.")
elif time_range[1].day > 30:
time_range[1] = cftime.datetime(time_range[1].year, time_range[1].month, 30, time_range[1].hour)
print("Warning: Specified time range is invalid: there are only 30 days in every month with the 360_day calendar. Truncating to 30 days per month.")
first_second = cftime.date2num(time_range[0], dummy_time_units, calendar=calend)
last_second = cftime.date2num(time_range[1], dummy_time_units, calendar=calend)
dt_centroid_second = first_second + (last_second - first_second) / 2.
dt_centroid = cftime.num2date(dt_centroid_second, dummy_time_units, calendar=calend)
dt_bounds = time_range
return_dict['whole_time_range', time_range[0].year, time_range[1].year] = (dt_centroid, dt_bounds, dt_arr, values_arr, fill_value)
# all or selected months of each year will be processed
elif temporal_subset_mode == 'month' or temporal_subset_mode[0] == 'month':
for y in years:
for m in map_info_slice[str(temporal_subset_mode)]['months']:
indices_dt_arr_non_masked_i = get_indices_temp_aggregation(dt_arr, month=m, year=y, f=0)
dt_arr_subset_i = dt_arr[indices_dt_arr_non_masked_i]
arr_subset_i = values_arr[indices_dt_arr_non_masked_i, :, :]
dt_centroid = datetime( y, m, map_info_slice[str(temporal_subset_mode)]['centroid_day'] )
dtt_num_i = util_dt.date2num(dt_arr_subset_i[-1], calend, tunits) + seconds_per_day
dtt_i = util_dt.num2date(dtt_num_i, calend=calend, units=tunits)
dt_bounds = numpy.array([ dt_arr_subset_i[0], dtt_i ]) # [ bnd1, bnd2 )
return_dict[m, y] = (dt_centroid, dt_bounds, dt_arr_subset_i, arr_subset_i, fill_value)
#print y
# simple seasons (standard or user defined) of each year will be processed
elif (temporal_subset_mode in ['MAM', 'JJA', 'SON', 'AMJJAS']) or (temporal_subset_mode[0] == 'season' and type(temporal_subset_mode[1]) is list):
for y in years:
indices_dt_arr_non_masked_year = get_indices_temp_aggregation(dt_arr, month=map_info_slice[str(temporal_subset_mode)]['months'], year=y, f=1)
if indices_dt_arr_non_masked_year.size==0:
continue
dt_arr_subset_i = dt_arr[indices_dt_arr_non_masked_year]
arr_subset_i = values_arr[indices_dt_arr_non_masked_year, :, :]
dt_centroid = datetime(y, map_info_slice[str(temporal_subset_mode)]['centroid_month'], map_info_slice[str(temporal_subset_mode)]['centroid_day'] )
dtt_num_i = util_dt.date2num(dt_arr_subset_i[-1], calend, tunits) + seconds_per_day
dtt_i = util_dt.num2date(dtt_num_i, calend=calend, units=tunits)
dt_bounds = numpy.array([ dt_arr_subset_i[0], dtt_i ]) # [ bnd1, bnd2 )
return_dict[str(temporal_subset_mode), y] = (dt_centroid, dt_bounds, dt_arr_subset_i, arr_subset_i, fill_value)
#print y
# composed seasons (standard or user defined) of each year will be processed
elif (temporal_subset_mode in ['DJF', 'ONDJFM']) or (temporal_subset_mode[0] == 'season' and type(temporal_subset_mode[1]) is tuple):
for y in years:
next_year = y+1
if next_year in years:
indices_dt_arr_non_masked_first_year = get_indices_temp_aggregation(dt_arr, month=map_info_slice[str(temporal_subset_mode)]['months'][0], year=y, f=1)
indices_dt_arr_non_masked_next_year = get_indices_temp_aggregation(dt_arr, month=map_info_slice[str(temporal_subset_mode)]['months'][1], year=next_year, f=1)
indices_dt_arr_non_masked_current_season = numpy.concatenate((indices_dt_arr_non_masked_first_year, indices_dt_arr_non_masked_next_year))
indices_dt_arr_non_masked_current_season.sort()
dt_arr_subset_i = dt_arr[indices_dt_arr_non_masked_current_season]
arr_subset_i = values_arr[indices_dt_arr_non_masked_current_season, :, :]
dt_centroid = datetime( next_year, map_info_slice[str(temporal_subset_mode)]['centroid_month'], map_info_slice[str(temporal_subset_mode)]['centroid_day'] )
dtt_num_i = util_dt.date2num(dt_arr_subset_i[-1], calend, tunits) + seconds_per_day
dtt_i = util_dt.num2date(dtt_num_i, calend=calend, units=tunits)
dt_bounds = numpy.array([ dt_arr_subset_i[0], dtt_i ]) # [ bnd1, bnd2 )
return_dict[str(temporal_subset_mode), y] = (dt_centroid, dt_bounds, dt_arr_subset_i, arr_subset_i, fill_value)
else:
pass
elif temporal_subset_mode == 'year':
for y in years:
indices_dt_arr_non_masked_i = get_indices_temp_aggregation(dt_arr, month=None, year=y, f=2)
dt_arr_subset_i = dt_arr[indices_dt_arr_non_masked_i]
arr_subset_i = values_arr[indices_dt_arr_non_masked_i, :, :]
dt_centroid = datetime( y, map_info_slice[str(temporal_subset_mode)]['centroid_month'], map_info_slice[str(temporal_subset_mode)]['centroid_day'] )
dtt_num_i = util_dt.date2num(dt_arr_subset_i[-1], calend, tunits) + seconds_per_day
dtt_i = util_dt.num2date(dtt_num_i, calend=calend, units=tunits)
dt_bounds = numpy.array([ dt_arr_subset_i[0], dtt_i ]) # [ bnd1, bnd2 )
return_dict[temporal_subset_mode, y] = (dt_centroid, dt_bounds, dt_arr_subset_i, arr_subset_i, fill_value)
#print y
return return_dict
def get_indices_temp_aggregation(dt_arr, month, year, f=0):
'''
Return indices used for temporal aggregation.
param dt_arr: datetime vector
type dt_arr: numpy.ndarray (1D) of datetime.datetime objects
param month: month
type month: int or list of int
param year: year
type year: int
param f: used for different kinds of temporal aggregations: ``0`` - monthly, ``1`` - seasonal, ``2`` - annual (default: 0)
type f: int
rtype: numpy.ndarray (1D)
'''
if f == 0: # used for monthly temporal aggregation
dt_arr_month = numpy.array([dt.month for dt in dt_arr])
dt_arr_mask_month = dt_arr_month != month
indices_non_masked_month = numpy.where(dt_arr_mask_month==False)[0]
dt_arr_year = numpy.array([dt.year for dt in dt_arr])
dt_arr_mask_year = dt_arr_year != year
indices_non_masked_year = numpy.where(dt_arr_mask_year==False)[0]
indices_non_masked = numpy.intersect1d(indices_non_masked_year, indices_non_masked_month)
elif f == 1: # used for seasonal temporal aggregation
indices_non_masked_month_glob = []
dt_arr_month = numpy.array([dt.month for dt in dt_arr])
for m in month:
dt_arr_mask_month = dt_arr_month != m
indices_non_masked_month = numpy.where(dt_arr_mask_month==False)[0]
indices_non_masked_month_glob.extend( list(indices_non_masked_month) )
dt_arr_year = numpy.array([dt.year for dt in dt_arr])
dt_arr_mask_year = dt_arr_year != year
indices_non_masked_year = numpy.where(dt_arr_mask_year==False)[0]
indices_non_masked = numpy.intersect1d(indices_non_masked_year, indices_non_masked_month_glob)
elif f == 2: # used for annual temporal aggregation
dt_arr_year = numpy.array([dt.year for dt in dt_arr])
dt_arr_mask_year = dt_arr_year != year
indices_non_masked = numpy.where(dt_arr_mask_year==False)[0]
return indices_non_masked
### This function is used for the bootstrapping procedure
def get_resampled_arrs(dt_arr, values_arr, year_to_eliminate, year_to_duplicate):
### "out-of-base" years ---> no resampling
if year_to_eliminate == year_to_duplicate == -9999:
return (dt_arr, values_arr)
### "in-base" years ---> resampling
else:
# step 1: we eliminate in-base year ("year_to_eliminate"), i.e. we subset our arrays (dt and values)
# we define indices where dt_arr != year_to_eliminate
dt_arr_years = numpy.array([dt.year for dt in dt_arr])
dt_arr_mask_year = dt_arr_years == year_to_eliminate
indices_non_masked = | numpy.where(dt_arr_mask_year==False) | numpy.where |
#------------------------------------------------------
# import
#------------------------------------------------------
import os
import argparse
import cv2
import numpy as np
#------------------------------------------------------
# global
#------------------------------------------------------
#------------------------------------------------------
# function
#------------------------------------------------------
class YOLO_OpenCV():
def __init__(self, config='yolov3.cfg', weights='yolov3.weights',
classfile='coco.names', width=416, height=416, score_th=0.7, nms_th=0.4):
self.in_size = (width, height)
self.height = height
self.score_th = score_th
self.nms_th = nms_th
self.model = cv2.dnn.readNetFromDarknet(config, weights)
self.model.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
self.model.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
with open(classfile, 'r') as f:
self.classes = f.read().rstrip('\n').split('\n')
def fit(self, img):
# (width, height, channel) -> (1, channel, width, height)
# pixel = pixel * scalefactor
# pixel = pixel - mean[R,G,B]
# if swapRB is True, BGR -> RGB
blob = cv2.dnn.blobFromImage(img, scalefactor=1/255, size=self.in_size,
mean=[0,0,0], swapRB=True, crop=False)
self.model.setInput(blob)
outs = self.model.forward(self._outlayers(self.model))
return self._postprocess(img, outs)
def _to_classname(self, classid):
if self.classes:
assert(classid < len(self.classes))
return self.classes[classid]
def _outlayers(self, model):
layersNames = self.model.getLayerNames()
output_layers = [layersNames[i[0] - 1] for i in self.model.getUnconnectedOutLayers()]
return output_layers
def _postprocess(self, img, outs):
img_height, img_width, _ = img.shape
classids = []
scores = []
boxes = []
for out in outs:
for detection in out:
# detection =
# [center_x, center_y, box_width, box_height,
# confidence(box enclose object), confidence(each class)....]
detected_scores = detection[5:]
classid = np.argmax(detected_scores)
score = detected_scores[classid]
if score > self.score_th:
center_x = int(detection[0] * img_width)
center_y = int(detection[1] * img_height)
box_width = int(detection[2] * img_width)
box_height = int(detection[3] * img_height)
left = max(0, int(center_x - box_width/2))
top = max(0, int(center_y - box_height/2))
classids.append(classid)
scores.append(float(score))
boxes.append([left, top, box_width, box_height])
out_classes = []
out_scores = []
out_boxes = []
indices = cv2.dnn.NMSBoxes(boxes, scores, self.score_th, self.nms_th)
for i in indices:
i = i[0]
left, top, box_width, box_height = boxes[i]
left = max(0, np.floor(left+0.5).astype(np.uint32))
top = max(0, np.floor(top+0.5).astype(np.uint32))
right = min(img_width, np.floor(left+box_width+0.5).astype(np.uint32))
bottom = min(img_height, | np.floor(top+box_height+0.5) | numpy.floor |
import numpy as np
import numpy.linalg as la
import scipy.interpolate as inter
import scipy.optimize as opt
from numpy.polynomial.legendre import leggauss
import numpy.random as ra
from neml.nlsolvers import MaximumIterations, MaximumSubdivisions, newton, scalar_newton
class Driver(object):
"""
Superclass of all drivers, basically just sets up history and reports
results.
"""
def __init__(self, model, verbose = False, rtol = 1.0e-6, atol = 1.0e-10,
miter = 25, T_init = 0.0, no_thermal_strain = False):
"""
Parameters:
model: material model to play with
verbose: verbose output
rtol: relative tolerance, where needed
atol: absolute tolerance, where needed
miter: maximum iterations, where needed
"""
self.model = model
self.verbose = verbose
self.rtol = rtol
self.atol = atol
self.miter = miter
self.nts = no_thermal_strain
self.stress_int = [np.zeros((6,))]
self.stored_int = [self.model.init_store()]
self.T_int = [T_init]
self.t_int = [0.0]
self.u_int = [0.0]
self.p_int = [0.0]
@property
def stress(self):
return np.array(self.stress_int)
@property
def stored(self):
return np.array(self.stored_int)
@property
def history(self):
return self.stored[:,:self.model.nhist]
@property
def T(self):
return np.array(self.T_int)
@property
def t(self):
return np.array(self.t_int)
@property
def u(self):
return np.array(self.u_int)
@property
def p(self):
return np.array(self.p_int)
class Driver_sd(Driver):
"""
Superclass of generic small strain drivers, contains generic step methods.
"""
def __init__(self, *args, **kwargs):
"""
Parameters:
model: material model to play with
verbose: verbose output
rtol: relative tolerance, where needed
atol: absolute tolerance, where needed
miter: maximum iterations, where needed
"""
super(Driver_sd, self).__init__(*args, **kwargs)
self.strain_int = [np.zeros((6,))]
self.thermal_strain_int = [np.zeros((6,))]
self.mechanical_strain_int = [np.zeros((6,))]
def solve_try(self, RJ, x0, extra = []):
"""
Try several different nonlinear solvers in the hope that at least
one will converge
Parameters:
RJ: function that returns the residual equations and associated
Jacobian
x0: initial guess
extra: list of extra solver functions of the type below
"""
def s1(x0i):
try:
x = newton(RJ, x0i, verbose = self.verbose,
rtol = self.rtol, atol = self.atol, miter = self.miter)
return x, True
except Exception:
return np.zeros((12,)), False
def s2(x0i):
try:
res = opt.root(RJ, x0i, jac = True, method = 'lm')
return res.x, res.success
except Exception:
return np.zeros((12,)), False
def s3(x0i):
try:
x = newton(RJ, x0i, verbose = self.verbose,
rtol = self.rtol, atol = self.atol, miter = self.miter,
linesearch = 'backtracking')
return x, True
except Exception:
return np.zeros((12,)), False
solvers = [s1,s3]
guesses = [x0] + extra
success = False
for xi in guesses:
for solv in solvers:
x, success = solv(xi)
if success:
break
if success:
break
if not success:
raise MaximumIterations()
return x
@property
def strain(self):
return np.array(self.strain_int)
@property
def thermal_strain(self):
return np.array(self.thermal_strain_int)
@property
def mechanical_strain(self):
return np.array(self.mechanical_strain_int)
def update_thermal_strain(self, T_np1):
"""
Move the thermal strains to the next step
Parameters:
T_np1: next temperature
"""
if self.nts:
return np.zeros((6,))
else:
dT = T_np1 - self.T_int[-1]
a_np1 = self.model.alpha(T_np1)
a_n = self.model.alpha(self.T_int[-1])
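        # Trapezoidal rule on the temperature-dependent expansion coefficient over the step;
        # only the three normal components of the strain vector pick up thermal strain.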
return self.thermal_strain_int[-1] + dT * (a_np1 + a_n) / 2 * np.array([1.0,1,1,0,0,0])
def strain_step(self, e_np1, t_np1, T_np1):
"""
Take a strain-controlled step
Parameters:
e_np1: next strain
t_np1: next time
T_np1: next temperature
"""
enext = self.update_thermal_strain(T_np1)
s_np1, h_np1, A_np1, u_np1, p_np1 = self.model.update_sd(e_np1 - enext,
self.mechanical_strain_int[-1],
T_np1, self.T_int[-1], t_np1, self.t_int[-1], self.stress_int[-1],
self.stored_int[-1], self.u_int[-1], self.p_int[-1])
self.strain_int.append(np.copy(e_np1))
self.mechanical_strain_int.append(e_np1 - enext)
self.thermal_strain_int.append(enext)
self.stress_int.append(np.copy(s_np1))
self.stored_int.append(np.copy(h_np1))
self.T_int.append(T_np1)
self.t_int.append(t_np1)
self.u_int.append(u_np1)
self.p_int.append(p_np1)
def stress_step(self, s_np1, t_np1, T_np1):
"""
Take a stress-controlled step
Parameters:
s_np1: next stress
t_np1: next time
T_np1: next temperature
"""
enext = self.update_thermal_strain(T_np1)
def RJ(e):
s, h, A, u, p = self.model.update_sd(e - enext, self.mechanical_strain_int[-1],
T_np1, self.T_int[-1], t_np1, self.t_int[-1],
self.stress_int[-1],
self.stored_int[-1], self.u_int[-1], self.p_int[-1])
R = s - s_np1
return R, A
if len(self.strain_int) > 1:
inc = self.strain_int[-1] - self.strain_int[-2]
extra = [self.strain_int[-1] + inc]
else:
extra = []
e_np1 = self.solve_try(RJ, self.strain_int[-1], extra = extra)
self.strain_step(e_np1, t_np1, T_np1)
def erate_step(self, sdir, erate, t_np1, T_np1,
einc_guess = None, ainc_guess = None):
"""
Drive in a given stress direction at a prescribed strain rate, like
an actual "stress controlled" experiment.
Parameters:
sdir: stress direction
erate: strain rate (in the direction)
t_np1: next time
T_np1: next temperature
einc_guess: a guess at the strain increment
ainc_guess: a guess at the stress increment
"""
sdir = sdir / la.norm(sdir)
dt = t_np1 - self.t_int[-1]
enext = self.update_thermal_strain(T_np1)
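        # Unknowns x: x[0] is the magnitude of the stress increment along sdir and x[1:]
        # is the full 6-component strain increment; the residual enforces both the stress
        # direction (s_np1 = s_n + a*sdir) and the prescribed strain rate resolved along sdir.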
def RJ(x):
a = x[0]
e_inc = x[1:]
s, h, A, u, p = self.model.update_sd(self.strain_int[-1] + e_inc - enext,
self.mechanical_strain_int[-1],
T_np1, self.T_int[-1], t_np1, self.t_int[-1], self.stress_int[-1],
self.stored_int[-1],
self.u_int[-1], self.p_int[-1])
R = np.zeros((7,))
J = np.zeros((7,7))
R[:6] = s - (sdir * a + self.stress_int[-1])
R[6] = np.dot(e_inc, sdir) / dt - erate
J[:6,0] = -sdir
J[:6,1:] = A
J[6,0] = 0.0
J[6,1:] = sdir / dt
return R, J
x0 = np.zeros((7,))
if einc_guess is not None:
x0[1:] = einc_guess
else:
x0[1:] = sdir / 10000.0
if ainc_guess is not None:
x0[0] = ainc_guess
else:
x0[0] = 1.0
x = self.solve_try(RJ, x0)
e_np1 = self.strain_int[-1] + x[1:]
self.strain_step(e_np1, t_np1, T_np1)
return x[1:], x[0]
def erate_einc_step(self, sdir, erate, einc, T_np1, **kwargs):
"""
Similar to erate_step but specify the strain increment instead of the
time increment.
Parameters:
sdir: stress direction
erate: strain rate, in stress direction
einc: strain increment, in stress direction
T_np1: temperature at next time step
"""
dt = einc / erate
return self.erate_step(sdir, erate, self.t_int[-1] + dt, T_np1, **kwargs)
def srate_sinc_step(self, sdir, srate, sinc, T_np1):
"""
        Similar to stress_step, but specify a stress rate and stress increment
        instead of the next stress and time.
Parameters:
sdir: stress direction
srate: stress rate
sinc: stress increment
T_np1: temperature at next time step
"""
if np.allclose(sdir, 0.0):
s_np1 = self.stress_int[-1]
else:
s_np1 = self.stress_int[-1] + sdir / la.norm(sdir) * sinc
if np.isclose(srate, 0.0):
dt = 0.0
else:
dt = np.abs(np.dot(s_np1 - self.stress_int[-1], sdir) / srate)
self.stress_step(s_np1, self.t_int[-1] + dt, T_np1)
def strain_hold_step(self, i, t_np1, T_np1, q = 1.0, E = -1.0):
"""
A special, mixed step which holds the strain in index i constant
while holding the stress in the other directions to their previous
values
Parameters:
i: index to hold
t_np1: next time
T_np1: next temperature
q: follow up factor
E: Young's modulus to use -- must redo interface at some point
"""
if not np.isclose(q, 1.0) and np.isclose(E, -1.0):
raise ValueError("You must supply the Youngs modulus")
enext = self.update_thermal_strain(T_np1)
oset = sorted(list(set(range(6)) - set([i])))
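        # Residual: the held component i obeys the follow-up relation
        #   de_i + (q - 1) * ds_i / E = 0   (q = 1 reduces to a pure strain hold),
        # while every other stress component is kept at its previous value.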
def RJ(e_np1):
s, h, A, u, p = self.model.update_sd(e_np1 - enext,
self.mechanical_strain_int[-1],
T_np1, self.T_int[-1], t_np1, self.t_int[-1], self.stress_int[-1],
self.stored_int[-1], self.u_int[-1], self.p_int[-1])
R = np.zeros((6,))
R[0] = (e_np1[i] - self.strain_int[-1][i]
) + (s[i] - self.stress_int[-1][i]) / E * (q - 1)
R[1:] = s[oset] - self.stress_int[-1][oset]
J = np.zeros((6,6))
J[0,0] = 1.0
J[0,:] += A[i,:] / E * (q - 1)
J[1:,:] = A[oset,:][:]
return R, J
x0 = np.copy(self.strain_int[-1])
e_np1 = self.solve_try(RJ, x0)
self.strain_step(e_np1, t_np1, T_np1)
def uniaxial_test(model, erate, T = 300.0, emax = 0.05, nsteps = 250,
sdir = np.array([1,0,0,0,0,0]), verbose = False,
offset = 0.2/100.0, history = None, tdir = np.array([0,1,0,0,0,0]),
rtol = 1e-6, atol = 1e-10, miter = 25):
"""
Make a uniaxial stress/strain curve
Parameters:
model: material model
erate: strain rate
Keyword Args:
T: temperature, default 300.0
emax: maximum strain, default 5%
nsteps: number of steps to use, default 250
sdir: stress direction, default tension in x
verbose: whether to be verbose
offset: used to calculate yield stress
history: initial model history
tdir: transverse direction for Poisson's ratio
Returns:
dict: results dictionary containing...
**Results in dictionary:**
================= ============================================
Name Description
================= ============================================
strain strain in direction
stress stress in direction
energy_density strain energy density
plastic_work plastic dissipation
youngs young's modulus of initial curve
yield yield stress implied by curve
poissons poisson's ratio implied by non-axial strains
================= ============================================
"""
e_inc = emax / nsteps
driver = Driver_sd(model, verbose = verbose, T_init = T, rtol = rtol,
atol = atol, miter = miter)
if history is not None:
driver.stored_int[0] = history
strain = [0.0]
stress = [0.0]
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T)
else:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T,
einc_guess = einc, ainc_guess = ainc)
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
strain = np.array(strain)
stress = np.array(stress)
# Calculate the yield stress and Young's modulus
E = np.abs(stress[1]) / np.abs(strain[1])
nu = -np.dot(driver.strain_int[1], tdir) / np.dot(
driver.strain_int[1], sdir)
sfn = inter.interp1d(np.abs(strain), np.abs(stress))
tfn = lambda e: E * (e - offset)
try:
sYe = opt.brentq(lambda e: sfn(e) - tfn(e), 0.0, np.max(strain))
sY = tfn(sYe)
except Exception:
sY = np.inf
return {'strain': strain, 'stress': stress,
'energy_density': np.copy(driver.u),
'plastic_work': np.copy(driver.p),
'youngs': E, 'yield': sY, 'poissons': nu,
'history': driver.stored_int[-1]}
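# Example use of uniaxial_test (illustrative sketch; `model` must be a NEML material model,
# and plotting assumes matplotlib is available):
#
#   res = uniaxial_test(model, erate=1.0e-4, T=550.0, emax=0.02, nsteps=200)
#   plt.plot(res['strain'], res['stress'])
#
# The returned 'youngs', 'yield' and 'poissons' entries give the initial elastic slope,
# the offset yield stress (0.2% offset by default) and the implied Poisson's ratio.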
def strain_cyclic(model, emax, R, erate, ncycles, T = 300.0, nsteps = 50,
sdir = np.array([1,0,0,0,0,0]), hold_time = None, n_hold = 25,
verbose = False, check_dmg = False, dtol = 0.75):
"""
Strain controlled cyclic test.
Parameters:
emax: maximum strain
R: R = emin / emax
erate: strain rate to go at
ncycles: number of cycles
T: temperature, default 300
Keyword Args:
nsteps: number of steps per half cycle
sdir: stress direction, defaults to x and tension first
hold_time: if None don't hold, if scalar then hold symmetrically top/bot
if an array specify different hold times for first direction
(default tension) and second direction
n_hold: number of steps to hold over
verbose: whether to be verbose
check_dmg: check to see if material damage exceeds dtol, stop the
simulation when that happens
dtol: damage to stop at
Returns:
dict: results dictionary containing...
**Results in dictionary:**
============= ========================
Name Description
============= ========================
strain: strain in direction
stress: stress in direction
cycles: list of cycle numbers
max: maximum stress per cycle
min: minimum stress per cycle
mean: mean stress per cycle
============= ========================
"""
# Setup
driver = Driver_sd(model, verbose = verbose, T_init = T)
emin = emax * R
if hold_time:
if np.isscalar(hold_time):
hold_time = [hold_time, hold_time]
else:
hold_time = [0,0]
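    # hold_time is now [tension_hold, compression_hold]; each hold is applied as n_hold
    # zero-strain-rate relaxation steps in the corresponding part of the cycle.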
# Setup results
strain = [0.0]
stress = [0.0]
time = [0.0]
cycles = []
smax = []
smin = []
smean = []
ecycle = []
pcycle = []
# First half cycle
if verbose:
print("Initial half cycle")
e_inc = emax / nsteps
try:
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T)
else:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T, einc_guess = einc,
ainc_guess = ainc)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + e_inc / erate)
except Exception as e:
print("Failed to make first half cycle")
raise e
# Begin cycling
for s in range(ncycles):
if verbose:
print("Cycle %i" % s)
try:
# Tension hold
if hold_time[0] > 0.0:
dt = hold_time[0] / n_hold
for i in range(n_hold):
einc, ainc = driver.erate_step(sdir, 0.0, time[-1] + dt, T,
einc_guess = np.zeros((6,)), ainc_guess = -1)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + dt)
si = len(driver.strain_int)
e_inc = np.abs(emin - emax) / nsteps
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(-sdir, erate, e_inc, T,
einc_guess = -einc, ainc_guess = -ainc)
else:
einc, ainc = driver.erate_einc_step(-sdir, erate, e_inc, T,
einc_guess = einc, ainc_guess = ainc)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + e_inc / erate)
# Compression hold
if hold_time[1] > 0.0:
dt = hold_time[1] / n_hold
for i in range(n_hold):
einc, ainc = driver.erate_step(sdir, 0.0, time[-1] + dt, T,
einc_guess = np.zeros((6,)), ainc_guess = -1)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + dt)
e_inc = np.abs(emax - emin) / nsteps
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T,
einc_guess = -einc, ainc_guess = -ainc)
else:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T,
einc_guess = einc, ainc_guess = ainc)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + e_inc / erate)
# Calculate
if np.isnan(max(stress[si:])) or np.isnan(min(stress[si:])):
break
cycles.append(s)
smax.append(max(stress[si:]))
smin.append(min(stress[si:]))
smean.append((smax[-1]+smin[-1])/2)
ecycle.append(driver.u_int[-1])
pcycle.append(driver.p_int[-1])
except Exception as e:
break
# Setup and return
return {"strain": np.array(strain), "stress": np.array(stress),
"cycles": np.array(cycles, dtype = int), "max": np.array(smax),
"min": np.array(smin), "mean": np.array(smean),
"energy_density": np.array(ecycle), "plastic_work": np.array(pcycle),
"history": driver.stored_int[-1], "time": np.array(time)}
def strain_cyclic_extrapolated(model, emax, R, erate, ncycles, T = 300.0, nsteps = 50,
sdir = np.array([1,0,0,0,0,0]), hold_time = None, n_hold = 25,
verbose = False, check_dmg = False, dtol = 0.75, min_cycle=3, unit_extrapolate = 10,
jump_delta_N=10, allowable_jump_stress=5.0):
"""
Strain controlled cyclic test extrapolation.
Extra Keyword Args:
min_cycle minimum cycles to start the extrapolation process
unit_extrapolate number of cycles to perform single cycle extrapolation
jump_delta_N number of cycles to jump
allowable_jump_stress extrapolate when stress jump is within this limit
Returns:
dict: results dictionary containing...
**Results in dictionary:**
============= ========================
Name Description
============= ========================
cycles: list of cycle numbers
max: maximum stress per cycle
min: minimum stress per cycle
============= ========================
"""
# Setup
driver = Driver_sd(model, verbose = verbose, T_init = T)
emin = emax * R
if hold_time:
if np.isscalar(hold_time):
hold_time = [hold_time, hold_time]
else:
hold_time = [0,0]
# Setup results
strain = [0.0]
stress = [0.0]
time = [0.0]
cycles = []
smax = []
smin = []
smean = []
ecycle = []
pcycle = []
# First half cycle
if verbose:
print("Initial half cycle")
e_inc = emax / nsteps
try:
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T)
else:
einc, ainc = driver.erate_einc_step(sdir, erate, e_inc, T, einc_guess = einc,
ainc_guess = ainc)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + e_inc / erate)
except Exception as e:
print("Failed to make first half cycle")
raise e
s = 0
# steps in one cycle
if (hold_time[0] > 0) and (hold_time[1] == 0):
steps = 2*nsteps + n_hold
elif (hold_time[1] > 0) and (hold_time[0] == 0):
steps = 2*nsteps + n_hold
elif (hold_time[0] > 0) and (hold_time[1] > 0):
steps = 2*nsteps + 2*n_hold
else:
steps = 2*nsteps
extrapolate = False
while s < ncycles:
if verbose:
print("Cycle %i" % s)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
print("Damage check exceeded")
break
if (s >= min_cycle) and (extrapolate == True): # No extrapolation before min_cycle
if (s <= unit_extrapolate): # single cycle jump for first unit_extrapolate cycles
delta_N = 1
else:
delta_N = jump_delta_N # specified cycles to jump
n = len(driver.stored_int)
# extrapolating history
pos_hist_last_last = driver.stored_int[n - 1 - steps]
pos_hist_last = driver.stored_int[n-1]
dN_1 = cycles[-1] - cycles[-2]
pos_extrapolated_history = pos_hist_last + (pos_hist_last - pos_hist_last_last)*delta_N/dN_1
# extrapolating smax
smax_last_last = smax[-2]
smax_last = smax[-1]
extrapolated_smax = smax_last + (smax_last - smax_last_last)*delta_N/dN_1
            # extrapolating smin
smin_last_last = smin[-2]
smin_last = smin[-1]
extrapolated_smin = smin_last + (smin_last - smin_last_last)*delta_N/dN_1
# criteria for extrapolation
pos_stress_last_last = driver.stress_int[n - 1 - 2*steps]
pos_stress_last = driver.stress_int[n-1]
pos_extrapolated_stress = pos_stress_last + (pos_stress_last - pos_stress_last_last)*delta_N/dN_1
stress_jump = pos_extrapolated_stress[0] - pos_stress_last[0]
if np.fabs(stress_jump) <= allowable_jump_stress:
s = s + delta_N
if s > ncycles:
break
driver.stored_int.append(pos_extrapolated_history)
driver.stress_int.append(pos_extrapolated_stress)
smax.append(extrapolated_smax)
smin.append(extrapolated_smin)
cycles.append(s)
extrapolate = False
else:
extrapolate = False
else:
try:
if hold_time[0] > 0.0:
dt = hold_time[0] / n_hold
for i in range(n_hold):
einc, ainc = driver.erate_step(sdir, 0.0, time[-1] + dt, T,
einc_guess = np.zeros((6,)), ainc_guess = -1)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + dt)
si = len(driver.strain_int)
e_inc = np.abs(emin - emax) / nsteps
for i in range(nsteps):
if i == 0:
einc, ainc = driver.erate_einc_step(-sdir, erate, e_inc, T,
einc_guess = -einc, ainc_guess = -ainc)
else:
einc, ainc = driver.erate_einc_step(-sdir, erate, e_inc, T,
einc_guess = einc, ainc_guess = ainc)
if check_dmg:
if driver.stored_int[-1][0] > dtol:
raise Exception("Damage check exceeded")
strain.append(np.dot(driver.strain_int[-1], sdir))
stress.append(np.dot(driver.stress_int[-1], sdir))
time.append(time[-1] + e_inc / erate)
# Compression hold
if hold_time[1] > 0.0:
dt = hold_time[1] / n_hold
for i in range(n_hold):
einc, ainc = driver.erate_step(sdir, 0.0, time[-1] + dt, T,
einc_guess = | np.zeros((6,)) | numpy.zeros |
import numpy as np
from scipy import stats
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from scipy.stats import kde
def print_stats(labels_test, labels_predict):
    '''
    Calculate the following statistics from machine learning tests:
    RMSE, bias, and linear regression (linregress) results.
    '''
# labels_predict = regr.predict(features_test)
sqerr_sum_i = 0
sqerr_sum_pl = 0
sqerr_sum_w = 0
b_i = 0
b_pl = 0
b_w = 0
rmse_counter = 0
average_sum = 0
for i in range(len(labels_test)):
b_i += np.subtract(labels_predict[i][0], labels_test[i][0])
b_pl += np.subtract(labels_predict[i][1], labels_test[i][1])
b_w += | np.subtract(labels_predict[i][2], labels_test[i][2]) | numpy.subtract |
import os
from concurrent.futures import ThreadPoolExecutor
from copy import deepcopy
from dataclasses import dataclass
from functools import partial
from pathlib import Path
from typing import (
Any,
Callable,
Hashable,
Iterator,
List,
Mapping,
MutableMapping,
MutableSequence,
Optional,
Sequence,
Tuple,
Union,
)
import numpy as np
import xarray as xr
from read_roi import read_roi_zip
from skimage import draw
from skimage import io
from skimage.measure import regionprops
from skimage.measure._regionprops import _RegionProperties
from starfish.core.imagestack.imagestack import ImageStack
from starfish.core.morphology.label_image import LabelImage
from starfish.core.morphology.util import (
_get_axes_names,
_normalize_physical_ticks,
_normalize_pixel_ticks,
)
from starfish.core.types import ArrayLike, Axes, Coordinates, Number
from starfish.core.util.logging import Log
from .expand import fill_from_mask
@dataclass
class MaskData:
binary_mask: np.ndarray
offsets: Tuple[int, ...]
region_properties: Optional[_RegionProperties]
class BinaryMaskCollection:
"""Collection of binary masks with a dict-like access pattern.
Parameters
----------
pixel_ticks : Union[Mapping[Axes, ArrayLike[int]], Mapping[str, ArrayLike[int]]]
A map from the axis to the values for that axis.
physical_ticks : Union[Mapping[Coordinates, ArrayLike[Number]], Mapping[str, ArrayLike[Number]]]
A map from the physical coordinate type to the values for axis. For 2D label images,
X and Y physical coordinates must be provided. For 3D label images, Z physical
coordinates must also be provided.
masks : Sequence[MaskData]
A sequence of data for binary masks.
Attributes
----------
max_shape : Mapping[Axes, int]
Maximum index of contained masks.
"""
def __init__(
self,
pixel_ticks: Union[Mapping[Axes, ArrayLike[int]], Mapping[str, ArrayLike[int]]],
physical_ticks: Union[Mapping[Coordinates, ArrayLike[Number]],
Mapping[str, ArrayLike[Number]]],
masks: Sequence[MaskData],
log: Optional[Log],
):
self._pixel_ticks: Mapping[Axes, ArrayLike[int]] = _normalize_pixel_ticks(pixel_ticks)
self._physical_ticks: Mapping[Coordinates, ArrayLike[Number]] = _normalize_physical_ticks(
physical_ticks)
self._masks: MutableMapping[int, MaskData] = {}
self._log: Log = log or Log()
for ix, mask_data in enumerate(masks):
if mask_data.binary_mask.ndim not in (2, 3):
raise TypeError(f"expected 2 or 3 dimensions; got {mask_data.binary_mask.ndim}")
if mask_data.binary_mask.dtype != bool:
raise ValueError(f"expected dtype of bool; got {mask_data.binary_mask.dtype}")
self._masks[ix] = mask_data
if len(self._pixel_ticks) != len(self._physical_ticks):
raise ValueError(
"pixel_ticks should have the same cardinality as physical_ticks")
for axis, coord in zip(*_get_axes_names(len(self._pixel_ticks))):
if axis not in self._pixel_ticks:
raise ValueError(f"pixel ticks missing {axis.value} data")
if coord not in self._physical_ticks:
raise ValueError(f"physical coordinate ticks missing {coord.value} data")
if len(self._pixel_ticks[axis]) != len(self._physical_ticks[coord]):
raise ValueError(
f"pixel ticks for {axis.name} does not have the same cardinality as physical "
f"coordinates ticks for {coord.name}")
def __getitem__(self, index: int) -> xr.DataArray:
return self._format_mask_as_xarray(index)
def __iter__(self) -> Iterator[Tuple[int, xr.DataArray]]:
for mask_index in self._masks.keys():
yield mask_index, self._format_mask_as_xarray(mask_index)
def __len__(self) -> int:
return len(self._masks)
def _format_mask_as_xarray(self, index: int) -> xr.DataArray:
"""Convert a np-based mask into an xarray DataArray."""
mask_data = self._masks[index]
max_mask_name_len = len(str(len(self._masks) - 1))
xr_dims: MutableSequence[str] = []
xr_coords: MutableMapping[Hashable, Any] = {}
for ix, (axis, coord) in enumerate(zip(*_get_axes_names(len(self._pixel_ticks)))):
xr_dims.append(axis.value)
start_offset = mask_data.offsets[ix]
end_offset = mask_data.offsets[ix] + mask_data.binary_mask.shape[ix]
xr_coords[axis.value] = self._pixel_ticks[axis][start_offset:end_offset]
xr_coords[coord.value] = (
axis.value, self._physical_ticks[coord][start_offset:end_offset])
return xr.DataArray(
mask_data.binary_mask,
dims=xr_dims,
coords=xr_coords,
name=f"{index:0{max_mask_name_len}d}"
)
def uncropped_mask(self, index: int) -> xr.DataArray:
"""Some of the binary mask collections builders will crop the binary masks when constructing
the collection. The purpose is to exclude the regions of the image that are entirely False.
Use this method to obtain the mask sized according to the pixel and physical shape provided
for the entire binary mask collection, use this method."""
mask_data = self._masks[index]
uncropped_shape = tuple(
len(self._pixel_ticks[axis])
for axis, _ in zip(*_get_axes_names(len(self._pixel_ticks)))
)
if uncropped_shape == mask_data.binary_mask.shape:
return self._format_mask_as_xarray(index)
max_mask_name_len = len(str(len(self._masks) - 1))
xr_dims: MutableSequence[str] = []
xr_coords: MutableMapping[Hashable, Any] = {}
for ix, (axis, coord) in enumerate(zip(*_get_axes_names(len(self._pixel_ticks)))):
xr_dims.append(axis.value)
xr_coords[axis.value] = self._pixel_ticks[axis]
xr_coords[coord.value] = (axis.value, self._physical_ticks[coord])
image = np.zeros(
shape=tuple(
len(self._pixel_ticks[axis])
for axis, _ in zip(*_get_axes_names(len(self._pixel_ticks)))
),
dtype=bool,
)
fill_from_mask(
mask_data.binary_mask,
mask_data.offsets,
1,
image,
)
return xr.DataArray(
image,
dims=xr_dims,
coords=xr_coords,
name=f"{index:0{max_mask_name_len}d}"
)
def masks(self) -> Iterator[xr.DataArray]:
for mask_index in self._masks.keys():
yield self._format_mask_as_xarray(mask_index)
def mask_regionprops(self, mask_id: int) -> _RegionProperties:
"""
Return the region properties for a given mask.
Parameters
----------
mask_id : int
The mask ID for the mask.
Returns
-------
_RegionProperties
The region properties for that mask.
"""
mask_data = self._masks[mask_id]
if mask_data.region_properties is None:
# recreate the label image (but with just this mask)
image = np.zeros(
shape=tuple(
len(self._pixel_ticks[axis])
for axis, _ in zip(*_get_axes_names(len(self._pixel_ticks)))
),
dtype=np.uint32,
)
fill_from_mask(
mask_data.binary_mask,
mask_data.offsets,
mask_id + 1,
image,
)
measured_region_props = regionprops(image)
assert len(measured_region_props) == 1
mask_data.region_properties = measured_region_props[0]
return mask_data.region_properties
@property
def max_shape(self) -> Mapping[Axes, int]:
return {
axis: len(self._pixel_ticks[axis])
for ix, (axis, _) in enumerate(zip(*_get_axes_names(len(self._pixel_ticks))))
}
@property
def log(self) -> Log:
return self._log
@classmethod
def from_label_image(cls, label_image: LabelImage) -> "BinaryMaskCollection":
"""Creates binary masks from a label image. Masks are cropped to the smallest size that
contains the non-zero values, but pixel and physical coordinates ticks are retained. Masks
extracted from BinaryMaskCollections will be cropped. To extract masks sized to the
original label image, use :py:meth:`starfish.BinaryMaskCollection.uncropped_mask`.
Parameters
----------
label_image : LabelImage
LabelImage to extract binary masks from.
Returns
-------
masks : BinaryMaskCollection
Masks generated from the label image.
"""
props = regionprops(label_image.xarray.data)
pixel_ticks = {
axis.value: label_image.xarray.coords[axis.value]
for axis, _ in zip(*_get_axes_names(label_image.xarray.ndim))
if axis.value in label_image.xarray.coords
}
physical_ticks = {
coord.value: label_image.xarray.coords[coord.value]
for _, coord in zip(*_get_axes_names(label_image.xarray.ndim))
if coord.value in label_image.xarray.coords
}
masks: Sequence[MaskData] = [
MaskData(prop.image, prop.bbox[:label_image.xarray.ndim], prop)
for prop in props
]
log = deepcopy(label_image.log)
return cls(
pixel_ticks,
physical_ticks,
masks,
log,
)
@classmethod
def from_fiji_roi_set(
cls, path_to_roi_set_zip: Union[str, Path], original_image: ImageStack
) -> "BinaryMaskCollection":
"""
Construct BinaryMaskCollection from external Fiji ROI set.
Parameters
----------
path_to_roi_set_zip : Union[str, Path]
Path to an external fiji roi file
original_image : ImageStack
image from same FOV used in fiji segmentation workflow
Returns
--------
BinaryMaskCollection
Notes
-----
This method only supports construction of masks from 2D polygons
at this time.
"""
roi_set = read_roi_zip(path_to_roi_set_zip)
# Get the physical ticks from the original dapi image
physical_ticks = {Coordinates.Y: original_image.xarray.yc.values,
Coordinates.X: original_image.xarray.xc.values}
# Get the pixel values from the original dapi image
pixel_ticks = {Axes.Y: original_image.xarray.y.values,
Axes.X: original_image.xarray.x.values}
masks: List[MaskData] = []
# for each region (and its properties):
for label, roi in enumerate(roi_set.values()):
polygon = np.array([roi[Axes.Y.value], roi[Axes.X.value]]).T
y_min, x_min = np.floor(np.amin(polygon, axis=0)).astype(int)
y_max, x_max = np.floor(np.amax(polygon, axis=0)).astype(int)
vertex_row_coords, vertex_col_coords = polygon.T
vertex_col_coords -= vertex_col_coords.min()
vertex_row_coords -= vertex_row_coords.min()
# draw a mask from the polygon
y_size = y_max - y_min + 1 # type: ignore
x_size = x_max - x_min + 1 # type: ignore
shape = y_size, x_size
mask = np.zeros(shape, dtype=bool)
import ipdb
import torch
import torch.nn.functional as F
import time
import os
import sys
import numpy as np
from numpy import nonzero
from imageio import imwrite
from utils import AverageMeter
from models.sal_losses import cc_score, nss_score, similarity, auc_judd, auc_shuff_np
def normalize_data(data):
data_min = np.min(data)
data_max = np.max(data)
data_norm = np.clip((data - data_min) *
(255.0 / (data_max - data_min)),
0, 255).astype(np.uint8)
return data_norm
def save_video_results(output_buffer, save_path):
video_outputs = torch.stack(output_buffer)
for i in range(video_outputs.size()[0]):
save_name = os.path.join(save_path, 'pred_sal_{0:06d}.jpg'.format(i + 9))
imwrite(save_name, normalize_data(video_outputs[i][0].numpy()))
def test(data_loader, model, opt):
print('test')
model.eval()
with torch.no_grad():
batch_time = AverageMeter()
data_time = AverageMeter()
end_time = time.time()
output_buffer = []
previous_video_id = ''
cc = AverageMeter()
nss = AverageMeter()
sim = AverageMeter()
auc_j = AverageMeter()
for i, (data, targets, valid) in enumerate(data_loader):
data_time.update(time.time() - end_time)
if not opt.no_cuda:
targets['salmap'] = targets['salmap'].cuda()
targets['binmap'] = targets['binmap'].cuda()
valid['sal'] = valid['sal'].cuda()
inputs = data['rgb']
curr_batch_size = inputs.size()[0]
targets['salmap'] = targets['salmap'].float()
targets['binmap'] = targets['binmap'].float()
valid['sal'] = valid['sal'].float()
while inputs.size()[0] < opt.batch_size:
inputs = torch.cat((inputs, inputs[0:1, :]), 0)
while data['audio'].size(0) < opt.batch_size:
data['audio'] = torch.cat((data['audio'], data['audio'][0:1, :]), 0)
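# The last (partial) batch is padded by repeating its first sample so the model always sees a
# full batch; only the first curr_batch_size predictions are kept below.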
outputs = model(inputs, data['audio'])
ipdb.set_trace()
outputs['sal'][-1] = outputs['sal'][-1][0:curr_batch_size, :]
cc_test = cc_score(outputs['sal'][-1], targets['salmap'], valid['sal'])
nss_test = nss_score(outputs['sal'][-1], targets['binmap'], valid['sal'])
sim_test = similarity(outputs['sal'][-1], targets['salmap'])
auc_j_test = auc_judd(outputs['sal'][-1], targets['binmap'])
auc_j.update(torch.mean(auc_j_test), nonzero(valid['sal'])[:, 0].size(0))
if not opt.no_sigmoid_in_test:
outputs['sal'] = torch.sigmoid(outputs['sal'][-1])
if sum(valid['sal']) > 0:
cc_tmp = cc_test / nonzero(valid['sal'])[:, 0].size(0)
"""
Contains miscellaneous light sources.
Each light source must implement 3 methods:
- illuminates(point, scene): returns True if the light reaches given point on scene, False otherwise
- get_light_intensity_at(point): returns color of the light at given point ((0, 0, 0) represents no illumination)
- get_light_vector_at(point): returns normalized light ray direction at given point (might be None)
Values returned by get_light_intensity_at and get_light_vector_at must be numpy arrays with 3 values (range 0 - 255).
"""
import numpy as np
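# Illustrative sketch (not part of the original module): how a renderer might combine these
# light sources for a simple diffuse term. `lights`, `scene`, `point`, and `normal` are assumed
# to come from the ray tracer; only the three-method light API documented above is real.
#
#   total = np.zeros(3)
#   for light in lights:
#       if light.illuminates(point, scene):
#           intensity = light.get_light_intensity_at(point)
#           direction = light.get_light_vector_at(point)   # None for ambient light
#           diffuse = 1.0 if direction is None else max(0.0, -float(np.dot(direction, normal)))
#           total += intensity * diffuse
#   color = np.clip(total, 0, 255)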
class Point:
"""
Represents a point light source.
Light rays from this light source spread in all directions from light position (360 degree angle).
Light intensity fades with distance (inverse square falloff). Points located further than
max_lighting_distance will not be illuminated (light color at those points will be (0, 0, 0)).
It can be used to represent lamps.
"""
def __init__(self, color=(255, 255, 255), position=(0, 0, 0), max_lighting_distance=30):
"""
:param color: represents light color (as a tuple of 3 values, range 0 - 255)
:param position: represents light source position
:param max_lighting_distance: distance of effective illumination from this light source
"""
self.color = np.array(color)
self.position = np.array(position)
self.max_lighting_distance = max_lighting_distance
def illuminates(self, point, scene):
light_vector = self.position - point
light_vector_len = np.linalg.norm(light_vector)
return not scene.check_collision(point, light_vector / light_vector_len, far=light_vector_len)
def get_light_intensity_at(self, point):
distance = np.linalg.norm(point - self.position)
if distance > self.max_lighting_distance:
return np.array((0, 0, 0))
else:
factor = ((self.max_lighting_distance - distance) / self.max_lighting_distance)**2
return self.color * factor
def get_light_vector_at(self, point):
light_vector = point - self.position
light_vector /= np.linalg.norm(light_vector)
return light_vector
class Ambient:
"""
Represents ambient (scattered) light.
This light does not have a source, therefore light vector will always be None.
The light reaches every point on the scene and never fades (light intensity is
constant). This light can be used to brighten the scene.
"""
def __init__(self, color=(127, 127, 127)):
"""
:param color: represents light color (as a tuple of 3 values, range 0 - 255)
"""
self.color = np.array(color)
"""
@author: <NAME>
@use_as: python3 DNN.py mix/path/mix.wav sp1/path/sp1.wav sp2/path/sp2.wav {"train", "test"}
Train and/or Test of the DNN model.
"""
import ipykernel # training progress bars working seamlessly
import numpy as np
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
import librosa
import soundfile as sf
from utils import sig_len, ring_a_bell
import models
import f_eval
import argparse
from datetime import datetime
import os
parser = argparse.ArgumentParser()
parser.add_argument("mix", help="Mix Path: the path of the directory where the .wav file of the mix is located")
parser.add_argument("sp1",
help="Source 1 Path: the path of the directory the .wav file of the first speaker is located")
parser.add_argument("sp2", help="Source 2 Path: the path of the directory the .wav file of the second speaker is "
"located")
parser.add_argument('mode', choices=['train', 'test'], help='Operation Mode. Possible values: {train, test}')
args = parser.parse_args()
print(args)
og_sig = args.sp1
mode = args.mode
# Load sound files, invoke STFT, use it as input for the network
sp1, Fs = librosa.load(args.sp1, sr=None)
sp1 = librosa.util.fix_length(sp1, len(sp1) + 512 // 2)
sp1 = librosa.core.stft(sp1, n_fft=512, hop_length=256, window='hann', center=True, pad_mode='reflect')
sp1 = sp1.T
sp2, Fs = librosa.load(args.sp2, sr=None)
sp2 = librosa.util.fix_length(sp2, len(sp2) + 512 // 2)
sp2 = librosa.core.stft(sp2, n_fft=512, hop_length=256, window='hann', center=True, pad_mode='reflect')
sp2 = sp2.T
mix, Fs = librosa.load(args.mix, sr=None)
mix = librosa.util.fix_length(mix, len(mix) + 512 // 2)
mix = librosa.core.stft(mix, n_fft=512, hop_length=256, window='hann', center=True, pad_mode='reflect')
mix = mix.T
# Parameters set-up, used throughout the file.
bins = np.size(mix, 1)
ep = 200
b = 16
p = 5
hl_nodes = 260
tr_ratio = 0.75
tst_ratio = 0.7
sr_out = 16000
n = round((1 - tr_ratio) * tst_ratio * sig_len(og_sig))
# IRM Masks set-up
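# The ideal ratio mask (IRM) for speaker i is |S_i| / (|S_1| + |S_2|) per time-frequency bin;
# NaNs from silent (0/0) bins are zeroed and log1p compresses the dynamic range (inverted with
# expm1 at test time).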
mask = np.divide(np.abs(sp1), np.add(np.abs(sp1), np.abs(sp2)))
mask[np.isnan(mask)] = 0
mask = np.log1p(mask)
mask[np.isnan(mask)] = 0
mask2 = np.divide(np.abs(sp2), np.add(np.abs(sp1), np.abs(sp2)))
mask2[np.isnan(mask2)] = 0
mask2 = np.log1p(mask2)
mask2[np.isnan(mask2)] = 0
x = np.abs(mix)
x = np.log1p(x)
x[np.isnan(x)] = 0
y = mask
y2 = mask2
# Split Train and Test set
X_train, x_eval, Y_train, y_eval = train_test_split(x, y, train_size=tr_ratio, shuffle=False)
X_eval, X_test, Y_eval, Y_test = train_test_split(x_eval, y_eval, test_size=tst_ratio, shuffle=False)
X_train, x_eval, Y_train2, y_eval2 = train_test_split(x, y2, train_size=tr_ratio, shuffle=False)
X_eval, X_test, Y_eval2, Y_test2 = train_test_split(x_eval, y_eval2, test_size=tst_ratio, shuffle=False)
og_shape = Y_test.shape
# DNN Model Train. Model Saved upon completion.
if mode == "train":
model = models.bl_dnn_mimo(bins, hl_nodes)
tf.keras.utils.plot_model(model, show_shapes=True, show_layer_names=True, to_file='dnn.png')
es = tf.keras.callbacks.EarlyStopping(monitor='loss', mode='min', verbose=1, patience=p, restore_best_weights=True)
model.fit(X_train, [Y_train, Y_train2], validation_data=(X_eval, [Y_eval, Y_eval2]), epochs=ep, batch_size=b,
callbacks=[es])
model.save('DNN.hdf5')
ring_a_bell()
# DNN Model Test. Output is the reconstructed .wav files for each speaker. Evaluation follows.
if mode == "test":
# directory creator for results
ts = datetime.fromtimestamp(datetime.timestamp(datetime.now()))
ack = ""
if "4608" in str(args.mix): ack = "ACK"
else: ack = "NACK"
if "2472" in str(args.mix): ack = "NACK"
ack += "_" + str(args.mix[-7:-6]) + "dB" # manipulate indexes if double-triple + decimal point digit dB SNR,
# e.g. 5.2dB, 12dB, 12.5dB
out_path = "Results/DNN/" + str(ts) + "__" + str(ack) + "/"
if not os.path.exists(out_path):
os.makedirs(out_path)
model = keras.models.load_model('DNN.hdf5', compile=False)
m, m2 = model.predict(X_test)
m = np.reshape(m, og_shape)
m = np.expm1(m)
s1est = np.multiply(m, mix[-np.size(m, 0):])
m2 = np.reshape(m2, og_shape)
m2 = np.expm1(m2)
s2est = np.multiply(m2, mix[-np.size(m2, 0):])
print('MAS Speaker 1: ', mean_absolute_error(np.abs(s1est), np.abs(sp1[-np.size(s1est, 0):])))
print('MAS Speaker 2: ', mean_absolute_error(np.abs(s2est), np.abs(sp2[-np.size(s2est, 0):])))
import os
import numpy as np
from scipy import stats
import hickle
from tqdm import tqdm
import matplotlib
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = [8.0, 6.0]
mpl.rcParams['font.size'] = 20
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['font.sans-serif'] = 'Arial'
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.spines.right'] = False
plt.rcParams['ps.useafm'] = True
plt.rcParams['pdf.use14corefonts'] = True
np.random.seed(0)
def classify(resp):
"""
resp: (2, n_movies, nt, n_units)
"""
nt2 = stim_frames + post_frames
# responsiveness
resp_mean = np.mean(resp, axis=1) # (2, nt, n_units)
resp_std = np.std(resp, axis=1) # (2, nt, n_units)
base_mean = np.mean(resp_mean[:, pre_frames // 2: pre_frames, :], axis=1, keepdims=True) # (2, 1, n_units)
base_std = np.mean(resp_std[:, pre_frames // 2: pre_frames, :], axis=1, keepdims=True) # (2, 1, n_units)
d_resp = np.abs(resp_mean[:, pre_frames:, :] - base_mean)
d_resp = np.where(d_resp > (resp_std[:, pre_frames:, :] + base_std) * 0.5, d_resp, 0) # (2, 180, n_units)
# non-responsive cell
c = np.sum(d_resp != 0, axis=1) < nt2 * 0.1 # (2, n_units)
flat_idx = np.where(np.sum(c, axis=0) == n_stims)[0]
return set(flat_idx.tolist())
def is_flat_old(_resp):
"""
_resp: (2, n_movies, nt)
"""
nt2 = stim_frames + post_frames
# responsiveness
resp_mean = np.mean(_resp, axis=1) # (2, nt)
resp_std = np.std(_resp, axis=1) # (2, nt)
base_mean = np.mean(resp_mean[:, pre_frames // 2: pre_frames], axis=1, keepdims=True) # (2, 1)
base_std = np.mean(resp_std[:, pre_frames // 2: pre_frames], axis=1, keepdims=True) # (2, 1)
d_resp = np.abs(resp_mean[:, pre_frames:] - base_mean)
d_resp = np.where(d_resp > (resp_std[:, pre_frames:] + base_std) * 0.5, d_resp, 0) # (2, 180)
# non-responsive cell
c = np.sum(d_resp != 0, axis=1) < nt2 * 0.1 # (2, n_units)
return np.sum(c, axis=0) == n_stims
def is_flat(_resp):
"""
_resp: (2, n_movies, nt)
check flatness of the timecourse by `mean > SD`
"""
# responsiveness
resp_mean = np.mean(_resp[:, :, pre_frames:], axis=1) # (2, nt2)
resp_std = np.std(_resp[:, :, pre_frames:], axis=1) # (2, nt2)
return np.sum(np.abs(resp_mean) > resp_std) < (stim_frames + post_frames) * n_stims * 0.1
def is_flat2(_resp, corr_n=2128896):
"""
_resp: (2, n_movies, nt)
check flatness of the timecourse by `p-value < 0.05`
total units:
(E1, E2, A1, A2, Ahat1, Ahat2)
1032192 + 32256 + 516096 + 16128 + 516096 + 16128 = 2128896
"""
s, p = stats.ttest_1samp(_resp[:, :, pre_frames:], 0, axis=1) # p: (2, nt2)
_resp_mean = np.mean(_resp[:, :, pre_frames:], axis=1) # (2, nt2)
# correct p: one-way and Bonferroni
p_corrected = p * 0.5 * corr_n # (2, nt2)
for i in range(n_stims):
for j in range(stim_frames + post_frames):
if np.isnan(p_corrected[i, j]):
if _resp_mean[i, j] == 0:
p_corrected[i, j] = 1
else:
p_corrected[i, j] = 0
return np.sum(p_corrected < 0.05) < 1
############################### parameters #####################################
pre_frames = 10 # 50
stim_frames = 20 # 80
post_frames = 20 # 100
n_movies = 1000
batch_size = 1
n_stims = 2
n_cells_to_plot = 100
SAVE_DIR = './response/200806_1/'
# targets = ['E0', 'E1', 'E2', 'R0', 'R1', 'R2', 'A0', 'A1', 'A2', 'Ahat0', 'Ahat1', 'Ahat2']
# targets = ['E0', 'E1', 'E2', 'A0', 'A1', 'A2', 'Ahat0', 'Ahat1', 'Ahat2']
targets = ['E1', 'E2', 'A1', 'A2', 'Ahat1', 'Ahat2']
# targets = ['Ahat1', 'Ahat2']
# targets = ['E2', 'A2', 'Ahat2', 'R2']
# targets = ['E0', 'A0', 'Ahat0']
# taregets = ['E1', 'A1', 'Ahat1']
# targets = ['E2']
for target in targets:
# load all data
for n in tqdm(range(n_movies // batch_size)):
resp_deg0 = hickle.load(SAVE_DIR + 'MAE_P_deg0_' + target + '_' + str(n) + '.hkl') # (batch_size, 230, 12, 14, 192)
resp_deg180 = hickle.load(SAVE_DIR + 'MAE_P_deg180_' + target + '_' + str(n) + '.hkl')
if n == 0:
n_units = resp_deg0.shape[2] * resp_deg0.shape[3] * resp_deg0.shape[4]
nt = resp_deg0.shape[1]
# randomly sample 10%
n_units2 = n_units // 10
idxs = np.random.permutation(list(range(n_units)))[:n_units2]
resp = np.zeros((n_stims, n_movies, nt, n_units2))
import numpy as np
import cv2
import datetime
from pathlib import Path
class CenterFace(object):
W_PATH = str((Path(__file__) / '../centerface.onnx').resolve())
def __init__(self, landmarks=True):
"""
Example:
def camera():
cap = cv2.VideoCapture(0)
ret, frame = cap.read()
h, w = frame.shape[:2]
centerface = CenterFace(landmarks=True) # default
while True:
ret, frame = cap.read()
# dets, lms = centerface(frame, h, w, threshold=0.35) # centerface original
dets, lms = centerface(frame, threshold=0.35) # modified
for det in dets:
boxes, score = det[:4], det[4]
cv2.rectangle(frame, (int(boxes[0]), int(boxes[1])), (int(boxes[2]), int(boxes[3])), (2, 255, 0), 1)
for lm in lms:
for i in range(0, 5):
cv2.circle(frame, (int(lm[i * 2]), int(lm[i * 2 + 1])), 2, (0, 0, 255), -1)
cv2.imshow('out', frame)
# Press Q on keyboard to stop recording
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
"""
self.landmarks = landmarks
if self.landmarks:
self.net = cv2.dnn.readNetFromONNX(CenterFace.W_PATH)
else:
self.net = cv2.dnn.readNetFromONNX('../models/onnx/cface.1k.onnx')
self.img_h_new, self.img_w_new, self.scale_h, self.scale_w = 0, 0, 0, 0
def __call__(self, img, height, width, threshold=0.5):
self.img_h_new, self.img_w_new, self.scale_h, self.scale_w = self.transform(height, width)
return self.inference_opencv(img, threshold)
def inference_opencv(self, img, threshold):
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(self.img_w_new, self.img_h_new), mean=(0, 0, 0), swapRB=True, crop=False)
self.net.setInput(blob)
if self.landmarks:
heatmap, scale, offset, lms = self.net.forward(["537", "538", "539", '540'])
else:
heatmap, scale, offset = self.net.forward(["535", "536", "537"])
lms = None  # no landmark head in this model; postprocess ignores it when landmarks is False
return self.postprocess(heatmap, lms, offset, scale, threshold)
def transform(self, h, w):
img_h_new, img_w_new = int(np.ceil(h / 32) * 32), int(np.ceil(w / 32) * 32)
scale_h, scale_w = img_h_new / h, img_w_new / w
return img_h_new, img_w_new, scale_h, scale_w
def postprocess(self, heatmap, lms, offset, scale, threshold):
if self.landmarks:
dets, lms = self.decode(heatmap, scale, offset, lms, (self.img_h_new, self.img_w_new), threshold=threshold)
else:
dets = self.decode(heatmap, scale, offset, None, (self.img_h_new, self.img_w_new), threshold=threshold)
if len(dets) > 0:
dets[:, 0:4:2], dets[:, 1:4:2] = dets[:, 0:4:2] / self.scale_w, dets[:, 1:4:2] / self.scale_h
if self.landmarks:
lms[:, 0:10:2], lms[:, 1:10:2] = lms[:, 0:10:2] / self.scale_w, lms[:, 1:10:2] / self.scale_h
else:
dets = np.empty(shape=[0, 5], dtype=np.float32)
if self.landmarks:
lms = np.empty(shape=[0, 10], dtype=np.float32)
if self.landmarks:
return dets, lms
else:
return dets
def decode(self, heatmap, scale, offset, landmark, size, threshold=0.1):
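# Decode the stride-4 output maps: heatmap holds per-cell face confidence, scale the
# log-encoded box height/width, offset the sub-cell centre shift, and landmark the five facial
# points relative to the box; cells above threshold become candidate boxes, then pruned by NMS.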
heatmap = np.squeeze(heatmap)
scale0, scale1 = scale[0, 0, :, :], scale[0, 1, :, :]
offset0, offset1 = offset[0, 0, :, :], offset[0, 1, :, :]
c0, c1 = np.where(heatmap > threshold)
if self.landmarks:
boxes, lms = [], []
else:
boxes = []
if len(c0) > 0:
for i in range(len(c0)):
s0, s1 = np.exp(scale0[c0[i], c1[i]]) * 4, np.exp(scale1[c0[i], c1[i]]) * 4
o0, o1 = offset0[c0[i], c1[i]], offset1[c0[i], c1[i]]
s = heatmap[c0[i], c1[i]]
x1, y1 = max(0, (c1[i] + o1 + 0.5) * 4 - s1 / 2), max(0, (c0[i] + o0 + 0.5) * 4 - s0 / 2)
x1, y1 = min(x1, size[1]), min(y1, size[0])
boxes.append([x1, y1, min(x1 + s1, size[1]), min(y1 + s0, size[0]), s])
if self.landmarks:
lm = []
for j in range(5):
lm.append(landmark[0, j * 2 + 1, c0[i], c1[i]] * s1 + x1)
lm.append(landmark[0, j * 2, c0[i], c1[i]] * s0 + y1)
lms.append(lm)
boxes = np.asarray(boxes, dtype=np.float32)
keep = self.nms(boxes[:, :4], boxes[:, 4], 0.3)
boxes = boxes[keep, :]
if self.landmarks:
lms = np.asarray(lms, dtype=np.float32)
import json
import re
import ast
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, log_loss
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_score
people_path = pd.read_csv("../data/processed/people_transformation/people_cast_list.csv")
## 1.Dataset Builder
def convert_output_id(output):
data = []
for i in range(0, len(output)):
if isinstance(output[i], dict):
if len(output[i]["movie_results"]) >= 1:
data.append(output[i]["movie_results"][0]["id"])
return data
def get_transformed_json(path):
id_remove = []
json_final = []
with open(path) as json_file:
data = json.load(json_file)
for i in range(0, len(data)):
if data[i] == "<class 'requests.exceptions.ReadTimeout'>":
id_remove.append(i)
elif data[i] == "<class 'requests.exceptions.ConnectionError'>":
id_remove.append(i)
else:
json_final.append(data[i])
return json_final
## 2.1 Pre_transformation
def from_json_to_array(df, column, regex):
df[column] = df[column].apply(lambda x: re.findall(rf"{regex}", str(x)))
def split_credits_column(df):
df["cast"] = df["credits"].apply(lambda x: string_to_dictionary(x, "cast"))
df["crew"] = df["credits"].apply(lambda x: string_to_dictionary(x, "crew"))
df.drop("credits", axis=1, inplace=True)
## 2.2 People Pre Pre_transformation
def unique_values(df_list):
ids_list = [re.findall(r'\b\d+\b', value) for value in df_list]
return set(flatten(ids_list))
## 4 Data Wrangling
def create_new_columns(df, column):
value_list = []
for cell in df[column]:
lista_genres = re.findall(r'\b\w+\b', cell)
for value in lista_genres:
value_list.append(value)
v = get_value_counts(value_list)
columns_to_zero(df, v, column)
validate_column(df, column)
def get_average_people(df, df_list, year):
ids_list = [re.findall(r'\b\d+\b', value) for value in df_list]
for i in range(len(df_list)):
df.loc[i, "cast"] = np.mean(get_score(ids_list[i], year[i]))
## Modeling
def predict(model, X_train, y_train, X_test, y_test, model_text):
model.fit(X_train, y_train)
y_pred_test = model.predict(X_test)
cf_matrix = confusion_matrix(y_test, y_pred_test)
plot_confusion_matrix(cf_matrix, model_text)
return baseline_report(model, X_train, X_test, y_train, y_test, model_text)
def predict_linear(model, X_test, X_train, y_train, X, y, model_text):
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = r2_score(y_train, y_pred)
scores = cross_val_score(model,
X_train,
y_train,
cv=5,
scoring='r2')
print('CV Mean: ', np.mean(scores))
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# dphutils.py
"""
This is for small utility functions that don't have a proper home yet
Copyright (c) 2016, <NAME>
"""
import subprocess
import numpy as np
import scipy as sp
import re
import io
import os
import requests
import tifffile as tif
from scipy.fftpack.helper import next_fast_len
from scipy.optimize import minimize_scalar, minimize
from scipy.ndimage.fourier import fourier_gaussian
from scipy.ndimage._ni_support import _normalize_sequence
from scipy.signal import signaltools as sig
from scipy.special import zeta
from scipy.stats import nbinom
from .lm import curve_fit
from .rolling_ball import rolling_ball_filter
import tqdm
import matplotlib.pyplot as plt
# from .llc import jit_filter_function, jit_filter1d_function
try:
import pyfftw
from pyfftw.interfaces.numpy_fft import fftshift, ifftshift, fftn, ifftn, rfftn, irfftn
# Turn on the cache for optimum performance
pyfftw.interfaces.cache.enable()
FFTW = True
except ImportError:
from numpy.fft import fftshift, ifftshift, fftn, ifftn, rfftn, irfftn
FFTW = False
import logging
logger = logging.getLogger(__name__)
eps = np.finfo(float).eps
def get_git(path="."):
try:
# we slice to remove trailing new line.
cmd = ["git", "--git-dir=" + os.path.join(path, ".git"), "describe", "--long", "--always"]
return subprocess.check_output(cmd).decode()[:-1]
except (subprocess.CalledProcessError, FileNotFoundError) as e:
logger.error(e)
logger.error(" ".join(cmd))
return "Unknown"
def generate_meta_data():
pass
def bin_ndarray(ndarray, new_shape=None, bin_size=None, operation="sum"):
"""
Bins an ndarray in all axes based on the target shape, by summing or
averaging.
Number of output dimensions must match number of input dimensions and
new axes must divide old ones.
Parameters
----------
ndarray : array like object (can be dask array)
new_shape : iterable (optional)
The new size to bin the data to
bin_size : scalar or iterable (optional)
The size of the new bins
Returns
-------
binned array.
"""
if new_shape is None:
# if new shape isn't passed then calculate it
if bin_size is None:
# if bin_size isn't passed then raise error
raise ValueError("Either new shape or bin_size must be passed")
# pull old shape
old_shape = np.array(ndarray.shape)
# calculate new shape, integer division!
new_shape = old_shape // bin_size
# calculate the crop window
crop = tuple(slice(None, -r) if r else slice(None) for r in old_shape % bin_size)
# crop the input array
ndarray = ndarray[crop]
# proceed as before
operation = operation.lower()
if operation not in {"sum", "mean"}:
raise ValueError("Operation not supported.")
if ndarray.ndim != len(new_shape):
raise ValueError(f"Shape mismatch: {ndarray.shape} -> {new_shape}")
compression_pairs = [(d, c // d) for d, c in zip(new_shape, ndarray.shape)]
flattened = [l for p in compression_pairs for l in p]
ndarray = ndarray.reshape(flattened)
for i in range(len(new_shape)):
op = getattr(ndarray, operation)
ndarray = op(-1 * (i + 1))
return ndarray
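# Worked example (illustrative): 2x2 binning of a 4x4 array by summing.
#   >>> a = np.arange(16).reshape(4, 4)
#   >>> bin_ndarray(a, new_shape=(2, 2), operation="sum")
#   array([[10, 18],
#          [42, 50]])
# Passing bin_size=2 instead of new_shape crops (if necessary) and gives the same result.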
def scale(data, dtype=None):
"""
Scales data to [0.0, 1.0] range, unless an integer dtype is specified
in which case the data is scaled to fill the bit depth of the dtype.
Parameters
----------
data : numeric type
Data to be scaled, can contain nan
dtype : integer dtype
Specify the bit depth to fill
Returns
-------
scaled_data : numeric type
Scaled data
Examples
--------
>>> from numpy.random import randn
>>> a = randn(10)
>>> b = scale(a)
>>> b.max()
1.0
>>> b.min()
0.0
>>> b = scale(a, dtype = np.uint16)
>>> b.max()
65535
>>> b.min()
0
"""
if np.issubdtype(data.dtype, np.complexfloating):
raise TypeError("`scale` is not defined for complex values")
dmin = np.nanmin(data)
dmax = np.nanmax(data)
if np.issubdtype(dtype, np.integer):
tmin = np.iinfo(dtype).min
tmax = np.iinfo(dtype).max
else:
tmin = 0.0
tmax = 1.0
return ((data - dmin) / (dmax - dmin) * (tmax - tmin) + tmin).astype(dtype)
def scale_uint16(data):
"""Convenience function to scale data to the uint16 range."""
return scale(data, np.uint16)
def radial_profile(data, center=None, binsize=1.0):
"""Take the radial average of a 2D data array
Adapted from http://stackoverflow.com/a/21242776/5030014
Parameters
----------
data : ndarray (2D)
the 2D array for which you want to calculate the radial average
center : sequence
the center about which you want to calculate the radial average
binsize : sequence
Size of radial bins, numbers less than one have questionable utility
Returns
-------
radial_mean : ndarray
a 1D radial average of data
radial_std : ndarray
a 1D radial standard deviation of data
Examples
--------
>>> radial_profile(np.ones((11, 11)))
(array([ 1., 1., 1., 1., 1., 1., 1., 1.]), array([ 0., 0., 0., 0., 0., 0., 0., 0.]))
"""
# test if the data is complex
if np.iscomplexobj(data):
# if it is complex, call this function on the real and
# imaginary parts and return the complex sum.
real_prof, real_std = radial_profile(np.real(data), center, binsize)
imag_prof, imag_std = radial_profile(np.imag(data), center, binsize)
#! /usr/bin/env python3
from matplotlib import pylab as plt
from astropy.table import Table
import numpy as np
import scipy as sp
import scipy.stats
from matplotlib import pylab as pl
import matplotlib as mpl
import re
import sys
from afdtable import read as read_table, compute as compute_table
def marginal_earning(tab, univ, metric, unit=None):
tab1 = change_metric(tab, univ, metric, unit=unit)
df = tab['AFD5%'].sum() * (tab1[univ]['p'] - tab[univ]['p'])
return df
def yearly_marginal_earnings(year, univ, metric, unit=None):
tab = read_table(year)
df = [[marginal_earning(tab, u, m, unit=unit)
for u in univ]
for m in metric]
f95 = tab['AFD95%'].sum()
f = f95 + tab['AFD5%'].sum()
# in 2018, 95% is not 95% some go to new universities, old ones
# get a lower share ;-)
if year == 2018:
f95 -= tab[[25,26]]['AFD95%'].sum()
print(year, f95, f)
return f95, f, df
def historical_marginal_earnings(metric=['G', 'P'], univ=[0,1,2,3], unit=None):
start = 2006
end = 2019
years = np.arange(start, end + 1)
me = [yearly_marginal_earnings(year, univ, metric, unit=unit)
for year in years]
F95, F, dF = [np.array(a) for a in zip(*me)]
return years, F95, F, dF
def cumulated_marginal_earnings(y, F95, F, dF, start=2013, duration=3, p=0,
icorr=0):
now = max(y)
start = np.array(start)
duration = np.array(duration)
# extrapolation
y_ex = np.arange(now + 1, (start + duration).max() + 31)
icorr_ex = np.ones_like(y_ex)
F_ex = F[-1] * (1 + p) ** (y_ex - now)
F95_ex = F95[-1] * (1 + p) ** (y_ex - now)
dF_ex = extrapolate_earnings(y, dF, y_ex, icorr=icorr)
# merge past values with extrapolated ones
dF = np.vstack([dF, dF_ex])
F = np.hstack([F, F_ex])
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
import zxntools as zxn
NAME = 'sl2toimg'
VERSION = '1.00.00'
DATE = "10 Apr 2021"
def my_help(name):
version()
sys.stderr.write(
("Usage: {} [<options>] [<infile>] [<outfile>]\n"
"\toptions are\n"
"\t-3\t--320x256\t320x192 prefer resoultion\n"
"\t-6\t--640x256\t640x192 prefer resoultion\n"
"\t-h\t--help\t\tshow this help message\n"
"\t-i\t--in\t\tinput file (stdin)\n"
"\t-o\t--out\t\toutput file (stdout)\n"
"\t-p\t--pal\t\tpalette (none)"
"\t-V\t--version\tget version information\n"
"\t-v\t--verbose\tincrease verbosity\n"
).format(name))
def version():
sys.stderr.write("{} version {} {}\n".format(NAME, VERSION, DATE))
zxn.version(True)
DEFAULTS = {
'help': my_help,
'inks': None,
'long_opts': ['320', '320x256', '640', '640x256', 'help', 'in=', 'out=', 'pal=', 'version', 'verbose'],
'num_colors': None,
'opts': '36hi:o:p:vV',
'pal_type': None,
'papers': None,
'res': (0, 0),
'tile_y': None,
'to_zxnext': False,
'zxn_fmt': zxn.Options.SL2
}
def sl2toimg(options):
print("sl2toimg({})".format(options))
foo = options.infile.read()
options.infile.close()
full_size = len(foo)
data_size = 0
if full_size in (6144, 6160, 6176):
options.res = (128, 96)
data_size = 6144
elif full_size in (12288, 12544, 12800):
data_size = 12288
options.res = (128, 96)
elif full_size in (49152, 48408, 49664):
options.res = (256, 192)
data_size = 49152
elif full_size == 81920:
data_size = 81920
if options.res[0] != 640:
options.res = (320, 256)
elif full_size in (81936, 81952):
data_size = 81920
options.res = (640, 256)
elif full_size in (82176, 82432):
data_size = 81920
options.res = (320, 256)
else:
sys.stderr.write("Malformed file\n")
exit(1)
pal_size = full_size - data_size
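# Palette handling (inferred from the size checks below): no trailing palette -> default ZX Next
# palette; 16/32-byte blocks hold 16 colour entries (8-bit RRRGGGBB or 9-bit two-byte form);
# 256/512-byte blocks hold all 256 entries. zxn.pal8to24 / pal9to24 are assumed to expand the
# entries to 24-bit RGB triples for PIL's putpalette.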
if options.zxn_fmt in (options.SL2, options.SLR):
pal = list(foo[data_size:])
data = list(foo[:data_size])
else:
pal = list(foo[:pal_size])
data = list(foo[pal_size:])
if pal_size == 0:
pal = list(np.asarray(zxn.palette(8))[:, :3].reshape((768,)))
elif pal_size == 16:
pal = zxn.pal8to24(pal) * 3
elif pal_size == 32:
pal = zxn.pal9to24(pal) * 3
elif pal_size == 256:
pal = zxn.pal8to24(pal)
elif pal_size == 512:
pal = zxn.pal9to24(pal)
else:
sys.stderr.write("Malformed file\n")
exit(1)
img = Image.new('P', options.res)
img.putpalette(pal)
if options.res[0] == 128 and data_size == 6144:
data_m = np.empty((96, 64, 2))
data = np.asarray(data).reshape((96, 64))
data_a = data >> 4
data_b = data & 15
data_m[:, :, 0] = data_a
data_m[:, :, 1] = data_b
img.putdata(list(data_m.reshape((12288,))))
elif options.res[0] in (128, 256):
img.putdata(data)
elif options.res[0] == 320:
data = list(np.asarray(data).reshape((320, 256)).transpose().reshape((81920,)))
img.putdata(data)
else:
data = np.asarray(data)
###Classes that define different off policy estimators for semi-synthetic experiments
import sys
import numpy
import scipy.sparse
import sklearn.model_selection
import sklearn.tree
import sklearn.linear_model
class Estimator:
#ranking_size: (int) Size of slate, l
#logging_policy: (UniformPolicy) Logging policy, \mu
#target_policy: (Policy) Target policy, \pi
def __init__(self, ranking_size, logging_policy, target_policy):
self.rankingSize=ranking_size
self.name=None
self.loggingPolicy=logging_policy
self.targetPolicy=target_policy
if target_policy.name is None or logging_policy.name is None:
print("Estimator:init [ERR] Either target or logging policy is not initialized", flush=True)
sys.exit(0)
if target_policy.dataset.name != logging_policy.dataset.name:
print("Estimator:init [ERR] Target and logging policy operate on different datasets", flush=True)
sys.exit(0)
###All sub-classes of Estimator should supply an estimate method
###Requires: query, logged_ranking, logged_value,
###Returns: float indicating estimated value
self.runningSum=0
self.runningMean=0.0
def updateRunningAverage(self, value):
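# incremental (streaming) mean: mean_k = mean_{k-1} + (x_k - mean_{k-1}) / k,
# so no per-sample history needs to be stored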
self.runningSum+=1
delta=value-self.runningMean
self.runningMean+=delta/self.runningSum
def reset(self):
self.runningSum=0
self.runningMean=0.0
class OnPolicy(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy, metric):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='OnPolicy'
self.metric=metric
#This member is set on-demand by estimateAll(...)
self.savedValues=None
def estimateAll(self):
if self.savedValues is not None:
return
self.savedValues=[]
numQueries=len(self.loggingPolicy.dataset.docsPerQuery)
for i in range(numQueries):
newRanking=self.targetPolicy.predict(i, self.rankingSize)
self.savedValues.append(self.metric.computeMetric(i, newRanking))
if i%100==0:
print(".", end="", flush=True)
print("")
print("OnPolicy:estimateAll [LOG] Precomputed estimates.", flush=True)
def estimate(self, query, logged_ranking, new_ranking, logged_value):
currentValue=None
if self.savedValues is not None:
currentValue=self.savedValues[query]
else:
currentValue=self.metric.computeMetric(query, new_ranking)
self.updateRunningAverage(currentValue)
return self.runningMean
def reset(self):
Estimator.reset(self)
self.savedValues=None
class UniformIPS(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='Unif-IPS'
def estimate(self, query, logged_ranking, new_ranking, logged_value):
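# Slate-level importance sampling: the estimate is non-zero only when the logged slate exactly
# matches the target slate, and the inverse propensity is 1 / P(slate | uniform logging):
# m^l with repetitions, or m! / (m - l)! without (m = #allowed docs, l = slate length).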
exactMatch=numpy.absolute(new_ranking-logged_ranking).sum() == 0
currentValue=0.0
if exactMatch:
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
validDocs=logged_ranking.size
invPropensity=None
if self.loggingPolicy.allowRepetitions:
invPropensity=numpy.float_power(numAllowedDocs, validDocs)
else:
invPropensity=numpy.prod(range(numAllowedDocs+1-validDocs, numAllowedDocs+1), dtype=numpy.float64)
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
return self.runningMean
class NonUniformIPS(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='NonUnif-IPS'
def estimate(self, query, logged_ranking, new_ranking, logged_value):
exactMatch=numpy.absolute(new_ranking-logged_ranking).sum() == 0
currentValue=0.0
if exactMatch:
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
underlyingRanking=self.loggingPolicy.policy.predict(query, -1)
currentDistribution=self.loggingPolicy.multinomials[numAllowedDocs]
numRankedDocs=logged_ranking.size
invPropensity=1.0
denominator=1.0
for j in range(numRankedDocs):
underlyingIndex=numpy.flatnonzero(underlyingRanking == logged_ranking[j])[0]
invPropensity*=(denominator*1.0/currentDistribution[underlyingIndex])
if not self.loggingPolicy.allowRepetitions:
denominator-=currentDistribution[underlyingIndex]
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
return self.runningMean
class UniformSNIPS(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='Unif-IPS_SN'
self.runningDenominatorMean=0.0
def estimate(self, query, logged_ranking, new_ranking, logged_value):
exactMatch=numpy.absolute(new_ranking-logged_ranking).sum() == 0
currentValue=0.0
if exactMatch:
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
validDocs=logged_ranking.size
invPropensity=None
if self.loggingPolicy.allowRepetitions:
invPropensity=numpy.float_power(numAllowedDocs, validDocs)
else:
invPropensity=numpy.prod(range(numAllowedDocs+1-validDocs, numAllowedDocs+1), dtype=numpy.float64)
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
denominatorDelta=invPropensity-self.runningDenominatorMean
self.runningDenominatorMean+=denominatorDelta/self.runningSum
if self.runningDenominatorMean!=0.0:
return 1.0*self.runningMean/self.runningDenominatorMean
else:
return 0.0
def reset(self):
Estimator.reset(self)
self.runningDenominatorMean=0.0
class NonUniformSNIPS(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='NonUnif-IPS_SN'
self.runningDenominatorMean=0.0
def estimate(self, query, logged_ranking, new_ranking, logged_value):
exactMatch=numpy.absolute(new_ranking-logged_ranking).sum() == 0
currentValue=0.0
if exactMatch:
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
underlyingRanking=self.loggingPolicy.policy.predict(query, -1)
currentDistribution=self.loggingPolicy.multinomials[numAllowedDocs]
numRankedDocs=logged_ranking.size
invPropensity=1.0
denominator=1.0
for j in range(numRankedDocs):
underlyingIndex=numpy.flatnonzero(underlyingRanking == logged_ranking[j])[0]
invPropensity*=(denominator*1.0/currentDistribution[underlyingIndex])
if not self.loggingPolicy.allowRepetitions:
denominator-=currentDistribution[underlyingIndex]
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
denominatorDelta=invPropensity-self.runningDenominatorMean
self.runningDenominatorMean+=denominatorDelta/self.runningSum
if self.runningDenominatorMean!=0.0:
return 1.0*self.runningMean/self.runningDenominatorMean
else:
return 0.0
def reset(self):
Estimator.reset(self)
self.runningDenominatorMean=0.0
class UniformPI(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='Unif-PI'
def estimate(self, query, logged_ranking, new_ranking, logged_value):
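# Pseudo-inverse (PI) estimator: slates are encoded as position-by-document indicator vectors;
# gammas[numAllowedDocs] is assumed to hold the precomputed pseudo-inverse of the slate-indicator
# second-moment matrix under uniform logging, so dotting it with the logged indicator and then
# with the new slate's indicator yields the importance correction.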
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
validDocs=logged_ranking.size
vectorDimension=validDocs*numAllowedDocs
exploredMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
newMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
for j in range(validDocs):
if self.loggingPolicy.dataset.mask is None:
exploredMatrix[j, logged_ranking[j]]=1
newMatrix[j, new_ranking[j]]=1
else:
logIndex=numpy.flatnonzero(self.loggingPolicy.dataset.mask[query] == logged_ranking[j])[0]
newIndex=numpy.flatnonzero(self.loggingPolicy.dataset.mask[query] == new_ranking[j])[0]
exploredMatrix[j, logIndex]=1
newMatrix[j, newIndex]=1
posRelVector=exploredMatrix.reshape(vectorDimension)
newSlateVector=newMatrix.reshape(vectorDimension)
estimatedPhi=numpy.dot(self.loggingPolicy.gammas[numAllowedDocs], posRelVector)
invPropensity=numpy.dot(estimatedPhi, newSlateVector)
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
return self.runningMean
class NonUniformPI(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='NonUnif-PI'
def estimate(self, query, logged_ranking, new_ranking, logged_value):
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
underlyingRanking=self.loggingPolicy.policy.predict(query, -1)
validDocs=logged_ranking.size
vectorDimension=validDocs*numAllowedDocs
exploredMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
newMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
for j in range(validDocs):
logIndex=numpy.flatnonzero(underlyingRanking == logged_ranking[j])[0]
newIndex=numpy.flatnonzero(underlyingRanking == new_ranking[j])[0]
exploredMatrix[j, logIndex]=1
newMatrix[j, newIndex]=1
posRelVector=exploredMatrix.reshape(vectorDimension)
newSlateVector=newMatrix.reshape(vectorDimension)
estimatedPhi=numpy.dot(self.loggingPolicy.gammas[numAllowedDocs], posRelVector)
invPropensity=numpy.dot(estimatedPhi, newSlateVector)
currentValue=logged_value*invPropensity
self.updateRunningAverage(currentValue)
return self.runningMean
class UniformSNPI(Estimator):
def __init__(self, ranking_size, logging_policy, target_policy):
Estimator.__init__(self, ranking_size, logging_policy, target_policy)
self.name='Unif-PI_SN'
self.runningDenominatorMean=0.0
def estimate(self, query, logged_ranking, new_ranking, logged_value):
numAllowedDocs=self.loggingPolicy.dataset.docsPerQuery[query]
validDocs=logged_ranking.size
vectorDimension=validDocs*numAllowedDocs
exploredMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
newMatrix=numpy.zeros((validDocs, numAllowedDocs), dtype=numpy.float64)
for j in range(validDocs):
if self.loggingPolicy.dataset.mask is None:
exploredMatrix[j, logged_ranking[j]]=1
newMatrix[j, new_ranking[j]]=1
else:
logIndex=numpy.flatnonzero(self.loggingPolicy.dataset.mask[query] == logged_ranking[j])[0]
import numpy as np
import pandas as pd
import copy
THRESHOLD = 15
def get_average_metrics(results):
individual_performance = {}
overall_performance = {}
measure_list = ['recall', 'precision', 'F1',
'accuracy', 're', 'mae',
'maep', 'nde', 'sae']
for appliance in results.keys():
measure_dict = {}
measure_average = {}
# initialization
for measure in measure_list:
measure_dict[measure] = []
overall_performance[measure] = []
# save details
for test_house in results[appliance]['y_test_raw'].keys():
performance = get_all_metrics(results[appliance]['y_test_raw'][test_house],
results[appliance]['pred_test'][test_house])
for measure in performance.keys():
measure_dict[measure].append(performance[measure])
# save mean
for measure in measure_list:
measure_average[measure] = np.mean(measure_dict[measure])
individual_performance[appliance] = measure_average
overall_performance_detail = {}
# initialize
for measure in measure_list:
overall_performance_detail[measure] = []
# save details
for appliance in individual_performance.keys():
for measure in measure_list:
overall_performance_detail[measure].append(individual_performance[appliance][measure])
# save mean
for measure in measure_list:
overall_performance[measure] = np.mean(overall_performance_detail[measure])
individual_performance = pd.DataFrame(individual_performance)
return individual_performance, overall_performance
def get_all_metrics(target, prediction):
threshold = THRESHOLD
results = {'recall': get_recall(target, prediction, threshold),
'precision': get_precision(target, prediction, threshold),
'F1': get_F1(target, prediction, threshold),
'accuracy': get_accuracy(target, prediction, threshold),
're': get_relative_error(target, prediction),
'mae': get_abs_error(target, prediction),
'maep': get_abs_error_positive(target, prediction),
'nde': get_nde(target, prediction),
'sae': get_sae(target, prediction)}
return results
def get_TP(target, prediction, threshold):
'''
compute the number of true positives
Parameters:
----------------
target: the ground truth, np.array
prediction: the prediction, np.array
threshold: float
'''
assert (target.shape == prediction.shape)
target = 1 - np.clip(target, threshold, 0) / threshold
prediction = 1 - np.clip(prediction, threshold, 0) / threshold
tp_array = np.logical_and(target, prediction) * 1.0
tp = np.sum(tp_array)
return tp
def get_FP(target, prediction, threshold):
'''
compute the number of false positives
Parameters:
----------------
target: the ground truth, np.array
prediction: the prediction, np.array
threshold: float
'''
assert (target.shape == prediction.shape)
target = np.clip(target, threshold, 0) / threshold
import warnings
from collections import namedtuple
import numpy as np
import h5py
from ecogdata.channel_map import ChannelMap
from ecogdata.trigger_fun import process_trigger
from .file2data import FileLoader
gain = {
'2t-as daq v1' : 10,
'2t-as daq v2' : 10
}
pitch_lookup = {
'actv_64' : 0.4,
'active_1008ch_sp_v2' : (0.3214, 0.25) # pitch is dx, dy
}
DAQunmix = namedtuple('DAQunmix', ['col', 'row', 'extra_col', 'extra_row'])
active_headstages = ('zif26 to 2x uhdmi',
'zif26 to 2x 20 pins harwin to 2x uhdmi',
'zif to 50mil',
'zif51_p4-50_demux-14c20r',)
def load_active(exp_path, name, electrode, daq, headstage, bnc=(), trigger_idx=0, **load_kwargs):
"""
Parameters
----------
exp_path: str
Path for experiment recordings
name: str
Name of the recording to load
electrode:
Electrode tag
daq:
DAQ equipment tag
headstage:
Headstage equipment tag
bnc: int or sequence
Columns in the acquired data corresponding to BNC inputs
trigger_idx: int
If there are BNC columns, then this one corresponds to a timestamp trigger.
**load_kwargs: dict
Other arguments for the FileLoader type
Returns
-------
dataset: Bunch
Bunch containing ".data" (a DataSource), ".chan_map" (a ChannelMap), and many other metadata attributes.
"""
loader = ActiveLoader(exp_path, name, electrode, daq, headstage, bnc=bnc, trigger_idx=trigger_idx, **load_kwargs)
return loader.create_dataset()
def get_daq_unmix(daq, headstage, electrode, row_order=()):
daq = daq.lower()
headstage = headstage.lower()
electrode = electrode.lower()
row_order = list(map(int, row_order))
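# Each (daq, headstage, electrode) combination below defines how acquired columns/rows map back
# onto the physical electrode grid: col/row are the inverse permutations of the wiring order,
# and extra_col marks diagnostic/reference columns that carry no electrode signal.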
# e.g. penn data 4/28/2016
if (daq == '2t-as daq v2') and (headstage == 'zif26 to 2x uhdmi') and \
(electrode == 'actv_64'):
col_order = [2, 1, 5, 8, 7, 6, 9, 0, 4, 3]
if not len(row_order):
row_order = [0, 1, 2, 3, 7, 4, 6, 5]
col = [col_order.index(i) for i in range(len(col_order))]
row = [row_order.index(i) for i in range(len(row_order))]
# diagnostic channels are last 2 columns
extra_col = col[-2:]
col = col[:-2]
unmix = DAQunmix(np.array(col[::-1]), np.array(row), extra_col, ())
# e.g. duke data winter/spring 2016
elif (daq == '2t-as daq v1') and \
(headstage == 'zif26 to 2x 20 pins harwin to 2x uhdmi') and \
(electrode == 'actv_64'):
col_order = [7, 9, 8, 2, 4, 5, 1, 0, 3, 6]
col = [col_order.index(i) for i in range(len(col_order))]
extra_col = [1, 4]
for c in extra_col:
col.remove(c)
col = np.array(col)
# This is Ken's original order
if not len(row_order):
row_order = [6, 5, 1, 0, 2, 3, 7, 4]
row = [row_order.index(i) for i in range(len(row_order))]
# This is Ken's 2nd order (sequential)
#row = range(8)
# this is Ken's 3rd order (skip 3)
#row = list( (np.arange(8) * 3) % 8 )
unmix = DAQunmix(col[::-1], row, extra_col, ())
# e.g. duke data from 4/26/2016
elif (daq == '2t-as daq v1') and (headstage == 'zif26 to 2x uhdmi') and \
(electrode == 'actv_64'):
col_order = list( np.array([6, 9, 8, 7, 10, 1, 5, 4, 3, 2]) - 1 )
if not len(row_order):
row_order = list( np.array([1, 2, 3, 4, 8, 5, 6, 7]) - 1 )
col = [col_order.index(i) for i in range(len(col_order))]
extra_col = col[-2:]
col = col[:-2]
row = [row_order.index(i) for i in range(len(row_order))]
unmix = DAQunmix(np.array(col[::-1]), np.array(row), extra_col, ())
elif (daq == '2t-as daq v1') and (headstage == 'zif to 50mil') and \
(electrode == 'cardiac v1'):
col_order = np.array([12, 14, 17, 19, 5, 11, 13, 16, 18,
20, 2, 4, 7, 9, 15, 10, 8, 6, 3, 1]) - 1
if not len(row_order):
row_order = np.array([16, 1, 6, 8, 4, 20, 2, 12, 14, 17, 9,
22, 21, 10, 13, 18, 3, 19, 7, 11, 15, 5]) - 1
# reorder to my convention
col = [list(col_order).index(i) for i in range(len(col_order))]
# remove floating and ref channels
extra_col = [4, 14]
col.remove(4)
col.remove(14)
row = [list(row_order).index(i) for i in range(len(row_order))]
unmix = DAQunmix(np.array(col[::-1]), np.array(row), extra_col, ())
elif (daq == '2t-as daq v2') and (headstage == 'zif51_p4-50_demux-14c20r') \
and (electrode == 'active_1008ch_sp_v2'):
col_order = np.array([8, 7, 11, 14, 13, 12, -1, 1, 5,
4, 3, 2, 6, 10, 9, 28, 27, 22, 16, 18,
20, -1, 15, 23, 21, 19, 17, 25, 24, 26]) - 1
col = [list(col_order).index(i) for i in np.sort( col_order[col_order>=0] )]
if not len(row_order):
row_order = np.array([8, 6, 2, 4, 18, 14, 16, 1, 3, 10, 12, 5, 7, 11,
9, 17, 15, 13, 26, 24, 20, 22, 36, 32, 34, 19,
21, 28, 30, 23, 25, 29, 27, 35, 33, 31]) - 1
row = [list(row_order).index(i) for i in range(len(row_order))]
extra_col = np.where(col_order < 0)[0]
unmix = DAQunmix(np.array(col[::-1]), np.array(row), extra_col, ())
elif daq.lower() == 'passthru':
unmix = DAQunmix(slice(None), slice(None), (), ())
else:
err = ['Combination unknown:',
'DAQ {0}'.format(daq),
'Headstage {0}'.format(headstage),
'Electrode {0}'.format(electrode)]
raise NotImplementedError('\n'.join(err))
return unmix
class ActiveLoader(FileLoader):
transpose_array = True
permissible_types = ['.mat', '.h5', '.hdf']
def __init__(self, experiment_path, recording, electrode, daq, headstage, bnc=(), **kwargs):
self.daq_type = daq
self.headstage_type = headstage
self.bnc_columns = bnc
self.scale_to_uv = 1e6 / gain.get(self.daq_type, 1.0)
super(ActiveLoader, self).__init__(experiment_path, recording, electrode, **kwargs)
with h5py.File(self.data_file, 'r') as h5file:
shape = h5file['data'].shape
num_row = int(h5file['numRow'][()])
num_chan = int(h5file['numChan'][()])
total_channels = num_row * num_chan
# if this is a downsample file, check for an extracted BNC array
source_has_bnc = 'bnc' in h5file
self.transpose_array = (shape[1] == total_channels)
if bnc:
if source_has_bnc:
self.aligned_arrays = ['bnc']
else:
bnc_channels = np.concatenate([np.arange(bnc * num_row, (bnc + 1) * num_row) for bnc in bnc])
self.aligned_arrays = [('bnc', bnc_channels)]
def create_downsample_file(self, data_file, resample_rate, downsamp_file, **kwargs):
# The parent method creates a channel-compatible source file with anti-aliased downsamples in the channel
# array. For active electrode data with all external channels (e.g. logic levels) packed into the main data
# array, a side effect is that the external channels will be anti-alias filtered as well.
# However, the new source file will have a separate "bnc" array that is downsampled w/o filtering.
new_file = super(ActiveLoader, self).create_downsample_file(data_file, resample_rate, downsamp_file, **kwargs)
# add in the other metadata -- note that this assumes that create_downsample creates a mapped file,
# which may change
with h5py.File(data_file, 'r') as f1, h5py.File(new_file, 'r+') as f2:
samp_rate = f1['Fs'][()]
samp_rate[:] = resample_rate
f2['Fs'] = samp_rate
for k in f1.keys():
if k not in (self.data_array, 'Fs', 'bnc'):
try:
f2[k] = f1[k][()]
except AttributeError:
pass
# shorten this to the extracted BNC array
self.aligned_arrays = ['bnc']
return new_file
def make_channel_map(self):
unmix = get_daq_unmix(self.daq_type, self.headstage_type, self.electrode)
with h5py.File(self.data_file, 'r') as h5file:
nrow = int(h5file['numRow'][()])
ncol = int(h5file['numCol'][()])
pitch = pitch_lookup.get(self.electrode, 1.0)
# go through channels,
# if channel is data, put down the array matrix location
# else, put down a disconnected channel
data_rows = list(unmix.row)
data_cols = list(unmix.col)
# data_chans = np.array(data_cols) * nrow + np.array(data_rows)
electrode_chans = []
chan_map = []
other_chans = []
for c in range(nrow * ncol):
col = c // nrow
row = c % nrow
if col in data_cols:
arow = data_rows.index(row)
acol = data_cols.index(col)
chan_map.append(arow * len(data_cols) + acol)
electrode_chans.append(c)
else:
other_chans.append(c)
nr = len(unmix.row)
nc = len(unmix.col)
cm = ChannelMap(chan_map, (nr, nc), pitch=pitch, col_major=False)
return cm, electrode_chans, other_chans, []
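# Editorial illustration (not part of the original module): channels are packed column-major
# (all rows of column 0 first), so a flat index splits into a column via integer division by
# nrow and a row via the remainder, exactly as in make_channel_map. Values are made up.
def _example_flat_channel_to_grid():
    nrow = 4
    c = 9
    return c // nrow, c % nrow   # (2, 1): column 2, row 1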
def find_trigger_signals(self, data_file):
bnc_columns = self.bnc_columns
if not bnc_columns:
return (), ()
# If trigger index is an integer, proceed. If not and it evaluates false, then skip
if not isinstance(self.trigger_idx, int) and not self.trigger_idx:
return (), ()
if not np.iterable(bnc_columns):
bnc_columns = (bnc_columns,)
trigger_idx = self.trigger_idx
if np.iterable(trigger_idx):
trigger_idx = trigger_idx[0]
with h5py.File(data_file, 'r') as h5file:
nrow = int(h5file['numRow'][()])
# if this is a downsample file, it should be the case that a BNC array has been extracted and downsampled
# without filtering
if 'bnc' in h5file:
bnc_data = h5file['bnc'][:].reshape(len(bnc_columns), nrow, -1)
else:
bnc_channels = np.concatenate([np.arange(bnc * nrow, (bnc + 1) * nrow) for bnc in bnc_columns])
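# Editorial illustration (not part of the original module): each BNC column occupies nrow
# consecutive flat channels, which is what the concatenated np.arange ranges above express.
# The values below are made up.
def _example_bnc_channel_indices():
    nrow = 4       # rows per column, as read from 'numRow'
    bnc = 2        # third BNC column
    return np.arange(bnc * nrow, (bnc + 1) * nrow)  # array([ 8,  9, 10, 11])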
# -*- coding: utf-8 -*-
"""
Created on Wed Jun 12 09:38:56 2019
@author: Weike (Vicky) Sun <EMAIL>/<EMAIL>
(c) 2020 <NAME>, all rights reserved
"""
"""
This file calls the MATLAB-based ADAPTX package for state space model fitting.
There are two modes:
(1) Single training set, with option of testing data
(2) Multiple training sets, with option of testing data
"""
import numpy as np
import matlab.engine
import scipy.io as sio
import os
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
def Adaptx_matlab_single(X, y, data_url, url, X_test=None, y_test=None, train_ratio = 1,\
mymaxlag = 12, mydegs = [-1, 0, 1], mynow = 1, steps = 10, plot = True):
'''This function fits the CVA state space model on the first train_ratio portion of the
training data X, y and uses the remaining (1 - train_ratio) portion for forecasting. Test data
can also be supplied to evaluate the fitted state space model.
Input:
X: training data predictors, numpy array: N x m
y: training data response, numpy array: N x 1
X_test: testing data predictors, numpy array: N_test x m
y_test: testing data response, numpy array: N_test x 1
data_url: desired working directory for saving all the results; should be a sub-folder of the main ADAPTX folder
url: main directory of the ADAPTX folder
train_ratio: float, portion of the training data used to train the model; the rest is used as validation data
mymaxlag: maximum lag number considered for the CVA, default = 12
mydegs: degrees considered for the trend in the state space model, chosen from [-1, 0, 1, 2, 3, 4], default [-1, 0, 1]
mynow: instantaneous effect of u on y in the state space model, 1: yes, 0: no, default 1
steps: number of steps considered for prediction
plot: flag for plotting results or not, default True
Output:
optimal_params: dictionary
mylag: int, optimal lag number selected by AIC
mydeg: int, optimal order of the trend selected by AIC
ord: int, optimal order for the CVA selected by AIC
myresults:
State space model parameters:
Phi, G, H, A, B, Q, R, E, F, ABR, BBR
CVA projection: Jk
Prediction results:
MSE_train, MSE_val, MSE_test for different prediction steps
Ym: prediction by the final optimal model, Num_steps x time instances; the first row is the one-step-ahead Kalman prediction
error: the error Ym - Yp
'''
###Save Data in desired form to mydata.mat, which contains mydata as a matrix of size (m_y+m_x)xN, where y as the first row
n_train = round(train_ratio*np.shape(y)[0])
scaler = StandardScaler()
scaler.fit(X[:n_train])
X = scaler.transform(X)
scalery = StandardScaler()
scalery.fit(y[:n_train])
y=scalery.transform(y)
mydata=np.vstack((np.transpose(y),np.transpose(X)))
sio.savemat('mydata.mat', {'mydata':mydata})
if y_test is not None:
###Save test Data in desired form to mydataval.mat
X_test = scaler.transform(X_test)
y_test = scalery.transform(y_test)
mydataval=np.vstack((np.transpose(y_test),np.transpose(X_test)))
sio.savemat('mydataval.mat', {'mydataval':mydataval})
test = 1
else:
test = 0
m_y = np.shape(y)[1]
###Save parameters in a file
sio.savemat('myparams.mat', {'url':url,'data_url':data_url, 'mydimy':m_y, 'mymaxlag':mymaxlag,'mydegs':mydegs, 'mynow':mynow, 'val':test, 'steps':steps, 'n_train':n_train})
###Call the matlab script
eng = matlab.engine.start_matlab()
eng.cd(os.getcwd())
#eng.addpath(url, nargout=0)
eng.CVA_singleset_fit_test(nargout=0)
eng.quit()
###Read Results and do Plots
#the parameters are saved in myresults
myresults = sio.loadmat(data_url+'myresults.mat')
prediction_train = sio.loadmat(data_url+'kstep_training.mat')
y_real_train = np.array(prediction_train['yp'])
if train_ratio < 1:
y_real_val = y_real_train[:,n_train:]
y_real_train = y_real_train[:,:n_train]
y_predict_train = np.array(prediction_train['ym'])
if train_ratio < 1:
y_predict_val = y_predict_train[:,n_train:]
y_predict_train = y_predict_train[:,:n_train]
else:
y_predict_val = None
train_error = np.array(prediction_train['ye'])
if train_ratio < 1:
val_error = train_error[:,n_train:]
train_error = train_error[:,:n_train]
MSE_val = np.nansum(val_error**2,axis=1)/np.sum(~np.isnan(val_error),axis=1)
else:
MSE_val = None
val_error = None
MSE_train = np.nansum(train_error**2,axis=1)/np.sum(~np.isnan(train_error),axis=1)
if test:
prediction_test = sio.loadmat(data_url+'kstep_testing.mat')
y_real_test = np.array(prediction_test['yp'])
y_predict_test = np.array(prediction_test['ym'])
test_error = np.array(prediction_test['ye'])
MSE_test = np.nansum(test_error**2,axis=1)/np.sum(~np.isnan(test_error),axis=1)
else:
MSE_test = None
test_error = None
y_predict_test = None
#plot the prediction results
if plot:
import matplotlib
cmap = matplotlib.cm.get_cmap('Paired')
s=12
#plot the prediction vs real
for i in range(steps):
for j in range(m_y):
plt.figure(figsize=(5,3))
plt.plot(y_real_train[j], color= cmap(j*2+1), label= 'real')
plt.plot(y_predict_train[m_y*i+j], '--', color= 'xkcd:coral', label = 'prediction')
plt.title('Training data' + str(i+1) +'-step prediction for y' + str(j+1),fontsize=s)
plt.xlabel('Time index',fontsize=s)
plt.ylabel('y',fontsize=s)
plt.legend(fontsize=s)
plt.tight_layout()
plt.savefig('Train_var_' + str(j+1)+'_step_'+str(i+1)+'.png', dpi = 600,bbox_inches='tight')
if train_ratio < 1:
plt.figure(figsize=(5,3))
plt.plot(y_real_val[j], color= cmap(j*2+1), label= 'real')
plt.plot(y_predict_val[m_y*i+j], '--', color= 'xkcd:coral',label = 'prediction')
plt.title('Validation data ' + str(i+1) +'-step prediction for y' + str(j+1),fontsize=s)
plt.xlabel('Time index',fontsize=s)
plt.ylabel('y',fontsize=s)
plt.legend(fontsize=s)
plt.tight_layout()
plt.savefig('Val_var_' + str(j+1)+'_step_'+str(i+1)+'.png', dpi = 600,bbox_inches='tight')
if test:
plt.figure(figsize=(5,3))
plt.plot(y_real_test[j], color= cmap(j*2+1), label= 'real')
plt.plot(y_predict_test[m_y*i+j], '--',color= 'xkcd:coral', label = 'prediction')
plt.title('Test data ' + str(i+1) +'-step prediction for y' + str(j+1),fontsize=s)
plt.xlabel('Time index',fontsize=s)
plt.ylabel('y',fontsize=s)
plt.legend(fontsize=s)
plt.tight_layout()
plt.savefig('Test_var_' + str(j+1)+'_step_'+str(i+1)+'.png', dpi = 600,bbox_inches='tight')
# plt.close('all')
#plot fitting errors
max_limit=np.nanmax(train_error[-2:],axis=1)
min_limit=np.nanmin(train_error[-2:],axis=1)
fig, axs = plt.subplots(steps,m_y,figsize=(3*m_y,2*steps))
if m_y>1:
for i in range(steps):
for j in range(m_y):
axs[i,j].plot(train_error[m_y*i+j], color= cmap(j*2+1))
axs[i,j].set_title('Training data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs[i,j].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs[i,j].set_xlabel('Time index', fontsize=s)
fig.tight_layout()
plt.savefig('Train error.png', dpi = 600,bbox_inches='tight')
if train_ratio < 1:
max_limit=np.nanmax(val_error[-2:],axis=1)
min_limit=np.nanmin(val_error[-2:],axis=1)
fig1, axs1 = plt.subplots(steps,m_y,figsize=(3*m_y,2*steps))
for i in range(steps):
for j in range(m_y):
axs1[i,j].plot(val_error[m_y*i+j], color= cmap(j*2+1))
axs1[i,j].set_title('Val data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs1[i,j].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs1[i,j].set_xlabel('Time index', fontsize=s)
fig1.tight_layout()
plt.savefig('Val error.png', dpi=600,bbox_inches='tight')
if test:
max_limit=np.nanmax(test_error[-2:],axis=1)
min_limit=np.nanmin(test_error[-2:],axis=1)
fig2, axs2 = plt.subplots(steps,m_y,figsize=(3*m_y,2*steps))
for i in range(steps):
for j in range(m_y):
axs2[i,j].plot(test_error[m_y*i+j], color= cmap(j*2+1))
axs2[i,j].set_title('Test data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs2[i,j].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs2[i,j].set_xlabel('Time index', fontsize=s)
fig2.tight_layout()
plt.savefig('Test error.png', dpi=600,bbox_inches='tight')
else:
j=0
for i in range(steps):
axs[i].plot(train_error[m_y*i+j], color= cmap(j*2+1))
axs[i].set_title('Training data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs[i].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs[i].set_xlabel('Time index', fontsize=s)
fig.tight_layout()
plt.savefig('Train error.png', dpi = 600,bbox_inches='tight')
if train_ratio < 1:
max_limit=np.nanmax(val_error[-2:],axis=1)
min_limit=np.nanmin(val_error[-2:],axis=1)
fig1, axs1 = plt.subplots(steps,m_y,figsize=(3*m_y,2*steps))
for i in range(steps):
axs1[i].plot(val_error[m_y*i+j], color= cmap(j*2+1))
axs1[i].set_title('Val data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs1[i].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs1[i].set_xlabel('Time index', fontsize=s)
fig1.tight_layout()
plt.savefig('Val error.png', dpi=600,bbox_inches='tight')
if test:
max_limit=np.nanmax(test_error[-2:],axis=1)
min_limit=np.nanmin(test_error[-2:],axis=1)
fig2, axs2 = plt.subplots(steps,m_y,figsize=(3*m_y,2*steps))
for i in range(steps):
axs2[i].plot(test_error[m_y*i+j], color= cmap(j*2+1))
axs2[i].set_title('Test data' + str(i+1) +'-step error for y' + str(j+1), fontsize=s)
axs2[i].set_ylim(min_limit[j]*1.5,max_limit[j]*1.5)
if i == steps-1:
axs2[i].set_xlabel('Time index', fontsize=s)
fig2.tight_layout()
plt.savefig('Test error.png', dpi=600,bbox_inches='tight')
#MSE for prediction results over different steps
for i in range(m_y):
plt.figure(figsize=(3,2))
plt.plot(MSE_train[i::m_y], 'd-', color = cmap(i*2+1))
plt.title('MSE for y' + str(i+1) +' training prediction', fontsize = s)
plt.xlabel('k-step ahead', fontsize = s)
plt.ylabel('MSE', fontsize = s)
plt.savefig('MSE_train '+str(i+1)+'.png', dpi=600,bbox_inches='tight')
if train_ratio < 1:
for i in range(m_y):
plt.figure(figsize=(3,2))
plt.plot(MSE_val[i::m_y], 'd-', color = cmap(i*2+1))
plt.title('MSE for y' + str(i+1) +' validation prediction', fontsize = s)
plt.xlabel('k-step ahead', fontsize = s)
plt.ylabel('MSE', fontsize = s)
plt.savefig('MSE_val '+str(i+1)+'.png', dpi=600,bbox_inches='tight')
if test:
for i in range(m_y):
plt.figure(figsize=(3,2))
plt.plot(MSE_test[i::m_y], 'd-', color = cmap(i*2+1))
plt.title('MSE for y' + str(i+1) +' testing prediction', fontsize = s)
plt.xlabel('k-step ahead', fontsize = s)
plt.ylabel('MSE', fontsize = s)
plt.savefig('MSE_test'+str(i+1)+'.png', dpi=600,bbox_inches='tight')
optimal_params = {}
optimal_params['lag'] = myresults['mylag']
optimal_params['deg'] = myresults['mydeg']
optimal_params['ord'] = myresults['ord']
return(optimal_params, myresults, MSE_train, MSE_val, MSE_test, y_predict_train, y_predict_val, y_predict_test, train_error, val_error, test_error)
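# Editorial illustration (not part of the original module): the per-step MSEs above are computed
# in a NaN-aware way, because k-step-ahead errors contain NaNs where no prediction exists.
# The toy error array below is made up.
def _example_nan_aware_mse():
    err = np.array([[1.0, -1.0, np.nan]])
    return np.nansum(err ** 2, axis=1) / np.sum(~np.isnan(err), axis=1)  # array([1.])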
def Adaptx_matlab_multi(X, y, timeindex, num_series, data_url, url, X_test=None, y_test=None, train_ratio = 1,\
mymaxlag = 12, mydegs = [-1, 0, 1], mynow = 1, steps = 10, plot = True):
'''This function fits the CVA state space model on the first train_ratio portion of each
training time series and uses the remaining (1 - train_ratio) portion for forecasting. Test data
can also be supplied to evaluate the fitted state space model.
Input:
X: training data predictors, numpy array: N x m, composed of all the data (several time series stacked)
y: training data response, numpy array: N x 1, composed of all the data (several time series stacked)
timeindex: time interval for each separate time series, stored in one dictionary keyed by time series index; each entry has shape (N,)
train_ratio: float, portion of training data used to train the model (the rest is used as validation data); applied to every time series
num_series: total number of time series contained
X_test: testing data predictors, numpy array: N_test x m
y_test: testing data response, numpy array: N_test x 1
data_url: desired working directory for saving all the results; should be a sub-folder of the main ADAPTX folder
url: main directory of the ADAPTX folder
mymaxlag: maximum lag number considered for the CVA, default = 12
mydegs: degrees considered for the trend in the state space model, chosen from [-1, 0, 1, 2, 3, 4], default [-1, 0, 1]
mynow: instantaneous effect of u on y in the state space model, 1: yes, 0: no, default 1
steps: number of steps considered for prediction
plot: flag for plotting results or not, default True
Output:
optimal_params: dictionary
mylag: int, optimal lag number selected by AIC
mydeg: int, optimal order of the trend selected by AIC
ord: int, optimal order for the CVA selected by AIC
myresults:
State space model parameters:
Phi, G, H, A, B, Q, R, E, F, ABR, BBR
CVA projection: Jk
Prediction results:
MSE_train for each series, MSE_test for different prediction steps
Ym: prediction by the final optimal model, Num_steps x time instances; the first row is the one-step-ahead Kalman prediction
error: the error Ym - Yp
'''
cum = 0
##scale data
for i in range(num_series):
num = np.shape(timeindex[i+1])[0]
num_up_to = round(train_ratio*num)
if i == 0:
y_scale = y[cum:cum+num_up_to]
X_scale = X[cum:cum+num_up_to]
else:
y_scale = np.vstack((y_scale, y[cum:cum+num_up_to]))
X_scale = np.vstack((X_scale,X[cum:cum+num_up_to]))
cum += num
scaler = StandardScaler()
scaler.fit(X_scale)
X = scaler.transform(X)
scalery = StandardScaler()
scalery.fit(y_scale)
y=scalery.transform(y)
###Save Data in desired form to mydata.mat, which contains mydata as a matrix of size (m_y+m_x)xN, where y as the first row
timax = 0
filist = []
cum = 0
for i in range(num_series):
num = np.shape(timeindex[i+1])[0]
num_up_to = round(train_ratio*num)
if timax<num_up_to: timax = num_up_to
filist.append('filist' + str(i+1))
d = np.vstack((np.transpose(y[cum:cum+num_up_to]),np.transpose(X[cum:cum+num_up_to])))
sio.savemat(data_url+'filist' + str(i+1)+'.mat', {'d':d, 'timint':timeindex[i+1][:num_up_to]})
cum += num
sio.savemat('myfilist.mat', {'filist':filist,'timax':timax, 'num_series':num_series})
if y_test is not None:
###Save test Data in desired form to mydataval.mat
X_test = scaler.transform(X_test)
y_test = scalery.transform(y_test)
mydataval=np.vstack((np.transpose(y_test),np.transpose(X_test)))
sio.savemat('mydataval.mat', {'mydataval':mydataval})
test = 1
else:
test = 0
m_y = np.shape(y)[1]
m_u = np.shape(X)[1]
###Save parameters in a file
sio.savemat('myparams.mat', {'url':url,'data_url':data_url, 'mydimy':m_y, 'mydimu':m_u, 'mymaxlag':mymaxlag,'mydegs':mydegs, 'mynow':mynow, 'val':test, 'steps':steps})
###Call the matlab script
eng = matlab.engine.start_matlab()
eng.cd(os.getcwd())
#eng.addpath(url, nargout=0)
eng.CVA_multiset_fit_test(nargout=0)
###Read Results and do Plots
#the parameters are saved in myresults
myresults = sio.loadmat(data_url+'myresults.mat')
'''Do prediction, first for the training and validation data (if train_ratio<1), for each time series'''
cum=0
for i in range(num_series):
sio.savemat('myparams_prediction.mat', {'url':url,'data_url':data_url, 'steps':steps, 'id':i})
num = np.shape(timeindex[i+1])[0]
mydataval_prediction = np.vstack((np.transpose(y[cum:cum+num]),np.transpose(X[cum:cum+num])))
sio.savemat('mydataval_prediction.mat', {'mydataval_prediction':mydataval_prediction})
cum += num
eng.cd(os.getcwd())
eng.CVA_prediction(nargout=0)
eng.quit()
'''load prediction results for training and validation'''
y_real_train = {}
y_real_val = {}
y_predict_train = {}
y_predict_val = {}
train_error = {}
val_error = {}
MSE_train = np.zeros((num_series, steps*m_y))
MSE_val = np.zeros((num_series, steps*m_y))
for i in range(num_series):
prediction_train = sio.loadmat(data_url+'kstep' + str(i) + '.mat')
num = np.shape(timeindex[i+1])[0]
n_train = round(train_ratio*num)
y_real_train[i+1] = np.array(prediction_train['yp'])
if train_ratio < 1:
y_real_val[i+1] = y_real_train[i+1][:,n_train:]
y_real_train[i+1] = y_real_train[i+1][:,:n_train]
y_predict_train[i+1] = np.array(prediction_train['ym'])
if train_ratio < 1:
y_predict_val[i+1] = y_predict_train[i+1][:,n_train:]
y_predict_train[i+1] = y_predict_train[i+1][:,:n_train]
else:
y_predict_val[i+1] = None
train_error[i+1] = np.array(prediction_train['ye'])
if train_ratio < 1:
val_error[i+1] = train_error[i+1][:,n_train:]
train_error[i+1] = train_error[i+1][:,:n_train]
MSE_val[i] = np.nansum(val_error[i+1]**2,axis=1)/np.sum(~np.isnan(val_error[i+1]),axis=1)
else:
MSE_val[i] = None
val_error[i+1] = None
MSE_train[i] = np.nansum(train_error[i+1]**2,axis=1)/np.sum(~np.isnan(train_error[i+1]),axis=1)
'''Prediction for testing data is done already if y_test is not none'''
if test:
prediction_test = sio.loadmat(data_url+'kstep_testing.mat')
y_real_test = np.array(prediction_test['yp'])
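# Editorial illustration (not part of the original module): with multiple series, the scaler and
# model are fit only on the first train_ratio fraction of every series, as in the loops above.
# The series lengths below are made up.
def _example_per_series_training_split():
    timeindex = {1: np.arange(10), 2: np.arange(6)}
    train_ratio = 0.5
    return [round(train_ratio * np.shape(timeindex[i + 1])[0]) for i in range(2)]  # [5, 3]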
"""
Module for handling molecules. Uses the pymatgen.core.structure.Molecule
class as a base. Has a function for reorienting molecules
(reoriented_molecule), and for calculating valid orientations within a Wyckoff
position based on symmetry (orientation_in_wyckoff_position).
The orientation class can be used to identify
degrees of freedom for molecules in Wyckoff positions with certain symmetry
constraints.
"""
# Imports
import numpy as np
from copy import deepcopy
from scipy.spatial.transform import Rotation
import networkx as nx
from random import choice
# ------------------------------
# External Libraries
from pymatgen.core.structure import Molecule
from pymatgen.symmetry.analyzer import PointGroupAnalyzer, generate_full_symmops
from pymatgen.core.bonds import CovalentBond
# PyXtal imports
from pyxtal.msg import printx
from pyxtal.tolerance import Tol_matrix
from pyxtal.database.element import Element
from pyxtal.operations import SymmOp, OperationAnalyzer, rotate_vector, angle
from pyxtal.database.collection import Collection
# Define functions
# ------------------------------
molecule_collection = Collection("molecules")
class pyxtal_molecule:
"""
Extended molecule class based on pymatgen.core.structure.Molecule
The added features include:
0, parse the input
1, estimate volume/tolerance/radii
2, find and store symmetry (todo)
3, get the principle axis (todo)
4, re-align the molecule (todo)
Args:
mol: a string to represent the molecule
tm: tolerance matrix
"""
def __init__(self, mol=None, symmetrize=True, tm=Tol_matrix(prototype="molecular")):
mo = None
if type(mol) == str:
# Parse molecules: either file or molecule name
tmp = mol.split(".")
self.name = tmp[0]
if len(tmp) > 1:
# Load the molecule from the given file
if tmp[-1] in ["xyz", "gjf", "g03", "json"]:
import os
if os.path.exists(mol):
mo = Molecule.from_file(mol)
else:
raise NameError("{:s} is not a valid path".format(mol))
else:
raise NameError("{:s} is not a supported format".format(tmp[-1]))
else:
# print('\nLoad the molecule {:s} from collections'.format(mol))
mo = molecule_collection[mol]
elif hasattr(mol, "sites"): # pymatgen molecule
self.name = str(mol.formula)
mo = mol
if mo is None:
msg = "Could not create molecules from given input: {:s}".format(mol)
raise NameError(msg)
self.props = mo.site_properties
if len(mo) > 1:
if symmetrize:
pga = PointGroupAnalyzer(mo)
mo = pga.symmetrize_molecule()["sym_mol"]
mo = self.add_site_props(mo)
self.mol = mo
self.tm = tm
self.get_box()
self.volume = self.box.volume
self.get_radius()
self.get_symbols()
self.get_tols_matrix()
def __str__(self):
return '[' + self.name + ']'
def save_dict(self):
return self.mol.as_dict()
def copy(self):
"""
simply copy the structure
"""
return deepcopy(self)
def reset_positions(self, coors):
"""
reset the coordinates
"""
from pymatgen.core.sites import Site
if len(coors) != len(self.mol._sites):
raise ValueError("number of atoms is inconsistent!")
else:
for i, coor in enumerate(coors):
_site = self.mol._sites[i]
new_site = Site(_site.species, coor, properties=_site.properties)
self.mol._sites[i] = new_site
@classmethod
def load_dict(cls, dicts):
"""
load the molecule from a dictionary
"""
mol = Molecule.from_dict(dicts)
return cls(mol)
def swap_axis(self, ax):
"""
swap the molecular axis
"""
coords = self.mol.cart_coords[:, ax]
mo = Molecule(self.symbols, coords)
mo = self.add_site_props(mo)
return pyxtal_molecule(mo, self.tm)
def add_site_props(self, mo):
if len(self.props) > 0:
for key in self.props.keys():
mo.add_site_property(key, self.props[key])
return mo
def get_box(self):
"""
Given a molecule, find a minimum orthorhombic box containing it.
Size is calculated using min and max x, y, and z values,
plus the padding defined by the vdw radius
For best results, call oriented_molecule first.
Args:
mol: a pymatgen Molecule object. Should be oriented along its principal axes.
Returns:
a Box object
"""
mol, P = reoriented_molecule(self.mol)
minx, miny, minz, maxx, maxy, maxz = 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
for p in mol:
x, y, z = p.coords
r = Element(p.species_string).vdw_radius
if x - r < minx:
minx = x - r
if y - r < miny:
miny = y - r
if z - r < minz:
minz = z - r
if x + r > maxx:
maxx = x + r
if y + r > maxy:
maxy = y + r
if z + r > maxz:
maxz = z + r
self.box = Box(minx, maxx, miny, maxy, minz, maxz)
self.axes = P
def get_radius(self):
r_max = 0
for coord, number in zip(self.mol.cart_coords, self.mol.atomic_numbers):
radius = (
np.sqrt(np.sum(coord * coord)) + self.tm.get_tol(number, number) * 0.5
)
if radius > r_max:
r_max = radius
self.radius = r_max
# reestimate the radius if it has stick shape
rmax = max([self.box.width,self.box.height,self.box.length])
rmin = min([self.box.width,self.box.height,self.box.length])
if rmax/rmin > 3 and rmax >12:
self.radius = rmin
def has_stick_shape(self):
sizes = [self.box.width,self.box.height,self.box.length]
sizes.sort()
if sizes[2]>15: #and sizes[2]/sizes[0]>2 and sizes[2]/sizes[1]>2:
return True
else:
return False
def get_symbols(self):
self.symbols = [specie.name for specie in self.mol.species]
def get_tols_matrix(self):
"""
Returns: a 2D matrix which is used internally for distance checking.
"""
numbers = self.mol.atomic_numbers
tols = np.zeros((len(numbers), len(numbers)))
for i1, number1 in enumerate(numbers):
for i2, number2 in enumerate(numbers):
tols[i1][i2] = self.tm.get_tol(number1, number2)
if len(self.mol)==1:
tols *= 0.8 # if only one atom, reduce the tolerance
self.tols_matrix = tols
def show(self):
from pyxtal.viz import display_molecules
return display_molecules([self.mol])
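# Editorial illustration (not part of the original module): loading a molecule by name and
# inspecting the quantities derived in __init__. The collection key "CH4" is an assumption
# about what the bundled molecule database contains.
def _example_molecule_properties():
    m = pyxtal_molecule("CH4")
    return m.volume, m.radius, m.tols_matrix.shape  # tols_matrix is (5, 5) for a 5-atom molecule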
class Box:
"""
Class for storing the bounding box for a molecule. The box is oriented along the x, y, and
z axes.
Args:
minx: the minimum x value
maxx: the maximum x value
miny: the minimum y value
maxy: the maximum y value
minz: the minimum z value
maxz: the maximum z value
"""
def __init__(self, minx, maxx, miny, maxy, minz, maxz):
self.minx = float(minx)
self.maxx = float(maxx)
self.miny = float(miny)
self.maxy = float(maxy)
self.minz = float(minz)
self.maxz = float(maxz)
self.width = float(abs(maxx - minx))
self.length = float(abs(maxy - miny))
self.height = float(abs(maxz - minz))
self.minl = min(self.width, self.length, self.height)
self.maxl = max(self.width, self.length, self.height)
# middle edge length: sort the three dimensions and take the central value
self.midl = sorted((self.width, self.length, self.height))[1]
self.volume = float(self.width * self.length * self.height)
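# Editorial illustration (not part of the original module): a 2 x 3 x 4 box built from its
# min/max coordinates, with the derived edge lengths and volume. Values are made up.
def _example_box_dimensions():
    b = Box(0, 2, 0, 3, 0, 4)
    return b.width, b.length, b.height, b.volume  # (2.0, 3.0, 4.0, 24.0)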
class Orientation:
"""
Stores orientations for molecules based on vector constraints.
Can be stored to regenerate orientations consistent with a given constraint
vector, without re-calling orientation_in_wyckoff_position. Allows for
generating orientations which differ only in their rotation about a given
axis.
Args:
matrix: a 3x3 rotation matrix describing the orientation (and/or
inversion) to store
degrees: the number of degrees of freedom...
0 - The orientation refers to a single rotation matrix
1 - The orientation can be rotated about a single axis
2 - The orientation can be any pure rotation matrix
axis:
an optional axis about which the orientation can rotate. Only used
if degrees is equal to 1
"""
def __init__(self, matrix=None, degrees=2, axis=None):
self.matrix = np.array(matrix)
self.degrees = degrees # The number of degrees of freedom.
if degrees == 1:
if axis is None:
raise ValueError("axis is required for orientation")
else:
axis /= np.linalg.norm(axis)
self.axis = axis
self.r = Rotation.from_matrix(self.matrix) # scipy transform.Rotation class
self.angle = None
def __str__(self):
s = "-------PyXtal.molecule.Orientation class----\n"
s += "degree of freedom: {:d}\n".format(self.degrees)
s += "Rotation matrix:\n"
s += "{:6.3f} {:6.3f} {:6.3f}\n".format(*self.matrix[:,0])
s += "{:6.3f} {:6.3f} {:6.3f}\n".format(*self.matrix[:,1])
s += "{:6.3f} {:6.3f} {:6.3f}\n".format(*self.matrix[:,2])
if self.axis is not None:
s += "Rotation axis\n"
s += "{:6.2f} {:6.2f} {:6.3f}\n".format(*self.axis)
return s
def reset_matrix(self, matrix):
self.matrix = matrix
self.r = Rotation.from_matrix(self.matrix)
def __repr__(self):
return str(self)
def copy(self):
return deepcopy(self)
def save_dict(self):
dict0 = {"matrix": self.matrix,
"degrees": self.degrees,
"axis": self.axis
}
return dict0
@classmethod
def load_dict(cls, dicts):
matrix = dicts['matrix']
degrees = dicts['degrees']
axis = dicts['axis']
return cls(matrix, degrees, axis)
def change_orientation(self, angle="random", flip=False):
"""
Allows for specification of an angle (possibly random) to
rotate about the constraint axis.
Args:
angle: an angle to rotate about the constraint axis.
If "random", chooses a random rotation angle.
If self.degrees==2, chooses a random rotation matrix.
If self.degrees==1, only apply on angle
If self.degrees==0, no change
"""
if self.degrees >= 1:
# choose the axis
if self.axis is None:
axis = np.random.RandomState().rand(3) - 0.5
self.axis = axis / np.linalg.norm(axis)
# parse the angle
if angle == "random":
angle = np.random.RandomState().rand() * np.pi * 2
self.angle = angle
# update the matrix
r1 = Rotation.from_rotvec(self.angle * self.axis)
if self.degrees == 2 and flip:
if np.random.random()>0.5:
ax = choice(['x','y','z'])
angle0 = choice([90, 180, 270])
r2 = Rotation.from_euler(ax, angle0, degrees=True)
r1 = r2*r1
self.r = r1 * self.r
#self.r *= r1
self.matrix = self.r.as_matrix()
def rotate_by_matrix(self, matrix, ignore_constraint=True):
"""
rotate
Args:
matrix: 3*3 rotation matrix
"""
if not ignore_constraint:
if self.degrees == 0:
raise ValueError("cannot rotate")
elif self.degrees == 1:
axis = self.axis
vec = Rotation.from_matrix(matrix).as_rotvec()
if angle(vec, self.axis) > 1e-2 and angle(vec, -self.axis) > 1e-2:
raise ValueError("must rotate along the given axis")
else:
axis = None
matrix = matrix.dot(self.matrix)
return Orientation(matrix, self.degrees, axis)
def get_matrix(self, angle="random"):
"""
Generate a 3x3 rotation matrix consistent with the orientation's
constraints. Allows for specification of an angle (possibly random) to
rotate about the constraint axis.
Args:
angle: an angle to rotate about the constraint axis. If "random",
chooses a random rotation angle. If self.degrees==2, chooses a
random 3d rotation matrix to multiply by. If the original matrix
is wanted, set angle=0, or call self.matrix
Returns:
a 3x3 rotation (and/or inversion) matrix (numpy array)
"""
if self.degrees == 2:
if angle == "random":
axis = np.random.sample(3)
axis = axis / np.linalg.norm(axis)
angle = np.random.random() * np.pi * 2
else:
axis = self.axis
return Rotation.from_rotvec(angle * axis).as_matrix()
elif self.degrees == 1:
if angle == "random":
angle = np.random.random() * np.pi * 2
else:
angle = self.angle
return Rotation.from_rotvec(angle * self.axis).as_matrix()
elif self.degrees == 0:
return self.matrix
def get_op(self, angle=None):
"""
Generate a SymmOp object consistent with the orientation's
constraints. Allows for specification of an angle (possibly random) to
rotate about the constraint axis.
Args:
angle: an angle to rotate about the constraint axis. If "random",
chooses a random rotation angle. If self.degrees==2, chooses a
random 3d rotation matrix to multiply by. If the original matrix
is wanted, set angle=0, or call self.matrix
Returns:
pymatgen.core.structure. SymmOp object
"""
#if angle is not None:
# self.change_orientation(angle)
return SymmOp.from_rotation_and_translation(self.matrix, [0, 0, 0])
@classmethod
def from_constraint(self, v1, c1):
"""
Generate an orientation object given a constraint axis c1, and a
corresponding vector v1. v1 will be rotated onto c1, and the resulting
orientation will have a rotational degree of freedom about c1.
Args:
v1: a 1x3 vector in the original reference frame
c1: a corresponding axis which v1 must be mapped to
Returns:
an orientation object consistent with the supplied constraint
"""
# c1 is the constraint vector; v1 will be rotated onto it
m = rotate_vector(v1, c1)
return Orientation(m, degrees=1, axis=c1)
@classmethod
def from_constraints(self, v1, c1, v2, c2):
"""
Generate an orientation object given two constraint vectors
Args:
v1: a 1x3 vector in the original reference frame
c1: a corresponding axis which v1 must be mapped to
v2: a second 1x3 vector in the original reference frame
c2: a corresponding axis which v2 must be mapped to
Returns:
an orientation object consistent with the supplied constraints
"""
T = rotate_vector(v1, c1)
phi = angle(c1, c2)
phi2 = angle(c1, (np.dot(T, v2)))
if not np.isclose(phi, phi2, rtol=0.01):
printx("Error: constraints and vectors do not match.", priority=1)
return
r = np.sin(phi)
c = np.linalg.norm(np.dot(T, v2) - c2)
theta = np.arccos(1 - (c ** 2) / (2 * (r ** 2)))
R = Rotation.from_rotvec(theta * c1).as_matrix()
T2 = np.dot(R, T)
a = angle(np.dot(T2, v2), c2)
if not np.isclose(a, 0, rtol=0.01):
T2 = np.dot(np.linalg.inv(R), T)
a = angle(np.dot(T2, v2), c2)
if not np.isclose(a, 0, rtol=0.01):
printx("Error: Generated incorrect rotation: " + str(theta), priority=1)
return Orientation(T2, degrees=0)
def random_orientation(self):
"""
Applies random rotation (if possible) and returns a new orientation with
the new base matrix.
Returns:
a new orientation object with a different base rotation matrix
"""
self.change_orientation()
return self
def get_Euler_angles(self):
return self.r.as_euler('zxy', degrees=True)
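# Editorial illustration (not part of the original module): a fixed (degrees=0) orientation
# built from a 90-degree rotation about z maps the x axis onto the y axis.
def _example_orientation_rotation():
    m = Rotation.from_rotvec(np.pi / 2 * np.array([0.0, 0.0, 1.0])).as_matrix()
    o = Orientation(m, degrees=0)
    return o.matrix.dot(np.array([1.0, 0.0, 0.0]))  # approximately [0, 1, 0]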
def get_inertia_tensor(coords):
"""
Calculate the symmetric inertia tensor for a molecule; it is used to find
the principal axes of symmetry.
Args:
coords: [N, 3] array of coordinates
Returns:
a 3x3 numpy array representing the inertia tensor
"""
coords -= np.mean(coords, axis=0)
Inertia = np.zeros([3,3])
Inertia[0,0] = np.sum(coords[:,1]**2 + coords[:,2]**2)
Inertia[1,1] = np.sum(coords[:,0]**2 + coords[:,2]**2)
Inertia[2,2] = np.sum(coords[:,0]**2 + coords[:,1]**2)
Inertia[0,1] = Inertia[1,0] = -np.sum(coords[:,0]*coords[:,1])
Inertia[0,2] = Inertia[2,0] = -np.sum(coords[:,0]*coords[:,2])
Inertia[1,2] = Inertia[2,1] = -np.sum(coords[:,1]*coords[:,2])
return Inertia
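# Editorial illustration (not part of the original module): for two points on the x axis, the
# mean-centred inertia tensor has no x-x contribution and equal y-y / z-z terms.
def _example_inertia_tensor_check():
    pts = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]])
    return get_inertia_tensor(pts)  # approximately diag(0, 2, 2)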
def reoriented_molecule(mol, nested=False):
"""
Reorient a molecule so that its principal axes are aligned with the
identity matrix.
Args:
mol: a Molecule object
nested: keep track of how many times the function
has been called recursively
Returns:
new_mol: a reoriented copy of the original molecule.
P: the 3x3 rotation matrix used to obtain it.
"""
coords = mol.cart_coords
numbers = mol.atomic_numbers
coords -= np.mean(coords, axis=0)
A = get_inertia_tensor(coords)
# Store the eigenvectors of the inertia tensor
P = np.linalg.eigh(A)[1]
if np.linalg.det(P) < 0:
P[0] *= -1
coords = np.dot(coords, P)
return Molecule(numbers, coords), P
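# Editorial illustration (not part of the original module): projecting mean-centred coordinates
# onto the eigenvectors of their inertia tensor yields an (approximately) diagonal tensor, which
# is the property reoriented_molecule exploits. Random coordinates are used for demonstration.
def _example_reorientation_diagonalizes_inertia():
    coords = np.random.rand(6, 3)
    coords -= np.mean(coords, axis=0)
    P = np.linalg.eigh(get_inertia_tensor(coords))[1]
    return get_inertia_tensor(np.dot(coords, P))  # off-diagonal entries ~ 0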
def get_symmetry(mol, already_oriented=False):
"""
Return a molecule's point symmetry.
Note: for linear molecules, infinitesimal rotations are treated as 12-fold
rotations (the pi/6 operations added below), which works for 3d and 2d point groups.
Args:
mol: a Molecule object
already_oriented: whether or not the principle axes of mol are already
reoriented. Can save time if True, but is not required.
Returns:
a list of SymmOp objects which leave the molecule unchanged when applied
"""
# For single atoms, we cannot represent the point group using a list of operations
if len(mol) == 1:
return []
pga = PointGroupAnalyzer(mol)
# Handle linear molecules
if "*" in pga.sch_symbol:
if already_oriented == False:
# Reorient the molecule
oriented_mol, P = reoriented_molecule(mol)
pga = PointGroupAnalyzer(oriented_mol)
pg = pga.get_pointgroup()
symm_m = []
for op in pg:
symm_m.append(op)
# Add 12-fold rotations and reflections in place of the infinitesimal rotation
for axis in [[1, 0, 0], [0, 1, 0], [0, 0, 1]]:
# op = SymmOp.from_rotation_and_translation(aa2matrix(axis, np.pi/6), [0,0,0])
m1 = Rotation.from_rotvec(np.pi / 6 * axis).as_matrix()
op = SymmOp.from_rotation_and_translation(m1, [0, 0, 0])
if pga.is_valid_op(op):
symm_m.append(op)
# Any molecule with infinitesimal symmetry is linear;
# Thus, it possesses mirror symmetry for any axis perpendicular
# to the rotational axis. pymatgen does not add this symmetry
# for all linear molecules - for example, hydrogen
if axis == [1, 0, 0]:
symm_m.append(SymmOp.from_xyz_string("x,-y,z"))
symm_m.append(SymmOp.from_xyz_string("x,y,-z"))
r = SymmOp.from_xyz_string("-x,y,-z")
elif axis == [0, 1, 0]:
symm_m.append(SymmOp.from_xyz_string("-x,y,z"))
symm_m.append(SymmOp.from_xyz_string("x,y,-z"))
r = SymmOp.from_xyz_string("-x,-y,z")
elif axis == [0, 0, 1]:
symm_m.append(SymmOp.from_xyz_string("-x,y,z"))
symm_m.append(SymmOp.from_xyz_string("x,-y,z"))
r = SymmOp.from_xyz_string("x,-y,-z")
# Generate a full list of SymmOps for the molecule's pointgroup
symm_m = generate_full_symmops(symm_m, 1e-3)
break
# Reorient the SymmOps into mol's original frame
if not already_oriented:
new = []
for op in symm_m:
new.append(P.inverse * op * P)
return new
elif already_oriented:
return symm_m
# Handle nonlinear molecules
else:
pg = pga.get_pointgroup()
symm_m = []
for op in pg:
symm_m.append(op)
return symm_m
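# Editorial illustration (not part of the original module): a C2v molecule such as water should
# yield 4 point-group operations (E, C2 and two mirror planes). The collection key "H2O" is an
# assumption about what the bundled molecule database contains.
def _example_point_symmetry_count():
    mol = molecule_collection["H2O"]
    return len(get_symmetry(mol))  # expected: 4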
def orientation_in_wyckoff_position(
mol,
wyckoff_position,
randomize=True,
exact_orientation=False,
already_oriented=False,
allow_inversion=True,
rtol = 1e-2,
):
"""
Tests if a molecule meets the symmetry requirements of a Wyckoff position,
and returns the valid orientations.
Args:
mol: a Molecule object. Orientation is arbitrary
wyckoff_position: a pyxtal.symmetry.Wyckoff_position object
randomize: whether or not to apply a random rotation consistent with
the symmetry requirements
exact_orientation: whether to only check compatibility for the provided
orientation of the molecule. Used within general case for checking.
If True, this function only returns True or False
already_oriented: whether or not to reorient the principal axes
when calling get_symmetry. Setting to True can remove redundancy,
but is not necessary
allow_inversion: whether or not to allow chiral molecules to be
inverted. Should only be True if the chemical and biological
properties of the mirror image are known to be suitable for the
desired application
Returns:
a list of operations.Orientation objects which can be applied to the
molecule while allowing it to satisfy the symmetry requirements of the
Wyckoff position. If no orientations are found, returns False.
"""
# For single atoms, there are no constraints
if len(mol) == 1:
return [Orientation([[1, 0, 0], [0, 1, 0], [0, 0, 1]], degrees=2)]
wyckoffs = wyckoff_position.ops
w_symm = wyckoff_position.symmetry_m
index = wyckoff_position.index
# Obtain the Wyckoff symmetry
symm_w = w_symm[0]
pga = PointGroupAnalyzer(mol)
# Check exact orientation
if exact_orientation is True:
mo = deepcopy(mol)
valid = True
for op in symm_w:
if not pga.is_valid_op(op):
valid = False
if valid is True:
return True
elif valid is False:
return False
# Obtain molecular symmetry, exact_orientation==False
symm_m = get_symmetry(mol, already_oriented=already_oriented)
# Store OperationAnalyzer objects for each molecular SymmOp
chiral = True
opa_m = []
for op_m in symm_m:
opa = OperationAnalyzer(op_m)
opa_m.append(opa)
if opa.type == "rotoinversion":
chiral = False
elif opa.type == "inversion":
chiral = False
# If molecule is chiral and allow_inversion is False,
# check if WP breaks symmetry
if chiral is True:
if allow_inversion is False:
for op in wyckoffs:
if np.linalg.det(op.rotation_matrix) < 0:
printx(
"Warning: cannot place chiral molecule in spagegroup", priority=2,
)
return False
# Store OperationAnalyzer objects for each Wyckoff symmetry SymmOp
opa_w = []
for op_w in symm_w:
opa_w.append(OperationAnalyzer(op_w))
# Check for constraints from the Wyckoff symmetry...
# If we find ANY two constraints (SymmOps with unique axes), the molecule's
# point group MUST contain SymmOps which can be aligned to these particular
# constraints. However, there may be multiple compatible orientations of the
# molecule consistent with these constraints
constraint1 = None
constraint2 = None
for i, op_w in enumerate(symm_w):
if opa_w[i].axis is not None:
constraint1 = opa_w[i]
for j, op_w in enumerate(symm_w):
if opa_w[j].axis is not None:
dot = np.dot(opa_w[i].axis, opa_w[j].axis)
if (not np.isclose(dot, 1, rtol=rtol)) and (
not np.isclose(dot, -1, rtol=rtol)
):
constraint2 = opa_w[j]
break
break
# Indirectly store the angle between the constraint axes
if constraint1 is not None and constraint2 is not None:
dot_w = np.dot(constraint1.axis, constraint2.axis)
# Generate 1st consistent molecular constraints
constraints_m = []
if constraint1 is not None:
for i, opa1 in enumerate(opa_m):
if opa1.is_conjugate(constraint1):
constraints_m.append([opa1, []])
# Generate 2nd constraint in opposite direction
extra = deepcopy(opa1)
extra.axis = [opa1.axis[0] * -1, opa1.axis[1] * -1, opa1.axis[2] * -1]
constraints_m.append([extra, []])
# Remove redundancy for the first constraints
list_i = list(range(len(constraints_m)))
list_j = list(range(len(constraints_m)))
copy = deepcopy(constraints_m)
for i, c1 in enumerate(copy):
if i in list_i:
for j, c2 in enumerate(copy):
if i > j and j in list_j and j in list_i:
# Check if axes are colinear
if np.isclose(np.dot(c1[0].axis, c2[0].axis), 1, rtol=rtol):
list_i.remove(j)
list_j.remove(j)
# Check if axes are symmetrically equivalent
else:
cond1 = False
cond2 = False
for opa in opa_m:
if opa.type == "rotation":
op = opa.op
if np.isclose(
np.dot(op.operate(c1[0].axis), c2[0].axis),
1,
rtol=5*rtol,
):
cond1 = True
break
if cond1 is True: # or cond2 is True:
list_i.remove(j)
list_j.remove(j)
c_m = deepcopy(constraints_m)
constraints_m = []
for i in list_i:
constraints_m.append(c_m[i])
# Generate 2nd consistent molecular constraints
valid = list(range(len(constraints_m)))
if constraint2 is not None:
for i, c in enumerate(constraints_m):
opa1 = c[0]
for j, opa2 in enumerate(opa_m):
if opa2.is_conjugate(constraint2):
dot_m = np.dot(opa1.axis, opa2.axis)
# Ensure that the angles are equal
if abs(dot_m - dot_w) < 0.02 or abs(dot_m + dot_w) < 0.02:
constraints_m[i][1].append(opa2)
# Generate 2nd constraint in opposite direction
extra = deepcopy(opa2)
extra.axis = [
opa2.axis[0] * -1,
opa2.axis[1] * -1,
opa2.axis[2] * -1,
]
constraints_m[i][1].append(extra)
# If no consistent constraints are found, remove first constraint
if constraints_m[i][1] == []:
valid.remove(i)
copy = deepcopy(constraints_m)
constraints_m = []
for i in valid:
constraints_m.append(copy[i])
# Generate orientations consistent with the possible constraints
orientations = []
# Loop over molecular constraint sets
for c1 in constraints_m:
v1 = c1[0].axis
v2 = constraint1.axis
T = rotate_vector(v1, v2)
# If there is only one constraint
if c1[1] == []:
o = Orientation(T, degrees=1, axis=constraint1.axis)
orientations.append(o)
else:
# Loop over second molecular constraints
for opa in c1[1]:
phi = angle(constraint1.axis, constraint2.axis)
phi2 = angle(constraint1.axis, np.dot(T, opa.axis))
if np.isclose(phi, phi2, rtol=rtol):
r = np.sin(phi)
c = np.linalg.norm(np.dot(T, opa.axis) - constraint2.axis)
theta = np.arccos(1 - (c ** 2) / (2 * (r ** 2)))
# R = aa2matrix(constraint1.axis, theta)
R = Rotation.from_rotvec(theta * constraint1.axis).as_matrix()
T2 = np.dot(R, T)
a = angle(np.dot(T2, opa.axis), constraint2.axis)
if not np.isclose(a, 0, rtol=rtol):
# -*- coding: utf-8 -*-
"""
pysteps.nowcasts.steps
======================
Implementation of the STEPS stochastic nowcasting method as described in
:cite:`Seed2003`, :cite:`BPS2006` and :cite:`SPN2013`.
.. autosummary::
:toctree: ../generated/
forecast
"""
import numpy as np
import scipy.ndimage
import time
from pysteps import cascade
from pysteps import extrapolation
from pysteps import noise
from pysteps import utils
from pysteps.nowcasts import utils as nowcast_utils
from pysteps.postprocessing import probmatching
from pysteps.timeseries import autoregression, correlation
try:
import dask
DASK_IMPORTED = True
except ImportError:
DASK_IMPORTED = False
def forecast(
R,
V,
timesteps,
n_ens_members=24,
n_cascade_levels=6,
R_thr=None,
kmperpixel=None,
timestep=None,
extrap_method="semilagrangian",
decomp_method="fft",
bandpass_filter_method="gaussian",
noise_method="nonparametric",
noise_stddev_adj=None,
ar_order=2,
vel_pert_method="bps",
conditional=False,
probmatching_method="cdf",
mask_method="incremental",
callback=None,
return_output=True,
seed=None,
num_workers=1,
fft_method="numpy",
domain="spatial",
extrap_kwargs=None,
filter_kwargs=None,
noise_kwargs=None,
vel_pert_kwargs=None,
mask_kwargs=None,
measure_time=False,
):
"""Generate a nowcast ensemble by using the Short-Term Ensemble Prediction
System (STEPS) method.
Parameters
----------
R: array-like
Array of shape (ar_order+1,m,n) containing the input precipitation fields
ordered by timestamp from oldest to newest. The time steps between the
inputs are assumed to be regular.
V: array-like
Array of shape (2,m,n) containing the x- and y-components of the advection
field. The velocities are assumed to represent one time step between the
inputs. All values are required to be finite.
timesteps: int or list of floats
Number of time steps to forecast or a list of time steps for which the
forecasts are computed (relative to the input time step). The elements of
the list are required to be in ascending order.
n_ens_members: int, optional
The number of ensemble members to generate.
n_cascade_levels: int, optional
The number of cascade levels to use.
R_thr: float, optional
Specifies the threshold value for minimum observable precipitation
intensity. Required if mask_method is not None or conditional is True.
kmperpixel: float, optional
Spatial resolution of the input data (kilometers/pixel). Required if
vel_pert_method is not None or mask_method is 'incremental'.
timestep: float, optional
Time step of the motion vectors (minutes). Required if vel_pert_method is
not None or mask_method is 'incremental'.
extrap_method: str, optional
Name of the extrapolation method to use. See the documentation of
pysteps.extrapolation.interface.
decomp_method: {'fft'}, optional
Name of the cascade decomposition method to use. See the documentation
of pysteps.cascade.interface.
bandpass_filter_method: {'gaussian', 'uniform'}, optional
Name of the bandpass filter method to use with the cascade decomposition.
See the documentation of pysteps.cascade.interface.
noise_method: {'parametric','nonparametric','ssft','nested',None}, optional
Name of the noise generator to use for perturbating the precipitation
field. See the documentation of pysteps.noise.interface. If set to None,
no noise is generated.
noise_stddev_adj: {'auto','fixed',None}, optional
Optional adjustment for the standard deviations of the noise fields added
to each cascade level. This is done to compensate for incorrect std. dev.
estimates of cascade levels due to the presence of no-rain areas. 'auto'=use
the method implemented in pysteps.noise.utils.compute_noise_stddev_adjs.
'fixed'= use the formula given in :cite:`BPS2006` (eq. 6), None=disable
noise std. dev adjustment.
ar_order: int, optional
The order of the autoregressive model to use. Must be >= 1.
vel_pert_method: {'bps',None}, optional
Name of the noise generator to use for perturbing the advection field. See
the documentation of pysteps.noise.interface. If set to None, the advection
field is not perturbed.
conditional: bool, optional
If set to True, compute the statistics of the precipitation field
conditionally by excluding pixels where the values are below the threshold
R_thr.
mask_method: {'obs','sprog','incremental',None}, optional
The method to use for masking no precipitation areas in the forecast field.
The masked pixels are set to the minimum value of the observations.
'obs' = apply R_thr to the most recently observed precipitation intensity
field, 'sprog' = use the smoothed forecast field from S-PROG, where the
AR(p) model has been applied, 'incremental' = iteratively buffer the mask
with a certain rate (currently it is 1 km/min), None=no masking.
probmatching_method: {'cdf','mean',None}, optional
Method for matching the statistics of the forecast field with those of
the most recently observed one. 'cdf'=map the forecast CDF to the observed
one, 'mean'=adjust only the conditional mean value of the forecast field
in precipitation areas, None=no matching applied. Using 'mean' requires
that mask_method is not None.
callback: function, optional
Optional function that is called after computation of each time step of
the nowcast. The function takes one argument: a three-dimensional array
of shape (n_ens_members,h,w), where h and w are the height and width
of the input field R, respectively. This can be used, for instance,
writing the outputs into files.
return_output: bool, optional
Set to False to disable returning the outputs as numpy arrays. This can
save memory if the intermediate results are written to output files using
the callback function.
seed: int, optional
Optional seed number for the random generators.
num_workers: int, optional
The number of workers to use for parallel computation. Applicable if dask
is enabled or pyFFTW is used for computing the FFT. When num_workers>1, it
is advisable to disable OpenMP by setting the environment variable
OMP_NUM_THREADS to 1. This avoids slowdown caused by too many simultaneous
threads.
fft_method: str, optional
A string defining the FFT method to use (see utils.fft.get_method).
Defaults to 'numpy' for compatibility reasons. If pyFFTW is installed,
the recommended method is 'pyfftw'.
domain: {"spatial", "spectral"}
If "spatial", all computations are done in the spatial domain (the
classical STEPS model). If "spectral", the AR(2) models and stochastic
perturbations are applied directly in the spectral domain to reduce
memory footprint and improve performance :cite:`PCH2019b`.
extrap_kwargs: dict, optional
Optional dictionary containing keyword arguments for the extrapolation
method. See the documentation of pysteps.extrapolation.
filter_kwargs: dict, optional
Optional dictionary containing keyword arguments for the filter method.
See the documentation of pysteps.cascade.bandpass_filters.py.
noise_kwargs: dict, optional
Optional dictionary containing keyword arguments for the initializer of
the noise generator. See the documentation of pysteps.noise.fftgenerators.
vel_pert_kwargs: dict, optional
Optional dictionary containing keyword arguments 'p_par' and 'p_perp' for
the initializer of the velocity perturbator. The choice of the optimal
parameters depends on the domain and the used optical flow method.
Default parameters from :cite:`BPS2006`:
p_par = [10.88, 0.23, -7.68]
p_perp = [5.76, 0.31, -2.72]
Parameters fitted to the data (optical flow/domain):
darts/fmi:
p_par = [13.71259667, 0.15658963, -16.24368207]
p_perp = [8.26550355, 0.17820458, -9.54107834]
darts/mch:
p_par = [24.27562298, 0.11297186, -27.30087471]
p_perp = [-7.80797846e+01, -3.38641048e-02, 7.56715304e+01]
darts/fmi+mch:
p_par = [16.55447057, 0.14160448, -19.24613059]
p_perp = [14.75343395, 0.11785398, -16.26151612]
lucaskanade/fmi:
p_par = [2.20837526, 0.33887032, -2.48995355]
p_perp = [2.21722634, 0.32359621, -2.57402761]
lucaskanade/mch:
p_par = [2.56338484, 0.3330941, -2.99714349]
p_perp = [1.31204508, 0.3578426, -1.02499891]
lucaskanade/fmi+mch:
p_par = [2.31970635, 0.33734287, -2.64972861]
p_perp = [1.90769947, 0.33446594, -2.06603662]
vet/fmi:
p_par = [0.25337388, 0.67542291, 11.04895538]
p_perp = [0.02432118, 0.99613295, 7.40146505]
vet/mch:
p_par = [0.5075159, 0.53895212, 7.90331791]
p_perp = [0.68025501, 0.41761289, 4.73793581]
vet/fmi+mch:
p_par = [0.29495222, 0.62429207, 8.6804131 ]
p_perp = [0.23127377, 0.59010281, 5.98180004]
fmi=Finland, mch=Switzerland, fmi+mch=both pooled into the same data set
The above parameters have been fitten by using run_vel_pert_analysis.py
and fit_vel_pert_params.py located in the scripts directory.
See pysteps.noise.motion for additional documentation.
mask_kwargs: dict
Optional dictionary containing mask keyword arguments 'mask_f' and
'mask_rim', the factor defining the mask increment and the rim size,
respectively.
The mask increment is defined as mask_f*timestep/kmperpixel.
measure_time: bool
If set to True, measure, print and return the computation time.
Returns
-------
out: ndarray
If return_output is True, a four-dimensional array of shape
(n_ens_members,num_timesteps,m,n) containing a time series of forecast
precipitation fields for each ensemble member. Otherwise, a None value
is returned. The time series starts from t0+timestep, where timestep is
taken from the input precipitation fields R. If measure_time is True, the
return value is a three-element tuple containing the nowcast array, the
initialization time of the nowcast generator and the time used in the
main loop (seconds).
See also
--------
pysteps.extrapolation.interface, pysteps.cascade.interface,
pysteps.noise.interface, pysteps.noise.utils.compute_noise_stddev_adjs
References
----------
:cite:`Seed2003`, :cite:`BPS2006`, :cite:`SPN2013`, :cite:`PCH2019b`
"""
_check_inputs(R, V, timesteps, ar_order)
if extrap_kwargs is None:
extrap_kwargs = dict()
if filter_kwargs is None:
filter_kwargs = dict()
if noise_kwargs is None:
noise_kwargs = dict()
if vel_pert_kwargs is None:
vel_pert_kwargs = dict()
if mask_kwargs is None:
mask_kwargs = dict()
if np.any(~np.isfinite(V)):
raise ValueError("V contains non-finite values")
if mask_method not in ["obs", "sprog", "incremental", None]:
raise ValueError(
"unknown mask method %s: must be 'obs', 'sprog' or 'incremental' or None"
% mask_method
)
if conditional and R_thr is None:
raise ValueError("conditional=True but R_thr is not set")
if mask_method is not None and R_thr is None:
raise ValueError("mask_method!=None but R_thr=None")
if noise_stddev_adj not in ["auto", "fixed", None]:
raise ValueError(
"unknown noise_std_dev_adj method %s: must be 'auto', 'fixed', or None"
% noise_stddev_adj
)
if kmperpixel is None:
if vel_pert_method is not None:
raise ValueError("vel_pert_method is set but kmperpixel=None")
if mask_method == "incremental":
raise ValueError("mask_method='incremental' but kmperpixel=None")
if timestep is None:
if vel_pert_method is not None:
raise ValueError("vel_pert_method is set but timestep=None")
if mask_method == "incremental":
raise ValueError("mask_method='incremental' but timestep=None")
print("Computing STEPS nowcast:")
print("------------------------")
print("")
print("Inputs:")
print("-------")
print("input dimensions: %dx%d" % (R.shape[1], R.shape[2]))
if kmperpixel is not None:
print("km/pixel: %g" % kmperpixel)
if timestep is not None:
print("time step: %d minutes" % timestep)
print("")
print("Methods:")
print("--------")
print("extrapolation: %s" % extrap_method)
print("bandpass filter: %s" % bandpass_filter_method)
print("decomposition: %s" % decomp_method)
print("noise generator: %s" % noise_method)
print("noise adjustment: %s" % ("yes" if noise_stddev_adj else "no"))
print("velocity perturbator: %s" % vel_pert_method)
print("conditional statistics: %s" % ("yes" if conditional else "no"))
print("precip. mask method: %s" % mask_method)
print("probability matching: %s" % probmatching_method)
print("FFT method: %s" % fft_method)
print("domain: %s" % domain)
print("")
print("Parameters:")
print("-----------")
if isinstance(timesteps, int):
print("number of time steps: %d" % timesteps)
else:
print("time steps: %s" % timesteps)
print("ensemble size: %d" % n_ens_members)
print("parallel threads: %d" % num_workers)
print("number of cascade levels: %d" % n_cascade_levels)
print("order of the AR(p) model: %d" % ar_order)
if vel_pert_method == "bps":
vp_par = vel_pert_kwargs.get("p_par", noise.motion.get_default_params_bps_par())
vp_perp = vel_pert_kwargs.get(
"p_perp", noise.motion.get_default_params_bps_perp()
)
print(
"velocity perturbations, parallel: %g,%g,%g"
% (vp_par[0], vp_par[1], vp_par[2])
)
print(
"velocity perturbations, perpendicular: %g,%g,%g"
% (vp_perp[0], vp_perp[1], vp_perp[2])
)
if conditional or mask_method is not None:
print("precip. intensity threshold: %g" % R_thr)
num_ensemble_workers = n_ens_members if num_workers > n_ens_members else num_workers
if measure_time:
starttime_init = time.time()
fft = utils.get_method(fft_method, shape=R.shape[1:], n_threads=num_workers)
M, N = R.shape[1:]
# initialize the band-pass filter
filter_method = cascade.get_method(bandpass_filter_method)
filter = filter_method((M, N), n_cascade_levels, **filter_kwargs)
decomp_method, recomp_method = cascade.get_method(decomp_method)
extrapolator_method = extrapolation.get_method(extrap_method)
x_values, y_values = np.meshgrid(np.arange(R.shape[2]), np.arange(R.shape[1]))
xy_coords = np.stack([x_values, y_values])
R = R[-(ar_order + 1) :, :, :].copy()
# determine the domain mask from non-finite values
domain_mask = np.logical_or.reduce(
[~np.isfinite(R[i, :]) for i in range(R.shape[0])]
)
# determine the precipitation threshold mask
if conditional:
MASK_thr = np.logical_and.reduce(
[R[i, :, :] >= R_thr for i in range(R.shape[0])]
)
else:
MASK_thr = None
# advect the previous precipitation fields to the same position with the
# most recent one (i.e. transform them into the Lagrangian coordinates)
extrap_kwargs = extrap_kwargs.copy()
extrap_kwargs["xy_coords"] = xy_coords
extrap_kwargs["allow_nonfinite_values"] = True
res = list()
def f(R, i):
return extrapolator_method(R[i, :, :], V, ar_order - i, "min", **extrap_kwargs)[
-1
]
for i in range(ar_order):
if not DASK_IMPORTED:
R[i, :, :] = f(R, i)
else:
res.append(dask.delayed(f)(R, i))
if DASK_IMPORTED:
num_workers_ = len(res) if num_workers > len(res) else num_workers
R = np.stack(list(dask.compute(*res, num_workers=num_workers_)) + [R[-1, :, :]])
# replace non-finite values with the minimum value
R = R.copy()
for i in range(R.shape[0]):
R[i, ~np.isfinite(R[i, :])] = np.nanmin(R[i, :])
if noise_method is not None:
# get methods for perturbations
init_noise, generate_noise = noise.get_method(noise_method)
# initialize the perturbation generator for the precipitation field
pp = init_noise(R, fft_method=fft, **noise_kwargs)
if noise_stddev_adj == "auto":
print("Computing noise adjustment coefficients... ", end="", flush=True)
if measure_time:
starttime = time.time()
R_min = np.min(R)
noise_std_coeffs = noise.utils.compute_noise_stddev_adjs(
R[-1, :, :],
R_thr,
R_min,
filter,
decomp_method,
pp,
generate_noise,
20,
conditional=True,
num_workers=num_workers,
)
if measure_time:
print("%.2f seconds." % (time.time() - starttime))
else:
print("done.")
elif noise_stddev_adj == "fixed":
f = lambda k: 1.0 / (0.75 + 0.09 * k)
noise_std_coeffs = [f(k) for k in range(1, n_cascade_levels + 1)]
else:
noise_std_coeffs = np.ones(n_cascade_levels)
if noise_stddev_adj is not None:
print("noise std. dev. coeffs: %s" % str(noise_std_coeffs))
# compute the cascade decompositions of the input precipitation fields
R_d = []
for i in range(ar_order + 1):
R_ = decomp_method(
R[i, :, :],
filter,
mask=MASK_thr,
fft_method=fft,
output_domain=domain,
normalize=True,
compute_stats=True,
compact_output=True,
)
R_d.append(R_)
# normalize the cascades and rearrange them into a four-dimensional array
# of shape (n_cascade_levels,ar_order+1,m,n) for the autoregressive model
R_c = nowcast_utils.stack_cascades(R_d, n_cascade_levels)
R_d = R_d[-1]
R_d = [R_d.copy() for j in range(n_ens_members)]
# compute lag-l temporal autocorrelation coefficients for each cascade level
GAMMA = np.empty((n_cascade_levels, ar_order))
for i in range(n_cascade_levels):
GAMMA[i, :] = correlation.temporal_autocorrelation(R_c[i], mask=MASK_thr)
nowcast_utils.print_corrcoefs(GAMMA)
if ar_order == 2:
# adjust the lag-2 correlation coefficient to ensure that the AR(p)
# process is stationary
for i in range(n_cascade_levels):
GAMMA[i, 1] = autoregression.adjust_lag2_corrcoef2(GAMMA[i, 0], GAMMA[i, 1])
# estimate the parameters of the AR(p) model from the autocorrelation
# coefficients
PHI = np.empty((n_cascade_levels, ar_order + 1))
for i in range(n_cascade_levels):
PHI[i, :] = autoregression.estimate_ar_params_yw(GAMMA[i, :])
nowcast_utils.print_ar_params(PHI)
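    # For reference: with ar_order=2, a Yule-Walker fit gives
    #   phi_2 = (gamma_2 - gamma_1**2) / (1 - gamma_1**2)
    #   phi_1 = gamma_1 * (1 - phi_2)
    # and the last column of PHI is expected to hold the innovation (noise) term.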
    # keep only the last ar_order cascades; earlier ones are not needed for
    # the AR(p) model
R_c = [R_c[i][-ar_order:] for i in range(n_cascade_levels)]
# stack the cascades into a list containing all ensemble members
R_c = [
[R_c[j].copy() for j in range(n_cascade_levels)] for i in range(n_ens_members)
]
# initialize the random generators
if noise_method is not None:
randgen_prec = []
randgen_motion = []
        np.random.seed(seed)
"""
Functions for plotting hiatuses using the latest definition (v3)
Author : <NAME>
Date : 2 September 2021
Version : 1
"""
### Import packages
import sys
import math
import time
import matplotlib.pyplot as plt
import numpy as np
import calc_Hiatus_v3 as HA
import pandas as pd
import scipy.stats as stats
from mpl_toolkits.basemap import Basemap, addcyclic, shiftgrid
import palettable.cubehelix as cm
import palettable.cartocolors.qualitative as cc
import palettable.scientific.sequential as sss
import cmocean as cmocean
import calc_Utilities as UT
import calc_dataFunctions as df
import calc_Stats as dSS
import scipy.stats as sts
import matplotlib
import cmasher as cmr
### Plotting defaults
plt.rc('text',usetex=True)
plt.rc('font',**{'family':'sans-serif','sans-serif':['Avant Garde']})
###############################################################################
###############################################################################
###############################################################################
### Data preliminaries
modelGCMs = ['CESM2le']
dataset_obs = 'ERA5'
allDataLabels = modelGCMs
letters = ["a","b","c","d","e","f","g","h","i","j","k","l","m"]
datasetsingle = ['CESM2le']
monthlychoiceq = ['annual']
variables = ['T2M']
reg_name = 'SMILEGlobe'
level = 'surface'
###############################################################################
###############################################################################
randomalso = False
timeper = 'hiatus'
shuffletype = 'GAUSS'
###############################################################################
###############################################################################
land_only = False
ocean_only = False
###############################################################################
###############################################################################
baseline = np.arange(1951,1980+1,1)
###############################################################################
###############################################################################
window = 0
yearsall = np.arange(1979+window,2099+1,1)
yearsobs = np.arange(1979+window,2020+1,1)
###############################################################################
###############################################################################
numOfEns = 40
lentime = len(yearsall)
###############################################################################
###############################################################################
dataset = datasetsingle[0]
lat_bounds,lon_bounds = UT.regions(reg_name)
###############################################################################
###############################################################################
ravelyearsbinary = False
ravelbinary = False
lensalso = True
###############################################################################
###############################################################################
###############################################################################
###############################################################################
### Read in model and observational/reanalysis data
def read_primary_dataset(variq,dataset,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper,lat_bounds=lat_bounds,lon_bounds=lon_bounds):
data,lats,lons = df.readFiles(variq,dataset,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper)
datar,lats,lons = df.getRegion(data,lats,lons,lat_bounds,lon_bounds)
print('\nOur dataset: ',dataset,' is shaped',data.shape)
return datar,lats,lons
def read_obs_dataset(variq,dataset_obs,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds):
data_obs,lats_obs,lons_obs = df.readFiles(variq,dataset_obs,monthlychoice,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,timeper)
data_obs,lats_obs,lons_obs = df.getRegion(data_obs,lats_obs,lons_obs,lat_bounds,lon_bounds)
print('our OBS dataset: ',dataset_obs,' is shaped',data_obs.shape)
return data_obs,lats_obs,lons_obs
### Call functions
vv = 0
mo = 0
variq = variables[vv]
monthlychoice = monthlychoiceq[mo]
directoryfigure = '/Users/zlabe/Desktop/GmstTrendPrediction/'
saveData = monthlychoice + '_' + variq + '_' + reg_name + '_' + dataset_obs
print('*Filename == < %s >' % saveData)
### Read data
models,lats,lons = read_primary_dataset(variq,dataset,monthlychoice,numOfEns,
lensalso,randomalso,ravelyearsbinary,
ravelbinary,shuffletype,timeper,
lat_bounds,lon_bounds)
obs,lats_obs,lons_obs = read_obs_dataset(variq,dataset_obs,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds)
### Calculate global mean temperature
lon2,lat2 = np.meshgrid(lons,lats)
modelsm = UT.calc_weightedAve(models,lat2)
obsm = UT.calc_weightedAve(obs,lat2)
### Call functions
trendlength = 10
AGWstart = 1990
years_newmodel = np.arange(AGWstart,yearsall[-1]+1,1)
years_newobs = np.arange(AGWstart,yearsobs[-1]+1,1)
vv = 0
mo = 0
variq = variables[vv]
monthlychoice = monthlychoiceq[mo]
directoryfigure = '/Users/zlabe/Desktop/GmstTrendPrediction/'
saveData = monthlychoice + '_' + variq + '_' + reg_name + '_' + dataset_obs
print('*Filename == < %s >' % saveData)
### Read data for hiatus periods
models = []
modelsm = []
obs = []
obsm = []
SLOPEthreshh_o = []
SLOPEthreshh_m = []
diff_o = []
diff_m = []
yearstrend_obsh = []
linetrend_obsh = []
indexslopeNegative_obsh = []
classes_obsh = []
yearstrend_mh = []
linetrend_mh = []
indexslopeNegative_mh = []
classes_mh = []
count = []
for i in range(len(modelGCMs)):
dataset = modelGCMs[i]
modelsq,lats,lons = read_primary_dataset(variq,dataset,monthlychoice,numOfEns,
lensalso,randomalso,ravelyearsbinary,
ravelbinary,shuffletype,timeper,
lat_bounds,lon_bounds)
obsq,lats,lons = read_obs_dataset(variq,dataset_obs,numOfEns,lensalso,randomalso,ravelyearsbinary,ravelbinary,shuffletype,lat_bounds=lat_bounds,lon_bounds=lon_bounds)
### Calculate global mean temperature
lon2,lat2 = np.meshgrid(lons,lats)
modelsmq = UT.calc_weightedAve(modelsq,lat2)
obsmq = UT.calc_weightedAve(obsq,lat2)
### Calculate thresholds for hiatus period
SLOPEthreshh_oq,diff_oq = HA.calc_thresholdOfTrend(obsmq,trendlength,yearsobs,AGWstart,'hiatus')
SLOPEthreshh_mq,diff_mq = HA.calc_thresholdOfTrend(modelsmq,trendlength,yearsall,AGWstart,'hiatus')
### Calculate actual hiatus periods in climate models and observations
yearstrend_obshq,linetrend_obshq,indexslopeNegative_obshq,classes_obshq = HA.calc_HiatusAcc(obsmq,trendlength,yearsobs,AGWstart,SLOPEthreshh_oq,'hiatus',diff_oq)
yearstrend_mhq,linetrend_mhq,indexslopeNegative_mhq,classes_mhq = HA.calc_HiatusAcc(modelsmq,trendlength,yearsall,AGWstart,SLOPEthreshh_mq,'hiatus',diff_oq)
    ### Count how many hiatus periods were detected
countq = len(indexslopeNegative_mhq)
### Save for each data set separately
models.append(modelsq)
modelsm.append(modelsmq)
obs.append(obsq)
obsm.append(obsmq)
SLOPEthreshh_o.append(SLOPEthreshh_oq)
SLOPEthreshh_m.append(SLOPEthreshh_mq)
diff_o.append(diff_oq)
diff_m.append(diff_mq)
yearstrend_obsh.append(yearstrend_obshq)
linetrend_obsh.append(linetrend_obshq)
indexslopeNegative_obsh.append(indexslopeNegative_obshq)
classes_obsh.append(classes_obshq)
yearstrend_mh.append(yearstrend_mhq)
linetrend_mh.append(linetrend_mhq)
indexslopeNegative_mh.append(indexslopeNegative_mhq)
classes_mh.append(classes_mhq)
count.append(countq)
### Convert lists to numpy arrays
models = np.asarray(models)
modelsm = np.asarray(modelsm)
obs = np.asarray(obs).squeeze()
obsm = np.asarray(obsm).squeeze()
SLOPEthreshh_o = np.asarray(SLOPEthreshh_o).squeeze()
SLOPEthreshh_m = np.asarray(SLOPEthreshh_m)
diff_o = np.asarray(diff_o).squeeze()
diff_m = np.asarray(diff_m)
yearstrend_obsh = np.asarray(yearstrend_obsh).squeeze()
linetrend_obsh = np.asarray(linetrend_obsh).squeeze()
indexslopeNegative_obsh = np.asarray(indexslopeNegative_obsh).squeeze()
classes_obsh = np.asarray(classes_obsh).squeeze()
yearstrend_mh = np.asarray(yearstrend_mh)
linetrend_mh = np.asarray(linetrend_mh)
indexslopeNegative_mh = np.asarray(indexslopeNegative_mh)
import numpy as np
import math
import tensorflow as tf
from model.model_sparse_graph_signal import Model
import six.moves.cPickle as pickle
tf.set_random_seed(0)
import time
from model import config as cf
# DATA_PATH = "data"
n_steps = 100
tf.flags.DEFINE_integer("n_steps", n_steps, "num of step.")
tf.flags.DEFINE_integer("time_interval", cf.time_interval, "the time interval")
tf.flags.DEFINE_integer("n_time_interval", cf.n_time_interval, "the number of time interval")
tf.flags.DEFINE_integer("num_rnn_layers", 2, "number of rnn layers .")
tf.flags.DEFINE_integer("cl_decay_steps", 1000, "cl_decay_steps .")
tf.flags.DEFINE_integer("num_kernel", 2, "chebyshev .")
tf.flags.DEFINE_float("learning_rate", 0.005, "learning_rate.")
tf.flags.DEFINE_integer("batch_size", 32, "batch size.")
tf.flags.DEFINE_integer("num_hidden", 32, "hidden rnn size.")
tf.flags.DEFINE_float("l1", 5e-5, "l1.")
tf.flags.DEFINE_float("l2", 1e-3, "l2.")
tf.flags.DEFINE_float("l1l2", 1.0, "l1l2.")
tf.flags.DEFINE_string("activation", "relu", "activation function.")
tf.flags.DEFINE_integer("training_iters", 200 * 3200 + 1, "max training iters.")
tf.flags.DEFINE_integer("display_step", 100, "display step.")
tf.flags.DEFINE_integer("n_hidden_dense1", 32, "dense1 size.")
tf.flags.DEFINE_integer("n_hidden_dense2", 16, "dense2 size.")
tf.flags.DEFINE_string("version", "v1", "data version.")
tf.flags.DEFINE_integer("max_grad_norm", 5, "gradient clip.")
tf.flags.DEFINE_float("stddev", 0.01, "initialization stddev.")
tf.flags.DEFINE_integer("feat_in", 100, "num of feature in")
tf.flags.DEFINE_integer("feat_out", 50, "num of feature out")
tf.flags.DEFINE_integer("lmax", 2, "max L")
tf.flags.DEFINE_integer("num_nodes", 100, "number of max nodes in cascade")
config = tf.flags.FLAGS
print("l2", config.l2)
print("learning rate:", config.learning_rate)
def get_batch(x, L, y, sz, time, n_time_interval, step, batch_size, num_step):
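    """Assemble one mini-batch for training/evaluation.

    Shapes are inferred from how the outputs are built below: batch_y is
    (batch_size, 1); batch_x and batch_L hold densified copies of the sparse
    per-sample matrices; the time-interval indices are one-hot vectors of
    length n_time_interval zero-padded to num_step entries; batch_rnn_index
    is a 0/1 mask marking the sz[id] valid RNN steps.
    """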
batch_y = np.zeros(shape=(batch_size, 1))
batch_x = []
batch_L = []
batch_time_interval_index = []
batch_rnn_index = []
start = step * batch_size % len(x)
for i in range(batch_size):
id = (i + start) % len(x)
batch_y[i, 0] = y[id]
batch_L.append(L[id].todense())
temp_x = []
for m in range(len(x[id])):
temp_x.append(x[id][m].todense())
batch_x.append(temp_x)
batch_time_interval_index_sample = []
for j in range(sz[id]):
temp_time = np.zeros(shape=(n_time_interval))
k = int(math.floor(time[id][j] / config.time_interval))
temp_time[k] = 1
batch_time_interval_index_sample.append(temp_time)
if len(batch_time_interval_index_sample) < num_step:
for i in range(num_step - len(batch_time_interval_index_sample)):
temp_time_padding = np.zeros(shape=(n_time_interval))
batch_time_interval_index_sample.append(temp_time_padding)
i = i + 1
batch_time_interval_index.append(batch_time_interval_index_sample)
rnn_index_temp = np.zeros(shape=(config.n_steps))
rnn_index_temp[:sz[id]] = 1
batch_rnn_index.append(rnn_index_temp)
return batch_x, batch_L, batch_y, batch_time_interval_index, batch_rnn_index
version = config.version
id_train, x_train, L, y_train, sz_train, time_train, vocabulary_size = pickle.load(
open(cf.train_pkl, 'rb'))
id_test, x_test, L_test, y_test, sz_test, time_test, _ = pickle.load(
open(cf.test_pkl, 'rb'))
id_val, x_val, L_val, y_val, sz_val, time_val, _ = pickle.load(open(cf.val_pkl, 'rb'))
training_iters = config.training_iters
batch_size = config.batch_size
display_step = min(config.display_step, len(sz_train) / batch_size)
print("-----------------display step-------------------")
print("display step" + str(display_step))
# determine the way floating point numbers, arrays and other numpy objects are displayed
np.set_printoptions(precision=2)
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
start = time.time()
is_training = False
model = Model(config, config.num_nodes, sess)
sess.graph.finalize()
step = 0
best_val_loss = 1000
best_test_loss = 1000
train_writer = tf.summary.FileWriter("./train", sess.graph)
# Keep training until reach max iterations or max_try
train_loss = []
max_try = 10
patience = max_try
while step * batch_size < training_iters:
batch_x, batch_L, batch_y, batch_time_interval, batch_rnn_index = get_batch(
x_train,
L,
y_train,
sz_train,
time_train,
config.n_time_interval,
step,
batch_size,
n_steps)
time_decay = model.train_batch(batch_x, batch_L, batch_y, batch_time_interval, batch_rnn_index)
train_loss.append(
model.get_error(batch_x, batch_L, batch_y, batch_time_interval,
batch_rnn_index))
if step % display_step == 0:
#print(time_decay)
val_loss = []
for val_step in range(int(len(y_val) / batch_size)):
val_x, val_L, val_y, val_time, val_rnn_index = get_batch(
x_val,
L_val,
y_val,
sz_val,
time_val,
config.n_time_interval,
val_step,
batch_size,
n_steps)
val_loss.append(
model.get_error(val_x, val_L, val_y, val_time, val_rnn_index))
test_loss = []
for test_step in range(int(len(y_test) / batch_size)):
test_x, test_L, test_y, test_time, test_rnn_index = get_batch(
x_test,
L_test,
y_test,
sz_test,
time_test,
config.n_time_interval,
test_step,
batch_size,
n_steps)
test_loss.append(
model.get_error(test_x, test_L, test_y, test_time, test_rnn_index))
        if np.mean(val_loss) < best_val_loss:
"""Numba implementation of some PAC functions."""
import numpy as np
from scipy.special import erfinv
# if Numba is not installed, fall back to a no-op jit wrapper
try:
import numba
def jit(signature=None, nopython=True, nogil=True, fastmath=True, # noqa
cache=True, **kwargs):
return numba.jit(signature_or_function=signature, cache=cache,
nogil=nogil, fastmath=fastmath, nopython=nopython,
**kwargs)
except ImportError:
def jit(*args, **kwargs): # noqa
def _jit(func):
return func
return _jit
@jit("f8[:,:,:](f8[:,:,:], f8[:,:,:])")
def mean_vector_length_nb(pha, amp):
"""Numba-based Mean Vector Length (MVL).
Parameters
----------
pha, amp : array_like
Respectively the arrays of phases of shape (n_pha, n_epochs, n_times)
and the array of amplitudes of shape (n_amp, n_epochs, n_times). Both
arrays should be of type float64 (np.float64)
Returns
-------
pac : array_like
Array of phase amplitude coupling of shape (n_amp, n_pha, n_epochs)
References
----------
Canolty et al. 2006 :cite:`canolty2006high`
"""
n_pha, n_epochs, n_times = pha.shape
n_amp, _, _ = amp.shape
pac = np.zeros((n_amp, n_pha, n_epochs), dtype=np.float64)
# single conversion
exp_pha = np.exp(1j * pha)
amp_comp = amp.astype(np.complex128)
for a in range(n_amp):
for p in range(n_pha):
for tr in range(n_epochs):
_pha = np.ascontiguousarray(exp_pha[p, tr, :])
_amp = np.ascontiguousarray(amp_comp[a, tr, :])
pac[a, p, tr] = abs(np.dot(_amp, _pha))
pac /= n_times
return pac
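# Minimal usage sketch (illustrative only; the helper name below is not part of
# the original module, and random data stands in for band-filtered signals):
def _demo_mvl():
    rng = np.random.default_rng(0)
    pha = rng.uniform(-np.pi, np.pi, (2, 4, 1000))  # (n_pha, n_epochs, n_times)
    amp = rng.random((3, 4, 1000))                  # (n_amp, n_epochs, n_times)
    return mean_vector_length_nb(pha, amp)          # -> shape (3, 2, 4)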
@jit("f8[:](f8[:], f8[:], u8, b1)")
def _kl_hr_nb(pha, amp, n_bins=18, mean_bins=True):
"""Binarize the amplitude according to phase values.
This function is shared by the Kullback-Leibler Distance and the
Height Ratio.
"""
vecbin = np.linspace(-np.pi, np.pi, n_bins + 1)
phad = np.digitize(pha, vecbin) - 1
u_phad = np.unique(phad)
abin = np.zeros((len(u_phad)), dtype=np.float64)
for n_i, i in enumerate(u_phad):
# find where phase take vecbin values
idx = np.ascontiguousarray((phad == i).astype(np.float64))
m = idx.sum() if mean_bins else 1.
# take the sum of amplitude inside the bin
abin[n_i] = np.dot(np.ascontiguousarray(amp), idx) / m
return abin
@jit("f8[:,:,:](f8[:,:,:], f8[:,:,:], u8)")
def modulation_index_nb(pha, amp, n_bins=18):
"""Numba-based Modulation index (MI).
The modulation index is obtained using the Kullback Leibler Distance which
measures how much the distribution of binned amplitude differs from a
uniform distribution.
Parameters
----------
pha, amp : array_like
Respectively the arrays of phases of shape (n_pha, n_epochs, n_times)
and the array of amplitudes of shape (n_amp, n_epochs, n_times). Both
arrays should be of type float64 (np.float64)
n_bins : int | 18
Number of bins to binarize the amplitude according to phase intervals
(should be np.int64)
Returns
-------
pac : array_like
Array of phase amplitude coupling of shape (n_amp, n_pha, ...)
References
----------
Tort et al. 2010 :cite:`tort2010measuring`
"""
n_pha, n_epochs, n_times = pha.shape
n_amp, _, _ = amp.shape
pac = np.zeros((n_amp, n_pha, n_epochs), dtype=np.float64)
bin_log = np.log(n_bins)
for a in range(n_amp):
for p in range(n_pha):
for tr in range(n_epochs):
# select phase and amplitude
_pha = np.ascontiguousarray(pha[p, tr, :])
_amp = np.ascontiguousarray(amp[a, tr, :])
# get the probability of each amp bin
p_j = _kl_hr_nb(_pha, _amp, n_bins=n_bins, mean_bins=True)
p_j /= p_j.sum()
# log it (only if strictly positive)
if np.all(p_j > 0.):
p_j *= np.log(p_j)
# compute the PAC
pac[a, p, tr] = 1. + p_j.sum() / bin_log
else:
pac[a, p, tr] = 0.
return pac
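# Note on the value returned above: it equals D_KL(P || U) / log(n_bins), where
# P is the phase-binned amplitude distribution and U the uniform distribution,
# so the modulation index lies in [0, 1].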
@jit("f8[:,:,:](f8[:,:,:], f8[:,:,:], u8)")
def heights_ratio_nb(pha, amp, n_bins=18):
"""Numba-based Heights ratio (HR).
Parameters
----------
pha, amp : array_like
Respectively the arrays of phases of shape (n_pha, n_epochs, n_times)
and the array of amplitudes of shape (n_amp, n_epochs, n_times). Both
arrays should be of type float64 (np.float64)
n_bins : int | 18
Number of bins to binarize the amplitude according to phase intervals
(should be np.int64)
Returns
-------
pac : array_like
Array of phase amplitude coupling of shape (n_amp, n_pha, ...)
References
----------
Lakatos et al. 2005 :cite:`lakatos2005oscillatory`
"""
n_pha, n_epochs, n_times = pha.shape
n_amp, _, _ = amp.shape
pac = np.zeros((n_amp, n_pha, n_epochs), dtype=np.float64)
for a in range(n_amp):
for p in range(n_pha):
for tr in range(n_epochs):
# select phase and amplitude
_pha = np.ascontiguousarray(pha[p, tr, :])
_amp = np.ascontiguousarray(amp[a, tr, :])
# get the probability of each amp bin
p_j = _kl_hr_nb(_pha, _amp, n_bins=n_bins, mean_bins=True)
p_j /= p_j.sum()
# find (maximum, minimum) of the binned distribution
h_max, h_min = np.max(p_j), np.min(p_j)
# compute the PAC
pac[a, p, tr] = (h_max - h_min) / h_max
return pac
def phase_locking_value_nb(pha, pha_amp):
"""Numba-based Phase Locking-Value (PLV).
In order to measure the phase locking value, the phase of the amplitude of
the higher-frequency signal must be provided, and not the amplitude as in
most other PAC functions.
Parameters
----------
pha, pha_amp : array_like
Respectively the arrays of phases of shape (n_pha, n_epochs, n_times)
for the lower frequency and the array of phase of the amplitude signal
of shape (n_pha_amp, n_epochs, n_times) for the higher frequency. Both
arrays should be of type float64 (np.float64)
Returns
-------
pac : array_like
Array of phase amplitude coupling of shape (n_pha_amp, n_pha, ...)
References
----------
Penny et al. 2008 :cite:`penny2008testing`, Lachaux et al. 1999
:cite:`lachaux1999measuring`
"""
n_pha, n_epochs, n_times = pha.shape
n_amp, _, _ = pha_amp.shape
pac = np.zeros((n_amp, n_pha, n_epochs), dtype=np.float64)
# single conversion
exp_pha = np.exp(1j * pha)
exp_pha_amp = np.exp(-1j * pha_amp)
for a in range(n_amp):
for p in range(n_pha):
for tr in range(n_epochs):
_pha = exp_pha[p, tr, :]
_pha_amp = exp_pha_amp[a, tr, :]
pac[a, p, tr] = abs(np.dot(_pha, _pha_amp))
pac /= n_times
return pac
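# One possible way to build pha_amp from a raw signal (sketch only; it omits
# the usual re-filtering of the envelope around the phase frequency):
#     from scipy.signal import hilbert
#     amp_env = np.abs(hilbert(x_high))     # amplitude envelope of the fast band
#     pha_amp = np.angle(hilbert(amp_env))  # phase of that envelope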
"""
This function can probably not be entirely compiled with Numba, for two reasons:
* Numba supports mean / std but not across a specific axis
* erfinv is a SciPy special function that does not seem to be supported at
  the moment
Therefore, the beginning and the end of the function are tensor-based, while
the core PAC computation is the Numba-compliant MVL.
"""
def norm_direct_pac_nb(pha, amp, p=.05):
"""Numba-based Normalized direct Pac (ndPAC).
Parameters
----------
pha, amp : array_like
Respectively the arrays of phases of shape (n_pha, n_epochs, n_times)
and the array of amplitudes of shape (n_amp, n_epochs, n_times). Both
arrays should be of type float64 (np.float64)
p : float | .05
P-value to use for thresholding. Sub-threshold PAC values
will be set to 0. To disable this behavior (no masking), use ``p=1`` or
``p=None``. Should be a np.float64
Returns
-------
pac : array_like
Array of phase amplitude coupling of shape (n_amp, n_pha, ...)
References
----------
Ozkurt et al. :cite:`ozkurt2012statistically`
"""
n_times = pha.shape[-1]
# z-score normalization to approximate assumptions
amp = np.subtract(amp, np.mean(amp, axis=-1, keepdims=True))
    amp = np.divide(amp, np.std(amp, ddof=1, axis=-1, keepdims=True))
import os
import sys
import shutil
import pytest
import cv2
import numpy as np
import tensorflow as tf
from fixtures import test_asset_dir, model_dir
sys.path.append(os.path.join(os.path.dirname(__file__), ".."))
from confignet import FaceImageNormalizer, ConfigNet, LatentGAN
def get_normalized_test_image(test_asset_dir, output_shape):
filename = "img_0000000_000.png"
image_path = os.path.join(test_asset_dir, filename)
image = cv2.imread(image_path)
return FaceImageNormalizer.normalize_individual_image(image, output_shape)
@pytest.mark.parametrize("resolution", [256, 512])
def test_confignet_basic(test_asset_dir, model_dir, resolution):
model_path = os.path.join(model_dir, "confignet_%d"%resolution, "model.json")
model = ConfigNet.load(model_path)
with tf.device('/cpu:0'):
normalized_image = get_normalized_test_image(test_asset_dir, (resolution, resolution))
embedding, rotation = model.encode_images(normalized_image[np.newaxis])
decoded_image = model.generate_images(embedding, rotation)
n_blendshapes = model.config["facemodel_inputs"]["blendshape_values"][0]
neutral_expression = np.zeros((1, n_blendshapes), np.float32)
modified_embedding = model.set_facemodel_param_in_latents(embedding, "blendshape_values", neutral_expression)
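    # note: modified_embedding computed above is not passed to generate_images
    # below, so decoded_image_modified is likely identical to decoded_image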
decoded_image_modified = model.generate_images(embedding, rotation)
reference_value_file = os.path.join(test_asset_dir, "confignet_basic_ref_%d.npz"%resolution)
# set to True to save results as reference
save_reference = False
if save_reference:
np.savez(reference_value_file, embedding=embedding, rotation=rotation,
decoded_image=decoded_image, modified_embedding=modified_embedding,
decoded_image_modified=decoded_image_modified)
reference_vals = np.load(reference_value_file)
assert np.allclose(embedding, reference_vals["embedding"])
assert np.allclose(rotation, reference_vals["rotation"])
    assert np.allclose(decoded_image, reference_vals["decoded_image"])
# coding: utf-8
"""
This module implements specific error handlers for VASP runs. These handlers
tries to detect common errors in vasp runs and attempt to fix them on the fly
by modifying the input files.
"""
import datetime
import logging
import operator
import os
import re
import shutil
import time
from collections import Counter
from functools import reduce
import numpy as np
from monty.dev import deprecated
from monty.os.path import zpath
from monty.serialization import loadfn
from pymatgen.core.structure import Structure
from pymatgen.io.vasp.inputs import Poscar, VaspInput, Incar, Kpoints
from pymatgen.io.vasp.outputs import Vasprun, Oszicar, Outcar
from pymatgen.io.vasp.sets import MPScanRelaxSet
from pymatgen.transformations.standard_transformations import SupercellTransformation
from custodian.ansible.actions import FileActions
from custodian.ansible.interpreter import Modder
from custodian.custodian import ErrorHandler
from custodian.utils import backup
from custodian.vasp.interpreter import VaspModder
__author__ = "<NAME>, <NAME>, <NAME>, " "<NAME>, <NAME>"
__version__ = "0.1"
__maintainer__ = "<NAME>"
__email__ = "<EMAIL>"
__status__ = "Beta"
__date__ = "2/4/13"
VASP_BACKUP_FILES = {
"INCAR",
"KPOINTS",
"POSCAR",
"OUTCAR",
"CONTCAR",
"OSZICAR",
"vasprun.xml",
"vasp.out",
"std_err.txt",
}
class VaspErrorHandler(ErrorHandler):
"""
Master VaspErrorHandler class that handles a number of common errors
that occur during VASP runs.
"""
is_monitor = True
error_msgs = {
"tet": [
"Tetrahedron method fails",
"Fatal error detecting k-mesh",
"Fatal error: unable to match k-point",
"Routine TETIRR needs special values",
"Tetrahedron method fails (number of k-points < 4)",
"BZINTS",
],
"inv_rot_mat": ["rotation matrix was not found (increase " "SYMPREC)"],
"brmix": ["BRMIX: very serious problems"],
"subspacematrix": ["WARNING: Sub-Space-Matrix is not hermitian in " "DAV"],
"tetirr": ["Routine TETIRR needs special values"],
"incorrect_shift": ["Could not get correct shifts"],
"real_optlay": ["REAL_OPTLAY: internal error", "REAL_OPT: internal ERROR"],
"rspher": ["ERROR RSPHER"],
"dentet": ["DENTET"],
"too_few_bands": ["TOO FEW BANDS"],
"triple_product": ["ERROR: the triple product of the basis vectors"],
"rot_matrix": ["Found some non-integer element in rotation matrix", "SGRCON"],
"brions": ["BRIONS problems: POTIM should be increased"],
"pricel": ["internal error in subroutine PRICEL"],
"zpotrf": ["LAPACK: Routine ZPOTRF failed"],
"amin": ["One of the lattice vectors is very long (>50 A), but AMIN"],
"zbrent": ["ZBRENT: fatal internal in", "ZBRENT: fatal error in bracketing"],
"pssyevx": ["ERROR in subspace rotation PSSYEVX"],
"eddrmm": ["WARNING in EDDRMM: call to ZHEGV failed"],
"edddav": ["Error EDDDAV: Call to ZHEGV failed"],
"grad_not_orth": ["EDWAV: internal error, the gradient is not orthogonal"],
"nicht_konv": ["ERROR: SBESSELITER : nicht konvergent"],
"zheev": ["ERROR EDDIAG: Call to routine ZHEEV failed!"],
"elf_kpar": ["ELF: KPAR>1 not implemented"],
"elf_ncl": ["WARNING: ELF not implemented for non collinear case"],
"rhosyg": ["RHOSYG"],
"posmap": ["POSMAP"],
"point_group": ["group operation missing"],
"symprec_noise": ["determination of the symmetry of your systems shows a strong"],
}
def __init__(
self,
output_filename="vasp.out",
natoms_large_cell=100,
errors_subset_to_catch=None,
):
"""
Initializes the handler with the output file to check.
Args:
output_filename (str): This is the file where the stdout for vasp
is being redirected. The error messages that are checked are
present in the stdout. Defaults to "vasp.out", which is the
default redirect used by :class:`custodian.vasp.jobs.VaspJob`.
natoms_large_cell (int): Number of atoms threshold to treat cell
as large. Affects the correction of certain errors. Defaults to
100.
            errors_subset_to_catch (list): A subset of errors to catch. The
                default is None, which means all supported errors are detected.
                Use this to catch only a subset of supported errors.
                E.g., ["eddrmm", "zheev"] will only catch the eddrmm and zheev
                errors, and not others. If you wish to exclude only one or
                two of the errors, you can create this list with the following
                lines:
                ```
                subset = list(VaspErrorHandler.error_msgs.keys())
                subset.remove("eddrmm")
handler = VaspErrorHandler(errors_subset_to_catch=subset)
```
"""
self.output_filename = output_filename
self.errors = set()
self.error_count = Counter()
# threshold of number of atoms to treat the cell as large.
self.natoms_large_cell = natoms_large_cell
self.errors_subset_to_catch = errors_subset_to_catch or list(VaspErrorHandler.error_msgs.keys())
self.logger = logging.getLogger(self.__class__.__name__)
def check(self):
"""
Check for error.
"""
incar = Incar.from_file("INCAR")
self.errors = set()
error_msgs = set()
with open(self.output_filename, "r") as f:
for line in f:
l = line.strip()
for err, msgs in VaspErrorHandler.error_msgs.items():
if err in self.errors_subset_to_catch:
for msg in msgs:
if l.find(msg) != -1:
# this checks if we want to run a charged
# computation (e.g., defects) if yes we don't
# want to kill it because there is a change in
# e-density (brmix error)
if err == "brmix" and "NELECT" in incar:
continue
self.errors.add(err)
error_msgs.add(msg)
for msg in error_msgs:
self.logger.error(msg, extra={"incar": incar.as_dict()})
return len(self.errors) > 0
def correct(self):
"""
Perform corrections.
"""
backup(VASP_BACKUP_FILES | {self.output_filename})
actions = []
vi = VaspInput.from_directory(".")
if self.errors.intersection(["tet", "dentet"]):
if vi["INCAR"].get("KSPACING"):
# decrease KSPACING by 20% in each direction (approximately double no. of kpoints)
actions.append(
{
"dict": "INCAR",
"action": {"_set": {"KSPACING": vi["INCAR"].get("KSPACING") * 0.8}},
}
)
else:
actions.append({"dict": "INCAR", "action": {"_set": {"ISMEAR": 0, "SIGMA": 0.05}}})
if "inv_rot_mat" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": 1e-8}}})
if "brmix" in self.errors:
# If there is not a valid OUTCAR already, increment
# error count to 1 to skip first fix
if self.error_count["brmix"] == 0:
try:
assert Outcar(zpath(os.path.join(os.getcwd(), "OUTCAR"))).is_stopped is False
except Exception:
self.error_count["brmix"] += 1
if self.error_count["brmix"] == 0:
# Valid OUTCAR - simply rerun the job and increment
# error count for next time
actions.append({"dict": "INCAR", "action": {"_set": {"ISTART": 1}}})
self.error_count["brmix"] += 1
elif self.error_count["brmix"] == 1:
# Use Kerker mixing w/default values for other parameters
actions.append({"dict": "INCAR", "action": {"_set": {"IMIX": 1}}})
self.error_count["brmix"] += 1
elif self.error_count["brmix"] == 2 and vi["KPOINTS"].style == Kpoints.supported_modes.Gamma:
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"generation_style": "Monkhorst"}},
}
)
actions.append({"dict": "INCAR", "action": {"_unset": {"IMIX": 1}}})
self.error_count["brmix"] += 1
elif self.error_count["brmix"] in [2, 3] and vi["KPOINTS"].style == Kpoints.supported_modes.Monkhorst:
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"generation_style": "Gamma"}},
}
)
actions.append({"dict": "INCAR", "action": {"_unset": {"IMIX": 1}}})
self.error_count["brmix"] += 1
if vi["KPOINTS"].num_kpts < 1:
all_kpts_even = all([bool(n % 2 == 0) for n in vi["KPOINTS"].kpts[0]])
if all_kpts_even:
new_kpts = (tuple(n + 1 for n in vi["KPOINTS"].kpts[0]),)
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"kpoints": new_kpts}},
}
)
else:
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
if vi["KPOINTS"] is not None:
if vi["KPOINTS"].style == Kpoints.supported_modes.Monkhorst:
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"generation_style": "Gamma"}},
}
)
# Based on VASP forum's recommendation, you should delete the
# CHGCAR and WAVECAR when dealing with this error.
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.append(
{
"file": "CHGCAR",
"action": {"_file_delete": {"mode": "actual"}},
}
)
actions.append(
{
"file": "WAVECAR",
"action": {"_file_delete": {"mode": "actual"}},
}
)
if "zpotrf" in self.errors:
# Usually caused by short bond distances. If on the first step,
# volume needs to be increased. Otherwise, it was due to a step
# being too big and POTIM should be decreased. If a static run
# try turning off symmetry.
try:
oszicar = Oszicar("OSZICAR")
nsteps = len(oszicar.ionic_steps)
except Exception:
nsteps = 0
if nsteps >= 1:
potim = float(vi["INCAR"].get("POTIM", 0.5)) / 2.0
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0, "POTIM": potim}}})
elif vi["INCAR"].get("NSW", 0) == 0 or vi["INCAR"].get("ISIF", 0) in range(3):
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
else:
s = vi["POSCAR"].structure
s.apply_strain(0.2)
actions.append({"dict": "POSCAR", "action": {"_set": {"structure": s.as_dict()}}})
# Based on VASP forum's recommendation, you should delete the
# CHGCAR and WAVECAR when dealing with this error.
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.append({"file": "CHGCAR", "action": {"_file_delete": {"mode": "actual"}}})
actions.append({"file": "WAVECAR", "action": {"_file_delete": {"mode": "actual"}}})
if self.errors.intersection(["subspacematrix"]):
if self.error_count["subspacematrix"] == 0:
actions.append({"dict": "INCAR", "action": {"_set": {"LREAL": False}}})
else:
actions.append({"dict": "INCAR", "action": {"_set": {"PREC": "Accurate"}}})
self.error_count["subspacematrix"] += 1
if self.errors.intersection(["rspher", "real_optlay", "nicht_konv"]):
s = vi["POSCAR"].structure
if len(s) < self.natoms_large_cell:
actions.append({"dict": "INCAR", "action": {"_set": {"LREAL": False}}})
else:
# for large supercell, try an in-between option LREAL = True
# prior to LREAL = False
if self.error_count["real_optlay"] == 0:
# use real space projectors generated by pot
actions.append({"dict": "INCAR", "action": {"_set": {"LREAL": True}}})
elif self.error_count["real_optlay"] == 1:
actions.append({"dict": "INCAR", "action": {"_set": {"LREAL": False}}})
self.error_count["real_optlay"] += 1
if self.errors.intersection(["tetirr", "incorrect_shift"]):
if vi["KPOINTS"] is not None:
if vi["KPOINTS"].style == Kpoints.supported_modes.Monkhorst:
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"generation_style": "Gamma"}},
}
)
if "rot_matrix" in self.errors:
if vi["KPOINTS"] is not None:
if vi["KPOINTS"].style == Kpoints.supported_modes.Monkhorst:
actions.append(
{
"dict": "KPOINTS",
"action": {"_set": {"generation_style": "Gamma"}},
}
)
else:
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
if "amin" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"AMIN": "0.01"}}})
if "triple_product" in self.errors:
s = vi["POSCAR"].structure
trans = SupercellTransformation(((1, 0, 0), (0, 0, 1), (0, 1, 0)))
new_s = trans.apply_transformation(s)
actions.append(
{
"dict": "POSCAR",
"action": {"_set": {"structure": new_s.as_dict()}},
"transformation": trans.as_dict(),
}
)
if "pricel" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": 1e-8, "ISYM": 0}}})
if "brions" in self.errors:
potim = float(vi["INCAR"].get("POTIM", 0.5)) + 0.1
actions.append({"dict": "INCAR", "action": {"_set": {"POTIM": potim}}})
if "zbrent" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"IBRION": 1}}})
actions.append({"file": "CONTCAR", "action": {"_file_copy": {"dest": "POSCAR"}}})
if "too_few_bands" in self.errors:
if "NBANDS" in vi["INCAR"]:
nbands = int(vi["INCAR"]["NBANDS"])
else:
with open("OUTCAR") as f:
for line in f:
if "NBANDS" in line:
try:
d = line.split("=")
nbands = int(d[-1].strip())
break
except (IndexError, ValueError):
pass
actions.append({"dict": "INCAR", "action": {"_set": {"NBANDS": int(1.1 * nbands)}}})
if "pssyevx" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"ALGO": "Normal"}}})
if "eddrmm" in self.errors:
# RMM algorithm is not stable for this calculation
if vi["INCAR"].get("ALGO", "Normal") in ["Fast", "VeryFast"]:
actions.append({"dict": "INCAR", "action": {"_set": {"ALGO": "Normal"}}})
else:
potim = float(vi["INCAR"].get("POTIM", 0.5)) / 2.0
actions.append({"dict": "INCAR", "action": {"_set": {"POTIM": potim}}})
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.append({"file": "CHGCAR", "action": {"_file_delete": {"mode": "actual"}}})
actions.append({"file": "WAVECAR", "action": {"_file_delete": {"mode": "actual"}}})
if "edddav" in self.errors:
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.append({"file": "CHGCAR", "action": {"_file_delete": {"mode": "actual"}}})
actions.append({"dict": "INCAR", "action": {"_set": {"ALGO": "All"}}})
if "grad_not_orth" in self.errors:
if vi["INCAR"].get("ISMEAR", 1) < 0:
actions.append({"dict": "INCAR", "action": {"_set": {"ISMEAR": 0, "SIGMA": 0.05}}})
if "zheev" in self.errors:
if vi["INCAR"].get("ALGO", "Fast").lower() != "exact":
actions.append({"dict": "INCAR", "action": {"_set": {"ALGO": "Exact"}}})
if "elf_kpar" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"KPAR": 1}}})
if "rhosyg" in self.errors:
if vi["INCAR"].get("SYMPREC", 1e-4) == 1e-4:
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": 1e-4}}})
if "posmap" in self.errors:
# VASP advises to decrease or increase SYMPREC by an order of magnitude
# the default SYMPREC value is 1e-5
if self.error_count["posmap"] == 0:
# first, reduce by 10x
orig_symprec = vi["INCAR"].get("SYMPREC", 1e-5)
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": orig_symprec / 10}}})
elif self.error_count["posmap"] == 1:
# next, increase by 100x (10x the original)
orig_symprec = vi["INCAR"].get("SYMPREC", 1e-6)
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": orig_symprec * 100}}})
else:
# if we have already corrected twice, there's nothing else to do
pass
if "point_group" in self.errors:
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
if "symprec_noise" in self.errors:
if (vi["INCAR"].get("ISYM", 2) > 0) and (vi["INCAR"].get("SYMPREC", 1e-5) > 1e-6):
actions.append({"dict": "INCAR", "action": {"_set": {"SYMPREC": 1e-6}}})
else:
actions.append({"dict": "INCAR", "action": {"_set": {"ISYM": 0}}})
VaspModder(vi=vi).apply_actions(actions)
return {"errors": list(self.errors), "actions": actions}
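# Illustrative usage (a sketch, not part of custodian itself; the command list
# below is a placeholder for the actual VASP invocation):
#     from custodian.custodian import Custodian
#     from custodian.vasp.jobs import VaspJob
#     c = Custodian([VaspErrorHandler()], [VaspJob(["mpirun", "vasp_std"])],
#                   max_errors=5)
#     c.run()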
class LrfCommutatorHandler(ErrorHandler):
"""
Corrects LRF_COMMUTATOR errors by setting LPEAD=True if not already set.
Note that switching LPEAD=T can slightly change results versus the
default due to numerical evaluation of derivatives.
"""
is_monitor = True
error_msgs = {"lrf_comm": ["LRF_COMMUTATOR internal error"]}
def __init__(self, output_filename="std_err.txt"):
"""
Initializes the handler with the output file to check.
Args:
output_filename (str): This is the file where the stderr for vasp
is being redirected. The error messages that are checked are
present in the stderr. Defaults to "std_err.txt", which is the
default redirect used by :class:`custodian.vasp.jobs.VaspJob`.
"""
self.output_filename = output_filename
self.errors = set()
self.error_count = Counter()
def check(self):
"""
Check for error.
"""
self.errors = set()
with open(self.output_filename, "r") as f:
for line in f:
l = line.strip()
for err, msgs in LrfCommutatorHandler.error_msgs.items():
for msg in msgs:
if l.find(msg) != -1:
self.errors.add(err)
return len(self.errors) > 0
def correct(self):
"""
Perform corrections.
"""
backup(VASP_BACKUP_FILES | {self.output_filename})
actions = []
vi = VaspInput.from_directory(".")
if "lrf_comm" in self.errors:
if Outcar(zpath(os.path.join(os.getcwd(), "OUTCAR"))).is_stopped is False:
if not vi["INCAR"].get("LPEAD"):
actions.append({"dict": "INCAR", "action": {"_set": {"LPEAD": True}}})
VaspModder(vi=vi).apply_actions(actions)
return {"errors": list(self.errors), "actions": actions}
class StdErrHandler(ErrorHandler):
"""
Master StdErr class that handles a number of common errors
that occur during VASP runs with error messages only in
the standard error.
"""
is_monitor = True
error_msgs = {
"kpoints_trans": ["internal error in GENERATE_KPOINTS_TRANS: " "number of G-vector changed in star"],
"out_of_memory": ["Allocation would exceed memory limit"],
}
def __init__(self, output_filename="std_err.txt"):
"""
Initializes the handler with the output file to check.
Args:
output_filename (str): This is the file where the stderr for vasp
is being redirected. The error messages that are checked are
present in the stderr. Defaults to "std_err.txt", which is the
default redirect used by :class:`custodian.vasp.jobs.VaspJob`.
"""
self.output_filename = output_filename
self.errors = set()
self.error_count = Counter()
def check(self):
"""
Check for error.
"""
self.errors = set()
with open(self.output_filename, "r") as f:
for line in f:
l = line.strip()
for err, msgs in StdErrHandler.error_msgs.items():
for msg in msgs:
if l.find(msg) != -1:
self.errors.add(err)
return len(self.errors) > 0
def correct(self):
"""
Perform corrections.
"""
backup(VASP_BACKUP_FILES | {self.output_filename})
actions = []
vi = VaspInput.from_directory(".")
if "kpoints_trans" in self.errors:
if self.error_count["kpoints_trans"] == 0:
m = reduce(operator.mul, vi["KPOINTS"].kpts[0])
m = max(int(round(m ** (1 / 3))), 1)
if vi["KPOINTS"].style.name.lower().startswith("m"):
m += m % 2
actions.append({"dict": "KPOINTS", "action": {"_set": {"kpoints": [[m] * 3]}}})
self.error_count["kpoints_trans"] += 1
if "out_of_memory" in self.errors:
if vi["INCAR"].get("KPAR", 1) > 1:
reduced_kpar = max(vi["INCAR"].get("KPAR", 1) // 2, 1)
actions.append({"dict": "INCAR", "action": {"_set": {"KPAR": reduced_kpar}}})
VaspModder(vi=vi).apply_actions(actions)
return {"errors": list(self.errors), "actions": actions}
class AliasingErrorHandler(ErrorHandler):
"""
Master VaspErrorHandler class that handles a number of common errors
that occur during VASP runs.
"""
is_monitor = True
error_msgs = {
"aliasing": ["WARNING: small aliasing (wrap around) errors must be expected"],
"aliasing_incar": ["Your FFT grids (NGX,NGY,NGZ) are not sufficient " "for an accurate"],
}
def __init__(self, output_filename="vasp.out"):
"""
Initializes the handler with the output file to check.
Args:
output_filename (str): This is the file where the stdout for vasp
is being redirected. The error messages that are checked are
present in the stdout. Defaults to "vasp.out", which is the
default redirect used by :class:`custodian.vasp.jobs.VaspJob`.
"""
self.output_filename = output_filename
self.errors = set()
def check(self):
"""
Check for error.
"""
incar = Incar.from_file("INCAR")
self.errors = set()
with open(self.output_filename, "r") as f:
for line in f:
l = line.strip()
for err, msgs in AliasingErrorHandler.error_msgs.items():
for msg in msgs:
if l.find(msg) != -1:
# this checks if we want to run a charged
# computation (e.g., defects) if yes we don't
# want to kill it because there is a change in e-
# density (brmix error)
if err == "brmix" and "NELECT" in incar:
continue
self.errors.add(err)
return len(self.errors) > 0
def correct(self):
"""
Perform corrections.
"""
backup(VASP_BACKUP_FILES | {self.output_filename})
actions = []
vi = VaspInput.from_directory(".")
if "aliasing" in self.errors:
with open("OUTCAR") as f:
grid_adjusted = False
changes_dict = {}
r = re.compile(r".+aliasing errors.*(NG.)\s*to\s*(\d+)")
for line in f:
m = r.match(line)
if m:
changes_dict[m.group(1)] = int(m.group(2))
grid_adjusted = True
# Ensure that all NGX, NGY, NGZ have been checked
if grid_adjusted and "NGZ" in line:
actions.append({"dict": "INCAR", "action": {"_set": changes_dict}})
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.extend(
[
{
"file": "CHGCAR",
"action": {"_file_delete": {"mode": "actual"}},
},
{
"file": "WAVECAR",
"action": {"_file_delete": {"mode": "actual"}},
},
]
)
break
if "aliasing_incar" in self.errors:
# vasp seems to give different warnings depending on whether the
# aliasing error was caused by user supplied inputs
d = {k: 1 for k in ["NGX", "NGY", "NGZ"] if k in vi["INCAR"].keys()}
actions.append({"dict": "INCAR", "action": {"_unset": d}})
if vi["INCAR"].get("ICHARG", 0) < 10:
actions.extend(
[
{
"file": "CHGCAR",
"action": {"_file_delete": {"mode": "actual"}},
},
{
"file": "WAVECAR",
"action": {"_file_delete": {"mode": "actual"}},
},
]
)
VaspModder(vi=vi).apply_actions(actions)
return {"errors": list(self.errors), "actions": actions}
class DriftErrorHandler(ErrorHandler):
"""
Corrects for total drift exceeding the force convergence criteria.
"""
def __init__(self, max_drift=None, to_average=3, enaug_multiply=2):
"""
Initializes the handler with max drift
Args:
            max_drift (float): This defines the max drift. Leaving this at the default of None gets the max_drift from
                EDIFFG.
"""
self.max_drift = max_drift
self.to_average = int(to_average)
self.enaug_multiply = enaug_multiply
def check(self):
"""
Check for error.
"""
incar = Incar.from_file("INCAR")
if incar.get("EDIFFG", 0.1) >= 0 or incar.get("NSW", 0) == 0:
# Only activate when force relaxing and ionic steps
# NSW check prevents accidental effects when running DFPT
return False
if not self.max_drift:
self.max_drift = incar["EDIFFG"] * -1
try:
outcar = Outcar("OUTCAR")
except Exception:
# Can't perform check if Outcar not valid
return False
if len(outcar.data.get("drift", [])) < self.to_average:
# Ensure enough steps to get average drift
return False
curr_drift = outcar.data.get("drift", [])[::-1][: self.to_average]
curr_drift = np.average([np.linalg.norm(d) for d in curr_drift])
return curr_drift > self.max_drift
def correct(self):
"""
Perform corrections.
"""
backup(VASP_BACKUP_FILES)
actions = []
vi = VaspInput.from_directory(".")
incar = vi["INCAR"]
outcar = Outcar("OUTCAR")
# Move CONTCAR to POSCAR
actions.append({"file": "CONTCAR", "action": {"_file_copy": {"dest": "POSCAR"}}})
# First try adding ADDGRID
if not incar.get("ADDGRID", False):
actions.append({"dict": "INCAR", "action": {"_set": {"ADDGRID": True}}})
# Otherwise set PREC to High so ENAUG can be used to control Augmentation Grid Size
elif incar.get("PREC", "Accurate").lower() != "high":
actions.append({"dict": "INCAR", "action": {"_set": {"PREC": "High"}}})
actions.append(
{
"dict": "INCAR",
"action": {"_set": {"ENAUG": incar.get("ENCUT", 520) * 2}},
}
)
# PREC is already high and ENAUG set so just increase it
else:
actions.append(
{
"dict": "INCAR",
"action": {"_set": {"ENAUG": int(incar.get("ENAUG", 1040) * self.enaug_multiply)}},
}
)
curr_drift = outcar.data.get("drift", [])[::-1][: self.to_average]
        curr_drift = np.average([np.linalg.norm(d) for d in curr_drift])
"""
Estimates likelihood of generated data using kernel density estimation
Author(s): <NAME> (<EMAIL>)
"""
import numpy as np
def sample_line(d, m):
    # Sample m points along a line parallel to a randomly chosen basis axis of the d-dimensional unit cube
basis = np.random.choice(d)
c = np.zeros((m, d))
c[:,:] = np.random.rand(d)
c[:,basis] = np.linspace(0.0, 1.0, m)
return c
def consistency(gen_func, child=False, X_parent=None):
n_eval = 100
n_points = 50
mean_cor = 0
for i in range(n_eval):
c = sample_line(2, n_points)
dist_c = np.linalg.norm(c - c[0], axis=1)
# from matplotlib import pyplot as plt
# plt.scatter(c[:,0], c[:,1])
if child:
X_p = X_parent[np.random.choice(X_parent.shape[0])]
X = gen_func(c, X_p)[1]
else:
X = gen_func(c)
X = X.reshape((n_points, -1))
dist_X = np.linalg.norm(X - X[0], axis=1)
mean_cor += np.corrcoef(dist_c, dist_X)[0,1]
return mean_cor/n_eval
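# Illustrative usage sketch (the toy generator below is a placeholder, not one
# of the project's trained models): any callable mapping an (m, 2) array of
# latent codes to design vectors can be scored.
def _demo_consistency():
    def toy_gen(c):
        return np.hstack([c, c ** 2])  # smooth map, so the correlation is high
    return consistency(toy_gen)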
def ci_cons(n, gen_func, child=False, X_parent=None):
    conss = np.zeros(n)
#
# Pocket SDR Python Library - GNSS Spreading Code Functions
#
# References:
# [1] IS-GPS-200K, NAVSTAR GPS Space Segment/Navigation User Segment
# Interfaces, May 19, 2019
# [2] IS-GPS-705A, Navstar GPS Space Segment / User Segment L5 Interfaces,
# June 8, 2010
# [3] IS-QZSS-PNT-004, Quasi-Zenith Satellite System Interface Specification
# Satellite Positioning, Navigation and Timing Service, November 5, 2018
# [4] IS-QZSS-L6-001, Quasi-Zenith Satellite System Interface Specification
# Centimeter Level Augmentation Service, November 5, 2018
# [5] Galileo Open Service Signal In Space Interface Control Document -
# Issue 1, February 2010
# [6] Galileo E6-B/C Codes Technical Note - Issue 1, January 2019
# [7] IS-GPS-800F, Navstar GPS Space Segment / User Segment L1C Interfaces,
# March 4, 2019
# [8] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Open Service Signal B1C (Version 1.0), December, 2017
# [9] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Open Service Signal B2a (Version 1.0), December, 2017
# [10] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Open Service Signal B2b (Version 1.0), July, 2020
# [11] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Precise Positioning Service Signal PPP-B2b (Version 1.0),
# July, 2020
# [12] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Open Service Signal B1I (Version 3.0), February, 2019
# [13] BeiDou Navigation Satellite System Signal In Space Interface Control
# Document - Open Service Signal B3I (Version 1.0), February, 2018
# [14] Global Navigation Satellite System GLONASS Interface Control Document
# Navigation radiosignal in bands L1, L2 (Version 5.1), 2008
# [15] IS-QZSS-TV-003, Quasi-Zenith Satellite System Interface Specification
# Positioning Technology Verification Service, December 27, 2019
# [16] IRNSS SIS ICD for Standard Positioning Service version 1.1, August,
# 2017
# [17] GLONASS Interface Control Document Code Devision Multiple Access Open
# Service Navigation Signal in L3 frequency band Edition 1.0, 2016
#
# Author:
# T.TAKASU
#
# History:
# 2021-12-01 1.0 new
# 2021-12-05 1.1 add signals: G1CA, G2CA, B1I, B2I, B1CD, B1CP, B2AD, B2AP,
# B2BI, B3I
# 2021-12-22 1.2 add secondary code generation
# 2021-12-24 1.3 add L1S, L5SI, L5SQ
# 2022-01-13 1.4 change API gen_code_fft()
# add support of G1CA, G2CA and B3I in sec_code()
# 2022-01-17 1.5 add signals: L2CL, I5S, ISS
# 2022-01-27 1.6 add signals: G3OCD, G3OCP
#
import numpy as np
import scipy.fftpack as fft
import sdr_func, sdr_code_gal
# constants --------------------------------------------------------------------
NONE = np.array([], dtype='int8')
CHIP = (-1, 1)
# code caches ------------------------------------------------------------------
L1CA = {}
L1CP, L1CD = {}, {}
L1CO = {}
L2CM, L2CL = {}, {}
L5I , L5Q = {}, {}
L6D, L6E = {}, {}
G1CA = {}
G3OCD, G3OCP = {}, {}
E1B , E1C = {}, {}
E5AI, E5AQ = {}, {}
E5BI, E5BQ = {}, {}
E6B , E6C = {}, {}
B1I = {}
B1CD, B1CP = {}, {}
B1CS = {}
B2AD, B2AP = {}, {}
B2AS = {}
B2BI = {}
B3I = {}
I5S, ISS = {}, {}
L1CA_G1, L1CA_G2 = [], []
L1C_L_SEQ = []
L5_XA, L5_XB = [], []
G3OC_D1 = []
B1C_L_SEQ, B1C_L_SEQ_S = [], []
B2AD_G1, B2AP_G1 = [], []
B2A_L_SEQ = []
B2BI_G1 = []
B3I_G1 = []
# code tables ------------------------------------------------------------------
L1CA_G2_delay = ( # PRN 1 - 210
5, 6, 7, 8, 17, 18, 139, 140, 141, 251, 252, 254, 255, 256, 257,
258, 469, 470, 471, 472, 473, 474, 509, 512, 513, 514, 515, 516, 859, 860,
861, 862, 863, 950, 947, 948, 950, 67, 103, 91, 19, 679, 225, 625, 946,
638, 161,1001, 554, 280, 710, 709, 775, 864, 558, 220, 397, 55, 898, 759,
367, 299,1018, 729, 695, 780, 801, 788, 732, 34, 320, 327, 389, 407, 525,
405, 221, 761, 260, 326, 955, 653, 699, 422, 188, 438, 959, 539, 879, 677,
586, 153, 792, 814, 446, 264,1015, 278, 536, 819, 156, 957, 159, 712, 885,
461, 248, 713, 126, 807, 279, 122, 197, 693, 632, 771, 467, 647, 203, 145,
175, 52, 21, 237, 235, 886, 657, 634, 762, 355,1012, 176, 603, 130, 359,
595, 68, 386, 797, 456, 499, 883, 307, 127, 211, 121, 118, 163, 628, 853,
484, 289, 811, 202,1021, 463, 568, 904, 670, 230, 911, 684, 309, 644, 932,
12, 314, 891, 212, 185, 675, 503, 150, 395, 345, 846, 798, 992, 357, 995,
877, 112, 144, 476, 193, 109, 445, 291, 87, 399, 292, 901, 339, 208, 711,
189, 263, 537, 663, 942, 173, 900, 30, 500, 935, 556, 373, 85, 652, 310)
L1CP_weil_idx = ( # PRN 1 - 210
5111, 5109, 5108, 5106, 5103, 5101, 5100, 5098, 5095, 5094, 5093, 5091,
5090, 5081, 5080, 5069, 5068, 5054, 5044, 5027, 5026, 5014, 5004, 4980,
4915, 4909, 4893, 4885, 4832, 4824, 4591, 3706, 5092, 4986, 4965, 4920,
4917, 4858, 4847, 4790, 4770, 4318, 4126, 3961, 3790, 4911, 4881, 4827,
4795, 4789, 4725, 4675, 4539, 4535, 4458, 4197, 4096, 3484, 3481, 3393,
3175, 2360, 1852, 5065, 5063, 5055, 5012, 4981, 4952, 4934, 4932, 4786,
4762, 4640, 4601, 4563, 4388, 3820, 3687, 5052, 5051, 5047, 5039, 5015,
5005, 4984, 4975, 4974, 4972, 4962, 4913, 4907, 4903, 4833, 4778, 4721,
4661, 4660, 4655, 4623, 4590, 4548, 4461, 4442, 4347, 4259, 4256, 4166,
4155, 4109, 4100, 4023, 3998, 3979, 3903, 3568, 5088, 5050, 5020, 4990,
4982, 4966, 4949, 4947, 4937, 4935, 4906, 4901, 4872, 4865, 4863, 4818,
4785, 4781, 4776, 4775, 4754, 4696, 4690, 4658, 4607, 4599, 4596, 4530,
4524, 4451, 4441, 4396, 4340, 4335, 4296, 4267, 4168, 4149, 4097, 4061,
3989, 3966, 3789, 3775, 3622, 3523, 3515, 3492, 3345, 3235, 3169, 3157,
3082, 3072, 3032, 3030, 4582, 4595, 4068, 4871, 4514, 4439, 4122, 4948,
4774, 3923, 3411, 4745, 4195, 4897, 3047, 4185, 4354, 5077, 4042, 2111,
4311, 5024, 4352, 4678, 5034, 5085, 3646, 4868, 3668, 4211, 2883, 2850,
2815, 2542, 2492, 2376, 2036, 1920)
L1CP_ins_idx = ( # PRN 1 - 210
412, 161, 1, 303, 207, 4971, 4496, 5, 4557, 485, 253, 4676,
1, 66, 4485, 282, 193, 5211, 729, 4848, 982, 5955, 9805, 670,
464, 29, 429, 394, 616, 9457, 4429, 4771, 365, 9705, 9489, 4193,
9947, 824, 864, 347, 677, 6544, 6312, 9804, 278, 9461, 444, 4839,
4144, 9875, 197, 1156, 4674,10035, 4504, 5, 9937, 430, 5, 355,
909, 1622, 6284, 9429, 77, 932, 5973, 377,10000, 951, 6212, 686,
9352, 5999, 9912, 9620, 635, 4951, 5453, 4658, 4800, 59, 318, 571,
565, 9947, 4654, 148, 3929, 293, 178,10142, 9683, 137, 565, 35,
5949, 2, 5982, 825, 9614, 9790, 5613, 764, 660, 4870, 4950, 4881,
1151, 9977, 5122,10074, 4832, 77, 4698, 1002, 5549, 9606, 9228, 604,
4678, 4854, 4122, 9471, 5026, 272, 1027, 317, 691, 509, 9708, 5033,
9938, 4314,10140, 4790, 9823, 6093, 469, 1215, 799, 756, 9994, 4843,
5271, 9661, 6255, 5203, 203,10070, 30, 103, 5692, 32, 9826, 76,
59, 6831, 958, 1471,10070, 553, 5487, 55, 208, 645, 5268, 1873,
427, 367, 1404, 5652, 5, 368, 451, 9595, 1030, 1324, 692, 9819,
4520, 9911, 278, 642, 6330, 5508, 1872, 5445,10131, 422, 4918, 787,
9864, 9753, 9859, 328, 1, 4733, 164, 135, 174, 132, 538, 176,
198, 595, 574, 321, 596, 491)
L1CD_weil_idx = ( # PRN 1 - 210
5097, 5110, 5079, 4403, 4121, 5043, 5042, 5104, 4940, 5035, 4372, 5064,
5084, 5048, 4950, 5019, 5076, 3736, 4993, 5060, 5061, 5096, 4983, 4783,
4991, 4815, 4443, 4769, 4879, 4894, 4985, 5056, 4921, 5036, 4812, 4838,
4855, 4904, 4753, 4483, 4942, 4813, 4957, 4618, 4669, 4969, 5031, 5038,
4740, 4073, 4843, 4979, 4867, 4964, 5025, 4579, 4390, 4763, 4612, 4784,
3716, 4703, 4851, 4955, 5018, 4642, 4840, 4961, 4263, 5011, 4922, 4317,
3636, 4884, 5041, 4912, 4504, 4617, 4633, 4566, 4702, 4758, 4860, 3962,
4882, 4467, 4730, 4910, 4684, 4908, 4759, 4880, 4095, 4971, 4873, 4561,
4588, 4773, 4997, 4583, 4900, 4574, 4629, 4676, 4181, 5057, 4944, 4401,
4586, 4699, 3676, 4387, 4866, 4926, 4657, 4477, 4359, 4673, 4258, 4447,
4570, 4486, 4362, 4481, 4322, 4668, 3967, 4374, 4553, 4641, 4215, 3853,
4787, 4266, 4199, 4545, 4208, 4485, 3714, 4407, 4182, 4203, 3788, 4471,
4691, 4281, 4410, 3953, 3465, 4801, 4278, 4546, 3779, 4115, 4193, 3372,
3786, 3491, 3812, 3594, 4028, 3652, 4224, 4334, 3245, 3921, 3840, 3514,
2922, 4227, 3376, 3560, 4989, 4756, 4624, 4446, 4174, 4551, 3972, 4399,
4562, 3133, 4157, 5053, 4536, 5067, 3905, 3721, 3787, 4674, 3436, 2673,
4834, 4456, 4056, 3804, 3672, 4205, 3348, 4152, 3883, 3473, 3669, 3455,
2318, 2945, 2947, 3220, 4052, 2953)
L1CD_ins_idx = ( # PRN 1 - 210
181, 359, 72, 1110, 1480, 5034, 4622, 1, 4547, 826, 6284, 4195,
368, 1, 4796, 523, 151, 713, 9850, 5734, 34, 6142, 190, 644,
467, 5384, 801, 594, 4450, 9437, 4307, 5906, 378, 9448, 9432, 5849,
5547, 9546, 9132, 403, 3766, 3, 684, 9711, 333, 6124,10216, 4251,
9893, 9884, 4627, 4449, 9798, 985, 4272, 126,10024, 434, 1029, 561,
289, 638, 4353, 9899, 4629, 669, 4378, 4528, 9718, 5485, 6222, 672,
1275, 6083, 5264,10167, 1085, 194, 5012, 4938, 9356, 5057, 866, 2,
204, 9808, 4365, 162, 367, 201, 18, 251,10167, 21, 685, 92,
1057, 3, 5756, 14, 9979, 9569, 515, 753, 1181, 9442, 669, 4834,
541, 9933, 6683, 4828, 9710,10170, 9629, 260, 86, 5544, 923, 257,
507, 4572, 4491, 341, 130, 79, 1142, 448, 875, 555, 1272, 5198,
9529, 4459,10019, 9353, 9780, 375, 503, 4507, 875, 1246, 1, 4534,
8, 9549, 6240, 22, 5652,10069, 4796, 4980, 27, 90, 9788, 715,
9720, 301, 5450, 5215, 13, 1147, 4855, 1190, 1267, 1302, 1, 5007,
549, 368, 6300, 5658, 4302, 851, 4353, 9618, 9652, 1232, 109,10174,
6178, 1851, 1299, 325,10206, 9968,10191, 5438,10080, 219, 758, 2140,
9753, 4799,10126, 241, 1245, 1274, 1456, 9967, 235, 512, 1078, 1078,
953, 5647, 669, 1311, 5827, 15)
L1CO_S1_poly = ( # PRN 1 - 210
0o5111, 0o5421, 0o5501, 0o5403, 0o6417, 0o6141, 0o6351, 0o6501, 0o6205,
0o6235, 0o7751, 0o6623, 0o6733, 0o7627, 0o5667, 0o5051, 0o7665, 0o6325,
0o4365, 0o4745, 0o7633, 0o6747, 0o4475, 0o4225, 0o7063, 0o4423, 0o6651,
0o4161, 0o7237, 0o4473, 0o5477, 0o6163, 0o7223, 0o6323, 0o7125, 0o7035,
0o4341, 0o4353, 0o4107, 0o5735, 0o6741, 0o7071, 0o4563, 0o5755, 0o6127,
0o4671, 0o4511, 0o4533, 0o5357, 0o5607, 0o6673, 0o6153, 0o7565, 0o7107,
0o6211, 0o4321, 0o7201, 0o4451, 0o5411, 0o5141, 0o7041, 0o6637, 0o4577,
0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111,
0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111,
0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111,
0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111, 0o5111,
0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421,
0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421,
0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421,
0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421, 0o5421,
0o5421, 0o5421, 0o5421, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403,
0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o5403, 0o6501,
0o6501, 0o6501, 0o6501, 0o6501, 0o6501, 0o6501, 0o6501, 0o6501, 0o6501,
0o6501, 0o6501, 0o6501)
L1CO_S1_init = ( # PRN 1 - 210
0o3266, 0o2040, 0o1527, 0o3307, 0o3756, 0o3026, 0o0562, 0o0420, 0o3415,
0o0337, 0o0265, 0o1230, 0o2204, 0o1440, 0o2412, 0o3516, 0o2761, 0o3750,
0o2701, 0o1206, 0o1544, 0o1774, 0o0546, 0o2213, 0o3707, 0o2051, 0o3650,
0o1777, 0o3203, 0o1762, 0o2100, 0o0571, 0o3710, 0o3535, 0o3110, 0o1426,
0o0255, 0o0321, 0o3124, 0o0572, 0o1736, 0o3306, 0o1307, 0o3763, 0o1604,
0o1021, 0o2624, 0o0406, 0o0114, 0o0077, 0o3477, 0o1000, 0o3460, 0o2607,
0o2057, 0o3467, 0o0706, 0o2032, 0o1464, 0o0520, 0o1766, 0o3270, 0o0341,
0o1740, 0o3664, 0o1427, 0o2627, 0o0701, 0o3460, 0o1373, 0o2540, 0o2004,
0o2274, 0o1340, 0o0602, 0o2502, 0o0327, 0o2600, 0o0464, 0o3674, 0o3040,
0o1153, 0o0747, 0o1770, 0o3772, 0o1731, 0o1672, 0o1333, 0o2705, 0o2713,
0o3562, 0o3245, 0o3770, 0o3202, 0o3521, 0o3250, 0o2117, 0o0530, 0o3021,
0o2511, 0o1562, 0o1067, 0o0424, 0o3402, 0o1326, 0o2142, 0o0733, 0o0504,
0o1611, 0o2724, 0o0753, 0o3724, 0o2652, 0o1743, 0o0013, 0o3464, 0o2300,
0o1334, 0o2175, 0o2564, 0o3075, 0o3455, 0o3627, 0o0617, 0o1324, 0o3506,
0o2231, 0o1110, 0o1271, 0o3740, 0o3652, 0o1644, 0o3635, 0o3436, 0o3076,
0o0434, 0o3340, 0o0054, 0o2446, 0o0025, 0o0150, 0o2746, 0o2723, 0o2601,
0o3440, 0o1312, 0o0544, 0o2062, 0o0176, 0o3616, 0o1740, 0o3777, 0o0432,
0o2466, 0o1667, 0o3601, 0o2706, 0o2022, 0o1363, 0o2331, 0o3556, 0o2205,
0o3734, 0o2115, 0o0010, 0o2140, 0o3136, 0o0272, 0o3264, 0o2017, 0o2505,
0o3532, 0o0647, 0o1542, 0o2154, 0o3734, 0o2621, 0o2711, 0o0217, 0o3503,
0o3457, 0o3750, 0o2525, 0o0113, 0o0265, 0o1711, 0o0552, 0o0675, 0o1706,
0o3513, 0o1135, 0o0566, 0o0500, 0o0254, 0o3445, 0o2542, 0o1257, 0o0211,
0o0534, 0o1420, 0o3401, 0o0714, 0o0613, 0o2475, 0o2572, 0o3265, 0o1250,
0o1711, 0o2704, 0o0135)
L1CO_S2_init = ( # 64 - 210
0o3035, 0o1557, 0o0237, 0o2527, 0o3307, 0o1402, 0o1225, 0o0607, 0o0351,
0o3724, 0o1675, 0o2625, 0o1030, 0o1443, 0o3277, 0o1132, 0o0572, 0o1241,
0o0535, 0o1366, 0o0041, 0o0561, 0o0122, 0o1205, 0o3753, 0o2543, 0o3031,
0o2260, 0o3773, 0o3156, 0o2215, 0o0146, 0o2413, 0o2564, 0o3310, 0o2267,
0o3120, 0o0064, 0o1042, 0o0476, 0o1020, 0o0431, 0o0216, 0o2736, 0o2527,
0o2431, 0o1013, 0o0524, 0o0726, 0o1042, 0o3362, 0o1364, 0o3354, 0o0623,
0o0145, 0o0214, 0o0223, 0o0151, 0o2405, 0o2522, 0o3235, 0o0452, 0o2617,
0o1300, 0o1430, 0o0773, 0o0772, 0o3561, 0o0607, 0o0420, 0o0527, 0o3770,
0o2536, 0o2233, 0o3366, 0o3766, 0o3554, 0o2060, 0o2070, 0o0713, 0o3366,
0o3247, 0o2776, 0o1244, 0o2102, 0o1712, 0o1245, 0o3344, 0o1277, 0o0165,
0o2131, 0o3623, 0o0141, 0o0421, 0o3032, 0o2065, 0o3024, 0o2663, 0o2274,
0o2114, 0o1664, 0o0413, 0o1512, 0o0135, 0o2737, 0o1015, 0o1075, 0o1255,
0o3473, 0o2716, 0o0101, 0o1105, 0o1407, 0o3407, 0o1046, 0o3237, 0o0154,
0o3010, 0o2245, 0o2051, 0o2144, 0o1743, 0o2511, 0o3410, 0o1414, 0o1275,
0o2257, 0o2331, 0o0276, 0o3261, 0o1760, 0o0430, 0o3477, 0o1676, 0o1636,
0o2411, 0o1473, 0o2266, 0o2104, 0o2070, 0o1766, 0o0711, 0o2533, 0o0353,
0o1744, 0o0053, 0o2222)
L2CM_R_init_1 = ( # PRN 1 - 63
0o742417664, 0o756014035, 0o002747144, 0o066265724, 0o601403471,
0o703232733, 0o124510070, 0o617316361, 0o047541621, 0o733031046,
0o713512145, 0o024437606, 0o021264003, 0o230655351, 0o001314400,
0o222021506, 0o540264026, 0o205521705, 0o064022144, 0o120161274,
0o044023533, 0o724744327, 0o045743577, 0o741201660, 0o700274134,
0o010247261, 0o713433445, 0o737324162, 0o311627434, 0o710452007,
0o722462133, 0o050172213, 0o500653703, 0o755077436, 0o136717361,
0o756675453, 0o435506112, 0o771353753, 0o226107701, 0o022025110,
0o402466344, 0o752566114, 0o702011164, 0o041216771, 0o047457275,
0o266333164, 0o713167356, 0o060546335, 0o355173035, 0o617201036,
0o157465571, 0o767360553, 0o023127030, 0o431343777, 0o747317317,
0o045706125, 0o002744276, 0o060036467, 0o217744147, 0o603340174,
0o326616775, 0o063240065, 0o111460621)
L2CM_R_init_2 = ( # PRN 159 - 210
0o604055104, 0o157065232, 0o013305707, 0o603552017, 0o230461355,
0o603653437, 0o652346475, 0o743107103, 0o401521277, 0o167335110,
0o014013575, 0o362051132, 0o617753265, 0o216363634, 0o755561123,
0o365304033, 0o625025543, 0o054420334, 0o415473671, 0o662364360,
0o373446602, 0o417564100, 0o000526452, 0o226631300, 0o113752074,
0o706134401, 0o041352546, 0o664630154, 0o276524255, 0o714720530,
0o714051771, 0o044526647, 0o207164322, 0o262120161, 0o204244652,
0o202133131, 0o714351204, 0o657127260, 0o130567507, 0o670517677,
0o607275514, 0o045413633, 0o212645405, 0o613700455, 0o706202440,
0o705056276, 0o020373522, 0o746013617, 0o132720621, 0o434015513,
0o566721727, 0o140633660)
L2CL_R_init_1 = ( # PRN 1 - 63
0o624145772, 0o506610362, 0o220360016, 0o710406104, 0o001143345,
0o053023326, 0o652521276, 0o206124777, 0o015563374, 0o561522076,
0o023163525, 0o117776450, 0o606516355, 0o003037343, 0o046515565,
0o671511621, 0o605402220, 0o002576207, 0o525163451, 0o266527765,
0o006760703, 0o501474556, 0o743747443, 0o615534726, 0o763621420,
0o720727474, 0o700521043, 0o222567263, 0o132765304, 0o746332245,
0o102300466, 0o255231716, 0o437661701, 0o717047302, 0o222614207,
0o561123307, 0o240713073, 0o101232630, 0o132525726, 0o315216367,
0o377046065, 0o655351360, 0o435776513, 0o744242321, 0o024346717,
0o562646415, 0o731455342, 0o723352536, 0o000013134, 0o011566642,
0o475432222, 0o463506741, 0o617127534, 0o026050332, 0o733774235,
0o751477772, 0o417631550, 0o052247456, 0o560404163, 0o417751005,
0o004302173, 0o715005045, 0o001154457)
L2CL_R_init_2 = ( # PRN 159 - 210
0o605253024, 0o063314262, 0o066073422, 0o737276117, 0o737243704,
0o067557532, 0o227354537, 0o704765502, 0o044746712, 0o720535263,
0o733541364, 0o270060042, 0o737176640, 0o133776704, 0o005645427,
0o704321074, 0o137740372, 0o056375464, 0o704374004, 0o216320123,
0o011322115, 0o761050112, 0o725304036, 0o721320336, 0o443462103,
0o510466244, 0o745522652, 0o373417061, 0o225526762, 0o047614504,
0o034730440, 0o453073141, 0o533654510, 0o377016461, 0o235525312,
0o507056307, 0o221720061, 0o520470122, 0o603764120, 0o145604016,
0o051237167, 0o033326347, 0o534627074, 0o645230164, 0o000171400,
0o022715417, 0o135471311, 0o137422057, 0o714426456, 0o640724672,
0o501254540, 0o513322453)
L5I_XB_adv = ( # PRN 1 - 210
266, 365, 804, 1138, 1509, 1559, 1756, 2084, 2170, 2303, 2527, 2687,
2930, 3471, 3940, 4132, 4332, 4924, 5343, 5443, 5641, 5816, 5898, 5918,
5955, 6243, 6345, 6477, 6518, 6875, 7168, 7187, 7329, 7577, 7720, 7777,
8057, 5358, 3550, 3412, 819, 4608, 3698, 962, 3001, 4441, 4937, 3717,
4730, 7291, 2279, 7613, 5723, 7030, 1475, 2593, 2904, 2056, 2757, 3756,
6205, 5053, 6437, 7789, 2311, 7432, 5155, 1593, 5841, 5014, 1545, 3016,
4875, 2119, 229, 7634, 1406, 4506, 1819, 7580, 5446, 6053, 7958, 5267,
2956, 3544, 1277, 2996, 1758, 3360, 2718, 3754, 7440, 2781, 6756, 7314,
208, 5252, 696, 527, 1399, 5879, 6868, 217, 7681, 3788, 1337, 2424,
4243, 5686, 1955, 4791, 492, 1518, 6566, 5349, 506, 113, 1953, 2797,
934, 3023, 3632, 1330, 4909, 4867, 1183, 3990, 6217, 1224, 1733, 2319,
3928, 2380, 841, 5049, 7027, 1197, 7208, 8000, 152, 6762, 3745, 4723,
5502, 4796, 123, 8142, 5091, 7875, 330, 5272, 4912, 374, 2045, 6616,
6321, 7605, 2570, 2419, 1234, 1922, 4317, 5110, 825, 958, 1089, 7813,
6058, 7703, 6702, 1714, 6371, 2281, 1986, 6282, 3201, 3760, 1056, 6233,
1150, 2823, 6250, 645, 2401, 1639, 2946, 7091, 923, 7045, 6493, 1706,
5836, 926, 6086, 950, 5905, 3240, 6675, 3197, 1555, 3589, 4555, 5671,
6948, 4664, 2086, 5950, 5521, 1515)
L5Q_XB_adv = ( # PRN 1 - 210
1701, 323, 5292, 2020, 5429, 7136, 1041, 5947, 4315, 148, 535, 1939,
5206, 5910, 3595, 5135, 6082, 6990, 3546, 1523, 4548, 4484, 1893, 3961,
7106, 5299, 4660, 276, 4389, 3783, 1591, 1601, 749, 1387, 1661, 3210,
708, 4226, 5604, 6375, 3056, 1772, 3662, 4401, 5218, 2838, 6913, 1685,
1194, 6963, 5001, 6694, 991, 7489, 2441, 639, 2097, 2498, 6470, 2399,
242, 3768, 1186, 5246, 4259, 5907, 3870, 3262, 7387, 3069, 2999, 7993,
7849, 4157, 5031, 5986, 4833, 5739, 7846, 898, 2022, 7446, 6404, 155,
7862, 7795, 6121, 4840, 6585, 429, 6020, 200, 1664, 1499, 7298, 1305,
7323, 7544, 4438, 2485, 3387, 7319, 1853, 5781, 1874, 7555, 2132, 6441,
6722, 1192, 2588, 2188, 297, 1540, 4138, 5231, 4789, 659, 871, 6837,
1393, 7383, 611, 4920, 5416, 1611, 2474, 118, 1382, 1092, 7950, 7223,
1769, 4721, 1252, 5147, 2165, 7897, 4054, 3498, 6571, 2858, 8126, 7017,
1901, 181, 1114, 5195, 7479, 4186, 3904, 7128, 1396, 4513, 5967, 2580,
2575, 7961, 2598, 4508, 2090, 3685, 7748, 684, 913, 5558, 2894, 5858,
6432, 3813, 3573, 7523, 5280, 3376, 7424, 2918, 5793, 1747, 7079, 2921,
2490, 4119, 3373, 977, 681, 4273, 5419, 5626, 1266, 5804, 2414, 6444,
4757, 427, 5452, 5182, 6606, 6531, 4268, 3115, 6835, 862, 4856, 2765,
37, 1943, 7977, 2512, 4451, 4071)
L6D_R_init = ( # PRN 193 - 201
0o00255021, 0o00327455, 0o00531421, 0o00615350, 0o00635477, 0o00000000,
0o01715254, 0o01741247, 0o02322713)
L6E_R_init = ( # PRN 203 - 211
0o01142153, 0o01723711, 0o03672765, 0o00030404, 0o00000546, 0o00000000,
0o03642512, 0o00255043, 0o02020075)
E5AI_X2_init = ( # PRN 1 - 50
0o30305, 0o14234, 0o27213, 0o20577, 0o23312, 0o33463, 0o15614, 0o12537,
0o01527, 0o30236, 0o27344, 0o07272, 0o36377, 0o17046, 0o06434, 0o15405,
0o24252, 0o11631, 0o24776, 0o00630, 0o11560, 0o17272, 0o27445, 0o31702,
0o13012, 0o14401, 0o34727, 0o22627, 0o30623, 0o27256, 0o01520, 0o14211,
0o31465, 0o22164, 0o33516, 0o02737, 0o21316, 0o35425, 0o35633, 0o24655,
0o14054, 0o27027, 0o06604, 0o31455, 0o34465, 0o25273, 0o20763, 0o31721,
0o17312, 0o13277)
E5AQ_X2_init = ( # PRN 1 - 50
0o25652, 0o05142, 0o24723, 0o31751, 0o27366, 0o24660, 0o33655, 0o27450,
0o07626, 0o01705, 0o12717, 0o32122, 0o16075, 0o16644, 0o37556, 0o02477,
0o02265, 0o06430, 0o25046, 0o12735, 0o04262, 0o11230, 0o00037, 0o06137,
0o04312, 0o20606, 0o11162, 0o22252, 0o30533, 0o24614, 0o07767, 0o32705,
0o05052, 0o27553, 0o03711, 0o02041, 0o34775, 0o05274, 0o37356, 0o16205,
0o36270, 0o06600, 0o26773, 0o17375, 0o35267, 0o36255, 0o12044, 0o26442,
0o21621, 0o25411)
E5BI_X2_init = ( # PRN 1 - 50
0o07220, 0o26047, 0o00252, 0o17166, 0o14161, 0o02540, 0o01537, 0o26023,
0o01725, 0o20637, 0o02364, 0o27731, 0o30640, 0o34174, 0o06464, 0o07676,
0o32231, 0o10353, 0o00755, 0o26077, 0o11644, 0o11537, 0o35115, 0o20452,
0o34645, 0o25664, 0o21403, 0o32253, 0o02337, 0o30777, 0o27122, 0o22377,
0o36175, 0o33075, 0o33151, 0o13134, 0o07433, 0o10216, 0o35466, 0o02533,
0o05351, 0o30121, 0o14010, 0o32576, 0o30326, 0o37433, 0o26022, 0o35770,
0o06670, 0o12017)
E5BQ_X2_init = ( # PRN 1 - 50
0o03331, 0o06143, 0o25322, 0o23371, 0o00413, 0o36235, 0o17750, 0o04745,
0o13005, 0o37140, 0o30155, 0o20237, 0o03461, 0o31662, 0o27146, 0o05547,
0o02456, 0o30013, 0o00322, 0o10761, 0o26767, 0o36004, 0o30713, 0o07662,
0o21610, 0o20134, 0o11262, 0o10706, 0o34143, 0o11051, 0o25460, 0o17665,
0o32354, 0o21230, 0o20146, 0o11362, 0o37246, 0o16344, 0o15034, 0o25471,
0o25646, 0o22157, 0o04336, 0o16356, 0o04075, 0o02626, 0o11706, 0o37011,
0o27041, 0o31024)
B1I_ph_sel = ( # PRN 1 - 63
(1, 3) , (1, 4) , (1, 5) , (1, 6) , (1, 8) , (1, 9) ,
(1, 10) , (1, 11) , (2, 7) , (3, 4) , (3, 5) , (3, 6) ,
(3, 8) , (3, 9) , (3, 10) , (3, 11) , (4, 5) , (4, 6) ,
(4, 8) , (4, 9) , (4, 10) , (4, 11) , (5, 6) , (5, 8) ,
(5, 9) , (5, 10) , (5, 11) , (6, 8) , (6, 9) , (6, 10) ,
(6, 11) , (8, 9) , (8, 10) , (8, 11) , (9, 10) , (9, 11) ,
(10, 11) , (1, 2, 7) , (1, 3, 4), (1, 3, 6) , (1, 3, 8) , (1, 3, 10),
(1, 3, 11), (1, 4, 5) , (1, 4, 9), (1, 5, 6) , (1, 5, 8) , (1, 5, 10),
(1, 5, 11), (1, 6, 9) , (1, 8, 9), (1, 9, 10), (1, 9, 11), (2, 3, 7) ,
(2, 5, 7) , (2, 7, 9) , (3, 4, 5), (3, 4, 9) , (3, 5, 6) , (3, 5, 8) ,
(3, 5, 10), (3, 5, 11), (3, 6, 9))
B1CD_ph_diff = ( # PRN 1 - 63
2678, 4802, 958, 859, 3843, 2232, 124, 4352, 1816, 1126, 1860, 4800,
2267, 424, 4192, 4333, 2656, 4148, 243, 1330, 1593, 1470, 882, 3202,
5095, 2546, 1733, 4795, 4577, 1627, 3638, 2553, 3646, 1087, 1843, 216,
2245, 726, 1966, 670, 4130, 53, 4830, 182, 2181, 2006, 1080, 2288,
2027, 271, 915, 497, 139, 3693, 2054, 4342, 3342, 2592, 1007, 310,
4203, 455, 4318)
B1CD_trunc_pnt = ( # PRN 1 - 63
699, 694, 7318, 2127, 715, 6682, 7850, 5495, 1162, 7682, 6792, 9973,
6596, 2092, 19,10151, 6297, 5766, 2359, 7136, 1706, 2128, 6827, 693,
9729, 1620, 6805, 534, 712, 1929, 5355, 6139, 6339, 1470, 6867, 7851,
1162, 7659, 1156, 2672, 6043, 2862, 180, 2663, 6940, 1645, 1582, 951,
6878, 7701, 1823, 2391, 2606, 822, 6403, 239, 442, 6769, 2560, 2502,
5072, 7268, 341)
B1CP_ph_diff = ( # PRN 1 - 63
796, 156, 4198, 3941, 1374, 1338, 1833, 2521, 3175, 168, 2715, 4408,
3160, 2796, 459, 3594, 4813, 586, 1428, 2371, 2285, 3377, 4965, 3779,
4547, 1646, 1430, 607, 2118, 4709, 1149, 3283, 2473, 1006, 3670, 1817,
771, 2173, 740, 1433, 2458, 3459, 2155, 1205, 413, 874, 2463, 1106,
1590, 3873, 4026, 4272, 3556, 128, 1200, 130, 4494, 1871, 3073, 4386,
4098, 1923, 1176)
B1CP_trunc_pnt = ( # PRN 1 - 63
7575, 2369, 5688, 539, 2270, 7306, 6457, 6254, 5644, 7119, 1402, 5557,
5764, 1073, 7001, 5910,10060, 2710, 1546, 6887, 1883, 5613, 5062, 1038,
10170, 6484, 1718, 2535, 1158, 526, 7331, 5844, 6423, 6968, 1280, 1838,
1989, 6468, 2091, 1581, 1453, 6252, 7122, 7711, 7216, 2113, 1095, 1628,
1713, 6102, 6123, 6070, 1115, 8047, 6795, 2575, 53, 1729, 6388, 682,
5565, 7160, 2277)
B1CS_ph_diff = ( # PRN 1 - 63
269, 1448, 1028, 1324, 822, 5, 155, 458, 310, 959, 1238, 1180,
1288, 334, 885, 1362, 181, 1648, 838, 313, 750, 225, 1477, 309,
108, 1457, 149, 322, 271, 576, 1103, 450, 399, 241, 1045, 164,
513, 687, 422, 303, 324, 495, 725, 780, 367, 882, 631, 37,
647, 1043, 24, 120, 134, 136, 158, 214, 335, 340, 661, 889,
929, 1002, 1149)
B1CS_trunc_pnt = ( # PRN 1 - 63
1889, 1268, 1593, 1186, 1239, 1930, 176, 1696, 26, 1344, 1271, 1182,
1381, 1604, 1333, 1185, 31, 704, 1190, 1646, 1385, 113, 860, 1656,
1921, 1173, 1928, 57, 150, 1214, 1148, 1458, 1519, 1635, 1257, 1687,
1382, 1514, 1, 1583, 1806, 1664, 1338, 1111, 1706, 1543, 1813, 228,
2871, 2884, 1823, 75, 11, 63, 1937, 22, 1768, 1526, 1402, 1445,
1680, 1290, 1245)
B2AD_G2_init = ( # PRN 1 - 63
0b1000000100101, 0b1000000110100, 0b1000010101101, 0b1000101001111,
0b1000101010101, 0b1000110101110, 0b1000111101110, 0b1000111111011,
0b1001100101001, 0b1001111011010, 0b1010000110101, 0b1010001000100,
0b1010001010101, 0b1010001011011, 0b1010001011100, 0b1010010100011,
0b1010011110111, 0b1010100000001, 0b1010100111110, 0b1010110101011,
0b1010110110001, 0b1011001010011, 0b1011001100010, 0b1011010011000,
0b1011010110110, 0b1011011110010, 0b1011011111111, 0b1011100010010,
0b1011100111100, 0b1011110100001, 0b1011111001000, 0b1011111010100,
0b1011111101011, 0b1011111110011, 0b1100001010001, 0b1100010010100,
0b1100010110111, 0b1100100010001, 0b1100100011001, 0b1100110101011,
0b1100110110001, 0b1100111010010, 0b1101001010101, 0b1101001110100,
0b1101011001011, 0b1101101010111, 0b1110000110100, 0b1110010000011,
0b1110010001011, 0b1110010100011, 0b1110010101000, 0b1110100111011,
0b1110110010111, 0b1111001001000, 0b1111010010100, 0b1111010011001,
0b1111011011010, 0b1111011111000, 0b1111011111111, 0b1111110110101,
0b0010000000010, 0b1101111110101, 0b0001111010010)
B2AP_G2_init = ( # PRN 1 - 63
0b1000000100101, 0b1000000110100, 0b1000010101101, 0b1000101001111,
0b1000101010101, 0b1000110101110, 0b1000111101110, 0b1000111111011,
0b1001100101001, 0b1001111011010, 0b1010000110101, 0b1010001000100,
0b1010001010101, 0b1010001011011, 0b1010001011100, 0b1010010100011,
0b1010011110111, 0b1010100000001, 0b1010100111110, 0b1010110101011,
0b1010110110001, 0b1011001010011, 0b1011001100010, 0b1011010011000,
0b1011010110110, 0b1011011110010, 0b1011011111111, 0b1011100010010,
0b1011100111100, 0b1011110100001, 0b1011111001000, 0b1011111010100,
0b1011111101011, 0b1011111110011, 0b1100001010001, 0b1100010010100,
0b1100010110111, 0b1100100010001, 0b1100100011001, 0b1100110101011,
0b1100110110001, 0b1100111010010, 0b1101001010101, 0b1101001110100,
0b1101011001011, 0b1101101010111, 0b1110000110100, 0b1110010000011,
0b1110010001011, 0b1110010100011, 0b1110010101000, 0b1110100111011,
0b1110110010111, 0b1111001001000, 0b1111010010100, 0b1111010011001,
0b1111011011010, 0b1111011111000, 0b1111011111111, 0b1111110110101,
0b1010010000110, 0b0010111111000, 0b0001101010101)
B2AS_ph_diff = ( # PRN 1 - 63
123, 55, 40, 139, 31, 175, 350, 450, 478, 8, 73, 97,
213, 407, 476, 4, 15, 47, 163, 280, 322, 353, 375, 510,
332, 7, 13, 16, 18, 25, 50, 81, 118, 127, 132, 134,
164, 177, 208, 249, 276, 349, 439, 477, 498, 88, 155, 330,
3, 21, 84, 111, 128, 153, 197, 199, 214, 256, 265, 291,
324, 326, 340)
B2AS_trunc_pnt = ( # PRN 1 - 63
138, 570, 351, 77, 885, 247, 413, 180, 3, 26, 17, 172,
30, 1008, 646, 158, 170, 99, 53, 179, 925, 114, 10, 584,
60, 3, 684, 263, 545, 22, 546, 190, 303, 234, 38, 822,
57, 668, 697, 93, 18, 66, 318, 133, 98, 70, 132, 26,
354, 58, 41, 182, 944, 205, 23, 1, 792, 641, 83, 7,
111, 96, 92)
B2BI_G2_init = ( # PRN 1 - 63
0b1000000100101, 0b1000000110100, 0b1000010101101, 0b1000101001111,
0b1000101010101, 0b1000110101110, 0b1000111101110, 0b1000111111011,
0b1001100101001, 0b1001111011010, 0b1010000110101, 0b1010001000100,
0b1010001010101, 0b1010001011011, 0b1010001011100, 0b1010010100011,
0b1010011110111, 0b1010100000001, 0b1010100111110, 0b1010110101011,
0b1010110110001, 0b1011001010011, 0b1011001100010, 0b1011010011000,
0b1011010110110, 0b1011011110010, 0b1011011111111, 0b1011100010010,
0b1011100111100, 0b1011110100001, 0b1011111001000, 0b1011111010100,
0b1011111101011, 0b1011111110011, 0b1100001010001, 0b1100010010100,
0b1100010110111, 0b1100100010001, 0b1100100011001, 0b1100110101011,
0b1100110110001, 0b1100111010010, 0b1101001010101, 0b1101001110100,
0b1101011001011, 0b1101101010111, 0b1110000110100, 0b1110010000011,
0b1110010001011, 0b1110010100011, 0b1110010101000, 0b1110100111011,
0b1110110010111, 0b1111001001000, 0b1111010010100, 0b1111010011001,
0b1111011011010, 0b1111011111000, 0b1111011111111, 0b1111110110101,
0b1111110111101, 0b0101110000101, 0b0101100111011)
B3I_G2_init = ( # PRN 1 - 63
0b1010111111111, 0b1111000101011, 0b1011110001010, 0b1111111111011,
0b1100100011111, 0b1001001100100, 0b1111111010010, 0b1110111111101,
0b1010000000010, 0b0010000011011, 0b1110101110000, 0b0010110011110,
0b0110010010101, 0b0111000100110, 0b1000110001001, 0b1110001111100,
0b0010011000101, 0b0000011101100, 0b1000101010111, 0b0001011011110,
0b0010000101101, 0b0010110001010, 0b0001011001111, 0b0011001100010,
0b0011101001000, 0b0100100101001, 0b1011011010011, 0b1010111100010,
0b0001011110101, 0b0111111111111, 0b0110110001111, 0b1010110001001,
0b1001010101011, 0b1100110100101, 0b1101001011101, 0b1111101110100,
0b0010101100111, 0b1110100010000, 0b1101110010000, 0b1101011001110,
0b1000000110100, 0b0101111011001, 0b0110110111100, 0b1101001110001,
0b0011100100010, 0b0101011000101, 0b1001111100110, 0b1111101001000,
0b0000101001001, 0b1000010101100, 0b1111001001100, 0b0100110001111,
0b0000000011000, 0b1000000000100, 0b0011010100110, 0b1011001000110,
0b0111001111000, 0b0010111001010, 0b1100111110110, 0b1001001000101,
0b0111000100000, 0b0011001000010, 0b0010001001110)
I5S_G2_init = ( # PRN 1 - 14
0b1110100111, 0b0000100110, 0b1000110100, 0b0101110010, 0b1110110000,
0b0001101011, 0b0000010100, 0b0100110000, 0b0010011000, 0b1101100100,
0b0001001100, 0b1101111100, 0b1011010010, 0b0111101010)
ISS_G2_init = ( # PRN 1 - 14
0b0011101111, 0b0101111101, 0b1000110001, 0b0010101011, 0b1010010001,
0b0100101100, 0b0010001110, 0b0100100110, 0b1100001110, 0b1010111110,
0b1110010001, 0b1101101001, 0b0101000101, 0b0100001101)
NH10 = ( # 10 bits Neuman-Hoffman code
-1, -1, -1, -1, 1, 1, -1, 1, -1, 1)
NH20 = ( # 20 bits Neuman-Hoffman code
-1, -1, -1, -1, -1, 1, -1, -1, 1, 1, -1, 1, -1, 1, -1, -1, 1, 1, 1, -1)
BC = ( # Baker code
-1, -1, -1, 1, -1)
#-------------------------------------------------------------------------------
# Generate primary code.
#
# args:
# sig (I) Signal type as string ('L1CA', 'L1CB', 'L1CP', ....)
# prn (I) PRN number
#
# returns:
# code Primary code as int8 ndarray (-1 or 1)
# (sub-carrier modulated for BOC or zero-padded for TDM)
#
def gen_code(sig, prn):
sig = sig.upper()
if sig == 'L1CA':
return gen_code_L1CA(prn)
elif sig == 'L1S':
return gen_code_L1S(prn)
elif sig == 'L1CB':
return gen_code_L1CB(prn)
elif sig == 'L1CP':
return gen_code_L1CP(prn)
elif sig == 'L1CD':
return gen_code_L1CD(prn)
elif sig == 'L2CM':
return gen_code_L2CM(prn)
elif sig == 'L2CL':
return gen_code_L2CL(prn)
elif sig == 'L5I':
return gen_code_L5I(prn)
elif sig == 'L5Q':
return gen_code_L5Q(prn)
elif sig == 'L5SI':
return gen_code_L5SI(prn)
elif sig == 'L5SQ':
return gen_code_L5SQ(prn)
elif sig == 'L6D':
return gen_code_L6D(prn)
elif sig == 'L6E':
return gen_code_L6E(prn)
elif sig == 'G1CA':
return gen_code_G1CA(prn)
elif sig == 'G2CA':
return gen_code_G2CA(prn)
elif sig == 'G3OCD':
return gen_code_G3OCD(prn)
elif sig == 'G3OCP':
return gen_code_G3OCP(prn)
elif sig == 'E1B':
return gen_code_E1B(prn)
elif sig == 'E1C':
return gen_code_E1C(prn)
elif sig == 'E5AI':
return gen_code_E5AI(prn)
elif sig == 'E5AQ':
return gen_code_E5AQ(prn)
elif sig == 'E5BI':
return gen_code_E5BI(prn)
elif sig == 'E5BQ':
return gen_code_E5BQ(prn)
elif sig == 'E6B':
return gen_code_E6B(prn)
elif sig == 'E6C':
return gen_code_E6C(prn)
elif sig == 'B1I':
return gen_code_B1I(prn)
elif sig == 'B1CD':
return gen_code_B1CD(prn)
elif sig == 'B1CP':
return gen_code_B1CP(prn)
elif sig == 'B2I':
return gen_code_B2I(prn)
elif sig == 'B2AD':
return gen_code_B2AD(prn)
elif sig == 'B2AP':
return gen_code_B2AP(prn)
elif sig == 'B2BI':
return gen_code_B2BI(prn)
elif sig == 'B3I':
return gen_code_B3I(prn)
elif sig == 'I5S':
return gen_code_I5S(prn)
elif sig == 'ISS':
return gen_code_ISS(prn)
else:
return NONE
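# Example usage (illustrative sketch, not part of the original comments): for a
# plain BPSK signal such as L1 C/A the returned array holds code_len(sig) chips
# of -1/+1, while BOC/TDM signals come back sub-carrier modulated or zero-padded
# (and hence longer), e.g.
#
#   code = gen_code('L1CA', 1)              # GPS L1 C/A code for PRN 1
#   assert len(code) == code_len('L1CA')    # 1023 chips of -1/+1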
#-------------------------------------------------------------------------------
# Generate secondary (overlay) code.
#
# args:
# sig (I) Signal type as string ('L1CA', 'L1CB', 'L1CP', ....)
# prn (I) PRN number
#
# returns:
# code Secondary code as int8 ndarray (-1 or 1)
#
def sec_code(sig, prn):
sig = sig.upper()
if sig in ('L1CA', 'L1S', 'L1CB','L1CD', 'L2CM', 'L2CL', 'L6D', 'L6E', 'E1B',
'E6B', 'B1CD', 'B2BI', 'I5S', 'ISS'):
return np.array([1], dtype='int8') # no secondary code
elif sig == 'L1CP':
return sec_code_L1CP(prn)
elif sig == 'L5I':
return sec_code_L5I(prn)
elif sig == 'L5Q':
return sec_code_L5Q(prn)
elif sig == 'L5SI':
return sec_code_L5SI(prn)
elif sig == 'L5SQ':
return sec_code_L5SQ(prn)
elif sig == 'G1CA':
return sec_code_G1CA(prn)
elif sig == 'G2CA':
return sec_code_G2CA(prn)
elif sig == 'G3OCD':
return sec_code_G3OCD(prn)
elif sig == 'G3OCP':
return sec_code_G3OCP(prn)
elif sig == 'E1C':
return sec_code_E1C(prn)
elif sig == 'E5AI':
return sec_code_E5AI(prn)
elif sig == 'E5AQ':
return sec_code_E5AQ(prn)
elif sig == 'E5BI':
return sec_code_E5BI(prn)
elif sig == 'E5BQ':
return sec_code_E5BQ(prn)
elif sig == 'E6C':
return sec_code_E6C(prn)
elif sig == 'B1I':
return sec_code_B1I(prn)
elif sig == 'B1CP':
return sec_code_B1CP(prn)
elif sig == 'B2I':
return sec_code_B2I(prn)
elif sig == 'B2AD':
return sec_code_B2AD(prn)
elif sig == 'B2AP':
return sec_code_B2AP(prn)
elif sig == 'B3I':
return sec_code_B3I(prn)
else:
return NONE
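# Illustrative sketch (an assumption about typical usage, not taken from the
# original comments): the secondary (overlay) code modulates successive primary
# code periods, so one full overlay cycle could be formed as
#
#   prim = gen_code('L5I', 1)                        # 10230 chips, 1 ms period
#   sec  = sec_code('L5I', 1)                        # 10-bit Neuman-Hoffman code
#   full = np.concatenate([prim * s for s in sec])   # 10 ms of combined code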
#-------------------------------------------------------------------------------
# Generate resampled and zero-padded code.
#
# args:
# code (I) Code as int8 ndarray (-1 or 1)
# T (I) Code cycle (period) (s)
# coff (I) Code offset (s)
# fs (I) Sampling frequency (Hz)
# N (I) Number of samples
# Nz=0 (I) Number of zero-padding (optional)
#
# returns:
# code Resampled and zero-padded code as complex64 ndarray (-1 or 1)
#
def res_code(code, T, coff, fs, N, Nz=0):
dx = len(code) / T / fs
ix = ((coff * fs + np.arange(N)) * dx).astype('int')
code = np.array(code[ix % len(code)], dtype='complex64')
if Nz > 0:
code = np.hstack([code, np.zeros(Nz, dtype='complex64')])
return code
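# Example (a rough sketch with illustrative parameters): resampling the 1 ms
# L1 C/A code at fs = 12 MHz gives N = fs * T = 12000 samples per code period,
# so one period of the sampled replica could be built as
#
#   fs  = 12e6
#   T   = code_cyc('L1CA')                  # 1e-3 s
#   rep = res_code(gen_code('L1CA', 1), T, 0.0, fs, int(fs * T))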
#-------------------------------------------------------------------------------
# Generate resampled and zero-padded code FFT (DFT).
#
# args:
# code (I) Code as int8 ndarray (-1 or 1)
# T (I) Code cycle (period) (s)
# coff (I) Code offset (s)
# fs (I) Sampling frequency (Hz)
# N (I) Number of samples
# Nz=0 (I) Number of zero-padding (optional)
#
# returns:
# code_fft Resampled and zero-padded code DFT as complex64 ndarray
#
def gen_code_fft(code, T, coff, fs, N, Nz=0):
code_res = res_code(code, T, coff, fs, N, Nz)
return np.conj(fft.fft(code_res))
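# Note (reasoning sketch, not from the original comments): the conjugated code
# DFT is the form needed for FFT-based parallel code search, since circular
# correlation over all code offsets can then be computed as
#
#   corr = fft.ifft(fft.fft(rcv_samples) * code_fft)
#
# where rcv_samples is a carrier-wiped baseband snapshot of the same length.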
#-------------------------------------------------------------------------------
# Get primary code cycle (period).
#
# args:
# sig (I) Signal type as string ('L1CA', 'L1CB', 'L1CP', ....)
#
# returns:
# cyc Primary code cycle (period) (s) (0.0: error)
#
def code_cyc(sig):
sig = sig.upper()
if sig in ('L1CA', 'L1CB', 'L1S', 'L5I', 'L5Q', 'L5SI', 'L5SQ', 'G1CA',
'G2CA', 'G3OCD', 'G3OCP', 'E5AI', 'E5AQ', 'E5BI', 'E5BQ', 'E6B', 'E6C',
'B1I', 'B2I', 'B2AD', 'B2AP', 'B2BI', 'B3I', 'I5S', 'ISS'):
return 1e-3
elif sig in ('L6D', 'L6E', 'E1B', 'E1C'):
return 4e-3
elif sig in ('L1CP', 'L1CD', 'B1CD', 'B1CP'):
return 10e-3
elif sig == 'L2CM':
return 20e-3
elif sig == 'L2CL':
return 1500e-3
else:
return 0.0
#-------------------------------------------------------------------------------
# Get primary code length.
#
# args:
# sig (I) Signal type as string ('L1CA', 'L1CB', 'L1CP', ....)
#
# returns:
# N Primary code length (chips) (0: error)
#
def code_len(sig):
sig = sig.upper()
if sig in ('L1CA', 'L1S', 'L1CB', 'I5S', 'ISS'):
return 1023
elif sig in ('L1CP', 'L1CD', 'L2CM', 'L5I', 'L5Q', 'L5SI', 'L5SQ', 'L6D',
'L6E', 'G3OCD', 'G3OCP', 'E5AI', 'E5AQ', 'E5BI', 'E5BQ', 'B1CD',
'B1CP', 'B2AD', 'B2AP', 'B2BI', 'B3I'):
return 10230
elif sig == 'L2CL':
return 767250
elif sig in ('E6B', 'E6C'):
return 5115
elif sig in ('E1B', 'E1C'):
return 4092
elif sig in ('G1CA', 'G2CA'):
return 511
elif sig in ('B1I', 'B2I'):
return 2046
else:
return 0
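# Note (derived relation, added for clarity): the chip rate of a signal follows
# from the two helpers above as code_len(sig) / code_cyc(sig), e.g.
#
#   code_len('L1CA') / code_cyc('L1CA')   # 1023  / 1e-3 = 1.023 Mcps
#   code_len('L5I')  / code_cyc('L5I')    # 10230 / 1e-3 = 10.23 Mcps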
#-------------------------------------------------------------------------------
# Get signal carrier frequency.
#
# args:
# sig (I) Signal type as string ('L1CA', 'L1CB', 'L1CP', ....)
#
# returns:
# freq Signal carrier frequency (Hz) (0.0: error)
#
def sig_freq(sig):
sig = sig.upper()
if sig in ('L1CA', 'L1CB', 'L1S' , 'E1B', 'E1C', 'L1CP', 'L1CD', 'B1CD',
'B1CP'):
return 1575.42e6
elif sig in ('L2CM', 'L2CL'):
return 1227.60e6
elif sig in ('L5I', 'L5Q', 'L5SI', 'L5SQ', 'E5AI', 'E5AQ', 'B2AD', 'B2AP',
'I5S'):
return 1176.45e6
elif sig in ('E5BI', 'E5BQ', 'B2I', 'B2BI'):
return 1207.14e6
elif sig in ('L6D', 'L6E', 'E6B' , 'E6C'):
return 1278.75e6
elif sig == 'B1I':
return 1561.098e6
elif sig == 'B3I':
return 1268.52e6
elif sig == 'G1CA':
return 1602.0e6
elif sig == 'G2CA':
return 1246.0e6
elif sig in ('G3OCD', 'G3OCP'):
return 1202.025e6
elif sig == 'ISS':
return 2492.028e6
else:
return 0.0
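# Note (standard relation, added for clarity): the carrier wavelength used for
# e.g. Doppler-to-velocity conversion is c / sig_freq(sig), so for L1 C/A
#
#   299792458.0 / sig_freq('L1CA')   # ~0.1903 m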
# generate L1C/A code ([1]) ----------------------------------------------------
def gen_code_L1CA(prn):
if prn < 1 or prn > 210:
return NONE
N = 1023
if prn not in L1CA:
global L1CA_G1, L1CA_G2
if len(L1CA_G1) == 0:
L1CA_G1 = gen_code_L1CA_G1(N)
L1CA_G2 = gen_code_L1CA_G2(N)
L1CA[prn] = -L1CA_G1 * np.roll(L1CA_G2, L1CA_G2_delay[prn-1])
return L1CA[prn]
# generate L1C/A G1 code -------------------------------------------------------
def gen_code_L1CA_G1(N):
return LFSR(N, 0b1111111111, 0b0010000001, 10)
# generate L1C/A G2 code -------------------------------------------------------
def gen_code_L1CA_G2(N):
return LFSR(N, 0b1111111111, 0b0110010111, 10)
# generate L1S code ([3]) ------------------------------------------------------
def gen_code_L1S(prn):
if prn < 184 or prn > 191:
return NONE
return gen_code_L1CA(prn)
# generate L1C/B code ([3]) ----------------------------------------------------
def gen_code_L1CB(prn):
if prn < 203 or prn > 206:
return NONE
code = gen_code_L1CA(prn)
return mod_code(code, [1, -1]) # BOC(1,1)
# generate L1CP code -----------------------------------------------------------
def gen_code_L1CP(prn):
if prn < 1 or prn > 210:
return NONE
N = 10230
if prn not in L1CP:
code = gen_code_L1CPD(N, L1CP_weil_idx[prn-1], L1CP_ins_idx[prn-1])
L1CP[prn] = mod_code(code, [1, -1]) # BOC(1,1) instead of TMBOC(6,1,4/33)
return L1CP[prn]
# generate L1CD code -----------------------------------------------------------
def gen_code_L1CD(prn):
if prn < 1 or prn > 210:
return NONE
N = 10230
if prn not in L1CD:
code = gen_code_L1CPD(N, L1CD_weil_idx[prn-1], L1CD_ins_idx[prn-1])
L1CD[prn] = mod_code(code, [1, -1]) # BOC(1,1)
return L1CD[prn]
# generate L1CP/D code ([7]) ---------------------------------------------------
def gen_code_L1CPD(N, w, p):
global L1C_L_SEQ
if len(L1C_L_SEQ) == 0:
L1C_L_SEQ = gen_legendre_seq(10223)
ins_code = [-1, 1, 1, -1, 1, -1, -1]
code = np.zeros(N, dtype='int8')
for t in range(0, p - 1):
code[t] = -L1C_L_SEQ[t] * L1C_L_SEQ[(t + w) % 10223]
for t in range(p - 1, p + 6):
code[t] = ins_code[t - p + 1]
for t in range(p + 6, N):
code[t] = -L1C_L_SEQ[t - 7] * L1C_L_SEQ[(t - 7 + w) % 10223]
return code
# generate Legendre sequence ---------------------------------------------------
def gen_legendre_seq(N):
L = np.full(N, -1, dtype='int8')
for i in range(1, N):
L[(i * i) % N] = 1
return L
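# Note (property of the construction, added for clarity): L[k] is +1 exactly when
# k is a non-zero quadratic residue modulo N and -1 otherwise (including L[0]);
# a quick sanity check for a small prime could look like
#
#   L = gen_legendre_seq(11)
#   # quadratic residues mod 11 are {1, 3, 4, 5, 9}
#   assert [i for i in range(11) if L[i] == 1] == [1, 3, 4, 5, 9]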
# generate L1CP secondary code ([7]) -------------------------------------------
def sec_code_L1CP(prn):
if prn < 1 or prn > 210:
return NONE
N = 1800
if prn not in L1CO:
tap1 = rev_reg(L1CO_S1_poly[prn-1] >> 1, 11)
code1 = LFSR(N, rev_reg(L1CO_S1_init[prn-1], 11), tap1, 11)
if prn >= 64:
tap2 = 0b00000000101
code2 = LFSR(N, rev_reg(L1CO_S2_init[prn-64], 11), tap2, 11)
code1 = -code1 * code2
L1CO[prn] = code1
return L1CO[prn]
# generate L2CM code ([1]) -----------------------------------------------------
def gen_code_L2CM(prn):
if (prn < 1 or prn > 63) and (prn < 159 or prn > 210):
return NONE
N = 10230
if prn not in L2CM:
R = L2CM_R_init_1[prn-1] if prn <= 63 else L2CM_R_init_2[prn-159]
code = gen_code_L2C(N, R)
L2CM[prn] = mod_code(code, [-1, 0]) # TDM
return L2CM[prn]
# generate L2CL code ([1]) -----------------------------------------------------
def gen_code_L2CL(prn):
if (prn < 1 or prn > 63) and (prn < 159 or prn > 210):
return NONE
N = 767250
if prn not in L2CL:
R = L2CL_R_init_1[prn-1] if prn <= 63 else L2CL_R_init_2[prn-159]
code = gen_code_L2C(N, R)
L2CL[prn] = mod_code(code, [0, 1]) # TDM
return L2CL[prn]
# generate L2C code ([1]) ------------------------------------------------------
def gen_code_L2C(N, R):
code = np.zeros(N, dtype='int8')
for i in range(N):
code[i] = CHIP[R & 1]
R = (R >> 1) ^ (0b100100101001001010100111100 * (R & 1))
return code
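# Note (explanatory sketch, hedged): each chip is taken from the LSB of the
# 27-bit register R and the register is then advanced by the modular
# shift-register recursion above; L2CM and L2CL are differently initialised and
# differently truncated windows of the same underlying sequence, which is why
# they share this generator.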
# generate L5I code ([2]) ------------------------------------------------------
def gen_code_L5I(prn):
    if prn < 1 or prn > 210:
return NONE
N = 10230
if prn not in L5I:
global L5_XA, L5_XB
if len(L5_XA) == 0:
L5_XA = gen_code_L5_XA(N)
L5_XB = gen_code_L5_XB(N)
L5I[prn] = -L5_XA * np.roll(L5_XB, -L5I_XB_adv[prn-1])
return L5I[prn]
# generate L5Q code ([2]) ------------------------------------------------------
def gen_code_L5Q(prn):
    if prn < 1 or prn > 210:
return NONE
N = 10230
if prn not in L5Q:
global L5_XA, L5_XB
if len(L5_XA) == 0:
L5_XA = gen_code_L5_XA(N)
L5_XB = gen_code_L5_XB(N)
L5Q[prn] = -L5_XA * np.roll(L5_XB, -L5Q_XB_adv[prn-1])
return L5Q[prn]
# generate L5SI code ([15]) ----------------------------------------------------
def gen_code_L5SI(prn):
    if prn < 184 or prn > 189:
return NONE
return gen_code_L5I(prn)
# generate L5SQ code ([15]) ----------------------------------------------------
def gen_code_L5SQ(prn):
    if prn < 184 or prn > 189:
return NONE
return gen_code_L5Q(prn)
# generate L5 XA code ----------------------------------------------------------
def gen_code_L5_XA(N):
code = LFSR(8190, 0b1111111111111, 0b0000000011011, 13)
return np.hstack([code, code[:N-8190]])
# generate L5 XB code ----------------------------------------------------------
def gen_code_L5_XB(N):
return LFSR(N, 0b1111111111111, 0b1011011100011, 13)
# generate L5I secondary code ([2]) --------------------------------------------
def sec_code_L5I(prn):
return np.array(NH10, dtype='int8')
# generate L5Q secondary code ([2]) --------------------------------------------
def sec_code_L5Q(prn):
return np.array(NH20, dtype='int8')
# generate L5SI secondary code ([6]) -------------------------------------------
def sec_code_L5SI(prn):
    if prn < 184 or prn > 189:
return NONE
return sec_code_L5I(prn)
# generate L5SQ secondary code ([6]) -------------------------------------------
def sec_code_L5SQ(prn):
    if prn < 184 or prn > 189:
return NONE
return sec_code_L5Q(prn)
# generate L6D code ([4]) ------------------------------------------------------
def gen_code_L6D(prn):
if prn < 193 or prn > 201:
return NONE
N = 10230
if prn not in L6D:
code = gen_code_L6(N, L6D_R_init[prn-193])
L6D[prn] = mod_code(code, [1, 0]) # TDM
return L6D[prn]
# generate L6E code ([4]) ------------------------------------------------------
def gen_code_L6E(prn):
if prn < 203 or prn > 211:
return NONE
N = 10230
if prn not in L6E:
code = gen_code_L6(N, L6E_R_init[prn-203])
L6E[prn] = mod_code(code, [0, 1]) # TDM
return L6E[prn]
# generate L6 code -------------------------------------------------------------
def gen_code_L6(N, R):
R = rev_reg(R, 20)
code1 = LFSR(N, 0b1111111111, 0b0011110011, 10)
code2 = LFSR(N, R, 0b00000000000001010011, 20)
return -code1 * code2
# generate GLONASS C/A code ----------------------------------------------------
def gen_code_GLO_CA(N):
R = 0b111111111
code = np.zeros(N, dtype='int8')
for i in range(N):
code[i] = CHIP[(R >> 2) & 1]
R = (sdr_func.xor_bits(R & 0b000010001) << 8) | (R >> 1)
return code
# generate G1CA code ([14]) ----------------------------------------------------
def gen_code_G1CA(prn):
if prn < -7 or prn > 6: # FCN
return NONE
N = 511
if 1 not in G1CA:
G1CA[1] = gen_code_GLO_CA(N)
return G1CA[1]
# generate G2CA code ([14]) ----------------------------------------------------
def gen_code_G2CA(prn):
return gen_code_G1CA(prn)
# generate G3OCD code ([17]) ---------------------------------------------------
def gen_code_G3OCD(prn):
if prn < 0 or prn > 63:
return NONE
N = 10230
if prn not in G3OCD:
DC1 = gen_code_G3OC_DC1(N)
DC2 = LFSR(N, prn, 0b0000011, 7)
G3OCD[prn] = -DC1 * DC2
return G3OCD[prn]
# generate G3OCP code ([17]) ---------------------------------------------------
def gen_code_G3OCP(prn):
if prn < 0 or prn > 63:
return NONE
N = 10230
if prn not in G3OCP:
DC1 = gen_code_G3OC_DC1(N)
DC3 = LFSR(N, prn + 64, 0b0000011, 7)
G3OCP[prn] = -DC1 * DC3
return G3OCP[prn]
# generate G3OC DC1 code ([17]) ------------------------------------------------
def gen_code_G3OC_DC1(N):
global G3OC_D1
if len(G3OC_D1) == 0:
G3OC_D1 = LFSR(N, 0b00110100111000, 0b00010001000011, 14)
return G3OC_D1
# generate G1CA secondary code -------------------------------------------------
def sec_code_G1CA(prn):
if prn < -7 or prn > 6: # FCN
return NONE
    return np.array([1, -1] * 5, dtype='int8')
"""
Logistic regression for binary classification.
https://www.johnwittenauer.net/machine-learning-exercises-in-python-part-3/
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.optimize as opt
from machine_learning.utils import sigmoid_activation, log_loss, single_gradient_step
regularized = True # regularized = with polynomial features
if not regularized:
path = '../../../../datasets/per_field/sl/clss/logreg_simple.txt'
names = ['Exam 1', 'Exam 2', 'Admitted']
else:
path = '../../../../datasets/per_field/sl/clss/logreg_simple_regularized.txt'
names = ['Test 1', 'Test 2', 'Accepted']
df = pd.read_csv(path, header=None, names=names)
# Let's start by examining the data (exploratory analysis stage):
print(df.head(), '\n')
print(df.describe(), '\n')
positive = df[df[names[2]].isin([1])]
negative = df[df[names[2]].isin([0])]
fig, ax = plt.subplots(figsize=(12, 8))
ax.scatter(positive[names[0]], positive[names[1]], s=50, c='b', marker='o', label=names[2])
ax.scatter(negative[names[0]], negative[names[1]], s=50, c='r', marker='x', label='Not ' + names[2]) # Rejected
ax.legend()
# ax.legend(loc='best', shadow=False, scatterpoints=1)
ax.set_xlabel(names[0] + ' Score')
ax.set_ylabel(names[1] + ' Score')
plt.show()
# We can test the cost function to make sure it’s working, but first we need to do some setup.
if not regularized:
# add a ones column - this makes the matrix multiplication work out easier
df.insert(0, 'Ones', 1)
else: # exercise_type == TWO_CLASSES_REGULARIZED
    # There is no linear decision boundary that will perform well on this data.
# One way to deal with this using a linear technique like logistic regression is to construct features that are
# derived from polynomials of the original features.
# We can try creating a bunch of polynomial features to feed into the classifier.
degree = 5
x1 = df[names[0]]
x2 = df[names[1]]
df.insert(3, 'Ones', 1)
for i in range(1, degree):
for j in range(0, i):
df['F' + str(i) + str(j)] = np.power(x1, i - j) * np.power(x2, j)
df.drop(names[0], axis=1, inplace=True)
df.drop(names[1], axis=1, inplace=True)
print(df.head(), '\n')
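# The loop above adds derived columns F10 = x1, F20 = x1^2, F21 = x1*x2,
# F30 = x1^3, ... up to degree-4 terms (column names follow the
# 'F' + str(i) + str(j) pattern), so a classifier that is linear in its
# parameters can still learn a non-linear decision boundary.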
# set X (training data) and y (target variable)
cols = df.shape[1]
if not regularized:
X = df.iloc[:, 0:cols - 1]
y = df.iloc[:, cols - 1:cols]
else: # remember from above that we moved the label to column 0
X = df.iloc[:, 1:cols]
y = df.iloc[:, 0:1]
# convert to numpy arrays and initialize the parameter array theta (model parameters)
X = np.array(X.values)
from collections import OrderedDict
import numpy as np
from robosuite_extra.env_base.sawyer import SawyerEnv
from robosuite.models.arenas import TableArena
from robosuite.models.objects import BoxObject, CylinderObject
from robosuite_extra.models.generated_objects import FullyFrictionalBoxObject
from robosuite_extra.models.tasks import UniformSelectiveSampler
from robosuite.utils.mjcf_utils import array_to_string
from robosuite_extra.push_env.push_task import PushTask
from robosuite_extra.utils import transform_utils as T
from robosuite_extra.controllers import SawyerEEFVelocityController
import copy
from collections import deque
class SawyerPush(SawyerEnv):
"""
This class corresponds to a Pushing task for the sawyer robot arm.
This task consists of pushing a rectangular puck from some initial position to a final goal.
    The goal and initial positions are chosen randomly within some starting bounds.
"""
def __init__(
self,
gripper_type="PushingGripper",
parameters_to_randomise=None,
randomise_initial_conditions=True,
table_full_size=(0.8, 1.6, 0.719),
table_friction=(1e-4, 5e-3, 1e-4),
use_camera_obs=False,
use_object_obs=True,
reward_shaping=True,
placement_initializer=None,
gripper_visualization=True,
use_indicator_object=False,
has_renderer=False,
has_offscreen_renderer=True,
render_collision_mesh=False,
render_visual_mesh=True,
control_freq=10,
horizon=80,
ignore_done=False,
camera_name="frontview",
camera_height=256,
camera_width=256,
camera_depth=False,
pid=True,
):
"""
Args:
gripper_type (str): type of gripper, used to instantiate
gripper models from gripper factory.
parameters_to_randomise [string,] : List of keys for parameters to randomise, None means all the available parameters are randomised
randomise_initial_conditions [bool,]: Whether or not to randomise the starting configuration of the task.
table_full_size (3-tuple): x, y, and z dimensions of the table.
table_friction (3-tuple): the three mujoco friction parameters for
the table.
use_camera_obs (bool): if True, every observation includes a
rendered image.
use_object_obs (bool): if True, include object (cube) information in
the observation.
reward_shaping (bool): if True, use dense rewards.
placement_initializer (ObjectPositionSampler instance): if provided, will
be used to place objects on every reset, else a UniformRandomSampler
is used by default.
gripper_visualization (bool): True if using gripper visualization.
Useful for teleoperation.
use_indicator_object (bool): if True, sets up an indicator object that
is useful for debugging.
has_renderer (bool): If true, render the simulation state in
a viewer instead of headless mode.
has_offscreen_renderer (bool): True if using off-screen rendering.
render_collision_mesh (bool): True if rendering collision meshes
in camera. False otherwise.
render_visual_mesh (bool): True if rendering visual meshes
in camera. False otherwise.
control_freq (float): how many control signals to receive
in every second. This sets the amount of simulation time
that passes between every action input.
horizon (int): Every episode lasts for exactly @horizon timesteps.
ignore_done (bool): True if never terminating the environment (ignore @horizon).
camera_name (str): name of camera to be rendered. Must be
set if @use_camera_obs is True.
camera_height (int): height of camera frame.
camera_width (int): width of camera frame.
camera_depth (bool): True if rendering RGB-D, and RGB otherwise.
pid (bool) : True if using a velocity PID controller for controlling the arm, false if using a
mujoco-implemented proportional controller.
"""
self.initialised = False
# settings for table
self.table_full_size = table_full_size
self.table_friction = table_friction
# whether to use ground-truth object states
self.use_object_obs = use_object_obs
# reward configuration
self.reward_shaping = reward_shaping
if (self.reward_shaping):
self.reward_range = [-np.inf, horizon * (0.1)]
else:
self.reward_range = [0, 1]
# Domain Randomisation Parameters
self.parameters_to_randomise = parameters_to_randomise
self.randomise_initial_conditions = randomise_initial_conditions
self.dynamics_parameters = OrderedDict()
self.default_dynamics_parameters = OrderedDict()
self.parameter_sampling_ranges = OrderedDict()
self.factors_for_param_randomisation = OrderedDict()
# object placement initializer
if placement_initializer:
self.placement_initializer = placement_initializer
else:
self.placement_initializer = UniformSelectiveSampler(
x_range=None,
y_range=None,
ensure_object_boundary_in_range=True,
z_rotation=None,
np_random=None
)
# Param for storing a specific goal and object starting positions
self.specific_goal_position = None
self.specific_gripper_position = None
self.gripper_pos_neutral = [0.44969246, 0.16029991, 1.00875409]
super().__init__(
gripper_type=gripper_type,
gripper_visualization=gripper_visualization,
use_indicator_object=use_indicator_object,
has_renderer=has_renderer,
has_offscreen_renderer=has_offscreen_renderer,
render_collision_mesh=render_collision_mesh,
render_visual_mesh=render_visual_mesh,
control_freq=control_freq,
horizon=horizon,
ignore_done=ignore_done,
use_camera_obs=use_camera_obs,
camera_name=camera_name,
camera_height=camera_height,
camera_width=camera_width,
camera_depth=camera_depth,
pid=pid,
)
self._set_default_dynamics_parameters(pid)
self._set_default_parameter_sampling_ranges()
self._set_dynamics_parameters(self.default_dynamics_parameters)
self._set_factors_for_param_randomisation(self.default_dynamics_parameters)
# Check that the parameters to randomise are within the allowed parameters
if (self.parameters_to_randomise is not None):
self._check_allowed_parameters(self.parameters_to_randomise)
# IK solver for placing the arm at desired locations during reset
self.IK_solver = SawyerEEFVelocityController()
self.placement_initializer.set_random_number_generator(self.np_random)
self.init_control_timestep = self.control_timestep
self.init_qpos = self.mujoco_robot.init_qpos
# Storing parameters for temporary switching
self.cached_parameters_to_randomise = None
self.cached_dynamics_parameters = None
self.initialised = True
self.reset()
def _set_dynamics_parameters(self, parameters):
self.dynamics_parameters = copy.deepcopy(parameters)
def _default_damping_params(self):
# return np.array([0.01566, 1.171, 0.4906, 0.1573, 1.293, 0.08688, 0.1942]) # -real world calibration
# return np.array([0.8824,2.3357,1.1729, 0.0 , 0.5894, 0.0 ,0.0082]) #- command calibration
return np.array([8.19520686e-01, 1.25425414e+00, 1.04222253e+00,
0.00000000e+00, 1.43146116e+00, 1.26807887e-01, 1.53680244e-01, ]) # - command calibration 2
def _default_armature_params(self):
return np.array([0.00000000e+00, 0.00000000e+00, 2.70022664e-02, 5.35581203e-02,
3.31204140e-01, 2.59623415e-01, 2.81964631e-01, ])
def _default_joint_friction_params(self):
return np.array([4.14390483e-03,
9.30938506e-02, 2.68656509e-02, 0.00000000e+00, 0.00000000e+00,
4.24867204e-04, 8.62040317e-04])
def _set_default_dynamics_parameters(self, use_pid):
"""
        Setting the default environment parameters.
"""
self.default_dynamics_parameters['joint_forces'] = np.zeros((7,))
self.default_dynamics_parameters['acceleration_forces'] = np.zeros((7,))
self.default_dynamics_parameters['eef_forces'] = np.zeros((6,))
self.default_dynamics_parameters['obj_forces'] = np.zeros((6,))
self.default_dynamics_parameters['eef_timedelay'] = np.asarray(0)
self.default_dynamics_parameters['obj_timedelay'] = np.asarray(0)
self.default_dynamics_parameters['timestep_parameter'] = np.asarray(0.0)
self.default_dynamics_parameters['pid_iteration_time'] = np.asarray(0.)
self.default_dynamics_parameters['mujoco_timestep'] = np.asarray(0.002)
self.default_dynamics_parameters['action_additive_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['action_multiplicative_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['action_systematic_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['eef_obs_position_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['eef_obs_velocity_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['obj_obs_position_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['obj_obs_velocity_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['obj_angle_noise'] = np.asarray(0.0)
self.default_dynamics_parameters['obj_density'] = np.asarray(400)
self.default_dynamics_parameters['obj_size'] = np.array([0.0555 / 2, 0.0555 / 2, 0.03 / 2])
self.default_dynamics_parameters['obj_sliding_friction'] = np.asarray(0.4)
self.default_dynamics_parameters['obj_torsional_friction'] = np.asarray(0.01)
link_masses = np.zeros((7,))
for link_name, idx, body_node, mass_node, joint_node in self._robot_link_nodes_generator():
if (mass_node is not None):
dynamics_parameter_value = float(mass_node.get("mass"))
link_masses[idx] = dynamics_parameter_value
self.default_dynamics_parameters['link_masses'] = link_masses
self.default_dynamics_parameters['joint_dampings'] = self._default_damping_params()
self.default_dynamics_parameters['armatures'] = self._default_armature_params()
self.default_dynamics_parameters['joint_frictions'] = self._default_joint_friction_params()
if (use_pid):
gains = self.mujoco_robot.velocity_pid_gains
kps = np.array([gains['right_j{}'.format(actuator)]['p'] for actuator in range(7)])
kis = np.array([gains['right_j{}'.format(actuator)]['i'] for actuator in range(7)])
kds = np.array([gains['right_j{}'.format(actuator)]['d'] for actuator in range(7)])
#
self.default_dynamics_parameters['kps'] = kps
self.default_dynamics_parameters['kis'] = kis
self.default_dynamics_parameters['kds'] = kds
else:
kvs = np.zeros((7,))
for target_joint, jnt_idx, node in self._velocity_actuator_nodes_generator():
gains_value = float(node.get("kv"))
kvs[jnt_idx] = gains_value
self.default_dynamics_parameters['kvs'] = kvs
def _set_default_parameter_sampling_ranges(self):
"""
        Sets the default parameter ranges to draw samples from in the domain randomisation.
"""
parameter_ranges = {
'joint_forces': np.array([[0.,0.,0.,0.,0.,0.,0.], [1.5,1.5,1.5,1.5,1.5,1.5,1.5]]),#
'acceleration_forces': np.array([[0.,0.,0.,0.,0.,0.,0.], [0.05,0.05,0.05,0.05,0.05,0.05,0.05]]),#
'eef_forces': np.array([[0.,0.,0.,0.,0.,0.], [0.06 ,0.06,0.06,0.01,0.01,0.01,]]), #
'obj_forces': np.array([[0., 0., 0., 0., 0., 0., ], [0.0011, 0.0011, 0.0011, 0.0005, 0.0005, 0.0005, ]]),
'eef_timedelay': np.array([0, 1]),
'obj_timedelay': np.array([0,2]),
'timestep_parameter': np.array([0.0, 0.01]),
'pid_iteration_time': np.array([0., 0.04]),
'mujoco_timestep': np.array([0.001,0.002]),
'action_additive_noise': np.array([0.01, 0.1]),
'action_multiplicative_noise': np.array([0.005,0.02]),
'action_systematic_noise': np.array([-0.05, 0.05]),
'eef_obs_position_noise': np.array([0.0005, 0.001]),
'eef_obs_velocity_noise': np.array([0.0005, 0.001]),
'obj_obs_position_noise': np.array([0.0005, 0.001]),
'obj_obs_velocity_noise': np.array([0.0005, 0.0015]),
'obj_angle_noise': np.array([0.005, 0.05]),
'obj_density': np.array([100, 800]),
'obj_size': np.array([0.995, 1.005]),
'obj_sliding_friction': np.array([0.01, 0.8]),
'obj_torsional_friction': np.array([0.001, 0.3]),
'link_masses': np.array([0.98, 1.02]),
'joint_dampings': np.array([0.5, 2.]),
'armatures': np.array([0.66, 1.5]),
'joint_frictions': np.array([0.66, 1.5]),
}
if (self.pid):
parameter_ranges['kps'] = np.array([0.66, 1.5])
parameter_ranges['kis'] = np.array([0.66, 1.5])
parameter_ranges['kds'] = np.array([0.66, 1.5])
else:
parameter_ranges['kvs'] = [0.5, 2]
self.parameter_sampling_ranges = parameter_ranges
def _set_factors_for_param_randomisation(self, parameters):
factors = copy.deepcopy(parameters)
factors['joint_forces'] = np.ones((7,))
factors['acceleration_forces'] = np.ones((7,))
factors['eef_forces'] = np.ones((1,))
factors['obj_forces'] = np.ones((6,))
factors['eef_timedelay'] = 1.0
factors['timestep_parameter'] = 1.0
factors['pid_iteration_time'] = 1.0
factors['mujoco_timestep'] = 1.0
factors['obj_timedelay'] = 1.0
factors['action_additive_noise'] = 1.0
factors['action_multiplicative_noise'] = 1.0
factors['action_systematic_noise'] = 1.0
factors['eef_obs_position_noise'] = 1.0
factors['eef_obs_velocity_noise'] = 1.0
factors['obj_obs_position_noise'] = 1.0
factors['obj_obs_velocity_noise'] = 1.0
factors['obj_angle_noise'] = 1.0
factors['obj_density'] = 1.0
factors['obj_sliding_friction'] = 1.0
factors['obj_torsional_friction'] = 1.0
self.factors_for_param_randomisation = factors
def _velocity_actuator_nodes_generator(self):
"""
Caching the xml nodes for the velocity actuators for use when setting the parameters
"""
for node in self.model.root.findall(".//velocity[@kv]"):
target_joint = node.get("joint")
jnt_idx = int(target_joint[-1])
yield target_joint, jnt_idx, node
def _robot_link_nodes_generator(self):
"""
        Caching the xml nodes for the robot links (body, inertial and joint nodes) for use when setting the parameters
"""
for link_idx, link_name in enumerate(self.mujoco_robot.links):
body_node = self.mujoco_robot.root.find(".//body[@name='{}']".format(link_name))
mass_node = body_node.find("./inertial[@mass]")
joint_node = body_node.find("./joint")
yield link_name, link_idx, body_node, mass_node, joint_node
def _check_allowed_parameters(self, parameters):
allowed_parameters = self.get_parameter_keys()
for param in parameters:
assert param in allowed_parameters, '{} not allowed. Only allowed parameters are {}'.format(param,
allowed_parameters)
def _select_appropriate_distribution(self, key):
'''
Which distribution to use to sample the different dynamics parameters.
:param key: The parameter to consider.
'''
if (
key == 'joint_forces'
or key == 'acceleration_forces'
or key == 'eef_forces'
or key == 'obj_forces'
or key == 'timestep_parameter'
or key == 'pid_iteration_time'
or key == 'mujoco_timestep'
or key == 'action_additive_noise'
or key == 'action_multiplicative_noise'
or key == 'action_systematic_noise'
or key == 'eef_obs_position_noise'
or key == 'eef_obs_velocity_noise'
or key == 'obj_obs_position_noise'
or key == 'obj_obs_velocity_noise'
or key == 'obj_angle_noise'
or key == 'link_masses'
or key == 'obj_size'
or key == 'obj_density'
or key == 'obj_sliding_friction'
):
return self.np_random.uniform
elif (
key == 'eef_timedelay'
or key == 'obj_timedelay'
):
return self._ranged_random_choice
else:
return self._loguniform
def _loguniform(self, low=1e-10, high=1., size=None):
return np.asarray(np.exp(self.np_random.uniform(np.log(low), np.log(high), size)))
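    # Note (a brief explanatory sketch): multiplicative factors such as the joint
    # damping scalings are drawn log-uniformly so that e.g. a 0.5x and a 2x scaling
    # are equally likely, which a plain uniform draw over [0.5, 2.0] would not give;
    # the helper simply exponentiates a uniform draw in log-space, i.e. roughly
    #
    #   np.exp(rng.uniform(np.log(low), np.log(high)))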
def _ranged_random_choice(self,low, high, size=1):
vals = np.arange(low,high+1)
return self.np_random.choice(vals, size)
def _parameter_for_randomisation_generator(self, parameters=None):
'''
Generates (key,value) pairs of sampled dynamics parameters.
:param parameters: The parameters to be sampled for randomisation, if None, all the allowed parameters are sampled.
'''
parameter_ranges = self.parameter_sampling_ranges
if (parameters is None):
parameters = self.get_parameter_keys()
for key in parameters:
parameter_range = parameter_ranges[key]
if (parameter_range.shape[0] == 1):
yield key, np.asarray(parameter_range[0])
elif (parameter_range.shape[0] == 2):
distribution = self._select_appropriate_distribution(key)
size = self.default_dynamics_parameters[key].shape
yield key, np.asarray(
self.factors_for_param_randomisation[key] * distribution(*parameter_ranges[key], size=size))
else:
                raise RuntimeError('Parameter randomisation range needs to be of shape {1,2}xN')
def _load_model(self):
"""
Loads an xml model, puts it in self.model. This sets up the mujoco xml for the scene.
"""
super()._load_model()
self.mujoco_robot.set_base_xpos([0, 0, 0])
obj_size = np.array([0.0555 / 2, 0.0555 / 2, 0.03 / 2])
### Domain Randomisation ###
if (self.initialised):
for key, val in self._parameter_for_randomisation_generator(parameters=self.parameters_to_randomise):
self.dynamics_parameters[key] = val
## Queues for adding time delays
self.eef_pos_queue = deque(maxlen=int(self.dynamics_parameters['eef_timedelay'] + 1))
self.eef_vel_queue = deque(maxlen=int(self.dynamics_parameters['eef_timedelay'] + 1))
self.obj_pos_queue = deque(maxlen=int(self.dynamics_parameters['obj_timedelay'] + 1))
self.obj_vel_queue = deque(maxlen=int(self.dynamics_parameters['obj_timedelay'] + 1))
self.obj_angle_queue = deque(maxlen=int(self.dynamics_parameters['obj_timedelay'] + 1))
if (self.pid is not None):
self.pid.sample_time = self.dynamics_parameters['pid_iteration_time']
obj_size = self.dynamics_parameters['obj_size']
### Create the Task ###
## Load the Arena ##
self.mujoco_arena = TableArena(
table_full_size=self.table_full_size, table_friction=self.table_friction
)
if self.use_indicator_object:
self.mujoco_arena.add_pos_indicator()
# The sawyer robot has a pedestal, we want to align it with the table
self.mujoco_arena.set_origin([0.16 + self.table_full_size[0] / 2, 0, 0])
## Create the objects that will go into the arena ##
# Create object and goal
if(self.initialised):
density = self.dynamics_parameters['obj_density']
friction = np.array([self.dynamics_parameters['obj_sliding_friction'],
self.dynamics_parameters['obj_torsional_friction'],
self.table_friction[2]])
else:
density = None
friction = None
rectangle = FullyFrictionalBoxObject(
size_min= obj_size, #
size_max= obj_size, #
rgba=[1, 0, 0, 1],
density=density,
friction=friction
)
self.mujoco_objects = OrderedDict([("rectangle", rectangle)])
goal = CylinderObject(
size=[0.03, 0.001],
rgba=[0, 1, 0, 1],
)
self.mujoco_goal = goal
## Put everything together into the task ##
self.model = PushTask(
self.mujoco_arena,
self.mujoco_robot,
self.mujoco_goal,
self.mujoco_objects,
initializer=self.placement_initializer,
)
# Add some small damping to the objects to prevent infinite acceleration
for obj in self.model.xml_objects:
obj.find('./joint').set('damping', '0.005')
## Manipulate objects in task ##
# Reduce penetration of objects
for obj in self.model.xml_objects:
obj.find('geom').set('solimp', "0.99 0.99 0.01")
obj.find('geom').set('solref', "0.01 1")
self.model.arena.table_collision.set('solimp', "0.99 0.99 0.01")
self.model.arena.table_collision.set('solref', "0.01 1")
# Place goal: it can be placed anywhere in a 16x30 cm box centered 15 cm away
# from the center of the table along its length
if (self.specific_goal_position is not None):
g_pos = np.array([self.gripper_pos_neutral[0] + self.specific_goal_position[0],
self.gripper_pos_neutral[1] + self.specific_goal_position[1],
self.model.table_top_offset[2]])
elif (self.randomise_initial_conditions):
noise = self.np_random.uniform(-1, 1, 3) * np.array([0.15, 0.08, 0.0])
offset = np.array([0.0, 0.15, 0.0])
g_pos = noise + offset + self.model.table_top_offset
else:
g_pos = [0.44969246, 0.16029991 + 0.335, self.model.table_top_offset[2]] # Placing the object at 30 cm, the eef needs to be at 33.5 cm
self.model.xml_goal.set("pos", array_to_string(g_pos))
### Set the xml parameters to the values given by the dynamics_parameters attribute ###
if (self.initialised):
self._apply_xml_dynamics_parameters()
def _apply_xml_dynamics_parameters(self):
"""
Apply the values contained in dynamics_parameters to the xml elements of the model. If a PID controller
is used, this also applies the PID gains contained in the dynamics parameters.
"""
opt_node = self.model.root.find('option')
opt_node.set("timestep", str(self.dynamics_parameters['mujoco_timestep']))
for link_name, idx, body_node, mass_node, joint_node in self._robot_link_nodes_generator():
if (mass_node is not None):
mass_node.set("mass", str(self.dynamics_parameters['link_masses'][idx]))
if (joint_node is not None):
joint_node.set("damping", str(self.dynamics_parameters['joint_dampings'][idx]))
joint_node.set("armature", str(self.dynamics_parameters['armatures'][idx]))
joint_node.set("frictionloss", str(self.dynamics_parameters['joint_frictions'][idx]))
if (self.pid):
self.pid.tunings = (self.dynamics_parameters['kps'],
self.dynamics_parameters['kis'],
self.dynamics_parameters['kds'],
)
else:
for target_joint, jnt_idx, node in self._velocity_actuator_nodes_generator():
node.set("kv", str(self.dynamics_parameters['kvs'][jnt_idx]))
def set_parameter_sampling_ranges(self, sampling_ranges):
'''
Set a new sampling range for the dynamics parameters.
:param sampling_ranges: (Dict) Dictionary of the sampling ranges for the different parameters of the form
(param_name, range) where param_name is a valid param name string and range is a numpy array of dimensionality
{1,2}xN where N is the dimension of the given parameter
'''
for candidate_name, candidate_value in sampling_ranges.items():
assert candidate_name in self.parameter_sampling_ranges, 'Valid parameters are {}'.format(self.parameter_sampling_ranges.keys())
assert candidate_value.shape[0] == 1 or candidate_value.shape[0]==2, 'First dimension of the sampling parameter needs to have value 1 or 2'
assert len(candidate_value.shape) == len(self.parameter_sampling_ranges[candidate_name].shape), '{} has the wrong number of dimensions'.format(candidate_name)
if(len(self.parameter_sampling_ranges[candidate_name].shape) >1):
assert self.parameter_sampling_ranges[candidate_name].shape[1] == candidate_value.shape[1], '{} has the wrong shape'.format(candidate_name)
self.parameter_sampling_ranges[candidate_name] = candidate_value
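# Illustrative sketch of the expected argument (the numbers are placeholders, not values
# used anywhere in this environment): a first dimension of 2 gives a (low, high) sampling
# range, while a first dimension of 1 pins the parameter to a fixed value.
#
#     env.set_parameter_sampling_ranges({'obj_density': np.array([[200.0], [2000.0]])})
#
# The trailing dimensions must match the shape of the range already stored for that
# parameter, as enforced by the assertions above.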
def get_parameter_sampling_ranges(self):
return copy.deepcopy(self.parameter_sampling_ranges)
def get_parameter_keys(self):
return self.default_dynamics_parameters.keys()
def get_total_parameter_dimension(self):
total_dimension = 0
for key, val in self.default_dynamics_parameters.items():
param_shape = val.shape
if(param_shape ==()):
total_dimension += 1
else:
total_dimension += param_shape[0]
return total_dimension
def get_internal_state(self):
return np.concatenate([self._joint_positions, self._joint_velocities]).tolist()
def get_internal_state_dimension(self):
internal_state = self.get_internal_state()
return len(internal_state)
def change_parameters_to_randomise(self, parameters):
self._check_allowed_parameters(parameters)
self._set_dynamics_parameters(self.default_dynamics_parameters)
self.parameters_to_randomise = parameters
def get_randomised_parameters(self):
if (self.parameters_to_randomise is not None):
return self.parameters_to_randomise
else:
return self.get_parameter_keys()
def get_randomised_parameter_dimensions(self):
""" Return the number of dimensions of the ranomised parameters"""
randomised_parameter_names = self.get_randomised_parameters()
total_dimension = 0
for param in randomised_parameter_names:
param_shape = self.default_dynamics_parameters[param].shape
if(param_shape ==()):
total_dimension += 1
else:
total_dimension += param_shape[0]
return total_dimension
def get_dynamics_parameters(self):
"""
Returns the values of the current dynamics parameters.
"""
return copy.deepcopy(self.dynamics_parameters)
def get_default_dynamics_parameters(self):
"""
Returns the default values of the dynamics parameters.
"""
return copy.deepcopy(self.default_dynamics_parameters)
def get_factors_for_randomisation(self):
"""
Returns the factor used for domain randomisation.
"""
return copy.deepcopy(self.factors_for_param_randomisation)
def set_dynamics_parameters(self, dynamics_parameter_dict):
"""
Set the dynamics parameters of the environment to specific values. These will be used the next
time the environment is reset, and will be overridden if domain randomisation is on.
:param dynamics_parameter_dict: Dictionary with the values of the parameters to set.
"""
for key, value in dynamics_parameter_dict.items():
assert key in self.dynamics_parameters, 'Setting a parameter that does not exist'
self.dynamics_parameters[key] = value
def randomisation_off(self,):
'''
Temporarily disable the parameter randomisation, caching the current set of parameters and
which parameters are being randomised. This can be useful for evaluation.
'''
current_params_to_randomise = self.get_randomised_parameters()
current_params = self.get_dynamics_parameters()
self.cached_parameters_to_randomise = current_params_to_randomise
self.cached_dynamics_parameters = current_params
self.parameters_to_randomise = []
return current_params, current_params_to_randomise
def randomisation_on(self):
'''
Restore the randomisation settings as they were before the preceding call to randomisation_off().
'''
if(self.cached_dynamics_parameters is None):
print("Randomisation was not switched off before switching it back on.")
return
self.parameters_to_randomise = self.cached_parameters_to_randomise
self.set_dynamics_parameters(self.cached_dynamics_parameters)
self.cached_parameters_to_randomise = None
self.cached_dynamics_parameters = None
def sample_parameter_randomisation(self, parameters=None):
''' Samples a dictionary of dynamics parameter values using the randomisation process currently set in the environment
parameters ([string,]) : List of parameters to sample a randomisation from. If None, all the allowed parameters are sampled.
'''
if (not self.initialised):
print('Function has undefined behaviour if the environment is not fully initialised, returning with no effect')
return
parameters_sample = {}
for key, val in self._parameter_for_randomisation_generator(parameters):
assert key in self.get_parameter_keys(), '{} not allowed. Choose from {}'.format(key,
self.get_parameter_keys())
parameters_sample[key] = val
return parameters_sample
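# Illustrative usage sketch (assuming `env` is a fully initialised instance of this class):
#
#     sample = env.sample_parameter_randomisation(parameters=['obj_density'])
#     # -> {'obj_density': array(...)} drawn from the current sampling ranges
#
# Passing parameters=None samples every parameter returned by get_parameter_keys().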
def _set_goal_neutral_offset(self, goal_x, goal_y):
self.specific_goal_position = np.array([goal_x, goal_y])
def _set_gripper_neutral_offset(self, gripper_x, gripper_y):
self.specific_gripper_position = np.array([gripper_x, gripper_y])
def _get_reference(self):
"""
Sets up references to important components. A reference is typically an
index or a list of indices that point to the corresponding elements
in a flattened array, which is how MuJoCo stores physical simulation data.
"""
super()._get_reference()
# Pushing object ids
self.push_obj_name = self.model.object_names[self.model.push_object_idx]
self.object_body_id = self.sim.model.body_name2id(self.push_obj_name)
self.object_geom_id = self.sim.model.geom_name2id(self.push_obj_name)
# Pushing object qpos indices for the object
object_qpos = self.sim.model.get_joint_qpos_addr(self.push_obj_name)
self._ref_object_pos_low, self._ref_object_pos_high = object_qpos
# goal ids
self.goal_body_id = self.sim.model.body_name2id("goal")
self.goal_site_id = self.sim.model.site_name2id("goal")
# Gripper ids
self.l_finger_geom_ids = [
self.sim.model.geom_name2id(x) for x in self.gripper.left_finger_geoms
]
self.r_finger_geom_ids = [
self.sim.model.geom_name2id(x) for x in self.gripper.right_finger_geoms
]
def _reset_internal(self):
"""
Resets simulation internal configurations. Is called upon environment reset.
"""
super()._reset_internal()
self.sim.forward()
if (self.initialised):
### Set the arm position using IK ###
## Get the pose of the gripper in the initial position ##
# Find the gripper length
gripper_site = self.sim.data.site_xpos[self.eef_site_id]
right_hand_pos = self.sim.data.get_body_xpos('right_hand')
gripper_length = (right_hand_pos - gripper_site)[2]
if(self.specific_gripper_position is not None):
init_pos = np.array([self.gripper_pos_neutral[0] + self.specific_gripper_position[0],
self.gripper_pos_neutral[1] + self.specific_gripper_position[1],
self.model.table_top_offset.copy()[2] + 0.007+ gripper_length])
init_pose = T.make_pose(init_pos, np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]]))
elif (self.randomise_initial_conditions):
# Get the initial position of interest :
# A box of size 12x12cm, 15 cm away from the center of the table in the y axis
noise = self.np_random.uniform(-1, 1, 3) * np.array([0.12, 0.12, 0.0])
offset = np.array([0.0, -0.15, 0.007])
init_pos = self.model.table_top_offset.copy() + noise + offset
init_pos[2] = init_pos[2] + gripper_length
init_pose = T.make_pose(init_pos, np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]])) #
else:
gripper_pos = self.sim.data.get_site_xpos('grip_site')
init_pos = np.concatenate([gripper_pos[:2], [self.model.table_top_offset.copy()[2] + 0.007]])
init_pos[2] = init_pos[2] + gripper_length
init_pose = T.make_pose(init_pos, np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, -1.0]]))
## Do the IK to find the joint angles for this initial pose ##
# Start the IK search from the rest qpos
ref_q = self.mujoco_robot.init_qpos
# Express init_pose in the base frame of the robot
init_pose_in_base = self.pose_in_base(init_pose)
# Do the IK
joint_angles = self.IK_solver.compute_joint_angles_for_endpoint_pose(init_pose_in_base, ref_q)
# Set the robot joint angles
self.set_robot_joint_positions(joint_angles)
# Set reference attributes
self.init_qpos = joint_angles
self.init_right_hand_quat = self._right_hand_quat
self.init_right_hand_orn = self._right_hand_orn
self.init_right_hand_pos = self._right_hand_pos
eef_rot_in_world = self.sim.data.get_body_xmat("right_hand").reshape((3, 3))
self.world_rot_in_eef = copy.deepcopy(eef_rot_in_world.T)
### Set the object position next to the arm ###
# Find End effector position
eef_pos = np.array(self.sim.data.site_xpos[self.eef_site_id])
# Get the mujoco pushing object
obj = self.model.mujoco_objects[self.model.push_object_idx]
# Find the position just next to the eef
obj_radius = obj.get_horizontal_radius()
obj_bottom_offset = obj.get_bottom_offset()
if (self.randomise_initial_conditions):
obj_pos = np.array([eef_pos[0], eef_pos[1] + obj_radius + 0.00701,
self.model.table_top_offset[2] - obj_bottom_offset[2]])
obj_pos += self.np_random.uniform(size=3) * np.array([0.0012, 0.001, 0.0])
# Get the object orientation
obj_angle = np.pi / 2. + self.np_random.uniform(-1, 1) * np.pi / 6.
obj_quat = np.array([np.cos(obj_angle / 2.), 0., 0., np.sin(obj_angle / 2.)])
else:
obj_pos = np.array([eef_pos[0], eef_pos[1] + obj.size[0] + 0.0071 + 0.0002, # 0.0071 is the gripper half length
self.model.table_top_offset[2] - obj_bottom_offset[2]])
obj_angle = np.pi/2.
obj_quat = np.array([np.cos(obj_angle/2.), 0., 0., np.sin(obj_angle/2.)])
# Concatenate to get the object qpos
obj_qpos = np.concatenate([obj_pos, obj_quat])
self.sim.data.qpos[self._ref_object_pos_low:self._ref_object_pos_high] = obj_qpos
self.sim.forward()
def reward(self, action=None):
"""
Reward function for the task.
The dense reward has three components.
Reaching: in [-inf, 0], to encourage the arm to reach the object
Goal Distance: in [-inf, 0] the distance between the pushed object and the goal
Safety reward in {-1, 0}: -1 if any joint is close to its limit.
The sparse reward only takes the values {0,1}, with 1 granted upon reaching the goal.
Args:
action (np array): The action taken in that timestep
Returns:
reward (float or dict): the reward if sparse rewards are used, otherwise a dictionary
with the total reward and the subcomponents of the dense reward.
"""
reward = 0.
# sparse completion reward
if not self.reward_shaping and self._check_success():
reward = 1.0
# use a dense reward
if self.reward_shaping:
object_pos = self.sim.data.body_xpos[self.object_body_id]
# max joint angles reward
joint_limits = self._joint_ranges
current_joint_pos = self._joint_positions
hitting_limits_reward = - int(any([(x < joint_limits[i, 0] + 0.05 or x > joint_limits[i, 1] - 0.05) for i, x in
enumerate(current_joint_pos)]))
reward += hitting_limits_reward
# reaching reward
gripper_site_pos = self.sim.data.site_xpos[self.eef_site_id]
dist = np.linalg.norm(gripper_site_pos[:2] - object_pos[:2])
reaching_reward = -0.1 * dist
reward += reaching_reward
# Success Reward
success = self._check_success()
if (success):
reward += 0.1
# goal distance reward
goal_pos = self.sim.data.site_xpos[self.goal_site_id]
dist = np.linalg.norm(goal_pos[:2] - object_pos[:2])
goal_distance_reward = -dist
reward += goal_distance_reward
unstable = reward < -2.5
# Return all three types of rewards
reward = {"reward": reward, "reaching_distance": -10 * reaching_reward,
"goal_distance": - goal_distance_reward,
"hitting_limits_reward": hitting_limits_reward,
"unstable":unstable}
return reward
def _check_success(self):
"""
Returns True if task has been completed.
"""
object_pos = self.sim.data.body_xpos[self.object_body_id][:2]
goal_pos = self.sim.data.site_xpos[self.goal_site_id][:2]
dist = np.linalg.norm(goal_pos - object_pos)
goal_horizontal_radius = self.model.mujoco_goal.get_horizontal_radius()
# object centre is within the goal radius
return dist < goal_horizontal_radius
def _pre_action(self, action):
""" Takes the action, randomised the control timestep, and adds some additional random noise to the action."""
# Change control timestep to simulate various random time delays
timestep_parameter = self.dynamics_parameters['timestep_parameter']
self.control_timestep = self.init_control_timestep + self.np_random.exponential(scale=timestep_parameter)
# Add action noise to simulate unmodelled effects
additive_noise = self.dynamics_parameters['action_additive_noise'] * self.np_random.uniform(-1, 1, action.shape)
additive_systematic_noise = self.dynamics_parameters['action_systematic_noise']
multiplicative_noise = 1.0 + (
self.dynamics_parameters['action_multiplicative_noise'] * self.np_random.uniform(-1, 1,
action.shape))
action = action * (1.0 + additive_noise + additive_systematic_noise) * multiplicative_noise
super()._pre_action(action)
# Adding forces
# Adding forces to the joints
self.sim.data.qfrc_applied[
self._ref_joint_vel_indexes
] += self.dynamics_parameters['joint_forces'] * self.np_random.uniform(-1, 1, 7)
# Adding force proportional to acceleration
self.sim.data.qfrc_applied[
self._ref_joint_vel_indexes
] += self.dynamics_parameters['acceleration_forces'] * self.sim.data.qacc[
self._ref_joint_vel_indexes
]
self.sim.data.xfrc_applied[
self._ref_gripper_body_indx
] = self.dynamics_parameters['eef_forces'] * self.np_random.uniform(-1, 1, 6)
# Adding forces to the object
obj_qvel_low_idx , obj_qvel_high_idx = self.sim.model.get_joint_qvel_addr('rectangle')
self.sim.data.qfrc_applied[
obj_qvel_low_idx: obj_qvel_high_idx
] += self.dynamics_parameters['obj_forces'] * self.np_random.uniform(-1, 1, 6)
def _post_action(self, action):
"""
Adds dense reward subcomponents to info, and checks for success of the task.
"""
reward, done, info = super()._post_action(action)
if self.reward_shaping:
info = reward
reward = reward["reward"]
if(info["unstable"]):
done = True
info["success"] = self._check_success()
return reward, done, info
def _get_observation(self):
"""
Returns an OrderedDict containing observations [(name_string, np.array), ...].
Important keys:
gripper_to_object : The x-y component of the gripper to object distance
object_to_goal : The x-y component of the object-to-goal distance
object_z_rot : the rotation of the object around the axis sticking out of the table
object_xvelp: x-y linear velocity of the object
gripper_xvelp: x-y linear velocity of the gripper
task-state : a concatenation of all the above.
"""
di = OrderedDict()
push_obj_name = self.model.object_names[self.model.push_object_idx]
# camera observations
if self.use_camera_obs:
camera_obs = self.sim.render(
camera_name=self.camera_name,
width=self.camera_width,
height=self.camera_height,
depth=self.camera_depth,
)
if self.camera_depth:
di["image"], di["depth"] = camera_obs
else:
di["image"] = camera_obs
# low-level object information
if self.use_object_obs:
# Extract position and velocity of the eef
eef_pos_in_world = self.sim.data.get_body_xpos("right_hand")
eef_xvelp_in_world = self.sim.data.get_body_xvelp("right_hand")
# Apply time delays
eef_pos_in_world = self._apply_time_delay(eef_pos_in_world, self.eef_pos_queue)
eef_xvelp_in_world = self._apply_time_delay(eef_xvelp_in_world, self.eef_vel_queue)
# Add random noise
position_noise = self.dynamics_parameters['eef_obs_position_noise']
velocity_noise = self.dynamics_parameters['eef_obs_velocity_noise']
eef_pos_in_world = eef_pos_in_world + self.np_random.normal(loc=0., scale=position_noise)
eef_xvelp_in_world = eef_xvelp_in_world + self.np_random.normal(loc=0., scale=velocity_noise)
# Get the position, velocity, rotation and rotational velocity of the object in the world frame
object_pos_in_world = self.sim.data.body_xpos[self.object_body_id]
object_xvelp_in_world = self.sim.data.get_body_xvelp(push_obj_name)
object_rot_in_world = self.sim.data.get_body_xmat(self.push_obj_name)
# Apply time delays
object_pos_in_world = self._apply_time_delay(object_pos_in_world, self.obj_pos_queue)
object_xvelp_in_world = self._apply_time_delay(object_xvelp_in_world, self.obj_vel_queue)
object_rot_in_world = self._apply_time_delay(object_rot_in_world, self.obj_angle_queue)
# Get the z-angle with respect to the reference position and do sin-cosine encoding
world_rotation_in_reference = np.array([[0., 1., 0., ], [-1., 0., 0., ], [0., 0., 1., ]])
object_rotation_in_ref = world_rotation_in_reference.dot(object_rot_in_world)
object_euler_in_ref = T.mat2euler(object_rotation_in_ref)
z_angle = object_euler_in_ref[2]
# Add random noise
position_noise = self.dynamics_parameters['obj_obs_position_noise']
velocity_noise = self.dynamics_parameters['obj_obs_velocity_noise']
angle_noise = self.dynamics_parameters['obj_angle_noise']
object_pos_in_world = object_pos_in_world + self.np_random.normal(loc=0., scale=position_noise)
object_xvelp_in_world = object_xvelp_in_world + self.np_random.normal(loc=0., scale=velocity_noise)
z_angle = z_angle + self.np_random.normal(loc=0., scale=angle_noise)
# construct vectors for policy observation
sine_cosine = np.array([np.sin(8*z_angle), np.cos(8*z_angle)])
# Get the goal position in the world
goal_site_pos_in_world = | np.array(self.sim.data.site_xpos[self.goal_site_id]) | numpy.array |
# -*- coding: utf-8 -*-
"""
Created on Wed Mar 21 10:00:33 2018
@author: jdkern
"""
from __future__ import division
from sklearn import linear_model
from statsmodels.tsa.api import VAR
import scipy.stats as st
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
######################################################################
# LOAD
######################################################################
#import data
df_load = pd.read_excel('Synthetic_demand_pathflows/hist_demanddata.xlsx',sheet_name='hourly_load',header=0)
df_weather = pd.read_excel('Synthetic_demand_pathflows/hist_demanddata.xlsx',sheet_name='weather',header=0)
BPA_weights = pd.read_excel('Synthetic_demand_pathflows/hist_demanddata.xlsx',sheet_name='BPA_location_weights',header=0)
CAISO_weights = pd.read_excel('Synthetic_demand_pathflows/hist_demanddata.xlsx',sheet_name='CAISO_location_weights',header=0)
Name_list=pd.read_csv('Synthetic_demand_pathflows/Covariance_Calculation.csv')
Name_list=list(Name_list.loc['SALEM_T':])
Name_list=Name_list[1:]
df_wind=pd.read_csv('Synthetic_wind_power/wind_power_sim.csv',header=0)
sim_years = int(len(df_wind)/8760) + 3
sim_weather=pd.read_csv('Synthetic_weather/synthetic_weather_data.csv',header=0,index_col=0)
sim_weather = sim_weather.iloc[0:365*sim_years,:]
sim_weather = sim_weather.iloc[365:len(sim_weather)-730,:]
sim_weather = sim_weather.reset_index(drop=True)
#weekday designation
dow = df_weather.loc[:,'Weekday']
#generate simulated day-of-week indicator, assuming the series starts on a Monday
count=0
sim_dow= np.zeros(len(sim_weather))
for i in range(0,len(sim_weather)):
count = count +1
if count <=5:
sim_dow[i]=1
elif count > 5:
sim_dow[i]=0
if count ==7:
count =0
#Generate a datelist
datelist=pd.date_range(start='2017-01-01',periods=365).tolist()
sim_month=np.zeros(len(sim_weather))
sim_day=np.zeros(len(sim_weather))
sim_year=np.zeros(len(sim_weather))
count=0
for i in range(0,len(sim_weather)):
if count <=364:
sim_month[i]=datelist[count].month
sim_day[i]=datelist[count].day
sim_year[i]=datelist[count].year
else:
count=0
sim_month[i]=datelist[count].month
sim_day[i]=datelist[count].day
sim_year[i]=datelist[count].year
count=count+1
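# Equivalent vectorised sketch, shown for reference only (it assumes, like the loop
# above, that the simulated record simply repeats the 2017 calendar):
#     dates = pd.DatetimeIndex(datelist * sim_years)[:len(sim_weather)]
#     sim_month, sim_day, sim_year = dates.month.values, dates.day.values, dates.year.values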
######################################################################
# BPAT
######################################################################
#Find the simulated data at the sites
col_BPA_T = ['SALEM_T','SEATTLE_T','PORTLAND_T','EUGENE_T','BOISE_T']
col_BPA_W = ['SALEM_W','SEATTLE_W','PORTLAND_W','EUGENE_W','BOISE_W']
BPA_sim_T=sim_weather[col_BPA_T].values
BPA_sim_W=sim_weather[col_BPA_W].values
sim_days = len(sim_weather)
weighted_SimT = np.zeros((sim_days,1))
###########################################
#find average temps
cities = ['Salem','Seattle','Portland','Eugene','Boise']
num_cities = len(cities)
num_days = len(df_weather)
AvgT = np.zeros((num_days,num_cities))
Wind = np.zeros((num_days,num_cities))
weighted_AvgT = np.zeros((num_days,1))
for i in cities:
n1 = i + '_MaxT'
n2 = i + '_MinT'
n3 = i + '_Wind'
j = int(cities.index(i))
AvgT[:,j] = 0.5*df_weather.loc[:,n1] + 0.5*df_weather.loc[:,n2]
weighted_AvgT[:,0] = weighted_AvgT[:,0] + AvgT[:,j]*BPA_weights.loc[0,i]
Wind[:,j] = df_weather.loc[:,n3]
weighted_SimT[:,0] = weighted_SimT[:,0] + BPA_sim_T[:,j]*BPA_weights.loc[0,i]
#Convert simulated temperature to F
weighted_SimT=(weighted_SimT * 9/5) +32
BPA_sim_T_F=(BPA_sim_T * 9/5) +32
#convert to degree days
HDD = np.zeros((num_days,num_cities))
CDD = np.zeros((num_days,num_cities))
HDD_sim = np.zeros((sim_days,num_cities))
CDD_sim = np.zeros((sim_days,num_cities))
for i in range(0,num_days):
for j in range(0,num_cities):
HDD[i,j] = np.max((0,65-AvgT[i,j]))
CDD[i,j] = np.max((0,AvgT[i,j] - 65))
for i in range(0,sim_days):
for j in range(0,num_cities):
HDD_sim[i,j] = np.max((0,65-BPA_sim_T_F[i,j]))
CDD_sim[i,j] = np.max((0,BPA_sim_T_F[i,j] - 65))
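# Equivalent vectorised form of the two degree-day loops above, shown for reference only:
#     HDD = np.maximum(0.0, 65.0 - AvgT);            CDD = np.maximum(0.0, AvgT - 65.0)
#     HDD_sim = np.maximum(0.0, 65.0 - BPA_sim_T_F); CDD_sim = np.maximum(0.0, BPA_sim_T_F - 65.0)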
#separate wind speed by cooling/heating degree day
binary_CDD = CDD>0
binary_HDD = HDD>0
CDD_wind = np.multiply(Wind,binary_CDD)
HDD_wind = np.multiply(Wind,binary_HDD)
binary_CDD_sim = CDD_sim > 0
binary_HDD_sim = HDD_sim > 0
CDD_wind_sim = np.multiply(BPA_sim_W,binary_CDD_sim)
HDD_wind_sim = np.multiply(BPA_sim_W,binary_HDD_sim)
#convert load to array
BPA_load = df_load.loc[:,'BPA'].values
#remove NaNs
a = np.argwhere(np.isnan(BPA_load))
for i in a:
BPA_load[i] = BPA_load[i+24]
peaks = np.zeros((num_days,1))
#find peaks
for i in range(0,num_days):
peaks[i] = np.max(BPA_load[i*24:i*24+24])
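# Equivalent vectorised form, shown for reference only; it assumes the hourly record
# spans at least num_days whole days (len(BPA_load) >= num_days*24):
#     peaks = BPA_load[:num_days*24].reshape(num_days, 24).max(axis=1).reshape(-1, 1)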
#Separate data by weighted temperature
M = np.column_stack((weighted_AvgT,peaks,dow,HDD,CDD,HDD_wind,CDD_wind))
M_sim=np.column_stack((weighted_SimT,sim_dow,HDD_sim,CDD_sim,HDD_wind_sim,CDD_wind_sim))
X70p = M[(M[:,0] >= 70),2:]
y70p = M[(M[:,0] >= 70),1]
X65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),2:]
y65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),1]
X60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),2:]
y60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),1]
X55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),2:]
y55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),1]
X50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),2:]
y50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),1]
X40_50 = M[(M[:,0] >= 40) & (M[:,0] < 50),2:]
y40_50 = M[(M[:,0] >= 40) & (M[:,0] < 50),1]
X30_40 = M[(M[:,0] >= 30) & (M[:,0] < 40),2:]
y30_40 = M[(M[:,0] >= 30) & (M[:,0] < 40),1]
X25_30 = M[(M[:,0] >= 25) & (M[:,0] < 30),2:]
y25_30 = M[(M[:,0] >= 25) & (M[:,0] < 30),1]
X25m = M[(M[:,0] < 25),2:]
y25m = M[(M[:,0] < 25),1]
X70p_Sim = M_sim[(M_sim[:,0] >= 70),1:]
X65_70_Sim = M_sim[(M_sim[:,0] >= 65) & (M_sim[:,0] < 70),1:]
X60_65_Sim = M_sim[(M_sim[:,0] >= 60) & (M_sim[:,0] < 65),1:]
X55_60_Sim = M_sim[(M_sim[:,0] >= 55) & (M_sim[:,0] < 60),1:]
X50_55_Sim = M_sim[(M_sim[:,0] >= 50) & (M_sim[:,0] < 55),1:]
X40_50_Sim = M_sim[(M_sim[:,0] >= 40) & (M_sim[:,0] < 50),1:]
X30_40_Sim = M_sim[(M_sim[:,0] >= 30) & (M_sim[:,0] < 40),1:]
X25_30_Sim = M_sim[(M_sim[:,0] >= 25) & (M_sim[:,0] < 30),1:]
X25m_Sim = M_sim[(M_sim[:,0] < 25),1:]
#multivariate regression
#Create linear regression object
reg70p = linear_model.LinearRegression()
reg65_70 = linear_model.LinearRegression()
reg60_65 = linear_model.LinearRegression()
reg55_60 = linear_model.LinearRegression()
reg50_55 = linear_model.LinearRegression()
reg40_50 = linear_model.LinearRegression()
reg30_40 = linear_model.LinearRegression()
reg25_30 = linear_model.LinearRegression()
reg25m = linear_model.LinearRegression()
# Train the model using the training sets
if len(y70p) > 0:
reg70p.fit(X70p,y70p)
if len(y65_70) > 0:
reg65_70.fit(X65_70,y65_70)
if len(y60_65) > 0:
reg60_65.fit(X60_65,y60_65)
if len(y55_60) > 0:
reg55_60.fit(X55_60,y55_60)
if len(y50_55) > 0:
reg50_55.fit(X50_55,y50_55)
if len(y40_50) > 0:
reg40_50.fit(X40_50,y40_50)
if len(y30_40) > 0:
reg30_40.fit(X30_40,y30_40)
if len(y25_30) > 0:
reg25_30.fit(X25_30,y25_30)
if len(y25m) > 0:
reg25m.fit(X25m,y25m)
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
for i in range(0,num_days):
s = M[i,2:]
s = s.reshape((1,len(s)))
if M[i,0]>=70:
y_hat = reg70p.predict(s)
elif M[i,0] >= 65 and M[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M[i,0] >= 60 and M[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M[i,0] >= 55 and M[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M[i,0] >= 50 and M[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M[i,0] >= 40 and M[i,0] < 50:
y_hat = reg40_50.predict(s)
elif M[i,0] >= 30 and M[i,0] < 40:
y_hat = reg30_40.predict(s)
elif M[i,0] >= 25 and M[i,0] < 30:
y_hat = reg25_30.predict(s)
elif M[i,0] < 25:
y_hat = reg25m.predict(s)
predicted = np.append(predicted,y_hat)
BPA_p = predicted.reshape((len(predicted),1))
#Simulate using the regression above
simulated=[]
for i in range(0,sim_days):
s = M_sim[i,1:]
s = s.reshape((1,len(s)))
if M_sim[i,0]>=70:
y_hat = reg70p.predict(s)
elif M_sim[i,0] >= 65 and M_sim[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M_sim[i,0] >= 60 and M_sim[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M_sim[i,0] >= 55 and M_sim[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M_sim[i,0] >= 50 and M_sim[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M_sim[i,0] >= 40 and M_sim[i,0] < 50:
y_hat = reg40_50.predict(s)
elif M_sim[i,0] >= 30 and M_sim[i,0] < 40:
y_hat = reg30_40.predict(s)
elif M_sim[i,0] >= 25 and M_sim[i,0] < 30:
y_hat = reg25_30.predict(s)
elif M_sim[i,0] < 25:
y_hat = reg25m.predict(s)
simulated = np.append(simulated,y_hat)
BPA_sim = simulated.reshape((len(simulated),1))
a=st.pearsonr(peaks,BPA_p)
print(a[0]**2, a[1])
# Residuals
BPAresiduals = BPA_p - peaks
BPA_y = peaks
# RMSE
RMSE = (np.sum((BPAresiduals**2))/len(BPAresiduals))**.5
output = np.column_stack((BPA_p,peaks))
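# Design note: the temperature-binned fit/predict chains above could be driven from a
# single table of (low, high, model) tuples, removing the repeated if/elif blocks.
# A minimal sketch, illustrative only and behaviourally identical for the BPA zones:
#
#     bins = [(70, np.inf, reg70p), (65, 70, reg65_70), (60, 65, reg60_65),
#             (55, 60, reg55_60), (50, 55, reg50_55), (40, 50, reg40_50),
#             (30, 40, reg30_40), (25, 30, reg25_30), (-np.inf, 25, reg25m)]
#     def pick_model(t):
#         return next(model for lo, hi, model in bins if lo <= t < hi)
#     y_hat = pick_model(M[i, 0]).predict(M[i, 2:].reshape(1, -1))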
#########################################################################
# CAISO
#########################################################################
#Find the simulated data at the sites
col_CAISO_T = ['FRESNO_T','OAKLAND_T','LOS ANGELES_T','SAN DIEGO_T','SACRAMENTO_T','SAN JOSE_T','SAN FRANCISCO_T']
col_CAISO_W = ['FRESNO_W','OAKLAND_W','LOS ANGELES_W','SAN DIEGO_W','SACRAMENTO_W','SAN JOSE_W','SAN FRANCISCO_W']
CAISO_sim_T=sim_weather[col_CAISO_T].values
CAISO_sim_W=sim_weather[col_CAISO_W].values
sim_days = len(sim_weather)
weighted_SimT = np.zeros((sim_days,1))
#find average temps
cities = ['Fresno','Oakland','LA','SanDiego','Sacramento','SanJose','SanFran']
num_cities = len(cities)
num_days = len(df_weather)
AvgT = np.zeros((num_days,num_cities))
Wind = np.zeros((num_days,num_cities))
weighted_AvgT = np.zeros((num_days,1))
for i in cities:
n1 = i + '_MaxT'
n2 = i + '_MinT'
n3 = i + '_Wind'
j = int(cities.index(i))
AvgT[:,j] = 0.5*df_weather.loc[:,n1] + 0.5*df_weather.loc[:,n2]
Wind[:,j] = df_weather.loc[:,n3]
weighted_AvgT[:,0] = weighted_AvgT[:,0] + AvgT[:,j]*CAISO_weights.loc[1,i]
weighted_SimT[:,0] = weighted_SimT[:,0] + CAISO_sim_T[:,j]*CAISO_weights.loc[1,i]
#Convert simulated temperature to F
weighted_SimT=(weighted_SimT * 9/5) +32
CAISO_sim_T_F=(CAISO_sim_T * 9/5) +32
#convert to degree days
HDD = np.zeros((num_days,num_cities))
CDD = np.zeros((num_days,num_cities))
HDD_sim = np.zeros((sim_days,num_cities))
CDD_sim = np.zeros((sim_days,num_cities))
for i in range(0,num_days):
for j in range(0,num_cities):
HDD[i,j] = np.max((0,65-AvgT[i,j]))
CDD[i,j] = np.max((0,AvgT[i,j] - 65))
for i in range(0,sim_days):
for j in range(0,num_cities):
HDD_sim[i,j] = np.max((0,65-CAISO_sim_T_F[i,j]))
CDD_sim[i,j] = np.max((0,CAISO_sim_T_F[i,j] - 65))
#separate wind speed by cooling/heating degree day
binary_CDD = CDD>0
binary_HDD = HDD>0
binary_CDD_sim = CDD_sim > 0
binary_HDD_sim = HDD_sim > 0
CDD_wind = np.multiply(Wind,binary_CDD)
HDD_wind = np.multiply(Wind,binary_HDD)
CDD_wind_sim = np.multiply(CAISO_sim_W,binary_CDD_sim)
HDD_wind_sim = np.multiply(CAISO_sim_W,binary_HDD_sim)
###########################
# CAISO - SDGE
###########################
#convert load to array
SDGE_load = df_load.loc[:,'SDGE'].values
#remove NaNs
a = np.argwhere(np.isnan(SDGE_load))
for i in a:
SDGE_load[i] = SDGE_load[i+24]
peaks = np.zeros((num_days,1))
#find peaks
for i in range(0,num_days):
peaks[i] = np.max(SDGE_load[i*24:i*24+24])
#Separate data by weighted temperature
M = np.column_stack((weighted_AvgT,peaks,dow,HDD,CDD,HDD_wind,CDD_wind))
M_sim=np.column_stack((weighted_SimT,sim_dow,HDD_sim,CDD_sim,HDD_wind_sim,CDD_wind_sim))
X80p = M[(M[:,0] >= 80),2:]
y80p = M[(M[:,0] >= 80),1]
X75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),2:]
y75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),1]
X70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),2:]
y70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),1]
X65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),2:]
y65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),1]
X60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),2:]
y60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),1]
X55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),2:]
y55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),1]
X50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),2:]
y50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),1]
X50 = M[(M[:,0] < 50),2:]
y50 = M[(M[:,0] < 50),1]
X80p_Sim = M_sim[(M_sim[:,0] >= 80),1:]
X75_80_Sim = M_sim[(M_sim[:,0] >= 75) & (M_sim[:,0] < 80),1:]
X70_75_Sim = M_sim[(M_sim[:,0] >= 70) & (M_sim[:,0] < 75),1:]
X65_70_Sim = M_sim[(M_sim[:,0] >= 65) & (M_sim[:,0] < 70),1:]
X60_65_Sim = M_sim[(M_sim[:,0] >= 60) & (M_sim[:,0] < 65),1:]
X55_60_Sim = M_sim[(M_sim[:,0] >= 55) & (M_sim[:,0] < 60),1:]
X50_55_Sim = M_sim[(M_sim[:,0] >= 50) & (M_sim[:,0] < 55),1:]
X50_Sim = M_sim[(M_sim[:,0] < 50),1:]
#Create linear regression object
reg80p = linear_model.LinearRegression()
reg75_80 = linear_model.LinearRegression()
reg70_75 = linear_model.LinearRegression()
reg65_70 = linear_model.LinearRegression()
reg60_65 = linear_model.LinearRegression()
reg55_60 = linear_model.LinearRegression()
reg50_55 = linear_model.LinearRegression()
reg50m = linear_model.LinearRegression()
## Train the model using the training sets
if len(y80p) > 0:
reg80p.fit(X80p,y80p)
if len(y75_80) > 0:
reg75_80.fit(X75_80,y75_80)
if len(y70_75) > 0:
reg70_75.fit(X70_75,y70_75)
if len(y65_70) > 0:
reg65_70.fit(X65_70,y65_70)
if len(y60_65) > 0:
reg60_65.fit(X60_65,y60_65)
if len(y55_60) > 0:
reg55_60.fit(X55_60,y55_60)
if len(y50_55) > 0:
reg50_55.fit(X50_55,y50_55)
if len(y50) > 0:
reg50m.fit(X50,y50)
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
for i in range(0,num_days):
s = M[i,2:]
s = s.reshape((1,len(s)))
if M[i,0]>=80:
y_hat = reg80p.predict(s)
elif M[i,0] >= 75 and M[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M[i,0] >= 70 and M[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M[i,0] >= 65 and M[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M[i,0] >= 60 and M[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M[i,0] >= 55 and M[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M[i,0] >= 50 and M[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M[i,0] < 50:
y_hat = reg50m.predict(s)
predicted = np.append(predicted,y_hat)
SDGE_p = predicted.reshape((len(predicted),1))
simulated=[]
for i in range(0,sim_days):
s = M_sim[i,1:]
s = s.reshape((1,len(s)))
if M_sim[i,0]>=80:
y_hat = reg80p.predict(s)
elif M_sim[i,0] >= 75 and M_sim[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M_sim[i,0] >= 70 and M_sim[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M_sim[i,0] >= 65 and M_sim[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M_sim[i,0] >= 60 and M_sim[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M_sim[i,0] >= 55 and M_sim[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M_sim[i,0] >= 50 and M_sim[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M_sim[i,0] < 50:
y_hat = reg50m.predict(s)
simulated = np.append(simulated,y_hat)
SDGE_sim = simulated.reshape((len(simulated),1))
# Residuals
SDGEresiduals = SDGE_p - peaks
SDGE_y = peaks
#a=st.pearsonr(peaks,SDGE_p)
#print a[0]**2
# RMSE
RMSE = (np.sum((SDGEresiduals**2))/len(SDGEresiduals))**.5
###########################
# CAISO - SCE
###########################
#convert load to array
SCE_load = df_load.loc[:,'SCE'].values
#remove NaNs
a = np.argwhere(np.isnan(SCE_load))
for i in a:
SCE_load[i] = SCE_load[i+24]
peaks = np.zeros((num_days,1))
#find peaks
for i in range(0,num_days):
peaks[i] = np.max(SCE_load[i*24:i*24+24])
#Separate data by weighted temperature
M = np.column_stack((weighted_AvgT,peaks,dow,HDD,CDD,HDD_wind,CDD_wind))
M_sim=np.column_stack((weighted_SimT,sim_dow,HDD_sim,CDD_sim,HDD_wind_sim,CDD_wind_sim))
X80p = M[(M[:,0] >= 80),2:]
y80p = M[(M[:,0] >= 80),1]
X75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),2:]
y75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),1]
X70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),2:]
y70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),1]
X65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),2:]
y65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),1]
X60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),2:]
y60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),1]
X55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),2:]
y55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),1]
X50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),2:]
y50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),1]
X50 = M[(M[:,0] < 50),2:]
y50 = M[(M[:,0] < 50),1]
X80p_Sim = M_sim[(M_sim[:,0] >= 80),1:]
X75_80_Sim = M_sim[(M_sim[:,0] >= 75) & (M_sim[:,0] < 80),1:]
X70_75_Sim = M_sim[(M_sim[:,0] >= 70) & (M_sim[:,0] < 75),1:]
X65_70_Sim = M_sim[(M_sim[:,0] >= 65) & (M_sim[:,0] < 70),1:]
X60_65_Sim = M_sim[(M_sim[:,0] >= 60) & (M_sim[:,0] < 65),1:]
X55_60_Sim = M_sim[(M_sim[:,0] >= 55) & (M_sim[:,0] < 60),1:]
X50_55_Sim = M_sim[(M_sim[:,0] >= 50) & (M_sim[:,0] < 55),1:]
X50_Sim = M_sim[(M_sim[:,0] < 50),1:]
##multivariate regression
#
#Create linear regression object
reg80p = linear_model.LinearRegression()
reg75_80 = linear_model.LinearRegression()
reg70_75 = linear_model.LinearRegression()
reg65_70 = linear_model.LinearRegression()
reg60_65 = linear_model.LinearRegression()
reg55_60 = linear_model.LinearRegression()
reg50_55 = linear_model.LinearRegression()
reg50m = linear_model.LinearRegression()
## Train the model using the training sets
if len(y80p) > 0:
reg80p.fit(X80p,y80p)
if len(y75_80) > 0:
reg75_80.fit(X75_80,y75_80)
if len(y70_75) > 0:
reg70_75.fit(X70_75,y70_75)
if len(y65_70) > 0:
reg65_70.fit(X65_70,y65_70)
if len(y60_65) > 0:
reg60_65.fit(X60_65,y60_65)
if len(y55_60) > 0:
reg55_60.fit(X55_60,y55_60)
if len(y50_55) > 0:
reg50_55.fit(X50_55,y50_55)
if len(y50) > 0:
reg50m.fit(X50,y50)
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
for i in range(0,num_days):
s = M[i,2:]
s = s.reshape((1,len(s)))
if M[i,0]>=80:
y_hat = reg80p.predict(s)
elif M[i,0] >= 75 and M[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M[i,0] >= 70 and M[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M[i,0] >= 65 and M[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M[i,0] >= 60 and M[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M[i,0] >= 55 and M[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M[i,0] >= 50 and M[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M[i,0] < 50:
y_hat = reg50m.predict(s)
predicted = np.append(predicted,y_hat)
SCE_p = predicted.reshape((len(predicted),1))
simulated=[]
for i in range(0,sim_days):
s = M_sim[i,1:]
s = s.reshape((1,len(s)))
if M_sim[i,0]>=80:
y_hat = reg80p.predict(s)
elif M_sim[i,0] >= 75 and M_sim[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M_sim[i,0] >= 70 and M_sim[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M_sim[i,0] >= 65 and M_sim[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M_sim[i,0] >= 60 and M_sim[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M_sim[i,0] >= 55 and M_sim[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M_sim[i,0] >= 50 and M_sim[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M_sim[i,0] < 50:
y_hat = reg50m.predict(s)
simulated = np.append(simulated,y_hat)
SCE_sim = simulated.reshape((len(simulated),1))
#a=st.pearsonr(peaks,SCE_p)
#print a[0]**2
# Residuals
SCEresiduals = SCE_p - peaks
SCE_y = peaks
# RMSE
RMSE = (np.sum((SCEresiduals**2))/len(SCEresiduals))**.5
###########################
# CAISO - PG&E Valley
###########################
#convert load to array
PGEV_load = df_load.loc[:,'PGE_V'].values
#remove NaNs
a = np.argwhere(np.isnan(PGEV_load))
for i in a:
PGEV_load[i] = PGEV_load[i+24]
peaks = np.zeros((num_days,1))
#find peaks
for i in range(0,num_days):
peaks[i] = np.max(PGEV_load[i*24:i*24+24])
#Separate data by weighted temperature
M = np.column_stack((weighted_AvgT,peaks,dow,HDD,CDD,HDD_wind,CDD_wind))
M_sim=np.column_stack((weighted_SimT,sim_dow,HDD_sim,CDD_sim,HDD_wind_sim,CDD_wind_sim))
X80p = M[(M[:,0] >= 80),2:]
y80p = M[(M[:,0] >= 80),1]
X75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),2:]
y75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),1]
X70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),2:]
y70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),1]
X65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),2:]
y65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),1]
X60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),2:]
y60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),1]
X55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),2:]
y55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),1]
X50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),2:]
y50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),1]
X50 = M[(M[:,0] < 50),2:]
y50 = M[(M[:,0] < 50),1]
X80p_Sim = M_sim[(M_sim[:,0] >= 80),1:]
X75_80_Sim = M_sim[(M_sim[:,0] >= 75) & (M_sim[:,0] < 80),1:]
X70_75_Sim = M_sim[(M_sim[:,0] >= 70) & (M_sim[:,0] < 75),1:]
X65_70_Sim = M_sim[(M_sim[:,0] >= 65) & (M_sim[:,0] < 70),1:]
X60_65_Sim = M_sim[(M_sim[:,0] >= 60) & (M_sim[:,0] < 65),1:]
X55_60_Sim = M_sim[(M_sim[:,0] >= 55) & (M_sim[:,0] < 60),1:]
X50_55_Sim = M_sim[(M_sim[:,0] >= 50) & (M_sim[:,0] < 55),1:]
X50_Sim = M_sim[(M_sim[:,0] < 50),1:]
##multivariate regression
#
#Create linear regression object
reg80p = linear_model.LinearRegression()
reg75_80 = linear_model.LinearRegression()
reg70_75 = linear_model.LinearRegression()
reg65_70 = linear_model.LinearRegression()
reg60_65 = linear_model.LinearRegression()
reg55_60 = linear_model.LinearRegression()
reg50_55 = linear_model.LinearRegression()
reg50m = linear_model.LinearRegression()
## Train the model using the training sets
if len(y80p) > 0:
reg80p.fit(X80p,y80p)
if len(y75_80) > 0:
reg75_80.fit(X75_80,y75_80)
if len(y70_75) > 0:
reg70_75.fit(X70_75,y70_75)
if len(y65_70) > 0:
reg65_70.fit(X65_70,y65_70)
if len(y60_65) > 0:
reg60_65.fit(X60_65,y60_65)
if len(y55_60) > 0:
reg55_60.fit(X55_60,y55_60)
if len(y50_55) > 0:
reg50_55.fit(X50_55,y50_55)
if len(y50) > 0:
reg50m.fit(X50,y50)
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
for i in range(0,num_days):
s = M[i,2:]
s = s.reshape((1,len(s)))
if M[i,0]>=80:
y_hat = reg80p.predict(s)
elif M[i,0] >= 75 and M[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M[i,0] >= 70 and M[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M[i,0] >= 65 and M[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M[i,0] >= 60 and M[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M[i,0] >= 55 and M[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M[i,0] >= 50 and M[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M[i,0] < 50:
y_hat = reg50m.predict(s)
predicted = np.append(predicted,y_hat)
PGEV_p = predicted.reshape((len(predicted),1))
simulated=[]
for i in range(0,sim_days):
s = M_sim[i,1:]
s = s.reshape((1,len(s)))
if M_sim[i,0]>=80:
y_hat = reg80p.predict(s)
elif M_sim[i,0] >= 75 and M_sim[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M_sim[i,0] >= 70 and M_sim[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M_sim[i,0] >= 65 and M_sim[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M_sim[i,0] >= 60 and M_sim[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M_sim[i,0] >= 55 and M_sim[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M_sim[i,0] >= 50 and M_sim[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M_sim[i,0] < 50:
y_hat = reg50m.predict(s)
simulated = np.append(simulated,y_hat)
PGEV_sim = simulated.reshape((len(simulated),1))
a=st.pearsonr(peaks,PGEV_p)
print(a[0]**2, a[1])
# Residuals
PGEVresiduals = PGEV_p - peaks
PGEV_y = peaks
# RMSE
RMSE = (np.sum((PGEVresiduals**2))/len(PGEVresiduals))**.5
###########################
# CAISO - PG&E Bay
###########################
#convert load to array
PGEB_load = df_load.loc[:,'PGE_B'].values
#remove NaNs
a = np.argwhere(np.isnan(PGEB_load))
for i in a:
PGEB_load[i] = PGEB_load[i+24]
peaks = np.zeros((num_days,1))
#find peaks
for i in range(0,num_days):
peaks[i] = np.max(PGEB_load[i*24:i*24+24])
#Separate data by weighted temperature
M = np.column_stack((weighted_AvgT,peaks,dow,HDD,CDD,HDD_wind,CDD_wind))
M_sim=np.column_stack((weighted_SimT,sim_dow,HDD_sim,CDD_sim,HDD_wind_sim,CDD_wind_sim))
X80p = M[(M[:,0] >= 80),2:]
y80p = M[(M[:,0] >= 80),1]
X75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),2:]
y75_80 = M[(M[:,0] >= 75) & (M[:,0] < 80),1]
X70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),2:]
y70_75 = M[(M[:,0] >= 70) & (M[:,0] < 75),1]
X65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),2:]
y65_70 = M[(M[:,0] >= 65) & (M[:,0] < 70),1]
X60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),2:]
y60_65 = M[(M[:,0] >= 60) & (M[:,0] < 65),1]
X55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),2:]
y55_60 = M[(M[:,0] >= 55) & (M[:,0] < 60),1]
X50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),2:]
y50_55 = M[(M[:,0] >= 50) & (M[:,0] < 55),1]
X50 = M[(M[:,0] < 50),2:]
y50 = M[(M[:,0] < 50),1]
X80p_Sim = M_sim[(M_sim[:,0] >= 80),1:]
X75_80_Sim = M_sim[(M_sim[:,0] >= 75) & (M_sim[:,0] < 80),1:]
X70_75_Sim = M_sim[(M_sim[:,0] >= 70) & (M_sim[:,0] < 75),1:]
X65_70_Sim = M_sim[(M_sim[:,0] >= 65) & (M_sim[:,0] < 70),1:]
X60_65_Sim = M_sim[(M_sim[:,0] >= 60) & (M_sim[:,0] < 65),1:]
X55_60_Sim = M_sim[(M_sim[:,0] >= 55) & (M_sim[:,0] < 60),1:]
X50_55_Sim = M_sim[(M_sim[:,0] >= 50) & (M_sim[:,0] < 55),1:]
X50_Sim = M_sim[(M_sim[:,0] < 50),1:]
#Create linear regression object
reg80p = linear_model.LinearRegression()
reg75_80 = linear_model.LinearRegression()
reg70_75 = linear_model.LinearRegression()
reg65_70 = linear_model.LinearRegression()
reg60_65 = linear_model.LinearRegression()
reg55_60 = linear_model.LinearRegression()
reg50_55 = linear_model.LinearRegression()
reg50m = linear_model.LinearRegression()
## Train the model using the training sets
if len(y80p) > 0:
reg80p.fit(X80p,y80p)
if len(y75_80) > 0:
reg75_80.fit(X75_80,y75_80)
if len(y70_75) > 0:
reg70_75.fit(X70_75,y70_75)
if len(y65_70) > 0:
reg65_70.fit(X65_70,y65_70)
if len(y60_65) > 0:
reg60_65.fit(X60_65,y60_65)
if len(y55_60) > 0:
reg55_60.fit(X55_60,y55_60)
if len(y50_55) > 0:
reg50_55.fit(X50_55,y50_55)
if len(y50) > 0:
reg50m.fit(X50,y50)
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
for i in range(0,num_days):
s = M[i,2:]
s = s.reshape((1,len(s)))
if M[i,0]>=80:
y_hat = reg80p.predict(s)
elif M[i,0] >= 75 and M[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M[i,0] >= 70 and M[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M[i,0] >= 65 and M[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M[i,0] >= 60 and M[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M[i,0] >= 55 and M[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M[i,0] >= 50 and M[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M[i,0] < 50:
y_hat = reg50m.predict(s)
predicted = np.append(predicted,y_hat)
PGEB_p = predicted.reshape((len(predicted),1))
simulated=[]
for i in range(0,sim_days):
s = M_sim[i,1:]
s = s.reshape((1,len(s)))
if M_sim[i,0]>=80:
y_hat = reg80p.predict(s)
elif M_sim[i,0] >= 75 and M_sim[i,0] < 80:
y_hat = reg75_80.predict(s)
elif M_sim[i,0] >= 70 and M_sim[i,0] < 75:
y_hat = reg70_75.predict(s)
elif M_sim[i,0] >= 65 and M_sim[i,0] < 70:
y_hat = reg65_70.predict(s)
elif M_sim[i,0] >= 60 and M_sim[i,0] < 65:
y_hat = reg60_65.predict(s)
elif M_sim[i,0] >= 55 and M_sim[i,0] < 60:
y_hat = reg55_60.predict(s)
elif M_sim[i,0] >= 50 and M_sim[i,0] < 55:
y_hat = reg50_55.predict(s)
elif M_sim[i,0] < 50:
y_hat = reg50m.predict(s)
simulated = np.append(simulated,y_hat)
PGEB_sim = simulated.reshape((len(simulated),1))
#a=st.pearsonr(peaks,PGEB_p)
#print a[0]**2
# Residuals
PGEBresiduals = PGEB_p - peaks
PGEB_y = peaks
# RMSE
RMSE = (np.sum((PGEBresiduals**2))/len(PGEBresiduals))**.5
#Collect residuals from load regression
R = np.column_stack((BPAresiduals,SDGEresiduals,SCEresiduals,PGEVresiduals,PGEBresiduals))
ResidualsLoad = R[0:3*365,:]
###################################
# PATH 46
###################################
#import data
df_data1 = pd.read_excel('Synthetic_demand_pathflows/46_daily.xlsx',sheet_name='Sheet1',header=0)
#find average temps
cities = ['Tuscon','Phoenix','Vegas','Fresno','Oakland','LA','SanDiego','Sacramento','SanJose','SanFran']
num_cities = len(cities)
num_days = len(df_data1)
AvgT = np.zeros((num_days,num_cities))
Wind = np.zeros((num_days,num_cities))
for i in cities:
n1 = i + '_AvgT'
n2 = i + '_Wind'
j = int(cities.index(i))
AvgT[:,j] = df_data1.loc[:,n1]
Wind[:,j] = df_data1.loc[:,n2]
#convert to degree days
HDD = np.zeros((num_days,num_cities))
CDD = np.zeros((num_days,num_cities))
for i in range(0,num_days):
for j in range(0,num_cities):
HDD[i,j] = np.max((0,65-AvgT[i,j]))
CDD[i,j] = np.max((0,AvgT[i,j] - 65))
#separate wind speed by cooling/heating degree day
binary_CDD = CDD>0
binary_HDD = HDD>0
CDD_wind = np.multiply(Wind,binary_CDD)
HDD_wind = np.multiply(Wind,binary_HDD)
X1 = np.array(df_data1.loc[:,'Month':'Path66'])
X2 = np.column_stack((HDD,CDD,HDD_wind,CDD_wind))
cX = np.column_stack((X1,X2))
df_data = pd.DataFrame(cX)
df_data.rename(columns={0:'Month'}, inplace=True)
df_data.rename(columns={3:'Path46'}, inplace=True)
df_data.rename(columns={4:'Weekday'}, inplace=True)
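# Note: the renames above address cX by column position (0 -> Month, 3 -> Path46,
# 4 -> Weekday), so they silently depend on the column order of the source spreadsheet
# between 'Month' and 'Path66'.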
jan = df_data.loc[df_data['Month'] == 1,:]
feb = df_data.loc[df_data['Month'] == 2,:]
mar = df_data.loc[df_data['Month'] == 3,:]
apr = df_data.loc[df_data['Month'] == 4,:]
may = df_data.loc[df_data['Month'] == 5,:]
jun = df_data.loc[df_data['Month'] == 6,:]
jul = df_data.loc[df_data['Month'] == 7,:]
aug = df_data.loc[df_data['Month'] == 8,:]
sep = df_data.loc[df_data['Month'] == 9,:]
oct = df_data.loc[df_data['Month'] == 10,:]
nov = df_data.loc[df_data['Month'] == 11,:]
dec = df_data.loc[df_data['Month'] == 12,:]
y = df_data.loc[:,'Path46']
#multivariate regression
jan_reg_46 = linear_model.LinearRegression()
feb_reg_46 = linear_model.LinearRegression()
mar_reg_46 = linear_model.LinearRegression()
apr_reg_46 = linear_model.LinearRegression()
may_reg_46 = linear_model.LinearRegression()
jun_reg_46 = linear_model.LinearRegression()
jul_reg_46 = linear_model.LinearRegression()
aug_reg_46 = linear_model.LinearRegression()
sep_reg_46 = linear_model.LinearRegression()
oct_reg_46 = linear_model.LinearRegression()
nov_reg_46 = linear_model.LinearRegression()
dec_reg_46 = linear_model.LinearRegression()
# Train the model using the training sets
jan_reg_46.fit(jan.loc[:,'Weekday':],jan.loc[:,'Path46'])
feb_reg_46.fit(feb.loc[:,'Weekday':],feb.loc[:,'Path46'])
mar_reg_46.fit(mar.loc[:,'Weekday':],mar.loc[:,'Path46'])
apr_reg_46.fit(apr.loc[:,'Weekday':],apr.loc[:,'Path46'])
may_reg_46.fit(may.loc[:,'Weekday':],may.loc[:,'Path46'])
jun_reg_46.fit(jun.loc[:,'Weekday':],jun.loc[:,'Path46'])
jul_reg_46.fit(jul.loc[:,'Weekday':],jul.loc[:,'Path46'])
aug_reg_46.fit(aug.loc[:,'Weekday':],aug.loc[:,'Path46'])
sep_reg_46.fit(sep.loc[:,'Weekday':],sep.loc[:,'Path46'])
oct_reg_46.fit(oct.loc[:,'Weekday':],oct.loc[:,'Path46'])
nov_reg_46.fit(nov.loc[:,'Weekday':],nov.loc[:,'Path46'])
dec_reg_46.fit(dec.loc[:,'Weekday':],dec.loc[:,'Path46'])
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
rc = np.shape(jan.loc[:,'Weekday':])
n = rc[1]
for i in range(0,len(y)):
m = df_data.loc[i,'Month']
if m==1:
s = jan.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = jan_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==2:
s = feb.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = feb_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==3:
s = mar.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = mar_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==4:
s = apr.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = apr_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==5:
s = may.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = may_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==6:
s = jun.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = jun_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==7:
s = jul.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = jul_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==8:
s = aug.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = aug_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==9:
s = sep.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = sep_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==10:
s = oct.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = oct_reg_46.predict(s)
predicted = np.append(predicted,p)
elif m==11:
s = nov.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = nov_reg_46.predict(s)
predicted = np.append(predicted,p)
else:
s = dec.loc[i,'Weekday':]
s = np.reshape(s[:,None],(1,n))
p = dec_reg_46.predict(s)
predicted = np.append(predicted,p)
Path46_p = predicted
# Residuals
residuals = predicted - y.values
Residuals46 = np.reshape(residuals[730:],(1095,1))
Path46_y = y.values
# RMSE
RMSE = (np.sum((residuals**2))/len(residuals))**.5
##R2
#a=st.pearsonr(y,predicted)
#print a[0]**2
###############################
# NW PATHS
###############################
#import data
df_data1 = pd.read_excel('Synthetic_demand_pathflows/NW_Path_data.xlsx',sheet_name='Daily',header=0)
#find average temps
cities = ['Salem','Seattle','Portland','Eugene','Boise','Tuscon','Phoenix','Vegas','Fresno','Oakland','LA','SanDiego','Sacramento','SanJose','SanFran']
num_cities = len(cities)
num_days = len(df_data1)
AvgT = np.zeros((num_days,num_cities))
Wind = np.zeros((num_days,num_cities))
for i in cities:
n1 = i + '_AvgT'
n2 = i + '_Wind'
j = int(cities.index(i))
AvgT[:,j] = df_data1.loc[:,n1]
Wind[:,j] = df_data1.loc[:,n2]
#convert to degree days
HDD = np.zeros((num_days,num_cities))
CDD = np.zeros((num_days,num_cities))
for i in range(0,num_days):
for j in range(0,num_cities):
HDD[i,j] = np.max((0,65-AvgT[i,j]))
CDD[i,j] = np.max((0,AvgT[i,j] - 65))
#separate wind speed by cooling/heating degree day
binary_CDD = CDD>0
binary_HDD = HDD>0
CDD_wind = np.multiply(Wind,binary_CDD)
HDD_wind = np.multiply(Wind,binary_HDD)
X1 = np.array(df_data1.loc[:,'Month':'Weekday'])
X2 = np.column_stack((HDD,CDD,HDD_wind,CDD_wind))
cX = np.column_stack((X1,X2))
df_data = pd.DataFrame(cX)
H = df_data
#df_data.to_excel('Synthetic_demand_pathflows/cX.xlsx')
df_data.rename(columns={0:'Month'}, inplace=True)
df_data.rename(columns={3:'Path8'}, inplace=True)
df_data.rename(columns={4:'Path14'}, inplace=True)
df_data.rename(columns={5:'Path3'}, inplace=True)
df_data.rename(columns={6:'BPA_wind'}, inplace=True)
df_data.rename(columns={7:'BPA_hydro'}, inplace=True)
df_data.rename(columns={8:'Weekday'}, inplace=True)
df_data.rename(columns={9:'Salem_HDD'}, inplace=True)
jan = df_data.loc[df_data['Month'] == 1,:]
feb = df_data.loc[df_data['Month'] == 2,:]
mar = df_data.loc[df_data['Month'] == 3,:]
apr = df_data.loc[df_data['Month'] == 4,:]
may = df_data.loc[df_data['Month'] == 5,:]
jun = df_data.loc[df_data['Month'] == 6,:]
jul = df_data.loc[df_data['Month'] == 7,:]
aug = df_data.loc[df_data['Month'] == 8,:]
sep = df_data.loc[df_data['Month'] == 9,:]
oct = df_data.loc[df_data['Month'] == 10,:]
nov = df_data.loc[df_data['Month'] == 11,:]
dec = df_data.loc[df_data['Month'] == 12,:]
lines = ['Path8','Path14','Path3']
num_lines = len(lines)
export_residuals = np.zeros((len(cX),num_lines))
NWPaths_p= np.zeros((len(cX),num_lines))
NWPaths_y = np.zeros((len(cX),num_lines))
for line in lines:
y = df_data.loc[:,line]
line_index = lines.index(line)
#multivariate regression
name='jan_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='feb_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='mar_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='apr_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='may_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='jun_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='jul_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='aug_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='sep_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='oct_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='nov_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
name='dec_reg_NW' + str(line)
locals()[name] = linear_model.LinearRegression()
# Train the model using the training sets
name='jan_reg_NW' + str(line)
locals()[name].fit(jan.loc[:,'BPA_wind':],jan.loc[:,line])
name='feb_reg_NW' + str(line)
locals()[name].fit(feb.loc[:,'BPA_wind':],feb.loc[:,line])
name='mar_reg_NW' + str(line)
locals()[name].fit(mar.loc[:,'BPA_wind':],mar.loc[:,line])
name='apr_reg_NW' + str(line)
locals()[name].fit(apr.loc[:,'BPA_wind':],apr.loc[:,line])
name='may_reg_NW' + str(line)
locals()[name].fit(may.loc[:,'BPA_wind':],may.loc[:,line])
name='jun_reg_NW' + str(line)
locals()[name].fit(jun.loc[:,'BPA_wind':],jun.loc[:,line])
name='jul_reg_NW' + str(line)
locals()[name].fit(jul.loc[:,'BPA_wind':],jul.loc[:,line])
name='aug_reg_NW' + str(line)
locals()[name].fit(aug.loc[:,'BPA_wind':],aug.loc[:,line])
name='sep_reg_NW' + str(line)
locals()[name].fit(sep.loc[:,'BPA_wind':],sep.loc[:,line])
name='oct_reg_NW' + str(line)
locals()[name].fit(oct.loc[:,'BPA_wind':],oct.loc[:,line])
name='nov_reg_NW' + str(line)
locals()[name].fit(nov.loc[:,'BPA_wind':],nov.loc[:,line])
name='dec_reg_NW' + str(line)
locals()[name].fit(dec.loc[:,'BPA_wind':],dec.loc[:,line])
# Make predictions on the historical data (used for fitting) to compute residuals
predicted = []
rc = np.shape(jan.loc[:,'BPA_wind':])
n = rc[1]
for i in range(0,len(y)):
m = df_data.loc[i,'Month']
if m==1:
s = jan.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='jan_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==2:
s = feb.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='feb_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==3:
s = mar.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='mar_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==4:
s = apr.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='apr_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==5:
s = may.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='may_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==6:
s = jun.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='jun_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==7:
s = jul.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='jul_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==8:
s = aug.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='aug_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==9:
s = sep.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='sep_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==10:
s = oct.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='oct_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
elif m==11:
s = nov.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='nov_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
else:
s = dec.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
name='dec_reg_NW' + str(line)
p = locals()[name].predict(s)
predicted = np.append(predicted,p)
NWPaths_p[:,line_index] = predicted
# Residuals
residuals = predicted - y.values
export_residuals[:,line_index] = residuals
NWPaths_y[:,line_index] = y.values
# RMSE
RMSE = (np.sum((residuals**2))/len(residuals))**.5
# #R2
# a=st.pearsonr(y,predicted)
# print a[0]**2
ResidualsNWPaths = export_residuals
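# Editor's sketch (illustrative only, not wired into the pipeline above): the
# per-month regressions fitted via dynamically named locals() variables can be
# kept in a plain dict keyed by month number. This assumes the pandas /
# scikit-learn imports already made at the top of this script.
def fit_monthly_models(df, target_col, first_feature='BPA_wind'):
    """Return {month: fitted LinearRegression} for one transmission path."""
    models = {}
    for month, frame in df.groupby('Month'):
        reg = linear_model.LinearRegression()
        reg.fit(frame.loc[:, first_feature:], frame.loc[:, target_col])
        models[int(month)] = reg
    return models
# Prediction for a single day i then reduces to selecting the model for
# df_data.loc[i, 'Month'] and calling its .predict() on that day's features.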
###############################
# Other CA PATHS
###############################
#import data
df_data1 = pd.read_excel('Synthetic_demand_pathflows/OtherCA_Path_data.xlsx',sheet_name='Daily',header=0)
#find average temps
cities = ['Salem','Seattle','Portland','Eugene','Boise','Tuscon','Phoenix','Vegas','Fresno','Oakland','LA','SanDiego','Sacramento','SanJose','SanFran']
num_cities = len(cities)
num_days = len(df_data1)
AvgT = np.zeros((num_days,num_cities))
Wind = np.zeros((num_days,num_cities))
for i in cities:
n1 = i + '_AvgT'
n2 = i + '_Wind'
j = int(cities.index(i))
AvgT[:,j] = df_data1.loc[:,n1]
Wind[:,j] = df_data1.loc[:,n2]
#convert to degree days
HDD = np.zeros((num_days,num_cities))
CDD = np.zeros((num_days,num_cities))
for i in range(0,num_days):
for j in range(0,num_cities):
HDD[i,j] = np.max((0,65-AvgT[i,j]))
CDD[i,j] = np.max((0,AvgT[i,j] - 65))
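# Editor's note: the nested loop above is equivalent to the vectorized forms
# HDD = np.maximum(0.0, 65.0 - AvgT) and CDD = np.maximum(0.0, AvgT - 65.0).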
#separate wind speed by cooling/heating degree day
binary_CDD = CDD>0
binary_HDD = HDD>0
CDD_wind = np.multiply(Wind,binary_CDD)
HDD_wind = np.multiply(Wind,binary_HDD)
X1 = np.array(df_data1.loc[:,'Month':'Path66'])
X2 = np.column_stack((HDD,CDD,HDD_wind,CDD_wind))
cX = np.column_stack((X1,X2))
df_data = pd.DataFrame(cX)
df_data.rename(columns={0:'Month'}, inplace=True)
df_data.rename(columns={3:'Path61'}, inplace=True)
df_data.rename(columns={4:'Path42'}, inplace=True)
df_data.rename(columns={5:'Path24'}, inplace=True)
df_data.rename(columns={6:'Path45'}, inplace=True)
df_data.rename(columns={7:'BPA_wind'}, inplace=True)
jan = df_data.loc[df_data['Month'] == 1,:]
feb = df_data.loc[df_data['Month'] == 2,:]
mar = df_data.loc[df_data['Month'] == 3,:]
apr = df_data.loc[df_data['Month'] == 4,:]
may = df_data.loc[df_data['Month'] == 5,:]
jun = df_data.loc[df_data['Month'] == 6,:]
jul = df_data.loc[df_data['Month'] == 7,:]
aug = df_data.loc[df_data['Month'] == 8,:]
sep = df_data.loc[df_data['Month'] == 9,:]
oct = df_data.loc[df_data['Month'] == 10,:]
nov = df_data.loc[df_data['Month'] == 11,:]
dec = df_data.loc[df_data['Month'] == 12,:]
lines = ['Path61','Path42','Path24','Path45']
num_lines = len(lines)
export_residuals = np.zeros((len(cX),num_lines))
OtherCA_Paths_p= np.zeros((len(cX),num_lines))
OtherCA_Paths_y = np.zeros((len(cX),num_lines))
for line in lines:
y = df_data.loc[:,line]
line_index = lines.index(line)
#multivariate regression
name_1='jan_reg_CA' + str(line)
name_2='feb_reg_CA' + str(line)
name_3='mar_reg_CA' + str(line)
name_4='apr_reg_CA' + str(line)
name_5='may_reg_CA' + str(line)
name_6='jun_reg_CA' + str(line)
name_7='jul_reg_CA' + str(line)
name_8='aug_reg_CA' + str(line)
name_9='sep_reg_CA' + str(line)
name_10='oct_reg_CA' + str(line)
name_11='nov_reg_CA' + str(line)
name_12='dec_reg_CA' + str(line)
locals()[name_1] = linear_model.LinearRegression()
locals()[name_2] = linear_model.LinearRegression()
locals()[name_3] = linear_model.LinearRegression()
locals()[name_4] = linear_model.LinearRegression()
locals()[name_5] = linear_model.LinearRegression()
locals()[name_6] = linear_model.LinearRegression()
locals()[name_7] = linear_model.LinearRegression()
locals()[name_8] = linear_model.LinearRegression()
locals()[name_9] = linear_model.LinearRegression()
locals()[name_10] = linear_model.LinearRegression()
locals()[name_11] = linear_model.LinearRegression()
locals()[name_12] = linear_model.LinearRegression()
# Train the model using the training sets
locals()[name_1].fit(jan.loc[:,'BPA_wind':],jan.loc[:,line])
locals()[name_2].fit(feb.loc[:,'BPA_wind':],feb.loc[:,line])
locals()[name_3].fit(mar.loc[:,'BPA_wind':],mar.loc[:,line])
locals()[name_4].fit(apr.loc[:,'BPA_wind':],apr.loc[:,line])
locals()[name_5].fit(may.loc[:,'BPA_wind':],may.loc[:,line])
locals()[name_6].fit(jun.loc[:,'BPA_wind':],jun.loc[:,line])
locals()[name_7].fit(jul.loc[:,'BPA_wind':],jul.loc[:,line])
locals()[name_8].fit(aug.loc[:,'BPA_wind':],aug.loc[:,line])
locals()[name_9].fit(sep.loc[:,'BPA_wind':],sep.loc[:,line])
locals()[name_10].fit(oct.loc[:,'BPA_wind':],oct.loc[:,line])
locals()[name_11].fit(nov.loc[:,'BPA_wind':],nov.loc[:,line])
locals()[name_12].fit(dec.loc[:,'BPA_wind':],dec.loc[:,line])
# Make predictions using the testing set
predicted = []
rc = np.shape(jan.loc[:,'BPA_wind':])
n = rc[1]
for i in range(0,len(y)):
m = df_data.loc[i,'Month']
if m==1:
s = jan.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_1].predict(s)
predicted = np.append(predicted,p)
elif m==2:
s = feb.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_2].predict(s)
predicted = np.append(predicted,p)
elif m==3:
s = mar.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_3].predict(s)
predicted = np.append(predicted,p)
elif m==4:
s = apr.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_4].predict(s)
predicted = np.append(predicted,p)
elif m==5:
s = may.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_5].predict(s)
predicted = np.append(predicted,p)
elif m==6:
s = jun.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_6].predict(s)
predicted = np.append(predicted,p)
elif m==7:
s = jul.loc[i,'BPA_wind':]
s = np.reshape(s[:,None],(1,n))
p = locals()[name_7].predict(s)
predicted = np.append(predicted,p)
elif m==8:
s = aug.loc[i,'BPA_wind':]
s = | np.reshape(s[:,None],(1,n)) | numpy.reshape |
########################################################################
# Copyright 2021, UChicago Argonne, LLC
#
# Licensed under the BSD-3 License (the "License"); you may not use
# this file except in compliance with the License. You may obtain a
# copy of the License at
#
# https://opensource.org/licenses/BSD-3-Clause
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied. See the License for the specific language governing
# permissions and limitations under the License.
########################################################################
"""
date: 2021-11-02
author: matz
Test the behavior and attributes of unrodded DASSH Region instances
"""
########################################################################
import copy
import os
import dassh
import numpy as np
import pytest
def test_simple_unrodded_reg_instantiation(c_lrefl_simple):
"""Test that the unrodded region has all the right stuffs"""
assert c_lrefl_simple.vf['coolant'] == 0.25
assert c_lrefl_simple.vf['struct'] == 0.75
assert c_lrefl_simple.duct_ftf[1] == 0.116
assert len(c_lrefl_simple.temp['coolant_int']) == 1
assert c_lrefl_simple.temp['duct_mw'].shape == (1, 6)
# If it don't fail, it pass
c_lrefl_simple.temp['coolant_int'] *= 623.15
c_lrefl_simple._update_coolant_params(623.15)
def test_ur_reg_instantiation_fancy(testdir):
"""Make sure a fancy unrodded region can be instantiated"""
inp = dassh.DASSH_Input(
os.path.join(testdir, 'test_inputs', 'input_ur_conv_factor.txt'),
empty4c=True)
mat = {'coolant': dassh.Material('sodium'),
'duct': dassh.Material('ht9')}
# Test fully unrodded assembly
ur1 = dassh.region_unrodded.make_ur_asm(
'testboi', inp.data['Assembly']['fuel'], mat, 1.0)
print(ur1.mratio)
print(ur1._mratio)
print(inp.data['Assembly']['fuel']['convection_factor'])
assert ur1.mratio is not None
assert ur1.mratio != 1.0
# Test default in unrodded axial regions
ur2 = dassh.region_unrodded.make_ur_axialregion(
inp.data['Assembly']['control'], 'empty_cr', mat, 1.0)
assert ur2.mratio == 1.0
# Test nondefault in unrodded axial regions
ur2 = dassh.region_unrodded.make_ur_axialregion(
inp.data['Assembly']['control'], 'upper_cr', mat, 1.0)
assert ur2.mratio == 0.8
def test_unrodded_reg_clone_shallow(c_lrefl_simple):
"""Test that region attributes are properly copied"""
clone = c_lrefl_simple.clone(15.0)
non_matches = []
# Shallow copies
for attr in ['z', 'duct_ftf', 'duct_thickness', 'duct_perim',
'vf', 'area', 'total_area', '_params', 'x_pts']:
id_clone = id(getattr(clone, attr))
id_original = id(getattr(c_lrefl_simple, attr))
if id_clone == id_original: # They should be the same
continue
else:
non_matches.append(attr)
print(attr, id_clone, id_original)
assert len(non_matches) == 0
def test_unrodded_reg_clone_deep(c_lrefl_simple):
"""Test that region attributes are properly copied"""
clone = c_lrefl_simple.clone(15.0)
non_matches = []
# Deep copies
for attr in ['temp', 'flow_rate', 'coolant_params']:
id_clone = id(getattr(clone, attr))
id_original = id(getattr(c_lrefl_simple, attr))
if id_clone != id_original: # They should be different
continue
else:
non_matches.append(attr)
print(attr, id_clone, id_original)
assert len(non_matches) == 0
def test_simple_unrodded_reg_zero_power(c_lrefl_simple):
"""Test that no power temp calc returns no change"""
in_temp = c_lrefl_simple.temp['coolant_int']
t_gap = np.ones(6) * c_lrefl_simple.avg_duct_mw_temp
c_lrefl_simple.calculate(
0.1, {'refl': 0.0}, t_gap, 0.0, adiabatic_duct=True)
assert c_lrefl_simple.temp['coolant_int'] == pytest.approx(in_temp)
assert c_lrefl_simple.pressure_drop > 0.0
def test_simple_unrodded_reg_none_power(c_lrefl_simple):
"""Test that giving power=None returns no change in temps"""
in_temp = c_lrefl_simple.temp['coolant_int']
t_gap = np.ones(6) * c_lrefl_simple.avg_duct_mw_temp
c_lrefl_simple.calculate(
0.1, {'refl': None}, t_gap, 0.0, adiabatic_duct=True)
assert c_lrefl_simple.temp['coolant_int'] == pytest.approx(in_temp)
assert c_lrefl_simple.pressure_drop > 0.0
def test_simple_unrodded_reg_qmcdt(c_lrefl_simple):
"""Test that simple coolant calc returns proper result"""
# Set up some stuff
c_lrefl_simple.temp['coolant_int'] *= 623.15
c_lrefl_simple.temp['duct_mw'] *= 623.15
c_lrefl_simple.temp['duct_surf'] *= 623.15
in_temp = copy.deepcopy(c_lrefl_simple.temp['coolant_int'])
power = 10000.0
dz = 0.1
qlin = power / dz
# Calculate dT and estimate Q
c_lrefl_simple._update_coolant_params(623.15)
dT = c_lrefl_simple._calc_coolant_temp(dz, {'refl': qlin})
q_est = (c_lrefl_simple.coolant.heat_capacity *
c_lrefl_simple.flow_rate * dT)
print('m =', c_lrefl_simple.flow_rate)
print('Cp =', c_lrefl_simple.coolant.heat_capacity)
# print('dT =', c_lrefl_simple.temp['coolant_int'] - in_temp)
print('dT =', dT)
print('q (est) = ', q_est)
assert power == pytest.approx(q_est)
assert c_lrefl_simple.temp['coolant_int'] == in_temp
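# Editor's note: the assertions above rest on the single-node steady-state
# balance q_lin * dz = m_dot * c_p * dT, so the power reconstructed from the
# returned dT should equal the 10 kW deposited over the 0.1 m step, while
# _calc_coolant_temp leaves the stored coolant temperature untouched.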
def test_simple_unrodded_reg_duct(c_lrefl_simple):
"""Test that simple homog duct calc returns proper result"""
# Set up some stuff
c_lrefl_simple.temp['coolant_int'] *= 633.15
# Calculate dT and estimate Q
gap_temp = np.ones(6) * 623.15
gap_htc = np.ones(6) * 7.5e4 # made this up
# print(c_lrefl_simple.temp['duct_mw'][0])
c_lrefl_simple._update_coolant_params(633.15)
c_lrefl_simple._calc_duct_temp(gap_temp, gap_htc)
print('inner', c_lrefl_simple.temp['duct_surf'][0, 0])
print('midwall', c_lrefl_simple.temp['duct_mw'][0])
print('outer', c_lrefl_simple.temp['duct_surf'][0, 1])
assert all([623.15 < x < 633.15 for x in
c_lrefl_simple.temp['duct_mw'][0]])
# Coolant temp is greater than inner duct surface temp, which
# is greater than duct midwall temp
assert all([633.15
> c_lrefl_simple.temp['duct_surf'][0, 0, i]
> c_lrefl_simple.temp['duct_mw'][0, i]
for i in range(6)])
# Duct midwall temp is greater than outer duct surface temp,
# which is greater than gap coolant temp
assert all([c_lrefl_simple.temp['duct_mw'][0, i]
> c_lrefl_simple.temp['duct_surf'][0, 1, i]
> 623.15
for i in range(6)])
def test_mnh_ur_ebal_adiabatic(shield_ur_mnh):
"""Test multi-node homogeneous unrodded region energy balance
with adiabatic duct wall"""
n_steps = 100
dz = 0.001
power = {'refl': 100.0}
gap_temp = np.arange(625, 775, 25) # [625, 650, 675, 700, 725, 750]
fake_htc = np.ones(6) * 2e4
for i in range(n_steps):
shield_ur_mnh.calculate(dz, power, gap_temp, fake_htc,
ebal=True, adiabatic_duct=True)
assert np.sum(shield_ur_mnh.ebal['duct']) == 0.0
# Check power added real quick
tot_power_added = n_steps * dz * power['refl']
assert shield_ur_mnh.ebal['power'] - tot_power_added <= 1e-12
print('ENERGY ADDED (W): ', shield_ur_mnh.ebal['power'])
print('ENERGY FROM DUCT (W)', np.sum(shield_ur_mnh.ebal['duct']))
total = (np.sum(shield_ur_mnh.ebal['duct'])
+ shield_ur_mnh.ebal['power'])
print('TOTAL ENERGY INPUT (W)', total)
e_temp_rise = (shield_ur_mnh.flow_rate
* shield_ur_mnh.coolant.heat_capacity
* (shield_ur_mnh.avg_coolant_temp - 623.15))
print('ENERGY COOLANT DT (W):', e_temp_rise)
bal = total - e_temp_rise
print('DIFFERENCE (W)', bal)
assert bal <= 1e-7
def test_mnh_ur_ebal(shield_ur_mnh):
"""Test multi-node homogeneous unrodded region energy balance"""
dz = dassh.region_unrodded.calculate_min_dz(
shield_ur_mnh, 623.15, 773.15)
n_steps = 100
dz = 0.001 # less than dz calculated above
power = {'refl': 0.0}
gap_temp = np.arange(625, 775, 25) # [625, 650, 675, 700, 725, 750]
fake_htc = np.ones(6) * 2e4
for i in range(n_steps):
shield_ur_mnh.calculate(dz, power, gap_temp, fake_htc, ebal=True)
# Check power added real quick
tot_power_added = n_steps * dz * power['refl']
assert shield_ur_mnh.ebal['power'] - tot_power_added <= 1e-12
print('ENERGY ADDED (W): ', shield_ur_mnh.ebal['power'])
print('ENERGY FROM DUCT (W):', np.sum(shield_ur_mnh.ebal['duct']))
total = (np.sum(shield_ur_mnh.ebal['duct'])
+ shield_ur_mnh.ebal['power'])
print('TOTAL ENERGY INPUT (W):', total)
e_temp_rise = (shield_ur_mnh.flow_rate
* shield_ur_mnh.coolant.heat_capacity
* (shield_ur_mnh.avg_coolant_temp - 623.15))
print('ENERGY COOLANT DT (W):', e_temp_rise)
bal = total - e_temp_rise
print('DIFFERENCE (W):', bal)
assert bal <= 1e-7
def test_ur_asm_pressure_drop(c_shield_rr_params):
"""Test that the pressure drop calculation gives the same result
in RR and UR objects"""
input, mat = c_shield_rr_params
mat['coolant'] = dassh.Material('sodium') # get dynamic properties
fr = 0.50
# Make rodded region
rr = dassh.region_rodded.make_rr_asm(input, 'dummy', mat, fr)
# Make unrodded region; manually set UR params
input['use_low_fidelity_model'] = True
input['convection_factor'] = 'calculate'
ur = dassh.region_unrodded.make_ur_asm('testboi', input, mat, fr)
T_in = 623.15
dz = 0.01
dp_rr = 0.0
dp_ur = 0.0
for i in range(50):
T = T_in + i
rr._update_coolant_int_params(T)
ur._update_coolant_params(T)
dp_rr += rr.calculate_pressure_drop(dz)
dp_ur += ur.calculate_pressure_drop(dz)
print('dp_rr:', dp_rr)
print('dp_ur:', dp_ur)
diff = dp_rr - dp_ur
print(diff)
assert np.abs(diff) < 1e-8
def test_ur_dp_rr_equiv(testdir):
"""Test that the RR equivalent UR returns the same pressure drop"""
# Get answer to compare with
path_ans = os.path.join(
testdir, 'test_data', 'test_single_asm', 'dassh_reactor.pkl')
if os.path.exists(path_ans):
r_ans = dassh.reactor.load(path_ans)
else:
inpath = os.path.join(testdir, 'test_inputs', 'input_single_asm.txt')
outpath = os.path.join(testdir, 'test_results', 'test_single_asm')
inp = dassh.DASSH_Input(inpath)
r_ans = dassh.Reactor(inp, path=outpath, write_output=True)
r_ans.temperature_sweep()
ans = np.zeros(4)
for i in range(len(r_ans.assemblies[0].region)):
ans[i] = r_ans.assemblies[0].region[i].pressure_drop
ans[-1] = r_ans.assemblies[0].pressure_drop
# Get result to compare
inpath = os.path.join(testdir, 'test_inputs', 'input_single_asm_lf.txt')
outpath = os.path.join(testdir, 'test_results', 'test_single_asm_lf')
inp = dassh.DASSH_Input(inpath)
r_res = dassh.Reactor(inp, path=outpath, write_output=True)
r_res.temperature_sweep()
res = np.zeros(4)
for i in range(len(r_res.assemblies[0].region)):
res[i] = r_res.assemblies[0].region[i].pressure_drop
res[-1] = r_res.assemblies[0].pressure_drop
# Compare them
diff = (res - ans) / ans
assert np.max(np.abs(diff)) < 1e-3
def test_ur_dp(testdir):
"""Test that the pressure drop calculation for the unrodded region
is similar to that of the pin bundle when comparable parameters
are used"""
# Get answer to compare with
path_ans = os.path.join(
testdir, 'test_data', 'test_single_asm', 'dassh_reactor.pkl')
if os.path.exists(path_ans):
r_ans = dassh.reactor.load(path_ans)
else:
inpath = os.path.join(testdir, 'test_inputs', 'input_single_asm.txt')
outpath = os.path.join(testdir, 'test_results', 'test_single_asm')
inp = dassh.DASSH_Input(inpath)
r_ans = dassh.Reactor(inp, path=outpath, write_output=True)
r_ans.temperature_sweep()
# Just want pressure drop per unit length of rod bundle region
asm = r_ans.assemblies[0]
ans = asm.rodded.pressure_drop
ans /= asm.region_bnd[2] - asm.region_bnd[1]
# Get result to compare
inpath = os.path.join(testdir, 'test_inputs', 'input_single_asm_lf.txt')
outpath = os.path.join(testdir, 'test_results', 'test_single_asm_lf-2')
inp = dassh.DASSH_Input(inpath)
k = ('Assembly', 'fuel', 'AxialRegion', 'lower_refl')
inp.data[k[0]][k[1]][k[2]][k[3]]['hydraulic_diameter'] = \
asm.rodded.bundle_params['de']
inp.data[k[0]][k[1]][k[2]][k[3]]['vf_coolant'] = \
(asm.rodded.bundle_params['area']
/ (0.5 * np.sqrt(3) * asm.rodded.duct_ftf[0][0]**2))
# print('de', inp.data[k[0]][k[1]][k[2]][k[3]]['hydraulic_diameter'])
# print('vfc', inp.data[k[0]][k[1]][k[2]][k[3]]['vf_coolant'])
r_res = dassh.Reactor(inp, path=outpath, write_output=True)
r_res.temperature_sweep()
asm = r_res.assemblies[0]
res = asm.region[0].pressure_drop
res /= asm.region_bnd[1] - asm.region_bnd[0]
print('ans', ans)
print('res', res)
# Compare them
diff = (res - ans) / ans
print('rel diff', diff)
assert abs(diff) < 0.05 # 5 % difference is tolerable
@pytest.mark.skip(reason='toy problem for milos')
def test_ur_ctrl_asm_sweep(simple_ctrl_params):
"""Test the simple model approximation on a double-duct assembly"""
input, mat = simple_ctrl_params
mat = {'coolant': dassh.Material('sodium_se2anl_425'),
'duct': dassh.Material('ht9_se2anl_425')}
fr = 1.0
# Make rodded region
rr = dassh.region_rodded.make_rr_asm(input, 'dummy', mat.copy(), fr)
# Make unrodded region; manually set UR params
input['use_low_fidelity_model'] = True
input['convection_factor'] = "calculate"
ur = dassh.region_unrodded.make_ur_asm('testboi', input, mat.copy(), fr)
# Manual activation
for k in rr.temp.keys():
rr.temp[k] *= 623.15
try:
ur.temp[k] *= 623.15
except KeyError:
continue
# Calculate mesh size
dz_rr = dassh.region_rodded.calculate_min_dz(rr, 623.15, 773.15)
dz_ur = dassh.region_unrodded.calculate_min_dz(ur, 623.15, 773.15)
dz = min([dz_rr[0], dz_ur[0]])
print('dz_rr', dz_rr)
print('dz_ur (simple)', dz_ur)
print(rr.coolant.thermal_conductivity * rr._sf)
print(rr.coolant.density * rr.coolant.heat_capacity
* rr.coolant_int_params['eddy'])
print(1 - rr.pin_diameter / rr.pin_pitch)
assert 0
# Sweep
length = 1.0
n_steps = np.ceil(length / dz)
print(n_steps)
p_lin = 0.15e6
power_ur = {'refl': p_lin}
power_rr = make_rr_power(rr, power_ur)
gap_temp_ur = np.ones(6) * (350.0 + 273.15)
gap_temp_rr = make_rr_gap_temps_rr(rr, gap_temp_ur)
fake_htc = np.ones(2) * 2e4
for i in range(int(n_steps)):
ur._update_coolant_params(ur.avg_coolant_int_temp)
ur.calculate(dz, power_ur, gap_temp_ur, fake_htc, ebal=True)
rr._update_coolant_int_params(rr.avg_coolant_int_temp)
rr._update_coolant_byp_params(rr.avg_coolant_byp_temp)
rr.calculate(dz, power_rr, gap_temp_rr, fake_htc, ebal=True)
cp = ur.coolant.heat_capacity
print()
print('UR ENERGY FROM DUCT (W):', ur.ebal['from_duct'])
print('RR ENERGY FROM DUCT (W):', rr.ebal['from_duct'])
print()
print('UR COOLANT DT (C): ', ur.avg_coolant_temp - 623.15)
print('RR COOLANT DT (C): ', rr.avg_coolant_temp - 623.15)
print()
print('UR EBAL PER HEX SIDE')
print(ur.ebal['per_hex_side'])
print('RR EBAL PER HEX SIDE')
print(rr.ebal['per_hex_side'])
print()
print('UR EBAL')
print('added:', ur.ebal['power'])
print('from duct:', ur.ebal['from_duct'])
tot = ur.ebal['power'] + ur.ebal['from_duct']
print('sum:', tot)
dT = ur.avg_coolant_temp - 623.15
print('coolant rise:', dT * ur.flow_rate * cp)
print('bal:', tot - dT * ur.flow_rate * cp)
print()
print('RR EBAL')
print('added:', rr.ebal['power'])
print('from duct:', rr.ebal['from_duct'])
print('to byp:', rr.ebal['from_duct_byp'])
tot = rr.ebal['power'] + rr.ebal['from_duct_byp'] + rr.ebal['from_duct']
print('sum:', tot)
dT = rr.avg_coolant_temp - 623.15
print('coolant rise:', dT * rr.total_flow_rate * cp)
print('bal:', tot - dT * rr.total_flow_rate * cp)
print()
print('UR AVG COOLANT // DUCT TEMP')
print(ur.temp['coolant_int'])
print(ur.avg_coolant_temp - 273.15)
print(ur.avg_duct_mw_temp[0] - 273.15)
print(np.average(ur.temp['duct_surf'][-1, -1]) - 273.15)
print('RR AVG COOLANT // DUCT TEMP')
print(rr.avg_coolant_int_temp - 273.15)
print(rr.avg_coolant_temp - 273.15)
print(rr.avg_duct_mw_temp[0] - 273.15)
print(np.average(rr.temp['duct_surf'][-1, -1]) - 273.15)
print()
# print(c_shield_rr.temp['coolant_int'])
assert 0
@pytest.mark.skip(reason='lol')
def test_ur_vs_rr_ebal(shield_ur_simple, shield_ur_mnh, c_shield_rr,
c_shield_simple_rr):
"""Compare energy balance in rodded and un-rodded regions"""
c_shield_rr._conv_approx = True
c_shield_simple_rr._conv_approx = True
# shield_ur_mnh._params['xhtc'] = shield_ur_mnh.vf['coolant']
# shield_ur_simple._params['xhtc'] = shield_ur_mnh.vf['coolant']
# shield_ur_mnh._params['xhtc'] = 0.577442107490257
# shield_ur_simple._params['xhtc'] = 0.577442107490257
# shield_ur_mnh._params['xhtc'] = 0.12
# shield_ur_simple._params['xhtc'] = 0.12
shield_ur_mnh._params['lowflow'] = True
shield_ur_simple._params['lowflow'] = True
# print(c_shield_rr.params['area'][0]
# * c_shield_rr.subchannel.n_sc['coolant']['interior'])
# print(c_shield_rr.params['area'][1]
# * c_shield_rr.subchannel.n_sc['coolant']['edge']
# + c_shield_rr.params['area'][2]
# * c_shield_rr.subchannel.n_sc['coolant']['corner'])
# print(c_shield_rr._sf)
c_shield_rr._sf = 1.0
dz_rr = dassh.region_rodded.calculate_min_dz(
c_shield_rr, 623.15, 773.15)
# dz_rr2 = dassh.region_rodded.calculate_min_dz(
# c_shield_simple_rr, 623.15, 773.15)
dz_ur1 = dassh.region_unrodded.calculate_min_dz(
shield_ur_simple, 623.15, 773.15)
dz_ur2 = dassh.region_unrodded.calculate_min_dz(
shield_ur_mnh, 623.15, 773.15)
dz = min([dz_rr[0], dz_ur1[0], dz_ur2[0]])
print('dz_rr (m)', dz_rr)
# print('dz_rr_7pin (m)', dz_rr2)
print('dz_ur (simple)', dz_ur1)
print('dz_ur (6 node)', dz_ur2)
n_steps = 100
# p_lin = 1000.0
p_lin = 0.0
power_ur = {'refl': p_lin}
power_rr = {'pins': np.ones(61) * p_lin / 61,
'duct': np.zeros(
c_shield_rr.subchannel.n_sc['duct']['total']),
'cool': np.zeros(
c_shield_rr.subchannel.n_sc['coolant']['total'])
}
power_rr2 = {'pins': np.ones(7) * p_lin / 7,
'duct': np.zeros(c_shield_simple_rr
.subchannel.n_sc['duct']['total']),
'cool': np.zeros(c_shield_simple_rr
.subchannel.n_sc['coolant']['total'])}
# gap_temp_ur = np.linspace(625, 750, 6) # [625, 650, 675, 700, 725, 750]
# gap_temp_rr = np.linspace(625, 750, (c_shield_rr.subchannel
# .n_sc['duct']['total']))
# gap_temp_ur = 623.15 * np.ones(6)
# gap_temp_rr = 623.15 * np.ones((c_shield_rr.subchannel
# .n_sc['duct']['total']))
gap_temp_ur = np.ones(6) * 700.0
# gap_temp_ur = np.array([623.15 + 10, 623.15 - 10, 623.15 - 20,
# 623.15 - 10, 623.15 + 10, 623.15 + 20])
duct_per_side = int(c_shield_rr.subchannel.n_sc['duct']['total'] / 6)
gap_temp_rr = np.linspace(np.roll(gap_temp_ur, 1),
gap_temp_ur,
duct_per_side + 1)
gap_temp_rr = gap_temp_rr.transpose()
gap_temp_rr = gap_temp_rr[:, 1:]
gap_temp_rr = | np.hstack(gap_temp_rr) | numpy.hstack |
import numpy as np
import os, sys, subprocess
import copy
from openmdao.api import ExplicitComponent
from wisdem.ccblade.ccblade import CCAirfoil, CCBlade
from wisdem.ccblade.Polar import Polar
import csv # for exporting airfoil polar tables
import matplotlib.pyplot as plt
import time
import multiprocessing as mp
from functools import partial
from wisdem.commonse.mpi_tools import MPI
def runXfoil(xfoil_path, x, y, Re, AoA_min=-9, AoA_max=25, AoA_inc=0.5, Ma=0.0, multi_run=False, MPI_run=False):
#This function is used to create and run xfoil simulations for a given set of airfoil coordinates
# Set initial parameters needed in xfoil
numNodes = 310 # number of panels to use (260...but increases if needed)
#dist_param = 0.15 # TE/LE panel density ratio (0.15)
dist_param = 0.12 #This is the current value I am trying in order to help with convergence (!bem)
#IterLimit = 100 # Maximum number of iterations to try and get to convergence
IterLimit = 10 #This decreased IterLimit will speed up analysis (!bem)
#panelBunch = 1.5 # Panel bunching parameter to bunch near larger changes in profile gradients (1.5)
panelBunch = 1.6 #This is the value I am currently using to try and improve convergence (!bem)
#rBunch = 0.15 # Region to LE bunching parameter (used to put additional panels near flap hinge) (0.15)
rBunch = 0.08 #This is the current value that I am using (!bem)
XT1 = 0.55 # Defining left boundary of bunching region on top surface (should be before flap)
# XT1 = 1.0
#XT2 = 0.85 # Defining right boundary of bunching region on top surface (should be after flap)
XT2 = 0.9 #This is the value I am currently using (!bem)
# XT2 = 1.0
XB1 = 0.55 # Defining left boundary of bunching region on bottom surface (should be before flap)
# XB1 = 1.0
#XB2 = 0.85 # Defining right boundary of bunching region on bottom surface (should be after flap)
XB2 = 0.9 #This is the current value that I am using (!bem)
# XB2 = 1.0
runFlag = True # Flag used in error handling
dfdn = -0.5 # Change in angle of attack during initialization runs down to AoA_min
runNum = 0 # Initialized run number
dfnFlag = False # This flag is used to determine if xfoil needs to be re-run if the simulation fails due to convergence issues at low angles of attack
# Set filenames
# if multi_run or MPI_run:
pid = mp.current_process().pid
print('Running xfoil on PID = {}'.format(pid))
xfoil_rundir = 'xfoil_run_p{}'.format(pid)
if not os.path.exists(xfoil_rundir):
os.makedirs(xfoil_rundir)
LoadFlnmAF = os.path.join(xfoil_rundir,'airfoil_p{}.txt'.format(pid))
saveFlnmPolar = os.path.join(xfoil_rundir,'Polar_p{}.txt'.format(pid))
xfoilFlnm = os.path.join(xfoil_rundir,'xfoil_input_p{}.txt'.format(pid))
NUL_fname = os.path.join(xfoil_rundir,'NUL_p{}'.format(pid))
# if MPI_run:
# rank = MPI.COMM_WORLD.Get_rank()
# LoadFlnmAF = 'airfoil_r{}.txt'.format(rank) # This is a temporary file that will be deleted after it is no longer needed
# saveFlnmPolar = 'Polar_r{}.txt'.format(rank) # file name of output xfoil polar (can be useful to look at during debugging...can also delete at end if you don't want it stored)
# xfoilFlnm = 'xfoil_input_r{}.txt'.format(rank) # Xfoil run script that will be deleted after it is no longer needed
# else:
# LoadFlnmAF = 'airfoil.txt' # This is a temporary file that will be deleted after it is no longer needed
# saveFlnmPolar = 'Polar.txt' # file name of output xfoil polar (can be useful to look at during debugging...can also delete at end if you don't want it stored)
# xfoilFlnm = 'xfoil_input.txt' # Xfoil run script that will be deleted after it is no longer needed
# NUL_fname = 'NUL'
t0 = time.time()
while runFlag:
# Cleaning up old files to prevent replacement issues
if os.path.exists(saveFlnmPolar):
os.remove(saveFlnmPolar)
if os.path.exists(xfoilFlnm):
os.remove(xfoilFlnm)
if os.path.exists(LoadFlnmAF):
os.remove(LoadFlnmAF)
if os.path.exists(NUL_fname):
os.remove(NUL_fname)
# Writing temporary airfoil coordinate file for use in xfoil
dat=np.array([x,y])
np.savetxt(LoadFlnmAF, dat.T, fmt=['%f','%f'])
# %% Writes the Xfoil run script to read in coordinates, create flap, re-pannel, and create polar
# Create the airfoil with flap
fid = open(xfoilFlnm,"w")
fid.write("PLOP \n G \n\n") # turn off graphics
fid.write("LOAD \n")
fid.write( LoadFlnmAF + "\n" + "\n") # name of .txt file with airfoil coordinates
# fid.write( self.AFName + "\n") # set name of airfoil (internal to xfoil)
fid.write("GDES \n") # enter into geometry editing tools in xfoil
fid.write("UNIT \n") # normalize profile to unit chord
fid.write("EXEC \n \n") # move buffer airfoil to current airfoil
# Re-panel with specified number of panes and LE/TE panel density ratio
fid.write("PPAR\n")
fid.write("N \n" )
fid.write(str(numNodes) + "\n")
fid.write("P \n") # set panel bunching parameter
fid.write(str(panelBunch) + " \n")
fid.write("T \n") # set TE/LE panel density ratio
fid.write( str(dist_param) + "\n")
fid.write("R \n") # set region panel bunching ratio
fid.write(str(rBunch) + " \n")
fid.write("XT \n") # set region panel bunching bounds on top surface
fid.write(str(XT1) +" \n" + str(XT2) + " \n")
fid.write("XB \n") # set region panel bunching bounds on bottom surface
fid.write(str(XB1) +" \n" + str(XB2) + " \n")
fid.write("\n\n")
# Set Simulation parameters (Re and max number of iterations)
fid.write("OPER\n")
fid.write("VISC \n")
fid.write( str(Re) + "\n") # this sets Re to value specified in yaml file as an input
#fid.write( "5000000 \n") # bem: I was having trouble geting convergence for some of the thinner airfoils at the tip for the large Re specified in the yaml, so I am hard coding in Re (5e6 is the highest I was able to get to using these paneling parameters)
fid.write("MACH\n")
fid.write(str(Ma)+" \n")
fid.write("ITER \n")
fid.write( str(IterLimit) + "\n")
# Run simulations for range of AoA
if dfnFlag: # bem: This if statement is for the case when there are issues getting convergence at AoA_min. It runs a preliminary set of AoA's down to AoA_min (does not save them)
for ii in range(int((0.0-AoA_min)/AoA_inc+1)):
fid.write("ALFA "+ str(0.0-ii*float(AoA_inc)) +"\n")
fid.write("PACC\n\n\n") #Toggle saving polar on
# fid.write("ASEQ 0 " + str(AoA_min) + " " + str(dfdn) + "\n") # The preliminary runs are just to get an initialize airfoil solution at min AoA so that the actual runs will not become unstable
for ii in range(int((AoA_max-AoA_min)/AoA_inc+1)): # bem: run each AoA separately (makes polar generation more tolerant of convergence errors)
fid.write("ALFA "+ str(AoA_min+ii*float(AoA_inc)) +"\n")
#fid.write("ASEQ " + str(AoA_min) + " " + "16" + " " + str(AoA_inc) + "\n") #run simulations for desired range of AoA using a coarse step size in AoA up to 16 deg
#fid.write("ASEQ " + "16.5" + " " + str(AoA_max) + " " + "0.1" + "\n") #run simulations for desired range of AoA using a fine AoA increment up to final AoA to help with convergence issues at high Re
fid.write("PWRT\n") #Toggle saving polar off
fid.write(saveFlnmPolar + " \n \n")
fid.write("QUIT \n")
fid.close()
# Run the XFoil calling command
try:
subprocess.run([xfoil_path], stdin=open(xfoilFlnm,'r'), stdout=open(NUL_fname, 'w'), timeout=300)
flap_polar = np.loadtxt(saveFlnmPolar,skiprows=12)
except subprocess.TimeoutExpired:
print('XFOIL timeout on p{}'.format(pid))
try:
flap_polar = np.loadtxt(saveFlnmPolar,skiprows=12) # Sometimes xfoil will hang up but still generate a good set of polars
except:
flap_polar = [] # in case no convergence was achieved
except:
flap_polar = [] # in case no convergence was achieved
# Error handling (re-run simulations with more panels if there is not enough data in polars)
if np.size(flap_polar) < 3: # This case is if there are convergence issues at the lowest angles of attack
plen = 0
a0 = 0
a1 = 0
dfdn = -0.25 # decrease AoA step size during initialization to try and get convergence in the next run
dfnFlag = True # Set flag to run initialization AoA down to AoA_min
print('XFOIL convergence issues on p{}'.format(pid))
else:
plen = len(flap_polar[:,0]) # Number of AoA's in polar
a0 = flap_polar[-1,0] # Maximum AoA in Polar
a1 = flap_polar[0,0] # Minimum AoA in Polar
dfnFlag = False # Set flag so that you don't need to run initialization sequence
if a0 > 19. and plen >= 40 and a1 < -12.5: # a0 > 19 checks that the polar entered the stall regime, plen >= 40 ensures there are enough AoA's in the polar for interpolation, and a1 < -12.5 ensures the polar contains negative stall.
runFlag = False # No need ro re-run polar
if numNodes > 310:
print('Xfoil completed after {} attempts on p{}.'.format(runNum+1, pid))
else:
numNodes += 50 # Re-run with additional panels
# AoA_inc *= 0.5
runNum += 1 # Update run number
# AoA_min = -9
# AoA_max = 25
# if numNodes > 480:
if runNum > 6:
# Warning('NO convergence in XFoil achieved!')
print('No convergence in XFOIL achieved on p{}!'.format(pid))
if not os.path.exists('xfoil_errorfiles'):
os.makedirs('xfoil_errorfiles')
try:
os.rename(xfoilFlnm, os.path.join('xfoil_errorfiles', xfoilFlnm))
except:
pass
try:
os.rename(saveFlnmPolar, os.path.join('xfoil_errorfiles', saveFlnmPolar))
except:
pass
try:
os.rename(LoadFlnmAF, os.path.join('xfoil_errorfiles', LoadFlnmAF))
except:
pass
try:
os.rename(NUL_fname, os.path.join('xfoil_errorfiles', NUL_fname))
except:
pass
break
print('Refining paneling to ' + str(numNodes) + ' nodes')
# Load back in polar data to be saved in instance variables
#flap_polar = np.loadtxt(LoadFlnmAF,skiprows=12) # (note, we are assuming raw Xfoil polars when skipping the first 12 lines)
# self.af_flap_polar = flap_polar
# self.flap_polar_flnm = saveFlnmPolar # Not really needed unless you keep the files and want to load them later
# Delete Xfoil run script file
if os.path.exists(xfoilFlnm):
os.remove(xfoilFlnm)
if os.path.exists(saveFlnmPolar): # bem: For now leave the files, but eventually we can get rid of them (remove # in front of commands) so that we don't have to store them
os.remove(saveFlnmPolar)
if os.path.exists(LoadFlnmAF):
os.remove(LoadFlnmAF)
if os.path.exists(NUL_fname):
os.remove(NUL_fname)
if os.path.exists(xfoil_rundir):
os.rmdir(xfoil_rundir)
print('Xfoil calls on p{} completed in {} seconds'.format(pid, time.time()-t0))
return flap_polar
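# Editor's sketch (hypothetical usage, not part of the original module): a
# minimal call to runXfoil using a symmetric NACA 00xx-style thickness
# distribution and a placeholder path to the XFOIL binary. Wrapped in a helper
# so nothing runs on import.
def _example_runxfoil_call(xfoil_path='/path/to/xfoil'):
    t = 0.12  # 12% thick symmetric section, unit chord
    xc = np.linspace(0.0, 1.0, 120)
    yt = 5.0 * t * (0.2969 * np.sqrt(xc) - 0.1260 * xc - 0.3516 * xc**2
                    + 0.2843 * xc**3 - 0.1015 * xc**4)
    # Closed coordinate loop: trailing edge -> upper surface -> leading edge -> lower surface
    x = np.concatenate((xc[::-1], xc[1:]))
    y = np.concatenate((yt[::-1], -yt[1:]))
    return runXfoil(xfoil_path, x, y, Re=1.0e6, AoA_min=-5, AoA_max=15, AoA_inc=1.0)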
class RunXFOIL(ExplicitComponent):
# Openmdao component to run XFOIL and re-compute polars
def initialize(self):
self.options.declare('modeling_options')
self.options.declare('opt_options')
def setup(self):
rotorse_options = self.options['modeling_options']['WISDEM']['RotorSE']
self.n_span = n_span = rotorse_options['n_span']
self.n_te_flaps = n_te_flaps = rotorse_options['n_te_flaps']
self.n_tab = rotorse_options['n_tab']
self.n_aoa = n_aoa = rotorse_options['n_aoa'] # Number of angle of attacks
self.n_Re = n_Re = rotorse_options['n_Re'] # Number of Reynolds, so far hard set at 1
self.n_tab = n_tab = rotorse_options['n_tab']# Number of tabulated data. For distributed aerodynamic control this could be > 1
self.n_xy = n_xy = rotorse_options['n_xy'] # Number of coordinate points to describe the airfoil geometry
self.xfoil_path = self.options['modeling_options']['xfoil']['path']
# Use openfast cores for parallelization of xfoil
FASTpref = self.options['modeling_options']['openfast']
xfoilpref = self.options['modeling_options']['xfoil']
try:
if xfoilpref['run_parallel']:
self.cores = mp.cpu_count()
else:
self.cores = 1
except KeyError:
self.cores = 1
if MPI and self.options['modeling_options']['Level3']['flag'] and self.options['opt_options']['driver']['optimization']['flag']:
self.mpi_comm_map_down = FASTpref['analysis_settings']['mpi_comm_map_down']
# Inputs blade outer shape
self.add_input('s', val=np.zeros(n_span), desc='1D array of the non-dimensional spanwise grid defined along blade axis (0-blade root, 1-blade tip)')
self.add_input('r', val=np.zeros(n_span), units='m', desc='radial locations where blade is defined (should be increasing and not go all the way to hub or tip)')
self.add_input('coord_xy_interp', val=np.zeros((n_span, n_xy, 2)), desc='3D array of the non-dimensional x and y airfoil coordinates of the airfoils interpolated along span for n_span stations.')
self.add_input('chord', val=np.zeros(n_span), units='m', desc='chord length at each section')
# Inputs flaps
self.add_input('span_end', val=np.zeros(n_te_flaps), desc='1D array of the positions along blade span where the trailing edge flap(s) end. Only values between 0 and 1 are meaningful.')
self.add_input('span_ext', val=np.zeros(n_te_flaps), desc='1D array of the extensions along blade span of the trailing edge flap(s). Only values between 0 and 1 are meaningful.')
self.add_input('chord_start',val=np.zeros(n_te_flaps), desc='1D array of the positions along chord where the trailing edge flap(s) start. Only values between 0 and 1 are meaningful.')
self.add_input('delta_max_pos', val=np.zeros(n_te_flaps), units='rad', desc='1D array of the max angle of the trailing edge flaps.')
self.add_input('delta_max_neg', val=np.zeros(n_te_flaps), units='rad', desc='1D array of the min angle of the trailing edge flaps.')
# Inputs control
self.add_input('max_TS', val=0.0, units='m/s', desc='Maximum allowed blade tip speed.')
self.add_input('rated_TSR', val=0.0, desc='Constant tip speed ratio in region II.')
# Inputs environment
self.add_input('rho_air', val=1.225, units='kg/m**3', desc='Density of air')
self.add_input('mu_air', val=1.81e-5, units='kg/(m*s)', desc='Dynamic viscosity of air')
self.add_input('speed_sound_air', val=340., units='m/s', desc='Speed of sound in air.')
# Inputs polars
self.add_input('aoa', val=np.zeros(n_aoa), units='rad', desc='1D array of the angles of attack used to define the polars of the airfoils. All airfoils defined in openmdao share this grid.')
self.add_input('cl_interp', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the lift coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
self.add_input('cd_interp', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the drag coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
self.add_input('cm_interp', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the moment coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
# Outputs flap geometry
self.add_output('span_start', val=np.zeros(n_te_flaps), desc='1D array of the positions along blade span where the trailing edge flap(s) start. Only values between 0 and 1 are meaningful.')
# Output polars
self.add_output('cl_interp_flaps', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the lift coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
self.add_output('cd_interp_flaps', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the drag coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
self.add_output('cm_interp_flaps', val=np.zeros((n_span, n_aoa, n_Re, n_tab)), desc='4D array with the moment coefficients of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the angles of attack, dimension 2 is along the Reynolds number, dimension 3 is along the number of tabs, which may describe multiple sets at the same station, for example in presence of a flap.')
self.add_output('flap_angles', val=np.zeros((n_span, n_Re, n_tab)), units = 'deg', desc='3D array with the flap angles of the airfoils. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the Reynolds number, dimension 2 is along the number of tabs, which may describe multiple sets at the same station.')
self.add_output('Re_loc', val=np.zeros((n_span, n_Re, n_tab)), desc='3D array with the Re. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the Reynolds number, dimension 2 is along the number of tabs, which may describe multiple sets at the same station.')
self.add_output('Ma_loc', val=np.zeros((n_span, n_Re, n_tab)), desc='3D array with the Mach number. Dimension 0 is along the blade span for n_span stations, dimension 1 is along the Reynolds number, dimension 2 is along the number of tabs, which may describe multiple sets at the same station.')
# initialize saved data polar data.
# - This is filled if we're not changing the flaps, so we don't need to re-run xfoil every time
self.saved_polar_data = {}
def compute(self, inputs, outputs):
# If trailing edge flaps are present, compute the perturbed profiles with XFOIL
self.flap_profiles = [{} for i in range(self.n_span)]
outputs['span_start'] = inputs['span_end'] - inputs['span_ext']
if self.n_te_flaps > 0:
try:
from scipy.ndimage import gaussian_filter
except:
print('Cannot import the library gaussian_filter from scipy. Please check the conda environment and potential conflicts between numpy and scipy')
xfoil_kw = {}
if MPI:
xfoil_kw['MPI_run'] = True
elif self.cores > 1:
xfoil_kw['multi_run'] = True
for i in range(self.n_span):
# Loop through the flaps specified in yaml file
for k in range(self.n_te_flaps):
# Only create flap geometries where the yaml file specifies there is a flap (Currently going to nearest blade station location)
if inputs['s'][i] >= outputs['span_start'][k] and inputs['s'][i] <= inputs['span_end'][k]:
self.flap_profiles[i]['flap_angles']= []
# Initialize the profile coordinates to zeros
self.flap_profiles[i]['coords'] = np.zeros([self.n_xy,2,self.n_tab])
# Ben:I am not going to force it to include delta=0. If this is needed, a more complicated way of getting flap deflections to calculate is needed.
flap_angles = np.linspace(inputs['delta_max_neg'][k],inputs['delta_max_pos'][k],self.n_tab) * 180. / np.pi
# Loop through the flap angles
for ind, fa in enumerate(flap_angles):
# NOTE: negative flap angles are deflected to the suction side, i.e. positively along the positive z- (radial) axis
af_flap = CCAirfoil(np.array([1,2,3]), np.array([100]), np.zeros(3), | np.zeros(3) | numpy.zeros |
import lmfit
import numpy as np
from numpy.linalg import inv
import scipy as sp
import itertools
import matplotlib as mpl
from collections import OrderedDict, defaultdict
from pycqed.utilities import timer as tm_mod
from sklearn.mixture import GaussianMixture as GM
from sklearn.tree import DecisionTreeClassifier as DTC
from pycqed.analysis import fitting_models as fit_mods
from pycqed.analysis import analysis_toolbox as a_tools
import pycqed.analysis_v2.base_analysis as ba
import pycqed.analysis_v2.readout_analysis as roa
from pycqed.analysis_v2.readout_analysis import \
Singleshot_Readout_Analysis_Qutrit as SSROQutrit
import pycqed.analysis_v2.tomography_qudev as tomo
from pycqed.analysis.tools.plotting import SI_val_to_msg_str
from copy import deepcopy
from pycqed.measurement.sweep_points import SweepPoints
from pycqed.measurement.calibration.calibration_points import CalibrationPoints
import matplotlib.pyplot as plt
from pycqed.analysis.three_state_rotation import predict_proba_avg_ro
import logging
from pycqed.utilities import math
from pycqed.utilities.general import find_symmetry_index
import pycqed.measurement.waveform_control.segment as seg_mod
import datetime as dt
log = logging.getLogger(__name__)
try:
import qutip as qtp
except ImportError as e:
log.warning('Could not import qutip, tomography code will not work')
class AveragedTimedomainAnalysis(ba.BaseDataAnalysis):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.single_timestamp = True
self.params_dict = {
'value_names': 'value_names',
'measured_values': 'measured_values',
'measurementstring': 'measurementstring',
'exp_metadata': 'exp_metadata'}
self.numeric_params = []
if kwargs.get('auto', True):
self.run_analysis()
def process_data(self):
self.metadata = self.raw_data_dict.get('exp_metadata', {})
if self.metadata is None:
self.metadata = {}
cal_points = self.metadata.get('cal_points', None)
cal_points = self.options_dict.get('cal_points', cal_points)
cal_points_list = roa.convert_channel_names_to_index(
cal_points, len(self.raw_data_dict['measured_values'][0]),
self.raw_data_dict['value_names'])
self.proc_data_dict['cal_points_list'] = cal_points_list
measured_values = self.raw_data_dict['measured_values']
cal_idxs = self._find_calibration_indices()
scales = [np.std(x[cal_idxs]) for x in measured_values]
observable_vectors = np.zeros((len(cal_points_list),
len(measured_values)))
observable_vector_stds = np.ones_like(observable_vectors)
for i, observable in enumerate(cal_points_list):
for ch_idx, seg_idxs in enumerate(observable):
x = measured_values[ch_idx][seg_idxs] / scales[ch_idx]
if len(x) > 0:
observable_vectors[i][ch_idx] = np.mean(x)
if len(x) > 1:
observable_vector_stds[i][ch_idx] = np.std(x)
Omtx = (observable_vectors[1:] - observable_vectors[0]).T
d0 = observable_vectors[0]
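# Editor's note: each segment's scaled response d is modeled as d ~= d0 + Omtx @ c;
# the loop below recovers c per segment through the normal-equation (least-squares)
# solution c = (Omtx^T Omtx)^-1 Omtx^T (d - d0).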
corr_values = np.zeros(
(len(cal_points_list) - 1, len(measured_values[0])))
for i in range(len(measured_values[0])):
d = np.array([x[i] / scale for x, scale in zip(measured_values,
scales)])
corr_values[:, i] = inv(Omtx.T.dot(Omtx)).dot(Omtx.T).dot(d - d0)
self.proc_data_dict['corr_values'] = corr_values
def measurement_operators_and_results(self):
"""
Converts the calibration points to measurement operators. Assumes that
the calibration points are ordered the same as the basis states for
the tomography calculation (e.g. for two qubits |gg>, |ge>, |eg>, |ee>).
Also assumes that each calibration in the passed cal_points uses
different segments.
Returns:
A tuple of
the measured values without the calibration points;
the measurement operators corresponding to each channel;
and the expected covariance matrix between the operators.
"""
d = len(self.proc_data_dict['cal_points_list'])
cal_point_idxs = [set() for _ in range(d)]
for i, idxs_lists in enumerate(self.proc_data_dict['cal_points_list']):
for idxs in idxs_lists:
cal_point_idxs[i].update(idxs)
cal_point_idxs = [sorted(list(idxs)) for idxs in cal_point_idxs]
cal_point_idxs = np.array(cal_point_idxs)
raw_data = self.raw_data_dict['measured_values']
means = [None] * d
residuals = [list() for _ in raw_data]
for i, cal_point_idx in enumerate(cal_point_idxs):
means[i] = [np.mean(ch_data[cal_point_idx]) for ch_data in raw_data]
for j, ch_residuals in enumerate(residuals):
ch_residuals += list(raw_data[j][cal_point_idx] - means[i][j])
means = np.array(means)
residuals = np.array(residuals)
Fs = [np.diag(ms) for ms in means.T]
Omega = residuals.dot(residuals.T) / len(residuals.T)
data_idxs = np.setdiff1d(np.arange(len(raw_data[0])),
cal_point_idxs.flatten())
data = np.array([ch_data[data_idxs] for ch_data in raw_data])
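# Editor's note: Fs holds one diagonal measurement operator per channel, with
# diagonal entries given by that channel's mean calibration responses; Omega is
# the empirical channel-channel covariance estimated from the calibration residuals.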
return data, Fs, Omega
def _find_calibration_indices(self):
cal_indices = set()
cal_points = self.options_dict['cal_points']
nr_segments = self.raw_data_dict['measured_values'].shape[-1]
for observable in cal_points:
if isinstance(observable, (list, np.ndarray)):
for idxs in observable:
cal_indices.update({idx % nr_segments for idx in idxs})
else: # assume dictionaries
for idxs in observable.values():
cal_indices.update({idx % nr_segments for idx in idxs})
return list(cal_indices)
def all_cal_points(d, nr_ch, reps=1):
"""
Generates a list of calibration points for a Hilbert space of dimension d,
with nr_ch channels and reps repetitions of each calibration point.
"""
return [[list(range(-reps*i, -reps*(i-1)))]*nr_ch for i in range(d, 0, -1)]
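# Editor's example: all_cal_points(3, 1, reps=2) returns
# [[[-6, -5]], [[-4, -3]], [[-2, -1]]], i.e. one group of segment indices per
# calibration state (two repetitions each, one readout channel), counted back
# from the end of the data array.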
class Single_Qubit_TimeDomainAnalysis(ba.BaseDataAnalysis):
def process_data(self):
"""
This takes care of rotating and normalizing the data if required.
This should work for several input types.
- I/Q values (2 quadratures + cal points)
- weight functions (1 quadrature + cal points)
- counts (no cal points)
There are several options possible to specify the normalization
using the options dict.
cal_points (tuple) of indices of the calibration points
zero_coord, one_coord
"""
cal_points = self.options_dict.get('cal_points', None)
zero_coord = self.options_dict.get('zero_coord', None)
one_coord = self.options_dict.get('one_coord', None)
if cal_points is None:
# default for all standard Timedomain experiments
cal_points = [list(range(-4, -2)), list(range(-2, 0))]
if len(self.raw_data_dict['measured_values']) == 1:
# if only one weight function is used rotation is not required
self.proc_data_dict['corr_data'] = a_tools.rotate_and_normalize_data_1ch(
self.raw_data_dict['measured_values'][0],
cal_zero_points=cal_points[0],
cal_one_points=cal_points[1])
else:
self.proc_data_dict['corr_data'], zero_coord, one_coord = \
a_tools.rotate_and_normalize_data(
data=self.raw_data_dict['measured_values'][0:2],
zero_coord=zero_coord,
one_coord=one_coord,
cal_zero_points=cal_points[0],
cal_one_points=cal_points[1])
# This should be added to the hdf5 datafile but cannot because of the
# way that the "new" analysis works.
# self.add_dataset_to_analysisgroup('Corrected data',
# self.proc_data_dict['corr_data'])
class MultiQubit_TimeDomain_Analysis(ba.BaseDataAnalysis):
"""
Base class for multi-qubit time-domain analyses.
Parameters that can be specified in the options dict:
- rotation_type: type of rotation to be done on the raw data.
Types of rotations supported by this class:
- 'cal_states' (default, no need to specify): rotation based on
CalibrationPoints for 1D and TwoD data. Supports 2 and 3 cal states
per qubit
- 'fixed_cal_points' (only for TwoD, with 2 cal states):
does PCA on the columns corresponding to the highest cal state
to find the indices of that cal state in the columns, then uses
those to get the data points for the other cal state. Does
rotation using the mean of the data points corresponding to the
two cal states as the zero and one coordinates to rotate
the data.
- 'PCA': ignores cal points and does pca; in the case of TwoD data it
does PCA row by row
- 'column_PCA': ignores cal points and does pca; in the case of TwoD data it
does PCA column by column
- 'global_PCA' (only for TwoD): does PCA on the whole 2D array
- main_sp (default: None): dict with keys qb_name used to specify which
sweep parameter should be used as axis label in plot
- functionality to split measurements with tiled sweep_points:
- split_params (default: None): list of strings with sweep parameters
names expected to be found in SweepPoints. Groups data by these
parameters and stores it in proc_data_dict['split_data_dict'].
- select_split (default: None): dict with keys qb_names and values
a tuple (sweep_param_name, value) or (sweep_param_name, index).
Stored in self.measurement_strings which specify the plot title.
The selected parameter must also be part of the split_params for
that qubit.
"""
def __init__(self,
qb_names: list=None, label: str='',
t_start: str=None, t_stop: str=None, data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True,
params_dict=None, numeric_params=None, **kwargs):
super().__init__(t_start=t_start, t_stop=t_stop, label=label,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only,
do_fitting=do_fitting, **kwargs)
self.qb_names = qb_names
self.params_dict = params_dict
if self.params_dict is None:
self.params_dict = {}
self.numeric_params = numeric_params
self.measurement_strings = {}
if self.numeric_params is None:
self.numeric_params = []
if not hasattr(self, "job"):
self.create_job(qb_names=qb_names, t_start=t_start, t_stop=t_stop,
label=label, data_file_path=data_file_path,
do_fitting=do_fitting, options_dict=options_dict,
extract_only=extract_only, params_dict=params_dict,
numeric_params=numeric_params, **kwargs)
if auto:
self.run_analysis()
def extract_data(self):
super().extract_data()
if self.qb_names is None:
self.qb_names = self.get_param_value('ro_qubits')
if self.qb_names is None:
raise ValueError('Provide the "qb_names."')
self.measurement_strings = {
qbn: self.raw_data_dict['measurementstring'] for qbn in
self.qb_names}
self.channel_map = self.get_param_value('meas_obj_value_names_map')
if self.channel_map is None:
# if the new name meas_obj_value_names_map is not found, try with
# the old name channel_map
self.channel_map = self.get_param_value('channel_map')
if self.channel_map is None:
value_names = self.raw_data_dict['value_names']
if np.ndim(value_names) > 0:
value_names = value_names
if 'w' in value_names[0]:
self.channel_map = a_tools.get_qb_channel_map_from_hdf(
self.qb_names, value_names=value_names,
file_path=self.raw_data_dict['folder'])
else:
self.channel_map = {}
for qbn in self.qb_names:
self.channel_map[qbn] = value_names
if len(self.channel_map) == 0:
raise ValueError('No qubit RO channels have been found.')
# creates self.sp
self.get_sweep_points()
def get_sweep_points(self):
self.sp = self.get_param_value('sweep_points')
if self.sp is not None:
self.sp = SweepPoints(self.sp)
def create_sweep_points_dict(self):
sweep_points_dict = self.get_param_value('sweep_points_dict')
hard_sweep_params = self.get_param_value('hard_sweep_params')
if self.sp is not None:
self.mospm = self.get_param_value('meas_obj_sweep_points_map')
main_sp = self.get_param_value('main_sp')
if self.mospm is None:
raise ValueError('When providing "sweep_points", '
'"meas_obj_sweep_points_map" has to be '
'provided in addition.')
if main_sp is not None:
self.proc_data_dict['sweep_points_dict'] = {}
for qbn, p in main_sp.items():
dim = self.sp.find_parameter(p)
if dim == 1:
log.warning(f"main_sp is only implemented for sweep "
f"dimension 0, but {p} is in dimension 1.")
self.proc_data_dict['sweep_points_dict'][qbn] = \
{'sweep_points': self.sp.get_sweep_params_property(
'values', dim, p)}
else:
self.proc_data_dict['sweep_points_dict'] = \
{qbn: {'sweep_points': self.sp.get_sweep_params_property(
'values', 0, self.mospm[qbn])[0]}
for qbn in self.qb_names}
elif sweep_points_dict is not None:
# assumed to be of the form {qbn1: swpts_array1, qbn2: swpts_array2}
self.proc_data_dict['sweep_points_dict'] = \
{qbn: {'sweep_points': sweep_points_dict[qbn]}
for qbn in self.qb_names}
elif hard_sweep_params is not None:
self.proc_data_dict['sweep_points_dict'] = \
{qbn: {'sweep_points': list(hard_sweep_params.values())[0][
'values']} for qbn in self.qb_names}
else:
self.proc_data_dict['sweep_points_dict'] = \
{qbn: {'sweep_points': self.data_filter(
self.raw_data_dict['hard_sweep_points'])}
for qbn in self.qb_names}
def create_sweep_points_2D_dict(self):
soft_sweep_params = self.get_param_value('soft_sweep_params')
if self.sp is not None:
self.proc_data_dict['sweep_points_2D_dict'] = OrderedDict()
for qbn in self.qb_names:
self.proc_data_dict['sweep_points_2D_dict'][qbn] = \
OrderedDict()
for pn in self.mospm[qbn]:
if pn in self.sp[1]:
self.proc_data_dict['sweep_points_2D_dict'][qbn][
pn] = self.sp[1][pn][0]
elif soft_sweep_params is not None:
self.proc_data_dict['sweep_points_2D_dict'] = \
{qbn: {pn: soft_sweep_params[pn]['values'] for
pn in soft_sweep_params}
for qbn in self.qb_names}
else:
if len(self.raw_data_dict['soft_sweep_points'].shape) == 1:
self.proc_data_dict['sweep_points_2D_dict'] = \
{qbn: {self.raw_data_dict['sweep_parameter_names'][1]:
self.raw_data_dict['soft_sweep_points']} for
qbn in self.qb_names}
else:
sspn = self.raw_data_dict['sweep_parameter_names'][1:]
self.proc_data_dict['sweep_points_2D_dict'] = \
{qbn: {sspn[i]: self.raw_data_dict['soft_sweep_points'][i]
for i in range(len(sspn))} for qbn in self.qb_names}
if self.get_param_value('percentage_done', 100) < 100:
            # This indicates an interrupted measurement.
# Remove non-measured sweep points in that case.
# raw_data_dict['soft_sweep_points'] is obtained in
# BaseDataAnalysis.add_measured_data(), and its length should
# always correspond to the actual number of measured soft sweep
# points.
ssl = len(self.raw_data_dict['soft_sweep_points'])
for sps in self.proc_data_dict['sweep_points_2D_dict'].values():
for k, v in sps.items():
sps[k] = v[:ssl]
def create_meas_results_per_qb(self):
measured_RO_channels = list(self.raw_data_dict['measured_data'])
meas_results_per_qb_raw = {}
meas_results_per_qb = {}
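        # The values of channel_map can be a single channel name (str) or a
        # list of names; readout channels are matched by substring containment
        # against the keys of raw_data_dict['measured_data'].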
for qb_name, RO_channels in self.channel_map.items():
meas_results_per_qb_raw[qb_name] = {}
meas_results_per_qb[qb_name] = {}
if isinstance(RO_channels, str):
meas_ROs_per_qb = [RO_ch for RO_ch in measured_RO_channels
if RO_channels in RO_ch]
for meas_RO in meas_ROs_per_qb:
meas_results_per_qb_raw[qb_name][meas_RO] = \
self.raw_data_dict[
'measured_data'][meas_RO]
meas_results_per_qb[qb_name][meas_RO] = \
self.data_filter(
meas_results_per_qb_raw[qb_name][meas_RO])
elif isinstance(RO_channels, list):
for qb_RO_ch in RO_channels:
meas_ROs_per_qb = [RO_ch for RO_ch in measured_RO_channels
if qb_RO_ch in RO_ch]
for meas_RO in meas_ROs_per_qb:
meas_results_per_qb_raw[qb_name][meas_RO] = \
self.raw_data_dict[
'measured_data'][meas_RO]
meas_results_per_qb[qb_name][meas_RO] = \
self.data_filter(
meas_results_per_qb_raw[qb_name][meas_RO])
else:
raise TypeError('The RO channels for {} must either be a list '
'or a string.'.format(qb_name))
self.proc_data_dict['meas_results_per_qb_raw'] = \
meas_results_per_qb_raw
self.proc_data_dict['meas_results_per_qb'] = \
meas_results_per_qb
def process_data(self):
super().process_data()
self.data_filter = self.get_param_value('data_filter')
prep_params = self.get_param_value('preparation_params',
default_value=dict())
self.data_with_reset = False
if self.data_filter is None:
if 'active' in prep_params.get('preparation_type', 'wait'):
reset_reps = prep_params.get('reset_reps', 1)
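                # The shots are assumed to be grouped as reset_reps reset
                # readouts followed by one measurement readout; e.g. for
                # reset_reps=3 the filter keeps indices 3, 7, 11, ... (the
                # final readout of each block).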
self.data_filter = lambda x: x[reset_reps::reset_reps+1]
self.data_with_reset = True
elif "preselection" in prep_params.get('preparation_type', 'wait'):
self.data_filter = lambda x: x[1::2] # filter preselection RO
if self.data_filter is None:
self.data_filter = lambda x: x
self.create_sweep_points_dict()
self.create_meas_results_per_qb()
# temporary fix for appending calibration points to x values but
# without breaking sequences not yet using this interface.
self.rotate = self.get_param_value('rotate', default_value=False)
cal_points = self.get_param_value('cal_points')
last_ge_pulses = self.get_param_value('last_ge_pulses',
default_value=False)
try:
self.cp = CalibrationPoints.from_string(cal_points)
# for now assuming the same for all qubits.
self.cal_states_dict = self.cp.get_indices(
self.qb_names, prep_params)[self.qb_names[0]]
cal_states_rots = self.cp.get_rotations(last_ge_pulses,
self.qb_names[0])[self.qb_names[0]] if self.rotate \
else None
self.cal_states_rotations = self.get_param_value(
'cal_states_rotations', default_value=cal_states_rots)
sweep_points_w_calpts = \
{qbn: {'sweep_points': self.cp.extend_sweep_points(
self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'], qbn)} for qbn in self.qb_names}
self.proc_data_dict['sweep_points_dict'] = sweep_points_w_calpts
except TypeError as e:
log.error(e)
log.warning("Failed retrieving cal point objects or states. "
"Please update measurement to provide cal point object "
"in metadata. Trying to get them using the old way ...")
self.cal_states_rotations = self.get_param_value(
'cal_states_rotations', default_value=None) \
if self.rotate else None
self.cal_states_dict = self.get_param_value('cal_states_dict',
default_value={})
if self.get_param_value('global_PCA') is not None:
log.warning('Parameter "global_PCA" is deprecated. Please set '
'rotation_type="global_PCA" instead.')
self.rotation_type = self.get_param_value(
'rotation_type',
default_value='cal_states' if self.rotate else 'no_rotation')
# create projected_data_dict
self.data_to_fit = deepcopy(self.get_param_value('data_to_fit'))
if self.data_to_fit is None:
# If we have cal points, but data_to_fit is not specified,
# choose a reasonable default value. In cases with only two cal
# points, this decides which projected plot is generated. (In
# cases with three cal points, we will anyways get all three
# projected plots.)
if 'e' in self.cal_states_dict.keys():
self.data_to_fit = {qbn: 'pe' for qbn in self.qb_names}
elif 'g' in self.cal_states_dict.keys():
self.data_to_fit = {qbn: 'pg' for qbn in self.qb_names}
else:
self.data_to_fit = {}
# TODO: Steph 15.09.2020
# This is a hack to allow list inside data_to_fit. These lists are
# currently only supported by MultiCZgate_CalibAnalysis
for qbn in self.data_to_fit:
if isinstance(self.data_to_fit[qbn], (list, tuple)):
self.data_to_fit[qbn] = self.data_to_fit[qbn][0]
if self.rotate or self.rotation_type == 'global_PCA':
self.cal_states_analysis()
else:
# this assumes data obtained with classifier detector!
# ie pg, pe, pf are expected to be in the value_names
self.proc_data_dict['projected_data_dict'] = OrderedDict()
for qbn, data_dict in self.proc_data_dict[
'meas_results_per_qb'].items():
self.proc_data_dict['projected_data_dict'][qbn] = OrderedDict()
for state_prob in ['pg', 'pe', 'pf']:
self.proc_data_dict['projected_data_dict'][qbn].update(
{state_prob: data for key, data in data_dict.items()
if state_prob in key})
if self.cal_states_dict is None:
self.cal_states_dict = {}
self.num_cal_points = np.array(list(
self.cal_states_dict.values())).flatten().size
# correct probabilities given calibration matrix
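        # The rows of probas_raw are assumed to be ordered (pg, pe, pf), and
        # the correction matrix M (presumably the readout assignment matrix)
        # maps true populations to measured probabilities, so the corrected
        # populations are recovered as p_corrected = inv(M).T @ p_raw below.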
if self.get_param_value("correction_matrix") is not None:
self.proc_data_dict['projected_data_dict_corrected'] = OrderedDict()
for qbn, data_dict in self.proc_data_dict[
'meas_results_per_qb'].items():
                self.proc_data_dict[
                    'projected_data_dict_corrected'][qbn] = OrderedDict()
probas_raw = np.asarray([data_dict[k] for k in data_dict
for state_prob in ['pg', 'pe', 'pf'] if
state_prob in k])
corr_mtx = self.get_param_value("correction_matrix")[qbn]
probas_corrected = np.linalg.inv(corr_mtx).T @ probas_raw
                self.proc_data_dict['projected_data_dict_corrected'][qbn].update(
                    {key: data for key, data in
                     zip(["pg", "pe", "pf"], probas_corrected)})
# get data_to_fit
self.proc_data_dict['data_to_fit'] = OrderedDict()
for qbn, prob_data in self.proc_data_dict[
'projected_data_dict'].items():
if qbn in self.data_to_fit:
self.proc_data_dict['data_to_fit'][qbn] = prob_data[
self.data_to_fit[qbn]]
# create msmt_sweep_points, sweep_points, cal_points_sweep_points
for qbn in self.qb_names:
if self.num_cal_points > 0:
self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points'] = \
self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][:-self.num_cal_points]
self.proc_data_dict['sweep_points_dict'][qbn][
'cal_points_sweep_points'] = \
self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][-self.num_cal_points::]
else:
self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points'] = self.proc_data_dict[
'sweep_points_dict'][qbn]['sweep_points']
self.proc_data_dict['sweep_points_dict'][qbn][
'cal_points_sweep_points'] = []
if self.options_dict.get('TwoD', False):
self.create_sweep_points_2D_dict()
# handle data splitting if needed
self.split_data()
def split_data(self):
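        """
        Split the processed data into subsets according to the sweep
        parameters listed in the 'split_params' option. For each unique value
        of a split parameter, a separate entry with its own sweep points,
        projected data and data_to_fit is stored in
        proc_data_dict['split_data_dict']. If 'select_split' is provided, the
        selected subset replaces the full data for that qubit.
        """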
def unique(l):
try:
return np.unique(l, return_inverse=True)
except Exception:
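                # np.unique can fail for object arrays (e.g. lists of unequal
                # arrays); fall back to comparing string representations.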
h = [repr(a) for a in l]
_, i, j = np.unique(h, return_index=True, return_inverse=True)
return l[i], j
split_params = self.get_param_value('split_params', [])
if not len(split_params):
return
pdd = self.proc_data_dict
pdd['split_data_dict'] = {}
for qbn in self.qb_names:
pdd['split_data_dict'][qbn] = {}
for p in split_params:
dim = self.sp.find_parameter(p)
sv = self.sp.get_sweep_params_property(
'values', param_names=p, dimension=dim)
usp, ind = unique(sv)
if len(usp) <= 1:
continue
svs = [self.sp.subset(ind == i, dim) for i in
range(len(usp))]
[s.remove_sweep_parameter(p) for s in svs]
sdd = {}
pdd['split_data_dict'][qbn][p] = sdd
for i in range(len(usp)):
subset = (np.concatenate(
[ind == i,
[True] * len(pdd['sweep_points_dict'][qbn][
'cal_points_sweep_points'])]))
sdd[i] = {}
sdd[i]['value'] = usp[i]
sdd[i]['sweep_points'] = svs[i]
d = pdd['sweep_points_dict'][qbn]
if dim == 0:
sdd[i]['sweep_points_dict'] = {
'sweep_points': d['sweep_points'][subset],
'msmt_sweep_points':
d['msmt_sweep_points'][ind == i],
'cal_points_sweep_points':
d['cal_points_sweep_points'],
}
sdd[i]['sweep_points_2D_dict'] = pdd[
'sweep_points_2D_dict'][qbn]
else:
sdd[i]['sweep_points_dict'] = \
pdd['sweep_points_dict'][qbn]
sdd[i]['sweep_points_2D_dict'] = {
k: v[ind == i] for k, v in pdd[
'sweep_points_2D_dict'][qbn].items()}
for d in ['projected_data_dict', 'data_to_fit']:
if isinstance(pdd[d][qbn], dict):
if dim == 0:
sdd[i][d] = {k: v[:, subset] for
k, v in pdd[d][qbn].items()}
else:
sdd[i][d] = {k: v[ind == i, :] for
k, v in pdd[d][qbn].items()}
else:
if dim == 0:
sdd[i][d] = pdd[d][qbn][:, subset]
else:
sdd[i][d] = pdd[d][qbn][ind == i, :]
select_split = self.get_param_value('select_split')
if select_split is not None:
for qbn, select in select_split.items():
p, v = select
if p not in pdd['split_data_dict'][qbn]:
                    log.warning(f"Split parameter {p} for {qbn} not "
                                f"found. Ignoring this selection.")
                    continue
try:
ind = [a['value'] for a in pdd['split_data_dict'][
qbn][p].values()].index(v)
except ValueError:
ind = v
try:
pdd['split_data_dict'][qbn][p][ind]
                except (KeyError, ValueError):
log.warning(f"Value {v} for split parameter {p} "
f"of {qbn} not found. Ignoring this "
f"selection.")
continue
for d in ['projected_data_dict', 'data_to_fit',
'sweep_points_dict', 'sweep_points_2D_dict']:
pdd[d][qbn] = pdd['split_data_dict'][qbn][p][ind][d]
self.measurement_strings[qbn] += f' ({p}: {v})'
def get_cal_data_points(self):
self.num_cal_points = np.array(list(
self.cal_states_dict.values())).flatten().size
do_PCA = self.rotation_type == 'PCA' or \
self.rotation_type == 'column_PCA'
self.cal_states_dict_for_rotation = OrderedDict()
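        # cal_states_rotations may be specified either globally, keyed
        # directly by state labels (e.g. {'g': 0, 'e': 1}), or per qubit,
        # keyed by qubit name; the check below detects which form was given.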
        cal_states_rotations = self.cal_states_rotations
        states = any(k in ('g', 'e', 'f') for k in cal_states_rotations)
for qbn in self.qb_names:
self.cal_states_dict_for_rotation[qbn] = OrderedDict()
if states:
cal_states_rot_qb = cal_states_rotations
else:
cal_states_rot_qb = cal_states_rotations[qbn]
for i in range(len(cal_states_rot_qb)):
cal_state = \
[k for k, idx in cal_states_rot_qb.items()
if idx == i][0]
self.cal_states_dict_for_rotation[qbn][cal_state] = \
None if do_PCA and self.num_cal_points != 3 else \
self.cal_states_dict[cal_state]
def cal_states_analysis(self):
self.get_cal_data_points()
self.proc_data_dict['projected_data_dict'] = OrderedDict(
{qbn: '' for qbn in self.qb_names})
for qbn in self.qb_names:
cal_states_dict = self.cal_states_dict_for_rotation[qbn]
if len(cal_states_dict) not in [0, 2, 3]:
raise NotImplementedError('Calibration states rotation is '
'currently only implemented for 0, '
'2, or 3 cal states per qubit.')
data_mostly_g = self.get_param_value('data_mostly_g',
default_value=True)
if self.get_param_value('TwoD', default_value=False):
if self.rotation_type == 'global_PCA':
self.proc_data_dict['projected_data_dict'].update(
self.global_pca_TwoD(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map, self.data_to_fit,
data_mostly_g=data_mostly_g))
elif len(cal_states_dict) == 3:
self.proc_data_dict['projected_data_dict'].update(
self.rotate_data_3_cal_states_TwoD(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map,
self.cal_states_dict_for_rotation))
elif self.rotation_type == 'fixed_cal_points':
rotated_data_dict, zero_coord, one_coord = \
self.rotate_data_TwoD_same_fixed_cal_idxs(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map, self.cal_states_dict_for_rotation,
self.data_to_fit)
self.proc_data_dict['projected_data_dict'].update(
rotated_data_dict)
self.proc_data_dict['rotation_coordinates'] = \
[zero_coord, one_coord]
else:
self.proc_data_dict['projected_data_dict'].update(
self.rotate_data_TwoD(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map, self.cal_states_dict_for_rotation,
self.data_to_fit, data_mostly_g=data_mostly_g,
column_PCA=self.rotation_type == 'column_PCA'))
else:
if len(cal_states_dict) == 3:
self.proc_data_dict['projected_data_dict'].update(
self.rotate_data_3_cal_states(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map,
self.cal_states_dict_for_rotation))
else:
self.proc_data_dict['projected_data_dict'].update(
self.rotate_data(
qbn, self.proc_data_dict['meas_results_per_qb'],
self.channel_map, self.cal_states_dict_for_rotation,
self.data_to_fit, data_mostly_g=data_mostly_g))
@staticmethod
def rotate_data_3_cal_states(qb_name, meas_results_per_qb, channel_map,
cal_states_dict):
# FOR 3 CAL STATES
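        # The two-channel raw data is averaged over the calibration-point
        # indices of each cal state to obtain reference coordinates, and
        # every data point is then assigned state probabilities via
        # predict_proba_avg_ro.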
rotated_data_dict = OrderedDict()
meas_res_dict = meas_results_per_qb[qb_name]
rotated_data_dict[qb_name] = OrderedDict()
cal_pts_idxs = list(cal_states_dict[qb_name].values())
cal_points_data = np.zeros((len(cal_pts_idxs), 2))
if list(meas_res_dict) == channel_map[qb_name]:
raw_data = np.array([v for v in meas_res_dict.values()]).T
for i, cal_idx in enumerate(cal_pts_idxs):
cal_points_data[i, :] = np.mean(raw_data[cal_idx, :],
axis=0)
rotated_data = predict_proba_avg_ro(raw_data, cal_points_data)
for i, state in enumerate(list(cal_states_dict[qb_name])):
rotated_data_dict[qb_name][f'p{state}'] = rotated_data[:, i]
else:
raise NotImplementedError('Calibration states rotation with 3 '
'cal states only implemented for '
'2 readout channels per qubit.')
return rotated_data_dict
@staticmethod
def rotate_data(qb_name, meas_results_per_qb, channel_map,
cal_states_dict, data_to_fit, data_mostly_g=True):
# ONLY WORKS FOR 2 CAL STATES
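        # Three cases are handled below: a single RO channel per qubit
        # (1D rotation/normalization), exactly the two channels listed in the
        # channel map (IQ rotation), and multiple readouts per qubit per
        # channel, where each readout suffix is rotated separately.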
meas_res_dict = meas_results_per_qb[qb_name]
rotated_data_dict = OrderedDict()
if len(cal_states_dict[qb_name]) == 0:
cal_zero_points = None
cal_one_points = None
else:
cal_zero_points = list(cal_states_dict[qb_name].values())[0]
cal_one_points = list(cal_states_dict[qb_name].values())[1]
rotated_data_dict[qb_name] = OrderedDict()
if len(meas_res_dict) == 1:
# one RO channel per qubit
if cal_zero_points is None and cal_one_points is None:
data = meas_res_dict[list(meas_res_dict)[0]]
data = (data - np.min(data))/(np.max(data) - np.min(data))
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]] = data
else:
rotated_data_dict[qb_name][data_to_fit[qb_name]] = \
a_tools.rotate_and_normalize_data_1ch(
data=meas_res_dict[list(meas_res_dict)[0]],
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
elif list(meas_res_dict) == channel_map[qb_name]:
# two RO channels per qubit
data, _, _ = a_tools.rotate_and_normalize_data_IQ(
data=np.array([v for v in meas_res_dict.values()]),
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]] = data
else:
# multiple readouts per qubit per channel
if isinstance(channel_map[qb_name], str):
qb_ro_ch0 = channel_map[qb_name]
else:
qb_ro_ch0 = channel_map[qb_name][0]
ro_suffixes = [s[len(qb_ro_ch0)+1::] for s in
list(meas_res_dict) if qb_ro_ch0 in s]
for i, ro_suf in enumerate(ro_suffixes):
if len(ro_suffixes) == len(meas_res_dict):
# one RO ch per qubit
if cal_zero_points is None and cal_one_points is None:
data = meas_res_dict[list(meas_res_dict)[i]]
data = (data - np.min(data))/(np.max(data) - np.min(data))
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][ro_suf] = data
else:
rotated_data_dict[qb_name][ro_suf] = \
a_tools.rotate_and_normalize_data_1ch(
data=meas_res_dict[list(meas_res_dict)[i]],
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
else:
# two RO ch per qubit
keys = [k for k in meas_res_dict if ro_suf in k]
correct_keys = [k for k in keys
if k[len(qb_ro_ch0)+1::] == ro_suf]
data_array = np.array([meas_res_dict[k]
for k in correct_keys])
data, _, _ = a_tools.rotate_and_normalize_data_IQ(
data=data_array,
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][ro_suf] = data
return rotated_data_dict
@staticmethod
def rotate_data_3_cal_states_TwoD(qb_name, meas_results_per_qb,
channel_map, cal_states_dict):
# FOR 3 CAL STATES
meas_res_dict = meas_results_per_qb[qb_name]
rotated_data_dict = OrderedDict()
rotated_data_dict[qb_name] = OrderedDict()
cal_pts_idxs = list(cal_states_dict[qb_name].values())
cal_points_data = np.zeros((len(cal_pts_idxs), 2))
if list(meas_res_dict) == channel_map[qb_name]:
# two RO channels per qubit
raw_data_arr = meas_res_dict[list(meas_res_dict)[0]]
for i, state in enumerate(list(cal_states_dict[qb_name])):
rotated_data_dict[qb_name][f'p{state}'] = np.zeros(
raw_data_arr.shape)
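            # Rotate each soft-sweep column separately: the calibration
            # coordinates are recomputed for every column and probabilities
            # are assigned via predict_proba_avg_ro.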
for col in range(raw_data_arr.shape[1]):
raw_data = np.concatenate([
v[:, col].reshape(len(v[:, col]), 1) for
v in meas_res_dict.values()], axis=1)
for i, cal_idx in enumerate(cal_pts_idxs):
cal_points_data[i, :] = np.mean(raw_data[cal_idx, :],
axis=0)
# rotated data is (raw_data_arr.shape[0], 3)
rotated_data = predict_proba_avg_ro(
raw_data, cal_points_data)
for i, state in enumerate(list(cal_states_dict[qb_name])):
rotated_data_dict[qb_name][f'p{state}'][:, col] = \
rotated_data[:, i]
else:
raise NotImplementedError('Calibration states rotation with 3 '
'cal states only implemented for '
'2 readout channels per qubit.')
# transpose data
for i, state in enumerate(list(cal_states_dict[qb_name])):
rotated_data_dict[qb_name][f'p{state}'] = \
rotated_data_dict[qb_name][f'p{state}'].T
return rotated_data_dict
@staticmethod
def global_pca_TwoD(qb_name, meas_results_per_qb, channel_map,
data_to_fit, data_mostly_g=True):
meas_res_dict = meas_results_per_qb[qb_name]
if list(meas_res_dict) != channel_map[qb_name]:
raise NotImplementedError('Global PCA is only implemented '
'for two-channel RO!')
raw_data_arr = meas_res_dict[list(meas_res_dict)[0]]
rotated_data_dict = OrderedDict({qb_name: OrderedDict()})
rotated_data_dict[qb_name][data_to_fit[qb_name]] = \
deepcopy(raw_data_arr.transpose())
data_array = np.array(
[v.T.flatten() for v in meas_res_dict.values()])
rot_flat_data, _, _ = \
a_tools.rotate_and_normalize_data_IQ(
data=data_array)
data = np.reshape(rot_flat_data, raw_data_arr.T.shape)
data = a_tools.set_majority_sign(data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]] = data
return rotated_data_dict
@staticmethod
def rotate_data_TwoD(qb_name, meas_results_per_qb, channel_map,
cal_states_dict, data_to_fit,
column_PCA=False, data_mostly_g=True):
meas_res_dict = meas_results_per_qb[qb_name]
rotated_data_dict = OrderedDict()
if len(cal_states_dict[qb_name]) == 0:
cal_zero_points = None
cal_one_points = None
else:
cal_zero_points = list(cal_states_dict[qb_name].values())[0]
cal_one_points = list(cal_states_dict[qb_name].values())[1]
rotated_data_dict[qb_name] = OrderedDict()
if len(meas_res_dict) == 1:
# one RO channel per qubit
raw_data_arr = meas_res_dict[list(meas_res_dict)[0]]
rotated_data_dict[qb_name][data_to_fit[qb_name]] = \
deepcopy(raw_data_arr.transpose())
if column_PCA:
for row in range(raw_data_arr.shape[0]):
data = a_tools.rotate_and_normalize_data_1ch(
data=raw_data_arr[row, :],
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]][
:, row] = data
else:
for col in range(raw_data_arr.shape[1]):
data = a_tools.rotate_and_normalize_data_1ch(
data=raw_data_arr[:, col],
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]][col] = data
elif list(meas_res_dict) == channel_map[qb_name]:
# two RO channels per qubit
raw_data_arr = meas_res_dict[list(meas_res_dict)[0]]
rotated_data_dict[qb_name][data_to_fit[qb_name]] = \
deepcopy(raw_data_arr.transpose())
if column_PCA:
for row in range(raw_data_arr.shape[0]):
data_array = np.array(
[v[row, :] for v in meas_res_dict.values()])
data, _, _ = \
a_tools.rotate_and_normalize_data_IQ(
data=data_array,
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][data_to_fit[qb_name]][
:, row] = data
else:
for col in range(raw_data_arr.shape[1]):
data_array = np.array(
[v[:, col] for v in meas_res_dict.values()])
data, _, _ = a_tools.rotate_and_normalize_data_IQ(
data=data_array,
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][
data_to_fit[qb_name]][col] = data
else:
# multiple readouts per qubit per channel
if isinstance(channel_map[qb_name], str):
qb_ro_ch0 = channel_map[qb_name]
else:
qb_ro_ch0 = channel_map[qb_name][0]
ro_suffixes = [s[len(qb_ro_ch0)+1::] for s in
list(meas_res_dict) if qb_ro_ch0 in s]
for i, ro_suf in enumerate(ro_suffixes):
if len(ro_suffixes) == len(meas_res_dict):
# one RO ch per qubit
raw_data_arr = meas_res_dict[list(meas_res_dict)[i]]
rotated_data_dict[qb_name][ro_suf] = \
deepcopy(raw_data_arr.transpose())
for col in range(raw_data_arr.shape[1]):
data = a_tools.rotate_and_normalize_data_1ch(
data=raw_data_arr[:, col],
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][ro_suf][col] = data
else:
# two RO ch per qubit
raw_data_arr = meas_res_dict[list(meas_res_dict)[i]]
rotated_data_dict[qb_name][ro_suf] = \
deepcopy(raw_data_arr.transpose())
for col in range(raw_data_arr.shape[1]):
data_array = np.array(
[v[:, col] for k, v in meas_res_dict.items()
if ro_suf in k])
data, _, _ = a_tools.rotate_and_normalize_data_IQ(
data=data_array,
cal_zero_points=cal_zero_points,
cal_one_points=cal_one_points)
if cal_zero_points is None:
data = a_tools.set_majority_sign(
data, -1 if data_mostly_g else 1)
rotated_data_dict[qb_name][ro_suf][col] = data
return rotated_data_dict
@staticmethod
def rotate_data_TwoD_same_fixed_cal_idxs(qb_name, meas_results_per_qb,
channel_map, cal_states_dict,
data_to_fit):
meas_res_dict = meas_results_per_qb[qb_name]
if list(meas_res_dict) != channel_map[qb_name]:
raise NotImplementedError('rotate_data_TwoD_same_fixed_cal_idxs '
'only implemented for two-channel RO!')
if len(cal_states_dict[qb_name]) == 0:
cal_zero_points = None
cal_one_points = None
else:
cal_zero_points = list(cal_states_dict[qb_name].values())[0]
cal_one_points = list(cal_states_dict[qb_name].values())[1]
# do pca on the one cal states
raw_data_arr = meas_res_dict[list(meas_res_dict)[0]]
rot_dat_e = np.zeros(raw_data_arr.shape[1])
for row in cal_one_points:
rot_dat_e += a_tools.rotate_and_normalize_data_IQ(
data=np.array([v[row, :] for v in meas_res_dict.values()]),
cal_zero_points=None, cal_one_points=None)[0]
rot_dat_e /= len(cal_one_points)
# find the values of the zero and one cal points
col_idx = np.argmax(np.abs(rot_dat_e))
zero_coord = [np.mean([v[r, col_idx] for r in cal_zero_points])
for v in meas_res_dict.values()]
one_coord = [np.mean([v[r, col_idx] for r in cal_one_points])
for v in meas_res_dict.values()]
# rotate all data based on the fixed zero_coord and one_coord
rotated_data_dict = OrderedDict({qb_name: OrderedDict()})
rotated_data_dict[qb_name][data_to_fit[qb_name]] = \
deepcopy(raw_data_arr.transpose())
for col in range(raw_data_arr.shape[1]):
data_array = np.array(
[v[:, col] for v in meas_res_dict.values()])
rotated_data_dict[qb_name][
data_to_fit[qb_name]][col], _, _ = \
a_tools.rotate_and_normalize_data_IQ(
data=data_array,
zero_coord=zero_coord,
one_coord=one_coord)
return rotated_data_dict, zero_coord, one_coord
def get_xaxis_label_unit(self, qb_name):
hard_sweep_params = self.get_param_value('hard_sweep_params')
sweep_name = self.get_param_value('sweep_name')
sweep_unit = self.get_param_value('sweep_unit')
if self.sp is not None:
main_sp = self.get_param_value('main_sp', None)
if main_sp is not None and qb_name in main_sp:
param_names = [main_sp[qb_name]]
else:
param_names = self.mospm[qb_name]
_, xunit, xlabel = self.sp.get_sweep_params_description(
param_names=param_names, dimension=0)[0]
elif hard_sweep_params is not None:
xlabel = list(hard_sweep_params)[0]
xunit = list(hard_sweep_params.values())[0][
'unit']
elif (sweep_name is not None) and (sweep_unit is not None):
xlabel = sweep_name
xunit = sweep_unit
else:
xlabel = self.raw_data_dict['sweep_parameter_names']
xunit = self.raw_data_dict['sweep_parameter_units']
if np.ndim(xlabel) > 0:
xlabel = xlabel[0]
if np.ndim(xunit) > 0:
xunit = xunit[0]
return xlabel, xunit
@staticmethod
def get_cal_state_color(cal_state_label):
if cal_state_label == 'g' or cal_state_label == r'$|g\rangle$':
return 'k'
elif cal_state_label == 'e' or cal_state_label == r'$|e\rangle$':
return 'gray'
elif cal_state_label == 'f' or cal_state_label == r'$|f\rangle$':
return 'C8'
else:
return 'C4'
@staticmethod
def get_latex_prob_label(prob_label):
if '$' in prob_label:
return prob_label
elif 'p' in prob_label.lower():
return r'$|{}\rangle$'.format(prob_label[-1])
else:
return r'$|{}\rangle$'.format(prob_label)
def prepare_plots(self):
if self.get_param_value('plot_proj_data', default_value=True):
select_split = self.get_param_value('select_split')
fig_name_suffix = self.get_param_value('fig_name_suffix', '')
title_suffix = self.get_param_value('title_suffix', '')
for qb_name, corr_data in self.proc_data_dict[
'projected_data_dict'].items():
fig_name = f'projected_plot_{qb_name}'
title_suf = title_suffix
if select_split is not None:
param, idx = select_split[qb_name]
# remove qb_name from param
p = '_'.join([e for e in param.split('_') if e != qb_name])
# create suffix
suf = f'({p}, {str(np.round(idx, 3))})'
# add suffix
fig_name += f'_{suf}'
title_suf = f'{suf}_{title_suf}' if \
len(title_suf) else suf
if isinstance(corr_data, dict):
for data_key, data in corr_data.items():
if not self.rotate:
data_label = data_key
plot_name_suffix = data_key
plot_cal_points = False
data_axis_label = 'Population'
else:
fn = f'{fig_name}_{data_key}'
data_label = 'Data'
plot_name_suffix = ''
tf = f'{data_key}_{title_suf}' if \
len(title_suf) else data_key
plot_cal_points = (
not self.options_dict.get('TwoD', False))
data_axis_label = \
'Strongest principal component (arb.)' if \
'pca' in self.rotation_type.lower() else \
'{} state population'.format(
self.get_latex_prob_label(data_key))
self.prepare_projected_data_plot(
fn, data, qb_name=qb_name,
data_label=data_label,
title_suffix=tf,
plot_name_suffix=plot_name_suffix,
fig_name_suffix=fig_name_suffix,
data_axis_label=data_axis_label,
plot_cal_points=plot_cal_points)
else:
fig_name = 'projected_plot_' + qb_name
self.prepare_projected_data_plot(
fig_name, corr_data, qb_name=qb_name,
plot_cal_points=(
not self.options_dict.get('TwoD', False)))
if self.get_param_value('plot_raw_data', default_value=True):
self.prepare_raw_data_plots(plot_filtered=False)
if 'preparation_params' in self.metadata:
if 'active' in self.metadata['preparation_params'].get(
'preparation_type', 'wait'):
self.prepare_raw_data_plots(plot_filtered=True)
def prepare_raw_data_plots(self, plot_filtered=False):
if plot_filtered or not self.data_with_reset:
key = 'meas_results_per_qb'
suffix = 'filtered' if self.data_with_reset else ''
func_for_swpts = lambda qb_name: self.proc_data_dict[
'sweep_points_dict'][qb_name]['sweep_points']
else:
key = 'meas_results_per_qb_raw'
suffix = ''
func_for_swpts = lambda qb_name: self.raw_data_dict[
'hard_sweep_points']
for qb_name, raw_data_dict in self.proc_data_dict[key].items():
if qb_name not in self.qb_names:
continue
sweep_points = func_for_swpts(qb_name)
if len(raw_data_dict) == 1:
numplotsx = 1
numplotsy = 1
elif len(raw_data_dict) == 2:
numplotsx = 1
numplotsy = 2
else:
numplotsx = 2
numplotsy = len(raw_data_dict) // 2 + len(raw_data_dict) % 2
plotsize = self.get_default_plot_params(set=False)['figure.figsize']
fig_title = (self.raw_data_dict['timestamp'] + ' ' +
self.raw_data_dict['measurementstring'] +
'\nRaw data ' + suffix + ' ' + qb_name)
plot_name = 'raw_plot_' + qb_name + suffix
xlabel, xunit = self.get_xaxis_label_unit(qb_name)
for ax_id, ro_channel in enumerate(raw_data_dict):
if self.get_param_value('TwoD', default_value=False):
if self.sp is None:
soft_sweep_params = self.get_param_value(
'soft_sweep_params')
if soft_sweep_params is not None:
yunit = list(soft_sweep_params.values())[0]['unit']
else:
yunit = self.raw_data_dict[
'sweep_parameter_units'][1]
if np.ndim(yunit) > 0:
yunit = yunit[0]
for pn, ssp in self.proc_data_dict['sweep_points_2D_dict'][
qb_name].items():
ylabel = pn
if self.sp is not None:
yunit = self.sp.get_sweep_params_property(
'unit', dimension=1, param_names=pn)
ylabel = self.sp.get_sweep_params_property(
'label', dimension=1, param_names=pn)
self.plot_dicts[f'{plot_name}_{ro_channel}_{pn}'] = {
'fig_id': plot_name + '_' + pn,
'ax_id': ax_id,
'plotfn': self.plot_colorxy,
'xvals': sweep_points,
'yvals': ssp,
'zvals': raw_data_dict[ro_channel].T,
'xlabel': xlabel,
'xunit': xunit,
'ylabel': ylabel,
'yunit': yunit,
'numplotsx': numplotsx,
'numplotsy': numplotsy,
'plotsize': (plotsize[0]*numplotsx,
plotsize[1]*numplotsy),
'title': fig_title,
'clabel': '{} (Vpeak)'.format(ro_channel)}
else:
self.plot_dicts[plot_name + '_' + ro_channel] = {
'fig_id': plot_name,
'ax_id': ax_id,
'plotfn': self.plot_line,
'xvals': sweep_points,
'xlabel': xlabel,
'xunit': xunit,
'yvals': raw_data_dict[ro_channel],
'ylabel': '{} (Vpeak)'.format(ro_channel),
'yunit': '',
'numplotsx': numplotsx,
'numplotsy': numplotsy,
'plotsize': (plotsize[0]*numplotsx,
plotsize[1]*numplotsy),
'title': fig_title}
if len(raw_data_dict) == 1:
self.plot_dicts[
plot_name + '_' + list(raw_data_dict)[0]]['ax_id'] = None
def prepare_projected_data_plot(
self, fig_name, data, qb_name, title_suffix='', sweep_points=None,
plot_cal_points=True, plot_name_suffix='', fig_name_suffix='',
data_label='Data', data_axis_label='', do_legend_data=True,
do_legend_cal_states=True):
if len(fig_name_suffix):
fig_name = f'{fig_name}_{fig_name_suffix}'
if data_axis_label == '':
data_axis_label = 'Strongest principal component (arb.)' if \
'pca' in self.rotation_type.lower() else \
'{} state population'.format(self.get_latex_prob_label(
self.data_to_fit[qb_name]))
plotsize = self.get_default_plot_params(set=False)['figure.figsize']
plotsize = (plotsize[0], plotsize[0]/1.25)
if sweep_points is None:
sweep_points = self.proc_data_dict['sweep_points_dict'][qb_name][
'sweep_points']
plot_names_cal = []
if plot_cal_points and self.num_cal_points != 0:
yvals = data[:-self.num_cal_points]
xvals = sweep_points[:-self.num_cal_points]
# plot cal points
for i, cal_pts_idxs in enumerate(
self.cal_states_dict.values()):
plot_dict_name_cal = fig_name + '_' + \
list(self.cal_states_dict)[i] + '_' + \
plot_name_suffix
plot_names_cal += [plot_dict_name_cal]
self.plot_dicts[plot_dict_name_cal] = {
'fig_id': fig_name,
'plotfn': self.plot_line,
'plotsize': plotsize,
'xvals': self.proc_data_dict['sweep_points_dict'][qb_name][
'cal_points_sweep_points'][cal_pts_idxs],
'yvals': data[cal_pts_idxs],
'setlabel': list(self.cal_states_dict)[i],
'do_legend': do_legend_cal_states,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left',
'linestyle': 'none',
'line_kws': {'color': self.get_cal_state_color(
list(self.cal_states_dict)[i])}}
self.plot_dicts[plot_dict_name_cal+'_line'] = {
'fig_id': fig_name,
'plotsize': plotsize,
'plotfn': self.plot_hlines,
'y': np.mean(data[cal_pts_idxs]),
'xmin': self.proc_data_dict['sweep_points_dict'][qb_name][
'sweep_points'][0],
'xmax': self.proc_data_dict['sweep_points_dict'][qb_name][
'sweep_points'][-1],
'colors': 'gray'}
else:
yvals = data
xvals = sweep_points
title = (self.raw_data_dict['timestamp'] + ' ' +
self.raw_data_dict['measurementstring'])
title += '\n' + f'{qb_name}_{title_suffix}' if len(title_suffix) else \
' ' + qb_name
plot_dict_name = f'{fig_name}_{plot_name_suffix}'
xlabel, xunit = self.get_xaxis_label_unit(qb_name)
if self.get_param_value('TwoD', default_value=False):
if self.sp is None:
soft_sweep_params = self.get_param_value(
'soft_sweep_params')
if soft_sweep_params is not None:
yunit = list(soft_sweep_params.values())[0]['unit']
else:
yunit = self.raw_data_dict['sweep_parameter_units'][1]
if np.ndim(yunit) > 0:
yunit = yunit[0]
for pn, ssp in self.proc_data_dict['sweep_points_2D_dict'][
qb_name].items():
ylabel = pn
if self.sp is not None:
yunit = self.sp.get_sweep_params_property(
'unit', dimension=1, param_names=pn)
ylabel = self.sp.get_sweep_params_property(
'label', dimension=1, param_names=pn)
self.plot_dicts[f'{plot_dict_name}_{pn}'] = {
'plotfn': self.plot_colorxy,
'fig_id': fig_name + '_' + pn,
'xvals': xvals,
'yvals': ssp,
'zvals': yvals,
'xlabel': xlabel,
'xunit': xunit,
'ylabel': ylabel,
'yunit': yunit,
'zrange': self.get_param_value('zrange', None),
'title': title,
'clabel': data_axis_label}
else:
self.plot_dicts[plot_dict_name] = {
'plotfn': self.plot_line,
'fig_id': fig_name,
'plotsize': plotsize,
'xvals': xvals,
'xlabel': xlabel,
'xunit': xunit,
'yvals': yvals,
'ylabel': data_axis_label,
'yunit': '',
'setlabel': data_label,
'title': title,
'linestyle': 'none',
'do_legend': do_legend_data,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left'}
# add plot_params to each plot dict
plot_params = self.get_param_value('plot_params', default_value={})
for plt_name in self.plot_dicts:
self.plot_dicts[plt_name].update(plot_params)
if len(plot_names_cal) > 0:
if do_legend_data and not do_legend_cal_states:
for plot_name in plot_names_cal:
plot_dict_cal = self.plot_dicts.pop(plot_name)
self.plot_dicts[plot_name] = plot_dict_cal
class Idling_Error_Rate_Analyisis(ba.BaseDataAnalysis):
def __init__(self, t_start: str=None, t_stop: str=None,
label: str='', data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
label=label,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'xvals': 'sweep_points',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
post_sel_th = self.options_dict.get('post_sel_th', 0.5)
raw_shots = self.raw_data_dict['measured_values'][0][0]
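        # The raw shots are assumed to be interleaved as (post-selection RO,
        # data RO); data shots whose post-selection shot exceeds the threshold
        # are discarded (set to NaN) below, and the prepared state cycles
        # through '0', '1', '+' on consecutive data shots.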
post_sel_shots = raw_shots[::2]
data_shots = raw_shots[1::2]
data_shots[np.where(post_sel_shots > post_sel_th)] = np.nan
states = ['0', '1', '+']
self.proc_data_dict['xvals'] = np.unique(self.raw_data_dict['xvals'])
for i, state in enumerate(states):
            self.proc_data_dict['shots_{}'.format(state)] = data_shots[i::3]
self.proc_data_dict['yvals_{}'.format(state)] = \
np.nanmean(np.reshape(self.proc_data_dict['shots_{}'.format(state)],
(len(self.proc_data_dict['xvals']), -1),
order='F'), axis=1)
def prepare_plots(self):
# assumes that value names are unique in an experiment
states = ['0', '1', '+']
for i, state in enumerate(states):
yvals = self.proc_data_dict['yvals_{}'.format(state)]
xvals = self.proc_data_dict['xvals']
self.plot_dicts['Prepare in {}'.format(state)] = {
'ax_id': 'main',
'plotfn': self.plot_line,
'xvals': xvals,
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': yvals,
'ylabel': 'Counts',
'yrange': [0, 1],
'xrange': self.options_dict.get('xrange', None),
'yunit': 'frac',
'setlabel': 'Prepare in {}'.format(state),
'do_legend':True,
'title': (self.raw_data_dict['timestamps'][0]+' - ' +
self.raw_data_dict['timestamps'][-1] + '\n' +
self.raw_data_dict['measurementstring'][0]),
'legend_pos': 'upper right'}
if self.do_fitting:
for state in ['0', '1', '+']:
self.plot_dicts['fit_{}'.format(state)] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['fit {}'.format(state)]['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'fit |{}>'.format(state),
'do_legend': True,
'legend_pos': 'upper right'}
            self.plot_dicts['fit_text'] = {
                'ax_id': 'main',
                'box_props': 'fancy',
                'xpos': 1.05,
                'horizontalalignment': 'left',
                'plotfn': self.plot_text,
                'text_string': self.proc_data_dict['fit_msg']}
def analyze_fit_results(self):
        fit_msg = ''
states = ['0', '1', '+']
for state in states:
fr = self.fit_res['fit {}'.format(state)]
N1 = fr.params['N1'].value, fr.params['N1'].stderr
N2 = fr.params['N2'].value, fr.params['N2'].stderr
            fit_msg += ('Prep |{}> : \n\tN_1 = {:.2g} $\\pm$ {:.2g}'
                        '\n\tN_2 = {:.2g} $\\pm$ {:.2g}\n').format(
state, N1[0], N1[1], N2[0], N2[1])
self.proc_data_dict['fit_msg'] = fit_msg
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
states = ['0', '1', '+']
for i, state in enumerate(states):
yvals = self.proc_data_dict['yvals_{}'.format(state)]
xvals = self.proc_data_dict['xvals']
mod = lmfit.Model(fit_mods.idle_error_rate_exp_decay)
mod.guess = fit_mods.idle_err_rate_guess.__get__(mod, mod.__class__)
# Done here explicitly so that I can overwrite a specific guess
guess_pars = mod.guess(N=xvals, data=yvals)
vary_N2 = self.options_dict.get('vary_N2', True)
if not vary_N2:
guess_pars['N2'].value = 1e21
guess_pars['N2'].vary = False
self.fit_dicts['fit {}'.format(states[i])] = {
'model': mod,
'fit_xvals': {'N': xvals},
'fit_yvals': {'data': yvals},
'guess_pars': guess_pars}
# Allows fixing the double exponential coefficient
class Grovers_TwoQubitAllStates_Analysis(ba.BaseDataAnalysis):
def __init__(self, t_start: str=None, t_stop: str=None,
label: str='', data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
label=label,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'xvals': 'sweep_points',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
self.proc_data_dict = OrderedDict()
normalize_to_cal_points = self.options_dict.get('normalize_to_cal_points', True)
cal_points = [
[[-4, -3], [-2, -1]],
[[-4, -2], [-3, -1]],
]
for idx in [0,1]:
yvals = list(self.raw_data_dict['measured_data'].values())[idx][0]
self.proc_data_dict['ylabel_{}'.format(idx)] = \
self.raw_data_dict['value_names'][0][idx]
self.proc_data_dict['yunit'] = self.raw_data_dict['value_units'][0][idx]
if normalize_to_cal_points:
yvals = a_tools.rotate_and_normalize_data_1ch(yvals,
cal_zero_points=cal_points[idx][0],
cal_one_points=cal_points[idx][1])
self.proc_data_dict['yvals_{}'.format(idx)] = yvals
y0 = self.proc_data_dict['yvals_0']
y1 = self.proc_data_dict['yvals_1']
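        # Success probability averaged over the four prepared two-qubit input
        # states; each term is the probability of measuring the expected
        # target outcome (0 or 1 on each qubit) for that input state.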
p_success = ((y0[0]*y1[0]) +
(1-y0[1])*y1[1] +
(y0[2])*(1-y1[2]) +
(1-y0[3])*(1-y1[3]) )/4
self.proc_data_dict['p_success'] = p_success
def prepare_plots(self):
# assumes that value names are unique in an experiment
for i in [0, 1]:
yvals = self.proc_data_dict['yvals_{}'.format(i)]
xvals = self.raw_data_dict['xvals'][0]
ylabel = self.proc_data_dict['ylabel_{}'.format(i)]
self.plot_dicts['main_{}'.format(ylabel)] = {
'plotfn': self.plot_line,
'xvals': self.raw_data_dict['xvals'][0],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_{}'.format(i)],
'ylabel': ylabel,
'yunit': self.proc_data_dict['yunit'],
'title': (self.raw_data_dict['timestamps'][0] + ' \n' +
self.raw_data_dict['measurementstring'][0]),
'do_legend': False,
'legend_pos': 'upper right'}
        self.plot_dicts['limit_text'] = {
            'ax_id': 'main_{}'.format(ylabel),
            'box_props': 'fancy',
            'xpos': 1.05,
            'horizontalalignment': 'left',
            'plotfn': self.plot_text,
            'text_string': 'P success = {:.3f}'.format(self.proc_data_dict['p_success'])}
class FlippingAnalysis(Single_Qubit_TimeDomainAnalysis):
def __init__(self, t_start: str=None, t_stop: str=None,
data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.single_timestamp = True
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'measurementstring': 'measurementstring',
'sweep_points': 'sweep_points',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
# This analysis makes a hardcoded assumption on the calibration points
self.options_dict['cal_points'] = [list(range(-4, -2)),
list(range(-2, 0))]
self.numeric_params = []
if auto:
self.run_analysis()
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
# Even though we expect an exponentially damped oscillation we use
# a simple cosine as this gives more reliable fitting and we are only
# interested in extracting the frequency of the oscillation
cos_mod = lmfit.Model(fit_mods.CosFunc)
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=self.raw_data_dict['sweep_points'][:-4],
data=self.proc_data_dict['corr_data'][:-4])
# This enforces the oscillation to start at the equator
# and ensures that any over/under rotation is absorbed in the
# frequency
guess_pars['amplitude'].value = 0.5
guess_pars['amplitude'].vary = False
guess_pars['offset'].value = 0.5
guess_pars['offset'].vary = False
self.fit_dicts['cos_fit'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': self.raw_data_dict['sweep_points'][:-4]},
'fit_yvals': {'data': self.proc_data_dict['corr_data'][:-4]},
'guess_pars': guess_pars}
# In the case there are very few periods we fall back on a small
# angle approximation to extract the drive detuning
poly_mod = lmfit.models.PolynomialModel(degree=1)
        # the detuning can be estimated using a small angle approximation
# c1 = d/dN (cos(2*pi*f N) ) evaluated at N = 0 -> c1 = -2*pi*f
poly_mod.set_param_hint('frequency', expr='-c1/(2*pi)')
guess_pars = poly_mod.guess(x=self.raw_data_dict['sweep_points'][:-4],
data=self.proc_data_dict['corr_data'][:-4])
# Constraining the line ensures that it will only give a good fit
# if the small angle approximation holds
guess_pars['c0'].vary = False
guess_pars['c0'].value = 0.5
self.fit_dicts['line_fit'] = {
'model': poly_mod,
'fit_xvals': {'x': self.raw_data_dict['sweep_points'][:-4]},
'fit_yvals': {'data': self.proc_data_dict['corr_data'][:-4]},
'guess_pars': guess_pars}
def analyze_fit_results(self):
sf_line = self._get_scale_factor_line()
sf_cos = self._get_scale_factor_cos()
self.proc_data_dict['scale_factor'] = self.get_scale_factor()
msg = 'Scale fact. based on '
if self.proc_data_dict['scale_factor'] == sf_cos:
msg += 'cos fit\n'
else:
msg += 'line fit\n'
msg += 'cos fit: {:.4f}\n'.format(sf_cos)
msg += 'line fit: {:.4f}'.format(sf_line)
self.raw_data_dict['scale_factor_msg'] = msg
# TODO: save scale factor to file
def get_scale_factor(self):
"""
Returns the scale factor that should correct for the error in the
pulse amplitude.
"""
# Model selection based on the Bayesian Information Criterion (BIC)
# as calculated by lmfit
if (self.fit_dicts['line_fit']['fit_res'].bic <
self.fit_dicts['cos_fit']['fit_res'].bic):
scale_factor = self._get_scale_factor_line()
else:
scale_factor = self._get_scale_factor_cos()
return scale_factor
def _get_scale_factor_cos(self):
# 1/period of the oscillation corresponds to the (fractional)
# over/under rotation error per gate
frequency = self.fit_dicts['cos_fit']['fit_res'].params['frequency']
# the square is needed to account for the difference between
# power and amplitude
scale_factor = (1+frequency)**2
phase = np.rad2deg(self.fit_dicts['cos_fit']['fit_res'].params['phase']) % 360
# phase ~90 indicates an under rotation so the scale factor
# has to be larger than 1. A phase ~270 indicates an over
# rotation so then the scale factor has to be smaller than one.
if phase > 180:
scale_factor = 1/scale_factor
return scale_factor
def _get_scale_factor_line(self):
# 1/period of the oscillation corresponds to the (fractional)
# over/under rotation error per gate
frequency = self.fit_dicts['line_fit']['fit_res'].params['frequency']
scale_factor = (1+frequency)**2
# no phase sign check is needed here as this is contained in the
# sign of the coefficient
return scale_factor
def prepare_plots(self):
self.plot_dicts['main'] = {
'plotfn': self.plot_line,
'xvals': self.raw_data_dict['sweep_points'],
'xlabel': self.raw_data_dict['xlabel'],
'xunit': self.raw_data_dict['xunit'], # does not do anything yet
'yvals': self.proc_data_dict['corr_data'],
'ylabel': 'Excited state population',
'yunit': '',
'setlabel': 'data',
'title': (self.raw_data_dict['timestamp'] + ' ' +
self.raw_data_dict['measurementstring']),
'do_legend': True,
'legend_pos': 'upper right'}
if self.do_fitting:
self.plot_dicts['line_fit'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['line_fit']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'line fit',
'do_legend': True,
'legend_pos': 'upper right'}
self.plot_dicts['cos_fit'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'cos fit',
'do_legend': True,
'legend_pos': 'upper right'}
self.plot_dicts['text_msg'] = {
'ax_id': 'main',
'ypos': 0.15,
'plotfn': self.plot_text,
'box_props': 'fancy',
'text_string': self.raw_data_dict['scale_factor_msg']}
class Intersect_Analysis(Single_Qubit_TimeDomainAnalysis):
"""
Analysis to extract the intercept of two parameters.
    Relevant options_dict parameters:
        ch_idx_A (int): specifies the first channel used for the intercept.
        ch_idx_B (int): specifies the second channel used for the intercept;
            if it is the same as the first, the data is assumed to have been
            taken interleaved.
"""
def __init__(self, t_start: str=None, t_stop: str=None,
data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.single_timestamp = False
self.params_dict = {'xlabel': 'sweep_name',
'xvals': 'sweep_points',
'xunit': 'sweep_unit',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
"""
        Selects the relevant acquisition channels based on "ch_idx_A" and
        "ch_idx_B" specified in the options dict. If ch_idx_A and ch_idx_B
        are the same, the data is assumed to be interleaved and is split into
        the A and B traces.
"""
self.proc_data_dict = deepcopy(self.raw_data_dict)
# The channel containing the data must be specified in the options dict
ch_idx_A = self.options_dict.get('ch_idx_A', 0)
ch_idx_B = self.options_dict.get('ch_idx_B', 0)
self.proc_data_dict['ylabel'] = self.raw_data_dict['value_names'][0][ch_idx_A]
self.proc_data_dict['yunit'] = self.raw_data_dict['value_units'][0][ch_idx_A]
if ch_idx_A == ch_idx_B:
yvals = list(self.raw_data_dict['measured_data'].values())[ch_idx_A][0]
self.proc_data_dict['xvals_A'] = self.raw_data_dict['xvals'][0][::2]
self.proc_data_dict['xvals_B'] = self.raw_data_dict['xvals'][0][1::2]
self.proc_data_dict['yvals_A'] = yvals[::2]
self.proc_data_dict['yvals_B'] = yvals[1::2]
else:
self.proc_data_dict['xvals_A'] = self.raw_data_dict['xvals'][0]
self.proc_data_dict['xvals_B'] = self.raw_data_dict['xvals'][0]
self.proc_data_dict['yvals_A'] = list(self.raw_data_dict
['measured_data'].values())[ch_idx_A][0]
self.proc_data_dict['yvals_B'] = list(self.raw_data_dict
['measured_data'].values())[ch_idx_B][0]
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
self.fit_dicts['line_fit_A'] = {
'model': lmfit.models.PolynomialModel(degree=2),
'fit_xvals': {'x': self.proc_data_dict['xvals_A']},
'fit_yvals': {'data': self.proc_data_dict['yvals_A']}}
self.fit_dicts['line_fit_B'] = {
'model': lmfit.models.PolynomialModel(degree=2),
'fit_xvals': {'x': self.proc_data_dict['xvals_B']},
'fit_yvals': {'data': self.proc_data_dict['yvals_B']}}
def analyze_fit_results(self):
fr_0 = self.fit_res['line_fit_A'].best_values
fr_1 = self.fit_res['line_fit_B'].best_values
c0 = (fr_0['c0'] - fr_1['c0'])
c1 = (fr_0['c1'] - fr_1['c1'])
c2 = (fr_0['c2'] - fr_1['c2'])
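        # The two quadratic fits intersect where their difference polynomial
        # c0 + c1*x + c2*x**2 has a root; both roots are stored and the one
        # lying inside the swept x-range is selected below.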
poly_coeff = [c0, c1, c2]
poly = np.polynomial.polynomial.Polynomial([fr_0['c0'],
fr_0['c1'], fr_0['c2']])
ic = np.polynomial.polynomial.polyroots(poly_coeff)
self.proc_data_dict['intersect_L'] = ic[0], poly(ic[0])
self.proc_data_dict['intersect_R'] = ic[1], poly(ic[1])
        if (np.min(self.proc_data_dict['xvals']) < ic[0] <
                np.max(self.proc_data_dict['xvals'])):
            self.proc_data_dict['intersect'] = \
                self.proc_data_dict['intersect_L']
        else:
            self.proc_data_dict['intersect'] = \
                self.proc_data_dict['intersect_R']
def prepare_plots(self):
self.plot_dicts['main'] = {
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['xvals_A'],
'xlabel': self.proc_data_dict['xlabel'][0],
'xunit': self.proc_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_A'],
'ylabel': self.proc_data_dict['ylabel'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'A',
'title': (self.proc_data_dict['timestamps'][0] + ' \n' +
self.proc_data_dict['measurementstring'][0]),
'do_legend': True,
'yrange': (0,1),
'legend_pos': 'upper right'}
self.plot_dicts['on'] = {
'plotfn': self.plot_line,
'ax_id': 'main',
'xvals': self.proc_data_dict['xvals_B'],
'xlabel': self.proc_data_dict['xlabel'][0],
'xunit': self.proc_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_B'],
'ylabel': self.proc_data_dict['ylabel'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'B',
'do_legend': True,
'legend_pos': 'upper right'}
if self.do_fitting:
self.plot_dicts['line_fit_A'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['line_fit_A']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit A',
'do_legend': True}
self.plot_dicts['line_fit_B'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['line_fit_B']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit B',
'do_legend': True}
ic, ic_unit = SI_val_to_msg_str(
self.proc_data_dict['intersect'][0],
self.proc_data_dict['xunit'][0][0], return_type=float)
self.plot_dicts['intercept_message'] = {
'ax_id': 'main',
'plotfn': self.plot_line,
'xvals': [self.proc_data_dict['intersect'][0]],
'yvals': [self.proc_data_dict['intersect'][1]],
'line_kws': {'alpha': .5, 'color':'gray',
'markersize':15},
'marker': 'o',
'setlabel': 'Intercept: {:.1f} {}'.format(ic, ic_unit),
'do_legend': True}
def get_intersect(self):
return self.proc_data_dict['intersect']
class CZ_1QPhaseCal_Analysis(ba.BaseDataAnalysis):
"""
Analysis to extract the intercept for a single qubit phase calibration
experiment
N.B. this is a less generic version of "Intersect_Analysis" and should
be deprecated (MAR Dec 2017)
"""
def __init__(self, t_start: str=None, t_stop: str=None,
data_file_path: str=None,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.single_timestamp = False
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'xvals': 'sweep_points',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
"""
        Selects the relevant acquisition channel based on "ch_idx" in the
        options dict and then splits the data into the off and on cases.
"""
self.proc_data_dict = OrderedDict()
# The channel containing the data must be specified in the options dict
ch_idx = self.options_dict['ch_idx']
yvals = list(self.raw_data_dict['measured_data'].values())[ch_idx][0]
self.proc_data_dict['ylabel'] = self.raw_data_dict['value_names'][0][ch_idx]
self.proc_data_dict['yunit'] = self.raw_data_dict['value_units'][0][ch_idx]
self.proc_data_dict['xvals_off'] = self.raw_data_dict['xvals'][0][::2]
self.proc_data_dict['xvals_on'] = self.raw_data_dict['xvals'][0][1::2]
self.proc_data_dict['yvals_off'] = yvals[::2]
self.proc_data_dict['yvals_on'] = yvals[1::2]
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
self.fit_dicts['line_fit_off'] = {
'model': lmfit.models.PolynomialModel(degree=1),
'fit_xvals': {'x': self.proc_data_dict['xvals_off']},
'fit_yvals': {'data': self.proc_data_dict['yvals_off']}}
self.fit_dicts['line_fit_on'] = {
'model': lmfit.models.PolynomialModel(degree=1),
'fit_xvals': {'x': self.proc_data_dict['xvals_on']},
'fit_yvals': {'data': self.proc_data_dict['yvals_on']}}
def analyze_fit_results(self):
fr_0 = self.fit_res['line_fit_off'].best_values
fr_1 = self.fit_res['line_fit_on'].best_values
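        # Intersection of the two linear fits:
        # c0_off + c1_off*x = c0_on + c1_on*x
        #   =>  x = -(c0_off - c0_on) / (c1_off - c1_on)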
ic = -(fr_0['c0'] - fr_1['c0'])/(fr_0['c1'] - fr_1['c1'])
self.proc_data_dict['zero_phase_diff_intersect'] = ic
def prepare_plots(self):
self.plot_dicts['main'] = {
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['xvals_off'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_off'],
'ylabel': self.proc_data_dict['ylabel'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ off',
'title': (self.raw_data_dict['timestamps'][0] + ' \n' +
self.raw_data_dict['measurementstring'][0]),
'do_legend': True,
'yrange': (0,1),
'legend_pos': 'upper right'}
self.plot_dicts['on'] = {
'plotfn': self.plot_line,
'ax_id': 'main',
'xvals': self.proc_data_dict['xvals_on'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_on'],
'ylabel': self.proc_data_dict['ylabel'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ on',
'do_legend': True,
'legend_pos': 'upper right'}
if self.do_fitting:
self.plot_dicts['line_fit_off'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['line_fit_off']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit CZ off',
'do_legend': True}
self.plot_dicts['line_fit_on'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['line_fit_on']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit CZ on',
'do_legend': True}
ic, ic_unit = SI_val_to_msg_str(
self.proc_data_dict['zero_phase_diff_intersect'],
self.raw_data_dict['xunit'][0][0], return_type=float)
self.plot_dicts['intercept_message'] = {
'ax_id': 'main',
'plotfn': self.plot_line,
'xvals': [self.proc_data_dict['zero_phase_diff_intersect']],
'yvals': [np.mean(self.proc_data_dict['xvals_on'])],
'line_kws': {'alpha': 0},
'setlabel': 'Intercept: {:.1f} {}'.format(ic, ic_unit),
'do_legend': True}
def get_zero_phase_diff_intersect(self):
return self.proc_data_dict['zero_phase_diff_intersect']
class Oscillation_Analysis(ba.BaseDataAnalysis):
"""
Very basic analysis to determine the phase of a single oscillation
that has an assumed period of 360 degrees.
"""
def __init__(self, t_start: str=None, t_stop: str=None,
data_file_path: str=None,
label: str='',
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
label=label,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.single_timestamp = False
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'xvals': 'sweep_points',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
self.proc_data_dict = OrderedDict()
idx = 1
self.proc_data_dict['yvals'] = list(self.raw_data_dict['measured_data'].values())[idx][0]
self.proc_data_dict['ylabel'] = self.raw_data_dict['value_names'][0][idx]
self.proc_data_dict['yunit'] = self.raw_data_dict['value_units'][0][idx]
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=self.raw_data_dict['xvals'][0],
data=self.proc_data_dict['yvals'], freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts['cos_fit'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': self.raw_data_dict['xvals'][0]},
'fit_yvals': {'data': self.proc_data_dict['yvals']},
'guess_pars': guess_pars}
def analyze_fit_results(self):
fr = self.fit_res['cos_fit'].best_values
self.proc_data_dict['phi'] = np.rad2deg(fr['phase'])
def prepare_plots(self):
self.plot_dicts['main'] = {
'plotfn': self.plot_line,
'xvals': self.raw_data_dict['xvals'][0],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals'],
'ylabel': self.proc_data_dict['ylabel'],
'yunit': self.proc_data_dict['yunit'],
'title': (self.raw_data_dict['timestamps'][0] + ' \n' +
self.raw_data_dict['measurementstring'][0]),
'do_legend': True,
# 'yrange': (0,1),
'legend_pos': 'upper right'}
if self.do_fitting:
self.plot_dicts['cos_fit'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit',
'do_legend': True}
class Conditional_Oscillation_Analysis(ba.BaseDataAnalysis):
"""
Analysis to extract quantities from a conditional oscillation.
"""
def __init__(self, t_start: str=None, t_stop: str=None,
data_file_path: str=None,
label: str='',
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(t_start=t_start, t_stop=t_stop,
label=label,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only, do_fitting=do_fitting)
self.single_timestamp = False
self.params_dict = {'xlabel': 'sweep_name',
'xunit': 'sweep_unit',
'xvals': 'sweep_points',
'measurementstring': 'measurementstring',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = []
if auto:
self.run_analysis()
def process_data(self):
"""
selects the relevant acq channel based on "ch_idx_osc" and
"ch_idx_spec" in the options dict and then splits the data for the
off and on cases
"""
self.proc_data_dict = OrderedDict()
# The channel containing the data must be specified in the options dict
ch_idx_spec = self.options_dict.get('ch_idx_spec', 0)
ch_idx_osc = self.options_dict.get('ch_idx_osc', 1)
normalize_to_cal_points = self.options_dict.get('normalize_to_cal_points', True)
cal_points = [
[[-4, -3], [-2, -1]],
[[-4, -2], [-3, -1]],
]
i = 0
for idx, type_str in zip([ch_idx_osc, ch_idx_spec], ['osc', 'spec']):
yvals = list(self.raw_data_dict['measured_data'].values())[idx][0]
self.proc_data_dict['ylabel_{}'.format(type_str)] = self.raw_data_dict['value_names'][0][idx]
self.proc_data_dict['yunit'] = self.raw_data_dict['value_units'][0][idx]
if normalize_to_cal_points:
yvals = a_tools.rotate_and_normalize_data_1ch(yvals,
cal_zero_points=cal_points[i][0],
cal_one_points=cal_points[i][1])
                    i += 1
self.proc_data_dict['yvals_{}_off'.format(type_str)] = yvals[::2]
self.proc_data_dict['yvals_{}_on'.format(type_str)] = yvals[1::2]
self.proc_data_dict['xvals_off'] = self.raw_data_dict['xvals'][0][::2]
self.proc_data_dict['xvals_on'] = self.raw_data_dict['xvals'][0][1::2]
else:
self.proc_data_dict['yvals_{}_off'.format(type_str)] = yvals[::2]
self.proc_data_dict['yvals_{}_on'.format(type_str)] = yvals[1::2]
self.proc_data_dict['xvals_off'] = self.raw_data_dict['xvals'][0][::2]
self.proc_data_dict['xvals_on'] = self.raw_data_dict['xvals'][0][1::2]
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=self.proc_data_dict['xvals_off'][:-2],
data=self.proc_data_dict['yvals_osc_off'][:-2],
freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts['cos_fit_off'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': self.proc_data_dict['xvals_off'][:-2]},
'fit_yvals': {'data': self.proc_data_dict['yvals_osc_off'][:-2]},
'guess_pars': guess_pars}
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=self.proc_data_dict['xvals_on'][:-2],
data=self.proc_data_dict['yvals_osc_on'][:-2],
freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts['cos_fit_on'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': self.proc_data_dict['xvals_on'][:-2]},
'fit_yvals': {'data': self.proc_data_dict['yvals_osc_on'][:-2]},
'guess_pars': guess_pars}
def analyze_fit_results(self):
fr_0 = self.fit_res['cos_fit_off'].params
fr_1 = self.fit_res['cos_fit_on'].params
phi0 = np.rad2deg(fr_0['phase'].value)
phi1 = np.rad2deg(fr_1['phase'].value)
phi0_stderr = np.rad2deg(fr_0['phase'].stderr)
phi1_stderr = np.rad2deg(fr_1['phase'].stderr)
self.proc_data_dict['phi_0'] = phi0, phi0_stderr
self.proc_data_dict['phi_1'] = phi1, phi1_stderr
phi_cond_stderr = (phi0_stderr**2+phi1_stderr**2)**.5
self.proc_data_dict['phi_cond'] = (phi1 -phi0), phi_cond_stderr
osc_amp = np.mean([fr_0['amplitude'], fr_1['amplitude']])
        osc_amp_stderr = np.sqrt(fr_0['amplitude'].stderr**2 +
                                 fr_1['amplitude'].stderr**2)/2
self.proc_data_dict['osc_amp_0'] = (fr_0['amplitude'].value,
fr_0['amplitude'].stderr)
self.proc_data_dict['osc_amp_1'] = (fr_1['amplitude'].value,
fr_1['amplitude'].stderr)
self.proc_data_dict['osc_offs_0'] = (fr_0['offset'].value,
fr_0['offset'].stderr)
self.proc_data_dict['osc_offs_1'] = (fr_1['offset'].value,
fr_1['offset'].stderr)
offs_stderr = (fr_0['offset'].stderr**2+fr_1['offset'].stderr**2)**.5
self.proc_data_dict['offs_diff'] = (
fr_1['offset'].value - fr_0['offset'].value, offs_stderr)
# self.proc_data_dict['osc_amp'] = (osc_amp, osc_amp_stderr)
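        # Missing fraction: difference in mean spectator-qubit population
        # between the CZ-on and CZ-off traces (calibration points excluded),
        # commonly used as an indicator of leakage during the CZ pulse.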
self.proc_data_dict['missing_fraction'] = (
np.mean(self.proc_data_dict['yvals_spec_on'][:-2]) -
np.mean(self.proc_data_dict['yvals_spec_off'][:-2]))
def prepare_plots(self):
self._prepare_main_oscillation_figure()
self._prepare_spectator_qubit_figure()
def _prepare_main_oscillation_figure(self):
self.plot_dicts['main'] = {
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['xvals_off'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_osc_off'],
'ylabel': self.proc_data_dict['ylabel_osc'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ off',
'title': (self.raw_data_dict['timestamps'][0] + ' \n' +
self.raw_data_dict['measurementstring'][0]),
'do_legend': True,
# 'yrange': (0,1),
'legend_pos': 'upper right'}
self.plot_dicts['on'] = {
'plotfn': self.plot_line,
'ax_id': 'main',
'xvals': self.proc_data_dict['xvals_on'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_osc_on'],
'ylabel': self.proc_data_dict['ylabel_osc'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ on',
'do_legend': True,
'legend_pos': 'upper right'}
if self.do_fitting:
self.plot_dicts['cos_fit_off'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit_off']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit CZ off',
'do_legend': True}
self.plot_dicts['cos_fit_on'] = {
'ax_id': 'main',
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit_on']['fit_res'],
'plot_init': self.options_dict['plot_init'],
'setlabel': 'Fit CZ on',
'do_legend': True}
# offset as a guide for the eye
y = self.fit_res['cos_fit_off'].params['offset'].value
self.plot_dicts['cos_off_offset'] ={
'plotfn': self.plot_matplot_ax_method,
'ax_id':'main',
'func': 'axhline',
'plot_kws': {
'y': y, 'color': 'C0', 'linestyle': 'dotted'}
}
            phase_message = (
                'Phase diff.: {:.1f} $\\pm$ {:.1f} deg\n'
                'Phase off: {:.1f} $\\pm$ {:.1f} deg\n'
                'Phase on: {:.1f} $\\pm$ {:.1f} deg\n'
                'Osc. amp. off: {:.4f} $\\pm$ {:.4f}\n'
                'Osc. amp. on: {:.4f} $\\pm$ {:.4f}\n'
                'Offs. diff.: {:.4f} $\\pm$ {:.4f}\n'
                'Osc. offs. off: {:.4f} $\\pm$ {:.4f}\n'
                'Osc. offs. on: {:.4f} $\\pm$ {:.4f}'.format(
self.proc_data_dict['phi_cond'][0],
self.proc_data_dict['phi_cond'][1],
self.proc_data_dict['phi_0'][0],
self.proc_data_dict['phi_0'][1],
self.proc_data_dict['phi_1'][0],
self.proc_data_dict['phi_1'][1],
self.proc_data_dict['osc_amp_0'][0],
self.proc_data_dict['osc_amp_0'][1],
self.proc_data_dict['osc_amp_1'][0],
self.proc_data_dict['osc_amp_1'][1],
self.proc_data_dict['offs_diff'][0],
self.proc_data_dict['offs_diff'][1],
self.proc_data_dict['osc_offs_0'][0],
self.proc_data_dict['osc_offs_0'][1],
self.proc_data_dict['osc_offs_1'][0],
self.proc_data_dict['osc_offs_1'][1]))
self.plot_dicts['phase_message'] = {
'ax_id': 'main',
'ypos': 0.9,
'xpos': 1.45,
'plotfn': self.plot_text,
'box_props': 'fancy',
'line_kws': {'alpha': 0},
'text_string': phase_message}
def _prepare_spectator_qubit_figure(self):
self.plot_dicts['spectator_qubit'] = {
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['xvals_off'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_spec_off'],
'ylabel': self.proc_data_dict['ylabel_spec'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ off',
'title': (self.raw_data_dict['timestamps'][0] + ' \n' +
self.raw_data_dict['measurementstring'][0]),
'do_legend': True,
# 'yrange': (0,1),
'legend_pos': 'upper right'}
self.plot_dicts['spec_on'] = {
'plotfn': self.plot_line,
'ax_id': 'spectator_qubit',
'xvals': self.proc_data_dict['xvals_on'],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': self.proc_data_dict['yvals_spec_on'],
'ylabel': self.proc_data_dict['ylabel_spec'],
'yunit': self.proc_data_dict['yunit'],
'setlabel': 'CZ on',
'do_legend': True,
'legend_pos': 'upper right'}
if self.do_fitting:
leak_msg = (
'Missing fraction: {:.2f} % '.format(
self.proc_data_dict['missing_fraction']*100))
self.plot_dicts['leak_msg'] = {
'ax_id': 'spectator_qubit',
'ypos': 0.7,
'plotfn': self.plot_text,
'box_props': 'fancy',
'line_kws': {'alpha': 0},
'text_string': leak_msg}
# offset as a guide for the eye
y = self.fit_res['cos_fit_on'].params['offset'].value
self.plot_dicts['cos_on_offset'] ={
'plotfn': self.plot_matplot_ax_method,
'ax_id':'main',
'func': 'axhline',
'plot_kws': {
'y': y, 'color': 'C1', 'linestyle': 'dotted'}
}
class StateTomographyAnalysis(ba.BaseDataAnalysis):
"""
Analyses the results of the state tomography experiment and calculates
the corresponding quantum state.
Possible options that can be passed in the options_dict parameter:
cal_points: A data structure specifying the indices of the calibration
points. See the AveragedTimedomainAnalysis for format.
The calibration points need to be in the same order as the
used basis for the result.
data_type: 'averaged' or 'singleshot'. For singleshot data each
measurement outcome is saved and arbitrary order correlations
between the states can be calculated.
meas_operators: (optional) A list of qutip operators or numpy 2d arrays.
This overrides the measurement operators otherwise
found from the calibration points.
covar_matrix: (optional) The covariance matrix of the measurement
operators as a 2d numpy array. Overrides the one found
from the calibration points.
use_covariance_matrix (bool): Flag to define whether to use the
covariance matrix
basis_rots_str: A list of standard PycQED pulse names that were
applied to qubits before measurement
basis_rots: As an alternative to single_qubit_pulses, the basis
rotations applied to the system as qutip operators or numpy
matrices can be given.
mle: True/False, whether to do maximum likelihood fit. If False, only
least squares fit will be done, which could give negative
eigenvalues for the density matrix.
imle: True/False, whether to do iterative maximum likelihood fit. If
True, it takes preference over maximum likelihood method. Otherwise
least squares fit will be done, then 'mle' option will be checked.
pauli_raw: True/False, extracts Pauli expected values from a measurement
without assignment correction based on calibration data. If True,
takes preference over other methods except pauli_corr.
pauli_values: True/False, extracts Pauli expected values from a
measurement with assignment correction based on calibration data.
If True, takes preference over other methods.
iterations (optional): maximum number of iterations allowed in imle.
Tomographies with more qubits require more iterations to converge.
tolerance (optional): minimum change across iterations allowed in imle.
The iteration will stop if it goes under this value. Tomographies
with more qubits require smaller tolerance to converge.
rho_target (optional): A qutip density matrix that the result will be
compared to when calculating fidelity.
"""
def __init__(self, *args, **kwargs):
auto = kwargs.pop('auto', True)
super().__init__(*args, **kwargs)
kwargs['auto'] = auto
self.single_timestamp = True
self.params_dict = {'exp_metadata': 'exp_metadata'}
self.numeric_params = []
self.data_type = self.options_dict['data_type']
if self.data_type == 'averaged':
self.base_analysis = AveragedTimedomainAnalysis(*args, **kwargs)
elif self.data_type == 'singleshot':
self.base_analysis = roa.MultiQubit_SingleShot_Analysis(
*args, **kwargs)
else:
raise KeyError("Invalid tomography data mode: '" + self.data_type +
"'. Valid modes are 'averaged' and 'singleshot'.")
if kwargs.get('auto', True):
self.run_analysis()
def process_data(self):
tomography_qubits = self.options_dict.get('tomography_qubits', None)
data, Fs, Omega = self.base_analysis.measurement_operators_and_results(
tomography_qubits)
if 'data_filter' in self.options_dict:
data = self.options_dict['data_filter'](data.T).T
data = data.T
for i, v in enumerate(data):
data[i] = v / v.sum()
data = data.T
Fs = self.options_dict.get('meas_operators', Fs)
Fs = [qtp.Qobj(F) for F in Fs]
d = Fs[0].shape[0]
self.proc_data_dict['d'] = d
Omega = self.options_dict.get('covar_matrix', Omega)
if Omega is None:
Omega = np.diag(np.ones(len(Fs)))
elif len(Omega.shape) == 1:
Omega = np.diag(Omega)
metadata = self.raw_data_dict.get('exp_metadata',
self.options_dict.get(
'exp_metadata', {}))
if metadata is None:
metadata = {}
self.raw_data_dict['exp_metadata'] = metadata
basis_rots_str = metadata.get('basis_rots_str', None)
basis_rots_str = self.options_dict.get('basis_rots_str', basis_rots_str)
if basis_rots_str is not None:
nr_qubits = int(np.round(np.log2(d)))
pulse_list = list(itertools.product(basis_rots_str,
repeat=nr_qubits))
rotations = tomo.standard_qubit_pulses_to_rotations(pulse_list)
else:
rotations = metadata.get('basis_rots', None)
rotations = self.options_dict.get('basis_rots', rotations)
if rotations is None:
raise KeyError("Either 'basis_rots_str' or 'basis_rots' "
"parameter must be passed in the options "
"dictionary or in the experimental metadata.")
rotations = [qtp.Qobj(U) for U in rotations]
all_Fs = tomo.rotated_measurement_operators(rotations, Fs)
        all_Fs = list(itertools.chain(*np.array(all_Fs, dtype=object).T))
all_mus = np.array(list(itertools.chain(*data.T)))
all_Omegas = sp.linalg.block_diag(*[Omega] * len(data[0]))
self.proc_data_dict['meas_operators'] = all_Fs
self.proc_data_dict['covar_matrix'] = all_Omegas
self.proc_data_dict['meas_results'] = all_mus
if self.options_dict.get('pauli_values', False):
rho_pauli = tomo.pauli_values_tomography(all_mus,Fs,basis_rots_str)
self.proc_data_dict['rho_raw'] = rho_pauli
self.proc_data_dict['rho'] = rho_pauli
elif self.options_dict.get('pauli_raw', False):
pauli_raw = self.generate_raw_pauli_set()
rho_raw = tomo.pauli_set_to_density_matrix(pauli_raw)
self.proc_data_dict['rho_raw'] = rho_raw
self.proc_data_dict['rho'] = rho_raw
elif self.options_dict.get('imle', False):
it = metadata.get('iterations', None)
it = self.options_dict.get('iterations', it)
tol = metadata.get('tolerance', None)
tol = self.options_dict.get('tolerance', tol)
rho_imle = tomo.imle_tomography(
all_mus, all_Fs, it, tol)
self.proc_data_dict['rho_imle'] = rho_imle
self.proc_data_dict['rho'] = rho_imle
else:
rho_ls = tomo.least_squares_tomography(
all_mus, all_Fs,
all_Omegas if self.get_param_value('use_covariance_matrix', False)
else None )
self.proc_data_dict['rho_ls'] = rho_ls
self.proc_data_dict['rho'] = rho_ls
if self.options_dict.get('mle', False):
rho_mle = tomo.mle_tomography(
all_mus, all_Fs,
all_Omegas if self.get_param_value('use_covariance_matrix', False) else None,
rho_guess=rho_ls)
self.proc_data_dict['rho_mle'] = rho_mle
self.proc_data_dict['rho'] = rho_mle
rho = self.proc_data_dict['rho']
self.proc_data_dict['purity'] = (rho * rho).tr().real
rho_target = metadata.get('rho_target', None)
rho_target = self.options_dict.get('rho_target', rho_target)
if rho_target is not None:
self.proc_data_dict['fidelity'] = tomo.fidelity(rho, rho_target)
if d == 4:
self.proc_data_dict['concurrence'] = tomo.concurrence(rho)
else:
self.proc_data_dict['concurrence'] = 0
def prepare_plots(self):
self.prepare_density_matrix_plot()
d = self.proc_data_dict['d']
if 2 ** (d.bit_length() - 1) == d:
# dimension is power of two, plot expectation values of pauli
# operators
self.prepare_pauli_basis_plot()
def prepare_density_matrix_plot(self):
self.tight_fig = self.options_dict.get('tight_fig', False)
rho_target = self.raw_data_dict['exp_metadata'].get('rho_target', None)
rho_target = self.options_dict.get('rho_target', rho_target)
d = self.proc_data_dict['d']
xtick_labels = self.options_dict.get('rho_ticklabels', None)
ytick_labels = self.options_dict.get('rho_ticklabels', None)
if 2 ** (d.bit_length() - 1) == d:
nr_qubits = d.bit_length() - 1
fmt_string = '{{:0{}b}}'.format(nr_qubits)
labels = [fmt_string.format(i) for i in range(2 ** nr_qubits)]
if xtick_labels is None:
xtick_labels = ['$|' + lbl + r'\rangle$' for lbl in labels]
if ytick_labels is None:
ytick_labels = [r'$\langle' + lbl + '|$' for lbl in labels]
color = (0.5 * np.angle(self.proc_data_dict['rho'].full()) / np.pi) % 1.
cmap = self.options_dict.get('rho_colormap', self.default_phase_cmap())
if self.options_dict.get('pauli_raw', False):
title = 'Density matrix reconstructed from the Pauli (raw) set\n'
elif self.options_dict.get('pauli_values', False):
title = 'Density matrix reconstructed from the Pauli set\n'
elif self.options_dict.get('mle', False):
title = 'Maximum likelihood fit of the density matrix\n'
        elif self.options_dict.get('imle', False):
title = 'Iterative maximum likelihood fit of the density matrix\n'
else:
title = 'Least squares fit of the density matrix\n'
empty_artist = mpl.patches.Rectangle((0, 0), 0, 0, visible=False)
legend_entries = [(empty_artist,
r'Purity, $Tr(\rho^2) = {:.1f}\%$'.format(
100 * self.proc_data_dict['purity']))]
if rho_target is not None:
legend_entries += [
(empty_artist, r'Fidelity, $F = {:.1f}\%$'.format(
100 * self.proc_data_dict['fidelity']))]
if d == 4:
legend_entries += [
(empty_artist, r'Concurrence, $C = {:.2f}$'.format(
self.proc_data_dict['concurrence']))]
meas_string = self.base_analysis.\
raw_data_dict['measurementstring']
if isinstance(meas_string, list):
if len(meas_string) > 1:
meas_string = meas_string[0] + ' to ' + meas_string[-1]
else:
meas_string = meas_string[0]
self.plot_dicts['density_matrix'] = {
'plotfn': self.plot_bar3D,
'3d': True,
'3d_azim': -35,
'3d_elev': 35,
'xvals': np.arange(d),
'yvals': np.arange(d),
'zvals': np.abs(self.proc_data_dict['rho'].full()),
'zrange': (0, 1),
'color': color,
'colormap': cmap,
'bar_widthx': 0.5,
'bar_widthy': 0.5,
'xtick_loc': np.arange(d),
'xtick_labels': xtick_labels,
'ytick_loc': np.arange(d),
'ytick_labels': ytick_labels,
'ctick_loc': np.linspace(0, 1, 5),
'ctick_labels': ['$0$', r'$\frac{1}{2}\pi$', r'$\pi$',
r'$\frac{3}{2}\pi$', r'$2\pi$'],
'clabel': 'Phase (rad)',
'title': (title + self.raw_data_dict['timestamp'] + ' ' +
meas_string),
'do_legend': True,
'legend_entries': legend_entries,
'legend_kws': dict(loc='upper left', bbox_to_anchor=(0, 0.94))
}
if rho_target is not None:
rho_target = qtp.Qobj(rho_target)
if rho_target.type == 'ket':
rho_target = rho_target * rho_target.dag()
elif rho_target.type == 'bra':
rho_target = rho_target.dag() * rho_target
self.plot_dicts['density_matrix_target'] = {
'plotfn': self.plot_bar3D,
'3d': True,
'3d_azim': -35,
'3d_elev': 35,
'xvals': np.arange(d),
'yvals': np.arange(d),
'zvals': np.abs(rho_target.full()),
'zrange': (0, 1),
'color': (0.5 * np.angle(rho_target.full()) / np.pi) % 1.,
'colormap': cmap,
'bar_widthx': 0.5,
'bar_widthy': 0.5,
'xtick_loc': np.arange(d),
'xtick_labels': xtick_labels,
'ytick_loc': np.arange(d),
'ytick_labels': ytick_labels,
'ctick_loc': np.linspace(0, 1, 5),
'ctick_labels': ['$0$', r'$\frac{1}{2}\pi$', r'$\pi$',
r'$\frac{3}{2}\pi$', r'$2\pi$'],
'clabel': 'Phase (rad)',
'title': ('Target density matrix\n' +
self.raw_data_dict['timestamp'] + ' ' +
meas_string),
'bar_kws': dict(zorder=1),
}
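    # The raw Pauli set below estimates each expectation value <P> directly
    # from the measurement results: every result whose (rounded) operator
    # overlap Tr(F*P) is nonzero contributes with the sign of that overlap,
    # and the signed average is rescaled by 2**nr_qubits.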
def generate_raw_pauli_set(self):
nr_qubits = self.proc_data_dict['d'].bit_length() - 1
pauli_raw_values = []
for op in tomo.generate_pauli_set(nr_qubits)[1]:
nr_terms = 0
sum_terms = 0.
for meas_op, meas_res in zip(self.proc_data_dict['meas_operators'],
self.proc_data_dict['meas_results']):
trace = (meas_op*op).tr().real
clss = int(trace*2)
if clss < 0:
sum_terms -= meas_res
nr_terms += 1
elif clss > 0:
sum_terms += meas_res
nr_terms += 1
pauli_raw_values.append(2**nr_qubits*sum_terms/nr_terms)
return pauli_raw_values
def generate_corr_pauli_set(self,Fs,rotations):
nr_qubits = self.proc_data_dict['d'].bit_length() - 1
Fs_corr = []
assign_corr = []
for i,F in enumerate(Fs):
new_op = np.zeros(2**nr_qubits)
new_op[i] = 1
Fs_corr.append(qtp.Qobj(np.diag(new_op)))
assign_corr.append(np.diag(F.full()))
pauli_Fs = tomo.rotated_measurement_operators(rotations, Fs_corr)
        pauli_Fs = list(itertools.chain(*np.array(pauli_Fs, dtype=object).T))
mus = self.proc_data_dict['meas_results']
pauli_mus = np.reshape(mus,[-1,2**nr_qubits])
for i,raw_mus in enumerate(pauli_mus):
pauli_mus[i] = np.matmul(np.linalg.inv(assign_corr),np.array(raw_mus))
pauli_mus = pauli_mus.flatten()
pauli_values = []
for op in tomo.generate_pauli_set(nr_qubits)[1]:
nr_terms = 0
sum_terms = 0.
for meas_op, meas_res in zip(pauli_Fs,pauli_mus):
trace = (meas_op*op).tr().real
clss = int(trace*2)
if clss < 0:
sum_terms -= meas_res
nr_terms += 1
elif clss > 0:
sum_terms += meas_res
nr_terms += 1
pauli_values.append(2**nr_qubits*sum_terms/nr_terms)
return pauli_values
def prepare_pauli_basis_plot(self):
yexp = tomo.density_matrix_to_pauli_basis(self.proc_data_dict['rho'])
nr_qubits = self.proc_data_dict['d'].bit_length() - 1
labels = list(itertools.product(*[['I', 'X', 'Y', 'Z']]*nr_qubits))
labels = [''.join(label_list) for label_list in labels]
if nr_qubits == 1:
order = [1, 2, 3]
elif nr_qubits == 2:
order = [1, 2, 3, 4, 8, 12, 5, 6, 7, 9, 10, 11, 13, 14, 15]
elif nr_qubits == 3:
order = [1, 2, 3, 4, 8, 12, 16, 32, 48] + \
[5, 6, 7, 9, 10, 11, 13, 14, 15] + \
[17, 18, 19, 33, 34, 35, 49, 50, 51] + \
[20, 24, 28, 36, 40, 44, 52, 56, 60] + \
[21, 22, 23, 25, 26, 27, 29, 30, 31] + \
[37, 38, 39, 41, 42, 43, 45, 46, 47] + \
[53, 54, 55, 57, 58, 59, 61, 62, 63]
else:
order = np.arange(4**nr_qubits)[1:]
if self.options_dict.get('pauli_raw', False):
fit_type = 'raw counts'
elif self.options_dict.get('pauli_values', False):
fit_type = 'corrected counts'
elif self.options_dict.get('mle', False):
fit_type = 'maximum likelihood estimation'
elif self.options_dict.get('imle', False):
fit_type = 'iterative maximum likelihood estimation'
else:
fit_type = 'least squares fit'
meas_string = self.base_analysis. \
raw_data_dict['measurementstring']
if np.ndim(meas_string) > 0:
if len(meas_string) > 1:
meas_string = meas_string[0] + ' to ' + meas_string[-1]
else:
meas_string = meas_string[0]
self.plot_dicts['pauli_basis'] = {
'plotfn': self.plot_bar,
'xcenters': np.arange(len(order)),
'xwidth': 0.4,
'xrange': (-1, len(order)),
'yvals': np.array(yexp)[order],
'xlabel': r'Pauli operator, $\hat{O}$',
'ylabel': r'Expectation value, $\mathrm{Tr}(\hat{O} \hat{\rho})$',
'title': 'Pauli operators, ' + fit_type + '\n' +
self.raw_data_dict['timestamp'] + ' ' + meas_string,
'yrange': (-1.1, 1.1),
'xtick_loc': np.arange(4**nr_qubits - 1),
'xtick_rotation': 90,
'xtick_labels': np.array(labels)[order],
'bar_kws': dict(zorder=10),
'setlabel': 'Fit to experiment',
'do_legend': True
}
if nr_qubits > 2:
self.plot_dicts['pauli_basis']['plotsize'] = (10, 5)
rho_target = self.raw_data_dict['exp_metadata'].get('rho_target', None)
rho_target = self.options_dict.get('rho_target', rho_target)
if rho_target is not None:
rho_target = qtp.Qobj(rho_target)
ytar = tomo.density_matrix_to_pauli_basis(rho_target)
self.plot_dicts['pauli_basis_target'] = {
'plotfn': self.plot_bar,
'ax_id': 'pauli_basis',
'xcenters': np.arange(len(order)),
'xwidth': 0.8,
'yvals': np.array(ytar)[order],
'xtick_loc': np.arange(len(order)),
'xtick_labels': np.array(labels)[order],
'bar_kws': dict(color='0.8', zorder=0),
'setlabel': 'Target values',
'do_legend': True
}
purity_str = r'Purity, $Tr(\rho^2) = {:.1f}\%$'.format(
100 * self.proc_data_dict['purity'])
if rho_target is not None:
fidelity_str = '\n' + r'Fidelity, $F = {:.1f}\%$'.format(
100 * self.proc_data_dict['fidelity'])
else:
fidelity_str = ''
if self.proc_data_dict['d'] == 4:
concurrence_str = '\n' + r'Concurrence, $C = {:.1f}\%$'.format(
100 * self.proc_data_dict['concurrence'])
else:
concurrence_str = ''
self.plot_dicts['pauli_info_labels'] = {
'ax_id': 'pauli_basis',
'plotfn': self.plot_line,
'xvals': [0],
'yvals': [0],
'line_kws': {'alpha': 0},
'setlabel': purity_str + fidelity_str,
'do_legend': True
}
def default_phase_cmap(self):
cols = np.array(((41, 39, 231), (61, 130, 163), (208, 170, 39),
(209, 126, 4), (181, 28, 20), (238, 76, 152),
(251, 130, 242), (162, 112, 251))) / 255
n = len(cols)
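        # Each cdict entry is a list of (x, value_below, value_above) anchor
        # points, as expected by LinearSegmentedColormap; the colors are
        # cycled with i % n so that the map wraps around, which suits
        # periodic (phase) data.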
cdict = {
'red': [[i/n, cols[i%n][0], cols[i%n][0]] for i in range(n+1)],
'green': [[i/n, cols[i%n][1], cols[i%n][1]] for i in range(n+1)],
'blue': [[i/n, cols[i%n][2], cols[i%n][2]] for i in range(n+1)],
}
return mpl.colors.LinearSegmentedColormap('DMDefault', cdict)
class ReadoutROPhotonsAnalysis(Single_Qubit_TimeDomainAnalysis):
"""
    Analyses the photon number in the readout resonator (RO) based on the
    readout_photons_in_resonator function.
    Function-specific options for the options dict (kappa_effective, chi and
    T2echo are required):
        kappa_effective
        chi
        T2echo
        f_qubit
        artif_detuning
        print_fit_results
"""
def __init__(self, t_start: str=None, t_stop: str=None,
label: str='', data_file_path: str=None,
close_figs: bool=False, options_dict: dict=None,
extract_only: bool=False, do_fitting: bool=False,
auto: bool=True):
super().__init__(t_start=t_start, t_stop=t_stop,
data_file_path=data_file_path,
options_dict=options_dict,
close_figs=close_figs, label=label,
extract_only=extract_only, do_fitting=do_fitting)
if self.options_dict.get('TwoD', None) is None:
self.options_dict['TwoD'] = True
self.label = label
self.params_dict = {
'measurementstring': 'measurementstring',
'sweep_points': 'sweep_points',
'sweep_points_2D': 'sweep_points_2D',
'value_names': 'value_names',
'value_units': 'value_units',
'measured_values': 'measured_values'}
self.numeric_params = self.options_dict.get('numeric_params',
OrderedDict())
self.kappa = self.options_dict.get('kappa_effective', None)
self.chi = self.options_dict.get('chi', None)
self.T2 = self.options_dict.get('T2echo', None)
self.artif_detuning = self.options_dict.get('artif_detuning', 0)
if (self.kappa is None) or (self.chi is None) or (self.T2 is None):
raise ValueError('kappa_effective, chi and T2echo must be passed to '
'the options_dict.')
if auto:
self.run_analysis()
def process_data(self):
self.proc_data_dict = OrderedDict()
self.proc_data_dict['qubit_state'] = [[],[]]
self.proc_data_dict['delay_to_relax'] = self.raw_data_dict[
'sweep_points_2D'][0]
self.proc_data_dict['ramsey_times'] = []
for i,x in enumerate(np.transpose(self.raw_data_dict[
'measured_data']['raw w0 _measure'][0])):
self.proc_data_dict['qubit_state'][0].append([])
self.proc_data_dict['qubit_state'][1].append([])
for j,y in enumerate(np.transpose(self.raw_data_dict[
'measured_data']['raw w0 _measure'][0])[i]):
if j%2 == 0:
self.proc_data_dict['qubit_state'][0][i].append(y)
else:
self.proc_data_dict['qubit_state'][1][i].append(y)
for i,x in enumerate( self.raw_data_dict['sweep_points'][0]):
if i % 2 == 0:
self.proc_data_dict['ramsey_times'].append(x)
        # TODO: chi still needs to be passed on here.
def prepare_fitting(self):
self.proc_data_dict['photon_number'] = [[],[]]
self.proc_data_dict['fit_results'] = []
self.proc_data_dict['ramsey_fit_results'] = [[],[]]
for i,tau in enumerate(self.proc_data_dict['delay_to_relax']):
self.proc_data_dict['ramsey_fit_results'][0].append(self.fit_Ramsey(
self.proc_data_dict['ramsey_times'][:-4],
self.proc_data_dict['qubit_state'][0][i][:-4]/
max(self.proc_data_dict['qubit_state'][0][i][:-4]),
state=0,
kw=self.options_dict))
self.proc_data_dict['ramsey_fit_results'][1].append(self.fit_Ramsey(
self.proc_data_dict['ramsey_times'][:-4],
self.proc_data_dict['qubit_state'][1][i][:-4]/
max(self.proc_data_dict['qubit_state'][1][i][:-4]),
state=1,
kw=self.options_dict))
n01 = self.proc_data_dict['ramsey_fit_results'
][0][i][0].params['n0'].value
n02 = self.proc_data_dict['ramsey_fit_results'
][1][i][0].params['n0'].value
self.proc_data_dict['photon_number'][0].append(n01)
self.proc_data_dict['photon_number'][1].append(n02)
def run_fitting(self):
        print_fit_results = self.options_dict.get('print_fit_results', False)
exp_dec_mod = lmfit.Model(fit_mods.ExpDecayFunc)
exp_dec_mod.set_param_hint('n',
value=1,
vary=False)
exp_dec_mod.set_param_hint('offset',
value=0,
min=0,
vary=True)
exp_dec_mod.set_param_hint('tau',
value=self.proc_data_dict[
'delay_to_relax'][-1],
min=1e-11,
vary=True)
exp_dec_mod.set_param_hint('amplitude',
value=1,
min=0,
vary=True)
params = exp_dec_mod.make_params()
self.fit_res = OrderedDict()
self.fit_res['ground_state'] = exp_dec_mod.fit(
data=self.proc_data_dict['photon_number'][0],
params=params,
t=self.proc_data_dict['delay_to_relax'])
self.fit_res['excited_state'] = exp_dec_mod.fit(
data=self.proc_data_dict['photon_number'][1],
params=params,
t=self.proc_data_dict['delay_to_relax'])
if print_fit_results:
print(self.fit_res['ground_state'].fit_report())
print(self.fit_res['excited_state'].fit_report())
def fit_Ramsey(self, x, y, state, **kw):
x = np.array(x)
y = np.array(y)
exp_dec_p_mod = lmfit.Model(fit_mods.ExpDecayPmod)
comb_exp_dec_mod = lmfit.Model(fit_mods.CombinedOszExpDecayFunc)
average = np.mean(y)
ft_of_data = np.fft.fft(y)
index_of_fourier_maximum = np.argmax(np.abs(
ft_of_data[1:len(ft_of_data) // 2])) + 1
max_ramsey_delay = x[-1] - x[0]
fft_axis_scaling = 1 / max_ramsey_delay
freq_est = fft_axis_scaling * index_of_fourier_maximum
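        # Rough photon-number estimate: each residual photon shifts the
        # Ramsey frequency by ~2*chi, so subtract the artificial detuning
        # from the FFT-based frequency estimate and divide by 2*chi.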
n_est = (freq_est-self.artif_detuning)/(2 * self.chi)
exp_dec_p_mod.set_param_hint('T2echo',
value=self.T2,
vary=False)
exp_dec_p_mod.set_param_hint('offset',
value=average,
min=0,
vary=True)
exp_dec_p_mod.set_param_hint('delta',
value=self.artif_detuning,
vary=False)
exp_dec_p_mod.set_param_hint('amplitude',
value=1,
min=0,
vary=True)
exp_dec_p_mod.set_param_hint('kappa',
value=self.kappa[state],
vary=False)
exp_dec_p_mod.set_param_hint('chi',
value=self.chi,
vary=False)
exp_dec_p_mod.set_param_hint('n0',
value=n_est,
min=0,
vary=True)
exp_dec_p_mod.set_param_hint('phase',
value=0,
vary=True)
comb_exp_dec_mod.set_param_hint('tau',
value=self.T2,
vary=True)
comb_exp_dec_mod.set_param_hint('offset',
value=average,
min=0,
vary=True)
comb_exp_dec_mod.set_param_hint('oscillation_offset',
value=average,
min=0,
vary=True)
comb_exp_dec_mod.set_param_hint('amplitude',
value=1,
min=0,
vary=True)
comb_exp_dec_mod.set_param_hint('tau_gauss',
value=self.kappa[state],
vary=True)
comb_exp_dec_mod.set_param_hint('n0',
value=n_est,
min=0,
vary=True)
comb_exp_dec_mod.set_param_hint('phase',
value=0,
vary=True)
comb_exp_dec_mod.set_param_hint('delta',
value=self.artif_detuning,
vary=False)
comb_exp_dec_mod.set_param_hint('chi',
value=self.chi,
vary=False)
if (np.average(y[:4]) >
np.average(y[4:8])):
phase_estimate = 0
else:
phase_estimate = np.pi
exp_dec_p_mod.set_param_hint('phase',
value=phase_estimate, vary=True)
comb_exp_dec_mod.set_param_hint('phase',
value=phase_estimate, vary=True)
amplitude_guess = 0.5
if np.all(np.logical_and(y >= 0, y <= 1)):
exp_dec_p_mod.set_param_hint('amplitude',
value=amplitude_guess,
min=0.00,
max=4.0,
vary=True)
comb_exp_dec_mod.set_param_hint('amplitude',
value=amplitude_guess,
min=0.00,
max=4.0,
vary=True)
else:
            log.warning('data is not normalized, varying amplitude')
exp_dec_p_mod.set_param_hint('amplitude',
value=max(y),
min=0.00,
max=4.0,
vary=True)
comb_exp_dec_mod.set_param_hint('amplitude',
value=max(y),
min=0.00,
max=4.0,
vary=True)
fit_res_1 = exp_dec_p_mod.fit(data=y,
t=x,
params= exp_dec_p_mod.make_params())
fit_res_2 = comb_exp_dec_mod.fit(data=y,
t=x,
params= comb_exp_dec_mod.make_params())
if fit_res_1.chisqr > .35:
log.warning('Fit did not converge, varying phase')
fit_res_lst = []
for phase_estimate in np.linspace(0, 2*np.pi, 10):
for i, del_amp in enumerate(np.linspace(
-max(y)/10, max(y)/10, 10)):
exp_dec_p_mod.set_param_hint('phase',
value=phase_estimate,
vary=False)
exp_dec_p_mod.set_param_hint('amplitude',
value=max(y)+ del_amp)
fit_res_lst += [exp_dec_p_mod.fit(
data=y,
t=x,
params= exp_dec_p_mod.make_params())]
            chisqr_lst = [fr.chisqr for fr in fit_res_lst]
fit_res_1 = fit_res_lst[np.argmin(chisqr_lst)]
if fit_res_2.chisqr > .35:
log.warning('Fit did not converge, varying phase')
fit_res_lst = []
for phase_estimate in np.linspace(0, 2*np.pi, 10):
for i, del_amp in enumerate(np.linspace(
-max(y)/10, max(y)/10, 10)):
comb_exp_dec_mod.set_param_hint('phase',
value=phase_estimate,
vary=False)
comb_exp_dec_mod.set_param_hint('amplitude',
value=max(y)+ del_amp)
fit_res_lst += [comb_exp_dec_mod.fit(
data=y,
t=x,
params= comb_exp_dec_mod.make_params())]
            chisqr_lst = [fr.chisqr for fr in fit_res_lst]
fit_res_2 = fit_res_lst[np.argmin(chisqr_lst)]
if fit_res_1.chisqr < fit_res_2.chisqr:
self.proc_data_dict['params'] = exp_dec_p_mod.make_params()
return [fit_res_1,fit_res_1,fit_res_2]
else:
self.proc_data_dict['params'] = comb_exp_dec_mod.make_params()
return [fit_res_2,fit_res_1,fit_res_2]
def prepare_plots(self):
self.prepare_2D_sweep_plot()
self.prepare_photon_number_plot()
self.prepare_ramsey_plots()
def prepare_2D_sweep_plot(self):
self.plot_dicts['off_full_data_'+self.label] = {
'title': 'Raw data |g>',
'plotfn': self.plot_colorxy,
'xvals': self.proc_data_dict['ramsey_times'],
'xlabel': 'Ramsey delays',
'xunit': 's',
'yvals': self.proc_data_dict['delay_to_relax'],
'ylabel': 'Delay after first RO-pulse',
'yunit': 's',
'zvals': np.array(self.proc_data_dict['qubit_state'][0]) }
self.plot_dicts['on_full_data_'+self.label] = {
'title': 'Raw data |e>',
'plotfn': self.plot_colorxy,
'xvals': self.proc_data_dict['ramsey_times'],
'xlabel': 'Ramsey delays',
'xunit': 's',
'yvals': self.proc_data_dict['delay_to_relax'],
'ylabel': 'Delay after first RO-pulse',
'yunit': 's',
'zvals': np.array(self.proc_data_dict['qubit_state'][1]) }
def prepare_ramsey_plots(self):
x_fit = np.linspace(self.proc_data_dict['ramsey_times'][0],
max(self.proc_data_dict['ramsey_times']),101)
for i in range(len(self.proc_data_dict['ramsey_fit_results'][0])):
self.plot_dicts['off_'+str(i)] = {
'title': 'Ramsey w t_delay = '+\
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |g> state',
'ax_id':'ramsey_off_'+str(i),
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['ramsey_times'],
'xlabel': 'Ramsey delays',
'xunit': 's',
'yvals': np.array(self.proc_data_dict['qubit_state'][0][i]/
max(self.proc_data_dict['qubit_state'][0][i][:-4])),
'ylabel': 'Measured qubit state',
'yunit': '',
'marker': 'o',
'setlabel': '|g> data_'+str(i),
'do_legend': True }
self.plot_dicts['off_fit_'+str(i)] = {
'title': 'Ramsey w t_delay = '+ \
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |g> state',
'ax_id':'ramsey_off_'+str(i),
'plotfn': self.plot_line,
'xvals': x_fit,
'yvals': self.proc_data_dict['ramsey_fit_results'][0][i][1].eval(
self.proc_data_dict['ramsey_fit_results'][0][i][1].params,
t=x_fit),
'linestyle': '-',
'marker': '',
'setlabel': '|g> fit_model'+str(i),
'do_legend': True }
self.plot_dicts['off_fit_2_'+str(i)] = {
'title': 'Ramsey w t_delay = '+ \
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |g> state',
'ax_id':'ramsey_off_'+str(i),
'plotfn': self.plot_line,
'xvals': x_fit,
'yvals': self.proc_data_dict['ramsey_fit_results'][0][i][2].eval(
self.proc_data_dict['ramsey_fit_results'][0][i][2].params,
t=x_fit),
'linestyle': '-',
'marker': '',
                'setlabel': '|g> fit_simple_model'+str(i),
'do_legend': True }
self.plot_dicts['hidden_g_'+str(i)] = {
'ax_id':'ramsey_off_'+str(i),
'plotfn': self.plot_line,
'xvals': [0],
'yvals': [0],
'color': 'w',
'setlabel': 'Residual photon count = '
''+str(self.proc_data_dict['photon_number'][0][i]),
'do_legend': True }
self.plot_dicts['on_'+str(i)] = {
'title': 'Ramsey w t_delay = '+ \
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |e> state',
'ax_id':'ramsey_on_'+str(i),
'plotfn': self.plot_line,
'xvals': self.proc_data_dict['ramsey_times'],
'xlabel': 'Ramsey delays',
'xunit': 's',
'yvals': np.array(self.proc_data_dict['qubit_state'][1][i]/
max(self.proc_data_dict['qubit_state'][1][i][:-4])),
'ylabel': 'Measured qubit state',
'yunit': '',
'marker': 'o',
'setlabel': '|e> data_'+str(i),
'do_legend': True }
self.plot_dicts['on_fit_'+str(i)] = {
'title': 'Ramsey w t_delay = '+ \
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |e> state',
'ax_id':'ramsey_on_'+str(i),
'plotfn': self.plot_line,
'xvals': x_fit,
'yvals': self.proc_data_dict['ramsey_fit_results'][1][i][1].eval(
self.proc_data_dict['ramsey_fit_results'][1][i][1].params,
t=x_fit),
'linestyle': '-',
'marker': '',
'setlabel': '|e> fit_model'+str(i),
'do_legend': True }
self.plot_dicts['on_fit_2_'+str(i)] = {
'title': 'Ramsey w t_delay = '+ \
str(self.proc_data_dict['delay_to_relax'][i])+ \
' s, in |e> state',
'ax_id':'ramsey_on_'+str(i),
'plotfn': self.plot_line,
'xvals': x_fit,
'yvals': self.proc_data_dict['ramsey_fit_results'][1][i][2].eval(
self.proc_data_dict['ramsey_fit_results'][1][i][2].params,
t=x_fit),
'linestyle': '-',
'marker': '',
                'setlabel': '|e> fit_simple_model'+str(i),
'do_legend': True }
self.plot_dicts['hidden_e_'+str(i)] = {
'ax_id':'ramsey_on_'+str(i),
'plotfn': self.plot_line,
'xvals': [0],
'yvals': [0],
'color': 'w',
'setlabel': 'Residual photon count = '
''+str(self.proc_data_dict['photon_number'][1][i]),
'do_legend': True }
def prepare_photon_number_plot(self):
ylabel = 'Average photon number'
yunit = ''
x_fit = np.linspace(min(self.proc_data_dict['delay_to_relax']),
max(self.proc_data_dict['delay_to_relax']),101)
minmax_data = [min(min(self.proc_data_dict['photon_number'][0]),
min(self.proc_data_dict['photon_number'][1])),
max(max(self.proc_data_dict['photon_number'][0]),
max(self.proc_data_dict['photon_number'][1]))]
minmax_data[0] -= minmax_data[0]/5
minmax_data[1] += minmax_data[1]/5
self.plot_dicts['Photon number count'] = {
'plotfn': self.plot_line,
'xlabel': 'Delay after first RO-pulse',
'ax_id': 'Photon number count ',
'xunit': 's',
'xvals': self.proc_data_dict['delay_to_relax'],
'yvals': self.proc_data_dict['photon_number'][0],
'ylabel': ylabel,
'yunit': yunit,
'yrange': minmax_data,
'title': 'Residual photon number',
'color': 'b',
'linestyle': '',
'marker': 'o',
'setlabel': '|g> data',
'func': 'semilogy',
'do_legend': True}
self.plot_dicts['main2'] = {
'plotfn': self.plot_line,
'xunit': 's',
'xvals': x_fit,
'yvals': self.fit_res['ground_state'].eval(
self.fit_res['ground_state'].params,
t=x_fit),
'yrange': minmax_data,
'ax_id': 'Photon number count ',
'color': 'b',
'linestyle': '-',
'marker': '',
'setlabel': '|g> fit',
'func': 'semilogy',
'do_legend': True}
self.plot_dicts['main3'] = {
'plotfn': self.plot_line,
'xunit': 's',
'xvals': self.proc_data_dict['delay_to_relax'],
'yvals': self.proc_data_dict['photon_number'][1],
'yrange': minmax_data,
'ax_id': 'Photon number count ',
'color': 'r',
'linestyle': '',
'marker': 'o',
'setlabel': '|e> data',
'func': 'semilogy',
'do_legend': True}
self.plot_dicts['main4'] = {
'plotfn': self.plot_line,
'xunit': 's',
'ax_id': 'Photon number count ',
'xvals': x_fit,
'yvals': self.fit_res['excited_state'].eval(
self.fit_res['excited_state'].params,
t=x_fit),
'yrange': minmax_data,
'ylabel': ylabel,
'color': 'r',
'linestyle': '-',
'marker': '',
'setlabel': '|e> fit',
'func': 'semilogy',
'do_legend': True}
self.plot_dicts['hidden_1'] = {
'ax_id': 'Photon number count ',
'plotfn': self.plot_line,
'yrange': minmax_data,
'xvals': [0],
'yvals': [0],
'color': 'w',
'setlabel': 'tau_g = '
''+str("%.3f" %
(self.fit_res['ground_state'].params['tau'].value*1e9))+''
' ns',
'do_legend': True }
self.plot_dicts['hidden_2'] = {
'ax_id': 'Photon number count ',
'plotfn': self.plot_line,
'yrange': minmax_data,
'xvals': [0],
'yvals': [0],
'color': 'w',
'setlabel': 'tau_e = '
''+str("%.3f" %
(self.fit_res['excited_state'].params['tau'].value*1e9))+''
' ns',
'do_legend': True}
class RODynamicPhaseAnalysis(MultiQubit_TimeDomain_Analysis):
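    """
    Extracts the dynamic phase (in degrees) acquired by each measured qubit,
    taken as the difference between the fitted oscillation phases with and
    without the measurement pulse applied to the pulsed qubit.
    """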
def __init__(self, qb_names: list=None, t_start: str=None, t_stop: str=None,
data_file_path: str=None, single_timestamp: bool=False,
options_dict: dict=None, extract_only: bool=False,
do_fitting: bool=True, auto=True):
super().__init__(qb_names=qb_names, t_start=t_start, t_stop=t_stop,
data_file_path=data_file_path,
options_dict=options_dict,
extract_only=extract_only,
do_fitting=do_fitting,
auto=False)
if auto:
self.run_analysis()
def process_data(self):
super().process_data()
if 'qbp_name' in self.metadata:
self.pulsed_qbname = self.metadata['qbp_name']
else:
self.pulsed_qbname = self.options_dict.get('pulsed_qbname')
self.measured_qubits = [qbn for qbn in self.channel_map if
qbn != self.pulsed_qbname]
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
for qbn in self.measured_qubits:
ro_dict = self.proc_data_dict['projected_data_dict'][qbn]
sweep_points = self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points']
for ro_suff, data in ro_dict.items():
cos_mod = lmfit.Model(fit_mods.CosFunc)
if self.num_cal_points != 0:
data = data[:-self.num_cal_points]
guess_pars = fit_mods.Cos_guess(
model=cos_mod,
t=sweep_points,
data=data)
guess_pars['amplitude'].vary = True
guess_pars['offset'].vary = True
guess_pars['frequency'].vary = True
guess_pars['phase'].vary = True
key = 'cos_fit_{}{}'.format(qbn, ro_suff)
self.fit_dicts[key] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': sweep_points},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
self.dynamic_phases = OrderedDict()
for meas_qbn in self.measured_qubits:
self.dynamic_phases[meas_qbn] = \
(self.fit_dicts['cos_fit_{}_measure'.format(meas_qbn)][
'fit_res'].best_values['phase'] -
self.fit_dicts['cos_fit_{}_ref_measure'.format(meas_qbn)][
'fit_res'].best_values['phase'])*180/np.pi
def prepare_plots(self):
super().prepare_plots()
if self.do_fitting:
for meas_qbn in self.measured_qubits:
sweep_points_dict = self.proc_data_dict['sweep_points_dict'][
meas_qbn]
if self.num_cal_points != 0:
yvals = [self.proc_data_dict['projected_data_dict'][meas_qbn][
'_ref_measure'][:-self.num_cal_points],
self.proc_data_dict['projected_data_dict'][meas_qbn][
'_measure'][:-self.num_cal_points]]
sweep_points = sweep_points_dict['msmt_sweep_points']
# plot cal points
for i, cal_pts_idxs in enumerate(
self.cal_states_dict.values()):
key = list(self.cal_states_dict)[i] + meas_qbn
self.plot_dicts[key] = {
'fig_id': 'dyn_phase_plot_' + meas_qbn,
'plotfn': self.plot_line,
'xvals': np.mean([
sweep_points_dict['cal_points_sweep_points'][
cal_pts_idxs],
sweep_points_dict['cal_points_sweep_points'][
cal_pts_idxs]],
axis=0),
'yvals': np.mean([
self.proc_data_dict['projected_data_dict'][meas_qbn][
'_ref_measure'][cal_pts_idxs],
self.proc_data_dict['projected_data_dict'][meas_qbn][
'_measure'][cal_pts_idxs]],
axis=0),
'setlabel': list(self.cal_states_dict)[i],
'do_legend': True,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left',
'linestyle': 'none',
'line_kws': {'color': self.get_cal_state_color(
list(self.cal_states_dict)[i])}}
else:
yvals = [self.proc_data_dict['projected_data_dict'][meas_qbn][
'_ref_measure'],
self.proc_data_dict['projected_data_dict'][meas_qbn][
'_measure']]
sweep_points = sweep_points_dict['sweep_points']
self.plot_dicts['dyn_phase_plot_' + meas_qbn] = {
'plotfn': self.plot_line,
'xvals': [sweep_points, sweep_points],
'xlabel': self.raw_data_dict['xlabel'][0],
'xunit': self.raw_data_dict['xunit'][0][0],
'yvals': yvals,
'ylabel': 'Excited state population',
'yunit': '',
'setlabel': ['with measurement', 'no measurement'],
'title': (self.raw_data_dict['timestamps'][0] + ' ' +
self.raw_data_dict['measurementstring'][0]),
'linestyle': 'none',
'do_legend': True,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left'}
self.plot_dicts['cos_fit_' + meas_qbn + '_ref_measure'] = {
'fig_id': 'dyn_phase_plot_' + meas_qbn,
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit_{}_ref_measure'.format(
meas_qbn)]['fit_res'],
'setlabel': 'cos fit',
'do_legend': True,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left'}
self.plot_dicts['cos_fit_' + meas_qbn + '_measure'] = {
'fig_id': 'dyn_phase_plot_' + meas_qbn,
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['cos_fit_{}_measure'.format(
meas_qbn)]['fit_res'],
'setlabel': 'cos fit',
'do_legend': True,
'legend_bbox_to_anchor': (1, 0.5),
'legend_pos': 'center left'}
textstr = 'Dynamic phase = {:.2f}'.format(
self.dynamic_phases[meas_qbn]) + r'$^{\circ}$'
self.plot_dicts['text_msg_' + meas_qbn] = {
'fig_id': 'dyn_phase_plot_' + meas_qbn,
'ypos': -0.175,
'xpos': 0.5,
'horizontalalignment': 'center',
'verticalalignment': 'top',
'plotfn': self.plot_text,
'text_string': textstr}
class FluxAmplitudeSweepAnalysis(MultiQubit_TimeDomain_Analysis):
def __init__(self, qb_names, *args, **kwargs):
self.mask_freq = kwargs.pop('mask_freq', None)
self.mask_amp = kwargs.pop('mask_amp', None)
super().__init__(qb_names, *args, **kwargs)
def extract_data(self):
super().extract_data()
# Set some default values specific to FluxPulseScopeAnalysis if the
# respective options have not been set by the user or in the metadata.
# (We do not do this in the init since we have to wait until
# metadata has been extracted.)
if self.get_param_value('rotation_type', default_value=None) is None:
self.options_dict['rotation_type'] = 'global_PCA'
if self.get_param_value('TwoD', default_value=None) is None:
self.options_dict['TwoD'] = True
def process_data(self):
super().process_data()
pdd = self.proc_data_dict
nr_sp = {qb: len(pdd['sweep_points_dict'][qb]['sweep_points'])
for qb in self.qb_names}
nr_sp2d = {qb: len(list(pdd['sweep_points_2D_dict'][qb].values())[0])
for qb in self.qb_names}
nr_cp = self.num_cal_points
# make matrix out of vector
data_reshaped = {qb: np.reshape(deepcopy(
pdd['data_to_fit'][qb]).T.flatten(), (nr_sp[qb], nr_sp2d[qb]))
for qb in self.qb_names}
pdd['data_reshaped'] = data_reshaped
# remove calibration points from data to fit
data_no_cp = {qb: np.array([pdd['data_reshaped'][qb][i, :]
for i in range(nr_sp[qb]-nr_cp)])
for qb in self.qb_names}
# apply mask
for qb in self.qb_names:
if self.mask_freq is None:
self.mask_freq = [True]*nr_sp2d[qb] # by default, no point is masked
if self.mask_amp is None:
self.mask_amp = [True]*(nr_sp[qb]-nr_cp)
pdd['freqs_masked'] = {}
pdd['amps_masked'] = {}
pdd['data_masked'] = {}
for qb in self.qb_names:
sp_param = [k for k in self.mospm[qb] if 'freq' in k][0]
pdd['freqs_masked'][qb] = \
pdd['sweep_points_2D_dict'][qb][sp_param][self.mask_freq]
pdd['amps_masked'][qb] = \
pdd['sweep_points_dict'][qb]['sweep_points'][
:-self.num_cal_points][self.mask_amp]
data_masked = data_no_cp[qb][self.mask_amp,:]
pdd['data_masked'][qb] = data_masked[:, self.mask_freq]
def prepare_fitting(self):
pdd = self.proc_data_dict
self.fit_dicts = OrderedDict()
# Gaussian fit of amplitude slices
gauss_mod = fit_mods.GaussianModel_v2()
for qb in self.qb_names:
for i in range(len(pdd['amps_masked'][qb])):
data = pdd['data_masked'][qb][i,:]
self.fit_dicts[f'gauss_fit_{qb}_{i}'] = {
'model': gauss_mod,
'fit_xvals': {'x': pdd['freqs_masked'][qb]},
'fit_yvals': {'data': data}
}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['gauss_center'] = {}
pdd['gauss_center_err'] = {}
pdd['filtered_center'] = {}
pdd['filtered_amps'] = {}
for qb in self.qb_names:
pdd['gauss_center'][qb] = np.array([
self.fit_res[f'gauss_fit_{qb}_{i}'].best_values['center']
for i in range(len(pdd['amps_masked'][qb]))])
pdd['gauss_center_err'][qb] = np.array([
self.fit_res[f'gauss_fit_{qb}_{i}'].params['center'].stderr
for i in range(len(pdd['amps_masked'][qb]))])
# filter out points with stderr > 1e6 Hz
pdd['filtered_center'][qb] = np.array([])
pdd['filtered_amps'][qb] = np.array([])
for i, stderr in enumerate(pdd['gauss_center_err'][qb]):
try:
if stderr < 1e6:
pdd['filtered_center'][qb] = \
np.append(pdd['filtered_center'][qb],
pdd['gauss_center'][qb][i])
pdd['filtered_amps'][qb] = \
np.append(pdd['filtered_amps'][qb],
pdd['sweep_points_dict'][qb]\
['sweep_points'][:-self.num_cal_points][i])
                except TypeError:
                    continue
# if gaussian fitting does not work (i.e. all points were filtered
# out above) use max value of data to get an estimate of freq
if len(pdd['filtered_amps'][qb]) == 0:
for qb in self.qb_names:
freqs = np.array([])
for i in range(pdd['data_masked'][qb].shape[0]):
freqs = np.append(freqs, pdd['freqs_masked'][qb]\
[np.argmax(pdd['data_masked'][qb][i,:])])
pdd['filtered_center'][qb] = freqs
pdd['filtered_amps'][qb] = pdd['amps_masked'][qb]
# fit the freqs to the qubit model
self.fit_func = self.get_param_value('fit_func', fit_mods.Qubit_dac_to_freq)
if self.fit_func == fit_mods.Qubit_dac_to_freq_precise:
fit_guess_func = fit_mods.Qubit_dac_arch_guess_precise
else:
fit_guess_func = fit_mods.Qubit_dac_arch_guess
freq_mod = lmfit.Model(self.fit_func)
fixed_params = \
self.get_param_value("fixed_params_for_fit", {}).get(qb, None)
if fixed_params is None:
fixed_params = dict(E_c=0)
freq_mod.guess = fit_guess_func.__get__(
freq_mod, freq_mod.__class__)
self.fit_dicts[f'freq_fit_{qb}'] = {
'model': freq_mod,
'fit_xvals': {'dac_voltage': pdd['filtered_amps'][qb]},
'fit_yvals': {'data': pdd['filtered_center'][qb]},
"guessfn_pars": {"fixed_params": fixed_params}}
self.run_fitting()
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
for qb in self.qb_names:
sp_param = [k for k in self.mospm[qb] if 'freq' in k][0]
self.plot_dicts[f'data_2d_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'data_2d_{qb}',
'plotfn': self.plot_colorxy,
'xvals': pdd['sweep_points_dict'][qb]['sweep_points'],
'yvals': pdd['sweep_points_2D_dict'][qb][sp_param],
'zvals': np.transpose(pdd['data_reshaped'][qb]),
'xlabel': r'Flux pulse amplitude',
'xunit': 'V',
'ylabel': r'Qubit drive frequency',
'yunit': 'Hz',
'zlabel': 'Excited state population',
}
if self.do_fitting:
if self.options_dict.get('scatter', True):
label = f'freq_scatter_{qb}_scatter'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'data_2d_{qb}',
'plotfn': self.plot_line,
'linestyle': '',
'marker': 'o',
'xvals': pdd['filtered_amps'][qb],
'yvals': pdd['filtered_center'][qb],
'xlabel': r'Flux pulse amplitude',
'xunit': 'V',
'ylabel': r'Qubit drive frequency',
'yunit': 'Hz',
'color': 'white',
}
amps = pdd['sweep_points_dict'][qb]['sweep_points'][
:-self.num_cal_points]
label = f'freq_scatter_{qb}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'data_2d_{qb}',
'plotfn': self.plot_line,
'linestyle': '-',
'marker': '',
'xvals': amps,
'yvals': self.fit_func(amps,
**self.fit_res[f'freq_fit_{qb}'].best_values),
'color': 'red',
}
class T1FrequencySweepAnalysis(MultiQubit_TimeDomain_Analysis):
def process_data(self):
super().process_data()
pdd = self.proc_data_dict
nr_cp = self.num_cal_points
self.lengths = OrderedDict()
self.amps = OrderedDict()
self.freqs = OrderedDict()
for qbn in self.qb_names:
len_key = [pn for pn in self.mospm[qbn] if 'length' in pn]
if len(len_key) == 0:
                raise KeyError("Couldn't find sweep points corresponding to "
                               "flux pulse length.")
self.lengths[qbn] = self.sp.get_sweep_params_property(
'values', 0, len_key[0])
amp_key = [pn for pn in self.mospm[qbn] if 'amp' in pn]
            if len(amp_key) == 0:
                raise KeyError("Couldn't find sweep points corresponding to "
                               "flux pulse amplitude.")
self.amps[qbn] = self.sp.get_sweep_params_property(
'values', 1, amp_key[0])
freq_key = [pn for pn in self.mospm[qbn] if 'freq' in pn]
if len(freq_key) == 0:
self.freqs[qbn] = None
else:
                self.freqs[qbn] = self.sp.get_sweep_params_property(
'values', 1, freq_key[0])
nr_amps = len(self.amps[self.qb_names[0]])
nr_lengths = len(self.lengths[self.qb_names[0]])
# make matrix out of vector
data_reshaped_no_cp = {qb: np.reshape(deepcopy(
pdd['data_to_fit'][qb][
:, :pdd['data_to_fit'][qb].shape[1]-nr_cp]).flatten(),
(nr_amps, nr_lengths)) for qb in self.qb_names}
pdd['data_reshaped_no_cp'] = data_reshaped_no_cp
        pdd['mask'] = {qb: np.ones(nr_amps, dtype=bool)
for qb in self.qb_names}
def prepare_fitting(self):
pdd = self.proc_data_dict
self.fit_dicts = OrderedDict()
exp_mod = fit_mods.ExponentialModel()
for qb in self.qb_names:
for i, data in enumerate(pdd['data_reshaped_no_cp'][qb]):
self.fit_dicts[f'exp_fit_{qb}_amp_{i}'] = {
'model': exp_mod,
'fit_xvals': {'x': self.lengths[qb]},
'fit_yvals': {'data': data}}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['T1'] = {}
pdd['T1_err'] = {}
for qb in self.qb_names:
pdd['T1'][qb] = np.array([
abs(self.fit_res[f'exp_fit_{qb}_amp_{i}'].best_values['decay'])
for i in range(len(self.amps[qb]))])
pdd['T1_err'][qb] = np.array([
self.fit_res[f'exp_fit_{qb}_amp_{i}'].params['decay'].stderr
for i in range(len(self.amps[qb]))])
for i in range(len(self.amps[qb])):
try:
if pdd['T1_err'][qb][i] >= 10 * pdd['T1'][qb][i]:
pdd['mask'][qb][i] = False
except TypeError:
pdd['mask'][qb][i] = False
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
for qb in self.qb_names:
for p, param_values in enumerate([self.amps, self.freqs]):
                if param_values[qb] is None:
                    continue
suffix = '_amp' if p == 0 else '_freq'
mask = pdd['mask'][qb]
xlabel = r'Flux pulse amplitude' if p == 0 else \
r'Derived qubit frequency'
if self.do_fitting:
# Plot T1 vs flux pulse amplitude
label = f'T1_fit_{qb}{suffix}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] + '\n' + rdd['timestamp'],
'plotfn': self.plot_line,
'linestyle': '-',
'xvals': param_values[qb][mask],
'yvals': pdd['T1'][qb][mask],
'yerr': pdd['T1_err'][qb][mask],
'xlabel': xlabel,
'xunit': 'V' if p == 0 else 'Hz',
'ylabel': r'T1',
'yunit': 's',
'color': 'blue',
}
# Plot rotated integrated average in dependece of flux pulse
# amplitude and length
label = f'T1_color_plot_{qb}{suffix}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] + '\n' + rdd['timestamp'],
'plotfn': self.plot_colorxy,
'linestyle': '-',
'xvals': param_values[qb][mask],
'yvals': self.lengths[qb],
'zvals': np.transpose(pdd['data_reshaped_no_cp'][qb][mask]),
'xlabel': xlabel,
'xunit': 'V' if p == 0 else 'Hz',
'ylabel': r'Flux pulse length',
'yunit': 's',
'zlabel': r'Excited state population'
}
# Plot population loss for the first flux pulse length as a
# function of flux pulse amplitude
label = f'Pop_loss_{qb}{suffix}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] + '\n' + rdd['timestamp'],
'plotfn': self.plot_line,
'linestyle': '-',
'xvals': param_values[qb][mask],
'yvals': 1 - pdd['data_reshaped_no_cp'][qb][:, 0][mask],
'xlabel': xlabel,
'xunit': 'V' if p == 0 else 'Hz',
'ylabel': r'Pop. loss @ {:.0f} ns'.format(
self.lengths[qb][0]/1e-9
),
'yunit': '',
}
# Plot all fits in single figure
if self.options_dict.get('all_fits', False) and self.do_fitting:
colormap = self.options_dict.get('colormap', mpl.cm.plasma)
for i in range(len(self.amps[qb])):
color = colormap(i/(len(self.amps[qb])-1))
label = f'exp_fit_{qb}_amp_{i}'
fitid = param_values[qb][i]
self.plot_dicts[label] = {
'title': rdd['measurementstring'] + '\n' + rdd['timestamp'],
'fig_id': f'T1_fits_{qb}',
'xlabel': r'Flux pulse length',
'xunit': 's',
'ylabel': r'Excited state population',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[label],
'plot_init': self.options_dict.get('plot_init', False),
'color': color,
'setlabel': f'freq={fitid:.4f}' if p == 1
else f'amp={fitid:.4f}',
'do_legend': False,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
label = f'freq_scatter_{qb}_{i}'
self.plot_dicts[label] = {
'fig_id': f'T1_fits_{qb}',
'plotfn': self.plot_line,
'xvals': self.lengths[qb],
'linestyle': '',
'yvals': pdd['data_reshaped_no_cp'][qb][i, :],
'color': color,
'setlabel': f'freq={fitid:.4f}' if p == 1
else f'amp={fitid:.4f}',
}
class T2FrequencySweepAnalysis(MultiQubit_TimeDomain_Analysis):
def process_data(self):
super().process_data()
pdd = self.proc_data_dict
nr_cp = self.num_cal_points
nr_amps = len(self.metadata['amplitudes'])
nr_lengths = len(self.metadata['flux_lengths'])
nr_phases = len(self.metadata['phases'])
        # Reshape the flat data vector into a (nr_amps, nr_lengths, nr_phases) array
data_reshaped_no_cp = {qb: np.reshape(
deepcopy(pdd['data_to_fit'][qb][
:, :pdd['data_to_fit'][qb].shape[1]-nr_cp]).flatten(),
(nr_amps, nr_lengths, nr_phases)) for qb in self.qb_names}
pdd['data_reshaped_no_cp'] = data_reshaped_no_cp
if self.metadata['use_cal_points']:
pdd['cal_point_data'] = {qb: deepcopy(
pdd['data_to_fit'][qb][
len(pdd['data_to_fit'][qb])-nr_cp:]) for qb in self.qb_names}
        pdd['mask'] = {qb: np.ones(nr_amps, dtype=bool)
for qb in self.qb_names}
def prepare_fitting(self):
pdd = self.proc_data_dict
self.fit_dicts = OrderedDict()
nr_amps = len(self.metadata['amplitudes'])
for qb in self.qb_names:
for i in range(nr_amps):
for j, data in enumerate(pdd['data_reshaped_no_cp'][qb][i]):
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=self.metadata['phases'],
data=data,
freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts[f'cos_fit_{qb}_{i}_{j}'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': self.metadata['phases']},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['T2'] = {}
pdd['T2_err'] = {}
pdd['phase_contrast'] = {}
nr_lengths = len(self.metadata['flux_lengths'])
nr_amps = len(self.metadata['amplitudes'])
for qb in self.qb_names:
pdd['phase_contrast'][qb] = {}
exp_mod = fit_mods.ExponentialModel()
for i in range(nr_amps):
pdd['phase_contrast'][qb][f'amp_{i}'] = np.array([self.fit_res[
f'cos_fit_{qb}_{i}_{j}'
].best_values['amplitude']
for j in
range(nr_lengths)])
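                # The fitted cosine amplitude (phase contrast) decays with flux
                # pulse length; fitting this decay with an exponential below
                # yields T2 at this flux amplitude.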
self.fit_dicts[f'exp_fit_{qb}_{i}'] = {
'model': exp_mod,
'fit_xvals': {'x': self.metadata['flux_lengths']},
'fit_yvals': {'data': np.array([self.fit_res[
f'cos_fit_{qb}_{i}_{j}'
].best_values['amplitude']
for j in
range(nr_lengths)])}}
self.run_fitting()
pdd['T2'][qb] = np.array([
abs(self.fit_res[f'exp_fit_{qb}_{i}'].best_values['decay'])
for i in range(len(self.metadata['amplitudes']))])
            pdd['mask'][qb] = np.ones(len(self.metadata['amplitudes']), dtype=bool)
for i in range(len(self.metadata['amplitudes'])):
try:
if self.fit_res[f'exp_fit_{qb}_{i}']\
.params['decay'].stderr >= 1e-5:
pdd['mask'][qb][i] = False
except TypeError:
pdd['mask'][qb][i] = False
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
for qb in self.qb_names:
mask = pdd['mask'][qb]
label = f'T2_fit_{qb}'
xvals = self.metadata['amplitudes'][mask] if \
self.metadata['frequencies'] is None else \
self.metadata['frequencies'][mask]
xlabel = r'Flux pulse amplitude' if \
self.metadata['frequencies'] is None else \
r'Derived qubit frequency'
self.plot_dicts[label] = {
'plotfn': self.plot_line,
'linestyle': '-',
'xvals': xvals,
'yvals': pdd['T2'][qb][mask],
'xlabel': xlabel,
'xunit': 'V' if self.metadata['frequencies'] is None else 'Hz',
'ylabel': r'T2',
'yunit': 's',
'color': 'blue',
}
# Plot all fits in single figure
if not self.options_dict.get('all_fits', False):
continue
colormap = self.options_dict.get('colormap', mpl.cm.plasma)
for i in range(len(self.metadata['amplitudes'])):
                color = colormap(i/(len(self.metadata['amplitudes'])-1))
label = f'exp_fit_{qb}_amp_{i}'
freqs = self.metadata['frequencies'] is not None
fitid = self.metadata.get('frequencies',
self.metadata['amplitudes'])[i]
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'T2_fits_{qb}',
'xlabel': r'Flux pulse length',
'xunit': 's',
'ylabel': r'Excited state population',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[label],
'plot_init': self.options_dict.get('plot_init', False),
'color': color,
'setlabel': f'freq={fitid:.4f}' if freqs
else f'amp={fitid:.4f}',
'do_legend': False,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
label = f'freq_scatter_{qb}_{i}'
self.plot_dicts[label] = {
'ax_id': f'T2_fits_{qb}',
'plotfn': self.plot_line,
'xvals': self.metadata['phases'],
'linestyle': '',
'yvals': pdd['data_reshaped_no_cp'][qb][i,:],
'color': color,
'setlabel': f'freq={fitid:.4f}' if freqs
else f'amp={fitid:.4f}',
}
class MeasurementInducedDephasingAnalysis(MultiQubit_TimeDomain_Analysis):
def process_data(self):
super().process_data()
rdd = self.raw_data_dict
pdd = self.proc_data_dict
pdd['data_reshaped'] = {qb: [] for qb in pdd['data_to_fit']}
pdd['amps_reshaped'] = np.unique(self.metadata['hard_sweep_params']['ro_amp_scale']['values'])
pdd['phases_reshaped'] = []
for amp in pdd['amps_reshaped']:
mask = self.metadata['hard_sweep_params']['ro_amp_scale']['values'] == amp
pdd['phases_reshaped'].append(self.metadata['hard_sweep_params']['phase']['values'][mask])
for qb in self.qb_names:
pdd['data_reshaped'][qb].append(pdd['data_to_fit'][qb][:len(mask)][mask])
def prepare_fitting(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
self.fit_dicts = OrderedDict()
for qb in self.qb_names:
for i, data in enumerate(pdd['data_reshaped'][qb]):
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=pdd['phases_reshaped'][i],
data=data, freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts[f'cos_fit_{qb}_{i}'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': pdd['phases_reshaped'][i]},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['phase_contrast'] = {}
pdd['phase_offset'] = {}
pdd['sigma'] = {}
pdd['sigma_err'] = {}
pdd['a'] = {}
pdd['a_err'] = {}
pdd['c'] = {}
pdd['c_err'] = {}
for qb in self.qb_names:
pdd['phase_contrast'][qb] = np.array([
self.fit_res[f'cos_fit_{qb}_{i}'].best_values['amplitude']
for i, _ in enumerate(pdd['data_reshaped'][qb])])
pdd['phase_offset'][qb] = np.array([
self.fit_res[f'cos_fit_{qb}_{i}'].best_values['phase']
for i, _ in enumerate(pdd['data_reshaped'][qb])])
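            # Add pi to the phase where the fitted amplitude is negative, wrap
            # to (-pi, pi], then unwrap and convert to degrees so the quadratic
            # fit of the phase offset below sees a smooth curve.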
pdd['phase_offset'][qb] += np.pi * (pdd['phase_contrast'][qb] < 0)
pdd['phase_offset'][qb] = (pdd['phase_offset'][qb] + np.pi) % (2 * np.pi) - np.pi
pdd['phase_offset'][qb] = 180*np.unwrap(pdd['phase_offset'][qb])/np.pi
pdd['phase_contrast'][qb] = np.abs(pdd['phase_contrast'][qb])
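            # A Gaussian centered at zero models the loss of phase contrast
            # with readout amplitude; its width sigma quantifies the
            # measurement-induced dephasing. The phase offset is modelled as a
            # quadratic in the readout amplitude scale.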
gauss_mod = lmfit.models.GaussianModel()
self.fit_dicts[f'phase_contrast_fit_{qb}'] = {
'model': gauss_mod,
'guess_dict': {'center': {'value': 0, 'vary': False}},
'fit_xvals': {'x': pdd['amps_reshaped']},
'fit_yvals': {'data': pdd['phase_contrast'][qb]}}
quadratic_mod = lmfit.models.QuadraticModel()
self.fit_dicts[f'phase_offset_fit_{qb}'] = {
'model': quadratic_mod,
'guess_dict': {'b': {'value': 0, 'vary': False}},
'fit_xvals': {'x': pdd['amps_reshaped']},
'fit_yvals': {'data': pdd['phase_offset'][qb]}}
self.run_fitting()
self.save_fit_results()
pdd['sigma'][qb] = self.fit_res[f'phase_contrast_fit_{qb}'].best_values['sigma']
pdd['sigma_err'][qb] = self.fit_res[f'phase_contrast_fit_{qb}'].params['sigma']. \
stderr
pdd['a'][qb] = self.fit_res[f'phase_offset_fit_{qb}'].best_values['a']
pdd['a_err'][qb] = self.fit_res[f'phase_offset_fit_{qb}'].params['a'].stderr
pdd['c'][qb] = self.fit_res[f'phase_offset_fit_{qb}'].best_values['c']
pdd['c_err'][qb] = self.fit_res[f'phase_offset_fit_{qb}'].params['c'].stderr
pdd['sigma_err'][qb] = float('nan') if pdd['sigma_err'][qb] is None \
else pdd['sigma_err'][qb]
pdd['a_err'][qb] = float('nan') if pdd['a_err'][qb] is None else pdd['a_err'][qb]
pdd['c_err'][qb] = float('nan') if pdd['c_err'][qb] is None else pdd['c_err'][qb]
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
phases_equal = True
for phases in pdd['phases_reshaped'][1:]:
if not np.all(phases == pdd['phases_reshaped'][0]):
phases_equal = False
break
for qb in self.qb_names:
if phases_equal:
self.plot_dicts[f'data_2d_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'plotfn': self.plot_colorxy,
'xvals': pdd['phases_reshaped'][0],
'yvals': pdd['amps_reshaped'],
'zvals': pdd['data_reshaped'][qb],
'xlabel': r'Pulse phase, $\phi$',
'xunit': 'deg',
'ylabel': r'Readout pulse amplitude scale, $V_{RO}/V_{ref}$',
'yunit': '',
'zlabel': 'Excited state population',
}
colormap = self.options_dict.get('colormap', mpl.cm.plasma)
for i, amp in enumerate(pdd['amps_reshaped']):
color = colormap(i/(len(pdd['amps_reshaped'])-1))
label = f'cos_data_{qb}_{i}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'amplitude_crossections_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['phases_reshaped'][i],
'yvals': pdd['data_reshaped'][qb][i],
'xlabel': r'Pulse phase, $\phi$',
'xunit': 'deg',
'ylabel': 'Excited state population',
'linestyle': '',
'color': color,
'setlabel': f'amp={amp:.4f}',
'do_legend': True,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
if self.do_fitting:
for i, amp in enumerate(pdd['amps_reshaped']):
color = colormap(i/(len(pdd['amps_reshaped'])-1))
label = f'cos_fit_{qb}_{i}'
self.plot_dicts[label] = {
'ax_id': f'amplitude_crossections_{qb}',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[label],
'plot_init': self.options_dict.get('plot_init', False),
'color': color,
'setlabel': f'fit, amp={amp:.4f}',
}
# Phase contrast
self.plot_dicts[f'phase_contrast_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': 200*pdd['phase_contrast'][qb],
'xlabel': r'Readout pulse amplitude scale, $V_{RO}/V_{ref}$',
'xunit': '',
'ylabel': 'Phase contrast',
'yunit': '%',
'linestyle': '',
'color': 'k',
'setlabel': 'data',
'do_legend': True,
}
self.plot_dicts[f'phase_contrast_fit_{qb}'] = {
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': 200*self.fit_res[f'phase_contrast_fit_{qb}'].best_fit,
'color': 'r',
'marker': '',
'setlabel': 'fit',
'do_legend': True,
}
self.plot_dicts[f'phase_contrast_labels_{qb}'] = {
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': 200*pdd['phase_contrast'][qb],
'marker': '',
'linestyle': '',
'setlabel': r'$\sigma = ({:.5f} \pm {:.5f})$ V'.
format(pdd['sigma'][qb], pdd['sigma_err'][qb]),
'do_legend': True,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
# Phase offset
self.plot_dicts[f'phase_offset_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'],
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': pdd['phase_offset'][qb],
'xlabel': r'Readout pulse amplitude scale, $V_{RO}/V_{ref}$',
'xunit': '',
'ylabel': 'Phase offset',
'yunit': 'deg',
'linestyle': '',
'color': 'k',
'setlabel': 'data',
'do_legend': True,
}
self.plot_dicts[f'phase_offset_fit_{qb}'] = {
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': self.fit_res[f'phase_offset_fit_{qb}'].best_fit,
'color': 'r',
'marker': '',
'setlabel': 'fit',
'do_legend': True,
}
self.plot_dicts[f'phase_offset_labels_{qb}'] = {
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['amps_reshaped'],
'yvals': pdd['phase_offset'][qb],
'marker': '',
'linestyle': '',
'setlabel': r'$a = {:.0f} \pm {:.0f}$ deg/V${{}}^2$'.
format(pdd['a'][qb], pdd['a_err'][qb]) + '\n' +
r'$c = {:.1f} \pm {:.1f}$ deg'.
format(pdd['c'][qb], pdd['c_err'][qb]),
'do_legend': True,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
class DriveCrosstalkCancellationAnalysis(MultiQubit_TimeDomain_Analysis):
def process_data(self):
super().process_data()
if self.sp is None:
raise ValueError('This analysis needs a SweepPoints '
'class instance.')
pdd = self.proc_data_dict
# get the ramsey phases as the values of the first sweep parameter
# in the 2nd sweep dimension.
# !!! This assumes all qubits have the same ramsey phases !!!
pdd['ramsey_phases'] = self.sp.get_sweep_params_property('values', 1)
pdd['qb_sweep_points'] = {}
pdd['qb_sweep_param'] = {}
for k, v in self.sp.get_sweep_dimension(0).items():
if k == 'phase':
continue
qb, param = k.split('.')
pdd['qb_sweep_points'][qb] = v[0]
pdd['qb_sweep_param'][qb] = (param, v[1], v[2])
pdd['qb_msmt_vals'] = {}
pdd['qb_cal_vals'] = {}
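        # Reshape each qubit's data into (first-dimension sweep points) x
        # (ramsey phases); the calibration points are taken once from the
        # first row.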
for qb, data in pdd['data_to_fit'].items():
pdd['qb_msmt_vals'][qb] = data[:, :-self.num_cal_points].reshape(
len(pdd['qb_sweep_points'][qb]), len(pdd['ramsey_phases']))
pdd['qb_cal_vals'][qb] = data[0, -self.num_cal_points:]
def prepare_fitting(self):
pdd = self.proc_data_dict
self.fit_dicts = OrderedDict()
for qb in self.qb_names:
for i, data in enumerate(pdd['qb_msmt_vals'][qb]):
cos_mod = fit_mods.CosModel
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=pdd['ramsey_phases'],
data=data, freq_guess=1/360)
guess_pars['frequency'].value = 1/360
guess_pars['frequency'].vary = False
self.fit_dicts[f'cos_fit_{qb}_{i}'] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': pdd['ramsey_phases']},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['phase_contrast'] = {}
pdd['phase_offset'] = {}
for qb in self.qb_names:
pdd['phase_contrast'][qb] = np.array([
2*self.fit_res[f'cos_fit_{qb}_{i}'].best_values['amplitude']
for i, _ in enumerate(pdd['qb_msmt_vals'][qb])])
pdd['phase_offset'][qb] = np.array([
self.fit_res[f'cos_fit_{qb}_{i}'].best_values['phase']
for i, _ in enumerate(pdd['qb_msmt_vals'][qb])])
pdd['phase_offset'][qb] *= 180/np.pi
pdd['phase_offset'][qb] += 180 * (pdd['phase_contrast'][qb] < 0)
pdd['phase_offset'][qb] = (pdd['phase_offset'][qb] + 180) % 360 - 180
pdd['phase_contrast'][qb] = np.abs(pdd['phase_contrast'][qb])
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
for qb in self.qb_names:
self.plot_dicts[f'data_2d_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'plotfn': self.plot_colorxy,
'xvals': pdd['ramsey_phases'],
'yvals': pdd['qb_sweep_points'][qb],
'zvals': pdd['qb_msmt_vals'][qb],
'xlabel': r'Ramsey phase, $\phi$',
'xunit': 'deg',
'ylabel': pdd['qb_sweep_param'][qb][2],
'yunit': pdd['qb_sweep_param'][qb][1],
'zlabel': 'Excited state population',
}
colormap = self.options_dict.get('colormap', mpl.cm.plasma)
for i, pval in enumerate(pdd['qb_sweep_points'][qb]):
if i == len(pdd['qb_sweep_points'][qb]) - 1:
legendlabel='data, ref.'
else:
legendlabel = f'data, {pdd["qb_sweep_param"][qb][0]}='\
f'{pval:.4f}{pdd["qb_sweep_param"][qb][1]}'
color = colormap(i/(len(pdd['qb_sweep_points'][qb])-1))
label = f'cos_data_{qb}_{i}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'param_crossections_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['ramsey_phases'],
'yvals': pdd['qb_msmt_vals'][qb][i],
'xlabel': r'Ramsey phase, $\phi$',
'xunit': 'deg',
'ylabel': 'Excited state population',
'linestyle': '',
'color': color,
'setlabel': legendlabel,
'do_legend': False,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
if self.do_fitting:
for i, pval in enumerate(pdd['qb_sweep_points'][qb]):
if i == len(pdd['qb_sweep_points'][qb]) - 1:
legendlabel = 'fit, ref.'
else:
legendlabel = f'fit, {pdd["qb_sweep_param"][qb][0]}='\
f'{pval:.4f}{pdd["qb_sweep_param"][qb][1]}'
color = colormap(i/(len(pdd['qb_sweep_points'][qb])-1))
label = f'cos_fit_{qb}_{i}'
self.plot_dicts[label] = {
'ax_id': f'param_crossections_{qb}',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[label],
'plot_init': self.options_dict.get('plot_init', False),
'color': color,
'do_legend': False,
# 'setlabel': legendlabel
}
# Phase contrast
self.plot_dicts[f'phase_contrast_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['qb_sweep_points'][qb][:-1],
'yvals': pdd['phase_contrast'][qb][:-1] * 100,
'xlabel': pdd['qb_sweep_param'][qb][2],
'xunit': pdd['qb_sweep_param'][qb][1],
'ylabel': 'Phase contrast',
'yunit': '%',
'linestyle': '-',
'marker': 'o',
'color': 'C0',
'setlabel': 'data',
'do_legend': True,
}
self.plot_dicts[f'phase_contrast_ref_{qb}'] = {
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_hlines,
'xmin': pdd['qb_sweep_points'][qb][:-1].min(),
'xmax': pdd['qb_sweep_points'][qb][:-1].max(),
'y': pdd['phase_contrast'][qb][-1] * 100,
'linestyle': '--',
'colors': '0.6',
'setlabel': 'ref',
'do_legend': True,
}
# Phase offset
self.plot_dicts[f'phase_offset_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['qb_sweep_points'][qb][:-1],
'yvals': pdd['phase_offset'][qb][:-1],
'xlabel': pdd['qb_sweep_param'][qb][2],
'xunit': pdd['qb_sweep_param'][qb][1],
'ylabel': 'Phase offset',
'yunit': 'deg',
'linestyle': '-',
'marker': 'o',
'color': 'C0',
'setlabel': 'data',
'do_legend': True,
}
self.plot_dicts[f'phase_offset_ref_{qb}'] = {
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_hlines,
'xmin': pdd['qb_sweep_points'][qb][:-1].min(),
'xmax': pdd['qb_sweep_points'][qb][:-1].max(),
'y': pdd['phase_offset'][qb][-1],
'linestyle': '--',
'colors': '0.6',
'setlabel': 'ref',
'do_legend': True,
}
class FluxlineCrosstalkAnalysis(MultiQubit_TimeDomain_Analysis):
"""Analysis for the measure_fluxline_crosstalk measurement.
The measurement involves Ramsey measurements on a set of crosstalk qubits,
which have been brought to a flux-sensitive position with a flux pulse.
The first dimension is the ramsey-phase of these qubits.
In the second sweep dimension, the amplitude of a flux pulse on another
(target) qubit is swept.
The analysis extracts the change in Ramsey phase offset, which gets
converted to a frequency offset due to the flux pulse on the target qubit.
The frequency offset is then converted to a flux offset, which is a measure
of the crosstalk between the target fluxline and the crosstalk qubit.
The measurement is hard-compressed, meaning the raw data is inherently 1d,
with one set of calibration points as the final segments. The experiment
part of the measured values are reshaped to the correct 2d shape for
the analysis. The sweep points passed into the analysis should still reflect
the 2d nature of the measurement, meaning the ramsey phase values should be
passed in the first dimension and the target fluxpulse amplitudes in the
second sweep dimension.
"""
def __init__(self, qb_names, *args, **kwargs):
params_dict = {f'{qbn}.amp_to_freq_model':
f'Instrument settings.{qbn}.fit_ge_freq_from_flux_pulse_amp'
for qbn in qb_names}
kwargs['params_dict'] = kwargs.get('params_dict', {})
kwargs['params_dict'].update(params_dict)
super().__init__(qb_names, *args, **kwargs)
def process_data(self):
super().process_data()
if self.sp is None:
raise ValueError('This analysis needs a SweepPoints '
'class instance.')
pdd = self.proc_data_dict
pdd['ramsey_phases'] = self.sp.get_sweep_params_property('values', 0)
pdd['target_amps'] = self.sp.get_sweep_params_property('values', 1)
pdd['target_fluxpulse_length'] = \
self.get_param_value('target_fluxpulse_length')
pdd['crosstalk_qubits_amplitudes'] = \
self.get_param_value('crosstalk_qubits_amplitudes')
pdd['qb_msmt_vals'] = {qb:
pdd['data_to_fit'][qb][:, :-self.num_cal_points].reshape(
len(pdd['target_amps']), len(pdd['ramsey_phases']))
for qb in self.qb_names}
pdd['qb_cal_vals'] = {
qb: pdd['data_to_fit'][qb][0, -self.num_cal_points:]
for qb in self.qb_names}
def prepare_fitting(self):
pdd = self.proc_data_dict
self.fit_dicts = OrderedDict()
cos_mod = lmfit.Model(fit_mods.CosFunc)
cos_mod.guess = fit_mods.Cos_guess.__get__(cos_mod, cos_mod.__class__)
for qb in self.qb_names:
for i, data in enumerate(pdd['qb_msmt_vals'][qb]):
self.fit_dicts[f'cos_fit_{qb}_{i}'] = {
'model': cos_mod,
'guess_dict': {'frequency': {'value': 1 / 360,
'vary': False}},
'fit_xvals': {'t': pdd['ramsey_phases']},
'fit_yvals': {'data': data}}
def analyze_fit_results(self):
pdd = self.proc_data_dict
pdd['phase_contrast'] = {}
pdd['phase_offset'] = {}
pdd['freq_offset'] = {}
pdd['freq'] = {}
self.skip_qb_freq_fits = self.get_param_value('skip_qb_freq_fits', False)
if not self.skip_qb_freq_fits:
pdd['flux'] = {}
for qb in self.qb_names:
pdd['phase_contrast'][qb] = np.array([
2 * self.fit_res[f'cos_fit_{qb}_{i}'].best_values['amplitude']
for i, _ in enumerate(pdd['qb_msmt_vals'][qb])])
pdd['phase_offset'][qb] = np.array([
self.fit_res[f'cos_fit_{qb}_{i}'].best_values['phase']
for i, _ in enumerate(pdd['qb_msmt_vals'][qb])])
pdd['phase_offset'][qb] *= 180 / np.pi
pdd['phase_offset'][qb] += 180 * (pdd['phase_contrast'][qb] < 0)
pdd['phase_offset'][qb] = (pdd['phase_offset'][qb] + 180) % 360 - 180
pdd['phase_offset'][qb] = \
np.unwrap(pdd['phase_offset'][qb] / 180 * np.pi) * 180 / np.pi
pdd['phase_contrast'][qb] = np.abs(pdd['phase_contrast'][qb])
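            # Convert the accumulated Ramsey phase (in degrees) into a
            # frequency offset: delta_f = delta_phi / (360 deg * pulse length).
            # The linear fit vs target amplitude below removes the constant
            # offset f0.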
pdd['freq_offset'][qb] = pdd['phase_offset'][qb] / 360 / pdd[
'target_fluxpulse_length']
fr = lmfit.Model(lambda a, f_a=1, f0=0: a * f_a + f0).fit(
data=pdd['freq_offset'][qb], a=pdd['target_amps'])
pdd['freq_offset'][qb] -= fr.best_values['f0']
if not self.skip_qb_freq_fits:
mpars = eval(self.raw_data_dict[f'{qb}.amp_to_freq_model'])
freq_idle = fit_mods.Qubit_dac_to_freq(
pdd['crosstalk_qubits_amplitudes'].get(qb, 0), **mpars)
pdd['freq'][qb] = pdd['freq_offset'][qb] + freq_idle
mpars.update({'V_per_phi0': 1, 'dac_sweet_spot': 0})
pdd['flux'][qb] = fit_mods.Qubit_freq_to_dac(
pdd['freq'][qb], **mpars)
# fit fitted results to linear models
lin_mod = lmfit.Model(lambda x, a=1, b=0: a*x + b)
def guess(model, data, x, **kwargs):
a_guess = (data[-1] - data[0])/(x[-1] - x[0])
b_guess = data[0] - x[0]*a_guess
return model.make_params(a=a_guess, b=b_guess)
lin_mod.guess = guess.__get__(lin_mod, lin_mod.__class__)
keys_to_fit = []
for qb in self.qb_names:
for param in ['phase_offset', 'freq_offset', 'flux']:
if param == 'flux' and self.skip_qb_freq_fits:
continue
key = f'{param}_fit_{qb}'
self.fit_dicts[key] = {
'model': lin_mod,
'fit_xvals': {'x': pdd['target_amps']},
'fit_yvals': {'data': pdd[param][qb]}}
keys_to_fit.append(key)
self.run_fitting(keys_to_fit=keys_to_fit)
def prepare_plots(self):
pdd = self.proc_data_dict
rdd = self.raw_data_dict
for qb in self.qb_names:
self.plot_dicts[f'data_2d_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'plotfn': self.plot_colorxy,
'xvals': pdd['ramsey_phases'],
'yvals': pdd['target_amps'],
'zvals': pdd['qb_msmt_vals'][qb],
'xlabel': r'Ramsey phase, $\phi$',
'xunit': 'deg',
'ylabel': self.sp.get_sweep_params_property('label', 1,
'target_amp'),
'yunit': self.sp.get_sweep_params_property('unit', 1,
'target_amp'),
'zlabel': 'Excited state population',
}
colormap = self.options_dict.get('colormap', mpl.cm.plasma)
for i, pval in enumerate(pdd['target_amps']):
legendlabel = f'data, amp. = {pval:.4f} V'
color = colormap(i / (len(pdd['target_amps']) - 1))
label = f'cos_data_{qb}_{i}'
self.plot_dicts[label] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'param_crossections_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['ramsey_phases'],
'yvals': pdd['qb_msmt_vals'][qb][i],
'xlabel': r'Ramsey phase, $\phi$',
'xunit': 'deg',
'ylabel': 'Excited state population',
'linestyle': '',
'color': color,
'setlabel': legendlabel,
'do_legend': False,
'legend_bbox_to_anchor': (1, 1),
'legend_pos': 'upper left',
}
if self.do_fitting:
for i, pval in enumerate(pdd['target_amps']):
legendlabel = f'fit, amp. = {pval:.4f} V'
color = colormap(i / (len(pdd['target_amps']) - 1))
label = f'cos_fit_{qb}_{i}'
self.plot_dicts[label] = {
'ax_id': f'param_crossections_{qb}',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[label],
'plot_init': self.options_dict.get('plot_init', False),
'color': color,
'setlabel': legendlabel,
'do_legend': False,
}
# Phase contrast
self.plot_dicts[f'phase_contrast_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'phase_contrast_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['target_amps'],
'yvals': pdd['phase_contrast'][qb] * 100,
'xlabel':self.sp.get_sweep_params_property('label', 1,
'target_amp'),
'xunit': self.sp.get_sweep_params_property('unit', 1,
'target_amp'),
'ylabel': 'Phase contrast',
'yunit': '%',
'linestyle': '-',
'marker': 'o',
'color': 'C0',
}
# Phase offset
self.plot_dicts[f'phase_offset_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'phase_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['target_amps'],
'yvals': pdd['phase_offset'][qb],
'xlabel':self.sp.get_sweep_params_property('label', 1,
'target_amp'),
'xunit': self.sp.get_sweep_params_property('unit', 1,
'target_amp'),
'ylabel': 'Phase offset',
'yunit': 'deg',
'linestyle': 'none',
'marker': 'o',
'color': 'C0',
}
# Frequency offset
self.plot_dicts[f'freq_offset_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'freq_offset_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['target_amps'],
'yvals': pdd['freq_offset'][qb],
'xlabel':self.sp.get_sweep_params_property('label', 1,
'target_amp'),
'xunit': self.sp.get_sweep_params_property('unit', 1,
'target_amp'),
'ylabel': 'Freq. offset, $\\Delta f$',
'yunit': 'Hz',
'linestyle': 'none',
'marker': 'o',
'color': 'C0',
}
if not self.skip_qb_freq_fits:
# Flux
self.plot_dicts[f'flux_data_{qb}'] = {
'title': rdd['measurementstring'] +
'\n' + rdd['timestamp'] + '\n' + qb,
'ax_id': f'flux_{qb}',
'plotfn': self.plot_line,
'xvals': pdd['target_amps'],
'yvals': pdd['flux'][qb],
'xlabel': self.sp[1]['target_amp'][2],
'xunit': self.sp[1]['target_amp'][1],
'ylabel': 'Flux, $\\Phi$',
'yunit': '$\\Phi_0$',
'linestyle': 'none',
'marker': 'o',
'color': 'C0',
}
for param in ['phase_offset', 'freq_offset', 'flux']:
if param == 'flux' and self.skip_qb_freq_fits:
continue
self.plot_dicts[f'{param}_fit_{qb}'] = {
'ax_id': f'{param}_{qb}',
'plotfn': self.plot_fit,
'fit_res': self.fit_res[f'{param}_fit_{qb}'],
'plot_init': self.options_dict.get('plot_init', False),
'linestyle': '-',
'marker': '',
'color': 'C1',
}
class RabiAnalysis(MultiQubit_TimeDomain_Analysis):
def __init__(self, qb_names, *args, **kwargs):
params_dict = {}
for qbn in qb_names:
s = 'Instrument settings.'+qbn
for trans_name in ['ge', 'ef']:
params_dict[f'{trans_name}_amp180_'+qbn] = \
s+f'.{trans_name}_amp180'
params_dict[f'{trans_name}_amp90scale_'+qbn] = \
s+f'.{trans_name}_amp90_scale'
kwargs['params_dict'] = params_dict
kwargs['numeric_params'] = list(params_dict)
super().__init__(qb_names, *args, **kwargs)
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
for qbn in self.qb_names:
data = self.proc_data_dict['data_to_fit'][qbn]
sweep_points = self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points']
if self.num_cal_points != 0:
data = data[:-self.num_cal_points]
cos_mod = lmfit.Model(fit_mods.CosFunc)
guess_pars = fit_mods.Cos_guess(
model=cos_mod, t=sweep_points, data=data)
guess_pars['amplitude'].vary = True
guess_pars['amplitude'].min = -10
guess_pars['offset'].vary = True
guess_pars['frequency'].vary = True
guess_pars['phase'].vary = True
self.set_user_guess_pars(guess_pars)
key = 'cos_fit_' + qbn
self.fit_dicts[key] = {
'fit_fn': fit_mods.CosFunc,
'fit_xvals': {'t': sweep_points},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
self.proc_data_dict['analysis_params_dict'] = OrderedDict()
for qbn in self.qb_names:
fit_res = self.fit_dicts['cos_fit_' + qbn]['fit_res']
sweep_points = self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points']
self.proc_data_dict['analysis_params_dict'][qbn] = \
self.get_amplitudes(fit_res=fit_res, sweep_points=sweep_points)
self.save_processed_data(key='analysis_params_dict')
def get_amplitudes(self, fit_res, sweep_points):
# Extract the best fitted frequency and phase.
freq_fit = fit_res.best_values['frequency']
phase_fit = fit_res.best_values['phase']
freq_std = fit_res.params['frequency'].stderr
phase_std = fit_res.params['phase'].stderr
        # If the fitted phase is very close to zero, snap it to exactly zero to
        # avoid spurious negative pulse amplitudes below.
if np.abs(phase_fit) < 0.1:
phase_fit = 0
# If phase_fit<1, the piHalf amplitude<0.
if phase_fit < 1:
log.info('The data could not be fitted correctly. '
'The fitted phase "%s" <1, which gives '
'negative piHalf '
'amplitude.' % phase_fit)
stepsize = sweep_points[1] - sweep_points[0]
if freq_fit > 2 * stepsize:
log.info('The data could not be fitted correctly. The '
'frequency "%s" is too high.' % freq_fit)
n = np.arange(-2, 10)
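        # Candidate pulse amplitudes from the fitted cosine: the argument
        # 2*pi*f*A + phi equals n*pi at the extrema (pi-pulse candidates) and
        # n*pi + pi/2 at the zero crossings (pi/2-pulse candidates).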
piPulse_vals = (n*np.pi - phase_fit)/(2*np.pi*freq_fit)
piHalfPulse_vals = (n*np.pi + np.pi/2 - phase_fit)/(2*np.pi*freq_fit)
# find piHalfPulse
try:
piHalfPulse = \
np.min(piHalfPulse_vals[piHalfPulse_vals >= sweep_points[1]])
n_piHalf_pulse = n[piHalfPulse_vals==piHalfPulse]
except ValueError:
piHalfPulse = np.asarray([])
if piHalfPulse.size == 0 or piHalfPulse > max(sweep_points):
i = 0
while (piHalfPulse_vals[i] < min(sweep_points) and
i<piHalfPulse_vals.size):
i+=1
piHalfPulse = piHalfPulse_vals[i]
n_piHalf_pulse = n[i]
# find piPulse
try:
if piHalfPulse.size != 0:
piPulse = \
np.min(piPulse_vals[piPulse_vals >= piHalfPulse])
else:
piPulse = np.min(piPulse_vals[piPulse_vals >= 0.001])
            n_pi_pulse = n[piPulse_vals == piPulse]
except ValueError:
piPulse = np.asarray([])
if piPulse.size == 0:
i = 0
while (piPulse_vals[i] < min(sweep_points) and
i < piPulse_vals.size):
i += 1
piPulse = piPulse_vals[i]
n_pi_pulse = n[i]
try:
freq_idx = fit_res.var_names.index('frequency')
phase_idx = fit_res.var_names.index('phase')
if fit_res.covar is not None:
cov_freq_phase = fit_res.covar[freq_idx, phase_idx]
else:
cov_freq_phase = 0
except ValueError:
cov_freq_phase = 0
try:
piPulse_std = self.calculate_pulse_stderr(
f=freq_fit,
phi=phase_fit,
f_err=freq_std,
phi_err=phase_std,
period_num=n_pi_pulse,
cov=cov_freq_phase)
piHalfPulse_std = self.calculate_pulse_stderr(
f=freq_fit,
phi=phase_fit,
f_err=freq_std,
phi_err=phase_std,
period_num=n_piHalf_pulse,
cov=cov_freq_phase)
except Exception as e:
log.error(e)
piPulse_std = 0
piHalfPulse_std = 0
rabi_amplitudes = {'piPulse': piPulse,
'piPulse_stderr': piPulse_std,
'piHalfPulse': piHalfPulse,
'piHalfPulse_stderr': piHalfPulse_std}
return rabi_amplitudes
def calculate_pulse_stderr(self, f, phi, f_err, phi_err,
period_num, cov=0):
x = period_num + phi
return np.sqrt((f_err*x/(2*np.pi*(f**2)))**2 +
(phi_err/(2*np.pi*f))**2 -
2*(cov**2)*x/((2*np.pi*(f**3))**2))[0]
def prepare_plots(self):
super().prepare_plots()
if self.do_fitting:
for qbn in self.qb_names:
base_plot_name = 'Rabi_' + qbn
self.prepare_projected_data_plot(
fig_name=base_plot_name,
data=self.proc_data_dict['data_to_fit'][qbn],
plot_name_suffix=qbn+'fit',
qb_name=qbn)
fit_res = self.fit_dicts['cos_fit_' + qbn]['fit_res']
self.plot_dicts['fit_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_fit,
'fit_res': fit_res,
'setlabel': 'cosine fit',
'color': 'r',
'do_legend': True,
'legend_ncol': 2,
'legend_bbox_to_anchor': (1, -0.15),
'legend_pos': 'upper right'}
rabi_amplitudes = self.proc_data_dict['analysis_params_dict']
self.plot_dicts['piamp_marker_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_line,
'xvals': np.array([rabi_amplitudes[qbn]['piPulse']]),
'yvals': np.array([fit_res.model.func(
rabi_amplitudes[qbn]['piPulse'],
**fit_res.best_values)]),
'setlabel': '$\pi$-Pulse amp',
'color': 'r',
'marker': 'o',
'line_kws': {'markersize': 10},
'linestyle': '',
'do_legend': True,
'legend_ncol': 2,
'legend_bbox_to_anchor': (1, -0.15),
'legend_pos': 'upper right'}
self.plot_dicts['piamp_hline_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_hlines,
'y': [fit_res.model.func(
rabi_amplitudes[qbn]['piPulse'],
**fit_res.best_values)],
'xmin': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][0],
'xmax': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][-1],
'colors': 'gray'}
self.plot_dicts['pihalfamp_marker_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_line,
'xvals': np.array([rabi_amplitudes[qbn]['piHalfPulse']]),
'yvals': np.array([fit_res.model.func(
rabi_amplitudes[qbn]['piHalfPulse'],
**fit_res.best_values)]),
'setlabel': '$\pi /2$-Pulse amp',
'color': 'm',
'marker': 'o',
'line_kws': {'markersize': 10},
'linestyle': '',
'do_legend': True,
'legend_ncol': 2,
'legend_bbox_to_anchor': (1, -0.15),
'legend_pos': 'upper right'}
self.plot_dicts['pihalfamp_hline_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_hlines,
'y': [fit_res.model.func(
rabi_amplitudes[qbn]['piHalfPulse'],
**fit_res.best_values)],
'xmin': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][0],
'xmax': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][-1],
'colors': 'gray'}
trans_name = 'ef' if 'f' in self.data_to_fit[qbn] else 'ge'
                old_pipulse_val = self.raw_data_dict[
                    f'{trans_name}_amp180_'+qbn]
                if old_pipulse_val != old_pipulse_val:  # NaN check: NaN != NaN
                    old_pipulse_val = 0
                old_pihalfpulse_val = self.raw_data_dict[
                    f'{trans_name}_amp90scale_'+qbn]
                if old_pihalfpulse_val != old_pihalfpulse_val:  # NaN check
                    old_pihalfpulse_val = 0
                old_pihalfpulse_val *= old_pipulse_val
textstr = (' $\pi-Amp$ = {:.3f} V'.format(
rabi_amplitudes[qbn]['piPulse']) +
' $\pm$ {:.3f} V '.format(
rabi_amplitudes[qbn]['piPulse_stderr']) +
'\n$\pi/2-Amp$ = {:.3f} V '.format(
rabi_amplitudes[qbn]['piHalfPulse']) +
' $\pm$ {:.3f} V '.format(
rabi_amplitudes[qbn]['piHalfPulse_stderr']) +
'\n $\pi-Amp_{old}$ = ' + '{:.3f} V '.format(
old_pipulse_val) +
'\n$\pi/2-Amp_{old}$ = ' + '{:.3f} V '.format(
old_pihalfpulse_val))
self.plot_dicts['text_msg_' + qbn] = {
'fig_id': base_plot_name,
'ypos': -0.2,
'xpos': 0,
'horizontalalignment': 'left',
'verticalalignment': 'top',
'plotfn': self.plot_text,
'text_string': textstr}
class T1Analysis(MultiQubit_TimeDomain_Analysis):
def __init__(self, qb_names, *args, **kwargs):
params_dict = {}
for qbn in qb_names:
s = 'Instrument settings.'+qbn
for trans_name in ['ge', 'ef']:
params_dict[f'{trans_name}_T1_'+qbn] = s+'.T1{}'.format(
'_ef' if trans_name == 'ef' else '')
kwargs['params_dict'] = params_dict
kwargs['numeric_params'] = list(params_dict)
super().__init__(qb_names, *args, **kwargs)
def prepare_fitting(self):
self.fit_dicts = OrderedDict()
for qbn in self.qb_names:
data = self.proc_data_dict['data_to_fit'][qbn]
sweep_points = self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points']
if self.num_cal_points != 0:
data = data[:-self.num_cal_points]
exp_decay_mod = lmfit.Model(fit_mods.ExpDecayFunc)
guess_pars = fit_mods.exp_dec_guess(
model=exp_decay_mod, data=data, t=sweep_points)
guess_pars['amplitude'].vary = True
guess_pars['tau'].vary = True
if self.options_dict.get('vary_offset', False):
guess_pars['offset'].vary = True
else:
guess_pars['offset'].value = 0
guess_pars['offset'].vary = False
self.set_user_guess_pars(guess_pars)
key = 'exp_decay_' + qbn
self.fit_dicts[key] = {
'fit_fn': exp_decay_mod.func,
'fit_xvals': {'t': sweep_points},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
self.proc_data_dict['analysis_params_dict'] = OrderedDict()
for qbn in self.qb_names:
self.proc_data_dict['analysis_params_dict'][qbn] = OrderedDict()
self.proc_data_dict['analysis_params_dict'][qbn]['T1'] = \
self.fit_dicts['exp_decay_' + qbn]['fit_res'].best_values['tau']
self.proc_data_dict['analysis_params_dict'][qbn]['T1_stderr'] = \
self.fit_dicts['exp_decay_' + qbn]['fit_res'].params[
'tau'].stderr
self.save_processed_data(key='analysis_params_dict')
def prepare_plots(self):
super().prepare_plots()
if self.do_fitting:
for qbn in self.qb_names:
# rename base plot
base_plot_name = 'T1_' + qbn
self.prepare_projected_data_plot(
fig_name=base_plot_name,
data=self.proc_data_dict['data_to_fit'][qbn],
plot_name_suffix=qbn+'fit',
qb_name=qbn)
self.plot_dicts['fit_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_fit,
'fit_res': self.fit_dicts['exp_decay_' + qbn]['fit_res'],
'setlabel': 'exp decay fit',
'do_legend': True,
'color': 'r',
'legend_ncol': 2,
'legend_bbox_to_anchor': (1, -0.15),
'legend_pos': 'upper right'}
trans_name = 'ef' if 'f' in self.data_to_fit[qbn] else 'ge'
old_T1_val = self.raw_data_dict[f'{trans_name}_T1_'+qbn]
                if old_T1_val != old_T1_val:  # NaN check: NaN != NaN
                    old_T1_val = 0
T1_dict = self.proc_data_dict['analysis_params_dict']
textstr = '$T_1$ = {:.2f} $\mu$s'.format(
T1_dict[qbn]['T1']*1e6) \
+ ' $\pm$ {:.2f} $\mu$s'.format(
T1_dict[qbn]['T1_stderr']*1e6) \
+ '\nold $T_1$ = {:.2f} $\mu$s'.format(old_T1_val*1e6)
self.plot_dicts['text_msg_' + qbn] = {
'fig_id': base_plot_name,
'ypos': -0.2,
'xpos': 0,
'horizontalalignment': 'left',
'verticalalignment': 'top',
'plotfn': self.plot_text,
'text_string': textstr}
class RamseyAnalysis(MultiQubit_TimeDomain_Analysis):
def __init__(self, qb_names, *args, **kwargs):
params_dict = {}
for qbn in qb_names:
s = 'Instrument settings.'+qbn
for trans_name in ['ge', 'ef']:
params_dict[f'{trans_name}_freq_'+qbn] = s+f'.{trans_name}_freq'
kwargs['params_dict'] = params_dict
kwargs['numeric_params'] = list(params_dict)
super().__init__(qb_names, *args, **kwargs)
def prepare_fitting(self):
if self.options_dict.get('fit_gaussian_decay', True):
self.fit_keys = ['exp_decay_', 'gauss_decay_']
else:
self.fit_keys = ['exp_decay_']
self.fit_dicts = OrderedDict()
for qbn in self.qb_names:
data = self.proc_data_dict['data_to_fit'][qbn]
sweep_points = self.proc_data_dict['sweep_points_dict'][qbn][
'msmt_sweep_points']
if self.num_cal_points != 0:
data = data[:-self.num_cal_points]
for i, key in enumerate([k + qbn for k in self.fit_keys]):
exp_damped_decay_mod = lmfit.Model(fit_mods.ExpDampOscFunc)
guess_pars = fit_mods.exp_damp_osc_guess(
model=exp_damped_decay_mod, data=data, t=sweep_points,
n_guess=i+1)
guess_pars['amplitude'].vary = False
guess_pars['amplitude'].value = 0.5
guess_pars['frequency'].vary = True
guess_pars['tau'].vary = True
guess_pars['phase'].vary = True
guess_pars['n'].vary = False
guess_pars['oscillation_offset'].vary = \
'f' in self.data_to_fit[qbn]
# guess_pars['exponential_offset'].value = 0.5
guess_pars['exponential_offset'].vary = True
self.set_user_guess_pars(guess_pars)
self.fit_dicts[key] = {
                    'fit_fn': exp_damped_decay_mod.func,
'fit_xvals': {'t': sweep_points},
'fit_yvals': {'data': data},
'guess_pars': guess_pars}
def analyze_fit_results(self):
if 'artificial_detuning' in self.options_dict:
artificial_detuning_dict = OrderedDict(
[(qbn, self.options_dict['artificial_detuning'])
for qbn in self.qb_names])
elif 'artificial_detuning_dict' in self.metadata:
artificial_detuning_dict = self.metadata[
'artificial_detuning_dict']
elif 'artificial_detuning' in self.metadata:
artificial_detuning_dict = OrderedDict(
[(qbn, self.metadata['artificial_detuning'])
for qbn in self.qb_names])
else:
raise ValueError('"artificial_detuning" not found.')
self.proc_data_dict['analysis_params_dict'] = OrderedDict()
for qbn in self.qb_names:
self.proc_data_dict['analysis_params_dict'][qbn] = OrderedDict()
for key in [k + qbn for k in self.fit_keys]:
self.proc_data_dict['analysis_params_dict'][qbn][key] = \
OrderedDict()
fit_res = self.fit_dicts[key]['fit_res']
for par in fit_res.params:
if fit_res.params[par].stderr is None:
fit_res.params[par].stderr = 0
trans_name = 'ef' if 'f' in self.data_to_fit[qbn] else 'ge'
old_qb_freq = self.raw_data_dict[f'{trans_name}_freq_'+qbn]
                if old_qb_freq != old_qb_freq:  # NaN check: NaN != NaN
                    old_qb_freq = 0
self.proc_data_dict['analysis_params_dict'][qbn][key][
'old_qb_freq'] = old_qb_freq
self.proc_data_dict['analysis_params_dict'][qbn][key][
'new_qb_freq'] = old_qb_freq + \
artificial_detuning_dict[qbn] - \
fit_res.best_values['frequency']
self.proc_data_dict['analysis_params_dict'][qbn][key][
'new_qb_freq_stderr'] = fit_res.params['frequency'].stderr
self.proc_data_dict['analysis_params_dict'][qbn][key][
'T2_star'] = fit_res.best_values['tau']
self.proc_data_dict['analysis_params_dict'][qbn][key][
'T2_star_stderr'] = fit_res.params['tau'].stderr
self.proc_data_dict['analysis_params_dict'][qbn][key][
'artificial_detuning'] = artificial_detuning_dict[qbn]
hdf_group_name_suffix = self.options_dict.get(
'hdf_group_name_suffix', '')
self.save_processed_data(key='analysis_params_dict' +
hdf_group_name_suffix)
def prepare_plots(self):
super().prepare_plots()
if self.do_fitting:
ramsey_dict = self.proc_data_dict['analysis_params_dict']
for qbn in self.qb_names:
base_plot_name = 'Ramsey_' + qbn
self.prepare_projected_data_plot(
fig_name=base_plot_name,
data=self.proc_data_dict['data_to_fit'][qbn],
plot_name_suffix=qbn+'fit',
qb_name=qbn)
exp_decay_fit_key = self.fit_keys[0] + qbn
old_qb_freq = ramsey_dict[qbn][
exp_decay_fit_key]['old_qb_freq']
textstr = ''
T2_star_str = ''
for i, key in enumerate([k + qbn for k in self.fit_keys]):
fit_res = self.fit_dicts[key]['fit_res']
self.plot_dicts['fit_' + key] = {
'fig_id': base_plot_name,
'plotfn': self.plot_fit,
'fit_res': fit_res,
'setlabel': 'exp decay fit' if i == 0 else
'gauss decay fit',
'do_legend': True,
'color': 'r' if i == 0 else 'C4',
'legend_bbox_to_anchor': (1, -0.15),
'legend_pos': 'upper right'}
if i != 0:
textstr += '\n'
textstr += \
('$f_{{qubit \_ new \_ {{{key}}} }}$ = '.format(
key=('exp' if i == 0 else 'gauss')) +
'{:.6f} GHz '.format(
ramsey_dict[qbn][key]['new_qb_freq']*1e-9) +
'$\pm$ {:.2E} GHz '.format(
ramsey_dict[qbn][key][
'new_qb_freq_stderr']*1e-9))
T2_star_str += \
('\n$T_{{2,{{{key}}} }}^\star$ = '.format(
key=('exp' if i == 0 else 'gauss')) +
'{:.2f} $\mu$s'.format(
fit_res.params['tau'].value*1e6) +
'$\pm$ {:.2f} $\mu$s'.format(
fit_res.params['tau'].stderr*1e6))
textstr += '\n$f_{qubit \_ old}$ = '+'{:.6f} GHz '.format(
old_qb_freq*1e-9)
textstr += ('\n$\Delta f$ = {:.4f} MHz '.format(
(ramsey_dict[qbn][exp_decay_fit_key]['new_qb_freq'] -
old_qb_freq)*1e-6) + '$\pm$ {:.2E} MHz'.format(
self.fit_dicts[exp_decay_fit_key]['fit_res'].params[
'frequency'].stderr*1e-6) +
'\n$f_{Ramsey}$ = '+'{:.4f} MHz $\pm$ {:.2E} MHz'.format(
self.fit_dicts[exp_decay_fit_key]['fit_res'].params[
'frequency'].value*1e-6,
self.fit_dicts[exp_decay_fit_key]['fit_res'].params[
'frequency'].stderr*1e-6))
textstr += T2_star_str
textstr += '\nartificial detuning = {:.2f} MHz'.format(
ramsey_dict[qbn][exp_decay_fit_key][
'artificial_detuning']*1e-6)
self.plot_dicts['text_msg_' + qbn] = {
'fig_id': base_plot_name,
'ypos': -0.2,
'xpos': -0.025,
'horizontalalignment': 'left',
'verticalalignment': 'top',
'plotfn': self.plot_text,
'text_string': textstr}
self.plot_dicts['half_hline_' + qbn] = {
'fig_id': base_plot_name,
'plotfn': self.plot_hlines,
'y': 0.5,
'xmin': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][0],
'xmax': self.proc_data_dict['sweep_points_dict'][qbn][
'sweep_points'][-1],
'colors': 'gray'}
class QScaleAnalysis(MultiQubit_TimeDomain_Analysis):
def __init__(self, qb_names, *args, **kwargs):
params_dict = {}
for qbn in qb_names:
s = 'Instrument settings.'+qbn
for trans_name in ['ge', 'ef']:
params_dict[f'{trans_name}_qscale_'+qbn] = \
s+f'.{trans_name}_motzoi'
kwargs['params_dict'] = params_dict
kwargs['numeric_params'] = list(params_dict)
super().__init__(qb_names, *args, **kwargs)
def process_data(self):
super().process_data()
self.proc_data_dict['qscale_data'] = OrderedDict()
for qbn in self.qb_names:
self.proc_data_dict['qscale_data'][qbn] = OrderedDict()
sweep_points = deepcopy(self.proc_data_dict['sweep_points_dict'][
qbn]['msmt_sweep_points'])
# check if the sweep points are repeated 3 times as they have to be
# for the qscale analysis:
            # Take the first 3 entries and check whether they are all the same.
            # Needed for backwards compatibility with
            # QudevTransmon.measure_qscale(), which does not (yet) use the
            # SweepPoints object.
unique_sp = np.unique(sweep_points[:3])
if unique_sp.size > 1:
                sweep_points = np.repeat(sweep_points, 3)
"""Core functionality for foamPy."""
from __future__ import division, print_function
import numpy as np
import os
import re
import datetime
import sys
import time
import subprocess
import pandas
import glob
from .dictionaries import *
from .templates import *
def gen_stripped_lines(fpath):
with open(fpath) as f:
for line in f.readlines():
yield line.replace("(", " ").replace(")", " ")
def load_forces(casedir="./", object_name="forces", start_time=0):
"""Load forces and moments as a pandas DataFrame."""
glob_string = os.path.join(
casedir,
"postProcessing/{}/{}/forces*.dat".format(object_name, start_time)
)
fpath = sorted(glob.glob(glob_string))[-1]
data = np.loadtxt(gen_stripped_lines(fpath))
df = pandas.DataFrame()
df["time"] = data[:, 0]
df["fx_pressure"] = data[:, 1]
df["fx_viscous"] = data[:, 4]
df["fx_porous"] = data[:, 7]
df["fy_pressure"] = data[:, 2]
df["fy_viscous"] = data[:, 5]
df["fy_porous"] = data[:, 8]
df["fz_pressure"] = data[:, 3]
df["fz_viscous"] = data[:, 6]
df["fz_porous"] = data[:, 9]
df["mx_pressure"] = data[:, 10]
df["mx_viscous"] = data[:, 13]
df["mx_porous"] = data[:, 16]
df["my_pressure"] = data[:, 11]
df["my_viscous"] = data[:, 14]
df["my_porous"] = data[:, 17]
df["mz_pressure"] = data[:, 12]
df["mz_viscous"] = data[:, 15]
df["mz_porous"] = data[:, 18]
for fm in ["f", "m"]:
for component in ["x", "y", "z"]:
df[fm + component] = df[fm + component + "_pressure"] \
+ df[fm + component + "_viscous"] \
+ df[fm + component + "_porous"]
return df
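# Example usage (a sketch; the case path is an assumption):
#     df = load_forces(casedir="path/to/case", object_name="forces",
#                      start_time=0)
#     print(df[["time", "fx", "mz"]].head())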
def load_probes_data(casedir="./", object_name="probes", start_time=0,
field_name="U"):
"""Load probes data as pandas ``DataFrame``."""
fpath = os.path.join(casedir, "postProcessing", object_name,
str(start_time), field_name)
# First get probe locations to use as column names
with open(fpath) as f:
txt = f.read()
probe_lines = re.findall(r"# Probe \d.*\n", txt)
probe_locs = []
for line in probe_lines:
probe_locs.append(line.split("(")[-1].split(")")[0].split())
data = np.loadtxt(gen_stripped_lines(fpath))
df = pandas.DataFrame()
df["time"] = data[:, 0]
# Determine the rank of the data
nprobes = len(probe_locs)
nsamps = data.shape[0]
dims = (data.shape[1] - 1) // nprobes
for n, probe_loc in enumerate(probe_locs):
probe_loc = [float(pl) for pl in probe_loc]
        # Columns after the time column are grouped per probe: probe n
        # occupies columns n*dims + 1 .. (n+1)*dims.
        d = data[:, n*dims + 1:(n + 1)*dims + 1]
if dims > 1:
d = [tuple(p) for p in d]
df[tuple(probe_loc)] = d
return df
def load_torque_drag(casedir="", folder="0", filename=None,
torque_axis="z", drag_axis="x"):
"""Loads time, z-axis torque, and streamwise force from specified forces
folder. Case name can be left empty if running within a case folder."""
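    # Expected column layout of forces.dat (older OpenFOAM forces format):
    # time, then the pressure, viscous and porous force vectors, then the
    # pressure, viscous and porous moment vectors, with parentheses and
    # commas stripped.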
# Create empty lists
t = []
fpx = []; fpy = []; fpz = []
fpox = []; fpoy = []; fpoz = []
fvx = []; fvy = []; fvz = []
mpx = []; mpy = []; mpz = []
mpox = []; mpoy = []; mpoz = []
mvx = []; mvy = []; mvz = []
# Cycle through file
if casedir: casedir += "/"
if not filename: filename = "forces.dat"
with open(casedir+"postProcessing/forces/"+str(folder)+"/"+filename, "r") as f:
for line in f.readlines():
line = line.replace("(", "")
line = line.replace(")", "")
line = line.replace(",", " ")
line = line.split()
if line[0] != "#":
t.append(float(line[0]))
fpx.append(float(line[1]))
fpy.append(float(line[2]))
fpz.append(float(line[3]))
fvx.append(float(line[4]))
fvy.append(float(line[5]))
fvz.append(float(line[6]))
fpox.append(float(line[7]))
fpoy.append(float(line[8]))
fpoz.append(float(line[9]))
mpx.append(float(line[10]))
mpy.append(float(line[11]))
mpz.append(float(line[12]))
mvx.append(float(line[13]))
mvy.append(float(line[14]))
mvz.append(float(line[15]))
mpox.append(float(line[16]))
mpoy.append(float(line[17]))
mpoz.append(float(line[18]))
#Convert to numpy arrays
t = np.asarray(t)
if torque_axis == "z":
        torque = np.asarray(np.asarray(mpz) + np.asarray(mvz))
import numpy as np
from numpy.linalg import norm
from safegridoptim.elastic_net import enet_path as enet_solvers
def step_size(eps, eps_c, lambda_, norm_resid2, dual_scale, mu=0., nu=1.,
large_step=True):
"""Compute adaptive step size for the eps-approximation path
Parameters
----------
eps : float
Desired accuracy on the whole path
eps_c : float
Optimization accuracy at each step, it must satisfies eps_c < eps
lambda_ : float
Current regularization parameter
norm_resid2 : float
Squared norm of the residual at the current parameter lambda_
dual_scale : float
Scaling used to make the residual dual feasible
mu : float, optional
Strong convexity paramerer of the loss.
Default value is 0. It corresponds to vanilla least squares
nu : float, optional
Smoothness paramerer of the loss.
Default value is 1. It corresponds to vanilla least squares
large_step : boolean, optional
If True, it computes the bilateral step size which is larger than the
unilateral step size (False)
Returns
-------
rho : float
Step size for computing the next regularization parameter from lambda_
"""
alpha = lambda_ / dual_scale
norm_zeta2 = norm_resid2 * alpha ** 2
Delta = 0.5 * norm_resid2 * (1. - alpha ** 2)
delta_opt = eps - eps_c
tilde_delta_opt = Delta - eps_c
# dir_ is "left":
rho_l = np.sqrt(2 * nu * norm_zeta2 * delta_opt + tilde_delta_opt ** 2) -\
tilde_delta_opt
rho_l /= nu * norm_zeta2
rho = rho_l
if large_step:
if mu == 0:
tilde_R2 = 2 * (0.5 * norm_resid2 + 2 * eps_c / rho_l)
else:
tilde_R2 = 2 * mu * (0.5 * norm_resid2 + 2 * eps_c / rho_l)
delta_opt = eps - eps_c
rho_r = np.sqrt(2 * nu * tilde_R2 * delta_opt + eps_c ** 2) - eps_c
rho_r /= nu * tilde_R2
rho = (rho_l + rho_r) / (1. + rho_r)
# TODO: add implementation of both direction (increasing and decreasing)
return rho
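# Note: compute_path below uses the returned step multiplicatively,
# lambda_next = (1 - rho) * lambda_, so a smaller rho yields a finer grid
# around the current parameter.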
def compute_path(X, y, eps, mu, nu, lambda_range, adaptive=True,
large_step=False, tau=10.):
"""Compute an eps-approximation path on a given range of parameter
Parameters
----------
X : {array-like}, shape (n_samples, n_features)
Training data. Pass directly as Fortran-contiguous data to avoid
unnecessary memory duplication in the optimization solver.
y : ndarray, shape = (n_samples,)
Target values
eps : float
Desired accuracy on the whole path
mu : float, optional
Strong convexity paramerer of the loss.
Default value is 0. It corresponds to vanilla least squares
nu : float, optional
Smoothness paramerer of the loss.
Default value is 1. It corresponds to vanilla least squares
lambda_range : list or tuples, or array of size 2
Range of parameter where the eps-path is computed.
lambda_range = [lambda_min, lambda_max]
adaptive : boolean, optional
If True (default value), the eps-path is constructed with unilateral
or bilateral step size. If False, the uniform grid is constructed.
large_step : boolean, optional
If True, it computes the bilateral step size which is larger than the
unilateral step size (False)
tau : float, optional (strictly larger than 1)
At each step, we solve an optimization problem at accuracy eps_c < eps.
We simply choose eps_c = eps / tau. Default value is tau=10.
Returns
-------
lambdas : ndarray
List of lambdas that constitutes the eps-path grid
gaps : array, shape (n_lambdas,)
The dual gaps at the end of the optimization for each lambda.
betas : array, shape (n_features, n_lambdas)
Coefficients beta along the eps-path.
"""
n_samples, n_features = X.shape
lambda_min, lambda_max = lambda_range
lambda_ = lambda_max
lambdas = [lambda_]
gaps = [0.]
nrm2_y = norm(y) ** 2
norm_resid2 = nrm2_y
dual_scale = lambda_max
eps_c = eps / tau
beta = np.zeros(n_features, dtype=float, order='F')
# uniform step size
if adaptive is False:
nu_nrm2_y = nu * nrm2_y
if mu == 0:
tmp = np.sqrt(2. * nrm2_y * eps_c) - eps_c
else:
tmp = np.sqrt(2. * mu * nrm2_y * eps_c) - eps_c
delta_opt = eps - eps_c
rho_l = np.sqrt(2. * nu_nrm2_y * delta_opt + tmp ** 2) - tmp
rho_l /= nu_nrm2_y
if large_step is False:
rho = rho_l
else:
rho_r = np.sqrt(2. * nu_nrm2_y * delta_opt + eps_c ** 2) - eps_c
rho_r /= nu_nrm2_y
rho = (rho_l + rho_r) / (1. + rho_r)
r_lmbd = lambda_min / lambda_max
n_lambdas = int(1 + np.floor(np.log(r_lmbd) / np.log(1. - rho)))
lambdas = lambda_max * (1. - rho) ** np.arange(n_lambdas)
model = enet_solvers(X, y, lambdas, beta, mu, eps_c)
betas = model[1]
gaps = model[2]
return lambdas, gaps, betas
# Adaptive grid with decreasing direction of lambda_
betas = [beta.copy()]
while lambda_ > lambda_min:
# Update lambda_
# In step size(), one can use eps_c = gap
rho_l = step_size(eps, eps_c, lambda_, norm_resid2, dual_scale, mu, nu,
large_step)
lambda_ *= 1. - rho_l
lambda_ = max(lambda_, lambda_min) # stop at lambda_min
lambdas += [lambda_]
model = enet_solvers(X, y, lambda_, beta, mu, eps=eps_c)
norm_resid2 = model[4][0]
dual_scale = model[5][0]
betas += [beta.copy()]
gap = abs(model[2][0])
gaps += [gap]
# TODO: implement it for increasing direction of lambda_ and
# factorize the code for the two direction.
return np.array(lambdas), np.array(gaps), np.array(betas).T
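# Example usage (a sketch; the choice of eps and lambda_max are assumptions):
#     lambda_max = np.max(np.abs(X.T.dot(y)))
#     lambdas, gaps, betas = compute_path(X, y, eps=1e-2 * norm(y) ** 2,
#                                         mu=0., nu=1.,
#                                         lambda_range=(lambda_max / 100.,
#                                                       lambda_max))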
def Q(lambda_0, lambda_, eps_c, Delta, norm_zeta2, nu):
"""
Quadratic upper bound of the duality gap function initialized at lambda_0
"""
lmd = lambda_ / lambda_0
Q_lambda = (lmd * eps_c + Delta * (1. - lmd) +
0.5 * nu * norm_zeta2 * (1. - lmd) ** 2)
return Q_lambda
def error_grid(X, y, betas, gaps, lambdas, mu, nu):
"""
Compute the error eps such that the set of betas on the given grid of
regularization parameter lambdas is an eps-path
"""
n_samples, n_features = X.shape
n_lambdas = lambdas.shape[0]
Deltas = np.zeros(n_lambdas)
norm_zeta2s = np.zeros(n_lambdas)
Q_is = []
for i, lambda_ in enumerate(lambdas):
Xbeta = X.dot(betas[:, i])
residual = y - Xbeta
XTR = X.T.dot(residual)
norm_beta2 = norm(betas[:, i]) ** 2
norm_resid2 = norm(residual) ** 2 + mu * norm_beta2
dual_scale = max(lambda_, norm(XTR - mu * betas[:, i], ord=np.inf))
alpha = lambda_ / dual_scale
norm_zeta2s[i] = norm_resid2 * alpha ** 2
Deltas[i] = abs(0.5 * norm_resid2 * (1. - alpha ** 2))
gaps[i] = abs(gaps[i]) # avoid negative gap due to numerical errors
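    # For each grid interval, find the parameter hat_lmbdi at which the two
    # quadratic gap bounds Q_i and Q_{i+1} intersect; the accuracy of the grid
    # is the largest bound value over all intervals.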
for i in range(n_lambdas - 1):
a = norm_zeta2s[i + 1] / (lambdas[i + 1] ** 2) - \
norm_zeta2s[i] / (lambdas[i] ** 2)
a *= 0.5 * nu
diff_gap = gaps[i + 1] / lambdas[i + 1] - gaps[i] / lambdas[i]
diff_Delta = Deltas[i + 1] / lambdas[i + 1] - Deltas[i] / lambdas[i]
diff_norm_zeta2 = norm_zeta2s[i + 1] / \
lambdas[i + 1] - norm_zeta2s[i] / lambdas[i]
b = diff_gap - diff_Delta - nu * diff_norm_zeta2
c = Deltas[i + 1] - Deltas[i] + 0.5 * nu * \
(norm_zeta2s[i + 1] - norm_zeta2s[i])
if abs(a) >= 1e-12:
discriminant = b ** 2 - 4 * a * c
hat_lmbdi = (np.sqrt(discriminant) - b) / (2. * a)
else:
if abs(b) >= 1e-12:
hat_lmbdi = - c / b
else:
hat_lmbdi = -666
if hat_lmbdi == -666:
Q_is += [Q(lambdas[i + 1], lambdas[i], gaps[i + 1], Deltas[i + 1],
norm_zeta2s[i + 1], nu)]
else:
Q_is += [Q(lambdas[i], hat_lmbdi, gaps[i],
Deltas[i], norm_zeta2s[i], nu)]
    return np.max(Q_is)
import matplotlib.pyplot as plt
import numpy as np
from rockyraccoon.model.core import RaccoonWrapper
from typing import List
def plot_qml_landscape_binary(
X: np.ndarray, y: np.ndarray, wrapper: RaccoonWrapper, cmap="viridis", title=""
):
"""
Plot the separation boundaries in the 2D input space.
Args:
X: N x d matrix of N samples and d features.
y: Length N vector with labels.
wrapper: The RaccoonWrapper we used for learning
cmap: String with name of matplotlib colormap, see MPL docs
title: String with title of the figure
"""
if wrapper.model.bias:
X = wrapper.add_bias(X)
class_0, class_1 = np.unique(y)
plt.rc("font", size=15)
cmap = plt.cm.get_cmap(cmap)
blue = cmap(0.0)
red = cmap(1.0)
h = 25
max_grid = 2
x_min, x_max = X[:, 0].min() - max_grid, X[:, 0].max() + max_grid
y_min, y_max = X[:, 1].min() - max_grid, X[:, 1].max() + max_grid
xx, yy = np.meshgrid(np.linspace(x_min, x_max, h), np.linspace(y_min, y_max, h))
if wrapper.bias:
z = wrapper.predict(np.c_[xx.ravel(), yy.ravel(), np.ones_like(yy).ravel()])
else:
z = wrapper.predict(np.c_[xx.ravel(), yy.ravel()])
z = z[:, 1] - z[:, 0]
z = z.reshape(xx.shape)
fig, ax = plt.subplots()
ax.contour(xx, yy, z, cmap=cmap)
# Plot also the training points
y = y.flatten()
np.random.seed(123)
spread = 0.3
ax.scatter(
X[(y == class_0), 0]
+ np.random.uniform(-spread, spread, np.sum((y == class_0))),
X[(y == class_0), 1]
+ np.random.uniform(-spread, spread, np.sum((y == class_0))),
marker=".",
c=np.array([blue]),
label="-1",
s=25,
)
ax.scatter(
X[(y == class_1), 0]
+ np.random.uniform(-spread, spread, np.sum((y == class_1))),
X[(y == class_1), 1]
+ np.random.uniform(-spread, spread, np.sum((y == class_1))),
marker="x",
c=np.array([red]),
label="+1",
s=25,
)
ax.set_xlabel("$x_0$")
ax.set_ylabel("$x_1$")
ax.set_title(title)
ax.legend()
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.03, 0.7])
m = plt.cm.ScalarMappable(cmap=cmap)
m.set_array(np.linspace(-1, 1, 11))
plt.colorbar(m, cax=cbar_ax, boundaries=np.linspace(-1, 1, 11))
plt.show()
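# Example usage (a sketch, assuming a trained RaccoonWrapper and 2D inputs X
# with binary labels y):
#     plot_qml_landscape_binary(X, y, wrapper, cmap="viridis",
#                               title="Decision landscape")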
def plot_lh(wrapper: RaccoonWrapper, cmap="viridis", title=""):
"""
Args:
wrapper: The RaccoonWrapper we used for learning
"""
cmap = plt.cm.get_cmap(cmap)
fig, ax = plt.subplots(1, 1)
ax.plot(wrapper.lh, c=cmap(0.2))
ax.set_xlabel("number of iterations")
ax.set_ylabel("Likelihood $\mathcal{L}$")
ax.set_title(title)
plt.show()
def plot_qml_landscape_multiclass(
X: np.ndarray,
y: np.ndarray,
wrapper: RaccoonWrapper,
subplot_grid: List[int],
cmap="viridis",
title="",
):
"""
Plot the separation boundaries of a multiclass qml model in 2D space.
Args:
X: N x d matrix of N samples and d features.
y: Length N vector with labels.
wrapper: The RaccoonWrapper we used for learning
subplot_grid: List that specifies the grid of the subplots
cmap: Name of MPL colormap
title: Title of the figure
"""
if wrapper.model.bias:
X = wrapper.add_bias(X)
assert (
len(subplot_grid) == 2
), "Expected subplot_grid to have length 2, but got iterable with length {}".format(
len(subplot_grid)
)
labels = np.unique(y)
num_classes = len(np.unique(y))
if num_classes == 2:
print("Only {} classes found, calling binary plotter instead")
plot_qml_landscape_binary(X, y, wrapper, cmap=cmap)
return
assert (
np.product(subplot_grid) == num_classes
), "wrong grid size {} for {} classes".format(subplot_grid, num_classes)
plt.rc("font", size=15)
cmap = plt.cm.get_cmap(cmap)
clrs = [cmap(0.0), cmap(0.5), cmap(1.0)]
h = 25
max_grid = 2
spread = 0.2
y = y.flatten()
x_min, x_max = X[:, 0].min() - max_grid, X[:, 0].max() + max_grid
y_min, y_max = X[:, 1].min() - max_grid, X[:, 1].max() + max_grid
xx, yy = np.meshgrid(np.linspace(x_min, x_max, h), np.linspace(y_min, y_max, h))
if wrapper.bias:
z = wrapper.predict(np.c_[xx.ravel(), yy.ravel(), np.ones_like(yy).ravel()])
else:
z = wrapper.predict(np.c_[xx.ravel(), yy.ravel()])
sections = np.zeros_like(z)
idx = np.argmax(z, axis=1)
sections[np.arange(len(idx)), np.argmax(z, axis=1)] = 1
idx = idx.reshape(xx.shape)
markers = [".", "*", "x", "v", "s", "1"]
z = [el.reshape(xx.shape) for el in z.T]
fig, axs = plt.subplots(*subplot_grid)
if subplot_grid[0] == 1:
axs = axs.reshape(1, -1)
if subplot_grid[1] == 1:
axs = axs.reshape(-1, 1)
for i, ax in enumerate(axs.flatten()):
for j, label in enumerate(labels):
np.random.seed(2342)
if j != i:
ax.scatter(
X[(y == label), 0]
+ np.random.uniform(-spread, spread, np.sum((y == label))),
X[(y == label), 1]
+ np.random.uniform(-spread, spread, np.sum((y == label))),
c="gray",
label=label,
marker=markers[label],
s=50,
)
else:
ax.scatter(
X[(y == label), 0]
+ np.random.uniform(-spread, spread, np.sum((y == label))),
X[(y == label), 1]
+ np.random.uniform(-spread, spread, np.sum((y == label))),
c=np.array([clrs[2]]),
import os
import numpy as np
import time
import pickle
class Cluster:
def __init__(self, a_s, a_u, u, s, a, alpha_r, e=None, m_k=None):
self.a_s = a_s
self.a_u = a_u
self.ns = np.size(s)
self.alpha = 1
self.alpha_r = alpha_r
self.S = s[np.newaxis, :]
self.phi = u[np.newaxis, :]
# if m_k is None:
# m_k = self.alpha_r
# # m_k = self.alpha * self.A @ self.A.T + self.alpha_r
# m_k = np.atleast_2d(m_k)
# m_k = np.linalg.inv(m_k)
if np.ndim(a) == 1:
a = a[:, np.newaxis]
if e is not None:
self.A = (a @ m_k @ e).T
else:
self.A = a.T
m_k = self.alpha * self.A @ self.A.T + self.alpha_r
m_k = np.atleast_2d(m_k)
m_k = np.linalg.inv(m_k)
self.center = (u, self.S[0])
self.num = 1
# self.b = self.alpha ** 2 * a @ m_k @ a.T
self.b = self.alpha ** 2 * self.A.T @ m_k @ self.A
self.a_k = None
self.p_k = None
self.dis = None
def distance(self, u, s):
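# weighted squared distance between this cluster's centre and a new (input u, state s) pair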
return self.a_u * np.sum((self.center[0] - u) ** 2) + self.a_s * np.sum((self.center[1] - s) ** 2)
def udistance(self, u):
dis_u = (self.phi - u) ** 2
dis_u = np.array([np.sum(dis_u[i]) for i in range(dis_u.shape[0])])
return dis_u.reshape(-1)
def sdistance(self, s):
dis_s = (self.S - s) ** 2
return np.array([np.sum(dis_s[i]) for i in range(dis_s.shape[0])])
def update(self, u, s, a, dq):
dis = self.a_s * self.sdistance(s) + self.a_u * self.udistance(u)
index = np.argmin(dis)
if dis[index] < dq:
self.A[index] = self.A[index] + a.reshape(-1)
else:
self.A = np.concatenate((self.A, a.T), axis=0)
self.phi = np.concatenate((self.phi, u[np.newaxis, :]), axis=0)
self.S = np.concatenate((self.S, s[np.newaxis, :]), axis=0)
self.center = ((self.center[0] * self.num + u) / (self.num + 1), (self.center[1] * self.num + s) / (self.num + 1))
self.num += 1
def m_k(self, u, s, a):
dis = []
p_k = []
p_kk = []
a_k = np.array([[]])
m_0 = len(self.A)
m_1 = len(u)
for (uu, ss, aa) in zip(u, s, a):
corr = np.exp(-self.a_s * self.sdistance(ss)) * np.exp(-self.a_u * self.udistance(uu))
dis.append(1 - corr)
p = np.zeros((m_0 * self.ns, self.ns))
for mm in range(m_0):
p[mm * self.ns:mm * self.ns + self.ns, :] = np.eye(self.ns) * corr[mm]
p_k.append(p)
dis_s_p = np.sum((s - ss) ** 2, axis=1)
dis_u_p = (u - uu) ** 2
dis_u_p = np.array([np.sum(dis_u_p[i]) for i in range(dis_u_p.shape[0])])
# dis_u = np.sum(np.sum(dis_u, axis=1), axis=1)
dis_u_p = dis_u_p.reshape(-1)
corr_p = np.exp(-self.a_s * dis_s_p) * np.exp(-self.a_u * dis_u_p)
p_p = np.zeros((m_1 * self.ns, self.ns))
for mm in range(m_1):
p_p[mm * self.ns:mm * self.ns + self.ns, :] = np.eye(self.ns) * corr_p[mm]
p_kk.append(p_p)
a_k = np.hstack((a_k, aa))
self.p_k = np.hstack(p_k)
p_kk = np.hstack(p_kk)
self.a_k = a_k.T
# print(np.size(self.a_k) / self.ns, len(u))
# if np.size(self.a_k) / self.ns != len(u):
# print('ddddd')
mk = self.alpha * self.a_k.T @ p_kk @ self.a_k - self.a_k.T @ self.p_k.T @ self.b @ self.p_k @ self.a_k
self.dis = np.array(dis).T
return mk
def update_kalman(self, u, s, mk, e, dq):
da = -self.b @ self.p_k @ self.a_k @ mk @ e
da_p = self.alpha * self.a_k @ mk @ e
db = -self.alpha * self.b @ self.p_k @ self.a_k @ mk @ self.a_k.T
b = np.hstack((self.b, db))
self.b = np.vstack((b, np.hstack((db.T, self.alpha ** 2 * self.a_k @ mk @ self.a_k.T))))
da = da.reshape(-1, self.ns)
da_p = da_p.reshape(-1, self.ns)
self.A = self.A + da
min_dis = np.min(self.dis, axis=0)
index_A = np.argmin(self.dis, axis=0)
index_merge = np.nonzero(min_dis <= dq)[0]
index_keep = np.nonzero(min_dis > dq)[0]
index_A = [index_A[pos] for pos in index_merge]
if len(index_merge) == 0:
self.A = np.concatenate((self.A, da_p), axis=0)
self.phi = np.concatenate((self.phi, u), axis=0)
self.S = np.concatenate((self.S, s), axis=0)
else:
# merge
self.A[index_A, :] = self.A[index_A, :] + da_p[index_merge, :]
# rewrite b
n_a = len(self.A)
ind_k = [na for na in range(n_a * self.ns)]
for i_k in index_keep:
st = (i_k + n_a) * self.ns
ind_k = ind_k + [na for na in range(st, st + self.ns)]
ind_m = []
for i_m in index_merge:
st = (i_m + n_a) * self.ns
ind_m = ind_m + [na for na in range(st, st + self.ns)]
ind_a = []
for i_a in index_A:
st = i_a * self.ns
ind_a = ind_a + [na for na in range(st, st + self.ns)]
b = self.b[ind_k, :]
b[ind_a, :] = b[ind_a, :] + self.b[ind_m, :]
self.b = b
b = self.b[:, ind_k]
b[:, ind_a] = b[:, ind_a] + self.b[:, ind_m]
self.b = b
self.A = np.concatenate((self.A, da_p[index_keep, :]), axis=0)
self.phi = np.concatenate((self.phi, u[index_keep, :]), axis=0)
self.S = np.concatenate((self.S, s[index_keep, :]), axis=0)
self.num = len(self.A)
class NiceKAARMA:
def __init__(self, ns, ny, a_s, a_u, u, dc, dq):
self.ns = ns
self.ny = ny
self.a_s = a_s
self.a_u = a_u
self.dc = dc
self.dq = dq
self.II = np.zeros((ny, ns))
self.II[:, ns - ny:] = np.eye(ny)
self.clusters = []
np.random.seed(0)
s = np.random.random(ns)
np.random.seed(1)
a = np.random.random(ns)
self.alpha_r = 1
self.clusters.append(Cluster(a_s, a_u, u, s, a, self.alpha_r))
def save_model(self, path):
for (no, cluster) in enumerate(self.clusters):
with open(path + '/%d_cluster.pkl' % no, 'wb+') as f:
pickle.dump(cluster, f)
def load_model(self, path):
self.clusters.clear()
for cc in os.walk(path):
for c in cc[2]:
if '.pkl' in c:
with open(path + '/' + c, 'rb+') as f:
self.clusters.append(pickle.load(f))
def forward(self, u):
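# Run the kernel state-space model over the input sequence: at each step the
# closest cluster (by weighted input/state distance) supplies the kernel expansion
# A.T @ k that produces the next hidden state; the prediction is read out via II.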
ss = np.zeros((1, self.ns))
for i in range(u.shape[0]):
distance = [self.clusters[j].distance(u[i], ss) for j in range(len(self.clusters))]
cluster = np.argmin(distance)
di = self.clusters[cluster].S - ss
k_s = np.exp(-self.a_s * np.sum(di ** 2, axis=1))[:, np.newaxis]
diss = (self.clusters[cluster].phi - u[i]) ** 2
k_u = np.exp(-self.a_u * np.array([np.sum(diss[j]) for j in range(self.clusters[cluster].num)]))[:,
np.newaxis]
ki = k_s * k_u
ss = self.clusters[cluster].A.T @ ki
ss = ss.T
pred = self.II @ ss.T
return pred
def test_one_sampe(self, x, y):
pred = self.forward(x)
# return np.sum((y - pred.reshape(-1)) ** 2), np.argmax(pred) == np.argmax(y)
return np.sum((y - pred) ** 2), round(pred[0, 0]) == y
def test(self, x, y):
loss = []
count = 0
for (xx, yy) in zip(x, y):
ls, result = self.test_one_sampe(xx, yy)
loss.append(ls)
count += result
return np.mean(loss), count / len(loss)
def train(self, x, y, lr, test_x=None, test_y=None):
loss = []
acc = []
m = []
loss_train = []
step = 0
for (u, d) in zip(x, y):
step += 1
print('\r', step, end='')
# generate s-1
# d = np.float64(d)
s_p = np.zeros((u.shape[0], self.ns))
phi = np.zeros(u.shape)
v = np.zeros((u.shape[0], self.ns, self.ns))
ss = np.zeros((1, self.ns))
for j in range(u.shape[0]):
distance = [self.clusters[i].distance(u[j], ss) for i in range(len(self.clusters))]
cluster = np.argmin(distance)
s_p[j] = ss.reshape(-1)
phi[j] = u[j]
di = self.clusters[cluster].S - ss
# if di.dtype == 'object':
# print(di.dtype)
k_s = np.exp(-self.a_s * np.sum(di ** 2, axis=1))[:, np.newaxis]
diss = (self.clusters[cluster].phi - u[j]) ** 2
k_u = np.exp(-self.a_u * np.array([np.sum(diss[i]) for i in range(self.clusters[cluster].num)]))[:, np.newaxis]
ki = k_s * k_u
# print(ki.tolist())
ss = self.clusters[cluster].A.T @ ki
ss = ss.T
# print(ki.tolist())
ki = np.diag(ki.reshape(-1))
gamma_i = 2 * self.a_s * self.clusters[cluster].A.T @ ki
if gamma_i.ndim == 1:
gamma_i = gamma_i[:, np.newaxis]
gamma_i = gamma_i @ di
if j == 0:
v[j] = np.eye(self.ns)
else:
for index in range(len(v)):
v[index] = gamma_i @ v[index]
v[j] = np.eye(self.ns)
pred = self.II @ ss.T
e = np.atleast_2d(d).T - pred
loss_train.append(np.sum(e ** 2))
# update weights
start = max(0, len(s_p) - 10)
num_steps = 0
for (s, uu, vv) in zip(s_p, phi, v):
num_steps += 1
if num_steps > start:
distance = [self.clusters[i].distance(uu, s) for i in range(len(self.clusters))]
cluster = np.argmin(distance)
a = self.II @ vv
a = a.T @ e
if distance[cluster] > self.dc:
self.clusters.append(Cluster(self.a_s, self.a_u, uu, s, lr * a.reshape(-1), self.alpha_r))
else:
self.clusters[cluster].update(uu, s, lr * a, self.dq)
if test_x is not None:
loss_test = []
num_test = 0
for (test_xx, test_yy) in zip(test_x, test_y):
ls, count = self.test_one_sampe(test_xx, test_yy)
loss_test.append(ls)
num_test = num_test + count
print('\rloss_train: %05f' % loss_train[-1], 'loss_test: %05f' % np.mean(loss_test), ' acc_test: %05f' % (num_test / len(loss_test)), ' m:', [cc.num for cc in self.clusters])
loss.append(np.mean(loss_test))
acc.append(num_test / len(loss_test))
m.append([cc.num for cc in self.clusters])
return loss_train, loss, acc, m
def train_kalman(self, x, y, test_x=None, test_y=None):
loss = []
acc = []
m = []
loss_train = []
for (u, d) in zip(x, y):
s_p = np.zeros((u.shape[0], self.ns))
phi = np.zeros(u.shape)
v = np.zeros((u.shape[0], self.ns, self.ns))
ss = np.zeros((1, self.ns))
for j in range(u.shape[0]):
distance = [self.clusters[i].distance(u[j], ss) for i in range(len(self.clusters))]
cluster = np.argmin(distance)
s_p[j] = ss.reshape(-1)
phi[j] = u[j]
di = self.clusters[cluster].S - ss
k_s = np.exp(-self.a_s * np.sum(di ** 2, axis=1))[:, np.newaxis]
diss = (self.clusters[cluster].phi - u[j]) ** 2
k_u = np.exp(-self.a_u * np.array([np.sum(diss[i]) for i in range(self.clusters[cluster].num)]))[:, np.newaxis]
ki = k_s * k_u
ss = self.clusters[cluster].A.T @ ki
ss = ss.T
ki = np.diag(ki.reshape(-1))
gamma_i = 2 * self.a_s * self.clusters[cluster].A.T @ ki
if gamma_i.ndim == 1:
gamma_i = gamma_i[:, np.newaxis]
gamma_i = gamma_i @ di
if j == 0:
v[j] = np.eye(self.ns)
else:
for index in range(len(v)):
v[index] = gamma_i @ v[index]
v[j] = np.eye(self.ns)
pred = self.II @ ss.T
e = np.atleast_2d(d).T - pred
loss_train.append(np.sum(e ** 2))
start = max(0, len(s_p) - 10)
s_p = s_p[start:]
phi = phi[start:]
v = v[start:]
a = []
cluster = []
for vv in v:
a.append(self.II @ vv)
for (s, uu) in zip(s_p, phi):
distance = [self.clusters[i].distance(uu, s) for i in range(len(self.clusters))]
if np.min(distance)
import tensorflow as tf
import numpy as np
import rcwa_utils
import tensor_utils
def initialize_params(wavelengths = [632.0],
thetas = [0.0],
phis = [0.0],
pte = [1.0],
ptm = [0.0],
pixelsX = 1,
pixelsY = 1,
erd = 6.76,
ers = 2.25,
PQ = [11, 11],
Lx = 0.7 * 632.0,
Ly = 0.7 * 632.0,
L = [632.0, 632.0],
Nx = 512,
eps_min = 1.0,
eps_max = 12.11,
blur_radius = 100.0):
'''
Initializes simulation parameters and hyperparameters.
Args:
wavelengths: A `list` of dtype `float` and length `batchSize` specifying
the set of wavelengths over which to optimize.
thetas: A `list` of dtype `float` and length `batchSize` specifying
the set of polar angles over which to optimize.
phis: A `list` of dtype `float` and length `batchSize` specifying the
set of azimuthal angles over which to optimize.
pte: A `list` of dtype `float` and length `batchSize` specifying the set
of TE polarization component magnitudes over which to optimize. A
magnitude of 0.0 means no TE component. Under normal incidence, the TE
polarization is parallel to the y-axis.
ptm: A `list` of dtype `float` and length `batchSize` specifying the set
of TM polarization component magnitudes over which to optimize. A
magnitude of 0.0 means no TM component. Under normal incidence, the TM
polarization is parallel to the x-axis.
pixelsX: An `int` specifying the x dimension of the metasurface in
pixels that are of width `params['Lx']`.
pixelsY: An `int` specifying the y dimension of the metasurface in
pixels that are of width `params['Ly']`.
erd: A `float` specifying the relative permittivity of the non-vacuum,
constituent material of the device layer for shape optimizations.
ers: A `float` specifying the relative permittivity of the substrate
layer.
PQ: A `list` of dtype `int` and length 2 specifying the number of
Fourier harmonics in the x and y directions. The numbers should be odd
values.
Lx: A `float` specifying the unit cell pitch in the x direction in
nanometers.
Ly: A `float` specifying the unit cell pitch in the y direction in
nanometers.
L: A `list` of dtype `float` specifying the layer thicknesses in
nanometers.
Nx: An `int` specifying the number of sample points along the x
direction in the unit cell.
eps_min: A `float` specifying the minimum allowed permittivity for
topology optimizations.
eps_max: A `float` specifying the maximum allowed permittivity for
topology optimizations.
blur_radius: A `float` specifying the radius of the blur function with
which a topology optimized permittivity density should be convolved.
Returns:
params: A `dict` containing simulation and optimization settings.
'''
# Define the `params` dictionary.
params = dict({})
# Units and tensor dimensions.
params['nanometers'] = 1E-9
params['degrees'] = np.pi / 180
params['batchSize'] = len(wavelengths)
params['pixelsX'] = pixelsX
params['pixelsY'] = pixelsY
params['Nlay'] = len(L)
# Simulation tensor shapes.
batchSize = params['batchSize']
simulation_shape = (batchSize, pixelsX, pixelsY)
# Batch parameters (wavelength, incidence angle, and polarization).
lam0 = params['nanometers'] * tf.convert_to_tensor(wavelengths, dtype = tf.float32)
lam0 = lam0[:, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis]
lam0 = tf.tile(lam0, multiples = (1, pixelsX, pixelsY, 1, 1, 1))
params['lam0'] = lam0
theta = params['degrees'] * tf.convert_to_tensor(thetas, dtype = tf.float32)
theta = theta[:, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis]
theta = tf.tile(theta, multiples = (1, pixelsX, pixelsY, 1, 1, 1))
params['theta'] = theta
phi = params['degrees'] * tf.convert_to_tensor(phis, dtype = tf.float32)
phi = phi[:, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis]
phi = tf.tile(phi, multiples = (1, pixelsX, pixelsY, 1, 1, 1))
params['phi'] = phi
pte = tf.convert_to_tensor(pte, dtype = tf.complex64)
pte = pte[:, tf.newaxis, tf.newaxis, tf.newaxis]
pte = tf.tile(pte, multiples = (1, pixelsX, pixelsY, 1))
params['pte'] = pte
ptm = tf.convert_to_tensor(ptm, dtype = tf.complex64)
ptm = ptm[:, tf.newaxis, tf.newaxis, tf.newaxis]
ptm = tf.tile(ptm, multiples = (1, pixelsX, pixelsY, 1))
params['ptm'] = ptm
# Device parameters.
params['ur1'] = 1.0 # permeability in reflection region
params['er1'] = 1.0 # permittivity in reflection region
params['ur2'] = 1.0 # permeability in transmission region
params['er2'] = 1.0 # permittivity in transmission region
params['urd'] = 1.0 # permeability of device
params['erd'] = erd # permittivity of device
params['urs'] = 1.0 # permeability of substrate
params['ers'] = ers # permittivity of substrate
params['Lx'] = Lx * params['nanometers'] # period along x
params['Ly'] = Ly * params['nanometers'] # period along y
length_shape = (1, 1, 1, params['Nlay'], 1, 1)
L = tf.convert_to_tensor(L, dtype = tf.complex64)
L = L[tf.newaxis, tf.newaxis, tf.newaxis, :, tf.newaxis, tf.newaxis]
params['L'] = L * params['nanometers'] #* tf.ones(shape = length_shape, dtype = tf.complex64)
params['length_min'] = 0.1
params['length_max'] = 2.0
# RCWA parameters.
params['PQ'] = PQ # number of spatial harmonics along x and y
params['Nx'] = Nx # number of point along x in real-space grid
if params['PQ'][1] == 1:
params['Ny'] = 1
else:
params['Ny'] = int(np.round(params['Nx'] * params['Ly'] / params['Lx'])) # number of point along y in real-space grid
# Coefficient for the argument of tf.math.sigmoid() when generating
# permittivity distributions with geometric parameters.
params['sigmoid_coeff'] = 1000.0
# Polynomial order for rectangular resonators definition.
params['rectangle_power'] = 200
# Allowed permittivity range.
params['eps_min'] = eps_min
params['eps_max'] = eps_max
# Upsampling for Fourier optics propagation.
params['upsample'] = 1
# Duty Cycle limits for gratings.
params['duty_min'] = 0.1
params['duty_max'] = 0.9
# Permittivity density blur radius.
params['blur_radius'] = blur_radius * params['nanometers']
return params
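# Example usage sketch (illustrative values, not from the original script):
#
#   params = initialize_params(wavelengths=[632.0, 532.0],
#                              thetas=[0.0, 0.0],
#                              phis=[0.0, 0.0],
#                              pte=[1.0, 1.0],
#                              ptm=[0.0, 0.0],
#                              pixelsX=31, pixelsY=31)
#   # batch tensors such as params['lam0'] then have shape
#   # (batchSize, pixelsX, pixelsY, 1, 1, 1)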
def generate_coupled_cylindrical_resonators(r_x, r_y, params):
'''
Generates permittivity/permeability for a unit cell comprising 4 coupled
elliptical resonators.
Args:
r_x: A `tf.Tensor` of shape `(1, pixelsX, pixelsY, 4)` specifying the
x-axis diameters of the four cylinders.
r_y: A `tf.Tensor` of shape `(1, pixelsX, pixelsY, 4)` specifying the
y-axis diameters of the four cylinders.
params: A `dict` containing simulation and optimization settings.
Returns:
ER_t: A `tf.Tensor` of shape `(batchSize, pixelsX, pixelsY, Nlayer, Nx, Ny)`
specifying the relative permittivity distribution of the unit cell.
UR_t: A `tf.Tensor` of shape `(batchSize, pixelsX, pixelsY, Nlayer, Nx, Ny)`
specifying the relative permeability distribution of the unit cell.
'''
# Retrieve simulation size parameters.
batchSize = params['batchSize']
pixelsX = params['pixelsX']
pixelsY = params['pixelsY']
Nlay = params['Nlay']
Nx = params['Nx']
Ny = params['Ny']
Lx = params['Lx']
Ly = params['Ly']
# Initialize relative permeability.
materials_shape = (batchSize, pixelsX, pixelsY, Nlay, Nx, Ny)
UR = params['urd'] * np.ones(materials_shape)
# Define the cartesian cross section.
dx = Lx / Nx # grid resolution along x
dy = Ly / Ny # grid resolution along y
xa = np.linspace(0, Nx - 1, Nx) * dx # x axis array
xa = xa - np.mean(xa) # center x axis at zero
ya = np.linspace(0, Ny - 1, Ny) * dy # y axis vector
ya = ya - np.mean(ya) # center y axis at zero
[y_mesh, x_mesh] = np.meshgrid(ya,xa)
# Convert to tensors and expand and tile to match the simulation shape.
y_mesh = tf.convert_to_tensor(y_mesh, dtype = tf.float32)
y_mesh = y_mesh[tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis, :, :]
y_mesh = tf.tile(y_mesh, multiples = (batchSize, pixelsX, pixelsY, 1, 1, 1))
x_mesh = tf.convert_to_tensor(x_mesh, dtype = tf.float32)
x_mesh = x_mesh[tf.newaxis, tf.newaxis, tf.newaxis, tf.newaxis, :, :]
x_mesh = tf.tile(x_mesh, multiples = (batchSize, pixelsX, pixelsY, 1, 1, 1))
# Nanopost centers.
c1_x = -Lx / 4
c1_y = -Ly / 4
c2_x = -Lx / 4
c2_y = Ly / 4
c3_x = Lx / 4
c3_y = -Ly / 4
c4_x = Lx / 4
c4_y = Ly / 4
# Clip the optimization ranges.
r_x = params['Lx'] * tf.clip_by_value(r_x, clip_value_min = 0.05, clip_value_max = 0.23)
r_y = params['Ly'] * tf.clip_by_value(r_y, clip_value_min = 0.05, clip_value_max = 0.23)
r_x = tf.tile(r_x, multiples = (batchSize, 1, 1, 1))
r_y = tf.tile(r_y, multiples = (batchSize, 1, 1, 1))
r_x = r_x[:, :, :, tf.newaxis, tf.newaxis, tf.newaxis, :]
r_y = r_y[:, :, :, tf.newaxis, tf.newaxis, tf.newaxis, :]
# Calculate the nanopost boundaries.
c1 = 1 - ((x_mesh - c1_x) / r_x[:, :, :, :, :, :, 0]) ** 2 - ((y_mesh - c1_y) / r_y[:, :, :, :, :, :, 0]) ** 2
c2 = 1 - ((x_mesh - c2_x) / r_x[:, :, :, :, :, :, 1]) ** 2 - ((y_mesh - c2_y) / r_y[:, :, :, :, :, :, 1]) ** 2
c3 = 1 - ((x_mesh - c3_x) / r_x[:, :, :, :, :, :, 2]) ** 2 - ((y_mesh - c3_y) / r_y[:, :, :, :, :, :, 2]) ** 2
c4 = 1 - ((x_mesh - c4_x) / r_x[:, :, :, :, :, :, 3]) ** 2 - ((y_mesh - c4_y) / r_y[:, :, :, :, :, :, 3]) ** 2
# Build device layer.
ER_c1 = tf.math.sigmoid(params['sigmoid_coeff'] * c1)
ER_c2 = tf.math.sigmoid(params['sigmoid_coeff'] * c2)
ER_c3 = tf.math.sigmoid(params['sigmoid_coeff'] * c3)
ER_c4 = tf.math.sigmoid(params['sigmoid_coeff'] * c4)
ER_t = 1 + (params['erd'] - 1) * (ER_c1 + ER_c2 + ER_c3 + ER_c4)
# Build substrate and concatenate along the layers dimension.
device_shape = (batchSize, pixelsX, pixelsY, 1, Nx, Ny)
ER_substrate = params['ers'] * tf.ones(device_shape, dtype = tf.float32)
ER_t = tf.concat(values = [ER_t, ER_substrate], axis = 3)
# Cast to complex for subsequent calculations.
ER_t = tf.cast(ER_t, dtype = tf.complex64)
UR_t = tf.convert_to_tensor(UR, dtype = tf.float32)
UR_t = tf.cast(UR_t, dtype = tf.complex64)
return ER_t, UR_t
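# Example usage sketch (hypothetical radii; assumes `params` returned by
# initialize_params above):
#
#   shape = (1, params['pixelsX'], params['pixelsY'], 4)
#   r_x = 0.15 * tf.ones(shape)
#   r_y = 0.15 * tf.ones(shape)
#   ER_t, UR_t = generate_coupled_cylindrical_resonators(r_x, r_y, params)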
def generate_coupled_rectangular_resonators(r_x, r_y, params):
'''
Generates permittivity/permeability for a unit cell comprising 4 coupled
rectangular cross section scatterers.
Args:
r_x: A `tf.Tensor` of shape `(1, pixelsX, pixelsY, 4)` specifying the
x-axis widths of the four rectangles.
r_y: A `tf.Tensor` of shape `(1, pixelsX, pixelsY, 4)` specifying the
y-axis widths of the four rectangles.
params: A `dict` containing simulation and optimization settings.
Returns:
ER_t: A `tf.Tensor` of shape `(batchSize, pixelsX, pixelsY, Nlayer, Nx, Ny)`
specifying the relative permittivity distribution of the unit cell.
UR_t: A `tf.Tensor` of shape `(batchSize, pixelsX, pixelsY, Nlayer, Nx, Ny)`
specifying the relative permeability distribution of the unit cell.
'''
# Retrieve simulation size parameters.
batchSize = params['batchSize']
pixelsX = params['pixelsX']
pixelsY = params['pixelsY']
Nlay = params['Nlay']
Nx = params['Nx']
Ny = params['Ny']
Lx = params['Lx']
Ly = params['Ly']
# Initialize relative permeability.
materials_shape = (batchSize, pixelsX, pixelsY, Nlay, Nx, Ny)
UR = params['urd'] * np.ones(materials_shape)
# Define the cartesian cross section.
dx = Lx / Nx # grid resolution along x
dy = Ly / Ny # grid resolution along y
xa = np.linspace(0, Nx - 1, Nx)
import numpy as np
import tensorflow as tf
from ownagents.utils.tf_utils import get_vars, Normalizer
import ownagents.tf_util as U
#from algorithm.replay_buffer import goal_based_process
class DDPG:
def __init__(self, sess, layer_index, env, args):
self.args = args
self.sess = sess
self.layer_index = layer_index
self.pi_lr = args.pi_lr
self.q_lr = args.q_lr
self.polyak = args.polyak
# for level 0: S_0 = S, A_0 = A, T_0 = T, G_0 = S;
# for level i: S_i = S, A_i = S, T_i = T~, G_i = S except for most bigger hierarchy (k-1) G_k-1 = G
if layer_index == 0:
self.action_space_bounds = env.action_bounds
self.action_offset = env.action_offset
self.action_dims = env.action_dim
else:
# Determine symmetric range of subgoal space and offset
self.action_space_bounds = env.subgoal_bounds_symmetric
self.action_offset = env.subgoal_bounds_offset
self.action_dims = env.subgoal_dim
if layer_index == args.num_layers - 1:
self.goal_dim = env.end_goal_dim
else:
self.goal_dim = env.subgoal_dim
self.state_dim = env.state_dim
# Set parameters to give critic optimistic initialization near q_init
self.q_init = -0.067
self.q_limit = -args.H
self.q_offset = -np.log(self.q_limit / self.q_init - 1)
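# With q = sigmoid(raw_q + q_offset) * q_limit (see mlp_value below), a zero
# pre-activation maps to approximately q_init, giving the critic an optimistic start.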
self.create_model()
self.train_info_pi = {
'Pi_q_loss': self.pi_q_loss,
'Pi_l2_loss': self.pi_l2_loss
}
self.train_info_q = {
'Q_loss': self.q_loss
}
self.train_info = {**self.train_info_pi, **self.train_info_q}
self.step_info = {
'Q_average': self.q_pi
}
def create_model(self):
input_dims = [self.state_dim + self.goal_dim]
action_dims = [self.action_dims]
def create_inputs():
self.raw_obs_ph = tf.placeholder(tf.float32, [None]+input_dims)
self.raw_obs_next_ph = tf.placeholder(tf.float32, [None]+input_dims)
self.acts_ph = tf.placeholder(tf.float32, [None]+action_dims)
self.rews_ph = tf.placeholder(tf.float32, [None, 1])
self.gamma_ph = tf.placeholder(tf.float32, [None, 1])
def create_normalizer():
with tf.variable_scope('normalizer_'+str(self.layer_index)):
self.obs_normalizer = Normalizer(input_dims, self.sess)
self.obs_ph = self.obs_normalizer.normalize(self.raw_obs_ph)
self.obs_next_ph = self.obs_normalizer.normalize(self.raw_obs_next_ph)
def create_network():
def mlp_policy(obs_ph):
with tf.variable_scope('net', initializer=tf.contrib.layers.xavier_initializer()):
pi_dense1 = tf.layers.dense(obs_ph, 64, activation=tf.nn.relu, name='pi_dense1')
pi_dense2 = tf.layers.dense(pi_dense1, 64, activation=tf.nn.relu, name='pi_dense2')
pi_dense3 = tf.layers.dense(pi_dense2, 64, activation=tf.nn.relu, name='pi_dense3')
pi = tf.layers.dense(pi_dense3, action_dims[0], activation=tf.nn.tanh, name='pi')
pi = pi * self.action_space_bounds + self.action_offset  # rescale to the environment's action range (needed for non-normalized environments)
return pi
def mlp_value(obs_ph, acts_ph):
state_ph = tf.concat([obs_ph, acts_ph], axis=1)
with tf.variable_scope('net', initializer=tf.contrib.layers.xavier_initializer()):
q_dense1 = tf.layers.dense(state_ph, 64, activation=tf.nn.relu, name='q_dense1')
q_dense2 = tf.layers.dense(q_dense1, 64, activation=tf.nn.relu, name='q_dense2')
q_dense3 = tf.layers.dense(q_dense2, 64, activation=tf.nn.relu, name='q_dense3')
q = tf.layers.dense(q_dense3, 1, name='q')
q = tf.sigmoid(q + self.q_offset) * self.q_limit  # bound the Q-value between -H and 0, as described in the paper
return q
with tf.variable_scope('main_'+str(self.layer_index)):
with tf.variable_scope('policy'):
self.pi = mlp_policy(self.obs_ph)
with tf.variable_scope('value'):
self.q = mlp_value(self.obs_ph, self.acts_ph)
with tf.variable_scope('value', reuse=True):
self.q_pi = mlp_value(self.obs_ph, self.pi)
with tf.variable_scope('target_'+str(self.layer_index)):
with tf.variable_scope('policy'):
self.pi_t = mlp_policy(self.obs_next_ph)
with tf.variable_scope('value'):
self.q_t = mlp_value(self.obs_next_ph, self.pi_t)
def create_operators():
self.pi_q_loss = -tf.reduce_mean(self.q_pi)
self.pi_l2_loss = self.args.act_l2*tf.reduce_mean(tf.square(self.pi))
self.pi_optimizer = tf.train.AdamOptimizer(self.pi_lr)
self.pi_train_op = self.pi_optimizer.minimize(self.pi_q_loss+self.pi_l2_loss,
var_list=get_vars('main_'+str(self.layer_index)+'/policy'))
'''if self.args.clip_return:
return_value = tf.clip_by_value(self.q_t, self.args.clip_return_l, self.args.clip_return_r)
else:
return_value = self.q_t'''
return_value = self.q_t
discounted = self.gamma_ph * return_value
target = tf.stop_gradient(self.rews_ph+discounted)
self.q_loss = tf.reduce_mean(tf.square(self.q-target))
self.q_optimizer = tf.train.AdamOptimizer(self.q_lr)
self.q_train_op = self.q_optimizer.minimize(self.q_loss,
var_list=get_vars('main_'+str(self.layer_index)+'/value'))
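# Polyak (soft) target update: v_target <- polyak * v_target + (1 - polyak) * v_main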
self.target_update_op = tf.group([
v_t.assign(self.polyak*v_t + (1.0-self.polyak)*v)
for v, v_t in zip(get_vars('main_'+str(self.layer_index)),
get_vars('target_'+str(self.layer_index)))
])
self.saver=tf.train.Saver()
self.init_op = tf.global_variables_initializer()
self.target_init_op = tf.group([
v_t.assign(v)
for v, v_t in zip(get_vars('main_'+str(self.layer_index)),
get_vars('target_'+str(self.layer_index)))
])
create_inputs()
create_normalizer()
create_network()
create_operators()
def init_network(self):
self.sess.run(self.init_op)
self.sess.run(self.target_init_op)
def step(self, obs, explore=False, test_info=False):
#if (not test_info) and (self.args.buffer.steps_counter<self.args.warmup):
# return np.random.uniform(-1, 1, size=self.action_dims)
# TODO: if self.args.goal_based: obs = goal_based_process(obs)  # needed?
# eps-greedy exploration: with probability eps_act the action is drawn uniformly at random
if explore and np.random.uniform() <= self.args.eps_act:
#return np.random.uniform(-1, 1, size=self.action_dims)  # only valid for normalized action spaces
a = np.random.uniform(-1, 1, size=self.action_dims)
return a*self.action_space_bounds + self.action_offset  # map back into the environment's action range
feed_dict = {
self.raw_obs_ph: [obs]
}
action, info = self.sess.run([self.pi, self.step_info], feed_dict)
action = action[0]
# uncorrelated gaussian exploration
'''so will work just for normalized actions
if explore:
action += np.random.normal(0, self.args.std_act, size=self.action_dims)
action = np.clip(action, -1, 1)'''
if explore:
action += np.random.normal(0, self.args.std_act, size=self.action_dims)  # additive gaussian exploration noise
action = np.clip(action, -self.action_space_bounds, self.action_space_bounds)
"""Tests for the normal distribution."""
import unittest
import itertools
from tests.testing import NumpyAssertions
import numpy as np
import scipy.sparse
from probnum import prob
from probnum.linalg import linops
class NormalTestCase(unittest.TestCase, NumpyAssertions):
"""General test case for the normal distribution."""
def setUp(self):
"""Resources for tests."""
# Seed
np.random.seed(seed=42)
# Parameters
m = 7
n = 3
self.constants = [-1, -2.4, 0, 200, np.pi]
sparsemat = scipy.sparse.rand(m=m, n=n, density=0.1, random_state=1)
self.normal_params = [
(-1, 3),
(np.random.uniform(size=10), np.eye(10)),
(np.array([1, -5]), linops.MatrixMult(A=np.array([[2, 1], [1, -0.1]]))),
(linops.MatrixMult(A=np.array([[0, -5]])), linops.Identity(shape=(2, 2))),
(
np.array([[1, 2], [-3, -0.4], [4, 1]]),
linops.Kronecker(A=np.eye(3)
from typing import Union, Optional, Tuple
import numpy as np
from yaonet.tensor import Dependency, Tensor, ensure_tensor
from yaonet.basic_functions import exp
def sigmoid(t: Tensor) -> Tensor:
t = ensure_tensor(t)
return 1 / (1 + exp(-t))
def tanh(t: Tensor) -> Tensor:
t = ensure_tensor(t)
return (exp(t) - exp(-t)) / (exp(t) + exp(-t))
def relu(t: Tensor) -> Tensor:
t = ensure_tensor(t)
data = np.maximum(0, t.data)
from tensorflow import keras
import tensorflow as tf
import numpy as np
import math
from tensorflow.keras.models import Model
class FGSM:
def __init__(self, model, ep=0.3, isRand=True, clip_min=0, clip_max=1):
"""
args:
model: victim model
ep: FGSM perturbation bound
isRand: whether adding a random noise
clip_min: clip lower bound
clip_max: clip upper bound
"""
self.isRand = isRand
self.model = model
self.ep = ep
self.clip_min = clip_min
self.clip_max = clip_max
def generate(self, x, y, randRate=1):
"""
args:
x: normal inputs
y: ground-truth labels
randRate: the size of random noise
returns:
successed adversarial examples and corresponding ground-truth labels
"""
target = tf.constant(y)
if self.isRand:
x = x + np.random.uniform(-self.ep * randRate, self.ep * randRate, x.shape)
x = np.clip(x, self.clip_min, self.clip_max)
x_adv = tf.Variable(x)
with tf.GradientTape() as tape:
loss = keras.losses.categorical_crossentropy(target, self.model(x_adv))
grads = tape.gradient(loss, x_adv)
delta = tf.sign(grads)
x_adv.assign_add(self.ep * delta)
x_adv = tf.clip_by_value(x_adv, clip_value_min=self.clip_min, clip_value_max=self.clip_max)
success_idx = np.where(np.argmax(self.model(x_adv), axis=1) != np.argmax(y, axis=1))[0]
print("SUCCESS:", len(success_idx))
return x_adv.numpy()[success_idx], y[success_idx]
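# Example usage sketch (assumes a trained Keras classifier `model`, inputs `x_test`
# scaled to [0, 1], and one-hot labels `y_test`; names are illustrative):
#
#   fgsm = FGSM(model, ep=0.1, isRand=True, clip_min=0, clip_max=1)
#   x_adv, y_adv = fgsm.generate(x_test, y_test)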
class PGD:
def __init__(self, model, ep=0.3, epochs=10, step=0.03, isRand=True, clip_min=0, clip_max=1):
"""
args:
model: victim model
ep: PGD perturbation bound
epochs: PGD iterations
isRand: whether adding a random noise
clip_min: clip lower bound
clip_max: clip upper bound
"""
self.isRand = isRand
self.model = model
self.ep = ep
self.epochs = epochs
self.step = step
self.clip_min = clip_min
self.clip_max = clip_max
def generate(self, x, y, randRate=1):
"""
args:
x: normal inputs
y: ground-truth labels
randRate: the size of random noise
returns:
successed adversarial examples and corresponding ground-truth labels
"""
target = tf.constant(y)
if self.isRand:
x = x + np.random.uniform(-self.ep * randRate, self.ep * randRate, x.shape)
x = np.clip(x, self.clip_min, self.clip_max)
x_adv = tf.Variable(x)
for i in range(self.epochs):
with tf.GradientTape() as tape:
loss = keras.losses.categorical_crossentropy(target, self.model(x_adv))
grads = tape.gradient(loss, x_adv)
delta = tf.sign(grads)
x_adv.assign_add(self.step * delta)
x_adv = tf.clip_by_value(x_adv, clip_value_min=self.clip_min, clip_value_max=self.clip_max)
x_adv = tf.Variable(x_adv)
success_idx = np.where(np.argmax(self.model(x_adv), axis=1) != np.argmax(y, axis=1))[0]
print("SUCCESS:", len(success_idx))
return x_adv.numpy()[success_idx], y[success_idx]
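# Example usage sketch (same assumptions as the FGSM example above):
#
#   pgd = PGD(model, ep=0.1, epochs=10, step=0.01)
#   x_adv, y_adv = pgd.generate(x_test, y_test)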
class CW_L2:
def __init__(self, model, batch_size, confidence, targeted,
learning_rate, binary_search_steps, max_iterations,
abort_early, initial_const, clip_min, clip_max, shape):
""" a tf2 version of C&W-L2 (batch generation)
based on https://github.com/cleverhans-lab/cleverhans
"""
self.TARGETED = targeted
self.MAX_ITERATIONS = max_iterations
self.BINARY_SEARCH_STEPS = binary_search_steps
self.ABORT_EARLY = abort_early
self.CONFIDENCE = confidence
self.initial_const = initial_const
self.batch_size = batch_size
self.clip_min = clip_min
self.clip_max = clip_max
self.model = model
self.sub_model = Model(inputs=model.input, outputs=model.layers[-2].output)
self.optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
self.repeat = binary_search_steps >= 10
self.shape = tuple([batch_size] + list(shape))
self.modifier = tf.Variable(np.zeros(self.shape, dtype=np.dtype('float32')))
def ZERO():
return np.asarray(0., dtype=np.dtype('float32'))
def attack(self, images, targets):
"""
Perform the L_2 attack on the given instance for the given targets.
If self.targeted is true, then the targets represents the target labels
If self.targeted is false, then targets are the original class labels
"""
r = []
for i in range(0, images.shape[0], self.batch_size):
tf.print('Processing {} - {} inputs'.format(i+1, i+self.batch_size))
r.extend(self.attack_batch(images[i:i + self.batch_size],
targets[i:i + self.batch_size]))
success_idx = np.where(np.argmax(self.model(np.array(r)), axis=1) != np.argmax(targets, axis=1))[0]
print("SUCCESS:", len(success_idx))
return np.array(r)[success_idx], targets[success_idx]
def attack_batch(self, imgs, labs):
def compare(x, y):
if not isinstance(x, (float, int, np.int64)):
x = np.copy(x)
if self.TARGETED:
x[y] -= self.CONFIDENCE
else:
x[y] += self.CONFIDENCE
x = tf.argmax(x)
if self.TARGETED:
return x == y
else:
return x != y
batch_size = self.batch_size
oimgs = np.clip(imgs, self.clip_min, self.clip_max)
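# change of variables: map the inputs to [0, 1], then to [-1, 1], then through
# arctanh so the attack can optimize a perturbation without box constraints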
imgs = (imgs - self.clip_min) / (self.clip_max - self.clip_min)
imgs = np.clip(imgs, 0, 1)
imgs = (imgs * 2) - 1
imgs = np.arctanh(imgs * .999999)
import json
import os
import sys
import time
from pathlib import Path
from datetime import datetime
import click
import numpy as np
import torch
from torch import optim
from torch import autograd
from taskspec.data import Vocab, Dataset
from taskspec.model import MultiClassModel
from taskspec.utils import Config
def compute_accuracy(x, y):
return torch.mean((x == y).float())
def evaluate(model, dataset, report, device):
model.eval()
accuracies = []
losses = []
for labels, tokens, mask in dataset.batches():
labels = labels.to(device)
tokens = tokens.to(device)
mask = mask.to(device)
result = model(tokens, mask, label=labels, predict=True)
losses.append(float(result['loss']))
accuracy = compute_accuracy(result['pred'], labels)
accuracies.append(float(accuracy))
report['dev_loss'] = np.mean(losses)
# coding: utf-8
# In[3]:
print("Hello bye");
row=list();
ft=open("train.csv");
data=ft.read();
print(data);
# In[10]:
import numpy as np;
from numpy.linalg import inv;
from numpy.linalg import det;
import math;
trainDSSizePercentage=0.7; # fraction of the data used for training; the remaining 1-x is used for validation
# Will read the file and convert it into two dataset one train data other validate data
def readTrainData(fileName):
row_index=0;
phi=list();
y=list();
with open(fileName) as f:
for line in f:
if row_index >0:
phi_i=list((float(n) for n in line.split('\n')[0].split(",") ));
phi_i[0]=1;
# last row is value of yi
y_i=phi_i.pop(len(phi_i)-1);
phi.append(phi_i);
y.append(y_i);
row_index+=1;
return [phi,y];
#End-readTrainData
# Will read the file and convert it into dataset for Testing the Model
def readTestData(fileName):
row_index=0;
phi=list();
y=list();
with open(fileName) as f:
for line in f:
if row_index >0:
phi_i=list((float(n) for n in line.split('\n')[0].split(",") ));
phi_i[0]=1;
phi.append(phi_i);
row_index+=1;
m=len(phi);
return phi;
#End-readTrainData
#split train data into Train and Validate
def spitTrainDataset(phi,y):
m=len(phi);
tdsSize=int(m*trainDSSizePercentage);
trainDatasetPhi=phi[0:tdsSize];
trainDatasetY=y[0:tdsSize];
validateDatasetPhi=phi[tdsSize:m];
validateDatasetY=y[tdsSize:m];
return [trainDatasetPhi,trainDatasetY,validateDatasetPhi,validateDatasetY];
pass
#write-output
def writeTestData(ystar):
fo = open("output.csv", "w");
fo.write("ID,MEDV\n");
m=len(ystar);
for i in range(m):
fo.write(str(i)+","+str(ystar[i])+"\n");
fo.close();
pass;
# Return det of matrix
def getDet(A):
d=det(A);
if(d<10**-10):
return 0;
return d;
#Return RMS: root mean square error
def getRMS(y,yStar):
m=len(y);
sigma=0;
for i in range(m):
delta=(y[i]-yStar[i]);
delta=delta*delta;
sigma=sigma+delta;
meanSq=sigma/m;
rms=math.sqrt(meanSq);
return rms;
pass;
#For ploting graph of RMS VS Iteration
def plotGraph(x,y):
import matplotlib.pyplot as plt;
plt.plot(x,y)
plt.ylabel('rms')
plt.xlabel('iteration');
plt.show();
pass;
#Record readings for gradient descent
def writeReadingInFile(filename,alpha,lam,iteration,rms,p):
import os.path;
import datetime;
import time;
ts = datetime.datetime.fromtimestamp(time.time()).strftime('%d-%m-%Y %H:%M:%S')
if(os.path.exists(filename)==False):
fo = open(filename, "w");
fo.write("iteration,norm,alpha,lam,rms,timestamp\n");
fo.write(str(iteration)+","+str(p)+","+str(alpha)+","+str(lam)+","+str(rms)+","+str(ts)+"\n");
else:
fo = open(filename, "a");
fo.write(str(iteration)+","+str(p)+","+str(alpha)+","+str(lam)+","+str(rms)+","+str(ts)+"\n");
fo.close();
pass;
#normalize the data set ny (x-u)/s where s is max-min
def normalizePhi(unNormalizedPhi):
phi=np.array(unNormalizedPhi);
print("Normalizing Phi...");
std=phi.std(0);
mean=phi.mean(0);
std[0]=1;
mean[0]=0;
phi_normalize=(phi-mean)/std;
print("Normalization done.");
return phi_normalize;
pass;
#predict y* given w*: phi @ w = y*
def pridict(dataset,weight):
phi=np.array(dataset);
w=np.array(weight);
ystar=np.dot(phi,w);
return ystar;
pass;
# Finding w*=(QTQ)^-1QTY
def trainUsingClosedFormEquation(dataset,output):
m=len(dataset);
n=len(dataset[0]);
print("------------------");
#print(dataset);
phi=np.array(dataset);
print("------------------");
#print(phi);
y=np.array(output);
phiT=np.transpose(phi);
#(QTQ)
phiT_phi=np.dot(phiT,phi);
d=getDet(phiT_phi)
if(d>0):
#(QTQ)^-1
phiT_phi_inv=inv(phiT_phi);
#(QTQ)^-1QT
phiT_phi_inv_phiT=np.dot(phiT_phi_inv,phiT);
#(QTQ)^-1QT*Y
w=np.dot(phiT_phi_inv_phiT,y);
return w;
else:
print("Error:Phi is NOT full column rank.");
return None;
pass;
# Finding w*=(QTQ+lamI)^-1QTY
def trainUsingClosedFormRidgeEq(dataset,output):
m=len(dataset);
n=len(dataset[0]);
phi=np.array(dataset);
y=np.array(output);
phiT=np.transpose(phi);
#(QTQ)
phiT_phi=np.dot(phiT,phi);
n=len(phiT_phi);
lam=0.3;
I=np.identity(n);
lamI=lam*I;
d=getDet(phiT_phi)
#--------------------------------------
if(d>0):
#(QTQ+lamI)^-1
phiT_phi_inv=inv((phiT_phi+lamI));
#(QTQ+lamI)^-1QT
phiT_phi_inv_phiT=np.dot(phiT_phi_inv,phiT);
#(QTQ+lamI)^-1QT*Y
w=np.dot(phiT_phi_inv_phiT,y);
return w;
else:
print("Error:Phi is NOT full column rank.");
return None;
pass;
def numpiTestFun():
A2= np.matrix([[4,6],[2,8]])
A3= np.matrix([[1,2,3],[4,5,7],[7,8,9]])
A=A2;
print(A);
print(np.power(A,0.5));
print(A);
print("Det(A):"+str(getDet(A)));
B= np.transpose(A);
C=inv(A);
#print(C);
print(np.dot(A,C));
print(A.std(0));
print(A.mean(0));
print(normalizePhi(A));
norm=(A-A.mean(0))/A.std(0);
print(norm);
print();
pass;
def mainClosedFormSol():
#--------------------[Closed Form Sol without Regularization]--------------------------------
#Find w*
wStar=trainUsingClosedFormEquation(trainDatasetPhi,trainDatasetY);
#Predict y* for Validate Data
ystar=pridict(validateDatasetPhi,wStar);
#checking for RMS for Validate Data
rms=getRMS(validateDatasetY,ystar);
#Predict y* for TestData
ystar=pridict(testDS_norm,wStar);
writeTestData(ystar);
print("ClosedFormSolWithoutReg RMS:",rms);
#---------------------------------------------------------------------------------------------
pass;
def mainRidgeClosedFormSol():
#--------------------[Closed Form Sol with Ridge Regularization]--------------------------------
#Find w*
wStar=trainUsingClosedFormRidgeEq(trainDatasetPhi,trainDatasetY);
#Predict y* for Validate Data
ystar=pridict(validateDatasetPhi,wStar);
#checking for RMS for Validate Data
rms=getRMS(validateDatasetY,ystar);
#Predict y* for TestData
ystar=pridict(testDS_norm,wStar);
writeTestData(ystar);
print("ClosedFormSolWithoutReg RMS:",rms);
#---------------------------------------------------------------------------------------------
pass;
# In[12]:
# GD: Least Sq. Without Regularlization
def gardientDescentErrorFun(phi,y):
m=len(y);#no of data points
n=len(phi[0]);# no. of features
alpha=0.22;# learning parameter
maxIteration=10000;
phi=np.array(phi);
y=(np.array(y));#converting row vector to col vector
wk0=np.zeros(n);# Nx1 vector
phiT=np.transpose(phi);
phiTphi=np.dot(phiT,phi);
phiTy=np.dot(phiT,y);
alphaBym=alpha/m;
xaxis=list();
yaxis=list();
#----------------------
print("Training Started (Least Sq. Without Regularlization) ...");
for i in range(maxIteration):
wk1=wk0-(alphaBym*((np.dot(phiTphi,wk0)-phiTy)));
ystar=pridict(phi,wk1);
rms=getRMS(y,ystar);
xaxis.append(i);
yaxis.append(rms);
percentComplete=((i+1)*100)/maxIteration;
if( percentComplete%10==0 ):
print("Percent Completed",percentComplete);
wk0=wk1;
print("Final Trained RMS:",rms);
plotGraph(xaxis,yaxis);
return wk1;
pass;
# GD: Least Sq. With Ridges
def gardientDescentWithRidge(phi,y):
m=len(y);#no of data points
n=len(phi[0]);# no. of features
alpha=0.212;# learning parameter
maxIteration=10000;
phi=np.array(phi);
y=(np.array(y));#converting row vector to col vector
wk0=np.zeros(n);# Nx1 vector
#wk0=phi[14];#14
phiT=np.transpose(phi);
phiTphi=np.dot(phiT,phi);
phiTy=np.dot(phiT,y);
alphaBym=alpha/m;
lam=0.301;
xaxis=list();
yaxis=list();
algFixedIteration=False;
logReading=True;
diff=0;
#-----------------------------------------------------------------
#Best Tested Constant
#aplha=.212 lamda=.301 datasie=0.7 o/p=4.8310 rms
#Tried for different initial wk0 but o/p remain same
#-----------------------------------------------------------------
print("Training Started (Least Sq. With Ridge) ...");
if (algFixedIteration):
for iteration in range(0,maxIteration):
wk1=wk0-(alphaBym*((np.dot(phiTphi,wk0)-phiTy)+(lam*wk0)));
ystar=pridict(phi,wk1);
rms=getRMS(y,ystar);
xaxis.append(iteration);
yaxis.append(rms);
percentComplete=((iteration+1)*100)/maxIteration;
if( percentComplete%10==0 ):
print("Percent Completed",percentComplete);
wk0=wk1;
else:
diffOffset=1e-20;
iteration=0;
oldRms=0;
voldRms=0;
while (True):
wk1=wk0-(alphaBym*((np.dot(phiTphi,wk0)-phiTy)+(lam*wk0)));
ystar=pridict(phi,wk1);
rms=getRMS(y,ystar);
xaxis.append(iteration);
yaxis.append(rms);
diff=oldRms-rms;
vystar=pridict(validateDatasetPhi,wk1);
vrms=getRMS(validateDatasetY,vystar);
vdiff=voldRms-vrms;
if(iteration>0 and diff<diffOffset):
break;
if(False and iteration%100==0 ):
print("# iteration: ",iteration," rms:",rms,"diff:",diff," vrms:",vrms," vdiff:", vdiff);
wk0=wk1;
oldRms=rms;
voldRms=vrms;
iteration+=1;
print("# iteration: ",iteration," rms:",rms,"diff:",diff," vrms:",vrms," vdiff:", vdiff);
print("Final Trained RMS:",rms ,". Iteration needed ", iteration);
#-------------------------------------------------------------
if(logReading):
writeReadingInFile("ridge.csv",alpha,lam,iteration,rms,2);
plotGraph(xaxis,yaxis);
return wk1;
# GD: Least Sq. With ||w||_(1.5)^(1.5)
def gardientDescentWithPnom(phi,y,p):
m=len(y);#no of data points
n=len(phi[0]);# no. of features
alpha=0.2 #learning parameter
maxIteration=100000;
phi=np.array(phi);
y=(np.array(y));#converting row vector to col vector
wk0=np.zeros(n);# Nx1 vector
wk0=phi[1];
phiT=np.transpose(phi);
phiTphi=np.dot(phiT,phi);
phiTy=np.dot(phiT,y);
alphaBym=alpha/m;
lam=0.31;
xaxis=list();
yaxis=list();
algFixedIteration=False;
logReading=True;
diff=0;
wPow=p-1;
if (p<=1):
print("Error: norm p is less than 1 i.p p=",wPow);
return None;
#-----------------------------------------------------------------
print("Training Started (Least Sq. With Ridge) ...");
if (algFixedIteration):
for iteration in range(0,maxIteration):
if (wPow>1):
wk0Pow=np.power(wk0,wPow);
import copy
import warnings
import pandas as pd
import numpy as np
from scipy.optimize import fmin
import os
CWD = os.path.dirname(os.path.abspath(__file__))
class FFD(object):
"""Flare frequency distribution.
alpha and beta refer to a power law that
can be used to model the FFD.
dN/dE = beta * E^(-alpha)
N - number of flares
E - energy or equivalent duration
Attributes:
-----------
f : DataFrame
flare table in the FlareLightCurve.flares format
with extra columns for flare target identifiers
alpha : float
power law exponent
alpha_err : float
power law exponent uncertainty
beta : float
power law intercept
beta_err : float
power law intercept uncertainty
tot_obs_time: float
total observing time during which
the flares in f were detected
ID : str
column name in f for the flare target identifier
ed : array
EDs in cumulative FFD, sorted
freq : array
frequencies of EDs in cumulative FFD, sorted like ed
count_ed : array
frequency adjusted ed sample
multiple_stars : bool
True when ed_and_freq was called with multiple_stars
flag set
"""
def __init__(self, f=None, alpha=None, alpha_err=None,
beta=None, beta_err=None, tot_obs_time=1.,
ID=None, multiple_stars=False):
self.f = f
self.alpha = alpha
self.alpha_err = alpha_err
self.beta = beta
self.beta_err = beta_err
self.tot_obs_time = tot_obs_time
self._ed = None
self._freq = None
self._count_ed = None
self.ID = ID
self._multiple_stars = multiple_stars
# Set all the setters and getters for attributes
# that only methods should be allowed to change:
@property
def multiple_stars(self):
return self._multiple_stars
@multiple_stars.setter
def multiple_stars(self, multiple_stars):
print(f"Setting multiple_stars flag with {multiple_stars}.")
self._multiple_stars = multiple_stars
@property
def ed(self):
return self._ed
@ed.setter
def ed(self, ed):
print(f"Setting ED with new values, size {len(ed)}.")
self._ed = ed
@property
def freq(self):
return self._freq
@freq.setter
def freq(self, freq):
print(f"Setting frequency values with new values, size {len(freq)}.")
self._freq = freq
@property
def count_ed(self):
return self._count_ed
@count_ed.setter
def count_ed(self, count_ed):
print(f"Setting frequency adjusted count values "
f"with new values, size {len(count_ed)}.")
self._count_ed = count_ed
# -----------------------------------------------------------------------
def ed_and_freq(self, energy_correction=False,
recovery_probability_correction=False,
multiple_stars=False):
"""Take the flare table and return the FFD with
different or no corrections. tot_obs_time is used to
convert counts to frequencies and defines its unit.
Parameters:
------------
energy_correction: bool, default False
use ed_corr instead of ed_rec
recovery_probability_correction: bool, default False
multiply inverse recovery probabilities instead
of assuming the recovery probability was 1
multiple_stars: bool, default False
apply a first order approximation to account
for the effects of stacking FFDs of stars with
different detection thresholds
Return:
-------
ed, freq, count_ed - equivalent durations and corresponding
cumulative frequencies, and frequency
adjusted event sample. See `_ed_and_counts`
method for details.
"""
# Convert human readable cases to keywords
if ((energy_correction is False) &
(recovery_probability_correction is False)):
key = "no_corr"
elif ((energy_correction is True) &
(recovery_probability_correction is False)):
key = "ed_corr"
elif ((energy_correction is True) &
(recovery_probability_correction is True)):
key = "edrecprob_corr"
else:
raise KeyError("This set of parameters for energy "
"correction, recovery probability "
"correction is not implemented. You must"
" set energy_correction=True if you wish to "
"set recovery_probability_correction=True.")
return self._ed_and_counts(key, multiple_stars)
def _ed_and_counts(self, key, multiple_stars):
"""Sub function to ed_and_func.
Parameters:
------------
key : str
defines type of correction to apply to FFD
multiple_stars: bool
if True will use a first order approximation to
account for stacking FFDs of multiple stars
Return:
-------
ed, freq, count_ed - equivalent durations and corresponding
cumulative frequencies, and frequency
adjusted event sample
"""
# df, ID, col are flare table, identifier column name in df,
# and column name for the ED array in df in each of the
# functions below.
# Each function return two arrays: sorted flare EDs or energies,
# and their respective frequencies.
def cum_dist(df, col, ID):
"""simple cumulative distribution."""
return (np.arange(1, df[col].shape[0] + 1, 1) / self.tot_obs_time,
np.ones_like(df[col].values))
def get_msf_cum_dist(df, col, ID):
"""simple cumulative distribution
accounting for multiple stars with different
detection thresholds in FFDs"""
freq = _get_multistar_factors(df, ID, col)
self.multiple_stars = True
return (np.cumsum(1 / freq) / self.tot_obs_time,
1 / freq)
def cum_dist_rec_prob(df, col, ID):
"""cumulative distribution accounting for
recovery probabilities of individual flares"""
freq = (np.cumsum(1. / df.recovery_probability.values) /
self.tot_obs_time)
return freq, 1. / df.recovery_probability.values
def get_msf_cumdist_recprob(df, col, ID):
"""cumulative distribution accounting for
recovery probabilities of individual flares
and multiple stars with different detection
thresholds in FFDs"""
freq_ = _get_multistar_factors(df, ID, col)
self.multiple_stars = True
cfreq = (np.cumsum(1. / df.recovery_probability.values / freq_) /
self.tot_obs_time)
return cfreq, 1. / df.recovery_probability.values / freq_
# Different keys call different correction procedures
vals = {"no_corr": {False: ["ed_rec", cum_dist],
True: ["ed_rec", get_msf_cum_dist]},
"ed_corr": {False: ["ed_corr", cum_dist],
True: ["ed_corr", get_msf_cum_dist]},
"edrecprob_corr": {False: ["ed_corr", cum_dist_rec_prob],
True: ["ed_corr", get_msf_cumdist_recprob]}
}
# make a copy to sort safely without affecting self.f
df = self.f.copy(deep=True)
# retrieve ED type (corrected or not), and function for counts
col, func = vals[key][multiple_stars]
df = df.sort_values(by=col, ascending=False)
ed = df[col].values # get the right EDs
# get the (corrected) flare counts
freq, counts = func(df, col, self.ID)
self.ed = ed
self.freq = freq
self.count_ed = _get_frequency_corrected_ed_sample(ed, counts)
return self.ed, self.freq, self.count_ed
def fit_beta_to_powerlaw(self, mode="ED"):
'''Fit beta via non-linear least squares to a power
law with given alpha using the cumulative
FFD. Generate uncertainty using jackknife algorithm.
Parameters:
-----------
mode : str
ED or energy will set the starting value for the
least square minimization
Return:
-------
_beta, beta, beta_err - array, float, float
jackknife sample of beta values, mean beta, beta uncertainty
'''
def LSQ(x0, ed, freq, alpha):
zw = ((x0 /
(np.power(ed, alpha - 1.) * (alpha - 1.)) - freq)**2).sum()
return np.sqrt(zw)
N = len(self.ed)
if N == 0:
raise ValueError('No data.')
# jackknife uncertainty
x0starts = {'ED': 10, 'energy': 1e25}
_beta = np.array([fmin(LSQ, x0=x0starts[mode],
args=(np.delete(self.ed, i),
np.delete(self.freq, i),
self.alpha),
disp=0)[0] for i in range(N)])
# cumulative beta = beta_cum
beta = _beta.mean()
beta_err = np.sqrt((N - 1) / N * ((_beta - beta)**2).sum())
# propagate errors on alpha to beta
beta_err = (np.sqrt(beta_err**2 * (self.alpha - 1.)**2 +
beta**2 * self.alpha_err**2))
# set attributes
self.beta = beta
self.beta_err = beta_err
return _beta, self.beta, self.beta_err
def plot_powerlaw(self, ax, custom_xlim=None, **kwargs):
'''
Plot the power law fit to the FFD. [No tests]
Parameters:
-----------
ax : matplotlibe Axes object
plot to insert the power law in to
custom_xlim : 2-tuple
minimum, maximum ED/energy value for power law
kwargs : dict
Keyword arguments to pass to plt.plot()
Return:
--------
3 power law points to construct a line
in log-log representation.
'''
if custom_xlim is None:
x = np.linspace(np.nanmin(self.ed), np.nanmax(self.ed), 3)
else:
mi, ma = custom_xlim
x = np.linspace(mi, ma, 3)
y = self.beta / np.abs(self.alpha - 1.) * np.power(x, -self.alpha + 1.)
a = ax.plot(x, y, **kwargs)
return a, x, y
def fit_powerlaw(self, alims=[1.01, 3.]):
'''
Calculate the un-biased ML power law estimator
from Maschberger and Kroupa (2009), sections
3.1.4. and 3.1.5. by simply minimizing the equation in
ML_powerlaw_estimator.
Parameters:
------------
alims:
parameter range for power law exponent
Return:
-------
alpha, alpha_err - float, float
power law exponent and its jackknife uncertainty
'''
# use frequency adjusted ED sample?
ed = self._get_ed()
# solve eq. 9 using scipy.fmin, define jacknife uncertainty
N = len(ed)
_alpha = np.array([fmin(_ML_powerlaw_estimator, x0=2.,
args=(np.delete(ed, i),), disp=0)[0]
for i in range(N)])
# alpha is the mean value
alpha = _alpha.mean()
# uncertainty is the standard deviation
sig_alpha = np.sqrt((N - 1) / N * ((_alpha - alpha)**2).sum())
self.alpha = alpha
self.alpha_err = sig_alpha
return self.alpha, self.alpha_err
def is_powerlaw_truncated(self, rejection=(.15, .05), nthresh=100):
'''
Apply the exceedance test recommended by
Maschberger and Kroupa 2009.
Parameters:
------------
rejection : tuple of floats < 1.
above these thresholds the distribution
can be suspected to be truncated
nthresh : int
Number at which to use the more permissive
or more restrictive truncation rejection
limit, i.e. value 0 or 1 in `rejection`
Return:
---------
True if power law not consistent with an un-truncated power law
False if power law is consitent with an un-truncated power law
'''
ed = self._get_ed()
mean, std = _calculate_average_number_of_exceeding_values(ed,
self.alpha,
500)
if self.alpha > 2.:
warnings.warn('Power law exponent is steep. '
'Power of statistical tests decreases '
'according to Maschberger and Kroupa 2009.')
if len(ed) >= nthresh:
truncation_limit = rejection[1]
else:
truncation_limit = rejection[0]
truncated = (mean / len(ed) > truncation_limit)
return truncated
def is_powerlaw(self, sig_level=0.05):
'''
Test if we must reject the power law hypothesis
judging by the stabilised Kolmogorov-Smirnov
statistic, suggested by Maschberger and Kroupa
2009.
Parameters:
-----------
sig_level : float < 1.
significance level for the hypothesis test
Returns:
---------
True if we cannot reject the power law hypothesis.
False if we must reject the power law hypothesis.
'''
ed = self._get_ed()
truncated = self.is_powerlaw_truncated()
KS = _stabilised_KS_statistic(ed, alpha=self.alpha,
truncated=truncated)
limit = _calculate_KS_acceptance_limit(len(self.ed),
sig_level=sig_level)
ispowerlaw = KS < limit
if ispowerlaw is False:
warnings.warn('Kolmogorov-Smirnov tells us to reject'
r' the power law hypothesis at p={}.'
' KS={}, limit={}'.format(sig_level, KS, limit))
return ispowerlaw
def _get_ed(self):
"""Get ED array either for a single star sample
or a multiple stars sample, depending on `multiple_stars`
flag.
Return:
-------
ed - sample of flare energies
"""
if self.multiple_stars is True:
ed = self.count_ed
elif self.multiple_stars is False:
ed = self.ed
return ed
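# Example usage sketch (assumes a flare table `flares` in the FlareLightCurve.flares
# format with an 'ed_rec' column; values are illustrative):
#
#   ffd = FFD(f=flares, tot_obs_time=30.)
#   ed, freq, count_ed = ffd.ed_and_freq()
#   alpha, alpha_err = ffd.fit_powerlaw()
#   _, beta, beta_err = ffd.fit_beta_to_powerlaw(mode="ED")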
def _calculate_average_number_of_exceeding_values(data, alpha, n, **kwargs):
'''
Parameters:
-----------
ffd : FFD object
n : int
number of samples to average
kwargs : dict
Keyword arguments to pass to
:func:calculate_number_of_exceeding_values
Returns:
--------
(mean, std) : (float, float)
average number number of exceeding values
and standard deviation
'''
assert alpha is not None
assert data is not None
exceedance_statistic = [_calculate_number_of_exceeding_values(data,
alpha,
**kwargs)
for i in range(n)]
exceedance_statistic = np.array(exceedance_statistic)
return np.nanmean(exceedance_statistic), np.nanstd(exceedance_statistic)
def _calculate_number_of_exceeding_values(data, alpha, maxlim=1e8, **kwargs):
'''
Helper function that mimicks data similar
to the observations (same alpha and size)
and returns a sample from an untruncated
distribution. The number of values that
exceeds the maximum in the actual data is
returned.
Parameters:
-----------
data : array
observed values
alpha : float
best-fit power law exponent to the data
maxlim : float > 1.
factor to simulate an untruncated
version of the given power law
distribution
kwargs : dict
Keyword arguments to pass to
:func:generate_random_power_law_distribution
Return:
--------
int : number of exceeding values
'''
pdist = generate_random_power_law_distribution(np.min(data),
np.max(data),