# Source file: research/object_detection/meta_architectures/faster_rcnn_meta_arch.py (repository: MILAB-yhg/models)
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Faster R-CNN meta-architecture definition.
General tensorflow implementation of Faster R-CNN detection models.
See Faster R-CNN: Ren, Shaoqing, et al.
"Faster R-CNN: Towards real-time object detection with region proposal
networks." Advances in neural information processing systems. 2015.
We allow for three modes: number_of_stages={1, 2, 3}. In the case of 1 stage,
all of the user facing methods (e.g., predict, postprocess, loss) can be used as
if the model consisted only of the RPN, returning class agnostic proposals
(these can be thought of as approximate detections with no associated class
information). In the case of 2 stages, proposals are computed, then passed
through a second stage "box classifier" to yield (multi-class) detections.
Finally, in the case of 3 stages, which is only used during eval, proposals are
computed, then passed through a second stage "box classifier" that computes
refined boxes and classes, and then features are pooled from the refined and
non-maximum suppressed boxes and passed through the box classifier again. If
the number of stages is 3 during training, it will be reduced to two
automatically.
Implementations of Faster R-CNN models must define a new
FasterRCNNFeatureExtractor and override three methods: `preprocess`,
`_extract_proposal_features` (the first stage of the model), and
`_extract_box_classifier_features` (the second stage of the model). Optionally,
the `restore_fn` method can be overridden. See tests for an example.
A few important notes:
+ Batching conventions: We support batched inference and training where
all images within a batch have the same resolution. Batch sizes are determined
dynamically via the shape of the input tensors (rather than being specified
directly in, e.g., the model constructor).
A complication is that due to non-max suppression, we are not guaranteed to get
the same number of proposals from the first stage RPN (region proposal network)
for each image (though in practice, we should often get the same number of
proposals). For this reason we pad to a max number of proposals per image
within a batch. This `self.max_num_proposals` property is set to the
`first_stage_max_proposals` parameter at inference time and the
`second_stage_batch_size` at training time since we subsample the batch to
be sent through the box classifier during training.
For the second stage of the pipeline, we arrange the proposals for all images
within the batch along a single batch dimension. For example, the input to
_extract_box_classifier_features is a tensor of shape
`[total_num_proposals, crop_height, crop_width, depth]` where
total_num_proposals is batch_size * self.max_num_proposals. (And note that per
the above comment, a subset of these entries correspond to zero paddings.)
+ Coordinate representations:
Following the API (see model.DetectionModel definition), our outputs after
postprocessing operations are always normalized boxes; however, internally, we
sometimes convert to absolute coordinates, e.g., for loss computation. In
particular, anchors and proposal_boxes are both represented as absolute
coordinates.
Images are resized in the `preprocess` method.
The Faster R-CNN meta architecture has two post-processing methods
`_postprocess_rpn` which is applied after first stage and
`_postprocess_box_classifier` which is applied after second stage. There are
three different ways post-processing can happen depending on number_of_stages
configured in the meta architecture:
1. When number_of_stages is 1:
`_postprocess_rpn` is run as part of the `postprocess` method where
true_image_shapes is used to clip proposals, perform non-max suppression and
normalize them.
2. When number of stages is 2:
`_postprocess_rpn` is run as part of the `_predict_second_stage` method where
`resized_image_shapes` is used to clip proposals, perform non-max suppression
and normalize them. In this case `postprocess` method skips `_postprocess_rpn`
and only runs `_postprocess_box_classifier` using `true_image_shapes` to clip
detections, perform non-max suppression and normalize them.
3. When number of stages is 3:
`_postprocess_rpn` is run as part of the `_predict_second_stage` using
`resized_image_shapes` to clip proposals, perform non-max suppression and
normalize them. Subsequently, `_postprocess_box_classifier` is run as part of
`_predict_third_stage` using `true_image_shapes` to clip detections, perform
non-max suppression and normalize them. In this case, the `postprocess` method
skips both `_postprocess_rpn` and `_postprocess_box_classifier`.
"""
from abc import abstractmethod
from functools import partial
import tensorflow as tf
from object_detection.anchor_generators import grid_anchor_generator
from object_detection.core import box_list
from object_detection.core import box_list_ops
from object_detection.core import box_predictor
from object_detection.core import losses
from object_detection.core import model
from object_detection.core import post_processing
from object_detection.core import standard_fields as fields
from object_detection.core import target_assigner
from object_detection.predictors import convolutional_box_predictor
from object_detection.utils import ops
from object_detection.utils import shape_utils
slim = tf.contrib.slim
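# Illustrative sketch (added example, not part of the original file): the
# batching convention described in the module docstring. Proposal feature
# crops for a whole batch are flattened along a single leading dimension
# before being passed to the box classifier; the shapes below are arbitrary
# example values.
def _example_batching_convention():
  batch_size, max_num_proposals, crop_size, depth = 2, 100, 14, 512
  # Cropped proposal features: [batch_size, max_num_proposals, crop_h, crop_w, depth].
  crops = tf.zeros([batch_size, max_num_proposals, crop_size, crop_size, depth])
  # Flattened to [total_num_proposals, crop_h, crop_w, depth]; per the module
  # docstring, a subset of these entries may correspond to zero padding.
  return tf.reshape(
      crops, [batch_size * max_num_proposals, crop_size, crop_size, depth])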
class FasterRCNNFeatureExtractor(object):
"""Faster R-CNN Feature Extractor definition."""
def __init__(self,
is_training,
first_stage_features_stride,
batch_norm_trainable=False,
reuse_weights=None,
weight_decay=0.0):
"""Constructor.
Args:
is_training: A boolean indicating whether the training version of the
computation graph should be constructed.
first_stage_features_stride: Output stride of extracted RPN feature map.
batch_norm_trainable: Whether to update batch norm parameters during
        training or not. When training with a relatively large batch size
(e.g. 8), it could be desirable to enable batch norm update.
reuse_weights: Whether to reuse variables. Default is None.
weight_decay: float weight decay for feature extractor (default: 0.0).
"""
self._is_training = is_training
self._first_stage_features_stride = first_stage_features_stride
self._train_batch_norm = (batch_norm_trainable and is_training)
self._reuse_weights = reuse_weights
self._weight_decay = weight_decay
@abstractmethod
def preprocess(self, resized_inputs):
"""Feature-extractor specific preprocessing (minus image resizing)."""
pass
def extract_proposal_features(self, preprocessed_inputs, scope):
"""Extracts first stage RPN features.
This function is responsible for extracting feature maps from preprocessed
images. These features are used by the region proposal network (RPN) to
predict proposals.
Args:
preprocessed_inputs: A [batch, height, width, channels] float tensor
representing a batch of images.
scope: A scope name.
Returns:
rpn_feature_map: A tensor with shape [batch, height, width, depth]
activations: A dictionary mapping activation tensor names to tensors.
"""
with tf.variable_scope(scope, values=[preprocessed_inputs]):
return self._extract_proposal_features(preprocessed_inputs, scope)
@abstractmethod
def _extract_proposal_features(self, preprocessed_inputs, scope):
"""Extracts first stage RPN features, to be overridden."""
pass
def extract_box_classifier_features(self, proposal_feature_maps, scope):
"""Extracts second stage box classifier features.
Args:
proposal_feature_maps: A 4-D float tensor with shape
[batch_size * self.max_num_proposals, crop_height, crop_width, depth]
representing the feature map cropped to each proposal.
scope: A scope name.
Returns:
proposal_classifier_features: A 4-D float tensor with shape
[batch_size * self.max_num_proposals, height, width, depth]
representing box classifier features for each proposal.
"""
with tf.variable_scope(
scope, values=[proposal_feature_maps], reuse=tf.AUTO_REUSE):
return self._extract_box_classifier_features(proposal_feature_maps, scope)
@abstractmethod
def _extract_box_classifier_features(self, proposal_feature_maps, scope):
"""Extracts second stage box classifier features, to be overridden."""
pass
def restore_from_classification_checkpoint_fn(
self,
first_stage_feature_extractor_scope,
second_stage_feature_extractor_scope):
"""Returns a map of variables to load from a foreign checkpoint.
Args:
first_stage_feature_extractor_scope: A scope name for the first stage
feature extractor.
second_stage_feature_extractor_scope: A scope name for the second stage
feature extractor.
Returns:
A dict mapping variable names (to load from a checkpoint) to variables in
the model graph.
"""
variables_to_restore = {}
for variable in tf.global_variables():
for scope_name in [first_stage_feature_extractor_scope,
second_stage_feature_extractor_scope]:
if variable.op.name.startswith(scope_name):
var_name = variable.op.name.replace(scope_name + '/', '')
variables_to_restore[var_name] = variable
return variables_to_restore
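# Minimal sketch (added example, not from the original codebase) of how a
# concrete implementation overrides the three methods named in the module
# docstring. The single slim.conv2d layers are placeholders standing in for a
# real backbone such as ResNet or Inception.
class _ExampleDummyFeatureExtractor(FasterRCNNFeatureExtractor):
  """Toy feature extractor used purely for illustration."""

  def preprocess(self, resized_inputs):
    # Scale pixel values from [0, 255] to [-1, 1].
    return (2.0 / 255.0) * resized_inputs - 1.0

  def _extract_proposal_features(self, preprocessed_inputs, scope):
    with tf.variable_scope('ExampleProposalFeatures'):
      rpn_feature_map = slim.conv2d(preprocessed_inputs, 64, 3, scope='conv')
    return rpn_feature_map, {'conv': rpn_feature_map}

  def _extract_box_classifier_features(self, proposal_feature_maps, scope):
    with tf.variable_scope('ExampleBoxClassifierFeatures'):
      return slim.conv2d(proposal_feature_maps, 128, 3, scope='conv')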
class FasterRCNNMetaArch(model.DetectionModel):
"""Faster R-CNN Meta-architecture definition."""
def __init__(self,
is_training,
num_classes,
image_resizer_fn,
feature_extractor,
number_of_stages,
first_stage_anchor_generator,
first_stage_target_assigner,
first_stage_atrous_rate,
first_stage_box_predictor_arg_scope_fn,
first_stage_box_predictor_kernel_size,
first_stage_box_predictor_depth,
first_stage_minibatch_size,
first_stage_sampler,
first_stage_nms_score_threshold,
first_stage_nms_iou_threshold,
first_stage_max_proposals,
first_stage_localization_loss_weight,
first_stage_objectness_loss_weight,
initial_crop_size,
maxpool_kernel_size,
maxpool_stride,
second_stage_target_assigner,
second_stage_mask_rcnn_box_predictor,
second_stage_batch_size,
second_stage_sampler,
second_stage_non_max_suppression_fn,
second_stage_score_conversion_fn,
second_stage_localization_loss_weight,
second_stage_classification_loss_weight,
second_stage_classification_loss,
second_stage_mask_prediction_loss_weight=1.0,
hard_example_miner=None,
parallel_iterations=16,
add_summaries=True,
use_matmul_crop_and_resize=False,
clip_anchors_to_image=False):
"""FasterRCNNMetaArch Constructor.
Args:
is_training: A boolean indicating whether the training version of the
computation graph should be constructed.
num_classes: Number of classes. Note that num_classes *does not*
include the background category, so if groundtruth labels take values
in {0, 1, .., K-1}, num_classes=K (and not K+1, even though the
assigned classification targets can range from {0,... K}).
      image_resizer_fn: A callable for image resizing. This callable takes a
        rank-3 image tensor of shape [height, width, channels] (corresponding
        to a single image) and an optional rank-3 instance mask tensor of
        shape [num_masks, height, width], and returns a resized rank-3 image
        tensor and a resized mask tensor if one was provided in the input. In
        addition, this callable must also return a 1-D tensor of the form
        [height, width, channels] containing the size of the true image, as
        the image resizer can perform zero padding. See
        protos/image_resizer.proto.
feature_extractor: A FasterRCNNFeatureExtractor object.
      number_of_stages: An integer taking values in {1, 2, 3}. If
command=lambda: af.append_digit5(self))
self.action5.grid(column=1, row=2, padx=4, pady=2)
self.action6 = ttk.Button(self.inKeys, text=" 6 ", takefocus=False, command=lambda: af.append_digit6(self))
self.action6.grid(column=2, row=2, padx=4, pady=2)
        # Adding digit entry buttons 7 to 9
self.action7 = ttk.Button(self.inKeys, text=" 7 ", takefocus=False, command=lambda: af.append_digit7(self))
self.action7.grid(column=0, row=4, padx=4, pady=2)
self.action8 = ttk.Button(self.inKeys, text=" 8 ", takefocus=False, command=lambda: af.append_digit8(self))
self.action8.grid(column=1, row=4, padx=4, pady=2)
self.action9 = ttk.Button(self.inKeys, text=" 9 ", takefocus=False, command=lambda: af.append_digit9(self))
self.action9.grid(column=2, row=4, padx=4, pady=2)
# Special Digit keys
self.action_pi = ttk.Button(self.inKeys, text=" \u03C0 ", takefocus=False, command=lambda: af.get_pi(self))
self.action_pi.grid(column=0, row=6, padx=4, pady=2)
# Associated tool tip
enterPIDescr = 'Enters the python internal PI value into x,\n moving the previous x to y'
createToolTip(self.action_pi, enterPIDescr)
self.action0 = ttk.Button(self.inKeys, text=" 0 ", takefocus=False, command=lambda: af.append_digit0(self))
self.action0.grid(column=1, row=6, padx=4, pady=2)
self.action_e = ttk.Button(self.inKeys, text=" e ", takefocus=False, command=lambda: af.get_e(self))
        self.action_e.grid(column=2, row=6, padx=4, pady=2)
        # Associated tool tip
        enterEDescr = 'Enters the python internal e (Euler\'s number) value\n into x, moving the previous x to y'
createToolTip(self.action_e, enterEDescr)
self.action_minSgn = ttk.Button(self.inKeys, text=" - ", takefocus=False, command=lambda: af.append_minSgn(self))
self.action_minSgn.grid(column=0, row=7, padx=4, pady=2)
self.actiondec = ttk.Button(self.inKeys, text=" . ", takefocus=False, command=lambda: af.append_dec(self))
self.actiondec.grid(column=1, row=7, padx=4, pady=2)
self.action_comma = ttk.Button(self.inKeys, text=" , ", takefocus=False, command=lambda: af.append_comma(self))
self.action_comma.grid(column=2, row=7, padx=4, pady=2)
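        # Optional refactoring sketch (added example, not part of the original
        # code): the repetitive digit-button definitions above could be
        # generated in a loop. The af.append_digit* handlers and createToolTip
        # are the helpers already used in this file; the grid positions for
        # digits 1-4 are assumed to mirror the visible pattern and may differ
        # from the truncated code above.
        #   digit_layout = {1: (0, 0), 2: (1, 0), 3: (2, 0),
        #                   4: (0, 2), 5: (1, 2), 6: (2, 2),
        #                   7: (0, 4), 8: (1, 4), 9: (2, 4),
        #                   0: (1, 6)}
        #   for digit, (col, row) in digit_layout.items():
        #       handler = getattr(af, 'append_digit{}'.format(digit))
        #       btn = ttk.Button(self.inKeys, text=' {} '.format(digit),
        #                        takefocus=False, command=lambda h=handler: h(self))
        #       btn.grid(column=col, row=row, padx=4, pady=2)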
'''
Populate function keys frames
* Discrete valued (x,y) functions
* List Math Functions
* List Statistics Functions
'''
# ####################################
# Discrete valued x,y Functions Keys defined
# ####################################
self.xyFunctKeys = ttk.LabelFrame(tab1, text=' x,y Function Keys ')
self.xyFunctKeys.grid(column=0, row=18, padx=8, pady=8)
self.action_add = ttk.Button(self.xyFunctKeys, text=" y + x ", command=lambda: af.do_add(self))
self.action_add.grid(column=0, row=0, padx=4, pady=6)
# Associated tool tip
additionDescr = 'Add x and y result to x'
createToolTip(self.action_add, additionDescr)
self.action_subt = ttk.Button(self.xyFunctKeys, text=" y - x ", command=lambda: af.do_subt(self))
self.action_subt.grid(column=1, row=0, padx=4, pady=6)
# Associated tool tip
subtractionDescr = 'Subtract x from y result to x'
createToolTip(self.action_subt, subtractionDescr)
self.action_mult = ttk.Button(self.xyFunctKeys, text=" y * x ", command=lambda: af.do_mult(self))
self.action_mult.grid(column=2, row=0, padx=4, pady=6)
# Associated tool tip
multiplicationDescr = 'Multiply x and y result to x'
createToolTip(self.action_mult, multiplicationDescr)
self.action_div = ttk.Button(self.xyFunctKeys, text=" y / x ", command=lambda: af.do_div(self))
self.action_div.grid(column=3, row=0, padx=4, pady=6)
# Associated tool tip
divisionDescr = 'Divide x into y result to x'
createToolTip(self.action_div, divisionDescr)
self.action_switchxy = ttk.Button(self.xyFunctKeys, text=" y \u2194 x ", command=lambda: af.do_switchxy(self))
self.action_switchxy.grid(column=4, row=0, padx=4, pady=6)
# Associated tool tip
switchDescr = 'Switch x and y values internally'
createToolTip(self.action_switchxy, switchDescr)
self.action_sgn = ttk.Button(self.xyFunctKeys, text="+/-", command=lambda: af.do_sgn(self))
self.action_sgn.grid(column=0, row=1, padx=4, pady=6)
# Associated tool tip
chgSgnDescr = 'Change sign of x in memory echo result to x field.\n NOT necessary to press ENTERx'
createToolTip(self.action_sgn, chgSgnDescr)
self.action_inverse = ttk.Button(self.xyFunctKeys, text=" 1/x ", command=lambda: af.do_invert(self))
self.action_inverse.grid(column=1, row=1, padx=4, pady=6)
# Associated tool tip
invertXDescr = 'Invert X in memory result to x.'
createToolTip(self.action_inverse, invertXDescr)
self.action_power2 = ttk.Button(self.xyFunctKeys, text=" x\u00B2 ", command=lambda: af.do_power2(self))
self.action_power2.grid(column=2, row=1, padx=4, pady=6)
# Associated tool tip
squareXDescr = 'Square X in memory result to x.'
createToolTip(self.action_power2, squareXDescr)
self.action_xpowy = ttk.Button(self.xyFunctKeys, text=" y\u207F ", command=lambda: af.do_xpowy(self))
self.action_xpowy.grid(column=3, row=1, padx=4, pady=6)
# Associated tool tip
xpowyDescr = 'x to power y result to x.\n'
createToolTip(self.action_xpowy, xpowyDescr)
self.action_sqrt = ttk.Button(self.xyFunctKeys, text=" \u221Ax", command=lambda: af.do_sqrt(self))
self.action_sqrt.grid(column=4, row=1, padx=4, pady=6)
# Associated tool tip
sqrtDescr = 'Take square root of x result to x.\n'
createToolTip(self.action_sqrt, sqrtDescr)
self.action_cos = ttk.Button(self.xyFunctKeys, text="cos x", command=lambda: af.do_cos(self))
self.action_cos.grid(column=0, row=2, padx=4, pady=6)
# Associated tool tip
cosDescr = 'Take Cosine of x result to x.\n'
createToolTip(self.action_cos, cosDescr)
self.action_sin = ttk.Button(self.xyFunctKeys, text="sin x", command=lambda: af.do_sin(self))
self.action_sin.grid(column=1, row=2, padx=4, pady=6)
# Associated tool tip
sinDescr = 'Take Sine of x result to x.\n'
createToolTip(self.action_sin, sinDescr)
self.action_tan = ttk.Button(self.xyFunctKeys, text="tan x", command=lambda: af.do_tan(self))
self.action_tan.grid(column=2, row=2, padx=4, pady=6)
# Associated tool tip
tanDescr = 'Take tan of x result to x.\n'
createToolTip(self.action_tan, tanDescr)
self.action_acos = ttk.Button(self.xyFunctKeys, text="acos x", command=lambda: af.do_acos(self))
self.action_acos.grid(column=3, row=2, padx=4, pady=6)
# Associated tool tip
arcCosineDescr = 'Take arcCosine of x result to x.\n'
createToolTip(self.action_acos, arcCosineDescr)
self.action_asin = ttk.Button(self.xyFunctKeys, text="asin x", command=lambda: af.do_asin(self))
self.action_asin.grid(column=4, row=2, padx=4, pady=6)
# Associated tool tip
arcsinDescr = 'Take arcsine of x result to x.\n'
createToolTip(self.action_asin, arcsinDescr)
self.action_atan = ttk.Button(self.xyFunctKeys, text="atan x", command=lambda: af.do_atan(self))
self.action_atan.grid(column=0, row=3, padx=4, pady=6)
# Associated tool tip
arctanDescr = 'Take arctan of x result to x.\n'
createToolTip(self.action_atan, arctanDescr)
self.action_log10 = ttk.Button(self.xyFunctKeys, text=" log10 x", command=lambda: af.do_log10(self))
self.action_log10.grid(column=1, row=3, padx=4, pady=6)
# Associated tool tip
log10Descr = 'Take base 10 log of x result to x.\n'
createToolTip(self.action_log10, log10Descr)
self.action_ln = ttk.Button(self.xyFunctKeys, text=" ln x ", command=lambda: af.do_ln(self))
self.action_ln.grid(column=2, row=3, padx=4, pady=6)
# Associated tool tip
lnDescr = 'Take natural log of x result to x.\n'
createToolTip(self.action_ln, lnDescr)
self.action_exp = ttk.Button(self.xyFunctKeys, text="exp(x)", command=lambda: af.do_exp(self))
self.action_exp.grid(column=3, row=3, padx=4, pady=6)
# Associated tool tip
expDescr = 'Take exponent (base e to power) of x result to x.\n'
createToolTip(self.action_exp, expDescr)
self.action_factorial = ttk.Button(self.xyFunctKeys, text=" x!", command=lambda: af.do_factorial(self))
self.action_factorial.grid(column=4, row=3, padx=4, pady=6)
# Associated tool tip
factorialDescr = 'Take factorial of x result to x.\n'
createToolTip(self.action_factorial, factorialDescr)
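        # NOTE: the next button's tooltip describes taking the xth root of y,
        # but its command is currently bound to af.do_blank, the same
        # placeholder handler used by the unassigned keys below.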
self.action_xrooty = ttk.Button(self.xyFunctKeys, text="\u207F\u221A y ", command=lambda: af.do_blank(self))
self.action_xrooty.grid(column=0, row=4, padx=4, pady=6)
# Associated tool tip
xrootyDescr = 'Take xth root of y, result to x.\n'
createToolTip(self.action_xrooty, xrootyDescr)
self.action_blank = ttk.Button(self.xyFunctKeys, text=" ", command=lambda: af.do_blank(self))
self.action_blank.grid(column=1, row=4, padx=4, pady=6)
# Associated tool tip
blankDescr = 'Unassigned key'
createToolTip(self.action_blank, blankDescr)
self.action_blank1 = ttk.Button(self.xyFunctKeys, text=" ", command=lambda: af.do_blank(self))
self.action_blank1.grid(column=2, row=4, padx=4, pady=6)
# Associated tool tip
blankDescr = 'Unassigned key.\n'
createToolTip(self.action_blank1, blankDescr)
self.action_blank2 = ttk.Button(self.xyFunctKeys, text=" ", command=lambda: af.do_blank(self))
self.action_blank2.grid(column=3, row=4, padx=4, pady=6)
# Associated tool tip
blankDescr = 'Unassigned key'
createToolTip(self.action_blank2, blankDescr)
# Convert degrees to radians
self.action_deg2rad = ttk.Button(self.xyFunctKeys, text="deg \u2192 rad", command=lambda: af.do_deg2rad(self))
self.action_deg2rad.grid(column=4, row=4, padx=4, pady=6)
# Associated tool tip
deg2radDescr = 'Convert x from degrees to radians result to x.\n'
createToolTip(self.action_deg2rad, deg2radDescr)
# #####################################
# List math functions Keys defined
# #####################################
self.listFunctKeys = ttk.LabelFrame(tab1, text='List Function Keys ')
self.listFunctKeys.grid(column=0, row=18, padx=8, pady=8)
self.action_addL = ttk.Button(self.listFunctKeys, text=" L + x ", command=lambda: af.do_addL(self))
self.action_addL.grid(column=0, row=0, padx=4, pady=6)
# Associated tool tip
addtoListDescr = 'Add x to L element-wise, result to L'
createToolTip(self.action_addL, addtoListDescr)
self.action_subtL = ttk.Button(self.listFunctKeys, text=" L - x ", command=lambda: af.do_subtL(self))
self.action_subtL.grid(column=1, row=0, padx=4, pady=6)
# Associated tool tip
subtfromListDescr = 'Subtract x from L element-wise, result to L'
createToolTip(self.action_subtL, subtfromListDescr)
self.action_multL = ttk.Button(self.listFunctKeys, text=" L * x ", command=lambda: af.do_multL(self))
self.action_multL.grid(column=2, row=0, padx=4, pady=6)
# Associated tool tip
multListDescr = 'Multiply L by x element-wise, result to L'
createToolTip(self.action_multL, multListDescr)
self.action_divL = ttk.Button(self.listFunctKeys, text=" L / x ", command=lambda: af.do_divL(self))
self.action_divL.grid(column=3, row=0, padx=4, pady=6)
# Associated tool tip
divListDescr = 'Divide L by x element-wise, result to L'
createToolTip(self.action_divL, divListDescr)
self.action_sumL = ttk.Button(self.listFunctKeys, text=" \u03a3 L " , command=lambda: af.do_sumL(self))
self.action_sumL.grid(column=0, row=1, padx=4, pady=6)
# Associated tool tip
sumListDescr = 'Sum elements of L, result to L'
createToolTip(self.action_sumL, sumListDescr)
self.action_prodL = ttk.Button(self.listFunctKeys, text=" \u03A0 L ", command=lambda: af.do_prodL(self))
self.action_prodL.grid(column=1, row=1, padx=4, pady=6)
# Associated tool tip
prodListDescr = 'Product of elements of L, result to L'
createToolTip(self.action_prodL, prodListDescr)
self.action_inverseL = ttk.Button(self.listFunctKeys, text=" 1/L ", command=lambda: af.do_invertL(self))
self.action_inverseL.grid(column=2, row=1, padx=4, pady=6)
# Associated tool tip
invListDescr = 'Invert elements of L, result to L'
createToolTip(self.action_inverseL, invListDescr)
self.action_Lpower2 = ttk.Button(self.listFunctKeys, text=" L\u00B2 ", command=lambda: af.do_Lpower2(self))
self.action_Lpower2.grid(column=3, row=1, padx=4, pady=6)
# Associated tool tip
pow2ListDescr = 'Squares of elements of L, result to L'
createToolTip(self.action_Lpower2, pow2ListDescr)
self.action_Lpowx = ttk.Button(self.listFunctKeys, text="L\u207F", command=lambda: af.do_Lpowx(self))
self.action_Lpowx.grid(column=0, row=2, padx=4, pady=6)
# Associated tool tip
xpowListDescr = 'x powers of elements of L, result to L'
createToolTip(self.action_Lpowx, xpowListDescr)
self.action_sqrtL = ttk.Button(self.listFunctKeys, text="\u221AL", command=lambda: af.do_sqrtL(self))
self.action_sqrtL.grid(column=1, row=2, padx=4, pady=6)
# Associated tool tip
        sqrtListDescr = 'Square roots of elements of L, result to L'
# The MIT License (MIT)
#
# Copyright (c) 2016 <NAME>, National Institutes of Health / NINDS
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Top level script for generating supervoxels / running metrics.
# Adapted from top level script for revision of ECS paper.
#
# Ideally this top-level script eventually morphs into some top-level control for EMDRP...
import os, sys
import argparse
import time
import numpy as np
import numpy.ma as ma
import tifffile
import dill
from shutil import copyfile
from gala import evaluate as ev
from scipy import stats
from scipy.stats import mstats
from dpLoadh5 import dpLoadh5
from metrics import warping_error, adapted_rand_error
from metrics import adapted_rand_error_resample_objects_points, adapted_rand_error_resample_objects
from emdrp.utils.typesh5 import emLabels, emProbabilities, emVoxelType
'''
# 2d tiles input parameters
nblocks = 9; ngroups = 4;
params = {
'groups' : ['none', 'small', 'large', 'huge'],
'groups_vals' : [0.6, 6, 11, 24],
'groups_xlim' : [-3,27.5],
'groups_xlbl' : 'Mean Amount of ECS (%)',
'blocks' : range(1,nblocks+1),
'inpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/runs',
#'loadpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/runs/out/20150918',
'loadpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/runs/out/test',
'outpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/runs/out',
'size' : [640, 576, 1], 'offset' : [0, 0, 8], 'lblbits' : 16, 'chunks' : [[[0, 0, 0]]*nblocks]*ngroups,
'thrRng' : [0.6, 0.999, 0.01],
'thrHi' : [0.995, 0.999, 0.9995, 0.9999, 0.99995, 0.99999, 0.999995, 0.999999],
'thrLo' : [0.3, 0.4, 0.5],
'Tmins' : [64],
#'connectivity' : 3,
#'probWatershed' : True,
#'skimWatershed' : True,
'global_mins' : True,
'skeletonize' : False,
'input_data' : 'batch_input_data.h5',
'output_data' : 'batch_output_data.h5',
#'save_file' : 'process_convnet_out_newestECS.allresamps100p_1000.dill',
'save_file' : 'process_convnet_out_newestECS.test.dill',
'gt_name' : '_gt.h5',
'out_name' : '_supervoxels.h5',
'run_watershed' : False,
'sel_watershed' : [],
'run_gt_watershed' : False,
'run_metrics' : False,
#'nReSamples' : 1000,
'nReSamples' : 0,
'sel_metrics' : [],
'make_plots' : True,
'plot_setlims' : True,
'save_plots' : False,
'figno' : 1000,
'export_images' : False,
}
'''
'''
# 3d chunks input parameters
nblocks = 6; ngroups = 2;
params = {
'groups' : ['none', 'huge'],
'groups_vals' : [0.6, 24],
'groups_xlim' : [-3,27.5],
'groups_xlbl' : 'Mean Amount of ECS (%)',
'blocks' : range(1,nblocks+1),
'inpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed',
'loadpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed/out/test',
'outpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed/out',
'size' : [128, 128, 128], 'offset' : [0, 0, 0], 'lblbits' : 16,
'chunks' : [
[[16,17,4], [18,15,3], [13,15,3], [13,20,3], [18,20,3], [18,20,4]], # none ECS
[[19,22,2], [17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2]], # huge ECS
],
'thrRng' : [0.6, 0.999, 0.01],
'thrHi' : [0.995, 0.999, 0.9995, 0.9999, 0.99995, 0.99999, 0.999995, 0.999999],
'thrLo' : [0.3, 0.4, 0.5],
#'Tmins' : [16, 32, 64, 128],
'Tmins' : [64],
'global_mins' : True,
'group_single' : True,
# merge all
'merge_probs' : ['_xyz_0_probs.h5', '_xzy_0_probs.h5', '_zyx_0_probs.h5', '_xyz_1_probs.h5', '_xyz_2_probs.h5'],
'merge_weightings' : [1.0,0.5,0.5,1.0,1.0],
'merge_orderings' : ['xyz','xzy','zyx','xyz','xyz'],
'gt_labels' : [
'M0027_11_labels_briggmankl_watkinspv_33x37x7chunks_Forder.h5',
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
],
'gt_ECS_label' : 1,
'out_name_probs' : '_probs.h5',
'out_name' : '_supervoxels.h5',
'save_file' : 'process_convnet_out_newestECS.test.dill',
'run_merge' : False,
'run_watershed' : False,
'sel_watershed' : [],
'export_raw' : False,
'raw_data' : [
'/Data/big_datasets/M0027_11_33x37x7chunks_Forder.h5',
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
],
'run_metrics' : False,
'sel_metrics' : [],
'make_plots' : True,
'plot_setlims' : True,
'save_plots' : False,
'figno' : 1000,
'export_images' : False,
}
'''
'''
# supervoxels over whole area
nblocks = 1; ngroups = 2;
params = {
'groups' : ['none', 'huge'],
'groups_vals' : [0.6, 24],
'groups_xlim' : [-3,27.5],
'groups_xlbl' : 'Mean Amount of ECS (%)',
'blocks' : range(1,nblocks+1),
#'inpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed',
'inpath' : '/Data/pwatkins/full_datasets/newestECSall',
'loadpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed/out',
'outpath' : '/Data/pwatkins/full_datasets/newestECSall',
'size' : [1024, 1024, 480], 'offset' : [0, 0, 32], 'lblbits' : 32,
#'size' : [64, 64, 64], 'offset' : [0, 0, 32], 'lblbits' : 32, # for test
#'chunks' : [
# [[16,17,4], [18,15,3], [13,15,3], [13,20,3], [18,20,3], [18,20,4]], # none ECS
# [[19,22,2], [17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2]], # huge ECS
# ],
'chunks' : [
[[12,14,2], ], # corner none ECS
[[16,17,0], ], # corner huge ECS
],
#'thrRng' : [0.95, 0.999, 0.01],
#'thrHi' : [0.995, 0.999, 0.9995, 0.9999, 0.99995],
'thrRng' : [0.3, 0.999, 0.1],
'thrHi' : [0.95, 0.975, 0.99, 0.995, 0.999, 0.9995, 0.9999, 0.99995, 0.99999, 0.999995, 0.999999],
'thrLo' : [],
#'Tmins' : [8, 16, 32, 64, 128, 256],
'Tmins' : [256],
'group_single' : True,
# merge all
'merge_probs' : ['_all_xyz_0_probs.h5', '_all_xzy_0_probs.h5', '_all_zyx_0_probs.h5', '_all_xyz_1_probs.h5',
'_all_xyz_2_probs.h5'],
'merge_weightings' : [1.0,0.5,0.5,1.0,1.0],
'merge_orderings' : ['xyz','xzy','zyx','xyz','xyz'],
'gt_labels' : [
'M0027_11_labels_briggmankl_watkinspv_33x37x7chunks_Forder.h5',
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
],
'gt_ECS_label' : 1,
'out_name_probs' : '_probs.h5',
'out_name' : '_supervoxels.h5',
'save_file' : 'process_convnet_out_newestECS.dill',
'run_merge' : False,
'run_watershed' : True,
'sel_watershed' : [],
'export_raw' : False,
'raw_data' : [
'/Data/big_datasets/M0027_11_33x37x7chunks_Forder.h5',
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
],
'run_metrics' : False,
'sel_metrics' : [],
'make_plots' : False,
'plot_setlims' : False,
'save_plots' : False,
'figno' : 1000,
'export_images' : False,
}
'''
'''
# 3d chunks input parameters, compare merging probs with multiple xyz versus ortho dirs
nblocks = 6; ngroups = 3;
params = {
'groups' : ['none', 'none', 'none'],
'groups_vals' : [2,4,6],
'groups_xlim' : [-2,8],
'groups_xlbl' : 'prob merge groups',
'blocks' : range(1,nblocks+1),
'inpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed',
'loadpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed/out/20150817_mergecompare',
'outpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed/out',
'size' : [128, 128, 128], 'offset' : [0, 0, 0], 'lblbits' : 16,
'chunks' : [ [[16,17,4], [18,15,3], [13,15,3], [13,20,3], [18,20,3], [18,20,4]], # none ECS
[[16,17,4], [18,15,3], [13,15,3], [13,20,3], [18,20,3], [18,20,4]], # none ECS
[[16,17,4], [18,15,3], [13,15,3], [13,20,3], [18,20,3], [18,20,4]], # none ECS
],
'thrRng' : [0.5, 1.0, 0.01],
'thrHi' : [0.995, 0.999, 0.9995, 0.9999],
'thrLo' : [],
'group_single' : True,
'merge_probs' : [
['_xyz_0_probs.h5', '_xzy_0_probs.h5', '_zyx_0_probs.h5', '_xyz_1_probs.h5', '_xyz_2_probs.h5'], # merge all
['_xyz_0_probs.h5', '_xzy_0_probs.h5', '_zyx_0_probs.h5'], # merge orthos
['_xyz_0_probs.h5', '_xyz_1_probs.h5', '_xyz_2_probs.h5'], # merge xy's only
],
'merge_weightings' : [
[1.0,0.25,0.25,1.0,1.0], # merge all
#[1.0,0.25,0.25], # merge orthos, xxx - ran it this way the first time by accident
[1.0,0.25,0.25], # merge orthos
[1.0,1.0,1.0], # merge xy's only
],
'merge_orderings' : [
['xyz','xzy','zyx','xyz','xyz'], # merge all
['xyz','xzy','zyx'], # merge orthos
['xyz','xyz','xyz'], # merge xy's only
],
'gt_labels' : ['M0027_11_labels_briggmankl_watkinspv_33x37x7chunks_Forder.h5',
'M0027_11_labels_briggmankl_watkinspv_33x37x7chunks_Forder.h5',
'M0027_11_labels_briggmankl_watkinspv_33x37x7chunks_Forder.h5'],
'gt_ECS_label' : 1,
'out_name_probs' : ['_all5_probs.h5', '_ortho3_probs.h5', '_xyz3_probs.h5'],
'out_name' : ['_all5_supervoxels.h5', '_ortho3_supervoxels.h5', '_xyz3_supervoxels.h5',],
'out_group' : ['_all5', '_ortho3', '_xyz3'],
'save_file' : 'process_convnet_out_newestECS_mergecompare.dill',
'run_merge' : False,
'run_watershed' : False,
'sel_watershed' : [],
'run_metrics' : False,
'sel_metrics' : [],
'make_plots' : False,
'plot_setlims' : False,
'save_plots' : False,
'figno' : 1000,
'export_images' : True,
}
'''
#'''
# 3d compare gala
nblocks = 6; ngroups = 4;
params = {
#'groups' : ['huge', 'huge_gala', 'huge_flatagglo_lda_9f_iter0p05'],
'groups' : ['huge', 'huge_agglo_perfect', 'huge_flatagglo_lda_23f_iter0p05', 'huge_flatagglo_rf_23f_iter0p05'],
'groups_vals' : [5, 10, 15, 20],
#'groups_vals' : [5, 10],
'groups_xlim' : [0,15],
'groups_xlbl' : 'groups',
'blocks' : range(1,nblocks+1),
'inpath' : '/home/watkinspv/Data/convnet_out/cube_recons/newestECS/sixfold_threed',
'loadpath' : '/Data/pwatkins/full_datasets/newestECSall/20151001',
'outpath' : '/Data/pwatkins/full_datasets/newestECSall/20151001',
'size' : [128, 128, 128], 'offset' : [0, 0, 0], 'lblbitsgt' : 16, 'lblbits' : 32,
#'size' : [64,64,64], 'offset' : [32,32,32], 'lblbitsgt' : 16, 'lblbits' : 32,
'chunks' : [
[[17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2], [19,22,2]],
[[17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2], [19,22,2]],
[[17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2], [19,22,2]],
[[17,19,2], [17,23,1], [22,23,1], [22,18,1], [22,23,2], [19,22,2]],
],
#'chunks' : [
# [[17,19,2]],
# [[17,19,2]],
# [[17,19,2]],
# [[17,19,2]],
# ],
'thrRng' : [0.3, 0.999, 0.1],
'thrHi' : [0.95, 0.975, 0.99, 0.995, 0.999, 0.9995, 0.9999, 0.99995, 0.99999, 0.999995, 0.999999],
'thrLo' : [],
#'Tmins' : [64],
'Tmins' : [256],
'global_mins' : True,
'group_single' : True,
# merge all
'merge_probs' : ['_xyz_0_probs.h5', '_xzy_0_probs.h5', '_zyx_0_probs.h5', '_xyz_1_probs.h5', '_xyz_2_probs.h5'],
'merge_weightings' : [1.0,0.5,0.5,1.0,1.0],
'merge_orderings' : ['xyz','xzy','zyx','xyz','xyz'],
'gt_labels' : [
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
'M0007_33_labels_briggmankl_watkinspv_39x35x7chunks_Forder.h5',
],
'gt_ECS_label' : 1,
'out_name_probs' : '_probs.h5',
'out_name' : '_supervoxels.h5',
'save_file' : 'process_convnet_out_newestECS_gala.dill',
'run_merge' : False,
'run_watershed' : False,
'sel_watershed' : [],
'export_raw' : False,
'raw_data' : [
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
'/Data/big_datasets/M0007_33_39x35x7chunks_Forder.h5',
],
'run_metrics' : True,
'sel_metrics' | |
# -*- coding:utf-8 -*-
import numpy as np
from .hydrology import PearsonThree
def calc_kps(ps, cvs, css):
"""
    Compute the modulus coefficients Kp for each design frequency.
    :param ps: iterable object -> floats, frequencies to evaluate
    :param cvs: iterable object -> floats, coefficient of variation for each duration
    :param css: iterable object -> floats, skew coefficient for each duration
    :return: len(ps) * len(cvs) matrix of floats
"""
return np.array([[PearsonThree(cv, css[i]).calc_kp(p) for i, cv in enumerate(cvs)] for p in ps])
def calc_design_point_rainfalls(ps, cvs, css, point_rainfalls):
"""
    Compute the design point rainfall for each duration and each design frequency.
    :param ps: iterable object -> floats, frequencies to evaluate
    :param point_rainfalls: iterable object -> floats, point rainfall for each duration; in this project the durations are 10 min, 1 h, 6 h and 24 h
    :param cvs: iterable object -> floats, coefficient of variation for each duration
    :param css: iterable object -> floats, skew coefficient for each duration
    :return: len(ps) * len(point_rainfalls) matrix of floats
"""
kps = calc_kps(ps, cvs, css)
return np.array([kp * np.array(point_rainfalls) for kp in kps])
def calc_design_area_rainfalls(ps, cvs, css, point_rainfalls, alphas):
"""
    Compute the design areal rainfall for each duration and each design frequency.
    :param ps: iterable object -> floats, frequencies to evaluate
    :param point_rainfalls: iterable object -> floats, point rainfall for each duration; in this project the durations are 10 min, 1 h, 6 h and 24 h
    :param cvs: iterable object -> floats, coefficient of variation for each duration
    :param css: iterable object -> floats, skew coefficient for each duration
    :param alphas: iterable object -> floats, point-to-area reduction coefficient for each duration, same shape as the point rainfall
    :return: len(ps) * len(point_rainfalls) matrix of floats
"""
design_point_rainfalls = calc_design_point_rainfalls(ps, cvs, css, point_rainfalls)
return np.array([x * alphas[i] for i, x in enumerate(design_point_rainfalls.T)]).T
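# Illustrative usage sketch (added example): chains the helpers above for a
# hypothetical catchment. The helper name and every numeric value are made up
# for demonstration and are not taken from the original project.
def _example_design_rainfall_usage():
    ps = [0.01, 0.02, 0.05]                      # design frequencies (1%, 2%, 5%)
    point_rainfalls = [18.0, 45.0, 80.0, 110.0]  # 10 min, 1 h, 6 h, 24 h point rainfall (mm)
    cvs = [0.45, 0.50, 0.55, 0.55]               # coefficient of variation per duration
    css = [3.5 * cv for cv in cvs]               # skew coefficient, here taken as 3.5 * Cv
    alphas = [1.0, 0.98, 0.93, 0.90]             # point-to-area reduction coefficients
    # Result has shape (len(ps), 4): one row per design frequency.
    return calc_design_area_rainfalls(ps, cvs, css, point_rainfalls, alphas)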
def calc_ns(h10minp_area_rainfalls, h1hp_area_rainfalls, h6hp_area_rainfalls, h24hp_area_rainfalls):
"""
    Compute the storm-intensity decay exponents n.
    :param h10minp_area_rainfalls: iterable object -> 10 min design areal rainfall for each design frequency
    :param h1hp_area_rainfalls: iterable object -> 1 h design areal rainfall for each design frequency
    :param h6hp_area_rainfalls: iterable object -> 6 h design areal rainfall for each design frequency
    :param h24hp_area_rainfalls: iterable object -> 24 h design areal rainfall for each design frequency
    :return: len(h10minp_area_rainfalls) * 3 matrix of floats; the columns are n1, n2, n3 for each design frequency
"""
if not len(h10minp_area_rainfalls) == len(h1hp_area_rainfalls) == len(h6hp_area_rainfalls) == len(h24hp_area_rainfalls):
        raise ValueError('All input sequences must have the same length')
h10minp_area_rainfalls = np.array(h10minp_area_rainfalls)
h1hp_area_rainfalls = np.array(h1hp_area_rainfalls)
h6hp_area_rainfalls = np.array(h6hp_area_rainfalls)
h24hp_area_rainfalls = np.array(h24hp_area_rainfalls)
n1s = 1 - 1.285 * np.log10(h1hp_area_rainfalls / h10minp_area_rainfalls)
n2s = 1 - 1.285 * np.log10(h6hp_area_rainfalls / h1hp_area_rainfalls)
n3s = 1 - 1.661 * np.log10(h24hp_area_rainfalls / h6hp_area_rainfalls)
return np.array([n1s, n2s, n3s]).T
def calc_r_24h_allocate(r24ps, n2ps, n3ps, h6ps, h24ps):
"""
    Compute the generalized 24 h temporal allocation of the net rainfall.
    :param r24ps: iterable object -> float, 24 h net rainfall for each design frequency
    :param n2ps: iterable object -> float, 1-6 h storm decay exponent n2 for each design frequency
    :param n3ps: iterable object -> float, 6-24 h storm decay exponent n3 for each design frequency
    :param h6ps: iterable object -> float, annual maximum 6 h rainfall for each design frequency
    :param h24ps: iterable object -> float, annual maximum 24 h rainfall for each design frequency
    :return: len(n2ps) * 24 matrix -> float, hourly allocation over the 24 h period
"""
scale_factor_base = [
{12: 10, 13: 12, 14: 16, 15: 38, 16: 14, 17: 10},
{12: 8, 13: 10, 14: 16, 15: 44, 16: 12, 17: 10},
{12: 7, 13: 7, 14: 15, 15: 54, 16: 10, 17: 7},
{12: 5, 13: 6, 14: 12, 15: 64, 16: 8, 17: 5},
{5: 4, 6: 5, 7: 6, 8: 8, 9: 8, 10: 10, 11: 10, 18: 10, 19: 8, 20: 8, 21: 6, 22: 6, 23: 6, 24: 5},
{7: 6, 8: 6, 9: 9, 10: 10, 11: 10, 18: 14, 19: 10, 20: 9, 21: 7, 22: 7, 23: 6, 24: 6},
{8: 6, 9: 10, 10: 12, 11: 12, 18: 16, 19: 12, 20: 10, 21: 10, 22: 6, 23: 6},
]
scale_factor = np.zeros((len(scale_factor_base), 24))
for i, row in enumerate(scale_factor_base):
for k, v in row.items():
scale_factor[i][k - 1] = v / 100
h6ps = np.array([np.full(24, x) for x in h6ps])
h24ps = np.array([np.full(24, x) for x in h24ps])
r24ps = np.array([np.full(24, x) for x in r24ps])
r6ps = h6ps / h24ps * r24ps
scale_factor_r6 = np.zeros((len(n2ps), 24))
scale_factor_r24 = np.zeros((len(n2ps), 24))
for i, row in enumerate(n2ps):
if 0.4 <= row < 0.5:
scale_factor_r6[i] = scale_factor[0]
elif 0.5 <= row < 0.6:
scale_factor_r6[i] = scale_factor[1]
elif 0.6 <= row < 0.7:
scale_factor_r6[i] = scale_factor[2]
else:
scale_factor_r6[i] = scale_factor[3]
if n3ps[i] < 0.6:
scale_factor_r24[i] = scale_factor[4]
elif 0.6 <= n3ps[i] < 0.7:
scale_factor_r24[i] = scale_factor[5]
else:
scale_factor_r24[i] = scale_factor[6]
r6_result = r6ps * scale_factor_r6
r24_result = (r24ps - r6ps) * scale_factor_r24
result = np.array([[r24_result[i][j] or col for j, col in enumerate(r6_result[i])] for i, row in enumerate(r6_result)])
return result
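# Note on calc_r_24h_allocate (added comment): the net rain of the most intense
# 6 hours (r6ps) is distributed over hours 12-17 using scale_factor_r6, the
# remaining net rain (r24ps - r6ps) is spread over the other hours using
# scale_factor_r24, and the final expression merges the two allocations hour by
# hour by taking whichever of the two values is non-zero.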
# def calc_taus(l, m, j, qs):
# """
#     Compute the concentration time tau.
#     :param l: float, main-stream length from the design cross-section to the watershed divide (km)
#     :param j: float, mean slope of l
#     :param m: float, concentration parameter, read from the theta~m curve
#     :param qs: iterable object -> float, discharge for each frequency
#     :return: len(qs) vector, concentration time for each frequency
# """
# return np.array([0.278 * l / (m * j**(1/3) * q**0.25) for q in qs])
#
#
# def calc_psi(f, l, j, m, mu, ss, n2s, n3s):
# qs = calc_qs(f, l, j, m, mu, ss, n2s, n3s)
# tau = calc_taus(l, m, j, qs)
# if tau <= 1:
# n = n2s
# else:
# n = n3s
# psi = 1 - mu * tau ** n / ss
# return psi
def calc_qs(f, l, j, m, mu, ss, n2s, n3s):
"""
    Compute the peak flood discharge with the rational (inference) formula method.
    :param f: float, drainage area (km^2)
    :param l: float, main-stream length from the design cross-section to the watershed divide (km)
    :param j: float, mean slope of l
    :param m: float, concentration parameter, read from the theta~m curve
    :param mu: float, mean infiltration rate (mm/h)
    :param ss: iterable object -> float, mean intensity of the maximum 1 h rainfall for each design frequency, i.e. the design 1 h rainfall (mm/h)
    :param n2s: iterable object -> float, 1-6 h storm decay exponent n2 for each design frequency
    :param n3s: iterable object -> float, 6-24 h storm decay exponent n3 for each design frequency
    :return: len(ss) * 4 matrix -> float; each row holds qm, tau, psi and n for one design frequency
"""
qms = np.ones(len(ss))
taus = np.ones(len(ss))
psis = np.ones(len(ss))
ns = np.ones(len(ss))
for i, s in enumerate(ss):
n2 = n2s[i]
n3 = n3s[i]
q = qms[i]
qm = 1e20
while np.abs(q - qm) > 1e-4:
q = qm
tau = 0.278 * l / (m * j**0.33333333 * q**0.25)
if tau <= 1:
n = n2
else:
n = n3
psi = 1 - mu * tau**n / s
qm = 0.278 * psi * s * f / tau**n
if qm < 0:
qm = 0.0
psi = 0.0
qms[i] = qm
taus[i] = tau
psis[i] = psi
ns[i] = n
return np.array([qms, taus, psis, ns]).T
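# Note on calc_qs (added comment): the concentration time tau and the peak
# discharge qm depend on each other, so the loop above solves the pair
#     tau = 0.278 * l / (m * j**(1/3) * qm**0.25)
#     qm  = 0.278 * psi * s * f / tau**n,  with  psi = 1 - mu * tau**n / s
# by fixed-point iteration until successive qm values differ by less than 1e-4,
# switching the decay exponent between n2 (tau <= 1 h) and n3 (tau > 1 h).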
def calc_per_hour_hps(h1ps, h24ps, n2s, n3s):
"""
    Compute the rainfall for each duration from 1 to 24 h, mainly used for the temporal allocation.
    :param h1ps: iterable object -> float, 1 h point rainfall for each design frequency
    :param h24ps: iterable object -> float, 24 h point rainfall for each design frequency
    :param n2s: iterable object -> float, decay exponent n2 for each design frequency
    :param n3s: iterable object -> float, decay exponent n3 for each design frequency
    :return: len(h1ps) * 24 matrix
"""
h1ps = np.array(h1ps)
h24ps = np.array(h24ps)
n2s = np.array(n2s)
n3s = np.array(n3s)
per_hour_hps = []
for i in range(24):
t = i + 1
if t < 6:
per_hour_hps.append(h1ps * t**(1 - n2s))
else:
per_hour_hps.append(h24ps * 24**(n3s - 1) * t**(1 - n3s))
return np.array(per_hour_hps).T
def calc_hps_allocate(h1ps, h24ps, n2s, n3s):
"""
    24 h storm temporal allocation: hourly rainfall amounts.
    :param h1ps: iterable object -> float, 1 h point rainfall for each design frequency
    :param h24ps: iterable object -> float, 24 h point rainfall for each design frequency
    :param n2s: iterable object -> float, decay exponent n2 for each design frequency
    :param n3s: iterable object -> float, decay exponent n3 for each design frequency
    :return: len(h1ps) * 24 matrix, hourly rainfall for each design frequency
"""
per_hour_hps = np.array(calc_per_hour_hps(h1ps, h24ps, n2s, n3s)).T
allocate = []
# h24ps = np.array([sum(per_hour_hp) for per_hour_hp in per_hour_hps])
for i in range(6):
allocate.append(1 / 6 * (per_hour_hps[23] - per_hour_hps[17]))
allocate.append(per_hour_hps[15] - per_hour_hps[14])
allocate.append(per_hour_hps[13] - per_hour_hps[12])
allocate.append(per_hour_hps[11] - per_hour_hps[10])
allocate.append(per_hour_hps[9] - per_hour_hps[8])
allocate.append(per_hour_hps[7] - per_hour_hps[6])
allocate.append(per_hour_hps[5] - per_hour_hps[4])
allocate.append(per_hour_hps[3] - per_hour_hps[2])
allocate.append(per_hour_hps[1] - per_hour_hps[0])
allocate.append(per_hour_hps[0])
allocate.append(per_hour_hps[2] - per_hour_hps[1])
allocate.append(per_hour_hps[4] - per_hour_hps[3])
allocate.append(per_hour_hps[6] - per_hour_hps[5])
allocate.append(per_hour_hps[8] - per_hour_hps[7])
allocate.append(per_hour_hps[10] - per_hour_hps[9])
allocate.append(per_hour_hps[12] - per_hour_hps[11])
allocate.append(per_hour_hps[14] - per_hour_hps[13])
allocate.append(per_hour_hps[16] - per_hour_hps[15])
allocate.append(per_hour_hps[17] - per_hour_hps[16])
return np.array(allocate).T
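# Note on calc_hps_allocate (added comment): each entry of per_hour_hps is the
# cumulative rainfall of the t most intense hours (t = 1..24), so successive
# differences give the rainfall of each individually ranked hour. The append
# order above interleaves these ranked hours into a single-peak hyetograph,
# with the heaviest hour placed around hour 15 of the 24 h window and
# progressively smaller hours on either side.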
def calc_per_hour_rps(r24ps, h1ps, h24ps, n2s, n3s, mu):
"""
    Compute the hourly net rainfall.
    :param r24ps: iterable object -> float, 24 h net rainfall for each design frequency
    :param h1ps: iterable object -> float, 1 h point rainfall for each design frequency
    :param h24ps: iterable object -> float, 24 h point rainfall for each design frequency
    :param n2s: iterable object -> float, decay exponent n2 for each design frequency
    :param n3s: iterable object -> float, decay exponent n3 for each design frequency
    :param mu: float, mean infiltration rate (mm/h)
    :return: hourly net rainfall
"""
hps_allocate = np.array(calc_hps_allocate(h1ps, h24ps, n2s, n3s))
    # Compute the net rainfall proportions
rs_allocate = np.array([(hps - mu) for i, hps in enumerate(hps_allocate.T)])
rs_allocate[rs_allocate < 0] = 0.0
rs_allocate = np.array([hps / sum(hps) for i, hps in enumerate(rs_allocate.T)]).T
rs_allocate = np.array([hps * r24ps for i, hps in enumerate(rs_allocate)])
return rs_allocate.T
def calc_qs_allocate(f, l, j, m, mu, ss, n2s, n3s, per_hour_rps):
"""
    Compute the flood hydrograph.
    :param f: float, drainage area (km^2)
    :param l: float, main-stream length from the design cross-section to the watershed divide (km)
    :param j: float, mean slope of l
    :param m: float, concentration parameter, read from the theta~m curve
    :param mu: float, mean infiltration rate (mm/h)
    :param ss: iterable object -> float, mean intensity of the maximum 1 h rainfall for each design frequency, i.e. the design 1 h rainfall (mm/h)
    :param n2s: iterable object -> float, 1-6 h storm decay exponent n2 for each design frequency
    :param n3s: iterable object -> float, 6-24 h storm decay exponent n3 for each design frequency
    :param per_hour_rps: hourly net rainfall for each design frequency
    :return: len(per_hour_rps) * 24 matrix, flood hydrograph for each design frequency
"""
# qs = calc_qs(f, l, j, m, mu, ss, n2s, n3s)
# psi = calc_qs(f, l, j, m, mu, ss, n2s, n3s)
# taus = calc_taus(l, m, j, qs)
# rps = np.array(per_hour_rps)
# qs = [0.278 * rs * f / taus for rs in rps.T]
# 2pt line, 2pt break, 10pt line, 2pt break
# line2, = ax.plot(iter_list, cur_global_fitness_list, label='Global Iteration {0}\nentity number {1}\nDimension {2}'.format(iter_num, entity_num, dim))
# line2.set_dashes([2, 2, 10, 2]) # 2pt line, 2pt break, 10pt line, 2pt break
# ax.legend()
# plt.xlabel('Iteration times')
# plt.ylabel('Error rate')
# plt.title('Search the minimum of f1 = sum(Xi ^ 2)')
# plt.show()
class EDA_PBILc_EIS:
"""
    Estimation of distribution algorithm (EDA)
Extending population-based incremental learning to continuous search spaces
Refer:
Paper:
            paper0: A survey of estimation of distribution algorithms (in Chinese)
                ch02 a simple estimation of distribution algorithm
                    gives a simple, easy-to-follow worked example
                ch04 estimation of distribution algorithms in continuous domains
paper1: Estimation of Distribution Algorithms: A New Evolutionary Computation Approach for Graph Matching Problems
ch02: Estimation Distribution Algorithms
2.3 EDAs in Continuous Domains
paper2: **Extending population-based incremental learning to continuous search spaces
ch03: Continuous PBIL
3.1 Continuous PBIL with dichotomic distributions
                3.2 Continuous PBIL with Gaussian distributions
Adjustable parameters:
casual:
number of search agents
            number of iterations
unique:
Relaxation factor, alpha
                the only adjustable parameter, taken as 1e-2
                (the penultimate paragraph of paper2, ch. 3, notes that alpha is small, so the parents' values decay only slowly, which preserves a memory of the parents)
Attention:
Version:
0
"""
class Entity:
def __init__(self, exp_data_dict, fitness_function, x_avg_list, x_sigma_list):
self.exp_data_dict = exp_data_dict
self.limits_list = exp_data_dict['limit']
self.fitness_function = fitness_function
self.x_avg_list = x_avg_list
self.x_sigma_list = x_sigma_list
self.x_list = [random.gauss(x_avg_list[i], x_sigma_list[i]) for i in range(len(self.limits_list))]
            # each x drawn from the Gaussian distribution might lie beyond its boundary,
            # so check the boundary first, then calculate the fitness
# self.fitness = fitness_function(self.x_list)
self.update()
def update(self):
# Check whether the value is in the boundary
for i in range(len(self.limits_list)):
# If not, give them a random value from the doable range
if (self.x_list[i] < self.limits_list[i][0]) or (self.x_list[i] > self.limits_list[i][1]):
# Constrain the value in the boundary
self.x_list[i] = random.uniform(self.limits_list[i][0], self.limits_list[i][1])
# Update its fitness
self.fitness = self.fitness_function(self.exp_data_dict, self.x_list)
def __init__(self, exp_data_dict, iter_num, entity_num, fitness_function=cal_EIS_WSE_fitness_1):
self.exp_data_dict = exp_data_dict
self.limits_list = exp_data_dict['limit']
# Load setting
self.iter_num = iter_num
self.entity_num = entity_num
self.fitness_function = fitness_function
# Initialize average list,
self.avg_list = [0.5 * (limit[1] + limit[0]) for limit in self.limits_list]
# Initialize sigma_list
# Default as the length of (0.25 ~ 0.5) * range
self.sigma_list = [0.25 * (limit[1] - limit[0]) for limit in self.limits_list]
"""
Update sigma:
refer:
paper2-ch03-3.2-C
Content:
Adjust sigma depending on the diversity of the current best offspring;
sigma_t (t: current iteration time) is set to the variance of the K best current offspring
sigma_t_C = sqrt(sum([Xi - x_mean for i in range(K)]) / K), X = [x0, x1, x2, ..., xn-1]
select the K (0.5, half) best entities
Result:
the entity becomes premature too early, abandon this method
paper2-ch03-3.2-D
Content:
Sigma can be learned in the same way as X itself,
by memorizing the diversity of the K best offspring
Result:
sigma_t+1_D = sigma_t_D
"""
self.K = int(0.5 * self.entity_num)
self.alpha = 1e-2
# Initialize the global best entity
self.global_best_entity = self.Entity(self.exp_data_dict, fitness_function, self.avg_list, self.sigma_list)
def search(self):
current_best_entity_list = []
global_best_entity_list = []
continue_criterion = True
iter = 0
while continue_criterion:
# Create new generation
# Initialize a population
self.entities_list = [self.Entity(self.exp_data_dict, self.fitness_function, self.avg_list, self.sigma_list) \
for i in range(self.entity_num)]
self.entities_list.sort(key=lambda entity: entity.fitness, reverse=False)
# Select the first two optimal and the worst entities according to their fitness
current_1_best_entity, current_2_best_entity = self.entities_list[:2]
current_worst_entity = self.entities_list[-1]
# Compare the global best and the current best, if the current is better, replace global
if current_1_best_entity.fitness < self.global_best_entity.fitness:
self.global_best_entity = copy.deepcopy(current_1_best_entity)
current_best_entity_list.append(copy.deepcopy(current_1_best_entity))
global_best_entity_list.append(copy.deepcopy(self.global_best_entity))
# Calculate the average and variance of each dimension
tmp_x_avg_list = []
tmp_sigma_list = []
for x_index in range(len(self.limits_list)):
tmp_list = []
for entity_index in range(self.K):
x = self.entities_list[entity_index].x_list[x_index]
tmp_list.append(x)
tmp_x_avg = sum(tmp_list)/self.K
tmp_x_avg_list.append(tmp_x_avg)
tmp_sigma_list.append( math.sqrt(sum([math.pow((x0 - tmp_x_avg), 2) for x0 in tmp_list]) / self.K) )
# Update the average and sigma list
self.avg_list = [(1 - self.alpha) * parent_avg \
+ self.alpha * (current_1_best_entity.x_list[index] + current_2_best_entity.x_list[index] - current_worst_entity.x_list[index]) \
for index, parent_avg in enumerate(self.avg_list)]
# self.sigma_list = copy.deepcopy(tmp_sigma_list)
self.sigma_list = [(1 - self.alpha) * parent_sigma + self.alpha * tmp_sigma \
for parent_sigma, tmp_sigma in zip(self.sigma_list, tmp_sigma_list)]
            # There are two entities to compare only after at least two iterations.
            # If global_best_entity_list has them, use it;
            # if not, use current_best_entity_list instead.
if iter >= 1:
x_lists_list = [global_best_entity_list[-2].x_list, global_best_entity_list[-1].x_list]
goa_criterion, chi_squared = goa_criterion_pack(x_lists_list=x_lists_list, iter=iter,
max_iter_time=self.iter_num,
data_dict=self.exp_data_dict)
if goa_criterion:
continue_criterion = False
iter += 1
return current_best_entity_list, global_best_entity_list, iter, chi_squared
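# Minimal standalone sketch (added example, not used by the classes in this
# file) of the continuous PBIL update rule that EDA_PBILc_EIS applies inside
# search(): the Gaussian mean drifts towards (best1 + best2 - worst) and sigma
# memorises the spread of the K best offspring, both with relaxation factor
# alpha. All inputs are hypothetical scalars for a single dimension.
def _pbil_continuous_update_sketch(avg, sigma, best1, best2, worst, k_best_values, alpha=1e-2):
    # Mean update: move slowly towards the two best offspring and away from the worst.
    new_avg = (1 - alpha) * avg + alpha * (best1 + best2 - worst)
    # Sigma update: relax towards the standard deviation of the K best offspring.
    k_mean = sum(k_best_values) / len(k_best_values)
    k_sigma = (sum((v - k_mean) ** 2 for v in k_best_values) / len(k_best_values)) ** 0.5
    new_sigma = (1 - alpha) * sigma + alpha * k_sigma
    return new_avg, new_sigma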
class EDA_PBILc_EIS_access:
"""
    Estimation of distribution algorithm (EDA)
Extending population-based incremental learning to continuous search spaces
Refer:
Paper:
            paper0: A survey of estimation of distribution algorithms (in Chinese)
                ch02 a simple estimation of distribution algorithm
                    gives a simple, easy-to-follow worked example
                ch04 estimation of distribution algorithms in continuous domains
paper1: Estimation of Distribution Algorithms: A New Evolutionary Computation Approach for Graph Matching Problems
ch02: Estimation Distribution Algorithms
2.3 EDAs in Continuous Domains
paper2: **Extending population-based incremental learning to continuous search spaces
ch03: Continuous PBIL
3.1 Continuous PBIL with dichotomic distributions
                3.2 Continuous PBIL with Gaussian distributions
Adjustable parameters:
casual:
number of search agents
            number of iterations
unique:
Relaxation factor, alpha
                the only adjustable parameter, taken as 1e-2
                (the penultimate paragraph of paper2, ch. 3, notes that alpha is small, so the parents' values decay only slowly, which preserves a memory of the parents)
Attention:
Version:
0
"""
class Entity:
def __init__(self, exp_data_dict, fitness_function, x_avg_list, x_sigma_list):
self.exp_data_dict = exp_data_dict
self.limits_list = exp_data_dict['limit']
self.fitness_function = fitness_function
self.x_avg_list = x_avg_list
self.x_sigma_list = x_sigma_list
self.x_list = [random.gauss(x_avg_list[i], x_sigma_list[i]) for i in range(len(self.limits_list))]
            # each x drawn from the Gaussian distribution might lie beyond its boundary,
            # so check the boundary first, then calculate the fitness
# self.fitness = fitness_function(self.x_list)
self.update()
def update(self):
# Check whether the value is in the boundary
for i in range(len(self.limits_list)):
# If not, give them a random value from the doable range
if (self.x_list[i] < self.limits_list[i][0]) or (self.x_list[i] > self.limits_list[i][1]):
# Constrain the value in the boundary
self.x_list[i] = random.uniform(self.limits_list[i][0], self.limits_list[i][1])
# Update its fitness
self.fitness = self.fitness_function(self.exp_data_dict, self.x_list)
def __init__(self, exp_data_dict, iter_num, entity_num, fitness_function=cal_EIS_WSE_fitness_1):
self.exp_data_dict = exp_data_dict
self.limits_list = exp_data_dict['limit']
# Load setting
self.iter_num = iter_num
self.entity_num = entity_num
self.fitness_function = fitness_function
# Initialize average list,
self.avg_list = [0.5 * (limit[1] + limit[0]) for limit in self.limits_list]
# Initialize sigma_list
# Default as the length of (0.25 ~ 0.5) * range
self.sigma_list = [0.25 * (limit[1] - limit[0]) for limit in self.limits_list]
"""
Update sigma:
refer:
paper2-ch03-3.2-C
Content:
Adjust sigma depending on the diversity of the current best offspring;
sigma_t (t: current iteration time) is set to the variance of the K best current offspring
sigma_t_C = sqrt(sum([Xi - x_mean for i in range(K)]) / K), X = [x0, x1, x2, ..., xn-1]
select the K (0.5, half) best entities
Result:
the entity becomes premature too early, abandon this method
paper2-ch03-3.2-D
Content:
Sigma can be learned in the same way as X itself,
by memorizing the diversity of the K best offspring
Result:
sigma_t+1_D = sigma_t_D
"""
self.K = int(0.5 * self.entity_num)
self.alpha = 1e-2
# Initialize the global best entity
self.global_best_entity = self.Entity(self.exp_data_dict, fitness_function, self.avg_list, self.sigma_list)
def search(self, res_fn, start_time):
current_best_entity_list = []
global_best_entity_list = []
continue_criterion = True
iter = 0
while continue_criterion:
# Create new generation
# Initialize a population
self.entities_list = [self.Entity(self.exp_data_dict, self.fitness_function, self.avg_list, self.sigma_list) \
for i in range(self.entity_num)]
self.entities_list.sort(key=lambda entity: entity.fitness, reverse=False)
# Select the first two optimal and the worst entities according to their fitness
current_1_best_entity, current_2_best_entity = self.entities_list[:2]
current_worst_entity = self.entities_list[-1]
# Compare the global best and the current best, if the current is better, replace global
if current_1_best_entity.fitness < self.global_best_entity.fitness:
self.global_best_entity = copy.deepcopy(current_1_best_entity)
current_best_entity_list.append(copy.deepcopy(current_1_best_entity))
global_best_entity_list.append(copy.deepcopy(self.global_best_entity))
# Calculate the average and variance of each dimension
tmp_x_avg_list = []
tmp_sigma_list = []
for x_index in range(len(self.limits_list)):
tmp_list = []
for entity_index in range(self.K):
x = self.entities_list[entity_index].x_list[x_index]
tmp_list.append(x)
tmp_x_avg = sum(tmp_list)/self.K
tmp_x_avg_list.append(tmp_x_avg)
tmp_sigma_list.append( math.sqrt(sum([math.pow((x0 - tmp_x_avg), 2) for x0 in tmp_list]) / self.K) )
# Update the average and sigma list
self.avg_list = [(1 - self.alpha) * parent_avg \
+ self.alpha * (current_1_best_entity.x_list[index] + current_2_best_entity.x_list[index] - current_worst_entity.x_list[index]) \
for index, parent_avg in enumerate(self.avg_list)]
# self.sigma_list = copy.deepcopy(tmp_sigma_list)
self.sigma_list = [(1 - self.alpha) * parent_sigma + self.alpha * tmp_sigma \
for parent_sigma, tmp_sigma in zip(self.sigma_list, tmp_sigma_list)]
            # There are two entities to compare only after at least two iterations.
            # If global_best_entity_list has them, use it;
            # if not, use current_best_entity_list instead.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Author: <NAME>
# Most comments are above the line they are commenting
# import cmath
# import scipy
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from textwrap import wrap
# define constants
epsilon = 8.85E-12
c = 299792458
def f(k, z, x, x_prime):
'''
Evaluates the complex exponential part of the Fresnel diffraction function
within the integral.
Parameters
----------
k : float
Wavenumber of the light.
z : float
Distance from the aperture to the screen.
x : float
Coordinate on the screen.
x_prime : float
Coordinate on the aperture.
Returns
-------
    f : complex
The evaluated complex exponential.
'''
return np.exp(1j*k/(2*z) * np.square(x-x_prime))
def simpson_integral(x_prime, k, x, z, N):
'''
Performs a Simpson's rule numerical integration on a given function. For
each value x, the function iterates over all x' values between x'1 and x'2.
Parameters
----------
x_prime : array
The range of x' to integrate over.
k : float
Wavenumber of the light.
x : array
The range of x on the screen to iterate over.
z : float
The distance to the screen.
N : int
The number of terms to iterate over.
Returns
-------
I : array
Computed relative intensity values for the diffraction.
'''
# note: for each value x, integrate over range x'1 to x'2
# extension: compare against scipy.integrate.simps()
# generate arrays for E and I
E = np.zeros(N, dtype=np.complex_) # allows handling of complex values
I = np.zeros(N)
# iterate over all screen coordinates
for i1 in range(len(x)):
        # define the spacing between adjacent aperture sample points
        h = abs(x2_prime - x1_prime)/N
        # generate a temporary array for the Simpson's rule terms
        sum = np.zeros(N, dtype=np.complex_)
        # assign the end-point terms at x'1 and x'2 for this screen coordinate
        sum[0] = (h/3) * f(k, z, x[i1], x1_prime)
        sum[N-1] = (h/3) * f(k, z, x[i1], x2_prime)
        # fill in the interior terms over all x'
        for i2 in range(1, len(x_prime) - 1):
            # even interior indices take a Simpson coefficient of 2
            if i2 % 2 == 0:
                sum[i2] = h/3 * 2 * f(k, z, x[i1], x_prime[i2])
            # odd interior indices take a Simpson coefficient of 4
            else:
                sum[i2] = h/3 * 4 * f(k, z, x[i1], x_prime[i2])
        # apply the constant prefactor k*E0/(2*pi*z) & convert to intensity
        E[i1] = np.sum(sum) * (k * E0 / (2 * np.pi * z))
I[i1] = np.real(epsilon * c * E[i1] * np.conjugate(E[i1]))
return I
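# Illustrative cross-check (a sketch of the "extension" noted inside
# simpson_integral, assuming scipy is available; not part of the original script):
#
# from scipy.integrate import simps
# def simpson_integral_scipy(x_prime, k, x, z, N):
#     I_check = np.zeros(N)
#     for i1 in range(len(x)):
#         integrand = f(k, z, x[i1], x_prime)  # f is vectorised over x'
#         E_i = simps(integrand, x_prime) * (k * E0 / (2 * np.pi * z))
#         I_check[i1] = np.real(epsilon * c * E_i * np.conjugate(E_i))
#     return I_check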
user_input = '0'
while user_input != 'q':
user_input = input(
'\nChoose an option:\n'
'\na: 1-D integration,'
'\nb: 1-D integration with custom values,'
'\nc: 2-D integration,'
'\n or press \'q\' to quit.\n').lower()
if user_input == 'a':
        print('Performing a 1-dimensional Fresnel integral using Simpson\'s '
'rule, with default values.')
_lambda = 1E-6
k = 2*np.pi/_lambda
N = 100
E0 = 1
x1 = -0.005
x2 = 0.005
x1_prime = -1E-5
x2_prime = 1E-5
z = 0.02
x = np.linspace(x1, x2, N)
x_prime = np.linspace(x1_prime, x2_prime, N)
E = simpson_integral(x_prime, k, x, z, N)
plt.plot(x, E)
plt.xlabel('Screen coordinate, x (m)', size=12)
plt.ylabel('Relative intensity', size=12)
plt.show()
if user_input == 'b':
        print('Performing a 1-dimensional Fresnel integral using Simpson\'s '
'rule, with custom values. Leave blank for the default value.')
# loop for each value until valid input
while True:
try:
_lambda = float(input(
'Please enter a value for lambda (in metres - accepts '
'scientific notation): ') or 1E-6)
if (_lambda <= 0):
raise ValueError('The wavelength must be positive and '
'greater than 0. Please try again.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
N = int(input('Please enter a value for N:') or 100)
if (N <= 0):
raise ValueError(
'N must be larger than 0. Please try again.')
elif (N > 500):
raise ValueError(
'The value of N entered is too large, and will cause '
'performance degradation. Please select a value lower '
'than 500.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
E0 = float(input('Please enter a value for the initial '
'electrical field (E0):') or 1)
if (E0 < 0):
raise ValueError(
'E0 cannot be negative. Please enter a positive '
'value.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x1 = float(input(
'Please enter a value for the minimum x coordinate (in '
'metres - accepts scientific notation):') or -0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x2 = float(input(
'Please enter a value for the maximum x coordinate (in '
'metres - accepts scientific notation):') or 0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x1_prime = float(input(
'Please enter a value for the minimum horizontal aperture '
'limit (in metres - accepts scientific notation):')
or -1E-5)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x2_prime = float(input(
'Please enter a value for the maximum horizontal aperture '
'limit (in metres - accepts scientific notation):')
or 1E-5)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
z = float(input('Please enter a value for the distance between'
' the aperture and the screen (in metres - '
'accepts scientific notation): ') or 0.02)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
k = 2*np.pi/_lambda
x = np.linspace(x1, x2, N)
x_prime = np.linspace(x1_prime, x2_prime, N)
E = simpson_integral(x_prime, k, x, z, N)
plt.plot(x, E)
plt.xlabel('Screen coordinate, x (m)', size=12)
plt.ylabel('Relative intensity', size=12)
plt.show()
if user_input == 'c':
        print('Performing a 2-dimensional Fresnel integral using Simpson\'s '
'rule, with custom values. Leave blank for the default value.')
# loop for each value until valid input
while True:
try:
_lambda = float(input(
'Please enter a value for lambda (in metres - accepts '
'scientific notation): ') or 1E-6)
if (_lambda <= 0):
raise ValueError('The wavelength must be positive and '
'greater than 0. Please try again.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
N = int(input('Please enter a value for N:') or 100)
if (N <= 0):
raise ValueError(
'N must be larger than 0. Please try again.')
elif (N > 500):
raise ValueError(
'The value of N entered is too large, and will cause '
'performance degradation. Please select a value lower '
'than 500.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
E0 = float(input('Please enter a value for the initial '
'electrical field (E0):') or 1)
if (E0 < 0):
raise ValueError(
'E0 cannot be negative. Please enter a positive '
'value.')
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x1 = float(input(
'Please enter a value for the minimum x coordinate (in '
'metres - accepts scientific notation):') or -0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x2 = float(input(
'Please enter a value for the maximum x coordinate (in '
'metres - accepts scientific notation):') or 0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x1_prime = float(input(
'Please enter a value for the minimum horizontal aperture '
'limit (in metres - accepts scientific notation):')
or -1E-5)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
x2_prime = float(input(
'Please enter a value for the maximum horizontal aperture '
'limit (in metres - accepts scientific notation):')
or 1E-5)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
y1 = float(input(
'Please enter a value for the minimum y coordinate (in '
'metres - accepts scientific notation):') or -0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
y2 = float(input(
'Please enter a value for the maximum y coordinate (in '
'metres - accepts scientific notation):') or 0.005)
except ValueError:
print('Invalid input. Please try again.')
continue
else:
break
while True:
try:
y1_prime = float(input(
'Please enter a value for the minimum vertical aperture '
'limit (in metres - accepts scientific notation):')
or -1E-5)
| |
import numpy as np
import ipywidgets as widgets
from IPython.display import display
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from mpl_toolkits.axes_grid1.axes_divider import make_axes_locatable
import lsst.afw.table
from lsst.afw.image import MultibandExposure
from scarlet.display import AsinhMapping, img_to_rgb
import lsst.meas.extensions.scarlet as mes
class InteractivePlot:
    """A matplotlib plot with drilldown capabilities
    In order to follow up on outliers, this class is intended to
be inherited by different types of plots. When a data point in
the plot is clicked, the source corresponding to the selected
data point has its image and data displayed and stored within
this class.
    This class requires inheritors to define the following properties:
Attributes
----------
df: `pandas.DataFrame`
The `DataFrame` that contains the entire pipe_analysis catalog
tract: int
The tract that contains the source
selected: array of int
Array with the indices from `df` for all selected sources
(if the user clicked on a point in a histogram there can
be multiple selected sources, otherwise a single source
is selected).
selectIndex: int
The index of the currently selected source in `selected`.
butler: `daf.persistence.Butler`
        The Butler that points to the location of the image and
catalog data.
lblStatus: `ipywidgets.Label`
The current status. This updates the user as different
data products are loaded.
filters: string
Names of the filters to load (typically "grizy").
The case does not matter, as the filters are converted into
``"HSC-{}".format(f)``, where ``f`` is the name of the filter.
stretch: double
The ``stretch`` parameter for the asinh mapping
Q: double
The ``Q`` parameter for the asinh mapping
fig: `matplotlib.Figure`
The figure that displays both the plot and the image of the blend.
ax: list of matplotlib axes
`ax[0]` is expected to be the interactive plot while
`ax[1]` is the source image.
coadds: `lsst.afw.MultibandExposure`
The multiband coadd for the selected source
footprint: `lsst.detection.Footprint`
The footprint of the selected sources' parent
band: str
The name of the band for the ``meas`` catalog for the
selected source. This should be the full filter name,
for example ``HSC-R``.
peak: `PeakCatalog`
The peak catalog row for the selected source
"""
def loadImage(self):
"""Load and display the parent blend of the selected source.
"""
self.lblStatus.value = "Loading parent catalog"
# Load the parent blend of the currently selected source
row = self.df.iloc[self.selected].iloc[self.selectIndex]
        patch = row["patchId"]
        # keep the patch id on the instance so displayImage can label the selection
        self.patch = patch
dataId = {
"tract": self.tract,
"patch": patch,
}
mergeDet = self.butler.get("deepCoadd_mergeDet", dataId)
        pid = row["parent"]
        if pid == 0:
            pid = row["id"]
        # keep the parent id on the instance so displayImage can label the selection
        self.pid = pid
src = mergeDet[mergeDet["id"]==pid][0]
self.lblStatus.value = "Loading parent MultibandExposure"
        # Get the parent's footprint and extract the MultibandExposure over its bounding box
fp = src.getFootprint()
self.footprint = fp
bbox = fp.getBBox()
coadds = [
self.butler.get(
"deepCoadd_calexp_sub",
dataId,
bbox=bbox,
filter="HSC-{}".format(f.upper()))
for f in self.filters
]
coadds = MultibandExposure.fromExposures(self.filters, coadds)
self.coadds = coadds
# Set the visualization parameters
self.vmin = 0
self.vmax = np.max(coadds.image.array)
meas = self.butler.get(
"deepCoadd_meas",
dataId,
flags=lsst.afw.table.SOURCE_IO_NO_FOOTPRINTS,
filter=self.band
)
self.peak = meas[meas["id"]==row["id"]][0]
self.displayImage()
def coaddToRGB(self):
"""Convert a coadd image to an RGB image
"""
images = self.coadds.image.array
norm = AsinhMapping(minimum=self.vmin, stretch=self.vmax*self.stretch, Q=self.Q)
rgb = img_to_rgb(images, norm=norm)
# Apply a mask to only display the pixels in the footprint
mask = mes.scarletDeblendTask.getFootprintMask(self.footprint, self.coadds)
rgb = np.dstack([rgb, ~mask*255])
return rgb
def displayImage(self):
"""Display the image of the blend with the sources marked
"""
self.lblStatus.value = "Displaying image"
# Display the image
rgb = self.coaddToRGB()
self.ax[1].clear()
self.image = self.ax[1].imshow(rgb, origin="lower")
self.ax[1].set_title("source {} of {}".format(self.selectIndex+1, len(self.selected)))
# Plot all of the sources in the blend
self.lblStatus.value = "Plotting sources"
bbox = self.footprint.getBBox()
xmin = bbox.getMinX()
ymin = bbox.getMinY()
for pk in self.footprint.peaks:
self.ax[1].plot(pk["i_x"]-xmin, pk["i_y"]-ymin, "wx", mew=2)
# Plot the currently selected source
self.ax[1].plot(
self.peak["deblend_peak_center_x"]-xmin,
self.peak["deblend_peak_center_y"]-ymin,
"cx",
mew=2
)
plt.tight_layout()
self.fig.canvas.draw()
self.lblStatus.value = "selected source at {}".format(
(self.peak["deblend_peak_center_x"]-xmin,
self.peak["deblend_peak_center_y"]-ymin)
)
        lbl = "Tract {}, patch {}, parent {}"
        self.lblSelect.value = lbl.format(self.tract, self.patch, self.pid)
def updateImage(self):
"""Quickly update the image without redrawing the axis
This is typically done when updating vmin, vmax, Q, or stretch
"""
rgb = self.coaddToRGB()
self.image.set_data(rgb)
self.fig.canvas.draw()
class InteractiveHist(InteractivePlot):
"""An interactive histogram
Histogram version of a pipe_analysis plot, which
    allows the user to select a data point and select all
of the sources contained in that data point.
In addition to the attributes listed below, it also
inherits the attributes from `InteractivePlot`.
Attributes
----------
hist: array
The histogram that is plotted
xedges: array
The values of the x column for the edges of each bin
yedges: array
The values of the y column for the edges of each bin
rect: `matplotlib.patches.Rectangle`
The rectangle that shows the selected bin
colorbar: `matplotlib.Colorbar`
The colorbar for the histogram
lblCursor: `ipywidgets.Label`
The label that shows the number of sources in the
bin that is underneath the cursor.
lblSelect: `ipywidgets.Label`
The label that shows the number of sources in the
bin that has been selected.
"""
def __init__(self, butler, df=None, tract=9813, filters=None, band=None, cursorColor=None):
if filters is None:
filters = "grizy"
if band is None:
band = "HSC-R"
if cursorColor is None:
cursorColor = "#c9392e"
self.butler = butler
        self.tract = tract
self.filters = filters
self.band = band
self.cursorColor = cursorColor
if df is None:
cat = butler.get(
"analysisCoaddTable_forced",
tract=tract,
filter=band,
subdir=""
)
df = cat.toDataFrame()
sources = (
                (df["parent"] != 0)
& df["detect_isPatchInner"]
& ~df["merge_peak_sky"]
& np.isfinite(df["base_PsfFlux_instFlux"])
& (df["base_PsfFlux_instFlux"] > 0)
& np.isfinite(df["modelfit_CModel_instFlux"])
& (df["modelfit_CModel_instFlux"] > 0)
)
df = df[sources]
self.df = df
self.selectIndex = 0
self.Q = 10
self.stretch = 0.005
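    # Minimal usage sketch (illustrative assumptions: a valid Butler repository
    # path and the column names shown; not part of the original module):
    #
    #   butler = lsst.daf.persistence.Butler("/path/to/rerun")
    #   plot = InteractiveHist(butler, tract=9813)
    #   plot.initControls()
    #   plot.initHist(xColumn="base_PsfFlux_instFlux", xMag=True,
    #                 yColumn="modelfit_CModel_instFlux", yMag=True)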
def previousSource(self, event):
"""Load the parent blend for the previous source
"""
if len(self.selected) <= 1:
# No need to do anything if there is only one selected source
return
self.selectIndex -= 1
if self.selectIndex < 0:
self.selectIndex = len(self.selected) - 1
self.loadImage()
    def nextSource(self, event):
        """Load the parent blend for the next source
"""
self.lblStatus.value = "NEXT"
if len(self.selected) <= 1:
# No need to do anything if there is only one selected source
return
self.selectIndex += 1
if self.selectIndex > len(self.selected)-1:
self.selectIndex = 0
self.loadImage()
def updateQ(self, change):
"""Update the 'Q' parameter in the asinh algorithm
"""
self.Q = change["new"]
self.updateImage()
def updateStretch(self, change):
"""Update the 'stretch' parameter in the asinh algorithm
"""
self.stretch = change["new"]
self.updateImage()
def initControls(self):
"""Initialize the navigation controls and sliders
"""
# Initialize and display the navigation buttons
previousButton = widgets.Button(description="<")
previousButton.on_click(self.previousSource)
nextButton = widgets.Button(description=">")
nextButton.on_click(self.nextSource)
display(widgets.HBox([previousButton, nextButton]))
# Initialize and display the parameters for the asinh mapping
sliderQ = widgets.FloatSlider(value=10, min=0, max=100, step=.5, readout_format='.1f')
sliderQ.observe(self.updateQ, names="value")
display(widgets.HBox([widgets.Label("Q"), sliderQ]))
sliderStretch = widgets.FloatSlider(value=0.005, min=0.00001, max=0.1, step=0.0001, readout_format='.4f')
sliderStretch.observe(self.updateStretch, names="value")
display(widgets.HBox([widgets.Label("stretch"), sliderStretch]))
def setColumnX(self, column, toMag=False):
"""Based on the column name set the x column
If ``toMag`` is ``True`` then convert the (flux) values
in the column into magnitudes
"""
assert column is not None
self.x = self.df[column]
self.xColumn = column
if toMag:
self.x = -2.5*np.log10(self.x)
self.xColumn = column
return self.x
def setColumnY(self, column, toMag=False):
"""Based on the column name set the y column
If ``toMag`` is ``True`` then convert the (flux) values
in the column into magnitudes
"""
assert column is not None
self.y = self.df[column]
self.yColumn = column
if toMag:
self.y = -2.5*np.log10(self.y)
self.yColumn = column
        return self.y
def initHist(self, cuts=None, xColumn=None, yColumn=None, width=100, ratio=0.5,
close=True, xMag=False, yMag=False, norm=None, figsize=None):
"""Initialize the histogram plot
Parameters
----------
cuts: array
The cuts (rows to select) in the x and y columns
xColumn: str
The name of the x column to plot
yColumn: str
The name of the y column to plot
width: int
            The number of columns (bins) in the histogram
        ratio: float
The ratio of the number of rows/number of columns
(eg. y/x)
close: bool
Whether or not to close all open plots before creating the figure.
This is recommended, since matplotlib will keep all of the plots open
by default, even if the same cell is rerun, which can result in
significant memory consumption.
xMag: bool
Whether or | |
indent.
section_splitter = _yield_section(
lambda x: not x[0].isspace(), strip=False)
for section in section_splitter(chunks):
header = section[0].split(None, 1)[0]
parser = _PARSER_TABLE.get(
header, _parse_section_default)
if header == 'FEATURES':
# This requires 'LOCUS' line parsed before 'FEATURES', which should
# be true and is implicitly checked by the sniffer.
parser = partial(
parser, length=metadata['LOCUS']['size'])
parsed = parser(section)
# reference can appear multiple times
if header == 'REFERENCE':
if header in metadata:
metadata[header].append(parsed)
else:
metadata[header] = [parsed]
elif header == 'ORIGIN':
sequence = parsed
elif header == 'FEATURES':
metadata[header] = parsed[0]
positional_metadata = pd.concat(parsed[1], axis=1)
else:
metadata[header] = parsed
return sequence, metadata, positional_metadata
def _serialize_single_genbank(obj, fh):
'''Write a GenBank record.
    Always write it in the NCBI canonical way:
1. sequence in lowercase
2. 'u' as 't' even in RNA molecules.
'''
md = obj.metadata
for header in _HEADERS:
if header in md:
serializer = _SERIALIZER_TABLE.get(
header, _serialize_section_default)
out = serializer(header, md[header])
            # test if 'out' is an iterator.
# cf. Effective Python Item 17
if iter(out) is iter(out):
for s in out:
fh.write(s)
else:
fh.write(out)
# always write RNA seq as DNA
if isinstance(obj, RNA):
obj = obj.reverse_transcribe()
# always write in lowercase
seq_str = str(obj).lower()
for s in _serialize_origin(seq_str):
fh.write(s)
fh.write('//\n')
def _parse_locus(lines):
'''Parse the line LOCUS.
Format:
# Positions Contents
# --------- --------
# 00:06 LOCUS
# 06:12 spaces
# 12:?? Locus name
# ??:?? space
# ??:29 Length of sequence, right-justified
# 29:33 space, bp/aa/rc, space
# 33:41 molecule type (can be blank): DNA, ssDNA, dsRNA, tRNA, etc.
# 41:42 space
# 42:51 Blank (implies linear), linear or circular
# 51:52 space
# 52:55 The division code (e.g. BCT, VRL, INV)
# 55:62 space
# 62:73 Date, in the form dd-MMM-yyyy (e.g., 15-MAR-1991)
'''
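    # Illustrative example of a line this pattern accepts (hypothetical record):
    #   LOCUS       AB000100     1224 bp    mRNA    linear   PLN 15-MAR-1991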
line = lines[0]
pattern = (r'LOCUS'
               r' +([^\s]+)'
' +([0-9]+)'
' +(bp|aa|rc)'
' +(.*DNA|.*RNA)?'
' +(linear|circular)?'
' +(PRI|ROD|MAM|VRT|INV|PLN|BCT|VRL|PHG|'
'SYN|UNA|EST|PAT|STS|GSS|HTG|HTC|ENV|CON)'
' +([0-9]{2}-[A-Z]{3}-[0-9]{4})')
matches = re.match(pattern, line)
try:
res = dict(zip(
['locus_name', 'size', 'unit', 'mol_type',
'shape', 'division', 'date'],
matches.groups()))
    except AttributeError:
raise GenBankFormatError(
"Could not parse the LOCUS line:\n%s" % line)
res['size'] = int(res['size'])
res['date'] = datetime.strptime(res['date'], _TIME_FORMAT)
return res
def _serialize_locus(header, obj, indent=12):
    '''Serialize the LOCUS line.
Parameters
----------
obj : dict
'''
# use 'or' to convert None to ''
kwargs = {k: v or '' for k, v in obj.items()}
# convert datetime to str
kwargs['date'] = kwargs['date'].strftime(_TIME_FORMAT).upper()
return ('{header:<{indent}}{locus_name} {size} {unit}'
' {mol_type} {shape} {division} {date}\n').format(
header=header, indent=indent, **kwargs)
def _parse_reference(lines):
'''Parse single REFERENCE field.
'''
res = {}
# magic number 11: the non keyworded lines in REFERENCE
# are at least indented with 11 spaces.
feature_indent = ' ' * 11
section_splitter = _yield_section(
lambda x: not x.startswith(feature_indent),
skip_blanks=True, strip=False)
for section in section_splitter(lines):
label, data = _parse_section_default(
section, join_delimitor=' ', return_label=True)
res[label] = data
return res
def _serialize_reference(header, obj, indent=12):
'''Serialize REFERENCE.
Parameters
----------
obj : list
'''
padding = ' '
sort_order = {'REFERENCE': 0, 'AUTHORS': 1,
'TITLE': 2, 'JOURNAL': 3, 'PUBMED': 4}
for obj_i in obj:
ref_i = []
for h in sorted(obj_i, key=lambda k: sort_order.get(k, 100)):
if h == header:
s = '{h:<{indent}}{ref}'.format(
h=h, indent=indent, ref=obj_i[h])
else:
s = '{h:<{indent}}{value}'.format(
h=padding + h, indent=indent, value=obj_i[h])
ref_i.append(s)
yield '%s\n' % '\n'.join(ref_i)
def _parse_source(lines):
'''Parse SOURCE field.
'''
res = {}
# magic number 11: the non keyworded lines in SOURCE
# are at least indented with 11 spaces.
feature_indent = ' ' * 11
section_splitter = _yield_section(
lambda x: not x.startswith(feature_indent),
skip_blanks=True, strip=False)
# SOURCE line is not informative; skip it
_, organism = list(section_splitter(lines))
res['ORGANISM'] = organism[0].split(None, 1)[1].strip()
res['taxonomy'] = ' '.join([i.strip() for i in organism[1:]])
return res
def _serialize_source(header, obj, indent=12):
'''Serialize SOURCE.
Parameters
----------
obj : dict
'''
s = ('{header:<{indent}}{organism}\n'
'{h:<{indent}}{organism}\n'
'{space}{taxonomy}\n').format(
header=header, indent=indent,
h=' ORGANISM', organism=obj['ORGANISM'],
space=' ' * 12, taxonomy=obj['taxonomy'])
return s
def _parse_features(lines, length):
'''Parse FEATURES field.
'''
features = []
positional_metadata = []
# skip the 1st FEATURES line
if lines[0].startswith('FEATURES'):
lines = lines[1:]
# magic number 20: the lines following header of each feature
# are at least indented with 20 spaces.
feature_indent = ' ' * 20
section_splitter = _yield_section(
lambda x: not x.startswith(feature_indent),
skip_blanks=True, strip=False)
for i, section in enumerate(section_splitter(lines)):
# print(i) ; continue
feature, pmd = _parse_single_feature(section, length, i)
features.append(feature)
positional_metadata.append(pmd)
return features, positional_metadata
def _serialize_features(header, obj, indent=21):
first = True
for feature in obj:
if first:
first = False
yield '{header:<{indent}}Location/Qualifiers\n{feat}'.format(
header=header, indent=indent,
feat=_serialize_single_feature(feature, indent))
else:
yield _serialize_single_feature(feature, indent)
def _parse_single_feature(lines, length, index):
'''Parse a feature.
Returns
-------
tuple
Tuple of a dict of `metadata` and a pandas.Series of
`positional_metadata` for the feature.
'''
feature = {}
feature['index_'] = index
# each component of a feature starts with '/', except the 1st
# component of location.
section_splitter = _yield_section(
lambda x: x.startswith('/'), strip=True)
first = True
for section in section_splitter(lines):
if first:
# first section is the Location string
first = False
type, location = _parse_section_default(
section, join_delimitor='', return_label=True)
feature['type_'] = type
feature['location'] = location
loc, loc_pmd = _parse_loc_str(location, length)
feature.update(loc)
else:
# following sections are Qualifiers
k, v = _parse_section_default(
section, label_delimitor='=',
join_delimitor=' ', return_label=True)
k = k[1:]
# some Qualifiers can appear multiple times
if k in feature:
if not isinstance(feature[k], list):
feature[k] = [feature[k]]
feature[k].append(v)
else:
feature[k] = v
return feature, loc_pmd
def _serialize_single_feature(obj, indent=21):
padding = ' ' * 8
qualifiers = []
for k in sorted(obj):
if k.endswith('_') or k in ('location', 'type'):
continue
v = obj[k]
if isinstance(v, list):
for vi in v:
qualifiers.append(_serialize_qualifier(k, vi))
else:
qualifiers.append(_serialize_qualifier(k, v))
qualifiers = [' ' * indent + i for i in qualifiers]
return '{header:>{indent}}{loc}\n{qualifiers}\n'.format(
header=obj['type_'] + padding, loc=obj['location'],
indent=indent, qualifiers='\n'.join(qualifiers))
def _serialize_qualifier(key, value):
'''Serialize a Qualifier in a feature.
Parameters
----------
value : int, str
'''
# if value is empty
if not value:
return '/%s' % key
return '/{k}={v}'.format(k=key, v=value)
def _parse_loc_str(loc_str, length):
'''Parse location string.
    Warning: This converts coordinates from the 1-based convention used in
    GenBank format to 0-based.
The location descriptor can be one of the following:
(a) a single base number. e.g. 467
(b) a site between two indicated adjoining bases. e.g. 123^124
(c) a single base chosen from within a specified range of bases (not
allowed for new entries). e.g. 102.110
(d) the base numbers delimiting a sequence span. e.g.340..565
(e) a remote entry identifier followed by a local location
descriptor (i.e., a-d). e.g. J00194.1:100..202
TODO:
handle (b), (c), (e) cases correctly
'''
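    # Illustrative behaviour (derived from the cases above, added for clarity):
    #   '340..565'             -> indices 339..564 marked True (0-based, inclusive)
    #   'complement(340..565)' -> same span, with rc_ set to True
    #   '<1..206'              -> additionally sets left_partial_ to True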
pmd = np.zeros(length, dtype=bool)
res = {'rc_': False,
'left_partial_': False,
'right_partial_': False}
items = re.split('[(),]+', loc_str)
operators = ['join', 'complement', 'order']
if 'complement' in items:
res['rc_'] = True
for i in items:
i = i.strip()
if i in operators or not i:
continue
elif ':' in i: # (e)
index = []
elif '..' in i: # (d)
beg, end = i.split('..')
if beg.startswith('<'):
beg = beg[1:]
res['left_partial_'] = True
if end.startswith('>'):
end = end[1:]
res['right_partial_'] = True
beg = int(beg)
end = int(end)
index = range(beg-1, end)
elif '.' in i: # (c)
index = []
elif i.isdigit(): # (a)
index = int(i) - 1
elif '^' in i: # (b)
index = []
else:
raise GenBankFormatError(
'Could not parse location string: "%s"' %
loc_str)
pmd[index] = True
return res, pd.Series(pmd)
def _parse_origin(lines):
'''Parse the ORIGIN section for sequence.
'''
sequence = []
for line in lines:
if line.startswith('ORIGIN'):
continue
        # remove the number at the beginning of each line
items = line.split()
sequence.append(''.join(items[1:]))
return ''.join(sequence)
def _serialize_origin(seq, indent=9):
'''Serialize seq to ORIGIN.
Parameters
----------
seq : str
'''
n = 1
line_size = 60
frag_size = 10
for i in range(0, len(seq), line_size):
line = seq[i:i+line_size]
s = '{n:>{indent}} {s}\n'.format(
n=n, indent=indent, s=chunk_str(line, frag_size, ' '))
if n == 1:
s = 'ORIGIN\n' + s
n = n + line_size
yield s
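# Illustrative output of _serialize_origin (hypothetical bases; 60 per line,
# grouped into 10-base chunks, with a right-justified base counter):
#   ORIGIN
#           1 gatcctccat atacaacggt atctccacct caggtttaga tctcaacaac ggaaccattg
#          61 ccgacatgag acagttaggt atcgtcgaga gttacaagct aaaacgagca gtagtcagct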
def _parse_section_default(
lines, label_delimitor=None, join_delimitor=' ', return_label=False):
'''Parse sections in default way.
Do 2 things:
1. split first line with label_delimitor for label
2. join all the lines into one str with | |
'0'})
},
u'orm.helptext': {
'Meta': {'object_name': 'HelpText'},
'area': ('django.db.models.fields.IntegerField', [], {}),
'build': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'helptext_build'", 'to': u"orm['orm.Build']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'text': ('django.db.models.fields.TextField', [], {})
},
u'orm.layer': {
'Meta': {'unique_together': "(('layer_source', 'up_id'), ('layer_source', 'name'))", 'object_name': 'Layer'},
'description': ('django.db.models.fields.TextField', [], {'default': 'None', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer_index_url': ('django.db.models.fields.URLField', [], {'max_length': '200'}),
'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
'local_path': ('django.db.models.fields.FilePathField', [], {'default': 'None', 'max_length': '255', 'null': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'summary': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'vcs_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'}),
'vcs_web_file_base_url': ('django.db.models.fields.URLField', [], {'default': 'None', 'max_length': '200', 'null': 'True'})
},
u'orm.layer_version': {
'Meta': {'unique_together': "(('layer_source', 'up_id'),)", 'object_name': 'Layer_Version'},
'branch': ('django.db.models.fields.CharField', [], {'max_length': '80'}),
'build': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'related_name': "'layer_version_build'", 'null': 'True', 'to': u"orm['orm.Build']"}),
'commit': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'dirpath': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'layer_version_layer'", 'to': u"orm['orm.Layer']"}),
'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
'priority': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'up_branch': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.Branch']", 'null': 'True'}),
'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'})
},
u'orm.layersource': {
'Meta': {'unique_together': "(('sourcetype', 'apiurl'),)", 'object_name': 'LayerSource'},
'apiurl': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '255', 'null': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '63'}),
'sourcetype': ('django.db.models.fields.IntegerField', [], {})
},
u'orm.layerversiondependency': {
'Meta': {'unique_together': "(('layer_source', 'up_id'),)", 'object_name': 'LayerVersionDependency'},
'depends_on': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'dependees'", 'to': u"orm['orm.Layer_Version']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
'layer_version': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'dependencies'", 'to': u"orm['orm.Layer_Version']"}),
'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'})
},
u'orm.logmessage': {
'Meta': {'object_name': 'LogMessage'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'lineno': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'message': ('django.db.models.fields.CharField', [], {'max_length': '240'}),
'pathname': ('django.db.models.fields.FilePathField', [], {'max_length': '255', 'blank': 'True'}),
'task': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Task']", 'null': 'True', 'blank': 'True'})
},
u'orm.machine': {
'Meta': {'unique_together': "(('layer_source', 'up_id'),)", 'object_name': 'Machine'},
'description': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
'layer_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Layer_Version']"}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'})
},
u'orm.package': {
'Meta': {'object_name': 'Package'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']"}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'installed_name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '100'}),
'installed_size': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'license': ('django.db.models.fields.CharField', [], {'max_length': '80', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'recipe': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Recipe']", 'null': 'True'}),
'revision': ('django.db.models.fields.CharField', [], {'max_length': '32', 'blank': 'True'}),
'section': ('django.db.models.fields.CharField', [], {'max_length': '80', 'blank': 'True'}),
'size': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'version': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'})
},
u'orm.package_dependency': {
'Meta': {'object_name': 'Package_Dependency'},
'dep_type': ('django.db.models.fields.IntegerField', [], {}),
'depends_on': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'package_dependencies_target'", 'to': u"orm['orm.Package']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'package': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'package_dependencies_source'", 'to': u"orm['orm.Package']"}),
'target': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Target']", 'null': 'True'})
},
u'orm.package_file': {
'Meta': {'object_name': 'Package_File'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'package': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'buildfilelist_package'", 'to': u"orm['orm.Package']"}),
'path': ('django.db.models.fields.FilePathField', [], {'max_length': '255', 'blank': 'True'}),
'size': ('django.db.models.fields.IntegerField', [], {})
},
u'orm.project': {
'Meta': {'object_name': 'Project'},
'bitbake_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.BitbakeVersion']"}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'release': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Release']"}),
'short_description': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'updated': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user_id': ('django.db.models.fields.IntegerField', [], {'null': 'True'})
},
u'orm.projectlayer': {
'Meta': {'object_name': 'ProjectLayer'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layercommit': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Layer_Version']", 'null': 'True'}),
'optional': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"})
},
u'orm.projecttarget': {
'Meta': {'object_name': 'ProjectTarget'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'task': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'})
},
u'orm.projectvariable': {
'Meta': {'object_name': 'ProjectVariable'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'project': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Project']"}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'orm.recipe': {
'Meta': {'object_name': 'Recipe'},
'bugtracker': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'file_path': ('django.db.models.fields.FilePathField', [], {'max_length': '255'}),
'homepage': ('django.db.models.fields.URLField', [], {'max_length': '200', 'blank': 'True'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer_source': ('django.db.models.fields.related.ForeignKey', [], {'default': 'None', 'to': u"orm['orm.LayerSource']", 'null': 'True'}),
'layer_version': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'recipe_layer_version'", 'to': u"orm['orm.Layer_Version']"}),
'license': ('django.db.models.fields.CharField', [], {'max_length': '200', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'section': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'summary': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'up_date': ('django.db.models.fields.DateTimeField', [], {'default': 'None', 'null': 'True'}),
'up_id': ('django.db.models.fields.IntegerField', [], {'default': 'None', 'null': 'True'}),
'version': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'})
},
u'orm.recipe_dependency': {
'Meta': {'object_name': 'Recipe_Dependency'},
'dep_type': ('django.db.models.fields.IntegerField', [], {}),
'depends_on': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'r_dependencies_depends'", 'to': u"orm['orm.Recipe']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'recipe': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'r_dependencies_recipe'", 'to': u"orm['orm.Recipe']"})
},
u'orm.release': {
'Meta': {'object_name': 'Release'},
'bitbake_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.BitbakeVersion']"}),
'branch': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'description': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'})
},
u'orm.releasedefaultlayer': {
'Meta': {'object_name': 'ReleaseDefaultLayer'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Layer']"}),
'release': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Release']"})
},
u'orm.target': {
'Meta': {'object_name': 'Target'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Build']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'image_size': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'is_image': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'license_manifest_path': ('django.db.models.fields.CharField', [], {'max_length': '500', 'null': 'True'}),
'target': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
u'orm.target_file': {
'Meta': {'object_name': 'Target_File'},
'directory': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'directory_set'", 'null': 'True', 'to': u"orm['orm.Target_File']"}),
'group': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'inodetype': ('django.db.models.fields.IntegerField', [], {}),
'owner': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'path': ('django.db.models.fields.FilePathField', [], {'max_length': '100'}),
'permission': ('django.db.models.fields.CharField', [], {'max_length': '16'}),
'size': ('django.db.models.fields.IntegerField', [], {}),
'sym_target': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'symlink_set'", 'null': 'True', 'to': u"orm['orm.Target_File']"}),
'target': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Target']"})
},
u'orm.target_image_file': {
'Meta': {'object_name': 'Target_Image_File'},
'file_name': ('django.db.models.fields.FilePathField', [], {'max_length': '254'}),
'file_size': ('django.db.models.fields.IntegerField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'target': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Target']"})
},
u'orm.target_installed_package': {
'Meta': {'object_name': 'Target_Installed_Package'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'package': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'buildtargetlist_package'", 'to': u"orm['orm.Package']"}),
'target': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Target']"})
},
u'orm.task': {
'Meta': {'ordering': "('order', 'recipe')", 'unique_together': "(('build', 'recipe', 'task_name'),)", 'object_name': 'Task'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'task_build'", 'to': u"orm['orm.Build']"}),
'cpu_usage': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '2'}),
'disk_io': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'elapsed_time': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '6', 'decimal_places': '2'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'line_number': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'logfile': ('django.db.models.fields.FilePathField', [], {'max_length': '255', 'blank': 'True'}),
'message': ('django.db.models.fields.CharField', [], {'max_length': '240'}),
'order': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'outcome': ('django.db.models.fields.IntegerField', [], {'default': '-1'}),
'path_to_sstate_obj': ('django.db.models.fields.FilePathField', [], {'max_length': '500', 'blank': 'True'}),
'recipe': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'build_recipe'", 'to': u"orm['orm.Recipe']"}),
'script_type': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'source_url': ('django.db.models.fields.FilePathField', [], {'max_length': '255', 'blank': 'True'}),
'sstate_checksum': ('django.db.models.fields.CharField', [], {'max_length': '100', 'blank': 'True'}),
'sstate_result': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'task_executed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'task_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'work_directory': ('django.db.models.fields.FilePathField', [], {'max_length': '255', 'blank': 'True'})
},
u'orm.task_dependency': {
'Meta': {'object_name': 'Task_Dependency'},
'depends_on': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'task_dependencies_depends'", 'to': u"orm['orm.Task']"}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'task': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'task_dependencies_task'", 'to': u"orm['orm.Task']"})
},
u'orm.toastersetting': {
'Meta': {'object_name': 'ToasterSetting'},
'helptext': ('django.db.models.fields.TextField', [], {}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '63'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '255'})
},
u'orm.toastersettingdefaultlayer': {
'Meta': {'object_name': 'ToasterSettingDefaultLayer'},
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'layer_version': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['orm.Layer_Version']"})
},
u'orm.variable': {
'Meta': {'object_name': 'Variable'},
'build': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'variable_build'", 'to': u"orm['orm.Build']"}),
'changed': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'description': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'human_readable_name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'variable_name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'variable_value': ('django.db.models.fields.TextField', [], {'blank': 'True'})
},
u'orm.variablehistory': {
'Meta': {'object_name': 'VariableHistory'},
'file_name': ('django.db.models.fields.FilePathField', [], {'max_length': '255'}),
u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'line_number': ('django.db.models.fields.IntegerField', [], {'null': 'True'}),
'operation': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'value': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'variable': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'vhistory'", 'to': u"orm['orm.Variable']"})
}
| |
= {self}
while x in assignment:
if x in visited:
from lamb import meta
from lamb.meta import logger
                logger.error(
                    "breaking loop in substitution (x: '%s', visited: '%s', assignment: '%s')"
                    % (x, visited, assignment))
break
visited |= {x}
x = assignment[x]
return x
else:
return self
def key_str(self):
return self.symbol + str(self.number)
def __hash__(self):
return hash(self.key_str())
def __eq__(self, other):
"""This implements token equality. This is _not_ semantic equality due
to type variable binding.
"""
if isinstance(other, VariableType):
return self.symbol == other.symbol and self.number == other.number
else:
return False
def __repr__(self):
if self.number > 3:
return self.symbol + str(self.number)
else:
return self.symbol + "'" * self.number
def latex_str(self):
if self.number > 3:
return ensuremath(self.symbol + "_{" + str(self.number) + "}")
else:
return ensuremath(self.symbol + "'" * self.number)
def _repr_latex_(self):
return self.latex_str()
@classmethod
def parse(cls, s, i, parse_control_fun):
(m_group, new_i) = parsing.consume_pattern(s, i, vartype_regex)
if m_group is None:
return (None, i)
else:
return (VariableType(m_group), new_i)
@classmethod
def random(cls, random_ctrl_fun):
primes = random.randint(0, 5)
var = random.choice(("X", "Y", "Z"))
return VariableType(var, number=primes)
@classmethod
def fresh(cls):
return VariableType("I", number=cls.max_id + 1)
# ensure that this gets reset on reload
VariableType.max_id = 0
class UnknownType(VariableType):
"""Special case of a variable type where the type variable is guaranteed to
be free, i.e. where the identity just doesn't matter.
Something like <?,?> amounts to <X,Y> where X,Y are free no matter what.
"""
max_identifier = 0
def __init__(self, force_num=None):
if force_num is None:
UnknownType.max_identifier += 1
self.identifier = UnknownType.max_identifier
else:
self.identifier = force_num
super().__init__("?", number=self.identifier)
def __repr__(self):
return "?"
def latex_str(self):
return ensuremath("?")
def copy_local(self):
return UnknownType(force_num=self.identifier)
@classmethod
def parse(cls, s, i, parse_control_fun):
new_i = parsing.consume_char(s, i, "?")
if new_i is None:
return (None, i)
else:
return (UnknownType(), new_i)
@classmethod
def random(cls, random_ctrl_fun):
return UnknownType()
@classmethod
def fresh(cls):
return UnknownType()
class DisjunctiveType(TypeConstructor):
"""Disjunctive types.
These types represent finite sets of non-polymorphic types. (Accordingly,
disjunctions of variable types are disallowed.)"""
def __init__(self, *type_list, raise_s=None, raise_i=None):
disjuncts = set()
for t in type_list:
if isinstance(t, DisjunctiveType):
disjuncts.update(t.disjuncts)
elif len(t.bound_type_vars()) > 0:
# this constraint is somewhat arbitrary, and could be
# generalized. But, then unification would be more complicated.
raise TypeParseError(
"Variable types can't be used disjunctively.",
raise_s, raise_i)
else:
disjuncts.add(t)
if len(disjuncts) <= 1:
raise TypeParseError(
"Disjunctive type must have multiple unique disjuncts",
raise_s, raise_i)
self.disjuncts = disjuncts
# still sort for a canonical ordering
self.type_list = sorted(self.disjuncts, key=repr)
super().__init__()
self.store_functional_info()
def __hash__(self):
return hash(tuple(self.type_list))
def __eq__(self, other):
if isinstance(other, DisjunctiveType):
return self.disjuncts == other.disjuncts
else:
return False
def __len__(self):
return len(self.disjuncts)
def __getitem__(self, i):
return self.type_list[i]
def __iter__(self):
return iter(self.type_list)
def __repr__(self):
return "[" + "|".join([repr(t) for t in self.type_list]) + "]"
def latex_str(self):
# wrap in curly braces to ensure the brackets don't get swallowed
        return ensuremath("{\\left[%s\\right]}" % "\\mid{}".join(
[self.type_list[i].latex_str()
for i in range(len(self.type_list))]))
# this works if b is a regular type or a disjunctive type.
def __or__(self, b):
return DisjunctiveType(self, b)
def __and__(self, b):
return poly_system.unify(self, b)
def copy_local(self, *parts):
return DisjunctiveType(*parts)
@classmethod
def factory(cls, *disjuncts):
"""returns a disjunctive type or a non-disjunctive type, as appropriate
for `disjuncts`.
`disjuncts`, when turned into a set, is:
length 0: return None
length 1: return the content
length 2: return a DisjunctiveType for the disjuncts.
If multiple disjuncts are DisjunctiveTypes, this could still fail,
depending on what the union looks like.
"""
r = set(disjuncts)
if len(r) == 0:
return None
elif len(r) == 1:
(r,) = r
return r
else:
return DisjunctiveType(*r)
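    # Illustrative factory cases (assuming simple atomic types e and n defined
    # by the surrounding type system; added for clarity):
    #   factory()     -> None
    #   factory(e)    -> e
    #   factory(e, n) -> DisjunctiveType(e, n), i.e. [e|n]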
# test case: tp("[<e,t>|<e,n>|<n,t>]").intersection_point(tp("<X,t>"),
# types.poly_system.unify_r, dict())
# should map X to [e|n].
def intersection_point(self, b, unify_fun, assignment):
if b in self.disjuncts:
return (b, assignment)
else:
new_disjuncts = list()
return_assign = assignment
# assumption: type variables can't be part of a disjunctive type.
for d in self.disjuncts:
tmp_assignment = assignment.copy() # sigh
(principal, tmp_assignment) = unify_fun(d, b, tmp_assignment)
# Important note: we discard all the temporary assignments
# *unless* the type disjunction is eliminated.
if principal is None:
continue
new_disjuncts.append(principal)
return_assign = union_assignments(return_assign, tmp_assignment)
result = self.factory(*new_disjuncts)
if result is None:
return (result, assignment)
else:
return (result, return_assign)
def store_functional_info(self):
l_set = set()
r_set = set()
# alternative: only set something if all disjuncts are functional?
for d in self.disjuncts:
if d.functional():
l_set |= {d.left}
r_set |= {d.right}
self.left = self.factory(*l_set)
self.right = self.factory(*r_set)
def functional(self):
return (self.left is not None)
# returns a FunType characterizing the set of functional types contained in
# self.
def factor_functional_types(self):
return FunType(self.left, self.right)
def intersection(self, b, unify_fun, assignment):
"""Calculate the intersection of `self` and type `b`.
If `b` is a DisjunctiveType, this involves looking at the intersection
of the types. Otherwise, it involves unifying `b` with the contents of
self and seeing what is left. Will return some type (not necessarily
disjunctive) if it succeeds, otherwise None.
Some examples:
[e|n] intersect e = e
[e|n] intersect X = [e|n]
[<e,t>|<n,t>|<e,e>] intersect <X,t> = [<e,t>|<n,t>]
[e|n] intersect t = None
"""
if isinstance(b, DisjunctiveType):
# this relies on type variables not being possible as disjuncts.
# otherwise, you'd need to use unify to check equality.
intersection = self.disjuncts & b.disjuncts
return (self.factory(*intersection), assignment)
else:
return self.intersection_point(b, unify_fun, assignment)
def unify(self, b, unify_control_fun, assignment):
return self.intersection(b, unify_control_fun, assignment)
@classmethod
def parse(cls, s, i, parse_control_fun):
starting_i = i
next = parsing.consume_char(s, i, "[")
if next is None:
return (None, i)
else:
i = next
signature = []
while i < len(s) and s[i] != "]":
(m, i) = parse_control_fun(s, i)
signature.append(m)
if s[i] == "]":
break
i = parsing.consume_char(s, i, "|",
"Missing | in disjunctive type")
i = parsing.consume_char(s, i, "]",
"Unmatched [ in disjunctive type")
return (DisjunctiveType(*signature, raise_s=s, raise_i=starting_i),
i)
@classmethod
def random(cls, random_ctrl_fun):
type_len = random.randint(2, random_len_cap)
args = set()
success = False
while not success:
# a lot of seeds might just not work for this class, so keep
# trying until we get something sensible
# TODO: in extremely simple (non-realistic) type systems this could
# loop indefinitely.
args = set([random_ctrl_fun() for i in range(0,type_len)])
try:
result = DisjunctiveType(*args)
except TypeParseError:
#print("retry", repr(args))
continue
success = True
return result
####################
#
# Type systems and various utility stuff.
#
####################
class TypeSystem(object):
"""Parent class for a type system. A TypeSystem provides:
* unification functions for variables and function application
    * an identity criterion
* some FA checking utilities
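    A minimal usage sketch (the concrete atomic and non-atomic type objects
    named here are assumptions, defined elsewhere in the library):
        ts = TypeSystem("basic", atomics={type_e, type_t}, nonatomics={FunType})
        ts.parse("<e,t>")   # -> a FunType from e to t
        ts.unify(ts.parse("<e,t>"), ts.parse("<e,t>"))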
"""
def __init__(self, name=None, atomics=None, nonatomics=None):
self.name = name
if self.name is None:
self.raisemsg = "function-argument combination"
else:
self.raisemsg = name + " function-argument combination"
self._parse_cache = dict()
self.atomics = set()
if atomics is not None:
for a in atomics:
self.add_atomic(a)
self.nonatomics = set()
if nonatomics is not None:
for a in nonatomics:
self.add_nonatomic(a)
def add_atomic(self, atomic):
if not atomic in self.atomics:
self._parse_cache[atomic.regex] = atomic
self.atomics.add(atomic)
def remove_atomic(self, atomic):
if atomic in self.atomics:
self.atomics.remove(atomic)
            del self._parse_cache[atomic.regex]
def add_nonatomic(self, na):
self.nonatomics.add(na)
def remove_nonatomic(self, na):
self.nonatomics.remove(na)
def atomic_parser(self, s, i):
# note: no checking for multiple matches...
for r in self._parse_cache.keys():
(m_group, new_i) = parsing.consume_pattern(s, i, r)
if m_group is None:
continue
return (self._parse_cache[r], new_i)
raise TypeParseError("Unknown atomic type", s, i)
def type_parser_recursive(self, s, i=0):
for na in self.nonatomics:
result, next = na.parse(s, i, self.type_parser_recursive)
if result is not None:
# one of the non-atomics succeeded
return (result, next)
# none of the non-atomics succeeded, so try atomic parser.
if i >= len(s):
raise TypeParseError("Missing type", s, i)
(result, i) = self.atomic_parser(s, i)
# may return None?
return (result, i)
def type_parser(self, s):
(r, i) = self.type_parser_recursive(s)
return r
def parse(self, s):
return self.type_parser(s)
def fun_arg_check_bool(self, fun, arg):
return (fun.type.functional() and
self.type_allowed(fun.type) and
self.type_allowed(arg.type) and
self.eq_check(fun.type.left, arg.type))
def check_type(self, t):
return (t in self.atomics or t.__class__ in self.nonatomics)
def unify(self, a, b):
if not (self.check_type(a) and self.check_type(b)):
return None
(result, r_assign) = a.unify(b, self.unify, None)
return result
def unify_ar(self, arg, | |
'''
P_blobs is a terminal fork of intra_blob. It calls comp_g and then comp_P (edge tracing) per terminated stack.
Pixel-level parameters are accumulated in contiguous spans of same-sign gradient, first horizontally then vertically.
Horizontal spans are Ps: 1D patterns, and vertical spans are first stacks of Ps, then blobs of stacks.
This processing adds a level of encoding per row y, defined relative to y of current input row, with top-down scan:
1Le, line y-1: form_P( dert_) -> 1D pattern P: contiguous row segment, a slice of a blob
2Le, line y-2: scan_P_(P, hP) -> hP, up_connect_, down_connect_count: vertical connections per stack of Ps
3Le, line y-3: form_stack(hP, stack) -> stack: merge vertically-connected _Ps into non-forking stacks of Ps
4Le, line y-4+ stack depth: form_blob(stack, blob): merge connected stacks into blobs referred by up_connect_, recursively
Higher-row elements include additional parameters, derived while they were lower-row elements.
Resulting blob structure (fixed set of parameters per blob):
- Dert: summed pixel-level dert params, Dx, surface area S, vertical depth Ly
- sign = s: sign of gradient deviation
- box = [y0, yn, x0, xn],
- dert__, # 2D array of pixel-level derts: (p, dy, dx, g, m) tuples
- stack_, # contains intermediate blob composition structures: stacks and Ps, not meaningful on their own
( intra_blob structure extends Dert, adds fork params and sub_layers)
Blob is 2D pattern: connectivity cluster defined by the sign of gradient deviation. Gradient represents 2D variation
per pixel. It is used as an inverse measure of partial match (predictive value) because direct match (min intensity)
is not meaningful in vision: intensity of reflected light doesn't correlate with the predictive value of the observed object
(predictive value is physical density, hardness, inertia: properties that represent resistance to change in positional parameters).
Comparison range is fixed for each layer of search, to enable encoding of input pose parameters: coordinates, dimensions,
orientation. These params are essential because value of prediction = precision of what * precision of where.
Clustering is by nearest-neighbor connectivity only, to avoid overlap among the blobs.
frame_blobs is a complex function with a simple purpose: to sum pixel-level params into blob-level params. These params
were derived by pixel cross-comparison (cross-correlation) to represent predictive value per pixel, so they are also
predictive on the blob level, and should be cross-compared between blobs on the next level of search and composition.
Old diagrams: https://kwcckw.github.io/CogAlg/
'''
"""
usage: frame_blobs_find_adj.py [-h] [-i IMAGE] [-v VERBOSE] [-n INTRA] [-r RENDER]
[-z ZOOM]
optional arguments:
-h, --help show this help message and exit
-i IMAGE, --image IMAGE
path to image file
-v VERBOSE, --verbose VERBOSE
print details, useful for debugging
-n INTRA, --intra INTRA
run intra_blobs after frame_blobs
-r RENDER, --render RENDER
render the process
-z ZOOM, --zoom ZOOM zooming ratio when rendering
"""
from time import time
from collections import deque
from pathlib import Path
import sys
import numpy as np
from operator import attrgetter
from class_cluster import ClusterStructure, NoneType
from class_bind import AdjBinder
# from comp_pixel import comp_pixel
from class_stream import BlobStreamer
from utils import (pairwise, imread)
from comp_P_draft import cluster_P_
ave = 30 # filter or hyper-parameter, set as a guess, latter adjusted by feedback
aveG = 50 # filter for comp_g, assumed constant direction
class CP(ClusterStructure):
I = int # default type at initialization
Dy = int
Dx = int
G = int
M = int
Dyy = int
Dyx = int
Dxy = int
Dxx = int
Ga = int
Ma = int
L = int
x0 = int
sign = NoneType
dert_ = list
gdert_ = list
Dg = int
Mg = int
class Cstack(ClusterStructure):
I = int
Dy = int
Dx = int
G = int
M = int
Dyy = int
Dyx = int
Dxy = int
Dxx = int
Ga = int
Ma = int
S = int
Ly = int
y0 = int
Py_ = list
blob = NoneType
down_connect_cnt = int
sign = NoneType
fPP = bool # PPy_ if 1, else Py_
class CBlob(ClusterStructure):
Dert = dict
box = list
stack_ = list
sign = NoneType
open_stacks = int
root_dert__ = object
dert__ = object
mask = object
adj_blobs = list
fopen = bool
margin = list
# Functions:
# prefix '_' denotes higher-line variable or structure, vs. same-type lower-line variable or structure
# postfix '_' denotes array name, vs. same-name elements of that array
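# e.g.: P_ is a row (array) of Ps and dert_ a row of derts, while dert__ is the 2D
#       array of rows; _P denotes a P from the higher (previous) row being compared
#       against a current-row P.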
def P_blobs(dert__, mask, crit__, verbose=False, render=False):
frame = dict(rng=1, dert__=dert__, mask=None, I=0, Dy=0, Dx=0, G=0, M=0, Dyy=0, Dyx=0, Dxy=0, Dxx=0, Ga=0, Ma=0, blob__=[])
stack_ = deque() # buffer of running vertical stacks of Ps
height, width = dert__[0].shape
if render:
def output_path(input_path, suffix):
return str(Path(input_path).with_suffix(suffix))
streamer = BlobStreamer(CBlob, dert__[1],
record_path=output_path(arguments['image'],
suffix='.im2blobs.avi'))
stack_binder = AdjBinder(Cstack)
if verbose:
start_time = time()
print("Converting the image(frame) to blobs...")
for y, dert_ in enumerate(zip(*dert__)): # first and last row are discarded
if verbose:
print(f"\rProcessing line {y + 1}/{height}, ", end="")
print(f"{len(frame['blob__'])} blobs converted", end="")
sys.stdout.flush()
P_binder = AdjBinder(CP) # binder needs data about clusters of the same level
P_ = form_P_(zip(*dert_), crit__[y], mask[y], P_binder) # horizontal clustering
if render:
render = streamer.update_blob_conversion(y, P_)
P_ = scan_P_(P_, stack_, frame, P_binder) # vertical clustering, adds P up_connects and _P down_connect_cnt
stack_ = form_stack_(P_, frame, y)
stack_binder.bind_from_lower(P_binder)
while stack_: # frame ends, last-line stacks are merged into their blobs
form_blob(stack_.popleft(), frame)
blob_binder = AdjBinder(CBlob)
blob_binder.bind_from_lower(stack_binder)
assign_adjacents(blob_binder) # add adj_blobs to each blob
if verbose: # print out at the end
nblobs = len(frame['blob__'])
print(f"\rImage has been successfully converted to "
f"{nblobs} blob{'s' if nblobs != 1 else 0} in "
f"{time() - start_time:.3} seconds", end="")
blob_ids = [blob_id for blob_id in range(CBlob.instance_cnt)]
merged_percentage = len([*filter(lambda bid: CBlob.get_instance(bid) is None, blob_ids)]) / len(blob_ids)
print(f"\nPercentage of merged blobs: {merged_percentage}")
if render: # rendering mode after blob conversion
path = output_path(arguments['image'],
suffix='.im2blobs.jpg')
streamer.end_blob_conversion(y, img_out_path=path)
return frame # frame of blobs
'''
Parameterized connectivity clustering functions below:
- form_P sums dert params within P and increments its L: horizontal length.
- scan_P_ searches for horizontal (x) overlap between Ps of consecutive (in y) rows.
- form_stack combines these overlapping Ps into vertical stacks of Ps, with one up_P to one down_P
- form_blob merges terminated or forking stacks into blob, removes redundant representations of the same blob
by multiple forked P stacks, then checks for blob termination and merger into whole-frame representation.
dert: tuple of derivatives per pixel, initially (p, dy, dx, g), will be extended in intra_blob
Dert: params of cluster structures (P, stack, blob): summed dert params + dimensions: vertical Ly and area S
'''
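# Toy illustration of the horizontal step performed by form_P_ below (a sketch, not
# executed code): contiguous same-sign spans of the criterion row become separate Ps.
#
#   crit_ = [ 5,  3, -2, -4,  7]   # sign: + + - - +
#   =>  P0: x0=0, L=2, sign=True | P1: x0=2, L=2, sign=False | P2: x0=4, L=1, sign=True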
def form_P_(idert_, crit_, mask_, binder): # segment dert__ into P__, in horizontal ) vertical order
P_ = deque() # row of Ps
s_ = crit_ > 0
x0 = 0
try:
while mask_[x0]: # skip until not masked
next(idert_)
x0 += 1
except IndexError:
        return P_  # the whole line is masked, return an empty P_
dert_ = [[*next(idert_)]] # get first dert from idert_ (generator/iterator)
(I, Dy, Dx, G, M, Dyy, Dyx, Dxy, Dxx, Ga, Ma), L = dert_[0], 1 # initialize P params
_s = s_[x0]
_mask = mask_[x0] # mask bit per dert
for x, (p, dy, dx, g, m, dyy, dyx, dxy, dxx, ga, ma) in enumerate(idert_, start=x0 + 1): # loop left to right in each row of derts
mask = mask_[x]
if ~mask: # current dert is not masked
s = s_[x]
if ~_mask and s != _s: # prior dert is not masked and sign changed, terminate and pack P:
P = CP(I=I, Dy=Dy, Dx=Dx, G=G, M=M, Dyy=Dyy, Dyx=Dyx, Dxy=Dxy, Dxx=Dxx, Ga=Ga, Ma=Ma, L=L, x0=x0, sign=_s, dert_=dert_)
P_.append(P)
I= Dy= Dx= G= M= Dyy= Dyx= Dxy= Dxx= Ga= Ma= L= 0; x0 = x; dert_ = [] # initialize P params
elif _mask:
I= Dy= Dx= G= M= Dyy= Dyx= Dxy= Dxx= Ga= Ma= L= 0; x0 = x; dert_ = [] # initialize P params
elif ~_mask:
# dert is masked, prior dert is not masked, pack P:
P | |
        form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/admin/application/{application_id}', 'PATCH',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Application', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def admin_get_application(self, token, application_id, **kwargs): # noqa: E501
"""Retrieves an application (admin only) # noqa: E501
        Retrieves an application from the system. This method is only accessible to admins  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.admin_get_application(token, application_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str token: The login token obtained via GET /login/{api_key} (required)
:param str application_id: The id of the application to retrieve (required)
:return: Application
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.admin_get_application_with_http_info(token, application_id, **kwargs) # noqa: E501
else:
(data) = self.admin_get_application_with_http_info(token, application_id, **kwargs) # noqa: E501
return data
def admin_get_application_with_http_info(self, token, application_id, **kwargs): # noqa: E501
"""Retrieves an application (admin only) # noqa: E501
        Retrieves an application from the system. This method is only accessible to admins  # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.admin_get_application_with_http_info(token, application_id, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str token: The login token obtained via GET /login/{api_key} (required)
:param str application_id: The id of the application to retrieve (required)
:return: Application
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['token', 'application_id'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method admin_get_application" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'token' is set
if ('token' not in params or
params['token'] is None):
raise ValueError("Missing the required parameter `token` when calling `admin_get_application`") # noqa: E501
# verify the required parameter 'application_id' is set
if ('application_id' not in params or
params['application_id'] is None):
raise ValueError("Missing the required parameter `application_id` when calling `admin_get_application`") # noqa: E501
collection_formats = {}
path_params = {}
if 'application_id' in params:
path_params['application_id'] = params['application_id'] # noqa: E501
query_params = []
header_params = {}
if 'token' in params:
header_params['token'] = params['token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/admin/application/{application_id}', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='Application', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def admin_get_applications(self, token, **kwargs): # noqa: E501
"""Retrieves applications (admin only) (paginated) # noqa: E501
Retrieves applications from the system. This method is only accessible to admins # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.admin_get_applications(token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str token: The login token obtained via GET /login/{api_key} (required)
:param str application_id: If specified, filters by application id
:param str email: If specified, filters by application email
:param str application_name: If specified, filters by application name
:param str first_name: If specified, filters by application first_name
:param str last_name: If specified, filters by application last_name
:param ApplicationTypes application_type: If specified, filters by application application_type
:param datetime created_after: If specified, keeps only applications created after given UTC timestamp (ISO 8601 format : yyyy-MM-ddThh:mm:ss)
:param datetime created_before: If specified, keeps only applications created before given UTC timestamp (ISO 8601 format : yyyy-MM-ddThh:mm:ss)
:param bool dedicated_workers: If specified, filters by dedicated_workers value
:param int offset: Number of the first document to send (pagination)
:param int limit: Maximum number of documents to send (pagination)
:param str sort: If specified, sorts the applications by a list of existing parameters separated by commas. Can be 'application_name', 'application_type', 'creation_time', 'first_name', 'last_name', 'email', 'dedicated_workers'. Sorts in ascending order by default. If a parameter is preceded by '-', it is sorted in descending order.
:return: list[Application]
If the method is called asynchronously,
returns the request thread.
"""
kwargs['_return_http_data_only'] = True
if kwargs.get('async_req'):
return self.admin_get_applications_with_http_info(token, **kwargs) # noqa: E501
else:
(data) = self.admin_get_applications_with_http_info(token, **kwargs) # noqa: E501
return data
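    # Example call (values are placeholders) combining the filter parameters and the
    # sort syntax described in the docstring above:
    #
    #   apps = api.admin_get_applications(token,
    #                                     email='user@example.com',
    #                                     sort='-creation_time,last_name',
    #                                     limit=20, offset=0)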
def admin_get_applications_with_http_info(self, token, **kwargs): # noqa: E501
"""Retrieves applications (admin only) (paginated) # noqa: E501
Retrieves applications from the system. This method is only accessible to admins # noqa: E501
This method makes a synchronous HTTP request by default. To make an
asynchronous HTTP request, please pass async_req=True
>>> thread = api.admin_get_applications_with_http_info(token, async_req=True)
>>> result = thread.get()
:param async_req bool
:param str token: The login token obtained via GET /login/{api_key} (required)
:param str application_id: If specified, filters by application id
:param str email: If specified, filters by application email
:param str application_name: If specified, filters by application name
:param str first_name: If specified, filters by application first_name
:param str last_name: If specified, filters by application last_name
:param ApplicationTypes application_type: If specified, filters by application application_type
:param datetime created_after: If specified, keeps only applications created after given UTC timestamp (ISO 8601 format : yyyy-MM-ddThh:mm:ss)
:param datetime created_before: If specified, keeps only applications created before given UTC timestamp (ISO 8601 format : yyyy-MM-ddThh:mm:ss)
:param bool dedicated_workers: If specified, filters by dedicated_workers value
:param int offset: Number of the first document to send (pagination)
:param int limit: Maximum number of documents to send (pagination)
:param str sort: If specified, sorts the applications by a list of existing parameters separated by commas. Can be 'application_name', 'application_type', 'creation_time', 'first_name', 'last_name', 'email', 'dedicated_workers'. Sorts in ascending order by default. If a parameter is preceded by '-', it is sorted in descending order.
:return: list[Application]
If the method is called asynchronously,
returns the request thread.
"""
all_params = ['token', 'application_id', 'email', 'application_name', 'first_name', 'last_name', 'application_type', 'created_after', 'created_before', 'dedicated_workers', 'offset', 'limit', 'sort'] # noqa: E501
all_params.append('async_req')
all_params.append('_return_http_data_only')
all_params.append('_preload_content')
all_params.append('_request_timeout')
params = locals()
for key, val in six.iteritems(params['kwargs']):
if key not in all_params:
raise TypeError(
"Got an unexpected keyword argument '%s'"
" to method admin_get_applications" % key
)
params[key] = val
del params['kwargs']
# verify the required parameter 'token' is set
if ('token' not in params or
params['token'] is None):
raise ValueError("Missing the required parameter `token` when calling `admin_get_applications`") # noqa: E501
collection_formats = {}
path_params = {}
query_params = []
if 'application_id' in params:
query_params.append(('application_id', params['application_id'])) # noqa: E501
if 'email' in params:
query_params.append(('email', params['email'])) # noqa: E501
if 'application_name' in params:
query_params.append(('application_name', params['application_name'])) # noqa: E501
if 'first_name' in params:
query_params.append(('first_name', params['first_name'])) # noqa: E501
if 'last_name' in params:
query_params.append(('last_name', params['last_name'])) # noqa: E501
if 'application_type' in params:
query_params.append(('application_type', params['application_type'])) # noqa: E501
if 'created_after' in params:
query_params.append(('created_after', params['created_after'])) # noqa: E501
if 'created_before' in params:
query_params.append(('created_before', params['created_before'])) # noqa: E501
if 'dedicated_workers' in params:
query_params.append(('dedicated_workers', params['dedicated_workers'])) # noqa: E501
if 'offset' in params:
query_params.append(('offset', params['offset'])) # noqa: E501
if 'limit' in params:
query_params.append(('limit', params['limit'])) # noqa: E501
if 'sort' in params:
query_params.append(('sort', params['sort'])) # noqa: E501
header_params = {}
if 'token' in params:
header_params['token'] = params['token'] # noqa: E501
form_params = []
local_var_files = {}
body_params = None
# HTTP header `Accept`
header_params['Accept'] = self.api_client.select_header_accept(
['application/json']) # noqa: E501
# Authentication setting
auth_settings = [] # noqa: E501
return self.api_client.call_api(
'/admin/application', 'GET',
path_params,
query_params,
header_params,
body=body_params,
post_params=form_params,
files=local_var_files,
response_type='list[Application]', # noqa: E501
auth_settings=auth_settings,
async_req=params.get('async_req'),
_return_http_data_only=params.get('_return_http_data_only'),
_preload_content=params.get('_preload_content', True),
_request_timeout=params.get('_request_timeout'),
collection_formats=collection_formats)
def admin_reset_api_key(self, token, application_id, **kwargs): # noqa: E501
"""Resets an API key (admin only) # noqa: E501
Resets the API key of the application corresponding to application_id, and returns a new one. This method is only accessible to admins # noqa: E501
This | |
"""Sets of metrics to look at general sky coverage - nvisits/coadded depth/Teff.
"""
import numpy as np
import rubin_sim.maf.metrics as metrics
import rubin_sim.maf.slicers as slicers
import rubin_sim.maf.plots as plots
import rubin_sim.maf.metricBundles as mb
import rubin_sim.maf.utils as mafUtils
from .colMapDict import ColMapDict, getColMap
from .common import standardSummary, filterList, radecCols, combineMetadata
__all__ = ['nvisitsM5Maps', 'tEffMetrics', 'nvisitsPerNight', 'nvisitsPerProp']
def nvisitsM5Maps(colmap=None, runName='opsim',
extraSql=None, extraMetadata=None,
nside=64, runLength=10.,
ditherStacker=None, ditherkwargs=None):
"""Generate number of visits and Coadded depth per RA/Dec point in all and per filters.
Parameters
----------
colmap : dict, optional
A dictionary with a mapping of column names. Default will use OpsimV4 column names.
runName : str, optional
The name of the simulated survey. Default is "opsim".
extraSql : str, optional
Additional constraint to add to any sql constraints (e.g. 'propId=1' or 'fieldID=522').
Default None, for no additional constraints.
extraMetadata : str, optional
        Additional metadata to add before any below (e.g. "WFD"). Default is None.
nside : int, optional
Nside value for healpix slicer. Default 64.
If "None" is passed, the healpixslicer-based metrics will be skipped.
runLength : float, optional
Length of the simulated survey, for scaling values for the plot limits.
Default 10.
ditherStacker: str or rubin_sim.maf.stackers.BaseDitherStacker
Optional dither stacker to use to define ra/dec columns.
ditherkwargs: dict, optional
Optional dictionary of kwargs for the dither stacker.
Returns
-------
metricBundleDict
"""
if colmap is None:
colmap = ColMapDict('opsimV4')
bundleList = []
subgroup = extraMetadata
if subgroup is None:
subgroup = 'All visits'
raCol, decCol, degrees, ditherStacker, ditherMeta = radecCols(ditherStacker, colmap, ditherkwargs)
extraMetadata = combineMetadata(extraMetadata, ditherMeta)
# Set up basic all and per filter sql constraints.
filterlist, colors, orders, sqls, metadata = filterList(all=True,
extraSql=extraSql,
extraMetadata=extraMetadata)
# Set up some values to make nicer looking plots.
benchmarkVals = mafUtils.scaleBenchmarks(runLength, benchmark='design')
# Check that nvisits is not set to zero (for very short run length).
for f in benchmarkVals['nvisits']:
if benchmarkVals['nvisits'][f] == 0:
print('Updating benchmark nvisits value in %s to be nonzero' % (f))
benchmarkVals['nvisits'][f] = 1
benchmarkVals['coaddedDepth'] = mafUtils.calcCoaddedDepth(benchmarkVals['nvisits'],
benchmarkVals['singleVisitDepth'])
# Scale the nvisit ranges for the runLength.
nvisitsRange = {'u': [20, 80], 'g': [50, 150], 'r': [100, 250],
'i': [100, 250], 'z': [100, 300], 'y': [100, 300], 'all': [700, 1200]}
scale = runLength / 10.0
for f in nvisitsRange:
for i in [0, 1]:
nvisitsRange[f][i] = int(np.floor(nvisitsRange[f][i] * scale))
# Generate Nvisit maps in all and per filters
displayDict = {'group': 'Nvisits Maps', 'subgroup': subgroup}
metric = metrics.CountMetric(colmap['mjd'], metricName='NVisits', units='')
slicer = slicers.HealpixSlicer(nside=nside, latCol=decCol, lonCol=raCol,
latLonDeg=degrees)
for f in filterlist:
sql = sqls[f]
displayDict['caption'] = 'Number of visits per healpix in %s.' % metadata[f]
displayDict['order'] = orders[f]
binsize = 2
if f == 'all':
binsize = 5
plotDict = {'xMin': nvisitsRange[f][0], 'xMax': nvisitsRange[f][1],
'colorMin': nvisitsRange[f][0], 'colorMax': nvisitsRange[f][1],
'binsize': binsize, 'color': colors[f]}
bundle = mb.MetricBundle(metric, slicer, sql, metadata=metadata[f],
stackerList=ditherStacker,
displayDict=displayDict, plotDict=plotDict,
summaryMetrics=standardSummary())
bundleList.append(bundle)
# Generate Coadded depth maps per filter
displayDict = {'group': 'Coadded M5 Maps', 'subgroup': subgroup}
metric = metrics.Coaddm5Metric(m5Col=colmap['fiveSigmaDepth'], metricName='CoaddM5')
slicer = slicers.HealpixSlicer(nside=nside, latCol=decCol, lonCol=raCol,
latLonDeg=degrees)
for f in filterlist:
# Skip "all" for coadded depth.
if f == 'all':
continue
mag_zp = benchmarkVals['coaddedDepth'][f]
sql = sqls[f]
displayDict['caption'] = 'Coadded depth per healpix, with %s benchmark value subtracted (%.1f) ' \
'in %s.' % (f, mag_zp, metadata[f])
displayDict['caption'] += ' More positive numbers indicate fainter limiting magnitudes.'
displayDict['order'] = orders[f]
plotDict = {'zp': mag_zp, 'xMin': -0.6, 'xMax': 0.6,
'xlabel': 'coadded m5 - %.1f' % mag_zp,
'colorMin': -0.6, 'colorMax': 0.6, 'color': colors[f]}
bundle = mb.MetricBundle(metric, slicer, sql, metadata=metadata[f],
stackerList=ditherStacker,
displayDict=displayDict, plotDict=plotDict,
summaryMetrics=standardSummary())
bundleList.append(bundle)
# Set the runName for all bundles and return the bundleDict.
for b in bundleList:
b.setRunName(runName)
return mb.makeBundlesDictFromList(bundleList)
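# A hedged usage sketch for the bundle dict returned above (opsdb, run name and
# output directory are placeholders; MetricBundleGroup is assumed to be available
# as mb.MetricBundleGroup, the usual rubin_sim driver):
#
#   bdict = nvisitsM5Maps(colmap=getColMap(opsdb), runName='example_run', nside=64)
#   group = mb.MetricBundleGroup(bdict, opsdb, outDir='maf_out', resultsDb=None)
#   group.runAll()
#   group.plotAll()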
def tEffMetrics(colmap=None, runName='opsim',
extraSql=None, extraMetadata=None, nside=64,
ditherStacker=None, ditherkwargs=None):
"""Generate a series of Teff metrics. Teff total, per night, and sky maps (all and per filter).
Parameters
----------
colmap : dict, optional
A dictionary with a mapping of column names. Default will use OpsimV4 column names.
runName : str, optional
The name of the simulated survey. Default is "opsim".
extraSql : str, optional
Additional constraint to add to any sql constraints (e.g. 'propId=1' or 'fieldID=522').
Default None, for no additional constraints.
extraMetadata : str, optional
        Additional metadata to add before any below (e.g. "WFD"). Default is None.
nside : int, optional
Nside value for healpix slicer. Default 64.
If "None" is passed, the healpixslicer-based metrics will be skipped.
ditherStacker: str or rubin_sim.maf.stackers.BaseDitherStacker
Optional dither stacker to use to define ra/dec columns.
ditherkwargs: dict, optional
Optional dictionary of kwargs for the dither stacker.
Returns
-------
metricBundleDict
"""
if colmap is None:
colmap = ColMapDict('opsimV4')
bundleList = []
subgroup = extraMetadata
if subgroup is None:
subgroup = 'All visits'
raCol, decCol, degrees, ditherStacker, ditherMeta = radecCols(ditherStacker, colmap, ditherkwargs)
extraMetadata = combineMetadata(extraMetadata, ditherMeta)
# Set up basic all and per filter sql constraints.
filterlist, colors, orders, sqls, metadata = filterList(all=True,
extraSql=extraSql,
extraMetadata=extraMetadata)
if metadata['all'] is None:
metadata['all'] = 'All visits'
subsetPlots = [plots.HealpixSkyMap(), plots.HealpixHistogram()]
# Total Teff and normalized Teff.
displayDict = {'group': 'T_eff Summary', 'subgroup': subgroup}
displayDict['caption'] = 'Total effective time of the survey (see Teff metric).'
displayDict['order'] = 0
metric = metrics.TeffMetric(m5Col=colmap['fiveSigmaDepth'], filterCol=colmap['filter'],
normed=False, metricName='Total Teff')
slicer = slicers.UniSlicer()
bundle = mb.MetricBundle(metric, slicer, constraint=sqls['all'], displayDict=displayDict,
metadata=metadata['all'])
bundleList.append(bundle)
displayDict['caption'] = 'Normalized total effective time of the survey (see Teff metric).'
displayDict['order'] = 1
metric = metrics.TeffMetric(m5Col=colmap['fiveSigmaDepth'], filterCol=colmap['filter'],
normed=True, metricName='Normalized Teff')
slicer = slicers.UniSlicer()
bundle = mb.MetricBundle(metric, slicer, constraint=sqls['all'], displayDict=displayDict,
metadata=metadata['all'])
bundleList.append(bundle)
# Generate Teff maps in all and per filters
displayDict = {'group': 'T_eff Maps', 'subgroup': subgroup}
if ditherMeta is not None:
for m in metadata:
metadata[m] = combineMetadata(metadata[m], ditherMeta)
metric = metrics.TeffMetric(m5Col=colmap['fiveSigmaDepth'], filterCol=colmap['filter'],
normed=True, metricName='Normalized Teff')
slicer = slicers.HealpixSlicer(nside=nside, latCol=decCol, lonCol=raCol,
latLonDeg=degrees)
for f in filterlist:
displayDict['caption'] = 'Normalized effective time of the survey, for %s' % metadata[f]
displayDict['order'] = orders[f]
plotDict = {'color': colors[f]}
bundle = mb.MetricBundle(metric, slicer, sqls[f], metadata=metadata[f],
stackerList=ditherStacker,
displayDict=displayDict, plotFuncs=subsetPlots, plotDict=plotDict,
summaryMetrics=standardSummary())
bundleList.append(bundle)
# Set the runName for all bundles and return the bundleDict.
for b in bundleList:
b.setRunName(runName)
return mb.makeBundlesDictFromList(bundleList)
def nvisitsPerNight(colmap=None, runName='opsim', binNights=1,
extraSql=None, extraMetadata=None, subgroup=None):
"""Count the number of visits per night through the survey.
Parameters
----------
colmap : dict or None, optional
A dictionary with a mapping of column names. Default will use OpsimV4 column names.
runName : str, optional
The name of the simulated survey. Default is "opsim".
binNights : int, optional
Number of nights to count in each bin. Default = 1, count number of visits in each night.
extraSql : str or None, optional
Additional constraint to add to any sql constraints (e.g. 'propId=1' or 'fieldID=522').
Default None, for no additional constraints.
extraMetadata : str or None, optional
        Additional metadata to add before any below (e.g. "WFD"). Default is None.
subgroup : str or None, optional
Use this for the 'subgroup' in the displayDict, instead of metadata. Default is None.
Returns
-------
metricBundleDict
"""
if colmap is None:
colmap = ColMapDict('opsimV4')
subgroup = subgroup
if subgroup is None:
subgroup = extraMetadata
if subgroup is None:
subgroup = 'All visits'
metadataCaption = extraMetadata
if extraMetadata is None:
if extraSql is not None:
metadataCaption = extraSql
else:
metadataCaption = 'all visits'
bundleList = []
displayDict = {'group': 'Nvisits Per Night', 'subgroup': subgroup}
displayDict['caption'] = 'Number of visits per night for %s.' % (metadataCaption)
displayDict['order'] = 0
metric = metrics.CountMetric(colmap['mjd'], metricName='Nvisits')
slicer = slicers.OneDSlicer(sliceColName=colmap['night'], binsize=binNights)
bundle = mb.MetricBundle(metric, slicer, extraSql, metadata=metadataCaption,
displayDict=displayDict, summaryMetrics=standardSummary())
bundleList.append(bundle)
# Set the runName for all bundles and return the bundleDict.
for b in bundleList:
b.setRunName(runName)
return mb.makeBundlesDictFromList(bundleList)
def nvisitsPerProp(opsdb, colmap=None, runName='opsim', binNights=1, extraSql=None):
"""Set up a group of all and per-proposal nvisits metrics.
Parameters
----------
opsdb : rubin_sim.maf.db.Database or rubin_sim.maf.db.OpsimDatabase* object
colmap : dict or None, optional
A dictionary with a mapping of column names. Default will use OpsimV4 column names.
runName : str, optional
The name of the simulated survey. Default is "opsim".
binNights : int, optional
Number of nights to count in each bin. Default = 1, count number of visits in each night.
sqlConstraint | |
# source: s0hvaperuna/music-player -- src/_database.py
raise DeprecationWarning('Use the new database file')
import logging
import os
import pathlib
import pickle
import threading
import time
from collections import deque
from functools import wraps
from queue import Queue
from random import choice, shuffle
import sqlalchemy
from sqlalchemy import Integer, Column, String, INTEGER, Float, Boolean
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base, declared_attr
from sqlalchemy.orm import class_mapper
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy.orm.attributes import manager_of_class
from sqlalchemy.orm.properties import ColumnProperty
from src.exiftool import ExifTool
from src.globals import GV
from src.queues import SongList
from src.song import Song
from src.utils import get_supported_formats, grouper, print_info, concatenate_numbers
logger = logging.getLogger('debug')
def attribute_names(cls):
return [prop.key for prop in class_mapper(cls).iterate_properties
if isinstance(prop, sqlalchemy.orm.ColumnProperty)]
def get_state_dict(instance):
cls = type(instance)
mgr = manager_of_class(cls)
return cls, dict((key, getattr(instance, key))
for key, attr in mgr.local_attrs.items()
if isinstance(attr.property, ColumnProperty))
def create_from_state_dict(cls, state_dict):
mgr = manager_of_class(cls)
instance = mgr.new_instance()
for key, value in state_dict.items():
setattr(instance, key, value)
return instance
def thread_local_session(func):
@wraps(func)
def decorator_func(self, *args, **kwargs):
session = kwargs.pop('session', None)
remove = False
if session is None:
session = self.Session()
remove = True
retval = func(self, *args, **kwargs, session=session)
if kwargs.pop('commit', True):
session.commit()
if remove:
session.close()
self.Session.remove()
return retval
return decorator_func
class SongBase:
@declared_attr
def __tablename__(cls):
return cls._tablename
id = Column(Integer, primary_key=True)
name = Column(String)
link = Column(String)
Base = declarative_base(cls=SongBase)
# Class that has all the info for the song
class FullSong:
title = Column(String, default=None)
artist = Column(String, default=None)
duration = Column(Float, default=None)
album = Column(String, default=None)
track = Column(INTEGER, default=None)
year = Column(INTEGER, default=None)
band = Column(String, default=None)
play_count = Column(INTEGER, default=0) # How many times the song has been played
rating = Column(INTEGER, default=0)
file_type = Column(String, default='link')
cover_art = Column(String, default=None)
added = Column(INTEGER) # Follows format YYYYMMDD
metadata_set = Column(Boolean, default=False)
class DBSong(Base, FullSong):
_tablename = 'songs'
# DBSong that is in the queue
class QSong(Base):
_tablename = 'queue'
real_id = Column(INTEGER)
played = Column(Boolean, default=False)
class TempSong(Base, FullSong):
_tablename = 'searched'
Playlist = declarative_base(cls=SongBase)
class PlaylistSong(FullSong):
_tablename = 'playlist'
class EmptyPlaylistError(Exception):
pass
class DBHandler:
def __init__(self, name: str, session_manager):
self.name = name if name.endswith('.db') else name + '.db'
success = self._connect_to_db()
if not success:
raise ConnectionError('Could not connect to database')
self.session_manager = session_manager
self._load_history()
self.queue_pos = self.session_manager.index
self.session_manager.queues[GV.MainQueue] = self.load_queue('data/queue.dat')
self.session_manager.queues[GV.SecondaryQueue] = self.load_queue('data/second_q.dat')
self.playlist = None
def _connect_to_db(self):
path = 'databases'
if not os.path.exists(path):
try:
os.mkdir(path)
except WindowsError:
return False
path = os.path.join(path, self.name)
self.engine = create_engine('sqlite:///%s' % path)
self.engine.execute('PRAGMA encoding = "UTF-8"')
DBSong.metadata.create_all(self.engine)
session_factory = sessionmaker(bind=self.engine, expire_on_commit=False)
self.Session = scoped_session(session_factory)
return True
def _load_history(self):
self.history = deque(maxlen=30)
if os.path.exists('data/history.dat'):
with open('data/history.dat', 'rb') as f:
_history = pickle.load(f)
for item in _history:
self.history.append(create_from_state_dict(DBSong, item))
@thread_local_session
def get_state_updated(self, instance, session=None):
self.refresh_item(instance, session=session)
return get_state_dict(instance)
@thread_local_session
def refresh_item(self, instance, session=None):
session.add(instance)
session.refresh(instance)
@staticmethod
def delete_history():
try:
os.remove(os.path.join(os.getcwd(), 'data', 'history.dat'))
except OSError:
pass
def save_history(self):
self.save_list(self.history, 'data/history.dat')
@thread_local_session
def q2q(self, session=None):
for item in self.items(QSong, session=session).all():
song = self.items(DBSong, session=session).filter_by(id=item.real_id).first()
if isinstance(song, DBSong):
self.main_queue.append(song)
def shuffle_queue(self):
shuffle(self.main_queue)
@property
def main_queue(self):
return self.session_manager.queues.get(GV.MainQueue, [])
@property
def secondary_queue(self):
return self.session_manager.queues.get(GV.SecondaryQueue, [])
@thread_local_session
def queue(self, session=None):
if len(self.main_queue) == 0:
q = self.items(DBSong, session=session).all()
items = []
for idx, item in enumerate(q):
items.append(Song(item, self, self.session_manager.downloader, idx))
self.set_queue(items)
return self.main_queue
def clear_main_queue(self):
self.main_queue.clear()
def set_queue(self, queue):
self.session_manager.queues[GV.MainQueue] = queue
def add_to_second_queue(self, item):
self.secondary_queue.append(item)
def clear_second_queue(self):
self.secondary_queue.clear()
@thread_local_session
def save_list(self, queue, filename, session=None):
if len(queue) == 0:
try:
os.remove(filename)
except OSError:
pass
return
_queue = SongList(cls=type(queue[0].song))
for song in queue:
song.refresh()
song = song.song
cls = type(song)
state_dict = {'id': song.id, 'link': song.link}
if not isinstance(cls, _queue.cls):
_queue.append((cls, state_dict))
else:
_queue.append(state_dict)
with open(filename, 'wb') as f:
pickle.dump(_queue, f)
def save_queue(self):
self.save_list(self.session_manager.queues[GV.MainQueue], 'data/queue.dat')
@thread_local_session
def load_queue(self, filename, queue=None, session=None):
if queue is None:
queue = []
if not os.path.exists(filename):
return queue
with open(filename, 'rb') as f:
_queue = pickle.load(f)
query = session.query(_queue.cls)
for idx, item in enumerate(_queue):
try:
if isinstance(item, tuple):
cls, item = item[0], item[1]
q = session.query(cls)
else:
q = query
except Exception as e:
logger.exception('Could not get song instance.\n %s' % e)
continue
            song = q.filter_by(id=item['id'], link=item['link']).first()
if song is not None:
session.expunge(song)
queue.append(Song(song, self, self.session_manager.downloader, idx))
else:
print('song removed')
return queue
@thread_local_session
def shutdown(self, session=None):
session.commit()
self.save_history()
self.save_queue()
self.save_list(self.secondary_queue, 'data/second_q.dat')
def get_from_history(self, idx=-1):
try:
return self.history[idx]
except IndexError:
return
def get_from_queue(self, idx=0):
try:
return self.main_queue[idx]
except IndexError:
return
@staticmethod
def _get_random_by_play_count(query):
# Sort list by play count
query = sorted(query, key=lambda x: x.play_count)
play_count = query[0].play_count
# Get all _songs with the lowest play count
query = [x for x in query if x.play_count == play_count]
return choice(query)
@thread_local_session
def items(self, cls, session=None):
return session.query(cls)
@thread_local_session
def get_random_song(self, session=None):
songs = session.query(DBSong).all()
if not len(songs) > 0:
print('Empty playlist')
return None
song = self._get_random_by_play_count(songs)
return song
def add_to_history(self, item):
self.history.append(item)
@thread_local_session
def increment_playcount(self, item, session=None):
song = self.items(DBSong, session=session).filter_by(link=item.link, name=item.name).first()
if song:
song.play_count += 1
else:
print('Item "%s" not found with id %s' % (item, item.id))
logger.info('Error while incrementing play counts. {}'.format(vars(item)))
@thread_local_session
def filter_from_database(self, session=None, **filters):
items = session.query(DBSong)
return items.filter_by(**filters).all()
@thread_local_session
def set_up_shuffled_queue(self, query=None, session=None, cls=QSong):
if query is None:
query = session.query(DBSong)
songs = query.all()
if not songs:
print('Empty playlist')
return
shuffle(songs)
for song in songs:
self.add_song(song.name, song.link, cls=cls, commit=False, real_id=song.id, session=session)
session.commit()
return session.query(QSong)
@thread_local_session
def get_duration(self, item, session=None):
if isinstance(item, FullSong):
return item.duration
else:
item = self.items(DBSong, session=session).filter_by(link=item.link, name=item.name).first()
return item.duration if item else 0
@thread_local_session
def get_from_shuffled(self, session=None):
query = session.query(QSong).all()
if not query:
query = self.set_up_shuffled_queue().all()
try:
song = query[self.queue_pos]
except IndexError:
return print('End of queue')
self.queue_pos += 1
if song:
real_song = self.items(DBSong, session=session).filter_by(id=song.real_id, link=song.link, name=song.name).first()
return real_song
@staticmethod
def get_item_type(item_link):
item_type = 'link'
if os.path.exists(item_link):
if os.path.isfile(item_link):
item_type = 'file'
elif os.path.isdir(item_link):
item_type = 'dir'
return item_type
@thread_local_session
def _add_song(self, song, session=None, commit=True):
session.add(song)
@staticmethod
def setup_filename(name):
if not os.path.isabs(name):
return os.path.realpath(name)
return name
@thread_local_session
def add_song(self, name, link, item_type=None, cls=DBSong, session=None, commit=True, **kwargs):
if not item_type:
item_type = self.get_item_type(link)
if item_type == 'file':
link = self.setup_filename(link)
if item_type == 'dir':
# TODO add all dir files
print('skipped folder %s' % link)
return
kwargs.pop('added', None) # Because the song is added now the old entry is removed
if hasattr(cls, 'added'):
kwargs['added'] = self.get_time()
song = cls(name=name, link=link, **kwargs)
if getattr(cls, 'file_type', False):
song.file_type = item_type
self._add_song(song, session=session, commit=commit)
return song
def get_thread_local_session(self):
return self.Session()
@thread_local_session
def get_temp_song(self, name, link, item_type='link', commit=True, session=None, **kwargs):
if not item_type:
item_type = self.get_item_type(link)
if item_type == 'file':
link = self.setup_filename(link)
if item_type == 'dir':
# TODO add all dir files
print('skipped folder %s' % link)
return
kwargs['added'] = self.get_time()
song = self.items(TempSong, session=session).filter_by(link=link).first()
if song is None:
song = TempSong(name=name, link=link, **kwargs)
self._add_song(song, session=session, commit=commit)
return song
@staticmethod
def _dict_info(d, include_mt=False):
mt = {}
if include_mt:
for var in vars(DBSong):
if not var.startswith('_'):
mt[var] = d.get(var, None)
return mt
else:
mt['name'] = d.get('name')
mt['link'] = d.get('link')
return mt
@staticmethod
def _item_info(item, include_mt=False):
mt = {}
if include_mt:
for var in vars(DBSong):
if not var.startswith('_'):
mt[var] = getattr(item, var, None)
return mt
else:
mt['name'] = getattr(item, 'name')
mt['link'] = getattr(item, 'link')
return mt
@staticmethod
def _list_info(l):
return {'name': l[0], 'link': l[1]}
def _get_info_from_item(self, item, include_metadata=False):
"""
Gets the name and link from the provided object
"""
if isinstance(item, dict):
return self._dict_info(item, include_metadata)
elif isinstance(item, list) or isinstance(item, tuple):
return self._list_info(item)
else:
return self._item_info(item, include_metadata)
@thread_local_session
def add_songs(self, songs, cls=DBSong, include_metadata=False, session=None):
for song in songs:
try:
kwargs = self._get_info_from_item(song, include_metadata)
if 'name' not in kwargs or 'link' not in kwargs:
print('Link or name is None. Link and name must be specified in the object\nSkipping song "{}"'.format(song))
continue
except Exception as e:
print('Skipping %s because of an error\n%s' % (song, e))
continue
name, link = kwargs.pop('name'), kwargs.pop('link')
self.add_song(name, link, commit=False, cls=cls, session=session, **kwargs)
def add_from_file(self, filename, delim=' -<>- ', link_first=True,
link_format='https://www.youtube.com/watch?v={}',
custom_parser=None):
"""
Args:
filename:
Name of the file read
delim:
What separates the link and name
link_first:
If the file has links before the names
link_format:
A string that the method format can be called with the link as its
parameter. It's useful when the link is only an id
custom_parser:
A | |
__all__ = ["ReferrerMixin"]
import hashlib
import logging
from urllib.parse import quote_plus
import requests
import ratelimit
from django.conf import settings
from django.contrib.auth import REDIRECT_FIELD_NAME
from django.db import transaction
from django.db.models import Q
from django.forms.widgets import Media
from django.http import (
HttpResponse, HttpResponseRedirect, HttpResponseServerError
)
from django.test import Client
from django.utils.translation import gettext
from spkcspider.constants import TokenCreationError
from spkcspider.utils.security import get_hashob
from spkcspider.utils.settings import get_settings_func
from spkcspider.utils.urls import merge_get_url
from ..conf import VALID_INTENTIONS, VALID_SUB_INTENTIONS, get_requests_params
from ..models import AuthToken, ReferrerObject
logger = logging.getLogger(__name__)
_extra = '' if settings.DEBUG else '.min'
class ReferrerMixin(object):
allow_domain_mode = False
def get_context_data(self, **kwargs):
kwargs["token_strength"] = None
# will be overwritten in referring path so there is no interference
kwargs["referrer"] = None
kwargs["intentions"] = set()
if self.request.auth_token:
kwargs["referrer"] = self.request.auth_token.referrer
kwargs["token_strength"] = self.request.auth_token.extra.get(
"strength", None
)
kwargs["intentions"].update(self.request.auth_token.extra.get(
"intentions", []
))
return super().get_context_data(**kwargs)
def test_token(self, minstrength, force_token=False, taint=False):
if "intention" in self.request.GET or "referrer" in self.request.GET:
# validate early, before auth
intentions = set(self.request.GET.getlist("intention"))
if not VALID_INTENTIONS.issuperset(
intentions
):
return HttpResponse(
"invalid intentions", status=400
)
if "domain" in intentions:
if not self.clean_domain_upgrade(
{"intentions": intentions},
False
):
return HttpResponse(
"invalid domain upgrade", status=400
)
# requires token
force_token = True
else:
# can be either domain or auth to not have taint flag
if "auth" not in intentions:
taint = True
# maximal one main intention
if len(intentions.difference(VALID_SUB_INTENTIONS)) > 1:
return HttpResponse(
"invalid intentions", status=400
)
minstrength = 4
return super().test_token(minstrength, force_token, taint)
def refer_with_post(self, context, token):
        # application/x-www-form-urlencoded is best here,
        # as it is compatible with most webservers;
        # client-side rdf is no problem
        # NOTE: csrf must be disabled or a csrf token from GET must be used,
        # as there is no way to know the token value here
h = hashlib.sha256(context["referrer"].encode("utf8")).hexdigest()
def h_fun(*a):
return h
# rate limit on errors
if ratelimit.get_ratelimit(
request=self.request,
group="refer_with_post.refer_with_post",
key=h_fun,
rate=settings.SPIDER_DOMAIN_ERROR_RATE,
inc=False
)["request_limit"] > 0:
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="post_failed",
error="error_rate_limit"
)
)
d = {
"token": token.token,
"hash_algorithm": settings.SPIDER_HASH_ALGORITHM.name,
"action": context["action"],
}
if context["payload"] is not None:
d["payload"] = context["payload"]
params, inline_domain = get_requests_params(context["referrer"])
if inline_domain:
response = Client().post(
context["referrer"],
data=d,
Connection="close",
Referer=merge_get_url(
"%s%s" % (
context["hostpart"],
self.request.path
)
# sending full url not required anymore, payload
),
SERVER_NAME=inline_domain
)
if response.status_code != 200:
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="post_failed",
error="other"
)
)
else:
try:
with requests.post(
context["referrer"],
data=d,
headers={
"Referer": merge_get_url(
"%s%s" % (
context["hostpart"],
self.request.path
)
# sending full url not required anymore, payload
),
"Connection": "close"
},
**params
) as resp:
resp.raise_for_status()
except requests.exceptions.SSLError as exc:
logger.info(
"referrer: \"%s\" has a broken ssl configuration",
context["referrer"], exc_info=exc
)
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="post_failed",
error="ssl"
)
)
except (Exception, requests.exceptions.HTTPError) as exc:
apply_error_limit = False
if isinstance(
exc, (
requests.exceptions.ConnectionError,
requests.exceptions.Timeout
)
):
apply_error_limit = True
elif (
isinstance(exc, requests.exceptions.HTTPError) and
exc.response.status_code >= 500
):
apply_error_limit = True
if apply_error_limit:
ratelimit.get_ratelimit(
request=self.request,
group="refer_with_post",
key=h_fun,
rate=settings.SPIDER_DOMAIN_ERROR_RATE,
inc=True
)
logger.info(
"post failed: \"%s\" failed",
context["referrer"], exc_info=exc
)
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="post_failed",
error="other"
)
)
context["post_success"] = True
h = get_hashob()
h.update(token.token.encode("utf-8", "ignore"))
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="success",
hash=h.finalize().hex()
)
)
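    # Hedged sketch of the referrer-side check implied by the redirect above (not
    # part of this codebase): the referrer hashes the token it received via POST
    # with the announced hash_algorithm and compares it to the 'hash' query
    # parameter, tying the POST and the redirect to the same token.
    #
    #   expected = hashlib.new(hash_algorithm, token.encode('utf-8')).hexdigest()
    #   assert request.GET['hash'] == expected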
def refer_with_get(self, context, token):
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
token=token.token,
payload=context["payload"]
)
)
def clean_domain_upgrade(self, context, token):
if "referrer" not in self.request.GET:
return False
# domain mode must be used alone
if len(context["intentions"]) > 1:
return False
if not context["intentions"].issubset(VALID_INTENTIONS):
return False
if not getattr(self.request, "_clean_domain_upgrade_checked", False):
if ratelimit.get_ratelimit(
request=self.request,
group="clean_domain_upgrade",
key=("get", {"IP": True, "USER": True}),
rate=settings.SPIDER_DOMAIN_UPDATE_RATE,
inc=True
)["request_limit"] > 0:
return False
setattr(self.request, "_clean_domain_upgrade_checked", True)
# False for really no token
if token is False:
return True
if not token or token.extra.get("strength", 0) >= 10:
return False
return True
def clean_refer_intentions(self, context, token=None):
# Only owner can use other intentions than domain
if not self.request.is_owner:
return False
# Second error: invalid intentions
# this is the second time the validation will be executed
# in case test_token path is used
# this is the first time the validation will be executed
# in case has_special_access path is used
if not context["intentions"].issubset(VALID_INTENTIONS):
return False
# auth is only for self.requesting component auth
if "auth" in context["intentions"]:
return False
# maximal one main intention
if len(context["intentions"].difference(VALID_SUB_INTENTIONS)) > 1:
return False
# "persist" or default can be serverless other intentions not
# this way rogue client based attacks are prevented
if "persist" in context["intentions"]:
if not self.usercomponent.features.filter(
name="Persistence"
).exists():
return False
else:
if context["is_serverless"] and len(context["intentions"]) != 1:
return False
if not token:
return True
####### with token ######## # noqa: 266E
# cannot add sl intention to existing intentions
if "sl" in context["intentions"].difference(
token.extra.get("intentions", ["sl"])
):
return False
if "persist" in context["intentions"]:
# set persist = true, (false=-1)
token.persist = 0
# if possible, pin to anchor
if self.usercomponent.primary_anchor:
token.persist = self.usercomponent.primary_anchor_id
else:
token.persist = -1
if not token.extra.get("initial_referrer_url", None):
token.extra["initial_referrer_url"] = "{}://{}{}".format(
self.request.scheme,
self.request.get_host(),
self.request.path
)
return True
def handle_domain_auth(self, context, token):
assert token or self.request.is_special_user, \
"special user and no token"
context.setdefault("payload", None)
context.setdefault("action", "create")
if not self.allow_domain_mode:
return HttpResponse(
status=400,
content='domain mode disallowed'
)
if not self.clean_domain_upgrade(context, token):
return HttpResponse(
status=400,
content='Invalid token'
)
token.initialize_token()
token.referrer = ReferrerObject.objects.get_or_create(
url=context["referrer"]
)[0]
token.extra["intentions"] = list(context["intentions"])
try:
token.save()
except TokenCreationError:
logger.exception("Token creation failed")
return HttpResponseServerError(
"Token creation failed, try again"
)
context["post_success"] = False
ret = self.refer_with_post(context, token)
if not context["post_success"]:
token.delete()
return ret
def handle_referrer_request(
self, context, token, keep=False, dontact=False, no_oldtoken=False
):
"""
no_oldtoken: don't use old token for calculating:
old_ids, old_search
(performance and manual old_* possible)
dontact: dont act if probing results fails
"""
_ = gettext
context.setdefault("action", "create")
context.setdefault("model", self.model)
context.setdefault("payload", None)
context.setdefault("ids", set())
context.setdefault("filter", set())
context.setdefault("old_ids", set())
context.setdefault("old_search", set())
context["is_serverless"] = "sl" in context["intentions"]
if keep is True:
context["ids"].update(token.extra.get("ids", []))
context["search"].update(token.extra.get("filter", []))
action = self.request.POST.get("action", None)
if context["action"].endswith("invalid"):
action = context["action"]
if action == "confirm":
newtoken = None
# if persist try to find old token
if "persist" in context["intentions"]:
oldtoken = AuthToken.objects.filter(
Q(persist__gte=0, usercomponent=token.usercomponent),
referrer__url=context["referrer"]
).first()
if oldtoken:
newtoken = token
token = oldtoken
# either reuse persistent token with auth token tokenstring
# or just reuse auth token
if newtoken:
# steal token value
token.token = newtoken.token
# set to zero as prot_strength can elevate perms
token.extra["taint"] = False
token.extra["prot_strength"] = 0
token.extra["intentions"] = list(context["intentions"])
token.extra.pop("request_referrer", None)
token.extra.pop("request_intentions", None)
token.extra.pop("request_search", None)
if not self.clean_refer_intentions(context, token):
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
error="intentions_incorrect"
)
)
token.extra["search"] = list(context["search"])
if "live" in context["intentions"]:
token.extra.pop("ids", None)
else:
token.extra["ids"] = list(context["ids"])
token.referrer = ReferrerObject.objects.get_or_create(
url=context["referrer"]
)[0]
# after cleanup, save
try:
with transaction.atomic():
# must be done here, elsewise other token can (unlikely)
# take token as it is free for a short time, better be safe
if newtoken:
newtoken.delete()
token.save()
except TokenCreationError:
logger.exception("Token creation failed")
return HttpResponseServerError(
_("Token creation failed, try again")
)
if context["is_serverless"]:
context["post_success"] = True
ret = self.refer_with_get(context, token)
else:
context["post_success"] = False
ret = self.refer_with_post(context, token)
if dontact:
return ret
if not context["post_success"]:
if newtoken:
logger.warning(
"Updating persisting token failed"
)
else:
token.delete()
return ret
elif action == "cancel":
if not dontact:
token.delete()
return HttpResponseRedirect(
redirect_to=merge_get_url(
context["referrer"],
status="canceled",
payload=context["payload"]
)
)
else:
if (
no_oldtoken and
"persist" in context["intentions"]
):
oldtoken = AuthToken.objects.filter(
Q(persist__gte=0, usercomponent=token.usercomponent),
referrer__url=context["referrer"],
).first()
if oldtoken:
context["old_ids"].update(oldtoken.extra.get("ids", []))
context["old_search"].update(oldtoken.extra.get(
"filter", []
))
if not self.clean_refer_intentions(context, token):
return HttpResponse(
status=400,
content=_('Error: intentions incorrect')
)
context["object_list"] = context["model"].objects.filter(
id__in=context["ids"]
)
# remove other media
context["media"] = Media(
css={
'all': [
'node_modules/choices.js/public/assets/styles/choices%s.css' % _extra # noqa:E501
]
},
js=[
'node_modules/choices.js/public/assets/scripts/choices%s.js' % _extra, # noqa: E501
]
)
return self.response_class(
request=self.request,
template=self.get_referrer_template_names(),
context=context,
using=self.template_engine,
content_type=self.content_type
)
def handle_referrer(self):
_ = gettext
if (
self.request.user != self.usercomponent.user and
not self.request.auth_token
):
if self.request.user.is_authenticated:
return self.handle_no_permission()
return HttpResponseRedirect(
redirect_to="{}?{}={}".format(
self.get_login_url(),
REDIRECT_FIELD_NAME,
quote_plus(
merge_get_url(
self.request.build_absolute_uri(),
token=None
)
)
)
)
context = self.get_context_data()
context["action"] = "create"
context["intentions"] = set(self.request.GET.getlist("intention"))
if "referrer" in self.request.POST:
context["referrer"] = merge_get_url(self.request.POST["referrer"])
if not get_settings_func(
"SPIDER_URL_VALIDATOR",
"spkcspider.apps.spider.functions.validate_url_default"
)(context["referrer"], self):
context["action"] = "referrer_invalid"
else:
context["referrer"] = merge_get_url(self.request.GET["referrer"])
if not get_settings_func(
"SPIDER_URL_VALIDATOR",
"spkcspider.apps.spider.functions.validate_url_default"
)(context["referrer"], self):
return HttpResponse(
status=400,
content=_('Insecure url: %(url)s') % {
"url": context["referrer"]
}
)
context["payload"] = self.request.GET.get("payload", None)
token = self.request.auth_token
if not token:
"""
Copyright 2021 Merck & Co., Inc. Kenilworth, NJ, USA.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
"""
#from .fixtures import AUTH_HEADERS, BASE_API_PATH
from requests import get as reqget, post as reqpost, delete as reqdel
from os import getenv
BASE_API_PATH = getenv('USER_FACING_API_HTTP_PATH', 'http://dp-api:9000')
AUTH_HEADERS = {'X-Username': 'test-developer', 'X-Api-Key': getenv('INTTESTPASS', 'local-developer')}
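# Every test below issues an authenticated GET and asserts a 200 status with a
# non-empty JSON body. A helper like the following could factor out that
# repeated pattern; it is only a sketch and is not used by the tests below.
def _get_json_ok(path):
    res = reqget(f'{BASE_API_PATH}{path}', headers=AUTH_HEADERS)
    assert res.status_code == 200
    assert res.json() != {}
    return res.json()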
def test_annotations_datasets():
# routes: 1231
# CustomAnnotationController
# GET /annotations/datasets controllers.CustomAnnotationController.listAllDatasetAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/datasets'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
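# The route signature above also documents an optional ``max`` result limit.
# A sketch of exercising it (the query-parameter name is assumed from the
# route comment and is not verified against the controller):
def _sketch_list_datasets_with_max(limit=5):
    res = reqget(f'{BASE_API_PATH}/annotations/datasets',
                 headers=AUTH_HEADERS, params={'max': limit})
    assert res.status_code == 200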
def test_annotations_tables():
# routes: 1246
# CustomAnnotationController
# GET /annotations/tables controllers.CustomAnnotationController.listAllTableAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/tables'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_columns():
# routes: 1261
# CustomAnnotationController
# GET /annotations/columns controllers.CustomAnnotationController.listAllColumnAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/columns'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_hierarchy_dataset_dataset():
# routes: 1276
# CustomAnnotationController
# GET /annotations/hierarchy/dataset/:dataset controllers.CustomAnnotationController.listDatasetHierarchyAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/hierarchy/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_hierarchy_dataset_dataset_table_table():
# routes: 1291
# CustomAnnotationController
# GET /annotations/hierarchy/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableHierarchyAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/hierarchy/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_dataset_dataset():
# routes: 1306
# CustomAnnotationController
# GET /annotations/dataset/:dataset controllers.CustomAnnotationController.listDatasetAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_dataset_dataset_table_table():
# routes: 1321
# CustomAnnotationController
# GET /annotations/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_dataset_dataset_table_table_column_column():
# routes: 1336
# CustomAnnotationController
# GET /annotations/dataset/:dataset/table/:table/column/:column controllers.CustomAnnotationController.listColumnAnnotations(request: Request, dataset: String, table: String, column: String, max: java.util.Optional[Integer])
dataset = 'int_cell_level_visibilities_test_data'
table = 'int_cell_level_visibilities_test_data'
column = 'Location'
req_path = f'{BASE_API_PATH}/annotations/dataset/{dataset}/table/{table}/column/{column}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_datasets():
# routes: 1352
# CustomAnnotationController
# GET /annotations/system/datasets controllers.CustomAnnotationController.listAllDatasetSystemAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/system/datasets'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_tables():
# routes: 1367
# CustomAnnotationController
# GET /annotations/system/tables controllers.CustomAnnotationController.listAllTableSystemAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/system/tables'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_columns():
# routes: 1382
# CustomAnnotationController
# GET /annotations/system/columns controllers.CustomAnnotationController.listAllColumnSystemAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/system/columns'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_hierarchy_dataset_dataset():
# routes: 1397
# CustomAnnotationController
# GET /annotations/system/hierarchy/dataset/:dataset controllers.CustomAnnotationController.listDatasetHierarchySystemAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/system/hierarchy/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_hierarchy_dataset_dataset_table_table():
# routes: 1412
# CustomAnnotationController
# GET /annotations/system/hierarchy/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableHierarchySystemAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/system/hierarchy/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_dataset_dataset():
# routes: 1427
# CustomAnnotationController
# GET /annotations/system/dataset/:dataset controllers.CustomAnnotationController.listDatasetSystemAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/system/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_dataset_dataset_table_table():
# route: 1442
# CustomAnnotationController
# GET /annotations/system/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableSystemAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/system/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_dataset_dataset_table_table_column_column():
# route: 1457
# CustomAnnotationController
# GET /annotations/system/dataset/:dataset/table/:table/column/:column controllers.CustomAnnotationController.listColumnSystemAnnotations(request: Request, dataset: String, table: String, column: String, max: java.util.Optional[Integer])
dataset = 'int_cell_level_visibilities_test_data'
table = 'int_cell_level_visibilities_test_data'
column = 'Location'
req_path = f'{BASE_API_PATH}/annotations/system/dataset/{dataset}/table/{table}/column/{column}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_datasets():
# route: 1473
# CustomAnnotationController
# GET /annotations/tour/datasets controllers.CustomAnnotationController.listAllDatasetTourAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/tour/datasets'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_tables():
# route: 1488
# CustomAnnotationController
# GET /annotations/tour/tables controllers.CustomAnnotationController.listAllTableTourAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/tour/tables'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_columns():
# route: 1503
# CustomAnnotationController
# GET /annotations/tour/columns controllers.CustomAnnotationController.listAllColumnTourAnnotations(request: Request, max: java.util.Optional[Integer])
req_path = f'{BASE_API_PATH}/annotations/tour/columns'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_hierarchy_dataset_dataset():
# route: 1518
# CustomAnnotationController
# GET /annotations/tour/hierarchy/dataset/:dataset controllers.CustomAnnotationController.listDatasetHierarchyTourAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/tour/hierarchy/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_hierarchy_dataset_dataset_table_table():
# route: 1533
# CustomAnnotationController
# GET /annotations/tour/hierarchy/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableHierarchyTourAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/tour/hierarchy/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_dataset_dataset():
# route: 1548
# CustomAnnotationController
# GET /annotations/tour/dataset/:dataset controllers.CustomAnnotationController.listDatasetTourAnnotations(request: Request, dataset: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/tour/dataset/{dataset}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_dataset_dataset_table_table():
# route: 1563
# CustomAnnotationController
# GET /annotations/tour/dataset/:dataset/table/:table controllers.CustomAnnotationController.listTableTourAnnotations(request: Request, dataset: String, table: String, max: java.util.Optional[Integer])
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/tour/dataset/{dataset}/table/{table}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tour_dataset_dataset_table_table_column_column():
# route: 1578
# CustomAnnotationController
# GET /annotations/tour/dataset/:dataset/table/:table/column/:column controllers.CustomAnnotationController.listColumnTourAnnotations(request: Request, dataset: String, table: String, column: String, max: java.util.Optional[Integer])
dataset = 'int_cell_level_visibilities_test_data'
table = 'int_cell_level_visibilities_test_data'
column = 'Location'
req_path = f'{BASE_API_PATH}/annotations/tour/dataset/{dataset}/table/{table}/column/{column}'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_dataset_counts():
# route: 1594
# CustomAnnotationController
# GET /annotations/datasets/counts controllers.CustomAnnotationController.countAllDatasetAnnotations(request: Request)
    req_path = f'{BASE_API_PATH}/annotations/datasets/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_tables_counts():
# route: 1609
# CustomAnnotationController
# GET /annotations/tables/counts controllers.CustomAnnotationController.countAllTableAnnotations(request: Request)
req_path = f'{BASE_API_PATH}/annotations/tables/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_columns_counts():
# route: 1624
# CustomAnnotationController
# GET /annotations/columns/counts controllers.CustomAnnotationController.countAllColumnAnnotations(request: Request)
req_path = f'{BASE_API_PATH}/annotations/columns/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_counts():
# route: 1639
# CustomAnnotationController
# GET /annotations/counts controllers.CustomAnnotationController.countAllAnnotations(request: Request)
req_path = f'{BASE_API_PATH}/annotations/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_hierarchy_dataset_dataset_counts():
# route: 1654
# CustomAnnotationController
# GET /annotations/hierarchy/dataset/:dataset/counts controllers.CustomAnnotationController.countAnnotationsForDatasetHierarchy(request: Request, dataset: String)
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/hierarchy/dataset/{dataset}/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_hierarchy_table_dataset_table_counts():
# routes: 1669
# CustomAnnotationController
# GET /annotations/hierarchy/table/:dataset/:table/counts controllers.CustomAnnotationController.countAnnotationsForTableHierarchy(request: Request, dataset: String, table: String)
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/hierarchy/table/{dataset}/{table}/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_dataset_dataset_counts():
# routes: 1684
# CustomAnnotationController
# GET /annotations/dataset/:dataset/counts controllers.CustomAnnotationController.countAnnotationsForDataset(request: Request, dataset: String)
dataset = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/dataset/{dataset}/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_table_dataset_table_counts():
# routes: 1699
# CustomAnnotationController
# GET /annotations/table/:dataset/:table/counts controllers.CustomAnnotationController.countAnnotationsForTable(request: Request, dataset: String, table: String)
dataset = 'int_basic_test_data'
table = 'int_basic_test_data'
req_path = f'{BASE_API_PATH}/annotations/table/{dataset}/{table}/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_column_dataset_table_column_counts():
# routes: 1714
# CustomAnnotationController
# GET /annotations/column/:dataset/:table/:column/counts controllers.CustomAnnotationController.countAnnotationsForColumn(request: Request, dataset: String, table: String, column: String)
dataset = 'int_cell_level_visibilities_test_data'
table = 'int_cell_level_visibilities_test_data'
column = 'Location'
req_path = f'{BASE_API_PATH}/annotations/column/{dataset}/{table}/{column}/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_datasets_counts():
# routes: 1730
# CustomAnnotationController
# GET /annotations/system/datasets/counts controllers.CustomAnnotationController.countAllDatasetSystemAnnotations(request: Request)
    req_path = f'{BASE_API_PATH}/annotations/system/datasets/counts'
res = reqget(req_path, headers=AUTH_HEADERS)
assert res.status_code == 200
assert res.json() != {}
def test_annotations_system_tables_counts():
# routes: 1745
# CustomAnnotationController
# GET /annotations/system/tables/counts controllers.CustomAnnotationController.countAllTableSystemAnnotations(request: Request)
req_path = f'{BASE_API_PATH}/annotations/system/tables/counts'
    res = reqget(req_path, headers=AUTH_HEADERS)
    assert res.status_code == 200
    assert res.json() != {}
import json
from PyQt5 import QtCore
from PyQt5.QtCore import QLocale
from PyQt5.QtWidgets import QDialog
from PyQt5.QtWidgets import QVBoxLayout, QHBoxLayout, QPushButton
from narwhallet.control.shared import MShared
from narwhallet.core.kex import KEXclient
from narwhallet.core.ksc import Scripts
from narwhallet.core.ksc.utils import Ut
from narwhallet.core.kcl.bip_utils.base58 import Base58Decoder
from narwhallet.core.kcl.cache import MCache
from narwhallet.core.kcl.transaction.builder.sighash import SIGHASH_TYPE
from narwhallet.core.kcl.transaction import keva_psbt
from narwhallet.core.kcl.wallet.wallets import MWallets
from narwhallet.core.kcl.wallet.wallet import MWallet
from narwhallet.core.kcl.transaction import MTransactionBuilder
from narwhallet.core.kui.ux.widgets.wallet_combobox import WalletComboBox
from narwhallet.core.kui.ux.widgets.amount_input import AmountInput
from narwhallet.core.kui.ux.widgets.address_input import AddressInput
from narwhallet.core.kui.ux.widgets.address_combobox import AddressComboBox
from narwhallet.core.kui.ux.widgets.namespace_combobox import NamespaceComboBox
from narwhallet.core.kui.ux.widgets.namespace_special_keys_combobox import NamespaceSpecialKeysComboBox
from narwhallet.core.kui.ux.widgets.namespace_key_input import NamespaceKeyInput
from narwhallet.core.kui.ux.widgets.namespace_value_input import NamespaceValueInput
from narwhallet.core.kui.ux.widgets.transaction_input import TransactionInput
from narwhallet.core.kui.ux.widgets.send_info_frame import SendInfoFrame
from narwhallet.core.kui.ux.widgets.auction_info_frame import AuctionInfoFrame
from narwhallet.core.kui.ux.widgets.dialog_buttonbox import DialogButtonBox
from narwhallet.core.kui.ux.widgets.generator import UShared
TEMP_TX = 'c1ec98af03dcc874e2c1cf2a799463d14fb71bf29bec4f6b9ea68a38a46e50f2'
NS_RESERVATION = 1000000
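# TEMP_TX looks like a placeholder transaction id (assumption: likely used when
# sizing draft transactions elsewhere in the wallet). NS_RESERVATION is the
# output amount, in base units, reserved for namespace operations: 1000000 of
# the 100000000 base units per coin used below, i.e. 0.01 coins.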
class Ui_send_dlg(QDialog):
def setupUi(self, mode):
self.mode: int = mode
self.wallets: MWallets = None
self.wallet: MWallet = None
self.cache: MCache = None
self.kex: KEXclient = None
self.new_tx = MTransactionBuilder()
self.bid_tx = MTransactionBuilder()
self.raw_tx: str = ''
self.action: str = ''
self.verticalLayout = QVBoxLayout(self)
self.horizontalLayout_1 = QHBoxLayout()
self.wallet_combo = WalletComboBox()
self.ns_combo = NamespaceComboBox()
self.amount_input = AmountInput()
self.address_input = AddressInput()
self.address_combo = AddressComboBox()
self.transaction_input = TransactionInput()
self.special_keys_combo = NamespaceSpecialKeysComboBox()
self.namespace_key_input = NamespaceKeyInput()
self.namespace_value_input = NamespaceValueInput()
self.bid_input = AmountInput()
self.auction_info = AuctionInfoFrame()
self.send_info = SendInfoFrame()
self.buttonBox = DialogButtonBox(self)
self.address_select = QPushButton(self)
# self.setMinimumSize(QtCore.QSize(485, 225))
self.horizontalLayout_1.addWidget(UShared.dialog_header_graphic())
self.verticalLayout.addLayout(self.horizontalLayout_1)
self.verticalLayout.addLayout(self.wallet_combo)
self.verticalLayout.addLayout(self.ns_combo)
self.verticalLayout.addLayout(self.amount_input)
self.verticalLayout.addLayout(self.address_input)
self.verticalLayout.addLayout(self.address_combo)
self.verticalLayout.addWidget(self.address_select)
self.verticalLayout.addLayout(self.transaction_input)
self.verticalLayout.addLayout(self.special_keys_combo)
self.verticalLayout.addLayout(self.namespace_key_input)
self.verticalLayout.addLayout(self.namespace_value_input)
self.verticalLayout.addLayout(self.bid_input)
self.verticalLayout.addWidget(self.auction_info)
self.verticalLayout.addWidget(self.send_info)
self.verticalLayout.addWidget(self.buttonBox)
self.retranslateUi()
self.init_mode()
self.buttonBox.accepted.connect(self.accept)
self.buttonBox.rejected.connect(self.reject)
self.buttonBox.cancel.clicked.connect(self.reject)
self.buttonBox.next.clicked.connect(self.build_send)
self.buttonBox.back.clicked.connect(self.back_click)
(self.wallet_combo.combo.currentTextChanged
.connect(self.wallet_combo_changed))
(self.ns_combo.combo.currentTextChanged
.connect(self.ns_combo_changed))
(self.special_keys_combo.combo.currentTextChanged
.connect(self.special_keys_combo_changed))
self.amount_input.amount.textChanged.connect(self.check_next)
self.address_input.address.textChanged.connect(self.check_next)
self.address_combo.combo.currentTextChanged.connect(self.check_next)
(self.namespace_key_input.key.textChanged
.connect(self.check_ns_key_input))
self.namespace_value_input.value.textChanged.connect(self.check_next)
self.address_select.clicked.connect(self.select_swap)
self.auction_info.nft_desc.textChanged.connect(self.check_next)
self.auction_info.nft_name.textChanged.connect(self.check_next)
self.auction_info.nft_hashtags.textChanged.connect(self.check_next)
self.auction_info.nft_price.textChanged.connect(self.check_next)
def retranslateUi(self):
_translate = QtCore.QCoreApplication.translate
self.setWindowTitle(_translate('send_dlg', 'Narwhallet - Send'))
self.address_select.setText(_translate('send_dlg', 'Book'))
def _mode_simple_send(self):
# Simple Send
self.setWindowTitle('Narwhallet - Send')
self.wallet_combo.show()
self.ns_combo.hide()
self.amount_input.show()
self.address_input.show()
self.address_combo.hide()
self.transaction_input.hide()
self.namespace_key_input.hide()
self.namespace_value_input.hide()
self.bid_input.hide()
self.auction_info.hide()
def _mode_multi_sig_send(self):
# Multi Sig Send
self.setWindowTitle('Narwhallet - Multi-Sig Send')
def _mode_namespace_create(self):
# Namespace Create
self.setWindowTitle('Narwhallet - Create Namespace')
self.wallet_combo.show()
self.ns_combo.hide()
self.amount_input.hide()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.hide()
self.namespace_value_input.show()
self.bid_input.hide()
self.auction_info.hide()
self.namespace_value_input.value.setMinimumHeight(28)
self.namespace_value_input.value.setMaximumHeight(28)
def _mode_namespace_create_key(self):
# Namespace Key Create
self.setWindowTitle('Narwhallet - Create Key')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.hide()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.special_keys_combo.show()
self.namespace_key_input.show()
self.namespace_value_input.show()
self.bid_input.hide()
self.auction_info.hide()
self.wallet_combo.combo.setEnabled(False)
self.ns_combo.combo.setEnabled(False)
def _mode_namespace_update_key(self):
# Namespace Key Update
self.setWindowTitle('Narwhallet - Update Key')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.hide()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_value_input.show()
self.bid_input.hide()
self.auction_info.hide()
self.wallet_combo.combo.setEnabled(False)
self.ns_combo.combo.setEnabled(False)
self.namespace_key_input.key.setReadOnly(True)
def _mode_namespace_delete_key(self):
# Namespace Key Delete
self.setWindowTitle('Narwhallet - Delete Key')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.hide()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_value_input.hide()
self.bid_input.hide()
self.auction_info.hide()
self.wallet_combo.combo.setEnabled(False)
self.ns_combo.combo.setEnabled(False)
self.namespace_key_input.key.setReadOnly(True)
def _mode_create_auction(self):
# Namespace Auction
self.setWindowTitle('Narwhallet - Create Auction')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.hide()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.hide()
self.namespace_value_input.hide()
self.bid_input.hide()
self.auction_info.show()
self.namespace_key_input.key.setText('\x01_KEVA_NS_')
self.namespace_key_input.key.setReadOnly(True)
self.auction_info.nft_ns.setVisible(False)
self.auction_info.ns_l.setVisible(False)
self.auction_info.nft_address.setReadOnly(True)
def _mode_create_bid(self):
# Namespace Bid
self.setWindowTitle('Narwhallet - Create Bid')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.show()
self.address_input.hide()
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_value_input.hide()
self.bid_input.hide()
self.auction_info.show()
self.address_select.setVisible(False)
self.auction_info.nft_desc.setReadOnly(True)
self.auction_info.nft_name.setReadOnly(True)
self.auction_info.nft_hashtags.setReadOnly(True)
self.auction_info.nft_ns.setReadOnly(True)
self.auction_info.nft_address.setReadOnly(True)
self.auction_info.nft_price.setReadOnly(True)
def _mode_accept_bid(self):
# Namespace Accept Bid
self.setWindowTitle('Narwhallet - Accept Bid')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.show()
self.address_input.hide()
self.address_combo.hide()
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_value_input.hide()
self.bid_input.hide()
self.auction_info.show()
self.wallet_combo.combo.setEnabled(False)
self.ns_combo.combo.setEnabled(False)
self.address_select.setVisible(False)
# self.bid_input.amount.setReadOnly(True)
self.auction_info.nft_desc.setReadOnly(True)
self.auction_info.nft_name.setReadOnly(True)
self.auction_info.nft_hashtags.setReadOnly(True)
self.auction_info.nft_ns.setReadOnly(True)
self.auction_info.nft_address.setReadOnly(True)
self.auction_info.nft_price.setReadOnly(True)
def _mode_namespace_transfer(self):
# Namespace Transfer
self.setWindowTitle('Narwhallet - Namespace Transfer')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.hide()
self.address_input.show()
self.address_combo.hide()
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_value_input.show()
self.bid_input.hide()
self.auction_info.hide()
self.wallet_combo.combo.setEnabled(False)
self.ns_combo.combo.setEnabled(False)
def _mode_namespace_reward(self):
# Namespace Key Reward
self.setWindowTitle('Narwhallet - Create Reward')
self.wallet_combo.show()
self.ns_combo.show()
self.amount_input.show()
self.address_input.show()
self.address_input.address.setReadOnly(True)
self.address_combo.hide()
self.address_select.setVisible(False)
self.transaction_input.hide()
self.namespace_key_input.show()
self.namespace_key_input.key.setReadOnly(True)
self.namespace_value_input.show()
self.namespace_value_input.value.setReadOnly(True)
self.bid_input.hide()
self.auction_info.hide()
def init_mode(self):
if self.mode == 0:
# Simple Send
self._mode_simple_send()
elif self.mode == 1:
# Multi Sig Send
self._mode_multi_sig_send()
elif self.mode == 2:
# Namespace Create
self._mode_namespace_create()
elif self.mode == 3:
# Namespace Key Create
self._mode_namespace_create_key()
elif self.mode == 4:
# Namespace Key Update
self._mode_namespace_update_key()
elif self.mode == 5:
# Namespace Key Delete
self._mode_namespace_delete_key()
elif self.mode == 6:
# Namespace Auction
self._mode_create_auction()
elif self.mode == 7:
# Namespace Bid
self._mode_create_bid()
elif self.mode == 8:
# Namespace Accept Bid
self._mode_accept_bid()
elif self.mode == 9:
# Namespace Transfer
self._mode_namespace_transfer()
elif self.mode == 10:
# Namespace Key Reward
self._mode_namespace_reward()
def set_availible_usxo(self, is_change: bool, isBidOp: bool, ns_address):
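        # Collect the wallet's spendable outputs for the transaction being
        # built. For non-bid transactions with is_change=True, the namespace
        # UTXO matching ns_address is placed first so it is carried forward;
        # with isBidOp=True the inputs feed self.bid_tx instead of self.new_tx.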
# MShared.list_unspents(self.wallet, self.kex)
_tmp_usxo = self.wallet.get_usxos()
_usxos = []
_nsusxo = None
for tx in _tmp_usxo:
            # TODO: check for usxos already used by bids
_tx = self.cache.tx.get_tx_by_txid(tx['tx_hash'])
if _tx is None:
_tx = MShared.get_tx(tx['tx_hash'], self.kex, True)
if _tx is not None and isinstance(_tx, dict):
_tx = self.cache.tx.add_from_json(_tx)
# if self._test_tx(_tx) is False:
# continue
if 'OP_KEVA' not in _tx.vout[tx['tx_pos']].scriptPubKey.asm:
if isBidOp is False:
_used = False
for _vin in self.bid_tx.vin:
if _vin.txid == _tx.txid:
_used = True
print('used')
if _used is False:
_usxos.append(tx)
else:
_usxos.append(tx)
elif ('OP_KEVA' in _tx.vout[tx['tx_pos']].scriptPubKey.asm
and is_change is True and tx['a'] == ns_address):
_nsusxo = tx
if _nsusxo is not None and is_change is True and isBidOp is False:
_usxos.insert(0, _nsusxo)
if isBidOp is True:
self.bid_tx.inputs_to_spend = _usxos
else:
self.new_tx.inputs_to_spend = _usxos
def check_ns_key_input(self):
if self.mode == 7:
self.check_tx_is_auction()
self.check_next()
def check_tx_is_ns_key(self):
_action_tx = self.namespace_key_input.key.text()
if _action_tx[:4] in ('0001', '0002', '0003'):
_action_tx = _action_tx[4:]
_action_tx = MShared.check_tx_is_ns_key(_action_tx, self.kex, self.cache)
if _action_tx[0] is True:
self.address_input.address.setText(_action_tx[4])
def check_tx_is_bid(self):
_nft_tx = self.namespace_key_input.key.text()
_nft_tx = MShared.check_tx_is_bid(_nft_tx, self.kex, self.cache)
if _nft_tx[0] is True:
_bid_psbt = keva_psbt(_nft_tx[2])
_sh = (Scripts.AddressScriptHash
(self.auction_info.nft_address.text()))
_sh = Scripts.compile(_sh, True)
if _bid_psbt.tx.vout[1].scriptPubKey.hex == _sh:
(self.amount_input.amount
.setText(str(_bid_psbt.tx.vout[1].value/100000000)))
self.new_tx = _bid_psbt.tx
_idx = 0
for _, _r in enumerate(_bid_psbt.psbt_records):
if _r[0] == 'PSBT_IN_WITNESS_UTXO':
self.new_tx.vin[_idx].tb_value = (Ut
.bytes_to_int(
_r[2][:8],
'little'))
elif _r[0] == 'PSBT_IN_PARTIAL_SIG':
(self.new_tx.input_signatures
.append([Ut.bytes_to_hex(_r[2]),
Ut.bytes_to_hex(_r[1][1:])]))
elif _r[0] == 'PSBT_IN_REDEEM_SCRIPT':
(self.new_tx.vin[_idx].scriptSig
.set_hex(Ut.bytes_to_hex(_r[2])))
_idx += 1
self.check_next()
else:
self.buttonBox.next.setEnabled(False)
def check_tx_is_auction(self):
_nft_tx = self.namespace_key_input.key.text()
_nft_tx = MShared.check_tx_is_auction(_nft_tx, self.kex, self.cache)
if _nft_tx[0] is True:
self.auction_info.nft_name.setText(_nft_tx[2]['displayName'])
self.auction_info.nft_desc.setText(_nft_tx[2]['desc'])
if 'hashtags' in _nft_tx[1]:
self.auction_info.nft_hashtags.setText(_nft_tx[1]['hashtags'])
self.auction_info.nft_price.setText(_nft_tx[2]['price'])
self.auction_info.nft_ns.setText(_nft_tx[1])
self.auction_info.nft_address.setText(_nft_tx[2]['addr'])
def build_bid(self):
self.bid_tx.set_version(Ut.hex_to_bytes('00710000'))
locale = QLocale()
_b_amount = locale.toDouble(self.amount_input.amount.text())
_bid_amount = int(_b_amount[0] * 100000000)
_auc = {}
_auc['displayName'] = self.auction_info.nft_name.text()
_ns = self.auction_info.nft_ns.text()
_ns_key = '\x01_KEVA_NS_'
_ns_value = json.dumps(_auc, separators=(',', ':'))
_trans_address = self.wallet.get_unused_address()
_sh = Scripts.KevaKeyValueUpdate(_ns, _ns_key, _ns_value,
_trans_address)
_sh = Scripts.compile(_sh, True)
_ = self.bid_tx.add_output(NS_RESERVATION, _trans_address)
self.bid_tx.vout[0].scriptPubKey.set_hex(_sh)
_ = self.bid_tx.add_output(_bid_amount,
self.auction_info.nft_address.text())
self.set_availible_usxo(False, True, self.ns_combo.combo.currentData())
_inp_sel, _need_change, _est_fee = self.bid_tx.select_inputs()
if _inp_sel is True:
_, _, _fv = self.bid_tx.get_current_values()
_cv = _fv - _est_fee
if _need_change is True:
_ = self.bid_tx.add_output(_cv, _trans_address)
self.bid_tx.txb_preimage(self.wallet,
SIGHASH_TYPE.ALL_ANYONECANPAY)
def tx_to_ns(self, tx, vout):
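        # Derive a namespace id from a funding outpoint: hash160 over the
        # byte-reversed txid concatenated with the output index (as an ASCII
        # string), prefixed with the fixed byte 0x35 and returned as hex.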
_tx = Ut.reverse_bytes(Ut.hex_to_bytes(tx))
_tx_hash = Ut.hash160(_tx + str(vout).encode())
return Ut.bytes_to_hex(bytes([53]) + _tx_hash)
def check_next(self):
if self.mode in (0, 1):
if self.wallet_combo.combo.currentText() != '-':
if self.check_amount() and self.check_address():
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 2:
if self.wallet_combo.combo.currentText() != '-':
if self.namespace_value_input.value.toPlainText() != '':
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 3:
if self.wallet_combo.combo.currentText() != '-':
if (self.ns_combo.combo.currentText() != '' and
self.namespace_key_input.key.text() != '' and
self.namespace_value_input.value.toPlainText() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 4:
if (self.wallet_combo.combo.currentText() != '-' and
self.ns_combo.combo.currentText() != '-' and
self.namespace_key_input.key.text() != '' and
self.namespace_value_input.value.toPlainText() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 5:
if self.wallet_combo.combo.currentText() != '-':
if (self.wallet_combo.combo.currentText() != '-' and
self.ns_combo.combo.currentText() != '-' and
self.namespace_key_input.key.text() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 6:
if (self.check_auction_amount() and
self.wallet_combo.combo.currentText() != '-' and
self.ns_combo.combo.currentText() != '-' and
self.auction_info.nft_name.text() != '' and
self.auction_info.nft_desc.text() != '' and
# self.nft_hashtags.text() != '' and
self.auction_info.nft_price.text() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 7:
if (self.wallet_combo.combo.currentText() != '-' and
self.ns_combo.combo.currentText() != '-' and
self.check_amount() and
self.auction_info.nft_ns.text() != '' and
self.auction_info.nft_price.text() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 8:
if (self.wallet_combo.combo.currentText() != '-' and
self.amount_input.amount.text() != '' and
self.auction_info.nft_ns.text() != '' and
self.auction_info.nft_price.text() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 9:
if self.wallet_combo.combo.currentText() != '-':
if (self.check_address() and
self.ns_combo.combo.currentText() != '' and
self.namespace_key_input.key.text() != '' and
self.namespace_value_input.value.toPlainText() != ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
else:
self.buttonBox.next.setEnabled(False)
elif self.mode == 10:
if (self.check_address() and self.check_amount() and
self.wallet_combo.combo.currentText() != '-' and
self.ns_combo.combo.currentText() != '-' and
self.namespace_key_input.key.text() != '' and
self.namespace_value_input.value.toPlainText() == ''):
self.buttonBox.next.setEnabled(True)
else:
self.buttonBox.next.setEnabled(False)
def check_amount(self):
return self._check_amount(self.amount_input.amount.text())
def check_auction_amount(self):
return self._check_amount(self.auction_info.nft_price.text())
def _check_amount(self, amount):
try:
locale = QLocale()
_result = locale.toDouble(amount)
if _result[1] is True:
_return = True
else:
_return = False
except Exception:
_return = False
return _return
def check_address(self):
try:
if self.address_input.address.isVisible():
_ = (Base58Decoder
.CheckDecode(self.address_input.address.text()))
else:
_ = (Base58Decoder
.CheckDecode(self.address_combo.combo.currentData()))
return True
except Exception:
return False
def select_swap(self):
self.address_input.address.setText('')
self.address_combo.combo.setCurrentText('-')
if self.address_input.address.isVisible():
self.address_input.hide()
self.address_combo.show()
self.address_select.setText('Entry')
else:
self.address_input.show()
self.address_combo.hide()
self.address_select.setText('Book')
def wallet_combo_changed(self, data):
if data != '-':
_n = self.wallet_combo.combo.currentData()
self.wallet = self.wallets.get_wallet_by_name(_n)
MShared.list_unspents(self.wallet, self.kex)
self.set_namespace_combo()
self.check_next()
def ns_combo_changed(self, data):
if data not in ('-', ''):
from manimlib.imports import *
class LinedCode(Text):
CONFIG = {
'size' : 0.5,
'color' : WHITE,
'stroke_color' : WHITE,
'stroke_weight': 0,
'ln_color' : GRAY,
}
def __init__(self, *text, **config):
digest_config(self, config)
res_text = ''
i = 1
for each_text in text:
res_text += str(i) + ' ' + each_text + '\n'
self.t2c['{} '.format(i)] = self.ln_color
i = i + 1
super(LinedCode, self).__init__(res_text, **config)
self.set_stroke(self.stroke_color, self.stroke_weight)
class Code(Text):
CONFIG = {
'font' : 'Monaco for Powerline',
'size' : 0.5,
'color' : WHITE,
'stroke_color' : WHITE,
'stroke_weight': 0,
}
def __init__(self, *text, **config):
res_text = ''
for each_text in text:
res_text += each_text + '\n'
super(Code, self).__init__(res_text, **config)
self.set_stroke(self.stroke_color, self.stroke_weight)
class TryLinedCode(Scene):
def construct(self):
code = LinedCode(
"#include <bits/stdc++.h>",
"using namespace std;",
"",
"int main() {",
" printf(\"Hello World\\n\");",
" return 1;",
"}",
t2c={
'#include <' : BLUE,
'>' : BLUE,
'std' : YELLOW,
'bits/stdc++.h' : GREEN,
'using' : ORANGE,
'namespace' : PURPLE,
';' : BLUE,
'for' : BLUE,
'if' : BLUE,
'int' : PURPLE,
'(' : BLUE,
')' : BLUE,
'{' : BLUE,
'}' : BLUE,
'return' : BLUE,
'0' : ORANGE,
'main' : '#214FB7',
'printf' : '#214FB7',
'\"' : BLUE,
'Hello World' : GREEN,
'\\n' : BLUE,
}
)
self.play(Write(code))
class TryCode(Scene):
def construct(self):
code = Code(
"#include <bits/stdc++.h>",
"using namespace std; ",
)
self.play(Write(code))
class CodeLine(Text):
CONFIG = {
't2c': {
# 'x': average_color(BLUE, PINK),
# 'y': average_color(BLUE, PINK),
# 'z': average_color(BLUE, PINK),
'True': ORANGE,
'False': ORANGE,
'RIGHT': ORANGE,
'LEFT': ORANGE,
'DOWN': ORANGE,
'UP': ORANGE,
'IN': ORANGE,
'OUT': ORANGE,
'ORIGIN': ORANGE,
'DL': ORANGE,
'DR': ORANGE,
'UL': ORANGE,
'UR': ORANGE,
'TOP': ORANGE,
'BOTTOM': ORANGE,
'LEFT_SIDE': ORANGE,
'RIGHT_SIDE': ORANGE,
'manim': GOLD,
'constants.py': GOLD,
'FRAME_HEIGHT': BLUE_D,
'FRAME_WIDTH': BLUE_D,
'PIXEL_HEIGHT': RED_B,
'PIXEL_WIDTH': RED_B,
'np': BLACK,
'array': BLUE_D,
'ndarray': BLUE,
'FadeIn': average_color(RED, ORANGE),
'move_to': BLUE_D,
'shift': BLUE_D,
'next_to': BLUE_D,
'to_corner': BLUE_D,
'sheen': BLUE_D,
'to_edge': BLUE_D,
'align_to': BLUE_D,
'scale': BLUE_D,
'rotate': BLUE_D,
'flip': BLUE_D,
'add': BLUE_D,
'play': BLUE_D,
'round_corners': BLUE_D,
'corner_radius': BLUE_D,
'side_length': BLUE_D,
'stretch': BLUE_D,
'color': BLUE_D,
'stroke': BLUE_D,
'opacity': BLUE_D,
'stroke_width': BLUE_D,
'stroke_color': BLUE_D,
'stroke_opacity': BLUE_D,
'sheen_factor': BLUE_D,
'sheen_direction': BLUE_D,
'PURPLE_C': PURPLE_C,
'set_stroke': BLUE_D,
'width': BLUE_D,
'height': BLUE_D,
'Rectangle': PURPLE_C,
'RoundedRectangle':PURPLE_C,
'Square': PURPLE_C,
'0': average_color(BLUE, PINK),
'1': average_color(BLUE, PINK),
'2': average_color(BLUE, PINK),
'3': average_color(BLUE, PINK),
'4': average_color(BLUE, PINK),
'5': average_color(BLUE, PINK),
'6': average_color(BLUE, PINK),
'7': average_color(BLUE, PINK),
'8': average_color(BLUE, PINK),
'9': average_color(BLUE, PINK),
'2D': RED_B,
'3D': RED_B,
'self': PINK,
'mob': RED_D,
'#FF0000': '#FF0000',
'#66CCFF': '#66CCFF',
'GREEN_B': GREEN_B,
'BLUE_B': BLUE_B,
'~': '#F0F0F0',
},
'size': 0.36,
'color': DARK_GRAY,
'plot_depth': 2,
'stroke_width': 0,
}
def __init__(self, text, **kwargs):
Text.__init__(self, text, **kwargs)
pass
class RectangleIllustration(Scene):
CONFIG = {
'camera_config': {
'background_color': WHITE,
}
}
def construct(self):
captions = [
            'This is a rectangle with a width of 3 and a height of 5',
            'Two parameters control the shape: width and height',
            'width is the width and height is the height',
            'color changes the color of the rectangle',
            'opacity adjusts its transparency, with values in [0,1]',
            'The stroke parameters adjust the outline width, color, transparency and so on',
            'stroke_width: thickness, stroke_color: color, stroke_opacity: transparency',
            'sheen adds a color gradient',
            'sheen_factor adjusts the strength of the gradient',
            'sheen_direction adjusts the direction of the gradient',
]
codelines = [
"rectangle=Rectangle(",# 0
"~~~~height=5,",# 1
"~~~~width=3,",# 2
"~~~~height=4,",# 3
"~~~~width=3,",# 4
"~~~~height=4,",# 5
"~~~~width=5,",# 6
"~~~~height=2,",# 7
"~~~~width=3,",# 8
"~~~~color='#FF0000',",# 9
"~~~~opacity=1.0,",# 10
"~~~~color='#66CCFF',",# 11
"~~~~opacity=0.2,",# 12
"~~~~opacity=0.5,",# 13
"~~~~opacity=1.0,",# 14
"~~~~stroke_color=PURPLE_C,",# 15
"~~~~stroke_width=20,",# 16
"~~~~stroke_opacity=1,",# 17
"~~~~stroke_color=GREEN_B,",# 18
"~~~~stroke_width=10,",# 19
"~~~~stroke_opacity=0.5,",# 20
"~~~~stroke_opacity=0.0,",#21
"~~~~sheen_factor=0.2,",# 22
"~~~~sheen_direction=UR,",#23
"~~~~sheen_factor=0.4,",# 24
"~~~~sheen_direction=UL,",# 25
")",# 26
"~~~~stroke_color=PURPLE_C,",# 27
]
captions_mob = VGroup(
*[
CodeLine(cap, size=0.36,plot_depth=5,color=DARKER_GRAY).to_edge(DOWN * 1.2)
for cap in captions
]
)
codelines_mob = VGroup(
*[
CodeLine(codeline,size=0.36)
for codeline in codelines
]
)
code_bg = Rectangle(
height=7,
width=6,
stroke_width=1,
stroke_color=GRAY,
fill_color=LIGHT_GREY,
fill_opacity=0.25,
plot_depth=-1
).to_edge(RIGHT, buff=LARGE_BUFF)
self.play(FadeIn(code_bg))
rec = Rectangle(
stroke_width=1,
stroke_color=PURPLE_C,
            fill_color="#66CCFF",
fill_opacity=1,
height=5,
width=3,
).to_edge(LEFT, buff=LARGE_BUFF).shift(RIGHT)
for each in range(len(codelines_mob)):
if each == 0:
codelines_mob[each].next_to(code_bg, direction=RIGHT, aligned_edge=UP).shift(np.array([-code_bg.get_width(),-0.3,0]))
elif each in [1,3,5,7]:
codelines_mob[each].next_to(codelines_mob[0], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [2,4,6,8]:
codelines_mob[each].next_to(codelines_mob[each-1], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [9,11]:
codelines_mob[each].next_to(codelines_mob[2], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [10,12,13,14]:
codelines_mob[each].next_to(codelines_mob[9], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [15,18,27]:
codelines_mob[each].next_to(codelines_mob[14], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [16,19]:
codelines_mob[each].next_to(codelines_mob[15], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [17,20,21]:
codelines_mob[each].next_to(codelines_mob[16], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [22,24]:
codelines_mob[each].next_to(codelines_mob[21], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
elif each in [23,25]:
codelines_mob[each].next_to(codelines_mob[22], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)
# codelines_mob[each].next_to(code_bg, direction=RIGHT, aligned_edge=UP).shift(np.array([-code_bg.get_width(),-0.3,0]))
# , DOWN, buff=MED_SMALL_BUFF
brace_right = Brace(rec, direction=RIGHT,fill_color=LIGHT_GREY, fill_opacity=1)\
.add_updater(lambda x: x.become(
Brace(rec, direction=RIGHT,fill_color=LIGHT_GREY, fill_opacity=1)
))
brace_up = Brace(rec, direction=UP,fill_color=LIGHT_GREY, fill_opacity=1)\
.add_updater(lambda x: x.become(
Brace(rec, direction=UP,fill_color=LIGHT_GREY, fill_opacity=1)
))
self.play(FadeIn(codelines_mob[0]), run_time=0.8)
self.play(FadeIn(rec), run_time=0.8)
self.play(Write(captions_mob[0]))
self.add(VGroup(brace_up, brace_right))
up_label = CodeLine('3').next_to(brace_up, UP, buff=SMALL_BUFF)
right_label = CodeLine('5').next_to(brace_right, RIGHT, buff=SMALL_BUFF)
self.play(FadeIn(VGroup(up_label, right_label)), run_time=0.8)
self.play(Write(codelines_mob[1]), run_time=0.5)
self.play(Write(codelines_mob[2]), run_time=0.5)
self.play(Write(codelines_mob[26].next_to(codelines_mob[2], DOWN, buff=MED_SMALL_BUFF, aligned_edge=LEFT)), run_time=0.5)
self.wait(2)
self.play(Transform(captions_mob[0], captions_mob[1]))
self.wait(2)
self.play(Transform(captions_mob[0], captions_mob[2]))
self.play(
Transform(codelines_mob[1], codelines_mob[3]),
Transform(codelines_mob[2], codelines_mob[4]),
)
self.play(
rec.set_height, {'height': 4, 'stretch':True},
Transform(right_label, CodeLine('4').add_updater(lambda x: x.become(CodeLine('4').next_to(brace_right, RIGHT, buff=SMALL_BUFF)))),
Transform(up_label, CodeLine('3').add_updater(lambda x: x.become(CodeLine('3').next_to(brace_up, UP, buff=SMALL_BUFF))))
)
self.wait(1.8)
self.play(
Transform(codelines_mob[1], codelines_mob[5]),
Transform(codelines_mob[2], codelines_mob[6]),
)
self.play(
rec.set_width, {'width': 5, 'stretch': True},
Transform(right_label, CodeLine('4').add_updater(lambda x: x.become(CodeLine('4').next_to(brace_right, RIGHT, buff=SMALL_BUFF)))),
Transform(up_label, CodeLine('5').add_updater(lambda x: x.become(CodeLine('5').next_to(brace_up, UP, buff=SMALL_BUFF))))
)
self.wait(1.8)
self.play(
Transform(codelines_mob[1], codelines_mob[7]),
Transform(codelines_mob[2], codelines_mob[8]),
)
self.play(
rec.set_height, {'height': 2, 'stretch': True},
rec.set_width, {'width': 3, 'stretch': True},
Transform(right_label, CodeLine('2').add_updater(lambda x: x.become(CodeLine('2').next_to(brace_right, RIGHT, buff=SMALL_BUFF)))),
Transform(up_label, CodeLine('3').add_updater(lambda x: x.become(CodeLine('3').next_to(brace_up, UP, buff=SMALL_BUFF))))
)
self.wait(1.8)
self.play(Transform(captions_mob[0], captions_mob[3]))
self.play(codelines_mob[26].next_to, {'mobject_or_point':codelines_mob[10], 'direction':DOWN, 'buff': MED_SMALL_BUFF, 'aligned_edge': LEFT})
self.play(Write(codelines_mob[9]), run_time=0.5)
self.play(Write(codelines_mob[10]), run_time=0.5)
self.play(rec.set_color, "#FF0000")
self.wait(1.8)
self.play(Transform(codelines_mob[9], codelines_mob[11]))
self.play(rec.set_color, "#66CCFF")
self.play(Transform(captions_mob[0], captions_mob[4]))
self.play(Transform(codelines_mob[10], codelines_mob[12]))
self.play(rec.set_opacity, 0.2)
self.wait(1.8)
self.play(Transform(codelines_mob[10], codelines_mob[13]))
self.play(rec.set_opacity, 0.5)
self.play(Transform(codelines_mob[10], codelines_mob[14]))
self.play(codelines_mob[26].next_to, {'mobject_or_point':codelines_mob[17], 'direction':DOWN, 'buff': MED_SMALL_BUFF, 'aligned_edge': LEFT})
self.play(Write(codelines_mob[15]))
self.play(Write(codelines_mob[16]))
self.play(Write(codelines_mob[17]))
self.play(rec.set_opacity, 1)
self.play(Transform(captions_mob[0], captions_mob[5]))
self.play(rec.set_stroke, {'color':PURPLE_C, 'width': 20, 'opacity': 1})
self.wait(1.8)
self.play(
Transform(codelines_mob[15], codelines_mob[18]),
Transform(codelines_mob[16], codelines_mob[19]),
)
self.play(Transform(captions_mob[0], captions_mob[6]))
self.play(rec.set_stroke, {'color':GREEN_B, 'width': 10, 'opacity': 1})
self.wait(1.8)
self.play(
Transform(codelines_mob[15], codelines_mob[27]),
Transform(codelines_mob[17], codelines_mob[20]),
)
self.play(rec.set_stroke, {'color':PURPLE_C, 'width': 10, 'opacity': 0.5})
self.wait(1.8)
self.play(Transform(codelines_mob[17], codelines_mob[21]))
self.play(rec.set_stroke, {'color':PURPLE_C, 'width': 10, 'opacity': 0})
self.wait(1.8)
self.play(codelines_mob[26].next_to, {'mobject_or_point':codelines_mob[25], 'direction':DOWN, 'buff': MED_SMALL_BUFF, 'aligned_edge': LEFT})
self.play(Write(VGroup(codelines_mob[22], codelines_mob[23])))
self.play(Transform(captions_mob[0], captions_mob[7]))
self.play(rec.set_sheen_direction, UR)
self.play(rec.set_sheen, 0.2)
self.play(Transform(captions_mob[0], captions_mob[8]))
self.wait(1.8)
self.play(Transform(codelines_mob[22], codelines_mob[24]))
self.play(rec.set_sheen, 0.4)
self.play(Transform(captions_mob[0], captions_mob[9]))
self.wait(1.8)
self.play(Transform(codelines_mob[23], codelines_mob[25]))
self.play(rec.set_sheen_direction, UL)
self.wait(1.8)
fade_out_codelines = VGroup(
codelines_mob[0],
code_bg,
rec,
up_label,
right_label,
captions_mob[0],
codelines_mob[1],
codelines_mob[2],
codelines_mob[9],
codelines_mob[10],
codelines_mob[15],
codelines_mob[16],
codelines_mob[17],
codelines_mob[22],
codelines_mob[23],
codelines_mob[26]
)
self.remove(brace_up, brace_right)
self.play(FadeOut(fade_out_codelines))
self.wait()
class SquareIllustration(RectangleIllustration):
def construct(self):
captions = [
            'The size parameter of Square is side_length',
            'We add a square with a side length of 4',
            'You can adjust its size with set_height or set_width',
            'Since it inherits from Rectangle, adding stretch=True can stretch it back into a rectangle',
]
codelines = [
'square = Square(side_length=4)',
'# (default)side_length=2.0',
'square.set_height(3)',
'square.set_width(5)',
'square.set_height(3,stretch=True)',
]
captions_mob = VGroup(
*[
CodeLine(cap, size=0.32,plot_depth=5,color=DARKER_GRAY).to_edge(DOWN * 1.2)
for cap in captions
]
)
codelines_mob = VGroup(
*[
CodeLine(codeline,size=0.29)
for codeline in codelines
]
)
code_bg = Rectangle(
height=7,
width=6,
stroke_width=1,
stroke_color=GRAY,
fill_color=LIGHT_GREY,
fill_opacity=0.25,
plot_depth=-1
).to_edge(RIGHT, buff=LARGE_BUFF)
self.play(FadeIn(code_bg))
for each in range(len(codelines_mob)):
if each == 0:
codelines_mob[each].next_to(code_bg, direction=UP, aligned_edge=LEFT).shift(DOWN+RIGHT*0.5)
elif each in range(1,10):
codelines_mob[each].next_to(codelines_mob[each-1], direction=DOWN, aligned_edge=LEFT)
sq = Square(
side_length=4,
stroke_width=1,
stroke_color=PURPLE_C,
fill_color=BLUE,
fill_opacity=1,
).to_edge(LEFT, buff=LARGE_BUFF)
brace_right = Brace(sq, direction=RIGHT,fill_color=LIGHT_GREY, fill_opacity=1)\
.add_updater(lambda x: x.become(
Brace(sq, direction=RIGHT,fill_color=LIGHT_GREY, fill_opacity=1)
))
brace_up = Brace(sq, direction=UP,fill_color=LIGHT_GREY, fill_opacity=1)\
.add_updater(lambda x: x.become(
Brace(sq, direction=UP,fill_color=LIGHT_GREY, fill_opacity=1)
))
right_label = CodeLine('4').next_to(brace_right, RIGHT, buff=SMALL_BUFF)
self.play(FadeIn(captions_mob[0]))
self.wait()
self.play(Transform(captions_mob[0], captions_mob[1]))
self.play(FadeIn(codelines_mob[0]))
self.play(FadeIn(codelines_mob[1]))
self.play(FadeIn(VGroup(sq, brace_right, right_label),run_time=1.2))
self.wait()
self.play(Transform(captions_mob[0], captions_mob[2]))
self.play(FadeIn(codelines_mob[2]))
self.play(
sq.set_height, 3,
Transform(right_label, CodeLine('3').add_updater(lambda x: x.become(CodeLine('3').next_to(brace_right, RIGHT, buff=SMALL_BUFF))))
)
self.wait()
self.play(FadeIn(codelines_mob[3]))
self.play(
sq.set_width, 5,
Transform(right_label, CodeLine('5').add_updater(lambda x: x.become(CodeLine('5').next_to(brace_right, RIGHT, buff=SMALL_BUFF))))
)
self.wait()
self.play(Transform(captions_mob[0], captions_mob[3]))
self.play(FadeIn(codelines_mob[4]))
self.play(
sq.set_height, {'height':3, 'stretch': True},
Transform(right_label, CodeLine('3').add_updater(lambda x: x.become(CodeLine('3').next_to(brace_right, RIGHT, buff=SMALL_BUFF))))
)
self.wait()
self.play(FadeOut(captions_mob[0]))
self.wait()
self.remove(brace_right)
self.play(FadeOut(VGroup(codelines_mob[0], codelines_mob[1], codelines_mob[2], codelines_mob[3], code_bg, sq, codelines_mob[4], right_label)))
class RoundedRectangleIllustration(RectangleIllustration):
def emphasize(self, mobject):
self.play(mobject.scale, 1.2, run_time=0.5)
self.play(mobject.scale, 10/12, run_time=0.5)
def construct(self):
captions = [
            'RoundedRectangle inherits from the Rectangle class',# 0
            'It makes the four corners of the rectangle rounded',# 1
            'Its parameter is corner_radius',# 2
            'You can adjust this parameter to set the roundness',# 3
            'The value of corner_radius determines how rounded the corners are',# 4
            'The radius of each rounded corner is the value of corner_radius',# 5
            'When corner_radius is larger than half of the shortest side, the rectangle becomes distorted',# 6
            'And when corner_radius is less than 0, the rounded edges curve inward',# 7
            'Since RoundedRectangle inherits from Rectangle, you can also set corner_radius directly on a Rectangle',# 8
            'Or call the round_corners method after creating a Rectangle to set its roundness',# 9
]
codelines = [
"roundedrec = RoundedRectangle(",# 0
'~~~~#~(default)corner_radius=0.5',# 1
'~~~~corner_radius=0.2',# 2
'~~~~corner_radius=0.5',# 3
'~~~~corner_radius=1.0',# 4
'~~~~corner_radius=3.0',# 5
'~~~~corner_radius=-1.0',# 6
'~~~~corner_radius=-2.0',# 7
'#~(Rectangle)corner_radius=0',# 8
'rec~=~Rectangle(',# 9
'~~~~height=3,',# 10
'~~~~width=4,',# 11
'rec.round_corners(1.5)',# 12
')',# 13
')',#14
]
captions_mob = VGroup(
*[
CodeLine(cap,size=0.32,plot_depth=5,color=DARKER_GRAY).to_edge(DOWN * 1.2)
for cap in captions
]
)
codelines_mob = VGroup(
*[
CodeLine(codeline,size=0.32)
for codeline in codelines
]
)
code_bg = Rectangle(
height=7,
width=6,
stroke_width=1,
stroke_color=GRAY,
fill_color=LIGHT_GREY,
fill_opacity=0.25,
plot_depth=-1
).to_edge(RIGHT, buff=LARGE_BUFF)
self.play(FadeIn(code_bg))
for each in range(len(codelines_mob)):
if each == 0:
codelines_mob[each].next_to(code_bg, direction=UP, aligned_edge=LEFT).shift(DOWN*0.8+RIGHT*0.2)
elif each == 1:
codelines_mob[each].next_to(codelines_mob[0], direction=DOWN, aligned_edge=LEFT, buff=MED_SMALL_BUFF)
elif each in [2,3,4,5,6,7,13]:
codelines_mob[each].next_to(codelines_mob[1], direction=DOWN, aligned_edge=LEFT, buff=MED_SMALL_BUFF)
elif each == 8:
codelines_mob[each].next_to(codelines_mob[7], direction=DOWN, aligned_edge=LEFT, buff=SMALL_BUFF).shift(DOWN*0.6)
elif each in [9,10,11]:
codelines_mob[each].next_to(codelines_mob[each-1], direction=DOWN, aligned_edge=LEFT, buff=MED_SMALL_BUFF)
elif each == 14:
codelines_mob[each].next_to(codelines_mob[11], direction=DOWN, aligned_edge=LEFT, buff=MED_SMALL_BUFF)
elif each == 12:
codelines_mob[each].next_to(codelines_mob[11], direction=DOWN, aligned_edge=LEFT, buff=MED_SMALL_BUFF).shift(DOWN*0.6)
roundedrectangle0 = RoundedRectangle(
corner_radius=0.5,
            fill_color="#66CCFF",
fill_opacity=1,
height=4,
width=3,)\
.shift(LEFT*3)
arrow_up = | |
class OpenCVVideoReader(VideoReader):
    '''Class for reading video using OpenCV.
The input video can be a standalone video file like "/path/to/video.mp4"
or a directory of frames like "/path/to/frames/%05d.png". This path is
passed directly to cv2.VideoCapture. So, for example, if you specify a
directory of frames, the frame numbering must start from 0-3.
A frames string like "1-5,10-15" can optionally be passed to only read
certain frame ranges.
This class uses 1-based indexing for all frame operations.
'''
def __init__(self, inpath, frames=None):
'''Constructs a new VideoReader with OpenCV backend.
Args:
inpath: path to the input video, which can be a standalone video
file like "/path/to/video.mp4" or a directory of frames like
"/path/to/frames/%05d.png". This path is passed directly to
cv2.VideoCapture
frames: one of the following optional quantities specifying a
collection of frames to process:
- None (all frames - the default)
- "*" (all frames)
- a string like "1-3,6,8-10"
- a list like [1, 2, 3, 6, 8, 9, 10]
- a FrameRange or FrameRanges instance
Raises:
VideoReaderError: if the input video could not be opened.
'''
self._cap = cv2.VideoCapture(inpath)
if not self._cap.isOpened():
raise VideoReaderError("Unable to open '%s'" % inpath)
super(OpenCVVideoReader, self).__init__(inpath, frames)
@property
def encoding_str(self):
'''Return the video encoding string.'''
try:
# OpenCV 3
code = int(self._cap.get(cv2.CAP_PROP_FOURCC))
except AttributeError:
# OpenCV 2
code = int(self._cap.get(cv2.cv.CV_CAP_PROP_FOURCC))
return FOURCC.int_to_str(code)
@property
def frame_size(self):
'''The (width, height) of each frame.'''
try:
# OpenCV 3
return (
int(self._cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
int(self._cap.get(cv2.CAP_PROP_FRAME_HEIGHT)),
)
except AttributeError:
# OpenCV 2
return (
int(self._cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH)),
int(self._cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT)),
)
@property
def frame_rate(self):
'''The frame rate.'''
try:
# OpenCV 3
return float(self._cap.get(cv2.CAP_PROP_FPS))
except AttributeError:
# OpenCV 2
return float(self._cap.get(cv2.cv.CV_CAP_PROP_FPS))
@property
def total_frame_count(self):
'''The total number of frames in the video.'''
try:
# OpenCV 3
return int(self._cap.get(cv2.CAP_PROP_FRAME_COUNT))
except AttributeError:
# OpenCV 2
return int(self._cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))
def read(self):
'''Reads the next frame.
Returns:
img: the next frame
Raises:
StopIteration: if there are no more frames to process
VideoReaderError: if unable to load the next frame from file
'''
for idx in range(max(0, self.frame_number), next(self._ranges)):
if not self._cap.grab():
raise VideoReaderError(
"Failed to grab frame %d" % (idx + 1))
return etai.bgr_to_rgb(self._cap.retrieve()[1])
def close(self):
'''Closes the video reader.'''
self._cap.release()
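# Illustrative usage sketch (not part of the original module; the path and frame
# string are hypothetical, and only methods defined above are used):
#
#     reader = OpenCVVideoReader("/path/to/video.mp4", frames="1-5,10-15")
#     try:
#         while True:
#             img = reader.read()  # next RGB frame as a numpy array
#     except StopIteration:
#         pass
#     finally:
#         reader.close()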
class VideoWriter(object):
'''Base class for writing videos.'''
def __enter__(self):
return self
def __exit__(self, *args):
self.close()
def write(self, img):
raise NotImplementedError("subclass must implement write()")
def close(self):
raise NotImplementedError("subclass must implement close()")
class VideoWriterError(Exception):
pass
class FFmpegVideoWriter(VideoWriter):
'''Class for writing videos using ffmpeg.'''
def __init__(self, outpath, fps, size, out_opts=None):
'''Constructs a VideoWriter with ffmpeg backend.
Args:
outpath: the output video path. Existing files are overwritten,
and the directory is created if necessary
fps: the frame rate
size: the (width, height) of each frame
out_opts: an optional list of output options for FFmpeg
'''
self.outpath = outpath
self.fps = fps
self.size = size
self._ffmpeg = FFmpeg(
in_opts=[
"-f", "rawvideo", # input will be raw video
"-vcodec", "rawvideo", # input will be raw video
"-s", "%dx%d" % self.size, # frame size
"-pix_fmt", "rgb24", # pixel format
"-r", str(self.fps), # frame rate
],
out_opts=out_opts,
)
self._ffmpeg.run("-", self.outpath)
def write(self, img):
'''Appends the image to the output video.
Args:
img: an image in ETA format (RGB)
'''
self._ffmpeg.stream(img.tostring())
def close(self):
'''Closes the video writer.'''
self._ffmpeg.close()
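# Illustrative usage sketch (not part of the original module; the path, fps and
# frame size are hypothetical, and numpy is assumed imported as np). Note that
# `size` is (width, height) while each RGB frame array is (height, width, 3):
#
#     writer = FFmpegVideoWriter("/tmp/out.mp4", fps=30, size=(64, 48))
#     try:
#         for _ in range(90):  # 3 seconds at 30 fps
#             frame = np.random.randint(0, 256, (48, 64, 3), dtype=np.uint8)
#             writer.write(frame)
#     finally:
#         writer.close()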
class OpenCVVideoWriter(VideoWriter):
'''Class for writing videos using cv2.VideoWriter.
Uses the default encoding scheme for the extension of the output path.
'''
def __init__(self, outpath, fps, size):
'''Constructs a VideoWriter with OpenCV backend.
Args:
outpath: the output video path. Existing files are overwritten,
and the directory is created if necessary
fps: the frame rate
size: the (width, height) of each frame
Raises:
VideoWriterError: if the writer failed to open
'''
self.outpath = outpath
self.fps = fps
self.size = size
self._writer = cv2.VideoWriter()
etau.ensure_path(self.outpath)
self._writer.open(self.outpath, -1, self.fps, self.size, True)
if not self._writer.isOpened():
raise VideoWriterError("Unable to open '%s'" % self.outpath)
def write(self, img):
'''Appends the image to the output video.
Args:
img: an image in ETA format
'''
self._writer.write(etai.rgb_to_bgr(img))
def close(self):
'''Closes the video writer.'''
# self._writer.release() # warns to use a separate thread
threading.Thread(target=self._writer.release, args=()).start()
class FFprobe(object):
'''Interface for the ffprobe binary.'''
DEFAULT_GLOBAL_OPTS = ["-loglevel", "error"]
def __init__(self, global_opts=None, opts=None):
'''Constructs an ffprobe command, minus the input path.
Args:
global_opts: a list of global options for ffprobe. By default,
self.DEFAULT_GLOBAL_OPTS is used
opts: a list of options for ffprobe
'''
self._global_opts = global_opts or self.DEFAULT_GLOBAL_OPTS
self._opts = opts or []
self._args = None
self._p = None
@property
def cmd(self):
'''The last executed ffprobe command string, or None if run() has not
yet been called.
'''
return " ".join(self._args) if self._args else None
def run(self, inpath, decode=False):
'''Run the ffprobe binary with the specified input path.
Args:
inpath: the input path
decode: whether to decode the output bytes into utf-8 strings. By
default, the raw bytes are returned
Returns:
out: the stdout from the ffprobe binary
Raises:
ExecutableNotFoundError: if the ffprobe binary cannot be found
ExecutableRuntimeError: if the ffprobe binary raises an error
during execution
'''
self._args = (
["ffprobe"] +
self._global_opts +
self._opts +
["-i", inpath]
)
try:
self._p = Popen(
self._args,
stdout=PIPE,
stderr=PIPE,
)
except EnvironmentError as e:
if e.errno == errno.ENOENT:
raise etau.ExecutableNotFoundError("ffprobe")
else:
raise
out, err = self._p.communicate()
if self._p.returncode != 0:
raise etau.ExecutableRuntimeError(self.cmd, err)
return out.decode() if decode else out
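# Illustrative usage sketch (not part of the original module): probing a video's
# streams as JSON with standard ffprobe flags (the path is hypothetical and the
# json module is assumed imported):
#
#     probe = FFprobe(opts=["-print_format", "json", "-show_streams"])
#     info = json.loads(probe.run("/path/to/video.mp4", decode=True))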
class FFprobeError(Exception):
'''Exception raised when FFprobe was unable to analyze a video.'''
pass
class FFmpeg(object):
'''Interface for the ffmpeg binary.
Example usages:
# Convert a video to sampled frames
ffmpeg = FFmpeg()
ffmpeg.run("/path/to/video.mp4", "/path/to/frames/%05d.png")
# Resize a video
ffmpeg = FFmpeg(size=(512, -1))
ffmpeg.run("/path/to/video.mp4", "/path/to/resized.mp4")
# Change the frame rate of a video
ffmpeg = FFmpeg(fps=10)
ffmpeg.run("/path/to/video.mp4", "/path/to/resampled.mp4")
'''
DEFAULT_GLOBAL_OPTS = ["-loglevel", "error"]
DEFAULT_VIDEO_OUT_OPTS = [
"-c:v", "libx264", "-preset", "medium", "-crf", "23",
"-pix_fmt", "yuv420p", "-an"]
def __init__(
self,
fps=None,
size=None,
scale=None,
global_opts=None,
in_opts=None,
out_opts=None):
'''Constructs an ffmpeg command, minus the input/output paths.
Args:
fps: an optional output frame rate. By default, the native frame
rate of the input video is used
size: an optional output (width, height) for each frame. At most
one dimension can be -1, in which case the aspect ratio is
preserved
scale: an optional positive number by which to scale the input
video (e.g., 0.5 or 2)
global_opts: an optional list of global options for ffmpeg. By
default, self.DEFAULT_GLOBAL_OPTS is used
in_opts: an optional list of input options for ffmpeg
out_opts: an optional list of output options for ffmpeg. By
default, self.DEFAULT_VIDEO_OUT_OPTS is used when the output
path is a video file
'''
self.is_input_streaming = False
self.is_output_streaming = False
self._filter_opts = self._gen_filter_opts(fps, size, scale)
self._global_opts = global_opts or self.DEFAULT_GLOBAL_OPTS
self._in_opts = in_opts or []
self._out_opts = out_opts
self._args = None
self._p = None
@property
def cmd(self):
'''The last executed ffmpeg command string, or None if run() has not
yet been called.
'''
return " ".join(self._args) if self._args else None
def run(self, inpath, outpath):
'''Run the ffmpeg binary with the specified input/outpath paths.
Args:
inpath: the input path. If inpath is "-", input streaming mode is
activated and data can be passed via the stream() method
outpath: the output path. Existing files are overwritten, and the
directory is created if needed. If outpath is "-", output
streaming mode is activated and data can be read via the
read() method
Raises:
ExecutableNotFoundError: if the ffmpeg binary cannot be found
ExecutableRuntimeError: if the ffmpeg binary raises an error during
execution
'''
self.is_input_streaming = (inpath == "-")
self.is_output_streaming = (outpath == "-")
if self._out_opts is None and is_supported_video_file(outpath):
out_opts = self.DEFAULT_VIDEO_OUT_OPTS
else:
out_opts = self._out_opts or []
self._args = (
["ffmpeg"] +
self._global_opts +
self._in_opts + ["-i", inpath] +
self._filter_opts + out_opts + [outpath]
)
if not self.is_output_streaming:
etau.ensure_path(outpath)
try:
logger.debug("Executing '%s'" % self.cmd)
self._p = Popen(self._args, stdin=PIPE, stdout=PIPE, stderr=PIPE)
except EnvironmentError as e:
if e.errno == errno.ENOENT:
raise etau.ExecutableNotFoundError("ffmpeg")
else:
raise
# Run non-streaming jobs immediately
if not (self.is_input_streaming or self.is_output_streaming):
err = self._p.communicate()[1]
if self._p.returncode != 0:
raise etau.ExecutableRuntimeError(self.cmd, err)
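# Sketch of streaming usage (assumption, mirroring how FFmpegVideoWriter above
# drives this class; paths and sizes are hypothetical):
#
#     ff = FFmpeg(in_opts=["-f", "rawvideo", "-vcodec", "rawvideo",
#                          "-s", "64x48", "-pix_fmt", "rgb24", "-r", "30"])
#     ff.run("-", "/tmp/out.mp4")   # "-" activates input streaming mode
#     ff.stream(frame_bytes)        # push raw RGB bytes for each frame
#     ff.close()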
def stream(self, string):
'''Writes the string to ffmpeg's stdin stream.
| |
<reponame>mcyos118/DSF-image-generator-python
from PIL import Image
from IPython.display import display
import random
import json
import sys
from traitlets.traitlets import Int
# Each image is made up of a series of traits
# The weightings for each trait drive the rarity and add up to 100%
background = [
"background-1",
"background-2",
"background-3",
"background-4",
"background-5",
"background-6",
"background-7",
"background-8",
"background-9",
"background-10",
"background-11",
"background-12",
"background-13",
"background-14",
"background-15",
"background-16",
"background-17",
"background-18",
"background-19",
"background-20",
"background-21",
"background-22",
"background-23",
"background-24",
"background-25",
"background-26",
"background-27",
"background-28",
"background-29",
"background-30",
"background-31",
"background-32",
"background-33"
]
background_weights = [7, 7, 7, 7, 7, 7, 7, 7, 7, 7,
2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5, 2.5,
0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4, 0.4,
0.4, 0.4, 0.2]
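# Sketch (assumption, not the generator's own selection code): one way to draw a
# trait from these weighted lists; random.choices treats the weights as relative
# frequencies, so they need not sum to exactly 100.
#
#     chosen_background = random.choices(background, weights=background_weights, k=1)[0]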
helm_backpieces = [
"helm_backpieces-1",
"helm_backpieces-2",
"helm_backpieces-3",
"helm_backpieces-4",
"helm_backpieces-5",
]
armor_back_pieces = [
"armor_back_pieces-1",
"armor_back_pieces-2",
"armor_back_pieces-3",
"armor_back_pieces-4",
"armor_back_pieces-5",
"armor_back_pieces-6",
"armor_back_pieces-7",
"armor_back_pieces-8",
"armor_back_pieces-9",
"armor_back_pieces-10",
"armor_back_pieces-11",
"armor_back_pieces-12",
"armor_back_pieces-13",
"armor_back_pieces-14",
"armor_back_pieces-15",
"armor_back_pieces-16",
"armor_back_pieces-17",
"armor_back_pieces-18",
"armor_back_pieces-19",
"armor_back_pieces-20",
"armor_back_pieces-21",
"armor_back_pieces-22",
"armor_back_pieces-23",
"armor_back_pieces-24",
"armor_back_pieces-25",
"armor_back_pieces-26",
"armor_back_pieces-27",
"armor_back_pieces-28",
"armor_back_pieces-29",
"armor_back_pieces-30",
"armor_back_pieces-31",
"armor_back_pieces-32",
"armor_back_pieces-33",
"armor_back_pieces-34",
"armor_back_pieces-35",
"armor_back_pieces-36",
"armor_back_pieces-37",
"armor_back_pieces-38",
"armor_back_pieces-39",
"armor_back_pieces-40",
"armor_back_pieces-41",
"armor_back_pieces-42",
"armor_back_pieces-43",
'no_armor_back',
]
base_body = [
"base_body-1",
"base_body-2",
"base_body-3"
]
base_body_weights = [33.3, 33.3, 33.4]
tattoos = [
"tattoos-1",
"tattoos-2",
"tattoos-3",
"tattoos-4",
"tattoos-5",
"tattoos-6",
"tattoos-7",
"tattoos-8",
"tattoos-9",
"tattoos-10"
]
tattoos_weights = [0.1, 0.25, 0.1, 0.25, 0.1, 0.25, 0.2, 0.25, 0.5, 98]
battle_armors = [
"battle_armors-1",
"battle_armors-2",
"battle_armors-3",
"battle_armors-4",
"battle_armors-5",
"battle_armors-6",
"battle_armors-7",
"battle_armors-8",
"battle_armors-9",
"battle_armors-10",
"battle_armors-11",
"battle_armors-12",
"battle_armors-13",
"battle_armors-14",
"battle_armors-15",
"battle_armors-16",
"helmetless_armors-1",
"helmetless_armors-2",
"helmetless_armors-3",
"helmetless_armors-4",
"helmetless_armors-5",
"helmetless_armors-6",
"helmetless_armors-7",
"helmetless_armors-8",
"helmetless_armors-9",
"helmetless_armors-10",
"helmetless_armors-11",
"helmetless_armors-12",
"helmetless_armors-13",
"helmetless_armors-14",
"helmetless_armors-15",
"helmetless_armors-16",
"helmetless_armors-17",
"helmetless_armors-18",
"helmetless_armors-19",
"helmetless_armors-20",
"enclosed_armors-1",
"enclosed_armors-2",
"enclosed_armors-3",
"enclosed_armors-4",
"enclosed_armors-5",
"enclosed_armors-6",
"enclosed_armors-7",
"enclosed_armors-8",
"enclosed_armors-9",
"enclosed_armors-10",
"enclosed_armors-11",
"enclosed_armors-12",
"enclosed_armors-13",
"enclosed_armors-14",
"enclosed_armors-15",
"enclosed_armors-16",
"enclosed_armors-17",
"enclosed_armors-18",
"enclosed_armors-19",
"enclosed_armors-20",
"no_battle_armors"
]
battle_armors_weights = [1, 1, 1, 1, 1, 1, 1, 2, 2, 2,
2, 2, 2, 2, 2, 2, 2.25, 2.25, 2.25, 2.25,
2.25, 2.25, 2.25, 2.25, 2.25, 2.25, 1.75, 1.75, 1.75, 1.75,
1.75, 1.75, 1.75, 1.75, 1.75, 1.75, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 15 ]
scars = [
"scars-1",
"scars-2",
"scars-3",
"scars-4"
]
scars_weights = [70, 10, 10, 10]
facial_expressions = [
"facial_expressions-1",
"facial_expressions-2",
"facial_expressions-3",
"facial_expressions-4",
"facial_expressions-5",
"facial_expressions-6",
"facial_expressions-7",
"facial_expressions-8",
"facial_expressions-9",
"facial_expressions-10",
"facial_expressions-11",
"facial_expressions-12",
"facial_expressions-13",
"facial_expressions-14",
"facial_expressions-15",
"facial_expressions-16",
"facial_expressions-17",
"facial_expressions-18",
"facial_expressions-19",
"facial_expressions-20",
"facial_expressions-21",
"facial_expressions-22",
"facial_expressions-23",
"facial_expressions-24",
"facial_expressions-25",
"facial_expressions-26",
"facial_expressions-27",
"facial_expressions-28",
"facial_expressions-29",
"facial_expressions-30",
"facial_expressions-31",
"facial_expressions-32",
"facial_expressions-33",
"facial_expressions-34",
"facial_expressions-35",
"facial_expressions-36",
"facial_expressions-37",
"facial_expressions-38",
"facial_expressions-39",
"facial_expressions-40",
"facial_expressions-41",
"facial_expressions-42",
"facial_expressions-43",
"facial_expressions-44",
"facial_expressions-45",
"facial_expressions-46",
"facial_expressions-47",
"facial_expressions-48",
"facial_expressions-49",
"facial_expressions-50",
"facial_expressions-51",
"facial_expressions-52",
"facial_expressions-53",
"facial_expressions-54",
"facial_expressions-55",
"facial_expressions-56",
"facial_expressions-57",
"facial_expressions-58",
"facial_expressions-59",
"facial_expressions-60",
"facial_expressions-61",
"facial_expressions-62",
"facial_expressions-63",
"facial_expressions-64",
"facial_expressions-65",
"facial_expressions-66",
"facial_expressions-67",
"facial_expressions-68",
"facial_expressions-69",
"facial_expressions-70",
"facial_expressions-71",
"facial_expressions-72",
"facial_expressions-73",
"facial_expressions-74",
"facial_expressions-75",
"facial_expressions-76",
"facial_expressions-77",
"facial_expressions-78",
"facial_expressions-79",
"facial_expressions-80",
"facial_expressions-81",
"facial_expressions-82",
"facial_expressions-83",
"facial_expressions-84",
"facial_expressions-85",
"facial_expressions-86",
"facial_expressions-87"
]
war_paint = [
"war_paint-1",
"war_paint-2",
"war_paint-3",
"war_paint-4",
"war_paint-5",
"war_paint-6",
"war_paint-7",
"war_paint-8",
"war_paint-9",
"war_paint-10",
"war_paint-11",
"war_paint-12",
"war_paint-13",
"war_paint-14",
"war_paint-15",
"war_paint-16",
"war_paint-17",
"war_paint-18",
"war_paint-19",
"war_paint-20",
"war_paint-21",
"war_paint-22",
"war_paint-23",
"war_paint-24",
"war_paint-25",
"war_paint-26",
"war_paint-27",
"war_paint-28",
"war_paint-29",
"war_paint-30",
"war_paint-31",
"war_paint-32",
"war_paint-33",
"war_paint-34",
"war_paint-35",
"war_paint-36",
"war_paint-37",
"war_paint-38",
"war_paint-39",
"war_paint-40",
"war_paint-41",
"war_paint-42",
"war_paint-43",
"war_paint-44",
"war_paint-45",
"war_paint-46",
"war_paint-47",
"war_paint-48",
"war_paint-49",
"war_paint-50",
"war_paint-51",
"war_paint-52",
"war_paint-53",
"war_paint-54",
"war_paint-55",
"war_paint-56",
"war_paint-57",
"war_paint-58",
"war_paint-59",
"war_paint-60",
"war_paint-61",
"war_paint-62",
"war_paint-63",
"war_paint-64",
"war_paint-65",
"war_paint-66",
"war_paint-67",
"war_paint-68",
"war_paint-69",
"war_paint-70",
"war_paint-71",
"war_paint-72",
"war_paint-73",
"war_paint-74",
"war_paint-75",
"war_paint-76",
"war_paint-77",
"war_paint-78",
"war_paint-79",
"war_paint-80",
"war_paint-81",
"war_paint-82",
"war_paint-83",
"war_paint-84",
"war_paint-85",
"war_paint-86",
"war_paint-87",
"war_paint-88",
"war_paint-89",
"war_paint-90",
"war_paint-91",
"war_paint-92",
"war_paint-93",
"war_paint-94",
"war_paint-95",
"war_paint-96",
"war_paint-97",
"war_paint-98",
"war_paint-99",
"war_paint-100",
"war_paint-101",
"war_paint-102",
"war_paint-103",
"war_paint-104",
"war_paint-105",
"war_paint-106",
"war_paint-107",
"war_paint-108",
"war_paint-109",
"war_paint-110",
"war_paint-111",
"war_paint-112",
"war_paint-113",
"war_paint-114",
"war_paint-115",
"war_paint-116",
"war_paint-117",
"war_paint-118",
"war_paint-119",
"war_paint-120",
"war_paint-121",
"war_paint-122",
"war_paint-123",
"war_paint-124",
"war_paint-125",
"war_paint-126",
"war_paint-127",
"war_paint-128",
"war_paint-129",
"war_paint-130",
"war_paint-131",
"war_paint-132",
"war_paint-133",
"war_paint-134",
"war_paint-135",
"war_paint-136",
"war_paint-137",
"war_paint-138",
"war_paint-139",
"war_paint-140",
"war_paint-genesis",
"war_paint-141"
]
war_paint_weights = [1, 2, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 2, 0.14,
1, 2, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 2, 0.5,
1, 2, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 2, 0.5,
0.5, 0.5, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
1, 2, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 2, 0.5,
1, 2, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 2, 0.14,
0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.14,
1, 2, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0.5,
0.5, 0.5, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5,
1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0.5,
1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0.14,
1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 1, 0.14,
0.5, 0.5, 0.5, 0.07, 0.5, 0.5, 0.5, 0.5, 0.5, 0.14,
1, 1, 0.5, 0.08, 0.5, 0.5, 0.5, 0.5, 1, 0.16,
0, 9.5]
eyebrows = [
"eyebrows-1",
"eyebrows-2",
"eyebrows-3",
"eyebrows-4",
"eyebrows-5",
"eyebrows-6",
"eyebrows-7",
"eyebrows-8",
"eyebrows-9",
"eyebrows-10",
"eyebrows-11",
"eyebrows-12",
"eyebrows-13",
"eyebrows-14",
"eyebrows-15",
"eyebrows-16",
"eyebrows-17",
"eyebrows-18",
"eyebrows-19",
"eyebrows-20",
"eyebrows-21",
"eyebrows-22",
"eyebrows-23",
"eyebrows-24",
"eyebrows-25",
"eyebrows-26",
"eyebrows-27",
"eyebrows-28",
"eyebrows-29",
"eyebrows-30",
"eyebrows-31",
"eyebrows-32",
"eyebrows-33",
"eyebrows-34",
"eyebrows-35",
"eyebrows-36",
"eyebrows-37",
"eyebrows-38",
"eyebrows-39",
"eyebrows-40",
"eyebrows-41",
"eyebrows-42",
"eyebrows-43",
"eyebrows-44",
"eyebrows-45"
]
hair = [
"hair-1",
"hair-2",
"hair-3",
"hair-4",
"hair-5",
"hair-6",
"hair-7",
"hair-8",
"hair-9",
"hair-10",
"hair-11",
"hair-12",
"hair-13",
"hair-14",
"hair-15",
"hair-16",
"hair-17",
"hair-18",
"hair-19",
"hair-20",
"hair-21",
"hair-22",
"hair-23",
"hair-24",
"hair-25",
"hair-26",
"hair-27",
"hair-28",
"hair-29",
"hair-30",
"hair-31",
"hair-32",
"hair-33",
"hair-34",
"hair-35",
"hair-36",
"hair-37",
"hair-38",
"hair-39",
"hair-40",
"hair-41",
"hair-42",
"hair-43",
"hair-44",
"hair-45",
"hair-46",
"hair-47",
"hair-48",
"hair-49",
"hair-50",
"hair-51",
"hair-52",
"hair-53",
"hair-54",
"hair-55",
"hair-56",
"hair-57",
"hair-58",
"hair-59",
"hair-60",
"hair-61",
"hair-62",
"hair-63",
"hair-64",
"hair-65",
"hair-66",
"hair-67",
"hair-68",
"hair-69",
"hair-70",
"hair-71",
"hair-72",
"halo",
"Helmet crown gold#2",
"Helmet demon horn#2"
]
hair_weights = [1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.2,
0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.52,
1.6, 1.6, 1.6, 1.6, 1.6, 1.6, 1.8, 1.8, 1.8,
2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.3, 2.3,
1.6, 1.6, 1.6, 1.6, 1.6, 1.6, 1.8, 1.8, 1.8,
1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.2,
0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.52,
1.6, 1.6, 1.6, 1.6, 1.6, 1.6, 1.8, 1.8, 1.8,
1.6, 1.6, 1.8]
beards = [
"beards-1",
"beards-2",
"beards-3",
"beards-4",
"beards-5",
"beards-6",
"beards-7",
"beards-8",
"beards-9",
"beards-10",
"beards-11",
"beards-12",
"beards-13",
"beards-14",
"beards-15",
"beards-16",
"beards-17",
"beards-18",
"beards-19",
"beards-20",
"beards-21",
"beards-22",
"beards-23",
"beards-24",
"beards-25",
"beards-26",
"beards-27",
"beards-28",
"beards-29",
"beards-30",
"beards-31",
"beards-32",
"beards-33",
"beards-34",
"beards-35",
"beards-36",
"beards-37",
"beards-38",
"beards-39",
"beards-40",
"beards-41",
"beards-42",
"beards-43",
"beards-44",
"beards-45",
"beards-46",
"beards-47",
"beards-48",
"beards-49",
"beards-50",
"beards-51",
"beards-52",
"beards-53",
"beards-54",
"beards-55",
"beards-56",
"beards-57",
"beards-58",
"beards-59",
"beards-60",
"beards-61",
"beards-62",
"beards-63",
"beards-64",
"beards-65",
"beards-66",
"beards-67",
"beards-68",
"beards-69",
"beards-70",
"beards-71",
"beards-72"
]
beards_weights = [2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.3, 2.3,
1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.2,
1.6, 1.6, 1.6, 1.6, 1.6, 1.6, 1.8, 1.8, 1.8,
1.6, 1.6, 1.6, 1.6, 1.6, 1.6, 1.8, 1.8, 1.8,
0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.52,
1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.1, 1.2,
0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.56, 0.52,
2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.2, 2.3, 2.3
]
mage_hoods = [
"no_mage_hoods",
"mage_hoods-2",
"no_mage_hoods",
"mage_hoods-4",
"mage_hoods-5",
"no_mage_hoods",
"no_mage_hoods",
"mage_hoods-8",
"mage_hoods-9",
"mage_hoods-10",
"no_mage_hoods",
]
mage_hoods_weights = [19, 0.8, 19, 0.8, 0.8, 19, 19, 0.8, 0.9, 0.9, 19]
arms = [
"arms-1",
"arms-2",
"arms-3",
"mage-1",
"mage-2",
"mage-3",
"arms-7",
"arms-8",
"arms-9",
"arms-10",
"arms-11",
"arms-12",
"arms-13",
"arms-14",
"arms-15",
"no_arms"
]
arms_weights = [7, 7, 7, 7, 7, 2, 7, 7, 7, 7, 7, 7, 7, 7, 7, 0]
weapon = [
"weapon_one_hand-1",
"weapon_one_hand-2",
"weapon_one_hand-3",
"weapon_one_hand-4",
"weapon_one_hand-5",
"weapon_one_hand-6",
"weapon_one_hand-7",
"weapon_one_hand-8",
"weapon_one_hand-9",
"weapon_one_hand-10",
"weapon_one_hand-back-11",
"weapon_one_hand-back-12",
"weapon_one_hand-back-13",
"weapon_one_hand-back-14",
"weapon_one_hand-back-15",
"weapon_one_hand-back-16",
"weapon_one_hand-back-17",
"weapon_one_hand-back-18",
"weapon_one_hand-back-19",
"weapon_one_hand-back-20",
"weapon_one_hand-back-21",
"weapon_one_hand-back-22",
"weapon_one_hand-back-23",
"weapon_one_hand-back-24",
"weapon_dual_wield-1",
"weapon_dual_wield-2",
"weapon_dual_wield-3",
"weapon_dual_wield-4",
"weapon_dual_wield-5",
"weapon_dual_wield-6",
"weapon_dual_wield-7",
"weapon_dual_wield-8",
"weapon_dual_wield-9",
"weapon_dual_wield-10",
"weapon_double_grip-1",
"weapon_double_grip-2",
"weapon_double_grip-3",
"weapon_double_grip-4",
"weapon_double_grip-5",
"weapon_double_grip-6",
"weapon_double_grip-8",
"weapon_double_grip-9",
"weapon_double_grip-10",
"weapon_double_grip-11",
"weapon_double_grip-12",
"weapon_double_grip-13",
"weapon_double_grip-14",
"weapon_double_grip-15",
"weapon_double_grip-16",
"weapon_double_grip-17",
"weapon_double_grip-18",
"weapon_double_grip-19",
"weapon_double_grip-20",
"weapon_double_grip-21",
"weapon_double_grip-22",
"weapon_double_grip-23",
"weapon_staff-1",
"weapon_staff-2",
"weapon_staff-3",
"weapon_staff-4",
"weapon_staff-5",
"weapon_staff-6",
"weapon_staff-7",
"weapon_staff-8",
"weapon_staff-9",
"weapon_staff-10",
"weapon_staff-11",
"weapon_staff-12",
"weapon_staff-13",
"weapon_staff-14",
"weapon_staff-15",
"weapon_staff-16",
"weapon_staff-17",
"weapon_staff-18",
"weapon_staff-19",
"weapon_staff-20",
"weapon_staff-21",
"weapon_staff-22",
"weapon_staff-23",
"weapon_staff-24",
"weapon_mage_effect-1",
"weapon_mage_effect-2",
"weapon_mage_effect-3",
"weapon_mage_effect-4",
"weapon_mage_effect-5",
"weapon_back-1",
"weapon_back-2",
"weapon_back-3",
"weapon_back-4",
"weapon_back-5",
"weapon_back-6",
"weapon_back-7",
"weapon_back-8",
"weapon_back-9",
"weapon_back-10",
"weapon_back-11",
"weapon_back-12",
"no_weapon"
]
weapon_weights = [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1.5,
1, 1.5, 1, 1, 1, 1, 1, 1, 1, 1.5,
1, 1, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5,
1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1, 1, 0.5,
0.5, 0.5, 0.5, 0.5, 0.5, | |
"""
Random walker segmentation algorithm
from *Random walks for image segmentation*, <NAME>, IEEE Trans
Pattern Anal Mach Intell. 2006 Nov;28(11):1768-83.
This code is mostly adapted from scikit-image 0.11.3 release.
Location of file in scikit image: random_walker function and its supporting
sub functions in skimage.segmentation
"""
import warnings
import numpy as np
from scipy import sparse, ndimage as ndi
from sklearn.utils import as_float_array
from scipy.sparse.linalg import cg
def _make_graph_edges_3d(n_x, n_y, n_z):
"""Returns a list of edges for a 3D image.
Parameters
----------
n_x : integer
The size of the grid in the x direction.
n_y : integer
The size of the grid in the y direction.
n_z : integer
The size of the grid in the z direction.
Returns
-------
edges : (2, N) ndarray
With the total number of edges:
N = n_x * n_y * (nz - 1) +
n_x * (n_y - 1) * nz +
(n_x - 1) * n_y * nz
Graph edges with each column describing a node-id pair.
"""
vertices = np.arange(n_x * n_y * n_z).reshape((n_x, n_y, n_z))
edges_deep = np.vstack((vertices[:, :, :-1].ravel(),
vertices[:, :, 1:].ravel()))
edges_right = np.vstack((vertices[:, :-1].ravel(),
vertices[:, 1:].ravel()))
edges_down = np.vstack((vertices[:-1].ravel(), vertices[1:].ravel()))
edges = np.hstack((edges_deep, edges_right, edges_down))
return edges
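# Quick sanity sketch (illustration only): for a 2 x 3 x 4 grid the edge count
# matches the formula in the docstring,
#   N = 2*3*(4-1) + 2*(3-1)*4 + (2-1)*3*4 = 18 + 16 + 12 = 46,
# so _make_graph_edges_3d(2, 3, 4) returns an array of shape (2, 46).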
def _compute_weights_3d(data, spacing, beta=130, eps=1.e-6):
# Weight calculation is main difference in multispectral version
# Original gradient**2 replaced with sum of gradients ** 2
gradients = 0
for channel in range(0, data.shape[-1]):
gradients += _compute_gradients_3d(data[..., channel],
spacing) ** 2
# All channels considered together in this standard deviation
beta /= 10 * data.std()
gradients *= beta
weights = np.exp(- gradients)
weights += eps
return weights
def _compute_gradients_3d(data, spacing):
gr_deep = np.abs(data[:, :, :-1] - data[:, :, 1:]).ravel() / spacing[2]
gr_right = np.abs(data[:, :-1] - data[:, 1:]).ravel() / spacing[1]
gr_down = np.abs(data[:-1] - data[1:]).ravel() / spacing[0]
return np.r_[gr_deep, gr_right, gr_down]
def _make_laplacian_sparse(edges, weights):
"""
Sparse implementation
"""
pixel_nb = edges.max() + 1
diag = np.arange(pixel_nb)
i_indices = np.hstack((edges[0], edges[1]))
j_indices = np.hstack((edges[1], edges[0]))
data = np.hstack((-weights, -weights))
lap = sparse.coo_matrix((data, (i_indices, j_indices)),
shape=(pixel_nb, pixel_nb))
connect = - np.ravel(lap.sum(axis=1))
lap = sparse.coo_matrix(
(np.hstack((data, connect)), (np.hstack((i_indices, diag)),
np.hstack((j_indices, diag)))),
shape=(pixel_nb, pixel_nb))
return lap.tocsr()
def _clean_labels_ar(X, labels):
X = X.astype(labels.dtype)
labels = np.ravel(labels)
labels[labels == 0] = X
return labels
def _buildAB(lap_sparse, labels):
"""
Build the matrix A and rhs B of the linear system to solve.
A and B are two block of the laplacian of the image graph.
"""
labels = labels[labels >= 0]
indices = np.arange(labels.size)
unlabeled_indices = indices[labels == 0]
seeds_indices = indices[labels > 0]
# The following two lines take most of the time in this function
B = lap_sparse[unlabeled_indices][:, seeds_indices]
lap_sparse = lap_sparse[unlabeled_indices][:, unlabeled_indices]
nlabels = labels.max()
rhs = []
for lab in range(1, nlabels + 1):
mask = (labels[seeds_indices] == lab)
fs = sparse.csr_matrix(mask)
fs = fs.transpose()
rhs.append(B * fs)
return lap_sparse, rhs
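# Sketch of how these pieces are typically combined (assumption, mirroring the
# upstream scikit-image solver): for each label, solve A x = -B x_m by conjugate
# gradients, then assign each unlabeled pixel the label with the largest x:
#
#     lap_sparse, rhs = _buildAB(lap_sparse, labels)
#     X = [cg(lap_sparse, -np.ravel(r.todense()), tol=1.e-3)[0] for r in rhs]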
def _mask_edges_weights(edges, weights, mask):
"""
Remove edges of the graph connected to masked nodes, as well as
corresponding weights of the edges.
"""
mask0 = np.hstack((mask[:, :, :-1].ravel(), mask[:, :-1].ravel(),
mask[:-1].ravel()))
mask1 = np.hstack((mask[:, :, 1:].ravel(), mask[:, 1:].ravel(),
mask[1:].ravel()))
ind_mask = np.logical_and(mask0, mask1)
edges, weights = edges[:, ind_mask], weights[ind_mask]
max_node_index = edges.max()
# Reassign edges labels to 0, 1, ... edges_number - 1
order = np.searchsorted(np.unique(edges.ravel()),
np.arange(max_node_index + 1))
edges = order[edges.astype(np.int64)]
return edges, weights
def _build_laplacian(data, spacing, mask=None, beta=50):
l_x, l_y, l_z = tuple(data.shape[i] for i in range(3))
edges = _make_graph_edges_3d(l_x, l_y, l_z)
weights = _compute_weights_3d(data, spacing, beta=beta, eps=1.e-10)
if mask is not None:
edges, weights = _mask_edges_weights(edges, weights, mask)
lap = _make_laplacian_sparse(edges, weights)
del edges, weights
return lap
def _random_walker(data, labels, beta=130, tol=1.e-3, copy=True, spacing=None):
"""Random walker algorithm for segmentation from markers.
Parameters
----------
data : array_like
Image to be segmented in phases. Data spacing is assumed isotropic unless
the `spacing` keyword argument is used.
labels : array of ints, of same shape as `data` without channels dimension
Array of seed markers labeled with different positive integers
for different phases. Zero-labeled pixels are unlabeled pixels.
Negative labels correspond to inactive pixels that are not taken
into account (they are removed from the graph). If labels are not
consecutive integers, the labels array will be transformed so that
labels are consecutive.
beta : float, optional
Penalization coefficient for the random walker motion
(the greater `beta`, the more difficult the diffusion).
Default=130.
tol : float, optional
Tolerance to achieve when solving the linear system, in
cg' mode. Default=1e-3.
copy : bool, optional
If copy is False, the `labels` array will be overwritten with
the result of the segmentation. Use copy=False if you want to
save on memory. Default=True.
spacing : iterable of floats, optional
Spacing between voxels in each spatial dimension. If `None`, then
the spacing between pixels/voxels in each dimension is assumed 1.
Returns
-------
output : ndarray
An array of ints of same shape as `data`, in which each pixel has
been labeled according to the marker that reached the pixel first
by anisotropic diffusion.
Notes
-----
The `spacing` argument is specifically for anisotropic datasets, where
data points are spaced differently in one or more spatial dimensions.
Anisotropic data is commonly encountered in medical imaging.
The algorithm was first proposed in [1]_.
The algorithm solves the diffusion equation at infinite times for
sources placed on markers of each phase in turn. A pixel is labeled with
the phase that has the greatest probability to diffuse first to the pixel.
The diffusion equation is solved by minimizing x.T L x for each phase,
where L is the Laplacian of the weighted graph of the image, and x is
the probability that a marker of the given phase arrives first at a pixel
by diffusion (x=1 on markers of the phase, x=0 on the other markers, and
the other coefficients are looked for). Each pixel is attributed the label
for which it has a maximal value of x. The Laplacian L of the image
is defined as:
- L_ii = d_i, the number of neighbors of pixel i (the degree of i)
- L_ij = -w_ij if i and j are adjacent pixels
The weight w_ij is a decreasing function of the norm of the local gradient.
This ensures that diffusion is easier between pixels of similar values.
When the Laplacian is decomposed into blocks of marked and unmarked
pixels::
L = [[M, B.T],
     [B,  A ]]
with first indices corresponding to marked pixels, and then to unmarked
pixels, minimizing x.T L x for one phase amounts to solving::
A x = - B x_m
where x_m = 1 on markers of the given phase, and 0 on other markers.
This linear system is solved in the algorithm using a direct method for
small images, and an iterative method for larger images.
References
----------
.. [1] Random walks for image segmentation, <NAME>y, IEEE Trans Pattern
Anal Mach Intell. 2006 Nov;28(11):1768-83.
"""
out_labels = np.copy(labels)
if (labels != 0).all():
warnings.warn('Random walker only segments unlabeled areas, where '
'labels == 0. No zero valued areas in labels were '
'found. Returning provided labels.')
return out_labels
if (labels == 0).all():
warnings.warn('Random walker received no seed label. Returning provided labels.')
return out_labels
# We always take multichannel to be False, since we are not using this
# strictly for image processing with RGB values.
multichannel = False
if not multichannel:
if data.ndim < 2 or data.ndim > 3:
raise ValueError('For non-multichannel input, data must be of '
'dimension 2 or 3.')
dims = data.shape # To reshape final labeled result
data = np.atleast_3d(as_float_array(data))[..., np.newaxis]
# Spacing kwarg checks
if spacing is None:
spacing = np.asarray((1.,) * 3)
elif len(spacing) == len(dims):
if len(spacing) == 2: # Need a dummy spacing for singleton 3rd dim
spacing = np.r_[spacing, 1.]
else: # Convert to array
spacing = np.asarray(spacing)
else:
raise ValueError('Input argument `spacing` incorrect, should be an '
'iterable with one number per spatial dimension.')
if copy:
labels = np.copy(labels)
label_values = np.unique(labels)
| |
<filename>gwent/vendor/pygwinc_clone/gwinc/noise/substratethermal.py
from __future__ import division, print_function
from numpy import exp, inf, pi, sqrt
import numpy as np
import scipy.special
import scipy.integrate
from .. import const
from ..const import BESSEL_ZEROS as zeta
from ..const import J0M as j0m
def carrierdensity_adiabatic(f, ifo):
"""strain noise psd arising from charge carrier density
fluctuations in ITM substrate (for semiconductor substrates)."""
Omega = 2 * pi * f
H = ifo.Materials.MassThickness
gammaElec = ifo.Materials.Substrate.ElectronIndexGamma
gammaHole = ifo.Materials.Substrate.HoleIndexGamma
diffElec = ifo.Materials.Substrate.ElectronDiffusion
diffHole = ifo.Materials.Substrate.HoleDiffusion
cdDens = ifo.Materials.Substrate.CarrierDensity
r0 = ifo.Optics.ITM.BeamRadius / np.sqrt(2)
L = ifo.Infrastructure.Length
gPhase = ifo.gwinc.finesse * 2 / pi
psdElec = (
4 * H * gammaElec ** 2 * cdDens * diffElec / (pi * r0 ** 4 * Omega ** 2)
) # units are meters
psdHole = (
4 * H * gammaHole ** 2 * cdDens * diffHole / (pi * r0 ** 4 * Omega ** 2)
) # units are meters
psdMeters = 2 * (psdElec + psdHole) # electrons and holes for two ITMs
n = psdMeters / (gPhase * L) ** 2
return n
def carrierdensity_exact(f, ifo):
"""Strain noise arising from charge carrier density fluctuations in ITM substrate
For semiconductor substrates
"""
w = ifo.Optics.ITM.BeamRadius
L = ifo.Infrastructure.Length
H = ifo.Materials.MassThickness
kBT = const.kB * ifo.Materials.Substrate.Temp
hbar = const.hbar
c = const.c
diffElec = ifo.Materials.Substrate.ElectronDiffusion
diffHole = ifo.Materials.Substrate.HoleDiffusion
mElec = ifo.Materials.Substrate.ElectronEffMass
mHole = ifo.Materials.Substrate.HoleEffMass
cdDens = ifo.Materials.Substrate.CarrierDensity
gammaElec = ifo.Materials.Substrate.ElectronIndexGamma
gammaHole = ifo.Materials.Substrate.HoleIndexGamma
gPhase = ifo.gwinc.finesse * 2 / pi
omega = 2 * pi * f
def integrand(k, om, D):
return D * k ** 3 * exp(-(k ** 2) * w ** 2 / 4) / (D ** 2 * k ** 4 + om ** 2)
integralElec = np.array(
[
scipy.integrate.quad(lambda k: integrand(k, om, diffElec), 0, inf)[0]
for om in omega
]
)
integralHole = np.array(
[
scipy.integrate.quad(lambda k: integrand(k, om, diffHole), 0, inf)[0]
for om in omega
]
)
# From P1400084 Heinert et al. Eq. 15
# psdCD = @(gamma,m,int) 2*(3/pi^7)^(1/3)*kBT*H*gamma^2*m/hbar^2*cdDens^(1/3)*int; %units are meters
def psdCD(gamma, m, int_):
return 2 / pi * H * gamma ** 2 * cdDens * int_ # units are meters
psdElec = psdCD(gammaElec, mElec, integralElec)
psdHole = psdCD(gammaHole, mHole, integralHole)
psdMeters = 2 * (psdElec + psdHole)
n = psdMeters / (gPhase * L) ** 2
return n
carrierdensity = carrierdensity_adiabatic
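# Note (sketch, assumption): the alias above exposes the adiabatic approximation
# by default; rebinding it to carrierdensity_exact switches to the integral form.
# Both return a one-sided strain power spectral density, so an amplitude spectral
# density follows as sqrt(n).
#
#     carrierdensity = carrierdensity_exact  # opt into the exact computation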
def thermorefractiveITM_adiabatic(f, ifo):
"""strain noise psd arising from thermorefractive
fluctuations in ITM substrate (for semiconductor substrates)."""
Omega = 2 * pi * f
H = ifo.Materials.MassThickness
beta = ifo.Materials.Substrate.dndT
kappa = ifo.Materials.Substrate.MassKappa
rho = ifo.Materials.Substrate.MassDensity
C = ifo.Materials.Substrate.MassCM
Temp = ifo.Materials.Substrate.Temp
kBT = const.kB * Temp
r0 = ifo.Optics.ITM.BeamRadius / np.sqrt(2)
L = ifo.Infrastructure.Length
gPhase = ifo.gwinc.finesse * 2 / pi
psd = (
4
* H
* beta ** 2
* kappa
* kBT
* Temp
/ (pi * r0 ** 4 * Omega ** 2 * (rho * C) ** 2)
) # units are meters
psdMeters = 2 * psd # two ITMs
n = psdMeters / (gPhase * L) ** 2
return n
def thermorefractiveITM_exact(f, ifo):
"""Strain noise from thermorefractive fluctuations in ITM substrate
For semiconductor substrates.
"""
w = ifo.Optics.ITM.BeamRadius
L = ifo.Infrastructure.Length
H = ifo.Materials.MassThickness
kBT = const.kB * ifo.Materials.Substrate.Temp
Temp = ifo.Materials.Substrate.Temp
c = const.c
rho = ifo.Materials.Substrate.MassDensity
beta = ifo.Materials.Substrate.dndT
C = ifo.Materials.Substrate.MassCM
kappa = ifo.Materials.Substrate.MassKappa
gPhase = ifo.gwinc.finesse * 2 / pi
omega = 2 * pi * f
def integrand(k, om, D):
return D * k ** 3 * exp(-(k ** 2) * w ** 2 / 4) / (D ** 2 * k ** 4 + om ** 2)
inte = np.array(
[
scipy.integrate.quad(lambda k: integrand(k, om, kappa / (rho * C)), 0, inf)[
0
]
for om in omega
]
)
# From P1400084 Heinert et al. Eq. 15
# psdCD = @(gamma,m,int) 2*(3/pi^7)^(1/3)*kBT*H*gamma^2*m/hbar^2*cdDens^(1/3)*int; %units are meters
psdTR = lambda int_: 2 / pi * H * beta ** 2 * kBT * Temp / (rho * C) * int_
# units are meters
psd = psdTR(inte)
psdMeters = 2 * psd # two itms
n = psdMeters / (gPhase * L) ** 2
return n
thermorefractiveITM = thermorefractiveITM_adiabatic
def subbrownian(f, ifo):
"""Strain noise from the Brownian thermal noise due to substrate mechanical loss"""
wITM = ifo.Optics.ITM.BeamRadius
wETM = ifo.Optics.ETM.BeamRadius
Y = ifo.Materials.Substrate.MirrorY
sigma = ifo.Materials.Substrate.MirrorSigma
c2 = ifo.Materials.Substrate.c2
n = ifo.Materials.Substrate.MechanicalLossExponent
alphas = ifo.Materials.Substrate.Alphas
L = ifo.Infrastructure.Length
kBT = const.kB * ifo.Materials.Substrate.Temp
# Bulk substrate contribution
phibulk = c2 * f ** n
cITM, aITM = subbrownianFiniteCorr(ifo, "ITM")
cETM, aETM = subbrownianFiniteCorr(ifo, "ETM")
cbulk = 8 * kBT * (aITM + aETM) * phibulk / (2 * pi * f)
# Surface loss contribution
# csurfETM = alphas/(Y*pi*wETM^2);
# csurfITM = alphas/(Y*pi*wITM^2);
csurfETM = alphas * (1 - 2 * sigma) / ((1 - sigma) * Y * pi * wETM ** 2)
csurfITM = alphas * (1 - 2 * sigma) / ((1 - sigma) * Y * pi * wITM ** 2)
csurf = 8 * kBT * (csurfITM + csurfETM) / (2 * pi * f)
# account for 2 ITM and 2 ETM, and convert to strain with 1/L^2
n = 2 * (csurf + cbulk) * ifo.gwinc.dhdl_sqr
return n
def subbrownianFiniteCorr(ifo, opticName):
"""Amplitude coefficient of mirror thermal noise
Contribution for finite-size test masses.
[cftm, aftm] = subbrownianFiniteCorr(ifo, opticName)
cftm = finite mirror correction factor
aftm = amplitude coefficient for thermal noise:
thermal noise contribution to displacement noise is
S_x(f) = (8 * kB * T / (2*pi*f)) * Phi(f) * aftm
Equation references to Bondu, et al. Physics Letters A 246 (1998)
227-236 (hereafter BHV) and Liu and Thorne gr-qc/0002055 (hereafter LT)
"""
# get some numbers
a = ifo.Materials.MassRadius
h = ifo.Materials.MassThickness
w = ifo.Optics[opticName].BeamRadius
Y = ifo.Materials.Substrate.MirrorY
sigma = ifo.Materials.Substrate.MirrorSigma
# do the work
r0 = w / sqrt(2) # LT uses e-folding of power
km = zeta / a
Qm = exp(-2 * km * h) # LT eq. 35a
Um = (1 - Qm) * (1 + Qm) + 4 * h * km * Qm
Um = Um / ((1 - Qm) ** 2 - 4 * (km * h) ** 2 * Qm) # LT 53 (BHV eq. btwn 29 & 30)
x = exp(-((zeta * r0 / a) ** 2) / 4)
s = sum(x / (zeta ** 2 * j0m)) # LT 57
x2 = x * x
U0 = sum(Um * x2 / (zeta * j0m ** 2))
U0 = U0 * (1 - sigma) * (1 + sigma) / (pi * a * Y) # LT 56 (BHV eq. 3)
p0 = 1 / (pi * a ** 2) # LT 28
DeltaU = (pi * h ** 2 * p0) ** 2
DeltaU = DeltaU + 12 * pi * h ** 2 * p0 * sigma * s
DeltaU = DeltaU + 72 * (1 - sigma) * s ** 2
DeltaU = DeltaU * a ** 2 / (6 * pi * h ** 3 * Y) # LT 54
aftm = DeltaU + U0 # LT 58 (eq. following BHV 31)
# amplitude coef for infinite TM
# factored out: (8 * kB * T * Phi) / (2 * pi * f)
aitm = (1 - sigma ** 2) / (2 * sqrt(2 * pi) * Y * r0) # LT 59
# finite mirror correction
cftm = aftm / aitm
return cftm, aftm
def subtherm(f, ifo):
"""Noise from thermoelastic fluctuations in mirror"""
wITM = ifo.Optics.ITM.BeamRadius
wETM = ifo.Optics.ETM.BeamRadius
sigma = ifo.Materials.Substrate.MirrorSigma
L = ifo.Infrastructure.Length
kBT = const.kB * ifo.Materials.Substrate.Temp
rho = ifo.Materials.Substrate.MassDensity
kappa = ifo.Materials.Substrate.MassKappa # thermal conductivity
alpha = ifo.Materials.Substrate.MassAlpha # thermal expansion
CM = ifo.Materials.Substrate.MassCM # heat capacity @ constant mass
Temp = ifo.Materials.Substrate.Temp # temperature
S0 = (
8 * (1 + sigma) ** | |
from future.utils import iteritems
from builtins import range
import numpy as np
from numpy.polynomial.polynomial import polyval3d
from numpy.polynomial.legendre import legval
from numpy.polynomial.chebyshev import chebval
import scipy.optimize as opt
from collections import OrderedDict
from operator import add, mul
from itertools import product, chain
from peri import util
from peri.comp import Component
from peri.interpolation import BarnesInterpolation1D,BarnesInterpolationND
#=============================================================================
# Pure 3d functional representations of ILMs
#=============================================================================
class Polynomial3D(Component):
def __init__(self, order=(1,1,1), tileinfo=None, constval=None,
category='ilm', shape=None, float_precision=np.float64):
"""
A polynomial 3D class for updating large fields of polys.
Parameters
----------
shape : `peri.util.Tile`
shape of the field (z,y,x)
order : tuple
number of terms in each direction
tileinfo : tuple of 2 `peri.util.Tile`
These objects help in the transfer of fields from different
sections of the same image to new fields. `tileinfo` is a tuple
containing the Tile representing the entire image as well as the
Tile representing this particular section of field. (typically
given by `peri.rawimage.tile`)
constval : float
The initial value of the entire field, if a constant.
float_precision : numpy float datatype
One of numpy.float16, numpy.float32, numpy.float64; precision
for precomputed arrays. Default is np.float64; make it 16 or 32
to save memory.
"""
self.shape = shape
self.order = order
self.tileinfo = tileinfo
self.category = category
c = category
if float_precision not in (np.float64, np.float32, np.float16):
raise ValueError('float_precision must be one of np.float64, ' +
'np.float32, np.float16')
self.float_precision = float_precision
# set up the parameter mappings and values
params, values = [], []
self.param_term = {}
for order in product(*(range(o) for o in self.order)):
p = c+'-%i-%i-%i' % order
self.param_term[p] = order
params.append(p)
values.append(0.0)
if constval:
values[0] = constval
super(Polynomial3D, self).__init__(
params=params, values=values, category=category
)
if self.shape:
self.initialize()
def initialize(self):
self.r = self.rvecs()
self.set_tile(self.shape)
self.field = np.zeros(self.shape.shape, dtype=self.float_precision)
self.update(self.params, self.values)
def rvecs(self):
# normalize all sizes to a strict upper bound on image size
# so we can transfer ILM between different images
if self.tileinfo:
img, inner = self.tileinfo
vecs = img.coords(norm=img.shape)
vecs = [v[inner.slicer] for v in vecs]
else:
vecs = self.shape.coords(norm=self.shape.shape)
return vecs
def term_ijk(self, index):
i,j,k = index
return self.r[0]**i * self.r[1]**j * self.r[2]**k
def term(self, index):
if self.__dict__.get('_last_index') and index == self._last_index:
return self._last_term
else:
term = self.term_ijk(index)
self._last_term = term
self._last_index = index
return self._last_term
def set_tile(self, tile):
self.tile = tile
def update(self, params, values):
params = util.listify(params)
values = util.listify(values)
if len(params) < len(self.params)//2:
for p,v1 in zip(params, values):
v0 = self.get_values(p)
tm = self.param_term[p]
self.field -= v0 * self.term(tm)
self.set_values(p, v1)
self.field += v1 * self.term(tm)
else:
self.set_values(params, values)
self.field = np.zeros(self.shape.shape, dtype=self.float_precision)
for p,v in zip(self.params, self.values):
self.field += v * self.term(self.param_term[p])
def get(self):
return self.field[self.tile.slicer]
def get_params(self):
return self.params
def get_update_tile(self, params, values):
return self.shape.copy()
def nopickle(self):
return super(Polynomial3D, self).nopickle() + [
'r', 'field', '_last_term', '_last_index'
]
def __str__(self):
return "{} [{}]".format(
self.__class__.__name__, self.order
)
def __repr__(self):
return self.__str__()
def __getstate__(self):
odict = self.__dict__.copy()
util.cdd(odict, self.nopickle())
return odict
def __setstate__(self, idict):
self.__dict__.update(idict)
##Compatibility patches...
self.float_precision = self.__dict__.get('float_precision', np.float64)
##end compatibility patch
if self.shape:
self.initialize()
class LegendrePoly3D(Polynomial3D):
def __init__(self, *args, **kwargs):
""" Same arguments are Polynomial3D """
super(LegendrePoly3D, self).__init__(*args, **kwargs)
def rvecs(self):
vecs = super(LegendrePoly3D, self).rvecs()
vecs = [2*v - 1 for v in vecs]
return vecs
def term_ijk(self, index):
i,j,k = index
ci = np.zeros(i+1)
cj = np.zeros(j+1)
ck = np.zeros(k+1)
ci[-1] = cj[-1] = ck[-1] = 1
return legval(self.r[0], ci) * legval(self.r[1], cj) * legval(self.r[2], ck)
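# Sketch (illustration only): term_ijk above selects a single Legendre degree by
# passing a coefficient vector whose only nonzero entry is the last one, e.g.
#
#     c = np.zeros(3); c[-1] = 1
#     legval(x, c)  # evaluates P_2(x) = (3*x**2 - 1) / 2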
#=============================================================================
# 2+1d functional representations of ILMs, p(x,y)+q(z)
#=============================================================================
class Polynomial2P1D(Polynomial3D):
def __init__(self, order=(1,1,1), tileinfo=None, constval=None,
operation='*', category='ilm', shape=None,
float_precision=np.float64):
"""
A polynomial 2+1D class for updating large fields of polys. The form
of these polynomials is P(x,y) (op) Q(z), separated in the z-direction.
Parameters
----------
shape : tuple
shape of the field (z,y,x)
order : tuple
number of terms in each direction
tileinfo : tuple of 2 `peri.util.Tile`
These objects help in the transfer of fields from different
sections of the same image to new fields. `tileinfo` is a tuple
containing the Tile representing the entire image as well as the
Tile representing this particular section of field. (typically
given by `peri.rawimage.tile`)
constval : float
The initial value of the entire field, if a constant.
operation : string
Type of joining operation between the (x,y) and (z) poly. Can be
either '*' or '+'
float_precision : numpy float datatype
One of numpy.float16, numpy.float32, numpy.float64; precision
for precomputed arrays. Default is np.float64; make it 16 or 32
to save memory.
"""
self.shape = shape
self.operation = operation
self.order = order
self.tileinfo = tileinfo
self.category = category
c = self.category
if float_precision not in (np.float64, np.float32, np.float16):
raise ValueError('float_precision must be one of np.float64, ' +
'np.float32, np.float16')
self.float_precision = float_precision
# set up the parameter mappings and values
params, values = [], []
self.xy_param = {}
self.z_param = {}
for order in product(*(range(o) for o in self.order[1:][::-1])):
p = c+'-xy-%i-%i' % order
self.xy_param[p] = order
params.append(p)
values.append(0.0)
for order in range(self.order[0]):
p = c+'-z-%i' % order
self.z_param[p] = (order+1,)
params.append(p)
values.append(0.0)
# setup the basics of the component now
Component.__init__(self, params, values, category=category)
# set up the appropriate zero terms for the supplied constant value
# parameter if there.
if constval:
self.set_values(c+'-xy-0-0', constval)
if self.shape:
self.initialize()
def initialize(self):
self.r = self.rvecs()
self.field_xy = 0*self.term_ijk((0,0))
self.field_z = 0*self.term_ijk((0,))
super(Polynomial2P1D, self).initialize()
def calc_field(self):
self.field_xy = 0*self.term_ijk((0,0))
self.field_z = 0*self.term_ijk((0,))
for p,v in zip(self.params, self.values):
if p in self.xy_param:
order = self.xy_param[p]
term = self.field_xy
else:
order = self.z_param[p]
term = self.field_z
term += v * self.term(order)
op = {'*': mul, '+': add}[self.operation]
self.field[:] = op(self.field_xy, 1.0 + self.field_z)
return self.field
def term_ijk(self, index):
if len(index) == 2:
i,j = index
return self.r[2]**i * self.r[1]**j
elif len(index) == 1:
k = index[0]
return self.r[0]**k
def update(self, params, values):
params = util.listify(params)
values = util.listify(values)
if len(params) < len(self.params)//2:
for p,v1 in zip(params, values):
if p in self.xy_param:
order = self.xy_param[p]
term = self.field_xy
else:
order = self.z_param[p]
term = self.field_z
v0 = self.get_values(p)
term -= v0 * self.term(order)
self.set_values(p,v1)
term += v1 * self.term(order)
op = {'*': mul, '+': add}[self.operation]
self.field[:] = op(self.field_xy, 1.0 + self.field_z)
else:
self.set_values(params, values)
self.field[:] = self.calc_field()
def nopickle(self):
return super(Polynomial2P1D, self).nopickle() + [
'r', 'field', 'field_xy', 'field_z',
'_last_term', '_last_index'
]
class LegendrePoly2P1D(Polynomial2P1D):
def __init__(self, order=(1,1,1), **kwargs):
super(LegendrePoly2P1D, self).__init__(order=order, **kwargs)
def rvecs(self):
vecs = super(LegendrePoly2P1D, self).rvecs()
vecs = [2*v - 1 for v in vecs]
return vecs
def term_ijk(self, index):
if len(index) == 2:
i,j = index
ci = np.diag(np.ones(i+1))[i]
cj = np.diag(np.ones(j+1))[j]
return legval(self.r[2], ci) * legval(self.r[1], cj)
elif len(index) == 1:
k = index[0]
ck = np.diag(np.ones(k+1))[k]
return legval(self.r[0], ck)
class ChebyshevPoly2P1D(Polynomial2P1D):
def __init__(self, order=(1,1,1), **kwargs):
super(ChebyshevPoly2P1D, self).__init__(order=order, **kwargs)
def term_ijk(self, index):
if len(index) == 2:
i,j = index
ci = np.diag(np.ones(i+1))[i]
cj = np.diag(np.ones(j+1))[j]
return chebval(self.r[2], ci) * chebval(self.r[1], cj)
elif len(index) == 1:
k = index[0]
ck = np.diag(np.ones(k+1))[k]
return chebval(self.r[0], ck)
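# Sketch (illustration only): the 2+1D classes above assemble the field as
# op(P(x, y), 1 + Q(z)) with op either '*' or '+' (see calc_field), e.g. for '*':
#
#     field = field_xy * (1.0 + field_z)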
#=============================================================================
# a complex hidden variable representation of the ILM
# something like (p(x,y)+m(x,y))*q(z) where m is determined by local models
#=============================================================================
class BarnesPoly(Component, util.CompatibilityPatch):
category = 'ilm'
def __init__(self, npts=(40,20), zorder=7, op='*', barnes_dist=1.75,
barnes_clip_size=3, local_updates=True, category='ilm', shape=None,
float_precision=np.float64, donorm=True):
"""
Superclass for ilms of the form Barnes * poly
Parameters
----------
shape : iterable
size of the field in pixels, needs to be padded shape
npts : tuple of ints, optional
Number of control points used for the Barnes interpolant b_k
in the x-y sum. Default is (40,20)
zorder : integer
Number of orders for the z-polynomial.
op : string
The operation to perform between Barnes and LegPoly, '*' or '+'.
barnes_dist : float
Fractional distance to use for the barnes interpolator
local_updates : boolean
Whether to perform local updates on the ILM
float_precision : numpy float datatype
One of numpy.float16, numpy.float32, numpy.float64; precision
for precomputed arrays. Default is np.float64; make it 16 or 32
to save memory.
"""
self.shape = shape
self.local_updates = local_updates
self.barnes_clip_size = barnes_clip_size
self.barnes_dist = barnes_dist
self.category = category
self.zorder = zorder
self.npts = npts
self.op = op
if float_precision | |
= t2[i][j][k][l]
self.assertEqual(qt1[i][j][k][l], qt2[i][j][k][l])
# 1D tensor assignment verification
qt1[i][j][k][2:l] = t2[i][j][k][2:l]
self.assertEqual(qt1[i][j][k][2:l], qt2[i][j][k][2:l])
qt1[i][j][k] = t2[i][j][k]
self.assertEqual(qt1[i][j][k], qt2[i][j][k])
# 2D tensor assignment verification
qt1[i][j][k:] = t2[i][j][k:]
self.assertEqual(qt1[i][j][k:], qt2[i][j][k:])
qt1[i][j] = t2[i][j]
self.assertEqual(qt1[i][j], qt2[i][j])
# 3D tensor assignment verification
qt1[i][j:] = t2[i][j:]
self.assertEqual(qt1[i][j:], qt2[i][j:])
qt1[i] = t2[i]
self.assertEqual(qt1[i], qt2[i])
# 4D tensor assignment verification
qt1[:1] = t2[:1]
self.assertEqual(qt1[:1], qt2[:1])
qt1[:] = t2[:]
self.assertEqual(qt1[:], qt2[:])
# non-contiguous case **this should raise an exception**
with self.assertRaisesRegex(RuntimeError, "Quantized copy only works with contiguous Tensors"):
qt1[:, 0] = t2[:, 0]
def test_qtensor_float_assignment(self):
# Scalar Tensor
# item
scale = 1.0
zero_point = 2
r = torch.ones(1, dtype=torch.float)
for dtype in [torch.qint8, torch.quint8, torch.qint32]:
qr = torch.quantize_per_tensor(r, scale, zero_point, dtype=dtype)
self.assertEqual(qr.item(), 1)
self.assertEqual(qr[0].item(), 1)
# assignment
self.assertTrue(qr[0].is_quantized)
qr[0] = 11.3 # float assignment
self.assertEqual(qr.item(), 11)
x = torch.ones(1, dtype=torch.float) * 15.3
# Copying from a float Tensor
qr[:] = x
self.assertEqual(qr.item(), 15)
dtype_msg = str(dtype) + ", "
self.assertEqual(' '.join(str(qr).split()),
"tensor([15.], size=(1,), dtype=" + dtype_msg +
"quantization_scheme=torch.per_tensor_affine, " +
"scale=1.0, zero_point=2)")
def test_qtensor_quant_dequant(self):
scale = 0.02
zero_point = 2
for device in get_supported_device_types():
r = torch.rand(3, 2, 4, 5, dtype=torch.float, device=device) * 4 - 2
for memory_format in [torch.contiguous_format, torch.channels_last]:
r = r.contiguous(memory_format=memory_format)
for dtype in [torch.qint8, torch.quint8, torch.qint32]:
qr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
rqr = qr.dequantize()
self.assertTrue(np.allclose(r.cpu().numpy(), rqr.cpu().numpy(), atol=2 / scale))
# Also check 5D tensors work.
for device in get_supported_device_types():
r = torch.rand(3, 2, 4, 5, 6, dtype=torch.float, device=device) * 4 - 2
for dtype in [torch.qint8, torch.quint8, torch.qint32]:
qr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
rqr = qr.dequantize()
self.assertTrue(np.allclose(r.cpu().numpy(), rqr.cpu().numpy(), atol=2 / scale))
# legacy constructor/new doesn't support qtensors
def test_qtensor_legacy_new_failure(self):
r = torch.rand(3, 2, dtype=torch.float) * 4 - 2
scale = 0.02
zero_point = 2
qr = torch.quantize_per_tensor(r, scale, zero_point, torch.quint8)
self.assertRaises(RuntimeError, lambda: qr.new(device='cpu'))
self.assertRaises(RuntimeError, lambda: qr.new(r.storage()))
self.assertRaises(RuntimeError, lambda: qr.new(r))
self.assertRaises(RuntimeError, lambda: qr.new(torch.Size([2, 3])))
self.assertRaises(RuntimeError, lambda: qr.new([6]))
def test_per_channel_qtensor_creation_cpu(self):
self._test_per_channel_qtensor_creation(torch.device('cpu'))
def _test_dequantize_fp16(self, device):
data_orig = torch.randn(1, 2, 4, 4, dtype=torch.float, device=device)
data_fp16 = data_orig.to(torch.float16)
data_fp16_dequant = data_fp16.dequantize()
data_fp16_fp32 = data_fp16.to(torch.float)
self.assertTrue(data_fp16_dequant.dtype == torch.float)
self.assertTrue(torch.allclose(data_fp16_fp32, data_fp16_dequant))
def test_dequantize_fp16_cpu(self):
self._test_dequantize_fp16(torch.device('cpu'))
@unittest.skipIf(not TEST_CUDA, "No gpu is available.")
def test_dequantize_fp16_cuda(self):
self._test_dequantize_fp16(torch.device('cuda'))
@unittest.skipIf(not TEST_CUDA, "No gpu is available.")
def test_per_channel_qtensor_creation_cuda(self):
self._test_per_channel_qtensor_creation(torch.device('cuda'))
def _test_per_channel_qtensor_creation(self, device):
numel = 10
ch_axis = 0
scales = torch.rand(numel, device=device)
zero_points_int = torch.randint(0, 10, size=(numel,), device=device)
zero_points_float = torch.randn(numel, device=device)
for dtype, zero_points in itertools.product([torch.qint8, torch.quint8], [zero_points_float, zero_points_int]):
q = torch._empty_per_channel_affine_quantized(
[numel], scales=scales, zero_points=zero_points, axis=ch_axis, dtype=dtype, device=device)
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
self.assertEqualIgnoreType(scales, q.q_per_channel_scales())
self.assertEqual(zero_points, q.q_per_channel_zero_points())
self.assertEqual(ch_axis, q.q_per_channel_axis())
# create Tensor from uint8_t Tensor, scales and zero_points
for zero_points in [zero_points_float, zero_points_int]:
int_tensor = torch.randint(0, 100, size=(numel,), dtype=torch.uint8, device=device)
q = torch._make_per_channel_quantized_tensor(int_tensor, scales, zero_points, ch_axis)
self.assertEqual(int_tensor, q.int_repr())
# TODO(#38095): Replace assertEqualIgnoreType. See issue #38095
self.assertEqualIgnoreType(scales, q.q_per_channel_scales())
self.assertEqual(zero_points, q.q_per_channel_zero_points())
self.assertEqual(ch_axis, q.q_per_channel_axis())
def test_qtensor_creation(self):
scale = 0.5
zero_point = 10
numel = 10
for device in get_supported_device_types():
q = torch._empty_affine_quantized([numel], scale=scale, zero_point=zero_point,
device=device, dtype=torch.quint8)
self.assertEqual(scale, q.q_scale())
self.assertEqual(zero_point, q.q_zero_point())
# create Tensor from uint8_t Tensor, scale and zero_point
int_tensor = torch.randint(0, 100, size=(10,), device=device, dtype=torch.uint8)
q = torch._make_per_tensor_quantized_tensor(int_tensor, scale, zero_point)
self.assertEqual(int_tensor, q.int_repr())
self.assertEqual(scale, q.q_scale())
self.assertEqual(zero_point, q.q_zero_point())
# create via empty_like
q = torch._empty_affine_quantized([numel], scale=scale, zero_point=zero_point,
device=device, dtype=torch.quint8)
q_el = torch.empty_like(q)
self.assertEqual(q.q_scale(), q_el.q_scale())
self.assertEqual(q.q_zero_point(), q_el.q_zero_point())
self.assertEqual(q.dtype, q_el.dtype)
# create via empty_like but change the dtype (currently not supported)
with self.assertRaises(RuntimeError):
torch.empty_like(q, dtype=torch.qint8)
def test_qtensor_dtypes(self):
r = torch.rand(3, 2, dtype=torch.float) * 4 - 2
scale = 0.2
zero_point = 2
for dtype in [torch.qint8, torch.quint8, torch.qint32, torch.quint4x2, torch.quint2x4]:
qr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
rqr = qr.dequantize()
self.assertTrue(np.allclose(r.numpy(), rqr.numpy(), atol=2 / scale))
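    # Illustrative note (not part of the original test suite): per-tensor affine
    # quantization stores q = clamp(round(r / scale) + zero_point, qmin, qmax) and
    # dequantize recovers r' = (q - zero_point) * scale, so a round trip can be off
    # by roughly one quantization step. A minimal sketch of that arithmetic:
    #   r, scale, zero_point = 0.66, 0.2, 2
    #   q = round(r / scale) + zero_point     # round(3.3) + 2 == 5
    #   r_dq = (q - zero_point) * scale       # (5 - 2) * 0.2 == 0.6, close to 0.66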
@unittest.skipIf(not TEST_CUDA, "No gpu is available.")
def test_per_tensor_to_device(self):
dtypes = [
torch.quint8,
torch.qint8,
torch.qint32,
]
device = torch.device('cuda')
for dtype in dtypes:
r = torch.rand(2, 2, dtype=torch.float) * 10
scale = torch.rand(2).abs().max().item()
zero_point = (torch.rand(2) * 10).round().to(torch.long).max().item()
qr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
qr = qr.to(device)
qr_cuda = torch.quantize_per_tensor(r.to(device), scale, zero_point, dtype)
qr_cuda = qr_cuda.to('cpu')
self.assertEqual('cuda', qr.device.type)
self.assertEqual('cpu', qr_cuda.device.type)
@unittest.skipIf(not TEST_CUDA, "No gpu is available.")
def test_per_channel_to_device(self):
dtype_and_zero_types = [
(torch.quint8, torch.float),
(torch.qint8, torch.float),
# (torch.qint32, torch.float) not supported for quantize_per_channel
(torch.quint8, torch.long),
(torch.qint8, torch.long),
(torch.qint32, torch.long),
]
axis = 1
device = torch.device('cuda')
for dtype, zero_type in dtype_and_zero_types:
r = torch.rand(2, 2, dtype=torch.float) * 10
scales = torch.rand(2).abs()
zero_points = (torch.rand(2) * 10).round().to(zero_type)
dqr = torch.quantize_per_channel(r, scales, zero_points, axis, dtype)
dqr = dqr.to(device)
dqr_cuda = torch.quantize_per_channel(r.to(device), scales.to(
device), zero_points.to(device), axis, dtype)
dqr_cuda = dqr_cuda.to('cpu')
self.assertEqual('cuda', dqr.device.type)
self.assertEqual('cuda', dqr.q_per_channel_scales().device.type)
self.assertEqual('cuda', dqr.q_per_channel_zero_points().device.type)
self.assertEqual('cpu', dqr_cuda.device.type)
self.assertEqual('cpu', dqr_cuda.q_per_channel_scales().device.type)
self.assertEqual('cpu', dqr_cuda.q_per_channel_zero_points().device.type)
@unittest.skipIf(not torch.cuda.is_available(), 'CUDA is not available')
def test_compare_per_tensor_device_numerics(self):
dtypes = [
torch.quint8,
torch.qint8,
torch.qint32,
]
device = torch.device('cuda')
for dtype in dtypes:
r = torch.rand(2, 2) * 10
r[0, 0] = 2.5
scale = torch.rand(2).abs().max().item()
zero_point = (torch.rand(2) * 10).round().to(torch.long).max().item()
qtr = torch.quantize_per_tensor(r, scale, zero_point, dtype)
dqtr = qtr.dequantize()
qtr_cuda = torch.quantize_per_tensor(r.to(device), scale, zero_point, dtype)
dqtr_cuda = qtr_cuda.dequantize()
self.assertEqual(qtr.int_repr(), qtr_cuda.int_repr())
self.assertTrue(np.allclose(dqtr, dqtr_cuda.cpu()))
@unittest.skipIf(not torch.cuda.is_available(), 'CUDA is not available')
def test_compare_per_channel_device_numerics(self):
dtype_and_zero_types = [
(torch.quint8, torch.float),
(torch.qint8, torch.float),
# (torch.qint32, torch.float) not supported for quantize_per_channel
(torch.quint8, torch.long),
(torch.qint8, torch.long),
(torch.qint32, torch.long),
]
axis = 1
device = torch.device('cuda')
for i in range(20):
for dtype, zero_type in dtype_and_zero_types:
r = torch.rand(2, 2) * 10
r[0, 0] = 2.5
scales = torch.rand(2).abs()
zero_points = (torch.rand(2) * 10).round().to(zero_type)
qr = torch.quantize_per_channel(r, scales, zero_points, axis, dtype)
dqr = qr.dequantize()
qr_cuda = torch.quantize_per_channel(r.to(device), scales.to(
device), zero_points.to(device), axis, dtype)
dqr_cuda = qr_cuda.dequantize()
self.assertEqual(qr.int_repr(), qr_cuda.int_repr())
self.assertTrue(np.allclose(dqr, dqr_cuda.cpu()))
def _test_quantize_per_channel(self, r, scales, zero_points, axis, float_params):
def _quantize_per_channel_ref_nd(data, scales, zero_points, float_params):
dims = data.size()
data = data.view(-1, dims[axis], np.prod(dims[axis + 1:]))
res = torch.empty_like(data)
quant_min, quant_max = 0, 255
for i in range(res.size()[0]):
for j in range(res.size()[1]):
for k in range(res.size()[2]):
if float_params:
inv_scale = 1.0 / scales[j]
res[i][j][k] = np.clip(
np.round(data[i][j][k] * inv_scale + zero_points[j]), quant_min, quant_max)
else:
res[i][j][k] = np.clip(
np.round(data[i][j][k] / scales[j]) + zero_points[j], quant_min, quant_max)
res = res.view(*dims)
return res
contig_format = torch.channels_last if r.ndim == 4 else torch.channels_last_3d
for memory_format in [torch.contiguous_format, contig_format]:
ref_res = _quantize_per_channel_ref_nd(r, scales, zero_points, float_params)
r_contig = r.contiguous(memory_format=memory_format)
qr = torch.quantize_per_channel(r_contig, scales, zero_points, axis, torch.quint8)
rqr = qr.dequantize()
self.assertTrue(np.allclose(qr.int_repr(), ref_res))
self.assertTrue(np.allclose(r.numpy(), rqr.numpy(), atol=2 / np.min(scales.numpy())))
def test_qtensor_quantize_per_channel(self):
r = torch.rand(3, 2, dtype=torch.float) * 4 - 2
scales = torch.tensor([0.2, 0.03], dtype=torch.double)
zero_points = torch.tensor([5, 10], dtype=torch.long)
axis = 1
def quantize_c(data, scales, zero_points):
res = torch.empty((3, 2))
quant_min, quant_max = 0, 255
for i in range(3):
for j in range(2):
res[i][j] = np.clip(np.round(data[i][j] / scales[j]) + zero_points[j], quant_min, quant_max)
return res
qr = torch.quantize_per_channel(r, scales, zero_points, axis, torch.quint8)
rqr = qr.dequantize()
self.assertTrue(np.allclose(qr.int_repr(), quantize_c(r, scales, zero_points)))
self.assertTrue(np.allclose(r.numpy(), rqr.numpy(), atol=2 / np.min(scales.numpy())))
# Check 4D tensor with 2 different memory formats.
r = torch.rand(3, 2, 4, 5, dtype=torch.float) * 4 - 2
scales = torch.tensor([0.2, 0.03], dtype=torch.double)
zero_points = torch.tensor([5, 10], dtype=torch.long)
        self._test_quantize_per_channel(r, scales, zero_points, 1, False)
scales = torch.tensor([0.2, 0.03, 0.5], dtype=torch.double)
zero_points = torch.tensor([5, 10, 7], dtype=torch.long)
self._test_quantize_per_channel(r, scales, zero_points, 0, False)
# Check 5D tensor.
r = torch.rand(3, 2, 4, 5, 7, dtype=torch.float) * 4 - 2
scales = torch.tensor([0.2, 0.03], dtype=torch.double)
zero_points = torch.tensor([5, 10], dtype=torch.long)
self._test_quantize_per_channel(r, scales, zero_points, 1, False)
scales = torch.tensor([0.2, 0.03, 0.5], dtype=torch.double)
zero_points = torch.tensor([5, 10, 7], dtype=torch.long)
self._test_quantize_per_channel(r, scales, zero_points, 0, False)
def test_quantize_per_channel_float_qparams(self):
r = torch.rand(3, 2, dtype=torch.float) * 4
scales = torch.tensor([0.2, 0.03], dtype=torch.float)
zero_points = torch.tensor([0.1, 0.2], dtype=torch.float)
axis = 1
# Reference quantize function with FP zero_point.
def quantize_ref(data, scales, zero_points):
res = torch.empty((3, 2))
quant_min, quant_max = 0, 255
for i in range(3):
for j in range(2):
inv_scale = 1.0 / scales[j]
res[i][j] = np.clip(np.round(data[i][j] * inv_scale + zero_points[j]), quant_min, quant_max)
return res
qr = torch.quantize_per_channel(r, scales, zero_points, axis, torch.quint8)
dequant_tensor = qr.dequantize()
ref = quantize_ref(r, scales, zero_points)
self.assertTrue(np.allclose(qr.int_repr(), ref))
self.assertTrue(np.allclose(r.numpy(), dequant_tensor.numpy(), atol=1))
# Check 4D tensor with 2 different memory formats.
r = torch.rand(3, 2, 4, 5, dtype=torch.float) * 4
scales = torch.tensor([0.2, 0.03], dtype=torch.float)
zero_points = torch.tensor([0.1, 0.2], dtype=torch.float)
        self._test_quantize_per_channel(r, scales, zero_points, 1, True)
# pylint: disable=no-member
# pylint: disable=unsubscriptable-object
"""
Defines :class:`.BcrClinicalXmlToJsonParser`, a class (that is instantiated
with a given project_code) which consumes BCR Clinical XML and produces JSON.
Pylint ``no-member`` error is disabled because for some reason there are a lot
of false positives with ``lxml.etree``.
"""
import datetime
import json
import math
import pkg_resources
from uuid import uuid5, UUID
from cdislogging import get_logger
import flask
from lxml import etree
import requests
import yaml
from sheepdog import dictionary
from sheepdog.errors import ParsingError, SchemaError
from sheepdog.globals import BCR_MAPPING
log = get_logger(__name__)
SCHEMA_LOCATION_WHITELIST = [
"https://github.com/nchbcr/xsd",
"http://tcga-data.nci.nih.gov",
]
def _parse_schema_location(root):
"""Get all schema locations from xml."""
try:
namespace = root.nsmap["xsi"]
except Exception as e:
raise SchemaError("Can't get schema location namespace", e)
try:
schema_location = root.attrib["{%s}schemaLocation" % namespace]
except Exception as e:
raise SchemaError("Missing xsi:schemaLocation", e)
# schemaLocation is a space delimited list of namespace and location pairs
# return odd elements
locations = schema_location.split(" ")
if len(locations) >= 2 and len(locations) % 2 == 0:
return locations[1::2]
else:
raise SchemaError("schemaLocation has to be a list of namespace and url pairs")
def _fetch_schema(schema_url):
"""Fetch schema using the url from schemaLocation."""
if not any(map(schema_url.startswith, SCHEMA_LOCATION_WHITELIST)):
raise SchemaError("schema location: {} is not allowed".format(schema_url))
try:
r = requests.get(
schema_url, proxies=flask.current_app.config.get("EXTERNAL_PROXIES")
)
except Exception as e:
raise SchemaError("Can't get xml XSD at {}".format(schema_url), e)
if r.status_code == 200:
try:
return etree.XMLSchema(etree.XML(r.text.encode("utf-8")))
except Exception as e:
raise SchemaError("Invalid XML XSD at {}".format(schema_url), e)
else:
raise SchemaError("Can't get XML XSD at {}: {}".format(schema_url, r.text))
def validated_parse(xml):
"""
Parse an XML document or fragment from a string and return the root node.
"""
try:
root = etree.fromstring(xml)
# note(pyt): return the document without doing schema validation
# until we are clear about how to handle the xsd
return root
except etree.XMLSyntaxError as msg:
log.error("User submitted invalid xml: {}".format(msg))
raise
schemas = map(_fetch_schema, _parse_schema_location(root))
try:
for schema in schemas:
schema.assertValid(root)
return root
except (etree.XMLSchemaError, etree.DocumentInvalid) as msg:
log.error("User submitted invalid xml: {}".format(msg))
# note(jsm): Here we re-raise. This exception should be
# caught by the caller, at the time this comment was
# written, it will be caught in
# ``..transaction.handle_xml_transaction``
raise
def unix_time(dt):
epoch = datetime.datetime.utcfromtimestamp(0)
delta = dt - epoch
return int(delta.total_seconds())
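# Illustrative sketch: unix_time converts a naive UTC datetime to whole seconds
# since the epoch, e.g. unix_time(datetime.datetime(1970, 1, 1, 0, 1)) == 60.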
class AttrDict(dict):
def __init__(self, *args, **kwargs):
super(AttrDict, self).__init__(*args, **kwargs)
self.__dict__ = self
def to_bool(val):
possible_true_values = ["true", "yes"]
possible_false_values = ["false", "no"]
if val is None:
return None
if val.lower() in possible_true_values:
return True
elif val.lower() in possible_false_values:
return False
else:
raise ValueError("Cannot convert {} to boolean".format(val))
class BcrXmlToJsonParser(object):
def __init__(self, project):
"""
Create a parser to convert XML to GDC JSON.
Args:
project (str): the id of the project node to link cases to
"""
self.project = project
self.namespaces = None
self.exported_entitys = 0
self.export_count = 0
self.ignore_missing_properties = True
self.xml_mapping = json.loads(
json.dumps(yaml.load(BCR_MAPPING)), object_hook=AttrDict
)
self.entities = {}
def xpath(
self,
path,
root=None,
single=False,
nullable=True,
expected=True,
text=True,
label="",
):
"""
Wrapper to perform the xpath queries on the xml
Args:
path (str): The xpath location path
root: the lxml element to perform query on
single (bool): raise ParsingError if the result is not singular
nullable (bool): raise ParsingError if the result is null
expected (bool): raise ParsingError if the result does not exist
text (bool): whether the return value is the .text str value
label (str): label for logging
        Return:
            the xpath query result: a single value when ``single`` is True,
            otherwise a list (of ``.text`` strings when ``text`` is True,
            else lxml elements)
Raises:
ParsingError:
if ``single``, ``nullable``, or ``expected`` are True and their
respective conditions are violated (see above)
"""
if root is None:
root = self.xml_root
try:
result = root.xpath(path, namespaces=self.namespaces)
except etree.XPathEvalError:
result = []
except:
raise
rlen = len(result)
if rlen < 1 and expected:
raise ParsingError("{}: Unable to find xpath {}".format(label, path))
if rlen < 1 and not expected and single:
return None
if rlen < 1 and not expected and not single:
return []
elif rlen > 1 and single:
log.error(result)
msg = "{}: Expected 1 result for xpath {}, found {}"
raise ParsingError(msg.format(label, path, result))
if text:
result = [r.text for r in result]
if not nullable and None in result:
raise ParsingError("{}: Null result for {}".format(label, result))
if single:
result = result[0]
return result
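    # Illustrative sketch (hypothetical paths) of the flag combinations above:
    #   self.xpath('//admin:day_of_dcc_upload', single=True)     -> one .text string,
    #       raising ParsingError if there are zero or multiple matches
    #   self.xpath('//biospecimen', expected=False, text=False)  -> a possibly empty
    #       list of lxml elements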
def loads(self, xml):
"""
Take xml string and convert it to a graph to insert into psqlgraph.
Args:
xml (str): xml string to convert and insert
Return:
self
"""
if not xml:
return None
self.xml_root = validated_parse(str(xml)).getroottree()
self.namespaces = self.xml_root.getroot().nsmap
for entity_type, param_list in self.xml_mapping.items():
for params in param_list:
self.parse_entity(entity_type, params)
return self
def dumps(self, indent=2):
return json.dumps(self.json, indent=indent)
@property
def json(self):
return self.entities.values()
def parse_entity(self, entity_type, params):
"""
Convert a subsection of the xml that will be treated as an entity.
Args:
entity_type (str): the type of entity to be used as a label
params (dict):
the parameters that govern xpath queries and translation from
the translation yaml file
Return:
None
"""
roots = self.get_entity_roots(entity_type, params)
for root in roots:
# Get entity and entity properties
entity_id = self.get_entity_id(root, entity_type, params)
args = (root, entity_type, params, entity_id)
props = self.get_entity_properties(*args)
props.update(self.get_entity_datetime_properties(*args))
props.update(self.get_entity_const_properties(*args))
# Get edges to and from this entity
edges = self.get_entity_edges(root, entity_type, params, entity_id)
props.update(edges)
            # If the entity is a case, supplement the edges with an edge
            # to the project
if entity_type == "case":
props["projects"] = [{"id": self.project}]
self.save_entity(entity_id, entity_type, props)
def save_entity(self, entity_id, label, properties):
"""Adds a entity to the graph
"""
if label == "file":
raise ParsingError("This endpoint is not built to handle file entities")
if entity_id in self.entities:
self.entities[entity_id].update(properties)
else:
self.entities[entity_id] = dict(id=entity_id, type=label, **properties)
def get_entity_roots(self, entity_type, params, root=None):
"""
Return a list of xml entity root elements for a given entity_type.
Args:
entity_type (str): entity type to be used as a label in psqlgraph
params (dict):
parameters that govern xpath queries and translation from the
translation yaml file
"""
if not params.root:
log.warn("No root xpath for {}".format(entity_type))
return
xml_entities = self.xpath(
params.root, root=root, expected=False, text=False, label="get_entity_roots"
)
return xml_entities
def get_entity_id(self, root, entity_type, params):
"""
Look up the id for the entity.
Args:
            root: the lxml root element to treat as an entity
entity_type (str): entity type to be used as a label in psqlgraph
params (dict):
the parameters that govern xpath queries and translation from
the translation yaml file
Return:
str: the entity id
"""
        assert not (
            "id" in params and "generated_id" in params
        ), "Cannot specify both an id xpath and parameters for generating an id"
# Lookup ID
if "id" in params:
entity_id = self.xpath(
params.id, root, single=True, label=entity_type
).lower()
else:
entity_id = None
return entity_id
def get_entity_properties(self, root, entity_type, params, entity_id=""):
"""
For each parameter in the setting file, try to look it up, and add
it to the entity properties.
Args:
            root: the lxml root element to treat as an entity
entity_type (str):
the entity type to be used as a label in psqlgraph
params (dict):
the parameters that govern xpath queries and translation
from the translation yaml file
entity_id (str): used for logging
Return:
dict: the entity properties
"""
if "properties" not in params or not params.properties:
return {}
props = {}
schema = dictionary.schema[entity_type]
for prop, args in params.properties.items():
if args is None:
if "null" in schema["properties"][prop].get("type", []):
props[prop] = None
continue
path, _type = args["path"], args["type"]
if not path:
if "null" in schema["properties"][prop].get("type", []):
props[prop] = None
continue
result = self.xpath(
path,
root,
single=True,
text=True,
expected=(not self.ignore_missing_properties),
label="{}: {}".format(entity_type, entity_id),
)
# optional null fields are removed
if result is None and prop not in dictionary.schema[entity_type].get(
"required", []
):
continue
props[prop] = munge_property(result, _type)
return props
def get_entity_const_properties(self, root, entity_type, params, entity_id=""):
"""
For each parameter in the setting file that is a constant value, add it
to the properties dict.
Args:
            root: the lxml root element to treat as an entity
entity_type (str):
the entity type to be used as a label in psqlgraph
params (dict):
the parameters that govern xpath queries and translation
from the translation yaml file
entity_id (str): used for logging
Return:
dict: dictionary of properties
"""
if "const_properties" not in params or not params.const_properties:
            return {}
cast(bool, data.attrib.get('transcodeHwFullPipeline', '0'))
self.transcodeHwRequested = cast(bool, data.attrib.get('transcodeHwRequested', '0'))
self.videoCodec = data.attrib.get('videoCodec')
self.videoDecision = data.attrib.get('videoDecision')
self.width = cast(int, data.attrib.get('width'))
@utils.registerPlexObject
class TranscodeJob(PlexObject):
""" Represents an Optimizing job.
TrancodeJobs are the process for optimizing conversions.
Active or paused optimization items. Usually one item as a time."""
TAG = 'TranscodeJob'
def _loadData(self, data):
self._data = data
self.generatorID = data.attrib.get('generatorID')
self.key = data.attrib.get('key')
self.progress = data.attrib.get('progress')
self.ratingKey = data.attrib.get('ratingKey')
self.size = data.attrib.get('size')
self.targetTagID = data.attrib.get('targetTagID')
self.thumb = data.attrib.get('thumb')
self.title = data.attrib.get('title')
self.type = data.attrib.get('type')
@utils.registerPlexObject
class Optimized(PlexObject):
""" Represents a Optimized item.
Optimized items are optimized and queued conversions items."""
TAG = 'Item'
def _loadData(self, data):
self._data = data
self.id = data.attrib.get('id')
self.composite = data.attrib.get('composite')
self.title = data.attrib.get('title')
self.type = data.attrib.get('type')
self.target = data.attrib.get('target')
self.targetTagID = data.attrib.get('targetTagID')
def remove(self):
""" Remove an Optimized item"""
key = '%s/%s' % (self._initpath, self.id)
self._server.query(key, method=self._server._session.delete)
def rename(self, title):
""" Rename an Optimized item"""
key = '%s/%s?Item[title]=%s' % (self._initpath, self.id, title)
self._server.query(key, method=self._server._session.put)
def reprocess(self, ratingKey):
""" Reprocess a removed Conversion item that is still a listed Optimize item"""
key = '%s/%s/%s/enable' % (self._initpath, self.id, ratingKey)
self._server.query(key, method=self._server._session.put)
@utils.registerPlexObject
class Conversion(PlexObject):
""" Represents a Conversion item.
Conversions are items queued for optimization or being actively optimized."""
TAG = 'Video'
def _loadData(self, data):
self._data = data
self.addedAt = data.attrib.get('addedAt')
self.art = data.attrib.get('art')
self.chapterSource = data.attrib.get('chapterSource')
self.contentRating = data.attrib.get('contentRating')
self.duration = data.attrib.get('duration')
self.generatorID = data.attrib.get('generatorID')
self.generatorType = data.attrib.get('generatorType')
self.guid = data.attrib.get('guid')
self.key = data.attrib.get('key')
self.lastViewedAt = data.attrib.get('lastViewedAt')
self.librarySectionID = data.attrib.get('librarySectionID')
self.librarySectionKey = data.attrib.get('librarySectionKey')
self.librarySectionTitle = data.attrib.get('librarySectionTitle')
self.originallyAvailableAt = data.attrib.get('originallyAvailableAt')
self.playQueueItemID = data.attrib.get('playQueueItemID')
self.playlistID = data.attrib.get('playlistID')
self.primaryExtraKey = data.attrib.get('primaryExtraKey')
self.rating = data.attrib.get('rating')
self.ratingKey = data.attrib.get('ratingKey')
self.studio = data.attrib.get('studio')
self.summary = data.attrib.get('summary')
self.tagline = data.attrib.get('tagline')
self.target = data.attrib.get('target')
self.thumb = data.attrib.get('thumb')
self.title = data.attrib.get('title')
self.type = data.attrib.get('type')
self.updatedAt = data.attrib.get('updatedAt')
self.userID = data.attrib.get('userID')
self.username = data.attrib.get('username')
self.viewOffset = data.attrib.get('viewOffset')
self.year = data.attrib.get('year')
def remove(self):
""" Remove Conversion from queue """
key = '/playlists/%s/items/%s/%s/disable' % (self.playlistID, self.generatorID, self.ratingKey)
self._server.query(key, method=self._server._session.put)
def move(self, after):
""" Move Conversion items position in queue
after (int): Place item after specified playQueueItemID. '-1' is the active conversion.
Example:
Move 5th conversion Item to active conversion
conversions[4].move('-1')
Move 4th conversion Item to 3rd in conversion queue
conversions[3].move(conversions[1].playQueueItemID)
"""
key = '%s/items/%s/move?after=%s' % (self._initpath, self.playQueueItemID, after)
self._server.query(key, method=self._server._session.put)
class MediaTag(PlexObject):
""" Base class for media tags used for filtering and searching your library
items or navigating the metadata of media items in your library. Tags are
the construct used for things such as Country, Director, Genre, etc.
Attributes:
server (:class:`~plexapi.server.PlexServer`): Server this client is connected to.
id (id): Tag ID (This seems meaningless except to use it as a unique id).
role (str): Unknown
tag (str): Name of the tag. This will be Animation, SciFi etc for Genres. The name of
person for Directors and Roles (ex: Animation, <NAME>, etc).
<Hub_Search_Attributes>: Attributes only applicable in search results from
PlexServer :func:`~plexapi.server.PlexServer.search`. They provide details of which
library section the tag was found as well as the url to dig deeper into the results.
* key (str): API URL to dig deeper into this tag (ex: /library/sections/1/all?actor=9081).
* librarySectionID (int): Section ID this tag was generated from.
* librarySectionTitle (str): Library section title this tag was found.
* librarySectionType (str): Media type of the library section this tag was found.
* tagType (int): Tag type ID.
* thumb (str): URL to thumbnail image.
"""
def _loadData(self, data):
""" Load attribute values from Plex XML response. """
self._data = data
self.id = cast(int, data.attrib.get('id'))
self.role = data.attrib.get('role')
self.tag = data.attrib.get('tag')
# additional attributes only from hub search
self.key = data.attrib.get('key')
self.librarySectionID = cast(int, data.attrib.get('librarySectionID'))
self.librarySectionTitle = data.attrib.get('librarySectionTitle')
self.librarySectionType = data.attrib.get('librarySectionType')
self.tagType = cast(int, data.attrib.get('tagType'))
self.thumb = data.attrib.get('thumb')
def items(self, *args, **kwargs):
""" Return the list of items within this tag. This function is only applicable
in search results from PlexServer :func:`~plexapi.server.PlexServer.search`.
"""
if not self.key:
raise BadRequest('Key is not defined for this tag: %s' % self.tag)
return self.fetchItems(self.key)
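    # Illustrative sketch (attribute names assumed from typical plexapi usage):
    # library items expose these tags as lists, e.g.
    #   genres = movie.genres            # [<Genre:...:Animation>, ...]
    #   names = [g.tag for g in genres]
    # and tags returned by PlexServer.search() can be expanded via tag.items().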
class GuidTag(PlexObject):
""" Base class for guid tags used only for Guids, as they contain only a string identifier
Attributes:
id (id): The guid for external metadata sources (e.g. IMDB, TMDB, TVDB).
"""
def _loadData(self, data):
""" Load attribute values from Plex XML response. """
self._data = data
self.id = data.attrib.get('id')
@utils.registerPlexObject
class Collection(MediaTag):
""" Represents a single Collection media tag.
Attributes:
TAG (str): 'Collection'
FILTER (str): 'collection'
"""
TAG = 'Collection'
FILTER = 'collection'
@utils.registerPlexObject
class Label(MediaTag):
""" Represents a single Label media tag.
Attributes:
TAG (str): 'Label'
FILTER (str): 'label'
"""
TAG = 'Label'
FILTER = 'label'
@utils.registerPlexObject
class Tag(MediaTag):
""" Represents a single Tag media tag.
Attributes:
TAG (str): 'Tag'
FILTER (str): 'tag'
"""
TAG = 'Tag'
FILTER = 'tag'
def _loadData(self, data):
self._data = data
self.id = cast(int, data.attrib.get('id', 0))
self.filter = data.attrib.get('filter')
self.tag = data.attrib.get('tag')
self.title = self.tag
@utils.registerPlexObject
class Country(MediaTag):
""" Represents a single Country media tag.
Attributes:
TAG (str): 'Country'
FILTER (str): 'country'
"""
TAG = 'Country'
FILTER = 'country'
@utils.registerPlexObject
class Director(MediaTag):
""" Represents a single Director media tag.
Attributes:
TAG (str): 'Director'
FILTER (str): 'director'
"""
TAG = 'Director'
FILTER = 'director'
@utils.registerPlexObject
class Genre(MediaTag):
""" Represents a single Genre media tag.
Attributes:
TAG (str): 'Genre'
FILTER (str): 'genre'
"""
TAG = 'Genre'
FILTER = 'genre'
@utils.registerPlexObject
class Guid(GuidTag):
""" Represents a single Guid media tag.
Attributes:
TAG (str): 'Guid'
"""
TAG = "Guid"
@utils.registerPlexObject
class Mood(MediaTag):
""" Represents a single Mood media tag.
Attributes:
TAG (str): 'Mood'
FILTER (str): 'mood'
"""
TAG = 'Mood'
FILTER = 'mood'
@utils.registerPlexObject
class Style(MediaTag):
""" Represents a single Style media tag.
Attributes:
TAG (str): 'Style'
FILTER (str): 'style'
"""
TAG = 'Style'
FILTER = 'style'
class BaseImage(PlexObject):
""" Base class for all Art, Banner, and Poster objects.
Attributes:
TAG (str): 'Photo'
key (str): API URL (/library/metadata/<ratingkey>).
provider (str): The source of the poster or art.
ratingKey (str): Unique key identifying the poster or art.
selected (bool): True if the poster or art is currently selected.
thumb (str): The URL to retrieve the poster or art thumbnail.
"""
TAG = 'Photo'
def _loadData(self, data):
self._data = data
self.key = data.attrib.get('key')
self.provider = data.attrib.get('provider')
self.ratingKey = data.attrib.get('ratingKey')
self.selected = cast(bool, data.attrib.get('selected'))
self.thumb = data.attrib.get('thumb')
def select(self):
key = self._initpath[:-1]
data = '%s?url=%s' % (key, quote_plus(self.ratingKey))
try:
self._server.query(data, method=self._server._session.put)
except xml.etree.ElementTree.ParseError:
pass
class Art(BaseImage):
""" Represents a single Art object. """
class Banner(BaseImage):
""" Represents a single Banner object. """
class Poster(BaseImage):
""" Represents a single Poster object. """
@utils.registerPlexObject
class Producer(MediaTag):
""" Represents a single Producer media tag.
Attributes:
TAG (str): 'Producer'
FILTER (str): 'producer'
"""
TAG = 'Producer'
FILTER = 'producer'
@utils.registerPlexObject
class Role(MediaTag):
""" Represents a single Role (actor/actress) media tag.
Attributes:
TAG (str): 'Role'
FILTER (str): 'role'
"""
TAG = 'Role'
FILTER = 'role'
@utils.registerPlexObject
class Similar(MediaTag):
""" Represents a single Similar media tag.
Attributes:
TAG (str): 'Similar'
FILTER (str): 'similar'
"""
TAG = 'Similar'
FILTER = 'similar'
@utils.registerPlexObject
class Writer(MediaTag):
""" Represents a single Writer media tag.
Attributes:
TAG (str): 'Writer'
FILTER (str): 'writer'
"""
TAG = 'Writer'
FILTER = 'writer'
@utils.registerPlexObject
class Chapter(PlexObject):
""" Represents a single Writer media tag.
Attributes:
TAG (str): 'Chapter'
"""
TAG = 'Chapter'
def _loadData(self, data):
self._data = data
self.id = cast(int, data.attrib.get('id', 0))
self.filter = data.attrib.get('filter') # I couldn't filter on it anyways
self.tag = data.attrib.get('tag')
self.title = self.tag
self.index = cast(int, data.attrib.get('index'))
self.start = cast(int, data.attrib.get('startTimeOffset'))
self.end = cast(int, data.attrib.get('endTimeOffset'))
@utils.registerPlexObject
class Marker(PlexObject):
""" Represents a single Marker media tag.
Attributes:
TAG (str): 'Marker'
"""
TAG = 'Marker'
def __repr__(self):
name = self._clean(self.firstAttr('type'))
start = utils.millisecondToHumanstr(self._clean(self.firstAttr('start')))
end = utils.millisecondToHumanstr(self._clean(self.firstAttr('end')))
return '<%s:%s %s - %s>' % (self.__class__.__name__, name, start, end)
def _loadData(self, data):
self._data = data
self.type = data.attrib.get('type')
self.start = cast(int, data.attrib.get('startTimeOffset'))
self.end = cast(int, data.attrib.get('endTimeOffset'))
@utils.registerPlexObject
class Field(PlexObject):
""" Represents a single Field.
Attributes:
TAG (str): 'Field'
"""
TAG = 'Field'
def _loadData(self, data):
self._data = data
self.name = data.attrib.get('name')
self.locked = cast(bool, data.attrib.get('locked'))
@utils.registerPlexObject
class SearchResult(PlexObject):
""" Represents a single SearchResult.
Attributes:
TAG (str): 'SearchResult'
"""
TAG = 'SearchResult'
def __repr__(self):
name = self._clean(self.firstAttr('name'))
score = self._clean(self.firstAttr('score'))
        return '<%s>' % ':'.join([p for p in [self.__class__.__name__, name, score] if p])
of tracked position on ball
self.trackBodyCurPos = self.cnstrntBody.to_world(x=self.cnstrntOnBallLoc) #self.cnstrntBody.com()
self.trackBodyVel = (self.trackBodyCurPos - self.trackBodyLastPos)/self.timestep
#print('setSimVals : Tracked Body Position : {} |\tLast position : {} |\tVel : {}'.format(self.trackBodyCurPos, self.trackBodyLastPos,self.trackBodyVel ))
#save current state to restore for forward sim-dependent objectives/constraints, and to use for cost/objective functions
self.curQ = self.skel.q
self.curQdot = self.skel.dq
#current end effector position in world space -
self.curEffPos = self.reachBody.to_world(x=self.reachBodyOffset)
        #whether or not to calculate dynamic jacobians and other quantities
        #torque control desired to provide pulling force at contact location on reaching hand
self.Tau_JtFpull, self.JtPullPInv, self.Jpull, self.JpullLin = self.getPullTau(self.useLinJacob)
if(calcDynQuants):
#current end effector velocity in world space
self.currEffVel = self.Jpull.dot(self.curQdot)
#current end effector acceleration in world space : Jdot * ddq + Jddot * dq
JdotPull = self.getEffJdot(self.useLinJacob)
self.currEffAccel = JdotPull.dot(self.curQdot) + self.Jpull.dot(self.skel.accelerations())#self.Jpull.dot(self.curQdot) + self.Jpull.dot(self.curQdot)
# end per-step body values TODO
####################################################
#####################################################
# end optimization initialization
#
# per-step optimization/torque/force calc functions
# #return body torques to provide self.desExtFrcVal at toWorld(self.constraintLoc)
# #provides JtransFpull component of equation
# def getPullTau(self, useLinJacob, debug=False):
# if (useLinJacob) :
# self.useForce = self.desExtFrcVal
# #using only linear : 3 rows x ndofs cols
# Jpull = self.reachBody.linear_jacobian(offset=self.reachBodyOffset)
# else :
# #wrench
# #TODO verify target orientation should be 0,0,0
# self.useForce = np.zeros(6)
# self.useForce[3:]=self.desExtFrcVal
# #using linear and rotational == world
# Jpull = self.reachBody.world_jacobian(offset=self.reachBodyOffset)
# if(debug):
# print('getPullTau : pull force being used : {} '.format(self.useForce))
#
# JTrans = np.transpose(Jpull)
# res = JTrans.dot(self.useForce)
# JTransInv = np.linalg.pinv(JTrans)
# #last 3 rows as lin component
# return res, JTransInv, Jpull, Jpull[-3:,:]
# JtransInvC = self.calcPInv(Jlin_trans, 3)
# if np.allclose(JtransInv,JtransInvC,1e-9):
# print('numpy pinv calc on jtransinv yield equivalent result to manual calc')
# else :
# print('numpy pinv calc on jtrasninv different from manual calc : \n{}\n and {}'.format(JtransInv,JtransInvC))
# if np.allclose(res,resWd,1e-9) :
# print('using world and lin jacobians yield equivalent result')
# else :
# print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!using lin and world jacobians not equivalent in calculating JtransFrc_pull : \n{}\n and \n{}\n'.format(res,resWd))
#Jw_pinv = JwTrans.dot(np.linalg.inv(Jw.dot(JwTrans) + tmpIdent6))
#print('\nNumpy pinv calc on jtrasninv and on world jwtransinv : \n{}\n and {}'.format(JtransInv,JwTransInv))
#build operational space controller as per DART tutorial
#perhaps this would work?
def compOSCntrllr(self):
#end effector jacobian
J = self.Jpull
qDot = self.curQdot
#linear and angular (if present) components of error
osc_error = np.zeros(self.useForce.shape)
#error gradient
osc_dError = np.zeros(self.useForce.shape)
linVel = np.zeros(self.useForce.shape)
#vector from eff to constraint
vecToCnstrnt = self.cnstrntBody.to_world(x=self.cnstrntOnBallLoc) - self.reachBody.to_world(x=self.reachBodyOffset)
#distance error
osc_error[-3:]=vecToCnstrnt
if (self.useLinJacob) :
#find end effector velocity -> xdot = J * qdot --> want relative velocity between ball@constLoc and eff
#Since ball is being moved manually every timestep, need to use constraint vel in world space
JcnstCntct = self.cnstrntBody.linear_jacobian(offset=self.cnstrntOnBallLoc)
else :
#angular components (osc_error[:3]) are 0 here : angular components are differences between orientation of reachbody and desired orientation
#in c++ this part would be, where the linear part of the transform is the rotation component of the matrix (linear as in system of linear eqs):
#Eigen::AngleAxisd aa(mTarget->getTransform(mEndEffector).linear());
# mTarget is target frame(loc, orientation);
# getransform here is returning endeffector's world transform inverse * target's transform -> the rotation of the target w/respect to eff
#e.head<3>() = aa.angle() * aa.axis() -> this is angle error in each axis (total angle error * unit rot axis)
#TODO can we approximate this with inv world transform of body node x unit vector(representing target orientation) with 0 in 4th part?
#unit vector would be osc_error[3:]/mag(osc_error[3:])
mag = np.linalg.norm(vecToCnstrnt)
if(mag > 0.0):
#this is desired orientation - point at target
orientVec_v =(1.0/mag)*vecToCnstrnt
orientVecRot = np.append(orientVec_v, [0.0])
TbodyInv = np.linalg.inv(self.reachBody.transform())
invRotRes=TbodyInv.dot(orientVecRot)
osc_error[:3]=invRotRes[:3]
#find end effector velocity -> xdot = J * qdot --> want relative velocity between ball@constLoc and
JcnstCntct = self.cnstrntBody.world_jacobian(offset=self.cnstrntOnBallLoc)
#error vel
osc_dError = JcnstCntct.dot(self.cnstrntBody.skel.dq) - J.dot(qDot)
#J dot @ eef
derivJ = self.getEffJdot(self.useLinJacob)
#pseudo inverse of jacobian and J dot
pinv_J = np.linalg.pinv(J)
pinv_dJ = np.linalg.pinv(derivJ)
#def in ctor
# self.osc_Kp = np.diag(np.ones(6)*Kp)
# self.osc_Kd = np.diag(np.ones(self.ndofs)*Kd)
resKpE = self.osc_Kp.dot(osc_error)
res1 = pinv_J.dot(self.osc_Kp.dot(osc_dError))
res2 = pinv_dJ.dot(resKpE)
Kd_dq = self.osc_Kd.dot(qDot)
Kd_JinvKp_e = self.osc_Kd.dot(pinv_J.dot(resKpE))
desTorques = self.M.dot(res1 + res2) - Kd_dq + Kd_JinvKp_e + self.CfG + self.Tau_JtFpull
return desTorques
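    # For reference (restating the computation above, not new behaviour): with
    # e = osc_error, edot = osc_dError, J+ = pinv_J and dJ+ = pinv_dJ, the returned
    # operational space torque is
    #   tau = M @ (J+ @ (Kp @ edot) + dJ+ @ (Kp @ e)) - Kd @ qdot
    #         + Kd @ (J+ @ (Kp @ e)) + (C + G) + J^T @ F_pull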
#keep around last step's decision variable
def setPrevCurrGuess(self, x):
self.prevGuess = self.currGuess
self.currGuess = x
    #initialize these values every time step - as per Abe's paper they are constant per timestep
def setSimVals(self):
#per timestep constraint and eef locations and various essential jacobians
self.setPerStepStateVals(True)
#Mass Matrix == self.skel.M
#M is dof x dof
self.M = self.skel.M
#M * qd / dt - precalc for MAconst
#M / dt = precalc for MAconst - multiply by qdotPrime
self.M_ovDt =self.M/self.timestep
#derivative of MA eq w/respect to qdotprime
self.M_dqdotPrime = self.M.dot(self.oneAOvTS)
#precalc -Mqdot/dt
self.Mqd = -self.M_ovDt.dot(self.skel.dq)
#CfG == C(q,dq) + G; it is dof x 1 vector
self.CfG = self.skel.coriolis_and_gravity_forces()
#TODO constraint forces - investigate
self.CntrntFrc = self.skel.constraint_forces()
# number of iterations of optimization
self.optIters = 0
#specific settings for instancing class - specific for skeleton configuration
self.setSimValsPriv()
#if pose matching, using q/qdot for pose elements being matched
if self.doMatchPose :
self.curMatchPose = self.curQ[(self.optPoseUseIDXs)]
self.curMatchPoseDot = self.curQdot[(self.optPoseUseIDXs)]
#solving quadratic objectives :
# min (a, F(cntct), tau)
# w1 * | pose_fwd_accel - pose_des_accel | +
# w2 * | COM_fwd_accel - COM_des_accel |
#subject to
# : M a + (C(q,dq) + G) + Tau_JtF(cntct) + Tau_JtF(grab) = tau
# : ground forces being in the coulomb cone
# : tau being within torque limits
# : subject to no slip contacts on ground
#dt set to timestep * frameskips
#functionality before sim step is executed on this skel
#here is where we calculate the robot's control torques
def preStep(self, actions):
#if not mobile skeleton, perform IK on position
if (not self.isFrwrdSim):
if (self.debug):
print('helperBotSkelHolder::preStep : {} robot set to not mobile, so no optimization being executed, but IK to constraint position performed'.format(self.skel.name))
self.setPerStepStateVals(False)
self.IKtoCnstrntLoc()
#self.tau = np.zeros(self.ndofs)
return
#save world state - this is so that the state can be restored after fwd sim in opt
#self.env.saveSimState()
#set sim values used by optimization routine
self.setSimVals()
#determine control
if(self.useOSControl):
tauOSC = self.compOSCntrllr()
self.dbgShowTorques(tauOSC)
self.tau = tauOSC
else :
            #build optimizer with LD_SLSQP alg - remake every time to work around apparent stale static state
self.initOptimizer(nlopt.LD_SLSQP)
#mma only handles inequality constraints
#self.initOptimizer(nlopt.LD_MMA)
#initialize optimizer - objective function is minimizer
#TODO set which objectives we wish to use
self.optimizer.set_min_objective(self.objFunc)
#set constraints F=MA and fCntctCnst
#need to set tol as vector
#self.optimizer.add_equality_constraint(self.MAcnst, self.cnstTol[0])
self.optimizer.add_equality_mconstraint(self.MAcnstVec, self.cnstTolMA)
#mma only handles inequality constraints so use dual-bound constraints
#self.initOptimizer(nlopt.LD_MMA)
#self.optimizer.add_inequality_constraint(self.MAconstPos, self.cnstTol[0])
#self.optimizer.add_inequality_constraint(self.MAconstNeg, self.cnstTol[0])
#expects inequality constraint f(x) <= 0
#stop if less than tol changes happen to objective function eval - needs to be small
self.optimizer.set_ftol_rel(1e-13)
#stop after numOptIters
self.optimizer.set_maxeval((int)(self.numOptIters))
#instance specific inequality constraints - ground contact forces, for example
self.setInstanceConstrnts()
#print('stopping if obj func <= {}'.format(self.optimizer.get_stopval()))
#run optimizer - use last result as guess
#guess = self.nextGuess.tolist()
try:
self.nextGuess = self.optimizer.optimize(self.nextGuess.tolist())
except Exception as e:
print('Exception {} thrown, using previous iteration value.'.format(e))
self.nextGuess = np.copy(self.prevGuess)
#if monitoring guesses
if(self.monitorOptGuess):
self._checkiMinMaxVals(self.nextGuess, self.minMaxGuessDict)
#restore environment for actual frwrd step
#self.env.restoreSimState()
self.doStepTestsAndSetTau_OptCntrl()
#IK end effector to constraint position
def delSqDist(self, p1, p2):
diff = p1 - p2
return diff, diff.dot(diff)
def _IK_setSkelAndCompare(self, q, pos):
self.skel.set_positions(q)
effWorldPos = self.reachBody.to_world(x=self.reachBodyOffset)
diff = pos - effWorldPos
return diff, diff.dot(diff), effWorldPos
#IK eef to world location of constraint
def IKtoCnstrntLoc(self):
self.IKtoPassedPos(self.trackBodyCurPos)
#IK end effector to passed position (world coords)
def IKtoPassedPos(self, pos):
#minimize .5 *(pos - effPos_w)^2
#== .5 * (pos - T.effLcl)^2
#
if(self.debug_IK):
print('\nIK to pos : {}'.format(pos))
delPt, distSqP, effWorldPos= self._IK_setSkelAndCompare(self.skel.q, pos)
# effWorldPos = self.reachBody.to_world(x=self.reachBodyOffset)
# delPt, distP = self.delSqDist(pos,effWorldPos)
iters = 0
oldQ = self.skel.q
oldDistSq = distSqP
parser for the dictionary plotter to a user provided subparser
'''
plot_parser = subparser.add_parser('plot', help=" A simple example: plot -pf your_pickle_file.p --dictionary_name mat_eng --x_data time --y_data mat_name ")
input_type_parser = plot_parser.add_mutually_exclusive_group(required=True)
input_type_parser.add_argument('-pf','--pickle_files', dest='pickle_files', help='pickle files to be plotted (run1.p run2.p etc...)', nargs='+' )
input_type_parser.add_argument('-tf','--tally_files', dest='tally_files', help='tally files to be parsed and plotted (tally_file1.txt tally_file2.txt etc...)', nargs='+', action='append')
plot_parser.add_argument('-sk','--series_key', dest='series_key', help='Series key string to access the data (i.e time or cycle)', nargs='?', required=True)
plot_parser.add_argument('-sv','--series_value', dest='series_value', help='Series value to plot the data at (default is the last value of the series_key data)', nargs='?', type=float, default=None)
self.dict_ploter = plot_dictionary()
self.dict_ploter.setup_parser(plot_parser)
plot_parser.set_defaults(func=self.plot_tally)
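        # Illustrative command line (hypothetical file/key names): parse two tally
        # files and plot them at the series point closest to time = 1.0:
        #   plot -tf tally_file1.txt tally_file2.txt -sk time -sv 1.0 \
        #        --dictionary_name mat_eng --x_data bins --y_data counts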
def plot_tally(self, args):
'''
Command line based tally plotting tool.
arguments:
args parsed dictionary plotting arguments
'''
raw_dictionary_data=[]
raw_dictionary_names=[]
if args.tally_files is not None:
raw_dictionary_data, raw_dictionary_names = build_tally_dictionary_list(args.tally_files, self.opppy_parser)
else:
for pickle_file_name in args.pickle_files:
raw_dictionary_names.append(pickle_file_name.split('/')[-1].split('.p')[0])
raw_dictionary_data.append(pickle.load(open(pickle_file_name,'rb')))
# build up plotting dictionary list and names
dictionary_data = []
dictionary_names = []
for dictionary, name in zip(raw_dictionary_data,raw_dictionary_names):
found = False
times = dictionary[args.series_key]
if args.series_value is not None:
for index, time in enumerate(times):
if(time >= args.series_value):
found = True
tally = dictionary['tally_cycle_data'][index]
dictionary_data.append(tally)
dictionary_names.append(name + ' ' + args.series_key + " = " + str(time))
break
if not found:
tally = dictionary['tally_cycle_data'][-1]
dictionary_data.append(tally)
dictionary_names.append(name + ' ' + args.series_key + " = " + str(times[-1]))
# plot dictionaries based on input arguments
self.dict_ploter.plot_dict(args,dictionary_data,dictionary_names)
def plot_interactive_tally_parser(self, subparser):
        plot_parser = subparser.add_parser('iplot',help='Load previously created pickle files (your_run.p) for interactive plotting or a set of output files to be parsed and plotted')
input_type_parser = plot_parser.add_mutually_exclusive_group(required=True)
input_type_parser.add_argument('-pf','--pickle_files', dest='pickle_files', help='pickle files to be plotted (run1.p run2.p etc...)', nargs='+' )
input_type_parser.add_argument('-tf','--tally_files', dest='tally_files', help='tally files to be parsed and plotted (tally_file1.txt tally_file2.txt etc...)', nargs='+', action='append')
plot_parser.set_defaults(func=self.plot_interactive_tally)
def get_plot_option(self):
'''
Interactive request for valid plotting options
'''
while(1):
opt = input('Additional options (-h for list and -q to quit): ')
input_args = opt.split()
parser = self.get_interactive_plot_parser()
try:
args=parser.parse_args(input_args)
break
except:
parser.print_help()
if args.quit:
sys.exit(0)
return args
def get_interactive_plot_parser(self):
'''
return parser object that contains interactive plotting options
'''
parser = argparse.ArgumentParser(description=" Output Plotting options ",
epilog =" Specify the desired plotting options ", usage='')
parser.add_argument('-q','--quit', dest='quit', help='quit program', nargs='?', type=bool, const=True, default=False)
parser.add_argument('-n','--new', dest='new', help='generate a new plot', nargs='?', type=bool, const=True, default=False)
parser.add_argument('-bg','--background', dest='background', help='keep plot in background', nargs='?', type=bool, const=True, default=False)
parser.add_argument('-p','--plot', dest='plot', help='re-open plot', nargs='?', type=bool, const=True, default=False)
parser.add_argument('-l','--labels', dest='legend_labels', help='specify the legend labels [line1_label, line2_label,...]', type=str, nargs='+')
parser.add_argument('-rs','--resize', dest='plot_size', help='specify the plot size [x_size, y_size]', type=float, nargs=2)
add_plot_options(parser)
return parser
def plot_interactive_tally(self, args):
'''
This is an interactive plotter for a python dictionary.
        The plotting options are specified via an option string.
        arguments:
            args - argparse data structure with pickle name info
self.option_string - a string to be passed to get_plot_options for pre-designed plots
'''
raw_dictionary_data=[]
raw_dictionary_names=[]
if args.tally_files is not None:
raw_dictionary_data, raw_dictionary_names = build_tally_dictionary_list(args.tally_files, self.opppy_parser)
else:
for pickle_file_name in args.pickle_files:
raw_dictionary_names.append(pickle_file_name.split('/')[-1].split('.p')[0])
raw_dictionary_data.append(pickle.load(open(pickle_file_name,'rb')))
option_parser = self.get_interactive_plot_parser()
option = option_parser.parse_args(["--new"])
ptype = []
while(1):
if option.new or option.background:
close()
plot_labels = self.get_plot_options(self.option_string)
xsize = 8
ysize = 5
try:
fig = figure(figsize=(xsize,ysize))
except:
PyPloter.switch_backend('agg')
fig = figure(figsize=(xsize,ysize))
xlog_flag = 0
ylog_flag = 0
counter = 1
labels = []
for i in range(len(plot_labels)):
if self.dict_ploter.is_data_available(plot_labels[i][-1],raw_dictionary_data[0]['tally_cycle_data'][-1]):
labels.append(plot_labels[i])
counter = counter + 1
for i in range(0,len(labels),2):
if i+1<counter-1:
print('%3i %-50s %3i %-50s' %(i+1, labels[i][0], i+2, labels[i+1][0]))
else:
print('%3i %-50s' %(i+1, labels[i][0]))
plot_num = get_option_num(counter)-1
label = labels[plot_num][0]
plot_args = labels[plot_num][-1]
plot_args.series_value = get_option_series_value(plot_args.series_key,raw_dictionary_data[0][plot_args.series_key])
# build up plotting dictionary list and names
dictionary_data = []
dictionary_names = []
for dictionary, name in zip(raw_dictionary_data,raw_dictionary_names):
found = False
times = dictionary[plot_args.series_key]
if plot_args.series_value is not None:
for index, time in enumerate(times):
if(time >= plot_args.series_value):
found = True
tally = dictionary['tally_cycle_data'][index]
dictionary_data.append(tally)
dictionary_names.append(name + ' ' + plot_args.series_key + " = " + str(time))
break
if not found:
tally = dictionary['tally_cycle_data'][-1]
dictionary_data.append(tally)
dictionary_names.append(name + ' ' + plot_args.series_key + " = " + str(times[-1]))
if plot_args.y_value_names[0] == "select_key":
keys = list(dictionary_data[-1][plot_args.dictionary_name].keys())
keys.remove(plot_args.x_value_name)
for i, key in zip(list(range(len(keys))),keys):
if (i & 1)==0:
print('%3i %-50s' %(i+1, key), end=' ')
else:
print('%3i %-50s' %(i+1, key))
print()
plot_args.y_value_names = get_key_num_vec(keys)
last_xmin = None
last_xmax = None
last_ymin = None
last_ymax = None
if len(plot_args.scale_x) != len(dictionary_data):
if len(plot_args.scale_x) == 0:
plot_args.scale_x = [1.0]*len(dictionary_data)
else:
plot_args.scale_x = [plot_args.scale_x[-1]]*len(dictionary_data)
if len(plot_args.scale_y) != len(dictionary_data):
if len(plot_args.scale_y) == 0:
plot_args.scale_y = [1.0]*len(dictionary_data)
else:
plot_args.scale_y = [plot_args.scale_y[-1]]*len(dictionary_data)
for dictionary, name, scale_x, scale_y in zip(dictionary_data, dictionary_names, plot_args.scale_x, plot_args.scale_y):
data = dictionary[plot_args.dictionary_name]
xmin = []
xmax = []
ymin = []
ymax = []
plabels = []
x = []
y = []
xmin.append(min(data[plot_args.x_value_name])*scale_x)
xmax.append(max(data[plot_args.x_value_name])*scale_x)
ymin.append(min(data[plot_args.y_value_names[0]])*scale_y)
ymax.append(max(data[plot_args.y_value_names[0]])*scale_y)
# material specific plot
for yname in plot_args.y_value_names:
x.append(array(data[plot_args.x_value_name])*scale_x)
ymin[-1] = min(ymin[-1],min(data[yname])*scale_y)
ymax[-1] = max(ymin[-1],max(data[yname])*scale_y)
plabels.append(label+" "+yname)
if (option.no_y_names):
plabels[-1] = ''
y.append(array(data[yname])*scale_y)
xmin = array(xmin)
xmax = array(xmax)
ymin = array(ymin)
ymax = array(ymax)
if last_xmin is not None:
xmin = min(last_xmin,xmin.min())
xmax = max(last_xmax,xmax.max())
ymin = min(last_ymin,ymin.min())
ymax = max(last_ymax,ymax.max())
else:
xmin = xmin.min()
xmax = xmax.max()
ymin = ymin.min()
ymax = ymax.max()
last_xmin = xmin
last_xmax = xmax
last_ymin = ymin
last_ymax = ymax
xlab = plot_args.x_label
ylab = plot_args.y_label
if option.x_limits is not None:
xmin = option.x_limits[0]
xmax = option.x_limits[1]
if option.y_limits is not None:
ymin = option.y_limits[0]
ymax = option.y_limits[1]
if option.x_label is not None:
xlab = option.x_label
if option.y_label is not None:
ylab = option.y_label
if option.legend_labels is not None:
if len(plabels) < len(option.legend_labels):
print("You specified more labels then there are plots")
else:
for i in range(len(option.legend_labels)):
plabels[i] = option.legend_labels[i]
if option.plot_size is not None:
fig = figure(figsize=(option.plot_size[0],option.plot_size[1]))
if not option.hide_plot:
show(block=False)
for i in range(len(x)):
logplot(option.log_x,option.log_y,x[i],y[i],label=name+" "+plabels[i])
if option.data_file_name is not None:
output_file_temp = option.data_file_name
for i in range(len(x)):
outfile_name = output_file_temp.strip()+"_"+str(name+"_"+plabels[i]).replace(" ","_").replace('/','_').replace('#','num')
outfile = open(outfile_name,'w')
print("# ", xlab, ylab, file=outfile)
for j in range(len(x[i])):
print('%15e %15e' %(x[i][j], y[i][j]), file=outfile)
print("Data written to - ", outfile_name)
outfile.close()
xlabel(xlab)
ylabel(ylab)
legend(loc='best').draw_frame(0)
xlim(xmin,xmax)
ylim(ymin,ymax)
if not option.hide_plot:
show(block=False)
draw()
if option.figure_name is not None:
savefig(option.figure_name)
if not option.hide_plot:
show(block=False)
option = self.get_plot_option()
if option.background:
figure()
else:
clf()
def get_plot_options(self, label_string):
'''
        Get pre-formatted plotting options for the interactive plotter
Input Options:
           label_string preformatted string for expected data in a
dictionary
This expects a two column semicolon (;) separated text
format string. The first column is an arbitrary string that
            will be printed by the interactive plotter as a plot option the
            user can pick. The second column is a dictionary plotting
command (more details in the examples below).
examples:
"your first fancy plot name; -sk time -dn your_dictionary_key -x your_x_data_key -xlab "time [s]" -y your_y_data_key -ylab "RSS [%]";
"your second fancy plot name; -sk cycle -dn your_dictionary_key -x bins -xlab "bin [#]" -y select_key -ylab "tally_count [#]";
            The dictionary plotting command has some basic requirements,
            including a dictionary name (-dn), the x variable key (-x), the y
            variable key (-y), the x axis label (-xlab), and the y axis
            label (-ylab). There are more details about available plotting
            options in plot_dictionary.py. The "-y select_key" value is a
            magic keyword that tells the interactive plotter to provide a
            list of all y_data options for the designated dictionary.
'''
plot_labels = []
label_file = io.StringIO(label_string)
raw = label_file.readlines()
for line in raw:
lines = line.strip().split(';')
if len(lines) == 3:
labels = []
labels.append(lines[0])
labels.append(self.parse_tally_plot_args(lines[1]))
plot_labels.append(labels)
return plot_labels
def parse_tally_plot_args(self, input_string):
'''
Returns a set of parsed dictionary plotting options from an input_string
'''
parser = argparse.ArgumentParser(description=" Tally Plotting options ",
epilog =" Specify the desired plotting options ", usage='')
        parser.add_argument('-dn','--dictionary_name', dest='dictionary_name', help='dictionary that the plotting data is contained in')
<gh_stars>1-10
import os
import importlib
from glob import glob
from subprocess import call
from collections import OrderedDict
from devito.compiler import make
from devito.exceptions import CompilationError
from devito.logger import debug, yask as log
from devito.yask import cfac, nfac, ofac, exit, configuration
from devito.yask.utils import namespace, rawpointer
class YaskKernel(object):
"""
A ``YaskKernel`` wraps a YASK kernel solution.
"""
def __init__(self, name, yc_soln, local_grids=None):
"""
Write out a YASK kernel, build it using YASK's Makefiles,
import the corresponding SWIG-generated Python module, and finally
create a YASK kernel solution object.
:param name: Unique name of this YaskKernel.
:param yc_soln: YaskCompiler solution.
:param local_grids: A local grid is necessary to run the YaskKernel,
but its final content can be ditched. Indeed, local
grids are hidden to users -- for example, they could
represent temporary arrays introduced by the DSE.
This parameter tells which of the ``yc_soln``'s grids
are local.
"""
self.name = name
# Shared object name
self.soname = "%s.%s.%s" % (name, yc_soln.get_name(), configuration['platform'])
# It's necessary to `clean` the YASK kernel directory *before*
# writing out the first `yask_stencil_code.hpp`
make(namespace['path'], ['-C', namespace['kernel-path'], 'clean'])
# Write out the stencil file
if not os.path.exists(namespace['kernel-path-gen']):
os.makedirs(namespace['kernel-path-gen'])
yc_soln.format(configuration['isa'],
ofac.new_file_output(namespace['kernel-output']))
# JIT-compile it
try:
compiler = configuration.yask['compiler']
opt_level = 1 if configuration['develop-mode'] else 3
make(namespace['path'], ['-j3', 'YK_CXX=%s' % compiler.cc,
'YK_CXXOPT=-O%d' % opt_level,
'mpi=0', # Disable MPI for now
# "EXTRA_MACROS=TRACE",
'YK_BASE=%s' % str(name),
'stencil=%s' % yc_soln.get_name(),
'arch=%s' % configuration['platform'],
'-C', namespace['kernel-path'], 'api'])
except CompilationError:
exit("Kernel solution compilation")
# Import the corresponding Python (SWIG-generated) module
try:
yk = getattr(__import__('yask', fromlist=[name]), name)
except ImportError:
exit("Python YASK kernel bindings")
try:
yk = reload(yk)
except NameError:
# Python 3.5 compatibility
yk = importlib.reload(yk)
# Create the YASK solution object
kfac = yk.yk_factory()
self.env = kfac.new_env()
self.soln = kfac.new_solution(self.env)
# Apply any user-provided options, if any.
# These are applied here instead of just before prepare_solution()
# so that applicable options will apply to all API calls.
self.soln.apply_command_line_options(configuration.yask['options'] or '')
# MPI setup: simple rank configuration in 1st dim only.
# TODO: in production runs, the ranks would be distributed along all
# domain dimensions.
self.soln.set_num_ranks(self.space_dimensions[0], self.env.get_num_ranks())
# Redirect stdout to a string or file
if configuration.yask['dump']:
filename = 'yk_dump.%s.%s.%s.txt' % (self.name,
configuration['platform'],
configuration['isa'])
filename = os.path.join(configuration.yask['dump'], filename)
self.output = yk.yask_output_factory().new_file_output(filename)
else:
self.output = yk.yask_output_factory().new_string_output()
self.soln.set_debug_output(self.output)
# Users may want to run the same Operator (same domain etc.) with
# different grids.
self.grids = {i.get_name(): i for i in self.soln.get_grids()}
self.local_grids = {i.name: self.grids[i.name] for i in (local_grids or [])}
def new_grid(self, name, obj):
"""
Create a new YASK grid.
"""
return self.soln.new_fixed_size_grid(name, [str(i) for i in obj.indices],
[int(i) for i in obj.shape]) # cast np.int
def run(self, cfunction, arg_values, toshare):
"""
Run the YaskKernel through a JIT-compiled function.
        :param cfunction: The JIT-compiled function, of type :class:`ctypes.FuncPtr`
:param arg_values: The run-time values to be passed to ``cfunction``.
:param toshare: Mapper from functions to :class:`Data`s for sharing
grid storage.
"""
# Sanity check
grids = {i.grid for i in toshare if i.is_TensorFunction}
assert len(grids) == 1
grid = grids.pop()
# Set the domain size, apply grid sharing, more sanity checks
for k, v in zip(self.space_dimensions, grid.shape):
self.soln.set_rank_domain_size(k, int(v))
for k, v in toshare.items():
target = self.grids.get(k.name)
if target is not None:
v._give_storage(target)
assert all(not i.is_storage_allocated() for i in self.local_grids.values())
assert all(v.is_storage_allocated() for k, v in self.grids.items()
if k not in self.local_grids)
# Debug info
debug("%s<%s,%s>" % (self.name, self.time_dimension, self.space_dimensions))
for i in list(self.grids.values()) + list(self.local_grids.values()):
if i.get_num_dims() == 0:
debug(" Scalar: %s", i.get_name())
elif not i.is_storage_allocated():
size = [i.get_rank_domain_size(j) for j in self.space_dimensions]
debug(" LocalGrid: %s%s, size=%s" %
(i.get_name(), str(i.get_dim_names()), size))
else:
size = [i.get_rank_domain_size(j) for j in self.space_dimensions]
lpad = [i.get_left_pad_size(j) for j in self.space_dimensions]
rpad = [i.get_right_pad_size(j) for j in self.space_dimensions]
debug(" Grid: %s%s, size=%s, left_pad=%s, right_pad=%s" %
(i.get_name(), str(i.get_dim_names()), size, lpad, rpad))
# Set up the block shape for loop blocking
for i, j in zip(self.space_dimensions, configuration.yask['blockshape']):
self.soln.set_block_size(i, j)
# This, amongst other things, allocates storage for the temporary grids
self.soln.prepare_solution()
# Set up auto-tuning
if configuration.yask['autotuning'] == 'off':
self.soln.reset_auto_tuner(False)
elif configuration.yask['autotuning'] == 'preemptive':
self.soln.run_auto_tuner_now()
# Run the kernel
cfunction(*arg_values)
# Release grid storage. Note: this *will not* cause deallocation, as these
# grids are actually shared with the hook solution
for i in self.grids.values():
i.release_storage()
# Release local grid storage. This *will* cause deallocation
for i in self.local_grids.values():
i.release_storage()
# Dump performance data
self.soln.get_stats()
@property
def space_dimensions(self):
return tuple(self.soln.get_domain_dim_names())
@property
def time_dimension(self):
return self.soln.get_step_dim_name()
@property
def rawpointer(self):
return rawpointer(self.soln)
def __repr__(self):
return "YaskKernel [%s]" % self.name
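# A minimal, standalone sketch of the module-reload fallback used in
# YaskKernel.__init__ above: the Python 2 builtin `reload` is tried first,
# and `importlib.reload` is used once that builtin no longer exists.  The
# `math` module below is only a stand-in for the SWIG-generated `yask` module.
import importlib
import math
def _reload_compat_sketch(module):
    try:
        return reload(module)              # Python 2 builtin
    except NameError:
        return importlib.reload(module)    # Python 3
assert _reload_compat_sketch(math) is math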
class YaskContext(object):
def __init__(self, name, grid, dtype):
"""
Proxy between Devito and YASK.
A ``YaskContext`` contains N :class:`YaskKernel` and M :class:`Data`,
which have common space and time dimensions.
:param name: Unique name of the context.
:param grid: A :class:`Grid` carrying the context dimensions.
:param dtype: The data type used in kernels, as a NumPy dtype.
"""
self.name = name
self.space_dimensions = grid.dimensions
self.time_dimension = grid.stepping_dim
self.dtype = dtype
# All known solutions and grids in this context
self.solutions = []
self.grids = {}
# Build the hook kernel solution (wrapper) to create grids
yc_hook = self.make_yc_solution(namespace['jit-yc-hook'])
# Need to add dummy grids to make YASK happy
# TODO: improve me
handle = [nfac.new_domain_index(str(i)) for i in self.space_dimensions]
yc_hook.new_grid('dummy_wo_time', handle)
handle = [nfac.new_step_index(str(self.time_dimension))] + handle
yc_hook.new_grid('dummy_w_time', handle)
self.yk_hook = YaskKernel(namespace['jit-yk-hook'](name, 0), yc_hook)
@property
def dimensions(self):
return (self.time_dimension,) + self.space_dimensions
@property
def nsolutions(self):
return len(self.solutions)
@property
def ngrids(self):
return len(self.grids)
def make_grid(self, obj):
"""
Create and return a new :class:`Data`, a YASK grid wrapper. Memory
is allocated.
:param obj: The :class:`Function` for which a YASK grid is allocated.
"""
if set(obj.indices) < set(self.space_dimensions):
exit("Need a Function[x,y,z] to create a YASK grid.")
name = 'devito_%s_%d' % (obj.name, contexts.ngrids)
# Create the YASK grid
grid = self.yk_hook.new_grid(name, obj)
# Where should memory be allocated ?
alloc = obj._allocator
if alloc.is_Numa:
if alloc.put_onnode:
grid.set_numa_preferred(alloc.node)
elif alloc.put_local:
grid.set_numa_preferred(namespace['numa-put-local'])
for i, s, h in zip(obj.indices, obj.shape_allocated, obj._extent_halo):
if i.is_Time:
assert grid.is_dim_used(i.name)
assert grid.get_alloc_size(i.name) == s
else:
# Note:
# From the YASK docs: "If the halo is set to a value larger than
# the padding size, the padding size will be automatically increased
                # to accommodate it."
grid.set_left_halo_size(i.name, h.left)
grid.set_right_halo_size(i.name, h.right)
grid.alloc_storage()
self.grids[name] = grid
return grid
def make_yc_solution(self, namer):
"""
Create and return a YASK compiler solution object.
"""
name = namer(self.name, self.nsolutions)
yc_soln = cfac.new_solution(name)
        # Redirect stdout/stderr to a string or file
if configuration.yask['dump']:
filename = 'yc_dump.%s.%s.%s.txt' % (name, configuration['platform'],
configuration['isa'])
filename = os.path.join(configuration.yask['dump'], filename)
yc_soln.set_debug_output(ofac.new_file_output(filename))
else:
yc_soln.set_debug_output(ofac.new_null_output())
# Set data type size
yc_soln.set_element_bytes(self.dtype().itemsize)
# Apply compile-time optimizations
if configuration['isa'] != 'cpp':
dimensions = [nfac.new_domain_index(str(i)) for i in self.space_dimensions]
# Vector folding
for i, j in zip(dimensions, configuration.yask['folding']):
yc_soln.set_fold_len(i, j)
# Unrolling
for i, j in zip(dimensions, configuration.yask['clustering']):
yc_soln.set_cluster_mult(i, j)
return yc_soln
def make_yk_solution(self, namer, yc_soln, local_grids):
"""
Create and return a new :class:`YaskKernel` using ``self`` as context
and ``yc_soln`` as YASK compiler ("stencil") solution.
"""
soln = YaskKernel(namer(self.name, self.nsolutions), yc_soln, local_grids)
self.solutions.append(soln)
return soln
def __repr__(self):
return ("YaskContext: %s\n"
"- domain: %s\n"
"- grids: [%s]\n"
"- solns: [%s]\n") % (self.name, str(self.space_dimensions),
', '.join([i for i in list(self.grids)]),
', '.join([i.name for i in self.solutions]))
class ContextManager(OrderedDict):
def __init__(self, *args, **kwargs):
super(ContextManager, self).__init__(*args, **kwargs)
self.ncontexts = 0
def dump(self):
"""
Drop all known contexts and clean up the relevant YASK directories.
"""
self.clear()
call(['rm', '-f'] + glob(os.path.join(namespace['path'], 'yask', '*devito*')))
call(['rm', '-f'] + glob(os.path.join(namespace['path'], 'lib', '*devito*')))
call(['rm', '-f'] + glob(os.path.join(namespace['path'], 'lib', '*hook*')))
def fetch(self, grid, dtype):
"""
Fetch the :class:`YaskContext` in ``self`` uniquely identified by
``grid`` and ``dtype``. Create a new (empty) :class:`YaskContext` on miss.
"""
# A unique key for this context.
key = (configuration['isa'], dtype, grid.dimensions,
grid.time_dim, grid.stepping_dim)
# Fetch or create a YaskContext
if key in self:
log("Fetched existing context from cache")
else:
self[key] = YaskContext('ctx%d' % self.ncontexts, grid, dtype)
self.ncontexts += 1
log("Context successfully created!")
return self[key]
@property
def ngrids(self):
return sum(i.ngrids for i in self.values())
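# A minimal, standalone sketch of the fetch-or-create caching pattern that
# ContextManager.fetch above implements; the key tuple and the factory
# callable here are purely hypothetical, while the real cache is keyed on
# (isa, dtype, dimensions, time_dim, stepping_dim).
from collections import OrderedDict
class _ContextCacheSketch(OrderedDict):
    def __init__(self, *args, **kwargs):
        super(_ContextCacheSketch, self).__init__(*args, **kwargs)
        self.ncontexts = 0
    def fetch(self, key, factory):
        # Create an entry only on a cache miss, mirroring fetch() above.
        if key not in self:
            self[key] = factory('ctx%d' % self.ncontexts)
            self.ncontexts += 1
        return self[key]
_cache_sketch = _ContextCacheSketch()
_c1 = _cache_sketch.fetch(('avx2', 'float64'), lambda name: {'name': name})
_c2 = _cache_sketch.fetch(('avx2', 'float64'), lambda name: {'name': name})
assert _c1 is _c2 and _cache_sketch.ncontexts == 1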
contexts = ContextManager()
"""All known contexts."""
a_tfrom1[1]
a_t_cont = sylvester(a_syl, b_syl, q_syl)
a_t_list.append(a_t_cont)
for i in range(1, q_tfrom1.size(0)-1):
a_syl = 2 * self.kappa * q_tfrom1[i]
b_syl = c_xtxt[i]
q_syl = c_xtxtm1[i] + self.kappa * q_tfrom1[i] @ (a_t_cont + a_tfrom1[i+1])
a_t_cont = sylvester(a_syl, b_syl, q_syl)
a_t_list.append(a_t_cont)
a_syl = self.kappa * q_tfrom1[-1]
b_syl = c_xtxt[-2]
q_syl = c_xtxtm1[-1] + self.kappa * q_tfrom1[-1] @ a_t_cont
a_t_cont = sylvester(a_syl, b_syl, q_syl)
a_t_list.append(a_t_cont)
a_tfrom1 = torch.stack(a_t_list, 0)
c_yy = (y @ y.permute(0, 1, 3, 2)).mean(dim=0)
c_yy = (c_yy + c_yy.permute(0, 2, 1))/2
c_yx = (y @ x.permute(0, 1, 3, 2)).mean(dim=0)
# b_t = c_yx @ c_xtxt_inv
b_t = c_yx.permute(0, 2, 1).cholesky_solve(c_xtxt_chol).permute(0, 2, 1)
r_t = c_yy - c_yx @ c_xtxt_inv @ c_yx.permute(0, 2, 1)
# try:
# r_t.cholesky()
# except RuntimeError as err:
# r_t = c_yy - b_t @ (c_yx.permute(0, 2, 1))
# r_t.cholesky()
b = b_t.mean(dim=0)
r = r_t.mean(dim=0)
r = (r + r.T)/2
mu0 = x[:, 0, :, :].mean(dim=0)
eps0 = x[:, 0:1, :, :] - mu0
eps0_outer = eps0 @ eps0.permute(0, 1, 3, 2)
p0 = (p[0, :, :] + eps0_outer).mean(dim=0)
p0 = (p0 + p0.permute(0, 2, 1))/2
return a_tfrom1, q_tfrom1, b, r, mu0, p0
def update_parameters_from_tensors(self, a_t, q_t, b, r, mu0, p0):
        '''Args are `torch.Tensor`s; they are detached, converted to
        `np.ndarray`, and stored in the object's attributes.'''
self.update_parameters_from_np(
a_t.detach().numpy(),
q_t.detach().numpy(),
b.detach().numpy(),
r.detach().numpy(),
mu0.squeeze(1).detach().numpy(),
p0.squeeze(0).detach().numpy()
)
def update_parameters_from_np(self, a_t, q_t, b, r, mu0, p0):
        '''Args are `np.ndarray`s and are stored in the object's attributes.'''
self.transition_matrices = a_t
self.transition_covariance = q_t
self.observation_matrices = b
self.observation_covariance = r
self.initial_state_mean = mu0
self.initial_state_covariance = p0
def train_em_step(self, engine, batch):
        '''Training step: performs one EM iteration (smoothing followed by the
        M-step); the returned value is stored in engine.state.output.'''
mu, p, h = self.smooth_torch(batch)
try:
m_step_result = self.maximization(batch, mu, p, h)
except RuntimeError as err:
            print(f'Original error: {err}')
            # Re-raise: m_step_result is required below, so we cannot continue.
            raise
self.update_parameters_from_tensors(*m_step_result)
try:
data_negloglikelihood = -self.loglikelihood_torch(batch)
except RuntimeError as err:
            print(f'Original error: {err}')
            # Re-raise: the log-likelihood is required below, so we cannot continue.
            raise
mean_frob_norm_sum = self.frob_diff_a().mean()
loss = data_negloglikelihood # + mean_frob_norm_sum #+ self.frob_a().mean()
return {'nll': loss.item(),
'mean_frob_norm_sum': mean_frob_norm_sum.item()}
def frob_diff_a(self):
""" Returns sequence of frob. norms of differences of the sequence A_t.
"""
a_t = torch.from_numpy(self.transition_matrices)
if a_t.dim() <=2:
return torch.tensor(0, dtype=torch.double)
dif = a_t[:-1] - a_t[1:]
return torch.norm(dif, p='fro', dim=(1, 2))
def frob_a(self):
""" Returns sequence of frob. norms of the sequence A_t.
"""
a_t = torch.from_numpy(self.transition_matrices)
return torch.norm(a_t, p='fro', dim=(1, 2))
    def r2_score(self, dataset):
        """Calculates the R2 score of the model on the given dataset,
        averaged over the samples in the dataset (mean and std are returned).
        """
y = self.trim_length(dataset)
y_squeezed = y.squeeze(-1).detach().numpy()
y_hat, sigma = self.predict_output_torch(y)
y_hat_squeezed = y_hat.squeeze(-1).detach().numpy()
scores = []
for y_sample, y_hat_sample in zip(y_squeezed, y_hat_squeezed):
scores.append(r2_score(y_sample, y_hat_sample))
scores_np = np.array(scores)
return scores_np.mean(), scores_np.std(ddof=1)
def plot_latent(self, y, y_noiseless=None, time=None):
if time is None:
time = np.arange(y.squeeze().size(0))
y = self.trim_length(y)
mu, p, h = self.smooth_torch(y)
fig, axes = plt.subplots(nrows=2, figsize=(12, 9))
y_hat, sig = self.predict_output_torch(y)
y_sqz = y.squeeze(-1).squeeze(0).detach()
y_hat_sqz = y_hat.squeeze(-1).squeeze(0).detach()
mu_sqz = mu.squeeze(-1).squeeze(0).detach()
labels = []
if y_noiseless is not None:
axes[0].plot(y_noiseless.squeeze().detach(), lw=1, linestyle='--')
labels.extend([r'$true y_{0}$'.format(i) for i in range(y_sqz.size(-1))])
plot_confidence(time, y_hat.squeeze(-1).squeeze(0).detach(), sig.detach(), ax=axes[0])
labels.extend([r'$\hat y_{0} \pm 2\sigma$'.format(i) for i in range(y_sqz.size(-1))])
axes[0].plot(time, y.squeeze().detach(), lw=1)
labels.extend([r'$y_{0}$'.format(i) for i in range(y_sqz.size(-1))])
axes[0].set_xlabel('Sample')
axes[0].set_ylabel('Magnitude')
axes[0].set_xlim(time[0], time[-1])
axes[0].grid(True)
axes[0].legend(labels)
# axes[1].plot(time, mu.squeeze().detach())
plot_confidence(time, mu_sqz, p.detach(), ax=axes[1])
axes[1].set_xlabel('Sample')
axes[1].set_ylabel('Magnitude')
axes[1].set_xlim(time[0], time[-1])
axes[1].grid(True)
labels_x_hat = [r'$\hat x_{0} \pm 2\sigma$'.format(i) for i in range(mu_sqz.size(-1))]
axes[1].legend(labels_x_hat)
plt.tight_layout()
plt.show()
def print_info(self):
print(
f'Info TBD',
f'\n------------------------------------------'
)
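# A minimal, standalone sketch of the Sylvester solves performed in the
# M-step above: maximization() repeatedly calls sylvester(a_syl, b_syl, q_syl),
# which is assumed to solve a system of the form A X + X B = Q.  SciPy's
# solve_sylvester stands in for that helper here, and all matrices are made up.
import numpy as np
from scipy.linalg import solve_sylvester
_a_syl = np.array([[2.0, 0.0], [0.0, 3.0]])
_b_syl = np.array([[1.0, 0.5], [0.0, 1.0]])
_q_syl = np.array([[1.0, 2.0], [3.0, 4.0]])
_x_syl = solve_sylvester(_a_syl, _b_syl, _q_syl)   # solves A X + X B = Q
assert np.allclose(_a_syl @ _x_syl + _x_syl @ _b_syl, _q_syl)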
class LitEMSystem(LightningModule):
"""
ML System for HGMM model.
    @param length: length (number of time steps) of each sequence in the dataset.
@param num_states: number of hidden states to use.
@param num_outputs: number of outputs of the observable in the dataset.
@param kappa: continuity preference strength - precision (inverse variance)
of the parameter gaussian process.
"""
def __init__(self, length, num_states, num_outputs, kappa):
super().__init__()
self.smoother = SmoothKalman(length=length,
num_states=num_states,
num_outputs=num_outputs,
cont_precision=kappa)
self.dummy_param = torch.nn.Parameter(torch.rand([1, 1],
dtype=torch.double))
self.save_hyperparameters()
def state_dict(self):
base_dict = super().state_dict()
smoother_dict = self.smoother.state_dict()
base_dict.update(smoother_dict)
return base_dict
def load_state_dict(self, state_dict, strict, **kwargs):
super().load_state_dict(state_dict, strict=False)
self.smoother.load_state_dict(state_dict)
def forward(self, y):
"""
Calculates distributions of latent variables.
@param: y [n_timesteps, n_dim_obs] array-like
@return: smoothed_state_means, smoothed_state_covariances,
kalman_smoothing_gains
"""
return self.smoother.smooth_torch(y)
def predict_log_proba(self, batch):
"""
batch is `torch.Tensor` of size `b x length x n_outs x 1` - that is
4-dimensional.
@return log_proba: torch.Tensor (b, )
"""
return self.smoother.log_pred_density(batch)
def training_step(self, batch, batch_idx):
try:
mu, p, h = self.smoother.smooth_torch(batch[1])
m_step_result = self.smoother.maximization(batch[1], mu, p, h)
self.smoother.update_parameters_from_tensors(*m_step_result)
data_negloglikelihood = -self.smoother.loglikelihood_torch(batch[1])
mean_frob_norm_sum = self.smoother.frob_diff_a().mean()
loss = data_negloglikelihood + mean_frob_norm_sum
self.log('aug_loss', loss.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
self.log('nll', data_negloglikelihood.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
self.log('mean_frob_norm_sum', mean_frob_norm_sum.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
except RuntimeError:
self.log('aug_loss', math.nan, on_step=True, on_epoch=False, prog_bar=True, logger=True)
return self.dummy_param
def configure_optimizers(self):
# Fake optimizer, does nothing
optimizer = torch.optim.Adam(self.parameters(), lr=0.1)
return optimizer
def validation_step(self, batch, batch_idx):
data_negloglikelihood = -self.smoother.loglikelihood_torch(batch[1])
mean_frob_norm_sum = self.smoother.frob_diff_a().mean()
val_loss = data_negloglikelihood + mean_frob_norm_sum
self.log('val_nll', data_negloglikelihood.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
self.log('val_aug_loss', val_loss.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
return data_negloglikelihood
def test_step(self, batch, batch_idx):
data_negloglikelihood = -self.smoother.loglikelihood_torch(batch[1])
        self.log('test_nll', data_negloglikelihood.detach(), on_step=False, on_epoch=True, prog_bar=True, logger=True)
return data_negloglikelihood
def fit(self, train_loader: DataLoader, val_loader: DataLoader, max_epochs=100):
"""
Fits a single class based on given data.
@return best_model_path: path to the checkpoint with best performing model.
"""
early_stop_callback = EarlyStopping(
monitor='val_nll',
min_delta=0.001,
patience=5,
verbose=True,
mode='min'
)
early_nan_stop_callback = EarlyStopping(
monitor='aug_loss',
patience=5,
verbose=True,
mode='min',
check_finite=True,
check_on_train_epoch_end=True
)
early_stop_nll_callback = EarlyStopping(
monitor='nll',
patience=5,
verbose=True,
mode='min',
check_on_train_epoch_end=True
)
self.trainer = Trainer(
max_epochs=max_epochs,
callbacks=[early_stop_callback,
early_nan_stop_callback,
early_stop_nll_callback]
)
self.trainer.fit(self, train_loader, val_loader)
return self.trainer.checkpoint_callback.best_model_path
def eval_elpd(self, batch):
"""
        Calculates the log pointwise predictive density of the model on the
        given batch, which serves as an estimate of the expected log
        predictive density (elpd) for the current set of parameters.
"""
num_samples = batch[0].size(0)
elpd_est = self.smoother.loglikelihood_torch(batch[1])
return elpd_est, num_samples
def eval_elpd_ldr(self, val_loader: DataLoader):
"""
        Computes the elpd from the given val_loader for one instance of the
        parameters, as a sample-count-weighted average over batches (a toy
        sketch of this average appears after this class).
        @return: elpd (scalar) of the current model given the val_loader data.
"""
results = [self.eval_elpd(batch) for batch in val_loader]
factored_elpds = [elpd * num_samples for elpd, num_samples in results]
N = sum([num_samples for _, num_samples in results])
elpd_weighted_avg = sum(factored_elpds)/N
return elpd_weighted_avg
def fit_all_classes(self, data_container: DatasetContainer, batch_size=1000, max_epochs=100):
'''
OBSOLETE!
        Runs training with early stopping and constructs a default trainer for you.
        Trains over all classes in data_container; the best models from the
        previous pass are used as initial models for the next one, so the loop
        over the classes is executed twice.
        @return: best models' checkpoint paths
'''
# results = {label: checkpoint_path}
results = {}
data_loaders = {}
for Xs, ys, label in data_container.class_train_datasets_generator():
train_set, val_set = data_container._split_for_validation((Xs, ys), ratio=0.3)
train_loader = DataLoader(train_set, batch_size=batch_size)
val_loader = DataLoader(val_set, batch_size=batch_size)
data_loaders[label] = (train_loader, val_loader)
for label, (train_loader, val_loader) in data_loaders.items():
self.fit(train_loader, val_loader, max_epochs=max_epochs)
for label, (train_loader, val_loader) in data_loaders.items():
results[label] = self.fit(train_loader, val_loader, max_epochs=max_epochs)
return results
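# A small numeric sketch of the sample-count-weighted average that
# eval_elpd_ldr above computes from per-batch results; the batch values below
# are made up.
_batch_results_sketch = [(-1.2, 100), (-0.8, 50)]          # (mean elpd, num samples)
_total_n_sketch = sum(n for _, n in _batch_results_sketch)
_elpd_sketch = sum(e * n for e, n in _batch_results_sketch) / _total_n_sketch
assert abs(_elpd_sketch - (-1.2 * 100 + -0.8 * 50) / 150) < 1e-12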
if __name__ == "__main__":
from datetime import datetime
import json
from pathlib import Path
torch.manual_seed(53)
dataset_name = 'BasicMotions'
mode = 'ucr'
data_container = DatasetContainer(dataset_name, mode=mode, dtype=torch.double)
sample_length = data_container.data_len
N_states = 3
N_outputs = data_container.output_dim
cont_precision = 10.
my_model = LitEMSystem(length=sample_length,
num_states=N_states,
num_outputs=N_outputs,
kappa=cont_precision)
chkpt_paths = my_model.fit_all_classes(data_container, max_epochs=3)
outdir = Path('out')
outdir.mkdir(parents=True, exist_ok=True)
timestamp = datetime.now().strftime('%Y-%m-%dT%H-%M-%S')
filename = f'{dataset_name}_{mode}_{timestamp}.json'
save_file_path = outdir / Path(filename)
with open(save_file_path, 'w') as f:
json.dump(chkpt_paths, f)
test_data_sample = data_container.dataset_test.tensors[1][0:1, :]
# my_model.smoother.plot_latent(test_data_sample)
for label in data_container.unique_labels:
        loaded_model = LitEMSystem.load_from_checkpoint(chkpt_paths[label])
loaded_model.smoother.plot_latent(test_data_sample)
# y_hat, sigma = my_model.smoother.predict_output_torch(test_data_sample)
# N_epochs = 10
# batch_size = 80
# sample_length = 100
# N_samples = 10
# N_states = 5
# cont_precision = 10
# # Dummy data to fit,
# # a good example was with time 0:10, 380 sample_length, 2 N_states,
# # 20 N_samples and 0.35 std of noise and generator [sin(7x), cos(5x)]
# time_vector = torch.linspace(0, 3, sample_length, dtype=torch.double).unsqueeze(0)
# x_tensor = time_vector.repeat(N_samples, 1)
# # y_noiseless = torch.stack([
# # torch.sin(7 * x_tensor) + torch.cos(3 * x_tensor),
# # torch.cos(5 * x_tensor)
# # ],
# # dim=2).unsqueeze(-1)
# y_noiseless = torch.stack([
# torch.sin(3 * x_tensor),
# torch.cos(5 * x_tensor)
# ],
# dim=2).unsqueeze(-1)
# # y_noiseless = torch.tensor([[1., 0.], [0.1, 0.5], [0.1, 0.1]], dtype=torch.double) @ y_noiseless
# # y_noiseless = torch.tensor([[0.5, 0.5]], dtype=torch.double) @ y_noiseless
    # y = y_noiseless +
import config
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
import os
from datetime import datetime, timedelta
from xgboost import XGBRFRegressor
from xgboost import plot_importance
from matplotlib import pyplot
from sklearn.preprocessing import LabelBinarizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error
### Configure weights of 3 methods for intervention scoring
### - the 3 weights should sum up to 1.0
### - a method will be ignored if its weight is set to 0.0
intervention_scoring_methods = {'fit_stringency_index':1.0,
'fit_conf_cases':0.0,
'fit_intv_effect':0.0}
#Data source for the whole analysis
data_src = os.path.join(config.base_data_dir, config.oxcgrt_intervention_data_offline)
relevant_columns = ['CountryName', 'CountryCode', 'C1_School closing', 'C2_Workplace closing',
'C3_Cancel public events', 'C4_Restrictions on gatherings', 'C5_Close public transport',
'C6_Stay at home requirements', 'C7_Restrictions on internal movement',
'C8_International travel controls', 'E1_Income support', 'H1_Public information campaigns',
'H2_Testing policy', 'H3_Contact tracing', 'ConfirmedCases', 'ConfirmedDeaths',
'StringencyIndex']
#selected_countries = ['IND', 'USA', 'GBR', 'ITA', 'JPN', 'SGP', 'NLD', 'ISR', 'BEL', 'BRA', 'DEU', 'CUB', 'ESP', 'MEX', 'MYS', 'PHL', 'HUN', 'ZAF']
selected_countries = None
#Select a country only if it has exceeded the conf_cases_threshold
conf_cases_threshold = 10000
#Select records having confirmed cases >= min_case_threshold
min_case_threshold = 0
#window for rolling averages of conf case counts
smoothing_window_len = 3
#number of lags to use for time-series style modeling of conf cases
num_lags = 1
#Skip a few recent days' data for potential missing values
recent_days_to_skip = 5
#median incubation period for Covid19
incubation_period = 5
plot_predictions = False
### Fetch and filter cases & intervention data
def get_all_data (data_src):
data = pd.read_csv(data_src)
data = data.loc[(data['RegionCode'] == '') | (data['RegionCode'].isnull())]
data['Date'] = pd.to_datetime(data['Date'], format='%Y%m%d')
data = data.set_index('Date')
data = data[relevant_columns]
countries = data[['CountryCode', 'CountryName']].drop_duplicates(keep='first')
country_dict = dict()
for cc in countries['CountryCode']:
country_dict[cc] = countries.loc[countries['CountryCode']==cc, 'CountryName'].tolist()[0]
selected_countries = data.loc[(data['ConfirmedCases'] > conf_cases_threshold),'CountryCode'].unique()
print ('Countries with more than %d confirmed cases: %d' % (conf_cases_threshold, len(selected_countries)))
if selected_countries is not None and len(selected_countries)>0:
data = data.loc[data['CountryCode'].isin(selected_countries)]
data['ConfirmedCases'] = data['ConfirmedCases'].fillna(method='ffill')
data = data.loc[data['ConfirmedCases'] >= min_case_threshold]
return data, country_dict
### Filter data for a specific country
def get_country_data (data, country_code, min_threshold=1):
country_data = data.loc[data['CountryCode'] == country_code]
country_data = country_data[:-recent_days_to_skip]
country_data = country_data.loc[country_data['ConfirmedCases'] >= min_threshold]
country_data = country_data.loc[~country_data.index.duplicated(keep='first')]
    print (f'Data dimension for country {country_code} is {country_data.shape}')
return country_data
### Feature engineering
def add_features (country_data, rolling_window=3):
country_data['Change'] = (country_data['ConfirmedCases'] - country_data['ConfirmedCases'].shift(1)) - 1
country_data['RateOfChange'] = (country_data['ConfirmedCases'] - country_data['ConfirmedCases'].shift(1)) / (country_data['ConfirmedCases'].shift(1) - country_data['ConfirmedCases'].shift(2))
country_data['RateOfChange'] = country_data['RateOfChange'].replace([np.inf, -np.inf], np.nan)
country_data['RateOfChange'] = country_data['RateOfChange'].fillna(0)
country_data['Change_MA'] = country_data['Change'].rolling(window=rolling_window).mean()
country_data['RateOfChange_MA'] = country_data['RateOfChange'].rolling(window=rolling_window).mean()
return country_data
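# A toy illustration of the rolling-mean smoothing applied by add_features
# above (window length smoothing_window_len); the series values below are made up.
import pandas as pd
_changes_sketch = pd.Series([1.0, 3.0, 2.0, 6.0, 4.0])
_smoothed_sketch = _changes_sketch.rolling(window=3).mean()
# The first two entries are NaN (not enough history); the third is (1+3+2)/3.
assert _smoothed_sketch.iloc[2] == 2.0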
### Get training features and labels
def get_modeling_data (country_data, target_col, nlags=1, fit_conf_cases=False, fit_intv_effect=False):
country_data = country_data.fillna(method='ffill')
country_data = country_data.fillna(0) #Fill remaining initial NaN values
#X = country_data.drop(['CountryName', 'CountryCode', 'ConfirmedCases', 'Change', 'RateOfChange', 'DepVar', 'Change_MA', 'RateOfChange_MA'], axis=1)
#country_data[target_col] = country_data[target_col].fillna(0.0)
drop_cols = ['CountryName', 'CountryCode', 'ConfirmedCases', 'ConfirmedDeaths', 'StringencyIndex', 'Change', 'RateOfChange', 'Change_MA', 'RateOfChange_MA']
X = country_data.drop(drop_cols, axis=1, errors='ignore')
y = country_data[target_col]
if fit_conf_cases or fit_intv_effect:
lag_cols = []
for lag in range(nlags):
X[target_col + '_' + str(lag+1)] = country_data[target_col].shift(lag)
X[target_col + '_' + str(lag+1)].fillna(0, inplace=True)
lag_cols.append (target_col + '_' + str(lag+1))
X1, X2 = None, None
if fit_intv_effect:
X1 = X.drop([col for col in X.columns if col not in lag_cols], axis=1)
X2 = X.drop([col for col in X.columns if col in lag_cols], axis=1)
return X, X1, X2, y
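# A toy illustration of the lag-feature construction used in get_modeling_data
# above: column '<target>_1' holds shift(0) (the current value) and
# '<target>_2' holds shift(1) (the previous value).  The frame below is made up.
import pandas as pd
_lag_df_sketch = pd.DataFrame({'Change_MA': [1.0, 2.0, 3.0, 4.0]})
for _lag in range(2):
    _lag_df_sketch['Change_MA_%d' % (_lag + 1)] = _lag_df_sketch['Change_MA'].shift(_lag).fillna(0)
assert list(_lag_df_sketch['Change_MA_2']) == [0.0, 1.0, 2.0, 3.0]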
### Fit the data against an ensemble based regression model and get predictions on the same data
def fit_model (X, y):
model = XGBRFRegressor(n_estimators=1000, max_depth=7, random_state=42)
model.fit(X, y)
y_pred = model.predict(X)
#print (y)
err_mae = mean_absolute_error(y, y_pred)
err_rmse = np.sqrt(mean_squared_error(y, y_pred))
return model, y_pred, err_mae, err_rmse
### Measure predicted total cases using model's predictions
def get_predicted_total_cases (country_data, X, y):
_, y_pred, _, _ = fit_model(X, y)
total_cases_pred = country_data['ConfirmedCases'].shift() + (y_pred)
return y_pred, total_cases_pred
def plot01 (country_data, y, y_pred, total_cases_pred):
fig, ax = plt.subplots(1, 2, figsize=(25, 8))
ax[0].plot(y.values, label='actual', color='green')
ax[0].plot(y_pred, label='predicted')
ax[0].legend()
ax[0].set_title('Actual Vs Predicted: % Change')
ax[1].plot(country_data['ConfirmedCases'].values, label='actual', color='green')
ax[1].plot(total_cases_pred.values, label='predicted')
ax[1].legend()
    ax[1].set_title('Actual Vs Predicted: Cumulative Cases')
plt.show();
### Get the mean difference in case count between the following 2 predictions:
### 1. prediction with all features
### 2. mean of the prediction using previous case counts (based on n-lags) and the prediction with interventions
### having the target intervention (target_intv) turned off
### The resulting mean difference in case count is subsequently interpreted as an indicator of approx impact
### of the target intervention (target_intv)
def get_case_count_diff (country_data, X1, X2, y, total_cases_pred, target_intv, plot_graphs=False):
### Fit & predict with only TimeSeries MA Lag Feature(s)
y_pred1, total_cases_pred1 = get_predicted_total_cases (country_data, X1, y)
### Fit & predict with Interventions (with target_intv set to 0) but without TimeSeries MA Lag Feature(s)'
y_pred2, total_cases_pred2 = get_predicted_total_cases (country_data, X2, y)
y_pred3 = pd.DataFrame(zip(y_pred1, y_pred2)).mean(axis=1)
total_cases_pred3 = country_data['ConfirmedCases'].shift() + (y_pred3).tolist()
# if plot_graphs:
# plot01 (y, y_pred3, total_cases_pred3, country_data)
total_cases_diff = None
country_data['seq'] = [i for i in range(len(country_data))]
non_zero_indices = country_data.loc[country_data[target_intv] > 0, 'seq'].tolist()
### Assuming that the infection will be detectable only after the incubation_period
non_zero_indices = [min((v + incubation_period), len(country_data)-1) for v in non_zero_indices]
country_data.drop(['seq'], axis=1, inplace=True)
### Measure the mean difference between the two scenarios
if len(non_zero_indices) > 0:
total_cases_diff = np.mean(total_cases_pred3.iloc[non_zero_indices] - total_cases_pred.iloc[non_zero_indices])
return total_cases_diff
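# A toy illustration of the index arithmetic used in get_case_count_diff
# above: the days on which the intervention is active are shifted forward by
# the incubation period (clipped to the last day) before the two sets of
# predicted totals are compared.  All numbers below are made up.
import numpy as np
_n_days_sketch, _incubation_sketch = 10, 5
_active_days_sketch = [2, 3]
_shifted_sketch = [min(d + _incubation_sketch, _n_days_sketch - 1) for d in _active_days_sketch]
assert _shifted_sketch == [7, 8]
_pred_with_sketch = np.arange(10.0)           # hypothetical totals, intervention on
_pred_without_sketch = np.arange(10.0) + 3.0  # hypothetical totals, intervention off
assert np.mean(_pred_without_sketch[_shifted_sketch] - _pred_with_sketch[_shifted_sketch]) == 3.0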
### Measure the approx effectiveness of each intervention for a country
def measure_intervention_effects (country_data, nlags=num_lags, plot_graphs=True):
target_col = 'Change_MA'
country_data = add_features (country_data)
X, X1, X2, y = get_modeling_data (country_data, target_col, nlags=nlags, fit_intv_effect=True)
# Get prediction with all features (previous case counts and all interventions)
y_pred, total_cases_pred = get_predicted_total_cases (country_data, X2, y)
intervention_vs_cases_diff = []
interventions = X2.columns
country_data['seq1'] = [i for i in range(len(country_data))]
for intervention in interventions:
X2_Temp = X2.copy()
# Turning an intervention off
X2_Temp[intervention] = 0
seq_with_conf_cases = country_data.loc[country_data['ConfirmedCases']>0, 'seq1']
seq_with_intervention_active = country_data.loc[country_data[intervention]>0, 'seq1']
delay_in_intervention = 100 * (seq_with_intervention_active.min() - seq_with_conf_cases.min()) / len(seq_with_conf_cases)
# Get an approx case-count diff with an intervention turned off
total_cases_diff = get_case_count_diff (country_data, X1, X2_Temp, y, total_cases_pred, intervention, plot_graphs=plot_graphs)
intervention_vs_cases_diff.append([intervention, total_cases_diff])
country_data.drop(['seq1'], axis=1, inplace=True)
country_intv_scores = pd.DataFrame(intervention_vs_cases_diff, columns=['intervention', 'score'])
min_score, max_score = np.min(country_intv_scores['score']), np.max(country_intv_scores['score'])
country_intv_scores['score_norm'] = country_intv_scores['score'].apply(lambda x: (x - min_score) / (max_score - min_score))
#country_intv_scores['score_norm'] = country_intv_scores['score_norm'].max() - country_intv_scores['score_norm']
return country_intv_scores
### Plot actuals vs predictions
def plot (y, y_pred, country_data):
fig, ax = plt.subplots(1, 1, figsize=(17, 7))
ax.plot(y.values, label='actual', color='green')
ax.plot(y_pred, label='predicted')
ax.legend()
ax.set_title('Actual Vs Predicted: % Case Count Change')
plt.show();
### Score interventions (features) using the feature importances measured by the ensemble model used (XGBoost Regressor)
def score_country_interventions (country_data, nlags=1, fit_conf_cases=False, fit_intv_effect=False, plot_predictions=False):
if fit_intv_effect:
return measure_intervention_effects (country_data, nlags=nlags)
target_col = 'Change_MA' if fit_conf_cases else 'StringencyIndex'
if fit_conf_cases:
country_data = add_features (country_data)
X, _, _, y = get_modeling_data (country_data, target_col, nlags=nlags, fit_conf_cases=fit_conf_cases)
model, y_pred, mae, rmse = fit_model (X, y)
#print (f'MAE: {mae}, RMSE: {rmse} [Predicting Confirmed Case Rate? => {fit_conf_cases}]')
if fit_conf_cases and plot_predictions:
plot (y, y_pred, country_data)
#plot_importance(model)
#pyplot.show()
#feature_importances_dict = model.get_booster().get_score(importance_type='gain')
feature_importances_dict = model.get_booster().get_fscore()
feature_importances_list = list()
for feature in feature_importances_dict:
if 'Change_MA' in feature:
continue
feature_importances_list.append([feature, feature_importances_dict[feature]])
country_intv_scores = pd.DataFrame(feature_importances_list, columns=['intervention', 'score'])
min_score, max_score = np.min(country_intv_scores['score']), np.max(country_intv_scores['score'])
country_intv_scores['score_norm'] = country_intv_scores['score'].apply(lambda x: (x - min_score) / (max_score - min_score))
#display (country_measures_analysis.head(25))
return country_intv_scores
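# A toy illustration of the min-max normalisation that both scoring paths
# above apply to the raw intervention scores; the values below are made up.
import pandas as pd
_scores_sketch = pd.DataFrame({'intervention': ['A', 'B', 'C'], 'score': [2.0, 5.0, 8.0]})
_lo_sketch, _hi_sketch = _scores_sketch['score'].min(), _scores_sketch['score'].max()
_scores_sketch['score_norm'] = (_scores_sketch['score'] - _lo_sketch) / (_hi_sketch - _lo_sketch)
assert list(_scores_sketch['score_norm']) == [0.0, 0.5, 1.0]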
def score_interventions (selected_countries=None):
assert sum(intervention_scoring_methods.values()) == 1.0, 'Error: Sum of the scoring method weights should be 1.0'
fit_stringency_index = True if intervention_scoring_methods['fit_stringency_index'] > 0 else False
fit_conf_cases = True if intervention_scoring_methods['fit_conf_cases'] > 0 else False
fit_intv_effect = True if intervention_scoring_methods['fit_intv_effect'] > 0 else False
data_all, country_dict = get_all_data (data_src)
if selected_countries is None or len(selected_countries)==0:
selected_countries = data_all['CountryCode'].unique()
all_country_intv_scores = pd.DataFrame()
for country_code in selected_countries:
print ('* '*10 + f'Scoring interventions for country: {country_dict[country_code]} [{country_code}]')
country_data = get_country_data (data_all, country_code)
if len(country_data) < 50:
print ('Not enough data to score interventions . . .\n')
continue
country_scores_merged = None
if fit_stringency_index:
country_scores = score_country_interventions (country_data, nlags=num_lags, fit_conf_cases=False, plot_predictions=False)
country_scores_merged = country_scores
country_scores_merged.rename(columns={'score_norm': 'score_stringency_idx'}, inplace=True)
if fit_conf_cases:
country_scores = score_country_interventions (country_data, nlags=num_lags, fit_conf_cases=True, plot_predictions=False)
country_scores = country_scores[['intervention', 'score_norm']]
country_scores.rename(columns={'score_norm': 'score_conf_cases'}, inplace=True)
country_scores_merged = country_scores if country_scores_merged is None else pd.merge(country_scores_merged, country_scores, on=['intervention'], how='outer')
if fit_intv_effect:
country_scores = score_country_interventions (country_data, nlags=num_lags, fit_intv_effect=True, plot_predictions=False)
country_scores = country_scores[['intervention', 'score_norm']]
country_scores.rename(columns={'score_norm': 'score_intv_effects'}, inplace=True)
            country_scores_merged = country_scores if country_scores_merged is None else pd.merge(country_scores_merged, country_scores, on=['intervention'], how='outer')
level, already_processed, namespaceprefix_, name_='LatestDropOffDetail')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='LatestDropOffDetail', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='LatestDropOffDetail'):
pass
def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='LatestDropOffDetail', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.DayOfWeek is not None:
namespaceprefix_ = self.DayOfWeek_nsprefix_ + ':' if (UseCapturedNS_ and self.DayOfWeek_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sDayOfWeek>%s</%sDayOfWeek>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.DayOfWeek), input_name='DayOfWeek')), namespaceprefix_ , eol_))
if self.Time is not None:
namespaceprefix_ = self.Time_nsprefix_ + ':' if (UseCapturedNS_ and self.Time_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sTime>%s</%sTime>%s' % (namespaceprefix_ , self.gds_format_time(self.Time, input_name='Time'), namespaceprefix_ , eol_))
for Overlays_ in self.Overlays:
namespaceprefix_ = self.Overlays_nsprefix_ + ':' if (UseCapturedNS_ and self.Overlays_nsprefix_) else ''
Overlays_.export(outfile, level, namespaceprefix_, namespacedef_='', name_='Overlays', pretty_print=pretty_print)
def build(self, node, gds_collector_=None):
self.gds_collector_ = gds_collector_
if SaveElementTreeNode:
self.gds_elementtree_node_ = node
already_processed = set()
self.ns_prefix_ = node.prefix
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None):
if nodeName_ == 'DayOfWeek':
value_ = child_.text
value_ = self.gds_parse_string(value_, node, 'DayOfWeek')
value_ = self.gds_validate_string(value_, node, 'DayOfWeek')
self.DayOfWeek = value_
self.DayOfWeek_nsprefix_ = child_.prefix
# validate type DayOfWeekType
self.validate_DayOfWeekType(self.DayOfWeek)
elif nodeName_ == 'Time':
sval_ = child_.text
dval_ = self.gds_parse_time(sval_)
self.Time = dval_
self.Time_nsprefix_ = child_.prefix
elif nodeName_ == 'Overlays':
obj_ = LatestDropoffOverlayDetail.factory(parent_object_=self)
obj_.build(child_, gds_collector_=gds_collector_)
self.Overlays.append(obj_)
obj_.original_tagname_ = 'Overlays'
# end class LatestDropOffDetail
class LatestDropoffOverlayDetail(GeneratedsSuper):
"""Specifies the time and reason to overlay the last drop off time for a
carrier at a FedEx location."""
__hash__ = GeneratedsSuper.__hash__
subclass = None
superclass = None
def __init__(self, Type=None, Time=None, gds_collector_=None, **kwargs_):
self.gds_collector_ = gds_collector_
self.gds_elementtree_node_ = None
self.original_tagname_ = None
self.parent_object_ = kwargs_.get('parent_object_')
self.ns_prefix_ = None
self.Type = Type
self.validate_LatestDropOffOverlayType(self.Type)
self.Type_nsprefix_ = None
if isinstance(Time, BaseStrType_):
initvalue_ = datetime_.datetime.strptime(Time, '%H:%M:%S').time()
else:
initvalue_ = Time
self.Time = initvalue_
self.Time_nsprefix_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, LatestDropoffOverlayDetail)
if subclass is not None:
return subclass(*args_, **kwargs_)
if LatestDropoffOverlayDetail.subclass:
return LatestDropoffOverlayDetail.subclass(*args_, **kwargs_)
else:
return LatestDropoffOverlayDetail(*args_, **kwargs_)
factory = staticmethod(factory)
def get_ns_prefix_(self):
return self.ns_prefix_
def set_ns_prefix_(self, ns_prefix):
self.ns_prefix_ = ns_prefix
def get_Type(self):
return self.Type
def set_Type(self, Type):
self.Type = Type
def get_Time(self):
return self.Time
def set_Time(self, Time):
self.Time = Time
def validate_LatestDropOffOverlayType(self, value):
result = True
# Validate type LatestDropOffOverlayType, a restriction on xs:string.
if value is not None and Validate_simpletypes_ and self.gds_collector_ is not None:
if not isinstance(value, str):
lineno = self.gds_get_node_lineno_()
self.gds_collector_.add_message('Value "%(value)s"%(lineno)s is not of the correct base simple type (str)' % {"value": value, "lineno": lineno, })
return False
value = value
enumerations = ['US_WEST_COAST']
if value not in enumerations:
lineno = self.gds_get_node_lineno_()
self.gds_collector_.add_message('Value "%(value)s"%(lineno)s does not match xsd enumeration restriction on LatestDropOffOverlayType' % {"value" : encode_str_2_3(value), "lineno": lineno} )
result = False
return result
def hasContent_(self):
if (
self.Type is not None or
self.Time is not None
):
return True
else:
return False
def export(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='LatestDropoffOverlayDetail', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('LatestDropoffOverlayDetail')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None and name_ == 'LatestDropoffOverlayDetail':
name_ = self.original_tagname_
if UseCapturedNS_ and self.ns_prefix_:
namespaceprefix_ = self.ns_prefix_ + ':'
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='LatestDropoffOverlayDetail')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='LatestDropoffOverlayDetail', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='LatestDropoffOverlayDetail'):
pass
def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='LatestDropoffOverlayDetail', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.Type is not None:
namespaceprefix_ = self.Type_nsprefix_ + ':' if (UseCapturedNS_ and self.Type_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sType>%s</%sType>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.Type), input_name='Type')), namespaceprefix_ , eol_))
if self.Time is not None:
namespaceprefix_ = self.Time_nsprefix_ + ':' if (UseCapturedNS_ and self.Time_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sTime>%s</%sTime>%s' % (namespaceprefix_ , self.gds_format_time(self.Time, input_name='Time'), namespaceprefix_ , eol_))
def build(self, node, gds_collector_=None):
self.gds_collector_ = gds_collector_
if SaveElementTreeNode:
self.gds_elementtree_node_ = node
already_processed = set()
self.ns_prefix_ = node.prefix
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None):
if nodeName_ == 'Type':
value_ = child_.text
value_ = self.gds_parse_string(value_, node, 'Type')
value_ = self.gds_validate_string(value_, node, 'Type')
self.Type = value_
self.Type_nsprefix_ = child_.prefix
# validate type LatestDropOffOverlayType
self.validate_LatestDropOffOverlayType(self.Type)
elif nodeName_ == 'Time':
sval_ = child_.text
dval_ = self.gds_parse_time(sval_)
self.Time = dval_
self.Time_nsprefix_ = child_.prefix
# end class LatestDropoffOverlayDetail
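# A reduced sketch of the xsd:enumeration check that
# validate_LatestDropOffOverlayType performs above: a value is accepted when
# it is either absent or one of the allowed strings.
def _validate_enum_sketch(value, enumerations):
    return value is None or value in enumerations
assert _validate_enum_sketch('US_WEST_COAST', ['US_WEST_COAST'])
assert not _validate_enum_sketch('US_EAST_COAST', ['US_WEST_COAST'])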
class Localization(GeneratedsSuper):
"""Identifies the representation of human-readable text."""
__hash__ = GeneratedsSuper.__hash__
subclass = None
superclass = None
def __init__(self, LanguageCode=None, LocaleCode=None, gds_collector_=None, **kwargs_):
self.gds_collector_ = gds_collector_
self.gds_elementtree_node_ = None
self.original_tagname_ = None
self.parent_object_ = kwargs_.get('parent_object_')
self.ns_prefix_ = None
self.LanguageCode = LanguageCode
self.LanguageCode_nsprefix_ = None
self.LocaleCode = LocaleCode
self.LocaleCode_nsprefix_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, Localization)
if subclass is not None:
return subclass(*args_, **kwargs_)
if Localization.subclass:
return Localization.subclass(*args_, **kwargs_)
else:
return Localization(*args_, **kwargs_)
factory = staticmethod(factory)
def get_ns_prefix_(self):
return self.ns_prefix_
def set_ns_prefix_(self, ns_prefix):
self.ns_prefix_ = ns_prefix
def get_LanguageCode(self):
return self.LanguageCode
def set_LanguageCode(self, LanguageCode):
self.LanguageCode = LanguageCode
def get_LocaleCode(self):
return self.LocaleCode
def set_LocaleCode(self, LocaleCode):
self.LocaleCode = LocaleCode
def hasContent_(self):
if (
self.LanguageCode is not None or
self.LocaleCode is not None
):
return True
else:
return False
def export(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='Localization', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('Localization')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None and name_ == 'Localization':
name_ = self.original_tagname_
if UseCapturedNS_ and self.ns_prefix_:
namespaceprefix_ = self.ns_prefix_ + ':'
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespaceprefix_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespaceprefix_, name_='Localization')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespaceprefix_, namespacedef_, name_='Localization', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespaceprefix_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespaceprefix_='', name_='Localization'):
pass
def exportChildren(self, outfile, level, namespaceprefix_='', namespacedef_='', name_='Localization', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.LanguageCode is not None:
namespaceprefix_ = self.LanguageCode_nsprefix_ + ':' if (UseCapturedNS_ and self.LanguageCode_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sLanguageCode>%s</%sLanguageCode>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.LanguageCode), input_name='LanguageCode')), namespaceprefix_ , eol_))
if self.LocaleCode is not None:
namespaceprefix_ = self.LocaleCode_nsprefix_ + ':' if (UseCapturedNS_ and self.LocaleCode_nsprefix_) else ''
showIndent(outfile, level, pretty_print)
outfile.write('<%sLocaleCode>%s</%sLocaleCode>%s' % (namespaceprefix_ , self.gds_encode(self.gds_format_string(quote_xml(self.LocaleCode), input_name='LocaleCode')), namespaceprefix_ , eol_))
def build(self, node, gds_collector_=None):
self.gds_collector_ = gds_collector_
if SaveElementTreeNode:
self.gds_elementtree_node_ = node
already_processed = set()
self.ns_prefix_ = node.prefix
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_, gds_collector_=gds_collector_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False, gds_collector_=None):
if nodeName_ == 'LanguageCode':
value_ = child_.text
value_ = self.gds_parse_string(value_, node, 'LanguageCode')
value_ = self.gds_validate_string(value_, node, 'LanguageCode')
self.LanguageCode = value_
self.LanguageCode_nsprefix_ = child_.prefix
elif nodeName_ == 'LocaleCode':
value_ = child_.text
value_ = self.gds_parse_string(value_, node, 'LocaleCode')
value_ = self.gds_validate_string(value_, node, 'LocaleCode')
self.LocaleCode = value_
self.LocaleCode_nsprefix_ = child_.prefix
# end class Localization
class LocationCapabilityDetail(GeneratedsSuper):
__hash__ = GeneratedsSuper.__hash__
subclass = None
superclass = None
def __init__(self, CarrierCode=None, ServiceType=None, ServiceCategory=None, TransferOfPossessionType=None, DaysOfWeek=None, gds_collector_=None, **kwargs_):
self.gds_collector_ = gds_collector_
self.gds_elementtree_node_ = None
self.original_tagname_ = None
self.parent_object_ = kwargs_.get('parent_object_')
self.ns_prefix_ = None
self.CarrierCode = CarrierCode
self.validate_CarrierCodeType(self.CarrierCode)
self.CarrierCode_nsprefix_ = None
self.ServiceType = ServiceType
self.ServiceType_nsprefix_ = None
self.ServiceCategory = ServiceCategory
self.validate_ServiceCategoryType(self.ServiceCategory)
self.ServiceCategory_nsprefix_ = None
self.TransferOfPossessionType = TransferOfPossessionType
self.validate_LocationTransferOfPossessionType(self.TransferOfPossessionType)
self.TransferOfPossessionType_nsprefix_ = None
if DaysOfWeek is None:
self.DaysOfWeek = []
else:
self.DaysOfWeek = DaysOfWeek
self.DaysOfWeek_nsprefix_ = None
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, LocationCapabilityDetail)
if subclass is not None:
return subclass(*args_, **kwargs_)
if LocationCapabilityDetail.subclass:
return LocationCapabilityDetail.subclass(*args_, **kwargs_)
else:
return LocationCapabilityDetail(*args_, **kwargs_)
factory = staticmethod(factory)
    def get_ns_prefix_(self):
        return self.ns_prefix_
from matplotlib._api.deprecation import MatplotlibDeprecationWarning
import matplotlib.colors as mcolors
import matplotlib.widgets as widgets
import matplotlib.pyplot as plt
from matplotlib.testing.decorators import check_figures_equal, image_comparison
from matplotlib.testing.widgets import do_event, get_ax, mock_event
from numpy.testing import assert_allclose
import pytest
def check_rectangle(**kwargs):
ax = get_ax()
def onselect(epress, erelease):
ax._got_onselect = True
assert epress.xdata == 100
assert epress.ydata == 100
assert erelease.xdata == 199
assert erelease.ydata == 199
tool = widgets.RectangleSelector(ax, onselect, **kwargs)
do_event(tool, 'press', xdata=100, ydata=100, button=1)
do_event(tool, 'onmove', xdata=199, ydata=199, button=1)
# purposely drag outside of axis for release
do_event(tool, 'release', xdata=250, ydata=250, button=1)
if kwargs.get('drawtype', None) not in ['line', 'none']:
assert_allclose(tool.geometry,
[[100., 100, 199, 199, 100],
[100, 199, 199, 100, 100]],
err_msg=tool.geometry)
assert ax._got_onselect
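# A small sketch of the geometry asserted above: the selector reports the
# rectangle as a closed polygon, i.e. the x and y coordinates of its four
# corners with the starting corner repeated.
import numpy as np
_gx0, _gy0, _gx1, _gy1 = 100, 100, 199, 199
_expected_geometry = np.array([[_gx0, _gx0, _gx1, _gx1, _gx0],
                               [_gy0, _gy1, _gy1, _gy0, _gy0]])
assert _expected_geometry.shape == (2, 5)   # two rows (x, y), five closed-loop points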
def test_rectangle_selector():
check_rectangle()
with pytest.warns(
MatplotlibDeprecationWarning,
match="Support for drawtype='line' is deprecated"):
check_rectangle(drawtype='line', useblit=False)
check_rectangle(useblit=True, button=1)
with pytest.warns(
MatplotlibDeprecationWarning,
match="Support for drawtype='none' is deprecated"):
check_rectangle(drawtype='none', minspanx=10, minspany=10)
check_rectangle(minspanx=10, minspany=10, spancoords='pixels')
check_rectangle(props=dict(fill=True))
def _resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new,
use_key=None):
do_event(tool, 'press', xdata=xdata, ydata=ydata, button=1)
if use_key is not None:
do_event(tool, 'on_key_press', key=use_key)
do_event(tool, 'onmove', xdata=xdata_new, ydata=ydata_new, button=1)
if use_key is not None:
do_event(tool, 'on_key_release', key=use_key)
do_event(tool, 'release', xdata=xdata_new, ydata=ydata_new, button=1)
return tool
@pytest.mark.parametrize('drag_from_anywhere, new_center',
[[True, (60, 75)],
[False, (30, 20)]])
def test_rectangle_drag(drag_from_anywhere, new_center):
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True,
drag_from_anywhere=drag_from_anywhere)
# Create rectangle
do_event(tool, 'press', xdata=0, ydata=10, button=1)
do_event(tool, 'onmove', xdata=100, ydata=120, button=1)
do_event(tool, 'release', xdata=100, ydata=120, button=1)
assert tool.center == (50, 65)
# Drag inside rectangle, but away from centre handle
#
# If drag_from_anywhere == True, this will move the rectangle by (10, 10),
# giving it a new center of (60, 75)
#
# If drag_from_anywhere == False, this will create a new rectangle with
# center (30, 20)
do_event(tool, 'press', xdata=25, ydata=15, button=1)
do_event(tool, 'onmove', xdata=35, ydata=25, button=1)
do_event(tool, 'release', xdata=35, ydata=25, button=1)
assert tool.center == new_center
# Check that in both cases, dragging outside the rectangle draws a new
# rectangle
do_event(tool, 'press', xdata=175, ydata=185, button=1)
do_event(tool, 'onmove', xdata=185, ydata=195, button=1)
do_event(tool, 'release', xdata=185, ydata=195, button=1)
assert tool.center == (180, 190)
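# A small sketch of the centre arithmetic asserted above: with
# drag_from_anywhere=True the drag from (25, 15) to (35, 25) translates the
# existing rectangle by (10, 10), while with drag_from_anywhere=False a new
# rectangle spanning that drag is created instead.
_old_center = (50, 65)
_drag_vector = (35 - 25, 25 - 15)
assert (_old_center[0] + _drag_vector[0], _old_center[1] + _drag_vector[1]) == (60, 75)
assert ((25 + 35) / 2, (15 + 25) / 2) == (30, 20)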
def test_rectangle_selector_set_props_handle_props():
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True,
props=dict(facecolor='b', alpha=0.2),
handle_props=dict(alpha=0.5))
# Create rectangle
do_event(tool, 'press', xdata=0, ydata=10, button=1)
do_event(tool, 'onmove', xdata=100, ydata=120, button=1)
do_event(tool, 'release', xdata=100, ydata=120, button=1)
artist = tool._selection_artist
assert artist.get_facecolor() == mcolors.to_rgba('b', alpha=0.2)
tool.set_props(facecolor='r', alpha=0.3)
assert artist.get_facecolor() == mcolors.to_rgba('r', alpha=0.3)
for artist in tool._handles_artists:
assert artist.get_markeredgecolor() == 'black'
assert artist.get_alpha() == 0.5
tool.set_handle_props(markeredgecolor='r', alpha=0.3)
for artist in tool._handles_artists:
assert artist.get_markeredgecolor() == 'r'
assert artist.get_alpha() == 0.3
def test_rectangle_resize():
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True)
# Create rectangle
_resize_rectangle(tool, 0, 10, 100, 120)
assert tool.extents == (0.0, 100.0, 10.0, 120.0)
# resize NE handle
extents = tool.extents
xdata, ydata = extents[1], extents[3]
xdata_new, ydata_new = xdata + 10, ydata + 5
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (extents[0], xdata_new, extents[2], ydata_new)
# resize E handle
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdata_new, ydata_new = xdata + 10, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (extents[0], xdata_new, extents[2], extents[3])
# resize W handle
extents = tool.extents
xdata, ydata = extents[0], extents[2] + (extents[3] - extents[2]) / 2
xdata_new, ydata_new = xdata + 15, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (xdata_new, extents[1], extents[2], extents[3])
# resize SW handle
extents = tool.extents
xdata, ydata = extents[0], extents[2]
xdata_new, ydata_new = xdata + 20, ydata + 25
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (xdata_new, extents[1], ydata_new, extents[3])
@pytest.mark.parametrize('use_default_state', [True, False])
def test_rectangle_resize_center(use_default_state):
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True)
# Create rectangle
_resize_rectangle(tool, 70, 65, 125, 130)
assert tool.extents == (70.0, 125.0, 65.0, 130.0)
if use_default_state:
tool._default_state.add('center')
use_key = None
else:
use_key = 'control'
# resize NE handle
extents = tool.extents
xdata, ydata = extents[1], extents[3]
xdiff, ydiff = 10, 5
xdata_new, ydata_new = xdata + xdiff, ydata + ydiff
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0] - xdiff, xdata_new,
extents[2] - ydiff, ydata_new)
# resize E handle
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdiff = 10
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0] - xdiff, xdata_new,
extents[2], extents[3])
# resize E handle negative diff
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdiff = -20
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0] - xdiff, xdata_new,
extents[2], extents[3])
# resize W handle
extents = tool.extents
xdata, ydata = extents[0], extents[2] + (extents[3] - extents[2]) / 2
xdiff = 15
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (xdata_new, extents[1] - xdiff,
extents[2], extents[3])
# resize W handle negative diff
extents = tool.extents
xdata, ydata = extents[0], extents[2] + (extents[3] - extents[2]) / 2
xdiff = -25
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (xdata_new, extents[1] - xdiff,
extents[2], extents[3])
# resize SW handle
extents = tool.extents
xdata, ydata = extents[0], extents[2]
xdiff, ydiff = 20, 25
xdata_new, ydata_new = xdata + xdiff, ydata + ydiff
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (xdata_new, extents[1] - xdiff,
ydata_new, extents[3] - ydiff)
@pytest.mark.parametrize('use_default_state', [True, False])
def test_rectangle_resize_square(use_default_state):
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True)
# Create rectangle
_resize_rectangle(tool, 70, 65, 120, 115)
assert tool.extents == (70.0, 120.0, 65.0, 115.0)
if use_default_state:
tool._default_state.add('square')
use_key = None
else:
use_key = 'shift'
# resize NE handle
extents = tool.extents
xdata, ydata = extents[1], extents[3]
xdiff, ydiff = 10, 5
xdata_new, ydata_new = xdata + xdiff, ydata + ydiff
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0], xdata_new,
extents[2], extents[3] + xdiff)
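# With the 'square' state active (default state or the 'shift' key), the
# rectangle keeps equal width and height: the x-extent change (xdiff) is
# mirrored onto the y-extent, as asserted above.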
# resize E handle
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdiff = 10
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0], xdata_new,
extents[2], extents[3] + xdiff)
# resize E handle negative diff
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdiff = -20
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0], xdata_new,
extents[2], extents[3] + xdiff)
# resize W handle
extents = tool.extents
xdata, ydata = extents[0], extents[2] + (extents[3] - extents[2]) / 2
xdiff = 15
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (xdata_new, extents[1],
extents[2], extents[3] - xdiff)
# resize W handle negative diff
extents = tool.extents
xdata, ydata = extents[0], extents[2] + (extents[3] - extents[2]) / 2
xdiff = -25
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (xdata_new, extents[1],
extents[2], extents[3] - xdiff)
# resize SW handle
extents = tool.extents
xdata, ydata = extents[0], extents[2]
xdiff, ydiff = 20, 25
xdata_new, ydata_new = xdata + xdiff, ydata + ydiff
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new, use_key)
assert tool.extents == (extents[0] + ydiff, extents[1],
ydata_new, extents[3])
def test_rectangle_resize_square_center():
ax = get_ax()
def onselect(epress, erelease):
pass
tool = widgets.RectangleSelector(ax, onselect, interactive=True)
# Create rectangle
_resize_rectangle(tool, 70, 65, 120, 115)
tool._default_state.add('square')
tool._default_state.add('center')
assert tool.extents == (70.0, 120.0, 65.0, 115.0)
# resize NE handle
extents = tool.extents
xdata, ydata = extents[1], extents[3]
xdiff, ydiff = 10, 5
xdata_new, ydata_new = xdata + xdiff, ydata + ydiff
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (extents[0] - xdiff, xdata_new,
extents[2] - xdiff, extents[3] + xdiff)
# resize E handle
extents = tool.extents
xdata, ydata = extents[1], extents[2] + (extents[3] - extents[2]) / 2
xdiff = 10
xdata_new, ydata_new = xdata + xdiff, ydata
_resize_rectangle(tool, xdata, ydata, xdata_new, ydata_new)
assert tool.extents == (extents[0] - xdiff, xdata_new,
extents[2] - xdiff, extents[3] + xdiff)
| |
"left", on = 'Scribe')
self.df_biplot.drop_duplicates(inplace = True)
self.df_biplot.reset_index(drop = True, inplace = True)
self.df_biplot[['Classification','Tool & Chamber']] = self.df_biplot[['Classification','Tool & Chamber']].replace('', np.nan)
self.df_biplot.dropna(subset=['Classification','Tool & Chamber'], inplace=True)
elif self.ToolGroupSelection.isChecked():
self.StepTool['Tool & Chamber'] = self.StepTool['Tool'] + " " + self.StepTool['Chamber']
self.StepTool.drop(['Tool','Chamber'], axis=1, inplace = True)
self.df_biplot = pd.merge(self.StepTool, self.df_biplot, how = "left", on = 'Scribe')
self.df_biplot.drop_duplicates(inplace = True)
self.df_biplot.reset_index(drop = True, inplace = True)
self.df_biplot[['Classification','Tool & Chamber']] = self.df_biplot[['Classification','Tool & Chamber']].replace('', np.nan)
self.df_biplot.dropna(subset=['Classification','Tool & Chamber'], inplace=True)
else:
pass
self.biplot_data = self.df_biplot
if self.analysis_count == 1:
print("Error! Perform analysis before switching group")
return
if self.analysis_count == 0:
self.outputfoldercount = os.path.join(self.outputfolder, f"Analysis {self.analysis_count}")
os.mkdir(self.outputfoldercount)
self.outputfoldercountslash = self.outputfoldercount.replace(os.sep, '/')
self.Biplot_list = {}
self.Biplot_loc = []
if len(self.loadings.columns) >= 10:
NumberOfPC = 11
else:
NumberOfPC = len(self.loadings.columns) + 1  # +1 so the last PC is included in the ranges below, mirroring the 10-PC case
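# Note: the pairplot call below hard-codes the columns 'PC1'..'PC10', so it
# implicitly assumes at least ten principal components exist in df_biplot.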
pairplot = sns.pairplot(self.df_biplot[['PC1','PC2','PC3','PC4','PC5','PC6','PC7','PC8','PC9','PC10','Classification']], hue='Classification', palette = {"Bad": "red","Good": "green","Mild": "orange"}, corner=True)
pairplot.fig.suptitle("Biplot Overview")
pairplot.savefig(f"{self.outputfoldercount}/Pairplot Overview.jpg",dpi = 300)
pixMap = QPixmap(f"{self.outputfoldercount}/Pairplot Overview.jpg")
pixMap = pixMap.scaled(3200,3000)
self.imageLabel.setPixmap(pixMap)
n = 0
for y in range(2,NumberOfPC):
for x in range(1,NumberOfPC - 1):
if x < y and n < 45:  # x < y already implies x != y
self.Biplot_button_group.button(-n-2).setStyleSheet("")
l = len(self.loadings)
fig = go.Figure()
if self.ToolGroupSelection.isChecked():
fig = px.scatter(self.df_biplot, x=f'PC{x}', y=f'PC{y}', symbol='Tool & Chamber', color='Classification', color_discrete_map={"Bad": "red","Good": "green","Mild": "orange"},\
hover_data={'Product':False,'Lot_No':True,'Wafer_Alias':True,'Scribe':False,'Measure':True,'Step':True,'Tool & Chamber':True,f'PC{x}':False,f'PC{y}':False})
else:
fig = px.scatter(self.df_biplot, x=f'PC{x}', y=f'PC{y}', symbol='Lot_No', color='Classification', color_discrete_map={"Bad": "red","Good": "green","Mild": "orange"},\
hover_data={'Product':False,'Lot_No':True,'Wafer_Alias':True,'Scribe':False,'Measure':True,f'PC{x}':False,f'PC{y}':False})
fig.update_traces(marker_size=10)
fig.update_xaxes(range=[-1, 1])
fig.update_yaxes(range=[-1, 1])
fig.update_layout(title=f"Biplot PC{x} vs PC{y}", title_x=0.5)
magnitude_principalsx = sorted(list(self.loadings[f'PC{x}'][i] for i in range(l)), key = abs, reverse = True)
magnitude_principalsy = sorted(list(self.loadings[f'PC{y}'][i] for i in range(l)), key = abs, reverse = True)
for i in range(l):
for count in range(5):
if self.loadings[f'PC{x}'][i] == magnitude_principalsx[count]:
fig.add_annotation(x=self.loadings[f'PC{x}'][i]*1.5, y=self.loadings[f'PC{y}'][i], ax=0, ay=0, text=self.loadings.index[i], font=dict(color='magenta',size=12))
fig.add_annotation(x=self.loadings[f'PC{x}'][i], y=self.loadings[f'PC{y}'][i], ax=0, ay=0, xref = 'x', yref = 'y', axref = 'x', ayref = 'y', showarrow=True, arrowsize=1, arrowhead=1, arrowwidth = 2, arrowcolor='mediumblue')
if self.loadings[f'PC{y}'][i] == magnitude_principalsy[count]:
fig.add_annotation(x=self.loadings[f'PC{x}'][i], y=self.loadings[f'PC{y}'][i]*1.2, ax=0, ay=0, text=self.loadings.index[i], font=dict(color='magenta',size=12))
fig.add_annotation(x=self.loadings[f'PC{x}'][i], y=self.loadings[f'PC{y}'][i], ax=0, ay=0, xref = 'x', yref = 'y', axref = 'x', ayref = 'y', showarrow=True, arrowsize=1, arrowhead=1, arrowwidth = 2, arrowcolor='mediumblue')
self.Biplot_list[f'Biplot{x}{y}'] = fig
self.Biplot_loc.append(f'Biplot{x}{y}')
n += 1
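# Each plotly figure is cached in self.Biplot_list under the key 'Biplot{x}{y}'
# and the key order is recorded in self.Biplot_loc, presumably so the
# corresponding biplot button can recall the figure later without re-plotting.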
self.analysis_count = self.analysis_count + 1
def DataQueryUi(self):
# UI function that displays the Data Query Selection Menu
# Called from SetupUi()
self.DataQuerySelectionMenu_layout = QVBoxLayout()
self.DataQuerySelectionMenu.setWindowTitle('Selection Menu')
self.DataQuerySelectionMenu.closeEvent = self.closeEvent
self.DataQuerySelectionMenu.resize(960, 960)
self.DataQuerymenuButton = QPushButton("Back to main menu")
self.DataQueryTitle = QLabel()
self.DataQueryTitle.setStyleSheet("font: 14pt Century Gothic; text-decoration: underline")
self.DataQueryTitle.setText("Data Query")
self.DataQueryNode_layout = QGridLayout()
self.DataQueryNode_layout.setSpacing(8)
self.DataQueryNodelabel = QLabel()
self.DataQueryNodelabel.setText("Tech:")
self.DataQueryNode_Box = QComboBox()
self.DataQueryNode_Box.setEditable(True)
self.DataQueryNode_Box.activated.connect(self.DataQuery_QueryProcessTechCategory)
self.DataQueryNode_layout.addWidget(self.DataQueryNodelabel,0,0)
self.DataQueryNode_layout.addWidget(self.DataQueryNode_Box,0,1,1,1)
self.DataQueryNode_layout.addWidget(QLabel(""),0,3,1,6)
self.DataQuery_QueryTech()
self.DataQueryProcess_tech_category_layout = QGridLayout()
self.DataQueryProcess_tech_category_layout.setSpacing(8)
self.DataQueryProcess_tech_categorylabel = QLabel()
self.DataQueryProcess_tech_categorylabel.setText("Process:")
self.DataQueryProcess_tech_category_Box = QComboBox()
self.DataQueryProcess_tech_category_Box.setEditable(True)
self.DataQueryProcess_tech_category_Box.activated.connect(self.DataQuery_QueryProduct)
self.DataQueryProcess_tech_category_layout.addWidget(self.DataQueryProcess_tech_categorylabel,0,0)
self.DataQueryProcess_tech_category_layout.addWidget(self.DataQueryProcess_tech_category_Box,0,1,1,1)
self.DataQueryProcess_tech_category_layout.addWidget(QLabel(""),0,3,1,6)
self.DataQueryProduct_layout = QGridLayout()
self.DataQueryProduct_layout.setSpacing(8)
self.DataQueryProduct_Nolabel = QLabel()
self.DataQueryProduct_Nolabel.setText("Product:")
self.DataQueryProduct_No = QComboBox()
self.DataQueryProduct_No.setEditable(True)
self.DataQueryProduct_No.activated.connect(self.DataQuery_QueryTest)
self.DataQueryProduct_layout.addWidget(self.DataQueryProduct_Nolabel,0,0)
self.DataQueryProduct_layout.addWidget(self.DataQueryProduct_No,0,1,1,2)
self.DataQueryProduct_layout.addWidget(QLabel(""),0,3,1,5)
self.DataQueryTestSelection = QGridLayout()
self.DataQueryTestSelection.setSpacing(8)
self.DataQueryTestTypelabel = QLabel()
self.DataQueryTestTypelabel.setText("Test Type:")
self.DataQueryTestTypeBox = QComboBox()
self.DataQueryTestTypeBox.setEditable(True)
self.DataQueryTestTypeBox.activated.connect(self.DataQuery_QueryLot)
self.DataQueryTestSelection.addWidget(self.DataQueryTestTypelabel,0,0)
self.DataQueryTestSelection.addWidget(self.DataQueryTestTypeBox,0,1,1,2)
self.DataQueryTestSelection.addWidget(QLabel(""),0,3,1,5)
self.DataQueryTimeFrame_container = QWidget()
self.DataQueryTimeFrame_container_layout = QGridLayout()
self.DataQueryTimeFrame_container_layout.setSpacing(10)
self.DataQueryTimeFrame_container_layout.setContentsMargins(0,0,0,0)
self.DataQueryDatelabel = QLabel()
self.DataQueryDatelabel.setText("Date Range: From")
self.DataQueryDateTolabel = QLabel()
self.DataQueryDateTolabel.setText("To")
self.DataQueryDateTolabel.setAlignment(Qt.AlignCenter)
self.DataQueryEarlier_Date = QDateEdit()
self.DataQueryLater_Date = QDateEdit()
self.DataQueryEarlier_Date.setDisplayFormat("yyyy-MM-dd")
self.DataQueryLater_Date.setDisplayFormat("yyyy-MM-dd")
self.DataQueryEarlier_Date.setDate(QDate(2021,1,1))
self.DataQueryLater_Date.setDate(QDate.currentDate())
self.DataQueryEarlier_Date.setCalendarPopup(True)
self.DataQueryLater_Date.setCalendarPopup(True)
self.DataQueryTimeFrame_searchbutton = QPushButton('Search')
self.DataQueryTimeFrame_searchbutton.clicked.connect(self.DataQuerySearchUp)
self.DataQueryTimeFrame_container_layout.addWidget(self.DataQueryDatelabel,0,0,1,1)
self.DataQueryTimeFrame_container_layout.addWidget(self.DataQueryEarlier_Date,0,1,1,3)
self.DataQueryTimeFrame_container_layout.addWidget(self.DataQueryDateTolabel,0,4,1,1)
self.DataQueryTimeFrame_container_layout.addWidget(self.DataQueryLater_Date,0,5,1,3)
self.DataQueryTimeFrame_container_layout.addWidget(self.DataQueryTimeFrame_searchbutton,0,8,1,2)
self.DataQueryTimeFrame_container.setLayout(self.DataQueryTimeFrame_container_layout)
self.DataQueryDataSelection = QGridLayout()
self.DataQueryDataSelection.setSpacing(8)
self.DataQueryET_container = QWidget()
self.DataQueryET_layout = QGridLayout()
self.DataQueryET_layout.setContentsMargins(0,0,0,0)
self.DataQueryET_layout.setSpacing(2)
self.DataQueryET_container_frame = QFrame()
self.DataQueryET_container_frame.setFrameStyle(QFrame.Panel | QFrame.Raised)
self.DataQueryET_container_frame.setStyleSheet("QFrame {border-width: 1;"
"border-style: solid}")
self.DataQueryDataTypeLabel = QLabel()
self.DataQueryDataTypeLabel.setText("Data Type:")
self.DataQueryETSelection = QCheckBox("ET")
self.DataQueryETSelection.clicked.connect(self.DataQueryShowET)
self.DataQueryETStatSelection = QComboBox()
self.DataQueryETStatSelection.setEditable(True)
self.DataQueryETStatSelection.addItems(['Mean','Median','Max','Min','Site'])
self.DataQueryETFilterSelection = QCheckBox("Filter By:")
self.DataQueryETFilterSelected = QLineEdit()
self.DataQueryETFilterSelected.returnPressed.connect(self.DataQuery_QueryET)
self.DataQueryETFilterSelected.setFixedWidth(120)
self.DataQueryET_container_frame.hide()
self.DataQueryETStatSelection.hide()
self.DataQueryETFilterSelection.hide()
self.DataQueryETFilterSelected.hide()
self.DataQueryET_layout.addWidget(self.DataQueryET_container_frame,0,0,2,2)
self.DataQueryET_layout.addWidget(self.DataQueryETSelection,0,0,1,1)
self.DataQueryET_layout.addWidget(self.DataQueryETStatSelection,0,1,1,1)
self.DataQueryET_layout.addWidget(self.DataQueryETFilterSelection,1,0,1,1)
self.DataQueryET_layout.addWidget(self.DataQueryETFilterSelected,1,1,1,1)
self.DataQueryET_container.setLayout(self.DataQueryET_layout)
self.DataQueryBin_container = QWidget()
self.DataQueryBin_layout = QGridLayout()
self.DataQueryBin_layout.setContentsMargins(0,0,0,0)
self.DataQueryBin_layout.setSpacing(2)
self.DataQueryBin_container_frame = QFrame()
self.DataQueryBin_container_frame.setFrameStyle(QFrame.Panel | QFrame.Raised)
self.DataQueryBin_container_frame.setStyleSheet("QFrame {border-width: 1;"
"border-style: solid}")
self.DataQueryBinSelection = QCheckBox("Bin")
self.DataQueryBinSelection.clicked.connect(self.DataQueryShowBin)
self.DataQueryBinStatSelection = QComboBox()
self.DataQueryBinStatSelection.setEditable(True)
self.DataQueryBinStatSelection.addItems(['Percentage','Count'])
self.DataQueryBinFilterSelection = QCheckBox("Filter by Bin No:")
self.DataQueryBinFilterSelected = QLineEdit()
self.DataQueryBinFilterSelected.returnPressed.connect(self.DataQuery_FilterBin)
self.DataQueryBinFilterSelected.setFixedWidth(120)
self.DataQueryBin_container_frame.hide()
self.DataQueryBinStatSelection.hide()
self.DataQueryBinFilterSelection.hide()
self.DataQueryBinFilterSelected.hide()
self.DataQueryBin_layout.addWidget(self.DataQueryBin_container_frame,0,0,2,2)
self.DataQueryBin_layout.addWidget(self.DataQueryBinSelection,0,0,1,1)
self.DataQueryBin_layout.addWidget(self.DataQueryBinStatSelection,0,1,1,1)
self.DataQueryBin_layout.addWidget(self.DataQueryBinFilterSelection,1,0,1,1)
self.DataQueryBin_layout.addWidget(self.DataQueryBinFilterSelected,1,1,1,1)
self.DataQueryBin_container.setLayout(self.DataQueryBin_layout)
self.DataQueryInline_container = QWidget()
self.DataQueryInline_layout = QGridLayout()
self.DataQueryInline_layout.setContentsMargins(0,0,0,0)
self.DataQueryInline_layout.setSpacing(5)
self.DataQueryInline_container_frame = QFrame()
self.DataQueryInline_container_frame.setFrameStyle(QFrame.Panel | QFrame.Raised)
self.DataQueryInline_container_frame.setStyleSheet("QFrame {border-width: 1;"
"border-style: solid}")
self.DataQueryInlineSelection = QCheckBox("Inline")
self.DataQueryInlineSelection.clicked.connect(self.DataQueryShowInline)
self.DataQueryInlineStatSelection = QComboBox()
self.DataQueryInlineStatSelection.setEditable(True)
self.DataQueryInlineFilterSelection = QLabel("Filter By:")
self.DataQueryInlineFilterDescriptionSelection = QComboBox()
self.DataQueryInlineFilterDescriptionSelection.setEditable(True)
self.DataQueryInlineFilterDescriptionSelection.addItems(['Step','Step Name'])
self.DataQueryInlineFilterSelected = QLineEdit()
self.DataQueryInlineFilterSelected.returnPressed.connect(self.DataQuery_FilterInline)
self.DataQueryInlineFilterSelected.setFixedWidth(220)
self.DataQueryInlineStatSelection.addItems(['Mean','Median','Max','Min','Site'])
self.DataQueryInline_container_frame.hide()
self.DataQueryInlineFilterSelection.hide()
self.DataQueryInlineFilterDescriptionSelection.hide()
self.DataQueryInlineFilterSelected.hide()
self.DataQueryInlineStatSelection.hide()
self.DataQueryInline_layout.addWidget(self.DataQueryInline_container_frame,0,0,2,3)
self.DataQueryInline_layout.addWidget(self.DataQueryInlineSelection,0,0,1,1)
self.DataQueryInline_layout.addWidget(self.DataQueryInlineStatSelection,0,1,1,1)
self.DataQueryInline_layout.addWidget(self.DataQueryInlineFilterSelection,1,0,1,1)
self.DataQueryInline_layout.addWidget(self.DataQueryInlineFilterDescriptionSelection,1,1,1,1)
self.DataQueryInline_layout.addWidget(self.DataQueryInlineFilterSelected,1,2,1,1)
self.DataQueryInline_layout.addWidget(QLabel(""),0,4,2,1)
self.DataQueryInline_container.setLayout(self.DataQueryInline_layout)
self.DataQueryWip_container = QWidget()
self.DataQueryWip_layout = QGridLayout()
self.DataQueryWip_layout.setContentsMargins(0,0,0,0)
self.DataQueryWip_layout.setSpacing(7)
self.DataQueryWip_container_frame = QFrame()
self.DataQueryWip_container_frame.setFrameStyle(QFrame.Panel | QFrame.Raised)
self.DataQueryWip_container_frame.setStyleSheet("QFrame {border-width: 1;"
"border-style: solid}")
self.DataQueryWipSelection = QCheckBox("Wip")
self.DataQueryWipSelection.clicked.connect(self.DataQueryShowWip)
self.DataQueryWipFilterSelection = QLabel("Filter By:")
self.DataQueryWipFilterDescriptionSelection = QComboBox()
self.DataQueryWipFilterDescriptionSelection.setEditable(True)
self.DataQueryWipFilterDescriptionSelection.addItems(['Step','Step Name'])
self.DataQueryWipFilterSelected = QLineEdit()
self.DataQueryWipFilterSelected.returnPressed.connect(self.DataQuery_FilterWip)
self.DataQueryWipFilterSelected.setFixedWidth(220)
self.DataQueryWip_container_frame.hide()
self.DataQueryWipFilterSelection.hide()
self.DataQueryWipFilterDescriptionSelection.hide()
self.DataQueryWipFilterSelected.hide()
self.DataQueryWip_layout.addWidget(self.DataQueryWip_container_frame,0,0,2,3)
self.DataQueryWip_layout.addWidget(self.DataQueryWipSelection,0,0,1,1)
self.DataQueryWip_layout.addWidget(self.DataQueryWipFilterSelection,1,0,1,1)
self.DataQueryWip_layout.addWidget(self.DataQueryWipFilterDescriptionSelection,1,1,1,1)
self.DataQueryWip_layout.addWidget(self.DataQueryWipFilterSelected,1,2,1,1)
self.DataQueryWip_layout.addWidget(QLabel(""),0,4,2,1)
self.DataQueryWip_container.setLayout(self.DataQueryWip_layout)
self.DataQueryDataSelection.addWidget(self.DataQueryDataTypeLabel,0,0)
self.DataQueryDataSelection.addWidget(self.DataQueryBin_container,0,1,2,2)
self.DataQueryDataSelection.addWidget(self.DataQueryET_container,2,1,2,2)
self.DataQueryDataSelection.addWidget(self.DataQueryInline_container,0,3,2,5)
self.DataQueryDataSelection.addWidget(self.DataQueryWip_container,2,3,2,5)
self.DataQuerySelectionList_layout = QGridLayout()
self.DataQuerySelectionList_layout.setAlignment(Qt.AlignTop)
self.DataQuerySelectionList_layout.setSpacing(19)
self.DataQueryLotListInput = QTableWidget()
self.DataQueryLotListInput.setFont(QFont('Arial', 8))
self.DataQueryLotListInput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryLotListInput.setSortingEnabled(True)
self.DataQueryBinListInput = QTableWidget()
self.DataQueryBinListInput.setFont(QFont('Arial', 8))
self.DataQueryBinListInput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryBinListInput.setSortingEnabled(True)
self.DataQueryETListInput = QTableWidget()
self.DataQueryETListInput.setFont(QFont('Arial', 8))
self.DataQueryETListInput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryETListInput.setSortingEnabled(True)
self.DataQueryInlineListInput = QTableWidget()
self.DataQueryInlineListInput.setFont(QFont('Arial', 8))
self.DataQueryInlineListInput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryInlineListInput.setSortingEnabled(True)
self.DataQueryWipListInput = QTableWidget()
self.DataQueryWipListInput.setFont(QFont('Arial', 8))
self.DataQueryWipListInput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryWipListInput.setSortingEnabled(True)
self.DataQuerySelectionInputListTab = QTabWidget()
self.DataQuerySelectionInputListTab.addTab(self.DataQueryLotListInput, "Lot List")
self.DataQuerySelectionInputListTab.addTab(self.DataQueryBinListInput,"Bin List")
self.DataQuerySelectionInputListTab.addTab(self.DataQueryETListInput,"ET List")
self.DataQuerySelectionInputListTab.addTab(self.DataQueryInlineListInput,"Inline List")
self.DataQuerySelectionInputListTab.addTab(self.DataQueryWipListInput,"Wip List")
self.DataQuerySelectionInputListTab.tabBarClicked.connect(self.DataQuerySwitchSelectionList)
self.DataQueryLotListOutput = QTableWidget()
self.DataQueryLotListOutput.setFont(QFont('Arial', 8))
self.DataQueryLotListOutput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryLotListOutput.setSortingEnabled(True)
self.DataQueryBinListOutput = QTableWidget()
self.DataQueryBinListOutput.setFont(QFont('Arial', 8))
self.DataQueryBinListOutput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryBinListOutput.setSortingEnabled(True)
self.DataQueryETListOutput = QTableWidget()
self.DataQueryETListOutput.setFont(QFont('Arial', 8))
self.DataQueryETListOutput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryETListOutput.setSortingEnabled(True)
self.DataQueryInlineListOutput = QTableWidget()
self.DataQueryInlineListOutput.setFont(QFont('Arial', 8))
self.DataQueryInlineListOutput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryInlineListOutput.setSortingEnabled(True)
self.DataQueryWipListOutput = QTableWidget()
self.DataQueryWipListOutput.setFont(QFont('Arial', 8))
self.DataQueryWipListOutput.setSelectionBehavior(QTableWidget.SelectRows)
self.DataQueryWipListOutput.setSortingEnabled(True)
self.DataQuerySelectionOutputListTab = QTabWidget()
self.DataQuerySelectionOutputListTab.addTab(self.DataQueryLotListOutput, "Lot List")
self.DataQuerySelectionOutputListTab.addTab(self.DataQueryBinListOutput,"Bin List")
self.DataQuerySelectionOutputListTab.addTab(self.DataQueryETListOutput,"ET List")
self.DataQuerySelectionOutputListTab.addTab(self.DataQueryInlineListOutput,"Inline List")
self.DataQuerySelectionOutputListTab.addTab(self.DataQueryWipListOutput,"Wip List")
self.DataQuerySelectionOutputListTab.tabBarClicked.connect(self.DataQuerySwitchSelectionList)
self.DataQueryLotListOutputTrack = 0
self.DataQueryBinListOutputTrack = 0
self.DataQueryForward_Button = QToolButton()
self.DataQueryForward_Button.setArrowType(Qt.RightArrow)
self.DataQueryForward_Button.clicked.connect(self.DataQueryForward)
self.DataQueryFull_Forward_Button = QToolButton()
self.DataQueryFull_Forward_Button.setIcon(self.DataQueryFull_Forward_Button.style().standardIcon(QStyle.SP_MediaSeekForward))
self.DataQueryFull_Forward_Button.clicked.connect(self.DataQueryFullForward)
self.DataQueryBackward_Button = QToolButton()
self.DataQueryBackward_Button.setArrowType(Qt.LeftArrow)
self.DataQueryBackward_Button.clicked.connect(self.DataQueryBackward)
self.DataQueryFull_Backward_Button = QToolButton()
self.DataQueryFull_Backward_Button.setIcon(self.DataQueryFull_Backward_Button.style().standardIcon(QStyle.SP_MediaSeekBackward))
self.DataQueryFull_Backward_Button.clicked.connect(self.DataQueryFullBackward)
self.DataQuerySelectionList_layout.addWidget(self.DataQuerySelectionInputListTab,0,0,17,9)
self.DataQuerySelectionList_layout.addWidget(self.DataQuerySelectionOutputListTab,0,10,17,9)
self.DataQuerySelectionList_layout.addWidget(self.DataQueryForward_Button,7,9,1,1)
self.DataQuerySelectionList_layout.addWidget(self.DataQueryFull_Forward_Button,8,9,1,1)
self.DataQuerySelectionList_layout.addWidget(self.DataQueryBackward_Button,9,9,1,1)
self.DataQuerySelectionList_layout.addWidget(self.DataQueryFull_Backward_Button,10,9,1,1)
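# The arrow buttons presumably move selected rows between the input list tabs
# (left) and the output list tabs (right); the 'Full' variants appear to
# transfer the entire list (the handler methods are defined elsewhere).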
self.DataQuerySystem_layout = QGridLayout()
self.DataQuerySystem_layout.setSpacing(17)
self.DataQuerySystemLog = QLabel()
self.DataQuerySystemLog.setText("System log:")
self.DataQuerySystemLog_Dialog = QTextEdit()
self.DataQuerySystemLog_Dialog.setFixedWidth(430)
self.DataQuery_FileTypeSelection = QComboBox()
self.DataQuery_FileTypeSelection.addItems(['Excel (.xlsx)','Comma Separated Values (.csv)'])
self.DataQuery_button = QPushButton('Query')
self.DataQuery_button.setFixedSize(440,150)
self.DataQuery_button.setFont(QFont('Times', 24))
self.DataQuery_button.clicked.connect(self.RunQueryScript)
self.DataQuerySystem_layout.addWidget(self.DataQuerySystemLog,0,0,1,8)
self.DataQuerySystem_layout.addWidget(self.DataQuerySystemLog_Dialog,1,0,2,8)
self.DataQuerySystem_layout.addWidget(QLabel(""),0,9,2,1)
self.DataQuerySystem_layout.addWidget(QLabel("Output format:"),1,10,1,1)
self.DataQuerySystem_layout.addWidget(self.DataQuery_FileTypeSelection,1,11,1,7)
self.DataQuerySystem_layout.addWidget(self.DataQuery_button,2,10,1,8)
self.Menu_DataQuery_layout = QVBoxLayout()
self.Menu_DataQuery_layout.addWidget(self.DataQuerymenuButton)
self.Menu_DataQuery_layout.addWidget(self.DataQueryTitle)
self.Menu_DataQuery_layout.addLayout(self.DataQueryNode_layout)
self.Menu_DataQuery_layout.addLayout(self.DataQueryProcess_tech_category_layout)
self.Menu_DataQuery_layout.addLayout(self.DataQueryProduct_layout)
self.Menu_DataQuery_layout.addLayout(self.DataQueryTestSelection)
self.Menu_DataQuery_layout.addWidget(self.DataQueryTimeFrame_container)
self.Menu_DataQuery_layout.addLayout(self.DataQueryDataSelection)
self.Menu_DataQuery_layout.addLayout(self.DataQuerySelectionList_layout)
self.Menu_DataQuery_layout.addLayout(self.DataQuerySystem_layout)
self.DataQuerySelectionMenu.setLayout(self.Menu_DataQuery_layout)
def DataQueryExportData(self, data):
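# `data` is either a single DataFrame (neither ET nor Inline selected) or a
# sequence of DataFrames: (query_data, et_spec), (query_data, inline_spec) or
# (query_data, et_spec, inline_spec), matching the checkbox branches below.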
if self.DataQuery_FileTypeSelection.currentText() == 'Excel (.xlsx)':
WriteData = pd.ExcelWriter(f"{self.folderpath}/Data Query {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.xlsx")
if self.DataQueryETSelection.isChecked() and self.DataQueryInlineSelection.isChecked():
query_data = data[0]
et_spec = data[1]
inline_spec = data[2]
query_data.to_excel(WriteData, sheet_name = 'Query Data', index=False)
et_spec.to_excel(WriteData, sheet_name = 'ET Spec Book', index=False)
inline_spec.to_excel(WriteData, sheet_name = 'Inline Spec Book', index=False)
elif self.DataQueryETSelection.isChecked() and self.DataQueryInlineSelection.isChecked() == False:
query_data = data[0]
et_spec = data[1]
query_data.to_excel(WriteData, sheet_name = 'Query Data', index=False)
et_spec.to_excel(WriteData, sheet_name = 'ET Spec Book', index=False)
elif self.DataQueryETSelection.isChecked() == False and self.DataQueryInlineSelection.isChecked():
query_data = data[0]
inline_spec = data[1]
query_data.to_excel(WriteData, sheet_name = 'Query Data', index=False)
inline_spec.to_excel(WriteData, sheet_name = 'Inline Spec Book', index=False)
else:
query_data = data
query_data.to_excel(WriteData, sheet_name = 'Query Data', index=False)
WriteData.save()
elif self.DataQuery_FileTypeSelection.currentText() == 'Comma Separated Values (.csv)':
if self.DataQueryETSelection.isChecked() and self.DataQueryInlineSelection.isChecked():
query_data = data[0]
et_spec = data[1]
inline_spec = data[2]
query_data.to_csv(f"{self.folderpath}/Query Data {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
et_spec.to_csv(f"{self.folderpath}/ET Spec Book {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
inline_spec.to_csv(f"{self.folderpath}/Inline Spec Book {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
elif self.DataQueryETSelection.isChecked() and self.DataQueryInlineSelection.isChecked() == False:
query_data = data[0]
et_spec = data[1]
query_data.to_csv(f"{self.folderpath}/Query Data {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
et_spec.to_csv(f"{self.folderpath}/ET Spec Book {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
elif self.DataQueryETSelection.isChecked() == False and self.DataQueryInlineSelection.isChecked():
query_data = data[0]
inline_spec = data[1]
query_data.to_csv(f"{self.folderpath}/Query Data {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
inline_spec.to_csv(f"{self.folderpath}/Inline Spec Book {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
else:
query_data = data
query_data.to_csv(f"{self.folderpath}/Query Data {self.DataQueryProduct_No.currentText()} {datetime.now().strftime('%d-%m-%Y_%H-%M')}.csv", index=False)
def DataQuery_QueryTech(self):
# Function to Query Tech to update Tech DropBox in Data Query Selection Menu UI
with QueryThread('None','None','None') as self.DataQuery_QueryThread:
self.DataQuery_QueryThread.start()
self.DataQuery_QueryThread.system_log.connect(self.DataQuerySystemMessage)
self.DataQuery_QueryThread.query_result.connect(self.DataQueryUpdateTech)
def DataQueryUpdateTech(self, query):
# Update Tech DropBox in Data Query Selection Menu UI Queried from DataQuery_QueryTech()
self.DataQueryNode_Box.addItems(query)
def DataQuery_QueryProcessTechCategory(self):
# Function to Query Process Tech Category to update Process DropBox in Data Query Selection Menu UI
with QueryThread('Process_Tech_Category', self.DataQueryNode_Box.currentText(), 'None') as self.DataQuery_QueryThread:
self.DataQuery_QueryThread.start()
self.DataQuery_QueryThread.system_log.connect(self.DataQuerySystemMessage)
self.DataQuery_QueryThread.query_result.connect(self.DataQueryUpdateProcessTechCategory)
def DataQueryUpdateProcessTechCategory(self, query):
# Update Process DropBox in Data Query Selection Menu UI Queried from DataQuery_QueryProcessTechCategory()
self.DataQueryProcess_tech_category_Box.clear()
self.DataQueryProcess_tech_category_Box.addItems(query)
def DataQuery_QueryProduct(self):
# Function to Query Product to update Product DropBox in Data Query Selection Menu UI
with | |
Whether to mark the edit as a bot edit
:param summary: Edit summary
"""
if claim.isReference or claim.isQualifier:
raise ValueError('The claim cannot have a source.')
params = {'action': 'wbsetreference', 'statement': claim.snak,
'baserevid': claim.on_item.latest_revision_id,
'summary': summary, 'bot': bot, 'token': self.tokens['edit']}
# build up the snak
if isinstance(source, list):
sources = source
else:
sources = [source]
snak = {}
for sourceclaim in sources:
datavalue = sourceclaim._formatDataValue()
valuesnaks = snak.get(sourceclaim.getID(), [])
valuesnaks.append({
'snaktype': 'value',
'property': sourceclaim.getID(),
'datavalue': datavalue,
})
snak[sourceclaim.getID()] = valuesnaks
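# The resulting `snak` dict groups value snaks by property id, e.g. (property
# id shown only for illustration):
# {'P143': [{'snaktype': 'value', 'property': 'P143', 'datavalue': {...}}]}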
# set the hash if the source should be changed.
# if present, all claims of one source have the same hash
if not new and hasattr(sourceclaim, 'hash'):
params['reference'] = sourceclaim.hash
params['snaks'] = json.dumps(snak)
req = self.simple_request(**params)
return req.submit()
@need_right('edit')
@remove_last_args(['baserevid']) # since 7.0.0
def editQualifier(self, claim, qualifier,
new: bool = False,
bot: bool = True,
summary: Optional[str] = None):
"""Create/Edit a qualifier.
.. versionchanged:: 7.0
deprecated `baserevid` parameter was removed
:param claim: A Claim object to add the qualifier to
:type claim: pywikibot.Claim
:param qualifier: A Claim object to be used as a qualifier
:type qualifier: pywikibot.Claim
:param new: Whether to create a new one if the "qualifier"
already exists
:param bot: Whether to mark the edit as a bot edit
:param summary: Edit summary
"""
if claim.isReference or claim.isQualifier:
raise ValueError('The claim cannot have a qualifier.')
params = {'action': 'wbsetqualifier', 'claim': claim.snak,
'baserevid': claim.on_item.latest_revision_id,
'summary': summary, 'bot': bot}
if (not new and hasattr(qualifier, 'hash')
and qualifier.hash is not None):
params['snakhash'] = qualifier.hash
params['token'] = self.tokens['edit']
# build up the snak
if qualifier.getSnakType() == 'value':
params['value'] = json.dumps(qualifier._formatValue())
params['snaktype'] = qualifier.getSnakType()
params['property'] = qualifier.getID()
req = self.simple_request(**params)
return req.submit()
@need_right('edit')
@remove_last_args(['baserevid']) # since 7.0.0
def removeClaims(self, claims,
bot: bool = True,
summary: Optional[str] = None):
"""Remove claims.
.. versionchanged:: 7.0
deprecated `baserevid` parameter was removed
:param claims: Claims to be removed
:type claims: List[pywikibot.Claim]
:param bot: Whether to mark the edit as a bot edit
:type bot: bool
:param summary: Edit summary
:type summary: str
"""
# Check on_item for all additional claims
items = {claim.on_item for claim in claims if claim.on_item}
assert len(items) == 1
baserevid = items.pop().latest_revision_id
params = {
'action': 'wbremoveclaims', 'baserevid': baserevid,
'summary': summary,
'bot': bot,
'claim': '|'.join(claim.snak for claim in claims),
'token': self.tokens['edit'],
}
req = self.simple_request(**params)
return req.submit()
@need_right('edit')
@remove_last_args(['baserevid']) # since 7.0.0
def removeSources(self, claim, sources,
bot: bool = True,
summary: Optional[str] = None):
"""Remove sources.
.. versionchanged:: 7.0
deprecated `baserevid` parameter was removed
:param claim: A Claim object to remove the sources from
:type claim: pywikibot.Claim
:param sources: A list of Claim objects that are sources
:type sources: list
:param bot: Whether to mark the edit as a bot edit
:param summary: Edit summary
"""
params = {
'action': 'wbremovereferences',
'baserevid': claim.on_item.latest_revision_id,
'summary': summary, 'bot': bot,
'statement': claim.snak,
'references': '|'.join(source.hash for source in sources),
'token': self.tokens['edit'],
}
req = self.simple_request(**params)
return req.submit()
@need_right('edit')
@remove_last_args(['baserevid']) # since 7.0.0
def remove_qualifiers(self, claim, qualifiers,
bot: bool = True,
summary: Optional[str] = None):
"""Remove qualifiers.
.. versionchanged:: 7.0
deprecated `baserevid` parameter was removed
:param claim: A Claim object to remove the qualifier from
:type claim: pywikibot.Claim
:param qualifiers: Claim objects currently used as a qualifiers
:type qualifiers: List[pywikibot.Claim]
:param bot: Whether to mark the edit as a bot edit
:param summary: Edit summary
"""
params = {
'action': 'wbremovequalifiers',
'claim': claim.snak,
'baserevid': claim.on_item.latest_revision_id,
'summary': summary,
'bot': bot,
'qualifiers': [qualifier.hash for qualifier in qualifiers],
'token': self.tokens['edit']
}
req = self.simple_request(**params)
return req.submit()
@need_right('edit')
def linkTitles(self, page1, page2, bot: bool = True):
"""
Link two pages together.
:param page1: First page to link
:type page1: pywikibot.Page
:param page2: Second page to link
:type page2: pywikibot.Page
:param bot: Whether to mark the edit as a bot edit
:return: dict API output
:rtype: dict
"""
params = {
'action': 'wblinktitles',
'tosite': page1.site.dbName(),
'totitle': page1.title(),
'fromsite': page2.site.dbName(),
'fromtitle': page2.title(),
'token': self.tokens['edit']
}
if bot:
params['bot'] = 1
req = self.simple_request(**params)
return req.submit()
@need_right('item-merge')
def mergeItems(self, from_item, to_item, ignore_conflicts=None,
summary=None, bot: bool = True):
"""
Merge two items together.
:param from_item: Item to merge from
:type from_item: pywikibot.ItemPage
:param to_item: Item to merge into
:type to_item: pywikibot.ItemPage
:param ignore_conflicts: Which type of conflicts
('description', 'sitelink', and 'statement')
should be ignored
:type ignore_conflicts: list of str
:param summary: Edit summary
:type summary: str
:param bot: Whether to mark the edit as a bot edit
:return: dict API output
:rtype: dict
"""
params = {
'action': 'wbmergeitems',
'fromid': from_item.getID(),
'toid': to_item.getID(),
'ignoreconflicts': ignore_conflicts,
'token': self.tokens['edit'],
'summary': summary,
}
if bot:
params['bot'] = 1
req = self.simple_request(**params)
return req.submit()
@need_right('item-merge')
@need_extension('WikibaseLexeme')
def mergeLexemes(self, from_lexeme, to_lexeme, summary=None, *,
bot: bool = True) -> dict:
"""
Merge two lexemes together.
:param from_lexeme: Lexeme to merge from
:type from_lexeme: pywikibot.LexemePage
:param to_lexeme: Lexeme to merge into
:type to_lexeme: pywikibot.LexemePage
:param summary: Edit summary
:type summary: str
:keyword bot: Whether to mark the edit as a bot edit
:return: dict API output
"""
params = {
'action': 'wblmergelexemes',
'source': from_lexeme.getID(),
'target': to_lexeme.getID(),
'token': self.tokens['edit'],
'summary': summary,
}
if bot:
params['bot'] = 1
req = self.simple_request(**params)
data = req.submit()
return data
@need_right('item-redirect')
def set_redirect_target(self, from_item, to_item, bot: bool = True):
"""
Make a redirect to another item.
:param to_item: title of target item.
:type to_item: pywikibot.ItemPage
:param from_item: Title of the item to be redirected.
:type from_item: pywikibot.ItemPage
:param bot: Whether to mark the edit as a bot edit
"""
params = {
'action': 'wbcreateredirect',
'from': from_item.getID(),
'to': to_item.getID(),
'token': self.tokens['edit'],
'bot': bot,
}
req = self.simple_request(**params)
return req.submit()
def search_entities(self, search: str, language: str,
total: Optional[int] = None, **kwargs):
"""
Search for pages or properties that contain the given text.
:param search: Text to find.
:param language: Language to search in.
:param total: Maximum number of pages to retrieve in total, or
None in case of no limit.
:return: 'search' list from API output.
:rtype: Generator
"""
lang_codes = self._paraminfo.parameter('wbsearchentities',
'language')['type']
if language not in lang_codes:
raise ValueError('Data site used does not support provided '
'language.')
if 'site' in kwargs:
if kwargs['site'].sitename != self.sitename:
raise ValueError('The site given in the kwargs is different.')
warn('search_entities should not get a site via kwargs.',
UserWarning, 2)
del kwargs['site']
parameters = dict(search=search, language=language, **kwargs)
gen = self._generator(api.APIGenerator,
type_arg='wbsearchentities',
data_name='search',
total=total, parameters=parameters)
return gen
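# Hypothetical usage sketch (names are illustrative; result fields follow the
# wbsearchentities API):
# for entity in repo.search_entities('Douglas Adams', 'en', total=5):
#     print(entity['id'], entity.get('label'))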
@need_right('edit')
def _wbset_action(self, itemdef, action: str, action_data,
**kwargs) -> dict:
"""
Execute wbset{action} on a Wikibase entity.
Supported actions are:
wbsetaliases, wbsetdescription, wbsetlabel and wbsetsitelink
:param itemdef: Entity to modify or create
:type itemdef: str, WikibaseEntity or Page connected to such item
:param action: wbset{action} to perform:
'wbsetaliases', 'wbsetdescription', 'wbsetlabel', 'wbsetsitelink'
:param action_data: data to be used in API request, see API help
:type action_data: SiteLink or dict
wbsetaliases:
dict shall have the following structure:
{'language': value (str),
'add': list of language codes (str),
'remove': list of language codes (str),
'set' list of language codes (str)
}
'add' and 'remove' are alternative to 'set'
wbsetdescription and wbsetlabel:
dict shall have keys 'language', 'value'
wbsetsitelink:
dict shall have keys 'linksite', 'linktitle' and
optionally 'badges'
:keyword bot: Whether to mark the edit as a bot edit, default is True
:type bot: bool
:keyword tags: Change tags to apply with the edit
:type tags: list of str
:return: query result
:raises AssertionError, TypeError
"""
def format_sitelink(sitelink):
"""Convert SiteLink to a dict accepted by wbsetsitelink API."""
if isinstance(sitelink, pywikibot.page.SiteLink):
_dict = {
'linksite': sitelink._sitekey,
'linktitle': sitelink._rawtitle,
'badges': '|'.join([b.title() for b in sitelink.badges]),
}
else:
_dict = sitelink
return _dict
def prepare_data(action, data):
"""Prepare data as expected by API."""
if action == 'wbsetaliases':
res = data
keys = set(res)
assert keys < {'language', 'add', 'remove', 'set'}
assert 'language' in keys
assert ({'add', 'remove', 'set'} & keys)
assert ({'add', 'set'} >= keys)
assert ({'remove', 'set'} >= keys)
elif action in ('wbsetlabel', 'wbsetdescription'):
res = data
keys = set(res)
assert keys == {'language', 'value'}
elif action == | |
path corresponding to a kernel_name."""
return '%s/%s.cl' % (self.KERNELS_PATH, kernel_name)
def inject_kernel_source(self, kernel_name):
"""Prepend LOADED_SOURCES[kernel_name] with necessary sources."""
pass
def get_kernel_source(self, kernel_name, **kwargs):
"""
Return the kernel source corresponding to a kernel_name and options.
"""
if kernel_name not in self.LOADED_SOURCES:
kernel_lines = []
with open(self.get_kernel_path(kernel_name), 'r') as source_file:
for line in source_file:
kernel_lines += [line]
self.LOADED_SOURCES[kernel_name] = kernel_lines
self.inject_kernel_source(kernel_name)
kernel_lines = self.LOADED_SOURCES[kernel_name]
kernel_source = ""
for line in kernel_lines:
if line[0:7] == 'UNROLL_':
toks = line[7:].split(' ')
reduced_line = ' '.join(toks[1:])
unroll_tok = toks[0]
replace_tok = unroll_tok + '_I'
for unroll_i in range(kwargs[unroll_tok]):
kernel_source += reduced_line.replace(replace_tok,
'%d' % unroll_i)
else:
kernel_source += line
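# Illustrative example of the UNROLL_ directive handled above: a kernel source
# line "UNROLL_N acc += x[N_I];" called with N=3 expands to
# "acc += x[0]; acc += x[1]; acc += x[2];" (each on its own line).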
if 'VECTOR_WIDTH' in kwargs and kwargs['VECTOR_WIDTH'] > 1:
vw_str = '%d' % kwargs['VECTOR_WIDTH']
else:
vw_str = ''
kernel_source = kernel_source.replace('SIZE_T', KERNEL_SIZE_T)
kernel_source = kernel_source.replace('_VW', vw_str)
for k in kwargs:
assert k != 'DTYPE'
if k[0:5] == 'DTYPE':
dtype = kwargs[k]
dtype_str = SUPPORTED_DTYPES[dtype]
kernel_source = kernel_source.replace(k, dtype_str)
header = "//"
for k in kwargs:
header += '(%s,%s)' % (k, kwargs[k])
kernel_source = header + '\n' + kernel_source
return kernel_source
def get_include_opts(self):
return '-I %s' % self.KERNELS_PATH
def get_program(self, kernel_name, **kwargs):
"""
Return the kernel corresponding to a kernel_name and kernel options.
If the kernel is not in the cache it will be build and cached.
"""
if kernel_name not in self._programs:
self._programs[kernel_name] = {}
kernel_opts = str(sorted(kwargs.items()))
try:
result = self._programs[kernel_name][kernel_opts]
except KeyError:
kernel_source = self.get_kernel_source(kernel_name, **kwargs)
compile_opts = self.get_include_opts()
for k in sorted(kwargs):
if k[0:5] == 'DTYPE':
pass # These are not compile opts.
elif isinstance(kwargs[k], bool):
# These are pure defines
if kwargs[k]:
compile_opts += " -D%s " % k
elif k == 'VECTOR_WIDTH' and kwargs[k] == 1:
pass
else:
compile_opts += " -D%s=%s " % (k, kwargs[k])
if isinstance(kwargs[k], str):
kernel_source = kernel_source.replace(k, kwargs[k])
try:
kernel_program = cl.Program(self._context,
kernel_source).build(compile_opts)
except cl.RuntimeError as runtime_err:
message = 'opencl could not build kernel. %s\n' \
'KERNEL_OPTS were %s\n %s' % \
('Perhaps the function is not compatible'
' with the types it is being called with.',
kernel_opts, runtime_err)
raise RuntimeError(message)
self._programs[kernel_name][kernel_opts] = kernel_program
result = kernel_program
return result
##### Kernel Call Helpers #####
def safe_cast_non_logical(self, dt1, dt2, operator='+'):
"""
Return a dtype which is safe(st) for two dtypes to be cast to.
An attempt will be made to be consistent with numpy.
"""
result = eval('(np.zeros(1,dt1) %s np.zeros(1,dt2)).dtype' % operator)
if result not in SUPPORTED_DTYPES:
warnings.warn('Not sure what dtype to use for operation %s '
'on dtypes %s %s. Will use float64.' %
(operator, dt1, dt2), RtypeWarning)
result = _DTYPE('float64')
return result
def preferred_work_group_size_multiple(self, kernel):
"""Get the device's preferred work group size multiple for a kernel."""
return kernel.get_work_group_info(
cl.kernel_work_group_info.PREFERRED_WORK_GROUP_SIZE_MULTIPLE,
self._device)
def _cl_elementwise_local_size(self, kernel, dim):
"""Gets the optimal local size for a given kernel and dimensionality.
This is loosely based on some prior knowledge/benchmarks and
the preferred and maximal work group sizes of the device.
@type kernel: cl.Kernel
@type dim: 1,2
@param dim: The dimensionality of the kernel
@rtype: [int,...]
@returns: A list of integer type ready to pass to the kernel call
as a local work size.
"""
if self.device.type == CPU:
if dim == 1:
result = [int(self.device.max_work_group_size)]
else: # Simply reverse order operation on two matrices
# Take care of the most common scenario first.
if self.device.max_work_group_size == 1024:
result = [32, 32]
else:
local_size_major = 1
local_size_minor = self.device.max_work_group_size
while local_size_major < local_size_minor:
local_size_major *= 2
local_size_minor //= 2
result = [int(local_size_major),
int(local_size_minor)]
else:
if dim == 1:
result = [int(
self.preferred_work_group_size_multiple(kernel))]
else:
result = [2,
int(self.device.max_work_group_size//2)]
return result
def _cl_elementwise_global_size(self, min_global_size, local_size):
"""
Get the global size for the given local_size and matrix shape/size.
@type min_global_size: [int] , [int,int]
@param min_global_size: The minimum global size to return.
@type local_size: [int] , [int,int]
@param local_size: The local size that will be used.
@rtype: [int,...]
@return: The global size (ready for passing to kernel calls)
"""
local_size = [l for l in local_size]
dim = len(local_size)
if local_size[0] > min_global_size[0]:
local_size[0] = int(min_global_size[0])
if dim == 2 and local_size[1] > min_global_size[1]:
local_size[1] = int(min_global_size[1])
gs0 = int((min_global_size[0]//local_size[0])*local_size[0])
if gs0 < min_global_size[0]:
gs0 += local_size[0]
if dim == 1:
global_size = [gs0]
else:
gs1 = int((min_global_size[1]//local_size[1])*local_size[1])
if gs1 < min_global_size[1]:
gs1 += local_size[1]
global_size = [gs0, gs1]
return global_size, local_size
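# Illustrative example: local_size=[32, 32] and min_global_size=[100, 70] give
# global_size=[128, 96] (each dimension rounded up to a multiple of the local
# size) with local_size left at [32, 32].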
def _cl_buffer_copy(self, dst, src):
"""Copy from/to host and device.
The arguments must conform to cl.enqueue_copy requirements.
@type dst: cl.Buffer/np.ndarray
@param dst: The object to copy to.
@type src: np.ndarray/cl.Buffer
@param src: The object to copy from.
"""
cl.enqueue_copy(self._queue, dst, src)
self._queue.finish()
def _consistify_args(self, dim, *args):
"""
Checks the arguments for 1d, 2d or other kernel consistency
and returns their consistent versions if possible.
Each arg is expected to be a Mat or a scalar.
@type dim: int
@param dim: 1 if kernel is 1d, 2 if kernel is 2d, -1 for mixed/other.
For mixed/other no shape checks will be performed.
@raises ValueError: If any argument inconsistency cannot be rectified.
"""
result = []
out = args[0]
for m_arg in args:
if isinstance(m_arg, Mat):
if m_arg.computer is not self:
raise ValueError('One of the arguments is not using'
' this computer: %s' % m_arg)
if dim == 1 and m_arg.size != out.size:
raise ValueError('1d kernel arg and output size mismatch:'
' %d != %d' % (m_arg.size, out.size))
elif dim == 2 and (m_arg.shape0 not in [out.shape0, 1] or
m_arg.shape1 not in [out.shape1, 1]):
raise ValueError('2d kernel arg and output shape mismatch:'
' %s != %s' % (m_arg.shape, out.shape))
elif not isinstance(m_arg, np.ndarray):
new_m = np.array([m_arg])
if new_m.size != 1:
raise ValueError('One of the arguments could not be '
'converted to a scalar ndarray: %s' %
m_arg)
m_arg = new_m
result += [m_arg]
return tuple(result)
def _call_dim(self, *args):
"""
Return the lowest dimensionality of kernel
that can be called with the given args.
NOTE: Will not check the arguments for shape or other consistency.
@rtype: 1,2
@returns: 1 if all the Mat arguments are contiguous
and have identical shapes and compatible orders, 2 otherwise.
@raises TypeError: If the first argument is not a Mat.
"""
if not isinstance(args[0], Mat):
raise TypeError('Expected first argument to be a Mat (was %s)' %
args[0])
main = args[0]
if not main.contiguous:
return 2
shape = main.shape
for m_arg in args:
if isinstance(m_arg, Mat):
if m_arg.shape != shape:
return 2
if not (m_arg.c_contiguous and main.c_contiguous) and \
not (m_arg.f_contiguous and main.f_contiguous):
return 2
return 1
def compute_preferred_vector_width(self, dtype):
"""Compute the preferred opencl vector width for a given dtype."""
dtype_str = SUPPORTED_DTYPES[dtype]
is_unsigned = dtype_str[0] == 'u'
if is_unsigned:
dtype_str = dtype_str[1:]
result = getattr(self.device, 'preferred_vector_width_%s' % dtype_str)
if is_unsigned and result > 1:
result //= 2
return result
def get_preferred_vector_width(self, dtype):
"""Get the preferred opencl vector width for a given dtype."""
return self._preferred_vector_width_store[dtype]
def _optimal_vector_width(self, size, dtype):
"""
Return the vector width most suited for kernel calls
given buffers of the given size.
@type size: SIZE_T
@param size: The size in which the vector width must fit.
@type dtype: numpy dtype
@param dtype: The data type that will be vectorized.
@rtype: int
@returns: The smallest vector width that divides the size
and is not greater than the preferred for that dtype.
"""
if size % 2:
return 1
result = self.get_preferred_vector_width(dtype)
while size % result:
result //= 2
return max(result, 1)
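# Illustrative example: size=24 with a preferred width of 16 returns 8, i.e.
# the largest power-of-two divisor of size not exceeding the preferred width
# (assuming the device-preferred width is itself a power of two).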
##### Kernel Callers #####
def _cl_random(self, out, normal=False):
"""Call the random kernel."""
out, = self._consistify_args(-1, out)
queue = self.queue
if out.dtype.kind != 'f':
raise TypeError('random expects Mat with float dtype.')
is_exact = bool(out.size % 4 == 0)
is_double = bool(out.dtype == np.double)
program = self.get_program(
'random', EXACT=is_exact, DOUBLE=is_double, NORMAL=bool(normal),
DTYPE_OUT=out.dtype, RANLUXCL_LUX=self.ranlux_lux)
gws = [int(self.ranlux_num_states)]
lws = [int(self.device.max_work_group_size)]
if self._ranlux_states is None:
seed = np.uint32(np.random.randint(2**32))
self._ranlux_states = cl.Buffer(
self._context, cl.mem_flags.READ_WRITE,
28*4*self.ranlux_num_states)
program.random_states_init(queue, gws, lws, seed,
self._ranlux_states)
queue.finish()
out_size = out.size
program.random(queue, gws, lws,
out_size, out.buffer, _size_t(out_size % 4),
_size_t(out_size // 4), out.buffer,
self._ranlux_states)
self._queue.finish()
def | |
"""multipy: Python library for multicomponent mass transfer"""
__author__ = "<NAME>, <NAME>"
__copyright__ = "Copyright (c) 2022, <NAME>, <NAME>"
__license__ = "MIT"
__version__ = "1.0.0"
__maintainer__ = ["<NAME>"]
__email__ = ["<EMAIL>"]
__status__ = "Production"
import numpy as np
import pandas as pd
import random
import copy
import scipy
import multipy
import warnings
gas_constant = 8.31446261815324
################################################################################
################################################################################
####
#### Class: Transform
####
################################################################################
################################################################################
class Transform:
"""
Supports performing transformations of multicomponent quantities to other bases or reference frames.
"""
def __init__(self):
pass
# --------------------------------------------------------------------------
def species_fractions_mole_to_mass(self, species_mole_fractions, species_molar_masses):
"""
Computes the species mass fractions, :math:`\\mathbf{Y}_i`, from the
species mole fractions, :math:`\\mathbf{X}_i`, using the relation:
.. math::
Y_i = \\frac{M_i}{M} X_i
:param species_mole_fractions:
scalar ``numpy.ndarray`` specifying **all** species mole fractions, :math:`X_i`, in :math:`[-]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:param species_molar_masses:
scalar ``numpy.ndarray`` specifying the species molar masses, :math:`\\mathbf{M}_i`, in :math:`[kg/mole]`.
It should be of size ``(n_species,1)`` where ``n_species`` is at least 2.
:return:
- **species_mass_fractions** - scalar ``numpy.ndarray`` specifying the species mass fractions, :math:`\\mathbf{Y}_i`, in :math:`[-]`. It has size ``(n_species,n_observations)``.
"""
if not isinstance(species_mole_fractions, np.ndarray):
raise ValueError("Parameter `species_mole_fractions` has to be of type `numpy.ndarray`.")
try:
(n_species_1, n_observations_1) = np.shape(species_mole_fractions)
except:
raise ValueError("Parameter `species_mole_fractions` has to be a matrix.")
if not isinstance(species_molar_masses, np.ndarray):
raise ValueError("Parameter `species_molar_masses` has to be of type `numpy.ndarray`.")
try:
(n_species_2, n_dim) = np.shape(species_molar_masses)
except:
raise ValueError("Parameter `species_molar_masses` has to be a matrix.")
if n_dim != 1:
raise ValueError("Parameter `species_molar_masses` has to be of size ``(n_species,1)``.")
if n_species_1 != n_species_2:
raise ValueError("Parameters `species_mole_fractions` and `species_molar_masses` have different number of species, ``n_species``.")
if np.any(species_molar_masses==0):
raise ValueError("Parameter `species_molar_masses` has entries equal to zero.")
if n_species_1 < 2:
raise ValueError("Parameters `species_mole_fractions` and `species_molar_masses` should contain all species. Only one species found.")
composition = multipy.Composition()
mixture_molar_mass = composition.mixture_molar_mass(species_mole_fractions, 'molar', species_molar_masses)
species_mass_fractions = np.multiply(np.divide(species_molar_masses, mixture_molar_mass), species_mole_fractions)
return species_mass_fractions
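# Illustrative example (not part of the library): for an equimolar binary
# mixture of N2 (M = 0.028 kg/mol) and H2 (M = 0.002 kg/mol), the mixture molar
# mass is 0.015 kg/mol, giving mass fractions of about [0.933, 0.067] from
# mole fractions [0.5, 0.5].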
# --------------------------------------------------------------------------
def species_fractions_mass_to_mole(self, species_mass_fractions, species_molar_masses):
"""
Computes the species mole fractions, :math:`\\mathbf{X}_i`, from the
species mass fractions, :math:`\\mathbf{Y}_i`, using the relation:
.. math::
X_i = \\frac{M}{M_i} Y_i
:param species_mass_fractions:
scalar ``numpy.ndarray`` specifying **all** species mass fractions, :math:`Y_i`, in :math:`[-]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:param species_molar_masses:
scalar ``numpy.ndarray`` specifying the species molar masses, :math:`\\mathbf{M}_i`, in :math:`[kg/mole]`.
It should be of size ``(n_species,1)`` where ``n_species`` is at least 2.
:return:
- **species_mole_fractions** - scalar ``numpy.ndarray`` specifying the species mole fractions, :math:`\\mathbf{X}_i`, in :math:`[-]`. It has size ``(n_species,n_observations)``.
"""
if not isinstance(species_mass_fractions, np.ndarray):
raise ValueError("Parameter `species_mass_fractions` has to be of type `numpy.ndarray`.")
try:
(n_species_1, n_observations_1) = np.shape(species_mass_fractions)
except:
raise ValueError("Parameter `species_mass_fractions` has to be a matrix.")
if not isinstance(species_molar_masses, np.ndarray):
raise ValueError("Parameter `species_molar_masses` has to be of type `numpy.ndarray`.")
try:
(n_species_2, n_dim) = np.shape(species_molar_masses)
except:
raise ValueError("Parameter `species_molar_masses` has to be a matrix.")
if n_dim != 1:
raise ValueError("Parameter `species_molar_masses` has to be of size ``(n_species,1)``.")
if n_species_1 != n_species_2:
raise ValueError("Parameters `species_mass_fractions` and `species_molar_masses` have different number of species, ``n_species``.")
if np.any(species_molar_masses==0):
raise ValueError("Parameter `species_molar_masses` has entries equal to zero.")
if n_species_1 < 2:
raise ValueError("Parameters `species_mass_fractions` and `species_molar_masses` should contain all species. Only one species found.")
composition = multipy.Composition()
mixture_molar_mass = composition.mixture_molar_mass(species_mass_fractions, 'mass', species_molar_masses)
species_mole_fractions = np.multiply(np.divide(mixture_molar_mass, species_molar_masses), species_mass_fractions)
return species_mole_fractions
# --------------------------------------------------------------------------
def species_gradients_mole_to_mass(self, species_mass_fractions, species_molar_masses):
"""
Computes an invertible, :math:`n-1` dimensional transformation matrix, :math:`\\mathbf{J}^{XY}`,
that allows to transform from the species mole fraction gradients, :math:`\\nabla \\mathbf{X}_i`,
to the species mass fraction gradients, :math:`\\nabla \\mathbf{Y}_i`, according to:
.. math::
\\nabla \\mathbf{Y}_i = \\mathbf{J}^{XY} \\nabla \\mathbf{X}_i
where:
.. math::
J_{i,j}^{XY} = \\frac{M_i}{M} \\Bigg( \\delta_{i,j} + \\frac{Y_i}{M_i} (M_n - M_j) \\Bigg)
.. note::
:math:`\\mathbf{J}^{XY} = (\\mathbf{J}^{YX})^{-1}`.
:param species_mass_fractions:
scalar ``numpy.ndarray`` specifying **all** species mass fractions, :math:`\\mathbf{Y}_i`, in :math:`[-]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:param species_molar_masses:
scalar ``numpy.ndarray`` specifying **all** species molar masses, :math:`\\mathbf{M}_i`, in :math:`[kg/mole]`.
It should be of size ``(n_species,1)`` where ``n_species`` is at least 2.
:return:
- **transformation_matrix** - scalar ``numpy.ndarray`` transformation matrix, :math:`\\mathbf{J}^{XY}`, in :math:`[-]`. It has size ``(n_species-1,n_species-1,n_observations)``.
"""
if not isinstance(species_mass_fractions, np.ndarray):
raise ValueError("Parameter `species_mass_fractions` has to be of type `numpy.ndarray`.")
try:
(n_species_1, n_observations_1) = np.shape(species_mass_fractions)
except:
raise ValueError("Parameter `species_mass_fractions` has to be a matrix.")
if not isinstance(species_molar_masses, np.ndarray):
raise ValueError("Parameter `species_molar_masses` has to be of type `numpy.ndarray`.")
try:
(n_species_2, n_dim) = np.shape(species_molar_masses)
except:
raise ValueError("Parameter `species_molar_masses` has to be a matrix.")
if n_dim != 1:
raise ValueError("Parameter `species_molar_masses` has to be of size ``(n_species,1)``.")
if n_species_1 != n_species_2:
raise ValueError("Parameters `species_mass_fractions` and `species_molar_masses` have different number of species, ``n_species``.")
if np.any(species_molar_masses==0):
raise ValueError("Parameter `species_molar_masses` has entries equal to zero.")
if n_species_1 < 2:
raise ValueError("Parameters `species_mass_fractions` and `species_molar_masses` should contain all species. Only one species found.")
composition = multipy.Composition()
mixture_molar_mass = composition.mixture_molar_mass(species_mass_fractions, 'mass', species_molar_masses)
(n_species, n_observations) = np.shape(species_mass_fractions)
transformation_matrix = np.zeros((n_species-1, n_species-1, n_observations))
for k in range(0,n_observations):
for i in range(0,n_species-1):
for j in range(0,n_species-1):
if i == j:
kronecker_delta = 1
else:
kronecker_delta = 0
transformation_matrix[i,j,k] = species_molar_masses[i,0] / mixture_molar_mass[0,k] * (kronecker_delta + species_mass_fractions[i,k] / species_molar_masses[i,0] * (species_molar_masses[-1,0] - species_molar_masses[j,0]))
return transformation_matrix
# --------------------------------------------------------------------------
def species_gradients_mass_to_mole(self, species_mass_fractions, species_molar_masses):
"""
        Computes an invertible, :math:`n-1` dimensional transformation matrix, :math:`\\mathbf{J}^{YX}`,
        that transforms the species mass fraction gradients, :math:`\\nabla \\mathbf{Y}_i`,
        into the species mole fraction gradients, :math:`\\nabla \\mathbf{X}_i`, according to:
.. math::
\\nabla \\mathbf{X}_i = \\mathbf{J}^{YX} \\nabla \\mathbf{Y}_i
where:
.. math::
J_{i,j}^{YX} = \\frac{M}{M_i} \\Bigg( \\delta_{i,j} + M Y_i \\Big( \\frac{1}{M_n} - \\frac{1}{M_j} \\Big) \\Bigg)
.. note::
:math:`\\mathbf{J}^{YX} = (\\mathbf{J}^{XY})^{-1}`.
:param species_mass_fractions:
scalar ``numpy.ndarray`` specifying **all** species mass fractions, :math:`\\mathbf{Y}_i`, in :math:`[-]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:param species_molar_masses:
scalar ``numpy.ndarray`` specifying **all** species molar masses, :math:`\\mathbf{M}_i`, in :math:`[kg/mole]`.
It should be of size ``(n_species,1)`` where ``n_species`` is at least 2.
:return:
- **transformation_matrix** - scalar ``numpy.ndarray`` transformation matrix, :math:`\\mathbf{J}^{YX}`, in :math:`[-]`. It has size ``(n_species-1,n_species-1,n_observations)``.
"""
if not isinstance(species_mass_fractions, np.ndarray):
raise ValueError("Parameter `species_mass_fractions` has to be of type `numpy.ndarray`.")
try:
(n_species_1, n_observations_1) = np.shape(species_mass_fractions)
except:
raise ValueError("Parameter `species_mass_fractions` has to be a matrix.")
if not isinstance(species_molar_masses, np.ndarray):
raise ValueError("Parameter `species_molar_masses` has to be of type `numpy.ndarray`.")
try:
(n_species_2, n_dim) = np.shape(species_molar_masses)
except:
raise ValueError("Parameter `species_molar_masses` has to be a matrix.")
if n_dim != 1:
raise ValueError("Parameter `species_molar_masses` has to be of size ``(n_species,1)``.")
if n_species_1 != n_species_2:
raise ValueError("Parameters `species_mass_fractions` and `species_molar_masses` have different number of species, ``n_species``.")
if np.any(species_molar_masses==0):
raise ValueError("Parameter `species_molar_masses` has entries equal to zero.")
if n_species_1 < 2:
raise ValueError("Parameters `species_mole_fractions` and `species_molar_masses` should contain all species. Only one species found.")
composition = multipy.Composition()
mixture_molar_mass = composition.mixture_molar_mass(species_mass_fractions, 'mass', species_molar_masses)
(n_species, n_observations) = np.shape(species_mass_fractions)
transformation_matrix = np.zeros((n_species-1, n_species-1, n_observations))
for k in range(0,n_observations):
for i in range(0,n_species-1):
for j in range(0,n_species-1):
if i == j:
kronecker_delta = 1
else:
kronecker_delta = 0
transformation_matrix[i,j,k] = mixture_molar_mass[0,k] / species_molar_masses[i,0] * (kronecker_delta + mixture_molar_mass[0,k] * species_mass_fractions[i,k] * (1.0 / species_molar_masses[-1,0] - 1.0 / species_molar_masses[j,0]))
return transformation_matrix
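    # Illustrative check (not part of the library): for every observation k the
    # two Jacobians above are mutual inverses, so J^XY[:, :, k] @ J^YX[:, :, k]
    # equals the (n_species-1)-dimensional identity. A minimal sketch with
    # hypothetical numbers; ``obj`` stands for an instance of the class
    # defining these methods.
    #
    #   import numpy as np
    #   M = np.array([[0.002], [0.028], [0.044]])   # molar masses [kg/mole]
    #   Y = np.array([[0.010], [0.590], [0.400]])   # mass fractions [-]
    #   J_xy = obj.species_gradients_mole_to_mass(Y, M)
    #   J_yx = obj.species_gradients_mass_to_mole(Y, M)
    #   assert np.allclose(J_xy[:, :, 0] @ J_yx[:, :, 0], np.eye(2))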
# --------------------------------------------------------------------------
def diffusive_flux_molar_molar_to_molar_volume(self, T, p, species_mole_fractions, species_partial_molar_volumes):
"""
        Computes an invertible, :math:`n-1` dimensional transformation matrix, :math:`\\mathbf{B}^{Vu}`,
        that transforms the molar diffusive flux relative to a
        molar-averaged velocity, :math:`\\mathbf{J}_i`, into the molar diffusive flux relative
        to a volume-averaged velocity, :math:`\\mathbf{J}_i^V`, according to:
.. math::
\\mathbf{J}_i^V = \\mathbf{B}^{Vu} \\mathbf{J}_i
where:
.. math::
B_{i,j}^{Vu} = \\delta_{i,j} - X_i (\\bar{V}_j - \\bar{V}_n) / \\bar{V}
.. note::
:math:`\\mathbf{B}^{Vu} = (\\mathbf{B}^{uV})^{-1}`.
        :param T:
``int`` or ``float`` specifying the temperature, :math:`T`, in :math:`[K]`.
        :param p:
``int`` or ``float`` specifying the pressure, :math:`p`, in :math:`[Pa]`.
:param species_mole_fractions:
scalar ``numpy.ndarray`` specifying **all** species mole fractions, :math:`\\mathbf{X}_i`, in :math:`[-]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:param species_partial_molar_volumes:
scalar ``numpy.ndarray`` specifying **all** species partial molar volumes, :math:`\\bar{\\mathbf{V}}_i`, in :math:`[m^3/mole]`.
It should be of size ``(n_species,n_observations)`` where ``n_species`` is at least 2.
:return:
- **transformation_matrix** - scalar ``numpy.ndarray`` transformation matrix, :math:`\\mathbf{B}^{Vu}`, in :math:`[-]`. It has size ``(n_species-1,n_species-1,n_observations)``.
"""
if not isinstance(T, int) and not isinstance(T, float):
raise ValueError("Parameter `T` has to be of type `int` or `float`.")
if not isinstance(p, int) and not isinstance(p, float):
raise ValueError("Parameter `p` | |
<reponame>onderogluserdar/boardInstrumentFramework
##############################################################################
# Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##############################################################################
# File Abstract:
# Abstraction for UI GUI
#
##############################################################################
from tkinter import ttk
from tkinter import Tk
from tkinter import * # pylint: disable=W0611
from Helpers import GuiMgr
from Helpers import TargetManager
from Helpers import Configuration
from Helpers import Target
from Helpers import Log
from Helpers import Recorder
from Helpers import Statistics
from Data import ConnectionPoint
import tkinter.messagebox
import tkinter.filedialog
import ntpath
import os
import tkinter.simpledialog
from Helpers import Playback
from Util import Time
from Helpers import VersionMgr
def enabled(objWidget):
if 'normal' == objWidget.cget('state'):
return True
return False
def enable(objWidget):
objWidget.config(state=NORMAL)
def disable(objWidget):
objWidget.config(state=DISABLED)
class GuiTK(object):
def __init__(self):
self.updateList=[]
self.root = Tk()
useTab = True
self._LastDataPointCount=0
if useTab:
self.TabPane = ttk.Notebook(self.root)
self.tabFrameData = ttk.Frame(self.TabPane)
self.tabFrameStatistics = ttk.Frame(self.TabPane)
self.setupStatisticsTab(self.tabFrameStatistics)
self.TabPane.add(self.tabFrameData,text="Data")
self.TabPane.add(self.tabFrameStatistics,text="Statistics")
self.TabPane.grid(row=0,sticky=(N,S,E,W))
self.TabPane.rowconfigure(0,weight=1) # Makes it so the data frame will grow
self.TabPane.columnconfigure(0,weight=1) # Makes it so the data frame will grow
else:
self.tabFrameData = ttk.Frame(self.root)
self.tabFrameData.grid(row=0,column=0,sticky=(N,W,E,S)) #Sticky says to grow in those directions,
# self.tabFrame1.rowconfigure(0,weight=1)
# self.tabFrame1.columnconfigure(0,weight=1)
self.tabFrameData.rowconfigure(0,weight=1)
self.tabFrameData.columnconfigure(0,weight=1)
self.setupDataTab(self.tabFrameData)
#Makes tabFrame1 frame grow to size of app
self.root.rowconfigure(0,weight=1)
self.root.columnconfigure(0,weight=1)
self._menu = MenuSystem(self.root)
self.root.config(menu=self._menu.get())
self.updateList.append(self._menu)
if Configuration.get().GetMinimizeGui():
self.root.iconify()
def SetTitle(self,titleStr):
self.root.title(titleStr)
def MessageBox_Error(self,Title,Message):
tkinter.messagebox.showerror(Title,Message)
def MessageBox_Info(self,Title,Message):
tkinter.messagebox.showinfo(Title,Message)
def MessageBox_OkCancel(self,Title,Message):
return tkinter.messagebox.askokcancel(Title,Message)
def setupDataTab(self,parent):
self.dataFrame = ttk.Frame(parent,borderwidth=5) #contains data tree and clear btn
self.dView = DataView(self.dataFrame)
self.dataFrame.grid(row=0,column=0,sticky=(N,E,S,W))
self.dataText = StringVar()
self.dataText.set("Data")
Label(self.dataFrame,textvariable = self.dataText).grid(row=0,column=0,sticky=(N))
self.dView.get().grid(row=1,column=0,sticky=(N,W,S,E))
self.dataFrame.rowconfigure(1,weight=1) # Makes it so the data frame will grow
self.dataFrame.columnconfigure(0,weight=1)
self.targetFrame = ttk.Frame(parent,borderwidth=5)
self.targetFrame.grid(row=0,column=1,sticky=(N,S))
self.liveCtrl = LiveControls(parent)
self.liveCtrl.get().grid(row=3,sticky=S)
self.targetView = TargetView(self.targetFrame)
self.playbackCtrl = PlaybackControls(parent)
self.playbackCtrl.get().grid(row=3,column=1)
self.playbackCtrl.get().grid_remove()
self.updateList.append(self.playbackCtrl)
self.updateList.append(self.dView)
self.updateList.append(self.liveCtrl)
self.updateList.append(self.targetView)
def setupStatisticsTab(self,pane):
self.statsView = StatisticsView(pane)
self.updateList.append(self.statsView)
def OnStart(self):
Log.getLogger().info("Starting GUI")
self.root.after(100,self.UpdateGui)
self.root.mainloop()
def OnQuit(self):
self.root.quit()
def OnClearData(self):
self.dView.Clear()
def UpdateGui(self):
for widget in self.updateList:
widget.updateGui()
currCount = len(GuiMgr.get().GetDatalist())
if currCount != self._LastDataPointCount:
self.dataText.set("Data {" + str(currCount) +"}")
self._LastDataPointCount = currCount
self.root.after(100,self.UpdateGui)
# Found some of this nice scroll bar code at: http://svn.python.org/projects/python/branches/pep-0384/Demo/tkinter/ttk/dirbrowser.py
def autoscroll(sbar, first, last):
"""Hide and show scrollbar as needed."""
first, last = float(first), float(last)
if first <= 0 and last >= 1:
sbar.grid_remove()
else:
sbar.grid()
sbar.set(first, last)
class DataView():
def __init__(self,parent):
self.parent = parent
self.root = ttk.Frame(parent,borderwidth=5,relief="ridge") # Frame with Tree Control in it and button
vsb = ttk.Scrollbar(self.root,orient="vertical")
hsb = ttk.Scrollbar(self.root,orient="horizontal")
self.dataViewTree = ttk.Treeview(self.root,yscrollcommand=lambda f, l: autoscroll(vsb, f, l),
xscrollcommand=lambda f, l:autoscroll(hsb, f, l))
vsb['command'] = self.dataViewTree.yview
hsb['command'] = self.dataViewTree.xview
vsb.grid(column=1, row=0, sticky='ns')
hsb.grid(column=0, row=1, sticky='ew')
self.dataViewTree['columns'] = ('Namespace','ID','Value','Source')
self.dataViewTree.heading('Namespace',text='Namespace')
self.dataViewTree.heading('ID',text='ID')
self.dataViewTree.heading('Value',text='Value')
self.dataViewTree.heading('Source',text='Source')
self.dataViewTree.column('Namespace',width=150)
self.dataViewTree.column('ID',width=100)
self.dataViewTree.column('Value',width=200,anchor='e')
self.dataViewTree.column('Source',width=60)
self.dataViewTree['show'] = 'headings' # gets rid of 1st empty column
self.dataViewTree.grid(row=0,column=0,padx=2,pady=2,sticky=(N,E,S,W))
#allows the root frame to grow
self.root.columnconfigure(0,weight=1)
self.root.rowconfigure(0,weight=1)
Button(self.root,text="Clear",command=self.onClearBtn).grid(row=2,column=0)
def onClearBtn(self):
GuiMgr.get().ClearDataView()
def Clear(self):
self.dataViewTree.delete(*self.dataViewTree.get_children())
def get(self):
return self.root
def __findIndex(self,key):
index = 0
for child in self.dataViewTree.get_children():
if key < child:
return index
index +=1
return 'end'
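    # Illustrative example (not part of the class): Treeview children are kept
    # sorted by their item key, so with existing children ['a', 'c'] a new key
    # 'b' yields index 1 and the row is inserted between them; a key that sorts
    # after every child yields 'end' and the row is appended.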
def updateGui(self):
try:
dlist = GuiMgr.get().GetDatalist()
for key in dlist.keys():
try:
objData = dlist[key][0]
strFrom = dlist[key][1]
self.dataViewTree.set(key,'Value',objData.Value)
self.dataViewTree.set(key,'Source',strFrom)
except Exception as Ex:
try:
index = self.__findIndex(key)
self.dataViewTree.insert('',index,key,values=(objData.Namespace,objData.ID,str(objData.Value),strFrom))
except Exception as Ex:
Log.getLogger().error(str(Ex))
except Exception as Ex: # Likely had the list updated while I was iterating (didn't make this thread safe), just ignore and wait for next loop
pass
class TargetView():
def __init__(self,parent):
self.root = parent#ttk.Frame(parent,borderwidth=5,relief="sunken")
self.tree = ttk.Treeview(self.root)
self.tree['columns'] = ('IP','Port','Type','Packets','Bytes')
self.tree.heading('IP',text='IP')
self.tree.heading('Port',text='Port')
self.tree.heading('Type',text='Type')
self.tree.heading('Packets',text='Packets')
self.tree.heading('Bytes',text='Bytes')
self.tree.column('IP',width=100)
self.tree.column('Port',width=50,anchor='e')
self.tree.column('Type',width=70,anchor='e')
self.tree.column('Packets',width=70,anchor='e')
self.tree.column('Bytes',width=90,anchor='e')
self.tree['show'] = 'headings' # gets rid of 1st empty column
#self.root.grid(sticky=(N,S))
Label(self.root,text="Targets").grid()
self.tree.grid(row=1,sticky=(N,S))
self.root.columnconfigure(0,weight=1)
self.root.rowconfigure(1,weight=1)
self.PreviousTargetCount = 0
#self.tree.grid(row=1,column=0,padx=2,pady=2,sticky=(N,E,S,W))
def get(self):
return self.root
def updateGui(self):
targets = TargetManager.GetTargetManager().GetDownstreamTargets()
        # there was a change in the number of targets (maybe a dynamic marvin went away), so just clear the tree and re-populate
if len(targets) != self.PreviousTargetCount:
self.PreviousTargetCount = len(targets)
self.tree.delete(*self.tree.get_children())
for key in targets:
target = TargetManager.GetTargetManager().GetDownstreamTarget(key)
strPackets=str(target.m_PacketsSent)
strBytes = str(target.m_BytestSent)
strType = target.getTypeStr()
try:
self.tree.set(key,'Packets',strPackets)
self.tree.set(key,'Bytes',strBytes)
self.tree.set(key,'Type',strType)
if True == target.m_hasTimedOut:
self.tree.set(key,'IP',"*"+target.getIP())
else:
self.tree.set(key,'IP',target.getIP())
except Exception as Ex:
try:
self.tree.insert('','end',key,values=(target.getIP(),str(target.getPort()),strType,strPackets,strBytes))
except Exception as Ex:
Log.getLogger().error(str(Ex))
class LiveControls():
def __init__(self,parent):
self.parent = parent
self.root = ttk.Frame(parent,borderwidth=5,relief="groove")
self.Visible = True
baseRow=1
self.btnStartLive = Button(self.root,text="Go Live",width=10,command=self.onLiveStartBtn)
self.btnStopLive = Button(self.root,text="Stop Live",width=10,command=self.onLiveStopBtn)
self.btnStartRecording = Button(self.root,text="Record",width=10,command=self.onRecordStartBtn)
self.btnStopRecording = Button(self.root,text="Stop Recording",width=15,command=self.onRecordStopBtn)
Label(self.root,text = "Live Data Control").grid(columnspan=4)
self.btnStartLive.grid(row=baseRow,column=0)
self.btnStopLive.grid(row=baseRow,column=1)
self.btnStartRecording.grid(row=baseRow,column=2)
self.btnStopRecording.grid(row=baseRow,column=3)
# treeview of recording info
self.RecordingInfoFrame = ttk.Frame(self.root)
self.RecordingInfoFrame.grid(row=baseRow+1,columnspan=4)
self.RecordingTree = ttk.Treeview(self.RecordingInfoFrame,height=1)
self.lblRecordedInfo = Label(self.RecordingInfoFrame,text="Recorded Data")
self.RecordingTree['columns'] = ('COUNT','MEM','SECS')
self.RecordingTree.heading('COUNT',text='Count')
self.RecordingTree.heading('MEM',text='Approx. Mem')
self.RecordingTree.heading('SECS',text='Seconds')
self.RecordingTree.column('COUNT',width=70,anchor='center')
self.RecordingTree.column('MEM',width=75,anchor='center')
self.RecordingTree.column('SECS',width=75,anchor='center')
self.RecordingTree['show'] = 'headings' # gets rid of 1st empty column
self.RecordingTree.insert('',0,"foo",values=('0','0','0'))
self.lblRecordedInfo.grid()
self.RecordingTree.grid(row=1,column=0)
self.RecordingInfoFrame.grid_remove()
def get(self):
return self.root
def onLiveStartBtn(self):
if not Recorder.get().HasBeenSaved() and Recorder.get().GetRecordedCount()>0:
response = GuiMgr.MessageBox_OkCancel("Warning","You have not saved the current recorded data. OK to Discard?")
if False == response:
return
GuiMgr.OnStartLiveData()
GuiMgr.SetPlaybackFilename("")
#GuiMgr.SetTitle("")
def onLiveStopBtn(self):
GuiMgr.OnStopLiveData()
def onRecordStartBtn(self):
if not Recorder.get().HasBeenSaved() and Recorder.get().GetRecordedCount()>0:
response = GuiMgr.MessageBox_OkCancel("Restart Recording?","You have not saved the current recorded data. OK to Discard?")
if False == response:
return
GuiMgr.OnStartRecording()
def onRecordStopBtn(self):
GuiMgr.OnStopRecording()
def updateGui(self):
if True == GuiMgr.get().Playback_Playing and (enabled(self.btnStopLive) or enabled(self.btnStartRecording)):
disable(self.btnStopLive)
disable(self.btnStartRecording)
if False == GuiMgr.get().Live_Active and True == self.Visible:
self.root.grid_remove()
self.Visible = False
return
if True == GuiMgr.get().Live_Active and False == self.Visible:
self.root.grid()
self.Visible=True
if False == self.Visible:
return
if True == GuiMgr.get().Playback_Playing and enabled(self.btnStartLive):
disable(self.btnStartLive)
if True == GuiMgr.get().Playback_Playing:
return
if True == GuiMgr.get().Live_Receiving and enabled(self.btnStartLive):
self.btnStartLive.config(state=DISABLED)
self.btnStopLive.config(state=NORMAL)
self.RecordingInfoFrame.grid_remove()
if False == GuiMgr.get().Live_Receiving and enabled(self.btnStopLive):
enable(self.btnStartLive)
disable(self.btnStopLive)
if True == GuiMgr.get().Live_Recording and enabled(self.btnStartRecording):
disable(self.btnStartRecording)
enable(self.btnStopRecording)
disable(self.btnStopLive)
self.RecordingInfoFrame.grid()
if enabled(self.btnStartRecording) and not enabled(self.btnStopLive):
enable(self.btnStopLive)
if enabled(self.btnStartLive) and enabled(self.btnStartRecording):
self.btnStartRecording.config(state=DISABLED)
if not enabled(self.btnStartLive) and not enabled(self.btnStartRecording) and not enabled(self.btnStopRecording):
enable(self.btnStartRecording)
if False == GuiMgr.get().Live_Recording and enabled(self.btnStopRecording):
enable(self.btnStartRecording)
disable(self.btnStopRecording)
if True == GuiMgr.get().Live_Recording:
self.RecordingTree.set("foo","COUNT",str(Recorder.get().GetRecordedCount()))
bytes = Recorder.get().GetBytesRecorded()
if bytes < 1024:
strVal = str(bytes)+" B"
elif bytes < 1024 * 1024:
strVal = "{0:.2f}".format(float(bytes/1024))+" KB"
else:
strVal = "{0:.2f}".format(float((bytes/1024)/1024))+" MB"
self.RecordingTree.set("foo","MEM",str(strVal))
self.RecordingTree.set("foo","SECS",str(Recorder.get().GetRecordingTime()))
class PlaybackControls():
def __init__(self,parent):
self.parent = parent
self.root = ttk.Frame(parent,borderwidth=5,relief="groove")
self.Visible = False
self.LoopValuesVisible=True
self.btnStartPlayback = Button(self.root,text="Play",width=10,command=self.onPlayBtn)
self.btnStopPlayback = Button(self.root,text="Stop",width=10,command=self.onStopBtn)
self.btnPausePlayback = Button(self.root,text="Pause",width=10,command=self.onPauseBtn)
self.lstBoxRepeatMode = ttk.Combobox(self.root,text="foo",width=6,state="readonly")
self.lstBoxRepeatMode.bind('<<ComboboxSelected>>',self.onSetPlaybackRepeatMode)
self.lstBoxRepeatMode['values'] = ("NONE", "REPEAT", "LOOP")
self.lstBoxRepeatMode.current(0)
self.lstBoxPlaybackSpeed = ttk.Combobox(self.root,width=3,state="readonly")
self.lstBoxPlaybackSpeed.bind('<<ComboboxSelected>>',self.onSetPlaybackSpeed)
self.lstBoxPlaybackSpeed['values'] = (".1", ".25", ".5",".75","1","2","5","10")
self.lstBoxPlaybackSpeed.current(4)
self.lblStartLoop = Label(self.root,width=3,justify=CENTER)
self.lblEndLoop = Label(self.root,width=3,justify=CENTER)
self.lblPlaybackTime = Label(self.root,width=10,justify=CENTER)
self.slider = Scale(self.root,from_=0, to=100, orient=HORIZONTAL,length=300,command=self.sliderUpdate)
self.slider.bind("<ButtonRelease-1>",self.sliderHandler)
self.lblPacketNumber = Label(self.root,text="0/0",justify=RIGHT)
self.btnStartLoop = Button(self.root,text="Begin",command=self.onStartLoopBtn)
self.btnStopLoop = Button(self.root,text="End",command=self.onStopLoopBtn)
labelRow=0
btnRow=1
sliderRow=3
loopRow=4
loopBtnRow=5
#playLenStr = str(int(Playback.get().GetPlayTime()/1000))
#Label(self.root,text="playLenStr").grid(row=labelRow,column=0,sticky=(N),columnspan=3)
Label(self.root,text="Speed").grid(row=labelRow,column=4,sticky=(N))
Label(self.root,text="Mode").grid(row=labelRow,column=5,sticky=(N))
self.btnStartPlayback.grid(row=btnRow,column=0)
self.btnStopPlayback.grid(row=btnRow,column=1)
self.btnPausePlayback.grid(row=btnRow,column=2)
self.lstBoxPlaybackSpeed.grid(row=btnRow,column=4)
self.lstBoxRepeatMode.grid(row=btnRow,column=5)
self.lblStartLoop.grid(row=loopRow,column=0,columnspan=2)
self.lblEndLoop.grid(row=loopRow,column=2,columnspan=2)
self.slider.grid(row=sliderRow,column=0,columnspan=4)
self.lblPacketNumber.grid(row=sliderRow,column=4,columnspan=2)
self.btnStartLoop.grid(row=loopBtnRow,column=0,columnspan=2)
self.btnStopLoop.grid(row=loopBtnRow,column=2,columnspan=2)
self.lblPlaybackTime.grid(row=labelRow,column=0,sticky=(N),columnspan=3)
disable(self.btnStopPlayback)
disable(self.btnPausePlayback)
def get(self):
return self.root
def onStartLoopBtn(self):
currNum = Playback.get().GetCurrentNumber()
currEnd = Playback.get().GetLoopMode()[2]
maxNum = Playback.get().GetDataCount()
if currNum >= currEnd:
GuiMgr.MessageBox_Error("Error","You cannot set the beginning of a loop to be beyond the end")
return
Playback.get().SetLoopMode(Playback.RepeatMode.LOOP,currNum,currEnd)
def onStopLoopBtn(self):
currNum = Playback.get().GetCurrentNumber()
currStart = Playback.get().GetLoopMode()[1]
maxNum = Playback.get().GetDataCount()
if currStart >= currNum :
GuiMgr.MessageBox_Error("Error","You cannot set the end of a loop to be before the start")
return
Playback.get().SetLoopMode(Playback.RepeatMode.LOOP,currStart,currNum)
def sliderUpdate(self,event):
number = Playback.get().GetCurrentNumber()
strVal = str(number)+"/"+str(Playback.get().GetDataCount())
if GuiMgr.get().GetRepeatMode()[0] != Playback.RepeatMode.NONE:
strVal = strVal + " Loop: " + str(Playback.get().GetLoopCount())
self.lblPacketNumber.config(text=strVal)
def sliderHandler(self,event):
if GuiMgr.get().Playback_Playing:
return
percent = float(self.slider.get())/100.0
number = int(Playback.get().GetDataCount()*percent)
Playback.get().SetCurrentNumber(number)
strVal = str(number)+"/"+str(Playback.get().GetDataCount())
self.lblPacketNumber.config(text=strVal)
def onPlayBtn(self):
GuiMgr.OnStartPlayback()
def onStopBtn(self):
GuiMgr.OnStopPlayback()
def onPauseBtn(self):
GuiMgr.OnPausePlayback()
def onSetPlaybackRepeatMode(self,event):
if "NONE" == self.lstBoxRepeatMode.get():
GuiMgr.OnSetRepeatMode(Playback.RepeatMode.NONE)
elif "REPEAT" == self.lstBoxRepeatMode.get():
GuiMgr.OnSetRepeatMode(Playback.RepeatMode.REPEAT)
else:
GuiMgr.OnSetRepeatMode(Playback.RepeatMode.LOOP)
self.updateLoopValue()
def onSetPlaybackSpeed(self,event):
GuiMgr.get().OnSetPlaybackSpeed(float(self.lstBoxPlaybackSpeed.get()))
def updatePlaybackSpeed(self):
if GuiMgr.get().GetPlaybackSpeed() == float(self.lstBoxPlaybackSpeed.get()):
return
currSpeed = GuiMgr.get().GetPlaybackSpeed()
insertIndex = 0
index = 0
for strVal in self.lstBoxPlaybackSpeed['values']:
fVal = float(strVal)
if fVal < currSpeed:
insertIndex = index
if fVal == currSpeed:
self.lstBoxPlaybackSpeed.set(strVal)
return
index +=1
#so it wasn't there, must have been set via cmdline OR via Oscar Task
itemList = list(self.lstBoxPlaybackSpeed['values'])
itemList.insert(insertIndex, str(currSpeed))
self.lstBoxPlaybackSpeed['values'] = tuple(itemList)
def updateLoopValue(self):
mode = GuiMgr.get().GetRepeatMode()[0]
if Playback.RepeatMode.toString(mode) == self.lstBoxRepeatMode.get():
return
self.lstBoxRepeatMode.set(Playback.RepeatMode.toString(mode))
def updatePlaybackTime(self):
currTime = Playback.get().GetCurrentPlaybackTime()
if currTime > 0:
currTime = str(int(currTime/1000))
else:
currTime = "0"
endTime = Playback.get().GetPlayTime()
<reponame>libyal/vstools
# -*- coding: utf-8 -*-
"""Project and solution file reader classes."""
import abc
import re
from vstools import resources
class FileReader(object):
"""File reader."""
def __init__(self, encoding='utf-8'):
"""Initializes a file reader.
Args:
encoding (str): encoding.
"""
super(FileReader, self).__init__()
self._encoding = encoding
self._file = None
self._line = None
def _ReadBinaryData(self, size):
"""Reads binary data.
Args:
size (int): number of bytes to read.
Returns:
bytes: binary data.
"""
return self._file.read(size)
def _ReadLine(self, look_ahead=False):
"""Reads a line.
Args:
      look_ahead (Optional[bool]): indicates whether the line should be
          treated as consumed (False) or kept for the next read (True).
Returns:
str: line stripped of leading and trailing white space or None if no
input is available.
"""
if self._line is not None:
line = self._line
if not look_ahead:
self._line = None
else:
line = self._file.readline()
if line:
line = line.strip()
if look_ahead:
self._line = line
if isinstance(line, bytes):
line = line.decode(self._encoding)
return line
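  # Illustrative usage (not part of the library): with look_ahead=True the line
  # is buffered in self._line, so the next non-look-ahead call returns the same
  # line and consumes it. The file name below is hypothetical.
  #
  #   reader = FileReader()
  #   reader.Open('example.sln')
  #   peeked = reader._ReadLine(look_ahead=True)
  #   consumed = reader._ReadLine()
  #   assert peeked == consumed
  #   reader.Close()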
def Close(self):
"""Closes the file."""
self._file.close()
def Open(self, filename):
"""Opens the file.
Args:
filename (str): path of the file.
"""
self._file = open(filename, 'rb') # pylint: disable=consider-using-with
class VSProjectFileReader(FileReader):
"""Visual Studio project file reader."""
class VS2008ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2008 project file reader."""
_CONFIGURATION_OPTIONS = {
'CharacterSet': 'character_set',
'ConfigurationType': 'output_type',
'ManagedExtensions': 'managed_extensions',
'WholeProgramOptimization': 'whole_program_optimization',
}
_TOOL_COMPILER_CONFIGURATION_OPTIONS = {
'AdditionalIncludeDirectories': 'include_directories',
'BasicRuntimeChecks': 'basic_runtime_checks',
'CompileAs': 'compile_as',
'DebugInformationFormat': 'debug_information_format',
'Detect64BitPortabilityProblems': 'detect_64bit_portability_problems',
'EnableFunctionLevelLinking': 'enable_function_level_linking',
'EnableIntrinsicFunctions': 'enable_intrinsic_functions',
'Optimization': 'optimization',
'PreprocessorDefinitions': 'preprocessor_definitions',
'RuntimeLibrary': 'runtime_library',
'SmallerTypeCheck': 'smaller_type_check',
'UsePrecompiledHeader': 'precompiled_header',
'WarnAsError': 'warning_as_error',
'WarningLevel': 'warning_level',
}
_TOOL_LIBRARIAN_CONFIGURATION_OPTIONS = {
'IgnoreAllDefaultLibraries': 'librarian_ignore_defaults',
'OutputFile': 'librarian_output_file',
}
_TOOL_LINKER_CONFIGURATION_OPTIONS = {
'AdditionalDependencies': 'additional_dependencies',
'AdditionalLibraryDirectories': 'library_directories',
'DataExecutionPrevention': 'data_execution_prevention',
'EnableCOMDATFolding': 'enable_comdat_folding',
'FixedBaseAddress': 'fixed_base_address',
'GenerateDebugInformation': 'generate_debug_information',
'ImportLibrary': 'linker_values_set',
'LinkIncremental': 'link_incremental',
'ModuleDefinitionFile': 'module_definition_file',
'OptimizeReferences': 'optimize_references',
'OutputDirectory': 'linker_output_directory',
'OutputFile': 'linker_output_file',
'RandomizedBaseAddress': 'randomized_base_address',
'SubSystem': 'sub_system',
'TargetMachine': 'target_machine',
}
def _ParseConfigurationOption(
self, project_configuration, definition, name, line):
"""Parses a configuration option.
Args:
project_configuration (VSProjectConfiguration): project configuration.
definition (str): definition of the configuration value in file.
name (str): name of the configuration value in the project information.
line (str): line that contains the configuration value.
"""
regex_pattern = '{0:s}="([^"]*)"'.format(definition)
values = re.findall(regex_pattern, line)
if len(values) == 1:
setattr(project_configuration, name, values[0])
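  # Illustrative example (not part of the library): for a hypothetical vcproj
  # line 'CharacterSet="1"', calling this method with definition 'CharacterSet'
  # and name 'character_set' builds the pattern 'CharacterSet="([^"]*)"',
  # matches the single value '1' and sets
  # project_configuration.character_set = '1'. The definition/name pairs come
  # from the *_CONFIGURATION_OPTIONS tables above.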
def _ParseConfigurationOptions(
self, project_configuration, configuration_options, line):
"""Parses configuration options.
Args:
project_configuration (VSProjectConfiguration): project configuration.
configuration_options (dict[str, str]): configuration options defined
as a name per definition.
line (str): line that contains the configuration value.
"""
configuration_definition, _, _ = line.partition('=')
configuration_value = configuration_options.get(
configuration_definition, None)
if configuration_value:
self._ParseConfigurationOption(
project_configuration, configuration_definition, configuration_value,
line)
def _ReadConfiguration(self, line):
"""Reads a configuration.
Args:
line (str): line that contains the start of the configuration section.
Returns:
VSProjectConfiguration: configuration or None if no configuration was
found.
"""
if not line or not line.startswith('<Configuration'):
return None
project_configuration = resources.VSProjectConfiguration()
found_tool = False
found_tool_compiler = False
found_tool_librarian = False
found_tool_linker = False
while line:
line = self._ReadLine()
if line.startswith('</Configuration>'):
break
if found_tool:
if line.startswith('/>'):
found_tool = False
found_tool_compiler = False
found_tool_librarian = False
found_tool_linker = False
elif found_tool_compiler:
self._ParseConfigurationOptions(
project_configuration, self._TOOL_COMPILER_CONFIGURATION_OPTIONS,
line)
if isinstance(
project_configuration.include_directories, str):
project_configuration.include_directories = (
project_configuration.include_directories.split(';'))
elif found_tool_librarian:
self._ParseConfigurationOptions(
project_configuration, self._TOOL_LIBRARIAN_CONFIGURATION_OPTIONS,
line)
elif found_tool_linker:
self._ParseConfigurationOptions(
project_configuration, self._TOOL_LINKER_CONFIGURATION_OPTIONS,
line)
additional_dependencies = (
project_configuration.additional_dependencies)
if isinstance(additional_dependencies, str):
# pylint: disable=no-member
additional_dependencies = additional_dependencies.split(' ')
project_configuration.additional_dependencies = []
for dependency in additional_dependencies:
            dependency = dependency.replace('$(ConfigurationName)', '$(Configuration)')
project_configuration.additional_dependencies.append(dependency)
if isinstance(
project_configuration.library_directories, str):
project_configuration.library_directories = (
project_configuration.library_directories.split(';'))
elif line.startswith('Name="VCCLCompilerTool"'):
found_tool_compiler = True
elif line.startswith('Name="VCLibrarianTool"'):
found_tool_librarian = True
elif line.startswith('Name="VCLinkerTool"'):
found_tool_linker = True
elif line.startswith('<Tool'):
found_tool = True
elif line.startswith('Name='):
        # findall returns a list of tuples when the pattern has multiple groups; take the first match.
values = re.findall('Name="([^|]*)[|]([^"]*)"', line)[0]
if len(values) == 2:
project_configuration.name = values[0]
project_configuration.platform = values[1]
else:
self._ParseConfigurationOptions(
project_configuration, self._CONFIGURATION_OPTIONS, line)
# TODO: PlatformToolset.
# TargetFrameworkVersion ?
# Add the target machine when not defined.
if not project_configuration.target_machine:
if project_configuration.platform == 'Win32':
project_configuration.target_machine = '1'
# TODO: assuming here that 2 is x64.
elif project_configuration.platform == 'x64':
project_configuration.target_machine = '2'
return project_configuration
def _ReadConfigurations(self, project_information):
"""Reads the configurations.
Args:
project_information (VSProjectInformation): project information.
"""
# Find the start of the configurations section.
result = False
line = self._ReadLine()
while line:
result = line.startswith('<Configurations>')
if result:
break
line = self._ReadLine()
if not result:
return
while line:
line = self._ReadLine()
if line.startswith('</Configurations>'):
break
if line.startswith('<Configuration'):
project_configuration = self._ReadConfiguration(line)
if project_configuration:
project_information.configurations.Append(project_configuration)
def _ReadFiles(self, project_information):
"""Reads the files.
Args:
project_information (VSProjectInformation): project information.
"""
# Find the start of the files section.
result = False
line = self._ReadLine()
while line:
result = line.startswith('<Files>')
if result:
break
line = self._ReadLine()
if result:
found_filter = False
found_filter_source_files = False
found_filter_header_files = False
found_filter_resource_files = False
while line:
line = self._ReadLine()
if line.startswith('</Files>'):
break
if found_filter:
if line.startswith('</Filter>'):
found_filter = False
found_filter_source_files = False
found_filter_header_files = False
found_filter_resource_files = False
elif found_filter_source_files:
if line.startswith('RelativePath='):
values = re.findall('RelativePath="([^"]*)"', line)
if len(values) == 1:
project_information.source_files.append(values[0])
elif found_filter_header_files:
if line.startswith('RelativePath='):
values = re.findall('RelativePath="([^"]*)"', line)
if len(values) == 1:
project_information.header_files.append(values[0])
elif found_filter_resource_files:
if line.startswith('RelativePath='):
values = re.findall('RelativePath="([^"]*)"', line)
if len(values) == 1:
project_information.resource_files.append(values[0])
elif line.startswith('Name="Source Files"'):
found_filter_source_files = True
elif line.startswith('Name="Header Files"'):
found_filter_header_files = True
elif line.startswith('Name="Resource Files"'):
found_filter_resource_files = True
elif line.startswith('<Filter'):
found_filter = True
def _ReadProjectInformation(self, project_information):
"""Reads project information.
Args:
project_information (VSProjectInformation): project information.
"""
line = self._ReadLine()
while line:
if line.startswith('>'):
break
if line.startswith('Name='):
values = re.findall('Name="([^"]*)"', line)
if len(values) == 1:
project_information.name = values[0]
elif line.startswith('ProjectGUID='):
values = re.findall('ProjectGUID="{([^}]*)}"', line)
if len(values) == 1:
project_information.guid = values[0]
elif line.startswith('RootNamespace='):
values = re.findall('RootNamespace="([^"]*)"', line)
if len(values) == 1:
project_information.root_name_space = values[0]
elif line.startswith('Keyword='):
values = re.findall('Keyword="([^"]*)"', line)
if len(values) == 1:
project_information.keyword = values[0]
line = self._ReadLine()
def ReadHeader(self):
"""Reads a file header.
Returns:
bool: True if successful or false otherwise.
"""
# TODO check encoding?
line = self._ReadLine()
if not line or not line.startswith('<?xml version="1.0"'):
return False
line = self._ReadLine()
if not line or not line.startswith('<VisualStudioProject'):
return False
line = self._ReadLine()
if not line or not line.startswith('ProjectType="Visual C++"'):
return False
line = self._ReadLine()
if not line or not line.startswith('Version="9,00"'):
return False
return True
def ReadProject(self):
"""Reads the project.
Returns:
VSProjectInformation: project information if successful or None otherwise.
"""
project_information = resources.VSProjectInformation()
self._ReadProjectInformation(project_information)
self._ReadConfigurations(project_information)
self._ReadFiles(project_information)
return project_information
class VS2010ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2010 project file reader."""
# TODO: implement.
class VS2012ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2012 project file reader."""
# TODO: implement.
class VS2013ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2013 project file reader."""
# TODO: implement.
class VS2015ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2015 project file reader."""
# TODO: implement.
class VS2017ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2017 project file reader."""
# TODO: implement.
class VS2019ProjectFileReader(VSProjectFileReader):
"""Visual Studio 2019 project file reader."""
# TODO: implement.
class VSSolutionFileReader(FileReader):
"""Visual Studio solution file reader."""
# Note that redundant-returns-doc is broken for pylint 1.7.x for abstract
# methods
# pylint: disable=redundant-returns-doc
@abc.abstractmethod
def _CheckFormatVersion(self, line):
"""Checks the format version.
Args:
line (str): line containing the Visual Studio format version.
Returns:
bool: True if successful or false otherwise.
"""
# pylint: disable=unused-argument
def _CheckVisualStudioVersion(self, line):
"""Checks the Visual Studio version.
Args:
line (str): line containing the Visual Studio format version.
Returns:
bool: True if successful or false otherwise.
"""
return False
def ReadConfigurations(self):
"""Reads the configurations.
Returns:
VSConfigurations: configurations or None if not available.
"""
solution_configurations = resources.VSConfigurations()
line = self._ReadLine(look_ahead=True)
if not line or line != 'Global':
return None
found_section = False
line = self._ReadLine()
while line and line != 'EndGlobal':
line = self._ReadLine()
if found_section:
if line == 'EndGlobalSection':
found_section = False
else:
          # findall returns a list of tuples when the pattern has multiple groups; take the first match.
values = re.findall('([^|]*)[|]([^ ]*) = ([^|]*)[|]([^ ]*)', line)
if len(values) == 1:
values = values[0]
if (len(values) == 4 and values[0] == values[2] and
values[1] == values[3]):
configuration = resources.VSSolutionConfiguration()
configuration.name = values[0]
configuration.platform = values[1]
solution_configurations.Append(configuration)
elif line == ('GlobalSection(SolutionConfigurationPlatforms) = '
'preSolution'):
found_section = True
return solution_configurations
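  # Illustrative example (not part of the library): inside the
  # GlobalSection(SolutionConfigurationPlatforms) = preSolution block, entries
  # look like 'Debug|Win32 = Debug|Win32'. The regex above captures
  # ('Debug', 'Win32', 'Debug', 'Win32'); because groups 0/2 and 1/3 agree, a
  # VSSolutionConfiguration with name 'Debug' and platform 'Win32' is appended.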
def ReadHeader(self):
"""Reads a file header.
Returns:
bool: True if successful or false otherwise.
"""
binary_data = self._ReadBinaryData(5)
if binary_data != b'\xef\xbb\xbf\r\n':
return False
line = self._ReadLine()
if not line or not line.startswith(
'Microsoft Visual Studio Solution File, Format Version '):
return False
if not self._CheckFormatVersion(line):
return False
visual_studio_version_line = None
line = self._ReadLine(look_ahead=True)
while line:
      if line.startswith('# Visual C++') or line.startswith('# Visual Studio'):
        visual_studio_version_line = line
        break
      self._ReadLine()
      line = self._ReadLine(look_ahead=True)
    if visual_studio_version_line and not self._CheckVisualStudioVersion(
        visual_studio_version_line):
      return False
    return True
<reponame>JasperJuergensen/elastalert
import copy
import logging
from datetime import datetime, timedelta
from typing import List, Tuple, Union
from elastalert import config
from elastalert.clients import ElasticSearchClient
from elastalert.exceptions import EARuntimeException
from elastalert.queries import BaseQuery
from elastalert.utils.arithmetic import gcd
from elastalert.utils.time import dt_to_ts, pretty_ts, total_seconds, ts_now
from elastalert.utils.util import (
add_raw_postfix,
elasticsearch_client,
get_starttime,
lookup_es_key,
set_es_key,
should_scrolling_continue,
)
from elasticsearch import ElasticsearchException, NotFoundError
log = logging.getLogger(__name__)
class ElasticsearchQuery(BaseQuery):
""""""
def __init__(
self,
rule_config: dict,
callback: callable,
persistent: dict,
es: ElasticSearchClient = None,
):
super().__init__(rule_config, callback, persistent)
self.scroll_id = None
self.total_hits = 0
self.num_hits = 0
self.num_dupes = 0
self.persistent.setdefault("processed_hits", {})
if not es:
es = elasticsearch_client(config.CFG().es_client)
self.es = es
def build_query(self, sort: bool = True):
self.query = {"query": {"bool": {"filter": self.rule_config.get("filter", [])}}}
if sort:
self.query["sort"] = [self.rule_config.get("timestamp_field", "@timestamp")]
def get_hits(self, starttime: datetime, endtime: datetime) -> List[dict]:
if starttime and endtime:
query_starttime = self.rule_config.get("dt_to_ts", dt_to_ts)(starttime)
query_endtime = self.rule_config.get("dt_to_ts", dt_to_ts)(endtime)
self.query["query"]["bool"].update(
{
"must": {
"range": {
self.rule_config.get("timestamp_field", "@timestamp"): {
"gt": query_starttime,
"lte": query_endtime,
}
}
}
}
)
extra_args = {}
if self.rule_config.get("_source_enabled"):
extra_args = {"_source_includes": self.rule_config["include"]}
else:
self.query["stored_fields"] = self.rule_config["include"]
scroll_keepalive = self.rule_config.get(
"scroll_keepalive", config.CFG().scroll_keepalive
)
try:
log.debug("Running query: %s", self.query)
if self.scroll_id:
res = self.es.scroll(scroll_id=self.scroll_id, scroll=scroll_keepalive)
else:
res = self.es.search(
index=self.rule_config["index"],
body=self.query,
ignore_unavailable=True,
size=self.rule_config.get(
"max_query_size", config.CFG().max_query_size
),
scroll=scroll_keepalive,
**extra_args,
)
self.total_hits = int(res["hits"]["total"]["value"])
except ElasticsearchException as e:
# Elasticsearch sometimes gives us GIGANTIC error messages
# (so big that they will fill the entire terminal buffer)
if len(str(e)) > 1024:
msg = str(e)[:1024] + "... (%d characters removed)" % (
len(str(e)) - 1024
)
else:
msg = str(e)
raise EARuntimeException(
"Error running query %s" % msg,
rule=self.rule_config["name"],
query=self.query,
)
if "_scroll_id" in res:
# The scroll_id can change after every request (scroll and search)
self.scroll_id = res["_scroll_id"]
if len(res.get("_shards", {}).get("failures", [])) > 0:
try:
errs = [
e["reason"]["reason"]
for e in res["_shards"]["failures"]
if "Failed to parse" in e["reason"]["reason"]
]
if len(errs):
raise EARuntimeException(
"\n".join(errs), rule=self.rule_config["name"], query=self.query
)
except (TypeError, KeyError) as e:
raise EARuntimeException(
str(res["_shards"]["failures"]),
rule=self.rule_config["name"],
query=self.query,
original_exception=e,
)
hits = res["hits"]["hits"]
self.num_hits += len(hits)
lt = self.rule_config.get("use_local_time")
log.info(
"Queried rule %s on %s from %s to %s: %s / %s hits",
self.rule_config["name"],
self.rule_config["index"],
pretty_ts(starttime, lt),
pretty_ts(endtime, lt),
len(hits),
self.num_hits,
)
return self.process_hits(self.rule_config, hits)
def set_starttime(self, endtime) -> datetime:
# if it is the first run
if self.rule_config[
"type"
].previous_endtime is None and not self.rule_config.get(
"scan_entire_timeframe"
):
# try to get last endtime from es
last_run_end = get_starttime(self.rule_config)
if last_run_end:
return last_run_end
if self.rule_config.get("scan_entire_timeframe"):
starttime = endtime - self.rule_config["timeframe"]
else:
starttime = endtime - self.rule_config.get(
"buffer_time", config.CFG().buffer_time
)
        # Return the calculated starttime or previous_endtime, whichever is older. If previous_endtime is None,
        # fall back to the current time, which is always newer than starttime.
return min(starttime, self.rule_config["type"].previous_endtime or ts_now())
def get_segment_size(self) -> timedelta:
return self.rule_config.get("buffer_time", config.CFG().buffer_time)
def run_query(self, starttime: datetime, endtime: datetime) -> int:
data = self.get_hits(starttime, endtime)
if data:
unique_data = self.remove_duplicates(data)
self.num_dupes += len(data) - len(unique_data)
if unique_data:
self.callback(unique_data)
try:
if (
self.scroll_id
and self.num_hits < self.total_hits
and should_scrolling_continue(self.rule_config)
):
self.run_query(starttime, endtime)
except RuntimeError:
# It's possible to scroll far enough to hit max recursive depth
log.warning("Scrolling hit maximum recursion depth.")
if self.scroll_id:
try:
self.es.clear_scroll(scroll_id=self.scroll_id)
except NotFoundError:
pass
self.scroll_id = None
return self.num_hits
def remove_duplicates(self, data: List[dict]) -> List[dict]:
new_events = []
# TODO find better way of removing duplicates for aggregations where no _id is in dict
if type(data) == list:
for event in data:
if event["_id"] in self.persistent["processed_hits"]:
continue
# Remember the new data's IDs
self.persistent["processed_hits"][event["_id"]] = lookup_es_key(
event, self.rule_config.get("timestamp_field", "@timestamp")
)
new_events.append(event)
else:
new_events = data
return new_events
@staticmethod
def process_hits(rule_config: dict, hits) -> List[dict]:
"""
Update the _source field for each hit received from ES based on the rule configuration.
This replaces timestamps with datetime objects,
folds important fields into _source and creates compound query_keys.
:return: A list of processed _source dictionaries.
"""
processed_hits = []
for hit in hits:
# Merge fields and _source
hit.setdefault("_source", {})
for key, value in list(hit.get("fields", {}).items()):
# Fields are returned as lists, assume any with length 1 are not arrays in _source
# Except sometimes they aren't lists. This is dependent on ES version
hit["_source"].setdefault(
key, value[0] if type(value) is list and len(value) == 1 else value
)
# Convert the timestamp to a datetime
ts = lookup_es_key(hit["_source"], rule_config["timestamp_field"])
if not ts and not rule_config["_source_enabled"]:
raise EARuntimeException(
"Error: No timestamp was found for hit. '_source_enabled' is set to false, "
"check your mappings for stored fields"
)
set_es_key(
hit["_source"],
rule_config["timestamp_field"],
rule_config["ts_to_dt"](ts),
)
set_es_key(
hit,
rule_config["timestamp_field"],
lookup_es_key(hit["_source"], rule_config["timestamp_field"]),
)
# Tack metadata fields into _source
for field in ["_id", "_index", "_type"]:
if field in hit:
hit["_source"][field] = hit[field]
if rule_config.get("compound_query_key"):
values = [
lookup_es_key(hit["_source"], key)
for key in rule_config["compound_query_key"]
]
hit["_source"][rule_config["query_key"]] = ", ".join(
[str(value) for value in values]
)
if rule_config.get("compound_aggregation_key"):
values = [
lookup_es_key(hit["_source"], key)
for key in rule_config["compound_aggregation_key"]
]
hit["_source"][rule_config["aggregation_key"]] = ", ".join(
[str(value) for value in values]
)
processed_hits.append(hit["_source"])
return processed_hits
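    # Illustrative example (not part of the library): with a hypothetical rule
    # config {'timestamp_field': '@timestamp', 'ts_to_dt': <parser>,
    # '_source_enabled': True} and a raw hit
    #   {'_id': '1', '_index': 'logs', '_type': '_doc',
    #    '_source': {'@timestamp': '2020-01-01T00:00:00Z', 'host': 'web-1'}},
    # process_hits returns [{'@timestamp': datetime(...), 'host': 'web-1',
    # '_id': '1', '_index': 'logs', '_type': '_doc'}], i.e. the timestamp is
    # parsed to a datetime and the metadata fields are folded into _source.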
class ElasticsearchCountQuery(ElasticsearchQuery):
def build_query(self, **kwargs):
super().build_query(False)
def set_starttime(self, endtime) -> datetime:
starttime = super().set_starttime(endtime)
if self.rule_config["type"].previous_endtime and not self.rule_config.get(
"scan_entire_timeframe"
):
# in this case it differs from a normal es query
starttime = endtime - config.CFG().run_every
return starttime
def get_segment_size(self) -> timedelta:
return config.CFG().run_every
def get_hits(self, starttime: datetime, endtime: datetime) -> Union[dict, None]:
query = copy.deepcopy(self.query)
query_starttime = self.rule_config.get("dt_to_ts", dt_to_ts)(starttime)
query_endtime = self.rule_config.get("dt_to_ts", dt_to_ts)(endtime)
if starttime and endtime:
self.query["query"]["bool"].update(
{
"must": {
"range": {
self.rule_config.get("timestamp_field", "@timestamp"): {
"gt": query_starttime,
"lte": query_endtime,
}
}
}
}
)
try:
log.debug("Running query: %s", self.query)
res = self.es.count(
index=self.rule_config["index"],
body=self.query,
ignore_unavailable=True,
)
except ElasticsearchException as e:
# Elasticsearch sometimes gives us GIGANTIC error messages
# (so big that they will fill the entire terminal buffer)
if len(str(e)) > 1024:
msg = str(e)[:1024] + "... (%d characters removed)" % (
len(str(e)) - 1024
)
else:
msg = str(e)
raise EARuntimeException(
"Error running count query %s" % msg,
rule=self.rule_config["name"],
query=query,
original_exception=e,
)
self.num_hits += res["count"]
lt = self.rule_config.get("use_local_time")
log.info(
"Queried rule %s from %s to %s: %s buckets",
self.rule_config["name"],
pretty_ts(starttime, lt),
pretty_ts(endtime, lt),
res["count"],
)
return {endtime: res["count"]}
class ElasticsearchTermQuery(ElasticsearchQuery):
def set_starttime(self, endtime) -> datetime:
starttime = super().set_starttime(endtime)
if self.rule_config["type"].previous_endtime and not self.rule_config.get(
"scan_entire_timeframe"
):
# in this case it differs from a normal es query
starttime = endtime - config.CFG().run_every
return starttime
def get_segment_size(self) -> timedelta:
return config.CFG().run_every
def build_query(self, **kwargs):
super().build_query(False)
if self.rule_config.get("raw_query_key", True):
            key = self.rule_config["query_key"]
else:
key = add_raw_postfix(self.rule_config["query_key"], True)
self.query.update(
{
"aggs": {
"counts": {
"terms": {
"field": key,
"size": self.rule_config.get("terms_size", 50),
"min_doc_count": self.rule_config.get("min_doc_count", 1),
}
}
}
}
)
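        # Illustrative example (not part of the library): for a hypothetical
        # rule with query_key 'host', raw_query_key True and default terms_size,
        # the assembled body has the shape
        #   {'query': {'bool': {'filter': [...]}},
        #    'aggs': {'counts': {'terms': {'field': 'host',
        #                                  'size': 50, 'min_doc_count': 1}}}}
        # get_hits() later adds the 'range' clause under query.bool.must for the
        # current (starttime, endtime] window.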
def get_hits(self, starttime: datetime, endtime: datetime) -> Union[dict, None]:
if starttime and endtime:
query_starttime = self.rule_config.get("dt_to_ts", dt_to_ts)(starttime)
query_endtime = self.rule_config.get("dt_to_ts", dt_to_ts)(endtime)
self.query["query"]["bool"].update(
{
"must": {
"range": {
self.rule_config.get("timestamp_field", "@timestamp"): {
"gt": query_starttime,
"lte": query_endtime,
}
}
}
}
)
try:
log.debug("Running query: %s", self.query)
res = self.es.search(
index=self.rule_config["index"],
body=self.query,
ignore_unavailable=True,
)
except ElasticsearchException as e:
# Elasticsearch sometimes gives us GIGANTIC error messages
# (so big that they will fill the entire terminal buffer)
if len(str(e)) > 1024:
msg = str(e)[:1024] + "... (%d characters removed)" % (
len(str(e)) - 1024
)
else:
msg = str(e)
raise EARuntimeException(
"Error running terms query %s" % msg,
rule=self.rule_config["name"],
query=self.query,
original_exception=e,
)
if "aggregations" not in res:
log.info("Missing aggregations result: %s", res)
return None
buckets = res["aggregations"]["counts"]["buckets"]
self.num_hits += len(buckets)
lt = self.rule_config.get("use_local_time")
log.info(
"Queried rule %s from %s to %s: %s buckets",
self.rule_config["name"],
pretty_ts(starttime, lt),
pretty_ts(endtime, lt),
len(buckets),
)
return {endtime: buckets}
def remove_duplicates(self, data: List[dict]) -> List[dict]:
return data
class ElasticsearchAggregationQuery(ElasticsearchQuery):
def run(self, endtime: datetime) -> Tuple[datetime, datetime, int]:
if self.rule_config.get("initial_starttime"):
starttime = self.rule_config["initial_starttime"]
else:
starttime = self.set_starttime(endtime)
original_starttime = starttime
cumulative_hits = 0
segment_size = self.get_segment_size()
tmp_endtime = starttime
while endtime - tmp_endtime >= segment_size:
tmp_endtime += segment_size
cumulative_hits += self.run_query(starttime, tmp_endtime)
starttime = tmp_endtime
self.rule_config["type"].garbage_collect(tmp_endtime)
if total_seconds(original_starttime - tmp_endtime) != 0:
endtime = tmp_endtime
return original_starttime, endtime, cumulative_hits
def set_starttime(self, endtime) -> datetime:
starttime = super().set_starttime(endtime)
starttime = self.adjust_start_time_for_overlapping_agg_query(starttime)
starttime = self.adjust_start_time_for_bucket_interval_sync(starttime)
        return starttime
# -*- coding: utf-8 -*-
#
# ***********************************************************************************
# MIT License
#
# Copyright (c) 2020 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is furnished
# to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
# PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
# CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
# OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# ***********************************************************************************
from .adl_structures_h import * # NOQA
# Copyright (c) 2016 Advanced Micro Devices, Inc. All rights reserved.
# MIT LICENSE:
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
ADL2_ADAPTER_ACTIVE_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(INT)
)
ADL_ADAPTER_ACTIVE_SET = _int(
INT,
INT,
POINTER(INT)
)
ADL2_ADAPTER_ACTIVE_SETPREFER = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
POINTER(ADLDisplayTarget),
POINTER(INT)
)
ADL_ADAPTER_ACTIVE_SETPREFER = _int(
INT,
INT,
INT,
POINTER(ADLDisplayTarget),
POINTER(INT)
)
ADL2_ADAPTER_PRIMARY_GET = _int(
ADL_CONTEXT_HANDLE,
POINTER(INT)
)
ADL_ADAPTER_PRIMARY_GET = _int(
POINTER(INT)
)
ADL2_ADAPTER_PRIMARY_SET = _int(
ADL_CONTEXT_HANDLE,
INT
)
ADL_ADAPTER_PRIMARY_SET = _int(
INT
)
ADL2_ADAPTER_MODESWITCH = _int(
ADL_CONTEXT_HANDLE,
INT
)
ADL_ADAPTER_MODESWITCH = _int(
INT
)
ADL2_DISPLAY_MODES_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL_DISPLAY_MODES_GET = _int(
INT,
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL2_DISPLAY_MODES_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
POINTER(ADLMode)
)
ADL_DISPLAY_MODES_SET = _int(
INT,
INT,
INT,
POINTER(ADLMode)
)
ADL2_DISPLAY_POSSIBLEMODE_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL_DISPLAY_POSSIBLEMODE_GET = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL2_DISPLAY_FORCIBLEDISPLAY_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(INT)
)
ADL_DISPLAY_FORCIBLEDISPLAY_GET = _int(
INT,
INT,
POINTER(INT)
)
ADL2_DISPLAY_FORCIBLEDISPLAY_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT
)
ADL_DISPLAY_FORCIBLEDISPLAY_SET = _int(
INT,
INT,
INT
)
ADL2_ADAPTER_NUMBEROFACTIVATABLESOURCES_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLActivatableSource))
)
ADL_ADAPTER_NUMBEROFACTIVATABLESOURCES_GET = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLActivatableSource))
)
ADL2_ADAPTER_DISPLAY_CAPS = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLAdapterDisplayCap))
)
ADL_ADAPTER_DISPLAY_CAPS = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLAdapterDisplayCap))
)
ADL2_DISPLAY_DISPLAYMAPCONFIG_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLDisplayMap)),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget)),
INT
)
ADL_DISPLAY_DISPLAYMAPCONFIG_GET = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLDisplayMap)),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget)),
INT
)
ADL2_DISPLAY_DISPLAYMAPCONFIG_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLDisplayMap),
INT,
POINTER(ADLDisplayTarget)
)
ADL_DISPLAY_DISPLAYMAPCONFIG_SET = _int(
INT,
INT,
POINTER(ADLDisplayMap),
INT,
POINTER(ADLDisplayTarget)
)
ADL2_DISPLAY_POSSIBLEMAPPING_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLPossibleMapping),
INT,
POINTER(INT),
POINTER(POINTER(ADLPossibleMapping))
)
ADL_DISPLAY_POSSIBLEMAPPING_GET = _int(
INT,
INT,
POINTER(ADLPossibleMapping),
INT,
POINTER(INT),
POINTER(POINTER(ADLPossibleMapping))
)
ADL2_DISPLAY_DISPLAYMAPCONFIG_VALIDATE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLPossibleMap),
POINTER(INT),
POINTER(POINTER(ADLPossibleMapResult))
)
ADL_DISPLAY_DISPLAYMAPCONFIG_VALIDATE = _int(
INT,
INT,
POINTER(ADLPossibleMap),
POINTER(INT),
POINTER(POINTER(ADLPossibleMapResult))
)
ADL2_DISPLAY_DISPLAYMAPCONFIG_POSSIBLEADDANDREMOVE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLDisplayMap),
INT,
POINTER(ADLDisplayTarget),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget)),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget))
)
ADL_DISPLAY_DISPLAYMAPCONFIG_POSSIBLEADDANDREMOVE = _int(
INT,
INT,
POINTER(ADLDisplayMap),
INT,
POINTER(ADLDisplayTarget),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget)),
POINTER(INT),
POINTER(POINTER(ADLDisplayTarget))
)
ADL2_DISPLAY_SLSGRID_CAPS = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLSLSGrid)),
INT
)
ADL_DISPLAY_SLSGRID_CAPS = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLSLSGrid)),
INT
)
ADL2_DISPLAY_SLSMAPINDEXLIST_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(INT)),
INT
)
ADL_DISPLAY_SLSMAPINDEXLIST_GET = _int(
INT,
POINTER(INT),
POINTER(POINTER(INT)),
INT
)
ADL2_DISPLAY_SLSMAPINDEX_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLDisplayTarget),
POINTER(INT)
)
ADL_DISPLAY_SLSMAPINDEX_GET = _int(
INT,
INT,
POINTER(ADLDisplayTarget),
POINTER(INT)
)
ADL2_DISPLAY_SLSMAPCONFIGX2_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLSLSMap),
POINTER(INT),
POINTER(POINTER(ADLSLSTarget)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
INT
)
ADL_DISPLAY_SLSMAPCONFIGX2_GET = _int(
INT,
INT,
POINTER(ADLSLSMap),
POINTER(INT),
POINTER(POINTER(ADLSLSTarget)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
INT
)
ADL2_DISPLAY_SLSMAPCONFIG_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLSLSMap),
POINTER(INT),
POINTER(POINTER(ADLSLSTarget)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
INT
)
ADL_DISPLAY_SLSMAPCONFIG_GET = _int(
INT,
INT,
POINTER(ADLSLSMap),
POINTER(INT),
POINTER(POINTER(ADLSLSTarget)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLBezelTransientMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSOffset)),
INT
)
ADL2_DISPLAY_SLSMAPCONFIG_CREATE = _int(
ADL_CONTEXT_HANDLE,
INT,
ADLSLSMap,
INT,
POINTER(ADLSLSTarget),
INT,
POINTER(INT),
INT
)
ADL_DISPLAY_SLSMAPCONFIG_CREATE = _int(
INT,
ADLSLSMap,
INT,
POINTER(ADLSLSTarget),
INT,
POINTER(INT),
INT
)
ADL2_DISPLAY_SLSMAPCONFIG_DELETE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT
)
ADL_DISPLAY_SLSMAPCONFIG_DELETE = _int(
INT,
INT
)
ADL2_DISPLAY_SLSMAPCONFIGX2_DELETE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(INT)
)
ADL2_DISPLAY_SLSMAPCONFIG_SETSTATE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT
)
ADL_DISPLAY_SLSMAPCONFIG_SETSTATE = _int(
INT,
INT,
INT
)
ADL2_DISPLAY_SLSMAPCONFIG_REARRANGE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
POINTER(ADLSLSTarget),
ADLSLSMap,
INT
)
ADL_DISPLAY_SLSMAPCONFIG_REARRANGE = _int(
INT,
INT,
INT,
POINTER(ADLSLSTarget),
ADLSLSMap,
INT
)
ADL2_DISPLAY_POSSIBLEMODE_WINXP_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(ADLDisplayTarget),
INT,
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL_DISPLAY_POSSIBLEMODE_WINXP_GET = _int(
INT,
INT,
POINTER(ADLDisplayTarget),
INT,
INT,
POINTER(INT),
POINTER(POINTER(ADLMode))
)
ADL2_DISPLAY_BEZELOFFSETSTEPPINGSIZE_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT),
POINTER(POINTER(ADLBezelOffsetSteppingSize))
)
ADL_DISPLAY_BEZELOFFSETSTEPPINGSIZE_GET = _int(
INT,
POINTER(INT),
POINTER(POINTER(ADLBezelOffsetSteppingSize))
)
ADL2_DISPLAY_BEZELOFFSET_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
LPADLSLSOffset,
ADLSLSMap,
INT
)
ADL_DISPLAY_BEZELOFFSET_SET = _int(
INT,
INT,
INT,
LPADLSLSOffset,
ADLSLSMap,
INT
)
ADL2_DISPLAY_BEZELSUPPORTED_VALIDATE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
LPADLPossibleSLSMap,
POINTER(INT),
POINTER(LPADLPossibleMapResult)
)
ADL_DISPLAY_BEZELSUPPORTED_VALIDATE = _int(
INT,
INT,
LPADLPossibleSLSMap,
POINTER(INT),
POINTER(LPADLPossibleMapResult)
)
ADL2_DISPLAY_OVERLAP_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
LPADLSLSTargetOverlap,
INT,
LPADLSLSOffset,
ADLSLSMap
)
ADL_DISPLAY_OVERLAP_SET = _int(
INT,
INT,
INT,
LPADLSLSTargetOverlap,
INT,
LPADLSLSOffset,
ADLSLSMap
)
ADL2_DISPLAY_SLSMIDDLEMODE_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
POINTER(INT),
POINTER(INT),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
INT
)
ADL_DISPLAY_SLSMIDDLEMODE_GET = _int(
INT,
INT,
POINTER(INT),
POINTER(INT),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
POINTER(INT),
POINTER(POINTER(ADLSLSMode)),
INT
)
ADL2_DISPLAY_SLSMIDDLEMODE_SET = _int(
ADL_CONTEXT_HANDLE,
INT,
INT,
INT,
POINTER(ADLSLSMode),
INT
)
ADL_DISPLAY_SLSMIDDLEMODE_SET = _int(
INT,
INT,
INT,
POINTER(ADLSLSMode),
INT
)
ADL2_WORKSTATION_ENABLEUNSUPPORTEDDISPLAYMODES = _int(
ADL_CONTEXT_HANDLE,
INT
)
ADL_WORKSTATION_ENABLEUNSUPPORTEDDISPLAYMODES = _int(
INT
)
ADL2_DISPLAY_SLSRECORDS_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
ADLDisplayID,
POINTER(INT),
POINTER(POINTER(INT))
)
ADL_DISPLAY_SLSRECORDS_GET = _int(
INT,
ADLDisplayID,
POINTER(INT),
POINTER(POINTER(INT))
)
ADL2_ADAPTER_AMDANDNONAMDDISPLAYCLONE_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT)
)
ADL2_ADAPTER_CLONETYPES_GET = _int(
ADL_CONTEXT_HANDLE,
INT,
POINTER(INT)
)
ADL2_ADAPTER_CROSSGPUCLONE_DISABLE = _int(
ADL_CONTEXT_HANDLE,
INT,
INT
)
# Function to set the current extended desktop mode status for a display.
_ADL2_Adapter_Active_Set = ADL2_ADAPTER_ACTIVE_SET
# Function to set the current extended desktop mode status for a display.
_ADL_Adapter_Active_Set = ADL_ADAPTER_ACTIVE_SET
# Function to set the current extended desktop mode status for the display.
_ADL2_Adapter_Active_SetPrefer = ADL2_ADAPTER_ACTIVE_SETPREFER
# Function to set the current extended desktop mode status for the display.
_ADL_Adapter_Active_SetPrefer = ADL_ADAPTER_ACTIVE_SETPREFER
# Function to retrieve the primary display adapter index.
_ADL2_Adapter_Primary_Get = ADL2_ADAPTER_PRIMARY_GET
# Function to retrieve the primary display adapter index.
_ADL_Adapter_Primary_Get = ADL_ADAPTER_PRIMARY_GET
# Function to set the primary display adapter index.
_ADL2_Adapter_Primary_Set = ADL2_ADAPTER_PRIMARY_SET
# Function to set the primary display adapter index.
_ADL_Adapter_Primary_Set = ADL_ADAPTER_PRIMARY_SET
# Function to perform a mode switch for an adapter.
_ADL2_Adapter_ModeSwitch = ADL2_ADAPTER_MODESWITCH
# Function to perform a mode switch for an adapter.
_ADL_Adapter_ModeSwitch = ADL_ADAPTER_MODESWITCH
# Function to retrieve the display mode information.
_ADL2_Display_Modes_Get = ADL2_DISPLAY_MODES_GET
# Function to retrieve the display mode information.
_ADL_Display_Modes_Get = ADL_DISPLAY_MODES_GET
# Function to set display mode information.
_ADL2_Display_Modes_Set = ADL2_DISPLAY_MODES_SET
# Function to set display mode information.
_ADL_Display_Modes_Set = ADL_DISPLAY_MODES_SET
# Function to retrieve the OS possible modes list for an adapter (all OS platforms).
_ADL2_Display_PossibleMode_Get = ADL2_DISPLAY_POSSIBLEMODE_GET
# Function to retrieve the OS possible modes list for an adapter (all OS platforms).
_ADL_Display_PossibleMode_Get = ADL_DISPLAY_POSSIBLEMODE_GET
# Function to retrieve the forcible connected status of a display.
_ADL2_Display_ForcibleDisplay_Get = ADL2_DISPLAY_FORCIBLEDISPLAY_GET
# Function to retrieve the forcible connected status of a display.
_ADL_Display_ForcibleDisplay_Get = ADL_DISPLAY_FORCIBLEDISPLAY_GET
# Function to set the forcible connected status of a display.
_ADL2_Display_ForcibleDisplay_Set = ADL2_DISPLAY_FORCIBLEDISPLAY_SET
# Function to set the forcible connected status of a display.
_ADL_Display_ForcibleDisplay_Set = ADL_DISPLAY_FORCIBLEDISPLAY_SET
# Function to retrieve the number of Activatable sources based on ADL Index.
_ADL2_Adapter_NumberOfActivatableSources_Get = ADL2_ADAPTER_NUMBEROFACTIVATABLESOURCES_GET
# Function to retrieve the number of Activatable sources based on ADL Index.
_ADL_Adapter_NumberOfActivatableSources_Get = ADL_ADAPTER_NUMBEROFACTIVATABLESOURCES_GET
# Function to retrieve the adapter display manner capabilities based on ADL index.
_ADL2_Adapter_Display_Caps = ADL2_ADAPTER_DISPLAY_CAPS
# Function to retrieve the adapter display manner capabilities based on ADL index.
_ADL_Adapter_Display_Caps = ADL_ADAPTER_DISPLAY_CAPS
# Function to retrieve current display map configurations.
_ADL2_Display_DisplayMapConfig_Get = ADL2_DISPLAY_DISPLAYMAPCONFIG_GET
# Function to retrieve current display map configurations.
_ADL_Display_DisplayMapConfig_Get = ADL_DISPLAY_DISPLAYMAPCONFIG_GET
# Function to set the current display configuration.
_ADL2_Display_DisplayMapConfig_Set = ADL2_DISPLAY_DISPLAYMAPCONFIG_SET
# Function to set the current display configuration.
_ADL_Display_DisplayMapConfig_Set = ADL_DISPLAY_DISPLAYMAPCONFIG_SET
# Function to retrieve the possible display mappings.
_ADL2_Display_PossibleMapping_Get = ADL2_DISPLAY_POSSIBLEMAPPING_GET
# Function to retrieve the possible display mappings.
_ADL_Display_PossibleMapping_Get = ADL_DISPLAY_POSSIBLEMAPPING_GET
# Function to validate the list of the display configurations based on ADL Index.
_ADL2_Display_DisplayMapConfig_Validate = ADL2_DISPLAY_DISPLAYMAPCONFIG_VALIDATE
# Function to validate the list of the display configurations based on ADL Index.
_ADL_Display_DisplayMapConfig_Validate = ADL_DISPLAY_DISPLAYMAPCONFIG_VALIDATE
# Function to validate a list of display configurations.
_ADL2_Display_DisplayMapConfig_PossibleAddAndRemove = ADL2_DISPLAY_DISPLAYMAPCONFIG_POSSIBLEADDANDREMOVE
# Function to validate a list of display configurations.
_ADL_Display_DisplayMapConfig_PossibleAddAndRemove = ADL_DISPLAY_DISPLAYMAPCONFIG_POSSIBLEADDANDREMOVE
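# Illustrative sketch (not part of the original module): assuming the names above
# are ctypes function prototypes (e.g. built with ctypes.WINFUNCTYPE) and that INT
# maps to ctypes.c_int, a prototype is typically bound to the ADL shared library
# like this. The DLL name and the ADL_OK value (0) are assumptions about the AMD
# Display Library, not definitions taken from this file.
def _example_bind_adapter_primary_get():
    import ctypes
    adl = ctypes.CDLL('atiadlxx.dll')  # assumed ADL library name on 64-bit Windows
    # Calling a ctypes prototype with a (symbol_name, dll) tuple yields a callable.
    adapter_primary_get = _ADL_Adapter_Primary_Get(('ADL_Adapter_Primary_Get', adl))
    primary = INT(-1)
    status = adapter_primary_get(ctypes.byref(primary))
    return primary.value if status == 0 else None  # 0 is assumed to be ADL_OK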
GH_LayoutPanel,reader: GH_IReader) -> bool """
pass
def Sort(self):
""" Sort(self: GH_LayoutPanel) """
pass
def Write(self,writer):
""" Write(self: GH_LayoutPanel,writer: GH_IWriter) -> bool """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,*__args):
"""
__new__(cls: type)
__new__(cls: type,name: str)
__new__(cls: type,other: GH_LayoutPanel)
"""
pass
def __repr__(self,*args):
""" __repr__(self: object) -> str """
pass
Items=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: Items(self: GH_LayoutPanel) -> List[GH_LayoutItem]
"""
Name=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: Name(self: GH_LayoutPanel) -> str
Set: Name(self: GH_LayoutPanel)=value
"""
class GH_LayoutTab(object,GH_ISerializable,IComparable[GH_LayoutTab]):
"""
GH_LayoutTab()
GH_LayoutTab(name: str)
GH_LayoutTab(other: GH_LayoutTab)
"""
def AddPanel(self,name=None):
"""
AddPanel(self: GH_LayoutTab,name: str) -> GH_LayoutPanel
AddPanel(self: GH_LayoutTab) -> GH_LayoutPanel
"""
pass
def CompareTo(self,other):
""" CompareTo(self: GH_LayoutTab,other: GH_LayoutTab) -> int """
pass
def Deselect(self):
""" Deselect(self: GH_LayoutTab) """
pass
def Read(self,reader):
""" Read(self: GH_LayoutTab,reader: GH_IReader) -> bool """
pass
def SortPanels(self,order):
""" SortPanels(self: GH_LayoutTab,*order: Array[str]) """
pass
def Write(self,writer):
""" Write(self: GH_LayoutTab,writer: GH_IWriter) -> bool """
pass
def __init__(self,*args):
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod
def __new__(self,*__args):
"""
__new__(cls: type)
__new__(cls: type,name: str)
__new__(cls: type,other: GH_LayoutTab)
"""
pass
def __repr__(self,*args):
""" __repr__(self: object) -> str """
pass
Name=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: Name(self: GH_LayoutTab) -> str
Set: Name(self: GH_LayoutTab)=value
"""
Panels=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: Panels(self: GH_LayoutTab) -> List[GH_LayoutPanel]
"""
Selected=property(lambda self: object(),lambda self,v: None,lambda self: None)
"""Get: Selected(self: GH_LayoutTab) -> bool
Set: Selected(self: GH_LayoutTab)=value
"""
class GH_Ribbon(UserControl,IComponent,IDisposable,IOleControl,IOleObject,IOleInPlaceObject,IOleInPlaceActiveObject,IOleWindow,IViewObject,IViewObject2,IPersist,IPersistStreamInit,IPersistPropertyBag,IPersistStorage,IQuickActivate,ISupportOleDropSource,IDropTarget,ISynchronizeInvoke,IWin32Window,IArrangedElement,IBindableComponent,IContainerControl,IGH_FixedSizeControl):
""" GH_Ribbon() """
def AccessibilityNotifyClients(self,*args):
"""
AccessibilityNotifyClients(self: Control,accEvent: AccessibleEvents,objectID: int,childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control .
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
objectID: The identifier of the System.Windows.Forms.AccessibleObject.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
AccessibilityNotifyClients(self: Control,accEvent: AccessibleEvents,childID: int)
Notifies the accessibility client applications of the specified
System.Windows.Forms.AccessibleEvents for the specified child control.
accEvent: The System.Windows.Forms.AccessibleEvents to notify the accessibility client applications of.
childID: The child System.Windows.Forms.Control to notify of the accessible event.
"""
pass
def AddLayout(self,file,visible=None):
""" AddLayout(self: GH_Ribbon,file: str,visible: bool)AddLayout(self: GH_Ribbon,file: str) """
pass
def AdjustFormScrollbars(self,*args):
"""
AdjustFormScrollbars(self: ContainerControl,displayScrollbars: bool)
displayScrollbars: true to show the scroll bars; otherwise,false.
"""
pass
def AmIFocused(self,item):
""" AmIFocused(self: GH_Ribbon,item: GH_RibbonItem) -> bool """
pass
def CreateAccessibilityInstance(self,*args):
"""
CreateAccessibilityInstance(self: Control) -> AccessibleObject
Creates a new accessibility object for the control.
Returns: A new System.Windows.Forms.AccessibleObject for the control.
"""
pass
def CreateControlsInstance(self,*args):
"""
CreateControlsInstance(self: Control) -> ControlCollection
Creates a new instance of the control collection for the control.
Returns: A new instance of System.Windows.Forms.Control.ControlCollection assigned to the control.
"""
pass
def CreateHandle(self,*args):
"""
CreateHandle(self: Control)
Creates a handle for the control.
"""
pass
def CreateRibbonLayoutMenu(self):
""" CreateRibbonLayoutMenu(self: GH_Ribbon) -> ToolStripDropDown """
pass
def DefWndProc(self,*args):
"""
DefWndProc(self: Control,m: Message) -> Message
Sends the specified message to the default window procedure.
m: The Windows System.Windows.Forms.Message to process.
"""
pass
def DestroyHandle(self,*args):
"""
DestroyHandle(self: Control)
Destroys the handle associated with the control.
"""
pass
def DetermineFocusItem(self,*args):
""" DetermineFocusItem(self: GH_Ribbon,e: MouseEventArgs) -> GH_RibbonControlFocusResult """
pass
def Dispose(self):
""" Dispose(self: GH_Ribbon,disposing: bool) """
pass
def FindAndDisplayProxy(self,id,tabBox,iconbox,dropdown):
""" FindAndDisplayProxy(self: GH_Ribbon,id: Guid) -> (bool,Rectangle,Rectangle,GH_RibbonDropdown) """
pass
def GetAccessibilityObjectById(self,*args):
"""
GetAccessibilityObjectById(self: Control,objectId: int) -> AccessibleObject
Retrieves the specified System.Windows.Forms.AccessibleObject.
objectId: An Int32 that identifies the System.Windows.Forms.AccessibleObject to retrieve.
Returns: An System.Windows.Forms.AccessibleObject.
"""
pass
def GetAutoSizeMode(self,*args):
"""
GetAutoSizeMode(self: Control) -> AutoSizeMode
Retrieves a value indicating how a control will behave when its
System.Windows.Forms.Control.AutoSize property is enabled.
Returns: One of the System.Windows.Forms.AutoSizeMode values.
"""
pass
def GetScaledBounds(self,*args):
"""
GetScaledBounds(self: Control,bounds: Rectangle,factor: SizeF,specified: BoundsSpecified) -> Rectangle
Retrieves the bounds within which the control is scaled.
bounds: A System.Drawing.Rectangle that specifies the area for which to retrieve the display bounds.
factor: The height and width of the control's bounds.
specified: One of the values of System.Windows.Forms.BoundsSpecified that specifies the bounds of the
control to use when defining its size and position.
Returns: A System.Drawing.Rectangle representing the bounds within which the control is scaled.
"""
pass
def GetScrollState(self,*args):
"""
GetScrollState(self: ScrollableControl,bit: int) -> bool
Determines whether the specified flag has been set.
bit: The flag to check.
Returns: true if the specified flag has been set; otherwise,false.
"""
pass
def GetService(self,*args):
"""
GetService(self: Component,service: Type) -> object
Returns an object that represents a service provided by the System.ComponentModel.Component or
by its System.ComponentModel.Container.
service: A service provided by the System.ComponentModel.Component.
Returns: An System.Object that represents a service provided by the System.ComponentModel.Component,or
null if the System.ComponentModel.Component does not provide the specified service.
"""
pass
def GetStyle(self,*args):
"""
GetStyle(self: Control,flag: ControlStyles) -> bool
Retrieves the value of the specified control style bit for the control.
flag: The System.Windows.Forms.ControlStyles bit to return the value from.
Returns: true if the specified control style bit is set to true; otherwise,false.
"""
pass
def GetTopLevel(self,*args):
"""
GetTopLevel(self: Control) -> bool
Determines if the control is a top-level control.
Returns: true if the control is a top-level control; otherwise,false.
"""
pass
def HideLayout(self,path):
""" HideLayout(self: GH_Ribbon,path: str) """
pass
def InitLayout(self,*args):
"""
InitLayout(self: Control)
Called after the control has been added to another container.
"""
pass
def InvokeGotFocus(self,*args):
"""
InvokeGotFocus(self: Control,toInvoke: Control,e: EventArgs)
Raises the System.Windows.Forms.Control.GotFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeLostFocus(self,*args):
"""
InvokeLostFocus(self: Control,toInvoke: Control,e: EventArgs)
Raises the System.Windows.Forms.Control.LostFocus event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokeOnClick(self,*args):
"""
InvokeOnClick(self: Control,toInvoke: Control,e: EventArgs)
Raises the System.Windows.Forms.Control.Click event for the specified control.
toInvoke: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Click event to.
e: An System.EventArgs that contains the event data.
"""
pass
def InvokePaint(self,*args):
"""
InvokePaint(self: Control,c: Control,e: PaintEventArgs)
Raises the System.Windows.Forms.Control.Paint event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def InvokePaintBackground(self,*args):
"""
InvokePaintBackground(self: Control,c: Control,e: PaintEventArgs)
Raises the PaintBackground event for the specified control.
c: The System.Windows.Forms.Control to assign the System.Windows.Forms.Control.Paint event to.
e: An System.Windows.Forms.PaintEventArgs that contains the event data.
"""
pass
def IsInputChar(self,*args):
"""
IsInputChar(self: Control,charCode: Char) -> bool
Determines if a character is an input character that the control recognizes.
charCode: The character to test.
Returns: true if the character should be sent directly to the control and not preprocessed; otherwise,
false.
"""
pass
def IsInputKey(self,*args):
"""
IsInputKey(self: Control,keyData: Keys) -> bool
Determines whether the specified key is a regular input key or a special key that requires
preprocessing.
keyData: One of the System.Windows.Forms.Keys values.
Returns: true if the specified key is a regular input key; otherwise,false.
"""
pass
def IsLayoutLoaded(self,file):
""" IsLayoutLoaded(self: GH_Ribbon,file: str) -> bool """
pass
def IsLayoutVisible(self,file):
""" IsLayoutVisible(self: GH_Ribbon,file: str) -> bool """
pass
def LayoutRibbon(self):
""" LayoutRibbon(self: GH_Ribbon) """
pass
def MemberwiseClone(self,*args):
"""
MemberwiseClone(self: MarshalByRefObject,cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity,which will cause the
object to be assigned a new identity when it is marshaled across a remoting boundary. A value of
false is usually appropriate. true to copy the current System.MarshalByRefObject object's
identity to its clone,which will cause remoting client calls to be routed to the remote server
object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
def NearestHeight(self,h):
""" | |
import pdb
import sys
import torch
import numpy as np
import cv2
def write_calib(K,bl,shape,maxd,path):
str1 = 'camera.A=[%f 0 %f; 0 %f %f; 0 0 1]'%(K[0,0], K[0,2], K[1,1],K[1,2])
str2 = 'camera.height=%d'%(shape[0])
str3 = 'camera.width=%d' %(shape[1])
str4 = 'camera.zmax=%f'%(maxd)
str5 = 'rho=%f'%(bl*K[0,0])
with open(path,'w') as f:
f.write('%s\n%s\n%s\n%s\n%s'%(str1,str2,str3,str4,str5))
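# Illustrative sketch (not part of the original file): write a calibration file
# with made-up intrinsics, a 0.1 baseline and a 10.0 depth cap. The output path
# is arbitrary.
def _example_write_calib():
    K = np.array([[450., 0., 479.5],
                  [0., 450., 269.5],
                  [0., 0., 1.]])
    write_calib(K, bl=0.1, shape=(540, 960), maxd=10.0, path='/tmp/example_calib.txt')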
def create_ade20k_label_colormap():
"""Creates a label colormap used in ADE20K segmentation benchmark.
Returns:
A colormap for visualizing segmentation results.
"""
return np.asarray([
[0, 0, 0],
[120, 120, 120],
[180, 120, 120],
[6, 230, 230],
[80, 50, 50],
[4, 200, 3],
[120, 120, 80],
[140, 140, 140],
[204, 5, 255],
[230, 230, 230],
[4, 250, 7],
[224, 5, 255],
[235, 255, 7],
[150, 5, 61],
[120, 120, 70],
[8, 255, 51],
[255, 6, 82],
[143, 255, 140],
[204, 255, 4],
[255, 51, 7],
[204, 70, 3],
[0, 102, 200],
[61, 230, 250],
[255, 6, 51],
[11, 102, 255],
[255, 7, 71],
[255, 9, 224],
[9, 7, 230],
[220, 220, 220],
[255, 9, 92],
[112, 9, 255],
[8, 255, 214],
[7, 255, 224],
[255, 184, 6],
[10, 255, 71],
[255, 41, 10],
[7, 255, 255],
[224, 255, 8],
[102, 8, 255],
[255, 61, 6],
[255, 194, 7],
[255, 122, 8],
[0, 255, 20],
[255, 8, 41],
[255, 5, 153],
[6, 51, 255],
[235, 12, 255],
[160, 150, 20],
[0, 163, 255],
[140, 140, 140],
[250, 10, 15],
[20, 255, 0],
[31, 255, 0],
[255, 31, 0],
[255, 224, 0],
[153, 255, 0],
[0, 0, 255],
[255, 71, 0],
[0, 235, 255],
[0, 173, 255],
[31, 0, 255],
[11, 200, 200],
[255, 82, 0],
[0, 255, 245],
[0, 61, 255],
[0, 255, 112],
[0, 255, 133],
[255, 0, 0],
[255, 163, 0],
[255, 102, 0],
[194, 255, 0],
[0, 143, 255],
[51, 255, 0],
[0, 82, 255],
[0, 255, 41],
[0, 255, 173],
[10, 0, 255],
[173, 255, 0],
[0, 255, 153],
[255, 92, 0],
[255, 0, 255],
[255, 0, 245],
[255, 0, 102],
[255, 173, 0],
[255, 0, 20],
[255, 184, 184],
[0, 31, 255],
[0, 255, 61],
[0, 71, 255],
[255, 0, 204],
[0, 255, 194],
[0, 255, 82],
[0, 10, 255],
[0, 112, 255],
[51, 0, 255],
[0, 194, 255],
[0, 122, 255],
[0, 255, 163],
[255, 153, 0],
[0, 255, 10],
[255, 112, 0],
[143, 255, 0],
[82, 0, 255],
[163, 255, 0],
[255, 235, 0],
[8, 184, 170],
[133, 0, 255],
[0, 255, 92],
[184, 0, 255],
[255, 0, 31],
[0, 184, 255],
[0, 214, 255],
[255, 0, 112],
[92, 255, 0],
[0, 224, 255],
[112, 224, 255],
[70, 184, 160],
[163, 0, 255],
[153, 0, 255],
[71, 255, 0],
[255, 0, 163],
[255, 204, 0],
[255, 0, 143],
[0, 255, 235],
[133, 255, 0],
[255, 0, 235],
[245, 0, 255],
[255, 0, 122],
[255, 245, 0],
[10, 190, 212],
[214, 255, 0],
[0, 204, 255],
[20, 0, 255],
[255, 255, 0],
[0, 153, 255],
[0, 41, 255],
[0, 255, 204],
[41, 0, 255],
[41, 255, 0],
[173, 0, 255],
[0, 245, 255],
[71, 0, 255],
[122, 0, 255],
[0, 255, 184],
[0, 92, 255],
[184, 255, 0],
[0, 133, 255],
[255, 214, 0],
[25, 194, 194],
[102, 255, 0],
[92, 0, 255],
])
def write_pfm(path, image, scale=1):
"""Write pfm file.
Args:
        path (str): path to file
image (array): data
scale (int, optional): Scale. Defaults to 1.
"""
with open(path, "wb") as file:
color = None
if image.dtype.name != "float32":
raise Exception("Image dtype must be float32.")
image = np.flipud(image)
if len(image.shape) == 3 and image.shape[2] == 3: # color image
color = True
elif (
len(image.shape) == 2 or len(image.shape) == 3 and image.shape[2] == 1
): # greyscale
color = False
else:
raise Exception("Image must have H x W x 3, H x W x 1 or H x W dimensions.")
file.write("PF\n".encode() if color else "Pf\n".encode())
file.write("%d %d\n".encode() % (image.shape[1], image.shape[0]))
endian = image.dtype.byteorder
if endian == "<" or endian == "=" and sys.byteorder == "little":
scale = -scale
file.write("%f\n".encode() % scale)
image.tofile(file)
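# Illustrative sketch (not part of the original file): save a single-channel
# float32 map in PFM format with the helper above. The output path is arbitrary.
def _example_write_pfm():
    depth = np.random.rand(4, 5).astype(np.float32)  # H x W greyscale data
    write_pfm('/tmp/example_depth.pfm', depth)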
def triangulation(disp, xcoord, ycoord, bl=1, fl = 450, cx = 479.5, cy = 269.5):
mask = (disp<=0).flatten()
depth = bl*fl / (disp) # 450px->15mm focal length
X = (xcoord - cx) * depth / fl
Y = (ycoord - cy) * depth / fl
Z = depth
P = np.concatenate((X[np.newaxis],Y[np.newaxis],Z[np.newaxis]),0).reshape(3,-1)
P = np.concatenate((P,np.ones((1,P.shape[-1]))),0)
P[:,mask]=0
return P
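# Illustrative sketch (not part of the original file): back-project a small
# disparity map into homogeneous 3D points with triangulation(). The camera
# parameters are made-up example values.
def _example_triangulation():
    h, w = 4, 5
    disp = np.full((h, w), 2.0)  # constant 2-pixel disparity, so no masked points
    ycoord, xcoord = np.meshgrid(range(h), range(w), indexing='ij')
    P = triangulation(disp, xcoord, ycoord, bl=0.5, fl=450, cx=(w - 1) / 2., cy=(h - 1) / 2.)
    return P  # 4 x (h*w) homogeneous points; columns with disp<=0 would be zeroed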
def midpoint_triangulate(x, cam):
"""
Args:
x: Set of 2D points in homogeneous coords, (3 x n x N) matrix
        cam: Collection of n camera projection matrices (3x4 or 4x4), indexed
             below as cam[i][:3,:3] and cam[i][:3,-1:]
    Returns:
        midpoint: (3 x N) array of triangulated 3D points (ray midpoints)
        Bo: (N x 3 x n) array of unit ray directions, one per camera
"""
n = len(cam) # No. of cameras
N = x.shape[-1]
I = np.eye(3) # 3x3 identity matrix
A = np.zeros((3,n))
B = np.zeros((3,n,N))
sigma2 = np.zeros((3,N))
for i in range(n):
a = -np.linalg.inv(cam[i][:3,:3]).dot(cam[i][:3,-1:]) # ith camera position #
A[:,i,None] = a
if i==0:
b = np.linalg.pinv(cam[i][:3,:3]).dot(x[:,i]) # Directional vector # 4, N
else:
b = np.linalg.pinv(cam[i]).dot(x[:,i]) # Directional vector # 4, N
b = b / b[3:]
b = b[:3,:] - a # 3,N
b = b / np.linalg.norm(b,2,0)[np.newaxis]
B[:,i,:] = b
sigma2 = sigma2 + b * (b.T.dot(a).reshape(-1,N)) # 3,N
Bo = B.transpose([2,0,1])
Bt = B.transpose([2,1,0])
Bo = torch.DoubleTensor(Bo)
Bt = torch.DoubleTensor(Bt)
A = torch.DoubleTensor(A)
sigma2 = torch.DoubleTensor(sigma2)
I = torch.DoubleTensor(I)
BoBt = torch.matmul(Bo, Bt)
C = (n * I)[np.newaxis] - BoBt# N,3,3
Cinv = C.inverse()
sigma1 = torch.sum(A, axis=1)[:,None]
m1 = I[np.newaxis] + torch.matmul(BoBt,Cinv)
m2 = torch.matmul(Cinv,sigma2.T[:,:,np.newaxis])
midpoint = (1/n) * torch.matmul(m1,sigma1[np.newaxis]) - m2
midpoint = np.asarray(midpoint)
return midpoint[:,:,0].T, np.asarray(Bo)
def register_disp_fast(id_flow, id_mono, mask, inlier_th=0.01,niters=100):
"""
    input: id_flow, id_mono, mask
    output: registered_flow, scale, inlier ratio
    register up-to-scale rough (mono) depth to motion-based (flow) depth
"""
shape = id_mono.shape
id_mono = id_mono.flatten()
disp_flow = id_flow[mask] # register to flow with mono
disp_mono = id_mono[mask]
num_samp = min(3000,len(disp_flow))
np.random.seed(0)
submask = np.random.choice(range(len(disp_flow)), num_samp)
disp_flow = disp_flow[submask]
disp_mono = disp_mono[submask]
n = len(disp_flow)
sample_size=niters
rand_idx = np.random.choice(range(n),sample_size)
scale_cand = (disp_flow/disp_mono)[rand_idx]
dis_cand = np.abs(np.log(disp_mono[:,np.newaxis]*scale_cand[np.newaxis])-np.log(disp_flow[:,np.newaxis]))
rank_metric = (dis_cand<inlier_th).sum(0)
scale_idx = np.argmax(rank_metric)
scale = scale_cand[scale_idx]
# # another way to align scale
# from scipy.optimize import minimize
# def cost_function(alpha, K):
# return np.mean(np.abs(alpha*K - 1))
#
# # MRE minimize
# output = minimize(cost_function, 1., args=(disp_mono/disp_flow),method='Nelder-Mead')
# if output.success:
# scale = output.x
dis = np.abs(np.log(disp_mono*scale)-np.log(disp_flow))
ninliers = (dis<inlier_th).sum()/n
registered_flow=(id_flow.reshape(shape))/scale
return registered_flow, scale, ninliers
def testEss(K0,K1,R,T,p1,p2):
testP = cv2.triangulatePoints(K0.dot(np.concatenate( (np.eye(3),np.zeros((3,1))), -1)),
K1.dot(np.concatenate( (R,T), -1)),
p1[:2],p2[:2])
Z1 = testP[2,:]/testP[-1,:]
Z2 = (R.dot(Z1*np.linalg.inv(K0).dot(p1))+T)[-1,:]
if ((Z1>0).sum() > (Z1<=0).sum()) and ((Z2>0).sum() > (Z2<=0).sum()):
#print(Z1)
#print(Z2)
return True
else:
return False
def pose_estimate(K0,K1,hp0,hp1,strict_mask,rot,th=0.0001):
# epipolar geometry
from ..models.submodule import F_ngransac
tmphp0 = hp0[:,strict_mask]
tmphp1 = hp1[:,strict_mask]
#num_samp = min(300000,tmphp0.shape[1])
#num_samp = min(30000,tmphp0.shape[1])
num_samp = min(3000,tmphp0.shape[1])
submask = np.random.choice(range(tmphp0.shape[1]), num_samp)
if num_samp > 0:
submask = np.random.choice(range(tmphp0.shape[1]), num_samp)
tmphp0 = tmphp0[:, submask]
tmphp1 = tmphp1[:, submask]
rotx,transx,Ex = F_ngransac(torch.Tensor(tmphp0.T[np.newaxis]).cuda(),
torch.Tensor(tmphp1.T[np.newaxis]).cuda(),
torch.Tensor(K0[np.newaxis]).cuda(),
False,0,
Kn = torch.Tensor(K1[np.newaxis]).cuda())
R01 = cv2.Rodrigues(np.asarray(rotx[0]))[0]
T01 = np.asarray(transx[0])
E = np.asarray(Ex[0])
# _,R01,T01,_ = cv2.recoverPose(E.astype(float), tmphp0[:2].T, tmphp1[:2].T, K0) # RT are 0->1 points transform
# T01 = T01[:,0]
# R01=R01.T
# T01=-R01.dot(T01) # now are 1->0 points transform
R1, R2, T = cv2.decomposeEssentialMat(E)
for rott in [(R1,T),(R2,T),(R1,-T),(R2,-T)]:
if testEss(K0,K1,rott[0],rott[1],tmphp0, tmphp1):
R01=rott[0].T
T01=-R01.dot(rott[1][:,0])
if not 'T01' in locals():
T01 = np.asarray([0,0,1])
R01 = np.eye(3)
# E, maskk = cv2.findEssentialMat(np.linalg.inv(K0).dot(hp0[:,strict_mask])[:2].T,
# np.linalg.inv(K1).dot(hp1[:,strict_mask])[:2].T, np.eye(3),
# cv2.LMEDS,threshold=th)
#
# valid_points = np.ones((strict_mask.sum())).astype(bool)
# valid_points[~maskk[:,0].astype(bool)]=False
# fmask = strict_mask.copy()
# fmask[strict_mask]=valid_points
#
# R1, R2, T = cv2.decomposeEssentialMat(E)
# for rott in [(R1,T),(R2,T),(R1,-T),(R2,-T)]:
# if testEss(K0,K1,rott[0],rott[1],hp0[:,fmask], hp1[:,fmask]):
# R01=rott[0].T
# T01=-R01.dot(rott[1][:,0])
# if not 'T01' in locals():
# T01 = np.asarray([0,0,1])
# R01 = np.eye(3)
# T01t = T01.copy()
else:
T01 = np.asarray([0, 0, 1])
R01 = np.eye(3)
E = None
# compensate R
H01 = K0.dot(R01).dot(np.linalg.inv(K1)) # plane at infinity
comp_hp1 = H01.dot(hp1)
comp_hp1 = comp_hp1/comp_hp1[-1:]
return R01,T01,H01,comp_hp1,E
def evaluate_tri(t10,R01,K0,K1,hp0,hp1,disp0,ent,bl,inlier_th=0.1,select_th=0.4, valid_mask=None):
if valid_mask is not None:
hp0 = hp0[:,valid_mask]
hp1 = hp1[:,valid_mask]
disp0 = disp0.flatten()[valid_mask]
ent = ent.flatten()[valid_mask]
    # triangulation
"1975:76"): "metadataonly",
("sou", "1975:70"): "metadataonly",
("sou", "1975:45"): "metadataonly",
("sou", "1974:97"): "metadataonly",
("sou", "1974:92"): "metadataonly",
("sou", "1974:68"): "metadataonly",
("sou", "1974:56"): "metadataonly",
("sou", "1974:30"): "metadataonly",
("sou", "1973:39"): "metadataonly",
("sou", "1973:42"): "metadataonly",
("sou", "1973:17"): "metadataonly",
("sou", "1972:89"): "metadataonly",
("sou", "1972:82"): "metadataonly",
("sou", "1972:74"): "metadataonly",
("sou", "1972:69"): "metadataonly",
("sou", "1972:51"): "metadataonly",
("sou", "1971:63"): "metadataonly",
("sou", "1971:48"): "metadataonly",
("sou", "1971:55"): "metadataonly",
("sou", "1971:18"): "metadataonly",
("sou", "1971:23"): "metadataonly",
("sou", "1971:1"): "metadataonly",
("sou", "1970:65"): "metadataonly",
("sou", "1970:42"): "metadataonly",
("sou", "1970:18"): "metadataonly",
("sou", "1970:10"): "metadataonly",
("sou", "1970:12"): "metadataonly",
("sou", "1970:5"): "metadataonly",
("sou", "1969:61"): "metadataonly",
("sou", "1969:24"): "metadataonly",
("sou", "1969:23"): "metadataonly",
("sou", "1969:17"): "metadataonly",
("sou", "1969:13"): "metadataonly",
("sou", "1968:53"): "metadataonly",
("sou", "1968:42"): "metadataonly",
("sou", "1968:14"): "metadataonly",
("sou", "1967:55"): "metadataonly",
("sou", "1967:42"): "metadataonly",
("sou", "1967:33"): "metadataonly",
("sou", "1967:37"): "metadataonly",
("sou", "1967:38"): "metadataonly",
("sou", "1967:21"): "metadataonly",
("sou", "1967:10"): "metadataonly",
("sou", "1967:12"): "metadataonly",
("sou", "1967:7"): "metadataonly",
("sou", "1966:59"): "metadataonly",
("sou", "1966:34"): "metadataonly",
("sou", "1966:16"): "metadataonly",
("sou", "1966:10"): "metadataonly",
("sou", "1966:11"): "metadataonly",
("sou", "1966:19"): "metadataonly",
("sou", "1963:34"): "metadataonly",
("sou", "1962:56"): "metadataonly",
("sou", "1962:39"): "metadataonly",
("sou", "1962:3"): "metadataonly",
("sou", "1962:1"): "metadataonly",
("sou", "1960:38"): "metadataonly",
("sou", "1960:31"): "metadataonly",
("sou", "1960:29"): "metadataonly",
("sou", "1960:30"): "metadataonly",
("sou", "1960:18"): "metadataonly",
("sou", "1960:8"): "metadataonly",
("sou", "1960:3"): "metadataonly",
("sou", "1959:36"): "metadataonly",
("sou", "1959:39"): "metadataonly",
("sou", "1959:31"): "metadataonly",
("sou", "1959:27"): "metadataonly",
("sou", "1959:30"): "metadataonly",
("sou", "1959:15"): "metadataonly",
("sou", "1959:21"): "metadataonly",
("sou", "1959:3"): "metadataonly",
("sou", "1959:6"): "metadataonly",
("sou", "1959:1"): "metadataonly",
("sou", "1958:46"): "metadataonly",
("sou", "1958:33"): "metadataonly",
("sou", "1958:28"): "metadataonly",
("sou", "1958:31"): "metadataonly",
("sou", "1958:24"): "metadataonly",
("sou", "1958:16"): "metadataonly",
("sou", "1958:19"): "metadataonly",
("sou", "1958:9"): "metadataonly",
("sou", "1958:8"): "metadataonly",
("sou", "1958:7"): "metadataonly",
("sou", "1958:3"): "metadataonly",
("sou", "1957:45"): "metadataonly",
("sou", "1957:30"): "metadataonly",
("sou", "1957:37"): "metadataonly",
("sou", "1957:25"): "metadataonly",
("sou", "1957:31"): "metadataonly",
("sou", "1957:29"): "metadataonly",
("sou", "1957:19"): "metadataonly",
("sou", "1957:9"): "metadataonly",
("sou", "1957:16"): "metadataonly",
("sou", "1957:1"): "metadataonly",
("sou", "1956:62"): "metadataonly",
("sou", "1956:59"): "metadataonly",
("sou", "1956:48"): "metadataonly",
("sou", "1956:47"): "metadataonly",
("sou", "1956:51"): "metadataonly",
("sou", "1956:45"): "metadataonly",
("sou", "1956:37"): "metadataonly",
("sou", "1956:16"): "metadataonly",
("sou", "1956:15"): "metadataonly",
("sou", "1956:9"): "metadataonly",
("sou", "1956:5"): "metadataonly",
("sou", "1956:10"): "metadataonly",
("sou", "1955:46"): "metadataonly",
("sou", "1955:37"): "metadataonly",
("sou", "1955:33"): "metadataonly",
("sou", "1955:24"): "metadataonly",
("sou", "1955:22"): "metadataonly",
("sou", "1955:23"): "metadataonly",
("sou", "1955:18"): "metadataonly",
("sou", "1955:15"): "metadataonly",
("sou", "1955:10"): "metadataonly",
("sou", "1955:9"): "metadataonly",
("sou", "1955:3"): "metadataonly",
("sou", "1955:6"): "metadataonly",
("sou", "1955:7"): "metadataonly",
("sou", "1955:5"): "metadataonly",
("sou", "1955:1"): "metadataonly",
("sou", "1955:2"): "metadataonly",
("sou", "1954:36"): "metadataonly",
("sou", "1954:21"): "metadataonly",
("sou", "1954:13"): "metadataonly",
("sou", "1954:9"): "metadataonly",
("sou", "1954:8"): "metadataonly",
("sou", "1954:7"): "metadataonly",
("sou", "1954:1"): "metadataonly",
("sou", "1953:33"): "metadataonly",
("sou", "1953:22"): "metadataonly",
("sou", "1953:9"): "metadataonly",
("sou", "1953:4"): "metadataonly",
("sou", "1953:2"): "metadataonly",
("sou", "1952:31"): "metadataonly",
("sou", "1952:32"): "metadataonly",
("sou", "1952:24"): "metadataonly",
("sou", "1952:30"): "metadataonly",
("sou", "1952:19"): "metadataonly",
("sou", "1952:16"): "metadataonly",
("sou", "1952:11"): "metadataonly",
("sou", "1951:55"): "metadataonly",
("sou", "1951:52"): "metadataonly",
("sou", "1951:50"): "metadataonly",
("sou", "1951:39"): "metadataonly",
("sou", "1951:37"): "metadataonly",
("sou", "1951:36"): "metadataonly",
("sou", "1951:38"): "metadataonly",
("sou", "1951:22"): "metadataonly",
("sou", "1951:21"): "metadataonly",
("sou", "1951:13"): "metadataonly",
("sou", "1951:11"): "metadataonly",
("sou", "1951:2"): "metadataonly",
("sou", "1950:44"): "metadataonly",
("sou", "1950:39"): "metadataonly",
("sou", "1950:43"): "metadataonly",
("sou", "1950:30"): "metadataonly",
("sou", "1950:36"): "metadataonly",
("sou", "1950:24"): "metadataonly",
("sou", "1950:27"): "metadataonly",
("sou", "1950:22"): "metadataonly",
("sou", "1950:18"): "metadataonly",
("sou", "1950:15"): "metadataonly",
("sou", "1950:17"): "metadataonly",
("sou", "1950:9"): "metadataonly",
("sou", "1950:20"): "metadataonly",
("sou", "1950:6"): "metadataonly",
("sou", "1950:10"): "metadataonly",
("sou", "1949:64"): "metadataonly",
("sou", "1949:59"): "metadataonly",
("sou", "1949:51"): "metadataonly",
("sou", "1949:44"): "metadataonly",
("sou", "1949:50"): "metadataonly",
("sou", "1949:34"): "metadataonly",
("sou", "1949:43"): "metadataonly",
("sou", "1949:32"): "metadataonly",
("sou", "1949:16"): "metadataonly",
("sou", "1949:15"): "metadataonly",
("sou", "1949:22"): "metadataonly",
("sou", "1949:13"): "metadataonly",
("sou", "1949:10"): "metadataonly",
("sou", "1949:7"): "metadataonly",
("sou", "1949:3"): "metadataonly",
("sou", "1949:2"): "metadataonly",
("sou", "1948:52"): "metadataonly",
("sou", "1948:38"): "metadataonly",
("sou", "1948:36"): "metadataonly",
("sou", "1948:41"): "metadataonly",
("sou", "1948:29"): "metadataonly",
("sou", "1948:26"): "metadataonly",
("sou", "1948:18"): "metadataonly",
("sou", "1948:15"): "metadataonly",
("sou", "1948:12"): "metadataonly",
("sou", "1948:5"): "metadataonly",
("sou", "1948:1"): "metadataonly",
("sou", "1947:87"): "metadataonly",
("sou", "1947:84"): "metadataonly",
("sou", "1947:81"): "metadataonly",
("sou", "1947:78"): "metadataonly",
("sou", "1947:83"): "metadataonly",
("sou", "1947:77"): "metadataonly",
("sou", "1947:73"): "metadataonly",
("sou", "1947:71"): "metadataonly",
("sou", "1947:62"): "metadataonly",
("sou", "1947:68"): "metadataonly",
("sou", "1947:63"): "metadataonly",
("sou", "1947:57"): "metadataonly",
("sou", "1947:56"): "metadataonly",
("sou", "1947:48"): "metadataonly",
("sou", "1947:49"): "metadataonly",
("sou", "1947:39"): "metadataonly",
("sou", "1947:41"): "metadataonly",
("sou", "1947:36"): "metadataonly",
("sou", "1947:27"): "metadataonly",
("sou", "1947:16"): "metadataonly",
("sou", "1947:9"): "metadataonly",
("sou", "1947:6"): "metadataonly",
("sou", "1946:95"): "metadataonly",
("sou", "1946:89"): "metadataonly",
("sou", "1946:87"): "metadataonly",
("sou", "1946:85"): "metadataonly",
("sou", "1946:82"): "metadataonly",
("sou", "1946:86"): "metadataonly",
("sou", "1946:73"): "metadataonly",
("sou", "1946:63"): "metadataonly",
("sou", "1946:55"): "metadataonly",
("sou", "1946:50"): "metadataonly",
("sou", "1946:25"): "metadataonly",
("sou", "1946:26"): "metadataonly",
("sou", "1946:17"): "metadataonly",
("sou", "1946:13"): "metadataonly",
("sou", "1946:4"): "metadataonly",
("sou", "1946:2"): "metadataonly",
("sou", "1945:62"): "metadataonly",
("sou", "1945:54"): "metadataonly",
("sou", "1945:58"): "metadataonly",
("sou", "1945:52"): "metadataonly",
("sou", "1945:44"): "metadataonly",
("sou", "1945:39"): "metadataonly",
("sou", "1945:29"): "metadataonly",
("sou", "1945:24"): "metadataonly",
("sou", "1945:20"): "metadataonly",
("sou", "1945:18"): "metadataonly",
("sou", "1945:9"): "metadataonly",
("sou", "1945:17"): "metadataonly",
("sou", "1945:6"): "metadataonly",
("sou", "1945:3"): "metadataonly",
("sou", "1944:60"): "metadataonly",
("sou", "1944:62"): "metadataonly",
("sou", "1944:56"): "metadataonly",
("sou", "1944:41"): "metadataonly",
("sou", "1944:24"): "metadataonly",
("sou", "1944:6"): "metadataonly",
("sou", "1944:4"): "metadataonly",
("sou", "1944:1"): "metadataonly",
("sou", "1944:11"): "metadataonly",
("sou", "1943:45"): "metadataonly",
("sou", "1943:46"): "metadataonly",
("sou", "1943:37"): "metadataonly",
("sou", "1943:33"): "metadataonly",
("sou", "1943:31"): "metadataonly",
("sou", "1943:36"): "metadataonly",
("sou", "1943:35"): "metadataonly",
("sou", "1943:30"): "metadataonly",
("sou", "1943:22"): "metadataonly",
("sou", "1943:26"): "metadataonly",
("sou", "1943:21"): "metadataonly",
("sou", "1943:16"): "metadataonly",
("sou", "1943:12"): "metadataonly",
("sou", "1943:10"): "metadataonly",
("sou", "1943:25"): "metadataonly",
("sou", "1943:2"): "metadataonly",
("sou", "1942:43"): "metadataonly",
("sou", "1942:42"): "metadataonly",
("sou", "1942:45"): "metadataonly",
("sou", "1942:37"): "metadataonly",
("sou", "1942:48"): "metadataonly",
("sou", "1942:39"): "metadataonly",
("sou", "1942:28"): "metadataonly",
("sou", "1942:27"): "metadataonly",
("sou", "1942:24"): "metadataonly",
("sou", "1942:18"): "metadataonly",
("sou", "1942:13"): "metadataonly",
("sou", "1942:16"): "metadataonly",
("sou", "1942:25"): "metadataonly",
("sou", "1942:5"): "metadataonly",
("sou", "1942:2"): "metadataonly",
("sou", "1941:37"): "metadataonly",
("sou", "1941:33"): "metadataonly",
("sou", "1941:34"): "metadataonly",
("sou", "1941:26"): "metadataonly",
("sou", "1941:30"): "metadataonly",
("sou", "1941:19"): "metadataonly",
("sou", "1941:11"): "metadataonly",
("sou", "1941:2"): "metadataonly",
("sou", "1941:13"): "metadataonly",
("sou", "1940:37"): "metadataonly",
("sou", "1940:38"): "metadataonly",
("sou", "1940:27"): "metadataonly",
("sou", "1940:18"): "metadataonly",
("sou", "1940:26"): "metadataonly",
("sou", "1940:15"): "metadataonly",
("sou", "1940:10"): "metadataonly",
("sou", "1940:14"): "metadataonly",
("sou", "1940:7"): "metadataonly",
("sou", "1940:5"): "metadataonly",
("sou", "1940:2"): "metadataonly",
("sou", "1939:48"): "metadataonly",
("sou", "1939:45"): "metadataonly",
("sou", "1939:42"): "metadataonly",
("sou", "1939:39"): "metadataonly",
("sou", "1939:43"): "metadataonly",
("sou", "1939:31"): "metadataonly",
("sou", "1939:33"): "metadataonly",
("sou", "1939:27"): "metadataonly",
("sou", "1939:20"): "metadataonly",
("sou", "1939:24"): "metadataonly",
("sou", "1939:16"): "metadataonly",
("sou", "1939:17"): "metadataonly",
("sou", "1939:12"): "metadataonly",
("sou", "1939:8"): "metadataonly",
("sou", "1939:10"): "metadataonly",
("sou", "1938:51"): "metadataonly",
("sou", "1938:48"): "metadataonly",
("sou", "1938:39"): "metadataonly",
("sou", "1938:37"): "metadataonly",
("sou", "1938:35"): "metadataonly",
("sou", "1938:32"): "metadataonly",
("sou", "1938:31"): "metadataonly",
("sou", "1938:12"): "metadataonly",
("sou", "1938:9"): "metadataonly",
("sou", "1938:24"): "metadataonly",
("sou", "1938:5"): "metadataonly",
("sou", "1938:3"): "metadataonly",
("sou", "1937:57"): "metadataonly",
("sou", "1937:56"): "metadataonly",
("sou", "1937:46"): "metadataonly",
("sou", "1937:40"): "metadataonly",
("sou", "1937:30"): "metadataonly",
("sou", "1937:27"): "metadataonly",
("sou", "1937:20"): "metadataonly",
("sou", "1937:25"): "metadataonly",
("sou", "1937:19"): "metadataonly",
("sou", "1937:17"): "metadataonly",
("sou", "1937:15"): "metadataonly",
("sou", "1937:11"): "metadataonly",
("sou", "1937:9"): "metadataonly",
("sou", "1937:2"): "metadataonly",
("sou", "1937:12"): "metadataonly",
("sou", "1936:55"): "metadataonly",
("sou", "1936:43"): "metadataonly",
("sou", "1936:40"): "metadataonly",
("sou", "1936:39"): "metadataonly",
("sou", "1936:33"): "metadataonly",
("sou", "1936:30"): "metadataonly",
("sou", "1936:29"): "metadataonly",
("sou", "1936:22"): "metadataonly",
("sou", "1936:19"): "metadataonly",
("sou", "1936:16"): "metadataonly",
("sou", "1936:21"): "metadataonly",
("sou", "1936:7"): "metadataonly",
("sou", "1936:6"): "metadataonly",
("sou", "1936:18"): "metadataonly",
("sou", "1936:13"): "metadataonly",
("sou", "1936:11"): "metadataonly",
("sou", "1935:67"): "metadataonly",
("sou", "1935:64"): "metadataonly",
("sou", "1935:61"): "metadataonly",
("sou", "1935:63"): "metadataonly",
("sou", "1935:55"): "metadataonly",
("sou", "1935:58"): "metadataonly",
("sou", "1935:54"): "metadataonly",
("sou", "1935:47"): "metadataonly",
("sou", "1935:42"): "metadataonly",
("sou", "1935:37"): "metadataonly",
("sou", "1935:41"): "metadataonly",
("sou", "1935:40"): "metadataonly",
("sou", "1935:34"): "metadataonly",
("sou", "1935:33"): "metadataonly",
("sou", "1935:32"): "metadataonly",
("sou", "1935:29"): "metadataonly",
("sou", "1935:28"): "metadataonly",
("sou", "1935:30"): "metadataonly",
("sou", "1935:25"): "metadataonly",
("sou", "1935:20"): "metadataonly",
("sou", "1935:17"): "metadataonly",
("sou", "1935:19"): "metadataonly",
("sou", "1935:16"): "metadataonly",
("sou", "1935:39"): "metadataonly",
("sou", "1935:7"): "metadataonly",
("sou", "1935:3"): "metadataonly",
("sou", "1934:56"): "metadataonly",
("sou", "1934:55"): "metadataonly",
("sou", "1934:51"): "metadataonly",
("sou", "1934:50"): "metadataonly",
("sou", "1934:49"): "metadataonly",
("sou", "1934:44"): "metadataonly",
("sou", "1934:46"): "metadataonly",
("sou", "1934:45"): "metadataonly",
("sou", "1934:42"): "metadataonly",
("sou", "1934:41"): "metadataonly",
("sou", "1934:40"): "metadataonly",
("sou", "1934:37"): "metadataonly",
("sou", "1934:38"): "metadataonly",
("sou", "1934:36"): "metadataonly",
("sou", "1934:34"): "metadataonly",
("sou", "1934:28"): "metadataonly",
("sou", "1934:26"): "metadataonly",
("sou", "1934:21"): "metadataonly",
("sou", "1934:19"): "metadataonly",
("sou", "1934:20"): "metadataonly",
("sou", "1934:23"): "metadataonly",
("sou", "1934:17"): "metadataonly",
("sou", "1934:15"): "metadataonly",
("sou", "1934:12"): "metadataonly",
("sou", "1934:24"): "metadataonly",
("sou", "1934:7"): "metadataonly",
("sou", "1934:6"): "metadataonly",
("sou", "1934:5"): "metadataonly",
("sou", "1933:35"): "metadataonly",
("sou", "1933:28"): "metadataonly",
("sou", "1933:34"): "metadataonly",
("sou", "1933:26"): "metadataonly",
("sou", "1933:37"): "metadataonly",
("sou", "1933:24"): "metadataonly",
("sou", "1933:18"): "metadataonly",
("sou", "1933:22"): "metadataonly",
import pickle
import warnings
import collections.abc
from math import isnan
from statistics import mean, median, stdev, mode
from abc import abstractmethod, ABC
from numbers import Number
from collections import defaultdict
from itertools import islice, chain
from typing import Hashable, Optional, Sequence, Union, Iterable, Dict, Any, List, Tuple, Callable, Mapping
from coba.backports import Literal
from coba import pipes
from coba.random import CobaRandom
from coba.exceptions import CobaException
from coba.statistics import iqr
from coba.pipes import Flatten
from coba.environments.primitives import Interaction
from coba.environments.logged.primitives import LoggedInteraction
from coba.environments.simulated.primitives import SimulatedInteraction
class EnvironmentFilter(pipes.Filter[Iterable[Interaction],Iterable[Interaction]], ABC):
"""A filter that can be applied to an Environment."""
@abstractmethod
def filter(self, interactions: Iterable[Interaction]) -> Iterable[Interaction]:
"""Apply a filter to an Environment's interactions."""
...
class Identity(pipes.Identity, EnvironmentFilter):
"""Return whatever interactions are given to the filter."""
pass
class Take(pipes.Take, EnvironmentFilter):
"""Take a fixed number of interactions from an Environment."""
pass
class Shuffle(pipes.Shuffle, EnvironmentFilter):
"""Shuffle a sequence of Interactions in an Environment."""
pass
class Reservoir(pipes.Reservoir, EnvironmentFilter):
"""Take a fixed number of random Interactions from an Environment."""
pass
class Scale(EnvironmentFilter):
"""Shift and scale features to precondition them before learning."""
def __init__(self,
shift: Union[Number,Literal["min","mean","med"]] = 0,
scale: Union[Number,Literal["minmax","std","iqr","maxabs"]] = "minmax",
target: Literal["features","rewards"] = "features",
using: Optional[int] = None):
"""Instantiate a Scale filter.
Args:
shift: The statistic to use to shift each context feature.
scale: The statistic to use to scale each context feature.
target: The target data we wish to scale in the environment.
using: The number of interactions to use when calculating the necessary statistics.
"""
assert isinstance(shift,Number) or shift in ["min","mean","med"]
assert isinstance(scale,Number) or scale in ["minmax","std","iqr","maxabs"]
self._shift = shift
self._scale = scale
self._using = using
self._target = target
@property
def params(self) -> Dict[str, Any]:
return {
"scale_shift": self._shift,
"scale_scale": self._scale,
"scale_using": self._using,
"scale_target": self._target
}
def filter(self, interactions: Iterable[Interaction]) -> Iterable[Interaction]:
iter_interactions = iter(interactions)
fitting_interactions = list(islice(iter_interactions,self._using))
shifts : Dict[Hashable,float] = defaultdict(lambda:0)
scales : Dict[Hashable,float] = defaultdict(lambda:1)
unscaled: Dict[Hashable,List[Any]] = defaultdict(list)
if any([isinstance(i.context,dict) for i in fitting_interactions]) and self._shift != 0:
raise CobaException("Shift is required to be 0 for sparse environments. Otherwise the environment will become dense.")
mixed = set()
had_non_numeric = set()
for interaction in fitting_interactions:
if self._target == "features":
for name,value in self._feature_pairs(interaction.context):
if name in mixed: continue
is_numeric = isinstance(value,Number)
is_nan = is_numeric and isnan(value)
if is_nan:
pass
elif (not is_numeric and name in unscaled) or (is_numeric and name in had_non_numeric):
mixed.add(name)
if name in unscaled: del unscaled[name]
if name in had_non_numeric: had_non_numeric.remove(name)
elif not is_numeric:
had_non_numeric.add(name)
elif is_numeric and not is_nan:
unscaled[name].append(value)
if self._target == "rewards":
unscaled["rewards"].extend(interaction.rewards)
if mixed: warnings.warn(f"Some features were not scaled due to having mixed types: {mixed}. ")
has_sparse_zero = set()
for interaction in fitting_interactions:
if isinstance(interaction.context,dict):
has_sparse_zero |= unscaled.keys() - interaction.context.keys() - {"rewards"}
for key in has_sparse_zero:
unscaled[key].append(0)
for name, values in unscaled.items():
if isinstance(self._shift, Number):
shift = self._shift
if self._shift == "min":
shift = min(values)
if self._shift == "mean":
shift = mean(values)
if self._shift == "med":
shift = median(values)
if isinstance(self._scale, Number):
scale_num = self._scale
scale_den = 1
if self._scale == "std":
scale_num = 1
scale_den = stdev(values)
if self._scale == "minmax":
scale_num = 1
scale_den = max(values)-min(values)
if self._scale == "iqr":
scale_num = 1
scale_den = iqr(values)
if self._scale == "maxabs":
scale_num = 1
scale_den = max([abs(v-shift) for v in values])
shifts[name] = shift
scales[name] = scale_num/scale_den if round(scale_den,10) != 0 else 1
for interaction in chain(fitting_interactions, iter_interactions):
scaled_values = {}
final_context = interaction.context
final_rewards = None
final_kwargs = interaction.kwargs.copy()
if self._target == "features":
for name,value in self._feature_pairs(interaction.context):
if isinstance(value,Number):
scaled_values[name] = (value-shifts[name])*scales[name]
else:
scaled_values[name] = value
if interaction.context is None:
final_context = None
elif isinstance(interaction.context,dict):
final_context = scaled_values
elif isinstance(interaction.context,tuple):
final_context = tuple(scaled_values[k] for k,_ in self._feature_pairs(interaction.context))
else:
final_context = scaled_values[1]
if self._target == "rewards":
final_rewards = [ (r-shifts['rewards'])*scales['rewards'] for r in interaction.rewards ]
if isinstance(interaction, SimulatedInteraction):
yield SimulatedInteraction(
final_context,
interaction.actions,
final_rewards or interaction.rewards,
**interaction.kwargs
)
elif isinstance(interaction, LoggedInteraction):
yield LoggedInteraction(
final_context,
interaction.action,
interaction.reward,
interaction.probability,
interaction.actions,
**interaction.kwargs
)
else: #pragma: no cover
raise CobaException("Unknown interactions were given to Scale.")
def _feature_pairs(self,context) -> Sequence[Tuple[Hashable,Any]]:
if isinstance(context,dict ): return context.items()
if isinstance(context,tuple): return enumerate(context)
if context is not None : return [(1,context)]
return []
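# Illustrative usage sketch (not part of the original module): apply the Scale
# filter to a few hand-made interactions. This assumes SimulatedInteraction
# accepts (context, actions, rewards) positionally, as it is used elsewhere in
# this file; the feature values are arbitrary.
def _example_scale_usage():
    interactions = [
        SimulatedInteraction((1.0, 10.0), [0, 1], [0.0, 1.0]),
        SimulatedInteraction((3.0, 30.0), [0, 1], [1.0, 0.0]),
        SimulatedInteraction((5.0, 50.0), [0, 1], [0.5, 0.5]),
    ]
    # Shift each context feature by its mean and divide it by its standard deviation.
    return list(Scale(shift="mean", scale="std").filter(interactions))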
class Impute(EnvironmentFilter):
"""Impute missing values (nan) in Interaction contexts."""
def __init__(self,
stat : Literal["mean","median","mode"] = "mean",
using: Optional[int] = None):
"""Instantiate an Impute filter.
Args:
            stat: The statistic to use for imputation.
using: The number of interactions to use to calculate the imputation statistics.
"""
assert stat in ["mean","median","mode"]
self._stat = stat
self._using = using
@property
def params(self) -> Dict[str, Any]:
return { "impute_stat": self._stat, "impute_using": self._using }
def filter(self, interactions: Iterable[Interaction]) -> Iterable[Interaction]:
iter_interactions = iter(interactions)
train_interactions = list(islice(iter_interactions,self._using))
test_interactions = chain.from_iterable([train_interactions, iter_interactions])
stats : Dict[Hashable,float] = defaultdict(int)
features: Dict[Hashable,List[Number]] = defaultdict(list)
for interaction in train_interactions:
for name,value in self._context_as_name_values(interaction.context):
if isinstance(value,Number) and not isnan(value):
features[name].append(value)
for feat_name, feat_numeric_values in features.items():
if self._stat == "mean":
stats[feat_name] = mean(feat_numeric_values)
if self._stat == "median":
stats[feat_name] = median(feat_numeric_values)
if self._stat == "mode":
stats[feat_name] = mode(feat_numeric_values)
for interaction in test_interactions:
kv_imputed_context = {}
for name,value in self._context_as_name_values(interaction.context):
kv_imputed_context[name] = stats[name] if isinstance(value,Number) and isnan(value) else value
if interaction.context is None:
final_context = None
elif isinstance(interaction.context,dict):
final_context = kv_imputed_context
elif isinstance(interaction.context,tuple):
final_context = tuple(kv_imputed_context[k] for k,_ in self._context_as_name_values(interaction.context))
else:
final_context = kv_imputed_context[1]
if isinstance(interaction, SimulatedInteraction):
yield SimulatedInteraction(
final_context,
interaction.actions,
interaction.rewards,
**interaction.kwargs
)
elif isinstance(interaction, LoggedInteraction):
yield LoggedInteraction(
final_context,
interaction.action,
interaction.reward,
**interaction.kwargs
)
else: #pragma: no cover
raise CobaException("Unknown interactions were given to Impute.")
def _context_as_name_values(self,context) -> Sequence[Tuple[Hashable,Any]]:
if isinstance(context,dict ): return context.items()
if isinstance(context,tuple): return enumerate(context)
if context is not None : return [(1,context)]
return []
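# --- Hedged usage sketch (added comment, not part of the original module) ---
# Impute collects the numeric, non-nan values per feature from the first
# `using` interactions and replaces nan values everywhere with the chosen
# statistic. For example, assuming stat="mean" and three fitting interactions
# whose single-feature contexts are 1.0, nan and 3.0:
#
#   stats[feature] = mean([1.0, 3.0])   # -> 2.0 (nan is excluded from fitting)
#   contexts after filtering: 1.0, 2.0, 3.0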
class Sparse(EnvironmentFilter):
"""Sparsify an environment's feature representation.
This has little utility beyond debugging.
"""
def __init__(self, context:bool = True, action:bool = False):
"""Instantiate a Sparse filter.
Args:
            context: If True then contexts are made sparse; otherwise they are left alone.
            action: If True then actions are made sparse; otherwise they are left alone.
"""
self._context = context
self._action = action
@property
def params(self) -> Dict[str, Any]:
return { "sparse_C": self._context, "sparse_A": self._action }
def filter(self, interactions: Iterable[Interaction]) -> Iterable[Interaction]:
for interaction in interactions:
sparse_context = self._make_sparse(interaction.context) if self._context else interaction.context
if isinstance(interaction, SimulatedInteraction):
sparse_actions = list(map(self._make_sparse,interaction.actions)) if self._action else interaction.actions
yield SimulatedInteraction(
sparse_context,
sparse_actions,
interaction.rewards
)
elif isinstance(interaction, LoggedInteraction):
sparse_action = self._make_sparse(interaction.action) if self._action else interaction.action
yield LoggedInteraction(
sparse_context,
sparse_action,
interaction.reward,
interaction.probability,
interaction.actions,
**interaction.kwargs
)
else: #pragma: no cover
raise CobaException("Unknown interactions were given to Sparse.")
def _make_sparse(self, value) -> Optional[dict]:
if isinstance(value,dict) or value is None:
return value
if isinstance(value,(list,tuple)):
return dict(enumerate(value))
return {0:value}
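# --- Hedged illustration (added comment, not part of the original module) ---
# What _make_sparse does to the three value shapes it accepts:
#
#   _make_sparse((3, 5, 7))   # -> {0: 3, 1: 5, 2: 7}  dense tuple becomes a dict
#   _make_sparse("a")         # -> {0: "a"}            a scalar gets a single key
#   _make_sparse({"x": 1})    # -> {"x": 1}            dicts (and None) pass through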
class Cycle(EnvironmentFilter):
"""Cycle all rewards associated with actions by one place.
    This filter is useful for testing an algorithm's response to a non-stationary shock.
"""
def __init__(self, after:int = 0):
"""Instantiate a Cycle filter.
Args:
after: How many interactions should be seen before applying the cycle filter.
"""
self._after = after
@property
def params(self) -> Dict[str, Any]:
return { "cycle_after": self._after }
def filter(self, interactions: Iterable[SimulatedInteraction]) -> Iterable[SimulatedInteraction]:
underlying_iterable = iter(interactions)
sans_cycle_interactions = islice(underlying_iterable, self._after)
with_cycle_interactions = underlying_iterable
for interaction in sans_cycle_interactions:
yield interaction
try:
first_interaction = next(with_cycle_interactions)
action_set = set(first_interaction.actions)
n_actions = len(action_set)
featureless_actions = [tuple([0]*n+[1]+[0]*(n_actions-n-1)) for n in range(n_actions)]
with_cycle_interactions = chain([first_interaction], with_cycle_interactions)
if len(set(action_set) & set(featureless_actions)) != len(action_set):
warnings.warn("Cycle only works for environments without action features. It will be ignored in this case.")
for interaction in with_cycle_interactions:
yield interaction
else:
for interaction in with_cycle_interactions:
rewards = interaction.rewards[-1:] + interaction.rewards[:-1]
yield SimulatedInteraction(interaction.context, interaction.actions, rewards, **interaction.kwargs)
except StopIteration:
pass
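# --- Hedged illustration (added comment, not part of the original module) ---
# After `after` interactions have been seen, each reward list is rotated one
# place to the right, e.g. rewards [1, 0, 0] become [0, 1, 0], so the optimal
# action shifts while contexts and action sets stay unchanged.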
class Binary(EnvironmentFilter):
"""Binarize all rewards to either 1 (max rewards) or 0 (all others)."""
@property
def params(self) -> Dict[str, Any]:
return { "binary": True }
def filter(self, interactions: Iterable[SimulatedInteraction]) -> Iterable[SimulatedInteraction]:
for interaction in interactions:
max_rwd = max(interaction.rewards)
rewards = [int(r==max_rwd) for r in interaction.rewards]
yield SimulatedInteraction(interaction.context, interaction.actions, rewards, **interaction.kwargs)
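# --- Hedged illustration (added comment, not part of the original module) ---
# Rewards [0.2, 0.9, 0.9] become [0, 1, 1]: every action achieving the maximum
# reward maps to 1 and all other actions map to 0.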
class Sort(EnvironmentFilter):
"""Sort a sequence of Interactions in an Environment."""
def __init__(self, *keys: Union[str,int,Sequence[Union[str,int]]]) -> None:
"""Instantiate a Sort filter.
Args:
*keys: The context items that should be sorted on.
"""
self._keys = list(Flatten().filter([list(keys)]))[0]
@property
def params(self) -> Dict[str, Any]:
return { "sort": self._keys or '*' }
def filter(self, interactions: Iterable[Interaction]) -> | |
#!/usr/bin/env python
# coding=utf-8
import eventlet
# BGPSpeaker needs sockets patched -> breaks SRL registration if done too late
# eventlet.monkey_patch( socket=True, select=True ) # adding only ( socket=True ) allows SRL, but then BGP doesn't work :(
eventlet.monkey_patch() # need thread too
# Google core libraries don't support eventlet; workaround
import grpc
from grpc.experimental import eventlet as grpc_eventlet
grpc_eventlet.init_eventlet() # Fix gRPC eventlet interworking, early
# May need to start a separate Python process for BGP
from datetime import datetime, timezone, timedelta
import time
import sys
import logging
import socket
import os
import re
import struct
import ipaddress
import json
import traceback
import subprocess
from threading import Timer
import pwd
# sys.path.append('/usr/lib/python3.6/site-packages/sdk_protos')
import sdk_service_pb2
import sdk_service_pb2_grpc
import lldp_service_pb2
import config_service_pb2
import sdk_common_pb2
# Local gNMI connection
from pygnmi.client import gNMIclient, telemetryParser
# To report state back
import telemetry_service_pb2
import telemetry_service_pb2_grpc
from logging.handlers import RotatingFileHandler
#
# BGP imports
#
import netns
import signal
from ryu.services.protocols.bgp.bgpspeaker import (BGPSpeaker,
EVPN_MULTICAST_ETAG_ROUTE,
EVPN_MAC_IP_ADV_ROUTE,
RF_L2_EVPN,
PMSI_TYPE_INGRESS_REP)
from ryu.lib.packet.bgp import (EvpnNLRI, BGPEvpnMacMobilityExtendedCommunity,
BGP_ATTR_TYPE_ORIGINATOR_ID,
BGP_ATTR_TYPE_EXTENDED_COMMUNITIES)
# Ryu has its own threading model
from ryu.lib import hub
#
# eBPF ARP filter imports
#
from bcc import BPF
from ryu.lib.packet import packet, ipv4, udp, vxlan, ethernet, arp, tcp
from ryu.ofproto import ether, inet
SO_TIMESTAMP = 29 # us precision
SO_TIMESTAMPNS = 35 # Higher ns precision
############################################################
## Agent will start with this name
############################################################
agent_name='static_vxlan_agent'
############################################################
## Open a GRPC channel to connect to sdk_mgr on the dut
## sdk_mgr will be listening on 50053
############################################################
#channel = grpc.insecure_channel('unix:///opt/srlinux/var/run/sr_sdk_service_manager:50053')
channel = grpc.insecure_channel('127.0.0.1:50053')
metadata = [('agent_name', agent_name)]
stub = sdk_service_pb2_grpc.SdkMgrServiceStub(channel)
# Try global gNMI connection
#gnmi = gNMIclient(target=('unix:///opt/srlinux/var/run/sr_gnmi_server',57400),
# username="admin",password="<PASSWORD>",
# insecure=True, debug=False)
#gnmi.connect()
############################################################
## Subscribe to required event
## This proc handles subscription of: Interface, LLDP,
## Route, Network Instance, Config
############################################################
def Subscribe(stream_id, option):
# XXX Does not pass pylint
op = sdk_service_pb2.NotificationRegisterRequest.AddSubscription
if option == 'cfg':
entry = config_service_pb2.ConfigSubscriptionRequest()
# entry.key.js_path = '.' + agent_name
request = sdk_service_pb2.NotificationRegisterRequest(op=op, stream_id=stream_id, config=entry)
subscription_response = stub.NotificationRegister(request=request, metadata=metadata)
logging.info( f'Status of subscription response for {option}:: {subscription_response.status}' )
############################################################
## Subscribe to all the events that Agent needs
############################################################
def Subscribe_Notifications(stream_id):
'''
    The agent will receive notifications for whatever is subscribed here.
'''
if not stream_id:
logging.info("Stream ID not sent.")
return False
# Subscribe to config changes, first
Subscribe(stream_id, 'cfg')
def Add_Telemetry( path_obj_list ):
telemetry_stub = telemetry_service_pb2_grpc.SdkMgrTelemetryServiceStub(channel)
telemetry_update_request = telemetry_service_pb2.TelemetryUpdateRequest()
for js_path,obj in path_obj_list:
telemetry_info = telemetry_update_request.state.add()
telemetry_info.key.js_path = js_path
telemetry_info.data.json_content = json.dumps(obj)
logging.info(f"Telemetry_Update_Request :: {telemetry_update_request}")
telemetry_response = telemetry_stub.TelemetryAddOrUpdate(request=telemetry_update_request, metadata=metadata)
return telemetry_response
def Remove_Telemetry(js_paths):
telemetry_stub = telemetry_service_pb2_grpc.SdkMgrTelemetryServiceStub(channel)
telemetry_del_request = telemetry_service_pb2.TelemetryDeleteRequest()
for path in js_paths:
telemetry_key = telemetry_del_request.key.add()
telemetry_key.js_path = path
logging.info(f"Telemetry_Delete_Request :: {telemetry_del_request}")
telemetry_response = telemetry_stub.TelemetryDelete(request=telemetry_del_request, metadata=metadata)
return telemetry_response
def Configure_BFD(state,remote_evpn_vtep):
logging.info(f"Configure_BFD :: remote_evpn_vtep={remote_evpn_vtep}")
nh_group_name = f"vtep-{remote_evpn_vtep}"
static_route = {
"static-routes": {
"route": [
{
"prefix": f"{remote_evpn_vtep}/32",
"admin-state": "enable",
"next-hop-group": nh_group_name
}
]
},
"next-hop-groups": {
"group": [
{
"name": nh_group_name,
"nexthop": [
{
"index": 0,
"ip-address": f"{remote_evpn_vtep}",
"admin-state": "enable",
"failure-detection": {
"enable-bfd": {
# XXX Need to specify local VTEP IP in config, TODO read this
# using c.get( system0.0 IP )
"local-address": f"{state.params[ 'peer_address' ]}"
}
}
}
]
}
]
}
}
updates = [
('/bfd/subinterface[name=system0.0]', { 'admin-state': 'enable' } ),
('/network-instance[name=default]', static_route)
]
with gNMIclient(target=('unix:///opt/srlinux/var/run/sr_gnmi_server',57400),
username="admin",password="<PASSWORD>",insecure=True) as c:
c.set( encoding='json_ietf', update=updates )
#global gnmi
#gnmi.set( encoding='json_ietf', update=updates )
def AnnounceMulticastRoute( state, rd, vtep_ip, vni ):
state.speaker.evpn_prefix_add(
route_type=EVPN_MULTICAST_ETAG_ROUTE,
route_dist=rd,
# esi=0, # should be ignored
ethernet_tag_id=0,
# mac_addr='00:11:22:33:44:55', # not relevant for MC route
ip_addr=state.params['source_address'], # originator == proxy IP
tunnel_type='vxlan',
vni=vni, # Sent as label
gw_ip_addr=vtep_ip,
next_hop=vtep_ip, # on behalf of remote VTEP
pmsi_tunnel_type=PMSI_TYPE_INGRESS_REP,
# Added via patch
tunnel_endpoint_ip=vtep_ip
)
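# --- Hedged usage sketch (added comment, not part of the original agent) ---
# Announcing an EVPN type-3 (inclusive multicast) route on behalf of a static
# VTEP; the route distinguisher and addresses below are made-up example values:
#
#   AnnounceMulticastRoute( state, rd="192.0.2.1:57069", vtep_ip="192.0.2.1", vni=10000 )
#
# Because next_hop and tunnel_endpoint_ip are both set to the static VTEP,
# EVPN peers should send BUM traffic to that VTEP rather than to this proxy.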
def WithdrawMulticastRoute( state, rd, vtep_ip ):
try:
state.speaker.evpn_prefix_del(
route_type=EVPN_MULTICAST_ETAG_ROUTE, # RT3
route_dist=rd, # original RD
# vni=mac_vrf['vni'], # not used/allowed in withdraw
ethernet_tag_id=0
)
except Exception as ex:
logging.error( ex )
def AnnounceRoute( state, mac_vrf, vtep_ip, mac, ip, mobility_seq ):
state.speaker.evpn_prefix_add(
route_type=EVPN_MAC_IP_ADV_ROUTE, # RT2
route_dist=AutoRouteDistinguisher(vtep_ip,mac_vrf),
esi=0, # Single homed
ethernet_tag_id=0,
mac_addr=mac,
ip_addr=ip if state.params['include_ip'] else None, # Enables remote peers to perform proxy ARP
next_hop=vtep_ip, # on behalf of remote VTEP
tunnel_type='vxlan',
vni=mac_vrf['vni'],
gw_ip_addr=vtep_ip,
mac_mobility=mobility_seq # Sequence number for MAC mobility
)
def WithdrawRoute( state, mac_vrf, vtep_ip, mac, ip=None ):
try:
state.speaker.evpn_prefix_del(
route_type=EVPN_MAC_IP_ADV_ROUTE, # RT2
route_dist=AutoRouteDistinguisher(vtep_ip,mac_vrf), # original RD
# vni=mac_vrf['vni'], # not used/allowed in withdraw
ethernet_tag_id=0,
mac_addr=mac,
ip_addr=ip if state.params['include_ip'] else None
)
except Exception as ex:
logging.error( ex )
# Also remove telemetry
js_path = f'.vxlan_proxy.static_vtep{{.vtep_ip=="{vtep_ip}"}}.mac_vrf{{.name=="{mac_vrf["name"]}"}}.mac{{.address=="{mac}"}}'
Remove_Telemetry( [js_path] )
def UpdateMACVRF( state, mac_vrf, new_vni=None, new_evi=None ):
logging.info( f"UpdateMACVRF mac_vrf={mac_vrf} new_vni={new_vni} new_evi={new_evi}" )
if new_evi:
# Clean up old VTEPs, RDs need to be changed
for static_vtep in list( mac_vrf['vxlan_vteps'].keys() ):
Remove_Static_VTEP( state, mac_vrf, static_vtep, clear_macs=False )
mac_vrf['evi'] = new_evi
if new_vni:
# Clean up old EVPN routes, VNI needs to be changed
for vtep_ip,macs in mac_vrf['vxlan_vteps'].items():
rd = AutoRouteDistinguisher( vtep_ip, mac_vrf )
WithdrawMulticastRoute( state, rd, vtep_ip )
for mac,status in macs.items():
if status=='static_announced':
WithdrawRoute( state, mac_vrf, vtep_ip, mac )
mac_vrf['vxlan_vteps'][ vtep_ip ][ mac ] = 'static'
mac_vrf['vni'] = new_vni
# Make sure all VTEPs exist
if mac_vrf['admin_state'] == "enable":
for vtep_ip,macs in mac_vrf['vxlan_vteps'].items():
Add_Static_VTEP( state, mac_vrf, vtep_ip )
for mac,status in macs.items():
if status != 'static_announced' or new_evi:
AnnounceRoute( state, mac_vrf, vtep_ip, mac, ip=None, mobility_seq=-1 )
mac_vrf['vxlan_vteps'][ vtep_ip ][ mac ] = 'static_announced'
else:
logging.info( "UpdateMACVRF: admin-state not 'enable'" )
# Updates a single static VTEP
def UpdateMACVRF_StaticVTEP( state, mac_vrf, vtep_ip, macs ):
logging.info( f"UpdateMACVRF_StaticVTEP mac_vrf={mac_vrf} vtep_ip={vtep_ip} macs={macs}" )
vteps = mac_vrf['vxlan_vteps']
vtep = vteps[ vtep_ip ] if vtep_ip in vteps else None
if hasattr( state, 'speaker' ): # BGP running?
if vtep:
# Clean up old MAC routes
macs_to_keep = list( macs.keys() )
for mac in vtep.keys():
if mac not in macs_to_keep:
WithdrawRoute( state, mac_vrf, vtep_ip, mac )
else:
Add_Static_VTEP( state, mac_vrf, vtep_ip )
# Announce new MACs
for mac in macs.keys():
if not vtep or mac not in vtep or vtep[mac] != 'static_announced':
AnnounceRoute( state, mac_vrf, vtep_ip, mac, ip=None, mobility_seq=-1 )
macs[ mac ] = 'static_announced'
vteps[ vtep_ip ] = macs
#
# Runs BGP EVPN as a separate thread, using Ryu hub
#
#from threading import Thread
#class BGPEVPNThread(Thread):
# def __init__(self):
# Thread.__init__(self)
def runBGPThread( state ):
LOCAL_LOOPBACK = state.params['source_address']
NEIGHBOR = state.params[ 'peer_address' ]
if NEIGHBOR=="127.0.0.1": # Connect to 127.0.0.1 does not work
NEIGHBOR = LOCAL_LOOPBACK
evpn_vteps = {}
def best_path_change_handler(event):
logging.info( f'BGP best path changed: {event.path} prefix={event.prefix} NLRI={event.path.nlri}' )
# event.remote_as, event.prefix, event.nexthop, event.is_withdraw, event.path )
try:
# Could remove VTEP IP upon withdraw too
if not event.is_withdraw:
originator_id = event.path.get_pattr(BGP_ATTR_TYPE_ORIGINATOR_ID)
if event.path.nlri.type == EvpnNLRI.INCLUSIVE_MULTICAST_ETHERNET_TAG:
# SRL EVPN VTEP does not normally include an 'originator' attribute
if originator_id and originator_id.value != event.nexthop:
logging.info( f"Detected another EVPN proxy: {originator_id.value}" )
# TODO if (state.enabled), remove upon withdraw
# Fails: timeout
# Configure_BFD(state,originator_id.value)
else:
logging.info( f"Multicast route from EVPN VTEP: {event.nexthop}" )
evpn_vteps[ event.nexthop ] = event.remote_as
# Could withdraw routes and remove static MACs if this IP matches
# a static vtep in our configuration
data = { 'evpn_vteps': sorted(evpn_vteps.keys()) }
Add_Telemetry( [('.vxlan_proxy', data)] )
# check for RT2 MAC moves between static VTEPs and EVPN VTEPs
# event.label is reduced to the 20-bit MPLS label
elif hasattr( event.path.nlri, 'vni'):
vni = event.path.nlri.vni
if vni not in state.mac_vrfs:
logging.warning( f"BGP: No mac-vrf mapping for VNI: {vni}" )
return
mac_vrf = state.mac_vrfs[ vni ]
logging.info( f"Received EVPN route update for VNI {vni}: {mac_vrf}" )
mac = event.path.nlri.mac_addr
if mac in mac_vrf['macs']:
cur = mac_vrf['macs'][ mac ]
# Don't bother checking IP; SRL MAC-VRF doesn't send it
# Only other proxies do
if cur['vtep'] != event.nexthop and cur['vtep'] != 'tbd':
logging.info( f"EVPN MAC-move detected {cur['vtep']} -> {event.nexthop}" )
# if this is from an EVPN VTEP, withdraw our route - our job is done
if not originator_id or originator_id.value == event.nexthop:
logging.info( f"Removing MAC moved to EVPN VTEP {event.nexthop} from EVPN proxy: {mac}" )
WithdrawRoute( state, mac_vrf, cur['vtep'], mac, cur['ip'] )
del mac_vrf['macs'][ mac ]
# else (from other EVPN proxy) only withdraw if VTEP IP changed, but don't remove MAC
# as we need to keep track of the mobility sequence number
elif originator_id and originator_id.value != event.nexthop:
# Check Mobility sequence - route may be stale
def GetMACMobility():
ext_comms = event.path.get_pattr(BGP_ATTR_TYPE_EXTENDED_COMMUNITIES)
for c in ext_comms.communities:
if isinstance( c, BGPEvpnMacMobilityExtendedCommunity ):
return c.sequence_number
return -1 # not present
if GetMACMobility() < cur['seq']:
logging.info( f"Local mobility sequence {cur['seq']} higher than peer - keeping route" )
return
logging.info( f"Withdrawing MAC | |
(self):
return ###
c = self.c ; log = c.frame.log ; tabName = self.tabName
parentFrame = log.frameDict.get(tabName)
w = log.textDict.get(tabName)
w.pack_forget()
# Set the common background color.
bg = c.config.getColor('log_pane_Spell_tab_background_color') or 'LightSteelBlue2'
#@+<< Create the outer frames >>
#@+node:ekr.20090126093408.62: *6* << Create the outer frames >>
self.outerScrolledFrame = Pmw.ScrolledFrame(
parentFrame,usehullsize = 1)
self.outerFrame = outer = self.outerScrolledFrame.component('frame')
self.outerFrame.configure(background=bg)
for z in ('borderframe','clipper','frame','hull'):
self.outerScrolledFrame.component(z).configure(
relief='flat',background=bg)
#@-<< Create the outer frames >>
#@+<< Create the text and suggestion panes >>
#@+node:ekr.20090126093408.63: *6* << Create the text and suggestion panes >>
f2 = Tk.Frame(outer,bg=bg)
f2.pack(side='top',expand=0,fill='x')
self.wordLabel = Tk.Label(f2,text="Suggestions for:")
self.wordLabel.pack(side='left')
self.wordLabel.configure(font=('verdana',10,'bold'))
fpane = Tk.Frame(outer,bg=bg,bd=2)
fpane.pack(side='top',expand=1,fill='both')
self.listBox = Tk.Listbox(fpane,height=6,width=10,selectmode="single")
self.listBox.pack(side='left',expand=1,fill='both')
self.listBox.configure(font=('verdana',11,'normal'))
listBoxBar = Tk.Scrollbar(fpane,name='listBoxBar')
bar, txt = listBoxBar, self.listBox
txt ['yscrollcommand'] = bar.set
bar ['command'] = txt.yview
bar.pack(side='right',fill='y')
#@-<< Create the text and suggestion panes >>
#@+<< Create the spelling buttons >>
#@+node:ekr.20090126093408.64: *6* << Create the spelling buttons >>
# Create the alignment panes
buttons1 = Tk.Frame(outer,bd=1,bg=bg)
buttons2 = Tk.Frame(outer,bd=1,bg=bg)
buttons3 = Tk.Frame(outer,bd=1,bg=bg)
for w in (buttons1,buttons2,buttons3):
w.pack(side='top',expand=0,fill='x')
buttonList = [] ; font = ('verdana',9,'normal') ; width = 12
for frame, text, command in (
(buttons1,"Find",self.onFindButton),
(buttons1,"Add",self.onAddButton),
(buttons2,"Change",self.onChangeButton),
(buttons2,"Change, Find",self.onChangeThenFindButton),
(buttons3,"Ignore",self.onIgnoreButton),
(buttons3,"Hide",self.onHideButton),
):
b = Tk.Button(frame,font=font,width=width,text=text,command=command)
b.pack(side='left',expand=0,fill='none')
buttonList.append(b)
# Used to enable or disable buttons.
(self.findButton,self.addButton,
self.changeButton, self.changeFindButton,
self.ignoreButton, self.hideButton) = buttonList
#@-<< Create the spelling buttons >>
# Pack last so buttons don't get squished.
self.outerScrolledFrame.pack(expand=1,fill='both',padx=2,pady=2)
#@+node:ekr.20090126093408.65: *5* Event handlers
#@+node:ekr.20090126093408.66: *6* onAddButton
def onAddButton(self):
"""Handle a click in the Add button in the Check Spelling dialog."""
self.handler.add()
#@+node:ekr.20090126093408.67: *6* onChangeButton & onChangeThenFindButton
def onChangeButton(self,event=None):
"""Handle a click in the Change button in the Spell tab."""
self.handler.change()
self.updateButtons()
def onChangeThenFindButton(self,event=None):
"""Handle a click in the "Change, Find" button in the Spell tab."""
        if self.handler.change():
            self.handler.find()
self.updateButtons()
#@+node:ekr.20090126093408.68: *6* onFindButton
def onFindButton(self):
"""Handle a click in the Find button in the Spell tab."""
c = self.c
self.handler.find()
self.updateButtons()
c.invalidateFocus()
c.bodyWantsFocus()
#@+node:ekr.20090126093408.69: *6* onHideButton
def onHideButton(self):
"""Handle a click in the Hide button in the Spell tab."""
self.handler.hide()
#@+node:ekr.20090126093408.70: *6* onIgnoreButton
def onIgnoreButton(self,event=None):
"""Handle a click in the Ignore button in the Check Spelling dialog."""
self.handler.ignore()
#@+node:ekr.20090126093408.71: *6* onMap
def onMap (self, event=None):
"""Respond to a Tk <Map> event."""
self.update(show= False, fill= False)
#@+node:ekr.20090126093408.72: *6* onSelectListBox
def onSelectListBox(self, event=None):
"""Respond to a click in the selection listBox."""
c = self.c
self.updateButtons()
c.bodyWantsFocus()
#@+node:ekr.20090126093408.73: *5* Helpers
#@+node:ekr.20090126093408.74: *6* bringToFront
def bringToFront (self):
self.c.frame.log.selectTab('Spell')
#@+node:ekr.20090126093408.75: *6* fillbox
def fillbox(self, alts, word=None):
"""Update the suggestions listBox in the Check Spelling dialog."""
self.suggestions = alts
if not word:
word = ""
self.wordLabel.configure(text= "Suggestions for: " + word)
self.listBox.delete(0, "end")
for i in range(len(self.suggestions)):
self.listBox.insert(i, self.suggestions[i])
# This doesn't show up because we don't have focus.
if len(self.suggestions):
self.listBox.select_set(1)
#@+node:ekr.20090126093408.76: *6* getSuggestion
def getSuggestion(self):
"""Return the selected suggestion from the listBox."""
# Work around an old Python bug. Convert strings to ints.
items = self.listBox.curselection()
try:
items = map(int, items)
except ValueError: pass
if items:
n = items[0]
suggestion = self.suggestions[n]
return suggestion
else:
return None
#@+node:ekr.20090126093408.77: *6* update
def update(self,show=True,fill=False):
"""Update the Spell Check dialog."""
c = self.c
if fill:
self.fillbox([])
self.updateButtons()
if show:
self.bringToFront()
c.bodyWantsFocus()
#@+node:ekr.20090126093408.78: *6* updateButtons (spellTab)
def updateButtons (self):
"""Enable or disable buttons in the Check Spelling dialog."""
c = self.c ; w = c.frame.body.bodyCtrl
start, end = w.getSelectionRange()
state = g.choose(self.suggestions and start,"normal","disabled")
self.changeButton.configure(state=state)
self.changeFindButton.configure(state=state)
self.addButton.configure(state='normal')
self.ignoreButton.configure(state='normal')
#@-others
#@+node:ekr.20090126093408.79: *3* Text widgets
#@+<< baseTextWidget class >>
#@+node:ekr.20090126093408.80: *4* << baseTextWidget class >>
# Subclassing from wx.EvtHandler allows methods of this class and derived classes to be event handlers.
class baseTextWidget (wx.EvtHandler,leoFrame.baseTextWidget):
'''The base class for all wrapper classes for the Tk.Text widget.'''
#@+others
#@+node:ekr.20090126093408.81: *5* Birth & special methods (baseText)
def __init__ (self,c,baseClassName,name,widget):
self.baseClassName = baseClassName # For repr.
wx.EvtHandler.__init__(self) # Init the base class.
leoFrame.baseTextWidget.__init__(self,c,baseClassName,name,widget)
self.name = name
self.virtualInsertPoint = None
self.widget = widget
def __repr__(self):
return '%s: %s' % (self.baseClassName,id(self))
def GetName(self):
return self.name
#@+node:ekr.20090126093408.82: *5* baseTextWidget.onChar
# Don't even think of using key up/down events.
# They don't work reliably and don't support auto-repeat.
def onChar (self, event):
c = self.c
keycode = event.GetKeyCode()
event.leoWidget = self
keysym = g.app.gui.eventKeysym(event)
# if keysym: g.trace('base text: keysym:',repr(keysym))
if keysym:
c.k.masterKeyHandler(event)
#@+node:ekr.20090126093408.83: *5* oops
def oops (self):
print('wxGui baseTextWidget oops:',self,g.callers(),
'must be overridden in subclass')
#@-others
#@-<< baseTextWidget class >>
#@+others
#@+node:ekr.20090126093408.84: *4* headlineWidget class (baseTextWidget)
class headlineWidget (baseTextWidget):
'''A class to make a wxWidgets headline look like a plainTextWidget.'''
#@+others
#@+node:ekr.20090126093408.85: *5* Birth & special methods
def __init__ (self,c,treeCtrl,id):
self.c = c
self.tree = treeCtrl
# Init the base class.
baseTextWidget.__init__(self,c,
baseClassName='headlineWidget',
name='headline',widget=self)
self.init(id)
def init (self,id):
self.id = id
self.ins = 0
self.sel = 0,0
#@+node:ekr.20090126093408.86: *5* wx widget bindings
def _appendText(self,s):
# g.trace(s)
s1 = self.tree.GetItemText(self.id)
self.tree.SetItemText(self.id,s1+s)
self.ins = len(s1) + len(s)
self.sel = self.ins,self.ins
def _get(self,i,j):
s = self.tree.GetItemText(self.id)
return s[i:j]
def _getAllText(self):
return self.tree.GetItemText(self.id)
def _getFocus(self):
return self.tree.FindFocus()
def _getInsertPoint(self):
# g.trace(self.ins)
return self.ins
def _getLastPosition(self):
s = self.tree.GetItemText(self.id)
# g.trace(len(s))
return len(s)
def _getSelectedText(self):
        s = self.tree.GetItemText(self.id)
        i,j = self.sel
        return s[i:j]
def _getSelectionRange(self):
# g.trace(self.sel)
return self.sel
def _hitTest(self,pos):
pass
def _insertText(self,i,s):
s2 = self.tree.GetItemText(self.id)
s3 = s2[:i] + s + s2[i:]
self.tree.SetItemText(self.id,s3)
#g.trace('i',i,'s3',s3)
self.ins = len(s3)
self.sel = self.ins,self.ins
def _see(self,i):
pass
def _setAllText(self,s):
#g.trace(s,g.callers())
self.tree.SetItemText(self.id,s)
self.ins = len(s)
self.sel = self.ins,self.ins
def _setBackgroundColor(self,color):
pass
def _setFocus(self):
g.trace('headline widget (does nothing)')
def _setInsertPoint(self,i):
# g.trace(i)
self.ins = i
self.sel = i,i
def _setSelectionRange(self,i,j):
# g.trace(i,j)
self.sel = i,j
if i == j: self.ins = i
#@-others
#@+node:ekr.20090126093408.87: *4* plainTextWidget (baseTextWidget)
class plainTextWidget (baseTextWidget):
'''A class wrapping wx.TextCtrl widgets.'''
#@+others
#@+node:ekr.20090126093408.88: *5* plainTextWidget.__init__
def __init__ (self,c,parent,multiline=True,*args,**keys):
w = self
self.c = c
self.baseClassName = 'plainTextWidget'
# Create the actual gui widget.
style = g.choose(multiline,wx.TE_MULTILINE,0)
self.widget = wx.TextCtrl(parent,id=-1,style=style,*args,**keys)
# Inject the leo_wrapper_class ivar.
self.widget.leo_wrapper_object = self
# Init the base class.
name = keys.get('name') or '<unknown plainTextWidget>'
baseTextWidget.__init__(self,c,
baseClassName=self.baseClassName,name=name,widget=self.widget)
wx.EVT_CHAR (w.widget,self.onChar)
self.defaultFont = font = wx.Font(pointSize=10,
family = wx.FONTFAMILY_TELETYPE, # wx.FONTFAMILY_ROMAN,
style = wx.FONTSTYLE_NORMAL,
weight = wx.FONTWEIGHT_NORMAL,)
#@+node:ekr.20090126093408.89: *5* bindings (TextCtrl)
# Specify the names of widget-specific methods.
# These particular names are the names of wx.TextCtrl methods.
def _appendText(self,s): return self.widget.AppendText(s)
def _get(self,i,j): return self.widget.GetRange(i,j)
def _getAllText(self): return self.widget.GetValue()
def _getFocus(self): return self.widget.FindFocus()
def _getInsertPoint(self): return self.widget.GetInsertionPoint()
def _getLastPosition(self): return self.widget.GetLastPosition()
def _getSelectedText(self): return self.widget.GetStringSelection()
def _getSelectionRange(self): return self.widget.GetSelection()
def _hitTest(self,pos): return self.widget.HitTest(pos)
def _insertText(self,i,s): self.setInsertPoint(i) ; return self.widget.WriteText(s)
def _scrollLines(self,n): return self.widget.ScrollLines(n)
def _see(self,i): return self.widget.ShowPosition(i)
def _setAllText(self,s): return self.widget.ChangeValue(s)
def _setBackgroundColor(self,color): return self.widget.SetBackgroundColour(color)
def _setFocus(self): return self.widget.SetFocus()
def _setInsertPoint(self,i): return self.widget.SetInsertionPoint(i)
def _setSelectionRange(self,i,j): return self.widget.SetSelection(i,j)
#@-others
#@+node:ekr.20090126093408.90: *4* richTextWidget (baseTextWidget)
class richTextWidget (baseTextWidget):
'''A class wrapping wx.richtext.RichTextCtrl widgets.'''
#@+others
#@+node:ekr.20090126093408.91: *5* richTextWidget.__init__
def __init__ (self,c,parent,*args,**keys):
w = self
self.c = c
self.baseClassName = 'richTextWidget'
# Init the base class, removing the name keyword.
name = keys.get('name') or '<unknown richTextWidget>'
if keys.get('name'): del keys['name']
# Create the actual gui widget.
self.widget = richtext.RichTextCtrl(parent,*args,**keys)
# Inject the leo_wrapper_class ivar.
self.widget.leo_wrapper_object = self
wx.EVT_CHAR (w.widget,self.onChar)
baseTextWidget.__init__(self,c,
baseClassName=self.baseClassName,name=name,widget=self.widget)
self.defaultFont = font = wx.Font(pointSize=10,
family = wx.FONTFAMILY_TELETYPE, # wx.FONTFAMILY_ROMAN,
style = wx.FONTSTYLE_NORMAL,
weight = wx.FONTWEIGHT_NORMAL,
)
#@+node:ekr.20090126093408.92: *5* bindings (RichTextCtrl)
def _appendText(self,s): return self.widget.AppendText(s)
def _get(self,i,j): return self.widget.GetRange(i,j)
def _getAllText(self): return self.widget.GetValue()
def _getFocus(self): return self.widget.FindFocus()
def _getInsertPoint(self): return self.widget.GetInsertionPoint()
def _getLastPosition(self): return self.widget.GetLastPosition()
def _getSelectedText(self): return self.widget.GetStringSelection()
def _getSelectionRange(self): return self.widget.GetSelection()
def _getYScrollPosition(self): return 0,0 # Could also return None.
def _hitTest(self,pos): return self.widget.HitTest(pos)
def _insertText(self,i,s): self.setInsertPoint(i) ; return self.widget.WriteText(s)
def _scrollLines(self,n): return self.widget.ScrollLines(n)
def _see(self,i): return self.widget.ShowPosition(i)
def _setAllText(self,s): self.widget.Clear() ; self.widget.WriteText(s)
def _setBackgroundColor(self,color): return self.widget.SetBackgroundColour(color)
def _setFocus(self): return self.widget.SetFocus()
def _setInsertPoint(self,i): return self.widget.SetInsertionPoint(i)
def _setSelectionRange(self,i,j): return self.widget.SetSelection(i,j)
def _setYScrollPosition(self,i): pass
#@-others
#@+node:ekr.20090126093408.93: *4* stcWidget (baseTextWidget)
class stcWidget (baseTextWidget):
'''A class to wrap the Tk.Text widget.
Translates Python (integer) indices to and from Tk (string) indices.
    This class inherits almost all tkText methods: you can use them as usual.'''
# The signatures of tag_add and insert are different from the Tk.Text signatures.
#@+others
#@+node:ekr.20090126093408.94: *5* stcWidget.__init__
def __init__ (self,c,parent,*args,**keys):
self.c = c
self.baseClassName | |
6, 2]), axis=1)
eye_dist = tf.sqrt(tf.reduce_sum(tf.square(p1 - p2), axis=1))
return landmarks_rms_err / eye_dist
else:
return landmarks_rms_err
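        # Hedged note (added comment): when eye-distance normalization is used,
        # the value returned above is landmarks_rms_err / ||p1 - p2||, i.e. the
        # per-image RMS landmark error divided by the inter-ocular distance --
        # a common normalization for facial landmark benchmarks (NME).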
        if self.mode == 'TRAIN':
# calculate L2 loss between ideal and predicted heatmaps
primary_maps_diff = self.pred_hm_p - self.heatmaps_small
fusion_maps_diff = self.pred_hm_f - self.heatmaps_small
upsample_maps_diff = self.pred_hm_u - self.heatmaps
self.l2_primary = tf.reduce_mean(tf.square(primary_maps_diff))
self.l2_fusion = tf.reduce_mean(tf.square(fusion_maps_diff))
self.l2_upsample = tf.reduce_mean(tf.square(upsample_maps_diff))
self.total_loss = 1000.*(self.l_weight_primary * self.l2_primary + self.l_weight_fusion * self.l2_fusion +
self.l_weight_upsample * self.l2_upsample)
# add weight decay
self.total_loss += self.reg * tf.add_n(
[tf.nn.l2_loss(v) for v in tf.trainable_variables() if 'bias' not in v.name])
# compute normalized mean error on gt vs. predicted landmarks (for validation)
if self.compute_nme:
self.nme_loss = tf.reduce_mean(nme_norm_eyes(self.train_pred_lms, self.train_lms))
if self.valid_size > 0 and self.compute_nme:
self.valid_nme_loss = tf.reduce_mean(nme_norm_eyes(self.valid_pred_lms, self.valid_lms))
elif self.mode == 'TEST' and self.compute_nme:
self.nme_per_image = nme_norm_eyes(self.pred_lms, self.lms)
self.nme_loss = tf.reduce_mean(self.nme_per_image)
def predict_valid_landmarks_in_batches(self, images, session):
num_images=int(images.shape[0])
num_batches = int(1.*num_images/self.batch_size)
if num_batches == 0:
batch_size = num_images
num_batches = 1
else:
batch_size = self.batch_size
for j in range(num_batches):
batch_images = images[j * batch_size:(j + 1) * batch_size,:,:,:]
batch_maps_pred = session.run(self.pred_hm_u, {self.images: batch_images})
batch_heat_maps_to_landmarks_alloc_once(
batch_maps=batch_maps_pred, batch_landmarks=self.valid_landmarks_pred[j * batch_size:(j + 1) * batch_size, :, :],
batch_size=batch_size,image_size=self.image_size,num_landmarks=self.num_landmarks)
        remainder = num_images-num_batches*batch_size
        if remainder > 0:
            batch_images = images[-remainder:, :, :, :]
            batch_maps_pred = session.run(self.pred_hm_u, {self.images: batch_images})
            batch_heat_maps_to_landmarks_alloc_once(
                batch_maps=batch_maps_pred,
                batch_landmarks=self.valid_landmarks_pred[-remainder:, :, :],
                batch_size=remainder, image_size=self.image_size, num_landmarks=self.num_landmarks)
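    # Hedged worked example (added comment): with 130 validation images and a
    # batch size of 32, the loop above runs int(130/32) = 4 full batches
    # (128 images) and the remainder branch handles the final 130 - 4*32 = 2
    # images, so self.valid_landmarks_pred is filled exactly once per image.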
def create_summary_ops(self):
"""create summary ops for logging"""
# loss summary
l2_primary = tf.summary.scalar('l2_primary', self.l2_primary)
l2_fusion = tf.summary.scalar('l2_fusion', self.l2_fusion)
l2_upsample = tf.summary.scalar('l2_upsample', self.l2_upsample)
l_total = tf.summary.scalar('l_total', self.total_loss)
self.batch_summary_op = tf.summary.merge([l2_primary,l2_fusion,l2_upsample,l_total])
if self.compute_nme:
nme = tf.summary.scalar('nme', self.nme_loss)
self.batch_summary_op = tf.summary.merge([self.batch_summary_op, nme])
if self.log_histograms:
var_summary = [tf.summary.histogram(var.name,var) for var in tf.trainable_variables()]
grads = tf.gradients(self.total_loss, tf.trainable_variables())
grads = list(zip(grads, tf.trainable_variables()))
grad_summary = [tf.summary.histogram(var.name+'/grads',grad) for grad,var in grads]
activ_summary = [tf.summary.histogram(layer.name, layer) for layer in self.all_layers]
self.batch_summary_op = tf.summary.merge([self.batch_summary_op, var_summary, grad_summary, activ_summary])
if self.valid_size > 0 and self.compute_nme:
self.valid_summary = tf.summary.scalar('valid_nme', self.valid_nme_loss)
if self.sample_to_log:
img_map_summary_small = tf.summary.image('compare_map_to_gt_small', self.log_image_map_small)
img_map_summary = tf.summary.image('compare_map_to_gt', self.log_image_map)
if self.sample_per_channel:
map_channels_summary = tf.summary.image('compare_map_channels_to_gt', self.log_map_channels)
map_channels_summary_small = tf.summary.image('compare_map_channels_to_gt_small',
self.log_map_channels_small)
self.img_summary = tf.summary.merge(
[img_map_summary, img_map_summary_small,map_channels_summary,map_channels_summary_small])
else:
self.img_summary = tf.summary.merge([img_map_summary, img_map_summary_small])
if self.valid_size >= self.sample_grid:
img_map_summary_valid_small = tf.summary.image('compare_map_to_gt_small_valid', self.log_image_map_small)
img_map_summary_valid = tf.summary.image('compare_map_to_gt_valid', self.log_image_map)
if self.sample_per_channel:
map_channels_summary_valid_small = tf.summary.image('compare_map_channels_to_gt_small_valid',
self.log_map_channels_small)
map_channels_summary_valid = tf.summary.image('compare_map_channels_to_gt_valid',
self.log_map_channels)
self.img_summary_valid = tf.summary.merge(
[img_map_summary_valid,img_map_summary_valid_small,map_channels_summary_valid,
map_channels_summary_valid_small])
else:
self.img_summary_valid = tf.summary.merge([img_map_summary_valid, img_map_summary_valid_small])
def train(self):
# set random seed
tf.set_random_seed(1234)
np.random.seed(1234)
# build a graph
# add placeholders
self.add_placeholders()
# build model
self.build_model()
# create loss ops
self.create_loss_ops()
# create summary ops
self.create_summary_ops()
# create optimizer and training op
global_step = tf.Variable(0, trainable=False)
lr = tf.train.exponential_decay(self.learning_rate,global_step, self.step, self.gamma, staircase=True)
if self.adam_optimizer:
optimizer = tf.train.AdamOptimizer(lr)
else:
optimizer = tf.train.MomentumOptimizer(lr, self.momentum)
train_op = optimizer.minimize(self.total_loss,global_step=global_step)
with tf.Session(config=self.config) as sess:
tf.global_variables_initializer().run()
# load pre trained weights if load_pretrain==True
if self.load_pretrain:
                print('')
print('*** loading pre-trained weights from: '+self.pre_train_path+' ***')
if self.load_primary_only:
print('*** loading primary-net only ***')
primary_var = [v for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) if
('deconv_' not in v.name) and ('_fsn_' not in v.name)]
loader = tf.train.Saver(var_list=primary_var)
else:
loader = tf.train.Saver()
loader.restore(sess, self.pre_train_path)
print("*** Model restore finished, current global step: %d" % global_step.eval())
# for fine-tuning, choose reset_training_op==True. when resuming training, reset_training_op==False
if self.reset_training_op:
print ("resetting optimizer and global step")
opt_var_list = [optimizer.get_slot(var, name) for name in optimizer.get_slot_names()
for var in tf.global_variables() if optimizer.get_slot(var, name) is not None]
opt_var_list_init = tf.variables_initializer(opt_var_list)
opt_var_list_init.run()
sess.run(global_step.initializer)
# create model saver and file writer
summary_writer = tf.summary.FileWriter(logdir=self.save_log_path, graph=tf.get_default_graph())
saver = tf.train.Saver()
print('\n*** Start Training ***')
# initialize some variables before training loop
resume_step = global_step.eval()
num_train_images = len(self.img_menpo_list)
batches_in_epoch = int(float(num_train_images) / float(self.batch_size))
epoch = int(resume_step / batches_in_epoch)
img_inds = self.epoch_inds_shuffle[epoch, :]
log_valid = True
log_valid_images = True
# allocate space for batch images, maps and landmarks
batch_images = np.zeros([self.batch_size, self.image_size, self.image_size, self.c_dim]).astype(
'float32')
batch_lms = np.zeros([self.batch_size, self.num_landmarks, 2]).astype('float32')
batch_lms_pred = np.zeros([self.batch_size, self.num_landmarks, 2]).astype('float32')
batch_maps_small = np.zeros((self.batch_size, int(self.image_size/4),
int(self.image_size/4), self.num_landmarks)).astype('float32')
batch_maps = np.zeros((self.batch_size, self.image_size, self.image_size,
self.num_landmarks)).astype('float32')
# create gaussians for heatmap generation
gaussian_filt_large = create_gaussian_filter(sigma=self.sigma, win_mult=self.win_mult)
gaussian_filt_small = create_gaussian_filter(sigma=1.*self.sigma/4, win_mult=self.win_mult)
# training loop
for step in range(resume_step, self.train_iter):
j = step % batches_in_epoch # j==0 if we finished an epoch
# if we finished an epoch and this isn't the first step
if step > resume_step and j == 0:
epoch += 1
img_inds = self.epoch_inds_shuffle[epoch, :] # get next shuffled image inds
log_valid = True
log_valid_images = True
if self.use_epoch_data: # if using pre-augmented data, load epoch directory
epoch_dir = os.path.join(self.epoch_data_dir, str(epoch))
self.img_menpo_list = load_menpo_image_list(
self.img_path, train_crop_dir=epoch_dir, img_dir_ns=None, mode=self.mode,
bb_dictionary=self.bb_dictionary, image_size=self.image_size, test_data=self.test_data,
augment_basic=False, augment_texture=False, augment_geom=False)
# get batch indices
batch_inds = img_inds[j * self.batch_size:(j + 1) * self.batch_size]
# load batch images, gt maps and landmarks
load_images_landmarks_approx_maps_alloc_once(
self.img_menpo_list, batch_inds, images=batch_images, maps_small=batch_maps_small,
maps=batch_maps, landmarks=batch_lms, image_size=self.image_size,
num_landmarks=self.num_landmarks, scale=self.scale, gauss_filt_large=gaussian_filt_large,
gauss_filt_small=gaussian_filt_small, win_mult=self.win_mult, sigma=self.sigma,
save_landmarks=self.compute_nme)
feed_dict_train = {self.images: batch_images, self.heatmaps: batch_maps,
self.heatmaps_small: batch_maps_small}
# train on batch
sess.run(train_op, feed_dict_train)
# save to log and print status
if step == resume_step or (step + 1) % self.print_every == 0:
# train data log
if self.compute_nme:
batch_maps_pred = sess.run(self.pred_hm_u, {self.images: batch_images})
batch_heat_maps_to_landmarks_alloc_once(
batch_maps=batch_maps_pred,batch_landmarks=batch_lms_pred,
batch_size=self.batch_size, image_size=self.image_size,
num_landmarks=self.num_landmarks)
train_feed_dict_log = {
self.images: batch_images, self.heatmaps: batch_maps,
self.heatmaps_small: batch_maps_small, self.train_lms: batch_lms,
self.train_pred_lms: batch_lms_pred}
summary, l_p, l_f, l_t, nme = sess.run(
[self.batch_summary_op, self.l2_primary, self.l2_fusion, self.total_loss,
self.nme_loss],
train_feed_dict_log)
print (
'epoch: [%d] step: [%d/%d] primary loss: [%.6f] fusion loss: [%.6f]'
' total loss: [%.6f] NME: [%.6f]' % (
epoch, step + 1, self.train_iter, l_p, l_f, l_t, nme))
else:
train_feed_dict_log = {self.images: batch_images, self.heatmaps: batch_maps,
self.heatmaps_small: batch_maps_small}
summary, l_p, l_f, l_t = sess.run(
[self.batch_summary_op, self.l2_primary, self.l2_fusion, self.total_loss],
train_feed_dict_log)
print (
'epoch: [%d] step: [%d/%d] primary loss: [%.6f] fusion loss: [%.6f] total loss: [%.6f]'
% (epoch, step + 1, self.train_iter, l_p, l_f, l_t))
summary_writer.add_summary(summary, step)
# valid data log
if self.valid_size > 0 and (log_valid and epoch % self.log_valid_every == 0) \
and self.compute_nme:
log_valid = False
self.predict_valid_landmarks_in_batches(self.valid_images_loaded, sess)
valid_feed_dict_log = {
self.valid_lms: self.valid_landmarks_loaded,
self.valid_pred_lms: self.valid_landmarks_pred}
v_summary, v_nme = sess.run([self.valid_summary, self.valid_nme_loss],
valid_feed_dict_log)
summary_writer.add_summary(v_summary, step)
print (
'epoch: [%d] step: [%d/%d] valid NME: [%.6f]' % (
epoch, step + 1, self.train_iter, v_nme))
# save model
if (step + 1) % self.save_every == 0:
saver.save(sess, os.path.join(self.save_model_path, 'deep_heatmaps'), global_step=step + 1)
print ('model/deep-heatmaps-%d saved' % (step + 1))
# save images
if step == resume_step or (step + 1) % self.sample_every == 0:
batch_maps_small_pred = sess.run(self.pred_hm_p, {self.images: batch_images})
if not self.compute_nme:
batch_maps_pred = sess.run(self.pred_hm_u, {self.images: batch_images})
batch_lms_pred = None
merged_img = merge_images_landmarks_maps_gt(
batch_images.copy(), batch_maps_pred, batch_maps, landmarks=batch_lms_pred,
image_size=self.image_size, num_landmarks=self.num_landmarks, num_samples=self.sample_grid,
scale=self.scale, circle_size=2, fast=self.fast_img_gen)
merged_img_small = merge_images_landmarks_maps_gt(
batch_images.copy(), batch_maps_small_pred, batch_maps_small,
image_size=self.image_size,
num_landmarks=self.num_landmarks, num_samples=self.sample_grid, scale=self.scale,
circle_size=0, fast=self.fast_img_gen)
if self.sample_per_channel:
map_per_channel = map_comapre_channels(
batch_images.copy(), batch_maps_pred, batch_maps, image_size=self.image_size,
num_landmarks=self.num_landmarks, scale=self.scale)
map_per_channel_small = map_comapre_channels(
batch_images.copy(), batch_maps_small_pred, batch_maps_small, image_size=int(self.image_size/4),
num_landmarks=self.num_landmarks, scale=self.scale)
if self.sample_to_log: # save heatmap images to log
if self.sample_per_channel:
summary_img = sess.run(
self.img_summary, {self.log_image_map: np.expand_dims(merged_img, 0),
self.log_map_channels: np.expand_dims(map_per_channel, 0),
self.log_image_map_small: np.expand_dims(merged_img_small, 0),
self.log_map_channels_small: np.expand_dims(map_per_channel_small, 0)})
else:
summary_img = sess.run(
self.img_summary, {self.log_image_map: np.expand_dims(merged_img, 0),
self.log_image_map_small: np.expand_dims(merged_img_small, 0)})
summary_writer.add_summary(summary_img, step)
if (self.valid_size >= self.sample_grid) and self.save_valid_images and\
(log_valid_images and epoch % self.log_valid_every == 0):
log_valid_images = False
batch_maps_small_pred_val,batch_maps_pred_val =\
sess.run([self.pred_hm_p,self.pred_hm_u],
{self.images: self.valid_images_loaded[:self.sample_grid]})
merged_img_small = merge_images_landmarks_maps_gt(
self.valid_images_loaded[:self.sample_grid].copy(), batch_maps_small_pred_val,
self.valid_gt_maps_small_loaded, image_size=self.image_size,
num_landmarks=self.num_landmarks, num_samples=self.sample_grid,
scale=self.scale, circle_size=0, fast=self.fast_img_gen)
merged_img = merge_images_landmarks_maps_gt(
self.valid_images_loaded[:self.sample_grid].copy(), batch_maps_pred_val,
self.valid_gt_maps_loaded, image_size=self.image_size,
num_landmarks=self.num_landmarks, num_samples=self.sample_grid,
scale=self.scale, circle_size=2, fast=self.fast_img_gen)
if self.sample_per_channel:
map_per_channel_small = map_comapre_channels(
self.valid_images_loaded[:self.sample_grid].copy(), batch_maps_small_pred_val,
self.valid_gt_maps_small_loaded, image_size=int(self.image_size / 4),
num_landmarks=self.num_landmarks, scale=self.scale)
map_per_channel = map_comapre_channels(
                            self.valid_images_loaded[:self.sample_grid].copy(), batch_maps_pred_val,
self.valid_gt_maps_loaded, image_size=self.image_size,
num_landmarks=self.num_landmarks, scale=self.scale)
summary_img = sess.run(
self.img_summary_valid,
{self.log_image_map: np.expand_dims(merged_img, 0),
self.log_map_channels: np.expand_dims(map_per_channel, 0),
self.log_image_map_small: np.expand_dims(merged_img_small, 0),
self.log_map_channels_small: np.expand_dims(map_per_channel_small, 0)})
else:
summary_img = sess.run(
self.img_summary_valid,
{self.log_image_map: np.expand_dims(merged_img, 0),
self.log_image_map_small: np.expand_dims(merged_img_small, 0)})
summary_writer.add_summary(summary_img, step)
else: # save heatmap images to directory
sample_path_imgs = os.path.join(
self.save_sample_path, 'epoch-%d-train-iter-%d-1.png' % (epoch, step + 1))
sample_path_imgs_small = os.path.join(
self.save_sample_path, 'epoch-%d-train-iter-%d-1-s.png' % (epoch, step + 1))
scipy.misc.imsave(sample_path_imgs, merged_img)
scipy.misc.imsave(sample_path_imgs_small, merged_img_small)
if self.sample_per_channel:
sample_path_ch_maps = os.path.join(
self.save_sample_path, 'epoch-%d-train-iter-%d-3.png' % (epoch, step + 1))
sample_path_ch_maps_small = os.path.join(
self.save_sample_path, 'epoch-%d-train-iter-%d-3-s.png' % (epoch, step + 1))
scipy.misc.imsave(sample_path_ch_maps, map_per_channel)
scipy.misc.imsave(sample_path_ch_maps_small, map_per_channel_small)
print('*** Finished Training ***')
def get_image_maps(self, test_image, reuse=None, norm=False):
""" returns heatmaps of input image (menpo image object)"""
self.add_placeholders()
# build model
pred_hm_p, pred_hm_f, pred_hm_u = self.heatmaps_network(self.images, reuse=reuse)
with tf.Session(config=self.config) as sess:
# load trained parameters
saver = tf.train.Saver()
| |
<gh_stars>0
# coding=utf-8
# Copyright 2018 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT finetuning runner."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import csv
import os
import modeling
import optimization
import tokenization
import tensorflow as tf
import metric_functions
from tqdm import tqdm
import random
random.seed(31415926525)
class InputExample(object):
"""A single training/test example for simple sequence classification."""
def __init__(self, guid, text_a, text_b=None, label=None, ex_data=None,pos=None):
"""Constructs a InputExample.
Args:
guid: Unique id for the example.
text_a: string. The untokenized text of the first sequence. For single
sequence tasks, only this sequence must be specified.
text_b: (Optional) string. The untokenized text of the second sequence.
Only must be specified for sequence pair tasks.
label: (Optional) string. The label of the example. This should be
specified for train and dev examples, but not for test examples.
"""
self.guid = guid
self.text_a = text_a
self.text_b = text_b
self.label = label
self.ex_data = ex_data
self.pos = pos
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self,
input_ids,
input_mask,
segment_ids,
label_id,
ex_data,
is_real_example=True):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
self.ex_data = ex_data
self.is_real_example = is_real_example
class DataProcessor(object):
"""Base class for data converters for sequence classification data sets."""
def get_train_examples(self, data_dir, read_range=None):
"""Gets a collection of `InputExample`s for the train set."""
raise NotImplementedError()
def get_dev_examples(self, data_dir, read_range=None):
"""Gets a collection of `InputExample`s for the dev set."""
raise NotImplementedError()
def get_test_examples(self, data_dir, read_range=None):
"""Gets a collection of `InputExample`s for prediction."""
raise NotImplementedError()
def get_labels(self):
"""Gets the list of labels for this data set."""
raise NotImplementedError()
@classmethod
def _read_tsv(cls, input_file, read_range=None,quotechar=None):
"""Reads a tab separated value file."""
with tf.gfile.Open(input_file, "r") as f:
reader = csv.reader(f, delimiter="\t", quotechar=quotechar)
lines = []
for n,line in enumerate(tqdm(reader,"reading tsv")):
if read_range:
if n<read_range[0]:
continue
elif n>=read_range[1]:
break
lines.append(line)
return lines
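  # Hedged illustration (added comment): read_range selects a half-open row
  # interval, e.g. read_range=(100, 200) keeps rows 100..199 of the TSV and
  # skips the rest, which allows large files to be processed in slices.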
class MrpcProcessor(DataProcessor):
def get_train_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv"),read_range=read_range), "train")
def get_dev_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv"),read_range=read_range), "dev")
def get_test_examples(self, data_dir, read_range=None,dataset=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, f"test{'_'+dataset if dataset else ''}.tsv"),read_range=read_range), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(tqdm(lines,"creating_examples")):
guid = "%s-%s" % (set_type, i)
text_a = tokenization.convert_to_unicode(line[1])
text_b = tokenization.convert_to_unicode(line[2])
label = tokenization.convert_to_unicode(line[0])
pos = tokenization.convert_to_unicode(line[3])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label, pos=pos))
return examples
class MrpcWithExDataProcessor(DataProcessor):
def get_train_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv"),read_range=read_range), "train")
def get_dev_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv"),read_range=read_range), "dev")
def get_test_examples(self, data_dir, read_range=None,dataset=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, f"test{'_'+dataset if dataset else ''}.tsv"),read_range=read_range), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(tqdm(lines,"creating_examples")):
guid = "%s-%s" % (set_type, i)
text_a = tokenization.convert_to_unicode(line[1])
text_b = tokenization.convert_to_unicode(line[2])
label = tokenization.convert_to_unicode(line[0])
ex_data = tokenization.convert_to_unicode(line[3])
pos = tokenization.convert_to_unicode(line[4])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=text_b, label=label, ex_data=ex_data, pos=pos))
return examples
class REProcessor(DataProcessor):
def get_train_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "train.tsv"),read_range=read_range), "train")
def get_dev_examples(self, data_dir, read_range=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, "dev.tsv"),read_range=read_range), "dev")
def get_test_examples(self, data_dir, read_range=None,dataset=None):
"""See base class."""
return self._create_examples(
self._read_tsv(os.path.join(data_dir, f"test{'_'+dataset if dataset else ''}.tsv"),read_range=read_range), "test")
def get_labels(self):
"""See base class."""
return ["0", "1"]
def _create_examples(self, lines, set_type):
"""Creates examples for the training and dev sets."""
examples = []
for (i, line) in enumerate(tqdm(lines,"creating_examples")):
guid = "%s-%s" % (set_type, i)
text_a = tokenization.convert_to_unicode(line[0])
label = tokenization.convert_to_unicode(line[1])
pos = tokenization.convert_to_unicode(line[2])
examples.append(
InputExample(guid=guid, text_a=text_a, text_b=None, label=label, pos=pos))
return examples
def convert_single_example(ex_index, example, label_list, max_seq_length,
tokenizer,create_altered_data=False):
"""Converts a single `InputExample` into a single `InputFeatures`."""
label_map = {}
for (i, label) in enumerate(label_list):
label_map[label] = i
pos = int(example.pos)
ex_data = example.ex_data
if ex_data:
ex_data = [float(ex_dat) for ex_dat in ex_data.split()]
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if create_altered_data:
if random.randint(0,1)==0:
def generate_clips(seq,pos):
start_clip = random.randint(0, int(pos / 2))
end_clip = random.randint(int((len(seq) + pos) / 2), len(seq))
return start_clip,end_clip
start_clip,end_clip = generate_clips(tokens_a,pos)
tokens_a = tokens_a[start_clip:end_clip + 1]
if tokens_b:
tokens_b = tokens_b[start_clip:end_clip + 1]
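    # Hedged note (added comment): when create_altered_data is True the clipping
    # above is applied with 50% probability. The clip window always starts at or
    # before pos/2 and ends at or after (len + pos)/2, so the token at position
    # `pos` is kept while a random amount of surrounding context is dropped --
    # a simple augmentation scheme for the altered copies.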
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, max_seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > max_seq_length - 2:
tokens_a = tokens_a[0:(max_seq_length - 2)]
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
#
# Where "type_ids" are used to indicate whether this is the first
# sequence or the second sequence. The embedding vectors for `type=0` and
# `type=1` were learned during pre-training and are added to the wordpiece
# embedding vector (and position vector). This is not *strictly* necessary
# since the [SEP] token unambiguously separates the sequences, but it makes
# it easier for the model to learn the concept of sequences.
#
# For classification tasks, the first vector (corresponding to [CLS]) is
# used as the "sentence vector". Note that this only makes sense because
# the entire model is fine-tuned.
tokens = []
segment_ids = []
tokens.append("[CLS]")
segment_ids.append(0)
for token in tokens_a:
tokens.append(token)
segment_ids.append(0)
tokens.append("[SEP]")
segment_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
segment_ids.append(1)
tokens.append("[SEP]")
segment_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < max_seq_length:
input_ids.append(0)
input_mask.append(0)
segment_ids.append(0)
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label_map[example.label]
if ex_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("guid: %s" % (example.guid))
tf.logging.info(f"tokens (length = {len(tokens)}): %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.logging.info(f"input_ids (length = {len(input_ids)}): %s" % " ".join([str(x) for x in input_ids]))
tf.logging.info(f"input_mask (length = {len(input_mask)}): %s" % " ".join([str(x) for x in input_mask]))
tf.logging.info(f"segment_ids (length = {len(segment_ids)}): %s" % " ".join([str(x) for x in segment_ids]))
if ex_data:
tf.logging.info(f"ex_data (length = {len(ex_data)}): %s" % " ".join([str(x) for x in ex_data]))
tf.logging.info("label: %s (id = %d)" % (example.label, label_id))
feature = InputFeatures(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id,
ex_data=ex_data,
is_real_example=True)
return feature
def shuffle(lst, name=""):
print(f"shuffling"+(" " if name else "")+f"{name}...")
random.shuffle(lst)
return lst
def file_based_convert_examples_to_features(
examples, label_list, max_seq_length, tokenizer, output_file,augmented_data_copies=0,shuffle_data=False):
"""Convert a set of `InputExample`s to a TFRecord file."""
writer = tf.python_io.TFRecordWriter(output_file)
data_augmentation_examples = [[example,0] for example in examples]
for i in range(augmented_data_copies):
data_augmentation_examples.extend([[example,1] for example in examples])
if shuffle_data:
data_augmentation_examples = shuffle(data_augmentation_examples,"examples")
for (ex_index, [example,augment]) in enumerate(data_augmentation_examples):
if ex_index % 10000 == 0:
tf.logging.info(f"Writing example {ex_index} of {len(data_augmentation_examples)}")
feature = convert_single_example(ex_index, example, label_list,
max_seq_length, tokenizer,create_altered_data=augment==1)
def create_int_feature(values):
f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))
return f
def create_float_feature(values):
f = tf.train.Feature(float_list=tf.train.FloatList(value=list(values)))
return f
features = collections.OrderedDict()
features["input_ids"] = create_int_feature(feature.input_ids)
features["input_mask"] = create_int_feature(feature.input_mask)
features["segment_ids"] = create_int_feature(feature.segment_ids)
features["label_ids"] = create_int_feature([feature.label_id])
if feature.ex_data:
features["ex_data"] = create_float_feature(feature.ex_data)
features["is_real_example"] = create_int_feature([int(feature.is_real_example)])
tf_example = tf.train.Example(features=tf.train.Features(feature=features))
writer.write(tf_example.SerializeToString())
writer.close()
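# Illustrative end-to-end sketch of wiring the pieces above together; the data
# directory, vocab file and sequence length are hypothetical, and the tokenizer
# construction assumes the standard BERT tokenization.FullTokenizer API.
#   processor = REProcessor()
#   examples = processor.get_train_examples("data/re")
#   tokenizer = tokenization.FullTokenizer(vocab_file="vocab.txt", do_lower_case=True)
#   file_based_convert_examples_to_features(
#       examples, processor.get_labels(), 128, tokenizer, "train.tf_record",
#       augmented_data_copies=1, shuffle_data=True)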
def file_based_input_fn_builder(input_file, seq_length, is_training,
drop_remainder, shards_folder=None, pred_num=None):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
if pred_num:
name_to_features = {
"input_ids": tf.FixedLenFeature([seq_length], tf.int64),
"input_mask": tf.FixedLenFeature([seq_length], tf.int64),
"segment_ids": tf.FixedLenFeature([seq_length], tf.int64),
"label_ids": tf.FixedLenFeature([], tf.int64),
"ex_data": | |
struct.unpack("<L", dir_stream.read(4))[0]
REFERENCEREGISTERED_Libid = dir_stream.read(REFERENCEREGISTERED_SizeOfLibid)
REFERENCEREGISTERED_Reserved1 = struct.unpack("<L", dir_stream.read(4))[0]
check_value('REFERENCEREGISTERED_Reserved1', 0x0000, REFERENCEREGISTERED_Reserved1)
REFERENCEREGISTERED_Reserved2 = struct.unpack("<H", dir_stream.read(2))[0]
check_value('REFERENCEREGISTERED_Reserved2', 0x0000, REFERENCEREGISTERED_Reserved2)
continue
if check == 0x000E:
# REFERENCEPROJECT
REFERENCEPROJECT_Id = check
REFERENCEPROJECT_Size = struct.unpack("<L", dir_stream.read(4))[0]
REFERENCEPROJECT_SizeOfLibidAbsolute = struct.unpack("<L", dir_stream.read(4))[0]
REFERENCEPROJECT_LibidAbsolute = dir_stream.read(REFERENCEPROJECT_SizeOfLibidAbsolute)
REFERENCEPROJECT_SizeOfLibidRelative = struct.unpack("<L", dir_stream.read(4))[0]
REFERENCEPROJECT_LibidRelative = dir_stream.read(REFERENCEPROJECT_SizeOfLibidRelative)
REFERENCEPROJECT_MajorVersion = struct.unpack("<L", dir_stream.read(4))[0]
REFERENCEPROJECT_MinorVersion = struct.unpack("<H", dir_stream.read(2))[0]
continue
logging.error('invalid or unknown check Id {0:04X}'.format(check))
return
PROJECTMODULES_Id = check #struct.unpack("<H", dir_stream.read(2))[0]
check_value('PROJECTMODULES_Id', 0x000F, PROJECTMODULES_Id)
PROJECTMODULES_Size = struct.unpack("<L", dir_stream.read(4))[0]
check_value('PROJECTMODULES_Size', 0x0002, PROJECTMODULES_Size)
PROJECTMODULES_Count = struct.unpack("<H", dir_stream.read(2))[0]
PROJECTMODULES_ProjectCookieRecord_Id = struct.unpack("<H", dir_stream.read(2))[0]
check_value('PROJECTMODULES_ProjectCookieRecord_Id', 0x0013, PROJECTMODULES_ProjectCookieRecord_Id)
PROJECTMODULES_ProjectCookieRecord_Size = struct.unpack("<L", dir_stream.read(4))[0]
check_value('PROJECTMODULES_ProjectCookieRecord_Size', 0x0002, PROJECTMODULES_ProjectCookieRecord_Size)
PROJECTMODULES_ProjectCookieRecord_Cookie = struct.unpack("<H", dir_stream.read(2))[0]
logging.debug("parsing {0} modules".format(PROJECTMODULES_Count))
for x in xrange(0, PROJECTMODULES_Count):
MODULENAME_Id = struct.unpack("<H", dir_stream.read(2))[0]
check_value('MODULENAME_Id', 0x0019, MODULENAME_Id)
MODULENAME_SizeOfModuleName = struct.unpack("<L", dir_stream.read(4))[0]
MODULENAME_ModuleName = dir_stream.read(MODULENAME_SizeOfModuleName)
# account for optional sections
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x0047:
MODULENAMEUNICODE_Id = section_id
MODULENAMEUNICODE_SizeOfModuleNameUnicode = struct.unpack("<L", dir_stream.read(4))[0]
MODULENAMEUNICODE_ModuleNameUnicode = dir_stream.read(MODULENAMEUNICODE_SizeOfModuleNameUnicode)
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x001A:
MODULESTREAMNAME_id = section_id
MODULESTREAMNAME_SizeOfStreamName = struct.unpack("<L", dir_stream.read(4))[0]
MODULESTREAMNAME_StreamName = dir_stream.read(MODULESTREAMNAME_SizeOfStreamName)
MODULESTREAMNAME_Reserved = struct.unpack("<H", dir_stream.read(2))[0]
check_value('MODULESTREAMNAME_Reserved', 0x0032, MODULESTREAMNAME_Reserved)
MODULESTREAMNAME_SizeOfStreamNameUnicode = struct.unpack("<L", dir_stream.read(4))[0]
MODULESTREAMNAME_StreamNameUnicode = dir_stream.read(MODULESTREAMNAME_SizeOfStreamNameUnicode)
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x001C:
MODULEDOCSTRING_Id = section_id
check_value('MODULEDOCSTRING_Id', 0x001C, MODULEDOCSTRING_Id)
MODULEDOCSTRING_SizeOfDocString = struct.unpack("<L", dir_stream.read(4))[0]
MODULEDOCSTRING_DocString = dir_stream.read(MODULEDOCSTRING_SizeOfDocString)
MODULEDOCSTRING_Reserved = struct.unpack("<H", dir_stream.read(2))[0]
check_value('MODULEDOCSTRING_Reserved', 0x0048, MODULEDOCSTRING_Reserved)
MODULEDOCSTRING_SizeOfDocStringUnicode = struct.unpack("<L", dir_stream.read(4))[0]
MODULEDOCSTRING_DocStringUnicode = dir_stream.read(MODULEDOCSTRING_SizeOfDocStringUnicode)
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x0031:
MODULEOFFSET_Id = section_id
check_value('MODULEOFFSET_Id', 0x0031, MODULEOFFSET_Id)
MODULEOFFSET_Size = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULEOFFSET_Size', 0x0004, MODULEOFFSET_Size)
MODULEOFFSET_TextOffset = struct.unpack("<L", dir_stream.read(4))[0]
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x001E:
MODULEHELPCONTEXT_Id = section_id
check_value('MODULEHELPCONTEXT_Id', 0x001E, MODULEHELPCONTEXT_Id)
MODULEHELPCONTEXT_Size = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULEHELPCONTEXT_Size', 0x0004, MODULEHELPCONTEXT_Size)
MODULEHELPCONTEXT_HelpContext = struct.unpack("<L", dir_stream.read(4))[0]
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x002C:
MODULECOOKIE_Id = section_id
check_value('MODULECOOKIE_Id', 0x002C, MODULECOOKIE_Id)
MODULECOOKIE_Size = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULECOOKIE_Size', 0x0002, MODULECOOKIE_Size)
MODULECOOKIE_Cookie = struct.unpack("<H", dir_stream.read(2))[0]
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x0021 or section_id == 0x0022:
MODULETYPE_Id = section_id
MODULETYPE_Reserved = struct.unpack("<L", dir_stream.read(4))[0]
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x0025:
MODULEREADONLY_Id = section_id
check_value('MODULEREADONLY_Id', 0x0025, MODULEREADONLY_Id)
MODULEREADONLY_Reserved = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULEREADONLY_Reserved', 0x0000, MODULEREADONLY_Reserved)
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x0028:
MODULEPRIVATE_Id = section_id
check_value('MODULEPRIVATE_Id', 0x0028, MODULEPRIVATE_Id)
MODULEPRIVATE_Reserved = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULEPRIVATE_Reserved', 0x0000, MODULEPRIVATE_Reserved)
section_id = struct.unpack("<H", dir_stream.read(2))[0]
if section_id == 0x002B: # TERMINATOR
MODULE_Reserved = struct.unpack("<L", dir_stream.read(4))[0]
check_value('MODULE_Reserved', 0x0000, MODULE_Reserved)
section_id = None
if section_id != None:
logging.warning('unknown or invalid module section id {0:04X}'.format(section_id))
logging.debug('Project CodePage = %d' % PROJECTCODEPAGE_CodePage)
vba_codec = 'cp%d' % PROJECTCODEPAGE_CodePage
logging.debug("ModuleName = {0}".format(MODULENAME_ModuleName))
logging.debug("StreamName = {0}".format(repr(MODULESTREAMNAME_StreamName)))
streamname_unicode = MODULESTREAMNAME_StreamName.decode(vba_codec)
logging.debug("StreamName.decode('%s') = %s" % (vba_codec, repr(streamname_unicode)))
logging.debug("StreamNameUnicode = {0}".format(repr(MODULESTREAMNAME_StreamNameUnicode)))
logging.debug("TextOffset = {0}".format(MODULEOFFSET_TextOffset))
code_path = vba_root + u'VBA/' + streamname_unicode
#TODO: test if stream exists
logging.debug('opening VBA code stream %s' % repr(code_path))
code_data = ole.openstream(code_path).read()
logging.debug("length of code_data = {0}".format(len(code_data)))
logging.debug("offset of code_data = {0}".format(MODULEOFFSET_TextOffset))
code_data = code_data[MODULEOFFSET_TextOffset:]
if len(code_data) > 0:
code_data = decompress_stream(code_data)
# case-insensitive search in the code_modules dict to find the file extension:
filext = code_modules.get(MODULENAME_ModuleName.lower(), 'bin')
filename = '{0}.{1}'.format(MODULENAME_ModuleName, filext)
#TODO: also yield the codepage so that callers can decode it properly
yield (code_path, filename, code_data)
# print '-'*79
# print filename
# print ''
# print code_data
# print ''
logging.debug('extracted file {0}'.format(filename))
else:
logging.warning("module stream {0} has code data length 0".format(MODULESTREAMNAME_StreamName))
return
def vba_collapse_long_lines(vba_code):
"""
Parse a VBA module code to detect continuation line characters (underscore) and
collapse split lines. Continuation line characters are replaced by spaces.
:param vba_code: str, VBA module code
:return: str, VBA module code with long lines collapsed
"""
# TODO: use a regex instead, to allow whitespaces after the underscore?
vba_code = vba_code.replace(' _\r\n', ' ')
vba_code = vba_code.replace(' _\r', ' ')
vba_code = vba_code.replace(' _\n', ' ')
return vba_code
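# Illustrative example: a statement split with a VBA continuation underscore is
# joined back onto one line, e.g.
#   vba_collapse_long_lines('MsgBox "a" & _\r\nvbCrLf & "b"')
# returns 'MsgBox "a" & vbCrLf & "b"', the underscore and line break having been
# replaced by a single space.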
def filter_vba(vba_code):
"""
Filter VBA source code to remove the first lines starting with "Attribute VB_",
which are automatically added by MS Office and not displayed in the VBA Editor.
This should only be used when displaying source code for human analysis.
Note: lines are not filtered if they contain a colon, because it could be
used to hide malicious instructions.
:param vba_code: str, VBA source code
:return: str, filtered VBA source code
"""
vba_lines = vba_code.splitlines()
start = 0
for line in vba_lines:
if line.startswith("Attribute VB_") and not ':' in line:
start += 1
else:
break
#TODO: also remove empty lines?
vba = '\n'.join(vba_lines[start:])
return vba
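# Illustrative example: the auto-generated attribute header is stripped while
# the macro body is kept, e.g.
#   filter_vba('Attribute VB_Name = "Module1"\nSub AutoOpen()\nEnd Sub')
# returns 'Sub AutoOpen()\nEnd Sub'.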
def detect_autoexec(vba_code, obfuscation=None):
"""
Detect if the VBA code contains keywords corresponding to macros running
automatically when triggered by specific actions (e.g. when a document is
opened or closed).
:param vba_code: str, VBA source code
:param obfuscation: None or str, name of obfuscation to be added to description
:return: list of str tuples (keyword, description)
"""
#TODO: merge code with detect_suspicious
# case-insensitive search
#vba_code = vba_code.lower()
results = []
obf_text = ''
if obfuscation:
obf_text = ' (obfuscation: %s)' % obfuscation
for description, keywords in AUTOEXEC_KEYWORDS.items():
for keyword in keywords:
#TODO: if keyword is already a compiled regex, use it as-is
# search using regex to detect word boundaries:
if re.search(r'(?i)\b' + keyword + r'\b', vba_code):
#if keyword.lower() in vba_code:
results.append((keyword, description + obf_text))
return results
def detect_suspicious(vba_code, obfuscation=None):
"""
Detect if the VBA code contains suspicious keywords corresponding to
potential malware behaviour.
:param vba_code: str, VBA source code
:param obfuscation: None or str, name of obfuscation to be added to description
:return: list of str tuples (keyword, description)
"""
# case-insensitive search
#vba_code = vba_code.lower()
results = []
obf_text = ''
if obfuscation:
obf_text = ' (obfuscation: %s)' % obfuscation
for description, keywords in SUSPICIOUS_KEYWORDS.items():
for keyword in keywords:
# search using regex to detect word boundaries:
if re.search(r'(?i)\b' + keyword + r'\b', vba_code):
#if keyword.lower() in vba_code:
results.append((keyword, description + obf_text))
return results
def detect_patterns(vba_code, obfuscation=None):
"""
Detect if the VBA code contains specific patterns such as IP addresses,
URLs, e-mail addresses, executable file names, etc.
:param vba_code: str, VBA source code
:return: list of str tuples (pattern type, value)
"""
results = []
found = set()
obf_text = ''
if obfuscation:
obf_text = ' (obfuscation: %s)' % obfuscation
for pattern_type, pattern_re in RE_PATTERNS:
for match in pattern_re.finditer(vba_code):
value = match.group()
if value not in found:
results.append((pattern_type + obf_text, value))
found.add(value)
return results
def detect_hex_strings(vba_code):
"""
Detect if the VBA code contains strings encoded in hexadecimal.
:param vba_code: str, VBA source code
:return: list of str tuples (encoded string, decoded string)
"""
results = []
found = set()
for match in re_hex_string.finditer(vba_code):
value = match.group()
if value not in found:
decoded = binascii.unhexlify(value)
# check if decoded string is actually ascii
try:
decoded.decode("ascii")
results.append((value, decoded))
found.add(value)
except:
pass
return results
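# Illustrative example (assuming re_hex_string, defined earlier in this module,
# matches long runs of hexadecimal digits): code containing the literal
# "48656C6C6F20576F726C64" would yield [('48656C6C6F20576F726C64', 'Hello World')].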
def detect_base64_strings(vba_code):
"""
Detect if the VBA code contains strings encoded in base64.
:param vba_code: str, VBA source code
:return: list of str tuples (encoded string, decoded string)
"""
#TODO: avoid matching simple hex strings as base64?
results = []
found = set()
for match in re_base64_string.finditer(vba_code):
# extract the base64 string without quotes:
value = match.group().strip('"')
# check it is not just a hex string:
if not re_nothex_check.search(value):
continue
# only keep new values and not in the whitelist:
if value not in found and value.lower() not in BASE64_WHITELIST:
try:
decoded = base64.b64decode(value)
results.append((value, decoded))
found.add(value)
except:
# if an exception occurs, it is likely not a base64-encoded string
pass
return results
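# Illustrative example (assuming re_base64_string, defined earlier, matches
# quoted base64-looking literals): code containing "cG93ZXJzaGVsbA==" would
# yield [('cG93ZXJzaGVsbA==', 'powershell')]; purely hexadecimal strings are
# skipped by the re_nothex_check test above.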
def detect_dridex_strings(vba_code):
"""
Detect if the VBA code contains strings obfuscated with a specific algorithm found in Dridex samples.
:param vba_code: str, VBA source code
:return: list of str tuples (encoded string, decoded string)
"""
from DridexUrlDecoder import DridexUrlDecode
results = []
found = set()
for match in re_dridex_string.finditer(vba_code):
value = match.group()[1:-1]
# check it is not just a hex string:
if not re_nothex_check.search(value):
continue
if value not in found:
try:
decoded = DridexUrlDecode(value)
results.append((value, decoded))
found.add(value)
except:
# if an exception occurs, it is likely not a dridex-encoded string
pass
return results
def detect_vba_strings(vba_code):
"""
Detect if the VBA code contains strings obfuscated with VBA | |
SLHA_TABLE = '''
# ISAJET SUSY parameters in SUSY Les Houches Accord 2 format
# Created by ISALHA 2.0 Last revision: <NAME> 27 May 2014
Block SPINFO # Program information
1 ISASUGRA/ISASUSY from ISAJET # Spectrum Calculator
2 7.88 02-JAN-2018 11:01:14 # Version number
Block MODSEL # Model selection
1 2 # Minimal gauge mediated (GMSB) model
Block SMINPUTS # Standard Model inputs
1 1.28000000E+02 # alpha_em^(-1)
2 1.16570000E-05 # G_Fermi
3 1.19999997E-01 # alpha_s(M_Z)
4 9.11699982E+01 # m_{Z}(pole)
5 4.19999981E+00 # m_{b}(m_{b})
6 1.73100006E+02 # m_{top}(pole)
7 1.77699995E+00 # m_{tau}(pole)
Block MINPAR # SUSY breaking input parameters
1 1.00000000E+05 # Lambda scale of soft SSB
2 2.00000000E+05 # M_mess overall messenger scale
3 1.50000000E+01 # tan(beta)
4 1.00000000E+00 # sign(mu)
5 1.00000000E+00 # N_5 messenger index
6 1.01010999E+03 # c_grav gravitino mass factor
51 1.00000000E+00 # N5_1 U(1)_Y messenger index
52 1.00000000E+00 # N5_2 SU(2)_L messenger index
53 1.00000000E+00 # N5_3 SU(3)_C messenger index
101 1.00000000E+00 # Rsl
102 0.00000000E+00 # dmH_d^2
103 0.00000000E+00 # dmH_u^2
104 0.00000000E+00 # d_Y
Block MASS # Scalar and gaugino mass spectrum
# PDG code mass particle
6 1.73100006E+02 # top
24 8.04229965E+01 # W^+
25 1.12660583E+02 # h^0
35 5.32464172E+02 # H^0
36 5.28826233E+02 # A^0
37 5.38383728E+02 # H^+
1000001 1.12279468E+03 # dnl
1000002 1.11991418E+03 # upl
1000003 1.12279468E+03 # stl
1000004 1.11991504E+03 # chl
1000005 1.06131323E+03 # b1
1000006 9.83236389E+02 # t1
1000011 3.59282593E+02 # el-
1000012 3.47792084E+02 # nuel
1000013 3.59282623E+02 # mul-
1000014 3.47792084E+02 # numl
1000015 1.73505310E+02 # tau1
1000016 3.45739380E+02 # nutl
1000021 8.37941467E+02 # glss
1000022 1.39420563E+02 # z1ss
1000023 2.62437164E+02 # z2ss
1000024 2.62887177E+02 # w1ss
1000025 -4.17500519E+02 # z3ss
1000035 4.38492828E+02 # z4ss
1000037 4.38239990E+02 # w2ss
1000039 4.85989403E-06 # gvss
2000001 1.07008032E+03 # dnr
2000002 1.07382593E+03 # upr
2000003 1.07008032E+03 # str
2000004 1.07382678E+03 # chr
2000005 1.08112476E+03 # b2
2000006 1.09331641E+03 # t2
2000011 1.75364502E+02 # er-
2000013 1.75364532E+02 # mur-
2000015 3.58682983E+02 # tau2
Block ALPHA # Effective Higgs mixing parameter
-7.14971721E-02 # alpha
Block STOPMIX # stop mixing matrix
1 1 1.31063163E-01 # O_{11}
1 2 9.91374016E-01 # O_{12}
2 1 -9.91374016E-01 # O_{21}
2 2 1.31063163E-01 # O_{22}
Block SBOTMIX # sbottom mixing matrix
1 1 4.31396365E-01 # O_{11}
1 2 9.02162492E-01 # O_{12}
2 1 -9.02162492E-01 # O_{21}
2 2 4.31396365E-01 # O_{22}
Block STAUMIX # stau mixing matrix
1 1 1.06868908E-01 # O_{11}
1 2 9.94273126E-01 # O_{12}
2 1 -9.94273126E-01 # O_{21}
2 2 1.06868908E-01 # O_{22}
Block NMIX # neutralino mixing matrix
1 1 9.90213990E-01 #
1 2 -3.34359631E-02 #
1 3 1.26071051E-01 #
1 4 -4.96434607E-02 #
2 1 7.86210597E-02 #
2 2 9.32150126E-01 #
2 3 -2.91743159E-01 #
2 4 1.99502394E-01 #
3 1 5.10697402E-02 #
3 2 -7.13807121E-02 #
3 3 -6.99376285E-01 #
3 4 -7.09344268E-01 #
4 1 1.03377640E-01 #
4 2 -3.53388399E-01 #
4 3 -6.40206873E-01 #
4 4 6.74214423E-01 #
Block UMIX # chargino U mixing matrix
1 1 -9.10573781E-01 # U_{11}
1 2 4.13346618E-01 # U_{12}
2 1 -4.13346618E-01 # U_{21}
2 2 -9.10573781E-01 # U_{22}
Block VMIX # chargino V mixing matrix
1 1 -9.59697127E-01 # V_{11}
1 2 2.81036437E-01 # V_{12}
2 1 -2.81036437E-01 # V_{21}
2 2 -9.59697127E-01 # V_{22}
Block GAUGE Q= 9.94140076E+02 #
1 3.57524991E-01 # g`
2 6.52378619E-01 # g_2
3 1.21928000E+00 # g_3
Block YU Q= 9.94140076E+02 #
3 3 8.68142247E-01 # y_t
Block YD Q= 9.94140076E+02 #
3 3 1.97788149E-01 # y_b
Block YE Q= 9.94140076E+02 #
3 3 1.52371675E-01 # y_tau
Block HMIX Q= 9.94140076E+02 # Higgs mixing parameters
1 4.09184875E+02 # mu(Q)
2 1.45027819E+01 # tan(beta)(Q)
3 2.50755539E+02 # Higgs vev at Q
4 2.79657188E+05 # m_A^2(Q)
Block MSOFT Q= 9.94140076E+02 # DRbar SUSY breaking parameters
1 1.44496643E+02 # M_1(Q)
2 2.71771362E+02 # M_2(Q)
3 7.54792358E+02 # M_3(Q)
21 1.09165062E+05 # MHd^2(Q)
22 -1.54057359E+05 # MHu^2(Q)
31 3.53854187E+02 # MeL(Q)
32 3.53854187E+02 # MmuL(Q)
33 3.51864899E+02 # MtauL(Q)
34 1.70639542E+02 # MeR(Q)
35 1.70639542E+02 # MmuR(Q)
36 1.68160034E+02 # MtauR(Q)
41 1.08344421E+03 # MqL1(Q)
42 1.08344421E+03 # MqL2(Q)
43 1.04217688E+03 # MqL3(Q)
44 1.03599817E+03 # MuR(Q)
45 1.03599817E+03 # McR(Q)
46 9.48317505E+02 # MtR(Q)
47 1.03127246E+03 # MdR(Q)
48 1.03127246E+03 # MsR(Q)
49 1.02739893E+03 # MbR(Q)
Block AU Q= 9.94140076E+02 #
1 1 -2.32469284E+02 # A_u
2 2 -2.32469284E+02 # A_c
3 3 -2.32469284E+02 # A_t
Block AD Q= 9.94140076E+02 #
1 1 -2.60610168E+02 # A_d
2 2 -2.60610168E+02 # A_s
3 3 -2.60610168E+02 # A_b
Block AE Q= 9.94140076E+02 #
1 1 -2.63792038E+01 # A_e
2 2 -2.63792038E+01 # A_mu
3 3 -2.63792038E+01 # A_tau
# ISAJET decay tables in SUSY Les Houches accord format
# Created by ISALHD. Last revision: <NAME>, 2005 May 25
Block DCINFO # Program information
1 ISASUGRA from ISAJET # Spectrum Calculator
2 7.88 02-JAN-2018 11:01:14 # Version number
# PDG Width
DECAY 6 1.48575687E+00 # TP decays
# BR NDA ID1 ID2 ID3 ID4
3.33333313E-01 3 2 -1 5 # TP --> UP DB BT
3.33333313E-01 3 4 -3 5 # TP --> CH SB BT
1.11111097E-01 3 -11 12 5 # TP --> E+ NUE BT
1.11111097E-01 3 -13 14 5 # TP --> MU+ NUM BT
1.11111097E-01 3 -15 16 5 # TP --> TAU+ NUT BT
# PDG Width
DECAY 1000021 1.81679316E-02 # GLSS decays
# BR NDA ID1 ID2 ID3 ID4
8.55864882E-02 3 1000024 1 -2 # GLSS --> W1SS+ DN UB
8.55864882E-02 3 -1000024 2 -1 # GLSS --> W1SS- UP DB
8.55862945E-02 3 1000024 3 -4 # GLSS --> W1SS+ ST CB
8.55862945E-02 3 -1000024 4 -3 # GLSS --> W1SS- CH SB
4.87022921E-02 3 1000024 5 -6 # GLSS --> W1SS+ BT TB
4.87022921E-02 3 -1000024 6 -5 # GLSS --> W1SS- TP BB
4.16135555E-03 3 1000037 1 -2 # GLSS --> W2SS+ DN UB
4.16135555E-03 3 -1000037 2 -1 # GLSS --> W2SS- UP DB
4.16134857E-03 3 1000037 3 -4 # GLSS --> W2SS+ ST CB
4.16134857E-03 3 -1000037 4 -3 # GLSS --> W2SS- CH SB
4.93497662E-02 3 1000037 5 -6 # GLSS --> W2SS+ BT TB
4.93497662E-02 3 -1000037 6 -5 # GLSS --> W2SS- TP BB
4.90525490E-05 2 1000022 21 # GLSS --> Z1SS GL
4.75236885E-02 3 1000022 2 -2 # GLSS --> Z1SS UP UB
1.48751494E-02 3 1000022 1 -1 # GLSS --> Z1SS DN DB
1.48751494E-02 3 1000022 3 -3 # GLSS --> Z1SS ST SB
4.75234650E-02 3 1000022 4 -4 # GLSS --> Z1SS CH CB
1.66878048E-02 3 1000022 5 -5 # GLSS --> Z1SS BT BB
2.34468691E-02 3 1000022 6 -6 # GLSS --> Z1SS TP TB
1.27998251E-03 2 1000023 21 # GLSS --> Z2SS GL
4.44265231E-02 3 1000023 2 -2 # GLSS --> Z2SS UP UB
4.10145558E-02 3 1000023 1 -1 # GLSS --> Z2SS DN DB
4.10145558E-02 3 1000023 3 -3 # GLSS --> Z2SS ST SB
4.44263257E-02 3 1000023 4 -4 # GLSS --> Z2SS CH CB
5.44009879E-02 3 1000023 5 -5 # GLSS --> Z2SS BT BB
7.33838743E-03 3 1000023 6 -6 # GLSS --> Z2SS TP TB
7.48055940E-03 2 1000025 21 # GLSS --> Z3SS GL
4.82715732E-05 3 1000025 2 -2 # GLSS --> Z3SS UP UB
5.82629727E-05 3 1000025 1 -1 # GLSS --> Z3SS DN DB
5.82629727E-05 3 1000025 3 -3 # GLSS --> Z3SS ST SB
4.82713258E-05 3 1000025 4 -4 # GLSS --> Z3SS CH CB
5.34723373E-03 3 1000025 | |
#! /usr/bin/env python
# encoding: utf-8
# WARNING! Do not edit! https://waf.io/book/index.html#_obtaining_the_waf_file
from __future__ import with_statement
import atexit, os, sys, errno, inspect, re, datetime, platform, base64, signal, functools, time
try:
import cPickle
except ImportError:
import pickle as cPickle
if os.name == 'posix' and sys.version_info[0] < 3:
try:
import subprocess32 as subprocess
except ImportError:
import subprocess
else:
import subprocess
try:
TimeoutExpired = subprocess.TimeoutExpired
except AttributeError:
class TimeoutExpired(Exception):
pass
from collections import deque, defaultdict
try:
import _winreg as winreg
except ImportError:
try:
import winreg
except ImportError:
winreg = None
from waflib import Errors
try:
from hashlib import md5
except ImportError:
try:
from hashlib import sha1 as md5
except ImportError:
pass
else:
try:
md5().digest()
except ValueError:
from hashlib import sha1 as md5
try:
import threading
except ImportError:
if not 'JOBS' in os.environ:
os.environ['JOBS'] = '1'
class threading(object):
pass
class Lock(object):
def acquire(self):
pass
def release(self):
pass
threading.Lock = threading.Thread = Lock
SIG_NIL = 'SIG_NIL_SIG_NIL_'.encode()
O644 = 420
O755 = 493
rot_chr = ['\\', '|', '/', '-']
rot_idx = 0
class ordered_iter_dict(dict):
def __init__(self, *k, **kw):
self.lst = deque()
dict.__init__(self, *k, **kw)
def clear(self):
dict.clear(self)
self.lst = deque()
def __setitem__(self, key, value):
if key in dict.keys(self):
self.lst.remove(key)
dict.__setitem__(self, key, value)
self.lst.append(key)
def __delitem__(self, key):
dict.__delitem__(self, key)
try:
self.lst.remove(key)
except ValueError:
pass
def __iter__(self):
return reversed(self.lst)
def keys(self):
return reversed(self.lst)
class lru_node(object):
__slots__ = ('next', 'prev', 'key', 'val')
def __init__(self):
self.next = self
self.prev = self
self.key = None
self.val = None
class lru_cache(object):
__slots__ = ('maxlen', 'table', 'head')
def __init__(self, maxlen=100):
self.maxlen = maxlen
self.table = {}
self.head = lru_node()
self.head.next = self.head
self.head.prev = self.head
def __getitem__(self, key):
node = self.table[key]
if node is self.head:
return node.val
node.prev.next = node.next
node.next.prev = node.prev
node.next = self.head.next
node.prev = self.head
self.head = node.next.prev = node.prev.next = node
return node.val
def __setitem__(self, key, val):
if key in self.table:
node = self.table[key]
node.val = val
self.__getitem__(key)
else:
if len(self.table) < self.maxlen:
node = lru_node()
node.prev = self.head
node.next = self.head.next
node.prev.next = node.next.prev = node
else:
node = self.head = self.head.next
try:
del self.table[node.key]
except KeyError:
pass
node.key = key
node.val = val
self.table[key] = node
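# Usage sketch: the cache behaves like a dict bounded to `maxlen` entries;
# nodes of the circular doubly linked list are reused on overflow, so the table
# never grows past `maxlen` and lookups stay O(1).
#   cache = lru_cache(maxlen=100)
#   cache['sig'] = b'\x00' * 16
#   assert cache['sig'] == b'\x00' * 16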
class lazy_generator(object):
def __init__(self, fun, params):
self.fun = fun
self.params = params
def __iter__(self):
return self
def __next__(self):
try:
it = self.it
except AttributeError:
it = self.it = self.fun(*self.params)
return next(it)
next = __next__
is_win32 = os.sep == '\\' or sys.platform == 'win32' or os.name == 'nt'
def readf(fname, m='r', encoding='latin-1'):
if sys.hexversion > 0x3000000 and not 'b' in m:
m += 'b'
with open(fname, m) as f:
txt = f.read()
if encoding:
txt = txt.decode(encoding)
else:
txt = txt.decode()
else:
with open(fname, m) as f:
txt = f.read()
return txt
def writef(fname, data, m='w', encoding='latin-1'):
if sys.hexversion > 0x3000000 and not 'b' in m:
data = data.encode(encoding)
m += 'b'
with open(fname, m) as f:
f.write(data)
def h_file(fname):
m = md5()
with open(fname, 'rb') as f:
while fname:
fname = f.read(200000)
m.update(fname)
return m.digest()
def readf_win32(f, m='r', encoding='latin-1'):
flags = os.O_NOINHERIT | os.O_RDONLY
if 'b' in m:
flags |= os.O_BINARY
if '+' in m:
flags |= os.O_RDWR
try:
fd = os.open(f, flags)
except OSError:
raise IOError('Cannot read from %r' % f)
if sys.hexversion > 0x3000000 and not 'b' in m:
m += 'b'
with os.fdopen(fd, m) as f:
txt = f.read()
if encoding:
txt = txt.decode(encoding)
else:
txt = txt.decode()
else:
with os.fdopen(fd, m) as f:
txt = f.read()
return txt
def writef_win32(f, data, m='w', encoding='latin-1'):
if sys.hexversion > 0x3000000 and not 'b' in m:
data = data.encode(encoding)
m += 'b'
flags = os.O_CREAT | os.O_TRUNC | os.O_WRONLY | os.O_NOINHERIT
if 'b' in m:
flags |= os.O_BINARY
if '+' in m:
flags |= os.O_RDWR
try:
fd = os.open(f, flags)
except OSError:
raise OSError('Cannot write to %r' % f)
with os.fdopen(fd, m) as f:
f.write(data)
def h_file_win32(fname):
try:
fd = os.open(fname, os.O_BINARY | os.O_RDONLY | os.O_NOINHERIT)
except OSError:
raise OSError('Cannot read from %r' % fname)
m = md5()
with os.fdopen(fd, 'rb') as f:
while fname:
fname = f.read(200000)
m.update(fname)
return m.digest()
readf_unix = readf
writef_unix = writef
h_file_unix = h_file
if hasattr(os, 'O_NOINHERIT') and sys.hexversion < 0x3040000:
readf = readf_win32
writef = writef_win32
h_file = h_file_win32
try:
x = ''.encode('hex')
except LookupError:
import binascii
def to_hex(s):
ret = binascii.hexlify(s)
if not isinstance(ret, str):
ret = ret.decode('utf-8')
return ret
else:
def to_hex(s):
return s.encode('hex')
to_hex.__doc__ = """
Return the hexadecimal representation of a string
:param s: string to convert
:type s: string
"""
def listdir_win32(s):
if not s:
try:
import ctypes
except ImportError:
return [x + ':\\' for x in 'ABCDEFGHIJKLMNOPQRSTUVWXYZ']
else:
dlen = 4
maxdrives = 26
buf = ctypes.create_string_buffer(maxdrives * dlen)
ndrives = ctypes.windll.kernel32.GetLogicalDriveStringsA(maxdrives * dlen, ctypes.byref(buf))
return [str(buf.raw[4 * i:4 * i + 2].decode('ascii')) for i in range(int(ndrives / dlen))]
if len(s) == 2 and s[1] == ":":
s += os.sep
if not os.path.isdir(s):
e = OSError('%s is not a directory' % s)
e.errno = errno.ENOENT
raise e
return os.listdir(s)
listdir = os.listdir
if is_win32:
listdir = listdir_win32
def num2ver(ver):
if isinstance(ver, str):
ver = tuple(ver.split('.'))
if isinstance(ver, tuple):
ret = 0
for i in range(4):
if i < len(ver):
ret += 256**(3 - i) * int(ver[i])
return ret
return ver
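# Illustrative example: num2ver('1.2.3') == 256**3*1 + 256**2*2 + 256*3 == 0x01020300,
# so dotted version strings can be compared as plain integers.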
def to_list(val):
if isinstance(val, str):
return val.split()
else:
return val
def console_encoding():
try:
import ctypes
except ImportError:
pass
else:
try:
codepage = ctypes.windll.kernel32.GetConsoleCP()
except AttributeError:
pass
else:
if codepage:
return 'cp%d' % codepage
return sys.stdout.encoding or ('cp1252' if is_win32 else 'latin-1')
def split_path_unix(path):
return path.split('/')
def split_path_cygwin(path):
if path.startswith('//'):
ret = path.split('/')[2:]
ret[0] = '/' + ret[0]
return ret
return path.split('/')
re_sp = re.compile('[/\\\\]+')
def split_path_win32(path):
if path.startswith('\\\\'):
ret = re_sp.split(path)[1:]
ret[0] = '\\\\' + ret[0]
if ret[0] == '\\\\?':
return ret[1:]
return ret
return re_sp.split(path)
msysroot = None
def split_path_msys(path):
if path.startswith(('/', '\\')) and not path.startswith(('//', '\\\\')):
global msysroot
if not msysroot:
msysroot = subprocess.check_output(['cygpath', '-w', '/']).decode(sys.stdout.encoding or 'latin-1')
msysroot = msysroot.strip()
path = os.path.normpath(msysroot + os.sep + path)
return split_path_win32(path)
if sys.platform == 'cygwin':
split_path = split_path_cygwin
elif is_win32:
if os.environ.get('MSYSTEM') and sys.executable.startswith('/'):
split_path = split_path_msys
else:
split_path = split_path_win32
else:
split_path = split_path_unix
split_path.__doc__ = """
Splits a path by / or \\; do not confuse this function with with ``os.path.split``
:type path: string
:param path: path to split
:return: list of string
"""
def check_dir(path):
if not os.path.isdir(path):
try:
os.makedirs(path)
except OSError as e:
if not os.path.isdir(path):
raise Errors.WafError('Cannot create the folder %r' % path, ex=e)
def check_exe(name, env=None):
if not name:
raise ValueError('Cannot execute an empty string!')
def is_exe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
fpath, fname = os.path.split(name)
if fpath and is_exe(name):
return os.path.abspath(name)
else:
env = env or os.environ
for path in env['PATH'].split(os.pathsep):
path = path.strip('"')
exe_file = os.path.join(path, name)
if is_exe(exe_file):
return os.path.abspath(exe_file)
return None
def def_attrs(cls, **kw):
for k, v in kw.items():
if not hasattr(cls, k):
setattr(cls, k, v)
def quote_define_name(s):
fu = re.sub('[^a-zA-Z0-9]', '_', s)
fu = re.sub('_+', '_', fu)
fu = fu.upper()
return fu
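# Illustrative example: quote_define_name('foo-bar.h') returns 'FOO_BAR_H', a
# form safe to use as a C preprocessor define name.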
re_sh = re.compile('\\s|\'|"')
def shell_escape(cmd):
if isinstance(cmd, str):
return cmd
return ' '.join(repr(x) if re_sh.search(x) else x for x in cmd)
def h_list(lst):
return md5(repr(lst).encode()).digest()
if sys.hexversion < 0x3000000:
def h_list_python2(lst):
return md5(repr(lst)).digest()
h_list_python2.__doc__ = h_list.__doc__
h_list = h_list_python2
def h_fun(fun):
try:
return fun.code
except AttributeError:
if isinstance(fun, functools.partial):
code = list(fun.args)
code.extend(sorted(fun.keywords.items()))
code.append(h_fun(fun.func))
fun.code = h_list(code)
return fun.code
try:
h = inspect.getsource(fun)
except EnvironmentError:
h = 'nocode'
try:
fun.code = h
except AttributeError:
pass
return h
def h_cmd(ins):
if isinstance(ins, str):
ret = ins
elif isinstance(ins, list) or isinstance(ins, tuple):
ret = str([h_cmd(x) for x in ins])
else:
ret = str(h_fun(ins))
if sys.hexversion > 0x3000000:
ret = ret.encode('latin-1', 'xmlcharrefreplace')
return ret
reg_subst = re.compile(r"(\\\\)|(\$\$)|\$\{([^}]+)\}")
def subst_vars(expr, params):
def repl_var(m):
if m.group(1):
return '\\'
if m.group(2):
return '$'
try:
return params.get_flat(m.group(3))
except AttributeError:
return params[m.group(3)]
return reg_subst.sub(repl_var, expr)
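# Illustrative example: subst_vars('${PREFIX}/bin', {'PREFIX': '/usr'}) returns
# '/usr/bin'; '$$' collapses to a literal '$' and a doubled backslash to a single one.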
def destos_to_binfmt(key):
if key == 'darwin':
return 'mac-o'
elif key in ('win32', 'cygwin', 'uwin', 'msys'):
return 'pe'
return 'elf'
def unversioned_sys_platform():
s = sys.platform
if s.startswith('java'):
from java.lang import System
s = System.getProperty('os.name')
if s == 'Mac OS X':
return 'darwin'
elif s.startswith('Windows '):
return 'win32'
elif s == 'OS/2':
return 'os2'
elif s == 'HP-UX':
return 'hp-ux'
elif s in ('SunOS', 'Solaris'):
return 'sunos'
else:
s = s.lower()
if s == 'powerpc':
return 'darwin'
if s == 'win32' or s == 'os2':
return s
if s | |
<reponame>sireliah/polish-python
"""Text wrapping oraz filling.
"""
# Copyright (C) 1999-2001 <NAME>.
# Copyright (C) 2002, 2003 Python Software Foundation.
# Written by <NAME> <<EMAIL>>
zaimportuj re
__all__ = ['TextWrapper', 'wrap', 'fill', 'dedent', 'indent', 'shorten']
# Hardcode the recognized whitespace characters to the US-ASCII
# whitespace characters. The main reason dla doing this jest that w
# ISO-8859-1, 0xa0 jest non-breaking whitespace, so w certain locales
# that character winds up w string.whitespace. Respecting
# string.whitespace w those cases would 1) make textwrap treat 0xa0 the
# same jako any other whitespace char, which jest clearly wrong (it's a
# *non-breaking* space), 2) possibly cause problems przy Unicode,
# since 0xa0 jest nie w range(128).
_whitespace = '\t\n\x0b\x0c\r '
klasa TextWrapper:
"""
Object dla wrapping/filling text. The public interface consists of
the wrap() oraz fill() methods; the other methods are just there for
subclasses to override w order to tweak the default behaviour.
If you want to completely replace the main wrapping algorithm,
you'll probably have to override _wrap_chunks().
Several instance attributes control various aspects of wrapping:
width (default: 70)
the maximum width of wrapped lines (unless przerwij_long_words
jest false)
initial_indent (default: "")
string that will be prepended to the first line of wrapped
output. Counts towards the line's width.
subsequent_indent (default: "")
string that will be prepended to all lines save the first
of wrapped output; also counts towards each line's width.
expand_tabs (default: true)
Expand tabs w input text to spaces before further processing.
Each tab will become 0 .. 'tabsize' spaces, depending on its position
w its line. If false, each tab jest treated jako a single character.
tabsize (default: 8)
Expand tabs w input text to 0 .. 'tabsize' spaces, unless
'expand_tabs' jest false.
replace_whitespace (default: true)
Replace all whitespace characters w the input text by spaces
after tab expansion. Note that jeżeli expand_tabs jest false oraz
replace_whitespace jest true, every tab will be converted to a
single space!
fix_sentence_endings (default: false)
Ensure that sentence-ending punctuation jest always followed
by two spaces. Off by default because the algorithm jest
(unavoidably) imperfect.
przerwij_long_words (default: true)
Break words longer than 'width'. If false, those words will nie
be broken, oraz some lines might be longer than 'width'.
przerwij_on_hyphens (default: true)
Allow przerwijing hyphenated words. If true, wrapping will occur
preferably on whitespaces oraz right after hyphens part of
compound words.
drop_whitespace (default: true)
Drop leading oraz trailing whitespace z lines.
max_lines (default: Nic)
Truncate wrapped lines.
placeholder (default: ' [...]')
Append to the last line of truncated text.
"""
unicode_whitespace_trans = {}
uspace = ord(' ')
dla x w _whitespace:
unicode_whitespace_trans[ord(x)] = uspace
# This funky little regex jest just the trick dla splitting
# text up into word-wrappable chunks. E.g.
# "Hello there -- you goof-ball, use the -b option!"
# splits into
# Hello/ /there/ /--/ /you/ /goof-/ball,/ /use/ /the/ /-b/ /option!
# (after stripping out empty strings).
word_punct = r'[\w!"\'&.,?]'
letter = r'[^\d\W]'
wordsep_re = re.compile(r'''
( # any whitespace
\s+
| # em-dash between words
(?<=%(wp)s) -{2,} (?=\w)
| # word, possibly hyphenated
\S+? (?:
# hyphenated word
-(?: (?<=%(lt)s{2}-) | (?<=%(lt)s-%(lt)s-))
(?= %(lt)s -? %(lt)s)
| # end of word
(?=\s|\Z)
| # em-dash
(?<=%(wp)s) (?=-{2,}\w)
)
)''' % {'wp': word_punct, 'lt': letter}, re.VERBOSE)
usuń word_punct, letter
# This less funky little regex just split on recognized spaces. E.g.
# "Hello there -- you goof-ball, use the -b option!"
# splits into
# Hello/ /there/ /--/ /you/ /goof-ball,/ /use/ /the/ /-b/ /option!/
wordsep_simple_re = re.compile(r'(\s+)')
# XXX this jest nie locale- albo charset-aware -- string.lowercase
# jest US-ASCII only (and therefore English-only)
sentence_end_re = re.compile(r'[a-z]' # lowercase letter
r'[\.\!\?]' # sentence-ending punct.
r'[\"\']?' # optional end-of-quote
r'\Z') # end of chunk
def __init__(self,
width=70,
initial_indent="",
subsequent_indent="",
expand_tabs=Prawda,
replace_whitespace=Prawda,
fix_sentence_endings=Nieprawda,
przerwij_long_words=Prawda,
drop_whitespace=Prawda,
przerwij_on_hyphens=Prawda,
tabsize=8,
*,
max_lines=Nic,
placeholder=' [...]'):
self.width = width
self.initial_indent = initial_indent
self.subsequent_indent = subsequent_indent
self.expand_tabs = expand_tabs
self.replace_whitespace = replace_whitespace
self.fix_sentence_endings = fix_sentence_endings
self.break_long_words = przerwij_long_words
self.drop_whitespace = drop_whitespace
self.break_on_hyphens = przerwij_on_hyphens
self.tabsize = tabsize
self.max_lines = max_lines
self.placeholder = placeholder
# -- Private methods -----------------------------------------------
# (possibly useful dla subclasses to override)
def _munge_whitespace(self, text):
"""_munge_whitespace(text : string) -> string
Munge whitespace w text: expand tabs oraz convert all other
whitespace characters to spaces. Eg. " foo\\tbar\\n\\nbaz"
becomes " foo bar baz".
"""
jeżeli self.expand_tabs:
text = text.expandtabs(self.tabsize)
jeżeli self.replace_whitespace:
text = text.translate(self.unicode_whitespace_trans)
zwróć text
def _split(self, text):
"""_split(text : string) -> [string]
Split the text to wrap into indivisible chunks. Chunks are
nie quite the same jako words; see _wrap_chunks() dla full
details. As an example, the text
Look, goof-ball -- use the -b option!
przerwijs into the following chunks:
'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', 'option!'
jeżeli przerwij_on_hyphens jest Prawda, albo in:
'Look,', ' ', 'goof-ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', option!'
otherwise.
"""
jeżeli self.break_on_hyphens jest Prawda:
chunks = self.wordsep_re.split(text)
inaczej:
chunks = self.wordsep_simple_re.split(text)
chunks = [c dla c w chunks jeżeli c]
zwróć chunks
def _fix_sentence_endings(self, chunks):
"""_fix_sentence_endings(chunks : [string])
Correct dla sentence endings buried w 'chunks'. Eg. when the
original text contains "... foo.\\nBar ...", munge_whitespace()
oraz split() will convert that to [..., "foo.", " ", "Bar", ...]
which has one too few spaces; this method simply changes the one
space to two.
"""
i = 0
patsearch = self.sentence_end_re.search
dopóki i < len(chunks)-1:
jeżeli chunks[i+1] == " " oraz patsearch(chunks[i]):
chunks[i+1] = " "
i += 2
inaczej:
i += 1
def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
"""_handle_long_word(chunks : [string],
cur_line : [string],
cur_len : int, width : int)
Handle a chunk of text (most likely a word, nie whitespace) that
jest too long to fit w any line.
"""
# Figure out when indent jest larger than the specified width, oraz make
# sure at least one character jest stripped off on every dalej
jeżeli width < 1:
space_left = 1
inaczej:
space_left = width - cur_len
# If we're allowed to przerwij long words, then do so: put jako much
# of the next chunk onto the current line jako will fit.
jeżeli self.break_long_words:
cur_line.append(reversed_chunks[-1][:space_left])
reversed_chunks[-1] = reversed_chunks[-1][space_left:]
# Otherwise, we have to preserve the long word intact. Only add
# it to the current line jeżeli there's nothing already there --
# that minimizes how much we violate the width constraint.
albo_inaczej nie cur_line:
cur_line.append(reversed_chunks.pop())
# If we're nie allowed to przerwij long words, oraz there's already
# text on the current line, do nothing. Next time through the
# main loop of _wrap_chunks(), we'll wind up here again, but
# cur_len will be zero, so the next line will be entirely
# devoted to the long word that we can't handle right now.
def _wrap_chunks(self, chunks):
"""_wrap_chunks(chunks : [string]) -> [string]
Wrap a sequence of text chunks oraz zwróć a list of lines of
length 'self.width' albo less. (If 'break_long_words' jest false,
some lines may be longer than this.) Chunks correspond roughly
to words oraz the whitespace between them: each chunk jest
indivisible (modulo 'break_long_words'), but a line przerwij can
come between any two chunks. Chunks should nie have internal
whitespace; ie. a chunk jest either all whitespace albo a "word".
Whitespace chunks will be removed z the beginning oraz end of
lines, but apart z that whitespace jest preserved.
"""
lines = []
jeżeli self.width <= 0:
podnieś ValueError("invalid width %r (must be > 0)" % self.width)
jeżeli self.max_lines jest nie Nic:
jeżeli self.max_lines > 1:
indent = self.subsequent_indent
inaczej:
indent = self.initial_indent
jeżeli len(indent) + len(self.placeholder.lstrip()) > self.width:
podnieś ValueError("placeholder too large dla max width")
# Arrange w reverse order so items can be efficiently popped
# z a stack of chucks.
chunks.reverse()
dopóki chunks:
# Start the list of chunks that will make | |
= IwahoriHeckeAlgebra("B2", 1)
sage: bases = H._BasesCategory()
sage: bases.super_categories()
[Category of realizations of Iwahori-Hecke algebra of type B2 in 1,-1 over Integer Ring,
Category of finite dimensional algebras with basis over Integer Ring]
"""
return [Realizations(self.base()), self.base()._category]
def _repr_(self):
r"""
Return the representation of ``self``.
EXAMPLES::
sage: H = IwahoriHeckeAlgebra("B2", 1)
sage: H._BasesCategory()
Category of bases of Iwahori-Hecke algebra of type B2 in 1,-1 over Integer Ring
"""
return "Category of bases of %s" % self.base()
class ParentMethods:
r"""
This class collects code common to all the various bases. In most
cases, these are just default implementations that will get
specialized in a basis.
"""
def _repr_(self):
"""
Text representation of this basis of Iwahori-Hecke algebra.
EXAMPLES::
sage: H = IwahoriHeckeAlgebra("B2", 1)
sage: H.T()
Iwahori-Hecke algebra of type B2 in 1,-1 over Integer Ring in the T-basis
sage: H.C()
Iwahori-Hecke algebra of type B2 in 1,-1 over Integer Ring in the C-basis
sage: H.Cp()
Iwahori-Hecke algebra of type B2 in 1,-1 over Integer Ring in the Cp-basis
"""
return "%s in the %s-basis"%(self.realization_of(), self._basis_name)
def __getitem__(self, i):
"""
Return the basis element indexed by ``i``.
INPUT:
- ``i`` -- either an element of the Coxeter group or a
reduced word
.. WARNING::
                If ``i`` is not a reduced expression then the basis element
indexed by the corresponding element of the algebra is
returned rather than the corresponding product of the
generators::
sage: R.<v> = LaurentPolynomialRing(QQ, 'v')
sage: T = IwahoriHeckeAlgebra('A3', v**2).T()
sage: T[1,1] == T[1] * T[1]
False
EXAMPLES::
sage: H = IwahoriHeckeAlgebra("B2", 1)
sage: T = H.T()
sage: G = WeylGroup("B2")
sage: T[G.one()]
1
sage: T[G.simple_reflection(1)]
T[1]
sage: T[G.from_reduced_word([1,2,1])]
T[1,2,1]
sage: T[[]]
1
sage: T[1]
T[1]
sage: T[1,2,1]
T[1,2,1]
"""
W = self.realization_of().coxeter_group()
if i in ZZ:
return self(W.simple_reflection(i))
if i in W:
return self(i)
if i == []:
return self.one()
return self(W.from_reduced_word(i))
def is_field(self, proof=True):
"""
Return whether this Iwahori-Hecke algebra is a field.
EXAMPLES::
sage: T = IwahoriHeckeAlgebra("B2", 1).T()
sage: T.is_field()
False
"""
return False
def is_commutative(self):
"""
Return whether this Iwahori-Hecke algebra is commutative.
EXAMPLES::
sage: T = IwahoriHeckeAlgebra("B2", 1).T()
sage: T.is_commutative()
False
"""
return self.base_ring().is_commutative() \
and self.realization_of().coxeter_group().is_commutative()
@cached_method
def one_basis(self):
r"""
Return the identity element in the Weyl group, as per
``AlgebrasWithBasis.ParentMethods.one_basis``.
EXAMPLES::
sage: H = IwahoriHeckeAlgebra("B2", 1)
sage: H.T().one_basis()
[1 0]
[0 1]
"""
return self.realization_of().coxeter_group().one()
def index_set(self):
r"""
Return the index set of ``self``.
EXAMPLES::
sage: IwahoriHeckeAlgebra("B2", 1).T().index_set()
(1, 2)
"""
return self.realization_of().coxeter_group().index_set()
@cached_method
def algebra_generators(self):
r"""
Return the generators.
They do not have order two but satisfy a quadratic relation.
They coincide with the simple reflections in the Coxeter group
when `q_1 = 1` and `q_2 = -1`. In this special case,
the Iwahori-Hecke algebra is identified with the group algebra
of the Coxeter group.
EXAMPLES:
In the standard basis::
sage: R.<q> = QQ[]
sage: H = IwahoriHeckeAlgebra("A3", q).T()
sage: T = H.algebra_generators(); T
Finite family {1: T[1], 2: T[2], 3: T[3]}
sage: T.list()
[T[1], T[2], T[3]]
sage: [T[i] for i in [1,2,3]]
[T[1], T[2], T[3]]
sage: T1,T2,T3 = H.algebra_generators()
sage: T1
T[1]
sage: H = IwahoriHeckeAlgebra(['A',2,1], q).T()
sage: T = H.algebra_generators(); T
Finite family {0: T[0], 1: T[1], 2: T[2]}
sage: T.list()
[T[0], T[1], T[2]]
sage: [T[i] for i in [0,1,2]]
[T[0], T[1], T[2]]
sage: [T0, T1, T2] = H.algebra_generators()
sage: T0
T[0]
In the Kazhdan-Lusztig basis::
sage: R = LaurentPolynomialRing(QQ, 'v')
sage: v = R.gen(0)
sage: H = IwahoriHeckeAlgebra('A5', v**2)
sage: C = H.C()
sage: C.algebra_generators()
Finite family {1: C[1], 2: C[2], 3: C[3], 4: C[4], 5: C[5]}
sage: C.algebra_generators().list()
[C[1], C[2], C[3], C[4], C[5]]
"""
return self.basis().keys().simple_reflections().map(self.monomial)
def algebra_generator(self, i):
r"""
Return the `i`-th generator of ``self``.
EXAMPLES:
In the standard basis::
sage: R.<q>=QQ[]
sage: H = IwahoriHeckeAlgebra("A3", q).T()
sage: [H.algebra_generator(i) for i in H.index_set()]
[T[1], T[2], T[3]]
In the Kazhdan-Lusztig basis::
sage: R = LaurentPolynomialRing(QQ, 'v')
sage: v = R.gen(0)
sage: H = IwahoriHeckeAlgebra('A5', v**2)
sage: C = H.C()
sage: [C.algebra_generator(i) for i in H.coxeter_group().index_set()]
[C[1], C[2], C[3], C[4], C[5]]
"""
return self.algebra_generators()[i]
@abstract_method(optional=True)
def bar_on_basis(self, w):
"""
Return the bar involution on the basis element of ``self``
indexed by ``w``.
EXAMPLES::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: W = H.coxeter_group()
sage: s1,s2,s3 = W.simple_reflections()
sage: Cp = H.Cp()
sage: Cp.bar_on_basis(s1*s2*s1*s3)
Cp[1,2,3,1]
"""
@abstract_method(optional=True)
def hash_involution_on_basis(self, w):
"""
Return the bar involution on the basis element of ``self``
indexed by ``w``.
EXAMPLES::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: W = H.coxeter_group()
sage: s1,s2,s3 = W.simple_reflections()
sage: Cp = H.Cp()
sage: C = H.C()
sage: C(Cp.hash_involution_on_basis(s1*s2*s1*s3))
C[1,2,3,1]
"""
class ElementMethods:
def bar(self):
"""
Return the bar involution of ``self``.
The bar involution `\overline{\phantom{x}}` is an antilinear
`\ZZ`-algebra involution defined by the identity on `\ZZ`,
sending `q^{1/2} \mapsto q^{-1/2}`, and `\overline{T_w} =
T_{w^{-1}}^{-1}`.
REFERENCES:
- :wikipedia:`Iwahori%E2%80%93Hecke_algebra#Canonical_basis`
EXAMPLES:
We first test on a single generator::
sage: R.<q> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', q)
sage: T = H.T()
sage: T1,T2,T3 = T.algebra_generators()
sage: T1.bar()
(q^-1)*T[1] + (-1+q^-1)
sage: T1.bar().bar() == T1
True
Next on a multiple of generators::
sage: b = (T1*T2*T1).bar(); b
(q^-3)*T[1,2,1]
+ (-q^-2+q^-3)*T[1,2]
+ (-q^-2+q^-3)*T[2,1]
+ (q^-1-2*q^-2+q^-3)*T[1]
+ (q^-1-2*q^-2+q^-3)*T[2]
+ (-1+2*q^-1-2*q^-2+q^-3)
sage: b.bar() == T1*T2*T1
True
A sum::
sage: s = T1 + T2
sage: b = s.bar(); b
(q^-1)*T[1] + (q^-1)*T[2] + (-2+2*q^-1)
sage: b.bar() == s
True
A more complicated example::
sage: p = T1*T2 + (1-q+q^-1)*T3 - q^3*T1*T3
sage: p.bar()
(q^-2)*T[1,2]
+ (-q^-5)*T[3,1]
+ (-q^-1+q^-2+q^-4-q^-5)*T[1]
+ (-q^-1+q^-2)*T[2]
+ (1+q^-1-q^-2+q^-4-q^-5)*T[3]
+ (-q+1-q^-3+2*q^-4-q^-5)
sage: p.bar().bar() == p
True
This also works for arbitrary ``q1`` and ``q2``::
sage: R.<q1,q2> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', q1, q2=-q2)
sage: T = H.T()
sage: T1,T2,T3 = T.algebra_generators()
sage: p = T1*T3 + T2
sage: p.bar()
(q1^-2*q2^-2)*T[3,1]
+ (-q1^-1*q2^-2+q1^-2*q2^-1)*T[1]
+ (q1^-1*q2^-1)*T[2]
+ (-q1^-1*q2^-2+q1^-2*q2^-1)*T[3]
+ (-q2^-1+q1^-1+q2^-2-2*q1^-1*q2^-1+q1^-2)
sage: p.bar().bar() == p
True
Next we have an example in the `C` basis::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: C = H.C()
sage: p = C[1]*C[3] + C[2]
sage: p.bar()
C[3,1] + C[2]
sage: p.bar().bar() == p
True
For the `C^{\prime}` basis as well::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: Cp = H.Cp()
sage: p = Cp[1]*Cp[3] + Cp[2]
sage: p.bar()
Cp[3,1] + Cp[2]
TESTS:
We check that doing the computations explicitly in the `T`
basis gives the same results and with bar invariant
coefficients::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: Cp = H.Cp()
sage: T = H.T()
sage: Cp(T(Cp[1,2,1])) == Cp[1,2,1]
True
sage: p = 4*Cp[1]*Cp[3] + (v^2 + v^-2 - 2)*Cp[2]
sage: Cp(T(p).bar()) == p
True
"""
B = self.parent()
if B.bar_on_basis is NotImplemented:
T = B.realization_of().T()
return B(T(self).bar())
            H = B.realization_of()
return sum(H._bar_on_coefficients(c) * B.bar_on_basis(w) for (w,c) in self)
def hash_involution(self):
r"""
Return the hash involution of ``self``.
The hash involution `\alpha` is a `\ZZ`-algebra
involution of the Iwahori-Hecke algebra determined by
`q^{1/2} \mapsto q^{-1/2}`, and `T_w \mapsto -1^{\ell(w)}
(q_1 q_2)^{-\ell(w)} T_w`, for `w` an element of the
corresponding Coxeter group.
This map is defined in [KL79]_ and it is used to
change between the `C` and `C^{\prime}` bases because
`\alpha(C_w) = (-1)^{\ell(w)} C_w'`.
EXAMPLES::
sage: R.<v> = LaurentPolynomialRing(QQ)
sage: H = IwahoriHeckeAlgebra('A3', v**2)
sage: T = H.T()
sage: T1,T2,T3 = T.algebra_generators()
sage: elt = T1.hash_involution(); elt
(-v^-2)*T[1]
sage: elt.hash_involution()
T[1]
sage: elt = T1*T2 + (v^3 - v^-1 + 2)*T3*T1*T2*T3
sage: elt.hash_involution()
(-v^-7+2*v^-8+v^-11)*T[1,2,3,2] + (v^-4)*T[1,2]
sage: elt.hash_involution().hash_involution() == elt
True
With the Kazhdan-Lusztig `C^{\prime}` basis::
sage: Cp = H.Cp()
sage: p = Cp[1]*Cp[3] + Cp[2]
sage: q = p.hash_involution(); q
Cp[3,1] + (-v-v^-1)*Cp[1] + (-1)*Cp[2] + (-v-v^-1)*Cp[3] + (v^2+v+2+v^-1+v^-2)
sage: q.hash_involution() == p
True
With the Kazhdan-Lusztig `C` basis::
sage: C = H.C()
sage: p = C[1]*C[3] + C[2]
sage: q = p.hash_involution(); q
C[3,1] + (v+v^-1)*C[1] + (-1)*C[2] + (v+v^-1)*C[3] + (v^2-v+2-v^-1+v^-2)
sage: q.hash_involution() == p
True
"""
B = self.parent()
| |
import os
import requests
import xml.etree.ElementTree as ET
import webbrowser
import shutil
from bs4 import BeautifulSoup, SoupStrainer
from nltk.corpus import words
from ._common import *
EDGAR_BASE_URL = "https://www.sec.gov"
EDGAR_BROWSE_URL = "/cgi-bin/browse-edgar?action=getcompany"
EDGAR_ARCHIVE_URL = "/Archives/edgar/data/"
class edgarFiler(object):
"""
A class designed for retrieving SEC reports.
Currently only supports requests for raw reports, and does not take care of parsing.
This class may be useful if you are looking for the CIK number of a company.
This attribute is set upon initialization.
"""
def __init__(self, ticker):
self.ticker = ticker
self.cik = self.cik()
def cik(self):
"""Sets the CIK attribute for the requested company"""
URL = EDGAR_BASE_URL + EDGAR_BROWSE_URL + f"&CIK={self.ticker}&output=atom"
search = requests.get(URL)
search_result = search.text
root = ET.fromstring(search_result)
cik = root[1][4].text
return cik
# # # # # # # # # # # # # # # # # # # # # # #
# FULL SERVICE ACCESSION AND REPORT GETTING #
# # # # # # # # # # # # # # # # # # # # # # #
def get_report_full(self, count, document, get=False, html=False, xbrl=False, xlsx=False,debug=False):
"""Returns accession numbers and documents in five forms for all the documents for the desired company
        SEC lists a minimum of 10 numbers for any given result but that is pared down in this code.
:param count: The number of documents to return.
:type count: int, required
:param document: The name of the document you wish to request (i.e. '10-K')
:type document: string, required
        :param get: Streams a raw text document for the filings straight to your local workspace.
:type get: boolean, optional
:param html: Returns an html-only version of the raw text document (parsed by switching at html tags)
:type html: boolean, optional
:param xbrl: Opens a new web page with the interactive xbrl data.
:type xbrl: boolean, optional
:param xlsx: Downloads the company-supplied xlsx filing document if available
:type xlsx: boolean, optional
"""
document = document_type_parse(document)
URL = EDGAR_BASE_URL + EDGAR_BROWSE_URL + f"&CIK={self.ticker}&type={document}&count={count}&output=atom"
if debug:
print(URL)
get_result = requests.get(URL)
if get_result.status_code == 200:
result_text = get_result.text
if debug:
print(result_text)
root = ET.fromstring(result_text)
if debug:
print(root.text)
accessions_requested = []
i = 0
for result in root.iter('{http://www.w3.org/2005/Atom}accession-nunber'):
if debug:
print(result)
                # stop once the requested number of accession numbers has been collected
                if i >= count:
                    break
                i += 1
                nunber = result.text
                accessions_requested.append(nunber)
                if debug:
                    print(nunber)
else:
            raise Exception(
                f"EDGAR request for {document} filings failed with status {get_result.status_code}; "
                "check that the ticker and document type are valid.")
# Optionally, one can request the raw text document streamed to your local workspace
if get:
for accession in accessions_requested:
fixed_accession = accession.replace("-","")
URL = f"https://www.sec.gov/Archives/edgar/data/{self.cik}/{fixed_accession}/{accession}.txt"
# Stream site to local file
response = requests.get(URL, stream=True)
filename = f'{self.ticker}_{accession}.txt'
with open(filename, 'wb') as f:
for chunk in response.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
# Optionally, one can request an html-only version of the raw text document (parsed by switching at html tags)
if html:
raw_file = filename
html_file = f"{self.ticker}_{accession}.html"
edgar_strip_to_html(raw_file, html_file)
# Optionally, one can request a new web page with the interactive xbrl data
elif xbrl:
URL = f"https://www.sec.gov/cgi-bin/viewer?action=view&cik={self.cik}&accession_number={accession}"
xbrl_request = requests.get(URL)
if xbrl_request.status_code == 200:
webbrowser.open(URL)
else:
raise Exception('Please ensure there are valid CIK and accession numbers.')
# Optionally, one can request a download of the xlsx filing document if valid
elif xlsx:
URL = f"https://www.sec.gov/Archives/edgar/data/{self.cik}/{fixed_accession}/Financial_Report.xlsx"
xlsx_filename = f"{self.ticker}_{accession}.xlsx"
xlsx_download = requests.get(URL)
if xlsx_download.status_code == 200:
with open(xlsx_filename, 'wb') as f:
for chunk in xlsx_download.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
else:
raise Exception('Please ensure there are valid CIK and accession numbers.')
return accessions_requested
# # # # # # # # # # # # # # #
# GET REPORT FROM ACCESSION #
# # # # # # # # # # # # # # #
    def get_reports(self, accession, get=False, html=False, xbrl=False, xlsx=False, debug=False):
"""Returns document in five forms for the requested accession number.
        SEC lists a minimum of 10 numbers for any given result, but that is pared down in this code.
:param accession: The accession number of the report you wish to retrieve
:type accession: string, required
        :param get: Streams a raw text document for the filings straight to your local workspace.
:type get: boolean, optional
:param html: Returns an html-only version of the raw text document (parsed by switching at html tags)
:type html: boolean, optional
:param xbrl: Opens a new web page with the interactive xbrl data.
:type xbrl: boolean, optional
:param xlsx: Downloads the company-supplied xlsx filing document if available
        :type xlsx: boolean, optional
        :param debug: Accepted for API consistency; not currently used in this method.
        :type debug: boolean, optional
"""
fixed_accession = accession.replace("-","")
URL = f"https://www.sec.gov/Archives/edgar/data/{self.cik}/{fixed_accession}/{accession}.txt"
# Stream raw filing to local file
response = requests.get(URL, stream=True)
filename = f'{self.ticker}_{accession}.txt'
with open(filename, 'wb') as f:
for chunk in response.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
# Optionally, one can request an html-only version of the raw text document (parsed by switching at html tags)
if html:
raw_file = filename
html_file = f"{self.ticker}_{accession}.html"
edgar_strip_to_html(raw_file, html_file)
# Optionally, one can request a new web page with the interactive xbrl data
elif xbrl:
URL = f"https://www.sec.gov/cgi-bin/viewer?action=view&cik={self.cik}&accession_number={accession}"
xbrl_request = requests.get(URL)
if xbrl_request.status_code == 200:
webbrowser.open(URL)
else:
raise Exception('Please ensure there are valid CIK and accession numbers.')
# Optionally, one can request a download of the xlsx filing document if valid
elif xlsx:
URL = f"https://www.sec.gov/Archives/edgar/data/{self.cik}/{fixed_accession}/Financial_Report.xlsx"
xlsx_filename = f"{self.ticker}_{accession}.xlsx"
xlsx_download = requests.get(URL)
if xlsx_download.status_code == 200:
with open(xlsx_filename, 'wb') as f:
for chunk in xlsx_download.iter_content(chunk_size=512):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
else:
raise Exception('Please ensure there are valid CIK and accession numbers.')
return filename
# # # # # # # # # # # # # # # # #
# GET LIST OF ACCESSION NUMBERS #
# # # # # # # # # # # # # # # # #
    def get_accessions(self, count, document, debug=False):
"""Returns accession numbers for the document for the desired company
        SEC lists a minimum of 10 numbers for any given result, but that is pared down in this code.
:param count: The number of documents to return.
:type count: int, required
:param document: The name of the document you wish to request (i.e. '10-K')
        :type document: string, required
        :param debug: Prints the request URL and raw response for troubleshooting.
        :type debug: boolean, optional
        """
document = document_type_parse(document)
URL = EDGAR_BASE_URL + EDGAR_BROWSE_URL + f"&CIK={self.ticker}&type={document}&count={count}&output=atom"
if debug:
print(URL)
get_result = requests.get(URL)
if get_result.status_code == 200:
result_text = get_result.text
if debug:
print(result_text)
root = ET.fromstring(result_text)
if debug:
print(root.text)
            accessions_requested = []
            i = 0
            # Note: 'accession-nunber' (sic) matches the element name actually used in the EDGAR Atom feed
            for result in root.iter('{http://www.w3.org/2005/Atom}accession-nunber'):
                if debug:
                    print(result)
                if i >= count:
                    break
                number = result.text
                accessions_requested.append(number)
                i += 1
                if debug:
                    print(number)
else:
raise Exception(f'{document} is not a valid document type!')
return accessions_requested
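    # -----------------------------------------------------------------------------------------
    # Usage sketch (illustrative, not part of the original module). It assumes the helpers the
    # methods above rely on (EDGAR_BASE_URL, document_type_parse, edgar_strip_to_html) are
    # defined elsewhere in this file and that network access to sec.gov is available.
    #
    #     aapl = edgarFiler('AAPL')
    #     print(aapl.cik)                               # CIK string looked up during __init__
    #     nums = aapl.get_accessions(3, '10-K')         # three most recent 10-K accession numbers
    #     path = aapl.get_reports(nums[0], html=True)   # raw filing plus an HTML-only copy
    # -----------------------------------------------------------------------------------------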
# class edgarReport(object):
#
# def __init__(self,ticker,file):
# self.ticker = ticker
# self.file = file
# self.report_period = ''
# self.company_name = ''
#
# def get_report_attributes():
# with open(self.file, 'r') as f:
# text = f.read()
# report_period = "CONFORMED PERIOD OF REPORT: "
# file_date = "FILED AS OF DATE: "
# company_name = "COMPANY CONFORMED NAME: "
# industrial_classification = "STANDARD INDUSTRIAL CLASSIFICATION: "
# report_period_start = text.find(report_period)+len(report_period)
# file_date_start = text.find(file_date)+len(file_date)
# company_name_start = text.find(company_name)+len(company_name)
# company_name_end = ''
# industrial_classification_start = text.find(industrial_classification)+len(industrial_classification)
# industrial_classification_end = ''
#
# # # # # # # # # # # #
# # Isolating Tables #
# # # # # # # # # # # #
#
# def isolate_tables(self, debug=True):
# with open(self.file, 'r') as fo:
# text = fo.read()
# starts, ends = find_all_tables(text)
# if debug: print(starts, ends)
# table_only_output = f'{self.ticker}_{self.report_period}_tablesonly.html'
# with open(table_only_output, 'a') as f:
# for i in range(len(starts)-1):
# block_to_write = text[starts[i]:ends[i]]
# f.write(block_to_write)
# isolate_tables.__doc__='Isolates all the tables in a SEC filing to a new file.'
#
#
# # # # # # # # # # #
# # WORD FREQUENCY #
# # # # # # # # # # #
#
# def real_word_frequency(self):
#         # Returns a list of real words sorted by use frequency
# with open(self.file,'r') as f:
# word_list = words.words()
# alphabet = ['a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z']
# file_words = []
# # Quick sieve that is used in bad_apples function to quickly throw out obvious unreal words
# ignore = ['-','_','/','=',':',';','<','>','#','$','@','*','\\']
# lines = f.readlines()
# word_freq = {}
# final_checked = {}
# # Iterate lines from file
#             for
<reponame>Masa-Yasuno/oase
# Copyright 2019 NEC Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
[概要]
アクションドライバ アクション実行処理(ServiceNow)
"""
import os
import sys
import json
import django
import traceback
import datetime
import pytz
# Add OASE modules to the import path
my_path = os.path.dirname(os.path.abspath(__file__))
tmp_path = my_path.split('oase-root')
root_dir_path = tmp_path[0] + 'oase-root'
sys.path.append(root_dir_path)
# OASE module imports
# #LOCAL_PATCH#
os.environ['DJANGO_SETTINGS_MODULE'] = 'confs.frameworkconfs.settings'
django.setup()
from django.db import transaction
from django.conf import settings
from datetime import timedelta
# Initialize logger
from libs.commonlibs.oase_logger import OaseLogger
logger = OaseLogger.get_instance()
from libs.backyardlibs.action_driver.common.action_abstract import AbstractManager
from libs.webcommonlibs.oase_exception import OASEError
from libs.backyardlibs.action_driver.ServiceNow.ServiceNow_core import ServiceNow1Core
from libs.backyardlibs.oase_action_common_libs import ActionDriverCommonModules
from libs.backyardlibs.oase_action_common_libs import ConstantModules as Cstobj
from libs.commonlibs.define import *
from libs.commonlibs.aes_cipher import AESCipher
from web_app.models.models import ActionHistory, User, EventsRequest, RuleType
from web_app.models.ServiceNow_models import *
from web_app.templatetags.common import get_message
Comobj = ActionDriverCommonModules()
class ServiceNowManager(AbstractManager):
"""
[クラス概要]
アクションドライバメイン処理クラス
"""
    ############################################
    # Constants
    ############################################
    # Action parameter key list
ACTIONPARAM_KEYS = [
'SERVICENOW_NAME',
'INCIDENT_STATUS',
'WORKFLOW_ID',
'WORK_NOTES_APPROVAL',
'WORK_NOTES_REJECTED',
]
    # Required action parameter keys
ACTIONPARAM_KEYS_REQUIRE = [
'SERVICENOW_NAME',
]
    ############################################
    # Methods
    ############################################
def __init__(self, trace_id, response_id, last_update_user):
self.servicenow_driver = None
self.action_history = None
self.trace_id = trace_id
self.response_id = response_id
self.last_update_user = last_update_user
self.aryActionParameter = {}
self.core = ServiceNow1Core(trace_id)
self.approval_flg = False
def conv_tz(self, dt, tzname):
"""
[概要]
タイムゾーン変換
"""
return dt.astimezone(pytz.timezone(tzname)).strftime('%Y-%m-%d %H:%M:%S')
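    # Example (illustrative, not in the original source): with an aware datetime,
    #     conv_tz(datetime.datetime(2021, 1, 1, 0, 0, tzinfo=pytz.utc), 'Asia/Tokyo')
    # returns '2021-01-01 09:00:00' (UTC+9).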
def servicenow_action_history_insert(self, servicenow_name, sys_id, short_desc, exe_order, action_history_id):
"""
[概要]
ServiceNowアクション履歴登録メゾット
"""
logger.logic_log(
'LOSI00001',
'servicenow_name: % s, short_desc: % s, exe_order: % s, action_history_id: % s' % (
servicenow_name, short_desc, exe_order, action_history_id))
try:
with transaction.atomic():
ServiceNowActionHistory(
action_his_id=action_history_id,
servicenow_disp_name=servicenow_name,
sys_id=sys_id,
short_description=short_desc,
last_update_timestamp=Comobj.getStringNowDateTime(),
last_update_user=self.last_update_user,
).save(force_insert=True)
except Exception as e:
logger.system_log('LOSE01141', self.trace_id, traceback.format_exc())
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01085'
)
logger.logic_log('LOSI00002', 'None')
def reset_variables(self):
"""
[概要]
メンバ変数を初期化する
"""
self.servicenow_driver = None
self.aryActionParameter = {}
def set_information(self, rhdm_res_act, action_history):
"""
[概要]
ServiceNow情報をインスタンス変数にセット
"""
logger.logic_log('LOSI00001', 'action_type_id: %s' %
(rhdm_res_act.action_type_id))
self.action_history = action_history
try:
            # Parse action parameters
param_info = json.loads(rhdm_res_act.action_parameter_info)
self.set_action_parameters(
param_info, rhdm_res_act.execution_order,
rhdm_res_act.response_detail_id
)
self.set_driver(rhdm_res_act.execution_order)
except OASEError as e:
if e.log_id:
if e.arg_list and isinstance(e.arg_list, list):
logger.system_log(e.log_id, *(e.arg_list))
else:
logger.system_log(e.log_id)
if e.arg_dict and isinstance(e.arg_dict, dict) \
and 'sts' in e.arg_dict and 'detail' in e.arg_dict:
return e.arg_dict['sts'], e.arg_dict['detail']
except Exception as e:
logger.system_log(*e.args)
return ACTION_DATA_ERROR, 0
logger.logic_log('LOSI00002', 'return: True')
return 0, 0
def set_driver(self, exe_order):
"""
[概要]
ServiceNowドライバーmodelをセット
"""
disp_name = self.aryActionParameter['SERVICENOW_NAME']
try:
self.servicenow_driver = ServiceNowDriver.objects.get(servicenow_disp_name=disp_name)
except ServiceNowDriver.DoesNotExist:
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01086')
raise OASEError(
'',
'LOSE01115',
log_params=['OASE_T_SERVICENOW_DRIVER', 'SERVICENOW_NAME', self.trace_id],
msg_params={
'sts': ACTION_DATA_ERROR,
'detail': ACTION_HISTORY_STATUS.DETAIL_STS.DATAERR_PARAM_VAL
}
)
def set_action_parameters(self, param_list, exe_order, response_detail_id, pre_flg=False, post_flg=False):
"""
[概要]
ServiceNowパラメータ解析
"""
logger.logic_log('LOSI00001', 'param_list: %s' % (param_list))
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01004'
)
        # Check that the action parameter info exists
key1 = 'ACTION_PARAMETER_INFO'
if key1 not in param_list:
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01005'
)
raise OASEError(
'',
'LOSE01114',
log_params=[
self.trace_id,
'OASE_T_RHDM_RESPONSE_ACTION',
response_detail_id,
key1
],
msg_params={
'sts': ACTION_DATA_ERROR,
'detail': ACTION_HISTORY_STATUS.DETAIL_STS.DATAERR_PARAM_KEY
}
)
        # Check the keys and values in the action parameter info
check_info = self.analysis_parameters(param_list[key1])
for key, val in check_info.items():
            # Data error if a required key is missing
if val is None and key in self.ACTIONPARAM_KEYS_REQUIRE:
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01006', **{'key': key}
)
raise OASEError(
'',
'LOSE01114',
log_params=[
self.trace_id, 'OASE_T_RHDM_RESPONSE_ACTION',
response_detail_id, key
],
msg_params={
'sts': ACTION_DATA_ERROR,
'detail': ACTION_HISTORY_STATUS.DETAIL_STS.DATAERR_PARAM_KEY
}
)
            # Data error if a required key has an empty value
if val == '' and key in self.ACTIONPARAM_KEYS_REQUIRE:
ActionDriverCommonModules.SaveActionLog(
self.response_id, exe_order, self.trace_id, 'MOSJA01006', **{'key': key}
)
raise OASEError(
'',
'LOSE01114',
log_params=[
self.trace_id, 'OASE_T_RHDM_RESPONSE_ACTION',
response_detail_id, key
],
msg_params={
'sts': ACTION_DATA_ERROR,
'detail': ACTION_HISTORY_STATUS.DETAIL_STS.DATAERR_PARAM_VAL
}
)
self.aryActionParameter[key] = val
@classmethod
def analysis_parameters(cls, param_list):
"""
[概要]
アクション情報キーに指定された値を返す
[引数]
param_list : list : アクション情報に記述されたキー値ペアのリスト
[戻り値]
check_info : dict : アクション情報に記述された値、記述がなければNoneを返す
"""
check_info = {}
for key in cls.ACTIONPARAM_KEYS:
check_info[key] = None
for string in param_list:
for key in cls.ACTIONPARAM_KEYS:
ret = Comobj.KeyValueStringFind(key, string)
if ret is not None:
check_info[key] = ret
return check_info
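    # Example (illustrative, not in the original source): given a param_list such as
    #     ['SERVICENOW_NAME=servicenow01', 'INCIDENT_STATUS=NEW']
    # analysis_parameters() would return
    #     {'SERVICENOW_NAME': 'servicenow01', 'INCIDENT_STATUS': 'NEW',
    #      'WORKFLOW_ID': None, 'WORK_NOTES_APPROVAL': None, 'WORK_NOTES_REJECTED': None}
    # assuming Comobj.KeyValueStringFind() parses 'KEY=VALUE' strings (it is defined elsewhere in OASE).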
def get_act_ptrn(self):
"""
[概要]
アクション情報の値により呼び出す関数を取得する
"""
if 'INCIDENT_STATUS' not in self.aryActionParameter and 'WORKFLOW_ID' not in self.aryActionParameter:
return None
if not self.approval_flg and \
self.aryActionParameter['WORK_NOTES_APPROVAL'] is not None and \
self.aryActionParameter['WORK_NOTES_REJECTED'] is not None and \
(self.action_history.status in [PROCESSING, ACTION_HISTORY_STATUS.SNOW_APPROVAL_PENDING] or \
self.action_history.retry_status in [PROCESSING, ACTION_HISTORY_STATUS.SNOW_APPROVAL_PENDING]):
return self.act_approval_confirmation
if self.aryActionParameter['INCIDENT_STATUS'] == 'NEW':
return self.act_open_incident
if self.aryActionParameter['INCIDENT_STATUS'] == 'IN_PROGRESS':
return self.act_progress_incident
if self.aryActionParameter['WORKFLOW_ID'] is not None:
return self.act_update_workflow
if self.aryActionParameter['INCIDENT_STATUS'] == 'RESOLVED':
return self.act_resolved_incident
if self.aryActionParameter['INCIDENT_STATUS'] == 'CLOSED':
return self.act_close_incident
return None
def act(self, rhdm_res_act, retry=False, pre_flag=False):
"""
[概要]
ServiceNowアクションを実行
"""
logger.logic_log(
'LOSI00001',
'self.trace_id: %s, aryActionParameter: %s' % (self.trace_id, self.aryActionParameter)
)
status = ACTION_EXEC_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
act_func = self.get_act_ptrn()
if act_func:
status, detail = act_func(rhdm_res_act, retry, pre_flag)
if status in [ACTION_HISTORY_STATUS.SNOW_APPROVED]:
self.approval_flg = True
act_func = self.get_act_ptrn()
status, detail = act_func(rhdm_res_act, retry, pre_flag)
logger.logic_log('LOSI00002', 'sts:%s, detail:%s' % (status, detail))
return status, detail
def act_open_incident(self, rhdm_res_act, retry, pre_flag):
"""
[概要]
ServiceNowアクションを実行
インシデント新規作成
"""
logger.logic_log(
'LOSI00001',
'self.trace_id: %s, aryActionParameter: %s' % (self.trace_id, self.aryActionParameter)
)
description = self.description_join(rhdm_res_act, 'NEW', '')
data = {
'short_description' : 'OASE Event Notify',
'description' : description,
}
        # Send the request
result, sys_id = self.core.create_incident(self.servicenow_driver, data)
        # Evaluate the request result
status = PROCESSED if result else ACTION_EXEC_ERROR
logger.logic_log(
'LOSI01108',
status, str(rhdm_res_act.execution_order), self.trace_id
)
        # Register history information on the first execution
        if not retry:
            # Register ServiceNow action history
logger.logic_log(
'LOSI01125',
str(rhdm_res_act.execution_order), self.trace_id
)
self.servicenow_action_history_insert(
self.servicenow_driver.servicenow_disp_name,
sys_id,
'OASE Event Notify',
rhdm_res_act.execution_order,
self.action_history.pk,
)
        # Terminate abnormally if the request failed.
if status != PROCESSED:
logger.system_log('LOSE01142', status, self.trace_id, 'incident')
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
return ACTION_EXEC_ERROR, ACTION_HISTORY_STATUS.DETAIL_STS.EXECERR_SEND_FAIL
logger.logic_log('LOSI00002', 'return: PROCESSED')
return PROCESSED, ACTION_HISTORY_STATUS.DETAIL_STS.NONE
def retry(self, rhdm_res_act, retry=True):
"""
[概要]
再実行
"""
status, detail = self.act(rhdm_res_act, retry=retry)
return status, detail
def act_progress_incident(self, rhdm_res_act, retry, pre_flag):
"""
[概要]
インシデント更新処理(In Progress)
"""
logger.logic_log(
'LOSI00001',
'self.trace_id:%s, aryActionParameter:%s' % (self.trace_id, self.aryActionParameter)
)
status = PROCESSED
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
        # Get sys_id
sys_id = None
try:
act_his = ActionHistory.objects.filter(
trace_id = self.trace_id,
action_type_id = rhdm_res_act.action_type_id,
execution_order__lt = rhdm_res_act.execution_order
).order_by('-execution_order')[0]
            # Saving this sys_id lets the subsequent actions run
sys_id = ServiceNowActionHistory.objects.get(action_his_id=act_his.action_history_id).sys_id
except ServiceNowActionHistory.DoesNotExist:
logger.logic_log('LOSM01501', self.trace_id, act_his.action_history_id)
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01106'
)
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
except Exception as e:
logger.logic_log('LOSM01501', self.trace_id, None)
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01106'
)
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
if not sys_id:
logger.logic_log('LOSI01126', self.trace_id, self.action_history.pk)
return status, detail
result = self.core.get_incident(self.servicenow_driver, sys_id)
if not result:
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
return status, detail
resp = json.loads(result.text)
if resp and 'result' in resp and 'description' in resp['result'] and \
'display_value' in resp['result']['description']:
resp_description = resp['result']['description']['display_value']
else:
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
return status, detail
if self.aryActionParameter['WORK_NOTES_APPROVAL'] is not None and \
self.aryActionParameter['WORK_NOTES_REJECTED'] is not None:
            # Set the incident to In Progress
data = {
'state' : '2', # 2:In progress
}
else:
description = self.description_join(rhdm_res_act,
self.aryActionParameter['INCIDENT_STATUS'], resp_description)
            # Set the incident to In Progress
data = {
'state' : '2', # 2:In progress
'description' : description,
}
result = self.core.modify_incident(self.servicenow_driver, sys_id, data)
        # Register history information on the first execution
        if not retry:
            # Register ServiceNow action history
logger.logic_log(
'LOSI01125',
str(rhdm_res_act.execution_order), self.trace_id
)
self.servicenow_action_history_insert(
self.servicenow_driver.servicenow_disp_name,
sys_id,
'OASE Event Notify',
rhdm_res_act.execution_order,
self.action_history.pk,
)
if not result:
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
logger.logic_log(
'LOSI00002',
'self.trace_id:%s, sts:%s, detail:%s' % (self.trace_id, status, detail)
)
return status, detail
def act_resolved_incident(self, rhdm_res_act, retry, pre_flag):
"""
[概要]
インシデント更新処理(Resolved)
"""
logger.logic_log(
'LOSI00001',
'self.trace_id:%s, aryActionParameter:%s' % (self.trace_id, self.aryActionParameter)
)
status = PROCESSED
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
        # Get sys_id
sys_id = None
try:
act_his = ActionHistory.objects.filter(
trace_id = self.trace_id,
action_type_id = rhdm_res_act.action_type_id,
execution_order__lt = rhdm_res_act.execution_order
).order_by('-execution_order')[0]
sys_id = ServiceNowActionHistory.objects.get(action_his_id=act_his.action_history_id).sys_id
except ServiceNowActionHistory.DoesNotExist:
logger.logic_log('LOSM01501', self.trace_id, act_his.action_history_id)
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01106'
)
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
except Exception as e:
logger.logic_log('LOSM01501', self.trace_id, None)
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01106'
)
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
if not sys_id:
logger.logic_log('LOSI01126', self.trace_id, self.action_history.pk)
return status, detail
result = self.core.get_incident(self.servicenow_driver, sys_id)
if not result:
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
return status, detail
resp = json.loads(result.text)
if resp and 'result' in resp and 'description' in resp['result'] and \
'display_value' in resp['result']['description']:
resp_description = resp['result']['description']['display_value']
else:
status = SERVER_ERROR
detail = ACTION_HISTORY_STATUS.DETAIL_STS.NONE
ActionDriverCommonModules.SaveActionLog(
self.response_id,
rhdm_res_act.execution_order,
self.trace_id,
'MOSJA01087'
)
return status, detail
if self.aryActionParameter['WORK_NOTES_APPROVAL'] is not None and \
self.aryActionParameter['WORK_NOTES_REJECTED'] is not None:
# | |
#!/usr/bin/env python
# PROGRAM: plot_sst.py
# ----------------------------------------------------------------------------------
# Version 0.18
# 19 August, 2019
# michael.taylor AT reading DOT ac DOT uk
# PYTHON DEBUGGER CONTROL:
#------------------------
# import os; os._exit(0)
# import ipdb
# ipdb.set_trace()
import os.path
import optparse
from optparse import OptionParser
import sys
import numpy as np
import xarray
import pandas as pd
from pandas import Series, DataFrame, Panel
import seaborn as sns; sns.set(style="darkgrid")
import datetime
import matplotlib
import matplotlib.pyplot as plt; plt.close("all")
#import typhon
#from typhon.plots import plot_bitfield
#cmap = 'tab20c' # https://matplotlib.org/users/colormaps
def calc_median(counts,bins):
"""
# -------------------------------
    # CALCULATE MEDIAN FROM HISTOGRAM
# -------------------------------
# M_estimated ~ L_m + [ ( N/2 - F_{m-1} ) / f_m] * c
#
# where,
#
# L_m =lower limit of the median bar
# N = is the total number of observations
# F_{m-1} = cumulative frequency (total number of observations) in all bars below the median bar
# f_m = frequency of the median bar
# c = median bar width
"""
M = 0
counts_cumsum = counts.cumsum()
counts_half = counts_cumsum[-1]/2.0
for i in np.arange(0,bins.shape[0]-1):
counts_l = counts_cumsum[i]
counts_r = counts_cumsum[i+1]
if (counts_half >= counts_l) & (counts_half < counts_r):
c = bins[1]-bins[0]
L_m = bins[i+1]
F_m_minus_1 = counts_cumsum[i]
f_m = counts[i+1]
M = L_m + ( (counts_half - F_m_minus_1) / f_m ) * c
return M
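def _calc_median_example():
    """Illustrative self-check for calc_median (not part of the original script).
    counts and bins have equal length here, matching how the plotting functions
    below call it (bins act as the lower edges of each bar)."""
    counts = np.array([1.0, 2.0, 3.0, 2.0])
    bins = np.array([0.0, 1.0, 2.0, 3.0])
    # Cumulative counts are [1, 3, 6, 8]; half the total (4) falls in the third bar,
    # so the interpolated median is 2 + (4 - 3) / 3, roughly 2.33.
    return calc_median(counts, bins)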
def plot_n_sst(times,n_sst_q3,n_sst_q4,n_sst_q5):
"""
# ---------------------------------------
# PLOT CUMULATIVE SST OBSERVATION DENSITY
# ---------------------------------------
"""
ocean_area = 361900000.0
t = np.array(times, dtype=np.datetime64)
years = (t[-1] - t[0]).astype('timedelta64[D]') / np.timedelta64(1, 'D') / 365.0
Q3 = pd.Series(n_sst_q3, index=times).fillna(0) / ocean_area / years
Q4 = pd.Series(n_sst_q4, index=times).fillna(0) / ocean_area / years
Q5 = pd.Series(n_sst_q5, index=times).fillna(0) / ocean_area / years
df = pd.DataFrame({'QL=3':Q3, 'QL=4':Q4, 'QL=5':Q5})
df['QL=4 & 5'] = df['QL=4'] + df['QL=5']
df = df.mask(np.isinf(df))
fig = plt.figure()
    plt.plot(times,df['QL=4 & 5'].cumsum(), drawstyle='steps', label='QL=4 & 5')
    plt.plot(times,df['QL=3'].cumsum(), drawstyle='steps', label='QL=3')
plt.tick_params(labelsize=12)
plt.ylabel("Observation density / $\mathrm{km^{-2} \ yr^{-1}}$", fontsize=12)
title_str = ' ' + 'QL=3:max=' + "{0:.5f}".format(df['QL=3'].cumsum().max()) + ' ' + 'QL=4 & 5:max=' + "{0:.5f}".format(df['QL=4 & 5'].cumsum().max())
print(title_str)
plt.legend(loc='best')
plt.savefig('n_sst.pdf')
# plt.savefig('n_sst.png', dpi=600)
# plt.savefig('n_sst.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def plot_n_sst_lat(lat_vec,n_sst_q3_lat,n_sst_q4_lat,n_sst_q5_lat):
"""
# ------------------------------------------
# PLOT SST OBSERVATION DENSITY WITH LATITUDE
# ------------------------------------------
"""
interpolation = np.arange(-90,90,1)
multiplier = 1.0
Q3 = multiplier * pd.Series(np.interp(interpolation,lat_vec,n_sst_q3_lat), index=interpolation)
Q4 = multiplier * pd.Series(np.interp(interpolation,lat_vec,n_sst_q4_lat), index=interpolation)
Q5 = multiplier * pd.Series(np.interp(interpolation,lat_vec,n_sst_q5_lat), index=interpolation)
df = pd.DataFrame({'QL=3':Q3, 'QL=4':Q4, 'QL=5':Q5})
df['QL=4 & 5'] = df['QL=4'] + df['QL=5']
df['QL=3 & 4 & 5'] = df['QL=3'] + df['QL=4'] + df['QL=5']
df = df.mask(np.isinf(df))
fig = plt.figure()
plt.fill_between(interpolation, df['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(interpolation, df['QL=3'], step="post", alpha=0.4)
plt.plot(interpolation, df['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
plt.plot(interpolation, df['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([-90,90])
ticks = ax.get_xticks()
ax.set_xticks(np.linspace(-90, 90, 7))
plt.tick_params(labelsize=12)
plt.xlabel("Latitude / $\mathrm{\degree N}$", fontsize=12)
plt.ylabel("Observation density / $\mathrm{km^{-2} \ yr^{-1}}$", fontsize=12)
plt.legend(loc='best')
plt.savefig('n_sst_lat.pdf')
# plt.savefig('n_sst_lat.png', dpi=600)
# plt.savefig('n_sst_lat.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def plot_histogram_sst(sst_midpoints,sst_q3_hist,sst_q4_hist,sst_q5_hist):
"""
# ------------------------------
# PLOT HISTOGRAM OF SST + MEDIAN
# ------------------------------
"""
# interpolation = np.arange(260.05,319.95,0.1) # original bin midpoints
i = np.arange(260,320,0.1) # bin edges
n = len(i)
m = 1.0
q3 = m * pd.Series(np.interp(i,sst_midpoints,sst_q3_hist), index=i)
q4 = m * pd.Series(np.interp(i,sst_midpoints,sst_q4_hist), index=i)
q5 = m * pd.Series(np.interp(i,sst_midpoints,sst_q5_hist), index=i)
dq = pd.DataFrame({'QL=3':q3, 'QL=4':q4, 'QL=5':q5})
dq['QL=4 & 5'] = 0.5 * (dq['QL=4'] + dq['QL=5'])
# dq = dq.mask(np.isinf(df))
M3 = calc_median(dq['QL=3'].values,i[0:n])
M4_5 = calc_median(dq['QL=4 & 5'].values,i[0:n])
interpolation = np.arange(260,320,1) # 10x original resolution
n = len(interpolation)
multiplier = 10.0
Q3 = multiplier * pd.Series(np.interp(interpolation,sst_midpoints,sst_q3_hist), index=interpolation)
Q4 = multiplier * pd.Series(np.interp(interpolation,sst_midpoints,sst_q4_hist), index=interpolation)
Q5 = multiplier * pd.Series(np.interp(interpolation,sst_midpoints,sst_q5_hist), index=interpolation)
df = pd.DataFrame({'QL=3':Q3, 'QL=4':Q4, 'QL=5':Q5})
df['QL=4 & 5'] = 0.5 * (df['QL=4'] + df['QL=5'])
# df = df.mask(np.isinf(df))
fig = plt.figure()
plt.fill_between(interpolation,df['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(interpolation,df['QL=3'], step="post", alpha=0.4)
    plt.plot(interpolation,df['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
    plt.plot(interpolation,df['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([260,310])
plt.tick_params(labelsize=12)
plt.xlabel("SST / $\mathrm{K}$", fontsize=12)
plt.ylabel("Frequency / $\mathrm{\% \ K^{-1}}$", fontsize=12)
title_str = 'SST: QL=3:median=' + "{0:.5f}".format(M3) + ' ' + 'QL=4 & 5:median=' + "{0:.5f}".format(M4_5)
print(title_str)
plt.legend(loc='best')
plt.savefig('hist_sst.pdf')
# plt.savefig('hist_sst.png', dpi=600)
# plt.savefig('hist_sst.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def plot_histogram_sensitivity(sensitivity_midpoints,sensitivity_q3_hist,sensitivity_q4_hist,sensitivity_q5_hist):
"""
# ------------------------------------------------
# PLOT HISTOGRAM OF RETRIEVAL SENSITIVITY + MEDIAN
# ------------------------------------------------
"""
# interpolation = np.arange(0.005,1.995,0.01) # original bin midpoints
interpolation = np.arange(0,2,0.01)
n = len(interpolation)
multiplier = 1.0
Q3 = multiplier * pd.Series(np.interp(interpolation,sensitivity_midpoints,sensitivity_q3_hist), index=interpolation)
Q4 = multiplier * pd.Series(np.interp(interpolation,sensitivity_midpoints,sensitivity_q4_hist), index=interpolation)
Q5 = multiplier * pd.Series(np.interp(interpolation,sensitivity_midpoints,sensitivity_q5_hist), index=interpolation)
df = pd.DataFrame({'QL=3':Q3, 'QL=4':Q4, 'QL=5':Q5})
df['QL=4 & 5'] = 0.5 * (df['QL=4'] + df['QL=5'])
# df = df.mask(np.isinf(df))
M3 = calc_median(df['QL=3'].values,interpolation[0:n])
M4_5 = calc_median(df['QL=4 & 5'].values,interpolation[0:n])
fig = plt.figure()
plt.fill_between(100.0*interpolation,df['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(100.0*interpolation,df['QL=3'], step="post", alpha=0.4)
    plt.plot(100.0*interpolation,df['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
    plt.plot(100.0*interpolation,df['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([85,110])
plt.tick_params(labelsize=12)
plt.xlabel("Retrieval sensitivity / $\mathrm{\%}$", fontsize=12)
plt.ylabel("Frequency / $\mathrm{\% \ {\%}^{-1} }$", fontsize=12)
title_str = 'Sensitivity: QL=3:median=' + "{0:.5f}".format(M3) + ' ' + 'QL=4 & 5:median=' + "{0:.5f}".format(M4_5)
print(title_str)
plt.legend(loc='best')
plt.savefig('hist_sensitivity.pdf')
# plt.savefig('hist_sensitivity.png', dpi=600)
# plt.savefig('hist_sensitivity.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def plot_histogram_total_uncertainty(total_uncertainty_midpoints,total_uncertainty_q3_hist,total_uncertainty_q4_hist,total_uncertainty_q5_hist):
"""
# --------------------------------------------
# PLOT HISTOGRAM OF TOTAL UNCERTAINTY + MEDIAN
# --------------------------------------------
"""
# interpolation = np.arange(0.005,3.995+0.01,0.01) # original bin midpoints
interpolation = np.arange(0,4,0.01)
n = len(interpolation)
multiplier = 1.0
Q3 = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q3_hist), index=interpolation)
Q4 = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q4_hist), index=interpolation)
Q5 = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q5_hist), index=interpolation)
df = pd.DataFrame({'QL=3':Q3, 'QL=4':Q4, 'QL=5':Q5})
df['QL=4 & 5'] = 0.5 * (df['QL=4'] + df['QL=5'])
# df = df.mask(np.isinf(df))
M3 = calc_median(df['QL=3'].values,interpolation[0:n])
M4_5 = calc_median(df['QL=4 & 5'].values,interpolation[0:n])
fig = plt.figure()
plt.fill_between(total_uncertainty_midpoints,df['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(total_uncertainty_midpoints,df['QL=3'], step="post", alpha=0.4)
    plt.plot(total_uncertainty_midpoints,df['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
    plt.plot(total_uncertainty_midpoints,df['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([0.0,1.25])
plt.tick_params(labelsize=12)
plt.xlabel("Total uncertainty / $\mathrm{K}$", fontsize=12)
plt.ylabel("Frequency / $\mathrm{\% \ cK^{-1}}$", fontsize=12)
title_str = 'Uncertainty: QL=3:median=' + "{0:.5f}".format(M3) + ' ' + 'QL=4 & 5:median=' + "{0:.5f}".format(M4_5)
print(title_str)
plt.legend(loc='best')
plt.savefig('hist_total_uncertainty.pdf')
# plt.savefig('hist_total_uncertainty.png', dpi=600)
# plt.savefig('hist_total_uncertainty.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def plot_histogram_total_uncertainty2(total_uncertainty_midpoints,total_uncertainty_q3_hist_avhrr,total_uncertainty_q4_hist_avhrr,total_uncertainty_q5_hist_avhrr,total_uncertainty_q3_hist_atsr,total_uncertainty_q4_hist_atsr,total_uncertainty_q5_hist_atsr):
"""
# --------------------------------------------------------------
# PLOT HISTOGRAM OF TOTAL UNCERTAINTY + MEDIAN FOR AVHRR VS ATSR
# --------------------------------------------------------------
"""
# interpolation = np.arange(0.005,3.995,0.01) # original bin midpoints
interpolation = np.arange(0,4,0.01)
n = len(interpolation)
multiplier = 1.0
Q3_avhrr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q3_hist_avhrr), index=interpolation)
Q4_avhrr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q4_hist_avhrr), index=interpolation)
Q5_avhrr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q5_hist_avhrr), index=interpolation)
df_avhrr = pd.DataFrame({'QL=3':Q3_avhrr, 'QL=4':Q4_avhrr, 'QL=5':Q5_avhrr})
# df_avhrr['QL=4 & 5'] = 0.5 * (df_avhrr['QL=4'] + df_avhrr['QL=5'])
df_avhrr['QL=4 & 5'] = df_avhrr['QL=5']
# df_avhrr = df_avhrr.mask(np.isinf(df_avhrr))
Q3_atsr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q3_hist_atsr), index=interpolation)
Q4_atsr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q4_hist_atsr), index=interpolation)
Q5_atsr = multiplier * pd.Series(np.interp(interpolation,total_uncertainty_midpoints,total_uncertainty_q5_hist_atsr), index=interpolation)
df_atsr = pd.DataFrame({'QL=3':Q3_atsr, 'QL=4':Q4_atsr, 'QL=5':Q5_atsr})
df_atsr['QL=4 & 5'] = 0.5 * (df_atsr['QL=4'] + df_atsr['QL=5'])
# df_atsr = df_atsr.mask(np.isinf(df_atsr))
fig = plt.figure()
plt.fill_between(total_uncertainty_midpoints,df_avhrr['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(total_uncertainty_midpoints,df_avhrr['QL=3'], step="post", alpha=0.4)
    plt.plot(total_uncertainty_midpoints,df_avhrr['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
    plt.plot(total_uncertainty_midpoints,df_avhrr['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([0.0,1.25])
plt.tick_params(labelsize=12)
plt.xlabel("Total uncertainty / $\mathrm{K}$", fontsize=12)
plt.ylabel("Frequency / $\mathrm{\% \ cK^{-1}}$", fontsize=12)
M3 = calc_median(df_avhrr['QL=3'].values,interpolation[0:n])
M4_5 = calc_median(df_avhrr['QL=4 & 5'].values,interpolation[0:n])
title_str = 'AVHRR: QL=3:median=' + "{0:.5f}".format(M3) + ' ' + 'QL=4 & 5:median=' + "{0:.5f}".format(M4_5)
print(title_str)
plt.legend(loc='best')
plt.savefig('hist_total_uncertainty_avhrr.pdf')
# plt.savefig('hist_total_uncertainty_avhrr.png', dpi=600)
# plt.savefig('hist_total_uncertainty_avhrr.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
fig = plt.figure()
plt.fill_between(total_uncertainty_midpoints,df_atsr['QL=4 & 5'], step="post", alpha=0.4)
plt.fill_between(total_uncertainty_midpoints,df_atsr['QL=3'], step="post", alpha=0.4)
    plt.plot(total_uncertainty_midpoints,df_atsr['QL=4 & 5'], drawstyle='steps-post', label='QL=4 & 5')
    plt.plot(total_uncertainty_midpoints,df_atsr['QL=3'], drawstyle='steps-post', label='QL=3')
ax = plt.gca()
ax.set_xlim([0.0,1.25])
plt.tick_params(labelsize=12)
plt.xlabel("Total uncertainty / $\mathrm{K}$", fontsize=12)
plt.ylabel("Frequency / $\mathrm{\% \ cK^{-1}}$", fontsize=12)
M3 = calc_median(df_atsr['QL=3'].values,interpolation[0:n])
M4_5 = calc_median(df_atsr['QL=4 & 5'].values,interpolation[0:n])
title_str = 'ATSR: QL=3:median=' + "{0:.5f}".format(M3) + ' ' + 'QL=4 & 5:median=' + "{0:.5f}".format(M4_5)
print(title_str)
plt.legend(loc='best')
plt.savefig('hist_total_uncertainty_atsr.pdf')
# plt.savefig('hist_total_uncertainty_atsr.png', dpi=600)
# plt.savefig('hist_total_uncertainty_atsr.eps', format='eps', rasterized=True, dpi=1200)
plt.close('all')
def calc_n_sst_timeseries(satellites):
"""
# ---------------------------------------------------------------
# CALC MEAN OF TIMESERIES OF DAILY OBSERVATION DENSITY PER SENSOR
# ---------------------------------------------------------------
"""
ocean_area = 361900000.0
labels = ['ATSR1','ATSR2','AATSR','NOAA07','NOAA09','NOAA11','NOAA12','NOAA14','NOAA15','NOAA16','NOAA17','NOAA18','NOAA19','METOPA']
satellites = ['ATSR1','ATSR2','AATSR','AVHRR07_G','AVHRR09_G','AVHRR11_G','AVHRR12_G','AVHRR14_G','AVHRR15_G','AVHRR16_G','AVHRR17_G','AVHRR18_G','AVHRR19_G','AVHRRMTA_G']
df_all = pd.DataFrame()
for i in range(0,len(satellites)):
filename = satellites[i] + '_summary.nc'
ds = xarray.open_dataset(filename)
dates = ds['time']
idx = np.argsort(dates, axis=0)
t = np.array(dates)[idx]
days = (t[-1] - t[0]).astype('timedelta64[D]') / np.timedelta64(1, 'D')
years = days/365.0
times_duplicates = pd.Series(t)
times = times_duplicates.drop_duplicates()
Q3_duplicates = pd.Series(ds['n_sst_q3'].values[idx], index=t)
Q4_duplicates = pd.Series(ds['n_sst_q4'].values[idx], index=t)
Q5_duplicates = pd.Series(ds['n_sst_q5'].values[idx], index=t)
n_sst_q3 = 365.0 * Q3_duplicates.groupby(Q3_duplicates.index).sum() / ocean_area
n_sst_q4 = 365.0 * Q4_duplicates.groupby(Q4_duplicates.index).sum() / ocean_area
n_sst_q5 = 365.0 * Q5_duplicates.groupby(Q5_duplicates.index).sum() / ocean_area
df = DataFrame({'Q3' : n_sst_q3, 'Q4' : n_sst_q4, 'Q5' : n_sst_q5})
df['Sum'] = df['Q4'] + df['Q5']
df_all = df_all.append(df,ignore_index=True)
satellites_avhrr = ['AVHRR07_G','AVHRR09_G','AVHRR11_G','AVHRR12_G','AVHRR14_G','AVHRR15_G','AVHRR16_G','AVHRR17_G','AVHRR18_G','AVHRR19_G','AVHRRMTA_G']
    df_avhrr =
wave
list = v1 + (v2-v1)*np.random.random(size=n)
# Program wave
loadWavetable(list,second)
# Return list
if returnList:
return list
'''
@setWaveFrequency@
setWaveFrequency(freq)
Set wave frequency by changing sample frequency
Required parameters:
freq : Wave frequency in Hz
Return sampleTime set
Included in slab.py
'''
def setWaveFrequency(freq):
# Checks
if freq <= 0.0:
raise SlabEx("Frequency cannot be negative or zero")
if w_idle == -1:
raise SlabEx("No wave loaded")
# Calculate sample time
st = 1.0/(w_points * freq)
if st < min_sample:
raise SlabEx("Frequency too high")
if st > max_sample:
raise SlabEx("Frequency too low")
# Change sample time
st = setSampleTime(st)
frequency = 1/(st * w_points)
message(1,"Sample time set to " +str(st) + " s")
message(1,"Wave frequency set to " +str(frequency) + " Hz")
return st
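# Illustrative note (not part of the original library code): the relationship used above is
#     freq = 1 / (sample_time * w_points)
# so with a 100-point wavetable, setWaveFrequency(100) asks for a 100 us sample time, since
#     1 / (100e-6 s * 100 points) = 100 Hz,
# and the request is accepted only if that sample time lies between min_sample and max_sample.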
################## WAVE RESPONSE COMMANDS #################
'''
@waveResponse@
waveResponse(npre,tinit,dual)
Obtain the response of a circuit against a wave
Measurement sequence:
1) Set DAC1 to first wave sample during tinit
2) Send npre waves to DAC1
3) Start measurement as set on setTransientStorage
During this time wave continues to be generated
Optional parameters:
npre : Number of waves before measurement (default to zero)
  tinit : Idle time before the first wave (defaults to 1.0 s)
dual : Use dual DAC generation (defaults to False)
Returns a list of vectors:
Vector 0 is time
Vectors 1 onward are ADC readings
Included in slab.py
See also setWaveFrequency and setTransientStorage
'''
def waveResponse(npre = 0,tinit = 1.0,dual=False):
# Checks
if not opened:
raise SlabEx("Not connected to board")
if npre < 0:
raise SlabEx("Invalid number of waves")
if w_idle < 0:
raise SlabEx("Wavetable not loaded")
if dual and w_idle2 < 0:
raise SlabEx("Secondary wavetable not loaded")
message(1,"Performing wave measurement...")
# Idle start
if tinit > 0.0:
setVoltage(1,w_idle)
if dual:
setVoltage(2,w_idle2)
time.sleep(tinit)
# Send command
if not dual:
startCommand('V')
else:
startCommand('v')
sendU16(npre)
sendCRC()
checkACK()
# Check for overrun or other errors
code = getByte()
if code == 1:
checkCRC()
raise SlabEx("Sample overrun")
if code == 3:
checkCRC()
raise SlabEx("Halt from board")
if code != 0:
raise SlabEx("Unknown transient response code")
message(1,"Mesurement ends. Receiving data")
na = getByte()
nd = getByte()
if nd!=0:
raise SlabEx("Digital transient is not supported Yet")
samples = getU16()
result = []
vector = []
for s in range(0,samples):
vector.append(s*sampleTime)
if scipy:
result.append(np.array(vector))
else:
result.append(vector)
for i in range(0,na):
vector = []
for s in range(0,samples):
fvalue = dc_cal(getU16()/65536.0,xcal,adcCalData[i])
fvalue = fvalue * vref
vector.append(fvalue)
if scipy:
result.append(np.array(vector))
else:
result.append(vector)
checkCRC()
    # Return to idle
setVoltage(1,w_idle)
message(1,"Data received")
return result
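# Usage sketch (illustrative, not executed here because it needs a connected board):
#     loadWavetable(points)         # 'points' is any list of sample voltages built elsewhere in slab.py
#     setWaveFrequency(100)         # play the table at 100 Hz
#     # configure transient storage as described in the docstring above, then:
#     data = waveResponse(npre=2)   # data[0] is the time vector, data[1:] are ADC readings
#     wavePlot(2)                   # or plot the same kind of measurement directly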
'''
@singleWaveResponse@
waveResponse(channel,npre,tinit)
Obtain the response of a circuit against a wave
Response is obtained only on the selected channel
regardless of the setting on setTransientStorage
Measurement sequence:
1) Set DAC1 to first wave sample during tinit
2) Send npre waves to DAC1
3) Start measurement as set on setTransientStorage
During this time wave continues to be generated
Optional parameters:
channel : ADC channel to read (default to 1)
npre : Number of waves before measurement (default to zero)
  tinit : Idle time before the first wave (defaults to 1.0 s)
Returns a list of two:
Vector 0 is time
Vectors 1 is ADC readings
Included in slab.py
See also setWaveFrequency and setTransientStorage
'''
def singleWaveResponse(channel = 1,npre = 0,tinit = 1.0):
# Checks
if not opened:
raise SlabEx("Not connected to board")
if npre < 0:
raise SlabEx("Invalid number of waves")
if channel < 1 or channel > 4:
raise SlabEx("Invalid channel number")
if w_idle < 0:
raise SlabEx("Wavetable not loaded")
message(1,"Performing wave measurement at ADC " + str(channel) + " ...")
# Idle start
if tinit > 0.0:
setVoltage(1,w_idle)
time.sleep(tinit)
# Send command
startCommand('X')
sendByte(channel)
sendU16(npre)
sendCRC()
checkACK()
# Check for overrun or other errors
code = getByte()
if code == 1:
checkCRC()
raise SlabEx("Sample overrun")
if code == 3:
checkCRC()
raise SlabEx("Halt from board")
if code != 0:
raise SlabEx("Unknown transient response code")
message(1,"Mesurement ends. Receiving data")
na = getByte()
nd = getByte()
if nd!=0:
raise SlabEx("Digital transient is not supported Yet")
if na!=1:
raise SlabEx("Internal Error: Only one ADC should be read")
samples = getU16()
result = []
vector = []
for s in range(0,samples):
vector.append(s*sampleTime)
if scipy:
result.append(np.array(vector))
else:
result.append(vector)
vector = []
for s in range(0,samples):
fvalue = dc_cal(getU16()/65536.0,xcal,adcCalData[channel-1])
fvalue = fvalue * vref
vector.append(fvalue)
if scipy:
result.append(np.array(vector))
else:
result.append(vector)
checkCRC()
    # Return to idle
setVoltage(1,w_idle)
message(1,"Data received")
return result
'''
@wavePlay@
wavePlay(n,tinit,dual)
Generates a wave without measuring
Generation sequence:
1) Set DAC1 to first wave sample during tinit
2) Send n waves to DAC1
Optional parameters:
n : Number of waves to send (default to one)
Zero means infinite (Use HALT to end)
  tinit : Idle time before the first wave (defaults to 1.0 s)
dual : Use dual DAC generation (defaults to False)
Returns nothing
Included in slab.py
See also setWaveFrequency
'''
def wavePlay(n = 1,tinit = 1.0,dual=False):
# Checks
if not opened:
raise SlabEx("Not connected to board")
if n < 0:
raise SlabEx("Invalid number of waves")
if w_idle < 0:
raise SlabEx("Wavetable not loaded")
if dual and w_idle2 < 0:
raise SlabEx("Secondary wavetable not loaded")
message(1,"Sending wave...")
# Idle start
if tinit > 0.0:
setVoltage(1,w_idle)
if dual:
setVoltage(2,w_idle2)
time.sleep(tinit)
# Send command
if not dual:
startCommand('Q')
else:
startCommand('q')
sendU16(n)
sendCRC()
checkACK()
# Check for overrun or other errors
code = getByte()
if code == 1:
checkCRC()
raise SlabEx("Sample overrun")
if code == 3:
checkCRC()
raise SlabEx("Halt from board")
if code != 0:
raise SlabEx("Unknown transient response code")
message(1,"Wave play ends")
checkCRC()
    # Return to idle
setVoltage(1,w_idle)
'''
@wavePlot@
wavePlot(npre,tinit,dual,returnData)
Plot the response of a circuit against a wave
Measurement sequence:
1) Set DAC1 to first wave sample during tinit
2) Send npre waves to DAC1
3) Start measurement as set on setTransientStorage
During this time wave continues to be generated
Optional parameters:
npre : Number of waves before measurement (default to zero)
  tinit : Idle time before the first wave (defaults to 1.0 s)
dual : Generate waves on both dacs (defaults to False)
returnData : Enables return of plot data (defaults to False)
Returns plot data if enabled
Vector 0 is time
Vectors 1 onward are ADC readings
Included in slab.py
See also setWaveFrequency, setTransientStorage and setPlotReturnData
'''
def wavePlot(n = 0,tinit = 1.0,dual=False,returnData=False):
if not scipy:
cannotPlot(exception=True)
return
# Perform measurement
res = waveResponse(n,tinit,dual)
# Plot result
message(1,"Drawing curves")
plt.figure(facecolor="white") # White border
for i in range(1,len(res)):
if (len(res) == 2):
# Only one plot
pl.plot(res[0],res[i])
else:
# More than one plot
pl.plot(res[0],res[i],label=adcNames[i-1])
pl.xlabel("Time (s)")
pl.ylabel("Voltage (V)")
pl.title("Wave Response Plot")
if (len(res) > 2):
# More than one plot
pl.legend(loc='best')
pl.grid()
pl.show()
pl.close()
if plotReturnData or returnData:
return res
'''
@singleWavePlot@
singleWavePlot(channel,npre,tinit,returnData)
Plot the response of a circuit against a wave
Response is obtained only on the selected channel
regardless of the setting on setTransientStorage
Measurement sequence:
1) Set DAC1 to first wave sample during tinit
2) Send npre waves to DAC1
3) Start measurement as set on setTransientStorage
During this time wave continues to be generated
Optional parameters:
channel : ADC channel to use (defaults to 1)
npre : Number of waves before measurement (default to zero)
  tinit : Idle time before the first wave (defaults to 1.0 s)
returnData : Enables return of plot data (defaults to False)
Returns plot data if enabled
Vector 0 is time
Vectors 1 onward are ADC readings
Included in slab.py
See also setWaveFrequency, setTransientStorage and setPlotReturnData
'''
def singleWavePlot(channel=1,n=0,tinit = 1.0,returnData=False):
if not scipy:
cannotPlot(exception=True)
return
# Perform measurement
res = singleWaveResponse(channel,n,tinit)
# Plot result
message(1,"Drawing curve")
plt.figure(facecolor="white") # White border
pl.plot(res[0],res[1])
pl.xlabel("Time (s)")
pl.ylabel("Voltage (V)")
pl.title("Single Wave Response Plot")
pl.grid()
pl.show()
| |
DH currently, press /status."
context.bot.editMessageText(text=warnText,
chat_id=user.id,
parse_mode=ParseMode.HTML)
return ConversationHandler.END # end convo if user pressed start but is in DH
else:
# get user intention from button pressed
pressed = str(query.data)
if pressed == 'INTENT_0':
intention = "TAKEAWAY"
if pressed == 'INTENT_1':
intention = "DINE-IN"
# Using chat_data to store information from the same chat ID
context.chat_data['Intention'] = intention
log_text = "User " + str(user.id) + " has indicated to {}.".format(intention)
logger.info(log_text)
reply_text = "Yumz, time for some good food!\n\n<b>You wish to {} in the Dining Hall now, can I confirm?</b>".format(intention)
reply_text += "\n\nOr did you accidentally press? Press <i>Back</i> to go back to the previous page!"
button_list = [InlineKeyboardButton(text='Yes, I confirm.', callback_data='CONFIRM_ENTRY'),
InlineKeyboardButton(text='Back', callback_data='CANCEL')]
menu = build_menu(button_list, n_cols=1, header_buttons=None, footer_buttons=None)
context.bot.editMessageText(text=reply_text,
chat_id=chatid,
message_id=query.message.message_id,
reply_markup=InlineKeyboardMarkup(menu),
parse_mode=ParseMode.HTML)
return CONFIRM_ENTRY
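# build_menu() is defined elsewhere in this bot. For reference, a typical implementation
# (an assumption here, modeled on the widely used python-telegram-bot snippet) packs a flat
# list of InlineKeyboardButton objects into rows of n_cols:
#
#     def build_menu(buttons, n_cols, header_buttons=None, footer_buttons=None):
#         menu = [buttons[i:i + n_cols] for i in range(0, len(buttons), n_cols)]
#         if header_buttons:
#             menu.insert(0, [header_buttons])
#         if footer_buttons:
#             menu.append([footer_buttons])
#         return menu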
# ██████╗ ██████╗ ███╗ ██╗███████╗██╗██████╗ ███╗ ███╗ ███████╗███╗ ██╗████████╗██████╗ ██╗ ██╗
# ██╔════╝██╔═══██╗████╗ ██║██╔════╝██║██╔══██╗████╗ ████║ ██╔════╝████╗ ██║╚══██╔══╝██╔══██╗╚██╗ ██╔╝
# ██║ ██║ ██║██╔██╗ ██║█████╗ ██║██████╔╝██╔████╔██║ █████╗ ██╔██╗ ██║ ██║ ██████╔╝ ╚████╔╝
# ██║ ██║ ██║██║╚██╗██║██╔══╝ ██║██╔══██╗██║╚██╔╝██║ ██╔══╝ ██║╚██╗██║ ██║ ██╔══██╗ ╚██╔╝
# ╚██████╗╚██████╔╝██║ ╚████║██║ ██║██║ ██║██║ ╚═╝ ██║ ███████╗██║ ╚████║ ██║ ██║ ██║ ██║
# ╚═════╝ ╚═════╝ ╚═╝ ╚═══╝╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝ ╚══════╝╚═╝ ╚═══╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝
#
def send_final(update, context):
query = update.callback_query
user = query.from_user
chatid = query.message.chat_id
log_text = "User " + str(user.id) + " has now confirmed entry to the DH."
logger.info(log_text)
reply_text = "<b>Okay, thank you for indicating on this bot! Do remind your friends to do the same as well!</b>\n\n" \
+ "I have also set up timers to remind you when the time limit is up!\n\n" + EAT + " Enjoy your meal! " + EAT \
+ "\n\nPlease press the button below <b>only if you are currently leaving</b> the dining hall:"
# encode leaving to specific user ID
exitID = "LEAVE_" + str(user.id)
button_list = [InlineKeyboardButton(text='Leave Dining Hall', callback_data = exitID)]
menu = build_menu(button_list, n_cols=1, header_buttons=None, footer_buttons=None)
context.bot.editMessageText(text=reply_text,
chat_id=chatid,
message_id=query.message.message_id,
reply_markup=InlineKeyboardMarkup(menu),
parse_mode=ParseMode.HTML) # no buttons for final text sent to the user
indicatedIntention = context.chat_data['Intention']
logger.info("Pulled intention is " + indicatedIntention)
if (indicatedIntention == "TAKEAWAY"):
# Add user to DB for takeaway
res = db.addTakeAwayUser(str(user.id))
if res:
notify_admin(TAKEAWAY_OVERFLOW_MESSAGE, context)
new_job = context.job_queue.run_once(alarmTakeAway, 420, context=user.id) # changed context to userID so as to be not usable in groups; 420 for 7 mins
#INFOSTORE[str(user.id)] = new_job
logger.info("Takeaway timer has started for {}".format(str(user.id)))
elif (indicatedIntention == "DINE-IN"):
# Add user to DB for dine-in
res = db.addDineInUser(str(user.id))
if res:
notify_admin(DINE_IN_OVERFLOW_MESSAGE, context)
new_job1 = context.job_queue.run_once(alarmEatIn25, 1500, context=user.id) # 1500s = 25 mins
new_job2 = context.job_queue.run_once(alarmEatIn20, 1200, context=user.id) # 1200s = 20 mins
#INFOSTORE[str(user.id)] = new_job
logger.info("Two dining in timers have started for {}".format(str(user.id)))
else:
logger.warning("Something went wrong with the intention...")
return
# changed to button to leave
# ██╗ ███████╗ █████╗ ██╗ ██╗███████╗
# ██║ ██╔════╝██╔══██╗██║ ██║██╔════╝
# ██║ █████╗ ███████║██║ ██║█████╗
# ██║ ██╔══╝ ██╔══██║╚██╗ ██╔╝██╔══╝
# ███████╗███████╗██║ ██║ ╚████╔╝ ███████╗
# ╚══════╝╚══════╝╚═╝ ╚═╝ ╚═══╝ ╚══════╝
#
def leaveEarly(update, context):
query = update.callback_query
user = query.from_user
chatid = query.message.chat_id
reply_text = "<b>Are you sure you are leaving the Dining Hall right now?</b>\n"
# encode leaving to specific user ID
exitID = "EXITCONFIRM_" + str(user.id)
button_list = [InlineKeyboardButton(text='Yes, Leave Dining Hall', callback_data = exitID)]
menu = build_menu(button_list, n_cols=1, header_buttons=None, footer_buttons=None)
context.bot.editMessageText(text=reply_text,
chat_id=chatid,
message_id=query.message.message_id,
reply_markup=InlineKeyboardMarkup(menu),
parse_mode=ParseMode.HTML)
return
# ████████╗██╗███╗ ███╗███████╗██████╗ ███████╗
# ╚══██╔══╝██║████╗ ████║██╔════╝██╔══██╗██╔════╝
# ██║ ██║██╔████╔██║█████╗ ██████╔╝███████╗
# ██║ ██║██║╚██╔╝██║██╔══╝ ██╔══██╗╚════██║
# ██║ ██║██║ ╚═╝ ██║███████╗██║ ██║███████║
# ╚═╝ ╚═╝╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚══════╝
#
def alarmEatIn25(context):
job = context.job
userID = job.context
# encode leaving to specific user ID
exitID = "EXITCONFIRM_" + str(userID)
EATIN_MESSAGE = "<b>Hi, you have been eating in the Dining Hall for 25 minutes. Kindly leave now, thank you for your cooperation!</b> " + RUN + RUN + RUN + "\n"
button_list = [InlineKeyboardButton(text='Leave Dining hall', callback_data=exitID)]
menu = build_menu(button_list, n_cols=1, header_buttons=None, footer_buttons=None)
userIn = db.checkUser(str(userID))
if userIn:
logger.info("Reminder text for eatin25 has been sent to the user {}".format(str(userID)))
context.bot.send_message(userID,
text=EATIN_MESSAGE,
reply_markup=InlineKeyboardMarkup(menu),
parse_mode=ParseMode.HTML)
else: # if user has left early
logger.info("User {} has already long left the DH! Nevertheless, this job has still be executed and no reminder message is sent to the user.".format(userID))
return
def alarmEatIn20(context):
job = context.job
userID = job.context
exitID = "EXITCONFIRM_" + str(userID)
EATIN_MESSAGE = "<b>Hi, you have been eating in the Dining Hall for 20 minutes already. Kindly leave soon!</b> " + RUN + "\n"
userIn = db.checkUser(str(userID))
if userIn:
logger.info("Reminder text for eatin20 has been sent to the user {}".format(str(userID)))
context.bot.send_message(userID,
text=EATIN_MESSAGE,
parse_mode=ParseMode.HTML)
else: # if user has left early
logger.info("User {} has already long left the DH! Nevertheless, this job has still be executed and no reminder message is sent to the user.".format(userID))
return
def alarmTakeAway(context):
job = context.job
userID = job.context
# encode leaving to specific user ID
exitID = "EXITCONFIRM_" + str(userID)
TAKEAWAY_MESSAGE = "<b>Hi, you have been in the Dining Hall for 7 minutes to take away food. Kindly leave now, thank you for your cooperation!</b> " + RUN + "\n"
button_list = [InlineKeyboardButton(text='Leave Dining Hall', callback_data = exitID)]
menu = build_menu(button_list, n_cols=1, header_buttons=None, footer_buttons=None)
userIn = db.checkUser(str(userID))
if userIn:
logger.info("Reminder text for takeaway has been sent to the user {}".format(str(userID)))
context.bot.send_message(userID,
text=TAKEAWAY_MESSAGE,
reply_markup=InlineKeyboardMarkup(menu),
parse_mode=ParseMode.HTML)
else: # if user has left early
logger.info("User {} has already long left the DH! Nevertheless, this job has still be executed and no reminder message is sent to the user.".format(userID))
return
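# Note on the timer callbacks above (taken from how they are wired up in send_final()):
# job_queue.run_once(callback, delay, context=user.id) stores the user id on the Job, so the
# callback reads it back as job.context and uses it both as the chat to message and as the key
# checked against the DB, e.g. the 7-minute takeaway reminder is scheduled as
#     context.job_queue.run_once(alarmTakeAway, 420, context=user.id)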
# When user leaves dining hall
def leaveFinal(update, context):
query = update.callback_query
user = query.from_user
chatid = query.message.chat_id
logger.info("Query data is: {}".format(str(query.data)))
# Remove user from DB
res = db.remove(str(user.id))
if (res == 1):
notify_admin(DINE_IN_OVERFLOW_RESOLVED_MESSAGE, context)
elif (res == 2):
notify_admin(TAKEAWAY_OVERFLOW_RESOLVED_MESSAGE, context)
#INFOSTORE[str(user.id)].schedule_removal()
#del INFOSTORE[str(user.id)]
# Check Job Queue
#logger.info("Job Queue is: {}".format(context.job_queue.jobs()))
log_text = "User " + str(user.id) + " has now confirmed exit from DH."
logger.info(log_text)
reply_text = "<b>Thank you for leaving on time! Do remind your friends to do the same as well! </b>" + HAPPY
reply_text += "\n\nTo restart the bot, press /start! Press /status to check current crowd level. Press /foodtmr or /foodtoday to get daily menus!"
context.bot.editMessageText(text=reply_text,
chat_id=chatid,
message_id=query.message.message_id,
parse_mode=ParseMode.HTML)
return ConversationHandler.END
# Feature 3: Reminder function to take temperature
# ██████╗ ███████╗███╗ ███╗██╗███╗ ██╗██████╗ ███████╗██████╗
# ██╔══██╗██╔════╝████╗ ████║██║████╗ ██║██╔══██╗██╔════╝██╔══██╗
# ██████╔╝█████╗ ██╔████╔██║██║██╔██╗ ██║██║ ██║█████╗ ██████╔╝
# ██╔══██╗██╔══╝ ██║╚██╔╝██║██║██║╚██╗██║██║ ██║██╔══╝ ██╔══██╗
# ██║ ██║███████╗██║ ╚═╝ ██║██║██║ ╚████║██████╔╝███████╗██║ ██║
# ╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝╚═════╝ ╚══════╝╚═╝ ╚═╝
#
def callback_reminder(context):
REMINDER_TEXT = WHALE + "<b>DAILY TEMPERATURE TAKING</b>" + WHALE + \
"\n\nHello!! Please remember to log your temperature at https://myaces.nus.edu.sg/htd/.\n\n" + \
"For those who do not have thermometers, RAs will be stationed at the " \
"<b>Level 1 Main Entrance</b> on Sunday to Saturday from:\n" + \
"1. 8am to 10am\n" + "2. 5.30pm to 7.30pm\n\n" + CAMERA + \
"Remember to take a photo of your temperature readings!\n\n" + \
"Last but not least, please rest well and take care during this period!!" + \
FLEXED_BICEPS + FLEXED_BICEPS + FLEXED_BICEPS
context.bot.send_message(context.job.context, text=REMINDER_TEXT, parse_mode=ParseMode.HTML)
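# Scheduling sketch (illustrative): callback_reminder is meant to be registered on the job queue,
# presumably inside main() below. With python-telegram-bot's JobQueue that could look like:
#     updater.job_queue.run_daily(callback_reminder,
#                                 time=datetime.time(hour=8, minute=0),
#                                 context=GROUP_CHAT_ID)  # GROUP_CHAT_ID is an assumed placeholder
# so that context.job.context inside callback_reminder is the chat to send the reminder to.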
# ██████╗ ██╗ ██╗ ███╗ ███╗███████╗███╗ ██╗██╗ ██╗
# ██╔══██╗██║ ██║ ████╗ ████║██╔════╝████╗ ██║██║ ██║
# ██║ ██║███████║ ██╔████╔██║█████╗ ██╔██╗ ██║██║ ██║
# ██║ ██║██╔══██║ ██║╚██╔╝██║██╔══╝ ██║╚██╗██║██║ ██║
# ██████╔╝██║ ██║ ██║ ╚═╝ ██║███████╗██║ ╚████║╚██████╔╝
# ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝╚══════╝╚═╝ ╚═══╝ ╚═════╝
#
def foodtoday(update, context):
user = update.message.from_user
chatid = update.message.chat_id
URL = getMenuURL(0)
reply_text = "<b>Here is the menu for Dining Hall food today:</b>\n\n"
reply_text += URL
context.bot.send_message(text=reply_text,
chat_id=chatid,
parse_mode=ParseMode.HTML)
return
def foodtmr(update, context):
user = update.message.from_user
chatid = update.message.chat_id
URL = getMenuURL(1)
reply_text = "<b>Here is the menu for Dining Hall food tomorrow:</b>\n\n"
reply_text += URL
context.bot.send_message(text=reply_text,
chat_id=chatid,
parse_mode=ParseMode.HTML)
return
# ██████╗ █████╗ ███╗ ██╗ ██████╗███████╗██╗
# ██╔════╝██╔══██╗████╗ ██║██╔════╝██╔════╝██║
# ██║ ███████║██╔██╗ ██║██║ █████╗ ██║
# ██║ ██╔══██║██║╚██╗██║██║ ██╔══╝ ██║
# ╚██████╗██║ ██║██║ ╚████║╚██████╗███████╗███████╗
# ╚═════╝╚═╝ ╚═╝╚═╝ ╚═══╝ ╚═════╝╚══════╝╚══════╝
#
def cancel(update, context):
user = update.message.from_user
chatid = update.message.chat_id
log_text = "User " + str(user.id) + " has cancelled using bot."
logger.info(log_text)
reply_text = "Okay Bye!\n\n"
reply_text += HELP_TEXT
context.bot.send_message(text=reply_text,
chat_id=chatid,
parse_mode=ParseMode.HTML)
return ConversationHandler.END
# ███╗ ███╗ █████╗ ██╗███╗ ██╗
# ████╗ ████║██╔══██╗██║████╗ ██║
# ██╔████╔██║███████║██║██╔██╗ ██║
# ██║╚██╔╝██║██╔══██║██║██║╚██╗██║
# ██║ ╚═╝ ██║██║ ██║██║██║ ╚████║
# ╚═╝ ╚═╝╚═╝ ╚═╝╚═╝╚═╝ ╚═══╝
#
def main():
TELEGRAM_TOKEN = os.environ['TELEGRAM_TOKEN']
updater
" + measure_write + " order by desc limit 1")
key_write = write_ss.keys()
print(key_write[:])
write_inter = write_ss[key_write[0]]
write_items = list(write_inter)
print(write_items[:])
write_now = int(write_items[0]['modulate'])
if aim_ns not in ns_list and (write_now == 0):
yichang = True
break
pod_status = [i.status.phase for i in v1.list_namespaced_pod(aim_ns).items]
print(pod_status)
print("going on")
print(measure)
# print(math.ceil(step_to_train * 0.75))
# print(step_now)
write_ss = client.query("select * from " + measure_write + " order by desc limit 1")
key_write = write_ss.keys()
print(key_write[:])
write_inter = write_ss[key_write[0]]
write_items = list(write_inter)
print(write_items[:])
write_now = int(write_items[0]['modulate'])
if ('Succeeded' in pod_status or 'Failed' in pod_status) and (write_now == 0):
if countt00 <= 3:
countt00+=1
else:
print("Job is ended")
yichang = True
break
div_num = min_steps2 - step_now + 1
sleep_last = interval_step * div_num
print(sleep_last)
print(div_num)
print(interval_step)
result = client.query("select * from " + measure_t + " order by desc limit 1")
key = result.keys()
result_inter = result[key[0]]
result_items = list(result_inter)
trains_step = int(result_items[0]['training_step'])
if step_now >= math.ceil(trains_step * 0.85):
jieshu = True
break
if step_now >= trains_step - 3:
print("This process is ended!!")
jieshu = True
break
# allow path!!!
# allow_path = "/tfdata/k8snfs/%s/%s.json" % (aim_ns, measure_t)
allow_path = '/tfdata/k8snfs/setad2/%s/%s.json' % (aim_ns, measure_t)
# allow_path = "/tfdata/k8snfs/%s/%s.json" % (aim_ns, measure_t)
retry_now = int(result_items[0]['retry'])
allow_read = load_config(allow_path)
print("Reload success!!")
allow_read['retry'] = retry_now
ps_now = int(result_items[0]['ps'])
worker_now = int(result_items[0]['worker'])
allow_read['worker'] = worker_now
allow_read['ps'] = ps_now
save_config2(allow_read, allow_path)
print("save success!!")
result2 = client.query("select * from " + measure_up + " order by desc limit 1")
key2 = result2.keys()
# print(key2)
result_inter2 = result2[key2[0]]
result_items2 = list(result_inter2)
# print(result_items2)
retry_top = int(result_items2[0]['retry'])
if retry_top != retry_now:
new_ps = int(result_items2[0]['ps'])
new_worker = int(result_items2[0]['worker'])
trains_step = math.ceil(trains_step * worker_now / new_worker)
allow_read = load_config(allow_path)
allow_read['retry'] = retry_top
allow_read['ps'] = new_ps
allow_read['worker'] = new_worker
save_config2(allow_read, allow_path)
print("saved successful!!")
# print(trains_step)
step_items = [
{
'measurement': measure_t,
'tags': {
'task': int(pre_list[-1]),
'runtimes': int(pre_list[-1]),
'retry': int(retry_top)
},
'fields': {
'training_step': int(trains_step),
'ps': int(allow_read['ps']),
'worker': int(allow_read['worker'])
}
}
]
print("saved in db")
client.write_points(step_items, time_precision="ms", database="PREDICT")
print("Writed in db")
min_steps2 = trains_step * 0.2
time.sleep(float(interval_step))
if yichang:
return [], 0
# selected_node = select_node(client, measure_s)
result = client.query("select * from " + measure_s + " where nodes='worker0' order by desc")
else:
# selected_node = select_node(client, measure_s)
result = client.query("select * from " + measure_s + " where nodes='worker0' order by desc")
print("select * from " + measure_s + " where nodes='worker0' order by desc")
keys = result.keys()
print(keys)
msg_raw = list(result[keys[0]])
print(msg_raw)
print(first)
print("Catched raw data")
tmp_loss = {}
for i in range(len(msg_raw)):
# tmp_step.append(int(msg_raw[i]['step']))
tmp = int(msg_raw[i]['step'])
# tmp_loss.append(msg_raw[i]['loss'])
if tmp in tmp_loss:
tmp_loss[tmp].append(msg_raw[i]['loss'])
else:
tmp_loss[tmp] = [msg_raw[i]['loss']]
steps = list(tmp_loss.keys())
loss = []
steps.sort()
for i in steps:
loss_per_step = np.mean(tmp_loss[i])
loss.append(loss_per_step)
step_high = steps[-1]
step_low = steps[0]
if first:
config = {}
loss_max = max(loss)
config['high'] = step_high
config['low'] = step_low
config['loss_max'] = loss_max
save_config(config, measure_s)
else:
filename = '%s.json' % measure_s
config = load_config(filename)
config['high'] = step_high
config['low'] = step_low
save_config(config, measure_s)
print("saved config")
max_loss = config['loss_max']
# print(loss)
if jieshu:
return loss,max_loss,1
else:
return loss,max_loss,0
def normalization(loss,max_loss):
loss_array = []
for i in loss:
tmp = i / max_loss
loss_array.append(tmp)
loss_array = np.asarray(loss_array)
return loss_array
def make_dataset_nnls(data,max_loss):
step_len = len(data)
step_arrange = list(np.arange(step_len)+1)
step_arrange.reverse()
step_x = np.array([1/i for i in step_arrange])
data = list(reversed(data))
data_in = np.array([[i/max_loss,1] for i in data])
return data_in,step_x
def make_dataset(data,max_loss,time_step,predict_step,intra):
loss_array = normalization(data,max_loss)
train = []
total_length = len(loss_array)
for i in range(0,total_length - time_step - predict_step,intra):
train_slice = loss_array[i:i+time_step+predict_step]
train.append(train_slice)
train = np.array(train).astype(float)
train_x = train[:,0:time_step]
train_y = train[:,time_step:]
train_twice_x = []
train_twice_y = []
gap = time_step // intra
slice_length = len(train)
for i in range(gap,slice_length):
tmp_slice_twice = []
tmp_slice_twice.append(train_x[i-gap])
tmp_slice_twice.append(train_x[i])
train_twice_x.append(tmp_slice_twice)
train_twice_y.append(train_y[i])
train_twice_x = np.array(train_twice_x).astype(float)
train_twice_y = np.array(train_twice_y).astype(float)
return train_x,train_y,train_twice_x,train_twice_y
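# --- Illustrative sketch (not part of the original script) ---
# Minimal example of how make_dataset() could be exercised on a synthetic,
# monotonically decreasing loss curve; all numbers below are hypothetical.
def _example_make_dataset_usage():
    demo_loss = list(np.linspace(2.0, 0.1, 60))
    tx, ty, ttx, tty = make_dataset(demo_loss, max_loss=2.0,
                                    time_step=10, predict_step=5, intra=1)
    # tx: (45, 10) input windows, ty: (45, 5) prediction targets
    # ttx: (35, 2, 10) pairs each window with the window one time_step earlier
    print(tx.shape, ty.shape, ttx.shape, tty.shape)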
def build_lstm_model(time_step,predict_step,input_dim):
model = Sequential()
model.add(LSTM(units=16,input_shape=(time_step,input_dim),return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=64,return_sequences=True))
model.add(LSTM(units=128,return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(units=predict_step))
model.add(Activation('linear'))
model.summary()
optimizer = optimizers.Adam()
model.compile(loss="mse",optimizer=optimizer)
return model
def build_twice_lstm_model(time_step,predict_step,input_dim):
input_1 = Input(shape=(time_step,input_dim),dtype='float32',name='First_Time_Step')
input_2 = Input(shape=(time_step,input_dim),dtype='float32',name='Pre_First_Time_Step')
lstm1 = LSTM(units=16,input_shape=(time_step,input_dim),return_sequences=True)(input_1)
lstm1 = Dropout(0.2)(lstm1)
lstm2 = LSTM(units=16,input_shape=(time_step,input_dim),return_sequences=True)(input_2)
lstm2 = Dropout(0.2)(lstm2)
lstm = concatenate([lstm2,lstm1],axis=1)
x1 = LSTM(units=64,return_sequences=True)(lstm)
x1 = LSTM(units=128,return_sequences=False)(x1)
x1 = Dense(units=predict_step)(x1)
output = Activation('linear')(x1)
model = Model(input=[input_1,input_2],output=output)
model.summary()
optimizer = optimizers.Adam()
model.compile(loss='mse',optimizer=optimizer)
return model
# Load a previously saved model
def load_model(filepath):
print('[Model] Loading model from file %s' % filepath)
model = keras.models.load_model(filepath)
return model
def reshape_for_lstm(data):
train = np.reshape(data,[data.shape[0],data.shape[1],1])
return train
def divide_train_test(data,split):
isplit = math.ceil(data.shape[0]*split)
train_data = data[:isplit]
test_data = data[isplit:]
return train_data,test_data
def train(x, y, epochs, batch_size, save_dir, model,measure):
pre_list = measure.split(" ")
measure_s = pre_list[0] + 'S' + pre_list[-1]
measure_t = pre_list[0] + 'T' + pre_list[-1]
if not os.path.exists(save_dir):
os.makedirs(save_dir)
timer = Timer()
timer.start()
print('[Model] Training Started')
print('[Model] %s epochs, %s batch size' % (epochs, batch_size))
def scheduler(epoch):
# Every 100 epochs, reduce the learning rate to 1/10 of its previous value
if epoch % 100 == 0 and epoch != 0:
lr = keras.backend.get_value(model.optimizer.lr)
keras.backend.set_value(model.optimizer.lr, lr * 0.1)
print("lr changed to {}".format(lr * 0.1))
return keras.backend.get_value(model.optimizer.lr)
#'%s-e%s.h5' % (dt.datetime.now().strftime('%d%m%Y-%H%M%S'), str(epochs))
save_fname = os.path.join(save_dir, '%s.h5' % measure_s)
reduce_lr = LearningRateScheduler(scheduler)
callbacks = [
EarlyStopping(monitor='val_loss', patience=10),
ModelCheckpoint(filepath=save_fname, monitor='val_loss', save_best_only=True),
ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto',
epsilon=0.001, cooldown=0, min_lr=0)
]
# Reduce the learning rate when the monitored metric has stopped improving.
#
# When learning stagnates, reducing the learning rate by a factor of 2 or 10 often helps.
# This callback monitors the given metric and reduces the learning rate when no improvement
# is seen for `patience` epochs.
# Parameters:
#
# monitor: quantity to be monitored
# factor: factor by which the learning rate is reduced each time (new_lr = lr * factor)
# patience: number of epochs with no improvement after which the learning rate is reduced
# mode: one of 'auto', 'min', 'max'; in 'min' mode the reduction triggers when the monitored
#       value stops decreasing, in 'max' mode when it stops increasing
# epsilon: threshold used to decide whether the monitored value has reached a plateau
# cooldown: number of epochs to wait before resuming normal operation after a reduction
# min_lr: lower bound on the learning rate
history = model.fit(
x,
y,
epochs=epochs,
batch_size=batch_size,
callbacks=callbacks,
validation_split=0.1
)
model.save(save_fname)
print('[Model] Training Completed. Model saved as %s' % save_fname)
timer.stop()
return history, model
def train_twice(x1,x2, y, epochs, batch_size, save_dir, model,measure):
pre_list = measure.split(" ")
measure_s = pre_list[0] + 'S' + pre_list[-1]
measure_t = pre_list[0] + 'T' + pre_list[-1]
if not os.path.exists(save_dir):
os.makedirs(save_dir)
timer = Timer()
timer.start()
print('[Model] Training Started')
print('[Model] %s epochs, %s batch size' % (epochs, batch_size))
def scheduler(epoch):
# Every 100 epochs, reduce the learning rate to 1/10 of its previous value
if epoch % 100 == 0 and epoch != 0:
lr = keras.backend.get_value(model.optimizer.lr)
keras.backend.set_value(model.optimizer.lr, lr * 0.1)
print("lr changed to {}".format(lr * 0.1))
return keras.backend.get_value(model.optimizer.lr)
save_fname = os.path.join(save_dir, '%s.h5' % measure_s)
reduce_lr = LearningRateScheduler(scheduler)
callbacks = [
EarlyStopping(monitor='val_loss', patience=10),
ModelCheckpoint(filepath=save_fname, monitor='val_loss', save_best_only=True),
ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=10, verbose=0, mode='auto',
epsilon=0.001, cooldown=0, min_lr=0)
]
# Reduce the learning rate when the monitored metric has stopped improving.
#
# When learning stagnates, reducing the learning rate by a factor of 2 or 10 often helps.
# This callback monitors the given metric and reduces the learning rate when no improvement
# is seen for `patience` epochs.
# Parameters:
#
# monitor: quantity to be monitored
# factor: factor by which the learning rate is reduced each time (new_lr = lr * factor)
# patience: number of epochs with no improvement after which the learning rate is reduced
# mode: one of 'auto', 'min', 'max'; in 'min' mode the reduction triggers when the monitored
#       value stops decreasing, in 'max' mode when it stops increasing
# epsilon: threshold used to decide whether the monitored value has reached a plateau
# cooldown: number of epochs to wait before resuming normal operation after a reduction
# min_lr: lower bound on the learning rate
history = model.fit(
{'First_Time_Step': x1,'Pre_First_Time_Step':x2},
y,
epochs=epochs,
batch_size=batch_size,
callbacks=callbacks,
validation_split=0.1
)
model.save(save_fname)
print('[Model] Training Completed. Model saved as %s' % save_fname)
timer.stop()
return history,model
def predict_once(data,model,input_dim,time_step,predict_step):
data = np.reshape(data,(1,time_step,input_dim))
predict_y = model.predict(data)
predict_y = np.array(predict_y).astype(float)
predict_y = np.reshape(predict_y,(predict_step,1))
return predict_y
def predict_once_t(data1,data2,model,input_dim,time_step,predict_step):
data1 = np.reshape(data1,(1,time_step,input_dim))
data2 = np.reshape(data2,(1,time_step,input_dim))
predict_y = model.predict([data1,data2])
predict_y = np.array(predict_y).astype(float)
predict_y = np.reshape(predict_y,(predict_step,1))
return predict_y
def predict_multi(data,model,input_dim,time_step,predict_step,intra):
iter = predict_step // intra
predict = []
for i in range(0,data.shape[0],iter):
pone = predict_once(data[i],model,input_dim,time_step,predict_step)
pone = np.array(pone).astype(float)
pone = np.reshape(pone,(predict_step,))
for p in pone:
predict.append(p)
predict = np.array(predict).astype(float)
predict = np.reshape(predict,(len(predict),1))
return predict
def predict_multi_t(data1,data2,model,input_dim,time_step,predict_step,intra):
iter = predict_step // intra
predict = []
for i in range(0,data1.shape[0],iter):
pone = predict_once_t(data1[i],data2[i],model,input_dim,time_step,predict_step)
pone = np.array(pone).astype(float)
pone = np.reshape(pone,(predict_step,))
for p in pone:
predict.append(p)
predict = np.array(predict).astype(float)
predict = np.reshape(predict,(len(predict),1))
return predict
def derivation(x1,x2):
xx = (x1 - x2)**2
result = float((math.sqrt((xx))) / x1)
return result
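# --- Illustrative note (not part of the original script) ---
# derivation() returns the relative change |x1 - x2| / x1; it is used below to
# decide whether the predicted loss curve has flattened out. For example,
# derivation(2.0, 1.5) == sqrt((2.0 - 1.5)**2) / 2.0 == 0.25.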
def step_predict(data,model,input_dim,predict_step,time_step,div,top_step,low_step,measure):
pre_list = measure.split(" ")
measure_s = pre_list[0] + 'S' + pre_list[-1]
measure_t = pre_list[0] + 'T' + pre_list[-1]
filename = '%s.json' % measure_s
config = load_config(filename)
# config['high'] = step_high
# config['low'] = step_low
# save_config(config, measure)
#
#
# max_loss = config['loss_max']
step_high = config['high']
max_loss_read = config['loss_max']
data_array = np.array(data).astype(float)
data_array = data_array / max_loss_read
data_use = list(data_array)
fit_step = 0 - time_step - predict_step
data_fit = data_use[fit_step:]
data_list = list(data_fit[:])
data_fit = np.array(data_fit[-time_step:]).astype(float)
data_fit = np.reshape(data_fit,(1,time_step,input_dim))
# data = np.reshape(data, (1, time_step, input_dim))
predict_res = predict_once(data_fit,model,input_dim,time_step,predict_step)
predict_res = np.squeeze(predict_res)
step_to_train = predict_step
tmp_base = 0 - 3*predict_step
for i in range(predict_step):
data_list.append(predict_res[i])
while True:
print(step_to_train)
if step_to_train + step_high >= top_step:
break
data_div_pre = data_list[tmp_base:]
print(data_div_pre)
data_div_base = []
for i in range(1,3*predict_step):
tmp_div = derivation(data_div_pre[i-1],data_div_pre[i])
data_div_base.append(tmp_div)
der_base = np.mean(data_div_base)
print(der_base)
if der_base < div:
break
data_fit = data_list[fit_step:]
data_list = list(data_fit[:])
data_fit = np.array(data_fit[-time_step:]).astype(float)
data_fit = np.reshape(data_fit, (1, time_step, input_dim))
# data = np.reshape(data, (1, time_step, input_dim))
<filename>utils/pose_utils_np.py
"""
Copyright (C) 2018 NVIDIA Corporation. All rights reserved.
Licensed under the CC BY-NC-SA 4.0 license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
"""
import torch
from torch.nn import Module
from torch.autograd import Variable
from torch.nn.functional import pad
import numpy as np
import scipy.linalg as slin
import math
import transforms3d.quaternions as txq
import transforms3d.euler as txe
import quaternion
import bisect
# see for formulas:
# https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-801-machine-vision-fall-2004/readings/quaternions.pdf
# and "Quaternion and Rotation" - <NAME>, September 18, 2016
# from IPython.core.debugger import set_trace
# PYTORCH
def tq2RT(poses, square=False):
"""
:param poses: N x 7, (t,q)
:return: (N,3,4)
"""
N,_ = poses.shape
T = poses[:,:3]
q = quaternion.from_float_array(poses[:,3:])
R = quaternion.as_rotation_matrix(q) #Nx3x3
RT = np.concatenate([R,T[...,None]], axis=-1) #Nx3x4
if square:
padding = np.zeros([N,1,4])
padding[:,:,-1] = 1
RT = np.concatenate([RT,padding], axis=1) #Nx4x4
return RT
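# --- Illustrative sketch (not part of the original file) ---
# Minimal example of tq2RT(): an identity rotation with a pure translation,
# assuming quaternions are stored as (w, x, y, z) as elsewhere in this file.
def _example_tq2RT():
    pose = np.array([[1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 0.0]])  # hypothetical (t, q)
    rt = tq2RT(pose, square=True)
    print(rt.shape)  # (1, 4, 4): identity rotation block with [1, 2, 3] in the last column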
def RT2tq(poses, square=False):
"""
!! NOT TESTED !!
:param poses: N x 3 x 4, (R|T)
:return: (N, 7)
"""
N,_,_ = poses.shape
R = poses[:,:,:3]
T = poses[:,:,3:] # Nx3x1
q = quaternion.as_float_array(quaternion.from_rotation_matrix(R)) #Nx4
t= T.squeeze(-1)
tq = np.concatenate([t,q], axis=-1)
return tq
def pose_interp(poses, timestamps_in, timestamps_out, r_interp='slerp'):
"""
:param poses: N x 7, (t,q)
:param timestamps_in: (N,)
:param timestamps_out: (K,)
:return: (K, 7)
"""
# assert t_interp in ['linear', 'spline']
assert r_interp in ['slerp', 'squad']
assert len(poses)>1
assert len(poses) == len(timestamps_in)
input_ts = poses[:,:3]
input_rs= poses[:,3:] #quaternions
timestamps_in = np.array(timestamps_in)
#sort the inputs
inds = np.argsort(timestamps_in)
poses = poses[inds]
timestamps_in = timestamps_in[inds]
if r_interp == 'squad':
input_rs_ = quaternion.from_float_array(input_rs)
output_rs = quaternion.squad(input_rs_, timestamps_in, timestamps_out)
output_rs = quaternion.as_float_array(output_rs)
elif r_interp == 'slerp':
output_rs = []
for t in timestamps_out:
input_rs_ = quaternion.from_float_array(input_rs)
idx = bisect.bisect_left(timestamps_in, t)
output_r = quaternion.slerp(input_rs_[idx],input_rs_[idx+1], timestamps_in[idx], timestamps_in[idx+1],t )
output_r = quaternion.as_float_array(output_r)
output_rs.append(output_r)
output_ts = []
for t in timestamps_out:
idx = bisect.bisect_left(timestamps_in, t)
if idx>=len(timestamps_in)-1:
idx -= 1
t1 = timestamps_in[idx]
t2 = timestamps_in[idx+1]
output_t = ((t-t1)*input_ts[idx+1] + (t2-t) *input_ts[idx]) / (t2-t1)
output_ts.append(output_t)
output_ts = np.stack(output_ts, axis=0)
output_rs = np.stack(output_rs, axis=0)
new_pose = np.concatenate([output_ts, output_rs], axis=1)
return new_pose
def vdot(v1, v2):
"""
Dot product along the dim=1
:param v1: N x d
:param v2: N x d
:return: N x 1
"""
# out = torch.mul(v1, v2)
out = v1 * v2
out = np.sum(out, axis=1)
return out
def normalize(x, p=2, dim=0, eps=1e-6):
"""
Divides a tensor along a certain dim by the Lp norm
:param x:
:param p: Lp norm
:param dim: Dimension to normalize along
:return:
"""
# xn=x.norm(p = p, dim = dim)
xn = np.linalg.norm(x, ord=p, axis=dim, keepdims=True)
# x=x / xn.unsqueeze(dim = dim)
x = x / (xn+eps)
# x *= np.sign(x[0]) #added on 22/4/2020
return x
def qmult(q1, q2):
"""
Multiply 2 quaternions
:param q1: Tensor N x 4
:param q2: Tensor N x 4
:return: quaternion product, Tensor N x 4
"""
q1s, q1v = q1[:, :1], q1[:, 1:]
q2s, q2v = q2[:, :1], q2[:, 1:]
qs = q1s*q2s - vdot(q1v, q2v)
# qv=q1v.mul(q2s.expand_as(q1v)) + q2v.mul(q1s.expand_as(q2v)) +
# torch.cross(q1v, q2v, dim = 1)
qv = q1v*q2s + q2v*q1s + np.cross(q1v, q2v, axis=1)
q = np.concatenate((qs, qv), axis=1)
# normalize
q = normalize(q, dim=1)
return q
def qinv(q):
"""
Inverts quaternions
:param q: N x 4
:return: q*: N x 4
"""
# q_inv = torch.cat((q[:, :1], -q[:, 1:]), dim=1)
q_inv = np.concatenate((q[:, :1], -q[:, 1:]), axis=1)
return q_inv
def qexp_t(q):
"""
Applies exponential map to log quaternion
:param q: N x 3
:return: N x 4
"""
n = torch.norm(q, p=2, dim=1, keepdim=True)
n = torch.clamp(n, min=1e-8)
q = q * torch.sin(n)
q = q / n
q = torch.cat((torch.cos(n), q), dim=1)
return q
def qlog_t(q):
"""
Applies the log map to a quaternion
:param q: N x 4
:return: N x 3
"""
n = torch.norm(q[:, 1:], p=2, dim=1, keepdim=True)
n = torch.clamp(n, min=1e-8)
q = q[:, 1:] * torch.acos(torch.clamp(q[:, :1], min=-1.0, max=1.0))
q = q / n
return q
def qexp_t_safe(q):
"""
Applies exponential map to log quaternion (safe implementation that does not
maintain gradient flow)
:param q: N x 3
:return: N x 4
"""
q = torch.from_numpy(np.asarray([qexp(qq) for qq in q.numpy()],
dtype=np.float32))
return q
def qlog_t_safe(q):
"""
Applies the log map to a quaternion (safe implementation that does not
maintain gradient flow)
:param q: N x 4
:return: N x 3
"""
q = torch.from_numpy(np.asarray([qlog(qq) for qq in q.numpy()],
dtype=np.float32))
return q
def rotate_vec_by_q(t, q):
"""
rotates vector t by quaternion q
:param t: vector, Tensor N x 3
:param q: quaternion, Tensor N x 4
:return: t rotated by q: t' = t + 2*qs*(qv x t) + 2*qv x (qv x r)
"""
qs, qv = q[:, :1], q[:, 1:]
# b = torch.cross(qv, t, dim=1)
b = np.cross(qv, t, axis=1)
# c = 2 * torch.cross(qv, b, dim=1)
c = 2 * np.cross(qv, b, axis=1)
# b = 2 * b.mul(qs.expand_as(b))
b = 2 * b*qs
tq = t + b + c
return tq
def compose_pose_quaternion(p1, p2):
"""
pyTorch implementation
:param p1: input pose, Tensor N x 7
:param p2: pose to apply, Tensor N x 7
:return: output pose, Tensor N x 7
all poses are translation + quaternion
#!comments: first apply p2 and then p1 !!
"""
p1t, p1q = p1[:, :3], p1[:, 3:]
p2t, p2q = p2[:, :3], p2[:, 3:]
q = qmult(p1q, p2q)
t = p1t + rotate_vec_by_q(p2t, p1q)
return np.concatenate((t, q), axis=1)
def invert_pose_quaternion(p):
"""
inverts the pose
:param p: pose, Tensor N x 7
:return: inverted pose
"""
t, q = p[:, :3], p[:, 3:]
q_inv = qinv(q)
tinv = -rotate_vec_by_q(t, q_inv)
return np.concatenate((tinv, q_inv), axis=1)
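# --- Illustrative sketch (not part of the original file) ---
# Quick sanity check, assuming unit-norm quaternions stored as (w, x, y, z):
# composing a pose with its own inverse should give the identity pose.
def _example_pose_compose_invert():
    p = np.array([[1.0, 2.0, 3.0, 0.7071, 0.0, 0.7071, 0.0]])  # hypothetical (t, q)
    identity = compose_pose_quaternion(p, invert_pose_quaternion(p))
    print(identity)  # translation ~ [0, 0, 0], quaternion ~ [1, 0, 0, 0]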
def calc_vo(p0, p1):
"""
calculates VO (in the p0 frame) from 2 poses
:param p0: N x 7
:param p1: N x 7
"""
# assert p0.shape==p1.shape
return compose_pose_quaternion(invert_pose_quaternion(p0), p1)
def calc_vo_logq(p0, p1):
"""
VO (in the p0 frame) (logq)
:param p0: N x 6
:param p1: N x 6
:return: N-1 x 6
"""
q0 = qexp_t(p0[:, 3:])
q1 = qexp_t(p1[:, 3:])
vos = calc_vo(torch.cat((p0[:, :3], q0), dim=1), torch.cat((p1[:, :3], q1),
dim=1))
vos_q = qlog_t(vos[:, 3:])
return torch.cat((vos[:, :3], vos_q), dim=1)
def calc_vo_relative(p0, p1):
"""
calculates VO (in the world frame) from 2 poses
:param p0: N x 7
:param p1: N x 7
"""
vos_t = p1[:, :3] - p0[:, :3]
vos_q = qmult(qinv(p0[:, 3:]), p1[:, 3:])
return np.concatenate((vos_t, vos_q), axis=1)
def calc_vo_relative_logq(p0, p1):
"""
Calculates VO (in the world frame) from 2 poses (log q)
:param p0: N x 6
:param p1: N x 6
:return:
"""
q0 = qexp_t(p0[:, 3:])
q1 = qexp_t(p1[:, 3:])
vos = calc_vo_relative(torch.cat((p0[:, :3], q0), dim=1),
torch.cat((p1[:, :3], q1), dim=1))
vos_q = qlog_t(vos[:, 3:])
return torch.cat((vos[:, :3], vos_q), dim=1)
def calc_vo_relative_logq_safe(p0, p1):
"""
Calculates VO (in the world frame) from 2 poses (log q) through numpy fns
:param p0: N x 6
:param p1: N x 6
:return:
"""
vos_t = p1[:, :3] - p0[:, :3]
q0 = qexp_t_safe(p0[:, 3:])
q1 = qexp_t_safe(p1[:, 3:])
vos_q = qmult(qinv(q0), q1)
vos_q = qlog_t_safe(vos_q)
return torch.cat((vos_t, vos_q), dim=1)
def calc_vo_logq_safe(p0, p1):
"""
VO in the p0 frame using numpy fns
:param p0:
:param p1:
:return:
"""
vos_t = p1[:, :3] - p0[:, :3]
q0 = qexp_t_safe(p0[:, 3:])
q1 = qexp_t_safe(p1[:, 3:])
vos_t = rotate_vec_by_q(vos_t, qinv(q0))
vos_q = qmult(qinv(q0), q1)
vos_q = qlog_t_safe(vos_q)
return torch.cat((vos_t, vos_q), dim=1)
def calc_vos_simple(poses):
"""
calculate the VOs, from a list of consecutive poses
:param poses: N x T x 7
:return: N x (T-1) x 7
"""
vos = []
for p in poses:
pvos = [p[i+1].unsqueeze(0) - p[i].unsqueeze(0)
for i in range(len(p)-1)]
vos.append(torch.cat(pvos, dim=0))
vos = torch.stack(vos, dim=0)
return vos
def calc_vos(poses):
"""
calculate the VOs, from a list of consecutive poses (in the p0 frame)
:param poses: N x T x 7
:return: N x (T-1) x 7
"""
vos = []
for p in poses:
pvos = [calc_vo_logq(p[i].unsqueeze(0), p[i+1].unsqueeze(0))
for i in range(len(p)-1)]
vos.append(torch.cat(pvos, dim=0))
vos = torch.stack(vos, dim=0)
return vos
def calc_vos_relative(poses):
"""
calculate the VOs, from a list of consecutive poses (in the world frame)
:param poses: N x T x 7
# Author: <NAME>, <NAME>
"""API for computing integrals."""
import json
from flask import request
from flask.json import jsonify
from lark import Lark, Transformer, v_args, exceptions
from fractions import Fraction
from sympy import expand_multinomial
import pathlib
import os
import integral
from logic import basic
from integral import slagle
from integral import proof
from app.app import app
basic.load_theory('interval_arith')
@app.route("/api/integral-load-file-list", methods=['POST'])
def integral_load_file_list():
os.chdir('./integral/examples')
json_files = tuple(str(z) for z in list(pathlib.Path('./').rglob('*.json')))
os.chdir('../../')
return jsonify({
'file_list': json_files
})
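# --- Illustrative sketch (not part of the original module) ---
# How a client might query the endpoint above, assuming the Flask app is served
# locally on port 5000 (hypothetical host/port).
def _example_list_files_request():
    import requests  # assumed available in the client environment
    resp = requests.post('http://localhost:5000/api/integral-load-file-list')
    print(resp.json()['file_list'])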
@app.route("/api/integral-open-file", methods=['POST'])
def integral_open_file():
data = json.loads(request.get_data().decode('utf-8'))
file_name = "integral/examples/%s" % data['filename']
with open(file_name, 'r', encoding='utf-8') as f:
f_data = json.load(f)
for item in f_data['content']:
problem = integral.parser.parse_expr(item['problem'])
item['_problem_latex'] = integral.latex.convert_expr(problem)
return jsonify(f_data)
@app.route("/api/integral-initialize", methods=['POST'])
def integral_initialize():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
return jsonify({
'text': str(problem),
'latex': integral.latex.convert_expr(problem),
'reason': "Initial"
})
@app.route("/api/integral-validate-integral", methods=['POST'])
def integral_validate_integral():
data = json.loads(request.get_data().decode('utf-8'))
try:
problem = integral.parser.parse_expr(data['expr'])
index = int(data['index'])
return jsonify({
'flag': True,
'content': {
'name': 'Exercise ' + str(data['index']),
'problem': data['expr'],
'_problem_latex': integral.latex.convert_expr(problem),
}
})
except:
return jsonify({
'flag': False
})
@app.route("/api/integral-super-simplify", methods=['POST'])
def integral_super_simplify():
data = json.loads(request.get_data().decode('utf-8'))
rules_set = [integral.rules.Simplify(), integral.rules.OnSubterm(integral.rules.Linearity()), integral.rules.OnSubterm(integral.rules.CommonIntegral())]
# abs_rule = integral.rules.ElimAbs()
problem = integral.parser.parse_expr(data['problem'])
# if not (abs_rule.check_zero_point(problem) and len(problem.getAbs()) == 0):
# # If there are no abs expression or there are no zero point
# rules_set.append(integral.rules.OnSubterm(integral.rules.ElimAbs()))
def simplify(problem):
for i in range(5):
for r in rules_set:
problem = r.eval(problem)
if problem.is_constant():
return problem
return problem
problem = simplify(integral.parser.parse_expr(data['problem']))
step = {
'text': str(problem),
'latex': integral.latex.convert_expr(problem),
'reason': "Simplification",
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'])
return jsonify(step)
@app.route("/api/integral-elim-abs", methods=["POST"])
def integral_elim_abs():
data = json.loads(request.get_data().decode('utf-8'))
rule = integral.rules.ElimAbs()
problem = integral.parser.parse_expr(data['problem'])
if not rule.check_zero_point(problem):
new_problem = rule.eval(problem)
step = {
'reason': "Elim abs",
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'location': data['location']
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'])
return jsonify(step)
c = rule.get_zero_point(problem)
new_problem = rule.eval(problem)
step = {
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'reason': "Elim abs",
'params': {
'c': str(c)
},
'location': data['location']
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'])
return jsonify(step)
@app.route("/api/integral-integrate-by-equation", methods=['POST'])
def integrate_by_equation():
data = json.loads(request.get_data().decode('utf-8'))
rhs = integral.parser.parse_expr(data['rhs'])
lhs = integral.parser.parse_expr(data['lhs'])
rule = integral.rules.IntegrateByEquation(lhs)
if not rule.validate(rhs):
return jsonify({
'flag': False
})
new_problem = rule.eval(rhs)
coeff = rule.coeff
return jsonify({
"text": str(new_problem),
"latex": integral.latex.convert_expr(new_problem),
"params": {
"factor": str(coeff),
"prev_id": str(int(data['prev_id']) - 1)
},
"reason": "Solve equation",
"_latex_reason": "By solving equation: \\(%s = %s\\)" % (
integral.latex.convert_expr(lhs), integral.latex.convert_expr(rhs)
)
})
@app.route("/api/integral-separate-integrals", methods=['POST'])
def integral_separate_integrals():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
integrals = problem.separate_integral()
n = []
for i, loc in integrals:
n.append({
"text": str(i),
"var_name": i.var,
"body": str(i.body),
"latex": integral.latex.convert_expr(i),
"location": str(loc)
})
return json.dumps(n)
@app.route("/api/integral-compose-integral", methods=['POST'])
def integral_compose_integral():
data = json.loads(request.get_data().decode('utf-8'))
new_integral = []
latex_reason = ""
reason = ""
modified_index = int(data['index'])
location = ""
if 'location' in data['problem'][modified_index]:
location = data['problem'][modified_index]['location']
denom = ""
rhs = ""
params = {}
for d in data['problem']:
new_integral.append(integral.parser.parse_expr(d['text']))
if '_latex_reason' in d:
latex_reason += d['_latex_reason']
if 'reason' in d:
reason += d['reason']
if 'params' in d:
params = d['params']
if 'denom' in d:
denom = d['denom']
if 'rhs' in d:
rhs = d['rhs']
curr = integral.parser.parse_expr(data['cur_calc'])
new_expr = curr
old_integral = curr.separate_integral()
for i in range(len(old_integral)):
new_expr = new_expr.replace_trig(old_integral[i][0], new_integral[i])
info = {
'text': str(new_expr),
'latex': integral.latex.convert_expr(new_expr),
'reason': reason,
'checked': data['problem'][data['index']]['checked'],
'proof': data['problem'][data['index']]['proof']
}
if location != "":
info.update({'location': location})
if params:
info.update({'params': params})
if denom:
info.update({'denom': denom})
if rhs:
info.update({'rhs': rhs})
if latex_reason:
info.update({'_latex_reason': latex_reason})
return json.dumps(info)
@app.route("/api/integral-substitution", methods=['POST'])
def integral_substitution():
data = json.loads(request.get_data().decode('utf-8'))
try:
expr = integral.parser.parse_expr(data['expr'])
except:
return jsonify({
'flag': False,
'reason': "%s is not a valid substitution expression." % data['expr']
})
rule = integral.rules.Substitution1(data['var_name'], expr)
problem = integral.parser.parse_expr(data['problem'])
if data['var_name'] == problem.var:
return jsonify({
'flag': False,
'reason': "%s is not a valid variable for substitution." % data['var_name']
})
try:
new_problem = rule.eval(problem)
new_problem_body = str(rule.f)
except:
return jsonify({
'flag': False,
'reason': "Substitution failed."
})
log = {
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'reason': "Substitution",
'location': data['location'],
'params': {
'f': new_problem_body,
'g': str(expr),
'var_name': str(data['var_name'])
},
'_latex_reason': "Substitute \\(%s\\) for \\(%s\\)" % (
integral.latex.convert_expr(integral.parser.parse_expr(data['var_name'])), integral.latex.convert_expr(expr)
)
}
log['checked'], log['proof'] = proof.translate_single_item(log, data['problem'], _loc="")
return jsonify({
'flag': True,
'log': log
})
@app.route("/api/integral-substitution2", methods=['POST'])
def integral_substitution2():
data = json.loads(request.get_data().decode('utf-8'))
try:
expr = integral.parser.parse_expr(data['expr'])
except:
return jsonify({
'flag': False,
'reason': "%s is not a valid expression" % data['expr']
})
rule = integral.rules.Substitution2(data['var_name'], expr)
problem = integral.parser.parse_expr(data['problem'])
new_problem = rule.eval(problem)
log = {
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'reason': "Substitution inverse",
'location': data['location'],
'params': {
'g': str(expr),
'var_name': str(data['var_name']),
"a": str(new_problem.lower),
"b": str(new_problem.upper)
},
'_latex_reason': "Substitute \\(%s\\) for \\(%s\\)" % (
integral.latex.convert_expr(integral.parser.parse_expr(problem.var)), integral.latex.convert_expr(expr)
)
}
log['checked'], log['proof'] = proof.translate_single_item(log, data['problem'])
return jsonify({
'flag': True,
'log': log
})
@app.route("/api/integral-validate-expr", methods=['POST'])
def integral_validate_expr():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
flag = None # if dollar is valid, flag = true
try:
dollar = integral.parser.parse_expr(data['dollar'])
if dollar.normalize() != problem.body.normalize():
return jsonify({
'flag': False
})
else:
# Do trig transform
select = integral.parser.parse_expr(data['select'])
dollar_location = dollar.get_location()
location = ""
if data["integral_location"] != "":
location = data["integral_location"] + ".0"
else:
location = "0"
if dollar_location != "":
location += "." + dollar_location
# location = data["integral_location"] + ".0." + dollar_location if data["integral_location"] != "" else "0." + dollar_location
new_trig_set = tuple(integral.expr.trig_transform(select, problem.var))
new_integral_set = [
integral.expr.Integral(problem.var, problem.lower, problem.upper, problem.body.replace_expr(dollar_location, t[0]))
for t in new_trig_set]
transform_info = []
for i in range(len(new_integral_set)):
step = {
"reason": "Rewrite trigonometric",
'text': str(new_integral_set[i]),
'latex': integral.latex.convert_expr(new_integral_set[i]),
"params":{
"rule": new_trig_set[i][1]
},
'_latex_reason': "Rewrite trigonometric \\(%s\\) to \\(%s\\)" %
(integral.latex.convert_expr(select), integral.latex.convert_expr(new_trig_set[i][0])),
# If there is only one integral in the full expression, location begins from the body;
# Else from the integral
"location": location
}
if dollar_location == "":
rel_loc = "0"
else:
rel_loc = "0."+dollar_location
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'], _loc=rel_loc)
transform_info.append(step)
return jsonify({
"flag": True,
"content": transform_info
})
except (exceptions.UnexpectedCharacters, exceptions.UnexpectedToken) as e:
return jsonify({
'flag': False
})
@app.route("/api/integral-validate-power-expr", methods=['POST'])
def integral_validate_power_expr():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
flag = None # if dollar is valid, flag = true
try:
dollar = integral.parser.parse_expr(data['dollar'])
if dollar.normalize() != problem.body.normalize():
return jsonify({
'flag': False
})
else:
select = integral.parser.parse_expr(data['select'])
if not (select.ty == integral.expr.OP and select.op == "^" and select.args[1].ty == integral.expr.CONST and Fraction(select.args[1].val).denominator == 1):
return jsonify({
'flag': False
})
dollar_location = dollar.get_location()
location = ""
if data["integral_location"] != "":
location = data["integral_location"] + ".0"
else:
location = "0"
if dollar_location != "":
location += "." + dollar_location
body = problem.body
body = body.replace_expr(dollar_location, integral.rules.UnfoldPower().eval(select))
new_integral = integral.expr.Integral(problem.var, problem.lower, problem.upper, body)
step = {
"flag": True,
"text": str(new_integral),
"latex": integral.latex.convert_expr(new_integral),
"location": location,
"reason": "Unfold power"
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'])
return jsonify(step)
except (exceptions.UnexpectedCharacters, exceptions.UnexpectedToken) as e:
return jsonify({
'flag': False
})
@app.route("/api/integral-validate-rewrite", methods=['POST'])
def integral_validate_rewrite():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
flag = None # if dollar is valid, flag = true
try:
dollar = integral.parser.parse_expr(data['dollar'])
if dollar.normalize() != problem.body.normalize():
return jsonify({
'flag': False
})
else:
# Do trig transform
select = integral.parser.parse_expr(data['select'])
dollar_location = dollar.get_location()
location = ""
if data["integral_location"] != "":
location = data["integral_location"] + ".0"
else:
location = "0"
if dollar_location != "":
location += "." + dollar_location
return jsonify({
"rewrite": str(select),
"flag": True,
"absolute_location": location, #location in the whole Integral
"relative_location": dollar_location # location in its own integral
})
except (exceptions.UnexpectedCharacters, exceptions.UnexpectedToken) as e:
return jsonify({
'flag': False
})
@app.route("/api/integral-rewrite-expr", methods=['POST'])
def integral_rewrite_expr():
data = json.loads(request.get_data().decode('utf-8'))
problem = integral.parser.parse_expr(data['problem'])
old_expr = integral.parser.parse_expr(data['old_expr'])
try:
new_expr = integral.parser.parse_expr(data['new_expr'])
location = data['relative_location']
if expand_multinomial(integral.expr.sympy_style(new_expr.normalize()).simplify()) != expand_multinomial(integral.expr.sympy_style(old_expr.normalize()).simplify()) or new_expr.findVar()[0].name != problem.var:
return jsonify({
'flag': False
})
new_problem = integral.expr.Integral(problem.var, problem.lower, problem.upper, problem.body.replace_expr(location, new_expr))
if location == "":
rel_loc = "0"
else:
rel_loc = "0." + location
if old_expr.ty == integral.expr.OP and old_expr.op == "/" or\
old_expr.ty == integral.expr.OP and old_expr.op == "*" and\
old_expr.args[1].ty == integral.expr.OP and old_expr.args[1].op == "^" and\
old_expr.args[1].args[1] == integral.expr.Const(-1):
denom = old_expr.args[1]
step = {
'flag': True,
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'reason': "Rewrite",
'_latex_reason': "Rewrite \\(%s\\) to \\(%s\\)"%(integral.latex.convert_expr(old_expr),
integral.latex.convert_expr(new_expr)),
'params': {
'rhs': data['new_expr'],
'denom': str(denom)
},
"location": data['absolute_location']
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'], _loc=rel_loc)
return jsonify(step)
else:
step = {
'flag': True,
'text': str(new_problem),
'latex': integral.latex.convert_expr(new_problem),
'reason': "Rewrite",
'_latex_reason': "Rewrite \\(%s\\) to \\(%s\\)"%(integral.latex.convert_expr(old_expr),
integral.latex.convert_expr(new_expr)),
'params': {
'rhs': data['new_expr']
},
"location": data['absolute_location']
}
step['checked'], step['proof'] = proof.translate_single_item(step, data['problem'],
wavelengths (in nanometers) that correspond to the bands (last dimension) in the data.
:param threshold: radiance values.
:param bandrange: band numbers, defined as a tuple (band_a, band_b, band_c), to screen for clouds.
:return: Binary Mask with 1/True where clouds occur, 0/False for normal pixels.
"""
if threshold is None:
threshold = SAT_THRESH_CLD
if bandrange is None:
bandrange = (15, 60, 175) # AVIRIS-NG bands used based on Thompson et al. 2014, corresponding to 450 and 1250 nm, with 670 nm added for slope analysis
# Calculate simple cloud screening based on Thompson et al. 2014
rdn1 = data[:, :, bandrange[0]]
rdn2 = data[:, :, bandrange[1]]
rdn3 = data[:, :, bandrange[2]]
is_bright = rdn1 > threshold[0]
# Calculate the slope between band_a and band_b ((rad_a-rad_b)/(wvl_a-wvl_b)) and between band_b and band_c ((rad_b-rad_c)/(wvl_b-wvl_c))
sze = rdn1.shape
wide = sze[0]
tall = sze[1]
x_rdn_a = np.zeros((wide, tall, 2), dtype = np.float32)
x_rdn_b = np.zeros((wide, tall, 2), dtype = np.float32)
x_rdn_a[:, :, 0] = rdn1
x_rdn_a[:, :, 1] = rdn2
x_rdn_b[:, :, 0] = rdn2
x_rdn_b[:, :, 1] = rdn3
x_diff_a = np.diff(x_rdn_a)
x_diff_b = np.diff(x_rdn_b)
y_diff_a = wavelengths[bandrange[0]] - wavelengths[bandrange[1]]
y_diff_b = wavelengths[bandrange[1]] - wavelengths[bandrange[2]]
y_arr_a = np.ones((wide,tall,1), dtype = np.float32) * y_diff_a * -1
y_arr_b = np.ones((wide,tall,1),dtype = np.float32) * y_diff_b * -1
# Negative slope between rad_a and rad_b (indicative of clouds)
der_a = x_diff_a / y_arr_a
slope_a = der_a < 0
slope_a_bool = slope_a[:, :, 0]
# Negative slope between rad_b and rad_c (indicative of clouds)
der_b = x_diff_b / y_arr_b
slope_b = der_b < 0
slope_b_bool = slope_b[:, :, 0]
# Combine if the radiance at 450 nm is bright (is_bright) with negative slopes between band_a and band_b (slope_a_bool) and band_b and band_c (slope_b_bool)
# If one of the slopes is positive, classify as not a cloud (i.e. bright soil has positive slope between band_a and band_b and neg slope between band_b and band_c)
is_cloud = np.logical_and(np.logical_and(is_bright, slope_a_bool), slope_b_bool)
return is_cloud
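# --- Illustrative sketch (not part of the original script) ---
# The screening above flags a pixel as cloud when it is bright at ~450 nm AND the
# radiance decreases from ~450 to ~670 nm and from ~670 to ~1250 nm. A hypothetical
# single-pixel version of the same slope test:
def _example_cloud_slope_test():
    wvl = np.array([450.0, 670.0, 1250.0])
    rad = np.array([9.0, 8.0, 5.0])  # hypothetical cloud-like radiances
    slope_ab = (rad[1] - rad[0]) / (wvl[1] - wvl[0])
    slope_bc = (rad[2] - rad[1]) / (wvl[2] - wvl[1])
    print(slope_ab < 0 and slope_bc < 0)  # True -> consistent with a cloud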
def get_radius_in_pixels(value_str, metadata):
if value_str.endswith('px'):
return np.ceil(float(value_str.split('px')[0]))
if value_str.endswith('m'):
if 'map info' not in metadata:
raise RuntimeError('Image does not have resolution specified. Try giving values in pixels.')
if 'meters' not in metadata['map info'][10].lower():
raise RuntimeError('Unknown unit for image resolution.')
meters_per_pixel_x = float(metadata['map info'][5])
meters_per_pixel_y = float(metadata['map info'][6])
if meters_per_pixel_x != meters_per_pixel_y:
print('Warning: x and y resolutions are not equal, the average resolution will be used.')
meters_per_pixel_x = (meters_per_pixel_y + meters_per_pixel_x) / 2.0
pixel_radius = float(value_str.split('m')[0]) / meters_per_pixel_x
return np.ceil(pixel_radius)
#raise RuntimeError('Unknown unit specified.')
def dilate_mask(binmask, value_str_cld, metadata):
if value_str_cld.endswith('px'):
dil_u=np.ceil(float(value_str_cld.split('px')[0])) #Use buffer of this many pixels
if value_str_cld.endswith('m'):
if 'map info' not in metadata:
raise RuntimeError('Image does not have resolution specified. Try giving values in pixels.')
if 'meters' not in metadata['map info'][10].lower():
raise RuntimeError('Unknown unit for image resolution.')
meters_per_pixel_x = float(metadata['map info'][5])
meters_per_pixel_y = float(metadata['map info'][6])
if meters_per_pixel_x != meters_per_pixel_y:
print('Warning: x and y resolutions are not equal, the average resolution will be used.')
meters_per_pixel_x = (meters_per_pixel_y + meters_per_pixel_x) / 2.0
dil_u = float(value_str_cld.split('m')[0]) / meters_per_pixel_x #Use buffer of this many pixels based on specified distance
#raise RuntimeError('Unknown unit specified.')
from skimage.morphology import binary_dilation as _bwd
bufmask = binmask.copy()
for _ in range(int(np.ceil(dil_u))):
bufmask = _bwd(bufmask)
return bufmask
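# --- Illustrative sketch (not part of the original script) ---
# Minimal example of dilate_mask(), assuming a tiny binary mask and a metadata
# dict in the ENVI 'map info' layout used above (all values here are hypothetical).
def _example_dilate_mask_usage():
    demo_mask = np.zeros((5, 5), dtype=bool)
    demo_mask[2, 2] = True
    demo_meta = {'map info': ['UTM', '1', '1', '0', '0', '5.0', '5.0', '11', 'N',
                              'WGS-84', 'units=Meters']}
    # '10m' with a 5 m/pixel resolution dilates by ceil(10 / 5) = 2 pixels
    buffered = dilate_mask(demo_mask, '10m', demo_meta)
    print(buffered.sum())  # the single seed pixel grows into a larger blob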
def main(aws=False, bucket=None):
args = vars(parse_args()) # convert namespace object to dict
print(args)
# Text file path
txt_path = args.get('txt')
# File path containing orthocorrected radiance files
in_path = args.get('inpath')
# File path to write outputs to
out_path = args.get('outpath')
# download flightline list
# moved to lambda trigger
files = []
if aws:
s3 = boto3.resource("s3")
bucket = s3.Bucket(args.get('bucket_name'))
files = txt_path.split('\n')
# Read in text file of flights
else:
with open(txt_path, "r") as fd:
files = fd.read().splitlines()
# Go through each line of the text file
for f in range(0,len(files)): #Go through each row of text file
f_txt = str(files[f])
print('Processing flight',f_txt)
# Open the specified radiance file as a memory-mapped object
rdn_filename = f_txt + '.hdr'
rdn_filepath = in_path + rdn_filename
# if file on aws, download it first
if aws:
print('download {} to {}'.format(rdn_filepath, '/tmp/'+rdn_filename))
# download image file
bucket.download_file(rdn_filepath[:-4], '/tmp/'+rdn_filename[:-4])
# download corresponding hdr file
bucket.download_file(rdn_filepath, '/tmp/'+rdn_filename)
rdn_file = spectral.io.envi.open('/tmp/'+rdn_filename)
else:
rdn_file = spectral.io.envi.open(rdn_filepath)
rdn_file_memmap = rdn_file.open_memmap(interleave='bip', writable=False)
# Close memory-mapped object for radiance
rdn_file_memmap.flush()
# previously defined helper functions here
# Get wavelengths from rdn file
wavelengths = np.array(rdn_file.bands.centers)
# If thresholding is enabled, calculate the mask and (if enabled) preprocess with dilation
sat_mask_full = None
print('Detecting pixels to mask...', end='')
if args.get('maskgrowradius') is not None:
grow_radius_px = get_radius_in_pixels(args.get('maskgrowradius'), rdn_file.metadata)
selem = morphology.disk(radius=grow_radius_px, dtype=np.bool)
idx_500 = np.argmin(np.absolute(wavelengths - 500))
sat_mask_full = np.zeros((rdn_file.nrows, rdn_file.ncols), dtype=np.uint8) # For flare mask
sat_mask_full2 = np.zeros((rdn_file.nrows, rdn_file.ncols), dtype=np.uint8) # For cloud mask
dark_mask_full = np.zeros((rdn_file.nrows, rdn_file.ncols), dtype=np.uint8) # For dark mask
#sat_mask_full3 = np.zeros((rdn_file.nrows, rdn_file.ncols), dtype=np.uint8) # For flare mask buffer
spec_mask_full = np.zeros((rdn_file.nrows, rdn_file.ncols), dtype=np.uint8) # For specular mask
block_overlap = np.ceil((args.get('mingrowarea') if args.get('mingrowarea') is not None else 0) + (grow_radius_px if args.get('maskgrowradius') is not None else 0)).astype(np.int64)
block_step = args.get('saturation_processing_block_length')
block_length = block_step + block_overlap
line_idx_start_values = np.arange(start=0, stop=rdn_file.nrows, step=block_step)
for line_block_start in line_idx_start_values:
print('.', end='', flush=True)
line_block_end = np.minimum(rdn_file.nrows, line_block_start + block_length)
block_data = rdn_file.read_subregion((line_block_start, line_block_end), (0, rdn_file.ncols))
sat_mask_block2 = get_cloud_mask(data=block_data[:, :, :], wave=wavelengths,
threshold=args.get('cldthreshold'), bandrange=args.get('cldbands'),
wavelengths=wavelengths) # For cloud mask
# This are the saturated pixels (either flare or specular reflection)
sat_mask_block = get_saturation_mask(data=block_data[:, :, :], wave=wavelengths,
threshold=args.get('saturationthreshold'), waverange=args.get('saturationwindow')) # For flare mask, pixels saturated in SWIR
# Specify which pixels are specular reflection
spec_block = get_spec_mask(data=block_data[:, :, :], inp=sat_mask_block, args=args)
spec_mask_full[line_block_start:line_block_end, ...][spec_block == 1] = 1 # For specular mask
sat_mask_full2[line_block_start:line_block_end, ...][sat_mask_block2 == 1] = 1 # For cloud mask
# Apply dark mask
dark_block = get_dark_mask(data=block_data[:, :, :], args=args)
dark_mask_full[line_block_start:line_block_end, ...][dark_block == 1] = 1
# Go through flare mask and create an additional mask based on radius in meters
if args.get('maskgrowradius') is not None:
sat_mask_grow_regions = np.zeros_like(sat_mask_block, dtype=np.uint8)
sat_mask_flare = np.zeros_like(sat_mask_block, dtype=np.uint8)
for region in measure.regionprops(measure.label(sat_mask_block.astype(np.uint8), connectivity=2)):
if args.get('mingrowarea') is None or region.area >= args.get('mingrowarea'):
# Mark these large regions in the mask to get dilated
for c in region.coords:
# Use visible radiance threshold to rule out sun glint in the visible wavelength range. If sunglint, set mask to 0 (no mask).
sat_mask_grow_regions[c[0], c[1]] = 1 if block_data[c[0], c[1], idx_500] < args.get('visible_mask_growing_threshold') else 0 # Define pixels where flare buffer will be applied (this will be a pixel, as opposed to the full flare which would likely contain multiple pixels)
# Binary mask based on radius only for only flares, not for specular reflection
sat_mask_large_grown = morphology.binary_dilation(image=sat_mask_grow_regions.astype(np.bool),
selem=selem)
sat_mask_out = sat_mask_large_grown.astype(np.uint8)
# Assign flare and specular both 2 (see sat_mask_block), buffer=1
sat_mask_out[sat_mask_block] = np.asarray(2, dtype=np.uint8)
# Generate a layer of data that will be used as a band
sat_mask_full[line_block_start:line_block_end, ...][
np.logical_and(sat_mask_large_grown == 1, sat_mask_large_grown == 1)] = 2 # Buffer location assigned to 2 using buffer mask
sat_mask_full[line_block_start:line_block_end, ...][
np.logical_and(sat_mask_out == 2, spec_block == 0)] = 1 # Flare location assigned to 1
# Dialte the cloud mask
#m_pixel = float(rdn_file.metadata['map info'][5])
#argcldbfr=args.get('cldbfr')
#cloud_buf_m=np.ceil(float(argcldbfr.split('m')[0]))
value_str_cld=args.get('cldbfr')
cloud_mask_buf = dilate_mask(sat_mask_full2, value_str_cld, rdn_file.metadata)
# Combine the three bands of data
sat_mask_all = np.zeros((rdn_file.nrows, rdn_file.ncols, 4), dtype=np.int16)
sat_mask_all[:,:,0]=cloud_mask_buf # For cloud mask
sat_mask_all[:,:,1]=spec_mask_full # For specular mask
sat_mask_all[:,:,2]=sat_mask_full # For flare mask
sat_mask_all[:,:,3]=dark_mask_full # For flare mask
sat_mask_all[rdn_file_memmap[:,:,0]==-9999] = -9999 # Apply the -9999 image border from the radiance file
#Specify output type
output_dtype = np.int16
# Create an image file for the output
flare_wvl_window = args['saturationwindow'] if args['saturationwindow'] is not None else (1945, 2485)
flare_threshold = args['saturationthreshold'] if args['saturationthreshold'] is not None else SAT_THRESH_DEFAULT
cloud_wvl = args['cldbands'] if args['cldbands'] is not None else (450)
cloud_threshold = args['cldthreshold'] if args['cldthreshold'] is not None
<reponame>PlasticMem/tencentcloud-sdk-python
# -*- coding: utf8 -*-
# Copyright (c) 2017-2021 THL A29 Limited, a Tencent company. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import warnings
from tencentcloud.common.abstract_model import AbstractModel
class BunkZone(AbstractModel):
"""点位包含铺位信息
"""
def __init__(self):
r"""
:param ZoneId: Zone ID
:type ZoneId: int
:param ZoneName: Zone name
:type ZoneName: str
:param BunkCodes: Bunk codes
:type BunkCodes: str
"""
self.ZoneId = None
self.ZoneName = None
self.BunkCodes = None
def _deserialize(self, params):
self.ZoneId = params.get("ZoneId")
self.ZoneName = params.get("ZoneName")
self.BunkCodes = params.get("BunkCodes")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CameraConfig(AbstractModel):
"""摄像头配置信息
"""
def __init__(self):
r"""
:param GroupCode: Group code
:type GroupCode: str
:param MallId: Mall ID
:type MallId: int
:param FloorId: Floor ID
:type FloorId: int
:param CameraId: Camera ID
:type CameraId: int
:param CameraIp: Camera IP
:type CameraIp: str
:param CameraMac: Camera MAC
:type CameraMac: str
:param CameraType: Camera type:
1: stream camera
2: AI camera
:type CameraType: int
:param CameraFeature: Camera capability:
1: face
2: body
:type CameraFeature: int
:param CameraState: Whether the camera is enabled:
0: offline
1: enabled
:type CameraState: int
:param ZoneId: Zone ID
:type ZoneId: int
:param ZoneType: Zone type:
1: mall entrance
3: floor entrance
5: special area
7: store
8: supplementary position
10: open store
11: category area
12: public area
:type ZoneType: int
:param Config: Configuration
:type Config: :class:`tencentcloud.ump.v20200918.models.Config`
:param Width: Width
:type Width: int
:param Height: Height
:type Height: int
"""
self.GroupCode = None
self.MallId = None
self.FloorId = None
self.CameraId = None
self.CameraIp = None
self.CameraMac = None
self.CameraType = None
self.CameraFeature = None
self.CameraState = None
self.ZoneId = None
self.ZoneType = None
self.Config = None
self.Width = None
self.Height = None
def _deserialize(self, params):
self.GroupCode = params.get("GroupCode")
self.MallId = params.get("MallId")
self.FloorId = params.get("FloorId")
self.CameraId = params.get("CameraId")
self.CameraIp = params.get("CameraIp")
self.CameraMac = params.get("CameraMac")
self.CameraType = params.get("CameraType")
self.CameraFeature = params.get("CameraFeature")
self.CameraState = params.get("CameraState")
self.ZoneId = params.get("ZoneId")
self.ZoneType = params.get("ZoneType")
if params.get("Config") is not None:
self.Config = Config()
self.Config._deserialize(params.get("Config"))
self.Width = params.get("Width")
self.Height = params.get("Height")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CameraState(AbstractModel):
"""用于场内上报当前相机的状态
"""
def __init__(self):
r"""
:param CameraId: Camera ID
:type CameraId: int
:param State: Camera state:
10: initializing
11: unknown state
12: network error
13: unauthorized
14: camera app error
15: camera stream-pulling error
16: normal
:type State: int
"""
self.CameraId = None
self.State = None
def _deserialize(self, params):
self.CameraId = params.get("CameraId")
self.State = params.get("State")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CameraZones(AbstractModel):
"""摄像头包含简单的点位信息
"""
def __init__(self):
r"""
:param CameraId: Camera ID
:type CameraId: int
:param CameraName: Camera name
:type CameraName: str
:param CameraFeature: Camera capability:
1: face
2: body
:type CameraFeature: int
:param CameraIp: Camera IP
:type CameraIp: str
:param CameraState: Camera state:
0: abnormal (no longer used)
1: normal (no longer used)
10: initializing
11: unknown state (caused by an internal service error)
12: network error
13: unauthorized
14: camera app error
15: camera stream-pulling error
16: normal
:type CameraState: int
:param Zones: Zone list
:type Zones: list of BunkZone
:param Pixel: Resolution:
130W(1280*960)
200W(1920*1080)
400W(2560*1440)
:type Pixel: str
:param RTSP: Camera RTSP address
Note: this field may return null, indicating that no valid value could be obtained.
:type RTSP: str
"""
self.CameraId = None
self.CameraName = None
self.CameraFeature = None
self.CameraIp = None
self.CameraState = None
self.Zones = None
self.Pixel = None
self.RTSP = None
def _deserialize(self, params):
self.CameraId = params.get("CameraId")
self.CameraName = params.get("CameraName")
self.CameraFeature = params.get("CameraFeature")
self.CameraIp = params.get("CameraIp")
self.CameraState = params.get("CameraState")
if params.get("Zones") is not None:
self.Zones = []
for item in params.get("Zones"):
obj = BunkZone()
obj._deserialize(item)
self.Zones.append(obj)
self.Pixel = params.get("Pixel")
self.RTSP = params.get("RTSP")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class Config(AbstractModel):
"""摄像头配置
"""
def __init__(self):
r"""
:param CameraProducer: Camera manufacturer:
H: Hikvision
D: Dahua
Y: Infinova
L: Lianzong
:type CameraProducer: str
:param RTSP: RTSP address
:type RTSP: str
:param Fps: Camera frame rate
:type Fps: int
:param DecodeFps: Decoding frame rate
:type DecodeFps: int
:param PassengerFlow: Whether to compute passenger flow:
0: no
1: yes
:type PassengerFlow: int
:param FaceExpose: Whether face exposure is enabled:
0: disabled
1: enabled
:type FaceExpose: int
:param MallArea: Entrance line annotation
:type MallArea: list of Point
:param ShopArea: Store entrance annotation
:type ShopArea: list of Point
:param TrackAreas: Detection area annotations
:type TrackAreas: list of Polygon
:param Zones: Zone list (category areas)
:type Zones: list of ZoneArea
"""
self.CameraProducer = None
self.RTSP = None
self.Fps = None
self.DecodeFps = None
self.PassengerFlow = None
self.FaceExpose = None
self.MallArea = None
self.ShopArea = None
self.TrackAreas = None
self.Zones = None
def _deserialize(self, params):
self.CameraProducer = params.get("CameraProducer")
self.RTSP = params.get("RTSP")
self.Fps = params.get("Fps")
self.DecodeFps = params.get("DecodeFps")
self.PassengerFlow = params.get("PassengerFlow")
self.FaceExpose = params.get("FaceExpose")
if params.get("MallArea") is not None:
self.MallArea = []
for item in params.get("MallArea"):
obj = Point()
obj._deserialize(item)
self.MallArea.append(obj)
if params.get("ShopArea") is not None:
self.ShopArea = []
for item in params.get("ShopArea"):
obj = Point()
obj._deserialize(item)
self.ShopArea.append(obj)
if params.get("TrackAreas") is not None:
self.TrackAreas = []
for item in params.get("TrackAreas"):
obj = Polygon()
obj._deserialize(item)
self.TrackAreas.append(obj)
if params.get("Zones") is not None:
self.Zones = []
for item in params.get("Zones"):
obj = ZoneArea()
obj._deserialize(item)
self.Zones.append(obj)
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraAlertAlert(AbstractModel):
"""告警信息
"""
def __init__(self):
r"""
:param GroupCode: Group code
:type GroupCode: str
:param MallId: Mall ID
:type MallId: int
:param CameraId: Camera ID
:type CameraId: int
:param CaptureTime: Timestamp in milliseconds; defaults to the time the alert request arrives
:type CaptureTime: int
:param Image: Base64-encoded image
:type Image: str
:param MoveAlert: Movement alert
:type MoveAlert: :class:`tencentcloud.ump.v20200918.models.CreateCameraAlertsMoveAlert`
:param CoverAlert: Occlusion alert
:type CoverAlert: :class:`tencentcloud.ump.v20200918.models.CreateCameraAlertsCoverAlert`
"""
self.GroupCode = None
self.MallId = None
self.CameraId = None
self.CaptureTime = None
self.Image = None
self.MoveAlert = None
self.CoverAlert = None
def _deserialize(self, params):
self.GroupCode = params.get("GroupCode")
self.MallId = params.get("MallId")
self.CameraId = params.get("CameraId")
self.CaptureTime = params.get("CaptureTime")
self.Image = params.get("Image")
if params.get("MoveAlert") is not None:
self.MoveAlert = CreateCameraAlertsMoveAlert()
self.MoveAlert._deserialize(params.get("MoveAlert"))
if params.get("CoverAlert") is not None:
self.CoverAlert = CreateCameraAlertsCoverAlert()
self.CoverAlert._deserialize(params.get("CoverAlert"))
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraAlertsCoverAlert(AbstractModel):
"""遮挡告警
"""
def __init__(self):
r"""
:param Cover: Whether the camera is occluded
:type Cover: bool
:param CoverConfidence: Confidence that the camera is occluded
:type CoverConfidence: float
"""
self.Cover = None
self.CoverConfidence = None
def _deserialize(self, params):
self.Cover = params.get("Cover")
self.CoverConfidence = params.get("CoverConfidence")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraAlertsMoveAlert(AbstractModel):
"""移动告警
"""
def __init__(self):
r"""
:param Move: Whether the camera has moved
:type Move: bool
:param MoveConfidence: Confidence that the camera has moved
:type MoveConfidence: float
"""
self.Move = None
self.MoveConfidence = None
def _deserialize(self, params):
self.Move = params.get("Move")
self.MoveConfidence = params.get("MoveConfidence")
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraAlertsRequest(AbstractModel):
"""CreateCameraAlerts请求参数结构体
"""
def __init__(self):
r"""
:param Alerts: List of alert information
:type Alerts: list of CreateCameraAlertAlert
"""
self.Alerts = None
def _deserialize(self, params):
if params.get("Alerts") is not None:
self.Alerts = []
for item in params.get("Alerts"):
obj = CreateCameraAlertAlert()
obj._deserialize(item)
self.Alerts.append(obj)
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraAlertsResponse(AbstractModel):
"""CreateCameraAlerts返回参数结构体
"""
def __init__(self):
r"""
:param RequestId: Unique request ID, returned with every response. Provide this RequestId when reporting an issue with the request.
:type RequestId: str
"""
self.RequestId = None
def _deserialize(self, params):
self.RequestId = params.get("RequestId")
class CreateCameraStateRequest(AbstractModel):
"""CreateCameraState请求参数结构体
"""
def __init__(self):
r"""
:param GroupCode: Group code
:type GroupCode: str
:param MallId: Mall ID
:type MallId: int
:param CameraStates: State values of all cameras in the mall
:type CameraStates: list of CameraState
"""
self.GroupCode = None
self.MallId = None
self.CameraStates = None
def _deserialize(self, params):
self.GroupCode = params.get("GroupCode")
self.MallId = params.get("MallId")
if params.get("CameraStates") is not None:
self.CameraStates = []
for item in params.get("CameraStates"):
obj = CameraState()
obj._deserialize(item)
self.CameraStates.append(obj)
memeber_set = set(params.keys())
for name, value in vars(self).items():
if name in memeber_set:
memeber_set.remove(name)
if len(memeber_set) > 0:
warnings.warn("%s fileds are useless." % ",".join(memeber_set))
class CreateCameraStateResponse(AbstractModel):
"""CreateCameraState返回参数结构体
"""
def __init__(self):
r"""
:param RequestId: Unique request ID, returned with every response. Provide this RequestId when reporting an issue with the request.
:type RequestId: str
"""
self.RequestId = None
<filename>CHI2017_retrain.py
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""------------------- Modified for CHI2017 Experiments ------------------------
Simple transfer learning with an Inception v3 architecture model.
This example shows how to take a Inception v3 architecture model trained on
ImageNet images, and train a new top layer that can recognize other classes of
images.
The top layer receives as input a 2048-dimensional vector for each image. We
train a softmax layer on top of this representation. Assuming the softmax layer
contains N labels, this corresponds to learning N + 2048*N model parameters
corresponding to the learned biases and weights.
Here's an example, which assumes you have a folder containing class-named
subfolders, each full of images for each label. The example folder flower_photos
should have a structure like this:
~/flower_photos/daisy/photo1.jpg
~/flower_photos/daisy/photo2.jpg
...
~/flower_photos/rose/anotherphoto77.jpg
...
~/flower_photos/sunflower/somepicture.jpg
The subfolder names are important, since they define what label is applied to
each image, but the filenames themselves don't matter. Once your images are
prepared, you can run the training with a command like this:
bazel build third_party/tensorflow/examples/image_retraining:retrain && \
bazel-bin/third_party/tensorflow/examples/image_retraining/retrain \
--image_dir ~/flower_photos
You can replace the image_dir argument with any folder containing subfolders of
images. The label for each image is taken from the name of the subfolder it's
in.
This produces a new model file that can be loaded and run by any TensorFlow
program, for example the label_image sample code.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from datetime import datetime
from collections import OrderedDict, defaultdict
import glob
import hashlib
import os.path
import random
import re
import sys
import csv
import json
import tarfile
import numpy as np
from six.moves import urllib
# import tensorflow as tf
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
# from tensorflow.python.client import graph_util
from tensorflow.python.framework import tensor_shape
from tensorflow.python.platform import gfile
# calculate recall and precision
from sklearn.metrics import precision_score, recall_score, f1_score
# file lock on writing csv files
import fcntl
cwd = os.getcwd()
parent_dir = os.path.dirname(cwd)
sys.path.insert(0, parent_dir)
FLAGS = tf.app.flags.FLAGS
# Input and output file flags.
tf.app.flags.DEFINE_string('image_dir', '/home/jhong12/ASSETS2019/p01/original',
"""Path to folders of labeled images.""")
tf.app.flags.DEFINE_string('output_graph', '/home/jhong12/GORTestModels/tmp/model.pb',
"""Where to save the trained graph.""")
tf.app.flags.DEFINE_string('output_labels', '/home/jhong12/GORTestModels/tmp/labels.txt',
"""Where to save the trained graph's labels.""")
# Details of the training configuration.
tf.app.flags.DEFINE_integer('how_many_training_steps', 1000,
"""How many training steps to run before ending.""")
tf.app.flags.DEFINE_float('learning_rate', 0.01,
"""How large a learning rate to use when training.""")
tf.app.flags.DEFINE_integer(
'testing_percentage', 10,
"""What percentage of images to use as a test set.""")
tf.app.flags.DEFINE_integer(
'validation_percentage', 10,
"""What percentage of images to use as a validation set.""")
tf.app.flags.DEFINE_integer('eval_step_interval', 50,
"""How often to evaluate the training results.""")
tf.app.flags.DEFINE_integer('train_batch_size', 100,
"""How many images to train on at a time.""")
tf.app.flags.DEFINE_integer('test_batch_size', 500,
"""How many images to test on at a time. This"""
""" test set is only used infrequently to verify"""
""" the overall accuracy of the model.""")
tf.app.flags.DEFINE_integer(
'validation_batch_size', 100,
"""How many images to use in an evaluation batch. This validation set is"""
""" used much more often than the test set, and is an early indicator of"""
""" how accurate the model is during training.""")
# File-system cache locations.
tf.app.flags.DEFINE_string('model_dir', '/home/jhong12/TOR-app-files/base_model',
"""Path to classify_image_graph_def.pb, """
"""imagenet_synset_to_human_label_map.txt, and """
"""imagenet_2012_challenge_label_map_proto.pbtxt.""")
tf.app.flags.DEFINE_string('bottleneck_dir', '/home/jhong12/GORTestModels/tmp/bottleneck',
"""Path to cache bottleneck layer values as files.""")
tf.app.flags.DEFINE_string('final_tensor_name', 'final_result',
"""The name of the output classification layer in"""
""" the retrained graph.""")
# Controls the distortions used during training.
tf.app.flags.DEFINE_boolean(
'flip_left_right', False,
"""Whether to randomly flip half of the training images horizontally.""")
tf.app.flags.DEFINE_integer(
'random_crop', 0,
"""A percentage determining how much of a margin to randomly crop off the"""
""" training images.""")
tf.app.flags.DEFINE_integer(
'random_scale', 0,
"""A percentage determining how much to randomly scale up the size of the"""
""" training images by.""")
tf.app.flags.DEFINE_integer(
'random_brightness', 0,
"""A percentage determining how much to randomly multiply the training"""
""" image input pixels up or down by.""")
# ASSETS2019 Experiment parameters
tf.app.flags.DEFINE_integer(
'k_shot', 20,
"""Number of examples to be used for training.""")
tf.app.flags.DEFINE_integer(
'attempt_number', 1,
"""A number denoting the attempt and used to get different random k.""")
tf.app.flags.DEFINE_string(
'output_file', '/home/jhong12/GORTestModels/tmp/experiment_results.csv',
"""Where to write the results of the experiment.""")
tf.app.flags.DEFINE_integer(
'gpu_to_use', 0,
"""GPU number to use""")
# These are all parameters that are tied to the particular model architecture
# we're using for Inception v3. These include things like tensor names and their
# sizes. If you want to adapt this script to work with another model, you will
# need to update these to reflect the values in the network you're using.
# pylint: disable=line-too-long
DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
# pylint: enable=line-too-long
BOTTLENECK_TENSOR_NAME = 'pool_3/_reshape:0'
BOTTLENECK_TENSOR_SIZE = 2048
MODEL_INPUT_WIDTH = 299
MODEL_INPUT_HEIGHT = 299
MODEL_INPUT_DEPTH = 3
JPEG_DATA_TENSOR_NAME = 'DecodeJpeg/contents:0'
RESIZED_INPUT_TENSOR_NAME = 'ResizeBilinear:0'
RANDOM_SEED_GRAPH = 37
RANDOM_SEED = 12
OUTPUT_HEADER = ["pid",
"input",
"start_time",
"start_training",
"end_time",
"image_dir",
"k_shot",
"attempt_number",
"random_crop",
"random_scale",
"random_brightness",
"flip_left_right",
"how_many_training_steps",
"learning_rate",
"train_batch_size",
"test_batch_size",
"validation_batch_size",
"train_accuracy",
"cross_entropy",
"validation_accuracy",
"test1_accuracy",
"test1_f1_per_object",
"test1_recall_per_object",
"test1_precision_per_object",
"test2_accuracy",
"test2_f1_per_object",
"test2_recall_per_object",
"test2_precision_per_object",
"labels"]
def create_image_lists(image_dir, k, attempt):
"""Builds a list of training images from the file system.
Analyzes the sub folders in the image directory, splits them into stable
training, testing, and validation sets, and returns a data structure
describing the lists of images for each label and their paths.
Args:
image_dir: String path to a folder containing subfolders of images.
k: Number of examples to be used for training.
attempt: Attempt number, used so that a different random subset of images is chosen in each attempt.
Returns:
A dictionary containing an entry for each label subfolder, with images split
into training, testing, and validation sets within each label.
"""
if not gfile.Exists(image_dir):
print("Image directory '" + image_dir + "' not found.")
return None
result = defaultdict(dict)
sub_dirs = [x[0] for x in os.walk(image_dir)]
# The root directory comes first, so skip it.
is_root_dir = True
for sub_dir in sub_dirs:
if is_root_dir:
is_root_dir = False
continue
extensions = ['jpg', 'jpeg', 'JPG', 'JPEG']
training_file_list = []
testing_file_list = []
dir_name = os.path.basename(sub_dir)
if dir_name == image_dir:
continue
print('current dir name', dir_name)
if dir_name == 'train':
phase = dir_name
continue
if dir_name == 'test1':
phase = dir_name
continue
if dir_name == "test2":
phase = dir_name
label_name = re.sub(r'[^a-z0-9]+', ' ', dir_name.lower())
if phase == "train":
print("Looking for images in '" + 'train/'+dir_name + "'")
for extension in extensions:
file_glob = os.path.join(image_dir, 'train/'+dir_name, '*.' + extension)
training_file_list.extend(glob.glob(file_glob))
if not training_file_list:
print('No files found')
continue
if len(training_file_list) < 30:
print(len(training_file_list))
print('WARNING: Folder has less than 30 images, which may cause issues.')
training_validation_images = [os.path.basename(file_name) for file_name in training_file_list]
# shuffle the images and choose top k for training
# to make sure that the second attempt we don't choose the same images we introduce
# the attempt number.
random.seed(RANDOM_SEED + attempt)
random.shuffle(training_validation_images)
result[label_name]['dir'] = dir_name
result[label_name]['train'] = training_validation_images[:k]
result[label_name]['validation'] = training_validation_images[k:]
"""
result[label_name] = {
'dir': dir_name,
'training': training_validation_images[:k],
'validation': training_validation_images[k:],
'testing': []}
"""
elif phase == "test1":
print("Looking for images in '" + 'test1/' + dir_name + "'")
for extension in extensions:
file_glob = os.path.join(image_dir, 'test1/' + dir_name, '*.' + extension)
testing_file_list.extend(glob.glob(file_glob))
if not testing_file_list:
print('No files found')
continue
if len(testing_file_list) < 5:
print(len(testing_file_list))
print('WARNING: Folder has less than 5 images, which may cause issues.')
testing_images = [os.path.basename(file_name) for file_name in testing_file_list]
result[label_name]['test1'] = testing_images
# result[label_name] =['testing'] = testing_images
else:
print("Looking for images in '" + 'test2/' + dir_name + "'")
for extension in extensions:
file_glob = os.path.join(image_dir, 'test2/' + dir_name, '*.' + extension)
testing_file_list.extend(glob.glob(file_glob))
if not testing_file_list:
print('No files found')
continue
if len(testing_file_list) < 5:
print(len(testing_file_list))
print('WARNING: Folder has less than 5 images, which may cause issues.')
testing_images = [os.path.basename(file_name) for file_name in testing_file_list]
result[label_name]['test2'] = testing_images
# return dictionary ordered by key
return OrderedDict(sorted(result.items(), key=lambda t: t[0]))
def get_image_path(image_lists, label_name, index, image_dir, category):
""""Returns a path to an image for a label at the given index.
Args:
image_lists: Dictionary of training images for each label.
label_name: Label string we want to get an image for.
10.0, 15.0, 10.0],
[20.0, 10.0, 0.0, 9.969209968386869e36],
],
[
[50.0, 40.0, 30.0, 9.969209968386869e36],
[
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
],
[
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
],
],
],
units="degrees_east",
dtype="f8",
mask=Data(
[
[
[False, False, False, True],
[False, False, False, False],
[False, False, False, True],
],
[
[False, False, False, True],
[True, True, True, True],
[True, True, True, True],
],
],
dtype="b1",
),
)
b.set_data(d)
b.nc_set_variable("x")
c.set_bounds(b)
i = InteriorRing()
d = Data(
[[0, 1, 0], [0, -2147483647, -2147483647]],
dtype="i4",
mask=Data(
[[False, False, False], [False, True, True]], dtype="b1"
),
)
i.set_data(d)
i.nc_set_variable("interior_ring")
c.set_interior_ring(i)
f.set_construct(
c, axes=("domainaxis0",), key="auxiliarycoordinate1", copy=False
)
# auxiliary_coordinate: cf_role=timeseries_id
c = AuxiliaryCoordinate()
c.set_properties({"cf_role": "timeseries_id"})
d = Data([b"x1", b"y2"], dtype="S2")
c.set_data(d)
c.nc_set_variable("instance_id")
f.set_construct(
c, axes=("domainaxis0",), key="auxiliarycoordinate2", copy=False
)
# auxiliary_coordinate: Z
c = AuxiliaryCoordinate()
c.nc_set_variable("z")
c.set_geometry("polygon")
b = Bounds()
b.set_properties(
{"units": "m", "standard_name": "altitude", "axis": "Z"}
)
d = Data(
[
[
[1.0, 2.0, 4.0, 9.969209968386869e36],
[2.0, 3.0, 4.0, 5.0],
[5.0, 1.0, 4.0, 9.969209968386869e36],
],
[
[3.0, 2.0, 1.0, 9.969209968386869e36],
[
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
],
[
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
9.969209968386869e36,
],
],
],
units="m",
dtype="f8",
mask=Data(
[
[
[False, False, False, True],
[False, False, False, False],
[False, False, False, True],
],
[
[False, False, False, True],
[True, True, True, True],
[True, True, True, True],
],
],
dtype="b1",
),
)
b.set_data(d)
b.nc_set_variable("z")
c.set_bounds(b)
i = InteriorRing()
d = Data(
[[0, 1, 0], [0, -2147483647, -2147483647]],
dtype="i4",
mask=Data(
[[False, False, False], [False, True, True]], dtype="b1"
),
)
i.set_data(d)
i.nc_set_variable("interior_ring")
c.set_interior_ring(i)
f.set_construct(
c, axes=("domainaxis0",), key="auxiliarycoordinate3", copy=False
)
# dimension_coordinate: time
c = DimensionCoordinate()
c.set_properties(
{"standard_name": "time", "units": "days since 2000-01-01"}
)
d = Data(
[15.5, 45, 74.5, 105],
units="days since 2000-01-01",
calendar="gregorian",
dtype="f8",
)
c.set_data(d)
c.nc_set_variable("time")
b = Bounds()
d = Data(
[[0.0, 31.0], [31.0, 60.0], [60.0, 91.0], [91.0, 121.0]],
units="days since 2000-01-01",
calendar="gregorian",
dtype="f8",
)
b.set_data(d)
b.nc_set_variable("time_bounds")
c.set_bounds(b)
f.set_construct(
c, axes=("domainaxis1",), key="dimensioncoordinate0", copy=False
)
# coordinate_reference
c = CoordinateReference()
c.nc_set_variable("datum")
c.set_coordinates({"auxiliarycoordinate0", "auxiliarycoordinate1"})
c.datum.set_parameter("semi_major_axis", 6378137.0)
c.datum.set_parameter("inverse_flattening", 298.257223563)
c.datum.set_parameter("longitude_of_prime_meridian", 0.0)
c.coordinate_conversion.set_parameter(
"grid_mapping_name", "latitude_longitude"
)
f.set_construct(c)
elif n == 7:
# field: eastward_wind
f = Field()
f.set_properties(
{
"Conventions": "CF-1.8",
"_FillValue": -1073741824.0,
"standard_name": "eastward_wind",
"units": "m s-1",
}
)
d = Data(
[
[
[
[12.62, 13.23, 13.75, 14.13, 14.28],
[12.46, 12.9, 13.46, 13.71, 14.03],
[12.22, 12.57, 12.91, 13.19, 13.8],
[11.83, 12.25, 12.6, 13.09, 13.63],
]
],
[
[
[3.6, 3.3, 3.13, 3.21, 3.67],
[3.69, 3.53, 3.91, 4.5, 5.63],
[4.64, 5.03, 5.8, 6.79, 8.17],
[6.7, 7.42, 8.23, 9.32, 10.5],
]
],
[
[
[10.42, 10.81, 11.03, 10.96, 10.6],
[10.59, 10.95, 11.36, 11.27, 11.08],
[10.82, 11.19, 11.43, 11.43, 11.45],
[10.93, 11.23, 11.35, 11.58, 11.64],
]
],
],
units="m s-1",
dtype="f4",
fill_value=-1073741824.0,
)
f.set_data(d)
f.nc_set_variable("ua")
# domain_axis: ncdim%t
c = DomainAxis()
c.set_size(3)
c.nc_set_dimension("t")
f.set_construct(c, key="domainaxis0", copy=False)
# domain_axis: ncdim%z
c = DomainAxis()
c.set_size(1)
c.nc_set_dimension("z")
f.set_construct(c, key="domainaxis1", copy=False)
# domain_axis: ncdim%y
c = DomainAxis()
c.set_size(4)
c.nc_set_dimension("y")
f.set_construct(c, key="domainaxis2", copy=False)
# domain_axis: ncdim%x
c = DomainAxis()
c.set_size(5)
c.nc_set_dimension("x")
f.set_construct(c, key="domainaxis3", copy=False)
# field data axes
f.set_data_axes(
("domainaxis0", "domainaxis1", "domainaxis2", "domainaxis3")
)
# auxiliary_coordinate: latitude
c = AuxiliaryCoordinate()
c.set_properties(
{"standard_name": "latitude", "units": "degrees_north"}
)
d = Data(
[
[52.4243, 52.4338, 52.439, 52.4398, 52.4362],
[51.9845, 51.9939, 51.999, 51.9998, 51.9962],
[51.5446, 51.5539, 51.559, 51.5598, 51.5563],
[51.1048, 51.114, 51.119, 51.1198, 51.1163],
],
units="degrees_north",
dtype="f8",
)
c.set_data(d)
b = Bounds()
d = Data(
[
[
[52.6378, 52.198, 52.2097, 52.6496],
[52.6496, 52.2097, 52.217, 52.6569],
[52.6569, 52.217, 52.2199, 52.6599],
[52.6599, 52.2199, 52.2185, 52.6585],
[52.6585, 52.2185, 52.2128, 52.6527],
],
[
[52.198, 51.7582, 51.7698, 52.2097],
[52.2097, 51.7698, 51.777, 52.217],
[52.217, 51.777, 51.7799, 52.2199],
[52.2199, 51.7799, 51.7786, 52.2185],
[52.2185, 51.7786, 51.7729, 52.2128],
],
[
[51.7582, 51.3184, 51.3299, 51.7698],
[51.7698, 51.3299, 51.337, 51.777],
[51.777, 51.337, 51.3399, 51.7799],
[51.7799, 51.3399, 51.3386, 51.7786],
[51.7786, 51.3386, 51.333, 51.7729],
],
[
[51.3184, 50.8786, 50.89, 51.3299],
[51.3299, 50.89, 50.8971, 51.337],
[51.337, 50.8971, 50.8999, 51.3399],
[51.3399, 50.8999, 50.8986, 51.3386],
[51.3386, 50.8986, 50.893, 51.333],
],
],
units="degrees_north",
dtype="f8",
)
b.set_data(d)
c.set_bounds(b)
f.set_construct(
c,
axes=("domainaxis2", "domainaxis3"),
key="auxiliarycoordinate0",
copy=False,
)
# auxiliary_coordinate: longitude
c = AuxiliaryCoordinate()
c.set_properties(
{"standard_name": "longitude", "units": "degrees_east"}
)
d = Data(
[
[8.0648, 8.7862, 9.5079, 10.2296, 10.9514],
[8.0838, 8.7981, 9.5127, 10.2274, 10.942],
[8.1024, 8.8098, 9.5175, 10.2252, 10.9328],
[8.1207, 8.8213, 9.5221, 10.223, 10.9238],
],
units="degrees_east",
dtype="f8",
)
c.set_data(d)
b = Bounds()
d = Data(
[
[
[7.6928, 7.7155, 8.4332, 8.4176],
[8.4176, 8.4332, 9.1512, 9.1428],
[9.1428, 9.1512, 9.8694, 9.8681],
[9.8681, 9.8694, 10.5876, 10.5935],
[10.5935, 10.5876, 11.3057, 11.3187],
],
[
[7.7155, 7.7379, 8.4485, 8.4332],
[8.4332, 8.4485, 9.1595, 9.1512],
[9.1512, 9.1595, 9.8707, 9.8694],
[9.8694, 9.8707, 10.5818, 10.5876],
[10.5876, 10.5818, 11.2929, 11.3057],
],
[
[7.7379, 7.7598, 8.4636, 8.4485],
[8.4485, 8.4636, 9.1677, 9.1595],
[9.1595, 9.1677, 9.8719, 9.8707],
[9.8707, 9.8719, 10.5762, 10.5818],
[10.5818, 10.5762, 11.2804, 11.2929],
],
[
[7.7598, 7.7812, 8.4783, 8.4636],
[8.4636, 8.4783, 9.1757, 9.1677],
[9.1677, 9.1757, 9.8732, 9.8719],
[9.8719, 9.8732, 10.5707, 10.5762],
[10.5762, 10.5707, 11.2681, 11.2804],
],
],
units="degrees_east",
dtype="f8",
)
b.set_data(d)
c.set_bounds(b)
f.set_construct(
c,
axes=("domainaxis2", "domainaxis3"),
key="auxiliarycoordinate1",
copy=False,
)
# dimension_coordinate: time
c = DimensionCoordinate()
c.set_properties(
{
"axis": "T",
"standard_name": "time",
"units": "days since 1979-1-1",
"calendar": "gregorian",
}
)
d = Data(
[120.5, 121.5, 122.5],
units="days since 1979-1-1",
calendar="gregorian",
dtype="f8",
)
c.set_data(d)
b = Bounds()
d = Data(
[[120.0, 121.0], [121.0, 122.0], [122.0, 123.0]],
units="days since 1979-1-1",
calendar="gregorian",
dtype="f8",
)
b.set_data(d)
c.set_bounds(b)
f.set_construct(
c, axes=("domainaxis0",), key="dimensioncoordinate0", copy=False
)
# dimension_coordinate: air_pressure
c = DimensionCoordinate()
c.set_properties(
{
"positive": "down",
"axis": "Z",
"standard_name": "air_pressure",
"units": "hPa",
}
)
d = Data([850.0], units="hPa", dtype="f8")
c.set_data(d)
f.set_construct(
c, axes=("domainaxis1",), key="dimensioncoordinate1", copy=False
)
# dimension_coordinate: grid_latitude
c = DimensionCoordinate()
c.set_properties(
{"axis": "Y", "standard_name": "grid_latitude", "units": "degrees"}
)
d = Data([0.44, 0.0, -0.44, -0.88], units="degrees", dtype="f8")
c.set_data(d)
b = Bounds()
d = Data(
[[0.66, 0.22], [0.22, -0.22], [-0.22, -0.66], [-0.66, -1.1]],
units="degrees",
dtype="f8",
)
b.set_data(d)
c.set_bounds(b)
f.set_construct(
c, axes=("domainaxis2",), key="dimensioncoordinate2", copy=False
)
# dimension_coordinate: grid_longitude
c = DimensionCoordinate()
c.set_properties(
{
"axis": "X",
"standard_name": "grid_longitude",
"units": "degrees",
}
)
d = Data([-1.18, -0.74, -0.3, 0.14, 0.58], units="degrees", dtype="f8")
c.set_data(d)
b = Bounds()
d = Data(
[
[-1.4, -0.96],
[-0.96, -0.52],
[-0.52, -0.08],
[-0.08, 0.36],
[0.36, 0.8],
],
units="degrees",
dtype="f8",
)
b.set_data(d)
c.set_bounds(b)
f.set_construct(
c, axes=("domainaxis3",), key="dimensioncoordinate3", copy=False
)
# cell_method
c = CellMethod()
c.set_method("mean")
c.set_axes(("domainaxis0",))
f.set_construct(c)
# coordinate_reference
c = CoordinateReference()
c.set_coordinates(
{
"dimensioncoordinate3",
"auxiliarycoordinate1",
"auxiliarycoordinate0",
"dimensioncoordinate2",
}
)
c.coordinate_conversion.set_parameter(
"grid_mapping_name", "rotated_latitude_longitude"
)
c.coordinate_conversion.set_parameter("grid_north_pole_latitude", 38.0)
c.coordinate_conversion.set_parameter(
"grid_north_pole_longitude", 190.0
)
f.set_construct(c)
return f
def example_fields(*n, _func=example_field):
"""Return example field constructs.
.. versionadded:: (cfdm) 1.8.9.0
.. seealso:: `cfdm.example_field`, `cfdm.example_domain`
:Parameters:
n: zero or more `int`, optional
Select the example field constructs to return, any
combination of:
===== ===================================================
*n* Description
===== ===================================================
``0`` A field construct with properties as well as a
cell method construct and dimension coordinate
constructs with bounds.
``1`` A field construct with properties as well as at
least one of every type of metadata construct.
``2`` A field construct that contains a monthly time
series at each latitude-longitude location.
``3`` A field construct that contains discrete sampling
geometry (DSG) "timeSeries" features.
``4`` A field construct that contains discrete sampling
geometry (DSG) "timeSeriesProfile" features.
``5`` A field construct that contains a 12 hourly time
series at each latitude-longitude location.
``6`` A field construct that has polygon geometry
coordinate cells with interior ring variables.
``7`` A field construct that has rotated pole dimension
coordinate constructs and 2-d latitude and
longitude auxiliary coordinate constructs.
===== ===================================================
If no individual field constructs are selected then all
available field constructs will be returned.
Field constructs may be selected multiple times, and will
be output in the order that they are given.
See `cfdm.example_field` for details.
_func: function
The function that returns each individual field construct.
:Returns:
`list`
The example field constructs.
**Examples**
>>> cfdm.example_fields()
[<Field: specific_humidity(latitude(5), longitude(8)) 1>,
<Field: air_temperature(atmosphere_hybrid_height_coordinate(1), grid_latitude(10), grid_longitude(9)) K>,
<Field: air_potential_temperature(time(36), latitude(5), longitude(8)) K>,
<Field: precipitation_flux(cf_role=timeseries_id(4), ncdim%timeseries(9)) kg m-2 day-1>,
<Field: air_temperature(cf_role=timeseries_id(3), ncdim%timeseries(26), ncdim%profile_1(4)) K>,
<Field: air_potential_temperature(time(118), latitude(5), longitude(8)) K>,
<Field: precipitation_amount(cf_role=timeseries_id(2), time(4))>,
<Field: eastward_wind(time(3), air_pressure(1), grid_latitude(4), grid_longitude(5)) m s-1>]
>>> cfdm.example_fields(7, 1)
[<Field: eastward_wind(time(3), air_pressure(1), grid_latitude(4), grid_longitude(5)) m s-1>,
<Field: air_temperature(atmosphere_hybrid_height_coordinate(1), grid_latitude(10), grid_longitude(9)) K>]
System.String.Format(System.String,System.Object).
format: The formatting string.
arg0: The object to write into the format string.
arg1: The object to write into the format string.
WriteLine(self: TextWriter, buffer: Array[Char])
Writes an array of characters followed by a line terminator to the text stream.
buffer: The character array from which data is read.
WriteLine(self: TextWriter, buffer: Array[Char], index: int, count: int)
Writes a subarray of characters followed by a line terminator to the text
stream.
buffer: The character array from which data is read.
index: The index into buffer at which to begin reading.
count: The maximum number of characters to write.
WriteLine(self: TextWriter)
Writes a line terminator to the text stream.
WriteLine(self: TextWriter, value: Char)
Writes a character followed by a line terminator to the text stream.
value: The character to write to the text stream.
WriteLine(self: TextWriter, value: bool)
Writes the text representation of a Boolean followed by a line terminator to
the text stream.
value: The Boolean to write.
WriteLine(self: TextWriter, value: Int64)
Writes the text representation of an 8-byte signed integer followed by a line
terminator to the text stream.
value: The 8-byte signed integer to write.
WriteLine(self: TextWriter, value: UInt64)
Writes the text representation of an 8-byte unsigned integer followed by a line
terminator to the text stream.
value: The 8-byte unsigned integer to write.
WriteLine(self: TextWriter, value: int)
Writes the text representation of a 4-byte signed integer followed by a line
terminator to the text stream.
value: The 4-byte signed integer to write.
WriteLine(self: TextWriter, value: UInt32)
Writes the text representation of a 4-byte unsigned integer followed by a line
terminator to the text stream.
value: The 4-byte unsigned integer to write.
"""
pass
def WriteLineAsync(self, *__args):
"""
WriteLineAsync(self: TextWriter, buffer: Array[Char], index: int, count: int) -> Task
WriteLineAsync(self: TextWriter) -> Task
WriteLineAsync(self: TextWriter, buffer: Array[Char]) -> Task
WriteLineAsync(self: TextWriter, value: Char) -> Task
WriteLineAsync(self: TextWriter, value: str) -> Task
"""
pass
def __enter__(self, *args): #cannot find CLR method
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self, *args): #cannot find CLR method
""" __exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object) """
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, *args): #cannot find CLR constructor
"""
__new__(cls: type)
__new__(cls: type, formatProvider: IFormatProvider)
"""
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
Encoding = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""When overridden in a derived class, returns the System.Text.Encoding in which the output is written.
Get: Encoding(self: TextWriter) -> Encoding
"""
FormatProvider = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets an object that controls formatting.
Get: FormatProvider(self: TextWriter) -> IFormatProvider
"""
NewLine = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets the line terminator string used by the current TextWriter.
Get: NewLine(self: TextWriter) -> str
Set: NewLine(self: TextWriter) = value
"""
CoreNewLine = None
Null = None
class StreamWriter(TextWriter, IDisposable):
"""
Implements a System.IO.TextWriter for writing characters to a stream in a particular encoding.
StreamWriter(stream: Stream)
StreamWriter(stream: Stream, encoding: Encoding)
StreamWriter(stream: Stream, encoding: Encoding, bufferSize: int, leaveOpen: bool)
StreamWriter(path: str, append: bool, encoding: Encoding)
StreamWriter(stream: Stream, encoding: Encoding, bufferSize: int)
StreamWriter(path: str)
StreamWriter(path: str, append: bool)
StreamWriter(path: str, append: bool, encoding: Encoding, bufferSize: int)
"""
def Close(self):
"""
Close(self: StreamWriter)
Closes the current StreamWriter object and the underlying stream.
"""
pass
def Dispose(self):
"""
Dispose(self: StreamWriter, disposing: bool)
Releases the unmanaged resources used by the System.IO.StreamWriter and
optionally releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to release only
unmanaged resources.
"""
pass
def Flush(self):
"""
Flush(self: StreamWriter)
Clears all buffers for the current writer and causes any buffered data to be
written to the underlying stream.
"""
pass
def FlushAsync(self):
""" FlushAsync(self: StreamWriter) -> Task """
pass
def MemberwiseClone(self, *args): #cannot find CLR method
"""
MemberwiseClone(self: MarshalByRefObject, cloneIdentity: bool) -> MarshalByRefObject
Creates a shallow copy of the current System.MarshalByRefObject object.
cloneIdentity: false to delete the current System.MarshalByRefObject object's identity, which
will cause the object to be assigned a new identity when it is marshaled across
a remoting boundary. A value of false is usually appropriate. true to copy the
current System.MarshalByRefObject object's identity to its clone, which will
cause remoting client calls to be routed to the remote server object.
Returns: A shallow copy of the current System.MarshalByRefObject object.
MemberwiseClone(self: object) -> object
Creates a shallow copy of the current System.Object.
Returns: A shallow copy of the current System.Object.
"""
pass
def Write(self, *__args):
"""
Write(self: StreamWriter, buffer: Array[Char])
Writes a character array to the stream.
buffer: A character array containing the data to write. If buffer is null, nothing is
written.
Write(self: StreamWriter, value: str)
Writes a string to the stream.
value: The string to write to the stream. If value is null, nothing is written.
Write(self: StreamWriter, buffer: Array[Char], index: int, count: int)
Writes a subarray of characters to the stream.
buffer: A character array containing the data to write.
index: The index into buffer at which to begin writing.
count: The number of characters to read from buffer.
Write(self: StreamWriter, value: Char)
Writes a character to the stream.
value: The character to write to the text stream.
"""
pass
def WriteAsync(self, *__args):
"""
WriteAsync(self: StreamWriter, buffer: Array[Char], index: int, count: int) -> Task
WriteAsync(self: StreamWriter, value: str) -> Task
WriteAsync(self: StreamWriter, value: Char) -> Task
"""
pass
def WriteLineAsync(self, *__args):
"""
WriteLineAsync(self: StreamWriter, value: str) -> Task
WriteLineAsync(self: StreamWriter, buffer: Array[Char], index: int, count: int) -> Task
WriteLineAsync(self: StreamWriter) -> Task
WriteLineAsync(self: StreamWriter, value: Char) -> Task
"""
pass
def __enter__(self, *args): #cannot find CLR method
""" __enter__(self: IDisposable) -> object """
pass
def __exit__(self, *args): #cannot find CLR method
""" __exit__(self: IDisposable, exc_type: object, exc_value: object, exc_back: object) """
pass
def __init__(self, *args): #cannot find CLR method
""" x.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signaturex.__init__(...) initializes x; see x.__class__.__doc__ for signature """
pass
@staticmethod # known case of __new__
def __new__(self, *__args):
"""
__new__(cls: type, stream: Stream)
__new__(cls: type, stream: Stream, encoding: Encoding)
__new__(cls: type, stream: Stream, encoding: Encoding, bufferSize: int)
__new__(cls: type, stream: Stream, encoding: Encoding, bufferSize: int, leaveOpen: bool)
__new__(cls: type, path: str)
__new__(cls: type, path: str, append: bool)
__new__(cls: type, path: str, append: bool, encoding: Encoding)
__new__(cls: type, path: str, append: bool, encoding: Encoding, bufferSize: int)
"""
pass
def __reduce_ex__(self, *args): #cannot find CLR method
pass
AutoFlush = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets or sets a value indicating whether the System.IO.StreamWriter will flush its buffer to the underlying stream after every call to System.IO.StreamWriter.Write(System.Char).
Get: AutoFlush(self: StreamWriter) -> bool
Set: AutoFlush(self: StreamWriter) = value
"""
BaseStream = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the underlying stream that interfaces with a backing store.
Get: BaseStream(self: StreamWriter) -> Stream
"""
Encoding = property(lambda self: object(), lambda self, v: None, lambda self: None) # default
"""Gets the System.Text.Encoding in which the output is written.
Get: Encoding(self: StreamWriter) -> Encoding
"""
CoreNewLine = None
Null = None
class StringReader(TextReader, IDisposable):
"""
Implements a System.IO.TextReader that reads from a string.
StringReader(s: str)
"""
def Close(self):
"""
Close(self: StringReader)
Closes the System.IO.StringReader.
"""
pass
def Dispose(self):
"""
Dispose(self: StringReader, disposing: bool)
Releases the unmanaged resources used by the System.IO.StringReader and
optionally releases the managed resources.
disposing: true to release both managed and unmanaged resources; false to
release only unmanaged resources.
"""
pass
import os
import pickle
import time
import numpy as np
import pandas as pd
from pylearn2.datasets import DenseDesignMatrix
from pylearn2.datasets.dense_design_matrix import DefaultViewConverter
from pylearn2.format.target_format import OneHotFormatter
from scipy.io import loadmat
from scipy.signal import firwin, filtfilt
from sklearn import preprocessing
from data_extraction.base import DatasetLoader, ModifiedDenseDesignMatrix, SharedExtension
from utils.common_params import Params as params
class EpilepsiaeEEGLoader(DatasetLoader):
scalp_channel_labels = np.asarray([ u'FP1',
u'FP2',
u'F3',
u'F4',
u'C3',
u'C4',
u'P3',
u'P4',
u'O1',
u'O2',
u'F7',
u'F8',
u'T3',
u'T4',
u'T5',
u'T6',
u'FZ',
u'CZ',
u'PZ' ])
def __init__(self,
patient_id,
which_set,
leave_out_seizure_idx_valid,
leave_out_seizure_idx_test,
data_dir):
self.patient_id = patient_id
self.which_set = which_set
self.leave_out_seizure_idx_valid = leave_out_seizure_idx_valid
self.leave_out_seizure_idx_test = leave_out_seizure_idx_test
self.data_dir = data_dir
def load_data(self):
# Get the directory of the patient data
patient_dir = os.path.join(self.data_dir, self.patient_id)
# Load metadata about dataset form MAT file
metadata_fname = os.path.join(patient_dir, 'trainset.mat')
metadata_mat = loadmat(metadata_fname)
# Get number of seizures
self.n_seizures = metadata_mat.get('ictals').size
# Get detail of the segment
self.sampling_rate = metadata_mat['sampling_rate'][0][0]
self.segment_sec = metadata_mat['segment_sec'][0][0]
self.segment_samples = self.sampling_rate * self.segment_sec
self.preictal_samples = 0
self.nonictal_samples = 0
# Examples of indexing through MAT file
# mat['nonictals'][i][0]['filename'][0][0][0][j][0]
# mat['nonictals'][i][0]['idx'][0][0][0][j][0]
# mat['nonictals'][i][0]['n_segments'][0][0][0][0]
# Balanced classes
if self.which_set == 'train' or self.which_set == 'valid_train':
if self.which_set == 'train':
select_idx = np.setdiff1d(range(metadata_mat['preictals'].size),
np.asarray([self.leave_out_seizure_idx_valid,
self.leave_out_seizure_idx_test]))
else:
select_idx = np.asarray([self.leave_out_seizure_idx_valid])
X = None
y = None
for i in select_idx:
print '====== Seizure', i, '======'
# Non-ictal data
temp_nonictal_X = self.load_segment(part='nonictals',
seizure_idx=i,
metadata_mat=metadata_mat,
patient_dir=patient_dir)
# Pre-ictal
temp_preictal_X = self.load_segment(part='preictals',
seizure_idx=i,
metadata_mat=metadata_mat,
patient_dir=patient_dir)
# Concatenate preictal and nonictal data
temp_X = np.concatenate((temp_preictal_X, temp_nonictal_X), axis=0)
temp_y = np.zeros(temp_X.shape[0], dtype=int)
temp_y[range(temp_preictal_X.shape[0])] = 1
# Sanity check
# if not (temp_preictal_X.shape[0] == temp_nonictal_X.shape[0]):
# raise Exception('Unbalanced classes.')
print 'Preictal samples: {0}, Nonictal samples: {1}'.format(temp_preictal_X.shape[0],
temp_nonictal_X.shape[0])
if not np.all(np.arange(temp_preictal_X.shape[0]) == np.where(temp_y)[0]):
raise Exception('There is a mismatch between the number of preictal data and labels.')
self.preictal_samples = self.preictal_samples + temp_preictal_X.shape[0]
self.nonictal_samples = self.nonictal_samples + temp_nonictal_X.shape[0]
if not (X is None) and not (y is None):
X = np.concatenate((X, temp_X), axis=0)
y = np.append(y, temp_y)
else:
X = temp_X
y = temp_y
# Unbalanced classes
elif self.which_set == 'valid' or self.which_set == 'test':
if self.which_set == 'valid':
select_idx = self.leave_out_seizure_idx_valid
else:
select_idx = self.leave_out_seizure_idx_test
print '====== Seizure', select_idx, '======'
# Get metadata of all blocks
block_df = pd.read_table(os.path.join(patient_dir, 'block_metadata.txt'), sep='\t')
# Get block index of the selected seizure
select_sz_fname = metadata_mat['preictals'][select_idx][0]['filename'][0][0][0][0][0]
block_idx = np.where(block_df.filename == select_sz_fname)[0][0]
n_padded_block = 2
start_block_idx = block_idx - n_padded_block
end_block_idx = block_idx + n_padded_block + 1
if start_block_idx < 0:
start_block_idx = 0
if end_block_idx > block_df.shape[0]:
end_block_idx = block_df.shape[0]
select_block_idx = np.arange(start_block_idx, end_block_idx)
filenames = block_df.filename[select_block_idx].values
X = None
y = None
y_select_idx = None
ictal_labels = None
for b_idx, fname in enumerate(filenames):
# Name of the MAT files that store EEG data
data_fname = fname.replace('.data', '.mat')
# Name of the MAT file that stores indices of flat (i.e., false) segments
fname_flat = fname.replace('.data', '_flat_signal_segment_idx.mat')
# Get all good indices (i.e., remove segments of flat signals)
flat_mat = loadmat(os.path.join(patient_dir, fname_flat))
flat_idx = np.empty(0, dtype=int)
for j in range(flat_mat['flat_signal_segment_idx'].shape[0]):
flat_idx = np.append(flat_idx, np.squeeze(flat_mat['flat_signal_segment_idx'][j][0]))
flat_idx = flat_idx - 1 # Change from MATLAB to python index system
data_mat = loadmat(os.path.join(patient_dir, data_fname))
if data_mat['signals'].shape[1] != block_df.samples[select_block_idx[b_idx]]:
raise Exception('There is a mismatch between the number of samples specified in the metadata and '
'the provided signal data')
n_segments = np.ceil(data_mat['signals'].shape[1] / (self.segment_samples * 1.0))
all_idx = np.arange(n_segments, dtype=int)
good_idx = np.setdiff1d(all_idx, flat_idx)
# Get indices of scalp EEG channels
elec_names = np.asarray([ename[0][0] for ename in data_mat['elec_names']])
scalp_channels_idx = np.empty(0, dtype=int)
for ch in self.scalp_channel_labels:
scalp_channels_idx = np.append(scalp_channels_idx, np.where(elec_names == ch)[0][0])
print 'Load', self.which_set, 'data from', fname
if good_idx.size > 0:
temp_X = None
for idx in range(good_idx.size):
g_idx = good_idx[idx]
start_sample_idx = np.uint32(g_idx) * self.segment_samples
end_sample_idx = np.uint32(g_idx+1) * self.segment_samples
if end_sample_idx > data_mat['signals'].shape[1]:
# Zero-padding if the window size is not compatible
extra = end_sample_idx - data_mat['signals'].shape[1]
assert (data_mat['signals'].shape[1] + extra) % self.segment_samples == 0
if extra > 0:
data_mat['signals'] = np.concatenate((data_mat['signals'],
np.zeros((data_mat['signals'].shape[0], extra),
dtype=float)),
axis=1)
assert data_mat['signals'].shape[1] % self.segment_samples == 0
temp_sample_idx = np.arange(start_sample_idx, end_sample_idx)
if not (temp_X is None):
temp = data_mat['signals'][:, temp_sample_idx]
temp_X = np.concatenate((temp_X, np.asarray([temp[scalp_channels_idx, :]])),
axis=0)
else:
temp = data_mat['signals'][:, temp_sample_idx]
temp_X = np.asarray([temp[scalp_channels_idx, :]])
# If this record contains preictal data, get preictal labels
temp_preictal_meta_idx = -1
temp_preictal_fname_idx = -1
for preictal_meta_idx, preictal_meta in enumerate(metadata_mat['preictals']):
for preictal_fname_idx, preictal_fname in enumerate(preictal_meta[0]['filename'][0][0][0]):
if preictal_fname == fname:
temp_preictal_meta_idx = preictal_meta_idx
temp_preictal_fname_idx = preictal_fname_idx
break
if temp_preictal_meta_idx != -1 and temp_preictal_fname_idx != -1:
# Preictal indices
preictal_idx = metadata_mat['preictals'][temp_preictal_meta_idx][0]['idx'][0][0][0][temp_preictal_fname_idx][0]
preictal_idx = preictal_idx - 1 # Change from MATLAB to python index system
temp_y = np.zeros(n_segments, dtype=int)
temp_y[preictal_idx] = 1
# Sanity check
if not (preictal_idx.size == np.intersect1d(good_idx, preictal_idx).size):
raise Exception('Good indices and preictal indices do not match.')
# Remove segment of flat signals from labels
temp_y = temp_y[good_idx]
self.preictal_samples = self.preictal_samples + preictal_idx.size
self.nonictal_samples = self.nonictal_samples + (temp_y.size - preictal_idx.size)
else:
temp_y = np.zeros(temp_X.shape[0], dtype=int)
self.nonictal_samples = self.nonictal_samples + temp_y.size
# If this record contains preictal data of the leave-out-seizure index, get preictal labels
if temp_preictal_meta_idx == select_idx:
temp_y_select_idx = temp_y
else:
temp_y_select_idx = np.zeros(temp_X.shape[0], dtype=int)
# If this record contains ictal data, get ictal labels
temp_ictal_meta_idx = -1
temp_ictal_fname_idx = -1
for ictal_meta_idx, ictal_meta in enumerate(metadata_mat['ictals']):
for ictal_fname_idx, ictal_fname in enumerate(ictal_meta[0]['filename'][0][0][0]):
if ictal_fname == fname:
temp_ictal_meta_idx = ictal_meta_idx
temp_ictal_fname_idx = ictal_fname_idx
break
if temp_ictal_meta_idx != -1 and temp_ictal_fname_idx != -1:
# Ictal indices
ictal_idx = metadata_mat['ictals'][temp_ictal_meta_idx][0]['idx'][0][0][0][temp_ictal_fname_idx][0]
ictal_idx = ictal_idx - 1 # Change from MATLAB to python index system
temp_ictal_labels = np.zeros(n_segments, dtype=int)
temp_ictal_labels[ictal_idx] = 1
# Sanity check
if not (ictal_idx.size == np.intersect1d(good_idx, ictal_idx).size):
raise Exception('Good indices and ictal indices do not match.')
# Remove segment of flat signals from labels
temp_ictal_labels = temp_ictal_labels[good_idx]
else:
temp_ictal_labels = np.zeros(temp_X.shape[0], dtype=int)
# Sanity check
if not (temp_X.shape[0] == temp_y.size):
raise Exception('Numbers of feature samples and labels are not equal.')
if not (temp_X.shape[0] == temp_ictal_labels.size):
raise Exception('Numbers of feature samples and labels are not equal.')
if not (X is None) and not (y is None) and not (ictal_labels is None):
X = np.concatenate((X, temp_X), axis=0)
y = np.append(y, temp_y)
y_select_idx = np.append(y_select_idx, temp_y_select_idx)
ictal_labels = np.append(ictal_labels, temp_ictal_labels)
else:
X = temp_X
y = temp_y
y_select_idx = temp_y_select_idx
ictal_labels = temp_ictal_labels
else:
print 'There are no good segments during this seizure'
# Store preictal labels that are from the leave-out-seizure index (use for compute accuracy)
# Note: this property will exist when which_set=='valid' or which_set=='test'
# as there is no need for ictal to be imported.
self.y_select_idx = y_select_idx
# Sanity check
if np.where(y_select_idx == 1)[0].size > np.where(y == 1)[0].size:
raise Exception('There is an error in collecting preictal labels only from the leave-out-seizure index.')
elif np.where(y_select_idx == 1)[0].size == np.where(y == 1)[0].size:
print 'There is only one preictal period, and this period is from the leave-out-seizure index.'
if not np.all(np.where(y_select_idx == 1)[0] == np.where(y == 1)[0]):
raise Exception('There is a mismatch between y_select_idx and y.')
elif np.where(y_select_idx == 1)[0].size < np.where(y == 1)[0].size:
print 'There is more than one preictal period.'
if not np.all(np.intersect1d(np.where(y == 1)[0], np.where(y_select_idx == 1)[0]) == np.where(y_select_idx == 1)[0]):
raise Exception('There is a mismatch between y_select_idx and y in the preictal labels of the leave-out-seizure index.')
# Store ictal labels
# Note: this property will exist when which_set=='valid' or which_set=='test'
# as there is no need for ictal to be imported.
self.ictal_labels = ictal_labels
else:
raise Exception('Invalid dataset selection')
X = np.transpose(X, [0, 2, 1])
one_hot_formatter = OneHotFormatter(max_labels=2)
y = one_hot_formatter.format(y)
# Sanity check
if not (X.shape[0] == self.preictal_samples + self.nonictal_samples):
raise Exception('There is a mismatch in the number of training samples.')
if not (np.where(np.argmax(y, axis=1) == 1)[0].size == self.preictal_samples):
raise Exception('There is a mismatch in the number of preictal samples and its labels.')
if not (X.shape[0] == y.shape[0]):
raise Exception('There is a mismatch in the number of training samples and its labels.')
return X, y
def load_segment(self, part, seizure_idx, metadata_mat, patient_dir):
X = None
for j in range(metadata_mat[part][seizure_idx][0]['filename'][0][0][0].size):
data_fname = metadata_mat[part][seizure_idx][0]['filename'][0][0][0][j][0]
print 'Load', part, 'data from', data_fname
# Name
<reponame>cmatija/probreg
from __future__ import print_function
from __future__ import division
import abc
from collections import namedtuple
import six
import numpy as np
import open3d as o3
from . import transformation as tf
from . import gaussian_filtering as gf
from . import gauss_transform as gt
from . import se3_op as so
import warnings
from . import _kabsch as kabsch
from . import _pt2pl as pt2pl
from . import math_utils as mu
import scipy
try:
from dq3d import dualquat, quat
except ImportError:
print("No dq3d python package, filterreg deformation model not available.")
EstepResult = namedtuple('EstepResult', ['m0', 'm1', 'm2', 'nx'])
MstepResult = namedtuple('MstepResult', ['transformation', 'sigma2', 'q'])
def dualquat_from_twist(tw):
ang = np.linalg.norm(tw[:3])
if ang < np.finfo(np.float32).eps:
return dualquat(quat.identity(), tw[3:])
return dualquat(quat(ang, tw[:3] / ang), tw[3:])
@six.add_metaclass(abc.ABCMeta)
class FilterReg():
"""FilterReg
FilterReg is similar to CPD, but runs considerably faster.
Both point-to-point and point-to-plane alignment are
implemented in this algorithm.
Args:
source (numpy.ndarray, optional): Source point cloud data.
target_normals (numpy.ndarray, optional): Normals of target points.
sigma2 (Float, optional): Variance parameter. If this variable is None,
the variance is updated in Mstep.
"""
def __init__(self, source=None, target_normals=None,
sigma2=None, update_sigma2=False, **kwargs):
self._source = source
self._target_normals = target_normals
self._sigma2 = sigma2
self._update_sigma2 = update_sigma2
self._tf_type = None
self._tf_result = None
self._callbacks = []
def set_source(self, source):
self._source = source
def set_target_normals(self, target_normals):
self._target_normals = target_normals
def set_callbacks(self, callbacks):
self._callbacks = callbacks
def expectation_step(self, t_source, target, y, sigma2, update_sigma2,
objective_type='pt2pt', alpha=0.015):
"""Expectation step
"""
assert t_source.ndim == 2 and target.ndim == 2, "source and target must have 2 dimensions."
m, _ = t_source.shape
n = target.shape[0]
sigma = np.sqrt(sigma2)
fx = t_source / sigma
fy = target / sigma
zero_m1 = np.zeros((m, 1))
zeros_md = np.zeros((m, y.shape[1]))
fin = np.r_[fx, fy]
ph = gf.Permutohedral(fin)
if ph.get_lattice_size() < n * alpha:
ph = gf.Permutohedral(fin, False)
vin0 = np.r_[zero_m1, np.ones((n, 1))]
vin1 = np.r_[zeros_md, y]
m0 = ph.filter(vin0, m).flatten()[:m]
m1 = ph.filter(vin1, m)[:m]
if update_sigma2:
vin2 = np.r_[zero_m1,
np.expand_dims(np.square(y).sum(axis=1), axis=1)]
m2 = ph.filter(vin2, m).flatten()[:m]
else:
m2 = None
if objective_type == 'pt2pt':
nx = None
elif objective_type == 'pt2pl':
vin = np.r_[zeros_md, self._target_normals]
nx = ph.filter(vin, m)[:m]
else:
raise ValueError('Unknown objective_type: %s.' % objective_type)
return EstepResult(m0, m1, m2, nx)
def maximization_step(self, t_source, target, estep_res, w=0.0,
objective_type='pt2pt', **kwargs):
return self._maximization_step(t_source, target, estep_res,
self._tf_result, self._sigma2, w,
objective_type=objective_type, **kwargs)
@staticmethod
@abc.abstractmethod
def _maximization_step(t_source, target, estep_res, trans_p, sigma2, w=0.0,
objective_type='pt2pt', **kwargs):
return None
def registration(self, target, w=0.0,
objective_type='pt2pt',
maxiter=50, tol=0.001,
min_sigma2=1.0e-4,
feature_fn=lambda x: x, **kwargs):
assert not self._tf_type is None, "transformation type is None."
q = None
ftarget = feature_fn(target)
if self._sigma2 is None:
fsource = feature_fn(self._source)
self._sigma2 = max(mu.squared_kernel_sum(fsource, ftarget), min_sigma2)
for _ in range(maxiter):
t_source = self._tf_result.transform(self._source, **kwargs)
fsource = feature_fn(t_source)
estep_res = self.expectation_step(fsource, ftarget, target, self._sigma2, self._update_sigma2,
objective_type)
res = self.maximization_step(t_source, target, estep_res, w=w, objective_type=objective_type, **kwargs)
self._tf_result = res.transformation
self._sigma2 = max(res.sigma2, min_sigma2)
for c in self._callbacks:
c(self._tf_result)
if not q is None and abs(res.q - q) < tol:
break
q = res.q
return res
class RigidFilterReg(FilterReg):
def __init__(self, source=None, target_normals=None,
sigma2=None, update_sigma2=False, tf_init_params={}):
super(RigidFilterReg, self).__init__(source=source, target_normals=target_normals,
sigma2=sigma2, update_sigma2=update_sigma2)
self._tf_type = tf.RigidTransformation
self._tf_result = self._tf_type(**tf_init_params)
@staticmethod
def _maximization_step(t_source, target, estep_res, trans_p, sigma2, w=0.0,
objective_type='pt2pt', maxiter=10, tol=1.0e-4):
m, dim = t_source.shape
n = target.shape[0]
assert dim == 2 or dim == 3, "dim must be 2 or 3."
m0, m1, m2, nx = estep_res
tw = np.zeros(dim * 2)
c = w / (1.0 - w) * n / m
m0[m0==0] = np.finfo(np.float32).eps
m1m0 = np.divide(m1.T, m0).T
m0m0 = m0 / (m0 + c)
drxdx = np.sqrt(m0m0 * 1.0 / sigma2)
if objective_type == 'pt2pt':
if dim == 2:
dr, dt = kabsch.kabsch2d(t_source, m1m0, drxdx)
else:
dr, dt = kabsch.kabsch(t_source, m1m0, drxdx)
rx = np.multiply(drxdx, (t_source - m1m0).T).T.sum(axis=1)
rot, t = np.dot(dr, trans_p.rot), np.dot(trans_p.t, dr.T) + dt
q = np.dot(rx.T, rx).sum()
elif objective_type == 'pt2pl':
nxm0 = (nx.T / m0).T
tw, q = pt2pl.compute_twist_for_pt2pl(t_source, m1m0, nxm0, drxdx)
rot, t = so.pt2pl_mul(tw, trans_p.rot, trans_p.t)
else:
raise ValueError('Unknown objective_type: %s.' % objective_type)
if not m2 is None:
sigma2 = (m0 * (np.square(t_source).sum(axis=1) - 2.0 * (t_source * m1).sum(axis=1) + m2) / (m0 + c)).sum()
sigma2 /= (3.0 * m0m0.sum())
return MstepResult(tf.RigidTransformation(rot, t), sigma2, q)
class TwistFilterReg(FilterReg):
def __init__(self, source=None, target_normals=None,
sigma2=None, update_sigma2=False, tf_init_params=None, bodies=None):
super(TwistFilterReg, self).__init__(source=source, target_normals=target_normals,
sigma2=sigma2, update_sigma2=update_sigma2)
if tf_init_params is None:
tf_init_params = {}
self._tf_type = tf.MultibodyChainModel
if bodies is None:
self.bodies = [np.arange(source.shape[0])]
else:
self.bodies = bodies
tf_init_params['n_bodies'] = len(self.bodies)
self._tf_result = self._tf_type(**tf_init_params)
@staticmethod
def compute_spatial_velocity_jacobian(source, bodies=None, ind_body=None):
if len(bodies) == 1:
return np.eye(6)
else:
a = source
raise NotImplementedError('computation of the spatial velocity jacobian for the articulated case is not '
'implemented yet')
@staticmethod
def _maximization_step(t_source, target, estep_res, trans_p, sigma2, w=0.0, bodies=None, objective_type='pt2pt',
max_iter=15, tol=1.0e-4):
if bodies is None:
bodies = [np.arange(t_source.shape[0])]
tol_bodies = tol * len(bodies)
m, dim = t_source.shape
n = target.shape[0]
m0, m1, m2, _ = estep_res
c = w / (1.0 - w) * n / m
m0[m0 == 0] = np.finfo(np.float32).eps
m1m0 = np.divide(m1.T, m0).T
m0m0 = m0 / (m0 + c)
JtJ_twist = np.zeros((len(bodies), 6, 6))
A = np.zeros((6, 6), dtype=np.float128)
Jt_error = np.zeros(6, dtype=np.float128)
for ind_body in range(len(bodies)):
for ind_point in bodies[ind_body]:
residual = (t_source[ind_point, ...] - m1m0[ind_point, ...])
jacobian = so.diff_x_from_twist(t_source[ind_point, ...])
JtJ_twist[ind_body] += m0m0[ind_point] * jacobian.T.dot(jacobian)
Jt_error -= jacobian.T.dot(residual)
for ind_body in range(len(bodies)):
spatial_jacobian = TwistFilterReg.compute_spatial_velocity_jacobian(t_source, bodies=bodies)
A += spatial_jacobian.T.dot(JtJ_twist[ind_body]).dot(spatial_jacobian)
sol = scipy.linalg.solve(A, Jt_error, assume_a='sym')
matrix = [so.twist_trans(sol[i*6:(i+1)*6], linear=False) for i in range(len(bodies))]
qs = np.array([np.sum(np.abs(sol[i*6:i*6+3])) for i in range(len(bodies))])
q = np.sum(qs)
return MstepResult(tf.MultibodyChainModel(n_bodies=len(bodies),
rot=[m[0] for m in matrix],
t=[m[1] for m in matrix]), sigma2, q)
class DeformableKinematicFilterReg(FilterReg):
def __init__(self, source=None, skinning_weight=None,
sigma2=None):
super(DeformableKinematicFilterReg, self).__init__(source, sigma2=sigma2)
self._tf_type = tf.DeformableKinematicModel
self._skinning_weight = skinning_weight
self._tf_result = self._tf_type([dualquat.identity() for _ in range(self._skinning_weight.n_nodes)],
self._skinning_weight)
@staticmethod
def _maximization_step(t_source, target, estep_res, trans_p, sigma2, w=0.0,
objective_type='', maxiter=50, tol=1.0e-4):
m, dim = t_source.shape
n6d = dim * 2
idx_6d = lambda i: slice(i * n6d, (i + 1) * n6d)
n = target.shape[0]
n_nodes = trans_p.weights.n_nodes
assert dim == 3, "dim must be 3."
m0, m1, m2, _ = estep_res
tw = np.zeros(n_nodes * dim * 2)
c = w / (1.0 - w) * n / m
m0[m0==0] = np.finfo(np.float32).eps
m1m0 = np.divide(m1.T, m0).T
m0m0 = m0 / (m0 + c)
drxdx = np.sqrt(m0m0 * 1.0 / sigma2)
dxdz = np.apply_along_axis(so.diff_x_from_twist, 1, t_source)
a = np.zeros((n_nodes * n6d, n_nodes * n6d))
for pair in trans_p.weights.pairs_set():
jtj_tw = np.zeros([n6d, n6d])
for idx in trans_p.weights.in_pair(pair):
drxdz = drxdx[idx] * dxdz[idx]
w = trans_p.weights[idx]['val']
jtj_tw += w[0] * w[1] * np.dot(drxdz.T, drxdz)
a[idx_6d(pair[0]), idx_6d(pair[1])] += jtj_tw
a[idx_6d(pair[1]), idx_6d(pair[0])] += jtj_tw
for _ in range(maxiter):
x = np.zeros_like(t_source)
for pair in trans_p.weights.pairs_set():
for idx in trans_p.weights.in_pair(pair):
w = trans_p.weights[idx]['val']
q0 = dualquat_from_twist(tw[idx_6d(pair[0])])
q1 = dualquat_from_twist(tw[idx_6d(pair[1])])
x[idx] = (w[0] * q0 + w[1] * q1).transform_point(t_source[idx])
rx = np.multiply(drxdx, (x - m1m0).T).T
b = np.zeros(n_nodes * n6d)
for pair in trans_p.weights.pairs_set():
j_tw = np.zeros(n6d)
for idx in trans_p.weights.in_pair(pair):
drxdz = drxdx[idx] * dxdz[idx]
w = trans_p.weights[idx]['val']
j_tw += w[0] * np.dot(drxdz.T, rx[idx])
b[idx_6d(pair[0])] += j_tw
dtw = np.linalg.lstsq(a, b, rcond=None)[0]
tw -= dtw
if np.linalg.norm(dtw) < tol:
break
dualquats = [dualquat_from_twist(tw[idx_6d(i)]) * dq for i, dq in enumerate(trans_p.dualquats)]
        if m2 is not None:
sigma2 = (m0 * (np.square(t_source).sum(axis=1) - 2.0 * (t_source * m1).sum(axis=1) + m2) / (m0 + c)).sum()
sigma2 /= (3.0 * m0m0.sum())
q = np.dot(rx.T, rx).sum()
return MstepResult(tf.DeformableKinematicModel(dualquats, trans_p.weights), sigma2, q)
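# Illustrative usage sketch (not part of the original module): the filter-based
# registration above is most easily driven through the registration_filterreg()
# convenience wrapper defined below. The point clouds here are made-up examples.
#
#     import numpy as np
#     source = np.random.rand(200, 3)                    # moving point cloud
#     target = source + np.array([0.05, 0.0, 0.0])       # shifted copy as fixed cloud
#     res = registration_filterreg(source, target, objective_type='pt2pt')
#     aligned = res.transformation.transform(source)     # apply the estimated transform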
def registration_filterreg(source, target, target_normals=None,
sigma2=None, update_sigma2=False, w=0, objective_type='pt2pt', maxiter=50,
tol=0.001, min_sigma2=1.0e-4, feature_fn=lambda x: x,
callbacks=[], **kargs):
"""FilterReg registration
Args:
source (numpy.ndarray): Source point cloud data.
target (numpy.ndarray): Target point cloud data.
target_normals (numpy.ndarray, optional): Normal vectors of target point cloud.
sigma2 (float, optional): Variance of GMM. If `sigma2` is `None`, `sigma2` is automatically updated.
w (float, optional): Weight of the uniform distribution, 0 < `w` < 1.
        objective_type (str, optional): The type of objective function, either 'pt2pt' or 'pt2pl'.
        maxiter (int, optional): Maximum number of iterations of the EM algorithm.
tol (float, optional): Tolerance for termination.
min_sigma2 (float, optional): Minimum variance of GMM.
feature_fn (function, optional): | |
freq points (float)
S: array of (2,2) matrices same size as flist
return:
S-matrix for point f
'''
S11=S[:,0,0];S12=S[:,0,1];S21=S[:,1,0];S22=S[:,1,1]
if len(flist) != len(S11):
raise ValueError("Diffrent Length of f and S")
Si = zeros((2,2),dtype=complex)
Si[0,0] = interp(f,flist,S11)
Si[0,1] = interp(f,flist,S12)
Si[1,0] = interp(f,flist,S21)
Si[1,1] = interp(f,flist,S22)
return Si
############################################################
def StoABCD(S,Z0=50):
if shape(S) == (2,2):
S = array([S])
S11=S[:,0,0]
S12=S[:,0,1]
S21=S[:,1,0]
S22=S[:,1,1]
A = ((1+S11)*(1-S22)+S12*S21) / 2/S21
B = Z0* ((1+S11)*(1+S22)-S12*S21) / 2/S21
C = 1/Z0*((1-S11)*(1-S22)-S12*S21) / 2/S21
D = ((1-S11)*(1+S22)+S12*S21) / 2/S21
ABCD=array([[A,B],[C,D]])
ABCD=transpose(ABCD,(2,0,1))
return squeeze(ABCD)
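# Usage sketch (illustrative only, assuming numpy names are available through this
# module's imports): convert a single, made-up 2x2 S-matrix to ABCD parameters.
#     S_example = array([[0.1+0.2j, 0.7], [0.7, 0.1-0.2j]])
#     ABCD_example = StoABCD(S_example, Z0=50)   # 2x2 ABCD (chain) matrix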
#############################################################
def ZtoS(Z,Z0=50):
Z11=Z[0,0]
Z12=Z[0,1]
Z21=Z[1,0]
Z22=Z[1,1]
S11 = ( (Z11-Z0)*(Z22+Z0)-Z12*Z21 ) / ((Z11+Z0)*(Z22+Z0)-Z12*Z21)
S12 = ( 2*Z12*Z0 ) / ((Z11+Z0)*(Z22+Z0)-Z12*Z21)
S21 = ( 2*Z21*Z0 ) / ((Z11+Z0)*(Z22+Z0)-Z12*Z21)
S22 = ( (Z11+Z0)*(Z22-Z0)-Z12*Z21 ) / ((Z11+Z0)*(Z22+Z0)-Z12*Z21)
S=matrix([[S11,S12],[S21,S22]])
return S
def ABCDtoTransferFct(ABCD,Zs=0,Zl=1e99):
'''
To get the transfer function from the ABCD parameters,
we can use the equation shown below. In this equation,
we consider the impedance from the source side of the network (S)
and the load side (L). If the network is terminated to the characteristic
impedance on each side, then the two values are equal to the characteristic impedance Z.
see: https://resources.system-analysis.cadence.com/blog/2020-how-to-calculate-a-transfer-function-from-s-parameters
Parameters
----------
    ABCD : array 2x2xsize
        ABCD (chain) matrices of the two-port, one 2x2 matrix per frequency point
Zs : complex
source impedance
Zl : complex
load impedance
Returns
-------
complex
voltage transfer function H(f) = Vl/Vs
'''
if shape(ABCD) == (2,2):
ABCD = array([ABCD])
A=ABCD[:,0,0]
B=ABCD[:,0,1]
C=ABCD[:,1,0]
D=ABCD[:,1,1]
H = Zl / (A*Zl + B + C*Zs*Zl + D*Zs)
return squeeze(H)
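# Usage sketch (illustrative): chain StoABCD() and ABCDtoTransferFct() to obtain the
# voltage transfer function of a measured two-port. Variable names are hypothetical;
# flist/Slist would typically come from load_touchstone() defined further below.
#     ABCD = StoABCD(Slist, Z0=50)               # one 2x2 ABCD matrix per frequency
#     H = ABCDtoTransferFct(ABCD, Zs=50, Zl=50)  # H(f) = Vl/Vs for 50 ohm terminations
#     H_dB = 20*log10(abs(H))                    # magnitude in dB (log10 from numpy)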
################################################################################
################################################################################
## AMP DESIGN
################################################################################
################################################################################
def latexMatrix(a,rnd=None):
"""Returns a LaTeX bmatrix
:a: numpy array
    :rnd: rounding digits, int
:returns: LaTeX bmatrix as a string
"""
set_printoptions(suppress=True)
if rnd is not None:
a = around(a,rnd)
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').replace('j','j,').replace('+0.j','').replace('. ','').replace(' 0 ','').replace(' ','').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.rstrip(',').split(',')) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
rv = '\n'.join(rv)
rv = rv.replace(' ','')
return rv
### Return a complex type from a number given in magnitude and phase (degrees)
def magphase(A,phi):
'''Returns a complex number from magnitude and phase (in degrees)
'''
return A*exp(1j*phi*pi/180.0)
### Return a string formatted from a complex in the form Magn /__ Phase deg
################################################################################
def magphase_str(c):
''' Returns a nicely formatted string to print complex numbers in ampl. and phase
'''
return u'{0:6.3f}\u2220{1:5.1f}\u00B0'.format(abs(c),angle(c)*180/pi)
################################################################################
def magphase_tuple(c):
''' Returns a tuple with (magn,phase) to print complex numbers in ampl. and phase
'''
return ( abs(c) , angle(c)*180/pi )
def polar(mag,ang,isDegrees=True):
'''
takes a complex number in polar and returns the complex number
'''
fac = 1
if isDegrees:
fac = pi/180
return mag*exp(1j*ang*fac)
################################################################################
def splitmatrixarray(S):
'''
    Splits a list of matrices into lists of the individual elements.
    Currently two-by-two matrices only.
'''
S11 = S[:,0,0]
S12 = S[:,0,1]
S21 = S[:,1,0]
S22 = S[:,1,1]
return S11,S12,S21,S22
### Save touchstone formatted S-parameter files ###############################
def save_touchstone(filename, flist, slist, annotations = "Touchstone file created by python mwave module "):
'''
saves a touchstone file of two lists
Parameters
----------
filename : string
name of touchstone file shall end with .s2p
flist : array or list
list of frequency values
slist : array of 2x2 arrays
list or array of 2x2 matrices with S-parameters
annotations : string
annotations in the header of the file e.g. time
Returns
-------
nothing
Examples
--------
>>> filename = "touchstone.s2p"
>>> flist = array([1e9,2e9,2.2e9])
>>> S1 = array([[0.2,0.3],[0.4,0.5-1j]])
>>> S2 = array([[0.5,0.333],[0.34,0.35-0.44j]])
>>> S3 = array([[0.11,0.234],[0.554,0.55-.55j]])
>>> slist = array([S1,S2,S3])
>>> save_touchstone(filename, flist, slist)
'''
# check for consistent data
if len(flist) != len(slist):
        raise ValueError('length of flist and slist do not match in save_touchstone!')
    if shape(slist)[1:3] != (2,2) and ndim(slist) != 1:
        raise ValueError('No 2x2 matrices in touchstone write!')
f=open(filename,'w', encoding = "ISO-8859-1")
noise=False
f.write('! \n')
f.write('! Export of Touchstone Data from mwave.py Module Author: <NAME> \n')
f.write('! \n')
f.write('!'+annotations +'\n')
f.write('!---------------------------------------------------------------------\n')
f.write('! symbol freq-unit parameter-type data-format keyword impedance-ohm\n')
f.write('# HZ S RI R 50\n')
f.write('!---------------------------------------------------------------------\n')
if ndim(slist) == 1:
# -- One Port parameter -----------
f.write('! freq reS11 imS11 \n')
s11 = slist
for i in range(len(flist)):
l = "{:10.1f} {: 3.9f} {: 3.9f}".format(flist[i],real(s11[i]), imag(s11[i]))
f.write(l+"\n")
else:
        #--- two-port parameter -----------
s11,s12,s21,s22 = splitmatrixarray(slist)
for i in range(len(flist)):
l = "{:10.1f} {: 3.9f} {: 3.9f} {: 3.9f} {: 3.9f} {: 3.9f} {: 3.9f} {: 3.9f} {: 3.9f} ".format(flist[i],real(s11[i]), imag(s11[i]), real(s12[i]), imag(s12[i]), real(s21[i]), imag(s21[i]), real(s22[i]), imag(s22[i]), )
f.write(l+"\n")
f.close()
return
### Load touchstone formatted S-parameter files ###############################
def load_touchstone(filename, annotations=False):
'''
Loads a touchstone file in two lists
:filename: Touchstone filename including path (type string)
Returns
-------
    tuple with: frequency list (type float) and S-matrix list (list of 2x2 S-parameter matrices)
Note
-----
currently works with 2x2 matrices only
'''
print("Load Touchstone file ",filename)
f=open(filename,'r', encoding = "ISO-8859-1")
noise=False
if filename[-2] == '1':
Twoport = False
elif filename[-2] == '2':
Twoport = True
elif filename[-2:] == 'ts':
Twoport = True
else:
        print('Load Touchstone: Neither extension s1p nor s2p, exit')
        raise NameError('Neither extension s1p nor s2p')
anno = []
Slist=[];flist=[]
rad=pi/180.0
for line in f:
#print(line.strip())
if line[0]=='!':
anno.append(line)
if line.find('Fmin')>0:
noise=True
#print("----- Here Noise Data start ------>")
continue
if line[0]=='#':
#print("Format is ",line)
if 'HZ' in line.upper(): factor=1e0
if 'KHZ' in line.upper(): factor=1e3
if 'MHZ' in line.upper(): factor=1e6
if 'GHZ' in line.upper(): factor=1e9
if 'MA' in line.upper():
sform ='MA'
elif 'RI' in line.upper():
sform = 'RI'
elif 'DB' in line.upper():
sform ='DB'
else:
print("Data not in MA or RI Format")
raise RuntimeError("Data not in MA or RI Format")
return
continue
if len(line) <10: continue ## empty line
if not(noise): ##### Spara Info
p=line.split()
p=[float(x) for x in p]
#print("f=",p[0],"S11=",p[1], ".....")
flist.append(float(p[0])*factor)
if sform=='MA':
S11=p[1]*exp(1j*p[2]*rad)
S=S11
if Twoport:
S21=p[3]*exp(1j*p[4]*rad)
S12=p[5]*exp(1j*p[6]*rad)
S22=p[7]*exp(1j*p[8]*rad)
S=matrix([[S11,S12],[S21,S22]])
Slist.append(S)
if sform=='RI':
S11=p[1]+p[2]*1j
S=S11
if Twoport:
S21=p[3]+p[4]*1j
S12=p[5]+p[6]*1j
S22=p[7]+p[8]*1j
S=matrix([[S11,S12],[S21,S22]])
Slist.append(S)
if sform=='DB':
S11=10**(p[1]/20)*exp(1j*p[2]*rad)
S=S11
if Twoport:
S21=10**(p[3]/20)*exp(1j*p[4]*rad)
S12=10**(p[5]/20)*exp(1j*p[6]*rad)
S22=10**(p[7]/20)*exp(1j*p[8]*rad)
S=matrix([[S11,S12],[S21,S22]])
Slist.append(S)
#print S
if (noise): ##### Noise Info
pass
flist = array(flist)
Slist = array(Slist)
if annotations:
return flist,Slist,anno
else:
return flist,Slist
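# Usage sketch (illustrative, hypothetical filename): load a two-port touchstone file
# and inspect the forward transmission.
#     flist, Slist = load_touchstone('amplifier.s2p')
#     S11, S12, S21, S22 = splitmatrixarray(Slist)
#     gain_dB = 20*log10(abs(S21))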
#
# mdif load
#
import re
rad = pi/180.0
######################################################################
def mdifbiaslist(filename):
'''
Shows the possible bias points of a mdif file
:param filename: mdif file
:return: a list of biases
'''
f=open(filename,'r')
line = f.readlines()
i=0
biaslist = []
while i< len(line):
if 'VAR Vc' in line[i]:
            if 'Ic' not in line[i+1]:
                raise ValueError('No Vc,Ic VAR defined in mdif')
valueV = re.findall("\d+\.\d+", line[i])[0]
valueI = re.findall("\d+\.\d+", line[i+1])[0]
biaslist.append((float(valueV),float(valueI)))
i += 1
i += 1
    if not biaslist: raise ValueError('No Vc,Ic VAR defined in mdif')
return biaslist
##########Load MDIF Spara #############################################################
def mdifsparlist(filename,Vc,Ic):
f=open(filename,'r')
line = f.readlines()
i=0
    valueV, valueI = None, None
while i< len(line):
if 'VAR Vc' in line[i]:
try:
valueV = float(re.findall("\d+\.\d+", line[i])[0])
except:
valueV = float(re.findall("\d+\\d+", line[i])[0])
try:
valueI = float(re.findall("\d+\.\d+", line[i+1])[0])
except:
valueI = float(re.findall("\d+\\d+", line[i+1])[0])
if valueV == Vc and valueI == Ic:
#print("Biaspoint found", valueV, valueI)
if not ('BEGIN ACDATA' in line[i+2]): raise ValueError('MDIF Wrong Format no BEGIN ACDATA found ')
i +=3
#print(line[i])
            if '#' not in line[i]: raise ValueError('MDIF Wrong Format: no # format line found')
if 'HZ' in line[i]: factor=1e0
if 'MHZ' in line[i]: factor=1e6
if 'GHZ' in line[i]: factor=1e9
if 'MA' in line[i]:
sform ='MA'
elif 'RI' in line[i]:
sform = 'RI'
else:
raise RuntimeError("MDIF Data not in MA or RI Format")
#print(sform, factor)
i += 2
##### Start of spar found reading data ###################
flist = []
Slist = []
while not 'END' in line[i]:
p=line[i].split()
p=[float(x) for x in p]
#print("f=",p[0],"S11=",p[1], ".....")
flist.append(float(p[0])*factor)
if sform=='MA':
S11=p[1]*exp(1j*p[2]*rad)
S21=p[3]*exp(1j*p[4]*rad)
S12=p[5]*exp(1j*p[6]*rad)
S22=p[7]*exp(1j*p[8]*rad)
if sform=='RI':
S11=p[1]+p[2]*1j
S21=p[3]+p[4]*1j
S12=p[5]+p[6]*1j
S22=p[7]+p[8]*1j
S=matrix([[S11,S12],[S21,S22]])
Slist.append(S)
i += 1
            return flist, Slist
        i += 1
### end of spar data read
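# Usage sketch (illustrative, hypothetical filename and bias values): list the bias
# points stored in an MDIF file and pull the S-parameters for one of them.
#     biases = mdifbiaslist('transistor.mdf')            # e.g. [(2.0, 0.01), (2.0, 0.02), ...]
#     flist, Slist = mdifsparlist('transistor.mdf', 2.0, 0.01)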
| |
"""This file is nearly the same as flask.cli except for some modifications."""
import ast
import inspect
import os
import platform
import re
import sys
import traceback
from functools import update_wrapper
import click
import flask
from flask.cli import (
locate_app,
prepare_import,
_called_with_wrong_args,
_validate_key,
)
from flask.cli import (
DispatchingApp,
ScriptInfo as FlaskScriptInfo,
FlaskGroup,
AppGroup,
CertParamType,
SeparatedPathType,
NoAppException,
)
from flask.cli import (
routes_command,
)
from flask.helpers import get_debug_flag
from flask.helpers import get_env
from flask.helpers import get_load_dotenv
from .custom_commands import create_app_command
try:
import dotenv
except ImportError:
dotenv = None
def find_best_app(module):
"""Given a module instance this tries to find the best possible
application in the module or raises an exception.
"""
from . import Djask
# Search for the most common names first.
for attr_name in ("app", "application"):
app = getattr(module, attr_name, None)
if isinstance(app, Djask):
return app
# Otherwise find the only object that is a Djask instance.
matches = [v for v in module.__dict__.values() if isinstance(v, Djask)]
if len(matches) == 1:
return matches[0]
elif len(matches) > 1:
raise NoAppException(
"Detected multiple Djask applications in module"
f" {module.__name__!r}. Use 'DJASK_APP={module.__name__}:name'"
f" to specify the correct one."
)
# Search for app factory functions.
for attr_name in ("create_app", "make_app"):
app_factory = getattr(module, attr_name, None)
if inspect.isfunction(app_factory):
try:
app = app_factory()
if isinstance(app, Djask):
return app
except TypeError as e:
if not _called_with_wrong_args(app_factory):
raise
raise NoAppException(
f"Detected factory {attr_name!r} in module {module.__name__!r},"
" but could not call it without arguments. Use"
f" \"DJASK_APP='{module.__name__}:{attr_name}(args)'\""
" to specify arguments."
) from e
raise NoAppException(
"Failed to find Djask application or factory in module"
f" {module.__name__!r}. Use 'DJASK_APP={module.__name__}:name'"
" to specify one."
)
def find_app_by_string(module, app_name):
"""Check if the given string is a variable name or a function. Call
a function to get the app instance, or return the variable directly.
"""
from . import Djask
# Parse app_name as a single expression to determine if it's a valid
# attribute name or function call.
try:
expr = ast.parse(app_name.strip(), mode="eval").body
except SyntaxError:
raise NoAppException(
f"Failed to parse {app_name!r} as an attribute name or function call."
) from None
if isinstance(expr, ast.Name):
name = expr.id
args = []
kwargs = {}
elif isinstance(expr, ast.Call):
# Ensure the function name is an attribute name only.
if not isinstance(expr.func, ast.Name):
raise NoAppException(
f"Function reference must be a simple name: {app_name!r}."
)
name = expr.func.id
# Parse the positional and keyword arguments as literals.
try:
args = [ast.literal_eval(arg) for arg in expr.args]
kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in expr.keywords}
except ValueError:
# literal_eval gives cryptic error messages, show a generic
# message with the full expression instead.
raise NoAppException(
f"Failed to parse arguments as literal values: {app_name!r}."
) from None
else:
raise NoAppException(
f"Failed to parse {app_name!r} as an attribute name or function call."
)
try:
attr = getattr(module, name)
except AttributeError as e:
raise NoAppException(
f"Failed to find attribute {name!r} in {module.__name__!r}."
) from e
# If the attribute is a function, call it with any args and kwargs
# to get the real application.
if inspect.isfunction(attr):
try:
app = attr(*args, **kwargs)
except TypeError as e:
if not _called_with_wrong_args(attr):
raise
raise NoAppException(
f"The factory {app_name!r} in module"
f" {module.__name__!r} could not be called with the"
" specified arguments."
) from e
else:
app = attr
if isinstance(app, Djask):
return app
raise NoAppException(
"A valid Djask application was not obtained from"
f" '{module.__name__}:{app_name}'."
)
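# Illustrative DJASK_APP values resolved by find_best_app()/find_app_by_string() above
# (module and factory names are hypothetical):
#   DJASK_APP=hello                     -> import hello, use its "app"/"application" or a factory
#   DJASK_APP=hello:app2                -> use the attribute "app2" of module "hello"
#   DJASK_APP="hello:create_app('dev')" -> call the factory with literal arguments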
def get_version(ctx, param, value):
if not value or ctx.resilient_parsing:
return
import werkzeug
from . import __version__
click.echo(
f"Python {platform.python_version()}\n"
f"Werkzeug {werkzeug.__version__}\n"
f"Flask {flask.__version__}\n"
f"Djask {__version__}\n",
color=ctx.color,
)
ctx.exit()
version_option = click.Option(
["--version"],
help="Show the djask version",
expose_value=False,
callback=get_version,
is_flag=True,
is_eager=True,
)
class ScriptInfo(FlaskScriptInfo):
"""Helper object to deal with Djask applications. This is usually not
necessary to interface with as it's used internally in the dispatching
to click. In future versions of Flask this object will most likely play
a bigger role. Typically it's created automatically by the
:class:`DjaskGroup` but you can also manually create it and pass it
onwards as click object.
"""
def __init__(self, *args, **kwargs):
os.environ["FLASK_APP"] = (
os.environ.get("DJASK_APP") or os.environ.get("FLASK_APP") or ""
)
super().__init__(*args, **kwargs)
def load_app(self):
"""Loads the Djask app (if not yet loaded) and returns it. Calling
this multiple times will just result in the already loaded app to
be returned.
"""
__traceback_hide__ = True # noqa: F841
if self._loaded_app is not None:
return self._loaded_app
if self.create_app is not None:
app = self.create_app()
else:
if self.app_import_path:
path, name = (
re.split(r":(?![\\/])", self.app_import_path, 1) + [None]
)[:2]
import_name = prepare_import(path)
app = locate_app(import_name, name)
else:
for path in ("wsgi.py", "app.py"):
import_name = prepare_import(path)
app = locate_app(import_name, None, raise_if_not_found=False)
if app:
break
if not app:
raise NoAppException(
"Could not locate a Djask application. You did not provide "
'the "DJASK_APP" environment variable, and a "wsgi.py" or '
'"app.py" module was not found in the current directory.'
)
if self.set_debug_flag:
# Update the app's debug flag through the descriptor so that
# other values repopulate as well.
app.debug = get_debug_flag()
self._loaded_app = app
return app
def list_commands(self, ctx):
self._load_plugin_commands()
# Start with the built-in and plugin commands.
rv = set(super().list_commands(ctx))
info = ctx.ensure_object(ScriptInfo)
# Add commands provided by the app, showing an error and
# continuing if the app couldn't be loaded.
try:
rv.update(info.load_app().cli.list_commands(ctx))
except NoAppException as e:
# When an app couldn't be loaded, show the error message
# without the traceback.
click.secho(f"Error: {e.format_message()}\n", err=True, fg="red")
except Exception:
# When any other errors occurred during loading, show the
# full traceback.
click.secho(f"{traceback.format_exc()}\n", err=True, fg="red")
return sorted(rv)
def main(self, *args, **kwargs):
# Set a global flag that indicates that we were invoked from the
# command line interface. This is detected by Flask.run to make the
# call into a no-op. This is necessary to avoid ugly errors when the
# script that is loaded here also attempts to start a server.
os.environ["FLASK_RUN_FROM_CLI"] = "true"
os.environ["DJASK_RUN_FROM_CLI"] = "true"
if get_load_dotenv(self.load_dotenv):
load_dotenv()
obj = kwargs.get("obj")
if obj is None:
obj = ScriptInfo(
create_app=self.create_app, set_debug_flag=self.set_debug_flag
)
kwargs["obj"] = obj
kwargs.setdefault("auto_envvar_prefix", "DJASK")
return super().main(*args, **kwargs)
pass_script_info = click.make_pass_decorator(ScriptInfo, ensure=True)
def with_appcontext(f):
"""Wraps a callback so that it's guaranteed to be executed with the
script's application context. If callbacks are registered directly
to the ``app.cli`` object then they are wrapped with this function
by default unless it's disabled.
"""
@click.pass_context
def decorator(__ctx, *args, **kwargs):
with __ctx.ensure_object(ScriptInfo).load_app().app_context():
return __ctx.invoke(f, *args, **kwargs)
return update_wrapper(decorator, f)
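# Illustrative use of with_appcontext (command and function names are hypothetical):
#     @click.command()
#     @with_appcontext
#     def seed_db():
#         ...  # body runs inside the loaded app's application context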
class DjaskGroup(FlaskGroup):
"""Special subclass of the :class:`AppGroup` group that supports
loading more commands from the configured Djask app. Normally a
developer does not have to interface with this class but there are
some very advanced use cases for which it makes sense to create an
instance of this.
:param add_default_commands: if this is True then the default run and
shell commands will be added.
:param add_version_option: adds the ``--version`` option.
:param create_app: an optional callback that is passed the script info and
returns the loaded app.
:param load_dotenv: Load the nearest :file:`.env`, :file:`.flaskenv` and :file:`.djaskenv`
files to set environment variables. Will also change the working
directory to the directory containing the first file found.
:param set_debug_flag: Set the app's debug flag based on the active
environment
"""
def __init__(
self,
add_default_commands=True,
create_app=None,
add_version_option=True,
load_dotenv=True,
set_debug_flag=True,
**extra,
):
params = list(extra.pop("params", None) or ())
if add_version_option:
params.append(version_option)
AppGroup.__init__(self, params=params, **extra)
self.create_app = create_app
self.load_dotenv = load_dotenv
self.set_debug_flag = set_debug_flag
if add_default_commands:
self.add_command(run_command)
self.add_command(shell_command)
self.add_command(routes_command)
self.add_command(create_app_command)
self._loaded_plugin_commands = False
def _load_plugin_commands(self):
if self._loaded_plugin_commands:
return
try:
import pkg_resources
except ImportError:
self._loaded_plugin_commands = True
return
for ep in pkg_resources.iter_entry_points("djask.commands"):
self.add_command(ep.load(), ep.name)
self._loaded_plugin_commands = True
def get_command(self, ctx, name):
self._load_plugin_commands()
# Look up built-in and plugin commands, which should be
# available even if the app fails to load.
rv = super().get_command(ctx, name)
if rv is not None:
return rv
info = ctx.ensure_object(ScriptInfo)
# Look up commands provided by the app, showing an error and
# continuing if the app couldn't be loaded.
try:
return info.load_app().cli.get_command(ctx, name)
except NoAppException as e:
click.secho(f"Error: {e.format_message()}\n", err=True, fg="red")
def main(self, *args, **kwargs):
# | |
<gh_stars>0
'''
Created on May 25, 2012
@author: kwalker
'''
'''
notes:
-conversion to shapefile shortens long field names, which can cause problems.
'''
import arcpy, os, math
arcpy.env.overwriteOutput = True
#
###
#inRoutesFullPath = r'C:\KW_Working\Udot\CalibrationPointScript\CaliPointTesting.gdb\Route1534p'
inRoutesFullPath = arcpy.GetParameterAsText(0)
#referencePoints = r'C:\KW_Working\Udot\CalibrationPointScript\CaliPointTesting.gdb\Route1534refPoints'
referencePoints = arcpy.GetParameterAsText(1)
lrsSchemaTemplate = arcpy.GetParameterAsText(2)
#zSurfacePath = r"Database Connections\ConnectSGID10.sde\SGID10.RASTER.DEM_10METER"
zSurfacePath = arcpy.GetParameterAsText(3)
#outDirectoryWorkspace = r"C:\KW_Working\Udot\CalibrationPointScript\Test2865p2"#TODO make more relative
outDirectoryWorkspace = arcpy.GetParameterAsText(4)
#outFc = 'LRSTest6'#final route output will be named with a suffix given in createMZ_Route function
outFc = str(arcpy.GetParameterAsText(5))
###
#
#
tempOutsList = []
newOutputGDB = "LRS_RouteScript.gdb"
outFcNoMerge = outFc + "CalPnts"
outFcMerge = outFc + "CalPntsMerge"
newRtIdFieldName = "ScrptRtID" #Will be created and calculated to and ID number of
#the form "Route Number_Part Number" eg. "0039P_1"
finalOutPath = os.path.join(outDirectoryWorkspace, newOutputGDB, outFc)
outFileGDB = arcpy.CreateFileGDB_management(outDirectoryWorkspace, newOutputGDB, "CURRENT")
arcpy.env.workspace = os.path.join(outDirectoryWorkspace, newOutputGDB)
arcpy.SetParameterAsText(6, outFcNoMerge)#Output Parameter for script tool only
arcpy.SetParameterAsText(7, outFcMerge)#Output Parameter for script tool only
#Setting env fields to same values as SGID10.TRANSPORTATION.UDOTRoutes_LRS
arcpy.env.XYResolution = 0.01
arcpy.env.MResolution = 0.0005
arcpy.env.XYTolerance = 0.1
arcpy.env.MTolerance = 0.001
#arcpy.env.ZResolution = 0 #Changing the Z resolution changes the end value a little
routeFields = arcpy.ListFields(inRoutesFullPath)
for f in routeFields:
if f.baseName == newRtIdFieldName:
arcpy.AddError("Reserved Field: " + newRtIdFieldName + " already exists in input table")
if arcpy.CheckExtension("3D") != "Available":
arcpy.AddError("3D Analyst: not available")
else:
arcpy.AddMessage("3D analyst: " + str(arcpy.CheckExtension("3D")))
#Find Shape field name and spatial ref
inFcDesc = arcpy.Describe(inRoutesFullPath)
inFcShpField = inFcDesc.ShapeFieldName
inSpatialRef = inFcDesc.spatialReference
arcpy.AddMessage("Environment variables set")
def distanceFormula(x1 , y1, x2, y2):
d = math.sqrt((math.pow((x2 - x1),2) + math.pow((y2 - y1),2)))
return d
### End distanceFormula() ###
def lengthCalc3d(mzPnt1, mzPnt2):
"""3D line length calculation. Takes two arcpy Point objects as input"""
dist2dA = distanceFormula(mzPnt1.X, mzPnt1.Y, mzPnt2.X, mzPnt2.Y)
zheightB = math.fabs(mzPnt1.Z - mzPnt2.Z)
length3dC = math.sqrt((math.pow((dist2dA),2) + math.pow((zheightB),2)))
return length3dC * 0.000621371192 #Conversion from meters to miles
### End lengthCalc3d() ###
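# Worked example (illustrative): two vertices 3000 m apart in the plane with a 400 m
# elevation difference give a 3D segment length of
#   sqrt(3000**2 + 400**2) = 3026.55 m  ->  3026.55 * 0.000621371192 = 1.881 miles,
# which is what lengthCalc3d() returns for the corresponding arcpy Point objects.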
def removeCurves (inFeatureFullPath, routeIdField):
"""Take and input feature class and Convert to a shapefile to remove curves"""
tempFc = os.path.join(outDirectoryWorkspace, newOutputGDB, outFc + "TempCopy")
outShpFullName = os.path.join(outDirectoryWorkspace, outFc + "CurvRem.shp")
tempOutsList.append(outShpFullName)#keep track of temp outputs for later deletion
tempOutsList.append(tempFc)
arcpy.CopyFeatures_management(inFeatureFullPath, tempFc)#Copy to feature class first to avoid GLOBALID issues
arcpy.CopyFeatures_management(tempFc, outShpFullName)
arcpy.AddField_management(outShpFullName, routeIdField, "TEXT", "", "", "20")
arcpy.CalculateField_management(outShpFullName, routeIdField, '!DOT_RTNAME! + "_" + !DOT_RTPART!', "PYTHON_9.3")
arcpy.AddMessage("Shapefile created to remove curves")
return outShpFullName
def createMZ_Route (inCurvesRemovedPath, outFeatureClass, surfaceForZ, routeIdField):
"""Input the shapefile with curves removed, calc Z values and create the routes with temp M values that
will be overridden"""
extStatus = arcpy.CheckOutExtension("3D")
arcpy.AddMessage("3D analyst: " + str(extStatus))
outPolyLineZ = outFeatureClass + "LineZ"
tempOutsList.append(outPolyLineZ)
outRouteMZ = outFeatureClass
rtPartNumField = "PrtNum"
rtPartCalcExp = "!" + routeIdField + "![-1:]"
arcpy.InterpolateShape_3d(surfaceForZ, inCurvesRemovedPath, outPolyLineZ, '', 1, '', True)
arcpy.AddMessage(arcpy.GetMessages())
arcpy.CreateRoutes_lr(outPolyLineZ, routeIdField, outRouteMZ)
arcpy.AddMessage(arcpy.GetMessages())
arcpy.AddField_management(outRouteMZ, rtPartNumField, "SHORT")
arcpy.CalculateField_management(outRouteMZ, rtPartNumField, rtPartCalcExp, "PYTHON_9.3")
arcpy.AddMessage("Routes with Z values and M place holders have been created")
extStatus = arcpy.CheckInExtension("3D")
arcpy.AddMessage("3D analyst: " + str(extStatus))
return [outRouteMZ, rtPartNumField]
def add3dLengthToM(routesMZ, routePartNumField):
"""Takes the output layer and the part field name from createMZ_Route as parameters. Calculates a 3D M value using distance
between sequential Point objects and the differnce of those point's Z values"""
routes = arcpy.UpdateCursor(routesMZ, "", "", "", newRtIdFieldName + " A")
rtPrtLstPnt = arcpy.Point(0,0,0,0)
for rt in routes:
print 'Route: ' + str(rt.getValue(newRtIdFieldName))
rtShp = rt.getValue(inFcShpField)
rtPartNum = rt.getValue(routePartNumField)
newShpArray = arcpy.Array()
previousPnt = arcpy.Point(0,0,0,0)
featurePartC = 0
print "test"
for part in rtShp:
pntC = 0
print "Feature Part: " + str(featurePartC)
while pntC < part.count:#Loop through all points in current feature part Array object
if pntC == 0:#testing to handle first points of feature part Array
if rtPartNum > 1 and featurePartC == 0:
if part.getObject(pntC).disjoint(rtPrtLstPnt):
part.getObject(pntC).M = rtPrtLstPnt.M + 0.001
else:
part.getObject(pntC).M = rtPrtLstPnt.M
else:
part.getObject(pntC).M = previousPnt.M
else:
mCalc = lengthCalc3d(previousPnt, part.getObject(pntC))
part.getObject(pntC).M = mCalc + previousPnt.M
previousPnt = part.getObject(pntC)#Assign the current point to the previous point for use in the next iteration
pntC += 1
#End point while:
newShpArray.add(part)
featurePartC += 1
#End part for:
newShp = arcpy.Polyline(newShpArray, inSpatialRef)
rt.setValue(inFcShpField, newShp)
routes.updateRow(rt)
rtPrtLstPnt = previousPnt
arcpy.AddMessage("M values updated")
def routeFlipTemp(routesMZ, idField, refPointLayer):
flipRtField = "FlipRt"
arcpy.AddField_management(routesMZ, flipRtField, "SHORT")
routesCursor = arcpy.UpdateCursor(routesMZ)
for route in routesCursor:
rtShp = route.getValue(inFcShpField)
rtID = route.getValue(idField)
print rtID
rtEndPnt = rtShp.lastPoint
#TODO if no ref points are found script errors out
refPntCursor = arcpy.SearchCursor(refPointLayer, """ "LABEL" = '""" + rtID + "'")#Create reference point cursor limited by route ID of current route
#Get the first ref point of the route and reset the closest point with it.
#
p = refPntCursor.next()
closestRefPnt = p.getValue(inFcShpField).centroid
closestDist = distanceFormula(rtEndPnt.X, rtEndPnt.Y, closestRefPnt.X, closestRefPnt.Y)
print "SP : " + str(p.CALPT_TYPE) + str(closestDist)
closestType = str(p.CALPT_TYPE)
##
for refPnt in refPntCursor:
nextRefPnt = refPnt.getValue(inFcShpField).centroid
nextDist = distanceFormula(rtEndPnt.X, rtEndPnt.Y, nextRefPnt.X, nextRefPnt.Y)
if nextDist < closestDist:
closestRefPnt = refPnt
closestDist = nextDist
closestType = str(refPnt.CALPT_TYPE)
print str(refPnt.CALPT_TYPE) + " c: " + str(closestDist)
elif nextDist == closestDist:
if str(refPnt.CALPT_TYPE).count("END") > 0 or str(refPnt.CALPT_TYPE).count("START") > 0:
closestRefPnt = refPnt
closestDist = nextDist
closestType = str(refPnt.CALPT_TYPE)
print str(refPnt.CALPT_TYPE) + " c: " + str(closestDist)
# else:
# print str(refPnt.CALPT_TYPE) + " f: " + str(nextDist)
print closestType + " final: " + str(closestDist)
print
if closestType.count("START") > 0:
route.setValue(flipRtField, 1)
routesCursor.updateRow(route)
del route
del routesCursor
#Select by the flipRtField to flip routes that need it.
arcpy.MakeFeatureLayer_management(routesMZ, "flipRts", '"' + flipRtField + '"' + " = 1 ")
matchCount = int(arcpy.GetCount_management("flipRts").getOutput(0)) #temp
arcpy.AddMessage("Attemping Flip of: " + str(matchCount))
arcpy.FlipLine_edit("flipRts")
arcpy.AddMessage("Routes flipped: " + str(matchCount))
def routePartMerge (inputRoutesMZ, outGDB, outLayerName, lrsSchemaTemplate):
""" Route part merging. Merges route parts into one feature. Populates LRS attributes."""
arcpy.AddMessage("Merging route parts")
inRouteLayer = inputRoutesMZ
outPath = outGDB
outLayer = outLayerName
outLayerTemplate = lrsSchemaTemplate
inRouteDesc = arcpy.Describe(inRouteLayer)
inFcShpField = inRouteDesc.ShapeFieldName
inSpatialRef = inRouteDesc.spatialReference
interstateRts = ["15", "70", "80", "84", "215"]
usRts = ["6", "191", "89", "50", "89A", "491", "163", "40", "189", "90"]
institutionalRts = ["284", "285", "286", "291", "292", "293", "294", "296", "298", "299", "303", "304", "309", "312", "317", "320"]
print "settings complete"
outLayer = arcpy.CreateFeatureclass_management (outPath, outLayer, "", outLayerTemplate, "ENABLED", "DISABLED", inSpatialRef)
#Add route name field to input routes
arcpy.AddField_management(inRouteLayer, "RtNumber", "TEXT", "", "", "15")
#Calc new field to route name with direction
arcpy.CalculateField_management(inRouteLayer, "RtNumber", """!ScrptRtID!.split("_")[0]""", "PYTHON_9.3")
#Build unique table base on route_Direction field
arcpy.Frequency_analysis(inRouteLayer, os.path.join(outPath, "Freq_out"), "RtNumber")
#init cursor for freq table
frequencyCursor = arcpy.SearchCursor(os.path.join(outPath, "Freq_out"))
#init cursors and combine route parts
outFeatureCursor = arcpy.InsertCursor(outLayer)
#iterate through unique table
for uniqueRtNum in frequencyCursor:
#print uniqueRtNum.getValue("RtNumber")
inRtCursor = arcpy.SearchCursor(inRouteLayer, "\"RtNumber\" = '" + uniqueRtNum.getValue("RtNumber") + "'", "", "", "RtNumber A")#select by route_dir sort by part num
outRow = outFeatureCursor.newRow()
newShpArray = arcpy.Array()
previousPnt = arcpy.Point(0,0,0,0)
featureCount = 0
for routePart in inRtCursor:#feature
#Get field data from route part and add it to out table
if featureCount == 0:
#print "set RtName: " + str(routePart.getValue("RtNumber"))
outRow.setValue("LABEL", str(routePart.getValue("RtNumber")))
outRow.setValue("RT_NAME", str(routePart.getValue("RtNumber"))[:4])
outRow.setValue("RT_DIR", str(routePart.getValue("RtNumber"))[-1:])
outRow.setValue("RT_TYPE", "M")
#remove leading zeros from route nummber
num = str(routePart.getValue("RtNumber"))[:4]
while num.find("0") == 0:
num = num[1:]
#Type labeling
if interstateRts.count(num) > 0:
outRow.setValue("RT_MINDESC", "I " + num)
outRow.setValue("CARTO", "1")
elif usRts.count(num) > 0:
outRow.setValue("RT_MINDESC", "US " + num)
outRow.setValue("CARTO", "2")
elif institutionalRts.count(num) > 0:
outRow.setValue("RT_MINDESC", "SR " + num)
outRow.setValue("CARTO", "I")
elif int(num) >= 1000:
outRow.setValue("RT_MINDESC", "FA " + num)
outRow.setValue("CARTO", "9")
else:
outRow.setValue("RT_MINDESC", "SR " + num)
outRow.setValue("CARTO", "3")
rtPartShape = routePart.SHAPE
featurePartCount = 0
for featurePart in rtPartShape:#feature part array
if featureCount == 0 and featurePartCount == 0:#first feature test
newShpArray.add(featurePart)
elif previousPnt.disjoint(featurePart.getObject(0)):
#print "prev: " + str(previousPnt.X) + " next: " + str(featurePart.getObject(0).X)
newShpArray.add(featurePart)
else:
featurePart.remove(0)
newShpArray.getObject(newShpArray.count - 1 ).extend(featurePart)
featurePartCount += 1
lastArrayAddedToNewShp = newShpArray.getObject(newShpArray.count - 1 )
previousPnt = lastArrayAddedToNewShp.getObject(lastArrayAddedToNewShp.count - 1 )
#print "FPC = " + str(featurePartCount)
featureCount += 1
#print "FC = " + str(featureCount)
#build new feature in out layer.
newShp = arcpy.Polyline(newShpArray, inSpatialRef)
outRow.setValue(inFcShpField, newShp)
outFeatureCursor.insertRow(outRow)
try:
del outRow
del outFeatureCursor
del inRtCursor
del frequencyCursor
arcpy.Delete_management(os.path.join(outPath, "Freq_out"))
except:
print "Some Temporary layers did not delete"
print "Complete"
def calibrationPointRoutes(routesMZ, idField, outFeatureName, refPointLayer):
| |
"""
THIS CODE IS UNDER THE BSD 2-Clause LICENSE. YOU CAN FIND THE COMPLETE
FILE AT THE SOURCE DIRECTORY.
Copyright (C) 2017 <NAME> - All rights reserved
@author : <EMAIL>
Publication:
A Novel Unsupervised Analysis of Electrophysiological
Signals Reveals New Sleep Sub-stages in Mice
*****************************************************************************
Class implementing the mean-covariance Restricted Boltzmann Machine (mcRBM)
by <NAME>.
It is based on the original code with minor modifications according to
the needs of our experiments.
Refer to:
"<NAME>, <NAME>, "Modeling Pixel Means and Covariances Using
Factorized Third-Order Boltzmann Machines", CVPR 2010"
You can find the original code at
http://www.cs.toronto.edu/~ranzato/publications/mcRBM/code/mcRBM_04May2010.zip
COPYRIGHT of the original code has been included in the currect directory.
<vkatsageorgiou@vassia-PC>
"""
import sys
import numpy as np
import os
import cudamat as cmt
import _pickle as cPickle
import matplotlib.pyplot as plt
import shutil
from numpy.random import RandomState
from scipy.io import loadmat, savemat
from configparser import *
from datetime import datetime
import sys
sys.path.insert(0, '../dataPreprocessing/')
from dataPreproc import DataPreproc
class mcRBM:
def __init__(self, refDir, expConfigFilename, modelConfigFilename, gpuId):
# directory containing all the configuration files for the experiment
self.refDir = refDir
# file with configuration details for the launched experiment
self.expConfigFilename = refDir + '/' + expConfigFilename
# file with configuration details for the model to be trained
self.modelConfigFilename = refDir + '/' + modelConfigFilename
# data pre-processing object
self.dpp = DataPreproc()
# loading details from configuration files
self.loadExpConfig()
self.loadModelConfig()
# id of the GPU which will be used for computation
self.gpuId = int(gpuId)
def loadExpConfig(self):
'''
Function loading the configuration details for the experiment &
        data pre-processing flags
'''
config = ConfigParser()
config.read(self.expConfigFilename)
self.npRandSeed = config.getint('PARAMETERS','npRandSeed')
self.npRandState = config.getint('PARAMETERS','npRandState')
self.dataDir = config.get('EXP_DETAILS','dsetDir')
self.expsDir = config.get('EXP_DETAILS','expsDir')
self.expName = config.get('EXP_DETAILS','expID')
self.dSetName = config.get('EXP_DETAILS','dSetName')
self.logFlag = config.getboolean('EXP_DETAILS','logFlag')
self.meanSubtructionFlag = config.getboolean('EXP_DETAILS','meanSubtructionFlag')
self.scaleFlag = config.getboolean('EXP_DETAILS','scaleFlag')
self.scaling = config.get('EXP_DETAILS','scaling')
self.doPCA = config.getboolean('EXP_DETAILS','doPCA')
self.whitenFlag = config.getboolean('EXP_DETAILS','whitenFlag')
self.rescaleFlag = config.getboolean('EXP_DETAILS','rescaleFlag')
self.rescaling = config.get('EXP_DETAILS','rescaling')
self.dataFilename = self.dataDir + self.dSetName
self.saveDir = self.expsDir + self.expName
if not os.path.exists(self.saveDir):
os.makedirs(self.saveDir)
#shutil.copy2(self.expConfigFilename, self.saveDir)
#shutil.copy2(self.modelConfigFilename, self.saveDir)
def loadModelConfig(self):
'''
Function loading the configuration details for the model to be trained
'''
config = ConfigParser()
config.read(self.modelConfigFilename)
self.verbose = config.getint('VERBOSITY','verbose')
self.num_epochs = config.getint('MAIN_PARAMETER_SETTING','num_epochs')
self.batch_size = config.getint('MAIN_PARAMETER_SETTING','batch_size')
self.startFH = config.getint('MAIN_PARAMETER_SETTING','startFH')
self.startwd = config.getint('MAIN_PARAMETER_SETTING','startwd')
self.doPCD = config.getint('MAIN_PARAMETER_SETTING','doPCD')
# model parameters
self.num_fac = config.getint('MODEL_PARAMETER_SETTING','num_fac')
self.num_hid_cov = config.getint('MODEL_PARAMETER_SETTING','num_hid_cov')
self.num_hid_mean = config.getint('MODEL_PARAMETER_SETTING','num_hid_mean')
self.apply_mask = config.getint('MODEL_PARAMETER_SETTING','apply_mask')
self.epsilon = config.getfloat('OPTIMIZER_PARAMETERS','epsilon')
self.weightcost_final = config.getfloat('OPTIMIZER_PARAMETERS','weightcost_final')
self.hmc_step_nr = config.getint('HMC_PARAMETERS','hmc_step_nr')
self.hmc_target_ave_rej = config.getfloat('HMC_PARAMETERS','hmc_target_ave_rej')
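    # Illustrative model-configuration file matching the keys read above. The section
    # and option names come from this parser; the numeric values are example settings
    # only, not the ones used in the publication.
    #
    #   [VERBOSITY]
    #   verbose = 1
    #
    #   [MAIN_PARAMETER_SETTING]
    #   num_epochs = 100
    #   batch_size = 128
    #   startFH = 10
    #   startwd = 10
    #   doPCD = 1
    #
    #   [MODEL_PARAMETER_SETTING]
    #   num_fac = 256
    #   num_hid_cov = 128
    #   num_hid_mean = 128
    #   apply_mask = 0
    #
    #   [OPTIMIZER_PARAMETERS]
    #   epsilon = 0.01
    #   weightcost_final = 0.001
    #
    #   [HMC_PARAMETERS]
    #   hmc_step_nr = 20
    #   hmc_target_ave_rej = 0.1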
#-- Data Loading function:
def loadData(self):
'''
Function loading the data
'''
# Create save folder
if not os.path.exists(self.saveDir + '/dataDetails/'):
os.makedirs(self.saveDir + '/dataDetails/')
# load data file:
if self.dataFilename.split('.')[1] == 'npz':
dLoad = np.load(self.dataFilename)
elif self.dataFilename.split('.') == 'mat':
dLoad = loadmat(self.dataFilename)
else:
print("error! Unrecognized data file")
self.d = dLoad['d']
self.obsKeys = dLoad['epochsLinked']
self.epochTime = dLoad['epochTime']
"""
If you want to keep only EEG features, uncomment next line.
"""
#self.d = self.d[:, :self.d.shape[1]-1]
self.d = np.array(self.d, dtype=np.float32)
self.obsKeys = np.array(self.obsKeys, dtype=np.float32)
print("initial size: ", self.d.shape)
#print("FrameIDs : ", self.obsKeys, "of shape : ", self.obsKeys.shape)
with open (self.saveDir + '/dataDetails/' + 'initialData.txt','w') as f:
f.write("\n Modeling: %s " % self.dataFilename)
f.write("\n Dataset size: %s " % str(self.d.shape))
f.write("\n Dataset type: %s " % str(self.d.dtype))
f.write("\n \n d_min: %s " % str(np.min(self.d, axis=0)))
f.write("\n \n d_max: %s " % str(np.max(self.d, axis=0)))
f.write("\n \n d_mean: %s " % str(np.mean(self.d, axis=0)))
f.write("\n \n d_std: %s " % str(np.std(self.d, axis=0)))
f.close()
# Function taken from original code
def compute_energy_mcRBM(self, data,normdata,vel,energy,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t6,feat,featsq,feat_mean,length,lengthsq,normcoeff,small,num_vis):
# normalize input data vectors
data.mult(data, target = t6) # DxP (nr input dims x nr samples)
t6.sum(axis = 0, target = lengthsq) # 1xP
lengthsq.mult(0.5, target = energy) # energy of quadratic regularization term
lengthsq.mult(1./num_vis) # normalize by number of components (like std)
lengthsq.add(small) # small prevents division by 0
cmt.sqrt(lengthsq, target = length)
length.reciprocal(target = normcoeff) # 1xP
data.mult_by_row(normcoeff, target = normdata) # normalized data
## potential
# covariance contribution
cmt.dot(VF.T, normdata, target = feat) # HxP (nr factors x nr samples)
feat.mult(feat, target = featsq) # HxP
cmt.dot(FH.T,featsq, target = t1) # OxP (nr cov hiddens x nr samples)
t1.mult(-0.5)
t1.add_col_vec(bias_cov) # OxP
cmt.exp(t1) # OxP
t1.add(1, target = t2) # OxP
cmt.log(t2)
t2.mult(-1)
energy.add_sums(t2, axis=0)
# mean contribution
cmt.dot(w_mean.T, data, target = feat_mean) # HxP (nr mean hiddens x nr samples)
feat_mean.add_col_vec(bias_mean) # HxP
cmt.exp(feat_mean)
feat_mean.add(1)
cmt.log(feat_mean)
feat_mean.mult(-1)
energy.add_sums(feat_mean, axis=0)
# visible bias term
data.mult_by_col(bias_vis, target = t6)
t6.mult(-1) # DxP
energy.add_sums(t6, axis=0) # 1xP
# kinetic
vel.mult(vel, target = t6)
energy.add_sums(t6, axis = 0, mult = .5)
# same as the previous function. Needed only if the energy has to be computed
# and stored to check the training process
def compute_energy_mcRBM_visual(self, data,normdata,energy,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t6,feat,featsq,feat_mean,length,lengthsq,normcoeff,small,num_vis):
# normalize input data vectors
data.mult(data, target = t6) # DxP (nr input dims x nr samples)
t6.sum(axis = 0, target = lengthsq) # 1xP
lengthsq.mult(0.5, target = energy) # energy of quadratic regularization term
lengthsq.mult(1./num_vis) # normalize by number of components (like std)
lengthsq.add(small) # small prevents division by 0
cmt.sqrt(lengthsq, target = length)
length.reciprocal(target = normcoeff) # 1xP
data.mult_by_row(normcoeff, target = normdata) # normalized data
## potential
# covariance contribution
cmt.dot(VF.T, normdata, target = feat) # HxP (nr factors x nr samples)
feat.mult(feat, target = featsq) # HxP
cmt.dot(FH.T,featsq, target = t1) # OxP (nr cov hiddens x nr samples)
t1.mult(-0.5)
t1.add_col_vec(bias_cov) # OxP
cmt.exp(t1) # OxP
t1.add(1, target = t2) # OxP
cmt.log(t2)
t2.mult(-1)
energy.add_sums(t2, axis=0)
# mean contribution
cmt.dot(w_mean.T, data, target = feat_mean) # HxP (nr mean hiddens x nr samples)
feat_mean.add_col_vec(bias_mean) # HxP
cmt.exp(feat_mean)
feat_mean.add(1)
cmt.log(feat_mean)
feat_mean.mult(-1)
energy.add_sums(feat_mean, axis=0)
# visible bias term
data.mult_by_col(bias_vis, target = t6)
t6.mult(-1) # DxP
energy.add_sums(t6, axis=0) # 1xP
# kinetic
data.mult(data, target = t6)
energy.add_sums(t6, axis = 0, mult = .5)
# Function taken from original code
#################################################################
# compute the derivative if the free energy at a given input
def compute_gradient_mcRBM(self, data,normdata,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t3,t4,t6,feat,featsq,feat_mean,gradient,normgradient,length,lengthsq,normcoeff,small,num_vis):
# normalize input data
data.mult(data, target = t6) # DxP
t6.sum(axis = 0, target = lengthsq) # 1xP
lengthsq.mult(1./num_vis) # normalize by number of components (like std)
lengthsq.add(small)
cmt.sqrt(lengthsq, target = length)
length.reciprocal(target = normcoeff) # 1xP
data.mult_by_row(normcoeff, target = normdata) # normalized data
cmt.dot(VF.T, normdata, target = feat) # HxP
feat.mult(feat, target = featsq) # HxP
cmt.dot(FH.T,featsq, target = t1) # OxP
t1.mult(-.5)
t1.add_col_vec(bias_cov) # OxP
t1.apply_sigmoid(target = t2) # OxP
cmt.dot(FH,t2, target = t3) # HxP
t3.mult(feat)
cmt.dot(VF, t3, target = normgradient) # VxP
# final bprop through normalization
length.mult(lengthsq, target = normcoeff)
normcoeff.reciprocal() # 1xP
normgradient.mult(data, target = gradient) # VxP
gradient.sum(axis = 0, target = t4) # 1xP
t4.mult(-1./num_vis)
data.mult_by_row(t4, target = gradient)
normgradient.mult_by_row(lengthsq, target = t6)
gradient.add(t6)
gradient.mult_by_row(normcoeff)
# add quadratic term gradient
gradient.add(data)
# add visible bias term
gradient.add_col_mult(bias_vis, -1)
# add MEAN contribution to gradient
cmt.dot(w_mean.T, data, target = feat_mean) # HxP
feat_mean.add_col_vec(bias_mean) # HxP
feat_mean.apply_sigmoid() # HxP
gradient.subtract_dot(w_mean,feat_mean) # VxP
# Function taken from original code
############################################################3
# Hybrid Monte Carlo sampler
def draw_HMC_samples(self, data,negdata,normdata,vel,gradient,normgradient,new_energy,old_energy,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,hmc_step,hmc_step_nr,hmc_ave_rej,hmc_target_ave_rej,t1,t2,t3,t4,t5,t6,t7,thresh,feat,featsq,batch_size,feat_mean,length,lengthsq,normcoeff,small,num_vis):
vel.fill_with_randn()
negdata.assign(data)
self.compute_energy_mcRBM(negdata,normdata,vel,old_energy,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t6,feat,featsq,feat_mean,length,lengthsq,normcoeff,small,num_vis)
self.compute_gradient_mcRBM(negdata,normdata,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t3,t4,t6,feat,featsq,feat_mean,gradient,normgradient,length,lengthsq,normcoeff,small,num_vis)
# half step
vel.add_mult(gradient, -0.5*hmc_step)
negdata.add_mult(vel,hmc_step)
# full leap-frog steps
for ss in range(hmc_step_nr - 1):
## re-evaluate the gradient
self.compute_gradient_mcRBM(negdata,normdata,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t3,t4,t6,feat,featsq,feat_mean,gradient,normgradient,length,lengthsq,normcoeff,small,num_vis)
# update variables
vel.add_mult(gradient, -hmc_step)
negdata.add_mult(vel,hmc_step)
# final half-step
self.compute_gradient_mcRBM(negdata,normdata,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t3,t4,t6,feat,featsq,feat_mean,gradient,normgradient,length,lengthsq,normcoeff,small,num_vis)
vel.add_mult(gradient, -0.5*hmc_step)
# compute new energy
self.compute_energy_mcRBM(negdata,normdata,vel,new_energy,VF,FH,bias_cov,bias_vis,w_mean,bias_mean,t1,t2,t6,feat,featsq,feat_mean,length,lengthsq,normcoeff,small,num_vis)
        # rejection
old_energy.subtract(new_energy, target = thresh)
cmt.exp(thresh)
t4.fill_with_rand()
t4.less_than(thresh)
# update negdata and rejection rate
t4.mult(-1)
t4.add(1) # now 1's detect rejections
t4.sum(axis = 1, target = t5)
t5.copy_to_host()
rej = t5.numpy_array[0,0]/batch_size
data.mult_by_row(t4, target = t6)
negdata.mult_by_row(t4, target = t7)
negdata.subtract(t7)
negdata.add(t6)
hmc_ave_rej = 0.9*hmc_ave_rej + 0.1*rej
if hmc_ave_rej < hmc_target_ave_rej:
hmc_step = min(hmc_step*1.01,0.25)
else:
hmc_step = max(hmc_step*0.99,.001)
return hmc_step, hmc_ave_rej
def saveLsq(self):
'''
Function saving the sum of the square of the data
(needed for training as well as for post-analysis)
'''
d = self.d.astype(np.float32)
dsq = np.square(d)
lsq = np.sum(dsq, axis=0)
with open( self.refDir + 'lsqComplete.pkl', 'wb') as pklFile:
cPickle.dump(lsq, pklFile)
def train(self):
'''
Main train function | |
(resp, filedata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testdir/directory/file1.b",
expect_status=200, expect_reason="OK", expect_type="text/plain")
checkdata = open("testdata/testrdf/directory/file1.b").read()
self.assertEqual(filedata, checkdata, "Difference between local and remote data!")
        # Unpack fifth ZIP file into dataset by submitter 2, check response
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser2,
endpointpass=RDFDatabankConfig.endpointsubmitterpass2)
fields = \
[ ("filename", "testrdf2.zip"),
("id", "TestSubmission-testdir")
]
files = []
(reqtype, reqdata) = SparqlQueryTestCase.encode_multipart_formdata(fields, files)
(resp,respdata) = self.doHTTP_POST(
reqdata, reqtype,
resource="items/TestSubmission",
expect_status=403, expect_reason="Forbidden")
# Access and check list of contents in TestSubmission
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser,
endpointpass=RDFDatabankConfig.endpointsubmitterpass)
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
stype = URIRef(oxds+"DataSet")
base = self.getManifestUri("datasets/TestSubmission/")
subj = URIRef(self.getManifestUri("datasets/TestSubmission"))
self.assertEqual(len(rdfgraph),20,'Graph length %i' %len(rdfgraph))
self.assertFalse((URIRef(base+"testrdf2.zip"),URIRef(dcterms+"hasVersion"),None) in rdfgraph, 'dcterms:hasVersion testrdf2.zip')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"5") in rdfgraph, 'oxds:currentVersion')
# Access new dataset, check response
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testrdf2",
expect_status=404, expect_reason="Not Found", expect_type="application/rdf+xml")
# Unpack ZIP file into dataset by general user, check response
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointgeneraluser,
endpointpass=RDFDatabankConfig.endpointgeneralpass)
fields = \
[ ("filename", "testrdf2.zip"),
("id", "TestSubmission-testdir")
]
files = []
(reqtype, reqdata) = SparqlQueryTestCase.encode_multipart_formdata(fields, files)
(resp,respdata) = self.doHTTP_POST(
reqdata, reqtype,
resource="items/TestSubmission",
expect_status=401, expect_reason="Unauthorized", expect_type="text/plain")
# Access and check list of contents in TestSubmission
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser,
endpointpass=RDFDatabankConfig.endpointsubmitterpass)
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
base = self.getManifestUri("datasets/TestSubmission/")
subj = URIRef(self.getManifestUri("datasets/TestSubmission"))
stype = URIRef(oxds+"DataSet")
self.assertEqual(len(rdfgraph),20,'Graph length %i' %len(rdfgraph))
self.assertFalse((URIRef(base+"testrdf2.zip"),URIRef(dcterms+"hasVersion"),None) in rdfgraph, 'dcterms:hasVersion testrdf2.zip')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"5") in rdfgraph, 'oxds:currentVersion')
# Access new dataset, check response
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testrdf2",
expect_status=404, expect_reason="Not Found", expect_type="application/rdf+xml")
# Unpack ZIP file into dataset by admin user3, check response
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointadminuser3,
endpointpass=RDFDatabankConfig.endpointadminpass3)
fields = \
[ ("filename", "testrdf2.zip"),
("id", "TestSubmission-testdir")
]
files = []
(reqtype, reqdata) = SparqlQueryTestCase.encode_multipart_formdata(fields, files)
(resp,respdata) = self.doHTTP_POST(
reqdata, reqtype,
resource="items/TestSubmission",
expect_status=403, expect_reason="Forbidden")
# Access and check list of contents in TestSubmission
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser,
endpointpass=RDFDatabankConfig.endpointsubmitterpass)
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
base = self.getManifestUri("datasets/TestSubmission/")
subj = URIRef(self.getManifestUri("datasets/TestSubmission"))
stype = URIRef(oxds+"DataSet")
self.assertEqual(len(rdfgraph),20,'Graph length %i' %len(rdfgraph))
self.assertFalse((URIRef(base+"testrdf2.zip"),URIRef(dcterms+"hasVersion"),None) in rdfgraph, 'dcterms:hasVersion testrdf2.zip')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"5") in rdfgraph, 'oxds:currentVersion')
# Access new dataset, check response
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testrdf2",
expect_status=404, expect_reason="Not Found", expect_type="application/rdf+xml")
# Unpack ZIP file into dataset by manager user3, check response
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointmanageruser3,
endpointpass=RDFDatabankConfig.endpointmanagerpass3)
fields = \
[ ("filename", "testrdf2.zip"),
("id", "TestSubmission-testdir")
]
files = []
(reqtype, reqdata) = SparqlQueryTestCase.encode_multipart_formdata(fields, files)
(resp,respdata) = self.doHTTP_POST(
reqdata, reqtype,
resource="items/TestSubmission",
expect_status=403, expect_reason="Forbidden")
# Access and check list of contents in TestSubmission
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser,
endpointpass=RDFDatabankConfig.endpointsubmitterpass)
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
stype = URIRef(oxds+"DataSet")
base = self.getManifestUri("datasets/TestSubmission/")
subj = URIRef(self.getManifestUri("datasets/TestSubmission"))
self.assertEqual(len(rdfgraph),20,'Graph length %i' %len(rdfgraph))
self.assertFalse((URIRef(base+"testrdf2.zip"),URIRef(dcterms+"hasVersion"),None) in rdfgraph, 'dcterms:hasVersion testrdf2.zip')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"5") in rdfgraph, 'oxds:currentVersion')
# Access new dataset, check response
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testrdf2",
expect_status=404, expect_reason="Not Found", expect_type="application/rdf+xml")
# Unpack ZIP file into dataset by submitter user3, check response
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser3,
endpointpass=RDFDatabankConfig.endpointsubmitterpass3)
fields = \
[ ("filename", "testrdf2.zip"),
("id", "TestSubmission-testdir")
]
files = []
(reqtype, reqdata) = SparqlQueryTestCase.encode_multipart_formdata(fields, files)
(resp,respdata) = self.doHTTP_POST(
reqdata, reqtype,
resource="items/TestSubmission",
expect_status=403, expect_reason="Forbidden")
# Access and check list of contents in TestSubmission
self.setRequestUserPass(
endpointuser=RDFDatabankConfig.endpointsubmitteruser,
endpointpass=RDFDatabankConfig.endpointsubmitterpass)
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
base = self.getManifestUri("datasets/TestSubmission/")
subj = URIRef(self.getManifestUri("datasets/TestSubmission"))
stype = URIRef(oxds+"DataSet")
self.assertEqual(len(rdfgraph),20,'Graph length %i' %len(rdfgraph))
self.assertFalse((URIRef(base+"testrdf2.zip"),URIRef(dcterms+"hasVersion"),None) in rdfgraph, 'dcterms:hasVersion testrdf2.zip')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"5") in rdfgraph, 'oxds:currentVersion')
# Access new dataset, check response
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testrdf2",
expect_status=404, expect_reason="Not Found", expect_type="application/rdf+xml")
# Access and check list of contents in TestSubmission-testdir version 0
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testdir?version=0",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
stype = URIRef(oxds+"DataSet")
stype2 = URIRef(oxds+"Grouping")
subj = URIRef(self.getManifestUri("datasets/TestSubmission-testdir"))
base = self.getManifestUri("datasets/TestSubmission-testdir/")
self.assertEqual(len(rdfgraph),10,'Graph length %i' %len(rdfgraph))
self.failUnless((subj,RDF.type,stype) in rdfgraph, 'Testing submission type: '+subj+", "+stype)
self.failUnless((subj,URIRef(dcterms+"identifier"),None) in rdfgraph, 'dcterms:identifier')
self.failUnless((subj,URIRef(dcterms+"mediator"),None) in rdfgraph, 'dcterms:mediator')
self.failUnless((subj,URIRef(dcterms+"rights"),None) in rdfgraph, 'dcterms:rights')
self.failUnless((subj,URIRef(dcterms+"license"),None) in rdfgraph, 'dcterms:license')
self.failUnless((subj,URIRef(dcterms+"publisher"),None) in rdfgraph, 'dcterms:publisher')
self.failUnless((subj,URIRef(oxds+"isEmbargoed"),None) in rdfgraph, 'oxds:isEmbargoed')
self.failUnless((subj,URIRef(oxds+"embargoedUntil"),None) in rdfgraph, 'oxds:embargoedUntil')
self.failUnless((subj,URIRef(dcterms+"created"),None) in rdfgraph, 'dcterms:created')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"0") in rdfgraph, 'oxds:currentVersion')
#Access state information of TestSubmission-testdir version 0
(resp, data) = self.doHTTP_GET(
resource="states/TestSubmission-testdir?version=0",
expect_status=200, expect_reason="OK", expect_type="application/json")
state = data['state']
parts = data['parts']
self.assertEqual(len(state.keys()), 11, "States")
self.assertEqual(len(parts.keys()), 3, "Parts")
self.assertEqual(len(parts['4=TestSubmission-testdir'].keys()), 13, "File stats for 4=TestSubmission-testdir")
self.assertEqual(len(parts['manifest.rdf'].keys()), 13, "File stats for manifest.rdf")
# Access dataset TestSubmission-testdir version 1
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testdir/version1",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
self.assertEqual(len(rdfgraph),17,'Graph length %i' %len(rdfgraph))
self.failUnless((subj,RDF.type,stype2) in rdfgraph, 'Testing submission type: '+subj+", "+stype2)
self.failUnless((subj,URIRef(dcterms+"identifier"),None) in rdfgraph, 'dcterms:identifier')
self.failUnless((subj,URIRef(dcterms+"mediator"),None) in rdfgraph, 'dcterms:mediator')
self.failUnless((subj,URIRef(dcterms+"rights"),None) in rdfgraph, 'dcterms:rights')
self.failUnless((subj,URIRef(dcterms+"license"),None) in rdfgraph, 'dcterms:license')
self.failUnless((subj,URIRef(dcterms+"publisher"),None) in rdfgraph, 'dcterms:publisher')
self.failUnless((subj,URIRef(oxds+"isEmbargoed"),None) in rdfgraph, 'oxds:isEmbargoed')
self.failUnless((subj,URIRef(oxds+"embargoedUntil"),None) in rdfgraph, 'oxds:embargoedUntil')
self.failUnless((subj,URIRef(dcterms+"created"),None) in rdfgraph, 'dcterms:created')
self.failUnless((subj,URIRef(dcterms+"modified"),None) in rdfgraph, 'dcterms:modified')
self.failUnless((subj,URIRef(dcterms+"isVersionOf"),None) in rdfgraph, 'dcterms:isVersionOf')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"directory")) in rdfgraph, 'ore:aggregates directory')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"directory/file1.a")) in rdfgraph, 'ore:aggregates file1.a')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"directory/file1.b")) in rdfgraph, 'ore:aggregates file1.b')
        self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"directory/file2.a")) in rdfgraph, 'ore:aggregates file2.a')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"test-csv.csv")) in rdfgraph, 'ore:aggregates test-csv.csv')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"1") in rdfgraph, 'oxds:currentVersion')
#Access state information of TestSubmission-testdir version 1
(resp, data) = self.doHTTP_GET(
resource="states/TestSubmission-testdir/version1",
expect_status=200, expect_reason="OK", expect_type="application/json")
state = data['state']
parts = data['parts']
self.assertEqual(len(state.keys()), 11, "States")
self.assertEqual(len(parts.keys()), 5, "Parts")
self.assertEqual(len(parts['4=TestSubmission-testdir'].keys()), 13, "File stats for 4=TestSubmission-testdir")
self.assertEqual(len(parts['manifest.rdf'].keys()), 13, "File stats for manifest.rdf")
self.assertEqual(len(parts['test-csv.csv'].keys()), 13, "File stats for test-csv.csv")
self.assertEqual(len(parts['directory'].keys()), 0, "File stats for directory")
# Access dataset TestSubmission-testdir version 2
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testdir/version2",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
stype3 = URIRef(oxds+"item")
subj2 = URIRef(base+"testrdf4/directory/file1.a")
subj3 = URIRef(base+"testrdf4/directory/file1.b")
self.assertEqual(len(rdfgraph),29,'Graph length %i' %len(rdfgraph))
self.failUnless((subj,RDF.type,stype2) in rdfgraph, 'Testing submission type: '+subj+", "+stype2)
self.failUnless((subj,URIRef(dcterms+"modified"),None) in rdfgraph, 'dcterms:modified')
self.failUnless((subj,URIRef(dcterms+"isVersionOf"),None) in rdfgraph, 'dcterms:isVersionOf')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4")) in rdfgraph, 'ore:aggregates testrdf4')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory")) in rdfgraph, 'ore:aggregates directory')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/file1.a")) in rdfgraph, 'ore:aggregates file1.a')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/file1.b")) in rdfgraph, 'ore:aggregates file1.b')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/file2.a")) in rdfgraph, 'ore:aggregates file2.a')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/1a.rdf")) in rdfgraph, 'ore:aggregates 1a.rdf')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/1b.rdf")) in rdfgraph, 'ore:aggregates 1b.rdf')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/directory/2a.rdf")) in rdfgraph, 'ore:aggregates 2a.rdf')
self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"testrdf4/test-csv.csv")) in rdfgraph, 'ore:aggregates test-csv.csv')
self.failUnless((subj,URIRef(dcterms+"identifier"),None) in rdfgraph, 'dcterms:identifier')
self.failUnless((subj,URIRef(dcterms+"mediator"),None) in rdfgraph, 'dcterms:mediator')
self.failUnless((subj,URIRef(dcterms+"rights"),None) in rdfgraph, 'dcterms:rights')
self.failUnless((subj,URIRef(dcterms+"license"),None) in rdfgraph, 'dcterms:license')
self.failUnless((subj,URIRef(dcterms+"publisher"),None) in rdfgraph, 'dcterms:publisher')
self.failUnless((subj,URIRef(oxds+"isEmbargoed"),None) in rdfgraph, 'oxds:isEmbargoed')
self.failUnless((subj,URIRef(oxds+"embargoedUntil"),None) in rdfgraph, 'oxds:embargoedUntil')
self.failUnless((subj,URIRef(dcterms+"created"),None) in rdfgraph, 'dcterms:created')
self.failUnless((subj,URIRef(dc+"description"),"This is a archived test item 2a ") in rdfgraph, 'dc:description')
self.failUnless((subj,URIRef(dcterms+"title"),"Test item 2a") in rdfgraph, 'dcterms:title')
self.failUnless((subj,URIRef(oxds+"currentVersion"),"2") in rdfgraph, 'oxds:currentVersion')
self.failUnless((subj2,RDF.type,stype3) in rdfgraph, 'Testing submission type: %s, %s'%(str(subj2), str(stype3)))
self.failUnless((subj2,URIRef(dc+"description"),"This is a archived test item 1a ") in rdfgraph, 'dc:description')
self.failUnless((subj2,URIRef(dcterms+"title"),"Test item 1a") in rdfgraph, 'dcterms:title')
self.failUnless((subj3,RDF.type,stype3) in rdfgraph, 'Testing submission type: '+subj3+", "+stype3)
self.failUnless((subj3,URIRef(dc+"description"),"This is test item 1b of type file") in rdfgraph, 'dc:description')
self.failUnless((subj3,URIRef(dcterms+"title"),"Test item 1b") in rdfgraph, 'dcterms:title')
#Access state information of TestSubmission-testdir version 2
(resp, data) = self.doHTTP_GET(
resource="states/TestSubmission-testdir/version2",
expect_status=200, expect_reason="OK", expect_type="application/json")
state = data['state']
parts = data['parts']
self.assertEqual(len(state.keys()), 11, "States")
self.assertEqual(state['item_id'], "TestSubmission-testdir", "Submission item identifier")
self.assertEqual(len(state['versions']), 5, "Five versions")
self.assertEqual(state['currentversion'], '2', "Current version == 2")
self.assertEqual(state['rdffileformat'], 'xml', "RDF file type")
self.assertEqual(state['rdffilename'], 'manifest.rdf', "RDF file name")
self.assertEqual(len(state['files']['0']), 1, "List should contain just manifest.rdf")
self.assertEqual(len(state['files']['1']), 3, "List should contain manifest.rdf, directory and test-csv.csv")
self.assertEqual(len(state['files']['2']), 2, "List should contain manifest.rdf, testrdf4")
self.assertEqual(len(state['metadata_files']['0']), 0, "metadata_files of version 0")
self.assertEqual(len(state['metadata_files']['1']), 0, "metadata_files of version 1")
self.assertEqual(len(state['metadata_files']['2']), 0, "metadata_files of version 2")
self.assertEqual(state['subdir']['0'], [], "Subdirectory count for version 0")
self.assertEqual(state['subdir']['1'], ['directory'], "Subdirectory for version 1")
self.assertEqual(state['subdir']['2'], ['testrdf4'], "Subdirectory for version 2 should be testrdf4")
self.assertEqual(state['metadata']['createdby'], RDFDatabankConfig.endpointsubmitteruser, "Created by")
self.assertEqual(state['metadata']['embargoed'], True, "Embargoed?")
self.assertEqual(len(parts.keys()), 4, "Parts")
self.assertEqual(len(parts['4=TestSubmission-testdir'].keys()), 13, "File stats for 4=TestSubmission-testdir")
self.assertEqual(len(parts['manifest.rdf'].keys()), 13, "File stats for manifest.rdf")
        self.assertEqual(len(parts['testrdf4'].keys()), 0, "File stats for testrdf4")
# Access dataset TestSubmission-testdir version 3
(resp, rdfdata) = self.doHTTP_GET(
resource="datasets/TestSubmission-testdir?version=3",
expect_status=200, expect_reason="OK", expect_type="application/rdf+xml")
rdfgraph = Graph()
rdfstream = StringIO(rdfdata)
rdfgraph.parse(rdfstream)
subj = URIRef(self.getManifestUri("datasets/TestSubmission-testdir"))
stype = URIRef("http://vocab.ox.ac.uk/dataset/schema#Grouping")
base = self.getManifestUri("datasets/TestSubmission-testdir/")
owl = "http://www.w3.org/2002/07/owl#"
self.assertEqual(len(rdfgraph),22,'Graph length %i' %len(rdfgraph))
self.failUnless((subj,RDF.type,stype) in rdfgraph, 'Testing submission type: '+subj+", "+stype)
self.failUnless((subj,URIRef(dcterms+"identifier"),None) in rdfgraph, 'dcterms:identifier')
self.failUnless((subj,URIRef(dcterms+"mediator"),None) in rdfgraph, 'dcterms:mediator')
self.failUnless((subj,URIRef(dcterms+"rights"),None) in rdfgraph, 'dcterms:rights')
self.failUnless((subj,URIRef(dcterms+"license"),None) in rdfgraph, 'dcterms:license')
self.failUnless((subj,URIRef(dcterms+"publisher"),None) in rdfgraph, 'dcterms:publisher')
self.failUnless((subj,URIRef(oxds+"isEmbargoed"),None) in rdfgraph, 'oxds:isEmbargoed')
self.failUnless((subj,URIRef(oxds+"embargoedUntil"),None) in rdfgraph, 'oxds:embargoedUntil')
self.failUnless((subj,URIRef(dcterms+"created"),None) in rdfgraph, 'dcterms:created')
self.failUnless((subj,URIRef(dcterms+"modified"),None) in rdfgraph, 'dcterms:modified')
self.failUnless((subj,URIRef(dcterms+"isVersionOf"),None) in rdfgraph, 'dcterms:isVersionOf')
        self.failUnless((subj,URIRef(ore+"aggregates"),URIRef(base+"directory1")) in rdfgraph, 'ore:aggregates directory1')
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
import unittest.mock
import numpy as np
from tests.unit import utils
class Test_newton_refine(utils.NumPyTestCase):
@staticmethod
def _call_function_under_test(s, nodes1, t, nodes2):
from bezier.hazmat import intersection_helpers
return intersection_helpers.newton_refine(s, nodes1, t, nodes2)
def test_linear(self):
import bezier
nodes1 = np.asfortranarray([[0.0, 1.0], [0.0, 1.0]])
nodes2 = np.asfortranarray([[1.0, 0.0], [0.0, 3.0]])
curve1 = bezier.Curve(nodes1, degree=1)
curve2 = bezier.Curve(nodes2, degree=1)
known_s = 0.75
known_t = 0.25
self.assertEqual(curve1.evaluate(known_s), curve2.evaluate(known_t))
wrong_s = known_s - 0.125
wrong_t = known_t + 0.125
# NOTE: By construction, the Jacobian matrix will be
# [1, 1], [1, -3]
# which has determinant -4.0, hence there will
# be no round-off when solving.
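        # (Illustration only, not part of the library code: the refinement
        #  solves J @ [ds, dt] = -F(s, t) for F(s, t) = B1(s) - B2(t), with
        #  J = [B1'(s), -B2'(t)]. Here B1'(s) = (1, 1) and B2'(t) = (-1, 3),
        #  giving J = [[1, 1], [1, -3]] and det(J) = 1*(-3) - 1*1 = -4.)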
new_s, new_t = self._call_function_under_test(
wrong_s, nodes1, wrong_t, nodes2
)
# Newton's method is exact on linear problems so will
# always converge after one step.
self.assertEqual(new_s, known_s)
self.assertEqual(new_t, known_t)
@staticmethod
def _get_quadratics():
import bezier
nodes1 = np.asfortranarray([[0.0, 0.5, 1.0], [0.0, 1.0, 0.0]])
nodes2 = np.asfortranarray([[1.0, 0.5, 0.0], [0.75, -0.25, 0.75]])
curve1 = bezier.Curve(nodes1, degree=2)
curve2 = bezier.Curve(nodes2, degree=2)
return curve1, curve2
def test_mixed_degree(self):
import bezier
curve1, _ = self._get_quadratics()
nodes2 = np.asfortranarray([[1.0, 0.0], [0.0, 1.0]])
curve2 = bezier.Curve(nodes2, degree=1)
known_s = 0.5
known_t = 0.5
self.assertEqual(curve1.evaluate(known_s), curve2.evaluate(known_t))
wrong_s = 0.25
wrong_t = 0.25
# NOTE: By construction, the Jacobian matrix will be
# [1, 1], [1, -1]
# which has determinant -2.0, hence there will
# be no round-off when solving.
new_s, new_t = self._call_function_under_test(
wrong_s, curve1._nodes, wrong_t, nodes2
)
self.assertEqual(new_s, 0.4375)
self.assertEqual(new_t, 0.5625)
# Make sure we have gotten closer to correct.
self.assertLess(abs(known_s - new_s), abs(known_s - wrong_s))
self.assertLess(abs(known_t - new_t), abs(known_t - wrong_t))
def test_early_exit(self):
curve1, curve2 = self._get_quadratics()
known_s = 0.25
known_t = 0.75
self.assertEqual(curve1.evaluate(known_s), curve2.evaluate(known_t))
new_s, new_t = self._call_function_under_test(
known_s, curve1._nodes, known_t, curve2._nodes
)
self.assertEqual(new_s, known_s)
self.assertEqual(new_t, known_t)
def test_quadratic(self):
curve1, curve2 = self._get_quadratics()
known_s = 0.25
known_t = 0.75
self.assertEqual(curve1.evaluate(known_s), curve2.evaluate(known_t))
wrong_s = known_s + 0.0625 # 1/16
wrong_t = known_t + 0.0625 # 1/16
# NOTE: By construction, the Jacobian matrix will be
# [1, 3/4], [1, -5/4]
# which has determinant -2.0, hence there will
# be no round-off when solving.
new_s, new_t = self._call_function_under_test(
wrong_s, curve1._nodes, wrong_t, curve2._nodes
)
self.assertEqual(new_s, 0.2421875)
self.assertEqual(new_t, 0.7578125)
# Make sure we have gotten closer to correct.
self.assertLess(abs(known_s - new_s), abs(known_s - wrong_s))
self.assertLess(abs(known_t - new_t), abs(known_t - wrong_t))
def test_convergence(self):
import bezier
nodes1 = np.asfortranarray(
[[0.0, 0.25, 0.5, 0.75, 1.0], [0.0, 1.0, -0.75, 1.0, 0.0]]
)
curve1 = bezier.Curve(nodes1, degree=4)
# Vertical line forces a unique solution.
nodes2 = np.asfortranarray([[0.5, 0.5], [0.0, 1.0]])
curve2 = bezier.Curve(nodes2, degree=1)
num_guess = 4
parameters = np.zeros((2, num_guess), order="F")
# NOTE: This means our "first" guess is (s, t) = (0, 0).
for guess in range(1, num_guess):
prev_s, prev_t = parameters[:, guess - 1]
parameters[:, guess] = self._call_function_under_test(
prev_s, nodes1, prev_t, nodes2
)
expected = np.asfortranarray(
[[0.0, 0.5, 0.5, 0.5], [0.0, 2.0, 0.21875, 0.21875]]
)
self.assertEqual(parameters, expected)
# Make sure that we've actually converged.
exact_s, exact_t = parameters[:, -1]
self.assertEqual(curve1.evaluate(exact_s), curve2.evaluate(exact_t))
def test_singular_jacobian(self):
nodes1 = np.asfortranarray([[0.5, 1.0, 1.5], [0.0, 1.0, 0.0]])
nodes2 = np.asfortranarray([[0.0, 1.0], [0.5, 0.5]])
with self.assertRaises(ValueError) as exc_info:
self._call_function_under_test(0.5, nodes1, 0.5, nodes2)
exc_args = exc_info.exception.args
self.assertEqual(exc_args, ("Jacobian is singular.",))
class TestNewtonSimpleRoot(utils.NumPyTestCase):
@staticmethod
def _get_target_class():
from bezier.hazmat import intersection_helpers
return intersection_helpers.NewtonSimpleRoot
def _make_one(self, *args, **kwargs):
klass = self._get_target_class()
return klass(*args, **kwargs)
def test_constructor(self):
nodes1 = unittest.mock.sentinel.nodes1
first_deriv1 = unittest.mock.sentinel.first_deriv1
nodes2 = unittest.mock.sentinel.nodes2
first_deriv2 = unittest.mock.sentinel.first_deriv2
evaluate_fn = self._make_one(
nodes1, first_deriv1, nodes2, first_deriv2
)
self.assertIs(evaluate_fn.nodes1, nodes1)
self.assertIs(evaluate_fn.first_deriv1, first_deriv1)
self.assertIs(evaluate_fn.nodes2, nodes2)
self.assertIs(evaluate_fn.first_deriv2, first_deriv2)
def test___call__(self):
# B1(s) = [s(s + 2) ]
# [4s(1 - s)]
# B2(t) = [1 + 3t]
# [4 - 4t]
# DF = [2 + 2s, -3]
# [4 - 8s, 4]
nodes1 = np.asfortranarray([[0.0, 1.0, 3.0], [0.0, 2.0, 0.0]])
first_deriv1 = np.asfortranarray([[2.0, 4.0], [4.0, -4.0]])
nodes2 = np.asfortranarray([[1.0, 4.0], [4.0, 0.0]])
first_deriv2 = np.asfortranarray([[3.0], [-4.0]])
evaluate_fn = self._make_one(
nodes1, first_deriv1, nodes2, first_deriv2
)
jacobian, func_val = evaluate_fn(0.5, 0.25)
expected_jacobian = np.asfortranarray([[3.0, -3.0], [0.0, 4.0]])
self.assertEqual(jacobian, expected_jacobian)
expected_func_val = np.asfortranarray([[-0.5], [-2.0]])
self.assertEqual(func_val, expected_func_val)
def test___call__exact_zero(self):
# B1(s) = [2s(1 + s)]
# [6s(1 - s)]
# B2(t) = [21t]
# [ 9t]
# DF = [2 + 4s, -21]
# [6 - 12s, -9]
nodes1 = np.asfortranarray([[0.0, 1.0, 4.0], [0.0, 3.0, 0.0]])
first_deriv1 = np.asfortranarray([[2.0, 6.0], [6.0, -6.0]])
nodes2 = np.asfortranarray([[0.0, 21.0], [0.0, 9.0]])
first_deriv2 = np.asfortranarray([[21.0], [9.0]])
evaluate_fn = self._make_one(
nodes1, first_deriv1, nodes2, first_deriv2
)
jacobian, func_val = evaluate_fn(0.75, 0.125)
self.assertIsNone(jacobian)
expected_func_val = np.asfortranarray([[0.0], [0.0]])
self.assertEqual(func_val, expected_func_val)
class TestNewtonDoubleRoot(utils.NumPyTestCase):
@staticmethod
def _get_target_class():
from bezier.hazmat import intersection_helpers
return intersection_helpers.NewtonDoubleRoot
def _make_one(self, *args, **kwargs):
klass = self._get_target_class()
return klass(*args, **kwargs)
def test_constructor(self):
nodes1 = unittest.mock.sentinel.nodes1
first_deriv1 = unittest.mock.sentinel.first_deriv1
second_deriv1 = unittest.mock.sentinel.second_deriv1
nodes2 = unittest.mock.sentinel.nodes2
first_deriv2 = unittest.mock.sentinel.first_deriv2
second_deriv2 = unittest.mock.sentinel.second_deriv2
evaluate_fn = self._make_one(
nodes1,
first_deriv1,
second_deriv1,
nodes2,
first_deriv2,
second_deriv2,
)
self.assertIs(evaluate_fn.nodes1, nodes1)
self.assertIs(evaluate_fn.first_deriv1, first_deriv1)
self.assertIs(evaluate_fn.second_deriv1, second_deriv1)
self.assertIs(evaluate_fn.nodes2, nodes2)
self.assertIs(evaluate_fn.first_deriv2, first_deriv2)
self.assertIs(evaluate_fn.second_deriv2, second_deriv2)
def _make_default(self):
# B1(s) = [4s(1 - s)]
# [ 2s]
# B2(t) = [2(2t^2 - 2t + 1)]
# [ 2t]
# B1'(s) x B2'(s) = -16(s + t - 1)
# DG = [4 - 8s, 4 - 8t]
# [ 2, -2]
# [ -16, -16]
# DG^T DG = [ 4(16s^2 - 16s + 69), 4(16st - 8s - 8t + 67)]
# [4(16st - 8s - 8t + 67), 4(16t^2 - 16t + 69)]
# DG^T G = [4(8s^3 - 12s^2 + 8st^2 - 8st + 73s - 4t^2 + 67t - 66)]
# [4(8s^2t - 4s^2 - 8st + 67s + 8t^3 - 12t^2 + 73t - 66)]
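        # (Worked check, for illustration: at (s, t) = (0.75, 0.25) as used in
        #  the test below, 4*(16*0.5625 - 16*0.75 + 69) = 264 and
        #  4*(16*0.1875 - 8*0.75 - 8*0.25 + 67) = 248, matching the expected
        #  Jacobian [[264, 248], [248, 264]].)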
nodes1 = np.asfortranarray([[0.0, 2.0, 0.0], [0.0, 1.0, 2.0]])
first_deriv1 = np.asfortranarray([[4.0, -4.0], [2.0, 2.0]])
second_deriv1 = np.asfortranarray([[-8.0], [0.0]])
nodes2 = np.asfortranarray([[2.0, 0.0, 2.0], [0.0, 1.0, 2.0]])
first_deriv2 = np.asfortranarray([[-4.0, 4.0], [2.0, 2.0]])
second_deriv2 = np.asfortranarray([[8.0], [0.0]])
return self._make_one(
nodes1,
first_deriv1,
second_deriv1,
nodes2,
first_deriv2,
second_deriv2,
)
def test___call__(self):
evaluate_fn = self._make_default()
jacobian, func_val = evaluate_fn(0.75, 0.25)
expected_jacobian = np.asfortranarray([[264.0, 248.0], [248.0, 264.0]])
self.assertEqual(jacobian, expected_jacobian)
expected_func_val = np.asfortranarray([[3.0], [-3.0]])
self.assertEqual(func_val, expected_func_val)
def test___call__exact_zero(self):
evaluate_fn = self._make_default()
jacobian, func_val = evaluate_fn(0.5, 0.5)
self.assertEqual(jacobian, None)
expected_func_val = np.asfortranarray([[0.0], [0.0]])
self.assertEqual(func_val, expected_func_val)
def test___call__linear_curves(self):
nodes1 = np.asfortranarray([[0.0, 1.0], [0.0, 1.0]])
first_deriv1 = np.asfortranarray([[1.0], [1.0]])
second_deriv1 = np.empty((2, 0))
nodes2 = np.asfortranarray([[0.0, 1.0], [1.0, 0.0]])
first_deriv2 = np.asfortranarray([[1.0], [-1.0]])
second_deriv2 = np.empty((2, 0))
evaluate_fn = self._make_one(
nodes1,
first_deriv1,
second_deriv1,
nodes2,
first_deriv2,
second_deriv2,
)
jacobian, func_val = evaluate_fn(0.25, 0.25)
expected_jacobian = np.asfortranarray([[2.0, 0.0], [0.0, 2.0]])
self.assertEqual(jacobian, expected_jacobian)
expected_func_val = np.asfortranarray([[-0.5], [-0.5]])
self.assertEqual(func_val, expected_func_val)
class Test_newton_iterate(unittest.TestCase):
HALF_EPS = 0.5 ** 26
@staticmethod
def _call_function_under_test(evaluate_fn, s, t):
from bezier.hazmat import intersection_helpers
return intersection_helpers.newton_iterate(evaluate_fn, s, t)
@staticmethod
def _simple_evaluate(quadratic1, quadratic2):
from bezier.hazmat import intersection_helpers
first_deriv1 = 2.0 * (quadratic1[:, 1:] - quadratic1[:, :-1])
first_deriv2 = 2.0 * (quadratic2[:, 1:] - quadratic2[:, :-1])
return intersection_helpers.NewtonSimpleRoot(
quadratic1, first_deriv1, quadratic2, first_deriv2
)
@staticmethod
def _double_evaluate(quadratic1, quadratic2):
from bezier.hazmat import intersection_helpers
first_deriv1 = 2.0 * (quadratic1[:, 1:] - quadratic1[:, :-1])
second_deriv1 = first_deriv1[:, 1:] - first_deriv1[:, :-1]
first_deriv2 = 2.0 * (quadratic2[:, 1:] - quadratic2[:, :-1])
second_deriv2 = first_deriv2[:, 1:] - first_deriv2[:, :-1]
return intersection_helpers.NewtonDoubleRoot(
quadratic1,
first_deriv1,
second_deriv1,
quadratic2,
first_deriv2,
second_deriv2,
)
def test_rhs_exactly_zero(self):
# B1([10922/32768, 10923/32768]) and B2([16383/16384, 1]) are
# linearized and when the segments intersect they produce
# t = 109217/109216 > 1.
nodes1 = np.asfortranarray([[0.0, 4.5, 9.0], [0.0, 9.0, 0.0]])
s = 671023103.0 / 2013069312.0
nodes2 = np.asfortranarray([[11.0, 7.0, 3.0], [8.0, 10.0, 4.0]])
t = 1789394945.0 / 1789394944.0
evaluate_fn = self._simple_evaluate(nodes1, nodes2)
converged, current_s, current_t = self._call_function_under_test(
            evaluate_fn, s, t
        )
{
return a.__cmp__(b);
} else if ((typeof b == 'object' || typeof b == 'function') && typeof b.__cmp__ == 'function') {
return -b.__cmp__(a);
}
if (a && b && (typeof a.__class__ != 'undefined' || typeof b.__class__ != 'undefined')) {
if (a === b)
return 0;
return -1;
}
if (a == b) return 0;
if (a > b) return 1;
return -1;
};
""")
# for list.sort()
__cmp = cmp
def bool(v):
# this needs to stay in native code without any dependencies here,
# because this is used by if and while, we need to prevent
# recursion
#setCompilerOptions("InlineBool")
#if v:
# return True
#return False
JS("""
if (typeof @{{v}} == 'undefined')
throw $pyce(@{{TypeError}}('bool() called with undefined as argument'));
switch (@{{v}}) {
case null:
case false:
case 0:
case '':
return false;
}
if (typeof @{{v}} == 'object') {
if (typeof @{{v}}.__nonzero__ == 'function'){
return @{{v}}.__nonzero__();
} else if (typeof @{{v}}.__len__ == 'function'){
return @{{v}}.__len__() > 0;
}
}
return true;
""")
JS("@{{bool}}.__$super_cache__ = {};")
JS("@{{:_set_hash}}(@{{bool}});")
JS("@{{bool}}.__$super_cache__[@{{bool}}.$H] = null;")
class float:
__number__ = JS("0x01")
def __new__(self, num):
JS("""
if (typeof @{{num}} == 'string') {
@{{num}} = @{{num}}.lstrip();
}
var v = Number(@{{num}});
if (isNaN(v)) {
if (typeof @{{num}} == 'string')
throw $pyce(@{{ValueError}}("could not convert string to float: " + @{{!num}}));
throw $pyce(@{{TypeError}}("float() argument must be a string or a number"));
}
return v;
""")
# Patching of the standard javascript Number
# which is in principle the python 'float'
JS("""
Number.prototype.__number__ = 0x01;
Number.prototype.__name__ = 'float';
Number.prototype.__class__ = Number.prototype;
Number.prototype.__$super_cache__ = @{{float}}.__$super_cache__;
Number.prototype.$H = @{{float}}.$H;
Number.prototype.__init__ = function (value, radix) {
return null;
};
Number.prototype.__str__ = function () {
if (typeof this == 'function') return "<type '" + this.__name__ + "'>";
return this.toString();
};
Number.prototype.__repr__ = function () {
if (typeof this == 'function') return "<type '" + this.__name__ + "'>";
return this.toString();
};
Number.prototype.__nonzero__ = function () {
return this != 0;
};
Number.prototype.__cmp__ = function (y) {
return this < y? -1 : (this == y ? 0 : 1);
};
Number.prototype.__hash__ = function () {
return this;
};
Number.prototype.__oct__ = function () {
return '0'+this.toString(8);
};
Number.prototype.__hex__ = function () {
return '0x'+this.toString(16);
};
Number.prototype.__pos__ = function () {
return this;
};
Number.prototype.__neg__ = function () {
return -this;
};
Number.prototype.__abs__ = function () {
if (this >= 0) return this;
return -this;
};
Number.prototype.__add__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return this + y;
};
Number.prototype.__radd__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return y + this;
};
Number.prototype.__sub__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return this - y;
};
Number.prototype.__rsub__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return y - this;
};
Number.prototype.__floordiv__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (y == 0) throw $pyce(@{{ZeroDivisionError}}('float divmod()'));
return Math.floor(this / y);
};
Number.prototype.__rfloordiv__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (this == 0) throw $pyce(@{{ZeroDivisionError}}('float divmod'));
return Math.floor(y / this);
};
Number.prototype.__div__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (y == 0) throw $pyce(@{{ZeroDivisionError}}('float division'));
return this / y;
};
Number.prototype.__rdiv__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (this == 0) throw $pyce(@{{ZeroDivisionError}}('float division'));
return y / this;
};
Number.prototype.__mul__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return this * y;
};
Number.prototype.__rmul__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
return y * this;
};
Number.prototype.__mod__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (y == 0) throw $pyce(@{{ZeroDivisionError}}('float modulo'));
return this % y;
};
Number.prototype.__rmod__ = function (y) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (this == 0) throw $pyce(@{{ZeroDivisionError}}('float modulo'));
return y % this;
};
Number.prototype.__pow__ = function (y, z) {
if (!y.__number__ || isNaN(y = y.valueOf())) return @{{NotImplemented}};
if (typeof z == 'undefined' || z == null) {
return Math.pow(this, y);
}
if (!z.__number__ || isNaN(z = z.valueOf())) return @{{NotImplemented}};
return Math.pow(this, y) % z;
};
""")
float_int = JS("""function $fn$pyjslib$float_int(value, radix) {
if (typeof radix == 'undefined') {
if (typeof value == 'number') {
return value >= 0 ? Math.floor(value) : Math.ceil(value);
}
radix = null;
}
return @{{_float_int}}(value, radix);
}""")
JS("@{{float_int}}.__$super_cache__ = Number.prototype.__$super_cache__;")
JS("@{{float_int}}.$H = Number.prototype.$H;")
def _float_int(value, radix):
JS("""
var v;
if (typeof @{{value}}['__int__'] != 'undefined') {
return @{{value}}['__int__']();
}
if (@{{value}}.__number__) {
if (@{{radix}} !== null) {
throw $pyce(@{{TypeError}}("int() can't convert non-string with explicit base"));
}
v = @{{value}}.valueOf();
if (v > 0) {
v = Math.floor(v);
} else {
v = Math.ceil(v);
}
} else if (typeof @{{value}} == 'string') {
if (@{{radix}} === null) {
@{{radix}} = 10;
}
@{{value}} = @{{value}}.lstrip();
switch (@{{value}}[@{{value}}.length-1]) {
case 'l':
case 'L':
v = @{{value}}.slice(0, @{{value}}.length-2);
break;
default:
v = @{{value}};
}
if (v.match($radix_regex[@{{radix}}]) === null) {
v = NaN;
} else {
v = v.$$replace(' ', '');
v = parseInt(v, @{{radix}});
}
} else {
throw $pyce(@{{TypeError}}("TypeError: int() argument must be a string or a number"));
}
if (isNaN(v) || !isFinite(v)) {
throw $pyce(@{{ValueError}}("invalid literal for int() with base " + @{{!radix}} + ": '" + @{{!value}} + "'"));
}
return v;
""")
JS("""
var $radix_regex = [
/^$/i, // 0
/^$/i, // 1
/^ *-? *[01]+ *$/i, // 2
/^ *-? *[0-2]+ *$/i, // 3
/^ *-? *[0-3]+ *$/i, // 4
/^ *-? *[0-4]+ *$/i, // 5
/^ *-? *[0-5]+ *$/i, // 6
/^ *-? *[0-6]+ *$/i, // 7
/^ *-? *[0-7]+ *$/i, // 8
/^ *-? *[0-8]+ *$/i, // 9
/^ *-? *[0-9]+ *$/i, // 10
/^ *-? *[0-9a]+ *$/i, // 11
/^ *-? *[0-9ab]+ *$/i, // 12
/^ *-? *[0-9a-c]+ *$/i, // 13
/^ *-? *[0-9a-d]+ *$/i, // 14
/^ *-? *[0-9a-e]+ *$/i, // 15
/^ *-? *[0-9a-f]+ *$/i, // 16
/^ *-? *[0-9a-g]+ *$/i, // 17
/^ *-? *[0-9a-h]+ *$/i, // 18
/^ *-? *[0-9a-i]+ *$/i, // 19
/^ *-? *[0-9a-j]+ *$/i, // 20
/^ *-? *[0-9a-k]+ *$/i, // 21
/^ *-? *[0-9a-l]+ *$/i, // 22
/^ *-? *[0-9a-m]+ *$/i, // 23
/^ *-? *[0-9a-n]+ *$/i, // 24
/^ *-? *[0-9a-o]+ *$/i, // 25
/^ *-? *[0-9a-p]+ *$/i, // 26
/^ *-? *[0-9a-q]+ *$/i, // 27
/^ *-? *[0-9a-r]+ *$/i, // 28
/^ *-? *[0-9a-s]+ *$/i, // 29
/^ *-? *[0-9a-t]+ *$/i, // 30
/^ *-? *[0-9a-u]+ *$/i, // 31
/^ *-? *[0-9a-v]+ *$/i, // 32
/^ *-? *[0-9a-w]+ *$/i, // 33
/^ *-? *[0-9a-x]+ *$/i, // 34
/^ *-? *[0-9a-y]+ *$/i, // 35
/^ *-? *[0-9a-z]+ *$/i // 36
];
(function(){
/* XXX do not convert to @{{int}} - this is correct */
var $int = pyjslib['int'] = function (value, radix) {
var v, i;
if (typeof radix == 'undefined' || radix === null) {
if (typeof value == 'undefined') {
throw $pyce(@{{TypeError}}("int() takes at least 1 argument"));
}
if (typeof value['__int__'] != 'undefined') {
return value['__int__']();
}
switch (value.__number__) {
case 0x01:
value = value > 0 ? Math.floor(value) : Math.ceil(value);
break;
case 0x02:
return value;
case 0x04:
v = value.valueOf();
if (!($min_int <= v && v <= $max_int))
return value;
}
radix = null;
}
if (typeof this != 'object' || this.__number__ != 0x02) return new $int(value, radix);
if (value.__number__) {
if (radix !== null) throw $pyce(@{{TypeError}}("int() can't convert non-string with explicit base"));
v = value.valueOf();
} else if (typeof value == 'string') {
if (radix === null) {
radix = 10;
}
if (value.match($radix_regex[radix]) === null) {
value = value.lstrip();
v = NaN;
} else {
value = value.$$replace(' ', '');
v = parseInt(value, radix);
}
} else {
throw $pyce(@{{TypeError}}("TypeError: int() argument must be a string or a number"));
}
if (isNaN(v) || !isFinite(v)) {
throw $pyce(@{{ValueError}}("invalid literal for int() with base " + @{{!radix}} + ": '" + @{{!value}} + "'"));
}
if ($min_int <= v && v <= $max_int) {
this.__v = v;
return this;
}
return new pyjslib['long'](v);
};
$int.__init__ = function () {};
$int.__number__ = 0x02;
$int.__v = 0;
$int.__name__ = 'int';
$int.prototype = $int;
$int.__class__ = $int;
$int.toExponential = function (fractionDigits) {
return (typeof fractionDigits == | |
patch title:\n"
)
for pkg_name in used_packages.values():
print(bcolors.OKGREEN + pkg_name + bcolors.ENDC)
print(
"\nThe following packages are not used in any policies, "
"PreStage Enrollments, or patch titles:\n"
)
for pkg_id, pkg_name in unused_packages.items():
print(bcolors.FAIL + f"[{pkg_id}] " + pkg_name + bcolors.ENDC)
if args.delete:
if actions.confirm(
prompt=(
"\nDelete all unused packages?"
"\n(press n to go on to confirm individually)?"
),
default=False,
):
delete_all = True
else:
delete_all = False
for pkg_id, pkg_name in unused_packages.items():
# prompt to delete each package in turn
if delete_all or actions.confirm(
prompt=(
bcolors.OKBLUE
+ f"Delete [{pkg_id}] {pkg_name}?"
+ bcolors.ENDC
),
default=False,
):
print(f"Deleting {pkg_name}...")
api_delete.delete_api_object(
jamf_url, "package", pkg_id, enc_creds, verbosity
)
# process for SMB shares if defined
if args.smb_url:
# mount the share
smb_actions.mount_smb(
args.smb_url,
args.smb_user,
args.smb_pass,
verbosity,
)
# delete the file from the share
smb_actions.delete_pkg(args.smb_url, pkg_name)
# unmount the share
smb_actions.umount_smb(args.smb_url)
def handle_scripts(jamf_url, enc_creds, token, args, verbosity):
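    """Report scripts on the Jamf server; when args.unused is set, flag scripts
    not referenced by any policy, and optionally delete them when args.delete
    is set."""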
unused_scripts = {}
used_scripts = {}
if args.unused:
# get a list of scripts in policies
scripts_in_policies = api_get.get_scripts_in_policies(
jamf_url, enc_creds, verbosity
)
else:
scripts_in_policies = []
if args.all or args.unused:
scripts = api_get.get_uapi_obj_list(jamf_url, "scripts", token, verbosity)
if scripts:
for script in scripts:
# loop all the scripts
if args.unused:
                    # see if the script is used in any policies
unused_in_policies = 0
if scripts_in_policies:
if script["name"] not in scripts_in_policies:
unused_in_policies = 1
else:
unused_in_policies = 1
if unused_in_policies == 1:
unused_scripts[script["id"]] = script["name"]
elif script["name"] not in used_scripts:
used_scripts[script["id"]] = script["name"]
else:
print(
bcolors.WARNING
+ f" script {script['id']}\n"
+ f" name : {script['name']}"
+ bcolors.ENDC
)
if args.details:
# gather interesting info for each script via API
generic_info = api_get.get_uapi_obj_from_id(
jamf_url, "script", script["id"], token, verbosity
)
category = generic_info["categoryName"]
if category and "No category assigned" not in category:
print(f" category : {category}")
info = generic_info["info"]
if info:
print(f" info : {info}")
notes = generic_info["notes"]
if notes:
print(f" notes : {notes}")
priority = generic_info["priority"]
print(f" priority : {priority}")
if args.unused:
print("\nThe following scripts are found in at least one policy:\n")
for script_name in used_scripts.values():
print(bcolors.OKGREEN + script_name + bcolors.ENDC)
print("\nThe following scripts are not used in any policies:\n")
for script_id, script_name in unused_scripts.items():
print(bcolors.FAIL + f"[{script_id}] " + script_name + bcolors.ENDC)
if args.delete:
if actions.confirm(
prompt=(
"\nDelete all unused scripts?"
"\n(press n to go on to confirm individually)?"
),
default=False,
):
delete_all = True
else:
delete_all = False
for script_id, script_name in unused_scripts.items():
# prompt to delete each script in turn
if delete_all or actions.confirm(
prompt=(
bcolors.OKBLUE
+ f"Delete {script_name} (id={script_id})?"
+ bcolors.ENDC
),
default=False,
):
print(f"Deleting {script_name}...")
api_delete.delete_uapi_object(
jamf_url, "script", script_id, token, verbosity
)
else:
print("\nNo scripts found")
def handle_eas(jamf_url, enc_creds, args, verbosity):
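    """Report extension attributes; when args.unused is set, flag EAs not used
    in any smart group criteria or advanced search, and optionally delete them
    when args.delete is set."""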
unused_eas = {}
used_eas = {}
if args.unused:
criteria_in_computer_groups = api_get.get_criteria_in_computer_groups(
jamf_url, enc_creds, verbosity
)
names_in_advanced_searches = api_get.get_names_in_advanced_searches(
jamf_url, enc_creds, verbosity
)
# TODO EAs in Patch policies?
else:
criteria_in_computer_groups = []
names_in_advanced_searches = []
if args.all or args.unused:
eas = api_get.get_api_obj_list(
jamf_url, "extension_attribute", enc_creds, verbosity
)
if eas:
for ea in eas:
# loop all the eas
if args.unused:
                    # see if the EA is used in any computer group criteria or advanced searches
unused_in_computer_groups = 0
unused_in_advanced_searches = 0
if criteria_in_computer_groups:
if ea["name"] not in criteria_in_computer_groups:
unused_in_computer_groups = 1
else:
unused_in_computer_groups = 1
if names_in_advanced_searches:
if ea["name"] not in names_in_advanced_searches:
unused_in_advanced_searches = 1
else:
unused_in_advanced_searches = 1
if (
unused_in_computer_groups == 1
and unused_in_advanced_searches == 1
):
unused_eas[ea["id"]] = ea["name"]
elif ea["name"] not in used_eas:
used_eas[ea["id"]] = ea["name"]
else:
print(
bcolors.WARNING
+ f" script {ea['id']}\n"
+ f" name : {ea['name']}"
+ bcolors.ENDC
)
if args.details:
# gather interesting info for each EA via API
generic_info = api_get.get_api_obj_from_id(
jamf_url,
"extension_attribute",
ea["id"],
enc_creds,
verbosity,
)
enabled = generic_info["enabled"]
print(f" enabled : {enabled}")
data_type = generic_info["data_type"]
print(f" data_type : {data_type}")
input_type = generic_info["input_type"]["type"]
print(f" notes : {input_type}")
inventory_display = generic_info["inventory_display"]
print(f" inventory_display : {inventory_display}")
if args.unused:
print(
"\nThe following EAs are found in at least one smart group "
"or advanced search:\n"
)
for ea_name in used_eas.values():
print(bcolors.OKGREEN + ea_name + bcolors.ENDC)
print(
"\nThe following EAs are not used in any smart groups "
"or advanced searches:\n"
)
for ea_id, ea_name in unused_eas.items():
print(bcolors.FAIL + f"[{ea_id}] " + ea_name + bcolors.ENDC)
if args.delete:
if actions.confirm(
prompt=(
"\nDelete all unused EAs?"
"\n(press n to go on to confirm individually)?"
),
default=False,
):
delete_all = True
else:
delete_all = False
for ea_id, ea_name in unused_eas.items():
# prompt to delete each EA in turn
if delete_all or actions.confirm(
prompt=(
bcolors.OKBLUE
+ f"Delete {ea_name} (id={ea_id})?"
+ bcolors.ENDC
),
default=False,
):
print(f"Deleting {ea_name}...")
api_delete.delete_api_object(
jamf_url,
"extension_attribute",
ea_id,
enc_creds,
verbosity,
)
else:
print("\nNo EAs found")
def handle_groups(jamf_url, enc_creds, args, verbosity):
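    """Report computer groups; when args.unused is set, flag groups not
    referenced by any smart group criteria, advanced search, policy, Mac App
    Store app, configuration profile, patch policy or restricted software, and
    optionally delete them when args.delete is set."""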
unused_groups = {}
used_groups = {}
if args.unused:
# look in computer groups for computer groups in the criteria
criteria_in_computer_groups = api_get.get_criteria_in_computer_groups(
jamf_url, enc_creds, verbosity
)
# look in advanced searches for computer groups in the criteria
names_in_advanced_searches = api_get.get_names_in_advanced_searches(
jamf_url, enc_creds, verbosity
)
# look in the scope of policies
groups_in_policies = api_get.get_groups_in_api_objs(
jamf_url, enc_creds, "policy", verbosity
)
# look in the scope of Mac App Store apps
groups_in_mas_apps = api_get.get_groups_in_api_objs(
jamf_url, enc_creds, "mac_application", verbosity
)
# look in the scope of configurator profiles
groups_in_config_profiles = api_get.get_groups_in_api_objs(
jamf_url, enc_creds, "os_x_configuration_profile", verbosity
)
# look in the scope of patch policies
groups_in_patch_policies = api_get.get_groups_in_patch_policies(
jamf_url, enc_creds, verbosity
)
# look in the scope of restricted software
groups_in_restricted_software = api_get.get_groups_in_api_objs(
jamf_url, enc_creds, "restricted_software", verbosity
)
else:
criteria_in_computer_groups = []
names_in_advanced_searches = []
groups_in_policies = []
groups_in_mas_apps = []
groups_in_config_profiles = []
groups_in_patch_policies = []
groups_in_restricted_software = []
if args.all or args.unused:
groups = api_get.get_api_obj_list(
jamf_url, "computer_group", enc_creds, verbosity
)
if groups:
for group in groups:
# loop all the groups
if args.unused:
                    # see if the group is used in any smart groups, advanced searches,
                    # policies, apps, profiles, patch policies or restricted software
unused_in_computer_groups = 0
unused_in_advanced_searches = 0
unused_in_policies = 0
unused_in_mas_apps = 0
unused_in_config_profiles = 0
unused_in_patch_policies = 0
unused_in_restricted_software = 0
if criteria_in_computer_groups:
if group["name"] not in criteria_in_computer_groups:
unused_in_computer_groups = 1
else:
unused_in_computer_groups = 1
if names_in_advanced_searches:
if group["name"] not in names_in_advanced_searches:
unused_in_advanced_searches = 1
else:
unused_in_advanced_searches = 1
if groups_in_policies:
if group["name"] not in groups_in_policies:
unused_in_policies = 1
else:
unused_in_policies = 1
if groups_in_mas_apps:
if group["name"] not in groups_in_mas_apps:
unused_in_mas_apps = 1
else:
unused_in_mas_apps = 1
if groups_in_config_profiles:
if group["name"] not in groups_in_config_profiles:
unused_in_config_profiles = 1
else:
unused_in_config_profiles = 1
if groups_in_patch_policies:
if group["name"] not in groups_in_patch_policies:
unused_in_patch_policies = 1
else:
unused_in_patch_policies = 1
if groups_in_restricted_software:
if group["name"] not in groups_in_restricted_software:
unused_in_restricted_software = 1
else:
unused_in_restricted_software = 1
if (
unused_in_computer_groups == 1
and unused_in_advanced_searches == 1
and unused_in_policies == 1
and unused_in_mas_apps == 1
and unused_in_config_profiles == 1
and unused_in_patch_policies == 1
and unused_in_restricted_software == 1
):
unused_groups[group["id"]] = group["name"]
elif group["name"] not in used_groups:
used_groups[group["id"]] = group["name"]
else:
print(
bcolors.WARNING
+ f" script {group['id']}\n"
+ f" name : {group['name']}"
+ bcolors.ENDC
)
if args.details:
# gather interesting info for each group via API
generic_info = api_get.get_api_obj_from_id(
jamf_url,
"computer_group",
group["id"],
enc_creds,
verbosity,
)
is_smart = generic_info["is_smart"]
print(f" is smart : {is_smart}")
if args.unused:
print(
"\nThe following groups are criteria in at least one smart group or "
"advanced search,\n"
"and/or are scoped or excluded in at least one "
"policy, patch policy, Mac App Store app,\n"
"configuration profile or restricted software:\n"
)
for group_name in used_groups.values():
print(bcolors.OKGREEN + group_name + bcolors.ENDC)
print(
"\nThe following groups are not found in any smart groups, advanced searches\n"
"policies, patch policies, Mac App Store apps, "
"configuration profiles or restricted software:\n"
)
for group_id, group_name in unused_groups.items():
print(bcolors.FAIL + f"[{group_id}] " + group_name + bcolors.ENDC)
if args.delete:
if actions.confirm(
prompt=(
"\nDelete all unused groups?"
"\n(press n to go on to confirm individually)?"
),
default=False,
):
delete_all = True
else:
delete_all = False
for group_id, group_name in unused_groups.items():
# prompt to delete each group in turn
if delete_all or actions.confirm(
prompt=(
bcolors.OKBLUE
+ f"Delete {group_name} (id={group_id})?"
+ bcolors.ENDC
),
default=False,
):
print(f"Deleting {group_name}...")
api_delete.delete_api_object(
jamf_url,
"computer_group",
group_id,
enc_creds,
verbosity,
| |
""" Handle the confluence uploading"""
from decouple import config
import datetime
from atlassian import Confluence
from pylabnet.utils.helper_methods import load_config, get_os, load_script_config, get_config_filepath
import ctypes
import os
from PyQt5 import QtWidgets, uic, QtCore, QtGui
import sys
from functools import partial
import numpy as np
import logging
class Confluence_Handler():
""" Handle the gui's confluence handler except main window (log server) """
def __init__(self, parent_wins, app, log_client):
self.log = log_client
self.confluence_popup = Confluence_Popping_Windows(parent_wins, app, self.log, "Confluence_info_window" )
class LaunchControl_Confluence_Handler():
""" Handle the main window (log server)'s confluence setting """
def __init__(self, controller, app):
self.confluence_popup = LaunchControl_Confluence_Windows(controller, app, 'Confluence_info_from_LaunchControl' )
class Confluence_Popping_Windows(QtWidgets.QMainWindow):
""" Instantiate a popping-up window, which documents the confluence setting, but not show until users press popping-up button.
It loads html template from 'pylabnet/configs/gui/html_template/html_template_0.html' as the base,
and append it to the confluence page by setting information.
self.load determines whether it is in the 'upload' mode. If it is not, then update the info. It it is, then screenshot the whole gui and save into the 'temp/ folder/'.
The screenshot file is then uploaded to the confluence page and then deleted after all things are settled.
Param: parent_win - the Window class who calls the confluence handler
Param: url, username, pw, uerkey, dev_root_id - the information required for using confluenc API (https://pypi.org/project/atlassian-python-api/, https://atlassian-python-api.readthedocs.io/ )
Param: upload (bool) - whether it is in the upload mode or not
Param: log - log client
Param: pix - screenshot stuffs, a class defined by QtWidgets.QMainWindow
Param: Confluence - a class from atlassian (https://pypi.org/project/atlassian-python-api/, https://atlassian-python-api.readthedocs.io/ )
"""
def __init__(self, parent_wins, app, log_client=None, template= "Confluence_info_window"):
# param (global)
self.parent_wins = parent_wins
self.url = config('CONFLUENCE_URL')
self.username = config('CONFLUENCE_USERNAME')
self.pw = config('CONFLUENCE_PW')
self.userkey = config('CONFLUENCE_USERKEY')
self.dev_root_id = config('CONFLUENCE_DEV_root_id')
self.log = log_client
# param (condition)
self.upload = False
self.pix = None
self._gui_directory = "gui_templates"
self.app = app # Application instance onto which to load the GUI.
self.auto_info_setting_mode = True # automatically access info from the launch control
if self.app is None:
if get_os() == 'Windows':
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('pylabnet')
self.app = QtWidgets.QApplication(sys.argv)
self.app.setWindowIcon(
QtGui.QIcon(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'devices.ico'))
)
# Initialize parent class QtWidgets.QDialog
super(Confluence_Popping_Windows, self).__init__()
self.confluence = Confluence(
url='{}/wiki'.format(self.url), # need to add 'wiki', see https://github.com/atlassian-api/atlassian-python-api/issues/252
username=self.username,
password=self.pw)
# load the gui, but not show
self._load_gui(gui_template=template, run=False)
# the initial fields' info
timestamp_date = datetime.datetime.now().strftime('%b %d %Y')
timestamp_day = datetime.datetime.now().strftime('%b %d %Y')
self.space_key_field.setText('DEV')
self.space_name_field.setText('API Dev Test Space')
self.page_field.setText("test-uploading graphs {}".format(timestamp_day) )
self.comment_field.setFontPointSize(12)
self.upload_space_key = self.space_key_field.text()
self.upload_space_name = self.space_name_field.text()
self.upload_page_title = self.page_field.text()
self.upload_setting = self.setting_field.text()
self.upload_comment = self.comment_field.toPlainText()
# Handle button pressing
self.ok_button.clicked.connect(self.okay_event)
self.cancel_button.clicked.connect(self.cancel_event)
self.actionchage_typing_mode.triggered.connect(self.Change_typing_mode)
# init the space and page as in the launch control
self.Update_confluence_info()
# init the reading settings
if(self.auto_info_setting_mode):
self.space_name_field.setReadOnly(True)
self.space_name_field.setStyleSheet("background-color: gray; color: white")
self.page_field.setReadOnly(True)
self.page_field.setStyleSheet("background-color: gray; color: white")
return
def Change_typing_mode(self):
if(self.auto_info_setting_mode):
self.auto_info_setting_mode = False
            self.actionchage_typing_mode.setText('Change to Auto-typing mode (from launch control)')
self.space_name_field.setReadOnly(False)
self.space_name_field.setStyleSheet("background-color: black; color: white")
self.page_field.setReadOnly(False)
self.page_field.setStyleSheet("background-color: black; color: white")
else:
self.auto_info_setting_mode = True
self.actionchage_typing_mode.setText('Change to Manual-typing mode')
self.Update_confluence_info()
self.space_name_field.setReadOnly(True)
self.space_name_field.setStyleSheet("background-color: gray; color: white")
self.page_field.setReadOnly(True)
self.page_field.setStyleSheet("background-color: gray; color: white")
return
def Update_confluence_info(self):
confluence_config_dict = load_config('confluence_upload')
lab = confluence_config_dict["lab"]
# access metadata
metadata = self.log.get_metadata()
self.upload_space_key = metadata['confluence_space_key_' + lab]
self.upload_space_name = metadata['confluence_space_name_' + lab]
self.upload_page_title = metadata['confluence_page_' + lab]
# update display
self.space_name_field.setReadOnly(False)
self.page_field.setReadOnly(False)
self.space_key_field.setText(self.upload_space_key)
self.space_name_field.setText(self.upload_space_name)
self.page_field.setText(self.upload_page_title)
if(self.auto_info_setting_mode): self.space_name_field.setReadOnly(True)
if(self.auto_info_setting_mode): self.page_field.setReadOnly(True)
return
def Popup_Update(self):
self.upload = False
self.ok_button.setText("OK")
self.space_key_field.setText(self.upload_space_key)
self.space_name_field.setText(self.upload_space_name)
self.page_field.setText(self.upload_page_title)
self.setting_field.setText(self.upload_setting)
self.comment_field.setPlainText(self.upload_comment)
self.ok_button.setText("Ok")
self.setWindowTitle( self.upload_space_key + '/' + self.upload_page_title )
self._run_gui()
self.ok_button.setShortcut("Ctrl+Return")
def Popup_Upload(self):
self.upload = True
#screenshot
self.pix = self.parent_wins.grab()
# access the info of the space and page from the launch control
if(self.auto_info_setting_mode):
self.Update_confluence_info()
# display setting
self.ok_button.setText("Upload")
self.space_key_field.setText(self.upload_space_key)
self.space_name_field.setText(self.upload_space_name)
self.page_field.setText(self.upload_page_title)
self.setting_field.setText(self.upload_setting)
self.comment_field.setPlainText(self.upload_comment)
# pop out
self._run_gui()
self.setWindowTitle( self.upload_space_key + '/' + self.upload_page_title )
self.ok_button.setShortcut("Ctrl+Return")
def cancel_event(self):
self.close()
def okay_event(self):
self.upload_space_key = self.space_key_field.text()
self.upload_space_name = self.space_name_field.text()
self.upload_page_title = self.page_field.text()
self.upload_setting = self.setting_field.text()
self.upload_comment = self.comment_field.toPlainText()
if(self.upload == False):
self.close()
return
# upload case
wintitle = self.windowTitle()
self.setWindowTitle('Uploading ...')
self.log.info("Uploading to the confluence page")
        # save the temporary file
timestamp_datetime = datetime.datetime.now().strftime("%b_%d_%Y__%H_%M_%S")
scrn_shot_filename = "Screenshot_{}".format(timestamp_datetime) + ".png"
scrn_shot_AbsPath = os.path.join("..\\..\\temp", scrn_shot_filename)
self.pix.save(scrn_shot_AbsPath)
# upload
self.upload_pic(scrn_shot_AbsPath, scrn_shot_filename)
        # delete the temporary file
os.remove(scrn_shot_AbsPath)
self.setWindowTitle(wintitle)
self.upload = False
self.log.info("Finish uploading")
self.close()
return
def _load_gui(self, gui_template=None, run=True):
""" Loads a GUI template to the main window.
Currently assumes all templates are in the directory given by the self._gui_directory. If no
gui_template is passed, the self._default_template is used. By default, this method also runs the GUI window.
:param gui_template: name of the GUI template to use (str)
:param run: whether or not to also run the GUI (bool)
"""
if gui_template is None:
gui_template = self._default_template
# Check for proper formatting
if not gui_template.endswith(".ui"):
gui_template += ".ui"
# Find path to GUI
# Currently assumes all templates are in the directory given by the self._gui_directory attribute
# self._ui = os.path.join(
# os.path.dirname(os.path.abspath(__file__ )),
# "..\\..\\",
# self._gui_directory,
# gui_template
# )
self._ui = os.path.join(
(os.path.abspath("..\\..\\pylabnet\\gui\\pyqt" ) ),
self._gui_directory,
gui_template
)
# Load UI
try:
uic.loadUi(self._ui, self)
except FileNotFoundError:
raise
if run:
self._run_gui()
def _run_gui(self):
"""Runs the GUI. Displays the main window"""
self.show()
def upload_pic(self, scrn_shot_AbsPath, scrn_shot_filename):
        ''' Upload the picture if the page exists, otherwise first create a new page and then upload the picture
'''
if( self.confluence.page_exists(self.upload_space_key, self.upload_page_title) ):
upload_page_id = self.confluence.get_page_id(self.upload_space_key, self.upload_page_title)
else:
response = self.confluence.update_or_create(
parent_id= self.dev_root_id,
title = self.upload_page_title,
body='',
representation='storage')
upload_page_id = response['id']
web_url = response['_links']['base']+response['_links']['webui']
self.upload_and_append_picture(
fileAbsPath=scrn_shot_AbsPath,
filename=scrn_shot_filename,
comment=self.upload_comment,
settings=self.upload_setting,
page_id=upload_page_id,
page_title=self.upload_page_title)
return
def replace_html(self, base_html, replace_dict):
'''
        Reads in an HTML template and replaces occurrences of the keys of replace_dict by their corresponding values.
'''
with open(base_html, "r+") as f:
replaced_html = f.read()
for key in replace_dict:
replaced_html = replaced_html.replace(key, replace_dict[key])
return replaced_html
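    # Illustrative usage (hypothetical file name and keys):
    #   replace_html('base.html', {'DATE': '2021-05-01', 'COMMENT': 'ok'})
    # returns the template text with every literal 'DATE' and 'COMMENT'
    # replaced by the corresponding value.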
def append_rendered_html(self, base_html, replace_dict, page_id, page_title, silent=True):
'''
Renders base_html according to replace_dict and appends it on existing page
'''
append_html = self.replace_html(base_html, replace_dict)
status = self.confluence.append_page(
page_id=page_id,
title=page_title,
append_body=append_html
)
self.log.info('PAGE URL: '+ status['_links']['base'] + status['_links']['webui'])
return status
def upload_and_append_picture(self, fileAbsPath, filename, comment, settings, page_id, page_title):
        ''' Upload a picture and embed it in the page, alongside measurement settings information and possible comments
'''
self.confluence.attach_file(fileAbsPath, name=None, content_type=None, page_id=page_id, title=None, space=None, comment=None)
confluence_config_dict = load_config('confluence_upload')
templates_root = confluence_config_dict['templates_root']
html_template_filename = confluence_config_dict['html_template_filename']
base_html = '{}\\{}'.format(templates_root, html_template_filename)
timestamp_date = datetime.datetime.now().strftime('%Y-%m-%d')
timestamp_time = datetime.datetime.now().strftime('%H:%M')
replace_dict = {
'DATE' : timestamp_date,
'TIME' : timestamp_time,
'USERKEY' : self.userkey,
'SETTING' : settings,
'COMMENT' : comment,
'FILENAME' : filename
}
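# Note: these keys are plain-text tokens that must appear verbatim in the HTML
# template; replace_html() substitutes each of them with the value given above.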
# self.log.info(replace_dict)
status = self.append_rendered_html( base_html, replace_dict, page_id, page_title)
return status
class LaunchControl_Confluence_Windows(QtWidgets.QMainWindow):
'''
It instantiates the confluence window for the main window (Log server). It is only shown when the user presses the button. The updated info is saved into
a new entry of the metadata dictionary.
Param: controller - the Controller class that calls the confluence handler
Param: url, username, pw, userkey, dev_root_id - the information required for using the confluence API (https://pypi.org/project/atlassian-python-api/, https://atlassian-python-api.readthedocs.io/ )
Param: dict_name_key - the dictionary mapping a space's name -> key
Param: Confluence - a class from atlassian (https://pypi.org/project/atlassian-python-api/, https://atlassian-python-api.readthedocs.io/ )
'''
def __init__(self, controller, app, template= 'Confluence_info_from_LaunchControl'):
# param (global)
self.url = config('CONFLUENCE_URL')
self.username = config('CONFLUENCE_USERNAME')
self.pw = config('CONFLUENCE_PW')
self.userkey = config('CONFLUENCE_USERKEY')
self.dev_root_id = config('CONFLUENCE_DEV_root_id')
# param
self.controller = controller
self.app = app # Application instance onto which to load the GUI.
self._gui_directory = "gui_templates"
self.dict_name_key = {}
if self.app is None:
if get_os() == 'Windows':
ctypes.windll.shell32.SetCurrentProcessExplicitAppUserModelID('pylabnet')
self.app = QtWidgets.QApplication(sys.argv)
self.app.setWindowIcon(
QtGui.QIcon(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'devices.ico'))
)
# Initialize parent class QtWidgets.QDialog
super(LaunchControl_Confluence_Windows, self).__init__()
# confluence
self.confluence = Confluence(
url='{}/wiki'.format(self.url), # need to add 'wiki', see https://github.com/atlassian-api/atlassian-python-api/issues/252
username=self.username,
password=self.pw)
# load the gui, but not show
self._load_gui(gui_template=template, run=False)
# the initial fields' info
timestamp_day = datetime.datetime.now().strftime('%b %d %Y')
self.space_key_field.setText('DEV')
self.space_name_field.setText('API Dev Test Space')
self.page_field.setText("test-uploading graphs {}".format(timestamp_day) )
self.upload_space_key = self.space_key_field.text()
self.upload_space_name = self.space_name_field.text()
self.upload_page_title = self.page_field.text()
# Handle events
self.space_name_field.textChanged[str].connect(self.change_space_name_event)
self.ok_button.setShortcut("Ctrl+Return")
self.ok_button.clicked.connect(partial(self.okay_event, True))
self.cancel_button.clicked.connect(self.cancel_event)
return
def Popup_Update(self):
if(not self.controller.staticproxy): self.controller.log_service.logger.setLevel(logging.INFO)
response = self.confluence.get_all_spaces(start=0, limit=500, expand=None)['results']
if(not self.controller.staticproxy): self.controller.log_service.logger.setLevel(logging.DEBUG)
# | |
i11iIiiIii - I11i
if 71 - 71: OoO0O00 - I11i
if 96 - 96: I1Ii111 / Ii1I
if 65 - 65: I1ii11iIi11i * O0 . IiII
def lisp_timeout_map_cache ( lisp_map_cache ) :
I1I1i = [ [ ] , [ ] ]
I1I1i = lisp_map_cache . walk_cache ( lisp_timeout_map_cache_walk , I1I1i )
if 11 - 11: I11i / Ii1I % oO0o
if 50 - 50: i11iIiiIii
if 93 - 93: i1IIi / Ii1I * II111iiii - Oo0Ooo . OoOoOO00 - OOooOOo
if 25 - 25: I11i / ooOoO0o % ooOoO0o - OOooOOo
if 59 - 59: I1IiiI + o0oOOo0O0Ooo . iIii1I11I1II1 - O0 - i11iIiiIii
O0O0OooOo0000O = I1I1i [ 0 ]
for IiiiiII1i in O0O0OooOo0000O : IiiiiII1i . delete_cache ( )
if 4 - 4: I1IiiI
if 36 - 36: Ii1I
if 76 - 76: i11iIiiIii + i1IIi
if 56 - 56: OoOoOO00 + II111iiii / i11iIiiIii * OoOoOO00 * OoooooooOO
OoOo0ooo0Ooo = I1I1i [ 1 ]
lisp_checkpoint ( OoOo0ooo0Ooo )
return
if 15 - 15: OoOoOO00 / OoooooooOO + OOooOOo
if 76 - 76: Ii1I * iII111i . OoooooooOO
if 92 - 92: iIii1I11I1II1 - Oo0Ooo - I1IiiI - OOooOOo * I1Ii111
if 44 - 44: I1Ii111 - II111iiii / OOooOOo
if 50 - 50: I11i / I1ii11iIi11i
if 60 - 60: II111iiii / Ii1I + OoO0O00 % I1IiiI * i1IIi / II111iiii
if 91 - 91: I1IiiI * I1Ii111 * i11iIiiIii - oO0o - IiII + I1ii11iIi11i
if 99 - 99: OoO0O00 % o0oOOo0O0Ooo
if 3 - 3: OOooOOo / OoOoOO00 % iIii1I11I1II1
if 47 - 47: ooOoO0o . i11iIiiIii / OoO0O00
if 48 - 48: O0
if 89 - 89: i11iIiiIii % OoO0O00 . OoOoOO00 + Oo0Ooo + OoOoOO00
if 53 - 53: Ii1I / OoOoOO00 % iII111i * OoooooooOO + Oo0Ooo
if 70 - 70: OoO0O00 % OoO0O00 * OoooooooOO
if 96 - 96: ooOoO0o * Ii1I + I11i + II111iiii * I1IiiI / iII111i
if 40 - 40: OoooooooOO - I11i % OOooOOo - I1IiiI . I1IiiI + Ii1I
def lisp_store_nat_info ( hostname , rloc , port ) :
oo0o00OO = rloc . print_address_no_iid ( )
O0oii1III1II1 = "{} NAT state for {}, RLOC {}, port {}" . format ( "{}" ,
blue ( hostname , False ) , red ( oo0o00OO , False ) , port )
if 6 - 6: Oo0Ooo + OoooooooOO - i1IIi * OOooOOo
I1i1 = lisp_nat_info ( oo0o00OO , hostname , port )
if 61 - 61: O0 * OoooooooOO % O0 * Ii1I
if ( lisp_nat_state_info . has_key ( hostname ) == False ) :
lisp_nat_state_info [ hostname ] = [ I1i1 ]
lprint ( O0oii1III1II1 . format ( "Store initial" ) )
return ( True )
if 3 - 3: IiII + OoooooooOO - i1IIi
if 94 - 94: ooOoO0o / iIii1I11I1II1 + I11i + I1ii11iIi11i
if 67 - 67: IiII / o0oOOo0O0Ooo . O0
if 7 - 7: II111iiii . OoOoOO00 % OoOoOO00 % Ii1I + Oo0Ooo - ooOoO0o
if 29 - 29: OoOoOO00 - i1IIi
if 5 - 5: I1IiiI - ooOoO0o + O0
O00OOoOOO0O0O = lisp_nat_state_info [ hostname ] [ 0 ]
if ( O00OOoOOO0O0O . address == oo0o00OO and O00OOoOOO0O0O . port == port ) :
O00OOoOOO0O0O . uptime = lisp_get_timestamp ( )
lprint ( O0oii1III1II1 . format ( "Refresh existing" ) )
return ( False )
if 47 - 47: i1IIi - II111iiii - II111iiii
if 31 - 31: Ii1I
if 37 - 37: I1ii11iIi11i - Ii1I / oO0o . I1IiiI % I1Ii111
if 8 - 8: oO0o
if 46 - 46: I1Ii111 + IiII + II111iiii . o0oOOo0O0Ooo + i11iIiiIii
if 97 - 97: o0oOOo0O0Ooo % OoOoOO00 * O0 / iIii1I11I1II1 * OoO0O00 / i11iIiiIii
if 1 - 1: OoooooooOO . Ii1I
o0o0o0OO000 = None
for O00OOoOOO0O0O in lisp_nat_state_info [ hostname ] :
if ( O00OOoOOO0O0O . address == oo0o00OO and O00OOoOOO0O0O . port == port ) :
o0o0o0OO000 = O00OOoOOO0O0O
break
if 9 - 9: o0oOOo0O0Ooo . iII111i % OoO0O00 / i11iIiiIii + I1ii11iIi11i + i1IIi
if 67 - 67: o0oOOo0O0Ooo
if 58 - 58: IiII % o0oOOo0O0Ooo + i1IIi
if ( o0o0o0OO000 == None ) :
lprint ( O0oii1III1II1 . format ( "Store new" ) )
else :
lisp_nat_state_info [ hostname ] . remove ( o0o0o0OO000 )
lprint ( O0oii1III1II1 . format ( "Use previous" ) )
if 33 - 33: II111iiii
if 61 - 61: I1Ii111
oOoOOOOO = lisp_nat_state_info [ hostname ]
lisp_nat_state_info [ hostname ] = [ I1i1 ] + oOoOOOOO
return ( True )
if 73 - 73: OoOoOO00 * OOooOOo / oO0o % Oo0Ooo
if 53 - 53: I1Ii111
if 33 - 33: OoO0O00 - iIii1I11I1II1 + IiII + oO0o * I1IiiI
if 48 - 48: II111iiii + IiII * O0 . oO0o * iII111i - iIii1I11I1II1
if 75 - 75: I11i / iII111i . O0
if 54 - 54: I1IiiI * OoOoOO00
if 56 - 56: o0oOOo0O0Ooo
if 35 - 35: ooOoO0o / I1Ii111 / I1Ii111
def lisp_get_nat_info ( rloc , hostname ) :
if ( lisp_nat_state_info . has_key ( hostname ) == False ) : return ( None )
if 19 - 19: OoO0O00 % i11iIiiIii % iIii1I11I1II1
oo0o00OO = rloc . print_address_no_iid ( )
for O00OOoOOO0O0O in lisp_nat_state_info [ hostname ] :
if ( O00OOoOOO0O0O . address == oo0o00OO ) : return ( O00OOoOOO0O0O )
if 100 - 100: OOooOOo . oO0o % ooOoO0o * ooOoO0o . I1Ii111 - oO0o
return ( None )
if 33 - 33: Oo0Ooo . i1IIi - OoooooooOO
if 14 - 14: I1Ii111 + Oo0Ooo
if 35 - 35: i11iIiiIii * Ii1I
if 100 - 100: O0 . iII111i / iIii1I11I1II1
if 47 - 47: ooOoO0o + OoOoOO00
if 67 - 67: IiII - I1ii11iIi11i * i1IIi - ooOoO0o
if 91 - 91: I11i
if 54 - 54: I1ii11iIi11i / i1IIi
if 14 - 14: iIii1I11I1II1 * I11i . I11i * ooOoO0o * iII111i
if 60 - 60: iIii1I11I1II1 + i1IIi + oO0o - iIii1I11I1II1 . i11iIiiIii * OoooooooOO
if 23 - 23: iII111i - IiII % i11iIiiIii
if 81 - 81: OoooooooOO % OoOoOO00 / IiII / OoooooooOO + i1IIi - O0
if 60 - 60: OOooOOo - I1Ii111 * Oo0Ooo
if 9 - 9: OoooooooOO * OOooOOo % OoO0O00 - ooOoO0o + Ii1I
if 39 - 39: iIii1I11I1II1 / i1IIi % I11i % I1ii11iIi11i * IiII
if 11 - 11: II111iiii + i1IIi
if 1 - 1: OOooOOo
if 23 - 23: i1IIi + OoooooooOO * OOooOOo . Oo0Ooo
if 83 - 83: OoooooooOO
if 53 - 53: o0oOOo0O0Ooo - Oo0Ooo / IiII + O0
def lisp_build_info_requests ( lisp_sockets , dest , port ) :
if ( lisp_nat_traversal == False ) : return
if 88 - 88: Oo0Ooo % I1Ii111 * O0 - i1IIi * OoO0O00
if 74 - 74: Oo0Ooo % iIii1I11I1II1 + OOooOOo
if 50 - 50: OoO0O00 . OoooooooOO
if 31 - 31: OoO0O00
if 55 - 55: OoOoOO00 + I1Ii111 * o0oOOo0O0Ooo - I1ii11iIi11i + OoOoOO00
if 6 - 6: II111iiii % iIii1I11I1II1 * I1Ii111
I1i1Ii111 = [ ]
Oo0oOoOO0o = [ ]
if ( dest == None ) :
for O0o00000o0O in lisp_map_resolvers_list . values ( ) :
Oo0oOoOO0o . append ( O0o00000o0O . map_resolver )
if 36 - 36: I1IiiI + IiII + I1Ii111 - I11i % I1Ii111
I1i1Ii111 = Oo0oOoOO0o
if ( I1i1Ii111 == [ ] ) :
for OooOoOoo0OOoo in lisp_map_servers_list . values ( ) :
I1i1Ii111 . append ( OooOoOoo0OOoo . map_server )
if 38 - 38: Ii1I * i11iIiiIii + II111iiii . OoO0O00
if 64 - 64: I11i
if ( I1i1Ii111 == [ ] ) : return
else :
I1i1Ii111 . append ( dest )
if 11 - 11: I1ii11iIi11i . i11iIiiIii - Ii1I - OoooooooOO % OoO0O00 / OoO0O00
| |
only obtain information from remote locations, by default
False
Returns
-------
Optional[dict]
An index of the directory in the form:
{
dirs: {...},
files: {...}
}
"""
if path and path.startswith('key://'):
path_resolved: str = self.resolveKeyPath(path)
if not path_resolved:
return None
else:
path_resolved = path
if path_resolved.startswith('kbucket://'):
raise Exception('kbucket:// paths are no longer supported.')
if path_resolved.startswith('sha1dir://'):
list0 = path_resolved.split('/')
sha1 = list0[2]
if '.' in sha1:
sha1 = sha1.split('.')[0]
dd = self.loadObject(path='sha1://' + sha1, download_from=download_from,
local_only=local_only, remote_only=remote_only)
if not dd:
return None
ii = 3
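# Walk the remaining path components (list0[3:]) down through the nested
# 'dirs' entries of the loaded directory index object.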
while ii < len(list0):
name0 = list0[ii]
if name0 in dd['dirs']:
dd = dd['dirs'][name0]
else:
return None
ii = ii + 1
return dd
else:
ret = self._read_file_system_dir(
path=path_resolved, recursive=recursive, include_sha1=include_sha1)
return ret
@mtlogging.log(name='MountainClient:computeDirHash')
def computeDirHash(self, path: str) -> Optional[str]:
"""Returns a hash of a local or remote directory
Parameters
----------
path : str
Path of the local or remote directory. See readDir().
Returns
-------
Optional[str]
The hash of the recursive directory index object from readDir().
"""
# resolve key:// path
path = self._maybe_resolve(path)
dd = self.readDir(path=path, recursive=True, include_sha1=True)
ret = _sha1_of_object(dd)
return ret
@mtlogging.log(name='MountainClient:computeFileSha1')
def computeFileSha1(self, path: str) -> Optional[str]:
"""Return the SHA-1 hash of a local or remote file.
Parameters
----------
path : str
The path to a local file or a sha1:// or sha1dir:// URI.
Returns
-------
Optional[str]
The SHA-1 file hash, or None if the file does not exist.
"""
try:
path = self._maybe_resolve(path)
return self._local_db.computeFileSha1(path=path)
except KeyError:
return None
def sha1OfObject(self, obj: dict) -> str:
"""Compute the SHA-1 hash of a simple dict as the SHA-1 hash
of the JSON text generated in a reproducible way.
Parameters
----------
obj : dict
The simple dict object.
Returns
-------
str
The hash.
"""
return _sha1_of_object(obj)
@mtlogging.log(name='MountainClient:computeFileOrDirHash')
def computeFileOrDirHash(self, path: str) -> Optional[str]:
"""
Compute the SHA-1 hash of a file or directory. See computeFileSha1() and
computeDirHash().
Parameters
----------
path : str
The path or URI to the file or directory.
Returns
-------
Optional[str]
The hash computed either using computeFileSha1() or
computeDirHash().
"""
if path.startswith('kbucket://'):
raise Exception('kbucket:// paths are no longer supported')
if path and (path.startswith('sha1dir://') or path.startswith('key://')):
if self.findFile(path):
return self.computeFileSha1(path)
else:
return self.computeDirHash(path)
elif path.startswith('sha1://'):
return self.computeFileSha1(path)
else:
if os.path.isdir(path):
return self.computeDirHash(path)
else:
return self.computeFileSha1(path)
def isFile(self, path: str) -> bool:
"""
Returns True if the path or URI represents a file rather than a
directory.
Parameters
----------
path : str
The path or URI to the file.
Returns
-------
bool
True if the path or URI represents a file rather than a directory
"""
if self.isLocalPath(path=path):
return os.path.isfile(path)
if path.startswith('kbucket://'):
raise Exception('kbucket:// paths are no longer supported')
if path.startswith('sha1://'):
return True
elif path.startswith('sha1dir://'):
if len(path.split('/')) <= 3:
return False
else:
return (self.computeFileSha1(path) is not None)
elif path.startswith('key://'):
return (self.computeFileSha1(path) is not None)
else:
return os.path.isfile(path)
def isLocalPath(self, path: str) -> bool:
"""
Return True if the path or URI refers to a local file or directory. In
other words if it is not a sha1:// or sha1dir:// URI.
Parameters
----------
path : str
The path or URI to a file or directory.
Returns
-------
bool
True if the path refers to a local file or directory.
"""
if path.startswith('kbucket://'):
raise Exception('kbucket:// paths are no longer supported')
if path.startswith('sha1://') or path.startswith('sha1dir://') or path.startswith('key://'):
return False
return True
def setPairioToken(self, collection: str, token: str) -> None:
"""
Store a pairio token for a given collection.
"""
self._pairio_tokens[collection] = token
def setKacheryUploadToken(self, kachery_name: str, token: str) -> None:
"""
Store upload token for given kachery
"""
self._kachery_upload_tokens[kachery_name] = token
def setKacheryDownloadToken(self, kachery_name: str, token: str) -> None:
"""
Store download token for given kachery
"""
self._kachery_download_tokens[kachery_name] = token
def addRemoteCollection(self, collection: str, token: str, admin_token: str) -> bool:
"""
Add a remote collection, or set the token for an existing collection
(requires admin access).
Parameters
----------
collection : str
Name of the remote collection.
token : str
The new token.
admin_token : str
The admin token for the pairio server
Returns
-------
bool
True if successful.
"""
return self._remote_client.addCollection(
collection=collection,
token=token,
url=self._pairio_url,
admin_token=admin_token
)
def localCacheDir(self) -> str:
"""Returns the path of the directory used for the local cache.
Returns
-------
str
Path to the cache directory.
"""
return self._local_db.localCacheDir()
def alternateLocalCacheDirs(self) -> List[str]:
"""Returns a list of paths to alternate local cache directories.
Returns
-------
List[str]
The list of alternate local cache paths.
"""
return self._local_db.alternateLocalCacheDirs()
@mtlogging.log(name='MountainClient:getSha1Url')
def getSha1Url(self, path: str, *, basename: Optional[str]=None) -> Optional[str]:
"""Return a sha1:// URI representing the file.
Parameters
----------
path : str
Path or URI to the file.
basename : Optional[str], optional
The base name for forming the sha1:// URI to be returned, by default
None
Returns
-------
Optional[str]
The sha1:// URI.
"""
if basename is None:
basename = os.path.basename(path)
sha1 = self.computeFileSha1(path)
if not sha1:
return None
return 'sha1://{}/{}'.format(sha1, basename)
def _initialize_kacheries(self) -> None:
kacheries_fname = os.path.join(os.environ.get(
'HOME', ''), '.mountaintools', 'kacheries')
kachery_upload_tokens_fname = os.path.join(os.environ.get(
'HOME', ''), '.mountaintools', 'kachery_upload_tokens')
kachery_urls = dict()
kachery_upload_tokens: dict = dict()
kachery_download_tokens: dict = dict()
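# The ~/.mountaintools/kacheries file is expected to hold one "<name> <url>"
# pair per line; blank lines and lines starting with '#' are skipped.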
if os.path.exists(kacheries_fname):
txt = _read_text_file(kacheries_fname)
lines = txt.splitlines()
for line in lines:
if (not line.startswith('#')) and (len(line.strip()) > 0):
vals = line.strip().split()
if len(vals) != 2:
print('WARNING: problem parsing kacheries file.')
else:
kachery_urls[vals[0]] = vals[1]
if os.path.exists(kachery_upload_tokens_fname):
txt = _read_text_file(kachery_upload_tokens_fname)
lines = txt.splitlines()
for line in lines:
if (not line.startswith('#')) and (len(line.strip()) > 0):
vals = line.strip().split()
if len(vals) != 2:
print('WARNING: problem parsing kachery_upload_tokens file.')
else:
kachery_upload_tokens[vals[0]] = vals[1]
from .kachery_tokens import KacheryTokens
db = KacheryTokens()
kachery_upload_tokens = {name: token for name,
type, token in db.entries() if type == 'upload'}
kachery_download_tokens = {
name: token for name, type, token in db.entries() if type == 'download'}
for name, url in kachery_urls.items():
self._kachery_urls[name] = url
for name, token in kachery_upload_tokens.items():
self._kachery_upload_tokens[name] = token
for name, token in kachery_download_tokens.items():
self._kachery_download_tokens[name] = token
@deprecated("Warning: login() is deprecated.")
def login(self, *, user=None, password=None, interactive=False, ask_password=False) -> None:
pass
def _read_pairio_tokens(self) -> None:
pairio_tokens_fname = os.path.join(os.environ.get(
'HOME', ''), '.mountaintools', 'pairio_tokens')
if os.path.exists(pairio_tokens_fname):
txt = _read_text_file(pairio_tokens_fname)
lines = txt.splitlines()
for line in lines:
if (not line.startswith('#')) and (len(line.strip()) > 0):
vals = line.strip().split()
if len(vals) != 2:
print('WARNING: problem parsing pairio tokens file.')
else:
self._pairio_tokens[vals[0]] = vals[1]
def _get_value(self, *,
key: Union[str, dict],
subkey: Union[None, str]=None,
collection: Union[None, str]=None,
check_alt: bool=False
) -> Optional[str]:
if not collection:
ret = self._local_db.getValue(
key=key, subkey=subkey, check_alt=check_alt)
if ret is not None:
return ret
if collection:
ret = self._remote_client.getValue(
key=key, subkey=subkey, collection=collection, url=self._pairio_url)
if ret is not None:
return ret
return None
def _get_value_from_alias(self, alias: str) -> Optional[str]:
if alias in self._values_by_alias:
return self._values_by_alias[alias]
vals = alias.split('.')
if len(vals) != 2:
raise Exception('Invalid alias: ' + alias)
ret = self.getValue(key=vals[1], collection=vals[0])
if ret is None:
return None
self._values_by_alias[alias] = ret
return ret
def _set_value(self, *,
key: StrOrDict,
subkey: Optional[str],
value: Union[str, None],
overwrite: bool,
collection: Optional[str]=None
) -> bool:
if collection:
token = self._pairio_tokens.get(collection, None)
else:
token = None
if collection and (not token):
raise Exception('Unable to set value... no token found for collection {}'.format(
collection)) # should we throw an exception here?
if not collection:
if not self._local_db.setValue(key=key, subkey=subkey, value=value, overwrite=overwrite):
return False
if collection:
if not self._remote_client.setValue(key=key, subkey=subkey, value=value, overwrite=overwrite, collection=collection, url=self._pairio_url, token=str(token)):
raise Exception(
'Error setting value to remote collection {}'.format(collection))
return True
def _get_sub_keys(self, *,
key: Union[str, dict],
collection: Union[str, None]
) -> Optional[List[str]]:
if collection:
return self._remote_client.getSubKeys(key=key, collection=collection, url=self._pairio_url)
else:
return self._local_db.getSubKeys(key=key)
def _realize_file(self, *,
path: str,
resolve_locally: bool=True,
local_only: bool=False,
remote_only: bool=False,
dest_path: Optional[str]=None,
show_progress: bool=False,
download_from: Optional[StrOrStrList]=None
) -> Optional[str]:
if not remote_only:
ret = self._local_db.realizeFile(
path=path, local_only=local_only, resolve_locally=resolve_locally, dest_path=dest_path, show_progress=show_progress)
if ret:
return ret
if local_only:
return None
if path.startswith('sha1dir://'):
sha1 = self.computeFileSha1(path)
if not sha1:
return None
path = 'sha1://' + sha1
# | |
or
self.ideEmpregador is not None or
self.infoEmpregador is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='evtInfoEmpregador', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('evtInfoEmpregador')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='evtInfoEmpregador')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='evtInfoEmpregador', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='evtInfoEmpregador'):
if self.Id is not None and 'Id' not in already_processed:
already_processed.add('Id')
outfile.write(' Id=%s' % (self.gds_encode(self.gds_format_string(quote_attrib(self.Id), input_name='Id')), ))
def exportChildren(self, outfile, level, namespace_='', name_='evtInfoEmpregador', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.ideEvento is not None:
self.ideEvento.export(outfile, level, namespace_, name_='ideEvento', pretty_print=pretty_print)
if self.ideEmpregador is not None:
self.ideEmpregador.export(outfile, level, namespace_, name_='ideEmpregador', pretty_print=pretty_print)
if self.infoEmpregador is not None:
self.infoEmpregador.export(outfile, level, namespace_, name_='infoEmpregador', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
value = find_attr_value_('Id', node)
if value is not None and 'Id' not in already_processed:
already_processed.add('Id')
self.Id = value
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'ideEvento':
obj_ = TIdeCadastro.factory()
obj_.build(child_)
self.ideEvento = obj_
obj_.original_tagname_ = 'ideEvento'
elif nodeName_ == 'ideEmpregador':
obj_ = TEmpregador.factory()
obj_.build(child_)
self.ideEmpregador = obj_
obj_.original_tagname_ = 'ideEmpregador'
elif nodeName_ == 'infoEmpregador':
obj_ = infoEmpregador.factory()
obj_.build(child_)
self.infoEmpregador = obj_
obj_.original_tagname_ = 'infoEmpregador'
# end class evtInfoEmpregador
class infoEmpregador(GeneratedsSuper):
"""Identificação da operação (inclusão, alteração ou exclusão) e das
respectivas informações do empregador."""
subclass = None
superclass = None
def __init__(self, inclusao=None, alteracao=None, exclusao=None):
self.original_tagname_ = None
self.inclusao = inclusao
self.alteracao = alteracao
self.exclusao = exclusao
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, infoEmpregador)
if subclass is not None:
return subclass(*args_, **kwargs_)
if infoEmpregador.subclass:
return infoEmpregador.subclass(*args_, **kwargs_)
else:
return infoEmpregador(*args_, **kwargs_)
factory = staticmethod(factory)
def get_inclusao(self): return self.inclusao
def set_inclusao(self, inclusao): self.inclusao = inclusao
def get_alteracao(self): return self.alteracao
def set_alteracao(self, alteracao): self.alteracao = alteracao
def get_exclusao(self): return self.exclusao
def set_exclusao(self, exclusao): self.exclusao = exclusao
def hasContent_(self):
if (
self.inclusao is not None or
self.alteracao is not None or
self.exclusao is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='infoEmpregador', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('infoEmpregador')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='infoEmpregador')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='infoEmpregador', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='infoEmpregador'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='infoEmpregador', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.inclusao is not None:
self.inclusao.export(outfile, level, namespace_, name_='inclusao', pretty_print=pretty_print)
if self.alteracao is not None:
self.alteracao.export(outfile, level, namespace_, name_='alteracao', pretty_print=pretty_print)
if self.exclusao is not None:
self.exclusao.export(outfile, level, namespace_, name_='exclusao', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'inclusao':
obj_ = inclusao.factory()
obj_.build(child_)
self.inclusao = obj_
obj_.original_tagname_ = 'inclusao'
elif nodeName_ == 'alteracao':
obj_ = alteracao.factory()
obj_.build(child_)
self.alteracao = obj_
obj_.original_tagname_ = 'alteracao'
elif nodeName_ == 'exclusao':
obj_ = exclusao.factory()
obj_.build(child_)
self.exclusao = obj_
obj_.original_tagname_ = 'exclusao'
# end class infoEmpregador
class inclusao(GeneratedsSuper):
"""Inclusão de novas informações"""
subclass = None
superclass = None
def __init__(self, idePeriodo=None, infoCadastro=None):
self.original_tagname_ = None
self.idePeriodo = idePeriodo
self.infoCadastro = infoCadastro
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, inclusao)
if subclass is not None:
return subclass(*args_, **kwargs_)
if inclusao.subclass:
return inclusao.subclass(*args_, **kwargs_)
else:
return inclusao(*args_, **kwargs_)
factory = staticmethod(factory)
def get_idePeriodo(self): return self.idePeriodo
def set_idePeriodo(self, idePeriodo): self.idePeriodo = idePeriodo
def get_infoCadastro(self): return self.infoCadastro
def set_infoCadastro(self, infoCadastro): self.infoCadastro = infoCadastro
def hasContent_(self):
if (
self.idePeriodo is not None or
self.infoCadastro is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='inclusao', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('inclusao')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='inclusao')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='inclusao', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='inclusao'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='inclusao', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.idePeriodo is not None:
self.idePeriodo.export(outfile, level, namespace_, name_='idePeriodo', pretty_print=pretty_print)
if self.infoCadastro is not None:
self.infoCadastro.export(outfile, level, namespace_, name_='infoCadastro', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'idePeriodo':
obj_ = TIdePeriodo.factory()
obj_.build(child_)
self.idePeriodo = obj_
obj_.original_tagname_ = 'idePeriodo'
elif nodeName_ == 'infoCadastro':
obj_ = TInfoEmpregador.factory()
obj_.build(child_)
self.infoCadastro = obj_
obj_.original_tagname_ = 'infoCadastro'
# end class inclusao
class alteracao(GeneratedsSuper):
"""Alteração das informações"""
subclass = None
superclass = None
def __init__(self, idePeriodo=None, infoCadastro=None, novaValidade=None):
self.original_tagname_ = None
self.idePeriodo = idePeriodo
self.infoCadastro = infoCadastro
self.novaValidade = novaValidade
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, alteracao)
if subclass is not None:
return subclass(*args_, **kwargs_)
if alteracao.subclass:
return alteracao.subclass(*args_, **kwargs_)
else:
return alteracao(*args_, **kwargs_)
factory = staticmethod(factory)
def get_idePeriodo(self): return self.idePeriodo
def set_idePeriodo(self, idePeriodo): self.idePeriodo = idePeriodo
def get_infoCadastro(self): return self.infoCadastro
def set_infoCadastro(self, infoCadastro): self.infoCadastro = infoCadastro
def get_novaValidade(self): return self.novaValidade
def set_novaValidade(self, novaValidade): self.novaValidade = novaValidade
def hasContent_(self):
if (
self.idePeriodo is not None or
self.infoCadastro is not None or
self.novaValidade is not None
):
return True
else:
return False
def export(self, outfile, level, namespace_='', name_='alteracao', namespacedef_='', pretty_print=True):
imported_ns_def_ = GenerateDSNamespaceDefs_.get('alteracao')
if imported_ns_def_ is not None:
namespacedef_ = imported_ns_def_
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.original_tagname_ is not None:
name_ = self.original_tagname_
showIndent(outfile, level, pretty_print)
outfile.write('<%s%s%s' % (namespace_, name_, namespacedef_ and ' ' + namespacedef_ or '', ))
already_processed = set()
self.exportAttributes(outfile, level, already_processed, namespace_, name_='alteracao')
if self.hasContent_():
outfile.write('>%s' % (eol_, ))
self.exportChildren(outfile, level + 1, namespace_='', name_='alteracao', pretty_print=pretty_print)
showIndent(outfile, level, pretty_print)
outfile.write('</%s%s>%s' % (namespace_, name_, eol_))
else:
outfile.write('/>%s' % (eol_, ))
def exportAttributes(self, outfile, level, already_processed, namespace_='', name_='alteracao'):
pass
def exportChildren(self, outfile, level, namespace_='', name_='alteracao', fromsubclass_=False, pretty_print=True):
if pretty_print:
eol_ = '\n'
else:
eol_ = ''
if self.idePeriodo is not None:
self.idePeriodo.export(outfile, level, namespace_, name_='idePeriodo', pretty_print=pretty_print)
if self.infoCadastro is not None:
self.infoCadastro.export(outfile, level, namespace_, name_='infoCadastro', pretty_print=pretty_print)
if self.novaValidade is not None:
self.novaValidade.export(outfile, level, namespace_, name_='novaValidade', pretty_print=pretty_print)
def build(self, node):
already_processed = set()
self.buildAttributes(node, node.attrib, already_processed)
for child in node:
nodeName_ = Tag_pattern_.match(child.tag).groups()[-1]
self.buildChildren(child, node, nodeName_)
return self
def buildAttributes(self, node, attrs, already_processed):
pass
def buildChildren(self, child_, node, nodeName_, fromsubclass_=False):
if nodeName_ == 'idePeriodo':
obj_ = TIdePeriodo.factory()
obj_.build(child_)
self.idePeriodo = obj_
obj_.original_tagname_ = 'idePeriodo'
elif nodeName_ == 'infoCadastro':
obj_ = TInfoEmpregador.factory()
obj_.build(child_)
self.infoCadastro = obj_
obj_.original_tagname_ = 'infoCadastro'
elif nodeName_ == 'novaValidade':
obj_ = TPeriodoValidade.factory()
obj_.build(child_)
self.novaValidade = obj_
obj_.original_tagname_ = 'novaValidade'
# end class alteracao
class exclusao(GeneratedsSuper):
"""Exclusão das informações"""
subclass = None
superclass = None
def __init__(self, idePeriodo=None):
self.original_tagname_ = None
self.idePeriodo = idePeriodo
def factory(*args_, **kwargs_):
if CurrentSubclassModule_ is not None:
subclass = getSubclassFromModule_(
CurrentSubclassModule_, exclusao)
if subclass is not None:
return subclass(*args_, **kwargs_)
if exclusao.subclass:
return exclusao.subclass(*args_, **kwargs_)
| |
<filename>backslash/contrib/slash_plugin.py
from __future__ import print_function
import functools
import hashlib
import itertools
import json
import os
import pkg_resources
import socket
import sys
import time
import webbrowser
import logbook
import requests
import vintage
try:
import git
except Exception as e: # pylint: disable=broad-except
pass
import slash
from sentinels import NOTHING
from slash import config as slash_config
from slash.plugins import PluginInterface, registers_on
from slash.utils.conf_utils import Cmdline, Doc
from urlobject import URLObject as URL
from requests import HTTPError
from .._compat import shellquote
from ..client import Backslash as BackslashClient
from ..exceptions import ParamsTooLarge
from ..utils import ensure_dir
from .keepalive_thread import KeepaliveThread
from .utils import normalize_file_path, distill_slash_traceback, distill_object_attributes, add_environment_variable_metadata
from ..lazy_query import LazyQuery
from ..session import APPEND_UPCOMING_TESTS_STR
from ..__version__ import __version__ as BACKSLASH_CLIENT_VERSION
_DEFAULT_CONFIG_FILENAME = os.path.expanduser('~/.backslash/config.json')
_logger = logbook.Logger(__name__)
_PWD = os.path.abspath('.')
_HAS_TEST_AVOIDED = (int(slash.__version__.split('.')[0]) >= 1)
_HAS_SESSION_INTERRUPT = hasattr(slash.hooks, 'session_interrupt')
_HAS_TEST_DISTRIBUTED = hasattr(slash.hooks, 'test_distributed')
_HAS_APP_QUIT = hasattr(slash.hooks, 'app_quit')
def handle_exceptions(func):
@functools.wraps(func)
def new_func(self, *args, **kwargs):
try:
with slash.exception_handling.handling_exceptions():
return func(self, *args, **kwargs)
except Exception: # pylint: disable=broad-except
exc_info = sys.exc_info()
if not self._handle_exception(exc_info) or self._propagate_exceptions: # pylint: disable=protected-access
raise
return new_func
class BackslashPlugin(PluginInterface):
client = current_test = session = None
def __init__(self, url=None, keepalive_interval=None, runtoken=None,
propagate_exceptions=False, config_filename=_DEFAULT_CONFIG_FILENAME):
super().__init__()
self._url = url
self._repo_cache = {}
self._config_filename = config_filename
self._file_hash_cache = {}
self._keepalive_interval = keepalive_interval
self._keepalive_thread = None
self._error_containers = {}
self._runtoken = runtoken
self._propagate_exceptions = propagate_exceptions
self._started = False
self._adding_error = False
@property
def rest_url(self):
if self.client is None:
return None
return self.client.url.add_path('rest')
@property
def webapp_url(self):
if self.client is None:
return None
return self.client.get_ui_url()
@property
def session_webapp_url(self):
session = slash.context.session
if session is None or self.client is None:
return None
return self.client.get_ui_url(f'sessions/{session.id}')
def _handle_exception(self, exc_info):
pass
def _handle_keepalive_exception(self, exc_info):
pass
def _get_backslash_url(self):
return self._url
def get_name(self):
return 'backslash'
def get_default_config(self):
return {
"session_ttl_days": 0 // Doc(
'Optional number of days after which this session will be discarded '
'from Backslash') // Cmdline(arg='--session-ttl-days', metavar='DAYS'),
"session_labels": [] // Doc('Specify labels to be added to the session when reported') \
// Cmdline(append="--session-label", metavar="LABEL"),
"blacklisted_warnings_category": [] // Doc(
'Specify warnings categories which should not be reported to backslash'),
"report_test_docstrings": False // Doc(
'Add test docstring to backslash test metadata') // Cmdline(on="--report_test_docstrings"),
}
@handle_exceptions
def activate(self):
if self._runtoken is None:
self._runtoken = self._ensure_run_token()
self.client = BackslashClient(
URL(self._get_backslash_url()),
self._runtoken, headers=self._get_default_headers())
def _get_default_headers(self):
"""Override this method to control the headers sent to the Backslash server
on each request
"""
return None
def deactivate(self):
if self._keepalive_thread is not None:
self._keepalive_thread.stop()
super().deactivate()
def _notify_session_start(self):
metadata = self._get_initial_session_metadata()
is_parent_session = False
parent_logical_id = child_id = None
is_slash_support_parallel = getattr(slash_config.root, 'parallel', False)
if is_slash_support_parallel and slash_config.root.parallel.num_workers:
child_id = slash_config.root.parallel.worker_id
if child_id is not None:
parent_logical_id = getattr(slash.context.session, 'parent_session_id', None)
else:
is_parent_session = True
self.session = self.client.report_session_start(
logical_id=slash.context.session.id,
parent_logical_id=parent_logical_id,
is_parent_session=is_parent_session,
child_id=child_id,
total_num_tests=slash.context.session.get_total_num_tests(),
hostname=socket.getfqdn(),
keepalive_interval=self._keepalive_interval,
infrastructure='slash',
metadata=metadata,
**self._get_extra_session_start_kwargs()
)
self._started = True
for warning in slash.context.session.warnings:
self.warning_added(warning)
for label in self.current_config.session_labels:
self.client.api.call.add_label(session_id=self.session.id, label=label)
if self._keepalive_interval is not None:
self._keepalive_thread = KeepaliveThread(
self.client,
self.session,
self._keepalive_interval,
error_callback=self._handle_keepalive_exception
)
self._keepalive_thread.start()
@handle_exceptions
def session_start(self):
if self.session is None:
self._notify_session_start()
def _get_initial_session_metadata(self):
returned = {
'slash::version': slash.__version__,
'slash::commandline': ' '.join(shellquote(arg) for arg in sys.argv),
'backslash_client_version': BACKSLASH_CLIENT_VERSION,
'python_version': '.'.join(map(str, sys.version_info[:3])),
'process_id': os.getpid(),
}
add_environment_variable_metadata(metadata=returned)
return returned
def _get_extra_session_start_kwargs(self):
returned = {}
ttl_seconds = self.current_config.session_ttl_days * 24 * 60 * 60
if ttl_seconds:
returned['ttl_seconds'] = ttl_seconds
return returned
@slash.plugins.register_if(_HAS_TEST_AVOIDED)
@handle_exceptions
def test_avoided(self, reason):
self.test_start()
self.test_skip(reason=reason)
self.test_end()
@handle_exceptions
def test_interrupt(self):
if self.current_test is not None:
self.current_test.report_interrupted()
@slash.plugins.register_if(_HAS_SESSION_INTERRUPT)
@handle_exceptions
def session_interrupt(self):
if self.session is not None:
self.session.report_interrupted()
@slash.plugins.registers_on(None)
@handle_exceptions
def report_planned_tests(self, tests):
if not APPEND_UPCOMING_TESTS_STR in self.client.api.info().endpoints:
return
tests_metadata = []
for test in tests:
test_info = self._get_test_info(test)
current = {'test_logical_id':test.__slash__.id,
'file_name':test_info['file_name'],
'name':test_info['name'],
'class_name':test_info['class_name']
}
if 'variation' in test_info:
current['variation'] = test_info['variation']
tests_metadata.append(current)
tests_count = 0
batch_size = 100
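# Upcoming tests are reported in slices of batch_size to keep each request
# payload small; HTTP errors here are logged and otherwise ignored.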
try:
while tests_count < len(tests_metadata):
self.session.report_upcoming_tests(tests_metadata[tests_count:tests_count+batch_size])
tests_count += batch_size
except requests.exceptions.HTTPError:
_logger.error('Ignoring exception while reporting planned tests', exc_info=True)
@handle_exceptions
def test_start(self):
kwargs = self._get_test_info(slash.context.test)
self._update_scm_info(kwargs)
tags = slash.context.test.__slash__.tags
tag_dict = {tag_name: tags[tag_name] for tag_name in tags}
if tag_dict:
kwargs['metadata'] = {
'slash::tags': {
'values': {tag_name: tag_value for tag_name, tag_value in tag_dict.items() if tag_value is not NOTHING},
'names': list(tag_dict),
},
}
log_path = slash.context.result.get_log_path()
if log_path:
kwargs.setdefault('metadata', {})['local_log_path'] = os.path.abspath(log_path)
if self.current_config.report_test_docstrings and slash.test.get_test_function().__doc__:
kwargs.setdefault('metadata', {})['docstring'] = slash.test.get_test_function().__doc__
self.current_test = self.session.report_test_start(
test_logical_id=slash.context.test.__slash__.id,
test_index=slash.context.test.__slash__.test_index1,
**kwargs
)
self._error_containers[slash.context.test.__slash__.id] = self.current_test
@slash.plugins.register_if(_HAS_TEST_DISTRIBUTED)
@handle_exceptions #pylint: disable=unused-argument
def test_distributed(self, test_logical_id, worker_session_id): #pylint: disable=unused-argument
if 'report_test_distributed' in self.client.api.info().endpoints:
self.current_test = self.session.report_test_distributed(test_logical_id)
@handle_exceptions
def test_skip(self, reason=None):
self.current_test.mark_skipped(reason=reason)
@slash.plugins.registers_on(None)
def is_session_exist(self, session_id):
try:
self.client.api.get(f'/rest/sessions/{session_id}')
return True
except HTTPError as e:
if e.response.status_code == 404:
return False
raise
@handle_exceptions
@slash.plugins.registers_on(None)
def get_tests_to_resume(self, session_id, filters_dict):
"""Queries backslash specific session's tests
:param session_id: the wanted session
:param filters_dict: a dictionary containing filters for backslash tests query
:rtype: list of test objects
"""
max_retries = 3
for i in range(max_retries):
try:
default_params_dict = {x: 'true' for x in ['show_planned', 'show_skipped', 'show_unsuccessful', 'show_abandoned']}
default_params_dict.update({'session_id': session_id, 'show_successful': 'false'})
default_params_dict.update(filters_dict)
return reversed(LazyQuery(self.client, '/rest/tests', query_params=default_params_dict).all())
except HTTPError:
if i == max_retries-1:
raise
def _get_test_info(self, test):
if test.__slash__.is_interactive() and \
pkg_resources.parse_version(slash.__version__) < pkg_resources.parse_version('1.6.0'):
returned = {
'file_name': '<interactive>',
'class_name': '<interactive>',
'name': '<interactive>',
'is_interactive': True
}
else:
test_display_name = test.__slash__.address
if set(test_display_name) & set('/.'):
test_display_name = test.__slash__.function_name
returned = {
'file_name': normalize_file_path(test.__slash__.file_path),
'class_name': test.__slash__.class_name,
'name': test_display_name,
'is_interactive': test.__slash__.is_interactive(),
}
variation = getattr(test.__slash__, 'variation', None)
if variation:
if hasattr(test.__slash__.variation, 'labels'):
items = test.__slash__.variation.labels.items()
returned['parameters'] = variation.values.copy()
elif hasattr(test.__slash__.variation, 'id'):
items = test.__slash__.variation.id.items()
returned['parameters'] = variation.values.copy()
else:
items = test.__slash__.variation.items()
returned['variation'] = dict((name, value) for name, value in items)
return returned
def _update_scm_info(self, test_info):
try:
test_info['file_hash'] = self._calculate_file_hash(test_info['file_name'])
dirname = os.path.dirname(test_info['file_name'])
repo = self._repo_cache.get(dirname, NOTHING)
if repo is NOTHING:
repo = self._repo_cache[dirname] = self._get_git_repo(dirname)
if repo is None:
return
test_info['scm'] = 'git'
try:
hexsha = repo.head.commit.hexsha
except Exception: # pylint: disable=broad-except
_logger.debug('Unable to get commit hash', exc_info=True)
hexsha = None
test_info['scm_revision'] = hexsha
test_info['scm_dirty'] = bool(repo.untracked_files or repo.index.diff(None) or repo.index.diff(repo.head.commit))
if self.client.api.info().endpoints.report_test_start.version >= 3:
if not repo.head.is_detached:
test_info['scm_local_branch'] = repo.active_branch.name
tracking_branch = repo.active_branch.tracking_branch()
if tracking_branch is not None:
test_info['scm_remote_branch'] = tracking_branch.name
except Exception: # pylint: disable=broad-except
_logger.warning('Error when obtaining SCM information', exc_info=True)
def _calculate_file_hash(self, filename):
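# Descriptive note: the digest computed below uses git's blob-object format,
# sha1(b"blob <size>\0" + contents), so for an unchanged file it should match
# the output of `git hash-object <filename>`.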
returned = self._file_hash_cache.get(filename)
if returned is None:
try:
with open(filename, 'rb') as f:
data = f.read()
h = hashlib.sha1()
h.update('blob '.encode('utf-8'))
h.update(f'{len(data)}\0'.encode('utf-8'))
h.update(data)
except IOError as e:
_logger.debug(f'Ignoring IOError {e!r} when calculating file hash for {filename}')
returned = None
else:
returned = h.hexdigest()
self._file_hash_cache[filename] = returned
return returned
def _get_git_repo(self, dirname):
if not os.path.isabs(dirname):
dirname = os.path.abspath(os.path.join(_PWD, dirname))
while dirname != os.path.normpath(os.path.abspath(os.path.sep)):
if os.path.isdir(os.path.join(dirname, '.git')):
return git.Repo(dirname)
dirname = os.path.normpath(os.path.abspath(os.path.join(dirname, '..')))
return None
@handle_exceptions
def test_end(self):
if self.current_test is None:
return
details = {}
if hasattr(slash.context.result, 'details'):
additional = slash.context.result.details.all()
else:
additional = slash.context.result.get_additional_details()
details.update(additional)
self.current_test.set_metadata_dict(details)
self.current_test.report_end()
self.current_test = None
@handle_exceptions
def session_end(self):
self._session_report_end('session_end')
@slash.plugins.register_if(_HAS_APP_QUIT)
@handle_exceptions # pylint: disable=unused-argument
def app_quit(self):
self._session_report_end('app_quit')
def _session_report_end(self, hook_name):
if not self._started:
return
try:
if self._keepalive_thread is not None:
self._keepalive_thread.stop()
kwargs = {}
session_results = getattr(slash.session, 'results', None)
has_fatal_errors = hasattr(session_results, 'has_fatal_errors') and session_results.has_fatal_errors()
if self.client.api.info().endpoints.report_session_end.version >= 2:
kwargs['has_fatal_errors'] = has_fatal_errors
self.session.report_end(**kwargs)
self._started = False
except Exception: # pylint: disable=broad-except
_logger.error(f'Exception ignored in {hook_name}', exc_info=True)
@handle_exceptions
def error_added(self, result, error):
if self._adding_error:
return
self._adding_error = True
try:
with slash.exception_handling.handling_exceptions():
self._add_exception(result=result, exception=error, is_fatal=error.is_fatal())
finally:
self._adding_error = False
@slash.plugins.register_if(hasattr(slash.hooks, 'interruption_added'))
@handle_exceptions
def interruption_added(self, result, exception):
self._add_exception(result=result, exception=exception, is_interruption=True)
def _add_exception(self, result, exception, is_interruption=False, is_fatal=False):
has_interruptions = self.client.api.info().endpoints.add_error.version >= 4
if is_interruption and not has_interruptions:
_logger.debug('Server does not support recording is_interruption exceptions. Skipping reporting')
return
if result is slash.session.results.global_result:
error_container = self.session
else:
error_container = self._error_containers.get(result.test_metadata.id, self.current_test) or self.session
if error_container is None:
_logger.debug('Could not determine error container to report on for {}', result)
return
with vintage.get_no_deprecations_context():
exception_attrs = getattr(exception, 'exception_attributes', NOTHING)
if exception_attrs is NOTHING and hasattr(exception, 'exc_info'):
exception_attrs = distill_object_attributes(exception.exc_info[1])
kwargs = {'exception_type': exception.exception_type.__name__ if exception.exception_type is not None else None,
'traceback': distill_slash_traceback(exception), 'exception_attrs': exception_attrs}
if exception.message:
message = exception.message
elif hasattr(exception, 'exception_str'):
message = exception.exception_str
else:
message = str(exception.exception)
kwargs['message'] = message
if has_interruptions:
kwargs['is_interruption'] = is_interruption
has_fatal = self.client.api.info().endpoints.add_error.version >= 5
if has_fatal:
kwargs['is_fatal'] = is_fatal
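# The loop below first tries to report the error with full frame variables;
# if the server rejects the payload as too large (ParamsTooLarge), it retries
# once with globals/locals stripped from every traceback frame.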
for compact_variables in [False, True]:
if compact_variables:
for frame in kwargs['traceback']:
frame['globals'] = None
frame['locals'] = None
try:
error_container.add_error(**kwargs)
except ParamsTooLarge:
if compact_variables:
raise
# continue to try compacting
else:
break
@handle_exceptions
def | |
import abc
import pytest
import py_vsys as pv
from test.func_test import conftest as cft
class TestNFTCtrt:
"""
TestNFTCtrt is the collection of functional tests of NFT contract.
"""
@pytest.fixture
async def new_ctrt(self, acnt0: pv.Account) -> pv.NFTCtrt:
"""
new_ctrt is the fixture that registers a new NFT contract.
Args:
acnt0 (pv.Account): The account of nonce 0.
Returns:
pv.NFTCtrt: The NFTCtrt instance.
"""
nc = await pv.NFTCtrt.register(acnt0)
await cft.wait_for_block()
return nc
@pytest.fixture
async def new_ctrt_with_tok(
self, new_ctrt: pv.NFTCtrt, acnt0: pv.Account
) -> pv.NFTCtrt:
"""
new_ctrt_with_tok is the fixture that registers a new NFT contract and issues an NFT token right after it.
Args:
new_ctrt (pv.NFTCtrt): The fixture that registers a new NFT contract.
acnt0 (pv.Account): The account of nonce 0.
Returns:
pv.NFTCtrt: The NFTCtrt instance.
"""
nc = new_ctrt
await nc.issue(acnt0)
await cft.wait_for_block()
return nc
@pytest.fixture
async def new_atomic_swap_ctrt(
self,
new_ctrt_with_tok: pv.NFTCtrt,
acnt0: pv.Account,
) -> pv.AtomicSwapCtrt:
"""
new_atomic_swap_ctrt is the fixture that registers a new atomic swap contract.
Args:
new_ctrt_with_tok (pv.NFTCtrt): The fixture that registers a new NFT contract and issues an NFT token right after it.
acnt0 (pv.Account): The account of nonce 0.
Returns:
pv.AtomicSwapCtrt: The AtomicSwapCtrt instance.
"""
nc = new_ctrt_with_tok
tok_id = pv.Ctrt.get_tok_id(nc.ctrt_id, pv.TokenIdx(0))
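# Token index 0 refers to the first (and, in these fixtures, only) NFT issued
# by the contract from new_ctrt_with_tok; the swap contract is registered
# against that token's ID.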
ac = await pv.AtomicSwapCtrt.register(acnt0, tok_id.data)
await cft.wait_for_block()
assert (await ac.maker) == acnt0.addr
assert (await ac.tok_id) == tok_id
return ac
async def test_register(self, acnt0: pv.Account) -> pv.NFTCtrt:
"""
test_register tests the method register.
Args:
acnt0 (pv.Account): The account of nonce 0.
Returns:
pv.NFTCtrt: The registered NFTCtrt.
"""
nc = await pv.NFTCtrt.register(acnt0)
await cft.wait_for_block()
assert (await nc.issuer) == acnt0.addr
assert (await nc.maker) == acnt0.addr
return nc
async def test_issue(self, new_ctrt: pv.NFTCtrt, acnt0: pv.Account):
"""
test_issue tests the method issue.
Args:
new_ctrt (pv.NFTCtrt): The fixture that registers a new NFT contract.
acnt0 (pv.Account): The account of nonce 0.
"""
nc = new_ctrt
api = nc.chain.api
resp = await nc.issue(acnt0)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
tok_id = pv.Ctrt.get_tok_id(nc.ctrt_id, pv.TokenIdx(0))
tok_bal = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal == 1
async def test_send(
self, new_ctrt_with_tok: pv.NFTCtrt, acnt0: pv.Account, acnt1: pv.Account
):
"""
test_send tests the method send
Args:
new_ctrt_with_tok (pv.NFTCtrt): The fixture that registers a new NFT contract and issues an NFT token right after it.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = new_ctrt_with_tok
api = nc.chain.api
tok_id = pv.Ctrt.get_tok_id(nc.ctrt_id, pv.TokenIdx(0))
tok_bal_acnt0 = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal_acnt0 == 1
tok_bal_acnt1 = await cft.get_tok_bal(api, acnt1.addr.data, tok_id.data)
assert tok_bal_acnt1 == 0
resp = await nc.send(acnt0, acnt1.addr.data, 0)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
tok_bal_acnt0 = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal_acnt0 == 0
tok_bal_acnt1 = await cft.get_tok_bal(api, acnt1.addr.data, tok_id.data)
assert tok_bal_acnt1 == 1
async def test_transfer(
self, new_ctrt_with_tok: pv.NFTCtrt, acnt0: pv.Account, acnt1: pv.Account
):
"""
test_transfer tests the method transfer.
Args:
new_ctrt_with_tok (pv.NFTCtrt): The fixture that registers a new NFT contract and issues an NFT token right after it.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = new_ctrt_with_tok
api = nc.chain.api
tok_id = pv.Ctrt.get_tok_id(nc.ctrt_id, pv.TokenIdx(0))
tok_bal_acnt0 = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal_acnt0 == 1
tok_bal_acnt1 = await cft.get_tok_bal(api, acnt1.addr.data, tok_id.data)
assert tok_bal_acnt1 == 0
resp = await nc.transfer(acnt0, acnt0.addr.data, acnt1.addr.data, 0)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
tok_bal_acnt0 = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal_acnt0 == 0
tok_bal_acnt1 = await cft.get_tok_bal(api, acnt1.addr.data, tok_id.data)
assert tok_bal_acnt1 == 1
async def test_deposit_withdraw(
self,
new_ctrt_with_tok: pv.NFTCtrt,
new_atomic_swap_ctrt: pv.AtomicSwapCtrt,
acnt0: pv.Account,
):
"""
test_deposit_withdraw tests the method deposit & withdraw.
Args:
new_ctrt_with_tok (pv.NFTCtrt): The fixture that registers a new NFT contract and issues an NFT token right after it.
new_atomic_swap_ctrt (pv.AtomicSwapCtrt): The fixture that registers a new atomic swap contract.
acnt0 (pv.Account): The account of nonce 0.
"""
nc = new_ctrt_with_tok
api = nc.chain.api
tok_id = pv.Ctrt.get_tok_id(nc.ctrt_id, pv.TokenIdx(0))
ac = new_atomic_swap_ctrt
tok_bal = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal == 1
resp = await nc.deposit(acnt0, ac.ctrt_id.data, 0)
await cft.wait_for_block()
tx_info = await api.tx.get_info(resp["id"])
assert tx_info["status"] == "Success"
tok_bal = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal == 0
deposited_tok_bal = await ac.get_ctrt_bal(acnt0.addr.data)
assert deposited_tok_bal.amount == 1
await nc.withdraw(acnt0, ac.ctrt_id.data, 0)
await cft.wait_for_block()
tok_bal = await cft.get_tok_bal(api, acnt0.addr.data, tok_id.data)
assert tok_bal == 1
deposited_tok_bal = await ac.get_ctrt_bal(acnt0.addr.data)
assert deposited_tok_bal.amount == 0
async def test_supersede(
self, new_ctrt: pv.NFTCtrt, acnt0: pv.Account, acnt1: pv.Account
):
"""
test_supersede tests the method supersede.
Args:
new_ctrt (pv.NFTCtrt): The fixture that registers a new NFT contract.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = new_ctrt
api = nc.chain.api
assert (await nc.issuer) == acnt0.addr
resp = await nc.supersede(acnt0, acnt1.addr.data)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
assert (await nc.issuer) == acnt1.addr
@pytest.mark.whole
async def test_as_whole(
self,
new_ctrt_with_tok: pv.NFTCtrt,
new_atomic_swap_ctrt: pv.AtomicSwapCtrt,
acnt0: pv.Account,
acnt1: pv.Account,
):
"""
test_as_whole tests methods of NFTCtrt as a whole so as to reduce resource consumption.
Args:
new_ctrt_with_tok (pv.NFTCtrt): The fixture that registers a new NFT contract and issues an NFT token right after it.
new_atomic_swap_ctrt (pv.AtomicSwapCtrt): The fixture that registers a new atomic swap contract.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = await self.test_register(acnt0)
await self.test_issue(nc, acnt0)
nc = new_ctrt_with_tok
ac = new_atomic_swap_ctrt
await self.test_send(nc, acnt0, acnt1)
await self.test_transfer(nc, acnt1, acnt0)
await self.test_deposit_withdraw(nc, ac, acnt0)
await self.test_supersede(nc, acnt0, acnt1)
class _TestNFTCtrtV2Base(TestNFTCtrt):
"""
_TestNFTCtrtV2Base is the collection of general functional tests of NFT contract V2.
"""
@pytest.fixture
@abc.abstractmethod
async def new_ctrt(self, acnt0: pv.Account, acnt1: pv.Account) -> pv.NFTCtrtV2Base:
"""
new_ctrt is the fixture that registers a new NFT contract V2 instance.
Args:
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
Returns:
pv.NFTCtrtV2Base: The pv.NFTCtrtV2Base instance.
"""
@pytest.fixture
@abc.abstractmethod
async def new_atomic_swap_ctrt(
self,
new_ctrt_with_tok: pv.NFTCtrtV2Blacklist,
acnt0: pv.Account,
) -> pv.AtomicSwapCtrt:
"""
new_atomic_swap_ctrt is the fixture that registers a new atomic swap contract.
Args:
new_ctrt_with_tok (pv.NFTCtrtV2Blacklist): The fixture that registers a new NFT contract and issues an NFT token right after it.
acnt0 (pv.Account): The account of nonce 0.
Returns:
pv.AtomicSwapCtrt: The AtomicSwapCtrt instance.
"""
@pytest.fixture
def arbitrary_ctrt_id(self) -> str:
"""
arbitrary_ctrt_id is the fixture that returns an arbitrary contract ID.
Returns:
str: The contract ID.
"""
return "CF5Zkj2Ycx72WrBnjrcNHvJRVwsbNX1tjgT"
async def test_supersede(
self, new_ctrt: pv.NFTCtrtV2Whitelist, acnt0: pv.Account, acnt1: pv.Account
):
"""
test_supersede tests the method supersede.
Args:
new_ctrt (pv.NFTCtrtV2Whitelist): The fixture that registers a new NFT contract V2 with whitelist.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = new_ctrt
api = nc.chain.api
assert (await nc.issuer) == acnt0.addr
assert (await nc.regulator) == acnt0.addr
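# V2 contracts track both an issuer and a regulator; supersede both roles to
# acnt1 in a single call and verify the change.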
resp = await nc.supersede(acnt0, acnt1.addr.data, acnt1.addr.data)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
assert (await nc.issuer) == acnt1.addr
assert (await nc.regulator) == acnt1.addr
async def test_update_list_user(
self, new_ctrt: pv.NFTCtrtV2Whitelist, acnt0: pv.Account, acnt1: pv.Account
):
"""
test_update_list_user tests the method update_list_user.
Args:
new_ctrt (pv.NFTCtrtV2Whitelist): The fixture that registers a new NFT contract V2 with whitelist.
acnt0 (pv.Account): The account of nonce 0.
acnt1 (pv.Account): The account of nonce 1.
"""
nc = new_ctrt
api = nc.chain.api
in_list = await nc.is_user_in_list(acnt1.addr.data)
assert in_list == False
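# Add acnt1 to the user list, verify membership, then remove it again.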
resp = await nc.update_list_user(
by=acnt0,
addr=acnt1.addr.data,
val=True,
)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
in_list = await nc.is_user_in_list(acnt1.addr.data)
assert in_list == True
resp = await nc.update_list_user(
by=acnt0,
addr=acnt1.addr.data,
val=False,
)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
in_list = await nc.is_user_in_list(acnt1.addr.data)
assert in_list == False
async def test_update_list_ctrt(
self, new_ctrt: pv.NFTCtrtV2Whitelist, acnt0: pv.Account, arbitrary_ctrt_id: str
):
"""
test_update_list_ctrt tests the method update_list_ctrt.
Args:
new_ctrt (pv.NFTCtrtV2Whitelist): The fixture that registers a new NFT contract V2 with whitelist.
acnt0 (pv.Account): The account of nonce 0.
arbitrary_ctrt_id (str): An arbitrary contract ID.
"""
nc = new_ctrt
api = nc.chain.api
target_ctrt_id = arbitrary_ctrt_id
in_list = await nc.is_ctrt_in_list(target_ctrt_id)
assert in_list == False
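# Add the arbitrary contract ID to the contract list, verify membership, then
# remove it again.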
resp = await nc.update_list_ctrt(
by=acnt0,
addr=target_ctrt_id,
val=True,
)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
in_list = await nc.is_ctrt_in_list(target_ctrt_id)
assert in_list == True
resp = await nc.update_list_ctrt(
by=acnt0,
addr=target_ctrt_id,
val=False,
)
await cft.wait_for_block()
await cft.assert_tx_success(api, resp["id"])
in_list = await nc.is_ctrt_in_list(target_ctrt_id)
assert in_list == False
[0.0744211226702,0.722495436668,-0.687358558178],
[0.0362210273743,0.761889100075,-0.646693944931],
[0.108097285032,0.757922053337,-0.643326640129],
[-0.0744211226702,0.722495436668,-0.687358558178],
[-0.108097285032,0.757922053337,-0.643326640129],
[-0.0362210273743,0.761889100075,-0.646693944931],
[0.0,0.79582041502,-0.605532705784],
[-0.0339771322906,0.82464236021,-0.564633131027],
[0.0339771322906,0.82464236021,-0.564633131027],
[-0.0362210273743,0.761889100075,-0.646693944931],
[0.0,0.79582041502,-0.605532705784],
[0.0362210273743,0.761889100075,-0.646693944931],
[-0.0399611219764,0.581926107407,-0.81225925684],
[0.0,0.634539365768,-0.772890508175],
[0.0399611219764,0.581926107407,-0.81225925684],
[-0.114190116525,0.677466154099,-0.726636230946],
[-0.0744211226702,0.722495436668,-0.687358558178],
[-0.0382858961821,0.68142670393,-0.730884253979],
[0.0382858961821,0.68142670393,-0.730884253979],
[0.0744211226702,0.722495436668,-0.687358558178],
[0.114190116525,0.677466154099,-0.726636230946],
[-0.0382858961821,0.68142670393,-0.730884253979],
[0.0382858961821,0.68142670393,-0.730884253979],
[0.0,0.634539365768,-0.772890508175],
[-0.0394601933658,0.0638479366899,-0.997179210186],
[0.0,0.130150929093,-0.991494178772],
[0.0394601933658,0.0638479366899,-0.997179210186],
[-0.121444880962,0.196501940489,-0.97295331955],
[-0.0820460245013,0.265506535769,-0.960611641407],
[-0.0407496243715,0.197802826762,-0.979394435883],
[0.0407496243715,0.197802826762,-0.979394435883],
[0.0820460245013,0.265506535769,-0.960611641407],
[0.121444880962,0.196501940489,-0.97295331955],
[-0.0407496243715,0.197802826762,-0.979394435883],
[0.0407496243715,0.197802826762,-0.979394435883],
[0.0,0.130150929093,-0.991494178772],
[-0.202408134937,0.327503234148,-0.922915279865],
[-0.162998497486,0.395605653524,-0.903840482235],
[-0.123069040477,0.33188316226,-0.935258030891],
[-0.276221334934,0.446935534477,-0.85085272789],
[-0.236761152744,0.510783493519,-0.826465010643],
[-0.20109423995,0.455528259277,-0.867211103439],
[-0.122248865664,0.461539924145,-0.878655850887],
[-0.0809632539749,0.524005174637,-0.847858190536],
[-0.0410230122507,0.464636415243,-0.88455080986],
[-0.20109423995,0.455528259277,-0.867211103439],
[-0.122248865664,0.461539924145,-0.878655850887],
[-0.162998497486,0.395605653524,-0.903840482235],
[0.123069040477,0.33188316226,-0.935258030891],
[0.162998497486,0.395605653524,-0.903840482235],
[0.202408134937,0.327503234148,-0.922915279865],
[0.0410230122507,0.464636415243,-0.88455080986],
[0.0809632539749,0.524005174637,-0.847858190536],
[0.122248865664,0.461539924145,-0.878655850887],
[0.20109423995,0.455528259277,-0.867211103439],
[0.236761152744,0.510783493519,-0.826465010643],
[0.276221334934,0.446935534477,-0.85085272789],
[0.122248865664,0.461539924145,-0.878655850887],
[0.20109423995,0.455528259277,-0.867211103439],
[0.162998497486,0.395605653524,-0.903840482235],
[-0.123069040477,0.33188316226,-0.935258030891],
[-0.04130198434,0.334140062332,-0.941618084908],
[-0.0820460245013,0.265506535769,-0.960611641407],
[-0.0410230122507,0.464636415243,-0.88455080986],
[0.0410230122507,0.464636415243,-0.88455080986],
[0.0,0.400968074799,-0.916092038155],
[0.04130198434,0.334140062332,-0.941618084908],
[0.123069040477,0.33188316226,-0.935258030891],
[0.0820460245013,0.265506535769,-0.960611641407],
[0.0,0.400968074799,-0.916092038155],
[0.04130198434,0.334140062332,-0.941618084908],
[-0.04130198434,0.334140062332,-0.941618084908],
[0.509656965733,0.0549761541188,-0.858619451523],
[0.548688352108,0.091976031661,-0.830952167511],
[0.564633131027,0.0339771322906,-0.82464236021],
[0.468421578407,0.174905076623,-0.866019308567],
[0.506734728813,0.217834427953,-0.834127128124],
[0.529480218887,0.153434738517,-0.834331154823],
[0.588087081909,0.131048902869,-0.798110127449],
[0.627150595188,0.171839639544,-0.759706020355],
[0.643326640129,0.108097285032,-0.757922053337],
[0.529480218887,0.153434738517,-0.834331154823],
[0.588087081909,0.131048902869,-0.798110127449],
[0.548688352108,0.091976031661,-0.830952167511],
[0.413926929235,0.3044308424,-0.857896447182],
[0.450116455555,0.352179646492,-0.820587992668],
[0.480284929276,0.284414708614,-0.829719662666],
[0.346611320972,0.436200261116,-0.830415487289],
[0.379529476166,0.486395716667,-0.787004828453],
[0.416404157877,0.419940322638,-0.806385576725],
[0.485873311758,0.400663375854,-0.776785671711],
[0.520354926586,0.44894811511,-0.726413309574],
[0.553625464439,0.378517180681,-0.74177056551],
[0.416404157877,0.419940322638,-0.806385576725],
[0.485873311758,0.400663375854,-0.776785671711],
[0.450116455555,0.352179646492,-0.820587992668],
[0.665048420429,0.213841319084,-0.715529501438],
[0.700865805149,0.256401896477,-0.665616393089],
[0.718357801437,0.188148602843,-0.669747889042],
[0.618283927441,0.353819847107,-0.701809465885],
[0.65135627985,0.398910075426,-0.645450055599],
[0.678621411324,0.327040165663,-0.657660841942],
[0.733673810959,0.298754066229,-0.610302150249],
[0.762617051601,0.340069264174,-0.550243675709],
[0.782811582088,0.269586592913,-0.560828924179],
[0.678621411324,0.327040165663,-0.657660841942],
[0.733673810959,0.298754066229,-0.610302150249],
[0.700865805149,0.256401896477,-0.665616393089],
[0.480284929276,0.284414708614,-0.829719662666],
[0.545040607452,0.26241543889,-0.796284377575],
[0.506734728813,0.217834427953,-0.834127128124],
[0.553625464439,0.378517180681,-0.74177056551],
[0.618283927441,0.353819847107,-0.701809465885],
[0.582528710365,0.308011889458,-0.752189457417],
[0.606988489628,0.238753452897,-0.757998526096],
[0.665048420429,0.213841319084,-0.715529501438],
[0.627150595188,0.171839639544,-0.759706020355],
[0.582528710365,0.308011889458,-0.752189457417],
[0.606988489628,0.238753452897,-0.757998526096],
[0.545040607452,0.26241543889,-0.796284377575],
[0.269586592913,0.560828924179,-0.782811582088],
[0.298754066229,0.610302150249,-0.733673810959],
[0.340069264174,0.550243675709,-0.762617051601],
[0.188148602843,0.669747889042,-0.718357801437],
[0.213841319084,0.715529501438,-0.665048420429],
[0.256401896477,0.665616393089,-0.700865805149],
[0.327040165663,0.657660841942,-0.678621411324],
[0.353819847107,0.701809465885,-0.618283927441],
[0.398910075426,0.645450055599,-0.65135627985],
[0.256401896477,0.665616393089,-0.700865805149],
[0.327040165663,0.657660841942,-0.678621411324],
[0.298754066229,0.610302150249,-0.733673810959],
[0.108097285032,0.757922053337,-0.643326640129],
[0.131048902869,0.798110127449,-0.588087081909],
[0.171839639544,0.759706020355,-0.627150595188],
[0.0339771322906,0.82464236021,-0.564633131027],
[0.0549761541188,0.858619451523,-0.509656965733],
[0.091976031661,0.830952167511,-0.548688352108],
[0.153434738517,0.834331154823,-0.529480218887],
[0.174905076623,0.866019308567,-0.468421578407],
[0.217834427953,0.834127128124,-0.506734728813],
[0.091976031661,0.830952167511,-0.548688352108],
[0.153434738517,0.834331154823,-0.529480218887],
[0.131048902869,0.798110127449,-0.588087081909],
[0.378517180681,0.74177056551,-0.553625464439],
[0.400663375854,0.776785671711,-0.485873311758],
[0.44894811511,0.726413309574,-0.520354926586],
[0.284414708614,0.829719662666,-0.480284929276],
[0.3044308424,0.857896447182,-0.413926929235],
[0.352179646492,0.820587992668,-0.450116455555],
[0.419940322638,0.806385576725,-0.416404157877],
[0.436200261116,0.830415487289,-0.346611320972],
[0.486395716667,0.787004828453,-0.379529476166],
[0.352179646492,0.820587992668,-0.450116455555],
[0.419940322638,0.806385576725,-0.416404157877],
[0.400663375854,0.776785671711,-0.485873311758],
[0.171839639544,0.759706020355,-0.627150595188],
[0.238753452897,0.757998526096,-0.606988489628],
[0.213841319084,0.715529501438,-0.665048420429],
[0.217834427953,0.834127128124,-0.506734728813],
[0.284414708614,0.829719662666,-0.480284929276],
[0.26241543889,0.796284377575,-0.545040607452],
[0.308011889458,0.752189457417,-0.582528710365],
[0.378517180681,0.74177056551,-0.553625464439],
[0.353819847107,0.701809465885,-0.618283927441],
[0.26241543889,0.796284377575,-0.545040607452],
[0.308011889458,0.752189457417,-0.582528710365],
[0.238753452897,0.757998526096,-0.606988489628],
[0.787004828453,0.379529476166,-0.486395716667],
[0.806385576725,0.416404157877,-0.419940322638],
[0.830415487289,0.346611320972,-0.436200261116],
[0.726413309574,0.520354926586,-0.44894811511],
[0.74177056551,0.553625464439,-0.378517180681],
[0.776785671711,0.485873311758,-0.400663375854],
[0.820587992668,0.450116455555,-0.352179646492],
[0.829719662666,0.480284929276,-0.284414708614],
[0.857896447182,0.413926929235,-0.3044308424],
[0.776785671711,0.485873311758,-0.400663375854],
[0.820587992668,0.450116455555,-0.352179646492],
[0.806385576725,0.416404157877,-0.419940322638],
[0.645450055599,0.65135627985,-0.398910075426],
[0.657660841942,0.678621411324,-0.327040165663],
[0.701809465885,0.618283927441,-0.353819847107],
[0.550243675709,0.762617051601,-0.340069264174],
[0.560828924179,0.782811582088,-0.269586592913],
[0.610302150249,0.733673810959,-0.298754066229],
[0.665616393089,0.700865805149,-0.256401896477],
[0.669747889042,0.718357801437,-0.188148602843],
[0.715529501438,0.665048420429,-0.213841319084],
[0.610302150249,0.733673810959,-0.298754066229],
[0.665616393089,0.700865805149,-0.256401896477],
[0.657660841942,0.678621411324,-0.327040165663],
[0.834127128124,0.506734728813,-0.217834427953],
[0.834331154823,0.529480218887,-0.153434738517],
[0.866019308567,0.468421578407,-0.174905076623],
[0.759706020355,0.627150595188,-0.171839639544],
[0.757922053337,0.643326640129,-0.108097285032],
[0.798110127449,0.588087081909,-0.131048902869],
[0.830952167511,0.548688352108,-0.091976031661],
[0.82464236021,0.564633131027,-0.0339771322906],
[0.858619451523,0.509656965733,-0.0549761541188],
[0.798110127449,0.588087081909,-0.131048902869],
[0.830952167511,0.548688352108,-0.091976031661],
[0.834331154823,0.529480218887,-0.153434738517],
[0.701809465885,0.618283927441,-0.353819847107],
[0.752189457417,0.582528710365,-0.308011889458],
[0.74177056551,0.553625464439,-0.378517180681],
[0.715529501438,0.665048420429,-0.213841319084],
[0.759706020355,0.627150595188,-0.171839639544],
[0.757998526096,0.606988489628,-0.238753452897],
[0.796284377575,0.545040607452,-0.26241543889],
[0.834127128124,0.506734728813,-0.217834427953],
[0.829719662666,0.480284929276,-0.284414708614],
[0.757998526096,0.606988489628,-0.238753452897],
[0.796284377575,0.545040607452,-0.26241543889],
[0.752189457417,0.582528710365,-0.308011889458],
[0.340069264174,0.550243675709,-0.762617051601],
[0.411682873964,0.535965919495,-0.737060189247],
[0.379529476166,0.486395716667,-0.787004828453],
[0.398910075426,0.645450055599,-0.65135627985],
[0.470621615648,0.628728508949,-0.619044244289],
[0.44230055809,0.58378881216,-0.680853009224],
[0.483050197363,0.517854511738,-0.706037700176],
[0.552667617798,0.495975226164,-0.669751524925],
[0.520354926586,0.44894811511,-0.726413309574],
[0.44230055809,0.58378881216,-0.680853009224],
[0.483050197363,0.517854511738,-0.706037700176],
[0.411682873964,0.535965919495,-0.737060189247],
[0.44894811511,0.726413309574,-0.520354926586],
[0.517854511738,0.706037700176,-0.483050197363],
[0.495975226164,0.669751524925,-0.552667617798],
[0.486395716667,0.787004828453,-0.379529476166],
[0.550243675709,0.762617051601,-0.340069264174],
[0.535965919495,0.737060189247,-0.411682873964],
[0.58378881216,0.680853009224,-0.44230055809],
[0.645450055599,0.65135627985,-0.398910075426],
[0.628728508949,0.619044244289,-0.470621615648],
[0.535965919495,0.737060189247,-0.411682873964],
[0.58378881216,0.680853009224,-0.44230055809],
[0.517854511738,0.706037700176,-0.483050197363],
[0.619044244289,0.470621615648,-0.628728508949],
[0.680853009224,0.44230055809,-0.58378881216],
[0.65135627985,0.398910075426,-0.645450055599],
[0.669751524925,0.552667617798,-0.495975226164],
[0.726413309574,0.520354926586,-0.44894811511],
[0.706037700176,0.483050197363,-0.517854511738],
[0.737060189247,0.411682873964,-0.535965919495],
[0.787004828453,0.379529476166,-0.486395716667],
[0.762617051601,0.340069264174,-0.550243675709],
[0.706037700176,0.483050197363,-0.517854511738],
[0.737060189247,0.411682873964,-0.535965919495],
[0.680853009224,0.44230055809,-0.58378881216],
[0.495975226164,0.669751524925,-0.552667617798],
[0.540649950504,0.607478022575,-0.581951975822],
[0.470621615648,0.628728508949,-0.619044244289],
[0.628728508949,0.619044244289,-0.470621615648],
[0.669751524925,0.552667617798,-0.495975226164],
[0.607478022575,0.581951975822,-0.540649950504],
[0.581951975822,0.540649950504,-0.607478022575],
[0.619044244289,0.470621615648,-0.628728508949],
[0.552667617798,0.495975226164,-0.669751524925],
[0.607478022575,0.581951975822,-0.540649950504],
[0.581951975822,0.540649950504,-0.607478022575],
[0.540649950504,0.607478022575,-0.581951975822],
[0.82464236021,0.564633131027,-0.0339771322906],
[0.79582041502,0.605532705784,0.0],
[0.82464236021,0.564633131027,0.0339771322906],
[0.757922053337,0.643326640129,-0.108097285032],
[0.722495436668,0.687358558178,-0.0744211226702],
[0.761889100075,0.646693944931,-0.0362210273743],
[0.761889100075,0.646693944931,0.0362210273743],
[0.722495436668,0.687358558178,0.0744211226702],
[0.757922053337,0.643326640129,0.108097285032],
[0.761889100075,0.646693944931,-0.0362210273743],
[0.761889100075,0.646693944931,0.0362210273743],
[0.79582041502,0.605532705784,0.0],
[0.669747889042,0.718357801437,-0.188148602843],
[0.62687343359,0.763553202152,-0.154971644282],
[0.677466154099,0.726636230946,-0.114190116525],
[0.560828924179,0.782811582088,-0.269586592913],
[0.510783493519,0.826465010643,-0.236761152744],
[0.571085453033,0.797127783298,-0.196083456278],
[0.578244268894,0.807120084763,-0.119124859571],
[0.524005174637,0.847858190536,-0.0809632539749],
[0.581926107407,0.81225925684,-0.0399611219764],
[0.571085453033,0.797127783298,-0.196083456278],
[0.578244268894,0.807120084763,-0.119124859571],
[0.62687343359,0.763553202152,-0.154971644282],
[0.677466154099,0.726636230946,0.114190116525],
[0.62687343359,0.763553202152,0.154971644282],
[0.669747889042,0.718357801437,0.188148602843],
[0.581926107407,0.81225925684,0.0399611219764],
[0.524005174637,0.847858190536,0.0809632539749],
[0.578244268894,0.807120084763,0.119124859571],
[0.571085453033,0.797127783298,0.196083456278],
[0.510783493519,0.826465010643,0.236761152744],
[0.560828924179,0.782811582088,0.269586592913],
[0.578244268894,0.807120084763,0.119124859571],
[0.571085453033,0.797127783298,0.196083456278],
[0.62687343359,0.763553202152,0.154971644282],
[0.677466154099,0.726636230946,-0.114190116525],
[0.68142670393,0.730884253979,-0.0382858961821],
[0.722495436668,0.687358558178,-0.0744211226702],
[0.581926107407,0.81225925684,-0.0399611219764],
[0.581926107407,0.81225925684,0.0399611219764],
[0.634539365768,0.772890508175,0.0],
[0.68142670393,0.730884253979,0.0382858961821],
[0.677466154099,0.726636230946,0.114190116525],
[0.722495436668,0.687358558178,0.0744211226702],
[0.634539365768,0.772890508175,0.0],
[0.68142670393,0.730884253979,0.0382858961821],
[0.68142670393,0.730884253979,-0.0382858961821],
[0.436200261116,0.830415487289,-0.346611320972],
[0.380723625422,0.869839549065,-0.313733518124],
[0.446935534477,0.85085272789,-0.276221334934],
[0.3044308424,0.857896447182,-0.413926929235],
[0.246351331472,0.891307473183,-0.380633711815],
[0.313436716795,0.883275330067,-0.348686188459],
[0.321246802807,0.905284404755,-0.277958005667],
[0.258633822203,0.935745954514,-0.239766731858],
[0.327503234148,0.922915279865,-0.202408134937],
[0.313436716795,0.883275330067,-0.348686188459],
[0.321246802807,0.905284404755,-0.277958005667],
[0.380723625422,0.869839549065,-0.313733518124],
[0.174905076623,0.866019308567,-0.468421578407],
[0.117213711143,0.892938017845,-0.434652328491],
[0.180623859167,0.894335091114,-0.409316182137],
[0.0549761541188,0.858619451523,-0.509656965733],
[0.0,0.8796184659,-0.475679844618],
[0.0568443164229,0.887796461582,-0.456712335348],
[0.0586068555713,0.915323853493,-0.398431301117],
[0.0,0.932827115059,-0.360324263573],
[0.0602079555392,0.940329909325,-0.334895044565],
[0.0568443164229,0.887796461582,-0.456712335348],
[0.0586068555713,0.915323853493,-0.398431301117],
[0.117213711143,0.892938017845,-0.434652328491],
[0.193975359201,0.960443258286,-0.199805602431],
[0.128498718143,0.978907585144,-0.158833146095],
[0.196501940489,0.97295331955,-0.121444880962],
[0.0615878328681,0.961880862713,-0.266443610191],
[0.0,0.974178731441,-0.225778326392],
[0.0626873448491,0.979053080082,-0.193714544177],
[0.0634539350867,0.991025745869,-0.117650069296],
[0.0,0.997029185295,-0.0770247355103],
[0.0638479366899,0.997179210186,-0.0394601933658],
[0.0626873448491,0.979053080082,-0.193714544177],
[0.0634539350867,0.991025745869,-0.117650069296],
[0.128498718143,0.978907585144,-0.158833146095],
[0.180623859167,0.894335091114,-0.409316182137],
[0.185843646526,0.920180141926,-0.34457308054],
[0.246351331472,0.891307473183,-0.380633711815],
[0.0602079555392,0.940329909325,-0.334895044565],
[0.0615878328681,0.961880862713,-0.266443610191],
[0.123895764351,0.943842172623,-0.306287169456],
[0.190361812711,0.942551255226,-0.274516820908],
[0.193975359201,0.960443258286,-0.199805602431],
[0.258633822203,0.935745954514,-0.239766731858],
[0.123895764351,0.943842172623,-0.306287169456],
[0.190361812711,0.942551255226,-0.274516820908],
[0.185843646526,0.920180141926,-0.34457308054],
[0.446935534477,0.85085272789,0.276221334934],
[0.380723625422,0.869839549065,0.313733518124],
[0.436200261116,0.830415487289,0.346611320972],
[0.327503234148,0.922915279865,0.202408134937],
[0.258633822203,0.935745954514,0.239766731858],
[0.321246802807,0.905284404755,0.277958005667],
[0.313436716795,0.883275330067,0.348686188459],
[0.246351331472,0.891307473183,0.380633711815],
[0.3044308424,0.857896447182,0.413926929235],
[0.321246802807,0.905284404755,0.277958005667],
[0.313436716795,0.883275330067,0.348686188459],
[0.380723625422,0.869839549065,0.313733518124],
[0.196501940489,0.97295331955,0.121444880962],
[0.128498718143,0.978907585144,0.158833146095],
[0.193975359201,0.960443258286,0.199805602431],
[0.0638479366899,0.997179210186,0.0394601933658],
[0.0,0.997029185295,0.0770247355103],
[0.0634539350867,0.991025745869,0.117650069296],
[0.0626873448491,0.979053080082,0.193714544177],
[0.0,0.974178731441,0.225778326392],
[0.0615878328681,0.961880862713,0.266443610191],
[0.0634539350867,0.991025745869,0.117650069296],
[0.0626873448491,0.979053080082,0.193714544177],
[0.128498718143,0.978907585144,0.158833146095],
[0.180623859167,0.894335091114,0.409316182137],
[0.117213711143,0.892938017845,0.434652328491],
[0.174905076623,0.866019308567,0.468421578407],
[0.0602079555392,0.940329909325,0.334895044565],
[0.0,0.932827115059,0.360324263573],
[0.0586068555713,0.915323853493,0.398431301117],
[0.0568443164229,0.887796461582,0.456712335348],
[0.0,0.8796184659,0.475679844618],
[0.0549761541188,0.858619451523,0.509656965733],
[0.0586068555713,0.915323853493,0.398431301117],
[0.0568443164229,0.887796461582,0.456712335348],
[0.117213711143,0.892938017845,0.434652328491],
[0.193975359201,0.960443258286,0.199805602431],
[0.190361812711,0.942551255226,0.274516820908],
[0.258633822203,0.935745954514,0.239766731858],
[0.0615878328681,0.961880862713,0.266443610191],
[0.0602079555392,0.940329909325,0.334895044565],
[0.123895764351,0.943842172623,0.306287169456],
[0.185843646526,0.920180141926,0.34457308054],
[0.180623859167,0.894335091114,0.409316182137],
[0.246351331472,0.891307473183,0.380633711815],
[0.123895764351,0.943842172623,0.306287169456],
[0.185843646526,0.920180141926,0.34457308054],
[0.190361812711,0.942551255226,0.274516820908],
[0.446935534477,0.85085272789,-0.276221334934],
[0.455528259277,0.867211103439,-0.20109423995],
[0.510783493519,0.826465010643,-0.236761152744],
[0.327503234148,0.922915279865,-0.202408134937],
[0.33188316226,0.935258030891,-0.123069040477],
[0.395605653524,0.903840482235,-0.162998497486],
[0.461539924145,0.878655850887,-0.122248865664],
[0.464636415243,0.88455080986,-0.0410230122507],
[0.524005174637,0.847858190536,-0.0809632539749],
[0.395605653524,0.903840482235,-0.162998497486],
[0.461539924145,0.878655850887,-0.122248865664],
[0.455528259277,0.867211103439,-0.20109423995],
[0.196501940489,0.97295331955,-0.121444880962],
[0.197802826762,0.979394435883,-0.0407496243715],
[0.265506535769,0.960611641407,-0.0820460245013],
[0.0638479366899,0.997179210186,-0.0394601933658],
[0.0638479366899,0.997179210186,0.0394601933658],
[0.130150929093,0.991494178772,0.0],
[0.197802826762,0.979394435883,0.0407496243715],
[0.196501940489,0.97295331955,0.121444880962],
[0.265506535769,0.960611641407,0.0820460245013],
[0.130150929093,0.991494178772,0.0],
[0.197802826762,0.979394435883,0.0407496243715],
[0.197802826762,0.979394435883,-0.0407496243715],
[0.464636415243,0.88455080986,0.0410230122507],
[0.461539924145,0.878655850887,0.122248865664],
[0.524005174637,0.847858190536,0.0809632539749],
[0.33188316226,0.935258030891,0.123069040477],
[0.327503234148,0.922915279865,0.202408134937],
[0.395605653524,0.903840482235,0.162998497486],
[0.455528259277,0.867211103439,0.20109423995],
[0.446935534477,0.85085272789,0.276221334934],
[0.510783493519,0.826465010643,0.236761152744],
[0.395605653524,0.903840482235,0.162998497486],
[0.455528259277,0.867211103439,0.20109423995],
[0.461539924145,0.878655850887,0.122248865664],
[0.265506535769,0.960611641407,-0.0820460245013],
[0.334140062332,0.941618084908,-0.04130198434],
[0.33188316226,0.935258030891,-0.123069040477],
[0.265506535769,0.960611641407,0.0820460245013],
[0.33188316226,0.935258030891,0.123069040477],
[0.334140062332,0.941618084908,0.04130198434],
[0.400968074799,0.916092038155,0.0],
[0.464636415243,0.88455080986,0.0410230122507],
[0.464636415243,0.88455080986,-0.0410230122507],
[0.334140062332,0.941618084908,0.04130198434],
[0.400968074799,0.916092038155,0.0],
[0.334140062332,0.941618084908,-0.04130198434],
[0.82464236021,0.564633131027,0.0339771322906],
[0.830952167511,0.548688352108,0.091976031661],
[0.858619451523,0.509656965733,0.0549761541188],
[0.757922053337,0.643326640129,0.108097285032],
[0.759706020355,0.627150595188,0.171839639544],
[0.798110127449,0.588087081909,0.131048902869],
[0.834331154823,0.529480218887,0.153434738517],
[0.834127128124,0.506734728813,0.217834427953],
[0.866019308567,0.468421578407,0.174905076623],
[0.798110127449,0.588087081909,0.131048902869],
[0.834331154823,0.529480218887,0.153434738517],
[0.830952167511,0.548688352108,0.091976031661],
[0.669747889042,0.718357801437,0.188148602843],
[0.665616393089,0.700865805149,0.256401896477],
[0.715529501438,0.665048420429,0.213841319084],
[0.560828924179,0.782811582088,0.269586592913],
[0.550243675709,0.762617051601,0.340069264174],
[0.610302150249,0.733673810959,0.298754066229],
[0.657660841942,0.678621411324,0.327040165663],
[0.645450055599,0.65135627985,0.398910075426],
[0.701809465885,0.618283927441,0.353819847107],
[0.610302150249,0.733673810959,0.298754066229],
[0.657660841942,0.678621411324,0.327040165663],
[0.665616393089,0.700865805149,0.256401896477],
[0.829719662666,0.480284929276,0.284414708614],
[0.820587992668,0.450116455555,0.352179646492],
[0.857896447182,0.413926929235,0.3044308424],
[0.74177056551,0.553625464439,0.378517180681],
[0.726413309574,0.520354926586,0.44894811511],
[0.776785671711,0.485873311758,0.400663375854],
[0.806385576725,0.416404157877,0.419940322638],
[0.787004828453,0.379529476166,0.486395716667],
[0.830415487289,0.346611320972,0.436200261116],
[0.776785671711,0.485873311758,0.400663375854],
[0.806385576725,0.416404157877,0.419940322638],
[0.820587992668,0.450116455555,0.352179646492],
[0.715529501438,0.665048420429,0.213841319084],
[0.757998526096,0.606988489628,0.238753452897],
[0.759706020355,0.627150595188,0.171839639544],
[0.701809465885,0.618283927441,0.353819847107],
[0.74177056551,0.553625464439,0.378517180681],
[0.752189457417,0.582528710365,0.308011889458],
[0.796284377575,0.545040607452,0.26241543889],
[0.829719662666,0.480284929276,0.284414708614],
[0.834127128124,0.506734728813,0.217834427953],
[0.752189457417,0.582528710365,0.308011889458],
[0.796284377575,0.545040607452,0.26241543889],
[0.757998526096,0.606988489628,0.238753452897],
[0.436200261116,0.830415487289,0.346611320972],
[0.419940322638,0.806385576725,0.416404157877],
[0.486395716667,0.787004828453,0.379529476166],
[0.3044308424,0.857896447182,0.413926929235],
[0.284414708614,0.829719662666,0.480284929276],
[0.352179646492,0.820587992668,0.450116455555],
[0.400663375854,0.776785671711,0.485873311758],
[0.378517180681,0.74177056551,0.553625464439],
[0.44894811511,0.726413309574,0.520354926586],
[0.352179646492,0.820587992668,0.450116455555],
[0.400663375854,0.776785671711,0.485873311758],
[0.419940322638,0.806385576725,0.416404157877],
[0.174905076623,0.866019308567,0.468421578407],
[0.153434738517,0.834331154823,0.529480218887],
[0.217834427953,0.834127128124,0.506734728813],
[0.0549761541188,0.858619451523,0.509656965733],
[0.0339771322906,0.82464236021,0.564633131027],
[0.091976031661,0.830952167511,0.548688352108],
[0.131048902869,0.798110127449,0.588087081909],
[0.108097285032,0.757922053337,0.643326640129],
[0.171839639544,0.759706020355,0.627150595188],
[0.091976031661,0.830952167511,0.548688352108],
[0.131048902869,0.798110127449,0.588087081909],
[0.153434738517,0.834331154823,0.529480218887],
[0.353819847107,0.701809465885,0.618283927441],
[0.327040165663,0.657660841942,0.678621411324],
[0.398910075426,0.645450055599,0.65135627985],
[0.213841319084,0.715529501438,0.665048420429],
[0.188148602843,0.669747889042,0.718357801437],
[0.256401896477,0.665616393089,0.700865805149],
[0.298754066229,0.610302150249,0.733673810959],
[0.269586592913,0.560828924179,0.782811582088],
[0.340069264174,0.550243675709,0.762617051601],
[0.256401896477,0.665616393089,0.700865805149],
[0.298754066229,0.610302150249,0.733673810959],
[0.327040165663,0.657660841942,0.678621411324],
[0.217834427953,0.834127128124,0.506734728813],
[0.26241543889,0.796284377575,0.545040607452],
[0.284414708614,0.829719662666,0.480284929276],
[0.171839639544,0.759706020355,0.627150595188],
[0.213841319084,0.715529501438,0.665048420429],
[0.238753452897,0.757998526096,0.606988489628],
[0.308011889458,0.752189457417,0.582528710365],
[0.353819847107,0.701809465885,0.618283927441],
[0.378517180681,0.74177056551,0.553625464439],
[0.238753452897,0.757998526096,0.606988489628],
[0.308011889458,0.752189457417,0.582528710365],
[0.26241543889,0.796284377575,0.545040607452],
[0.762617051601,0.340069264174,0.550243675709],
[0.733673810959,0.298754066229,0.610302150249],
[0.782811582088,0.269586592913,0.560828924179],
[0.65135627985,0.398910075426,0.645450055599],
[0.618283927441,0.353819847107,0.701809465885],
[0.678621411324,0.327040165663,0.657660841942],
[0.700865805149,0.256401896477,0.665616393089],
[0.665048420429,0.213841319084,0.715529501438],
[0.718357801437,0.188148602843,0.669747889042],
[0.678621411324,0.327040165663,0.657660841942],
[0.700865805149,0.256401896477,0.665616393089],
[0.733673810959,0.298754066229,0.610302150249],
[0.520354926586,0.44894811511,0.726413309574],
[0.485873311758,0.400663375854,0.776785671711],
[0.553625464439,0.378517180681,0.74177056551],
[0.379529476166,0.486395716667,0.787004828453],
[0.346611320972,0.436200261116,0.830415487289],
[0.416404157877,0.419940322638,0.806385576725],
[0.450116455555,0.352179646492,0.820587992668],
[0.413926929235,0.3044308424,0.857896447182],
[0.480284929276,0.284414708614,0.829719662666],
[0.416404157877,0.419940322638,0.806385576725],
[0.450116455555,0.352179646492,0.820587992668],
[0.485873311758,0.400663375854,0.776785671711],
[0.627150595188,0.171839639544,0.759706020355],
[0.588087081909,0.131048902869,0.798110127449],
[0.643326640129,0.108097285032,0.757922053337],
[0.506734728813,0.217834427953,0.834127128124],
[0.468421578407,0.174905076623,0.866019308567],
[0.529480218887,0.153434738517,0.834331154823],
[0.548688352108,0.091976031661,0.830952167511],
[0.509656965733,0.0549761541188,0.858619451523],
[0.564633131027,0.0339771322906,0.82464236021],
[0.529480218887,0.153434738517,0.834331154823],
[0.548688352108,0.091976031661,0.830952167511],
[0.588087081909,0.131048902869,0.798110127449],
[0.553625464439,0.378517180681,0.74177056551],
[0.582528710365,0.308011889458,0.752189457417],
[0.618283927441,0.353819847107,0.701809465885],
[0.480284929276,0.284414708614,0.829719662666],
[0.506734728813,0.217834427953,0.834127128124],
[0.545040607452,0.26241543889,0.796284377575],
[0.606988489628,0.238753452897,0.757998526096],
[0.627150595188,0.171839639544,0.759706020355],
[0.665048420429,0.213841319084,0.715529501438],
[0.545040607452,0.26241543889,0.796284377575],
[0.606988489628,0.238753452897,0.757998526096],
[0.582528710365,0.308011889458,0.752189457417],
[0.486395716667,0.787004828453,0.379529476166],
[0.535965919495,0.737060189247,0.411682873964],
[0.550243675709,0.762617051601,0.340069264174],
[0.44894811511,0.726413309574,0.520354926586],
[0.495975226164,0.669751524925,0.552667617798],
[0.517854511738,0.706037700176,0.483050197363],
[0.58378881216,0.680853009224,0.44230055809],
[0.628728508949,0.619044244289,0.470621615648],
[0.645450055599,0.65135627985,0.398910075426],
[0.517854511738,0.706037700176,0.483050197363],
[0.58378881216,0.680853009224,0.44230055809],
[0.535965919495,0.737060189247,0.411682873964],
[0.398910075426,0.645450055599,0.65135627985],
[0.44230055809,0.58378881216,0.680853009224],
[0.470621615648,0.628728508949,0.619044244289],
[0.340069264174,0.550243675709,0.762617051601],
[0.379529476166,0.486395716667,0.787004828453],
[0.411682873964,0.535965919495,0.737060189247],
[0.483050197363,0.517854511738,0.706037700176],
[0.520354926586,0.44894811511,0.726413309574],
[0.552667617798,0.495975226164,0.669751524925],
[0.411682873964,0.535965919495,0.737060189247],
[0.483050197363,0.517854511738,0.706037700176],
[0.44230055809,0.58378881216,0.680853009224],
[0.669751524925,0.552667617798,0.495975226164],
[0.706037700176,0.483050197363,0.517854511738],
[0.726413309574,0.520354926586,0.44894811511],
[0.619044244289,0.470621615648,0.628728508949],
[0.65135627985,0.398910075426,0.645450055599],
[0.680853009224,0.44230055809,0.58378881216],
[0.737060189247,0.411682873964,0.535965919495],
[0.762617051601,0.340069264174,0.550243675709],
[0.787004828453,0.379529476166,0.486395716667],
[0.680853009224,0.44230055809,0.58378881216],
[0.737060189247,0.411682873964,0.535965919495],
[0.706037700176,0.483050197363,0.517854511738],
[0.470621615648,0.628728508949,0.619044244289],
[0.540649950504,0.607478022575,0.581951975822],
[0.495975226164,0.669751524925,0.552667617798],
[0.552667617798,0.495975226164,0.669751524925],
[0.619044244289,0.470621615648,0.628728508949],
[0.581951975822,0.540649950504,0.607478022575],
[0.607478022575,0.581951975822,0.540649950504],
[0.669751524925,0.552667617798,0.495975226164],
[0.628728508949,0.619044244289,0.470621615648],
[0.581951975822,0.540649950504,0.607478022575],
[0.607478022575,0.581951975822,0.540649950504],
[0.540649950504,0.607478022575,0.581951975822],
[0.8796184659,-0.475679844618,0.0],
[0.887796461582,-0.456712335348,0.0568443164229],
[0.858619451523,-0.509656965733,0.0549761541188],
[0.932827115059,-0.360324263573,0.0],
[0.940329909325,-0.334895044565,0.0602079555392],
[0.915323853493,-0.398431301117,0.0586068555713],
[0.892938017845,-0.434652328491,0.117213711143],
[0.894335091114,-0.409316182137,0.180623859167],
[0.866019308567,-0.468421578407,0.174905076623],
[0.915323853493,-0.398431301117,0.0586068555713],
[0.892938017845,-0.434652328491,0.117213711143],
[0.887796461582,-0.456712335348,0.0568443164229],
[0.974178731441,-0.225778326392,0.0],
[0.979053080082,-0.193714544177,0.0626873448491],
[0.961880862713,-0.266443610191,0.0615878328681],
[0.997029185295,-0.0770247355103,0.0],
[0.997179210186,-0.0394601933658,0.0638479366899],
[0.991025745869,-0.117650069296,0.0634539350867],
[0.978907585144,-0.158833146095,0.128498718143],
[0.97295331955,-0.121444880962,0.196501940489],
[0.960443258286,-0.199805602431,0.193975359201],
[0.991025745869,-0.117650069296,0.0634539350867],
[0.978907585144,-0.158833146095,0.128498718143],
[0.979053080082,-0.193714544177,0.0626873448491],
[0.891307473183,-0.380633711815,0.246351331472],
[0.883275330067,-0.348686188459,0.313436716795],
[0.857896447182,-0.413926929235,0.3044308424],
[0.935745954514,-0.239766731858,0.258633822203],
[0.922915279865,-0.202408134937,0.327503234148],
[0.905284404755,-0.277958005667,0.321246802807],
[0.869839549065,-0.313733518124,0.380723625422],
[0.85085272789,-0.276221334934,0.446935534477],
[0.830415487289,-0.346611320972,0.436200261116],
[0.905284404755,-0.277958005667,0.321246802807],
[0.869839549065,-0.313733518124,0.380723625422],
[0.883275330067,-0.348686188459,0.313436716795],
[0.961880862713,-0.266443610191,0.0615878328681],
[0.943842172623,-0.306287169456,0.123895764351],
[0.940329909325,-0.334895044565,0.0602079555392],
[0.960443258286,-0.199805602431,0.193975359201],
[0.935745954514,-0.239766731858,0.258633822203],
[0.942551255226,-0.274516820908,0.190361812711],
[0.920180141926,-0.34457308054,0.185843646526],
[0.891307473183,-0.380633711815,0.246351331472],
[0.894335091114,-0.409316182137,0.180623859167],
[0.942551255226,-0.274516820908,0.190361812711],
[0.920180141926,-0.34457308054,0.185843646526],
[0.943842172623,-0.306287169456,0.123895764351],
[0.997029185295,0.0770247355103,0.0],
[0.991025745869,0.117650069296,0.0634539350867],
[0.997179210186,0.0394601933658,0.0638479366899],
[0.974178731441,0.225778326392,0.0],
[0.961880862713,0.266443610191,0.0615878328681],
[0.979053080082,0.193714544177,0.0626873448491],
[0.978907585144,0.158833146095,0.128498718143],
[0.960443258286,0.199805602431,0.193975359201],
[0.97295331955,0.121444880962,0.196501940489],
[0.979053080082,0.193714544177,0.0626873448491],
[0.978907585144,0.158833146095,0.128498718143],
[0.991025745869,0.117650069296,0.0634539350867],
[0.932827115059,0.360324263573,0.0],
[0.915323853493,0.398431301117,0.0586068555713],
[0.940329909325,0.334895044565,0.0602079555392],
[0.8796184659,0.475679844618,0.0],
[0.858619451523,0.509656965733,0.0549761541188],
[0.887796461582,0.456712335348,0.0568443164229],
[0.892938017845,0.434652328491,0.117213711143],
[0.866019308567,0.468421578407,0.174905076623],
[0.894335091114,0.409316182137,0.180623859167],
[0.887796461582,0.456712335348,0.0568443164229],
[0.892938017845,0.434652328491,0.117213711143],
[0.915323853493,0.398431301117,0.0586068555713],
[0.935745954514,0.239766731858,0.258633822203],
[0.905284404755,0.277958005667,0.321246802807],
[0.922915279865,0.202408134937,0.327503234148],
[0.891307473183,0.380633711815,0.246351331472],
[0.857896447182,0.413926929235,0.3044308424],
[0.883275330067,0.348686188459,0.313436716795],
[0.869839549065,0.313733518124,0.380723625422],
[0.830415487289,0.346611320972,0.436200261116],
[0.85085272789,0.276221334934,0.446935534477],
[0.883275330067,0.348686188459,0.313436716795],
[0.869839549065,0.313733518124,0.380723625422],
[0.905284404755,0.277958005667,0.321246802807],
[0.940329909325,0.334895044565,0.0602079555392],
[0.943842172623,0.306287169456,0.123895764351],
[0.961880862713,0.266443610191,0.0615878328681],
[0.894335091114,0.409316182137,0.180623859167],
[0.891307473183,0.380633711815,0.246351331472],
[0.920180141926,0.34457308054,0.185843646526],
[0.942551255226,0.274516820908,0.190361812711],
[0.935745954514,0.239766731858,0.258633822203],
[0.960443258286,0.199805602431,0.193975359201],
[0.920180141926,0.34457308054,0.185843646526],
[0.942551255226,0.274516820908,0.190361812711],
[0.943842172623,0.306287169456,0.123895764351],
[0.826465010643,-0.236761152744,0.510783493519],
[0.797127783298,-0.196083456278,0.571085453033],
[0.782811582088,-0.269586592913,0.560828924179],
[0.847858190536,-0.0809632539749,0.524005174637],
[0.81225925684,-0.0399611219764,0.581926107407],
[0.807120084763,-0.119124859571,0.578244268894],
[0.763553202152,-0.154971644282,0.62687343359],
[0.726636230946,-0.114190116525,0.677466154099],
[0.718357801437,-0.188148602843,0.669747889042],
[0.807120084763,-0.119124859571,0.578244268894],
[0.763553202152,-0.154971644282,0.62687343359],
[0.797127783298,-0.196083456278,0.571085453033],
[0.847858190536,0.0809632539749,0.524005174637],
[0.807120084763,0.119124859571,0.578244268894],
[0.81225925684,0.0399611219764,0.581926107407],
[0.826465010643,0.236761152744,0.510783493519],
[0.782811582088,0.269586592913,0.560828924179],
[0.797127783298,0.196083456278,0.571085453033],
[0.763553202152,0.154971644282,0.62687343359],
[0.718357801437,0.188148602843,0.669747889042],
[0.726636230946,0.114190116525,0.677466154099],
[0.797127783298,0.196083456278,0.571085453033],
[0.763553202152,0.154971644282,0.62687343359],
[0.807120084763,0.119124859571,0.578244268894],
[0.687358558178,-0.0744211226702,0.722495436668],
[0.646693944931,-0.0362210273743,0.761889100075],
[0.643326640129,-0.108097285032,0.757922053337],
[0.687358558178,0.0744211226702,0.722495436668],
[0.643326640129,0.108097285032,0.757922053337],
[0.646693944931,0.0362210273743,0.761889100075],
[0.605532705784,0.0,0.79582041502],
[0.564633131027,0.0339771322906,0.82464236021],
[0.564633131027,-0.0339771322906,0.82464236021],
[0.646693944931,0.0362210273743,0.761889100075],
[0.605532705784,0.0,0.79582041502],
[0.646693944931,-0.0362210273743,0.761889100075],
[0.81225925684,0.0399611219764,0.581926107407],
[0.772890508175,0.0,0.634539365768],
[0.81225925684,-0.0399611219764,0.581926107407],
[0.726636230946,0.114190116525,0.677466154099],
[0.687358558178,0.0744211226702,0.722495436668],
[0.730884253979,0.0382858961821,0.68142670393],
[0.730884253979,-0.0382858961821,0.68142670393],
[0.687358558178,-0.0744211226702,0.722495436668],
[0.726636230946,-0.114190116525,0.677466154099],
[0.730884253979,0.0382858961821,0.68142670393],
[0.730884253979,-0.0382858961821,0.68142670393],
[0.772890508175,0.0,0.634539365768],
[0.997179210186,0.0394601933658,0.0638479366899],
[0.991494178772,0.0,0.130150929093],
[0.997179210186,-0.0394601933658,0.0638479366899],
[0.97295331955,0.121444880962,0.196501940489],
[0.960611641407,0.0820460245013,0.265506535769],
[0.979394435883,0.0407496243715,0.197802826762],
[0.979394435883,-0.0407496243715,0.197802826762],
[0.960611641407,-0.0820460245013,0.265506535769],
[0.97295331955,-0.121444880962,0.196501940489],
[0.979394435883,0.0407496243715,0.197802826762],
[0.979394435883,-0.0407496243715,0.197802826762],
[0.991494178772,0.0,0.130150929093],
[0.922915279865,0.202408134937,0.327503234148],
[0.903840482235,0.162998497486,0.395605653524],
[0.935258030891,0.123069040477,0.33188316226],
[0.85085272789,0.276221334934,0.446935534477],
[0.826465010643,0.236761152744,0.510783493519],
[0.867211103439,0.20109423995,0.455528259277],
[0.878655850887,0.122248865664,0.461539924145],
[0.847858190536,0.0809632539749,0.524005174637],
[0.88455080986,0.0410230122507,0.464636415243],
[0.867211103439,0.20109423995,0.455528259277],
[0.878655850887,0.122248865664,0.461539924145],
[0.903840482235,0.162998497486,0.395605653524],
[0.935258030891,-0.123069040477,0.33188316226],
[0.903840482235,-0.162998497486,0.395605653524],
[0.922915279865,-0.202408134937,0.327503234148],
[0.88455080986,-0.0410230122507,0.464636415243],
[0.847858190536,-0.0809632539749,0.524005174637],
[0.878655850887,-0.122248865664,0.461539924145],
[0.867211103439,-0.20109423995,0.455528259277],
[0.826465010643,-0.236761152744,0.510783493519],
[0.85085272789,-0.276221334934,0.446935534477],
[0.878655850887,-0.122248865664,0.461539924145],
[0.867211103439,-0.20109423995,0.455528259277],
[0.903840482235,-0.162998497486,0.395605653524],
[0.935258030891,0.123069040477,0.33188316226],
[0.941618084908,0.04130198434,0.334140062332],
[0.960611641407,0.0820460245013,0.265506535769],
[0.88455080986,0.0410230122507,0.464636415243],
[0.88455080986,-0.0410230122507,0.464636415243],
[0.916092038155,0.0,0.400968074799],
[0.941618084908,-0.04130198434,0.334140062332],
[0.935258030891,-0.123069040477,0.33188316226],
[0.960611641407,-0.0820460245013,0.265506535769],
[0.916092038155,0.0,0.400968074799],
[0.941618084908,-0.04130198434,0.334140062332],
[0.941618084908,0.04130198434,0.334140062332],
[-0.564633131027,0.0339771322906,0.82464236021],
[-0.605532705784,0.0,0.79582041502],
[-0.564633131027,-0.0339771322906,0.82464236021],
[-0.643326640129,0.108097285032,0.757922053337],
[-0.687358558178,0.0744211226702,0.722495436668],
[-0.646693944931,0.0362210273743,0.761889100075],
[-0.646693944931,-0.0362210273743,0.761889100075],
[-0.687358558178,-0.0744211226702,0.722495436668],
[-0.643326640129,-0.108097285032,0.757922053337],
[-0.646693944931,0.0362210273743,0.761889100075],
[-0.646693944931,-0.0362210273743,0.761889100075],
[-0.605532705784,0.0,0.79582041502],
[-0.718357801437,0.188148602843,0.669747889042],
[-0.763553202152,0.154971644282,0.62687343359],
[-0.726636230946,0.114190116525,0.677466154099],
[-0.782811582088,0.269586592913,0.560828924179],
[-0.826465010643,0.236761152744,0.510783493519],
[-0.797127783298,0.196083456278,0.571085453033],
[-0.807120084763,0.119124859571,0.578244268894],
[-0.847858190536,0.0809632539749,0.524005174637],
[-0.81225925684,0.0399611219764,0.581926107407],
[-0.797127783298,0.196083456278,0.571085453033],
[-0.807120084763,0.119124859571,0.578244268894],
[-0.763553202152,0.154971644282,0.62687343359],
[-0.726636230946,-0.114190116525,0.677466154099],
[-0.763553202152,-0.154971644282,0.62687343359],
[-0.718357801437,-0.188148602843,0.669747889042],
[-0.81225925684,-0.0399611219764,0.581926107407],
[-0.847858190536,-0.0809632539749,0.524005174637],
[-0.807120084763,-0.119124859571,0.578244268894],
[-0.797127783298,-0.196083456278,0.571085453033],
[-0.826465010643,-0.236761152744,0.510783493519],
[-0.782811582088,-0.269586592913,0.560828924179],
[-0.807120084763,-0.119124859571,0.578244268894],
[-0.797127783298,-0.196083456278,0.571085453033],
[-0.763553202152,-0.154971644282,0.62687343359],
[-0.726636230946,0.114190116525,0.677466154099],
[-0.730884253979,0.0382858961821,0.68142670393],
[-0.687358558178,0.0744211226702,0.722495436668],
[-0.81225925684,0.0399611219764,0.581926107407],
[-0.81225925684,-0.0399611219764,0.581926107407],
[-0.772890508175,0.0,0.634539365768],
[-0.730884253979,-0.0382858961821,0.68142670393],
[-0.726636230946,-0.114190116525,0.677466154099],
[-0.687358558178,-0.0744211226702,0.722495436668],
[-0.772890508175,0.0,0.634539365768],
[-0.730884253979,-0.0382858961821,0.68142670393],
[-0.730884253979,0.0382858961821,0.68142670393],
[-0.830415487289,0.346611320972,0.436200261116],
[-0.869839549065,0.313733518124,0.380723625422],
[-0.85085272789,0.276221334934,0.446935534477],
[-0.857896447182,0.413926929235,0.3044308424],
[-0.891307473183,0.380633711815,0.246351331472],
[-0.883275330067,0.348686188459,0.313436716795],
[-0.905284404755,0.277958005667,0.321246802807],
[-0.935745954514,0.239766731858,0.258633822203],
[-0.922915279865,0.202408134937,0.327503234148],
[-0.883275330067,0.348686188459,0.313436716795],
[-0.905284404755,0.277958005667,0.321246802807],
[-0.869839549065,0.313733518124,0.380723625422],
[-0.866019308567,0.468421578407,0.174905076623],
[-0.892938017845,0.434652328491,0.117213711143],
[-0.894335091114,0.409316182137,0.180623859167],
[-0.858619451523,0.509656965733,0.0549761541188],
[-0.8796184659,0.475679844618,0.0],
[-0.887796461582,0.456712335348,0.0568443164229],
[-0.915323853493,0.398431301117,0.0586068555713],
[-0.932827115059,0.360324263573,0.0],
[-0.940329909325,0.334895044565,0.0602079555392],
[-0.887796461582,0.456712335348,0.0568443164229],
[-0.915323853493,0.398431301117,0.0586068555713],
[-0.892938017845,0.434652328491,0.117213711143],
[-0.960443258286,0.199805602431,0.193975359201],
[-0.978907585144,0.158833146095,0.128498718143],
[-0.97295331955,0.121444880962,0.196501940489],
[-0.961880862713,0.266443610191,0.0615878328681],
[-0.974178731441,0.225778326392,0.0],
[-0.979053080082,0.193714544177,0.0626873448491],
[-0.991025745869,0.117650069296,0.0634539350867],
[-0.997029185295,0.0770247355103,0.0],
[-0.997179210186,0.0394601933658,0.0638479366899],
[-0.979053080082,0.193714544177,0.0626873448491],
[-0.991025745869,0.117650069296,0.0634539350867],
[-0.978907585144,0.158833146095,0.128498718143],
[-0.894335091114,0.409316182137,0.180623859167],
[-0.920180141926,0.34457308054,0.185843646526],
[-0.891307473183,0.380633711815,0.246351331472],
[-0.940329909325,0.334895044565,0.0602079555392],
[-0.961880862713,0.266443610191,0.0615878328681],
[-0.943842172623,0.306287169456,0.123895764351],
[-0.942551255226,0.274516820908,0.190361812711],
[-0.960443258286,0.199805602431,0.193975359201],
[-0.935745954514,0.239766731858,0.258633822203],
[-0.943842172623,0.306287169456,0.123895764351],
[-0.942551255226,0.274516820908,0.190361812711],
[-0.920180141926,0.34457308054,0.185843646526],
[-0.85085272789,-0.276221334934,0.446935534477],
[-0.869839549065,-0.313733518124,0.380723625422],
[-0.830415487289,-0.346611320972,0.436200261116],
[-0.922915279865,-0.202408134937,0.327503234148],
[-0.935745954514,-0.239766731858,0.258633822203],
[-0.905284404755,-0.277958005667,0.321246802807],
[-0.883275330067,-0.348686188459,0.313436716795],
[-0.891307473183,-0.380633711815,0.246351331472],
[-0.857896447182,-0.413926929235,0.3044308424],
[-0.905284404755,-0.277958005667,0.321246802807],
[-0.883275330067,-0.348686188459,0.313436716795],
[-0.869839549065,-0.313733518124,0.380723625422],
[-0.97295331955,-0.121444880962,0.196501940489],
[-0.978907585144,-0.158833146095,0.128498718143],
[-0.960443258286,-0.199805602431,0.193975359201],
[-0.997179210186,-0.0394601933658,0.0638479366899],
[-0.997029185295,-0.0770247355103,0.0],
[-0.991025745869,-0.117650069296,0.0634539350867],
[-0.979053080082,-0.193714544177,0.0626873448491],
[-0.974178731441,-0.225778326392,0.0],
[-0.961880862713,-0.266443610191,0.0615878328681],
[-0.991025745869,-0.117650069296,0.0634539350867],
[-0.979053080082,-0.193714544177,0.0626873448491],
[-0.978907585144,-0.158833146095,0.128498718143],
[-0.894335091114,-0.409316182137,0.180623859167],
[-0.892938017845,-0.434652328491,0.117213711143],
[-0.866019308567,-0.468421578407,0.174905076623],
[-0.940329909325,-0.334895044565,0.0602079555392],
[-0.932827115059,-0.360324263573,0.0],
[-0.915323853493,-0.398431301117,0.0586068555713],
[-0.887796461582,-0.456712335348,0.0568443164229],
[-0.8796184659,-0.475679844618,0.0],
[-0.858619451523,-0.509656965733,0.0549761541188],
[-0.915323853493,-0.398431301117,0.0586068555713],
[-0.887796461582,-0.456712335348,0.0568443164229],
[-0.892938017845,-0.434652328491,0.117213711143],
[-0.960443258286,-0.199805602431,0.193975359201],
[-0.942551255226,-0.274516820908,0.190361812711],
[-0.935745954514,-0.239766731858,0.258633822203],
[-0.961880862713,-0.266443610191,0.0615878328681],
[-0.940329909325,-0.334895044565,0.0602079555392],
[-0.943842172623,-0.306287169456,0.123895764351],
[-0.920180141926,-0.34457308054,0.185843646526],
[-0.894335091114,-0.409316182137,0.180623859167],
[-0.891307473183,-0.380633711815,0.246351331472],
[-0.943842172623,-0.306287169456,0.123895764351],
[-0.920180141926,-0.34457308054,0.185843646526],
[-0.942551255226,-0.274516820908,0.190361812711],
[-0.85085272789,0.276221334934,0.446935534477],
[-0.867211103439,0.20109423995,0.455528259277],
[-0.826465010643,0.236761152744,0.510783493519],
[-0.922915279865,0.202408134937,0.327503234148],
[-0.935258030891,0.123069040477,0.33188316226],
[-0.903840482235,0.162998497486,0.395605653524],
[-0.878655850887,0.122248865664,0.461539924145],
[-0.88455080986,0.0410230122507,0.464636415243],
[-0.847858190536,0.0809632539749,0.524005174637],
[-0.903840482235,0.162998497486,0.395605653524],
[-0.878655850887,0.122248865664,0.461539924145],
[-0.867211103439,0.20109423995,0.455528259277],
[-0.97295331955,0.121444880962,0.196501940489],
[-0.979394435883,0.0407496243715,0.197802826762],
[-0.960611641407,0.0820460245013,0.265506535769],
[-0.997179210186,0.0394601933658,0.0638479366899],
[-0.997179210186,-0.0394601933658,0.0638479366899],
[-0.991494178772,0.0,0.130150929093],
[-0.979394435883,-0.0407496243715,0.197802826762],
[-0.97295331955,-0.121444880962,0.196501940489],
[-0.960611641407,-0.0820460245013,0.265506535769],
[-0.991494178772,0.0,0.130150929093],
[-0.979394435883,-0.0407496243715,0.197802826762],
[-0.979394435883,0.0407496243715,0.197802826762],
[-0.88455080986,-0.0410230122507,0.464636415243],
[-0.878655850887,-0.122248865664,0.461539924145],
[-0.847858190536,-0.0809632539749,0.524005174637],
[-0.935258030891,-0.123069040477,0.33188316226],
[-0.922915279865,-0.202408134937,0.327503234148],
[-0.903840482235,-0.162998497486,0.395605653524],
[-0.867211103439,-0.20109423995,0.455528259277],
[-0.85085272789,-0.276221334934,0.446935534477],
[-0.826465010643,-0.236761152744,0.510783493519],
[-0.903840482235,-0.162998497486,0.395605653524],
[-0.867211103439,-0.20109423995,0.455528259277],
[-0.878655850887,-0.122248865664,0.461539924145],
[-0.960611641407,0.0820460245013,0.265506535769],
[-0.941618084908,0.04130198434,0.334140062332],
[-0.935258030891,0.123069040477,0.33188316226],
[-0.960611641407,-0.0820460245013,0.265506535769],
[-0.935258030891,-0.123069040477,0.33188316226],
[-0.941618084908,-0.04130198434,0.334140062332],
[-0.916092038155,0.0,0.400968074799],
[-0.88455080986,-0.0410230122507,0.464636415243],
[-0.88455080986,0.0410230122507,0.464636415243],
[-0.941618084908,-0.04130198434,0.334140062332],
[-0.916092038155,0.0,0.400968074799],
[-0.941618084908,0.04130198434,0.334140062332],
[-0.509656965733,0.0549761541188,0.858619451523],
[-0.548688352108,0.091976031661,0.830952167511],
[-0.564633131027,0.0339771322906,0.82464236021],
[-0.468421578407,0.174905076623,0.866019308567],
[-0.506734728813,0.217834427953,0.834127128124],
[-0.529480218887,0.153434738517,0.834331154823],
[-0.588087081909,0.131048902869,0.798110127449],
[-0.627150595188,0.171839639544,0.759706020355],
[-0.643326640129,0.108097285032,0.757922053337],
[-0.529480218887,0.153434738517,0.834331154823],
[-0.588087081909,0.131048902869,0.798110127449],
[-0.548688352108,0.091976031661,0.830952167511],
[-0.413926929235,0.3044308424,0.857896447182],
[-0.450116455555,0.352179646492,0.820587992668],
[-0.480284929276,0.284414708614,0.829719662666],
[-0.346611320972,0.436200261116,0.830415487289],
[-0.379529476166,0.486395716667,0.787004828453],
[-0.416404157877,0.419940322638,0.806385576725],
[-0.485873311758,0.400663375854,0.776785671711],
[-0.520354926586,0.44894811511,0.726413309574],
[-0.553625464439,0.378517180681,0.74177056551],
[-0.416404157877,0.419940322638,0.806385576725],
[-0.485873311758,0.400663375854,0.776785671711],
[-0.450116455555,0.352179646492,0.820587992668],
[-0.665048420429,0.213841319084,0.715529501438],
[-0.700865805149,0.256401896477,0.665616393089],
[-0.718357801437,0.188148602843,0.669747889042],
[-0.618283927441,0.353819847107,0.701809465885],
[-0.65135627985,0.398910075426,0.645450055599],
[-0.678621411324,0.327040165663,0.657660841942],
[-0.733673810959,0.298754066229,0.610302150249],
[-0.762617051601,0.340069264174,0.550243675709],
[-0.782811582088,0.269586592913,0.560828924179],
[-0.678621411324,0.327040165663,0.657660841942],
[-0.733673810959,0.298754066229,0.610302150249],
[-0.700865805149,0.256401896477,0.665616393089],
[-0.480284929276,0.284414708614,0.829719662666],
[-0.545040607452,0.26241543889,0.796284377575],
[-0.506734728813,0.217834427953,0.834127128124],
[-0.553625464439,0.378517180681,0.74177056551],
[-0.618283927441,0.353819847107,0.701809465885],
[-0.582528710365,0.308011889458,0.752189457417],
[-0.606988489628,0.238753452897,0.757998526096],
[-0.665048420429,0.213841319084,0.715529501438],
[-0.627150595188,0.171839639544,0.759706020355],
[-0.582528710365,0.308011889458,0.752189457417],
[-0.606988489628,0.238753452897,0.757998526096],
[-0.545040607452,0.26241543889,0.796284377575],
[-0.269586592913,0.560828924179,0.782811582088],
[-0.298754066229,0.610302150249,0.733673810959],
[-0.340069264174,0.550243675709,0.762617051601],
[-0.188148602843,0.669747889042,0.718357801437],
[-0.213841319084,0.715529501438,0.665048420429],
[-0.256401896477,0.665616393089,0.700865805149],
[-0.327040165663,0.657660841942,0.678621411324],
[-0.353819847107,0.701809465885,0.618283927441],
[-0.398910075426,0.645450055599,0.65135627985],
[-0.256401896477,0.665616393089,0.700865805149],
[-0.327040165663,0.657660841942,0.678621411324],
[-0.298754066229,0.610302150249,0.733673810959],
[-0.108097285032,0.757922053337,0.643326640129],
[-0.131048902869,0.798110127449,0.588087081909],
[-0.171839639544,0.759706020355,0.627150595188],
[-0.0339771322906,0.82464236021,0.564633131027],
[-0.0549761541188,0.858619451523,0.509656965733],
[-0.091976031661,0.830952167511,0.548688352108],
[-0.153434738517,0.834331154823,0.529480218887],
[-0.174905076623,0.866019308567,0.468421578407],
[-0.217834427953,0.834127128124,0.506734728813],
[-0.091976031661,0.830952167511,0.548688352108],
[-0.153434738517,0.834331154823,0.529480218887],
[-0.131048902869,0.798110127449,0.588087081909],
[-0.378517180681,0.74177056551,0.553625464439],
[-0.400663375854,0.776785671711,0.485873311758],
[-0.44894811511,0.726413309574,0.520354926586],
[-0.284414708614,0.829719662666,0.480284929276],
[-0.3044308424,0.857896447182,0.413926929235],
[-0.352179646492,0.820587992668,0.450116455555],
[-0.419940322638,0.806385576725,0.416404157877],
[-0.436200261116,0.830415487289,0.346611320972],
[-0.486395716667,0.787004828453,0.379529476166],
[-0.352179646492,0.820587992668,0.450116455555],
[-0.419940322638,0.806385576725,0.416404157877],
[-0.400663375854,0.776785671711,0.485873311758],
[-0.171839639544,0.759706020355,0.627150595188],
[-0.238753452897,0.757998526096,0.606988489628],
[-0.213841319084,0.715529501438,0.665048420429],
[-0.217834427953,0.834127128124,0.506734728813],
[-0.284414708614,0.829719662666,0.480284929276],
[-0.26241543889,0.796284377575,0.545040607452],
[-0.308011889458,0.752189457417,0.582528710365],
[-0.378517180681,0.74177056551,0.553625464439],
[-0.353819847107,0.701809465885,0.618283927441],
[-0.26241543889,0.796284377575,0.545040607452],
[-0.308011889458,0.752189457417,0.582528710365],
[-0.238753452897,0.757998526096,0.606988489628],
[-0.787004828453,0.379529476166,0.486395716667],
[-0.806385576725,0.416404157877,0.419940322638],
[-0.830415487289,0.346611320972,0.436200261116],
[-0.726413309574,0.520354926586,0.44894811511],
[-0.74177056551,0.553625464439,0.378517180681],
[-0.776785671711,0.485873311758,0.400663375854],
[-0.820587992668,0.450116455555,0.352179646492],
[-0.829719662666,0.480284929276,0.284414708614],
[-0.857896447182,0.413926929235,0.3044308424],
[-0.776785671711,0.485873311758,0.400663375854],
[-0.820587992668,0.450116455555,0.352179646492],
[-0.806385576725,0.416404157877,0.419940322638],
[-0.645450055599,0.65135627985,0.398910075426],
[-0.657660841942,0.678621411324,0.327040165663],
[-0.701809465885,0.618283927441,0.353819847107],
[-0.550243675709,0.762617051601,0.340069264174],
[-0.560828924179,0.782811582088,0.269586592913],
[-0.610302150249,0.733673810959,0.298754066229],
[-0.665616393089,0.700865805149,0.256401896477],
[-0.669747889042,0.718357801437,0.188148602843],
[-0.715529501438,0.665048420429,0.213841319084],
[-0.610302150249,0.733673810959,0.298754066229],
[-0.665616393089,0.700865805149,0.256401896477],
[-0.657660841942,0.678621411324,0.327040165663],
[-0.834127128124,0.506734728813,0.217834427953],
[-0.834331154823,0.529480218887,0.153434738517],
[-0.866019308567,0.468421578407,0.174905076623],
[-0.759706020355,0.627150595188,0.171839639544],
[-0.757922053337,0.643326640129,0.108097285032],
[-0.798110127449,0.588087081909,0.131048902869],
[-0.830952167511,0.548688352108,0.091976031661],
[-0.82464236021,0.564633131027,0.0339771322906],
[-0.858619451523,0.509656965733,0.0549761541188],
[-0.798110127449,0.588087081909,0.131048902869],
[-0.830952167511,0.548688352108,0.091976031661],
[-0.834331154823,0.529480218887,0.153434738517],
[-0.701809465885,0.618283927441,0.353819847107],
[-0.752189457417,0.582528710365,0.308011889458],
[-0.74177056551,0.553625464439,0.378517180681],
[-0.715529501438,0.665048420429,0.213841319084],
[-0.759706020355,0.627150595188,0.171839639544],
[-0.757998526096,0.606988489628,0.238753452897],
[-0.796284377575,0.545040607452,0.26241543889],
[-0.834127128124,0.506734728813,0.217834427953],
[-0.829719662666,0.480284929276,0.284414708614],
[-0.757998526096,0.606988489628,0.238753452897],
[-0.796284377575,0.545040607452,0.26241543889],
[-0.752189457417,0.582528710365,0.308011889458],
[-0.340069264174,0.550243675709,0.762617051601],
[-0.411682873964,0.535965919495,0.737060189247],
[-0.379529476166,0.486395716667,0.787004828453],
[-0.398910075426,0.645450055599,0.65135627985],
[-0.470621615648,0.628728508949,0.619044244289],
[-0.44230055809,0.58378881216,0.680853009224],
[-0.483050197363,0.517854511738,0.706037700176],
[-0.552667617798,0.495975226164,0.669751524925],
[-0.520354926586,0.44894811511,0.726413309574],
[-0.44230055809,0.58378881216,0.680853009224],
[-0.483050197363,0.517854511738,0.706037700176],
[-0.411682873964,0.535965919495,0.737060189247],
[-0.44894811511,0.726413309574,0.520354926586],
[-0.517854511738,0.706037700176,0.483050197363],
[-0.495975226164,0.669751524925,0.552667617798],
[-0.486395716667,0.787004828453,0.379529476166],
[-0.550243675709,0.762617051601,0.340069264174],
[-0.535965919495,0.737060189247,0.411682873964],
[-0.58378881216,0.680853009224,0.44230055809],
[-0.645450055599,0.65135627985,0.398910075426],
[-0.628728508949,0.619044244289,0.470621615648],
[-0.535965919495,0.737060189247,0.411682873964],
[-0.58378881216,0.680853009224,0.44230055809],
[-0.517854511738,0.706037700176,0.483050197363],
[-0.619044244289,0.470621615648,0.628728508949],
[-0.680853009224,0.44230055809,0.58378881216],
[-0.65135627985,0.398910075426,0.645450055599],
[-0.669751524925,0.552667617798,0.495975226164],
[-0.726413309574,0.520354926586,0.44894811511],
[-0.706037700176,0.483050197363,0.517854511738],
[-0.737060189247,0.411682873964,0.535965919495],
[-0.787004828453,0.379529476166,0.486395716667],
[-0.762617051601,0.340069264174,0.550243675709],
[-0.706037700176,0.483050197363,0.517854511738],
[-0.737060189247,0.411682873964,0.535965919495],
[-0.680853009224,0.44230055809,0.58378881216],
[-0.495975226164,0.669751524925,0.552667617798],
[-0.540649950504,0.607478022575,0.581951975822],
[-0.470621615648,0.628728508949,0.619044244289],
[-0.628728508949,0.619044244289,0.470621615648],
[-0.669751524925,0.552667617798,0.495975226164],
[-0.607478022575,0.581951975822,0.540649950504],
[-0.581951975822,0.540649950504,0.607478022575],
[-0.619044244289,0.470621615648,0.628728508949],
[-0.552667617798,0.495975226164,0.669751524925],
[-0.607478022575,0.581951975822,0.540649950504],
[-0.581951975822,0.540649950504,0.607478022575],
[-0.540649950504,0.607478022575,0.581951975822],
[-0.0339771322906,0.82464236021,0.564633131027],
[0.0,0.79582041502,0.605532705784],
[0.0339771322906,0.82464236021,0.564633131027],
[-0.108097285032,0.757922053337,0.643326640129],
[-0.0744211226702,0.722495436668,0.687358558178],
[-0.0362210273743,0.761889100075,0.646693944931],
[0.0362210273743,0.761889100075,0.646693944931],
[0.0744211226702,0.722495436668,0.687358558178],
[0.108097285032,0.757922053337,0.643326640129],
[-0.0362210273743,0.761889100075,0.646693944931],
[0.0362210273743,0.761889100075,0.646693944931],
[0.0,0.79582041502,0.605532705784],
[-0.188148602843,0.669747889042,0.718357801437],
[-0.154971644282,0.62687343359,0.763553202152],
[-0.114190116525,0.677466154099,0.726636230946],
[-0.269586592913,0.560828924179,0.782811582088],
[-0.236761152744,0.510783493519,0.826465010643],
[-0.196083456278,0.571085453033,0.797127783298],
[-0.119124859571,0.578244268894,0.807120084763],
[-0.0809632539749,0.524005174637,0.847858190536],
[-0.0399611219764,0.581926107407,0.81225925684],
[-0.196083456278,0.571085453033,0.797127783298],
[-0.119124859571,0.578244268894,0.807120084763],
[-0.154971644282,0.62687343359,0.763553202152],
[0.114190116525,0.677466154099,0.726636230946],
[0.154971644282,0.62687343359,0.763553202152],
[0.188148602843,0.669747889042,0.718357801437],
[0.0399611219764,0.581926107407,0.81225925684],
[0.0809632539749,0.524005174637,0.847858190536],
[0.119124859571,0.578244268894,0.807120084763],
[0.196083456278,0.571085453033,0.797127783298],
[0.236761152744,0.510783493519,0.826465010643],
[0.269586592913,0.560828924179,0.782811582088],
[0.119124859571,0.578244268894,0.807120084763],
[0.196083456278,0.571085453033,0.797127783298],
[0.154971644282,0.62687343359,0.763553202152],
[-0.114190116525,0.677466154099,0.726636230946],
[-0.0382858961821,0.68142670393,0.730884253979],
[-0.0744211226702,0.722495436668,0.687358558178],
[-0.0399611219764,0.581926107407,0.81225925684],
[0.0399611219764,0.581926107407,0.81225925684],
[0.0,0.634539365768,0.772890508175],
[0.0382858961821,0.68142670393,0.730884253979],
[0.114190116525,0.677466154099,0.726636230946],
[0.0744211226702,0.722495436668,0.687358558178],
[0.0,0.634539365768,0.772890508175],
[0.0382858961821,0.68142670393,0.730884253979],
[-0.0382858961821,0.68142670393,0.730884253979],
[-0.346611320972,0.436200261116,0.830415487289],
[-0.313733518124,0.380723625422,0.869839549065],
[-0.276221334934,0.446935534477,0.85085272789],
[-0.413926929235,0.3044308424,0.857896447182],
[-0.380633711815,0.246351331472,0.891307473183],
[-0.348686188459,0.313436716795,0.883275330067],
[-0.277958005667,0.321246802807,0.905284404755],
[-0.239766731858,0.258633822203,0.935745954514],
#!/bin/env python
#
# output.py: functions for outputting analysis results
# Copyright (C) University of Manchester 2015-2019 <NAME>,
# <NAME> & <NAME>
#
"""
output.py
Functions for outputting analysis results
"""
from . import distances
from .Peaks import Peak
import io
import tempfile
#######################################################################
# Constants
#######################################################################
MULTI_LINE=0
SINGLE_LINE=1
FIELDS = {
'chr': "chromosome",
'start': "peak start position",
'end': "peak end position",
'id': "<FEATURE> ID",
'strand': "<FEATURE> strand direction",
'TSS': "<FEATURE> TSS position",
'TES': "<FEATURE> TES position",
'peak.id': "peak ID",
'peak.chr': "chromosome of the peak",
'peak.start': "peak start position",
'peak.end': "peak end position",
'peak.file': "file the peak was loaded from",
'feature.id': "<FEATURE> ID",
'feature.chr': "chromosome of the <FEATURE>",
'feature.start': "<FEATURE> start position",
'feature.end': "<FEATURE> end position",
'feature.TSS': "<FEATURE> TSS position",
'feature.TES': "<FEATURE> TES position",
'feature.strand': "<FEATURE> strand direction",
'feature.file': "file the <FEATURE> was loaded from",
'dist_closest': "closest distance between peak and <FEATURE> considering all edges (zero if there is overlap)",
'dist_TSS': "distance between peak and <FEATURE> TSS",
'dist_TES': "distance between peak and <FEATURE> TES",
'overlap_feature': "1 if peak overlaps the <FEATURE>, 0 if not",
'overlap_promoter': "1 if peak overlaps the promoter region, 0 if not",
'in_the_feature': "'YES' if peak overlaps the <FEATURE>, 'NO' if not",
'direction': "'U' if <SOURCE> is upstream (5') of <TARGET>; 'D' if <SOURCE> is downstream (3') of <TARGET>; '.' if overlapping",
'differentially_expressed': "1 if <FEATURE> is differentially expressed, 0 if not",
'order': "the 'order' of the <FEATURE>/peak pair (e.g. '1 of 4')",
'number_of_results': "number of hits being reported",
}
#######################################################################
# Classes
#######################################################################
class AnalysisReporter(object):
"""
Class to handle reporting of analysis results
Once initialised the reporter can be used to generate 'reports'
of each peak along with the nearest features (using the
'report_nearest_features' method) or for each feature along
with the nearest peaks (using 'report_nearest_peaks').
Output can be in either 'multi-line' (one line per result pair),
or 'single-line' format (one line containing all results).
For each method a list of the fields to be reported can be
specified. Available fields are:
- peak.id: peak ID
- (peak.)chr: chromosome for the peak
- (peak.)start: peak start position
- (peak.)end: peak end position
- peak.file: file the peak was loaded from
- (feature.)id: feature ID
- feature.chr: chromosome for the feature
- feature.start: feature start position
- feature.end: feature end position
- (feature.)TSS: feature TSS
- (feature.)TES: feature TES
- (feature.)strand: feature strand
- (feature).file: file the feature was loaded from
- dist_closest: closest distance between peak and feature
- dist_TSS: distance between peak and feature TSS
- dist_TES: distance between peak and feature TES
    - overlap_feature: 1 if peak overlaps the feature, 0 if not
    - overlap_promoter: 1 if peak overlaps the promoter region, 0 if not
    - direction: 'U' if hit is upstream, 'D' if downstream, '.' if overlapped
    - in_the_feature: 'YES' if peak overlaps the feature, 'NO' if not (the
      string equivalent of 'overlap_feature')
- 'differentially_expressed': flag value for feature
(In the field names above, the parts in (...) are optional e.g.
'chr' == 'peak.chr' etc.)
For multi-line output these additional fields are available:
- order: the 'order' of the feature/peak pair (e.g. '1 of 4')
For single-line output these additional fields are available:
- number_of_results
- list(...): output all results (peaks or features, as
appropriate)
For the 'list' options, the parentheses should enclose a list
of fields to output for each peak or feature in the list e.g.
'list(chr,start,dist_closest)' or 'list(feature.id)'.
The following fields have not been implemented:
- features_inbetween
"""
def __init__(self,mode,fields,promoter_region=None,
max_hits=None,pad=False,
null_placeholder='.',
feature_type=None):
"""
Create new AnalysisReporter instance
Arguments:
mode (int): either SINGLE_LINE or MULTI_LINE
fields (list): list of fields to output
promoter_region (tuple): promoter region extent (optional)
max_hits (int): optional maximum number of hits to
report for each set of results
null_placeholder (str): placeholder to use in output for
fields which evaluate to 'null'
pad (bool): add extra 'None' items to output hits to
pad out to max_closest results (n.b. padding is
always performed in SINGLE_LINE mode)
feature_type (str): if not 'None' then replace 'feature'
with 'feature_type' (e.g. 'gene', 'transcript' etc) in
the output
"""
self._fields = fields
self._mode = mode
self._promoter_region = promoter_region
self._placeholder = null_placeholder
self._max_hits = max_hits
self._pad = pad
self._context_peak = None
self._context_feature = None
self._is_features = None
self._feature_type = feature_type
self._max_pairs = 0
self._extra_data = None
def report_nearest(self,reference,results,**extra_data):
"""
Return details of nearest objects to a reference
This is a generic reporting method which can handle
either nearest features to a reference peak (in which
case ``reference`` should be a Peak and ``results``
the corresponding FeatureSet), or nearest peaks to a
reference Feature (when ``reference`` is a Feature and
``results`` is a PeakSet).
Arguments:
reference (Object): reference object (i.e.
Peak or Feature of interest)
results (Object): list of corresponding results
i.e. FeatureSet (for reference Peak) or
PeakSet (reference Feature)
extra_data (mapping): optional mapping defining
arbitrary data items
Yields:
string: line(s) of text reporting the results
"""
# Initialise and set the context
if isinstance(reference,Peak):
self._context_peak = reference
self._is_features = True
else:
self._context_feature = reference
self._is_features = False
self._extra_data = extra_data
is_features = self._is_features
# Store largest number of pairs reported
self._max_pairs = max(self._max_pairs,len(results))
# Reduce to maximum number of hits
if self._max_hits is not None:
results = results[:self._max_hits]
else:
results = results[:]
nresults = len(results)
# Pad with null results
if self._max_hits is not None and (self._mode == SINGLE_LINE or
self._pad):
while len(results) < self._max_hits:
if is_features:
results.addFeature(None)
else:
results.addPeak(None)
# Write the results
if self._mode == SINGLE_LINE:
# Report everything on a single line
line = []
for field in self._fields:
if field == 'number_of_results':
value = nresults
elif field.startswith('list('):
# Extract the subfields
subfields = field[:-1].split('(')[1].split(',')
# Report list of features
value = []
for result in results:
if is_features:
self._context_feature = result
else:
self._context_peak = result
for subfield in subfields:
value.append(self.value_for(subfield))
value = '\t'.join([str(x) for x in value])
else:
# All other fields
value = self.value_for(field)
line.append(str(value))
# Return (yield) the line
yield '\t'.join(line)
elif self._mode == MULTI_LINE:
# Report each result pair on a new line
i = 0
for result in results:
if is_features:
self._context_feature = result
else:
self._context_peak = result
i += 1
line = []
for field in self._fields:
if field == 'order' and result is not None:
value = '%d of %d' % (i,nresults)
else:
value = self.value_for(field)
line.append(str(value))
# Return (yield) the line
yield '\t'.join(line)
# Reset the context
self._context_peak = None
self._context_feature = None
self._is_features = None
self._extra_data = None
def report_nearest_features(self,peak,features,**extra_data):
"""
Return details of nearest features for a peak
This is a wrapper for ``report_nearest``.
Arguments:
peak (Peak): peak of interest
features (FeatureSet): list of nearest features
extra_data (mapping): optional mapping defining
arbitrary data items
Yields:
string: line(s) of text reporting the results
"""
for line in self.report_nearest(peak,features,**extra_data):
yield line
def report_nearest_peaks(self,feature,peaks,**extra_data):
"""
Return details of nearest peaks for a feature
This is a wrapper for ``report_nearest``.
Arguments:
feature (Feature): feature of interest
peaks (PeakSet): list of nearest peaks
        Yields:
            string: line(s) of text reporting the results
"""
for line in self.report_nearest(feature,peaks,**extra_data):
yield line
def value_for(self,attr):
"""
Return the value for the specified attribute
Wraps '_value_for' method, and returns the null
placeholder value in the event of an AttributeError
being raised.
Arguments:
attr (string): attribute name
Returns:
Value of the field for the current peak/feature
pair
"""
try:
return self._value_for(attr)
except (AttributeError,KeyError):
return self._placeholder
def _value_for(self,attr):
"""
Return the value for the specified attribute
Given the name of a field/attribute (see above for
a list and definition of each), return the value
for the current peak/feature pair (which should have
been set by the calling method in the '_context_peak'
and '_context_feature' properties).
Arguments:
attr (string): attribute name
Returns:
Value of the field for the current peak/feature
pair
Raises:
AttributeError: if valid ``attr`` cannot be derived
KeyError: if ``attr`` is not a recognised attribute
name
"""
peak = self._context_peak
feature = self._context_feature
extra_data = self._extra_data
is_features = self._is_features
if attr == 'peak.id':
return peak.id
elif attr == 'chr' or attr == 'peak.chr':
return peak.chrom
# Source: yinxusen/deepword
import math
import random
import sys
import time
import traceback
from os import path
from queue import Queue
from threading import Thread
from typing import Tuple, List, Union, Any, Optional, Dict, Generator
import numpy as np
import tensorflow as tf
from deeptextworld.agents.base_agent import DRRNMemoTeacher
from tensorflow import Session
from tensorflow.contrib.training import HParams
from tensorflow.summary import FileWriter
from tensorflow.train import Saver
from termcolor import colored
from tqdm import trange, tqdm
from deepword.action import ActionCollector
from deepword.agents.utils import ActionMaster
from deepword.agents.utils import Memolet
from deepword.agents.utils import batch_drrn_action_input
from deepword.agents.utils import bert_nlu_input
from deepword.agents.utils import get_action_idx_pair
from deepword.agents.utils import get_best_batch_ids
from deepword.agents.utils import get_path_tags
from deepword.agents.utils import sample_batch_ids
from deepword.hparams import save_hparams, output_hparams
from deepword.log import Logging
from deepword.students.utils import batch_dqn_input, align_batch_str
from deepword.tokenizers import init_tokens
from deepword.trajectory import Trajectory
from deepword.utils import flatten, softmax
from deepword.utils import load_uniq_lines
from deepword.utils import model_name2clazz, bytes2idx
class CMD:
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
def set(self, key, val):
self.__dict__[key] = val
def get(self, key):
return self.__dict__[key]
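# Minimal illustration (not in the original file) of the CMD kwargs holder:
#   cmd = CMD(batch_size=32, learning_rate=1e-3)
#   cmd.get("batch_size")        # -> 32
#   cmd.set("batch_size", 64)    # updates the stored value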
class StudentLearner(Logging):
def __init__(
self, hp: HParams, model_dir: str, train_data_dir: Optional[str],
eval_data_path: Optional[str] = None) -> None:
super(StudentLearner, self).__init__()
# prefix should match BaseAgent
self.tjs_prefix = "trajectories"
self.action_prefix = "actions"
self.memo_prefix = "memo"
self.hs2tj_prefix = "hs2tj"
self.model_dir = model_dir
self.train_data_dir = train_data_dir
self.eval_data_path = eval_data_path
self.load_from = path.join(self.model_dir, "last_weights")
self.ckpt_prefix = path.join(self.load_from, "after-epoch")
self.hp, self.tokenizer = init_tokens(hp)
save_hparams(self.hp, path.join(model_dir, "hparams.json"))
self.info(output_hparams(self.hp))
self.sess = None
self.model = None
self.saver = None
self.sw = None
self.train_steps = None
self.queue = None
# filter allowed gids from memory during training
# if allowed_gids is empty, use all memory
# allowed_gids.txt:
# game name [TAB] game ID
self.allowed_gids = set()
fn_allowed_gids = path.join(self.model_dir, "allowed_gids.txt")
if path.isfile(fn_allowed_gids):
self.allowed_gids = set(
[x.split("\t")[1] for x in load_uniq_lines(fn_allowed_gids)])
def _get_compatible_snapshot_tag(self, data_dir: str) -> List[int]:
action_tags = get_path_tags(data_dir, self.action_prefix)
memo_tags = get_path_tags(data_dir, self.memo_prefix)
tjs_tags = get_path_tags(data_dir, self.tjs_prefix)
hs2tj_tags = get_path_tags(data_dir, self.hs2tj_prefix)
valid_tags = set(action_tags)
valid_tags.intersection_update(memo_tags)
valid_tags.intersection_update(tjs_tags)
valid_tags.intersection_update(hs2tj_tags)
return list(valid_tags)
def _get_combined_data_path(
self, data_dir: str) -> List[Tuple[str, str, str, str]]:
valid_tags = self._get_compatible_snapshot_tag(data_dir)
combined_data_path = []
for tag in sorted(valid_tags, key=lambda k: random.random()):
combined_data_path.append(
(path.join(
data_dir, "{}-{}.npz".format(self.tjs_prefix, tag)),
path.join(
data_dir, "{}-{}.npz".format(self.action_prefix, tag)),
path.join(
data_dir, "{}-{}.npz".format(self.memo_prefix, tag)),
path.join(
data_dir, "{}-{}.npz".format(
self.hs2tj_prefix, tag))))
return combined_data_path
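    # For example (hypothetical tag 7), each tuple produced above looks like:
    #   (<data_dir>/trajectories-7.npz, <data_dir>/actions-7.npz,
    #    <data_dir>/memo-7.npz, <data_dir>/hs2tj-7.npz)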
def _prepare_model(
self, device_placement: str, training: bool,
restore_from: Optional[str] = None,
) -> Tuple[Session, Any, Saver, int]:
"""
        Create and load a model from restore_from.
        If restore_from is None, use the latest checkpoint from the
        last_weights dir of model_dir.
"""
model_clazz = model_name2clazz(self.hp.model_creator)
if training:
model = model_clazz.get_train_student_model(
hp=self.hp,
device_placement=device_placement)
else:
model = model_clazz.get_eval_student_model(
hp=self.hp,
device_placement=device_placement)
conf = tf.ConfigProto(
log_device_placement=False, allow_soft_placement=True)
sess = tf.Session(graph=model.graph, config=conf)
with model.graph.as_default():
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(
max_to_keep=self.hp.max_snapshot_to_keep,
save_relative_paths=True)
if restore_from is None:
restore_from = tf.train.latest_checkpoint(self.load_from)
if restore_from is not None:
trained_steps = model.safe_loading(sess, saver, restore_from)
else:
self.warning(colored(
"No checkpoint to load, using untrained model",
"red", "on_white", ["bold", "blink", "underline"]))
trained_steps = 0
return sess, model, saver, trained_steps
@classmethod
def lst_str2am(
cls, tj: List[str], allow_unfinished_tj: bool = False
) -> List[ActionMaster]:
tj = [""] + tj
if not allow_unfinished_tj and len(tj) % 2 != 0:
raise ValueError("wrong old trajectory: {}".format(tj))
res_tj = []
i = 0
while i < len(tj) // 2:
res_tj.append(
ActionMaster(
action=tj[i * 2], master=tj[i * 2 + 1],
action_ids=[], master_ids=[], objective_ids=[]))
i += 1
return res_tj
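    # Illustration (not from the original source): an old-style trajectory
    # ["master0", "action1", "master1"] becomes, after the "" prefix, the
    # pairs (action="", master="master0") and (action="action1",
    # master="master1").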
@classmethod
def tjs_str2am(
cls, old_tjs: Trajectory[str]) -> Trajectory[ActionMaster]:
tjs = Trajectory(num_turns=old_tjs.num_turns // 2, size_per_turn=1)
tjs.curr_tj = cls.lst_str2am(old_tjs.curr_tj, allow_unfinished_tj=True)
tjs.curr_tid = old_tjs.curr_tid
tjs.trajectories = dict([
(k, cls.lst_str2am(v)) for k, v in old_tjs.trajectories.items()])
return tjs
@classmethod
def memo_old2new(cls, old_memo: List[DRRNMemoTeacher]) -> List[Memolet]:
res = []
for m in old_memo:
mask = bytes2idx(m.action_mask, size=128)
next_mask = bytes2idx(m.next_action_mask, size=128)
res.append(Memolet(
tid=m.tid, sid=m.sid // 2, gid=m.gid, aid=m.aid,
token_id=None, a_len=None, a_type=None,
reward=m.reward, is_terminal=m.is_terminal,
action_mask=mask, sys_action_mask=None,
next_action_mask=next_mask, next_sys_action_mask=None,
q_actions=m.q_actions[mask]))
return res
@classmethod
def hs2tj_old2new(
cls, old_hs2tj: Dict[str, Dict[int, List[int]]]
) -> Dict[str, Dict[int, List[int]]]:
"""
        sids need to be halved, because old-style trajectories stored the
        action and the master as two separate entries per turn.
        Args:
            old_hs2tj: mapping of hash-state key -> {tid: [sid, ...]} built
                from old-style data
        Returns:
            the same mapping with every sid divided by two
"""
new_hs2tj = dict()
for sk in old_hs2tj:
new_hs2tj[sk] = dict()
for tid in old_hs2tj[sk]:
new_hs2tj[sk][tid] = list(np.asarray(old_hs2tj[sk][tid]) // 2)
return new_hs2tj
def _load_snapshot(
self, memo_path: str, tjs_path: str, action_path: str,
hs2tj_path: str
) -> Tuple[List[Memolet], Trajectory[ActionMaster], ActionCollector,
Dict[str, Dict[int, List[int]]]]:
memory = np.load(memo_path, allow_pickle=True)["data"]
if isinstance(memory[0], DRRNMemoTeacher):
self.warning("load old data with DRRNMemoTeacher")
return self._load_snapshot_v1(
memo_path, tjs_path, action_path, hs2tj_path)
elif isinstance(memory[0], Memolet):
self.warning("load new data with Memolet")
return self._load_snapshot_v2(
memo_path, tjs_path, action_path, hs2tj_path)
else:
raise ValueError(
"Unrecognized memory type: {}".format(type(memory[0])))
def _load_snapshot_v1(
self, memo_path: str, tjs_path: str, action_path: str,
hs2tj_path: str
) -> Tuple[List[Memolet], Trajectory[ActionMaster], ActionCollector,
Dict[str, Dict[int, List[int]]]]:
"""load snapshot for old data"""
old_memory = np.load(memo_path, allow_pickle=True)["data"]
old_memory = list(filter(
lambda x: isinstance(x, DRRNMemoTeacher), old_memory))
memory = self.memo_old2new(old_memory)
old_tjs = Trajectory(
num_turns=self.hp.num_turns * 2 + 1, size_per_turn=2)
old_tjs.load_tjs(tjs_path)
tjs = self.tjs_str2am(old_tjs)
actions = ActionCollector(
tokenizer=self.tokenizer,
n_tokens=self.hp.n_tokens_per_action,
unk_val_id=self.hp.unk_val_id,
padding_val_id=self.hp.padding_val_id)
actions.load_actions(action_path)
hs2tj = np.load(hs2tj_path, allow_pickle=True)
hash_states2tjs = self.hs2tj_old2new(hs2tj["hs2tj"][0])
return memory, tjs, actions, hash_states2tjs
def _load_snapshot_v2(
self, memo_path: str, tjs_path: str, action_path: str,
hs2tj_path: str
) -> Tuple[List[Memolet], Trajectory[ActionMaster], ActionCollector,
Dict[str, Dict[int, List[int]]]]:
memory = np.load(memo_path, allow_pickle=True)["data"]
memory = list(filter(lambda x: isinstance(x, Memolet), memory))
tjs = Trajectory(self.hp.num_turns)
tjs.load_tjs(tjs_path)
actions = ActionCollector(
tokenizer=self.tokenizer,
n_tokens=self.hp.n_tokens_per_action,
unk_val_id=self.hp.unk_val_id,
padding_val_id=self.hp.padding_val_id)
actions.load_actions(action_path)
hs2tj = np.load(hs2tj_path, allow_pickle=True)
hash_states2tjs = hs2tj["hs2tj"][0]
return memory, tjs, actions, hash_states2tjs
def _add_batch(
self, combined_data_path: List[Tuple[str, str, str]],
queue: Queue, training: bool = True,
append_new_data: bool = True) -> None:
"""
:param combined_data_path:
:param queue:
:param training:
:param append_new_data: scan train_data_dir for new data after every
epoch of training.
:return:
"""
self.info("try to add batch data: {}".format(combined_data_path))
while True:
if training and append_new_data:
new_combined_data_path = self._get_combined_data_path(
self.train_data_dir)
if set(new_combined_data_path) != set(combined_data_path):
self.info(
"update training data: {}".format(combined_data_path))
combined_data_path = new_combined_data_path
for tp, ap, mp, hsp in sorted(
combined_data_path, key=lambda k: random.random()):
memory, tjs, action_collector, _ = self._load_snapshot(
mp, tp, ap, hsp)
if training:
if not self.allowed_gids:
random.shuffle(memory)
else:
self.info(
"before gid filtering: {}".format(len(memory)))
memory = [
x for x in memory if x.gid in self.allowed_gids]
self.info("after gid filtering: {}".format(len(memory)))
random.shuffle(memory)
else:
memory = memory[:5000]
i = 0
while i < int(math.ceil(len(memory) * 1. / self.hp.batch_size)):
ss = i * self.hp.batch_size
ee = min((i + 1) * self.hp.batch_size, len(memory))
batch_memory = memory[ss:ee]
try:
queue.put(self._prepare_data(
batch_memory, tjs, action_collector),
)
except Exception as e:
self.error("add_batch error: {}".format(e))
traceback.print_tb(e.__traceback__)
raise RuntimeError()
i += 1
# only load data once if not training
if not training:
break
return
def _prepare_data(
self,
b_memory: List[Union[Tuple, Memolet]],
tjs: Trajectory[ActionMaster],
action_collector: ActionCollector) -> Tuple:
"""
Given a batch of memory, tjs, and action collector, create a tuple
of data for training.
:param b_memory:
:param tjs:
:param action_collector:
:return: Tuple of data, the train_impl knows the details
"""
raise NotImplementedError()
def _prepare_training(
self
) -> Tuple[Session, Any, Saver, FileWriter, int, Queue]:
sess, model, saver, train_steps = self._prepare_model(
"/device:GPU:0", training=True)
# save the very first model to verify weight has been loaded
if train_steps == 0:
saver.save(
sess, self.ckpt_prefix,
global_step=tf.train.get_or_create_global_step(
graph=model.graph))
else:
pass
sw_path = path.join(self.model_dir, "summaries", "train")
sw = tf.summary.FileWriter(sw_path, sess.graph)
queue = Queue(maxsize=100)
t = Thread(
target=self._add_batch,
args=(self._get_combined_data_path(self.train_data_dir), queue))
t.setDaemon(True)
t.start()
return sess, model, saver, sw, train_steps, queue
def train(self, n_epochs: int) -> None:
if self.sess is None:
(self.sess, self.model, self.saver, self.sw, self.train_steps,
self.queue) = self._prepare_training()
wait_times = 10
while wait_times > 0 and self.queue.empty():
self.info("waiting data ... (retry times: {})".format(wait_times))
time.sleep(10)
wait_times -= 1
if self.queue.empty():
self.warning("No data received. exit")
return
epoch_size = self.hp.save_gap_t
self.info("start training")
data_in_queue = True
for et in trange(n_epochs, ascii=True, desc="epoch"):
for it in trange(epoch_size, ascii=True, desc="step"):
try:
data = self.queue.get(timeout=1000)
self._train_impl(
data, self.train_steps + et * epoch_size + it)
except Exception as e:
data_in_queue = False
self.info("no more data: {}".format(e))
exc_type, exc_value, exc_traceback = sys.exc_info()
traceback.print_exception(
exc_type, exc_value, exc_traceback, limit=None,
file=sys.stdout)
break
self.saver.save(
self.sess, self.ckpt_prefix,
global_step=tf.train.get_or_create_global_step(
graph=self.model.graph))
self.info("finish and save {} epoch".format(et))
if not data_in_queue:
break
return
def _train_impl(self, data: Tuple, train_step: int) -> None:
"""
Train the model one time given data.
:param data:
:param train_step:
:return:
"""
raise NotImplementedError()
def _prepare_test(
self, device_placement: str = "/device:GPU:0",
restore_from: Optional[str] = None
) -> Tuple[Session, Any, Saver, int, Queue]:
sess, model, saver, train_steps = self._prepare_model(
device_placement, training=False, restore_from=restore_from)
queue = Queue()
return sess, model, saver, train_steps, queue
# TODO: fix the OOM problem
def preprocess_input(self, data_dir):
valid_tags = self._get_compatible_snapshot_tag(data_dir)
queue = []
for tag in valid_tags:
import copy
import numpy as np
from math import cos, sin, pi, atan2
import warnings
import matplotlib.patches as mpatches
from matplotlib.path import Path
from matplotlib.lines import Line2D
from matplotlib.transforms import Affine2D, Bbox, IdentityTransform
from matplotlib.text import Annotation
def rotated_polygon(xy, ox, oy, angle):
# angle in degree
theta = angle / 180. * pi
st = sin(theta)
ct = cos(theta)
xy = np.asarray(xy, dtype="d")
x, y = xy[:, 0], xy[:, 1]
x1 = x - ox
y1 = y - oy
x2 = ct * x1 + -st * y1
y2 = st * x1 + ct * y1
xp = x2 + ox
yp = y2 + oy
return np.hstack((xp.reshape((-1, 1)), yp.reshape((-1, 1))))
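# Quick sanity check (illustrative, not in the original file): rotating the
# point (1, 0) by 90 degrees about the origin lands on (0, 1), up to
# floating-point rounding:
#   rotated_polygon([[1.0, 0.0]], 0.0, 0.0, 90.0)  # ~ array([[0., 1.]])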
# sss3 = [s1[0] for s1 in sss2 if isinstance(s1[0], parser_ds9.Shape)]
_point_type_dict = dict(circle="o",
box="s",
diamond="D",
x="x",
cross="+",
arrow="^",
boxcircle="*")
_ds9_to_mpl_colormap = dict(green="lime",
)
def properties_func_default(shape, saved_attrs):
attr_list = copy.copy(shape.attr[0])
attr_dict = copy.copy(shape.attr[1])
attr_list.extend(saved_attrs[0])
attr_dict.update(saved_attrs[1])
color = attr_dict.get("color", None)
color = _ds9_to_mpl_colormap.get(color, color)
if shape.name == "text":
kwargs = dict(color=color,
rotation=attr_dict.get("textangle", 0),
)
font = attr_dict.get("font")
if font:
a = font.split()
if len(a) >= 3:
fontsize = float(a[1])
kwargs["fontsize"] = fontsize
elif shape.name == "point":
point_attrs = attr_dict.get("point", "boxcircle").split()
if len(point_attrs) == 1:
point_type = point_attrs[0]
point_size = 11
elif len(point_attrs) > 1:
point_type = point_attrs[0]
point_size = int(point_attrs[1])
marker = _point_type_dict.get(point_type, "o")
kwargs = dict(markeredgecolor=color,
markerfacecolor="none",
marker=marker,
markeredgewidth=int(attr_dict.get("width", 1)),
markersize=point_size
)
elif shape.name in ["line", "vector"]:
fontsize = 10 # default font size
font = attr_dict.get("font")
if font:
a = font.split()
if len(a) >= 3:
fontsize = float(a[1])
kwargs = dict(color=color,
linewidth=int(attr_dict.get("width", 1)),
mutation_scale=fontsize,
)
if int(attr_dict.get("dash", "0")):
kwargs["linestyle"] = "dashed"
else:
# The default behavior of matplotlib edgecolor has changed, and it does
        # not draw edges by default. To remedy this, simply use black edgecolor
# if None.
# https://matplotlib.org/stable/users/dflt_style_changes.html#patch-edges-and-color
if color is None:
color = "k"
kwargs = dict(edgecolor=color,
linewidth=int(attr_dict.get("width", 1)),
facecolor="none"
)
if "background" in attr_list:
kwargs["linestyle"] = "dashed"
if int(attr_dict.get("dash", "0")):
kwargs["linestyle"] = "dashed"
if shape.exclude:
kwargs["hatch"] = "/"
return kwargs
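# For a plain region (e.g. a ds9 circle) with attribute color=green and no
# width/dash/background attributes, the function above would return
# (illustrative):
#   {'edgecolor': 'lime', 'linewidth': 1, 'facecolor': 'none'}
# since ds9's "green" is remapped to matplotlib's "lime".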
def _get_text(txt, x, y, dx, dy, ha="center", va="center", **kwargs):
if "color" in kwargs:
textcolor = kwargs["color"]
del kwargs["color"]
elif "markeredgecolor" in kwargs:
textcolor = kwargs["markeredgecolor"]
else:
import matplotlib as mpl
textcolor = mpl.rcParams['text.color']
ann = Annotation(txt, (x, y), xytext=(dx, dy),
xycoords='data',
textcoords="offset points",
color=textcolor,
ha=ha, va=va,
**kwargs)
ann.set_transform(IdentityTransform())
return ann
def as_mpl_artists(shape_list,
properties_func=None,
text_offset=5.0, origin=1):
"""
Converts a region list to a list of patches and a list of artists.
Optional Keywords:
[ text_offset ] - If there is text associated with the regions, add
some vertical offset (in pixels) to the text so that it doesn't overlap
with the regions.
    Often, region files implicitly assume the lower-left corner
    of the image as a coordinate (1,1). However, the python convention
is that the array index starts from 0. By default (origin = 1),
coordinates of the returned mpl artists have coordinate shifted by
(1, 1). If you do not want this shift, set origin=0.
"""
patch_list = []
artist_list = []
if properties_func is None:
properties_func = properties_func_default
# properties for continued(? multiline?) regions
saved_attrs = None
for shape in shape_list:
patches = []
if saved_attrs is None:
_attrs = [], {}
else:
_attrs = copy.copy(saved_attrs[0]), copy.copy(saved_attrs[1])
kwargs = properties_func(shape, _attrs)
if shape.name == "composite":
saved_attrs = shape.attr
continue
if saved_attrs is None and shape.continued:
saved_attrs = shape.attr
# elif (shape.name in shape.attr[1]):
# if (shape.attr[1][shape.name] != "ignore"):
# saved_attrs = shape.attr
if not shape.continued:
saved_attrs = None
# text associated with the shape
txt = shape.attr[1].get("text")
if shape.name == "polygon":
xy = np.array(shape.coord_list)
xy.shape = -1, 2
# -1 for change origin to 0,0
patches = [mpatches.Polygon(xy - origin, closed=True, **kwargs)]
elif shape.name == "rotbox" or shape.name == "box":
xc, yc, w, h, rot = shape.coord_list
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
_box = np.array([[-w / 2., -h / 2.],
[-w / 2., h / 2.],
[w / 2., h / 2.],
[w / 2., -h / 2.]])
box = _box + [xc, yc]
rotbox = rotated_polygon(box, xc, yc, rot)
patches = [mpatches.Polygon(rotbox, closed=True, **kwargs)]
elif shape.name == "ellipse":
xc, yc = shape.coord_list[:2]
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
angle = shape.coord_list[-1]
maj_list, min_list = shape.coord_list[2:-1:2], shape.coord_list[3:-1:2]
patches = [mpatches.Ellipse((xc, yc), 2 * maj, 2 * min,
angle=angle, **kwargs)
for maj, min in zip(maj_list, min_list)]
elif shape.name == "annulus":
xc, yc = shape.coord_list[:2]
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
r_list = shape.coord_list[2:]
patches = [mpatches.Ellipse((xc, yc), 2 * r, 2 * r, **kwargs) for r in r_list]
elif shape.name == "circle":
xc, yc, major = shape.coord_list
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
patches = [mpatches.Ellipse((xc, yc), 2 * major, 2 * major, angle=0, **kwargs)]
elif shape.name == "panda":
xc, yc, a1, a2, an, r1, r2, rn = shape.coord_list
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
patches = [mpatches.Arc((xc, yc), rr * 2, rr * 2, angle=0,
theta1=a1, theta2=a2, **kwargs)
for rr in np.linspace(r1, r2, rn + 1)]
for aa in np.linspace(a1, a2, an + 1):
xx = np.array([r1, r2]) * np.cos(aa / 180. * np.pi) + xc
yy = np.array([r1, r2]) * np.sin(aa / 180. * np.pi) + yc
p = Path(np.transpose([xx, yy]))
patches.append(mpatches.PathPatch(p, **kwargs))
elif shape.name == "pie":
xc, yc, r1, r2, a1, a2 = shape.coord_list
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
patches = [mpatches.Arc((xc, yc), rr * 2, rr * 2, angle=0,
theta1=a1, theta2=a2, **kwargs)
for rr in [r1, r2]]
for aa in [a1, a2]:
xx = np.array([r1, r2]) * np.cos(aa / 180. * np.pi) + xc
yy = np.array([r1, r2]) * np.sin(aa / 180. * np.pi) + yc
p = Path(np.transpose([xx, yy]))
patches.append(mpatches.PathPatch(p, **kwargs))
elif shape.name == "epanda":
xc, yc, a1, a2, an, r11, r12, r21, r22, rn, angle = shape.coord_list
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
# mpl takes angle a1, a2 as angle as in circle before
# transformation to ellipse.
x1, y1 = cos(a1 / 180. * pi), sin(a1 / 180. * pi) * r11 / r12
x2, y2 = cos(a2 / 180. * pi), sin(a2 / 180. * pi) * r11 / r12
a1, a2 = atan2(y1, x1) / pi * 180., atan2(y2, x2) / pi * 180.
patches = [mpatches.Arc((xc, yc), rr1 * 2, rr2 * 2,
angle=angle, theta1=a1, theta2=a2,
**kwargs)
for rr1, rr2 in zip(np.linspace(r11, r21, rn + 1),
np.linspace(r12, r22, rn + 1))]
for aa in np.linspace(a1, a2, an + 1):
xx = np.array([r11, r21]) * np.cos(aa / 180. * np.pi)
yy = np.array([r11, r21]) * np.sin(aa / 180. * np.pi)
p = Path(np.transpose([xx, yy]))
tr = Affine2D().scale(1, r12 / r11).rotate_deg(angle).translate(xc, yc)
p2 = tr.transform_path(p)
patches.append(mpatches.PathPatch(p2, **kwargs))
elif shape.name == "text":
xc, yc = shape.coord_list[:2]
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
if txt:
_t = _get_text(txt, xc, yc, 0, 0, **kwargs)
artist_list.append(_t)
elif shape.name == "point":
xc, yc = shape.coord_list[:2]
# -1 for change origin to 0,0
xc, yc = xc - origin, yc - origin
artist_list.append(Line2D([xc], [yc],
**kwargs))
if txt:
textshape = copy.copy(shape)
textshape.name = "text"
textkwargs = properties_func(textshape, _attrs)
_t = _get_text(txt, xc, yc, 0, text_offset,
va="bottom",
**textkwargs)
artist_list.append(_t)
elif shape.name in ["line", "vector"]:
if shape.name == "line":
x1, y1, x2, y2 = shape.coord_list[:4]
# -1 for change origin to 0,0
x1, y1, x2, y2 = x1 - origin, y1 - origin, x2 - origin, y2 - origin
a1, a2 = shape.attr[1].get("line", "0 0").strip().split()[:2]
arrowstyle = "-"
if int(a1):
arrowstyle = "<" + arrowstyle
if int(a2):
arrowstyle = arrowstyle + ">"
],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_388})
V_1289 = Vertex(name = 'V_1289',
particles = [ P.W__minus__, P.W__plus__, P.sl3__plus__, P.sl3__minus__ ],
color = [ '1' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_414})
V_1290 = Vertex(name = 'V_1290',
particles = [ P.W__minus__, P.W__plus__, P.sd1__tilde__, P.sd1 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_196})
V_1291 = Vertex(name = 'V_1291',
particles = [ P.W__minus__, P.W__plus__, P.sd2__tilde__, P.sd2 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_223})
V_1292 = Vertex(name = 'V_1292',
particles = [ P.W__minus__, P.W__plus__, P.sd3__tilde__, P.sd3 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_252})
V_1293 = Vertex(name = 'V_1293',
particles = [ P.W__minus__, P.W__plus__, P.su1__tilde__, P.su1 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_563})
V_1294 = Vertex(name = 'V_1294',
particles = [ P.W__minus__, P.W__plus__, P.su2__tilde__, P.su2 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_615})
V_1295 = Vertex(name = 'V_1295',
particles = [ P.W__minus__, P.W__plus__, P.su3__tilde__, P.su3 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_669})
V_1296 = Vertex(name = 'V_1296',
particles = [ P.a, P.a, P.W__minus__, P.W__plus__ ],
color = [ '1' ],
lorentz = [ L.VVVV2, L.VVVV3, L.VVVV5 ],
couplings = {(0,0):C.GC_5,(0,1):C.GC_5,(0,2):C.GC_6})
V_1297 = Vertex(name = 'V_1297',
particles = [ P.W__minus__, P.W__plus__, P.Z ],
color = [ '1' ],
lorentz = [ L.VVV1, L.VVV2, L.VVV3, L.VVV4, L.VVV5, L.VVV6 ],
couplings = {(0,0):C.GC_36,(0,1):C.GC_35,(0,2):C.GC_35,(0,3):C.GC_36,(0,4):C.GC_36,(0,5):C.GC_35})
V_1298 = Vertex(name = 'V_1298',
particles = [ P.W__minus__, P.W__minus__, P.W__plus__, P.W__plus__ ],
color = [ '1' ],
lorentz = [ L.VVVV2, L.VVVV3, L.VVVV5 ],
couplings = {(0,0):C.GC_27,(0,1):C.GC_27,(0,2):C.GC_28})
V_1299 = Vertex(name = 'V_1299',
particles = [ P.t__tilde__, P.x1__plus__, P.sd6 ],
color = [ 'Identity(1,3)' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_866})
V_1300 = Vertex(name = 'V_1300',
particles = [ P.t__tilde__, P.x2__plus__, P.sd6 ],
color = [ 'Identity(1,3)' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_884})
V_1301 = Vertex(name = 'V_1301',
particles = [ P.sd3__tilde__, P.sd6, P.sl3__minus__, P.sl6__plus__ ],
color = [ 'Identity(1,2)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_949})
V_1302 = Vertex(name = 'V_1302',
particles = [ P.sd6, P.sl6__plus__, P.sv3, P.su3__tilde__ ],
color = [ 'Identity(1,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_950})
V_1303 = Vertex(name = 'V_1303',
particles = [ P.H__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.SSS1 ],
couplings = {(0,0):C.GC_2169})
V_1304 = Vertex(name = 'V_1304',
particles = [ P.G__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.SSS1 ],
couplings = {(0,0):C.GC_2168})
V_1305 = Vertex(name = 'V_1305',
particles = [ P.G__plus__, P.h01, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1027})
V_1306 = Vertex(name = 'V_1306',
particles = [ P.h02, P.H__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1028})
V_1307 = Vertex(name = 'V_1307',
particles = [ P.A0, P.G__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_965})
V_1308 = Vertex(name = 'V_1308',
particles = [ P.G0, P.H__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_966})
V_1309 = Vertex(name = 'V_1309',
particles = [ P.G__plus__, P.h02, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1985})
V_1310 = Vertex(name = 'V_1310',
particles = [ P.h01, P.H__plus__, P.sd6, P.su6__tilde__ ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1985})
V_1311 = Vertex(name = 'V_1311',
particles = [ P.vt__tilde__, P.tau__minus__, P.G__plus__ ],
color = [ '1' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_1030})
V_1312 = Vertex(name = 'V_1312',
particles = [ P.vt__tilde__, P.x1__plus__, P.sl6__minus__ ],
color = [ '1' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_865})
V_1313 = Vertex(name = 'V_1313',
particles = [ P.vt__tilde__, P.x2__plus__, P.sl6__minus__ ],
color = [ '1' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_883})
V_1314 = Vertex(name = 'V_1314',
particles = [ P.sd3, P.sd6__tilde__, P.sl3__plus__, P.sl6__minus__ ],
color = [ 'Identity(1,2)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_933})
V_1315 = Vertex(name = 'V_1315',
particles = [ P.vt__tilde__, P.tau__minus__, P.H__plus__ ],
color = [ '1' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_1987})
V_1316 = Vertex(name = 'V_1316',
particles = [ P.sd6__tilde__, P.sl6__minus__, P.sv3__tilde__, P.su3 ],
color = [ 'Identity(1,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_934})
V_1317 = Vertex(name = 'V_1317',
particles = [ P.x1__minus__, P.b__tilde__, P.su6 ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_901})
V_1318 = Vertex(name = 'V_1318',
particles = [ P.x2__minus__, P.b__tilde__, P.su6 ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.FFS3 ],
couplings = {(0,0):C.GC_917})
V_1319 = Vertex(name = 'V_1319',
particles = [ P.H__minus__, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.SSS1 ],
couplings = {(0,0):C.GC_2158})
V_1320 = Vertex(name = 'V_1320',
particles = [ P.G__minus__, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(2,3)' ],
lorentz = [ L.SSS1 ],
couplings = {(0,0):C.GC_2157})
V_1321 = Vertex(name = 'V_1321',
particles = [ P.G__minus__, P.h01, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1025})
V_1322 = Vertex(name = 'V_1322',
particles = [ P.H__minus__, P.h02, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1026})
V_1323 = Vertex(name = 'V_1323',
particles = [ P.A0, P.G__minus__, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_932})
V_1324 = Vertex(name = 'V_1324',
particles = [ P.G0, P.H__minus__, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_931})
V_1325 = Vertex(name = 'V_1325',
particles = [ P.H__minus__, P.h01, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1984})
V_1326 = Vertex(name = 'V_1326',
particles = [ P.G__minus__, P.h02, P.sd6__tilde__, P.su6 ],
color = [ 'Identity(3,4)' ],
lorentz = [ L.SSSS1 ],
couplings = {(0,0):C.GC_1984})
V_1327 = Vertex(name = 'V_1327',
particles = [ P.a, P.Z, P.G__minus__, P.G__plus__ ],
color = [ '1' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_92})
V_1328 = Vertex(name = 'V_1328',
particles = [ P.a, P.Z, P.H__minus__, P.H__plus__ ],
color = [ '1' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_92})
V_1329 = Vertex(name = 'V_1329',
particles = [ P.Z, P.A0, P.h01 ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_1024,(0,1):C.GC_1023})
V_1330 = Vertex(name = 'V_1330',
particles = [ P.Z, P.G0, P.h02 ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_1024,(0,1):C.GC_1023})
V_1331 = Vertex(name = 'V_1331',
particles = [ P.Z, P.G__minus__, P.G__plus__ ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_90,(0,1):C.GC_91})
V_1332 = Vertex(name = 'V_1332',
particles = [ P.Z, P.H__minus__, P.H__plus__ ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_90,(0,1):C.GC_91})
V_1333 = Vertex(name = 'V_1333',
particles = [ P.Z, P.sv1__tilde__, P.sv1 ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_81,(0,1):C.GC_80})
V_1334 = Vertex(name = 'V_1334',
particles = [ P.Z, P.sv2__tilde__, P.sv2 ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_81,(0,1):C.GC_80})
V_1335 = Vertex(name = 'V_1335',
particles = [ P.Z, P.sv3__tilde__, P.sv3 ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_81,(0,1):C.GC_80})
V_1336 = Vertex(name = 'V_1336',
particles = [ P.a, P.Z, P.sl1__plus__, P.sl1__minus__ ],
color = [ '1' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_381})
V_1337 = Vertex(name = 'V_1337',
particles = [ P.Z, P.sl1__plus__, P.sl1__minus__ ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_380,(0,1):C.GC_379})
V_1338 = Vertex(name = 'V_1338',
particles = [ P.a, P.Z, P.sl2__plus__, P.sl2__minus__ ],
color = [ '1' ],
lorentz = [ L.VVSS1 ],
couplings = {(0,0):C.GC_407})
V_1339 = Vertex(name = 'V_1339',
particles = [ P.Z, P.sl2__plus__, P.sl2__minus__ ],
color = [ '1' ],
lorentz = [ L.VSS1, L.VSS3 ],
couplings = {(0,0):C.GC_406,(0,1):C.GC_405})
"""
:param qw_number: number of quantum wells in the sample.
:type qw_number: int
:return: None
"""
"""
        This method turns the absorption into the absorbance per quantum well. Is
that how this data should be reported?
Also, I'm not sure if columns 1 and 2 are correct.
"""
temp_abs = -np.log(self.proc_data[:, 1] / self.proc_data[:, 2]) / qw_number
self.proc_data = np.hstack((self.proc_data, temp_abs))
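    # Worked example of the expression above (illustrative): if the ratio of
    # columns 1 and 2 is 0.5 and there are 10 quantum wells, the stored value
    # is -ln(0.5)/10 ~= 0.0693 per well.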
def fft_smooth(self, cutoff, inspectPlots=False):
"""
This function removes the Fabry-Perot that affects the absorption data
creates:
self.clean = np.array of the Fourier-filtered absorption data, freq (eV) vs. absorbance (dB!)
self.parameters['fourier cutoff'] = the low pass cutoff frequency, in eV**(-1)
:param cutoff: Fourier frequency of the cut off for the low pass filter
:type cutoff: int or float
:param inspectPlots: Do you want to see the results?
:type inspectPlots: bool
:return: None
"""
# self.fixed = -np.log10(abs(self.raw_data[:, 1]) / abs(self.ref_data[:, 1]))
# self.fixed = np.nan_to_num(self.proc_data[:, 1])
# self.fixed = np.column_stack((self.raw_data[:, 0], self.fixed))
self.parameters['fourier cutoff'] = cutoff
self.clean = low_pass_filter(self.proc_data[:, 0], self.proc_data[:, 1], cutoff, inspectPlots)
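    # Usage sketch (illustrative; instance and file names are hypothetical):
    #   spec.fft_smooth(cutoff=45, inspectPlots=True)   # cutoff in eV**(-1)
    #   spec.save_processing('sampleA_absorption', 'Processed spectra',
    #                        marker='qw_series', index=0)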
def save_processing(self, file_name, folder_str, marker='', index=''):
"""
This bad boy saves the absorption spectrum that has been manipulated.
Saves 100 lines of comments.
:param file_name: The base name of the file to be saved
:type file_name: str
:param folder_str: The name of the folder where the file will be saved
:type folder_str: str
:param marker: A further label that might be the series tag or something
:type marker: str
:param index: If multiple files are being saved with the same name, include an integer to append to the end of the file
:type index: int
:return: None
"""
try:
os.mkdir(folder_str)
except OSError as e:
if e.errno == errno.EEXIST:
pass
else:
raise
spectra_fname = file_name + '_' + marker + '_' + str(index) + '.txt'
self.save_name = spectra_fname
try:
parameter_str = json.dumps(self.parameters, sort_keys=True, indent=4, separators=(',', ': '))
except:
print("Source: EMCCD_image.save_images\nJSON FAILED")
print("Here is the dictionary that broke JSON:\n", self.parameters)
return
parameter_str = parameter_str.replace('\n', '\n#')
num_lines = parameter_str.count('#') # Make the number of lines constant so importing into Origin is easier
# for num in range(99 - num_lines): parameter_str += '\n#'
parameter_str += '\n#' * (99 - num_lines)
origin_import_spec = '\nNIR frequency,Signal,Standard error\neV,arb. u.,arb. u.'
spec_header = '#' + parameter_str + origin_import_spec
# spec_header = '#' + parameter_str + '\n#' + self.description[:-2] + origin_import_spec
np.savetxt(os.path.join(folder_str, spectra_fname), self.proc_data, delimiter=',',
header=spec_header, comments='', fmt='%0.6e')
spectra_fname = 'clean ' + spectra_fname
np.savetxt(os.path.join(folder_str, spectra_fname), self.clean, delimiter=',',
header=spec_header, comments='', fmt='%0.6e')
print("Save image.\nDirectory: {}".format(os.path.join(folder_str, spectra_fname)))
# class LaserLineCCD(HighSidebandCCD):
# """
# Class for use when doing alinging/testing by sending the laser
# directly into the CCD. Modifies how "sidebands" and guess and fit,
# simply looking at the max signal.
# """
# def guess_sidebands(self, cutoff=8, verbose=False, plot=False):
# pass
class NeonNoiseAnalysis(CCD):
"""
This class is used to make handling neon calibration lines easier. It's not great.
"""
def __init__(self, fname, spectrometer_offset=None):
# print 'opening', fname
super(NeonNoiseAnalysis, self).__init__(fname, spectrometer_offset=spectrometer_offset)
self.addenda = self.parameters['addenda']
self.subtrahenda = self.parameters['subtrahenda']
self.noise_and_signal()
self.process_stuff()
def noise_and_signal(self):
"""
This bad boy calculates the standard deviation of the space between the
neon lines.
The noise regions are, in nm:
high: 784-792
low1: 795-806
low2: 815-823
low3: 831-834
the peaks are located at, in nm:
#1, weak: 793.6
#2, medium: 794.3
#3, medium: 808.2
#4, weak: 825.9
#5, strong: 830.0
"""
print('\n\n')
self.ccd_data = np.flipud(self.ccd_data)
# self.high_noise_region = np.array(self.ccd_data[30:230, :])
self.high_noise_region = np.array(self.ccd_data[80:180, :]) # for dark current measurements
self.low_noise_region1 = np.array(self.ccd_data[380:700, :])
self.low_noise_region2 = np.array(self.ccd_data[950:1200, :])
self.low_noise_region3 = np.array(self.ccd_data[1446:1546, :])
# self.high_noise = np.std(self.high_noise_region[:, 1])
self.high_noise_std = np.std(self.high_noise_region[:, 1])
self.high_noise_sig = np.mean(self.high_noise_region[:, 1])
self.low_noise1 = np.std(self.low_noise_region1[:, 1])
self.low_noise2 = np.std(self.low_noise_region2[:, 1])
self.low_noise_std = np.std(self.low_noise_region2[:, 1])
self.low_noise_sig = np.mean(self.low_noise_region2[:, 1])
self.low_noise3 = np.std(self.low_noise_region3[:, 1])
# self.noise_list = [self.high_noise, self.low_noise1, self.low_noise2, self.low_noise3]
self.peak1 = np.array(self.ccd_data[303:323, :])
self.peak2 = np.array(self.ccd_data[319:339, :])
self.peak3 = np.array(self.ccd_data[736:746, :])
self.peak4 = np.array(self.ccd_data[1268:1288, :])
self.peak5 = np.array(self.ccd_data[1381:1421, :])
temp_max = np.argmax(self.peak1[:, 1])
self.signal1 = np.sum(self.peak1[temp_max - 1:temp_max + 2, 1])
self.error1 = np.sqrt(np.sum(self.peak1[temp_max - 1:temp_max + 2, 2] ** 2))
temp_max = np.argmax(self.peak2[:, 1])
self.signal2 = np.sum(self.peak2[temp_max - 1:temp_max + 2, 1])
self.error2 = np.sqrt(np.sum(self.peak2[temp_max - 1:temp_max + 2, 2] ** 2))
temp_max = np.argmax(self.peak3[:, 1])
self.signal3 = np.sum(self.peak3[temp_max - 1:temp_max + 2, 1])
self.error3 = np.sqrt(np.sum(self.peak3[temp_max - 1:temp_max + 2, 2] ** 2))
temp_max = np.argmax(self.peak4[:, 1])
self.signal4 = np.sum(self.peak4[temp_max - 1:temp_max + 2, 1])
self.error4 = np.sqrt(np.sum(self.peak4[temp_max - 1:temp_max + 2, 2] ** 2))
temp_max = np.argmax(self.peak5[:, 1])
self.signal5 = np.sum(self.peak5[temp_max - 1:temp_max + 2, 1])
self.error5 = np.sqrt(np.sum(self.peak5[temp_max - 1:temp_max + 2, 2] ** 2))
self.signal_list = [self.signal1, self.signal2, self.signal3, self.signal4, self.signal5]
self.error_list = [self.error1, self.error2, self.error3, self.error4, self.error5]
print("Signal list:", self.signal_list)
self.ccd_data = np.flipud(self.ccd_data)
def process_stuff(self):
"""
        This one puts high_noise_sig, high_noise_std, low_noise_sig, and low_noise_std in a nice horizontal array
"""
# self.results = np.array([self.high_noise, self.low_noise1, self.signal5, self.error5])
# average = np.mean([self.low_noise1, self.low_noise2, self.low_noise3])
# self.results = np.array([self.high_noise, self.low_noise1, self.low_noise2, self.low_noise3, self.high_noise/average])
self.results = np.array([self.high_noise_sig, self.high_noise_std, self.low_noise_sig, self.low_noise_std])
def collect_noise(neon_list, param_name, folder_name, file_name, name='Signal'):
"""
This function acts like save parameter sweep.
param_name = string that we're gonna save!
"""
# param_array = None
for elem in neon_list:
print("pname: {}".format(elem.parameters[param_name]))
print("results:", elem.results)
temp = np.insert(elem.results, 0, elem.parameters[param_name])
try:
param_array = np.row_stack((param_array, temp))
except UnboundLocalError:
param_array = np.array(temp)
if len(param_array.shape) == 1:
print("I don't think you want this file")
return
# append the relative peak error
print('\n', param_array, '\n')
param_array = np.column_stack((param_array, param_array[:, 4] / param_array[:, 3]))
# append the snr
param_array = np.column_stack((param_array, param_array[:, 3] / param_array[:, 2]))
try:
param_array = param_array[param_array[:, 0].argsort()]
except Exception:
print("param_array shape", param_array.shape)
raise
try:
os.mkdir(folder_name)
except OSError as e:
if e.errno == errno.EEXIST:
pass
else:
raise
file_name = file_name + '.txt'
origin_import1 = param_name + ",Noise,Noise,Signal,error,rel peak error,peak signal-to-noise"
# origin_import1 = param_name + ",Noise,Noise,Noise,Noise,Ratio"
origin_import2 = ",counts,counts,counts,counts,,"
# origin_import2 = ",counts,counts,counts,,"
origin_import3 = ",High noise region,Low noise region,{},{} error,{} rel error, {}".format(name, name, name, name)
# origin_import3 = ",High noise region,Low noise region 1,Low noise region 2,Low noise region 3,High/low"
header_total = origin_import1 + "\n" + origin_import2 + "\n" + origin_import3
# print "Spec header: ", spec_header
print("the param_array is:", param_array)
np.savetxt(os.path.join(folder_name, file_name), param_array, delimiter=',',
header=header_total, comments='', fmt='%0.6e')
print("Saved the file.\nDirectory: {}".format(os.path.join(folder_name, file_name)))
class HighSidebandCCD(CCD):
def __init__(self, hsg_thing, parameter_dict=None, spectrometer_offset=None):
"""
This will read the appropriate file. The header needs to be fixed to
reflect the changes to the output header from the Andor file. Because
another helper file will do the cleaning and background subtraction,
those are no longer part of this init. This also turns all wavelengths
from nm (NIR ones) or cm-1 (THz ones) into eV.
OR, if an array is thrown in there, it'll handle the array and dict
Input:
For post-processing analysis:
hsg_thing = file name of the hsg spectrum from CCD superclass
spectrometer_offset = number of nanometers the spectrometer is off by,
should be 0.0...but can be 0.2 or 1.0
For Live-software:
hsg_thing = np array of spectrum from camera
parameter_dict = equipment dict generated by software
Internal:
self.hsg_thing = the filename
self.parameters = string with all the relevant experimental parameters
self.description = the description we added to the file as the data
was being taken
self.proc_data = processed data that has been converted to frequency vs counts/pulse
self.dark_stdev = this is not currently handled appropriately
self.addenda = the list of things that have been added to the file, in
form of [constant, *spectra_added]
self.subtrahenda = the list of spectra that have been subtracted from
the file. Constant subtraction is dealt with with
self.addenda
:param hsg_thing: file name for the file to be opened. OR the actual hsg np.ndarray. Fun!
:type hsg_thing: str OR np.ndarray
:param parameter_dict: If being loaded through the data acquisition GUI, throw the dict in here
:type parameter_dict: dict
:param spectrometer_offset: Number of nm the spectrometer is off by
:type spectrometer_offset: float
:return: None, technically
"""
if isinstance(hsg_thing, str):
super(HighSidebandCCD, self).__init__(hsg_thing, spectrometer_offset=spectrometer_offset)
# TODO: fix addenda bullshit
self.addenda = []
self.subtrahenda = []
elif | |
# File: pydis/generate_types.py (from the xarkes/pydis repository)
from enum import IntEnum
class ISAExt(IntEnum):
INVALID = 0
ADOX_ADCX = 1
AES = 2
AMD = 3
AMD3DNOW = 4
AVX = 5
AVX2 = 6
AVX2GATHER = 7
AVX512BW_128 = 8
AVX512BW_128N = 9
AVX512BW_256 = 10
AVX512BW_512 = 11
AVX512BW_KOP = 12
AVX512CD_128 = 13
AVX512CD_256 = 14
AVX512CD_512 = 15
AVX512DQ_128 = 16
AVX512DQ_128N = 17
AVX512DQ_256 = 18
AVX512DQ_512 = 19
AVX512DQ_KOP = 20
AVX512DQ_SCALAR = 21
AVX512ER_512 = 22
AVX512ER_SCALAR = 23
AVX512F_128 = 24
AVX512F_128N = 25
AVX512F_256 = 26
AVX512F_512 = 27
AVX512F_KOP = 28
AVX512F_SCALAR = 29
AVX512PF_512 = 30
AVX512_4FMAPS_512 = 31
AVX512_4FMAPS_SCALAR = 32
AVX512_4VNNIW_512 = 33
AVX512_BITALG_128 = 34
AVX512_BITALG_256 = 35
AVX512_BITALG_512 = 36
AVX512_GFNI_128 = 37
AVX512_GFNI_256 = 38
AVX512_GFNI_512 = 39
AVX512_IFMA_128 = 40
AVX512_IFMA_256 = 41
AVX512_IFMA_512 = 42
AVX512_VAES_128 = 43
AVX512_VAES_256 = 44
AVX512_VAES_512 = 45
AVX512_VBMI2_128 = 46
AVX512_VBMI2_256 = 47
AVX512_VBMI2_512 = 48
AVX512_VBMI_128 = 49
AVX512_VBMI_256 = 50
AVX512_VBMI_512 = 51
AVX512_VNNI_128 = 52
AVX512_VNNI_256 = 53
AVX512_VNNI_512 = 54
AVX512_VPCLMULQDQ_128 = 55
AVX512_VPCLMULQDQ_256 = 56
AVX512_VPCLMULQDQ_512 = 57
AVX512_VPOPCNTDQ_128 = 58
AVX512_VPOPCNTDQ_256 = 59
AVX512_VPOPCNTDQ_512 = 60
AVXAES = 61
AVX_GFNI = 62
BMI1 = 63
BMI2 = 64
CET = 65
CLFLUSHOPT = 66
CLFSH = 67
CLWB = 68
CLZERO = 69
CMOV = 70
CMPXCHG16B = 71
F16C = 72
FAT_NOP = 73
FCMOV = 74
FMA = 75
FMA4 = 76
FXSAVE = 77
FXSAVE64 = 78
GFNI = 79
I186 = 80
I286PROTECTED = 81
I286REAL = 82
I386 = 83
I486 = 84
I486REAL = 85
I86 = 86
INVPCID = 87
KNCE = 88
KNCJKBR = 89
KNCSTREAM = 90
KNCV = 91
KNC_MISC = 92
KNC_PF_HINT = 93
LAHF = 94
LONGMODE = 95
LZCNT = 96
MONITOR = 97
MONITORX = 98
MOVBE = 99
MPX = 100
PAUSE = 101
PCLMULQDQ = 102
PCONFIG = 103
PENTIUMMMX = 104
PENTIUMREAL = 105
PKU = 106
POPCNT = 107
PPRO = 108
PREFETCHWT1 = 109
PREFETCH_NOP = 110
PT = 111
RDPID = 112
RDPMC = 113
RDRAND = 114
RDSEED = 115
RDTSCP = 116
RDWRFSGS = 117
RTM = 118
SGX = 119
SGX_ENCLV = 120
SHA = 121
SMAP = 122
SMX = 123
SSE = 124
SSE2 = 125
SSE2MMX = 126
SSE3 = 127
SSE3X87 = 128
SSE4 = 129
SSE42 = 130
SSE4A = 131
SSEMXCSR = 132
SSE_PREFETCH = 133
SSSE3 = 134
SSSE3MMX = 135
SVM = 136
TBM = 137
VAES = 138
VMFUNC = 139
VPCLMULQDQ = 140
VTX = 141
X87 = 142
XOP = 143
XSAVE = 144
XSAVEC = 145
XSAVEOPT = 146
XSAVES = 147
class ISASet(IntEnum):
INVALID = 0
ADOX_ADCX = 1
AES = 2
AMD3DNOW = 3
AVX = 4
AVX2 = 5
AVX2GATHER = 6
AVX512EVEX = 7
AVX512VEX = 8
AVXAES = 9
BASE = 10
BMI1 = 11
BMI2 = 12
CET = 13
CLFLUSHOPT = 14
CLFSH = 15
CLWB = 16
CLZERO = 17
F16C = 18
FMA = 19
FMA4 = 20
GFNI = 21
INVPCID = 22
KNC = 23
KNCE = 24
KNCV = 25
LONGMODE = 26
LZCNT = 27
MMX = 28
MONITOR = 29
MONITORX = 30
MOVBE = 31
MPX = 32
PAUSE = 33
PCLMULQDQ = 34
PCONFIG = 35
PKU = 36
PREFETCHWT1 = 37
PT = 38
RDPID = 39
RDRAND = 40
RDSEED = 41
RDTSCP = 42
RDWRFSGS = 43
RTM = 44
SGX = 45
SGX_ENCLV = 46
SHA = 47
SMAP = 48
SMX = 49
SSE = 50
SSE2 = 51
SSE3 = 52
SSE4 = 53
SSE4A = 54
SSSE3 = 55
SVM = 56
TBM = 57
VAES = 58
VMFUNC = 59
VPCLMULQDQ = 60
VTX = 61
X87 = 62
XOP = 63
XSAVE = 64
XSAVEC = 65
XSAVEOPT = 66
XSAVES = 67
class InstructionCategory(IntEnum):
INVALID = 0
ADOX_ADCX = 1
AES = 2
AMD3DNOW = 3
AVX = 4
AVX2 = 5
AVX2GATHER = 6
AVX512 = 7
AVX512_4FMAPS = 8
AVX512_4VNNIW = 9
AVX512_BITALG = 10
AVX512_VBMI = 11
BINARY = 12
BITBYTE = 13
BLEND = 14
BMI1 = 15
BMI2 = 16
BROADCAST = 17
CALL = 18
CET = 19
CLFLUSHOPT = 20
CLWB = 21
CLZERO = 22
CMOV = 23
COMPRESS = 24
COND_BR = 25
CONFLICT = 26
CONVERT = 27
DATAXFER = 28
DECIMAL = 29
EXPAND = 30
FCMOV = 31
FLAGOP = 32
FMA4 = 33
GATHER = 34
GFNI = 35
IFMA = 36
INTERRUPT = 37
IO = 38
IOSTRINGOP = 39
KMASK = 40
KNC = 41
KNCMASK = 42
KNCSCALAR = 43
LOGICAL = 44
LOGICAL_FP = 45
LZCNT = 46
MISC = 47
MMX = 48
MPX = 49
NOP = 50
PCLMULQDQ = 51
PCONFIG = 52
PKU = 53
POP = 54
PREFETCH = 55
PREFETCHWT1 = 56
PT = 57
PUSH = 58
RDPID = 59
RDRAND = 60
RDSEED = 61
RDWRFSGS = 62
RET = 63
ROTATE = 64
SCATTER = 65
SEGOP = 66
SEMAPHORE = 67
SETCC = 68
SGX = 69
SHA = 70
SHIFT = 71
SMAP = 72
SSE = 73
STRINGOP = 74
STTNI = 75
SYSCALL = 76
SYSRET = 77
SYSTEM = 78
TBM = 79
UFMA = 80
UNCOND_BR = 81
VAES = 82
VBMI2 = 83
VFMA = 84
VPCLMULQDQ = 85
VTX = 86
WIDENOP = 87
X87_ALU = 88
XOP = 89
XSAVE = 90
XSAVEOPT = 91
class Mnemonic(IntEnum):
INVALID = 0
AAA = 1
AAD = 2
AAM = 3
AAS = 4
ADC = 5
ADCX = 6
ADD = 7
ADDPD = 8
ADDPS = 9
ADDSD = 10
ADDSS = 11
ADDSUBPD = 12
ADDSUBPS = 13
ADOX = 14
AESDEC = 15
AESDECLAST = 16
AESENC = 17
AESENCLAST = 18
AESIMC = 19
AESKEYGENASSIST = 20
AND = 21
ANDN = 22
ANDNPD = 23
ANDNPS = 24
ANDPD = 25
ANDPS = 26
ARPL = 27
BEXTR = 28
BLCFILL = 29
BLCI = 30
BLCIC = 31
BLCMSK = 32
BLCS = 33
BLENDPD = 34
BLENDPS = 35
BLENDVPD = 36
BLENDVPS = 37
BLSFILL = 38
BLSI = 39
BLSIC = 40
BLSMSK = 41
BLSR = 42
BNDCL = 43
BNDCN = 44
BNDCU = 45
BNDLDX = 46
BNDMK = 47
BNDMOV = 48
BNDSTX = 49
BOUND = 50
BSF = 51
BSR = 52
BSWAP = 53
BT = 54
BTC = 55
BTR = 56
BTS = 57
BZHI = 58
CALL = 59
CBW = 60
CDQ = 61
CDQE = 62
CLAC = 63
CLC = 64
CLD = 65
CLEVICT0 = 66
CLEVICT1 = 67
CLFLUSH = 68
CLFLUSHOPT = 69
CLGI = 70
CLI = 71
CLRSSBSY = 72
CLTS = 73
CLWB = 74
CLZERO = 75
CMC = 76
CMOVB = 77
CMOVBE = 78
CMOVL = 79
CMOVLE = 80
CMOVNB = 81
CMOVNBE = 82
CMOVNL = 83
CMOVNLE = 84
CMOVNO = 85
CMOVNP = 86
CMOVNS = 87
CMOVNZ = 88
CMOVO = 89
CMOVP = 90
CMOVS = 91
CMOVZ = 92
CMP = 93
CMPPD = 94
CMPPS = 95
CMPSB = 96
CMPSD = 97
CMPSQ = 98
CMPSS = 99
CMPSW = 100
CMPXCHG = 101
CMPXCHG16B = 102
CMPXCHG8B = 103
COMISD = 104
COMISS = 105
CPUID = 106
CQO = 107
CRC32 = 108
CVTDQ2PD = 109
CVTDQ2PS = 110
CVTPD2DQ = 111
CVTPD2PI = 112
CVTPD2PS = 113
CVTPI2PD = 114
CVTPI2PS = 115
| |
"""
This module is the computational part of the geometrical module of ToFu
"""
# Built-in
import sys
import warnings
# Common
import numpy as np
import scipy.interpolate as scpinterp
import scipy.integrate as scpintg
import matplotlib.pyplot as plt  # needed by the optional Plot branch in _Ves_get_InsideConvexPoly below
if sys.version[0]=='3':
from inspect import signature as insp
elif sys.version[0]=='2':
from inspect import getargspec as insp
# ToFu-specific
try:
import tofu.geom._def as _def
import tofu.geom._GG as _GG
except Exception:
from . import _def as _def
from . import _GG as _GG
"""
###############################################################################
###############################################################################
Ves functions
###############################################################################
"""
############################################
##### Ves sub-functions
############################################
def _Struct_set_Poly(Poly, pos=None, extent=None, arrayorder='C',
Type='Tor', Clock=False):
""" Compute geometrical attributes of a Struct object """
# Make Poly closed, counter-clockwise, with '(cc,N)' layout and arrayorder
Poly = _GG.Poly_Order(Poly, order='C', Clock=False,
close=True, layout='(cc,N)', Test=True)
assert Poly.shape[0]==2, "Arg Poly must be a 2D polygon !"
fPfmt = np.ascontiguousarray if arrayorder=='C' else np.asfortranarray
# Get all remarkable points and moments
NP = Poly.shape[1]-1
P1Max = Poly[:,np.argmax(Poly[0,:])]
P1Min = Poly[:,np.argmin(Poly[0,:])]
P2Max = Poly[:,np.argmax(Poly[1,:])]
P2Min = Poly[:,np.argmin(Poly[1,:])]
BaryP = np.sum(Poly[:,:-1],axis=1,keepdims=False)/(Poly.shape[1]-1)
BaryL = np.array([(P1Max[0]+P1Min[0])/2., (P2Max[1]+P2Min[1])/2.])
BaryS, Surf = _GG.poly_area_and_barycenter(Poly, NP)
# Get lim-related indicators
noccur = int(pos.size)
Multi = noccur>1
# Get Tor-related quantities
if Type.lower()=='lin':
Vol, BaryV = None, None
else:
Vol, BaryV = _GG.Poly_VolAngTor(Poly)
msg = "Pb. with volume computation for Ves object of type 'Tor' !"
assert Vol>0., msg
# Compute the non-normalized vector of each side of the Poly
Vect = np.diff(Poly,n=1,axis=1)
Vect = fPfmt(Vect)
# Compute the normalised vectors directed inwards
Vin = np.array([Vect[1,:],-Vect[0,:]])
Vin = -Vin # Poly is Counter Clock-wise as defined above
Vin = Vin/np.hypot(Vin[0,:],Vin[1,:])[np.newaxis,:]
Vin = fPfmt(Vin)
poly = _GG.Poly_Order(Poly, order=arrayorder, Clock=Clock,
close=False, layout='(cc,N)', Test=True)
# Get bounding circle
circC = BaryS
r = np.sqrt(np.sum((poly-circC[:,np.newaxis])**2,axis=0))
circr = np.max(r)
dout = {'Poly':poly, 'pos':pos, 'extent':extent,
'noccur':noccur, 'Multi':Multi, 'nP':NP,
'P1Max':P1Max, 'P1Min':P1Min, 'P2Max':P2Max, 'P2Min':P2Min,
'BaryP':BaryP, 'BaryL':BaryL, 'BaryS':BaryS, 'BaryV':BaryV,
'Surf':Surf, 'VolAng':Vol, 'Vect':Vect, 'VIn':Vin,
'circ-C':circC, 'circ-r':circr, 'Clock':Clock}
return dout
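# Hedged usage sketch (added; not part of the original module): the expected
# call on a simple rectangular cross-section. It requires the compiled _GG
# extension, and the polygon / pos / extent values below are made up.
def _example_struct_set_poly():
    square = np.array([[1., 2., 2., 1.],
                       [0., 0., 1., 1.]])
    dgeom = _Struct_set_Poly(square, pos=np.array([0.]),
                             extent=np.array([np.pi/4.]),
                             arrayorder='C', Type='Tor', Clock=False)
    return dgeom['Surf'], dgeom['BaryS']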
def _Ves_get_InsideConvexPoly(Poly, P2Min, P2Max, BaryS, RelOff=_def.TorRelOff, ZLim='Def', Spline=True, Splprms=_def.TorSplprms, NP=_def.TorInsideNP, Plot=False, Test=True):
if Test:
assert type(RelOff) is float, "Arg RelOff must be a float"
assert ZLim is None or ZLim=='Def' or type(ZLim) in [tuple,list], "Arg ZLim must be a tuple (ZlimMin, ZLimMax)"
assert type(Spline) is bool, "Arg Spline must be a bool !"
if not ZLim is None:
if ZLim=='Def':
ZLim = (P2Min[1]+0.1*(P2Max[1]-P2Min[1]), P2Max[1]-0.05*(P2Max[1]-P2Min[1]))
indZLim = (Poly[1,:]<ZLim[0]) | (Poly[1,:]>ZLim[1])
if Poly.shape[1]-indZLim.sum()<10:
msg = "Poly seems to be Convex and simple enough !"
msg += "\n Poly.shape[1] - indZLim.sum() < 10"
warnings.warn(msg)
return Poly
Poly = np.delete(Poly, indZLim.nonzero()[0], axis=1)
if np.all(Poly[:,0]==Poly[:,-1]):
Poly = Poly[:,:-1]
Np = Poly.shape[1]
if Spline:
BarySbis = np.tile(BaryS,(Np,1)).T
Ptemp = (1.-RelOff)*(Poly-BarySbis)
#Poly = BarySbis + Ptemp
Ang = np.arctan2(Ptemp[1,:],Ptemp[0,:])
Ang, ind = np.unique(Ang, return_index=True)
Ptemp = Ptemp[:,ind]
# spline parameters
ww = Splprms[0]*np.ones((Np+1,))
ss = Splprms[1]*(Np+1) # smoothness parameter
kk = Splprms[2] # spline order
nest = int((Np+1)/2.) # estimate of number of knots needed (-1 = maximal)
# Find the knot points
#tckp,uu = scpinterp.splprep([np.append(Ptemp[0,:],Ptemp[0,0]),np.append(Ptemp[1,:],Ptemp[1,0]),np.append(Ang,Ang[0]+2.*np.pi)], w=ww, s=ss, k=kk, nest=nest)
tckp,uu = scpinterp.splprep([np.append(Ptemp[0,:],Ptemp[0,0]),np.append(Ptemp[1,:],Ptemp[1,0])], u=np.append(Ang,Ang[0]+2.*np.pi), w=ww, s=ss, k=kk, nest=nest, full_output=0)
xnew,ynew = scpinterp.splev(np.linspace(-np.pi,np.pi,NP),tckp)
Poly = np.array([xnew+BaryS[0],ynew+BaryS[1]])
Poly = np.concatenate((Poly,Poly[:,0:1]),axis=1)
if Plot:
f = plt.figure(facecolor='w',figsize=(8,10))
ax = f.add_axes([0.1,0.1,0.8,0.8])
ax.plot(Poly[0,:], Poly[1,:],'-k', Poly[0,:],Poly[1,:],'-r')
ax.set_aspect(aspect="equal",adjustable='datalim'), ax.set_xlabel(r"R (m)"), ax.set_ylabel(r"Z (m)")
f.canvas.draw()
return Poly
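# Hedged usage note (added): a typical call shrinks the vessel cross-section
# polygon inward by ~10% and smooths it with a periodic spline, e.g.
#   inner = _Ves_get_InsideConvexPoly(VPoly, P2Min, P2Max, BaryS,
#                                     RelOff=0.1, Spline=True, NP=100)
# (argument values here are illustrative only).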
def _Ves_get_sampleEdge(VPoly, dL, DS=None, dLMode='abs', DIn=0., VIn=None,
margin=1.e-9):
types =[int,float,np.int32,np.int64,np.float32,np.float64]
assert type(dL) in types and type(DIn) in types
assert DS is None or (hasattr(DS,'__iter__') and len(DS)==2)
if DS is None:
DS = [None,None]
else:
assert all([ds is None or (hasattr(ds,'__iter__') and len(ds)==2 and
all([ss is None or type(ss) in types
for ss in ds])) for ds in DS])
assert (type(dLMode) is str and
dLMode.lower() in ['abs','rel']), "Arg dLMode must be in ['abs','rel'] !"
#assert ind is None or (type(ind) is np.ndarray and ind.ndim==1 and ind.dtype in ['int32','int64'] and np.all(ind>=0)), "Arg ind must be None or 1D np.ndarray of positive int !"
Pts, dLr, ind, N,\
Rref, VPolybis = _GG.discretize_vpoly(VPoly, float(dL),
mode=dLMode.lower(),
D1=DS[0], D2=DS[1],
margin=margin,
DIn=float(DIn), VIn=VIn)
return Pts, dLr, ind
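# Hedged usage note (added): e.g. discretize the polygon edge every 2 cm,
# offset 1 mm inside the wall along the inward normals (values made up):
#   pts, dLr, ind = _Ves_get_sampleEdge(VPoly, 0.02, dLMode='abs',
#                                       DIn=0.001, VIn=VIn)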
def _Ves_get_sampleCross(VPoly, Min1, Max1, Min2, Max2, dS,
DS=None, dSMode='abs', ind=None,
margin=1.e-9, mode='flat'):
assert mode in ['flat','imshow']
types =[int,float,np.int32,np.int64,np.float32,np.float64]
c0 = (hasattr(dS,'__iter__') and len(dS)==2
and all([type(ds) in types for ds in dS]))
assert c0 or type(dS) in types, "Arg dS must be a float or a list 2 floats!"
dS = [float(dS),float(dS)] if type(dS) in types else [float(dS[0]),
float(dS[1])]
assert DS is None or (hasattr(DS,'__iter__') and len(DS)==2)
if DS is None:
DS = [None,None]
else:
assert all([ds is None or (hasattr(ds,'__iter__') and len(ds)==2
and all([ss is None or type(ss) in types
for ss in ds])) for ds in DS])
assert type(dSMode) is str and dSMode.lower() in ['abs','rel'],\
"Arg dSMode must be in ['abs','rel'] !"
assert ind is None or (type(ind) is np.ndarray and ind.ndim==1
and ind.dtype in ['int32','int64']
and np.all(ind>=0)), \
"Arg ind must be None or 1D np.ndarray of positive int !"
MinMax1 = np.array([Min1,Max1])
MinMax2 = np.array([Min2,Max2])
if ind is None:
if mode == 'flat':
Pts, dS, ind, d1r, d2r = _GG.discretize_segment2d(MinMax1, MinMax2,
dS[0], dS[1],
D1=DS[0],
D2=DS[1],
mode=dSMode,
VPoly=VPoly,
margin=margin)
out = (Pts, dS, ind, (d1r,d2r))
else:
x1, d1r, ind1, N1 = _GG._Ves_mesh_dlfromL_cython(MinMax1,
dS[0], DS[0],
Lim=True,
dLMode=dSMode,
margin=margin)
x2, d2r, ind2, N2 = _GG._Ves_mesh_dlfromL_cython(MinMax2,
dS[1], DS[1],
Lim=True,
dLMode=dSMode,
margin=margin)
xx1, xx2 = np.meshgrid(x1,x2)
pts = np.squeeze([xx1,xx2])
extent = (x1[0]-d1r/2., x1[-1]+d1r/2., x2[0]-d2r/2., x2[-1]+d2r/2.)
out = (pts, x1, x2, extent)
else:
assert mode == 'flat'
c0 = type(ind) is np.ndarray and ind.ndim==1
c0 = c0 and ind.dtype in ['int32','int64'] and np.all(ind>=0)
assert c0, "Arg ind must be a np.ndarray of int !"
Pts, dS, d1r, d2r = _GG._Ves_meshCross_FromInd(MinMax1, MinMax2,
dS[0], dS[1], ind,
dSMode=dSMode,
margin=margin)
out = (Pts, dS, ind, (d1r,d2r))
return out
def _Ves_get_sampleV(VPoly, Min1, Max1, Min2, Max2, dV,
DV=None, dVMode='abs', ind=None,
VType='Tor', VLim=None,
Out='(X,Y,Z)', margin=1.e-9):
types =[int,float,np.int32,np.int64,np.float32,np.float64]
assert type(dV) in types or (hasattr(dV,'__iter__') and len(dV)==3 and all([type(ds) in types for ds in dV])), "Arg dV must be a float or a list 3 floats !"
dV = [float(dV),float(dV),float(dV)] if type(dV) in types else [float(dV[0]),float(dV[1]),float(dV[2])]
assert DV is None or (hasattr(DV,'__iter__') and len(DV)==3)
if DV is None:
DV = [None,None,None]
else:
assert all([ds is None or (hasattr(ds,'__iter__') and len(ds)==2 and all([ss is None or type(ss) in types for ss in ds])) for ds in DV]), "Arg DV must be a list of 3 lists of 2 floats !"
assert type(dVMode) is str and dVMode.lower() in ['abs','rel'], "Arg dVMode must be in ['abs','rel'] !"
assert ind is None or (type(ind) is np.ndarray and ind.ndim==1 and ind.dtype in ['int32','int64'] and np.all(ind>=0)), "Arg ind must be None or 1D np.ndarray of positive int !"
MinMax1 = np.array([Min1,Max1])
MinMax2 = np.array([Min2,Max2])
VLim = None if VType.lower()=='tor' else np.array(VLim).ravel()
dVr = [None,None,None]
if ind is None:
if VType.lower()=='tor':
Pts, dV, ind, dVr[0], dVr[1], dVr[2] = _GG._Ves_Vmesh_Tor_SubFromD_cython(dV[0], dV[1], dV[2], MinMax1, MinMax2, DR=DV[0], DZ=DV[1], DPhi=DV[2], VPoly=VPoly, Out=Out, margin=margin)
else:
Pts, dV, ind, dVr[0], dVr[1], dVr[2] = _GG._Ves_Vmesh_Lin_SubFromD_cython(dV[0], dV[1], dV[2], VLim, MinMax1, MinMax2, DX=DV[0], DY=DV[1], DZ=DV[2], VPoly=VPoly, margin=margin)
else:
if VType.lower()=='tor':
Pts, dV, dVr[0], dVr[1], dVr[2] = _GG._Ves_Vmesh_Tor_SubFromInd_cython(dV[0], dV[1], dV[2], MinMax1, MinMax2, ind, Out=Out, margin=margin)
else:
Pts, dV, dVr[0], dVr[1], dVr[2] = _GG._Ves_Vmesh_Lin_SubFromInd_cython(dV[0], dV[1], dV[2], VLim, MinMax1, MinMax2, ind, margin=margin)
return Pts, dV, ind, dVr
def _Ves_get_sampleS(VPoly, Min1, Max1, Min2, Max2, dS,
DS=None, dSMode='abs', ind=None, DIn=0., VIn=None,
VType='Tor', VLim=None, nVLim=None, Out='(X,Y,Z)',
margin=1.e-9, Multi=False, Ind=None):
types =[int,float,np.int32,np.int64,np.float32,np.float64]
assert type(dS) in types or (hasattr(dS,'__iter__') and len(dS)==2 and all([type(ds) in types for ds in dS])), "Arg dS must be a float or a list of 2 floats !"
dS = [float(dS),float(dS),float(dS)] if type(dS) in types else [float(dS[0]),float(dS[1]),float(dS[2])]
assert DS is None or (hasattr(DS,'__iter__') and len(DS)==3)
msg = "type(nVLim)={0} and nVLim={1}".format(str(type(nVLim)),nVLim)
assert type(nVLim) is int and nVLim>=0, msg
if DS is None:
DS = [None,None,None]
else:
assert all([ds is None or (hasattr(ds,'__iter__') and len(ds)==2 and all([ss is None or type(ss) in types for ss in ds])) for ds in DS]), "Arg DS must be a list of 3 lists of 2 floats !"
assert type(dSMode) is str and dSMode.lower() in ['abs','rel'], "Arg dSMode must be in ['abs','rel'] !"
assert type(Multi) is bool, "Arg Multi must be a bool !"
VLim = None if (VLim is None | |
self.default_apikey
self.apiclient.connection.securityKey = self.default_secretkey
self.vmdata["name"] = self.acldata["vmD2A"]["name"] + "-shared-scope-account-root-admin"
self.vmdata["displayname"] = self.acldata["vmD2A"]["displayname"] + "-shared-scope-account-root-admin"
try:
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_account_d111a.id,
accountid=self.account_d2a.name,
domainid=self.account_d2a.domainid
)
self.fail("ROOT admin is able to deploy a VM for a admin user in a shared network with scope=account which the admin user does not have access to")
except Exception as e:
self.debug("account %s" % e)
if not CloudstackAclException.verifyMsginException(e, CloudstackAclException.UNABLE_TO_USE_NETWORK):
self.fail(
"Error message validation failed when ROOT admin tries to deploy a VM for a admin user in a shared network with scope=account which the admin user does not have access to ")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_admin_scope_account_ROOTuser(self):
"""
Validate that ROOT admin is NOT able to deploy a VM for a user in ROOT domain in a shared network with scope=account which the user does not have access to
"""
# Deploy VM as user in ROOT domain
self.apiclient.connection.apiKey = self.default_apikey
self.apiclient.connection.securityKey = self.default_secretkey
self.vmdata["name"] = self.acldata["vmROOTA"]["name"] + "-shared-scope-account-root-admin"
self.vmdata["displayname"] = self.acldata["vmROOTA"]["displayname"] + "-shared-scope-account-root-admin"
try:
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_account_d111a.id,
accountid=self.account_roota.name,
domainid=self.account_roota.domainid
)
self.fail("ROOT admin is able to deploy a VM for a user in ROOT domain in a shared network with scope=account which the user does not have access to")
except Exception as e:
self.debug("When a user from ROOT domain deploys a VM in a shared network with scope=account %s" % e)
if not CloudstackAclException.verifyMsginException(e, CloudstackAclException.UNABLE_TO_USE_NETWORK):
self.fail(
"Error message validation failed when ROOT admin tries to deploy a VM for a user in ROOT domain in a shared network with scope=account which the user does not have access to ")
## Test cases relating to deploying Virtual Machine as Domain admin for other users in shared network with scope=all
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_domainuser(self):
"""
Validate that Domain admin is able to deploy a VM for a domain user in a shared network with scope=all
"""
# Deploy VM for a user in a domain under ROOT as admin
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD1A"]["name"] + "-shared-scope-all-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD1A"]["displayname"] + "-shared-scope-all-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_d1a.name,
domainid=self.account_d1a.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d1a.name and vm.domainid == self.account_d1a.domainid,
True,
"Domain admin is not able to deploy a VM for a domain user in a shared network with scope=all")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_domainadminuser(self):
"""
Validate that Domain admin is able to deploy a VM for a domain admin user in a shared network with scope=all
"""
# Deploy VM for an admin user in a domain under ROOT as admin
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD1"]["name"] + "-shared-scope-all-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD1"]["displayname"] + "-shared-scope-all-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_d1.name,
domainid=self.account_d1.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d1.name and vm.domainid == self.account_d1.domainid,
True,
"Domain admin is not able to deploy a VM for a domain admin user in a shared network with scope=all")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_subdomainuser(self):
"""
Validate that Domain admin is able to deploy a VM for a sub domain user in a shared network with scope=all
"""
# Deploy VM as user in a subdomain under ROOT
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD11A"]["name"] + "-shared-scope-all-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD11A"]["displayname"] + "-shared-scope-all-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_d11a.name,
domainid=self.account_d11a.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d11a.name and vm.domainid == self.account_d11a.domainid,
True,
"Domain admin is not able to deploy a VM for a sub domain user in a shared network with scope=all")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_subdomainadminuser(self):
"""
Validate that Domain admin is able to deploy a VM for a sub domain admin user in a shared network with scope=all
"""
# Deploy VM as an admin user in a subdomain under ROOT
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD11"]["name"] + "-shared-scope-all-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD11"]["displayname"] + "-shared-scope-all-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_d11.name,
domainid=self.account_d11.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d11.name and vm.domainid == self.account_d11.domainid,
True,
"Domain admin is not able to deploy a VM for a sub domain admin user in a shared network with scope=all")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_ROOTuser(self):
"""
Validate that Domain admin is NOT able to deploy a VM for a user in ROOT domain in a shared network with scope=all
"""
# Deploy VM as user in ROOT domain
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmROOTA"]["name"] + "-shared-scope-all"
self.vmdata["displayname"] = self.acldata["vmROOTA"]["displayname"] + "-shared-scope-all"
try:
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_roota.name,
domainid=self.account_roota.domainid
)
self.fail("Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=all")
except Exception as e:
self.debug("When a Domain admin user deploys a VM for ROOT user in a shared network with scope=all %s" % e)
if not CloudstackAclException.verifyMsginException(e, CloudstackAclException.NO_PERMISSION_TO_OPERATE_DOMAIN):
self.fail("Error message validation failed when Domain admin is NOT able to deploy a VM for user in ROOT domain in a shared network with scope=all")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_all_crossdomainuser(self):
"""
Validate that Domain admin is NOT able to deploy a VM for a user in another domain in a shared network with scope=all
"""
# Deploy VM as user in ROOT domain
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmROOTA"]["name"] + "-shared-scope-all"
self.vmdata["displayname"] = self.acldata["vmROOTA"]["displayname"] + "-shared-scope-all"
try:
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_all.id,
accountid=self.account_d2a.name,
domainid=self.account_d2a.domainid
)
self.fail("Domain admin user is able to Deploy VM for a domain user, but there is no access to in a shared network with scope=domain with no subdomain access ")
except Exception as e:
self.debug("When a Domain admin user deploys a VM for a domain user, but there is no access to in a shared network with scope=domain with no subdomain access %s" % e)
if not CloudstackAclException.verifyMsginException(e, CloudstackAclException.NO_PERMISSION_TO_OPERATE_DOMAIN):
self.fail(
"Error mesage validation failed when Domain admin user tries to Deploy VM for a domain user, but there is no access to in a shared network with scope=domain with no subdomain access ")
## Test cases relating to deploying Virtual Machine as Domain admin for other users in shared network with scope=Domain and no subdomain access
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_domainuser(self):
"""
Validate that Domain admin is able to deploy a VM for a domain user in a shared network with scope=Domain and no subdomain access
"""
# Deploy VM as user in a domain that has shared network with no subdomain access
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD11A"]["name"] + "-shared-scope-domain-nosubdomainaccess-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD11A"]["displayname"] + "-shared-scope-domain-nosubdomainaccess-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_domain_d11.id,
accountid=self.account_d11a.name,
domainid=self.account_d11a.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d11a.name and vm.domainid == self.account_d11a.domainid,
True,
"Domain admin is not able to deploy a VM for domain user in a shared network with scope=Domain and no subdomain access")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_domainadminuser(self):
"""
Validate that Domain admin is able to deploy a VM for a domain admin user in a shared network with scope=Domain and no subdomain access
"""
# Deploy VM as an admin user in a domain that has shared network with no subdomain access
self.apiclient.connection.apiKey = self.user_d1_apikey
self.apiclient.connection.securityKey = self.user_d1_secretkey
self.vmdata["name"] = self.acldata["vmD11"]["name"] + "-shared-scope-domain-nosubdomainaccess-domain-admin"
self.vmdata["displayname"] = self.acldata["vmD11"]["displayname"] + "-shared-scope-domain-nosubdomainaccess-domain-admin"
vm = VirtualMachine.create(
self.apiclient,
self.vmdata,
zoneid=self.zone.id,
serviceofferingid=self.service_offering.id,
templateid=self.template.id,
networkids=self.shared_network_domain_d11.id,
accountid=self.account_d11.name,
domainid=self.account_d11.domainid
)
self.assertEqual(vm.state == "Running" and vm.account == self.account_d11.name and vm.domainid == self.account_d11.domainid,
True,
"Admin User in a domain that has a shared network with no subdomain access failed to Deploy VM in a shared network with scope=domain with no subdomain access")
@attr("simulator_only", tags=["advanced"], required_hardware="false")
def test_deployVM_in_sharedNetwork_as_domainadmin_scope_domain_nosubdomainaccess_subdomainuser(self):
"""
Validate that Domain admin is NOT able to deploy a VM for a sub domain user in a shared network with scope=Domain and no subdomain access
"""
# Deploy VM as user in a subdomain under a domain | |
(38, ( 3,-3)),
1099: (38, ( 0,-1)),
1100: (38, ( 0,-2)),
1101: (38, ( 0,-3)),
1102: (38, ( 0,-4)),
1103: (38, ( 0,-5)),
1104: (38, ( 0,-6)),
1105: (38, (-1,-1)),
1106: (38, (-2,-2)),
1107: (38, (-3,-3)),
1108: (38, (-4,-4)),
1109: (38, (-2, 1)),
1110: (38, ( 2, 1)),
1111: (38, ( 2,-1)),
1112: (38, ( 1,-2)),
1113: (38, (-1,-2)),
1114: (38, (-2,-1)),
1115: (39, (-1, 0)),
1116: (39, (-2, 0)),
1117: (39, (-3, 0)),
1118: (39, (-4, 0)),
1119: (39, ( 1, 0)),
1120: (39, ( 2, 0)),
1121: (39, ( 3, 0)),
1122: (39, ( 1,-1)),
1123: (39, ( 2,-2)),
1124: (39, ( 3,-3)),
1125: (39, ( 0,-1)),
1126: (39, ( 0,-2)),
1127: (39, ( 0,-3)),
1128: (39, ( 0,-4)),
1129: (39, ( 0,-5)),
1130: (39, ( 0,-6)),
1131: (39, ( 0,-7)),
1132: (39, (-1,-1)),
1133: (39, (-2,-2)),
1134: (39, (-3,-3)),
1135: (39, (-4,-4)),
1136: (39, ( 2,-1)),
1137: (39, ( 1,-2)),
1138: (39, (-1,-2)),
1139: (39, (-2,-1)),
1140: (40, (-1, 0)),
1141: (40, (-2, 0)),
1142: (40, (-3, 0)),
1143: (40, (-4, 0)),
1144: (40, (-5, 0)),
1145: (40, (-1, 1)),
1146: (40, (-2, 2)),
1147: (40, (-3, 3)),
1148: (40, (-4, 4)),
1149: (40, (-5, 5)),
1150: (40, ( 0, 1)),
1151: (40, ( 0, 2)),
1152: (40, ( 0, 3)),
1153: (40, ( 0, 4)),
1154: (40, ( 0, 5)),
1155: (40, ( 0, 6)),
1156: (40, ( 0, 7)),
1157: (40, ( 1, 1)),
1158: (40, ( 2, 2)),
1159: (40, ( 1, 0)),
1160: (40, ( 2, 0)),
1161: (40, (-2, 1)),
1162: (40, (-1, 2)),
1163: (40, ( 1, 2)),
1164: (40, ( 2, 1)),
1165: (41, (-1, 0)),
1166: (41, (-2, 0)),
1167: (41, (-3, 0)),
1168: (41, (-4, 0)),
1169: (41, (-5, 0)),
1170: (41, (-1, 1)),
1171: (41, (-2, 2)),
1172: (41, (-3, 3)),
1173: (41, (-4, 4)),
1174: (41, (-5, 5)),
1175: (41, ( 0, 1)),
1176: (41, ( 0, 2)),
1177: (41, ( 0, 3)),
1178: (41, ( 0, 4)),
1179: (41, ( 0, 5)),
1180: (41, ( 0, 6)),
1181: (41, ( 1, 1)),
1182: (41, ( 2, 2)),
1183: (41, ( 1, 0)),
1184: (41, ( 2, 0)),
1185: (41, ( 1,-1)),
1186: (41, ( 0,-1)),
1187: (41, (-1,-1)),
1188: (41, (-2, 1)),
1189: (41, (-1, 2)),
1190: (41, ( 1, 2)),
1191: (41, ( 2, 1)),
1192: (41, ( 2,-1)),
1193: (41, (-2,-1)),
1194: (42, (-1, 0)),
1195: (42, (-2, 0)),
1196: (42, (-3, 0)),
1197: (42, (-4, 0)),
1198: (42, (-5, 0)),
1199: (42, (-1, 1)),
1200: (42, (-2, 2)),
1201: (42, (-3, 3)),
1202: (42, (-4, 4)),
1203: (42, (-5, 5)),
1204: (42, ( 0, 1)),
1205: (42, ( 0, 2)),
1206: (42, ( 0, 3)),
1207: (42, ( 0, 4)),
1208: (42, ( 0, 5)),
1209: (42, ( 1, 1)),
1210: (42, ( 2, 2)),
1211: (42, ( 1, 0)),
1212: (42, ( 2, 0)),
1213: (42, ( 1,-1)),
1214: (42, ( 2,-2)),
1215: (42, ( 0,-1)),
1216: (42, ( 0,-2)),
1217: (42, (-1,-1)),
1218: (42, (-2,-2)),
1219: (42, (-2, 1)),
1220: (42, (-1, 2)),
1221: (42, ( 1, 2)),
1222: (42, ( 2, 1)),
1223: (42, ( 2,-1)),
1224: (42, ( 1,-2)),
1225: (42, (-1,-2)),
1226: (42, (-2,-1)),
1227: (43, (-1, 0)),
1228: (43, (-2, 0)),
1229: (43, (-3, 0)),
1230: (43, (-4, 0)),
1231: (43, (-5, 0)),
1232: (43, (-1, 1)),
1233: (43, (-2, 2)),
1234: (43, (-3, 3)),
1235: (43, (-4, 4)),
1236: (43, ( 0, 1)),
1237: (43, ( 0, 2)),
1238: (43, ( 0, 3)),
1239: (43, ( 0, 4)),
1240: (43, ( 1, 1)),
1241: (43, ( 2, 2)),
1242: (43, ( 1, 0)),
1243: (43, ( 2, 0)),
1244: (43, ( 1,-1)),
1245: (43, ( 2,-2)),
1246: (43, ( 0,-1)),
1247: (43, ( 0,-2)),
1248: (43, ( 0,-3)),
1249: (43, (-1,-1)),
1250: (43, (-2,-2)),
1251: (43, (-3,-3)),
1252: (43, (-2, 1)),
1253: (43, (-1, 2)),
1254: (43, ( 1, 2)),
1255: (43, ( 2, 1)),
1256: (43, ( 2,-1)),
1257: (43, ( 1,-2)),
1258: (43, (-1,-2)),
1259: (43, (-2,-1)),
1260: (44, (-1, 0)),
1261: (44, (-2, 0)),
1262: (44, (-3, 0)),
1263: (44, (-4, 0)),
1264: (44, (-5, 0)),
1265: (44, (-1, 1)),
1266: (44, (-2, 2)),
1267: (44, (-3, 3)),
1268: (44, ( 0, 1)),
1269: (44, ( 0, 2)),
1270: (44, ( 0, 3)),
1271: (44, ( 1, 1)),
1272: (44, ( 2, 2)),
1273: (44, ( 1, 0)),
1274: (44, ( 2, 0)),
1275: (44, ( 1,-1)),
1276: (44, ( 2,-2)),
1277: (44, ( 0,-1)),
1278: (44, ( 0,-2)),
1279: (44, ( 0,-3)),
1280: (44, ( 0,-4)),
1281: (44, (-1,-1)),
1282: (44, (-2,-2)),
1283: (44, (-3,-3)),
1284: (44, (-4,-4)),
1285: (44, (-2, 1)),
1286: (44, (-1, 2)),
1287: (44, ( 1, 2)),
1288: (44, ( 2, 1)),
1289: (44, ( 2,-1)),
1290: (44, ( 1,-2)),
1291: (44, (-1,-2)),
1292: (44, (-2,-1)),
1293: (45, (-1, 0)),
1294: (45, (-2, 0)),
1295: (45, (-3, 0)),
1296: (45, (-4, 0)),
1297: (45, (-5, 0)),
1298: (45, (-1, 1)),
1299: (45, (-2, 2)),
1300: (45, ( 0, 1)),
1301: (45, ( 0, 2)),
1302: (45, ( 1, 1)),
1303: (45, ( 2, 2)),
1304: (45, ( 1, 0)),
1305: (45, ( 2, 0)),
1306: (45, ( 1,-1)),
1307: (45, ( 2,-2)),
1308: (45, ( 0,-1)),
1309: (45, ( 0,-2)),
1310: (45, ( 0,-3)),
1311: (45, ( 0,-4)),
1312: (45, ( 0,-5)),
1313: (45, (-1,-1)),
1314: (45, (-2,-2)),
1315: (45, (-3,-3)),
1316: (45, (-4,-4)),
1317: (45, (-5,-5)),
1318: (45, (-2, 1)),
1319: (45, (-1, 2)),
1320: (45, ( 1, 2)),
1321: (45, ( 2, 1)),
1322: (45, ( 2,-1)),
1323: (45, ( 1,-2)),
1324: (45, (-1,-2)),
1325: (45, (-2,-1)),
1326: (46, (-1, 0)),
1327: (46, (-2, 0)),
1328: (46, (-3, 0)),
1329: (46, (-4, 0)),
1330: (46, (-5, 0)),
1331: (46, (-1, 1)),
1332: (46, ( 0, 1)),
1333: (46, ( 1, 1)),
1334: (46, ( 1, 0)),
1335: (46, ( 2, 0)),
1336: (46, ( 1,-1)),
1337: (46, ( 2,-2)),
1338: (46, ( 0,-1)),
1339: (46, ( 0,-2)),
1340: (46, ( 0,-3)),
1341: (46, ( 0,-4)),
1342: (46, ( 0,-5)),
1343: (46, ( 0,-6)),
1344: (46, (-1,-1)),
1345: (46, (-2,-2)),
1346: (46, (-3,-3)),
1347: (46, (-4,-4)),
1348: (46, (-5,-5)),
1349: (46, (-2, 1)),
1350: (46, ( 2, 1)),
1351: (46, ( 2,-1)),
1352: (46, ( 1,-2)),
1353: (46, (-1,-2)),
1354: (46, (-2,-1)),
1355: (47, (-1, 0)),
1356: (47, (-2, 0)),
1357: (47, (-3, 0)),
1358: (47, (-4, 0)),
1359: (47, (-5, 0)),
1360: (47, ( 1, 0)),
1361: (47, ( 2, 0)),
1362: (47, ( 1,-1)),
1363: (47, ( 2,-2)),
1364: (47, ( 0,-1)),
1365: (47, ( 0,-2)),
1366: (47, ( 0,-3)),
1367: (47, ( 0,-4)),
1368: (47, ( 0,-5)),
1369: (47, ( 0,-6)),
1370: (47, ( 0,-7)),
1371: (47, (-1,-1)),
1372: (47, (-2,-2)),
1373: (47, (-3,-3)),
1374: (47, (-4,-4)),
1375: (47, (-5,-5)),
1376: (47, ( 2,-1)),
1377: (47, ( 1,-2)),
1378: (47, (-1,-2)),
1379: (47, (-2,-1)),
1380: (48, (-1, 0)),
1381: (48, (-2, 0)),
1382: (48, (-3, 0)),
1383: (48, (-4, 0)),
1384: (48, (-5, 0)),
1385: (48, (-6, 0)),
1386: (48, (-1, 1)),
1387: (48, (-2, 2)),
1388: (48, (-3, 3)),
1389: (48, (-4, 4)),
1390: (48, (-5, 5)),
1391: (48, (-6, 6)),
1392: (48, ( 0, 1)),
1393: (48, ( 0, 2)),
1394: (48, ( 0, 3)),
1395: (48, ( 0, 4)),
1396: (48, ( 0, 5)),
1397: (48, ( 0, 6)),
1398: (48, ( 0, 7)),
1399: (48, ( 1, 1)),
1400: (48, ( 1, 0)),
1401: (48, (-2, 1)),
1402: (48, (-1, 2)),
1403: (48, ( 1, 2)),
1404: (49, (-1, 0)),
1405: (49, (-2, 0)),
1406: (49, (-3, 0)),
1407: (49, (-4, 0)),
1408: (49, (-5, 0)),
1409: (49, (-6, | |
from __future__ import division, print_function
__all__ = ["Signal", "LikelihoodError"]
from .global_imports import *
from . import global_imports
from .Data import Data
from .Instrument import Instrument, ChannelError
from .Background import Background
from .Interstellar import Interstellar
from .tools.energy_integrator import energy_integrator
from .tools.energy_interpolator import energy_interpolator
from .tools.phase_integrator import phase_integrator
from abc import abstractmethod
from .Parameter import Parameter
from .ParameterSubspace import ParameterSubspace
class LikelihoodError(xpsiError):
""" Raised if there is a problem with the value of the log-likelihood. """
class Signal(ParameterSubspace):
"""
A signal is constituted by some X-ray dataset, a model instrument
with which that data was acquired, a model background, and an object for
modelling interstellar processes.
The methods in this class must transform incident specific flux signals
into a structure congruent to that of the data for the purpose of
evaluation of the custom likelihood implemented via subclassing.
:param obj data:
An instance of :class:`~.Data.Data`.
:param obj instrument:
An instance of :class:`~.Instrument.Instrument`.
:param obj background:
If not ``None``, an instance of :class:`~.Background.Background`.
It is assumed if one constructs a model using instances of
:class:`~.Background.Background` that the background needs to be
registered by a model instrument. If ``None``, it is still possible
for one to define and use background parameters in a custom subclass
of :class:`~.Signal`. In particular, background parameters for some
model which directly specifies background contribution in units of
count/s per *output* channels. These background parameters can even
*be* the counts/s in output channels.
:param obj interstellar:
If not ``None``, an instance of :class:`~.Interstellar.Interstellar`.
To be applied to the incident signal as a callable that modifies the
signal in place.
:param str photosphere_prefix:
The ``str`` prefix of the photosphere object with which this signal
object is associated.
:param bool cache:
Cache intermediary signals during likelihood evaluation? When performing
post-processing, this needs to be activated for full functionality of
the :mod:`~.xpsi.PostProcessing` module. For likelihood function
evaluation during sampling, caching should be deactivated because it is
not used. It might be useful to activate caching also when preparing a
model for a sampling application, to check the likelihood function
works as intended.
:param bool store:
Deprecated. You can use this or ``cache``, which has the same effect.
"""
def __init__(self,
data,
instrument,
background = None,
interstellar = None,
photosphere_prefix = None,
cache = False,
bounds = None,
values = None,
*args,
**kwargs):
if not isinstance(data, Data):
raise TypeError('Invalid type for a data object.')
else:
self._data = data
if not isinstance(instrument, Instrument):
raise TypeError('Invalid type for an instrument object.')
else:
self._instrument = instrument
a, b = data.index_range
if (data.channels != instrument.channels[a:b]).any():
raise ChannelError('Channel array declared for event data does not '
'match channel array declared for the loaded '
'instrument response (sub)matrix.')
self._identify_waveband()
if background is not None:
if not isinstance(background, Background):
raise TypeError('Invalid type for a background object.')
else:
self._background = background
else:
self._background = None
if interstellar is not None:
if not isinstance(interstellar, Interstellar):
raise TypeError('Invalid type for an interstellar object.')
else:
self._interstellar = interstellar
else:
self._interstellar = None
if photosphere_prefix is not None:
self._photosphere = photosphere_prefix
cache = kwargs.get('store', cache)
if not isinstance(cache, bool):
raise TypeError('Activate or deactivate caching with a boolean.')
self._cache = cache
if bounds is None: bounds = {}
if values is None: values = {}
doc = """
The phase shift for the signal, a periodic parameter [cycles].
"""
phase_bounds = bounds.get('phase_shift', None)
phase_value = values.get('phase_shift', 0.0 if phase_bounds is None else None)
if phase_value is None:
if not phase_bounds or None in phase_bounds:
raise ValueError('Phase-shift bounds must be specified.')
elif _np.array([not _np.isfinite(b) for b in phase_bounds]).any():
raise ValueError('Phase-shift bounds must be finite.')
elif not (0.0 <= (phase_bounds[1] - phase_bounds[0]) <= 1.0):
raise ValueError('Phase bounds must be separated by '
'a maximum of one cycle.')
phase_shift = Parameter('phase_shift',
strict_bounds = (-_np.infty, _np.infty),
bounds = phase_bounds,
doc = doc,
symbol = r'$\phi$',
value = phase_value)
# merge the subspaces; order unimportant
super(Signal, self).__init__(phase_shift,  # include the phase-shift parameter defined above
self._instrument,
self._background,
self._interstellar,
*args, **kwargs)
@property
def background(self):
""" Get the instance of :class:`~.Background.Background`."""
return self._background
@property
def interstellar(self):
""" Get the instance of :class:`~.Interstellar.Interstellar`."""
return self._interstellar
@property
def instrument(self):
""" Get the instance of :class:`~.Instrument.Instrument`."""
return self._instrument
@property
def photosphere(self):
return self._photosphere
def _identify_waveband(self):
""" Bound the waveband for signal integration.
Constructs an array of energy edges for instrument operation.
This method thus automatically constructs energy bounds for this
particular instrument. At energies between these bounds, signals
are calculated. This requires details about the contiguous
subset of output channels the photon data spans (in an instance of
the :class:`~.Data.Data` class) and the redistribution matrix of the
model instrument (in an instance of the
:class:`~.Instrument.Instrument` class).
:raises IndexError:
If the channel range of the data object is not consistent with
the instrument object.
"""
a, b = self._data.index_range
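# The nested helper below walks along row i of the response matrix,
# starting at column j and stepping by k (+1 or -1) until it hits a
# non-zero element, i.e. it finds the lowest/highest input energy
# interval to which the detector channels spanned by the data respond.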
def search(i, j, k):
while self._instrument.matrix[i,j] == 0.0:
j += k
return j
a = search(a, 0, 1)
b = self._instrument.matrix.shape[1] + search(b-1, -1, -1) + 1
self._input_interval_range = (a, b)
self._energy_edges = self._instrument.energy_edges[a:b + 1]
self._energy_mids = (self._energy_edges[:-1] + self._energy_edges[1:])/2.0
@property
def fast_energies(self):
""" Get coarse array of energies for fast-mode likelihood evals. """
return self._fast_energies
@fast_energies.setter
def fast_energies(self, energies):
""" Set energies for fast mode."""
self._fast_energies = energies
def create_energy_array(self, rel_num_energies=10.0):
""" Get a (finer) array of energies spanning instrument waveband.
Useful for getting an appropriately bounded and spaced set of energies
for signal interpolation.
:param float rel_num_energies:
The number of energies desired as a fraction of the number of
energies implemented for incident signal integration.
"""
L = self.energy_edges[0]
R = self.energy_edges[-1]
energies = _np.logspace(_np.log10(L), _np.log10(R),
int(rel_num_energies * len(self.energies)),
base=10.0)
return energies
@property
def energy_edges(self):
""" Get a :class:`numpy.ndarray` of energy edges. """
return self._energy_edges
def register(self, signals, fast_mode=False, threads=1):
""" Register an incident signal by operating with the response matrix.
A :class:`numpy.ndarray` is stored as an instance attribute containing
source signal for each *output* channel in units of counts cm^2/s
(assuming instrument effective area units are cm^2).
"""
if fast_mode:
try:
del self.fast_total_counts
except AttributeError:
pass
for hotRegion in signals:
fast_total_counts = []
for component, phases in zip(hotRegion, self.fast_phases):
if component is None:
fast_total_counts.append(None)
else:
integrated = energy_integrator(threads,
component,
_np.log10(self.fast_energies),
_np.log10(self._energy_edges))
# move interstellar to star?
if self._interstellar is not None:
self._interstellar(self._energy_mids, integrated)
temp = self._instrument(integrated,
self._input_interval_range,
self._data.index_range)
fast_total_counts.append(_np.sum(temp))
self.fast_total_counts = tuple(fast_total_counts)
else:
try:
del self.signals
except AttributeError:
pass
if self.cache:
try:
del self.incident_specific_flux_signals
except AttributeError:
pass
for hotRegion in signals: # iterate over hot regions
signal = None
for component in hotRegion: # add other components
try:
signal += component
except TypeError:
signal = component
# cache total hot region signal
self.incident_specific_flux_signals = signal
try:
del self.incident_flux_signals
except AttributeError:
pass
try:
self.execute_custom_cache_instructions()
except NotImplementedError:
pass # no custom caching targets
for hotRegion in signals:
integrated = None
for component in hotRegion:
temp = energy_integrator(threads,
component,
_np.log10(self._energies),
_np.log10(self._energy_edges))
try:
integrated += temp
except TypeError:
integrated = temp
if self.cache:
self.incident_flux_signals = integrated.copy()
if self._interstellar is not None:
self._interstellar(self._energy_mids, integrated)
self.signals = self._instrument(integrated,
self._input_interval_range,
self._data.index_range)
if self._background is not None:
try:
self._background(self._energy_edges,
self._data.phases)
except TypeError:
print('Error when evaluating the incident background.')
raise
self._background.registered_background = \
self._instrument(self._background.incident_background,
self._input_interval_range,
self._data.index_range)
@property
def num_components(self):
return len(self._signals)
@property
def phases(self):
return [phases.copy() for phases in self._phases]
@phases.setter
def phases(self, obj):
if not isinstance(obj, list):
obj = [obj]
self._phases = obj
@property
def fast_phases(self):
return [phases.copy() for phases in self._fast_phases]
@fast_phases.setter
def fast_phases(self, obj):
if not isinstance(obj, list):
obj = [obj]
self._fast_phases = obj
@property
def energies(self):
return self._energies
@energies.setter
def energies(self, obj):
self._energies = obj
@energies.deleter
def energies(self):
del self._energies
@property
def fast_total_counts(self):
return tuple(self._fast_total_counts)
@fast_total_counts.setter
def fast_total_counts(self, obj):
try:
self._fast_total_counts.append(obj)
except AttributeError:
self._fast_total_counts = [obj]
@fast_total_counts.deleter
def fast_total_counts(self):
del self._fast_total_counts
@property
def store(self):
return self._cache
@store.setter
def store(self, value):
if isinstance(value, bool):
self._cache = value
else:
raise ValueError('Signal storage requires boolean activation.')
@property
def cache(self):
return self._cache
@cache.setter
def cache(self, value):
if isinstance(value, bool):
self._cache = value
else:
raise ValueError('Signal storage requires boolean activation.')
@property
def data(self):
""" Get the stored data | |
import cv2
import mxnet as mx
import time
from tools import image_processing
#from mx.model import FeedForward
import numpy as np
from config import config
from tools.nms import py_nms
class MtcnnDetector(object):
"""
Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Neural Networks
see https://github.com/kpzhang93/MTCNN_face_detection_alignment
this is a mxnet version
"""
def __init__(self,
detectors,
min_face_size=24,
stride=2,
threshold=[0.6, 0.7, 0.7],
scale_factor=0.709,
ctx=mx.cpu(),
slide_window=False):
self.pnet_detector = detectors[0]
self.rnet_detector = detectors[1]
self.onet_detector = detectors[2]
self.min_face_size = min_face_size
self.stride=stride
self.thresh = threshold
self.ctx = ctx
self.scale_factor = scale_factor
self.slide_window = slide_window
def convert_to_square(self, bbox):
"""
convert bbox to square
Parameters:
----------
bbox: numpy array , shape n x 5
input bbox
Returns:
-------
square bbox
"""
square_bbox = bbox.copy()
h = bbox[:, 3] - bbox[:, 1] + 1
w = bbox[:, 2] - bbox[:, 0] + 1
max_side = np.maximum(h,w)
square_bbox[:, 0] = bbox[:, 0] + w*0.5 - max_side*0.5
square_bbox[:, 1] = bbox[:, 1] + h*0.5 - max_side*0.5
square_bbox[:, 2] = square_bbox[:, 0] + max_side - 1
square_bbox[:, 3] = square_bbox[:, 1] + max_side - 1
return square_bbox
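# Hedged worked example (added): a box [x1, y1, x2, y2, score] = [0, 0, 9, 19, s]
# has w = 10, h = 20, max_side = 20, so it becomes the square [-5, 0, 14, 19, s]
# centred on the same point; the score column is left untouched.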
def calibrate_box(self, bbox, reg):
"""
calibrate bboxes
Parameters:
----------
bbox: numpy array, shape n x 5
input bboxes
reg: numpy array, shape n x 4
bboxes adjustment
Returns:
-------
bboxes after refinement
"""
bbox_c = bbox.copy()
w = bbox[:, 2] - bbox[:, 0] + 1
w = np.expand_dims(w, 1)
h = bbox[:, 3] - bbox[:, 1] + 1
h = np.expand_dims(h, 1)
reg_m = np.hstack([w, h, w, h])
aug = reg_m * reg
bbox_c[:, 0:4] = bbox_c[:, 0:4] + aug
return bbox_c
def generate_bbox(self, map, reg, scale, threshold):
"""
generate bbox from feature map
Parameters:
----------
map: numpy array , n x m x 1
detect score for each position
reg: numpy array , n x m x 4
bbox
scale: float number
scale of this detection
threshold: float number
detect threshold
Returns:
-------
bbox array
"""
stride = 2
cellsize = 12
t_index = np.where(map>threshold)
# find nothing
if t_index[0].size == 0:
return np.array([])
dx1, dy1, dx2, dy2 = [reg[0, i, t_index[0], t_index[1]] for i in range(4)]
reg = np.array([dx1, dy1, dx2, dy2])
score = map[t_index[0], t_index[1]]
boundingbox = np.vstack([np.round((stride*t_index[1])/scale),
np.round((stride*t_index[0])/scale),
np.round((stride*t_index[1]+cellsize)/scale),
np.round((stride*t_index[0]+cellsize)/scale),
score,
reg])
return boundingbox.T
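# Hedged worked example (added): with stride=2, cellsize=12 and scale=0.5, a
# score above threshold at feature-map position (row, col) = (3, 5) maps back to
#   x1 = round(2*5/0.5) = 20,   y1 = round(2*3/0.5) = 12,
#   x2 = round((2*5+12)/0.5) = 44,  y2 = round((2*3+12)/0.5) = 36
# in the original image; the regression offsets in `reg` are applied later by
# calibrate_box.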
def resize_image(self, img, scale):
"""
resize image and transform its dimensions to [batchsize, channel, height, width]
Parameters:
----------
img: numpy array , height x width x channel
input image, channels in BGR order here
scale: float number
scale factor of resize operation
Returns:
-------
transformed image tensor , 1 x channel x height x width
"""
height, width, channels = img.shape
new_height = int(height * scale) # resized new height
new_width = int(width * scale) # resized new width
new_dim = (new_width, new_height)
img_resized = cv2.resize(img, new_dim, interpolation=cv2.INTER_LINEAR) # resized image
img_resized = image_processing.transform(img_resized)
return img_resized # (batch_size, c, h, w)
def pad(self, bboxes, w, h):
"""
pad the bboxes, and also restrict their size
Parameters:
----------
bboxes: numpy array, n x 5
input bboxes
w: float number
width of the input image
h: float number
height of the input image
Returns :
------
dy, dx : numpy array, n x 1
start point of the bbox in target image
edy, edx : numpy array, n x 1
end point of the bbox in target image
y, x : numpy array, n x 1
start point of the bbox in original image
ey, ex : numpy array, n x 1
end point of the bbox in original image
tmph, tmpw: numpy array, n x 1
height and width of the bbox
"""
tmpw, tmph = bboxes[:, 2] - bboxes[:, 0] + 1, bboxes[:, 3] - bboxes[:, 1] + 1
num_box = bboxes.shape[0]
dx , dy= np.zeros((num_box, )), np.zeros((num_box, ))
edx, edy = tmpw.copy()-1, tmph.copy()-1
x, y, ex, ey = bboxes[:, 0], bboxes[:, 1], bboxes[:, 2], bboxes[:, 3]
tmp_index = np.where(ex > w-1)
edx[tmp_index] = tmpw[tmp_index] + w - 2 - ex[tmp_index]
ex[tmp_index] = w - 1
tmp_index = np.where(ey > h-1)
edy[tmp_index] = tmph[tmp_index] + h - 2 - ey[tmp_index]
ey[tmp_index] = h - 1
tmp_index = np.where(x < 0)
dx[tmp_index] = 0 - x[tmp_index]
x[tmp_index] = 0
tmp_index = np.where(y < 0)
dy[tmp_index] = 0 - y[tmp_index]
y[tmp_index] = 0
return_list = [dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph]
return_list = [item.astype(np.int32) for item in return_list]
return return_list
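# Hedged worked example (added): for a 100x100 image (w = h = 100), a box with
# x = 90, ex = 109 (so tmpw = 20) sticks out by 10 px on the right; pad() clips
# ex to 99 and sets edx = 20 + 100 - 2 - 109 = 9, so only columns 0..9 of the
# 20-px-wide crop are filled from the image and the remainder stays zero-padded.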
def detect_pnet(self, im):
"""Get face candidates through pnet
Parameters:
----------
im: numpy array
input image array
Returns:
-------
boxes: numpy array
detected boxes before calibration
boxes_c: numpy array
boxes after calibration
"""
h, w, c = im.shape
net_size = 12
current_scale = float(net_size) / self.min_face_size # find initial scale
im_resized = self.resize_image(im, current_scale)
_, _, current_height, current_width = im_resized.shape
if self.slide_window:
# sliding window
temp_rectangles = list()
rectangles = list() # list of rectangles [x11, y11, x12, y12, confidence] (corresponding to original image)
all_cropped_ims = list()
while min(current_height, current_width) > net_size:
current_y_list = range(0, current_height - net_size + 1, self.stride) if (current_height - net_size) % self.stride == 0 \
else range(0, current_height - net_size + 1, self.stride) + [current_height - net_size]
current_x_list = range(0, current_width - net_size + 1, self.stride) if (current_width - net_size) % self.stride == 0 \
else range(0, current_width - net_size + 1, self.stride) + [current_width - net_size]
for current_y in current_y_list:
for current_x in current_x_list:
cropped_im = im_resized[:, :, current_y:current_y + net_size, current_x:current_x + net_size]
current_rectangle = [int(w * float(current_x) / current_width), int(h * float(current_y) / current_height),
int(w * float(current_x) / current_width) + int(w * float(net_size) / current_width),
int(h * float(current_y) / current_height) + int(w * float(net_size) / current_width),
0.0]
temp_rectangles.append(current_rectangle)
all_cropped_ims.append(cropped_im)
current_scale *= self.scale_factor
im_resized = self.resize_image(im, current_scale)
_, _, current_height, current_width = im_resized.shape
'''
# helper for setting PNet batch size
num_boxes = len(all_cropped_ims)
batch_size = self.pnet_detector.batch_size
ratio = float(num_boxes) / batch_size
if ratio > 3 or ratio < 0.3:
print "You may need to reset PNet batch size if this info appears frequently, \
face candidates:%d, current batch_size:%d"%(num_boxes, batch_size)
'''
all_cropped_ims = np.vstack(all_cropped_ims)
cls_scores, reg = self.pnet_detector.predict(all_cropped_ims)
cls_scores = cls_scores[:, 1].flatten()
keep_inds = np.where(cls_scores > self.thresh[0])[0]
if len(keep_inds) > 0:
boxes = np.vstack([temp_rectangles[ind] for ind in keep_inds])
boxes[:, 4] = cls_scores[keep_inds]
reg = reg[keep_inds].reshape(-1, 4)
else:
return None, None
keep = py_nms(boxes, 0.7, 'Union')
boxes = boxes[keep]
boxes_c = self.calibrate_box(boxes, reg[keep])
else:
# fcn
all_boxes = list()
while min(current_height, current_width) > net_size:
cls_map, reg = self.pnet_detector.predict(im_resized)
cls_map = cls_map.asnumpy()
reg = reg.asnumpy()
boxes = self.generate_bbox(cls_map[0, 1, :, :], reg, current_scale, self.thresh[0])
current_scale *= self.scale_factor
im_resized = self.resize_image(im, current_scale)
_, _, current_height, current_width = im_resized.shape
if boxes.size == 0:
continue
keep = py_nms(boxes[:, :5], 0.5, 'Union')
boxes = boxes[keep]
all_boxes.append(boxes)
if len(all_boxes) == 0:
return None, None
all_boxes = np.vstack(all_boxes)
# merge the detection from first stage
keep = py_nms(all_boxes[:, 0:5], 0.7, 'Union')
all_boxes = all_boxes[keep]
boxes = all_boxes[:, :5]
bbw = all_boxes[:, 2] - all_boxes[:, 0] + 1
bbh = all_boxes[:, 3] - all_boxes[:, 1] + 1
# refine the boxes
boxes_c = np.vstack([all_boxes[:, 0] + all_boxes[:, 5] * bbw,
all_boxes[:, 1] + all_boxes[:, 6] * bbh,
all_boxes[:, 2] + all_boxes[:, 7] * bbw,
all_boxes[:, 3] + all_boxes[:, 8] * bbh,
all_boxes[:, 4]])
boxes_c = boxes_c.T
return boxes, boxes_c
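# For reference (added note): generate_bbox(), called above but defined elsewhere in this
# detector, typically maps positions in the PNet score map back to 12x12 windows in the
# original image using the network stride of 2 and the current pyramid scale. A hedged
# sketch of the usual implementation (variable names assumed for illustration):
#
#     stride, cellsize = 2, 12
#     t_index = np.where(cls_map > threshold)
#     if t_index[0].size == 0:
#         return np.array([])
#     dx1, dy1, dx2, dy2 = [reg[0, i, t_index[0], t_index[1]] for i in range(4)]
#     score = cls_map[t_index[0], t_index[1]]
#     boxes = np.vstack([np.round(stride * t_index[1] / scale),
#                        np.round(stride * t_index[0] / scale),
#                        np.round((stride * t_index[1] + cellsize) / scale),
#                        np.round((stride * t_index[0] + cellsize) / scale),
#                        score, dx1, dy1, dx2, dy2]).T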
def detect_rnet(self, im, dets):
"""Get face candidates using rnet
Parameters:
----------
im: numpy array
input image array
dets: numpy array
detection results of pnet
Returns:
-------
boxes: numpy array
detected boxes before calibration
boxes_c: numpy array
boxes after calibration
"""
h, w, c = im.shape
dets = self.convert_to_square(dets)
dets[:, 0:4] = np.round(dets[:, 0:4])
[dy, edy, dx, edx, y, ey, x, ex, tmpw, tmph] = self.pad(dets, w, h)
num_boxes = dets.shape[0]
'''
# helper for setting RNet batch size
batch_size = self.rnet_detector.batch_size
ratio = float(num_boxes) / batch_size
if ratio > 3 or ratio < 0.3:
print "You may need to reset RNet batch size if this info appears frequently, \
face | |
<gh_stars>100-1000
from __future__ import unicode_literals
from __future__ import print_function
from __future__ import absolute_import
from ..elements import Attribute
from ..elements.elementbase import LogicElement
from ..tags.context import ContextElementBase, DataSetter
from .. import logic
from ..urlmapper import URLMapper, MissingURLParameter, RouteError
from ..context.expressiontime import ExpressionDateTime
from ..render import render_object
from .. import http
from ..http import StatusCode, standard_response, RespondWith
from .. import errors
from ..template.errors import MissingTemplateError
from ..template.rendercontainer import RenderContainer
from .. import trace
from .. import __version__
from ..content import Content
from ..tags.content import ContentElementMixin
from ..tools import get_return
from .. import syntax
from ..timezone import Timezone
from ..context.tools import to_expression, set_dynamic
from ..sites import LocaleProxy
from ..compat import text_type, itervalues, py2bytes, iteritems
from .. import db
from ..response import MoyaResponse
from ..request import ReplaceRequest
from ..urltools import urlencode as moya_urlencode
from .. import tools
from .. import pilot
from .. import namespaces
from webob import Response
from fs.path import splitext
from fs.errors import NoSysPath
import pytz
import sys
import logging
log = logging.getLogger("moya.runtime")
startup_log = logging.getLogger("moya.startup")
class Mountpoint(LogicElement):
"""
A [i]mountpoint[/i] defines a collection of URL *routes* which map incoming requests on to moya code.
An app will typically have at least one mountpoint with [c]name="main"[/c] (the default) which is used when the app is mounted. Moya will check each enclosed <url> in turn until it finds a route which matches.
An app may contain multiple mountpoints, which can be [i]mounted[/i] separately.
"""
class Help:
synopsis = "define a collection of url routes"
example = """
<mountpoint name="main">
<!-- should contain <url> tags -->
</mountpoint>
"""
name = Attribute(
"Mountpoint name unique to the application", default="main", map_to="_name"
)
preserve_attributes = ["urlmapper", "middleware", "name"]
def post_build(self, context):
self.urlmapper = URLMapper(self.libid)
self.middleware = dict(request=URLMapper(), response=URLMapper())
self.name = self._name(context)
class URL(LogicElement):
"""
Add a URL route to a [tag]mountpoint[/tag].
"""
class Help:
synopsis = """add a url to a mountpoint"""
mountpoint = Attribute("Name of the parent mount point", required=False)
mount = Attribute("Mountpoint to mount on this url", required=False, default=None)
route = Attribute("URL route", required=True)
view = Attribute("View element", required=False, map_to="target", example="#post")
methods = Attribute(
"A list of comma separated HTTP methods",
type="commalist",
evaldefault=True,
required=False,
default="GET,POST",
example="GET,POST",
map_to="_methods",
)
handler = Attribute(
"A list of comma separated http status codes",
type="commalist",
evaldefault=False,
required=False,
default=[],
example="404",
map_to="_handlers",
)
name = Attribute("An optional name", required=False, default=None)
final = Attribute(
"Ignore further URLs if this route matches?", type="boolean", default=False
)
def lib_finalize(self, context):
if not self.check(context):
return
defaults = self.get_let_map(context)
params = self.get_parameters(context)
methods = params._methods
handlers = []
for h in params._handlers:
try:
handlers.append(StatusCode(h))
except KeyError:
raise errors.ElementError(
""""{}" is not a valid http status code""".format(h), element=self
)
target = params.target
url_target = self.document.lib.qualify_libname(self.libname)
try:
if target is None:
target = (url_target,)
else:
target = (
url_target,
self.document.qualify_element_ref(target, lib=self.lib),
)
except errors.ElementNotFoundError:
raise errors.ElementError(
"No view called '{}' in the project".format(target), element=self
)
if params.mountpoint is None:
mount_point = self.get_ancestor("mountpoint")
else:
_, mount_point = self.get_element(params.mountpoint)
if params.mount:
try:
_, element = self.archive.get_element(params.mount, lib=self.lib)
if not hasattr(element, "urlmapper"):
raise ValueError("element {} is not mountable".format(element))
mount_point.urlmapper.map(
params.route.rstrip("/") + "/*",
[url_target],
methods=methods,
handlers=handlers or None,
defaults=defaults,
)
mount_point.urlmapper.mount(
params.route, element.urlmapper, name=params.name, defaults=defaults
)
except Exception as e:
raise errors.ElementError(
text_type(e), element=self, diagnosis=getattr(e, "diagnosis", None)
)
else:
try:
mount_point.urlmapper.map(
params.route,
target,
methods=methods,
handlers=handlers or None,
name=params.name,
defaults=defaults,
final=params.final,
)
except ValueError as e:
raise errors.ElementError(text_type(e), element=self)
class Middleware(LogicElement):
"""Add middleware to a mountpoint"""
class Help:
synopsis = "add middleware to a mountpoint"
route = Attribute("Route", required=True)
methods = Attribute(
"A list of comma separated HTTP methods",
required=False,
type="commalist",
evaldefault=True,
default="*",
example="GET,POST",
map_to="_methods",
)
mountpoint = Attribute("Mount point", required=False)
stage = Attribute(
"Stage in request handling",
required=False,
default="request",
metavar="STAGE",
choices=["request", "response"],
)
macro = Attribute("Macro to call", required=False, default=None)
name = Attribute("An optional name", required=False, default=None)
def lib_finalize(self, context):
if not self.check(context):
return
params = self.get_parameters(context)
methods = params._methods
target = params.macro
url_target = self.document.lib.qualify_libname(self.libname)
if target is None:
target = (url_target,)
else:
target = (url_target, self.document.qualify_element_ref(target))
if params.mountpoint is None:
mount_point = self.get_ancestor("mountpoint")
else:
_, mount_point = self.get_element(params.mountpoint)
mapper = mount_point.middleware[params.stage]
_route = mapper.map(params.route, target, methods=methods, name=params.name)
class Mount(LogicElement):
"""Mount a library."""
class Help:
synopsis = "mount a library on a given URL"
app = Attribute("Application", required=True)
url = Attribute("Url", required=True)
mountpoint = Attribute("Mount point", required=False, default="main")
priority = Attribute(
"Priority (highest priority is checked first)",
type="integer",
required=False,
default=0,
)
def logic(self, context):
if self.archive.test_build:
return
self.archive.build_libs()
params = self.get_parameters(context)
app = self.archive.find_app(params.app)
server = self.get_ancestor("server")
url_params = self.get_let_map(context, check_missing=False)
url_params["app"] = app.name
mountpoint = app.lib.get_element_by_type_and_attribute(
"mountpoint", "name", params.mountpoint
)
app.mounts.append((params.mountpoint, params.url))
server.urlmapper.mount(
params.url,
mountpoint.urlmapper,
defaults=url_params,
name=app.name,
priority=params.priority,
)
for stage, urlmapper in server.middleware.items():
urlmapper.mount(
params.url,
mountpoint.middleware[stage],
defaults=url_params,
name=app.name,
priority=params.priority,
)
startup_log.debug(
"%s (%s) mounted on %s",
app,
params.mountpoint,
tools.normalize_url_path(params.url),
)
class GetURL(DataSetter):
"""Get a named URL."""
class Help:
synopsis = "get a named URL"
name = Attribute("URL name", required=True)
_from = Attribute("Application", type="application", default=None, evaldefault=True)
query = Attribute(
"Mapping expression to use as a query string",
metavar="EXPRESSION",
required=False,
default=None,
type="expression",
missing=False,
)
_with = Attribute(
"Extract URL values from this object",
type="expression",
required=False,
default=None,
)
base = Attribute("Base (protocol and domain) of the URL", default=None)
def get_value(self, context):
params = self.get_parameters(context)
query = params.query
app = self.get_app(context)
try:
if self.has_parameter("with"):
url_params = self.get_let_map(context)
url_params.update(params["with"])
else:
url_params = {
k: text_type(v) for k, v in iteritems(self.get_let_map(context))
}
for k, v in iteritems(url_params):
if not v:
self.throw(
"bad-value.parameter",
"URL parameter '{}' must not be blank or missing (it is {})".format(
k, to_expression(context, v)
),
)
url = context[".server"].get_url(app.name, params.name, url_params)
except MissingURLParameter as e:
self.throw("get-url.missing-parameter", text_type(e))
except RouteError as e:
self.throw("get-url.no-route", text_type(e))
if query and hasattr(query, "items"):
qs = moya_urlencode(query)
if qs:
url += "?" + qs
url = self.qualify(context, url)
return url
def qualify(self, context, url):
base = self.base(context)
if base is not None:
url = base.rstrip("/") + "/" + url.lstrip("/")
return url
class GetFqURL(GetURL):
"""Get a [i]fully qualified[/i] (including domain name and scheme) named URL."""
base = Attribute("Base (protocol and domain) of the URL", default=None)
class Help:
synopsis = "get a fully qualified URL"
def qualify(self, context, url):
base = self.base(context)
if base is None:
base = context[".sys.site.host"] or context[".request.host_url"]
url = base + url
return url
class Trace(DataSetter):
"""
Extract route information from a URL path.
Returns route matches in a list of dictionaries. Route matches have three keys;
[c]data[/c] is the url data (as returned in [c].url[/c]), [c]targets[/c] is a list of element references,
[c]name[/c] is the name of the matching URL.
If [c]app[/c] or [c]name[/c] is provided, this tag will return the first url route matching the given app / named url.
"""
class Help:
synopsis = "extract routing information from mounted URL paths"
example = """
<trace path=".request.path" dst="matches"/>
"""
server = Attribute(
"Server containing URL routes",
type="expression",
default=".server",
evaldefault=True,
)
path = Attribute(
"URL path to parse", type="expression", required=True, missing=False
)
method = Attribute("HTTP method", type="text", default="GET")
app = Attribute("Application name", required=False, default=None, type="text")
name = Attribute(
"Route name to find", required=False, type="commalist", default=None
)
def get_value(self, context):
server, path, method, app, name = self.get_parameters(
context, "server", "path", "method", "app", "name"
)
if "://" in path:
_, _, path = path.partition("://")
if not path.startswith("/"):
path = "/" + path
if app is None and name is None:
routes = []
for route_match in server.urlmapper.iter_routes(path, method):
if route_match is not None:
data, targets, name = route_match
routes.append({"data": data, "targets": targets, "name": name})
return routes
else:
for route_match in server.urlmapper.iter_routes(path, method):
data, targets, _name = route_match
if app is not None:
if data.get("app", None) != app:
continue
if name is not None:
if _name not in name:
continue
return {"data": data, "targets": targets, "name": _name}
else:
return None
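# Illustrative usage of the route-tracing API exercised above (added sketch; `server` is
# assumed to be an initialised Moya server object with mounted apps, and the path is made up):
#
#     for route_match in server.urlmapper.iter_routes('/blog/2023/05/', 'GET'):
#         if route_match is not None:
#             data, targets, name = route_match
#             print(name, data.get('app'), targets)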
def wrap_element_error(f):
def deco(self, context):
try:
for node in f(self, context):
yield node
except (errors.ElementError, logic.LogicFlowException):
raise
except Exception as e:
# import traceback; traceback.print_exc(e)
raise errors.ElementError(
text_type(e), self, diagnosis=getattr(e, "diagnosis", None)
)
return deco
class View(ContextElementBase, ContentElementMixin):
"""Define a view to handle a URL"""
class Help:
synopsis = "define a view to handle a URL"
content = Attribute("Content", type="elementref", required=False, default=None)
template = Attribute("Template", type="templates", required=False, default=None)
requires = Attribute(
"Permission expression", type="expression", required=False, default=None
)
withscope = | |
, (3, 0, None, None) , 0 , )),
(( 'Mileage' , 'Mileage' , ), 34100, (34100, (), [ (8, 1, None, None) , ], 1 , 4 , 4 , 0 , 136 , (3, 0, None, None) , 0 , )),
(( 'NoAging' , 'NoAging' , ), 34062, (34062, (), [ (16395, 10, None, None) , ], 1 , 2 , 4 , 0 , 140 , (3, 0, None, None) , 0 , )),
(( 'NoAging' , 'NoAging' , ), 34062, (34062, (), [ (11, 1, None, None) , ], 1 , 4 , 4 , 0 , 144 , (3, 0, None, None) , 0 , )),
(( 'OutlookInternalVersion' , 'OutlookInternalVersion' , ), 34130, (34130, (), [ (16387, 10, None, None) , ], 1 , 2 , 4 , 0 , 148 , (3, 0, None, None) , 0 , )),
(( 'OutlookVersion' , 'OutlookVersion' , ), 34132, (34132, (), [ (16392, 10, None, None) , ], 1 , 2 , 4 , 0 , 152 , (3, 0, None, None) , 0 , )),
(( 'Saved' , 'Saved' , ), 61603, (61603, (), [ (16395, 10, None, None) , ], 1 , 2 , 4 , 0 , 156 , (3, 0, None, None) , 0 , )),
(( 'Sensitivity' , 'Sensitivity' , ), 54, (54, (), [ (16387, 10, None, None) , ], 1 , 2 , 4 , 0 , 160 , (3, 0, None, None) , 0 , )),
(( 'Sensitivity' , 'Sensitivity' , ), 54, (54, (), [ (3, 1, None, None) , ], 1 , 4 , 4 , 0 , 164 , (3, 0, None, None) , 0 , )),
(( 'Size' , 'Size' , ), 3592, (3592, (), [ (16387, 10, None, None) , ], 1 , 2 , 4 , 0 , 168 , (3, 0, None, None) , 0 , )),
(( 'Subject' , 'Subject' , ), 55, (55, (), [ (16392, 10, None, None) , ], 1 , 2 , 4 , 0 , 172 , (3, 0, None, None) , 0 , )),
(( 'Subject' , 'Subject' , ), 55, (55, (), [ (8, 1, None, None) , ], 1 , 4 , 4 , 0 , 176 , (3, 0, None, None) , 0 , )),
(( 'UnRead' , 'UnRead' , ), 61468, (61468, (), [ (16395, 10, None, None) , ], 1 , 2 , 4 , 0 , 180 , (3, 0, None, None) , 0 , )),
(( 'UnRead' , 'UnRead' , ), 61468, (61468, (), [ (11, 1, None, None) , ], 1 , 4 , 4 , 0 , 184 , (3, 0, None, None) , 0 , )),
(( 'UserProperties' , 'UserProperties' , ), 63510, (63510, (), [ (16393, 10, None, "IID('{0006303D-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 188 , (3, 0, None, None) , 0 , )),
(( 'Close' , 'SaveMode' , ), 61475, (61475, (), [ (3, 1, None, None) , ], 1 , 1 , 4 , 0 , 192 , (3, 0, None, None) , 0 , )),
(( 'Copy' , 'Item' , ), 61490, (61490, (), [ (16393, 10, None, None) , ], 1 , 1 , 4 , 0 , 196 , (3, 0, None, None) , 0 , )),
(( 'Delete' , ), 61514, (61514, (), [ ], 1 , 1 , 4 , 0 , 200 , (3, 0, None, None) , 0 , )),
(( 'Display' , 'Modal' , ), 61606, (61606, (), [ (12, 17, None, None) , ], 1 , 1 , 4 , 1 , 204 , (3, 0, None, None) , 0 , )),
(( 'Move' , 'DestFldr' , 'Item' , ), 61492, (61492, (), [ (9, 1, None, "IID('{00063006-0000-0000-C000-000000000046}')") ,
(16393, 10, None, None) , ], 1 , 1 , 4 , 0 , 208 , (3, 0, None, None) , 0 , )),
(( 'PrintOut' , ), 61491, (61491, (), [ ], 1 , 1 , 4 , 0 , 212 , (3, 0, None, None) , 0 , )),
(( 'Save' , ), 61512, (61512, (), [ ], 1 , 1 , 4 , 0 , 216 , (3, 0, None, None) , 0 , )),
(( 'SaveAs' , 'Path' , 'Type' , ), 61521, (61521, (), [ (8, 1, None, None) ,
(12, 17, None, None) , ], 1 , 1 , 4 , 1 , 220 , (3, 0, None, None) , 0 , )),
(( 'Links' , 'Links' , ), 62469, (62469, (), [ (16393, 10, None, "IID('{0006308A-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 224 , (3, 0, None, None) , 0 , )),
]
_SyncObject_vtables_dispatch_ = 1
_SyncObject_vtables_ = [
(( 'Application' , 'Application' , ), 61440, (61440, (), [ (16393, 10, None, "IID('{00063001-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 28 , (3, 0, None, None) , 0 , )),
(( 'Class' , 'Class' , ), 61450, (61450, (), [ (16387, 10, None, None) , ], 1 , 2 , 4 , 0 , 32 , (3, 0, None, None) , 0 , )),
(( 'Session' , 'Session' , ), 61451, (61451, (), [ (16393, 10, None, "IID('{00063002-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 36 , (3, 0, None, None) , 0 , )),
(( 'Parent' , 'Parent' , ), 61441, (61441, (), [ (16393, 10, None, None) , ], 1 , 2 , 4 , 0 , 40 , (3, 0, None, None) , 0 , )),
(( 'Name' , 'Name' , ), 8448, (8448, (), [ (16392, 10, None, None) , ], 1 , 2 , 4 , 0 , 44 , (3, 0, None, None) , 0 , )),
(( 'Start' , ), 8449, (8449, (), [ ], 1 , 1 , 4 , 0 , 48 , (3, 0, None, None) , 0 , )),
(( 'Stop' , ), 8450, (8450, (), [ ], 1 , 1 , 4 , 0 , 52 , (3, 0, None, None) , 0 , )),
]
_TaskItem_vtables_dispatch_ = 1
_TaskItem_vtables_ = [
(( 'Application' , 'Application' , ), 61440, (61440, (), [ (16393, 10, None, "IID('{00063001-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 28 , (3, 0, None, None) , 0 , )),
(( 'Class' , 'Class' , ), 61450, (61450, (), [ (16387, 10, None, None) , ], 1 , 2 , 4 , 0 , 32 , (3, 0, None, None) , 0 , )),
(( 'Session' , 'Session' , ), 61451, (61451, (), [ (16393, 10, None, "IID('{00063002-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 36 , (3, 0, None, None) , 0 , )),
(( 'Parent' , 'Parent' , ), 61441, (61441, (), [ (16393, 10, None, None) , ], 1 , 2 , 4 , 0 , 40 , (3, 0, None, None) , 0 , )),
(( 'Actions' , 'Actions' , ), 63511, (63511, (), [ (16393, 10, None, "IID('{0006303E-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 44 , (3, 0, None, None) , 0 , )),
(( 'Attachments' , 'Attachments' , ), 63509, (63509, (), [ (16393, 10, None, "IID('{0006303C-0000-0000-C000-000000000046}')") , ], 1 , 2 , 4 , 0 , 48 , (3, 0, None, None) , 0 , )),
(( 'BillingInformation' , 'BillingInformation' , ), 34101, (34101, (), [ (16392, 10, None, None) , ], 1 , 2 , 4 , 0 , 52 , (3, 0, None, None) , 0 , )),
(( 'BillingInformation' , 'BillingInformation' , ), 34101, (34101, (), [ (8, 1, None, None) , ], 1 , 4 , 4 , 0 , 56 , (3, 0, None, None) , 0 , )),
(( 'Body' , 'Body' , ), 37120, (37120, (), [ (16392, 10, None, None) , ], 1 , 2 , 4 , 0 , 60 , (3, 0, None, None) , | |
<gh_stars>1-10
# -----------------------------------------------------------------------
# Name: inputs.py
# Purpose: Read in input parameters to set up and run the model.
# Read in meteorological data, and if desired calculate derived
# data to run the model.
# Author: <NAME>
# Created: 06/11/2018
# Copyright:(c) <NAME> and NIVA, 2018
# Licence:
# -----------------------------------------------------------------------
""" Read in and process input parameters and data
"""
import pandas as pd, numpy as np
import calendar
import math
def read_input_data(params_fpath):
""" Read SimplyP setup data from Excel template.
Args:
params_fpath: Raw str. Path to completed Excel input template.
Returns:
Tuple (p_SU, dynamic_options, p, p_LU, p_SC, p_struc, met_df, obs_dict).
Parameter values. Indices are the parameter names (which match the input parameter sheet).
Values are the parameters.
p_SU: Series. Setup parameters
dynamic_options: Series. Subset of dynamic setup parameters
p: Series. Parameters which are constant over land use and sub-catchment/reach
p_LU: Dataframe. Land use parameters. One column per land use type ('A','S','IG','NC')
p_SC: Dataframe. Sub-catchment and reach parameters. One column per sub-catchment/reach
p_struc: Dataframe.
met_df: Dataframe. Meteorological data and data derived from it
(if desired, including results from snow accumulation & melt module, PET)
obs_dict: Dict. Observed discharge and chemistry data.
Keys: reach number. Values: Dataframe with datetime index and columns for water quality
variables. Columns are the observed variables available for that reach.
"""
# ----------------------------------------------------------------------------------------
# USER SET-UP PARAMETERS
p_SU = pd.read_excel(params_fpath, sheet_name='Setup', index_col=0, usecols="A,C")
p_SU = p_SU['Value'] # Convert to a series
# Extract user set-up parameters for dynamic dict
# **dynamic terrestrial P inputs and effluent inputs not yet implemented**
dynamic_options = p_SU[['Dynamic_EPC0', 'Dynamic_effluent_inputs',
'Dynamic_terrestrialP_inputs','Dynamic_erodibility']]
# ----------------------------------------------------------------------------------------
# MODEL PARAMETERS
# CONSTANT PARAMS: Parameters that are constant over land use, sub-catchment or reach.
# Values in col 'Value'
p = pd.read_excel(params_fpath, sheet_name='Constant', index_col=0, usecols="B,E")
p = p['Value'] # Convert to a series
# LAND USE PARAMETERS. Values in cols A,S,IG,NC
p_LU = pd.read_excel(params_fpath, sheet_name='LU', index_col=0, usecols="B,E,F,G,H")
# SUB-CATCHMENT & REACH PARAMETERS: Values in cols '1', '2',..
# Some fiddling required to parse the right number of columns, according to the number of SCs
p['SC_list'] = np.arange(1,p_SU.n_SC+1)
lastCol = chr(ord('E')+p_SU.n_SC-1) # Last column in excel sheet to be parsed
if p_SU.n_SC ==1:
usecols_str = "B,E"
else:
usecols_str = "B,E:%s" %lastCol
p_SC = pd.read_excel(params_fpath, sheet_name='SC_reach', index_col=0, usecols=usecols_str)
# REACH STRUCTURE PARAMETERS
# Describe which reaches flow into each other, and whether to sum fluxes to produce an input
# to a water body (e.g. a lake or coastal zone)
p_struc = pd.read_excel(params_fpath, sheet_name='Reach_structure', index_col=0, usecols="A,B,C")
p_struc.columns = ['Upstream_SCs','In_final_flux?'] # Shorten column names
# Easy to make some mistakes with the reach parameters, so add a couple of checks
if p_SU.n_SC != len(p_struc['Upstream_SCs']):
raise ValueError("The number of sub-catchments specified in your 'Setup' parameter sheet doesn't \nmatch the number of rows in your 'Reach_structure' sheet")
if p_SU.n_SC != len(p_SC.columns):
raise ValueError("The number of columns in your 'SC_reach' sheet should match the number of sub-catchments specified in your 'Setup' parameter sheet")
# Print some output
print ('Parameter values successfully read in')
# -----------------------------------------------------------------------------------------
# MET DATA
# Assume constant met data over the catchment. This could be amended in the future.
met_df = pd.read_csv(p_SU.metdata_fpath, parse_dates=True, dayfirst=True, index_col=0)
met_df = met_df.truncate(before=p_SU.st_dt, after=p_SU.end_dt) # Truncate to the desired period
print ('Input meteorological data read in')
# If desired, run SNOW MODULE
if p_SU.inc_snowmelt == 'y':
met_df = snow_hydrol_inputs(p['D_snow_0'], p['f_DDSM'], met_df)
print ('Snow accumulation and melt module run to estimate snowmelt inputs to the soil')
else:
met_df.rename(columns={'Precipitation':'P'}, inplace=True)
# If PET isn't in the input met data, calculate it using Thornthwaite's 1948 equation
if 'PET' not in met_df.columns:
met_df = daily_PET(latitude=p['latitude'], met_df=met_df)
print ('PET estimated using the Thornthwaite method')
# -----------------------------------------------------------------------------------------
# OBSERVATIONS
# If file paths provided, read in observed data
# Read from excel files (one for each of Q and chem). Excel files should have one sheet per
# sub-catchment/reach, numbered 1, 2, etc. Obs for each reach are read into a dataframe.
# Each reach is stored as a separate df in obs_dict (key is the reach number, as an integer).
# Units of Q: m3/s, Units of chemistry: mg/l
# If a string has been provided for the Q observations, try reading in
if isinstance(p_SU.Qobsdata_fpath, str):
Qobs_xl = pd.ExcelFile(p_SU.Qobsdata_fpath)
SC_with_Qobs = [int(x) for x in Qobs_xl.sheet_names] # List of sub-catchments with Q data
print ('Observed discharge data read in')
else:
SC_with_Qobs = []
# If a string has been provided for water chem obs, try reading in
if isinstance(p_SU.chemObsData_fpath, str):
chemObs_xl = pd.ExcelFile(p_SU.chemObsData_fpath)
SC_with_chemObs = [int(x) for x in chemObs_xl.sheet_names] # List of sub-catchments with chemistry data
print ('Observed water chemistry data read in')
else:
SC_with_chemObs = []
obs_dict = {} # Key: sub-catchment number (1,2,...); only SCs with obs are included
# Returns dataframe of observed data (if any)
for SC in p['SC_list']: # Loop through all sub-catchments being simulated
df_li = [] # List of Q and chem dataframes for the reach; may be empty or have up to 2 dfs
if SC in SC_with_Qobs:
Qobs_df = pd.read_excel(p_SU.Qobsdata_fpath, sheet_name=str(SC), index_col=0)
Qobs_df = Qobs_df.truncate(before=p_SU.st_dt, after=p_SU.end_dt)
df_li.append(Qobs_df)
if SC in SC_with_chemObs:
chemObs_df = pd.read_excel(p_SU.chemObsData_fpath, sheet_name=str(SC), index_col=0)
chemObs_df = chemObs_df.truncate(before=p_SU.st_dt, after=p_SU.end_dt)
df_li.append(chemObs_df)
# If this SC has observations, add it to the dictionary of observations (obs_dict)
if len(df_li)>0:
obs_df = pd.concat(df_li, axis=1) # If have both Q & chem data, combine into one df
obs_dict[SC] = obs_df # Add to dictionary
# -----------------------------------------------------------------------------------------
return (p_SU, dynamic_options, p, p_LU, p_SC, p_struc, met_df, obs_dict)
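# Example call (added for illustration; the template path below is hypothetical):
#
#     p_SU, dynamic_options, p, p_LU, p_SC, p_struc, met_df, obs_dict = read_input_data(
#         r'C:\SimplyP\input_template.xlsx')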
#########################################################################################
def snow_hydrol_inputs(D_snow_0, f_DDSM, met_df):
""" Calculate snow accumulation and melt i.e. estimates total hydrological input to soil box
as (rain + snowmelt). Source for priors for DDF:
http://directives.sc.egov.usda.gov/OpenNonWebContent.aspx?content=17753.wba
Future potential extensions:
(1) Add options for how temperature is assumed to vary through the day, e.g. triangular or
sinuosoidal variations to get a more accurate portrayal of the degree-days above the
threshold
(2) Consider setting ET to 0 when D_snow > 0
Args:
D_snow_0: Float. Initial snow depth (mm)
f_DDSM: Float. Degree-day factor for snow melt (mm/degree-day deg C)
met_df: Dataframe. Met data with cols T_air, PET, Precipitation
Returns:
met_df with additional columns [P_snow, P_rain, P_melt, D_snow_start, D_snow_end, P].
Of these, P is the hydrological input to the soil store (mm/d)
"""
# Precipitation falling as snow (mm/d, as water equivalents)
met_df.loc[:,'P_snow'] = met_df['Precipitation'].where(met_df['T_air'] < 0) # = total pptn if air T<0 (.ix is deprecated)
met_df['P_snow'].fillna(0, inplace=True) # otherwise, =0
# Precipitation falling as rain (mm/d)
met_df['P_rain'] = met_df['Precipitation'] - met_df['P_snow']
# Potential daily snow melt (unlimited by snow pack depth) (mm/day)
met_df['P_melt'] = f_DDSM*(met_df['T_air']-0)
met_df.loc[met_df['P_melt'] < 0, 'P_melt'] = 0 # Set negative values to 0 (i.e. only melt when T_air>0)
# Snow pack depth (mm), as end of day depth = start of day depth + inputs - melt, where melt is
# limited by the depth wherever necessary.
met_df['D_snow_start'], met_df['D_snow_end'] = np.nan, np.nan # Set-up
# First time-step manually, to take initial condition into account
day0 = met_df.index[0] # pandas .ix is deprecated, so index by label via .loc
met_df.loc[day0,'D_snow_start'] = D_snow_0 # Assign user-supplied starting depth to first row
met_df.loc[day0,'P_melt'] = np.minimum(met_df.loc[day0,'P_melt'], met_df.loc[day0,'D_snow_start']) # Melt limited by depth
met_df.loc[day0,'D_snow_end'] = (met_df.loc[day0,'D_snow_start'] +
met_df.loc[day0,'P_snow'] - met_df.loc[day0,'P_melt']) # Change over day
# Calculate for subsequent days
for idx in range(1, len(met_df)):
    day, prev_day = met_df.index[idx], met_df.index[idx-1]
    met_df.loc[day,'D_snow_start'] = met_df.loc[prev_day,'D_snow_end']
    met_df.loc[day,'P_melt'] = np.minimum(met_df.loc[day,'P_melt'], met_df.loc[day,'D_snow_start'])
    met_df.loc[day,'D_snow_end'] = met_df.loc[day,'D_snow_start'] + met_df.loc[day,'P_snow'] - met_df.loc[day,'P_melt']
# Hydrological input to soil box
met_df.loc[:,'P'] = met_df['P_rain'] + met_df['P_melt']
return met_df
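# Worked example (added for illustration; the numbers are made up, not model defaults): with a
# degree-day factor f_DDSM = 2.7 mm per degree-day and T_air = 3 degC, the potential melt for
# the day is 2.7 * 3 = 8.1 mm. If the snow pack at the start of the day is only 5 mm, melt is
# capped at 5 mm, so D_snow_end = 5 + P_snow - 5 and the hydrological input to the soil store
# is P = P_rain + 5 mm.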
# ###############################################################################################################
# Functions for estimating daily Thornthwaite potential evapotranspiration (PET) from monthly estimates
# Calculate potential evapotranspiration using the Thornthwaite (1948 method)
# Many functions copied from https://github.com/woodcrafty/PyETo/blob/master/pyeto
# :copyright: (c) 2015 by <NAME>.
# :license: BSD 3-Clause
# Nice comparison of some different PET methods, inc. Thornthwaite:
# <NAME> Singh (2001), Hydrol. Proc.
# http://folk.uio.no/chongyux/papers_SCI/HYP_5.pdf
# ----------
# Thornthwaite CW (1948) An approach toward a rational classification of
# climate. Geographical Review, 38, 55-94.
# Set up
_MONTHDAYS = (31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
_LEAP_MONTHDAYS = (31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31)
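# Added sketch of the standard Thornthwaite (1948) monthly PET formula that the functions below
# build on. This is an illustrative re-implementation, not the PyETo code itself; the function
# name is invented for this example. `monthly_t` is a 12-element sequence of mean monthly air
# temperatures (deg C) and `monthly_mean_dlh` the corresponding mean daylight hours per day.
def _thornthwaite_monthly_sketch(monthly_t, monthly_mean_dlh, year=None):
    leap = (year is not None) and calendar.isleap(year)
    month_days = _LEAP_MONTHDAYS if leap else _MONTHDAYS
    # Annual heat index I and empirical exponent a
    I = sum((t / 5.0) ** 1.514 for t in monthly_t if t > 0)
    if I == 0:
        return [0.0] * 12
    a = (6.75e-07 * I ** 3) - (7.71e-05 * I ** 2) + (1.792e-02 * I) + 0.49239
    pet = []
    for t, dlh, days in zip(monthly_t, monthly_mean_dlh, month_days):
        t = max(t, 0.0)  # months with sub-zero mean temperature contribute no PET
        # 16 * (L/12) * (N/30) * (10*T/I)^a, written as 1.6 * ... * 10
        pet.append(1.6 * (dlh / 12.0) * (days / 30.0) * ((10.0 * t / I) ** a) * 10.0)
    return pet  # mm/month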
self.nets['style_encoding'] = build_generator(style_encoding)
self.nets_ema['style_encoding'] = build_generator(style_encoding)
if discriminator:
self.nets['discriminator'] = build_discriminator(discriminator)
self.latent_dim = latent_dim
self.lambda_reg = lambda_reg
self.lambda_sty = lambda_sty
self.lambda_cyc = lambda_cyc
# self.nets['generator'].apply(he_init)
# self.nets['style_encoder'].apply(he_init)
# self.nets['mapping_network'].apply(he_init)
# self.nets['discriminator'].apply(he_init)
self.phases = []
for name, reg_interval in [('G', G_reg_interval), ('D', D_reg_interval)]:
if reg_interval is None:
# opt = dnnlib.util.construct_class_by_name(params=module.parameters(),
# **opt_kwargs) # subclass of torch.optim.Optimizer
# phases += [dnnlib.EasyDict(name=name + 'both', module=module, opt=opt, interval=1)]
pass
else: # Lazy regularization.
self.phases += [dict(name=name + 'main', interval=1)]
self.phases += [dict(name=name + 'reg', interval=reg_interval)]
self.z_dim = self.nets['mapping'].z_dim
self.batch_idx = 0
# loss config.
self.r1_gamma = r1_gamma
self.l1_weight = l1_weight
self.vgg_weight = vgg_weight
self.pl_weight = pl_weight
self.contextual_weight = contextual_weight
self.mask_weight = mask_weight
self.style_mixing_prob = style_mixing_prob
self.vgg19_ckpt1 = vgg19_ckpt1
self.vgg19_ckpt2 = vgg19_ckpt2
self.pl_batch_shrink = pl_batch_shrink
# Per-class weights (6 classes)
class_weight = paddle.to_tensor([1., 2., 2., 3., 3., 3.])
self.ce_parsing = paddle.nn.CrossEntropyLoss(ignore_index=255, weight=class_weight)
if self.vgg_weight > 0:
self.criterionVGG = VGGLoss(ckpt_path=self.vgg19_ckpt1, requires_grad=False)
if self.contextual_weight > 0:
contextual_vgg_path = self.vgg19_ckpt2
self.contextual_vgg = VGG19_feature_color_torchversion()
self.contextual_vgg.set_state_dict(paddle.load(contextual_vgg_path))
self.contextual_vgg.eval()
for param in self.contextual_vgg.parameters():
param.stop_gradient = True
self.contextual_layers = ['r12','r22','r32','r42','r52']
self.contextual_forward_loss = ContextualLoss_forward()
self.augment_pipe = None
def setup_input(self, input):
"""Unpack input data from the dataloader and perform necessary pre-processing steps.
Args:
input (dict): include the data itself and its metadata information.
The option 'direction' can be used to swap images in domain A and domain B.
"""
self.input = input
def forward(self):
"""Run forward pass; called by both functions <optimize_parameters> and <test>."""
pass
def _reset_grad(self, optims):
for optim in optims.values():
optim.clear_gradients()
def run_G(self, z, c, pose, const_feats, denorm_upper_mask, denorm_lower_mask, denorm_upper_input, denorm_lower_input, sync):
cat_feats = {}
for _, cat_feat in enumerate(const_feats):
h, _ = cat_feat.shape[2], cat_feat.shape[3]
cat_feats[str(h)] = cat_feat
pose_feat = self.nets['const_encoding'](pose)
ws = self.nets['mapping'](z, c)
# NOTE: this style-mixing branch still uses torch ops and appears not to have been ported
# to paddle; it only runs when style_mixing_prob > 0.
if self.style_mixing_prob > 0:
with torch.autograd.profiler.record_function('style_mixing'):
cutoff = torch.empty([], dtype=torch.int64, device=ws.device).random_(1, ws.shape[1])
cutoff = torch.where(torch.rand([], device=ws.device) < self.style_mixing_prob, cutoff, torch.full_like(cutoff, ws.shape[1]))
ws[:, cutoff:] = self.G_mapping(torch.randn_like(z), c, skip_w_avg_update=True)[:, cutoff:]
img, finetune_img, pred_parsing = self.nets['synthesis'](ws, pose_feat, cat_feats, denorm_upper_input, denorm_lower_input, denorm_upper_mask, denorm_lower_mask)
return img, finetune_img, pred_parsing, ws
def run_D(self, img, c, sync):
if self.augment_pipe is not None:
img = self.augment_pipe(img)
logits = self.nets['discriminator'](img, c)
return logits
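# Background note (added): the loss terms built in accumulate_gradients() below use the
# non-saturating GAN formulation, where softplus(-logits) == -log(sigmoid(logits)) is the
# generator / real-sample term and softplus(logits) == -log(1 - sigmoid(logits)) is the
# discriminator term on fakes. A quick standalone check (hypothetical snippet, not part of
# the model):
#
#     import paddle
#     import paddle.nn.functional as F
#     logits = paddle.to_tensor([0.3])
#     assert float(paddle.abs(F.softplus(-logits) + paddle.log(F.sigmoid(logits)))) < 1e-6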
# Gradient accumulation (effectively enlarges the batch size).
def accumulate_gradients(self, phase, real_img, gen_z, style_input, retain, pose, denorm_upper_input,
denorm_lower_input, denorm_upper_mask, denorm_lower_mask, gt_parsing, sync, gain):
assert phase in ['Gmain', 'Greg', 'Gboth', 'Dmain', 'Dreg', 'Dboth']
do_Gmain = (phase in ['Gmain', 'Gboth'])
do_Dmain = (phase in ['Dmain', 'Dboth'])
do_Gpl = (phase in ['Greg', 'Gboth']) and (self.pl_weight != 0)
do_Dr1 = (phase in ['Dreg', 'Dboth']) and (self.r1_gamma != 0)
# dic2 = np.load('../data77.npz')
real_c, cat_feats = self.nets['style_encoding'](style_input, retain)
gen_c = real_c # reuse real_c as gen_c, i.e. as the conditioning input C of the cGAN
loss_numpy = {}
# Gmain: Maximize logits for generated images.
if do_Gmain:
gen_img, gen_finetune_img, pred_parsing, _gen_ws = self.run_G(gen_z, gen_c, pose, cat_feats,
denorm_upper_mask, denorm_lower_mask, \
denorm_upper_input, denorm_lower_input,
sync=(sync and not do_Gpl)) # May get synced by Gpl.
# Note: the conditional-GAN pairs (gen_img, gen_c) and (real_img, real_c) are not strictly aligned here.
# If pose conditioning were added, gen_img and real_img should correspond strictly and share a single real pose, i.e. (gen_img, real_pose) and (real_img, real_pose).
# Depending on the setup, L1 and VGG losses may also be added.
gen_logits = self.run_D(gen_img, gen_c, sync=False)
gen_finetune_logits = self.run_D(gen_finetune_img, gen_c, sync=False)
loss_Gmain = paddle.nn.functional.softplus(-gen_logits) # -log(sigmoid(gen_logits))
loss_Gmain = loss_Gmain.mean()
loss_Gmain_finetune = paddle.nn.functional.softplus(-gen_finetune_logits) # -log(sigmoid(gen_logits))
loss_Gmain_finetune = loss_Gmain_finetune.mean()
loss_numpy['loss_Gmain'] = loss_Gmain.numpy()
loss_numpy['loss_Gmain_finetune'] = loss_Gmain_finetune.numpy()
# l1 loss
loss_G_L1 = 0
loss_G_finetune_L1 = 0
if self.l1_weight > 0:
loss_G_L1 = paddle.nn.L1Loss()(gen_img, real_img) * self.l1_weight
loss_G_finetune_L1 = paddle.nn.L1Loss()(gen_finetune_img, real_img) * self.l1_weight
loss_numpy['loss_G_L1'] = loss_G_L1.numpy()
loss_numpy['loss_G_finetune_L1'] = loss_G_finetune_L1.numpy()
loss_mask = 0
if self.mask_weight > 0:
gt_parsing_labels = paddle.cast(gt_parsing, dtype=paddle.int64)[:, 0, :, :]
loss_mask = self.ce_parsing(pred_parsing.transpose((0, 2, 3, 1)), gt_parsing_labels)
loss_mask = paddle.mean(loss_mask) * self.mask_weight
loss_numpy['loss_mask'] = loss_mask.numpy()
# vgg loss
loss_G_VGG = 0
loss_G_finetune_VGG = 0
if self.vgg_weight > 0:
loss_G_VGG = self.criterionVGG(gen_img, real_img) * self.vgg_weight
loss_G_VGG = loss_G_VGG.mean()
loss_G_finetune_VGG = self.criterionVGG(gen_finetune_img, real_img) * self.vgg_weight
loss_G_finetune_VGG = loss_G_finetune_VGG.mean()
loss_numpy['loss_G_VGG'] = loss_G_VGG.numpy()
loss_numpy['loss_G_finetune_VGG'] = loss_G_finetune_VGG.numpy()
loss_G = (loss_Gmain + loss_Gmain_finetune) / 2 + \
(loss_G_L1 + loss_G_finetune_L1) / 2 + \
(loss_G_VGG + loss_G_finetune_VGG) / 2 + loss_mask
loss_G = loss_G * float(gain)
loss_G.backward() # gain is this phase's training interval, used to rescale the loss.
# Gpl: Apply path length regularization.
# NOTE: this path-length regularization branch appears incomplete in this port (run_G is
# called here without the denorm arguments it expects, and self.pl_mean / self.pl_decay are
# not set in the code shown); it only executes when pl_weight != 0.
if do_Gpl:
batch_size = gen_z.shape[0] // self.pl_batch_shrink
# with misc.ddp_sync(self.G_flownet, sync):
# flow = self.G_flownet(torch.cat((cloth[:batch_size], aff_pose[:batch_size]), dim=1))
# warp_cloth = F.grid_sample(cloth[:batch_size, :3, :, :], flow)
gen_img, gen_ws = self.run_G(gen_z[:batch_size], gen_c[:batch_size], pose[:batch_size],
[cat_feat[:batch_size] for cat_feat in cat_feats], sync=sync)
pl_noise = paddle.randn_like(gen_img) / np.sqrt(gen_img.shape[2] * gen_img.shape[3])
pl_grads = paddle.grad(
outputs=[(gen_img * pl_noise).sum()],
inputs=[gen_ws],
create_graph=True, # the final loss contains gradients, so gradients of gradients are needed and the backward graph must be built.
retain_graph=True)[0]
pl_lengths = pl_grads.square().sum(2).mean(1).sqrt()
pl_mean = self.pl_mean.lerp(pl_lengths.mean(), self.pl_decay)
self.pl_mean.copy_(pl_mean.detach())
pl_penalty = (pl_lengths - pl_mean).square()
loss_Gpl = pl_penalty * self.pl_weight
# loss_numpy['loss_Gpl'] = loss_Gpl.numpy()
loss_Gpl = (gen_img[:, 0, 0, 0] * 0 + loss_Gpl).mean() * float(gain)
loss_Gpl.backward() # gain is this phase's training interval, used to rescale the loss.
# Dmain: Minimize logits for generated images.
loss_Dgen = 0
loss3 = 0.0
if do_Dmain:
gen_img, gen_finetune_img, _, _gen_ws = self.run_G(gen_z, gen_c, pose, cat_feats, denorm_upper_mask,
denorm_lower_mask, \
denorm_upper_input, denorm_lower_input, sync=False)
gen_logits = self.run_D(gen_img, gen_c, sync=False) # Gets synced by loss_Dreal.
gen_finetune_logits = self.run_D(gen_finetune_img, gen_c, sync=False)
loss_Dgen = paddle.nn.functional.softplus(gen_logits) # -log(1 - sigmoid(gen_logits))
loss_Dgen_finetune = paddle.nn.functional.softplus(gen_finetune_logits) # -log(1 - sigmoid(gen_logits))
loss_Dgen = loss_Dgen.mean()
loss_Dgen_finetune = loss_Dgen_finetune.mean()
loss_numpy['loss_Dgen'] = loss_Dgen.numpy()
loss_numpy['loss_Dgen_finetune'] = loss_Dgen_finetune.numpy()
loss3 = ((loss_Dgen + loss_Dgen_finetune) / 2) * float(gain)
# Dmain: Maximize logits for real images.
# Dr1: Apply R1 regularization.
if do_Dmain or do_Dr1:
name = 'Dreal_Dr1' if do_Dmain and do_Dr1 else 'Dreal' if do_Dmain else 'Dr1'
real_img_tmp = real_img.detach()
real_img_tmp.stop_gradient = not do_Dr1
real_logits = self.run_D(real_img_tmp, real_c, sync=sync)
loss_Dreal = 0
if do_Dmain:
loss_Dreal = paddle.nn.functional.softplus(-real_logits) # -log(sigmoid(real_logits))
loss_numpy['loss_Dreal'] = loss_Dreal.numpy().mean()
loss_Dr1 = 0
if do_Dr1:
r1_grads = paddle.grad(
outputs=[real_logits.sum()],
inputs=[real_img_tmp],
create_graph=True, # the final loss contains gradients, so gradients of gradients are needed and the backward graph must be built.
retain_graph=True)[0]
# r1_grads = paddle.grad(outputs=real_logits.sum(),
# inputs=real_img_tmp,
# create_graph=True)[0] # the final loss contains gradients, so the backward graph must be built.
r1_penalty = r1_grads.square().sum([1, 2, 3])
loss_Dr1 = r1_penalty * (self.r1_gamma / 2)
loss_numpy['loss_Dr1'] = loss_Dr1.numpy().mean()
loss4 = (loss_Dreal + loss_Dr1).mean() * float(gain)
if do_Dmain:
loss4 += loss3
loss4.backward() # gain is this phase's training interval, used to rescale the loss.
return loss_numpy
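# Background note (added): the Dr1 branch above implements R1 regularization,
# loss_Dr1 = (r1_gamma / 2) * || d D(x_real) / d x_real ||^2, evaluated on real images only.
# Because the 'Dreg' phase runs once every D_reg_interval minibatches (lazy regularization),
# the loss is multiplied by `gain` so that its time-averaged contribution matches applying
# the penalty at every step.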
def train_iter(self, optimizers=None):
phase_real = self.input[0]
phase_pose = self.input[1]
phase_norm_img = self.input[2]
phase_norm_img_lower = self.input[3]
phase_denorm_upper_img = self.input[4]
phase_denorm_lower_img = self.input[5]
phase_gt_parsing = self.input[7]
phase_denorm_upper_mask = self.input[8]
phase_denorm_lower_mask = self.input[9]
phase_retain_mask = self.input[12]
# miemie2013: debugging code (kept commented out)
# dic2 = np.load('../train_data.npz')
# phase_real = paddle.to_tensor(dic2['phase_real'], dtype=phase_real.dtype)
# phase_pose = paddle.to_tensor(dic2['phase_pose'], dtype=phase_pose.dtype)
# phase_norm_img = paddle.to_tensor(dic2['phase_norm_img'], dtype=phase_norm_img.dtype)
# phase_norm_img_lower = paddle.to_tensor(dic2['phase_norm_img_lower'], dtype=phase_norm_img_lower.dtype)
# phase_denorm_upper_img = paddle.to_tensor(dic2['phase_denorm_upper_img'], dtype=phase_denorm_upper_img.dtype)
# phase_denorm_lower_img = paddle.to_tensor(dic2['phase_denorm_lower_img'], dtype=phase_denorm_lower_img.dtype)
# phase_gt_parsing = paddle.to_tensor(dic2['phase_gt_parsing'], dtype=phase_gt_parsing.dtype)
# phase_denorm_upper_mask = paddle.to_tensor(dic2['phase_denorm_upper_mask'], dtype=phase_denorm_upper_mask.dtype)
# phase_denorm_lower_mask = paddle.to_tensor(dic2['phase_denorm_lower_mask'], dtype=phase_denorm_lower_mask.dtype)
# phase_retain_mask = paddle.to_tensor(dic2['phase_retain_mask'], dtype=phase_retain_mask.dtype)
# phase_real2 = paddle.to_tensor(dic2['phase_real'], dtype=phase_real.dtype)
# phase_pose2 = paddle.to_tensor(dic2['phase_pose'], dtype=phase_pose.dtype)
# phase_norm_img2 = paddle.to_tensor(dic2['phase_norm_img'], dtype=phase_norm_img.dtype)
# phase_norm_img_lower2 = paddle.to_tensor(dic2['phase_norm_img_lower'], dtype=phase_norm_img_lower.dtype)
# phase_denorm_upper_img2 = paddle.to_tensor(dic2['phase_denorm_upper_img'], dtype=phase_denorm_upper_img.dtype)
# phase_denorm_lower_img2 = paddle.to_tensor(dic2['phase_denorm_lower_img'], dtype=phase_denorm_lower_img.dtype)
# phase_gt_parsing2 = paddle.to_tensor(dic2['phase_gt_parsing'], dtype=phase_gt_parsing.dtype)
# phase_denorm_upper_mask2 = paddle.to_tensor(dic2['phase_denorm_upper_mask'], dtype=phase_denorm_upper_mask.dtype)
# phase_denorm_lower_mask2 = paddle.to_tensor(dic2['phase_denorm_lower_mask'], dtype=phase_denorm_lower_mask.dtype)
# phase_retain_mask2 = paddle.to_tensor(dic2['phase_retain_mask'], dtype=phase_retain_mask.dtype)
phase_real_tensor = paddle.cast(phase_real, dtype=paddle.float32) / 127.5 - 1
phase_parts_tensor = paddle.cast(phase_norm_img, dtype=paddle.float32) / 127.5 - 1
phase_parts_lower_tensor = paddle.cast(phase_norm_img_lower, dtype=paddle.float32) / 127.5 - 1
phase_parts_tensor = paddle.concat([phase_parts_tensor, phase_parts_lower_tensor], 1)
phase_denorm_upper_img_tensor = paddle.cast(phase_denorm_upper_img, dtype=paddle.float32) / 127.5 - 1
phase_denorm_lower_img_tensor = paddle.cast(phase_denorm_lower_img, dtype=paddle.float32) / 127.5 - 1
phase_denorm_upper_mask_tensor = paddle.cast(phase_denorm_upper_mask, dtype=paddle.float32)
phase_denorm_lower_mask_tensor = paddle.cast(phase_denorm_lower_mask, dtype=paddle.float32)
phase_pose_tensor = paddle.cast(phase_pose, dtype=paddle.float32) / 127.5 - 1
phase_retain_mask = paddle.cast(phase_retain_mask, dtype=paddle.float32)
phase_head_mask = phase_retain_mask
phase_head_tensor = phase_head_mask * phase_real_tensor - (1 - phase_head_mask)
phase_pose_tensor = paddle.concat([phase_pose_tensor, phase_head_tensor], 1)
phase_gt_parsing_tensor = paddle.cast(phase_gt_parsing, dtype=paddle.float32)
# process head
phase_retain_tensor = phase_head_tensor
phases = self.phases
batch_size = phase_real_tensor.shape[0]
all_gen_z = None
num_gpus = 1 # number of GPUs
batch_gpu = batch_size // num_gpus # batch size per GPU
if self.z_dim > 0:
all_gen_z = paddle.randn([len(phases) * batch_size, self.z_dim]) # noise for every training phase on each GPU
else:
all_gen_z = paddle.randn([len(phases) * batch_size, 1]) # noise for every training phase on each GPU
phases_all_gen_z = paddle.split(all_gen_z, num_or_sections=len(phases)) # noise for each training phase
all_gen_z = [paddle.split(phase_gen_z, num_or_sections=num_gpus) for phase_gen_z in phases_all_gen_z] # per-GPU noise for each training phase
phase_real_tensor = paddle.split(phase_real_tensor, num_or_sections=num_gpus)
phase_parts_tensor = paddle.split(phase_parts_tensor, num_or_sections=num_gpus)
phase_pose_tensor = paddle.split(phase_pose_tensor, num_or_sections=num_gpus)
phase_retain_tensor = paddle.split(phase_retain_tensor, num_or_sections=num_gpus)
phase_denorm_upper_img_tensor = paddle.split(phase_denorm_upper_img_tensor, num_or_sections=num_gpus)
phase_denorm_lower_img_tensor = paddle.split(phase_denorm_lower_img_tensor, num_or_sections=num_gpus)
phase_gt_parsing_tensor = paddle.split(phase_gt_parsing_tensor, num_or_sections=num_gpus)
phase_denorm_upper_mask_tensor = paddle.split(phase_denorm_upper_mask_tensor, num_or_sections=num_gpus)
phase_denorm_lower_mask_tensor = paddle.split(phase_denorm_lower_mask_tensor, num_or_sections=num_gpus)
del phase_real # conserve memory
del phase_pose # conserve memory
del phase_head_mask # conserve memory
del phase_gt_parsing # conserve memory
# Execute training phases. Each image batch is run through all of the training phases in turn.
loss_numpys = []
loss_phase_name = []
for phase, phase_gen_z in | |
1028903, 1028939, 1028941, 1028953, 1028957,
1028969, 1028981, 1028999, 1029001, 1029013, 1029023, 1029037, 1029103,
1029109, 1029113, 1029139, 1029151, 1029157, 1029167, 1029179, 1029191,
1029199, 1029209, 1029247, 1029251, 1029263, 1029277, 1029289, 1029307,
1029323, 1029331, 1029337, 1029341, 1029349, 1029359, 1029361, 1029383,
1029403, 1029407, 1029409, 1029433, 1029467, 1029473, 1029481, 1029487,
1029499, 1029517, 1029521, 1029527, 1029533, 1029547, 1029563, 1029569,
1029577, 1029583, 1029593, 1029601, 1029617, 1029643, 1029647, 1029653,
1029689, 1029697, 1029731, 1029751, 1029757, 1029767, 1029803, 1029823,
1029827, 1029839, 1029841, 1029859, 1029881, 1029883, 1029907, 1029929,
1029937, 1029943, 1029953, 1029967, 1029983, 1029989, 1030019, 1030021,
1030027, 1030031, 1030033, 1030039, 1030049, 1030061, 1030067, 1030069,
1030091, 1030111, 1030121, 1030153, 1030157, 1030181, 1030201, 1030213,
1030219, 1030241, 1030247, 1030291, 1030297, 1030307, 1030349, 1030357,
1030361, 1030369, 1030411, 1030417, 1030429, 1030439, 1030441, 1030451,
1030493, 1030511, 1030529, 1030537, 1030543, 1030571, 1030583, 1030619,
1030637, 1030639, 1030643, 1030681, 1030703, 1030723, 1030739, 1030741,
1030751, 1030759, 1030763, 1030787, 1030793, 1030801, 1030811, 1030817,
1030823, 1030831, 1030847, 1030867, 1030873, 1030889, 1030919, 1030933,
1030949, 1030951, 1030957, 1030987, 1030993, 1031003, 1031047, 1031053,
1031057, 1031081, 1031117, 1031119, 1031137, 1031141, 1031161, 1031189,
1031231, 1031267, 1031279, 1031281, 1031291, 1031299, 1031309, 1031323,
1031347, 1031357, 1031399, 1031411, 1031413, 1031423, 1031431, 1031447,
1031461, 1031477, 1031479, 1031483, 1031489, 1031507, 1031521, 1031531,
1031533, 1031549, 1031561, 1031593, 1031609, 1031623, 1031629, 1031633,
1031669, 1031677, 1031707, 1031717, 1031729, 1031731, 1031741, 1031753,
1031759, 1031761, 1031809, 1031813, 1031831, 1031837, 1031869, 1031911,
1031923, 1031981, 1031999, 1032007, 1032047, 1032049, 1032067, 1032071,
1032107, 1032131, 1032151, 1032191, 1032193, 1032211, 1032221, 1032233,
1032259, 1032287, 1032299, 1032307, 1032319, 1032329, 1032341, 1032347,
1032349, 1032373, 1032377, 1032391, 1032397, 1032407, 1032419, 1032433,
1032457, 1032463, 1032467, 1032491, 1032497, 1032509, 1032511, 1032527,
1032541, 1032571, 1032583, 1032601, 1032607, 1032613, 1032617, 1032643,
1032649, 1032679, 1032683, 1032697, 1032701, 1032709, 1032721, 1032727,
1032739, 1032751, 1032763, 1032793, 1032799, 1032803, 1032833, 1032839,
1032841, 1032847, 1032851, 1032853, 1032881, 1032887, 1032901, 1032943,
1032949, 1032959, 1032961, 1033001, 1033007, 1033027, 1033033, 1033037,
1033057, 1033061, 1033063, 1033069, 1033079, 1033099, 1033127, 1033139,
1033171, 1033181, 1033189, 1033223, 1033271, 1033273, 1033289, 1033297,
1033303, 1033309, 1033313, 1033337, 1033339, 1033343, 1033349, 1033363,
1033369, 1033381, 1033387, 1033393, 1033421, 1033423, 1033427, 1033441,
1033451, 1033457, 1033463, 1033469, 1033489, 1033493, 1033499, 1033507,
1033517, 1033537, 1033541, 1033559, 1033567, 1033601, 1033603, 1033631,
1033661, 1033663, 1033667, 1033679, 1033687, 1033693, 1033741, 1033751,
1033759, 1033777, 1033783, 1033789, 1033793, 1033801, 1033807, 1033829,
1033841, 1033843, 1033867, 1033927, 1033951, 1033987, 1034003, 1034009,
1034027, 1034029, 1034069, 1034071, 1034101, 1034119, 1034123, 1034147,
1034167, 1034171, 1034177, 1034183, 1034197, 1034207, 1034219, 1034221,
1034233, 1034237, 1034239, 1034249, 1034251, 1034281, 1034309, 1034317,
1034323, 1034339, 1034353, 1034357, 1034359, 1034381, 1034387, 1034419,
1034443, 1034461, 1034477, 1034479, 1034489, 1034491, 1034503, 1034513,
1034549, 1034567, 1034581, 1034591, 1034597, 1034599, 1034617, 1034639,
1034651, 1034653, 1034659, 1034707, 1034729, 1034731, 1034767, 1034771,
1034783, 1034791, 1034809, 1034827, 1034833, 1034837, 1034849, 1034857,
1034861, 1034863, 1034867, 1034879, 1034903, 1034941, 1034951, 1034953,
1034959, 1034983, 1034989, 1034993, 1035007, 1035019, 1035043, 1035061,
1035077, 1035107, 1035131, 1035163, 1035187, 1035191, 1035197, 1035211,
1035241, 1035247, 1035257, 1035263, 1035277, 1035301, 1035313, 1035323,
1035341, 1035343, 1035361, 1035379, 1035383, 1035403, 1035409, 1035413,
1035427, 1035449, 1035451, 1035467, 1035469, 1035473, 1035479, 1035499,
1035527, 1035533, 1035547, 1035563, 1035571, 1035581, 1035599, 1035607,
1035613, 1035631, 1035637, 1035641, 1035649, 1035659, 1035707, 1035733,
1035743, 1035761, 1035763, 1035781, 1035791, 1035829, 1035869, 1035893,
1035917, 1035949, 1035953, 1035959, 1035973, 1035977, 1036001, 1036003,
1036027, 1036039, 1036067, 1036069, 1036073, 1036093, 1036109, 1036117,
1036121, 1036129, 1036153, 1036163, 1036183, 1036213, 1036223, 1036229,
1036247, 1036249, 1036253, 1036261, 1036267, 1036271, 1036291, 1036297,
1036307, 1036319, 1036327, 1036331, 1036339, 1036349, 1036351, 1036363,
1036367, 1036369, 1036391, 1036411, 1036459, 1036471, 1036493, 1036499,
1036513, 1036531, 1036537, 1036561, 1036579, 1036613, 1036619, 1036631,
1036649, 1036661, 1036667, 1036669, 1036681, 1036729, 1036747, 1036751,
1036757, 1036759, 1036769, 1036787, 1036793, 1036799, 1036829, 1036831,
1036853, 1036873, 1036877, 1036883, 1036913, 1036921, 1036943, 1036951,
1036957, 1036979, 1036991, 1036993, 1037041, 1037053, 1037059, 1037081,
1037087, 1037089, 1037123, 1037129, 1037137, 1037143, 1037213, 1037233,
1037249, 1037261, 1037273, 1037293, 1037297, 1037303, 1037317, 1037327,
1037329, 1037339, 1037347, 1037401, 1037411, 1037437, 1037441, 1037447,
1037471, 1037479, 1037489, 1037497, 1037503, 1037537, 1037557, 1037563,
1037567, 1037593, 1037611, 1037627, 1037653, 1037657, 1037677, 1037681,
1037683, 1037741, 1037747, 1037753, 1037759, 1037767, 1037791, 1037801,
1037819, 1037831, 1037857, 1037873, 1037879, 1037893, 1037903, 1037917,
1037929, 1037941, 1037957, 1037963, 1037983, 1038001, 1038017, 1038019,
1038029, 1038041, 1038043, 1038047, 1038073, 1038077, 1038119, 1038127,
1038143, 1038157, 1038187, 1038199, 1038203, 1038209, 1038211, 1038227,
1038251, 1038253, 1038259, 1038263, 1038269, 1038307, 1038311, 1038319,
1038329, 1038337, 1038383, 1038391, 1038409, 1038421, 1038449, 1038463,
1038487, 1038497, 1038503, 1038523, 1038529, 1038539, 1038563, 1038589,
1038599, 1038601, 1038617, 1038619, 1038623, 1038629, 1038637, 1038643,
1038671, 1038689, 1038691, 1038707, 1038721, 1038727, 1038731, 1038757,
1038797, 1038803, 1038811, 1038823, 1038827, 1038833, 1038881, 1038913,
1038937, 1038941, 1038953, 1039001, 1039007, 1039021, 1039033, 1039037,
1039039, 1039043, 1039067, 1039069, 1039081, 1039109, 1039111, 1039127,
1039139, 1039153, 1039169, 1039187, 1039229, 1039249, 1039279, 1039289,
1039307, 1039321, 1039327, 1039343, 1039349, 1039351, 1039387, 1039421,
1039427, 1039429, 1039463, 1039469, 1039477, 1039481, 1039513, 1039517,
1039537, 1039543, 1039553, 1039603, 1039607, 1039631, 1039651, 1039657,
1039667, 1039681, 1039733, 1039763, 1039769, 1039789, 1039799, 1039817,
1039823, 1039837, 1039891, 1039897, 1039901, 1039921, 1039931, 1039943,
1039949, 1039979, 1039999, 1040021, 1040029, 1040051, 1040057, 1040059,
1040069, 1040071, 1040089, 1040093, 1040101, 1040113, 1040119, 1040141,
1040153, 1040159, 1040161, 1040167, 1040183, 1040189, 1040191, 1040203,
1040219, 1040227, 1040311, 1040327, 1040339, 1040353, 1040371, 1040381,
1040387, 1040407, 1040411, 1040419, 1040447, 1040449, 1040483, 1040489,
1040503, 1040521, 1040531, 1040563, 1040579, 1040581, 1040597, 1040629,
1040651, 1040657, 1040659, 1040671, 1040717, 1040731, 1040747, 1040749,
1040771, 1040777, 1040779, 1040783, 1040797, 1040803, 1040807, 1040813,
1040821, 1040827, 1040833, 1040857, 1040861, 1040873, 1040881, 1040891,
1040899, 1040929, 1040939, 1040947, 1040951, 1040959, 1040981, 1040989,
1041041, 1041077, 1041083, 1041091, 1041109, 1041119, 1041121, 1041127,
1041137, 1041149, 1041151, 1041163, 1041167, 1041169, 1041203, 1041221,
1041223, 1041239, 1041241, 1041253, 1041269, 1041281, 1041283, 1041289,
1041307, 1041311, 1041317, 1041329, 1041343, 1041349, 1041373, 1041421,
1041427, 1041449, 1041451, 1041461, 1041497, 1041511, 1041517, 1041529,
1041553, 1041559, 1041563, 1041571, 1041577, 1041583, 1041617, 1041619,
1041643, 1041653, 1041671, 1041673, 1041701, 1041731, 1041737, 1041757,
1041779, 1041787, 1041793, 1041823, 1041829, 1041841, 1041853, 1041857,
1041863, 1041869, 1041889, 1041893, 1041907, 1041919, 1041949, 1041961,
1041983, 1041991, 1042001, 1042021, 1042039, 1042043, 1042081, 1042087,
1042091, 1042099, 1042103, 1042109, 1042121, 1042123, 1042133, 1042141,
1042183, 1042187, 1042193, 1042211, 1042241, 1042243, 1042259, 1042267,
1042271, 1042273, 1042309, 1042331, 1042333, 1042357, 1042369, 1042373,
1042381, 1042399, 1042427, 1042439, 1042451, 1042469, 1042487, 1042519,
1042523, 1042529, 1042571, 1042577, 1042583, 1042597, 1042607, 1042609,
1042619, 1042631, 1042633, 1042681, 1042687, 1042693, 1042703, 1042709,
1042733, 1042759, 1042781, 1042799, 1042819, 1042829, 1042837, 1042849,
1042861, 1042897, 1042901, 1042903, 1042931, 1042949, 1042961, 1042997,
1043011, 1043023, 1043047, 1043083, 1043089, 1043111, 1043113, 1043117,
1043131, 1043167, 1043173, 1043177, 1043183, 1043191, 1043201, 1043209,
1043213, 1043221, 1043279, 1043291, 1043293, 1043299, 1043311, 1043323,
1043351, 1043369, 1043377, 1043401, 1043453, 1043467, 1043479, 1043489,
1043501, 1043513, 1043521, 1043531, 1043543, 1043557, 1043587, 1043591,
1043593, 1043597, 1043599, 1043617, 1043639, 1043657, 1043663, 1043683,
1043701, 1043723, 1043743, 1043747, 1043753, 1043759, 1043761, 1043767,
1043773, 1043831, 1043837, 1043839, 1043843, 1043849, 1043857, 1043869,
1043873, 1043897, 1043899, 1043921, 1043923, 1043929, 1043951, 1043969,
1043981, 1044019, 1044023, 1044041, 1044053, 1044079, 1044091, 1044097,
1044133, 1044139, 1044149, 1044161, 1044167, 1044179, 1044181, 1044187,
1044193, 1044209, 1044217, 1044227, 1044247, 1044257, 1044271, 1044283,
1044287, 1044289, 1044299, 1044343, 1044347, 1044353, 1044367, 1044371,
1044383, 1044391, 1044397, 1044409, 1044437, 1044443, 1044451, 1044457,
1044479, 1044509, 1044517, 1044529, 1044559, 1044569, 1044583, 1044587,
1044613, 1044619, 1044629, 1044653, 1044689, 1044697, 1044727, 1044733,
1044737, 1044739, 1044749, 1044751, 1044761, 1044767, 1044779, 1044781,
1044809, 1044811, 1044833, 1044839, 1044847, 1044851, 1044859, 1044877,
1044889, 1044893, 1044931, 1044941, 1044971, 1044997, 1045003, 1045013,
1045021, 1045027, 1045043, 1045061, 1045063, 1045081, 1045111, 1045117,
1045123, 1045129, 1045151, 1045153, 1045157, 1045183, 1045193, 1045199,
1045223, 1045229, 1045237, 1045241, 1045273, 1045277, 1045307, 1045309,
1045321, 1045349, 1045367, 1045391, 1045393, 1045397, 1045409, 1045411,
1045423, 1045427, 1045469, 1045487, 1045493, 1045507, 1045523, 1045529,
1045543, 1045547, 1045549, 1045559, 1045571, 1045573, 1045607, 1045621,
1045633, 1045643, 1045651, 1045663, 1045679, 1045691, 1045727, 1045729,
1045739, 1045763, 1045799, 1045801, 1045819, 1045829, 1045841, 1045859,
1045903, 1045907, 1045963, 1045981, 1045987, 1045997, 1046029, 1046047,
1046051, 1046053, 1046069, 1046077, 1046081, 1046113, 1046119, 1046179,
1046183, 1046189, 1046191, 1046203, 1046207, 1046237, 1046239, 1046257,
1046263, 1046329, 1046347, 1046351, 1046369, 1046371, 1046389, 1046393,
1046399, 1046413, 1046447, 1046449, 1046459, 1046497, 1046519, 1046527,
1046557, 1046579, 1046587, 1046597, 1046599, 1046627, 1046641, 1046657,
1046659, 1046677, 1046681, 1046687, 1046701, 1046711, 1046779, 1046791,
1046797, 1046807, 1046827,