Dataset columns: "Unnamed: 0" (int64 row index, 0–389k), "code" (string, 26–79.6k chars), "docstring" (string, 1–46.9k chars).
376,000
def heat_wave_frequency(tasmin, tasmax, thresh_tasmin='22.0 degC',
                        thresh_tasmax='30 degC', window=3, freq='YS'):
    # NOTE: the default threshold/freq strings and dim='time' are
    # reconstructions; the original literals were stripped in extraction.
    thresh_tasmax = utils.convert_units_to(thresh_tasmax, tasmax)
    thresh_tasmin = utils.convert_units_to(thresh_tasmin, tasmin)
    cond = (tasmin > thresh_tasmin) & (tasmax > thresh_tasmax)
    group = cond.resample(time=freq)
    return group.apply(rl.windowed_run_events, window=window, dim='time')
r"""Heat wave frequency Number of heat waves over a given period. A heat wave is defined as an event where the minimum and maximum daily temperature both exceeds specific thresholds over a minimum number of days. Parameters ---------- tasmin : xarrray.DataArray Minimum daily temperature [℃] or [K] tasmax : xarrray.DataArray Maximum daily temperature [℃] or [K] thresh_tasmin : str The minimum temperature threshold needed to trigger a heatwave event [℃] or [K]. Default : '22 degC' thresh_tasmax : str The maximum temperature threshold needed to trigger a heatwave event [℃] or [K]. Default : '30 degC' window : int Minimum number of days with temperatures above thresholds to qualify as a heatwave. freq : str, optional Resampling frequency Returns ------- xarray.DataArray Number of heatwave at the wanted frequency Notes ----- The thresholds of 22° and 25°C for night temperatures and 30° and 35°C for day temperatures were selected by Health Canada professionals, following a temperature–mortality analysis. These absolute temperature thresholds characterize the occurrence of hot weather events that can result in adverse health outcomes for Canadian communities (Casati et al., 2013). In Robinson (2001), the parameters would be `thresh_tasmin=27.22, thresh_tasmax=39.44, window=2` (81F, 103F). References ---------- Casati, B., A. Yagouti, and D. Chaumont, 2013: Regional Climate Projections of Extreme Heat Events in Nine Pilot Canadian Communities for Public Health Planning. J. Appl. Meteor. Climatol., 52, 2669–2698, https://doi.org/10.1175/JAMC-D-12-0341.1 Robinson, P.J., 2001: On the Definition of a Heat Wave. J. Appl. Meteor., 40, 762–775, https://doi.org/10.1175/1520-0450(2001)040<0762:OTDOAH>2.0.CO;2
376,001
def _adjust_penalty(self, observ, old_policy_params, length):
    # NOTE: the name-scope and tf.Print message strings were stripped in
    # extraction; the values used below are assumptions.
    old_policy = self._policy_type(**old_policy_params)
    with tf.name_scope('adjust_penalty'):
        network = self._network(observ, length)
        print_penalty = tf.Print(0, [self._penalty], 'current penalty: ')
        with tf.control_dependencies([print_penalty]):
            kl_change = tf.reduce_mean(self._mask(
                tf.contrib.distributions.kl_divergence(old_policy,
                                                       network.policy),
                length))
            kl_change = tf.Print(kl_change, [kl_change], 'kl change: ')
            maybe_increase = tf.cond(
                kl_change > 1.3 * self._config.kl_target,
                lambda: tf.Print(self._penalty.assign(self._penalty * 1.5),
                                 [0], 'increase penalty '),
                float)
            maybe_decrease = tf.cond(
                kl_change < 0.7 * self._config.kl_target,
                lambda: tf.Print(self._penalty.assign(self._penalty / 1.5),
                                 [0], 'decrease penalty '),
                float)
        with tf.control_dependencies([maybe_increase, maybe_decrease]):
            return tf.summary.merge([
                tf.summary.scalar('kl_change', kl_change),
                tf.summary.scalar('penalty', self._penalty)])
Adjust the KL penalty between the behavioral and current policy.

Compute how much the policy actually changed during the multiple update steps.
Adjust the penalty strength for the next training phase if we overshot or
undershot the target divergence too much.

Args:
  observ: Sequences of observations.
  old_policy_params: Parameters of the behavioral policy.
  length: Batch of sequence lengths.

Returns:
  Summary tensor.
376,002
def post(self, url, data, headers={}):
    # 'POST' reconstructed; the HTTP-method literal was stripped in extraction.
    response = self._run_method('POST', url, data=data, headers=headers)
    return self._handle_response(url, response)
POST request for creating new objects. data should be a dictionary.
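A hedged usage sketch (the client object and endpoint here are illustrative, not from the source):

>>> api = Connection(...)          # hypothetical client exposing .post()
>>> api.post('products', {'name': 'Widget', 'price': 9.99})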
376,003
def confusion_matrix(model, X, y, ax=None, classes=None, sample_weight=None,
                     percent=False, label_encoder=None, cmap='YlOrRd',
                     fontsize=None, random_state=None, **kwargs):
    # cmap default reconstructed from the docstring; the literal was stripped.
    visualizer = ConfusionMatrix(
        model, ax, classes, sample_weight, percent,
        label_encoder, cmap, fontsize, **kwargs
    )
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=random_state
    )
    visualizer.fit(X_train, y_train, **kwargs)
    visualizer.score(X_test, y_test)
    return visualizer.ax
Quick method: Creates a heatmap visualization of the
sklearn.metrics.confusion_matrix(). A confusion matrix shows each combination
of the true and predicted classes for a test data set.

The default color map uses a yellow/orange/red color scale. The user can
choose between displaying values as the percent of true (cell value divided by
sum of row) or as direct counts. If percent of true mode is selected, 100%
accurate predictions are highlighted in green.

Requires a classification model.

Parameters
----------
model : estimator
    Must be a classifier, otherwise raises YellowbrickTypeError

X : ndarray or DataFrame of shape n x m
    A matrix of n instances with m features.

y : ndarray or Series of length n
    An array or series of target or class values.

ax : matplotlib Axes, default: None
    The axes to plot the figure on. If None is passed in the current axes
    will be used (or generated if required).

sample_weight : array-like of shape = [n_samples], optional
    Passed to ``confusion_matrix`` to weight the samples.

percent : bool, default: False
    Determines whether or not the confusion_matrix is displayed as counts or
    as a percent of true predictions. Note, if specifying a subset of
    classes, percent should be set to False or inaccurate figures will be
    displayed.

classes : list, default: None
    A list of class names to use in the confusion_matrix. This is passed to
    the ``labels`` parameter of ``sklearn.metrics.confusion_matrix()``, and
    follows the behaviour indicated by that function. It may be used to
    reorder or select a subset of labels. If None, classes that appear at
    least once in ``y_true`` or ``y_pred`` are used in sorted order.

label_encoder : dict or LabelEncoder, default: None
    When specifying the ``classes`` argument, the input to ``fit()`` and
    ``score()`` must match the expected labels. If the ``X`` and ``y``
    datasets have been encoded prior to training and the labels must be
    preserved for the visualization, use this argument to provide a mapping
    from the encoded class to the correct label. Because typically a
    Scikit-Learn ``LabelEncoder`` is used to perform this operation, you may
    provide it directly to the class to utilize its fitted encoding.

cmap : string, default: ``'YlOrRd'``
    Specify a colormap to define the heatmap of the predicted class against
    the actual class in the confusion matrix.

fontsize : int, default: None
    Specify the fontsize of the text in the grid and labels to make the
    matrix a bit easier to read. Uses rcParams font size by default.

random_state : int, RandomState instance or None, optional (default=None)
    Passes a random state parameter to the train_test_split function.

Returns
-------
ax : matplotlib axes
    Returns the axes that the classification report was drawn on.
376,004
def closed(self, code, reason=None):
    if code != 1000:
        self._error = errors.SignalFlowException(code, reason)
    # Log message text assumed; the original literal was stripped.
    _logger.info('%s: WebSocket closed (%s: %s)', self, code, reason)
    for c in self._channels.values():
        c.offer(WebSocketComputationChannel.END_SENTINEL)
    self._channels.clear()
    with self._connection_cv:
        self._connected = False
        self._connection_cv.notify()
Handler called when the WebSocket is closed. Status code 1000 denotes a normal close; all others are errors.
376,005
def getecho(self):
    attr = termios.tcgetattr(self.child_fd)
    if attr[3] & termios.ECHO:
        return True
    return False
This returns the terminal echo mode. This returns True if echo is on or False if echo is off. Child applications that are expecting you to enter a password often set ECHO False. See waitnoecho().
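For example (a sketch assuming pexpect's spawn API; the command is illustrative):

>>> import pexpect
>>> child = pexpect.spawn('ssh user@host')   # hypothetical command
>>> child.getecho()
True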
376,006
def is_dataset(ds):
    import tensorflow as tf
    from tensorflow_datasets.core.utils import py_utils
    dataset_types = [tf.data.Dataset]
    v1_ds = py_utils.rgetattr(tf, "compat.v1.data.Dataset", None)
    v2_ds = py_utils.rgetattr(tf, "compat.v2.data.Dataset", None)
    if v1_ds is not None:
        dataset_types.append(v1_ds)
    if v2_ds is not None:
        dataset_types.append(v2_ds)
    return isinstance(ds, tuple(dataset_types))
Whether ds is a Dataset. Compatible across TF versions.
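A quick check, for instance:

>>> import tensorflow as tf
>>> is_dataset(tf.data.Dataset.range(3))
True
>>> is_dataset([1, 2, 3])
False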
376,007
def append_row(table, label, data):
    count = table.rowCount()
    table.insertRow(table.rowCount())
    items = QTableWidgetItem(label)
    variant = (data,)
    items.setData(Qt.UserRole, variant)
    table.setItem(count, 0, items)
    # The subscript on `data` was lost in extraction; since the docstring
    # documents `data` as a str, passing it directly is an assumption.
    table.setItem(count, 1, QTableWidgetItem(data))
Append new row to table widget.

:param table: The table that shall have the row added to it.
:type table: QTableWidget

:param label: Label for the row.
:type label: str

:param data: Custom data associated with the label value.
:type data: str
376,008
def get_object(self, pid, type=None):
    if type is None:
        type = self.__class__
    return type(self.api, pid)
Initialize and return a new :class:`~eulfedora.models.DigitalObject` instance
from the same repository, passing along the connection credentials in use by
the current object. If type is not specified, the current DigitalObject class
will be used.

:param pid: pid of the object to return
:param type: (optional) :class:`~eulfedora.models.DigitalObject` type to
    initialize and return
376,009
def lande_g_factors(element, isotope, L=None, J=None, F=None):
    atom = Atom(element, isotope)
    gL = atom.gL
    gS = atom.gS
    gI = atom.gI
    res = [gL, gS, gI]
    if J is not None:
        if L is None:
            raise ValueError("A value of L must be specified.")
        S = 1/Integer(2)
        gJ = gL*(J*(J+1)-S*(S+1)+L*(L+1))/(2*J*(J+1))
        gJ += gS*(J*(J+1)+S*(S+1)-L*(L+1))/(2*J*(J+1))
        res += [gJ]
    if F is not None:
        II = atom.nuclear_spin
        if F == 0:
            gF = gJ
        else:
            gF = gJ*(F*(F+1)-II*(II+1)+J*(J+1))/(2*F*(F+1))
            gF += gI*(F*(F+1)+II*(II+1)-J*(J+1))/(2*F*(F+1))
        res += [gF]
    return array(res)
r"""Return the Lande g-factors for a given atom or level. >>> element = "Rb" >>> isotope = 87 >>> print(lande_g_factors(element, isotope)) [ 9.9999e-01 2.0023e+00 -9.9514e-04] The spin-orbit g-factor for a certain J >>> print(lande_g_factors(element, isotope, L=0, J=1/Integer(2))) [0.9999936864200584 2.0023193043622 -0.0009951414 2.00231930436220] The nuclear-coupled g-factor for a certain F >>> print(lande_g_factors(element, isotope, L=0, J=1/Integer(2), F=1)) [0.9999936864200584 2.0023193043622 -0.0009951414 2.00231930436220 -0.501823752840550]
376,010
def rpc_request(method_name: str, *args, **kwargs) -> rpcq.messages.RPCRequest:
    # The '*args' key and the '2.0' version literal are reconstructions;
    # the originals were stripped in extraction.
    if args:
        kwargs['*args'] = args
    return rpcq.messages.RPCRequest(
        jsonrpc='2.0',
        id=str(uuid.uuid4()),
        method=method_name,
        params=kwargs
    )
Create RPC request

:param method_name: Method name
:param args: Positional arguments
:param kwargs: Keyword arguments
:return: JSON RPC formatted dict
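For instance (the method name and argument are illustrative):

>>> req = rpc_request('get-version-info', platform='linux')
>>> req.method
'get-version-info'
>>> req.params
{'platform': 'linux'}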
376,011
def get_locs(self, seq):
    from .numeric import Int64Index

    # must be lexsorted to at least as many levels
    true_slices = [i for (i, s) in enumerate(com.is_true_slices(seq)) if s]
    if true_slices and true_slices[-1] >= self.lexsort_depth:
        # Error text reconstructed (the original literal was stripped).
        raise UnsortedIndexError('MultiIndex slicing requires the index to be '
                                 'lexsorted: slicing on levels {0}, lexsort '
                                 'depth {1}'.format(true_slices,
                                                    self.lexsort_depth))
    n = len(self)
    indexer = None

    def _convert_to_indexer(r):
        if isinstance(r, slice):
            m = np.zeros(n, dtype=bool)
            m[r] = True
            r = m.nonzero()[0]
        elif com.is_bool_indexer(r):
            if len(r) != n:
                raise ValueError("cannot index with a boolean indexer "
                                 "that is not the same length as the "
                                 "index")
            r = r.nonzero()[0]
        return Int64Index(r)

    def _update_indexer(idxr, indexer=indexer):
        if indexer is None:
            indexer = Index(np.arange(n))
        if idxr is None:
            return indexer
        return indexer & idxr

    for i, k in enumerate(seq):
        if com.is_bool_indexer(k):
            k = np.asarray(k)
            indexer = _update_indexer(_convert_to_indexer(k),
                                      indexer=indexer)
        elif is_list_like(k):
            # A collection of labels to include from this level (these are
            # or'd together).
            indexers = None
            for x in k:
                try:
                    idxrs = _convert_to_indexer(
                        self._get_level_indexer(x, level=i, indexer=indexer))
                    indexers = (idxrs if indexers is None
                                else indexers | idxrs)
                except KeyError:
                    continue
            if indexers is not None:
                indexer = _update_indexer(indexers, indexer=indexer)
            else:
                # no matches we are done
                return Int64Index([])._ndarray_values
        elif com.is_null_slice(k):
            # empty slice
            indexer = _update_indexer(None, indexer=indexer)
        elif isinstance(k, slice):
            # a slice, include BOTH of the labels
            indexer = _update_indexer(_convert_to_indexer(
                self._get_level_indexer(k, level=i, indexer=indexer)),
                indexer=indexer)
        else:
            # a single label
            indexer = _update_indexer(_convert_to_indexer(
                self.get_loc_level(k, level=i, drop_level=False)[0]),
                indexer=indexer)

    # empty indexer
    if indexer is None:
        return Int64Index([])._ndarray_values
    return indexer._ndarray_values
Get location for a given label/slice/list/mask or a sequence of such as an
array of integers.

Parameters
----------
seq : label/slice/list/mask or a sequence of such
    You should use one of the above for each level.
    If a level should not be used, set it to ``slice(None)``.

Returns
-------
locs : array of integers suitable for passing to iloc

Examples
--------
>>> mi = pd.MultiIndex.from_arrays([list('abb'), list('def')])
>>> mi.get_locs('b')
array([1, 2], dtype=int64)
>>> mi.get_locs([slice(None), ['e', 'f']])
array([1, 2], dtype=int64)
>>> mi.get_locs([[True, False, True], slice('e', 'f')])
array([2], dtype=int64)

See Also
--------
MultiIndex.get_loc : Get location for a label or a tuple of labels.
MultiIndex.slice_locs : Get slice location given start label(s) and end
    label(s).
376,012
def _setup_notification_listener(self, topic_name, url):
    # (The DfaNotifcationListener spelling follows the upstream API name.)
    self.notify_listener = rpc.DfaNotifcationListener(
        topic_name, url, rpc.DfaNotificationEndpoints(self))
Set up a notification listener for a service.
376,013
def step(self, vector_action=None, memory=None, text_action=None, value=None,
         custom_action=None) -> AllBrainInfo:
    # NOTE: the 'reset()' hints in the two final error messages were stripped
    # in extraction and are reconstructed here.
    vector_action = {} if vector_action is None else vector_action
    memory = {} if memory is None else memory
    text_action = {} if text_action is None else text_action
    value = {} if value is None else value
    custom_action = {} if custom_action is None else custom_action

    if self._loaded and not self._global_done and self._global_done is not None:
        if isinstance(vector_action, self.SINGLE_BRAIN_ACTION_TYPES):
            if self._num_external_brains == 1:
                vector_action = {self._external_brain_names[0]: vector_action}
            elif self._num_external_brains > 1:
                raise UnityActionException(
                    "You have {0} brains, you need to feed a dictionary of brain names as keys, "
                    "and vector_actions as values".format(self._num_brains))
            else:
                raise UnityActionException(
                    "There are no external brains in the environment, "
                    "step cannot take a vector_action input")

        if isinstance(memory, self.SINGLE_BRAIN_ACTION_TYPES):
            if self._num_external_brains == 1:
                memory = {self._external_brain_names[0]: memory}
            elif self._num_external_brains > 1:
                raise UnityActionException(
                    "You have {0} brains, you need to feed a dictionary of brain names as keys "
                    "and memories as values".format(self._num_brains))
            else:
                raise UnityActionException(
                    "There are no external brains in the environment, "
                    "step cannot take a memory input")

        if isinstance(text_action, self.SINGLE_BRAIN_TEXT_TYPES):
            if self._num_external_brains == 1:
                text_action = {self._external_brain_names[0]: text_action}
            elif self._num_external_brains > 1:
                raise UnityActionException(
                    "You have {0} brains, you need to feed a dictionary of brain names as keys "
                    "and text_actions as values".format(self._num_brains))
            else:
                raise UnityActionException(
                    "There are no external brains in the environment, "
                    "step cannot take a text_action input")

        if isinstance(value, self.SINGLE_BRAIN_ACTION_TYPES):
            if self._num_external_brains == 1:
                value = {self._external_brain_names[0]: value}
            elif self._num_external_brains > 1:
                raise UnityActionException(
                    "You have {0} brains, you need to feed a dictionary of brain names as keys "
                    "and state/action value estimates as values".format(self._num_brains))
            else:
                raise UnityActionException(
                    "There are no external brains in the environment, "
                    "step cannot take a value input")

        if isinstance(custom_action, CustomAction):
            if self._num_external_brains == 1:
                custom_action = {self._external_brain_names[0]: custom_action}
            elif self._num_external_brains > 1:
                raise UnityActionException(
                    "You have {0} brains, you need to feed a dictionary of brain names as keys "
                    "and CustomAction instances as values".format(self._num_brains))
            else:
                raise UnityActionException(
                    "There are no external brains in the environment, "
                    "step cannot take a custom_action input")

        for brain_name in (list(vector_action.keys()) + list(memory.keys())
                           + list(text_action.keys())):
            if brain_name not in self._external_brain_names:
                raise UnityActionException(
                    "The name {0} does not correspond to an external brain "
                    "in the environment".format(brain_name))

        for brain_name in self._external_brain_names:
            n_agent = self._n_agents[brain_name]
            if brain_name not in vector_action:
                if self._brains[brain_name].vector_action_space_type == "discrete":
                    vector_action[brain_name] = [0.0] * n_agent * len(
                        self._brains[brain_name].vector_action_space_size)
                else:
                    vector_action[brain_name] = [0.0] * n_agent * \
                        self._brains[brain_name].vector_action_space_size[0]
            else:
                vector_action[brain_name] = self._flatten(vector_action[brain_name])
            if brain_name not in memory:
                memory[brain_name] = []
            else:
                if memory[brain_name] is None:
                    memory[brain_name] = []
                else:
                    memory[brain_name] = self._flatten(memory[brain_name])
            if brain_name not in text_action:
                text_action[brain_name] = [""] * n_agent
            else:
                if text_action[brain_name] is None:
                    text_action[brain_name] = [""] * n_agent
                if isinstance(text_action[brain_name], str):
                    text_action[brain_name] = [text_action[brain_name]] * n_agent
            if brain_name not in custom_action:
                custom_action[brain_name] = [None] * n_agent
            else:
                if custom_action[brain_name] is None:
                    custom_action[brain_name] = [None] * n_agent
                if isinstance(custom_action[brain_name], CustomAction):
                    custom_action[brain_name] = [custom_action[brain_name]] * n_agent

            number_text_actions = len(text_action[brain_name])
            if not ((number_text_actions == n_agent) or number_text_actions == 0):
                # NOTE: expected_discrete_size / discrete_check /
                # expected_continuous_size are undefined here in the source;
                # kept as-is.
                raise UnityActionException(
                    "There was a mismatch between the provided text_action and "
                    "the environment's expectation: "
                    "The brain {0} expected {1} {2} action(s), but was provided: {3}"
                    .format(brain_name,
                            str(expected_discrete_size) if discrete_check
                            else str(expected_continuous_size),
                            self._brains[brain_name].vector_action_space_type,
                            str(vector_action[brain_name])))

        outputs = self.communicator.exchange(
            self._generate_step_input(vector_action, memory, text_action,
                                      value, custom_action))
        if outputs is None:
            raise KeyboardInterrupt
        rl_output = outputs.rl_output
        state = self._get_state(rl_output)
        self._global_done = state[1]
        for _b in self._external_brain_names:
            self._n_agents[_b] = len(state[0][_b].agents)
        return state[0]
    elif not self._loaded:
        raise UnityEnvironmentException("No Unity environment is loaded.")
    elif self._global_done:
        raise UnityActionException(
            "The episode is completed. Reset the environment with 'reset()'")
    elif self.global_done is None:
        raise UnityActionException(
            "You cannot conduct step without first calling reset. "
            "Reset the environment with 'reset()'")
Provides the environment with an action, moves the environment dynamics
forward accordingly, and returns observation, state, and reward information
to the agent.

:param value: Value estimates provided by agents.
:param vector_action: Agent's vector action. Can be a scalar or vector of
    ints/floats.
:param memory: Vector corresponding to memory used for recurrent policies.
:param text_action: Text action to send to the environment.
:param custom_action: Optional instance of a CustomAction protobuf message.
:return: AllBrainInfo: A data structure corresponding to the new state of
    the environment.
376,014
def plot2d(self, c_poly='default', alpha=1, cmap='default', ret=False,
           title='Plot 2D', colorbar=False, cbar_label=''):
    # NOTE: the default literals, fallback color 'b' and edgecolor 'k' are
    # assumptions; the original strings were stripped in extraction.
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches
    import matplotlib.cm as cm

    paths = [polygon.get_path() for polygon in self]
    domain = self.get_domain()[:, :2]
    if type(c_poly) == str:
        if c_poly == 'default':
            c_poly = 'b'
        color_vector = c_poly * len(paths)
        colorbar = False
    else:
        if cmap == 'default':
            cmap = cm.YlOrRd
        import matplotlib.colors as mcolors
        normalize = mcolors.Normalize(vmin=c_poly.min(), vmax=c_poly.max())
        color_vector = cmap(normalize(c_poly))
    fig = plt.figure(title)
    ax = fig.add_subplot(111)
    for p, c in zip(paths, color_vector):
        ax.add_patch(patches.PathPatch(p, facecolor=c, lw=1,
                                       edgecolor='k', alpha=alpha))
    ax.set_xlim(domain[0, 0], domain[1, 0])
    ax.set_ylim(domain[0, 1], domain[1, 1])
    if colorbar:
        scalarmappaple = cm.ScalarMappable(norm=normalize, cmap=cmap)
        scalarmappaple.set_array(c_poly)
        cbar = plt.colorbar(scalarmappaple, shrink=0.8, aspect=10)
        cbar.ax.set_ylabel(cbar_label, rotation=0)
    if ret:
        return ax
Generates a 2D plot for the z=0 Surface projection.

:param c_poly: Polygons color.
:type c_poly: matplotlib color
:param alpha: Opacity.
:type alpha: float
:param cmap: colormap
:type cmap: matplotlib.cm
:param ret: If True, returns the figure. It can be used to add more elements
    to the plot or to modify it.
:type ret: bool
:param title: Figure title.
:type title: str
:param colorbar: If True, inserts a colorbar in the figure.
:type colorbar: bool
:param cbar_label: Colorbar right label.
:type cbar_label: str
:returns: None, axes
:rtype: None, matplotlib axes
376,015
def yesterday(symbol, token='', version=''):
    _raiseIfNotStr(symbol)
    # URL fragments reconstructed from the linked API documentation.
    return _getJson('stock/' + symbol + '/previous', token, version)
This returns previous day adjusted price data for one or more stocks

https://iexcloud.io/docs/api/#previous-day-prices

Available after 4am ET Tue-Sat

Args:
    symbol (string): Ticker to request
    token (string): Access token
    version (string): API version

Returns:
    dict: result
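For example (the symbol is illustrative; a valid IEX Cloud token is required):

>>> prev = yesterday('AAPL', token='YOUR_TOKEN')
>>> prev['close']   # previous-day adjusted close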
376,016
def plot(self, figsize="GROW", parameters=None, chains=None, extents=None,
         filename=None, display=False, truth=None, legend=None, blind=None,
         watermark=None):
    chains, parameters, truth, extents, blind = self._sanitise(
        chains, parameters, truth, extents, color_p=True, blind=blind)
    names = [chain.name for chain in chains]
    if legend is None:
        legend = len(chains) > 1
    # The figure/axes construction that defines `fig`, `axes`, `flip` and
    # `offset` was lost in extraction; the surviving tail follows.
    offset.set_visible(False)
    dpi = 300
    if watermark:
        if flip and len(parameters) == 2:
            ax = axes[-1, 0]
        else:
            ax = None
        self._add_watermark(fig, ax, figsize, watermark, dpi=dpi)
    if filename is not None:
        if isinstance(filename, str):
            filename = [filename]
        for f in filename:
            self._save_fig(fig, f, dpi)
    if display:
        plt.show()
    return fig
Plot the chain!

Parameters
----------
figsize : str|tuple(float)|float, optional
    The figure size to generate. Accepts a regular two tuple of size in
    inches, or one of several key words. The default value of ``COLUMN``
    creates a figure of appropriate size of insertion into an A4 LaTeX
    document in two-column mode. ``PAGE`` creates a full page width figure.
    ``GROW`` creates an image that scales with parameters (1.5 inches per
    parameter). String arguments are not case sensitive. If you pass a
    float, it will scale the default ``GROW`` by that amount, so ``2.0``
    would result in a plot 3 inches per parameter.
parameters : list[str]|int, optional
    If set, only creates a plot for those specific parameters (if list). If
    an integer is given, only plots the first so many parameters.
chains : int|str, list[str|int], optional
    Used to specify which chain to show if more than one chain is loaded in.
    Can be an integer, specifying the chain index, or a str, specifying the
    chain name.
extents : list[tuple[float]] or dict[str], optional
    Extents are given as two-tuples. You can pass in a list the same size as
    parameters (or default parameters if you don't specify parameters), or
    as a dictionary.
filename : str, optional
    If set, saves the figure to this location
display : bool, optional
    If True, shows the figure using ``plt.show()``.
truth : list[float] or dict[str], optional
    A list of truth values corresponding to parameters, or a dictionary of
    truth values indexed by key
legend : bool, optional
    If true, creates a legend in your plot using the chain names.
blind : bool|string|list[string], optional
    Whether to blind axes values. Can be set to `True` to blind all
    parameters, or can pass in a string (or list of strings) which specify
    the parameters to blind.
watermark : str, optional
    A watermark to add to the figure

Returns
-------
figure
    the matplotlib figure
376,017
def append(self, species, coords, coords_are_cartesian=False,
           validate_proximity=False, properties=None):
    return self.insert(len(self), species, coords,
                       coords_are_cartesian=coords_are_cartesian,
                       validate_proximity=validate_proximity,
                       properties=properties)
Append a site to the structure.

Args:
    species: Species of inserted site
    coords (3x1 array): Coordinates of inserted site
    coords_are_cartesian (bool): Whether coordinates are cartesian.
        Defaults to False.
    validate_proximity (bool): Whether to check if inserted site is
        too close to an existing site. Defaults to False.
    properties (dict): Properties of the site.

Returns:
    New structure with inserted site.
376,018
def crescent_data(num_data=200, seed=default_seed):
    np.random.seed(seed=seed)
    sqrt2 = np.sqrt(2)
    # Rotation matrix
    R = np.array([[sqrt2 / 2, -sqrt2 / 2], [sqrt2 / 2, sqrt2 / 2]])
    # Scaling matrices
    scales = []
    scales.append(np.array([[3, 0], [0, 1]]))
    scales.append(np.array([[3, 0], [0, 1]]))
    scales.append([[1, 0], [0, 3]])
    scales.append([[1, 0], [0, 3]])
    means = []
    means.append(np.array([4, 4]))
    means.append(np.array([0, 4]))
    means.append(np.array([-4, -4]))
    means.append(np.array([0, -4]))

    Xparts = []
    num_data_part = []
    num_data_total = 0
    for i in range(0, 4):
        num_data_part.append(round(((i + 1) * num_data) / 4.))
        num_data_part[i] -= num_data_total
        part = np.random.normal(size=(num_data_part[i], 2))
        part = np.dot(np.dot(part, scales[i]), R) + means[i]
        Xparts.append(part)
        num_data_total += num_data_part[i]
    X = np.vstack((Xparts[0], Xparts[1], Xparts[2], Xparts[3]))
    Y = np.vstack((np.ones((num_data_part[0] + num_data_part[1], 1)),
                   -np.ones((num_data_part[2] + num_data_part[3], 1))))
    # Dict keys reconstructed (original literals stripped).
    return {'X': X, 'Y': Y, 'info': "Two separate classes of data formed "
            "approximately in the shape of two crescents."}
Data set formed from a mixture of four Gaussians. In each class two of the
Gaussians are elongated at right angles to each other and offset to form an
approximation to the crescent data that is popular in semi-supervised learning
as a toy problem.

:param num_data: number of data points to be sampled (default is 200).
:type num_data: int
:param seed: random seed to be used for data generation.
:type seed: int
376,019
def _color_name_to_rgb(self, color):
    try:
        rgb = int(color, 16)
    except ValueError:
        raise
    else:
        r = (rgb >> 16) & 0xff
        g = (rgb >> 8) & 0xff
        b = rgb & 0xff
        return r, g, b
Turn 'ffffff' into (0xff, 0xff, 0xff).
376,020
def get(self, key, default=None):
    if key in self._hparam_types:
        # Ensure the default passed in is compatible with the declared type.
        if default is not None:
            param_type, is_param_list = self._hparam_types[key]
            # Message strings reconstructed (originals stripped).
            type_str = 'list<%s>' % param_type if is_param_list else str(param_type)
            fail_msg = ("Hparam '%s' of type '%s' is incompatible with "
                        "default=%s" % (key, type_str, default))
            is_default_list = isinstance(default, list)
            if is_param_list != is_default_list:
                raise ValueError(fail_msg)
            try:
                if is_default_list:
                    for value in default:
                        _cast_to_type_if_compatible(key, param_type, value)
                else:
                    _cast_to_type_if_compatible(key, param_type, default)
            except ValueError as e:
                raise ValueError('%s. %s' % (fail_msg, e))
        return getattr(self, key)
    return default
Returns the value of `key` if it exists, else `default`.
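For example (hyperparameter names are illustrative):

>>> hparams = HParams(learning_rate=0.1)
>>> hparams.get('learning_rate', 0.01)
0.1
>>> hparams.get('momentum', 0.9)
0.9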
376,021
def libvlc_media_player_set_time(p_mi, i_time):
    # Function-name literal reconstructed from the wrapper's own name.
    f = _Cfunctions.get('libvlc_media_player_set_time', None) or \
        _Cfunction('libvlc_media_player_set_time', ((1,), (1,),), None,
                   None, MediaPlayer, ctypes.c_longlong)
    return f(p_mi, i_time)
Set the movie time (in ms). This has no effect if no media is being played.
Not all formats and protocols support this.

@param p_mi: the Media Player.
@param i_time: the movie time (in ms).
376,022
def _append_slash_if_dir_path(self, relpath):
    if self._isdir_raw(relpath):
        return self._append_trailing_slash(relpath)
    return relpath
For a dir path return a path that has a trailing slash.
376,023
def Disks(self):
    if not self.disks:
        # Key names 'details'/'disks' reconstructed (originals stripped).
        self.disks = clc.v2.Disks(server=self,
                                  disks_lst=self.data['details']['disks'],
                                  session=self.session)
    return(self.disks)
Return disks object associated with server.

>>> clc.v2.Server("WA1BTDIX01").Disks()
<clc.APIv2.disk.Disks object at 0x10feea190>
376,024
def build_struct_type(s_sdt):
    # NOTE: the element tags ('xs:complexType', 'xs:element') and the
    # association phrases ('succeeds'/'precedes') are assumptions; the
    # original string literals were stripped in extraction.
    s_dt = nav_one(s_sdt).S_DT[17]()
    struct = ET.Element('xs:complexType', name=s_dt.name)
    first_filter = lambda selected: not nav_one(selected).S_MBR[46, 'succeeds']()
    s_mbr = nav_any(s_sdt).S_MBR[44](first_filter)
    while s_mbr:
        s_dt = nav_one(s_mbr).S_DT[45]()
        type_name = get_type_name(s_dt)
        ET.SubElement(struct, 'xs:element', name=s_mbr.name, type=type_name)
        s_mbr = nav_one(s_mbr).S_MBR[46, 'precedes']()
    return struct
Build an xsd complexType out of a S_SDT.
376,025
async def get_tracks(self, query) -> Tuple[Track, ...]:
    if not self._warned:
        log.warn("get_tracks() is now deprecated. Please switch to using load_tracks().")
        self._warned = True
    result = await self.load_tracks(query)
    return result.tracks
Gets tracks from lavalink.

Parameters
----------
query : str

Returns
-------
Tuple[Track, ...]
376,026
def check_valid_solution(solution, graph):
    expected = Counter(
        i for (i, _) in graph.iter_starts_with_index()
        if i < graph.get_disjoint(i)
    )
    actual = Counter(
        min(i, graph.get_disjoint(i))
        for i in solution
    )
    difference = Counter(expected)
    difference.subtract(actual)
    difference = {k: v for k, v in difference.items() if v != 0}
    if difference:
        # Message text assumed; the original literal was stripped.
        print('Solution is not valid! Difference in node counts: {}'
              .format(difference))
        return False
    return True
Check that the solution is valid: every path is visited exactly once.
376,027
def search(self, **kwargs):
    # Keyword names ('point', 'bound[point1]', 'filters[...]') reconstructed
    # from the 2GIS API docs; the original literals were stripped.
    point = kwargs.pop('point', False)
    if point:
        kwargs['point'] = '%s,%s' % (point[0], point[1])
    bound = kwargs.pop('bound', False)
    if bound:
        kwargs['bound[point1]'] = bound[0]
        kwargs['bound[point2]'] = bound[1]
    filters = kwargs.pop('filters', False)
    if filters:
        for k, v in filters.items():
            kwargs['filters[%s]' % k] = v
    return self._search(**kwargs)
Firms search http://api.2gis.ru/doc/firms/searches/search/
376,028
def command(self, command, value=1, check=True, allowable_errors=None,
            codec_options=DEFAULT_CODEC_OPTIONS, _deadline=None, **kwargs):
    if isinstance(command, (bytes, unicode)):
        command = SON([(command, value)])
    options = kwargs.copy()
    command.update(options)

    def on_ok(response):
        if check:
            # Tail of the message text assumed; the original was cut off.
            msg = "TxMongo: command {0} on namespace {1} failed with '%s'".format(
                repr(command), ns)
            _check_command_response(response, msg, allowable_errors)
        return response

    ns = self["$cmd"].with_options(codec_options=codec_options)
    return ns.find_one(command, _deadline=_deadline).addCallback(on_ok)
command(command, value=1, check=True, allowable_errors=None, codec_options=DEFAULT_CODEC_OPTIONS)
376,029
def get_create_table_sql(self, table, create_flags=CREATE_INDEXES):
    table_name = table.get_quoted_name(self)
    options = dict((k, v) for k, v in table.get_options().items())
    options["unique_constraints"] = OrderedDict()
    options["indexes"] = OrderedDict()
    options["primary"] = []

    if create_flags & self.CREATE_INDEXES > 0:
        for index in table.get_indexes().values():
            if index.is_primary():
                options["primary"] = index.get_quoted_columns(self)
                options["primary_index"] = index
            else:
                options["indexes"][index.get_quoted_name(self)] = index

    columns = OrderedDict()
    for column in table.get_columns().values():
        column_data = column.to_dict()
        column_data["name"] = column.get_quoted_name(self)
        if column.has_platform_option("version"):
            column_data["version"] = column.get_platform_option("version")
        else:
            column_data["version"] = False
        if column_data["type"] == "string" and column_data["length"] is None:
            column_data["length"] = 255
        if column.get_name() in options["primary"]:
            column_data["primary"] = True
        columns[column_data["name"]] = column_data

    if create_flags & self.CREATE_FOREIGNKEYS > 0:
        options["foreign_keys"] = []
        for fk in table.get_foreign_keys().values():
            options["foreign_keys"].append(fk)

    sql = self._get_create_table_sql(table_name, columns, options)
    return sql
Returns the SQL statement(s) to create a table with the specified name,
columns and constraints on this platform.

:param table: The table
:type table: Table

:type create_flags: int

:rtype: str
376,030
def delim(arguments):
    if bool(arguments.control_files) == bool(arguments.directory):
        # Error text assumed; the original literal was stripped.
        raise ValueError(
            'Exactly one of control_files or directory must be specified.')
    if arguments.directory:
        arguments.control_files.extend(control_iter(arguments.directory))
    with arguments.output as fp:
        results = _delim_accum(arguments.control_files,
                               arguments.file_template, arguments.keys,
                               arguments.exclude_keys, arguments.separator,
                               missing_action=arguments.missing_action)
        r = next(results)
        writer = csv.DictWriter(fp, r.keys(), delimiter=arguments.separator)
        writer.writeheader()
        writer.writerow(r)
        writer.writerows(results)
Execute delim action.

:param arguments: Parsed command line arguments from :func:`main`
376,031
def render(self, *args, **kwargs):
    env = {}
    stdout = []
    for dictarg in args:
        env.update(dictarg)
    env.update(kwargs)
    self.execute(stdout, env)
    return ''.join(stdout)
Render the template using keyword arguments as local variables.
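Typical use, sketched in bottle's SimpleTemplate style (the template text is illustrative):

>>> from bottle import SimpleTemplate
>>> tpl = SimpleTemplate('Hello {{name}}!')
>>> tpl.render(name='World')
'Hello World!'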
376,032
def parse_locator(locator):
    # NOTE: the separator ('='), the space check and the default strategy
    # ('css selector') are assumptions consistent with the docstring; the
    # original literals were stripped in extraction.
    if isinstance(locator, loc.Locator):
        locator = '{by}={locator}'.format(by=locator.by,
                                          locator=locator.locator)
    locator_tuple = namedtuple('LocatorTuple', 'By value')
    if locator.count('=') > 0 and locator.count(' ') < 1:
        by = locator[:locator.find('=')].replace('_', ' ')
        value = locator[locator.find('=')+1:]
        return locator_tuple(by, value)
    else:
        # No explicit strategy: treat the whole string as a css selector.
        value = locator[locator.find('=')+1:]
        return locator_tuple('css selector', value)
Parses a valid selenium By and value from a locator; returns as a named tuple
with properties 'By' and 'value'.

locator -- a valid element locator or css string
376,033
def add_rpt(self, sequence, mod, pt):
    modstr = self.value(mod)
    # Modifier literals ('!!' lookahead, '!' negation) are assumptions;
    # the originals were stripped in extraction.
    if modstr == '!!':
        self._stream.restore_context()
        self.diagnostic.notify(
            error.Severity.ERROR,
            "Cannot repeat a lookahead rule",
            error.LocationInfo.from_stream(self._stream, is_error=True)
        )
        raise self.diagnostic
    if modstr == '!':
        self._stream.restore_context()
        self.diagnostic.notify(
            error.Severity.ERROR,
            "Cannot repeat a negated rule",
            error.LocationInfo.from_stream(self._stream, is_error=True)
        )
        raise self.diagnostic
    oldnode = sequence
    sequence.parser_tree = pt.functor(oldnode.parser_tree)
    return True
Add a repeater to the previous sequence
376,034
def pull(remote='origin', branch='master'):
    # Default remote/branch names reconstructed (originals stripped).
    print(cyan("Pulling changes from repo ( %s / %s)..." % (remote, branch)))
    local("git pull %s %s" % (remote, branch))
Run git pull against the given remote and branch.
376,035
def _generate_url_root(protocol, host, port):
    return URL_ROOT_PATTERN.format(protocol=protocol, host=host, port=port)
Generate API root URL without resources

:param protocol: Web protocol [HTTP | HTTPS] (string)
:param host: Hostname or IP (string)
:param port: Service port (string)
:return: ROOT url
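If URL_ROOT_PATTERN is, say, '{protocol}://{host}:{port}' (an assumption; its real definition lives elsewhere in the module), then:

>>> _generate_url_root('https', 'api.example.com', '443')
'https://api.example.com:443'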
376,036
def get_coauthors(self):
    # NOTE: the Scopus JSON field names below are reconstructions based on
    # the Scopus author-search API; the original literals were stripped.
    res = download(url=self.coauthor_link, accept='json')
    data = loads(res.text)['search-results']
    N = int(data.get('opensearch:totalResults', 0))
    fields = 'surname given_name id areas affiliation_id name city country'
    coauth = namedtuple('Coauthor', fields)
    coauthors = []
    count = 0
    while count < N:
        params = {'start': count, 'count': 25}
        res = download(url=self.coauthor_link, params=params, accept='json')
        data = loads(res.text)['search-results'].get('entry', [])
        for entry in data:
            aff = entry.get('affiliation-current', {})
            try:
                areas = [a['$'] for a in entry.get('subject-area', [])]
            except TypeError:  # single subject area instead of a list
                areas = [entry['subject-area']['$']]
            new = coauth(surname=entry['preferred-name']['surname'],
                         given_name=entry['preferred-name'].get('given-name'),
                         id=entry['dc:identifier'].split(':')[-1],
                         areas='; '.join(areas),
                         name=aff.get('affiliation-name'),
                         affiliation_id=aff.get('affiliation-id'),
                         city=aff.get('affiliation-city'),
                         country=aff.get('affiliation-country'))
            coauthors.append(new)
        count += 25
    return coauthors
Retrieves basic information about co-authors as a list of namedtuples in the
form (surname, given_name, id, areas, affiliation_id, name, city, country),
where areas is a list of subject area codes joined by "; ".

Note: This information will not be cached and is slow for large coauthor
groups.
376,037
def save_load(jid, clear_load, minion=None):
    cb_ = _get_connection()
    try:
        jid_doc = cb_.get(six.text_type(jid))
    except couchbase.exceptions.NotFoundError:
        cb_.add(six.text_type(jid), {}, ttl=_get_ttl())
        jid_doc = cb_.get(six.text_type(jid))
    # Key names ('load', 'tgt', 'tgt_type', 'glob', 'minions') reconstructed;
    # the original literals were stripped in extraction.
    jid_doc.value['load'] = clear_load
    cb_.replace(six.text_type(jid), jid_doc.value,
                cas=jid_doc.cas, ttl=_get_ttl())
    if 'tgt' in clear_load and clear_load['tgt'] != '':
        ckminions = salt.utils.minions.CkMinions(__opts__)
        _res = ckminions.check_minions(
            clear_load['tgt'],
            clear_load.get('tgt_type', 'glob')
        )
        minions = _res['minions']
        save_minions(jid, minions)
Save the load to the specified jid
376,038
def stats_for(self, dt):
    if not isinstance(dt, datetime):
        raise TypeError('dt must be a datetime instance')
    # Endpoint path and date format are assumptions (originals stripped).
    return self._client.get('stats/{0}'.format(dt.strftime('%Y-%m')))
Returns stats for the month containing the given datetime
376,039
def status(self):
    rd = self.repo_dir
    logger.debug("pkg path %s", rd)
    if not rd:
        # Message text assumed; the original literal was stripped.
        print(
            "unable to find pkg '%s'. %s" % (self.name, did_u_mean(self.name))
        )
    cwd = os.getcwd()
    os.chdir(self.repo_dir)
    logger.debug("cwd: %s, getting status %s ", cwd, self.repo_dir)
    try:
        p = git.status(_out=self._sh_stdout(), _err=self._sh_stderr())
        p.wait()
    except Exception:
        pass
    os.chdir(cwd)
Get status on the repo.
376,040
def age(self, as_at_date=None):
    if self.date_of_death is not None or self.is_deceased == True:
        return None
    as_at_date = date.today() if as_at_date is None else as_at_date
    if self.date_of_birth is not None:
        # Tuple comparison handles the has-the-birthday-passed check; the
        # original compared month and day independently, which miscounts
        # e.g. April 15 against a March 20 birthday.
        if (as_at_date.month, as_at_date.day) >= \
                (self.date_of_birth.month, self.date_of_birth.day):
            return (as_at_date.year - self.date_of_birth.year)
        else:
            return ((as_at_date.year - self.date_of_birth.year) - 1)
    else:
        return None
Compute the person's age
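For example (the `person` instance is hypothetical):

>>> from datetime import date
>>> person.date_of_birth = date(1990, 6, 15)
>>> person.age(as_at_date=date(2020, 6, 14))
29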
376,041
def enhancer(self):
    if self._enhancer is None:
        self._enhancer = Enhancer(ppp_config_dir=self.ppp_config_dir)
    return self._enhancer
Lazy loading of enhancements only if needed.
376,042
def files_rm(self, path, recursive=False, **kwargs):
    kwargs.setdefault("opts", {"recursive": recursive})
    args = (path,)
    # Endpoint literal reconstructed (original stripped).
    return self._client.request('/files/rm', args, **kwargs)
Removes a file from the MFS.

.. code-block:: python

    >>> c.files_rm("/bla/file")
    b''

Parameters
----------
path : str
    Filepath within the MFS
recursive : bool
    Recursively remove directories?
376,043
def moveCursor(self, cursorAction, modifiers):
    if cursorAction not in (self.MoveNext, self.MoveRight,
                            self.MovePrevious, self.MoveLeft,
                            self.MoveHome, self.MoveEnd):
        return super(XTreeWidget, self).moveCursor(cursorAction, modifiers)

    header = self.header()
    index = self.currentIndex()
    row = index.row()
    col = index.column()
    vcol = None

    if cursorAction == self.MoveEnd:
        vcol = header.count() - 1
        delta = -1
    elif cursorAction == self.MoveHome:
        vcol = 0
        delta = +1
    elif cursorAction in (self.MoveNext, self.MoveRight):
        delta = +1
    elif cursorAction in (self.MovePrevious, self.MoveLeft):
        delta = -1

    if vcol is None:
        vcol = header.visualIndex(col) + delta

    ncol = header.count()
    lcol = header.logicalIndex(vcol)
    # Skip over hidden columns in the direction of travel.
    while 0 <= vcol and vcol < ncol and self.isColumnHidden(lcol):
        vcol += delta
        lcol = header.logicalIndex(vcol)

    sibling = index.sibling(index.row(), lcol)
    if sibling and sibling.isValid():
        return sibling
    elif delta < 0:
        return index.sibling(index.row() - 1, header.logicalIndex(ncol - 1))
    else:
        return index.sibling(index.row() + 1, header.visualIndex(0))
Returns a QModelIndex object pointing to the next object in the view, based on
the given cursorAction and keyboard modifiers specified by modifiers.

:param cursorAction | <QAbstractItemView.CursorAction>
:param modifiers    | <QtCore.Qt.KeyboardModifiers>
376,044
def empty(self, duration):
    ann = super(DynamicLabelTransformer, self).empty(duration)
    ann.append(time=0, duration=duration, value=None)
    return ann
Empty label annotations.

Constructs a single observation with an empty value (None).

Parameters
----------
duration : number > 0
    The duration of the annotation
376,045
def edit_section(self, id, course_section_end_at=None,
                 course_section_name=None,
                 course_section_restrict_enrollments_to_section_dates=None,
                 course_section_sis_section_id=None,
                 course_section_start_at=None):
    path = {}
    data = {}
    params = {}

    path["id"] = id
    if course_section_name is not None:
        data["course_section[name]"] = course_section_name
    if course_section_sis_section_id is not None:
        data["course_section[sis_section_id]"] = course_section_sis_section_id
    if course_section_start_at is not None:
        data["course_section[start_at]"] = course_section_start_at
    if course_section_end_at is not None:
        data["course_section[end_at]"] = course_section_end_at
    if course_section_restrict_enrollments_to_section_dates is not None:
        data["course_section[restrict_enrollments_to_section_dates]"] = \
            course_section_restrict_enrollments_to_section_dates

    self.logger.debug("PUT /api/v1/sections/{id} with query params: {params} and form data: {data}".format(params=params, data=data, **path))
    return self.generic_request("PUT", "/api/v1/sections/{id}".format(**path),
                                data=data, params=params, single_item=True)
Edit a section. Modify an existing section.
376,046
def str_variants(institute_id, case_name):
    # Query-arg names and category literals reconstructed (originals stripped).
    page = int(request.args.get('page', 1))
    variant_type = request.args.get('variant_type', 'clinical')
    form = StrFiltersForm(request.args)
    institute_obj, case_obj = institute_and_case(store, institute_id,
                                                 case_name)
    query = form.data
    query['variant_type'] = variant_type
    variants_query = store.variants(case_obj['_id'], category='str',
                                    query=query)
    data = controllers.str_variants(store, institute_obj, case_obj,
                                    variants_query, page)
    return dict(institute=institute_obj, case=case_obj,
                variant_type=variant_type, form=form, page=page, **data)
Display a list of STR variants.
376,047
def run_outdated(cls, options):
    # NOTE: the print format and the pip install arguments are assumptions;
    # the original literals were stripped in extraction.
    latest_versions = sorted(
        cls.find_packages_latest_versions(cls.options),
        key=lambda p: p[0].project_name.lower())
    for dist, latest_version, typ in latest_versions:
        if latest_version > dist.parsed_version:
            if options.all:
                pass
            elif options.pinned:
                if cls.can_be_updated(dist, latest_version):
                    continue
            elif not options.pinned:
                if not cls.can_be_updated(dist, latest_version):
                    continue
            elif options.update:  # unreachable after the elifs above; kept as in source
                print(dist.project_name if options.brief else
                      '%s - Latest: %s [%s]' %
                      (cls.output_package(dist), latest_version, typ))
                main(['install', '--upgrade'] +
                     (['--user'] if ENABLE_USER_SITE else []) + [dist.key])
                continue
            print(dist.project_name if options.brief else
                  '%s - Latest: %s [%s]' %
                  (cls.output_package(dist), latest_version, typ))
Print outdated user packages.
376,048
def getRastersAsPngs(self, session, tableName, rasterIds, postGisRampString,
                     rasterField='raster', rasterIdField='id',
                     cellSize=None, resampleMethod='NearestNeighbour'):
    # NOTE: every string literal below (defaults, method names, SQL text) is
    # an illustrative reconstruction; the originals were stripped in
    # extraction. The SQL shape is inferred from the format-placeholder
    # order that survived ({6}...Bilinear and {4}...Bilinear).
    VALID_RESAMPLE_METHODS = ('NearestNeighbour', 'Bilinear', 'Cubic',
                              'CubicSpline', 'Lanczos')
    if resampleMethod not in VALID_RESAMPLE_METHODS:
        print('WARNING: {0} is not a valid resample method. Must be one of: '
              '{1}'.format(resampleMethod, ', '.join(VALID_RESAMPLE_METHODS)))
    if cellSize is not None:
        if not self.isNumber(cellSize):
            raise ValueError('cellSize must be a number')
    rasterIdsString = '({0})'.format(', '.join(rasterIds))
    if cellSize is not None:
        statement = '''
            SELECT ST_AsPNG(ST_Transform(ST_ColorMap(
                       ST_Rescale({0}, {5}, '{6}'), '{4}'), 4326, 'Bilinear'))
            FROM {1} WHERE {2} IN {3};
            '''.format(rasterField, tableName, rasterIdField,
                       rasterIdsString, postGisRampString, cellSize,
                       resampleMethod)
    else:
        statement = '''
            SELECT ST_AsPNG(ST_Transform(ST_ColorMap({0}, '{4}'),
                                         4326, 'Bilinear'))
            FROM {1} WHERE {2} IN {3};
            '''.format(rasterField, tableName, rasterIdField,
                       rasterIdsString, postGisRampString)
    result = session.execute(statement)
    return result
Return the raster in a PNG format
376,049
def _dataflash_dir(self, mpstate):
    # Directory name 'dataflash' reconstructed (original literal stripped).
    if mpstate.settings.state_basedir is None:
        ret = 'dataflash'
    else:
        ret = os.path.join(mpstate.settings.state_basedir, 'dataflash')
    try:
        os.makedirs(ret)
    except OSError as e:
        if e.errno != errno.EEXIST:
            print("DFLogger: OSError making (%s): %s" % (ret, str(e)))
    except Exception as e:
        print("DFLogger: Unknown exception making (%s): %s" % (ret, str(e)))
    return ret
returns directory path to store DF logs in. May be relative
376,050
def plot_isotherm(self, T, Pmin=None, Pmax=None, methods_P=[], pts=50,
                  only_valid=True):
    # Exception/label strings below are reconstructions (originals stripped).
    if not has_matplotlib:
        raise Exception('Optional dependency matplotlib is required for '
                        'plotting')
    if Pmin is None:
        if self.Pmin is not None:
            Pmin = self.Pmin
        else:
            raise Exception('Minimum pressure could not be auto-detected; '
                            'please provide it')
    if Pmax is None:
        if self.Pmax is not None:
            Pmax = self.Pmax
        else:
            raise Exception('Maximum pressure could not be auto-detected; '
                            'please provide it')
    if not methods_P:
        if self.user_methods_P:
            methods_P = self.user_methods_P
        else:
            methods_P = self.all_methods_P
    Ps = np.linspace(Pmin, Pmax, pts)
    for method_P in methods_P:
        if only_valid:
            properties, Ps2 = [], []
            for P in Ps:
                if self.test_method_validity_P(T, P, method_P):
                    try:
                        p = self.calculate_P(T, P, method_P)
                        if self.test_property_validity(p):
                            properties.append(p)
                            Ps2.append(P)
                    except:
                        pass
            plt.plot(Ps2, properties, label=method_P)
        else:
            properties = [self.calculate_P(T, P, method_P) for P in Ps]
            plt.plot(Ps, properties, label=method_P)
    plt.legend(loc='best')
    plt.ylabel(self.name + ', ' + self.units)
    plt.xlabel('Pressure, Pa')
    plt.title(self.name + ' of ' + self.CASRN)
    plt.show()
r'''Method to create a plot of the property vs pressure at a specified
temperature according to either a specified list of methods, or the user
methods (if set), or all methods. User-selectable number of points, and
pressure range.

If only_valid is set, `test_method_validity_P` will be used to check if each
condition in the specified range is valid, and `test_property_validity` will
be used to test the answer, and the method is allowed to fail; only the valid
points will be plotted. Otherwise, the result will be calculated and displayed
as-is. This will not succeed if the method fails.

Parameters
----------
T : float
    Temperature at which to create the plot, [K]
Pmin : float
    Minimum pressure, to begin calculating the property, [Pa]
Pmax : float
    Maximum pressure, to stop calculating the property, [Pa]
methods_P : list, optional
    List of methods to consider
pts : int, optional
    A list of points to calculate the property at; if Pmin to Pmax covers a
    wide range of method validities, only a few points may end up calculated
    for a given method so this may need to be large
only_valid : bool
    If True, only plot successful methods and calculated properties, and
    handle errors; if False, attempt calculation without any checking and
    use methods outside their bounds
376,051
def expect(self, *args):
    t = self.accept(*args)
    if t is not None:
        return t
    self.error("expected: %r" % (args,))
Consume and return the next token if it has the correct type.

Multiple token types (as strings, e.g. 'integer64') can be given as
arguments. If the next token is one of them, consume and return it.

If the token type doesn't match, raise a ConfigParseError.
376,052
def page(self, attr=None, fill=u' '):
    # Default fill character assumed to be a space (original stripped).
    if attr is None:
        attr = self.attr
    if len(fill) != 1:
        raise ValueError
    info = CONSOLE_SCREEN_BUFFER_INFO()
    self.GetConsoleScreenBufferInfo(self.hout, byref(info))
    if info.dwCursorPosition.X != 0 or info.dwCursorPosition.Y != 0:
        self.SetConsoleCursorPosition(self.hout, self.fixcoord(0, 0))
    w = info.dwSize.X
    n = DWORD(0)
    for y in range(info.dwSize.Y):
        self.FillConsoleOutputAttribute(self.hout, attr, w,
                                        self.fixcoord(0, y), byref(n))
        self.FillConsoleOutputCharacterW(self.hout, ord(fill[0]), w,
                                         self.fixcoord(0, y), byref(n))
    self.attr = attr
Fill the entire screen.
376,053
def _scheduleMePlease(self):
    sched = IScheduler(self.store)
    if len(list(sched.scheduledTimes(self))) == 0:
        sched.schedule(self, sched.now())
This queue needs to have its run() method invoked at some point in the future. Tell the dependent scheduler to schedule it if it isn't already pending execution.
376,054
def lock_file(path):
    with _paths_lock:
        lock = _paths_to_locks.get(path)
        if lock is None:
            _paths_to_locks[path] = lock = _FileLock(path)
        return lock
File based lock on ``path``. Creates a file based lock. When acquired, other processes or threads are prevented from acquiring the same lock until it is released.
376,055
def interface_by_ipaddr(self, ipaddr):
    ipaddr = IPAddr(ipaddr)
    for devname, iface in self._devinfo.items():
        if iface.ipaddr == ipaddr:
            return iface
    raise KeyError("No device has IP address {}".format(ipaddr))
Given an IP address, return the interface that 'owns' this address
376,056
def kwargs_to_variable_assignment(kwargs: dict, value_representation=repr,
                                  assignment_operator: str = ' = ',
                                  statement_separator: str = '\n',
                                  statement_per_line: bool = False) -> str:
    # Defaults reconstructed from the docstring (' = ' and newline).
    code = []
    join_str = '\n' if statement_per_line else ''
    for key, value in kwargs.items():
        code.append(key + assignment_operator +
                    value_representation(value) + statement_separator)
    return join_str.join(code)
Convert a dictionary into a string with assignments

Each assignment is constructed based on:
key assignment_operator value_representation(value) statement_separator,
where key and value are the key and value of the dictionary.
Moreover one can separate the assignment statements by new lines.

Parameters
----------
kwargs : dict

assignment_operator : str, optional
    Assignment operator (" = " in python)
value_representation : callable, optional
    How to represent the value in the assignments (repr function in python)
statement_separator : str, optional
    Statement separator (new line in python)
statement_per_line : bool, optional
    Insert each statement on a different line

Returns
-------
str
    All the assignments.

>>> kwargs_to_variable_assignment({'a': 2, 'b': "abc"})
"a = 2\\nb = 'abc'\\n"

>>> kwargs_to_variable_assignment({'a':2 ,'b': "abc"}, statement_per_line=True)
"a = 2\\n\\nb = 'abc'\\n"

>>> kwargs_to_variable_assignment({'a': 2})
'a = 2\\n'

>>> kwargs_to_variable_assignment({'a': 2}, statement_per_line=True)
'a = 2\\n'
376,057
def params_size(m: Union[nn.Module, Learner],
                size: tuple = (3, 64, 64)) -> Tuple[Sizes, Tensor, Hooks]:
    "Pass a dummy input through the model to get the various sizes. Returns (res,x,hooks) if `full`"
    if isinstance(m, Learner):
        if m.data.is_empty:
            raise Exception("This is an empty `Learner` and `Learner.summary` requires some data to pass through the model.")
        ds_type = DatasetType.Train if m.data.train_dl else (
            DatasetType.Valid if m.data.valid_dl else DatasetType.Test)
        x = m.data.one_batch(ds_type=ds_type, detach=False, denorm=False)[0]
        x = [o[:1] for o in x] if is_listy(x) else x[:1]
        m = m.model
    elif isinstance(m, nn.Module):
        x = next(m.parameters()).new(1, *size)
    else:
        # Message text assumed; the original literal was stripped.
        raise TypeError('You should either pass in a Learner or nn.Module')
    with hook_outputs(flatten_model(m)) as hook_o:
        with hook_params(flatten_model(m)) as hook_p:
            x = m.eval()(*x) if is_listy(x) else m.eval()(x)
            output_size = [((o.stored.shape[1:]) if o.stored is not None
                            else None) for o in hook_o]
            params = [(o.stored if o.stored is not None else (None, None))
                      for o in hook_p]
    params, trainables = map(list, zip(*params))
    return output_size, params, trainables
Pass a dummy input through the model to get the various sizes. Returns (res,x,hooks) if `full`
376,058
def set_inbound_cipher(
    self, block_engine, block_size, mac_engine, mac_size, mac_key
):
    self.__block_engine_in = block_engine
    self.__block_size_in = block_size
    self.__mac_engine_in = mac_engine
    self.__mac_size_in = mac_size
    self.__mac_key_in = mac_key
    self.__received_bytes = 0
    self.__received_packets = 0
    self.__received_bytes_overflow = 0
    self.__received_packets_overflow = 0
    # Wait until the reset happens in both directions before clearing
    # the rekey flag.
    self.__init_count |= 2
    if self.__init_count == 3:
        self.__init_count = 0
        self.__need_rekey = False
Switch inbound data cipher.
376,059
def comments(self):
    # Cache key 'comments' reconstructed (original literal stripped).
    if self.cache['comments']:
        return self.cache['comments']
    comments = []
    for message in self.messages[0:3]:
        comment_xml = self.bc.comments(message.id)
        for comment_node in ET.fromstring(comment_xml).findall("comment"):
            comments.append(Comment(comment_node))
    comments.sort()
    comments.reverse()
    self.cache['comments'] = comments
    return self.cache['comments']
Looks through the last 3 messages and returns those comments.
376,060
def get_children(self):
    def visitor(child, parent, children):
        assert child != conf.lib.clang_getNullCursor()
        # Store a reference to the TU in the cursor to keep it alive.
        child._tu = self._tu
        children.append(child)
        return 1  # continue visiting
    children = []
    # Callback key 'cursor_visit' reconstructed (original stripped).
    conf.lib.clang_visitChildren(self, callbacks['cursor_visit'](visitor),
                                 children)
    return iter(children)
Return an iterator for accessing the children of this cursor.
376,061
async def service_messages(self, msg, _context):
    # The message key 'name' is an assumption; the original was stripped.
    msgs = self.service_manager.service_messages(msg.get('name'))
    return [x.to_dict() for x in msgs]
Get all messages for a service.
376,062
def astype(array, y):
    if isinstance(y, autograd.core.Node):
        return array.astype(numpy.array(y.value).dtype)
    return array.astype(numpy.array(y).dtype)
A functional form of the `astype` method.

Args:
    array: The array or number to cast.
    y: An array or number, as the input, whose type should be that of array.

Returns:
    An array or number with the same dtype as `y`.
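For example:

>>> import numpy
>>> astype(numpy.array([1, 2, 3]), 1.0).dtype
dtype('float64')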
376,063
def ar_periodogram(x, window='hanning', window_len=7):
    # Window default reconstructed from the docstring options.
    # === Fit AR(1) model for prewhitening === #
    x_lag = x[:-1]
    X = np.array([np.ones(len(x_lag)), x_lag]).T
    y = np.array(x[1:])
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    e_hat = y - X @ beta_hat
    phi = beta_hat[1]
    # === Compute smoothed periodogram on residuals, then recolor === #
    w, I_w = periodogram(e_hat, window=window, window_len=window_len)
    I_w = I_w / np.abs(1 - phi * np.exp(1j * w))**2
    return w, I_w
Compute periodogram from data x, using prewhitening, smoothing and recoloring.
The data is fitted to an AR(1) model for prewhitening, and the residuals are
used to compute a first-pass periodogram with smoothing. The fitted
coefficients are then used for recoloring.

Parameters
----------
x : array_like(float)
    A flat NumPy array containing the data to smooth
window_len : scalar(int), optional
    An odd integer giving the length of the window. Defaults to 7.
window : string
    A string giving the window type. Possible values are 'flat', 'hanning',
    'hamming', 'bartlett' or 'blackman'

Returns
-------
w : array_like(float)
    Fourier frequencies at which periodogram is evaluated
I_w : array_like(float)
    Values of periodogram at the Fourier frequencies
376,064
def validate(self, config):
    if not isinstance(config, ConfigObject):
        raise Exception("Config object expected")
    if config["output"]["componants"] not in ("local", "remote", "embedded",
                                              "without"):
        raise ValueError("Unknown componant \"%s\"."
                         % config["output"]["componants"])
    if config["output"]["layout"] not in ("default", "content-only"):
        raise ValueError("Unknown layout \"%s\"."
                         % config["output"]["layout"])
    if config["input"]["locations"] is not None:
        unknown_locations = [x for x in config["input"]["locations"]
                             if not os.path.exists(x)]
        if len(unknown_locations) > 0:
            raise ValueError(
                "Location%s \"%s\" does not exists" % (
                    "s" if len(unknown_locations) > 1 else "",
                    ("\" and \"").join(unknown_locations))
            )
        config["input"]["locations"] = [os.path.realpath(x)
                                        for x in config["input"]["locations"]]
    if config["input"]["arguments"] is not None:
        if not isinstance(config["input"]["arguments"], dict):
            raise ValueError(
                "Sources arguments \"%s\" are not a dict"
                % config["input"]["arguments"]
            )
Validate that the source file is ok
376,065
def handle_data(self, data):
    self.days += 1
    signals = {}
    self.orderbook = {}

    if self.initialized and self.manager:
        self.manager.update(
            self.portfolio,
            self.datetime,
            self.perf_tracker.cumulative_risk_metrics.to_dict())
    else:
        # Warm up the algorithm on the first event.
        self.sids = data.keys()
        self.warm(data)
        self.initialized = True
        return

    try:
        signals = self.event(data)
    except Exception, error:
        raise AlgorithmEventFailed(
            reason=error, date=self.datetime, data=data)

    self.orderbook = self.manager.trade_signals_handler(signals)
    if self.auto and self._is_interactive():
        self.process_orders(self.orderbook)
    if self._is_interactive():
        self._call_middlewares()
Method called for each event by zipline. In intuition this is the place to factorize algorithms and then call event()
376,066
def set_os_environ(variables_mapping):
    for variable in variables_mapping:
        os.environ[variable] = variables_mapping[variable]
        logger.log_debug("Set OS environment variable: {}".format(variable))
set variables mapping to os.environ
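For example:

>>> set_os_environ({'TEST_ENV': '1'})
>>> import os
>>> os.environ['TEST_ENV']
'1'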
376,067
def bcesp(y1, y1err, y2, y2err, cerr, nsim=10000):
    import time
    import multiprocessing

    print "BCES,", nsim, "trials... ",
    tic = time.time()

    # Divide the bootstraps equally among the available cores.
    ncores = multiprocessing.cpu_count()
    n = 2 * ncores
    pargs = []
    for i in range(n):
        pargs.append([y1, y1err, y2, y2err, cerr, nsim / n])
    pool = multiprocessing.Pool(processes=ncores)
    presult = pool.map(ab, pargs)
    pool.close()

    # Stack the per-worker bootstrap samples.
    i = 0
    for m in presult:
        if i == 0:
            am, bm = m[0].copy(), m[1].copy()
        else:
            am = numpy.vstack((am, m[0]))
            bm = numpy.vstack((bm, m[1]))
        i = i + 1

    a = numpy.array([am[:, 0].mean(), am[:, 1].mean(),
                     am[:, 2].mean(), am[:, 3].mean()])
    b = numpy.array([bm[:, 0].mean(), bm[:, 1].mean(),
                     bm[:, 2].mean(), bm[:, 3].mean()])

    erra, errb, covab = numpy.zeros(4), numpy.zeros(4), numpy.zeros(4)
    for i in range(4):
        erra[i] = numpy.sqrt(1. / (nsim - 1) * (
            numpy.sum(am[:, i]**2) - nsim * (am[:, i].mean())**2))
        errb[i] = numpy.sqrt(1. / (nsim - 1) * (
            numpy.sum(bm[:, i]**2) - nsim * (bm[:, i].mean())**2))
        covab[i] = 1. / (nsim - 1) * (
            numpy.sum(am[:, i] * bm[:, i])
            - nsim * am[:, i].mean() * bm[:, i].mean())

    print "%f s" % (time.time() - tic)
    return a, b, erra, errb, covab
Parallel implementation of the BCES with bootstrapping. Divides the bootstraps
equally among the threads (cores) of the machine. It will automatically detect
the number of cores available.

Usage:

>>> a,b,aerr,berr,covab=bcesp(x,xerr,y,yerr,cov,nsim)

:param x,y: data
:param xerr,yerr: measurement errors affecting x and y
:param cov: covariance between the measurement errors (all are arrays)
:param nsim: number of Monte Carlo simulations (bootstraps)

:returns: a,b - best-fit parameters a,b of the linear regression
:returns: aerr,berr - the standard deviations in a,b
:returns: covab - the covariance between a and b (e.g. for plotting confidence
    bands)

.. seealso:: Check out ~/work/projects/playground/parallel python/bcesp.py for
   the original, testing, code. I deleted some lines from there to make the
   "production" version.

* v1 Mar 2012: serial version ported from bces_regress.f. Added covariance
  output.
* v2 May 3rd 2012: parallel version ported from nemmen.bcesboot.

.. codeauthor: Rodrigo Nemmen, http://goo.gl/8S1Oo
376,068
def ready_to_draw(mol):
    copied = molutil.clone(mol)
    equalize_terminal_double_bond(copied)
    scale_and_center(copied)
    format_ring_double_bond(copied)
    return copied
Shortcut function to prepare molecule to draw. Overwrite this function for customized appearance. It is recommended to clone the molecule before draw because all the methods above are destructive.
376,069
def instance(cls, size):
    if not getattr(cls, "_instance", None):
        cls._instance = {}
    if size not in cls._instance:
        cls._instance[size] = ThreadPool(size)
    return cls._instance[size]
Cache threadpool since context is recreated for each request
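A usage sketch (hypothetical: PoolHolder stands in for whatever class exposes instance() as a classmethod):

# Repeated calls with the same size return the cached pool;
# a different size gets its own cached pool.
pool_a = PoolHolder.instance(8)
pool_b = PoolHolder.instance(8)
assert pool_a is pool_b
assert PoolHolder.instance(4) is not pool_a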
376,070
def mkIntDate(s):
    # Strip the leading character and cast the remainder to an integer;
    # equivalent to the original int(s[-(len(s) - 1):len(s)]).
    return int(s[1:])
Convert a webserver-formatted date to an integer by stripping the leading character and casting.
376,071
async def set_agent_neighbors(self):
    for addr in self.addrs:
        r_manager = await self.env.connect(addr)
        await r_manager.set_agent_neighbors()
Set neighbors for all the agents in all the slave environments. Assumes that all the slave environments have their neighbors set.
376,072
def compile_config(path,
                   source=None,
                   config_name=None,
                   config_data=None,
                   config_data_source=None,
                   script_parameters=None,
                   salt_env='base'):
    # Several string literals below were stripped in extraction; the log
    # messages and the 'cp.get_file' call are best-effort reconstructions.
    if source:
        log.info('Caching %s', source)
        cached_files = __salt__['cp.get_file'](path=source,
                                               dest=path,
                                               saltenv=salt_env,
                                               makedirs=True)
        if not cached_files:
            error = 'Failed to cache {0}'.format(source)
            log.error('DSC: %s', error)
            raise CommandExecutionError(error)

    if config_data_source:
        log.info('Caching %s', config_data_source)
        cached_files = __salt__['cp.get_file'](path=config_data_source,
                                               dest=config_data,
                                               saltenv=salt_env,
                                               makedirs=True)
        if not cached_files:
            error = 'Failed to cache {0}'.format(config_data_source)
            log.error('DSC: %s', error)
            raise CommandExecutionError(error)

    if not os.path.exists(path):
        error = '{0} not found'.format(path)
        log.error('DSC: %s', error)
        raise CommandExecutionError(error)

    if config_name is None:
        # Per the docstring, fall back to the script file name.
        config_name = os.path.splitext(os.path.basename(path))[0]

    # The remainder of the function (invoking PowerShell to compile the
    # configuration and return the results) is missing from the source.
r''' Compile a config from a PowerShell script (``.ps1``) Args: path (str): Path (local) to the script that will create the ``.mof`` configuration file. If no source is passed, the file must exist locally. Required. source (str): Path to the script on ``file_roots`` to cache at the location specified by ``path``. The source file will be cached locally and then executed. If source is not passed, the config script located at ``path`` will be compiled. Optional. config_name (str): The name of the Configuration within the script to apply. If the script contains multiple configurations within the file a ``config_name`` must be specified. If the ``config_name`` is not specified, the name of the file will be used as the ``config_name`` to run. Optional. config_data (str): Configuration data in the form of a hash table that will be passed to the ``ConfigurationData`` parameter when the ``config_name`` is compiled. This can be the path to a ``.psd1`` file containing the proper hash table or the PowerShell code to create the hash table. .. versionadded:: 2017.7.0 config_data_source (str): The path to the ``.psd1`` file on ``file_roots`` to cache at the location specified by ``config_data``. If this is specified, ``config_data`` must be a local path instead of a hash table. .. versionadded:: 2017.7.0 script_parameters (str): Any additional parameters expected by the configuration script. These must be defined in the script itself. .. versionadded:: 2017.7.0 salt_env (str): The salt environment to use when copying the source. Default is 'base' Returns: dict: A dictionary containing the results of the compilation CLI Example: To compile a config from a script that already exists on the system: .. code-block:: bash salt '*' dsc.compile_config C:\\DSC\\WebsiteConfig.ps1 To cache a config script to the system from the master and compile it: .. code-block:: bash salt '*' dsc.compile_config C:\\DSC\\WebsiteConfig.ps1 salt://dsc/configs/WebsiteConfig.ps1
376,073
def getMeanInpCurrents(params, numunits=100, filepattern=None):
    # The default filepattern (built with os.path.join) was stripped from
    # the source; pass it explicitly.
    x = np.arange(100) * params.dt
    # Exponential synaptic kernel; the time-constant key in model_params was
    # stripped from the source, 'tau_syn' is an assumption.
    kernel = np.exp(-x / params.model_params['tau_syn'])
    K_bg = np.array(sum(params.K_bg, []))
    # ... the per-population computation of mean and std currents was lost in
    # extraction; it fills a dict `data` keyed by population ...
    data = COMM.allgather(data)
    return {k: v for d in data for k, v in d.items()}
Return a dict with the per-population mean and std of the synaptic current, averaging over numunits recorded units from each population in the network. Returned currents are in units of nA.
376,074
def make_path(*args):
    paths = unpack_args(*args)
    return os.path.abspath(os.path.join(*[p for p in paths if p is not None]))
>>> _hack_make_path_doctest_output(make_path("/a", "b")) '/a/b' >>> _hack_make_path_doctest_output(make_path(["/a", "b"])) '/a/b' >>> _hack_make_path_doctest_output(make_path(*["/a", "b"])) '/a/b' >>> _hack_make_path_doctest_output(make_path("/a")) '/a' >>> _hack_make_path_doctest_output(make_path(["/a"])) '/a' >>> _hack_make_path_doctest_output(make_path(*["/a"])) '/a'
376,075
def flatten(value):
    if isinstance(value, np.ndarray):
        def unflatten(vector):
            return np.reshape(vector, value.shape)
        return np.ravel(value), unflatten
    elif isinstance(value, float):
        return np.array([value]), lambda x: x[0]
    elif isinstance(value, tuple):
        if not value:
            return np.array([]), lambda x: ()
        flattened_first, unflatten_first = flatten(value[0])
        flattened_rest, unflatten_rest = flatten(value[1:])

        def unflatten(vector):
            N = len(flattened_first)
            return (unflatten_first(vector[:N]),) + unflatten_rest(vector[N:])
        return np.concatenate((flattened_first, flattened_rest)), unflatten
    elif isinstance(value, list):
        if not value:
            return np.array([]), lambda x: []
        flattened_first, unflatten_first = flatten(value[0])
        flattened_rest, unflatten_rest = flatten(value[1:])

        def unflatten(vector):
            N = len(flattened_first)
            return [unflatten_first(vector[:N])] + unflatten_rest(vector[N:])
        return np.concatenate((flattened_first, flattened_rest)), unflatten
    elif isinstance(value, dict):
        flattened = []
        unflatteners = []
        lengths = []
        keys = []
        for k, v in sorted(value.items(), key=itemgetter(0)):
            cur_flattened, cur_unflatten = flatten(v)
            flattened.append(cur_flattened)
            unflatteners.append(cur_unflatten)
            lengths.append(len(cur_flattened))
            keys.append(k)

        def unflatten(vector):
            split_ixs = np.cumsum(lengths)
            pieces = np.split(vector, split_ixs)
            return {key: unflattener(piece)
                    for piece, unflattener, key in zip(pieces, unflatteners, keys)}
        return np.concatenate(flattened), unflatten
    else:
        raise Exception("Don't know how to flatten type {}".format(type(value)))
value can be any nesting of tuples, lists, dicts, floats, and numpy arrays. Returns a 1D numpy array and an unflatten function that maps such an array back to the original structure.
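A round-trip sketch (assuming numpy and the flatten function above, together with its module-level imports):

import numpy as np

nested = {"a": np.ones((2, 2)), "b": (1.0, [2.0, 3.0])}
vector, unflatten = flatten(nested)
assert vector.shape == (7,)          # 4 array entries + 3 floats
restored = unflatten(vector)
assert np.allclose(restored["a"], nested["a"])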
376,076
from io import StringIO

from pandas import DataFrame, read_csv


def parse_table_data(lines):
    # The stripped prefix tuple most likely ended with "#" (SOFT comment
    # lines), alongside "^" and "!".
    data = "\n".join([i.rstrip() for i in lines
                      if not i.startswith(("^", "!", "#"))])
    if data:
        return read_csv(StringIO(data), index_col=None, sep="\t")
    else:
        return DataFrame()
Parse list of lines from SOFT file into DataFrame. Args: lines (:obj:`Iterable`): Iterator over the lines. Returns: :obj:`pandas.DataFrame`: Table data.
376,077
def parse(self, text, *, metadata=None, filename="input"):
    metadata = metadata or {}
    body = []
    in_metadata = True
    line_number = None

    def throw(s):
        raise BlurbError(f"Error in {filename}:{line_number}:\n{s}")

    def finish_entry():
        nonlocal body
        nonlocal in_metadata
        nonlocal metadata
        nonlocal self

        if not body:
            throw("Blurb text must not be empty!")
        text = textwrap_body(body)
        # The next span was garbled in the source; it appears to reject
        # bodies starting with list/issue prefixes and to validate metadata
        # ("section", "bpo") before appending. Reconstructed loosely:
        for naughty_prefix in ("- ", "Issue #"):
            if text.startswith(naughty_prefix):
                throw("Blurb text must not start with " + repr(naughty_prefix))
        self.append((metadata, text))
        metadata = {}
        body = []
        in_metadata = True

    # The loop header was lost in extraction; iterating over the input lines
    # with a running line number is assumed.
    for line_number, line in enumerate(text.split("\n"), 1):
        line = line.rstrip()
        if in_metadata:
            if line.startswith('..'):
                line = line[2:].strip()
                name, colon, value = line.partition(":")
                assert colon
                name = name.strip()
                value = value.strip()
                if name in metadata:
                    throw("Blurb metadata sets " + repr(name) + " twice!")
                metadata[name] = value
                continue
            if line.startswith("#"):
                # Comment-line prefix assumed; the literal was stripped.
                continue
            in_metadata = False
        if line == "..":
            finish_entry()
            continue
        body.append(line)
    finish_entry()
Parses a string. Appends a list of blurb ENTRIES to self, as tuples: (metadata, body) metadata is a dict. body is a string.
376,078
def _compute_bounds(self, axis, view):
    is_vertical = self._is_vertical
    pos = self._pos
    if axis == 0 and is_vertical:
        return (pos[0, 0], pos[0, 0])
    elif axis == 1 and not is_vertical:
        return (self._pos[0, 1], self._pos[0, 1])
    return None
Return the (min, max) bounding values of this visual along *axis* in the local coordinate system.
376,079
def parseFloat(self, words):
    # The regex patterns and some string literals below were stripped from
    # the source; the patterns shown here are plausible reconstructions.
    def pointFloat(words):
        m = re.search(r'(.*) point (.*)', words)
        if m:
            whole = m.group(1)
            frac = m.group(2)
            total = 0.0
            coeff = 0.10
            for digit in frac.split():
                total += coeff * self.parse(digit)
                coeff /= 10.0
            return self.parseInt(whole) + total
        return None

    def fractionFloat(words):
        m = re.search(r'(.*) and (.*)', words)
        if m:
            whole = self.parseInt(m.group(1))
            frac = m.group(2)
            # Normalize fraction wording, e.g. "a quarter" -> "one quarter"
            # (the exact substitutions were stripped from the source).
            frac = re.sub(r'^a ', 'one ', frac)
            frac = re.sub(r's$', '', frac)
            split = frac.split()
            num = split[:1]
            denom = split[1:]
            while denom:
                try:
                    num_value = self.parse(' '.join(num))
                    denom_value = self.parse(' '.join(denom))
                    return whole + float(num_value) / denom_value
                except Exception:
                    num += denom[:1]
                    denom = denom[1:]
        return None

    result = pointFloat(words)
    if result:
        return result
    result = fractionFloat(words)
    if result:
        return result
    return self.parseInt(words)
Convert a floating-point number described in words to a double. Supports two kinds of descriptions: those with a 'point' (e.g., "one point two five") and those with a fraction (e.g., "one and a quarter"). Args: words (str): Description of the floating-point number. Returns: A double representation of the words.
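A usage sketch (hypothetical: NumberParser stands in for whatever class hosts parseFloat, and note the regex patterns above are reconstructions):

p = NumberParser()
assert abs(p.parseFloat("one point two five") - 1.25) < 1e-9
assert abs(p.parseFloat("one and a quarter") - 1.25) < 1e-9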
376,080
def density_und(CIJ):
    n = len(CIJ)
    # Count nonzero entries in the upper triangle (each undirected edge once).
    k = np.size(np.where(np.triu(CIJ).flatten()))
    kden = k / ((n * n - n) / 2)
    return kden, n, k
Density is the fraction of present connections to possible connections. Parameters ---------- CIJ : NxN np.ndarray undirected (weighted/binary) connection matrix Returns ------- kden : float density N : int number of vertices k : int number of edges Notes ----- Assumes CIJ is undirected and has no self-connections. Weight information is discarded.
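A quick worked example of the density formula above (a sketch, assuming numpy and the density_und function just shown):

import numpy as np

# 3 nodes, one edge (0-1): density = 1 / (3 * 2 / 2) = 1/3.
CIJ = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 0, 0]])
kden, n, k = density_und(CIJ)
assert n == 3 and k == 1 and abs(kden - 1 / 3) < 1e-12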
376,081
def dutyCycle(self, active=False, readOnly=False):
    if self.tm.lrnIterationIdx <= self.dutyCycleTiers[1]:
        dutyCycle = float(self.positiveActivations) \
            / self.tm.lrnIterationIdx
        if not readOnly:
            self._lastPosDutyCycleIteration = self.tm.lrnIterationIdx
            self._lastPosDutyCycle = dutyCycle
        return dutyCycle

    age = self.tm.lrnIterationIdx - self._lastPosDutyCycleIteration
    for tierIdx in range(len(self.dutyCycleTiers) - 1, 0, -1):
        if self.tm.lrnIterationIdx > self.dutyCycleTiers[tierIdx]:
            alpha = self.dutyCycleAlphas[tierIdx]
            break

    dutyCycle = pow(1.0 - alpha, age) * self._lastPosDutyCycle
    if active:
        dutyCycle += alpha

    if not readOnly:
        self._lastPosDutyCycleIteration = self.tm.lrnIterationIdx
        self._lastPosDutyCycle = dutyCycle

    return dutyCycle
Compute/update and return the positive activations duty cycle of this segment. This is a measure of how often this segment is providing good predictions. :param active True if segment just provided a good prediction :param readOnly If True, compute the updated duty cycle, but don't change the cached value. This is used by debugging print statements. :returns: The duty cycle, a measure of how often this segment is providing good predictions. **NOTE:** This method relies on different schemes to compute the duty cycle based on how much history we have. In order to support this tiered approach **IT MUST BE CALLED ON EVERY SEGMENT AT EACH DUTY CYCLE TIER** (@ref dutyCycleTiers). When we don't have a lot of history yet (first tier), we simply return number of positive activations / total number of iterations After a certain number of iterations have accumulated, it converts into a moving average calculation, which is updated only when requested since it can be a bit expensive to compute on every iteration (it uses the pow() function). The duty cycle is computed as follows: dc[t] = (1-alpha) * dc[t-1] + alpha * value[t] If the value[t] has been 0 for a number of steps in a row, you can apply all of the updates at once using: dc[t] = (1-alpha)^(t-lastT) * dc[lastT] We use the alphas and tiers as defined in @ref dutyCycleAlphas and @ref dutyCycleTiers.
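A small self-contained check of the update rule quoted above: applying the per-step moving average for several zero-valued steps matches the closed-form batch update.

alpha = 0.1
dc = 0.5          # dc[lastT]
steps = 7         # t - lastT, all with value[t] == 0

stepwise = dc
for _ in range(steps):
    stepwise = (1 - alpha) * stepwise + alpha * 0.0

batch = (1 - alpha) ** steps * dc
assert abs(stepwise - batch) < 1e-12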
376,082
def from_http(cls, headers: Mapping[str, str]) -> Optional["RateLimit"]:
    try:
        limit = int(headers["x-ratelimit-limit"])
        remaining = int(headers["x-ratelimit-remaining"])
        reset_epoch = float(headers["x-ratelimit-reset"])
    except KeyError:
        return None
    else:
        return cls(limit=limit, remaining=remaining, reset_epoch=reset_epoch)
Gather rate limit information from HTTP headers. The mapping providing the headers is expected to support lowercase keys. Returns ``None`` if ratelimit info is not found in the headers.
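A usage sketch (the constructor keywords are taken from the classmethod above; the header values are made up):

headers = {
    "x-ratelimit-limit": "5000",
    "x-ratelimit-remaining": "4987",
    "x-ratelimit-reset": "1372700873",
}
rl = RateLimit.from_http(headers)
assert rl is not None and rl.remaining == 4987
assert RateLimit.from_http({}) is None  # missing headers -> None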
376,083
def set_file_paths(self, new_file_paths):
    self._file_paths = new_file_paths
    self._file_path_queue = [x for x in self._file_path_queue
                             if x in new_file_paths]
    filtered_processors = {}
    for file_path, processor in self._processors.items():
        if file_path in new_file_paths:
            filtered_processors[file_path] = processor
        else:
            self.log.warning("Stopping processor for %s", file_path)
            processor.terminate()
    self._processors = filtered_processors
Update this with a new set of paths to DAG definition files. :param new_file_paths: list of paths to DAG definition files :type new_file_paths: list[unicode] :return: None
376,084
def send_result(self, return_code, output, service_description='',
                time_stamp=0, specific_servers=None):
    # Dict keys, the default service_description, and the URL template below
    # were stripped from the source; the names used here are best-effort
    # reconstructions and should be checked against the server config.
    if time_stamp == 0:
        time_stamp = int(time.time())

    if specific_servers is None:
        specific_servers = self.servers
    else:
        specific_servers = set(self.servers).intersection(specific_servers)

    for server in specific_servers:
        post_data = {}
        post_data['time_stamp'] = time_stamp
        post_data['host_name'] = self.servers[server]['custom_fqdn']
        post_data['service_description'] = service_description
        post_data['return_code'] = return_code
        post_data['output'] = output

        if self.servers[server]['availability']:
            url = '%s://%s:%s%s' % (self.servers[server]['protocol'],
                                    self.servers[server]['host'],
                                    self.servers[server]['port'],
                                    self.servers[server]['uri'])
            auth = (self.servers[server]['username'],
                    self.servers[server]['password'])
            try:
                response = requests.post(url, auth=auth,
                                         headers=self.http_headers,
                                         verify=self.servers[server]['verify'],
                                         timeout=self.servers[server]['timeout'],
                                         data=post_data)
                if response.status_code == 400:
                    LOG.error("[ws_shinken][%s]: HTTP status: %s - The content of the WebService call is incorrect", server, response.status_code)
                elif response.status_code == 401:
                    LOG.error("[ws_shinken][%s]: HTTP status: %s - You must provide an username and password", server, response.status_code)
                elif response.status_code == 403:
                    LOG.error("[ws_shinken][%s]: HTTP status: %s - The username or password is wrong", server, response.status_code)
                elif response.status_code != 200:
                    LOG.error("[ws_shinken][%s]: HTTP status: %s", server, response.status_code)
            except (requests.ConnectionError, requests.Timeout) as error:
                self.servers[server]['availability'] = False
                LOG.error(error)
        else:
            LOG.error("[ws_shinken][%s]: Data not sent, server is unavailable", server)

        if (self.servers[server]['availability'] is False
                and self.servers[server]['cache'] is True):
            self.servers[server]['csv_writer'].writerow(post_data)
            LOG.info("[ws_shinken][%s]: Data cached", server)
Send a check result to the Shinken WS.
376,085
def joint_sfs_scaled(dac1, dac2, n1=None, n2=None):
    s = joint_sfs(dac1, dac2, n1=n1, n2=n2)
    s = scale_joint_sfs(s)
    return s
Compute the joint site frequency spectrum between two populations, scaled such that a constant value is expected across the spectrum for neutral variation, constant population size and unrelated populations. Parameters ---------- dac1 : array_like, int, shape (n_variants,) Derived allele counts for the first population. dac2 : array_like, int, shape (n_variants,) Derived allele counts for the second population. n1, n2 : int, optional The total number of chromosomes called in each population. Returns ------- joint_sfs_scaled : ndarray, int, shape (n1 + 1, n2 + 1) Array where the (i, j)th element is the scaled frequency of variant sites with i derived alleles in the first population and j derived alleles in the second population.
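A usage sketch against the API implied above (the allele counts here are made up):

import numpy as np

dac1 = np.array([0, 1, 2, 1])  # derived allele counts, population 1
dac2 = np.array([1, 1, 0, 2])  # derived allele counts, population 2
s = joint_sfs_scaled(dac1, dac2, n1=2, n2=2)
assert s.shape == (3, 3)  # (n1 + 1, n2 + 1)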
376,086
def get_resources_strings(self):
    resources_strings = list()
    # The attribute names in the hasattr() calls were stripped from the
    # source; they are restored here from the attribute accesses that follow.
    if hasattr(self, 'DIRECTORY_ENTRY_RESOURCE'):
        for resource_type in self.DIRECTORY_ENTRY_RESOURCE.entries:
            if hasattr(resource_type, 'directory'):
                for resource_id in resource_type.directory.entries:
                    if hasattr(resource_id, 'directory'):
                        if hasattr(resource_id.directory, 'strings') and resource_id.directory.strings:
                            for res_string in resource_id.directory.strings.values():
                                resources_strings.append(res_string)
    return resources_strings
Returns a list of all the strings found within the resources (if any). This method will scan all entries in the resources directory of the PE, if there is one, and will return a list() with the strings. An empty list will be returned otherwise.
376,087
def get_last_live_chat(self):
    now = datetime.now()
    lcqs = self.get_query_set()
    # The ordering field was stripped from the source; most recently ended
    # chats first is assumed.
    lcqs = lcqs.filter(
        chat_ends_at__lte=now,
    ).order_by('-chat_ends_at')
    for itm in lcqs:
        if itm.chat_ends_at + timedelta(days=3) > now:
            return itm
    return None
Check if there is a live chat that ended in the last 3 days, and return it. We will display a link to it on the articles page.
376,088
def _xr_to_keyset(line):
    # The format strings were stripped from the source; the quoting of key
    # and value shown here is an assumption.
    tkns = [elm for elm in line.strip().split(":", 1) if elm]
    if len(tkns) == 1:
        return "'{0}': ".format(tkns[0])
    else:
        key, val = tkns
        return "'{0}': '{1}',".format(key.strip(), val.strip())
Parse xfsrestore output keyset elements.
376,089
def yum_install(self, packages, ignore_error=False):
    # The command string was stripped from the source; 'yum install -y' is
    # assumed from the function name.
    return self.run('yum install -y ' + ' '.join(packages),
                    ignore_error=ignore_error, retry=5)
Install some packages on the remote host. :param packages: list of packages to install.
376,090
def get_publish_path(self, obj):
    return os.path.join(
        obj.chat_type.publish_path,
        obj.publish_path.lstrip("/")
    )
publish_path joins the publish_paths for the chat type and the channel.
376,091
from subprocess import PIPE, Popen

from distutils.errors import DistutilsExecError


def _CCompiler_spawn_silent(cmd, dry_run=None):
    proc = Popen(cmd, stdout=PIPE, stderr=PIPE)
    out, err = proc.communicate()
    if proc.returncode:
        raise DistutilsExecError(err)
Spawn a process, and eat the stdio.
376,092
def get_multi_generation(self, tables, db='default'):
    # The default db alias was stripped from the source; 'default' is assumed.
    generations = []
    for table in tables:
        generations.append(self.get_single_generation(table, db))
    key = self.keygen.gen_multi_key(generations, db)
    val = self.cache_backend.get(key, None, db)
    if val is None:
        val = self.keygen.random_generator()
        self.cache_backend.set(key, val, settings.MIDDLEWARE_SECONDS, db)
    return val
Takes a list of table names and returns an aggregate value for the generation
376,093
def create_chapter_from_string(self, html_string, url=None, title=None):
    clean_html_string = self.clean_function(html_string)
    clean_xhtml_string = clean.html_to_xhtml(clean_html_string)
    if not title:
        try:
            # The parser argument was stripped from the source;
            # 'html.parser' is assumed.
            root = BeautifulSoup(html_string, 'html.parser')
            title_node = root.title
            if title_node is not None:
                title = str(title_node.string)
            else:
                raise ValueError
        except (IndexError, ValueError):
            # The fallback title literal was stripped from the source.
            title = ''
    return Chapter(clean_xhtml_string, title, url)
Creates a Chapter object from a string. Sanitizes the string using the clean_function method, and saves it as the content of the created chapter. Args: html_string (string): The html or xhtml content of the created Chapter url (Option[string]): A url to infer the title of the chapter from title (Option[string]): The title of the created Chapter. By default, this is None, in which case the title will try to be inferred from the webpage at the url. Returns: Chapter: A chapter object whose content is the given string and whose title is that provided or inferred from the url
376,094
def prepare(cls):
    if cls._ask_openapi():
        # The path literal and the printed message below were stripped from
        # the source; both are assumptions.
        napp_path = Path()
        tpl_path = SKEL_PATH / 'napp-structure/username/napp'
        OpenAPI(napp_path, tpl_path).render_template()
        print('Please, update your openapi.yml file.')
        sys.exit()
Prepare NApp to be uploaded by creating openAPI skeleton.
376,095
def _set_default_configs(user_settings, default):
    for key in default:
        if key not in user_settings:
            user_settings[key] = default[key]
    return user_settings
Fill in default values for any settings the user did not specify, and return the updated user settings.
376,096
def remove(self, document_id, namespace, timestamp):
    index, doc_type = self._index_and_mapping(namespace)

    # The action keys were stripped from the source; the Elasticsearch bulk
    # conventions ('_op_type', '_index', '_type', '_id') are assumed.
    action = {
        '_op_type': 'delete',
        '_index': index,
        '_type': doc_type,
        '_id': u(document_id)
    }
    meta_action = {
        '_op_type': 'delete',
        '_index': self.meta_index_name,
        '_type': self.meta_type,
        '_id': u(document_id)
    }
    self.index(action, meta_action)
Remove a document from Elasticsearch.
376,097
def run_actions(self, actions):
    policy = self._policy
    for action in actions:
        config_id = action.config_id
        config_type = config_id.config_type
        client_config = policy.clients[action.client_name]
        client = client_config.get_client()
        c_map = policy.container_maps[config_id.map_name]
        if config_type == ItemType.CONTAINER:
            config = c_map.get_existing(config_id.config_name)
            item_name = policy.cname(config_id.map_name, config_id.config_name,
                                     config_id.instance_name)
        elif config_type == ItemType.VOLUME:
            a_parent_name = config_id.config_name if c_map.use_attached_parent_name else None
            item_name = policy.aname(config_id.map_name, config_id.instance_name,
                                     parent_name=a_parent_name)
            # The feature key was stripped from the source; 'volumes' is assumed.
            if client_config.features['volumes']:
                config = c_map.get_existing_volume(config_id.config_name)
            else:
                config = c_map.get_existing(config_id.config_name)
        elif config_type == ItemType.NETWORK:
            config = c_map.get_existing_network(config_id.config_name)
            item_name = policy.nname(config_id.map_name, config_id.config_name)
        elif config_type == ItemType.IMAGE:
            config = None
            item_name = format_image_tag(config_id.config_name, config_id.instance_name)
        else:
            raise ValueError("Invalid configuration type.", config_id.config_type)

        for action_type in action.action_types:
            try:
                a_method = self.action_methods[(config_type, action_type)]
            except KeyError:
                raise ActionTypeException(config_id, action_type)
            action_config = ActionConfig(action.client_name, action.config_id,
                                         client_config, client, c_map, config)
            try:
                res = a_method(action_config, item_name, **action.extra_data)
            except Exception:
                exc_info = sys.exc_info()
                raise ActionException(exc_info, action.client_name, config_id, action_type)
            if res is not None:
                yield ActionOutput(action.client_name, config_id, action_type, res)
Runs the given lists of attached actions and instance actions on the client. :param actions: Actions to apply. :type actions: list[dockermap.map.action.ItemAction] :return: Where the result is not ``None``, returns the output from the client. Note that this is a generator and needs to be consumed in order for all actions to be performed. :rtype: collections.Iterable[dict]
376,098
def perspective(fovy, aspect, znear, zfar):
    assert(znear != zfar)
    h = math.tan(fovy / 360.0 * math.pi) * znear
    w = h * aspect
    return frustum(-w, w, -h, h, znear, zfar)
Create perspective projection matrix Parameters ---------- fovy : float The field of view along the y axis. aspect : float Aspect ratio of the view. znear : float Near coordinate of the field of view. zfar : float Far coordinate of the field of view. Returns ------- M : ndarray Perspective projection matrix (4x4).
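A usage sketch (assumes the frustum helper the function above calls, from the same module):

import math

M = perspective(fovy=45.0, aspect=16 / 9, znear=0.1, zfar=100.0)
assert M.shape == (4, 4)
# fovy / 360 * pi equals half the field of view in radians, so the
# half-height of the near plane is tan(fovy / 2) * znear:
assert math.isclose(math.tan(math.radians(45.0) / 2) * 0.1,
                    math.tan(45.0 / 360.0 * math.pi) * 0.1)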
376,099
def tri_area(self, lons, lats):
    lons, lats = self._check_integrity(lons, lats)
    x, y, z = _stripack.trans(lats, lons)
    area = _stripack.areas(x, y, z)
    return area
Calculate the area enclosed by 3 points on the unit sphere. Parameters ---------- lons : array of floats, shape (3) longitudinal coordinates in radians lats : array of floats, shape (3) latitudinal coordinates in radians Returns ------- area : float area of triangle on the unit sphere
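A worked check (a sketch; `mesh` is a hypothetical instance of the triangulation class above): the triangle with vertices at (lon, lat) = (0, 0), (pi/2, 0), (0, pi/2) covers one octant of the unit sphere, so its area should be 4*pi / 8 = pi/2.

import numpy as np

lons = np.array([0.0, np.pi / 2, 0.0])
lats = np.array([0.0, 0.0, np.pi / 2])
area = mesh.tri_area(lons, lats)
assert np.isclose(area, np.pi / 2)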