code: string, lengths 51–2.38k
docstring: string, lengths 4–15.2k
def read_partition(self, i):
    self._load_metadata()
    if i < 0 or i >= self.npartitions:
        raise IndexError('%d is out of range' % i)
    return self._get_partition(i)
Return a part of the data corresponding to i-th partition. By default, assumes i should be an integer between zero and npartitions; override for more complex indexing schemes.
def backward(self, speed=1):
    self.left_motor.backward(speed)
    self.right_motor.backward(speed)
Drive the robot backward by running both motors backward. :param float speed: Speed at which to drive the motors, as a value between 0 (stopped) and 1 (full speed). The default is 1.
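A minimal usage sketch, assuming the surrounding class is (or mirrors) gpiozero's Robot; the GPIO pin pairs below are placeholders:

from gpiozero import Robot

robot = Robot(left=(4, 14), right=(17, 18))  # placeholder motor pin pairs
robot.backward(0.5)  # drive backward at half speed
robot.stop()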
def stop(self, *args, **kwargs):
    if self.status in (Status.stopping, Status.stopped):
        logger.debug("{} is already {}".format(self, self.status.name))
    else:
        self.status = Status.stopping
        self.onStopping(*args, **kwargs)
        self.status = Status.stopped
Set the status to Status.stopping, call `onStopping` with the provided args and kwargs, and finally set the status to Status.stopped. Does nothing if the instance is already stopping or stopped.
def url(self):
    if self.is_public:
        return '{0}/{1}/{2}'.format(
            self.bucket._boto_s3.meta.client.meta.endpoint_url,
            self.bucket.name,
            self.name
        )
    else:
        raise ValueError('{0!r} does not have the public-read ACL set. '
                         'Use the make_public() method to allow for '
                         'public URL sharing.'.format(self.name))
Returns the public URL for the given key.
def _full_like_variable(other, fill_value, dtype: Union[str, np.dtype, None] = None):
    from .variable import Variable

    if isinstance(other.data, dask_array_type):
        import dask.array
        if dtype is None:
            dtype = other.dtype
        data = dask.array.full(other.shape, fill_value, dtype=dtype,
                               chunks=other.data.chunks)
    else:
        data = np.full_like(other, fill_value, dtype=dtype)

    return Variable(dims=other.dims, data=data, attrs=other.attrs)
Inner function of full_like, where other must be a variable
def loadBatch(self, records):
    try:
        curr_batch = records[:self.batchSize()]
        next_batch = records[self.batchSize():]
        curr_records = list(curr_batch)

        if self._preloadColumns:
            for record in curr_records:
                record.recordValues(self._preloadColumns)

        if len(curr_records) == self.batchSize():
            self.loadedRecords[object, object].emit(curr_records, next_batch)
        else:
            self.loadedRecords[object].emit(curr_records)
    except ConnectionLostError:
        self.connectionLost.emit()
    except Interruption:
        pass
Loads the records for this instance in a batched mode.
def get_graph_data(self, graph, benchmark):
    if benchmark.get('params'):
        param_iter = enumerate(zip(itertools.product(*benchmark['params']),
                                   graph.get_steps()))
    else:
        param_iter = [(None, (None, graph.get_steps()))]

    for j, (param, steps) in param_iter:
        if param is None:
            entry_name = benchmark['name']
        else:
            entry_name = benchmark['name'] + '({0})'.format(', '.join(param))

        start_revision = self._get_start_revision(graph, benchmark, entry_name)
        threshold = self._get_threshold(graph, benchmark, entry_name)

        if start_revision is None:
            continue

        steps = [step for step in steps if step[1] >= start_revision]

        yield j, entry_name, steps, threshold
Iterator over graph data sets.

Yields
------
param_idx
    Flat index to parameter permutations for parameterized benchmarks.
    None if benchmark is not parameterized.
entry_name
    Name for the data set. If benchmark is non-parameterized, this is
    the benchmark name.
steps
    Steps to consider in regression detection.
threshold
    User-specified threshold for regression detection.
def registration_id_chunks(self, registration_ids):
    try:
        xrange
    except NameError:
        # Fall back to range where xrange is unavailable (Python 3).
        xrange = range
    # Yield successive slices of at most FCM_MAX_RECIPIENTS ids.
    for i in xrange(0, len(registration_ids), self.FCM_MAX_RECIPIENTS):
        yield registration_ids[i:i + self.FCM_MAX_RECIPIENTS]
Splits registration ids into several lists of at most 1000 registration ids per list.

Args:
    registration_ids (list): FCM device registration IDs

Yields:
    list: chunk of at most 1000 registration ids
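A short usage sketch, assuming `fcm` is an instance of the surrounding class with `FCM_MAX_RECIPIENTS` set to 1000; the token strings are placeholders:

ids = ['token-{}'.format(i) for i in range(2500)]
for chunk in fcm.registration_id_chunks(ids):
    print(len(chunk))
# prints 1000, 1000, 500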
def to_dict(obj, **kwargs):
    if is_model(obj.__class__):
        return related_obj_to_dict(obj, **kwargs)
    else:
        return obj
Convert an object into a dictionary. Uses singledispatch to allow for
clean extensions for custom class types.

Reference: https://pypi.python.org/pypi/singledispatch

:param obj: object instance
:param kwargs: keyword arguments such as suppress_private_attr,
    suppress_empty_values, dict_factory
:return: converted dictionary.
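A minimal sketch of the singledispatch pattern the docstring refers to; the `datetime` overload below is only an illustrative extension, not part of the library:

from datetime import datetime
from functools import singledispatch


@singledispatch
def to_dict(obj, **kwargs):
    return obj  # fallback for plain values


@to_dict.register(datetime)
def _(obj, **kwargs):
    return obj.isoformat()  # custom handling for a specific type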
def merge_asof(left, right, on=None,
               left_on=None, right_on=None,
               left_index=False, right_index=False,
               by=None, left_by=None, right_by=None,
               suffixes=('_x', '_y'),
               tolerance=None,
               allow_exact_matches=True,
               direction='backward'):
    op = _AsOfMerge(left, right,
                    on=on, left_on=left_on, right_on=right_on,
                    left_index=left_index, right_index=right_index,
                    by=by, left_by=left_by, right_by=right_by,
                    suffixes=suffixes,
                    how='asof', tolerance=tolerance,
                    allow_exact_matches=allow_exact_matches,
                    direction=direction)
    return op.get_result()
Perform an asof merge. This is similar to a left-join except that we match on nearest key rather than equal keys. Both DataFrames must be sorted by the key. For each row in the left DataFrame: - A "backward" search selects the last row in the right DataFrame whose 'on' key is less than or equal to the left's key. - A "forward" search selects the first row in the right DataFrame whose 'on' key is greater than or equal to the left's key. - A "nearest" search selects the row in the right DataFrame whose 'on' key is closest in absolute distance to the left's key. The default is "backward" and is compatible in versions below 0.20.0. The direction parameter was added in version 0.20.0 and introduces "forward" and "nearest". Optionally match on equivalent keys with 'by' before searching with 'on'. .. versionadded:: 0.19.0 Parameters ---------- left : DataFrame right : DataFrame on : label Field name to join on. Must be found in both DataFrames. The data MUST be ordered. Furthermore this must be a numeric column, such as datetimelike, integer, or float. On or left_on/right_on must be given. left_on : label Field name to join on in left DataFrame. right_on : label Field name to join on in right DataFrame. left_index : boolean Use the index of the left DataFrame as the join key. .. versionadded:: 0.19.2 right_index : boolean Use the index of the right DataFrame as the join key. .. versionadded:: 0.19.2 by : column name or list of column names Match on these columns before performing merge operation. left_by : column name Field names to match on in the left DataFrame. .. versionadded:: 0.19.2 right_by : column name Field names to match on in the right DataFrame. .. versionadded:: 0.19.2 suffixes : 2-length sequence (tuple, list, ...) Suffix to apply to overlapping column names in the left and right side, respectively. tolerance : integer or Timedelta, optional, default None Select asof tolerance within this range; must be compatible with the merge index. allow_exact_matches : boolean, default True - If True, allow matching with the same 'on' value (i.e. less-than-or-equal-to / greater-than-or-equal-to) - If False, don't match the same 'on' value (i.e., strictly less-than / strictly greater-than) direction : 'backward' (default), 'forward', or 'nearest' Whether to search for prior, subsequent, or closest matches. .. versionadded:: 0.20.0 Returns ------- merged : DataFrame See Also -------- merge merge_ordered Examples -------- >>> left = pd.DataFrame({'a': [1, 5, 10], 'left_val': ['a', 'b', 'c']}) >>> left a left_val 0 1 a 1 5 b 2 10 c >>> right = pd.DataFrame({'a': [1, 2, 3, 6, 7], ... 'right_val': [1, 2, 3, 6, 7]}) >>> right a right_val 0 1 1 1 2 2 2 3 3 3 6 6 4 7 7 >>> pd.merge_asof(left, right, on='a') a left_val right_val 0 1 a 1 1 5 b 3 2 10 c 7 >>> pd.merge_asof(left, right, on='a', allow_exact_matches=False) a left_val right_val 0 1 a NaN 1 5 b 3.0 2 10 c 7.0 >>> pd.merge_asof(left, right, on='a', direction='forward') a left_val right_val 0 1 a 1.0 1 5 b 6.0 2 10 c NaN >>> pd.merge_asof(left, right, on='a', direction='nearest') a left_val right_val 0 1 a 1 1 5 b 6 2 10 c 7 We can use indexed DataFrames as well. >>> left = pd.DataFrame({'left_val': ['a', 'b', 'c']}, index=[1, 5, 10]) >>> left left_val 1 a 5 b 10 c >>> right = pd.DataFrame({'right_val': [1, 2, 3, 6, 7]}, ... 
index=[1, 2, 3, 6, 7]) >>> right right_val 1 1 2 2 3 3 6 6 7 7 >>> pd.merge_asof(left, right, left_index=True, right_index=True) left_val right_val 1 a 1 5 b 3 10 c 7 Here is a real-world times-series example >>> quotes time ticker bid ask 0 2016-05-25 13:30:00.023 GOOG 720.50 720.93 1 2016-05-25 13:30:00.023 MSFT 51.95 51.96 2 2016-05-25 13:30:00.030 MSFT 51.97 51.98 3 2016-05-25 13:30:00.041 MSFT 51.99 52.00 4 2016-05-25 13:30:00.048 GOOG 720.50 720.93 5 2016-05-25 13:30:00.049 AAPL 97.99 98.01 6 2016-05-25 13:30:00.072 GOOG 720.50 720.88 7 2016-05-25 13:30:00.075 MSFT 52.01 52.03 >>> trades time ticker price quantity 0 2016-05-25 13:30:00.023 MSFT 51.95 75 1 2016-05-25 13:30:00.038 MSFT 51.95 155 2 2016-05-25 13:30:00.048 GOOG 720.77 100 3 2016-05-25 13:30:00.048 GOOG 720.92 100 4 2016-05-25 13:30:00.048 AAPL 98.00 100 By default we are taking the asof of the quotes >>> pd.merge_asof(trades, quotes, ... on='time', ... by='ticker') time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN We only asof within 2ms between the quote time and the trade time >>> pd.merge_asof(trades, quotes, ... on='time', ... by='ticker', ... tolerance=pd.Timedelta('2ms')) time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 51.95 51.96 1 2016-05-25 13:30:00.038 MSFT 51.95 155 NaN NaN 2 2016-05-25 13:30:00.048 GOOG 720.77 100 720.50 720.93 3 2016-05-25 13:30:00.048 GOOG 720.92 100 720.50 720.93 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN We only asof within 10ms between the quote time and the trade time and we exclude exact matches on time. However *prior* data will propagate forward >>> pd.merge_asof(trades, quotes, ... on='time', ... by='ticker', ... tolerance=pd.Timedelta('10ms'), ... allow_exact_matches=False) time ticker price quantity bid ask 0 2016-05-25 13:30:00.023 MSFT 51.95 75 NaN NaN 1 2016-05-25 13:30:00.038 MSFT 51.95 155 51.97 51.98 2 2016-05-25 13:30:00.048 GOOG 720.77 100 NaN NaN 3 2016-05-25 13:30:00.048 GOOG 720.92 100 NaN NaN 4 2016-05-25 13:30:00.048 AAPL 98.00 100 NaN NaN
def python_sidebar_navigation(python_input): def get_text_fragments(): tokens = [] tokens.extend([ ('class:sidebar', ' '), ('class:sidebar.key', '[Arrows]'), ('class:sidebar', ' '), ('class:sidebar.description', 'Navigate'), ('class:sidebar', ' '), ('class:sidebar.key', '[Enter]'), ('class:sidebar', ' '), ('class:sidebar.description', 'Hide menu'), ]) return tokens return Window( FormattedTextControl(get_text_fragments), style='class:sidebar', width=Dimension.exact(43), height=Dimension.exact(1))
Create the `Layout` showing the navigation information for the sidebar.
def anchor_stream(self, stream_id, converter="rtc"):
    if isinstance(converter, str):
        converter = self._known_converters.get(converter)

        if converter is None:
            raise ArgumentError("Unknown anchor converter string: %s" % converter,
                                known_converters=list(self._known_converters))

    self._anchor_streams[stream_id] = converter
Mark a stream as containing anchor points.
def to_embedded(pool_id=None, is_thin_enabled=None, is_deduplication_enabled=None, is_compression_enabled=None, is_backup_only=None, size=None, tiering_policy=None, request_id=None, src_id=None, name=None, default_sp=None, replication_resource_type=None): return {'poolId': pool_id, 'isThinEnabled': is_thin_enabled, 'isDeduplicationEnabled': is_deduplication_enabled, 'isCompressionEnabled': is_compression_enabled, 'isBackupOnly': is_backup_only, 'size': size, 'tieringPolicy': tiering_policy, 'requestId': request_id, 'srcId': src_id, 'name': name, 'defaultSP': default_sp, 'replicationResourceType': replication_resource_type}
Constructs an embedded object of `UnityResourceConfig`.

:param pool_id: storage pool of the resource.
:param is_thin_enabled: is thin type or not.
:param is_deduplication_enabled: is deduplication enabled or not.
:param is_compression_enabled: is in-line compression (ILC) enabled or not.
:param is_backup_only: is backup only or not.
:param size: size of the resource.
:param tiering_policy: `TieringPolicyEnum` value. Tiering policy for the
    resource.
:param request_id: unique request ID for the configuration.
:param src_id: storage resource ID if it already exists.
:param name: name of the storage resource.
:param default_sp: `NodeEnum` value. Default storage processor for the
    resource.
:param replication_resource_type: `ReplicationEndpointResourceTypeEnum`
    value. Replication resource type.
:return: a dict holding the embedded resource configuration.
def initialise_loggers(names, log_level=_builtin_logging.WARNING, handler_class=SplitStreamHandler):
    frmttr = get_formatter()
    for name in names:
        logr = _builtin_logging.getLogger(name)
        handler = handler_class()
        handler.setFormatter(frmttr)
        logr.addHandler(handler)
        logr.setLevel(log_level)
Initialises specified loggers to generate output at the specified logging level. If the specified named loggers do not exist, they are created. :type names: :obj:`list` of :obj:`str` :param names: List of logger names. :type log_level: :obj:`int` :param log_level: Log level for messages, typically one of :obj:`logging.DEBUG`, :obj:`logging.INFO`, :obj:`logging.WARN`, :obj:`logging.ERROR` or :obj:`logging.CRITICAL`. See :ref:`levels`. :type handler_class: One of the :obj:`logging.handlers` classes. :param handler_class: The handler class for output of log messages, for example :obj:`SplitStreamHandler` or :obj:`logging.StreamHandler`. Example:: >>> from array_split import logging >>> logging.initialise_loggers(["my_logger",], log_level=logging.INFO) >>> logger = logging.getLogger("my_logger") >>> logger.info("This is info logging.") 16:35:09|ARRSPLT| This is info logging. >>> logger.debug("Not logged at logging.INFO level.") >>>
def _get_callback_context(env):
    context = None  # avoid an unbound name if neither branch matches
    if env.model is not None and env.cvfolds is None:
        context = 'train'
    elif env.model is None and env.cvfolds is not None:
        context = 'cv'
    return context
Return whether the current callback context is 'cv' or 'train'.
def date_range(self):
    try:
        days = int(self.days)
    except ValueError:
        exit_after_echo(QUERY_DAYS_INVALID)

    if days < 1:
        exit_after_echo(QUERY_DAYS_INVALID)

    start = datetime.today()
    end = start + timedelta(days=days)
    return (
        datetime.strftime(start, '%Y-%m-%d'),
        datetime.strftime(end, '%Y-%m-%d')
    )
Generate date range according to the `days` user input.
def stop_workflow(config, *, names=None): jobs = list_jobs(config, filter_by_type=JobType.Workflow) if names is not None: filtered_jobs = [] for job in jobs: if (job.id in names) or (job.name in names) or (job.workflow_id in names): filtered_jobs.append(job) else: filtered_jobs = jobs success = [] failed = [] for job in filtered_jobs: client = Client(SignalConnection(**config.signal, auto_connect=True), request_key=job.workflow_id) if client.send(Request(action='stop_workflow')).success: success.append(job) else: failed.append(job) return success, failed
Stop one or more workflows. Args: config (Config): Reference to the configuration object from which the settings for the workflow are retrieved. names (list): List of workflow names, workflow ids or workflow job ids for the workflows that should be stopped. If all workflows should be stopped, set it to None. Returns: tuple: A tuple of the workflow jobs that were successfully stopped and the ones that could not be stopped.
def get(key, profile=None):
    conn = salt.utils.memcached.get_conn(profile)
    return salt.utils.memcached.get(conn, key)
Get a value from memcached
def classify_coincident(st_vals, coincident):
    if not coincident:
        return None

    if st_vals[0, 0] >= st_vals[0, 1] or st_vals[1, 0] >= st_vals[1, 1]:
        return UNUSED_T
    else:
        return CLASSIFICATION_T.COINCIDENT
Determine if coincident parameters are "unused".

.. note::

   This is a helper for :func:`surface_intersections`.

In the case that ``coincident`` is :data:`True`, then we'll have two
sets of parameters :math:`(s_1, t_1)` and :math:`(s_2, t_2)`.

If one of :math:`s_1 < s_2` or :math:`t_1 < t_2` is not satisfied, the
coincident segments will be moving in opposite directions, hence don't
define an interior of an intersection.

.. warning::

   In the "coincident" case, this assumes, but doesn't check, that
   ``st_vals`` is ``2 x 2``.

Args:
    st_vals (numpy.ndarray): ``2 X N`` array of intersection parameters.
    coincident (bool): Flag indicating if the intersections are the
        endpoints of coincident segments of two curves.

Returns:
    Optional[.IntersectionClassification]: The classification of the
    intersections.
def reset(self):
    if self.resync_period > 0 and (self.resets + 1) % self.resync_period == 0:
        self._exit_resync()

    while not self.done:
        self.done = self._quit_episode()
        if not self.done:
            time.sleep(0.1)

    return self._start_up()
gym api reset
def write(name, keyword, domain, citation, author, description, species, version, contact, licenses, values, functions, output, value_prefix): write_namespace( name, keyword, domain, author, citation, values, namespace_description=description, namespace_species=species, namespace_version=version, author_contact=contact, author_copyright=licenses, functions=functions, file=output, value_prefix=value_prefix )
Build a namespace from items.
def get_logfile_path(working_dir):
    logfile_filename = virtualchain_hooks.get_virtual_chain_name() + ".log"
    return os.path.join(working_dir, logfile_filename)
Get the logfile path for our service endpoint.
def is_monotonic(full_list):
    prev_elements = set({full_list[0]})
    prev_item = full_list[0]

    for item in full_list:
        if item != prev_item:
            if item in prev_elements:
                return False
            prev_item = item
            prev_elements.add(item)

    return True
Determine whether elements in a list are monotonic, i.e. unique elements are clustered together. For example, [5, 5, 3, 4] is monotonic, while [5, 3, 5] is not.
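A couple of illustrative calls, assuming `is_monotonic` is available as defined above:

assert is_monotonic([5, 5, 3, 4]) is True   # unique values appear in contiguous runs
assert is_monotonic([5, 3, 5]) is False     # 5 re-appears after another value
assert is_monotonic(['a', 'a', 'b']) is True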
def reload(self):
    other = type(self).get(self.name, service=self.service)
    self.request_count = other.request_count
reload self from self.service
def get_current_shutit_pexpect_session(self, note=None):
    self.handle_note(note)
    res = self.current_shutit_pexpect_session
    self.handle_note_after(note)
    return res
Returns the currently-set default pexpect child. @return: default shutit pexpect child object
def get_genus_type_metadata(self): metadata = dict(self._mdata['genus_type']) metadata.update({'existing_string_values': self._my_map['genusTypeId']}) return Metadata(**metadata)
Gets the metadata for a genus type. return: (osid.Metadata) - metadata for the genus *compliance: mandatory -- This method must be implemented.*
def delete_user_template(self, uid=0, temp_id=0, user_id=''): if self.tcp and user_id: command = 134 command_string = pack('<24sB', str(user_id), temp_id) cmd_response = self.__send_command(command, command_string) if cmd_response.get('status'): return True else: return False if not uid: users = self.get_users() users = list(filter(lambda x: x.user_id==str(user_id), users)) if not users: return False uid = users[0].uid command = const.CMD_DELETE_USERTEMP command_string = pack('hb', uid, temp_id) cmd_response = self.__send_command(command, command_string) if cmd_response.get('status'): return True else: return False
Delete a specific template.

:param uid: user ID generated by the device
:param temp_id: template ID to delete
:param user_id: your own user ID
:return: bool
def convert_response(allocate_quota_response, project_id): if not allocate_quota_response or not allocate_quota_response.allocateErrors: return _IS_OK theError = allocate_quota_response.allocateErrors[0] error_tuple = _QUOTA_ERROR_CONVERSION.get(theError.code, _IS_UNKNOWN) if error_tuple[1].find(u'{') == -1: return error_tuple updated_msg = error_tuple[1].format(project_id=project_id, detail=theError.description or u'') return error_tuple[0], updated_msg
Computes a http status code and message `AllocateQuotaResponse` The return value a tuple (code, message) where code: is the http status code message: is the message to return Args: allocate_quota_response (:class:`endpoints_management.gen.servicecontrol_v1_messages.AllocateQuotaResponse`): the response from calling an api Returns: tuple(code, message)
def config_changed(inherit_napalm_device=None, **kwargs): is_config_changed = False reason = '' try_compare = compare_config(inherit_napalm_device=napalm_device) if try_compare.get('result'): if try_compare.get('out'): is_config_changed = True else: reason = 'Configuration was not changed on the device.' else: reason = try_compare.get('comment') return is_config_changed, reason
Will prompt if the configuration has been changed. :return: A tuple with a boolean that specifies if the config was changed on the device.\ And a string that provides more details of the reason why the configuration was not changed. CLI Example: .. code-block:: bash salt '*' net.config_changed
def get_archive_type(path): if not is_directory_archive(path): raise TypeError('Unable to determine the type of archive at path: %s' % path) try: ini_path = '/'.join([_convert_slashes(path), 'dir_archive.ini']) parser = _ConfigParser.SafeConfigParser() parser.read(ini_path) contents = parser.get('metadata', 'contents') return contents except Exception as e: raise TypeError('Unable to determine type of archive for path: %s' % path, e)
Returns the contents type for the provided archive path. Parameters ---------- path : string Directory to evaluate. Returns ------- Returns a string of: sframe, sgraph, raises TypeError for anything else
def quantile_turnover(quantile_factor, quantile, period=1): quant_names = quantile_factor[quantile_factor == quantile] quant_name_sets = quant_names.groupby(level=['date']).apply( lambda x: set(x.index.get_level_values('asset'))) if isinstance(period, int): name_shifted = quant_name_sets.shift(period) else: shifted_idx = utils.add_custom_calendar_timedelta( quant_name_sets.index, -pd.Timedelta(period), quantile_factor.index.levels[0].freq) name_shifted = quant_name_sets.reindex(shifted_idx) name_shifted.index = quant_name_sets.index new_names = (quant_name_sets - name_shifted).dropna() quant_turnover = new_names.apply( lambda x: len(x)) / quant_name_sets.apply(lambda x: len(x)) quant_turnover.name = quantile return quant_turnover
Computes the proportion of names in a factor quantile that were not in that quantile in the previous period. Parameters ---------- quantile_factor : pd.Series DataFrame with date, asset and factor quantile. quantile : int Quantile on which to perform turnover analysis. period: string or int, optional Period over which to calculate the turnover. If it is a string it must follow pandas.Timedelta constructor format (e.g. '1 days', '1D', '30m', '3h', '1D1h', etc). Returns ------- quant_turnover : pd.Series Period by period turnover for that quantile.
def HasStorage(self):
    from neo.Core.State.ContractState import ContractPropertyState
    return self.ContractProperties & ContractPropertyState.HasStorage > 0
Flag indicating if storage is available. Returns: bool: True if available. False otherwise.
def _set_rock_ridge(self, rr): if not self.rock_ridge: self.rock_ridge = rr else: for ver in ['1.09', '1.10', '1.12']: if self.rock_ridge == ver: if rr and rr != ver: raise pycdlibexception.PyCdlibInvalidISO('Inconsistent Rock Ridge versions on the ISO!')
An internal method to set the Rock Ridge version of the ISO given the Rock Ridge version of the previous entry. Parameters: rr - The version of rr from the last directory record. Returns: Nothing.
def read_file(path):
    if os.path.isabs(path):
        with wrap_file_exceptions():
            with open(path, 'rb') as stream:
                return stream.read()

    with wrap_file_exceptions():
        stream = ca_storage.open(path)
        try:
            return stream.read()
        finally:
            stream.close()
Read the file from the given path. If ``path`` is an absolute path, reads a file from the local filesystem. For relative paths, read the file using the storage backend configured using :ref:`CA_FILE_STORAGE <settings-ca-file-storage>`.
def getAWSAccountID():
    link = "http://169.254.169.254/latest/dynamic/instance-identity/document"
    try:
        conn = urllib2.urlopen(url=link, timeout=5)
    except urllib2.URLError:
        return '0'
    jsonData = json.loads(conn.read())
    return jsonData['accountId']
Return an instance's AWS account ID, or '0' when not running in EC2.
def __clear_covers(self):
    for i in range(self.n):
        self.row_covered[i] = False
        self.col_covered[i] = False
Clear all covered matrix cells
def ngettext(self, singular, plural, num, domain=None, **variables):
    variables.setdefault('num', num)
    t = self.get_translations(domain)
    return t.ungettext(singular, plural, num) % variables
Translate a string with the current locale. The `num` parameter is used to dispatch between singular and various plural forms of the message.
def headloss_rect(FlowRate, Width, DistCenter, Length, KMinor, Nu, PipeRough, openchannel): return (headloss_exp_rect(FlowRate, Width, DistCenter, KMinor).magnitude + headloss_fric_rect(FlowRate, Width, DistCenter, Length, Nu, PipeRough, openchannel).magnitude)
Return the total head loss in a rectangular channel. Total head loss is a combination of the major and minor losses. This equation applies to both laminar and turbulent flows.
def _parse_error_message(self, message): msg = message['error']['message'] code = message['error']['code'] err = None out = None if 'data' in message['error']: err = ' '.join(message['error']['data'][-1]['errors']) out = message['error']['data'] return code, msg, err, out
Parses the eAPI failure response message This method accepts an eAPI failure message and parses the necesary parts in order to generate a CommandError. Args: message (str): The error message to parse Returns: tuple: A tuple that consists of the following: * code: The error code specified in the failure message * message: The error text specified in the failure message * error: The error text from the command that generated the error (the last command that ran) * output: A list of all output from all commands
def on_api_error(self, error_status=None, message=None, event_origin=None): if message.meta["error"].find("Widget instance not found on server") > -1: error_status["retry"] = True self.comm.attach_remote( self.id, self.remote_name, remote_name=self.name, **self.init_kwargs )
API error handling
def requires_refcount(cls, func): @functools.wraps(func) def requires_active_handle(*args, **kwargs): if cls.refcount() == 0: raise NoHandleException() return func(*args, **kwargs) return requires_active_handle
The ``requires_refcount`` decorator adds a check prior to call ``func`` to verify that there is an active handle. if there is no such handle, a ``NoHandleException`` exception is thrown.
def process_params(mod_id, params, type_params):
    res = {}
    for param_name, param_info in type_params.items():
        val = params.get(param_name, param_info.get("default", None))
        if val is not None:
            param_res = dict(param_info)
            param_res["value"] = val
            res[param_name] = param_res
        elif param_info.get("required", False):
            # "required" is a per-parameter flag, so look it up on param_info
            # rather than on the enclosing type_params dict.
            raise ValueError(
                'Required parameter "{}" is not defined for module '
                '"{}"'.format(param_name, mod_id)
            )
    return res
Takes as input a dictionary of parameters defined on a module and the information about the required parameters defined on the corresponding module type. Validates that all required parameters were supplied and fills any missing parameters with their default values from the module type. Returns a nested dictionary of the same format as the `type_params` but with an additional key `value` on each inner dictionary that gives the value of that parameter for this specific module.
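A small worked example with hypothetical module data, illustrating the expected input and output shape:

type_params = {
    "rate": {"default": 1.0},
    "path": {"required": True},
}
params = {"path": "/tmp/data"}  # values supplied for the hypothetical module "mod1"

result = process_params("mod1", params, type_params)
# result == {
#     "rate": {"default": 1.0, "value": 1.0},
#     "path": {"required": True, "value": "/tmp/data"},
# }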
def knx_to_time(knxdata):
    if len(knxdata) != 3:
        raise KNXException("Can only convert a 3 Byte object to time")

    dow = knxdata[0] >> 5
    res = time(knxdata[0] & 0x1f, knxdata[1], knxdata[2])

    return [res, dow]
Converts a KNX time to a tuple of a time object and the day of week
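A worked example of the bit layout: the top three bits of the first byte carry the day-of-week code and the low five bits the hour. The byte values below are made up, and `time` is assumed to be `datetime.time`:

from datetime import time

res, dow = knx_to_time([0x2A, 30, 15])  # 0x2A == 0b001_01010
assert dow == 1                         # 0b001   -> day-of-week code 1
assert res == time(10, 30, 15)          # 0b01010 -> hour 10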
def construct_rest_of_worlds_mapping(self, excluded, fp=None): metadata = { 'filename': 'faces.gpkg', 'field': 'id', 'sha256': sha256(self.faces_fp) } data = [] for key, locations in excluded.items(): for location in locations: assert location in self.locations, "Can't find location {}".format(location) included = self.all_faces.difference( {face for loc in locations for face in self.data[loc]} ) data.append((key, sorted(included))) obj = {'data': data, 'metadata': metadata} if fp: with open(fp, "w") as f: json.dump(obj, f, indent=2) else: return obj
Construct topo mapping file for ``excluded``. ``excluded`` must be a **dictionary** of {"rest-of-world label": ["names", "of", "excluded", "locations"]}``. Topo mapping has the data format: .. code-block:: python { 'data': [ ['location label', ['topo face integer ids']], ], 'metadata': { 'filename': 'name of face definitions file', 'field': 'field with uniquely identifies the fields in ``filename``', 'sha256': 'SHA 256 hash of ``filename``' } }
def load(fp, encode_nominal=False, return_type=DENSE):
    decoder = ArffDecoder()
    return decoder.decode(fp, encode_nominal=encode_nominal,
                          return_type=return_type)
Load a file-like object containing the ARFF document and convert it into a Python object. :param fp: a file-like object. :param encode_nominal: boolean, if True perform a label encoding while reading the .arff file. :param return_type: determines the data structure used to store the dataset. Can be one of `arff.DENSE`, `arff.COO`, `arff.LOD`, `arff.DENSE_GEN` or `arff.LOD_GEN`. Consult the sections on `working with sparse data`_ and `loading progressively`_. :return: a dictionary.
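A short usage sketch, assuming this is the `load` function of the liac-arff package and that an `iris.arff` file exists locally:

import arff  # liac-arff

with open('iris.arff') as fp:
    dataset = arff.load(fp)

print(dataset['relation'])        # name of the relation
print(len(dataset['data']))       # number of data rows
print(dataset['attributes'][:2])  # first two (name, type) pairs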
def normalize(self, text, cleaned=False, **kwargs):
    if not cleaned:
        text = self.clean(text, **kwargs)
    return ensure_list(text)
Create a representation ideal for comparisons, but not to be shown to the user.
def translate(root_list, use_bag_semantics=False):
    translator = (Translator() if use_bag_semantics else SetTranslator())
    return [translator.translate(root).to_sql() for root in root_list]
Translate a list of relational algebra trees into SQL statements. :param root_list: a list of tree roots :param use_bag_semantics: flag for using relational algebra bag semantics :return: a list of SQL statements
def toString(self):
    slist = self.toList()
    string = angle.slistStr(slist)
    return string if slist[0] == '-' else string[1:]
Returns time as string.
def login(self, username='0000', userid=0, password=None): if password and len(password) > 20: self.logger.error('password longer than 20 characters received') raise Exception('password longer than 20 characters, login failed') self.send(C1218LogonRequest(username, userid)) data = self.recv() if data != b'\x00': self.logger.warning('login failed, username and user id rejected') return False if password is not None: self.send(C1218SecurityRequest(password)) data = self.recv() if data != b'\x00': self.logger.warning('login failed, password rejected') return False self.logged_in = True return True
Log into the connected device. :param str username: the username to log in with (len(username) <= 10) :param int userid: the userid to log in with (0x0000 <= userid <= 0xffff) :param str password: password to log in with (len(password) <= 20) :rtype: bool
def update_from_dict(self, dct): if not dct: return all_props = self.__class__.CONFIG_PROPERTIES for key, value in six.iteritems(dct): attr_config = all_props.get(key) if attr_config: setattr(self, key, value) else: self.update_default_from_dict(key, value)
Updates this configuration object from a dictionary. See :meth:`ConfigurationObject.update` for details. :param dct: Values to update the ConfigurationObject with. :type dct: dict
def node_transmissions(node_id): exp = Experiment(session) direction = request_parameter(parameter="direction", default="incoming") status = request_parameter(parameter="status", default="all") for x in [direction, status]: if type(x) == Response: return x node = models.Node.query.get(node_id) if node is None: return error_response(error_type="/node/transmissions, node does not exist") transmissions = node.transmissions(direction=direction, status=status) try: if direction in ["incoming", "all"] and status in ["pending", "all"]: node.receive() session.commit() exp.transmission_get_request(node=node, transmissions=transmissions) session.commit() except Exception: return error_response( error_type="/node/transmissions GET server error", status=403, participant=node.participant, ) return success_response(transmissions=[t.__json__() for t in transmissions])
Get all the transmissions of a node. The node id must be specified in the url. You can also pass direction (to/from/all) or status (all/pending/received) as arguments.
def _parse_common_paths_file(project_path): common_paths_file = os.path.join(project_path, 'common_paths.xml') tree = etree.parse(common_paths_file) paths = {} path_vars = ['basedata', 'scheme', 'style', 'style', 'customization', 'markable'] for path_var in path_vars: specific_path = tree.find('//{}_path'.format(path_var)).text paths[path_var] = specific_path if specific_path else project_path paths['project_path'] = project_path annotations = {} for level in tree.iterfind('//level'): annotations[level.attrib['name']] = { 'schemefile': level.attrib['schemefile'], 'customization_file': level.attrib['customization_file'], 'file_extension': level.text[1:]} stylesheet = tree.find('//stylesheet').text return paths, annotations, stylesheet
Parses a common_paths.xml file and returns a dictionary of paths, a dictionary of annotation level descriptions and the filename of the style file. Parameters ---------- project_path : str path to the root directory of the MMAX project Returns ------- paths : dict maps from MMAX file types (str, e.g. 'basedata' or 'markable') to the relative path (str) containing files of this type annotations : dict maps from MMAX annotation level names (str, e.g. 'sentence', 'primmark') to a dict of features. The features are: 'schemefile' (maps to a file), 'customization_file' (ditto) and 'file_extension' (maps to the file name ending used for all annotations files of this level) stylefile : str name of the (default) style file used in this MMAX project
def acquire_auth_token_ticket(self, headers=None): logging.debug('[CAS] Acquiring Auth token ticket') url = self._get_auth_token_tickets_url() text = self._perform_post(url, headers=headers) auth_token_ticket = json.loads(text)['ticket'] logging.debug('[CAS] Acquire Auth token ticket: {}'.format( auth_token_ticket)) return auth_token_ticket
Acquire an auth token from the CAS server.
def start_centroid_distance(item_a, item_b, max_value): start_a = item_a.center_of_mass(item_a.times[0]) start_b = item_b.center_of_mass(item_b.times[0]) start_distance = np.sqrt((start_a[0] - start_b[0]) ** 2 + (start_a[1] - start_b[1]) ** 2) return np.minimum(start_distance, max_value) / float(max_value)
Distance between the centroids of the first step in each object. Args: item_a: STObject from the first set in TrackMatcher item_b: STObject from the second set in TrackMatcher max_value: Maximum distance value used as scaling value and upper constraint. Returns: Distance value between 0 and 1.
def invocation():
    cmdargs = [sys.executable] + sys.argv[:]
    invocation = " ".join(shlex.quote(s) for s in cmdargs)
    return invocation
reconstructs the invocation for this python program
def get_logs(self):
    folder = os.path.dirname(self.pcfg['log_file'])
    for path, dir, files in os.walk(folder):
        for file in files:
            if os.path.splitext(file)[-1] == '.log':
                yield os.path.join(path, file)
Yields log file paths from disk; only files with the .log extension are returned.
def __query_options(self): options = 0 if self.__tailable: options |= _QUERY_OPTIONS["tailable_cursor"] if self.__slave_okay or self.__pool._slave_okay: options |= _QUERY_OPTIONS["slave_okay"] if not self.__timeout: options |= _QUERY_OPTIONS["no_timeout"] return options
Get the query options string to use for this query.
def project_with_metadata(self, term_doc_mat, x_dim=0, y_dim=1): return self._project_category_corpus(self._get_category_metadata_corpus_and_replace_terms(term_doc_mat), x_dim, y_dim)
Returns a projection of the given term-document matrix.

:param term_doc_mat: a TermDocMatrix
:return: CategoryProjection
def run_subprocess(command, return_code=False, **kwargs): use_kwargs = dict(stderr=subprocess.PIPE, stdout=subprocess.PIPE) use_kwargs.update(kwargs) p = subprocess.Popen(command, **use_kwargs) output = p.communicate() output = ['' if s is None else s for s in output] output = [s.decode('utf-8') if isinstance(s, bytes) else s for s in output] output = tuple(output) if not return_code and p.returncode: print(output[0]) print(output[1]) err_fun = subprocess.CalledProcessError.__init__ if 'output' in inspect.getargspec(err_fun).args: raise subprocess.CalledProcessError(p.returncode, command, output) else: raise subprocess.CalledProcessError(p.returncode, command) if return_code: output = output + (p.returncode,) return output
Run command using subprocess.Popen Run command and wait for command to complete. If the return code was zero then return, otherwise raise CalledProcessError. By default, this will also add stdout= and stderr=subproces.PIPE to the call to Popen to suppress printing to the terminal. Parameters ---------- command : list of str Command to run as subprocess (see subprocess.Popen documentation). return_code : bool If True, the returncode will be returned, and no error checking will be performed (so this function should always return without error). **kwargs : dict Additional kwargs to pass to ``subprocess.Popen``. Returns ------- stdout : str Stdout returned by the process. stderr : str Stderr returned by the process. code : int The command exit code. Only returned if ``return_code`` is True.
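A brief usage sketch; the git commands are only illustrations:

# Capture output from a successful command.
stdout, stderr = run_subprocess(['git', 'status'])

# Ask for the exit code as well; no exception is raised on failure.
stdout, stderr, code = run_subprocess(['git', 'nonexistent-subcommand'],
                                      return_code=True)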
def init_db():
    import reana_db.models
    if not database_exists(engine.url):
        create_database(engine.url)
    Base.metadata.create_all(bind=engine)
Initialize the DB.
def _reset(cls): if os.getpid() != cls._cls_pid: cls._cls_pid = os.getpid() cls._cls_instances_by_target.clear() cls._cls_thread_by_target.clear()
If we have forked since the watch dictionaries were initialized, all that has is garbage, so clear it.
def update_cache(from_currency, to_currency): if check_update(from_currency, to_currency) is True: ccache[from_currency][to_currency]['value'] = convert_using_api(from_currency, to_currency) ccache[from_currency][to_currency]['last_update'] = time.time() cache.write(ccache)
Update the from_currency/to_currency pair in the cache by requesting fresh API info, if the last update for that pair was more than 30 minutes ago.
def createPopulationFile(inputFiles, labels, outputFileName): outputFile = None try: outputFile = open(outputFileName, 'w') except IOError: msg = "%(outputFileName)s: can't write file" raise ProgramError(msg) for i in xrange(len(inputFiles)): fileName = inputFiles[i] label = labels[i] try: with open(fileName, 'r') as inputFile: for line in inputFile: row = line.rstrip("\r\n").split(" ") famID = row[0] indID = row[1] print >>outputFile, "\t".join([famID, indID, label]) except IOError: msg = "%(fileName)s: no such file" % locals() raise ProgramError(msg) outputFile.close()
Creates a population file. :param inputFiles: the list of input files. :param labels: the list of labels (corresponding to the input files). :param outputFileName: the name of the output file. :type inputFiles: list :type labels: list :type outputFileName: str The ``inputFiles`` is in reality a list of ``tfam`` files composed of samples. For each of those ``tfam`` files, there is a label associated with it (representing the name of the population). The output file consists of one row per sample, with the following three columns: the family ID, the individual ID and the population of each sample.
def run_timed(self, **kwargs): for key in kwargs: setattr(self, key, kwargs[key]) self.command = self.COMMAND_RUN_TIMED
Run the motor for the amount of time specified in `time_sp` and then stop the motor using the action specified by `stop_action`.
def check_ellipsis(text): err = "typography.symbols.ellipsis" msg = u"'...' is an approximation, use the ellipsis symbol '…'." regex = "\.\.\." return existence_check(text, [regex], err, msg, max_errors=3, require_padding=False, offset=0)
Use an ellipsis instead of three dots.
def cli(out_fmt, input, output): _input = StringIO() for l in input: try: _input.write(str(l)) except TypeError: _input.write(bytes(l, 'utf-8')) _input = seria.load(_input) _out = (_input.dump(out_fmt)) output.write(_out)
Converts text.
def _node_name(self, concept): if ( self.grounding_threshold is not None and concept.db_refs[self.grounding_ontology] and (concept.db_refs[self.grounding_ontology][0][1] > self.grounding_threshold)): entry = concept.db_refs[self.grounding_ontology][0][0] return entry.split('/')[-1].replace('_', ' ').capitalize() else: return concept.name.capitalize()
Return a standardized name for a node given a Concept.
def extract_archive(filepath): if os.path.isdir(filepath): path = os.path.abspath(filepath) print("Archive already extracted. Viewing from {}...".format(path)) return path elif not zipfile.is_zipfile(filepath): raise TypeError("{} is not a zipfile".format(filepath)) archive_sha = SHA1_file( filepath=filepath, extra=to_bytes(slackviewer.__version__) ) extracted_path = os.path.join(SLACKVIEWER_TEMP_PATH, archive_sha) if os.path.exists(extracted_path): print("{} already exists".format(extracted_path)) else: with zipfile.ZipFile(filepath) as zip: print("{} extracting to {}...".format(filepath, extracted_path)) zip.extractall(path=extracted_path) print("{} extracted to {}".format(filepath, extracted_path)) create_archive_info(filepath, extracted_path, archive_sha) return extracted_path
Returns the path of the archive :param str filepath: Path to file to extract or read :return: path of the archive :rtype: str
def description(self): for e in self: if isinstance(e, Description): return e.value raise NoSuchAnnotation
Obtain the description associated with the element. Raises: :class:`NoSuchAnnotation` if there is no associated description.
def send_audio_packet(self, data, *, encode=True): self.checked_add('sequence', 1, 65535) if encode: encoded_data = self.encoder.encode(data, self.encoder.SAMPLES_PER_FRAME) else: encoded_data = data packet = self._get_voice_packet(encoded_data) try: self.socket.sendto(packet, (self.endpoint_ip, self.voice_port)) except BlockingIOError: log.warning('A packet has been dropped (seq: %s, timestamp: %s)', self.sequence, self.timestamp) self.checked_add('timestamp', self.encoder.SAMPLES_PER_FRAME, 4294967295)
Sends an audio packet composed of the data. You must be connected to play audio. Parameters ---------- data: bytes The :term:`py:bytes-like object` denoting PCM or Opus voice data. encode: bool Indicates if ``data`` should be encoded into Opus. Raises ------- ClientException You are not connected. OpusError Encoding the data failed.
def infer_location( self, location_query, max_distance, google_key, foursquare_client_id, foursquare_client_secret, limit ): self.location_from = infer_location( self.points[0], location_query, max_distance, google_key, foursquare_client_id, foursquare_client_secret, limit ) self.location_to = infer_location( self.points[-1], location_query, max_distance, google_key, foursquare_client_id, foursquare_client_secret, limit ) return self
In-place location inferring See infer_location function Args: Returns: :obj:`Segment`: self
def drop_primary_key(self, table):
    if self.get_primary_key(table):
        self.execute('ALTER TABLE {0} DROP PRIMARY KEY'.format(wrap(table)))
Drop a Primary Key constraint for a specific table.
def exit_if_missing_graphviz(self):
    (out, err) = utils.capture_shell("which dot")

    if "dot" not in out:
        ui.error(c.MESSAGES["dot_missing"])
Detect the presence of the dot utility to make a png graph.
def execute_command(self, args, parent_environ=None, **subprocess_kwargs): if parent_environ in (None, os.environ): target_environ = {} else: target_environ = parent_environ.copy() interpreter = Python(target_environ=target_environ) executor = self._create_executor(interpreter, parent_environ) self._execute(executor) return interpreter.subprocess(args, **subprocess_kwargs)
Run a command within a resolved context. This applies the context to a python environ dict, then runs a subprocess in that namespace. This is not a fully configured subshell - shell-specific commands such as aliases will not be applied. To execute a command within a subshell instead, use execute_shell(). Warning: This runs a command in a configured environ dict only, not in a true shell. To do that, call `execute_shell` using the `command` keyword argument. Args: args: Command arguments, can be a string. parent_environ: Environment to interpret the context within, defaults to os.environ if None. subprocess_kwargs: Args to pass to subprocess.Popen. Returns: A subprocess.Popen object. Note: This does not alter the current python session.
def listen(self, port: int, address: str = "") -> None:
    sockets = bind_sockets(port, address=address)
    self.add_sockets(sockets)
Starts accepting connections on the given port. This method may be called more than once to listen on multiple ports. `listen` takes effect immediately; it is not necessary to call `TCPServer.start` afterwards. It is, however, necessary to start the `.IOLoop`.
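A minimal usage sketch, assuming this is `tornado.tcpserver.TCPServer.listen` and that `EchoServer` is a hypothetical `TCPServer` subclass implementing `handle_stream`:

from tornado.ioloop import IOLoop

server = EchoServer()      # hypothetical TCPServer subclass
server.listen(8888)        # takes effect immediately
server.listen(8889, address="127.0.0.1")
IOLoop.current().start()   # the IOLoop must still be started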
def Containers(vent=True, running=True, exclude_labels=None): containers = [] try: d_client = docker.from_env() if vent: c = d_client.containers.list(all=not running, filters={'label': 'vent'}) else: c = d_client.containers.list(all=not running) for container in c: include = True if exclude_labels: for label in exclude_labels: if 'vent.groups' in container.labels and label in container.labels['vent.groups']: include = False if include: containers.append((container.name, container.status)) except Exception as e: logger.error('Docker problem ' + str(e)) return containers
Get containers that are created, by default limit to vent containers that are running
def _read_proto_resolve(self, length, ptype): if ptype == '0800': return ipaddress.ip_address(self._read_fileng(4)) elif ptype == '86dd': return ipaddress.ip_address(self._read_fileng(16)) else: return self._read_fileng(length)
Resolve IP address according to protocol. Positional arguments: * length -- int, protocol address length * ptype -- int, protocol type Returns: * str -- IP address
def download_setuptools(version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir, delay=15): to_dir = os.path.abspath(to_dir) try: from urllib.request import urlopen except ImportError: from urllib2 import urlopen tgz_name = "distribute-%s.tar.gz" % version url = download_base + tgz_name saveto = os.path.join(to_dir, tgz_name) src = dst = None if not os.path.exists(saveto): try: log.warn("Downloading %s", url) src = urlopen(url) data = src.read() dst = open(saveto, "wb") dst.write(data) finally: if src: src.close() if dst: dst.close() return os.path.realpath(saveto)
Download distribute from a specified location and return its filename `version` should be a valid distribute version number that is available as an egg for download under the `download_base` URL (which should end with a '/'). `to_dir` is the directory where the egg will be downloaded. `delay` is the number of seconds to pause before an actual download attempt.
def get(self, key): if key not in self._keystore: return None rec = self._keystore[key] if rec.is_expired: self.delete(key) return None return rec.value
Retrieves a previously stored key from the storage.

:return: the stored value, or None if the key is missing or expired
def _create_alignment_button(self): iconnames = ["AlignTop", "AlignCenter", "AlignBottom"] bmplist = [icons[iconname] for iconname in iconnames] self.alignment_tb = _widgets.BitmapToggleButton(self, bmplist) self.alignment_tb.SetToolTipString(_(u"Alignment")) self.Bind(wx.EVT_BUTTON, self.OnAlignment, self.alignment_tb) self.AddControl(self.alignment_tb)
Creates vertical alignment button
def delete(self, domain, type_name, search_command): return self._request(domain, type_name, search_command, 'DELETE', None)
Delete entry in ThreatConnect Data Store Args: domain (string): One of 'local', 'organization', or 'system'. type_name (string): This is a free form index type name. The ThreatConnect API will use this resource verbatim. search_command (string): Search command to pass to ES.
def create_repo(url, vcs, **kwargs):
    if vcs == 'git':
        return GitRepo(url, **kwargs)
    elif vcs == 'hg':
        return MercurialRepo(url, **kwargs)
    elif vcs == 'svn':
        return SubversionRepo(url, **kwargs)
    else:
        raise InvalidVCS('VCS %s is not a valid VCS' % vcs)
Return an object representation of a VCS repository.

:returns: instance of a repository object
:rtype: :class:`libvcs.svn.SubversionRepo`, :class:`libvcs.git.GitRepo` or
    :class:`libvcs.hg.MercurialRepo`.

Usage Example::

    >>> from libvcs.shortcuts import create_repo
    >>> r = create_repo(
    ...     url='https://www.github.com/you/myrepo',
    ...     vcs='git',
    ...     repo_dir='/tmp/myrepo')
    >>> r.update_repo()
    |myrepo| (git) Repo directory for myrepo (git) does not exist @ /tmp/myrepo
    |myrepo| (git) Cloning.
    |myrepo| (git) git clone https://www.github.com/tony/myrepo /tmp/myrepo
    Cloning into '/tmp/myrepo'...
    Checking connectivity... done.
    |myrepo| (git) git fetch
    |myrepo| (git) git pull
    Already up-to-date.
def getIndex(reference): if reference: reffas = reference else: parent_directory = path.dirname(path.abspath(path.dirname(__file__))) reffas = path.join(parent_directory, "reference/DNA_CS.fasta") if not path.isfile(reffas): logging.error("Could not find reference fasta for lambda genome.") sys.exit("Could not find reference fasta for lambda genome.") aligner = mp.Aligner(reffas, preset="map-ont") if not aligner: logging.error("Failed to load/build index") raise Exception("ERROR: failed to load/build index") return aligner
Find the reference fasta using the location of the script file, create the index and test whether it was built successfully.
def convert_notebook(self, name): exporter = nbconvert.exporters.python.PythonExporter() relative_path = self.convert_path(name) file_path = self.get_path("%s.ipynb"%relative_path) code = exporter.from_filename(file_path)[0] self.write_code(name, code) self.clean_code(name, [])
Converts a notebook into a python file.
def handle_input(self, event): self.update_timeval() self.events = [] code = self._get_event_key_code(event) if code in self.codes: new_code = self.codes[code] else: new_code = 0 event_type = self._get_event_type(event) value = self._get_key_value(event, event_type) scan_event, key_event = self.emulate_press( new_code, code, value, self.timeval) self.events.append(scan_event) self.events.append(key_event) self.events.append(self.sync_marker(self.timeval)) self.write_to_pipe(self.events)
Process the keyboard input.
def to_html(self, show_mean=None, sortable=None, colorize=True, *args, **kwargs): if show_mean is None: show_mean = self.show_mean if sortable is None: sortable = self.sortable df = self.copy() if show_mean: df.insert(0, 'Mean', None) df.loc[:, 'Mean'] = ['%.3f' % self[m].mean() for m in self.models] html = df.to_html(*args, **kwargs) html, table_id = self.annotate(df, html, show_mean, colorize) if sortable: self.dynamify(table_id) return html
Extend Pandas built in `to_html` method for rendering a DataFrame and use it to render a ScoreMatrix.
def three_digit(number):
    number = str(number)
    if len(number) == 1:
        return u'00%s' % number
    elif len(number) == 2:
        return u'0%s' % number
    else:
        return number
Add 0s to inputs that their length is less than 3. :param number: The number to convert :type number: int :returns: String :example: >>> three_digit(1) '001'
def disconnect(self):
    all_conns = chain(
        self._available_connections.values(),
        self._in_use_connections.values(),
    )

    for node_connections in all_conns:
        for connection in node_connections:
            connection.disconnect()
Disconnect all connections in the pool, both available and in use.
def nonKeyVisibleCols(self):
    'List of unhidden non-key columns.'
    return [c for c in self.columns if not c.hidden and c not in self.keyCols]
All columns which are not keys: the list of unhidden non-key columns.
def calculate_start_time(df): if "time" in df: df["time_arr"] = pd.Series(df["time"], dtype='datetime64[s]') elif "timestamp" in df: df["time_arr"] = pd.Series(df["timestamp"], dtype="datetime64[ns]") else: return df if "dataset" in df: for dset in df["dataset"].unique(): time_zero = df.loc[df["dataset"] == dset, "time_arr"].min() df.loc[df["dataset"] == dset, "start_time"] = \ df.loc[df["dataset"] == dset, "time_arr"] - time_zero else: df["start_time"] = df["time_arr"] - df["time_arr"].min() return df.drop(["time", "timestamp", "time_arr"], axis=1, errors="ignore")
Calculate the start_time per read. Time data is either a "time" (in seconds, derived from summary files) or a "timestamp" (in UTC, derived from fastq_rich format) and has to be converted appropriately into a datetime column, time_arr. In both cases the time_zero is the minimal value of time_arr, which is then subtracted from all other times. In the case of method=track (and dataset is a column in the df), this subtraction is done per dataset.
def rename_variables(expression: Expression, renaming: Dict[str, str]) -> Expression: if isinstance(expression, Operation): if hasattr(expression, 'variable_name'): variable_name = renaming.get(expression.variable_name, expression.variable_name) return create_operation_expression( expression, [rename_variables(o, renaming) for o in op_iter(expression)], variable_name=variable_name ) operands = [rename_variables(o, renaming) for o in op_iter(expression)] return create_operation_expression(expression, operands) elif isinstance(expression, Expression): expression = expression.__copy__() expression.variable_name = renaming.get(expression.variable_name, expression.variable_name) return expression
Rename the variables in the expression according to the given dictionary. Args: expression: The expression in which the variables are renamed. renaming: The renaming dictionary. Maps old variable names to new ones. Variable names not occuring in the dictionary are left unchanged. Returns: The expression with renamed variables.
def merge(args): p = OptionParser(merge.__doc__) p.set_outdir(outdir="outdir") opts, args = p.parse_args(args) if len(args) < 1: sys.exit(not p.print_help()) folders = args outdir = opts.outdir mkdir(outdir) files = flatten(glob("{0}/*.*.fastq".format(x)) for x in folders) files = list(files) key = lambda x: op.basename(x).split(".")[0] files.sort(key=key) for id, fns in groupby(files, key=key): fns = list(fns) outfile = op.join(outdir, "{0}.fastq".format(id)) FileMerger(fns, outfile=outfile).merge(checkexists=True)
%prog merge folder1 ... Consolidate split contents in the folders. The folders can be generated by the split() process and several samples may be in separate fastq files. This program merges them.
def _stop_ubridge(self): if self._ubridge_hypervisor and self._ubridge_hypervisor.is_running(): log.info("Stopping uBridge hypervisor {}:{}".format(self._ubridge_hypervisor.host, self._ubridge_hypervisor.port)) yield from self._ubridge_hypervisor.stop() self._ubridge_hypervisor = None
Stops uBridge.
def is_deb_package_installed(pkg): with settings(hide('warnings', 'running', 'stdout', 'stderr'), warn_only=True, capture=True): result = sudo('dpkg-query -l "%s" | grep -q ^.i' % pkg) return not bool(result.return_code)
checks if a particular deb package is installed
def numbering_part(self): try: return self.part_related_by(RT.NUMBERING) except KeyError: numbering_part = NumberingPart.new() self.relate_to(numbering_part, RT.NUMBERING) return numbering_part
A |NumberingPart| object providing access to the numbering definitions for this document. Creates an empty numbering part if one is not present.
def solidangle_errorprop(twotheta, dtwotheta, sampletodetectordistance, dsampletodetectordistance, pixelsize=None): SAC = solidangle(twotheta, sampletodetectordistance, pixelsize) if pixelsize is None: pixelsize = 1 return (SAC, (sampletodetectordistance * (4 * dsampletodetectordistance ** 2 * np.cos(twotheta) ** 2 + 9 * dtwotheta ** 2 * sampletodetectordistance ** 2 * np.sin(twotheta) ** 2) ** 0.5 / np.cos(twotheta) ** 4) / pixelsize ** 2)
Solid-angle correction for two-dimensional SAS images with error propagation Inputs: twotheta: matrix of two-theta values dtwotheta: matrix of absolute error of two-theta values sampletodetectordistance: sample-to-detector distance dsampletodetectordistance: absolute error of sample-to-detector distance Outputs two matrices of the same shape as twotheta. The scattering intensity matrix should be multiplied by the first one. The second one is the propagated error of the first one.
async def process_name(message: types.Message, state: FSMContext):
    async with state.proxy() as data:
        data['name'] = message.text

    await Form.next()
    await message.reply("How old are you?")
Process user name
def _update(self, data): self.bullet = data['bullet'] self.level = data['level'] self.text = WikiText(data['text_raw'], data['text_rendered'])
Update the line using the blob of json-parsed data directly from the API.
def load_probe(name): if op.exists(name): path = name else: curdir = op.realpath(op.dirname(__file__)) path = op.join(curdir, 'probes/{}.prb'.format(name)) if not op.exists(path): raise IOError("The probe `{}` cannot be found.".format(name)) return MEA(probe=_read_python(path))
Load one of the built-in probes.
def release(self, force=False): D = self.__class__ collection = self.get_collection() identity = self.Lock() query = D.id == self if not force: query &= D.lock.instance == identity.instance previous = collection.find_one_and_update(query, {'$unset': {~D.lock: True}}, {~D.lock: True}) if previous is None: lock = getattr(self.find_one(self, projection={~D.lock: True}), 'lock', None) raise self.Locked("Unable to release lock.", lock) lock = self.Lock.from_mongo(previous[~D.lock]) if lock and lock.expires <= identity.time: lock.expired(self) identity.released(self, force)
Release an exclusive lock on this integration task. Unless forcing, if we are not the current owners of the lock a Locked exception will be raised.