def update_view_bounds(self):
    self.view_bounds.left = self.pan.X - self.world_center.X
    self.view_bounds.top = self.pan.Y - self.world_center.Y
    self.view_bounds.width = self.world_center.X * 2
    self.view_bounds.height = self.world_center.Y * 2
Update the camera's view bounds.
def tabledefinehypercolumn(tabdesc, name, ndim, datacolumns,
                           coordcolumns=False, idcolumns=False):
    rec = {'HCndim': ndim, 'HCdatanames': datacolumns}
    if not isinstance(coordcolumns, bool):
        rec['HCcoordnames'] = coordcolumns
    if not isinstance(idcolumns, bool):
        rec['HCidnames'] = idcolumns
    if '_define_hypercolumn_' not in tabdesc:
        tabdesc['_define_hypercolumn_'] = {}
    tabdesc['_define_hypercolumn_'][name] = rec
Add a hypercolumn to a table description. It defines a hypercolumn and adds it to the given table description. A hypercolumn is an entity used by the Tiled Storage Managers (TSM). It defines which columns have to be stored together with a TSM. It should only be used by expert users who want to use a TSM to its full extent. For a basic TSM a hypercolumn definition is not needed. tabdesc A table description (result from :func:`maketabdesc`). name Name of hypercolumn ndim Dimensionality of hypercolumn; normally 1 more than the dimensionality of the arrays in the data columns to be stored with the TSM datacolumns Data columns to be stored with TSM coordcolumns Optional coordinate columns to be stored with TSM idcolumns Optional id columns to be stored with TSM For example:: scd1 = makescacoldesc("col2", "aa") scd2 = makescacoldesc("col1", 1, "IncrementalStMan") scd3 = makescacoldesc("colrec1", {}) acd1 = makearrcoldesc("arr1", 1, 0, [2,3,4]) acd2 = makearrcoldesc("arr2", as_complex(0)) td = maketabdesc([scd1, scd2, scd3, acd1, acd2]) tabledefinehypercolumn(td, "TiledArray", 4, ["arr1"]) tab = table("mytable", tabledesc=td, nrow=100) | This creates a table description `td` from five column descriptions and then creates a 100-row table called mytable from the table description. | The columns contain respectively strings, integer scalars, records, 3D integer arrays with fixed shape [2,3,4], and complex arrays with variable shape. | The first array is stored with the Tiled Storage Manager (in this case the TiledColumnStMan).
def trim_and_pad_all_features(features, length): return {k: _trim_and_pad(v, length) for k, v in features.items()}
Trim and pad first dimension of all features to size length.
def best_prefix(bytes, system=NIST):
    if isinstance(bytes, Bitmath):
        value = bytes.bytes
    else:
        value = bytes
    return Byte(value).best_prefix(system=system)
Return a bitmath instance representing the best human-readable representation of the number of bytes given by ``bytes``. In addition to a numeric type, the ``bytes`` parameter may also be a bitmath type. Optionally select a preferred unit system by specifying the ``system`` keyword. Choices for ``system`` are ``bitmath.NIST`` (default) and ``bitmath.SI``. Basically a shortcut for: >>> import bitmath >>> b = bitmath.Byte(12345) >>> best = b.best_prefix() Or: >>> import bitmath >>> best = (bitmath.KiB(12345) * 4201).best_prefix()
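A brief illustrative sketch of calling the module-level shortcut directly (the returned unit depends on the chosen prefix system, so no output is shown here):

>>> import bitmath
>>> best = bitmath.best_prefix(12345)                       # NIST prefixes by default
>>> best_si = bitmath.best_prefix(12345, system=bitmath.SI)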
def _fill_and_verify_parameter_shape(x, n, parameter_label):
    try:
        return _fill_shape(x, n)
    except TypeError as e:
        raise base.IncompatibleShapeError(
            "Invalid " + parameter_label + " shape: {}".format(e))
Expands `x` if necessary into an `n`-D kernel shape and reports errors.
def send_commands(self, commands, timeout=1.0, max_retries=1,
                  eor=('\n', '\n- ')):
    if not isinstance(eor, list):
        eor = [eor] * len(commands)
    responses = []
    for i, command in enumerate(commands):
        rsp = self.send_command(command, timeout=timeout,
                                max_retries=max_retries, eor=eor[i])
        responses.append(rsp)
        if self.command_error(rsp):
            break
        time.sleep(0.25)
    return responses
Send a sequence of commands to the drive and collect output. Takes a sequence of many commands and executes them one by one until either all are executed or one runs out of retries (`max_retries`). Retries are optionally performed if a command's response indicates that there was an error. Remaining commands are not executed. The processed output of the final execution (last try or retry) of each command that was actually executed is returned. This function basically feeds commands one by one to ``send_command`` and collates the outputs. Parameters ---------- commands : iterable of str Iterable of commands to send to the drive. Each command must be an ``str``. timeout : float or None, optional Optional timeout in seconds to use when reading the response. A negative value or ``None`` indicates that an infinite timeout should be used. max_retries : int, optional Maximum number of retries to do per command in the case of errors. eor : str or iterable of str, optional End Of Response. An EOR is either a ``str`` or an iterable of ``str`` that denote the possible endings of a response. 'eor' can be a single EOR, in which case it is used for all commands, or it can be an iterable of EOR to use for each individual command. For most commands, it should be ``('\\n', '\\n- ')``, but for running a program, it should be ``'*END\\n'``. The default is ``('\\n', '\\n- ')``. Returns ------- outputs : list of lists ``list`` composed of the processed responses of each command in the order that they were done up to and including the last command executed. See ``send_command`` for the format of processed responses. See Also -------- send_command : Send a single command. Examples -------- A sequence of commands to energize the motor, move it a bit away from the starting position, and then do 4 forward/reverse cycles, and de-energize the motor. **DO NOT** try these specific movement distances without checking that the motion won't damage something (very motor and application specific). >>> from GeminiMotorDrive.drivers import ASCII_RS232 >>> ra = ASCII_RS232('/dev/ttyS1') >>> ra.send_commands(['DRIVE1', 'D-10000', 'GO'] ... + ['D-10000','GO','D10000','GO']*4 ... + [ 'DRIVE0']) [['DRIVE1', 'DRIVE1\\r', 'DRIVE1', None, []], ['D-10000', 'D-10000\\r', 'D-10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D-10000', 'D-10000\\r', 'D-10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D10000', 'D10000\\r', 'D10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D-10000', 'D-10000\\r', 'D-10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D10000', 'D10000\\r', 'D10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D-10000', 'D-10000\\r', 'D-10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D10000', 'D10000\\r', 'D10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D-10000', 'D-10000\\r', 'D-10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['D10000', 'D10000\\r', 'D10000', None, []], ['GO', 'GO\\r', 'GO', None, []], ['DRIVE0', 'DRIVE0\\r', 'DRIVE0', None, []]]
def variance(x):
    if x.ndim > 1 and len(x[0]) > 1:
        return np.var(x, axis=1)
    return np.var(x)
Return a numpy array of column variance Parameters ---------- x : ndarray A numpy array instance Returns ------- ndarray A 1 x n numpy array instance of column variance Examples -------- >>> a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) >>> np.testing.assert_array_almost_equal( ... variance(a), ... [0.666666, 0.666666, 0.666666]) >>> a = np.array([1, 2, 3]) >>> np.testing.assert_array_almost_equal( ... variance(a), ... 0.666666)
def get_etag(storage, path, prefixed_path):
    cache_key = get_cache_key(path)
    etag = cache.get(cache_key, False)
    if etag is False:
        etag = get_remote_etag(storage, prefixed_path)
        cache.set(cache_key, etag)
    return etag
Get etag of path from cache or S3 - in that order.
def date_to_long_form_string(dt, locale_='en_US.utf8'):
    if locale_:
        old_locale = locale.getlocale()
        locale.setlocale(locale.LC_ALL, locale_)
    v = dt.strftime("%A %B %d %Y")
    if locale_:
        locale.setlocale(locale.LC_ALL, old_locale)
    return v
dt should be a datetime.date object.
def write(self):
    with open(self.filepath, 'wb') as outfile:
        outfile.write(
            self.fernet.encrypt(
                yaml.dump(self.data, encoding='utf-8')))
Encrypts and writes the current state back onto the filesystem
def xpath(self, xpath, **kwargs):
    result = self.adapter.xpath_on_node(self.impl_node, xpath, **kwargs)
    if isinstance(result, (list, tuple)):
        return [self._maybe_wrap_node(r) for r in result]
    else:
        return self._maybe_wrap_node(result)
Perform an XPath query on the current node. :param string xpath: XPath query. :param dict kwargs: Optional keyword arguments that are passed through to the underlying XML library implementation. :return: results of the query as a list of :class:`Node` objects, or a list of base type objects if the XPath query does not reference node objects.
def get_item(env, name, default=None):
    for key in name.split('.'):
        if isinstance(env, dict) and key in env:
            env = env[key]
        elif isinstance(env, types.ModuleType) and key in env.__dict__:
            env = env.__dict__[key]
        else:
            return default
    return env
Get an item from a dictionary, handling nested lookups with dotted notation. Args: env: the environment (dictionary) to use to look up the name. name: the name to look up, in dotted notation. default: the value to return if the name is not found. Returns: The result of looking up the name, if found; else the default.
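A minimal usage sketch (hypothetical data) showing how the dotted name walks nested dictionaries:

>>> env = {'db': {'host': 'localhost', 'port': 5432}}
>>> get_item(env, 'db.port')
5432
>>> get_item(env, 'db.missing', default='n/a')
'n/a'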
def maxTreeDepthDivide(rootValue, currentDepth=0, parallelLevel=2):
    thisRoot = shared.getConst('myTree').search(rootValue)
    if currentDepth >= parallelLevel:
        return thisRoot.maxDepth(currentDepth)
    else:
        if not any([thisRoot.left, thisRoot.right]):
            return currentDepth
        if not all([thisRoot.left, thisRoot.right]):
            return thisRoot.maxDepth(currentDepth)
        return max(
            futures.map(
                maxTreeDepthDivide,
                [
                    thisRoot.left.payload,
                    thisRoot.right.payload,
                ],
                cycle([currentDepth + 1]),
                cycle([parallelLevel]),
            )
        )
Finds the tree node that represents rootValue and computes the max depth of this tree branch. This function will emit new futures until currentDepth reaches parallelLevel.
def header(self):
    return "place %s\n" % self.location.city + \
        "latitude %.2f\n" % self.location.latitude + \
        "longitude %.2f\n" % -self.location.longitude + \
        "time_zone %d\n" % (-self.location.time_zone * 15) + \
        "site_elevation %.1f\n" % self.location.elevation + \
        "weather_data_file_units 1\n"
Wea header.
def active_pt_window(self):
    " The active prompt_toolkit layout Window. "
    if self.active_tab:
        w = self.active_tab.active_window
        if w:
            return w.pt_window
The active prompt_toolkit layout Window.
def get_user_details(self, response):
    account = response['account']
    metadata = json.loads(account.get('json_metadata') or '{}')
    account['json_metadata'] = metadata
    return {
        'id': account['id'],
        'username': account['name'],
        'name': metadata.get("profile", {}).get('name', ''),
        'account': account,
    }
Return user details from the provider's account response.
def unmarshal(compoundSignature, data, offset=0, lendian=True):
    values = list()
    start_offset = offset
    for ct in genCompleteTypes(compoundSignature):
        tcode = ct[0]
        offset += len(pad[tcode](offset))
        nbytes, value = unmarshallers[tcode](ct, data, offset, lendian)
        offset += nbytes
        values.append(value)
    return offset - start_offset, values
Unmarshals DBus encoded data. @type compoundSignature: C{string} @param compoundSignature: DBus signature specifying the encoded value types @type data: C{string} @param data: Binary data @type offset: C{int} @param offset: Offset within data at which data for compoundSignature starts (used during recursion) @type lendian: C{bool} @param lendian: True if data is encoded in little-endian format @returns: (number_of_bytes_decoded, list_of_values)
def DeserializeTX(buffer):
    mstream = MemoryStream(buffer)
    reader = BinaryReader(mstream)
    tx = Transaction.DeserializeFrom(reader)
    return tx
Deserialize the stream into a Transaction object. Args: buffer (BytesIO): stream to deserialize the Transaction from. Returns: neo.Core.TX.Transaction:
def run_in_greenlet(callable):
    @wraps(callable)
    async def _(*args, **kwargs):
        green = greenlet(callable)
        result = green.switch(*args, **kwargs)
        while isawaitable(result):
            try:
                result = green.switch((await result))
            except Exception:
                exc_info = sys.exc_info()
                result = green.throw(*exc_info)
        return green.switch(result)

    return _
Decorator to run a ``callable`` on a new greenlet. A ``callable`` decorated with this decorator returns a coroutine
def CopyToDateTimeString(self):
    if self._timestamp is None:
        return None

    number_of_days, hours, minutes, seconds = self._GetTimeValues(
        int(self._timestamp))

    year, month, day_of_month = self._GetDateValuesWithEpoch(
        number_of_days, self._EPOCH)

    microseconds = int(
        (self._timestamp % 1) * definitions.MICROSECONDS_PER_SECOND)

    return '{0:04d}-{1:02d}-{2:02d} {3:02d}:{4:02d}:{5:02d}.{6:06d}'.format(
        year, month, day_of_month, hours, minutes, seconds, microseconds)
Copies the Cocoa timestamp to a date and time string. Returns: str: date and time value formatted as: YYYY-MM-DD hh:mm:ss.###### or None if the timestamp cannot be copied to a date and time string.
def http_methods(self, urls=None, **route_data): def decorator(class_definition): instance = class_definition if isinstance(class_definition, type): instance = class_definition() router = self.urls(urls if urls else "/{0}".format(instance.__class__.__name__.lower()), **route_data) for method in HTTP_METHODS: handler = getattr(instance, method.lower(), None) if handler: http_routes = getattr(handler, '_hug_http_routes', ()) if http_routes: for route in http_routes: http(**router.accept(method).where(**route).route)(handler) else: http(**router.accept(method).route)(handler) cli_routes = getattr(handler, '_hug_cli_routes', ()) if cli_routes: for route in cli_routes: cli(**self.where(**route).route)(handler) return class_definition return decorator
Creates routes from a class, where the class method names should line up to HTTP METHOD types
def participating_ecs(self):
    with self._mutex:
        if not self._participating_ecs:
            self._participating_ecs = [
                ExecutionContext(ec, self._obj.get_context_handle(ec))
                for ec in self._obj.get_participating_contexts()
            ]
        return self._participating_ecs
A list of the execution contexts this component is participating in.
def iterrowproxy(self, cls=RowProxy):
    row_proxy = None
    headers = None

    for row in self:
        if not headers:
            headers = row
            row_proxy = cls(headers)
            continue

        yield row_proxy.set_row(row)
Iterate over the resource as row proxy objects, which allow accessing columns as attributes. Like iterrows, but allows for setting a specific RowProxy class.
def close(self):
    if self.tabix_file and not self.tabix_file.closed:
        self.tabix_file.close()
    if self.stream:
        self.stream.close()
Close underlying stream
def piped_bamprep(data, region=None, out_file=None): data["region"] = region if not _need_prep(data): return [data] else: utils.safe_makedir(os.path.dirname(out_file)) if region[0] == "nochrom": prep_bam = shared.write_nochr_reads(data["work_bam"], out_file, data["config"]) elif region[0] == "noanalysis": prep_bam = shared.write_noanalysis_reads(data["work_bam"], region[1], out_file, data["config"]) else: if not utils.file_exists(out_file): with tx_tmpdir(data) as tmp_dir: _piped_bamprep_region(data, region, out_file, tmp_dir) prep_bam = out_file bam.index(prep_bam, data["config"]) data["work_bam"] = prep_bam return [data]
Perform full BAM preparation using pipes to avoid intermediate disk IO. Handles realignment of original BAMs.
def add_gate_option_group(parser): gate_group = parser.add_argument_group("Options for gating data.") gate_group.add_argument("--gate", nargs="+", type=str, metavar="IFO:CENTRALTIME:HALFDUR:TAPERDUR", help="Apply one or more gates to the data before " "filtering.") gate_group.add_argument("--gate-overwhitened", action="store_true", help="Overwhiten data first, then apply the " "gates specified in --gate. Overwhitening " "allows for sharper tapers to be used, " "since lines are not blurred.") gate_group.add_argument("--psd-gate", nargs="+", type=str, metavar="IFO:CENTRALTIME:HALFDUR:TAPERDUR", help="Apply one or more gates to the data used " "for computing the PSD. Gates are applied " "prior to FFT-ing the data for PSD " "estimation.") return gate_group
Adds the options needed to apply gates to data. Parameters ---------- parser : object ArgumentParser instance.
def write_ast(patched_ast_node):
    result = []
    for child in patched_ast_node.sorted_children:
        if isinstance(child, ast.AST):
            result.append(write_ast(child))
        else:
            result.append(child)
    return ''.join(result)
Extract source from a patched AST node with a `sorted_children` field. If the node was patched with sorted_children turned off, you can use the `node_region` function to obtain the code from the module source.
def create_business_rules(self, hosts, services, hostgroups, servicegroups,
                          macromodulations, timeperiods):
    for item in self:
        item.create_business_rules(hosts, services, hostgroups,
                                   servicegroups, macromodulations, timeperiods)
Loop on hosts or services and call SchedulingItem.create_business_rules :param hosts: hosts to link to :type hosts: alignak.objects.host.Hosts :param services: services to link to :type services: alignak.objects.service.Services :param hostgroups: hostgroups to link to :type hostgroups: alignak.objects.hostgroup.Hostgroups :param servicegroups: servicegroups to link to :type servicegroups: alignak.objects.servicegroup.Servicegroups :param macromodulations: macromodulations to link to :type macromodulations: alignak.objects.macromodulation.Macromodulations :param timeperiods: timeperiods to link to :type timeperiods: alignak.objects.timeperiod.Timeperiods :return: None
def start_discovery(self, service_uuids=[]): discovery_filter = {'Transport': 'le'} if service_uuids: discovery_filter['UUIDs'] = service_uuids try: self._adapter.SetDiscoveryFilter(discovery_filter) self._adapter.StartDiscovery() except dbus.exceptions.DBusException as e: if e.get_dbus_name() == 'org.bluez.Error.NotReady': raise errors.NotReady( "Bluetooth adapter not ready. " "Set `is_adapter_powered` to `True` or run 'echo \"power on\" | sudo bluetoothctl'.") if e.get_dbus_name() == 'org.bluez.Error.InProgress': pass else: raise _error_from_dbus_error(e)
Starts a discovery for BLE devices with given service UUIDs. :param service_uuids: Filters the search to only return devices with given UUIDs.
def build_pattern(body, features):
    line_patterns = apply_features(body, features)
    return reduce(lambda x, y: [i + j for i, j in zip(x, y)], line_patterns)
Converts body into a pattern i.e. a point in the features space. Applies features to the body lines and sums up the results. Elements of the pattern indicate how many times a certain feature occurred in the last lines of the body.
def print_logins(logins):
    table = formatting.Table(['Date', 'IP Address', 'Successful Login?'])
    for login in logins:
        table.add_row([login.get('createDate'), login.get('ipAddress'),
                       login.get('successFlag')])
    return table
Prints out the login history for a user
def is_constrained_reaction(model, rxn):
    lower_bound, upper_bound = helpers.find_bounds(model)
    if rxn.reversibility:
        return rxn.lower_bound > lower_bound or rxn.upper_bound < upper_bound
    else:
        return rxn.lower_bound > 0 or rxn.upper_bound < upper_bound
Return whether a reaction has fixed constraints.
def json_http_resp(handler):
    @wraps(handler)
    def wrapper(event, context):
        response = handler(event, context)
        try:
            body = json.dumps(response)
        except Exception as exception:
            return {'statusCode': 500, 'body': str(exception)}
        return {'statusCode': 200, 'body': body}

    return wrapper
Automatically serialize return value to the body of a successful HTTP response. Returns a 500 error if the response cannot be serialized. Usage:: >>> from lambda_decorators import json_http_resp >>> @json_http_resp ... def handler(event, context): ... return {'hello': 'world'} >>> handler({}, object()) {'statusCode': 200, 'body': '{"hello": "world"}'} In this example, the decorated handler returns: .. code:: python {'statusCode': 200, 'body': '{"hello": "world"}'}
def binds(val, **kwargs):
    if not isinstance(val, dict):
        if not isinstance(val, list):
            try:
                val = helpers.split(val)
            except AttributeError:
                raise SaltInvocationError(
                    '\'{0}\' is not a dictionary or list of bind '
                    'definitions'.format(val)
                )
    return val
On the CLI, these are passed as multiple instances of a given CLI option. In Salt, we accept these as a comma-delimited list but the API expects a Python list.
def stem(self, word):
    if self.stemmer:
        return unicode_to_ascii(self._stemmer.stem(word))
    else:
        return word
Perform stemming on an input word.
def _qvm_run(self, quil_program, classical_addresses, trials,
             measurement_noise, gate_noise, random_seed) -> np.ndarray:
    payload = qvm_run_payload(quil_program, classical_addresses, trials,
                              measurement_noise, gate_noise, random_seed)
    response = post_json(self.session, self.sync_endpoint + "/qvm", payload)

    ram = response.json()

    for k in ram.keys():
        ram[k] = np.array(ram[k])

    return ram
Run a Forest ``run`` job on a QVM. Users should use :py:func:`QVM.run` instead of calling this directly.
def delete(gandi, address, force):
    source, domain = address
    if not force:
        proceed = click.confirm('Are you sure to delete the domain '
                                'mail forward %s@%s ?' % (source, domain))
        if not proceed:
            return

    result = gandi.forward.delete(domain, source)
    return result
Delete a domain mail forward.
def embedding(x, vocab_size, dense_size, name=None, reuse=None, multiplier=1.0, symbol_dropout_rate=0.0, embedding_var=None, dtype=tf.float32): with tf.variable_scope( name, default_name="embedding", values=[x], reuse=reuse, dtype=dtype): if embedding_var is None: embedding_var = tf.get_variable("kernel", [vocab_size, dense_size]) if not tf.executing_eagerly(): embedding_var = convert_gradient_to_tensor(embedding_var) x = dropout_no_scaling(x, 1.0 - symbol_dropout_rate) emb_x = gather(embedding_var, x, dtype) if multiplier != 1.0: emb_x *= multiplier static_shape = emb_x.shape.as_list() if len(static_shape) < 5: return emb_x assert len(static_shape) == 5 return tf.squeeze(emb_x, 3)
Embed x of type int64 into dense vectors, reducing to max 4 dimensions.
def resolve(self, link_resource_type, resource_id, array=None):
    result = None
    if array is not None:
        container = array.items_mapped.get(link_resource_type)
        result = container.get(resource_id)
    if result is None:
        clz = utils.class_for_type(link_resource_type)
        result = self.fetch(clz).where({'sys.id': resource_id}).first()
    return result
Resolve a link to a CDA resource. Provided an `array` argument, attempt to retrieve the resource from the `mapped_items` section of that array (containing both included and regular resources), in case the resource cannot be found in the array (or if no `array` was provided) - attempt to fetch the resource from the API by issuing a network request. :param link_resource_type: (str) Resource type as str. :param resource_id: (str) Remote ID of the linked resource. :param array: (:class:`.Array`) Optional array resource. :return: :class:`.Resource` subclass, `None` if it cannot be retrieved.
def check_for_completion(self): job_result_obj = self.session.get(self.uri) job_status = job_result_obj['status'] if job_status == 'complete': self.session.delete(self.uri) op_status_code = job_result_obj['job-status-code'] if op_status_code in (200, 201): op_result_obj = job_result_obj.get('job-results', None) elif op_status_code == 204: op_result_obj = None else: error_result_obj = job_result_obj.get('job-results', None) if not error_result_obj: message = None elif 'message' in error_result_obj: message = error_result_obj['message'] elif 'error' in error_result_obj: message = error_result_obj['error'] else: message = None error_obj = { 'http-status': op_status_code, 'reason': job_result_obj['job-reason-code'], 'message': message, 'request-method': self.op_method, 'request-uri': self.op_uri, } raise HTTPError(error_obj) else: op_result_obj = None return job_status, op_result_obj
Check once for completion of the job and return completion status and result if it has completed. If the job completed in error, an :exc:`~zhmcclient.HTTPError` exception is raised. Returns: : A tuple (status, result) with: * status (:term:`string`): Completion status of the job, as returned in the ``status`` field of the response body of the "Query Job Status" HMC operation, as follows: * ``"complete"``: Job completed (successfully). * any other value: Job is not yet complete. * result (:term:`json object` or `None`): `None` for incomplete jobs. For completed jobs, the result of the original asynchronous operation that was performed by the job, from the ``job-results`` field of the response body of the "Query Job Status" HMC operation. That result is a :term:`json object` as described for the asynchronous operation, or `None` if the operation has no result. Raises: :exc:`~zhmcclient.HTTPError`: The job completed in error, or the job status cannot be retrieved, or the job cannot be deleted. :exc:`~zhmcclient.ParseError` :exc:`~zhmcclient.ClientAuthError` :exc:`~zhmcclient.ServerAuthError` :exc:`~zhmcclient.ConnectionError`
def _ssh_client(self):
    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()
    ssh.set_missing_host_key_policy(paramiko.RejectPolicy())
    return ssh
Gets an SSH client to connect with.
def get_app(opts):
    apiopts = opts.get(__name__.rsplit('.', 2)[-2], {})

    cherrypy.config['saltopts'] = opts
    cherrypy.config['apiopts'] = apiopts

    root = API()
    cpyopts = root.get_conf()

    return root, apiopts, cpyopts
Returns a WSGI app and a configuration dictionary
def list_extensions(request):
    blacklist = set(getattr(settings,
                            'OPENSTACK_NOVA_EXTENSIONS_BLACKLIST', []))
    nova_api = _nova.novaclient(request)
    return tuple(
        extension for extension in
        nova_list_extensions.ListExtManager(nova_api).show_all()
        if extension.name not in blacklist
    )
List all nova extensions, except the ones in the blacklist.
def response(code, **kwargs):
    _ret_json = jsonify(kwargs)
    resp = make_response(_ret_json, code)
    resp.headers["Content-Type"] = "application/json; charset=utf-8"
    return resp
Generic HTTP JSON response method :param code: HTTP code (int) :param kwargs: Data structure for response (dict) :return: HTTP Json response
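For illustration, a hedged sketch of using it inside a Flask view (the route name and payload are made up)::

    from flask import Flask
    app = Flask(__name__)

    @app.route('/ping')
    def ping():
        # Returns {"status": "ok"} with HTTP 200 and a JSON content type
        return response(200, status='ok')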
def trace2array(self, sl):
    chain = []
    for stochastic in self.stochastics:
        tr = stochastic.trace.gettrace(slicing=sl)
        if tr is None:
            raise AttributeError
        chain.append(tr)
    return np.hstack(chain)
Return an array with the trace of all stochastics, sliced by sl.
def _update_element(name, element_type, data, server=None): name = quote(name, safe='') if 'properties' in data: properties = [] for key, value in data['properties'].items(): properties.append({'name': key, 'value': value}) _api_post('{0}/{1}/property'.format(element_type, name), properties, server) del data['properties'] if not data: return unquote(name) update_data = _get_element(name, element_type, server, with_properties=False) if update_data: update_data.update(data) else: __context__['retcode'] = salt.defaults.exitcodes.SALT_BUILD_FAIL raise CommandExecutionError('Cannot update {0}'.format(name)) _api_post('{0}/{1}'.format(element_type, name), _clean_data(update_data), server) return unquote(name)
Update an element, including its properties
def vertical_percent(plot, percent=0.1):
    plot_bottom, plot_top = plot.get_ylim()
    return percent * (plot_top - plot_bottom)
Using the size of the y axis, return a fraction of that size.
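A small illustrative call against a Matplotlib axes (the limits are arbitrary):

>>> import matplotlib.pyplot as plt
>>> fig, ax = plt.subplots()
>>> _ = ax.set_ylim(0, 50)
>>> vertical_percent(ax, 0.1)
5.0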
def source_components(self): raw_sccs = self._component_graph() vertex_to_root = self.vertex_dict() non_sources = self.vertex_set() for scc in raw_sccs: root = scc[0][1] for item_type, w in scc: if item_type == 'VERTEX': vertex_to_root[w] = root elif item_type == 'EDGE': non_sources.add(vertex_to_root[w]) sccs = [] for raw_scc in raw_sccs: root = raw_scc[0][1] if root not in non_sources: sccs.append([v for vtype, v in raw_scc if vtype == 'VERTEX']) return [self.full_subgraph(scc) for scc in sccs]
Return the strongly connected components not reachable from any other component. Any component in the graph is reachable from one of these.
def noise_get_turbulence(
    n: tcod.noise.Noise,
    f: Sequence[float],
    oc: float,
    typ: int = NOISE_DEFAULT,
) -> float:
    return float(
        lib.TCOD_noise_get_turbulence_ex(
            n.noise_c, ffi.new("float[4]", f), oc, typ
        )
    )
Return the turbulence noise sampled from the ``f`` coordinate. Args: n (Noise): A Noise instance. f (Sequence[float]): The point to sample the noise from. oc (float): The level of detail (number of octaves). Should be more than 1. typ (int): The noise algorithm to use. Returns: float: The sampled noise value.
def create_aql_text(*args):
    aql_query_text = ""
    for arg in args:
        if isinstance(arg, dict):
            arg = "({})".format(json.dumps(arg))
        elif isinstance(arg, list):
            arg = "({})".format(json.dumps(arg)).replace("[", "").replace("]", "")
        aql_query_text += arg
    return aql_query_text
Create an AQL query from string, list, or dict arguments
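A short sketch of how mixed arguments are stitched together (the repository name and fields are made up):

>>> create_aql_text('items.find', {"repo": "libs-release"}, '.include', ["name", "path"])
'items.find({"repo": "libs-release"}).include("name", "path")'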
def _get_qvm_with_topology(name: str, topology: nx.Graph,
                           noisy: bool = False,
                           requires_executable: bool = True,
                           connection: ForestConnection = None,
                           qvm_type: str = 'qvm') -> QuantumComputer:
    device = NxDevice(topology=topology)
    if noisy:
        noise_model = decoherence_noise_with_asymmetric_ro(
            gates=gates_in_isa(device.get_isa()))
    else:
        noise_model = None
    return _get_qvm_qc(name=name, qvm_type=qvm_type, connection=connection,
                       device=device, noise_model=noise_model,
                       requires_executable=requires_executable)
Construct a QVM with the provided topology. :param name: A name for your quantum computer. This field does not affect behavior of the constructed QuantumComputer. :param topology: A graph representing the desired qubit connectivity. :param noisy: Whether to include a generic noise model. If you want more control over the noise model, please construct your own :py:class:`NoiseModel` and use :py:func:`_get_qvm_qc` instead of this function. :param requires_executable: Whether this QVM will refuse to run a :py:class:`Program` and only accept the result of :py:func:`compiler.native_quil_to_executable`. Setting this to True better emulates the behavior of a QPU. :param connection: An optional :py:class:`ForestConnection` object. If not specified, the default values for URL endpoints will be used. :param qvm_type: The type of QVM. Either 'qvm' or 'pyqvm'. :return: A pre-configured QuantumComputer
def sort_url(self):
    prefix = (self.sort_direction == "asc") and "-" or ""
    return self.table.get_url(order_by=prefix + self.name)
Return the URL to sort the linked table by this column. If the table is already sorted by this column, the order is reversed. Since there is no canonical URL for a table the current URL (via the HttpRequest linked to the Table instance) is reused, and any unrelated parameters will be included in the output.
def get_var_count(self):
    n = c_int()
    self.library.get_var_count.argtypes = [POINTER(c_int)]
    self.library.get_var_count(byref(n))
    return n.value
Return number of variables
def system_exit_exception_handler(*args):
    reporter = Reporter()
    reporter.Footer_label.setText(
        "The severity of this exception is critical, <b>{0}</b> cannot continue and will now close!".format(
            Constants.application_name))

    base_exception_handler(*args)

    foundations.core.exit(1)
    return True
Provides a system exit exception handler. :param \*args: Arguments. :type \*args: \* :return: Definition success. :rtype: bool
def append(a, vancestors):
    add = True
    for j, va in enumerate(vancestors):
        if issubclass(va, a):
            add = False
            break
        if issubclass(a, va):
            vancestors[j] = a
            add = False
    if add:
        vancestors.append(a)
Append ``a`` to the list of the virtual ancestors, unless it is already included.
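A tiny illustration of the replacement behaviour (the classes are hypothetical):

>>> class A: pass
>>> class B(A): pass
>>> vancestors = [A]
>>> append(B, vancestors)      # B is more specific than A, so it replaces it
>>> vancestors == [B]
True
>>> append(B, vancestors)      # already covered by B, nothing is added
>>> vancestors == [B]
True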
def features(self):
    return {
        aioxmpp.im.conversation.ConversationFeature.BAN,
        aioxmpp.im.conversation.ConversationFeature.BAN_WITH_KICK,
        aioxmpp.im.conversation.ConversationFeature.KICK,
        aioxmpp.im.conversation.ConversationFeature.SEND_MESSAGE,
        aioxmpp.im.conversation.ConversationFeature.SEND_MESSAGE_TRACKED,
        aioxmpp.im.conversation.ConversationFeature.SET_TOPIC,
        aioxmpp.im.conversation.ConversationFeature.SET_NICK,
        aioxmpp.im.conversation.ConversationFeature.INVITE,
        aioxmpp.im.conversation.ConversationFeature.INVITE_DIRECT,
    }
The set of features supported by this MUC. This may vary depending on features exported by the MUC service, so be sure to check this for each individual MUC.
def match_date(date, date_pattern):
    year, month, day, day_of_week = date
    year_p, month_p, day_p, day_of_week_p = date_pattern

    if year_p == 255:
        pass
    elif year != year_p:
        return False

    if month_p == 255:
        pass
    elif month_p == 13:
        if (month % 2) == 0:
            return False
    elif month_p == 14:
        if (month % 2) == 1:
            return False
    elif month != month_p:
        return False

    if day_p == 255:
        pass
    elif day_p == 32:
        last_day = calendar.monthrange(year + 1900, month)[1]
        if day != last_day:
            return False
    elif day_p == 33:
        if (day % 2) == 0:
            return False
    elif day_p == 34:
        if (day % 2) == 1:
            return False
    elif day != day_p:
        return False

    if day_of_week_p == 255:
        pass
    elif day_of_week != day_of_week_p:
        return False

    return True
Match a specific date, a four-tuple with no special values, with a date pattern, four-tuple possibly having special values.
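A quick hedged example (assuming, from the ``year + 1900`` in the body, that years are stored as offsets from 1900 and that 255 acts as a wildcard):

>>> date = (118, 6, 15, 5)                  # 2018-06-15, a Friday
>>> match_date(date, (255, 6, 15, 255))     # any year, any weekday
True
>>> match_date(date, (255, 7, 255, 255))    # wrong month
False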
def _get_entities(self, text, language=''):
    body = {
        'document': {
            'type': 'PLAIN_TEXT',
            'content': text,
        },
        'encodingType': 'UTF32',
    }
    if language:
        body['document']['language'] = language

    request = self.service.documents().analyzeEntities(body=body)
    response = request.execute()
    result = []
    for entity in response.get('entities', []):
        mentions = entity.get('mentions', [])
        if not mentions:
            continue
        entity_text = mentions[0]['text']
        offset = entity_text['beginOffset']
        for word in entity_text['content'].split():
            result.append({'content': word, 'beginOffset': offset})
            offset += len(word)
    return result
Returns the list of entities retrieved from the given text. Args: text (str): Input text. language (:obj:`str`, optional): Language code. Returns: List of entities.
def process_results(self):
    for result in self._results:
        provider = result.provider
        self.providers.append(provider)
        if result.error:
            self.failed_providers.append(provider)
            continue
        if not result.response:
            continue
        self.blacklisted = True
        provider_categories = provider.process_response(result.response)
        assert provider_categories.issubset(DNSBL_CATEGORIES)
        self.categories = self.categories.union(provider_categories)
        self.detected_by[provider.host] = list(provider_categories)
Process the results returned by the providers
def should_log(self, logger_name: str, level: str) -> bool:
    if (logger_name, level) not in self._should_log:
        log_level_per_rule = self._get_log_level(logger_name)
        log_level_per_rule_numeric = getattr(logging, log_level_per_rule.upper(), 10)
        log_level_event_numeric = getattr(logging, level.upper(), 10)

        should_log = log_level_event_numeric >= log_level_per_rule_numeric
        self._should_log[(logger_name, level)] = should_log
    return self._should_log[(logger_name, level)]
Returns whether a message for the logger should be logged.
def integer(token):
    token = token.strip()
    neg = False
    if token.startswith(compat.b('-')):
        token = token[1:]
        neg = True
    if token.startswith(compat.b('0x')):
        result = int(token, 16)
    elif token.startswith(compat.b('0b')):
        result = int(token[2:], 2)
    elif token.startswith(compat.b('0o')):
        result = int(token, 8)
    else:
        try:
            result = int(token)
        except ValueError:
            result = int(token, 16)
    if neg:
        result = -result
    return result
Convert numeric strings into integers. @type token: str @param token: String to parse. @rtype: int @return: Parsed integer value.
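A few hedged sample conversions (assuming byte-string tokens, since the prefixes are compared via ``compat.b``):

>>> integer(b'0x10')
16
>>> integer(b'-0b101')
-5
>>> integer(b' 42 ')
42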
def _parse(cls, scope):
    if not scope:
        return ('default',)
    if isinstance(scope, string_types):
        scope = scope.split(' ')
    scope = {str(s).lower() for s in scope if s}
    return scope or ('default',)
Parses the input scope into a normalized set of strings. :param scope: A string or tuple containing zero or more scope names. :return: A set of scope name strings, or a tuple with the default scope name. :rtype: set
def local_dt(dt):
    if not dt.tzinfo:
        dt = pytz.utc.localize(dt)
    return LOCALTZ.normalize(dt.astimezone(LOCALTZ))
Return an aware datetime in the system timezone, from a naive or aware datetime. Naive datetimes are assumed to be in UTC.
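A short sketch (the exact result depends on the system timezone, so none is shown):

>>> from datetime import datetime
>>> naive = datetime(2020, 1, 1, 12, 0)                       # interpreted as 12:00 UTC
>>> local = local_dt(naive)                                   # same instant, local zone
>>> aware = pytz.timezone('US/Eastern').localize(naive)
>>> converted = local_dt(aware)                               # converted, not re-interpreted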
def remove_node(self, node):
    if node in self._nodes:
        yield from node.delete()
        self._nodes.remove(node)
Removes a node from the project. In theory this should be called by the node manager. :param node: Node instance
def to_json(self):
    result = super(Webhook, self).to_json()
    result.update({
        'name': self.name,
        'url': self.url,
        'topics': self.topics,
        'httpBasicUsername': self.http_basic_username,
        'headers': self.headers
    })
    if self.filters:
        result.update({'filters': self.filters})
    if self.transformation:
        result.update({'transformation': self.transformation})
    return result
Returns the JSON representation of the webhook.
def select(message="", title="Lackey Input", options=None, default=None): if options is None or len(options) == 0: return "" if default is None: default = options[0] if default not in options: raise ValueError("<<default>> not in options[]") root = tk.Tk() input_text = tk.StringVar() input_text.set(message) PopupList(root, message, title, options, default, input_text) root.focus_force() root.mainloop() return str(input_text.get())
Creates a dropdown selection dialog with the specified message and options `default` must be one of the options. Returns the selected value.
def namedb_preorder_insert(cur, preorder_rec):
    preorder_row = copy.deepcopy(preorder_rec)

    assert 'preorder_hash' in preorder_row, "BUG: missing preorder_hash"

    try:
        preorder_query, preorder_values = namedb_insert_prepare(cur, preorder_row, "preorders")
    except Exception, e:
        log.exception(e)
        log.error("FATAL: Failed to insert name preorder '%s'" % preorder_row['preorder_hash'])
        os.abort()

    namedb_query_execute(cur, preorder_query, preorder_values)
    return True
Add a name or namespace preorder record, if it doesn't exist already. DO NOT CALL THIS DIRECTLY.
def append(entry):
    if not entry:
        return
    try:
        with open(get_rc_path(), 'a') as f:
            if isinstance(entry, list):
                f.writelines(entry)
            else:
                f.write(entry + '\n')
    except IOError:
        print('Error writing your ~/.vacationrc file!')
Append either a list of strings or a string to our file.
def data_find_text(data, path):
    el = data_find(data, path)
    if not isinstance(el, (list, tuple)):
        return None
    texts = [child for child in el[1:]
             if not isinstance(child, (tuple, list, dict))]
    if not texts:
        return None
    return " ".join(
        [
            six.ensure_text(x, encoding="utf-8", errors="strict")
            for x in texts
        ]
    )
Return the text value of the element-as-tuple in tuple ``data`` using simplified XPath ``path``.
def dbmax05years(self, value=None):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} need to be of type float '
                             'for field `dbmax05years`'.format(value))

    self._dbmax05years = value
Corresponds to IDD Field `dbmax05years` 5-year return period values for maximum extreme dry-bulb temperature Args: value (float): value for IDD Field `dbmax05years` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
def _req(self, req):
    logger.debug('DUT> %s', req)
    self._log and self.pause()
    times = 3
    res = None
    while times:
        times = times - 1
        try:
            self._sendline(req)
            self._expect(req)

            line = None
            res = []

            while True:
                line = self._readline()
                logger.debug('Got line %s', line)

                if line == 'Done':
                    break

                if line:
                    res.append(line)
            break
        except:
            logger.exception('Failed to send command')
            self.close()
            self._init()

    self._log and self.resume()
    return res
Send command and wait for response. The command will be repeated at most 3 times in case of data loss on the serial port. Args: req (str): Command to send; please do not include a newline at the end. Returns: [str]: The output lines
def list_json_files(directory, recursive=False):
    json_files = []
    for top, dirs, files in os.walk(directory):
        dirs.sort()
        paths = (os.path.join(top, f) for f in sorted(files))
        json_files.extend(x for x in paths if is_json(x))
        if not recursive:
            break
    return json_files
Return a list of file paths for JSON files within `directory`. Args: directory: A path to a directory. recursive: If ``True``, this function will descend into all subdirectories. Returns: A list of JSON file paths directly under `directory`.
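An illustrative call (the directory layout and file names are hypothetical):

>>> list_json_files('configs')                      # only top-level .json files
['configs/a.json', 'configs/b.json']
>>> list_json_files('configs', recursive=True)      # also descends into subdirectories
['configs/a.json', 'configs/b.json', 'configs/extra/c.json']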
def get_intended_direction(self):
    x = 0
    y = 0
    if self.target_x == self.current_x and self.target_y == self.current_y:
        return y, x
    if self.target_y > self.current_y:
        y = 1
    elif self.target_y < self.current_y:
        y = -1
    if self.target_x > self.current_x:
        x = 1
    elif self.target_x < self.current_x:
        x = -1
    return y, x
Returns a Y,X value showing which direction the agent should move in order to reach the target
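A minimal sketch with a stand-in class (the real agent class is not shown in this snippet):

>>> class Agent:
...     current_x, current_y = 2, 2
...     target_x, target_y = 5, 1
...     get_intended_direction = get_intended_direction
>>> Agent().get_intended_direction()
(-1, 1)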
def __build_sms_data(self, message):
    attributes = {}
    attributes_to_translate = {
        'to': 'To',
        'message': 'Content',
        'client_id': 'ClientID',
        'concat': 'Concat',
        'from_name': 'From',
        'invalid_char_option': 'InvalidCharOption',
        'truncate': 'Truncate',
        'wrapper_id': 'WrapperId'
    }

    for attr in attributes_to_translate:
        val_to_use = None
        if hasattr(message, attr):
            val_to_use = getattr(message, attr)
        if val_to_use is None and hasattr(self, attr):
            val_to_use = getattr(self, attr)
        if val_to_use is not None:
            attributes[attributes_to_translate[attr]] = str(val_to_use)

    return attributes
Build a dictionary of SMS message elements
def get_parent(port_id):
    session = db.get_reader_session()
    res = dict()
    with session.begin():
        subport_model = trunk_models.SubPort
        trunk_model = trunk_models.Trunk
        subport = (session.query(subport_model).
                   filter(subport_model.port_id == port_id).first())
        if subport:
            trunk = (session.query(trunk_model).
                     filter(trunk_model.id == subport.trunk_id).first())
            if trunk:
                trunk_port_id = trunk.port.id
                res = get_ports(port_id=trunk_port_id, active=False)[0]
    return res
Get trunk subport's parent port
def dumps(mesh):
    from lxml import etree

    dae = mesh_to_collada(mesh)
    dae.save()
    return etree.tostring(dae.xmlnode, encoding='UTF-8')
Generates a UTF-8 XML string containing the mesh, in collada format.
def focus_prev_matching(self, querystring): self.focus_property(lambda x: x._message.matches(querystring), self._tree.prev_position)
focus previous matching message in depth first order
def get_item_properties(item, fields, mixed_case_fields=(), formatters=None):
    if formatters is None:
        formatters = {}

    row = []
    for field in fields:
        if field in formatters:
            row.append(formatters[field](item))
        else:
            if field in mixed_case_fields:
                field_name = field.replace(' ', '_')
            else:
                field_name = field.lower().replace(' ', '_')
            if not hasattr(item, field_name) and isinstance(item, dict):
                data = item[field_name]
            else:
                data = getattr(item, field_name, '')
            if data is None:
                data = ''
            row.append(data)
    return tuple(row)
Return a tuple containing the item properties. :param item: a single item resource (e.g. Server, Tenant, etc) :param fields: tuple of strings with the desired field names :param mixed_case_fields: tuple of field names to preserve case :param formatters: dictionary mapping field names to callables to format the values
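A small hedged example using a plain dict as the item (the field names are made up):

>>> server = {'name': 'web-1', 'admin_state': 'UP'}
>>> get_item_properties(server, ('Name', 'Admin State'))
('web-1', 'UP')
>>> get_item_properties(server, ('Name',), formatters={'Name': lambda s: s['name'].upper()})
('WEB-1',)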
def create_statement(self, connection_id):
    request = requests_pb2.CreateStatementRequest()
    request.connection_id = connection_id

    response_data = self._apply(request)
    response = responses_pb2.CreateStatementResponse()
    response.ParseFromString(response_data)
    return response.statement_id
Creates a new statement. :param connection_id: ID of the current connection. :returns: New statement ID.
def GetAnalyzerInstance(cls, analyzer_name):
    analyzer_name = analyzer_name.lower()
    if analyzer_name not in cls._analyzer_classes:
        raise KeyError(
            'analyzer class not set for name: {0:s}.'.format(analyzer_name))

    analyzer_class = cls._analyzer_classes[analyzer_name]
    return analyzer_class()
Retrieves an instance of a specific analyzer. Args: analyzer_name (str): name of the analyzer to retrieve. Returns: BaseAnalyzer: analyzer instance. Raises: KeyError: if analyzer class is not set for the corresponding name.
def read(self, path, ext=None, start=None, stop=None, recursive=False,
         npartitions=None):
    path = uri_to_path(path)
    files = self.list(path, ext=ext, start=start, stop=stop,
                      recursive=recursive)

    nfiles = len(files)
    self.nfiles = nfiles

    if spark and isinstance(self.engine, spark):
        npartitions = min(npartitions, nfiles) if npartitions else nfiles
        rdd = self.engine.parallelize(enumerate(files), npartitions)
        return rdd.map(lambda kv: (kv[0], readlocal(kv[1]), kv[1]))
    else:
        return [(k, readlocal(v), v) for k, v in enumerate(files)]
Sets up Spark RDD across files specified by dataPath on local filesystem. Returns RDD of <integer file index, string buffer> k/v pairs.
def queue_purge(self, queue='', nowait=False):
    args = AMQPWriter()
    args.write_short(0)
    args.write_shortstr(queue)
    args.write_bit(nowait)
    self._send_method((50, 30), args)

    if not nowait:
        return self.wait(allowed_methods=[
            (50, 31),
        ])
Purge a queue This method removes all messages from a queue. It does not cancel consumers. Purged messages are deleted without any formal "undo" mechanism. RULE: A call to purge MUST result in an empty queue. RULE: On transacted channels the server MUST not purge messages that have already been sent to a client but not yet acknowledged. RULE: The server MAY implement a purge queue or log that allows system administrators to recover accidentally-purged messages. The server SHOULD NOT keep purged messages in the same storage spaces as the live messages since the volumes of purged messages may get very large. PARAMETERS: queue: shortstr Specifies the name of the queue to purge. If the queue name is empty, refers to the current queue for the channel, which is the last declared queue. RULE: If the client did not previously declare a queue, and the queue name in this method is empty, the server MUST raise a connection exception with reply code 530 (not allowed). RULE: The queue must exist. Attempting to purge a non- existing queue causes a channel exception. nowait: boolean do not send a reply method If set, the server will not respond to the method. The client should not wait for a reply method. If the server could not complete the method it will raise a channel or connection exception. if nowait is False, returns a message_count
def insrti(item, inset):
    assert isinstance(inset, stypes.SpiceCell)
    if hasattr(item, "__iter__"):
        for i in item:
            libspice.insrti_c(ctypes.c_int(i), ctypes.byref(inset))
    else:
        item = ctypes.c_int(item)
        libspice.insrti_c(item, ctypes.byref(inset))
Insert an item into an integer set. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/insrti_c.html :param item: Item to be inserted. :type item: Union[float,Iterable[int]] :param inset: Insertion set. :type inset: spiceypy.utils.support_types.SpiceCell
def getConnectionStats(self):
    cur = self._conn.cursor()
    cur.execute()
    rows = cur.fetchall()
    if rows:
        return dict(rows)
    else:
        return {}
Returns dictionary with number of connections for each database. @return: Dictionary of database connection statistics.
def remove_service(self, zeroconf, srv_type, srv_name):
    self.servers.remove_server(srv_name)
    logger.info(
        "Glances server %s removed from the autodetect list" % srv_name)
Remove the server from the list.
def GetPublicCert(self):
    cert_url = self.google_api_url + 'publicKeys'

    resp, content = self.http.request(cert_url)

    if resp.status == 200:
        return simplejson.loads(content)
    else:
        raise errors.GitkitServerError('Error response for cert url: %s' % content)
Download Gitkit public cert. Returns: dict of public certs.
def authentication_url(self):
    params = {
        'client_id': self.client_id,
        'response_type': self.type,
        'redirect_uri': self.callback_url
    }
    return AUTHENTICATION_URL + "?" + urlencode(params)
Redirect your users to here to authenticate them.
def geometrize_stops(stops: List[str], *, use_utm: bool = False) -> DataFrame:
    import geopandas as gpd

    g = (
        stops.assign(
            geometry=lambda x: [
                sg.Point(p) for p in x[["stop_lon", "stop_lat"]].values
            ]
        )
        .drop(["stop_lon", "stop_lat"], axis=1)
        .pipe(lambda x: gpd.GeoDataFrame(x, crs=cs.WGS84))
    )
    if use_utm:
        lat, lon = stops.loc[0, ["stop_lat", "stop_lon"]].values
        crs = hp.get_utm_crs(lat, lon)
        g = g.to_crs(crs)

    return g
Given a stops DataFrame, convert it to a GeoPandas GeoDataFrame and return the result. Parameters ---------- stops : DataFrame A GTFS stops table use_utm : boolean If ``True``, then convert the output to local UTM coordinates; otherwise use WGS84 coordinates Returns ------- GeoPandas GeoDataFrame Looks like the given stops DataFrame, but has a ``'geometry'`` column of Shapely Point objects that replaces the ``'stop_lon'`` and ``'stop_lat'`` columns. Notes ----- Requires GeoPandas.
def getcosIm(alat):
    alat = np.float64(alat)
    return np.cos(np.radians(alat)) / np.sqrt(4 - 3 * np.cos(np.radians(alat))**2)
Computes cosIm from modified apex latitude. Parameters ========== alat : array_like Modified apex latitude Returns ======= cosIm : ndarray or float
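Two quick sanity checks (scalar input shown; array input behaves element-wise via NumPy):

>>> float(getcosIm(0.0))       # at the magnetic equator
1.0
>>> round(float(getcosIm(90.0)), 12)    # approaches 0 toward the poles
0.0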
def set_scrollbars_cb(self, w, tf):
    scrollbars = 'on' if tf else 'off'
    self.t_.set(scrollbars=scrollbars)
This callback is invoked when the user checks the 'Use Scrollbars' box in the preferences pane.
def _lockfile(self):
    pfile = pipfile.load(self.pipfile_location, inject_env=False)
    lockfile = json.loads(pfile.lock())
    for section in ("default", "develop"):
        lock_section = lockfile.get(section, {})
        for key in list(lock_section.keys()):
            norm_key = pep423_name(key)
            lockfile[section][norm_key] = lock_section.pop(key)
    return lockfile
Pipfile.lock divided by PyPI and external dependencies.
def get_keys_of_max_n(dict_obj, n):
    return sorted([
        item[0]
        for item in sorted(
            dict_obj.items(), key=lambda item: item[1], reverse=True
        )[:n]
    ])
Returns the keys that maps to the top n max values in the given dict. Example: -------- >>> dict_obj = {'a':2, 'b':1, 'c':5} >>> get_keys_of_max_n(dict_obj, 2) ['a', 'c']
def start(self):
    if not self.valid:
        err = ("\nMessengers and listeners that still need set:\n\n"
               "messengers : %s\n\n"
               "listeners : %s\n")
        raise InvalidApplication(err % (self.needed_messengers,
                                        self.needed_listeners))
    self.dispatcher.start()
If we have a set of plugins that provide our expected listeners and messengers, tell our dispatcher to start up. Otherwise, raise InvalidApplication
def warning(self, msg):
    self._execActions('warning', msg)
    msg = self._execFilters('warning', msg)
    self._processMsg('warning', msg)
    self._sendMsg('warning', msg)
Log Warning Messages
def set_environment_variable(self, name, value):
    m = Message()
    m.add_byte(cMSG_CHANNEL_REQUEST)
    m.add_int(self.remote_chanid)
    m.add_string("env")
    m.add_boolean(False)
    m.add_string(name)
    m.add_string(value)
    self.transport._send_user_message(m)
Set the value of an environment variable. .. warning:: The server may reject this request depending on its ``AcceptEnv`` setting; such rejections will fail silently (which is common client practice for this particular request type). Make sure you understand your server's configuration before using! :param str name: name of the environment variable :param str value: value of the environment variable :raises: `.SSHException` -- if the request was rejected or the channel was closed
def simple(self):
    if self._days:
        return '%sD' % self.totaldays
    elif self.months:
        return '%sM' % self._months
    elif self.years:
        return '%sY' % self.years
    else:
        return ''
A string representation with only one period delimiter.
def file_upload(self, local_path, remote_path, l_st):
    self.sftp.put(local_path, remote_path)
    self._match_modes(remote_path, l_st)
Upload local_path to remote_path and set permission and mtime.
def filter(self, value):
    if self.env is None:
        raise PluginException('The plugin must be installed to application.')

    def wrapper(func):
        name = func.__name__
        if isinstance(value, str):
            name = value
        if callable(func):
            self.env.filters[name] = func
        return func

    if callable(value):
        return wrapper(value)
    return wrapper
Register function to filters.
def _CreateIndexIfNotExists(self, index_name, mappings):
    try:
        if not self._client.indices.exists(index_name):
            self._client.indices.create(
                body={'mappings': mappings}, index=index_name)
    except elasticsearch.exceptions.ConnectionError as exception:
        raise RuntimeError(
            'Unable to create Elasticsearch index with error: {0!s}'.format(
                exception))
Creates an Elasticsearch index if it does not exist. Args: index_name (str): name of the index. mappings (dict[str, object]): mappings of the index. Raises: RuntimeError: if the Elasticsearch index cannot be created.
def replay(self, event, ts=0, end_ts=None, with_ts=False):
    key = self._keygen(event, ts)
    end_ts = end_ts if end_ts else "+inf"
    elements = self.r.zrangebyscore(key, ts, end_ts, withscores=with_ts)

    if not with_ts:
        return [s(e) for e in elements]
    else:
        return [(s(e[0]), int(e[1])) for e in elements]
Replay events based on timestamp. If you split namespace with ts, the replay will only return events within the same namespace. :param event: event name :param ts: replay events after ts, default from 0. :param end_ts: replay events to ts, default to "+inf". :param with_ts: return timestamp with events, default to False. :return: list of pks when with_ts set to False, list of (pk, ts) tuples when with_ts is True.