Dataset columns:
  Unnamed: 0: int64, values 0 to 389k
  code: string, lengths 26 to 79.6k
  docstring: string, lengths 1 to 46.9k
379,400
def get_image_path(repo_url, trailing_path):
    repo_url = repo_url.split()[-1].strip()
    if repo_url.endswith('.git'):
        repo_url = repo_url[:-4]
    return "%s/%s" % (re.sub(, , repo_url), trailing_path)
get_image_path will determine an image path based on a repo url, removing any token, and taking into account urls that end with .git. :param repo_url: the repo url to parse :param trailing_path: the trailing path (commit then hash is common)
379,401
def add_deploy(state, deploy_func, *args, **kwargs):
    frameinfo = get_caller_frameinfo()
    kwargs[] = frameinfo
    for host in state.inventory:
        deploy_func(state, host, *args, **kwargs)
Prepare & add a deploy to pyinfra.state by executing it on all hosts. Args: state (``pyinfra.api.State`` obj): the deploy state to add the operation deploy_func (function): the operation function from one of the modules, e.g. ``server.user`` args/kwargs: passed to the operation function
379,402
def _init_cycle_dict(self):
    dict_arr = np.zeros(self.epochs, dtype=int)
    length_arr = np.zeros(self.epochs, dtype=int)
    start_arr = np.zeros(self.epochs, dtype=int)
    c_len = self.cycle_len
    idx = 0
    for i in range(self.cycles):
        current_start = idx
        for j in range(c_len):
            dict_arr[idx] = i
            length_arr[idx] = c_len
            start_arr[idx] = current_start
            idx += 1
        c_len *= self.cycle_mult
    return dict_arr, length_arr, start_arr
Populate a cycle dict
379,403
def contributions_from_model_image_and_galaxy_image(self, model_image, galaxy_image, minimum_value=0.0):
    contributions = np.divide(galaxy_image, np.add(model_image, self.contribution_factor))
    contributions = np.divide(contributions, np.max(contributions))
    contributions[contributions < minimum_value] = 0.0
    return contributions
Compute the contribution map of a galaxy, which represents the fraction of flux in each pixel that the \ galaxy is attributed to contain, scaled to the *contribution_factor* hyper-parameter. This is computed by dividing that galaxy's flux by the total flux in that pixel, and then scaling by the \ maximum flux such that the contribution map ranges between 0 and 1. Parameters ----------- model_image : ndarray The best-fit model image to the observed image from a previous analysis phase. This provides the \ total light attributed to each image pixel by the model. galaxy_image : ndarray A model image of the galaxy (from light profiles or an inversion) from a previous analysis phase. minimum_value : float The minimum contribution value a pixel must contain to not be rounded to 0.
379,404
def intersect(a, b):
    if a[x0] == a[x1] or a[y0] == a[y1]:
        return False
    if b[x0] == b[x1] or b[y0] == b[y1]:
        return False
    return a[x0] <= b[x1] and b[x0] <= a[x1] and a[y0] <= b[y1] and b[y0] <= a[y1]
Check if two rectangles intersect
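A quick standalone check of the overlap test above, assuming x0/y0/x1/y1 are index constants into 4-tuples (an assumption; the source does not show their definitions):

# Hypothetical index constants; the source does not define them.
x0, y0, x1, y1 = 0, 1, 2, 3

def intersect(a, b):
    if a[x0] == a[x1] or a[y0] == a[y1]:
        return False
    if b[x0] == b[x1] or b[y0] == b[y1]:
        return False
    return a[x0] <= b[x1] and b[x0] <= a[x1] and a[y0] <= b[y1] and b[y0] <= a[y1]

print(intersect((0, 0, 2, 2), (1, 1, 3, 3)))  # True: the boxes overlap
print(intersect((0, 0, 1, 1), (2, 2, 3, 3)))  # False: disjoint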
379,405
def _set_bfd_static_route(self, v, load=False):
    if hasattr(v, "_utype"):
        v = v._utype(v)
    try:
        t = YANGDynClass(
            v,
            base=YANGListType(
                "bfd_static_route_dest bfd_static_route_src",
                bfd_static_route.bfd_static_route,
                yang_name="bfd-static-route", rest_name="bfd-static-route",
                parent=self, is_container=, user_ordered=False,
                path_helper=self._path_helper, yang_keys=,
                extensions={u: {u: None, u: None, u: None, u: None, u: None, u: u}}),
            is_container=, yang_name="bfd-static-route", rest_name="bfd-static-route",
            parent=self, path_helper=self._path_helper,
            extmethods=self._extmethods, register_paths=True,
            extensions={u: {u: None, u: None, u: None, u: None, u: None, u: u}},
            namespace=, defining_module=, yang_type=, is_config=True)
    except (TypeError, ValueError):
        raise ValueError({
            : ,
            : "list",
            : ,
        })
    self.__bfd_static_route = t
    if hasattr(self, ):
        self._set()
Setter method for bfd_static_route, mapped from YANG variable /rbridge_id/vrf/address_family/ip/unicast/ip/route/static/bfd/bfd_static_route (list) If this variable is read-only (config: false) in the source YANG file, then _set_bfd_static_route is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_bfd_static_route() directly.
379,406
def info(self):
    info = {: , : self.verbose}
    info.update({: [term.info for term in self._terms]})
    return info
get information about the terms in the term list Parameters ---------- Returns ------- dict containing information to duplicate the term list
379,407
def RunMetadata(self, run, tag):
    accumulator = self.GetAccumulator(run)
    return accumulator.RunMetadata(tag)
Get the session.run() metadata associated with a TensorFlow run and tag. Args: run: A string name of a TensorFlow run. tag: A string name of the tag associated with a particular session.run(). Raises: KeyError: If the run is not found, or the tag is not available for the given run. Returns: The metadata in the form of `RunMetadata` protobuf data structure.
379,408
def close(self, status=1000, reason=u''):
    try:
        if self.closed is False:
            close_msg = bytearray()
            close_msg.extend(struct.pack("!H", status))
            if _check_unicode(reason):
                close_msg.extend(reason.encode())
            else:
                close_msg.extend(reason)
            self._send_message(False, CLOSE, close_msg)
    finally:
        self.closed = True
Send Close frame to the client. The underlying socket is only closed when the client acknowledges the Close frame. status is the closing identifier. reason is the reason for the close.
379,409
def _check_valid_condition(self, get_params):
    try:
        variable = get_params(self.variable)
    except:
        variable = None
    value = self.value
    if variable is None:
        return not self.default
    if isinstance(variable, bool):
        value = bool(self.value)
    elif isinstance(variable, Number):
        try:
            value = int(self.value)
        except:
            try:
                value = float(self.value)
            except:
                return not self.default
    if self.condition == "=":
        return (variable == value) == self.default
    elif self.condition == ">":
        return (variable > value) == self.default
    elif self.condition == "<":
        return (variable < value) == self.default
Check if the condition has been met. We need to make sure that we are of the correct type.
379,410
def query(self, tableClass, comparison=None, limit=None, offset=None, sort=None):
    if isinstance(tableClass, tuple):
        queryClass = MultipleItemQuery
    else:
        queryClass = ItemQuery
    return queryClass(self, tableClass, comparison, limit, offset, sort)
Return a generator of instances of C{tableClass}, or tuples of instances if C{tableClass} is a tuple of classes. Examples:: fastCars = s.query(Vehicle, axiom.attributes.AND( Vehicle.wheels == 4, Vehicle.maxKPH > 200), limit=100, sort=Vehicle.maxKPH.descending) quotesByClient = s.query( (Client, Quote), axiom.attributes.AND( Client.active == True, Quote.client == Client.storeID, Quote.created >= someDate), limit=10, sort=(Client.name.ascending, Quote.created.descending)) @param tableClass: a subclass of Item to look for instances of, or a tuple of subclasses. @param comparison: a provider of L{IComparison}, or None, to match all items available in the store. If tableClass is a tuple, then the comparison must refer to all Item subclasses in that tuple, and specify the relationships between them. @param limit: an int to limit the total length of the results, or None for all available results. @param offset: an int to specify a starting point within the available results, or None to start at 0. @param sort: an L{ISort}, something that comes from an SQLAttribute's 'ascending' or 'descending' attribute. @return: an L{ItemQuery} object, which is an iterable of Items or tuples of Items, according to tableClass.
379,411
def import_module(self, name):
    if name not in self._objects:
        module = _import_module(name)
        self._objects[name] = module
        self._object_references[id(module)] = name
    return self._objects[name]
Import a module into the bridge.
379,412
def get_raw(self, url: str, _attempt=1) -> requests.Response:
    with self.get_anonymous_session() as anonymous_session:
        resp = anonymous_session.get(url, stream=True)
    if resp.status_code == 200:
        resp.raw.decode_content = True
        return resp
    else:
        if resp.status_code == 403:
            raise QueryReturnedForbiddenException("403 when accessing {}.".format(url))
        if resp.status_code == 404:
            raise QueryReturnedNotFoundException("404 when accessing {}.".format(url))
        raise ConnectionException("HTTP error code {}.".format(resp.status_code))
Downloads a file anonymously. :raises QueryReturnedNotFoundException: When the server responds with a 404. :raises QueryReturnedForbiddenException: When the server responds with a 403. :raises ConnectionException: When download failed. .. versionadded:: 4.2.1
379,413
def check_file(self, filename):
    if not exists(filename):
        return False
    new_config = ConfigResolverBase()
    new_config.read(filename)
    if self.version and not new_config.has_option(, ):
        raise NoVersionError(
            "The config option is missing in {}. The "
            "application expects version {}!".format(filename, self.version))
    elif not self.version and new_config.has_option(, ):
        self.version = StrictVersion(new_config.get(, ))
        self._log.info(, filename, self.version)
    elif self.version:
        file_version = new_config.get(, )
        major, minor, _ = StrictVersion(file_version).version
        expected_major, expected_minor, _ = self.version.version
        if expected_major != major:
            self._log.error(, abspath(filename), str(self.version), file_version)
            return False
        if expected_minor != minor:
            self._log.warning(, abspath(filename), str(self.version), file_version)
            return True
    return True
Check if ``filename`` can be read. Will return boolean which is True if the file can be read, False otherwise.
379,414
def to_dict(self):
    if self.L is None:
        return {"mu": self.mu.tolist(), "Sigma": self.Sigma.tolist()}
    else:
        return {"mu": self.mu.tolist(), "Sigma": self.Sigma.tolist(), "L": self.L.tolist()}
Convert the object into a json serializable dictionary. Note: It uses the private method _save_to_input_dict of the parent. :return dict: json serializable dictionary containing the needed information to instantiate the object
379,415
def coerce_author(value):
    if isinstance(value, Author):
        return value
    if not isinstance(value, string_types):
        msg = "Expected Author object or string as argument, got %s instead!"
        raise ValueError(msg % type(value))
    match = re.match(, value)  # pattern elided in source; a match is clearly assigned here
    if not match:
        raise ValueError(msg % value)
    return Author(
        name=match.group(1).strip(),
        email=match.group(2).strip(),
    )
Coerce strings to :class:`Author` objects. :param value: A string or :class:`Author` object. :returns: An :class:`Author` object. :raises: :exc:`~exceptions.ValueError` when `value` isn't a string or :class:`Author` object.
379,416
def word_groups_for_language(language_code):
    if language_code not in LANGUAGE_CODES:
        message = .format(language_code)
        raise InvalidLanguageCodeException(message)
    return MATH_WORDS[language_code]
Return the math word groups for a language code. The language_code should be an ISO 639-2 language code. https://www.loc.gov/standards/iso639-2/php/code_list.php
379,417
def signMessage(self, message):
    if (message.hasKey(OPENID_NS, 'sig') or
            message.hasKey(OPENID_NS, 'signed')):
        raise ValueError()
    extant_handle = message.getArg(OPENID_NS, 'assoc_handle')
    if extant_handle and extant_handle != self.handle:
        raise ValueError("Message has a different association handle")
    signed_message = message.copy()
    signed_message.setArg(OPENID_NS, 'assoc_handle', self.handle)
    message_keys = signed_message.toPostArgs().keys()
    signed_list = [k[7:] for k in message_keys if k.startswith('openid.')]
    signed_list.append('signed')
    signed_list.sort()
    signed_message.setArg(OPENID_NS, 'signed', ','.join(signed_list))
    sig = self.getMessageSignature(signed_message)
    signed_message.setArg(OPENID_NS, 'sig', sig)
    return signed_message
Add a signature (and a signed list) to a message. @return: a new Message object with a signature @rtype: L{openid.message.Message}
379,418
def extend_env(extra_env):
    env = os.environ.copy()
    env.update(extra_env)
    return env
Copies and extends the current environment with the values present in `extra_env`.
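The copy-then-update pattern above leaves os.environ untouched; a minimal check (MY_FLAG is a made-up variable, assumed not already set):

import os

env = os.environ.copy()
env.update({"MY_FLAG": "1"})
print("MY_FLAG" in env, "MY_FLAG" in os.environ)  # True False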
379,419
def eval_grad(self):
    Ryf = self.eval_Rf(self.Yf)
    gradf = sl.inner(np.conj(self.Zf), Ryf, axis=self.cri.axisK)
    if self.cri.C > 1 and self.cri.Cd == 1:
        gradf = np.sum(gradf, axis=self.cri.axisC, keepdims=True)
    return gradf
Compute gradient in Fourier domain.
379,420
def newbie(cls, *args, **kwargs):
    parser = cls(*args, **kwargs)
    subparser = parser.add_subparsers(dest=)
    parents = [parser.pparser, parser.output_parser]
    sparser = subparser.add_parser(, help=, parents=parents)
    parser.search_group = sparser.add_argument_group()
    parser.search_group.add_argument(, , help=, default=None)
    h = 
    parser.search_group.add_argument(, help=h, nargs=, default=None)
    parser.search_group.add_argument(, help=, nargs=4)
    parser.search_group.add_argument(, help=)
    parser.search_group.add_argument(, help=)
    parser.search_group.add_argument(, , nargs=, help=)
    parser.search_group.add_argument(, help=, nargs=)
    h = 
    parser.search_group.add_argument(, help=h, action=, default=False)
    parser.search_group.add_argument(, help=, default=config.API_URL)
    parents.append(parser.download_parser)
    lparser = subparser.add_parser(, help=, parents=parents)
    lparser.add_argument(, help=)
    return parser
Create a newbie class, with all the skills needed
379,421
def form_field_definitions(self):
    schema = copy.deepcopy(form_field_definitions.user)
    uid, login = self._get_auth_attrs()
    if uid != login:
        field = schema.get(login, schema[])
        if field[].find() == -1:
            field[] =  % (, field[])
        if not field.get():
            field[] = dict()
        field[][] = ([], [], [], [], [])
        schema[login] = field
    return schema
Hook optional_login extractor if necessary for form defaults.
379,422
def read_adc_difference(self, differential):
    assert 0 <= differential <= 7, 
    command = 0b10 << 6
    command |= (differential & 0x07) << 3
    resp = self._spi.transfer([command, 0x0, 0x0])
    result = (resp[0] & 0x01) << 9
    result |= (resp[1] & 0xFF) << 1
    result |= (resp[2] & 0x80) >> 7
    return result & 0x3FF
Read the difference between two channels. Differential should be a value of: - 0: Return channel 0 minus channel 1 - 1: Return channel 1 minus channel 0 - 2: Return channel 2 minus channel 3 - 3: Return channel 3 minus channel 2 - 4: Return channel 4 minus channel 5 - 5: Return channel 5 minus channel 4 - 6: Return channel 6 minus channel 7 - 7: Return channel 7 minus channel 6
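The bit reassembly above can be checked by hand with a fake 3-byte SPI response (the byte values are made up):

# Fake SPI response bytes for a 10-bit differential read.
resp = [0x01, 0xAB, 0x80]

result = (resp[0] & 0x01) << 9   # bit 9 from the first byte's LSB
result |= (resp[1] & 0xFF) << 1  # bits 8..1 from the middle byte
result |= (resp[2] & 0x80) >> 7  # bit 0 from the last byte's MSB
print(result & 0x3FF)            # 855 == 0x357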
379,423
def getRenderers(filename):
    global available_renderers
    renderers = []
    for rdrid, (renderer, module) in available_renderers.items():
        try:
            priority = renderer.canRender(filename)
        except:
            print( % (rdrid, filename))
            traceback.print_exc()
            priority = None
        if priority:
            renderers.append((priority, rdrid))
    renderers.sort(lambda a, b: cmp(a[0], b[0]))
    return [a[1] for a in renderers] or ["link"]
For a given DP, returns a list of renderer ids giving the renderers that support the source file type
379,424
def product_name(self):
    buf = (ctypes.c_char * self.MAX_BUF_SIZE)()
    self._dll.JLINKARM_EMU_GetProductName(buf, self.MAX_BUF_SIZE)
    return ctypes.string_at(buf).decode()
Returns the product name of the connected J-Link. Args: self (JLink): the ``JLink`` instance Returns: Product name.
379,425
def set_chat_photo(self, chat_id: Union[int, str], photo: str) -> bool:
    peer = self.resolve_peer(chat_id)
    if os.path.exists(photo):
        photo = types.InputChatUploadedPhoto(file=self.save_file(photo))
    else:
        s = unpack("<qq", b64decode(photo + "=" * (-len(photo) % 4), "-_"))
        photo = types.InputChatPhoto(
            id=types.InputPhoto(
                id=s[0],
                access_hash=s[1],
                file_reference=b""
            )
        )
    if isinstance(peer, types.InputPeerChat):
        self.send(
            functions.messages.EditChatPhoto(
                chat_id=peer.chat_id,
                photo=photo
            )
        )
    elif isinstance(peer, types.InputPeerChannel):
        self.send(
            functions.channels.EditPhoto(
                channel=peer,
                photo=photo
            )
        )
    else:
        raise ValueError("The chat_id \"{}\" belongs to a user".format(chat_id))
    return True
Use this method to set a new profile photo for the chat. Photos can't be changed for private chats. You must be an administrator in the chat for this to work and must have the appropriate admin rights. Note: In regular groups (non-supergroups), this method will only work if the "All Members Are Admins" setting is off. Args: chat_id (``int`` | ``str``): Unique identifier (int) or username (str) of the target chat. photo (``str``): New chat photo. You can pass a :class:`Photo` id or a file path to upload a new photo. Returns: True on success. Raises: :class:`RPCError <pyrogram.RPCError>` in case of a Telegram RPC error. ``ValueError`` if a chat_id belongs to user.
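One detail worth isolating from the entry above is the base64 padding expression: URL-safe base64 ids often arrive without '=' padding, and "=" * (-len(s) % 4) restores exactly the missing amount. A standalone check with a made-up payload:

from base64 import urlsafe_b64decode

s = "aGVsbG8gd29ybGQ"  # "hello world", padding stripped
print(urlsafe_b64decode(s + "=" * (-len(s) % 4)))  # b'hello world'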
379,426
def notification_message(cls, item):
    assert isinstance(item, Notification)
    return cls.encode_payload(cls.request_payload(item, None))
Convert an RPCRequest item to a message.
379,427
def mmGetCellTracePlot(self, cellTrace, cellCount, activityType, title="",
                       showReset=False, resetShading=0.25):
    plot = Plot(self, title)
    resetTrace = self.mmGetTraceResets().data
    data = numpy.zeros((cellCount, 1))
    for i in xrange(len(cellTrace)):
        if showReset and resetTrace[i]:
            activity = numpy.ones((cellCount, 1)) * resetShading
        else:
            activity = numpy.zeros((cellCount, 1))
        activeIndices = cellTrace[i]
        activity[list(activeIndices)] = 1
        data = numpy.concatenate((data, activity), 1)
    plot.add2DArray(data, xlabel="Time", ylabel=activityType, name=title)
    return plot
Returns plot of the cell activity. Note that if many timesteps of activities are input, matplotlib's image interpolation may omit activities (columns in the image). @param cellTrace (list) a temporally ordered list of sets of cell activities @param cellCount (int) number of cells in the space being rendered @param activityType (string) type of cell activity being displayed @param title (string) an optional title for the figure @param showReset (bool) if true, the first set of cell activities after a reset will have a grayscale background @param resetShading (float) applicable if showReset is true, specifies the intensity of the reset background with 0.0 being white and 1.0 being black @return (Plot) plot
379,428
def features(self):
    if self._features is None:
        metadata = self.metadata()
        if "features" in metadata:
            self._features = metadata["features"]
        else:
            self._features = []
    return self._features
lazy fetch and cache features
379,429
def to_keypoints(self):
    from imgaug.augmentables.kps import Keypoint
    return [Keypoint(x=x, y=y) for (x, y) in self.coords]
Convert the line string points to keypoints. Returns ------- list of imgaug.augmentables.kps.Keypoint Points of the line string as keypoints.
379,430
def read_data_sets(train_dir, data_type="train"):
    TRAIN_IMAGES = 
    TRAIN_LABELS = 
    TEST_IMAGES = 
    TEST_LABELS = 
    if data_type == "train":
        local_file = base.maybe_download(TRAIN_IMAGES, train_dir,
                                         SOURCE_URL + TRAIN_IMAGES)
        with open(local_file, ) as f:
            train_images = extract_images(f)
        local_file = base.maybe_download(TRAIN_LABELS, train_dir,
                                         SOURCE_URL + TRAIN_LABELS)
        with open(local_file, ) as f:
            train_labels = extract_labels(f)
        return train_images, train_labels
    else:
        local_file = base.maybe_download(TEST_IMAGES, train_dir,
                                         SOURCE_URL + TEST_IMAGES)
        with open(local_file, ) as f:
            test_images = extract_images(f)
        local_file = base.maybe_download(TEST_LABELS, train_dir,
                                         SOURCE_URL + TEST_LABELS)
        with open(local_file, ) as f:
            test_labels = extract_labels(f)
        return test_images, test_labels
Parse or download mnist data if train_dir is empty. :param: train_dir: The directory storing the mnist data :param: data_type: Reading training set or testing set. It can be either "train" or "test" :return: ``` (ndarray, ndarray) representing (features, labels) features is a 4D uint8 numpy array [index, y, x, depth] representing each pixel valued from 0 to 255. labels is 1D uint8 numpy array representing the label valued from 0 to 9. ```
379,431
def get_folder_details(self, folder): created_by303447created_on2017-03-21T14:06:32.293902Zdescriptionentity_typefoldermodified_by303447modified_on2017-03-21T14:06:32.293967Znamemyfolderparent3abd8742-d069-44cf-a66b-2370df74a682uuid2516442e-1e26-4de1-8ed8-94523224cc40 if not is_valid_uuid(folder): raise StorageArgumentException( .format(folder)) return self._authenticated_request \ .to_endpoint(.format(folder)) \ .return_body() \ .get()
Get information on a given folder. Args: folder (str): The UUID of the requested folder. Returns: A dictionary of the folder details if found:: { u'created_by': u'303447', u'created_on': u'2017-03-21T14:06:32.293902Z', u'description': u'', u'entity_type': u'folder', u'modified_by': u'303447', u'modified_on': u'2017-03-21T14:06:32.293967Z', u'name': u'myfolder', u'parent': u'3abd8742-d069-44cf-a66b-2370df74a682', u'uuid': u'2516442e-1e26-4de1-8ed8-94523224cc40' } Raises: StorageArgumentException: Invalid arguments StorageForbiddenException: Server response code 403 StorageNotFoundException: Server response code 404 StorageException: other 400-600 error codes
379,432
def in_filter_get(self, address):
    func_name = 
    param = {
        neighbors.IP_ADDRESS: address,
    }
    return call(func_name, **param)
This method gets in-bound filters of the specified neighbor. ``address`` specifies the IP address of the neighbor. Returns a list object containing an instance of Filter sub-class
379,433
def list_containers_info(self, limit=None, marker=None):
    return self._manager.list_containers_info(limit=limit, marker=marker)
Returns a list of info on Containers. For each container, a dict containing the following keys is returned: name - the name of the container; count - the number of objects in the container; bytes - the total bytes in the container
379,434
def mget(self, keys, *args):
    args = list_or_args(keys, args)
    options = {}
    if not args:
        options[EMPTY_RESPONSE] = []
    return self.execute_command('MGET', *args, **options)
Returns a list of values ordered identically to ``keys``
379,435
def set(self, section, key, value):
    value_type = str
    if self.has_option(section, key):
        descr, value_type, default = self.get_description(section, key)
    if value_type != type(value):
        if value_type == bool:
            if ((type(value) in string_types and value.lower() in (, )) or
                    (type(value) == int and value > 0)):
                value = True
            elif ((type(value) in string_types and value.lower() in (, )) or
                    (type(value) == int and value == 0)):
                value = False
            else:
                raise AppConfigValueException(.format(value))
        else:
            value = value_type(value)
    if not self.has_section(section):
        self.add_section(section)
    ConfigParser.set(self, section, key, str(value))
Set the value for a key in the given section. It will check the type of the value if it is available. If the value is not from the given type it will be transformed to the type. An exception will be thrown if there is a problem with the conversion. @param section: the section of the key @param key: the key where to store the value @param value: the value to store @exception: If there is a problem with the conversion of the value type.
379,436
def launch_subshell(self, shell_cls, cmd, args, *, prompt=None, context={}):
    readline.write_history_file(self.history_fname)
    prompt = prompt if prompt else shell_cls.__name__
    mode = _ShellBase._Mode(
        shell=self,
        cmd=cmd,
        args=args,
        prompt=prompt,
        context=context,
    )
    shell = shell_cls(
        batch_mode=self.batch_mode,
        debug=self.debug,
        mode_stack=self._mode_stack + [mode],
        pipe_end=self._pipe_end,
        root_prompt=self.root_prompt,
        stdout=self.stdout,
        stderr=self.stderr,
        temp_dir=self._temp_dir,
    )
    self.print_debug("Leave parent shell '{}'".format(self.prompt))
    exit_directive = shell.cmdloop()
    self.print_debug("Enter parent shell '{}': {}".format(self.prompt, exit_directive))
    readline.clear_history()
    if os.path.isfile(self.history_fname):
        readline.read_history_file(self.history_fname)
    if not exit_directive is True:
        return exit_directive
Launch a subshell. The doc string of the cmdloop() method explains how shell histories and history files are saved and restored. The design of the _ShellBase class encourages launching of subshells through the subshell() decorator function. Nonetheless, the user has the option of directly launching subshells via this method. Arguments: shell_cls: The _ShellBase class object to instantiate and launch. args: Arguments used to launch this subshell. prompt: The name of the subshell. The default, None, means to use the shell_cls.__name__. context: A dictionary to pass to the subshell as its context. Returns: 'root': Inform the parent shell to keep exiting until the root shell is reached. 'all': Exit the command line. False, None, or anything that is evaluated as False: Inform the parent shell to stay in that parent shell. An integer indicating the depth of shell to exit to. 0 = root shell.
379,437
def handle_error(self, message: str, e: mastodon.MastodonError) -> OutputRecord:
    self.lerror(f"Got an error! {e}")
    try:
        code = e[0]["code"]
        if code in self.handled_errors:
            self.handled_errors[code]
        else:
            pass
    except Exception:
        pass
    return TootRecord(error=e)
Handle error while trying to do something.
379,438
def get_child(self, child_name):
    child = self.children.get(child_name, None)
    if child:
        return child
    raise ValueError("Value {} not in this tree".format(child_name))
returns the object with the name supplied
379,439
def update_configurable(self, configurable_class, name, config):
    configurable_class_name = configurable_class.__name__.lower()
    logger.info("updating %s: '%s'", configurable_class_name, name)
    registry = self.registry_for(configurable_class)
    if name not in registry:
        logger.warn("Tried to update unknown %s: '%s'",
                    configurable_class_name, name)
        self.add_configurable(
            configurable_class, configurable_class.from_config(name, config))
        return
    registry[name].apply_config(config)
    hook = self.hook_for(configurable_class, "update")
    if not hook:
        return

    def done(f):
        try:
            f.result()
        except Exception:
            logger.exception("Error updating configurable '%s'", name)

    self.work_pool.submit(hook, name, config).add_done_callback(done)
Callback fired when a configurable instance is updated. Looks up the existing configurable in the proper "registry" and `apply_config()` is called on it. If a method named "on_<configurable classname>_update" is defined it is called in the work pool and passed the configurable's name, the old config and the new config. If the updated configurable is not present, `add_configurable()` is called instead.
379,440
def get_stream(self, bucket, label, as_stream=True):
    if self.mode == "w":
        raise OFSException("Cannot read from archive in 'w' mode")
    elif self.exists(bucket, label):
        fn = self._zf(bucket, label)
        if as_stream:
            return self.z.open(fn)
        else:
            return self.z.read(fn)
    else:
        raise OFSFileNotFound
Get a bitstream for the given bucket:label combination. :param bucket: the bucket to use. :return: bitstream as a file-like object
379,441
def _set_suffix_links(self):
    self._suffix_links_set = True
    for current, parent in self.bfs():
        if parent is None:
            continue
        current.longest_prefix = parent.longest_prefix
        if parent.has_value:
            current.longest_prefix = parent
        suffix = parent
        while True:
            if not suffix.has_suffix:
                current.suffix = self.root
                break
            else:
                suffix = suffix.suffix
                if current.uplink in suffix:
                    current.suffix = suffix[current.uplink]
                    break
        suffix = current.suffix
        while not suffix.has_value and suffix.has_suffix:
            suffix = suffix.suffix
        if suffix.has_value:
            current.dict_suffix = suffix
Sets all suffix links in all nodes in this trie.
379,442
def add_tunnel_port(self, name, tunnel_type, remote_ip, local_ip=None,
                    key=None, ofport=None):
    options =  % locals()
    if key:
        options +=  % locals()
    if local_ip:
        options +=  % locals()
    args = [, name,  % tunnel_type,  % options]
    if ofport:
        args.append( % locals())
    command_add = ovs_vsctl.VSCtlCommand(, (self.br_name, name))
    command_set = ovs_vsctl.VSCtlCommand(, args)
    self.run_command([command_add, command_set])
Creates a tunnel port. :param name: Port name to be created :param tunnel_type: Type of tunnel (gre or vxlan) :param remote_ip: Remote IP address of tunnel :param local_ip: Local IP address of tunnel :param key: Key of GRE or VNI of VxLAN :param ofport: Requested OpenFlow port number
379,443
def _unicode(p):
    q = {}
    for tag in p:
        vals = []
        for v in p[tag]:
            if type(v) is not str:
                v = v.decode()
            vals.append(v)
        if type(tag) is not str:
            tag = tag.decode()
        q[tag] = vals
    return q
Used when force_unicode is True (default), the tags and values in the dict will be coerced as 'str' (or 'unicode' with Python2). In Python3 they can otherwise end up as a mixture of 'str' and 'bytes'.
379,444
def enumerate(self, **kwargs):
    for item in self.set.enumerate(**kwargs):
        yield flattened(item)
Iterate through all possible sequences (lists). By default, will stop after 50 items have been yielded. This value can be changed by supplying a different value via the max_enumerate kwarg.
379,445
def _fetch_AlignmentMapper(self, tx_ac, alt_ac=None, alt_aln_method=None):
    if alt_ac is None:
        alt_ac = self._alt_ac_for_tx_ac(tx_ac)
    if alt_aln_method is None:
        alt_aln_method = self.alt_aln_method
    return super(AssemblyMapper, self)._fetch_AlignmentMapper(
        tx_ac, alt_ac, alt_aln_method)
convenience version of VariantMapper._fetch_AlignmentMapper that derives alt_ac from transcript, assembly, and alt_aln_method used to instantiate the AssemblyMapper instance
379,446
def retrieve_tmpl(args):
    password = get_password(args)
    token = connect.get_token(args.username, password, args.server)
    if args.__dict__.get():
        template = args.template
        processing.get_template(args.server, token, template)
    else:
        processing.get_all_templates(args.server, token)
Retrieve template. Argument: args: arguments object
379,447
def server_rules(self):
    sftp = self.client.open_sftp()
    try:
        rule_path = self.rule_location
        try:
            stat_entry = sftp.stat(rule_path)
            if stat.S_ISDIR(stat_entry.st_mode):
                sftp.rmdir(rule_path)
                return []
        except IOError:
            return []
        with sftp.open(rule_path, ) as file_handle:
            data = file_handle.read()
        return self._parse(data)
    finally:
        sftp.close()
Reads the server rules from the client and returns it.
379,448
def render_services_ctrl(self, request):
    urls = self.Urls()
    urls.auth_activate = 
    urls.auth_deactivate = 
    urls.auth_reset_password = 
    urls.auth_set_password = 
    return render_to_string(self.service_ctrl_template, {
        : self.title,
        : urls,
        : self.service_url,
        : ,
    }, request=request)
Example for rendering the service control panel row You can override the default template and create a custom one if you wish. :param request: :return:
379,449
def _move_to_top(self, pos):
    if pos > 0:
        self.queue.rotate(-pos)
        item = self.queue.popleft()
        self.queue.rotate(pos)
        self.queue.appendleft(item)
Move element at given position to top of queue.
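A standalone walk-through of the same rotate/popleft/rotate/appendleft dance, with hypothetical queue contents:

from collections import deque

queue = deque(['a', 'b', 'c', 'd'])
pos = 2
if pos > 0:
    queue.rotate(-pos)      # bring the element at `pos` to the front
    item = queue.popleft()  # remove it ('c')
    queue.rotate(pos)       # undo the rotation for the rest
    queue.appendleft(item)  # put it on top
print(queue)  # deque(['c', 'a', 'b', 'd'])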
379,450
def _do(self, cmd, *args, **kwargs):
    handle_out = kwargs.get(, _flush_write_stdout)
    if self._autosync_dirs:
        mlabraw.eval(self._session, "cd('%s');" % os.getcwd().replace())
    try:
        ...  # body lost in extraction: it builds "arg%d__" temporaries for
             # `args`, evaluates "[%s]=%s;" with the requested nout (";" vs
             # "" depending on `show`), and raises "Can't cast: 0 nout" when
             # `cast` is combined with nout=0
        if  in kwargs:
            return kwargs[](res)
        else:
            return res
    finally:
        if len(tempargs) and self._clear_call_args:
            mlabraw.eval(self._session, "clear('%s');" % "".join(tempargs))
Semi-raw execution of a matlab command. Smartly handle calls to matlab, figure out what to do with `args`, and when to use function call syntax and not. If no `args` are specified, the ``cmd`` not ``result = cmd()`` form is used in Matlab -- this also makes literal Matlab commands legal (eg. cmd=``get(gca, 'Children')``). If ``nout=0`` is specified, the Matlab command is executed as procedure, otherwise it is executed as function (default), nout specifying how many values should be returned (default 1). **Beware that if you don't specify ``nout=0`` for a `cmd` that never returns a value, an error will be raised** (because assigning a variable to a call that doesn't return a value is illegal in matlab). ``cast`` specifies which typecast should be applied to the result (e.g. `int`), it defaults to none. XXX: should we add ``parens`` parameter?
379,451
def speech_src(self):
    if self.src:
        return self.src
    elif self.parent:
        return self.parent.speech_src()
    else:
        return None
Retrieves the URL/filename of the audio or video file associated with the element. The source is inherited from ancestor elements if none is specified. For this reason, always use this method rather than access the ``src`` attribute directly. Returns: str or None if not found
379,452
def orchestration_save(self, mode="shallow", custom_params=None):
    save_params = {: , : , : True}
    params = dict()
    if custom_params:
        params = jsonpickle.decode(custom_params)
    save_params.update(params.get(, {}))
    save_params[] = self.get_path(save_params[])
    saved_artifact = self.save(**save_params)
    saved_artifact_info = OrchestrationSavedArtifactInfo(
        resource_name=self.resource_config.name,
        created_date=datetime.datetime.now(),
        restore_rules=self.get_restore_rules(),
        saved_artifact=saved_artifact)
    save_response = OrchestrationSaveResult(saved_artifacts_info=saved_artifact_info)
    self._validate_artifact_info(saved_artifact_info)
    return serialize_to_json(save_response)
Orchestration Save command :param mode: :param custom_params: json with all required action to configure or remove vlans from certain port :return Serialized OrchestrationSavedArtifact to json :rtype json
379,453
def start_exp():
    if not (( in request.args) and ( in request.args) and
            ( in request.args) and ( in request.args)):
        raise ExperimentError()
    hit_id = request.args[]
    assignment_id = request.args[]
    worker_id = request.args[]
    mode = request.args[]
    app.logger.info("Accessing /exp: %(h)s %(a)s %(w)s " % {
        "h": hit_id, "a": assignment_id, "w": worker_id})
    if hit_id[:5] == "debug":
        debug_mode = True
    else:
        debug_mode = False
    allow_repeats = CONFIG.getboolean(, )
    if allow_repeats:
        matches = Participant.query.\
            filter(Participant.workerid == worker_id).\
            filter(Participant.assignmentid == assignment_id).\
            all()
    else:
        matches = Participant.query.\
            filter(Participant.workerid == worker_id).\
            all()
    numrecs = len(matches)
    if numrecs == 0:
        subj_cond, subj_counter = get_random_condcount(mode)
        worker_ip = "UNKNOWN" if not request.remote_addr else \
            request.remote_addr
        browser = "UNKNOWN" if not request.user_agent.browser else \
            request.user_agent.browser
        platform = "UNKNOWN" if not request.user_agent.platform else \
            request.user_agent.platform
        language = "UNKNOWN" if not request.user_agent.language else \
            request.user_agent.language
        participant_attributes = dict(
            assignmentid=assignment_id,
            workerid=worker_id,
            hitid=hit_id,
            cond=subj_cond,
            counterbalance=subj_counter,
            ipaddress=worker_ip,
            browser=browser,
            platform=platform,
            language=language,
            mode=mode)
        part = Participant(**participant_attributes)
        db_session.add(part)
        db_session.commit()
    else:
        if part.status >= STARTED and not debug_mode:
            raise ExperimentError()
        else:
            if nrecords > 1:
                app.logger.error("Error, hit/assignment appears in database "
                                 "more than once (serious problem)")
                raise ExperimentError()
            if other_assignment:
                raise ExperimentError()
    use_psiturk_ad_server = CONFIG.getboolean(, )
    if use_psiturk_ad_server and (mode ==  or mode == ):
        ad_id = get_ad_via_hitid(hit_id)
        if ad_id != "error":
            if mode == "sandbox":
                ad_server_location =  + str(ad_id)
            elif mode == "live":
                ad_server_location =  + str(ad_id)
        else:
            raise ExperimentError()
    else:
        ad_server_location = 
    return render_template(
        ,
        uniqueId=part.uniqueid,
        condition=part.cond,
        counterbalance=part.counterbalance,
        adServerLoc=ad_server_location,
        mode=mode,
        contact_address=CONFIG.get(, ))
Serves up the experiment applet.
379,454
def build_block(self):
    header_bytes = self.block_header.SerializeToString()
    block = Block(header=header_bytes,
                  header_signature=self._header_signature)
    block.batches.extend(self.batches)
    return block
Assembles the candidate block into its finalized form for broadcast.
379,455
def _discover_ontology(ontology_path):
    last_part = os.path.split(os.path.abspath(ontology_path))[1]
    possible_patterns = [last_part, last_part.lower()]
    if not last_part.endswith():
        possible_patterns.append(last_part + )
    places = [os.path.join(current_app.instance_path, "classifier"),
              os.path.abspath(),
              os.path.join(os.path.dirname(__file__), "classifier")]
    workdir = current_app.config.get()
    if workdir:
        places.append(workdir)
    current_app.logger.debug(
        "Searching for taxonomy using string: %s" % last_part)
    current_app.logger.debug("Possible patterns: %s" % possible_patterns)
    for path in places:
        try:
            if os.path.isdir(path):
                current_app.logger.debug("Listing: %s" % path)
                for filename in os.listdir(path):
                    for pattern in possible_patterns:
                        filename_lc = filename.lower()
                        if pattern == filename_lc and \
                                os.path.exists(os.path.join(path, filename)):
                            filepath = os.path.abspath(os.path.join(path, filename))
                            if os.access(filepath, os.R_OK):
                                current_app.logger.debug(
                                    "Found taxonomy at: {0}".format(filepath))
                                return filepath
                            else:
                                current_app.logger.warning(.format(filepath))
        except OSError as os_error_msg:
            current_app.logger.exception(
                .format(str(path), str(os_error_msg)))
    current_app.logger.debug(
        "No taxonomy with pattern '{}' found".format(ontology_path))
Look for the file in known places. :param ontology: path name or url :type ontology: str :return: absolute path of a file if found, or None
379,456
def compute_chunksize(df, num_splits, default_block_size=32, axis=None):
    if axis == 0 or axis is None:
        row_chunksize = get_default_chunksize(len(df.index), num_splits)
        row_chunksize = max(1, row_chunksize, default_block_size)
        if axis == 0:
            return row_chunksize
    col_chunksize = get_default_chunksize(len(df.columns), num_splits)
    col_chunksize = max(1, col_chunksize, default_block_size)
    if axis == 1:
        return col_chunksize
    return row_chunksize, col_chunksize
Computes the number of rows and/or columns to include in each partition. Args: df: The DataFrame to split. num_splits: The maximum number of splits to separate the DataFrame into. default_block_size: Minimum number of rows/columns (default set to 32x32). axis: The axis to split. (0: Index, 1: Columns, None: Both) Returns: If axis is 1 or 0, returns an integer number of rows/columns to split the DataFrame. If axis is None, return a tuple containing both.
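A minimal sketch of the clamping logic above; get_default_chunksize is not shown in the source, so a plausible ceil-division stand-in is assumed:

from math import ceil

def get_default_chunksize(length, num_splits):
    # Assumed stand-in: split the axis length evenly across num_splits.
    return ceil(length / num_splits)

row_chunksize = get_default_chunksize(1000, 16)  # 63
row_chunksize = max(1, row_chunksize, 32)        # clamped to at least 32
print(row_chunksize)  # 63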
379,457
def get_devices(self, refresh=False):
    if refresh or self._devices is None:
        if self._devices is None:
            self._devices = {}
        _LOGGER.info("Updating all devices...")
        response = self.send_request("get", CONST.DEVICES_URL)
        response_object = json.loads(response.text)
        _LOGGER.debug("Get Devices Response: %s", response.text)
        for device_json in response_object:
            device = self._devices.get(device_json[])
            if device:
                device.update(device_json)
            else:
                device = SkybellDevice(device_json, self)
                self._devices[device.device_id] = device
    return list(self._devices.values())
Get all devices from Abode.
379,458
def get_vowel(syll):
    return re.search(r, syll, flags=FLAGS).group(1).upper()
Return the firstmost vowel in 'syll'.
379,459
def _chunk(self, size):
    items = self.items
    return [items[i:i + size] for i in range(0, len(items), size)]
Chunk the underlying collection. :param size: The chunk size :type size: int :rtype: Collection
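The slice-step trick above is easy to verify on a plain list:

items = [1, 2, 3, 4, 5, 6, 7]
size = 3
chunks = [items[i:i + size] for i in range(0, len(items), size)]
print(chunks)  # [[1, 2, 3], [4, 5, 6], [7]]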
379,460
def kernel(x1, x2, method=, sigma=1, **kwargs):
    if method.lower() in [, , ]:
        K = np.exp(-dist(x1, x2) / (2 * sigma**2))
    return K
Compute kernel matrix
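A self-contained RBF-kernel sketch matching the formula above; the source's dist() and the elided method names are not shown, so a squared-euclidean distance is assumed:

import numpy as np

def dist(x1, x2):
    # Assumed pairwise squared euclidean distance, shape (len(x1), len(x2)).
    return ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)

x1 = np.array([[0.0], [1.0]])
x2 = np.array([[0.0], [2.0]])
sigma = 1.0
K = np.exp(-dist(x1, x2) / (2 * sigma ** 2))
print(K.round(3))  # [[1.    0.135] [0.607 0.607]]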
379,461
def register(self, command: str, handler: Any):
    if not command.startswith("/"):
        command = f"/{command}"
    LOG.info("Registering %s to %s", command, handler)
    self._routes[command].append(handler)
Register a new handler for a specific slash command Args: command: Slash command handler: Callback
379,462
def _add_default_source(self):
    if self.modname:
        if self._package_path:
            filename = os.path.join(self._package_path, DEFAULT_FILENAME)
            if os.path.isfile(filename):
                self.add(ConfigSource(load_yaml(filename), filename, True))
Add the package's default configuration settings. This looks for a YAML file located inside the package for the module `modname` if it was given.
379,463
def get_log_entry_log_session(self, proxy):
    if not self.supports_log_entry_log():
        raise errors.Unimplemented()
    return sessions.LogEntryLogSession(proxy=proxy, runtime=self._runtime)
Gets the session for retrieving log entry to log mappings. arg: proxy (osid.proxy.Proxy): a proxy return: (osid.logging.LogEntryLogSession) - a ``LogEntryLogSession`` raise: NullArgument - ``proxy`` is ``null`` raise: OperationFailed - unable to complete request raise: Unimplemented - ``supports_log_entry_log()`` is ``false`` *compliance: optional -- This method must be implemented if ``supports_log_entry_log()`` is ``true``.*
379,464
def _decrypt(self, rj, token):
    if self.iss:
        keys = self.key_jar.get_jwt_decrypt_keys(rj.jwt, aud=self.iss)
    else:
        keys = self.key_jar.get_jwt_decrypt_keys(rj.jwt)
    return rj.decrypt(token, keys=keys)
Decrypt an encrypted JsonWebToken :param rj: :py:class:`cryptojwt.jwe.JWE` instance :param token: The encrypted JsonWebToken :return:
379,465
def get_value(self, tau):
    tau = np.asarray(tau)
    (alpha_real, beta_real,
     alpha_complex_real, alpha_complex_imag,
     beta_complex_real, beta_complex_imag) = self.coefficients
    k = get_kernel_value(
        alpha_real, beta_real,
        alpha_complex_real, alpha_complex_imag,
        beta_complex_real, beta_complex_imag,
        tau.flatten(),
    )
    return np.asarray(k).reshape(tau.shape)
Compute the value of the term for an array of lags Args: tau (array[...]): An array of lags where the term should be evaluated. Returns: The value of the term for each ``tau``. This will have the same shape as ``tau``.
379,466
def pick_input_v1(self):
    flu = self.sequences.fluxes.fastaccess
    inl = self.sequences.inlets.fastaccess
    flu.input = 0.
    for idx in range(inl.len_total):
        flu.input += inl.total[idx][0]
Updates |Input| based on |Total|.
379,467
def setMaximumHeight(self, height):
    super(XView, self).setMaximumHeight(height)
    if not self.signalsBlocked():
        self.sizeConstraintChanged.emit()
Sets the maximum height value to the inputed height and emits the \ sizeConstraintChanged signal. :param height | <int>
379,468
def encapsulate_processing(object):
    @functools.wraps(object)
    def encapsulate_processing_wrapper(*args, **kwargs):
        RuntimeGlobals.engine._Umbra__store_processing_state()
        RuntimeGlobals.engine.stop_processing(warning=False)
        try:
            return object(*args, **kwargs)
        finally:
            RuntimeGlobals.engine.stop_processing(warning=False)
            RuntimeGlobals.engine._Umbra__restore_processing_state()
    return encapsulate_processing_wrapper
Encapsulates a processing operation. :param object: Object to decorate. :type object: object :return: Object. :rtype: object
379,469
def set_stream(self, stream_id):
    self._group[] = stream_id
    yield from self._server.group_stream(self.identifier, stream_id)
    _LOGGER.info(, stream_id, self.friendly_name)
Set group stream.
379,470
def list_queues(region, opts=None, user=None):
    out = _run_aws(, region, opts, user)
    ret = {
        : 0,
        : out[],
    }
    return ret
List the queues in the selected region. region Region to list SQS queues for opts : None Any additional options to add to the command line user : None Run the command as a user other than what the minion runs as CLI Example: salt '*' aws_sqs.list_queues <region>
379,471
def get_works(self):
    Work = self._session.get_class(surf.ns.EFRBROO[])
    return list(Work.all())
Return the author's works. :return: a list of `HucitWork` instances.
379,472
def load_url(self, url, force=False, reload_seconds=0, callback_function=None):
    def launch_callback():
        should_reload = not force and reload_seconds not in (0, None)
        reload_milliseconds = (0 if not should_reload
                               else reload_seconds * 1000)
        msg = {
            "url": url,
            "force": force,
            "reload": should_reload,
            "reload_time": reload_milliseconds,
        }
        self.send_message(msg, inc_session_id=True,
                          callback_function=callback_function)

    self.launch(callback_function=launch_callback)
Starts loading a URL with an optional reload time in seconds. Setting force to True may load pages which block iframe embedding, but will prevent reload from working and will cause calls to load_url() to reload the app.
379,473
def end_node(self):
    return db.cypher_query("MATCH (aNode) "
                           "WHERE id(aNode)={nodeid} "
                           "RETURN aNode".format(nodeid=self._end_node_id),
                           resolve_objects=True)[0][0][0]
Get end node :return: StructuredNode
379,474
def concat(self, to_concat, new_axis):
    non_empties = [x for x in to_concat if len(x) > 0]
    if len(non_empties) > 0:
        blocks = [obj.blocks[0] for obj in non_empties]
        if len({b.dtype for b in blocks}) == 1:
            new_block = blocks[0].concat_same_type(blocks)
        else:
            values = [x.values for x in blocks]
            values = _concat._concat_compat(values)
            new_block = make_block(values, placement=slice(0, len(values), 1))
    else:
        values = [x._block.values for x in to_concat]
        values = _concat._concat_compat(values)
        new_block = make_block(values, placement=slice(0, len(values), 1))
    mgr = SingleBlockManager(new_block, new_axis)
    return mgr
Concatenate a list of SingleBlockManagers into a single SingleBlockManager. Used for pd.concat of Series objects with axis=0. Parameters ---------- to_concat : list of SingleBlockManagers new_axis : Index of the result Returns ------- SingleBlockManager
379,475
def _recordAndPrintHeadline(self, test, error_class, artifact):
    is_error_class = False
    for cls, (storage, label, is_failure) in self.errorClasses.items():
        if isclass(error_class) and issubclass(error_class, cls):
            if is_failure:
                test.passed = False
            storage.append((test, artifact))
            is_error_class = True
    if not is_error_class:
        self.errors.append((test, artifact))
        test.passed = False
    is_any_failure = not is_error_class or is_failure
    self._printHeadline(label if is_error_class else ,
                        test, is_failure=is_any_failure)
    return is_any_failure
Record that an error-like thing occurred, and print a summary. Store ``artifact`` with the record. Return whether the test result is any sort of failure.
379,476
def matchingfiles(self, projectpath):
    results = []
    if projectpath[-1] == :
        inputpath = projectpath + 
    else:
        inputpath = projectpath + 
    for linkf, realf in clam.common.util.globsymlinks(inputpath +  + self.id + ):
        seqnr = int(linkf.split()[-1])
        results.append((seqnr, realf[len(inputpath):], self))
    results = sorted(results)
    if self.unique and len(results) != 1:
        return []
    else:
        return results
Checks if the input conditions are satisfied, i.e the required input files are present. We use the symbolic links .*.INPUTTEMPLATE.id.seqnr to determine this. Returns a list of matching results (seqnr, filename, inputtemplate).
379,477
def register(self, app, options, first_registration=False):
    self.jsonrpc_site = options.get()
    self._got_registered_once = True
    state = self.make_setup_state(app, options, first_registration)
    if self.has_static_folder and \
            not self.name + '.static' in state.app.view_functions.keys():
        state.add_url_rule(self.static_url_path + '/<path:filename>',
                           view_func=self.send_static_file,
                           endpoint='static')
    for deferred in self.deferred_functions:
        deferred(state)
Called by :meth:`Flask.register_blueprint` to register a blueprint on the application. This can be overridden to customize the register behavior. Keyword arguments from :func:`~flask.Flask.register_blueprint` are directly forwarded to this method in the `options` dictionary.
379,478
def create(cls, name, ncpus=None):
    try:
        return cls._predictors[name.lower()](ncpus=ncpus)
    except KeyError:
        raise Exception("Unknown class")
Create a Moap instance based on the predictor name. Parameters ---------- name : str Name of the predictor (eg. Xgboost, BayesianRidge, ...) ncpus : int, optional Number of threads. Default is the number specified in the config. Returns ------- moap : Moap instance moap instance.
379,479
def is_sufficient(self, device):
    sufficient = True
    if (self.min_vram is not None) and (device.vram < self.min_vram):
        sufficient = False
    return sufficient
Returns whether the device is sufficient for this requirement. :param device: A GPUDevice instance. :type device: GPUDevice :return: True if the requirement is fulfilled otherwise False
379,480
def to_unicode(value):
    if isinstance(value, _TO_UNICODE_TYPES):
        return value
    if not isinstance(value, bytes):
        raise TypeError(
            "Expected bytes, unicode, or None; got %r" % type(value))
    return value.decode("utf-8")
Converts a string argument to a unicode string. If the argument is already a unicode string or None, it is returned unchanged. Otherwise it must be a byte string and is decoded as utf8.
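A runnable mirror of the function above; _TO_UNICODE_TYPES is not shown in the source, so (str, type(None)) is assumed as a stand-in:

_TO_UNICODE_TYPES = (str, type(None))  # assumed definition

def to_unicode(value):
    if isinstance(value, _TO_UNICODE_TYPES):
        return value
    if not isinstance(value, bytes):
        raise TypeError("Expected bytes, unicode, or None; got %r" % type(value))
    return value.decode("utf-8")

print(to_unicode(b"caf\xc3\xa9"))  # café
print(to_unicode("already text"))  # passed through unchanged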
379,481
def state_set(self, state, use_active_range=False):
    self.description = state[]
    if use_active_range:
        self._index_start, self._index_end = state[]
    self._length_unfiltered = self._index_end - self._index_start
    if  in state:
        for old, new in state[]:
            self._rename(old, new)
    for name, value in state[].items():
        self.add_function(name, vaex.serialize.from_dict(value))
    if  in state:
        self.column_names = []
        self.virtual_columns = collections.OrderedDict()
        for name, value in state[].items():
            self[name] = self._expr(value)
        self.column_names = state[]
    else:
        self.virtual_columns = collections.OrderedDict()
        for name, value in state[].items():
            self[name] = self._expr(value)
    self.variables = state[]
    import astropy
    units = {key: astropy.units.Unit(value)
             for key, value in state["units"].items()}
    self.units.update(units)
    for name, selection_dict in state[].items():
        if selection_dict is None:
            selection = None
        else:
            selection = selections.selection_from_dict(selection_dict)
        self.set_selection(selection, name=name)
Sets the internal state of the df Example: >>> import vaex >>> df = vaex.from_scalars(x=1, y=2) >>> df # x y r 0 1 2 2.23607 >>> df['r'] = (df.x**2 + df.y**2)**0.5 >>> state = df.state_get() >>> state {'active_range': [0, 1], 'column_names': ['x', 'y', 'r'], 'description': None, 'descriptions': {}, 'functions': {}, 'renamed_columns': [], 'selections': {'__filter__': None}, 'ucds': {}, 'units': {}, 'variables': {}, 'virtual_columns': {'r': '(((x ** 2) + (y ** 2)) ** 0.5)'}} >>> df2 = vaex.from_scalars(x=3, y=4) >>> df2.state_set(state) # now the virtual functions are 'copied' >>> df2 # x y r 0 3 4 5 :param state: dict as returned by :meth:`DataFrame.state_get`. :param bool use_active_range: Whether to use the active range or not.
379,482
def _clean_frequency(frequency):
    if isinstance(frequency, int):
        return frequency
    elif isinstance(frequency, datetime.timedelta):
        return int(frequency.total_seconds())
    raise ValueError(.format(frequency))
Converts a frequency value to an integer. Raises an error if an invalid type is given. :param frequency: A frequency :type frequency: int or datetime.timedelta :rtype: int
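A standalone check of the timedelta-to-seconds conversion described above (the error message below is made up, since it is elided in the source):

import datetime

def clean_frequency(frequency):
    if isinstance(frequency, int):
        return frequency
    elif isinstance(frequency, datetime.timedelta):
        return int(frequency.total_seconds())
    raise ValueError("Invalid frequency: {!r}".format(frequency))  # message assumed

print(clean_frequency(300))                            # 300
print(clean_frequency(datetime.timedelta(minutes=5)))  # 300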
379,483
def get_encoding_from_headers(headers):
    content_type = headers.get('content-type')
    if not content_type:
        return None
    content_type, params = cgi.parse_header(content_type)
    if 'charset' in params:
        return params['charset'].strip("'\"")
Returns encodings from given HTTP Header Dict. :param headers: dictionary to extract encoding from.
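The same parsing steps run on a sample header (note cgi.parse_header keeps single quotes, hence the trailing strip):

import cgi

headers = {'content-type': "text/html; charset='utf-8'"}
content_type = headers.get('content-type')
content_type, params = cgi.parse_header(content_type)
print(params['charset'].strip("'\""))  # utf-8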
379,484
def warn(self, collection):
    super(CodeElement, self).warn(collection)
    if not "implicit none" in self.modifiers:
        collection.append("WARNING: implicit none not set in {}".format(self.name))
Checks the module for documentation and best-practice warnings.
379,485
def with_tz(request):
    dt = datetime.now()
    t = Template()
    c = RequestContext(request)
    response = t.render(c)
    return HttpResponse(response)
Get the time with TZ enabled
379,486
def json(self):
    data = {
        "segment_name": self.segment_name,
        "formula": self.formula,
        "warnings": [w.json() for w in self.warnings],
        "model_params": self.model_params,
    }
    return data
Return a JSON-serializable representation of this result. The output of this function can be converted to a serialized string with :any:`json.dumps`.
379,487
def convert_nsarg(
    nsarg: str,
    api_url: str = None,
    namespace_targets: Mapping[str, List[str]] = None,
    canonicalize: bool = False,
    decanonicalize: bool = False,
) -> str:
    if not api_url:
        api_url = config["bel_api"]["servers"]["api_url"]
    if not api_url:
        log.error("Missing api url - cannot convert namespace")
        return None
    params = None
    if namespace_targets:
        namespace_targets_str = json.dumps(namespace_targets)
        params = {"namespace_targets": namespace_targets_str}
    if not namespace_targets:
        if canonicalize:
            api_url = api_url + "/terms/{}/canonicalized"
        elif decanonicalize:
            api_url = api_url + "/terms/{}/decanonicalized"
        else:
            log.warning("Missing (de)canonical flag - cannot convert namespaces")
            return nsarg
    else:
        api_url = api_url + "/terms/{}/canonicalized"
    request_url = api_url.format(url_path_param_quoting(nsarg))
    r = get_url(request_url, params=params, timeout=10)
    if r and r.status_code == 200:
        nsarg = r.json().get("term_id", nsarg)
    elif not r or r.status_code == 404:
        log.error(f"[de]Canonicalization endpoint missing: {request_url}")
    return nsarg
[De]Canonicalize NSArg Args: nsarg (str): bel statement string or partial string (e.g. subject or object) api_url (str): BEL.bio api url to use, e.g. https://api.bel.bio/v1 namespace_targets (Mapping[str, List[str]]): formatted as in configuration file example canonicalize (bool): use canonicalize endpoint/namespace targets decanonicalize (bool): use decanonicalize endpoint/namespace targets Results: str: converted NSArg
379,488
def _dist_obs_oracle(oracle, query, trn_list):
    a = np.subtract(query, [oracle.f_array[t] for t in trn_list])
    return (a * a).sum(axis=1)
A helper function calculating distances between a feature and frames in oracle.
379,489
def set_speech_ssml(self, ssml):
    self.response.outputSpeech.type = 'SSML'
    self.response.outputSpeech.ssml = ssml
Set response output speech as SSML type. Args: ssml: str. Response speech used when type is 'SSML', should be formatted with Speech Synthesis Markup Language. Cannot exceed 8,000 characters.
379,490
def _fake_enumerateclassnames(self, namespace, **params):
    self._validate_namespace(namespace)
    classname = params.get(, None)
    if classname:
        assert(isinstance(classname, CIMClassName))
        if not self._class_exists(classname.classname, namespace):
            raise CIMError(
                CIM_ERR_INVALID_CLASS,
                _format("The class {0!A} defined by parameter "
                        "does not exist in namespace {1!A}",
                        classname, namespace))
    clns = self._get_subclass_names(classname, namespace, params[])
    rtn_clns = [CIMClassName(cn, namespace=namespace, host=self.host)
                for cn in clns]
    return self._make_tuple(rtn_clns)
Implements a mock server responder for :meth:`~pywbem.WBEMConnection.EnumerateClassNames`. Enumerates the classnames of the classname in the 'classname' parameter or from the top of the tree if 'classname is None. Returns: return tuple including list of classnames Raises: CIMError: CIM_ERR_INVALID_NAMESPACE: invalid namespace, CIMError: CIM_ERR_INVALID_CLASS: class defined by the classname parameter does not exist
379,491
def file_code(function_index=1, function_name=None):
    info = function_info(function_index + 1, function_name)
    with open(info[], ) as fn:
        return fn.read()
This will return the code of the calling function. A function_index of 2 will give the parent of the caller. function_name should not be used with function_index. :param function_index: int of how many frames back the program should look :param function_name: str of what function to look for :return: str of the code from the target function
379,492
def read_word(self, offset):
    self._lock = True
    if offset > self.current_max_offset:
        raise BUSError("Offset({}) exceeds address space of BUS({})".format(
            offset, self.current_max_offset))
    self.reads += 1
    for addresspace, device in self.index.items():
        if offset in addresspace:
            if self.debug > 5:
                print("BUS::read({}) | startaddress({})> {}".format(
                    offset, self.start_addresses[device],
                    device.read(offset - self.start_addresses[device])))
            self.truncate.setvalue(
                device.read(offset - self.start_addresses[device]))
            return self.truncate.getvalue()
.. _read_word: Read one word from a device. The offset is ``device_addr + device_offset``, e.g.:: offset = 3 # third word of the device offset += addr2 b.read_word(offset) # reads third word of d2. Truncates the value according to ``width``. May raise BUSError_, if the offset exceeds the address space.
379,493
def load_np(self, imname, data_np, imtype, header):
    load_buffer = self._client.lookup_attr()
    return load_buffer(imname, self._chname, Blob(data_np.tobytes()),
                       data_np.shape, str(data_np.dtype),
                       header, {}, False)
Display a numpy image buffer in a remote Ginga reference viewer. Parameters ---------- imname : str A name to use for the image in the reference viewer. data_np : ndarray This should be at least a 2D Numpy array. imtype : str Image type--currently ignored. header : dict Fits header as a dictionary, or other keyword metadata. Returns ------- 0 Notes ----- * The "RC" plugin needs to be started in the viewer for this to work.
379,494
def _cutout_psf(self, image, subgrid_res):
    self._init_mask_psf()
    return image[self._x_min_psf*subgrid_res:(self._x_max_psf+1)*subgrid_res,
                 self._y_min_psf*subgrid_res:(self._y_max_psf+1)*subgrid_res]
cutout the part of the image relevant for the psf convolution :param image: :return:
379,495
def new_db_from_pandas(self, frame, table=None, data=None, load=True, **kwargs):
    from ..orm import Column
    with self.bundle.session:
        sch = self.bundle.schema
        t = sch.new_table(table)
        if frame.index.name:
            id_name = frame.index.name
        else:
            id_name = 
        sch.add_column(t, id_name,
                       datatype=Column.convert_numpy_type(frame.index.dtype),
                       is_primary_key=True)
        for name, type_ in zip(
                [row for row in frame.columns],
                [row for row in frame.convert_objects(
                    convert_numeric=True, convert_dates=True).dtypes]):
            sch.add_column(t, name, datatype=Column.convert_numpy_type(type_))
        sch.write_schema()
    p = self.new_partition(table=table, data=data, **kwargs)
    if load:
        pk_name = frame.index.name
        with p.inserter(table) as ins:
            for i, row in frame.iterrows():
                d = dict(row)
                d[pk_name] = i
                ins.insert(d)
    return p
Create a new db partition from a pandas data frame.

If the table does not exist, it will be created.
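A hedged usage sketch; `partitions` stands in for whatever object this
method is bound to, and the table name is hypothetical:

import pandas as pd

df = pd.DataFrame({'value': [1.0, 2.5, 3.7]})
df.index.name = 'measurement_id'   # becomes the primary-key column
p = partitions.new_db_from_pandas(df, table='measurements')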
379,496
def create(self, campaign_id, data, **queryparams):
    self.campaign_id = campaign_id
    if 'message' not in data:
        raise KeyError('The campaign feedback must have a message')
    # 'feedback' and 'feedback_id' follow the campaigns/{id}/feedback
    # resource; the literals were lost in extraction.
    response = self._mc_client._post(
        url=self._build_path(campaign_id, 'feedback'),
        data=data,
        **queryparams)
    if response is not None:
        self.feedback_id = response['feedback_id']
    else:
        self.feedback_id = None
    return response
Add feedback on a specific campaign. :param campaign_id: The unique id for the campaign. :type campaign_id: :py:class:`str` :param data: The request body parameters :type data: :py:class:`dict` data = { "message": string* } :param queryparams: The query string parameters queryparams['fields'] = [] queryparams['exclude_fields'] = []
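A sketch of calling this through the mailchimp3 client wrapper; the API key
and campaign id are placeholders:

from mailchimp3 import MailChimp

client = MailChimp(mc_api='YOUR-API-KEY-us1')   # placeholder key
client.campaigns.feedback.create(
    campaign_id='42694e9e57',                   # hypothetical campaign id
    data={'message': 'Subject line looks good to me.'})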
379,497
def list_instances(self, hourly=True, monthly=True, tags=None, cpus=None,
                   memory=None, hostname=None, domain=None, local_disk=None,
                   datacenter=None, nic_speed=None, public_ip=None,
                   private_ip=None, **kwargs):
    # The mask items, service-call names, and filter keys below were lost
    # in extraction; they are reconstructed from the SoftLayer VSManager
    # API.
    if 'mask' not in kwargs:
        items = [
            'id',
            'globalIdentifier',
            'hostname',
            'domain',
            'fullyQualifiedDomainName',
            'primaryBackendIpAddress',
            'primaryIpAddress',
            'lastKnownPowerState.name',
            'hourlyBillingFlag',
            'powerState',
            'maxCpu',
            'maxMemory',
            'datacenter',
            'activeTransaction.transactionStatus[friendlyName,name]',
            'status',
        ]
        kwargs['mask'] = "mask[%s]" % ','.join(items)

    call = 'getVirtualGuests'
    if not all([hourly, monthly]):
        if hourly:
            call = 'getHourlyVirtualGuests'
        elif monthly:
            call = 'getMonthlyVirtualGuests'

    _filter = utils.NestedDict(kwargs.get('filter') or {})
    if tags:
        _filter['virtualGuests']['tagReferences']['tag']['name'] = {
            'operation': 'in',
            'options': [{'name': 'data', 'value': tags}],
        }

    if cpus:
        _filter['virtualGuests']['maxCpu'] = utils.query_filter(cpus)

    if memory:
        _filter['virtualGuests']['maxMemory'] = utils.query_filter(memory)

    if hostname:
        _filter['virtualGuests']['hostname'] = utils.query_filter(hostname)

    if domain:
        _filter['virtualGuests']['domain'] = utils.query_filter(domain)

    if local_disk is not None:
        _filter['virtualGuests']['localDiskFlag'] = (
            utils.query_filter(bool(local_disk)))

    if datacenter:
        _filter['virtualGuests']['datacenter']['name'] = (
            utils.query_filter(datacenter))

    if nic_speed:
        _filter['virtualGuests']['networkComponents']['maxSpeed'] = (
            utils.query_filter(nic_speed))

    if public_ip:
        _filter['virtualGuests']['primaryIpAddress'] = (
            utils.query_filter(public_ip))

    if private_ip:
        _filter['virtualGuests']['primaryBackendIpAddress'] = (
            utils.query_filter(private_ip))

    kwargs['filter'] = _filter.to_dict()
    kwargs['iter'] = True
    return self.client.call('Account', call, **kwargs)
Retrieve a list of all virtual servers on the account.

Example::

    # Print out a list of hourly instances in the DAL05 data center.
    for vsi in mgr.list_instances(hourly=True, datacenter='dal05'):
        print vsi['fullyQualifiedDomainName'], vsi['primaryIpAddress']

    # Using a custom object-mask. Will get ONLY what is specified
    object_mask = "mask[hostname,monitoringRobot[robotStatus]]"
    for vsi in mgr.list_instances(mask=object_mask, hourly=True):
        print vsi

:param boolean hourly: include hourly instances
:param boolean monthly: include monthly instances
:param list tags: filter based on list of tags
:param integer cpus: filter based on number of CPUS
:param integer memory: filter based on amount of memory
:param string hostname: filter based on hostname
:param string domain: filter based on domain
:param boolean local_disk: filter based on local_disk
:param string datacenter: filter based on datacenter
:param integer nic_speed: filter based on network speed (in MBPS)
:param string public_ip: filter based on public ip address
:param string private_ip: filter based on private ip address
:param dict \\*\\*kwargs: response-level options (mask, limit, etc.)
:returns: Returns a list of dictionaries representing the matching virtual servers
379,498
def describe_batch_predictions(FilterVariable=None, EQ=None, GT=None, LT=None, GE=None, LE=None, NE=None, Prefix=None, SortOrder=None, NextToken=None, Limit=None): pass
Returns a list of BatchPrediction operations that match the search criteria in the request.
See also: AWS API Documentation

:example: response = client.describe_batch_predictions(
    FilterVariable='CreatedAt'|'LastUpdatedAt'|'Status'|'Name'|'IAMUser'|'MLModelId'|'DataSourceId'|'DataURI',
    EQ='string',
    GT='string',
    LT='string',
    GE='string',
    LE='string',
    NE='string',
    Prefix='string',
    SortOrder='asc'|'dsc',
    NextToken='string',
    Limit=123
)
:type FilterVariable: string
:param FilterVariable: Use one of the following variables to filter a list of BatchPrediction:
    CreatedAt - Sets the search criteria to the BatchPrediction creation date.
    Status - Sets the search criteria to the BatchPrediction status.
    Name - Sets the search criteria to the contents of the BatchPrediction Name.
    IAMUser - Sets the search criteria to the user account that invoked the BatchPrediction creation.
    MLModelId - Sets the search criteria to the MLModel used in the BatchPrediction.
    DataSourceId - Sets the search criteria to the DataSource used in the BatchPrediction.
    DataURI - Sets the search criteria to the data file(s) used in the BatchPrediction. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

:type EQ: string
:param EQ: The equal to operator. The BatchPrediction results will have FilterVariable values that exactly match the value specified with EQ.

:type GT: string
:param GT: The greater than operator. The BatchPrediction results will have FilterVariable values that are greater than the value specified with GT.

:type LT: string
:param LT: The less than operator. The BatchPrediction results will have FilterVariable values that are less than the value specified with LT.

:type GE: string
:param GE: The greater than or equal to operator. The BatchPrediction results will have FilterVariable values that are greater than or equal to the value specified with GE.

:type LE: string
:param LE: The less than or equal to operator. The BatchPrediction results will have FilterVariable values that are less than or equal to the value specified with LE.

:type NE: string
:param NE: The not equal to operator. The BatchPrediction results will have FilterVariable values not equal to the value specified with NE.

:type Prefix: string
:param Prefix: A string that is found at the beginning of a variable, such as Name or Id.
    For example, a Batch Prediction operation could have the Name 2014-09-09-HolidayGiftMailer. To search for this BatchPrediction, select Name for the FilterVariable and any of the following strings for the Prefix:
    2014-09
    2014-09-09
    2014-09-09-Holiday

:type SortOrder: string
:param SortOrder: A two-value parameter that determines the sequence of the resulting list of BatchPredictions.
    asc - Arranges the list in ascending order (A-Z, 0-9).
    dsc - Arranges the list in descending order (Z-A, 9-0).
    Results are sorted by FilterVariable.

:type NextToken: string
:param NextToken: An ID of the page in the paginated results.

:type Limit: integer
:param Limit: The number of pages of information to include in the result. The range of acceptable values is 1 through 100. The default value is 100.
:rtype: dict :return: { 'Results': [ { 'BatchPredictionId': 'string', 'MLModelId': 'string', 'BatchPredictionDataSourceId': 'string', 'InputDataLocationS3': 'string', 'CreatedByIamUser': 'string', 'CreatedAt': datetime(2015, 1, 1), 'LastUpdatedAt': datetime(2015, 1, 1), 'Name': 'string', 'Status': 'PENDING'|'INPROGRESS'|'FAILED'|'COMPLETED'|'DELETED', 'OutputUri': 'string', 'Message': 'string', 'ComputeTime': 123, 'FinishedAt': datetime(2015, 1, 1), 'StartedAt': datetime(2015, 1, 1), 'TotalRecordCount': 123, 'InvalidRecordCount': 123 }, ], 'NextToken': 'string' } :returns: PENDING - Amazon Machine Learning (Amazon ML) submitted a request to generate predictions for a batch of observations. INPROGRESS - The process is underway. FAILED - The request to perform a batch prediction did not run to completion. It is not usable. COMPLETED - The batch prediction process completed successfully. DELETED - The BatchPrediction is marked as deleted. It is not usable.
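A minimal boto3 call matching this stub's documented shape; the filter
values are illustrative:

import boto3

client = boto3.client('machinelearning')
response = client.describe_batch_predictions(
    FilterVariable='Status',
    EQ='COMPLETED',
    SortOrder='dsc',
    Limit=10)
for bp in response.get('Results', []):
    print(bp['BatchPredictionId'], bp['Status'])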
379,499
def lsb_release(self, loglevel=logging.DEBUG):
    shutit = self.shutit
    d = {}
    # The command string, regexes, and dict keys were lost in extraction;
    # they are reconstructed here from what parsing lsb_release output
    # requires.
    self.send(ShutItSendSpec(self,
                             send='lsb_release -a',
                             check_exit=False,
                             echo=False,
                             loglevel=loglevel,
                             ignore_background=True))
    res = shutit.match_string(self.pexpect_child.before,
                              r'^Distributor[\s]*ID:[\s]*(.*)$')
    if isinstance(res, str):
        dist_string = res
        d['distro'] = dist_string.lower().strip()
        try:
            d['install_type'] = (
                package_map.INSTALL_TYPE_MAP[dist_string.lower()])
        except KeyError:
            raise Exception("Distribution %s is not supported." % dist_string)
    else:
        return d
    res = shutit.match_string(self.pexpect_child.before,
                              r'^Release:[\s]*(.*)$')
    if isinstance(res, str):
        version_string = res
        d['version'] = version_string
    return d
Get distro information from lsb_release.
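The same parsing can be sketched standalone with subprocess, assuming
lsb_release is installed on the local host (this bypasses the ShutIt
session entirely):

import re
import subprocess

out = subprocess.run(['lsb_release', '-a'],
                     capture_output=True, text=True).stdout
distro = re.search(r'^Distributor ID:\s*(.*)$', out, re.MULTILINE)
release = re.search(r'^Release:\s*(.*)$', out, re.MULTILINE)
if distro and release:
    print({'distro': distro.group(1).strip().lower(),
           'version': release.group(1).strip()})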