Dataset columns: Unnamed: 0 (int64, values 0 to 389k); code (string, lengths 26 to 79.6k); docstring (string, lengths 1 to 46.9k)
3,900
def on_m_open_file(self, event):
    dlg = wx.FileDialog(
        self, message="choose orient file",
        defaultDir=self.WD, defaultFile="",
        style=wx.FD_OPEN | wx.FD_CHANGE_DIR
    )
    if dlg.ShowModal() == wx.ID_OK:
        orient_file = dlg.GetPath()
    dlg.Destroy()
    new_data = self.er_magic_data.read_magic_file(orient_file, "sample_name")[0]
    if len(new_data) > 0:
        self.orient_data = {}
        self.orient_data = new_data
    self.update_sheet()
    print("-I- If you don't see a change in the spreadsheet, you may need to manually re-size the window")
Open orient.txt, read the data, and display the data from the file in a new grid.
3,901
def _set_mpls_reopt_lsp(self, v, load=False): if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=mpls_reopt_lsp.mpls_reopt_lsp, is_leaf=True, yang_name="mpls-reopt-lsp", rest_name="mpls-reopt-lsp", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=False, extensions={u: {u: u, u: u}}, namespace=, defining_module=, yang_type=, is_config=True) except (TypeError, ValueError): raise ValueError({ : , : "rpc", : , }) self.__mpls_reopt_lsp = t if hasattr(self, ): self._set()
Setter method for mpls_reopt_lsp, mapped from YANG variable /brocade_mpls_rpc/mpls_reopt_lsp (rpc) If this variable is read-only (config: false) in the source YANG file, then _set_mpls_reopt_lsp is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_mpls_reopt_lsp() directly.
3,902
def upload_site(self): if not os.path.isdir(self._config[]): message = .format(self._config[]) self._logger.error(message) raise RuntimeError(message) ltdclient.upload(self._config)
Upload a previously-built site to LSST the Docs.
3,903
def density(a_M, *args, **kwargs):
    rows, cols = a_M.shape
    a_Mmask = ones((rows, cols))
    if len(args):
        a_Mmask = args[0]
        a_M *= a_Mmask
    f_binaryMass = float(size(nonzero(a_M)[0]))
    f_actualMass = a_M.sum()
    f_area = float(size(nonzero(a_Mmask)[0]))
    f_binaryDensity = f_binaryMass / f_area
    f_actualDensity = f_actualMass / f_area
    return f_actualDensity, f_binaryDensity
ARGS a_M matrix to analyze *args[0] optional mask matrix; if passed, calculate density of a_M using non-zero elements of args[0] as a mask. DESC Determine the "density" of a passed matrix. Two densities are returned: o f_actualDensity -- density of the matrix using matrix values as "mass" o f_binaryDensity -- density of the matrix irrespective of actual matrix values If the passed matrix contains only "ones", the f_binaryDensity will be equal to the f_actualDensity.
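A small usage sketch. The unqualified ones/size/nonzero calls in the body suggest the defining module does `from numpy import *`; that assumption is made explicit here, and `density` as defined above is assumed to be in scope:

from numpy import *  # provides ones, size, nonzero used inside density()

M = array([[0.0, 2.0],
           [0.0, 2.0]])
actual, binary = density(M)
print(actual, binary)  # 1.0 0.5  (mass 4.0 over 4 cells; 2 of 4 cells non-zero)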
3,904
def compute_venn3_colors(set_colors):
    ccv = ColorConverter()
    base_colors = [np.array(ccv.to_rgb(c)) for c in set_colors]
    return (base_colors[0], base_colors[1], mix_colors(base_colors[0], base_colors[1]),
            base_colors[2], mix_colors(base_colors[0], base_colors[2]),
            mix_colors(base_colors[1], base_colors[2]),
            mix_colors(base_colors[0], base_colors[1], base_colors[2]))
Given three base colors, computes combinations of colors corresponding to all regions of the venn diagram. returns a list of 7 elements, providing colors for regions (100, 010, 110, 001, 101, 011, 111). >>> compute_venn3_colors(['r', 'g', 'b']) (array([ 1., 0., 0.]),..., array([ 0.4, 0.2, 0.4]))
3,905
def get_parsed_content(metadata_content): _import_parsers() xml_tree = None if metadata_content is None: raise NoContent() else: if isinstance(metadata_content, MetadataParser): xml_tree = deepcopy(metadata_content._xml_tree) elif isinstance(metadata_content, dict): xml_tree = get_element_tree(metadata_content) else: try: xml_tree = get_element_tree(metadata_content) except Exception: xml_tree = None if xml_tree is None: raise InvalidContent( , parser_type=type(metadata_content).__name__ ) xml_root = get_element_name(xml_tree) if xml_root is None: raise NoContent() elif xml_root not in VALID_ROOTS: content = type(metadata_content).__name__ raise InvalidContent(, content=content, xml_root=xml_root) return xml_root, xml_tree
Parses any of the following types of content: 1. XML string or file object: parses XML content 2. MetadataParser instance: deep copies xml_tree 3. Dictionary with nested objects containing: - name (required): the name of the element tag - text: the text contained by element - tail: text immediately following the element - attributes: a Dictionary containing element attributes - children: a List of converted child elements :raises InvalidContent: if the XML is invalid or does not conform to a supported metadata standard :raises NoContent: If the content passed in is null or otherwise empty :return: the XML root along with an XML Tree parsed by and compatible with element_utils
3,906
def add_legend(self, labels=None, **kwargs):
    # NOTE: the 'size'/'prop' key names below are reconstructed from the accompanying docstring;
    # the original string literals were stripped during extraction.
    if 'size' in kwargs:
        if 'prop' not in kwargs:
            kwargs['prop'] = {'size': kwargs['size']}
        else:
            kwargs['prop']['size'] = kwargs['size']
        del kwargs['size']
    self.legend.add_legend = True
    self.legend.legend_labels = labels
    self.legend.legend_kwargs = kwargs
    return
Specify legend for a plot. Adds labels and basic legend specifications for specific plot. For the optional Args, refer to https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html for more information. # TODO: Add legend capabilities for Loss/Gain plots. This is possible using the return_fig_ax kwarg in the main plotting function. Args: labels (list of str): String representing each item in plot that will be added to the legend. Keyword Arguments: loc (str, int, len-2 list of floats, optional): Location of legend. See matplotlib documentation for more detail. Default is None. bbox_to_anchor (2-tuple or 4-tuple of floats, optional): Specify position and size of legend box. 2-tuple will specify (x,y) coordinate of part of box specified with `loc` kwarg. 4-tuple will specify (x, y, width, height). See matplotlib documentation for more detail. Default is None. size (float, optional): Set size of legend using call to `prop` dict in legend call. See matplotlib documentation for more detail. Default is None. ncol (int, optional): Number of columns in the legend. Note: Other kwargs are available. See: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.html
3,907
def deploy_script(self, synchronous=True, **kwargs):
    kwargs = kwargs.copy()
    kwargs.update(self._server_config.get_client_kwargs())
    response = client.get(self.path(), **kwargs)
    return _handle_response(response, self._server_config, synchronous)
Helper for Config's deploy_script method. :param synchronous: What should happen if the server returns an HTTP 202 (accepted) status code? Wait for the task to complete if ``True``. Immediately return the server's response otherwise. :param kwargs: Arguments to pass to requests. :returns: The server's response, with all JSON decoded. :raises: ``requests.exceptions.HTTPError`` If the server responds with an HTTP 4XX or 5XX message.
3,908
def unhexlify(blob): lines = blob.split()[1:] output = [] for line in lines: output.append(binascii.unhexlify(line[9:-2])) if (output[0][0:2].decode() != u): return output[0] = output[0][4:] output[-1] = output[-1].strip(b) script = b.join(output) try: result = script.decode() return result except UnicodeDecodeError:
Takes a hexlified script and turns it back into a string of Python code.
3,909
def _increment_504_stat(self, request): for key in self.stats_dict: if key == : unique = request.url + str(time.time()) self.stats_dict[key].increment(unique) else: self.stats_dict[key].increment() self.logger.debug("Incremented status_code stats")
Increments the 504 stat counters @param request: The scrapy request in the spider
3,910
def get_record(self, **kwargs):
    if not self._initialized:
        raise pycdlibexception.PyCdlibInvalidInput()
    num_paths = 0
    for key in kwargs:
        # NOTE: the keyword names below are reconstructed from the docstring
        # (iso_path, rr_path, joliet_path, udf_path); the original literals were stripped.
        if key in ['iso_path', 'rr_path', 'joliet_path', 'udf_path']:
            if kwargs[key] is not None:
                num_paths += 1
        else:
            raise pycdlibexception.PyCdlibInvalidInput("Invalid keyword, must be one of 'iso_path', 'rr_path', 'joliet_path', or 'udf_path'")
    if num_paths != 1:
        raise pycdlibexception.PyCdlibInvalidInput("Must specify one, and only one of 'iso_path', 'rr_path', 'joliet_path', or 'udf_path'")
    if 'joliet_path' in kwargs:
        return self._get_entry(None, None, self._normalize_joliet_path(kwargs['joliet_path']))
    if 'rr_path' in kwargs:
        return self._get_entry(None, utils.normpath(kwargs['rr_path']), None)
    if 'udf_path' in kwargs:
        return self._get_udf_entry(kwargs['udf_path'])
    return self._get_entry(utils.normpath(kwargs['iso_path']), None, None)
Get the directory record for a particular path. Parameters: iso_path - The absolute path on the ISO9660 filesystem to get the record for. rr_path - The absolute path on the Rock Ridge filesystem to get the record for. joliet_path - The absolute path on the Joliet filesystem to get the record for. udf_path - The absolute path on the UDF filesystem to get the record for. Returns: An object that represents the path. This may be a dr.DirectoryRecord object (in the cases of iso_path, rr_path, or joliet_path), or a udf.UDFFileEntry object (in the case of udf_path).
3,911
def _clone_block_and_wires(block_in):
    block_in.sanity_check()
    block_out = block_in.__class__()
    temp_wv_map = {}
    with set_working_block(block_out, no_sanity_check=True):
        for wirevector in block_in.wirevector_subset():
            new_wv = clone_wire(wirevector)
            temp_wv_map[wirevector] = new_wv
    return block_out, temp_wv_map
This is a generic function to copy the WireVectors for another round of synthesis This does not split a WireVector with multiple wires. :param block_in: The block to change :param synth_name: a name to prepend to all new copies of a wire :return: the resulting block and a WireVector map
3,912
def create(self, create_missing=None):
    return UserGroup(
        self._server_config,
        id=self.create_json(create_missing)['id'],  # the 'id' key is inferred from the keyword argument; the literal was stripped
    ).read()
Do extra work to fetch a complete set of attributes for this entity. For more information, see `Bugzilla #1301658 <https://bugzilla.redhat.com/show_bug.cgi?id=1301658>`_.
3,913
def update_dist_caches(dist_path, fix_zipimporter_caches):
    normalized_path = normalize_path(dist_path)
    _uncache(normalized_path, sys.path_importer_cache)
    if fix_zipimporter_caches:
        _replace_zip_directory_cache_data(normalized_path)
    else:
        _remove_and_clear_zip_directory_cache_data(normalized_path)
Fix any globally cached `dist_path` related data `dist_path` should be a path of a newly installed egg distribution (zipped or unzipped). sys.path_importer_cache contains finder objects that have been cached when importing data from the original distribution. Any such finders need to be cleared since the replacement distribution might be packaged differently, e.g. a zipped egg distribution might get replaced with an unzipped egg folder or vice versa. Having the old finders cached may then cause Python to attempt loading modules from the replacement distribution using an incorrect loader. zipimport.zipimporter objects are Python loaders charged with importing data packaged inside zip archives. If stale loaders referencing the original distribution, are left behind, they can fail to load modules from the replacement distribution. E.g. if an old zipimport.zipimporter instance is used to load data from a new zipped egg archive, it may cause the operation to attempt to locate the requested data in the wrong location - one indicated by the original distribution's zip archive directory information. Such an operation may then fail outright, e.g. report having read a 'bad local file header', or even worse, it may fail silently & return invalid data. zipimport._zip_directory_cache contains cached zip archive directory information for all existing zipimport.zipimporter instances and all such instances connected to the same archive share the same cached directory information. If asked, and the underlying Python implementation allows it, we can fix all existing zipimport.zipimporter instances instead of having to track them down and remove them one by one, by updating their shared cached zip archive directory information. This, of course, assumes that the replacement distribution is packaged as a zipped egg. If not asked to fix existing zipimport.zipimporter instances, we still do our best to clear any remaining zipimport.zipimporter related cached data that might somehow later get used when attempting to load data from the new distribution and thus cause such load operations to fail. Note that when tracking down such remaining stale data, we can not catch every conceivable usage from here, and we clear only those that we know of and have found to cause problems if left alive. Any remaining caches should be updated by whomever is in charge of maintaining them, i.e. they should be ready to handle us replacing their zip archives with new distributions at runtime.
3,914
def check_exists(subreddit): req = get( % subreddit, headers={: }) if req.json().get() == : return False return req.status_code == 200
Make sure that a subreddit actually exists.
3,915
def CheckForQuestionPending(task):
    vm = task.info.entity
    if vm is not None and isinstance(vm, vim.VirtualMachine):
        qst = vm.runtime.question
        if qst is not None:
            raise TaskBlocked("Task blocked, User Intervention required")
Check to see if VM needs to ask a question, throw exception
3,916
def make_token(cls, ephemeral_token: ) -> str: value = ephemeral_token.key if ephemeral_token.scope: value += .join(ephemeral_token.scope) return get_hmac(cls.KEY_SALT + ephemeral_token.salt, value)[::2]
Returns a token to be used a set number of times to allow a user account to access a certain resource.
3,917
def fprint(self, obj, stream=None, **kwargs):
    if stream is None:
        stream = sys.stdout
    options = self.options
    options.update(kwargs)
    if isinstance(obj, dimod.SampleSet):
        self._print_sampleset(obj, stream, **options)
        return
    raise TypeError("cannot format type {}".format(type(obj)))
Prints the formatted representation of the object on stream
3,918
def get_branding_metadata(self): metadata = dict(self._branding_metadata) metadata.update({: self.my_osid_object_form._my_map[]}) return Metadata(**metadata)
Gets the metadata for the asset branding. return: (osid.Metadata) - metadata for the asset branding. *compliance: mandatory -- This method must be implemented.*
3,919
def require(self, path=None, contents=None, source=None, url=None, md5=None, use_sudo=False, owner=None, group=, mode=None, verify_remote=True, temp_dir=): func = use_sudo and run_as_root or self.run if path and not (contents or source or url): assert path if not self.is_file(path): func( % locals()) elif url: if not path: path = os.path.basename(urlparse(url).path) if not self.is_file(path) or md5 and self.md5sum(path) != md5: func( % locals()) else: if source: assert not contents t = None else: fd, source = mkstemp() t = os.fdopen(fd, ) t.write(contents) t.close() if verify_remote: digest = hashlib.md5() f = open(source, ) try: while True: d = f.read(BLOCKSIZE) if not d: break digest.update(d) finally: f.close() else: digest = None if (not self.is_file(path, use_sudo=use_sudo) or (verify_remote and self.md5sum(path, use_sudo=use_sudo) != digest.hexdigest())): with self.settings(hide()): self.put(local_path=source, remote_path=path, use_sudo=use_sudo, temp_dir=temp_dir) if t is not None: os.unlink(source) if use_sudo and owner is None: owner = if (owner and self.get_owner(path, use_sudo) != owner) or \ (group and self.get_group(path, use_sudo) != group): func( % locals()) if use_sudo and mode is None: mode = oct(0o666 & ~int(self.umask(use_sudo=True), base=8)) if mode and self.get_mode(path, use_sudo) != mode: func( % locals())
Require a file to exist and have specific contents and properties. You can provide either: - *contents*: the required contents of the file:: from fabtools import require require.file('/tmp/hello.txt', contents='Hello, world') - *source*: the local path of a file to upload:: from fabtools import require require.file('/tmp/hello.txt', source='files/hello.txt') - *url*: the URL of a file to download (*path* is then optional):: from fabric.api import cd from fabtools import require with cd('tmp'): require.file(url='http://example.com/files/hello.txt') If *verify_remote* is ``True`` (the default), then an MD5 comparison will be used to check whether the remote file is the same as the source. If this is ``False``, the file will be assumed to be the same if it is present. This is useful for very large files, where generating an MD5 sum may take a while. When providing either the *contents* or the *source* parameter, Fabric's ``put`` function will be used to upload the file to the remote host. When ``use_sudo`` is ``True``, the file will first be uploaded to a temporary directory, then moved to its final location. The default temporary directory is ``/tmp``, but can be overridden with the *temp_dir* parameter. If *temp_dir* is an empty string, then the user's home directory will be used. If `use_sudo` is `True`, then the remote file will be owned by root, and its mode will reflect root's default *umask*. The optional *owner*, *group* and *mode* parameters can be used to override these properties. .. note:: This function can be accessed directly from the ``fabtools.require`` module for convenience.
3,920
def list_all_zones_by_name(region=None, key=None, keyid=None, profile=None): ret = describe_hosted_zones(region=region, key=key, keyid=keyid, profile=profile) return [r[] for r in ret]
List, by their FQDNs, all hosted zones in the bound account. region Region to connect to. key Secret key to be used. keyid Access key to be used. profile A dict with region, key and keyid, or a pillar key (string) that contains a dict with region, key and keyid. CLI Example: .. code-block:: bash salt myminion boto_route53.list_all_zones_by_name
3,921
def cli(ctx, config, debug, language, verbose): ctx.obj = {} try: ctx.obj[] = Config(normalizations=config, language=language, debug=debug, verbose=verbose) except ConfigError as e: click.echo(e.message) sys.exit(-1) ctx.obj[] = Cucco(ctx.obj[])
Cucco allows to apply normalizations to a given text or file. This normalizations include, among others, removal of accent marks, stop words an extra white spaces, replacement of punctuation symbols, emails, emojis, etc. For more info on how to use and configure Cucco, check the project website at https://cucco.io.
3,922
def format_row(self, row, key, color): value = row[key] if isinstance(value, bool) or value is None: return if value else if not isinstance(value, Number): return value is_integer = float(value).is_integer() template = if is_integer else + self.floatfmt + key_best = key + if (key_best in row) and row[key_best]: template = color + template + Ansi.ENDC.value return template.format(value)
For a given row from the table, format it (i.e. floating points and color if applicable).
3,923
def authorize_application( request, redirect_uri = % (FACEBOOK_APPLICATION_DOMAIN, FACEBOOK_APPLICATION_NAMESPACE), permissions = FACEBOOK_APPLICATION_INITIAL_PERMISSIONS ): query = { : FACEBOOK_APPLICATION_ID, : redirect_uri } if permissions: query[] = .join(permissions) return render( request = request, template_name = , dictionary = { : % urlencode(query) }, status = 401 )
Redirect the user to authorize the application. Redirection is done by rendering a JavaScript snippet that redirects the parent window to the authorization URI, since Facebook will not allow this inside an iframe.
3,924
def end_namespace(self, prefix):
    del self._ns[prefix]
    self._g.endPrefixMapping(prefix)
Undeclare a namespace prefix.
3,925
def redraw(self, reset_camera=False):
    self.ren.RemoveAllViewProps()
    self.picker = None
    self.add_picker_fixed()
    self.helptxt_mapper = vtk.vtkTextMapper()
    tprops = self.helptxt_mapper.GetTextProperty()
    tprops.SetFontSize(14)
    tprops.SetFontFamilyToTimes()
    tprops.SetColor(0, 0, 0)
    if self.structure is not None:
        self.set_structure(self.structure, reset_camera)
    self.ren_win.Render()
Redraw the render window. Args: reset_camera: Set to True to reset the camera to a pre-determined default for each structure. Defaults to False.
3,926
def validate(self): self.normalize() if self._node is None: logging.debug() return False if self._schema is None: self._schema = XMLSchema(self._schemadoc) is_valid = self._schema.validate(self._node) for msg in self._schema.error_log: logging.info(, msg.line, msg.column, msg.message) return is_valid
Validate the current file against the SLD schema. This first normalizes the SLD document, then validates it. Any schema validation error messages are logged at the INFO level. @rtype: boolean @return: A flag indicating if the SLD is valid.
3,927
def get_local_lb(self, loadbal_id, **kwargs): if not in kwargs: kwargs[] = ( ) return self.lb_svc.getObject(id=loadbal_id, **kwargs)
Returns a specified local load balancer given the id. :param int loadbal_id: The id of the load balancer to retrieve :returns: A dictionary containing the details of the load balancer
3,928
def autobuild_release(family=None): if family is None: family = utilities.get_family() env = Environment(tools=[]) env[] = family.tile target = env.Command([], [], action=env.Action(create_release_settings_action, "Creating release manifest")) env.AlwaysBuild(target) if os.path.exists(): env.Command([], [], Copy("$TARGET", "$SOURCE")) copy_include_dirs(family.tile) copy_tilebus_definitions(family.tile) copy_dependency_docs(family.tile) copy_linker_scripts(family.tile) if not family.tile.settings.get(, False): copy_dependency_images(family.tile) copy_extra_files(family.tile) build_python_distribution(family.tile)
Copy necessary files into build/output so that this component can be used by others Args: family (ArchitectureGroup): The architecture group that we are targeting. If not provided, it is assumed that we are building in the current directory and the module_settings.json file is read to create an ArchitectureGroup
3,929
def query_singleton_edges_from_network(self, network: Network) -> sqlalchemy.orm.query.Query: ne1 = aliased(network_edge, name=) ne2 = aliased(network_edge, name=) singleton_edge_ids_for_network = ( self.session .query(ne1.c.edge_id) .outerjoin(ne2, and_( ne1.c.edge_id == ne2.c.edge_id, ne1.c.network_id != ne2.c.network_id )) .filter(and_( ne1.c.network_id == network.id, ne2.c.edge_id == None )) ) return singleton_edge_ids_for_network
Return a query selecting all edge ids that only belong to the given network.
3,930
def ec_number(self, ec_number=None, entry_name=None, limit=None, as_df=False):
    q = self.session.query(models.ECNumber)
    q = self.get_model_queries(q, ((ec_number, models.ECNumber.ec_number),))
    q = self.get_one_to_many_queries(q, ((entry_name, models.Entry.name),))
    return self._limit_and_df(q, limit, as_df)
Method to query :class:`.models.ECNumber` objects in database :param ec_number: Enzyme Commission number(s) :type ec_number: str or tuple(str) or None :param entry_name: name(s) in :class:`.models.Entry` :type entry_name: str or tuple(str) or None :param limit: - if `isinstance(limit,int)==True` -> limit - if `isinstance(limit,tuple)==True` -> format:= tuple(page_number, results_per_page) - if limit == None -> all results :type limit: int or tuple(int) or None :param bool as_df: if `True` results are returned as :class:`pandas.DataFrame` :return: - if `as_df == False` -> list(:class:`.models.ECNumber`) - if `as_df == True` -> :class:`pandas.DataFrame` :rtype: list(:class:`.models.ECNumber`) or :class:`pandas.DataFrame`
3,931
def new(cls, package):
    partname = package.next_partname("/word/footer%d.xml")
    content_type = CT.WML_FOOTER
    element = parse_xml(cls._default_footer_xml())
    return cls(partname, content_type, element, package)
Return newly created footer part.
3,932
def fit(self, Z):
    # NOTE: the 'X' key is inferred from the docstring (DictRDD with (X, y) values);
    # the original string literal was stripped during extraction.
    check_rdd(Z, {'X': (sp.spmatrix, np.ndarray)})
    return self._spark_fit(SparkLinearRegression, Z)
Fit linear model. Parameters ---------- Z : DictRDD with (X, y) values X containing numpy array or sparse matrix - The training data y containing the target values Returns ------- self : returns an instance of self.
3,933
def value(self) -> Decimal:
    assert isinstance(self.price, Decimal)
    return self.quantity * self.price
Value of the holdings in exchange currency. Value = Quantity * Price
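A tiny worked example of the Quantity × Price relationship the property encodes (standalone arithmetic for illustration, not the class itself):

from decimal import Decimal

quantity = Decimal("10")
price = Decimal("3.25")   # price in the exchange currency
print(quantity * price)   # 32.50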
3,934
def _handle_response(self, response):
    if isinstance(response, Exception):
        logging.error("dropped chunk %s" % response)
    elif response.json().get("limits"):
        parsed = response.json()
        self._api.dynamic_settings.update(parsed["limits"])
Logs dropped chunks and updates dynamic settings
3,935
def calc_point_distance(self, chi_coords):
    chi1_bin, chi2_bin = self.find_point_bin(chi_coords)
    min_dist = 1000000000
    indexes = None
    for chi1_bin_offset, chi2_bin_offset in self.bin_loop_order:
        curr_chi1_bin = chi1_bin + chi1_bin_offset
        curr_chi2_bin = chi2_bin + chi2_bin_offset
        for idx, bank_chis in enumerate(self.bank[curr_chi1_bin][curr_chi2_bin]):
            dist = coord_utils.calc_point_dist(chi_coords, bank_chis)
            if dist < min_dist:
                min_dist = dist
                indexes = (curr_chi1_bin, curr_chi2_bin, idx)
    return min_dist, indexes
Calculate distance between point and the bank. Return the closest distance. Parameters ----------- chi_coords : numpy.array The position of the point in the chi coordinates. Returns -------- min_dist : float The smallest **SQUARED** metric distance between the test point and the bank. indexes : The chi1_bin, chi2_bin and position within that bin at which the closest matching point lies.
3,936
def run_sphinx(): old_dir = here_directory() os.chdir(str(doc_directory())) doc_status = subprocess.check_call([, ], shell=True) os.chdir(str(old_dir)) if doc_status is not 0: exit(Fore.RED + )
Runs Sphinx via its `make html` command
3,937
def __get_state_by_id(cls, job_id):
    state = model.MapreduceState.get_by_job_id(job_id)
    if state is None:
        raise ValueError("Job state for job %s is missing." % job_id)
    return state
Get job state by id. Args: job_id: job id. Returns: model.MapreduceState for the job. Raises: ValueError: if the job state is missing.
3,938
def from_twodim_list(cls, datalist, tsformat=None):
    ts = TimeSeries()
    ts.set_timeformat(tsformat)
    for entry in datalist:
        ts.add_entry(*entry[:2])
    ts._normalized = ts.is_normalized()
    ts.sort_timeseries()
    return ts
Creates a new TimeSeries instance from the data stored inside a two dimensional list. :param list datalist: List containing multiple iterables with at least two values. The first item will always be used as timestamp in the predefined format, the second represents the value. All other items in those sublists will be ignored. :param string tsformat: Format of the given timestamp. This is used to convert the timestamp into UNIX epochs, if necessary. For valid examples take a look into the :py:func:`time.strptime` documentation. :return: Returns a TimeSeries instance containing the data from datalist. :rtype: TimeSeries
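A hedged usage sketch, assuming the surrounding TimeSeries class from the code above; the timestamps and format string are made up for illustration:

data = [
    ("2015-01-01 00:00", 10.5),
    ("2015-01-02 00:00", 11.2, "items beyond the first two are ignored"),
]
ts = TimeSeries.from_twodim_list(data, tsformat="%Y-%m-%d %H:%M")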
3,939
def display_message( self, subject=, message="This is a note", sounds=False ): data = json.dumps( { : self.content[], : subject, : sounds, : True, : message } ) self.session.post( self.message_url, params=self.params, data=data )
Send a request to the device to play a sound. It's possible to pass a custom message by changing the `subject`.
3,940
def chebyshev(x, y):
    result = 0.0
    for i in range(x.shape[0]):
        result = max(result, np.abs(x[i] - y[i]))
    return result
Chebyshev or l-infinity distance. .. math:: D(x, y) = \max_i |x_i - y_i|
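A quick check of the function above on two small vectors (illustration only):

import numpy as np

x = np.array([1.0, 5.0, 2.0])
y = np.array([4.0, 5.0, 0.0])
print(chebyshev(x, y))  # 3.0 -- the largest coordinate-wise gap, |1 - 4|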
3,941
def collect_loaded_packages() -> List[Tuple[str, str]]:
    dists = get_installed_distributions()
    get_dist_files = DistFilesFinder()
    file_table = {}
    for dist in dists:
        for file in get_dist_files(dist):
            file_table[file] = dist
    used_dists = set()
    for module in list(sys.modules.values()):
        try:
            dist = file_table[module.__file__]
        except (AttributeError, KeyError):
            continue
        used_dists.add(dist)
    return sorted((dist.project_name, dist.version) for dist in used_dists)
Return the currently loaded package names and their versions.
3,942
def unique(seq):
    try:
        return list(OrderedDict.fromkeys(seq))
    except TypeError:
        unique_values = []
        for i in seq:
            if i not in unique_values:
                unique_values.append(i)
        return unique_values
Return the unique values in a sequence. Parameters ---------- seq : sequence Sequence with (possibly duplicate) elements. Returns ------- unique : list Unique elements of ``seq``. Order is guaranteed to be the same as in seq. Examples -------- Determine unique elements in list >>> unique([1, 2, 3, 3]) [1, 2, 3] >>> unique((1, 'str', 'str')) [1, 'str'] The utility also works with unhashable types: >>> unique((1, [1], [1])) [1, [1]]
3,943
def human_or_11(X, y, model_generator, method_name):
    return _human_or(X, model_generator, method_name, True, True)
OR (true/true) This tests how well a feature attribution method agrees with human intuition for an OR operation combined with linear effects. This metric deals specifically with the question of credit allocation for the following function when all three inputs are true: if fever: +2 points if cough: +2 points if fever or cough: +6 points transform = "identity" sort_order = 2
3,944
async def trigger_all(self, *args, **kwargs):
    tasks = []
    for a in self.get_agents(addr=False, include_manager=False):
        task = asyncio.ensure_future(self.trigger_act(*args, agent=a, **kwargs))
        tasks.append(task)
    rets = await asyncio.gather(*tasks)
    return rets
Trigger all agents in the environment to act asynchronously. :returns: A list of agents' :meth:`act` return values. Given arguments and keyword arguments are passed down to each agent's :meth:`creamas.core.agent.CreativeAgent.act`. .. note:: By design, the environment's manager agent, i.e. if the environment has :attr:`manager`, is excluded from acting.
3,945
def get_log_hierarchy_id(self):
    if self._catalog_session is not None:
        return self._catalog_session.get_catalog_hierarchy_id()
    return self._hierarchy_session.get_hierarchy_id()
Gets the hierarchy ``Id`` associated with this session. return: (osid.id.Id) - the hierarchy ``Id`` associated with this session *compliance: mandatory -- This method must be implemented.*
3,946
def upload_file(self, body, key=None, metadata=None, headers=None, access_key=None, secret_key=None, queue_derive=None, verbose=None, verify=None, checksum=None, delete=None, retries=None, retries_sleep=None, debug=None, request_kwargs=None): headers = {} if headers is None else headers metadata = {} if metadata is None else metadata access_key = self.session.access_key if access_key is None else access_key secret_key = self.session.secret_key if secret_key is None else secret_key queue_derive = True if queue_derive is None else queue_derive verbose = False if verbose is None else verbose verify = True if verify is None else verify delete = False if delete is None else delete checksum = True if delete else checksum retries = 0 if retries is None else retries retries_sleep = 30 if retries_sleep is None else retries_sleep debug = False if debug is None else debug request_kwargs = {} if request_kwargs is None else request_kwargs if not in request_kwargs: request_kwargs[] = 120 md5_sum = None if not hasattr(body, ): filename = body body = open(body, ) else: if key: filename = key else: filename = body.name size = get_file_size(body) if size == 0: headers[] = if not headers.get(): headers[] = str(size) key = norm_filepath(filename).split()[-1] if key is None else key base_url = .format(self) url = .format( base_url, urllib.parse.quote(norm_filepath(key).lstrip().encode())) if checksum: md5_sum = get_md5(body) ia_file = self.get_file(key) if (not self.tasks) and (ia_file) and (ia_file.md5 == md5_sum): log.info(.format(f=key, u=url)) if verbose: print(.format(f=key)) if delete: log.info( .format(i=self.identifier, f=key)) body.close() os.remove(filename) body.close() return Response() if verify or delete: if not md5_sum: md5_sum = get_md5(body) headers[] = md5_sum def _build_request(): body.seek(0, os.SEEK_SET) if verbose: try: if size == 0: raise Exception chunk_size = 1048576 expected_size = size / chunk_size + 1 chunks = chunk_generator(body, chunk_size) progress_generator = progress.bar( chunks, expected_size=expected_size, label=.format(f=key)) data = IterableToFileAdapter(progress_generator, size) except: print(.format(f=key)) data = body else: data = body headers.update(self.session.headers) request = S3Request(method=, url=url, headers=headers, data=data, metadata=metadata, access_key=access_key, secret_key=secret_key, queue_derive=queue_derive) return request if debug: prepared_request = self.session.prepare_request(_build_request()) body.close() return prepared_request else: try: error_msg = ( .format(retries_sleep, retries)) while True: if retries > 0: if self.session.s3_is_overloaded(access_key): sleep(retries_sleep) log.info(error_msg) if verbose: print(.format(error_msg), file=sys.stderr) retries -= 1 continue request = _build_request() prepared_request = request.prepare() if prepared_request.headers.get() == : del prepared_request.headers[] response = self.session.send(prepared_request, stream=True, **request_kwargs) if (response.status_code == 503) and (retries > 0): log.info(error_msg) if verbose: print(.format(error_msg), file=sys.stderr) sleep(retries_sleep) retries -= 1 continue else: if response.status_code == 503: log.info() break response.raise_for_status() log.info(u.format(f=key, u=url)) if delete and response.status_code == 200: log.info( .format(i=self.identifier, f=key)) body.close() os.remove(filename) body.close() return response except HTTPError as exc: body.close() msg = get_s3_xml_text(exc.response.content) error_msg = ( .format(key, self.identifier, msg)) 
log.error(error_msg) if verbose: print(.format(key, msg), file=sys.stderr) raise type(exc)(error_msg, response=exc.response, request=exc.request)
Upload a single file to an item. The item will be created if it does not exist. :type body: Filepath or file-like object. :param body: File or data to be uploaded. :type key: str :param key: (optional) Remote filename. :type metadata: dict :param metadata: (optional) Metadata used to create a new item. :type headers: dict :param headers: (optional) Add additional IA-S3 headers to request. :type queue_derive: bool :param queue_derive: (optional) Set to False to prevent an item from being derived after upload. :type verify: bool :param verify: (optional) Verify local MD5 checksum matches the MD5 checksum of the file received by IAS3. :type checksum: bool :param checksum: (optional) Skip based on checksum. :type delete: bool :param delete: (optional) Delete local file after the upload has been successfully verified. :type retries: int :param retries: (optional) Number of times to retry the given request if S3 returns a 503 SlowDown error. :type retries_sleep: int :param retries_sleep: (optional) Amount of time to sleep between ``retries``. :type verbose: bool :param verbose: (optional) Print progress to stdout. :type debug: bool :param debug: (optional) Set to True to print headers to stdout, and exit without sending the upload request. Usage:: >>> import internetarchive >>> item = internetarchive.Item('identifier') >>> item.upload_file('/path/to/image.jpg', ... key='photos/image1.jpg') True
3,947
def do_child_count(self, params):
    for child, level in self._zk.tree(params.path, params.depth, full_path=True):
        self.show_output("%s: %d", child, self._zk.child_count(child))
\x1b[1mNAME\x1b[0m child_count - Prints the child count for paths \x1b[1mSYNOPSIS\x1b[0m child_count [path] [depth] \x1b[1mOPTIONS\x1b[0m * path: the path (default: cwd) * max_depth: max recursion limit (0 is no limit) (default: 1) \x1b[1mEXAMPLES\x1b[0m > child-count / /zookeeper: 2 /foo: 0 /bar: 3
3,948
def _get_phrase_list_from_words(self, word_list):
    groups = groupby(word_list, lambda x: x not in self.to_ignore)
    phrases = [tuple(group[1]) for group in groups if group[0]]
    return list(
        filter(
            lambda x: self.min_length <= len(x) <= self.max_length,
            phrases
        )
    )
Method to create contender phrases from the list of words that form a sentence by dropping stopwords and punctuations and grouping the left words into phrases. Only phrases in the given length range (both limits inclusive) would be considered to build co-occurrence matrix. Ex: Sentence: Red apples, are good in flavour. List of words: ['red', 'apples', ",", 'are', 'good', 'in', 'flavour'] List after dropping punctuations and stopwords. List of words: ['red', 'apples', *, *, good, *, 'flavour'] List of phrases: [('red', 'apples'), ('good',), ('flavour',)] List of phrases with a correct length: For the range [1, 2]: [('red', 'apples'), ('good',), ('flavour',)] For the range [1, 1]: [('good',), ('flavour',)] For the range [2, 2]: [('red', 'apples')] :param word_list: List of words which form a sentence when joined in the same order. :return: List of contender phrases that are formed after dropping stopwords and punctuations.
3,949
def gene_ids(self, contig=None, strand=None):
    return self._all_feature_values(
        column="gene_id",
        feature="gene",
        contig=contig,
        strand=strand)
What are all the gene IDs (optionally restrict to a given chromosome/contig and/or strand)
3,950
def model(self): try: return self.data.get().get() except (KeyError, AttributeError): return self.device_status_simple()
Return the model name of the printer.
3,951
def db_parse( block_id, txid, vtxindex, op, data, senders, inputs, outputs, fee, db_state=None, **virtualchain_hints ): if len(senders) == 0: raise Exception("No senders given") if not check_tx_sender_types(senders, block_id): log.warning(.format(txid)) return None assert in virtualchain_hints, raw_tx = virtualchain_hints[] btc_tx_data = virtualchain.btc_tx_deserialize(raw_tx) test_btc_tx = virtualchain.btc_tx_serialize({: inputs, : outputs, : btc_tx_data[], : btc_tx_data[]}) assert raw_tx == test_btc_tx, .format(raw_tx, test_btc_tx) try: opcode = op_get_opcode_name(op) assert opcode is not None, "Unrecognized opcode " % op except Exception, e: log.exception(e) log.error("Skipping unrecognized opcode") return None log.debug("PARSE %s at (%s, %s): %s" % (opcode, block_id, vtxindex, data.encode())) op_data = None try: op_data = op_extract( opcode, data, senders, inputs, outputs, block_id, vtxindex, txid ) except Exception, e: log.exception(e) op_data = None if op_data is not None: try: assert in op_data, except Exception as e: log.exception(e) log.error("BUG: missing op") os.abort() original_op_data = copy.deepcopy(op_data) op_data[] = int(vtxindex) op_data[] = str(txid) op_data[] = original_op_data else: log.error("Unparseable op " % opcode) return op_data
(required by virtualchain state engine) Parse a blockstack operation from a transaction. The transaction fields are as follows: * `block_id` is the blockchain height at which this transaction occurs * `txid` is the transaction ID * `data` is the scratch area of the transaction that contains the actual virtualchain operation (e.g. "id[opcode][payload]") * `senders` is a list in 1-to-1 correspondence with `inputs` that contains information about what funded the inputs * `inputs` are the list of inputs to the transaction. Some blockchains (like Bitcoin) support multiple inputs, whereas others (like Ethereum) support only 1. * `outputs` are the list of outputs of the transaction. Some blockchains (like Bitcoin) support multiple outputs, whereas others (like Ethereum) support only 1. * `fee` is the transaction fee. `db_state` is the StateEngine-derived class. This is a BlockstackDB instance. `**virtualchain_hints` is a dict with extra virtualchain hints that may be relevant. We require: * `raw_tx`: the hex-encoded string containing the raw transaction. Returns a dict with the parsed operation on success. Return None on error
3,952
def trimmed(self, n_peaks):
    result = self.copy()
    result.trim(n_peaks)
    return result
:param n_peaks: number of peaks to keep :returns: an isotope pattern with removed low-intensity peaks :rtype: CentroidedSpectrum
3,953
def make_parent_dirs(path, mode=0o777):
    parent = os.path.dirname(path)
    if parent:
        make_all_dirs(parent, mode)
    return path
Ensure parent directories of a file are created as needed.
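A sketch of typical use. The `make_all_dirs` helper is not shown in this row, so a plausible stand-in is defined here; it is an assumption, not the library's actual helper:

import os

def make_all_dirs(path, mode=0o777):
    # hypothetical stand-in for the helper used by make_parent_dirs()
    os.makedirs(path, mode=mode, exist_ok=True)

path = make_parent_dirs('/tmp/demo/nested/file.txt')
# /tmp/demo/nested now exists; the original path is returned unchanged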
3,954
def next(self):
    rv = self.current
    self.pos = (self.pos + 1) % len(self.items)
    return rv
Goes one item ahead and returns it.
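A minimal hypothetical container illustrating the cyclic behaviour of this `next` method (the `items`, `pos`, and `current` attributes follow the method body above; the class itself is invented for illustration):

class Cycle:
    def __init__(self, items):
        self.items = list(items)
        self.pos = 0

    @property
    def current(self):
        return self.items[self.pos]

    def next(self):
        rv = self.current
        self.pos = (self.pos + 1) % len(self.items)
        return rv

c = Cycle(['a', 'b', 'c'])
print(c.next(), c.next(), c.next(), c.next())  # a b c a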
3,955
def get_band_qpoints(band_paths, npoints=51, rec_lattice=None):
    npts = _get_npts(band_paths, npoints, rec_lattice)
    qpoints_of_paths = []
    c = 0
    for band_path in band_paths:
        nd = len(band_path)
        for i in range(nd - 1):
            delta = np.subtract(band_path[i + 1], band_path[i]) / (npts[c] - 1)
            qpoints = [delta * j for j in range(npts[c])]
            qpoints_of_paths.append(np.array(qpoints) + band_path[i])
            c += 1
    return qpoints_of_paths
Generate qpoints for band structure path Note ---- Behavior changes with and without rec_lattice given. Parameters ---------- band_paths: list of array_likes Sets of end points of paths dtype='double' shape=(sets of paths, paths, 3) example: [[[0, 0, 0], [0.5, 0.5, 0], [0.5, 0.5, 0.5]], [[0.5, 0.25, 0.75], [0, 0, 0]]] npoints: int, optional Number of q-points in each path including end points. Default is 51. rec_lattice: array_like, optional When given, q-points are sampled in a similar interval. The longest path length divided by npoints including end points is used as the reference interval. Reciprocal basis vectors given in column vectors. dtype='double' shape=(3, 3)
3,956
def diff(self, n, axis=1):
    new_values = algos.diff(self.values, n, axis=axis)
    return [self.make_block(values=new_values)]
return block for the diff of the values
3,957
def _configure(self, *args):
    gldrawable = self.get_gl_drawable()
    glcontext = self.get_gl_context()
    if not gldrawable or not gldrawable.gl_begin(glcontext):
        return False
    glViewport(0, 0, self.get_allocation().width, self.get_allocation().height)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    self._apply_orthogonal_view()
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    gldrawable.gl_end()
    return False
Configure viewport This method is called when the widget is resized or something triggers a redraw. The method configures the view to show all elements in an orthogonal perspective.
3,958
def update(self, request, id): if self.readonly: return HTTPMethodNotAllowed(headers={: }) session = self.Session() obj = session.query(self.mapped_class).get(id) if obj is None: return HTTPNotFound() feature = loads(request.body, object_hook=GeoJSON.to_instance) if not isinstance(feature, Feature): return HTTPBadRequest() if self.before_update is not None: self.before_update(request, feature, obj) obj.__update__(feature) session.flush() request.response.status_int = 200 return obj
Read the GeoJSON feature from the request body and update the corresponding object in the database.
3,959
def _submit_and_wait(cmd, cores, config, output_dir):
    batch_script = "submit_bcl2fastq.sh"
    if not os.path.exists(batch_script + ".finished"):
        if os.path.exists(batch_script + ".failed"):
            os.remove(batch_script + ".failed")
        with open(batch_script, "w") as out_handle:
            out_handle.write(config["process"]["bcl2fastq_batch"].format(
                cores=cores, bcl2fastq_cmd=" ".join(cmd), batch_script=batch_script))
        submit_cmd = utils.get_in(config, ("process", "submit_cmd"))
        subprocess.check_call(submit_cmd.format(batch_script=batch_script), shell=True)
        while 1:
            if os.path.exists(batch_script + ".finished"):
                break
            if os.path.exists(batch_script + ".failed"):
                raise ValueError("bcl2fastq batch script failed: %s" %
                                 os.path.join(output_dir, batch_script))
            time.sleep(5)
Submit command with batch script specified in configuration, wait until finished
3,960
def splitext( filename ): index = filename.find() if index == 0: index = 1+filename[1:].find() if index == -1: return filename, return filename[:index], filename[index:] return os.path.splitext(filename)
Return the filename and extension according to the first dot in the filename. This helps date stamping .tar.bz2 or .ext.gz files properly.
3,961
def _bdd(node):
    try:
        bdd = _BDDS[node]
    except KeyError:
        bdd = _BDDS[node] = BinaryDecisionDiagram(node)
    return bdd
Return a unique BDD.
3,962
def _make_file_dict(self, f): if isinstance(f, dict): file_obj = f[] if in f: file_name = f[] else: file_name = file_obj.name else: file_obj = f file_name = f.name b64_data = base64.b64encode(file_obj.read()) return { : file_name, : b64_data.decode() if six.PY3 else b64_data, }
Make a dictionary with filename and base64 file data
3,963
def generate_listall_output(lines, resources, aws_config, template, arguments, nodup = False): for line in lines: output = [] for resource in resources: current_path = resource.split() outline = line[1] for key in line[2]: outline = outline.replace(+key+, get_value_at(aws_config[], current_path, key, True)) output.append(outline) output = .join(line for line in sorted(set(output))) template = template.replace(line[0], output) for (i, argument) in enumerate(arguments): template = template.replace( % i, argument) return template
Format and print the output of ListAll :param lines: :param resources: :param aws_config: :param template: :param arguments: :param nodup: :return:
3,964
def is_ancestor_of_objective_bank(self, id_, objective_bank_id):
    if self._catalog_session is not None:
        return self._catalog_session.is_ancestor_of_catalog(id_=id_, catalog_id=objective_bank_id)
    return self._hierarchy_session.is_ancestor(id_=id_, ancestor_id=objective_bank_id)
Tests if an ``Id`` is an ancestor of an objective bank. arg: id (osid.id.Id): an ``Id`` arg: objective_bank_id (osid.id.Id): the ``Id`` of an objective bank return: (boolean) - ``true`` if this ``id`` is an ancestor of ``objective_bank_id,`` ``false`` otherwise raise: NotFound - ``objective_bank_id`` is not found raise: NullArgument - ``id`` or ``objective_bank_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.* *implementation notes*: If ``id`` not found return ``false``.
3,965
def equals(df1, df2, ignore_order=set(), ignore_indices=set(), all_close=False, _return_reason=False):
    result = _equals(df1, df2, ignore_order, ignore_indices, all_close)
    if _return_reason:
        return result
    else:
        return result[0]
Get whether 2 data frames are equal. ``NaN`` is considered equal to ``NaN`` and `None`. Parameters ---------- df1 : ~pandas.DataFrame Data frame to compare. df2 : ~pandas.DataFrame Data frame to compare. ignore_order : ~typing.Set[int] Axi in which to ignore order. ignore_indices : ~typing.Set[int] Axi of which to ignore the index. E.g. ``{1}`` allows differences in ``df.columns.name`` and ``df.columns.equals(df2.columns)``. all_close : bool If `False`, values must match exactly, if `True`, floats are compared as if compared with `numpy.isclose`. _return_reason : bool Internal. If `True`, `equals` returns a tuple containing the reason, else `equals` only returns a bool indicating equality (or equivalence rather). Returns ------- bool Whether they are equal (after ignoring according to the parameters). Internal note: if ``_return_reason``, ``Tuple[bool, str or None]`` is returned. The former is whether they're equal, the latter is `None` if equal or a short explanation of why the data frames aren't equal, otherwise. Notes ----- All values (including those of indices) must be copyable and ``__eq__`` must be such that a copy must equal its original. A value must equal itself unless it's ``NaN``. Values needn't be orderable or hashable (however pandas requires index values to be orderable and hashable). By consequence, this is not an efficient function, but it is flexible. Examples -------- >>> from pytil import data_frame as df_ >>> import pandas as pd >>> df = pd.DataFrame([ ... [1, 2, 3], ... [4, 5, 6], ... [7, 8, 9] ... ], ... index=pd.Index(('i1', 'i2', 'i3'), name='index1'), ... columns=pd.Index(('c1', 'c2', 'c3'), name='columns1') ... ) >>> df columns1 c1 c2 c3 index1 i1 1 2 3 i2 4 5 6 i3 7 8 9 >>> df2 = df.reindex(('i3', 'i1', 'i2'), columns=('c2', 'c1', 'c3')) >>> df2 columns1 c2 c1 c3 index1 i3 8 7 9 i1 2 1 3 i2 5 4 6 >>> df_.equals(df, df2) False >>> df_.equals(df, df2, ignore_order=(0,1)) True >>> df2 = df.copy() >>> df2.index = [1,2,3] >>> df2 columns1 c1 c2 c3 1 1 2 3 2 4 5 6 3 7 8 9 >>> df_.equals(df, df2) False >>> df_.equals(df, df2, ignore_indices={0}) True >>> df2 = df.reindex(('i3', 'i1', 'i2')) >>> df2 columns1 c1 c2 c3 index1 i3 7 8 9 i1 1 2 3 i2 4 5 6 >>> df_.equals(df, df2, ignore_indices={0}) # does not ignore row order! False >>> df_.equals(df, df2, ignore_order={0}) True >>> df2 = df.copy() >>> df2.index.name = 'other' >>> df_.equals(df, df2) # df.index.name must match as well, same goes for df.columns.name False
3,966
def get_default_pandas_converters() -> List[Union[Converter[Any, pd.DataFrame], Converter[pd.DataFrame, Any]]]:
    return [ConverterFunction(from_type=pd.DataFrame, to_type=dict,
                              conversion_method=single_row_or_col_df_to_dict),
            ConverterFunction(from_type=dict, to_type=pd.DataFrame,
                              conversion_method=dict_to_df,
                              option_hints=dict_to_single_row_or_col_df_opts),
            ConverterFunction(from_type=pd.DataFrame, to_type=pd.Series,
                              conversion_method=single_row_or_col_df_to_series)]
Utility method to return the default converters associated to dataframes (from dataframe to other type, and from other type to dataframe) :return:
3,967
def grasstruth(args):
    p = OptionParser(grasstruth.__doc__)
    opts, args = p.parse_args(args)
    if len(args) != 1:
        sys.exit(not p.print_help())
    james, = args
    fp = open(james)
    pairs = set()
    for row in fp:
        atoms = row.split()
        genes = []
        idx = {}
        for i, a in enumerate(atoms):
            aa = a.split("||")
            for ma in aa:
                idx[ma] = i
            genes.extend(aa)
        genes = [x for x in genes if ":" not in x]
        Os = [x for x in genes if x.startswith("Os")]
        for o in Os:
            for g in genes:
                if idx[o] == idx[g]:
                    continue
                pairs.add(tuple(sorted((o, g))))
    for a, b in sorted(pairs):
        print("\t".join((a, b)))
%prog grasstruth james-pan-grass.txt Prepare truth pairs for 4 grasses.
3,968
def add_username(user, apps):
    if not user:
        return None
    apps = [a for a in apps if a.instance == user.instance]
    if not apps:
        return None
    from toot.api import verify_credentials
    creds = verify_credentials(apps.pop(), user)
    # NOTE: the 'username' key is inferred from the docstring; the original literal was stripped
    return User(user.instance, creds['username'], user.access_token)
When using browser login, the username was not stored, so look it up.
3,969
def translate_y(self, d):
    mat = numpy.array([
        [1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, 0],
        [0, d, 0, 1]
    ])
    self.vectors = self.vectors.dot(mat)
    return self
Translate mesh for y-direction :param float d: Amount to translate
3,970
def evaluate(self, env, engine, parser, term_type, eval_in_python):
    # NOTE: the 'python' engine literal is inferred from context (pandas computation ops);
    # the original string literal was stripped during extraction.
    if engine == 'python':
        res = self(env)
    else:
        left = self.lhs.evaluate(env, engine=engine, parser=parser,
                                 term_type=term_type, eval_in_python=eval_in_python)
        right = self.rhs.evaluate(env, engine=engine, parser=parser,
                                  term_type=term_type, eval_in_python=eval_in_python)
        if self.op in eval_in_python:
            res = self.func(left.value, right.value)
        else:
            from pandas.core.computation.eval import eval
            res = eval(self, local_dict=env, engine=engine, parser=parser)
    name = env.add_tmp(res)
    return term_type(name, env=env)
Evaluate a binary operation *before* being passed to the engine. Parameters ---------- env : Scope engine : str parser : str term_type : type eval_in_python : list Returns ------- term_type The "pre-evaluated" expression as an instance of ``term_type``
3,971
def find_path_package(thepath): pname = find_path_package_name(thepath) if not pname: return None fromlist = b if six.PY2 else return __import__(pname, globals(), locals(), [fromlist])
Takes a file system path and returns the module object of the python package the said path belongs to. If the said path can not be determined, it returns None.
3,972
def sample_stats_to_xarray(self): dtypes = {"divergent__": bool, "n_leapfrog__": np.int64, "treedepth__": np.int64} dims = deepcopy(self.dims) if self.dims is not None else {} coords = deepcopy(self.coords) if self.coords is not None else {} sampler_params = self.sample_stats log_likelihood = self.log_likelihood if isinstance(log_likelihood, str): log_likelihood_cols = [ col for col in self.posterior[0].columns if log_likelihood == col.split(".")[0] ] log_likelihood_vals = [item[log_likelihood_cols] for item in self.posterior] for i, _ in enumerate(sampler_params): for col in log_likelihood_cols: col_ll = col.replace(log_likelihood, "log_likelihood") sampler_params[i][col_ll] = log_likelihood_vals[i][col] if log_likelihood in dims: dims["log_likelihood"] = dims.pop(log_likelihood) log_likelihood_dims = np.array( [list(map(int, col.split(".")[1:])) for col in log_likelihood_cols] ) max_dims = log_likelihood_dims.max(0) max_dims = max_dims if hasattr(max_dims, "__iter__") else (max_dims,) default_dim_names, _ = generate_dims_coords(shape=max_dims, var_name=log_likelihood) log_likelihood_dim_names, _ = generate_dims_coords( shape=max_dims, var_name="log_likelihood" ) for default_dim_name, log_likelihood_dim_name in zip( default_dim_names, log_likelihood_dim_names ): if default_dim_name in coords: coords[log_likelihood_dim_name] = coords.pop(default_dim_name) for j, s_params in enumerate(sampler_params): rename_dict = {} for key in s_params: key_, *end = key.split(".") name = re.sub("__$", "", key_) name = "diverging" if name == "divergent" else name rename_dict[key] = ".".join((name, *end)) sampler_params[j][key] = s_params[key].astype(dtypes.get(key)) sampler_params[j] = sampler_params[j].rename(columns=rename_dict) data = _unpack_dataframes(sampler_params) return dict_to_dataset(data, coords=coords, dims=dims)
Extract sample_stats from fit.
3,973
def removed(self): def _removed(diffs, prefix): keys = [] for key in diffs.keys(): if isinstance(diffs[key], dict) and not in diffs[key]: keys.extend(_removed(diffs[key], prefix=.format(prefix, key))) elif diffs[key][] == self.NONE_VALUE: keys.append(.format(prefix, key)) elif isinstance(diffs[key][], dict): keys.extend( _removed(diffs[key][], prefix=.format(prefix, key))) return keys return sorted(_removed(self._diffs, prefix=))
Returns all keys that have been removed. If the keys are in child dictionaries they will be represented with . notation
3,974
def int_to_bin(i):
    i1 = i % 256
    i2 = int(i / 256)
    return chr(i1) + chr(i2)
Integer to two bytes
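A quick worked example of the little-endian two-byte encoding produced by the function above (pure illustration):

encoded = int_to_bin(515)             # 515 == 2 * 256 + 3
print([ord(c) for c in encoded])      # [3, 2] -> low byte first, then high byte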
3,975
def get_one_over_n_factorial(counter_entry):
    factos = [sp.factorial(c) for c in counter_entry]
    prod = product(factos)
    return sp.Integer(1) / sp.S(prod)
r""" Calculates the :math:`\frac{1}{\mathbf{n!}}` of eq. 6 (see Ale et al. 2013). That is the invert of a product of factorials. :param counter_entry: an entry of counter. That is an array of integers of length equal to the number of variables. For instance, `counter_entry` could be `[1,0,1]` for three variables. :return: a scalar as a sympy expression
3,976
def _set_range(self, init):
    self._speed *= 0.0
    w, h = self._viewbox.size
    w, h = float(w), float(h)
    x1, y1, z1 = self._xlim[0], self._ylim[0], self._zlim[0]
    x2, y2, z2 = self._xlim[1], self._ylim[1], self._zlim[1]
    rx, ry, rz = (x2 - x1), (y2 - y1), (z2 - z1)
    if w / h > 1:
        rx /= w / h
        ry /= w / h
    else:
        rz /= h / w
    self._scale_factor = max(rx, ry, rz) / 3.0
    margin = np.mean([rx, ry, rz]) * 0.1
    self._center = x1 - margin, y1 - margin, z1 + margin
    yaw = 45 * self._flip_factors[0]
    pitch = -90 - 20 * self._flip_factors[2]
    if self._flip_factors[1] < 0:
        yaw += 90 * np.sign(self._flip_factors[0])
    q1 = Quaternion.create_from_axis_angle(pitch * math.pi / 180, 1, 0, 0)
    q2 = Quaternion.create_from_axis_angle(0 * math.pi / 180, 0, 1, 0)
    q3 = Quaternion.create_from_axis_angle(yaw * math.pi / 180, 0, 0, 1)
    self._rotation1 = (q1 * q2 * q3).normalize()
    self._rotation2 = Quaternion()
    self.view_changed()
Reset the view.
3,977
def __select_alternatives (self, property_set_, debug): best = None best_properties = None if len (self.alternatives_) == 0: return None if len (self.alternatives_) == 1: return self.alternatives_ [0] if debug: print "Property set for selection:", property_set_ for v in self.alternatives_: properties = v.match (property_set_, debug) if properties is not None: if not best: best = v best_properties = properties else: if b2.util.set.equal (properties, best_properties): return None elif b2.util.set.contains (properties, best_properties): pass elif b2.util.set.contains (best_properties, properties): best = v best_properties = properties else: return None return best
Returns the best viable alternative for this property_set See the documentation for selection rules. # TODO: shouldn't this be 'alternative' (singular)?
3,978
def get_template_names(self): names = super(EasyUIDatagridView, self).get_template_names() names.append() return names
The default template for the datagrid.
3,979
def sg_input(shape=None, dtype=sg_floatx, name=None):
    if shape is None:
        return tf.placeholder(dtype, shape=None, name=name)
    else:
        if not isinstance(shape, (list, tuple)):
            shape = [shape]
        return tf.placeholder(dtype, shape=[None] + list(shape), name=name)
r"""Creates a placeholder. Args: shape: A tuple/list of integers. If an integers is given, it will turn to a list. dtype: A data type. Default is float32. name: A name for the placeholder. Returns: A wrapped placeholder `Tensor`.
3,980
def check_extracted_paths(namelist, subdir=None):
    def relpath(p):
        # os.path.relpath strips a trailing separator, so add it back
        q = os.path.relpath(p)
        if p.endswith(os.path.sep) or p.endswith('/'):
            q += os.path.sep
        return q

    parent = os.path.abspath('.')
    if subdir:
        if os.path.isabs(subdir):
            raise FileException(, subdir)
        subdir = relpath(subdir + os.path.sep)

    for name in namelist:
        if os.path.commonprefix([parent, os.path.abspath(name)]) != parent:
            raise FileException(, name)
        if subdir and os.path.commonprefix(
                [subdir, relpath(name)]) != subdir:
            raise FileException(, name)
Check whether zip file paths are all relative, and optionally in a specified subdirectory; raises an exception if not.

namelist: A list of paths from the zip file
subdir: If specified, then check whether all paths in the zip file are under this subdirectory

Python docs are unclear about the security of extract/extractall:
https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.extractall
https://docs.python.org/2/library/zipfile.html#zipfile.ZipFile.extract
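The core safety test boils down to path-prefix checking on the resolved destination of each archive member. A runnable standalone illustration (is_safe_member is a made-up helper, simplified from the function above):

import os

def is_safe_member(name, target='.'):
    # A member is safe if its absolute destination stays inside the target directory.
    target_abs = os.path.abspath(target)
    dest = os.path.abspath(os.path.join(target, name))
    return os.path.commonprefix([target_abs, dest]) == target_abs

print(is_safe_member('data/report.txt'))  # True
print(is_safe_member('../etc/passwd'))    # False
print(is_safe_member('/etc/passwd'))      # False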
3,981
def replace_header(self, name, value):
    name_lower = name.lower()
    for index in range(len(self.headers) - 1, -1, -1):
        curr_name, curr_value = self.headers[index]
        if curr_name.lower() == name_lower:
            self.headers[index] = (curr_name, value)
            return curr_value

    self.headers.append((name, value))
    return None
Replace a header with a new value, or add the header if it is not present.
Returns the old header value, if any.
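A standalone sketch of the same behaviour, assuming the header store is a list of (name, value) tuples as the method implies:

headers = [('Content-Type', 'text/html'), ('X-Debug', '1')]

def replace_header(headers, name, value):
    # Case-insensitive replacement; append when the header is absent.
    for i in range(len(headers) - 1, -1, -1):
        if headers[i][0].lower() == name.lower():
            old_value = headers[i][1]
            headers[i] = (headers[i][0], value)
            return old_value
    headers.append((name, value))
    return None

print(replace_header(headers, 'content-type', 'application/json'))  # 'text/html'
print(replace_header(headers, 'X-Trace', 'abc'))                    # None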
3,982
def images(arrays, labels=None, domain=None, w=None):
    s = 
    for i, array in enumerate(arrays):
        url = _image_url(array)
        label = labels[i] if labels is not None else i
        s += .format(label=label, url=url)
    s += "</div>"
    _display_html(s)
Display a list of images with optional labels.

Args:
  arrays: A list of NumPy arrays representing images
  labels: A list of strings to label each image. Defaults to show index if None
  domain: Domain of pixel values, inferred from min & max values if None
  w: width of output image, scaled using nearest neighbor interpolation. size unchanged if None
3,983
def p_subind_strTO(p):
    p[0] = (make_typecast(TYPE.uinteger, make_number(0, lineno=p.lineno(2)),
                          p.lineno(1)),
            make_typecast(TYPE.uinteger, p[3], p.lineno(2)))
substr : LP TO expr RP
3,984
def participants(self):
    if self._participants is None:
        self._participants = ParticipantList(
            self._version,
            account_sid=self._solution['account_sid'],
            conference_sid=self._solution['conference_sid'],
        )
    return self._participants
Access the participants

:returns: twilio.rest.api.v2010.account.conference.participant.ParticipantList
:rtype: twilio.rest.api.v2010.account.conference.participant.ParticipantList
3,985
def from_object(self, obj: Union[str, Any]) -> None:
    if isinstance(obj, str):
        obj = importer.import_object_str(obj)

    for key in dir(obj):
        if key.isupper():
            value = getattr(obj, key)
            self._setattr(key, value)
    logger.info("Config is loaded from object: %r", obj)
Load values from an object.
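A runnable sketch of the attribute-copy behaviour (MiniConfig and Defaults are made-up stand-ins; the real method also accepts a dotted import path string, as the isinstance check above shows):

class MiniConfig:
    # Minimal stand-in for the config class above: copies upper-case attributes.
    def from_object(self, obj):
        for key in dir(obj):
            if key.isupper():
                setattr(self, key, getattr(obj, key))

class Defaults:
    DEBUG = False          # upper-case attributes are copied
    TIMEOUT = 30
    _internal = 'skipped'  # non-upper-case names are ignored

config = MiniConfig()
config.from_object(Defaults)
print(config.DEBUG, config.TIMEOUT)  # False 30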
3,986
def users_text(self):
    if self._users_text is None:
        self.chain.connection.log("Getting connected users text")
        self._users_text = self.driver.get_users_text()
        if self._users_text:
            self.chain.connection.log("Users text collected")
        else:
            self.chain.connection.log("Users text not collected")
    return self._users_text
Return connected users information and collect if not available.
3,987
def update_from_delta(self, delta, *args):
    for (node, extant) in delta.get('nodes', {}).items():
        if extant:
            if node in delta.get('node_val', {}) \
                    and 'location' in delta['node_val'][node] \
                    and node not in self.pawn:
                self.add_pawn(node)
            elif node not in self.spot:
                self.add_spot(node)
                spot = self.spot[node]
                if '_x' not in spot.place or '_y' not in spot.place:
                    self.spots_unposd.append(spot)
        else:
            if node in self.pawn:
                self.rm_pawn(node)
            if node in self.spot:
                self.rm_spot(node)
    for (node, stats) in delta.get('node_val', {}).items():
        if node in self.spot:
            spot = self.spot[node]
            x = stats.get('_x')
            y = stats.get('_y')
            if x is not None:
                spot.x = x * self.width
            if y is not None:
                spot.y = y * self.height
            if '_image_paths' in stats:
                spot.paths = stats['_image_paths'] or spot.default_image_paths
        elif node in self.pawn:
            pawn = self.pawn[node]
            if 'location' in stats:
                pawn.loc_name = stats['location']
            if '_image_paths' in stats:
                pawn.paths = stats['_image_paths'] or pawn.default_image_paths
        else:
            Logger.warning(
                "Board: diff tried to change stats of node {} "
                "but I don't have a widget for it".format(node)
            )
    for (orig, dests) in delta.get('edges', {}).items():
        for (dest, extant) in dests.items():
            if extant and (orig not in self.arrow or dest not in self.arrow[orig]):
                self.add_arrow(orig, dest)
            elif not extant and orig in self.arrow and dest in self.arrow[orig]:
                self.rm_arrow(orig, dest)
Apply the changes described in the dict ``delta``.
3,988
def load_country_map_data():
    csv_bytes = get_example_data(
        , is_gzip=False, make_bytes=True)
    data = pd.read_csv(csv_bytes, encoding=)
    data[] = datetime.datetime.now().date()
    data.to_sql(
        ,
        db.engine,
        if_exists=,
        chunksize=500,
        dtype={
            : String(10),
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : BigInteger,
            : Date(),
        },
        index=False)
    print()
    print( * 80)
    print()

    obj = db.session.query(TBL).filter_by(table_name=).first()
    if not obj:
        obj = TBL(table_name=)
    obj.main_dttm_col = 
    obj.database = utils.get_or_create_main_db()
    if not any(col.metric_name ==  for col in obj.metrics):
        obj.metrics.append(SqlMetric(
            metric_name=,
            expression=,
        ))
    db.session.merge(obj)
    db.session.commit()
    obj.fetch_metadata()
    tbl = obj

    slice_data = {
        : ,
        : ,
        : ,
        : ,
        : ,
        : ,
        : {
            : ,
            : {
                : ,
                : ,
            },
            : ,
            : ,
            : ,
        },
        : 500000,
    }

    print()
    slc = Slice(
        slice_name=,
        viz_type=,
        datasource_type=,
        datasource_id=tbl.id,
        params=get_slice_json(slice_data),
    )
    misc_dash_slices.add(slc.slice_name)
    merge_slice(slc)
Load data for the country map visualization.
3,989
def _sanity_check_all_marked_locations_are_registered(ir_blocks, query_metadata_table):
    registered_locations = {
        location
        for location, _ in query_metadata_table.registered_locations
    }
    ir_encountered_locations = {
        block.location
        for block in ir_blocks
        if isinstance(block, MarkLocation)
    }

    unregistered_locations = ir_encountered_locations - registered_locations
    unencountered_locations = registered_locations - ir_encountered_locations
    if unregistered_locations:
        raise AssertionError(u
                             u.format(unregistered_locations))
    if unencountered_locations:
        raise AssertionError(u
                             u.format(unencountered_locations))
Assert that all locations in MarkLocation blocks have registered and valid metadata.
3,990
def cancel_capture(self):
    command = const.CMD_CANCELCAPTURE
    cmd_response = self.__send_command(command)
    return bool(cmd_response.get('status'))
Cancel capturing a finger.

:return: bool
3,991
def check_base(path, base, os_sep=os.sep):
    return (
        check_path(path, base, os_sep) or
        check_under_base(path, base, os_sep)
    )
Check if the given absolute path is under, or equal to, the given base.

:param path: absolute path
:type path: str
:param base: absolute base path
:type base: str
:param os_sep: path separator, defaults to os.sep
:return: whether path is under the given base or not
:rtype: bool
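A standalone illustration of the "under or equal" test (a sketch; the companion helpers check_path and check_under_base are assumed to compare normalized absolute paths):

import os

def is_under_or_equal(path, base):
    # True if path equals base or lives below it.
    path = os.path.normpath(path)
    base = os.path.normpath(base)
    return path == base or path.startswith(base + os.sep)

print(is_under_or_equal('/srv/www/site/index.html', '/srv/www'))  # True
print(is_under_or_equal('/srv/www', '/srv/www'))                  # True
print(is_under_or_equal('/etc/passwd', '/srv/www'))               # False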
3,992
def get_source_var_declaration(self, var):
    return next((x.source_mapping for x in self.variables if x.name == var))
Return the source mapping where the variable is declared.

Args:
    var (str): variable name
Returns:
    (dict): sourceMapping
3,993
def calc_A(Z, x):
    if (type(x) is pd.DataFrame) or (type(x) is pd.Series):
        x = x.values
    if (type(x) is not np.ndarray) and (x == 0):
        recix = 0
    else:
        with warnings.catch_warnings():
            # Suppress divide-by-zero warnings; infinities are reset to zero below.
            warnings.simplefilter('ignore')
            recix = 1 / x
        recix[recix == np.inf] = 0
        recix = recix.reshape((1, -1))
    if type(Z) is pd.DataFrame:
        return pd.DataFrame(Z.values * recix, index=Z.index, columns=Z.columns)
    else:
        return Z * recix
Calculate the A matrix (coefficients) from Z and x

Parameters
----------
Z : pandas.DataFrame or numpy.array
    Symmetric input output table (flows)
x : pandas.DataFrame or numpy.array
    Industry output column vector

Returns
-------
pandas.DataFrame or numpy.array
    Symmetric input output table (coefficients) A
    The type is determined by the type of Z.
    If DataFrame, index/columns as Z
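A small worked example of the same arithmetic, A = Z · diag(1/x), in plain numpy (a sketch that skips the pandas handling and scalar-zero guard of the full function):

import numpy as np

Z = np.array([[10., 20.],
              [30., 40.]])
x = np.array([100., 200.])  # industry output

# Reciprocal of x with zeros left as zero, then scale each column of Z.
recip_x = np.divide(1.0, x, out=np.zeros_like(x), where=(x != 0))
A = Z * recip_x.reshape(1, -1)
print(A)
# [[0.1 0.1]
#  [0.3 0.2]]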
3,994
def as_cpf(numero):
    _num = digitos(numero)
    if is_cpf(_num):
        return '{}.{}.{}-{}'.format(_num[:3], _num[3:6], _num[6:9], _num[9:])
    return numero
Format a CPF number. If the number is not a valid CPF, the argument is returned without any modification.
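Usage sketch of the formatting step only (the digitos and is_cpf helpers used above are not shown in this listing, so this standalone version merely checks length and digits rather than validating the CPF check digits):

def format_cpf(digits):
    # '11144477735' -> '111.444.777-35'
    if len(digits) == 11 and digits.isdigit():
        return '{}.{}.{}-{}'.format(digits[:3], digits[3:6], digits[6:9], digits[9:])
    return digits

print(format_cpf('11144477735'))  # 111.444.777-35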
3,995
def to_array(self):
    array = super(ContactMessage, self).to_array()
    array['phone_number'] = u(self.phone_number)
    array['first_name'] = u(self.first_name)
    if self.receiver is not None:
        if isinstance(self.receiver, None):
            array['receiver'] = None(self.receiver)
        elif isinstance(self.receiver, str):
            array['receiver'] = u(self.receiver)
        elif isinstance(self.receiver, int):
            array['receiver'] = int(self.receiver)
        else:
            raise TypeError()
    if self.reply_id is not None:
        if isinstance(self.reply_id, DEFAULT_MESSAGE_ID):
            array['reply_id'] = DEFAULT_MESSAGE_ID(self.reply_id)
        elif isinstance(self.reply_id, int):
            array['reply_id'] = int(self.reply_id)
        else:
            raise TypeError()
    if self.last_name is not None:
        array['last_name'] = u(self.last_name)
    if self.vcard is not None:
        array['vcard'] = u(self.vcard)
    if self.disable_notification is not None:
        array['disable_notification'] = bool(self.disable_notification)
    if self.reply_markup is not None:
        if isinstance(self.reply_markup, InlineKeyboardMarkup):
            array['reply_markup'] = self.reply_markup.to_array()
        elif isinstance(self.reply_markup, ReplyKeyboardMarkup):
            array['reply_markup'] = self.reply_markup.to_array()
        elif isinstance(self.reply_markup, ReplyKeyboardRemove):
            array['reply_markup'] = self.reply_markup.to_array()
        elif isinstance(self.reply_markup, ForceReply):
            array['reply_markup'] = self.reply_markup.to_array()
        else:
            raise TypeError()
    return array
Serializes this ContactMessage to a dictionary.

:return: dictionary representation of this object.
:rtype: dict
3,996
def build_archive_file(archive_contents, zip_file):
    for root_dir, _, files in os.walk(archive_contents):
        relative_dir = os.path.relpath(root_dir, archive_contents)
        for f in files:
            temp_file_full_path = os.path.join(
                archive_contents, relative_dir, f)
            zipname = os.path.join(relative_dir, f)
            zip_file.write(temp_file_full_path, arcname=zipname)
Build a Tableau-compatible archive file.
3,997
def process_request(self, method, data=None):
    self._validate_request_method(method)

    attempts = 3
    for i in range(attempts):
        response = self._send_request(method, data)
        if self._is_token_response(response):
            if i < attempts - 1:
                time.sleep(2)
                continue
            else:
                raise UpdatingTokenResponse
        break

    resp = BaseResponse(response)
    if response.headers.get() == :
        if not resp.json.get():
            if all([
                resp.json.get() == 1,
                resp.json.get() == u"We are currently "
                "undergoing maintenance, please check back shortly.",
            ]):
                raise MaintenanceResponse(response=resp.json)
            else:
                raise ResponseError(response=resp.json)

    return resp
Process request over HTTP to ubersmith instance.

method: Ubersmith API method string
data: dict of method arguments
3,998
def push(self, task, func, *args, **kwargs):
    task.to_call(func, *args, **kwargs)
    data = task.serialize()

    if task.async:
        task.task_id = self.backend.push(
            self.queue_name, task.task_id, data
        )
    else:
        self.execute(task)

    return task
Pushes a configured task onto the queue.

Typically, you'll favor using the ``Gator.task`` method or ``Gator.options`` context manager for creating a task. Call this only if you have specific needs or know what you're doing.

If the ``Task`` has the ``async = False`` option, the task will be run immediately (in-process). This is useful for development and in testing.

Ex::

    task = Task(async=False, retries=3)
    finished = gator.push(task, increment, incr_by=2)

:param task: A mostly-configured task
:type task: A ``Task`` instance

:param func: The callable with business logic to execute
:type func: callable

:param args: Positional arguments to pass to the callable task
:type args: list

:param kwargs: Keyword arguments to pass to the callable task
:type kwargs: dict

:returns: The ``Task`` instance
3,999
def delete(self, path, payload=None, headers=None):
    return self._request('DELETE', path, payload, headers)
HTTP DELETE operation.

:param path: URI Path
:param payload: HTTP Body
:param headers: HTTP Headers
:raises ApiError: Raises if the remote server encountered an error.
:raises ApiConnectionError: Raises if there was a connectivity issue.
:return: Response