Columns:
Unnamed: 0 — int64, values 0 to 389k
code — string, lengths 26 to 79.6k
docstring — string, lengths 1 to 46.9k
386,000
def _to_DOM(self):
    root_node = ET.Element("ozone")
    reference_time_node = ET.SubElement(root_node, "reference_time")
    reference_time_node.text = str(self._reference_time)
    reception_time_node = ET.SubElement(root_node, "reception_time")
    reception_time_node.text = str(self._reception_time)
    interval_node = ET.SubElement(root_node, "interval")
    interval_node.text = str(self._interval)
    value_node = ET.SubElement(root_node, "value")
    value_node.text = str(self.du_value)
    root_node.append(self._location._to_DOM())
    return root_node
Dumps object data to a fully traversable DOM representation of the object. :returns: an ``xml.etree.Element`` object
386,001
def add_ovsbridge_linuxbridge(name, bridge): try: import netifaces except ImportError: if six.PY2: apt_install(, fatal=True) else: apt_install(, fatal=True) import netifaces config.write(BRIDGE_TEMPLATE.format(linuxbridge_port=linuxbridge_port, ovsbridge_port=ovsbridge_port, bridge=bridge)) subprocess.check_call(["ifup", linuxbridge_port]) add_bridge_port(name, linuxbridge_port)
Add linux bridge to the named openvswitch bridge :param name: Name of ovs bridge to be added to Linux bridge :param bridge: Name of Linux bridge to be added to ovs bridge :returns: True if veth is added between ovs bridge and linux bridge, False otherwise
386,002
def get_affinity_group_properties(self, affinity_group_name): _validate_not_none(, affinity_group_name) return self._perform_get( + self.subscription_id + + _str(affinity_group_name) + , AffinityGroup)
Returns the system properties associated with the specified affinity group. affinity_group_name: The name of the affinity group.
386,003
def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
             label=None, convention='start', kind=None, loffset=None,
             limit=None, base=0, on=None, level=None):
    from pandas.core.resample import (resample,
                                      _maybe_process_deprecations)
    axis = self._get_axis_number(axis)
    r = resample(self, freq=rule, label=label, closed=closed,
                 axis=axis, kind=kind, loffset=loffset,
                 convention=convention, base=base, key=on, level=level)
    return _maybe_process_deprecations(r,
                                       how=how,
                                       fill_method=fill_method,
                                       limit=limit)
Resample time-series data. Convenience method for frequency conversion and resampling of time series. Object must have a datetime-like index (`DatetimeIndex`, `PeriodIndex`, or `TimedeltaIndex`), or pass datetime-like values to the `on` or `level` keyword. Parameters ---------- rule : str The offset string or object representing target conversion. how : str Method for down/re-sampling, default to 'mean' for downsampling. .. deprecated:: 0.18.0 The new syntax is ``.resample(...).mean()``, or ``.resample(...).apply(<func>)`` axis : {0 or 'index', 1 or 'columns'}, default 0 Which axis to use for up- or down-sampling. For `Series` this will default to 0, i.e. along the rows. Must be `DatetimeIndex`, `TimedeltaIndex` or `PeriodIndex`. fill_method : str, default None Filling method for upsampling. .. deprecated:: 0.18.0 The new syntax is ``.resample(...).<func>()``, e.g. ``.resample(...).pad()`` closed : {'right', 'left'}, default None Which side of bin interval is closed. The default is 'left' for all frequency offsets except for 'M', 'A', 'Q', 'BM', 'BA', 'BQ', and 'W' which all have a default of 'right'. label : {'right', 'left'}, default None Which bin edge label to label bucket with. The default is 'left' for all frequency offsets except for 'M', 'A', 'Q', 'BM', 'BA', 'BQ', and 'W' which all have a default of 'right'. convention : {'start', 'end', 's', 'e'}, default 'start' For `PeriodIndex` only, controls whether to use the start or end of `rule`. kind : {'timestamp', 'period'}, optional, default None Pass 'timestamp' to convert the resulting index to a `DateTimeIndex` or 'period' to convert it to a `PeriodIndex`. By default the input representation is retained. loffset : timedelta, default None Adjust the resampled time labels. limit : int, default None Maximum size gap when reindexing with `fill_method`. .. deprecated:: 0.18.0 base : int, default 0 For frequencies that evenly subdivide 1 day, the "origin" of the aggregated intervals. For example, for '5min' frequency, base could range from 0 through 4. Defaults to 0. on : str, optional For a DataFrame, column to use instead of index for resampling. Column must be datetime-like. .. versionadded:: 0.19.0 level : str or int, optional For a MultiIndex, level (name or number) to use for resampling. `level` must be datetime-like. .. versionadded:: 0.19.0 Returns ------- Resampler object See Also -------- groupby : Group by mapping, function, label, or list of labels. Series.resample : Resample a Series. DataFrame.resample: Resample a DataFrame. Notes ----- See the `user guide <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling>`_ for more. To learn more about the offset strings, please see `this link <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__. Examples -------- Start by creating a series with 9 one minute timestamps. >>> index = pd.date_range('1/1/2000', periods=9, freq='T') >>> series = pd.Series(range(9), index=index) >>> series 2000-01-01 00:00:00 0 2000-01-01 00:01:00 1 2000-01-01 00:02:00 2 2000-01-01 00:03:00 3 2000-01-01 00:04:00 4 2000-01-01 00:05:00 5 2000-01-01 00:06:00 6 2000-01-01 00:07:00 7 2000-01-01 00:08:00 8 Freq: T, dtype: int64 Downsample the series into 3 minute bins and sum the values of the timestamps falling into a bin. >>> series.resample('3T').sum() 2000-01-01 00:00:00 3 2000-01-01 00:03:00 12 2000-01-01 00:06:00 21 Freq: 3T, dtype: int64 Downsample the series into 3 minute bins as above, but label each bin using the right edge instead of the left. 
Please note that the value in the bucket used as the label is not included in the bucket, which it labels. For example, in the original series the bucket ``2000-01-01 00:03:00`` contains the value 3, but the summed value in the resampled bucket with the label ``2000-01-01 00:03:00`` does not include 3 (if it did, the summed value would be 6, not 3). To include this value close the right side of the bin interval as illustrated in the example below this one. >>> series.resample('3T', label='right').sum() 2000-01-01 00:03:00 3 2000-01-01 00:06:00 12 2000-01-01 00:09:00 21 Freq: 3T, dtype: int64 Downsample the series into 3 minute bins as above, but close the right side of the bin interval. >>> series.resample('3T', label='right', closed='right').sum() 2000-01-01 00:00:00 0 2000-01-01 00:03:00 6 2000-01-01 00:06:00 15 2000-01-01 00:09:00 15 Freq: 3T, dtype: int64 Upsample the series into 30 second bins. >>> series.resample('30S').asfreq()[0:5] # Select first 5 rows 2000-01-01 00:00:00 0.0 2000-01-01 00:00:30 NaN 2000-01-01 00:01:00 1.0 2000-01-01 00:01:30 NaN 2000-01-01 00:02:00 2.0 Freq: 30S, dtype: float64 Upsample the series into 30 second bins and fill the ``NaN`` values using the ``pad`` method. >>> series.resample('30S').pad()[0:5] 2000-01-01 00:00:00 0 2000-01-01 00:00:30 0 2000-01-01 00:01:00 1 2000-01-01 00:01:30 1 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64 Upsample the series into 30 second bins and fill the ``NaN`` values using the ``bfill`` method. >>> series.resample('30S').bfill()[0:5] 2000-01-01 00:00:00 0 2000-01-01 00:00:30 1 2000-01-01 00:01:00 1 2000-01-01 00:01:30 2 2000-01-01 00:02:00 2 Freq: 30S, dtype: int64 Pass a custom function via ``apply`` >>> def custom_resampler(array_like): ... return np.sum(array_like) + 5 ... >>> series.resample('3T').apply(custom_resampler) 2000-01-01 00:00:00 8 2000-01-01 00:03:00 17 2000-01-01 00:06:00 26 Freq: 3T, dtype: int64 For a Series with a PeriodIndex, the keyword `convention` can be used to control whether to use the start or end of `rule`. Resample a year by quarter using 'start' `convention`. Values are assigned to the first quarter of the period. >>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01', ... freq='A', ... periods=2)) >>> s 2012 1 2013 2 Freq: A-DEC, dtype: int64 >>> s.resample('Q', convention='start').asfreq() 2012Q1 1.0 2012Q2 NaN 2012Q3 NaN 2012Q4 NaN 2013Q1 2.0 2013Q2 NaN 2013Q3 NaN 2013Q4 NaN Freq: Q-DEC, dtype: float64 Resample quarters by month using 'end' `convention`. Values are assigned to the last month of the period. >>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01', ... freq='Q', ... periods=4)) >>> q 2018Q1 1 2018Q2 2 2018Q3 3 2018Q4 4 Freq: Q-DEC, dtype: int64 >>> q.resample('M', convention='end').asfreq() 2018-03 1.0 2018-04 NaN 2018-05 NaN 2018-06 2.0 2018-07 NaN 2018-08 NaN 2018-09 3.0 2018-10 NaN 2018-11 NaN 2018-12 4.0 Freq: M, dtype: float64 For DataFrame objects, the keyword `on` can be used to specify the column instead of the index for resampling. >>> d = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df = pd.DataFrame(d) >>> df['week_starting'] = pd.date_range('01/01/2018', ... periods=8, ... 
freq='W') >>> df price volume week_starting 0 10 50 2018-01-07 1 11 60 2018-01-14 2 9 40 2018-01-21 3 13 100 2018-01-28 4 14 50 2018-02-04 5 18 100 2018-02-11 6 17 40 2018-02-18 7 19 50 2018-02-25 >>> df.resample('M', on='week_starting').mean() price volume week_starting 2018-01-31 10.75 62.5 2018-02-28 17.00 60.0 For a DataFrame with MultiIndex, the keyword `level` can be used to specify on which level the resampling needs to take place. >>> days = pd.date_range('1/1/2000', periods=4, freq='D') >>> d2 = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19], ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]}) >>> df2 = pd.DataFrame(d2, ... index=pd.MultiIndex.from_product([days, ... ['morning', ... 'afternoon']] ... )) >>> df2 price volume 2000-01-01 morning 10 50 afternoon 11 60 2000-01-02 morning 9 40 afternoon 13 100 2000-01-03 morning 14 50 afternoon 18 100 2000-01-04 morning 17 40 afternoon 19 50 >>> df2.resample('D', level=0).sum() price volume 2000-01-01 21 110 2000-01-02 22 140 2000-01-03 32 150 2000-01-04 36 90
386,004
def read_api_service(self, name, **kwargs):
    # key names were stripped from the extracted source; '_return_http_data_only'
    # and 'async_req' follow the usual generated-client convention
    kwargs['_return_http_data_only'] = True
    if kwargs.get('async_req'):
        return self.read_api_service_with_http_info(name, **kwargs)
    else:
        (data) = self.read_api_service_with_http_info(name, **kwargs)
        return data
read_api_service # noqa: E501 read the specified APIService # noqa: E501 This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please pass async_req=True >>> thread = api.read_api_service(name, async_req=True) >>> result = thread.get() :param async_req bool :param str name: name of the APIService (required) :param str pretty: If 'true', then the output is pretty printed. :param bool exact: Should the export be exact. Exact export maintains cluster-specific fields like 'Namespace'. :param bool export: Should this value be exported. Export strips fields that a user can not specify. :return: V1beta1APIService If the method is called asynchronously, returns the request thread.
386,005
def as_unit(self, unit, location=, *args, **kwargs): f = Formatter( as_unit(unit, location=location), args, kwargs ) return self._add_formatter(f)
Format subset with units :param unit: string to use as unit :param location: prefix or suffix :param subset: Pandas subset
386,006
def project_leave(object_id, input_params={}, always_retry=True, **kwargs):
    # route taken from the docstring: /project-xxxx/leave
    return DXHTTPRequest('/%s/leave' % object_id, input_params,
                         always_retry=always_retry, **kwargs)
Invokes the /project-xxxx/leave API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Project-Permissions-and-Sharing#API-method%3A-%2Fproject-xxxx%2Fleave
386,007
def list_files(tag=, sat_id=None, data_path=None, format_str=None): if format_str is None and tag is not None: if tag == or tag == : ascii_fmt = return pysat.Files.from_os(data_path=data_path, format_str=ascii_fmt) else: raise ValueError() elif format_str is None: estr = raise ValueError(estr) else: return pysat.Files.from_os(data_path=data_path, format_str=format_str)
Return a Pandas Series of every file for chosen satellite data Parameters ----------- tag : (string or NoneType) Denotes type of file to load. Accepted types are '' and 'ascii'. If '' is specified, the primary data type (ascii) is loaded. (default='') sat_id : (string or NoneType) Specifies the satellite ID for a constellation. Not used. (default=None) data_path : (string or NoneType) Path to data directory. If None is specified, the value previously set in Instrument.files.data_path is used. (default=None) format_str : (string or NoneType) User specified file format. If None is specified, the default formats associated with the supplied tags are used. (default=None) Returns -------- pysat.Files.from_os : (pysat._files.Files) A class containing the verified available files
386,008
def windows_k_distinct(x, k):
    dist, i, j = 0, 0, 0          # dist = number of distinct values in x[i:j]
    occ = {xi: 0 for xi in x}     # multiplicity of each value in the window
    while j < len(x):
        while dist == k:          # advance i until the window drops below k distinct
            occ[x[i]] -= 1
            if occ[x[i]] == 0:
                dist -= 1
            i += 1
        while j < len(x) and (dist < k or occ[x[j]]):
            if occ[x[j]] == 0:    # extend the window right while it stays valid
                dist += 1
            occ[x[j]] += 1
            j += 1
        if dist == k:
            yield (i, j)
Find all largest windows containing exactly k distinct elements :param x: list or string :param k: positive integer :yields: largest intervals [i, j) with len(set(x[i:j])) == k :complexity: `O(|x|)`
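A quick illustration of the generator above, assuming the function exactly as defined in this record. For x = 'aabbc' and k = 2 the two maximal windows with exactly two distinct characters are x[0:4] == 'aabb' and x[2:5] == 'bbc':
>>> list(windows_k_distinct("aabbc", 2))
[(0, 4), (2, 5)]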
386,009
def from_string(data_str): lines = data_str.strip().split() err_msg = ("Invalid input string format. Is this string generated by " "MonsoonData class?") conditions = [ len(lines) <= 4, "Average Current:" not in lines[1], "Voltage: " not in lines[2], "Total Power: " not in lines[3], "samples taken at " not in lines[4], lines[5] != "Time" + * 7 + "Amp" ] if any(conditions): raise MonsoonError(err_msg) hz_str = lines[4].split()[2] hz = int(hz_str[:-2]) voltage_str = lines[2].split()[1] voltage = int(voltage_str[:-1]) lines = lines[6:] t = [] v = [] for l in lines: try: timestamp, value = l.split() t.append(int(timestamp)) v.append(float(value)) except ValueError: raise MonsoonError(err_msg) return MonsoonData(v, t, hz, voltage)
Creates a MonsoonData object from a string representation generated by __str__. Args: data_str: The string representation of a MonsoonData. Returns: A MonsoonData object.
386,010
def new_main_mod(self, ns=None):
    main_mod = self._user_main_module
    init_fakemod_dict(main_mod, ns)
    return main_mod
Return a new 'main' module object for user code execution.
386,011
def nested_insert(self, item_list):
    if len(item_list) == 1:
        self[item_list[0]] = LIVVDict()
    elif len(item_list) > 1:
        if item_list[0] not in self:
            self[item_list[0]] = LIVVDict()
        self[item_list[0]].nested_insert(item_list[1:])
Create a series of nested LIVVDicts given a list
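A small usage sketch, assuming LIVVDict is the dict subclass whose method is shown above: each element of the list becomes one level of nesting.
>>> d = LIVVDict()
>>> d.nested_insert(['run', 'case', 'metric'])
>>> isinstance(d['run']['case']['metric'], LIVVDict)
True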
386,012
def strip_filter(value):
    if isinstance(value, basestring):
        value = bleach.clean(value, tags=ALLOWED_TAGS,
                             attributes=ALLOWED_ATTRIBUTES,
                             styles=ALLOWED_STYLES, strip=True)
    return value
Strips HTML tags from strings according to SANITIZER_ALLOWED_TAGS, SANITIZER_ALLOWED_ATTRIBUTES and SANITIZER_ALLOWED_STYLES variables in settings. Example usage: {% load sanitizer %} {{ post.content|strip_html }}
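The filter's exact output depends on the SANITIZER_* settings, so it is not shown here, but the underlying ``bleach.clean`` call it wraps behaves like this (whitelist passed explicitly for illustration):
>>> import bleach
>>> bleach.clean('<i>x</i><b>bold</b>', tags=['b'], strip=True)
'x<b>bold</b>'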
386,013
def getall(self, key, failobj=None):
    if self.mmkeys is None:
        self._mmInit()
    k = self.mmkeys.get(key)
    if not k:
        return failobj
    return list(map(self.data.get, k))
Returns a list of all the matching values for key, containing a single entry for unambiguous matches and multiple entries for ambiguous matches.
386,014
def role_search(auth=None, **kwargs):
    cloud = get_operator_cloud(auth)
    kwargs = _clean_kwargs(**kwargs)
    return cloud.search_roles(**kwargs)
Search roles CLI Example: .. code-block:: bash salt '*' keystoneng.role_search salt '*' keystoneng.role_search name=role1 salt '*' keystoneng.role_search domain_id=b62e76fbeeff4e8fb77073f591cf211e
386,015
def set_exception(self, exception):
    was_handled = self._finish(self.errbacks, exception)
    if not was_handled:
        traceback.print_exception(
            type(exception), exception, exception.__traceback__)
Signal unsuccessful completion.
386,016
def desc(self, table):
    query = "desc {0}".format(table)
    response = self.raw_query(query)
    return response
Returns table description. >>> yql.desc('geo.countries')
386,017
def commit(self): didChipErase = False perfList = [] self._current_progress_fraction = builder.buffered_data_size / self._total_data_size chipErase = self._chip_erase if not didChipErase else False perf = builder.program(chip_erase=chipErase, progress_cb=self._progress_cb, smart_flash=self._smart_flash, fast_verify=self._trust_crc, keep_unwritten=self._keep_unwritten) perfList.append(perf) didChipErase = True self._progress_offset += self._current_progress_fraction self._log_performance(perfList) self._reset_state()
! @brief Write all collected data to flash. This routine ensures that chip erase is only used once if either the auto mode or chip erase mode are used. As an example, if two regions are to be written to and True was passed to the constructor for chip_erase (or if the session option was set), then only the first region will actually use chip erase. The second region will be forced to use sector erase. This will not result in extra erasing, as sector erase always verifies whether the sectors are already erased. This will, of course, also work correctly if the flash algorithm for the first region doesn't actually erase the entire chip (all regions). After calling this method, the loader instance can be reused to program more data.
386,018
def plot_gos(self, fout_img, goids=None, **kws_usr): gosubdagplot = self.get_gosubdagplot(goids, **kws_usr) gosubdagplot.plt_dag(fout_img)
Plot GO IDs.
386,019
def deserialize_durable_record_to_durable_model(record, durable_model): if record.get(EVENT_TOO_BIG_FLAG): return get_full_durable_object(record[][][][], record[][][][], durable_model) new_image = remove_global_dynamo_specific_fields(record[][]) data = {} for item, value in new_image.items(): data[item] = DESER.deserialize(value) return durable_model(**data)
Utility function that will take a Dynamo event record and turn it into the proper Durable Dynamo object. This will properly deserialize the ugly Dynamo datatypes away. :param record: :param durable_model: :return:
386,020
def get_process_flow(self, pid=None): pid = self._get_pid(pid) return self._call_rest_api(, +pid+, error=)
get_process_flow(self, pid=None) Get process in flow context. The response returns a sub-tree of the whole flow containing the requested process, its direct children processes, and all ancestors. You can navigate within the flow backward and forward by running this call on the children or ancestors of a given process. :Parameters: * *pid* (`string`) -- Identifier of an existing process
386,021
def _create_technical_words_dictionary(spellchecker_cache_path,
                                       relative_path,
                                       user_words,
                                       shadow):
    technical_terms_set = (user_words |
                           technical_words_from_shadow_contents(shadow))
    technical_words = Dictionary(technical_terms_set,
                                 "technical_words_" +
                                 relative_path.replace(os.path.sep, "_"),
                                 [os.path.realpath(relative_path)],
                                 spellchecker_cache_path)
    return technical_words
Create Dictionary at spellchecker_cache_path with technical words.
386,022
def auto_inline_code(self, node): assert isinstance(node, nodes.literal) if len(node.children) != 1: return None content = node.children[0] if not isinstance(content, nodes.Text): return None content = content.astext().strip() if content.startswith() and content.endswith(): if not self.config[]: return None content = content[1:-1] self.state_machine.reset(self.document, node.parent, self.current_level) return self.state_machine.run_role(, content=content) else: return None
Try to automatically generate nodes for inline literals. Parameters ---------- node : nodes.literal Original codeblock node Returns ------- tocnode: docutils node The converted toc tree node, None if conversion is not possible.
386,023
def keypoint_vflip(kp, rows, cols):
    x, y, angle, scale = kp
    c = math.cos(angle)
    s = math.sin(angle)
    angle = math.atan2(-s, c)   # mirror the angle about the x-axis
    return [x, (rows - 1) - y, angle, scale]
Flip a keypoint vertically around the x-axis.
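A worked call, assuming the function and the ``math`` import from the record above: on a 100-row image, row 20 maps to row 79, and the angle is mirrored about the x-axis.
>>> keypoint_vflip((10.0, 20.0, 0.0, 1.0), 100, 200)
[10.0, 79.0, -0.0, 1.0]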
386,024
def threads_init(gtk=True):
    x11.XInitThreads()
    if gtk:
        from gtk.gdk import threads_init
        threads_init()
Enables multithreading support in Xlib and PyGTK. See the module docstring for more info. :Parameters: gtk : bool May be set to False to skip the PyGTK module.
386,025
def write_roi(self, outfile=None, save_model_map=False, **kwargs): make_plots = kwargs.get(, False) save_weight_map = kwargs.get(, False) if outfile is None: pathprefix = os.path.join(self.config[][], ) elif not os.path.isabs(outfile): pathprefix = os.path.join(self.config[][], outfile) else: pathprefix = outfile pathprefix = utils.strip_suffix(pathprefix, [, , ]) prefix = os.path.basename(pathprefix) xmlfile = pathprefix + fitsfile = pathprefix + npyfile = pathprefix + self.write_xml(xmlfile) self.write_fits(fitsfile) if not self.config[][]: for c in self.components: c.like.logLike.saveSourceMaps(str(c.files[])) if save_model_map: self.write_model_map(prefix) if save_weight_map: self.write_weight_map(prefix) o = {} o[] = copy.deepcopy(self._roi_data) o[] = copy.deepcopy(self.config) o[] = fermipy.__version__ o[] = fermipy.get_st_version() o[] = {} for s in self.roi.sources: o[][s.name] = copy.deepcopy(s.data) for i, c in enumerate(self.components): o[][][i][ ] = copy.deepcopy(c.src_expscale) self.logger.info(, npyfile) np.save(npyfile, o) if make_plots: self.make_plots(prefix, None, **kwargs.get(, {}))
Write current state of the analysis to a file. This method writes an XML model definition, a ROI dictionary, and a FITS source catalog file. A previously saved analysis state can be reloaded from the ROI dictionary file with the `~fermipy.gtanalysis.GTAnalysis.load_roi` method. Parameters ---------- outfile : str String prefix of the output files. The extension of this string will be stripped when generating the XML, YAML and npy filenames. make_plots : bool Generate diagnostic plots. save_model_map : bool Save the current counts model to a FITS file.
386,026
def MapByteStream(self, byte_stream, **unused_kwargs): raise errors.MappingError( .format( self._data_type_definition.TYPE_INDICATOR))
Maps the data type on a byte stream. Args: byte_stream (bytes): byte stream. Returns: object: mapped value. Raises: MappingError: if the data type definition cannot be mapped on the byte stream.
386,027
def get_mime_data(self, mime_type): buffer_address = ffi.new() buffer_length = ffi.new() mime_type = ffi.new(, mime_type.encode()) cairo.cairo_surface_get_mime_data( self._pointer, mime_type, buffer_address, buffer_length) return (ffi.buffer(buffer_address[0], buffer_length[0]) if buffer_address[0] != ffi.NULL else None)
Return mime data previously attached to surface using the specified mime type. :param mime_type: The MIME type of the image data. :type mime_type: ASCII string :returns: A CFFI buffer object, or :obj:`None` if no data has been attached with the given mime type. *New in cairo 1.10.*
386,028
def create_volume(client, resource_group_name, name, location, template_file=None, template_uri=None): volume_properties = None if template_uri: volume_properties = shell_safe_json_parse(_urlretrieve(template_uri).decode(), preserve_order=True) elif template_file: volume_properties = get_file_json(template_file, preserve_order=True) volume_properties = json.loads(json.dumps(volume_properties)) else: raise CLIError() volume_properties[] = location return client.create(resource_group_name, name, volume_properties)
Create a volume.
386,029
def cut_video_stream(stream, start, end, fmt):
    with TemporaryDirectory() as tmp:
        in_file = Path(tmp) / f"in{fmt}"
        out_file = Path(tmp) / f"out{fmt}"
        in_file.write_bytes(stream)
        try:
            ret = subprocess.run(
                [
                    "ffmpeg",
                    "-ss", f"{start}",
                    "-i", f"{in_file}",
                    "-to", f"{end}",
                    "-c", "copy",
                    f"{out_file}",
                ],
                capture_output=True,
            )
        except FileNotFoundError:
            # ffmpeg is not installed: return the original stream unchanged
            result = stream
        else:
            if ret.returncode:
                result = stream
            else:
                result = out_file.read_bytes()
    return result
cut video stream from `start` to `end` time Parameters ---------- stream : bytes video file content start : float start time end : float end time Returns ------- result : bytes content of cut video
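A hedged usage sketch; ``ffmpeg`` must be on the PATH, and ``input.mp4`` / ``clip.mp4`` are hypothetical file names, not something from the source. If ffmpeg is missing or fails, the original bytes are returned unchanged.
>>> from pathlib import Path
>>> stream = Path("input.mp4").read_bytes()            # hypothetical input file
>>> clip = cut_video_stream(stream, 5.0, 15.0, ".mp4")
>>> _ = Path("clip.mp4").write_bytes(clip)              # the 5s-15s excerpt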
386,030
def getRandomBinaryTreeLeafNode(binaryTree):
    if binaryTree.internal == True:
        if random.random() > 0.5:
            return getRandomBinaryTreeLeafNode(binaryTree.left)
        else:
            return getRandomBinaryTreeLeafNode(binaryTree.right)
    else:
        return binaryTree
Get a random leaf node of the binary tree.
386,031
def copy_file(self, path, prefixed_path, source_storage):
    # Skip the file if it was already copied earlier
    if prefixed_path in self.copied_files:
        return self.log("Skipping '%s' (already copied earlier)" % path)
    # Delete the target file if needed or break
    if not self.delete_file(path, prefixed_path, source_storage):
        return
    source_path = source_storage.path(path)
    if self.dry_run:
        self.log("Pretending to copy '%s'" % source_path, level=1)
    else:
        self.log("Copying '%s'" % source_path, level=1)
        with source_storage.open(path) as source_file:
            self.storage.save(prefixed_path, source_file)
    self.copied_files.append(prefixed_path)
Attempt to copy ``path`` with storage
386,032
def update_comment(self, comment_id, body): path = req = ET.Element() ET.SubElement(req, ).text = str(int(comment_id)) comment = ET.SubElement(req, ) ET.SubElement(comment, ).text = str(body) return self._request(path, req)
Update a specific comment. This can be used to edit the content of an existing comment.
386,033
def default_panels(institute_id, case_name):
    # form field name inferred from the variable it populates
    panel_ids = request.form.getlist('panel_ids')
    controllers.update_default_panels(store, current_user, institute_id,
                                      case_name, panel_ids)
    return redirect(request.referrer)
Update default panels for a case.
386,034
def asList(self):
    base = [self._x, self._y]
    if not self._z is None:
        base.append(self._z)
    elif not self._m is None:
        base.append(self._m)
    return base
returns a Point value as a list of [x,y,<z>,<m>]
386,035
def intersection(self, *others):
    result = self.__copy__()
    _elements = result._elements
    _total = result._total
    for other in map(self._as_mapping, others):
        for element, multiplicity in list(_elements.items()):
            new_multiplicity = other.get(element, 0)
            if new_multiplicity < multiplicity:
                if new_multiplicity > 0:
                    _elements[element] = new_multiplicity
                    _total -= multiplicity - new_multiplicity
                else:
                    del _elements[element]
                    _total -= multiplicity
    result._total = _total
    return result
r"""Return a new multiset with elements common to the multiset and all others. >>> ms = Multiset('aab') >>> sorted(ms.intersection('abc')) ['a', 'b'] You can also use the ``&`` operator for the same effect. However, the operator version will only accept a set as other operator, not any iterable, to avoid errors. >>> ms = Multiset('aab') >>> sorted(ms & Multiset('aaac')) ['a', 'a'] For a variant of the operation which modifies the multiset in place see :meth:`intersection_update`. Args: others: The other sets intersect with the multiset. Can also be any :class:`~typing.Iterable`\[~T] or :class:`~typing.Mapping`\[~T, :class:`int`] which are then converted to :class:`Multiset`\[~T]. Returns: The multiset resulting from the intersection of the sets.
386,036
def set_basic_params( self, workers=None, zerg_server=None, fallback_node=None, concurrent_events=None, cheap_mode=None, stats_server=None, quiet=None, buffer_size=None, fallback_nokey=None, subscription_key=None, emperor_command_socket=None): super(RouterFast, self).set_basic_params(**filter_locals(locals(), [ , , , ])) self._set_aliased(, fallback_nokey, cast=bool) self._set_aliased(, subscription_key) self._set_aliased(, emperor_command_socket) return self
:param int workers: Number of worker processes to spawn. :param str|unicode zerg_server: Attach the router to a zerg server. :param str|unicode fallback_node: Fallback to the specified node in case of error. :param int concurrent_events: Set the maximum number of concurrent events router can manage. Default: system dependent. :param bool cheap_mode: Enables cheap mode. When the router is in cheap mode, it will not respond to requests until a node is available. This means that when there are no nodes subscribed, only your local app (if any) will respond. When all of the nodes go down, the router will return in cheap mode. :param str|unicode stats_server: Router stats server address to run at. :param bool quiet: Do not report failed connections to instances. :param int buffer_size: Set internal buffer size in bytes. Default: page size. :param bool fallback_nokey: Move to fallback node even if a subscription key is not found. :param str|unicode subscription_key: Skip uwsgi parsing and directly set a key. :param str|unicode emperor_command_socket: Set the emperor command socket that will receive spawn commands. See `.empire.set_emperor_command_params()`.
386,037
def accept(self, deviceId, device): storedDevice = self.devices.get(deviceId) if storedDevice is None: logger.info( + deviceId) storedDevice = Device(self.maxAgeSeconds) storedDevice.deviceId = deviceId storedDevice.dataHandler = AsyncHandler(, CSVLogger(, deviceId, self.dataDir)) else: logger.debug( + deviceId) storedDevice.payload = device storedDevice.lastUpdateTime = datetime.datetime.utcnow() self.devices.update({deviceId: storedDevice}) self.targetStateController.updateDeviceState(storedDevice.payload)
Adds the named device to the store. :param deviceId: :param device: :return:
386,038
def find_span_binsearch(degree, knot_vector, num_ctrlpts, knot, **kwargs):
    tol = kwargs.get('tol', 10e-6)
    n = num_ctrlpts - 1
    if abs(knot_vector[n + 1] - knot) <= tol:
        return n
    low = degree
    high = num_ctrlpts
    mid = (low + high) / 2
    mid = int(round(mid + tol))
    while (knot < knot_vector[mid]) or (knot >= knot_vector[mid + 1]):
        if knot < knot_vector[mid]:
            high = mid
        else:
            low = mid
        mid = int((low + high) / 2)
    return mid
Finds the span of the knot over the input knot vector using binary search. Implementation of Algorithm A2.1 from The NURBS Book by Piegl & Tiller. The NURBS Book states that the knot span index always starts from zero, i.e. for a knot vector [0, 0, 1, 1]; if FindSpan returns 1, then the knot is between the interval [0, 1). :param degree: degree, :math:`p` :type degree: int :param knot_vector: knot vector, :math:`U` :type knot_vector: list, tuple :param num_ctrlpts: number of control points, :math:`n + 1` :type num_ctrlpts: int :param knot: knot or parameter, :math:`u` :type knot: float :return: knot span :rtype: int
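A worked example using the classic knot vector from The NURBS Book (p = 2, eight control points), assuming the function exactly as extracted above: u = 5/2 lies in the half-open span [U[4], U[5]) = [2, 3), so the span index is 4.
>>> U = [0, 0, 0, 1, 2, 3, 4, 4, 5, 5, 5]
>>> find_span_binsearch(2, U, 8, 5.0 / 2.0)
4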
386,039
def __raise_user_error(self, view):
    raise foundations.exceptions.UserError(
        "{0} | Cannot perform action, View has been set read only!".format(
            self.__class__.__name__, view.objectName() or view))
Raises an error if the given View has been set read only and the user attempted to edit its content. :param view: View. :type view: QWidget
386,040
def kitchen_get(backend, kitchen_name, recipe): found_kitchen = DKKitchenDisk.find_kitchen_name() if found_kitchen is not None and len(found_kitchen) > 0: raise click.ClickException("You cannot get a kitchen into an existing kitchen directory structure.") if len(recipe) > 0: click.secho("%s - Getting kitchen and the recipes %s" % (get_datetime(), kitchen_name, str(recipe)), fg=) else: click.secho("%s - Getting kitchen " % (get_datetime(), kitchen_name), fg=) check_and_print(DKCloudCommandRunner.get_kitchen(backend.dki, kitchen_name, os.getcwd(), recipe))
Get an existing Kitchen
386,041
def guess_wxr_version(self, tree): for v in (, , ): try: tree.find( % (WP_NS % v)).text return v except AttributeError: pass raise CommandError()
We will try to guess the wxr version used to complete the wordpress xml namespace name.
386,042
def catalog_register(consul_url=None, token=None, **kwargs): *node1192.168.1.1redis127.0.0.18080redis_server1 ret = {} data = {} data[] = {} if not consul_url: consul_url = _get_config() if not consul_url: log.error() ret[] = ret[] = False return ret if in kwargs: data[] = kwargs[] if in kwargs: data[] = kwargs[] else: ret[] = ret[] = False return ret if in kwargs: if isinstance(kwargs[], list): _address = kwargs[][0] else: _address = kwargs[] data[] = _address else: ret[] = ret[] = False return ret if in kwargs: data[] = {} for k in kwargs[]: if kwargs[].get(k): data[][k] = kwargs[][k][0] if in kwargs: data[] = {} data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: _tags = kwargs[] if not isinstance(_tags, list): _tags = [_tags] data[][] = _tags if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[] = {} data[][] = kwargs[] if in kwargs: if kwargs[] not in (, , , ): ret[] = ret[] = False return ret data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] if in kwargs: data[][] = kwargs[] function = res = _query(consul_url=consul_url, function=function, token=token, method=, data=data) if res[]: ret[] = True ret[] = ( .format(kwargs[])) else: ret[] = False ret[] = ( .format(kwargs[])) ret[] = data return ret
Registers a new node, service, or check :param consul_url: The Consul server URL. :param dc: By default, the datacenter of the agent is queried; however, the dc can be provided using the "dc" parameter. :param node: The node to register. :param address: The address of the node. :param service: The service that will be registered. :param service_address: The address that the service listens on. :param service_port: The port for the service. :param service_id: A unique identifier for the service, if this is not provided "name" will be used. :param service_tags: Any tags associated with the service. :param check: The name of the health check to register :param check_status: The initial status of the check, must be one of unknown, passing, warning, or critical. :param check_service: The service that the check is performed against. :param check_id: Unique identifier for the service. :param check_notes: An opaque field that is meant to hold human-readable text. :return: Boolean & message of success or failure. CLI Example: .. code-block:: bash salt '*' consul.catalog_register node='node1' address='192.168.1.1' service='redis' service_address='127.0.0.1' service_port='8080' service_id='redis_server1'
386,043
def get_shape(self, prune=False, hs_dims=None):
    if not prune:
        return self.as_array(include_transforms_for_dims=hs_dims).shape
    shape = compress_pruned(
        self.as_array(prune=True, include_transforms_for_dims=hs_dims)
    ).shape
    # drop dimensions that were reduced to a single element
    return tuple(n for n in shape if n > 1)
Tuple of array dimensions' lengths. It returns a tuple of ints, each representing the length of a cube dimension, in the order those dimensions appear in the cube. Pruning is supported. Dimensions that get reduced to a single element (e.g. due to pruning) are removed from the returning shape, thus allowing for the differentiation between true 2D cubes (over which statistical testing can be performed) and essentially 1D cubes (over which it can't). Usage: >>> shape = get_shape() >>> pruned_shape = get_shape(prune=True)
386,044
def check_key(user, key, enc, comment, options, config=, cache_keys=None, fingerprint_hash_type=None): * if cache_keys is None: cache_keys = [] enc = _refine_enc(enc) current = auth_keys(user, config=config, fingerprint_hash_type=fingerprint_hash_type) nline = _format_auth_line(key, enc, comment, options)
Check to see if a key needs updating, returns "update", "add" or "exists" CLI Example: .. code-block:: bash salt '*' ssh.check_key <user> <key> <enc> <comment> <options>
386,045
def ppo_original_params():
    hparams = ppo_atari_base()
    hparams.learning_rate_constant = 2.5e-4
    hparams.gae_gamma = 0.99
    hparams.gae_lambda = 0.95
    hparams.clipping_coef = 0.1
    hparams.value_loss_coef = 1
    hparams.entropy_loss_coef = 0.01
    hparams.eval_every_epochs = 200
    hparams.dropout_ppo = 0.1
    hparams.epoch_length = 50
    hparams.optimization_batch_size = 20
    return hparams
Parameters based on the original PPO paper.
386,046
def delete(self, *keys):
    key_counter = 0
    for key in map(self._encode, keys):
        if key in self.redis:
            del self.redis[key]
            key_counter += 1
        if key in self.timeouts:
            del self.timeouts[key]
    return key_counter
Emulate delete.
386,047
def QA_util_random_with_zh_stock_code(stockNumber=10):
    codeList = []
    pt = 0
    for i in range(stockNumber):
        if pt == 0:
            # Shanghai main board: 60XXXX
            iCode = random.randint(600000, 609999)
            aCode = "%06d" % iCode
        elif pt == 1:
            iCode = random.randint(600000, 600999)
            aCode = "%06d" % iCode
        elif pt == 2:
            # Shenzhen main board: 00XXXX
            iCode = random.randint(2000, 9999)
            aCode = "%06d" % iCode
        elif pt == 3:
            # ChiNext board: 300XXX
            iCode = random.randint(300000, 300999)
            aCode = "%06d" % iCode
        elif pt == 4:
            iCode = random.randint(2000, 2999)
            aCode = "%06d" % iCode
        pt = (pt + 1) % 5
        codeList.append(aCode)
    return codeList
Randomly generate stock codes. :param stockNumber: number of codes to generate :return: ['60XXXX', '00XXXX', '300XXX']
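A short check of the generator above; the codes themselves are random, so only their count and fixed six-digit width are asserted.
>>> codes = QA_util_random_with_zh_stock_code(5)
>>> len(codes)
5
>>> all(len(c) == 6 for c in codes)
True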
386,048
def conn_handler(self, session: ClientSession, proxy: str = None) -> ConnectionHandler: return ConnectionHandler("https", "wss", self.server, self.port, "", session, proxy)
Return connection handler instance for the endpoint :param session: AIOHTTP client session instance :param proxy: Proxy url :return:
386,049
def colors_to_dict(colors, img): return { "wallpaper": img, "alpha": util.Color.alpha_num, "special": { "background": colors[0], "foreground": colors[15], "cursor": colors[15] }, "colors": { "color0": colors[0], "color1": colors[1], "color2": colors[2], "color3": colors[3], "color4": colors[4], "color5": colors[5], "color6": colors[6], "color7": colors[7], "color8": colors[8], "color9": colors[9], "color10": colors[10], "color11": colors[11], "color12": colors[12], "color13": colors[13], "color14": colors[14], "color15": colors[15] } }
Convert list of colors to pywal format.
386,050
def _set_compact_flash(self, v, load=False): if hasattr(v, "_utype"): v = v._utype(v) try: t = YANGDynClass(v,base=compact_flash.compact_flash, is_container=, presence=False, yang_name="compact-flash", rest_name="compact-flash", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions={u: {u: u, u: None}}, namespace=, defining_module=, yang_type=, is_config=True) except (TypeError, ValueError): raise ValueError({ : , : "container", : , }) self.__compact_flash = t if hasattr(self, ): self._set()
Setter method for compact_flash, mapped from YANG variable /system_monitor/compact_flash (container) If this variable is read-only (config: false) in the source YANG file, then _set_compact_flash is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_compact_flash() directly.
386,051
def confirm_commit(jid): * if __grains__[] == : confirmed = __salt__[]() confirmed[] = confirmed.pop() confirmed[] = confirmed.pop() else: confirmed = cancel_commit(jid) if confirmed[]: confirmed[] = .format(jid=jid) return confirmed
.. versionadded:: 2019.2.0 Confirm a commit scheduled to be reverted via the ``revert_in`` and ``revert_at`` arguments from the :mod:`net.load_template <salt.modules.napalm_network.load_template>` or :mod:`net.load_config <salt.modules.napalm_network.load_config>` execution functions. The commit ID is displayed when the commit confirmed is scheduled via the functions named above. CLI Example: .. code-block:: bash salt '*' net.confirm_commit 20180726083540640360
386,052
async def Track(self, payloads): _params = dict() msg = dict(type=, request=, version=1, params=_params) _params[] = payloads reply = await self.rpc(msg) return reply
payloads : typing.Sequence[~Payload] Returns -> typing.Sequence[~PayloadResult]
386,053
def _perturbation(self):
    if self.P > 1:
        scales = []
        for term_i in range(self.n_terms):
            _scales = SP.randn(self.diag[term_i].shape[0])
            if self.offset[term_i] > 0:
                _scales = SP.concatenate((_scales, SP.zeros(1)))
            scales.append(_scales)
        scales = SP.concatenate(scales)
    else:
        scales = SP.randn(self.vd.getNumberScales())
    return scales
Returns Gaussian perturbation
386,054
def cmd_rcbind(self, args):
    if len(args) < 1:
        print("Usage: rcbind <dsmmode>")
        return
    self.master.mav.command_long_send(self.settings.target_system,
                                      self.settings.target_component,
                                      mavutil.mavlink.MAV_CMD_START_RX_PAIR,
                                      0,
                                      float(args[0]),
                                      0, 0, 0, 0, 0, 0)
start RC bind
386,055
def from_arrays(cls, arrays, sortorder=None, names=None):
    error_msg = "Input must be a list / sequence of array-likes."
    if not is_list_like(arrays):
        raise TypeError(error_msg)
    elif is_iterator(arrays):
        arrays = list(arrays)
    # check that every element is array-like
    for array in arrays:
        if not is_list_like(array):
            raise TypeError(error_msg)
    # check that all arrays have equal length
    for i in range(1, len(arrays)):
        if len(arrays[i]) != len(arrays[i - 1]):
            raise ValueError('all arrays must be same length')
    from pandas.core.arrays.categorical import _factorize_from_iterables
    codes, levels = _factorize_from_iterables(arrays)
    if names is None:
        names = [getattr(arr, "name", None) for arr in arrays]
    return MultiIndex(levels=levels, codes=codes, sortorder=sortorder,
                      names=names, verify_integrity=False)
Convert arrays to MultiIndex. Parameters ---------- arrays : list / sequence of array-likes Each array-like gives one level's value for each data point. len(arrays) is the number of levels. sortorder : int or None Level of sortedness (must be lexicographically sorted by that level). names : list / sequence of str, optional Names for the levels in the index. Returns ------- index : MultiIndex See Also -------- MultiIndex.from_tuples : Convert list of tuples to MultiIndex. MultiIndex.from_product : Make a MultiIndex from cartesian product of iterables. MultiIndex.from_frame : Make a MultiIndex from a DataFrame. Examples -------- >>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']] >>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color')) MultiIndex(levels=[[1, 2], ['blue', 'red']], codes=[[0, 0, 1, 1], [1, 0, 1, 0]], names=['number', 'color'])
386,056
def p_const_map(self, p):
    p[0] = ast.ConstMap(dict(p[2]), p.lineno(1))
const_map : '{' const_map_seq '}'
386,057
def vor_plot(self, which=): import matplotlib.cm as cm import matplotlib.pyplot as plt sm = self.SM if sm.light_vor is None: raise ValueError() if which is : title = ax = sm.vor_surf.plot2d(, alpha=0.15, ret=True, title=title) ax.scatter(sm.azimuth_zenit[:, 0],sm.azimuth_zenit[:, 1], c=) ax.scatter(sm.vor_centers[:, 0], sm.vor_centers[:,1], s = 30, c = ) ax.set_xlabel() ax.set_ylabel() plt.show() elif which is : cmap = cm.Blues title = +\ ax = sm.vor_surf.plot2d(sm.vor_freq, cmap=cmap, alpha=0.85, colorbar=True, title=title, ret=True, cbar_label=) ax.set_xlabel() ax.set_ylabel() plt.show() elif which is : cmap = cm.YlOrRd title = +\ data = sm.proj_vor/sm.vor_freq proj_data = data*100/data.max() ax = sm.vor_surf.plot2d(proj_data, alpha=0.85, cmap=cmap, colorbar=True, title=title, ret=True, cbar_label=) ax.set_xlabel() ax.set_ylabel() plt.title(+str(data.max())+) plt.show() else: raise ValueError(+which)
Voronoi diagram visualizations. There are three types: 1. **vor**: Voronoi diagram of the Solar Horizont. 2. **freq**: Frequency of Sun positions in t in the Voronoi diagram of the Solar Horizont. 3. **data**: Accumulated time integral of the data projected in the Voronoi diagram of the Solar Horizont. :param which: Type of visualization. :type which: str :returns: None
386,058
def _int2coord(x, y, dim):
    assert dim >= 1
    assert x < dim
    assert y < dim
    lng = x / dim * 360 - 180
    lat = y / dim * 180 - 90
    return lng, lat
Convert x, y values in dim x dim-grid coordinate system into lng, lat values. Parameters: x: int x value of point [0, dim); corresponds to longitude y: int y value of point [0, dim); corresponds to latitude dim: int Number of coding points each x, y value can take. Corresponds to 2^level of the hilbert curve. Returns: Tuple[float, float]: (lng, lat) lng longitude value of coordinate [-180.0, 180.0]; corresponds to X axis lat latitude value of coordinate [-90.0, 90.0]; corresponds to Y axis
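Two corner cases of the grid-to-coordinate mapping above, with dim = 1024 (a level-10 grid): the origin maps to the south-west corner and the grid midpoint maps to (0, 0).
>>> _int2coord(0, 0, 1024)
(-180.0, -90.0)
>>> _int2coord(512, 512, 1024)
(0.0, 0.0)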
386,059
def decode_xml(elem, _in_bind = False): obj = xml.tag_to_object(elem) attrs = {} def a2d(*props): for p in props: attrs[p] = elem.get(p) if issubclass(obj, om.CommonAttributes): a2d("id") if issubclass(obj, om.CDBaseAttribute): a2d("cdbase") if issubclass(obj, om.OMObject): a2d("version") attrs["omel"] = decode_xml(elem[0]) elif issubclass(obj, om.OMReference): a2d("href") elif issubclass(obj, om.OMInteger): attrs["integer"] = int(elem.text) elif issubclass(obj, om.OMFloat): attrs["double"] = float(elem.get()) elif issubclass(obj, om.OMString): attrs["string"] = elem.text elif issubclass(obj, om.OMBytes): try: attrs["bytes"] = base64.b64decode(elem.text) except TypeError: attrs["bytes"] = base64.b64decode(bytes(elem.text, "ascii")) elif issubclass(obj, om.OMSymbol): a2d("name", "cd") elif issubclass(obj, om.OMVariable): a2d("name") elif issubclass(obj, om.OMForeign): attrs["obj"] = elem.text a2d("encoding") elif issubclass(obj, om.OMApplication): attrs["elem"] = decode_xml(elem[0]) attrs["arguments"] = list(map(decode_xml, elem[1:])) elif issubclass(obj, om.OMAttribution): attrs["pairs"] = decode_xml(elem[0]) attrs["obj"] = decode_xml(elem[1]) elif issubclass(obj, om.OMAttributionPairs): if not _in_bind: attrs["pairs"] = [(decode_xml(k), decode_xml(v)) for k, v in zip(elem[::2], elem[1::2])] else: obj = om.OMAttVar attrs["pairs"] = decode_xml(elem[0], True) attrs["obj"] = decode_xml(elem[1], True) elif issubclass(obj, om.OMBinding): attrs["binder"] = decode_xml(elem[0]) attrs["vars"] = decode_xml(elem[1]) attrs["obj"] = decode_xml(elem[2]) elif issubclass(obj, om.OMBindVariables): attrs["vars"] = list(map(lambda x:decode_xml(x, True), elem[:])) elif issubclass(obj, om.OMError): attrs["name"] = decode_xml(elem[0]) attrs["params"] = list(map(decode_xml, elem[1:])) else: raise TypeError("Expected OMAny, found %s." % obj.__name__) return obj(**attrs)
Decodes an XML element into an OpenMath object. :param elem: Element to decode. :type elem: etree._Element :param _in_bind: Internal flag used to indicate if we should decode within an OMBind. :type _in_bind: bool :rtype: OMAny
386,060
def normalize(self, stats:Collection[Tensor]=None, do_x:bool=True, do_y:bool=False)->None: "Add normalize transform using `stats` (defaults to `DataBunch.batch_stats`)" if getattr(self,,False): raise Exception() if stats is None: self.stats = self.batch_stats() else: self.stats = stats self.norm,self.denorm = normalize_funcs(*self.stats, do_x=do_x, do_y=do_y) self.add_tfm(self.norm) return self
Add normalize transform using `stats` (defaults to `DataBunch.batch_stats`)
386,061
def read_data(self, **kwargs): now = arrow.utcnow().to(settings.TIME_ZONE) my_toots = [] search = {} since_id = None trigger_id = kwargs[] date_triggered = arrow.get(kwargs[]) def _get_toots(toot_api, toot_obj, search): max_id = 0 if toot_obj.max_id is None else toot_obj.max_id since_id = 0 if toot_obj.since_id is None else toot_obj.since_id statuses = if toot_obj.tag: search[] = toot_obj.tag statuses = toot_api.search(**search) statuses = statuses[] elif toot_obj.tooter: search[] = toot_obj.tooter if toot_obj.fav: statuses = toot_api.favourites(max_id=max_id, since_id=since_id) else: user_id = toot_api.account_search(q=toot_obj.tooter) statuses = toot_api.account_statuses( id=user_id[0][], max_id=toot_obj.max_id, since_id=toot_obj.since_id) return search, statuses if self.token is not None: kw = {: , : , : trigger_id} toot_obj = super(ServiceMastodon, self).read_data(**kw) us = UserService.objects.get(token=self.token, name=) try: toot_api = MastodonAPI( client_id=us.client_id, client_secret=us.client_secret, access_token=self.token, api_base_url=us.host, ) except ValueError as e: logger.error(e) update_result(trigger_id, msg=e, status=False) if toot_obj.since_id is not None and toot_obj.since_id > 0: since_id = toot_obj.since_id search = {: toot_obj.since_id} search, statuses = _get_toots(toot_api, toot_obj, search) if len(statuses) > 0: newest = None for status in statuses: if newest is None: newest = True search[] = max_id = status[] since_id = search[] = statuses[-1][] - 1 search, statuses = _get_toots(toot_api, toot_obj, search) newest = None if len(statuses) > 0: my_toots = [] for s in statuses: if newest is None: newest = True max_id = s[] - 1 toot_name = s[][] title = _(. format(us.host, toot_name)) my_date = arrow.get(s[]).to( settings.TIME_ZONE) published = arrow.get(my_date).to(settings.TIME_ZONE) if date_triggered is not None and \ published is not None and \ now >= published >= date_triggered: my_toots.append({: title, : s[], : s[], : my_date}) self.send_digest_event(trigger_id, title, s[]) cache.set( + str(trigger_id), my_toots) Mastodon.objects.filter(trigger_id=trigger_id).update( since_id=since_id, max_id=max_id) return my_toots
get the data from the service :param kwargs: contain keyword args : trigger_id at least :type kwargs: dict :rtype: list
386,062
def import_batch(self, filename):
    batch = self.batch_cls()
    json_file = self.json_file_cls(name=filename, path=self.path)
    try:
        deserialized_txs = json_file.deserialized_objects
    except JSONFileError as e:
        raise TransactionImporterError(e) from e
    try:
        batch.populate(deserialized_txs=deserialized_txs,
                       filename=json_file.name)
    except (
        BatchDeserializationError,
        InvalidBatchSequence,
        BatchAlreadyProcessed,
    ) as e:
        raise TransactionImporterError(e) from e
    batch.save()
    batch.update_history()
    return batch
Imports the batch of outgoing transactions into model IncomingTransaction.
386,063
def run_evaluate(self, *args, **kwargs) -> None:
    if self._needs_evaluation:
        for _, item in self._nested_items.items():
            item.run_evaluate()
Evaluates the current item :returns An evaluation result object containing the result, or reasons why evaluation failed
386,064
def _get_cached_style_urls(self, asset_url_path):
    try:
        cached_styles = os.listdir(self.cache_path)
    except IOError as ex:
        if ex.errno != errno.ENOENT and ex.errno != errno.ESRCH:
            raise
        return []
    except OSError:
        return []
    # the cached assets are stylesheets; the '.css' suffix is inferred
    return [posixpath.join(asset_url_path, style)
            for style in cached_styles if style.endswith('.css')]
Gets the URLs of the cached styles.
386,065
def _dt_to_epoch(dt):
    try:
        epoch = dt.timestamp()
    except AttributeError:
        # Python 2 fallback: datetime has no timestamp() method
        epoch = (dt - datetime(1970, 1, 1)).total_seconds()
    return epoch
Convert datetime to epoch seconds.
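A small check of the helper above. Note that ``datetime.timestamp()`` interprets naive datetimes as local time, so the example uses an explicit UTC timezone to keep the result deterministic.
>>> from datetime import datetime, timezone
>>> _dt_to_epoch(datetime(1970, 1, 2, tzinfo=timezone.utc))
86400.0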
386,066
def validate_registry_uri_authority(auth: str) -> None: if is_ens_domain(auth) is False and not is_checksum_address(auth): raise ValidationError(f"{auth} is not a valid registry URI authority.")
Raise an exception if the authority is not a valid ENS domain or a valid checksummed contract address.
386,067
def _get_data_bytes_or_stream_only(param_name, param_value):
    if param_value is None:
        return b''
    # accept bytes or anything file-like (has a read() method)
    if isinstance(param_value, bytes) or hasattr(param_value, 'read'):
        return param_value
    raise TypeError(_ERROR_VALUE_SHOULD_BE_BYTES_OR_STREAM.format(param_name))
Validates the request body passed in is a stream/file-like or bytes object.
386,068
def _init_fld2col_widths(self):
    fld2col_widths = GoSubDagWr.fld2col_widths.copy()
    for fld, wid in self.oprtfmt.default_fld2col_widths.items():
        fld2col_widths[fld] = wid
    for fld in get_hdridx_flds():
        fld2col_widths[fld] = 2
    return fld2col_widths
Return default column widths for writing an Excel Spreadsheet.
386,069
def authenticate(self, request): try: key = request.data[] except KeyError: return try: token = AuthToken.objects.get(key=key) except AuthToken.DoesNotExist: return return (token.user, token)
Authenticate a user from a token form field Errors thrown here will be swallowed by django-rest-framework, and it expects us to return None if authentication fails.
386,070
def _shrink(self):
    cursize = self._current_size
    while cursize > self._maxsize:
        name, value = self.dynamic_entries.pop()
        cursize -= table_entry_size(name, value)
    self._current_size = cursize
Shrinks the dynamic table to be at or below maxsize
386,071
def pretty_print(self, indent=0): s = tab = *indent s += %self.tag if isinstance(self.value, basestring): s += self.value else: s += for e in self.value: s += e.pretty_print(indent+4) s += return s
Print the document without tags using indentation
386,072
def _get_char(self, win, char):
    def get_check_next_byte():
        char = win.getch()
        if 128 <= char <= 191:
            return char
        else:
            raise UnicodeError
    bytes = []
    if char <= 127:
        # 1-byte (ASCII) character
        bytes.append(char)
    elif 192 <= char <= 223:
        # 2-byte UTF-8 sequence
        bytes.append(char)
        bytes.append(get_check_next_byte())
    elif 224 <= char <= 239:
        # 3-byte UTF-8 sequence
        bytes.append(char)
        bytes.append(get_check_next_byte())
        bytes.append(get_check_next_byte())
    elif 240 <= char <= 244:
        # 4-byte UTF-8 sequence
        bytes.append(char)
        bytes.append(get_check_next_byte())
        bytes.append(get_check_next_byte())
        bytes.append(get_check_next_byte())
    while 0 in bytes:
        bytes.remove(0)
    if version_info < (3, 0):
        out = ''.join([chr(b) for b in bytes])
    else:
        buf = bytearray(bytes)
        out = self._decode_string(buf)
    return out
no zero byte allowed
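The lead-byte ranges tested above are the standard UTF-8 sequence-length markers; a quick standalone check (independent of curses) for a two-byte character:
>>> ch = 'é'.encode('utf-8')
>>> len(ch), ch[0]
(2, 195)
>>> 192 <= ch[0] <= 223          # the first byte announces a 2-byte sequence
True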
386,073
def mean(name, num, minimum=0, maximum=0, ref=None):
    return calc(
        name=name,
        num=num,
        oper='mean',
        minimum=minimum,
        maximum=maximum,
        ref=ref
    )
Calculates the mean of the ``num`` most recent values. Requires a list. USAGE: .. code-block:: yaml foo: calc.mean: - name: myregentry - num: 5
386,074
def _parse_selector(self, scoped=True, allow_periods_in_scope=False): if self._current_token.kind != tokenize.NAME: self._raise_syntax_error() begin_line_num = self._current_token.begin[0] begin_char_num = self._current_token.begin[1] end_char_num = self._current_token.end[1] line = self._current_token.line selector_parts = [] step_parity = 0 while (step_parity == 0 and self._current_token.kind == tokenize.NAME or step_parity == 1 and self._current_token.value in (, )): selector_parts.append(self._current_token.value) step_parity = not step_parity end_char_num = self._current_token.end[1] self._advance_one_token() self._skip_whitespace_and_comments() scoped_selector = .join(selector_parts) untokenized_scoped_selector = line[begin_char_num:end_char_num] return scoped_selector
Parse a (possibly scoped) selector. A selector is a sequence of one or more valid Python-style identifiers separated by periods (see also `SelectorMap`). A scoped selector is a selector that may be preceded by scope names (separated by slashes). Args: scoped: Whether scopes are allowed. allow_periods_in_scope: Whether to allow period characters in the scope names preceding the selector. Returns: The parsed selector (as a string). Raises: SyntaxError: If the scope or selector is malformatted.
386,075
def begin_auth(): repository = request.headers[] if repository not in config[]: return fail(no_such_repo_msg) repository_path = config[][repository][] conn = auth_db_connect(cpjoin(repository_path, )); gc_tokens(conn) auth_token = base64.b64encode(pysodium.randombytes(35)).decode() conn.execute("insert into tokens (expires, token, ip) values (?,?,?)", (time.time() + 30, auth_token, request.environ[])) conn.commit() return success({ : auth_token})
Request authentication token to sign
386,076
def create_cluster(dc_ref, cluster_name, cluster_spec): dc_name = get_managed_object_name(dc_ref) log.trace(%s\%s\, cluster_name, dc_name) try: dc_ref.hostFolder.CreateClusterEx(cluster_name, cluster_spec) except vim.fault.NoPermission as exc: log.exception(exc) raise salt.exceptions.VMwareApiError( .format(exc.privilegeId)) except vim.fault.VimFault as exc: log.exception(exc) raise salt.exceptions.VMwareApiError(exc.msg) except vmodl.RuntimeFault as exc: log.exception(exc) raise salt.exceptions.VMwareRuntimeError(exc.msg)
Creates a cluster in a datacenter. dc_ref The parent datacenter reference. cluster_name The cluster name. cluster_spec The cluster spec (vim.ClusterConfigSpecEx). Defaults to None.
386,077
def com_google_fonts_check_whitespace_widths(ttFont):
    from fontbakery.utils import get_glyph_name
    space_name = get_glyph_name(ttFont, 0x0020)
    nbsp_name = get_glyph_name(ttFont, 0x00A0)
    # advance widths come from the horizontal metrics ('hmtx') table
    space_width = ttFont['hmtx'][space_name][0]
    nbsp_width = ttFont['hmtx'][nbsp_name][0]
    if space_width > 0 and space_width == nbsp_width:
        yield PASS, "Whitespace and non-breaking space have the same width."
    else:
        yield FAIL, ("Whitespace and non-breaking space have differing width:"
                     " Whitespace ({}) is {} font units wide, non-breaking space"
                     " ({}) is {} font units wide. Both should be positive and the"
                     " same.").format(space_name, space_width,
                                      nbsp_name, nbsp_width)
Whitespace and non-breaking space have the same width?
386,078
def arp(): * ret = {} out = __salt__[]() for line in out.splitlines(): comps = line.split() if len(comps) < 4: continue if __grains__[] == : if not in comps[-1]: continue ret[comps[-1]] = comps[1] elif __grains__[] == : if comps[0] == or comps[1] == : continue ret[comps[1]] = comps[0] elif __grains__[] == : if comps[0] in (, ): continue ret[comps[3]] = comps[1].strip().strip() else: ret[comps[3]] = comps[1].strip().strip() return ret
Return the arp table from the minion .. versionchanged:: 2015.8.0 Added support for SunOS CLI Example: .. code-block:: bash salt '*' network.arp
386,079
def get_receive(self, script_list): events = defaultdict(set) for script in script_list: if self.script_start_type(script) == self.HAT_WHEN_I_RECEIVE: event = script.blocks[0].args[0].lower() events[event].add(script) return events
Return a mapping from each received event contained in script_list to the set of scripts that receive it.
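A standalone sketch of the grouping pattern used above, with plain tuples standing in for parsed Scratch scripts of the form ("when I receive", event_name, ...); the real implementation inspects block objects rather than tuples.

from collections import defaultdict

scripts = [
    ("when I receive", "Start Game", "show"),
    ("when I receive", "start game", "play sound"),
    ("when green flag clicked", None, "hide"),
]

events = defaultdict(set)
for script in scripts:
    if script[0] == "when I receive":
        # Lower-casing groups receivers of the same event regardless of case.
        events[script[1].lower()].add(script)

print(dict(events))   # both receivers are grouped under 'start game'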
386,080
def evaluate_objective(self): self.Y_new, cost_new = self.objective.evaluate(self.suggested_sample) self.cost.update_cost_model(self.suggested_sample, cost_new) self.Y = np.vstack((self.Y,self.Y_new))
Evaluates the objective
386,081
def stamp(name, backdate=None, unique=None, keep_subdivisions=None,
          quick_print=None, un=None, ks=None, qp=None):
    t = timer()
    if f.t.stopped:
        raise StoppedError("Cannot stamp stopped timer.")
    if f.t.paused:
        raise PausedError("Cannot stamp paused timer.")
    if backdate is None:
        t_stamp = t
    else:
        if not isinstance(backdate, float):
            raise TypeError("Backdate must be type float.")
        if backdate > t:
            raise BackdateError("Cannot backdate to future time.")
        if backdate < f.t.last_t:
            raise BackdateError("Cannot backdate to time earlier than last stamp.")
        t_stamp = backdate
    elapsed = t_stamp - f.t.last_t
    # OR the long and short forms; fall back to the global defaults when
    # neither is given.
    unique = SET['UN'] if (unique is None and un is None) else bool(unique or un)
    keep_subdivisions = (SET['KS'] if (keep_subdivisions is None and ks is None)
                         else bool(keep_subdivisions or ks))
    quick_print = SET['QP'] if (quick_print is None and qp is None) else bool(quick_print or qp)
    _stamp(name, elapsed, unique, keep_subdivisions, quick_print)
    tmp_self = timer() - t
    f.t.self_cut += tmp_self
    f.t.last_t = t_stamp + tmp_self
    return t
Mark the end of a timing interval. Notes: If keeping subdivisions, each subdivision currently awaiting assignment to a stamp (i.e. ended since the last stamp in this level) will be assigned to this one. Otherwise, all awaiting ones will be discarded after aggregating their self times into the current timer. If both long- and short-form are present, they are OR'ed together. If neither are present, the current global default is used. Backdating: record a stamp as if it happened at an earlier time. Backdate time must be in the past but more recent than the latest stamp. (This can be useful for parallel applications, wherein a sub- process can return times of interest to the master process.) Warning: When backdating, awaiting subdivisions will be assigned as normal, with no additional checks for validity. Args: name (any): The identifier for this interval, processed through str() backdate (float, optional): time to use for stamp instead of current unique (bool, optional): enforce uniqueness keep_subdivisions (bool, optional): keep awaiting subdivisions quick_print (bool, optional): print elapsed interval time un (bool, optional): short-form for unique ks (bool, optional): short-form for keep_subdivisions qp (bool, optional): short-form for quick_print Returns: float: The current time. Raises: BackdateError: If the given backdate time is out of range. PausedError: If the timer is paused. StoppedError: If the timer is stopped. TypeError: If the given backdate value is not type float.
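A usage sketch of stamping with a backdate. It assumes the function above is exposed by a timer package imported here as `gt` (the import name and the report() helper are assumptions); the backdate value reuses the reading returned by an earlier stamp so both times come from the same clock.

import time
import gtimer as gt   # assumed import name for the module containing stamp()

time.sleep(0.1)
t_phase_one = gt.stamp('phase_one')      # stamp() returns the current timer reading

time.sleep(0.2)
# Record the next interval as if it ended 0.05 s after phase_one, e.g. a
# completion time reported back by a worker process.
gt.stamp('worker_result', backdate=t_phase_one + 0.05)

time.sleep(0.1)
gt.stamp('wrap_up')
print(gt.report())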
386,082
def WriteLine(log: Any, consoleColor: int = -1, writeToFile: bool = True,
              printToStdout: bool = True, logFile: str = None) -> None:
    Logger.Write('{}\n'.format(log), consoleColor, writeToFile, printToStdout, logFile)
Write log as a single line: a trailing newline is appended before delegating to Logger.Write.

log: any type.
consoleColor: int, a value in class `ConsoleColor`, such as `ConsoleColor.DarkGreen`.
writeToFile: bool.
printToStdout: bool.
logFile: str, log file path.
386,083
def fulltext_scan_ids(self, query_id=None, query_fc=None,
                      preserve_order=True, indexes=None):
    it = self._fulltext_scan(query_id, query_fc, feature_names=False,
                             preserve_order=preserve_order,
                             indexes=indexes)
    for hit in it:
        yield hit['_score'], did(hit['_id'])
Fulltext search for identifiers.

Yields an iterable of pairs (score, identifier) corresponding to the search results of the fulltext search in ``query``. This will only search text indexed under the given feature named ``fname``.

Note that, unless ``preserve_order`` is set to True, the ``score`` will always be 0.0, and the results will be unordered. ``preserve_order`` set to True will cause the results to be scored and be ordered by score, but you should expect to see a decrease in performance.

:param str fname: The feature to search.
:param unicode query: The query.
:rtype: Iterable of ``(score, content_id)``
386,084
def _escape_jid(jid): jid = six.text_type(jid) jid = re.sub(r"'*", "", jid) return jid
Do proper formatting of the jid
386,085
def fstab(config='/etc/fstab'):
    ret = {}
    if not os.path.isfile(config):
        return ret
    with salt.utils.files.fopen(config) as ifile:
        for line in ifile:
            line = salt.utils.stringutils.to_unicode(line)
            try:
                if __grains__['kernel'] == 'SunOS':
                    # Skip comment lines in vfstab.
                    if line[0] == '#':
                        continue
                    entry = _vfstab_entry.dict_from_line(
                        line)
                else:
                    entry = _fstab_entry.dict_from_line(
                        line,
                        _fstab_entry.compatibility_keys)

                entry['opts'] = entry['opts'].split(',')

                while entry['name'] in ret:
                    entry['name'] += '_'

                ret[entry.pop('name')] = entry
            except _fstab_entry.ParseError:
                pass
            except _vfstab_entry.ParseError:
                pass

    return ret
.. versionchanged:: 2016.3.2 List the contents of the fstab CLI Example: .. code-block:: bash salt '*' mount.fstab
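A self-contained sketch of parsing one fstab line into the conventional six columns (device, mount point, filesystem type, options, dump, pass). This only mirrors what the _fstab_entry helper above is assumed to do; the real Salt parser also handles compatibility key names and malformed lines.

def parse_fstab_line(line):
    if line.startswith('#') or not line.strip():
        return None
    device, name, fstype, opts, dump, pass_num = line.split()[:6]
    return {
        'device': device,
        'name': name,
        'fstype': fstype,
        'opts': opts.split(','),
        'dump': int(dump),
        'pass_num': int(pass_num),
    }

entry = parse_fstab_line("UUID=abcd-1234 / ext4 defaults,noatime 0 1")
print(entry['name'], entry['opts'])   # / ['defaults', 'noatime']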
386,086
def _tr_system(line_info):
    "Translate lines escaped with: !"
    cmd = line_info.line.lstrip().lstrip(ESC_SHELL)
    return '%sget_ipython().system(%r)' % (line_info.pre, cmd)
Translate lines escaped with: !
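A runnable illustration of the rewrite this transformer performs, with `pre` standing in for any leading indentation the splitter captured; the helper below is a simplified stand-in, not the original line_info-based API.

ESC_SHELL = '!'

def tr_system(pre, line):
    cmd = line.lstrip().lstrip(ESC_SHELL)
    return '%sget_ipython().system(%r)' % (pre, cmd)

print(tr_system('', '!ls -l /tmp'))
# get_ipython().system('ls -l /tmp')
print(tr_system('    ', '    !pip install requests'))
#     get_ipython().system('pip install requests')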
386,087
def remove_existing_pidfile(pidfile_path): try: os.remove(pidfile_path) except OSError as exc: if exc.errno == errno.ENOENT: pass else: raise
Remove the named PID file if it exists. Removing a PID file that doesn't already exist puts us in the desired state, so we ignore the condition if the file does not exist.
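On Python 3 the same "remove if present" idiom can be written with contextlib.suppress; the errno check above is the Python 2-compatible spelling of the same behaviour. A minimal sketch:

import contextlib
import os

def remove_existing_pidfile_py3(pidfile_path):
    # FileNotFoundError already means we are in the desired state.
    with contextlib.suppress(FileNotFoundError):
        os.remove(pidfile_path)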
386,088
def resizeToMinimum(self): offset = self.padding() min_size = self.minimumPixmapSize() if self.position() in (XDockToolbar.Position.East, XDockToolbar.Position.West): self.resize(min_size.width() + offset, self.height()) elif self.position() in (XDockToolbar.Position.North, XDockToolbar.Position.South): self.resize(self.width(), min_size.height() + offset)
Resizes the dock toolbar to the minimum sizes.
386,089
def rpc(name, dest=None, **kwargs):
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}
    ret['changes'] = __salt__['junos.rpc'](name, dest, **kwargs)
    return ret
Executes the given rpc. The returned data can be stored in a file by specifying the destination path with dest as an argument.

.. code-block:: yaml

    get-interface-information:
        junos:
          - rpc
          - dest: /home/user/rpc.log
          - interface_name: lo0

Parameters:
  Required
    * cmd:
        The rpc to be executed. (default = None)
  Optional
    * dest:
        Destination file where the rpc output is stored. (default = None)
        Note that the file will be stored on the proxy minion. To push the
        files to the master use Salt's following execution module:
        :py:func:`cp.push <salt.modules.cp.push>`
    * format:
        The format in which the rpc reply must be stored in the file specified
        in dest (used only when dest is specified). (default = xml)
    * kwargs: keyword arguments taken by the rpc call, such as:
        * timeout:
            Set NETCONF RPC timeout. Can be used for commands which take a
            while to execute. (default = 30 seconds)
        * filter:
            Only to be used with the 'get-config' rpc to get specific configuration.
        * terse:
            Amount of information you want.
        * interface_name:
            Name of the interface whose information you want.
386,090
def _AddUser(self, user):
    self.logger.info('Creating a new user account for %s.', user)
    command = self.useradd_cmd.format(user=user)
    try:
        subprocess.check_call(command.split())
    except subprocess.CalledProcessError as e:
        self.logger.warning('Could not create user %s. %s.', user, str(e))
        return False
    else:
        self.logger.info('Created user account %s.', user)
        return True
Configure a Linux user account. Args: user: string, the name of the Linux user account to create. Returns: bool, True if user creation succeeded.
386,091
def to_new(self, data, perplexities=None, return_distances=False): perplexities = perplexities if perplexities is not None else self.perplexities perplexities = self.check_perplexities(perplexities) max_perplexity = np.max(perplexities) k_neighbors = min(self.n_samples - 1, int(3 * max_perplexity)) neighbors, distances = self.knn_index.query(data, k_neighbors) P = self._calculate_P( neighbors, distances, perplexities, symmetrize=False, normalization="point-wise", n_reference_samples=self.n_samples, n_jobs=self.n_jobs, ) if return_distances: return P, neighbors, distances return P
Compute the affinities of new samples to the initial samples.

This is necessary for embedding new data points into an existing embedding.

Please see the :ref:`parameter-guide` for more information.

Parameters
----------
data: np.ndarray
    The data points to be added to the existing embedding.
perplexities: List[float]
    A list of perplexity values, which will be used in the multiscale Gaussian
    kernel. Perplexity can be thought of as the continuous :math:`k` number of
    nearest neighbors, for which t-SNE will attempt to preserve distances.
return_distances: bool
    If needed, the function can return the indices of the nearest neighbors
    and their corresponding distances.

Returns
-------
P: array_like
    An :math:`N \\times M` affinity matrix expressing interactions between
    :math:`N` new data points and the initial :math:`M` data samples.
indices: np.ndarray
    Returned if ``return_distances=True``. The indices of the :math:`k`
    nearest neighbors in the existing embedding for every new data point.
distances: np.ndarray
    Returned if ``return_distances=True``. The distances to the :math:`k`
    nearest neighbors in the existing embedding for every new data point.
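A usage sketch in the openTSNE style this method appears to belong to. In that workflow the affinity object's to_new() is normally driven through embedding.transform() rather than called directly; the data here is synthetic.

import numpy as np
from openTSNE import TSNE

rng = np.random.RandomState(0)
X_train = rng.normal(size=(500, 20))
X_new = rng.normal(size=(10, 20))

embedding = TSNE(perplexity=30, random_state=0).fit(X_train)
# Embedding the new points computes their affinities to the 500 training
# samples internally.
new_embedding = embedding.transform(X_new)
print(new_embedding.shape)   # (10, 2)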
386,092
def remove_labels(self, test): ii = 0 while ii < len(self.labels): if test(self.labels[ii]): self.labels.pop(ii) else: ii += 1 return self
Remove labels from this cell. The function or callable ``test`` is called for each label in the cell. If its return value evaluates to ``True``, the corresponding label is removed from the cell. Parameters ---------- test : callable Test function to query whether a label should be removed. The function is called with the label as the only argument. Returns ------- out : ``Cell`` This cell. Examples -------- Remove labels in layer 1: >>> cell.remove_labels(lambda lbl: lbl.layer == 1)
386,093
def _match_offset_front_id_to_onset_front_id(onset_front_id, onset_fronts, offset_fronts, onsets, offsets): onset_idxs = _get_front_idxs_from_id(onset_fronts, onset_front_id) offset_idxs = [_lookup_offset_by_onset_idx(i, onsets, offsets) for i in onset_idxs] candidate_offset_front_ids = set([int(offset_fronts[f, i]) for f, i in offset_idxs]) candidate_offset_front_ids = [id for id in candidate_offset_front_ids if id != 0] if candidate_offset_front_ids: chosen_offset_front_id = _choose_front_id_from_candidates(candidate_offset_front_ids, offset_fronts, offset_idxs) else: chosen_offset_front_id = _get_offset_front_id_after_onset_front(onset_front_id, onset_fronts, offset_fronts) return chosen_offset_front_id
Find all offset fronts which are composed of at least one offset which corresponds to one of the onsets in the given onset front. The offset front which contains the most of such offsets is the match. If there are no such offset fronts, return -1.
386,094
def configure(cls, impl, **kwargs): base = cls.configurable_base() if isinstance(impl, (str, unicode_type)): impl = import_object(impl) if impl is not None and not issubclass(impl, cls): raise ValueError("Invalid subclass of %s" % cls) base.__impl_class = impl base.__impl_kwargs = kwargs
Sets the class to use when the base class is instantiated. Keyword arguments will be saved and added to the arguments passed to the constructor. This can be used to set global defaults for some parameters.
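A canonical usage sketch, assuming this is the tornado-style Configurable base class it appears to be: selecting the curl-backed HTTP client implementation and a default constructor argument for every instance created afterwards (the curl backend requires the optional pycurl dependency).

from tornado.httpclient import AsyncHTTPClient

AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient",
                          max_clients=50)

# New instances now use the configured implementation and keyword defaults.
client = AsyncHTTPClient()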
386,095
def coerce(self, value):
    if isinstance(value, bool):
        return value

    if not hasattr(value, 'lower'):
        raise TypeError('Value is not bool or string.')

    if value.lower() in ('yes', 'true', '1'):
        return True

    if value.lower() in ('no', 'false', '0'):
        return False

    raise ValueError('{0} is not an acceptable boolean value.'.format(value))
Convert text values into boolean values. True values are (case insensitive): 'yes', 'true', '1'. False values are (case insensitive): 'no', 'false', '0'. Args: value (str or bool): The value to coerce. Raises: TypeError: If the value is not a bool or string. ValueError: If the value is not bool or an acceptable value. Returns: bool: The True/False value represented.
386,096
def featureName(self): featId = self.attributes.get("ID") if featId is not None: featId = featId[0] return featId
ID attribute from GFF3 or None if record doesn't have it. Called "Name" rather than "Id" within GA4GH, as there is no guarantee of either uniqueness or existence.
386,097
def _set_symlink_ownership(path, user, group, win_owner):
    if salt.utils.platform.is_windows():
        try:
            salt.utils.win_dacl.set_owner(path, win_owner)
        except CommandExecutionError:
            pass
    else:
        try:
            __salt__['file.lchown'](path, user, group)
        except OSError:
            pass
    return _check_symlink_ownership(path, user, group, win_owner)
Set the ownership of a symlink and return a boolean indicating success/failure
386,098
def DomainTokensGet(self, domain_id):
    if self.__SenseApiCall__('/domains/{0}/tokens.json'.format(domain_id), 'GET'):
        return True
    else:
        self.__error__ = "api call unsuccessful"
        return False
This method returns the list of tokens which are available for this domain. Only domain managers can list domain tokens.

@param domain_id - ID of the domain for which to retrieve tokens

@return (bool) - Boolean indicating whether DomainTokensGet was successful
386,099
def get_log_entries_by_log(self, log_id):
    mgr = self._get_provider_manager('LOGGING', local=True)
    lookup_session = mgr.get_log_entry_lookup_session_for_log(log_id, proxy=self._proxy)
    lookup_session.use_isolated_log_view()
    return lookup_session.get_log_entries()
Gets the list of log entries associated with a ``Log``. arg: log_id (osid.id.Id): ``Id`` of a ``Log`` return: (osid.logging.LogEntryList) - list of related logEntry raise: NotFound - ``log_id`` is not found raise: NullArgument - ``log_id`` is ``null`` raise: OperationFailed - unable to complete request raise: PermissionDenied - authorization failure *compliance: mandatory -- This method must be implemented.*