Dataset preview header: columns are "Unnamed: 0" (int64, values 0-389k), "code" (string, lengths 26-79.6k), and "docstring" (string, lengths 1-46.9k). Each row below is an (index, code, docstring) triple.
14,100
def find_one(cls, pattern, string, flags=0):
    item = re.search(pattern, string, flags=flags)
    return cls(item)
JS-like match object. Use an index number to get groups; if there is no
match or no such group, '' is returned.

Basic Usage::

    >>> from torequests.utils import find_one
    >>> string = "abcd"
    >>> find_one("a.*", string)
    <torequests.utils.RegMatch object at 0x0705F1D0>
    >>> find_one("a.*", string)[0]
    'abcd'
    >>> find_one("a.*", string)[1]
    ''
    >>> find_one("a(.)", string)[0]
    'ab'
    >>> find_one("a(.)", string)[1]
    'b'
    >>> find_one("a(.)", string)[2] or "default"
    'default'
    >>> import re
    >>> item = find_one("a(B)(C)", string, flags=re.I | re.S)
    >>> item
    <torequests.utils.RegMatch object at 0x0705F1D0>
    >>> item[0]
    'abc'
    >>> item[1]
    'b'
    >>> item[2]
    'c'
    >>> item[3]
    ''
    >>> # import re
    >>> # re.findone = find_one
    >>> register_re_findone()
    >>> re.findone('a(b)', 'abcd')[1] or 'default'
    'b'
14,101
def loadf(path, encoding=None, model=None, parser=None):
    path = str(path)
    if path.endswith('.gz'):
        with gzip.open(path, mode='rt', encoding=encoding) as f:
            return load(f, model=model, parser=parser)
    else:
        with open(path, mode='rt', encoding=encoding) as f:
            return load(f, model=model, parser=parser)
Deserialize path (.arpa, .gz) to a Python object.
14,102
def populate_obj(obj, attrs):
    for k, v in attrs.iteritems():
        setattr(obj, k, v)
Populates an object's attributes using the provided dict
14,103
def merge_dict_of_lists(adict, indices, pop_later=True, copy=True):
    def check_indices(idxs, x):
        for i in chain(*idxs):
            if i < 0 or i >= x:
                raise IndexError("Given indices are out of dict range.")

    check_indices(indices, len(adict))
    rdict = adict.copy() if copy else adict
    dict_keys = list(rdict.keys())
    for i, j in zip(*indices):
        rdict[dict_keys[i]].extend(rdict[dict_keys[j]])
    if pop_later:
        for i, j in zip(*indices):
            rdict.pop(dict_keys[j], None)
    return rdict
Extend the lists within a dict of lists. The indices indicate which
lists have to be extended by which other lists.

Parameters
----------
adict: OrderedDict
    An ordered dictionary of lists
indices: list or tuple of 2 iterables of int, both having the same length
    The indices of the lists that have to be merged. Both iterables' items
    will be read pair by pair; the first is the index to the list that
    will be extended with the list of the second index.
    The indices can be constructed with Numpy, e.g.,
    indices = np.where(square_matrix)
pop_later: bool
    If True, will pop out the lists that are indicated in the second
    list of indices.
copy: bool
    If True, will perform a deep copy of the input adict before modifying
    it, hence not changing the original input.

Returns
-------
Dictionary of lists

Raises
------
IndexError
    If the indices are out of range
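A minimal usage sketch (the dict contents and index pairs are illustrative):

    from collections import OrderedDict

    d = OrderedDict([('a', [1]), ('b', [2]), ('c', [3])])
    # Extend the list at key index 0 ('a') with the list at index 2 ('c'),
    # then pop 'c' afterwards.
    merged = merge_dict_of_lists(d, ([0], [2]))
    # merged == OrderedDict([('a', [1, 3]), ('b', [2])])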
14,104
def conditions(self) -> Dict[str, Dict[str, Union[float, numpy.ndarray]]]:
    conditions = {}
    for subname in NAMES_CONDITIONSEQUENCES:
        subseqs = getattr(self, subname, ())
        subconditions = {seq.name: copy.deepcopy(seq.values) for seq in subseqs}
        if subconditions:
            conditions[subname] = subconditions
    return conditions
Nested dictionary containing the values of all condition sequences. See the documentation on property |HydPy.conditions| for further information.
14,105
def update(self):
    self.frontends = []
    self.backends = []
    self.listeners = []
    csv = [l for l in self._fetch().strip().split('\n') if l]
    if self.failed:
        return
    self.fields = [f for f in csv.pop(0).split(',') if f]
    for line in csv:
        service = HAProxyService(self.fields, line.split(','), self.name)
        if service.svname == 'FRONTEND':
            self.frontends.append(service)
        elif service.svname == 'BACKEND':
            service.listeners = []
            self.backends.append(service)
        else:
            self.listeners.append(service)
    for listener in self.listeners:
        for backend in self.backends:
            if backend.iid == listener.iid:
                backend.listeners.append(listener)
    self.last_update = datetime.utcnow()
Fetch and parse stats
14,106
def check_elasticsearch(record, *args, **kwargs):
    def can(self):
        search = request._methodview.search_class()
        search = search.get_record(str(record.id))
        return search.count() == 1

    return type('CheckES', (), {'can': can})()
Return permission that checks if the record exists in the ES index.

:param record: A record object.
:returns: An object instance with a ``can()`` method.
14,107
def cat(tensors, dim=0):
    assert isinstance(tensors, (list, tuple))
    if len(tensors) == 1:
        return tensors[0]
    return torch.cat(tensors, dim)
Efficient version of torch.cat that avoids a copy if there is only a single element in a list
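A quick sketch of the copy-avoiding behavior (tensor values are illustrative):

    import torch

    a = torch.ones(2)
    cat([a])            # returns `a` itself -- no copy is made
    cat([a, a], dim=0)  # falls through to torch.cat -> shape (4,)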
14,108
async def callproc(self, procname, args=()):
    conn = self._get_db()
    if self._echo:
        logger.info("CALL %s", procname)
        logger.info("%r", args)
    for index, arg in enumerate(args):
        q = "SET @_%s_%d=%s" % (procname, index, conn.escape(arg))
        await self._query(q)
        await self.nextset()
    _args = ','.join('@_%s_%d' % (procname, i) for i in range(len(args)))
    q = "CALL %s(%s)" % (procname, _args)
    await self._query(q)
    self._executed = q
    return args
Execute stored procedure procname with args

Compatibility warning: PEP-249 specifies that any modified parameters
must be returned. This is currently impossible as they are only
available by storing them in a server variable and then retrieved by a
query. Since stored procedures return zero or more result sets, there
is no reliable way to get at OUT or INOUT parameters via callproc. The
server variables are named @_procname_n, where procname is the
parameter above and n is the position of the parameter (from zero).
Once all result sets generated by the procedure have been fetched, you
can issue a SELECT @_procname_0, ... query using .execute() to get any
OUT or INOUT values.

Compatibility warning: The act of calling a stored procedure itself
creates an empty result set. This appears after any result sets
generated by the procedure. This is non-standard behavior with respect
to the DB-API. Be sure to use nextset() to advance through all result
sets; otherwise you may get disconnected.

:param procname: ``str``, name of procedure to execute on server
:param args: sequence of parameters to use with procedure
:returns: the original args.
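A usage sketch of the OUT-parameter retrieval described above (cursor setup elided; the procedure name is hypothetical):

    await cur.callproc('my_proc', ('input', 0))
    # ... advance through all result sets with cur.nextset() ...
    await cur.execute("SELECT @_my_proc_0, @_my_proc_1")
    out_values = await cur.fetchone()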
14,109
def ordinal(self, num):
    if re.match(r"\d", str(num)):
        try:
            num % 2
            n = num
        except TypeError:
            if "." in str(num):
                try:
                    n = int(num[-1])
                except ValueError:
                    n = int(num[:-1])
            else:
                n = int(num)
        try:
            post = nth[n % 100]
        except KeyError:
            post = nth[n % 10]
        return "{}{}".format(num, post)
    else:
        mo = re.search(r"(%s)\Z" % ordinal_suff, num)
        try:
            post = ordinal[mo.group(1)]
            return re.sub(r"(%s)\Z" % ordinal_suff, post, num)
        except AttributeError:
            return "%sth" % num
Return the ordinal of num.

num can be an integer or text, e.g.:
    ordinal(1) returns '1st'
    ordinal('one') returns 'first'
14,110
def boolean_difference(self, mesh, inplace=False):
    bfilter = vtk.vtkBooleanOperationPolyDataFilter()
    bfilter.SetOperationToDifference()
    bfilter.SetInputData(1, mesh)
    bfilter.SetInputData(0, self)
    bfilter.ReorientDifferenceCellsOff()
    bfilter.Update()
    mesh = _get_output(bfilter)
    if inplace:
        self.overwrite(mesh)
    else:
        return mesh
Combines two meshes and retains only the volume of the first mesh that
is not shared with the second (a boolean difference).

Parameters
----------
mesh : vtki.PolyData
    The mesh to subtract from this one.
inplace : bool, optional
    Updates mesh in-place while returning nothing.

Returns
-------
difference : vtki.PolyData
    The difference mesh when inplace=False.
14,111
def __load_project(path):
    file_path = __get_docker_file_path(path)
    if file_path is None:
        msg = 'Could not find docker-compose file at {0}'.format(path)
        return __standardize_result(False, msg, None, None)
    return __load_project_from_file_path(file_path)
Load a docker-compose project from path

:param path:
:return:
14,112
def get_fields_from_job_name(self, job_name): extra_fields = { : None, : None, : None, : None, : None, : None, : None } try: components = job_name.split() if len(components) < 2: return extra_fields kind = components[1] if kind == : extra_fields[] = extra_fields[] = components[0] extra_fields[] = .join(components[2:-3]) elif kind == : extra_fields[] = extra_fields[] = components[0] else: extra_fields[] = extra_fields[] = components[0] extra_fields[] = components[1] extra_fields[] = components[-3] extra_fields[] = components[-2] extra_fields[] = components[-1] except IndexError as ex: logger.debug(, job_name) logger.debug(ex) return extra_fields
Analyze a Jenkins job name, producing a dictionary

The produced dictionary will include information about the category and
subcategory of the job name, and any extra information which could be
useful. For each deployment of a Jenkins dashboard, an implementation
of this function should be produced, according to the needs of the users.

:param job_name: job name to analyze
:returns: dictionary with categorization information
14,113
def to_array(self):
    array = super(PreCheckoutQuery, self).to_array()
    array['id'] = u(self.id)
    array['from'] = self.from_peer.to_array()
    array['currency'] = u(self.currency)
    array['total_amount'] = int(self.total_amount)
    array['invoice_payload'] = u(self.invoice_payload)
    if self.shipping_option_id is not None:
        array['shipping_option_id'] = u(self.shipping_option_id)
    if self.order_info is not None:
        array['order_info'] = self.order_info.to_array()
    return array
Serializes this PreCheckoutQuery to a dictionary. :return: dictionary representation of this object. :rtype: dict
14,114
def expose(self, binder, interface, annotation=None):
    private_module = self

    class Provider(object):
        def get(self):
            return private_module.private_injector.get_instance(
                interface, annotation)

    self.original_binder.bind(interface, annotated_with=annotation,
                              to_provider=Provider)
Expose the child injector to the parent injector for a binding.
14,115
def names_labels(self, do_print=False):
    if do_print:
        for name, label in zip(self.field_names, self.field_labels):
            print('%s: %s' % (str(name), str(label)))
    return self.field_names, self.field_labels
Simple helper function to get all field names and labels
14,116
def inserir(self, id_equipment, id_script):
    equipment_script_map = dict()
    equipment_script_map['id_equipment'] = id_equipment
    equipment_script_map['id_script'] = id_script
    code, xml = self.submit(
        {'equipamento_roteiro': equipment_script_map}, 'POST', 'equipamentoroteiro/')
    return self.response(code, xml)
Inserts a new Related Equipment with Script and returns its identifier.

:param id_equipment: Identifier of the Equipment. Integer value and greater than zero.
:param id_script: Identifier of the Script. Integer value and greater than zero.

:return: Dictionary with the following structure:

::

    {'equipamento_roteiro': {'id': < id_equipment_script >}}

:raise InvalidParameterError: The identifier of Equipment or Script is null or invalid.
:raise RoteiroNaoExisteError: Script not registered.
:raise EquipamentoNaoExisteError: Equipment not registered.
:raise EquipamentoRoteiroError: Equipment is already associated with the script.
:raise DataBaseError: Networkapi failed to access the database.
:raise XMLError: Networkapi failed to generate the XML response.
14,117
def write(self, fileobj=sys.stdout, indent=u""):
    fileobj.write(self.start_tag(indent))
    fileobj.write(u"\n")
Recursively write an element and its children to a file.
14,118
def load_plugins(self, plugin_class_name): continue for name in dir(f_module): binding = getattr(f_module, name, None) try: is_sub = issubclass(binding, plugin_class) except TypeError: is_sub = False if binding and is_sub and plugin_class.__name__ != binding.__name__: plugin_classes[binding.key] = binding return plugin_classes
load all available plugins :param plugin_class_name: str, name of plugin class (e.g. 'PreBuildPlugin') :return: dict, bindings for plugins of the plugin_class_name class
14,119
def OxmlElement(nsptag_str, attrs=None, nsdecls=None):
    nsptag = NamespacePrefixedTag(nsptag_str)
    if nsdecls is None:
        nsdecls = nsptag.nsmap
    return oxml_parser.makeelement(
        nsptag.clark_name, attrib=attrs, nsmap=nsdecls
    )
Return a 'loose' lxml element having the tag specified by *nsptag_str*. *nsptag_str* must contain the standard namespace prefix, e.g. 'a:tbl'. The resulting element is an instance of the custom element class for this tag name if one is defined. A dictionary of attribute values may be provided as *attrs*; they are set if present. All namespaces defined in the dict *nsdecls* are declared in the element using the key as the prefix and the value as the namespace name. If *nsdecls* is not provided, a single namespace declaration is added based on the prefix on *nsptag_str*.
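A usage sketch (the tag names are illustrative and assume the 'w:' prefix is present in the known namespace map):

    # Create a loose <w:p> element; its custom element class, if any,
    # is looked up from the tag name.
    p = OxmlElement('w:p')
    p.append(OxmlElement('w:r'))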
14,120
def get_prefixed_config(self, section, option, **kwargs):
    cfg = Config.instance()
    default = cfg.get_expanded(section, option, **kwargs)
    return cfg.get_expanded(section, "{}_{}".format(self.workflow_type, option),
                            default=default, **kwargs)
TODO.
14,121
def relpath(path, start=None):
    relative = get_instance(path).relpath(path)
    if start:
        return os_path_relpath(relative, start=start).replace('\\', '/')
    return relative
Return a relative file path to path either from the current directory or
from an optional start directory.

For storage objects, "path" and "start" are relative to storage root.

"/" are not stripped on storage objects path. The ending slash is required
on some storage to signify that target is a directory.

Equivalent to "os.path.relpath".

Args:
    path (path-like object): Path or URL.
    start (path-like object): Relative from this optional directory.
        Default to "os.curdir" for local files.

Returns:
    str: Relative path.
14,122
def exec_after_request_actions(actions, response, **kwargs):
    current_context["response"] = response
    groups = ("after_" + flask.request.method.lower(), "after")
    try:
        rv = execute_actions(actions, limit_groups=groups, **kwargs)
    except ReturnValueException as e:
        rv = e.value
    if rv:
        return rv
    return response
Executes actions of the "after" and "after_METHOD" groups. A "response" var will be injected in the current context.
14,123
def updateShape(self):
    x = self.tags['x']
    y = -self.tags['y']
    z = self.tags['z']
    for sec in list(self.secs.values()):
        if 'geom' in sec and 'pt3d' not in sec['geom']:
            h.pop_section()
Call after h.define_shape() to update cell coords
14,124
def check_sim_in(self):
    try:
        pkt = self.sim_in.recv(17*8 + 4)
    except socket.error as e:
        if not e.errno in [errno.EAGAIN, errno.EWOULDBLOCK]:
            raise
        return
    if len(pkt) != 17*8 + 4:
        print("wrong size %u" % len(pkt))
        return
    (latitude, longitude, altitude, heading,
     v_north, v_east, v_down,
     ax, ay, az,
     phidot, thetadot, psidot,
     roll, pitch, yaw,
     vcas, check) = struct.unpack('<17dI', pkt)
    (p, q, r) = self.convert_body_frame(
        radians(roll), radians(pitch),
        radians(phidot), radians(thetadot), radians(psidot))
    try:
        self.hil_state_msg = self.master.mav.hil_state_encode(
            int(time.time()*1e6),
            radians(roll), radians(pitch), radians(yaw),
            p, q, r,
            int(latitude*1.0e7), int(longitude*1.0e7), int(altitude*1.0e3),
            int(v_north*100), int(v_east*100), 0,
            int(ax*1000/9.81), int(ay*1000/9.81), int(az*1000/9.81))
    except Exception:
        return
check for FDM packets from runsim
14,125
def format_crypto_units(input_quantity, input_type, output_type, coin_symbol=None,
                        print_cs=False, safe_trimming=False, round_digits=0):
    assert input_type in UNIT_CHOICES, input_type
    assert output_type in UNIT_CHOICES, output_type
    if print_cs:
        assert is_valid_coin_symbol(coin_symbol=coin_symbol), coin_symbol
    assert isinstance(round_digits, int)
    satoshis_float = to_satoshis(input_quantity=input_quantity, input_type=input_type)
    if round_digits:
        satoshis_float = round(satoshis_float, -1*round_digits)
    output_quantity = from_satoshis(
        input_satoshis=satoshis_float,
        output_type=output_type,
    )
    if output_type == 'bit' and round_digits >= 2:
        output_quantity_formatted = format_output(num=output_quantity, output_type='satoshi')
    else:
        output_quantity_formatted = format_output(num=output_quantity, output_type=output_type)
    if safe_trimming and output_type not in ('satoshi', 'bit'):
        output_quantity_formatted = safe_trim(qty_as_string=output_quantity_formatted)
    if print_cs:
        curr_symbol = get_curr_symbol(
            coin_symbol=coin_symbol,
            output_type=output_type,
        )
        output_quantity_formatted += ' %s' % curr_symbol
    return output_quantity_formatted
Take an input like 11002343 satoshis and convert it to another unit
(e.g. BTC) and format it with appropriate units.

If coin_symbol is supplied and print_cs == True then the units will be
added (e.g. BTC or satoshis).

Smart trimming gets rid of trailing 0s in the decimal place, except for
satoshis (irrelevant) and bits (always two decimal points). It also
preserves one decimal place in the case of 1.0 to show significant
figures. It is still technically correct and reversible.

Smart rounding performs a rounding operation (so it is technically not
the correct number and is not reversible). The number of decimals to
round by is a function of the output_type.

Requires python >= 2.7
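A usage sketch (unit names such as 'satoshi' and 'btc' follow the library's UNIT_CHOICES; the exact output string may differ):

    formatted = format_crypto_units(
        input_quantity=11002343,
        input_type='satoshi',
        output_type='btc',
        coin_symbol='btc',
        print_cs=True,
    )
    # e.g. '0.11002343 BTC'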
14,126
def get_arsc_info(arscobj):
    buff = ""
    for package in arscobj.get_packages_names():
        buff += package + ":\n"
        for locale in arscobj.get_locales(package):
            buff += "\t" + repr(locale) + ":\n"
            for ttype in arscobj.get_types(package, locale):
                buff += "\t\t" + ttype + ":\n"
                try:
                    tmp_buff = getattr(arscobj, "get_" + ttype + "_resources")(
                        package, locale).decode("utf-8", 'replace').split("\n")
                    for i in tmp_buff:
                        buff += "\t\t\t" + i + "\n"
                except AttributeError:
                    pass
    return buff
Return a string containing all resources packages ordered by packagename, locale and type. :param arscobj: :class:`~ARSCParser` :return: a string
14,127
def parse_iparamvalue(self, tup_tree):
    self.check_node(tup_tree, 'IPARAMVALUE', ('NAME',))
    child = self.optional_child(
        tup_tree,
        ('VALUE', 'VALUE.ARRAY', 'VALUE.REFERENCE', 'INSTANCENAME',
         'CLASSNAME', 'QUALIFIER.DECLARATION', 'CLASS', 'INSTANCE',
         'VALUE.NAMEDINSTANCE'))
    _name = attrs(tup_tree)['NAME']
    if isinstance(child, six.string_types) and \
            _name.lower() in ('deepinheritance', 'localonly',
                              'includequalifiers', 'includeclassorigin'):
        if child.lower() in ('true', 'false'):
            child = (child.lower() == 'true')
    return _name, child
Parse expected IPARAMVALUE element. I.e.::

    <!ELEMENT IPARAMVALUE (VALUE | VALUE.ARRAY | VALUE.REFERENCE |
                           INSTANCENAME | CLASSNAME |
                           QUALIFIER.DECLARATION | CLASS | INSTANCE |
                           VALUE.NAMEDINSTANCE)?>
    <!ATTLIST IPARAMVALUE
        %CIMName;>

:return: NAME, VALUE pair.
14,128
def summary(self):
    if self._summary is None:
        self._summary = CallSummaryList(self)
    return self._summary
:rtype: twilio.rest.insights.v1.summary.CallSummaryList
14,129
def set_feature_generator(self):
    self.features = preparation.build_proteintable(
        self.lookup, self.headerfields, self.mergecutoff,
        self.isobaric, self.precursor, self.probability,
        self.fdr, self.pep, self.genecentric)
Generates proteins with quant from the lookup table
14,130
def _estimate_transforms(self, nsamples):
    M = len(self.coef)
    mean_transform = np.zeros((M, M))
    x_transform = np.zeros((M, M))
    inds = np.arange(M, dtype=np.int)
    for _ in tqdm(range(nsamples), "Estimating transforms"):
        np.random.shuffle(inds)
        cov_inv_SiSi = np.zeros((0, 0))
        cov_Si = np.zeros((M, 0))
        for j in range(M):
            i = inds[j]
            cov_S = cov_Si
            cov_inv_SS = cov_inv_SiSi
            cov_Si = self.cov[:, inds[:j+1]]
            d = cov_Si[i, :-1].T
            t = np.matmul(cov_inv_SS, d)
            Z = self.cov[i, i]
            u = Z - np.matmul(t.T, d)
            cov_inv_SiSi = np.zeros((j+1, j+1))
            if j > 0:
                cov_inv_SiSi[:-1, :-1] = cov_inv_SS + np.outer(t, t) / u
                cov_inv_SiSi[:-1, -1] = cov_inv_SiSi[-1, :-1] = -t / u
            cov_inv_SiSi[-1, -1] = 1 / u
            mean_transform[i, i] += self.coef[i]
            coef_R_Si = np.matmul(self.coef[inds[j+1:]],
                                  np.matmul(cov_Si, cov_inv_SiSi)[inds[j+1:]])
            mean_transform[i, inds[:j+1]] += coef_R_Si
            coef_R_S = np.matmul(self.coef[inds[j:]],
                                 np.matmul(cov_S, cov_inv_SS)[inds[j:]])
            mean_transform[i, inds[:j]] -= coef_R_S
            x_transform[i, i] += self.coef[i]
            x_transform[i, inds[:j+1]] += coef_R_Si
            x_transform[i, inds[:j]] -= coef_R_S
    mean_transform /= nsamples
    x_transform /= nsamples
    return mean_transform, x_transform
Uses block matrix inversion identities to quickly estimate transforms. After a bit of matrix math we can isolate a transform matrix (# features x # features) that is independent of any sample we are explaining. It is the result of averaging over all feature permutations, but we just use a fixed number of samples to estimate the value. TODO: Do a brute force enumeration when # feature subsets is less than nsamples. This could happen through a recursive method that uses the same block matrix inversion as below.
14,131
def join(self, other):
    return Heading(
        [self.attributes[name].todict() for name in self.primary_key] +
        [other.attributes[name].todict() for name in other.primary_key
         if name not in self.primary_key] +
        [self.attributes[name].todict() for name in self.dependent_attributes
         if name not in other.primary_key] +
        [other.attributes[name].todict() for name in other.dependent_attributes
         if name not in self.primary_key])
Join two headings into a new one. It assumes that self and other are headings that share no common dependent attributes.
14,132
def pixy_set_brightness(self, brightness):
    task = asyncio.ensure_future(self.core.pixy_set_brightness(brightness))
    self.loop.run_until_complete(task)
Sends the setBrightness Pixy command. This method sets the brightness (exposure) of Pixy's camera. :param brightness: range between 0 and 255 with 255 being the brightest setting :returns: No return value.
14,133
def _regex_from_encoded_pattern(s):
    if s.startswith('/') and s.rfind('/') != 0:
        # Parse it: /PATTERN/FLAGS
        idx = s.rfind('/')
        pattern, flags_str = s[1:idx], s[idx+1:]
        flag_from_char = {
            "i": re.IGNORECASE,
            "l": re.LOCALE,
            "s": re.DOTALL,
            "m": re.MULTILINE,
            "u": re.UNICODE,
        }
        flags = 0
        for char in flags_str:
            try:
                flags |= flag_from_char[char]
            except KeyError:
                raise ValueError("unsupported regex flag: '%s' in '%s' "
                                 "(must be one of '%s')"
                                 % (char, s, ''.join(list(flag_from_char.keys()))))
        return re.compile(s[1:idx], flags)
    else:  # not an encoded regex
        return re.compile(re.escape(s))
'foo' -> re.compile(re.escape('foo'))
'/foo/' -> re.compile('foo')
'/foo/i' -> re.compile('foo', re.I)
14,134
def CountFlowResultsByType(self, client_id, flow_id):
    result = collections.Counter()
    for hr in self.ReadFlowResults(client_id, flow_id, 0, sys.maxsize):
        key = compatibility.GetName(hr.payload.__class__)
        result[key] += 1
    return result
Returns counts of flow results grouped by result type.
14,135
def BVirial_Abbott(T, Tc, Pc, omega, order=0):
    Tr = T/Tc
    if order == 0:
        B0 = 0.083 - 0.422/Tr**1.6
        B1 = 0.139 - 0.172/Tr**4.2
    elif order == 1:
        B0 = 0.6752*Tr**(-1.6)/T
        B1 = 0.7224*Tr**(-4.2)/T
    elif order == 2:
        B0 = -1.75552*Tr**(-1.6)/T**2
        B1 = -3.75648*Tr**(-4.2)/T**2
    elif order == 3:
        B0 = 6.319872*Tr**(-1.6)/T**3
        B1 = 23.290176*Tr**(-4.2)/T**3
    elif order == -1:
        B0 = 0.083*T + 211/300.*Tc*(Tr)**(-0.6)
        B1 = 0.139*T + 0.05375*Tc*Tr**(-3.2)
    elif order == -2:
        B0 = 0.0415*T**2 + 211/120.*Tc**2*Tr**0.4
        B1 = 0.0695*T**2 - 43/1760.*Tc**2*Tr**(-2.2)
    else:
        raise Exception('Only orders -2, -1, 0, 1, 2 and 3 are supported.')
    Br = B0 + omega*B1
    return Br*R*Tc/Pc
r'''Calculates the second virial coefficient using the model in [1]_.
Simple fit to the Lee-Kesler equation.

.. math::
    B_r = B^{(0)} + \omega B^{(1)}

    B^{(0)} = 0.083 - \frac{0.422}{T_r^{1.6}}

    B^{(1)} = 0.139 - \frac{0.172}{T_r^{4.2}}

Parameters
----------
T : float
    Temperature of fluid [K]
Tc : float
    Critical temperature of fluid [K]
Pc : float
    Critical pressure of the fluid [Pa]
omega : float
    Acentric factor for fluid, [-]
order : int, optional
    Order of the calculation. 0 for the calculation of B itself; for 1/2/3,
    the first/second/third derivative of B with respect to temperature; and
    for -1/-2, the first/second indefinite integral of B with respect to
    temperature. No other integrals or derivatives are implemented, and an
    exception will be raised if any other order is given.

Returns
-------
B : float
    Second virial coefficient in density form or its integral/derivative
    if specified, [m^3/mol or m^3/mol/K^order]

Notes
-----
Analytical models for derivatives and integrals are available for orders
-2, -1, 1, 2, and 3, all obtained with SymPy.

For the first temperature derivative of B:

.. math::
    \frac{d B^{(0)}}{dT} = \frac{0.6752}{T \left(\frac{T}{Tc}\right)^{1.6}}

    \frac{d B^{(1)}}{dT} = \frac{0.7224}{T \left(\frac{T}{Tc}\right)^{4.2}}

For the second temperature derivative of B:

.. math::
    \frac{d^2 B^{(0)}}{dT^2} = -\frac{1.75552}{T^{2} \left(\frac{T}{Tc}\right)^{1.6}}

    \frac{d^2 B^{(1)}}{dT^2} = -\frac{3.75648}{T^{2} \left(\frac{T}{Tc}\right)^{4.2}}

For the third temperature derivative of B:

.. math::
    \frac{d^3 B^{(0)}}{dT^3} = \frac{6.319872}{T^{3} \left(\frac{T}{Tc}\right)^{1.6}}

    \frac{d^3 B^{(1)}}{dT^3} = \frac{23.290176}{T^{3} \left(\frac{T}{Tc}\right)^{4.2}}

For the first indefinite integral of B:

.. math::
    \int{B^{(0)}} dT = 0.083 T + \frac{\frac{211}{300} Tc}{\left(\frac{T}{Tc}\right)^{0.6}}

    \int{B^{(1)}} dT = 0.139 T + \frac{0.05375 Tc}{\left(\frac{T}{Tc}\right)^{3.2}}

For the second indefinite integral of B:

.. math::
    \int\int B^{(0)} dT dT = 0.0415 T^{2} + \frac{211}{120} Tc^{2} \left(\frac{T}{Tc}\right)^{0.4}

    \int\int B^{(1)} dT dT = 0.0695 T^{2} - \frac{\frac{43}{1760} Tc^{2}}{\left(\frac{T}{Tc}\right)^{2.2}}

Examples
--------
Example is from [1]_, p. 93, and matches the result exactly, for isobutane.

>>> BVirial_Abbott(510., 425.2, 38E5, 0.193)
-0.00020570178037383633

References
----------
.. [1] Smith, J. M., and H. C. Van Ness. Introduction to Chemical
   Engineering Thermodynamics, 4th ed. 1987.
14,136
def convert_ram_sp_rf(ADDR_WIDTH=8, DATA_WIDTH=8):
    clk = Signal(bool(0))
    we = Signal(bool(0))
    addr = Signal(intbv(0)[ADDR_WIDTH:])
    di = Signal(intbv(0)[DATA_WIDTH:])
    do = Signal(intbv(0)[DATA_WIDTH:])
    toVerilog(ram_sp_rf, clk, we, addr, di, do)
Convert RAM: Single-Port, Read-First
14,137
def get_icon_url(self, icon):
    if not icon.startswith('/') \
            and not icon.startswith('http://') \
            and not icon.startswith('https://'):
        if '/' in icon:
            return self.icon_static_root + icon
        else:
            return self.icon_theme_root + icon
    else:
        return icon
Replaces the "icon name" with a full usable URL.

* When the icon is an absolute URL, it is used as-is.
* When the icon contains a slash, it is relative from the ``STATIC_URL``.
* Otherwise, it's relative to the theme url folder.
14,138
def average_s_rad(site, hypocenter, reference, pp, normal, dist_to_plane,
                  e, p0, p1, delta_slip):
    site_xyz = get_xyz_from_ll(site, reference)
    zs = dst.pdist([pp, site_xyz])
    if site_xyz[0] * normal[0] + site_xyz[1] * normal[1] + \
            site_xyz[2] * normal[2] - dist_to_plane > 0:
        zs = -zs
    hyp_xyz = get_xyz_from_ll(hypocenter, reference)
    hyp_xyz = np.array(hyp_xyz).reshape(1, 3).flatten()
    l2 = dst.pdist([pp, hyp_xyz])
    rd = ((l2 - e) ** 2 + zs ** 2) ** 0.5
    r_hyp = (l2 ** 2 + zs ** 2) ** 0.5
    p0_xyz = get_xyz_from_ll(p0, reference)
    p1_xyz = get_xyz_from_ll(p1, reference)
    u = (np.array(p1_xyz) - np.array(p0_xyz))
    v = pp - hyp_xyz
    phi = vectors2angle(u, v) - np.deg2rad(delta_slip)
    ix = np.cos(phi) * (2 * zs * (l2 / r_hyp - (l2 - e) / rd) -
                        zs * np.log((l2 + r_hyp) / (l2 - e + rd)))
    inn = np.cos(phi) * (-2 * zs ** 2 * (1 / r_hyp - 1 / rd) - (r_hyp - rd))
    iphi = np.sin(phi) * (zs * np.log((l2 + r_hyp) / (l2 - e + rd)))
    fs = (ix ** 2 + inn ** 2 + iphi ** 2) ** 0.5 / e
    return fs, rd, r_hyp
Gets the average S-wave radiation pattern given an e-path as described in:
Spudich et al. (2013) "Final report of the NGA-West2 directivity working
group", PEER report, pages 90-92, and computes the site-to-direct-point
distance, rd, and the hypocentral distance, r_hyp.

:param site:
    :class:`~openquake.hazardlib.geo.point.Point` object representing the
    location of the target site
:param hypocenter:
    :class:`~openquake.hazardlib.geo.point.Point` object representing the
    location of the hypocenter
:param reference:
    :class:`~openquake.hazardlib.geo.point.Point` object representing the
    location of the reference point for coordinate projection within the
    calculation. The suggested reference point is the epicentre.
:param pp:
    the projection point pp on the patch plane, a numpy array
:param normal:
    normal of the plane, described by a normal vector [a, b, c]
:param dist_to_plane:
    d is the constant term in the plane equation, e.g., ax + by + cz = d
:param e:
    a float defining the E-path length, which is the distance from the
    Pd (direction) point to the hypocentre, in km.
:param p0:
    :class:`~openquake.hazardlib.geo.point.Point` object representing the
    location of the starting point on the fault segment
:param p1:
    :class:`~openquake.hazardlib.geo.point.Point` object representing the
    location of the ending point on the fault segment
:param delta_slip:
    slip direction away from the strike direction, in decimal degrees.
    A positive angle is generated by a counter-clockwise rotation.
:returns:
    fs, float value of the average S-wave radiation pattern;
    rd, float value of the distance from site to the direct point;
    r_hyp, float value of the hypocentre distance.
14,139
def create_time_series(self, label_values, func):
    if label_values is None:
        raise ValueError
    if any(lv is None for lv in label_values):
        raise ValueError
    if len(label_values) != self._len_label_keys:
        raise ValueError
    if func is None:
        raise ValueError
    return self._create_time_series(label_values, func)
Create a derived measurement to track `func`.

:type label_values: list(:class:`LabelValue`)
:param label_values: The measurement's label values.

:type func: function
:param func: The function to track.

:rtype: :class:`DerivedGaugePoint`
:return: A read-only measurement that tracks `func`.
14,140
def visit_If(self, node):
    self.visit(node.test)
    old_range = self.result
    self.result = old_range.copy()
    for stmt in node.body:
        self.visit(stmt)
    body_range = self.result
    self.result = old_range.copy()
    for stmt in node.orelse:
        self.visit(stmt)
    orelse_range = self.result
    self.result = body_range
    for k, v in orelse_range.items():
        if k in self.result:
            self.result[k] = self.result[k].union(v)
        else:
            self.result[k] = v
Handle iterate variable across branches

>>> import gast as ast
>>> from pythran import passmanager, backend
>>> node = ast.parse('''
... def foo(a):
...     if a > 1: b = 1
...     else: b = 3''')
>>> pm = passmanager.PassManager("test")
>>> res = pm.gather(RangeValues, node)
>>> res['b']
Interval(low=1, high=3)
14,141
def infer_struct(value: Mapping[str, GenericAny]) -> Struct:
    if not value:
        raise TypeError()
    return Struct(list(value.keys()), list(map(infer, value.values())))
Infer the :class:`~ibis.expr.datatypes.Struct` type of `value`.
14,142
def get_broadcast_date(pid): print("Extracting first broadcast date...") broadcast_etree = open_listing_page(pid + ) original_broadcast_date, = broadcast_etree.xpath( ) return original_broadcast_date
Take BBC pid (string); extract and return broadcast date as string.
14,143
def hacking_no_locals(logical_line, tokens, noqa):
    if noqa:
        return
    for_formatting = False
    for token_type, text, start, _, _ in tokens:
        if text == "%" and token_type == tokenize.OP:
            for_formatting = True
        if for_formatting and token_type == tokenize.NAME:
            for k, v in LOCALS_TEXT_MAP.items():
                if text == k and v in logical_line:
                    yield (start[1],
                           "H501: Do not use %s for string formatting" % v)
Do not use locals() or self.__dict__ for string formatting.

Okay: 'locals()'
Okay: 'locals'
Okay: locals()
Okay: print(locals())
H501: print("%(something)" % locals())
H501: LOG.info(_("%(something)") % self.__dict__)
Okay: print("%(something)" % locals())  # noqa
14,144
def strip_html(text):
    def reply_to(text):
        replying_to = []
        split_text = text.split()
        for index, token in enumerate(split_text):
            if token.startswith('@'):
                replying_to.append(token[1:])
            else:
                message = split_text[index:]
                break
        rply_msg = ""
        if len(replying_to) > 0:
            rply_msg = "Replying to "
            for token in replying_to[:-1]:
                rply_msg += token + ","
            if len(replying_to) > 1:
                rply_msg += " and "
            rply_msg += replying_to[-1] + ". "
        return rply_msg + " ".join(message)

    text = reply_to(text)
    text = text.replace('&amp;', '&')
    return " ".join([token for token in text.split()
                     if ('http' not in token) and ('@' not in token)])
Get rid of ugly twitter html
14,145
def set_layout(self, value):
    visibility_changed = True
    if visibility_changed:
        self.update_modified()
        self.update_exportable()
        self.update_block_endpoints()
Set the layout table value. Called on attribute put
14,146
async def do_after_sleep(delay: float, coro, *args, **kwargs):
    await asyncio.sleep(delay)
    return await coro(*args, **kwargs)
Performs an action after a set amount of time.

This function only calls the coroutine after the delay, preventing
asyncio complaints about destroyed coros.

:param delay: Time in seconds
:param coro: Coroutine to run
:param args: Arguments to pass to coroutine
:param kwargs: Keyword arguments to pass to coroutine
:return: Whatever the coroutine returned.
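A usage sketch (the coroutine and delay are illustrative):

    import asyncio

    async def greet(name):
        print("hello", name)

    async def main():
        # Prints "hello world" two seconds from now.
        await do_after_sleep(2.0, greet, "world")

    asyncio.run(main())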
14,147
def check_request(name=None):
    notify_path = os.path.join(__opts__['cachedir'], 'req_state.p')
    serial = salt.payload.Serial(__opts__)
    if os.path.isfile(notify_path):
        with salt.utils.files.fopen(notify_path, 'rb') as fp_:
            req = serial.load(fp_)
        if name:
            return req[name]
        return req
    return {}
.. versionadded:: 2015.5.0

Return the state request information, if any

CLI Example:

.. code-block:: bash

    salt '*' state.check_request
14,148
def do_upload(post_data, callback=None):
    encoder = MultipartEncoder(post_data)
    monitor = MultipartEncoderMonitor(encoder, callback)
    headers = {'User-Agent': USER_AGENT,
               'Content-Type': monitor.content_type}
    response = post(API_URL, data=monitor, headers=headers)
    check_response(response)
    return response.json()[0]
Does the actual upload; also sets and generates the user agent string.
14,149
def trim(stream, **kwargs):
    return FilterNode(stream, trim.__name__, kwargs=kwargs).stream()
Trim the input so that the output contains one continuous subpart of the input.

Args:
    start: Specify the time of the start of the kept section, i.e. the
        frame with the timestamp start will be the first frame in the
        output.
    end: Specify the time of the first frame that will be dropped, i.e.
        the frame immediately preceding the one with the timestamp end
        will be the last frame in the output.
    start_pts: This is the same as start, except this option sets the
        start timestamp in timebase units instead of seconds.
    end_pts: This is the same as end, except this option sets the end
        timestamp in timebase units instead of seconds.
    duration: The maximum duration of the output in seconds.
    start_frame: The number of the first frame that should be passed to
        the output.
    end_frame: The number of the first frame that should be dropped.

Official documentation: `trim <https://ffmpeg.org/ffmpeg-filters.html#trim>`__
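A usage sketch in ffmpeg-python style (file names are illustrative):

    import ffmpeg

    stream = ffmpeg.input('input.mp4')
    clipped = trim(stream, start_frame=10, end_frame=20)
    ffmpeg.output(clipped, 'output.mp4').run()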
14,150
def load(self, game_json=None, mode=None):
    if game_json is None:
        if mode is not None:
            if isinstance(mode, str):
                _game_object = GameObject(mode=self._match_mode(mode=mode))
            elif isinstance(mode, GameMode):
                _game_object = GameObject(mode=mode)
            else:
                raise TypeError("Game mode must be a GameMode or string")
        else:
            _game_object = GameObject(mode=self._game_modes[0])
        _game_object.status = self.GAME_PLAYING
    else:
        if not isinstance(game_json, str):
            raise TypeError("Game must be passed as a serialized JSON string.")
        game_dict = json.loads(game_json)
        if 'mode' not in game_dict:
            raise ValueError("Mode is not provided in JSON; game_json cannot be loaded!")
        _mode = GameMode(**game_dict["mode"])
        _game_object = GameObject(mode=_mode, source_game=game_dict)
    self.game = copy.deepcopy(_game_object)
Load a game from a serialized JSON representation. The game expects a
well defined structure as follows (Note JSON string format):

'{
    "guesses_made": int,
    "key": "str:a 4 word",
    "status": "str: one of playing, won, lost",
    "mode": {
        "digits": int,
        "digit_type": DigitWord.DIGIT | DigitWord.HEXDIGIT,
        "mode": GameMode(),
        "priority": int,
        "help_text": str,
        "instruction_text": str,
        "guesses_allowed": int
    },
    "ttl": int,
    "answer": [int|str0, int|str1, ..., int|strN]
}'

* "mode" will be cast to a GameMode object
* "answer" will be cast to a DigitWord object

:param game_json: The source JSON - MUST be a string
:param mode: A mode (str or GameMode) for the game being loaded
:return: A game object
14,151
def filter(self, value, model=None, context=None):
    http = ['http://', 'https://']
    if all(not str(value).startswith(s) for s in http):
        value = 'http://{}'.format(value)
    return value
Filter

Performs value filtering and returns filtered result.

:param value: input value
:param model: parent model being validated
:param context: object, filtering context
:return: filtered value
14,152
def alias_create(indices, alias, hosts=None, body=None, profile=None, source=None):
    es = _get_instance(hosts, profile)
    if source and body:
        message = 'Either body or source should be specified but not both.'
        raise SaltInvocationError(message)
    if source:
        body = __salt__['cp.get_file_str'](
            source, saltenv=__opts__.get('saltenv', 'base'))
    try:
        result = es.indices.put_alias(index=indices, name=alias, body=body)
        return result.get('acknowledged', False)
    except elasticsearch.TransportError as e:
        raise CommandExecutionError(
            "Cannot create alias {0} in index {1}, server returned code {2} "
            "with message {3}".format(alias, indices, e.status_code, e.error))
Create an alias for a specific index/indices

indices
    Single or multiple indices separated by comma, use _all to perform
    the operation on all indices.
alias
    Alias name
body
    Optional definition such as routing or filter as defined in
    https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html
source
    URL of file specifying optional definition such as routing or filter.
    Cannot be used in combination with ``body``.

CLI example::

    salt myminion elasticsearch.alias_create testindex_v1 testindex
14,153
def _getEngineRoot(self):
    override = ConfigurationManager.getConfigKey('rootDirOverride')
    if override != None:
        Utility.printStderr('Using user-specified engine root: ' + override)
        return override
    else:
        return self._detectEngineRoot()
Retrieves the user-specified engine root directory override (if set), or else performs auto-detection
14,154
def validate_string_dict(dct):
    for k, v in dct.iteritems():
        if not isinstance(k, basestring):
            raise ValueError('key %r in dict must be a string' % k)
        if not isinstance(v, basestring):
            raise ValueError('value %r in dict must be a string' % v)
Validate that the input is a dict with string keys and values. Raises ValueError if not.
14,155
def counter(metatdata, value):
    if not str(value).isdigit():
        return None
    arg = str(metatdata['commands'][0]).replace('-', '')
    repeated_args = arg * int(value)
    return '-' + repeated_args
Returns str(option_string * DropDown Value) e.g. -vvvvv
14,156
def imread(path, grayscale=False, size=None, interpolate="bilinear",
           channel_first=False, as_uint16=False, num_channels=-1):
    _imread_before(grayscale, num_channels)
    f = path if hasattr(path, "read") else open(path, "rb")
    r = png.Reader(file=f)
    width, height, pixels, metadata = r.asDirect()
    bit_depth = metadata.get("bitdepth")
    if bit_depth not in [8, 16]:
        raise ValueError("The bit-depth of the image you want to read is unsupported ({}bit)."
                         "Currently, pypng backend`s imread supports only [8, 16] bit-depth."
                         "the path for this image is {}".format(bit_depth, path))
    img = read_result_to_ndarray(
        pixels, width, height, metadata, grayscale, as_uint16, num_channels)
    return _imread_after(img, size, interpolate, channel_first, imresize)
Read image by pypng module.

Args:
    path (str or 'file object'): File path or object to read.
    grayscale (bool):
    size (tuple of int): (width, height). If None, output img shape
        depends on the files to read.
    channel_first (bool): This argument specifies whether the shape of
        img is (height, width, channel) or (channel, height, width).
        Default value is False, which means the img shape is
        (height, width, channel).
    interpolate (str): must be one of ["nearest", "box", "bilinear",
        "hamming", "bicubic", "lanczos"].
    as_uint16 (bool): If True, this function reads image as uint16.
    num_channels (int): channel size of output array. Default is -1
        which preserves raw image shape.

Returns:
    numpy.ndarray
14,157
def to_task(self):
    from google.appengine.api.taskqueue import Task

    task_args = self.get_task_args().copy()
    payload = None
    if 'payload' in task_args:
        payload = task_args.pop('payload')
    kwargs = {
        'method': METHOD_TYPE,
        'payload': json.dumps(payload)
    }
    kwargs.update(task_args)
    return Task(**kwargs)
Return a task object representing this message.
14,158
def isIn(val, schema, name=None):
    if name is None:
        name = schema
    if not _lists.has_key(name):
        return False
    try:
        return val in _lists[name]
    except TypeError:
        return False
!~~isIn(data)
14,159
def from_flag(cls, flag):
    if not is_flag_set(flag) or '.' not in flag:
        return None
    parts = flag.split('.')
    if parts[0] == 'endpoint':
        return cls.from_name(parts[1])
    else:
        # legacy-style flag without the 'endpoint.' prefix
        return cls.from_name(parts[0])
Return an Endpoint subclass instance based on the given flag.

The instance that is returned depends on the endpoint name embedded in
the flag. Flags should be of the form ``endpoint.{name}.extra...``,
though for legacy purposes, the ``endpoint.`` prefix can be omitted.
The ``{name}`` portion will be passed to
:meth:`~charms.reactive.endpoints.Endpoint.from_name`.

If the flag is not set, an appropriate Endpoint subclass cannot be
found, or the flag name can't be parsed, ``None`` will be returned.
14,160
def _jgezerou16(ins):
    output = []
    value = ins.quad[1]
    if not is_int(value):
        output = _16bit_oper(value)
    output.append('jp %s' % str(ins.quad[2]))
    return output
Jumps to arg(1) if top of the stack (16bit) is >= 0.
Always TRUE for unsigned.
14,161
def fromutc(self, dt):
    if not isinstance(dt, datetime.datetime):
        raise TypeError("fromutc() requires a datetime argument")
    if dt.tzinfo is not self:
        raise ValueError("dt.tzinfo is not self")
    idx = self._find_last_transition(dt, in_utc=True)
    tti = self._get_ttinfo(idx)
    dt_out = dt + datetime.timedelta(seconds=tti.offset)
    fold = self.is_ambiguous(dt_out, idx=idx)
    return enfold(dt_out, fold=int(fold))
The ``tzfile`` implementation of :py:func:`datetime.tzinfo.fromutc`.

:param dt:
    A :py:class:`datetime.datetime` object.

:raises TypeError:
    Raised if ``dt`` is not a :py:class:`datetime.datetime` object.

:raises ValueError:
    Raised if this is called with a ``dt`` which does not have this
    ``tzinfo`` attached.

:return:
    Returns a :py:class:`datetime.datetime` object representing the
    wall time in ``self``'s time zone.
14,162
def get_all(self) -> List[Commodity]:
    query = (
        self.query
        .order_by(Commodity.namespace, Commodity.mnemonic)
    )
    return query.all()
Loads all non-currency commodities, assuming they are stocks.
14,163
def warn(self, message, container=None):
    if self.source is not None:
        message = '[{}] '.format(self.source.location) + message
    if container is not None:
        try:
            message += ' (page {})'.format(container.page.formatted_number)
        except AttributeError:
            pass
    warn(message)
Present the warning `message` to the user, adding information on the location of the related element in the input file.
14,164
def addReferences(self, reference, service_uids): addedanalyses = [] wf = getToolByName(self, ) bsc = getToolByName(self, ) bac = getToolByName(self, ) ref_type = reference.getBlank() and or ref_uid = reference.UID() postfix = 1 for refa in reference.getReferenceAnalyses(): grid = refa.getReferenceAnalysesGroupID() try: cand = int(grid.split()[2]) if cand >= postfix: postfix = cand + 1 except: pass postfix = str(postfix).zfill(int(3)) refgid = % (reference.id, postfix) for service_uid in service_uids: return addedanalyses
Add reference analyses to reference
14,165
def block1(self):
    value = None
    for option in self.options:
        if option.number == defines.OptionRegistry.BLOCK1.number:
            value = parse_blockwise(option.value)
    return value
Get the Block1 option. :return: the Block1 value
14,166
def fai_from_bam(ref_file, bam_file, out_file, data):
    contigs = set([x.contig for x in idxstats(bam_file, data)])
    if not utils.file_uptodate(out_file, bam_file):
        with open(ref.fasta_idx(ref_file, data["config"])) as in_handle:
            with file_transaction(data, out_file) as tx_out_file:
                with open(tx_out_file, "w") as out_handle:
                    for line in (l for l in in_handle if l.strip()):
                        if line.split()[0] in contigs:
                            out_handle.write(line)
    return out_file
Create a fai index with only contigs in the input BAM file.
14,167
def _find_day_section_from_indices(indices, split_interval):
    cells_day = 24 * 60 // split_interval
    rv = [[int(math.floor(i / cells_day)), i % cells_day] for i in indices]
    return rv
Returns a list with [weekday, section] identifiers found using a list of indices.
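A worked example: with split_interval=30 there are 24*60 // 30 = 48 cells per day, so index 50 maps to weekday 1, section 2:

    _find_day_section_from_indices([0, 50], 30)
    # -> [[0, 0], [1, 2]]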
14,168
def at_depth(self, level):
    return Zconfig(lib.zconfig_at_depth(self._as_parameter_, level), False)
Locate the last config item at a specified depth
14,169
def create(self, req, parent, name, mode, fi):
    self.reply_err(req, errno.ENOSYS)
Create and open a file

Valid replies:
    reply_create
    reply_err
14,170
def index_queryset(self, using=None):
    translation.activate(settings.LANGUAGE_CODE)
    return self.get_model().objects.filter(status=UrlNode.PUBLISHED).select_related()
Index current language translation of published objects. TODO: Find a way to index all translations of the given model, not just the current site language's translation.
14,171
def main(): parser = ArgumentParser(description="search files using n-grams") parser.add_argument(, dest=, help="where to search", nargs=1, action="store", default=getcwd()) parser.add_argument(, dest=, help="update the index", action=, default=True) parser.add_argument(, dest=, help="any, images, documents, code, audio, video", nargs=1, action="store", default=["any"]) parser.add_argument(, dest=, help="extended output", action=, default=False) parser.add_argument(, dest=, help="number of results to display", action="store", default=10) parser.add_argument(, nargs=, help="what to search", action="store") args = parser.parse_args() if args.verbose: verbose = 2 pprint(args) else: verbose = 0 query = args.query[0] for arg in args.query[1:]: query = query + " " + arg slb = min([len(w) for w in query.split(" ")]) files = Files(path=args.path, filetype=args.filetype[0], exclude=[], update=args.update, verbose=verbose) index = Index(files, slb=slb, verbose=verbose) results = index.search(query, verbose=verbose) Handler(results, results_number=int(args.results))
Function for command line execution
14,172
def post(self, request):
    serializer = self.get_serializer(data=request.data)
    if serializer.is_valid():
        serializer.save()
        return Response(serializer.data)
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
Save the provided data using the class' serializer.

Args:
    request: The request being made.

Returns:
    An ``APIResponse`` instance. If the request was successful the
    response will have a 200 status code and contain the serializer's
    data. Otherwise a 400 status code and the request's errors will be
    returned.
14,173
def editprojecthook(self, project_id, hook_id, url, push=False,
                    issues=False, merge_requests=False, tag_push=False):
    data = {
        "id": project_id,
        "hook_id": hook_id,
        "url": url,
        "push_events": int(bool(push)),
        "issues_events": int(bool(issues)),
        "merge_requests_events": int(bool(merge_requests)),
        "tag_push_events": int(bool(tag_push)),
    }
    request = requests.put(
        '{0}/{1}/hooks/{2}'.format(self.projects_url, project_id, hook_id),
        headers=self.headers, data=data, verify=self.verify_ssl,
        auth=self.auth, timeout=self.timeout)
    if request.status_code == 200:
        return True
    else:
        return False
Edit an existing hook from a project

:param project_id: project id
:param hook_id: hook id
:param url: the new url
:return: True if success
14,174
def _uptrace(nodelist, node):
    if node.parent_index is None:
        return
    parent = nodelist[node.parent_index]
    for x in _uptrace(nodelist, parent):
        yield x
    yield node
Trace a node upward. Starting from the current node, iterate over its ancestor nodes in order and yield them; the root node is excluded.
14,175
def slice(x, start, length):
    sc = SparkContext._active_spark_context
    return Column(sc._jvm.functions.slice(_to_java_column(x), start, length))
Collection function: returns an array containing all the elements in `x`
from index `start` (or starting from the end if `start` is negative)
with the specified `length`.

>>> df = spark.createDataFrame([([1, 2, 3],), ([4, 5],)], ['x'])
>>> df.select(slice(df.x, 2, 2).alias("sliced")).collect()
[Row(sliced=[2, 3]), Row(sliced=[5])]
14,176
def extract_date(self, date):
    if isinstance(date, six.string_types):
        try:
            date = dateutil.parser.parse(date)
        except ValueError:
            raise ValueError(
                'Invalid date string for {}'.format(self.query_name))
    if not isinstance(date, datetime):
        raise TypeError(
            'Expected a date or datetime for {}'.format(self.query_name))
    return date
Extract date from string if necessary. :returns: the extracted date.
14,177
def _install(self, name, autoinstall): import importlib import pkg_resources spam_spec = importlib.util.find_spec(name) reinstall = False if spam_spec is not None: if self._version: mod = importlib.__import__(name) if hasattr(mod, ): ver = mod.__version__ else: try: ver = pkg_resources.get_distribution(name).version except Exception as e: env.logger.debug( f) env.logger.debug( f ) if self._version.startswith( ) and pkg_resources.parse_version( ver) == pkg_resources.parse_version( self._version[2:]): pass elif self._version.startswith( ) and pkg_resources.parse_version( ver) <= pkg_resources.parse_version( self._version[2:]): pass elif self._version.startswith( ) and not self._version.startswith( ) and pkg_resources.parse_version( ver) < pkg_resources.parse_version( self._version[1:]): pass elif self._version.startswith( ) and pkg_resources.parse_version( ver) >= pkg_resources.parse_version( self._version[2:]): pass elif self._version.startswith( ) and not self._version.startswith( ) and pkg_resources.parse_version( ver) > pkg_resources.parse_version( self._version[1:]): pass elif self._version.startswith( ) and pkg_resources.parse_version( ver) != pkg_resources.parse_version( self._version[2:]): pass elif self._version[0] not in ( , , , ) and pkg_resources.parse_version( ver) == pkg_resources.parse_version(self._version): pass else: env.logger.warning( f ) reinstall = True if spam_spec and not reinstall: return True if not autoinstall: return False import subprocess cmd = [, ] + ([] if self._version else []) + [ self._module + (self._version if self._version else ) if self._autoinstall is True else self._autoinstall ] env.logger.info( f) ret = subprocess.call(cmd) if reinstall: import sys importlib.reload(sys.modules[name]) return ret == 0 and self._install(name, False)
Check existence of Python module and install it using command pip install if necessary.
14,178
def update_user(self, user_is_artist="", artist_level="", artist_specialty="",
                real_name="", tagline="", countryid="", website="", bio=""):
    if self.standard_grant_type is not "authorization_code":
        raise DeviantartError("Authentication through Authorization Code (Grant Type) is required in order to connect to this endpoint.")
    post_data = {}
    if user_is_artist:
        post_data["user_is_artist"] = user_is_artist
    if artist_level:
        post_data["artist_level"] = artist_level
    if artist_specialty:
        post_data["artist_specialty"] = artist_specialty
    if real_name:
        post_data["real_name"] = real_name
    if tagline:
        post_data["tagline"] = tagline
    if countryid:
        post_data["countryid"] = countryid
    if website:
        post_data["website"] = website
    if bio:
        post_data["bio"] = bio
    response = self._req('/user/profile/update', post_data=post_data)
    return response['success']
Update the users profile information

:param user_is_artist: Is the user an artist?
:param artist_level: If the user is an artist, what level are they
:param artist_specialty: If the user is an artist, what is their specialty
:param real_name: The users real name
:param tagline: The users tagline
:param countryid: The users location
:param website: The users personal website
:param bio: The users bio
14,179
def comment(self, s, **args):
    self.write(u"<!-- ")
    self.write(s, **args)
    self.writeln(u" -->")
Write XML comment.
14,180
def feed_eof(self):
    self._incoming.write_eof()
    ssldata, appdata = self.feed_ssldata(b'')
    assert appdata == [] or appdata == [b'']
Send a potentially "ragged" EOF. This method will raise an SSL_ERROR_EOF exception if the EOF is unexpected.
14,181
def _add_session(self, session, start_info, groups_by_name): for (key, value) in six.iteritems(start_info.hparams): group.hparams[key].CopyFrom(value) groups_by_name[group_name] = group
Adds a new Session protobuffer to the 'groups_by_name' dictionary. Called by _build_session_groups when we encounter a new session. Creates the Session protobuffer and adds it to the relevant group in the 'groups_by_name' dict. Creates the session group if this is the first time we encounter it. Args: session: api_pb2.Session. The session to add. start_info: The SessionStartInfo protobuffer associated with the session. groups_by_name: A str to SessionGroup protobuffer dict. Representing the session groups and sessions found so far.
14,182
def libvlc_media_event_manager(p_md):
    f = _Cfunctions.get('libvlc_media_event_manager', None) or \
        _Cfunction('libvlc_media_event_manager', ((1,),), class_result(EventManager),
                   ctypes.c_void_p, Media)
    return f(p_md)
Get event manager from media descriptor object. NOTE: this function doesn't increment reference counting. @param p_md: a media descriptor object. @return: event manager object.
14,183
def pipeline_counter(self):
    if 'pipeline_counter' in self.data and self.data.pipeline_counter:
        return self.data.get('pipeline_counter')
    elif self.stage.pipeline is not None:
        return self.stage.pipeline.data.counter
    else:
        return self.stage.data.pipeline_counter
Get pipeline counter of current job instance.

Because instantiating a job instance could be performed in different ways
and those return different results, we have to check from where to get the
counter of the pipeline.

:return: pipeline counter.
14,184
def patched_model():
    patched = ('__reduce__', '__getstate__', '__setstate__')
    originals = {}
    for patch in patched:
        try:
            originals[patch] = getattr(models.Model, patch)
        except:
            pass
    try:
        models.Model.__reduce__ = _reduce
        try:
            del models.Model.__getstate__
        except:
            pass
        try:
            del models.Model.__setstate__
        except:
            pass
        yield
    finally:
        for patch in patched:
            try:
                setattr(models.Model, patch, originals[patch])
            except KeyError:
                try:
                    delattr(models.Model, patch)
                except AttributeError:
                    pass
Context Manager that safely patches django.db.Model.__reduce__().
14,185
def calculate_betweenness_centality(graph: BELGraph, number_samples: int = CENTRALITY_SAMPLES) -> Counter:
    try:
        res = nx.betweenness_centrality(graph, k=number_samples)
    except Exception:
        res = nx.betweenness_centrality(graph)
    return Counter(res)
Calculate the betweenness centrality over nodes in the graph. Tries to do it with a certain number of samples, but then tries a complete approach if it fails.
14,186
def write_preferences_file(self):
    user_data_dir = find_pmag_dir.find_user_data_dir("thellier_gui")
    if not os.path.exists(user_data_dir):
        find_pmag_dir.make_user_data_dir(user_data_dir)
    pref_file = os.path.join(user_data_dir, "thellier_gui_preferences.json")
    with open(pref_file, "w+") as pfile:
        print('Saving preferences to {}'.format(pref_file))
        json.dump(self.preferences, pfile)
Write json preferences file to (platform specific) user data directory, or PmagPy directory if appdirs module is missing.
14,187
def list_experiment(args):
    nni_config = Config(get_config_filename(args))
    rest_port = nni_config.get_config('restServerPort')
    rest_pid = nni_config.get_config('restServerPid')
    if not detect_process(rest_pid):
        print_error('Experiment is not running...')
        return
    running, _ = check_rest_server_quick(rest_port)
    if running:
        response = rest_get(experiment_url(rest_port), REST_TIME_OUT)
        if response and check_response(response):
            content = convert_time_stamp_to_date(json.loads(response.text))
            print(json.dumps(content, indent=4, sort_keys=True, separators=(',', ':')))
        else:
            print_error('List experiment failed...')
    else:
        print_error('Restful server is not running...')
Get experiment information
14,188
def way(self, w):
    if w.id not in self.way_ids:
        return
    way_points = []
    for n in w.nodes:
        try:
            way_points.append(Point(n.location.lon, n.location.lat))
        except o.InvalidLocationError:
            logging.debug('Way %s has a node (%s) with an invalid location',
                          w.id, n.ref)
    self.ways[w.id] = Way(w.id, way_points)
Process each way.
14,189
def is_quoted(arg: str) -> bool:
    return len(arg) > 1 and arg[0] == arg[-1] and arg[0] in constants.QUOTES
Checks if a string is quoted

:param arg: the string being checked for quotes
:return: True if a string is quoted
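For instance (assuming constants.QUOTES holds the single- and double-quote characters):

    is_quoted('"hello"')  # True
    is_quoted("'a'")      # True
    is_quoted('hello')    # False -- not wrapped in quotes
    is_quoted('"')        # False -- length must exceed 1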
14,190
def descendants(self):
    for i in self.current_item.items:
        self.move_to(i)
        if i.type == TYPE_COLLECTION:
            for c in self.children:
                yield c
        else:
            yield i
        self.move_up()
Recursively return every dataset below current item.
14,191
def chi_eff(mass1, mass2, spin1z, spin2z):
    return (spin1z * mass1 + spin2z * mass2) / (mass1 + mass2)
Returns the effective spin from mass1, mass2, spin1z, and spin2z.
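A quick numeric check (masses in solar units, values illustrative):

    chi_eff(30.0, 10.0, 0.5, -0.2)
    # (0.5*30 + (-0.2)*10) / (30 + 10) = 13.0 / 40 = 0.325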
14,192
def _getScalesDiag(self, termx=0):
    assert self.P > 1
    assert self.noisPos != None
    assert termx < self.n_terms - 1
    assert self.covar_type[self.noisPos] not in ['lowrank', 'block', 'fixed']
    assert self.covar_type[termx] not in ['lowrank', 'block', 'fixed']
    scales = []
    res = self.estimateHeritabilities(self.vd.getTerm(termx).getK())
    scaleg = SP.sqrt(res['varg'].mean())
    scalen = SP.sqrt(res['varn'].mean())
    for term_i in range(self.n_terms):
        if term_i == termx:
            _scales = scaleg * self.diag[term_i]
        elif term_i == self.noisPos:
            _scales = scalen * self.diag[term_i]
        else:
            _scales = 0. * self.diag[term_i]
        if self.offset[term_i] > 0:
            _scales = SP.concatenate((_scales, SP.array([SP.sqrt(self.offset[term_i])])))
        scales.append(_scales)
    return SP.concatenate(scales)
Uses 2 term single trait model to get covar params for initialization.

Args:
    termx: non-noise term that is used for initialization
14,193
def energy_minimize(self, forcefield='UFF', steps=1000, **kwargs):
    tmp_dir = tempfile.mkdtemp()
    original = clone(self)
    self._kick()
    self.save(os.path.join(tmp_dir, 'un-minimized.mol2'))
    extension = os.path.splitext(forcefield)[-1]
    openbabel_ffs = ['MMFF94', 'MMFF94s', 'UFF', 'GAFF', 'Ghemical']
    if forcefield in openbabel_ffs:
        self._energy_minimize_openbabel(tmp_dir, forcefield=forcefield,
                                        steps=steps, **kwargs)
    elif extension == '.xml':
        self._energy_minimize_openmm(tmp_dir, forcefield_files=forcefield,
                                     forcefield_name=None, steps=steps, **kwargs)
    else:
        self._energy_minimize_openmm(tmp_dir, forcefield_files=None,
                                     forcefield_name=forcefield, steps=steps,
                                     **kwargs)
    self.update_coordinates(os.path.join(tmp_dir, 'minimized.pdb'))
Perform an energy minimization on a Compound.

Default behavior utilizes Open Babel (http://openbabel.org/docs/dev/) to perform an energy minimization/geometry optimization on a Compound by applying a generic force field.

Can also utilize OpenMM (http://openmm.org/) to energy minimize after atomtyping a Compound using Foyer (https://github.com/mosdef-hub/foyer) to apply a forcefield XML file that contains valid SMARTS strings.

This function is primarily intended to be used on smaller components, with sizes on the order of tens to hundreds of particles, as the energy minimization scales poorly with the number of particles.

Parameters
----------
steps : int, optional, default=1000
    The number of optimization iterations.
forcefield : str, optional, default='UFF'
    The generic force field to apply to the Compound for minimization.
    Valid options are 'MMFF94', 'MMFF94s', 'UFF', 'GAFF', and 'Ghemical'.
    Please refer to the Open Babel documentation
    (http://open-babel.readthedocs.io/en/latest/Forcefields/Overview.html)
    when considering your choice of force field.
    Utilizing OpenMM for energy minimization requires a forcefield XML file
    with valid SMARTS strings. Please refer to
    (http://docs.openmm.org/7.0.0/userguide/application.html#creating-force-fields)
    for more information.

Keyword Arguments
-----------------
algorithm : str, optional, default='cg'
    The energy minimization algorithm. Valid options are 'steep', 'cg', and
    'md', corresponding to steepest descent, conjugate gradient, and
    equilibrium molecular dynamics respectively.
    For _energy_minimize_openbabel
scale_bonds : float, optional, default=1
    Scales the bond force constant (1 is completely on).
    For _energy_minimize_openmm
scale_angles : float, optional, default=1
    Scales the angle force constant (1 is completely on).
    For _energy_minimize_openmm
scale_torsions : float, optional, default=1
    Scales the torsional force constants (1 is completely on).
    For _energy_minimize_openmm
    Note: Only Ryckaert-Bellemans style torsions are currently supported.
scale_nonbonded : float, optional, default=1
    Scales epsilon (1 is completely on).
    For _energy_minimize_openmm

References
----------
If using _energy_minimize_openmm(), please cite:
.. [1] P. Eastman, M. S. Friedrichs, J. D. Chodera, R. J. Radmer,
       C. M. Bruns, J. P. Ku, K. A. Beauchamp, T. J. Lane, L.-P. Wang,
       D. Shukla, T. Tye, M. Houston, T. Stich, C. Klein, M. R. Shirts,
       and V. S. Pande. "OpenMM 4: A Reusable, Extensible, Hardware
       Independent Library for High Performance Molecular Simulation."
       J. Chem. Theor. Comput. 9(1): 461-469. (2013).

If using _energy_minimize_openbabel(), please cite:
.. [1] O'Boyle, N.M.; Banck, M.; James, C.A.; Morley, C.;
       Vandermeersch, T.; Hutchison, G.R. "Open Babel: An open chemical
       toolbox." (2011) J. Cheminf. 3, 33
.. [2] Open Babel, version X.X.X http://openbabel.org, (installed Month Year)

If using the 'MMFF94' force field please also cite the following:
.. [3] T.A. Halgren, "Merck molecular force field. I. Basis, form, scope,
       parameterization, and performance of MMFF94." (1996)
       J. Comput. Chem. 17, 490-519
.. [4] T.A. Halgren, "Merck molecular force field. II. MMFF94 van der Waals
       and electrostatic parameters for intermolecular interactions." (1996)
       J. Comput. Chem. 17, 520-552
.. [5] T.A. Halgren, "Merck molecular force field. III. Molecular geometries
       and vibrational frequencies for MMFF94." (1996)
       J. Comput. Chem. 17, 553-586
.. [6] T.A. Halgren and R.B. Nachbar, "Merck molecular force field. IV.
       Conformational energies and geometries for MMFF94." (1996)
       J. Comput. Chem. 17, 587-615
.. [7] T.A. Halgren, "Merck molecular force field. V. Extension of MMFF94
       using experimental data, additional computational data, and empirical
       rules." (1996) J. Comput. Chem. 17, 616-641

If using the 'MMFF94s' force field please cite the above along with:
.. [8] T.A. Halgren, "MMFF VI. MMFF94s option for energy minimization
       studies." (1999) J. Comput. Chem. 20, 720-729

If using the 'UFF' force field please cite the following:
.. [3] Rappe, A.K., Casewit, C.J., Colwell, K.S., Goddard, W.A. III,
       Skiff, W.M. "UFF, a full periodic table force field for molecular
       mechanics and molecular dynamics simulations." (1992)
       J. Am. Chem. Soc. 114, 10024-10039

If using the 'GAFF' force field please cite the following:
.. [3] Wang, J., Wolf, R.M., Caldwell, J.W., Kollman, P.A., Case, D.A.
       "Development and testing of a general AMBER force field" (2004)
       J. Comput. Chem. 25, 1157-1174

If using the 'Ghemical' force field please cite the following:
.. [3] T. Hassinen and M. Perakyla, "New energy terms for reduced protein
       models implemented in an off-lattice force field" (2001)
       J. Comput. Chem. 22, 1229-1242
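A minimal usage sketch (assumes mBuild and Open Babel are installed; the SMILES input and step counts are illustrative):

import mbuild as mb

methane = mb.load('C', smiles=True)  # build a small test Compound
methane.energy_minimize()  # defaults: Open Babel UFF, 1000 steps
methane.energy_minimize(forcefield='MMFF94', steps=500)  # alternate force field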
14,194
def is_connected(C, directed=True): if isdense(C): return sparse.connectivity.is_connected(csr_matrix(C), directed=directed) else: return sparse.connectivity.is_connected(C, directed=directed)
Check connectivity of the given matrix. Parameters ---------- C : scipy.sparse matrix Count matrix specifying edge weights. directed : bool, optional Whether to compute connected components for a directed or undirected graph. Default is True. Returns ------- is_connected: bool True if C is connected, False otherwise. See also -------- largest_connected_submatrix Notes ----- A count matrix is connected if the graph having the count matrix as adjacency matrix has a single connected component. Connectivity of a graph can be efficiently checked using Tarjan's algorithm. References ---------- .. [1] Tarjan, R E. 1972. Depth-first search and linear graph algorithms. SIAM Journal on Computing 1 (2): 146-160. Examples -------- >>> import numpy as np >>> from msmtools.estimation import is_connected >>> C = np.array([[10, 1, 0], [2, 0, 3], [0, 0, 4]]) >>> is_connected(C) False >>> is_connected(C, directed=False) True
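The Tarjan-based check described in the Notes is available directly in SciPy; a sketch of the equivalent test, assuming the dense/sparse dispatch ultimately reduces to a connected-component count:

import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

C = np.array([[10, 1, 0], [2, 0, 3], [0, 0, 4]])
# Strongly connected check for the directed count graph
n, _ = connected_components(csr_matrix(C), directed=True, connection='strong')
print(n == 1)  # False: no path from state 2 back to states 0/1
# Ignoring edge direction, the graph is connected
n, _ = connected_components(csr_matrix(C), directed=False)
print(n == 1)  # True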
14,195
def create(name, template_body=None, template_url=None, parameters=None,
           notification_arns=None, disable_rollback=None,
           timeout_in_minutes=None, capabilities=None, tags=None,
           on_failure=None, stack_policy_body=None, stack_policy_url=None,
           region=None, key=None, keyid=None, profile=None):
    conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
    try:
        return conn.create_stack(name, template_body, template_url,
                                 parameters, notification_arns,
                                 disable_rollback, timeout_in_minutes,
                                 capabilities, tags, on_failure,
                                 stack_policy_body, stack_policy_url)
    except BotoServerError as e:
        # Message text reconstructed; Salt's exact wording may differ
        msg = 'Failed to create stack {0}: {1}'.format(name, e)
        log.error(msg)
        log.debug(e)
        return False
Create a CFN stack. CLI Example: .. code-block:: bash salt myminion boto_cfn.create mystack template_url='https://s3.amazonaws.com/bucket/template.cft' \ region=us-east-1
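For reference, a sketch of the underlying boto 2 call that this module wraps (connection setup shown without Salt's credential handling; stack name, template URL, and parameters are illustrative):

import boto.cloudformation

conn = boto.cloudformation.connect_to_region('us-east-1')
stack_id = conn.create_stack(
    'mystack',
    template_url='https://s3.amazonaws.com/bucket/template.cft',
    parameters=[('KeyName', 'mykey')],  # list of (key, value) pairs
    timeout_in_minutes=30,
)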
14,196
def get_wu_settings():
    ret = {}
    # Dict keys and day names reconstructed from the docstring below
    day = ['Every Day', 'Sunday', 'Monday', 'Tuesday', 'Wednesday',
           'Thursday', 'Friday', 'Saturday']
    with salt.utils.winapi.Com():
        # 'Microsoft.Update.AutoUpdate' is the Windows Update Agent ProgID
        obj_au = win32com.client.Dispatch('Microsoft.Update.AutoUpdate')
        obj_au_settings = obj_au.Settings
        ret['Featured Updates'] = obj_au_settings.FeaturedUpdatesEnabled
        ret['Group Policy Required'] = obj_au_settings.Required
        ret['Microsoft Update'] = _get_msupdate_status()
        ret['Needs Reboot'] = get_needs_reboot()
        ret['Non Admins Elevated'] = obj_au_settings.NonAdministratorsElevated
        ret['Notification Level'] = obj_au_settings.NotificationLevel
        ret['Read Only'] = obj_au_settings.ReadOnly
        ret['Recommended Updates'] = obj_au_settings.IncludeRecommendedUpdates
        ret['Scheduled Day'] = day[obj_au_settings.ScheduledInstallationDay]
        # Zero-pad single-digit hours so times render as 'HH:00'
        if obj_au_settings.ScheduledInstallationTime < 10:
            ret['Scheduled Time'] = '0{0}:00'.format(
                obj_au_settings.ScheduledInstallationTime)
        else:
            ret['Scheduled Time'] = '{0}:00'.format(
                obj_au_settings.ScheduledInstallationTime)
        return ret
Get current Windows Update settings.

Returns:
    dict: A dictionary of Windows Update settings:

    Featured Updates:
        Boolean value that indicates whether to display notifications for
        featured updates.
    Group Policy Required (Read-only):
        Boolean value that indicates whether Group Policy requires the
        Automatic Updates service.
    Microsoft Update:
        Boolean value that indicates whether to turn on Microsoft Update
        for other Microsoft products.
    Needs Reboot:
        Boolean value that indicates whether the machine is in a reboot
        pending state.
    Non Admins Elevated:
        Boolean value that indicates whether non-administrators can perform
        some update-related actions without administrator approval.
    Notification Level:
        Number 1 to 4 indicating the update level:
            1. Never check for updates
            2. Check for updates but let me choose whether to download and
               install them
            3. Download updates but let me choose whether to install them
            4. Install updates automatically
    Read Only (Read-only):
        Boolean value that indicates whether the Automatic Update settings
        are read-only.
    Recommended Updates:
        Boolean value that indicates whether to include optional or
        recommended updates when searching for and installing updates.
    Scheduled Day:
        Day of the week on which Automatic Updates installs or uninstalls
        updates.
    Scheduled Time:
        Time at which Automatic Updates installs or uninstalls updates.

CLI Examples:

.. code-block:: bash

    salt '*' win_wua.get_wu_settings
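A standalone sketch of the same COM query (requires pywin32 on a Windows host; 'Microsoft.Update.AutoUpdate' is the standard Windows Update Agent ProgID):

import win32com.client

au = win32com.client.Dispatch('Microsoft.Update.AutoUpdate')
settings = au.Settings
print(settings.NotificationLevel)  # 1-4, per the table above
print(settings.ScheduledInstallationDay)  # 0 = every day, 1-7 = Sun..Sat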
14,197
def get_color(name, number=None):
    # String literals below were reconstructed from the docstring; exact
    # values in the original source may differ slightly.
    colors = ()
    if is_a_tty() and not env.SSH_CLIENT:
        if 'index' not in _color_code_map:
            # OSC 4 queries the indexed palette, with the entry number appended
            _color_code_map['index'] = '4;' + str(number or '')
        if os_name == 'nt':
            from .windows import get_color  # Windows-specific helper
            color_id = get_color(name)
            if sys.getwindowsversion()[2] > 16299:  # Win10 Fall Creators Update
                basic_palette = color_tables.cmd1709_palette4
            else:
                basic_palette = color_tables.cmd_palette4
            colors = (f'{i:02x}' for i in basic_palette[color_id])
        elif sys.platform == 'darwin':
            if env.TERM_PROGRAM == 'iTerm.app':
                colors = _get_color_xterm(name, number)
        elif os_name == 'posix':
            if sys.platform.startswith('freebsd'):
                pass  # query not supported here
            elif env.TERM and env.TERM.startswith('xterm'):
                colors = _get_color_xterm(name, number)
    return tuple(colors)
Query the default terminal for colors, etc.

Direct queries supported on xterm, iTerm, perhaps others.

Arguments:
    str: name, one of ('foreground', 'fg', 'background', 'bg', or 'index')
         # index grabs a palette index
    int: or a "dynamic color number" of (4, 10-19), see links below.
    str: number - if name is 'index', number should be an int from 0…255

Queries terminal using ``OSC # ? BEL`` sequence, which responds with a color in this X Window format syntax:

- ``rgb:DEAD/BEEF/CAFE``
- `Control sequences <http://invisible-island.net/xterm/ctlseqs/ctlseqs.html#h2-Operating-System-Commands>`_
- `X11 colors <https://www.x.org/releases/X11R7.7/doc/libX11/libX11/libX11.html#RGB_Device_String_Specification>`_

Returns:
    tuple[str]: A tuple of four-digit hex strings after parsing; the last
    two digits are the least significant and can be chopped if needed:

    ``('DEAD', 'BEEF', 'CAFE')``

    If an error occurs during retrieval or parsing, the tuple will be empty.

Examples:
    >>> get_color('bg')
    ('0000', '0000', '0000')

    >>> get_color('index', 2)       # second color in indexed
    ('4e4d', '9a9a', '0605')        # palette, 2 aka 32 in basic

Note:
    Blocks if terminal does not support the function.
    Checks is_a_tty() first, since function would also block if i/o were
    redirected through a pipe.

    On Windows, only able to find palette defaults, which may be different
    if they were customized. To find the palette index instead, see
    ``windows.get_color``.
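A sketch of parsing the X Window reply format described above into the returned tuple shape (the helper name is hypothetical, not part of the library):

def parse_x11_color(reply):
    # 'rgb:DEAD/BEEF/CAFE' -> ('DEAD', 'BEEF', 'CAFE'); empty tuple on error
    if not reply.startswith('rgb:'):
        return ()
    return tuple(reply[4:].split('/'))

assert parse_x11_color('rgb:dead/beef/cafe') == ('dead', 'beef', 'cafe')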
14,198
def subparsers(self): try: return self.__subparsers except AttributeError: parent = super(ArgumentParser, self) self.__subparsers = parent.add_subparsers(title="drill down") self.__subparsers.metavar = "COMMAND" return self.__subparsers
Obtain the subparsers object, creating and caching it on first access.
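A usage sketch of the lazy property (ArgumentParser here is the subclass defining the property above; command names are illustrative):

parser = ArgumentParser(prog='tool')
run = parser.subparsers.add_parser('run', help='run the thing')
run.set_defaults(func=lambda args: print('running'))
assert parser.subparsers is parser.subparsers  # cached after first access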
14,199
def p_expr(self, p): if len(p) == 2: p[0] = p[1] else: p[0] = ast.Comma(left=p[1], right=p[3])
expr : assignment_expr | expr COMMA assignment_expr
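The rule left-nests comma expressions; a sketch of the tree produced for `a, b, c`, with namedtuples standing in for the real ast nodes:

from collections import namedtuple

Comma = namedtuple('Comma', 'left right')
Name = namedtuple('Name', 'value')

a, b, c = Name('a'), Name('b'), Name('c')
# 'expr COMMA assignment_expr' fires twice while parsing 'a, b, c':
tree = Comma(left=Comma(left=a, right=b), right=c)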