Code/docstring pairs dataset (~389k rows). Columns: row index (int64), code (string, 26 to 79.6k chars), docstring (string, 1 to 46.9k chars).
26,800
def _ipsi(y, tol=1.48e-9, maxiter=10):
    y = np.asanyarray(y, dtype='float')
    # Newton's method on psi(x) = y. The original initial guess was lost in
    # extraction; a common heuristic is assumed here.
    x0 = np.exp(y) + 0.5
    for i in range(maxiter):
        x1 = x0 - (psi(x0) - y) / polygamma(1, x0)
        if np.linalg.norm(x1 - x0) < tol:
            return x1
        x0 = x1
    raise Exception('Unable to converge in {} iterations, value is {}'.format(maxiter, x1))
Inverse of psi (digamma) using Newton's method. For the purposes of Dirichlet MLE, since the parameters a[i] must always satisfy a > 0, we define ipsi :: R -> (0,inf).
26,801
def weather_history_at_place(self, name, start=None, end=None):
    assert isinstance(name, str), "Value must be a string"
    encoded_name = name
    # The query parameter keys were lost in extraction; 'q' and 'lang' are the
    # usual OWM parameters and are assumed here.
    params = {'q': encoded_name, 'lang': self._language}
    if start is None and end is None:
        pass
    elif start is not None and end is not None:
        unix_start = timeformatutils.to_UNIXtime(start)
        unix_end = timeformatutils.to_UNIXtime(end)
        if unix_start >= unix_end:
            raise ValueError("Error: the start time boundary must "
                             "precede the end time!")
        current_time = time()
        if unix_start > current_time:
            raise ValueError("Error: the start time boundary must "
                             "precede the current time!")
        params['start'] = str(unix_start)
        params['end'] = str(unix_end)
    else:
        raise ValueError("Error: one of the time boundaries is None, "
                         "while the other is not!")
    uri = http_client.HttpClient.to_url(CITY_WEATHER_HISTORY_URL,
                                        self._API_key,
                                        self._subscription_type,
                                        self._use_ssl)
    _, json_data = self._wapi.cacheable_get_json(uri, params=params)
    return self._parsers['weather_history'].parse_JSON(json_data)
Queries the OWM Weather API for weather history for the specified location (eg: "London,uk"). A list of *Weather* objects is returned. It is possible to query for weather history in a closed time period, whose boundaries can be passed as optional parameters. :param name: the location's toponym :type name: str or unicode :param start: the object conveying the time value for the start query boundary (defaults to ``None``) :type start: int, ``datetime.datetime`` or ISO8601-formatted string :param end: the object conveying the time value for the end query boundary (defaults to ``None``) :type end: int, ``datetime.datetime`` or ISO8601-formatted string :returns: a list of *Weather* instances or ``None`` if history data is not available for the specified location :raises: *ParseResponseException* when OWM Weather API responses' data cannot be parsed, *APICallException* when OWM Weather API can not be reached, *ValueError* if the time boundaries are not in the correct chronological order, if one of the time boundaries is not ``None`` and the other is or if one or both of the time boundaries are after the current time
26,802
def get_last_branch_location(cls):
    LastBranchFromIP = cls.read_msr(DebugRegister.LastBranchFromIP)
    LastBranchToIP = cls.read_msr(DebugRegister.LastBranchToIP)
    return (LastBranchFromIP, LastBranchToIP)
Returns the source and destination addresses of the last taken branch. @rtype: tuple( int, int ) @return: Source and destination addresses of the last taken branch. @raise WindowsError: Raises an exception on error. @raise NotImplementedError: Current architecture is not C{i386} or C{amd64}. @warning: This method uses the processor's machine specific registers (MSR). It could potentially brick your machine. It works on my machine, but your mileage may vary. @note: It doesn't seem to work in VMWare or VirtualBox machines. Maybe it fails in other virtualization/emulation environments, no extensive testing was made so far.
26,803
def first_line_indent(self):
    ind = self.ind
    if ind is None:
        return None
    hanging = ind.hanging
    if hanging is not None:
        return Length(-hanging)
    firstLine = ind.firstLine
    if firstLine is None:
        return None
    return firstLine
A |Length| value calculated from the values of `w:ind/@w:firstLine` and `w:ind/@w:hanging`. Returns |None| if the `w:ind` child is not present.
26,804
def commit(self):
    try:
        if self.connection is not None:
            self.connection.commit()
            self._updateCheckTime()
        self.release()
    except Exception:
        # Errors are deliberately swallowed: per the docstring, commit
        # "successfully does nothing" when transactions are unsupported.
        pass
Commit MySQL Transaction to database. MySQLDB: If the database and the tables support transactions, this commits the current transaction; otherwise this method successfully does nothing. @author: Nick Verbeck @since: 5/12/2008
26,805
def check_virtualserver(self, name):
    vs = self.bigIP.LocalLB.VirtualServer
    for v in vs.get_list():
        # F5 returns full partition paths (e.g. '/Common/name'); the split
        # separator was lost in extraction and '/' is assumed here.
        if v.split('/')[-1] == name:
            return True
    return False
Check to see if a virtual server exists
26,806
def wsgi_wrap(app):
    @wraps(app)
    def wrapped(environ, start_response):
        status_headers = [None, None]

        def _start_response(status, headers):
            status_headers[:] = [status, headers]

        body = app(environ, _start_response)
        ret = body, status_headers[0], status_headers[1]
        return ret
    return wrapped
Wraps a standard wsgi application e.g.: def app(environ, start_response) It intercepts the start_response callback and grabs the results from it so it can return the status, headers, and body as a tuple
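A minimal sketch of exercising the wrapper above; the tiny app below is hypothetical, and passing None for start_response is safe only because wrapped() substitutes its own callback:

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

body, status, headers = wsgi_wrap(app)({}, None)
# body == [b'hello'], status == '200 OK', headers == [('Content-Type', 'text/plain')]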
26,807
def edit(self, hardware_id, userdata=None, hostname=None, domain=None,
         notes=None, tags=None):
    obj = {}
    if userdata:
        self.hardware.setUserMetadata([userdata], id=hardware_id)
    if tags is not None:
        self.hardware.setTags(tags, id=hardware_id)
    if hostname:
        obj['hostname'] = hostname
    if domain:
        obj['domain'] = domain
    if notes:
        obj['notes'] = notes
    if not obj:
        return True
    return self.hardware.editObject(obj, id=hardware_id)
Edit hostname, domain name, notes, user data of the hardware. Parameters set to None will be ignored and not attempted to be updated. :param integer hardware_id: the instance ID to edit :param string userdata: user data on the hardware to edit. If none exist it will be created :param string hostname: valid hostname :param string domain: valid domain name :param string notes: notes about this particular hardware :param string tags: tags to set on the hardware as a comma separated list. Use the empty string to remove all tags. Example:: # Change the hostname on instance 12345 to 'something' result = mgr.edit(hardware_id=12345 , hostname="something") #result will be True or an Exception
26,808
def get_dockercfg_credentials(self, docker_registry):
    if not self.registry_secret_path:
        return {}
    dockercfg = Dockercfg(self.registry_secret_path)
    registry_creds = dockercfg.get_credentials(docker_registry)
    # 'username'/'password' are assumed as the dockercfg credential keys; the
    # original literals were lost in extraction. Output keys per the docstring.
    if 'username' not in registry_creds:
        return {}
    return {
        'basic_auth_username': registry_creds['username'],
        'basic_auth_password': registry_creds['password'],
    }
Read the .dockercfg file and return an empty dict, or else a dict with keys 'basic_auth_username' and 'basic_auth_password'.
26,809
def handle_one_request(self):
    self.raw_requestline = self.rfile.readline(65537)
    if len(self.raw_requestline) > 65536:
        self.requestline = ''
        self.request_version = ''
        self.command = ''
        self.send_error(414)
        return
    if not self.parse_request():
        return
    handler = ServerHandler(
        self.rfile, self.wfile, self.get_stderr(), self.get_environ()
    )
    handler.request_handler = self
    handler.run(self.server.get_app())
Copy of WSGIRequestHandler.handle(), but with different ServerHandler
26,810
def get_ssh_key(host, username, password, protocol=None, port=None,
                certificate_verify=False):
    if protocol is None:
        protocol = 'https'
    if port is None:
        port = 443
    # The URL template and result keys were lost in extraction; the ESXi
    # authorized-keys endpoint and the 'status'/'key'/'Error' keys are assumed.
    url = '{0}://{1}:{2}/host/ssh_root_authorized_keys'.format(protocol, host, port)
    ret = {}
    try:
        result = salt.utils.http.query(url,
                                       status=True,
                                       text=True,
                                       method='GET',
                                       username=username,
                                       password=password,
                                       verify_ssl=certificate_verify)
        if result.get('status') == 200:
            ret['status'] = True
            ret['key'] = result['text']
        else:
            ret['status'] = False
            ret['Error'] = result['error']
    except Exception as msg:
        ret['status'] = False
        ret['Error'] = msg
    return ret
Retrieve the authorized_keys entry for root. This function only works for ESXi, not vCenter. :param host: The location of the ESXi Host :param username: Username to connect as :param password: Password for the ESXi web endpoint :param protocol: defaults to https, can be http if ssl is disabled on ESXi :param port: defaults to 443 for https :param certificate_verify: If true require that the SSL connection present a valid certificate :return: True if upload is successful CLI Example: .. code-block:: bash salt '*' vsphere.get_ssh_key my.esxi.host root bad-password certificate_verify=True
26,811
def suites(self, request, pk=None):
    # The values_list() arguments and string literals were lost in extraction;
    # 'slug' and kind='suite' are assumptions from the surrounding API.
    suites_names = self.get_object().suites.values_list('slug', flat=True)
    suites_metadata = SuiteMetadata.objects.filter(kind='suite',
                                                   suite__in=suites_names)
    page = self.paginate_queryset(suites_metadata)
    serializer = SuiteMetadataSerializer(page, many=True,
                                         context={'request': request})
    return self.get_paginated_response(serializer.data)
List of test suite names available in this project
26,812
def fmpt(P):
    P = np.matrix(P)
    k = P.shape[0]
    A = np.zeros_like(P)
    ss = steady_state(P).reshape(k, 1)
    for i in range(k):
        A[:, i] = ss
    A = A.transpose()
    I = np.identity(k)
    Z = la.inv(I - P + A)
    E = np.ones_like(Z)
    A_diag = np.diag(A)
    A_diag = A_diag + (A_diag == 0)
    D = np.diag(1. / A_diag)
    Zdg = np.diag(np.diag(Z))
    M = (I - Z + E * Zdg) * D
    return np.array(M)
Calculates the matrix of first mean passage times for an ergodic transition probability matrix. Parameters ---------- P : array (k, k), an ergodic Markov transition probability matrix. Returns ------- M : array (k, k), elements are the expected value for the number of intervals required for a chain starting in state i to first enter state j. If i=j then this is the recurrence time. Examples -------- >>> import numpy as np >>> from giddy.ergodic import fmpt >>> p=np.array([[.5, .25, .25],[.5,0,.5],[.25,.25,.5]]) >>> fm=fmpt(p) >>> fm array([[2.5 , 4. , 3.33333333], [2.66666667, 5. , 2.66666667], [3.33333333, 4. , 2.5 ]]) Thus, if it is raining today in Oz we can expect a nice day to come along in another 4 days, on average, and snow to hit in 3.33 days. We can expect another rainy day in 2.5 days. If it is nice today in Oz, we would experience a change in the weather (either rain or snow) in 2.67 days from today. (That wicked witch can only die once so I reckon that is the ultimate absorbing state). Notes ----- Uses formulation (and examples on p. 218) in :cite:`Kemeny1967`.
26,813
def x_build_targets_target(self, node):
    target_node = node
    # The XML tag names and the path separator were lost in extraction; the
    # values below are assumed from the Boost.Build log schema.
    name = self.get_child_data(target_node, tag='name', strip=True)
    path = self.get_child_data(target_node, tag='path', strip=True)
    jam_target = self.get_child_data(target_node, tag='jam-target', strip=True)
    self.target[jam_target] = {'name': name, 'path': path}
    dep_node = self.get_child(self.get_child(target_node, tag='dependencies'),
                              tag='dependency')
    while dep_node:
        child = self.get_data(dep_node, strip=True)
        # Rebase the dependency's jam target onto this target's path.
        child_jam_target = '%s//%s' % (path, child.split('//', 1)[1])
        self.parent[child_jam_target] = jam_target
        dep_node = self.get_sibling(dep_node.nextSibling, tag='dependency')
    return None
Process the target dependency DAG into an ancestry tree so we can look up which top-level library and test targets specific build actions correspond to.
26,814
def createproject(self, name, **kwargs):
    data = {'name': name}
    if kwargs:
        data.update(kwargs)
    request = requests.post(
        self.projects_url, headers=self.headers, data=data,
        verify=self.verify_ssl, auth=self.auth, timeout=self.timeout)
    if request.status_code == 201:
        return request.json()
    elif request.status_code == 403:
        # The original substring check was lost in extraction; it looked for a
        # project-limit message in the response body before printing it.
        if 'limit' in request.text:
            print(request.text)
        return False
    else:
        return False
Creates a new project owned by the authenticated user. :param name: new project name :param path: custom repository name for new project. By default generated based on name :param namespace_id: namespace for the new project (defaults to user) :param description: short project description :param issues_enabled: :param merge_requests_enabled: :param wiki_enabled: :param snippets_enabled: :param public: if true same as setting visibility_level = 20 :param visibility_level: :param sudo: :param import_url: :return:
26,815
def Shift(self, term):
    new = self.Copy()
    new.xs = [x + term for x in self.xs]
    return new
Adds a term to the xs. term: how much to add
26,816
def get_icon(name, aspix=False, asicon=False):
    datapath = os.path.join(ICON_PATH, name)
    # The package name literal was lost in extraction; __name__ is a stand-in.
    icon = pkg_resources.resource_filename(__name__, datapath)
    if aspix or asicon:
        icon = QtGui.QPixmap(icon)
        if asicon:
            icon = QtGui.QIcon(icon)
    return icon
Return the real file path to the given icon name If aspix is True return as QtGui.QPixmap, if asicon is True return as QtGui.QIcon. :param name: the name of the icon :type name: str :param aspix: If True, return a QtGui.QPixmap. :type aspix: bool :param asicon: If True, return a QtGui.QIcon. :type asicon: bool :returns: The real file path to the given icon name. If aspix is True return as QtGui.QPixmap, if asicon is True return as QtGui.QIcon. If both are True, a QtGui.QIcon is returned. :rtype: string :raises: None
26,817
def keywords(s, top=10, **kwargs):
    return parser.find_keywords(s, top=top, frequency=parser.frequency)
Returns a sorted list of keywords in the given string.
26,818
def _compute_count_availability(resource, status, previous_status):
    count_availability = resource.extras.get('check:count-availability', 1)
    return count_availability + 1 if status == previous_status else 1
Compute the `check:count-availability` extra value
26,819
def main():
    # Command-line flag literals below are restored from the docstring; other
    # stripped literals (default format/symbol, labels, output filename) are
    # assumed.
    fmt, plot = 'svg', 0
    col1, col2 = 0, 1
    sym, size = 'ro', 50
    xlab, ylab = '', ''
    lines = 0
    if '-h' in sys.argv:
        print(main.__doc__)
        sys.exit()
    if '-f' in sys.argv:
        ind = sys.argv.index('-f')
        file = sys.argv[ind + 1]
    if '-fmt' in sys.argv:
        ind = sys.argv.index('-fmt')
        fmt = sys.argv[ind + 1]
    if '-sav' in sys.argv:
        plot = 1
    if '-c' in sys.argv:
        ind = sys.argv.index('-c')
        col1 = int(sys.argv[ind + 1]) - 1
        col2 = int(sys.argv[ind + 2]) - 1
    if '-xsig' in sys.argv:
        ind = sys.argv.index('-xsig')
        col3 = int(sys.argv[ind + 1]) - 1
    if '-ysig' in sys.argv:
        ind = sys.argv.index('-ysig')
        col4 = int(sys.argv[ind + 1]) - 1
    if '-xlab' in sys.argv:
        ind = sys.argv.index('-xlab')
        xlab = sys.argv[ind + 1]
    if '-ylab' in sys.argv:
        ind = sys.argv.index('-ylab')
        ylab = sys.argv[ind + 1]
    if '-b' in sys.argv:
        ind = sys.argv.index('-b')
        xmin = float(sys.argv[ind + 1])
        xmax = float(sys.argv[ind + 2])
        ymin = float(sys.argv[ind + 3])
        ymax = float(sys.argv[ind + 4])
    if '-poly' in sys.argv:
        ind = sys.argv.index('-poly')
        degr = sys.argv[ind + 1]
    if '-sym' in sys.argv:
        ind = sys.argv.index('-sym')
        sym = sys.argv[ind + 1]
        size = int(sys.argv[ind + 2])
    if '-l' in sys.argv:
        lines = 1
    if '-S' in sys.argv:
        sym = ''
    skip = int(pmag.get_named_arg('-skip', default_val=0))
    X, Y = [], []
    Xerrs, Yerrs = [], []
    f = open(file, 'r')
    for num in range(skip):
        f.readline()
    data = f.readlines()
    for line in data:
        line = line.replace('\n', '').replace('\t', ' ')
        rec = line.split()
        X.append(float(rec[col1]))
        Y.append(float(rec[col2]))
        if '-xsig' in sys.argv:
            Xerrs.append(float(rec[col3]))
        if '-ysig' in sys.argv:
            Yerrs.append(float(rec[col4]))
    if '-poly' in sys.argv:
        coeffs = numpy.polyfit(X, Y, degr)
        correl = numpy.corrcoef(X, Y) ** 2
        polynomial = numpy.poly1d(coeffs)
        xs = numpy.linspace(numpy.min(X), numpy.max(X), 10)
        ys = polynomial(xs)
        pylab.plot(xs, ys)
        print(polynomial)
        if degr == '1':
            print('R^2 = %s' % (correl[0, 1]))
    if sym != '':
        pylab.scatter(X, Y, marker=sym[1], c=sym[0], s=size)
    else:
        pylab.plot(X, Y)
    if '-xsig' in sys.argv and '-ysig' in sys.argv:
        pylab.errorbar(X, Y, xerr=Xerrs, yerr=Yerrs, fmt=None)
    if '-xsig' in sys.argv and '-ysig' not in sys.argv:
        pylab.errorbar(X, Y, xerr=Xerrs, fmt=None)
    if '-xsig' not in sys.argv and '-ysig' in sys.argv:
        pylab.errorbar(X, Y, yerr=Yerrs, fmt=None)
    if xlab != '':
        pylab.xlabel(xlab)
    if ylab != '':
        pylab.ylabel(ylab)
    if lines == 1:
        pylab.plot(X, Y, 'k-')
    if '-b' in sys.argv:
        pylab.axis([xmin, xmax, ymin, ymax])
    if plot == 0:
        pylab.show()
    else:
        pylab.savefig('plotXY.' + fmt)
        print('Saved in:', 'plotXY.' + fmt)
        sys.exit()
NAME plotXY.py DESCRIPTION Makes simple X,Y plots INPUT FORMAT X,Y data in columns SYNTAX plotxy.py [command line options] OPTIONS -h prints this help message -f FILE to set file name on command line -c col1 col2 specify columns to plot -xsig col3 specify xsigma if desired -ysig col4 specify ysigma if desired -b xmin xmax ymin ymax, sets bounds -sym SYM SIZE specify symbol to plot: default is red dots, 10 pt -S don't plot the symbols -xlab XLAB -ylab YLAB -l connect symbols with lines -fmt [svg,png,pdf,eps] specify output format, default is svg -sav saves plot and quits -poly X plot a degree X polynomial through the data -skip n Number of lines to skip before reading in data
26,820
def from_json(json_str, allow_pickle=False):
    if six.PY3:
        if isinstance(json_str, bytes):
            json_str = json_str.decode()
    UtoolJSONEncoder = make_utool_json_encoder(allow_pickle)
    object_hook = UtoolJSONEncoder._json_object_hook
    val = json.loads(json_str, object_hook=object_hook)
    return val
Decodes a JSON object specified in the utool convention Args: json_str (str): allow_pickle (bool): (default = False) Returns: object: val CommandLine: python -m utool.util_cache from_json --show Example: >>> # ENABLE_DOCTEST >>> from utool.util_cache import * # NOQA >>> import utool as ut >>> json_str = 'just a normal string' >>> json_str = '["just a normal string"]' >>> allow_pickle = False >>> val = from_json(json_str, allow_pickle) >>> result = ('val = %s' % (ut.repr2(val),)) >>> print(result)
26,821
def diffplot(self, f, delay=1, lfilter=None, **kargs):
    if lfilter is None:
        lst_pkts = [f(self.res[i], self.res[i + 1])
                    for i in range(len(self.res) - delay)]
    else:
        lst_pkts = [f(self.res[i], self.res[i + 1])
                    for i in range(len(self.res) - delay)
                    if lfilter(self.res[i])]
    if kargs == {}:
        kargs = MATPLOTLIB_DEFAULT_PLOT_KARGS
    lines = plt.plot(lst_pkts, **kargs)
    if not MATPLOTLIB_INLINED:
        plt.show()
    return lines
diffplot(f, delay=1, lfilter=None) Applies a function to couples (l[i],l[i+delay]) A list of matplotlib.lines.Line2D is returned.
26,822
def count(start=0, step=1, *, interval=0):
    agen = from_iterable.raw(itertools.count(start, step))
    return time.spaceout.raw(agen, interval) if interval else agen
Generate consecutive numbers indefinitely. Optional starting point and increment can be defined, respectively defaulting to ``0`` and ``1``. An optional interval can be given to space the values out.
26,823
def all_coplanar(triangles):
    triangles = np.asanyarray(triangles, dtype=np.float64)
    if not util.is_shape(triangles, (-1, 3, 3)):
        raise ValueError('triangles must be (n, 3, 3)!')  # message assumed
    test_normal = normals(triangles)[0]
    test_vertex = triangles[0][0]
    distances = point_plane_distance(points=triangles[1:].reshape((-1, 3)),
                                     plane_normal=test_normal,
                                     plane_origin=test_vertex)
    all_coplanar = np.all(np.abs(distances) < tol.zero)
    return all_coplanar
Check to see if a list of triangles are all coplanar Parameters ---------------- triangles: (n, 3, 3) float Vertices of triangles Returns --------------- all_coplanar : bool True if all triangles are coplanar
26,824
def parse_query(query_str):
    # Several string literals and a few statements were lost in extraction;
    # they are restored on a best-effort basis and flagged below.
    def _generate_match_all_fields_query():
        # Strip colons so field-style tokens fall back to a plain text search.
        stripped_query_str = ' '.join(query_str.replace(':', ' ').split())
        return {'multi_match': {'query': stripped_query_str,
                                'fields': [],
                                'zero_terms_query': 'all'}}  # keys assumed

    if not isinstance(query_str, six.text_type):
        query_str = six.text_type(query_str.decode('utf-8'))
    logger.info('Parsing: "' + query_str + '".')
    parser = StatefulParser()
    rst_visitor = RestructuringVisitor()
    es_visitor = ElasticSearchVisitor()
    try:
        unrecognized_text, parse_tree = parser.parse(query_str, Query)
        if unrecognized_text:
            msg = 'Parser returned unrecognized text: "' + unrecognized_text + \
                  '" for query: "' + query_str + '".'
            if query_str == unrecognized_text and parse_tree is None:
                logger.warn(msg)
                return _generate_match_all_fields_query()
            else:
                logger.warn(msg + ' Continuing with the partial parse tree.')
    except Exception:
        # The original except clause was lost in extraction; falling back to a
        # match-all query mirrors the documented behaviour.
        return _generate_match_all_fields_query()
    # The visitor-application steps were lost in extraction; the assumed
    # restructure-then-generate sequence is implied by the visitors above.
    parse_tree = parse_tree.accept(rst_visitor)
    es_query = parse_tree.accept(es_visitor)
    if not es_query:
        return _generate_match_all_fields_query()
    return es_query
Drives the whole logic by parsing, restructuring and, finally, generating an ElasticSearch query. Args: query_str (six.text_type): the given query to be translated to an ElasticSearch query Returns: six.text_type: Return an ElasticSearch query. Notes: In case there's an error, an ElasticSearch `multi_match` query is generated with its `query` value being the query_str argument.
26,825
def find_longest_match(self, alo, ahi, blo, bhi):
    besti, bestj, bestsize = _cdifflib.find_longest_match(self, alo, ahi,
                                                          blo, bhi)
    return _Match(besti, bestj, bestsize)
Find longest matching block in a[alo:ahi] and b[blo:bhi]. Wrapper for the C implementation of this function.
26,826
def bind_super(self, opr): for path in self.routes: route = self.routes.get(path) route[].append(opr)
Grant all permissions to the super administrator.
26,827
def _get_resources_string(res_dict, pid):
    config_str = ""
    ignore_directives = ["container", "version"]
    for p, directives in res_dict.items():
        for d, val in directives.items():
            if d in ignore_directives:
                continue
            # The original format string was lost in extraction; a plain
            # 'process_pid.directive = value' line is assumed here.
            config_str += '\n{}_{}.{} = {}'.format(p, pid, d, val)
    return config_str
Returns the nextflow resources string from a dictionary object. If the dictionary has at least one of the resource directives, these will be compiled for each process in the dictionary and returned as a string ready for injection in the nextflow config file template. This dictionary should be:: dict = {"processA": {"cpus": 1, "memory": "4GB"}, "processB": {"cpus": 2}} Parameters ---------- res_dict : dict Dictionary with the resources for processes. pid : int Unique identifier of the process Returns ------- str nextflow config string
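For concreteness, a quick sketch of the dict-to-string behaviour described above, using the assumed line format from the code (container/version directives are skipped):

res = {"processA": {"cpus": 1, "memory": "4GB"},
       "processB": {"cpus": 2, "container": "img"}}
print(_get_resources_string(res, 1))
# processA_1.cpus = 1
# processA_1.memory = 4GB
# processB_1.cpus = 2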
26,828
def classifications(ctx, classifications, results, readlevel, readlevel_path):
    if not readlevel and not results:
        cli_resource_fetcher(ctx, "classifications", classifications)
    elif not readlevel and results:
        if len(classifications) != 1:
            log.error("Can only request results data on one Classification at a time")
        else:
            classification = ctx.obj["API"].Classifications.get(classifications[0])
            if not classification:
                log.error(
                    "Could not find classification {} (404 status code)".format(classifications[0])
                )
                return
            results = classification.results(json=True)
            pprint(results, ctx.obj["NOPPRINT"])
    elif readlevel is not None and not results:
        if len(classifications) != 1:
            log.error("Can only request read-level data on one Classification at a time")
        else:
            classification = ctx.obj["API"].Classifications.get(classifications[0])
            if not classification:
                log.error(
                    "Could not find classification {} (404 status code)".format(classifications[0])
                )
                return
            tsv_url = classification._readlevel()["url"]
            log.info("Downloading tsv data from: {}".format(tsv_url))
            download_file_helper(tsv_url, readlevel_path)
    else:
        log.error("Can only request one of read-level data or results data at a time")
Retrieve performed metagenomic classifications
26,829
def Run(self, unused_arg):
    reply = rdf_flows.GrrStatus(status=rdf_flows.GrrStatus.ReturnedStatus.OK)
    self.SendReply(reply, message_type=rdf_flows.GrrMessage.Type.STATUS)
    self.grr_worker.Sleep(10)
    logging.info("Dying on request.")
    os._exit(242)
Run the kill.
26,830
def get(self, idx, default=''):  # default literal assumed; lost in extraction
    if isinstance(idx, int) and (idx >= len(self) or idx < -1 * len(self)):
        return default
    return super().__getitem__(idx)
Returns the element at idx, or default if idx is beyond the length of the list
26,831
def register_sub_command(self, sub_command, additional_ids=[]):
    self.__register_sub_command(sub_command, sub_command.command_desc().command)
    self.__additional_ids.update(additional_ids)
    for id in additional_ids:
        self.__register_sub_command(sub_command, id)
Register a command as a subcommand. Its CommandDesc.command string is used as the id. Additional ids can be provided. Args: sub_command (CommandBase): Subcommand to register. additional_ids (List[str]): List of additional ids. Can be empty.
26,832
def river_sources(world, water_flow, water_path):
    river_source_list = []
    for y in range(0, world.height - 1):
        for x in range(0, world.width - 1):
            # The layer key was lost in extraction; 'precipitation' is assumed.
            rain_fall = world.layers['precipitation'].data[y, x]
            water_flow[y, x] = rain_fall
            if water_path[y, x] == 0:
                continue
            cx, cy = x, y
            neighbour_seed_found = False
            while not neighbour_seed_found:
                if world.is_mountain((cx, cy)) and water_flow[cy, cx] >= RIVER_TH:
                    for seed in river_source_list:
                        sx, sy = seed
                        if in_circle(9, cx, cy, sx, sy):
                            neighbour_seed_found = True
                    if neighbour_seed_found:
                        break
                    river_source_list.append([cx, cy])
                    break
                if water_path[cy, cx] == 0:
                    break
                dx, dy = DIR_NEIGHBORS_CENTER[water_path[cy, cx]]
                nx, ny = cx + dx, cy + dy
                water_flow[ny, nx] += rain_fall
                cx, cy = nx, ny
    return river_source_list
Find places on map where sources of river can be found
26,833
def precision(Ntp, Nsys, eps=numpy.spacing(1)):
    if Nsys == 0:
        return numpy.nan
    else:
        return float(Ntp / float(Nsys))
Precision. Wikipedia entry https://en.wikipedia.org/wiki/Precision_and_recall Parameters ---------- Ntp : int >=0 Number of true positives. Nsys : int >=0 Amount of system output. eps : float eps. Default value numpy.spacing(1) Returns ------- precision: float Precision
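A quick sanity check of the function above; note that the eps parameter is accepted but unused in the body as shown:

print(precision(Ntp=8, Nsys=10))  # 0.8
print(precision(Ntp=0, Nsys=0))   # nan: no system output at all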
26,834
def get_class_alias(klass):
    for k, v in pyamf.ALIAS_TYPES.iteritems():
        for kl in v:
            try:
                if issubclass(klass, kl):
                    return k
            except TypeError:
                # kl may be a callable predicate rather than a class; the
                # attribute name checked here was lost in extraction and
                # '__call__' is assumed.
                if hasattr(kl, '__call__'):
                    if kl(klass) is True:
                        return k
Tries to find a suitable L{pyamf.ClassAlias} subclass for C{klass}.
26,835
def get_next_url(request, redirect_field_name):
    next_url = request.GET.get(redirect_field_name)
    if next_url:
        # The settings names and kwarg keys were lost in extraction; the names
        # below follow the usual is_safe_url() signature and OIDC settings.
        kwargs = {
            'url': next_url,
            'require_https': import_from_settings(
                'OIDC_REDIRECT_REQUIRE_HTTPS', request.is_secure())
        }
        hosts = list(import_from_settings('OIDC_REDIRECT_ALLOWED_HOSTS', []))
        hosts.append(request.get_host())
        kwargs['allowed_hosts'] = hosts
        is_safe = is_safe_url(**kwargs)
        if is_safe:
            return next_url
    return None
Retrieves next url from request Note: This verifies that the url is safe before returning it. If the url is not safe, this returns None. :arg HttpRequest request: the http request :arg str redirect_field_name: the name of the field holding the next url :returns: safe url or None
26,836
def xyz_with_ports(self, arrnx3):
    if not self.children:
        if not arrnx3.shape[0] == 1:
            # Error message assumed; the original literal was lost in extraction.
            raise ValueError(
                'Trying to set the position of {} with more than one '
                'coordinate: {}'.format(self, arrnx3))
        self.pos = np.squeeze(arrnx3)
    else:
        for atom, coords in zip(
                self._particles(include_ports=True), arrnx3):
            atom.pos = coords
Set the positions of the particles in the Compound, including the Ports. Parameters ---------- arrnx3 : np.ndarray, shape=(n,3), dtype=float The new particle positions
26,837
def post_grade2(self, grade, user=None, comment=''):
    # The stripped content type and HTTP method are assumed to be the LTI 2.0
    # Result media type and PUT.
    content_type = 'application/vnd.ims.lis.v2.result+json'
    if user is None:
        user = self.user_id
    lti2_url = self.response_url.replace(
        "/grade_handler",
        "/lti_2_0_result_rest_handler/user/{}".format(user))
    score = float(grade)
    if 0 <= score <= 1.0:
        body = json.dumps({
            "@context": "http://purl.imsglobal.org/ctx/lis/v2/Result",
            "@type": "Result",
            "resultScore": score,
            "comment": comment
        })
        ret = post_message2(self._consumers(), self.key, lti2_url, body,
                            method='PUT', content_type=content_type)
        if not ret:
            raise LTIPostMessageException("Post Message Failed")
        return True
    return False
Post grade to LTI consumer using REST/JSON. URL munging is related to: https://openedx.atlassian.net/browse/PLAT-281 :param: grade: 0 <= grade <= 1 :return: True if post successful and grade valid :exception: LTIPostMessageException if call failed
26,838
def _configure_manager(self):
    self._manager = CloudNetworkManager(self, resource_class=CloudNetwork,
                                        response_key="network",
                                        uri_base="os-networksv2")
Creates the Manager instance to handle networks.
26,839
def info_label(self, indicator):
    if indicator in xrange(1, 9):
        self.id = indicator
        self.setPixmap(QtGui.QPixmap(NUMBER_PATHS[indicator]).scaled(
            self.field_width, self.field_height))
    elif indicator == 0:
        self.id = 0
        self.setPixmap(QtGui.QPixmap(NUMBER_PATHS[0]).scaled(
            self.field_width, self.field_height))
    elif indicator == 12:
        self.id = 12
        self.setPixmap(QtGui.QPixmap(BOOM_PATH).scaled(
            self.field_width, self.field_height))
        self.setStyleSheet("QLabel {background-color: black;}")
    elif indicator == 9:
        self.id = 9
        self.setPixmap(QtGui.QPixmap(FLAG_PATH).scaled(
            self.field_width, self.field_height))
        # The flag colour literal was cut off in extraction; red is assumed.
        self.setStyleSheet("QLabel {background-color: red;}")
    elif indicator == 10:
        self.id = 10
        self.setPixmap(QtGui.QPixmap(QUESTION_PATH).scaled(
            self.field_width, self.field_height))
        self.setStyleSheet("QLabel {background-color: yellow;}")
    elif indicator == 11:
        self.id = 11
        self.setPixmap(QtGui.QPixmap(EMPTY_PATH).scaled(
            self.field_width * 3, self.field_height * 3))
        self.setStyleSheet('')
Set info label by given settings. Parameters ---------- indicator : int A number where 0-8 is the number of mines in the surrounding fields. 12 is a mine field.
26,840
def to_digital(d, num):
    if not isinstance(num, int) or not 1 < num < 10:
        # Error message assumed; the original literal was lost in extraction.
        raise ValueError('num must be an integer base between 2 and 9')
    d = int(d)
    result = []
    x = d % num
    d = d - x
    result.append(str(x))
    while d > 0:
        d = d // num
        x = d % num
        d = d - x
        result.append(str(x))
    return ''.join(result[::-1])
Base conversion: convert a number from decimal to the given base. :param d: decimal number :param num: target base (2-9) :return: string of digits in the target base
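A couple of spot checks of the conversion above (bases outside 2-9 are rejected by the guard):

print(to_digital(255, 2))  # '11111111'
print(to_digital(64, 8))   # '100'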
26,841
def find_doc(self, name=None, ns_uri=None, first_only=False):
    return self.document.find(name=name, ns_uri=ns_uri,
                              first_only=first_only)
Find :class:`Element` node descendants of the document containing this node, with optional constraints to limit the results. Delegates to :meth:`find` applied to this node's owning document.
26,842
def is_first_root(self):
    if self.parent:
        return False
    if self._is_first_root is not None:
        return self._is_first_root
    # The cache key and values() field literals were lost in extraction;
    # 'first_root_id' and 'id' are assumed.
    first_root_id = cache.get('first_root_id')
    if first_root_id is not None:
        self._is_first_root = first_root_id == self.id
        return self._is_first_root
    try:
        first_root_id = Page.objects.root().values('id')[0]['id']
    except IndexError:
        first_root_id = None
    if first_root_id is not None:
        cache.set('first_root_id', first_root_id)
        self._is_first_root = self.id == first_root_id
    return self._is_first_root
Return ``True`` if this page is the first root page.
26,843
def revoke_all(self, paths: Union[str, Iterable[str]], recursive: bool=False):
    ...
See `AccessControlMapper.revoke_all`. :param paths: see `AccessControlMapper.revoke_all` :param access_controls: see `AccessControlMapper.revoke_all` :param recursive: whether the access control list should be changed recursively for all nested collections
26,844
def append_note(self, player, text):
    note = self._find_note(player)
    note.text += text
Append text to an already existing note.
26,845
def init_app(self, app):
    if not hasattr(app, 'extensions'):
        app.extensions = {}
    # The extension registry key and config key literals were lost in
    # extraction; plausible values are assumed below.
    app.extensions['jwt'] = self
    self._set_default_configuration_options(app)
    self._set_error_handler_callbacks(app)
    app.config['PROPAGATE_EXCEPTIONS'] = True
Register this extension with the flask app :param app: A flask application
26,846
def filter_any_above_threshold(
        self,
        multi_key_fn,
        value_dict,
        threshold,
        default_value=0.0):
    def filter_fn(x):
        for key in multi_key_fn(x):
            value = value_dict.get(key, default_value)
            if value > threshold:
                return True
        return False
    return self.filter(filter_fn)
Like filter_above_threshold but `multi_key_fn` returns multiple keys and the element is kept if any of them have a value above the given threshold. Parameters ---------- multi_key_fn : callable Given an element of this collection, returns multiple keys into `value_dict` value_dict : dict Dict from keys returned by `multi_key_fn` to float values threshold : float Only keep elements whose value in `value_dict` is above this threshold. default_value : float Value to use for elements whose key is not in `value_dict`
26,847
def get_randomized_guid_sample(self, item_count):
    dataset = self.get_whitelist()
    random.shuffle(dataset)
    return dataset[:item_count]
Fetch a subset of randomized GUIDs from the whitelist.
26,848
def apex(self, axis):
    from blmath.geometry.apex import apex
    return apex(self.v, axis)
Find the most extreme vertex in the direction of the axis provided. axis: A vector, which is a 3x1 np.array.
26,849
def categorization(self, domains, labels=False):
    if type(domains) is str:
        return self._get_categorization(domains, labels)
    elif type(domains) is list:
        return self._post_categorization(domains, labels)
    else:
        raise Investigate.DOMAIN_ERR
Get the domain status and categorization of a domain or list of domains. 'domains' can be either a single domain, or a list of domains. Setting 'labels' to True will give back categorizations in human-readable form. For more detail, see https://investigate.umbrella.com/docs/api#categorization
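A hypothetical usage sketch of the method above; the constructor shown is an assumption about the surrounding client class:

inv = Investigate('your-api-key')               # hypothetical client setup
inv.categorization('example.com', labels=True)  # single domain -> GET
inv.categorization(['a.com', 'b.org'])          # list of domains -> POST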
26,850
def set_pid_params(self, *args, **kwargs):
    for joint in self.joints:
        joint.target_angles = [None] * joint.ADOF
        joint.controllers = [pid(*args, **kwargs) for i in range(joint.ADOF)]
Set PID parameters for all joints in the skeleton. Parameters for this method are passed directly to the `pid` constructor.
26,851
def datetime(self, start: int = 2000, end: int = 2035,
             timezone: Optional[str] = None) -> DateTime:
    datetime_obj = datetime.combine(
        date=self.date(start, end),
        time=self.time(),
    )
    if timezone:
        if not pytz:
            # Error message assumed; the original literal was lost in extraction.
            raise ImportError('The pytz package is required for timezone support.')
        tz = pytz.timezone(timezone)
        datetime_obj = tz.localize(datetime_obj)
    return datetime_obj
Generate random datetime. :param start: Minimum value of year. :param end: Maximum value of year. :param timezone: Set custom timezone (pytz required). :return: Datetime
26,852
def _default_buffer_pos_changed(self, _):
    if self.app.current_buffer == self.default_buffer:
        try:
            line_no = self.default_buffer.document.cursor_position_row - \
                self.history_mapping.result_line_offset
            if line_no < 0:
                raise IndexError
            history_lineno = sorted(
                self.history_mapping.selected_lines)[line_no]
        except IndexError:
            pass
        else:
            self.history_buffer.cursor_position = \
                self.history_buffer.document.translate_row_col_to_index(
                    history_lineno, 0)
When the cursor changes in the default buffer. Synchronize with history buffer.
26,853
def create_label(self, name, justify=Gtk.Justification.CENTER,
                 wrap_mode=True, tooltip=None):
    label = Gtk.Label()
    # The replace() literals were lost in extraction; escaping ampersands for
    # markup is assumed here.
    name = name.replace('&', '&amp;')
    label.set_markup(name)
    label.set_justify(justify)
    label.set_line_wrap(wrap_mode)
    if tooltip is not None:
        label.set_has_tooltip(True)
        label.connect("query-tooltip", self.parent.tooltip_queries, tooltip)
    return label
The function is used for creating lable with HTML text
26,854
def get_related_models(cls, model):
    from waldur_core.structure.models import ServiceSettings
    if isinstance(model, ServiceSettings):
        # The registry key for the model-name lookup was lost in extraction.
        model_str = cls._registry.get(model.type, {}).get('model_name', '')
    else:
        model_str = cls._get_model_str(model)
    for models in cls.get_service_models().values():
        if model_str == cls._get_model_str(models['service']) or \
                model_str == cls._get_model_str(models['service_project_link']):
            return models
        for resource_model in models['resources']:
            if model_str == cls._get_model_str(resource_model):
                return models
Get a dictionary with related structure models for given class or model: >> SupportedServices.get_related_models(gitlab_models.Project) { 'service': nodeconductor_gitlab.models.GitLabService, 'service_project_link': nodeconductor_gitlab.models.GitLabServiceProjectLink, 'resources': [ nodeconductor_gitlab.models.Group, nodeconductor_gitlab.models.Project, ] }
26,855
def update(self, **kwargs):
    for k in self.prior_params:
        try:
            self.params[k] = kwargs[self.alias[k]]
        except KeyError:
            pass
Update `params` values using alias.
26,856
def set_source_morphology(self, name, **kwargs):
    name = self.roi.get_source_by_name(name).name
    src = self.roi[name]
    # Keyword and attribute names below were lost in extraction and are
    # restored from the docstring and the local variable names.
    spatial_model = kwargs.get('spatial_model', src['SpatialModel'])
    spatial_pars = kwargs.get('spatial_pars', {})
    use_pylike = kwargs.get('use_pylike', True)
    psf_scale_fn = kwargs.get('psf_scale_fn', None)
    update_source = kwargs.get('update_source', False)
    if hasattr(pyLike.BinnedLikelihood, 'setSourceMapImage') and not use_pylike:
        src.set_spatial_model(spatial_model, spatial_pars)
        self._update_srcmap(src.name, src, psf_scale_fn=psf_scale_fn)
    else:
        src = self.delete_source(name, loglevel=logging.DEBUG,
                                 save_template=False)
        src.set_spatial_model(spatial_model, spatial_pars)
        self.add_source(src.name, src, init_source=False,
                        use_pylike=use_pylike, loglevel=logging.DEBUG)
    if update_source:
        self.update_source(name)
Set the spatial model of a source. Parameters ---------- name : str Source name. spatial_model : str Spatial model name (PointSource, RadialGaussian, etc.). spatial_pars : dict Dictionary of spatial parameters (optional). use_cache : bool Generate the spatial model by interpolating the cached source map. use_pylike : bool
26,857
def disconnect(self, code):
    Subscriber.objects.filter(session_id=self.session_id).delete()
Called when WebSocket connection is closed.
26,858
def page_not_found(request, template_name='404.html'):
    # The default template literal and the third positional argument were lost
    # in extraction; '404.html' (per the docstring) is assumed for both.
    rendered_page = get_response_page(
        request,
        http.HttpResponseNotFound,
        template_name,
        abstract_models.RESPONSE_HTTP404
    )
    if rendered_page is None:
        return defaults.page_not_found(request, template_name)
    return rendered_page
Custom page not found (404) handler. Don't raise a Http404 or anything like that in here otherwise you will cause an infinite loop. That would be bad. If no ResponsePage exists for with type ``RESPONSE_HTTP404`` then the default template render view will be used. Templates: :template:`404.html` Context: request_path The path of the requested URL (e.g., '/app/pages/bad_page/') page A ResponsePage with type ``RESPONSE_HTTP404`` if it exists.
26,859
def prepare_hooks(self, hooks):
    hooks = hooks or []
    for event in hooks:
        self.register_hook(event, hooks[event])
Prepares the given hooks.
26,860
def _getInputValue(self, obj, fieldName):
    if isinstance(obj, dict):
        if fieldName not in obj:
            knownFields = ", ".join(
                key for key in obj.keys() if not key.startswith("_")
            )
            raise ValueError(
                "Unknown field name '%s' in input record. Known fields are '%s'.\n"
                "This could be because input headers are mislabeled, or because "
                "input data rows do not contain a value for '%s'." % (
                    fieldName, knownFields, fieldName
                )
            )
        return obj[fieldName]
    else:
        return getattr(obj, fieldName)
Gets the value of a given field from the input record
26,861
def getLogLevelNo(level):
    if isinstance(level, (int, long)):
        return level
    try:
        return int(logging.getLevelName(level.upper()))
    except Exception:
        # Error message assumed; the original literal was lost in extraction.
        raise ValueError('invalid log level %s' % level)
Return numerical log level or raise ValueError. A valid level is either an integer or a string such as WARNING etc.
26,862
def _write(self, request):
    with sw("serialize_request"):
        request_str = request.SerializeToString()
    with sw("write_request"):
        with catch_websocket_connection_errors():
            self._sock.send(request_str)
Actually serialize and write the request.
26,863
def propagate_cols_up(self, cols, target_df_name, source_df_name):
    print("-I- Trying to propagate {} columns from {} table into {} table".format(
        cols, source_df_name, target_df_name))
    if target_df_name not in self.tables:
        self.add_magic_table(target_df_name)
    if target_df_name not in self.tables:
        print("-W- Couldn't read in {} table".format(source_df_name))
        return
    target_df = self.tables[target_df_name]
    source_df = self.tables[source_df_name]
    target_name = target_df_name[:-1]
    for col in cols:
        if col not in source_df.df.columns:
            source_df.df[col] = None
    target_df.front_and_backfill(cols)
    if target_name not in source_df.df.columns:
        print("-W- You can't propagate from {} to {}".format(
            source_df_name, target_df_name))
        return target_df

    def func(group, col_name):
        # Compile unique, capitalized values into a colon-delimited list.
        lst = group[col_name][group[col_name].notnull()].unique()
        split_lst = [col.split(':') for col in lst if col]
        sorted_lst = sorted(np.unique(
            [item.capitalize() for sublist in split_lst for item in sublist]))
        group_col = ":".join(sorted_lst)
        return group_col

    # The groupby statement was lost in extraction; grouping the source table
    # by the target name column is assumed here.
    grouped = source_df.df.groupby(source_df.df[target_name])
    for col in cols:
        res = grouped.apply(func, col)
        # The temporary column prefix was lost in extraction; 'new_' is assumed.
        target_df.df['new_' + col] = res
        target_df.df[col] = np.where(target_df.df[col],
                                     target_df.df[col],
                                     target_df.df['new_' + col])
        target_df.df.drop(['new_' + col], axis='columns', inplace=True)
    self.tables[target_df_name] = target_df
    return target_df
Take values from source table, compile them into a colon-delimited list, and apply them to the target table. This method won't overwrite values in the target table, it will only supply values where they are missing. Parameters ---------- cols : list-like list of columns to propagate target_df_name : str name of table to propagate values into source_df_name: name of table to propagate values from Returns --------- target_df : MagicDataFrame updated MagicDataFrame with propagated values
26,864
def delete_project(self, owner, id, **kwargs):
    # The kwarg keys below follow the usual swagger-codegen pattern; the
    # original literals were lost in extraction.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        return self.delete_project_with_http_info(owner, id, **kwargs)
    else:
        (data) = self.delete_project_with_http_info(owner, id, **kwargs)
        return data
Delete a project Permanently deletes a project and all data associated with it. This operation cannot be undone, although a new project may be created with the same id. This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.delete_project(owner, id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param str owner: User name and unique identifier of the creator of a project. For example, in the URL: [https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), government is the unique identifier of the owner. (required) :param str id: Project unique identifier. For example, in the URL:[https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs](https://data.world/government/how-to-add-depth-to-your-data-with-the-us-census-acs), how-to-add-depth-to-your-data-with-the-us-census-acs is the unique identifier of the project. (required) :return: SuccessMessage If the method is called asynchronously, returns the request thread.
26,865
def bulkMinuteBars(symbol, dates, token='', version=''):
    _raiseIfNotStr(symbol)
    dates = [_strOrDate(date) for date in dates]
    list_orig = dates.__class__
    args = []
    for date in dates:
        # The chart timeframe literal was lost in extraction; '1d' (one day of
        # minute bars per date) is assumed.
        args.append((symbol, '1d', date, token, version))
    pool = ThreadPool(20)
    rets = pool.starmap(chart, args)
    pool.close()
    return list_orig(itertools.chain(*rets))
fetch many dates worth of minute-bars for a given symbol
26,866
def get_file(self, filename):
    try:
        return self.zip.read(filename)
    except KeyError:
        raise FileNotPresent(filename)
Return the raw data of the specified filename inside the APK :rtype: bytes
26,867
def convert_uv(pinyin):
    return UV_RE.sub(
        lambda m: ''.join((m.group(1), UV_MAP[m.group(2)], m.group(3))),
        pinyin)
ü conversion: restore the original final. When a final in the ü row is combined with the initials j, q, or x, it is written as ju (居), qu (区), xu (虚), and the two dots over ü are omitted; but when combined with the initials n or l, it is still written as nü (女), lü (吕).
26,868
def has_own_property(self, attr):
    try:
        object.__getattribute__(self, attr)
    except AttributeError:
        return False
    else:
        return True
Return True if the given property is defined on the object, False otherwise.
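A small illustrative sketch; the Node class is hypothetical, and note that object.__getattribute__ performs a full lookup, so class-level attributes count as well:

class Node:
    has_own_property = has_own_property  # borrow the method for this sketch

n = Node()
n.color = 'red'
print(n.has_own_property('color'))   # True
print(n.has_own_property('weight'))  # False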
26,869
def resolve_object_number(self, ref):
    if not isinstance(ref, ObjectNumber):
        on = ObjectNumber.parse(ref)
    else:
        on = ref
    ds_on = on.as_dataset
    return ds_on
Resolve a variety of object numbers to a dataset number.
26,870
def count_missense_per_gene(lines):
    counts = {}
    for x in lines:
        x = x.split("\t")
        gene = x[0]
        consequence = x[3]
        if gene not in counts:
            counts[gene] = 0
        if consequence != "missense_variant":
            continue
        counts[gene] += 1
    return counts
count the number of missense variants in each gene.
26,871
def describe_constructor(self, s):
    method = self.signatures.get(b"__constructor__")
    if not method:
        m = AbiMethod({"type": "constructor", "name": "",
                       "inputs": [], "outputs": []})
        return m
    types_def = method["inputs"]
    types = [t["type"] for t in types_def]
    names = [t["name"] for t in types_def]
    if not len(s):
        values = len(types) * ["<nA>"]
    else:
        values = decode_abi(types, s)
    method.inputs = [{"type": t, "name": n, "data": v}
                     for t, n, v in list(zip(types, names, values))]
    return method
Describe the input bytesequence (constructor arguments) s based on the loaded contract abi definition :param s: bytes constructor arguments :return: AbiMethod instance
26,872
def head(self, n=5):
    if n >= len(self.index):
        return self.copy()
    return self.__constructor__(query_compiler=self._query_compiler.head(n))
Get the first n rows of the DataFrame. Args: n (int): The number of rows to return. Returns: A new DataFrame with the first n rows of the DataFrame.
26,873
def lineMatchingPattern(pattern, lines):
    for line in lines:
        m = pattern.match(line)
        if m:
            return m
    else:
        return None
Searches through the specified list of strings and returns the regular expression match for the first line that matches the specified pre-compiled regex pattern, or None if no match was found Note: if you are using a regex pattern string (i.e. not already compiled), use lineMatching() instead :type pattern: Compiled regular expression pattern to use :type lines: List of lines to search :return: the regular expression match for the first line that matches the specified regex, or None if no match was found :rtype: re.Match
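A short usage sketch of the function above with a pre-compiled pattern:

import re

pattern = re.compile(r'ERROR: (\w+)')
m = lineMatchingPattern(pattern, ['ok', 'ERROR: disk', 'ERROR: net'])
if m:
    print(m.group(1))  # 'disk' -- the first matching line wins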
26,874
def create_tables(self, tables):
    cursor = self.get_cursor()
    for table in tables:
        columns = mslookup_tables[table]
        try:
            # The SQL template string was lost in extraction; a plain CREATE
            # TABLE statement is assumed.
            cursor.execute('CREATE TABLE {0}({1})'.format(
                table, ', '.join(columns)))
        except sqlite3.OperationalError as error:
            print(error)
            print('Skipping creation of table {0}'.format(table))
        else:
            self.conn.commit()
Creates database tables in sqlite lookup db
26,875
def invoke(*args, **kwargs):
    self, callback = args[:2]
    ctx = self
    for param in other_cmd.params:
        if param.name not in kwargs and param.expose_value:
            kwargs[param.name] = param.get_default(ctx)
    args = args[2:]
    with augment_usage_errors(self):
        with ctx:
            return callback(*args, **kwargs)
Invokes a command callback in exactly the way it expects. There are two ways to invoke this method: 1. the first argument can be a callback and all other arguments and keyword arguments are forwarded directly to the function. 2. the first argument is a click command object. In that case all arguments are forwarded as well but proper click parameters (options and click arguments) must be keyword arguments and Click will fill in defaults. Note that before Click 3.2 keyword arguments were not properly filled in against the intention of this code and no context was created. For more information about this change and why it was done in a bugfix release see :ref:`upgrade-to-3.2`.
26,876
def _command_line(): if __name__ == "PyFunceble": initiate(autoreset=True) load_config(True) try: try: PARSER = argparse.ArgumentParser( epilog="Crafted with %s by %s" % ( Fore.RED + "♥" + Fore.RESET, Style.BRIGHT + Fore.CYAN + "Nissar Chababy (Funilrys) " + Style.RESET_ALL + "with the help of " + Style.BRIGHT + Fore.GREEN + "https://pyfunceble.rtfd.io/en/master/contributors.html " + Style.RESET_ALL + "&& " + Style.BRIGHT + Fore.GREEN + "https://pyfunceble.rtfd.io/en/master/special-thanks.html", ), add_help=False, ) CURRENT_VALUE_FORMAT = ( Fore.YELLOW + Style.BRIGHT + "Configured value: " + Fore.BLUE ) PARSER.add_argument( "-ad", "--adblock", action="store_true", help="Switch the decoding of the adblock format. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["adblock"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-a", "--all", action="store_false", help="Output all available information on the screen. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["less"]) + Style.RESET_ALL ), ) PARSER.add_argument( "" "-c", "--auto-continue", "--continue", action="store_true", help="Switch the value of the auto continue mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["auto_continue"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--autosave-minutes", type=int, help="Update the minimum of minutes before we start " "committing to upstream under Travis CI. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_minutes"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--clean", action="store_true", help="Clean all files under output." ) PARSER.add_argument( "--clean-all", action="store_true", help="Clean all files under output and all file generated by PyFunceble.", ) PARSER.add_argument( "--cmd", type=str, help="Pass a command to run before each commit " "(except the final one) under the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["command_before_end"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--cmd-before-end", type=str, help="Pass a command to run before the results " "(final) commit under the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["command_before_end"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--commit-autosave-message", type=str, help="Replace the default autosave commit message. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_commit"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--commit-results-message", type=str, help="Replace the default results (final) commit message. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_autosave_final_commit"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-d", "--domain", type=str, help="Set and test the given domain." ) PARSER.add_argument( "-db", "--database", action="store_true", help="Switch the value of the usage of a database to store " "inactive domains of the currently tested list. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["inactive_database"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-dbr", "--days-between-db-retest", type=int, help="Set the numbers of days between each retest of domains present " "into inactive-db.json. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["days_between_db_retest"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--debug", action="store_true", help="Switch the value of the debug mode. 
%s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["debug"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--directory-structure", action="store_true", help="Generate the directory and files that are needed and which does " "not exist in the current directory.", ) PARSER.add_argument( "-ex", "--execution", action="store_true", help="Switch the default value of the execution time showing. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["show_execution_time"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-f", "--file", type=str, help="Read the given file and test all domains inside it. " "If a URL is given we download and test the content of the given URL.", ) PARSER.add_argument( "--filter", type=str, help="Domain to filter (regex)." ) PARSER.add_argument( "--help", action="help", default=argparse.SUPPRESS, help="Show this help message and exit.", ) PARSER.add_argument( "--hierarchical", action="store_true", help="Switch the value of the hierarchical sorting of the tested file. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["hierarchical_sorting"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-h", "--host", action="store_true", help="Switch the value of the generation of hosts file. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["generate_hosts"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--http", action="store_true", help="Switch the value of the usage of HTTP code. %s" % ( CURRENT_VALUE_FORMAT + repr(HTTP_CODE["active"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--iana", action="store_true", help="Update/Generate `iana-domains-db.json`.", ) PARSER.add_argument( "--idna", action="store_true", help="Switch the value of the IDNA conversion. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["idna_conversion"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-ip", type=str, help="Change the IP to print in the hosts files with the given one. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["custom_ip"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--json", action="store_true", help="Switch the value of the generation " "of the JSON formatted list of domains. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["generate_json"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--less", action="store_true", help="Output less informations on screen. %s" % ( CURRENT_VALUE_FORMAT + repr(Core.switch("less")) + Style.RESET_ALL ), ) PARSER.add_argument( "--local", action="store_true", help="Switch the value of the local network testing. %s" % ( CURRENT_VALUE_FORMAT + repr(Core.switch("local")) + Style.RESET_ALL ), ) PARSER.add_argument( "--link", type=str, help="Download and test the given file." ) PARSER.add_argument( "-m", "--mining", action="store_true", help="Switch the value of the mining subsystem usage. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["mining"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-n", "--no-files", action="store_true", help="Switch the value of the production of output files. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_files"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nl", "--no-logs", action="store_true", help="Switch the value of the production of logs files " "in the case we encounter some errors. %s" % ( CURRENT_VALUE_FORMAT + repr(not CONFIGURATION["logs"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-ns", "--no-special", action="store_true", help="Switch the value of the usage of the SPECIAL rules. 
%s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_special"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nu", "--no-unified", action="store_true", help="Switch the value of the production unified logs " "under the output directory. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["unified"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-nw", "--no-whois", action="store_true", help="Switch the value the usage of whois to test domain's status. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["no_whois"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-p", "--percentage", action="store_true", help="Switch the value of the percentage output mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["show_percentage"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--plain", action="store_true", help="Switch the value of the generation " "of the plain list of domains. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["plain_list_domain"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--production", action="store_true", help="Prepare the repository for production.", ) PARSER.add_argument( "-psl", "--public-suffix", action="store_true", help="Update/Generate `public-suffix.json`.", ) PARSER.add_argument( "-q", "--quiet", action="store_true", help="Run the script in quiet mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["quiet"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--share-logs", action="store_true", help="Switch the value of the sharing of logs. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["share_logs"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-s", "--simple", action="store_true", help="Switch the value of the simple output mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["simple"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--split", action="store_true", help="Switch the value of the split of the generated output files. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["inactive_database"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--syntax", action="store_true", help="Switch the value of the syntax test mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["syntax"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-t", "--timeout", type=int, default=3, help="Switch the value of the timeout. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["seconds_before_http_timeout"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--travis", action="store_true", help="Switch the value of the Travis mode. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis"]) + Style.RESET_ALL ), ) PARSER.add_argument( "--travis-branch", type=str, default="master", help="Switch the branch name where we are going to push. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["travis_branch"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-u", "--url", type=str, help="Analyze the given URL." ) PARSER.add_argument( "-uf", "--url-file", type=str, help="Read and test the list of URL of the given file. " "If a URL is given we download and test the content of the given URL.", ) PARSER.add_argument( "-ua", "--user-agent", type=str, help="Set the user-agent to use and set every time we " "interact with everything which is not our logs sharing system.", ) PARSER.add_argument( "-v", "--version", help="Show the version of PyFunceble and exit.", action="version", version="%(prog)s " + VERSION, ) PARSER.add_argument( "-vsc", "--verify-ssl-certificate", action="store_true", help="Switch the value of the verification of the " "SSL/TLS certificate when testing for URL. 
%s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["verify_ssl_certificate"]) + Style.RESET_ALL ), ) PARSER.add_argument( "-wdb", "--whois-database", action="store_true", help="Switch the value of the usage of a database to store " "whois data in order to avoid whois servers rate limit. %s" % ( CURRENT_VALUE_FORMAT + repr(CONFIGURATION["whois_database"]) + Style.RESET_ALL ), ) ARGS = PARSER.parse_args() if ARGS.less: CONFIGURATION.update({"less": ARGS.less}) elif not ARGS.all: CONFIGURATION.update({"less": ARGS.all}) if ARGS.adblock: CONFIGURATION.update({"adblock": Core.switch("adblock")}) if ARGS.auto_continue: CONFIGURATION.update( {"auto_continue": Core.switch("auto_continue")} ) if ARGS.autosave_minutes: CONFIGURATION.update( {"travis_autosave_minutes": ARGS.autosave_minutes} ) if ARGS.clean: Clean(None) if ARGS.clean_all: Clean(None, ARGS.clean_all) if ARGS.cmd: CONFIGURATION.update({"command": ARGS.cmd}) if ARGS.cmd_before_end: CONFIGURATION.update({"command_before_end": ARGS.cmd_before_end}) if ARGS.commit_autosave_message: CONFIGURATION.update( {"travis_autosave_commit": ARGS.commit_autosave_message} ) if ARGS.commit_results_message: CONFIGURATION.update( {"travis_autosave_final_commit": ARGS.commit_results_message} ) if ARGS.database: CONFIGURATION.update( {"inactive_database": Core.switch("inactive_database")} ) if ARGS.days_between_db_retest: CONFIGURATION.update( {"days_between_db_retest": ARGS.days_between_db_retest} ) if ARGS.debug: CONFIGURATION.update({"debug": Core.switch("debug")}) if ARGS.directory_structure: DirectoryStructure() if ARGS.execution: CONFIGURATION.update( {"show_execution_time": Core.switch("show_execution_time")} ) if ARGS.filter: CONFIGURATION.update({"filter": ARGS.filter}) if ARGS.hierarchical: CONFIGURATION.update( {"hierarchical_sorting": Core.switch("hierarchical_sorting")} ) if ARGS.host: CONFIGURATION.update( {"generate_hosts": Core.switch("generate_hosts")} ) if ARGS.http: HTTP_CODE.update({"active": Core.switch(HTTP_CODE["active"], True)}) if ARGS.iana: IANA().update() if ARGS.idna: CONFIGURATION.update( {"idna_conversion": Core.switch("idna_conversion")} ) if ARGS.ip: CONFIGURATION.update({"custom_ip": ARGS.ip}) if ARGS.json: CONFIGURATION.update( {"generate_json": Core.switch("generate_json")} ) if ARGS.local: CONFIGURATION.update({"local": Core.switch("local")}) if ARGS.mining: CONFIGURATION.update({"mining": Core.switch("mining")}) if ARGS.no_files: CONFIGURATION.update({"no_files": Core.switch("no_files")}) if ARGS.no_logs: CONFIGURATION.update({"logs": Core.switch("logs")}) if ARGS.no_special: CONFIGURATION.update({"no_special": Core.switch("no_special")}) if ARGS.no_unified: CONFIGURATION.update({"unified": Core.switch("unified")}) if ARGS.no_whois: CONFIGURATION.update({"no_whois": Core.switch("no_whois")}) if ARGS.percentage: CONFIGURATION.update( {"show_percentage": Core.switch("show_percentage")} ) if ARGS.plain: CONFIGURATION.update( {"plain_list_domain": Core.switch("plain_list_domain")} ) if ARGS.production: Production() if ARGS.public_suffix: PublicSuffix().update() if ARGS.quiet: CONFIGURATION.update({"quiet": Core.switch("quiet")}) if ARGS.share_logs: CONFIGURATION.update({"share_logs": Core.switch("share_logs")}) if ARGS.simple: CONFIGURATION.update( {"simple": Core.switch("simple"), "quiet": Core.switch("quiet")} ) if ARGS.split: CONFIGURATION.update({"split": Core.switch("split")}) if ARGS.syntax: CONFIGURATION.update({"syntax": Core.switch("syntax")}) if ARGS.timeout and ARGS.timeout % 3 == 0: 
CONFIGURATION.update({"seconds_before_http_timeout": ARGS.timeout}) if ARGS.travis: CONFIGURATION.update({"travis": Core.switch("travis")}) if ARGS.travis_branch: CONFIGURATION.update({"travis_branch": ARGS.travis_branch}) if ARGS.user_agent: CONFIGURATION.update({"user_agent": ARGS.user_agent}) if ARGS.verify_ssl_certificate: CONFIGURATION.update( {"verify_ssl_certificate": ARGS.verify_ssl_certificate} ) if ARGS.whois_database: CONFIGURATION.update( {"whois_database": Core.switch("whois_database")} ) if not CONFIGURATION["quiet"]: Core.colorify_logo(home=True) Version().compare() Core( domain_or_ip_to_test=ARGS.domain, file_path=ARGS.file, url_to_test=ARGS.url, url_file=ARGS.url_file, link_to_test=ARGS.link, ) except KeyError as e: if not Version(True).is_cloned(): Merge(CURRENT_DIRECTORY) else: raise e except KeyboardInterrupt: stay_safe()
Provide the command line interface.
26,877
def present(name,
            type,
            url,
            access='proxy',
            user='',
            password='',
            database='',
            basic_auth=False,
            basic_auth_user='',
            basic_auth_password='',
            is_default=False,
            json_data=None,
            profile='grafana'):
    # NOTE: the string literals in this entry were stripped during extraction;
    # keys and messages below are restored on a best-effort basis from Salt's
    # grafana_datasource state module.
    if isinstance(profile, string_types):
        profile = __salt__['config.option'](profile)

    ret = {'name': name, 'result': None, 'comment': None, 'changes': {}}
    datasource = _get_datasource(profile, name)
    data = _get_json_data(name, type, url, access, user, password, database,
                          basic_auth, basic_auth_user, basic_auth_password,
                          is_default, json_data)

    if datasource:
        # data source exists: update it in place
        requests.put(
            _get_url(profile, datasource['id']),
            data,
            headers=_get_headers(profile),
            timeout=profile.get('grafana_timeout', 3),
        )
        ret['result'] = True
        ret['changes'] = _diff(datasource, data)
        if ret['changes']['new'] or ret['changes']['old']:
            ret['comment'] = 'Data source {0} updated'.format(name)
        else:
            ret['changes'] = {}
            ret['comment'] = 'Data source {0} already up-to-date'.format(name)
    else:
        # data source missing: create it
        requests.post(
            '{0}/api/datasources'.format(profile['grafana_url']),
            data,
            headers=_get_headers(profile),
            timeout=profile.get('grafana_timeout', 3),
        )
        ret['result'] = True
        ret['comment'] = 'New data source {0} added'.format(name)
        ret['changes'] = data
    return ret
Ensure that a data source is present. name Name of the data source. type Which type of data source it is ('graphite', 'influxdb' etc.). url The URL to the data source API. user Optional - user to authenticate with the data source password Optional - password to authenticate with the data source basic_auth Optional - set to True to use HTTP basic auth to authenticate with the data source. basic_auth_user Optional - HTTP basic auth username. basic_auth_password Optional - HTTP basic auth password. is_default Default: False
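A minimal usage sketch. This state only works inside a running Salt master/minion (`__salt__` and the `_get_*` helpers are module-level), so the call and the profile name here are purely illustrative:

# Hypothetical call, as Salt's state compiler would make it.
ret = present(
    name='influxdb-metrics',
    type='influxdb',
    url='http://localhost:8086',
    database='metrics',
    is_default=True,
    profile='grafana',
)
# ret follows the usual Salt state return shape:
# {'name': 'influxdb-metrics', 'result': True,
#  'comment': 'New data source influxdb-metrics added', 'changes': {...}}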
26,878
def mousePressEvent(self, event): if event.x() < 50: super(PlotMenuBar, self).mousePressEvent(event) else: event.ignore()
Marshals behaviour depending on the location of the mouse click.
26,879
def mkopen(p, *args, **kwargs): dir = os.path.dirname(p) mkdir(dir) return open(p, *args, **kwargs)
A wrapper for the open() builtin which makes parent directories if needed.
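For example, assuming the module-level `mkdir` helper creates intermediate directories (akin to os.makedirs with exist_ok=True):

with mkopen('/tmp/reports/2024/summary.txt', 'w') as f:
    f.write('hello\n')  # /tmp/reports/2024/ is created on demand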
26,880
def open(self, value, nt=None, wrap=None, unwrap=None): self._wrap = wrap or (nt and nt.wrap) or self._wrap self._unwrap = unwrap or (nt and nt.unwrap) or self._unwrap _SharedPV.open(self, self._wrap(value))
Mark the PV as opened and provide its initial value. This initial value is later updated with post(). :param value: A Value, or appropriate object (see nt= and wrap= of the constructor). Any clients which began connecting while this PV was in the closed state will complete connecting. Only those fields of the value which are marked as changed will be stored.
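A sketch of the intended lifecycle, with names assumed from the p4p library this entry appears to come from:

from p4p.nt import NTScalar
from p4p.server.thread import SharedPV

pv = SharedPV(nt=NTScalar('d'))  # starts in the closed state
pv.open(3.14)                    # pending clients can now finish connecting
pv.post(2.71)                    # subsequent updates go through post()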
26,881
def on_origin(self, *args): if self.origin is None: Clock.schedule_once(self.on_origin, 0) return self.origin.bind( pos=self._trigger_repoint, size=self._trigger_repoint )
Make sure to redraw whenever the origin moves.
26,882
def insert_object_into_db_pk_known(self, obj: Any, table: str,
                                   fieldlist: Sequence[str]) -> None:
    pkvalue = getattr(obj, fieldlist[0])
    if pkvalue is None:
        raise AssertionError("insert_object_into_db_pk_known called "
                             "without PK")
    valuelist = []
    for f in fieldlist:
        valuelist.append(getattr(obj, f))
    self.db_exec(
        get_sql_insert(table, fieldlist, self.get_delims()),
        *valuelist
    )
Inserts object into database table, with PK (first field) already known.
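A sketch of a call site; `dbwrapper` stands in for an instance of the surrounding database class, and the record object is anything whose attribute names match the column list:

import types

row = types.SimpleNamespace(id=42, name='alice', email='a@example.org')
# the PK ('id') must come first in fieldlist and must already be set
dbwrapper.insert_object_into_db_pk_known(row, 'users', ['id', 'name', 'email'])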
26,883
def transform(self):
    if self._scene is None:
        # no scene: fall back to any locally stored transform
        # (attribute name restored; the literal was lost in extraction)
        if not hasattr(self, '_transform') or self._transform is None:
            return np.eye(4)
        return self._transform
    return self._scene.graph[self.name][0]
Get the (4, 4) homogeneous transformation from the
world frame to this camera object.

Returns
------------
transform : (4, 4) float
  Transform from world to camera
26,884
async def handle_action(self, action: str, request_id: str, **kwargs): try: await self.check_permissions(action, **kwargs) if action not in self.available_actions: raise MethodNotAllowed(method=action) method_name = self.available_actions[action] method = getattr(self, method_name) reply = partial(self.reply, action=action, request_id=request_id) response = await method( request_id=request_id, action=action, **kwargs ) if isinstance(response, tuple): data, status = response await reply( data=data, status=status ) except Exception as exc: await self.handle_exception( exc, action=action, request_id=request_id )
Run the action: check permissions, look the action up in available_actions, and reply with the method's result.
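A sketch of a consumer built on this dispatcher; the base class name, the helper, and the action-table shape are assumptions, not the original library's API:

class NoteConsumer(AsyncAPIConsumer):  # hypothetical base exposing handle_action
    # maps wire-level action names to method names
    available_actions = {'retrieve': 'retrieve'}

    async def retrieve(self, request_id, action, pk=None, **kwargs):
        note = await fetch_note(pk)  # hypothetical data-access helper
        # returning a (data, status) tuple makes handle_action call reply()
        return {'id': pk, 'text': note.text}, 200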
26,885
def wanted_labels(self, labels): if not isinstance(labels, list): raise ValueError("Input labels must be a list.") self.user_labels = labels
Specify only WANTED labels to minimize get_labels() requests Args: - labels: <list> of wanted labels. Example: page.wanted_labels(['P18', 'P31'])
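Typical wptools-style usage, assuming the surrounding page API; restricting labels keeps get_wikidata() from resolving every claim:

import wptools

page = wptools.page('Douglas Adams')
page.wanted_labels(['P18', 'P31'])  # image, instance-of
page.get_wikidata()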
26,886
def encrypt_assertion(self, statement, enc_key, template,
                      key_type='des-192', node_xpath=None, node_id=None):
    if six.PY2:
        _str = unicode  # noqa: F821
    else:
        _str = str

    if isinstance(statement, SamlBase):
        statement = pre_encrypt_assertion(statement)

    _, fil = make_temp(
        _str(statement), decode=False, delete=self._xmlsec_delete_tmpfiles
    )
    _, tmpl = make_temp(_str(template), decode=False)

    if not node_xpath:
        node_xpath = ASSERT_XPATH

    # xmlsec1 command line; the option strings were lost in extraction and
    # are restored here on a best-effort basis from pysaml2's xmlsec backend
    com_list = [
        self.xmlsec,
        '--encrypt',
        '--pubkey-cert-pem', enc_key,
        '--session-key', key_type,
        '--xml-data', fil,
        '--node-xpath', node_xpath,
    ]
    if node_id:
        com_list.extend(['--node-id', node_id])

    try:
        (_stdout, _stderr, output) = self._run_xmlsec(com_list, [tmpl])
    except XmlsecError as e:
        six.raise_from(EncryptError(com_list), e)

    return output.decode()
Will encrypt an assertion :param statement: A XML document that contains the assertion to encrypt :param enc_key: File name of a file containing the encryption key :param template: A template for the encryption part to be added. :param key_type: The type of session key to use. :return: The encrypted text
26,887
def run(self, n_iterations=1, min_n_workers=1, iteration_kwargs={}):
    # NOTE: log messages and dict keys below were lost in extraction and are
    # restored on a best-effort basis from the hpbandster Master class.
    self.wait_for_workers(min_n_workers)

    iteration_kwargs.update({'result_logger': self.result_logger})

    if self.time_ref is None:
        self.time_ref = time.time()
        self.config['time_ref'] = self.time_ref
        self.logger.info('HBMASTER: starting run at %s' % (str(self.time_ref)))

    self.thread_cond.acquire()
    while True:
        self._queue_wait()

        # look for a run to schedule among the active iterations
        next_run = None
        for i in self.active_iterations():
            next_run = self.iterations[i].get_next_run()
            if not next_run is None:
                break

        if not next_run is None:
            self.logger.debug('HBMASTER: schedule new run for iteration %i' % i)
            self._submit_job(*next_run)
            continue
        else:
            if n_iterations > 0:
                # start a new iteration
                self.iterations.append(
                    self.get_next_iteration(len(self.iterations), iteration_kwargs))
                n_iterations -= 1
                continue

        # nothing to schedule right now: wait for a job to finish,
        # or stop if no iteration is active anymore
        if self.active_iterations():
            self.thread_cond.wait()
        else:
            break
    self.thread_cond.release()

    for i in self.warmstart_iteration:
        i.fix_timestamps(self.time_ref)

    ws_data = [i.data for i in self.warmstart_iteration]

    return Result([copy.deepcopy(i.data) for i in self.iterations] + ws_data,
                  self.config)
run n_iterations of SuccessiveHalving Parameters ---------- n_iterations: int number of iterations to be performed in this run min_n_workers: int minimum number of workers before starting the run
26,888
def _date_time_match(cron, **kwargs):
    # string literals restored from Salt's cron module: a field matches if it
    # was not queried, compares equal as text, or was requested as 'random'
    # while the tab holds something other than '*'
    return all([kwargs.get(x) is None
                or cron[x] == six.text_type(kwargs[x])
                or (six.text_type(kwargs[x]).lower() == 'random' and cron[x] != '*')
                for x in ('minute', 'hour', 'daymonth', 'month', 'dayweek')])
Returns true if the minute, hour, etc. params match their counterparts from the dict returned from list_tab().
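For example, given an entry dict of the shape list_tab() returns:

cron = {'minute': '5', 'hour': '*', 'daymonth': '*',
        'month': '*', 'dayweek': '1'}

_date_time_match(cron, minute=5, dayweek=1)  # True: unspecified fields pass
_date_time_match(cron, hour=3)               # False: '3' != '*'
_date_time_match(cron, minute='random')      # True, since '5' != '*'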
26,889
def user_auth_link(self, redirect_uri, scope='', state='', avoid_linking=False):
    # string literals restored on a best-effort basis from pycronofy
    if not scope:
        scope = ' '.join(settings.DEFAULT_OAUTH_SCOPE)
    self.auth.update(redirect_uri=redirect_uri)
    url = '%s/oauth/authorize' % self.app_base_url
    params = {
        'response_type': 'code',
        'client_id': self.auth.client_id,
        'redirect_uri': redirect_uri,
        'scope': scope,
        'state': state,
        'avoid_linking': avoid_linking,
    }
    urlencoded_params = urlencode(params)
    return "{url}?{params}".format(url=url, params=urlencoded_params)
Generates a URL to send the user for OAuth 2.0 :param string redirect_uri: URL to redirect the user to after auth. :param string scope: The scope of the privileges you want the eventual access_token to grant. :param string state: A value that will be returned to you unaltered along with the user's authorization request decision. (The OAuth 2.0 RFC recommends using this to prevent cross-site request forgery.) :param bool avoid_linking: Avoid linking calendar accounts together under one set of credentials. (Optional, default: false). :return: authorization link :rtype: ``string``
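A sketch of generating the link; the client construction and credentials are assumptions:

import pycronofy

client = pycronofy.Client(client_id='abc123', client_secret='s3cret')  # hypothetical credentials
link = client.user_auth_link(
    redirect_uri='https://example.org/oauth/callback',
    scope='read_events create_event',
    state='xsrf-token-42',
)
# -> https://app.cronofy.com/oauth/authorize?response_type=code&client_id=...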
26,890
def command(sock, dbname, spec, slave_ok, is_mongos,
            read_preference, codec_options, session, client, check=True,
            allowable_errors=None, address=None, check_keys=False,
            listeners=None, max_bson_size=None, read_concern=None,
            parse_write_concern_error=False, collation=None,
            compression_ctx=None, use_op_msg=False, unacknowledged=False,
            user_fields=None):
    # (string literals restored from the pymongo source: '.$cmd',
    # 'readConcern', 'afterClusterTime', 'collation')
    name = next(iter(spec))
    ns = dbname + '.$cmd'
    flags = 4 if slave_ok else 0

    orig = spec
    if is_mongos and not use_op_msg:
        spec = message._maybe_add_read_preference(spec, read_preference)
    if read_concern and not (session and session._in_transaction):
        if read_concern.level:
            spec['readConcern'] = read_concern.document
        if (session and session.options.causal_consistency
                and session.operation_time is not None):
            spec.setdefault(
                'readConcern', {})['afterClusterTime'] = session.operation_time
    if collation is not None:
        spec['collation'] = collation

    publish = listeners is not None and listeners.enabled_for_commands
    if publish:
        start = datetime.datetime.now()

    if compression_ctx and name.lower() in _NO_COMPRESSION:
        compression_ctx = None

    if use_op_msg:
        flags = 2 if unacknowledged else 0
        request_id, msg, size, max_doc_size = message._op_msg(
            flags, spec, dbname, read_preference, slave_ok, check_keys,
            codec_options, ctx=compression_ctx)
        # for unacknowledged writes, enforce the size limit locally since the
        # server will not send back an error
        if (unacknowledged and max_bson_size is not None and
                max_doc_size > max_bson_size):
            message._raise_document_too_large(name, size, max_bson_size)
    else:
        request_id, msg, size = message.query(
            flags, ns, 0, -1, spec, None, codec_options, check_keys,
            compression_ctx)

    if (max_bson_size is not None
            and size > max_bson_size + message._COMMAND_OVERHEAD):
        message._raise_document_too_large(
            name, size, max_bson_size + message._COMMAND_OVERHEAD)

    if publish:
        encoding_duration = datetime.datetime.now() - start
        listeners.publish_command_start(orig, dbname, request_id, address)
        start = datetime.datetime.now()

    try:
        sock.sendall(msg)
        if use_op_msg and unacknowledged:
            # unacknowledged: fake a successful command response
            response_doc = {"ok": 1}
        else:
            reply = receive_message(sock, request_id)
            unpacked_docs = reply.unpack_response(
                codec_options=codec_options, user_fields=user_fields)

            response_doc = unpacked_docs[0]
            if client:
                client._process_response(response_doc, session)
            if check:
                helpers._check_command_response(
                    response_doc, None, allowable_errors,
                    parse_write_concern_error=parse_write_concern_error)
    except Exception as exc:
        if publish:
            duration = (datetime.datetime.now() - start) + encoding_duration
            if isinstance(exc, (NotMasterError, OperationFailure)):
                failure = exc.details
            else:
                failure = message._convert_exception(exc)
            listeners.publish_command_failure(
                duration, failure, name, request_id, address)
        raise
    if publish:
        duration = (datetime.datetime.now() - start) + encoding_duration
        listeners.publish_command_success(
            duration, response_doc, name, request_id, address)
    return response_doc
Execute a command over the socket, or raise socket.error. :Parameters: - `sock`: a raw socket instance - `dbname`: name of the database on which to run the command - `spec`: a command document as an ordered dict type, eg SON. - `slave_ok`: whether to set the SlaveOkay wire protocol bit - `is_mongos`: are we connected to a mongos? - `read_preference`: a read preference - `codec_options`: a CodecOptions instance - `session`: optional ClientSession instance. - `client`: optional MongoClient instance for updating $clusterTime. - `check`: raise OperationFailure if there are errors - `allowable_errors`: errors to ignore if `check` is True - `address`: the (host, port) of `sock` - `check_keys`: if True, check `spec` for invalid keys - `listeners`: An instance of :class:`~pymongo.monitoring.EventListeners` - `max_bson_size`: The maximum encoded bson size for this server - `read_concern`: The read concern for this command. - `parse_write_concern_error`: Whether to parse the ``writeConcernError`` field in the command response. - `collation`: The collation for this command. - `compression_ctx`: optional compression Context. - `use_op_msg`: True if we should use OP_MSG. - `unacknowledged`: True if this is an unacknowledged command. - `user_fields` (optional): Response fields that should be decoded using the TypeDecoders from codec_options, passed to bson._decode_all_selective.
26,891
def convex_conj(self): convex_conjs = [func.convex_conj for func in self.functionals] return SeparableSum(*convex_conjs)
The convex conjugate functional. Convex conjugate distributes over separable sums, so the result is simply the separable sum of the convex conjugates.
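In symbols: if f(x_1, ..., x_n) = \sum_i f_i(x_i) with independent variables x_i, then

    \left(\sum_i f_i\right)^{*}(y_1, \dots, y_n)
        = \sup_{x_1, \dots, x_n} \sum_i \bigl(\langle y_i, x_i\rangle - f_i(x_i)\bigr)
        = \sum_i \sup_{x_i} \bigl(\langle y_i, x_i\rangle - f_i(x_i)\bigr)
        = \sum_i f_i^{*}(y_i),

since a supremum of a sum over independent variables splits term by term.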
26,892
def remove_ectopy(tachogram_data, tachogram_time): remove_margin = 0.20 finish_ectopy_remove = False signal = list(tachogram_data) time = list(tachogram_time) beat = 1 while finish_ectopy_remove is False: max_thresh = signal[beat - 1] + remove_margin * signal[beat - 1] min_thresh = signal[beat - 1] - remove_margin * signal[beat - 1] if signal[beat] > max_thresh or signal[beat] < min_thresh: signal.pop(beat) signal.pop(beat) time.pop(beat) time.pop(beat) beat += 1 else: beat += 1 if beat >= len(signal): finish_ectopy_remove = True return signal, time
----- Brief ----- Function for removing ectopic beats. ----------- Description ----------- Ectopic beats are beats that are originated in cells that do not correspond to the expected pacemaker cells. These beats are identifiable in ECG signals by abnormal rhythms. This function allows to remove the ectopic beats by defining time thresholds that consecutive heartbeats should comply with. ---------- Parameters ---------- tachogram_data : list Y Axis of tachogram. tachogram_time : list X Axis of tachogram. Returns ------- out : list, list List of tachogram samples. List of instants where each cardiac cycle ends. Source ------ "Comparison of methods for removal of ectopy in measurement of heart rate variability" by N. Lippman, K. M. Stein and B. B. Lerman.
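A small synthetic check (values in seconds; the 20 % margin flags the 1.5 s interval as ectopic):

rr = [0.80, 0.82, 1.50, 0.81, 0.79]  # RR intervals with one outlier
t = [0.80, 1.62, 3.12, 3.93, 4.72]   # cumulative beat times

clean_rr, clean_t = remove_ectopy(rr, t)
# -> [0.80, 0.82, 0.79]: the outlier and the beat after it are both dropped,
# since an ectopic beat corrupts two consecutive RR intervals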
26,893
def subtract_bg(samplename, bgname, factor=1, distance=None, disttolerance=2, subname=None, qrange=(), graph_extension=, graph_dpi=80): ip = get_ipython() data1d = ip.user_ns[] data2d = ip.user_ns[] if not in ip.user_ns: ip.user_ns[] = set() subtractedsamplenames = ip.user_ns[] if subname is None: if isinstance(bgname, str): subname = samplename + + bgname else: subname = samplename + if distance is None: dists = data1d[samplename] else: dists = [d for d in data1d[samplename] if abs(d - distance) < disttolerance] for dist in dists: if isinstance(bgname, str): if not disttolerance: if dist not in data1d[bgname]: print( % ( dist, samplename, bgname)) continue else: bgdist = dist else: bgdist = sorted([(d, r) for (d, r) in [(d, np.abs(d - dist)) for d in list(data1d[bgname].keys())] if r <= disttolerance], key=lambda x: x[1])[0][0] if subname not in data1d: data1d[subname] = {} if subname not in data2d: data2d[subname] = {} if subname not in ip.user_ns[]: ip.user_ns[][subname] = {} data1_s = data1d[samplename][dist] data2_s = data2d[samplename][dist] if isinstance(bgname, str): data1_bg = data1d[bgname][bgdist] data2_bg = data2d[bgname][bgdist] if factor is None: factor = data1_s.trim(*qrange).momentum(0) / data1_bg.trim(*qrange).momentum(0) elif bgname is None: data1_bg = data1_s.trim(*qrange).momentum(0) data2_bg = data1_bg else: data1_bg = bgname data2_bg = bgname if factor is None: factor = 1 data1d[subname][dist] = data1_s - factor * data1_bg data2d[subname][dist] = data2_s - factor * data2_bg data1d[subname][dist].save( os.path.join(ip.user_ns[], subname + + ( % dist).replace(, ) + )) ip.user_ns[][subname][dist] = ip.user_ns[][samplename][ dist] plt.figure() plotsascurve(samplename, dist=dist) if isinstance(bgname, str): plotsascurve(bgname, dist=dist, factor=factor) plotsascurve(subname, dist=dist) plt.savefig(os.path.join(ip.user_ns[], + samplename + + graph_extension), dpi=graph_dpi) subtractedsamplenames.add(subname)
Subtract background from measurements.

Inputs:
    samplename: the name of the sample
    bgname: the name of the background measurements. Alternatively, it can
        be a numeric value (float or ErrorValue), which will be subtracted.
        If None, this constant will be determined by integrating the
        scattering curve in the range given by qrange.
    factor: the background curve will be multiplied by this factor before
        subtraction
    distance: if None, do the subtraction for all sample-to-detector
        distances. Otherwise give here the value of the sample-to-detector
        distance.
    qrange: a tuple (qmin, qmax)
    disttolerance: the tolerance in which two distances are considered equal.
    subname: the sample name of the background-corrected curve. The default
        is samplename + '-' + bgname
26,894
def back_bfs(self, start, end=None): return [node for node, step in self._iterbfs(start, end, forward=False)]
Returns a list of nodes in some backward BFS order. Starting from the start node the breadth first search proceeds along incoming edges.
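A sketch with an altgraph-style directed graph (the Graph API is assumed):

from altgraph.Graph import Graph

g = Graph()
for head, tail in [(1, 2), (2, 3), (4, 3)]:
    g.add_edge(head, tail)  # nodes are created on demand

g.back_bfs(3)  # e.g. [3, 2, 4, 1] -- walks incoming edges starting at node 3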
26,895
def _apply_cell_filters(self, context): self.setattrs(_is_unpermitted_fields_set=True) for perm, fields in self.Meta.field_permissions.items(): if not context.has_permission(perm): self._unpermitted_fields.extend(fields) return self._unpermitted_fields
Applies the field restrictions based on the return value of the context's "has_permission()" method. Stores them on self._unpermitted_fields. Returns: List of unpermitted fields names.
26,896
def confidenceInterval(self, alpha=0.6827, steps=1.e5, plot=False):
    x_dense, y_dense = self.densify()
    y_dense -= np.max(y_dense)  # delta-log-likelihood
    # the interpolation kind was lost in extraction; 'cubic' is an assumption
    f = scipy.interpolate.interp1d(x_dense, y_dense, kind='cubic')
    x = np.linspace(0., np.max(x_dense), int(steps))
    pdf = np.exp(f(x) / 2.)
    # discard negligible PDF values to avoid numerical noise in the tails
    cut = (pdf / np.max(pdf)) > 1.e-10
    x = x[cut]
    pdf = pdf[cut]

    # accumulate probability from the highest PDF values downward until the
    # requested coverage alpha is reached
    sorted_pdf_indices = np.argsort(pdf)[::-1]
    cdf = np.cumsum(pdf[sorted_pdf_indices])
    cdf /= cdf[-1]
    sorted_pdf_index_max = np.argmin((cdf - alpha)**2)
    x_select = x[sorted_pdf_indices[0: sorted_pdf_index_max]]
    return np.min(x_select), np.max(x_select)
Compute two-sided confidence interval by taking x-values corresponding to the largest PDF-values first.
26,897
def _ParseDocstring(function): if not function.__doc__: return {} type_check_dict = {} for match in param_regexp.finditer(function.__doc__): param_str = match.group(1).strip() param_splitted = param_str.split(" ") if len(param_splitted) >= 2: type_str = " ".join(param_splitted[:-1]) name = param_splitted[-1] type_check_dict[name] = type_str for match in returns_regexp.finditer(function.__doc__): type_check_dict["returns"] = match.group(1) for match in type_regexp.finditer(function.__doc__): name = match.group(1) type_str = match.group(2) type_check_dict[name] = type_str for match in rtype_regexp.finditer(function.__doc__): type_check_dict["returns"] = match.group(1) return type_check_dict
Parses the function's docstring into a dictionary of type checks.
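For example, assuming the module-level param/rtype regexes match the usual ':param type name:' and ':rtype:' forms, a Sphinx-style docstring yields a name-to-type mapping:

def area(width, height):
    """Compute a rectangle's area.

    :param int width: horizontal size
    :param int height: vertical size
    :rtype: int
    """
    return width * height

_ParseDocstring(area)
# -> {'width': 'int', 'height': 'int', 'returns': 'int'}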
26,898
def get_table(self, dataset, table, project_id=None): project_id = self._get_project_id(project_id) try: table = self.bigquery.tables().get( projectId=project_id, datasetId=dataset, tableId=table).execute(num_retries=self.num_retries) except HttpError: table = {} return table
Retrieve a table if it exists, otherwise return an empty dict. Parameters ---------- dataset : str The dataset that the table is in table : str The name of the table project_id: str, optional The project that the table is in Returns ------- dict Containing the table object if it exists, else empty
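Usage sketch; the client construction is assumed, only the call shape matters here:

bq = client  # a previously constructed BigQuery client wrapper (hypothetical)
table = bq.get_table('analytics', 'events', project_id='my-project')
if not table:
    print('table does not exist')  # missing tables come back as {}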
26,899
def _compute_examples(self):
    for label in self._raw_examples:
        self._examples[label] = self._compute_example(label)

    # add a bare example for each void union member; the tag key was lost in
    # extraction, and '.tag' is an assumption based on how such union
    # examples are usually serialized
    for field in self.all_fields:
        dt, _ = unwrap_nullable(field.data_type)
        if is_void_type(dt):
            self._examples[field.name] = \
                Example(
                    field.name, None, OrderedDict([('.tag', field.name)]))
Populates the ``_examples`` instance attribute by computing full examples for each label in ``_raw_examples``. The logic in this method is separate from :meth:`_add_example` because this method requires that every type have ``_raw_examples`` assigned for resolving example references.