20,800
def pagination_for(context, current_page, page_var="page", exclude_vars=""):
    querystring = context["request"].GET.copy()
    exclude_vars = [v for v in exclude_vars.split(",") if v] + [page_var]
    for exclude_var in exclude_vars:
        if exclude_var in querystring:
            del querystring[exclude_var]
    querystring = querystring.urlencode()
    return {
        "current_page": current_page,
        "querystring": querystring,
        "page_var": page_var,
    }
Include the pagination template and data for persisting querystring in pagination links. Can also contain a comma separated string of var names in the current querystring to exclude from the pagination links, via the ``exclude_vars`` arg.
20,801
def arquire_attributes(self, attributes, active=True):
    attribute_update = self._post_object(self.update_api.attributes.acquire, attributes)
    return ExistAttributeResponse(attribute_update)
Claims a list of attributes for the current client. Can also disable attributes. Returns update response object.
20,802
def conditional_gate(control: Qubit, gate0: Gate, gate1: Gate) -> Gate:
    assert gate0.qubits == gate1.qubits
    tensor = join_gates(P0(control), gate0).tensor
    tensor += join_gates(P1(control), gate1).tensor
    gate = Gate(tensor=tensor, qubits=[control, *gate0.qubits])
    return gate
Return a conditional unitary gate: apply gate0 to the target qubits if the control qubit is zero, else apply gate1.
20,803
def transmission_rate(self):
    sent = self.bytes_sent
    received = self.bytes_received
    traffic_call = time.time()
    time_delta = traffic_call - self.last_traffic_call
    upstream = int(1.0 * (sent - self.last_bytes_sent) / time_delta)
    downstream = int(1.0 * (received - self.last_bytes_received) / time_delta)
    self.last_bytes_sent = sent
    self.last_bytes_received = received
    self.last_traffic_call = traffic_call
    return upstream, downstream
Return the upstream and downstream rates as a tuple, in bytes per second. Intended to be called periodically.
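A minimal usage sketch (the Connection wrapper and polling interval are assumptions for illustration):

    import time

    conn = Connection()  # hypothetical object exposing the byte counters
    while True:
        up, down = conn.transmission_rate()
        print("up: %d B/s, down: %d B/s" % (up, down))
        time.sleep(5)  # rates are averaged over the interval between calls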
20,804
def encode_request(request_line, **headers):
    lines = [request_line]
    lines.extend(['%s: %s' % kv for kv in headers.items()])
    # join request line and headers with CRLF and terminate with a blank line
    # (string literals reconstructed; the originals were stripped in extraction)
    return ('\r\n'.join(lines) + '\r\n\r\n').encode()
Creates the data for a SSDP request. Args: request_line (string): The request line for the request (e.g. ``"M-SEARCH * HTTP/1.1"``). headers (dict of string -> string): Dictionary of header name - header value pairs to present in the request. Returns: bytes: The encoded request.
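For instance, building an SSDP M-SEARCH discovery request (header values here follow the usual SSDP convention and are illustrative):

    data = encode_request(
        'M-SEARCH * HTTP/1.1',
        HOST='239.255.255.250:1900',
        MAN='"ssdp:discover"',
        MX='2',
        ST='ssdp:all',
    )
    # data is ready to be sent over a UDP socket to the multicast group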
20,805
def runGetReference(self, id_):
    compoundId = datamodel.ReferenceCompoundId.parse(id_)
    referenceSet = self.getDataRepository().getReferenceSet(
        compoundId.reference_set_id)
    reference = referenceSet.getReference(id_)
    return self.runGetRequest(reference)
Runs a getReference request for the specified ID.
20,806
def status(**connection_args):
    dbc = _connect(**connection_args)
    if dbc is None:
        return {}
    cur = dbc.cursor()
    qry = 'SHOW STATUS'
    try:
        _execute(cur, qry)
    except OperationalError as exc:
        err = 'MySQL Error {0}: {1}'.format(*exc.args)
        __context__['mysql.error'] = err
        log.error(err)
        return {}
    ret = {}
    for _ in range(cur.rowcount):
        row = cur.fetchone()
        ret[row[0]] = row[1]
    return ret
Return the status of a MySQL server using the output from the ``SHOW STATUS`` query. CLI Example: .. code-block:: bash salt '*' mysql.status
20,807
def iou_coe(output, target, threshold=0.5, axis=(1, 2, 3), smooth=1e-5):
    pre = tf.cast(output > threshold, dtype=tf.float32)
    truth = tf.cast(target > threshold, dtype=tf.float32)
    inse = tf.reduce_sum(tf.multiply(pre, truth), axis=axis)
    union = tf.reduce_sum(tf.cast(tf.add(pre, truth) >= 1, dtype=tf.float32), axis=axis)
    batch_iou = (inse + smooth) / (union + smooth)
    iou = tf.reduce_mean(batch_iou, name='iou_coe')
    return iou
Non-differentiable Intersection over Union (IoU) for comparing the similarity of two batches of data, usually used for evaluating binary image segmentation. The coefficient is between 0 and 1, and 1 means a perfect match. Parameters ----------- output : tensor A batch of distribution with shape: [batch_size, ....], (any dimensions). target : tensor The target distribution, format the same with `output`. threshold : float The threshold value to be true. axis : tuple of integer All dimensions are reduced, default ``(1,2,3)``. smooth : float This small value will be added to the numerator and denominator, see ``dice_coe``. Notes ------ - IoU cannot be used as training loss, people usually use dice coefficient for training, IoU and hard-dice for evaluating.
20,808
def human_size(size):
    if isinstance(size, string_types):
        size = int(size, 10)
    if size < 0:
        return "-??? bytes"
    if size < 1024:
        return "%4d bytes" % size
    for unit in ("KiB", "MiB", "GiB"):
        size /= 1024.0
        if size < 1024:
            return "%6.1f %s" % (size, unit)
    return "%6.1f GiB" % size
Return a human-readable representation of a byte size. @param size: Number of bytes as an integer or string. @return: String of length 10 with the formatted result.
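A few example calls and their fixed-width results:

    human_size(512)           # ' 512 bytes'
    human_size(2048)          # '   2.0 KiB'
    human_size('3221225472')  # '   3.0 GiB'  (string input is accepted)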
20,809
def is_analysis_edition_allowed(self, analysis_brain):
    if not self.context_active:
        return False
    analysis_obj = api.get_object(analysis_brain)
    if analysis_obj.getPointOfCapture() == 'field':
        if not self.has_permission(EditFieldResults, analysis_obj):
            return False
    elif not self.has_permission(EditResults, analysis_obj):
        return False
    if not self.has_permission(FieldEditAnalysisResult, analysis_obj):
        return False
    if not self.is_analysis_instrument_valid(analysis_brain):
        return analysis_obj.getManualEntryOfResults()
    return True
Returns if the analysis passed in can be edited by the current user :param analysis_brain: Brain that represents an analysis :return: True if the user can edit the analysis, otherwise False
20,810
def get_feature(self, ds, feat):
    if ds.config["filtering"]["enable filters"]:
        x = ds[feat][ds._filter]
    else:
        x = ds[feat]
    bad = np.isnan(x) | np.isinf(x)
    xout = x[~bad]
    return xout
Return filtered feature data The features are filtered according to the user-defined filters, using the information in `ds._filter`. In addition, all `nan` and `inf` values are purged. Parameters ---------- ds: dclab.rtdc_dataset.RTDCBase The dataset containing the feature feat: str The name of the feature; must be a scalar feature
20,811
def error(self, instance, value, error_class=None, extra=''):
    error_class = error_class or ValidationError
    if not isinstance(value, (list, tuple, np.ndarray)):
        super(Array, self).error(instance, value, error_class, extra)
    if isinstance(value, (list, tuple)):
        val_description = 'a {typ} of length {len}'.format(
            typ=value.__class__.__name__,
            len=len(value)
        )
    else:
        val_description = 'an array of shape {shp} and dtype {typ}'.format(
            shp=value.shape,
            typ=value.dtype
        )
    if instance is None:
        prefix = '{} property'.format(self.__class__.__name__)
    else:
        prefix = "The '{name}' property of a {cls} instance".format(
            name=self.name,
            cls=instance.__class__.__name__,
        )
    # message text reconstructed; the original string literals were lost in extraction
    message = (
        '{prefix} must be {info}. The provided value is {desc}. {extra}'.format(
            prefix=prefix,
            info=self.info,
            desc=val_description,
            extra=extra,
        )
    )
    if issubclass(error_class, ValidationError):
        raise error_class(message, 'invalid', self.name, instance)
    raise error_class(message)
Generates a ValueError on setting property to an invalid value
20,812
def connectToBroker(self, protocol):
    self.protocol = protocol
    self.protocol.onPublish = self.onPublish
    self.protocol.onDisconnection = self.onDisconnection
    self.protocol.setWindowSize(3)
    try:
        yield self.protocol.connect("TwistedMQTT-subs", keepalive=60)
        yield self.subscribe()
    except Exception as e:
        log.error("Connecting to {broker} raised {excp!s}",
                  broker=BROKER, excp=e)
    else:
        log.info("Connected and subscribed to {broker}", broker=BROKER)
Connect to MQTT broker
20,813
def convertColors(element):
    numBytes = 0
    if element.nodeType != Node.ELEMENT_NODE:
        return 0
    # element/attribute name lists reconstructed (string literals were stripped in extraction)
    attrsToConvert = []
    if element.nodeName in ['rect', 'circle', 'ellipse', 'polygon',
                            'line', 'polyline', 'path', 'g', 'svg']:
        attrsToConvert = ['fill', 'stroke']
    elif element.nodeName in ['stop']:
        attrsToConvert = ['stop-color']
    elif element.nodeName in ['solidColor']:
        attrsToConvert = ['solid-color']
    styles = _getStyle(element)
    for attr in attrsToConvert:
        oldColorValue = element.getAttribute(attr)
        if oldColorValue != '':
            newColorValue = convertColor(oldColorValue)
            oldBytes = len(oldColorValue)
            newBytes = len(newColorValue)
            if oldBytes > newBytes:
                element.setAttribute(attr, newColorValue)
                numBytes += (oldBytes - len(element.getAttribute(attr)))
        if attr in styles:
            oldColorValue = styles[attr]
            newColorValue = convertColor(oldColorValue)
            oldBytes = len(oldColorValue)
            newBytes = len(newColorValue)
            if oldBytes > newBytes:
                styles[attr] = newColorValue
                numBytes += (oldBytes - len(element.getAttribute(attr)))
    _setStyle(element, styles)
    for child in element.childNodes:
        numBytes += convertColors(child)
    return numBytes
Recursively converts all color properties into #RRGGBB format if shorter
20,814
def insert(parent: ScheduleComponent, time: int, child: ScheduleComponent,
           name: str = None) -> Schedule:
    return union(parent, (time, child), name=name)
Return a new schedule with the `child` schedule inserted into the `parent` at `time`. Args: parent: Schedule to be inserted into time: Time at which to insert, defined with respect to `parent` child: Schedule to insert name: Name of the new schedule. Defaults to name of parent
20,815
def provideCustomerReferralCode(sender, **kwargs):
    # [Body largely lost in extraction. The surviving fragments check the
    # 'vouchers__enableVouchers' and 'referrals__enableReferralProgram'
    # constants and return {'referralVoucherId': vrd.referreeVoucher.voucherId}.]
    ...
If the vouchers app is installed and referrals are enabled, then the customer's profile page can show their voucher referral code.
20,816
def _update_statuses(self, sub_job_num=None):
    status_dict = dict()
    for val in CONDOR_JOB_STATUSES.values():
        status_dict[val] = 0
    for node in self.node_set:
        job = node.job
        try:
            job_status = job.status
            status_dict[job_status] += 1
        except (KeyError, HTCondorError):
            # bucket key reconstructed; the original literal was stripped
            status_dict['Unexpanded'] += 1
    return status_dict
Update the statuses of job nodes in the workflow.
20,817
def runserver(ctx, conf, port, foreground):
    config = read_config(conf)
    # config keys and echo messages reconstructed; originals stripped in extraction
    debug = config['debug'].get('enabled', False)
    click.echo('Debug mode {0}'.format('on' if debug else 'off'))
    port = port or config['server']['bind']['port']
    app_settings = {
        'debug': debug,
        'autoreload': config['server']['debug'].get('autoreload', False),
    }
    handlers_settings = __create_handler_settings(config)
    if foreground:
        click.echo('Starting server...')
        start_app(port, app_settings, handlers_settings)
    else:
        click.echo('Background mode is not supported yet')
        raise NotImplementedError
Run the fnExchange server
20,818
def p_const_vector_vector_list(p):
    if len(p[3]) != len(p[1][0]):
        # error message reconstructed; the original literal was stripped
        syntax_error(p.lineno(2), 'All rows must have the same number of columns')
        p[0] = None
        return
    p[0] = p[1] + [p[3]]
const_vector_list : const_vector_list COMMA const_vector
20,819
def string_to_int(s):
    result = 0
    for c in s:
        if not isinstance(c, int):
            c = ord(c)
        result = 256 * result + c
    return result
Convert a string of bytes into an integer, as per X9.62.
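For example, interpreting a big-endian byte string as an integer:

    string_to_int(b'\x01\x00')  # 256
    string_to_int('abc')        # 6382179 == (ord('a') * 256 + ord('b')) * 256 + ord('c')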
20,820
def validate(self, value, model=None, context=None):
    from boiler.user.services import user_service
    self_id = None
    if model:
        if isinstance(model, dict):
            self_id = model.get('id')
        else:
            self_id = getattr(model, 'id')
    params = dict()
    params[self.property] = value
    found = user_service.first(**params)
    if not found or (model and self_id == found.id):
        return Error()
    return Error(self.error)
Perform validation
20,821
def _execute(self, query, commit=False, working_columns=None):
    log.debug("RawlBase._execute()")
    result = []
    if working_columns is None:
        working_columns = self.columns
    with RawlConnection(self.dsn) as conn:
        query_id = random.randrange(9999)
        curs = conn.cursor()
        try:
            log.debug("Executing(%s): %s" % (query_id, query.as_string(curs)))
        except:
            log.exception("LOGGING EXCEPTION LOL")
        curs.execute(query)
        log.debug("Executed")
        if commit == True:
            log.debug("COMMIT(%s)" % query_id)
            conn.commit()
        log.debug("curs.rowcount: %s" % curs.rowcount)
        if curs.rowcount > 0:
            result_rows = curs.fetchall()
            for row in result_rows:
                i = 0
                row_dict = {}
                for col in working_columns:
                    try:
                        col = col.replace('"', '')  # strip quoting from identifiers
                        row_dict[col] = row[i]
                    except IndexError:
                        pass
                    i += 1
                log.debug("Appending dict to result: %s" % row_dict)
                rr = RawlResult(working_columns, row_dict)
                result.append(rr)
        curs.close()
    return result
Execute a query with provided parameters Parameters :query: SQL string with parameter placeholders :commit: If True, the query will commit :returns: List of rows
20,822
def configure_logging(info=False, debug=False):
    if info:
        logging.basicConfig(level=logging.INFO, format=LOG_FORMAT)
        # logger names were stripped in extraction; 'requests' and 'urllib3'
        # are typical noisy third-party loggers
        logging.getLogger('requests').setLevel(logging.WARNING)
        logging.getLogger('urllib3').setLevel(logging.WARNING)
    elif debug:
        logging.basicConfig(level=logging.DEBUG, format=DEBUG_LOG_FORMAT)
    else:
        logging.basicConfig(level=logging.WARNING, format=LOG_FORMAT)
        logging.getLogger('requests').setLevel(logging.WARNING)
        logging.getLogger('urllib3').setLevel(logging.WARNING)
Configure logging The function configures log messages. By default, log messages are sent to stderr. Set the parameter `debug` to activate the debug mode. :param debug: set the debug mode
20,823
def cwd(self, new_path):
    old_cwd = self._cwd
    self._cwd = new_path
    return old_cwd
Sets the cwd during reads and writes
20,824
def freeSave(self, obj):
    if self.saveIniator is obj and not self.inTransaction:
        self.saveIniator = None
        self.savedObject = set()
        self.connection.commit()
        return True
    return False
THIS IS WHERE COMMITS TAKE PLACE! Ends a saving session, only the initiator can end a session. The commit is performed at the end of the session
20,825
def check_params(**kwargs):
    missing_params = []
    check = True
    for param in kwargs:
        if kwargs[param] is None:
            missing_params.append(param)
    if len(missing_params) > 0:
        print("POT - Warning: following necessary parameters are missing")
        for p in missing_params:
            print("\n", p)
        check = False
    return check
check_params: check whether some parameters are missing
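Example: passing one missing (None) parameter triggers the warning and returns False (keyword names here are illustrative):

    ok = check_params(reg=1e-2, metric=None)
    # prints the warning, lists 'metric', and ok is False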
20,826
def _hjoin_multiline(join_char, strings):
    cstrings = [string.split("\n") for string in strings]
    max_num_lines = max(len(item) for item in cstrings)
    pp = []
    for k in range(max_num_lines):
        p = [cstring[k] for cstring in cstrings]
        pp.append(join_char + join_char.join(p) + join_char)
    return "\n".join([p.rstrip() for p in pp])
Horizontal join of multiline strings
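For example, joining two two-line strings with '|':

    print(_hjoin_multiline("|", ["ab\ncd", "ef\ngh"]))
    # |ab|ef|
    # |cd|gh|

Note the blocks must have equal line counts; a shorter block would raise IndexError when a missing line is indexed.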
20,827
def add_int(self, name, min, max, warp=None):
    min, max = map(int, (min, max))
    # error messages reconstructed; 'log' is the only supported warp per the docstring
    if max < min:
        raise ValueError('variable %s: max must be greater than min' % name)
    if warp not in (None, 'log'):
        raise ValueError('variable %s: warp=%s is not supported' % (name, warp))
    if min <= 0 and warp == 'log':
        raise ValueError('log-warping requires min > 0')
    self.variables[name] = IntVariable(name, min, max, warp)
An integer-valued dimension bounded between `min` <= x <= `max`. Note that the right endpoint of the interval includes `max`. When `warp` is None, the base measure associated with this dimension is a categorical distribution with equal weight on each of the integers in [min, max]. With `warp == 'log'`, the base measure is a uniform distribution on the log of the variable, with bounds at `log(min)` and `log(max)`. This is appropriate for variables that are "naturally" in log-space. Other `warp` functions are not supported (yet), but may be at a later time. Please note that this functionality is not supported for `hyperopt_tpe`.
20,828
def get_authenticated_user(self, redirect_uri, callback, scope=None, **args):
    code = self.get_argument('code', None)
    if not code:
        self.authorize_redirect(redirect_uri, scope=scope, **args)
        return
    # gen.Callback/gen.Wait key names were stripped in extraction;
    # 'token' and 'user_info' are stand-ins
    self.get_access_token(
        code, callback=(yield gen.Callback('token')),
        redirect_uri=redirect_uri)
    response = yield gen.Wait('token')
    if not response:
        callback(None)
        return
    try:
        user = json_decode(response.body)
    except:
        logging.warning("Error response %s fetching %s",
                        response.body, response.request.url)
        callback(None)
        return
    if 'error' in user:
        logging.warning("Error response %s fetching %s",
                        user['error'], response.request.url)
        callback(None)
        return
    # request method name and argument key reconstructed; originals stripped
    self.renren_request('users/getInfo', user['access_token'],
                        callback=(yield gen.Callback('user_info')))
    response = yield gen.Wait('user_info')
    if response.error and not response.body:
        logging.warning("Error response %s fetching %s",
                        response.error, response.request.url)
    elif response.error:
        logging.warning("Error response %s fetching %s: %s",
                        response.error, response.request.url, response.body)
    else:
        try:
            user['user_info'] = json_decode(response.body)
        except:
            pass
    callback(user)
    return
class RenrenHandler(tornado.web.RequestHandler, RenrenGraphMixin): @tornado.web.asynchronous @gen.engine def get(self): self.get_authenticated_user( callback=(yield gen.Callback('key')), redirect_uri=url) user = yield gen.Wait('key') if not user: raise web.HTTPError(500, "Renren auth failed") # do something else self.finish()
20,829
async def cancel_task(app: web.Application,
                      task: asyncio.Task,
                      *args, **kwargs) -> Any:
    return await get_scheduler(app).cancel(task, *args, **kwargs)
Convenience function for calling `TaskScheduler.cancel(task)` This will use the default `TaskScheduler` to cancel the given task. Example: import asyncio from datetime import datetime from brewblox_service import scheduler, service async def current_time(interval): while True: await asyncio.sleep(interval) print(datetime.now()) async def stop_after(app, task, duration): await asyncio.sleep(duration) await scheduler.cancel_task(app, task) print('stopped!') async def start(app): # Start first task task = await scheduler.create_task(app, current_time(interval=2)) # Start second task to stop the first await scheduler.create_task(app, stop_after(app, task, duration=10)) app = service.create_app(default_name='example') scheduler.setup(app) app.on_startup.append(start) service.furnish(app) service.run(app)
20,830
def _organize_variants(samples, batch_id):
    caller_names = [x["variantcaller"] for x in samples[0]["variants"]]
    calls = collections.defaultdict(list)
    for data in samples:
        for vrn in data["variants"]:
            calls[vrn["variantcaller"]].append(vrn["vrn_file"])
    data = samples[0]
    vrn_files = []
    for caller in caller_names:
        fnames = calls[caller]
        if len(fnames) == 1:
            vrn_files.append(fnames[0])
        else:
            vrn_files.append(population.get_multisample_vcf(fnames, batch_id, caller, data))
    return caller_names, vrn_files
Retrieve variant calls for all samples, merging batched samples into single VCF.
20,831
def clone(self, new_object):
    return self.__class__(new_object, self.method, self.name)
Returns an object that re-binds the underlying "method" to the specified new object.
20,832
def response(self, model=None, code=HTTPStatus.OK, description=None, **kwargs):
    code = HTTPStatus(code)
    if code is HTTPStatus.NO_CONTENT:
        assert model is None
    if model is None and code not in {HTTPStatus.ACCEPTED, HTTPStatus.NO_CONTENT}:
        if code.value not in http_exceptions.default_exceptions:
            raise ValueError("`model` parameter is required for code %d" % code)
        model = self.model(
            name='HTTPError%d' % code,
            model=DefaultHTTPErrorSchema(http_code=code)
        )
    if description is None:
        description = code.description

    def response_serializer_decorator(func):
        def dump_wrapper(*args, **kwargs):
            response = func(*args, **kwargs)
            extra_headers = None
            if response is None:
                if model is not None:
                    raise ValueError("Response cannot not be None with HTTP status %d" % code)
                return flask.Response(status=code)
            elif isinstance(response, flask.Response) or model is None:
                return response
            elif isinstance(response, tuple):
                response, _code, extra_headers = unpack(response)
            else:
                _code = code
            if HTTPStatus(_code) is code:
                response = model.dump(response).data
            return response, _code, extra_headers
        return dump_wrapper

    def decorator(func_or_class):
        # several lines were lost in extraction here; minimally, the wrapped
        # function and the documented model need to be prepared first
        decorated_func_or_class = response_serializer_decorator(func_or_class)
        api_model = model
        if code.value in http_exceptions.default_exceptions:
            api_model = [api_model]
        doc_decorator = self.doc(
            responses={
                code.value: (description, api_model)
            }
        )
        return doc_decorator(decorated_func_or_class)

    return decorator
Endpoint response OpenAPI documentation decorator. It automatically documents HTTPError%(code)d responses with relevant schemas. Arguments: model (flask_marshmallow.Schema) - it can be a class or an instance of the class, which will be used for OpenAPI documentation purposes. It can be omitted if ``code`` argument is set to an error HTTP status code. code (int) - HTTP status code which is documented. description (str) Example: >>> @namespace.response(BaseTeamSchema(many=True)) ... @namespace.response(code=HTTPStatus.FORBIDDEN) ... def get_teams(): ... if not user.is_admin: ... abort(HTTPStatus.FORBIDDEN) ... return Team.query.all()
20,833
def get_template_sources(self, template_name, template_dirs=None):
    if not template_dirs:
        template_dirs = self.get_dirs()
    for template_dir in template_dirs:
        try:
            name = safe_join(template_dir, template_name)
        except SuspiciousFileOperation:
            pass
        else:
            if Origin:
                yield Origin(
                    name=name,
                    template_name=template_name,
                    loader=self,
                )
            else:
                yield name
Returns the absolute paths to "template_name", when appended to each directory in "template_dirs". Any paths that don't lie inside one of the template dirs are excluded from the result set, for security reasons.
20,834
def doc(self):
    collector = []
    for name, varname in self.components._namespace.items():
        try:
            docstring = getattr(self.components, varname).__doc__
            lines = docstring.split('\n')
            # column-name literals reconstructed; originals were stripped in extraction
            collector.append({
                'Real Name': name,
                'Py Name': varname,
                'Eqn': lines[2].replace("Original Eqn:", "").strip(),
                'Unit': lines[3].replace("Units:", "").strip(),
                'Lim': lines[4].replace("Limits:", "").strip(),
                'Type': lines[5].replace("Type:", "").strip(),
                'Comment': '\n'.join(lines[7:]).strip(),
            })
        except:
            pass
    docs_df = _pd.DataFrame(collector)
    docs_df.fillna('None', inplace=True)
    order = ['Real Name', 'Py Name', 'Eqn', 'Unit', 'Lim', 'Type', 'Comment']
    return docs_df[order].sort_values(by='Real Name').reset_index(drop=True)
Formats a table of documentation strings to help users remember variable names, and understand how they are translated into python safe names. Returns ------- docs_df: pandas dataframe Dataframe with columns for the model components: - Real names - Python safe identifiers (as used in model.components) - Units string - Documentation strings from the original model file
20,835
def statuses_show(self, id, trim_user=None, include_my_retweet=None,
                  include_entities=None):
    params = {'id': id}
    set_bool_param(params, 'trim_user', trim_user)
    set_bool_param(params, 'include_my_retweet', include_my_retweet)
    set_bool_param(params, 'include_entities', include_entities)
    return self._get_api('statuses/show.json', params)
Returns a single Tweet, specified by the id parameter. https://dev.twitter.com/docs/api/1.1/get/statuses/show/%3Aid :param str id: (*required*) The numerical ID of the desired tweet. :param bool trim_user: When set to ``True``, the tweet's user object includes only the status author's numerical ID. :param bool include_my_retweet: When set to ``True``, any Tweet returned that has been retweeted by the authenticating user will include an additional ``current_user_retweet`` node, containing the ID of the source status for the retweet. :param bool include_entities: When set to ``False``, the ``entities`` node will not be included. :returns: A tweet dict.
20,836
def get_files(self):
    return github.PaginatedList.PaginatedList(
        github.File.File,
        self._requester,
        self.url + "/files",
        None
    )
:calls: `GET /repos/:owner/:repo/pulls/:number/files <http://developer.github.com/v3/pulls>`_ :rtype: :class:`github.PaginatedList.PaginatedList` of :class:`github.File.File`
20,837
def create_ellipse(self, xcen, ycen, a, b, ang, resolution=40.0):
    import math
    e1 = []
    e2 = []
    ang = ang - math.radians(90)
    for i in range(0, int(resolution) + 1):
        x = (-1 * a + 2 * a * float(i) / resolution)
        y = 1 - (x / a) ** 2
        if y < 1E-6:
            y = 1E-6
        y = math.sqrt(y) * b
        ptv = self.p2c((x * math.cos(ang) + y * math.sin(ang) + xcen,
                        y * math.cos(ang) - x * math.sin(ang) + ycen))
        y = -1 * y
        ntv = self.p2c((x * math.cos(ang) + y * math.sin(ang) + xcen,
                        y * math.cos(ang) - x * math.sin(ang) + ycen))
        e1.append(ptv)
        e2.append(ntv)
    e2.reverse()
    e1.extend(e2)
    self.create_line(e1, fill='black', width=1)  # original fill colour lost in extraction
Plot ellipse at x,y with size a,b and orientation ang
20,838
def decryptWithSessionRecord(self, sessionRecord, cipherText):
    previousStates = sessionRecord.getPreviousSessionStates()
    exceptions = []
    try:
        sessionState = SessionState(sessionRecord.getSessionState())
        plaintext = self.decryptWithSessionState(sessionState, cipherText)
        sessionRecord.setState(sessionState)
        return plaintext
    except InvalidMessageException as e:
        exceptions.append(e)
    for i in range(0, len(previousStates)):
        previousState = previousStates[i]
        try:
            promotedState = SessionState(previousState)
            plaintext = self.decryptWithSessionState(promotedState, cipherText)
            previousStates.pop(i)
            sessionRecord.promoteState(promotedState)
            return plaintext
        except InvalidMessageException as e:
            exceptions.append(e)
    raise InvalidMessageException("No valid sessions", exceptions)
:type sessionRecord: SessionRecord :type cipherText: WhisperMessage
20,839
def deploy_image(self, image_name, oc_new_app_args=None, project=None, name=None):
    self.project = project or self.get_current_project()
    # name template and split separator reconstructed; originals stripped
    name = name or 'app-{random_string}'.format(random_string=random_str(5))
    oc_new_app_args = oc_new_app_args or []
    new_image = self.import_image(image_name.split('/')[-1], image_name)
    c = self._oc_command(
        ["new-app"] + oc_new_app_args + [new_image] +
        ["-n"] + [project] + ["--name=%s" % name])
    logger.info("Creating new app in project %s", project)
    try:
        run_cmd(c)
    except subprocess.CalledProcessError as ex:
        raise ConuException("oc new-app failed: %s" % ex)
    return name
Deploy image in OpenShift cluster using 'oc new-app' :param image_name: image name with tag :param oc_new_app_args: additional parameters for the `oc new-app`, env variables etc. :param project: project where app should be created, default: current project :param name:str, name of application, if None random name is generated :return: str, name of the app
20,840
def create_or_bind_with_claims(self, source_identity):
    content = self._serialize.body(source_identity, 'Identity')
    # http_method, location_id GUID, and version literals were lost in extraction
    response = self._send(http_method='PUT',
                          location_id='...',
                          version='5.0-preview.1',
                          content=content)
    return self._deserialize('Identity', response)
CreateOrBindWithClaims. [Preview API] :param :class:`<Identity> <azure.devops.v5_0.identity.models.Identity>` source_identity: :rtype: :class:`<Identity> <azure.devops.v5_0.identity.models.Identity>`
20,841
def CheckForIncludeWhatYouUse(filename, clean_lines, include_state, error,
                              io=codecs):
    required = {}  # header name -> (line number, template entity)
    for linenum in xrange(clean_lines.NumLines()):
        line = clean_lines.elided[linenum]
        if not line or line[0] == '#':
            continue
        matched = _RE_PATTERN_STRING.search(line)
        if matched:
            # ... (logic that fills `required` was lost in extraction) ...
            pass
    # ... (the scan of the associated header that sets `header_found` and
    # builds `include_dict` was lost in extraction) ...
    if filename.endswith('.cc') and not header_found:
        return
    for required_header_unstripped in required:
        template = required[required_header_unstripped][1]
        if required_header_unstripped.strip('<>"') not in include_dict:
            error(filename, required[required_header_unstripped][0],
                  'build/include_what_you_use', 4,
                  'Add #include ' + required_header_unstripped + ' for ' + template)
Reports for missing stl includes. This function will output warnings to make sure you are including the headers necessary for the stl containers and functions that you use. We only give one reason to include a header. For example, if you use both equal_to<> and less<> in a .h file, only one (the latter in the file) of these will be reported as a reason to include the <functional>. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. include_state: An _IncludeState instance. error: The function to call with any errors found. io: The IO factory to use to read the header file. Provided for unittest injection.
20,842
def fit(self, x, y, **kwargs):
    self.mean = numpy.mean(y)
    return {}
Fit a naive model :param x: Predictors to use for fitting the data (this will not be used in naive models) :param y: Outcome
20,843
def _validate(self, inst: "InstanceNode", scope: ValidationScope,
              ctype: ContentType) -> None:
    if (scope.value & ValidationScope.syntax.value and
            inst.value not in self.type):
        raise YangTypeError(inst.json_pointer(), self.type.error_tag,
                            self.type.error_message)
    if (isinstance(self.type, LinkType) and
            scope.value & ValidationScope.semantics.value and
            self.type.require_instance):
        try:
            tgt = inst._deref()
        except YangsonException:
            tgt = []
        if not tgt:
            raise SemanticError(inst.json_pointer(), "instance-required")
    super()._validate(inst, scope, ctype)
Extend the superclass method.
20,844
def setup_driver(scenario):
    if not hasattr(world, 'config_files'):
        world.config_files = ConfigFiles()
    if not world.config_files.config_directory:
        world.config_files.set_config_directory(DriverWrappersPool.get_default_config_directory())
    world.global_status = {'test_passed': True}
    bdd_common_before_scenario(world, scenario)
Scenario initialization :param scenario: running scenario
20,845
def PCA(x, n=False):
    if not n:
        n = x.shape[1] - 1
    try:
        x = np.array(x)
    except:
        # error message reconstructed; the original literal was stripped
        raise ValueError('Impossible to convert x to a numpy array')
    assert type(n) == int, "Provided n is not an integer."
    assert x.shape[1] > n, "The requested n is bigger than \
        number of features in x."
    eigen_values, eigen_vectors = np.linalg.eig(np.cov(x.T))
    eigen_order = eigen_vectors.T[(-eigen_values).argsort()]
    return eigen_order[:n].dot(x.T).T
Principal component analysis function. **Args:** * `x` : input matrix (2d array), every row represents new sample **Kwargs:** * `n` : number of features returned (integer) - how many columns should the output keep **Returns:** * `new_x` : matrix with reduced size (lower number of columns)
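A minimal usage sketch with random data:

    import numpy as np

    x = np.random.normal(size=(100, 5))  # 100 samples, 5 features
    x_reduced = PCA(x, n=2)
    print(x_reduced.shape)  # (100, 2)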
20,846
def get_filestats(cluster, environ, topology, container, path, role=None):
    params = dict(
        cluster=cluster,
        environ=environ,
        topology=topology,
        container=container,
        path=path)
    if role is not None:
        params['role'] = role
    request_url = tornado.httputil.url_concat(create_url(FILESTATS_URL_FMT), params)
    raise tornado.gen.Return((yield fetch_url_as_json(request_url)))
:param cluster: :param environ: :param topology: :param container: :param path: :param role: :return:
20,847
def _set_traffic_class_mutation(self, v, load=False):
    if hasattr(v, "_utype"):
        v = v._utype(v)
    try:
        # most string literals (YANG keys, REST extensions, namespace) were
        # stripped in extraction; plausible values are filled in and the
        # extensions dicts are omitted
        t = YANGDynClass(
            v,
            base=YANGListType(
                "name", traffic_class_mutation.traffic_class_mutation,
                yang_name="traffic-class-mutation",
                rest_name="traffic-class-mutation",
                parent=self, is_container='list', user_ordered=False,
                path_helper=self._path_helper, yang_keys='name'),
            is_container='list',
            yang_name="traffic-class-mutation",
            rest_name="traffic-class-mutation",
            parent=self,
            path_helper=self._path_helper,
            extmethods=self._extmethods,
            register_paths=True,
            namespace='urn:brocade.com:mgmt:brocade-qos-mls',
            defining_module='brocade-qos-mls',
            yang_type='list',
            is_config=True)
    except (TypeError, ValueError):
        raise ValueError({
            'error-string': 'traffic_class_mutation must be of a type compatible with list',
            'defined-type': 'list',
            'generated-type': 'YANGDynClass(...)',
        })
    self.__traffic_class_mutation = t
    if hasattr(self, '_set'):
        self._set()
Setter method for traffic_class_mutation, mapped from YANG variable /qos/map/traffic_class_mutation (list) If this variable is read-only (config: false) in the source YANG file, then _set_traffic_class_mutation is considered as a private method. Backends looking to populate this variable should do so via calling thisObj._set_traffic_class_mutation() directly.
20,848
def create_credentials(self, environment_id, source_type=None,
                       credential_details=None, **kwargs):
    if environment_id is None:
        raise ValueError('environment_id must be provided')
    if credential_details is not None:
        credential_details = self._convert_model(credential_details, CredentialDetails)
    headers = {}
    if 'headers' in kwargs:
        headers.update(kwargs.get('headers'))
    sdk_headers = get_sdk_headers('discovery', 'V1', 'create_credentials')
    headers.update(sdk_headers)
    params = {'version': self.version}
    data = {
        'source_type': source_type,
        'credential_details': credential_details
    }
    url = '/v1/environments/{0}/credentials'.format(
        *self._encode_path_vars(environment_id))
    response = self.request(
        method='POST',
        url=url,
        headers=headers,
        params=params,
        json=data,
        accept_json=True)
    return response
Create credentials. Creates a set of credentials to connect to a remote source. Created credentials are used in a configuration to associate a collection with the remote source. **Note:** All credentials are sent over an encrypted connection and encrypted at rest. :param str environment_id: The ID of the environment. :param str source_type: The source that this credentials object connects to. - `box` indicates the credentials are used to connect an instance of Enterprise Box. - `salesforce` indicates the credentials are used to connect to Salesforce. - `sharepoint` indicates the credentials are used to connect to Microsoft SharePoint Online. - `web_crawl` indicates the credentials are used to perform a web crawl. = `cloud_object_storage` indicates the credentials are used to connect to an IBM Cloud Object Store. :param CredentialDetails credential_details: Object containing details of the stored credentials. Obtain credentials for your source from the administrator of the source. :param dict headers: A `dict` containing the request headers :return: A `DetailedResponse` containing the result, headers and HTTP status code. :rtype: DetailedResponse
20,849
def update_status(self, card_id, code, reimburse_status):
    # endpoint name and data keys reconstructed; originals stripped in extraction
    return self._post(
        'reimburse/updateinvoicestatus',
        data={
            'card_id': card_id,
            'encrypt_code': code,
            'reimburse_status': reimburse_status,
        },
    )
Update the status of an invoice card. For details see https://mp.weixin.qq.com/wiki?id=mp1497082828_r1cI2 :param card_id: ID of the invoice card template :param code: code of the invoice card :param reimburse_status: reimbursement status of the invoice
20,850
def _sorted_keys(self):
    try:
        keys = self._cache['sorted_keys']  # cache key name reconstructed
    except KeyError:
        keys = self._cache['sorted_keys'] = sorted(self.keys(), key=parse_version)
    return keys
Return list of keys sorted by version Sorting is done based on :py:func:`pkg_resources.parse_version`
20,851
def update_entity(self, entity, if_match='*'):
    request = _update_entity(entity, if_match, self._require_encryption,
                             self._key_encryption_key, self._encryption_resolver)
    self._add_to_batch(entity['PartitionKey'], entity['RowKey'], request)
Adds an update entity operation to the batch. See :func:`~azure.storage.table.tableservice.TableService.update_entity` for more information on updates. The operation will not be executed until the batch is committed. :param entity: The entity to update. Could be a dict or an entity object. Must contain a PartitionKey and a RowKey. :type entity: dict or :class:`~azure.storage.table.models.Entity` :param str if_match: The client may specify the ETag for the entity on the request in order to compare to the ETag maintained by the service for the purpose of optimistic concurrency. The update operation will be performed only if the ETag sent by the client matches the value maintained by the server, indicating that the entity has not been modified since it was retrieved by the client. To force an unconditional update, set If-Match to the wildcard character (*).
20,852
def uninstalled(name):
    # state keys follow the standard Salt pattern; comment strings reconstructed
    ret = {'name': name, 'changes': {}, 'result': None, 'comment': ''}
    old = __salt__['flatpak.is_installed'](name)
    if not old:
        ret['comment'] = 'Package {0} is not installed'.format(name)
        ret['result'] = True
        return ret
    else:
        if __opts__['test']:
            ret['comment'] = 'Package {0} would have been uninstalled'.format(name)
            ret['changes']['old'] = old[0]['version']
            ret['changes']['new'] = None
            ret['result'] = None
            return ret
        __salt__['flatpak.uninstall'](name)
        if not __salt__['flatpak.is_installed'](name):
            ret['comment'] = 'Package {0} uninstalled'.format(name)
            ret['changes']['old'] = old[0]['version']
            ret['changes']['new'] = None
            ret['result'] = True
            return ret
Ensure that the named package is not installed. Args: name (str): The flatpak package. Returns: dict: The ``result`` and ``output``. Example: .. code-block:: yaml uninstall_package: flatpak.uninstalled: - name: gimp
20,853
def typed_node_from_id(id: str) -> TypedNode:
    # filter list and node keys reconstructed; the original literals were stripped
    filter_out_types = [
        'cliqueLeader',
        'Class',
        'Node',
        'Individual',
        'quality',
        'sequence feature',
    ]
    node = next(get_scigraph_nodes([id]))
    if 'lbl' in node:
        label = node['lbl']
    else:
        label = None
    types = [typ.lower() for typ in node['meta']['types']
             if typ not in filter_out_types]
    return TypedNode(
        id=node['id'],
        label=label,
        type=types[0],
        taxon=get_taxon(id)
    )
Get typed node from id :param id: id as curie :return: TypedNode object
20,854
def clear(self):
    self._cx, self._cy = (0, 0)
    self._canvas.rectangle(self._device.bounding_box,
                           fill=self.default_bgcolor)
    self.flush()
Clears the display and resets the cursor position to ``(0, 0)``.
20,855
def set_attrs(self, **attrs):
    self.attrs.update(attrs)
    self._backend.set_attrs(**attrs)
Set model attributes, e.g. input resistance of a cell.
20,856
def dre_dc(self, pars):
    self._set_parameters(pars)
    num1a = np.log(self.w * self.tau) * self.otc * np.sin(self.ang)
    num1b = self.otc * np.cos(self.ang) * np.pi / 2.0
    term1 = (num1a + num1b) / self.denom
    num2 = self.otc * np.sin(self.c / np.pi) * 2
    denom2 = self.denom ** 2
    term2 = num2 / denom2
    num3a = 2 * np.log(self.w * self.tau) * self.otc * np.cos(self.ang)
    num3b = 2 * ((self.w * self.tau) ** 2) * np.pi / 2.0 * np.sin(self.ang)
    num3c = 2 * np.log(self.w * self.tau) * self.otc2
    term3 = num3a - num3b + num3c
    result = self.sigmai * self.m * (term1 + term2 * term3)
    return result
r""" :math:Add formula
20,857
def _capitalize_word(text, pos):
    while pos < len(text) and not text[pos].isalnum():
        pos += 1
    if pos < len(text):
        text = text[:pos] + text[pos].upper() + text[pos + 1:]
    while pos < len(text) and text[pos].isalnum():
        pos += 1
    return text, pos
Capitalize the current (or following) word.
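For example, starting just before "hello" capitalizes it and leaves the cursor past the end of the word:

    _capitalize_word("say hello", 3)   # ('say Hello', 9)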
20,858
def get_class_from_settings(settings_key):
    cls_path = getattr(settings, settings_key, None)
    if not cls_path:
        raise NotImplementedError()
    try:
        return get_model_from_settings(settings_key=settings_key)
    except:
        try:
            return get_class_from_settings_from_apps(settings_key=settings_key)
        except:
            return get_class_from_settings_full_path(settings_key)
Gets a class from a setting key. This will first check loaded models, then look in installed apps, then fallback to import from lib. :param settings_key: the key defined in settings to the value for
20,859
def generate_wildcard_pem_bytes():
    key = generate_private_key(u'rsa')
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'*')])
    cert = (
        x509.CertificateBuilder()
        .issuer_name(name)
        .subject_name(name)
        .not_valid_before(datetime.today() - timedelta(days=1))
        .not_valid_after(datetime.now() + timedelta(days=3650))
        .serial_number(int(uuid.uuid4()))
        .public_key(key.public_key())
        .sign(
            private_key=key,
            algorithm=hashes.SHA256(),
            backend=default_backend())
    )
    return b''.join((
        key.private_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PrivateFormat.TraditionalOpenSSL,
            encryption_algorithm=serialization.NoEncryption()),
        cert.public_bytes(serialization.Encoding.PEM)
    ))
Generate a wildcard (subject name '*') self-signed certificate valid for 10 years. https://cryptography.io/en/latest/x509/tutorial/#creating-a-self-signed-certificate :return: Bytes representation of the PEM certificate data
20,860
def validate(self, schema):
    try:
        validate(instance=self.to_dict(), schema=schema, cls=Draft4Validator)
    except ValidationError as e:
        raise ValidationError(_validate_err_template.format(VDOM_SCHEMA, e))
Validate VDOM against given JSON Schema Raises ValidationError if schema does not match
20,861
def namespace_map(self, target):
    self._check_target(target)
    return target.namespace_map or self._default_namespace_map
Returns the namespace_map used for Thrift generation. :param target: The target to extract the namespace_map from. :type target: :class:`pants.backend.codegen.targets.java_thrift_library.JavaThriftLibrary` :returns: The namespaces to remap (old to new). :rtype: dictionary
20,862
def set_format(self, column_or_columns, formatter):
    if inspect.isclass(formatter):
        formatter = formatter()
    if callable(formatter) and not hasattr(formatter, 'format_column'):
        formatter = _formats.FunctionFormatter(formatter)
    if not hasattr(formatter, 'format_column'):
        raise Exception('Expected Formatter or function: ' + str(formatter))
    for label in self._as_labels(column_or_columns):
        if formatter.converts_values:
            self[label] = formatter.convert_column(self[label])
        self._formats[label] = formatter
    return self
Set the format of a column.
20,863
def get_category_or_404(path):
    path_bits = [p for p in path.split('/') if p]
    return get_object_or_404(Category, slug=path_bits[-1])
Retrieve a Category instance by a path.
20,864
def build_application(conf):
    # config key literals reconstructed; originals stripped in extraction
    if isinstance(conf.adapter_options, list):
        conf['adapter_options'] = {key: val
                                   for _dict in conf.adapter_options
                                   for key, val in _dict.items()}
    elif conf.adapter_options is None:
        conf['adapter_options'] = {}
    else:
        conf['adapter_options'] = copy.copy(conf.adapter_options)
    conf['app'] = conf.app or bottle.default_app()
    if isinstance(conf.app, six.string_types):
        conf['app'] = bottle.load_app(conf.app)

    def _find_bottle_app(_app):
        while hasattr(_app, 'app'):
            if isinstance(_app, bottle.Bottle):
                break
            _app = _app.app
        assert isinstance(_app, bottle.Bottle), 'Could not find a bottle.Bottle app'
        return _app

    bottle_app = _find_bottle_app(conf.app)
    # route path reconstructed; the original literal was stripped
    bottle_app.route(
        path='/_version', method='GET', callback=_version_callback)

    def _show_routes():
        if conf.app and not conf.quiet:
            if conf.reloader and os.getenv('BOTTLE_CHILD'):
                LOG.info("Running bottle server with reloader.")
            elif not conf.reloader:
                pass
            else:
                return
            routes = fmt_routes(bottle_app)
            if routes:
                print('\n{}\n'.format(routes), end='')

    _show_routes()
    return conf.app
Do some setup and return the wsgi app.
20,865
def var(self, ddof=0):
    N = len(self)
    if N:
        v = self.values()
        mu = sum(v)
        return (sum(v * v) - mu * mu / N) / (N - ddof)
    else:
        return None
Calculate variance of timeseries. Return a vector containing the variances of each series in the timeseries. :parameter ddof: delta degree of freedom, the divisor used in the calculation is given by ``N - ddof`` where ``N`` represents the length of timeseries. Default ``0``. .. math:: var = \\frac{\\sum_i^N (x - \\mu)^2}{N-ddof}
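As a quick check of the formula, for the series [1, 2, 3]: the sum of squares is 14 and (sum)^2/N is 36/3 = 12, so the population variance (ddof=0) is (14 - 12)/3 ≈ 0.667, which matches numpy:

    import numpy as np
    np.var([1, 2, 3])  # 0.666..., i.e. (14 - 36/3) / 3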
20,866
def show_in_view(self, sourceview, matches, targetname=None):
    # view-action literals reconstructed; the originals were stripped
    append = self.options.append_view or self.options.alter_view == 'append'
    remove = self.options.alter_view == 'remove'
    action_name = (', appending to' if append else
                   ', removing from' if remove else ' into')
    targetname = config.engine.show(matches,
                                    targetname or self.options.to_view or "rtcontrol",
                                    append=append, disjoin=remove)
    msg = "Filtered %d out of %d torrents using [ %s ]" % (
        len(matches), sourceview.size(), sourceview.matcher)
    self.LOG.info("%s%s rTorrent view %r." % (msg, action_name, targetname))
    config.engine.log(msg)
Show search result in ncurses view.
20,867
def set_provider_links(self, resource_ids=None):
    if resource_ids is None:
        raise NullArgument()
    # metadata/map key names reconstructed; originals stripped in extraction
    metadata = Metadata(**settings.METADATA['provider_links'])
    if metadata.is_read_only():
        raise NoAccess()
    if self._is_valid_input(resource_ids, metadata, array=True):
        self._my_map['providerLinkIds'] = []
        for i in resource_ids:
            self._my_map['providerLinkIds'].append(str(i))
    else:
        raise InvalidArgument()
Sets a provider chain in order from the most recent source to the originating source. :param resource_ids: the new source :type resource_ids: ``osid.id.Id[]`` :raise: ``InvalidArgument`` -- ``resource_ids`` is invalid :raise: ``NoAccess`` -- ``Metadata.isReadOnly()`` is ``true`` :raise: ``NullArgument`` -- ``resource_ids`` is ``null`` *compliance: mandatory -- This method must be implemented.*
20,868
def process_file(self, filename):
    if self.config.dry_run:
        if not self.config.internal:
            self.logger.info("Dry run mode for script %s", filename)
        with open(filename) as handle:
            for line in handle:
                yield line[0:-1] if line[-1] == '\n' else line
    else:
        if not self.config.internal:
            self.logger.info("Running script %s", filename)
        for line in self.process_script(filename):
            yield line
Processing one file.
20,869
def _on_new_data_received(self, data: bytes):
    # [Body partially lost in extraction: the bytes comparison literal and the
    # XML parsing that produces `xml_element` are missing. The surviving
    # fragment dispatches a received captcha:]
    #     self.callback.on_captcha_received(login.CaptchaElement(xml_element))
    ...
Gets called whenever we get a whole new XML element from kik's servers. :param data: The data received (bytes)
20,870
def expl_var(self, greenacre=True, N=None):
    if greenacre:
        greenacre_inertia = (self.K / (self.K - 1.) *
                             (sum(self.s ** 4) - (self.J - self.K) / self.K ** 2.))
        return (self._benzecri() / greenacre_inertia)[:N]
    else:
        E = self._benzecri() if self.cor else self.s ** 2
        return (E / sum(E))[:N]
Return proportion of explained inertia (variance) for each factor. :param greenacre: Perform Greenacre correction (default: True)
20,871
def get(**kwargs):
    sensor = None
    tick = 0
    driver = DHTReader(**kwargs)
    while not sensor and tick < TIME_LIMIT:
        try:
            sensor = driver.receive_data()
        except DHTException:
            tick += 1
    return sensor
Safe sensor wrapper
20,872
def register_plugin(manager):
    manager.register_blueprint(player)
    manager.register_mimetype_function(detect_playable_mimetype)
    # widget parameter literals reconstructed; the originals were stripped
    manager.register_widget(
        place='styles',
        type='stylesheet',
        endpoint='player.static',
        filename='css/browse.css'
    )
    manager.register_widget(
        place='entry-link',
        type='link',
        endpoint='player.audio',
        filter=PlayableFile.detect
    )
    manager.register_widget(
        place='entry-link',
        icon='playlist',
        type='link',
        endpoint='player.playlist',
        filter=PlayListFile.detect
    )
    manager.register_widget(
        place='entry-actions',
        css='play',
        type='button',
        endpoint='player.audio',
        filter=PlayableFile.detect
    )
    manager.register_widget(
        place='entry-actions',
        css='play',
        type='button',
        endpoint='player.playlist',
        filter=PlayListFile.detect
    )
    if manager.get_argument('player_directory_play'):
        manager.register_widget(
            place='header',
            type='button',
            endpoint='player.directory',
            text='Play directory',
            filter=PlayableDirectory.detect
        )
Register blueprints and actions using given plugin manager. :param manager: plugin manager :type manager: browsepy.manager.PluginManager
20,873
def link_contentkey_authorization_policy(access_token, ckap_id, options_id,
                                         ams_redirected_rest_endpoint):
    # path and body literals reconstructed; originals stripped in extraction
    path = '/ContentKeyAuthorizationPolicies'
    full_path = ''.join([path, "('", ckap_id, "')", "/$links/Options"])
    full_path_encoded = urllib.parse.quote(full_path, safe='')
    endpoint = ''.join([ams_rest_endpoint, full_path_encoded])
    uri = ''.join([ams_redirected_rest_endpoint,
                   'ContentKeyAuthorizationPolicyOptions',
                   "('", options_id, "')"])
    body = '{"uri": "' + uri + '"}'
    return do_ams_post(endpoint, full_path_encoded, body, access_token,
                       "json_only", "1.0;NetFx")
Link Media Service Content Key Authorization Policy. Args: access_token (str): A valid Azure authentication token. ckap_id (str): A Media Service Asset Content Key Authorization Policy ID. options_id (str): A Media Service Content Key Authorization Policy Options . ams_redirected_rest_endpoint (str): A Media Service Redirected Endpoint. Returns: HTTP response. JSON body.
20,874
def chop_into_sequences(episode_ids, unroll_ids, agent_indices, feature_columns, state_columns, max_seq_len, dynamic_max=True, _extra_padding=0): prev_id = None seq_lens = [] seq_len = 0 unique_ids = np.add( np.add(episode_ids, agent_indices), np.array(unroll_ids) << 32) for uid in unique_ids: if (prev_id is not None and uid != prev_id) or \ seq_len >= max_seq_len: seq_lens.append(seq_len) seq_len = 0 seq_len += 1 prev_id = uid if seq_len: seq_lens.append(seq_len) assert sum(seq_lens) == len(unique_ids) if dynamic_max: max_seq_len = max(seq_lens) + _extra_padding feature_sequences = [] for f in feature_columns: f = np.array(f) f_pad = np.zeros((len(seq_lens) * max_seq_len, ) + np.shape(f)[1:]) seq_base = 0 i = 0 for l in seq_lens: for seq_offset in range(l): f_pad[seq_base + seq_offset] = f[i] i += 1 seq_base += max_seq_len assert i == len(unique_ids), f feature_sequences.append(f_pad) initial_states = [] for s in state_columns: s = np.array(s) s_init = [] i = 0 for l in seq_lens: s_init.append(s[i]) i += l initial_states.append(np.array(s_init)) return feature_sequences, initial_states, np.array(seq_lens)
Truncate and pad experiences into fixed-length sequences. Arguments: episode_ids (list): List of episode ids for each step. unroll_ids (list): List of identifiers for the sample batch. This is used to make sure sequences are cut between sample batches. agent_indices (list): List of agent ids for each step. Note that this has to be combined with episode_ids for uniqueness. feature_columns (list): List of arrays containing features. state_columns (list): List of arrays containing LSTM state values. max_seq_len (int): Max length of sequences before truncation. dynamic_max (bool): Whether to dynamically shrink the max seq len. For example, if max len is 20 and the actual max seq len in the data is 7, it will be shrunk to 7. _extra_padding (int): Add extra padding to the end of sequences. Returns: f_pad (list): Padded feature columns. These will be of shape [NUM_SEQUENCES * MAX_SEQ_LEN, ...]. s_init (list): Initial states for each sequence, of shape [NUM_SEQUENCES, ...]. seq_lens (list): List of sequence lengths, of shape [NUM_SEQUENCES]. Examples: >>> f_pad, s_init, seq_lens = chop_into_sequences( episode_ids=[1, 1, 5, 5, 5, 5], unroll_ids=[4, 4, 4, 4, 4, 4], agent_indices=[0, 0, 0, 0, 0, 0], feature_columns=[[4, 4, 8, 8, 8, 8], [1, 1, 0, 1, 1, 0]], state_columns=[[4, 5, 4, 5, 5, 5]], max_seq_len=3) >>> print(f_pad) [[4, 4, 0, 8, 8, 8, 8, 0, 0], [1, 1, 0, 0, 1, 1, 0, 0, 0]] >>> print(s_init) [[4, 4, 5]] >>> print(seq_lens) [2, 3, 1]
20,875
def LookupNamespace(self, prefix):
    ret = libxml2mod.xmlTextReaderLookupNamespace(self._o, prefix)
    return ret
Resolves a namespace prefix in the scope of the current element.
20,876
def build_data_availability(datasets_json):
    data_availability = None
    # key names reconstructed; the original literals were stripped
    if 'availability' in datasets_json and datasets_json.get('availability'):
        data_availability = datasets_json.get('availability')[0].get('content')
    return data_availability
Given datasets in JSON format, get the data availability from it if present
20,877
def param_sweep(model, sequences, param_grid, n_jobs=1, verbose=0):
    if isinstance(param_grid, dict):
        param_grid = ParameterGrid(param_grid)
    elif not isinstance(param_grid, ParameterGrid):
        raise ValueError("param_grid must be a dict or ParameterGrid instance")
    iter_args = ((clone(model).set_params(**params), sequences)
                 for params in param_grid)
    models = Parallel(n_jobs=n_jobs, verbose=verbose)(
        delayed(_param_sweep_helper)(args) for args in iter_args)
    return models
Fit a series of models over a range of parameters. Parameters ---------- model : msmbuilder.BaseEstimator An *instance* of an estimator to be used to fit data. sequences : list of array-like List of sequences, or a single sequence. Each sequence should be a 1D iterable of state labels. Labels can be integers, strings, or other orderable objects. param_grid : dict or sklearn.grid_search.ParameterGrid Parameter grid to specify models to fit. See sklearn.grid_search.ParameterGrid for an explanation n_jobs : int, optional Number of jobs to run in parallel using joblib.Parallel Returns ------- models : list List of models fit to the data according to param_grid
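A usage sketch (MarkovStateModel stands in for any msmbuilder estimator):

    from msmbuilder.msm import MarkovStateModel

    sequences = [[0, 0, 1, 1, 0], [1, 1, 0, 0, 1]]
    grid = {'lag_time': [1, 2, 3]}
    models = param_sweep(MarkovStateModel(), sequences, grid, n_jobs=2)
    # one fitted model per parameter combination, in ParameterGrid order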
20,878
def get_conn(self):
    if not self._conn:
        self._conn = storage.Client(credentials=self._get_credentials())
    return self._conn
Returns a Google Cloud Storage service object.
20,879
def _data_from_dotnotation(self, key, default=None):
    if key is None:
        raise KeyError('NoneType is not a valid key')  # message reconstructed
    doc = self._collection.find_one({"_id": ObjectId(self._workflow_id)})
    if doc is None:
        return default
    for k in key.split('.'):
        doc = doc[k]
    return doc
Returns the MongoDB data from a key using dot notation. Args: key (str): The key to the field in the workflow document. Supports MongoDB's dot notation for embedded fields. default (object): The default value that is returned if the key does not exist. Returns: object: The data for the specified key or the default value.
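For example, with a workflow document {'stats': {'count': 3}}, a dotted key drills into the embedded field (the workflow object here is hypothetical):

    count = workflow._data_from_dotnotation('stats.count')  # 3
    # note: the default is returned only when the workflow document itself is
    # absent; a missing intermediate key raises KeyError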
20,880
def snapshot(self, channel=0, path_file=None, timeout=None):
    ret = self.command(
        "snapshot.cgi?channel={0}".format(channel), timeout_cmd=timeout
    )
    if path_file:
        with open(path_file, 'wb') as out_file:
            shutil.copyfileobj(ret.raw, out_file)
    return ret.raw
Args: channel: Values according with Amcrest API: 0 - regular snapshot 1 - motion detection snapshot 2 - alarm snapshot If no channel param is used, default is 0 path_file: If path_file is provided, save the snapshot in the path Return: raw from http request
20,881
def get_extension_rights(self):
    # location_id GUID and version literals were lost in extraction
    response = self._send(http_method='GET',
                          location_id='...',
                          version='5.0-preview.1')
    return self._deserialize('ExtensionRightsResult', response)
GetExtensionRights. [Preview API] :rtype: :class:`<ExtensionRightsResult> <azure.devops.v5_0.licensing.models.ExtensionRightsResult>`
20,882
def expand(self, vs=None, conj=False):
    vs = self._expect_vars(vs)
    if vs:
        outer, inner = (And, Or) if conj else (Or, And)
        terms = [inner(self.restrict(p), *boolfunc.point2term(p, conj))
                 for p in boolfunc.iter_points(vs)]
        if conj:
            terms = [term for term in terms if term is not One]
        else:
            terms = [term for term in terms if term is not Zero]
        return outer(*terms, simplify=False)
    else:
        return self
Return the Shannon expansion with respect to a list of variables.
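For instance, expanding f = a & b on a gives the familiar Shannon form f = (~a & f|a=0) | (a & f|a=1). A sketch assuming the surrounding library follows pyeda's expression API:

    a, b = map(exprvar, 'ab')
    f = a & b
    f.expand([a])
    # the a=0 branch restricts to 0 and is dropped, leaving And(a, b)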
20,883
def _matches(self, url, options, general_re, domain_required_rules, rules_with_options):
    if general_re and general_re.search(url):
        return True
    rules = []
    if 'domain' in options and domain_required_rules:
        src_domain = options['domain']
        for domain in _domain_variants(src_domain):
            if domain in domain_required_rules:
                rules.extend(domain_required_rules[domain])
    rules.extend(rules_with_options)
    if self.skip_unsupported_rules:
        rules = [rule for rule in rules if rule.matching_supported(options)]
    return any(rule.match_url(url, options) for rule in rules)
Return if ``url``/``options`` are matched by rules defined by ``general_re``, ``domain_required_rules`` and ``rules_with_options``. ``general_re`` is a compiled regex for rules without options. ``domain_required_rules`` is a {domain: [rules_which_require_it]} mapping. ``rules_with_options`` is a list of AdblockRule instances that don't require any domain, but have other options.
20,884
def _checkDragDropEvent(self, ev):
    mimedata = ev.mimeData()
    if mimedata.hasUrls():
        urls = [str(url.toLocalFile()) for url in mimedata.urls()
                if url.toLocalFile()]
    else:
        urls = []
    if urls:
        ev.acceptProposedAction()
        return urls
    else:
        ev.ignore()
        return None
Checks if event contains a file URL, accepts if it does, ignores if it doesn't
20,885
def container_move(object_id, input_params={}, always_retry=False, **kwargs):
    return DXHTTPRequest('/%s/move' % object_id, input_params,
                         always_retry=always_retry, **kwargs)
Invokes the /container-xxxx/move API method. For more info, see: https://wiki.dnanexus.com/API-Specification-v1.0.0/Folders-and-Deletion#API-method%3A-%2Fclass-xxxx%2Fmove
20,886
def append(self, item):
    if self.should_flush():
        self.flush()
    self.items.append(item)
Add new item to the list. If needed, append will first flush existing items and clear existing items. Args: item: an item to add to the list.
20,887
def get_response_example(cls, operation, response):
    # path template reconstructed; the original literal was garbled
    path = "#/paths/'{0}'/{1}/responses/{2}".format(
        operation.path, operation.method, response.name)
    kwargs = dict(paths=[path])
    if response.type in PRIMITIVE_TYPES:
        result = cls.get_example_value_for_primitive_type(
            response.type, response.properties, response.type_format, **kwargs)
    else:
        schema = SchemaObjects.get(response.type)
        result = cls.get_example_by_schema(schema, **kwargs)
    return result
Get example for response object by operation object :param Operation operation: operation object :param Response response: response object
20,888
def _run(self, bundle, container_id=None, empty_process=False, log_path=None,
         pid_file=None, sync_socket=None, command="run",
         log_format="kubernetes"):
    container_id = self.get_container_id(container_id)
    cmd = self._init_command(command)
    if not os.path.exists(bundle):
        bot.exit('%s does not exist.' % bundle)
    # flag literals reconstructed from the singularity oci CLI
    cmd = cmd + ['--bundle', bundle]
    cmd = cmd + ['--log-format', log_format]
    if log_path != None:
        cmd = cmd + ['--log-path', log_path]
    if pid_file != None:
        cmd = cmd + ['--pid-file', pid_file]
    if sync_socket != None:
        cmd = cmd + ['--sync-socket', sync_socket]
    if empty_process:
        cmd.append('--empty-process')
    cmd.append(container_id)
    result = self._send_command(cmd, sudo=True)
    return self.state(container_id, sudo=True, sync_socket=sync_socket)
_run is the base function for run and create, the only difference between the two being that run does not have an option for sync_socket. Equivalent command line example: singularity oci create [create options...] <container_ID> Parameters ========== bundle: the full path to the bundle folder container_id: an optional container_id. If not provided, use same container_id used to generate OciImage instance empty_process: run container without executing container process (for example, for a pod container waiting for signals). This is a specific use case for tools like Kubernetes log_path: the path to store the log. pid_file: specify the pid file path to use sync_socket: the path to the unix socket for state synchronization. command: the command (run or create) to use (default is run) log_format: defaults to kubernetes. Can also be "basic" or "json"
20,889
def visitFunctionCall(self, ctx):
    func_name = ctx.fnname().getText()
    if ctx.parameters() is not None:
        parameters = self.visit(ctx.parameters())
    else:
        parameters = []
    return self._functions.invoke_function(self._eval_context,
                                           func_name, parameters)
expression : fnname LPAREN parameters? RPAREN
20,890
def LazyField(lookup_name, scope):
    def __init__(self, stream=None):
        base_cls = self._pfp__scope.get_id(self._pfp__lazy_name)
        self.__class__.__bases__ = (base_cls,)
        base_cls.__init__(self, stream)

    new_class = type(lookup_name + "_lazy", (fields.Field,), {
        "__init__": __init__,
        "_pfp__scope": scope,
        "_pfp__lazy_name": lookup_name
    })
    return new_class
Super non-standard stuff here. Dynamically changing the base class using the scope and the lazy name when the class is instantiated. This works as long as the original base class is not directly inheriting from object (which we're not, since our original base class is fields.Field).
20,891
def static_serve(request, path, client):
    # the SERVE_REMOTE redirect branch and the fallback response (`resp`)
    # were lost in extraction; the combo-emulation branch survives
    if msettings['EMULATE_COMBO']:
        try:
            combo_match = _find_combo_match(path)
            if combo_match:
                return combo_serve(request, combo_match, client)
        except KeyError:
            pass
    return resp
Given a request for a media asset, this view does the necessary wrangling to get the correct thing delivered to the user. This can also emulate the combo behavior seen when SERVE_REMOTE == False and EMULATE_COMBO == True.
20,892
def generate_pagerank_graph(num_vertices=250, **kwargs):
    g = minimal_random_graph(num_vertices, **kwargs)
    r = np.zeros(num_vertices)
    for k, pr in nx.pagerank(g).items():
        r[k] = pr
    g = set_types_rank(g, rank=r, **kwargs)
    return g
Creates a random graph where the vertex types are selected using their pagerank. Calls :func:`.minimal_random_graph` and then :func:`.set_types_rank` where the ``rank`` keyword argument is given by :func:`networkx.pagerank`. Parameters ---------- num_vertices : int (optional, the default is 250) The number of vertices in the graph. **kwargs : Any parameters to send to :func:`.minimal_random_graph` or :func:`.set_types_rank`. Returns ------- :class:`.QueueNetworkDiGraph` A graph with a ``pos`` vertex property and the ``edge_type`` edge property. Notes ----- This function sets the edge types of a graph to be either 1, 2, or 3. It sets the vertices to type 2 by selecting the top ``pType2 * g.number_of_nodes()`` vertices given by the :func:`~networkx.pagerank` of the graph. A loop is added to all vertices identified this way (if one does not exist already). It then randomly sets vertices close to the type 2 vertices as type 3, and adds loops to these vertices as well. These loops then have edge types that correspond to the vertices type. The rest of the edges are set to type 1.
20,893
def write(self, process_tile, data):
    data = self._prepare_array(data)
    if data.mask.all():
        logger.debug("data empty, nothing to write")
    else:
        bucket_resource = get_boto3_bucket(self._bucket) if self._bucket else None
        for tile in self.pyramid.intersecting(process_tile):
            out_path = self.get_path(tile)
            self.prepare_path(tile)
            out_tile = BufferedTile(tile, self.pixelbuffer)
            write_raster_window(
                in_tile=process_tile,
                in_data=data,
                out_profile=self.profile(out_tile),
                out_tile=out_tile,
                out_path=out_path,
                bucket_resource=bucket_resource
            )
Write data from process tiles into PNG file(s). Parameters ---------- process_tile : ``BufferedTile`` must be member of process ``TilePyramid``
20,894
def readWiggleLine(self, line):
    if (line.isspace() or line.startswith("#")
            or line.startswith("browser") or line.startswith("track")):
        return
    elif line.startswith("variableStep"):
        self._mode = self._VARIABLE_STEP
        self.parseStep(line)
        return
    elif line.startswith("fixedStep"):
        self._mode = self._FIXED_STEP
        self.parseStep(line)
        return
    elif self._mode is None:
        raise ValueError("Unexpected input line: %s" % line.strip())

    if self._queryReference != self._reference:
        return

    fields = line.split()
    if self._mode == self._VARIABLE_STEP:
        start = int(fields[0]) - 1  # to 0-based
        val = float(fields[1])
    else:
        start = self._start
        self._start += self._step
        val = float(fields[0])

    if start < self._queryEnd and start > self._queryStart:
        if self._position is None:
            self._position = start
            self._data.start = start
        # fill any gap with NaN (fill value reconstructed; literal was stripped)
        while self._position < start:
            self._data.values.append(float('NaN'))
            self._position += 1
        for _ in xrange(self._span):
            self._data.values.append(val)
        self._position += self._span
Read a wiggle line. If it is a data line, add values to the protocol object.
20,895
def field_xpath(field, attribute):
    # xpath templates reconstructed; the original string literals were stripped
    if field in ['select', 'textarea']:
        xpath = './/{field}[@{attr}=%s]'
    elif field == 'button':
        if attribute == 'value':
            xpath = './/{field}[contains(., %s)]'
        else:
            xpath = './/{field}[@{attr}=%s]'
    elif field == 'button-element':
        field = 'button'
        if attribute == 'value':
            xpath = './/{field}[contains(., %s)]'
        else:
            xpath = './/{field}[@{attr}=%s]'
    elif field == 'option':
        xpath = './/{field}[text()=%s]'
    else:
        xpath = './/input[@{attr}=%s][@type="{field}"]'
    return xpath.format(field=field, attr=attribute)
Field helper functions to locate select, textarea, and the other types of input fields (text, checkbox, radio) :param field: One of the values 'select', 'textarea', 'option', or 'button-element' to match a corresponding HTML element (and to match a <button> in the case of 'button-element'). Otherwise a type to match an <input> element with a type=<field> attribute. :param attribute: An attribute to be matched against, or 'value' to match against the content within element being matched.
20,896
def list_mapping(html_cleaned):
    unit_raw = html_cleaned.split('\n')
    for i in unit_raw:
        c = CDM(i)
        if c.PTN != 0:
            fake_title = i
            break
    init_list = []
    init_dict = {}
    for i in unit_raw:
        init_list.append(len(i))
    for i in range(0, len(init_list)):
        init_dict[i] = init_list[i]
    init_dict = sorted(init_dict.items(), key=lambda item: item[1], reverse=True)
    try:
        # log messages reconstructed; the original literals were stripped
        log('info', 'fake title extracted: {}'.format(fake_title))
    except UnboundLocalError:
        fake_title = ''
        log('warning', 'no fake title found')
    return unit_raw, init_dict, fake_title
将预处理后的网页文档映射成列表和字典,并提取虚假标题 Keyword arguments: html_cleaned -- 预处理后的网页源代码,字符串类型 Return: unit_raw -- 网页文本行 init_dict -- 字典的key是索引,value是网页文本行,并按照网页文本行长度降序排序 fake_title -- 虚假标题,即网页源代码<title>中的文本行
20,897
def unmake(self):
    if lib.EnvUnmakeInstance(self._env, self._ist) != 1:
        raise CLIPSError(self._env)
This method is equivalent to delete except that it uses message-passing instead of directly deleting the instance.
20,898
def destroy(name):
    # existence check reconstructed from the docstring; the original
    # lines were garbled in extraction
    if not exists(name):
        raise ContainerNotExists("The container (%s) does not exist!" % name)
    cmd = ['lxc-destroy', '-f', '-n', name]
    subprocess.check_call(cmd)
removes a container [stops a container if it's running and] raises ContainerNotExists exception if the specified name is not created
20,899
def conditional_distribution(self, values, inplace=True):
    JPD = self if inplace else self.copy()
    JPD.reduce(values)
    JPD.normalize()
    if not inplace:
        return JPD
Returns Conditional Probability Distribution after setting values to 1. Parameters ---------- values: list or array_like A list of tuples of the form (variable_name, variable_state). The values on which to condition the Joint Probability Distribution. inplace: Boolean (default True) If False returns a new instance of JointProbabilityDistribution Examples -------- >>> import numpy as np >>> from pgmpy.factors.discrete import JointProbabilityDistribution >>> prob = JointProbabilityDistribution(['x1', 'x2', 'x3'], [2, 2, 2], np.ones(8)/8) >>> prob.conditional_distribution([('x1', 1)]) >>> print(prob) x2 x3 P(x2,x3) ---- ---- ---------- x2_0 x3_0 0.2500 x2_0 x3_1 0.2500 x2_1 x3_0 0.2500 x2_1 x3_1 0.2500