Dataset preview columns: row index (int64, 0 to 389k), code (string, lengths 26 to 79.6k), docstring (string, lengths 1 to 46.9k).
388,400
def histogram_voltage(self, timestep=None, title=True, **kwargs):
    data = self.network.results.v_res()
    if title is True:
        if timestep is not None:
            title = "Voltage histogram for time step {}".format(timestep)
        else:
            title = "Voltage histogram \nfor time steps {} to {}".format(
                data.index[0], data.index[-1])
    elif title is False:
        title = None
    plots.histogram(data=data, title=title, timeindex=timestep, **kwargs)
Plots histogram of voltages. For more information see :func:`edisgo.tools.plots.histogram`. Parameters ---------- timestep : :pandas:`pandas.Timestamp<timestamp>` or None, optional Specifies time step histogram is plotted for. If timestep is None all time steps voltages are calculated for are used. Default: None. title : :obj:`str` or :obj:`bool`, optional Title for plot. If True title is auto generated. If False plot has no title. If :obj:`str`, the provided title is used. Default: True.
388,401
def yaml_dump(data, stream=None):
    return yaml.dump(
        data, stream=stream, Dumper=Dumper, default_flow_style=False
    )
Dump data to a YAML string/file. Args: data (YamlData): The data to serialize as YAML. stream (TextIO): The file-like object to save to. If given, this function will write the resulting YAML to that stream. Returns: str: The YAML string, or None if a stream was given (the YAML is written to the stream instead).
388,402
def _dict_increment(self, dictionary, key):
    if key in dictionary:
        dictionary[key] += 1
    else:
        dictionary[key] = 1
Increments the value of the dictionary at the specified key.
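The increment-or-initialize pattern in this entry is common enough that the standard library covers it; a sketch showing the same behavior with `dict.get` and `collections.Counter` (the names here are illustrative, not from the source project):

```python
from collections import Counter

def dict_increment(dictionary, key):
    # Same behavior as the method above: start at 0 when the key is absent.
    dictionary[key] = dictionary.get(key, 0) + 1

counts = {}
for word in ["a", "b", "a"]:
    dict_increment(counts, word)
print(counts)  # {'a': 2, 'b': 1}

# Counter does the same bookkeeping automatically.
assert Counter(["a", "b", "a"]) == Counter(counts)
```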
388,403
def reply_message(self, reply_token, messages, timeout=None):
    if not isinstance(messages, (list, tuple)):
        messages = [messages]
    # Key names and endpoint restored per the LINE Messaging API.
    data = {
        'replyToken': reply_token,
        'messages': [message.as_json_dict() for message in messages],
    }
    self._post(
        '/v2/bot/message/reply', data=json.dumps(data), timeout=timeout
    )
Call reply message API. https://devdocs.line.me/en/#reply-message Respond to events from users, groups, and rooms. Webhooks are used to notify you when an event occurs. For events that you can respond to, a replyToken is issued for replying to messages. Because the replyToken becomes invalid after a certain period of time, responses should be sent as soon as a message is received. Reply tokens can only be used once. :param str reply_token: replyToken received via webhook :param messages: Messages. Max: 5 :type messages: T <= :py:class:`linebot.models.send_messages.SendMessage` | list[T <= :py:class:`linebot.models.send_messages.SendMessage`] :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a (connect timeout, read timeout) float tuple. Default is self.http_client.timeout :type timeout: float | tuple(float, float)
388,404
def get_membership_document(membership_type: str, current_block: dict,
                            identity: Identity, salt: str,
                            password: str) -> Membership:
    # Block field names ('number', 'hash', 'currency') restored from the
    # Duniter block format.
    timestamp = BlockUID(current_block['number'], current_block['hash'])
    key = SigningKey.from_credentials(salt, password)
    membership = Membership(
        version=10,
        currency=current_block['currency'],
        issuer=key.pubkey,
        membership_ts=timestamp,
        membership_type=membership_type,
        uid=identity.uid,
        identity_ts=identity.timestamp,
        signature=None
    )
    membership.sign([key])
    return membership
Get a Membership document :param membership_type: "IN" to ask for membership or "OUT" to cancel membership :param current_block: Current block data :param identity: Identity document :param salt: Passphrase of the account :param password: Password of the account :rtype: Membership
388,405
def define(cls, name, parent=None, interleave=False):
    node = cls("define", parent, interleave=interleave)
    node.occur = 0
    node.attr["name"] = name
    return node
Create define node.
388,406
def _member_def(self, member):
    member_docstring = textwrap.dedent(member.docstring).strip()
    member_docstring = textwrap.fill(
        member_docstring, width=78,
        initial_indent=' ' * 4, subsequent_indent=' ' * 4
    )
    # The glossary-entry format string was lost; '%s\n%s' is a reconstruction
    # matching the term-then-indented-body layout the docstring describes.
    return '%s\n%s' % (member.name, member_docstring)
Return an individual member definition formatted as an RST glossary entry, wrapped to fit within 78 columns.
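The dedent-strip-fill pipeline described above is easy to verify in isolation; this standalone sketch assumes the glossary layout is a term line followed by a 4-space-indented, 78-column body (the original format string was not preserved in this row):

```python
import textwrap

def member_def(name, docstring):
    # Normalize the docstring, then wrap it to 78 columns with a
    # 4-space indent so it reads as an RST glossary definition body.
    body = textwrap.dedent(docstring).strip()
    body = textwrap.fill(body, width=78,
                         initial_indent=" " * 4, subsequent_indent=" " * 4)
    return "%s\n%s" % (name, body)

entry = member_def("alpha", "  The first letter.  ")
assert entry == "alpha\n    The first letter."
```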
388,407
def make_application_error(name, tag):
    cls = type(xso.XSO)(name, (xso.XSO,), {
        "TAG": tag,
    })
    Error.as_application_condition(cls)
    return cls
Create and return a **class** inheriting from :class:`.xso.XSO`. The :attr:`.xso.XSO.TAG` is set to `tag` and the class’ name will be `name`. In addition, the class is automatically registered with :attr:`.Error.application_condition` using :meth:`~.Error.as_application_condition`. Keep in mind that if you subclass the class returned by this function, the subclass is not registered with :class:`.Error`. In addition, if you do not override the :attr:`~.xso.XSO.TAG`, you will not be able to register the subclass as application defined condition as it has the same tag as the class returned by this function, which has already been registered as application condition.
388,408
def authorization_url(self, **kwargs):
    # The stripped payload keys are restored per RFC 6749 section 4.1.1;
    # the argument to get_url() was lost in extraction.
    payload = {'response_type': 'code', 'client_id': self._client_id}
    for key in kwargs.keys():
        payload[key] = kwargs[key]
    payload = sorted(payload.items(), key=lambda val: val[0])
    params = urlencode(payload)
    url = self.get_url()
    return '{}?{}'.format(url, params)
Get authorization URL to redirect the resource owner to. https://tools.ietf.org/html/rfc6749#section-4.1.1 :param str redirect_uri: (optional) Absolute URL of the client where the user-agent will be redirected to. :param str scope: (optional) Space delimited list of strings. :param str state: (optional) An opaque value used by the client to maintain state between the request and callback :return: URL to redirect the resource owner to :rtype: str
388,409
def cartesian_to_index(ranges, maxima=None):
    if maxima is None:
        return reduce(
            lambda y, x: (x * y[1] + y[0], (np.max(x) + 1) * y[1]),
            ranges[:, ::-1].transpose(),
            (np.array([0] * ranges.shape[0]), 1)
        )[0]
    else:
        maxima_prod = np.concatenate(
            [np.cumprod(np.array(maxima)[::-1])[1::-1], [1]])
        return np.sum(ranges * maxima_prod, 1)
Inverts tuples from a cartesian product to a numeric index, i.e. the index this tuple would have in a cartesian product. Each column gets multiplied with a place value according to the preceding columns' maxima, and all columns are summed up. This function works in the same direction as utils.cartesian, i.e. the first column has the largest place value.
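The place-value scheme the docstring describes is mixed-radix positional encoding. A small pure-Python sketch (independent of the NumPy implementation above) shows the computation for a single tuple:

```python
def tuple_to_index(tup, maxima):
    # The first column has the largest place value, so fold left to right:
    # each step multiplies the running index by the next column's base.
    index = 0
    for value, base in zip(tup, maxima):
        index = index * base + value
    return index

# With column maxima (2, 3, 4), tuple (1, 2, 3) maps to 1*(3*4) + 2*4 + 3 = 23.
assert tuple_to_index((1, 2, 3), (2, 3, 4)) == 23
assert tuple_to_index((0, 0, 0), (2, 3, 4)) == 0
```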
388,410
def readFromProto(cls, proto):
    instance = cls()
    instance.implementation = proto.implementation
    instance.steps = proto.steps
    instance.stepsList = [int(i) for i in proto.steps.split(",")]
    instance.alpha = proto.alpha
    instance.verbosity = proto.verbosity
    instance.maxCategoryCount = proto.maxCategoryCount
    instance._sdrClassifier = SDRClassifierFactory.read(proto)
    instance.learningMode = proto.learningMode
    instance.inferenceMode = proto.inferenceMode
    instance.recordNum = proto.recordNum
    return instance
Read state from proto object. :param proto: SDRClassifierRegionProto capnproto object
388,411
def _get_operator_param_name_and_values(operator_class_name, task_details):
    operator_task_details = task_details.copy()
    # The stripped key and class-name literals are restored as best-effort
    # guesses consistent with the helper names below.
    if 'type' in operator_task_details.keys():
        del operator_task_details['type']
    if 'up_to' in operator_task_details.keys():
        del operator_task_details['up_to']
    if operator_class_name == 'BigQueryOperator':
        return PipelineGenerator._get_bq_execute_params(operator_task_details)
    if operator_class_name == 'BigQueryToCloudStorageOperator':
        return PipelineGenerator._get_bq_extract_params(operator_task_details)
    if operator_class_name == 'GoogleCloudStorageToBigQueryOperator':
        return PipelineGenerator._get_bq_load_params(operator_task_details)
    return operator_task_details
Internal helper gets the name of the python parameter for the Airflow operator class. In some cases, we do not expose the airflow parameter name in its native form, but choose to expose a name that's more standard for Datalab, or one that's more friendly. For example, Airflow's BigQueryOperator uses 'bql' for the query string, but we want %%bq users in Datalab to use 'query'. Hence, a few substitutions that are specific to the Airflow operator need to be made. Similarly, the parameter value could come from the notebook's context. All that happens here. Returns: Dict containing _only_ the keys and values that are required in the Airflow operator definition. This requires substituting existing keys in the dictionary with their Airflow equivalents (i.e. by adding new keys, and removing the existing ones).
388,412
def atFontFace(self, declarations):
    result = self.ruleset([self.selector()], declarations)
    data = list(result[0].values())[0]
    if "src" not in data:
        return {}, {}
    names = data["font-family"]
    fweight = str(data.get("font-weight", "normal")).lower()
    bold = fweight in ("bold", "bolder", "500", "600", "700", "800", "900")
    if not bold and fweight != "normal":
        log.warn(
            self.c.warning("@fontface, unknown value font-weight ", fweight))
    italic = str(
        data.get("font-style", "")).lower() in ("italic", "oblique")
    # The stripped subscript is restored as 'src', matching the check above.
    uri = data["src"]
    if not isinstance(data["src"], str):
        for part in uri:
            if isinstance(part, str):
                uri = part
                break
    src = self.c.getFile(uri, relative=self.c.cssParser.rootPath)
    self.c.loadFont(names, src, bold=bold, italic=italic)
    return {}, {}
Embed fonts
388,413
def set_bank_1(self, bits):
    res = yield from self._pigpio_aio_command(_PI_CMD_BS1, bits, 0)
    return _u2i(res)
Sets gpios 0-31 if the corresponding bit in bits is set. bits:= a 32 bit mask with 1 set if the corresponding gpio is to be set. A returned status of PI_SOME_PERMITTED indicates that the user is not allowed to write to one or more of the gpios. ... pi.set_bank_1(int("111110010000",2)) ...
388,414
def make_form(fields=None, layout=None, layout_class=None, base_class=None,
              get_form_field=None, name=None, rules=None, **kwargs):
    from uliweb.utils.sorteddict import SortedDict

    get_form_field = get_form_field or (lambda name, f: None)
    props = SortedDict({})
    # Stripped key literals are restored to match the parameter names; the
    # fallback class name 'MakeForm' is a placeholder for the lost literal.
    for f in fields or []:
        if isinstance(f, BaseField):
            props[f.name] = get_form_field(f.name, f) or f
        else:
            props[f['name']] = get_form_field(f['name'], f) or make_field(**f)
    if layout:
        props['layout'] = layout
    if layout_class:
        props['layout_class'] = layout_class
    if rules:
        props['rules'] = rules
    layout_class_args = kwargs.pop('layout_class_args', None)
    if layout_class_args:
        props['layout_class_args'] = layout_class_args
    cls = type(name or 'MakeForm', (base_class or Form,), props)
    return cls
Make a Form class according to dict data:

    {'fields':[
        {'name':'name', 'type':'str', 'label':'label',
         'rules':{
             'required',
             'email',
             'required:back|front'  # back means server side, front means front side
         },
         ...},
        ...
     ],
     # layout_class should be defined in settings.ini, e.g.
     # [FORM_LAYOUT_CLASSES]
     # bs3 = '#{appname}.form_help.Bootstrap3Layout'
     # it can also be a Layout class; the default is BootstrapLayout
     'layout_class':'bs3',
     'layout':{
         'rows':[
             '-- legend title --',
             'field_name',
             ['group_fieldname', 'group_fieldname'],
             {'name':'name', 'colspan':3}
         ],
     },
     'base_class': 'form class; if not given, Form is used'
    }

get_form_field is a callback function, used to define a customized field class. If a name is given, the created class will be cached.
388,415
def acquire_lock(self):
    try:
        self.collection.insert_one(dict(_id=self.id))
    except pymongo.errors.DuplicateKeyError:
        pass
    unlocked_spec = dict(_id=self.id, locked=None)
    lock_timer = (
        timers.Timer.after(self.lock_timeout)
        if self.lock_timeout
        else timers.NeverExpires()
    )
    while not lock_timer.expired():
        # '$set' and 'updatedExisting' restored per pymongo's update_one API.
        locked_spec = {'$set': dict(locked=datetime.datetime.utcnow())}
        res = self.collection.update_one(unlocked_spec, locked_spec)
        if res.raw_result['updatedExisting']:
            break
        time.sleep(0.1)
    else:
        raise LockTimeout(f"Timeout acquiring lock for {self.id}")
    self.locked = True
Acquire the lock. Blocks indefinitely until lock is available unless `lock_timeout` was supplied. If the lock_timeout elapses, raises LockTimeout.
388,416
def windowed(seq, n, fillvalue=None, step=1):
    if n < 0:
        raise ValueError('n must be >= 0')
    if n == 0:
        yield tuple()
        return
    if step < 1:
        raise ValueError('step must be >= 1')

    it = iter(seq)
    window = deque([], n)
    append = window.append

    for _ in range(n):
        append(next(it, fillvalue))
    yield tuple(window)

    i = 0
    for item in it:
        append(item)
        i = (i + 1) % step
        if i % step == 0:
            yield tuple(window)

    if (i % step) and (step - i < n):
        for _ in range(step - i):
            append(fillvalue)
        yield tuple(window)
Return a sliding window of width *n* over the given iterable. >>> all_windows = windowed([1, 2, 3, 4, 5], 3) >>> list(all_windows) [(1, 2, 3), (2, 3, 4), (3, 4, 5)] When the window is larger than the iterable, *fillvalue* is used in place of missing values:: >>> list(windowed([1, 2, 3], 4)) [(1, 2, 3, None)] Each window will advance in increments of *step*: >>> list(windowed([1, 2, 3, 4, 5, 6], 3, fillvalue='!', step=2)) [(1, 2, 3), (3, 4, 5), (5, 6, '!')] To slide into the iterable's items, use :func:`chain` to add filler items to the left: >>> iterable = [1, 2, 3, 4] >>> n = 3 >>> padding = [None] * (n - 1) >>> list(windowed(chain(padding, iterable), 3)) [(None, None, 1), (None, 1, 2), (1, 2, 3), (2, 3, 4)]
388,417
def add_JSsource(self, new_src):
    if isinstance(new_src, list):
        for h in new_src:
            self.JSsource.append(h)
    elif isinstance(new_src, basestring):
        self.JSsource.append(new_src)
    else:
        # The original passed a single value to a two-slot format string;
        # fixed to supply both values.
        raise OptionTypeError(
            "Option: %s Not Allowed For Series Type: %s"
            % (new_src, type(new_src)))
Add additional JS script source(s).
388,418
def oauth_client_create(self, data, **kwargs):
    api_path = "/api/v2/oauth/clients.json"
    return self.call(api_path, method="POST", data=data, **kwargs)
https://developer.zendesk.com/rest_api/docs/core/oauth_clients#create-client
388,419
def autodiff(func, wrt=(0,), optimized=True, motion='joint', mode='reverse',
             preserve_result=False, check_dims=True,
             input_derivative=INPUT_DERIVATIVE.Required, verbose=0):
    # String literals in this row were lost; 'joint'/'reverse'/'forward' and
    # '__wrapped__' are restored from the docstring's description of the
    # motion and mode parameters.
    func = getattr(func, '__wrapped__', func)
    node, namespace = autodiff_tree(func, wrt, motion, mode, preserve_result,
                                    check_dims, verbose)
    if mode == 'reverse' and motion == 'joint':
        node.body[0] = _create_joint(node.body[0], func, wrt, input_derivative)
    if verbose >= 2:
        print(quoting.to_source(node))
    if mode == 'forward':
        node = _create_forward(node)
    if optimized:
        node = optimization.optimize(node)
    node = comments.remove_repeated_comments(node)
    if verbose >= 1:
        print(quoting.to_source(node))
    module = compile_.compile_file(node, namespace)
    if mode == 'forward' or motion == 'joint':
        return getattr(module, node.body[0].name)
    else:
        # Split motion: the wrapper around the generated forward/backward
        # passes is reconstructed from the surviving fragment; how the
        # forward/backward functions and the stack are obtained is best-effort.
        forward = getattr(module, node.body[0].name)
        backward = getattr(module, node.body[1].name)

        def df(*args, **kwargs):
            init_grad = kwargs.pop('init_grad', 1.0)
            _stack = tangent.Stack()
            forward(_stack, *args, **kwargs)
            dx = backward(_stack, init_grad, *args, **kwargs)
            if len(dx) == 1:
                dx, = dx
            return dx
        return df
Build the vector-Jacobian or Jacobian-vector product of a function `func`. For a vector-Jacobian product (reverse-mode autodiff): This function proceeds by finding the primals and adjoints of all the functions in the call tree. For a Jacobian-vector product (forward-mode autodiff): We first find the primals and tangents of all functions in the call tree. It then wraps the top level function (i.e. the one passed as `func`) in a slightly more user-friendly interface. It then compiles the function and attaches to it the global namespace it needs to run. Args: func: The function to take the gradient of. wrt: A tuple of argument indices to differentiate with respect to. By default the derivative is taken with respect to the first argument. optimized: Whether to optimize the gradient function (`True` by default). motion: Either 'split' (separate functions for forward and backward pass) or 'joint' motion (a single combined function). Joint mode is the default. mode: Either 'forward' or 'reverse' mode. Forward mode is more efficient when the input dimensionality is lower than the output dimensionality, whereas it is the opposite for reverse mode. input_derivative: An enum indicating whether the user must supply an input derivative, and if not, what the default value is. See the possible values of INPUT_DERIVATIVE in this file. preserve_result: A boolean indicating whether or not the generated gradient function should also return the output of the original function. If False, the return signature of the input and output functions will be > val = func(*args) > df = grad(func,preserve_result=False) > gradval = df(*args) If True, > val = func(*args) > df = grad(func,preserve_result=True) > gradval, val = df(*args) Note that if taking gradients with respect to multiple arguments, the primal value will be appended to the return signature. 
Ex: > val = func(x,y) > df = grad(func,wrt=(0,1),preserve_result=True) > dx,dy,val = df(x,y) verbose: If 1 the source code of the generated functions will be output to stdout at various stages of the process for debugging purposes. If > 1, all intermediate code generation steps will print. Returns: df: A function that calculates a derivative (see file-level documentation above for the kinds of derivatives available) with respect to arguments specified in `wrt`, using forward or reverse mode according to `mode`. If using reverse mode, the gradient is calculated in either split or joint motion according to the value passed in `motion`. If `preserve_result` is True, the function will also return the original result of `func`.
388,420
def stop(logfile, time_format):
    "stop tracking for the active project"
    def save_and_output(records):
        records = server.stop(records)
        write(records, logfile, time_format)
        output(records[-1])

    def output(r):
        # The colour/attribute literals were lost in extraction;
        # 'bold' and 'green' below are placeholders.
        print "worked on %s" % colored(r[0], attrs=['bold'])
        print "  from %s" % colored(
            server.date_to_txt(r[1][0], time_format), 'green')
        print "  to now, %s" % colored(
            server.date_to_txt(r[1][1], time_format), 'green')
        print "  => %s elapsed" % colored(
            time_elapsed(r[1][0], r[1][1]), 'green')

    save_and_output(read(logfile, time_format))
stop tracking for the active project
388,421
def knx_to_time(knxdata):
    if len(knxdata) != 3:
        raise KNXException("Can only convert a 3 Byte object to time")
    dow = knxdata[0] >> 5
    res = time(knxdata[0] & 0x1f, knxdata[1], knxdata[2])
    return [res, dow]
Converts a KNX time to a tuple of a time object and the day of week
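The bit layout can be checked with plain arithmetic: the top three bits of the first byte carry the day of week, the low five bits the hour. A self-contained sketch of the same decoding (using `ValueError` in place of the project's `KNXException`):

```python
from datetime import time

def knx_to_time(knxdata):
    # Byte 0 = [day-of-week:3][hour:5], byte 1 = minutes, byte 2 = seconds.
    if len(knxdata) != 3:
        raise ValueError("Can only convert a 3 byte object to time")
    dow = knxdata[0] >> 5
    res = time(knxdata[0] & 0x1F, knxdata[1], knxdata[2])
    return [res, dow]

# 0x6A = 0b011_01010 -> day of week 3, hour 10.
res, dow = knx_to_time([0x6A, 30, 15])
assert (dow, res.hour, res.minute, res.second) == (3, 10, 30, 15)
```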
388,422
def ram_dp_rf(clka, clkb, wea, web, addra, addrb, dia, dib, doa, dob):
    memL = [Signal(intbv(0)[len(dia):]) for _ in range(2**len(addra))]

    @always(clka.posedge)
    def writea():
        if wea:
            memL[int(addra)].next = dia
        doa.next = memL[int(addra)]

    @always(clkb.posedge)
    def writeb():
        if web:
            memL[int(addrb)].next = dib
        dob.next = memL[int(addrb)]

    return writea, writeb
RAM: Dual-Port, Read-First
388,423
def get_dependency_graph_for_set(self, id, **kwargs):
    # Stripped kwarg keys restored per the standard swagger-codegen pattern.
    kwargs['_return_http_data_only'] = True
    if kwargs.get('callback'):
        return self.get_dependency_graph_for_set_with_http_info(id, **kwargs)
    else:
        (data) = self.get_dependency_graph_for_set_with_http_info(id, **kwargs)
        return data
Gets dependency graph for a Build Group Record (running and completed). This method makes a synchronous HTTP request by default. To make an asynchronous HTTP request, please define a `callback` function to be invoked when receiving the response. >>> def callback_function(response): >>> pprint(response) >>> >>> thread = api.get_dependency_graph_for_set(id, callback=callback_function) :param callback function: The callback function for asynchronous request. (optional) :param int id: Build record set id. (required) :return: BuildRecordPage If the method is called asynchronously, returns the request thread.
388,424
def match_pattern(expr_or_pattern: object, expr: object) -> MatchDict:
    try:
        return expr_or_pattern.match(expr)
    except AttributeError:
        if expr_or_pattern == expr:
            return MatchDict()
        else:
            res = MatchDict()
            res.success = False
            res.reason = "Expressions %s and %s are not the same" % (
                repr(expr_or_pattern), repr(expr))
            return res
Recursively match `expr` with the given `expr_or_pattern` Args: expr_or_pattern: either a direct expression (equal to `expr` for a successful match), or an instance of :class:`Pattern`. expr: the expression to be matched
388,425
def dict2dzn(objs, declare=False, assign=True, declare_enums=True, wrap=True,
             fout=None):
    log = logging.getLogger(__name__)
    vals = []
    enums = set()
    for key, val in objs.items():
        if _is_enum(val) and declare_enums:
            enum_type = type(val)
            enum_name = enum_type.__name__
            if enum_name not in enums:
                enum_stmt = stmt2enum(enum_type, declare=declare,
                                      assign=assign, wrap=wrap)
                vals.append(enum_stmt)
                enums.add(enum_name)
        stmt = stmt2dzn(key, val, declare=declare, assign=assign, wrap=wrap)
        vals.append(stmt)
    if fout:
        # The log text, file mode, and line format literals were lost in
        # extraction and are restored as plausible values.
        log.debug('Writing file: {}'.format(fout))
        with open(fout, 'w') as f:
            for val in vals:
                f.write('{}\n'.format(val))
    return vals
Serializes the objects in input and produces a list of strings encoding them into dzn format. Optionally, the produced dzn is written on a file. Supported types of objects include: ``str``, ``int``, ``float``, ``set``, ``list`` or ``dict``. List and dict are serialized into dzn (multi-dimensional) arrays. The key-set of a dict is used as index-set of dzn arrays. The index-set of a list is implicitly set to ``1 .. len(list)``. Parameters ---------- objs : dict A dictionary containing the objects to serialize, the keys are the names of the variables. declare : bool Whether to include the declaration of the variable in the statements or just the assignment. Default is ``False``. assign : bool Whether to include assignment of the value in the statements or just the declaration. declare_enums : bool Whether to declare the enums found as types of the objects to serialize. Default is ``True``. wrap : bool Whether to wrap the serialized values. fout : str Path to the output file, if None no output file is written. Returns ------- list List of strings containing the dzn-encoded objects.
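The dzn assignment format the docstring describes can be illustrated with a toy serializer for the scalar and set cases (a sketch only; this is not pymzn's actual `stmt2dzn`):

```python
def to_dzn(name, val):
    # dzn assigns values as "name = value;"; sets use brace syntax.
    # Lists and dicts would need the multi-dimensional array forms,
    # omitted here for brevity.
    if isinstance(val, set):
        body = "{%s}" % ", ".join(str(v) for v in sorted(val))
    else:
        body = str(val)
    return "%s = %s;" % (name, body)

assert to_dzn("n", 3) == "n = 3;"
assert to_dzn("s", {2, 1}) == "s = {1, 2};"
```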
388,426
def add_node(self, payload):
    self.nodes.append(Node(len(self.nodes), payload))
    return len(self.nodes) - 1
Returns ------- int Identifier for the inserted node.
388,427
def logging_syslog_facility_local(self, **kwargs):
    config = ET.Element("config")
    logging = ET.SubElement(config, "logging",
                            xmlns="urn:brocade.com:mgmt:brocade-ras")
    syslog_facility = ET.SubElement(logging, "syslog-facility")
    local = ET.SubElement(syslog_facility, "local")
    local.text = kwargs.pop('local')
    callback = kwargs.pop('callback', self._callback)
    return callback(config)
Auto Generated Code
388,428
def load_default_moderator():
    # Comparison strings restored from the error message below.
    if appsettings.FLUENT_COMMENTS_DEFAULT_MODERATOR == 'default':
        return moderation.FluentCommentsModerator(None)
    elif appsettings.FLUENT_COMMENTS_DEFAULT_MODERATOR == 'deny':
        return moderation.AlwaysDeny(None)
    elif str(appsettings.FLUENT_COMMENTS_DEFAULT_MODERATOR).lower() == 'none':
        return moderation.NullModerator(None)
    elif '.' in appsettings.FLUENT_COMMENTS_DEFAULT_MODERATOR:
        return import_string(appsettings.FLUENT_COMMENTS_DEFAULT_MODERATOR)(None)
    else:
        raise ImproperlyConfigured(
            "Bad FLUENT_COMMENTS_DEFAULT_MODERATOR value. "
            "Provide default/deny/none or a dotted path"
        )
Find a moderator object
388,429
def find_or_create_by_name(self, item_name, items_list, item_type):
    item = self.find_by_name(item_name, items_list)
    if not item:
        item = self.data_lists[item_type][2](item_name, None)
    return item
See if item with item_name exists in item_list. If not, create that item. Either way, return an item of type item_type.
388,430
def _ensure_set_contains(test_object, required_object, test_set_name=None):
    # The assertion and error message strings were lost in extraction and are
    # reconstructed here as plausible values.
    assert isinstance(test_object, (set, dict)), \
        'Object %s (%s) is not a set or dict.' % (test_object, test_set_name)
    assert isinstance(required_object, (set, dict)), \
        'Object %s (%s) is not a set or dict.' % (required_object,
                                                  test_set_name)
    test_set = set(test_object)
    required_set = set(required_object)
    set_name = '' if test_set_name is None else ' %s' % test_set_name
    missing_opts = required_set.difference(test_set)
    if len(missing_opts) > 0:
        raise ParameterError('The set%s is missing options: %s.'
                             % (set_name, ', '.join(missing_opts)))
Ensure that the required entries (set or keys of a dict) are present in the test set or keys of the test dict. :param set|dict test_object: The test set or dict :param set|dict required_object: The entries that need to be present in the test set (keys of input dict if input is dict) :param str test_set_name: Optional name for the set :raises ParameterError: If a required entry doesn't exist
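The core of the check above is a set difference; a minimal standalone version (the exception type and message here are illustrative, not the project's `ParameterError`):

```python
def ensure_set_contains(test_object, required_object, name=None):
    # Works for sets and dicts alike: set() over a dict takes its keys.
    missing = set(required_object) - set(test_object)
    if missing:
        label = "" if name is None else " %s" % name
        raise ValueError("The set%s is missing options: %s"
                         % (label, ", ".join(sorted(missing))))

ensure_set_contains({"a": 1, "b": 2}, {"a", "b"})  # passes silently
try:
    ensure_set_contains({"a"}, {"a", "b"}, name="opts")
except ValueError as e:
    assert "missing options: b" in str(e)
```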
388,431
def tvdb_login(api_key):
    url = "https://api.thetvdb.com/login"
    body = {"apikey": api_key}
    status, content = _request_json(url, body=body, cache=False)
    if status == 401:
        raise MapiProviderException("invalid api key")
    elif status != 200 or not content.get("token"):
        raise MapiNetworkException("TVDb down or unavailable?")
    return content["token"]
Logs into TVDb using the provided api key Note: You can register for a free TVDb key at thetvdb.com/?tab=apiregister Online docs: api.thetvdb.com/swagger#!/Authentication/post_login
388,432
def complex_validates(validate_rule):
    ref_dict = {}
    for column_names, predicate_refs in validate_rule.items():
        for column_name in _to_tuple(column_names):
            ref_dict[column_name] = \
                ref_dict.get(column_name, tuple()) + \
                _normalize_predicate_refs(predicate_refs)
    return validates(*ref_dict.keys())(
        lambda self, name, value: _validate_handler(name, value,
                                                    ref_dict[name]))
Quickly set up attribute validation in one go, based on `sqlalchemy.orm.validates`.

Unlike `sqlalchemy.orm.validates`, you don't need to create many model methods; just pass a formatted validate rule. (Because of SQLAlchemy's validate mechanism, you need to assign this function's return value to a model property.)

For simplicity, complex_validates doesn't support the `include_removes` and `include_backrefs` parameters of `sqlalchemy.orm.validates`. We also don't recommend using this function multiple times in one model, because that brings problems like:

1. The execution order of multiple complex_validates is decided by their model property names, in reversed order. E.g. predicates in `validator1 = complex_validates(...)` will be executed **AFTER** predicates in `validator2 = complex_validates(...)`.

2. If you try to validate the same attribute in two (or more) complex_validates, only one of them will be executed. (Maybe this is a bug in SQLAlchemy?)

`complex_validates` is currently based on `sqlalchemy.orm.validates`, so these problems are difficult to solve. We may try using `AttributeEvents` directly in the future, to provide more reliable behavior.

Rule Format
-----------

    {
        column_name: predicate                       # basic format
        (column_name2, column_name3): predicate      # specify multiple column_names for the given predicates
        column_name4: (predicate, predicate2)        # specify multiple predicates for the given column_names
        column_name5: [(predicate, arg1, ... argN)]  # specify what arguments to pass to the predicate during validation
        (column_name6, column_name7): [(predicate, arg1, ... argN), predicate2]  # another example
    }

Notice: if you want to pass arguments to a predicate, you must wrap the whole command in another list or tuple. Otherwise, the argument will be treated as another predicate.

So, this is wrong: { column_name: (predicate, arg) }
this is right: { column_name: [(predicate, arg)] }

Predicate
---------

There are some `predefined_predicates`; you can reference them by name in the validate rule:

    {column_name: ['trans_upper']}

Or you can pass your own predicate function to the rule, like this:

    def custom_predicate(value):
        return value_is_legal  # return True or False for a valid or invalid value

    {column_name: [custom_predicate]}

If you want to change the value during validation, return a `dict(value=new_value)` instead of a boolean:

    {column_name: lambda value: dict(value=value * 2)}  # and you see, we can use a lambda as a predicate

A predicate can also receive extra arguments passed in the rule:

    def multiple(value, target_multiple):
        return dict(value=value * target_multiple)

    {column_name: [(multiple, 10)]}

Complete Example
----------------

    class People(db.Model):
        name = Column(String(100))
        age = Column(Integer)
        IQ = Column(Integer)
        has_lover = Column(Boolean)

        validator = complex_validates({
            'name': [('min_length', 1), ('max_length', 100)],
            ('age', 'IQ'): [('min', 0)],
            'has_lover': lambda value: not value  # hate you!
        })
388,433
def get_members(self, api=None):
    api = api or self._API
    response = api.get(url=self._URL['members'].format(id=self.id))
    data = response.json()
    total = response.headers['x-total-matching-query']
    members = [Member(api=api, **member) for member in data['items']]
    links = [Link(**link) for link in data['links']]
    href = data['href']
    return Collection(
        resource=Member,
        href=href,
        total=total,
        items=members,
        links=links,
        api=api
    )
Retrieve dataset members :param api: Api instance :return: Collection object
388,434
def K(self, parm):
    return RQ_K_matrix(self.X, parm) + np.identity(self.X.shape[0]) * (10**-10)
Returns the Gram Matrix Parameters ---------- parm : np.ndarray Parameters for the Gram Matrix Returns ---------- - Gram Matrix (np.ndarray)
388,435
def delete(self, key, **kwargs):
    payload = {
        "key": _encode(key),
    }
    payload.update(kwargs)
    result = self.post(self.get_url("/kv/deleterange"), json=payload)
    # 'deleted' is the field the etcd DeleteRange response uses for the
    # number of removed keys.
    if 'deleted' in result:
        return True
    return False
DeleteRange deletes the given range from the key-value store. A delete request increments the revision of the key-value store and generates a delete event in the event history for every deleted key. :param key: :param kwargs: :return:
388,436
def load_app_resource(**kwargs):
    # Reconstructed from the fused fragment in this row; the control flow is
    # best-effort but consistent with the docstring's description.
    if 'project' in kwargs:
        raise DXError('Unexpected kwarg: "project"')
    if dxpy.JOB_ID is None:
        raise DXError('Not called by a job')
    if 'DX_RESOURCES_ID' not in os.environ and \
            'DX_PROJECT_CONTEXT_ID' not in os.environ:
        raise DXError('App resources container ID could not be found')
    kwargs['project'] = os.environ.get(
        'DX_RESOURCES_ID', os.environ.get('DX_PROJECT_CONTEXT_ID'))
    kwargs['return_handler'] = True
    return find_one_data_object(**kwargs)
:param kwargs: keyword args for :func:`~dxpy.bindings.search.find_one_data_object`, with the exception of "project" :raises: :exc:`~dxpy.exceptions.DXError` if "project" is given, if this is called with dxpy.JOB_ID not set, or if "DX_RESOURCES_ID" or "DX_PROJECT_CONTEXT_ID" is not found in the environment variables :returns: None if no matching object is found; otherwise returns a dxpy object handler for that class of object Searches for a data object in the app resources container matching the given keyword arguments. If found, the object will be cloned into the running job's workspace container, and the handler for it will be returned. If the app resources container ID is not found in DX_RESOURCES_ID, falls back to looking in the current project. Example:: @dxpy.entry_point('main') def main(*args, **kwargs): x = load_app_resource(name="Indexed genome", classname='file') dxpy.download_dxfile(x)
388,437
def heatmap(dm, partition=None, cmap=CM.Blues, fontsize=10):
    assert isinstance(dm, DistanceMatrix)
    datamax = float(np.abs(dm.values).max())
    length = dm.shape[0]

    if partition:
        sorting = np.array(flatten_list(partition.get_membership()))
        new_dm = dm.reorder(dm.df.columns[sorting])
    else:
        new_dm = dm

    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.xaxis.tick_top()
    ax.grid(False)
    tick_positions = np.array(list(range(length))) + 0.5
    if fontsize is not None:
        ax.set_yticks(tick_positions)
        ax.set_xticks(tick_positions)
        # Alignment, interpolation, format, and label strings were lost in
        # extraction; plausible values are used below.
        ax.set_xticklabels(new_dm.df.columns, rotation=90,
                           fontsize=fontsize, ha='left')
        ax.set_yticklabels(new_dm.df.index, fontsize=fontsize, va='top')

    cbar_ticks_at = [0, 0.5 * datamax, datamax]
    cax = ax.imshow(
        new_dm.values,
        interpolation='nearest',
        extent=[0., length, length, 0.],
        vmin=0,
        vmax=datamax,
        cmap=cmap,
    )
    cbar = fig.colorbar(cax, ticks=cbar_ticks_at, format='%1.2g')
    cbar.set_label('Distance')
    return fig
heatmap(dm, partition=None, cmap=CM.Blues, fontsize=10) Produce a 2D plot of the distance matrix, with values encoded by coloured cells. Args: partition: treeCl.Partition object - if supplied, will reorder rows and columns of the distance matrix to reflect the groups defined by the partition cmap: matplotlib colourmap object - the colour palette to use fontsize: int or None - sets the size of the locus labels Returns: matplotlib plottable object
388,438
def apply(self, func, applyto='measurement', noneval=nan, setdata=False):
    # The default for applyto was lost in extraction; 'measurement' is a
    # plausible restoration given the docstring.
    applyto = applyto.lower()
    if applyto == 'data':
        if self.data is not None:
            data = self.data
        elif self.datafile is None:
            return noneval
        else:
            data = self.read_data()
            if setdata:
                self.data = data
        return func(data)
    elif applyto == 'measurement':
        return func(self)
    else:
        raise ValueError('Unknown value for applyto: %s' % applyto)
Apply func either to self or to associated data. If data is not already parsed, try and read it. Parameters ---------- func : callable The function either accepts a measurement object or an FCS object. Does some calculation and returns the result. applyto : ['data' | 'measurement'] * 'data' : apply to associated data * 'measurement' : apply to measurement object itself. noneval : obj Value to return if `applyto` is 'data', but no data is available. setdata : bool Used only if data is not already set. If true parsed data will be assigned to self.data Otherwise data will be discarded at end of apply.
388,439
def submit_job(job_ini, username, hazard_job_id=None):
    job_id = logs.init()
    oq = engine.job_from_file(
        job_ini, job_id, username, hazard_calculation_id=hazard_job_id)
    pik = pickle.dumps(oq, protocol=0)
    code = RUNCALC % dict(job_id=job_id, hazard_job_id=hazard_job_id,
                          pik=pik, username=username)
    tmp_py = gettemp(code, suffix='.py')
    devnull = subprocess.DEVNULL
    popen = subprocess.Popen([sys.executable, tmp_py],
                             stdin=devnull, stdout=devnull, stderr=devnull)
    threading.Thread(target=popen.wait).start()
    logs.dbcmd('update_job', job_id, {'pid': popen.pid})
    return job_id, popen.pid
Create a job object from the given job.ini file in the job directory and run it in a new process. Returns the job ID and PID.
388,440
def dir_df_boot(dir_df, nb=5000, par=False):
    N = dir_df.dir_dec.values.shape[0]
    BDIs = []
    for k in range(nb):
        pdir_df = dir_df.sample(n=N, replace=True)
        pdir_df.reset_index(inplace=True)
        if par:
            for i in pdir_df.index:
                n = pdir_df.loc[i, 'dir_n']
                ks = np.ones(shape=n) * pdir_df.loc[i, 'dir_k']
                decs, incs = fshdev(ks)
                di_block = np.column_stack((decs, incs))
                di_block = dodirot_V(
                    di_block, pdir_df.loc[i, 'dir_dec'], pdir_df.loc[i, 'dir_inc'])
                fpars = fisher_mean(di_block)
                pdir_df.loc[i, 'dir_dec'] = fpars['dec']
                pdir_df.loc[i, 'dir_inc'] = fpars['inc']
        bfpars = dir_df_fisher_mean(pdir_df)
        BDIs.append([bfpars['dec'], bfpars['inc']])
    return BDIs
Performs a bootstrap for direction DataFrame with optional parametric bootstrap Parameters _________ dir_df : Pandas DataFrame with columns: dir_dec : mean declination dir_inc : mean inclination Required for parametric bootstrap dir_n : number of data points in mean dir_k : Fisher k statistic for mean nb : number of bootstraps, default is 5000 par : if True, do a parametric bootstrap Returns _______ BDIs: nested list of bootstrapped mean Dec,Inc pairs
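The non-parametric branch of `dir_df_boot` (resample with replacement, recompute the statistic each pass) can be sketched on plain scalars with only the standard library; `bootstrap_means` is a hypothetical helper for illustration, not part of the module above:

```python
import random

def bootstrap_means(values, nb=1000, seed=0):
    """Non-parametric bootstrap sketch: resample with replacement nb times,
    recording the mean of each pseudo-sample (analogous to recording the
    Fisher mean of each resampled direction DataFrame above)."""
    rng = random.Random(seed)
    n = len(values)
    means = []
    for _ in range(nb):
        sample = [rng.choice(values) for _ in range(n)]
        means.append(sum(sample) / n)
    return means

# Spread of the bootstrapped means approximates the sampling distribution
means = bootstrap_means([1.0, 2.0, 3.0, 4.0], nb=200)
```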
388,441
def remove_container(self, container, **kwargs):
    self.push_log("Removing container '{0}'.".format(container))
    set_raise_on_error(kwargs)
    super(DockerFabricClient, self).remove_container(container, **kwargs)
Identical to :meth:`dockermap.client.base.DockerClientWrapper.remove_container` with additional logging.
388,442
def _run_atstart():
    global _atstart
    for callback, args, kwargs in _atstart:
        callback(*args, **kwargs)
    del _atstart[:]
Hook frameworks must invoke this before running the main hook body.
388,443
def _parse_uri(uri):
    tokens = urlparse(uri)
    if tokens.netloc != '':
        logger.error("Invalid URI: %s", uri)
        raise ValueError("MediaFire URI format error: "
                         "host should be empty - mf:///path")
    if tokens.scheme != '' and tokens.scheme != URI_SCHEME:
        raise ValueError("MediaFire URI format error: "
                         "scheme must be '%s' or empty" % URI_SCHEME)
    return posixpath.normpath(tokens.path)
Parse and validate MediaFire URI.
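A minimal stand-alone sketch of the validation `_parse_uri` performs, using only the standard library; the `parse_mediafire_uri` name and the exact `URI_SCHEME` value are assumptions for illustration:

```python
from urllib.parse import urlparse
import posixpath

URI_SCHEME = "mf"  # assumed scheme constant, mirroring the module above

def parse_mediafire_uri(uri):
    """Validate an 'mf:///path'-style URI: the host part must be empty and
    the scheme must be the expected one (or absent), then the path is
    normalized with posixpath."""
    tokens = urlparse(uri)
    if tokens.netloc != "":
        raise ValueError("host should be empty - mf:///path")
    if tokens.scheme not in ("", URI_SCHEME):
        raise ValueError("scheme must be '%s' or empty" % URI_SCHEME)
    return posixpath.normpath(tokens.path)
```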
388,444
def execute(self):
    self.prepare_models()
    self.prepare_worker()
    if self.options.print_options:
        self.print_options()
    self.run()
Main method to call to run the worker
388,445
def get(self, source, media, collection=None, start_date=None, days=None,
        query=None, years=None, genres=None, languages=None, countries=None,
        runtimes=None, ratings=None, certifications=None, networks=None,
        status=None, **kwargs):
    if source not in ['all', 'my']:
        raise ValueError('Unknown calendar source: %r' % (source,))
    if media not in ['dvd', 'movies', 'shows']:
        raise ValueError('Unknown calendar media type: %r' % (media,))
    if start_date is None and days:
        start_date = datetime.utcnow()
    response = self.http.get('calendars/%s/%s%s' % (
        source, media,
        ('/' + collection) if collection else ''
    ), params=[
        start_date.strftime('%Y-%m-%d') if start_date else None,
        days
    ], query={
        'query': query,
        'years': years,
        'genres': genres,
        'languages': languages,
        'countries': countries,
        'runtimes': runtimes,
        'ratings': ratings,
        'certifications': certifications,
        'networks': networks,
        'status': status
    }, **popitems(kwargs, [
        'authenticated', 'validate_token'
    ]))
    items = self.get_data(response, **kwargs)
    if isinstance(items, requests.Response):
        return items
    if media == 'shows':
        return SummaryMapper.episodes(
            self.client, items,
            parse_show=True
        )
    return SummaryMapper.movies(self.client, items)
Retrieve calendar items. The `all` calendar displays info for all shows airing during the specified period. The `my` calendar displays episodes for all shows that have been watched, collected, or watchlisted. :param source: Calendar source (`all` or `my`) :type source: str :param media: Media type (`dvd`, `movies` or `shows`) :type media: str :param collection: Collection type (`new`, `premieres`) :type collection: str or None :param start_date: Start date (defaults to today) :type start_date: datetime or None :param days: Number of days to display (defaults to `7`) :type days: int or None :param query: Search title or description. :type query: str or None :param years: Year or range of years (e.g. `2014`, or `2014-2016`) :type years: int or str or tuple or None :param genres: Genre slugs (e.g. `action`) :type genres: str or list of str or None :param languages: Language codes (e.g. `en`) :type languages: str or list of str or None :param countries: Country codes (e.g. `us`) :type countries: str or list of str or None :param runtimes: Runtime range in minutes (e.g. `30-90`) :type runtimes: str or tuple or None :param ratings: Rating range between `0` and `100` (e.g. `75-100`) :type ratings: str or tuple or None :param certifications: US Content Certification (e.g. `pg-13`, `tv-pg`) :type certifications: str or list of str or None :param networks: (TV) Network name (e.g. `HBO`) :type networks: str or list of str or None :param status: (TV) Show status (e.g. `returning series`, `in production`, `ended`) :type status: str or list of str or None :return: Items :rtype: list of trakt.objects.video.Video
388,446
def get_sub_commands(parser: argparse.ArgumentParser) -> List[str]:
    sub_cmds = []
    if parser is not None and parser._subparsers is not None:
        for action in parser._subparsers._actions:
            if isinstance(action, argparse._SubParsersAction):
                for sub_cmd, sub_cmd_parser in action.choices.items():
                    sub_cmds.append(sub_cmd)
                    for nested_sub_cmd in get_sub_commands(sub_cmd_parser):
                        sub_cmds.append('{} {}'.format(sub_cmd, nested_sub_cmd))
                break
    sub_cmds.sort()
    return sub_cmds
Get a list of sub-commands for an ArgumentParser
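The recursion above can be exercised against a real `argparse` parser; `list_sub_commands` below is a local copy of the logic (it relies on the same private `_subparsers` / `_SubParsersAction` internals as the original, which work in CPython but are not public API):

```python
import argparse
from typing import List

def list_sub_commands(parser: argparse.ArgumentParser) -> List[str]:
    """Recursively collect 'cmd' and 'cmd sub' names from a parser tree."""
    sub_cmds = []
    if parser is not None and parser._subparsers is not None:
        for action in parser._subparsers._actions:
            if isinstance(action, argparse._SubParsersAction):
                for name, sub_parser in action.choices.items():
                    sub_cmds.append(name)
                    for nested in list_sub_commands(sub_parser):
                        sub_cmds.append('{} {}'.format(name, nested))
                break
    sub_cmds.sort()
    return sub_cmds

# Build a parser with one nested level of sub-commands
parser = argparse.ArgumentParser()
top = parser.add_subparsers()
git = top.add_parser('git')
git_sub = git.add_subparsers()
git_sub.add_parser('commit')
top.add_parser('ls')
```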
388,447
def pos_by_percent(self, x_percent, y_percent):
    x = round(x_percent * self.width)
    y = round(y_percent * self.height)
    return int(x), int(y)
Finds a point inside the box that is exactly at the given percentage place. :param x_percent: how much percentage from left edge :param y_percent: how much percentage from top edge :return: A point inside the box
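A quick self-contained check of the percentage-to-pixel arithmetic; the `Box` class here is a minimal stand-in for the original widget:

```python
class Box:
    """Minimal stand-in: a rectangle with a width and height in pixels."""
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def pos_by_percent(self, x_percent, y_percent):
        # Scale each axis independently and round to the nearest pixel
        x = round(x_percent * self.width)
        y = round(y_percent * self.height)
        return int(x), int(y)

box = Box(width=200, height=100)
```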
388,448
def _retf(ins): output = _float_oper(ins.quad[1]) output.append() output.append( % str(ins.quad[2])) return output
Returns from a procedure / function a Floating Point (40bits) value
388,449
def map_single_end(credentials, instance_config, instance_name, script_dir,
                   index_dir, fastq_file, output_dir,
                   num_threads=None, seed_start_lmax=None,
                   mismatch_nmax=None, multimap_nmax=None,
                   splice_min_overhang=None, out_mult_nmax=None,
                   sort_bam=True, keep_unmapped=False,
                   self_destruct=True, compressed=True, **kwargs):
    if sort_bam:
        out_sam_type = 'BAM SortedByCoordinate'
    else:
        out_sam_type = 'BAM Unsorted'
    fastq_files = fastq_file
    if isinstance(fastq_files, (str, _oldstr)):
        fastq_files = [fastq_file]
    template = _TEMPLATE_ENV.get_template(
        os.path.join('map_single-end.sh'))
    startup_script = template.render(
        script_dir=script_dir,
        index_dir=index_dir,
        fastq_files=fastq_files,
        output_dir=output_dir,
        num_threads=num_threads,
        seed_start_lmax=seed_start_lmax,
        self_destruct=self_destruct,
        mismatch_nmax=mismatch_nmax,
        multimap_nmax=multimap_nmax,
        splice_min_overhang=splice_min_overhang,
        out_mult_nmax=out_mult_nmax,
        keep_unmapped=keep_unmapped,
        compressed=compressed,
        out_sam_type=out_sam_type)
    if len(startup_script) > 32768:
        raise ValueError('Startup script is larger than 32,768 bytes!')
    op_name = instance_config.create_instance(
        credentials, instance_name, startup_script=startup_script, **kwargs)
    return op_name
Maps single-end reads using STAR. Reads are expected in FASTQ format. By default, they are also expected to be compressed with gzip. - recommended machine type: "n1-standard-16" (60 GB of RAM, 16 vCPUs). - recommended disk size: depends on size of FASTQ files, at least 128 GB. TODO: docstring
388,450
def create_hit(MaxAssignments=None, AutoApprovalDelayInSeconds=None,
               LifetimeInSeconds=None, AssignmentDurationInSeconds=None,
               Reward=None, Title=None, Keywords=None, Description=None,
               Question=None, RequesterAnnotation=None,
               QualificationRequirements=None, UniqueRequestToken=None,
               AssignmentReviewPolicy=None, HITReviewPolicy=None,
               HITLayoutId=None, HITLayoutParameters=None):
    pass
The CreateHIT operation creates a new Human Intelligence Task (HIT). The new HIT is made available for Workers to find and accept on the Amazon Mechanical Turk website. This operation allows you to specify a new HIT by passing in values for the properties of the HIT, such as its title, reward amount and number of assignments. When you pass these values to CreateHIT , a new HIT is created for you, with a new HITTypeID . The HITTypeID can be used to create additional HITs in the future without needing to specify common parameters such as the title, description and reward amount each time. An alternative way to create HITs is to first generate a HITTypeID using the CreateHITType operation and then call the CreateHITWithHITType operation. This is the recommended best practice for Requesters who are creating large numbers of HITs. CreateHIT also supports several ways to provide question data: by providing a value for the Question parameter that fully specifies the contents of the HIT, or by providing a HitLayoutId and associated HitLayoutParameters . 
See also: AWS API Documentation :example: response = client.create_hit( MaxAssignments=123, AutoApprovalDelayInSeconds=123, LifetimeInSeconds=123, AssignmentDurationInSeconds=123, Reward='string', Title='string', Keywords='string', Description='string', Question='string', RequesterAnnotation='string', QualificationRequirements=[ { 'QualificationTypeId': 'string', 'Comparator': 'LessThan'|'LessThanOrEqualTo'|'GreaterThan'|'GreaterThanOrEqualTo'|'EqualTo'|'NotEqualTo'|'Exists'|'DoesNotExist'|'In'|'NotIn', 'IntegerValues': [ 123, ], 'LocaleValues': [ { 'Country': 'string', 'Subdivision': 'string' }, ], 'RequiredToPreview': True|False }, ], UniqueRequestToken='string', AssignmentReviewPolicy={ 'PolicyName': 'string', 'Parameters': [ { 'Key': 'string', 'Values': [ 'string', ], 'MapEntries': [ { 'Key': 'string', 'Values': [ 'string', ] }, ] }, ] }, HITReviewPolicy={ 'PolicyName': 'string', 'Parameters': [ { 'Key': 'string', 'Values': [ 'string', ], 'MapEntries': [ { 'Key': 'string', 'Values': [ 'string', ] }, ] }, ] }, HITLayoutId='string', HITLayoutParameters=[ { 'Name': 'string', 'Value': 'string' }, ] ) :type MaxAssignments: integer :param MaxAssignments: The number of times the HIT can be accepted and completed before the HIT becomes unavailable. :type AutoApprovalDelayInSeconds: integer :param AutoApprovalDelayInSeconds: The number of seconds after an assignment for the HIT has been submitted, after which the assignment is considered Approved automatically unless the Requester explicitly rejects it. :type LifetimeInSeconds: integer :param LifetimeInSeconds: [REQUIRED] An amount of time, in seconds, after which the HIT is no longer available for users to accept. After the lifetime of the HIT elapses, the HIT no longer appears in HIT searches, even if not all of the assignments for the HIT have been accepted. 
:type AssignmentDurationInSeconds: integer :param AssignmentDurationInSeconds: [REQUIRED] The amount of time, in seconds, that a Worker has to complete the HIT after accepting it. If a Worker does not complete the assignment within the specified duration, the assignment is considered abandoned. If the HIT is still active (that is, its lifetime has not elapsed), the assignment becomes available for other users to find and accept. :type Reward: string :param Reward: [REQUIRED] The amount of money the Requester will pay a Worker for successfully completing the HIT. :type Title: string :param Title: [REQUIRED] The title of the HIT. A title should be short and descriptive about the kind of task the HIT contains. On the Amazon Mechanical Turk web site, the HIT title appears in search results, and everywhere the HIT is mentioned. :type Keywords: string :param Keywords: One or more words or phrases that describe the HIT, separated by commas. These words are used in searches to find HITs. :type Description: string :param Description: [REQUIRED] A general description of the HIT. A description includes detailed information about the kind of task the HIT contains. On the Amazon Mechanical Turk web site, the HIT description appears in the expanded view of search results, and in the HIT and assignment screens. A good description gives the user enough information to evaluate the HIT before accepting it. :type Question: string :param Question: The data the person completing the HIT uses to produce the results. Constraints: Must be a QuestionForm data structure, an ExternalQuestion data structure, or an HTMLQuestion data structure. The XML question data must not be larger than 64 kilobytes (65,535 bytes) in size, including whitespace. Either a Question parameter or a HITLayoutId parameter must be provided. :type RequesterAnnotation: string :param RequesterAnnotation: An arbitrary data field. 
The RequesterAnnotation parameter lets your application attach arbitrary data to the HIT for tracking purposes. For example, this parameter could be an identifier internal to the Requester's application that corresponds with the HIT. The RequesterAnnotation parameter for a HIT is only visible to the Requester who created the HIT. It is not shown to the Worker, or any other Requester. The RequesterAnnotation parameter may be different for each HIT you submit. It does not affect how your HITs are grouped. :type QualificationRequirements: list :param QualificationRequirements: A condition that a Worker's Qualifications must meet before the Worker is allowed to accept and complete the HIT. (dict) --The QualificationRequirement data structure describes a Qualification that a Worker must have before the Worker is allowed to accept a HIT. A requirement may optionally state that a Worker must have the Qualification in order to preview the HIT. QualificationTypeId (string) -- [REQUIRED]The ID of the Qualification type for the requirement. Comparator (string) -- [REQUIRED]The kind of comparison to make against a Qualification's value. You can compare a Qualification's value to an IntegerValue to see if it is LessThan, LessThanOrEqualTo, GreaterThan, GreaterThanOrEqualTo, EqualTo, or NotEqualTo the IntegerValue. You can compare it to a LocaleValue to see if it is EqualTo, or NotEqualTo the LocaleValue. You can check to see if the value is In or NotIn a set of IntegerValue or LocaleValue values. Lastly, a Qualification requirement can also test if a Qualification Exists or DoesNotExist in the user's profile, regardless of its value. IntegerValues (list) --The integer value to compare against the Qualification's value. IntegerValue must not be present if Comparator is Exists or DoesNotExist. IntegerValue can only be used if the Qualification type has an integer value; it cannot be used with the Worker_Locale QualificationType ID. 
When performing a set comparison by using the In or the NotIn comparator, you can use up to 15 IntegerValue elements in a QualificationRequirement data structure. (integer) -- LocaleValues (list) --The locale value to compare against the Qualification's value. The local value must be a valid ISO 3166 country code or supports ISO 3166-2 subdivisions. LocaleValue can only be used with a Worker_Locale QualificationType ID. LocaleValue can only be used with the EqualTo, NotEqualTo, In, and NotIn comparators. You must only use a single LocaleValue element when using the EqualTo or NotEqualTo comparators. When performing a set comparison by using the In or the NotIn comparator, you can use up to 30 LocaleValue elements in a QualificationRequirement data structure. (dict) --The Locale data structure represents a geographical region or location. Country (string) -- [REQUIRED]The country of the locale. Must be a valid ISO 3166 country code. For example, the code US refers to the United States of America. Subdivision (string) --The state or subdivision of the locale. A valid ISO 3166-2 subdivision code. For example, the code WA refers to the state of Washington. RequiredToPreview (boolean) --If true, the question data for the HIT will not be shown when a Worker whose Qualifications do not meet this requirement tries to preview the HIT. That is, a Worker's Qualifications must meet all of the requirements for which RequiredToPreview is true in order to preview the HIT. If a Worker meets all of the requirements where RequiredToPreview is true (or if there are no such requirements), but does not meet all of the requirements for the HIT, the Worker will be allowed to preview the HIT's question data, but will not be allowed to accept and complete the HIT. The default is false. :type UniqueRequestToken: string :param UniqueRequestToken: A unique identifier for this request which allows you to retry the call on error without creating duplicate HITs. 
This is useful in cases such as network timeouts where it is unclear whether or not the call succeeded on the server. If the HIT already exists in the system from a previous call using the same UniqueRequestToken, subsequent calls will return a AWS.MechanicalTurk.HitAlreadyExists error with a message containing the HITId. Note Note: It is your responsibility to ensure uniqueness of the token. The unique token expires after 24 hours. Subsequent calls using the same UniqueRequestToken made after the 24 hour limit could create duplicate HITs. :type AssignmentReviewPolicy: dict :param AssignmentReviewPolicy: The Assignment-level Review Policy applies to the assignments under the HIT. You can specify for Mechanical Turk to take various actions based on the policy. PolicyName (string) --Name of a Review Policy: SimplePlurality/2011-09-01 or ScoreMyKnownAnswers/2011-09-01 Parameters (list) --Name of the parameter from the Review policy. (dict) --Name of the parameter from the Review policy. Key (string) --Name of the parameter from the list of Review Polices. Values (list) --The list of values of the Parameter (string) -- MapEntries (list) --List of ParameterMapEntry objects. (dict) --This data structure is the data type for the AnswerKey parameter of the ScoreMyKnownAnswers/2011-09-01 Review Policy. Key (string) --The QuestionID from the HIT that is used to identify which question requires Mechanical Turk to score as part of the ScoreMyKnownAnswers/2011-09-01 Review Policy. Values (list) --The list of answers to the question specified in the MapEntry Key element. The Worker must match all values in order for the answer to be scored correctly. (string) -- :type HITReviewPolicy: dict :param HITReviewPolicy: The HIT-level Review Policy applies to the HIT. You can specify for Mechanical Turk to take various actions based on the policy. 
PolicyName (string) --Name of a Review Policy: SimplePlurality/2011-09-01 or ScoreMyKnownAnswers/2011-09-01 Parameters (list) --Name of the parameter from the Review policy. (dict) --Name of the parameter from the Review policy. Key (string) --Name of the parameter from the list of Review Polices. Values (list) --The list of values of the Parameter (string) -- MapEntries (list) --List of ParameterMapEntry objects. (dict) --This data structure is the data type for the AnswerKey parameter of the ScoreMyKnownAnswers/2011-09-01 Review Policy. Key (string) --The QuestionID from the HIT that is used to identify which question requires Mechanical Turk to score as part of the ScoreMyKnownAnswers/2011-09-01 Review Policy. Values (list) --The list of answers to the question specified in the MapEntry Key element. The Worker must match all values in order for the answer to be scored correctly. (string) -- :type HITLayoutId: string :param HITLayoutId: The HITLayoutId allows you to use a pre-existing HIT design with placeholder values and create an additional HIT by providing those values as HITLayoutParameters. Constraints: Either a Question parameter or a HITLayoutId parameter must be provided. :type HITLayoutParameters: list :param HITLayoutParameters: If the HITLayoutId is provided, any placeholder values must be filled in with values using the HITLayoutParameter structure. For more information, see HITLayout. (dict) --The HITLayoutParameter data structure defines parameter values used with a HITLayout. A HITLayout is a reusable Amazon Mechanical Turk project template used to provide Human Intelligence Task (HIT) question data for CreateHIT. Name (string) --The name of the parameter in the HITLayout. Value (string) --The value substituted for the parameter referenced in the HITLayout. 
:rtype: dict :return: { 'HIT': { 'HITId': 'string', 'HITTypeId': 'string', 'HITGroupId': 'string', 'HITLayoutId': 'string', 'CreationTime': datetime(2015, 1, 1), 'Title': 'string', 'Description': 'string', 'Question': 'string', 'Keywords': 'string', 'HITStatus': 'Assignable'|'Unassignable'|'Reviewable'|'Reviewing'|'Disposed', 'MaxAssignments': 123, 'Reward': 'string', 'AutoApprovalDelayInSeconds': 123, 'Expiration': datetime(2015, 1, 1), 'AssignmentDurationInSeconds': 123, 'RequesterAnnotation': 'string', 'QualificationRequirements': [ { 'QualificationTypeId': 'string', 'Comparator': 'LessThan'|'LessThanOrEqualTo'|'GreaterThan'|'GreaterThanOrEqualTo'|'EqualTo'|'NotEqualTo'|'Exists'|'DoesNotExist'|'In'|'NotIn', 'IntegerValues': [ 123, ], 'LocaleValues': [ { 'Country': 'string', 'Subdivision': 'string' }, ], 'RequiredToPreview': True|False }, ], 'HITReviewStatus': 'NotReviewed'|'MarkedForReview'|'ReviewedAppropriate'|'ReviewedInappropriate', 'NumberOfAssignmentsPending': 123, 'NumberOfAssignmentsAvailable': 123, 'NumberOfAssignmentsCompleted': 123 } } :returns: (integer) --
388,451
def handle_hooks(stage, hooks, provider, context, dump, outline):
    if not outline and not dump and hooks:
        utils.handle_hooks(
            stage=stage,
            hooks=hooks,
            provider=provider,
            context=context
        )
Handle pre/post hooks. Args: stage (str): The name of the hook stage - pre_build/post_build. hooks (list): A list of dictionaries containing the hooks to execute. provider (:class:`stacker.provider.base.BaseProvider`): The provider the current stack is using. context (:class:`stacker.context.Context`): The current stacker context. dump (bool): Whether running with dump set or not. outline (bool): Whether running with outline set or not.
388,452
def isdir(self, path=None, client_kwargs=None, virtual_dir=True,
          assume_exists=None):
    relative = self.relpath(path)
    if not relative:
        # Root is always considered an existing directory
        return True
    if path[-1] == '/' or self.is_locator(relative, relative=True):
        exists = self.exists(path=path, client_kwargs=client_kwargs,
                             assume_exists=assume_exists)
        if exists:
            return True
        elif virtual_dir:
            try:
                next(self.list_objects(relative, relative=True,
                                       max_request_entries=1))
                return True
            except (StopIteration, ObjectNotFoundError,
                    UnsupportedOperation):
                return False
    return False
Return True if path is an existing directory. Args: path (str): Path or URL. client_kwargs (dict): Client arguments. virtual_dir (bool): If True, checks if directory exists virtually if an object path if not exists as a specific object. assume_exists (bool or None): This value define the value to return in the case there is no enough permission to determinate the existing status of the file. If set to None, the permission exception is reraised (Default behavior). if set to True or False, return this value. Returns: bool: True if directory exists.
388,453
def main(arguments=None):
    if not arguments:
        arguments = sys.argv[1:]
    wordlist, sowpods, by_length, start, end = argument_parser(arguments)
    for word in wordlist:
        pretty_print(
            word,
            anagrams_in_word(word, sowpods, start, end),
            by_length,
        )
Main command line entry point.
388,454
def handle_call(self, body, message):
    try:
        r = self._DISPATCH(body, ticket=message.properties['reply_to'])
    except self.Next:
        pass
    else:
        self.reply(message, r)
Handle call message.
388,455
def subnet_group_exists(name, tags=None, region=None, key=None, keyid=None,
                        profile=None):
    try:
        conn = _get_conn(region=region, key=key, keyid=keyid, profile=profile)
        if not conn:
            return {'exists': bool(conn)}
        rds = conn.describe_db_subnet_groups(DBSubnetGroupName=name)
        return {'exists': bool(rds)}
    except ClientError as e:
        if "DBSubnetGroupNotFoundFault" in e.message:
            return {'exists': False}
        else:
            return {'error': __utils__['boto3.get_error'](e)}
Check to see if an RDS subnet group exists. CLI example:: salt myminion boto_rds.subnet_group_exists my-param-group \ region=us-east-1
388,456
def progress_view(shell):
    while not ShellProgressView.done:
        _, col = get_window_dim()
        col = int(col)
        progress = get_progress_message()
        if '\n' in progress:
            prog_list = progress.split('\n')
            prog_val = len(prog_list[-1])
        else:
            prog_val = len(progress)
        buffer_size = col - prog_val - 4
        if ShellProgressView.progress_bar:
            doc = u'{}:{}'.format(progress, ShellProgressView.progress_bar)
            shell.spin_val = -1
            counter = 0
            ShellProgressView.heart_bar = ''
        else:
            if progress and not ShellProgressView.done:
                heart_bar = ShellProgressView.heart_bar
                if shell.spin_val >= 0:
                    beat = ShellProgressView.heart_beat_values[_get_heart_frequency()]
                    heart_bar += beat
                    heart_bar = heart_bar[len(beat):]
                    len_beat = len(heart_bar)
                    if len_beat > buffer_size:
                        heart_bar = heart_bar[len_beat - buffer_size:]
                    while len(heart_bar) < buffer_size:
                        beat = ShellProgressView.heart_beat_values[_get_heart_frequency()]
                        heart_bar += beat
                else:
                    shell.spin_val = 0
                    counter = 0
                    while counter < buffer_size:
                        beat = ShellProgressView.heart_beat_values[_get_heart_frequency()]
                        heart_bar += beat
                        counter += len(beat)
                ShellProgressView.heart_bar = heart_bar
            doc = u'{}:{}'.format(progress, ShellProgressView.heart_bar)
        shell.cli.buffers['progress'].reset(
            initial_document=Document(doc))
        shell.cli.request_redraw()
        sleep(shell.intermediate_sleep)
    ShellProgressView.done = False
    ShellProgressView.progress_bar = ''
    shell.spin_val = -1
    sleep(shell.final_sleep)
    return True
updates the view
388,457
def set_climate_hold(self, index, climate, hold_type="nextTransition"):
    body = {"selection": {
                "selectionType": "thermostats",
                "selectionMatch": self.thermostats[index]['identifier']},
            "functions": [{"type": "setHold", "params": {
                "holdType": hold_type,
                "holdClimateRef": climate
            }}]}
    log_msg_action = "set climate hold"
    return self.make_request(body, log_msg_action)
Set a climate hold - ie away, home, sleep
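The request body the method sends can be reproduced as a pure function, which makes the payload shape easy to verify; `climate_hold_body` and the thermostat identifier value are illustrative assumptions, not part of the library above:

```python
def climate_hold_body(identifier, climate, hold_type="nextTransition"):
    """Build a setHold request body of the shape used above.

    `identifier` stands in for self.thermostats[index]['identifier']."""
    return {"selection": {"selectionType": "thermostats",
                          "selectionMatch": identifier},
            "functions": [{"type": "setHold",
                           "params": {"holdType": hold_type,
                                      "holdClimateRef": climate}}]}

# Hypothetical thermostat identifier, climate ref 'away'
body = climate_hold_body("311019580000", "away")
```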
388,458
def chop(array, epsilon=1e-10):
    ret = np.array(array)
    if np.isrealobj(ret):
        ret[abs(ret) < epsilon] = 0.0
    else:
        ret.real[abs(ret.real) < epsilon] = 0.0
        ret.imag[abs(ret.imag) < epsilon] = 0.0
    return ret
Truncate small values of a complex array. Args: array (array_like): array in which to truncate small values. epsilon (float): threshold. Returns: np.array: A new operator with small values set to zero.
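`chop` depends only on NumPy, so its behaviour is easy to verify in isolation; reproduced here as a sketch (note that for complex arrays the real and imaginary parts are truncated independently):

```python
import numpy as np

def chop(array, epsilon=1e-10):
    """Zero out elements (or real/imag parts) with magnitude below epsilon."""
    ret = np.array(array)
    if np.isrealobj(ret):
        ret[abs(ret) < epsilon] = 0.0
    else:
        # .real and .imag are writable views on a complex array
        ret.real[abs(ret.real) < epsilon] = 0.0
        ret.imag[abs(ret.imag) < epsilon] = 0.0
    return ret

# Mixed list promotes to a complex array; tiny parts are zeroed independently
out = chop([1.0, 1e-12, -1e-11 + 2j])
```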
388,459
def update_room_name(self):
    try:
        response = self.client.api.get_room_name(self.room_id)
        if "name" in response and response["name"] != self.name:
            self.name = response["name"]
            return True
        else:
            return False
    except MatrixRequestError:
        return False
Updates self.name and returns True if room name has changed.
388,460
def list_handler(HandlerResult="nparray"):
    def decorate(func):
        def wrapper(*args, **kwargs):
            sequences = []
            enumsUnitCheck = enumerate(args)
            argsList = list(args)
            for num, arg in enumsUnitCheck:
                if type(arg) == type(1 * u.m):
                    argsList[num] = arg.to_base_units().magnitude
            enumsUnitless = enumerate(argsList)
            for num, arg in enumsUnitless:
                if isinstance(arg, (list, tuple, np.ndarray)):
                    sequences.append(num)
            if len(sequences) == 0:
                result = func(*args, **kwargs)
            else:
                limiter = len(argsList[sequences[0]])
                iterant = 0
                result = []
                for num in sequences:
                    for arg in argsList[num]:
                        if iterant >= limiter:
                            break
                        argsList[num] = arg
                        result.append(wrapper(*argsList,
                                              HandlerResult=HandlerResult,
                                              **kwargs))
                        iterant += 1
                if HandlerResult == "nparray":
                    result = np.array(result)
                elif HandlerResult == "tuple":
                    result = tuple(result)
                elif HandlerResult == "list":
                    result = list(result)  # fixed: was '==' (a no-op comparison)
            return result
        return wrapper
    return decorate
Wraps a function to handle list inputs.
388,461
def from_api_repr(cls, resource, client):
    name = resource.get("name")
    dns_name = resource.get("dnsName")
    if name is None or dns_name is None:
        raise KeyError(
            "Resource lacks required identity information:"
            '["name"]["dnsName"]')
    zone = cls(name, dns_name, client=client)
    zone._set_properties(resource)
    return zone
Factory: construct a zone given its API representation :type resource: dict :param resource: zone resource representation returned from the API :type client: :class:`google.cloud.dns.client.Client` :param client: Client which holds credentials and project configuration for the zone. :rtype: :class:`google.cloud.dns.zone.ManagedZone` :returns: Zone parsed from ``resource``.
388,462
def add_paragraph(self, text='', style=None):
    return super(_Cell, self).add_paragraph(text, style)
Return a paragraph newly added to the end of the content in this cell. If present, *text* is added to the paragraph in a single run. If specified, the paragraph style *style* is applied. If *style* is not specified or is |None|, the result is as though the 'Normal' style was applied. Note that the formatting of text in a cell can be influenced by the table style. *text* can contain tab (``\\t``) characters, which are converted to the appropriate XML form for a tab. *text* can also include newline (``\\n``) or carriage return (``\\r``) characters, each of which is converted to a line break.
388,463
def first(o):
    if o is None:
        return None
    if isinstance(o, ISeq):
        return o.first
    s = to_seq(o)
    if s is None:
        return None
    return s.first
If o is an ISeq, return the first element from o. If o is None, return None. Otherwise, coerce o to a Seq and return its first element.
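A plain-Python analogue of the same None-safe head semantics, assuming ordinary iterables instead of the `ISeq`/`to_seq` machinery above:

```python
def first(o):
    """None-safe head: None for None or an empty iterable, else the first item."""
    if o is None:
        return None
    it = iter(o)
    return next(it, None)  # default None plays the role of the empty-seq check
```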
388,464
def mod_bufsize(iface, *args, **kwargs):
    if __grains__['kernel'] == 'Linux':
        if os.path.exists('/sbin/ethtool'):
            return _mod_bufsize_linux(iface, *args, **kwargs)
    return False
Modify network interface buffers (currently linux only) CLI Example: .. code-block:: bash salt '*' network.mod_bufsize tx=<val> rx=<val> rx-mini=<val> rx-jumbo=<val>
388,465
def get_preferred(self, addr_1, addr_2):
    if addr_1 > addr_2:
        addr_1, addr_2 = addr_2, addr_1
    return self._cache.get((addr_1, addr_2))
Return the preferred address.
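The canonical-ordering trick above (always look up under the sorted pair, so `(a, b)` and `(b, a)` hit the same entry) can be shown with a small hypothetical cache class:

```python
class PreferredAddressCache:
    """Sketch of a pair cache keyed on a canonically ordered address tuple."""
    def __init__(self):
        self._cache = {}

    def set_preferred(self, addr_1, addr_2, preferred):
        if addr_1 > addr_2:
            addr_1, addr_2 = addr_2, addr_1
        self._cache[(addr_1, addr_2)] = preferred

    def get_preferred(self, addr_1, addr_2):
        # Same normalization on lookup makes the pair order irrelevant
        if addr_1 > addr_2:
            addr_1, addr_2 = addr_2, addr_1
        return self._cache.get((addr_1, addr_2))

cache = PreferredAddressCache()
cache.set_preferred("10.0.0.2", "10.0.0.1", "10.0.0.1")
```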
388,466
def checkUserManage(worksheet, request, redirect=True):
    allowed = worksheet.checkUserManage()
    if allowed == False and redirect == True:
        destination_url = worksheet.absolute_url() + "/manage_results"
        request.response.redirect(destination_url)
Checks if the current user has granted access to the worksheet and if has also privileges for managing it. If the user has no granted access and redirect's value is True, redirects to /manage_results view. Otherwise, does nothing
388,467
def get_version_url(self, version):
    for each_version in self.other_versions():
        if version == each_version['version'] and 'url' in each_version:
            return each_version.get('url')
    raise VersionNotInHive(version)
Retrieve the URL for the designated version of the hive.
388,468
def delete(self, url, headers=None, **kwargs):
    if headers is None:
        headers = []
    if kwargs:
        url = url + UrlEncoded('?' + _encode(**kwargs), skip_encode=True)
    message = {
        'method': "DELETE",
        'headers': headers,
    }
    return self.request(url, message)
Sends a DELETE request to a URL. :param url: The URL. :type url: ``string`` :param headers: A list of pairs specifying the headers for the HTTP response (for example, ``[('Content-Type': 'text/cthulhu'), ('Token': 'boris')]``). :type headers: ``list`` :param kwargs: Additional keyword arguments (optional). These arguments are interpreted as the query part of the URL. The order of keyword arguments is not preserved in the request, but the keywords and their arguments will be URL encoded. :type kwargs: ``dict`` :returns: A dictionary describing the response (see :class:`HttpLib` for its structure). :rtype: ``dict``
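How the kwargs become the query part of the URL can be sketched with the standard library's `urlencode` standing in for the module's own `_encode`/`UrlEncoded` helpers; this is an approximation of the behaviour, not the library's exact encoding rules:

```python
from urllib.parse import urlencode

def build_delete_url(url, **kwargs):
    """Fold keyword arguments into the URL as a query string, as the
    delete() wrapper above does before issuing the request."""
    if kwargs:
        url = url + '?' + urlencode(kwargs)
    return url
```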
388,469
def add_self_defined_objects(raw_objects):
    logger.info("- creating internally defined commands...")
    if 'command' not in raw_objects:
        raw_objects['command'] = []
    raw_objects['command'].append({
        'command_name': 'bp_rule',
        'command_line': 'bp_rule',
        'imported_from': 'alignak-self'})
    raw_objects['command'].append({
        'command_name': '_internal_host_up',
        'command_line': '_internal_host_up',
        'imported_from': 'alignak-self'})
    raw_objects['command'].append({
        'command_name': '_echo',
        'command_line': '_echo',
        'imported_from': 'alignak-self'})
    raw_objects['command'].append({
        'command_name': '_internal_host_check',
        'command_line': '_internal_host_check',
        'imported_from': 'alignak-self'})
    raw_objects['command'].append({
        'command_name': '_internal_service_check',
        'command_line': '_internal_service_check',
        'imported_from': 'alignak-self'})
Add self defined command objects for internal processing: bp_rule, _internal_host_up, _echo, _internal_host_check, _internal_service_check :param raw_objects: Raw config objects dict :type raw_objects: dict :return: raw_objects with some more commands :rtype: dict
388,470
def _snakify_name(self, name): name = self._strip_diacritics(name) name = name.lower() name = name.replace(' ', '-') return name
Snakify a name string. In this context, "to snakify" means to strip a name of all diacritics, convert it to lower case, and replace any spaces inside the name with hyphens. This way the name is made "machine-friendly", and ready to be combined with a second name component into a full "snake_case" name. :param str name: A name to snakify. :return str: A snakified name.
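A stand-alone sketch of the behavior the docstring describes. The NFKD-based diacritic stripping and the space-to-hyphen replacement are assumptions filled in from the docstring, since the original `_strip_diacritics` helper is not shown:

```python
import unicodedata

def snakify(name):
    # Strip diacritics: decompose characters, then drop combining marks
    decomposed = unicodedata.normalize('NFKD', name)
    stripped = ''.join(c for c in decomposed if not unicodedata.combining(c))
    # Lower-case and replace spaces with hyphens, per the docstring
    return stripped.lower().replace(' ', '-')
```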
388,471
def key_from_keybase(username, fingerprint=None): url = keybase_lookup_url(username) resp = requests.get(url) if resp.status_code == 200: j_resp = json.loads(polite_string(resp.content)) if in j_resp and len(j_resp[]) == 1: kb_obj = j_resp[][0] if fingerprint: return fingerprint_from_keybase(fingerprint, kb_obj) else: if in kb_obj \ and in kb_obj[]: key = kb_obj[][] return massage_key(key) return None
Look up a public key from a username
388,472
def revert(self, unchanged_only=False): if self._reverted: raise errors.ChangelistError() change = self._change if self._change == 0: change = cmd = [, , str(change)] if unchanged_only: cmd.append() files = [f.depotFile for f in self._files] if files: cmd += files self._connection.run(cmd) self._files = [] self._reverted = True
Revert all files in this changelist :param unchanged_only: Only revert unchanged files :type unchanged_only: bool :raises: :class:`.ChangelistError`
388,473
def text_extract(path, password=None): pdf = Info(path, password).pdf return [pdf.getPage(i).extractText() for i in range(pdf.getNumPages())]
Extract text from a PDF file
388,474
def create_process_behavior(self, behavior, process_id): route_values = {} if process_id is not None: route_values[] = self._serialize.url(, process_id, ) content = self._serialize.body(behavior, ) response = self._send(http_method=, location_id=, version=, route_values=route_values, content=content) return self._deserialize(, response)
CreateProcessBehavior. [Preview API] Creates a single behavior in the given process. :param :class:`<ProcessBehaviorCreateRequest> <azure.devops.v5_0.work_item_tracking_process.models.ProcessBehaviorCreateRequest>` behavior: :param str process_id: The ID of the process :rtype: :class:`<ProcessBehavior> <azure.devops.v5_0.work_item_tracking_process.models.ProcessBehavior>`
388,475
def get_relationships_for_source_on_date(self, source_id, from_, to): if self._can(): return self._provider_session.get_relationships_for_source_on_date(source_id, from_, to) self._check_lookup_conditions() query = self._query_session.get_relationship_query() query.match_source_id(source_id, match=True) query.match_date(from_, to, match=True) return self._try_harder(query)
Pass through to provider RelationshipLookupSession.get_relationships_for_source_on_date
388,476
def _initializer_wrapper(actual_initializer, *rest): signal.signal(signal.SIGINT, signal.SIG_IGN) if actual_initializer is not None: actual_initializer(*rest)
We ignore SIGINT. It's up to our parent to kill us in the typical condition of this arising from ``^C`` on a terminal. If someone is manually killing us with that signal, well... nothing will happen.
388,477
def add_seconds(self, datetimestr, n): a_datetime = self.parse_datetime(datetimestr) return a_datetime + timedelta(seconds=n)
Returns a time that is n seconds after a given time. :param datetimestr: a datetime object or a datetime str :param n: number of seconds; the value can be negative **Chinese docs** Returns the time N seconds after the given date.
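A minimal sketch of the arithmetic, assuming the input is already a `datetime` (the original first runs the string through `parse_datetime`):

```python
from datetime import datetime, timedelta

def add_seconds(a_datetime, n):
    # Shift a datetime by n seconds; n may be negative
    return a_datetime + timedelta(seconds=n)

later = add_seconds(datetime(2020, 1, 1, 0, 0, 0), 90)
earlier = add_seconds(datetime(2020, 1, 1, 0, 0, 0), -30)
```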
388,478
def send_messages(self, email_messages): if not email_messages: return with self._lock: try: stream_created = self.open() for message in email_messages: if six.PY3: self.write_message(message) else: self.write_message(message) self.stream.flush() if stream_created: self.close() except: if not self.fail_silently: raise return len(email_messages)
Write all messages to the stream in a thread-safe way.
388,479
def dump_info(): vultr = Vultr(API_KEY) try: logging.info(, dumps( vultr.account.info(), indent=2 )) logging.info(, dumps( vultr.app.list(), indent=2 )) logging.info(, dumps( vultr.backup.list(), indent=2 )) logging.info(, dumps( vultr.dns.list(), indent=2 )) logging.info(, dumps( vultr.iso.list(), indent=2 )) logging.info(, dumps( vultr.os.list(), indent=2 )) logging.info(, dumps( vultr.plans.list(), indent=2 )) logging.info(, dumps( vultr.regions.list(), indent=2 )) logging.info(, dumps( vultr.server.list(), indent=2 )) logging.info(, dumps( vultr.snapshot.list(), indent=2 )) logging.info(, dumps( vultr.sshkey.list(), indent=2 )) logging.info(, dumps( vultr.startupscript.list(), indent=2 )) except VultrError as ex: logging.error(, ex)
Shows various details about the account & servers
388,480
def params_values(self): return [p.value for p in atleast_list(self.params) if p.has_value]
Get a list of the ``Parameter`` values if they have a value. This does not include the basis regularizer.
388,481
def TAPQuery(RAdeg=180.0, DECdeg=0.0, width=1, height=1): QUERY =( ) QUERY = QUERY.format( RAdeg, DECdeg, width, height) data={"QUERY": QUERY, "REQUEST": "doQuery", "LANG": "ADQL", "FORMAT": "votable"} url="http://www.cadc.hia.nrc.gc.ca/tap/sync" print url, data return urllib.urlopen(url,urllib.urlencode(data))
Do a query of the CADC Megacam table. Get all observations inside the box. Returns a file-like object
388,482
def loader_for_type(self, ctype): for loadee, mimes in Mimer.TYPES.iteritems(): for mime in mimes: if ctype.startswith(mime): return loadee
Gets a function ref to deserialize content for a certain mimetype.
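The prefix-matching dispatch can be illustrated with a hypothetical registry. The loader names and mimetypes below are assumptions; the original iterates `Mimer.TYPES`:

```python
# Hypothetical registry mapping loader names to the mimetypes they handle
TYPES = {
    'load_json': ('application/json',),
    'load_yaml': ('application/x-yaml', 'text/yaml'),
}

def loader_for_type(ctype):
    # The first registered loader whose mimetype is a prefix of the
    # Content-Type header wins; charset suffixes are ignored for free
    for loadee, mimes in TYPES.items():
        for mime in mimes:
            if ctype.startswith(mime):
                return loadee
```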
388,483
def obtainInfo(self): try: info = self.ytdl.extract_info(self.yid, download=False) except youtube_dl.utils.DownloadError: raise ConnectionError if not self.preferences[]: self.url = (info[][0][], info[][1][]) return True for f in info[]: if not in f or not f[]: f[] = float() aud = {(-int(f[]), f[], f[]) for f in info[] if f.get() and not f.get()} vid = {(-int(f[]), f[], f[]) for f in info[] if not f.get() and f.get()} full= {(-int(f[]), f[], f[]) for f in info[] if f.get() and f.get()} try: _f = int( self.preferences.get() ) _k = lambda x: abs(x[0] + _f) except (ValueError, TypeError): _k = lambda d: d if self.preferences[] and self.preferences[]: fm = sorted(full, key=_k) elif self.preferences[]: fm = sorted(aud, key=_k) elif self.preferences[]: fm = sorted(vid, key=_k) filesize = 0 i = -1 try: while filesize == 0: i += 1 self.url = fm[i][2] if fm[i][1] == float(): filesize = int(self.r_session.head(self.url).headers[]) else: filesize = int(fm[i][1]) except IndexError: self.url = (info[][0][], info[][1][]) self.preferences[] = False return True self.filesize = filesize return True
Method for obtaining information about the movie.
388,484
def make_levels_set(levels): for level_key,level_filters in levels.items(): levels[level_key] = make_level_set(level_filters) return levels
To make set operations efficient, this converts all lists of items in levels to sets, which speeds up membership checks
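A sketch of the conversion, assuming a `make_level_set` helper that turns lists into sets (the helper's exact behavior is not shown in the source):

```python
def make_level_set(level_filters):
    # Hypothetical per-level helper: lists become sets, anything else
    # is assumed to already be set-like
    return set(level_filters) if isinstance(level_filters, list) else level_filters

def make_levels_set(levels):
    for level_key, level_filters in levels.items():
        levels[level_key] = make_level_set(level_filters)
    return levels

levels = make_levels_set({'warn': ['a', 'b', 'a'], 'error': ['c']})
```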
388,485
def interactive_update_stack(self, fqn, template, old_parameters, parameters, stack_policy, tags, **kwargs): logger.debug("Using interactive provider mode for %s.", fqn) changes, change_set_id = create_change_set( self.cloudformation, fqn, template, parameters, tags, , service_role=self.service_role, **kwargs ) old_parameters_as_dict = self.params_as_dict(old_parameters) new_parameters_as_dict = self.params_as_dict( [x if in x else {: x[], : old_parameters_as_dict[x[]]} for x in parameters] ) params_diff = diff_parameters( old_parameters_as_dict, new_parameters_as_dict) action = "replacements" if self.replacements_only else "changes" full_changeset = changes if self.replacements_only: changes = requires_replacement(changes) if changes or params_diff: ui.lock() try: output_summary(fqn, action, changes, params_diff, replacements_only=self.replacements_only) ask_for_approval( full_changeset=full_changeset, params_diff=params_diff, include_verbose=True, ) finally: ui.unlock() self.deal_with_changeset_stack_policy(fqn, stack_policy) self.cloudformation.execute_change_set( ChangeSetName=change_set_id, )
Update a Cloudformation stack in interactive mode. Args: fqn (str): The fully qualified name of the Cloudformation stack. template (:class:`stacker.providers.base.Template`): A Template object to use when updating the stack. old_parameters (list): A list of dictionaries that defines the parameter list on the existing Cloudformation stack. parameters (list): A list of dictionaries that defines the parameter list to be applied to the Cloudformation stack. stack_policy (:class:`stacker.providers.base.Template`): A template object representing a stack policy. tags (list): A list of dictionaries that defines the tags that should be applied to the Cloudformation stack.
388,486
def _get_absolute_reference(self, ref_key): key_str = u", ".join(map(str, ref_key)) return u"S[" + key_str + u"]"
Returns absolute reference code for key.
388,487
def filter(self, func): self._data = xfilter(func, self._data) return self
A lazy way to skip elements in the stream that give False for the given function.
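The lazy chaining this enables can be sketched with a generator-based `xfilter`. The `Stream` wrapper and `to_list` below are illustrative, not the original class:

```python
def xfilter(func, iterable):
    # A lazy filter: nothing is evaluated until the stream is consumed
    return (item for item in iterable if func(item))

class Stream:
    def __init__(self, data):
        self._data = iter(data)
    def filter(self, func):
        self._data = xfilter(func, self._data)
        return self  # returning self allows chaining
    def to_list(self):
        return list(self._data)

result = Stream(range(10)).filter(lambda x: x % 2 == 0).filter(lambda x: x > 2).to_list()
```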
388,488
def set_errors(self, errors): if errors is None: self.__errors__ = None return self.__errors__ = [asscalar(e) for e in errors]
Set parameter error estimate
388,489
def vdistg(v1, v2, ndim): v1 = stypes.toDoubleVector(v1) v2 = stypes.toDoubleVector(v2) ndim = ctypes.c_int(ndim) return libspice.vdistg_c(v1, v2, ndim)
Return the distance between two vectors of arbitrary dimension. http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/cspice/vdistg_c.html :param v1: ndim-dimensional double precision vector. :type v1: list[ndim] :param v2: ndim-dimensional double precision vector. :type v2: list[ndim] :param ndim: Dimension of v1 and v2. :type ndim: int :return: the distance between v1 and v2 :rtype: float
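A pure-Python equivalent for illustration, without the SPICE library; the `ndim` argument becomes implicit in the vector lengths:

```python
import math

def vdistg(v1, v2):
    # Euclidean distance between two vectors of arbitrary (equal) dimension
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

d = vdistg([1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0])
```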
388,490
def dotproduct(X, Y): return sum([x * y for x, y in zip(X, Y)])
Return the sum of the element-wise product of vectors x and y. >>> dotproduct([1, 2, 3], [1000, 100, 10]) 1230
388,491
def build_variant(variant, institute_id, gene_to_panels = None, hgncid_to_gene=None, sample_info=None): gene_to_panels = gene_to_panels or {} hgncid_to_gene = hgncid_to_gene or {} sample_info = sample_info or {} variant_obj = dict( _id = variant[][], document_id=variant[][], variant_id=variant[][], display_name=variant[][], variant_type=variant[], case_id=variant[], chromosome=variant[], reference=variant[], alternative=variant[], institute=institute_id, ) variant_obj[] = False variant_obj[] = int(variant[]) variant_obj[] = float(variant[]) end = variant.get() if end: variant_obj[] = int(end) length = variant.get() if length: variant_obj[] = int(length) variant_obj[] = variant[].get() variant_obj[] = float(variant[]) if variant[] else None variant_obj[] = variant[] variant_obj[] = variant.get() variant_obj[] = variant.get() variant_obj[] = variant[] variant_obj[] = variant.get() if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] if in variant: variant_obj[] = variant[] gt_types = [] for sample in variant.get(, []): gt_call = build_genotype(sample) gt_types.append(gt_call) if sample_info: sample_id = sample[] if sample_info[sample_id] == : key = else: key = variant_obj[key] = { : sample[], : sample[], : sample[], : sample[], : sample_id } variant_obj[] = gt_types if in variant: variant_obj[] = variant[] compounds = [] for compound in variant.get(, []): compound_obj = build_compound(compound) compounds.append(compound_obj) if compounds: variant_obj[] = compounds genes = [] for index, gene in enumerate(variant.get(, [])): if gene.get(): gene_obj = build_gene(gene, hgncid_to_gene) genes.append(gene_obj) if index > 30: variant_obj[] = True break if genes: variant_obj[] = genes if in variant: 
variant_obj[] = [hgnc_id for hgnc_id in variant[] if hgnc_id] hgnc_symbols = [] for hgnc_id in variant_obj[]: gene_obj = hgncid_to_gene.get(hgnc_id) if gene_obj: hgnc_symbols.append(gene_obj[]) if hgnc_symbols: variant_obj[] = hgnc_symbols panel_names = set() for hgnc_id in variant_obj[]: gene_panels = gene_to_panels.get(hgnc_id, set()) panel_names = panel_names.union(gene_panels) if panel_names: variant_obj[] = list(panel_names) clnsig_objects = [] for entry in variant.get(, []): clnsig_obj = build_clnsig(entry) clnsig_objects.append(clnsig_obj) if clnsig_objects: variant_obj[] = clnsig_objects call_info = variant.get(, {}) for caller in call_info: if call_info[caller]: variant_obj[caller] = call_info[caller] conservation_info = variant.get(, {}) if conservation_info.get(): variant_obj[] = conservation_info[] if conservation_info.get(): variant_obj[] = conservation_info[] if conservation_info.get(): variant_obj[] = conservation_info[] if variant.get(): variant_obj[] = variant[] if variant.get(): variant_obj[] = variant[] frequencies = variant.get(, {}) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if frequencies.get(): variant_obj[] = float(frequencies[]) if variant.get(): variant_obj[] = variant[] if variant.get(): variant_obj[] = variant[] if frequencies.get(): variant_obj[] = frequencies[] if frequencies.get(): variant_obj[] = frequencies[] if frequencies.get(): variant_obj[] = frequencies[] if frequencies.get(): variant_obj[] = frequencies[] if frequencies.get(): variant_obj[] = frequencies[] elif frequencies.get(): variant_obj[] = frequencies[] if variant.get(): variant_obj[] = variant[] if 
variant.get(): variant_obj[] = variant[] rank_results = [] for category in variant.get(, []): rank_result = { : category, : variant[][category] } rank_results.append(rank_result) if rank_results: variant_obj[] = rank_results if variant.get(): variant_obj[] = True return variant_obj
Build a variant object based on parsed information Args: variant(dict) institute_id(str) gene_to_panels(dict): A dictionary with {<hgnc_id>: { 'panel_names': [<panel_name>, ..], 'disease_associated_transcripts': [<transcript_id>, ..] } . . } hgncid_to_gene(dict): A dictionary with {<hgnc_id>: <hgnc_gene info> . . } sample_info(dict): A dictionary with info about samples. Strictly for cancer to tell which is tumor Returns: variant_obj(dict) variant = dict( # document_id is a md5 string created by institute_genelist_caseid_variantid: _id = str, # required, same as document_id document_id = str, # required # variant_id is a md5 string created by chrom_pos_ref_alt (simple_id) variant_id = str, # required # display name is variant_id (no md5) display_name = str, # required # chrom_pos_ref_alt simple_id = str, # The variant can be either research or clinical. # For research variants we display all the available information while # the clinical variants have limited annotation fields. variant_type = str, # required, choices=('research', 'clinical')) category = str, # choices=('sv', 'snv', 'str') sub_category = str, # choices=('snv', 'indel', 'del', 'ins', 'dup', 'inv', 'cnv', 'bnd') mate_id = str, # For SVs this identifies the other end case_id = str, # case_id is a string like owner_caseid chromosome = str, # required position = int, # required end = int, # required length = int, # required reference = str, # required alternative = str, # required rank_score = float, # required variant_rank = int, # required rank_score_results = list, # List if dictionaries variant_rank = int, # required institute = str, # institute_id, required sanger_ordered = bool, validation = str, # Sanger validation, choices=('True positive', 'False positive') quality = float, filters = list, # list of strings samples = list, # list of dictionaries that are <gt_calls> genetic_models = list, # list of strings choices=GENETIC_MODELS compounds = list, # sorted list of <compound> 
ordering='combined_score' genes = list, # list with <gene> dbsnp_id = str, # Gene ids: hgnc_ids = list, # list of hgnc ids (int) hgnc_symbols = list, # list of hgnc symbols (str) panels = list, # list of panel names that the variant ovelapps # Frequencies: thousand_genomes_frequency = float, thousand_genomes_frequency_left = float, thousand_genomes_frequency_right = float, exac_frequency = float, max_thousand_genomes_frequency = float, max_exac_frequency = float, local_frequency = float, local_obs_old = int, local_obs_hom_old = int, local_obs_total_old = int, # default=638 # Predicted deleteriousness: cadd_score = float, clnsig = list, # list of <clinsig> spidex = float, missing_data = bool, # default False # STR specific information str_repid = str, repeat id generally corresponds to gene symbol str_ru = str, used e g in PanelApp naming of STRs str_ref = int, reference copy number str_len = int, number of repeats found in case str_status = str, this indicates the severity of the expansion level # Callers gatk = str, # choices=VARIANT_CALL, default='Not Used' samtools = str, # choices=VARIANT_CALL, default='Not Used' freebayes = str, # choices=VARIANT_CALL, default='Not Used' # Conservation: phast_conservation = list, # list of str, choices=CONSERVATION gerp_conservation = list, # list of str, choices=CONSERVATION phylop_conservation = list, # list of str, choices=CONSERVATION # Database options: gene_lists = list, manual_rank = int, # choices=[0, 1, 2, 3, 4, 5] dismiss_variant = list, acmg_evaluation = str, # choices=ACMG_TERMS )
388,492
def _unsigned_bounds(self): ssplit = self._ssplit() if len(ssplit) == 1: lb = ssplit[0].lower_bound ub = ssplit[0].upper_bound return [ (lb, ub) ] elif len(ssplit) == 2: lb_1 = ssplit[0].lower_bound ub_1 = ssplit[0].upper_bound lb_2 = ssplit[1].lower_bound ub_2 = ssplit[1].upper_bound return [ (lb_1, ub_1), (lb_2, ub_2) ] else: raise Exception()
Get lower bound and upper bound for `self` in unsigned arithmetic. :return: a list of (lower_bound, upper_bound) tuples.
388,493
def process_embed(embed_items=None, embed_tracks=None, embed_metadata=None, embed_insights=None): result = None embed = if embed_items: embed = if embed_tracks: if embed != : embed += embed += if embed_metadata: if embed != : embed += embed += if embed_insights: if embed != : embed += embed += if embed != : result = embed return result
Returns an embed field value based on the parameters.
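The flag-joining logic can be sketched as follows; the literal field names (`items`, `tracks`, `metadata`, `insights`) are assumptions, since the actual strings are elided in the source:

```python
def process_embed(embed_items=False, embed_tracks=False,
                  embed_metadata=False, embed_insights=False):
    # Collect the enabled embed fields, then join them with commas;
    # return None when nothing is enabled
    parts = []
    if embed_items:
        parts.append('items')
    if embed_tracks:
        parts.append('tracks')
    if embed_metadata:
        parts.append('metadata')
    if embed_insights:
        parts.append('insights')
    return ','.join(parts) or None
```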
388,494
def remove_block(self, block, index="-1"): self[index]["__blocks__"].remove(block) self[index]["__names__"].remove(block.raw())
Remove block element from scope Args: block (Block): Block object
388,495
def disassociate(self, eip_or_aid): if "." in eip_or_aid: return "true" == self.call("DisassociateAddress", response_data_key="return", PublicIp=eip_or_aid) else: return "true" == self.call("DisassociateAddress", response_data_key="return", AllocationId=eip_or_aid)
Disassociates an EIP. If the EIP was allocated for a VPC instance, an AllocationId(aid) must be provided instead of a PublicIp.
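The dispatch on `'.'` can be isolated into a hypothetical helper (the original immediately issues the `DisassociateAddress` call):

```python
def classify_eip_argument(eip_or_aid):
    # A dotted value looks like an IPv4 public IP; anything else is
    # treated as a VPC allocation id
    if '.' in eip_or_aid:
        return {'PublicIp': eip_or_aid}
    return {'AllocationId': eip_or_aid}
```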
388,496
def from_raw(self, rval: RawValue, jptr: JSONPointer = "") -> Value: def convert(val): if isinstance(val, list): res = ArrayValue([convert(x) for x in val]) elif isinstance(val, dict): res = ObjectValue({x: convert(val[x]) for x in val}) else: res = val return res return convert(rval)
Override the superclass method.
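A runnable sketch of the recursive conversion above, using bare `list`/`dict` subclasses as stand-ins for the real `ArrayValue`/`ObjectValue` wrapper types:

```python
class ArrayValue(list):
    """Stand-in for the real ArrayValue wrapper."""

class ObjectValue(dict):
    """Stand-in for the real ObjectValue wrapper."""

def from_raw(rval):
    # Recursively wrap raw JSON lists/dicts; scalars pass through unchanged
    def convert(val):
        if isinstance(val, list):
            return ArrayValue(convert(x) for x in val)
        if isinstance(val, dict):
            return ObjectValue({k: convert(v) for k, v in val.items()})
        return val
    return convert(rval)

tree = from_raw({'a': [1, {'b': 2}]})
```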
388,497
def get_user_logins(self, user_id, params={}): url = USERS_API.format(user_id) + "/logins" data = self._get_paged_resource(url, params=params) logins = [] for login_data in data: logins.append(Login(data=login_data)) return logins
Return a user's logins for the given user_id. https://canvas.instructure.com/doc/api/logins.html#method.pseudonyms.index
388,498
def invalid(cls, data, context=None): return cls(cls.TagType.INVALID, data, context)
Shortcut to create an INVALID Token.
388,499
def calc_max_flexural_wavelength(self): if np.isscalar(self.D): Dmax = self.D else: Dmax = self.D.max() alpha = (4*Dmax/(self.drho*self.g))**.25 self.maxFlexuralWavelength = 2*np.pi*alpha self.maxFlexuralWavelength_ncells_x = int(np.ceil(self.maxFlexuralWavelength / self.dx)) self.maxFlexuralWavelength_ncells_y = int(np.ceil(self.maxFlexuralWavelength / self.dy))
Returns the approximate maximum flexural wavelength This is important when padding of the grid is required: in Flexure (this code), grids are padded out to one maximum flexural wavelength, but in any case, the flexural wavelength is a good characteristic distance for any truncation limit
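For a scalar rigidity `D`, the calculation reduces to a couple of lines. The numeric inputs below are representative assumptions, not values from the source:

```python
import math

def max_flexural_wavelength(D, drho, g):
    # alpha = (4*D / (drho*g))**0.25; the flexural wavelength is 2*pi*alpha
    alpha = (4.0 * D / (drho * g)) ** 0.25
    return 2.0 * math.pi * alpha

# Hypothetical lithosphere values: D = 1e23 N m, density contrast
# drho = 2300 kg/m^3, g = 9.8 m/s^2 -> wavelength of a few hundred km
wavelength = max_flexural_wavelength(1e23, 2300.0, 9.8)
```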