Unnamed: 0 (int64, 0–389k) | code (stringlengths 26–79.6k) | docstring (stringlengths 1–46.9k) |
---|---|---|
20,100 | def head_object(Bucket=None, IfMatch=None, IfModifiedSince=None, IfNoneMatch=None, IfUnmodifiedSince=None, Key=None, Range=None, VersionId=None, SSECustomerAlgorithm=None, SSECustomerKey=None, SSECustomerKeyMD5=None, RequestPayer=None, PartNumber=None):
pass | The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
See also: AWS API Documentation
:example: response = client.head_object(
Bucket='string',
IfMatch='string',
IfModifiedSince=datetime(2015, 1, 1),
IfNoneMatch='string',
IfUnmodifiedSince=datetime(2015, 1, 1),
Key='string',
Range='string',
VersionId='string',
SSECustomerAlgorithm='string',
SSECustomerKey='string',
SSECustomerKeyMD5='string',
RequestPayer='requester',
PartNumber=123
)
:type Bucket: string
:param Bucket: [REQUIRED]
:type IfMatch: string
:param IfMatch: Return the object only if its entity tag (ETag) is the same as the one specified, otherwise return a 412 (precondition failed).
:type IfModifiedSince: datetime
:param IfModifiedSince: Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified).
:type IfNoneMatch: string
:param IfNoneMatch: Return the object only if its entity tag (ETag) is different from the one specified, otherwise return a 304 (not modified).
:type IfUnmodifiedSince: datetime
:param IfUnmodifiedSince: Return the object only if it has not been modified since the specified time, otherwise return a 412 (precondition failed).
:type Key: string
:param Key: [REQUIRED]
:type Range: string
:param Range: Downloads the specified byte range of an object. For more information about the HTTP Range header, go to http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35.
:type VersionId: string
:param VersionId: VersionId used to reference a specific version of the object.
:type SSECustomerAlgorithm: string
:param SSECustomerAlgorithm: Specifies the algorithm to use when encrypting the object (e.g., AES256).
:type SSECustomerKey: string
:param SSECustomerKey: Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm header.
:type SSECustomerKeyMD5: string
:param SSECustomerKeyMD5: Specifies the 128-bit MD5 digest of the encryption key according to RFC 1321. Amazon S3 uses this header for a message integrity check to ensure the encryption key was transmitted without error. Please note that this parameter is automatically populated if it is not provided. Including this parameter is not required
:type RequestPayer: string
:param RequestPayer: Confirms that the requester knows that she or he will be charged for the request. Bucket owners need not specify this parameter in their requests. Documentation on downloading objects from requester pays buckets can be found at http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectsinRequesterPaysBuckets.html
:type PartNumber: integer
:param PartNumber: Part number of the object being read. This is a positive integer between 1 and 10,000. Effectively performs a 'ranged' HEAD request for the part specified. Useful for querying the size of the part and the number of parts in this object.
:rtype: dict
:return: {
'DeleteMarker': True|False,
'AcceptRanges': 'string',
'Expiration': 'string',
'Restore': 'string',
'LastModified': datetime(2015, 1, 1),
'ContentLength': 123,
'ETag': 'string',
'MissingMeta': 123,
'VersionId': 'string',
'CacheControl': 'string',
'ContentDisposition': 'string',
'ContentEncoding': 'string',
'ContentLanguage': 'string',
'ContentType': 'string',
'Expires': datetime(2015, 1, 1),
'WebsiteRedirectLocation': 'string',
'ServerSideEncryption': 'AES256'|'aws:kms',
'Metadata': {
'string': 'string'
},
'SSECustomerAlgorithm': 'string',
'SSECustomerKeyMD5': 'string',
'SSEKMSKeyId': 'string',
'StorageClass': 'STANDARD'|'REDUCED_REDUNDANCY'|'STANDARD_IA',
'RequestCharged': 'requester',
'ReplicationStatus': 'COMPLETE'|'PENDING'|'FAILED'|'REPLICA',
'PartsCount': 123
}
:returns:
(dict) --
DeleteMarker (boolean) -- Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
AcceptRanges (string) --
Expiration (string) -- If the object expiration is configured (see PUT Bucket lifecycle), the response includes this header. It includes the expiry-date and rule-id key value pairs providing object expiration information. The value of the rule-id is URL encoded.
Restore (string) -- Provides information about object restoration operation and expiration time of the restored object copy.
LastModified (datetime) -- Last modified date of the object
ContentLength (integer) -- Size of the body in bytes.
ETag (string) -- An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL
MissingMeta (integer) -- This is set to the number of metadata entries not returned in x-amz-meta headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.
VersionId (string) -- Version of the object.
CacheControl (string) -- Specifies caching behavior along the request/reply chain.
ContentDisposition (string) -- Specifies presentational information for the object.
ContentEncoding (string) -- Specifies what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
ContentLanguage (string) -- The language the content is in.
ContentType (string) -- A standard MIME type describing the format of the object data.
Expires (datetime) -- The date and time at which the object is no longer cacheable.
WebsiteRedirectLocation (string) -- If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
ServerSideEncryption (string) -- The Server-side encryption algorithm used when storing this object in S3 (e.g., AES256, aws:kms).
Metadata (dict) -- A map of metadata to store with the object in S3.
(string) --
(string) --
SSECustomerAlgorithm (string) -- If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
SSECustomerKeyMD5 (string) -- If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide round trip message integrity verification of the customer-provided encryption key.
SSEKMSKeyId (string) -- If present, specifies the ID of the AWS Key Management Service (KMS) master encryption key that was used for the object.
StorageClass (string) --
RequestCharged (string) -- If present, indicates that the requester was successfully charged for the request.
ReplicationStatus (string) --
PartsCount (integer) -- The count of parts this object has. |
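A common use of head_object is checking whether a key exists without downloading the body. A minimal sketch, assuming valid AWS credentials; the bucket and key names are placeholders:
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')
try:
    resp = s3.head_object(Bucket='my-bucket', Key='data/report.csv')  # placeholders
    print('Size:', resp['ContentLength'], 'ETag:', resp['ETag'])
except ClientError as e:
    if e.response['Error']['Code'] == '404':
        print('Object does not exist')
    else:
        raise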
20,101 | def _display(self, sent, now, chunk, mbps):
if self.parent is not None:
self.parent._display(self.parent.offset + sent, now, chunk, mbps)
return
elapsed = now - self.startTime
if sent > 0 and self.total is not None and sent <= self.total:
eta = (self.total - sent) * elapsed.total_seconds() / sent
eta = datetime.timedelta(seconds=eta)
else:
eta = None
self.output.write(
"\r %s: Sent %s%s%s ETA: %s (%s) %s%20s\r" % (
elapsed,
util.humanize(sent),
"" if self.total is None else " of %s" % (util.humanize(self.total),),
"" if self.total is None else " (%d%%)" % (int(100 * sent / self.total),),
eta,
"" if not mbps else "%.3g Mbps " % (mbps,),
chunk or "",
" ",
)
)
self.output.flush() | Display intermediate progress. |
20,102 | def format(self, sql, params):
if isinstance(sql, unicode):
string_type = unicode
elif isinstance(sql, bytes):
string_type = bytes
sql = sql.decode(_BYTES_ENCODING)
else:
raise TypeError("sql:{!r} is not a unicode or byte string.".format(sql))
if self.named == 'numeric':
if isinstance(params, collections.Mapping):
params = {string_type(idx): val for idx, val in iteritems(params)}
elif isinstance(params, collections.Sequence) and not isinstance(params, (unicode, bytes)):
params = {string_type(idx): val for idx, val in enumerate(params, 1)}
if not isinstance(params, collections.Mapping):
raise TypeError("params:{!r} is not a dict.".format(params))
names = self.match.findall(sql)
ord_params = []
name_to_ords = {}
for name in names:
value = params[name]
if isinstance(value, tuple):
ord_params.extend(value)
if name not in name_to_ords:
name_to_ords[name] = '(' + ','.join((self.replace,) * len(value)) + ')'
else:
ord_params.append(value)
if name not in name_to_ords:
name_to_ords[name] = self.replace
sql = self.match.sub(lambda m: name_to_ords[m.group(1)], sql)
if string_type is bytes:
sql = sql.encode(_BYTES_ENCODING)
return sql, ord_params | Formats the SQL query to use ordinal parameters instead of named
parameters.
*sql* (|string|) is the SQL query.
*params* (|dict|) maps each named parameter (|string|) to value
(|object|). If |self.named| is "numeric", then *params* can be
simply a |sequence| of values mapped by index.
Returns a 2-|tuple| containing: the formatted SQL query (|string|),
and the ordinal parameters (|list|). |
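The conversion the docstring describes is easy to see in a self-contained sketch; the regex, the ':name' placeholder style, and '?' as the ordinal marker are illustrative assumptions (the class above is configurable):
import re

_NAMED = re.compile(r':(\w+)')

def to_ordinal(sql, params, replace='?'):
    ord_params = []
    def sub(match):
        value = params[match.group(1)]
        if isinstance(value, tuple):
            # expand tuples into "(?, ?, ...)" for IN clauses
            ord_params.extend(value)
            return '(' + ', '.join([replace] * len(value)) + ')'
        ord_params.append(value)
        return replace
    return _NAMED.sub(sub, sql), ord_params

print(to_ordinal("SELECT * FROM t WHERE id = :id AND tag IN :tags",
                 {'id': 7, 'tags': ('a', 'b')}))
# ('SELECT * FROM t WHERE id = ? AND tag IN (?, ?)', [7, 'a', 'b'])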
20,103 | def splitPrefix(name):
if isinstance(name, basestring) \
and ':' in name:
return tuple(name.split(':', 1))
else:
return (None, name) | Split the name into a tuple (I{prefix}, I{name}). The first element in
the tuple is I{None} when the name doesn't have a prefix.
@param name: A node name containing an optional prefix.
@type name: basestring
@return: A tuple containing the (2) parts of I{name}
@rtype: (I{prefix}, I{name}) |
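Illustrative behaviour, assuming the split character restored above is the XML prefix separator ':':
splitPrefix('soap:Envelope')  # -> ('soap', 'Envelope')
splitPrefix('Envelope')       # -> (None, 'Envelope')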
20,104 | def dropSpans(spans, text):
spans.sort()
res = ''
offset = 0
for s, e in spans:
if offset <= s:
if offset < s:
res += text[offset:s]
offset = e
res += text[offset:]
return res | Drop from text the blocks identified in :param spans:, possibly nested. |
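A quick illustration of the offset bookkeeping (spans are (start, end) offsets into text):
dropSpans([(0, 5), (11, 12)], "Hello world!")  # -> ' world'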
20,105 | def add_page(self, page=None):
if page is None:
self.page = PDFPage(self.orientation_default, self.layout_default, self.margins)
else:
self.page = page
self.page._set_index(len(self.pages))
self.pages.append(self.page)
currentfont = self.font
self.set_font(font=currentfont)
self.session._reset_colors() | May generate and add a PDFPage separately, or use this to generate
a default page. |
20,106 | def netconf_config_change_datastore(self, **kwargs):
config = ET.Element("config")
netconf_config_change = ET.SubElement(config, "netconf-config-change", xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-notifications")
datastore = ET.SubElement(netconf_config_change, "datastore")
datastore.text = kwargs.pop('datastore')
callback = kwargs.pop('callback', self._callback)
return callback(config) | Auto Generated Code |
20,107 | def full_value(self):
s = self.name_value()
s += self.path_value()
s += "\n\n"
return s | Returns the full value with the path also (i.e., name = value (path))
:returns: String |
20,108 | def array_controller_by_model(self, model):
for member in self.get_members():
if member.model == model:
return member | Returns the array controller instance matching the given model.
:returns: Instance of array controller |
20,109 | def tile_read(source, bounds, tilesize, **kwargs):
if isinstance(source, DatasetReader):
return _tile_read(source, bounds, tilesize, **kwargs)
else:
with rasterio.open(source) as src_dst:
return _tile_read(src_dst, bounds, tilesize, **kwargs) | Read data and mask.
Parameters
----------
source : str or rasterio.io.DatasetReader
input file path or rasterio.io.DatasetReader object
bounds : list
Mercator tile bounds (left, bottom, right, top)
tilesize : int
Output image size
kwargs: dict, optional
These will be passed to the _tile_read function.
Returns
-------
out : array, int
returns pixel value. |
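A usage sketch, assuming tile_read returns the (data, mask) pair described above; the mercator bounds come from the mercantile package and "cog.tif" is a placeholder path:
import mercantile
tile_bounds = mercantile.xy_bounds(mercantile.Tile(x=486, y=332, z=10))
data, mask = tile_read("cog.tif", tile_bounds, tilesize=256)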
20,110 | def _is_monotonic(coord, axis=0):
if coord.shape[axis] < 3:
return True
else:
n = coord.shape[axis]
delta_pos = (coord.take(np.arange(1, n), axis=axis) >=
coord.take(np.arange(0, n - 1), axis=axis))
delta_neg = (coord.take(np.arange(1, n), axis=axis) <=
coord.take(np.arange(0, n - 1), axis=axis))
return np.all(delta_pos) or np.all(delta_neg) | >>> _is_monotonic(np.array([0, 1, 2]))
True
>>> _is_monotonic(np.array([2, 1, 0]))
True
>>> _is_monotonic(np.array([0, 2, 1]))
False |
20,111 | def set_env(self):
if self.cov_source is None:
os.environ['COV_CORE_SOURCE'] = ''
else:
os.environ['COV_CORE_SOURCE'] = UNIQUE_SEP.join(self.cov_source)
os.environ['COV_CORE_DATAFILE'] = self.cov_data_file
os.environ['COV_CORE_CONFIG'] = self.cov_config | Put info about coverage into the env so that subprocesses can activate coverage. |
20,112 | def to_regex(regex, flags=0):
if regex is None:
raise TypeError()
if hasattr(regex, 'match'):
return regex
return re.compile(regex, flags) | Given a string, this function returns a new re.RegexObject.
Given a re.RegexObject, this function just returns the same object.
:type regex: string|re.RegexObject
:param regex: A regex or a re.RegexObject
:type flags: int
:param flags: See Python's re.compile().
:rtype: re.RegexObject
:return: The Python regex object. |
20,113 | def _get_dst_resolution(self, dst_res=None):
if dst_res is None:
dst_res = min(self._res_indices.keys())
return dst_res | Get default resolution, i.e. the highest resolution or smallest cell size. |
20,114 | def reply(self, user, msg, errors_as_replies=True):
return self._brain.reply(user, msg, errors_as_replies) | Fetch a reply from the RiveScript brain.
Arguments:
user (str): A unique user ID for the person requesting a reply.
This could be e.g. a screen name or nickname. It's used internally
to store user variables (including topic and history), so if your
bot has multiple users each one should have a unique ID.
msg (str): The user's message. This is allowed to contain
punctuation and such, but any extraneous data such as HTML tags
should be removed in advance.
errors_as_replies (bool): When errors are encountered (such as a
deep recursion error, no reply matched, etc.) this will make the
reply be a text representation of the error message. If you set
this to ``False``, errors will instead raise an exception, such as
a ``DeepRecursionError`` or ``NoReplyError``. By default, no
exceptions are raised and errors are set in the reply instead.
Returns:
str: The reply output. |
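A minimal end-to-end sketch with the rivescript package, assuming a './brain' directory of .rive files exists:
from rivescript import RiveScript

bot = RiveScript()
bot.load_directory("./brain")
bot.sort_replies()
print(bot.reply("alice", "Hello, bot!"))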
20,115 | def make_attrstring(attr):
attrstring = ' '.join(['%s="%s"' % (k, v) for k, v in attr.items()])
return '%s%s' % (' ' if attrstring != '' else '', attrstring) | Returns an attribute string in the form key="val" |
20,116 | def at_depth(self, depth):
for child in list(self.children):
if depth == 0:
yield child
else:
for grandchild in child.at_depth(depth - 1):
yield grandchild | Returns a generator yielding all nodes in the tree at a specific depth
:param depth:
An integer >= 0 of the depth of nodes to yield
:return:
A generator yielding PolicyTreeNode objects |
20,117 | def status(self):
if self.provider:
status = self.provider.status(self.engines)
else:
status = []
return status | Returns the status of the executor via probing the execution providers. |
20,118 | def apply(self, func, skills):
def run_item(skill):
try:
func(skill)
return True
except MsmException as e:
LOG.error('Error running {} on {}: {}'.format(
func.__name__, skill.name, repr(e)
))
return False
except:
LOG.exception('Error running {} on {}'.format(
func.__name__, skill.name
))
with ThreadPool(20) as tp:
return tp.map(run_item, skills) | Run a function on all skills in parallel |
20,119 | def complete_url(self, url):
if self.base_url:
return urlparse.urljoin(self.base_url, url)
else:
return url | Completes a given URL with this instance's URL base. |
20,120 | def expire_leaderboard_at_for(self, leaderboard_name, timestamp):
pipeline = self.redis_connection.pipeline()
pipeline.expireat(leaderboard_name, timestamp)
pipeline.expireat(
self._ties_leaderboard_key(leaderboard_name), timestamp)
pipeline.expireat(self._member_data_key(leaderboard_name), timestamp)
pipeline.execute() | Expire the given leaderboard at a specific UNIX timestamp. Do not use this with
leaderboards that utilize member data as there is no facility to cascade the
expiration out to the keys for the member data.
@param leaderboard_name [String] Name of the leaderboard.
@param timestamp [int] UNIX timestamp at which the leaderboard will be expired. |
20,121 | def request_show(self, id, **kwargs):
"https://developer.zendesk.com/rest_api/docs/core/requests
api_path = "/api/v2/requests/{id}.json"
api_path = api_path.format(id=id)
return self.call(api_path, **kwargs) | https://developer.zendesk.com/rest_api/docs/core/requests#show-request |
20,122 | def pick_peaks(nc, L=16, offset_denom=0.1):
offset = nc.mean() * float(offset_denom)
th = filters.median_filter(nc, size=L) + offset
peaks = []
for i in range(1, nc.shape[0] - 1):
if nc[i - 1] < nc[i] and nc[i] > nc[i + 1]:
if nc[i] > th[i]:
peaks.append(i)
return peaks | Obtain peaks from a novelty curve using an adaptive threshold. |
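A toy run (pick_peaks relies on scipy's median filter, e.g. from scipy.ndimage import filters):
import numpy as np
nc = np.zeros(64)
nc[[10, 30, 50]] = 1.0       # three sharp novelty spikes
print(pick_peaks(nc, L=16))  # -> [10, 30, 50]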
20,123 | def at(*args, **kwargs):
if len(args) < 2:
return {'jobs': []}
if 'tag' in kwargs:
stdin = '### SALT: {0}\n{1}'.format(kwargs['tag'], ' '.join(args[1:]))
else:
stdin = ' '.join(args[1:])
cmd_kwargs = {'stdin': stdin, 'python_shell': False}
if 'runas' in kwargs:
cmd_kwargs['runas'] = kwargs['runas']
res = __salt__['cmd.run_all']('at "{timespec}"'.format(
timespec=args[0]
), **cmd_kwargs)
if res['retcode'] > 0:
if 'bad time specification' in res['stderr']:
return {'jobs': [], 'error': 'invalid timespec'}
return {'jobs': [], 'error': res['stderr']}
else:
jobid = res['stderr'].splitlines()[1]
jobid = six.text_type(jobid.split()[1])
return atq(jobid) | Add a job to the queue.
The 'timespec' follows the format documented in the
at(1) manpage.
CLI Example:
.. code-block:: bash
salt '*' at.at <timespec> <cmd> [tag=<tag>] [runas=<user>]
salt '*' at.at 12:05am '/sbin/reboot' tag=reboot
salt '*' at.at '3:05am +3 days' 'bin/myscript' tag=nightly runas=jim |
20,124 | def CopyToIsoFormat(cls, timestamp, timezone=pytz.UTC, raise_error=False):
datetime_object = cls.CopyToDatetime(
timestamp, timezone, raise_error=raise_error)
return datetime_object.isoformat() | Copies the timestamp to an ISO 8601 formatted string.
Args:
timestamp: The timestamp which is an integer containing the number
of microseconds since January 1, 1970, 00:00:00 UTC.
timezone: Optional timezone (instance of pytz.timezone).
raise_error: Boolean that if set to True will not absorb an OverflowError
if the timestamp is out of bounds. By default there will be
no error raised.
Returns:
A string containing an ISO 8601 formatted date and time. |
20,125 | def API520_B(Pset, Pback, overpressure=0.1):
gauge_backpressure = (Pback-atm)/(Pset-atm)*100
if overpressure not in [0.1, 0.16, 0.21]:
raise Exception('Overpressure must be one of 0.1, 0.16, or 0.21')
if (overpressure == 0.1 and gauge_backpressure < 30) or (
overpressure == 0.16 and gauge_backpressure < 38) or (
overpressure == 0.21 and gauge_backpressure < 50):
return 1
elif gauge_backpressure > 50:
raise Exception('Percent gauge backpressure must be under 50%')
if overpressure == 0.16:
Kb = interp(gauge_backpressure, Kb_16_over_x, Kb_16_over_y)
elif overpressure == 0.1:
Kb = interp(gauge_backpressure, Kb_10_over_x, Kb_10_over_y)
return Kb | r'''Calculates capacity correction due to backpressure on balanced
spring-loaded PRVs in vapor service. For pilot operated valves,
this is always 1. Applicable up to 50% of the percent gauge backpressure,
For use in API 520 relief valve sizing. 1D interpolation among a table with
53 backpressures is performed.
Parameters
----------
Pset : float
Set pressure for relief [Pa]
Pback : float
Backpressure, [Pa]
overpressure : float, optional
The maximum fraction overpressure; one of 0.1, 0.16, or 0.21, []
Returns
-------
Kb : float
Correction due to vapor backpressure [-]
Notes
-----
If the calculated gauge backpressure is less than 30%, 38%, or 50% for
overpressures of 0.1, 0.16, or 0.21, a value of 1 is returned.
Percent gauge backpressure must be under 50%.
Examples
--------
Custom examples from figure 30:
>>> API520_B(1E6, 5E5)
0.7929945420944432
References
----------
.. [1] API Standard 520, Part 1 - Sizing and Selection. |
20,126 | def compute_logarithmic_scale(min_, max_, min_scale, max_scale):
if max_ <= 0 or min_ <= 0:
return []
min_order = int(floor(log10(min_)))
max_order = int(ceil(log10(max_)))
positions = []
amplitude = max_order - min_order
if amplitude <= 1:
return []
detail = 10.
while amplitude * detail < min_scale * 5:
detail *= 2
while amplitude * detail > max_scale * 3:
detail /= 2
for order in range(min_order, max_order + 1):
for i in range(int(detail)):
tick = (10 * i / detail or 1) * 10**order
tick = round_to_scale(tick, tick)
if min_ <= tick <= max_ and tick not in positions:
positions.append(tick)
return positions | Compute an optimal scale for logarithmic axes |
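An illustrative call (round_to_scale comes from the same charting module; min_scale/max_scale bound how dense the ticks get):
compute_logarithmic_scale(1, 10000, min_scale=4, max_scale=20)
# roughly one tick per mantissa step in each decade: 1, 2, ..., 9, 10, 20, ...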
20,127 | def from_location(cls, location):
if not location:
return cls()
try:
if hasattr(location, 'isLocation'):  # already a Ladybug Location
return location
elif hasattr(location, 'city'):  # attribute name assumed; literal lost in extraction
return cls(city=location)
# city, latitude, etc. are parsed from the location string in the full
# implementation; that parsing block is missing from this extract
return cls(city=city, country=None, latitude=latitude,
longitude=longitude, time_zone=time_zone,
elevation=elevation)
except Exception as e:
raise ValueError(
"Failed to create a Location from %s!\n%s" % (location, e)) | Try to create a Ladybug location from a location string.
Args:
locationString: Location string
Usage:
l = Location.from_location(locationString) |
20,128 | async def join_voice_channel(self, guild_id, channel_id):
voice_ws = self.get_voice_ws(guild_id)
await voice_ws.voice_state(guild_id, channel_id) | Alternative way to join a voice channel if node is known. |
20,129 | def results(context, history_log):
if context.obj is None:
context.obj = {}
context.obj['history_log'] = history_log  # key assumed from the parameter name
if context.invoked_subcommand is None:
context.invoke(show, item=1) | Process provided history log and results files. |
20,130 | def show_bounds(self, mesh=None, bounds=None, show_xaxis=True,
show_yaxis=True, show_zaxis=True, show_xlabels=True,
show_ylabels=True, show_zlabels=True, italic=False,
bold=True, shadow=False, font_size=None,
font_family=None, color=None,
xlabel='X Axis', ylabel='Y Axis', zlabel='Z Axis',
use_2d=False, grid=None, location='all',  # location default assumed
ticks=None,
all_edges=False, corner_factor=0.5, fmt=None,
minor_ticks=False, loc=None, padding=0.0):
kwargs = locals()
_ = kwargs.pop('self')
_ = kwargs.pop('loc')
self._active_renderer_index = self.loc_to_index(loc)
renderer = self.renderers[self._active_renderer_index]
renderer.show_bounds(**kwargs) | Adds bounds axes. Shows the bounds of the most recent input
mesh unless mesh is specified.
Parameters
----------
mesh : vtkPolydata or unstructured grid, optional
Input mesh to draw bounds axes around
bounds : list or tuple, optional
Bounds to override mesh bounds.
[xmin, xmax, ymin, ymax, zmin, zmax]
show_xaxis : bool, optional
Makes x axis visible. Default True.
show_yaxis : bool, optional
Makes y axis visible. Default True.
show_zaxis : bool, optional
Makes z axis visible. Default True.
show_xlabels : bool, optional
Shows x labels. Default True.
show_ylabels : bool, optional
Shows y labels. Default True.
show_zlabels : bool, optional
Shows z labels. Default True.
italic : bool, optional
Italicises axis labels and numbers. Default False.
bold : bool, optional
Bolds axis labels and numbers. Default True.
shadow : bool, optional
Adds a black shadow to the text. Default False.
font_size : float, optional
Sets the size of the label font. Defaults to 16.
font_family : string, optional
Font family. Must be either courier, times, or arial.
color : string or 3 item list, optional
Color of all labels and axis titles. Default white.
Either a string, rgb list, or hex color string. For example:
color='white'
color='w'
color=[1, 1, 1]
color='#FFFFFF'
xlabel : string, optional
Title of the x axis. Default "X Axis"
ylabel : string, optional
Title of the y axis. Default "Y Axis"
zlabel : string, optional
Title of the z axis. Default "Z Axis"
use_2d : bool, optional
A bug with vtk 6.3 in Windows seems to cause this function
to crash. This can be enabled for smoother plotting in
other environments.
grid : bool or str, optional
Add grid lines to the backface (``True``, ``'back'``, or
``'backface'``) or to the frontface (``'front'``,
``'frontface'``) of the axes actor.
location : str, optional
Set how the axes are drawn: either static (``'all'``),
closest triad (``front``), furthest triad (``'back'``),
static closest to the origin (``'origin'``), or outer
edges (``'outer'``) in relation to the camera
position. Options include: ``'all', 'front', 'back',
'origin', 'outer'``
ticks : str, optional
Set how the ticks are drawn on the axes grid. Options include:
``'inside', 'outside', 'both'``
all_edges : bool, optional
Adds an unlabeled and unticked box at the boundaries of
plot. Useful for when wanting to plot outer grids while
still retaining all edges of the boundary.
corner_factor : float, optional
If ``all_edges=True``, this is the factor along each axis to
draw the default box. Default is 0.5 to show the full box.
loc : int, tuple, or list
Index of the renderer to add the actor to. For example,
``loc=2`` or ``loc=(1, 1)``. If None, selects the last
active Renderer.
padding : float, optional
An optional percent padding along each axial direction to cushion
the datasets in the scene from the axes annotations. Defaults to
have no padding
Returns
-------
cube_axes_actor : vtk.vtkCubeAxesActor
Bounds actor
Examples
--------
>>> import vtki
>>> from vtki import examples
>>> mesh = vtki.Sphere()
>>> plotter = vtki.Plotter()
>>> _ = plotter.add_mesh(mesh)
>>> _ = plotter.show_bounds(grid='front', location='outer', all_edges=True)
>>> plotter.show() # doctest:+SKIP |
20,131 | def _bnd(self, xloc, dist, cache):
return numpy.log(evaluation.evaluate_bound(
dist, numpy.e**xloc, cache=cache)) | Distribution bounds. |
20,132 | def macro_body(self, node, frame):
frame = frame.inner()
frame.symbols.analyze_node(node)
macro_ref = MacroRef(node)
explicit_caller = None
skip_special_params = set()
args = []
for idx, arg in enumerate(node.args):
if arg.name == 'caller':
explicit_caller = idx
if arg.name in ('kwargs', 'varargs'):
skip_special_params.add(arg.name)
args.append(frame.symbols.ref(arg.name))
undeclared = find_undeclared(node.body, ('caller', 'kwargs', 'varargs'))
if 'caller' in undeclared:
if explicit_caller is not None:
try:
node.defaults[explicit_caller - len(node.args)]
except IndexError:
self.fail('When defining macros or call blocks the special "caller" argument must be omitted or be given a default.', node.lineno)
else:
args.append(frame.symbols.declare_parameter('caller'))
macro_ref.accesses_caller = True
if 'kwargs' in undeclared and 'kwargs' not in skip_special_params:
args.append(frame.symbols.declare_parameter('kwargs'))
macro_ref.accesses_kwargs = True
if 'varargs' in undeclared and 'varargs' not in skip_special_params:
args.append(frame.symbols.declare_parameter('varargs'))
macro_ref.accesses_varargs = True
frame.require_output_check = False
frame.symbols.analyze_node(node)
self.writeline('%s(%s):' % (self.func('macro'), ', '.join(args)), node)
self.indent()
self.buffer(frame)
self.enter_frame(frame)
self.push_parameter_definitions(frame)
for idx, arg in enumerate(node.args):
ref = frame.symbols.ref(arg.name)
self.writeline('if %s is missing:' % ref)
self.indent()
try:
default = node.defaults[idx - len(node.args)]
except IndexError:
self.writeline('%s = undefined(%r, name=%r)' % (
ref,
'parameter %r was not provided' % arg.name,
arg.name))
else:
self.writeline('%s = ' % ref)
self.visit(default, frame)
self.mark_parameter_stored(ref)
self.outdent()
self.pop_parameter_definitions()
self.blockvisit(node.body, frame)
self.return_buffer_contents(frame, force_unescaped=True)
self.leave_frame(frame, with_python_scope=True)
self.outdent()
return frame, macro_ref | Dump the function def of a macro or call block. |
20,133 | def get_mapping(self, doc_type=None, indices=None, raw=False):
if doc_type is None and indices is None:
path = make_path("_mapping")
is_mapping = False
else:
indices = self.conn._validate_indices(indices)
if doc_type:
path = make_path(','.join(indices), doc_type, "_mapping")
is_mapping = True
else:
path = make_path(','.join(indices), "_mapping")
is_mapping = False
result = self.conn._send_request('GET', path)
if raw:
return result
from pyes.mappings import Mapper
mapper = Mapper(result, is_mapping=False,
connection=self.conn,
document_object_field=self.conn.document_object_field)
if doc_type:
return mapper.mappings[doc_type]
return mapper | Register specific mapping definition for a specific type against one or more indices.
(See :ref:`es-guide-reference-api-admin-indices-get-mapping`) |
20,134 | def link(self, camera):
cam1, cam2 = self, camera
while cam1 in cam2._linked_cameras:
cam2._linked_cameras.remove(cam1)
while cam2 in cam1._linked_cameras:
cam1._linked_cameras.remove(cam2)
cam1._linked_cameras.append(cam2)
cam2._linked_cameras.append(cam1) | Link this camera with another camera of the same type
Linked cameras keep each other's state in sync.
Parameters
----------
camera : instance of Camera
The other camera to link. |
20,135 | def thumbnail_preview(src_path):
try:
assert(exists(src_path))
width = '1024'  # thumbnail pixel width; the original literal was lost in extraction
dest_dir = mkdtemp(prefix='preview_')  # prefix assumed
cmd = [QLMANAGE, '-t', '-s', width, src_path, '-o', dest_dir]
assert(check_call(cmd) == 0)
src_filename = basename(src_path)
dest_list = glob(join(dest_dir, '%s*' % (src_filename,)))  # qlmanage writes <name>.png
assert(dest_list)
dest_path = dest_list[0]
assert(exists(dest_path))
return dest_path
except:
return None | Returns the path to small thumbnail preview. |
20,136 | def _update(self, rect, delta_y, force_update_margins=False):
helper = TextHelper(self.editor)
if not self:
return
for zones_id, zone in self._panels.items():
if zones_id == Panel.Position.TOP or \
zones_id == Panel.Position.BOTTOM:
continue
panels = list(zone.values())
for panel in panels:
if panel.scrollable and delta_y:
panel.scroll(0, delta_y)
line, col = helper.cursor_position()
oline, ocol = self._cached_cursor_pos
if line != oline or col != ocol or panel.scrollable:
panel.update(0, rect.y(), panel.width(), rect.height())
self._cached_cursor_pos = helper.cursor_position()
if (rect.contains(self.editor.viewport().rect()) or
force_update_margins):
self._update_viewport_margins() | Updates panels |
20,137 | def low_mem_sq(m, step=100000):
if not m.flags.c_contiguous:
raise ValueError('m must be C-contiguous')
mmt = np.zeros([m.shape[0], m.shape[0]])
for a in range(0, m.shape[0], step):
mx = min(a+step, m.shape[0])
mmt[:, a:mx] = np.dot(m, m[a:mx].T)
return mmt | np.dot(m, m.T) with low mem usage, by doing it in small steps |
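A quick correctness check against the direct product, using a small step to exercise the blocking:
import numpy as np
m = np.ascontiguousarray(np.random.rand(5, 7))
assert np.allclose(low_mem_sq(m, step=2), np.dot(m, m.T))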
20,138 | def init_app(self, app, minters_entry_point_group=None,
fetchers_entry_point_group=None):
self.init_config(app)
app.cli.add_command(cmd)
app.config.setdefault('PIDSTORE_APP_LOGGER_HANDLERS', app.debug)
if app.config['PIDSTORE_APP_LOGGER_HANDLERS']:
for handler in app.logger.handlers:
logger.addHandler(handler)
try:
pkg_resources.get_distribution('invenio-records')
app.config.setdefault('PIDSTORE_OBJECT_ENDPOINTS', dict(
rec='recordmetadata.details_view',
))
except pkg_resources.DistributionNotFound:
app.config.setdefault('PIDSTORE_OBJECT_ENDPOINTS', {})
app.jinja_env.filters['pid_exists'] = pid_exists
state = _PIDStoreState(
app=app,
minters_entry_point_group=minters_entry_point_group,
fetchers_entry_point_group=fetchers_entry_point_group,
)
app.extensions['invenio-pidstore'] = state
return state | Flask application initialization.
Initialize:
* The CLI commands.
* Initialize the logger (Default: `app.debug`).
* Initialize the default admin object link endpoint.
(Default: `{"rec": "recordmetadata.details_view"}` if
`invenio-records` is installed, otherwise `{}`).
* Register the `pid_exists` template filter.
* Initialize extension state.
:param app: The Flask application
:param minters_entry_point_group: The minters entry point group
(Default: None).
:param fetchers_entry_point_group: The fetchers entry point group
(Default: None).
:returns: PIDStore state application. |
20,139 | def createpath(path, mode, exists_ok=True):
try:
os.makedirs(path, mode)
except OSError, e:
if e.errno != errno.EEXIST or not exists_ok:
raise e | Create directories in the indicated path.
:param path: directory path to create
:param mode: permission bits for the created directories
:param exists_ok: if True, an already-existing path is not an error
:return: |
20,140 | def find_locales(self) -> Dict[str, gettext.GNUTranslations]:
translations = {}
for name in os.listdir(self.path):
if not os.path.isdir(os.path.join(self.path, name)):
continue
mo_path = os.path.join(self.path, name, 'LC_MESSAGES', self.domain + '.mo')
if os.path.exists(mo_path):
with open(mo_path, 'rb') as fp:
translations[name] = gettext.GNUTranslations(fp)
elif os.path.exists(mo_path[:-2] + 'po'):
raise RuntimeError(f"Found locale '{name}' but this language is not compiled!")
return translations | Load all compiled locales from path
:return: dict with locales |
20,141 | def batch_split_words(self, sentences: List[str]) -> List[List[Token]]:
return [self.split_words(sentence) for sentence in sentences] | Spacy needs to do batch processing, or it can be really slow. This method lets you take
advantage of that if you want. Default implementation is to just iterate over the sentences
and call ``split_words``, but the ``SpacyWordSplitter`` will actually do batched
processing. |
20,142 | def navigation_info(request):
if request.GET.get() == "1":
nav_class = "wafer-invisible"
else:
nav_class = "wafer-visible"
context = {
'wafer_nav_class': nav_class,  # context key assumed; literal lost
}
return context | Expose whether to display the navigation header and footer |
20,143 | def get_precision_regex():
expr = re.escape(PRECISION_FORMULA)
expr += r'\s*=\s*(\S+)'  # tail pattern assumed; original raw-string literal lost
return re.compile(expr) | Build regular expression used to extract precision
metric from command output |
20,144 | def _col_name(index):
for exp in itertools.count(1):
limit = 26 ** exp
if index < limit:
return ''.join(chr(ord('A') + index // (26 ** i) % 26) for i in range(exp-1, -1, -1))
index -= limit | Converts a column index to a column name.
>>> _col_name(0)
'A'
>>> _col_name(26)
'AA' |
20,145 | def mqc_load_userconfig(paths=()):
mqc_load_config(os.path.join(os.path.dirname(MULTIQC_DIR), 'multiqc_config.yaml'))
mqc_load_config(os.path.expanduser('~/.multiqc_config.yaml'))
if os.environ.get('MULTIQC_CONFIG_PATH') is not None:
mqc_load_config(os.environ.get('MULTIQC_CONFIG_PATH'))
mqc_load_config('multiqc_config.yaml')
for p in paths:
mqc_load_config(p) | Overwrite config defaults with user config files |
20,146 | def middleware(self, *args, **kwargs):
kwargs.setdefault('priority', 5)  # default keys assumed from the plugin-middleware convention
kwargs.setdefault('relative', None)
kwargs.setdefault('attach_to', None)
kwargs.setdefault('with_context', False)
if len(args) == 1 and callable(args[0]):
middle_f = args[0]
self._middlewares.append(
FutureMiddleware(middle_f, args=tuple(), kwargs=kwargs))
return middle_f
def wrapper(middleware_f):
self._middlewares.append(
FutureMiddleware(middleware_f, args=args, kwargs=kwargs))
return middleware_f
return wrapper | Decorate and register middleware
:param args: captures all of the positional arguments passed in
:type args: tuple(Any)
:param kwargs: captures the keyword arguments passed in
:type kwargs: dict(Any)
:return: The middleware function to use as the decorator
:rtype: fn |
20,147 | def get_paths(self):
paths = []
for key, child in six.iteritems(self):
if isinstance(child, TreeMap) and child:
for path in child.get_paths():
path.insert(0, key)
paths.append(path)
else:
paths.append([key])
return paths | Get all paths from the root to the leaves.
For example, given a chain like `{'a':{'b':{'c':None}}}`,
this method would return `[['a', 'b', 'c']]`.
Returns:
A list of lists of paths. |
20,148 | def f_store(self, recursive=True, store_data=pypetconstants.STORE_DATA,
max_depth=None):
traj = self._nn_interface._root_instance
storage_service = traj.v_storage_service
storage_service.store(pypetconstants.GROUP, self,
trajectory_name=traj.v_name,
recursive=recursive,
store_data=store_data,
max_depth=max_depth) | Stores a group node to disk
:param recursive:
Whether recursively all children should be stored too. Default is ``True``.
:param store_data:
For how to choose 'store_data' see :ref:`more-on-storing`.
:param max_depth:
In case `recursive` is `True`, you can specify the maximum depth to store
data relative from current node. Leave `None` if you don't want to limit
the depth. |
20,149 | def _get_data(self) -> BaseFrameManager:
def iloc(partition, row_internal_indices, col_internal_indices):
return partition.iloc[row_internal_indices, col_internal_indices]
masked_data = self.parent_data.apply_func_to_indices_both_axis(
func=iloc,
row_indices=self.index_map.values,
col_indices=self.columns_map.values,
lazy=False,
keep_remaining=False,
)
return masked_data | Perform the map step
Returns:
A BaseFrameManager object. |
20,150 | def _fmt_structured(d):
timeEntry = datetime.datetime.utcnow().strftime(
"time=%Y-%m-%dT%H:%M:%S.%f-00")
pidEntry = "pid=" + str(os.getpid())
rest = sorted('='.join([str(k), str(v)])
for (k, v) in list(d.items()))
return ' '.join([timeEntry, pidEntry] + rest)
Output is lexically sorted, *except* the time and pid always
come first, to assist with human scanning of the data. |
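Example rendering (the time and pid fields vary per run):
_fmt_structured({'event': 'sync', 'n': 3})
# -> 'time=2019-01-01T12:00:00.000000-00 pid=1234 event=sync n=3'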
20,151 | def arg(*args, **kwargs):
metadata = {'arg': (args, kwargs)}  # metadata key assumed; literal lost
return attrib(default=arg_default(*args, **kwargs), metadata=metadata) | Return an attrib() that can be fed as a command-line argument.
This function is a wrapper for an attr.attrib to create a corresponding
command line argument for it. Use it with the same arguments as argparse's
add_argument().
Example:
>>> @attrs
... class MyFeature(Feature):
... my_number = arg('-n', '--number', default=3)
... def run(self):
... print('Your number:', self.my_number)
Now you could run it like `firefed myfeature --number 5`. |
20,152 | def decimal_format(value, TWOPLACES=Decimal(100) ** -2):
if not isinstance(value, Decimal):
value = Decimal(str(value))
return value.quantize(TWOPLACES) | Format a decimal.Decimal like to 2 decimal places. |
20,153 | def maybe_inspect_zip(models):
if not(is_zip_file(models)):
return models
if len(models) > 1:
return models
if len(models) < 1:
raise AssertionError('No models to inspect')  # message assumed
return zipfile.ZipFile(models[0]).namelist() | r'''
Detect if models is a list of protocolbuffer files or a ZIP file.
If the latter, then unzip it and return the list of protocolbuffer files
that were inside. |
20,154 | def create_session(self, session_request, protocol):
route_values = {}
if protocol is not None:
route_values['protocol'] = self._serialize.url('protocol', protocol, 'str')
content = self._serialize.body(session_request, 'SessionRequest')
response = self._send(http_method='POST',
location_id=...,  # endpoint GUID elided in this extract
version='5.0-preview.1',
route_values=route_values,
content=content)
return self._deserialize('SessionResponse', response) | CreateSession.
[Preview API] Creates a session, a wrapper around a feed that can store additional metadata on the packages published to it.
:param :class:`<SessionRequest> <azure.devops.v5_0.provenance.models.SessionRequest>` session_request: The feed and metadata for the session
:param str protocol: The protocol that the session will target
:rtype: :class:`<SessionResponse> <azure.devops.v5_0.provenance.models.SessionResponse>` |
20,155 | def generate_cutV_genomic_CDR3_segs(self):
max_palindrome = self.max_delV_palindrome
self.cutV_genomic_CDR3_segs = []
for CDR3_V_seg in [x[1] for x in self.genV]:
if len(CDR3_V_seg) < max_palindrome:
self.cutV_genomic_CDR3_segs += [cutR_seq(CDR3_V_seg, 0, len(CDR3_V_seg))]
else:
self.cutV_genomic_CDR3_segs += [cutR_seq(CDR3_V_seg, 0, max_palindrome)] | Add palindromic inserted nucleotides to germline V sequences.
The maximum number of palindromic insertions are appended to the
germline V segments so that delV can index directly for number of
nucleotides to delete from a segment.
Sets the attribute cutV_genomic_CDR3_segs. |
20,156 | def get_container_service(access_token, subscription_id, resource_group, service_name):
endpoint = ''.join([get_rm_endpoint(),
'/subscriptions/', subscription_id,
'/resourcegroups/', resource_group,
'/providers/Microsoft.ContainerService/containerServices/', service_name,
'?api-version=', ACS_API])
return do_get(endpoint, access_token) | Get details about an Azure Container Server
Args:
access_token (str): A valid Azure authentication token.
subscription_id (str): Azure subscription id.
resource_group (str): Azure resource group name.
service_name (str): Name of container service.
Returns:
HTTP response. JSON model. |
20,157 | def hook_drop(self):
widget = self.widget
widget.setAcceptDrops(True)
widget.dragEnterEvent = self.dragEnterEvent
widget.dragMoveEvent = self.dragMoveEvent
widget.dragLeaveEvent = self.dragLeaveEvent
widget.dropEvent = self.dropEvent | Install hooks for drop operations. |
20,158 | def send(self, stanza):
if self.uplink:
self.uplink.send(stanza)
else:
raise NoRouteError("No route for stanza") | Send a stanza somewhere.
The default implementation sends it via the `uplink` if it is defined
or raises the `NoRouteError`.
:Parameters:
- `stanza`: the stanza to send.
:Types:
- `stanza`: `pyxmpp.stanza.Stanza` |
20,159 | def update_user_password(new_pwd_user_id, new_password,**kwargs):
try:
user_i = db.DBSession.query(User).filter(User.id==new_pwd_user_id).one()
user_i.password = bcrypt.hashpw(str(new_password).encode(), bcrypt.gensalt())
return user_i
except NoResultFound:
raise ResourceNotFoundError("User (id=%s) not found"%(new_pwd_user_id)) | Update a user's password |
20,160 | def read_samples(self, sr=None, offset=0, duration=None):
with self.container.open_if_needed(mode='r') as cnt:
samples, native_sr = cnt.get(self.key)
start_sample_index = int(offset * native_sr)
if duration is None:
end_sample_index = samples.shape[0]
else:
end_sample_index = int((offset + duration) * native_sr)
samples = samples[start_sample_index:end_sample_index]
if sr is not None and sr != native_sr:
samples = librosa.core.resample(
samples,
native_sr,
sr,
res_type='kaiser_best'
)
return samples | Return the samples from the track in the container.
Uses librosa for resampling, if needed.
Args:
sr (int): If ``None``, uses the sampling rate given by the file,
otherwise resamples to the given sampling rate.
offset (float): The time in seconds, from where to start reading
the samples (rel. to the file start).
duration (float): The length of the samples to read in seconds.
Returns:
np.ndarray: A numpy array containing the samples as a
floating point (numpy.float32) time series. |
20,161 | def bootstrap_repl(which_ns: str) -> types.ModuleType:
repl_ns = runtime.Namespace.get_or_create(sym.symbol("basilisp.repl"))
ns = runtime.Namespace.get_or_create(sym.symbol(which_ns))
repl_module = importlib.import_module("basilisp.repl")
ns.add_alias(sym.symbol("basilisp.repl"), repl_ns)
ns.refer_all(repl_ns)
return repl_module | Bootstrap the REPL with a few useful vars and returned the
bootstrapped module so its functions can be used by the REPL
command. |
20,162 | def loads(s: str, **kwargs) -> JsonObj:
if isinstance(s, (bytes, bytearray)):
s = s.decode(json.detect_encoding(s), 'surrogatepass')
return json.loads(s, object_hook=lambda pairs: JsonObj(**pairs), **kwargs) | Convert a json_str into a JsonObj
:param s: a str instance containing a JSON document
:param kwargs: arguments see: json.load for details
:return: JsonObj representing the json string |
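A small sketch of the attribute-style access this enables (assuming JsonObj maps keys to attributes, as the object_hook suggests):
obj = loads('{"name": "ann", "tags": ["a", "b"]}')
print(obj.name)  # -> 'ann'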
20,163 | def load_profiles_from_file(self, fqfn):
if self.args.verbose:
print('{}{}Loading profiles from {}'.format(c.Style.BRIGHT, c.Fore.MAGENTA, fqfn))  # message text assumed
with open(fqfn, 'r+') as fh:
data = json.load(fh)
for profile in data:
self.profile_update(profile)
if self.args.action == 'validate':
self.validate(profile)
fh.seek(0)
fh.write(json.dumps(data, indent=2, sort_keys=True))
fh.truncate()
for d in data:
if d.get('profile_name') in self.profiles:
self.handle_error(
'Found duplicate profile name ({}).'.format(d.get('profile_name'))  # message assumed
)
self.profiles.setdefault(
d.get('profile_name'),
{'data': d, 'name': d.get('profile_name'), 'fqfn': fqfn},  # dict keys assumed
) | Load profiles from file.
Args:
fqfn (str): Fully qualified file name. |
20,164 | def on_train_begin(self, **kwargs):
"Call watch method to log model topology, gradients & weights"
super().on_train_begin()
if not WandbCallback.watch_called:
WandbCallback.watch_called = True
wandb.watch(self.learn.model, log=self.log) | Call watch method to log model topology, gradients & weights |
20,165 | def scoped_session_decorator(func):
@wraps(func)
def wrapper(*args, **kwargs):
with sessions_scope(session):
logger.debug("Running worker %s in scoped DB session", func.__name__)
return func(*args, **kwargs)
return wrapper | Manage contexts and add debugging to db sessions. |
20,166 | def copy(self):
return type(self)(self.chr,
self.start+self._start_offset,
self.end,
self.payload,
self.dir) | Create a new copy of self. Does not do a deep copy of the payload.
:return: copied range
:rtype: GenomicRange |
20,167 | def webapi_request(url, method=, caller=None, session=None, params=None):
if method not in ('GET', 'POST'):
raise NotImplementedError("HTTP method: %s" % repr(method))
if params is None:
params = {}
onetime = {}
for param in DEFAULT_PARAMS:
params[param] = onetime[param] = params.get(param, DEFAULT_PARAMS[param])
for param in ('key', 'format', 'raw', 'http_timeout'):
del params[param]
if onetime['format'] not in ('json', 'vdf', 'xml'):
raise ValueError("Expected format to be json,vdf or xml; got %s" % onetime['format'])
for k, v in list(params.items()):
if isinstance(v, bool): params[k] = 1 if v else 0
elif isinstance(v, dict): params[k] = _json.dumps(v)
elif isinstance(v, list):
del params[k]
for i, lvalue in enumerate(v):
params["%s[%d]" % (k, i)] = lvalue
kwargs = {'params': params} if method == "GET" else {'data': params}
if session is None: session = _make_session()
f = getattr(session, method.lower())
resp = f(url, stream=False, timeout=onetime['http_timeout'], **kwargs)
if caller is not None: caller.last_response = resp
resp.raise_for_status()
if onetime['raw']:
return resp.text
elif onetime['format'] == 'json':
return resp.json()
elif onetime['format'] == 'xml':
from lxml import etree as _etree
return _etree.fromstring(resp.content)
elif onetime['format'] == 'vdf':
import vdf as _vdf
return _vdf.loads(resp.text) | Low level function for calling Steam's WebAPI
.. versionchanged:: 0.8.3
:param url: request url (e.g. ``https://api.steampowered.com/A/B/v001/``)
:type url: :class:`str`
:param method: HTTP method (GET or POST)
:type method: :class:`str`
:param caller: caller reference, caller.last_response is set to the last response
:param params: dict of WebAPI and endpoint specific params
:type params: :class:`dict`
:param session: an instance requests session, or one is created per call
:type session: :class:`requests.Session`
:return: response based on paramers
:rtype: :class:`dict`, :class:`lxml.etree.Element`, :class:`str` |
20,168 | def calculate_incorrect_name_dict(graph: BELGraph) -> Mapping[str, List[str]]:
missing = defaultdict(list)
for namespace, name in _iterate_namespace_name(graph):
missing[namespace].append(name)
return dict(missing) | Get missing names grouped by namespace. |
20,169 | def state_name(self):
if self.state == 1:
return
elif self.state == 2:
return
elif self.state == 3:
return
elif self.state == 4:
return
elif self.state == 5:
return
elif self.state == 6:
return
else:
raise ValueError('Unknown state: {}'.format(self.state))  # message text assumed
Returns:
str: Name of the current state |
20,170 | def _validate_target(self, y):
if y is None:
return
y_type = type_of_target(y)
if y_type not in ("binary", "multiclass"):
raise YellowbrickValueError((
" target type not supported, only binary and multiclass"
).format(y_type)) | Raises a value error if the target is not a classification target. |
20,171 | def origin(self, origin):
ox, oy, oz = origin[0], origin[1], origin[2]
self.SetOrigin(ox, oy, oz)
self.Modified() | Set the origin. Pass a length three tuple of floats |
20,172 | def migrator(self):
migrator = Migrator(self.database)
for name in self.done:
self.run_one(name, migrator)
return migrator | Create migrator and setup it with fake migrations. |
20,173 | def getResponsible(self):
managers = {}
for department in self.getDepartments():
manager = department.getManager()
if manager is None:
continue
manager_id = manager.getId()
if manager_id not in managers:
managers[manager_id] = {}
# dict keys inferred from the getter names; 'ids'/'dict' below are assumed
managers[manager_id]['salutation'] = safe_unicode(
manager.getSalutation())
managers[manager_id]['name'] = safe_unicode(
manager.getFullname())
managers[manager_id]['email'] = safe_unicode(
manager.getEmailAddress())
managers[manager_id]['phone'] = safe_unicode(
manager.getBusinessPhone())
managers[manager_id]['job_title'] = safe_unicode(
manager.getJobTitle())
if manager.getSignature():
managers[manager_id]['signature'] = \
'{}/Signature'.format(manager.absolute_url())
else:
managers[manager_id]['signature'] = False
managers[manager_id]['departments'] = ''
mngr_dept = managers[manager_id]['departments']
if mngr_dept:
mngr_dept += ', '
mngr_dept += safe_unicode(department.Title())
managers[manager_id]['departments'] = mngr_dept
mngr_keys = managers.keys()
mngr_info = {'ids': mngr_keys, 'dict': managers}
return mngr_info | Return all manager info of responsible departments |
20,174 | def create_selection():
operation = Forward()
nested = Group(Suppress("(") + operation + Suppress(")")).setResultsName("nested")
select_expr = Forward()
functions = select_functions(select_expr)
maybe_nested = functions | nested | Group(var_val)
operation <<= maybe_nested + OneOrMore(oneOf("+ - * /") + maybe_nested)
select_expr <<= operation | maybe_nested
alias = Group(Suppress(upkey("as")) + var).setResultsName("alias")
full_select = Group(
Group(select_expr).setResultsName("selection") + Optional(alias)
)
return Group(
Keyword("*") | upkey("count(*)") | delimitedList(full_select)
).setResultsName("attrs") | Create a selection expression |
20,175 | def get_class_field(cls, field_name):
try:
field = super(ModelWithDynamicFieldMixin, cls).get_class_field(field_name)
except AttributeError:
dynamic_field = cls._get_dynamic_field_for(field_name)
field = cls._add_dynamic_field_to_model(dynamic_field, field_name)
return field | Add management of dynamic fields: if a normal field cannot be retrieved,
check if it can be a dynamic field and in this case, create a copy with
the given name and associate it to the model. |
20,176 | def decode(self, encoding=None, errors='strict'):
return self.__class__(super(ColorStr, self).decode(encoding, errors), keep_tags=True) | Decode using the codec registered for encoding. encoding defaults to the default encoding.
errors may be given to set a different error handling scheme. Default is 'strict' meaning that encoding errors
raise a UnicodeDecodeError. Other possible values are 'ignore' and 'replace' as well as any other name
registered with codecs.register_error that is able to handle UnicodeDecodeErrors.
:param str encoding: Codec.
:param str errors: Error handling scheme. |
20,177 | def _get_chain_by_sid(sid):
try:
return d1_gmn.app.models.Chain.objects.get(sid__did=sid)
except d1_gmn.app.models.Chain.DoesNotExist:
pass | Return None if not found. |
20,178 | def main(host):
client = capnp.TwoPartyClient(host)
calculator = client.bootstrap().cast_as(calculator_capnp.Calculator)
# Make a request that just evaluates the literal value 123 (the full
# explanation lives in the docstring below).
print('Evaluating a literal... ', end="")
eval_promise = calculator.evaluate({"literal": 123})
read_promise = eval_promise.value.read()
response = read_promise.wait()
assert response.value == 123
print("PASS")
print("Using add and subtract... ", end=)
add = calculator.getOperator(op=).func
subtract = calculator.getOperator(op=).func
request = calculator.evaluate_request()
subtract_call = request.expression.init("call")
subtract_call.function = subtract
subtract_params = subtract_call.init("params", 2)
subtract_params[1].literal = 67.0
add_call = subtract_params[0].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].literal = 123
add_params[1].literal = 45
eval_promise = request.send()
read_promise = eval_promise.value.read()
response = read_promise.wait()
assert response.value == 101
print("PASS")
print("Pipelining eval() calls... ", end="")
add = calculator.getOperator(op='add').func
multiply = calculator.getOperator(op='multiply').func
request = calculator.evaluate_request()
multiply_call = request.expression.init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[0].literal = 4
multiply_params[1].literal = 6
multiply_result = request.send().value
add_3_request = calculator.evaluate_request()
add_3_call = add_3_request.expression.init("call")
add_3_call.function = add
add_3_params = add_3_call.init("params", 2)
add_3_params[0].previousResult = multiply_result
add_3_params[1].literal = 3
add_3_promise = add_3_request.send().value.read()
add_5_request = calculator.evaluate_request()
add_5_call = add_5_request.expression.init("call")
add_5_call.function = add
add_5_params = add_5_call.init("params", 2)
add_5_params[0].previousResult = multiply_result
add_5_params[1].literal = 5
add_5_promise = add_5_request.send().value.read()
assert add_3_promise.wait().value == 27
assert add_5_promise.wait().value == 29
print("PASS")
print("Defining functions... ", end="")
add = calculator.getOperator(op='add').func
multiply = calculator.getOperator(op='multiply').func
request = calculator.defFunction_request()
request.paramCount = 2
add_call = request.body.init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[1].parameter = 1
multiply_call = add_params[0].init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[0].parameter = 0
multiply_params[1].literal = 100
f = request.send().func
request = calculator.defFunction_request()
request.paramCount = 1
multiply_call = request.body.init("call")
multiply_call.function = multiply
multiply_params = multiply_call.init("params", 2)
multiply_params[1].literal = 2
f_call = multiply_params[0].init("call")
f_call.function = f
f_params = f_call.init("params", 2)
f_params[0].parameter = 0
add_call = f_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].parameter = 0
add_params[1].literal = 1
g = request.send().func
# Specifically, we will compute 2^(4 + 5). However, exponent is not
# defined by the Calculator server, so we pass our own Function
# implementation (PowerFunction) for the server to call back.
add = calculator.getOperator(op='add').func
request = calculator.evaluate_request()
pow_call = request.expression.init("call")
pow_call.function = PowerFunction()
pow_params = pow_call.init("params", 2)
pow_params[0].literal = 2
add_call = pow_params[1].init("call")
add_call.function = add
add_params = add_call.init("params", 2)
add_params[0].literal = 4
add_params[1].literal = 5
response = request.send().value.read().wait()
assert response.value == 512
print("PASS") | Make a request that just evaluates the literal value 123.
What's interesting here is that evaluate() returns a "Value", which is
another interface and therefore points back to an object living on the
server. We then have to call read() on that object to read it.
However, even though we are making two RPC's, this block executes in
*one* network round trip because of promise pipelining: we do not wait
for the first call to complete before we send the second call to the
server. |
20,179 | def _histplot_bins(column, bins=100):
col_min = np.min(column)
col_max = np.max(column)
return range(col_min, col_max + 2, max((col_max - col_min) // bins, 1)) | Helper to get bins for histplot. |
20,180 | def topological_order_dfs(graph):
n = len(graph)
order = []
times_seen = [-1] * n
for start in range(n):
if times_seen[start] == -1:
times_seen[start] = 0
to_visit = [start]
while to_visit:
node = to_visit[-1]
children = graph[node]
if times_seen[node] == len(children):
to_visit.pop()
order.append(node)
else:
child = children[times_seen[node]]
times_seen[node] += 1
if times_seen[child] == -1:
times_seen[child] = 0
to_visit.append(child)
return order[::-1] | Topological sorting by depth first search
:param graph: directed graph in listlist format, cannot be listdict
:returns: list of vertices in order
:complexity: `O(|V|+|E|)` |
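For example, on a small DAG in listlist format (graph[u] lists the successors of u), every node precedes its dependents in the returned order:
g = [[1, 2], [3], [3], []]       # edges 0->1, 0->2, 1->3, 2->3
print(topological_order_dfs(g))  # -> [0, 2, 1, 3]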
20,181 | def required_items(element, children, attributes):
required_elements(element, *children)
required_attributes(element, *attributes) | Check an xml element to include given attributes and children.
:param element: ElementTree element
:param children: list of XPaths to check
:param attributes: list of attributes names to check
:raises NotValidXmlException: if some argument is missing
:raises NotValidXmlException: if some child is missing |
20,182 | def _recover_network_failure(self):
if self.auto_reconnect and not self._is_closing:
connected = False
while not connected:
log_msg = "* ATTEMPTING RECONNECT"
if self._retry_new_version:
log_msg = "* RETRYING DIFFERENT DDP VERSION"
self.ddpsocket._debug_log(log_msg)
time.sleep(self.auto_reconnect_timeout)
self._init_socket()
try:
self.connect()
connected = True
if self._retry_new_version:
self._retry_new_version = False
else:
self._is_reconnecting = True
except (socket.error, WebSocketException):
pass | Recover from a network failure |
20,183 | def set_user_jobs(session, job_ids):
jobs_data = {
'jobs[]': job_ids  # payload key assumed; literal lost in extraction
}
response = make_put_request(session, 'self/jobs', json_data=jobs_data)  # endpoint path assumed
json_data = response.json()
if response.status_code == 200:
return json_data['status']
else:
raise UserJobsNotSetException(
message=json_data['message'],
error_code=json_data['error_code'],
request_id=json_data['request_id']) | Replace the currently authenticated user's list of jobs with a new list of
jobs |
20,184 | def is_negated(self):
return all(
clause.presence == QueryPresence.PROHIBITED for clause in self.clauses
) | A negated query is one in which every clause has a presence of
prohibited. These queries require some special processing to return
the expected results. |
20,185 | def find_node(self, attribute):
for model in GraphModel._GraphModel__models_instances.itervalues():
for node in foundations.walkers.nodes_walker(model.root_node):
if attribute in node.get_attributes():
return node | Returns the Node with given attribute.
:param attribute: Attribute.
:type attribute: GraphModelAttribute
:return: Node.
:rtype: GraphModelNode |
20,186 | def associations(self, subject, object=None):
if object is None:
if self.associations_by_subj is not None:
return self.associations_by_subj[subject]
else:
return []
else:
if self.associations_by_subj_obj is not None:
return self.associations_by_subj_obj[(subject,object)]
else:
return [] | Given a subject-object pair (e.g. gene id to ontology class id), return all association
objects that match. |
20,187 | def create_ingest_point(self, privateStreamName, publicStreamName):
    return self.protocol.execute('createIngestPoint',
privateStreamName=privateStreamName,
publicStreamName=publicStreamName) | Creates an RTMP ingest point, which mandates that streams pushed into
the EMS have a target stream name which matches one Ingest Point
privateStreamName.
:param privateStreamName: The name that RTMP Target Stream Names must
match.
:type privateStreamName: str
:param publicStreamName: The name that is used to access the stream
pushed to the privateStreamName. The publicStreamName becomes the
stream's localStreamName.
:type publicStreamName: str
:link: http://docs.evostream.com/ems_api_definition/createingestpoint |
20,188 | def update_rec(self, rec, name, value):
if name == "def":
name = "defn"
if hasattr(rec, name):
if name not in self.attrs_scalar:
if name not in self.attrs_nested:
getattr(rec, name).add(value)
else:
self._add_nested(rec, name, value)
else:
raise Exception("ATTR({NAME}) ALREADY SET({VAL})".format(
NAME=name, VAL=getattr(rec, name)))
else:
if name in self.attrs_scalar:
setattr(rec, name, value)
elif name not in self.attrs_nested:
setattr(rec, name, set([value]))
else:
            name = '_{}'.format(name)  # template lost in extraction; the '_' prefix is a guess
setattr(rec, name, defaultdict(list))
self._add_nested(rec, name, value) | Update current GOTerm with optional record. |
20,189 | def write(self, output_filepath):
    with open(output_filepath, 'w') as out_file:
out_file.write(self.__str__()) | serialize the ExmaraldaFile instance and write it to a file.
Parameters
----------
output_filepath : str
relative or absolute path to the Exmaralda file to be created |
20,190 | def extended_fade_in(self, segment, duration):
dur = int(duration * segment.track.samplerate)
if segment.start - dur >= 0:
segment.start -= dur
else:
raise Exception(
"Cannot create fade-in that extends "
"past the tracks beginning")
segment.duration += dur
f = Fade(segment.track, segment.comp_location_in_seconds,
duration, 0.0, 1.0)
self.add_dynamic(f)
return f | Add a fade-in to a segment that extends the beginning of the
segment.
:param segment: Segment to fade in
:type segment: :py:class:`radiotool.composer.Segment`
:param duration: Duration of fade-in (in seconds)
:returns: The fade that has been added to the composition
:rtype: :py:class:`Fade` |
20,191 | def _schedule_dependencies(dag):
in_degrees = dict(dag.get_indegrees())
independent_vertices = collections.deque([vertex for vertex in dag if dag.get_indegree(vertex) == 0])
topological_order = []
while independent_vertices:
v_vertex = independent_vertices.popleft()
topological_order.append(v_vertex)
for u_vertex in dag[v_vertex]:
in_degrees[u_vertex] -= 1
if in_degrees[u_vertex] == 0:
independent_vertices.append(u_vertex)
if len(topological_order) != len(dag):
raise CyclicDependencyError()
return topological_order | Computes an ordering < of tasks so that for any two tasks t and t' we have that if t depends on t' then
t' < t. In words, all dependencies of a task precede the task in this ordering.
:param dag: A directed acyclic graph representing dependencies between tasks.
:type dag: DirectedGraph
:return: A list of topologically ordered dependencies
:rtype: list(Dependency) |
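The DirectedGraph class is not shown in the source, so here is a hypothetical minimal stand-in (a dict of vertex -> successor list) plus a usage sketch; get_indegree/get_indegrees are assumed names taken from the calls above, and _schedule_dependencies (with its collections import) is assumed to be in scope:

    class DirectedGraph(dict):
        # Hypothetical stub: iteration yields vertices, d[v] lists successors.
        def get_indegree(self, vertex):
            return sum(vertex in succs for succs in self.values())
        def get_indegrees(self):
            return {vertex: self.get_indegree(vertex) for vertex in self}

    dag = DirectedGraph({'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []})
    print(_schedule_dependencies(dag))  # ['a', 'b', 'c', 'd']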
20,192 | def stringfy(expr, sym_const=None, sym_states=None, sym_algebs=None):
if not sym_const:
sym_const = []
if not sym_states:
sym_states = []
if not sym_algebs:
sym_algebs = []
expr_str = []
if type(expr) in (int, float):
return expr
    if expr.is_Atom:
        # NOTE: every string literal in this function was lost in
        # extraction; the templates below are plausible reconstructions
        # (hedged guesses), not the verified originals.
        if expr in sym_const:
            expr_str = 'self.{}'.format(expr)
        elif expr in sym_states:
            expr_str = 'dae.x[self.{}]'.format(expr)
        elif expr in sym_algebs:
            expr_str = 'dae.y[self.{}]'.format(expr)
        elif expr.is_Number:
            if expr.is_Integer:
                expr_str = str(int(expr))
            else:
                expr_str = str(float(expr))
        else:
            raise AttributeError('Unknown atom {}'.format(expr))
    else:
        nargs = len(expr.args)
        arg_str = []
        for arg in expr.args:
            arg_str.append(stringfy(arg, sym_const, sym_states, sym_algebs))
        if expr.is_Add:
            expr_str = ''
            for idx, item in enumerate(arg_str):
                if idx == 0:
                    # tighten a leading unary minus: '- x' -> '-x'
                    if len(item) > 1 and item[1] == ' ':
                        item = item[0] + item[2:]
                if idx > 0:
                    if item[0] == '-':
                        item = ' ' + item
                    else:
                        item = ' + ' + item
                expr_str += item
        elif expr.is_Mul:
            if nargs == 2 and expr.args[0].is_Integer:
                if expr.args[0].is_positive:
                    expr_str = '{}*{}'.format(*arg_str)
                elif expr.args[0] == Integer(-1):
                    expr_str = '- {}'.format(arg_str[1])
                else:
                    expr_str = '{}*{}'.format(*arg_str)
            else:
                if expr.args[0] == Integer(-1):
                    expr_str = ', '.join(arg_str[1:])
                    expr_str = '- mul(' + expr_str + ')'
                else:
                    expr_str = ', '.join(arg_str)
                    expr_str = 'mul(' + expr_str + ')'
        elif expr.is_Function:
            expr_str = ', '.join(arg_str)
            expr_str = str(expr.func) + '(' + expr_str + ')'
        elif expr.is_Pow:
            if arg_str[1] == '2':
                expr_str = 'mul({0}, {0})'.format(arg_str[0])
            else:
                expr_str = '{}**{}'.format(*arg_str)
        elif expr.is_Div:
            # sympy normally represents division as Mul/Pow, so this branch
            # may be unreachable; kept from the original structure.
            expr_str = ', '.join(arg_str)
            expr_str = 'div(' + expr_str + ')'
        else:
            raise NotImplementedError
return expr_str | Convert the right-hand-side of an equation into CVXOPT matrix operations |
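Since the literals above are reconstructions, here is a small, safe demo of the standard sympy expression-tree API that stringfy() traverses (independent of anything project-specific):

    import sympy

    x, y = sympy.symbols('x y')
    e = 2 * x + y ** 2
    print(e.is_Add)            # True: the top-level node is an addition
    for arg in e.args:         # the subtrees, e.g. 2*x and y**2
        print(arg, arg.is_Mul, arg.is_Pow)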
20,193 | def deployment_absent(name, namespace='default', **kwargs):
    # NOTE: the string literals were stripped in extraction; they are
    # restored here following standard Salt state conventions (ret dict
    # keys, __opts__['test'], kubernetes execution-module names), so the
    # exact message wording is a reconstruction.
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': ''}
    deployment = __salt__['kubernetes.show_deployment'](name, namespace, **kwargs)
    if deployment is None:
        ret['result'] = True if not __opts__['test'] else None
        ret['comment'] = 'The deployment does not exist'
        return ret
    if __opts__['test']:
        ret['comment'] = 'The deployment is going to be deleted'
        ret['result'] = None
        return ret
    res = __salt__['kubernetes.delete_deployment'](name, namespace, **kwargs)
    if res['code'] == 200:
        ret['result'] = True
        ret['changes'] = {
            'kubernetes.deployment': {
                'new': 'absent', 'old': 'present'}}
        ret['comment'] = res['message']
    else:
        ret['comment'] = 'Something went wrong, response: {}'.format(res)
    return ret | Ensures that the named deployment is absent from the given namespace.
name
The name of the deployment
namespace
The name of the namespace |
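For reference, the standard Salt state return shape that the reconstruction above follows (a general Salt convention, not project-specific code; the values are illustrative):

    ret = {
        'name': 'my-deployment',  # resource the state manages
        'changes': {},            # what changed; empty dict if nothing
        'result': True,           # True / False, or None when test=True
        'comment': 'The deployment does not exist',
    }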
20,194 | def parse_gradient_rgb_args(args):
arglen = len(args)
if arglen < 1 or arglen > 2:
        raise InvalidArg(arglen, label='--gradientrgb (-G)')  # label text garbled in extraction; reconstructed from the '-G' remnant
start_rgb = try_rgb(args[0]) if args else None
stop_rgb = try_rgb(args[1]) if arglen > 1 else None
return start_rgb, stop_rgb | Parse one or two rgb args given with --gradientrgb.
Raises InvalidArg for invalid rgb values.
Returns a tuple of (start_rgb, stop_rgb), where the stop_rgb may be
None if only one arg value was given and start_rgb may be None if
no values were given. |
20,195 | def compensate_system_time_change(self, difference):
super(Alignak, self).compensate_system_time_change(difference)
self.program_start = max(0, self.program_start + difference)
if not hasattr(self.sched, "conf"):
return
for host in self.sched.hosts:
host.compensate_system_time_change(difference)
for serv in self.sched.services:
serv.compensate_system_time_change(difference)
    # NOTE: extraction garbled this loop by merging in lines from a similar
    # loop over self.sched.actions (notifications); the check-only version
    # is restored here, with the stripped literals filled in as guesses.
    for chk in list(self.sched.checks.values()):
        if chk.status == u'scheduled' and chk.t_to_go is not None:
            t_to_go = chk.t_to_go
            ref = self.sched.find_item_by_id(chk.ref)
            new_t = max(0, t_to_go + difference)
            timeperiod = self.sched.timeperiods[ref.check_period]
            if timeperiod is not None:
                new_t = timeperiod.get_next_valid_time_from_t(new_t)
            if new_t is None:
                chk.state = u'waitconsume'
                chk.exit_status = 2
                chk.output = 'No available check time after the time change'
                chk.check_time = time.time()
                chk.execution_time = 0
            else:
                chk.t_to_go = new_t | Compensate a system time change of difference for all hosts/services/checks/notifs
:param difference: difference in seconds
:type difference: int
:return: None |
20,196 | def repair_central_directory(zipFile, is_file_instance):
    f = zipFile if is_file_instance else open(zipFile, 'rb')
data = f.read()
pos = data.find(CENTRAL_DIRECTORY_SIGNATURE)
if (pos > 0):
sio = BytesIO(data)
sio.seek(pos + 22)
sio.truncate()
sio.seek(0)
return sio
f.seek(0)
return f | trims trailing data from the central directory
code taken from http://stackoverflow.com/a/7457686/570216, courtesy of Uri Cohen |
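Usage sketch (the filename is hypothetical). CENTRAL_DIRECTORY_SIGNATURE is the 4-byte end-of-central-directory marker, and the fixed part of the EOCD record is 22 bytes, which is why the code truncates at pos + 22, dropping any trailing junk after the archive comment field:

    import zipfile

    CENTRAL_DIRECTORY_SIGNATURE = b'PK\x05\x06'
    fixed = repair_central_directory('broken.xlsx', is_file_instance=False)
    with zipfile.ZipFile(fixed) as zf:
        print(zf.namelist())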
20,197 | def PC_varExplained(Y,standardized=True):
if standardized:
Y-=Y.mean(0)
Y/=Y.std(0)
covY = sp.cov(Y)
S,U = linalg.eigh(covY+1e-6*sp.eye(covY.shape[0]))
S = S[::-1]
rv = np.array([S[0:i].sum() for i in range(1,S.shape[0])])
rv/= S.sum()
return rv | Run PCA and calculate the cumulative fraction of variance
Args:
Y: phenotype values
standardized: if True, phenotypes are standardized
Returns:
var: cumulative distribution of variance explained |
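Usage sketch with synthetic data (mine), assuming the snippet's own imports (numpy as np, scipy as sp, scipy's linalg) are in scope in its module:

    import numpy as np

    Y = np.random.randn(50, 200)        # e.g. 50 traits x 200 samples
    var = PC_varExplained(Y)
    assert np.all(np.diff(var) >= 0)    # cumulative, so non-decreasing
    print(var[:5])                      # variance explained by the top PCs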
20,198 | def export_content_groups(self, group_id, export_type, skip_notifications=None):
path = {}
data = {}
params = {}
path["group_id"] = group_id
self._validate_enum(export_type, ["common_cartridge", "qti", "zip"])
data["export_type"] = export_type
if skip_notifications is not None:
data["skip_notifications"] = skip_notifications
self.logger.debug("POST /api/v1/groups/{group_id}/content_exports with query params: {params} and form data: {data}".format(params=params, data=data, **path))
return self.generic_request("POST", "/api/v1/groups/{group_id}/content_exports".format(**path), data=data, params=params, single_item=True) | Export content.
Begin a content export job for a course, group, or user.
You can use the {api:ProgressController#show Progress API} to track the
progress of the export. The migration's progress is linked to with the
_progress_url_ value.
When the export completes, use the {api:ContentExportsApiController#show Show content export} endpoint
to retrieve a download URL for the exported content. |
20,199 | def get_cluster_name(self):
return self._get(
        url=self.url + '/api/cluster-name',  # path lost in extraction; the RabbitMQ management API exposes GET /api/cluster-name
headers=self.headers,
auth=self.auth
) | Name identifying this RabbitMQ cluster. |