repo (string, 7-54 chars) | path (string, 4-192 chars) | url (string, 87-284 chars) | code (string, 78-104k chars) | code_tokens (sequence) | docstring (string, 1-46.9k chars) | docstring_tokens (sequence) | language (1 class: python) | partition (3 classes: train, valid, test)
---|---|---|---|---|---|---|---|---
IBMStreams/pypi.streamsx | streamsx/topology/topology.py | https://github.com/IBMStreams/pypi.streamsx/blob/abd67b4757120f6f805787fba390f53e9df9cdd8/streamsx/topology/topology.py#L1356-L1423 | def set_parallel(self, width, name=None):
"""
Set this source stream to be split into multiple channels
as the start of a parallel region.
Calling ``set_parallel`` on a stream created by
:py:meth:`~Topology.source` results in the stream
having `width` channels, each created by its own instance
of the callable::
s = topo.source(S())
s.set_parallel(3)
f = s.filter(F())
e = f.end_parallel()
Each channel has independent instances of ``S`` and ``F``. Tuples
created by the instance of ``S`` in channel 0 are passed to the
instance of ``F`` in channel 0, and so on for channels 1 and 2.
Callable transforms instances within the channel can use
the runtime functions
:py:func:`~streamsx.ec.channel`,
:py:func:`~streamsx.ec.local_channel`,
:py:func:`~streamsx.ec.max_channels` &
:py:func:`~streamsx.ec.local_max_channels`
to adapt to being invoked in parallel. For example a
source callable can use its channel number to determine
which partition to read from in a partitioned external system.
Calling ``set_parallel`` on a stream created by
:py:meth:`~Topology.subscribe` results in the stream
having `width` channels. Subscribe ensures that the
stream will contain all published tuples matching the
topic subscription and type. A published tuple will appear
on one of the channels though the specific channel is not known
in advance.
A parallel region is terminated by :py:meth:`end_parallel`
or :py:meth:`for_each`.
The number of channels is set by `width` which may be an `int` greater
than zero or a submission parameter created by
:py:meth:`Topology.create_submission_parameter`.
With IBM Streams 4.3 or later the number of channels can be
dynamically changed at runtime.
Parallel regions are started on non-source streams using
:py:meth:`parallel`.
Args:
width: The degree of parallelism for the parallel region.
name(str): Name of the parallel region. Defaults to the name of this stream.
Returns:
Stream: Returns this stream.
.. seealso:: :py:meth:`parallel`, :py:meth:`end_parallel`
.. versionadded:: 1.9
.. versionchanged:: 1.11 `name` parameter added.
"""
self.oport.operator.config['parallel'] = True
self.oport.operator.config['width'] = streamsx.topology.graph._as_spl_json(width, int)
if name:
name = self.topology.graph._requested_name(str(name), action='set_parallel')
self.oport.operator.config['regionName'] = name
return self | [
"def",
"set_parallel",
"(",
"self",
",",
"width",
",",
"name",
"=",
"None",
")",
":",
"self",
".",
"oport",
".",
"operator",
".",
"config",
"[",
"'parallel'",
"]",
"=",
"True",
"self",
".",
"oport",
".",
"operator",
".",
"config",
"[",
"'width'",
"]",
"=",
"streamsx",
".",
"topology",
".",
"graph",
".",
"_as_spl_json",
"(",
"width",
",",
"int",
")",
"if",
"name",
":",
"name",
"=",
"self",
".",
"topology",
".",
"graph",
".",
"_requested_name",
"(",
"str",
"(",
"name",
")",
",",
"action",
"=",
"'set_parallel'",
")",
"self",
".",
"oport",
".",
"operator",
".",
"config",
"[",
"'regionName'",
"]",
"=",
"name",
"return",
"self"
] | Set this source stream to be split into multiple channels
as the start of a parallel region.
Calling ``set_parallel`` on a stream created by
:py:meth:`~Topology.source` results in the stream
having `width` channels, each created by its own instance
of the callable::
s = topo.source(S())
s.set_parallel(3)
f = s.filter(F())
e = f.end_parallel()
Each channel has independent instances of ``S`` and ``F``. Tuples
created by the instance of ``S`` in channel 0 are passed to the
instance of ``F`` in channel 0, and so on for channels 1 and 2.
Callable transforms instances within the channel can use
the runtime functions
:py:func:`~streamsx.ec.channel`,
:py:func:`~streamsx.ec.local_channel`,
:py:func:`~streamsx.ec.max_channels` &
:py:func:`~streamsx.ec.local_max_channels`
to adapt to being invoked in parallel. For example a
source callable can use its channel number to determine
which partition to read from in a partitioned external system.
Calling ``set_parallel`` on a stream created by
:py:meth:`~Topology.subscribe` results in the stream
having `width` channels. Subscribe ensures that the
stream will contain all published tuples matching the
topic subscription and type. A published tuple will appear
on one of the channels though the specific channel is not known
in advance.
A parallel region is terminated by :py:meth:`end_parallel`
or :py:meth:`for_each`.
The number of channels is set by `width` which may be an `int` greater
than zero or a submission parameter created by
:py:meth:`Topology.create_submission_parameter`.
With IBM Streams 4.3 or later the number of channels can be
dynamically changed at runtime.
Parallel regions are started on non-source streams using
:py:meth:`parallel`.
Args:
width: The degree of parallelism for the parallel region.
name(str): Name of the parallel region. Defaults to the name of this stream.
Returns:
Stream: Returns this stream.
.. seealso:: :py:meth:`parallel`, :py:meth:`end_parallel`
.. versionadded:: 1.9
.. versionchanged:: 1.11 `name` parameter added. | [
"Set",
"this",
"source",
"stream",
"to",
"be",
"split",
"into",
"multiple",
"channels",
"as",
"the",
"start",
"of",
"a",
"parallel",
"region",
"."
] | python | train |
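The `set_parallel` docstring above sketches a three-channel parallel region. A minimal end-to-end sketch of that pattern follows; `Readings` and `KeepEven` are hypothetical callables invented for illustration, job submission is omitted, and the standard `streamsx` source/filter callable conventions are assumed rather than reproduced from documentation.

```python
# Sketch only: Readings and KeepEven are hypothetical callables, not part of streamsx.
from streamsx.topology.topology import Topology


class Readings(object):
    """Source callable: returns an iterable of tuples for its channel."""
    def __call__(self):
        return range(100)


class KeepEven(object):
    """Filter callable: keeps even-valued tuples."""
    def __call__(self, tup):
        return tup % 2 == 0


topo = Topology('parallel_example')
s = topo.source(Readings())
s.set_parallel(3)           # three channels, each with its own Readings instance
f = s.filter(KeepEven())    # one KeepEven instance per channel
e = f.end_parallel()        # merge the channels back into a single stream
```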
rapidpro/expressions | python/temba_expressions/functions/excel.py | https://github.com/rapidpro/expressions/blob/b03d91ec58fc328960bce90ecb5fa49dcf467627/python/temba_expressions/functions/excel.py#L47-L53 | def fixed(ctx, number, decimals=2, no_commas=False):
"""
Formats the given number in decimal format using a period and commas
"""
value = _round(ctx, number, decimals)
format_str = '{:f}' if no_commas else '{:,f}'
return format_str.format(value) | [
"def",
"fixed",
"(",
"ctx",
",",
"number",
",",
"decimals",
"=",
"2",
",",
"no_commas",
"=",
"False",
")",
":",
"value",
"=",
"_round",
"(",
"ctx",
",",
"number",
",",
"decimals",
")",
"format_str",
"=",
"'{:f}'",
"if",
"no_commas",
"else",
"'{:,f}'",
"return",
"format_str",
".",
"format",
"(",
"value",
")"
] | Formats the given number in decimal format using a period and commas | [
"Formats",
"the",
"given",
"number",
"in",
"decimal",
"format",
"using",
"a",
"period",
"and",
"commas"
] | python | train |
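The `fixed` helper above chooses between two format specifications based on `no_commas`. A standalone sketch of that choice in plain Python, with `round` standing in for the temba `_round(ctx, ...)` helper:

```python
# Illustration of the two format specs used by fixed(); rounding stands in for _round(ctx, ...).
value = round(1234.5678, 2)

with_commas = '{:,f}'.format(value)     # '1,234.570000' (comma-grouped)
without_commas = '{:f}'.format(value)   # '1234.570000'

print(with_commas)
print(without_commas)
```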
python-openxml/python-docx | docx/package.py | https://github.com/python-openxml/python-docx/blob/6756f6cd145511d3eb6d1d188beea391b1ddfd53/docx/package.py#L100-L113 | def _next_image_partname(self, ext):
"""
The next available image partname, starting from
``/word/media/image1.{ext}`` where unused numbers are reused. The
partname is unique by number, without regard to the extension. *ext*
does not include the leading period.
"""
def image_partname(n):
return PackURI('/word/media/image%d.%s' % (n, ext))
used_numbers = [image_part.partname.idx for image_part in self]
for n in range(1, len(self)+1):
if n not in used_numbers:
return image_partname(n)
return image_partname(len(self)+1) | [
"def",
"_next_image_partname",
"(",
"self",
",",
"ext",
")",
":",
"def",
"image_partname",
"(",
"n",
")",
":",
"return",
"PackURI",
"(",
"'/word/media/image%d.%s'",
"%",
"(",
"n",
",",
"ext",
")",
")",
"used_numbers",
"=",
"[",
"image_part",
".",
"partname",
".",
"idx",
"for",
"image_part",
"in",
"self",
"]",
"for",
"n",
"in",
"range",
"(",
"1",
",",
"len",
"(",
"self",
")",
"+",
"1",
")",
":",
"if",
"n",
"not",
"in",
"used_numbers",
":",
"return",
"image_partname",
"(",
"n",
")",
"return",
"image_partname",
"(",
"len",
"(",
"self",
")",
"+",
"1",
")"
] | The next available image partname, starting from
``/word/media/image1.{ext}`` where unused numbers are reused. The
partname is unique by number, without regard to the extension. *ext*
does not include the leading period. | [
"The",
"next",
"available",
"image",
"partname",
"starting",
"from",
"/",
"word",
"/",
"media",
"/",
"image1",
".",
"{",
"ext",
"}",
"where",
"unused",
"numbers",
"are",
"reused",
".",
"The",
"partname",
"is",
"unique",
"by",
"number",
"without",
"regard",
"to",
"the",
"extension",
".",
"*",
"ext",
"*",
"does",
"not",
"include",
"the",
"leading",
"period",
"."
] | python | train |
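The partname helper above reuses the lowest unused image number before appending a new one. The numbering logic can be sketched independently of python-docx:

```python
# Standalone sketch of the "reuse the lowest free number" logic from _next_image_partname.
def next_image_number(used_numbers):
    """Return the smallest positive integer not already in used_numbers."""
    for n in range(1, len(used_numbers) + 2):
        if n not in used_numbers:
            return n

assert next_image_number({1, 2, 4}) == 3   # the gap is reused
assert next_image_number({1, 2, 3}) == 4   # no gap, so append at the end
```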
DistrictDataLabs/yellowbrick | yellowbrick/text/freqdist.py | https://github.com/DistrictDataLabs/yellowbrick/blob/59b67236a3862c73363e8edad7cd86da5b69e3b2/yellowbrick/text/freqdist.py#L202-L263 | def draw(self, **kwargs):
"""
Called from the fit method, this method creates the canvas and
draws the distribution plot on it.
Parameters
----------
kwargs: generic keyword arguments.
"""
# Prepare the data
bins = np.arange(self.N)
words = [self.features[i] for i in self.sorted_[:self.N]]
freqs = {}
# Set up the bar plots
if self.conditional_freqdist_:
for label, values in sorted(self.conditional_freqdist_.items(), key=itemgetter(0)):
freqs[label] = [
values[i] for i in self.sorted_[:self.N]
]
else:
freqs['corpus'] = [
self.freqdist_[i] for i in self.sorted_[:self.N]
]
# Draw a horizontal barplot
if self.orient == 'h':
# Add the barchart, stacking if necessary
for label, freq in freqs.items():
self.ax.barh(bins, freq, label=label, align='center')
# Set the y ticks to the words
self.ax.set_yticks(bins)
self.ax.set_yticklabels(words)
# Order the features from top to bottom on the y axis
self.ax.invert_yaxis()
# Turn off y grid lines and turn on x grid lines
self.ax.yaxis.grid(False)
self.ax.xaxis.grid(True)
# Draw a vertical barplot
elif self.orient == 'v':
# Add the barchart, stacking if necessary
for label, freq in freqs.items():
self.ax.bar(bins, freq, label=label, align='edge')
# Set the y ticks to the words
self.ax.set_xticks(bins)
self.ax.set_xticklabels(words, rotation=90)
# Turn off x grid lines and turn on y grid lines
self.ax.yaxis.grid(True)
self.ax.xaxis.grid(False)
# Unknown state
else:
raise YellowbrickValueError(
"Orientation must be 'h' or 'v'"
) | [
"def",
"draw",
"(",
"self",
",",
"*",
"*",
"kwargs",
")",
":",
"# Prepare the data",
"bins",
"=",
"np",
".",
"arange",
"(",
"self",
".",
"N",
")",
"words",
"=",
"[",
"self",
".",
"features",
"[",
"i",
"]",
"for",
"i",
"in",
"self",
".",
"sorted_",
"[",
":",
"self",
".",
"N",
"]",
"]",
"freqs",
"=",
"{",
"}",
"# Set up the bar plots",
"if",
"self",
".",
"conditional_freqdist_",
":",
"for",
"label",
",",
"values",
"in",
"sorted",
"(",
"self",
".",
"conditional_freqdist_",
".",
"items",
"(",
")",
",",
"key",
"=",
"itemgetter",
"(",
"0",
")",
")",
":",
"freqs",
"[",
"label",
"]",
"=",
"[",
"values",
"[",
"i",
"]",
"for",
"i",
"in",
"self",
".",
"sorted_",
"[",
":",
"self",
".",
"N",
"]",
"]",
"else",
":",
"freqs",
"[",
"'corpus'",
"]",
"=",
"[",
"self",
".",
"freqdist_",
"[",
"i",
"]",
"for",
"i",
"in",
"self",
".",
"sorted_",
"[",
":",
"self",
".",
"N",
"]",
"]",
"# Draw a horizontal barplot",
"if",
"self",
".",
"orient",
"==",
"'h'",
":",
"# Add the barchart, stacking if necessary",
"for",
"label",
",",
"freq",
"in",
"freqs",
".",
"items",
"(",
")",
":",
"self",
".",
"ax",
".",
"barh",
"(",
"bins",
",",
"freq",
",",
"label",
"=",
"label",
",",
"align",
"=",
"'center'",
")",
"# Set the y ticks to the words",
"self",
".",
"ax",
".",
"set_yticks",
"(",
"bins",
")",
"self",
".",
"ax",
".",
"set_yticklabels",
"(",
"words",
")",
"# Order the features from top to bottom on the y axis",
"self",
".",
"ax",
".",
"invert_yaxis",
"(",
")",
"# Turn off y grid lines and turn on x grid lines",
"self",
".",
"ax",
".",
"yaxis",
".",
"grid",
"(",
"False",
")",
"self",
".",
"ax",
".",
"xaxis",
".",
"grid",
"(",
"True",
")",
"# Draw a vertical barplot",
"elif",
"self",
".",
"orient",
"==",
"'v'",
":",
"# Add the barchart, stacking if necessary",
"for",
"label",
",",
"freq",
"in",
"freqs",
".",
"items",
"(",
")",
":",
"self",
".",
"ax",
".",
"bar",
"(",
"bins",
",",
"freq",
",",
"label",
"=",
"label",
",",
"align",
"=",
"'edge'",
")",
"# Set the y ticks to the words",
"self",
".",
"ax",
".",
"set_xticks",
"(",
"bins",
")",
"self",
".",
"ax",
".",
"set_xticklabels",
"(",
"words",
",",
"rotation",
"=",
"90",
")",
"# Turn off x grid lines and turn on y grid lines",
"self",
".",
"ax",
".",
"yaxis",
".",
"grid",
"(",
"True",
")",
"self",
".",
"ax",
".",
"xaxis",
".",
"grid",
"(",
"False",
")",
"# Unknown state",
"else",
":",
"raise",
"YellowbrickValueError",
"(",
"\"Orientation must be 'h' or 'v'\"",
")"
] | Called from the fit method, this method creates the canvas and
draws the distribution plot on it.
Parameters
----------
kwargs: generic keyword arguments. | [
"Called",
"from",
"the",
"fit",
"method",
"this",
"method",
"creates",
"the",
"canvas",
"and",
"draws",
"the",
"distribution",
"plot",
"on",
"it",
"."
] | python | train |
Tanganelli/CoAPthon3 | coapthon/messages/option.py | https://github.com/Tanganelli/CoAPthon3/blob/985763bfe2eb9e00f49ec100c5b8877c2ed7d531/coapthon/messages/option.py#L55-L77 | def value(self, value):
"""
Set the value of the option.
:param value: the option value
"""
opt_type = defines.OptionRegistry.LIST[self._number].value_type
if opt_type == defines.INTEGER:
if type(value) is not int:
value = int(value)
if byte_len(value) == 0:
value = 0
elif opt_type == defines.STRING:
if type(value) is not str:
value = str(value)
elif opt_type == defines.OPAQUE:
if type(value) is bytes:
pass
else:
if value is not None:
value = bytes(value, "utf-8")
self._value = value | [
"def",
"value",
"(",
"self",
",",
"value",
")",
":",
"opt_type",
"=",
"defines",
".",
"OptionRegistry",
".",
"LIST",
"[",
"self",
".",
"_number",
"]",
".",
"value_type",
"if",
"opt_type",
"==",
"defines",
".",
"INTEGER",
":",
"if",
"type",
"(",
"value",
")",
"is",
"not",
"int",
":",
"value",
"=",
"int",
"(",
"value",
")",
"if",
"byte_len",
"(",
"value",
")",
"==",
"0",
":",
"value",
"=",
"0",
"elif",
"opt_type",
"==",
"defines",
".",
"STRING",
":",
"if",
"type",
"(",
"value",
")",
"is",
"not",
"str",
":",
"value",
"=",
"str",
"(",
"value",
")",
"elif",
"opt_type",
"==",
"defines",
".",
"OPAQUE",
":",
"if",
"type",
"(",
"value",
")",
"is",
"bytes",
":",
"pass",
"else",
":",
"if",
"value",
"is",
"not",
"None",
":",
"value",
"=",
"bytes",
"(",
"value",
",",
"\"utf-8\"",
")",
"self",
".",
"_value",
"=",
"value"
] | Set the value of the option.
:param value: the option value | [
"Set",
"the",
"value",
"of",
"the",
"option",
"."
] | python | train |
ANCIR/granoloader | granoloader/mapping.py | https://github.com/ANCIR/granoloader/blob/c48b1bd50403dd611340c5f51637f7c5ca54059c/granoloader/mapping.py#L124-L129 | def get_source(self, spec, row):
""" Sources can be specified as plain strings or as a reference to a column. """
value = self.get_value({'column': spec.get('source_url_column')}, row)
if value is not None:
return value
return spec.get('source_url') | [
"def",
"get_source",
"(",
"self",
",",
"spec",
",",
"row",
")",
":",
"value",
"=",
"self",
".",
"get_value",
"(",
"{",
"'column'",
":",
"spec",
".",
"get",
"(",
"'source_url_column'",
")",
"}",
",",
"row",
")",
"if",
"value",
"is",
"not",
"None",
":",
"return",
"value",
"return",
"spec",
".",
"get",
"(",
"'source_url'",
")"
] | Sources can be specified as plain strings or as a reference to a column. | [
"Sources",
"can",
"be",
"specified",
"as",
"plain",
"strings",
"or",
"as",
"a",
"reference",
"to",
"a",
"column",
"."
] | python | train |
mar10/wsgidav | wsgidav/util.py | https://github.com/mar10/wsgidav/blob/cec0d84222fc24bea01be1cea91729001963f172/wsgidav/util.py#L387-L396 | def to_unicode_safe(s):
"""Convert a binary string to Unicode using UTF-8 (fallback to ISO-8859-1)."""
try:
u = compat.to_unicode(s, "utf8")
except ValueError:
_logger.error(
"to_unicode_safe({!r}) *** UTF-8 failed. Trying ISO-8859-1".format(s)
)
u = compat.to_unicode(s, "ISO-8859-1")
return u | [
"def",
"to_unicode_safe",
"(",
"s",
")",
":",
"try",
":",
"u",
"=",
"compat",
".",
"to_unicode",
"(",
"s",
",",
"\"utf8\"",
")",
"except",
"ValueError",
":",
"_logger",
".",
"error",
"(",
"\"to_unicode_safe({!r}) *** UTF-8 failed. Trying ISO-8859-1\"",
".",
"format",
"(",
"s",
")",
")",
"u",
"=",
"compat",
".",
"to_unicode",
"(",
"s",
",",
"\"ISO-8859-1\"",
")",
"return",
"u"
] | Convert a binary string to Unicode using UTF-8 (fallback to ISO-8859-1). | [
"Convert",
"a",
"binary",
"string",
"to",
"Unicode",
"using",
"UTF",
"-",
"8",
"(",
"fallback",
"to",
"ISO",
"-",
"8859",
"-",
"1",
")",
"."
] | python | valid |
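`to_unicode_safe` above decodes as UTF-8 and falls back to ISO-8859-1 on failure. A dependency-free sketch of the same fallback pattern, without the wsgidav `compat` and logging helpers:

```python
# Decode-with-fallback pattern: try UTF-8 first, fall back to ISO-8859-1.
def decode_safe(raw):
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:      # a subclass of ValueError, as caught above
        return raw.decode('iso-8859-1')

assert decode_safe(b'caf\xc3\xa9') == u'café'   # valid UTF-8
assert decode_safe(b'caf\xe9') == u'café'       # invalid UTF-8, ISO-8859-1 fallback
```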
EventTeam/beliefs | src/beliefs/cells/sets.py | https://github.com/EventTeam/beliefs/blob/c07d22b61bebeede74a72800030dde770bf64208/src/beliefs/cells/sets.py#L60-L65 | def is_equal(self, other):
"""
True iff all members are the same
"""
other = self.coerce(other)
return len(self.get_values().symmetric_difference(other.get_values())) == 0 | [
"def",
"is_equal",
"(",
"self",
",",
"other",
")",
":",
"other",
"=",
"self",
".",
"coerce",
"(",
"other",
")",
"return",
"len",
"(",
"self",
".",
"get_values",
"(",
")",
".",
"symmetric_difference",
"(",
"other",
".",
"get_values",
"(",
")",
")",
")",
"==",
"0"
] | True iff all members are the same | [
"True",
"iff",
"all",
"members",
"are",
"the",
"same"
] | python | train |
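`is_equal` above treats two cells as equal when the symmetric difference of their value sets is empty; in plain Python:

```python
# Two sets are equal exactly when their symmetric difference is empty.
a = {'red', 'green', 'blue'}
b = {'blue', 'green', 'red'}
c = {'red', 'green'}

assert len(a.symmetric_difference(b)) == 0   # same members, so "equal"
assert len(a.symmetric_difference(c)) == 1   # 'blue' only in a, so not equal
```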
yyuu/botornado | boto/sdb/db/manager/__init__.py | https://github.com/yyuu/botornado/blob/fffb056f5ff2324d1d5c1304014cfb1d899f602e/boto/sdb/db/manager/__init__.py#L23-L90 | def get_manager(cls):
"""
Returns the appropriate Manager class for a given Model class. It does this by
looking in the boto config for a section like this::
[DB]
db_type = SimpleDB
db_user = <aws access key id>
db_passwd = <aws secret access key>
db_name = my_domain
[DB_TestBasic]
db_type = SimpleDB
db_user = <another aws access key id>
db_passwd = <another aws secret access key>
db_name = basic_domain
db_port = 1111
The values in the DB section are "generic values" that will be used if nothing more
specific is found. You can also create a section for a specific Model class that
gives the db info for that class. In the example above, TestBasic is a Model subclass.
"""
db_user = boto.config.get('DB', 'db_user', None)
db_passwd = boto.config.get('DB', 'db_passwd', None)
db_type = boto.config.get('DB', 'db_type', 'SimpleDB')
db_name = boto.config.get('DB', 'db_name', None)
db_table = boto.config.get('DB', 'db_table', None)
db_host = boto.config.get('DB', 'db_host', "sdb.amazonaws.com")
db_port = boto.config.getint('DB', 'db_port', 443)
enable_ssl = boto.config.getbool('DB', 'enable_ssl', True)
sql_dir = boto.config.get('DB', 'sql_dir', None)
debug = boto.config.getint('DB', 'debug', 0)
# first see if there is a fully qualified section name in the Boto config file
module_name = cls.__module__.replace('.', '_')
db_section = 'DB_' + module_name + '_' + cls.__name__
if not boto.config.has_section(db_section):
db_section = 'DB_' + cls.__name__
if boto.config.has_section(db_section):
db_user = boto.config.get(db_section, 'db_user', db_user)
db_passwd = boto.config.get(db_section, 'db_passwd', db_passwd)
db_type = boto.config.get(db_section, 'db_type', db_type)
db_name = boto.config.get(db_section, 'db_name', db_name)
db_table = boto.config.get(db_section, 'db_table', db_table)
db_host = boto.config.get(db_section, 'db_host', db_host)
db_port = boto.config.getint(db_section, 'db_port', db_port)
enable_ssl = boto.config.getint(db_section, 'enable_ssl', enable_ssl)
debug = boto.config.getint(db_section, 'debug', debug)
elif hasattr(cls, "_db_name") and cls._db_name is not None:
# More specific then the generic DB config is any _db_name class property
db_name = cls._db_name
elif hasattr(cls.__bases__[0], "_manager"):
return cls.__bases__[0]._manager
if db_type == 'SimpleDB':
from sdbmanager import SDBManager
return SDBManager(cls, db_name, db_user, db_passwd,
db_host, db_port, db_table, sql_dir, enable_ssl)
elif db_type == 'PostgreSQL':
from pgmanager import PGManager
if db_table:
return PGManager(cls, db_name, db_user, db_passwd,
db_host, db_port, db_table, sql_dir, enable_ssl)
else:
return None
elif db_type == 'XML':
from xmlmanager import XMLManager
return XMLManager(cls, db_name, db_user, db_passwd,
db_host, db_port, db_table, sql_dir, enable_ssl)
else:
raise ValueError, 'Unknown db_type: %s' % db_type | [
"def",
"get_manager",
"(",
"cls",
")",
":",
"db_user",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_user'",
",",
"None",
")",
"db_passwd",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_passwd'",
",",
"None",
")",
"db_type",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_type'",
",",
"'SimpleDB'",
")",
"db_name",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_name'",
",",
"None",
")",
"db_table",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_table'",
",",
"None",
")",
"db_host",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'db_host'",
",",
"\"sdb.amazonaws.com\"",
")",
"db_port",
"=",
"boto",
".",
"config",
".",
"getint",
"(",
"'DB'",
",",
"'db_port'",
",",
"443",
")",
"enable_ssl",
"=",
"boto",
".",
"config",
".",
"getbool",
"(",
"'DB'",
",",
"'enable_ssl'",
",",
"True",
")",
"sql_dir",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"'DB'",
",",
"'sql_dir'",
",",
"None",
")",
"debug",
"=",
"boto",
".",
"config",
".",
"getint",
"(",
"'DB'",
",",
"'debug'",
",",
"0",
")",
"# first see if there is a fully qualified section name in the Boto config file",
"module_name",
"=",
"cls",
".",
"__module__",
".",
"replace",
"(",
"'.'",
",",
"'_'",
")",
"db_section",
"=",
"'DB_'",
"+",
"module_name",
"+",
"'_'",
"+",
"cls",
".",
"__name__",
"if",
"not",
"boto",
".",
"config",
".",
"has_section",
"(",
"db_section",
")",
":",
"db_section",
"=",
"'DB_'",
"+",
"cls",
".",
"__name__",
"if",
"boto",
".",
"config",
".",
"has_section",
"(",
"db_section",
")",
":",
"db_user",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_user'",
",",
"db_user",
")",
"db_passwd",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_passwd'",
",",
"db_passwd",
")",
"db_type",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_type'",
",",
"db_type",
")",
"db_name",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_name'",
",",
"db_name",
")",
"db_table",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_table'",
",",
"db_table",
")",
"db_host",
"=",
"boto",
".",
"config",
".",
"get",
"(",
"db_section",
",",
"'db_host'",
",",
"db_host",
")",
"db_port",
"=",
"boto",
".",
"config",
".",
"getint",
"(",
"db_section",
",",
"'db_port'",
",",
"db_port",
")",
"enable_ssl",
"=",
"boto",
".",
"config",
".",
"getint",
"(",
"db_section",
",",
"'enable_ssl'",
",",
"enable_ssl",
")",
"debug",
"=",
"boto",
".",
"config",
".",
"getint",
"(",
"db_section",
",",
"'debug'",
",",
"debug",
")",
"elif",
"hasattr",
"(",
"cls",
",",
"\"_db_name\"",
")",
"and",
"cls",
".",
"_db_name",
"is",
"not",
"None",
":",
"# More specific then the generic DB config is any _db_name class property",
"db_name",
"=",
"cls",
".",
"_db_name",
"elif",
"hasattr",
"(",
"cls",
".",
"__bases__",
"[",
"0",
"]",
",",
"\"_manager\"",
")",
":",
"return",
"cls",
".",
"__bases__",
"[",
"0",
"]",
".",
"_manager",
"if",
"db_type",
"==",
"'SimpleDB'",
":",
"from",
"sdbmanager",
"import",
"SDBManager",
"return",
"SDBManager",
"(",
"cls",
",",
"db_name",
",",
"db_user",
",",
"db_passwd",
",",
"db_host",
",",
"db_port",
",",
"db_table",
",",
"sql_dir",
",",
"enable_ssl",
")",
"elif",
"db_type",
"==",
"'PostgreSQL'",
":",
"from",
"pgmanager",
"import",
"PGManager",
"if",
"db_table",
":",
"return",
"PGManager",
"(",
"cls",
",",
"db_name",
",",
"db_user",
",",
"db_passwd",
",",
"db_host",
",",
"db_port",
",",
"db_table",
",",
"sql_dir",
",",
"enable_ssl",
")",
"else",
":",
"return",
"None",
"elif",
"db_type",
"==",
"'XML'",
":",
"from",
"xmlmanager",
"import",
"XMLManager",
"return",
"XMLManager",
"(",
"cls",
",",
"db_name",
",",
"db_user",
",",
"db_passwd",
",",
"db_host",
",",
"db_port",
",",
"db_table",
",",
"sql_dir",
",",
"enable_ssl",
")",
"else",
":",
"raise",
"ValueError",
",",
"'Unknown db_type: %s'",
"%",
"db_type"
] | Returns the appropriate Manager class for a given Model class. It does this by
looking in the boto config for a section like this::
[DB]
db_type = SimpleDB
db_user = <aws access key id>
db_passwd = <aws secret access key>
db_name = my_domain
[DB_TestBasic]
db_type = SimpleDB
db_user = <another aws access key id>
db_passwd = <another aws secret access key>
db_name = basic_domain
db_port = 1111
The values in the DB section are "generic values" that will be used if nothing more
specific is found. You can also create a section for a specific Model class that
gives the db info for that class. In the example above, TestBasic is a Model subclass. | [
"Returns",
"the",
"appropriate",
"Manager",
"class",
"for",
"a",
"given",
"Model",
"class",
".",
"It",
"does",
"this",
"by",
"looking",
"in",
"the",
"boto",
"config",
"for",
"a",
"section",
"like",
"this",
"::",
"[",
"DB",
"]",
"db_type",
"=",
"SimpleDB",
"db_user",
"=",
"<aws",
"access",
"key",
"id",
">",
"db_passwd",
"=",
"<aws",
"secret",
"access",
"key",
">",
"db_name",
"=",
"my_domain",
"[",
"DB_TestBasic",
"]",
"db_type",
"=",
"SimpleDB",
"db_user",
"=",
"<another",
"aws",
"access",
"key",
"id",
">",
"db_passwd",
"=",
"<another",
"aws",
"secret",
"access",
"key",
">",
"db_name",
"=",
"basic_domain",
"db_port",
"=",
"1111",
"The",
"values",
"in",
"the",
"DB",
"section",
"are",
"generic",
"values",
"that",
"will",
"be",
"used",
"if",
"nothing",
"more",
"specific",
"is",
"found",
".",
"You",
"can",
"also",
"create",
"a",
"section",
"for",
"a",
"specific",
"Model",
"class",
"that",
"gives",
"the",
"db",
"info",
"for",
"that",
"class",
".",
"In",
"the",
"example",
"above",
"TestBasic",
"is",
"a",
"Model",
"subclass",
"."
] | python | train |
materialsproject/pymatgen | pymatgen/analysis/transition_state.py | https://github.com/materialsproject/pymatgen/blob/4ca558cf72f8d5f8a1f21dfdfc0181a971c186da/pymatgen/analysis/transition_state.py#L214-L292 | def from_dir(cls, root_dir, relaxation_dirs=None, **kwargs):
"""
Initializes a NEBAnalysis object from a directory of a NEB run.
Note that OUTCARs must be present in all image directories. For the
terminal OUTCARs from relaxation calculations, you can specify the
locations using relaxation_dir. If these are not specified, the code
will attempt to look for the OUTCARs in 00 and 0n directories,
followed by subdirs "start", "end" or "initial", "final" in the
root_dir. These are just some typical conventions used
preferentially in Shyue Ping's MAVRL research group. For the
non-terminal points, the CONTCAR is read to obtain structures. For
terminal points, the POSCAR is used. The image directories are
assumed to be the only directories that can be resolved to integers.
E.g., "00", "01", "02", "03", "04", "05", "06". The minimum
sub-directory structure that can be parsed is of the following form (
a 5-image example is shown):
00:
- POSCAR
- OUTCAR
01, 02, 03, 04, 05:
- CONTCAR
- OUTCAR
06:
- POSCAR
- OUTCAR
Args:
root_dir (str): Path to the root directory of the NEB calculation.
relaxation_dirs (tuple): This specifies the starting and ending
relaxation directories from which the OUTCARs are read for the
terminal points for the energies.
Returns:
NEBAnalysis object.
"""
neb_dirs = []
for d in os.listdir(root_dir):
pth = os.path.join(root_dir, d)
if os.path.isdir(pth) and d.isdigit():
i = int(d)
neb_dirs.append((i, pth))
neb_dirs = sorted(neb_dirs, key=lambda d: d[0])
outcars = []
structures = []
# Setup the search sequence for the OUTCARs for the terminal
# directories.
terminal_dirs = []
if relaxation_dirs is not None:
terminal_dirs.append(relaxation_dirs)
terminal_dirs.append((neb_dirs[0][1], neb_dirs[-1][1]))
terminal_dirs.append([os.path.join(root_dir, d)
for d in ["start", "end"]])
terminal_dirs.append([os.path.join(root_dir, d)
for d in ["initial", "final"]])
for i, d in neb_dirs:
outcar = glob.glob(os.path.join(d, "OUTCAR*"))
contcar = glob.glob(os.path.join(d, "CONTCAR*"))
poscar = glob.glob(os.path.join(d, "POSCAR*"))
terminal = i == 0 or i == neb_dirs[-1][0]
if terminal:
for ds in terminal_dirs:
od = ds[0] if i == 0 else ds[1]
outcar = glob.glob(os.path.join(od, "OUTCAR*"))
if outcar:
outcar = sorted(outcar)
outcars.append(Outcar(outcar[-1]))
break
else:
raise ValueError("OUTCAR cannot be found for terminal "
"point %s" % d)
structures.append(Poscar.from_file(poscar[0]).structure)
else:
outcars.append(Outcar(outcar[0]))
structures.append(Poscar.from_file(contcar[0]).structure)
return NEBAnalysis.from_outcars(outcars, structures, **kwargs) | [
"def",
"from_dir",
"(",
"cls",
",",
"root_dir",
",",
"relaxation_dirs",
"=",
"None",
",",
"*",
"*",
"kwargs",
")",
":",
"neb_dirs",
"=",
"[",
"]",
"for",
"d",
"in",
"os",
".",
"listdir",
"(",
"root_dir",
")",
":",
"pth",
"=",
"os",
".",
"path",
".",
"join",
"(",
"root_dir",
",",
"d",
")",
"if",
"os",
".",
"path",
".",
"isdir",
"(",
"pth",
")",
"and",
"d",
".",
"isdigit",
"(",
")",
":",
"i",
"=",
"int",
"(",
"d",
")",
"neb_dirs",
".",
"append",
"(",
"(",
"i",
",",
"pth",
")",
")",
"neb_dirs",
"=",
"sorted",
"(",
"neb_dirs",
",",
"key",
"=",
"lambda",
"d",
":",
"d",
"[",
"0",
"]",
")",
"outcars",
"=",
"[",
"]",
"structures",
"=",
"[",
"]",
"# Setup the search sequence for the OUTCARs for the terminal",
"# directories.",
"terminal_dirs",
"=",
"[",
"]",
"if",
"relaxation_dirs",
"is",
"not",
"None",
":",
"terminal_dirs",
".",
"append",
"(",
"relaxation_dirs",
")",
"terminal_dirs",
".",
"append",
"(",
"(",
"neb_dirs",
"[",
"0",
"]",
"[",
"1",
"]",
",",
"neb_dirs",
"[",
"-",
"1",
"]",
"[",
"1",
"]",
")",
")",
"terminal_dirs",
".",
"append",
"(",
"[",
"os",
".",
"path",
".",
"join",
"(",
"root_dir",
",",
"d",
")",
"for",
"d",
"in",
"[",
"\"start\"",
",",
"\"end\"",
"]",
"]",
")",
"terminal_dirs",
".",
"append",
"(",
"[",
"os",
".",
"path",
".",
"join",
"(",
"root_dir",
",",
"d",
")",
"for",
"d",
"in",
"[",
"\"initial\"",
",",
"\"final\"",
"]",
"]",
")",
"for",
"i",
",",
"d",
"in",
"neb_dirs",
":",
"outcar",
"=",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"d",
",",
"\"OUTCAR*\"",
")",
")",
"contcar",
"=",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"d",
",",
"\"CONTCAR*\"",
")",
")",
"poscar",
"=",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"d",
",",
"\"POSCAR*\"",
")",
")",
"terminal",
"=",
"i",
"==",
"0",
"or",
"i",
"==",
"neb_dirs",
"[",
"-",
"1",
"]",
"[",
"0",
"]",
"if",
"terminal",
":",
"for",
"ds",
"in",
"terminal_dirs",
":",
"od",
"=",
"ds",
"[",
"0",
"]",
"if",
"i",
"==",
"0",
"else",
"ds",
"[",
"1",
"]",
"outcar",
"=",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"od",
",",
"\"OUTCAR*\"",
")",
")",
"if",
"outcar",
":",
"outcar",
"=",
"sorted",
"(",
"outcar",
")",
"outcars",
".",
"append",
"(",
"Outcar",
"(",
"outcar",
"[",
"-",
"1",
"]",
")",
")",
"break",
"else",
":",
"raise",
"ValueError",
"(",
"\"OUTCAR cannot be found for terminal \"",
"\"point %s\"",
"%",
"d",
")",
"structures",
".",
"append",
"(",
"Poscar",
".",
"from_file",
"(",
"poscar",
"[",
"0",
"]",
")",
".",
"structure",
")",
"else",
":",
"outcars",
".",
"append",
"(",
"Outcar",
"(",
"outcar",
"[",
"0",
"]",
")",
")",
"structures",
".",
"append",
"(",
"Poscar",
".",
"from_file",
"(",
"contcar",
"[",
"0",
"]",
")",
".",
"structure",
")",
"return",
"NEBAnalysis",
".",
"from_outcars",
"(",
"outcars",
",",
"structures",
",",
"*",
"*",
"kwargs",
")"
] | Initializes a NEBAnalysis object from a directory of a NEB run.
Note that OUTCARs must be present in all image directories. For the
terminal OUTCARs from relaxation calculations, you can specify the
locations using relaxation_dir. If these are not specified, the code
will attempt to look for the OUTCARs in 00 and 0n directories,
followed by subdirs "start", "end" or "initial", "final" in the
root_dir. These are just some typical conventions used
preferentially in Shyue Ping's MAVRL research group. For the
non-terminal points, the CONTCAR is read to obtain structures. For
terminal points, the POSCAR is used. The image directories are
assumed to be the only directories that can be resolved to integers.
E.g., "00", "01", "02", "03", "04", "05", "06". The minimum
sub-directory structure that can be parsed is of the following form (
a 5-image example is shown):
00:
- POSCAR
- OUTCAR
01, 02, 03, 04, 05:
- CONTCAR
- OUTCAR
06:
- POSCAR
- OUTCAR
Args:
root_dir (str): Path to the root directory of the NEB calculation.
relaxation_dirs (tuple): This specifies the starting and ending
relaxation directories from which the OUTCARs are read for the
terminal points for the energies.
Returns:
NEBAnalysis object. | [
"Initializes",
"a",
"NEBAnalysis",
"object",
"from",
"a",
"directory",
"of",
"a",
"NEB",
"run",
".",
"Note",
"that",
"OUTCARs",
"must",
"be",
"present",
"in",
"all",
"image",
"directories",
".",
"For",
"the",
"terminal",
"OUTCARs",
"from",
"relaxation",
"calculations",
"you",
"can",
"specify",
"the",
"locations",
"using",
"relaxation_dir",
".",
"If",
"these",
"are",
"not",
"specified",
"the",
"code",
"will",
"attempt",
"to",
"look",
"for",
"the",
"OUTCARs",
"in",
"00",
"and",
"0n",
"directories",
"followed",
"by",
"subdirs",
"start",
"end",
"or",
"initial",
"final",
"in",
"the",
"root_dir",
".",
"These",
"are",
"just",
"some",
"typical",
"conventions",
"used",
"preferentially",
"in",
"Shyue",
"Ping",
"s",
"MAVRL",
"research",
"group",
".",
"For",
"the",
"non",
"-",
"terminal",
"points",
"the",
"CONTCAR",
"is",
"read",
"to",
"obtain",
"structures",
".",
"For",
"terminal",
"points",
"the",
"POSCAR",
"is",
"used",
".",
"The",
"image",
"directories",
"are",
"assumed",
"to",
"be",
"the",
"only",
"directories",
"that",
"can",
"be",
"resolved",
"to",
"integers",
".",
"E",
".",
"g",
".",
"00",
"01",
"02",
"03",
"04",
"05",
"06",
".",
"The",
"minimum",
"sub",
"-",
"directory",
"structure",
"that",
"can",
"be",
"parsed",
"is",
"of",
"the",
"following",
"form",
"(",
"a",
"5",
"-",
"image",
"example",
"is",
"shown",
")",
":"
] | python | train |
nugget/python-insteonplm | insteonplm/devices/__init__.py | https://github.com/nugget/python-insteonplm/blob/65548041f1b0729ae1ae904443dd81b0c6cbf1bf/insteonplm/devices/__init__.py#L441-L473 | def write_aldb(self, mem_addr: int, mode: str, group: int, target,
data1=0x00, data2=0x00, data3=0x00):
"""Write to the device All-Link Database.
Parameters:
Required:
mode: r - device is a responder of target
c - device is a controller of target
group: Link group
target: Address of the other device
Optional:
data1: Device dependant
data2: Device dependant
data3: Device dependant
"""
if isinstance(mode, str) and mode.lower() in ['c', 'r']:
pass
else:
_LOGGER.error('Insteon link mode: %s', mode)
raise ValueError("Mode must be 'c' or 'r'")
if isinstance(group, int):
pass
else:
raise ValueError("Group must be an integer")
target_addr = Address(target)
_LOGGER.debug('calling aldb write_record')
self._aldb.write_record(mem_addr, mode, group, target_addr,
data1, data2, data3)
self._aldb.add_loaded_callback(self._aldb_loaded_callback) | [
"def",
"write_aldb",
"(",
"self",
",",
"mem_addr",
":",
"int",
",",
"mode",
":",
"str",
",",
"group",
":",
"int",
",",
"target",
",",
"data1",
"=",
"0x00",
",",
"data2",
"=",
"0x00",
",",
"data3",
"=",
"0x00",
")",
":",
"if",
"isinstance",
"(",
"mode",
",",
"str",
")",
"and",
"mode",
".",
"lower",
"(",
")",
"in",
"[",
"'c'",
",",
"'r'",
"]",
":",
"pass",
"else",
":",
"_LOGGER",
".",
"error",
"(",
"'Insteon link mode: %s'",
",",
"mode",
")",
"raise",
"ValueError",
"(",
"\"Mode must be 'c' or 'r'\"",
")",
"if",
"isinstance",
"(",
"group",
",",
"int",
")",
":",
"pass",
"else",
":",
"raise",
"ValueError",
"(",
"\"Group must be an integer\"",
")",
"target_addr",
"=",
"Address",
"(",
"target",
")",
"_LOGGER",
".",
"debug",
"(",
"'calling aldb write_record'",
")",
"self",
".",
"_aldb",
".",
"write_record",
"(",
"mem_addr",
",",
"mode",
",",
"group",
",",
"target_addr",
",",
"data1",
",",
"data2",
",",
"data3",
")",
"self",
".",
"_aldb",
".",
"add_loaded_callback",
"(",
"self",
".",
"_aldb_loaded_callback",
")"
] | Write to the device All-Link Database.
Parameters:
Required:
mode: r - device is a responder of target
c - device is a controller of target
group: Link group
target: Address of the other device
Optional:
data1: Device dependant
data2: Device dependant
data3: Device dependant | [
"Write",
"to",
"the",
"device",
"All",
"-",
"Link",
"Database",
"."
] | python | train |
smdabdoub/phylotoast | bin/sanger_qiimify.py | https://github.com/smdabdoub/phylotoast/blob/0b74ef171e6a84761710548501dfac71285a58a3/bin/sanger_qiimify.py#L94-L128 | def generate_barcodes(nIds, codeLen=12):
"""
Given a list of sample IDs generate unique n-base barcodes for each.
Note that only 4^n unique barcodes are possible.
"""
def next_code(b, c, i):
return c[:i] + b + (c[i+1:] if i < -1 else '')
def rand_base():
return random.choice(['A', 'T', 'C', 'G'])
def rand_seq(n):
return ''.join([rand_base() for _ in range(n)])
# homopolymer filter regex: match if 4 identical bases in a row
hpf = re.compile('aaaa|cccc|gggg|tttt', re.IGNORECASE)
while True:
codes = [rand_seq(codeLen)]
if (hpf.search(codes[0]) is None):
break
idx = 0
while len(codes) < nIds:
idx -= 1
if idx < -codeLen:
idx = -1
codes.append(rand_seq(codeLen))
else:
nc = next_code(rand_base(), codes[-1], idx)
if hpf.search(nc) is None:
codes.append(nc)
codes = list(set(codes))
return codes | [
"def",
"generate_barcodes",
"(",
"nIds",
",",
"codeLen",
"=",
"12",
")",
":",
"def",
"next_code",
"(",
"b",
",",
"c",
",",
"i",
")",
":",
"return",
"c",
"[",
":",
"i",
"]",
"+",
"b",
"+",
"(",
"c",
"[",
"i",
"+",
"1",
":",
"]",
"if",
"i",
"<",
"-",
"1",
"else",
"''",
")",
"def",
"rand_base",
"(",
")",
":",
"return",
"random",
".",
"choice",
"(",
"[",
"'A'",
",",
"'T'",
",",
"'C'",
",",
"'G'",
"]",
")",
"def",
"rand_seq",
"(",
"n",
")",
":",
"return",
"''",
".",
"join",
"(",
"[",
"rand_base",
"(",
")",
"for",
"_",
"in",
"range",
"(",
"n",
")",
"]",
")",
"# homopolymer filter regex: match if 4 identical bases in a row",
"hpf",
"=",
"re",
".",
"compile",
"(",
"'aaaa|cccc|gggg|tttt'",
",",
"re",
".",
"IGNORECASE",
")",
"while",
"True",
":",
"codes",
"=",
"[",
"rand_seq",
"(",
"codeLen",
")",
"]",
"if",
"(",
"hpf",
".",
"search",
"(",
"codes",
"[",
"0",
"]",
")",
"is",
"None",
")",
":",
"break",
"idx",
"=",
"0",
"while",
"len",
"(",
"codes",
")",
"<",
"nIds",
":",
"idx",
"-=",
"1",
"if",
"idx",
"<",
"-",
"codeLen",
":",
"idx",
"=",
"-",
"1",
"codes",
".",
"append",
"(",
"rand_seq",
"(",
"codeLen",
")",
")",
"else",
":",
"nc",
"=",
"next_code",
"(",
"rand_base",
"(",
")",
",",
"codes",
"[",
"-",
"1",
"]",
",",
"idx",
")",
"if",
"hpf",
".",
"search",
"(",
"nc",
")",
"is",
"None",
":",
"codes",
".",
"append",
"(",
"nc",
")",
"codes",
"=",
"list",
"(",
"set",
"(",
"codes",
")",
")",
"return",
"codes"
] | Given a list of sample IDs generate unique n-base barcodes for each.
Note that only 4^n unique barcodes are possible. | [
"Given",
"a",
"list",
"of",
"sample",
"IDs",
"generate",
"unique",
"n",
"-",
"base",
"barcodes",
"for",
"each",
".",
"Note",
"that",
"only",
"4^n",
"unique",
"barcodes",
"are",
"possible",
"."
] | python | train |
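`generate_barcodes` above rejects any candidate containing a run of four identical bases (the homopolymer filter). The filter itself is easy to check in isolation:

```python
import re

# Same homopolymer filter as in generate_barcodes: reject runs of four identical bases.
hpf = re.compile('aaaa|cccc|gggg|tttt', re.IGNORECASE)

assert hpf.search('ACGTACGTACGT') is None        # accepted: no four-base run
assert hpf.search('ACGGGGTACGTA') is not None    # rejected: contains 'GGGG'
```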
SeattleTestbed/seash | pyreadline/unicode_helper.py | https://github.com/SeattleTestbed/seash/blob/40f9d2285662ff8b61e0468b4196acee089b273b/pyreadline/unicode_helper.py#L20-L27 | def ensure_unicode(text):
u"""helper to ensure that text passed to WriteConsoleW is unicode"""
if isinstance(text, str):
try:
return text.decode(pyreadline_codepage, u"replace")
except (LookupError, TypeError):
return text.decode(u"ascii", u"replace")
return text | [
"def",
"ensure_unicode",
"(",
"text",
")",
":",
"if",
"isinstance",
"(",
"text",
",",
"str",
")",
":",
"try",
":",
"return",
"text",
".",
"decode",
"(",
"pyreadline_codepage",
",",
"u\"replace\"",
")",
"except",
"(",
"LookupError",
",",
"TypeError",
")",
":",
"return",
"text",
".",
"decode",
"(",
"u\"ascii\"",
",",
"u\"replace\"",
")",
"return",
"text"
] | u"""helper to ensure that text passed to WriteConsoleW is unicode | [
"u",
"helper",
"to",
"ensure",
"that",
"text",
"passed",
"to",
"WriteConsoleW",
"is",
"unicode"
] | python | train |
bninja/pilo | pilo/fields.py | https://github.com/bninja/pilo/blob/32b7298a47e33fb7383103017b4f3b59ad76ea6f/pilo/fields.py#L567-L587 | def map(self, value=NONE):
"""
Executes the steps used to "map" this fields value from `ctx.src` to a
value.
:param value: optional **pre-computed** value.
:return: The successfully mapped value or:
- NONE if one was not found
- ERROR if the field was present in `ctx.src` but invalid.
"""
with self.ctx(field=self, parent=self):
value = self._map(value)
if self.attach_parent and value not in IGNORE:
if hasattr(self.ctx, 'parent'):
value.parent = weakref.proxy(self.ctx.parent)
else:
value.parent = None
return value | [
"def",
"map",
"(",
"self",
",",
"value",
"=",
"NONE",
")",
":",
"with",
"self",
".",
"ctx",
"(",
"field",
"=",
"self",
",",
"parent",
"=",
"self",
")",
":",
"value",
"=",
"self",
".",
"_map",
"(",
"value",
")",
"if",
"self",
".",
"attach_parent",
"and",
"value",
"not",
"in",
"IGNORE",
":",
"if",
"hasattr",
"(",
"self",
".",
"ctx",
",",
"'parent'",
")",
":",
"value",
".",
"parent",
"=",
"weakref",
".",
"proxy",
"(",
"self",
".",
"ctx",
".",
"parent",
")",
"else",
":",
"value",
".",
"parent",
"=",
"None",
"return",
"value"
] | Executes the steps used to "map" this fields value from `ctx.src` to a
value.
:param value: optional **pre-computed** value.
:return: The successfully mapped value or:
- NONE if one was not found
- ERROR if the field was present in `ctx.src` but invalid. | [
"Executes",
"the",
"steps",
"used",
"to",
"map",
"this",
"fields",
"value",
"from",
"ctx",
".",
"src",
"to",
"a",
"value",
"."
] | python | train |
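The `attach_parent` branch above stores a `weakref.proxy` to the parent rather than a strong reference, so a field value never keeps its parent alive. A small CPython demonstration of that behaviour:

```python
import weakref

class Parent(object):
    name = 'parent'

class Child(object):
    pass

p = Parent()
c = Child()
c.parent = weakref.proxy(p)    # usable like p, but does not keep p alive
print(c.parent.name)           # 'parent' while p still exists

del p                          # drop the only strong reference (CPython collects immediately)
try:
    c.parent.name
except ReferenceError:
    print('parent has been garbage collected')
```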
PatrikValkovic/grammpy | grammpy/transforms/InverseContextFree.py | https://github.com/PatrikValkovic/grammpy/blob/879ce0ef794ac2823acc19314fcd7a8aba53e50f/grammpy/transforms/InverseContextFree.py#L58-L72 | def reverse_cyk_transforms(root):
# type: (Nonterminal) -> Nonterminal
"""
Reverse transformation made to grammar before CYK.
Performs following steps:
- transform from chomsky normal form
- restore unit rules
- restore epsilon rules
:param root: Root node of the parsed tree.
:return: Restored parsed tree.
"""
root = InverseContextFree.transform_from_chomsky_normal_form(root)
root = InverseContextFree.unit_rules_restore(root)
root = InverseContextFree.epsilon_rules_restore(root)
return root | [
"def",
"reverse_cyk_transforms",
"(",
"root",
")",
":",
"# type: (Nonterminal) -> Nonterminal",
"root",
"=",
"InverseContextFree",
".",
"transform_from_chomsky_normal_form",
"(",
"root",
")",
"root",
"=",
"InverseContextFree",
".",
"unit_rules_restore",
"(",
"root",
")",
"root",
"=",
"InverseContextFree",
".",
"epsilon_rules_restore",
"(",
"root",
")",
"return",
"root"
] | Reverse transformation made to grammar before CYK.
Performs following steps:
- transform from chomsky normal form
- restore unit rules
- restore epsilon rules
:param root: Root node of the parsed tree.
:return: Restored parsed tree. | [
"Reverse",
"transformation",
"made",
"to",
"grammar",
"before",
"CYK",
".",
"Performs",
"following",
"steps",
":",
"-",
"transform",
"from",
"chomsky",
"normal",
"form",
"-",
"restore",
"unit",
"rules",
"-",
"restore",
"epsilon",
"rules",
":",
"param",
"root",
":",
"Root",
"node",
"of",
"the",
"parsed",
"tree",
".",
":",
"return",
":",
"Restored",
"parsed",
"tree",
"."
] | python | train |
JoshAshby/pyRethinkORM | rethinkORM/rethinkModel.py | https://github.com/JoshAshby/pyRethinkORM/blob/92158d146dea6cfe9022d7de2537403f5f2c1e02/rethinkORM/rethinkModel.py#L277-L289 | def delete(self):
"""
Deletes the current instance. This assumes that we know what we're
doing, and have a primary key in our data already. If this is a new
instance, then we'll let the user know with an Exception
"""
if self._new:
raise Exception("This is a new object, %s not in data, \
indicating this entry isn't stored." % self.primaryKey)
r.table(self.table).get(self._data[self.primaryKey]) \
.delete(durability=self.durability).run(self._conn)
return True | [
"def",
"delete",
"(",
"self",
")",
":",
"if",
"self",
".",
"_new",
":",
"raise",
"Exception",
"(",
"\"This is a new object, %s not in data, \\\nindicating this entry isn't stored.\"",
"%",
"self",
".",
"primaryKey",
")",
"r",
".",
"table",
"(",
"self",
".",
"table",
")",
".",
"get",
"(",
"self",
".",
"_data",
"[",
"self",
".",
"primaryKey",
"]",
")",
".",
"delete",
"(",
"durability",
"=",
"self",
".",
"durability",
")",
".",
"run",
"(",
"self",
".",
"_conn",
")",
"return",
"True"
] | Deletes the current instance. This assumes that we know what we're
doing, and have a primary key in our data already. If this is a new
instance, then we'll let the user know with an Exception | [
"Deletes",
"the",
"current",
"instance",
".",
"This",
"assumes",
"that",
"we",
"know",
"what",
"we",
"re",
"doing",
"and",
"have",
"a",
"primary",
"key",
"in",
"our",
"data",
"already",
".",
"If",
"this",
"is",
"a",
"new",
"instance",
"then",
"we",
"ll",
"let",
"the",
"user",
"know",
"with",
"an",
"Exception"
] | python | train |
zeaphoo/reston | reston/core/apk.py | https://github.com/zeaphoo/reston/blob/96502487b2259572df55237c9526f92627465088/reston/core/apk.py#L797-L811 | def get_signature_names(self):
"""
Return a list of the signature file names.
"""
signature_expr = re.compile("^(META-INF/)(.*)(\.RSA|\.EC|\.DSA)$")
signatures = []
for i in self.get_files():
if signature_expr.search(i):
signatures.append(i)
if len(signatures) > 0:
return signatures
return None | [
"def",
"get_signature_names",
"(",
"self",
")",
":",
"signature_expr",
"=",
"re",
".",
"compile",
"(",
"\"^(META-INF/)(.*)(\\.RSA|\\.EC|\\.DSA)$\"",
")",
"signatures",
"=",
"[",
"]",
"for",
"i",
"in",
"self",
".",
"get_files",
"(",
")",
":",
"if",
"signature_expr",
".",
"search",
"(",
"i",
")",
":",
"signatures",
".",
"append",
"(",
"i",
")",
"if",
"len",
"(",
"signatures",
")",
">",
"0",
":",
"return",
"signatures",
"return",
"None"
] | Return a list of the signature file names. | [
"Return",
"a",
"list",
"of",
"the",
"signature",
"file",
"names",
"."
] | python | train |
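`get_signature_names` above filters APK entries with a regular expression for signature blocks under `META-INF/`. The pattern can be exercised on its own:

```python
import re

# Same pattern as get_signature_names(): signature blocks live under META-INF/.
signature_expr = re.compile(r"^(META-INF/)(.*)(\.RSA|\.EC|\.DSA)$")

assert signature_expr.search("META-INF/CERT.RSA") is not None
assert signature_expr.search("META-INF/CERT.EC") is not None
assert signature_expr.search("META-INF/MANIFEST.MF") is None    # manifest, not a signature
assert signature_expr.search("classes.dex") is None
```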
MaxStrange/AudioSegment | algorithms/asa.py | https://github.com/MaxStrange/AudioSegment/blob/1daefb8de626ddff3ff7016697c3ad31d262ecd6/algorithms/asa.py#L888-L901 | def _merge_adjacent_segments(mask):
"""
Merges all segments in `mask` which are touching.
"""
mask_ids = [id for id in np.unique(mask) if id != 0]
for id in mask_ids:
myfidxs, mysidxs = np.where(mask == id)
for other in mask_ids: # Ugh, brute force O(N^2) algorithm.. gross..
if id == other:
continue
else:
other_fidxs, other_sidxs = np.where(mask == other)
if _segments_are_adjacent((myfidxs, mysidxs), (other_fidxs, other_sidxs)):
mask[other_fidxs, other_sidxs] = id | [
"def",
"_merge_adjacent_segments",
"(",
"mask",
")",
":",
"mask_ids",
"=",
"[",
"id",
"for",
"id",
"in",
"np",
".",
"unique",
"(",
"mask",
")",
"if",
"id",
"!=",
"0",
"]",
"for",
"id",
"in",
"mask_ids",
":",
"myfidxs",
",",
"mysidxs",
"=",
"np",
".",
"where",
"(",
"mask",
"==",
"id",
")",
"for",
"other",
"in",
"mask_ids",
":",
"# Ugh, brute force O(N^2) algorithm.. gross..",
"if",
"id",
"==",
"other",
":",
"continue",
"else",
":",
"other_fidxs",
",",
"other_sidxs",
"=",
"np",
".",
"where",
"(",
"mask",
"==",
"other",
")",
"if",
"_segments_are_adjacent",
"(",
"(",
"myfidxs",
",",
"mysidxs",
")",
",",
"(",
"other_fidxs",
",",
"other_sidxs",
")",
")",
":",
"mask",
"[",
"other_fidxs",
",",
"other_sidxs",
"]",
"=",
"id"
] | Merges all segments in `mask` which are touching. | [
"Merges",
"all",
"segments",
"in",
"mask",
"which",
"are",
"touching",
"."
] | python | test |
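`_merge_adjacent_segments` above relabels touching segments in a mask so they share one id. The helper `_segments_are_adjacent` is not shown in this row, so the sketch below assumes 8-connected adjacency; it illustrates the merge idea rather than the library's exact criterion.

```python
import numpy as np

# Assumed adjacency test (8-connectivity); _segments_are_adjacent is not shown above.
def touching(coords_a, coords_b):
    fa, sa = coords_a
    fb, sb = coords_b
    for f, s in zip(fa, sa):
        if np.any((np.abs(fb - f) <= 1) & (np.abs(sb - s) <= 1)):
            return True
    return False

mask = np.array([[1, 1, 0, 0, 0],
                 [0, 2, 2, 0, 0],
                 [0, 0, 0, 0, 3]])
ids = [i for i in np.unique(mask) if i != 0]
for i in ids:                      # brute force pairwise scan, as in the original
    for j in ids:
        if i != j and touching(np.where(mask == i), np.where(mask == j)):
            mask[mask == j] = i
print(mask)   # segments 1 and 2 merge into one label; segment 3 stays separate
```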
saltstack/salt | salt/modules/mysql.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/mysql.py#L1049-L1083 | def db_tables(name, **connection_args):
'''
Shows the tables in the given MySQL database (if exists)
CLI Example:
.. code-block:: bash
salt '*' mysql.db_tables 'database'
'''
if not db_exists(name, **connection_args):
log.info('Database \'%s\' does not exist', name)
return False
dbc = _connect(**connection_args)
if dbc is None:
return []
cur = dbc.cursor()
s_name = quote_identifier(name)
# identifiers cannot be used as values
qry = 'SHOW TABLES IN {0}'.format(s_name)
try:
_execute(cur, qry)
except MySQLdb.OperationalError as exc:
err = 'MySQL Error {0}: {1}'.format(*exc.args)
__context__['mysql.error'] = err
log.error(err)
return []
ret = []
results = cur.fetchall()
for table in results:
ret.append(table[0])
log.debug(ret)
return ret | [
"def",
"db_tables",
"(",
"name",
",",
"*",
"*",
"connection_args",
")",
":",
"if",
"not",
"db_exists",
"(",
"name",
",",
"*",
"*",
"connection_args",
")",
":",
"log",
".",
"info",
"(",
"'Database \\'%s\\' does not exist'",
",",
"name",
")",
"return",
"False",
"dbc",
"=",
"_connect",
"(",
"*",
"*",
"connection_args",
")",
"if",
"dbc",
"is",
"None",
":",
"return",
"[",
"]",
"cur",
"=",
"dbc",
".",
"cursor",
"(",
")",
"s_name",
"=",
"quote_identifier",
"(",
"name",
")",
"# identifiers cannot be used as values",
"qry",
"=",
"'SHOW TABLES IN {0}'",
".",
"format",
"(",
"s_name",
")",
"try",
":",
"_execute",
"(",
"cur",
",",
"qry",
")",
"except",
"MySQLdb",
".",
"OperationalError",
"as",
"exc",
":",
"err",
"=",
"'MySQL Error {0}: {1}'",
".",
"format",
"(",
"*",
"exc",
".",
"args",
")",
"__context__",
"[",
"'mysql.error'",
"]",
"=",
"err",
"log",
".",
"error",
"(",
"err",
")",
"return",
"[",
"]",
"ret",
"=",
"[",
"]",
"results",
"=",
"cur",
".",
"fetchall",
"(",
")",
"for",
"table",
"in",
"results",
":",
"ret",
".",
"append",
"(",
"table",
"[",
"0",
"]",
")",
"log",
".",
"debug",
"(",
"ret",
")",
"return",
"ret"
] | Shows the tables in the given MySQL database (if exists)
CLI Example:
.. code-block:: bash
salt '*' mysql.db_tables 'database' | [
"Shows",
"the",
"tables",
"in",
"the",
"given",
"MySQL",
"database",
"(",
"if",
"exists",
")"
] | python | train |
AtomHash/evernode | evernode/classes/form_data.py | https://github.com/AtomHash/evernode/blob/b2fb91555fb937a3f3eba41db56dee26f9b034be/evernode/classes/form_data.py#L38-L46 | def add_file(self, name, required=False, error=None, extensions=None):
""" Add a file field to parse on request (uploads) """
if name is None:
return
self.file_arguments.append(dict(
name=name,
required=required,
error=error,
extensions=extensions)) | [
"def",
"add_file",
"(",
"self",
",",
"name",
",",
"required",
"=",
"False",
",",
"error",
"=",
"None",
",",
"extensions",
"=",
"None",
")",
":",
"if",
"name",
"is",
"None",
":",
"return",
"self",
".",
"file_arguments",
".",
"append",
"(",
"dict",
"(",
"name",
"=",
"name",
",",
"required",
"=",
"required",
",",
"error",
"=",
"error",
",",
"extensions",
"=",
"extensions",
")",
")"
] | Add a file field to parse on request (uploads) | [
"Add",
"a",
"file",
"field",
"to",
"parse",
"on",
"request",
"(",
"uploads",
")"
] | python | train |
stephrdev/django-formwizard | formwizard/views.py | https://github.com/stephrdev/django-formwizard/blob/7b35165f0340aae4e8302d5b05b0cb443f6c9904/formwizard/views.py#L637-L646 | def post(self, *args, **kwargs):
"""
Do a redirect if user presses the prev. step button. The rest of this
is super'd from FormWizard.
"""
prev_step = self.request.POST.get('wizard_prev_step', None)
if prev_step and prev_step in self.get_form_list():
self.storage.current_step = prev_step
return redirect(self.url_name, step=prev_step)
return super(NamedUrlWizardView, self).post(*args, **kwargs) | [
"def",
"post",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"prev_step",
"=",
"self",
".",
"request",
".",
"POST",
".",
"get",
"(",
"'wizard_prev_step'",
",",
"None",
")",
"if",
"prev_step",
"and",
"prev_step",
"in",
"self",
".",
"get_form_list",
"(",
")",
":",
"self",
".",
"storage",
".",
"current_step",
"=",
"prev_step",
"return",
"redirect",
"(",
"self",
".",
"url_name",
",",
"step",
"=",
"prev_step",
")",
"return",
"super",
"(",
"NamedUrlWizardView",
",",
"self",
")",
".",
"post",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | Do a redirect if user presses the prev. step button. The rest of this
is super'd from FormWizard. | [
"Do",
"a",
"redirect",
"if",
"user",
"presses",
"the",
"prev",
".",
"step",
"button",
".",
"The",
"rest",
"of",
"this",
"is",
"super",
"d",
"from",
"FormWizard",
"."
] | python | train |
apache/airflow | airflow/contrib/hooks/gcp_video_intelligence_hook.py | https://github.com/apache/airflow/blob/b69c686ad8a0c89b9136bb4b31767257eb7b2597/airflow/contrib/hooks/gcp_video_intelligence_hook.py#L51-L105 | def annotate_video(
self,
input_uri=None,
input_content=None,
features=None,
video_context=None,
output_uri=None,
location=None,
retry=None,
timeout=None,
metadata=None,
):
"""
Performs video annotation.
:param input_uri: Input video location. Currently, only Google Cloud Storage URIs are supported,
which must be specified in the following format: ``gs://bucket-id/object-id``.
:type input_uri: str
:param input_content: The video data bytes.
If unset, the input video(s) should be specified via ``input_uri``.
If set, ``input_uri`` should be unset.
:type input_content: bytes
:param features: Requested video annotation features.
:type features: list[google.cloud.videointelligence_v1.VideoIntelligenceServiceClient.enums.Feature]
:param output_uri: Optional, location where the output (in JSON format) should be stored. Currently,
only Google Cloud Storage URIs are supported, which must be specified in the following format:
``gs://bucket-id/object-id``.
:type output_uri: str
:param video_context: Optional, Additional video context and/or feature-specific parameters.
:type video_context: dict or google.cloud.videointelligence_v1.types.VideoContext
:param location: Optional, cloud region where annotation should take place. Supported cloud regions:
us-east1, us-west1, europe-west1, asia-east1.
If no region is specified, a region will be determined based on video file location.
:type location: str
:param retry: Retry object used to determine when/if to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: Optional, The amount of time, in seconds, to wait for the request to complete.
Note that if retry is specified, the timeout applies to each individual attempt.
:type timeout: float
:param metadata: Optional, Additional metadata that is provided to the method.
:type metadata: seq[tuple[str, str]]
"""
client = self.get_conn()
return client.annotate_video(
input_uri=input_uri,
input_content=input_content,
features=features,
video_context=video_context,
output_uri=output_uri,
location_id=location,
retry=retry,
timeout=timeout,
metadata=metadata,
) | [
"def",
"annotate_video",
"(",
"self",
",",
"input_uri",
"=",
"None",
",",
"input_content",
"=",
"None",
",",
"features",
"=",
"None",
",",
"video_context",
"=",
"None",
",",
"output_uri",
"=",
"None",
",",
"location",
"=",
"None",
",",
"retry",
"=",
"None",
",",
"timeout",
"=",
"None",
",",
"metadata",
"=",
"None",
",",
")",
":",
"client",
"=",
"self",
".",
"get_conn",
"(",
")",
"return",
"client",
".",
"annotate_video",
"(",
"input_uri",
"=",
"input_uri",
",",
"input_content",
"=",
"input_content",
",",
"features",
"=",
"features",
",",
"video_context",
"=",
"video_context",
",",
"output_uri",
"=",
"output_uri",
",",
"location_id",
"=",
"location",
",",
"retry",
"=",
"retry",
",",
"timeout",
"=",
"timeout",
",",
"metadata",
"=",
"metadata",
",",
")"
] | Performs video annotation.
:param input_uri: Input video location. Currently, only Google Cloud Storage URIs are supported,
which must be specified in the following format: ``gs://bucket-id/object-id``.
:type input_uri: str
:param input_content: The video data bytes.
If unset, the input video(s) should be specified via ``input_uri``.
If set, ``input_uri`` should be unset.
:type input_content: bytes
:param features: Requested video annotation features.
:type features: list[google.cloud.videointelligence_v1.VideoIntelligenceServiceClient.enums.Feature]
:param output_uri: Optional, location where the output (in JSON format) should be stored. Currently,
only Google Cloud Storage URIs are supported, which must be specified in the following format:
``gs://bucket-id/object-id``.
:type output_uri: str
:param video_context: Optional, Additional video context and/or feature-specific parameters.
:type video_context: dict or google.cloud.videointelligence_v1.types.VideoContext
:param location: Optional, cloud region where annotation should take place. Supported cloud regions:
us-east1, us-west1, europe-west1, asia-east1.
If no region is specified, a region will be determined based on video file location.
:type location: str
:param retry: Retry object used to determine when/if to retry requests.
If None is specified, requests will not be retried.
:type retry: google.api_core.retry.Retry
:param timeout: Optional, The amount of time, in seconds, to wait for the request to complete.
Note that if retry is specified, the timeout applies to each individual attempt.
:type timeout: float
:param metadata: Optional, Additional metadata that is provided to the method.
:type metadata: seq[tuple[str, str]] | [
"Performs",
"video",
"annotation",
"."
] | python | test |
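A hypothetical call site for the annotate_video hook method above; the import path, connection id, bucket objects, and region are illustrative assumptions from the Airflow 1.10-era contrib layout, not part of the record itself.

```python
# Illustrative sketch only; names below are assumptions, not taken from the record above.
from airflow.contrib.hooks.gcp_video_intelligence_hook import CloudVideoIntelligenceHook  # assumed path
from google.cloud.videointelligence_v1 import enums

hook = CloudVideoIntelligenceHook(gcp_conn_id="google_cloud_default")  # assumed connection id
operation = hook.annotate_video(
    input_uri="gs://my-bucket/input.mp4",        # placeholder object
    features=[enums.Feature.LABEL_DETECTION],
    output_uri="gs://my-bucket/labels.json",     # placeholder output object
    location="us-east1",
)
result = operation.result(timeout=600)  # annotate_video returns a long-running operation
```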
google/grr | grr/config/grr_response_templates/setup.py | https://github.com/google/grr/blob/5cef4e8e2f0d5df43ea4877e9c798e0bf60bfe74/grr/config/grr_response_templates/setup.py#L53-L69 | def CheckTemplates(self, base_dir, version):
"""Verify we have at least one template that matches maj.minor version."""
major_minor = ".".join(version.split(".")[0:2])
templates = glob.glob(
os.path.join(base_dir, "templates/*%s*.zip" % major_minor))
required_templates = set(
[x.replace("maj.minor", major_minor) for x in self.REQUIRED_TEMPLATES])
# Client templates have an extra version digit, e.g. 3.1.0.0
templates_present = set([
re.sub(r"_%s[^_]+_" % major_minor, "_%s_" % major_minor,
os.path.basename(x)) for x in templates
])
difference = required_templates - templates_present
if difference:
raise RuntimeError("Missing templates %s" % difference) | [
"def",
"CheckTemplates",
"(",
"self",
",",
"base_dir",
",",
"version",
")",
":",
"major_minor",
"=",
"\".\"",
".",
"join",
"(",
"version",
".",
"split",
"(",
"\".\"",
")",
"[",
"0",
":",
"2",
"]",
")",
"templates",
"=",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"base_dir",
",",
"\"templates/*%s*.zip\"",
"%",
"major_minor",
")",
")",
"required_templates",
"=",
"set",
"(",
"[",
"x",
".",
"replace",
"(",
"\"maj.minor\"",
",",
"major_minor",
")",
"for",
"x",
"in",
"self",
".",
"REQUIRED_TEMPLATES",
"]",
")",
"# Client templates have an extra version digit, e.g. 3.1.0.0",
"templates_present",
"=",
"set",
"(",
"[",
"re",
".",
"sub",
"(",
"r\"_%s[^_]+_\"",
"%",
"major_minor",
",",
"\"_%s_\"",
"%",
"major_minor",
",",
"os",
".",
"path",
".",
"basename",
"(",
"x",
")",
")",
"for",
"x",
"in",
"templates",
"]",
")",
"difference",
"=",
"required_templates",
"-",
"templates_present",
"if",
"difference",
":",
"raise",
"RuntimeError",
"(",
"\"Missing templates %s\"",
"%",
"difference",
")"
] | Verify we have at least one template that matches maj.minor version. | [
"Verify",
"we",
"have",
"at",
"least",
"one",
"template",
"that",
"matches",
"maj",
".",
"minor",
"version",
"."
] | python | train |
matrix-org/matrix-python-sdk | matrix_client/client.py | https://github.com/matrix-org/matrix-python-sdk/blob/e734cce3ccd35f2d355c6a19a7a701033472498a/matrix_client/client.py#L512-L531 | def start_listener_thread(self, timeout_ms=30000, exception_handler=None):
""" Start a listener thread to listen for events in the background.
Args:
timeout_ms (int): How long to poll the Home Server for before
retrying.
exception_handler (func(exception)): Optional exception handler
function which can be used to handle exceptions in the caller
thread.
"""
try:
thread = Thread(target=self.listen_forever,
args=(timeout_ms, exception_handler))
thread.daemon = True
self.sync_thread = thread
self.should_listen = True
thread.start()
except RuntimeError:
e = sys.exc_info()[0]
logger.error("Error: unable to start thread. %s", str(e)) | [
"def",
"start_listener_thread",
"(",
"self",
",",
"timeout_ms",
"=",
"30000",
",",
"exception_handler",
"=",
"None",
")",
":",
"try",
":",
"thread",
"=",
"Thread",
"(",
"target",
"=",
"self",
".",
"listen_forever",
",",
"args",
"=",
"(",
"timeout_ms",
",",
"exception_handler",
")",
")",
"thread",
".",
"daemon",
"=",
"True",
"self",
".",
"sync_thread",
"=",
"thread",
"self",
".",
"should_listen",
"=",
"True",
"thread",
".",
"start",
"(",
")",
"except",
"RuntimeError",
":",
"e",
"=",
"sys",
".",
"exc_info",
"(",
")",
"[",
"0",
"]",
"logger",
".",
"error",
"(",
"\"Error: unable to start thread. %s\"",
",",
"str",
"(",
"e",
")",
")"
] | Start a listener thread to listen for events in the background.
Args:
timeout_ms (int): How long to poll the Home Server for before
retrying.
exception_handler (func(exception)): Optional exception handler
function which can be used to handle exceptions in the caller
thread. | [
"Start",
"a",
"listener",
"thread",
"to",
"listen",
"for",
"events",
"in",
"the",
"background",
"."
] | python | train |
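A minimal usage sketch for start_listener_thread; the homeserver URL, credentials, and callbacks are placeholders, and the calls follow the old matrix-python-sdk client API.

```python
# Placeholder homeserver and credentials; illustrates start_listener_thread only.
from matrix_client.client import MatrixClient

client = MatrixClient("https://matrix.example.org")
client.login_with_password(username="alice", password="secret")  # assumed login flow

def on_exception(exc):
    # Runs inside the listener thread if listen_forever raises.
    print("sync loop stopped:", exc)

client.add_listener(lambda event: print(event.get("type")))
client.start_listener_thread(timeout_ms=30000, exception_handler=on_exception)
```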
Azure/azure-storage-python | azure-storage-queue/azure/storage/queue/queueservice.py | https://github.com/Azure/azure-storage-python/blob/52327354b192cbcf6b7905118ec6b5d57fa46275/azure-storage-queue/azure/storage/queue/queueservice.py#L732-L794 | def put_message(self, queue_name, content, visibility_timeout=None,
time_to_live=None, timeout=None):
'''
Adds a new message to the back of the message queue.
The visibility timeout specifies the time that the message will be
invisible. After the timeout expires, the message will become visible.
If a visibility timeout is not specified, the default value of 0 is used.
The message time-to-live specifies how long a message will remain in the
queue. The message will be deleted from the queue when the time-to-live
period expires.
If the key-encryption-key field is set on the local service object, this method will
encrypt the content before uploading.
:param str queue_name:
The name of the queue to put the message into.
:param obj content:
Message content. Allowed type is determined by the encode_function
set on the service. Default is str. The encoded message can be up to
64KB in size.
:param int visibility_timeout:
If not specified, the default value is 0. Specifies the
new visibility timeout value, in seconds, relative to server time.
The value must be larger than or equal to 0, and cannot be
larger than 7 days. The visibility timeout of a message cannot be
set to a value later than the expiry time. visibility_timeout
should be set to a value smaller than the time-to-live value.
:param int time_to_live:
Specifies the time-to-live interval for the message, in
seconds. The time-to-live may be any positive number or -1 for infinity. If this
parameter is omitted, the default time-to-live is 7 days.
:param int timeout:
The server timeout, expressed in seconds.
:return:
A :class:`~azure.storage.queue.models.QueueMessage` object.
This object is also populated with the content although it is not
returned from the service.
:rtype: :class:`~azure.storage.queue.models.QueueMessage`
'''
_validate_encryption_required(self.require_encryption, self.key_encryption_key)
_validate_not_none('queue_name', queue_name)
_validate_not_none('content', content)
request = HTTPRequest()
request.method = 'POST'
request.host_locations = self._get_host_locations()
request.path = _get_path(queue_name, True)
request.query = {
'visibilitytimeout': _to_str(visibility_timeout),
'messagettl': _to_str(time_to_live),
'timeout': _int_to_str(timeout)
}
request.body = _get_request_body(_convert_queue_message_xml(content, self.encode_function,
self.key_encryption_key))
message_list = self._perform_request(request, _convert_xml_to_queue_messages,
[self.decode_function, False,
None, None, content])
return message_list[0] | [
"def",
"put_message",
"(",
"self",
",",
"queue_name",
",",
"content",
",",
"visibility_timeout",
"=",
"None",
",",
"time_to_live",
"=",
"None",
",",
"timeout",
"=",
"None",
")",
":",
"_validate_encryption_required",
"(",
"self",
".",
"require_encryption",
",",
"self",
".",
"key_encryption_key",
")",
"_validate_not_none",
"(",
"'queue_name'",
",",
"queue_name",
")",
"_validate_not_none",
"(",
"'content'",
",",
"content",
")",
"request",
"=",
"HTTPRequest",
"(",
")",
"request",
".",
"method",
"=",
"'POST'",
"request",
".",
"host_locations",
"=",
"self",
".",
"_get_host_locations",
"(",
")",
"request",
".",
"path",
"=",
"_get_path",
"(",
"queue_name",
",",
"True",
")",
"request",
".",
"query",
"=",
"{",
"'visibilitytimeout'",
":",
"_to_str",
"(",
"visibility_timeout",
")",
",",
"'messagettl'",
":",
"_to_str",
"(",
"time_to_live",
")",
",",
"'timeout'",
":",
"_int_to_str",
"(",
"timeout",
")",
"}",
"request",
".",
"body",
"=",
"_get_request_body",
"(",
"_convert_queue_message_xml",
"(",
"content",
",",
"self",
".",
"encode_function",
",",
"self",
".",
"key_encryption_key",
")",
")",
"message_list",
"=",
"self",
".",
"_perform_request",
"(",
"request",
",",
"_convert_xml_to_queue_messages",
",",
"[",
"self",
".",
"decode_function",
",",
"False",
",",
"None",
",",
"None",
",",
"content",
"]",
")",
"return",
"message_list",
"[",
"0",
"]"
] | Adds a new message to the back of the message queue.
The visibility timeout specifies the time that the message will be
invisible. After the timeout expires, the message will become visible.
If a visibility timeout is not specified, the default value of 0 is used.
The message time-to-live specifies how long a message will remain in the
queue. The message will be deleted from the queue when the time-to-live
period expires.
If the key-encryption-key field is set on the local service object, this method will
encrypt the content before uploading.
:param str queue_name:
The name of the queue to put the message into.
:param obj content:
Message content. Allowed type is determined by the encode_function
set on the service. Default is str. The encoded message can be up to
64KB in size.
:param int visibility_timeout:
If not specified, the default value is 0. Specifies the
new visibility timeout value, in seconds, relative to server time.
The value must be larger than or equal to 0, and cannot be
larger than 7 days. The visibility timeout of a message cannot be
set to a value later than the expiry time. visibility_timeout
should be set to a value smaller than the time-to-live value.
:param int time_to_live:
Specifies the time-to-live interval for the message, in
seconds. The time-to-live may be any positive number or -1 for infinity. If this
parameter is omitted, the default time-to-live is 7 days.
:param int timeout:
The server timeout, expressed in seconds.
:return:
A :class:`~azure.storage.queue.models.QueueMessage` object.
This object is also populated with the content although it is not
returned from the service.
:rtype: :class:`~azure.storage.queue.models.QueueMessage` | [
"Adds",
"a",
"new",
"message",
"to",
"the",
"back",
"of",
"the",
"message",
"queue",
"."
] | python | train |
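A short sketch of put_message through the legacy azure-storage-queue QueueService; the account credentials and queue name are placeholders.

```python
# Legacy azure-storage-queue usage sketch; credentials are placeholders.
from azure.storage.queue import QueueService

queue_service = QueueService(account_name="myaccount", account_key="...")
queue_service.create_queue("taskqueue")

# Hidden for 10 seconds, then visible; deleted automatically after one hour.
message = queue_service.put_message(
    "taskqueue", u"process item 42", visibility_timeout=10, time_to_live=3600)
print(message.id, message.content)
```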
census-instrumentation/opencensus-python | opencensus/trace/tracers/context_tracer.py | https://github.com/census-instrumentation/opencensus-python/blob/992b223f7e34c5dcb65922b7d5c827e7a1351e7d/opencensus/trace/tracers/context_tracer.py#L149-L176 | def get_span_datas(self, span):
"""Extracts a list of SpanData tuples from a span
:rtype: list of opencensus.trace.span_data.SpanData
:return: list of SpanData tuples
"""
span_datas = [
span_data_module.SpanData(
name=ss.name,
context=self.span_context,
span_id=ss.span_id,
parent_span_id=ss.parent_span.span_id if
ss.parent_span else None,
attributes=ss.attributes,
start_time=ss.start_time,
end_time=ss.end_time,
child_span_count=len(ss.children),
stack_trace=ss.stack_trace,
time_events=ss.time_events,
links=ss.links,
status=ss.status,
same_process_as_parent_span=ss.same_process_as_parent_span,
span_kind=ss.span_kind
)
for ss in span
]
return span_datas | [
"def",
"get_span_datas",
"(",
"self",
",",
"span",
")",
":",
"span_datas",
"=",
"[",
"span_data_module",
".",
"SpanData",
"(",
"name",
"=",
"ss",
".",
"name",
",",
"context",
"=",
"self",
".",
"span_context",
",",
"span_id",
"=",
"ss",
".",
"span_id",
",",
"parent_span_id",
"=",
"ss",
".",
"parent_span",
".",
"span_id",
"if",
"ss",
".",
"parent_span",
"else",
"None",
",",
"attributes",
"=",
"ss",
".",
"attributes",
",",
"start_time",
"=",
"ss",
".",
"start_time",
",",
"end_time",
"=",
"ss",
".",
"end_time",
",",
"child_span_count",
"=",
"len",
"(",
"ss",
".",
"children",
")",
",",
"stack_trace",
"=",
"ss",
".",
"stack_trace",
",",
"time_events",
"=",
"ss",
".",
"time_events",
",",
"links",
"=",
"ss",
".",
"links",
",",
"status",
"=",
"ss",
".",
"status",
",",
"same_process_as_parent_span",
"=",
"ss",
".",
"same_process_as_parent_span",
",",
"span_kind",
"=",
"ss",
".",
"span_kind",
")",
"for",
"ss",
"in",
"span",
"]",
"return",
"span_datas"
] | Extracts a list of SpanData tuples from a span
:rtype: list of opencensus.trace.span_data.SpanData
:return: list of SpanData tuples | [
"Extracts",
"a",
"list",
"of",
"SpanData",
"tuples",
"from",
"a",
"span"
] | python | train |
globality-corp/microcosm-flask | microcosm_flask/fields/timestamp_field.py | https://github.com/globality-corp/microcosm-flask/blob/c2eaf57f03e7d041eea343751a4a90fcc80df418/microcosm_flask/fields/timestamp_field.py#L24-L35 | def _serialize(self, value, attr, obj):
"""
Serialize value as a timestamp, either as a Unix timestamp (in float seconds) or a UTC isoformat string.
"""
if value is None:
return None
if self.use_isoformat:
return datetime.utcfromtimestamp(value).isoformat()
else:
return value | [
"def",
"_serialize",
"(",
"self",
",",
"value",
",",
"attr",
",",
"obj",
")",
":",
"if",
"value",
"is",
"None",
":",
"return",
"None",
"if",
"self",
".",
"use_isoformat",
":",
"return",
"datetime",
".",
"utcfromtimestamp",
"(",
"value",
")",
".",
"isoformat",
"(",
")",
"else",
":",
"return",
"value"
] | Serialize value as a timestamp, either as a Unix timestamp (in float seconds) or a UTC isoformat string. | [
"Serialize",
"value",
"as",
"a",
"timestamp",
"either",
"as",
"a",
"Unix",
"timestamp",
"(",
"in",
"float",
"second",
")",
"or",
"a",
"UTC",
"isoformat",
"string",
"."
] | python | train |
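The branch above only chooses between the raw epoch value and a UTC ISO-8601 string; the same conversion in a standalone snippet:

```python
# Standalone illustration of the two serialization modes above.
from datetime import datetime

value = 1514764800.0  # epoch seconds for 2018-01-01T00:00:00 UTC

print(value)                                         # use_isoformat=False -> 1514764800.0
print(datetime.utcfromtimestamp(value).isoformat())  # use_isoformat=True  -> 2018-01-01T00:00:00
```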
zetaops/zengine | zengine/views/system.py | https://github.com/zetaops/zengine/blob/b5bc32d3b37bca799f8985be916f04528ac79e4a/zengine/views/system.py#L107-L197 | def get_tasks(current):
"""
List task invitations of current user
.. code-block:: python
# request:
{
'view': '_zops_get_tasks',
'state': string, # one of these:
# "active", "future", "finished", "expired"
'inverted': boolean, # search on other people's tasks
'query': string, # optional. for searching on user's tasks
'wf_type': string, # optional. only show tasks of selected wf_type
'start_date': datetime, # optional. only show tasks starts after this date
'finish_date': datetime, # optional. only show tasks should end before this date
}
# response:
{
'task_list': [
{'token': key, # wf token (key of WFInstance)
'key': key, # key of TaskInvitation
'title': string, # task title
'wf_type': string, # name of the workflow
'state': int, # state of invitation
# zengine.models.workflow_manager.TASK_STATES
'start_date': string, # start date
'finish_date': string, # end date
},],
'active_task_count': int,
'future_task_count': int,
'finished_task_count': int,
'expired_task_count': int,
}
"""
# TODO: Also return invitations for user's other roles
# TODO: Handle automatic role switching
STATE_DICT = {
'active': [20, 30],
'future': 10,
'finished': 40,
'expired': 90
}
state = STATE_DICT[current.input['state']]
if isinstance(state, list):
queryset = TaskInvitation.objects.filter(progress__in=state)
else:
queryset = TaskInvitation.objects.filter(progress=state)
if 'inverted' in current.input:
# show other user's tasks
allowed_workflows = [bpmn_wf.name for bpmn_wf in BPMNWorkflow.objects.all()
if current.has_permission(bpmn_wf.name)]
queryset = queryset.exclude(role_id=current.role_id).filter(wf_name__in=allowed_workflows)
else:
# show current user's tasks
queryset = queryset.filter(role_id=current.role_id)
if 'query' in current.input:
queryset = queryset.filter(search_data__contains=current.input['query'].lower())
if 'wf_type' in current.input:
queryset = queryset.filter(wf_name=current.input['wf_type'])
if 'start_date' in current.input:
queryset = queryset.filter(start_date__gte=datetime.strptime(current.input['start_date'], "%d.%m.%Y"))
if 'finish_date' in current.input:
queryset = queryset.filter(finish_date__lte=datetime.strptime(current.input['finish_date'], "%d.%m.%Y"))
current.output['task_list'] = [
{
'token': inv.instance.key,
'key': inv.key,
'title': inv.title,
'wf_type': inv.wf_name,
'state': inv.progress,
'start_date': format_date(inv.start_date),
'finish_date': format_date(inv.finish_date),
'description': inv.instance.wf.description,
'status': inv.ownership}
for inv in queryset
]
task_inv_list = TaskInvitation.objects.filter(role_id=current.role_id)
current.output['task_count']= {
'active': task_inv_list.filter(progress__in=STATE_DICT['active']).count(),
'future' : task_inv_list.filter(progress=STATE_DICT['future']).count(),
'finished' : task_inv_list.filter(progress=STATE_DICT['finished']).count(),
'expired' : task_inv_list.filter(progress=STATE_DICT['expired']).count()
} | [
"def",
"get_tasks",
"(",
"current",
")",
":",
"# TODO: Also return invitations for user's other roles",
"# TODO: Handle automatic role switching",
"STATE_DICT",
"=",
"{",
"'active'",
":",
"[",
"20",
",",
"30",
"]",
",",
"'future'",
":",
"10",
",",
"'finished'",
":",
"40",
",",
"'expired'",
":",
"90",
"}",
"state",
"=",
"STATE_DICT",
"[",
"current",
".",
"input",
"[",
"'state'",
"]",
"]",
"if",
"isinstance",
"(",
"state",
",",
"list",
")",
":",
"queryset",
"=",
"TaskInvitation",
".",
"objects",
".",
"filter",
"(",
"progress__in",
"=",
"state",
")",
"else",
":",
"queryset",
"=",
"TaskInvitation",
".",
"objects",
".",
"filter",
"(",
"progress",
"=",
"state",
")",
"if",
"'inverted'",
"in",
"current",
".",
"input",
":",
"# show other user's tasks",
"allowed_workflows",
"=",
"[",
"bpmn_wf",
".",
"name",
"for",
"bpmn_wf",
"in",
"BPMNWorkflow",
".",
"objects",
".",
"all",
"(",
")",
"if",
"current",
".",
"has_permission",
"(",
"bpmn_wf",
".",
"name",
")",
"]",
"queryset",
"=",
"queryset",
".",
"exclude",
"(",
"role_id",
"=",
"current",
".",
"role_id",
")",
".",
"filter",
"(",
"wf_name__in",
"=",
"allowed_workflows",
")",
"else",
":",
"# show current user's tasks",
"queryset",
"=",
"queryset",
".",
"filter",
"(",
"role_id",
"=",
"current",
".",
"role_id",
")",
"if",
"'query'",
"in",
"current",
".",
"input",
":",
"queryset",
"=",
"queryset",
".",
"filter",
"(",
"search_data__contains",
"=",
"current",
".",
"input",
"[",
"'query'",
"]",
".",
"lower",
"(",
")",
")",
"if",
"'wf_type'",
"in",
"current",
".",
"input",
":",
"queryset",
"=",
"queryset",
".",
"filter",
"(",
"wf_name",
"=",
"current",
".",
"input",
"[",
"'wf_type'",
"]",
")",
"if",
"'start_date'",
"in",
"current",
".",
"input",
":",
"queryset",
"=",
"queryset",
".",
"filter",
"(",
"start_date__gte",
"=",
"datetime",
".",
"strptime",
"(",
"current",
".",
"input",
"[",
"'start_date'",
"]",
",",
"\"%d.%m.%Y\"",
")",
")",
"if",
"'finish_date'",
"in",
"current",
".",
"input",
":",
"queryset",
"=",
"queryset",
".",
"filter",
"(",
"finish_date__lte",
"=",
"datetime",
".",
"strptime",
"(",
"current",
".",
"input",
"[",
"'finish_date'",
"]",
",",
"\"%d.%m.%Y\"",
")",
")",
"current",
".",
"output",
"[",
"'task_list'",
"]",
"=",
"[",
"{",
"'token'",
":",
"inv",
".",
"instance",
".",
"key",
",",
"'key'",
":",
"inv",
".",
"key",
",",
"'title'",
":",
"inv",
".",
"title",
",",
"'wf_type'",
":",
"inv",
".",
"wf_name",
",",
"'state'",
":",
"inv",
".",
"progress",
",",
"'start_date'",
":",
"format_date",
"(",
"inv",
".",
"start_date",
")",
",",
"'finish_date'",
":",
"format_date",
"(",
"inv",
".",
"finish_date",
")",
",",
"'description'",
":",
"inv",
".",
"instance",
".",
"wf",
".",
"description",
",",
"'status'",
":",
"inv",
".",
"ownership",
"}",
"for",
"inv",
"in",
"queryset",
"]",
"task_inv_list",
"=",
"TaskInvitation",
".",
"objects",
".",
"filter",
"(",
"role_id",
"=",
"current",
".",
"role_id",
")",
"current",
".",
"output",
"[",
"'task_count'",
"]",
"=",
"{",
"'active'",
":",
"task_inv_list",
".",
"filter",
"(",
"progress__in",
"=",
"STATE_DICT",
"[",
"'active'",
"]",
")",
".",
"count",
"(",
")",
",",
"'future'",
":",
"task_inv_list",
".",
"filter",
"(",
"progress",
"=",
"STATE_DICT",
"[",
"'future'",
"]",
")",
".",
"count",
"(",
")",
",",
"'finished'",
":",
"task_inv_list",
".",
"filter",
"(",
"progress",
"=",
"STATE_DICT",
"[",
"'finished'",
"]",
")",
".",
"count",
"(",
")",
",",
"'expired'",
":",
"task_inv_list",
".",
"filter",
"(",
"progress",
"=",
"STATE_DICT",
"[",
"'expired'",
"]",
")",
".",
"count",
"(",
")",
"}"
] | List task invitations of current user
.. code-block:: python
# request:
{
'view': '_zops_get_tasks',
'state': string, # one of these:
# "active", "future", "finished", "expired"
'inverted': boolean, # search on other people's tasks
'query': string, # optional. for searching on user's tasks
'wf_type': string, # optional. only show tasks of selected wf_type
'start_date': datetime, # optional. only show tasks starts after this date
'finish_date': datetime, # optional. only show tasks should end before this date
}
# response:
{
'task_list': [
{'token': key, # wf token (key of WFInstance)
'key': key, # key of TaskInvitation
'title': string, # task title
'wf_type': string, # name of the workflow
'state': int, # state of invitation
# zengine.models.workflow_manager.TASK_STATES
'start_date': string, # start date
'finish_date': string, # end date
},],
'active_task_count': int,
'future_task_count': int,
'finished_task_count': int,
'expired_task_count': int,
} | [
"List",
"task",
"invitations",
"of",
"current",
"user"
] | python | train |
Robpol86/Flask-Celery-Helper | flask_celery.py | https://github.com/Robpol86/Flask-Celery-Helper/blob/92bd3b02954422665260116adda8eb899546c365/flask_celery.py#L87-L90 | def reset_lock(self):
"""Remove the lock regardless of timeout."""
redis_key = self.CELERY_LOCK.format(task_id=self.task_identifier)
self.celery_self.backend.client.delete(redis_key) | [
"def",
"reset_lock",
"(",
"self",
")",
":",
"redis_key",
"=",
"self",
".",
"CELERY_LOCK",
".",
"format",
"(",
"task_id",
"=",
"self",
".",
"task_identifier",
")",
"self",
".",
"celery_self",
".",
"backend",
".",
"client",
".",
"delete",
"(",
"redis_key",
")"
] | Remove the lock regardless of timeout. | [
"Removed",
"the",
"lock",
"regardless",
"of",
"timeout",
"."
] | python | valid |
ray-project/ray | python/ray/worker.py | https://github.com/ray-project/ray/blob/4eade036a0505e244c976f36aaa2d64386b5129b/python/ray/worker.py#L2006-L2045 | def _try_to_compute_deterministic_class_id(cls, depth=5):
"""Attempt to produce a deterministic class ID for a given class.
The goal here is for the class ID to be the same when this is run on
different worker processes. Pickling, loading, and pickling again seems to
produce more consistent results than simply pickling. This is a bit crazy
and could cause problems, in which case we should revert it and figure out
something better.
Args:
cls: The class to produce an ID for.
depth: The number of times to repeatedly try to load and dump the
string while trying to reach a fixed point.
Returns:
A class ID for this class. We attempt to make the class ID the same
when this function is run on different workers, but that is not
guaranteed.
Raises:
Exception: This could raise an exception if cloudpickle raises an
exception.
"""
# Pickling, loading, and pickling again seems to produce more consistent
# results than simply pickling. This is a bit
class_id = pickle.dumps(cls)
for _ in range(depth):
new_class_id = pickle.dumps(pickle.loads(class_id))
if new_class_id == class_id:
# We appear to have reached a fix point, so use this as the ID.
return hashlib.sha1(new_class_id).digest()
class_id = new_class_id
# We have not reached a fixed point, so we may end up with a different
# class ID for this custom class on each worker, which could lead to the
# same class definition being exported many many times.
logger.warning(
"WARNING: Could not produce a deterministic class ID for class "
"{}".format(cls))
return hashlib.sha1(new_class_id).digest() | [
"def",
"_try_to_compute_deterministic_class_id",
"(",
"cls",
",",
"depth",
"=",
"5",
")",
":",
"# Pickling, loading, and pickling again seems to produce more consistent",
"# results than simply pickling. This is a bit",
"class_id",
"=",
"pickle",
".",
"dumps",
"(",
"cls",
")",
"for",
"_",
"in",
"range",
"(",
"depth",
")",
":",
"new_class_id",
"=",
"pickle",
".",
"dumps",
"(",
"pickle",
".",
"loads",
"(",
"class_id",
")",
")",
"if",
"new_class_id",
"==",
"class_id",
":",
"# We appear to have reached a fix point, so use this as the ID.",
"return",
"hashlib",
".",
"sha1",
"(",
"new_class_id",
")",
".",
"digest",
"(",
")",
"class_id",
"=",
"new_class_id",
"# We have not reached a fixed point, so we may end up with a different",
"# class ID for this custom class on each worker, which could lead to the",
"# same class definition being exported many many times.",
"logger",
".",
"warning",
"(",
"\"WARNING: Could not produce a deterministic class ID for class \"",
"\"{}\"",
".",
"format",
"(",
"cls",
")",
")",
"return",
"hashlib",
".",
"sha1",
"(",
"new_class_id",
")",
".",
"digest",
"(",
")"
] | Attempt to produce a deterministic class ID for a given class.
The goal here is for the class ID to be the same when this is run on
different worker processes. Pickling, loading, and pickling again seems to
produce more consistent results than simply pickling. This is a bit crazy
and could cause problems, in which case we should revert it and figure out
something better.
Args:
cls: The class to produce an ID for.
depth: The number of times to repeatedly try to load and dump the
string while trying to reach a fixed point.
Returns:
A class ID for this class. We attempt to make the class ID the same
when this function is run on different workers, but that is not
guaranteed.
Raises:
Exception: This could raise an exception if cloudpickle raises an
exception. | [
"Attempt",
"to",
"produce",
"a",
"deterministic",
"class",
"ID",
"for",
"a",
"given",
"class",
"."
] | python | train |
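The fixed-point trick above can be exercised in isolation; this sketch uses the standard-library pickle rather than Ray's cloudpickle wrapper, which is a simplification.

```python
# Standalone sketch of the pickle fixed-point idea (stdlib pickle, not Ray's cloudpickle).
import hashlib
import pickle

def deterministic_id(obj, depth=5):
    blob = pickle.dumps(obj)
    for _ in range(depth):
        new_blob = pickle.dumps(pickle.loads(blob))
        if new_blob == blob:  # fixed point: dumping again no longer changes the bytes
            return hashlib.sha1(new_blob).digest()
        blob = new_blob
    return hashlib.sha1(blob).digest()  # no fixed point; result may differ across processes

print(deterministic_id({"a": [1, 2, 3]}).hex())
```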
seung-lab/cloud-volume | cloudvolume/txrx.py | https://github.com/seung-lab/cloud-volume/blob/d2fd4500333f1bc3cd3e3919a8b649cec5d8e214/cloudvolume/txrx.py#L273-L314 | def upload_image(vol, img, offset, parallel=1,
manual_shared_memory_id=None, manual_shared_memory_bbox=None, manual_shared_memory_order='F'):
"""Upload img to vol with offset. This is the primary entry point for uploads."""
global NON_ALIGNED_WRITE
if not np.issubdtype(img.dtype, np.dtype(vol.dtype).type):
raise ValueError('The uploaded image data type must match the volume data type. volume: {}, image: {}'.format(vol.dtype, img.dtype))
(is_aligned, bounds, expanded) = check_grid_aligned(vol, img, offset)
if is_aligned:
upload_aligned(vol, img, offset, parallel=parallel,
manual_shared_memory_id=manual_shared_memory_id, manual_shared_memory_bbox=manual_shared_memory_bbox,
manual_shared_memory_order=manual_shared_memory_order)
return
elif vol.non_aligned_writes == False:
msg = NON_ALIGNED_WRITE.format(mip=vol.mip, chunk_size=vol.chunk_size, offset=vol.voxel_offset, got=bounds, check=expanded)
raise AlignmentError(msg)
# Upload the aligned core
retracted = bounds.shrink_to_chunk_size(vol.underlying, vol.voxel_offset)
core_bbox = retracted.clone() - bounds.minpt
if not core_bbox.subvoxel():
core_img = img[ core_bbox.to_slices() ]
upload_aligned(vol, core_img, retracted.minpt, parallel=parallel,
manual_shared_memory_id=manual_shared_memory_id, manual_shared_memory_bbox=manual_shared_memory_bbox,
manual_shared_memory_order=manual_shared_memory_order)
# Download the shell, paint, and upload
all_chunks = set(chunknames(expanded, vol.bounds, vol.key, vol.underlying))
core_chunks = set(chunknames(retracted, vol.bounds, vol.key, vol.underlying))
shell_chunks = all_chunks.difference(core_chunks)
def shade_and_upload(img3d, bbox):
# decode is returning non-writable chunk
# we're throwing them away so safe to write
img3d.setflags(write=1)
shade(img3d, bbox, img, bounds)
single_process_upload(vol, img3d, (( Vec(0,0,0), Vec(*img3d.shape[:3]), bbox.minpt, bbox.maxpt),), n_threads=0)
download_multiple(vol, shell_chunks, fn=shade_and_upload) | [
"def",
"upload_image",
"(",
"vol",
",",
"img",
",",
"offset",
",",
"parallel",
"=",
"1",
",",
"manual_shared_memory_id",
"=",
"None",
",",
"manual_shared_memory_bbox",
"=",
"None",
",",
"manual_shared_memory_order",
"=",
"'F'",
")",
":",
"global",
"NON_ALIGNED_WRITE",
"if",
"not",
"np",
".",
"issubdtype",
"(",
"img",
".",
"dtype",
",",
"np",
".",
"dtype",
"(",
"vol",
".",
"dtype",
")",
".",
"type",
")",
":",
"raise",
"ValueError",
"(",
"'The uploaded image data type must match the volume data type. volume: {}, image: {}'",
".",
"format",
"(",
"vol",
".",
"dtype",
",",
"img",
".",
"dtype",
")",
")",
"(",
"is_aligned",
",",
"bounds",
",",
"expanded",
")",
"=",
"check_grid_aligned",
"(",
"vol",
",",
"img",
",",
"offset",
")",
"if",
"is_aligned",
":",
"upload_aligned",
"(",
"vol",
",",
"img",
",",
"offset",
",",
"parallel",
"=",
"parallel",
",",
"manual_shared_memory_id",
"=",
"manual_shared_memory_id",
",",
"manual_shared_memory_bbox",
"=",
"manual_shared_memory_bbox",
",",
"manual_shared_memory_order",
"=",
"manual_shared_memory_order",
")",
"return",
"elif",
"vol",
".",
"non_aligned_writes",
"==",
"False",
":",
"msg",
"=",
"NON_ALIGNED_WRITE",
".",
"format",
"(",
"mip",
"=",
"vol",
".",
"mip",
",",
"chunk_size",
"=",
"vol",
".",
"chunk_size",
",",
"offset",
"=",
"vol",
".",
"voxel_offset",
",",
"got",
"=",
"bounds",
",",
"check",
"=",
"expanded",
")",
"raise",
"AlignmentError",
"(",
"msg",
")",
"# Upload the aligned core",
"retracted",
"=",
"bounds",
".",
"shrink_to_chunk_size",
"(",
"vol",
".",
"underlying",
",",
"vol",
".",
"voxel_offset",
")",
"core_bbox",
"=",
"retracted",
".",
"clone",
"(",
")",
"-",
"bounds",
".",
"minpt",
"if",
"not",
"core_bbox",
".",
"subvoxel",
"(",
")",
":",
"core_img",
"=",
"img",
"[",
"core_bbox",
".",
"to_slices",
"(",
")",
"]",
"upload_aligned",
"(",
"vol",
",",
"core_img",
",",
"retracted",
".",
"minpt",
",",
"parallel",
"=",
"parallel",
",",
"manual_shared_memory_id",
"=",
"manual_shared_memory_id",
",",
"manual_shared_memory_bbox",
"=",
"manual_shared_memory_bbox",
",",
"manual_shared_memory_order",
"=",
"manual_shared_memory_order",
")",
"# Download the shell, paint, and upload",
"all_chunks",
"=",
"set",
"(",
"chunknames",
"(",
"expanded",
",",
"vol",
".",
"bounds",
",",
"vol",
".",
"key",
",",
"vol",
".",
"underlying",
")",
")",
"core_chunks",
"=",
"set",
"(",
"chunknames",
"(",
"retracted",
",",
"vol",
".",
"bounds",
",",
"vol",
".",
"key",
",",
"vol",
".",
"underlying",
")",
")",
"shell_chunks",
"=",
"all_chunks",
".",
"difference",
"(",
"core_chunks",
")",
"def",
"shade_and_upload",
"(",
"img3d",
",",
"bbox",
")",
":",
"# decode is returning non-writable chunk",
"# we're throwing them away so safe to write",
"img3d",
".",
"setflags",
"(",
"write",
"=",
"1",
")",
"shade",
"(",
"img3d",
",",
"bbox",
",",
"img",
",",
"bounds",
")",
"single_process_upload",
"(",
"vol",
",",
"img3d",
",",
"(",
"(",
"Vec",
"(",
"0",
",",
"0",
",",
"0",
")",
",",
"Vec",
"(",
"*",
"img3d",
".",
"shape",
"[",
":",
"3",
"]",
")",
",",
"bbox",
".",
"minpt",
",",
"bbox",
".",
"maxpt",
")",
",",
")",
",",
"n_threads",
"=",
"0",
")",
"download_multiple",
"(",
"vol",
",",
"shell_chunks",
",",
"fn",
"=",
"shade_and_upload",
")"
] | Upload img to vol with offset. This is the primary entry point for uploads. | [
"Upload",
"img",
"to",
"vol",
"with",
"offset",
".",
"This",
"is",
"the",
"primary",
"entry",
"point",
"for",
"uploads",
"."
] | python | train |
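For context, the user-facing write path that ends up in upload_image is a slicing assignment on a CloudVolume; the layer path and array below are placeholders.

```python
# Illustrative high-level write; the layer path is a placeholder.
import numpy as np
from cloudvolume import CloudVolume

vol = CloudVolume("gs://my-bucket/my-dataset/image", mip=0)
img = np.zeros((64, 64, 64), dtype=vol.dtype)  # dtype must match the volume, as enforced above
vol[0:64, 0:64, 0:64] = img                    # chunk-aligned writes avoid the AlignmentError path
```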
pypa/pipenv | pipenv/vendor/click/formatting.py | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/vendor/click/formatting.py#L211-L223 | def section(self, name):
"""Helpful context manager that writes a paragraph, a heading,
and the indents.
:param name: the section name that is written as heading.
"""
self.write_paragraph()
self.write_heading(name)
self.indent()
try:
yield
finally:
self.dedent() | [
"def",
"section",
"(",
"self",
",",
"name",
")",
":",
"self",
".",
"write_paragraph",
"(",
")",
"self",
".",
"write_heading",
"(",
"name",
")",
"self",
".",
"indent",
"(",
")",
"try",
":",
"yield",
"finally",
":",
"self",
".",
"dedent",
"(",
")"
] | Helpful context manager that writes a paragraph, a heading,
and the indents.
:param name: the section name that is written as heading. | [
"Helpful",
"context",
"manager",
"that",
"writes",
"a",
"paragraph",
"a",
"heading",
"and",
"the",
"indents",
"."
] | python | train |
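A small sketch of the section helper on click's public HelpFormatter; the section name and entries are arbitrary.

```python
# Using HelpFormatter.section to emit an indented, titled block.
import click

formatter = click.HelpFormatter()
with formatter.section("Options"):
    formatter.write_dl([("--verbose", "Enable verbose output."),
                        ("--help", "Show this message and exit.")])
print(formatter.getvalue())
```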
WojciechMula/pyahocorasick | py/pyahocorasick.py | https://github.com/WojciechMula/pyahocorasick/blob/53842f783fbe3fa77d53cde1ac251b23c3cbed02/py/pyahocorasick.py#L113-L129 | def items(self):
"""
Generator returning all keys and values stored in a trie.
"""
L = []
def aux(node, s):
s = s + node.char
if node.output is not nil:
L.append((s, node.output))
for child in node.children.values():
if child is not node:
aux(child, s)
aux(self.root, '')
return iter(L) | [
"def",
"items",
"(",
"self",
")",
":",
"L",
"=",
"[",
"]",
"def",
"aux",
"(",
"node",
",",
"s",
")",
":",
"s",
"=",
"s",
"+",
"node",
".",
"char",
"if",
"node",
".",
"output",
"is",
"not",
"nil",
":",
"L",
".",
"append",
"(",
"(",
"s",
",",
"node",
".",
"output",
")",
")",
"for",
"child",
"in",
"node",
".",
"children",
".",
"values",
"(",
")",
":",
"if",
"child",
"is",
"not",
"node",
":",
"aux",
"(",
"child",
",",
"s",
")",
"aux",
"(",
"self",
".",
"root",
",",
"''",
")",
"return",
"iter",
"(",
"L",
")"
] | Generator returning all keys and values stored in a trie. | [
"Generator",
"returning",
"all",
"keys",
"and",
"values",
"stored",
"in",
"a",
"trie",
"."
] | python | train |
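The traversal above is a plain depth-first walk over the children map; a self-contained toy trie that reproduces the same pattern (not the pyahocorasick API itself):

```python
# Toy trie mirroring the recursive items() walk above; independent of pyahocorasick.
class Node:
    def __init__(self, char=""):
        self.char = char
        self.children = {}
        self.output = None  # None plays the role of the `nil` sentinel

root = Node()
for word, value in [("he", 1), ("her", 2), ("his", 3)]:
    node = root
    for ch in word:
        node = node.children.setdefault(ch, Node(ch))
    node.output = value

def items(node, prefix=""):
    prefix += node.char
    if node.output is not None:
        yield prefix, node.output
    for child in node.children.values():
        yield from items(child, prefix)

print(sorted(items(root)))  # [('he', 1), ('her', 2), ('his', 3)]
```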
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/summary.py | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/summary.py#L136-L165 | def get_diff(left, right):
"""Get the difference of two summaries.
Subtracts the values of the right summary from the values of the left
summary.
If similar rows appear on both sides, they are included in the summary with
0 for number of elements and total size.
If the number of elements of a row of the diff is 0, but the total size is
not, it means that objects likely have changed, but not their number, thus
resulting in a changed size.
"""
res = []
for row_r in right:
found = False
for row_l in left:
if row_r[0] == row_l[0]:
res.append([row_r[0], row_r[1] - row_l[1], row_r[2] - row_l[2]])
found = True
if not found:
res.append(row_r)
for row_l in left:
found = False
for row_r in right:
if row_l[0] == row_r[0]:
found = True
if not found:
res.append([row_l[0], -row_l[1], -row_l[2]])
return res | [
"def",
"get_diff",
"(",
"left",
",",
"right",
")",
":",
"res",
"=",
"[",
"]",
"for",
"row_r",
"in",
"right",
":",
"found",
"=",
"False",
"for",
"row_l",
"in",
"left",
":",
"if",
"row_r",
"[",
"0",
"]",
"==",
"row_l",
"[",
"0",
"]",
":",
"res",
".",
"append",
"(",
"[",
"row_r",
"[",
"0",
"]",
",",
"row_r",
"[",
"1",
"]",
"-",
"row_l",
"[",
"1",
"]",
",",
"row_r",
"[",
"2",
"]",
"-",
"row_l",
"[",
"2",
"]",
"]",
")",
"found",
"=",
"True",
"if",
"not",
"found",
":",
"res",
".",
"append",
"(",
"row_r",
")",
"for",
"row_l",
"in",
"left",
":",
"found",
"=",
"False",
"for",
"row_r",
"in",
"right",
":",
"if",
"row_l",
"[",
"0",
"]",
"==",
"row_r",
"[",
"0",
"]",
":",
"found",
"=",
"True",
"if",
"not",
"found",
":",
"res",
".",
"append",
"(",
"[",
"row_l",
"[",
"0",
"]",
",",
"-",
"row_l",
"[",
"1",
"]",
",",
"-",
"row_l",
"[",
"2",
"]",
"]",
")",
"return",
"res"
] | Get the difference of two summaries.
Subtracts the values of the right summary from the values of the left
summary.
If similar rows appear on both sides, they are included in the summary with
0 for number of elements and total size.
If the number of elements of a row of the diff is 0, but the total size is
not, it means that objects likely have changed, but not their number, thus
resulting in a changed size. | [
"Get",
"the",
"difference",
"of",
"two",
"summaries",
"."
] | python | train |
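Typical use of get_diff goes through pympler's summarize and print_ helpers; the workload in the middle is just a placeholder allocation.

```python
# Diffing two heap summaries with pympler's public helpers.
from pympler import muppy, summary

before = summary.summarize(muppy.get_objects())

data = [str(i) for i in range(10000)]  # placeholder workload that allocates objects

after = summary.summarize(muppy.get_objects())
diff = summary.get_diff(before, after)
summary.print_(diff)  # rows with zero or negative counts reflect unchanged or freed objects
```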
nickpandolfi/Cyther | cyther/processing.py | https://github.com/nickpandolfi/Cyther/blob/9fb0bd77af594008aa6ee8af460aa8c953abf5bc/cyther/processing.py#L169-L186 | def core(args):
"""
The heart of Cyther, this function controls the main loop, and can be
used to perform any Cyther action. You can call it when using Cyther
from the module level
"""
args = furtherArgsProcessing(args)
numfiles = len(args['filenames'])
interval = INTERVAL / numfiles
files = processFiles(args)
while True:
for file in files:
cytherize(args, file)
if not args['watch']:
break
else:
time.sleep(interval) | [
"def",
"core",
"(",
"args",
")",
":",
"args",
"=",
"furtherArgsProcessing",
"(",
"args",
")",
"numfiles",
"=",
"len",
"(",
"args",
"[",
"'filenames'",
"]",
")",
"interval",
"=",
"INTERVAL",
"/",
"numfiles",
"files",
"=",
"processFiles",
"(",
"args",
")",
"while",
"True",
":",
"for",
"file",
"in",
"files",
":",
"cytherize",
"(",
"args",
",",
"file",
")",
"if",
"not",
"args",
"[",
"'watch'",
"]",
":",
"break",
"else",
":",
"time",
".",
"sleep",
"(",
"interval",
")"
] | The heart of Cyther, this function controls the main loop, and can be
used to perform any Cyther action. You can call it when using Cyther
from the module level | [
"The",
"heart",
"of",
"Cyther",
"this",
"function",
"controls",
"the",
"main",
"loop",
"and",
"can",
"be",
"used",
"to",
"perform",
"any",
"Cyther",
"action",
".",
"You",
"can",
"call",
"if",
"using",
"Cyther",
"from",
"the",
"module",
"level"
] | python | train |
numenta/nupic | src/nupic/algorithms/knn_classifier.py | https://github.com/numenta/nupic/blob/5922fafffdccc8812e72b3324965ad2f7d4bbdad/src/nupic/algorithms/knn_classifier.py#L757-L777 | def getClosest(self, inputPattern, topKCategories=3):
"""Returns the index of the pattern that is closest to inputPattern,
the distances of all patterns to inputPattern, and the indices of the k
closest categories.
"""
inferenceResult = numpy.zeros(max(self._categoryList)+1)
dist = self._getDistances(inputPattern)
sorted = dist.argsort()
validVectorCount = len(self._categoryList) - self._categoryList.count(-1)
for j in sorted[:min(self.k, validVectorCount)]:
inferenceResult[self._categoryList[j]] += 1.0
winner = inferenceResult.argmax()
topNCats = []
for i in range(topKCategories):
topNCats.append((self._categoryList[sorted[i]], dist[sorted[i]] ))
return winner, dist, topNCats | [
"def",
"getClosest",
"(",
"self",
",",
"inputPattern",
",",
"topKCategories",
"=",
"3",
")",
":",
"inferenceResult",
"=",
"numpy",
".",
"zeros",
"(",
"max",
"(",
"self",
".",
"_categoryList",
")",
"+",
"1",
")",
"dist",
"=",
"self",
".",
"_getDistances",
"(",
"inputPattern",
")",
"sorted",
"=",
"dist",
".",
"argsort",
"(",
")",
"validVectorCount",
"=",
"len",
"(",
"self",
".",
"_categoryList",
")",
"-",
"self",
".",
"_categoryList",
".",
"count",
"(",
"-",
"1",
")",
"for",
"j",
"in",
"sorted",
"[",
":",
"min",
"(",
"self",
".",
"k",
",",
"validVectorCount",
")",
"]",
":",
"inferenceResult",
"[",
"self",
".",
"_categoryList",
"[",
"j",
"]",
"]",
"+=",
"1.0",
"winner",
"=",
"inferenceResult",
".",
"argmax",
"(",
")",
"topNCats",
"=",
"[",
"]",
"for",
"i",
"in",
"range",
"(",
"topKCategories",
")",
":",
"topNCats",
".",
"append",
"(",
"(",
"self",
".",
"_categoryList",
"[",
"sorted",
"[",
"i",
"]",
"]",
",",
"dist",
"[",
"sorted",
"[",
"i",
"]",
"]",
")",
")",
"return",
"winner",
",",
"dist",
",",
"topNCats"
] | Returns the index of the pattern that is closest to inputPattern,
the distances of all patterns to inputPattern, and the indices of the k
closest categories. | [
"Returns",
"the",
"index",
"of",
"the",
"pattern",
"that",
"is",
"closest",
"to",
"inputPattern",
"the",
"distances",
"of",
"all",
"patterns",
"to",
"inputPattern",
"and",
"the",
"indices",
"of",
"the",
"k",
"closest",
"categories",
"."
] | python | valid |
DataDog/integrations-core | tokumx/datadog_checks/tokumx/vendor/pymongo/topology.py | https://github.com/DataDog/integrations-core/blob/ebd41c873cf9f97a8c51bf9459bc6a7536af8acd/tokumx/datadog_checks/tokumx/vendor/pymongo/topology.py#L283-L291 | def get_primary(self):
"""Return primary's address or None."""
# Implemented here in Topology instead of MongoClient, so it can lock.
with self._lock:
topology_type = self._description.topology_type
if topology_type != TOPOLOGY_TYPE.ReplicaSetWithPrimary:
return None
return writable_server_selector(self._new_selection())[0].address | [
"def",
"get_primary",
"(",
"self",
")",
":",
"# Implemented here in Topology instead of MongoClient, so it can lock.",
"with",
"self",
".",
"_lock",
":",
"topology_type",
"=",
"self",
".",
"_description",
".",
"topology_type",
"if",
"topology_type",
"!=",
"TOPOLOGY_TYPE",
".",
"ReplicaSetWithPrimary",
":",
"return",
"None",
"return",
"writable_server_selector",
"(",
"self",
".",
"_new_selection",
"(",
")",
")",
"[",
"0",
"]",
".",
"address"
] | Return primary's address or None. | [
"Return",
"primary",
"s",
"address",
"or",
"None",
"."
] | python | train |
MIT-LCP/wfdb-python | wfdb/io/annotation.py | https://github.com/MIT-LCP/wfdb-python/blob/cc8c9e9e44f10af961b7a9d8ae03708b31ac8a8c/wfdb/io/annotation.py#L1529-L1565 | def interpret_defintion_annotations(potential_definition_inds, aux_note):
"""
Try to extract annotation definition information from annotation notes.
Information that may be contained:
- fs - sample=0, label_state=22, aux_note='## time resolution: XXX'
- custom annotation label definitions
"""
fs = None
custom_labels = []
if len(potential_definition_inds) > 0:
i = 0
while i<len(potential_definition_inds):
if aux_note[i].startswith('## '):
if not fs:
search_fs = rx_fs.findall(aux_note[i])
if search_fs:
fs = float(search_fs[0])
if round(fs, 8) == float(int(fs)):
fs = int(fs)
i += 1
continue
if aux_note[i] == '## annotation type definitions':
i += 1
while aux_note[i] != '## end of definitions':
label_store, symbol, description = rx_custom_label.findall(aux_note[i])[0]
custom_labels.append((int(label_store), symbol, description))
i += 1
i += 1
else:
i += 1
if not custom_labels:
custom_labels = None
return fs, custom_labels | [
"def",
"interpret_defintion_annotations",
"(",
"potential_definition_inds",
",",
"aux_note",
")",
":",
"fs",
"=",
"None",
"custom_labels",
"=",
"[",
"]",
"if",
"len",
"(",
"potential_definition_inds",
")",
">",
"0",
":",
"i",
"=",
"0",
"while",
"i",
"<",
"len",
"(",
"potential_definition_inds",
")",
":",
"if",
"aux_note",
"[",
"i",
"]",
".",
"startswith",
"(",
"'## '",
")",
":",
"if",
"not",
"fs",
":",
"search_fs",
"=",
"rx_fs",
".",
"findall",
"(",
"aux_note",
"[",
"i",
"]",
")",
"if",
"search_fs",
":",
"fs",
"=",
"float",
"(",
"search_fs",
"[",
"0",
"]",
")",
"if",
"round",
"(",
"fs",
",",
"8",
")",
"==",
"float",
"(",
"int",
"(",
"fs",
")",
")",
":",
"fs",
"=",
"int",
"(",
"fs",
")",
"i",
"+=",
"1",
"continue",
"if",
"aux_note",
"[",
"i",
"]",
"==",
"'## annotation type definitions'",
":",
"i",
"+=",
"1",
"while",
"aux_note",
"[",
"i",
"]",
"!=",
"'## end of definitions'",
":",
"label_store",
",",
"symbol",
",",
"description",
"=",
"rx_custom_label",
".",
"findall",
"(",
"aux_note",
"[",
"i",
"]",
")",
"[",
"0",
"]",
"custom_labels",
".",
"append",
"(",
"(",
"int",
"(",
"label_store",
")",
",",
"symbol",
",",
"description",
")",
")",
"i",
"+=",
"1",
"i",
"+=",
"1",
"else",
":",
"i",
"+=",
"1",
"if",
"not",
"custom_labels",
":",
"custom_labels",
"=",
"None",
"return",
"fs",
",",
"custom_labels"
] | Try to extract annotation definition information from annotation notes.
Information that may be contained:
- fs - sample=0, label_state=22, aux_note='## time resolution: XXX'
- custom annotation label definitions | [
"Try",
"to",
"extract",
"annotation",
"definition",
"information",
"from",
"annotation",
"notes",
".",
"Information",
"that",
"may",
"be",
"contained",
":",
"-",
"fs",
"-",
"sample",
"=",
"0",
"label_state",
"=",
"22",
"aux_note",
"=",
"##",
"time",
"resolution",
":",
"XXX",
"-",
"custom",
"annotation",
"label",
"definitions"
] | python | train |
tensorflow/probability | tensorflow_probability/python/edward2/generated_random_variables.py | https://github.com/tensorflow/probability/blob/e87fe34111d68c35db0f9eeb4935f1ece9e1a8f5/tensorflow_probability/python/edward2/generated_random_variables.py#L43-L79 | def _simple_name(distribution):
"""Infer the original name passed into a distribution constructor.
Distributions typically follow the pattern of
with.name_scope(name) as name:
super(name=name)
so we attempt to reverse the name-scope transformation to allow
addressing of RVs by the distribution's original, user-visible
name kwarg.
Args:
distribution: a tfd.Distribution instance.
Returns:
simple_name: the original name passed into the Distribution.
#### Example
```
d1 = tfd.Normal(0., 1., name='x') # d1.name = 'x/'
d2 = tfd.Normal(0., 1., name='x') # d2.name = 'x_2/'
_simple_name(d2) # returns 'x'
```
"""
simple_name = distribution.name
# turn 'scope/x/' into 'x'
if simple_name.endswith('/'):
simple_name = simple_name.split('/')[-2]
# turn 'x_3' into 'x'
parts = simple_name.split('_')
if parts[-1].isdigit():
simple_name = '_'.join(parts[:-1])
return simple_name | [
"def",
"_simple_name",
"(",
"distribution",
")",
":",
"simple_name",
"=",
"distribution",
".",
"name",
"# turn 'scope/x/' into 'x'",
"if",
"simple_name",
".",
"endswith",
"(",
"'/'",
")",
":",
"simple_name",
"=",
"simple_name",
".",
"split",
"(",
"'/'",
")",
"[",
"-",
"2",
"]",
"# turn 'x_3' into 'x'",
"parts",
"=",
"simple_name",
".",
"split",
"(",
"'_'",
")",
"if",
"parts",
"[",
"-",
"1",
"]",
".",
"isdigit",
"(",
")",
":",
"simple_name",
"=",
"'_'",
".",
"join",
"(",
"parts",
"[",
":",
"-",
"1",
"]",
")",
"return",
"simple_name"
] | Infer the original name passed into a distribution constructor.
Distributions typically follow the pattern of
with.name_scope(name) as name:
super(name=name)
so we attempt to reverse the name-scope transformation to allow
addressing of RVs by the distribution's original, user-visible
name kwarg.
Args:
distribution: a tfd.Distribution instance.
Returns:
simple_name: the original name passed into the Distribution.
#### Example
```
d1 = tfd.Normal(0., 1., name='x') # d1.name = 'x/'
d2 = tfd.Normal(0., 1., name='x') # d2.name = 'x_2/'
_simple_name(d2) # returns 'x'
``` | [
"Infer",
"the",
"original",
"name",
"passed",
"into",
"a",
"distribution",
"constructor",
"."
] | python | test |
pkgw/pwkit | pwkit/cli/wrapout.py | https://github.com/pkgw/pwkit/blob/d40957a1c3d2ea34e7ceac2267ee9635135f2793/pwkit/cli/wrapout.py#L93-L104 | def output(self, kind, line):
"*line* should be bytes"
self.destination.write(b''.join([
self._cyan,
b't=%07d' % (time.time() - self._t0),
self._reset,
self._kind_prefixes[kind],
self.markers[kind],
line,
self._reset,
]))
self.destination.flush() | [
"def",
"output",
"(",
"self",
",",
"kind",
",",
"line",
")",
":",
"self",
".",
"destination",
".",
"write",
"(",
"b''",
".",
"join",
"(",
"[",
"self",
".",
"_cyan",
",",
"b't=%07d'",
"%",
"(",
"time",
".",
"time",
"(",
")",
"-",
"self",
".",
"_t0",
")",
",",
"self",
".",
"_reset",
",",
"self",
".",
"_kind_prefixes",
"[",
"kind",
"]",
",",
"self",
".",
"markers",
"[",
"kind",
"]",
",",
"line",
",",
"self",
".",
"_reset",
",",
"]",
")",
")",
"self",
".",
"destination",
".",
"flush",
"(",
")"
] | *line* should be bytes | [
"*",
"line",
"*",
"should",
"be",
"bytes"
] | python | train |
saltstack/salt | salt/modules/libcloud_compute.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/libcloud_compute.py#L780-L789 | def _get_by_id(collection, id):
'''
Get item from a list by the id field
'''
matches = [item for item in collection if item.id == id]
if not matches:
raise ValueError('Could not find a matching item')
elif len(matches) > 1:
raise ValueError('The id matched {0} items, not 1'.format(len(matches)))
return matches[0] | [
"def",
"_get_by_id",
"(",
"collection",
",",
"id",
")",
":",
"matches",
"=",
"[",
"item",
"for",
"item",
"in",
"collection",
"if",
"item",
".",
"id",
"==",
"id",
"]",
"if",
"not",
"matches",
":",
"raise",
"ValueError",
"(",
"'Could not find a matching item'",
")",
"elif",
"len",
"(",
"matches",
")",
">",
"1",
":",
"raise",
"ValueError",
"(",
"'The id matched {0} items, not 1'",
".",
"format",
"(",
"len",
"(",
"matches",
")",
")",
")",
"return",
"matches",
"[",
"0",
"]"
] | Get item from a list by the id field | [
"Get",
"item",
"from",
"a",
"list",
"by",
"the",
"id",
"field"
] | python | train |
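The helper above is a linear scan on the id attribute; a tiny self-contained illustration with stand-in objects (the Node records are made up for the example):

```python
# Stand-in records to show how a _get_by_id-style lookup behaves.
from collections import namedtuple

Node = namedtuple("Node", ["id", "name"])
nodes = [Node("a1", "web-1"), Node("b2", "web-2")]

def get_by_id(collection, id):
    matches = [item for item in collection if item.id == id]
    if not matches:
        raise ValueError("Could not find a matching item")
    elif len(matches) > 1:
        raise ValueError("The id matched {0} items, not 1".format(len(matches)))
    return matches[0]

print(get_by_id(nodes, "b2").name)  # web-2
```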
django-extensions/django-extensions | django_extensions/management/commands/dumpscript.py | https://github.com/django-extensions/django-extensions/blob/7e0bef97ea6cb7f9eea5e2528e3a985a83a7b9b8/django_extensions/management/commands/dumpscript.py#L178-L183 | def get_import_lines(self):
""" Take the stored imports and convert them to lines """
if self.imports:
return ["from %s import %s" % (value, key) for key, value in self.imports.items()]
else:
return [] | [
"def",
"get_import_lines",
"(",
"self",
")",
":",
"if",
"self",
".",
"imports",
":",
"return",
"[",
"\"from %s import %s\"",
"%",
"(",
"value",
",",
"key",
")",
"for",
"key",
",",
"value",
"in",
"self",
".",
"imports",
".",
"items",
"(",
")",
"]",
"else",
":",
"return",
"[",
"]"
] | Take the stored imports and convert them to lines | [
"Take",
"the",
"stored",
"imports",
"and",
"converts",
"them",
"to",
"lines"
] | python | train |
bitesofcode/projexui | projexui/widgets/xpopupwidget.py | https://github.com/bitesofcode/projexui/blob/f18a73bec84df90b034ca69b9deea118dbedfc4d/projexui/widgets/xpopupwidget.py#L757-L869 | def popup(self, pos=None):
"""
Pops up this widget at the given position. The given point should \
be in global space.
:param pos | <QPoint>
:return <bool> success
"""
if self._first and self.centralWidget() is not None:
self.adjustSize()
self._first = False
if not self.signalsBlocked():
self.aboutToShow.emit()
if not pos:
pos = QCursor.pos()
if self.currentMode() == XPopupWidget.Mode.Dialog and \
self.isVisible():
return False
elif self.currentMode() == XPopupWidget.Mode.Dialog:
self.setPopupMode()
# auto-calculate the point
if self.autoCalculateAnchor():
self.setAnchor(self.mapAnchorFrom(self.parent(), pos))
pad = self.popupPadding()
# determine where to move based on the anchor
anchor = self.anchor()
# MODIFY X POSITION
# align x-left
if ( anchor & (XPopupWidget.Anchor.TopLeft |
XPopupWidget.Anchor.BottomLeft) ):
pos.setX(pos.x() - pad)
# align x-center
elif ( anchor & (XPopupWidget.Anchor.TopCenter |
XPopupWidget.Anchor.BottomCenter) ):
pos.setX(pos.x() - self.width() / 2)
# align x-right
elif ( anchor & (XPopupWidget.Anchor.TopRight |
XPopupWidget.Anchor.BottomRight) ):
pos.setX(pos.x() - self.width() + pad)
# align x-padded
elif ( anchor & (XPopupWidget.Anchor.RightTop |
XPopupWidget.Anchor.RightCenter |
XPopupWidget.Anchor.RightBottom) ):
pos.setX(pos.x() - self.width())
# MODIFY Y POSITION
# align y-top
if ( anchor & (XPopupWidget.Anchor.LeftTop |
XPopupWidget.Anchor.RightTop) ):
pos.setY(pos.y() - pad)
# align y-center
elif ( anchor & (XPopupWidget.Anchor.LeftCenter |
XPopupWidget.Anchor.RightCenter) ):
pos.setY(pos.y() - self.height() / 2)
# align y-bottom
elif ( anchor & (XPopupWidget.Anchor.LeftBottom |
XPopupWidget.Anchor.RightBottom) ):
pos.setY(pos.y() - self.height() + pad)
# align y-padded
elif ( anchor & (XPopupWidget.Anchor.BottomLeft |
XPopupWidget.Anchor.BottomCenter |
XPopupWidget.Anchor.BottomRight) ):
pos.setY(pos.y() - self.height())
self.adjustMask()
self.move(pos)
self.update()
self.setUpdatesEnabled(True)
if self.isAnimated():
anim = QPropertyAnimation(self, 'windowOpacity')
anim.setParent(self)
anim.setStartValue(0.0)
anim.setEndValue(self.windowOpacity())
anim.setDuration(500)
anim.finished.connect(anim.deleteLater)
self.setWindowOpacity(0.0)
else:
anim = None
self.show()
if self.currentMode() != XPopupWidget.Mode.ToolTip:
self.activateWindow()
widget = self.centralWidget()
if widget:
self.centralWidget().setFocus()
if anim:
anim.start()
if not self.signalsBlocked():
self.shown.emit()
return True | [
"def",
"popup",
"(",
"self",
",",
"pos",
"=",
"None",
")",
":",
"if",
"self",
".",
"_first",
"and",
"self",
".",
"centralWidget",
"(",
")",
"is",
"not",
"None",
":",
"self",
".",
"adjustSize",
"(",
")",
"self",
".",
"_first",
"=",
"False",
"if",
"not",
"self",
".",
"signalsBlocked",
"(",
")",
":",
"self",
".",
"aboutToShow",
".",
"emit",
"(",
")",
"if",
"not",
"pos",
":",
"pos",
"=",
"QCursor",
".",
"pos",
"(",
")",
"if",
"self",
".",
"currentMode",
"(",
")",
"==",
"XPopupWidget",
".",
"Mode",
".",
"Dialog",
"and",
"self",
".",
"isVisible",
"(",
")",
":",
"return",
"False",
"elif",
"self",
".",
"currentMode",
"(",
")",
"==",
"XPopupWidget",
".",
"Mode",
".",
"Dialog",
":",
"self",
".",
"setPopupMode",
"(",
")",
"# auto-calculate the point\r",
"if",
"self",
".",
"autoCalculateAnchor",
"(",
")",
":",
"self",
".",
"setAnchor",
"(",
"self",
".",
"mapAnchorFrom",
"(",
"self",
".",
"parent",
"(",
")",
",",
"pos",
")",
")",
"pad",
"=",
"self",
".",
"popupPadding",
"(",
")",
"# determine where to move based on the anchor\r",
"anchor",
"=",
"self",
".",
"anchor",
"(",
")",
"# MODIFY X POSITION\r",
"# align x-left\r",
"if",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"TopLeft",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"BottomLeft",
")",
")",
":",
"pos",
".",
"setX",
"(",
"pos",
".",
"x",
"(",
")",
"-",
"pad",
")",
"# align x-center\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"TopCenter",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"BottomCenter",
")",
")",
":",
"pos",
".",
"setX",
"(",
"pos",
".",
"x",
"(",
")",
"-",
"self",
".",
"width",
"(",
")",
"/",
"2",
")",
"# align x-right\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"TopRight",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"BottomRight",
")",
")",
":",
"pos",
".",
"setX",
"(",
"pos",
".",
"x",
"(",
")",
"-",
"self",
".",
"width",
"(",
")",
"+",
"pad",
")",
"# align x-padded\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"RightTop",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"RightCenter",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"RightBottom",
")",
")",
":",
"pos",
".",
"setX",
"(",
"pos",
".",
"x",
"(",
")",
"-",
"self",
".",
"width",
"(",
")",
")",
"# MODIFY Y POSITION\r",
"# align y-top\r",
"if",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"LeftTop",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"RightTop",
")",
")",
":",
"pos",
".",
"setY",
"(",
"pos",
".",
"y",
"(",
")",
"-",
"pad",
")",
"# align y-center\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"LeftCenter",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"RightCenter",
")",
")",
":",
"pos",
".",
"setY",
"(",
"pos",
".",
"y",
"(",
")",
"-",
"self",
".",
"height",
"(",
")",
"/",
"2",
")",
"# align y-bottom\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"LeftBottom",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"RightBottom",
")",
")",
":",
"pos",
".",
"setY",
"(",
"pos",
".",
"y",
"(",
")",
"-",
"self",
".",
"height",
"(",
")",
"+",
"pad",
")",
"# align y-padded\r",
"elif",
"(",
"anchor",
"&",
"(",
"XPopupWidget",
".",
"Anchor",
".",
"BottomLeft",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"BottomCenter",
"|",
"XPopupWidget",
".",
"Anchor",
".",
"BottomRight",
")",
")",
":",
"pos",
".",
"setY",
"(",
"pos",
".",
"y",
"(",
")",
"-",
"self",
".",
"height",
"(",
")",
")",
"self",
".",
"adjustMask",
"(",
")",
"self",
".",
"move",
"(",
"pos",
")",
"self",
".",
"update",
"(",
")",
"self",
".",
"setUpdatesEnabled",
"(",
"True",
")",
"if",
"self",
".",
"isAnimated",
"(",
")",
":",
"anim",
"=",
"QPropertyAnimation",
"(",
"self",
",",
"'windowOpacity'",
")",
"anim",
".",
"setParent",
"(",
"self",
")",
"anim",
".",
"setStartValue",
"(",
"0.0",
")",
"anim",
".",
"setEndValue",
"(",
"self",
".",
"windowOpacity",
"(",
")",
")",
"anim",
".",
"setDuration",
"(",
"500",
")",
"anim",
".",
"finished",
".",
"connect",
"(",
"anim",
".",
"deleteLater",
")",
"self",
".",
"setWindowOpacity",
"(",
"0.0",
")",
"else",
":",
"anim",
"=",
"None",
"self",
".",
"show",
"(",
")",
"if",
"self",
".",
"currentMode",
"(",
")",
"!=",
"XPopupWidget",
".",
"Mode",
".",
"ToolTip",
":",
"self",
".",
"activateWindow",
"(",
")",
"widget",
"=",
"self",
".",
"centralWidget",
"(",
")",
"if",
"widget",
":",
"self",
".",
"centralWidget",
"(",
")",
".",
"setFocus",
"(",
")",
"if",
"anim",
":",
"anim",
".",
"start",
"(",
")",
"if",
"not",
"self",
".",
"signalsBlocked",
"(",
")",
":",
"self",
".",
"shown",
".",
"emit",
"(",
")",
"return",
"True"
] | Pops up this widget at the given position. The given point should \
be in global space.
:param pos | <QPoint>
:return <bool> success | [
"Pops",
"up",
"this",
"widget",
"at",
"the",
"inputed",
"position",
".",
"The",
"inputed",
"point",
"should",
"\\",
"be",
"in",
"global",
"space",
".",
":",
"param",
"pos",
"|",
"<QPoint",
">",
":",
"return",
"<bool",
">",
"success"
] | python | train |
pypa/pipenv | pipenv/patched/notpip/_internal/download.py | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/patched/notpip/_internal/download.py#L835-L881 | def unpack_url(
link, # type: Optional[Link]
location, # type: Optional[str]
download_dir=None, # type: Optional[str]
only_download=False, # type: bool
session=None, # type: Optional[PipSession]
hashes=None, # type: Optional[Hashes]
progress_bar="on" # type: str
):
# type: (...) -> None
"""Unpack link.
If link is a VCS link:
if only_download, export into download_dir and ignore location
else unpack into location
for other types of link:
- unpack into location
- if download_dir, copy the file into download_dir
- if only_download, mark location for deletion
:param hashes: A Hashes object, one of whose embedded hashes must match,
or HashMismatch will be raised. If the Hashes is empty, no matches are
required, and unhashable types of requirements (like VCS ones, which
would ordinarily raise HashUnsupported) are allowed.
"""
# non-editable vcs urls
if is_vcs_url(link):
unpack_vcs_link(link, location)
# file urls
elif is_file_url(link):
unpack_file_url(link, location, download_dir, hashes=hashes)
# http urls
else:
if session is None:
session = PipSession()
unpack_http_url(
link,
location,
download_dir,
session,
hashes=hashes,
progress_bar=progress_bar
)
if only_download:
write_delete_marker_file(location) | [
"def",
"unpack_url",
"(",
"link",
",",
"# type: Optional[Link]",
"location",
",",
"# type: Optional[str]",
"download_dir",
"=",
"None",
",",
"# type: Optional[str]",
"only_download",
"=",
"False",
",",
"# type: bool",
"session",
"=",
"None",
",",
"# type: Optional[PipSession]",
"hashes",
"=",
"None",
",",
"# type: Optional[Hashes]",
"progress_bar",
"=",
"\"on\"",
"# type: str",
")",
":",
"# type: (...) -> None",
"# non-editable vcs urls",
"if",
"is_vcs_url",
"(",
"link",
")",
":",
"unpack_vcs_link",
"(",
"link",
",",
"location",
")",
"# file urls",
"elif",
"is_file_url",
"(",
"link",
")",
":",
"unpack_file_url",
"(",
"link",
",",
"location",
",",
"download_dir",
",",
"hashes",
"=",
"hashes",
")",
"# http urls",
"else",
":",
"if",
"session",
"is",
"None",
":",
"session",
"=",
"PipSession",
"(",
")",
"unpack_http_url",
"(",
"link",
",",
"location",
",",
"download_dir",
",",
"session",
",",
"hashes",
"=",
"hashes",
",",
"progress_bar",
"=",
"progress_bar",
")",
"if",
"only_download",
":",
"write_delete_marker_file",
"(",
"location",
")"
] | Unpack link.
If link is a VCS link:
if only_download, export into download_dir and ignore location
else unpack into location
for other types of link:
- unpack into location
- if download_dir, copy the file into download_dir
- if only_download, mark location for deletion
:param hashes: A Hashes object, one of whose embedded hashes must match,
or HashMismatch will be raised. If the Hashes is empty, no matches are
required, and unhashable types of requirements (like VCS ones, which
would ordinarily raise HashUnsupported) are allowed. | [
"Unpack",
"link",
".",
"If",
"link",
"is",
"a",
"VCS",
"link",
":",
"if",
"only_download",
"export",
"into",
"download_dir",
"and",
"ignore",
"location",
"else",
"unpack",
"into",
"location",
"for",
"other",
"types",
"of",
"link",
":",
"-",
"unpack",
"into",
"location",
"-",
"if",
"download_dir",
"copy",
"the",
"file",
"into",
"download_dir",
"-",
"if",
"only_download",
"mark",
"location",
"for",
"deletion"
] | python | train |
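A hedged sketch of driving the unpack_url() helper above; the Link object and the directory paths are hypothetical:

session = PipSession()                 # same patched-notpip session type named in the signature
unpack_url(
    link=sdist_link,                   # assumed Link instance (file, http or VCS url)
    location='/tmp/build/pkg',         # hypothetical unpack target
    download_dir='/tmp/downloads',     # also keep a copy of the downloaded archive here
    only_download=False,
    session=session,
    hashes=None,                       # no hash enforcement in this sketch
    progress_bar='on',
)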
sdispater/cachy | cachy/stores/redis_store.py | https://github.com/sdispater/cachy/blob/ee4b044d6aafa80125730a00b1f679a7bd852b8a/cachy/stores/redis_store.py#L102-L111 | def forget(self, key):
"""
Remove an item from the cache.
:param key: The cache key
:type key: str
:rtype: bool
"""
return bool(self._redis.delete(self._prefix + key)) | [
"def",
"forget",
"(",
"self",
",",
"key",
")",
":",
"return",
"bool",
"(",
"self",
".",
"_redis",
".",
"delete",
"(",
"self",
".",
"_prefix",
"+",
"key",
")",
")"
] | Remove an item from the cache.
:param key: The cache key
:type key: str
:rtype: bool | [
"Remove",
"an",
"item",
"from",
"the",
"cache",
"."
] | python | train |
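A minimal usage sketch, assuming a configured RedisStore instance named `store`:

removed = store.forget('users:1')   # deletes '<prefix>users:1' from Redis; True if a key was removed
print(removed)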
bokeh/bokeh | bokeh/plotting/figure.py | https://github.com/bokeh/bokeh/blob/dc8cf49e4e4302fd38537ad089ece81fbcca4737/bokeh/plotting/figure.py#L914-L954 | def harea_stack(self, stackers, **kw):
''' Generate multiple ``HArea`` renderers for levels stacked left
to right.
Args:
stackers (seq[str]) : a list of data source field names to stack
successively for ``x1`` and ``x2`` harea coordinates.
Additionally, the ``name`` of the renderer will be set to
the value of each successive stacker (this is useful with the
special hover variable ``$name``)
Any additional keyword arguments are passed to each call to ``harea``.
If a keyword value is a list or tuple, then each call will get one
value from the sequence.
Returns:
list[GlyphRenderer]
Examples:
Assuming a ``ColumnDataSource`` named ``source`` with columns
*2016* and *2017*, then the following call to ``harea_stack`` will
create two ``HArea`` renderers that stack:
.. code-block:: python
p.harea_stack(['2016', '2017'], y='y', color=['blue', 'red'], source=source)
This is equivalent to the following two separate calls:
.. code-block:: python
p.harea(x1=stack(), x2=stack('2016'), y='y', color='blue', source=source, name='2016')
p.harea(x1=stack('2016'), x2=stack('2016', '2017'), y='y', color='red', source=source, name='2017')
'''
result = []
for kw in _double_stack(stackers, "x1", "x2", **kw):
result.append(self.harea(**kw))
return result | [
"def",
"harea_stack",
"(",
"self",
",",
"stackers",
",",
"*",
"*",
"kw",
")",
":",
"result",
"=",
"[",
"]",
"for",
"kw",
"in",
"_double_stack",
"(",
"stackers",
",",
"\"x1\"",
",",
"\"x2\"",
",",
"*",
"*",
"kw",
")",
":",
"result",
".",
"append",
"(",
"self",
".",
"harea",
"(",
"*",
"*",
"kw",
")",
")",
"return",
"result"
] | Generate multiple ``HArea`` renderers for levels stacked left
to right.
Args:
stackers (seq[str]) : a list of data source field names to stack
successively for ``x1`` and ``x2`` harea coordinates.
Additionally, the ``name`` of the renderer will be set to
the value of each successive stacker (this is useful with the
special hover variable ``$name``)
Any additional keyword arguments are passed to each call to ``harea``.
If a keyword value is a list or tuple, then each call will get one
value from the sequence.
Returns:
list[GlyphRenderer]
Examples:
Assuming a ``ColumnDataSource`` named ``source`` with columns
*2016* and *2017*, then the following call to ``harea_stack`` will
create two ``HArea`` renderers that stack:
.. code-block:: python
p.harea_stack(['2016', '2017'], y='y', color=['blue', 'red'], source=source)
This is equivalent to the following two separate calls:
.. code-block:: python
p.harea(x1=stack(), x2=stack('2016'), y='y', color='blue', source=source, name='2016')
p.harea(x1=stack('2016'), x2=stack('2016', '2017'), y='y', color='red', source=source, name='2017') | [
"Generate",
"multiple",
"HArea",
"renderers",
"for",
"levels",
"stacked",
"left",
"to",
"right",
"."
] | python | train |
maas/python-libmaas | maas/client/viscera/maas.py | https://github.com/maas/python-libmaas/blob/4092c68ef7fb1753efc843569848e2bcc3415002/maas/client/viscera/maas.py#L268-L274 | async def get_default_storage_layout(cls) -> StorageLayout:
"""Default storage layout.
Storage layout that is applied to a node when it is deployed.
"""
data = await cls.get_config("default_storage_layout")
return cls.StorageLayout.lookup(data) | [
"async",
"def",
"get_default_storage_layout",
"(",
"cls",
")",
"->",
"StorageLayout",
":",
"data",
"=",
"await",
"cls",
".",
"get_config",
"(",
"\"default_storage_layout\"",
")",
"return",
"cls",
".",
"StorageLayout",
".",
"lookup",
"(",
"data",
")"
] | Default storage layout.
Storage layout that is applied to a node when it is deployed. | [
"Default",
"storage",
"layout",
"."
] | python | train |
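A hedged sketch of awaiting the coroutine above; how the `maas` object is obtained from a connected client is an assumption here, not part of the record:

import asyncio

async def show_default_layout(maas):
    layout = await maas.get_default_storage_layout()
    print(layout)          # a StorageLayout member; which one depends on the MAAS configuration

# asyncio.get_event_loop().run_until_complete(show_default_layout(maas))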
EventTeam/beliefs | src/beliefs/cells/posets.py | https://github.com/EventTeam/beliefs/blob/c07d22b61bebeede74a72800030dde770bf64208/src/beliefs/cells/posets.py#L406-L411 | def to_dotfile(self):
""" Writes a DOT graphviz file of the domain structure, and returns the filename"""
domain = self.get_domain()
filename = "%s.dot" % (self.__class__.__name__)
nx.write_dot(domain, filename)
return filename | [
"def",
"to_dotfile",
"(",
"self",
")",
":",
"domain",
"=",
"self",
".",
"get_domain",
"(",
")",
"filename",
"=",
"\"%s.dot\"",
"%",
"(",
"self",
".",
"__class__",
".",
"__name__",
")",
"nx",
".",
"write_dot",
"(",
"domain",
",",
"filename",
")",
"return",
"filename"
] | Writes a DOT graphviz file of the domain structure, and returns the filename | [
"Writes",
"a",
"DOT",
"graphviz",
"file",
"of",
"the",
"domain",
"structure",
"and",
"returns",
"the",
"filename"
] | python | train |
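A short usage sketch, assuming `cell` is an instance of the poset cell class this method belongs to:

dot_path = cell.to_dotfile()     # writes '<ClassName>.dot' in the working directory and returns the name
print('graphviz file written:', dot_path)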
cggh/scikit-allel | allel/stats/admixture.py | https://github.com/cggh/scikit-allel/blob/3c979a57a100240ba959dd13f98839349530f215/allel/stats/admixture.py#L306-L373 | def average_patterson_f3(acc, aca, acb, blen, normed=True):
"""Estimate F3(C; A, B) and standard error using the block-jackknife.
Parameters
----------
acc : array_like, int, shape (n_variants, 2)
Allele counts for the test population (C).
aca : array_like, int, shape (n_variants, 2)
Allele counts for the first source population (A).
acb : array_like, int, shape (n_variants, 2)
Allele counts for the second source population (B).
blen : int
Block size (number of variants).
normed : bool, optional
If False, use un-normalised f3 values.
Returns
-------
f3 : float
Estimated value of the statistic using all data.
se : float
Estimated standard error.
z : float
Z-score (number of standard errors from zero).
vb : ndarray, float, shape (n_blocks,)
Value of the statistic in each block.
vj : ndarray, float, shape (n_blocks,)
Values of the statistic from block-jackknife resampling.
Notes
-----
See Patterson (2012), main text and Appendix A.
See Also
--------
allel.stats.admixture.patterson_f3
"""
# calculate per-variant values
T, B = patterson_f3(acc, aca, acb)
# N.B., nans can occur if any of the populations have completely missing
# genotype calls at a variant (i.e., allele number is zero). Here we
# assume that is rare enough to be negligible.
# calculate overall value of statistic
if normed:
f3 = np.nansum(T) / np.nansum(B)
else:
f3 = np.nanmean(T)
# calculate value of statistic within each block
if normed:
T_bsum = moving_statistic(T, statistic=np.nansum, size=blen)
B_bsum = moving_statistic(B, statistic=np.nansum, size=blen)
vb = T_bsum / B_bsum
_, se, vj = jackknife((T_bsum, B_bsum),
statistic=lambda t, b: np.sum(t) / np.sum(b))
else:
vb = moving_statistic(T, statistic=np.nanmean, size=blen)
_, se, vj = jackknife(vb, statistic=np.mean)
# compute Z score
z = f3 / se
return f3, se, z, vb, vj | [
"def",
"average_patterson_f3",
"(",
"acc",
",",
"aca",
",",
"acb",
",",
"blen",
",",
"normed",
"=",
"True",
")",
":",
"# calculate per-variant values",
"T",
",",
"B",
"=",
"patterson_f3",
"(",
"acc",
",",
"aca",
",",
"acb",
")",
"# N.B., nans can occur if any of the populations have completely missing",
"# genotype calls at a variant (i.e., allele number is zero). Here we",
"# assume that is rare enough to be negligible.",
"# calculate overall value of statistic",
"if",
"normed",
":",
"f3",
"=",
"np",
".",
"nansum",
"(",
"T",
")",
"/",
"np",
".",
"nansum",
"(",
"B",
")",
"else",
":",
"f3",
"=",
"np",
".",
"nanmean",
"(",
"T",
")",
"# calculate value of statistic within each block",
"if",
"normed",
":",
"T_bsum",
"=",
"moving_statistic",
"(",
"T",
",",
"statistic",
"=",
"np",
".",
"nansum",
",",
"size",
"=",
"blen",
")",
"B_bsum",
"=",
"moving_statistic",
"(",
"B",
",",
"statistic",
"=",
"np",
".",
"nansum",
",",
"size",
"=",
"blen",
")",
"vb",
"=",
"T_bsum",
"/",
"B_bsum",
"_",
",",
"se",
",",
"vj",
"=",
"jackknife",
"(",
"(",
"T_bsum",
",",
"B_bsum",
")",
",",
"statistic",
"=",
"lambda",
"t",
",",
"b",
":",
"np",
".",
"sum",
"(",
"t",
")",
"/",
"np",
".",
"sum",
"(",
"b",
")",
")",
"else",
":",
"vb",
"=",
"moving_statistic",
"(",
"T",
",",
"statistic",
"=",
"np",
".",
"nanmean",
",",
"size",
"=",
"blen",
")",
"_",
",",
"se",
",",
"vj",
"=",
"jackknife",
"(",
"vb",
",",
"statistic",
"=",
"np",
".",
"mean",
")",
"# compute Z score",
"z",
"=",
"f3",
"/",
"se",
"return",
"f3",
",",
"se",
",",
"z",
",",
"vb",
",",
"vj"
] | Estimate F3(C; A, B) and standard error using the block-jackknife.
Parameters
----------
acc : array_like, int, shape (n_variants, 2)
Allele counts for the test population (C).
aca : array_like, int, shape (n_variants, 2)
Allele counts for the first source population (A).
acb : array_like, int, shape (n_variants, 2)
Allele counts for the second source population (B).
blen : int
Block size (number of variants).
normed : bool, optional
If False, use un-normalised f3 values.
Returns
-------
f3 : float
Estimated value of the statistic using all data.
se : float
Estimated standard error.
z : float
Z-score (number of standard errors from zero).
vb : ndarray, float, shape (n_blocks,)
Value of the statistic in each block.
vj : ndarray, float, shape (n_blocks,)
Values of the statistic from block-jackknife resampling.
Notes
-----
See Patterson (2012), main text and Appendix A.
See Also
--------
allel.stats.admixture.patterson_f3 | [
"Estimate",
"F3",
"(",
"C",
";",
"A",
"B",
")",
"and",
"standard",
"error",
"using",
"the",
"block",
"-",
"jackknife",
"."
] | python | train |
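A minimal usage sketch; ac_c, ac_a and ac_b are hypothetical (n_variants, 2) allele-count arrays, e.g. produced by count_alleles() on each population:

f3, se, z, vb, vj = average_patterson_f3(ac_c, ac_a, ac_b, blen=1000)
print('f3 = %.4f +/- %.4f (Z = %.2f, %d blocks)' % (f3, se, z, len(vb)))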
cloudendpoints/endpoints-python | endpoints/api_config.py | https://github.com/cloudendpoints/endpoints-python/blob/00dd7c7a52a9ee39d5923191c2604b8eafdb3f24/endpoints/api_config.py#L1561-L1604 | def __validate_simple_subfield(self, parameter, field, segment_list,
_segment_index=0):
"""Verifies that a proposed subfield actually exists and is a simple field.
Here, simple means it is not a MessageField (nested).
Args:
parameter: String; the '.' delimited name of the current field being
considered. This is relative to some root.
field: An instance of a subclass of messages.Field. Corresponds to the
previous segment in the path (previous relative to _segment_index),
since this field should be a message field with the current segment
as a field in the message class.
segment_list: The full list of segments from the '.' delimited subfield
being validated.
_segment_index: Integer; used to hold the position of current segment so
that segment_list can be passed as a reference instead of having to
copy using segment_list[1:] at each step.
Raises:
TypeError: If the final subfield (indicated by _segment_index relative
to the length of segment_list) is a MessageField.
TypeError: If at any stage the lookup at a segment fails, e.g. if a.b
exists but a.b.c does not exist. This can happen either if a.b is not
a message field or if a.b.c is not a property on the message class from
a.b.
"""
if _segment_index >= len(segment_list):
# In this case, the field is the final one, so should be simple type
if isinstance(field, messages.MessageField):
field_class = field.__class__.__name__
raise TypeError('Can\'t use messages in path. Subfield %r was '
'included but is a %s.' % (parameter, field_class))
return
segment = segment_list[_segment_index]
parameter += '.' + segment
try:
field = field.type.field_by_name(segment)
except (AttributeError, KeyError):
raise TypeError('Subfield %r from path does not exist.' % (parameter,))
self.__validate_simple_subfield(parameter, field, segment_list,
_segment_index=_segment_index + 1) | [
"def",
"__validate_simple_subfield",
"(",
"self",
",",
"parameter",
",",
"field",
",",
"segment_list",
",",
"_segment_index",
"=",
"0",
")",
":",
"if",
"_segment_index",
">=",
"len",
"(",
"segment_list",
")",
":",
"# In this case, the field is the final one, so should be simple type",
"if",
"isinstance",
"(",
"field",
",",
"messages",
".",
"MessageField",
")",
":",
"field_class",
"=",
"field",
".",
"__class__",
".",
"__name__",
"raise",
"TypeError",
"(",
"'Can\\'t use messages in path. Subfield %r was '",
"'included but is a %s.'",
"%",
"(",
"parameter",
",",
"field_class",
")",
")",
"return",
"segment",
"=",
"segment_list",
"[",
"_segment_index",
"]",
"parameter",
"+=",
"'.'",
"+",
"segment",
"try",
":",
"field",
"=",
"field",
".",
"type",
".",
"field_by_name",
"(",
"segment",
")",
"except",
"(",
"AttributeError",
",",
"KeyError",
")",
":",
"raise",
"TypeError",
"(",
"'Subfield %r from path does not exist.'",
"%",
"(",
"parameter",
",",
")",
")",
"self",
".",
"__validate_simple_subfield",
"(",
"parameter",
",",
"field",
",",
"segment_list",
",",
"_segment_index",
"=",
"_segment_index",
"+",
"1",
")"
] | Verifies that a proposed subfield actually exists and is a simple field.
Here, simple means it is not a MessageField (nested).
Args:
parameter: String; the '.' delimited name of the current field being
considered. This is relative to some root.
field: An instance of a subclass of messages.Field. Corresponds to the
previous segment in the path (previous relative to _segment_index),
since this field should be a message field with the current segment
as a field in the message class.
segment_list: The full list of segments from the '.' delimited subfield
being validated.
_segment_index: Integer; used to hold the position of current segment so
that segment_list can be passed as a reference instead of having to
copy using segment_list[1:] at each step.
Raises:
TypeError: If the final subfield (indicated by _segment_index relative
to the length of segment_list) is a MessageField.
TypeError: If at any stage the lookup at a segment fails, e.g. if a.b
exists but a.b.c does not exist. This can happen either if a.b is not
a message field or if a.b.c is not a property on the message class from
a.b. | [
"Verifies",
"that",
"a",
"proposed",
"subfield",
"actually",
"exists",
"and",
"is",
"a",
"simple",
"field",
"."
] | python | train |
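A standalone sketch of the same recursive walk, kept separate from the private method above; `root_field` and the path are hypothetical, and `messages` is the same protorpc module the method uses:

def walk_subfield(field, segments):
    # follow each '.'-separated segment through nested message fields
    for segment in segments:
        try:
            field = field.type.field_by_name(segment)
        except (AttributeError, KeyError):
            raise TypeError('Subfield %r from path does not exist.' % segment)
    if isinstance(field, messages.MessageField):
        raise TypeError('Path must end on a simple (non-message) field.')
    return field

# walk_subfield(root_field, ['b', 'c'])   # mirrors validating the path 'a.b.c' once field 'a' is in hand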
inspirehep/refextract | refextract/references/tag.py | https://github.com/inspirehep/refextract/blob/d70e3787be3c495a3a07d1517b53f81d51c788c7/refextract/references/tag.py#L182-L338 | def process_reference_line(working_line,
journals_matches,
pprint_repnum_len,
pprint_repnum_matchtext,
publishers_matches,
removed_spaces,
standardised_titles,
kbs):
"""After the phase of identifying and tagging citation instances
in a reference line, this function is called to go through the
line and the collected information about the recognised citations,
and to transform the line into a string of MARC XML in which the
recognised citations are grouped under various datafields and
subfields, depending upon their type.
@param line_marker: (string) - this is the marker for this
reference line (e.g. [1]).
@param working_line: (string) - this is the line before the
punctuation was stripped. At this stage, it has not been
capitalised, and neither TITLES nor REPORT NUMBERS have been
stripped from it. However, any recognised numeration and/or URLs
have been tagged with <cds.YYYY> tags.
The working_line could, for example, look something like this:
[1] CDS <cds.URL description="http //invenio-software.org/">
http //invenio-software.org/</cds.URL>.
@param found_title_len: (dictionary) - the lengths of the title
citations that have been recognised in the line. Keyed by the index
within the line of each match.
@param found_title_matchtext: (dictionary) - The text that was found
for each matched title citation in the line. Keyed by the index within
the line of each match.
@param pprint_repnum_len: (dictionary) - the lengths of the matched
institutional preprint report number citations found within the line.
Keyed by the index within the line of each match.
@param pprint_repnum_matchtext: (dictionary) - The matched text for each
matched institutional report number. Keyed by the index within the line
of each match.
@param identified_dois (list) - The list of dois inside the citation
@identified_urls: (list) - contains 2-cell tuples, each of which
represents an idenitfied URL and its description string.
The list takes the order in which the URLs were identified in the line
(i.e. first-found, second-found, etc).
@param removed_spaces: (dictionary) - The number of spaces removed from
the various positions in the line. Keyed by the index of the position
within the line at which the spaces were removed.
@param standardised_titles: (dictionary) - The standardised journal
titles, keyed by the non-standard version of those titles.
@return: (tuple) of 5 components:
( string -> a MARC XML-ized reference line.
integer -> number of fields of miscellaneous text marked-up
for the line.
integer -> number of title citations marked-up for the line.
integer -> number of institutional report-number citations
marked-up for the line.
integer -> number of URL citations marked-up for the record.
integer -> number of DOI's found for the record
integer -> number of author groups found
)
"""
if len(journals_matches) + len(pprint_repnum_len) + len(publishers_matches) == 0:
# no TITLE or REPORT-NUMBER citations were found within this line,
# use the raw line: (This 'raw' line could still be tagged with
# recognised URLs or numeration.)
tagged_line = working_line
else:
# TITLE and/or REPORT-NUMBER citations were found in this line,
# build a new version of the working-line in which the standard
# versions of the REPORT-NUMBERs and TITLEs are tagged:
startpos = 0 # First cell of the reference line...
previous_match = {} # previously matched TITLE within line (used
# for replacement of IBIDs.
replacement_types = {}
journals_keys = journals_matches.keys()
journals_keys.sort()
reports_keys = pprint_repnum_matchtext.keys()
reports_keys.sort()
publishers_keys = publishers_matches.keys()
publishers_keys.sort()
spaces_keys = removed_spaces.keys()
spaces_keys.sort()
replacement_types = get_replacement_types(journals_keys,
reports_keys,
publishers_keys)
replacement_locations = replacement_types.keys()
replacement_locations.sort()
tagged_line = u"" # This is to be the new 'working-line'. It will
# contain the tagged TITLEs and REPORT-NUMBERs,
# as well as any previously tagged URLs and
# numeration components.
# begin:
for replacement_index in replacement_locations:
# first, factor in any stripped spaces before this 'replacement'
true_replacement_index, extras = \
account_for_stripped_whitespace(spaces_keys,
removed_spaces,
replacement_types,
pprint_repnum_len,
journals_matches,
replacement_index)
if replacement_types[replacement_index] == u"journal":
# Add a tagged periodical TITLE into the line:
rebuilt_chunk, startpos, previous_match = \
add_tagged_journal(
reading_line=working_line,
journal_info=journals_matches[replacement_index],
previous_match=previous_match,
startpos=startpos,
true_replacement_index=true_replacement_index,
extras=extras,
standardised_titles=standardised_titles)
tagged_line += rebuilt_chunk
elif replacement_types[replacement_index] == u"reportnumber":
# Add a tagged institutional preprint REPORT-NUMBER
# into the line:
rebuilt_chunk, startpos = \
add_tagged_report_number(
reading_line=working_line,
len_reportnum=pprint_repnum_len[replacement_index],
reportnum=pprint_repnum_matchtext[replacement_index],
startpos=startpos,
true_replacement_index=true_replacement_index,
extras=extras
)
tagged_line += rebuilt_chunk
elif replacement_types[replacement_index] == u"publisher":
rebuilt_chunk, startpos = \
add_tagged_publisher(
reading_line=working_line,
matched_publisher=publishers_matches[
replacement_index],
startpos=startpos,
true_replacement_index=true_replacement_index,
extras=extras,
kb_publishers=kbs['publishers']
)
tagged_line += rebuilt_chunk
# add the remainder of the original working-line into the rebuilt line:
tagged_line += working_line[startpos:]
# we have all the numeration
# we can make sure there's no space between the volume
# letter and the volume number
# e.g. B 20 -> B20
tagged_line = wash_volume_tag(tagged_line)
# Try to find any authors in the line
tagged_line = identify_and_tag_authors(tagged_line, kbs['authors'])
# Try to find any collaboration in the line
tagged_line = identify_and_tag_collaborations(tagged_line,
kbs['collaborations'])
return tagged_line.replace('\n', '') | [
"def",
"process_reference_line",
"(",
"working_line",
",",
"journals_matches",
",",
"pprint_repnum_len",
",",
"pprint_repnum_matchtext",
",",
"publishers_matches",
",",
"removed_spaces",
",",
"standardised_titles",
",",
"kbs",
")",
":",
"if",
"len",
"(",
"journals_matches",
")",
"+",
"len",
"(",
"pprint_repnum_len",
")",
"+",
"len",
"(",
"publishers_matches",
")",
"==",
"0",
":",
"# no TITLE or REPORT-NUMBER citations were found within this line,",
"# use the raw line: (This 'raw' line could still be tagged with",
"# recognised URLs or numeration.)",
"tagged_line",
"=",
"working_line",
"else",
":",
"# TITLE and/or REPORT-NUMBER citations were found in this line,",
"# build a new version of the working-line in which the standard",
"# versions of the REPORT-NUMBERs and TITLEs are tagged:",
"startpos",
"=",
"0",
"# First cell of the reference line...",
"previous_match",
"=",
"{",
"}",
"# previously matched TITLE within line (used",
"# for replacement of IBIDs.",
"replacement_types",
"=",
"{",
"}",
"journals_keys",
"=",
"journals_matches",
".",
"keys",
"(",
")",
"journals_keys",
".",
"sort",
"(",
")",
"reports_keys",
"=",
"pprint_repnum_matchtext",
".",
"keys",
"(",
")",
"reports_keys",
".",
"sort",
"(",
")",
"publishers_keys",
"=",
"publishers_matches",
".",
"keys",
"(",
")",
"publishers_keys",
".",
"sort",
"(",
")",
"spaces_keys",
"=",
"removed_spaces",
".",
"keys",
"(",
")",
"spaces_keys",
".",
"sort",
"(",
")",
"replacement_types",
"=",
"get_replacement_types",
"(",
"journals_keys",
",",
"reports_keys",
",",
"publishers_keys",
")",
"replacement_locations",
"=",
"replacement_types",
".",
"keys",
"(",
")",
"replacement_locations",
".",
"sort",
"(",
")",
"tagged_line",
"=",
"u\"\"",
"# This is to be the new 'working-line'. It will",
"# contain the tagged TITLEs and REPORT-NUMBERs,",
"# as well as any previously tagged URLs and",
"# numeration components.",
"# begin:",
"for",
"replacement_index",
"in",
"replacement_locations",
":",
"# first, factor in any stripped spaces before this 'replacement'",
"true_replacement_index",
",",
"extras",
"=",
"account_for_stripped_whitespace",
"(",
"spaces_keys",
",",
"removed_spaces",
",",
"replacement_types",
",",
"pprint_repnum_len",
",",
"journals_matches",
",",
"replacement_index",
")",
"if",
"replacement_types",
"[",
"replacement_index",
"]",
"==",
"u\"journal\"",
":",
"# Add a tagged periodical TITLE into the line:",
"rebuilt_chunk",
",",
"startpos",
",",
"previous_match",
"=",
"add_tagged_journal",
"(",
"reading_line",
"=",
"working_line",
",",
"journal_info",
"=",
"journals_matches",
"[",
"replacement_index",
"]",
",",
"previous_match",
"=",
"previous_match",
",",
"startpos",
"=",
"startpos",
",",
"true_replacement_index",
"=",
"true_replacement_index",
",",
"extras",
"=",
"extras",
",",
"standardised_titles",
"=",
"standardised_titles",
")",
"tagged_line",
"+=",
"rebuilt_chunk",
"elif",
"replacement_types",
"[",
"replacement_index",
"]",
"==",
"u\"reportnumber\"",
":",
"# Add a tagged institutional preprint REPORT-NUMBER",
"# into the line:",
"rebuilt_chunk",
",",
"startpos",
"=",
"add_tagged_report_number",
"(",
"reading_line",
"=",
"working_line",
",",
"len_reportnum",
"=",
"pprint_repnum_len",
"[",
"replacement_index",
"]",
",",
"reportnum",
"=",
"pprint_repnum_matchtext",
"[",
"replacement_index",
"]",
",",
"startpos",
"=",
"startpos",
",",
"true_replacement_index",
"=",
"true_replacement_index",
",",
"extras",
"=",
"extras",
")",
"tagged_line",
"+=",
"rebuilt_chunk",
"elif",
"replacement_types",
"[",
"replacement_index",
"]",
"==",
"u\"publisher\"",
":",
"rebuilt_chunk",
",",
"startpos",
"=",
"add_tagged_publisher",
"(",
"reading_line",
"=",
"working_line",
",",
"matched_publisher",
"=",
"publishers_matches",
"[",
"replacement_index",
"]",
",",
"startpos",
"=",
"startpos",
",",
"true_replacement_index",
"=",
"true_replacement_index",
",",
"extras",
"=",
"extras",
",",
"kb_publishers",
"=",
"kbs",
"[",
"'publishers'",
"]",
")",
"tagged_line",
"+=",
"rebuilt_chunk",
"# add the remainder of the original working-line into the rebuilt line:",
"tagged_line",
"+=",
"working_line",
"[",
"startpos",
":",
"]",
"# we have all the numeration",
"# we can make sure there's no space between the volume",
"# letter and the volume number",
"# e.g. B 20 -> B20",
"tagged_line",
"=",
"wash_volume_tag",
"(",
"tagged_line",
")",
"# Try to find any authors in the line",
"tagged_line",
"=",
"identify_and_tag_authors",
"(",
"tagged_line",
",",
"kbs",
"[",
"'authors'",
"]",
")",
"# Try to find any collaboration in the line",
"tagged_line",
"=",
"identify_and_tag_collaborations",
"(",
"tagged_line",
",",
"kbs",
"[",
"'collaborations'",
"]",
")",
"return",
"tagged_line",
".",
"replace",
"(",
"'\\n'",
",",
"''",
")"
] | After the phase of identifying and tagging citation instances
in a reference line, this function is called to go through the
line and the collected information about the recognised citations,
and to transform the line into a string of MARC XML in which the
recognised citations are grouped under various datafields and
subfields, depending upon their type.
@param line_marker: (string) - this is the marker for this
reference line (e.g. [1]).
@param working_line: (string) - this is the line before the
punctuation was stripped. At this stage, it has not been
capitalised, and neither TITLES nor REPORT NUMBERS have been
stripped from it. However, any recognised numeration and/or URLs
have been tagged with <cds.YYYY> tags.
The working_line could, for example, look something like this:
[1] CDS <cds.URL description="http //invenio-software.org/">
http //invenio-software.org/</cds.URL>.
@param found_title_len: (dictionary) - the lengths of the title
citations that have been recognised in the line. Keyed by the index
within the line of each match.
@param found_title_matchtext: (dictionary) - The text that was found
for each matched title citation in the line. Keyed by the index within
the line of each match.
@param pprint_repnum_len: (dictionary) - the lengths of the matched
institutional preprint report number citations found within the line.
Keyed by the index within the line of each match.
@param pprint_repnum_matchtext: (dictionary) - The matched text for each
matched institutional report number. Keyed by the index within the line
of each match.
@param identified_dois (list) - The list of dois inside the citation
@identified_urls: (list) - contains 2-cell tuples, each of which
represents an identified URL and its description string.
The list takes the order in which the URLs were identified in the line
(i.e. first-found, second-found, etc).
@param removed_spaces: (dictionary) - The number of spaces removed from
the various positions in the line. Keyed by the index of the position
within the line at which the spaces were removed.
@param standardised_titles: (dictionary) - The standardised journal
titles, keyed by the non-standard version of those titles.
@return: (tuple) of 5 components:
( string -> a MARC XML-ized reference line.
integer -> number of fields of miscellaneous text marked-up
for the line.
integer -> number of title citations marked-up for the line.
integer -> number of institutional report-number citations
marked-up for the line.
integer -> number of URL citations marked-up for the record.
integer -> number of DOI's found for the record
integer -> number of author groups found
) | [
"After",
"the",
"phase",
"of",
"identifying",
"and",
"tagging",
"citation",
"instances",
"in",
"a",
"reference",
"line",
"this",
"function",
"is",
"called",
"to",
"go",
"through",
"the",
"line",
"and",
"the",
"collected",
"information",
"about",
"the",
"recognised",
"citations",
"and",
"to",
"transform",
"the",
"line",
"into",
"a",
"string",
"of",
"MARC",
"XML",
"in",
"which",
"the",
"recognised",
"citations",
"are",
"grouped",
"under",
"various",
"datafields",
"and",
"subfields",
"depending",
"upon",
"their",
"type",
".",
"@param",
"line_marker",
":",
"(",
"string",
")",
"-",
"this",
"is",
"the",
"marker",
"for",
"this",
"reference",
"line",
"(",
"e",
".",
"g",
".",
"[",
"1",
"]",
")",
".",
"@param",
"working_line",
":",
"(",
"string",
")",
"-",
"this",
"is",
"the",
"line",
"before",
"the",
"punctuation",
"was",
"stripped",
".",
"At",
"this",
"stage",
"it",
"has",
"not",
"been",
"capitalised",
"and",
"neither",
"TITLES",
"nor",
"REPORT",
"NUMBERS",
"have",
"been",
"stripped",
"from",
"it",
".",
"However",
"any",
"recognised",
"numeration",
"and",
"/",
"or",
"URLs",
"have",
"been",
"tagged",
"with",
"<cds",
".",
"YYYY",
">",
"tags",
".",
"The",
"working_line",
"could",
"for",
"example",
"look",
"something",
"like",
"this",
":",
"[",
"1",
"]",
"CDS",
"<cds",
".",
"URL",
"description",
"=",
"http",
"//",
"invenio",
"-",
"software",
".",
"org",
"/",
">",
"http",
"//",
"invenio",
"-",
"software",
".",
"org",
"/",
"<",
"/",
"cds",
".",
"URL",
">",
".",
"@param",
"found_title_len",
":",
"(",
"dictionary",
")",
"-",
"the",
"lengths",
"of",
"the",
"title",
"citations",
"that",
"have",
"been",
"recognised",
"in",
"the",
"line",
".",
"Keyed",
"by",
"the",
"index",
"within",
"the",
"line",
"of",
"each",
"match",
".",
"@param",
"found_title_matchtext",
":",
"(",
"dictionary",
")",
"-",
"The",
"text",
"that",
"was",
"found",
"for",
"each",
"matched",
"title",
"citation",
"in",
"the",
"line",
".",
"Keyed",
"by",
"the",
"index",
"within",
"the",
"line",
"of",
"each",
"match",
".",
"@param",
"pprint_repnum_len",
":",
"(",
"dictionary",
")",
"-",
"the",
"lengths",
"of",
"the",
"matched",
"institutional",
"preprint",
"report",
"number",
"citations",
"found",
"within",
"the",
"line",
".",
"Keyed",
"by",
"the",
"index",
"within",
"the",
"line",
"of",
"each",
"match",
".",
"@param",
"pprint_repnum_matchtext",
":",
"(",
"dictionary",
")",
"-",
"The",
"matched",
"text",
"for",
"each",
"matched",
"institutional",
"report",
"number",
".",
"Keyed",
"by",
"the",
"index",
"within",
"the",
"line",
"of",
"each",
"match",
".",
"@param",
"identified_dois",
"(",
"list",
")",
"-",
"The",
"list",
"of",
"dois",
"inside",
"the",
"citation",
"@identified_urls",
":",
"(",
"list",
")",
"-",
"contains",
"2",
"-",
"cell",
"tuples",
"each",
"of",
"which",
"represents",
"an",
"idenitfied",
"URL",
"and",
"its",
"description",
"string",
".",
"The",
"list",
"takes",
"the",
"order",
"in",
"which",
"the",
"URLs",
"were",
"identified",
"in",
"the",
"line",
"(",
"i",
".",
"e",
".",
"first",
"-",
"found",
"second",
"-",
"found",
"etc",
")",
".",
"@param",
"removed_spaces",
":",
"(",
"dictionary",
")",
"-",
"The",
"number",
"of",
"spaces",
"removed",
"from",
"the",
"various",
"positions",
"in",
"the",
"line",
".",
"Keyed",
"by",
"the",
"index",
"of",
"the",
"position",
"within",
"the",
"line",
"at",
"which",
"the",
"spaces",
"were",
"removed",
".",
"@param",
"standardised_titles",
":",
"(",
"dictionary",
")",
"-",
"The",
"standardised",
"journal",
"titles",
"keyed",
"by",
"the",
"non",
"-",
"standard",
"version",
"of",
"those",
"titles",
".",
"@return",
":",
"(",
"tuple",
")",
"of",
"5",
"components",
":",
"(",
"string",
"-",
">",
"a",
"MARC",
"XML",
"-",
"ized",
"reference",
"line",
".",
"integer",
"-",
">",
"number",
"of",
"fields",
"of",
"miscellaneous",
"text",
"marked",
"-",
"up",
"for",
"the",
"line",
".",
"integer",
"-",
">",
"number",
"of",
"title",
"citations",
"marked",
"-",
"up",
"for",
"the",
"line",
".",
"integer",
"-",
">",
"number",
"of",
"institutional",
"report",
"-",
"number",
"citations",
"marked",
"-",
"up",
"for",
"the",
"line",
".",
"integer",
"-",
">",
"number",
"of",
"URL",
"citations",
"marked",
"-",
"up",
"for",
"the",
"record",
".",
"integer",
"-",
">",
"number",
"of",
"DOI",
"s",
"found",
"for",
"the",
"record",
"integer",
"-",
">",
"number",
"of",
"author",
"groups",
"found",
")"
] | python | train |
RudolfCardinal/pythonlib | cardinal_pythonlib/datetimefunc.py | https://github.com/RudolfCardinal/pythonlib/blob/0b84cb35f38bd7d8723958dae51b480a829b7227/cardinal_pythonlib/datetimefunc.py#L165-L171 | def pendulum_date_to_datetime_date(x: Date) -> datetime.date:
"""
Takes a :class:`pendulum.Date` and returns a :class:`datetime.date`.
Used, for example, where a database backend insists on
:class:`datetime.date`.
"""
return datetime.date(year=x.year, month=x.month, day=x.day) | [
"def",
"pendulum_date_to_datetime_date",
"(",
"x",
":",
"Date",
")",
"->",
"datetime",
".",
"date",
":",
"return",
"datetime",
".",
"date",
"(",
"year",
"=",
"x",
".",
"year",
",",
"month",
"=",
"x",
".",
"month",
",",
"day",
"=",
"x",
".",
"day",
")"
] | Takes a :class:`pendulum.Date` and returns a :class:`datetime.date`.
Used, for example, where a database backend insists on
:class:`datetime.date`. | [
"Takes",
"a",
":",
"class",
":",
"pendulum",
".",
"Date",
"and",
"returns",
"a",
":",
"class",
":",
"datetime",
".",
"date",
".",
"Used",
"for",
"example",
"where",
"a",
"database",
"backend",
"insists",
"on",
":",
"class",
":",
"datetime",
".",
"date",
"."
] | python | train |
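A quick sketch of the conversion:

import datetime
import pendulum

d = pendulum_date_to_datetime_date(pendulum.Date(2024, 2, 29))
assert type(d) is datetime.date and d == datetime.date(2024, 2, 29)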
Chilipp/psyplot | psyplot/config/rcsetup.py | https://github.com/Chilipp/psyplot/blob/75a0a15a9a1dd018e79d2df270d56c4bf5f311d5/psyplot/config/rcsetup.py#L120-L164 | def add_base_str(self, base_str, pattern='.+', pattern_base=None,
append=True):
"""
Add further base string to this instance
Parameters
----------
base_str: str or list of str
Strings that are used to look for keys to get and set keys in
the :attr:`base` dictionary. If a string does not contain
``'%(key)s'``, it will be appended at the end. ``'%(key)s'`` will
be replaced by the specific key for getting and setting an item.
pattern: str
Default: ``'.+'``. This is the pattern that is inserted for
``%(key)s`` in a base string to look for matches (using the
:mod:`re` module) in the `base` dictionary. The default `pattern`
matches everything without white spaces.
pattern_base: str or list or str
If None, whatever is given in the `base_str` is used.
Those strings will be used for generating the final search
patterns. You can specify this parameter by yourself to avoid the
misinterpretation of patterns. For example for a `base_str` like
``'my.str'`` it is recommended to additionally provide the
`pattern_base` keyword with ``'my\.str'``.
Like for `base_str`, the ``%(key)s`` is appended if not already in
the string.
append: bool
If True, the given `base_str` are appended (i.e. it is first
looked for them in the :attr:`base` dictionary), otherwise they are
put at the beginning"""
base_str = safe_list(base_str)
pattern_base = safe_list(pattern_base or [])
for i, s in enumerate(base_str):
if '%(key)s' not in s:
base_str[i] += '%(key)s'
if pattern_base:
for i, s in enumerate(pattern_base):
if '%(key)s' not in s:
pattern_base[i] += '%(key)s'
else:
pattern_base = base_str
self.base_str = base_str + self.base_str
self.patterns = list(map(lambda s: re.compile(s.replace(
'%(key)s', '(?P<key>%s)' % pattern)), pattern_base)) + \
self.patterns | [
"def",
"add_base_str",
"(",
"self",
",",
"base_str",
",",
"pattern",
"=",
"'.+'",
",",
"pattern_base",
"=",
"None",
",",
"append",
"=",
"True",
")",
":",
"base_str",
"=",
"safe_list",
"(",
"base_str",
")",
"pattern_base",
"=",
"safe_list",
"(",
"pattern_base",
"or",
"[",
"]",
")",
"for",
"i",
",",
"s",
"in",
"enumerate",
"(",
"base_str",
")",
":",
"if",
"'%(key)s'",
"not",
"in",
"s",
":",
"base_str",
"[",
"i",
"]",
"+=",
"'%(key)s'",
"if",
"pattern_base",
":",
"for",
"i",
",",
"s",
"in",
"enumerate",
"(",
"pattern_base",
")",
":",
"if",
"'%(key)s'",
"not",
"in",
"s",
":",
"pattern_base",
"[",
"i",
"]",
"+=",
"'%(key)s'",
"else",
":",
"pattern_base",
"=",
"base_str",
"self",
".",
"base_str",
"=",
"base_str",
"+",
"self",
".",
"base_str",
"self",
".",
"patterns",
"=",
"list",
"(",
"map",
"(",
"lambda",
"s",
":",
"re",
".",
"compile",
"(",
"s",
".",
"replace",
"(",
"'%(key)s'",
",",
"'(?P<key>%s)'",
"%",
"pattern",
")",
")",
",",
"pattern_base",
")",
")",
"+",
"self",
".",
"patterns"
] | Add further base string to this instance
Parameters
----------
base_str: str or list of str
Strings that are used to look for keys to get and set keys in
the :attr:`base` dictionary. If a string does not contain
``'%(key)s'``, it will be appended at the end. ``'%(key)s'`` will
be replaced by the specific key for getting and setting an item.
pattern: str
Default: ``'.+'``. This is the pattern that is inserted for
``%(key)s`` in a base string to look for matches (using the
:mod:`re` module) in the `base` dictionary. The default `pattern`
matches everything without white spaces.
pattern_base: str or list or str
If None, whatever is given in the `base_str` is used.
Those strings will be used for generating the final search
patterns. You can specify this parameter by yourself to avoid the
misinterpretation of patterns. For example for a `base_str` like
``'my.str'`` it is recommended to additionally provide the
`pattern_base` keyword with ``'my\.str'``.
Like for `base_str`, the ``%(key)s`` is appended if not already in
the string.
append: bool
If True, the given `base_str` are appended (i.e. it is first
looked for them in the :attr:`base` dictionary), otherwise they are
put at the beginning | [
"Add",
"further",
"base",
"string",
"to",
"this",
"instance"
] | python | train |
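A standalone sketch of the pattern construction this method performs; the base string is hypothetical:

import re

base = 'project.%(key)s.plotter'                            # hypothetical base_str entry
pattern = re.compile(base.replace('%(key)s', '(?P<key>.+)'))
print(pattern.match('project.vmin.plotter').group('key'))   # -> 'vmin'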
James1345/django-rest-knox | knox/auth.py | https://github.com/James1345/django-rest-knox/blob/05f218f1922999d1be76753076cf8af78f134e02/knox/auth.py#L56-L78 | def authenticate_credentials(self, token):
'''
Due to the random nature of hashing a salted value, this must inspect
each auth_token individually to find the correct one.
Tokens that have expired will be deleted and skipped
'''
msg = _('Invalid token.')
token = token.decode("utf-8")
for auth_token in AuthToken.objects.filter(
token_key=token[:CONSTANTS.TOKEN_KEY_LENGTH]):
if self._cleanup_token(auth_token):
continue
try:
digest = hash_token(token, auth_token.salt)
except (TypeError, binascii.Error):
raise exceptions.AuthenticationFailed(msg)
if compare_digest(digest, auth_token.digest):
if knox_settings.AUTO_REFRESH and auth_token.expiry:
self.renew_token(auth_token)
return self.validate_user(auth_token)
raise exceptions.AuthenticationFailed(msg) | [
"def",
"authenticate_credentials",
"(",
"self",
",",
"token",
")",
":",
"msg",
"=",
"_",
"(",
"'Invalid token.'",
")",
"token",
"=",
"token",
".",
"decode",
"(",
"\"utf-8\"",
")",
"for",
"auth_token",
"in",
"AuthToken",
".",
"objects",
".",
"filter",
"(",
"token_key",
"=",
"token",
"[",
":",
"CONSTANTS",
".",
"TOKEN_KEY_LENGTH",
"]",
")",
":",
"if",
"self",
".",
"_cleanup_token",
"(",
"auth_token",
")",
":",
"continue",
"try",
":",
"digest",
"=",
"hash_token",
"(",
"token",
",",
"auth_token",
".",
"salt",
")",
"except",
"(",
"TypeError",
",",
"binascii",
".",
"Error",
")",
":",
"raise",
"exceptions",
".",
"AuthenticationFailed",
"(",
"msg",
")",
"if",
"compare_digest",
"(",
"digest",
",",
"auth_token",
".",
"digest",
")",
":",
"if",
"knox_settings",
".",
"AUTO_REFRESH",
"and",
"auth_token",
".",
"expiry",
":",
"self",
".",
"renew_token",
"(",
"auth_token",
")",
"return",
"self",
".",
"validate_user",
"(",
"auth_token",
")",
"raise",
"exceptions",
".",
"AuthenticationFailed",
"(",
"msg",
")"
] | Due to the random nature of hashing a salted value, this must inspect
each auth_token individually to find the correct one.
Tokens that have expired will be deleted and skipped | [
"Due",
"to",
"the",
"random",
"nature",
"of",
"hashing",
"a",
"salted",
"value",
"this",
"must",
"inspect",
"each",
"auth_token",
"individually",
"to",
"find",
"the",
"correct",
"one",
"."
] | python | train |
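A hedged sketch of how a client exercises this check once knox's TokenAuthentication protects a DRF view; the URL and token value are hypothetical, and the 'Token' header keyword is assumed to be the default:

import requests

resp = requests.get(
    'https://api.example.com/secure/',
    headers={'Authorization': 'Token 9944b09199c62bcf9418ad846dd0e4bbdfc6ee4b'},
)
print(resp.status_code)   # 401 when the token is unknown, expired or malformed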
earlye/nephele | nephele/AwsAutoScalingGroup.py | https://github.com/earlye/nephele/blob/a7dadc68f4124671457f09119419978c4d22013e/nephele/AwsAutoScalingGroup.py#L181-L227 | def do_ssh(self,args):
"""SSH to an instance. ssh -h for detailed help"""
parser = CommandArgumentParser("ssh")
parser.add_argument(dest='instance',help='instance index or name');
parser.add_argument('-a','--address-number',default='0',dest='interface-number',help='instance id of the instance to ssh to');
parser.add_argument('-ii','--ignore-host-key',dest='ignore-host-key',default=False,action='store_true',help='Ignore host key')
parser.add_argument('-ne','--no-echo',dest='no-echo',default=False,action='store_true',help='Do not echo command')
parser.add_argument('-L',dest='forwarding',nargs='*',help="port forwarding string of the form: {localport}:{host-visible-to-instance}:{remoteport} or {port}")
parser.add_argument('-R','--replace-key',dest='replaceKey',default=False,action='store_true',help="Replace the host's key. This is useful when AWS recycles an IP address you've seen before.")
parser.add_argument('-Y','--keyscan',dest='keyscan',default=False,action='store_true',help="Perform a keyscan to avoid having to say 'yes' for a new host. Implies -R.")
parser.add_argument('-B','--background',dest='background',default=False,action='store_true',help="Run in the background. (e.g., forward an ssh session and then do other stuff in aws-shell).")
parser.add_argument('-v',dest='verbosity',default=0,action=VAction,nargs='?',help='Verbosity. The more instances, the more verbose');
parser.add_argument('-m',dest='macro',default=False,action='store_true',help='{command} is a series of macros to execute, not the actual command to run on the host');
parser.add_argument(dest='command',nargs='*',help="Command to run on all hosts.") # consider adding a filter option later
args = vars(parser.parse_args(args))
interfaceNumber = int(args['interface-number'])
forwarding = args['forwarding']
replaceKey = args['replaceKey']
keyscan = args['keyscan']
background = args['background']
verbosity = args['verbosity']
ignoreHostKey = args['ignore-host-key']
noEcho = args['no-echo']
# Figure out the host to connect to:
target = args['instance']
try:
index = int(args['instance'])
instances = self.scalingGroupDescription['AutoScalingGroups'][0]['Instances']
instance = instances[index]
target = instance['InstanceId']
except ValueError: # if args['instance'] is not an int, for example.
pass
if args['macro']:
if len(args['command']) > 1:
print("Only one macro may be specified with the -m switch.")
return
else:
macro = args['command'][0]
print("Macro:{}".format(macro))
command = Config.config['ssh-macros'][macro]
else:
command = ' '.join(args['command'])
ssh(target,interfaceNumber,forwarding,replaceKey,keyscan,background,verbosity,command,ignoreHostKey=ignoreHostKey,echoCommand = not noEcho) | [
"def",
"do_ssh",
"(",
"self",
",",
"args",
")",
":",
"parser",
"=",
"CommandArgumentParser",
"(",
"\"ssh\"",
")",
"parser",
".",
"add_argument",
"(",
"dest",
"=",
"'instance'",
",",
"help",
"=",
"'instance index or name'",
")",
"parser",
".",
"add_argument",
"(",
"'-a'",
",",
"'--address-number'",
",",
"default",
"=",
"'0'",
",",
"dest",
"=",
"'interface-number'",
",",
"help",
"=",
"'instance id of the instance to ssh to'",
")",
"parser",
".",
"add_argument",
"(",
"'-ii'",
",",
"'--ignore-host-key'",
",",
"dest",
"=",
"'ignore-host-key'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"'Ignore host key'",
")",
"parser",
".",
"add_argument",
"(",
"'-ne'",
",",
"'--no-echo'",
",",
"dest",
"=",
"'no-echo'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"'Do not echo command'",
")",
"parser",
".",
"add_argument",
"(",
"'-L'",
",",
"dest",
"=",
"'forwarding'",
",",
"nargs",
"=",
"'*'",
",",
"help",
"=",
"\"port forwarding string of the form: {localport}:{host-visible-to-instance}:{remoteport} or {port}\"",
")",
"parser",
".",
"add_argument",
"(",
"'-R'",
",",
"'--replace-key'",
",",
"dest",
"=",
"'replaceKey'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"\"Replace the host's key. This is useful when AWS recycles an IP address you've seen before.\"",
")",
"parser",
".",
"add_argument",
"(",
"'-Y'",
",",
"'--keyscan'",
",",
"dest",
"=",
"'keyscan'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"\"Perform a keyscan to avoid having to say 'yes' for a new host. Implies -R.\"",
")",
"parser",
".",
"add_argument",
"(",
"'-B'",
",",
"'--background'",
",",
"dest",
"=",
"'background'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"\"Run in the background. (e.g., forward an ssh session and then do other stuff in aws-shell).\"",
")",
"parser",
".",
"add_argument",
"(",
"'-v'",
",",
"dest",
"=",
"'verbosity'",
",",
"default",
"=",
"0",
",",
"action",
"=",
"VAction",
",",
"nargs",
"=",
"'?'",
",",
"help",
"=",
"'Verbosity. The more instances, the more verbose'",
")",
"parser",
".",
"add_argument",
"(",
"'-m'",
",",
"dest",
"=",
"'macro'",
",",
"default",
"=",
"False",
",",
"action",
"=",
"'store_true'",
",",
"help",
"=",
"'{command} is a series of macros to execute, not the actual command to run on the host'",
")",
"parser",
".",
"add_argument",
"(",
"dest",
"=",
"'command'",
",",
"nargs",
"=",
"'*'",
",",
"help",
"=",
"\"Command to run on all hosts.\"",
")",
"# consider adding a filter option later",
"args",
"=",
"vars",
"(",
"parser",
".",
"parse_args",
"(",
"args",
")",
")",
"interfaceNumber",
"=",
"int",
"(",
"args",
"[",
"'interface-number'",
"]",
")",
"forwarding",
"=",
"args",
"[",
"'forwarding'",
"]",
"replaceKey",
"=",
"args",
"[",
"'replaceKey'",
"]",
"keyscan",
"=",
"args",
"[",
"'keyscan'",
"]",
"background",
"=",
"args",
"[",
"'background'",
"]",
"verbosity",
"=",
"args",
"[",
"'verbosity'",
"]",
"ignoreHostKey",
"=",
"args",
"[",
"'ignore-host-key'",
"]",
"noEcho",
"=",
"args",
"[",
"'no-echo'",
"]",
"# Figure out the host to connect to:",
"target",
"=",
"args",
"[",
"'instance'",
"]",
"try",
":",
"index",
"=",
"int",
"(",
"args",
"[",
"'instance'",
"]",
")",
"instances",
"=",
"self",
".",
"scalingGroupDescription",
"[",
"'AutoScalingGroups'",
"]",
"[",
"0",
"]",
"[",
"'Instances'",
"]",
"instance",
"=",
"instances",
"[",
"index",
"]",
"target",
"=",
"instance",
"[",
"'InstanceId'",
"]",
"except",
"ValueError",
":",
"# if args['instance'] is not an int, for example.",
"pass",
"if",
"args",
"[",
"'macro'",
"]",
":",
"if",
"len",
"(",
"args",
"[",
"'command'",
"]",
")",
">",
"1",
":",
"print",
"(",
"\"Only one macro may be specified with the -m switch.\"",
")",
"return",
"else",
":",
"macro",
"=",
"args",
"[",
"'command'",
"]",
"[",
"0",
"]",
"print",
"(",
"\"Macro:{}\"",
".",
"format",
"(",
"macro",
")",
")",
"command",
"=",
"Config",
".",
"config",
"[",
"'ssh-macros'",
"]",
"[",
"macro",
"]",
"else",
":",
"command",
"=",
"' '",
".",
"join",
"(",
"args",
"[",
"'command'",
"]",
")",
"ssh",
"(",
"target",
",",
"interfaceNumber",
",",
"forwarding",
",",
"replaceKey",
",",
"keyscan",
",",
"background",
",",
"verbosity",
",",
"command",
",",
"ignoreHostKey",
"=",
"ignoreHostKey",
",",
"echoCommand",
"=",
"not",
"noEcho",
")"
] | SSH to an instance. ssh -h for detailed help | [
"SSH",
"to",
"an",
"instance",
".",
"ssh",
"-",
"h",
"for",
"detailed",
"help"
] | python | train |
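An illustrative invocation of this shell command, built only from the argparse options defined above; the instance index, forwarding target and remote command are hypothetical:

ssh 0 -Y -L 8080:internal-db:5432 uptime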
annoviko/pyclustering | pyclustering/cluster/birch.py | https://github.com/annoviko/pyclustering/blob/98aa0dd89fd36f701668fb1eb29c8fb5662bf7d0/pyclustering/cluster/birch.py#L156-L172 | def __extract_features(self):
"""!
@brief Extracts features from CF-tree cluster.
"""
self.__features = [];
if (len(self.__tree.leafes) == 1):
# parameters are too general, copy all entries
for entry in self.__tree.leafes[0].entries:
self.__features.append(entry);
else:
# copy all leaf clustering features
for node in self.__tree.leafes:
self.__features.append(node.feature); | [
"def",
"__extract_features",
"(",
"self",
")",
":",
"self",
".",
"__features",
"=",
"[",
"]",
"if",
"(",
"len",
"(",
"self",
".",
"__tree",
".",
"leafes",
")",
"==",
"1",
")",
":",
"# parameters are too general, copy all entries\r",
"for",
"entry",
"in",
"self",
".",
"__tree",
".",
"leafes",
"[",
"0",
"]",
".",
"entries",
":",
"self",
".",
"__features",
".",
"append",
"(",
"entry",
")",
"else",
":",
"# copy all leaf clustering features\r",
"for",
"node",
"in",
"self",
".",
"__tree",
".",
"leafes",
":",
"self",
".",
"__features",
".",
"append",
"(",
"node",
".",
"feature",
")"
] | !
@brief Extracts features from CF-tree cluster. | [
"!"
] | python | valid |
swharden/SWHLab | doc/oldcode/swhlab/core/abf.py | https://github.com/swharden/SWHLab/blob/a86c3c65323cec809a4bd4f81919644927094bf5/doc/oldcode/swhlab/core/abf.py#L432-L467 | def average_data(self,ranges=[[None,None]],percentile=None):
"""
given a list of ranges, return single point averages for every sweep.
Units are in seconds. Expects something like:
ranges=[[1,2],[4,5],[7,7.5]]
None values will be replaced with maximum/minimum bounds.
For baseline subtraction, make a range baseline then sub it yourself.
returns datas[iSweep][iRange][AVorSD]
if a percentile is given, return that percentile rather than average.
percentile=50 is the median, but requires sorting, and is slower.
"""
ranges=copy.deepcopy(ranges) #TODO: make this cleaner. Why needed?
# clean up ranges, make them indexes
for i in range(len(ranges)):
if ranges[i][0] is None:
ranges[i][0] = 0
else:
ranges[i][0] = int(ranges[i][0]*self.rate)
if ranges[i][1] is None:
ranges[i][1] = -1
else:
ranges[i][1] = int(ranges[i][1]*self.rate)
# do the math
datas=np.empty((self.sweeps,len(ranges),2)) #[sweep][range]=[Av,Er]
for iSweep in range(self.sweeps):
self.setSweep(iSweep)
for iRange in range(len(ranges)):
I1=ranges[iRange][0]
I2=ranges[iRange][1]
if percentile:
datas[iSweep][iRange][0]=np.percentile(self.dataY[I1:I2],percentile)
else:
datas[iSweep][iRange][0]=np.average(self.dataY[I1:I2])
datas[iSweep][iRange][1]=np.std(self.dataY[I1:I2])
return datas | [
"def",
"average_data",
"(",
"self",
",",
"ranges",
"=",
"[",
"[",
"None",
",",
"None",
"]",
"]",
",",
"percentile",
"=",
"None",
")",
":",
"ranges",
"=",
"copy",
".",
"deepcopy",
"(",
"ranges",
")",
"#TODO: make this cleaner. Why needed?",
"# clean up ranges, make them indexes",
"for",
"i",
"in",
"range",
"(",
"len",
"(",
"ranges",
")",
")",
":",
"if",
"ranges",
"[",
"i",
"]",
"[",
"0",
"]",
"is",
"None",
":",
"ranges",
"[",
"i",
"]",
"[",
"0",
"]",
"=",
"0",
"else",
":",
"ranges",
"[",
"i",
"]",
"[",
"0",
"]",
"=",
"int",
"(",
"ranges",
"[",
"i",
"]",
"[",
"0",
"]",
"*",
"self",
".",
"rate",
")",
"if",
"ranges",
"[",
"i",
"]",
"[",
"1",
"]",
"is",
"None",
":",
"ranges",
"[",
"i",
"]",
"[",
"1",
"]",
"=",
"-",
"1",
"else",
":",
"ranges",
"[",
"i",
"]",
"[",
"1",
"]",
"=",
"int",
"(",
"ranges",
"[",
"i",
"]",
"[",
"1",
"]",
"*",
"self",
".",
"rate",
")",
"# do the math",
"datas",
"=",
"np",
".",
"empty",
"(",
"(",
"self",
".",
"sweeps",
",",
"len",
"(",
"ranges",
")",
",",
"2",
")",
")",
"#[sweep][range]=[Av,Er]",
"for",
"iSweep",
"in",
"range",
"(",
"self",
".",
"sweeps",
")",
":",
"self",
".",
"setSweep",
"(",
"iSweep",
")",
"for",
"iRange",
"in",
"range",
"(",
"len",
"(",
"ranges",
")",
")",
":",
"I1",
"=",
"ranges",
"[",
"iRange",
"]",
"[",
"0",
"]",
"I2",
"=",
"ranges",
"[",
"iRange",
"]",
"[",
"1",
"]",
"if",
"percentile",
":",
"datas",
"[",
"iSweep",
"]",
"[",
"iRange",
"]",
"[",
"0",
"]",
"=",
"np",
".",
"percentile",
"(",
"self",
".",
"dataY",
"[",
"I1",
":",
"I2",
"]",
",",
"percentile",
")",
"else",
":",
"datas",
"[",
"iSweep",
"]",
"[",
"iRange",
"]",
"[",
"0",
"]",
"=",
"np",
".",
"average",
"(",
"self",
".",
"dataY",
"[",
"I1",
":",
"I2",
"]",
")",
"datas",
"[",
"iSweep",
"]",
"[",
"iRange",
"]",
"[",
"1",
"]",
"=",
"np",
".",
"std",
"(",
"self",
".",
"dataY",
"[",
"I1",
":",
"I2",
"]",
")",
"return",
"datas"
] | given a list of ranges, return single point averages for every sweep.
Units are in seconds. Expects something like:
ranges=[[1,2],[4,5],[7,7.5]]
None values will be replaced with maximum/minimum bounds.
For baseline subtraction, make a range baseline then sub it yourself.
returns datas[iSweep][iRange][AVorSD]
if a percentile is given, return that percentile rather than average.
percentile=50 is the median, but requires sorting, and is slower. | [
"given",
"a",
"list",
"of",
"ranges",
"return",
"single",
"point",
"averages",
"for",
"every",
"sweep",
".",
"Units",
"are",
"in",
"seconds",
".",
"Expects",
"something",
"like",
":",
"ranges",
"=",
"[[",
"1",
"2",
"]",
"[",
"4",
"5",
"]",
"[",
"7",
"7",
".",
"5",
"]]",
"None",
"values",
"will",
"be",
"replaced",
"with",
"maximum",
"/",
"minimum",
"bounds",
".",
"For",
"baseline",
"subtraction",
"make",
"a",
"range",
"baseline",
"then",
"sub",
"it",
"youtself",
".",
"returns",
"datas",
"[",
"iSweep",
"]",
"[",
"iRange",
"]",
"[",
"AVorSD",
"]",
"if",
"a",
"percentile",
"is",
"given",
"return",
"that",
"percentile",
"rather",
"than",
"average",
".",
"percentile",
"=",
"50",
"is",
"the",
"median",
"but",
"requires",
"sorting",
"and",
"is",
"slower",
"."
] | python | valid |
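A minimal usage sketch, assuming `abf` is an already-loaded ABF object from this module:

datas = abf.average_data(ranges=[[0.0, 0.5], [1.0, 2.0]])   # datas[iSweep][iRange] = [average, SD]
baseline = datas[:, 0, 0]
response = datas[:, 1, 0] - baseline                        # per-sweep baseline-subtracted averages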
RJT1990/pyflux | pyflux/families/laplace.py | https://github.com/RJT1990/pyflux/blob/297f2afc2095acd97c12e827dd500e8ea5da0c0f/pyflux/families/laplace.py#L182-L197 | def logpdf(self, mu):
"""
Log PDF for Laplace prior
Parameters
----------
mu : float
Latent variable for which the prior is being formed over
Returns
----------
- log(p(mu))
"""
if self.transform is not None:
mu = self.transform(mu)
return ss.laplace.logpdf(mu, loc=self.loc0, scale=self.scale0) | [
"def",
"logpdf",
"(",
"self",
",",
"mu",
")",
":",
"if",
"self",
".",
"transform",
"is",
"not",
"None",
":",
"mu",
"=",
"self",
".",
"transform",
"(",
"mu",
")",
"return",
"ss",
".",
"laplace",
".",
"logpdf",
"(",
"mu",
",",
"loc",
"=",
"self",
".",
"loc0",
",",
"scale",
"=",
"self",
".",
"scale0",
")"
] | Log PDF for Laplace prior
Parameters
----------
mu : float
Latent variable for which the prior is being formed over
Returns
----------
- log(p(mu)) | [
"Log",
"PDF",
"for",
"Laplace",
"prior"
] | python | train |
RudolfCardinal/pythonlib | cardinal_pythonlib/tee.py | https://github.com/RudolfCardinal/pythonlib/blob/0b84cb35f38bd7d8723958dae51b480a829b7227/cardinal_pythonlib/tee.py#L272-L286 | def close(self) -> None:
"""
To act as a file.
"""
if self.underlying_stream:
if self.using_stdout:
sys.stdout = self.underlying_stream
else:
sys.stderr = self.underlying_stream
self.underlying_stream = None
if self.file:
# Do NOT close the file; we don't own it.
self.file = None
log.debug("Finished copying {} to {}",
self.output_description, self.filename) | [
"def",
"close",
"(",
"self",
")",
"->",
"None",
":",
"if",
"self",
".",
"underlying_stream",
":",
"if",
"self",
".",
"using_stdout",
":",
"sys",
".",
"stdout",
"=",
"self",
".",
"underlying_stream",
"else",
":",
"sys",
".",
"stderr",
"=",
"self",
".",
"underlying_stream",
"self",
".",
"underlying_stream",
"=",
"None",
"if",
"self",
".",
"file",
":",
"# Do NOT close the file; we don't own it.",
"self",
".",
"file",
"=",
"None",
"log",
".",
"debug",
"(",
"\"Finished copying {} to {}\"",
",",
"self",
".",
"output_description",
",",
"self",
".",
"filename",
")"
] | To act as a file. | [
"To",
"act",
"as",
"a",
"file",
"."
] | python | train |
inasafe/inasafe | safe/gui/tools/options_dialog.py | https://github.com/inasafe/inasafe/blob/831d60abba919f6d481dc94a8d988cc205130724/safe/gui/tools/options_dialog.py#L833-L843 | def save_default_values(self):
"""Save InaSAFE default values."""
for parameter_container in self.default_value_parameter_containers:
parameters = parameter_container.get_parameters()
for parameter in parameters:
set_inasafe_default_value_qsetting(
self.settings,
GLOBAL,
parameter.guid,
parameter.value
) | [
"def",
"save_default_values",
"(",
"self",
")",
":",
"for",
"parameter_container",
"in",
"self",
".",
"default_value_parameter_containers",
":",
"parameters",
"=",
"parameter_container",
".",
"get_parameters",
"(",
")",
"for",
"parameter",
"in",
"parameters",
":",
"set_inasafe_default_value_qsetting",
"(",
"self",
".",
"settings",
",",
"GLOBAL",
",",
"parameter",
".",
"guid",
",",
"parameter",
".",
"value",
")"
] | Save InaSAFE default values. | [
"Save",
"InaSAFE",
"default",
"values",
"."
] | python | train |
ShenggaoZhu/midict | midict/__init__.py | https://github.com/ShenggaoZhu/midict/blob/2fad2edcfb753035b443a70fe15852affae1b5bb/midict/__init__.py#L1450-L1479 | def rename_index(self, *args):
'''change the index name(s).
* call with one argument:
1. list of new index names (to replace all old names)
* call with two arguments:
1. old index name(s) (or index/indices)
2. new index name(s)
'''
if len(args) == 1:
new_indices = args[0]
old_indices =force_list(self.indices.keys())
else:
old_indices, new_indices = args
old_indices, single = convert_index_to_keys(self.indices, old_indices)
if single:
old_indices, new_indices = [old_indices], [new_indices]
if len(new_indices) != len(old_indices):
raise ValueError('Length of update indices (%s) does not match '
'existing indices (%s)' %
(len(new_indices), len(old_indices)))
map(MI_check_index_name, new_indices)
if len(new_indices) != len(set(new_indices)):
raise ValueError('New indices names are not unique: %s' % (new_indices,))
od_replace_key(self.indices, old_indices, new_indices, multi=True) | [
"def",
"rename_index",
"(",
"self",
",",
"*",
"args",
")",
":",
"if",
"len",
"(",
"args",
")",
"==",
"1",
":",
"new_indices",
"=",
"args",
"[",
"0",
"]",
"old_indices",
"=",
"force_list",
"(",
"self",
".",
"indices",
".",
"keys",
"(",
")",
")",
"else",
":",
"old_indices",
",",
"new_indices",
"=",
"args",
"old_indices",
",",
"single",
"=",
"convert_index_to_keys",
"(",
"self",
".",
"indices",
",",
"old_indices",
")",
"if",
"single",
":",
"old_indices",
",",
"new_indices",
"=",
"[",
"old_indices",
"]",
",",
"[",
"new_indices",
"]",
"if",
"len",
"(",
"new_indices",
")",
"!=",
"len",
"(",
"old_indices",
")",
":",
"raise",
"ValueError",
"(",
"'Length of update indices (%s) does not match '",
"'existing indices (%s)'",
"%",
"(",
"len",
"(",
"new_indices",
")",
",",
"len",
"(",
"old_indices",
")",
")",
")",
"map",
"(",
"MI_check_index_name",
",",
"new_indices",
")",
"if",
"len",
"(",
"new_indices",
")",
"!=",
"len",
"(",
"set",
"(",
"new_indices",
")",
")",
":",
"raise",
"ValueError",
"(",
"'New indices names are not unique: %s'",
"%",
"(",
"new_indices",
",",
")",
")",
"od_replace_key",
"(",
"self",
".",
"indices",
",",
"old_indices",
",",
"new_indices",
",",
"multi",
"=",
"True",
")"
] | change the index name(s).
* call with one argument:
1. list of new index names (to replace all old names)
* call with two arguments:
1. old index name(s) (or index/indices)
2. new index name(s) | [
"change",
"the",
"index",
"name",
"(",
"s",
")",
"."
] | python | train |
pypa/pipenv | pipenv/vendor/pipdeptree.py | https://github.com/pypa/pipenv/blob/cae8d76c210b9777e90aab76e9c4b0e53bb19cde/pipenv/vendor/pipdeptree.py#L113-L127 | def guess_version(pkg_key, default='?'):
"""Guess the version of a pkg when pip doesn't provide it
:param str pkg_key: key of the package
:param str default: default version to return if unable to find
:returns: version
:rtype: string
"""
try:
m = import_module(pkg_key)
except ImportError:
return default
else:
return getattr(m, '__version__', default) | [
"def",
"guess_version",
"(",
"pkg_key",
",",
"default",
"=",
"'?'",
")",
":",
"try",
":",
"m",
"=",
"import_module",
"(",
"pkg_key",
")",
"except",
"ImportError",
":",
"return",
"default",
"else",
":",
"return",
"getattr",
"(",
"m",
",",
"'__version__'",
",",
"default",
")"
] | Guess the version of a pkg when pip doesn't provide it
:param str pkg_key: key of the package
:param str default: default version to return if unable to find
:returns: version
:rtype: string | [
"Guess",
"the",
"version",
"of",
"a",
"pkg",
"when",
"pip",
"doesn",
"t",
"provide",
"it"
] | python | train |
hospadar/sqlite_object | sqlite_object/_sqlite_list.py | https://github.com/hospadar/sqlite_object/blob/a24a5d297f10a7d68b5f3e3b744654efb1eee9d4/sqlite_object/_sqlite_list.py#L198-L204 | def extend(self, iterable):
"""
Add each item from iterable to the end of the list
"""
with self.lock:
for item in iterable:
self.append(item) | [
"def",
"extend",
"(",
"self",
",",
"iterable",
")",
":",
"with",
"self",
".",
"lock",
":",
"for",
"item",
"in",
"iterable",
":",
"self",
".",
"append",
"(",
"item",
")"
] | Add each item from iterable to the end of the list | [
"Add",
"each",
"item",
"from",
"iterable",
"to",
"the",
"end",
"of",
"the",
"list"
] | python | train |
teepark/greenhouse | greenhouse/pool.py | https://github.com/teepark/greenhouse/blob/8fd1be4f5443ba090346b5ec82fdbeb0a060d956/greenhouse/pool.py#L36-L40 | def start(self):
"start the pool's workers"
for i in xrange(self.size):
scheduler.schedule(self._runner)
self._closing = False | [
"def",
"start",
"(",
"self",
")",
":",
"for",
"i",
"in",
"xrange",
"(",
"self",
".",
"size",
")",
":",
"scheduler",
".",
"schedule",
"(",
"self",
".",
"_runner",
")",
"self",
".",
"_closing",
"=",
"False"
] | start the pool's workers | [
"start",
"the",
"pool",
"s",
"workers"
] | python | train |
google/grr | grr/server/grr_response_server/aff4.py | https://github.com/google/grr/blob/5cef4e8e2f0d5df43ea4877e9c798e0bf60bfe74/grr/server/grr_response_server/aff4.py#L2393-L2409 | def ListChildren(self, limit=None, age=NEWEST_TIME):
"""Yields RDFURNs of all the children of this object.
Args:
limit: Total number of items we will attempt to retrieve.
age: The age of the items to retrieve. Should be one of ALL_TIMES,
NEWEST_TIME or a range in microseconds.
Yields:
RDFURNs instances of each child.
"""
# Just grab all the children from the index.
for predicate, timestamp in data_store.DB.AFF4FetchChildren(
self.urn, timestamp=Factory.ParseAgeSpecification(age), limit=limit):
urn = self.urn.Add(predicate)
urn.age = rdfvalue.RDFDatetime(timestamp)
yield urn | [
"def",
"ListChildren",
"(",
"self",
",",
"limit",
"=",
"None",
",",
"age",
"=",
"NEWEST_TIME",
")",
":",
"# Just grab all the children from the index.",
"for",
"predicate",
",",
"timestamp",
"in",
"data_store",
".",
"DB",
".",
"AFF4FetchChildren",
"(",
"self",
".",
"urn",
",",
"timestamp",
"=",
"Factory",
".",
"ParseAgeSpecification",
"(",
"age",
")",
",",
"limit",
"=",
"limit",
")",
":",
"urn",
"=",
"self",
".",
"urn",
".",
"Add",
"(",
"predicate",
")",
"urn",
".",
"age",
"=",
"rdfvalue",
".",
"RDFDatetime",
"(",
"timestamp",
")",
"yield",
"urn"
] | Yields RDFURNs of all the children of this object.
Args:
limit: Total number of items we will attempt to retrieve.
age: The age of the items to retrieve. Should be one of ALL_TIMES,
NEWEST_TIME or a range in microseconds.
Yields:
RDFURNs instances of each child. | [
"Yields",
"RDFURNs",
"of",
"all",
"the",
"children",
"of",
"this",
"object",
"."
] | python | train |
chrisspen/burlap | burlap/common.py | https://github.com/chrisspen/burlap/blob/a92b0a8e5206850bb777c74af8421ea8b33779bd/burlap/common.py#L890-L922 | def unregister(self):
"""
Removes this satchel from global registries.
"""
for k in list(env.keys()):
if k.startswith(self.env_prefix):
del env[k]
try:
del all_satchels[self.name.upper()]
except KeyError:
pass
try:
del manifest_recorder[self.name]
except KeyError:
pass
try:
del manifest_deployers[self.name.upper()]
except KeyError:
pass
try:
del manifest_deployers_befores[self.name.upper()]
except KeyError:
pass
try:
del required_system_packages[self.name.upper()]
except KeyError:
pass | [
"def",
"unregister",
"(",
"self",
")",
":",
"for",
"k",
"in",
"list",
"(",
"env",
".",
"keys",
"(",
")",
")",
":",
"if",
"k",
".",
"startswith",
"(",
"self",
".",
"env_prefix",
")",
":",
"del",
"env",
"[",
"k",
"]",
"try",
":",
"del",
"all_satchels",
"[",
"self",
".",
"name",
".",
"upper",
"(",
")",
"]",
"except",
"KeyError",
":",
"pass",
"try",
":",
"del",
"manifest_recorder",
"[",
"self",
".",
"name",
"]",
"except",
"KeyError",
":",
"pass",
"try",
":",
"del",
"manifest_deployers",
"[",
"self",
".",
"name",
".",
"upper",
"(",
")",
"]",
"except",
"KeyError",
":",
"pass",
"try",
":",
"del",
"manifest_deployers_befores",
"[",
"self",
".",
"name",
".",
"upper",
"(",
")",
"]",
"except",
"KeyError",
":",
"pass",
"try",
":",
"del",
"required_system_packages",
"[",
"self",
".",
"name",
".",
"upper",
"(",
")",
"]",
"except",
"KeyError",
":",
"pass"
] | Removes this satchel from global registries. | [
"Removes",
"this",
"satchel",
"from",
"global",
"registeries",
"."
] | python | valid |
ssato/python-anyconfig | src/anyconfig/cli.py | https://github.com/ssato/python-anyconfig/blob/f2f4fb8d8e232aadea866c202e1dd7a5967e2877/src/anyconfig/cli.py#L309-L330 | def _output_result(cnf, outpath, otype, inpaths, itype,
extra_opts=None):
"""
:param cnf: Configuration object to print out
:param outpath: Output file path or None
:param otype: Output type or None
:param inpaths: List of input file paths
:param itype: Input type or None
:param extra_opts: Map object will be given to API.dump as extra options
"""
fmsg = ("Unknown file type and cannot detect appropriate backend "
"from its extension, '%s'")
if not anyconfig.utils.is_dict_like(cnf):
_exit_with_output(str(cnf)) # Print primitive types as it is.
if not outpath or outpath == "-":
outpath = sys.stdout
if otype is None:
otype = _output_type_by_input_path(inpaths, itype, fmsg)
_try_dump(cnf, outpath, otype, fmsg, extra_opts=extra_opts) | [
"def",
"_output_result",
"(",
"cnf",
",",
"outpath",
",",
"otype",
",",
"inpaths",
",",
"itype",
",",
"extra_opts",
"=",
"None",
")",
":",
"fmsg",
"=",
"(",
"\"Uknown file type and cannot detect appropriate backend \"",
"\"from its extension, '%s'\"",
")",
"if",
"not",
"anyconfig",
".",
"utils",
".",
"is_dict_like",
"(",
"cnf",
")",
":",
"_exit_with_output",
"(",
"str",
"(",
"cnf",
")",
")",
"# Print primitive types as it is.",
"if",
"not",
"outpath",
"or",
"outpath",
"==",
"\"-\"",
":",
"outpath",
"=",
"sys",
".",
"stdout",
"if",
"otype",
"is",
"None",
":",
"otype",
"=",
"_output_type_by_input_path",
"(",
"inpaths",
",",
"itype",
",",
"fmsg",
")",
"_try_dump",
"(",
"cnf",
",",
"outpath",
",",
"otype",
",",
"fmsg",
",",
"extra_opts",
"=",
"extra_opts",
")"
] | :param cnf: Configuration object to print out
:param outpath: Output file path or None
:param otype: Output type or None
:param inpaths: List of input file paths
:param itype: Input type or None
:param extra_opts: Map object will be given to API.dump as extra options | [
":",
"param",
"cnf",
":",
"Configuration",
"object",
"to",
"print",
"out",
":",
"param",
"outpath",
":",
"Output",
"file",
"path",
"or",
"None",
":",
"param",
"otype",
":",
"Output",
"type",
"or",
"None",
":",
"param",
"inpaths",
":",
"List",
"of",
"input",
"file",
"paths",
":",
"param",
"itype",
":",
"Input",
"type",
"or",
"None",
":",
"param",
"extra_opts",
":",
"Map",
"object",
"will",
"be",
"given",
"to",
"API",
".",
"dump",
"as",
"extra",
"options"
] | python | train |
jstitch/MambuPy | MambuPy/rest/mambuloan.py | https://github.com/jstitch/MambuPy/blob/2af98cc12e7ed5ec183b3e97644e880e70b79ee8/MambuPy/rest/mambuloan.py#L211-L247 | def setProduct(self, cache=False, *args, **kwargs):
"""Adds the product for this loan to a 'product' field.
Product is a MambuProduct object.
cache argument allows to use AllMambuProducts singleton to
retrieve the products. See mambuproduct.AllMambuProducts code
and pydoc for further information.
Returns the number of requests done to Mambu.
"""
if cache:
try:
prods = self.allmambuproductsclass(*args, **kwargs)
except AttributeError as ae:
from .mambuproduct import AllMambuProducts
self.allmambuproductsclass = AllMambuProducts
prods = self.allmambuproductsclass(*args, **kwargs)
for prod in prods:
if prod['encodedKey'] == self['productTypeKey']:
self['product'] = prod
try:
# asked for cache, but cache was originally empty
prods.noinit
except AttributeError:
return 1
return 0
try:
product = self.mambuproductclass(entid=self['productTypeKey'], *args, **kwargs)
except AttributeError as ae:
from .mambuproduct import MambuProduct
self.mambuproductclass = MambuProduct
product = self.mambuproductclass(entid=self['productTypeKey'], *args, **kwargs)
self['product'] = product
return 1 | [
"def",
"setProduct",
"(",
"self",
",",
"cache",
"=",
"False",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"cache",
":",
"try",
":",
"prods",
"=",
"self",
".",
"allmambuproductsclass",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"except",
"AttributeError",
"as",
"ae",
":",
"from",
".",
"mambuproduct",
"import",
"AllMambuProducts",
"self",
".",
"allmambuproductsclass",
"=",
"AllMambuProducts",
"prods",
"=",
"self",
".",
"allmambuproductsclass",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"for",
"prod",
"in",
"prods",
":",
"if",
"prod",
"[",
"'encodedKey'",
"]",
"==",
"self",
"[",
"'productTypeKey'",
"]",
":",
"self",
"[",
"'product'",
"]",
"=",
"prod",
"try",
":",
"# asked for cache, but cache was originally empty",
"prods",
".",
"noinit",
"except",
"AttributeError",
":",
"return",
"1",
"return",
"0",
"try",
":",
"product",
"=",
"self",
".",
"mambuproductclass",
"(",
"entid",
"=",
"self",
"[",
"'productTypeKey'",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"except",
"AttributeError",
"as",
"ae",
":",
"from",
".",
"mambuproduct",
"import",
"MambuProduct",
"self",
".",
"mambuproductclass",
"=",
"MambuProduct",
"product",
"=",
"self",
".",
"mambuproductclass",
"(",
"entid",
"=",
"self",
"[",
"'productTypeKey'",
"]",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"self",
"[",
"'product'",
"]",
"=",
"product",
"return",
"1"
] | Adds the product for this loan to a 'product' field.
Product is a MambuProduct object.
cache argument allows to use AllMambuProducts singleton to
retrieve the products. See mambuproduct.AllMambuProducts code
and pydoc for further information.
Returns the number of requests done to Mambu. | [
"Adds",
"the",
"product",
"for",
"this",
"loan",
"to",
"a",
"product",
"field",
"."
] | python | train |
basho/riak-python-client | riak/datatypes/map.py | https://github.com/basho/riak-python-client/blob/91de13a16607cdf553d1a194e762734e3bec4231/riak/datatypes/map.py#L238-L249 | def value(self):
"""
Returns a copy of the original map's value. Nested values are
pure Python values as returned by :attr:`Datatype.value` from
the nested types.
:rtype: dict
"""
pvalue = {}
for key in self._value:
pvalue[key] = self._value[key].value
return pvalue | [
"def",
"value",
"(",
"self",
")",
":",
"pvalue",
"=",
"{",
"}",
"for",
"key",
"in",
"self",
".",
"_value",
":",
"pvalue",
"[",
"key",
"]",
"=",
"self",
".",
"_value",
"[",
"key",
"]",
".",
"value",
"return",
"pvalue"
] | Returns a copy of the original map's value. Nested values are
pure Python values as returned by :attr:`Datatype.value` from
the nested types.
:rtype: dict | [
"Returns",
"a",
"copy",
"of",
"the",
"original",
"map",
"s",
"value",
".",
"Nested",
"values",
"are",
"pure",
"Python",
"values",
"as",
"returned",
"by",
":",
"attr",
":",
"Datatype",
".",
"value",
"from",
"the",
"nested",
"types",
"."
] | python | train |
iotile/coretools | iotilesensorgraph/iotile/sg/node_descriptor.py | https://github.com/iotile/coretools/blob/2d794f5f1346b841b0dcd16c9d284e9bf2f3c6ec/iotilesensorgraph/iotile/sg/node_descriptor.py#L32-L84 | def parse_node_descriptor(desc, model):
"""Parse a string node descriptor.
The function creates an SGNode object without connecting its inputs and outputs
and returns a 3-tuple:
SGNode, [(input X, trigger X)], <processing function name>
Args:
desc (str): A description of the node to be created.
model (str): A device model for the node to be created that sets any
device specific limits on how the node is set up.
"""
try:
data = graph_node.parseString(desc)
except ParseException:
raise # TODO: Fix this to properly encapsulate the parse error
stream_desc = u' '.join(data['node'])
stream = DataStream.FromString(stream_desc)
node = SGNode(stream, model)
inputs = []
if 'input_a' in data:
input_a = data['input_a']
stream_a = DataStreamSelector.FromString(u' '.join(input_a['input_stream']))
trigger_a = None
if 'type' in input_a:
trigger_a = InputTrigger(input_a['type'], input_a['op'], int(input_a['reference'], 0))
inputs.append((stream_a, trigger_a))
if 'input_b' in data:
input_a = data['input_b']
stream_a = DataStreamSelector.FromString(u' '.join(input_a['input_stream']))
trigger_a = None
if 'type' in input_a:
trigger_a = InputTrigger(input_a['type'], input_a['op'], int(input_a['reference'], 0))
inputs.append((stream_a, trigger_a))
if 'combiner' in data and str(data['combiner']) == u'||':
node.trigger_combiner = SGNode.OrTriggerCombiner
else:
node.trigger_combiner = SGNode.AndTriggerCombiner
processing = data['processor']
return node, inputs, processing | [
"def",
"parse_node_descriptor",
"(",
"desc",
",",
"model",
")",
":",
"try",
":",
"data",
"=",
"graph_node",
".",
"parseString",
"(",
"desc",
")",
"except",
"ParseException",
":",
"raise",
"# TODO: Fix this to properly encapsulate the parse error",
"stream_desc",
"=",
"u' '",
".",
"join",
"(",
"data",
"[",
"'node'",
"]",
")",
"stream",
"=",
"DataStream",
".",
"FromString",
"(",
"stream_desc",
")",
"node",
"=",
"SGNode",
"(",
"stream",
",",
"model",
")",
"inputs",
"=",
"[",
"]",
"if",
"'input_a'",
"in",
"data",
":",
"input_a",
"=",
"data",
"[",
"'input_a'",
"]",
"stream_a",
"=",
"DataStreamSelector",
".",
"FromString",
"(",
"u' '",
".",
"join",
"(",
"input_a",
"[",
"'input_stream'",
"]",
")",
")",
"trigger_a",
"=",
"None",
"if",
"'type'",
"in",
"input_a",
":",
"trigger_a",
"=",
"InputTrigger",
"(",
"input_a",
"[",
"'type'",
"]",
",",
"input_a",
"[",
"'op'",
"]",
",",
"int",
"(",
"input_a",
"[",
"'reference'",
"]",
",",
"0",
")",
")",
"inputs",
".",
"append",
"(",
"(",
"stream_a",
",",
"trigger_a",
")",
")",
"if",
"'input_b'",
"in",
"data",
":",
"input_a",
"=",
"data",
"[",
"'input_b'",
"]",
"stream_a",
"=",
"DataStreamSelector",
".",
"FromString",
"(",
"u' '",
".",
"join",
"(",
"input_a",
"[",
"'input_stream'",
"]",
")",
")",
"trigger_a",
"=",
"None",
"if",
"'type'",
"in",
"input_a",
":",
"trigger_a",
"=",
"InputTrigger",
"(",
"input_a",
"[",
"'type'",
"]",
",",
"input_a",
"[",
"'op'",
"]",
",",
"int",
"(",
"input_a",
"[",
"'reference'",
"]",
",",
"0",
")",
")",
"inputs",
".",
"append",
"(",
"(",
"stream_a",
",",
"trigger_a",
")",
")",
"if",
"'combiner'",
"in",
"data",
"and",
"str",
"(",
"data",
"[",
"'combiner'",
"]",
")",
"==",
"u'||'",
":",
"node",
".",
"trigger_combiner",
"=",
"SGNode",
".",
"OrTriggerCombiner",
"else",
":",
"node",
".",
"trigger_combiner",
"=",
"SGNode",
".",
"AndTriggerCombiner",
"processing",
"=",
"data",
"[",
"'processor'",
"]",
"return",
"node",
",",
"inputs",
",",
"processing"
] | Parse a string node descriptor.
The function creates an SGNode object without connecting its inputs and outputs
and returns a 3-tuple:
SGNode, [(input X, trigger X)], <processing function name>
Args:
desc (str): A description of the node to be created.
model (str): A device model for the node to be created that sets any
device specific limits on how the node is set up. | [
"Parse",
"a",
"string",
"node",
"descriptor",
"."
] | python | train |
CyberZHG/keras-word-char-embd | keras_wc_embd/wrapper.py | https://github.com/CyberZHG/keras-word-char-embd/blob/cca6ddff01b6264dd0d12613bb9ed308e1367b8c/keras_wc_embd/wrapper.py#L47-L54 | def get_dicts(self):
"""Get word and character dictionaries.
:return word_dict, char_dict:
"""
if self.word_dict is None:
self.word_dict, self.char_dict, self.max_word_len = self.dict_generator(return_dict=True)
return self.word_dict, self.char_dict | [
"def",
"get_dicts",
"(",
"self",
")",
":",
"if",
"self",
".",
"word_dict",
"is",
"None",
":",
"self",
".",
"word_dict",
",",
"self",
".",
"char_dict",
",",
"self",
".",
"max_word_len",
"=",
"self",
".",
"dict_generator",
"(",
"return_dict",
"=",
"True",
")",
"return",
"self",
".",
"word_dict",
",",
"self",
".",
"char_dict"
] | Get word and character dictionaries.
:return word_dict, char_dict: | [
"Get",
"word",
"and",
"character",
"dictionaries",
"."
] | python | train |
saltstack/salt | salt/modules/supervisord.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/modules/supervisord.py#L28-L49 | def _get_supervisorctl_bin(bin_env):
'''
Return supervisorctl command to call, either from a virtualenv, an argument
passed in, or from the global modules options
'''
cmd = 'supervisorctl'
if not bin_env:
which_result = __salt__['cmd.which_bin']([cmd])
if which_result is None:
raise CommandNotFoundError(
'Could not find a `{0}` binary'.format(cmd)
)
return which_result
# try to get binary from env
if os.path.isdir(bin_env):
cmd_bin = os.path.join(bin_env, 'bin', cmd)
if os.path.isfile(cmd_bin):
return cmd_bin
raise CommandNotFoundError('Could not find a `{0}` binary'.format(cmd))
return bin_env | [
"def",
"_get_supervisorctl_bin",
"(",
"bin_env",
")",
":",
"cmd",
"=",
"'supervisorctl'",
"if",
"not",
"bin_env",
":",
"which_result",
"=",
"__salt__",
"[",
"'cmd.which_bin'",
"]",
"(",
"[",
"cmd",
"]",
")",
"if",
"which_result",
"is",
"None",
":",
"raise",
"CommandNotFoundError",
"(",
"'Could not find a `{0}` binary'",
".",
"format",
"(",
"cmd",
")",
")",
"return",
"which_result",
"# try to get binary from env",
"if",
"os",
".",
"path",
".",
"isdir",
"(",
"bin_env",
")",
":",
"cmd_bin",
"=",
"os",
".",
"path",
".",
"join",
"(",
"bin_env",
",",
"'bin'",
",",
"cmd",
")",
"if",
"os",
".",
"path",
".",
"isfile",
"(",
"cmd_bin",
")",
":",
"return",
"cmd_bin",
"raise",
"CommandNotFoundError",
"(",
"'Could not find a `{0}` binary'",
".",
"format",
"(",
"cmd",
")",
")",
"return",
"bin_env"
] | Return supervisorctl command to call, either from a virtualenv, an argument
passed in, or from the global modules options | [
"Return",
"supervisorctl",
"command",
"to",
"call",
"either",
"from",
"a",
"virtualenv",
"an",
"argument",
"passed",
"in",
"or",
"from",
"the",
"global",
"modules",
"options"
] | python | train |
casacore/python-casacore | casacore/measures/__init__.py | https://github.com/casacore/python-casacore/blob/975510861ea005f7919dd9e438b5f98a1682eebe/casacore/measures/__init__.py#L777-L805 | def rise(self, crd, ev='5deg'):
"""This method will give the rise/set hour-angles of a source. It
needs the position in the frame, and a time. If the latter is not
set, the current time will be used.
:param crd: a direction measure
:param ev: the elevation limit as a quantity or string
:returns: `dict` with rise and set sidereal time quantities or 2
strings "below" or "above"
"""
if not is_measure(crd):
raise TypeError('No rise/set coordinates specified')
ps = self._getwhere()
self._fillnow()
hd = self.measure(crd, "hadec")
c = self.measure(crd, "app")
evq = dq.quantity(ev)
hdm1 = dq.quantity(hd["m1"])
psm1 = dq.quantity(ps["m1"])
ct = (dq.sin(dq.quantity(ev)) - (dq.sin(hdm1) * dq.sin(psm1))) \
/ (dq.cos(hdm1) * dq.cos(psm1))
if ct.get_value() >= 1:
return {'rise': 'below', 'set': 'below'}
if ct.get_value() <= -1:
return {'rise': 'above', 'set': 'above'}
a = dq.acos(ct)
return dict(rise=dq.quantity(c["m0"]).norm(0) - a,
set=dq.quantity(c["m0"]).norm(0) + a) | [
"def",
"rise",
"(",
"self",
",",
"crd",
",",
"ev",
"=",
"'5deg'",
")",
":",
"if",
"not",
"is_measure",
"(",
"crd",
")",
":",
"raise",
"TypeError",
"(",
"'No rise/set coordinates specified'",
")",
"ps",
"=",
"self",
".",
"_getwhere",
"(",
")",
"self",
".",
"_fillnow",
"(",
")",
"hd",
"=",
"self",
".",
"measure",
"(",
"crd",
",",
"\"hadec\"",
")",
"c",
"=",
"self",
".",
"measure",
"(",
"crd",
",",
"\"app\"",
")",
"evq",
"=",
"dq",
".",
"quantity",
"(",
"ev",
")",
"hdm1",
"=",
"dq",
".",
"quantity",
"(",
"hd",
"[",
"\"m1\"",
"]",
")",
"psm1",
"=",
"dq",
".",
"quantity",
"(",
"ps",
"[",
"\"m1\"",
"]",
")",
"ct",
"=",
"(",
"dq",
".",
"sin",
"(",
"dq",
".",
"quantity",
"(",
"ev",
")",
")",
"-",
"(",
"dq",
".",
"sin",
"(",
"hdm1",
")",
"*",
"dq",
".",
"sin",
"(",
"psm1",
")",
")",
")",
"/",
"(",
"dq",
".",
"cos",
"(",
"hdm1",
")",
"*",
"dq",
".",
"cos",
"(",
"psm1",
")",
")",
"if",
"ct",
".",
"get_value",
"(",
")",
">=",
"1",
":",
"return",
"{",
"'rise'",
":",
"'below'",
",",
"'set'",
":",
"'below'",
"}",
"if",
"ct",
".",
"get_value",
"(",
")",
"<=",
"-",
"1",
":",
"return",
"{",
"'rise'",
":",
"'above'",
",",
"'set'",
":",
"'above'",
"}",
"a",
"=",
"dq",
".",
"acos",
"(",
"ct",
")",
"return",
"dict",
"(",
"rise",
"=",
"dq",
".",
"quantity",
"(",
"c",
"[",
"\"m0\"",
"]",
")",
".",
"norm",
"(",
"0",
")",
"-",
"a",
",",
"set",
"=",
"dq",
".",
"quantity",
"(",
"c",
"[",
"\"m0\"",
"]",
")",
".",
"norm",
"(",
"0",
")",
"+",
"a",
")"
] | This method will give the rise/set hour-angles of a source. It
needs the position in the frame, and a time. If the latter is not
set, the current time will be used.
:param crd: a direction measure
:param ev: the elevation limit as a quantity or string
:returns: `dict` with rise and set sidereal time quantities or 2
strings "below" or "above" | [
"This",
"method",
"will",
"give",
"the",
"rise",
"/",
"set",
"hour",
"-",
"angles",
"of",
"a",
"source",
".",
"It",
"needs",
"the",
"position",
"in",
"the",
"frame",
"and",
"a",
"time",
".",
"If",
"the",
"latter",
"is",
"not",
"set",
"the",
"current",
"time",
"will",
"be",
"used",
"."
] | python | train |
datasift/datasift-python | datasift/token.py | https://github.com/datasift/datasift-python/blob/bfaca1a47501a18e11065ecf630d9c31df818f65/datasift/token.py#L56-L71 | def update(self, identity_id, service, token=None):
""" Update the token
:param identity_id: The ID of the identity to retrieve
:return: dict of REST API output with headers attached
:rtype: :class:`~datasift.request.DictResponse`
:raises: :class:`~datasift.exceptions.DataSiftApiException`,
:class:`requests.exceptions.HTTPError`
"""
params = {}
if token:
params['token'] = token
return self.request.put(str(identity_id) + '/token/' + service, params) | [
"def",
"update",
"(",
"self",
",",
"identity_id",
",",
"service",
",",
"token",
"=",
"None",
")",
":",
"params",
"=",
"{",
"}",
"if",
"token",
":",
"params",
"[",
"'token'",
"]",
"=",
"token",
"return",
"self",
".",
"request",
".",
"put",
"(",
"str",
"(",
"identity_id",
")",
"+",
"'/token/'",
"+",
"service",
",",
"params",
")"
] | Update the token
:param identity_id: The ID of the identity to retrieve
:return: dict of REST API output with headers attached
:rtype: :class:`~datasift.request.DictResponse`
:raises: :class:`~datasift.exceptions.DataSiftApiException`,
:class:`requests.exceptions.HTTPError` | [
"Update",
"the",
"token"
] | python | train |
LordDarkula/chess_py | chess_py/core/algebraic/converter.py | https://github.com/LordDarkula/chess_py/blob/14bebc2f8c49ae25c59375cc83d0b38d8ff7281d/chess_py/core/algebraic/converter.py#L349-L383 | def long_alg(alg_str, position):
"""
Converts a string written in long algebraic form
and the corresponding position into a complete move
(initial location specified). Used primarily for
UCI, but can be used for other purposes.
:type: alg_str: str
:type: position: Board
:rtype: Move
"""
if alg_str is None or len(alg_str) < 4 or len(alg_str) > 6:
raise ValueError("Invalid string input {}".format(alg_str))
end = Location.from_string(alg_str[2:])
start = Location.from_string(alg_str[:2])
piece = position.piece_at_square(start)
if len(alg_str) == 4:
return make_legal(Move(end_loc=end,
piece=piece,
status=notation_const.LONG_ALG,
start_loc=start), position)
promoted_to = _get_piece(alg_str, 4)
if promoted_to is None or \
promoted_to is King or \
promoted_to is Pawn:
raise Exception("Invalid move input")
return make_legal(Move(end_loc=end,
piece=piece,
status=notation_const.LONG_ALG,
start_loc=start,
promoted_to_piece=promoted_to), position) | [
"def",
"long_alg",
"(",
"alg_str",
",",
"position",
")",
":",
"if",
"alg_str",
"is",
"None",
"or",
"len",
"(",
"alg_str",
")",
"<",
"4",
"or",
"len",
"(",
"alg_str",
")",
">",
"6",
":",
"raise",
"ValueError",
"(",
"\"Invalid string input {}\"",
".",
"format",
"(",
"alg_str",
")",
")",
"end",
"=",
"Location",
".",
"from_string",
"(",
"alg_str",
"[",
"2",
":",
"]",
")",
"start",
"=",
"Location",
".",
"from_string",
"(",
"alg_str",
"[",
":",
"2",
"]",
")",
"piece",
"=",
"position",
".",
"piece_at_square",
"(",
"start",
")",
"if",
"len",
"(",
"alg_str",
")",
"==",
"4",
":",
"return",
"make_legal",
"(",
"Move",
"(",
"end_loc",
"=",
"end",
",",
"piece",
"=",
"piece",
",",
"status",
"=",
"notation_const",
".",
"LONG_ALG",
",",
"start_loc",
"=",
"start",
")",
",",
"position",
")",
"promoted_to",
"=",
"_get_piece",
"(",
"alg_str",
",",
"4",
")",
"if",
"promoted_to",
"is",
"None",
"or",
"promoted_to",
"is",
"King",
"or",
"promoted_to",
"is",
"Pawn",
":",
"raise",
"Exception",
"(",
"\"Invalid move input\"",
")",
"return",
"make_legal",
"(",
"Move",
"(",
"end_loc",
"=",
"end",
",",
"piece",
"=",
"piece",
",",
"status",
"=",
"notation_const",
".",
"LONG_ALG",
",",
"start_loc",
"=",
"start",
",",
"promoted_to_piece",
"=",
"promoted_to",
")",
",",
"position",
")"
] | Converts a string written in long algebraic form
and the corresponding position into a complete move
(initial location specified). Used primarily for
UCI, but can be used for other purposes.
:type: alg_str: str
:type: position: Board
:rtype: Move | [
"Converts",
"a",
"string",
"written",
"in",
"long",
"algebraic",
"form",
"and",
"the",
"corresponding",
"position",
"into",
"a",
"complete",
"move",
"(",
"initial",
"location",
"specified",
")",
".",
"Used",
"primarily",
"for",
"UCI",
"but",
"can",
"be",
"used",
"for",
"other",
"purposes",
"."
] | python | train |
pywbem/pywbem | pywbem/cim_operations.py | https://github.com/pywbem/pywbem/blob/e54ecb82c2211e289a268567443d60fdd489f1e4/pywbem/cim_operations.py#L4581-L4849 | def IterEnumerateInstancePaths(self, ClassName, namespace=None,
FilterQueryLanguage=None, FilterQuery=None,
OperationTimeout=None, ContinueOnError=None,
MaxObjectCount=DEFAULT_ITER_MAXOBJECTCOUNT,
**extra):
"""
Enumerate the instance paths of instances of a class (including
instances of its subclasses) in a namespace, using the
Python :term:`py:generator` idiom to return the result.
*New in pywbem 0.10 as experimental and finalized in 0.12.*
This method uses the corresponding pull operations if supported by the
WBEM server or otherwise the corresponding traditional operation.
This method is an alternative to using the pull operations directly,
that frees the user of having to know whether the WBEM server supports
pull operations.
This method is a generator function that retrieves instance paths from
the WBEM server and returns them one by one (using :keyword:`yield`)
when the caller iterates through the returned generator object. The
number of instance paths that are retrieved from the WBEM server in one
request (and thus need to be materialized in this method) is up to the
`MaxObjectCount` parameter if the corresponding pull operations are
used, or the complete result set all at once if the corresponding
traditional operation is used.
By default, this method attempts to perform the corresponding pull
operations
(:meth:`~pywbem.WBEMConnection.OpenEnumerateInstancePaths` and
:meth:`~pywbem.WBEMConnection.PullInstancePaths`).
If these pull operations are not supported by the WBEM server, this
method falls back to using the corresponding traditional operation
(:meth:`~pywbem.WBEMConnection.EnumerateInstanceNames`).
Whether the WBEM server supports these pull operations is remembered
in the :class:`~pywbem.WBEMConnection` object (by operation type), and
avoids unnecessary attempts to try these pull operations on that
connection in the future.
The `use_pull_operations` init parameter of
:class:`~pywbem.WBEMConnection` can be used to control the preference
for always using pull operations, always using traditional operations,
or using pull operations if supported by the WBEM server (the default).
This method provides all of the controls of the corresponding pull
operations except for the ability to set different response sizes on
each request; the response size (defined by the `MaxObjectCount`
parameter) is the same for all pull operations in the enumeration
session.
In addition, some functionality is only available if the corresponding
pull operations are used by this method:
* Filtering is not supported for the corresponding traditional
operation so that setting the `FilterQuery` or `FilterQueryLanguage`
parameters will be rejected if the corresponding traditional
operation is used by this method.
Note that this limitation is not a disadvantage compared to using the
corresponding pull operations directly, because in both cases, the
WBEM server must support the pull operations and their filtering
capability in order for the filtering to work.
* Setting the `ContinueOnError` parameter to `True` will be rejected if
the corresponding traditional operation is used by this method.
The enumeration session that is opened with the WBEM server when using
pull operations is closed automatically when the returned generator
object is exhausted, or when the generator object is closed using its
:meth:`~py:generator.close` method (which may also be called before the
generator is exhausted).
Parameters:
ClassName (:term:`string` or :class:`~pywbem.CIMClassName`):
Name of the class to be enumerated (case independent).
If specified as a :class:`~pywbem.CIMClassName` object, its
`namespace` attribute will be used as a default namespace as
described for the `namespace` parameter, and its `host` attribute
will be ignored.
namespace (:term:`string`):
Name of the CIM namespace to be used (case independent).
Leading and trailing slash characters will be stripped. The lexical
case will be preserved.
If `None`, the namespace of the `ClassName` parameter will be used,
if specified as a :class:`~pywbem.CIMClassName` object. If that is
also `None`, the default namespace of the connection will be used.
FilterQueryLanguage (:term:`string`):
The name of the filter query language used for the `FilterQuery`
parameter. The DMTF-defined Filter Query Language (see
:term:`DSP0212`) is specified as "DMTF:FQL".
If this parameter is not `None` and the traditional operation is
used by this method, :exc:`~py:exceptions.ValueError` will be
raised.
Not all WBEM servers support filtering for this operation because
it returns instance paths and the act of the server filtering
requires that it generate instances just for that purpose and then
discard them.
FilterQuery (:term:`string`):
The filter query in the query language defined by the
`FilterQueryLanguage` parameter.
If this parameter is not `None` and the traditional operation is
used by this method, :exc:`~py:exceptions.ValueError` will be
raised.
OperationTimeout (:class:`~pywbem.Uint32`):
Minimum time in seconds the WBEM Server shall maintain an open
enumeration session after a previous Open or Pull request is
sent to the client. Once this timeout time has expired, the
WBEM server may close the enumeration session.
* If not `None`, this parameter is sent to the WBEM server as the
proposed timeout for the enumeration session. A value of 0
indicates that the server is expected to never time out. The
server may reject the proposed value, causing a
:class:`~pywbem.CIMError` to be raised with status code
:attr:`~pywbem.CIM_ERR_INVALID_OPERATION_TIMEOUT`.
* If `None`, this parameter is not passed to the WBEM server, and
causes the server-implemented default timeout to be used.
ContinueOnError (:class:`py:bool`):
Indicates to the WBEM server to continue sending responses
after an error response has been sent.
* If `True`, the server is to continue sending responses after
sending an error response. Not all servers support continuation
on error; a server that does not support it must send an error
response if `True` was specified, causing
:class:`~pywbem.CIMError` to be raised with status code
:attr:`~pywbem.CIM_ERR_CONTINUATION_ON_ERROR_NOT_SUPPORTED`.
If the corresponding traditional operation is used by this
method, :exc:`~py:exceptions.ValueError` will be raised.
* If `False`, the server is requested to close the enumeration after
sending an error response.
* If `None`, this parameter is not passed to the WBEM server, and
causes the server-implemented default behaviour to be used.
:term:`DSP0200` defines that the server-implemented default is
`False`.
MaxObjectCount (:class:`~pywbem.Uint32`)
Maximum number of instances the WBEM server may return for each of
the open and pull requests issued during the iterations over the
returned generator object.
* If positive, the WBEM server is to return no more than the
specified number of instance paths.
* Zero is not allowed; it would mean that zero paths
are to be returned for every request issued.
* The default is defined as a system config variable.
* `None` is not allowed.
The choice of MaxObjectCount is client/server dependent but choices
between 100 and 1000 typically do not have a significant impact on
either memory or overall efficiency.
**extra :
Additional keyword arguments are passed as additional operation
parameters to the WBEM server.
Note that :term:`DSP0200` does not define any additional parameters
for this operation.
Raises:
Exceptions described in :class:`~pywbem.WBEMConnection`.
Returns:
:term:`py:generator` iterating :class:`~pywbem.CIMInstanceName`:
A generator object that iterates the resulting CIM instance paths.
These instance paths have their host and namespace components set.
Example::
paths_generator = conn.IterEnumerateInstancePaths('CIM_Blah')
for path in paths_generator:
print('path {0}'.format(path))
"""
_validateIterCommonParams(MaxObjectCount, OperationTimeout)
# Common variable for pull result tuple used by pulls and finally:
pull_result = None
try: # try / finally block to allow iter.close()
if (self._use_enum_path_pull_operations is None or
self._use_enum_path_pull_operations):
try: # operation try block
pull_result = self.OpenEnumerateInstancePaths(
ClassName, namespace=namespace,
FilterQueryLanguage=FilterQueryLanguage,
FilterQuery=FilterQuery,
OperationTimeout=OperationTimeout,
ContinueOnError=ContinueOnError,
MaxObjectCount=MaxObjectCount, **extra)
# Open operation succeeded; set has_pull flag
self._use_enum_path_pull_operations = True
for inst in pull_result.paths:
yield inst
# Loop to pull while more while eos not returned.
while not pull_result.eos:
pull_result = self.PullInstancePaths(
pull_result.context, MaxObjectCount=MaxObjectCount)
for inst in pull_result.paths:
yield inst
pull_result = None # clear the pull_result
return
# If NOT_SUPPORTED and first request, set flag and try
# alternative request operation.
# If use_pull_operations is True, always raise the exception
except CIMError as ce:
if (self._use_enum_path_pull_operations is None and
ce.status_code == CIM_ERR_NOT_SUPPORTED):
self._use_enum_path_pull_operations = False
else:
raise
# Alternate request if Pull not implemented. This does not allow
# the FilterQuery or ContinueOnError
assert self._use_enum_path_pull_operations is False
if FilterQuery is not None or FilterQueryLanguage is not None:
raise ValueError('EnumerateInstanceNames does not support'
' FilterQuery.')
if ContinueOnError is not None:
raise ValueError('EnumerateInstanceNames does not support '
'ContinueOnError.')
enum_rslt = self.EnumerateInstanceNames(
ClassName, namespace=namespace, **extra)
# pylint: disable=unused-variable
host, port, ssl = parse_url(self.url)
# get namespace for the operation
if namespace is None and isinstance(ClassName, CIMClassName):
namespace = ClassName.namespace
namespace = self._iparam_namespace_from_namespace(namespace)
for path in enum_rslt:
if path.namespace is None:
path.namespace = namespace
if path.host is None:
path.host = host
for inst in enum_rslt:
yield inst
# Cleanup if caller closes the iterator before exhausting it
finally:
# Cleanup only required if the pull context is open and not complete
if pull_result is not None and not pull_result.eos:
self.CloseEnumeration(pull_result.context)
pull_result = None | [
"def",
"IterEnumerateInstancePaths",
"(",
"self",
",",
"ClassName",
",",
"namespace",
"=",
"None",
",",
"FilterQueryLanguage",
"=",
"None",
",",
"FilterQuery",
"=",
"None",
",",
"OperationTimeout",
"=",
"None",
",",
"ContinueOnError",
"=",
"None",
",",
"MaxObjectCount",
"=",
"DEFAULT_ITER_MAXOBJECTCOUNT",
",",
"*",
"*",
"extra",
")",
":",
"_validateIterCommonParams",
"(",
"MaxObjectCount",
",",
"OperationTimeout",
")",
"# Common variable for pull result tuple used by pulls and finally:",
"pull_result",
"=",
"None",
"try",
":",
"# try / finally block to allow iter.close()",
"if",
"(",
"self",
".",
"_use_enum_path_pull_operations",
"is",
"None",
"or",
"self",
".",
"_use_enum_path_pull_operations",
")",
":",
"try",
":",
"# operation try block",
"pull_result",
"=",
"self",
".",
"OpenEnumerateInstancePaths",
"(",
"ClassName",
",",
"namespace",
"=",
"namespace",
",",
"FilterQueryLanguage",
"=",
"FilterQueryLanguage",
",",
"FilterQuery",
"=",
"FilterQuery",
",",
"OperationTimeout",
"=",
"OperationTimeout",
",",
"ContinueOnError",
"=",
"ContinueOnError",
",",
"MaxObjectCount",
"=",
"MaxObjectCount",
",",
"*",
"*",
"extra",
")",
"# Open operation succeeded; set has_pull flag",
"self",
".",
"_use_enum_path_pull_operations",
"=",
"True",
"for",
"inst",
"in",
"pull_result",
".",
"paths",
":",
"yield",
"inst",
"# Loop to pull while more while eos not returned.",
"while",
"not",
"pull_result",
".",
"eos",
":",
"pull_result",
"=",
"self",
".",
"PullInstancePaths",
"(",
"pull_result",
".",
"context",
",",
"MaxObjectCount",
"=",
"MaxObjectCount",
")",
"for",
"inst",
"in",
"pull_result",
".",
"paths",
":",
"yield",
"inst",
"pull_result",
"=",
"None",
"# clear the pull_result",
"return",
"# If NOT_SUPPORTED and first request, set flag and try",
"# alternative request operation.",
"# If use_pull_operations is True, always raise the exception",
"except",
"CIMError",
"as",
"ce",
":",
"if",
"(",
"self",
".",
"_use_enum_path_pull_operations",
"is",
"None",
"and",
"ce",
".",
"status_code",
"==",
"CIM_ERR_NOT_SUPPORTED",
")",
":",
"self",
".",
"_use_enum_path_pull_operations",
"=",
"False",
"else",
":",
"raise",
"# Alternate request if Pull not implemented. This does not allow",
"# the FilterQuery or ContinueOnError",
"assert",
"self",
".",
"_use_enum_path_pull_operations",
"is",
"False",
"if",
"FilterQuery",
"is",
"not",
"None",
"or",
"FilterQueryLanguage",
"is",
"not",
"None",
":",
"raise",
"ValueError",
"(",
"'EnumerateInstanceNnames does not support'",
"' FilterQuery.'",
")",
"if",
"ContinueOnError",
"is",
"not",
"None",
":",
"raise",
"ValueError",
"(",
"'EnumerateInstanceNames does not support '",
"'ContinueOnError.'",
")",
"enum_rslt",
"=",
"self",
".",
"EnumerateInstanceNames",
"(",
"ClassName",
",",
"namespace",
"=",
"namespace",
",",
"*",
"*",
"extra",
")",
"# pylint: disable=unused-variable",
"host",
",",
"port",
",",
"ssl",
"=",
"parse_url",
"(",
"self",
".",
"url",
")",
"# get namespace for the operation",
"if",
"namespace",
"is",
"None",
"and",
"isinstance",
"(",
"ClassName",
",",
"CIMClassName",
")",
":",
"namespace",
"=",
"ClassName",
".",
"namespace",
"namespace",
"=",
"self",
".",
"_iparam_namespace_from_namespace",
"(",
"namespace",
")",
"for",
"path",
"in",
"enum_rslt",
":",
"if",
"path",
".",
"namespace",
"is",
"None",
":",
"path",
".",
"namespace",
"=",
"namespace",
"if",
"path",
".",
"host",
"is",
"None",
":",
"path",
".",
"host",
"=",
"host",
"for",
"inst",
"in",
"enum_rslt",
":",
"yield",
"inst",
"# Cleanup if caller closes the iterator before exhausting it",
"finally",
":",
"# Cleanup only required if the pull context is open and not complete",
"if",
"pull_result",
"is",
"not",
"None",
"and",
"not",
"pull_result",
".",
"eos",
":",
"self",
".",
"CloseEnumeration",
"(",
"pull_result",
".",
"context",
")",
"pull_result",
"=",
"None"
] | Enumerate the instance paths of instances of a class (including
instances of its subclasses) in a namespace, using the
Python :term:`py:generator` idiom to return the result.
*New in pywbem 0.10 as experimental and finalized in 0.12.*
This method uses the corresponding pull operations if supported by the
WBEM server or otherwise the corresponding traditional operation.
This method is an alternative to using the pull operations directly,
that frees the user of having to know whether the WBEM server supports
pull operations.
This method is a generator function that retrieves instance paths from
the WBEM server and returns them one by one (using :keyword:`yield`)
when the caller iterates through the returned generator object. The
number of instance paths that are retrieved from the WBEM server in one
request (and thus need to be materialized in this method) is up to the
`MaxObjectCount` parameter if the corresponding pull operations are
used, or the complete result set all at once if the corresponding
traditional operation is used.
By default, this method attempts to perform the corresponding pull
operations
(:meth:`~pywbem.WBEMConnection.OpenEnumerateInstancePaths` and
:meth:`~pywbem.WBEMConnection.PullInstancePaths`).
If these pull operations are not supported by the WBEM server, this
method falls back to using the corresponding traditional operation
(:meth:`~pywbem.WBEMConnection.EnumerateInstanceNames`).
Whether the WBEM server supports these pull operations is remembered
in the :class:`~pywbem.WBEMConnection` object (by operation type), and
avoids unnecessary attempts to try these pull operations on that
connection in the future.
The `use_pull_operations` init parameter of
:class:`~pywbem.WBEMConnection` can be used to control the preference
for always using pull operations, always using traditional operations,
or using pull operations if supported by the WBEM server (the default).
This method provides all of the controls of the corresponding pull
operations except for the ability to set different response sizes on
each request; the response size (defined by the `MaxObjectCount`
parameter) is the same for all pull operations in the enumeration
session.
In addition, some functionality is only available if the corresponding
pull operations are used by this method:
* Filtering is not supported for the corresponding traditional
operation so that setting the `FilterQuery` or `FilterQueryLanguage`
parameters will be rejected if the corresponding traditional
operation is used by this method.
Note that this limitation is not a disadvantage compared to using the
corresponding pull operations directly, because in both cases, the
WBEM server must support the pull operations and their filtering
capability in order for the filtering to work.
* Setting the `ContinueOnError` parameter to `True` will be rejected if
the corresponding traditional operation is used by this method.
The enumeration session that is opened with the WBEM server when using
pull operations is closed automatically when the returned generator
object is exhausted, or when the generator object is closed using its
:meth:`~py:generator.close` method (which may also be called before the
generator is exhausted).
Parameters:
ClassName (:term:`string` or :class:`~pywbem.CIMClassName`):
Name of the class to be enumerated (case independent).
If specified as a :class:`~pywbem.CIMClassName` object, its
`namespace` attribute will be used as a default namespace as
described for the `namespace` parameter, and its `host` attribute
will be ignored.
namespace (:term:`string`):
Name of the CIM namespace to be used (case independent).
Leading and trailing slash characters will be stripped. The lexical
case will be preserved.
If `None`, the namespace of the `ClassName` parameter will be used,
if specified as a :class:`~pywbem.CIMClassName` object. If that is
also `None`, the default namespace of the connection will be used.
FilterQueryLanguage (:term:`string`):
The name of the filter query language used for the `FilterQuery`
parameter. The DMTF-defined Filter Query Language (see
:term:`DSP0212`) is specified as "DMTF:FQL".
If this parameter is not `None` and the traditional operation is
used by this method, :exc:`~py:exceptions.ValueError` will be
raised.
Not all WBEM servers support filtering for this operation because
it returns instance paths and the act of the server filtering
requires that it generate instances just for that purpose and then
discard them.
FilterQuery (:term:`string`):
The filter query in the query language defined by the
`FilterQueryLanguage` parameter.
If this parameter is not `None` and the traditional operation is
used by this method, :exc:`~py:exceptions.ValueError` will be
raised.
OperationTimeout (:class:`~pywbem.Uint32`):
Minimum time in seconds the WBEM Server shall maintain an open
enumeration session after a previous Open or Pull request is
sent to the client. Once this timeout time has expired, the
WBEM server may close the enumeration session.
* If not `None`, this parameter is sent to the WBEM server as the
proposed timeout for the enumeration session. A value of 0
indicates that the server is expected to never time out. The
server may reject the proposed value, causing a
:class:`~pywbem.CIMError` to be raised with status code
:attr:`~pywbem.CIM_ERR_INVALID_OPERATION_TIMEOUT`.
* If `None`, this parameter is not passed to the WBEM server, and
causes the server-implemented default timeout to be used.
ContinueOnError (:class:`py:bool`):
Indicates to the WBEM server to continue sending responses
after an error response has been sent.
* If `True`, the server is to continue sending responses after
sending an error response. Not all servers support continuation
on error; a server that does not support it must send an error
response if `True` was specified, causing
:class:`~pywbem.CIMError` to be raised with status code
:attr:`~pywbem.CIM_ERR_CONTINUATION_ON_ERROR_NOT_SUPPORTED`.
If the corresponding traditional operation is used by this
method, :exc:`~py:exceptions.ValueError` will be raised.
* If `False`, the server is requested to close the enumeration after
sending an error response.
* If `None`, this parameter is not passed to the WBEM server, and
causes the server-implemented default behaviour to be used.
:term:`DSP0200` defines that the server-implemented default is
`False`.
MaxObjectCount (:class:`~pywbem.Uint32`)
Maximum number of instances the WBEM server may return for each of
the open and pull requests issued during the iterations over the
returned generator object.
* If positive, the WBEM server is to return no more than the
specified number of instance paths.
* Zero is not allowed; it would mean that zero paths
are to be returned for every request issued.
* The default is defined as a system config variable.
* `None` is not allowed.
The choice of MaxObjectCount is client/server dependent but choices
between 100 and 1000 typically do not have a significant impact on
either memory or overall efficiency.
**extra :
Additional keyword arguments are passed as additional operation
parameters to the WBEM server.
Note that :term:`DSP0200` does not define any additional parameters
for this operation.
Raises:
Exceptions described in :class:`~pywbem.WBEMConnection`.
Returns:
:term:`py:generator` iterating :class:`~pywbem.CIMInstanceName`:
A generator object that iterates the resulting CIM instance paths.
These instance paths have their host and namespace components set.
Example::
paths_generator = conn.IterEnumerateInstancePaths('CIM_Blah')
for path in paths_generator:
print('path {0}'.format(path)) | [
"Enumerate",
"the",
"instance",
"paths",
"of",
"instances",
"of",
"a",
"class",
"(",
"including",
"instances",
"of",
"its",
"subclasses",
")",
"in",
"a",
"namespace",
"using",
"the",
"Python",
":",
"term",
":",
"py",
":",
"generator",
"idiom",
"to",
"return",
"the",
"result",
"."
] | python | train |
CyberReboot/vent | vent/api/tools.py | https://github.com/CyberReboot/vent/blob/9956a09146b11a89a0eabab3bc7ce8906d124885/vent/api/tools.py#L949-L980 | def repo_tools(self, repo, branch, version):
""" Get available tools for a repository branch at a version """
try:
tools = []
status = self.path_dirs.apply_path(repo)
# switch to directory where repo will be cloned to
if status[0]:
cwd = status[1]
else:
self.logger.error('apply_path failed. Exiting repo_tools with'
' status: ' + str(status))
return status
# TODO commenting out for now, should use update_repo
#status = self.p_helper.checkout(branch=branch, version=version)
status = (True, None)
if status[0]:
path, _, _ = self.path_dirs.get_path(repo)
tools = AvailableTools(path, version=version)
else:
self.logger.error('checkout failed. Exiting repo_tools with'
' status: ' + str(status))
return status
chdir(cwd)
status = (True, tools)
except Exception as e: # pragma: no cover
self.logger.error('repo_tools failed with error: ' + str(e))
status = (False, e)
return status | [
"def",
"repo_tools",
"(",
"self",
",",
"repo",
",",
"branch",
",",
"version",
")",
":",
"try",
":",
"tools",
"=",
"[",
"]",
"status",
"=",
"self",
".",
"path_dirs",
".",
"apply_path",
"(",
"repo",
")",
"# switch to directory where repo will be cloned to",
"if",
"status",
"[",
"0",
"]",
":",
"cwd",
"=",
"status",
"[",
"1",
"]",
"else",
":",
"self",
".",
"logger",
".",
"error",
"(",
"'apply_path failed. Exiting repo_tools with'",
"' status: '",
"+",
"str",
"(",
"status",
")",
")",
"return",
"status",
"# TODO commenting out for now, should use update_repo",
"#status = self.p_helper.checkout(branch=branch, version=version)",
"status",
"=",
"(",
"True",
",",
"None",
")",
"if",
"status",
"[",
"0",
"]",
":",
"path",
",",
"_",
",",
"_",
"=",
"self",
".",
"path_dirs",
".",
"get_path",
"(",
"repo",
")",
"tools",
"=",
"AvailableTools",
"(",
"path",
",",
"version",
"=",
"version",
")",
"else",
":",
"self",
".",
"logger",
".",
"error",
"(",
"'checkout failed. Exiting repo_tools with'",
"' status: '",
"+",
"str",
"(",
"status",
")",
")",
"return",
"status",
"chdir",
"(",
"cwd",
")",
"status",
"=",
"(",
"True",
",",
"tools",
")",
"except",
"Exception",
"as",
"e",
":",
"# pragma: no cover",
"self",
".",
"logger",
".",
"error",
"(",
"'repo_tools failed with error: '",
"+",
"str",
"(",
"e",
")",
")",
"status",
"=",
"(",
"False",
",",
"e",
")",
"return",
"status"
] | Get available tools for a repository branch at a version | [
"Get",
"available",
"tools",
"for",
"a",
"repository",
"branch",
"at",
"a",
"version"
] | python | train |
web-push-libs/pywebpush | pywebpush/__init__.py | https://github.com/web-push-libs/pywebpush/blob/2a23f45b7819e31bd030de9fe1357a1cf7dcfdc4/pywebpush/__init__.py#L256-L347 | def send(self, data=None, headers=None, ttl=0, gcm_key=None, reg_id=None,
content_encoding="aes128gcm", curl=False, timeout=None):
"""Encode and send the data to the Push Service.
:param data: A serialized block of data (see encode() ).
:type data: str
:param headers: A dictionary containing any additional HTTP headers.
:type headers: dict
:param ttl: The Time To Live in seconds for this message if the
recipient is not online. (Defaults to "0", which discards the
message immediately if the recipient is unavailable.)
:type ttl: int
:param gcm_key: API key obtained from the Google Developer Console.
Needed if endpoint is https://android.googleapis.com/gcm/send
:type gcm_key: string
:param reg_id: registration id of the recipient. If not provided,
it will be extracted from the endpoint.
:type reg_id: str
:param content_encoding: ECE content encoding (defaults to "aes128gcm")
:type content_encoding: str
:param curl: Display output as `curl` command instead of sending
:type curl: bool
:param timeout: POST requests timeout
:type timeout: float or tuple
"""
# Encode the data.
if headers is None:
headers = dict()
encoded = {}
headers = CaseInsensitiveDict(headers)
if data:
encoded = self.encode(data, content_encoding)
if "crypto_key" in encoded:
# Append the p256dh to the end of any existing crypto-key
crypto_key = headers.get("crypto-key", "")
if crypto_key:
# due to some confusion by a push service provider, we
# should use ';' instead of ',' to append the headers.
# see
# https://github.com/webpush-wg/webpush-encryption/issues/6
crypto_key += ';'
crypto_key += (
"dh=" + encoded["crypto_key"].decode('utf8'))
headers.update({
'crypto-key': crypto_key
})
if "salt" in encoded:
headers.update({
'encryption': "salt=" + encoded['salt'].decode('utf8')
})
headers.update({
'content-encoding': content_encoding,
})
if gcm_key:
# guess if it is a legacy GCM project key or actual FCM key
# gcm keys are all about 40 chars (use 100 for confidence),
# fcm keys are 153-175 chars
if len(gcm_key) < 100:
endpoint = 'https://android.googleapis.com/gcm/send'
else:
endpoint = 'https://fcm.googleapis.com/fcm/send'
reg_ids = []
if not reg_id:
reg_id = self.subscription_info['endpoint'].rsplit('/', 1)[-1]
reg_ids.append(reg_id)
gcm_data = dict()
gcm_data['registration_ids'] = reg_ids
if data:
gcm_data['raw_data'] = base64.b64encode(
encoded.get('body')).decode('utf8')
gcm_data['time_to_live'] = int(
headers['ttl'] if 'ttl' in headers else ttl)
encoded_data = json.dumps(gcm_data)
headers.update({
'Authorization': 'key='+gcm_key,
'Content-Type': 'application/json',
})
else:
encoded_data = encoded.get('body')
endpoint = self.subscription_info['endpoint']
if 'ttl' not in headers or ttl:
headers['ttl'] = str(ttl or 0)
# Additionally useful headers:
# Authorization / Crypto-Key (VAPID headers)
if curl:
return self.as_curl(endpoint, encoded_data, headers)
return self.requests_method.post(endpoint,
data=encoded_data,
headers=headers,
timeout=timeout) | [
"def",
"send",
"(",
"self",
",",
"data",
"=",
"None",
",",
"headers",
"=",
"None",
",",
"ttl",
"=",
"0",
",",
"gcm_key",
"=",
"None",
",",
"reg_id",
"=",
"None",
",",
"content_encoding",
"=",
"\"aes128gcm\"",
",",
"curl",
"=",
"False",
",",
"timeout",
"=",
"None",
")",
":",
"# Encode the data.",
"if",
"headers",
"is",
"None",
":",
"headers",
"=",
"dict",
"(",
")",
"encoded",
"=",
"{",
"}",
"headers",
"=",
"CaseInsensitiveDict",
"(",
"headers",
")",
"if",
"data",
":",
"encoded",
"=",
"self",
".",
"encode",
"(",
"data",
",",
"content_encoding",
")",
"if",
"\"crypto_key\"",
"in",
"encoded",
":",
"# Append the p256dh to the end of any existing crypto-key",
"crypto_key",
"=",
"headers",
".",
"get",
"(",
"\"crypto-key\"",
",",
"\"\"",
")",
"if",
"crypto_key",
":",
"# due to some confusion by a push service provider, we",
"# should use ';' instead of ',' to append the headers.",
"# see",
"# https://github.com/webpush-wg/webpush-encryption/issues/6",
"crypto_key",
"+=",
"';'",
"crypto_key",
"+=",
"(",
"\"dh=\"",
"+",
"encoded",
"[",
"\"crypto_key\"",
"]",
".",
"decode",
"(",
"'utf8'",
")",
")",
"headers",
".",
"update",
"(",
"{",
"'crypto-key'",
":",
"crypto_key",
"}",
")",
"if",
"\"salt\"",
"in",
"encoded",
":",
"headers",
".",
"update",
"(",
"{",
"'encryption'",
":",
"\"salt=\"",
"+",
"encoded",
"[",
"'salt'",
"]",
".",
"decode",
"(",
"'utf8'",
")",
"}",
")",
"headers",
".",
"update",
"(",
"{",
"'content-encoding'",
":",
"content_encoding",
",",
"}",
")",
"if",
"gcm_key",
":",
"# guess if it is a legacy GCM project key or actual FCM key",
"# gcm keys are all about 40 chars (use 100 for confidence),",
"# fcm keys are 153-175 chars",
"if",
"len",
"(",
"gcm_key",
")",
"<",
"100",
":",
"endpoint",
"=",
"'https://android.googleapis.com/gcm/send'",
"else",
":",
"endpoint",
"=",
"'https://fcm.googleapis.com/fcm/send'",
"reg_ids",
"=",
"[",
"]",
"if",
"not",
"reg_id",
":",
"reg_id",
"=",
"self",
".",
"subscription_info",
"[",
"'endpoint'",
"]",
".",
"rsplit",
"(",
"'/'",
",",
"1",
")",
"[",
"-",
"1",
"]",
"reg_ids",
".",
"append",
"(",
"reg_id",
")",
"gcm_data",
"=",
"dict",
"(",
")",
"gcm_data",
"[",
"'registration_ids'",
"]",
"=",
"reg_ids",
"if",
"data",
":",
"gcm_data",
"[",
"'raw_data'",
"]",
"=",
"base64",
".",
"b64encode",
"(",
"encoded",
".",
"get",
"(",
"'body'",
")",
")",
".",
"decode",
"(",
"'utf8'",
")",
"gcm_data",
"[",
"'time_to_live'",
"]",
"=",
"int",
"(",
"headers",
"[",
"'ttl'",
"]",
"if",
"'ttl'",
"in",
"headers",
"else",
"ttl",
")",
"encoded_data",
"=",
"json",
".",
"dumps",
"(",
"gcm_data",
")",
"headers",
".",
"update",
"(",
"{",
"'Authorization'",
":",
"'key='",
"+",
"gcm_key",
",",
"'Content-Type'",
":",
"'application/json'",
",",
"}",
")",
"else",
":",
"encoded_data",
"=",
"encoded",
".",
"get",
"(",
"'body'",
")",
"endpoint",
"=",
"self",
".",
"subscription_info",
"[",
"'endpoint'",
"]",
"if",
"'ttl'",
"not",
"in",
"headers",
"or",
"ttl",
":",
"headers",
"[",
"'ttl'",
"]",
"=",
"str",
"(",
"ttl",
"or",
"0",
")",
"# Additionally useful headers:",
"# Authorization / Crypto-Key (VAPID headers)",
"if",
"curl",
":",
"return",
"self",
".",
"as_curl",
"(",
"endpoint",
",",
"encoded_data",
",",
"headers",
")",
"return",
"self",
".",
"requests_method",
".",
"post",
"(",
"endpoint",
",",
"data",
"=",
"encoded_data",
",",
"headers",
"=",
"headers",
",",
"timeout",
"=",
"timeout",
")"
] | Encode and send the data to the Push Service.
:param data: A serialized block of data (see encode() ).
:type data: str
:param headers: A dictionary containing any additional HTTP headers.
:type headers: dict
:param ttl: The Time To Live in seconds for this message if the
recipient is not online. (Defaults to "0", which discards the
message immediately if the recipient is unavailable.)
:type ttl: int
:param gcm_key: API key obtained from the Google Developer Console.
Needed if endpoint is https://android.googleapis.com/gcm/send
:type gcm_key: string
:param reg_id: registration id of the recipient. If not provided,
it will be extracted from the endpoint.
:type reg_id: str
:param content_encoding: ECE content encoding (defaults to "aes128gcm")
:type content_encoding: str
:param curl: Display output as `curl` command instead of sending
:type curl: bool
:param timeout: POST requests timeout
:type timeout: float or tuple | [
"Encode",
"and",
"send",
"the",
"data",
"to",
"the",
"Push",
"Service",
"."
] | python | train |
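A minimal usage sketch for the send() method above, assuming the containing class is pywebpush's WebPusher; the subscription endpoint, client keys, and VAPID header are placeholders, not working values. Example (illustrative)::

    from pywebpush import WebPusher

    # subscription_info normally comes from the browser's PushManager.subscribe()
    subscription_info = {
        "endpoint": "https://updates.push.services.mozilla.com/wpush/v2/example",
        "keys": {"p256dh": "<client public key>", "auth": "<client auth secret>"},
    }
    pusher = WebPusher(subscription_info)
    # Encrypts the payload (aes128gcm by default) and POSTs it to the push service;
    # the return value is the underlying requests response object.
    response = pusher.send(
        data='{"title": "hello"}',
        headers={"Authorization": "vapid t=<signed JWT>, k=<VAPID public key>"},
        ttl=60,
    )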
gregmccoy/melissadata | melissadata/melissadata.py | https://github.com/gregmccoy/melissadata/blob/e610152c8ec98f673b9c7be4d359bfacdfde7c1e/melissadata/melissadata.py#L30-L79 | def verify_address(self, addr1="", addr2="", city="", fname="", lname="", phone="", province="", postal="", country="", email="", recordID="", freeform= ""):
"""verify_address
Builds a JSON request to send to Melissa data. Takes in all needed address info.
Args:
addr1 (str):Contains info for Melissa data
addr2 (str):Contains info for Melissa data
city (str):Contains info for Melissa data
fname (str):Contains info for Melissa data
lname (str):Contains info for Melissa data
phone (str):Contains info for Melissa data
province (str):Contains info for Melissa data
postal (str):Contains info for Melissa data
country (str):Contains info for Melissa data
email (str):Contains info for Melissa data
recordID (str):Contains info for Melissa data
freeform (str):Contains info for Melissa data
Returns:
result, a string containing the result codes from MelissaData
"""
data = {
"TransmissionReference": "",
"CustomerID": self.custID,
"Actions": "Check",
"Options": "",
"Columns": "",
"Records": [{
"RecordID": recordID,
"CompanyName": "",
"FullName": fname + " " + lname,
"AddressLine1": addr1,
"AddressLine2": addr2,
"Suite": "",
"City": city,
"State": province,
"PostalCode": postal,
"Country": country,
"PhoneNumber": phone,
"EmailAddress": email,
"FreeForm": freeform,
}]
}
self.country = country
data = json.dumps(data)
result = requests.post("https://personator.melissadata.net/v3/WEB/ContactVerify/doContactVerify", data=data)
result = json.loads(result.text)
result = self.parse_results(result)
return result | [
"def",
"verify_address",
"(",
"self",
",",
"addr1",
"=",
"\"\"",
",",
"addr2",
"=",
"\"\"",
",",
"city",
"=",
"\"\"",
",",
"fname",
"=",
"\"\"",
",",
"lname",
"=",
"\"\"",
",",
"phone",
"=",
"\"\"",
",",
"province",
"=",
"\"\"",
",",
"postal",
"=",
"\"\"",
",",
"country",
"=",
"\"\"",
",",
"email",
"=",
"\"\"",
",",
"recordID",
"=",
"\"\"",
",",
"freeform",
"=",
"\"\"",
")",
":",
"data",
"=",
"{",
"\"TransmissionReference\"",
":",
"\"\"",
",",
"\"CustomerID\"",
":",
"self",
".",
"custID",
",",
"\"Actions\"",
":",
"\"Check\"",
",",
"\"Options\"",
":",
"\"\"",
",",
"\"Columns\"",
":",
"\"\"",
",",
"\"Records\"",
":",
"[",
"{",
"\"RecordID\"",
":",
"recordID",
",",
"\"CompanyName\"",
":",
"\"\"",
",",
"\"FullName\"",
":",
"fname",
"+",
"\" \"",
"+",
"lname",
",",
"\"AddressLine1\"",
":",
"addr1",
",",
"\"AddressLine2\"",
":",
"addr2",
",",
"\"Suite\"",
":",
"\"\"",
",",
"\"City\"",
":",
"city",
",",
"\"State\"",
":",
"province",
",",
"\"PostalCode\"",
":",
"postal",
",",
"\"Country\"",
":",
"country",
",",
"\"PhoneNumber\"",
":",
"phone",
",",
"\"EmailAddress\"",
":",
"email",
",",
"\"FreeForm\"",
":",
"freeform",
",",
"}",
"]",
"}",
"self",
".",
"country",
"=",
"country",
"data",
"=",
"json",
".",
"dumps",
"(",
"data",
")",
"result",
"=",
"requests",
".",
"post",
"(",
"\"https://personator.melissadata.net/v3/WEB/ContactVerify/doContactVerify\"",
",",
"data",
"=",
"data",
")",
"result",
"=",
"json",
".",
"loads",
"(",
"result",
".",
"text",
")",
"result",
"=",
"self",
".",
"parse_results",
"(",
"result",
")",
"return",
"result"
] | verify_address
Builds a JSON request to send to Melissa data. Takes in all needed address info.
Args:
addr1 (str):Contains info for Melissa data
addr2 (str):Contains info for Melissa data
city (str):Contains info for Melissa data
fname (str):Contains info for Melissa data
lname (str):Contains info for Melissa data
phone (str):Contains info for Melissa data
province (str):Contains info for Melissa data
postal (str):Contains info for Melissa data
country (str):Contains info for Melissa data
email (str):Contains info for Melissa data
recordID (str):Contains info for Melissa data
freeform (str):Contains info for Melissa data
Returns:
result, a string containing the result codes from MelissaData | [
"verify_address"
] | python | train |
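A hedged usage sketch for verify_address() above; MelissaData is a hypothetical name for the class that owns the method (only the method body appears in this record), and custID stands for the customer/licence ID placed in the request payload. Example (illustrative)::

    # MelissaData and its constructor argument are assumptions for illustration.
    client = MelissaData(custID="YOUR_CUSTOMER_ID")
    result_codes = client.verify_address(
        addr1="22382 Avenida Empresa",
        city="Rancho Santa Margarita",
        province="CA",
        postal="92688",
        country="US",
        fname="Jane",
        lname="Doe",
    )
    print(result_codes)  # result-code string produced by parse_results()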
angr/angr | angr/sim_state.py | https://github.com/angr/angr/blob/4e2f97d56af5419ee73bdb30482c8dd8ff5f3e40/angr/sim_state.py#L568-L587 | def copy(self):
"""
Returns a copy of the state.
"""
if self._global_condition is not None:
raise SimStateError("global condition was not cleared before state.copy().")
c_plugins = self._copy_plugins()
state = SimState(project=self.project, arch=self.arch, plugins=c_plugins, options=self.options.copy(),
mode=self.mode, os_name=self.os_name)
if self._is_java_jni_project:
state.ip_is_soot_addr = self.ip_is_soot_addr
state.uninitialized_access_handler = self.uninitialized_access_handler
state._special_memory_filler = self._special_memory_filler
state.ip_constraints = self.ip_constraints
return state | [
"def",
"copy",
"(",
"self",
")",
":",
"if",
"self",
".",
"_global_condition",
"is",
"not",
"None",
":",
"raise",
"SimStateError",
"(",
"\"global condition was not cleared before state.copy().\"",
")",
"c_plugins",
"=",
"self",
".",
"_copy_plugins",
"(",
")",
"state",
"=",
"SimState",
"(",
"project",
"=",
"self",
".",
"project",
",",
"arch",
"=",
"self",
".",
"arch",
",",
"plugins",
"=",
"c_plugins",
",",
"options",
"=",
"self",
".",
"options",
".",
"copy",
"(",
")",
",",
"mode",
"=",
"self",
".",
"mode",
",",
"os_name",
"=",
"self",
".",
"os_name",
")",
"if",
"self",
".",
"_is_java_jni_project",
":",
"state",
".",
"ip_is_soot_addr",
"=",
"self",
".",
"ip_is_soot_addr",
"state",
".",
"uninitialized_access_handler",
"=",
"self",
".",
"uninitialized_access_handler",
"state",
".",
"_special_memory_filler",
"=",
"self",
".",
"_special_memory_filler",
"state",
".",
"ip_constraints",
"=",
"self",
".",
"ip_constraints",
"return",
"state"
] | Returns a copy of the state. | [
"Returns",
"a",
"copy",
"of",
"the",
"state",
"."
] | python | train |
openstack/horizon | horizon/base.py | https://github.com/openstack/horizon/blob/5601ea9477323e599d9b766fcac1f8be742935b2/horizon/base.py#L86-L101 | def _wrapped_include(arg):
"""Convert the old 3-tuple arg for include() into the new format.
The argument "arg" should be a tuple with 3 elements:
(pattern_list, app_namespace, instance_namespace)
Prior to Django 2.0, django.urls.conf.include() accepted a 3-tuple arg
(urlconf, namespace, app_name), but this was dropped in Django 2.0.
This function is used to convert the older 3-tuple used in horizon code
into the new format where namespace needs to be passed as the second arg.
For more details, see
https://docs.djangoproject.com/en/2.0/releases/1.9/#passing-a-3-tuple-or-an-app-name-to-include
"""
pattern_list, app_namespace, instance_namespace = arg
return include((pattern_list, app_namespace), namespace=instance_namespace) | [
"def",
"_wrapped_include",
"(",
"arg",
")",
":",
"pattern_list",
",",
"app_namespace",
",",
"instance_namespace",
"=",
"arg",
"return",
"include",
"(",
"(",
"pattern_list",
",",
"app_namespace",
")",
",",
"namespace",
"=",
"instance_namespace",
")"
] | Convert the old 3-tuple arg for include() into the new format.
The argument "arg" should be a tuple with 3 elements:
(pattern_list, app_namespace, instance_namespace)
Prior to Django 2.0, django.urls.conf.include() accepted a 3-tuple arg
(urlconf, namespace, app_name), but this was dropped in Django 2.0.
This function is used to convert the older 3-tuple used in horizon code
into the new format where namespace needs to be passed as the second arg.
For more details, see
https://docs.djangoproject.com/en/2.0/releases/1.9/#passing-a-3-tuple-or-an-app-name-to-include | [
"Convert",
"the",
"old",
"3",
"-",
"tuple",
"arg",
"for",
"include",
"()",
"into",
"the",
"new",
"format",
"."
] | python | train |
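A small sketch of the conversion described above; the pattern list and namespace names are made up for illustration. Example (illustrative)::

    from django.urls import path

    sub_patterns = [path('', lambda request: None, name='index')]

    # Old style: (pattern_list, app_namespace, instance_namespace)
    legacy_arg = (sub_patterns, 'mydash', 'mydash_instance')

    # Equivalent to include((sub_patterns, 'mydash'), namespace='mydash_instance')
    url_entry = _wrapped_include(legacy_arg)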
bjodah/pyodesys | pyodesys/core.py | https://github.com/bjodah/pyodesys/blob/0034a6165b550d8d9808baef58678dca5a493ab7/pyodesys/core.py#L917-L996 | def chained_parameter_variation(subject, durations, y0, varied_params, default_params=None,
integrate_kwargs=None, x0=None, npoints=1, numpy=None):
""" Integrate an ODE-system for a serie of durations with some parameters changed in-between
Parameters
----------
subject : function or ODESys instance
If a function: should have the signature of :meth:`pyodesys.ODESys.integrate`
(and return a :class:`pyodesys.results.Result` object).
If an ODESys instance: the ``integrate`` method will be used.
durations : iterable of floats
Spans of the independent variable.
y0 : dict or array_like
varied_params : dict mapping parameter name (or index) to array_like
Each array_like need to be of same length as durations.
default_params : dict or array_like
Default values for the parameters of the ODE system.
integrate_kwargs : dict
Keyword arguments passed on to ``integrate``.
x0 : float-like
First value of independent variable. default: 0.
npoints : int
Number of points per sub-interval.
Examples
--------
>>> odesys = ODESys(lambda t, y, p: [-p[0]*y[0]])
>>> int_kw = dict(integrator='cvode', method='adams', atol=1e-12, rtol=1e-12)
>>> kwargs = dict(default_params=[0], integrate_kwargs=int_kw)
>>> res = chained_parameter_variation(odesys, [2, 3], [42], {0: [.7, .1]}, **kwargs)
>>> mask1 = res.xout <= 2
>>> import numpy as np
>>> np.allclose(res.yout[mask1, 0], 42*np.exp(-.7*res.xout[mask1]))
True
>>> mask2 = 2 <= res.xout
>>> np.allclose(res.yout[mask2, 0], res.yout[mask2, 0][0]*np.exp(-.1*(res.xout[mask2] - res.xout[mask2][0])))
True
"""
assert len(durations) > 0, 'need at least 1 duration (preferably many)'
assert npoints > 0, 'need at least 1 point per duration'
for k, v in varied_params.items():
if len(v) != len(durations):
raise ValueError("Mismatched lengths of durations and varied_params")
if isinstance(subject, ODESys):
integrate = subject.integrate
numpy = numpy or subject.numpy
else:
integrate = subject
numpy = numpy or np
default_params = default_params or {}
integrate_kwargs = integrate_kwargs or {}
def _get_idx(cont, idx):
if isinstance(cont, dict):
return {k: (v[idx] if hasattr(v, '__len__') and getattr(v, 'ndim', 1) > 0 else v)
for k, v in cont.items()}
else:
return cont[idx]
durations = numpy.cumsum(durations)
for idx_dur in range(len(durations)):
params = copy.copy(default_params)
for k, v in varied_params.items():
params[k] = v[idx_dur]
if idx_dur == 0:
if x0 is None:
x0 = durations[0]*0
out = integrate(numpy.linspace(x0, durations[0], npoints + 1), y0, params, **integrate_kwargs)
else:
if isinstance(out, Result):
out.extend_by_integration(durations[idx_dur], params, npoints=npoints, **integrate_kwargs)
else:
for idx_res, r in enumerate(out):
r.extend_by_integration(durations[idx_dur], _get_idx(params, idx_res),
npoints=npoints, **integrate_kwargs)
return out | [
"def",
"chained_parameter_variation",
"(",
"subject",
",",
"durations",
",",
"y0",
",",
"varied_params",
",",
"default_params",
"=",
"None",
",",
"integrate_kwargs",
"=",
"None",
",",
"x0",
"=",
"None",
",",
"npoints",
"=",
"1",
",",
"numpy",
"=",
"None",
")",
":",
"assert",
"len",
"(",
"durations",
")",
">",
"0",
",",
"'need at least 1 duration (preferably many)'",
"assert",
"npoints",
">",
"0",
",",
"'need at least 1 point per duration'",
"for",
"k",
",",
"v",
"in",
"varied_params",
".",
"items",
"(",
")",
":",
"if",
"len",
"(",
"v",
")",
"!=",
"len",
"(",
"durations",
")",
":",
"raise",
"ValueError",
"(",
"\"Mismathced lengths of durations and varied_params\"",
")",
"if",
"isinstance",
"(",
"subject",
",",
"ODESys",
")",
":",
"integrate",
"=",
"subject",
".",
"integrate",
"numpy",
"=",
"numpy",
"or",
"subject",
".",
"numpy",
"else",
":",
"integrate",
"=",
"subject",
"numpy",
"=",
"numpy",
"or",
"np",
"default_params",
"=",
"default_params",
"or",
"{",
"}",
"integrate_kwargs",
"=",
"integrate_kwargs",
"or",
"{",
"}",
"def",
"_get_idx",
"(",
"cont",
",",
"idx",
")",
":",
"if",
"isinstance",
"(",
"cont",
",",
"dict",
")",
":",
"return",
"{",
"k",
":",
"(",
"v",
"[",
"idx",
"]",
"if",
"hasattr",
"(",
"v",
",",
"'__len__'",
")",
"and",
"getattr",
"(",
"v",
",",
"'ndim'",
",",
"1",
")",
">",
"0",
"else",
"v",
")",
"for",
"k",
",",
"v",
"in",
"cont",
".",
"items",
"(",
")",
"}",
"else",
":",
"return",
"cont",
"[",
"idx",
"]",
"durations",
"=",
"numpy",
".",
"cumsum",
"(",
"durations",
")",
"for",
"idx_dur",
"in",
"range",
"(",
"len",
"(",
"durations",
")",
")",
":",
"params",
"=",
"copy",
".",
"copy",
"(",
"default_params",
")",
"for",
"k",
",",
"v",
"in",
"varied_params",
".",
"items",
"(",
")",
":",
"params",
"[",
"k",
"]",
"=",
"v",
"[",
"idx_dur",
"]",
"if",
"idx_dur",
"==",
"0",
":",
"if",
"x0",
"is",
"None",
":",
"x0",
"=",
"durations",
"[",
"0",
"]",
"*",
"0",
"out",
"=",
"integrate",
"(",
"numpy",
".",
"linspace",
"(",
"x0",
",",
"durations",
"[",
"0",
"]",
",",
"npoints",
"+",
"1",
")",
",",
"y0",
",",
"params",
",",
"*",
"*",
"integrate_kwargs",
")",
"else",
":",
"if",
"isinstance",
"(",
"out",
",",
"Result",
")",
":",
"out",
".",
"extend_by_integration",
"(",
"durations",
"[",
"idx_dur",
"]",
",",
"params",
",",
"npoints",
"=",
"npoints",
",",
"*",
"*",
"integrate_kwargs",
")",
"else",
":",
"for",
"idx_res",
",",
"r",
"in",
"enumerate",
"(",
"out",
")",
":",
"r",
".",
"extend_by_integration",
"(",
"durations",
"[",
"idx_dur",
"]",
",",
"_get_idx",
"(",
"params",
",",
"idx_res",
")",
",",
"npoints",
"=",
"npoints",
",",
"*",
"*",
"integrate_kwargs",
")",
"return",
"out"
] | Integrate an ODE-system for a series of durations with some parameters changed in-between
Parameters
----------
subject : function or ODESys instance
If a function: should have the signature of :meth:`pyodesys.ODESys.integrate`
(and return a :class:`pyodesys.results.Result` object).
If an ODESys instance: the ``integrate`` method will be used.
durations : iterable of floats
Spans of the independent variable.
y0 : dict or array_like
varied_params : dict mapping parameter name (or index) to array_like
Each array_like need to be of same length as durations.
default_params : dict or array_like
Default values for the parameters of the ODE system.
integrate_kwargs : dict
Keyword arguments passed on to ``integrate``.
x0 : float-like
First value of independent variable. default: 0.
npoints : int
Number of points per sub-interval.
Examples
--------
>>> odesys = ODESys(lambda t, y, p: [-p[0]*y[0]])
>>> int_kw = dict(integrator='cvode', method='adams', atol=1e-12, rtol=1e-12)
>>> kwargs = dict(default_params=[0], integrate_kwargs=int_kw)
>>> res = chained_parameter_variation(odesys, [2, 3], [42], {0: [.7, .1]}, **kwargs)
>>> mask1 = res.xout <= 2
>>> import numpy as np
>>> np.allclose(res.yout[mask1, 0], 42*np.exp(-.7*res.xout[mask1]))
True
>>> mask2 = 2 <= res.xout
>>> np.allclose(res.yout[mask2, 0], res.yout[mask2, 0][0]*np.exp(-.1*(res.xout[mask2] - res.xout[mask2][0])))
True | [
"Integrate",
"an",
"ODE",
"-",
"system",
"for",
"a",
"serie",
"of",
"durations",
"with",
"some",
"parameters",
"changed",
"in",
"-",
"between"
] | python | train |
gwastro/pycbc | pycbc/filter/matchedfilter.py | https://github.com/gwastro/pycbc/blob/7a64cdd104d263f1b6ea0b01e6841837d05a4cb3/pycbc/filter/matchedfilter.py#L1575-L1580 | def combine_results(self, results):
"""Combine results from different batches of filtering"""
result = {}
for key in results[0]:
result[key] = numpy.concatenate([r[key] for r in results])
return result | [
"def",
"combine_results",
"(",
"self",
",",
"results",
")",
":",
"result",
"=",
"{",
"}",
"for",
"key",
"in",
"results",
"[",
"0",
"]",
":",
"result",
"[",
"key",
"]",
"=",
"numpy",
".",
"concatenate",
"(",
"[",
"r",
"[",
"key",
"]",
"for",
"r",
"in",
"results",
"]",
")",
"return",
"result"
] | Combine results from different batches of filtering | [
"Combine",
"results",
"from",
"different",
"batches",
"of",
"filtering"
] | python | train |
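The method above simply concatenates the per-key arrays from each batch; a standalone sketch of the same merge, with made-up trigger data. Example (illustrative)::

    import numpy

    batch1 = {'snr': numpy.array([5.1, 6.2]), 'end_time': numpy.array([10.0, 11.5])}
    batch2 = {'snr': numpy.array([7.3]), 'end_time': numpy.array([12.0])}

    # Same result as combine_results([batch1, batch2]): one concatenated array per key
    combined = {key: numpy.concatenate([r[key] for r in (batch1, batch2)])
                for key in batch1}
    # combined['snr'] -> array([5.1, 6.2, 7.3])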
compmech/composites | composites/laminate.py | https://github.com/compmech/composites/blob/3c5d4fd6033898e35a5085063af5dbb81eb6a97d/composites/laminate.py#L289-L353 | def calc_lamination_parameters(self):
"""Calculate the lamination parameters.
The following attributes are calculated:
xiA, xiB, xiD, xiE
"""
if len(self.plies) == 0:
if self.xiA is None:
raise ValueError('Laminate with 0 plies!')
else:
return
xiA1, xiA2, xiA3, xiA4 = 0, 0, 0, 0
xiB1, xiB2, xiB3, xiB4 = 0, 0, 0, 0
xiD1, xiD2, xiD3, xiD4 = 0, 0, 0, 0
xiE1, xiE2, xiE3, xiE4 = 0, 0, 0, 0
lam_thick = sum([ply.h for ply in self.plies])
self.h = lam_thick
h0 = -lam_thick/2. + self.offset
for ply in self.plies:
if self.matobj is None:
self.matobj = ply.matobj
else:
assert np.allclose(self.matobj.u, ply.matobj.u), "Plies with different materials"
hk_1 = h0
h0 += ply.h
hk = h0
Afac = ply.h / lam_thick
Bfac = (2. / lam_thick**2) * (hk**2 - hk_1**2)
Dfac = (4. / lam_thick**3) * (hk**3 - hk_1**3)
Efac = (1. / lam_thick) * (hk - hk_1)
thetarad = np.deg2rad(ply.theta)
cos2t = np.cos(2*thetarad)
sin2t = np.sin(2*thetarad)
cos4t = np.cos(4*thetarad)
sin4t = np.sin(4*thetarad)
xiA1 += Afac * cos2t
xiA2 += Afac * sin2t
xiA3 += Afac * cos4t
xiA4 += Afac * sin4t
xiB1 += Bfac * cos2t
xiB2 += Bfac * sin2t
xiB3 += Bfac * cos4t
xiB4 += Bfac * sin4t
xiD1 += Dfac * cos2t
xiD2 += Dfac * sin2t
xiD3 += Dfac * cos4t
xiD4 += Dfac * sin4t
xiE1 += Efac * cos2t
xiE2 += Efac * sin2t
xiE3 += Efac * cos4t
xiE4 += Efac * sin4t
self.xiA = np.array([1, xiA1, xiA2, xiA3, xiA4], dtype=np.float64)
self.xiB = np.array([0, xiB1, xiB2, xiB3, xiB4], dtype=np.float64)
self.xiD = np.array([1, xiD1, xiD2, xiD3, xiD4], dtype=np.float64)
self.xiE = np.array([1, xiE1, xiE2, xiE3, xiE4], dtype=np.float64) | [
"def",
"calc_lamination_parameters",
"(",
"self",
")",
":",
"if",
"len",
"(",
"self",
".",
"plies",
")",
"==",
"0",
":",
"if",
"self",
".",
"xiA",
"is",
"None",
":",
"raise",
"ValueError",
"(",
"'Laminate with 0 plies!'",
")",
"else",
":",
"return",
"xiA1",
",",
"xiA2",
",",
"xiA3",
",",
"xiA4",
"=",
"0",
",",
"0",
",",
"0",
",",
"0",
"xiB1",
",",
"xiB2",
",",
"xiB3",
",",
"xiB4",
"=",
"0",
",",
"0",
",",
"0",
",",
"0",
"xiD1",
",",
"xiD2",
",",
"xiD3",
",",
"xiD4",
"=",
"0",
",",
"0",
",",
"0",
",",
"0",
"xiE1",
",",
"xiE2",
",",
"xiE3",
",",
"xiE4",
"=",
"0",
",",
"0",
",",
"0",
",",
"0",
"lam_thick",
"=",
"sum",
"(",
"[",
"ply",
".",
"h",
"for",
"ply",
"in",
"self",
".",
"plies",
"]",
")",
"self",
".",
"h",
"=",
"lam_thick",
"h0",
"=",
"-",
"lam_thick",
"/",
"2.",
"+",
"self",
".",
"offset",
"for",
"ply",
"in",
"self",
".",
"plies",
":",
"if",
"self",
".",
"matobj",
"is",
"None",
":",
"self",
".",
"matobj",
"=",
"ply",
".",
"matobj",
"else",
":",
"assert",
"np",
".",
"allclose",
"(",
"self",
".",
"matobj",
".",
"u",
",",
"ply",
".",
"matobj",
".",
"u",
")",
",",
"\"Plies with different materials\"",
"hk_1",
"=",
"h0",
"h0",
"+=",
"ply",
".",
"h",
"hk",
"=",
"h0",
"Afac",
"=",
"ply",
".",
"h",
"/",
"lam_thick",
"Bfac",
"=",
"(",
"2.",
"/",
"lam_thick",
"**",
"2",
")",
"*",
"(",
"hk",
"**",
"2",
"-",
"hk_1",
"**",
"2",
")",
"Dfac",
"=",
"(",
"4.",
"/",
"lam_thick",
"**",
"3",
")",
"*",
"(",
"hk",
"**",
"3",
"-",
"hk_1",
"**",
"3",
")",
"Efac",
"=",
"(",
"1.",
"/",
"lam_thick",
")",
"*",
"(",
"hk",
"-",
"hk_1",
")",
"thetarad",
"=",
"np",
".",
"deg2rad",
"(",
"ply",
".",
"theta",
")",
"cos2t",
"=",
"np",
".",
"cos",
"(",
"2",
"*",
"thetarad",
")",
"sin2t",
"=",
"np",
".",
"sin",
"(",
"2",
"*",
"thetarad",
")",
"cos4t",
"=",
"np",
".",
"cos",
"(",
"4",
"*",
"thetarad",
")",
"sin4t",
"=",
"np",
".",
"sin",
"(",
"4",
"*",
"thetarad",
")",
"xiA1",
"+=",
"Afac",
"*",
"cos2t",
"xiA2",
"+=",
"Afac",
"*",
"sin2t",
"xiA3",
"+=",
"Afac",
"*",
"cos4t",
"xiA4",
"+=",
"Afac",
"*",
"sin4t",
"xiB1",
"+=",
"Bfac",
"*",
"cos2t",
"xiB2",
"+=",
"Bfac",
"*",
"sin2t",
"xiB3",
"+=",
"Bfac",
"*",
"cos4t",
"xiB4",
"+=",
"Bfac",
"*",
"sin4t",
"xiD1",
"+=",
"Dfac",
"*",
"cos2t",
"xiD2",
"+=",
"Dfac",
"*",
"sin2t",
"xiD3",
"+=",
"Dfac",
"*",
"cos4t",
"xiD4",
"+=",
"Dfac",
"*",
"sin4t",
"xiE1",
"+=",
"Efac",
"*",
"cos2t",
"xiE2",
"+=",
"Efac",
"*",
"sin2t",
"xiE3",
"+=",
"Efac",
"*",
"cos4t",
"xiE4",
"+=",
"Efac",
"*",
"sin4t",
"self",
".",
"xiA",
"=",
"np",
".",
"array",
"(",
"[",
"1",
",",
"xiA1",
",",
"xiA2",
",",
"xiA3",
",",
"xiA4",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")",
"self",
".",
"xiB",
"=",
"np",
".",
"array",
"(",
"[",
"0",
",",
"xiB1",
",",
"xiB2",
",",
"xiB3",
",",
"xiB4",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")",
"self",
".",
"xiD",
"=",
"np",
".",
"array",
"(",
"[",
"1",
",",
"xiD1",
",",
"xiD2",
",",
"xiD3",
",",
"xiD4",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")",
"self",
".",
"xiE",
"=",
"np",
".",
"array",
"(",
"[",
"1",
",",
"xiE1",
",",
"xiE2",
",",
"xiE3",
",",
"xiE4",
"]",
",",
"dtype",
"=",
"np",
".",
"float64",
")"
] | Calculate the lamination parameters.
The following attributes are calculated:
xiA, xiB, xiD, xiE | [
"Calculate",
"the",
"lamination",
"parameters",
"."
] | python | train |
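For reference, the ply loop above accumulates the usual lamination-parameter sums; the restatement below is inferred from the weights Afac, Bfac, Dfac and Efac in the code (h is the laminate thickness, z_{k-1} and z_k the interfaces of ply k, theta_k its angle), not quoted from the source:

    \xi^{A}_{i} = \frac{1}{h}\sum_{k}\left(z_k - z_{k-1}\right) f_i(\theta_k), \qquad
    \xi^{B}_{i} = \frac{2}{h^{2}}\sum_{k}\left(z_k^{2} - z_{k-1}^{2}\right) f_i(\theta_k),
    \xi^{D}_{i} = \frac{4}{h^{3}}\sum_{k}\left(z_k^{3} - z_{k-1}^{3}\right) f_i(\theta_k), \qquad
    \xi^{E}_{i} = \frac{1}{h}\sum_{k}\left(z_k - z_{k-1}\right) f_i(\theta_k),
    \text{with } (f_1, f_2, f_3, f_4) = (\cos 2\theta,\ \sin 2\theta,\ \cos 4\theta,\ \sin 4\theta).

With these weights \xi^{A} and \xi^{E} coincide, since Afac and Efac both reduce to the ply thickness fraction ply.h / lam_thick.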
openid/JWTConnect-Python-CryptoJWT | src/cryptojwt/jwe/utils.py | https://github.com/openid/JWTConnect-Python-CryptoJWT/blob/8863cfbfe77ca885084870b234a66b55bd52930c/src/cryptojwt/jwe/utils.py#L98-L121 | def concat_sha256(secret, dk_len, other_info):
"""
The Concat KDF, using SHA256 as the hash function.
Note: Does not validate that otherInfo meets the requirements of
SP800-56A.
:param secret: The shared secret value
:param dk_len: Length of key to be derived, in bits
:param other_info: Other info to be incorporated (see SP800-56A)
:return: The derived key
"""
dkm = b''
dk_bytes = int(ceil(dk_len / 8.0))
counter = 0
while len(dkm) < dk_bytes:
counter += 1
counter_bytes = struct.pack("!I", counter)
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(counter_bytes)
digest.update(secret)
digest.update(other_info)
dkm += digest.finalize()
return dkm[:dk_bytes] | [
"def",
"concat_sha256",
"(",
"secret",
",",
"dk_len",
",",
"other_info",
")",
":",
"dkm",
"=",
"b''",
"dk_bytes",
"=",
"int",
"(",
"ceil",
"(",
"dk_len",
"/",
"8.0",
")",
")",
"counter",
"=",
"0",
"while",
"len",
"(",
"dkm",
")",
"<",
"dk_bytes",
":",
"counter",
"+=",
"1",
"counter_bytes",
"=",
"struct",
".",
"pack",
"(",
"\"!I\"",
",",
"counter",
")",
"digest",
"=",
"hashes",
".",
"Hash",
"(",
"hashes",
".",
"SHA256",
"(",
")",
",",
"backend",
"=",
"default_backend",
"(",
")",
")",
"digest",
".",
"update",
"(",
"counter_bytes",
")",
"digest",
".",
"update",
"(",
"secret",
")",
"digest",
".",
"update",
"(",
"other_info",
")",
"dkm",
"+=",
"digest",
".",
"finalize",
"(",
")",
"return",
"dkm",
"[",
":",
"dk_bytes",
"]"
] | The Concat KDF, using SHA256 as the hash function.
Note: Does not validate that otherInfo meets the requirements of
SP800-56A.
:param secret: The shared secret value
:param dk_len: Length of key to be derived, in bits
:param other_info: Other info to be incorporated (see SP800-56A)
:return: The derived key | [
"The",
"Concat",
"KDF",
"using",
"SHA256",
"as",
"the",
"hash",
"function",
"."
] | python | train |
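A short usage sketch of the Concat KDF helper above; the shared secret and OtherInfo bytes are placeholders (a real caller derives the secret from an ECDH exchange and builds OtherInfo per SP800-56A / RFC 7518). Example (illustrative)::

    import os

    shared_secret = os.urandom(32)      # stand-in for an ECDH-agreed secret Z
    other_info = b"example-otherinfo"   # real callers build this per SP800-56A
    cek = concat_sha256(shared_secret, 128, other_info)
    assert len(cek) == 16               # 128 bits -> 16 bytes of derived key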
saltstack/salt | salt/states/boto_kinesis.py | https://github.com/saltstack/salt/blob/e8541fd6e744ab0df786c0f76102e41631f45d46/salt/states/boto_kinesis.py#L78-L384 | def present(name,
retention_hours=None,
enhanced_monitoring=None,
num_shards=None,
do_reshard=True,
region=None,
key=None,
keyid=None,
profile=None):
'''
Ensure the kinesis stream is properly configured and scaled.
name (string)
Stream name
retention_hours (int)
Retain data for this many hours.
AWS allows minimum 24 hours, maximum 168 hours.
enhanced_monitoring (list of string)
Turn on enhanced monitoring for the specified shard-level metrics.
Pass in ['ALL'] or True for all metrics, [] or False for no metrics.
Turn on individual metrics by passing in a list: ['IncomingBytes', 'OutgoingBytes']
Note that if only some metrics are supplied, the remaining metrics will be turned off.
num_shards (int)
Reshard stream (if necessary) to this number of shards
!!!!! Resharding is expensive! Each split or merge can take up to 30 seconds,
and the reshard method balances the partition space evenly.
Resharding from N to N+1 can require 2N operations.
Resharding is much faster with powers of 2 (e.g. 2^N to 2^N+1) !!!!!
do_reshard (boolean)
If set to False, this script will NEVER reshard the stream,
regardless of other input. Useful for testing.
region (string)
Region to connect to.
key (string)
Secret key to be used.
keyid (string)
Access key to be used.
profile (dict)
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid.
'''
ret = {'name': name, 'result': True, 'comment': '', 'changes': {}}
comments = []
changes_old = {}
changes_new = {}
# Ensure stream exists
exists = __salt__['boto_kinesis.exists'](
name,
region,
key,
keyid,
profile
)
if exists['result'] is False:
if __opts__['test']:
ret['result'] = None
comments.append('Kinesis stream {0} would be created'.format(name))
_add_changes(ret, changes_old, changes_new, comments)
return ret
else:
is_created = __salt__['boto_kinesis.create_stream'](
name,
num_shards,
region,
key,
keyid,
profile
)
if 'error' in is_created:
ret['result'] = False
comments.append('Failed to create stream {0}: {1}'.format(name, is_created['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
comments.append('Kinesis stream {0} successfully created'.format(name))
changes_new['name'] = name
changes_new['num_shards'] = num_shards
else:
comments.append('Kinesis stream {0} already exists'.format(name))
stream_response = __salt__['boto_kinesis.get_stream_when_active'](
name,
region,
key,
keyid,
profile
)
if 'error' in stream_response:
ret['result'] = False
comments.append('Kinesis stream {0}: error getting description: {1}'
.format(name, stream_response['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
stream_details = stream_response['result']["StreamDescription"]
# Configure retention hours
if retention_hours is not None:
old_retention_hours = stream_details["RetentionPeriodHours"]
retention_matches = (old_retention_hours == retention_hours)
if not retention_matches:
if __opts__['test']:
ret['result'] = None
comments.append('Kinesis stream {0}: retention hours would be updated to {1}'
.format(name, retention_hours))
else:
if old_retention_hours > retention_hours:
retention_updated = __salt__['boto_kinesis.decrease_stream_retention_period'](
name,
retention_hours,
region,
key,
keyid,
profile
)
else:
retention_updated = __salt__['boto_kinesis.increase_stream_retention_period'](
name,
retention_hours,
region,
key,
keyid,
profile
)
if 'error' in retention_updated:
ret['result'] = False
comments.append('Kinesis stream {0}: failed to update retention hours: {1}'
.format(name, retention_updated['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
comments.append('Kinesis stream {0}: retention hours was successfully updated'.format(name))
changes_old['retention_hours'] = old_retention_hours
changes_new['retention_hours'] = retention_hours
# wait until active again, otherwise it will log a lot of ResourceInUseExceptions
# note that this isn't required below; reshard() will itself handle waiting
stream_response = __salt__['boto_kinesis.get_stream_when_active'](
name,
region,
key,
keyid,
profile
)
if 'error' in stream_response:
ret['result'] = False
comments.append('Kinesis stream {0}: error getting description: {1}'
.format(name, stream_response['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
stream_details = stream_response['result']["StreamDescription"]
else:
comments.append('Kinesis stream {0}: retention hours did not require change, already set at {1}'
.format(name, old_retention_hours))
else:
comments.append('Kinesis stream {0}: did not configure retention hours'.format(name))
# Configure enhanced monitoring
if enhanced_monitoring is not None:
if enhanced_monitoring is True or enhanced_monitoring == ['ALL']:
# for ease of comparison; describe_stream will always return the full list of metrics, never 'ALL'
enhanced_monitoring = [
"IncomingBytes",
"OutgoingRecords",
"IteratorAgeMilliseconds",
"IncomingRecords",
"ReadProvisionedThroughputExceeded",
"WriteProvisionedThroughputExceeded",
"OutgoingBytes"
]
elif enhanced_monitoring is False or enhanced_monitoring == "None":
enhanced_monitoring = []
old_enhanced_monitoring = stream_details.get("EnhancedMonitoring")[0]["ShardLevelMetrics"]
new_monitoring_set = set(enhanced_monitoring)
old_monitoring_set = set(old_enhanced_monitoring)
matching_metrics = new_monitoring_set.intersection(old_monitoring_set)
enable_metrics = list(new_monitoring_set.difference(matching_metrics))
disable_metrics = list(old_monitoring_set.difference(matching_metrics))
if enable_metrics:
if __opts__['test']:
ret['result'] = None
comments.append('Kinesis stream {0}: would enable enhanced monitoring for {1}'
.format(name, enable_metrics))
else:
metrics_enabled = __salt__['boto_kinesis.enable_enhanced_monitoring'](
name,
enable_metrics,
region,
key,
keyid,
profile
)
if 'error' in metrics_enabled:
ret['result'] = False
comments.append('Kinesis stream {0}: failed to enable enhanced monitoring: {1}'
.format(name, metrics_enabled['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
comments.append('Kinesis stream {0}: enhanced monitoring was enabled for shard-level metrics {1}'
.format(name, enable_metrics))
if disable_metrics:
if __opts__['test']:
ret['result'] = None
comments.append('Kinesis stream {0}: would disable enhanced monitoring for {1}'
.format(name, disable_metrics))
else:
metrics_disabled = __salt__['boto_kinesis.disable_enhanced_monitoring'](
name,
disable_metrics,
region,
key,
keyid,
profile
)
if 'error' in metrics_disabled:
ret['result'] = False
comments.append('Kinesis stream {0}: failed to disable enhanced monitoring: {1}'
.format(name, metrics_disabled['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
comments.append('Kinesis stream {0}: enhanced monitoring was disabled for shard-level metrics {1}'
.format(name, disable_metrics))
if not disable_metrics and not enable_metrics:
comments.append('Kinesis stream {0}: enhanced monitoring did not require change, already set at {1}'
.format(name, (old_enhanced_monitoring if old_enhanced_monitoring else "None")))
elif not __opts__['test']:
changes_old['enhanced_monitoring'] = (old_enhanced_monitoring if old_enhanced_monitoring
else "None")
changes_new['enhanced_monitoring'] = (enhanced_monitoring if enhanced_monitoring
else "None")
else:
comments.append('Kinesis stream {0}: did not configure enhanced monitoring'.format(name))
# Reshard stream if necessary
min_hash_key, max_hash_key, full_stream_details = __salt__['boto_kinesis.get_info_for_reshard'](
stream_details
)
old_num_shards = len(full_stream_details["OpenShards"])
if num_shards is not None and do_reshard:
num_shards_matches = (old_num_shards == num_shards)
if not num_shards_matches:
if __opts__['test']:
ret['result'] = None
comments.append('Kinesis stream {0}: would be resharded from {1} to {2} shards'
.format(name, old_num_shards, num_shards))
else:
log.info(
'Resharding stream from %s to %s shards, this could take '
'a while', old_num_shards, num_shards
)
# reshard returns True when a split/merge action is taken,
# or False when no more actions are required
continue_reshard = True
while continue_reshard:
reshard_response = __salt__['boto_kinesis.reshard'](
name,
num_shards,
do_reshard,
region,
key,
keyid,
profile)
if 'error' in reshard_response:
ret['result'] = False
comments.append('Encountered error while resharding {0}: {1}'
.format(name, reshard_response['error']))
_add_changes(ret, changes_old, changes_new, comments)
return ret
continue_reshard = reshard_response['result']
comments.append('Kinesis stream {0}: successfully resharded to {1} shards'.format(name, num_shards))
changes_old['num_shards'] = old_num_shards
changes_new['num_shards'] = num_shards
else:
comments.append('Kinesis stream {0}: did not require resharding, remains at {1} shards'
.format(name, old_num_shards))
else:
comments.append('Kinesis stream {0}: did not reshard, remains at {1} shards'.format(name, old_num_shards))
_add_changes(ret, changes_old, changes_new, comments)
return ret | [
"def",
"present",
"(",
"name",
",",
"retention_hours",
"=",
"None",
",",
"enhanced_monitoring",
"=",
"None",
",",
"num_shards",
"=",
"None",
",",
"do_reshard",
"=",
"True",
",",
"region",
"=",
"None",
",",
"key",
"=",
"None",
",",
"keyid",
"=",
"None",
",",
"profile",
"=",
"None",
")",
":",
"ret",
"=",
"{",
"'name'",
":",
"name",
",",
"'result'",
":",
"True",
",",
"'comment'",
":",
"''",
",",
"'changes'",
":",
"{",
"}",
"}",
"comments",
"=",
"[",
"]",
"changes_old",
"=",
"{",
"}",
"changes_new",
"=",
"{",
"}",
"# Ensure stream exists",
"exists",
"=",
"__salt__",
"[",
"'boto_kinesis.exists'",
"]",
"(",
"name",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"exists",
"[",
"'result'",
"]",
"is",
"False",
":",
"if",
"__opts__",
"[",
"'test'",
"]",
":",
"ret",
"[",
"'result'",
"]",
"=",
"None",
"comments",
".",
"append",
"(",
"'Kinesis stream {0} would be created'",
".",
"format",
"(",
"name",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"else",
":",
"is_created",
"=",
"__salt__",
"[",
"'boto_kinesis.create_stream'",
"]",
"(",
"name",
",",
"num_shards",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"is_created",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Failed to create stream {0}: {1}'",
".",
"format",
"(",
"name",
",",
"is_created",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"comments",
".",
"append",
"(",
"'Kinesis stream {0} successfully created'",
".",
"format",
"(",
"name",
")",
")",
"changes_new",
"[",
"'name'",
"]",
"=",
"name",
"changes_new",
"[",
"'num_shards'",
"]",
"=",
"num_shards",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0} already exists'",
".",
"format",
"(",
"name",
")",
")",
"stream_response",
"=",
"__salt__",
"[",
"'boto_kinesis.get_stream_when_active'",
"]",
"(",
"name",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"stream_response",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: error getting description: {1}'",
".",
"format",
"(",
"name",
",",
"stream_response",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"stream_details",
"=",
"stream_response",
"[",
"'result'",
"]",
"[",
"\"StreamDescription\"",
"]",
"# Configure retention hours",
"if",
"retention_hours",
"is",
"not",
"None",
":",
"old_retention_hours",
"=",
"stream_details",
"[",
"\"RetentionPeriodHours\"",
"]",
"retention_matches",
"=",
"(",
"old_retention_hours",
"==",
"retention_hours",
")",
"if",
"not",
"retention_matches",
":",
"if",
"__opts__",
"[",
"'test'",
"]",
":",
"ret",
"[",
"'result'",
"]",
"=",
"None",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: retention hours would be updated to {1}'",
".",
"format",
"(",
"name",
",",
"retention_hours",
")",
")",
"else",
":",
"if",
"old_retention_hours",
">",
"retention_hours",
":",
"retention_updated",
"=",
"__salt__",
"[",
"'boto_kinesis.decrease_stream_retention_period'",
"]",
"(",
"name",
",",
"retention_hours",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"else",
":",
"retention_updated",
"=",
"__salt__",
"[",
"'boto_kinesis.increase_stream_retention_period'",
"]",
"(",
"name",
",",
"retention_hours",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"retention_updated",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: failed to update retention hours: {1}'",
".",
"format",
"(",
"name",
",",
"retention_updated",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: retention hours was successfully updated'",
".",
"format",
"(",
"name",
")",
")",
"changes_old",
"[",
"'retention_hours'",
"]",
"=",
"old_retention_hours",
"changes_new",
"[",
"'retention_hours'",
"]",
"=",
"retention_hours",
"# wait until active again, otherwise it will log a lot of ResourceInUseExceptions",
"# note that this isn't required below; reshard() will itself handle waiting",
"stream_response",
"=",
"__salt__",
"[",
"'boto_kinesis.get_stream_when_active'",
"]",
"(",
"name",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"stream_response",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: error getting description: {1}'",
".",
"format",
"(",
"name",
",",
"stream_response",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"stream_details",
"=",
"stream_response",
"[",
"'result'",
"]",
"[",
"\"StreamDescription\"",
"]",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: retention hours did not require change, already set at {1}'",
".",
"format",
"(",
"name",
",",
"old_retention_hours",
")",
")",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: did not configure retention hours'",
".",
"format",
"(",
"name",
")",
")",
"# Configure enhanced monitoring",
"if",
"enhanced_monitoring",
"is",
"not",
"None",
":",
"if",
"enhanced_monitoring",
"is",
"True",
"or",
"enhanced_monitoring",
"==",
"[",
"'ALL'",
"]",
":",
"# for ease of comparison; describe_stream will always return the full list of metrics, never 'ALL'",
"enhanced_monitoring",
"=",
"[",
"\"IncomingBytes\"",
",",
"\"OutgoingRecords\"",
",",
"\"IteratorAgeMilliseconds\"",
",",
"\"IncomingRecords\"",
",",
"\"ReadProvisionedThroughputExceeded\"",
",",
"\"WriteProvisionedThroughputExceeded\"",
",",
"\"OutgoingBytes\"",
"]",
"elif",
"enhanced_monitoring",
"is",
"False",
"or",
"enhanced_monitoring",
"==",
"\"None\"",
":",
"enhanced_monitoring",
"=",
"[",
"]",
"old_enhanced_monitoring",
"=",
"stream_details",
".",
"get",
"(",
"\"EnhancedMonitoring\"",
")",
"[",
"0",
"]",
"[",
"\"ShardLevelMetrics\"",
"]",
"new_monitoring_set",
"=",
"set",
"(",
"enhanced_monitoring",
")",
"old_monitoring_set",
"=",
"set",
"(",
"old_enhanced_monitoring",
")",
"matching_metrics",
"=",
"new_monitoring_set",
".",
"intersection",
"(",
"old_monitoring_set",
")",
"enable_metrics",
"=",
"list",
"(",
"new_monitoring_set",
".",
"difference",
"(",
"matching_metrics",
")",
")",
"disable_metrics",
"=",
"list",
"(",
"old_monitoring_set",
".",
"difference",
"(",
"matching_metrics",
")",
")",
"if",
"enable_metrics",
":",
"if",
"__opts__",
"[",
"'test'",
"]",
":",
"ret",
"[",
"'result'",
"]",
"=",
"None",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: would enable enhanced monitoring for {1}'",
".",
"format",
"(",
"name",
",",
"enable_metrics",
")",
")",
"else",
":",
"metrics_enabled",
"=",
"__salt__",
"[",
"'boto_kinesis.enable_enhanced_monitoring'",
"]",
"(",
"name",
",",
"enable_metrics",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"metrics_enabled",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: failed to enable enhanced monitoring: {1}'",
".",
"format",
"(",
"name",
",",
"metrics_enabled",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: enhanced monitoring was enabled for shard-level metrics {1}'",
".",
"format",
"(",
"name",
",",
"enable_metrics",
")",
")",
"if",
"disable_metrics",
":",
"if",
"__opts__",
"[",
"'test'",
"]",
":",
"ret",
"[",
"'result'",
"]",
"=",
"None",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: would disable enhanced monitoring for {1}'",
".",
"format",
"(",
"name",
",",
"disable_metrics",
")",
")",
"else",
":",
"metrics_disabled",
"=",
"__salt__",
"[",
"'boto_kinesis.disable_enhanced_monitoring'",
"]",
"(",
"name",
",",
"disable_metrics",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"metrics_disabled",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: failed to disable enhanced monitoring: {1}'",
".",
"format",
"(",
"name",
",",
"metrics_disabled",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: enhanced monitoring was disabled for shard-level metrics {1}'",
".",
"format",
"(",
"name",
",",
"disable_metrics",
")",
")",
"if",
"not",
"disable_metrics",
"and",
"not",
"enable_metrics",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: enhanced monitoring did not require change, already set at {1}'",
".",
"format",
"(",
"name",
",",
"(",
"old_enhanced_monitoring",
"if",
"old_enhanced_monitoring",
"else",
"\"None\"",
")",
")",
")",
"elif",
"not",
"__opts__",
"[",
"'test'",
"]",
":",
"changes_old",
"[",
"'enhanced_monitoring'",
"]",
"=",
"(",
"old_enhanced_monitoring",
"if",
"old_enhanced_monitoring",
"else",
"\"None\"",
")",
"changes_new",
"[",
"'enhanced_monitoring'",
"]",
"=",
"(",
"enhanced_monitoring",
"if",
"enhanced_monitoring",
"else",
"\"None\"",
")",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: did not configure enhanced monitoring'",
".",
"format",
"(",
"name",
")",
")",
"# Reshard stream if necessary",
"min_hash_key",
",",
"max_hash_key",
",",
"full_stream_details",
"=",
"__salt__",
"[",
"'boto_kinesis.get_info_for_reshard'",
"]",
"(",
"stream_details",
")",
"old_num_shards",
"=",
"len",
"(",
"full_stream_details",
"[",
"\"OpenShards\"",
"]",
")",
"if",
"num_shards",
"is",
"not",
"None",
"and",
"do_reshard",
":",
"num_shards_matches",
"=",
"(",
"old_num_shards",
"==",
"num_shards",
")",
"if",
"not",
"num_shards_matches",
":",
"if",
"__opts__",
"[",
"'test'",
"]",
":",
"ret",
"[",
"'result'",
"]",
"=",
"None",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: would be resharded from {1} to {2} shards'",
".",
"format",
"(",
"name",
",",
"old_num_shards",
",",
"num_shards",
")",
")",
"else",
":",
"log",
".",
"info",
"(",
"'Resharding stream from %s to %s shards, this could take '",
"'a while'",
",",
"old_num_shards",
",",
"num_shards",
")",
"# reshard returns True when a split/merge action is taken,",
"# or False when no more actions are required",
"continue_reshard",
"=",
"True",
"while",
"continue_reshard",
":",
"reshard_response",
"=",
"__salt__",
"[",
"'boto_kinesis.reshard'",
"]",
"(",
"name",
",",
"num_shards",
",",
"do_reshard",
",",
"region",
",",
"key",
",",
"keyid",
",",
"profile",
")",
"if",
"'error'",
"in",
"reshard_response",
":",
"ret",
"[",
"'result'",
"]",
"=",
"False",
"comments",
".",
"append",
"(",
"'Encountered error while resharding {0}: {1}'",
".",
"format",
"(",
"name",
",",
"reshard_response",
"[",
"'error'",
"]",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret",
"continue_reshard",
"=",
"reshard_response",
"[",
"'result'",
"]",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: successfully resharded to {1} shards'",
".",
"format",
"(",
"name",
",",
"num_shards",
")",
")",
"changes_old",
"[",
"'num_shards'",
"]",
"=",
"old_num_shards",
"changes_new",
"[",
"'num_shards'",
"]",
"=",
"num_shards",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: did not require resharding, remains at {1} shards'",
".",
"format",
"(",
"name",
",",
"old_num_shards",
")",
")",
"else",
":",
"comments",
".",
"append",
"(",
"'Kinesis stream {0}: did not reshard, remains at {1} shards'",
".",
"format",
"(",
"name",
",",
"old_num_shards",
")",
")",
"_add_changes",
"(",
"ret",
",",
"changes_old",
",",
"changes_new",
",",
"comments",
")",
"return",
"ret"
] | Ensure the kinesis stream is properly configured and scaled.
name (string)
Stream name
retention_hours (int)
Retain data for this many hours.
AWS allows minimum 24 hours, maximum 168 hours.
enhanced_monitoring (list of string)
Turn on enhanced monitoring for the specified shard-level metrics.
Pass in ['ALL'] or True for all metrics, [] or False for no metrics.
Turn on individual metrics by passing in a list: ['IncomingBytes', 'OutgoingBytes']
Note that if only some metrics are supplied, the remaining metrics will be turned off.
num_shards (int)
Reshard stream (if necessary) to this number of shards
!!!!! Resharding is expensive! Each split or merge can take up to 30 seconds,
and the reshard method balances the partition space evenly.
Resharding from N to N+1 can require 2N operations.
Resharding is much faster with powers of 2 (e.g. 2^N to 2^N+1) !!!!!
do_reshard (boolean)
If set to False, this script will NEVER reshard the stream,
regardless of other input. Useful for testing.
region (string)
Region to connect to.
key (string)
Secret key to be used.
keyid (string)
Access key to be used.
profile (dict)
A dict with region, key and keyid, or a pillar key (string)
that contains a dict with region, key and keyid. | [
"Ensure",
"the",
"kinesis",
"stream",
"is",
"properly",
"configured",
"and",
"scaled",
"."
] | python | train |
pydsigner/taskit | taskit/frontend.py | https://github.com/pydsigner/taskit/blob/3b228e2dbac16b3b84b2581f5b46e027d1d8fa7f/taskit/frontend.py#L73-L82 | def _sending_task(self, backend):
"""
Used internally to safely increment `backend`'s task count. Returns the
overall count of tasks for `backend`.
"""
with self.backend_mutex:
self.backends[backend] += 1
self.task_counter[backend] += 1
this_task = self.task_counter[backend]
return this_task | [
"def",
"_sending_task",
"(",
"self",
",",
"backend",
")",
":",
"with",
"self",
".",
"backend_mutex",
":",
"self",
".",
"backends",
"[",
"backend",
"]",
"+=",
"1",
"self",
".",
"task_counter",
"[",
"backend",
"]",
"+=",
"1",
"this_task",
"=",
"self",
".",
"task_counter",
"[",
"backend",
"]",
"return",
"this_task"
] | Used internally to safely increment `backend`'s task count. Returns the
overall count of tasks for `backend`. | [
"Used",
"internally",
"to",
"safely",
"increment",
"backend",
"s",
"task",
"count",
".",
"Returns",
"the",
"overall",
"count",
"of",
"tasks",
"for",
"backend",
"."
] | python | train |
materialsproject/pymatgen | pymatgen/io/vasp/outputs.py | https://github.com/materialsproject/pymatgen/blob/4ca558cf72f8d5f8a1f21dfdfc0181a971c186da/pymatgen/io/vasp/outputs.py#L593-L612 | def final_energy(self):
"""
Final energy from the vasp run.
"""
try:
final_istep = self.ionic_steps[-1]
if final_istep["e_wo_entrp"] != final_istep[
'electronic_steps'][-1]["e_0_energy"]:
warnings.warn("Final e_wo_entrp differs from the final "
"electronic step. VASP may have included some "
"corrections, e.g., vdw. Vasprun will return "
"the final e_wo_entrp, i.e., including "
"corrections in such instances.")
return final_istep["e_wo_entrp"]
return final_istep['electronic_steps'][-1]["e_0_energy"]
except (IndexError, KeyError):
warnings.warn("Calculation does not have a total energy. "
"Possibly a GW or similar kind of run. A value of "
"infinity is returned.")
return float('inf') | [
"def",
"final_energy",
"(",
"self",
")",
":",
"try",
":",
"final_istep",
"=",
"self",
".",
"ionic_steps",
"[",
"-",
"1",
"]",
"if",
"final_istep",
"[",
"\"e_wo_entrp\"",
"]",
"!=",
"final_istep",
"[",
"'electronic_steps'",
"]",
"[",
"-",
"1",
"]",
"[",
"\"e_0_energy\"",
"]",
":",
"warnings",
".",
"warn",
"(",
"\"Final e_wo_entrp differs from the final \"",
"\"electronic step. VASP may have included some \"",
"\"corrections, e.g., vdw. Vasprun will return \"",
"\"the final e_wo_entrp, i.e., including \"",
"\"corrections in such instances.\"",
")",
"return",
"final_istep",
"[",
"\"e_wo_entrp\"",
"]",
"return",
"final_istep",
"[",
"'electronic_steps'",
"]",
"[",
"-",
"1",
"]",
"[",
"\"e_0_energy\"",
"]",
"except",
"(",
"IndexError",
",",
"KeyError",
")",
":",
"warnings",
".",
"warn",
"(",
"\"Calculation does not have a total energy. \"",
"\"Possibly a GW or similar kind of run. A value of \"",
"\"infinity is returned.\"",
")",
"return",
"float",
"(",
"'inf'",
")"
] | Final energy from the vasp run. | [
"Final",
"energy",
"from",
"the",
"vasp",
"run",
"."
] | python | train |
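A short usage sketch for the property above, assuming a finished VASP run whose vasprun.xml is parsed with pymatgen's Vasprun reader. Example (illustrative)::

    from pymatgen.io.vasp.outputs import Vasprun

    vrun = Vasprun("vasprun.xml")   # path to a completed calculation
    print(vrun.final_energy)        # e_wo_entrp of the last ionic step, in eV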
Esri/ArcREST | src/arcrest/ags/_gpobjects.py | https://github.com/Esri/ArcREST/blob/ab240fde2b0200f61d4a5f6df033516e53f2f416/src/arcrest/ags/_gpobjects.py#L193-L205 | def fromJSON(value):
"""loads the GP object from a JSON string """
j = json.loads(value)
v = GPFeatureRecordSetLayer()
if "defaultValue" in j:
v.value = j['defaultValue']
else:
v.value = j['value']
if 'paramName' in j:
v.paramName = j['paramName']
elif 'name' in j:
v.paramName = j['name']
return v | [
"def",
"fromJSON",
"(",
"value",
")",
":",
"j",
"=",
"json",
".",
"loads",
"(",
"value",
")",
"v",
"=",
"GPFeatureRecordSetLayer",
"(",
")",
"if",
"\"defaultValue\"",
"in",
"j",
":",
"v",
".",
"value",
"=",
"j",
"[",
"'defaultValue'",
"]",
"else",
":",
"v",
".",
"value",
"=",
"j",
"[",
"'value'",
"]",
"if",
"'paramName'",
"in",
"j",
":",
"v",
".",
"paramName",
"=",
"j",
"[",
"'paramName'",
"]",
"elif",
"'name'",
"in",
"j",
":",
"v",
".",
"paramName",
"=",
"j",
"[",
"'name'",
"]",
"return",
"v"
] | loads the GP object from a JSON string | [
"loads",
"the",
"GP",
"object",
"from",
"a",
"JSON",
"string"
] | python | train |
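A sketch of loading a GP parameter through fromJSON() above; the JSON payload is a minimal made-up example of a GP service response, and fromJSON is assumed to be exposed as a static method of GPFeatureRecordSetLayer, as its body suggests. Example (illustrative)::

    import json

    payload = json.dumps({
        "paramName": "Output_Features",
        "dataType": "GPFeatureRecordSetLayer",
        "value": {"geometryType": "esriGeometryPoint", "features": []},
    })
    gp_layer = GPFeatureRecordSetLayer.fromJSON(payload)
    print(gp_layer.paramName, gp_layer.value)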
glue-viz/glue-vispy-viewers | glue_vispy_viewers/extern/vispy/scene/cameras/magnify.py | https://github.com/glue-viz/glue-vispy-viewers/blob/54a4351d98c1f90dfb1a557d1b447c1f57470eea/glue_vispy_viewers/extern/vispy/scene/cameras/magnify.py#L83-L110 | def viewbox_mouse_event(self, event):
"""ViewBox mouse event handler
Parameters
----------
event : instance of Event
The mouse event.
"""
# When the attached ViewBox reseives a mouse event, it is sent to the
# camera here.
self.mouse_pos = event.pos[:2]
if event.type == 'mouse_wheel':
# wheel rolled; adjust the magnification factor and hide the
# event from the superclass
m = self.mag_target
m *= 1.2 ** event.delta[1]
m = m if m > 1 else 1
self.mag_target = m
else:
# send everything _except_ wheel events to the superclass
super(MagnifyCamera, self).viewbox_mouse_event(event)
# start the timer to smoothly modify the transform properties.
if not self.timer.running:
self.timer.start()
self._update_transform() | [
"def",
"viewbox_mouse_event",
"(",
"self",
",",
"event",
")",
":",
"# When the attached ViewBox reseives a mouse event, it is sent to the",
"# camera here.",
"self",
".",
"mouse_pos",
"=",
"event",
".",
"pos",
"[",
":",
"2",
"]",
"if",
"event",
".",
"type",
"==",
"'mouse_wheel'",
":",
"# wheel rolled; adjust the magnification factor and hide the ",
"# event from the superclass",
"m",
"=",
"self",
".",
"mag_target",
"m",
"*=",
"1.2",
"**",
"event",
".",
"delta",
"[",
"1",
"]",
"m",
"=",
"m",
"if",
"m",
">",
"1",
"else",
"1",
"self",
".",
"mag_target",
"=",
"m",
"else",
":",
"# send everything _except_ wheel events to the superclass",
"super",
"(",
"MagnifyCamera",
",",
"self",
")",
".",
"viewbox_mouse_event",
"(",
"event",
")",
"# start the timer to smoothly modify the transform properties. ",
"if",
"not",
"self",
".",
"timer",
".",
"running",
":",
"self",
".",
"timer",
".",
"start",
"(",
")",
"self",
".",
"_update_transform",
"(",
")"
] | ViewBox mouse event handler
Parameters
----------
event : instance of Event
The mouse event. | [
"ViewBox",
"mouse",
"event",
"handler"
] | python | train |
HubSpot/hapipy | hapi/broadcast.py | https://github.com/HubSpot/hapipy/blob/6c492ec09aaa872b1b2177454b8c446678a0b9ed/hapi/broadcast.py#L108-L115 | def get_broadcast(self, broadcast_guid, **kwargs):
'''
Get a specific broadcast by guid
'''
params = kwargs
broadcast = self._call('broadcasts/%s' % broadcast_guid,
params=params, content_type='application/json')
return Broadcast(broadcast) | [
"def",
"get_broadcast",
"(",
"self",
",",
"broadcast_guid",
",",
"*",
"*",
"kwargs",
")",
":",
"params",
"=",
"kwargs",
"broadcast",
"=",
"self",
".",
"_call",
"(",
"'broadcasts/%s'",
"%",
"broadcast_guid",
",",
"params",
"=",
"params",
",",
"content_type",
"=",
"'application/json'",
")",
"return",
"Broadcast",
"(",
"broadcast",
")"
] | Get a specific broadcast by guid | [
"Get",
"a",
"specific",
"broadcast",
"by",
"guid"
] | python | train |
astropy/photutils | photutils/segmentation/properties.py | https://github.com/astropy/photutils/blob/cc9bb4534ab76bac98cb5f374a348a2573d10401/photutils/segmentation/properties.py#L526-L539 | def sky_centroid(self):
"""
The sky coordinates of the centroid within the source segment,
returned as a `~astropy.coordinates.SkyCoord` object.
The output coordinate frame is the same as the input WCS.
"""
if self._wcs is not None:
return pixel_to_skycoord(self.xcentroid.value,
self.ycentroid.value,
self._wcs, origin=0)
else:
return None | [
"def",
"sky_centroid",
"(",
"self",
")",
":",
"if",
"self",
".",
"_wcs",
"is",
"not",
"None",
":",
"return",
"pixel_to_skycoord",
"(",
"self",
".",
"xcentroid",
".",
"value",
",",
"self",
".",
"ycentroid",
".",
"value",
",",
"self",
".",
"_wcs",
",",
"origin",
"=",
"0",
")",
"else",
":",
"return",
"None"
] | The sky coordinates of the centroid within the source segment,
returned as a `~astropy.coordinates.SkyCoord` object.
The output coordinate frame is the same as the input WCS. | [
"The",
"sky",
"coordinates",
"of",
"the",
"centroid",
"within",
"the",
"source",
"segment",
"returned",
"as",
"a",
"~astropy",
".",
"coordinates",
".",
"SkyCoord",
"object",
"."
] | python | train |
mitsei/dlkit | dlkit/json_/cataloging/sessions.py | https://github.com/mitsei/dlkit/blob/445f968a175d61c8d92c0f617a3c17dc1dc7c584/dlkit/json_/cataloging/sessions.py#L470-L494 | def can_create_catalog_with_record_types(self, catalog_record_types):
"""Tests if this user can create a single ``Catalog`` using the desired record types.
While ``CatalogingManager.getCatalogRecordTypes()`` can be used
to examine which records are supported, this method tests which
record(s) are required for creating a specific ``Catalog``.
Providing an empty array tests if a ``Catalog`` can be created
with no records.
arg: catalog_record_types (osid.type.Type[]): array of
catalog record types
return: (boolean) - ``true`` if ``Catalog`` creation using the
specified record ``Types`` is supported, ``false``
otherwise
raise: NullArgument - ``catalog_record_types`` is ``null``
*compliance: mandatory -- This method must be implemented.*
"""
# Implemented from template for
# osid.resource.BinAdminSession.can_create_bin_with_record_types
# NOTE: It is expected that real authentication hints will be
# handled in a service adapter above the pay grade of this impl.
if self._catalog_session is not None:
return self._catalog_session.can_create_catalog_with_record_types(catalog_record_types=catalog_record_types)
return True | [
"def",
"can_create_catalog_with_record_types",
"(",
"self",
",",
"catalog_record_types",
")",
":",
"# Implemented from template for",
"# osid.resource.BinAdminSession.can_create_bin_with_record_types",
"# NOTE: It is expected that real authentication hints will be",
"# handled in a service adapter above the pay grade of this impl.",
"if",
"self",
".",
"_catalog_session",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_catalog_session",
".",
"can_create_catalog_with_record_types",
"(",
"catalog_record_types",
"=",
"catalog_record_types",
")",
"return",
"True"
] | Tests if this user can create a single ``Catalog`` using the desired record types.
While ``CatalogingManager.getCatalogRecordTypes()`` can be used
to examine which records are supported, this method tests which
record(s) are required for creating a specific ``Catalog``.
Providing an empty array tests if a ``Catalog`` can be created
with no records.
arg: catalog_record_types (osid.type.Type[]): array of
catalog record types
return: (boolean) - ``true`` if ``Catalog`` creation using the
specified record ``Types`` is supported, ``false``
otherwise
raise: NullArgument - ``catalog_record_types`` is ``null``
*compliance: mandatory -- This method must be implemented.* | [
"Tests",
"if",
"this",
"user",
"can",
"create",
"a",
"single",
"Catalog",
"using",
"the",
"desired",
"record",
"types",
"."
] | python | train |
awickert/gFlex | gflex/f2d.py | https://github.com/awickert/gFlex/blob/3ac32249375b0f8d342a142585d86ea4d905a5a0/gflex/f2d.py#L248-L334 | def BC_Rigidity(self):
"""
Utility function to help implement boundary conditions by specifying
them for and applying them to the elastic thickness grid
"""
#########################################
# FLEXURAL RIGIDITY BOUNDARY CONDITIONS #
#########################################
# West
if self.BC_W == 'Periodic':
self.BC_Rigidity_W = 'periodic'
elif (self.BC_W == np.array(['0Displacement0Slope', '0Moment0Shear', '0Slope0Shear'])).any():
self.BC_Rigidity_W = '0 curvature'
elif self.BC_W == 'Mirror':
self.BC_Rigidity_W = 'mirror symmetry'
else:
sys.exit("Invalid Te B.C. case")
# East
if self.BC_E == 'Periodic':
self.BC_Rigidity_E = 'periodic'
elif (self.BC_E == np.array(['0Displacement0Slope', '0Moment0Shear', '0Slope0Shear'])).any():
self.BC_Rigidity_E = '0 curvature'
elif self.BC_E == 'Mirror':
self.BC_Rigidity_E = 'mirror symmetry'
else:
sys.exit("Invalid Te B.C. case")
# North
if self.BC_N == 'Periodic':
self.BC_Rigidity_N = 'periodic'
elif (self.BC_N == np.array(['0Displacement0Slope', '0Moment0Shear', '0Slope0Shear'])).any():
self.BC_Rigidity_N = '0 curvature'
elif self.BC_N == 'Mirror':
self.BC_Rigidity_N = 'mirror symmetry'
else:
sys.exit("Invalid Te B.C. case")
# South
if self.BC_S == 'Periodic':
self.BC_Rigidity_S = 'periodic'
elif (self.BC_S == np.array(['0Displacement0Slope', '0Moment0Shear', '0Slope0Shear'])).any():
self.BC_Rigidity_S = '0 curvature'
elif self.BC_S == 'Mirror':
self.BC_Rigidity_S = 'mirror symmetry'
else:
sys.exit("Invalid Te B.C. case")
#############
# PAD ARRAY #
#############
if np.isscalar(self.Te):
self.D *= np.ones(self.qs.shape) # And leave Te as a scalar for checks
else:
self.Te_unpadded = self.Te.copy()
self.Te = np.hstack(( np.nan*np.zeros((self.Te.shape[0], 1)), self.Te, np.nan*np.zeros((self.Te.shape[0], 1)) ))
self.Te = np.vstack(( np.nan*np.zeros(self.Te.shape[1]), self.Te, np.nan*np.zeros(self.Te.shape[1]) ))
self.D = np.hstack(( np.nan*np.zeros((self.D.shape[0], 1)), self.D, np.nan*np.zeros((self.D.shape[0], 1)) ))
self.D = np.vstack(( np.nan*np.zeros(self.D.shape[1]), self.D, np.nan*np.zeros(self.D.shape[1]) ))
###############################################################
# APPLY FLEXURAL RIGIDITY BOUNDARY CONDITIONS TO PADDED ARRAY #
###############################################################
if self.BC_Rigidity_W == "0 curvature":
self.D[:,0] = 2*self.D[:,1] - self.D[:,2]
if self.BC_Rigidity_E == "0 curvature":
self.D[:,-1] = 2*self.D[:,-2] - self.D[:,-3]
if self.BC_Rigidity_N == "0 curvature":
self.D[0,:] = 2*self.D[1,:] - self.D[2,:]
if self.BC_Rigidity_S == "0 curvature":
self.D[-1,:] = 2*self.D[-2,:] - self.D[-3,:]
if self.BC_Rigidity_W == "mirror symmetry":
self.D[:,0] = self.D[:,2]
if self.BC_Rigidity_E == "mirror symmetry":
self.D[:,-1] = self.D[:,-3]
if self.BC_Rigidity_N == "mirror symmetry":
self.D[0,:] = self.D[2,:] # Yes, will work on corners -- double-reflection
if self.BC_Rigidity_S == "mirror symmetry":
self.D[-1,:] = self.D[-3,:]
if self.BC_Rigidity_W == "periodic":
self.D[:,0] = self.D[:,-2]
if self.BC_Rigidity_E == "periodic":
self.D[:,-1] = self.D[:,-3]
if self.BC_Rigidity_N == "periodic":
self.D[0,:] = self.D[-2,:]
if self.BC_Rigidity_S == "periodic":
self.D[-1,:] = self.D[-3,:] | [
"def",
"BC_Rigidity",
"(",
"self",
")",
":",
"#########################################",
"# FLEXURAL RIGIDITY BOUNDARY CONDITIONS #",
"#########################################",
"# West",
"if",
"self",
".",
"BC_W",
"==",
"'Periodic'",
":",
"self",
".",
"BC_Rigidity_W",
"=",
"'periodic'",
"elif",
"(",
"self",
".",
"BC_W",
"==",
"np",
".",
"array",
"(",
"[",
"'0Displacement0Slope'",
",",
"'0Moment0Shear'",
",",
"'0Slope0Shear'",
"]",
")",
")",
".",
"any",
"(",
")",
":",
"self",
".",
"BC_Rigidity_W",
"=",
"'0 curvature'",
"elif",
"self",
".",
"BC_W",
"==",
"'Mirror'",
":",
"self",
".",
"BC_Rigidity_W",
"=",
"'mirror symmetry'",
"else",
":",
"sys",
".",
"exit",
"(",
"\"Invalid Te B.C. case\"",
")",
"# East",
"if",
"self",
".",
"BC_E",
"==",
"'Periodic'",
":",
"self",
".",
"BC_Rigidity_E",
"=",
"'periodic'",
"elif",
"(",
"self",
".",
"BC_E",
"==",
"np",
".",
"array",
"(",
"[",
"'0Displacement0Slope'",
",",
"'0Moment0Shear'",
",",
"'0Slope0Shear'",
"]",
")",
")",
".",
"any",
"(",
")",
":",
"self",
".",
"BC_Rigidity_E",
"=",
"'0 curvature'",
"elif",
"self",
".",
"BC_E",
"==",
"'Mirror'",
":",
"self",
".",
"BC_Rigidity_E",
"=",
"'mirror symmetry'",
"else",
":",
"sys",
".",
"exit",
"(",
"\"Invalid Te B.C. case\"",
")",
"# North",
"if",
"self",
".",
"BC_N",
"==",
"'Periodic'",
":",
"self",
".",
"BC_Rigidity_N",
"=",
"'periodic'",
"elif",
"(",
"self",
".",
"BC_N",
"==",
"np",
".",
"array",
"(",
"[",
"'0Displacement0Slope'",
",",
"'0Moment0Shear'",
",",
"'0Slope0Shear'",
"]",
")",
")",
".",
"any",
"(",
")",
":",
"self",
".",
"BC_Rigidity_N",
"=",
"'0 curvature'",
"elif",
"self",
".",
"BC_N",
"==",
"'Mirror'",
":",
"self",
".",
"BC_Rigidity_N",
"=",
"'mirror symmetry'",
"else",
":",
"sys",
".",
"exit",
"(",
"\"Invalid Te B.C. case\"",
")",
"# South",
"if",
"self",
".",
"BC_S",
"==",
"'Periodic'",
":",
"self",
".",
"BC_Rigidity_S",
"=",
"'periodic'",
"elif",
"(",
"self",
".",
"BC_S",
"==",
"np",
".",
"array",
"(",
"[",
"'0Displacement0Slope'",
",",
"'0Moment0Shear'",
",",
"'0Slope0Shear'",
"]",
")",
")",
".",
"any",
"(",
")",
":",
"self",
".",
"BC_Rigidity_S",
"=",
"'0 curvature'",
"elif",
"self",
".",
"BC_S",
"==",
"'Mirror'",
":",
"self",
".",
"BC_Rigidity_S",
"=",
"'mirror symmetry'",
"else",
":",
"sys",
".",
"exit",
"(",
"\"Invalid Te B.C. case\"",
")",
"#############",
"# PAD ARRAY #",
"#############",
"if",
"np",
".",
"isscalar",
"(",
"self",
".",
"Te",
")",
":",
"self",
".",
"D",
"*=",
"np",
".",
"ones",
"(",
"self",
".",
"qs",
".",
"shape",
")",
"# And leave Te as a scalar for checks",
"else",
":",
"self",
".",
"Te_unpadded",
"=",
"self",
".",
"Te",
".",
"copy",
"(",
")",
"self",
".",
"Te",
"=",
"np",
".",
"hstack",
"(",
"(",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"(",
"self",
".",
"Te",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
",",
"self",
".",
"Te",
",",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"(",
"self",
".",
"Te",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
")",
")",
"self",
".",
"Te",
"=",
"np",
".",
"vstack",
"(",
"(",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"self",
".",
"Te",
".",
"shape",
"[",
"1",
"]",
")",
",",
"self",
".",
"Te",
",",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"self",
".",
"Te",
".",
"shape",
"[",
"1",
"]",
")",
")",
")",
"self",
".",
"D",
"=",
"np",
".",
"hstack",
"(",
"(",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"(",
"self",
".",
"D",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
",",
"self",
".",
"D",
",",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"(",
"self",
".",
"D",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
")",
")",
"self",
".",
"D",
"=",
"np",
".",
"vstack",
"(",
"(",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"self",
".",
"D",
".",
"shape",
"[",
"1",
"]",
")",
",",
"self",
".",
"D",
",",
"np",
".",
"nan",
"*",
"np",
".",
"zeros",
"(",
"self",
".",
"D",
".",
"shape",
"[",
"1",
"]",
")",
")",
")",
"###############################################################",
"# APPLY FLEXURAL RIGIDITY BOUNDARY CONDITIONS TO PADDED ARRAY #",
"###############################################################",
"if",
"self",
".",
"BC_Rigidity_W",
"==",
"\"0 curvature\"",
":",
"self",
".",
"D",
"[",
":",
",",
"0",
"]",
"=",
"2",
"*",
"self",
".",
"D",
"[",
":",
",",
"1",
"]",
"-",
"self",
".",
"D",
"[",
":",
",",
"2",
"]",
"if",
"self",
".",
"BC_Rigidity_E",
"==",
"\"0 curvature\"",
":",
"self",
".",
"D",
"[",
":",
",",
"-",
"1",
"]",
"=",
"2",
"*",
"self",
".",
"D",
"[",
":",
",",
"-",
"2",
"]",
"-",
"self",
".",
"D",
"[",
":",
",",
"-",
"3",
"]",
"if",
"self",
".",
"BC_Rigidity_N",
"==",
"\"0 curvature\"",
":",
"self",
".",
"D",
"[",
"0",
",",
":",
"]",
"=",
"2",
"*",
"self",
".",
"D",
"[",
"1",
",",
":",
"]",
"-",
"self",
".",
"D",
"[",
"2",
",",
":",
"]",
"if",
"self",
".",
"BC_Rigidity_S",
"==",
"\"0 curvature\"",
":",
"self",
".",
"D",
"[",
"-",
"1",
",",
":",
"]",
"=",
"2",
"*",
"self",
".",
"D",
"[",
"-",
"2",
",",
":",
"]",
"-",
"self",
".",
"D",
"[",
"-",
"3",
",",
":",
"]",
"if",
"self",
".",
"BC_Rigidity_W",
"==",
"\"mirror symmetry\"",
":",
"self",
".",
"D",
"[",
":",
",",
"0",
"]",
"=",
"self",
".",
"D",
"[",
":",
",",
"2",
"]",
"if",
"self",
".",
"BC_Rigidity_E",
"==",
"\"mirror symmetry\"",
":",
"self",
".",
"D",
"[",
":",
",",
"-",
"1",
"]",
"=",
"self",
".",
"D",
"[",
":",
",",
"-",
"3",
"]",
"if",
"self",
".",
"BC_Rigidity_N",
"==",
"\"mirror symmetry\"",
":",
"self",
".",
"D",
"[",
"0",
",",
":",
"]",
"=",
"self",
".",
"D",
"[",
"2",
",",
":",
"]",
"# Yes, will work on corners -- double-reflection",
"if",
"self",
".",
"BC_Rigidity_S",
"==",
"\"mirror symmetry\"",
":",
"self",
".",
"D",
"[",
"-",
"1",
",",
":",
"]",
"=",
"self",
".",
"D",
"[",
"-",
"3",
",",
":",
"]",
"if",
"self",
".",
"BC_Rigidity_W",
"==",
"\"periodic\"",
":",
"self",
".",
"D",
"[",
":",
",",
"0",
"]",
"=",
"self",
".",
"D",
"[",
":",
",",
"-",
"2",
"]",
"if",
"self",
".",
"BC_Rigidity_E",
"==",
"\"periodic\"",
":",
"self",
".",
"D",
"[",
":",
",",
"-",
"1",
"]",
"=",
"self",
".",
"D",
"[",
":",
",",
"-",
"3",
"]",
"if",
"self",
".",
"BC_Rigidity_N",
"==",
"\"periodic\"",
":",
"self",
".",
"D",
"[",
"0",
",",
":",
"]",
"=",
"self",
".",
"D",
"[",
"-",
"2",
",",
":",
"]",
"if",
"self",
".",
"BC_Rigidity_S",
"==",
"\"periodic\"",
":",
"self",
".",
"D",
"[",
"-",
"1",
",",
":",
"]",
"=",
"self",
".",
"D",
"[",
"-",
"3",
",",
":",
"]"
] | Utility function to help implement boundary conditions by specifying
them for and applying them to the elastic thickness grid | [
"Utility",
"function",
"to",
"help",
"implement",
"boundary",
"conditions",
"by",
"specifying",
"them",
"for",
"and",
"applying",
"them",
"to",
"the",
"elastic",
"thickness",
"grid"
] | python | train |
dcramer/piplint | src/piplint/__init__.py | https://github.com/dcramer/piplint/blob/134b90f4c5adbeb1de73a8e507503d4b7544f6d2/src/piplint/__init__.py#L39-L204 | def check_requirements(requirement_files, strict=False, error_on_extras=False, verbose=False,
venv=None, do_colour=False):
"""
Given a list of requirements files, checks them against the installed
packages in the currentl environment. If any are missing, or do not fit
within the version bounds, exits with a code of 1 and outputs information
about the missing dependency.
"""
colour = TextColours(do_colour)
version_re = re.compile(r'^([^<>=\s#]+)\s*(>=|>|<|<=|==|===)?\s*([^<>=\s#]+)?(?:\s*#.*)?$')
def parse_package_line(line):
try:
if line.startswith('-e'):
return parse_checkout_line(line)
package, compare, version = version_re.split(line)[1:-1]
except ValueError:
raise ValueError("Unknown package line format: %r" % line)
return (package, compare or None, parse_version(version) if version else None, line)
def parse_checkout_line(whole_line):
"""
parse a line that starts with '-e'
e.g.,
-e git://github.com/jcrocholl/pep8.git@bb20999aefc394fb826371764146bf61d8e572e2#egg=pep8-dev
"""
# Snip off the '-e' and any leading whitespace
line = whole_line[2:].lstrip()
# Check if there is a revision specified
if '@' in line:
url, last_bit = line.rsplit('@', 1)
rev, eggname = last_bit.split('#', 1)
return (url, '==', rev, line)
else:
(url, eggname) = line.split('#')
return (url, None, None, line)
def is_requirements_line(line):
"""
line is a valid requirement in requirements file or pip freeze output
"""
if not line:
return False
if line.startswith('#'):
return False
if line.endswith(' # optional'):
return False
if line.startswith('-e'):
return True
if line.startswith('-'):
return False
if line.startswith('http://') or line.startswith('https://'):
return False
return True
def valid_version(version, compare, r_version):
if not all([compare, version]):
return True
if compare in ('==', '==='):
return version == r_version
elif compare == '<=':
return version <= r_version
elif compare == '>=':
return version >= r_version
elif compare == '<':
return version < r_version
elif compare == '>':
return version > r_version
raise ValueError("Unknown comparison operator: %r" % compare)
frozen_reqs = []
unknown_reqs = set()
listed_reqs = []
args = 'pip freeze'
if venv is not None:
args = venv + "/bin/" + args
freeze = Popen([args], stdout=PIPE, shell=True)
for line in freeze.communicate()[0].splitlines():
line = line.strip()
if not is_requirements_line(line):
unknown_reqs.add(line)
continue
frozen_reqs.append(parse_package_line(line))
# Requirements files may include other requirements files;
# if so, add to list.
included_files = []
for file in requirement_files:
path = os.path.dirname(file)
args = "grep '^\-r' %s" % file
grep = Popen([args], stdout=PIPE, shell=True)
for line in grep.communicate()[0].splitlines():
included_files.append(os.path.join(path, line[2:].lstrip()))
requirement_files.extend(included_files)
for fname in requirement_files:
with open(fname) as fp:
for line in fp:
line = line.strip()
if not is_requirements_line(line):
continue
listed_reqs.append(parse_package_line(line))
unknown_reqs.update(set(r[0] for r in frozen_reqs).difference(set(r[0] for r in listed_reqs)))
errors = []
for r_package, r_compare, r_version, r_line in listed_reqs:
r_package_lower = r_package.lower()
found = False
for package, _, version, line in frozen_reqs:
if package.lower() == r_package_lower:
found = True
if strict and package != r_package:
unknown_reqs.remove(package)
errors.append(
"%sUnexpected capitalization found: %r is required but %r is installed.%s"
% (colour.WARNING, r_package, package, colour.ENDC)
)
elif valid_version(version, r_compare, r_version):
unknown_reqs.discard(package)
if verbose:
print("%sRequirement is installed correctly: %s%s"
% (colour.OKGREEN, r_line, colour.ENDC))
else:
errors.append(
"%sUnexpected version of %r found: %r is required but %r is installed.%s"
% (colour.WARNING, r_package, r_line, line, colour.ENDC)
)
break
if not found:
errors.append("%sRequirement %r not installed in virtualenv.%s"
% (colour.FAIL, r_package, colour.ENDC))
code = 0
if unknown_reqs:
print("\nFor debugging purposes, the following packages are installed but not in the requirements file(s):")
unknown_reqs_with_versions = []
for unknown_req in unknown_reqs:
for req in frozen_reqs:
if unknown_req == req[0]:
unknown_reqs_with_versions.append(req[3])
break
print("%s%s%s\n" % (colour.WARNING,
"\n".join(sorted(unknown_reqs_with_versions)),
colour.ENDC))
if error_on_extras:
code = 2
if errors:
print("Errors found:")
print('\n'.join(errors))
print("You must correct your environment before committing (and running tests).\n")
code = 1
if not errors and not unknown_reqs:
print("%sNo errors found; all packages accounted for!%s"
% (colour.OKGREEN, colour.ENDC))
return code | [
"def",
"check_requirements",
"(",
"requirement_files",
",",
"strict",
"=",
"False",
",",
"error_on_extras",
"=",
"False",
",",
"verbose",
"=",
"False",
",",
"venv",
"=",
"None",
",",
"do_colour",
"=",
"False",
")",
":",
"colour",
"=",
"TextColours",
"(",
"do_colour",
")",
"version_re",
"=",
"re",
".",
"compile",
"(",
"r'^([^<>=\\s#]+)\\s*(>=|>|<|<=|==|===)?\\s*([^<>=\\s#]+)?(?:\\s*#.*)?$'",
")",
"def",
"parse_package_line",
"(",
"line",
")",
":",
"try",
":",
"if",
"line",
".",
"startswith",
"(",
"'-e'",
")",
":",
"return",
"parse_checkout_line",
"(",
"line",
")",
"package",
",",
"compare",
",",
"version",
"=",
"version_re",
".",
"split",
"(",
"line",
")",
"[",
"1",
":",
"-",
"1",
"]",
"except",
"ValueError",
":",
"raise",
"ValueError",
"(",
"\"Unknown package line format: %r\"",
"%",
"line",
")",
"return",
"(",
"package",
",",
"compare",
"or",
"None",
",",
"parse_version",
"(",
"version",
")",
"if",
"version",
"else",
"None",
",",
"line",
")",
"def",
"parse_checkout_line",
"(",
"whole_line",
")",
":",
"\"\"\"\n parse a line that starts with '-e'\n\n e.g.,\n -e git://github.com/jcrocholl/pep8.git@bb20999aefc394fb826371764146bf61d8e572e2#egg=pep8-dev\n \"\"\"",
"# Snip off the '-e' and any leading whitespace",
"line",
"=",
"whole_line",
"[",
"2",
":",
"]",
".",
"lstrip",
"(",
")",
"# Check if there is a revision specified",
"if",
"'@'",
"in",
"line",
":",
"url",
",",
"last_bit",
"=",
"line",
".",
"rsplit",
"(",
"'@'",
",",
"1",
")",
"rev",
",",
"eggname",
"=",
"last_bit",
".",
"split",
"(",
"'#'",
",",
"1",
")",
"return",
"(",
"url",
",",
"'=='",
",",
"rev",
",",
"line",
")",
"else",
":",
"(",
"url",
",",
"eggname",
")",
"=",
"line",
".",
"split",
"(",
"'#'",
")",
"return",
"(",
"url",
",",
"None",
",",
"None",
",",
"line",
")",
"def",
"is_requirements_line",
"(",
"line",
")",
":",
"\"\"\"\n line is a valid requirement in requirements file or pip freeze output\n \"\"\"",
"if",
"not",
"line",
":",
"return",
"False",
"if",
"line",
".",
"startswith",
"(",
"'#'",
")",
":",
"return",
"False",
"if",
"line",
".",
"endswith",
"(",
"' # optional'",
")",
":",
"return",
"False",
"if",
"line",
".",
"startswith",
"(",
"'-e'",
")",
":",
"return",
"True",
"if",
"line",
".",
"startswith",
"(",
"'-'",
")",
":",
"return",
"False",
"if",
"line",
".",
"startswith",
"(",
"'http://'",
")",
"or",
"line",
".",
"startswith",
"(",
"'https://'",
")",
":",
"return",
"False",
"return",
"True",
"def",
"valid_version",
"(",
"version",
",",
"compare",
",",
"r_version",
")",
":",
"if",
"not",
"all",
"(",
"[",
"compare",
",",
"version",
"]",
")",
":",
"return",
"True",
"if",
"compare",
"in",
"(",
"'=='",
",",
"'==='",
")",
":",
"return",
"version",
"==",
"r_version",
"elif",
"compare",
"==",
"'<='",
":",
"return",
"version",
"<=",
"r_version",
"elif",
"compare",
"==",
"'>='",
":",
"return",
"version",
">=",
"r_version",
"elif",
"compare",
"==",
"'<'",
":",
"return",
"version",
"<",
"r_version",
"elif",
"compare",
"==",
"'>'",
":",
"return",
"version",
">",
"r_version",
"raise",
"ValueError",
"(",
"\"Unknown comparison operator: %r\"",
"%",
"compare",
")",
"frozen_reqs",
"=",
"[",
"]",
"unknown_reqs",
"=",
"set",
"(",
")",
"listed_reqs",
"=",
"[",
"]",
"args",
"=",
"'pip freeze'",
"if",
"venv",
"is",
"not",
"None",
":",
"args",
"=",
"venv",
"+",
"\"/bin/\"",
"+",
"args",
"freeze",
"=",
"Popen",
"(",
"[",
"args",
"]",
",",
"stdout",
"=",
"PIPE",
",",
"shell",
"=",
"True",
")",
"for",
"line",
"in",
"freeze",
".",
"communicate",
"(",
")",
"[",
"0",
"]",
".",
"splitlines",
"(",
")",
":",
"line",
"=",
"line",
".",
"strip",
"(",
")",
"if",
"not",
"is_requirements_line",
"(",
"line",
")",
":",
"unknown_reqs",
".",
"add",
"(",
"line",
")",
"continue",
"frozen_reqs",
".",
"append",
"(",
"parse_package_line",
"(",
"line",
")",
")",
"# Requirements files may include other requirements files;",
"# if so, add to list.",
"included_files",
"=",
"[",
"]",
"for",
"file",
"in",
"requirement_files",
":",
"path",
"=",
"os",
".",
"path",
".",
"dirname",
"(",
"file",
")",
"args",
"=",
"\"grep '^\\-r' %s\"",
"%",
"file",
"grep",
"=",
"Popen",
"(",
"[",
"args",
"]",
",",
"stdout",
"=",
"PIPE",
",",
"shell",
"=",
"True",
")",
"for",
"line",
"in",
"grep",
".",
"communicate",
"(",
")",
"[",
"0",
"]",
".",
"splitlines",
"(",
")",
":",
"included_files",
".",
"append",
"(",
"os",
".",
"path",
".",
"join",
"(",
"path",
",",
"line",
"[",
"2",
":",
"]",
".",
"lstrip",
"(",
")",
")",
")",
"requirement_files",
".",
"extend",
"(",
"included_files",
")",
"for",
"fname",
"in",
"requirement_files",
":",
"with",
"open",
"(",
"fname",
")",
"as",
"fp",
":",
"for",
"line",
"in",
"fp",
":",
"line",
"=",
"line",
".",
"strip",
"(",
")",
"if",
"not",
"is_requirements_line",
"(",
"line",
")",
":",
"continue",
"listed_reqs",
".",
"append",
"(",
"parse_package_line",
"(",
"line",
")",
")",
"unknown_reqs",
".",
"update",
"(",
"set",
"(",
"r",
"[",
"0",
"]",
"for",
"r",
"in",
"frozen_reqs",
")",
".",
"difference",
"(",
"set",
"(",
"r",
"[",
"0",
"]",
"for",
"r",
"in",
"listed_reqs",
")",
")",
")",
"errors",
"=",
"[",
"]",
"for",
"r_package",
",",
"r_compare",
",",
"r_version",
",",
"r_line",
"in",
"listed_reqs",
":",
"r_package_lower",
"=",
"r_package",
".",
"lower",
"(",
")",
"found",
"=",
"False",
"for",
"package",
",",
"_",
",",
"version",
",",
"line",
"in",
"frozen_reqs",
":",
"if",
"package",
".",
"lower",
"(",
")",
"==",
"r_package_lower",
":",
"found",
"=",
"True",
"if",
"strict",
"and",
"package",
"!=",
"r_package",
":",
"unknown_reqs",
".",
"remove",
"(",
"package",
")",
"errors",
".",
"append",
"(",
"\"%sUnexpected capitalization found: %r is required but %r is installed.%s\"",
"%",
"(",
"colour",
".",
"WARNING",
",",
"r_package",
",",
"package",
",",
"colour",
".",
"ENDC",
")",
")",
"elif",
"valid_version",
"(",
"version",
",",
"r_compare",
",",
"r_version",
")",
":",
"unknown_reqs",
".",
"discard",
"(",
"package",
")",
"if",
"verbose",
":",
"print",
"(",
"\"%sRequirement is installed correctly: %s%s\"",
"%",
"(",
"colour",
".",
"OKGREEN",
",",
"r_line",
",",
"colour",
".",
"ENDC",
")",
")",
"else",
":",
"errors",
".",
"append",
"(",
"\"%sUnexpected version of %r found: %r is required but %r is installed.%s\"",
"%",
"(",
"colour",
".",
"WARNING",
",",
"r_package",
",",
"r_line",
",",
"line",
",",
"colour",
".",
"ENDC",
")",
")",
"break",
"if",
"not",
"found",
":",
"errors",
".",
"append",
"(",
"\"%sRequirement %r not installed in virtualenv.%s\"",
"%",
"(",
"colour",
".",
"FAIL",
",",
"r_package",
",",
"colour",
".",
"ENDC",
")",
")",
"code",
"=",
"0",
"if",
"unknown_reqs",
":",
"print",
"(",
"\"\\nFor debugging purposes, the following packages are installed but not in the requirements file(s):\"",
")",
"unknown_reqs_with_versions",
"=",
"[",
"]",
"for",
"unknown_req",
"in",
"unknown_reqs",
":",
"for",
"req",
"in",
"frozen_reqs",
":",
"if",
"unknown_req",
"==",
"req",
"[",
"0",
"]",
":",
"unknown_reqs_with_versions",
".",
"append",
"(",
"req",
"[",
"3",
"]",
")",
"break",
"print",
"(",
"\"%s%s%s\\n\"",
"%",
"(",
"colour",
".",
"WARNING",
",",
"\"\\n\"",
".",
"join",
"(",
"sorted",
"(",
"unknown_reqs_with_versions",
")",
")",
",",
"colour",
".",
"ENDC",
")",
")",
"if",
"error_on_extras",
":",
"code",
"=",
"2",
"if",
"errors",
":",
"print",
"(",
"\"Errors found:\"",
")",
"print",
"(",
"'\\n'",
".",
"join",
"(",
"errors",
")",
")",
"print",
"(",
"\"You must correct your environment before committing (and running tests).\\n\"",
")",
"code",
"=",
"1",
"if",
"not",
"errors",
"and",
"not",
"unknown_reqs",
":",
"print",
"(",
"\"%sNo errors found; all packages accounted for!%s\"",
"%",
"(",
"colour",
".",
"OKGREEN",
",",
"colour",
".",
"ENDC",
")",
")",
"return",
"code"
] | Given a list of requirements files, checks them against the installed
packages in the currentl environment. If any are missing, or do not fit
within the version bounds, exits with a code of 1 and outputs information
about the missing dependency. | [
"Given",
"a",
"list",
"of",
"requirements",
"files",
"checks",
"them",
"against",
"the",
"installed",
"packages",
"in",
"the",
"currentl",
"environment",
".",
"If",
"any",
"are",
"missing",
"or",
"do",
"not",
"fit",
"within",
"the",
"version",
"bounds",
"exits",
"with",
"a",
"code",
"of",
"1",
"and",
"outputs",
"information",
"about",
"the",
"missing",
"dependency",
"."
] | python | train |
rigetti/pyquil | pyquil/quil.py | https://github.com/rigetti/pyquil/blob/ec98e453084b0037d69d8c3245f6822a5422593d/pyquil/quil.py#L941-L974 | def merge_programs(prog_list):
"""
Merges a list of pyQuil programs into a single one by appending them in sequence.
If multiple programs in the list contain the same gate and/or noisy gate definition
with identical name, this definition will only be applied once. If different definitions
with the same name appear multiple times in the program list, each will be applied once
in the order of last occurrence.
:param list prog_list: A list of pyquil programs
:return: a single pyQuil program
:rtype: Program
"""
definitions = [gate for prog in prog_list for gate in Program(prog).defined_gates]
seen = {}
# Collect definitions in reverse order and reapply definitions in reverse
# collected order to ensure that the last occurrence of a definition is applied last.
for definition in reversed(definitions):
name = definition.name
if name in seen.keys():
# Do not add truly identical definitions with the same name
# If two different definitions share a name, we include each definition so as to provide
# a waring to the user when the contradictory defgate is called.
if definition not in seen[name]:
seen[name].append(definition)
else:
seen[name] = [definition]
new_definitions = [gate for key in seen.keys() for gate in reversed(seen[key])]
p = sum([Program(prog).instructions for prog in prog_list], Program()) # Combine programs without gate definitions
for definition in new_definitions:
p.defgate(definition.name, definition.matrix, definition.parameters)
return p | [
"def",
"merge_programs",
"(",
"prog_list",
")",
":",
"definitions",
"=",
"[",
"gate",
"for",
"prog",
"in",
"prog_list",
"for",
"gate",
"in",
"Program",
"(",
"prog",
")",
".",
"defined_gates",
"]",
"seen",
"=",
"{",
"}",
"# Collect definitions in reverse order and reapply definitions in reverse",
"# collected order to ensure that the last occurrence of a definition is applied last.",
"for",
"definition",
"in",
"reversed",
"(",
"definitions",
")",
":",
"name",
"=",
"definition",
".",
"name",
"if",
"name",
"in",
"seen",
".",
"keys",
"(",
")",
":",
"# Do not add truly identical definitions with the same name",
"# If two different definitions share a name, we include each definition so as to provide",
"# a waring to the user when the contradictory defgate is called.",
"if",
"definition",
"not",
"in",
"seen",
"[",
"name",
"]",
":",
"seen",
"[",
"name",
"]",
".",
"append",
"(",
"definition",
")",
"else",
":",
"seen",
"[",
"name",
"]",
"=",
"[",
"definition",
"]",
"new_definitions",
"=",
"[",
"gate",
"for",
"key",
"in",
"seen",
".",
"keys",
"(",
")",
"for",
"gate",
"in",
"reversed",
"(",
"seen",
"[",
"key",
"]",
")",
"]",
"p",
"=",
"sum",
"(",
"[",
"Program",
"(",
"prog",
")",
".",
"instructions",
"for",
"prog",
"in",
"prog_list",
"]",
",",
"Program",
"(",
")",
")",
"# Combine programs without gate definitions",
"for",
"definition",
"in",
"new_definitions",
":",
"p",
".",
"defgate",
"(",
"definition",
".",
"name",
",",
"definition",
".",
"matrix",
",",
"definition",
".",
"parameters",
")",
"return",
"p"
] | Merges a list of pyQuil programs into a single one by appending them in sequence.
If multiple programs in the list contain the same gate and/or noisy gate definition
with identical name, this definition will only be applied once. If different definitions
with the same name appear multiple times in the program list, each will be applied once
in the order of last occurrence.
:param list prog_list: A list of pyquil programs
:return: a single pyQuil program
:rtype: Program | [
"Merges",
"a",
"list",
"of",
"pyQuil",
"programs",
"into",
"a",
"single",
"one",
"by",
"appending",
"them",
"in",
"sequence",
".",
"If",
"multiple",
"programs",
"in",
"the",
"list",
"contain",
"the",
"same",
"gate",
"and",
"/",
"or",
"noisy",
"gate",
"definition",
"with",
"identical",
"name",
"this",
"definition",
"will",
"only",
"be",
"applied",
"once",
".",
"If",
"different",
"definitions",
"with",
"the",
"same",
"name",
"appear",
"multiple",
"times",
"in",
"the",
"program",
"list",
"each",
"will",
"be",
"applied",
"once",
"in",
"the",
"order",
"of",
"last",
"occurrence",
"."
] | python | train |
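A short, hedged sketch of calling merge_programs from the row above. It assumes the function and Program are importable from pyquil.quil in the same pyquil version as the linked source; the gates chosen are purely illustrative.

    from pyquil.quil import Program, merge_programs
    from pyquil.gates import H, X

    # Two small programs to combine in sequence.
    p1 = Program(H(0))
    p2 = Program(X(0))

    # Instructions are appended in order; duplicate gate definitions with the
    # same name are applied only once, per the docstring above.
    merged = merge_programs([p1, p2])
    print(merged)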
kedder/ofxstatement | src/ofxstatement/statement.py | https://github.com/kedder/ofxstatement/blob/61f9dc1cfe6024874b859c8aec108b9d9acee57a/src/ofxstatement/statement.py#L166-L180 | def recalculate_balance(stmt):
"""Recalculate statement starting and ending dates and balances.
When starting balance is not available, it will be assumed to be 0.
This function can be used in statement parsers when balance information is
not available in source statement.
"""
total_amount = sum(sl.amount for sl in stmt.lines)
stmt.start_balance = stmt.start_balance or D(0)
stmt.end_balance = stmt.start_balance + total_amount
stmt.start_date = min(sl.date for sl in stmt.lines)
stmt.end_date = max(sl.date for sl in stmt.lines) | [
"def",
"recalculate_balance",
"(",
"stmt",
")",
":",
"total_amount",
"=",
"sum",
"(",
"sl",
".",
"amount",
"for",
"sl",
"in",
"stmt",
".",
"lines",
")",
"stmt",
".",
"start_balance",
"=",
"stmt",
".",
"start_balance",
"or",
"D",
"(",
"0",
")",
"stmt",
".",
"end_balance",
"=",
"stmt",
".",
"start_balance",
"+",
"total_amount",
"stmt",
".",
"start_date",
"=",
"min",
"(",
"sl",
".",
"date",
"for",
"sl",
"in",
"stmt",
".",
"lines",
")",
"stmt",
".",
"end_date",
"=",
"max",
"(",
"sl",
".",
"date",
"for",
"sl",
"in",
"stmt",
".",
"lines",
")"
] | Recalculate statement starting and ending dates and balances.
When starting balance is not available, it will be assumed to be 0.
This function can be used in statement parsers when balance information is
not available in source statement. | [
"Recalculate",
"statement",
"starting",
"and",
"ending",
"dates",
"and",
"balances",
"."
] | python | train |
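An illustrative sketch of recalculate_balance from the row above. The no-argument Statement and StatementLine constructors are an assumption; attribute names (lines, amount, date, start_balance, end_balance) follow the docstring.

    from datetime import datetime
    from decimal import Decimal
    from ofxstatement.statement import Statement, StatementLine, recalculate_balance

    stmt = Statement()
    line = StatementLine()
    line.amount = Decimal("25.00")      # line amounts drive the ending balance
    line.date = datetime(2020, 1, 15)   # line dates drive start_date / end_date
    stmt.lines.append(line)

    # Starting balance defaults to 0 when absent; dates come from the lines.
    recalculate_balance(stmt)
    print(stmt.start_balance, stmt.end_balance, stmt.start_date, stmt.end_date)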
rabitt/pysox | sox/transform.py | https://github.com/rabitt/pysox/blob/eae89bde74567136ec3f723c3e6b369916d9b837/sox/transform.py#L255-L380 | def set_output_format(self, file_type=None, rate=None, bits=None,
channels=None, encoding=None, comments=None,
append_comments=True):
'''Sets output file format arguments. These arguments will overwrite
any format related arguments supplied by other effects (e.g. rate).
If this function is not explicity called the output format is inferred
from the file extension or the file's header.
Parameters
----------
file_type : str or None, default=None
The file type of the output audio file. Should be the same as what
the file extension would be, for ex. 'mp3' or 'wav'.
rate : float or None, default=None
The sample rate of the output audio file. If None the sample rate
is inferred.
bits : int or None, default=None
The number of bits per sample. If None, the number of bits per
sample is inferred.
channels : int or None, default=None
The number of channels in the audio file. If None the number of
channels is inferred.
encoding : str or None, default=None
The audio encoding type. Sometimes needed with file-types that
support more than one encoding type. One of:
* signed-integer : PCM data stored as signed (‘two’s
complement’) integers. Commonly used with a 16 or 24−bit
encoding size. A value of 0 represents minimum signal
power.
* unsigned-integer : PCM data stored as unsigned integers.
Commonly used with an 8-bit encoding size. A value of 0
represents maximum signal power.
* floating-point : PCM data stored as IEEE 753 single precision
(32-bit) or double precision (64-bit) floating-point
(‘real’) numbers. A value of 0 represents minimum signal
power.
* a-law : International telephony standard for logarithmic
encoding to 8 bits per sample. It has a precision
equivalent to roughly 13-bit PCM and is sometimes encoded
with reversed bit-ordering.
* u-law : North American telephony standard for logarithmic
encoding to 8 bits per sample. A.k.a. μ-law. It has a
precision equivalent to roughly 14-bit PCM and is sometimes
encoded with reversed bit-ordering.
* oki-adpcm : OKI (a.k.a. VOX, Dialogic, or Intel) 4-bit ADPCM;
it has a precision equivalent to roughly 12-bit PCM. ADPCM
is a form of audio compression that has a good compromise
between audio quality and encoding/decoding speed.
* ima-adpcm : IMA (a.k.a. DVI) 4-bit ADPCM; it has a precision
equivalent to roughly 13-bit PCM.
* ms-adpcm : Microsoft 4-bit ADPCM; it has a precision
equivalent to roughly 14-bit PCM.
* gsm-full-rate : GSM is currently used for the vast majority
of the world’s digital wireless telephone calls. It
utilises several audio formats with different bit-rates and
associated speech quality. SoX has support for GSM’s
original 13kbps ‘Full Rate’ audio format. It is usually
CPU-intensive to work with GSM audio.
comments : str or None, default=None
If not None, the string is added as a comment in the header of the
output audio file. If None, no comments are added.
append_comments : bool, default=True
If True, comment strings are appended to SoX's default comments. If
False, the supplied comment replaces the existing comment.
'''
if file_type not in VALID_FORMATS + [None]:
raise ValueError(
'Invalid file_type. Must be one of {}'.format(VALID_FORMATS)
)
if not is_number(rate) and rate is not None:
raise ValueError('rate must be a float or None')
if rate is not None and rate <= 0:
raise ValueError('rate must be a positive number')
if not isinstance(bits, int) and bits is not None:
raise ValueError('bits must be an int or None')
if bits is not None and bits <= 0:
raise ValueError('bits must be a positive number')
if not isinstance(channels, int) and channels is not None:
raise ValueError('channels must be an int or None')
if channels is not None and channels <= 0:
raise ValueError('channels must be a positive number')
if encoding not in ENCODING_VALS + [None]:
raise ValueError(
'Invalid encoding. Must be one of {}'.format(ENCODING_VALS)
)
if comments is not None and not isinstance(comments, str):
raise ValueError('comments must be a string or None')
if not isinstance(append_comments, bool):
raise ValueError('append_comments must be a boolean')
output_format = []
if file_type is not None:
output_format.extend(['-t', '{}'.format(file_type)])
if rate is not None:
output_format.extend(['-r', '{:f}'.format(rate)])
if bits is not None:
output_format.extend(['-b', '{}'.format(bits)])
if channels is not None:
output_format.extend(['-c', '{}'.format(channels)])
if encoding is not None:
output_format.extend(['-e', '{}'.format(encoding)])
if comments is not None:
if append_comments:
output_format.extend(['--add-comment', comments])
else:
output_format.extend(['--comment', comments])
self.output_format = output_format
return self | [
"def",
"set_output_format",
"(",
"self",
",",
"file_type",
"=",
"None",
",",
"rate",
"=",
"None",
",",
"bits",
"=",
"None",
",",
"channels",
"=",
"None",
",",
"encoding",
"=",
"None",
",",
"comments",
"=",
"None",
",",
"append_comments",
"=",
"True",
")",
":",
"if",
"file_type",
"not",
"in",
"VALID_FORMATS",
"+",
"[",
"None",
"]",
":",
"raise",
"ValueError",
"(",
"'Invalid file_type. Must be one of {}'",
".",
"format",
"(",
"VALID_FORMATS",
")",
")",
"if",
"not",
"is_number",
"(",
"rate",
")",
"and",
"rate",
"is",
"not",
"None",
":",
"raise",
"ValueError",
"(",
"'rate must be a float or None'",
")",
"if",
"rate",
"is",
"not",
"None",
"and",
"rate",
"<=",
"0",
":",
"raise",
"ValueError",
"(",
"'rate must be a positive number'",
")",
"if",
"not",
"isinstance",
"(",
"bits",
",",
"int",
")",
"and",
"bits",
"is",
"not",
"None",
":",
"raise",
"ValueError",
"(",
"'bits must be an int or None'",
")",
"if",
"bits",
"is",
"not",
"None",
"and",
"bits",
"<=",
"0",
":",
"raise",
"ValueError",
"(",
"'bits must be a positive number'",
")",
"if",
"not",
"isinstance",
"(",
"channels",
",",
"int",
")",
"and",
"channels",
"is",
"not",
"None",
":",
"raise",
"ValueError",
"(",
"'channels must be an int or None'",
")",
"if",
"channels",
"is",
"not",
"None",
"and",
"channels",
"<=",
"0",
":",
"raise",
"ValueError",
"(",
"'channels must be a positive number'",
")",
"if",
"encoding",
"not",
"in",
"ENCODING_VALS",
"+",
"[",
"None",
"]",
":",
"raise",
"ValueError",
"(",
"'Invalid encoding. Must be one of {}'",
".",
"format",
"(",
"ENCODING_VALS",
")",
")",
"if",
"comments",
"is",
"not",
"None",
"and",
"not",
"isinstance",
"(",
"comments",
",",
"str",
")",
":",
"raise",
"ValueError",
"(",
"'comments must be a string or None'",
")",
"if",
"not",
"isinstance",
"(",
"append_comments",
",",
"bool",
")",
":",
"raise",
"ValueError",
"(",
"'append_comments must be a boolean'",
")",
"output_format",
"=",
"[",
"]",
"if",
"file_type",
"is",
"not",
"None",
":",
"output_format",
".",
"extend",
"(",
"[",
"'-t'",
",",
"'{}'",
".",
"format",
"(",
"file_type",
")",
"]",
")",
"if",
"rate",
"is",
"not",
"None",
":",
"output_format",
".",
"extend",
"(",
"[",
"'-r'",
",",
"'{:f}'",
".",
"format",
"(",
"rate",
")",
"]",
")",
"if",
"bits",
"is",
"not",
"None",
":",
"output_format",
".",
"extend",
"(",
"[",
"'-b'",
",",
"'{}'",
".",
"format",
"(",
"bits",
")",
"]",
")",
"if",
"channels",
"is",
"not",
"None",
":",
"output_format",
".",
"extend",
"(",
"[",
"'-c'",
",",
"'{}'",
".",
"format",
"(",
"channels",
")",
"]",
")",
"if",
"encoding",
"is",
"not",
"None",
":",
"output_format",
".",
"extend",
"(",
"[",
"'-e'",
",",
"'{}'",
".",
"format",
"(",
"encoding",
")",
"]",
")",
"if",
"comments",
"is",
"not",
"None",
":",
"if",
"append_comments",
":",
"output_format",
".",
"extend",
"(",
"[",
"'--add-comment'",
",",
"comments",
"]",
")",
"else",
":",
"output_format",
".",
"extend",
"(",
"[",
"'--comment'",
",",
"comments",
"]",
")",
"self",
".",
"output_format",
"=",
"output_format",
"return",
"self"
] | Sets output file format arguments. These arguments will overwrite
any format related arguments supplied by other effects (e.g. rate).
If this function is not explicity called the output format is inferred
from the file extension or the file's header.
Parameters
----------
file_type : str or None, default=None
The file type of the output audio file. Should be the same as what
the file extension would be, for ex. 'mp3' or 'wav'.
rate : float or None, default=None
The sample rate of the output audio file. If None the sample rate
is inferred.
bits : int or None, default=None
The number of bits per sample. If None, the number of bits per
sample is inferred.
channels : int or None, default=None
The number of channels in the audio file. If None the number of
channels is inferred.
encoding : str or None, default=None
The audio encoding type. Sometimes needed with file-types that
support more than one encoding type. One of:
* signed-integer : PCM data stored as signed (‘two’s
complement’) integers. Commonly used with a 16 or 24−bit
encoding size. A value of 0 represents minimum signal
power.
* unsigned-integer : PCM data stored as unsigned integers.
Commonly used with an 8-bit encoding size. A value of 0
represents maximum signal power.
* floating-point : PCM data stored as IEEE 753 single precision
(32-bit) or double precision (64-bit) floating-point
(‘real’) numbers. A value of 0 represents minimum signal
power.
* a-law : International telephony standard for logarithmic
encoding to 8 bits per sample. It has a precision
equivalent to roughly 13-bit PCM and is sometimes encoded
with reversed bit-ordering.
* u-law : North American telephony standard for logarithmic
encoding to 8 bits per sample. A.k.a. μ-law. It has a
precision equivalent to roughly 14-bit PCM and is sometimes
encoded with reversed bit-ordering.
* oki-adpcm : OKI (a.k.a. VOX, Dialogic, or Intel) 4-bit ADPCM;
it has a precision equivalent to roughly 12-bit PCM. ADPCM
is a form of audio compression that has a good compromise
between audio quality and encoding/decoding speed.
* ima-adpcm : IMA (a.k.a. DVI) 4-bit ADPCM; it has a precision
equivalent to roughly 13-bit PCM.
* ms-adpcm : Microsoft 4-bit ADPCM; it has a precision
equivalent to roughly 14-bit PCM.
* gsm-full-rate : GSM is currently used for the vast majority
of the world’s digital wireless telephone calls. It
utilises several audio formats with different bit-rates and
associated speech quality. SoX has support for GSM’s
original 13kbps ‘Full Rate’ audio format. It is usually
CPU-intensive to work with GSM audio.
comments : str or None, default=None
If not None, the string is added as a comment in the header of the
output audio file. If None, no comments are added.
append_comments : bool, default=True
If True, comment strings are appended to SoX's default comments. If
False, the supplied comment replaces the existing comment. | [
"Sets",
"output",
"file",
"format",
"arguments",
".",
"These",
"arguments",
"will",
"overwrite",
"any",
"format",
"related",
"arguments",
"supplied",
"by",
"other",
"effects",
"(",
"e",
".",
"g",
".",
"rate",
")",
"."
] | python | valid |
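A brief usage sketch for set_output_format from the row above, assuming the usual pysox Transformer workflow; the input and output file names are placeholders.

    import sox

    tfm = sox.Transformer()
    # Force a 16 kHz, 16-bit, mono WAV regardless of what later effects would infer.
    tfm.set_output_format(file_type='wav', rate=16000, bits=16, channels=1)
    tfm.build('input.mp3', 'output.wav')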
sernst/cauldron | cauldron/render/stack.py | https://github.com/sernst/cauldron/blob/4086aec9c038c402ea212c79fe8bd0d27104f9cf/cauldron/render/stack.py#L67-L87 | def get_formatted_stack_frame(
project: 'projects.Project',
error_stack: bool = True
) -> list:
"""
Returns a list of the stack frames formatted for user display that has
been enriched by the project-specific data.
:param project:
The currently open project used to enrich the stack data.
:param error_stack:
Whether or not to return the error stack. When True the stack of the
last exception will be returned. If no such exception exists, an empty
list will be returned instead. When False the current execution stack
trace will be returned.
"""
return [
format_stack_frame(f, project)
for f in get_stack_frames(error_stack=error_stack)
] | [
"def",
"get_formatted_stack_frame",
"(",
"project",
":",
"'projects.Project'",
",",
"error_stack",
":",
"bool",
"=",
"True",
")",
"->",
"list",
":",
"return",
"[",
"format_stack_frame",
"(",
"f",
",",
"project",
")",
"for",
"f",
"in",
"get_stack_frames",
"(",
"error_stack",
"=",
"error_stack",
")",
"]"
] | Returns a list of the stack frames formatted for user display that has
been enriched by the project-specific data.
:param project:
The currently open project used to enrich the stack data.
:param error_stack:
Whether or not to return the error stack. When True the stack of the
last exception will be returned. If no such exception exists, an empty
list will be returned instead. When False the current execution stack
trace will be returned. | [
"Returns",
"a",
"list",
"of",
"the",
"stack",
"frames",
"formatted",
"for",
"user",
"display",
"that",
"has",
"been",
"enriched",
"by",
"the",
"project",
"-",
"specific",
"data",
".",
":",
"param",
"project",
":",
"The",
"currently",
"open",
"project",
"used",
"to",
"enrich",
"the",
"stack",
"data",
".",
":",
"param",
"error_stack",
":",
"Whether",
"or",
"not",
"to",
"return",
"the",
"error",
"stack",
".",
"When",
"True",
"the",
"stack",
"of",
"the",
"last",
"exception",
"will",
"be",
"returned",
".",
"If",
"no",
"such",
"exception",
"exists",
"an",
"empty",
"list",
"will",
"be",
"returned",
"instead",
".",
"When",
"False",
"the",
"current",
"execution",
"stack",
"trace",
"will",
"be",
"returned",
"."
] | python | train |
watson-developer-cloud/python-sdk | ibm_watson/discovery_v1.py | https://github.com/watson-developer-cloud/python-sdk/blob/4c2c9df4466fcde88975da9ecd834e6ba95eb353/ibm_watson/discovery_v1.py#L5276-L5298 | def _to_dict(self):
"""Return a json dictionary representing this model."""
_dict = {}
if hasattr(self, 'document_id') and self.document_id is not None:
_dict['document_id'] = self.document_id
if hasattr(self,
'configuration_id') and self.configuration_id is not None:
_dict['configuration_id'] = self.configuration_id
if hasattr(self, 'status') and self.status is not None:
_dict['status'] = self.status
if hasattr(
self,
'status_description') and self.status_description is not None:
_dict['status_description'] = self.status_description
if hasattr(self, 'filename') and self.filename is not None:
_dict['filename'] = self.filename
if hasattr(self, 'file_type') and self.file_type is not None:
_dict['file_type'] = self.file_type
if hasattr(self, 'sha1') and self.sha1 is not None:
_dict['sha1'] = self.sha1
if hasattr(self, 'notices') and self.notices is not None:
_dict['notices'] = [x._to_dict() for x in self.notices]
return _dict | [
"def",
"_to_dict",
"(",
"self",
")",
":",
"_dict",
"=",
"{",
"}",
"if",
"hasattr",
"(",
"self",
",",
"'document_id'",
")",
"and",
"self",
".",
"document_id",
"is",
"not",
"None",
":",
"_dict",
"[",
"'document_id'",
"]",
"=",
"self",
".",
"document_id",
"if",
"hasattr",
"(",
"self",
",",
"'configuration_id'",
")",
"and",
"self",
".",
"configuration_id",
"is",
"not",
"None",
":",
"_dict",
"[",
"'configuration_id'",
"]",
"=",
"self",
".",
"configuration_id",
"if",
"hasattr",
"(",
"self",
",",
"'status'",
")",
"and",
"self",
".",
"status",
"is",
"not",
"None",
":",
"_dict",
"[",
"'status'",
"]",
"=",
"self",
".",
"status",
"if",
"hasattr",
"(",
"self",
",",
"'status_description'",
")",
"and",
"self",
".",
"status_description",
"is",
"not",
"None",
":",
"_dict",
"[",
"'status_description'",
"]",
"=",
"self",
".",
"status_description",
"if",
"hasattr",
"(",
"self",
",",
"'filename'",
")",
"and",
"self",
".",
"filename",
"is",
"not",
"None",
":",
"_dict",
"[",
"'filename'",
"]",
"=",
"self",
".",
"filename",
"if",
"hasattr",
"(",
"self",
",",
"'file_type'",
")",
"and",
"self",
".",
"file_type",
"is",
"not",
"None",
":",
"_dict",
"[",
"'file_type'",
"]",
"=",
"self",
".",
"file_type",
"if",
"hasattr",
"(",
"self",
",",
"'sha1'",
")",
"and",
"self",
".",
"sha1",
"is",
"not",
"None",
":",
"_dict",
"[",
"'sha1'",
"]",
"=",
"self",
".",
"sha1",
"if",
"hasattr",
"(",
"self",
",",
"'notices'",
")",
"and",
"self",
".",
"notices",
"is",
"not",
"None",
":",
"_dict",
"[",
"'notices'",
"]",
"=",
"[",
"x",
".",
"_to_dict",
"(",
")",
"for",
"x",
"in",
"self",
".",
"notices",
"]",
"return",
"_dict"
] | Return a json dictionary representing this model. | [
"Return",
"a",
"json",
"dictionary",
"representing",
"this",
"model",
"."
] | python | train |