identifier | parameters | docstring | docstring_summary | function | function_tokens | start_point | end_point | language | docstring_language | docstring_language_predictions | is_langid_reliable

LinuxDistribution.__init__ | (self, include_lsb=True, os_release_file='', distro_release_file='', include_uname=True)

def __init__(self,
             include_lsb=True,
             os_release_file='',
             distro_release_file='',
             include_uname=True):
    """
    The initialization method of this class gathers information from the
    available data sources, and stores that in private instance attributes.
    Subsequent access to the information items uses these private instance
    attributes, so that the data sources are read only once.

    Parameters:

    * ``include_lsb`` (bool): Controls whether the
      `lsb_release command output`_ is included as a data source.
      If the lsb_release command is not available in the program execution
      path, the data source for the lsb_release command will be empty.

    * ``os_release_file`` (string): The path name of the
      `os-release file`_ that is to be used as a data source.
      An empty string (the default) will cause the default path name to
      be used (see `os-release file`_ for details).
      If the specified or defaulted os-release file does not exist, the
      data source for the os-release file will be empty.

    * ``distro_release_file`` (string): The path name of the
      `distro release file`_ that is to be used as a data source.
      An empty string (the default) will cause a default search algorithm
      to be used (see `distro release file`_ for details).
      If the specified distro release file does not exist, or if no default
      distro release file can be found, the data source for the distro
      release file will be empty.

    * ``include_uname`` (bool): Controls whether uname command output is
      included as a data source. If the uname command is not available in
      the program execution path, the data source for the uname command
      will be empty.

    Public instance attributes:

    * ``os_release_file`` (string): The path name of the
      `os-release file`_ that is actually used as a data source. The
      empty string if no os-release file is used as a data source.

    * ``distro_release_file`` (string): The path name of the
      `distro release file`_ that is actually used as a data source. The
      empty string if no distro release file is used as a data source.

    * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
      This controls whether the lsb information will be loaded.

    * ``include_uname`` (bool): The result of the ``include_uname``
      parameter. This controls whether the uname information will
      be loaded.

    Raises:

    * :py:exc:`IOError`: Some I/O issue with an os-release file or distro
      release file.

    * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
      some issue (other than not being available in the program execution
      path).

    * :py:exc:`UnicodeError`: A data source has unexpected characters or
      uses an unexpected encoding.
    """
    self.os_release_file = os_release_file or \
        os.path.join(_UNIXCONFDIR, _OS_RELEASE_BASENAME)
    self.distro_release_file = distro_release_file or ''  # updated later
    self.include_lsb = include_lsb
    self.include_uname = include_uname
"def",
"__init__",
"(",
"self",
",",
"include_lsb",
"=",
"True",
",",
"os_release_file",
"=",
"''",
",",
"distro_release_file",
"=",
"''",
",",
"include_uname",
"=",
"True",
")",
":",
"self",
".",
"os_release_file",
"=",
"os_release_file",
"or",
"os",
".",
"path",
".",
"join",
"(",
"_UNIXCONFDIR",
",",
"_OS_RELEASE_BASENAME",
")",
"self",
".",
"distro_release_file",
"=",
"distro_release_file",
"or",
"''",
"# updated later",
"self",
".",
"include_lsb",
"=",
"include_lsb",
"self",
".",
"include_uname",
"=",
"include_uname"
] | [
577,
4
] | [
653,
42
] | python | en | ['en', 'error', 'th'] | False |
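
A minimal usage sketch for the record above, assuming this class ships as part of the `distro` package (the import and call sites are assumptions, not part of the record):

    import distro

    dist = distro.LinuxDistribution(include_lsb=False)  # skip calling lsb_release
    print(dist.id())                  # e.g. 'ubuntu'
    print(dist.version(pretty=True))  # e.g. '20.04 (Focal Fossa)'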

LinuxDistribution.__repr__ | (self)

def __repr__(self):
    """Return repr of all info
    """
    return \
        "LinuxDistribution(" \
        "os_release_file={self.os_release_file!r}, " \
        "distro_release_file={self.distro_release_file!r}, " \
        "include_lsb={self.include_lsb!r}, " \
        "include_uname={self.include_uname!r}, " \
        "_os_release_info={self._os_release_info!r}, " \
        "_lsb_release_info={self._lsb_release_info!r}, " \
        "_distro_release_info={self._distro_release_info!r}, " \
        "_uname_info={self._uname_info!r})".format(
            self=self)
"def",
"__repr__",
"(",
"self",
")",
":",
"return",
"\"LinuxDistribution(\"",
"\"os_release_file={self.os_release_file!r}, \"",
"\"distro_release_file={self.distro_release_file!r}, \"",
"\"include_lsb={self.include_lsb!r}, \"",
"\"include_uname={self.include_uname!r}, \"",
"\"_os_release_info={self._os_release_info!r}, \"",
"\"_lsb_release_info={self._lsb_release_info!r}, \"",
"\"_distro_release_info={self._distro_release_info!r}, \"",
"\"_uname_info={self._uname_info!r})\"",
".",
"format",
"(",
"self",
"=",
"self",
")"
] | [
655,
4
] | [
668,
26
] | python | en | ['en', 'no', 'en'] | True |

LinuxDistribution.linux_distribution | (self, full_distribution_name=True)

def linux_distribution(self, full_distribution_name=True):
    """
    Return information about the OS distribution that is compatible
    with Python's :func:`platform.linux_distribution`, supporting a subset
    of its parameters.

    For details, see :func:`distro.linux_distribution`.
    """
    return (
        self.name() if full_distribution_name else self.id(),
        self.version(),
        self.codename()
    )
"def",
"linux_distribution",
"(",
"self",
",",
"full_distribution_name",
"=",
"True",
")",
":",
"return",
"(",
"self",
".",
"name",
"(",
")",
"if",
"full_distribution_name",
"else",
"self",
".",
"id",
"(",
")",
",",
"self",
".",
"version",
"(",
")",
",",
"self",
".",
"codename",
"(",
")",
")"
] | [
670,
4
] | [
682,
9
] | python | en | ['en', 'error', 'th'] | False |
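
A hedged sketch of the compatibility tuple, again assuming the `distro` package exposes this class:

    import distro

    name, version, codename = distro.LinuxDistribution().linux_distribution()
    print(name, version, codename)  # e.g. Ubuntu 20.04 focal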

LinuxDistribution.id | (self)

def id(self):
    """Return the distro ID of the OS distribution, as a string.

    For details, see :func:`distro.id`.
    """
    def normalize(distro_id, table):
        distro_id = distro_id.lower().replace(' ', '_')
        return table.get(distro_id, distro_id)

    distro_id = self.os_release_attr('id')
    if distro_id:
        return normalize(distro_id, NORMALIZED_OS_ID)

    distro_id = self.lsb_release_attr('distributor_id')
    if distro_id:
        return normalize(distro_id, NORMALIZED_LSB_ID)

    distro_id = self.distro_release_attr('id')
    if distro_id:
        return normalize(distro_id, NORMALIZED_DISTRO_ID)

    distro_id = self.uname_attr('id')
    if distro_id:
        return normalize(distro_id, NORMALIZED_DISTRO_ID)

    return ''
"def",
"id",
"(",
"self",
")",
":",
"def",
"normalize",
"(",
"distro_id",
",",
"table",
")",
":",
"distro_id",
"=",
"distro_id",
".",
"lower",
"(",
")",
".",
"replace",
"(",
"' '",
",",
"'_'",
")",
"return",
"table",
".",
"get",
"(",
"distro_id",
",",
"distro_id",
")",
"distro_id",
"=",
"self",
".",
"os_release_attr",
"(",
"'id'",
")",
"if",
"distro_id",
":",
"return",
"normalize",
"(",
"distro_id",
",",
"NORMALIZED_OS_ID",
")",
"distro_id",
"=",
"self",
".",
"lsb_release_attr",
"(",
"'distributor_id'",
")",
"if",
"distro_id",
":",
"return",
"normalize",
"(",
"distro_id",
",",
"NORMALIZED_LSB_ID",
")",
"distro_id",
"=",
"self",
".",
"distro_release_attr",
"(",
"'id'",
")",
"if",
"distro_id",
":",
"return",
"normalize",
"(",
"distro_id",
",",
"NORMALIZED_DISTRO_ID",
")",
"distro_id",
"=",
"self",
".",
"uname_attr",
"(",
"'id'",
")",
"if",
"distro_id",
":",
"return",
"normalize",
"(",
"distro_id",
",",
"NORMALIZED_DISTRO_ID",
")",
"return",
"''"
] | [
684,
4
] | [
709,
17
] | python | en | ['en', 'en', 'en'] | True |
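
The nested `normalize()` helper above is self-contained; a standalone sketch (the lookup entries here are illustrative, the real NORMALIZED_* tables live elsewhere in the module):

    def normalize(distro_id, table):
        distro_id = distro_id.lower().replace(' ', '_')
        return table.get(distro_id, distro_id)

    print(normalize('Red Hat', {}))                   # -> 'red_hat'
    print(normalize('Arch', {'arch': 'arch_linux'}))  # hypothetical mapping -> 'arch_linux'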

LinuxDistribution.name | (self, pretty=False)

def name(self, pretty=False):
    """
    Return the name of the OS distribution, as a string.

    For details, see :func:`distro.name`.
    """
    name = self.os_release_attr('name') \
        or self.lsb_release_attr('distributor_id') \
        or self.distro_release_attr('name') \
        or self.uname_attr('name')
    if pretty:
        name = self.os_release_attr('pretty_name') \
            or self.lsb_release_attr('description')
        if not name:
            name = self.distro_release_attr('name') \
                or self.uname_attr('name')
            version = self.version(pretty=True)
            if version:
                name = name + ' ' + version
    return name or ''
"def",
"name",
"(",
"self",
",",
"pretty",
"=",
"False",
")",
":",
"name",
"=",
"self",
".",
"os_release_attr",
"(",
"'name'",
")",
"or",
"self",
".",
"lsb_release_attr",
"(",
"'distributor_id'",
")",
"or",
"self",
".",
"distro_release_attr",
"(",
"'name'",
")",
"or",
"self",
".",
"uname_attr",
"(",
"'name'",
")",
"if",
"pretty",
":",
"name",
"=",
"self",
".",
"os_release_attr",
"(",
"'pretty_name'",
")",
"or",
"self",
".",
"lsb_release_attr",
"(",
"'description'",
")",
"if",
"not",
"name",
":",
"name",
"=",
"self",
".",
"distro_release_attr",
"(",
"'name'",
")",
"or",
"self",
".",
"uname_attr",
"(",
"'name'",
")",
"version",
"=",
"self",
".",
"version",
"(",
"pretty",
"=",
"True",
")",
"if",
"version",
":",
"name",
"=",
"name",
"+",
"' '",
"+",
"version",
"return",
"name",
"or",
"''"
] | [
711,
4
] | [
730,
25
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.version | (self, pretty=False, best=False)

def version(self, pretty=False, best=False):
    """
    Return the version of the OS distribution, as a string.

    For details, see :func:`distro.version`.
    """
    versions = [
        self.os_release_attr('version_id'),
        self.lsb_release_attr('release'),
        self.distro_release_attr('version_id'),
        self._parse_distro_release_content(
            self.os_release_attr('pretty_name')).get('version_id', ''),
        self._parse_distro_release_content(
            self.lsb_release_attr('description')).get('version_id', ''),
        self.uname_attr('release')
    ]
    version = ''
    if best:
        # This algorithm uses the last version in priority order that has
        # the best precision. If the versions are not in conflict, that
        # does not matter; otherwise, using the last one instead of the
        # first one might be considered a surprise.
        for v in versions:
            if v.count(".") > version.count(".") or version == '':
                version = v
    else:
        for v in versions:
            if v != '':
                version = v
                break
    if pretty and version and self.codename():
        version = '{0} ({1})'.format(version, self.codename())
    return version
"def",
"version",
"(",
"self",
",",
"pretty",
"=",
"False",
",",
"best",
"=",
"False",
")",
":",
"versions",
"=",
"[",
"self",
".",
"os_release_attr",
"(",
"'version_id'",
")",
",",
"self",
".",
"lsb_release_attr",
"(",
"'release'",
")",
",",
"self",
".",
"distro_release_attr",
"(",
"'version_id'",
")",
",",
"self",
".",
"_parse_distro_release_content",
"(",
"self",
".",
"os_release_attr",
"(",
"'pretty_name'",
")",
")",
".",
"get",
"(",
"'version_id'",
",",
"''",
")",
",",
"self",
".",
"_parse_distro_release_content",
"(",
"self",
".",
"lsb_release_attr",
"(",
"'description'",
")",
")",
".",
"get",
"(",
"'version_id'",
",",
"''",
")",
",",
"self",
".",
"uname_attr",
"(",
"'release'",
")",
"]",
"version",
"=",
"''",
"if",
"best",
":",
"# This algorithm uses the last version in priority order that has",
"# the best precision. If the versions are not in conflict, that",
"# does not matter; otherwise, using the last one instead of the",
"# first one might be considered a surprise.",
"for",
"v",
"in",
"versions",
":",
"if",
"v",
".",
"count",
"(",
"\".\"",
")",
">",
"version",
".",
"count",
"(",
"\".\"",
")",
"or",
"version",
"==",
"''",
":",
"version",
"=",
"v",
"else",
":",
"for",
"v",
"in",
"versions",
":",
"if",
"v",
"!=",
"''",
":",
"version",
"=",
"v",
"break",
"if",
"pretty",
"and",
"version",
"and",
"self",
".",
"codename",
"(",
")",
":",
"version",
"=",
"'{0} ({1})'",
".",
"format",
"(",
"version",
",",
"self",
".",
"codename",
"(",
")",
")",
"return",
"version"
] | [
732,
4
] | [
764,
22
] | python | en | ['en', 'error', 'th'] | False |
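
The `best=True` branch above can be exercised on its own; a self-contained illustration of the most-dots-wins rule:

    versions = ['7', '7.4', '7.4.1708', '']
    best = ''
    for v in versions:
        if v.count('.') > best.count('.') or best == '':
            best = v
    print(best)  # -> '7.4.1708' (last candidate with the most '.'-separated parts)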

LinuxDistribution.version_parts | (self, best=False)

def version_parts(self, best=False):
    """
    Return the version of the OS distribution, as a tuple of version
    numbers.

    For details, see :func:`distro.version_parts`.
    """
    version_str = self.version(best=best)
    if version_str:
        version_regex = re.compile(r'(\d+)\.?(\d+)?\.?(\d+)?')
        matches = version_regex.match(version_str)
        if matches:
            major, minor, build_number = matches.groups()
            return major, minor or '', build_number or ''
    return '', '', ''
"def",
"version_parts",
"(",
"self",
",",
"best",
"=",
"False",
")",
":",
"version_str",
"=",
"self",
".",
"version",
"(",
"best",
"=",
"best",
")",
"if",
"version_str",
":",
"version_regex",
"=",
"re",
".",
"compile",
"(",
"r'(\\d+)\\.?(\\d+)?\\.?(\\d+)?'",
")",
"matches",
"=",
"version_regex",
".",
"match",
"(",
"version_str",
")",
"if",
"matches",
":",
"major",
",",
"minor",
",",
"build_number",
"=",
"matches",
".",
"groups",
"(",
")",
"return",
"major",
",",
"minor",
"or",
"''",
",",
"build_number",
"or",
"''",
"return",
"''",
",",
"''",
",",
"''"
] | [
766,
4
] | [
780,
25
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.major_version | (self, best=False)

def major_version(self, best=False):
    """
    Return the major version number of the current distribution.

    For details, see :func:`distro.major_version`.
    """
    return self.version_parts(best)[0]
"def",
"major_version",
"(",
"self",
",",
"best",
"=",
"False",
")",
":",
"return",
"self",
".",
"version_parts",
"(",
"best",
")",
"[",
"0",
"]"
] | [
782,
4
] | [
788,
42
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.minor_version | (self, best=False)

def minor_version(self, best=False):
    """
    Return the minor version number of the current distribution.

    For details, see :func:`distro.minor_version`.
    """
    return self.version_parts(best)[1]
"def",
"minor_version",
"(",
"self",
",",
"best",
"=",
"False",
")",
":",
"return",
"self",
".",
"version_parts",
"(",
"best",
")",
"[",
"1",
"]"
] | [
790,
4
] | [
796,
42
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.build_number | (self, best=False)

def build_number(self, best=False):
    """
    Return the build number of the current distribution.

    For details, see :func:`distro.build_number`.
    """
    return self.version_parts(best)[2]
"def",
"build_number",
"(",
"self",
",",
"best",
"=",
"False",
")",
":",
"return",
"self",
".",
"version_parts",
"(",
"best",
")",
"[",
"2",
"]"
] | [
798,
4
] | [
804,
42
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.like | (self)

def like(self):
    """
    Return the IDs of distributions that are like the OS distribution.

    For details, see :func:`distro.like`.
    """
    return self.os_release_attr('id_like') or ''
"def",
"like",
"(",
"self",
")",
":",
"return",
"self",
".",
"os_release_attr",
"(",
"'id_like'",
")",
"or",
"''"
] | [
806,
4
] | [
812,
52
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.codename | (self)

def codename(self):
    """
    Return the codename of the OS distribution.

    For details, see :func:`distro.codename`.
    """
    try:
        # Handle os_release specially since distros might purposefully set
        # this to empty string to have no codename
        return self._os_release_info['codename']
    except KeyError:
        return self.lsb_release_attr('codename') \
            or self.distro_release_attr('codename') \
            or ''
"def",
"codename",
"(",
"self",
")",
":",
"try",
":",
"# Handle os_release specially since distros might purposefully set",
"# this to empty string to have no codename",
"return",
"self",
".",
"_os_release_info",
"[",
"'codename'",
"]",
"except",
"KeyError",
":",
"return",
"self",
".",
"lsb_release_attr",
"(",
"'codename'",
")",
"or",
"self",
".",
"distro_release_attr",
"(",
"'codename'",
")",
"or",
"''"
] | [
814,
4
] | [
827,
21
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.info | (self, pretty=False, best=False)

def info(self, pretty=False, best=False):
    """
    Return certain machine-readable information about the OS
    distribution.

    For details, see :func:`distro.info`.
    """
    return dict(
        id=self.id(),
        version=self.version(pretty, best),
        version_parts=dict(
            major=self.major_version(best),
            minor=self.minor_version(best),
            build_number=self.build_number(best)
        ),
        like=self.like(),
        codename=self.codename(),
    )
"def",
"info",
"(",
"self",
",",
"pretty",
"=",
"False",
",",
"best",
"=",
"False",
")",
":",
"return",
"dict",
"(",
"id",
"=",
"self",
".",
"id",
"(",
")",
",",
"version",
"=",
"self",
".",
"version",
"(",
"pretty",
",",
"best",
")",
",",
"version_parts",
"=",
"dict",
"(",
"major",
"=",
"self",
".",
"major_version",
"(",
"best",
")",
",",
"minor",
"=",
"self",
".",
"minor_version",
"(",
"best",
")",
",",
"build_number",
"=",
"self",
".",
"build_number",
"(",
"best",
")",
")",
",",
"like",
"=",
"self",
".",
"like",
"(",
")",
",",
"codename",
"=",
"self",
".",
"codename",
"(",
")",
",",
")"
] | [
829,
4
] | [
846,
9
] | python | en | ['en', 'error', 'th'] | False |
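
The nested dict mirrors the keyword layout above; a sketch of what a call might return on an Ubuntu box (every value below is illustrative):

    expected = dict(
        id='ubuntu',
        version='20.04',
        version_parts=dict(major='20', minor='04', build_number=''),
        like='debian',
        codename='focal',
    )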

LinuxDistribution.os_release_info | (self)

def os_release_info(self):
    """
    Return a dictionary containing key-value pairs for the information
    items from the os-release file data source of the OS distribution.

    For details, see :func:`distro.os_release_info`.
    """
    return self._os_release_info
"def",
"os_release_info",
"(",
"self",
")",
":",
"return",
"self",
".",
"_os_release_info"
] | [
848,
4
] | [
855,
36
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.lsb_release_info | (self)

def lsb_release_info(self):
    """
    Return a dictionary containing key-value pairs for the information
    items from the lsb_release command data source of the OS
    distribution.

    For details, see :func:`distro.lsb_release_info`.
    """
    return self._lsb_release_info
"def",
"lsb_release_info",
"(",
"self",
")",
":",
"return",
"self",
".",
"_lsb_release_info"
] | [
857,
4
] | [
865,
37
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.distro_release_info | (self)

def distro_release_info(self):
    """
    Return a dictionary containing key-value pairs for the information
    items from the distro release file data source of the OS
    distribution.

    For details, see :func:`distro.distro_release_info`.
    """
    return self._distro_release_info
"def",
"distro_release_info",
"(",
"self",
")",
":",
"return",
"self",
".",
"_distro_release_info"
] | [
867,
4
] | [
875,
40
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.uname_info | (self)

def uname_info(self):
    """
    Return a dictionary containing key-value pairs for the information
    items from the uname command data source of the OS distribution.

    For details, see :func:`distro.uname_info`.
    """
    return self._uname_info
"def",
"uname_info",
"(",
"self",
")",
":",
"return",
"self",
".",
"_uname_info"
] | [
877,
4
] | [
884,
31
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.os_release_attr | (self, attribute)

def os_release_attr(self, attribute):
    """
    Return a single named information item from the os-release file data
    source of the OS distribution.

    For details, see :func:`distro.os_release_attr`.
    """
    return self._os_release_info.get(attribute, '')
"def",
"os_release_attr",
"(",
"self",
",",
"attribute",
")",
":",
"return",
"self",
".",
"_os_release_info",
".",
"get",
"(",
"attribute",
",",
"''",
")"
] | [
886,
4
] | [
893,
55
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.lsb_release_attr | (self, attribute)

def lsb_release_attr(self, attribute):
    """
    Return a single named information item from the lsb_release command
    output data source of the OS distribution.

    For details, see :func:`distro.lsb_release_attr`.
    """
    return self._lsb_release_info.get(attribute, '')
"def",
"lsb_release_attr",
"(",
"self",
",",
"attribute",
")",
":",
"return",
"self",
".",
"_lsb_release_info",
".",
"get",
"(",
"attribute",
",",
"''",
")"
] | [
895,
4
] | [
902,
56
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.distro_release_attr | (self, attribute)

def distro_release_attr(self, attribute):
    """
    Return a single named information item from the distro release file
    data source of the OS distribution.

    For details, see :func:`distro.distro_release_attr`.
    """
    return self._distro_release_info.get(attribute, '')
"def",
"distro_release_attr",
"(",
"self",
",",
"attribute",
")",
":",
"return",
"self",
".",
"_distro_release_info",
".",
"get",
"(",
"attribute",
",",
"''",
")"
] | [
904,
4
] | [
911,
59
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution.uname_attr | (self, attribute)

def uname_attr(self, attribute):
    """
    Return a single named information item from the uname command
    output data source of the OS distribution.

    For details, see :func:`distro.uname_attr`.
    """
    return self._uname_info.get(attribute, '')
"def",
"uname_attr",
"(",
"self",
",",
"attribute",
")",
":",
"return",
"self",
".",
"_uname_info",
".",
"get",
"(",
"attribute",
",",
"''",
")"
] | [
913,
4
] | [
920,
50
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution._os_release_info | (self)

def _os_release_info(self):
    """
    Get the information items from the specified os-release file.

    Returns:
        A dictionary containing all information items.
    """
    if os.path.isfile(self.os_release_file):
        with open(self.os_release_file) as release_file:
            return self._parse_os_release_content(release_file)
    return {}
"def",
"_os_release_info",
"(",
"self",
")",
":",
"if",
"os",
".",
"path",
".",
"isfile",
"(",
"self",
".",
"os_release_file",
")",
":",
"with",
"open",
"(",
"self",
".",
"os_release_file",
")",
"as",
"release_file",
":",
"return",
"self",
".",
"_parse_os_release_content",
"(",
"release_file",
")",
"return",
"{",
"}"
] | [
923,
4
] | [
933,
17
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution._parse_os_release_content | (lines)

def _parse_os_release_content(lines):
    """
    Parse the lines of an os-release file.

    Parameters:

    * lines: Iterable through the lines in the os-release file.
             Each line must be a unicode string or a UTF-8 encoded byte
             string.

    Returns:
        A dictionary containing all information items.
    """
    props = {}
    lexer = shlex.shlex(lines, posix=True)
    lexer.whitespace_split = True

    # The shlex module defines its `wordchars` variable using literals,
    # making it dependent on the encoding of the Python source file.
    # In Python 2.6 and 2.7, the shlex source file is encoded in
    # 'iso-8859-1', and the `wordchars` variable is defined as a byte
    # string. This causes a UnicodeDecodeError to be raised when the
    # parsed content is a unicode object. The following fix resolves that
    # (... but it should be fixed in shlex...):
    if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):
        lexer.wordchars = lexer.wordchars.decode('iso-8859-1')
    tokens = list(lexer)
    for token in tokens:
        # At this point, all shell-like parsing has been done (i.e.
        # comments processed, quotes and backslash escape sequences
        # processed, multi-line values assembled, trailing newlines
        # stripped, etc.), so the tokens are now either:
        # * variable assignments: var=value
        # * commands or their arguments (not allowed in os-release)
        if '=' in token:
            k, v = token.split('=', 1)
            props[k.lower()] = v
        else:
            # Ignore any tokens that are not variable assignments
            pass

    if 'version_codename' in props:
        # os-release added a version_codename field. Use that in
        # preference to anything else. Note that some distros purposefully
        # do not have code names. They should be setting
        # version_codename=""
        props['codename'] = props['version_codename']
    elif 'ubuntu_codename' in props:
        # Same as above but a non-standard field name used on older Ubuntus
        props['codename'] = props['ubuntu_codename']
    elif 'version' in props:
        # If there is no version_codename, parse it from the version
        codename = re.search(r'(\(\D+\))|,(\s+)?\D+', props['version'])
        if codename:
            codename = codename.group()
            codename = codename.strip('()')
            codename = codename.strip(',')
            codename = codename.strip()
            # The codename appears within parentheses.
            props['codename'] = codename
    return props
"def",
"_parse_os_release_content",
"(",
"lines",
")",
":",
"props",
"=",
"{",
"}",
"lexer",
"=",
"shlex",
".",
"shlex",
"(",
"lines",
",",
"posix",
"=",
"True",
")",
"lexer",
".",
"whitespace_split",
"=",
"True",
"# The shlex module defines its `wordchars` variable using literals,",
"# making it dependent on the encoding of the Python source file.",
"# In Python 2.6 and 2.7, the shlex source file is encoded in",
"# 'iso-8859-1', and the `wordchars` variable is defined as a byte",
"# string. This causes a UnicodeDecodeError to be raised when the",
"# parsed content is a unicode object. The following fix resolves that",
"# (... but it should be fixed in shlex...):",
"if",
"sys",
".",
"version_info",
"[",
"0",
"]",
"==",
"2",
"and",
"isinstance",
"(",
"lexer",
".",
"wordchars",
",",
"bytes",
")",
":",
"lexer",
".",
"wordchars",
"=",
"lexer",
".",
"wordchars",
".",
"decode",
"(",
"'iso-8859-1'",
")",
"tokens",
"=",
"list",
"(",
"lexer",
")",
"for",
"token",
"in",
"tokens",
":",
"# At this point, all shell-like parsing has been done (i.e.",
"# comments processed, quotes and backslash escape sequences",
"# processed, multi-line values assembled, trailing newlines",
"# stripped, etc.), so the tokens are now either:",
"# * variable assignments: var=value",
"# * commands or their arguments (not allowed in os-release)",
"if",
"'='",
"in",
"token",
":",
"k",
",",
"v",
"=",
"token",
".",
"split",
"(",
"'='",
",",
"1",
")",
"props",
"[",
"k",
".",
"lower",
"(",
")",
"]",
"=",
"v",
"else",
":",
"# Ignore any tokens that are not variable assignments",
"pass",
"if",
"'version_codename'",
"in",
"props",
":",
"# os-release added a version_codename field. Use that in",
"# preference to anything else Note that some distros purposefully",
"# do not have code names. They should be setting",
"# version_codename=\"\"",
"props",
"[",
"'codename'",
"]",
"=",
"props",
"[",
"'version_codename'",
"]",
"elif",
"'ubuntu_codename'",
"in",
"props",
":",
"# Same as above but a non-standard field name used on older Ubuntus",
"props",
"[",
"'codename'",
"]",
"=",
"props",
"[",
"'ubuntu_codename'",
"]",
"elif",
"'version'",
"in",
"props",
":",
"# If there is no version_codename, parse it from the version",
"codename",
"=",
"re",
".",
"search",
"(",
"r'(\\(\\D+\\))|,(\\s+)?\\D+'",
",",
"props",
"[",
"'version'",
"]",
")",
"if",
"codename",
":",
"codename",
"=",
"codename",
".",
"group",
"(",
")",
"codename",
"=",
"codename",
".",
"strip",
"(",
"'()'",
")",
"codename",
"=",
"codename",
".",
"strip",
"(",
"','",
")",
"codename",
"=",
"codename",
".",
"strip",
"(",
")",
"# codename appears within paranthese.",
"props",
"[",
"'codename'",
"]",
"=",
"codename",
"return",
"props"
] | [
936,
4
] | [
998,
20
] | python | en | ['en', 'error', 'th'] | False |
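
The shlex-based parsing above works on any string, not just an open file; a self-contained sketch on an in-memory os-release snippet:

    import shlex

    content = 'NAME="Ubuntu"\nVERSION="20.04.6 LTS (Focal Fossa)"\nID=ubuntu\n'
    lexer = shlex.shlex(content, posix=True)
    lexer.whitespace_split = True
    props = {}
    for token in lexer:
        if '=' in token:
            k, v = token.split('=', 1)
            props[k.lower()] = v  # quotes are already stripped by shlex
    print(props['name'], props['id'])  # -> Ubuntu ubuntu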

LinuxDistribution._lsb_release_info | (self)

def _lsb_release_info(self):
    """
    Get the information items from the lsb_release command output.

    Returns:
        A dictionary containing all information items.
    """
    if not self.include_lsb:
        return {}
    with open(os.devnull, 'w') as devnull:
        try:
            cmd = ('lsb_release', '-a')
            stdout = subprocess.check_output(cmd, stderr=devnull)
        except OSError:  # Command not found
            return {}
    content = self._to_str(stdout).splitlines()
    return self._parse_lsb_release_content(content)
"def",
"_lsb_release_info",
"(",
"self",
")",
":",
"if",
"not",
"self",
".",
"include_lsb",
":",
"return",
"{",
"}",
"with",
"open",
"(",
"os",
".",
"devnull",
",",
"'w'",
")",
"as",
"devnull",
":",
"try",
":",
"cmd",
"=",
"(",
"'lsb_release'",
",",
"'-a'",
")",
"stdout",
"=",
"subprocess",
".",
"check_output",
"(",
"cmd",
",",
"stderr",
"=",
"devnull",
")",
"except",
"OSError",
":",
"# Command not found",
"return",
"{",
"}",
"content",
"=",
"self",
".",
"_to_str",
"(",
"stdout",
")",
".",
"splitlines",
"(",
")",
"return",
"self",
".",
"_parse_lsb_release_content",
"(",
"content",
")"
] | [
1001,
4
] | [
1017,
55
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution._parse_lsb_release_content | (lines)

def _parse_lsb_release_content(lines):
    """
    Parse the output of the lsb_release command.

    Parameters:

    * lines: Iterable through the lines of the lsb_release output.
             Each line must be a unicode string or a UTF-8 encoded byte
             string.

    Returns:
        A dictionary containing all information items.
    """
    props = {}
    for line in lines:
        kv = line.strip('\n').split(':', 1)
        if len(kv) != 2:
            # Ignore lines without colon.
            continue
        k, v = kv
        props.update({k.replace(' ', '_').lower(): v.strip()})
    return props
"def",
"_parse_lsb_release_content",
"(",
"lines",
")",
":",
"props",
"=",
"{",
"}",
"for",
"line",
"in",
"lines",
":",
"kv",
"=",
"line",
".",
"strip",
"(",
"'\\n'",
")",
".",
"split",
"(",
"':'",
",",
"1",
")",
"if",
"len",
"(",
"kv",
")",
"!=",
"2",
":",
"# Ignore lines without colon.",
"continue",
"k",
",",
"v",
"=",
"kv",
"props",
".",
"update",
"(",
"{",
"k",
".",
"replace",
"(",
"' '",
",",
"'_'",
")",
".",
"lower",
"(",
")",
":",
"v",
".",
"strip",
"(",
")",
"}",
")",
"return",
"props"
] | [
1020,
4
] | [
1041,
20
] | python | en | ['en', 'error', 'th'] | False |
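
A quick self-contained check of the key normalization above on sample lsb_release output:

    lines = ['Distributor ID:\tUbuntu', 'Release:\t20.04', 'Codename:\tfocal', 'no colon here']
    props = {}
    for line in lines:
        kv = line.strip('\n').split(':', 1)
        if len(kv) != 2:
            continue  # ignore lines without a colon
        k, v = kv
        props[k.replace(' ', '_').lower()] = v.strip()
    print(props)  # -> {'distributor_id': 'Ubuntu', 'release': '20.04', 'codename': 'focal'}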

LinuxDistribution._distro_release_info | (self)

def _distro_release_info(self):
    """
    Get the information items from the specified distro release file.

    Returns:
        A dictionary containing all information items.
    """
    if self.distro_release_file:
        # If it was specified, we use it and parse what we can, even if
        # its file name or content does not match the expected pattern.
        distro_info = self._parse_distro_release_file(
            self.distro_release_file)
        basename = os.path.basename(self.distro_release_file)
        # The file name pattern for user-specified distro release files
        # is somewhat more tolerant (compared to when searching for the
        # file), because we want to use what was specified as best as
        # possible.
        match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
        if 'name' in distro_info \
                and 'cloudlinux' in distro_info['name'].lower():
            distro_info['id'] = 'cloudlinux'
        elif match:
            distro_info['id'] = match.group(1)
        return distro_info
    else:
        try:
            basenames = os.listdir(_UNIXCONFDIR)
            # We sort for repeatability in cases where there are multiple
            # distro specific files; e.g. CentOS, Oracle, Enterprise all
            # containing `redhat-release` on top of their own.
            basenames.sort()
        except OSError:
            # This may occur when /etc is not readable but we can't be
            # sure about the *-release files. Check common entries of
            # /etc for information. If they turn out to not be there the
            # error is handled in `_parse_distro_release_file()`.
            basenames = ['SuSE-release',
                         'arch-release',
                         'base-release',
                         'centos-release',
                         'fedora-release',
                         'gentoo-release',
                         'mageia-release',
                         'mandrake-release',
                         'mandriva-release',
                         'mandrivalinux-release',
                         'manjaro-release',
                         'oracle-release',
                         'redhat-release',
                         'sl-release',
                         'slackware-version']
        for basename in basenames:
            if basename in _DISTRO_RELEASE_IGNORE_BASENAMES:
                continue
            match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
            if match:
                filepath = os.path.join(_UNIXCONFDIR, basename)
                distro_info = self._parse_distro_release_file(filepath)
                if 'name' in distro_info:
                    # The name is always present if the pattern matches
                    self.distro_release_file = filepath
                    distro_info['id'] = match.group(1)
                    if 'cloudlinux' in distro_info['name'].lower():
                        distro_info['id'] = 'cloudlinux'
                    return distro_info
        return {}
"def",
"_distro_release_info",
"(",
"self",
")",
":",
"if",
"self",
".",
"distro_release_file",
":",
"# If it was specified, we use it and parse what we can, even if",
"# its file name or content does not match the expected pattern.",
"distro_info",
"=",
"self",
".",
"_parse_distro_release_file",
"(",
"self",
".",
"distro_release_file",
")",
"basename",
"=",
"os",
".",
"path",
".",
"basename",
"(",
"self",
".",
"distro_release_file",
")",
"# The file name pattern for user-specified distro release files",
"# is somewhat more tolerant (compared to when searching for the",
"# file), because we want to use what was specified as best as",
"# possible.",
"match",
"=",
"_DISTRO_RELEASE_BASENAME_PATTERN",
".",
"match",
"(",
"basename",
")",
"if",
"'name'",
"in",
"distro_info",
"and",
"'cloudlinux'",
"in",
"distro_info",
"[",
"'name'",
"]",
".",
"lower",
"(",
")",
":",
"distro_info",
"[",
"'id'",
"]",
"=",
"'cloudlinux'",
"elif",
"match",
":",
"distro_info",
"[",
"'id'",
"]",
"=",
"match",
".",
"group",
"(",
"1",
")",
"return",
"distro_info",
"else",
":",
"try",
":",
"basenames",
"=",
"os",
".",
"listdir",
"(",
"_UNIXCONFDIR",
")",
"# We sort for repeatability in cases where there are multiple",
"# distro specific files; e.g. CentOS, Oracle, Enterprise all",
"# containing `redhat-release` on top of their own.",
"basenames",
".",
"sort",
"(",
")",
"except",
"OSError",
":",
"# This may occur when /etc is not readable but we can't be",
"# sure about the *-release files. Check common entries of",
"# /etc for information. If they turn out to not be there the",
"# error is handled in `_parse_distro_release_file()`.",
"basenames",
"=",
"[",
"'SuSE-release'",
",",
"'arch-release'",
",",
"'base-release'",
",",
"'centos-release'",
",",
"'fedora-release'",
",",
"'gentoo-release'",
",",
"'mageia-release'",
",",
"'mandrake-release'",
",",
"'mandriva-release'",
",",
"'mandrivalinux-release'",
",",
"'manjaro-release'",
",",
"'oracle-release'",
",",
"'redhat-release'",
",",
"'sl-release'",
",",
"'slackware-version'",
"]",
"for",
"basename",
"in",
"basenames",
":",
"if",
"basename",
"in",
"_DISTRO_RELEASE_IGNORE_BASENAMES",
":",
"continue",
"match",
"=",
"_DISTRO_RELEASE_BASENAME_PATTERN",
".",
"match",
"(",
"basename",
")",
"if",
"match",
":",
"filepath",
"=",
"os",
".",
"path",
".",
"join",
"(",
"_UNIXCONFDIR",
",",
"basename",
")",
"distro_info",
"=",
"self",
".",
"_parse_distro_release_file",
"(",
"filepath",
")",
"if",
"'name'",
"in",
"distro_info",
":",
"# The name is always present if the pattern matches",
"self",
".",
"distro_release_file",
"=",
"filepath",
"distro_info",
"[",
"'id'",
"]",
"=",
"match",
".",
"group",
"(",
"1",
")",
"if",
"'cloudlinux'",
"in",
"distro_info",
"[",
"'name'",
"]",
".",
"lower",
"(",
")",
":",
"distro_info",
"[",
"'id'",
"]",
"=",
"'cloudlinux'",
"return",
"distro_info",
"return",
"{",
"}"
] | [
1086,
4
] | [
1151,
21
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution._parse_distro_release_file | (self, filepath)

def _parse_distro_release_file(self, filepath):
    """
    Parse a distro release file.

    Parameters:

    * filepath: Path name of the distro release file.

    Returns:
        A dictionary containing all information items.
    """
    try:
        with open(filepath) as fp:
            # Only parse the first line. For instance, on SLES there
            # are multiple lines. We don't want them...
            return self._parse_distro_release_content(fp.readline())
    except (OSError, IOError):
        # Ignore not being able to read a specific, seemingly version
        # related file.
        # See https://github.com/nir0s/distro/issues/162
        return {}
"def",
"_parse_distro_release_file",
"(",
"self",
",",
"filepath",
")",
":",
"try",
":",
"with",
"open",
"(",
"filepath",
")",
"as",
"fp",
":",
"# Only parse the first line. For instance, on SLES there",
"# are multiple lines. We don't want them...",
"return",
"self",
".",
"_parse_distro_release_content",
"(",
"fp",
".",
"readline",
"(",
")",
")",
"except",
"(",
"OSError",
",",
"IOError",
")",
":",
"# Ignore not being able to read a specific, seemingly version",
"# related file.",
"# See https://github.com/nir0s/distro/issues/162",
"return",
"{",
"}"
] | [
1153,
4
] | [
1173,
21
] | python | en | ['en', 'error', 'th'] | False |

LinuxDistribution._parse_distro_release_content | (line)

def _parse_distro_release_content(line):
    """
    Parse a line from a distro release file.

    Parameters:

    * line: Line from the distro release file. Must be a unicode string
            or a UTF-8 encoded byte string.

    Returns:
        A dictionary containing all information items.
    """
    matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(
        line.strip()[::-1])
    distro_info = {}
    if matches:
        # regexp ensures non-None
        distro_info['name'] = matches.group(3)[::-1]
        if matches.group(2):
            distro_info['version_id'] = matches.group(2)[::-1]
        if matches.group(1):
            distro_info['codename'] = matches.group(1)[::-1]
    elif line:
        distro_info['name'] = line.strip()
    return distro_info
"def",
"_parse_distro_release_content",
"(",
"line",
")",
":",
"matches",
"=",
"_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN",
".",
"match",
"(",
"line",
".",
"strip",
"(",
")",
"[",
":",
":",
"-",
"1",
"]",
")",
"distro_info",
"=",
"{",
"}",
"if",
"matches",
":",
"# regexp ensures non-None",
"distro_info",
"[",
"'name'",
"]",
"=",
"matches",
".",
"group",
"(",
"3",
")",
"[",
":",
":",
"-",
"1",
"]",
"if",
"matches",
".",
"group",
"(",
"2",
")",
":",
"distro_info",
"[",
"'version_id'",
"]",
"=",
"matches",
".",
"group",
"(",
"2",
")",
"[",
":",
":",
"-",
"1",
"]",
"if",
"matches",
".",
"group",
"(",
"1",
")",
":",
"distro_info",
"[",
"'codename'",
"]",
"=",
"matches",
".",
"group",
"(",
"1",
")",
"[",
":",
":",
"-",
"1",
"]",
"elif",
"line",
":",
"distro_info",
"[",
"'name'",
"]",
"=",
"line",
".",
"strip",
"(",
")",
"return",
"distro_info"
] | [
1176,
4
] | [
1199,
26
] | python | en | ['en', 'error', 'th'] | False |
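
The reversed-line trick above makes the trailing version and codename easy to anchor; a sketch with a reconstructed pattern (the real `_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN` is defined elsewhere in the module and may differ):

    import re

    # Against the reversed line: optional '(codename)', then version digits,
    # then the reversed word 'release', then the distro name.
    pattern = re.compile(r'(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)')
    line = 'CentOS Linux release 7.4.1708 (Core)'
    m = pattern.match(line.strip()[::-1])
    print(m.group(3)[::-1], m.group(2)[::-1], m.group(1)[::-1])
    # -> CentOS Linux 7.4.1708 Core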

expand_longitude_slice_by_degrees | (min_lon, max_lon, include_prime_meridian, degrees)

def expand_longitude_slice_by_degrees(min_lon, max_lon, include_prime_meridian, degrees) \
        -> (int, int, bool, int):
    """
    Returns a tuple of expanded min and max longitude, whether the expanded area includes the prime meridian, and
    the final span of the expanded angle in degrees.
    """
    if min_lon > max_lon:
        # Swap min and max to simplify math
        min_lon, max_lon = max_lon, min_lon
    original_span = calculate_longitude_angle_in_degrees(min_lon, max_lon, include_prime_meridian)
    # Handle trivial case where expanded angle is the whole globe
    if original_span + (degrees * 2) >= 360:
        return -180, 180, True, 360
    expanded_min_lon = normalize_longitude_arithmetic(min_lon - degrees)
    expanded_max_lon = normalize_longitude_arithmetic(max_lon + degrees)
    if expanded_min_lon > expanded_max_lon:
        # Swap min and max to simplify math
        expanded_min_lon, expanded_max_lon = expanded_max_lon, expanded_min_lon
    if not include_prime_meridian:
        # Does this expanded area now include the prime meridian?
        if 0 < min_lon <= degrees:
            include_prime_meridian = True
        elif max_lon < 0 and (degrees + max_lon >= 0):
            include_prime_meridian = True
    expanded_span = calculate_longitude_angle_in_degrees(expanded_min_lon, expanded_max_lon,
                                                         include_prime_meridian)
    # Round to 0.001 for the sake of this check
    original_span = round(original_span, 3)
    if not (expanded_span >= original_span):
        raise ValueError('Expanded span is smaller than original span. {} < {}'.format(expanded_span,
                                                                                       original_span))
    expected_span = original_span + (degrees * 2)
    # Round to 0.001 for the sake of this check
    expected_span = round(expected_span, 3)
    if expected_span > 360:
        expected_span = 360
    error_amount = abs(expected_span - expanded_span)
    if error_amount >= 0.002 and expanded_span < 360:
        raise ValueError(
            'Expanded span is smaller than expected. Original {}, expanded to {}, expected {} (expansion angle {})'
            .format(original_span, expanded_span, expected_span, degrees))
    return expanded_min_lon, expanded_max_lon, include_prime_meridian, expanded_span
"def",
"expand_longitude_slice_by_degrees",
"(",
"min_lon",
",",
"max_lon",
",",
"include_prime_meridian",
",",
"degrees",
")",
"->",
"(",
"int",
",",
"int",
",",
"bool",
",",
"int",
")",
":",
"if",
"min_lon",
">",
"max_lon",
":",
"# Swap min and max to simplify math",
"min_lon",
",",
"max_lon",
"=",
"max_lon",
",",
"min_lon",
"original_span",
"=",
"calculate_longitude_angle_in_degrees",
"(",
"min_lon",
",",
"max_lon",
",",
"include_prime_meridian",
")",
"# Handle trivial case where expanded angle is the whole globe",
"if",
"original_span",
"+",
"(",
"degrees",
"*",
"2",
")",
">=",
"360",
":",
"return",
"-",
"180",
",",
"180",
",",
"True",
",",
"360",
"expanded_min_lon",
"=",
"normalize_longitude_arithmetic",
"(",
"min_lon",
"-",
"degrees",
")",
"expanded_max_lon",
"=",
"normalize_longitude_arithmetic",
"(",
"max_lon",
"+",
"degrees",
")",
"if",
"expanded_min_lon",
">",
"expanded_max_lon",
":",
"# Swap min and max to simplify math",
"expanded_min_lon",
",",
"expanded_max_lon",
"=",
"expanded_max_lon",
",",
"expanded_min_lon",
"if",
"not",
"include_prime_meridian",
":",
"# Does this expanded area now include the prime meridian?",
"if",
"0",
"<",
"min_lon",
"<=",
"degrees",
":",
"include_prime_meridian",
"=",
"True",
"elif",
"max_lon",
"<",
"0",
"and",
"(",
"degrees",
"+",
"max_lon",
">=",
"0",
")",
":",
"include_prime_meridian",
"=",
"True",
"expanded_span",
"=",
"calculate_longitude_angle_in_degrees",
"(",
"expanded_min_lon",
",",
"expanded_max_lon",
",",
"include_prime_meridian",
")",
"# Round to 0.001 for sake of this check",
"original_span",
"=",
"round",
"(",
"original_span",
",",
"3",
")",
"if",
"not",
"(",
"expanded_span",
">=",
"original_span",
")",
":",
"raise",
"ValueError",
"(",
"'Expanded span is smaller than original span. {} < {}'",
".",
"format",
"(",
"expanded_span",
",",
"original_span",
")",
")",
"expected_span",
"=",
"original_span",
"+",
"(",
"degrees",
"*",
"2",
")",
"# Round to 0.001 for sake of this check",
"expected_span",
"=",
"round",
"(",
"expected_span",
",",
"3",
")",
"if",
"expected_span",
">",
"360",
":",
"expected_span",
"=",
"360",
"error_amount",
"=",
"abs",
"(",
"expected_span",
"-",
"expanded_span",
")",
"if",
"error_amount",
">=",
"0.002",
"and",
"expanded_span",
"<",
"360",
":",
"raise",
"ValueError",
"(",
"'Expanded span is smaller than expected. Original {}, expanded to {}, expected {} (expansion angle {})'",
".",
"format",
"(",
"original_span",
",",
"expanded_span",
",",
"expected_span",
",",
"degrees",
")",
")",
"return",
"expanded_min_lon",
",",
"expanded_max_lon",
",",
"include_prime_meridian",
",",
"expanded_span"
] | [
56,
0
] | [
108,
84
] | python | en | ['en', 'error', 'th'] | False |
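
`calculate_longitude_angle_in_degrees` and `normalize_longitude_arithmetic` are helpers defined elsewhere in that module; a plausible sketch of the normalization (wrap any angle into [-180, 180)):

    def normalize_longitude_arithmetic(lon):
        # Assumed behavior: shift, wrap modulo 360, shift back.
        return ((lon + 180.0) % 360.0) - 180.0

    print(normalize_longitude_arithmetic(185))   # -> -175.0
    print(normalize_longitude_arithmetic(-190))  # -> 170.0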

loadImageSeries | (filelist=None)

def loadImageSeries(filelist=None):
    """create a list of :py:class:`~PIL.Image.Image` objects for use in a montage"""
    if filelist is None or len(filelist) < 1:
        return

    imglist = []
    for img in filelist:
        if not os.path.exists(img):
            print(f"unable to find {img}")
            continue
        try:
            with Image.open(img) as im:
                im = im.convert2byte()
        except Exception:
            if not isSpiderImage(img):
                print(img + " is not a Spider image file")
            continue
        im.info["filename"] = img
        imglist.append(im)
    return imglist
"def",
"loadImageSeries",
"(",
"filelist",
"=",
"None",
")",
":",
"if",
"filelist",
"is",
"None",
"or",
"len",
"(",
"filelist",
")",
"<",
"1",
":",
"return",
"imglist",
"=",
"[",
"]",
"for",
"img",
"in",
"filelist",
":",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"img",
")",
":",
"print",
"(",
"f\"unable to find {img}\"",
")",
"continue",
"try",
":",
"with",
"Image",
".",
"open",
"(",
"img",
")",
"as",
"im",
":",
"im",
"=",
"im",
".",
"convert2byte",
"(",
")",
"except",
"Exception",
":",
"if",
"not",
"isSpiderImage",
"(",
"img",
")",
":",
"print",
"(",
"img",
"+",
"\" is not a Spider image file\"",
")",
"continue",
"im",
".",
"info",
"[",
"\"filename\"",
"]",
"=",
"img",
"imglist",
".",
"append",
"(",
"im",
")",
"return",
"imglist"
] | [
207,
0
] | [
226,
18
] | python | en | ['en', 'en', 'en'] | True |
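
A hypothetical call, assuming this helper comes from Pillow's SPIDER plugin (`PIL.SpiderImagePlugin`); the file names are made up:

    from PIL.SpiderImagePlugin import loadImageSeries

    images = loadImageSeries(["stack_001.spi", "stack_002.spi"])
    if images:
        print(len(images), "frames; first is", images[0].info["filename"])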

PostGISIntrospection.get_geometry_type | (self, table_name, description)

def get_geometry_type(self, table_name, description):
    """
    The geometry type OID used by PostGIS does not indicate the particular
    type of field that a geometry column is (e.g., whether it's a
    PointField or a PolygonField). Thus, this routine queries the PostGIS
    metadata tables to determine the geometry type.
    """
    with self.connection.cursor() as cursor:
        cursor.execute("""
            SELECT t.coord_dimension, t.srid, t.type FROM (
                SELECT * FROM geometry_columns
                UNION ALL
                SELECT * FROM geography_columns
            ) AS t WHERE t.f_table_name = %s AND t.f_geometry_column = %s
        """, (table_name, description.name))
        row = cursor.fetchone()
        if not row:
            raise Exception('Could not find a geometry or geography column for "%s"."%s"' %
                            (table_name, description.name))
        dim, srid, field_type = row
        # OGRGeomType does not require GDAL and makes it easy to convert
        # from OGC geom type name to Django field.
        field_type = OGRGeomType(field_type).django
        # Getting any GeometryField keyword arguments that are not the default.
        field_params = {}
        if self.postgis_oid_lookup.get(description.type_code) == 'geography':
            field_params['geography'] = True
        if srid != 4326:
            field_params['srid'] = srid
        if dim != 2:
            field_params['dim'] = dim
    return field_type, field_params
"def",
"get_geometry_type",
"(",
"self",
",",
"table_name",
",",
"description",
")",
":",
"with",
"self",
".",
"connection",
".",
"cursor",
"(",
")",
"as",
"cursor",
":",
"cursor",
".",
"execute",
"(",
"\"\"\"\n SELECT t.coord_dimension, t.srid, t.type FROM (\n SELECT * FROM geometry_columns\n UNION ALL\n SELECT * FROM geography_columns\n ) AS t WHERE t.f_table_name = %s AND t.f_geometry_column = %s\n \"\"\"",
",",
"(",
"table_name",
",",
"description",
".",
"name",
")",
")",
"row",
"=",
"cursor",
".",
"fetchone",
"(",
")",
"if",
"not",
"row",
":",
"raise",
"Exception",
"(",
"'Could not find a geometry or geography column for \"%s\".\"%s\"'",
"%",
"(",
"table_name",
",",
"description",
".",
"name",
")",
")",
"dim",
",",
"srid",
",",
"field_type",
"=",
"row",
"# OGRGeomType does not require GDAL and makes it easy to convert",
"# from OGC geom type name to Django field.",
"field_type",
"=",
"OGRGeomType",
"(",
"field_type",
")",
".",
"django",
"# Getting any GeometryField keyword arguments that are not the default.",
"field_params",
"=",
"{",
"}",
"if",
"self",
".",
"postgis_oid_lookup",
".",
"get",
"(",
"description",
".",
"type_code",
")",
"==",
"'geography'",
":",
"field_params",
"[",
"'geography'",
"]",
"=",
"True",
"if",
"srid",
"!=",
"4326",
":",
"field_params",
"[",
"'srid'",
"]",
"=",
"srid",
"if",
"dim",
"!=",
"2",
":",
"field_params",
"[",
"'dim'",
"]",
"=",
"dim",
"return",
"field_type",
",",
"field_params"
] | [
28,
4
] | [
59,
39
] | python | en | ['en', 'error', 'th'] | False |

with_metaclass | (meta, *bases)

def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    # This requires a bit of explanation: the basic idea is to make a
    # dummy metaclass for one level of class instantiation that replaces
    # itself with the actual metaclass.
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)
    return type.__new__(metaclass, 'temporary_class', (), {})
"def",
"with_metaclass",
"(",
"meta",
",",
"*",
"bases",
")",
":",
"# This requires a bit of explanation: the basic idea is to make a",
"# dummy metaclass for one level of class instantiation that replaces",
"# itself with the actual metaclass.",
"class",
"metaclass",
"(",
"type",
")",
":",
"def",
"__new__",
"(",
"cls",
",",
"name",
",",
"this_bases",
",",
"d",
")",
":",
"return",
"meta",
"(",
"name",
",",
"bases",
",",
"d",
")",
"return",
"type",
".",
"__new__",
"(",
"metaclass",
",",
"'temporary_class'",
",",
"(",
")",
",",
"{",
"}",
")"
] | [
55,
0
] | [
63,
61
] | python | en | ['en', 'en', 'en'] | True |
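
A self-contained usage sketch of the `with_metaclass` helper above: the temporary class is swapped out, so the real class is built directly by `Meta`:

    class Meta(type):
        def __new__(mcs, name, bases, d):
            d['tagged'] = True  # the metaclass stamps every class it creates
            return super().__new__(mcs, name, bases, d)

    class Base(with_metaclass(Meta, object)):
        pass

    print(type(Base) is Meta, Base.tagged)  # -> True True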

ETagRequestMixin.cache_control | (self)

def cache_control(self):
    """A :class:`~werkzeug.datastructures.RequestCacheControl` object
    for the incoming cache control headers.
    """
    cache_control = self.environ.get("HTTP_CACHE_CONTROL")
    return parse_cache_control_header(cache_control, None, RequestCacheControl)
"def",
"cache_control",
"(",
"self",
")",
":",
"cache_control",
"=",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_CACHE_CONTROL\"",
")",
"return",
"parse_cache_control_header",
"(",
"cache_control",
",",
"None",
",",
"RequestCacheControl",
")"
] | [
29,
4
] | [
34,
83
] | python | de | ['de', 'en', 'de'] | True |
ETagRequestMixin.if_match | (self) | An object containing all the etags in the `If-Match` header.
:rtype: :class:`~werkzeug.datastructures.ETags`
| An object containing all the etags in the `If-Match` header. | def if_match(self):
"""An object containing all the etags in the `If-Match` header.
:rtype: :class:`~werkzeug.datastructures.ETags`
"""
return parse_etags(self.environ.get("HTTP_IF_MATCH")) | [
"def",
"if_match",
"(",
"self",
")",
":",
"return",
"parse_etags",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_IF_MATCH\"",
")",
")"
] | [
37,
4
] | [
42,
61
] | python | en | ['en', 'en', 'en'] | True |
ETagRequestMixin.if_none_match | (self) | An object containing all the etags in the `If-None-Match` header.
:rtype: :class:`~werkzeug.datastructures.ETags`
| An object containing all the etags in the `If-None-Match` header. | def if_none_match(self):
"""An object containing all the etags in the `If-None-Match` header.
:rtype: :class:`~werkzeug.datastructures.ETags`
"""
return parse_etags(self.environ.get("HTTP_IF_NONE_MATCH")) | [
"def",
"if_none_match",
"(",
"self",
")",
":",
"return",
"parse_etags",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_IF_NONE_MATCH\"",
")",
")"
] | [
45,
4
] | [
50,
66
] | python | en | ['en', 'en', 'en'] | True |
ETagRequestMixin.if_modified_since | (self) | The parsed `If-Modified-Since` header as datetime object. | The parsed `If-Modified-Since` header as datetime object. | def if_modified_since(self):
"""The parsed `If-Modified-Since` header as datetime object."""
return parse_date(self.environ.get("HTTP_IF_MODIFIED_SINCE")) | [
"def",
"if_modified_since",
"(",
"self",
")",
":",
"return",
"parse_date",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_IF_MODIFIED_SINCE\"",
")",
")"
] | [
53,
4
] | [
55,
69
] | python | en | ['en', 'en', 'en'] | True |
ETagRequestMixin.if_unmodified_since | (self) | The parsed `If-Unmodified-Since` header as datetime object. | The parsed `If-Unmodified-Since` header as datetime object. | def if_unmodified_since(self):
"""The parsed `If-Unmodified-Since` header as datetime object."""
return parse_date(self.environ.get("HTTP_IF_UNMODIFIED_SINCE")) | [
"def",
"if_unmodified_since",
"(",
"self",
")",
":",
"return",
"parse_date",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_IF_UNMODIFIED_SINCE\"",
")",
")"
] | [
58,
4
] | [
60,
71
] | python | en | ['en', 'en', 'en'] | True |
ETagRequestMixin.if_range | (self) | The parsed `If-Range` header.
.. versionadded:: 0.7
:rtype: :class:`~werkzeug.datastructures.IfRange`
| The parsed `If-Range` header. | def if_range(self):
"""The parsed `If-Range` header.
.. versionadded:: 0.7
:rtype: :class:`~werkzeug.datastructures.IfRange`
"""
return parse_if_range_header(self.environ.get("HTTP_IF_RANGE")) | [
"def",
"if_range",
"(",
"self",
")",
":",
"return",
"parse_if_range_header",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_IF_RANGE\"",
")",
")"
] | [
63,
4
] | [
70,
71
] | python | en | ['en', 'af', 'en'] | True |
ETagRequestMixin.range | (self) | The parsed `Range` header.
.. versionadded:: 0.7
:rtype: :class:`~werkzeug.datastructures.Range`
| The parsed `Range` header. | def range(self):
"""The parsed `Range` header.
.. versionadded:: 0.7
:rtype: :class:`~werkzeug.datastructures.Range`
"""
return parse_range_header(self.environ.get("HTTP_RANGE")) | [
"def",
"range",
"(",
"self",
")",
":",
"return",
"parse_range_header",
"(",
"self",
".",
"environ",
".",
"get",
"(",
"\"HTTP_RANGE\"",
")",
")"
] | [
73,
4
] | [
80,
65
] | python | en | ['en', 'af', 'en'] | True |
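All of the request properties above read directly from the WSGI environ. A sketch of exercising them through Werkzeug's standard Request wrapper, which mixes in ETagRequestMixin; the header values are invented for illustration:

from werkzeug.test import EnvironBuilder
from werkzeug.wrappers import Request

builder = EnvironBuilder(headers={
    'If-None-Match': '"abc123"',
    'Range': 'bytes=0-499',
})
req = Request(builder.get_environ())
print(req.if_none_match.contains('abc123'))  # True
print(req.range.ranges)                      # [(0, 500)], end is exclusive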
ETagResponseMixin.cache_control | (self) | The Cache-Control general-header field is used to specify
directives that MUST be obeyed by all caching mechanisms along the
request/response chain.
| The Cache-Control general-header field is used to specify
directives that MUST be obeyed by all caching mechanisms along the
request/response chain.
| def cache_control(self):
"""The Cache-Control general-header field is used to specify
directives that MUST be obeyed by all caching mechanisms along the
request/response chain.
"""
def on_update(cache_control):
if not cache_control and "cache-control" in self.headers:
del self.headers["cache-control"]
elif cache_control:
self.headers["Cache-Control"] = cache_control.to_header()
return parse_cache_control_header(
self.headers.get("cache-control"), on_update, ResponseCacheControl
) | [
"def",
"cache_control",
"(",
"self",
")",
":",
"def",
"on_update",
"(",
"cache_control",
")",
":",
"if",
"not",
"cache_control",
"and",
"\"cache-control\"",
"in",
"self",
".",
"headers",
":",
"del",
"self",
".",
"headers",
"[",
"\"cache-control\"",
"]",
"elif",
"cache_control",
":",
"self",
".",
"headers",
"[",
"\"Cache-Control\"",
"]",
"=",
"cache_control",
".",
"to_header",
"(",
")",
"return",
"parse_cache_control_header",
"(",
"self",
".",
"headers",
".",
"get",
"(",
"\"cache-control\"",
")",
",",
"on_update",
",",
"ResponseCacheControl",
")"
] | [
95,
4
] | [
109,
9
] | python | en | ['en', 'en', 'en'] | True |
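A small sketch of the on_update callback above in action: mutating the returned ResponseCacheControl object rewrites the header, assuming Werkzeug's standard Response class:

from werkzeug.wrappers import Response

resp = Response('hello')
resp.cache_control.max_age = 300  # on_update fires and sets the header
resp.cache_control.public = True
print(resp.headers['Cache-Control'])  # e.g. 'public, max-age=300'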
ETagResponseMixin._wrap_response | (self, start, length) | Wrap existing Response in case of Range Request context. | Wrap existing Response in case of Range Request context. | def _wrap_response(self, start, length):
"""Wrap existing Response in case of Range Request context."""
if self.status_code == 206:
self.response = _RangeWrapper(self.response, start, length) | [
"def",
"_wrap_response",
"(",
"self",
",",
"start",
",",
"length",
")",
":",
"if",
"self",
".",
"status_code",
"==",
"206",
":",
"self",
".",
"response",
"=",
"_RangeWrapper",
"(",
"self",
".",
"response",
",",
"start",
",",
"length",
")"
] | [
111,
4
] | [
114,
71
] | python | en | ['en', 'en', 'en'] | True |
ETagResponseMixin._is_range_request_processable | (self, environ) | Return ``True`` if `Range` header is present and if underlying
resource is considered unchanged when compared with `If-Range` header.
| Return ``True`` if `Range` header is present and if underlying
resource is considered unchanged when compared with `If-Range` header.
| def _is_range_request_processable(self, environ):
"""Return ``True`` if `Range` header is present and if underlying
resource is considered unchanged when compared with `If-Range` header.
"""
return (
"HTTP_IF_RANGE" not in environ
or not is_resource_modified(
environ,
self.headers.get("etag"),
None,
self.headers.get("last-modified"),
ignore_if_range=False,
)
) and "HTTP_RANGE" in environ | [
"def",
"_is_range_request_processable",
"(",
"self",
",",
"environ",
")",
":",
"return",
"(",
"\"HTTP_IF_RANGE\"",
"not",
"in",
"environ",
"or",
"not",
"is_resource_modified",
"(",
"environ",
",",
"self",
".",
"headers",
".",
"get",
"(",
"\"etag\"",
")",
",",
"None",
",",
"self",
".",
"headers",
".",
"get",
"(",
"\"last-modified\"",
")",
",",
"ignore_if_range",
"=",
"False",
",",
")",
")",
"and",
"\"HTTP_RANGE\"",
"in",
"environ"
] | [
116,
4
] | [
129,
37
] | python | en | ['en', 'en', 'en'] | True |
ETagResponseMixin._process_range_request | (self, environ, complete_length=None, accept_ranges=None) | Handle Range Request related headers (RFC7233). If `Accept-Ranges`
header is valid, and Range Request is processable, we set the headers
as described by the RFC, and wrap the underlying response in a
RangeWrapper.
Returns ``True`` if Range Request can be fulfilled, ``False`` otherwise.
:raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
if `Range` header could not be parsed or satisfied.
| Handle Range Request related headers (RFC7233). If `Accept-Ranges`
header is valid, and Range Request is processable, we set the headers
as described by the RFC, and wrap the underlying response in a
RangeWrapper. | def _process_range_request(self, environ, complete_length=None, accept_ranges=None):
"""Handle Range Request related headers (RFC7233). If `Accept-Ranges`
header is valid, and Range Request is processable, we set the headers
as described by the RFC, and wrap the underlying response in a
RangeWrapper.
Returns ``True`` if Range Request can be fulfilled, ``False`` otherwise.
:raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
if `Range` header could not be parsed or satisfied.
"""
from ..exceptions import RequestedRangeNotSatisfiable
if accept_ranges is None:
return False
self.headers["Accept-Ranges"] = accept_ranges
if not self._is_range_request_processable(environ) or complete_length is None:
return False
parsed_range = parse_range_header(environ.get("HTTP_RANGE"))
if parsed_range is None:
raise RequestedRangeNotSatisfiable(complete_length)
range_tuple = parsed_range.range_for_length(complete_length)
content_range_header = parsed_range.to_content_range_header(complete_length)
if range_tuple is None or content_range_header is None:
raise RequestedRangeNotSatisfiable(complete_length)
content_length = range_tuple[1] - range_tuple[0]
# Be sure not to send 206 response
# if requested range is the full content.
if content_length != complete_length:
self.headers["Content-Length"] = content_length
self.content_range = content_range_header
self.status_code = 206
self._wrap_response(range_tuple[0], content_length)
return True
return False | [
"def",
"_process_range_request",
"(",
"self",
",",
"environ",
",",
"complete_length",
"=",
"None",
",",
"accept_ranges",
"=",
"None",
")",
":",
"from",
".",
".",
"exceptions",
"import",
"RequestedRangeNotSatisfiable",
"if",
"accept_ranges",
"is",
"None",
":",
"return",
"False",
"self",
".",
"headers",
"[",
"\"Accept-Ranges\"",
"]",
"=",
"accept_ranges",
"if",
"not",
"self",
".",
"_is_range_request_processable",
"(",
"environ",
")",
"or",
"complete_length",
"is",
"None",
":",
"return",
"False",
"parsed_range",
"=",
"parse_range_header",
"(",
"environ",
".",
"get",
"(",
"\"HTTP_RANGE\"",
")",
")",
"if",
"parsed_range",
"is",
"None",
":",
"raise",
"RequestedRangeNotSatisfiable",
"(",
"complete_length",
")",
"range_tuple",
"=",
"parsed_range",
".",
"range_for_length",
"(",
"complete_length",
")",
"content_range_header",
"=",
"parsed_range",
".",
"to_content_range_header",
"(",
"complete_length",
")",
"if",
"range_tuple",
"is",
"None",
"or",
"content_range_header",
"is",
"None",
":",
"raise",
"RequestedRangeNotSatisfiable",
"(",
"complete_length",
")",
"content_length",
"=",
"range_tuple",
"[",
"1",
"]",
"-",
"range_tuple",
"[",
"0",
"]",
"# Be sure not to send 206 response",
"# if requested range is the full content.",
"if",
"content_length",
"!=",
"complete_length",
":",
"self",
".",
"headers",
"[",
"\"Content-Length\"",
"]",
"=",
"content_length",
"self",
".",
"content_range",
"=",
"content_range_header",
"self",
".",
"status_code",
"=",
"206",
"self",
".",
"_wrap_response",
"(",
"range_tuple",
"[",
"0",
"]",
",",
"content_length",
")",
"return",
"True",
"return",
"False"
] | [
131,
4
] | [
165,
20
] | python | en | ['en', 'en', 'en'] | True |
ETagResponseMixin.make_conditional | (
self, request_or_environ, accept_ranges=False, complete_length=None
) | Make the response conditional to the request. This method works
best if an etag was defined for the response already. The `add_etag`
method can be used to do that. If called without etag just the date
header is set.
This does nothing if the request method in the request or environ is
anything but GET or HEAD.
For optimal performance when handling range requests, it's recommended
that your response data object implements `seekable`, `seek` and `tell`
methods as described by :py:class:`io.IOBase`. Objects returned by
:meth:`~werkzeug.wsgi.wrap_file` automatically implement those methods.
It does not remove the body of the response because that's something
the :meth:`__call__` function does for us automatically.
Returns self so that you can do ``return resp.make_conditional(req)``
but modifies the object in-place.
:param request_or_environ: a request object or WSGI environment to be
used to make the response conditional
against.
:param accept_ranges: This parameter dictates the value of
`Accept-Ranges` header. If ``False`` (default),
the header is not set. If ``True``, it will be set
to ``"bytes"``. If ``None``, it will be set to
``"none"``. If it's a string, it will use this
value.
:param complete_length: Will be used only in valid Range Requests.
It will set `Content-Range` complete length
value and compute `Content-Length` real value.
This parameter is mandatory for successful
Range Requests completion.
:raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
if `Range` header could not be parsed or satisfied.
| Make the response conditional to the request. This method works
best if an etag was defined for the response already. The `add_etag`
method can be used to do that. If called without etag just the date
header is set. | def make_conditional(
self, request_or_environ, accept_ranges=False, complete_length=None
):
"""Make the response conditional to the request. This method works
best if an etag was defined for the response already. The `add_etag`
method can be used to do that. If called without etag just the date
header is set.
This does nothing if the request method in the request or environ is
anything but GET or HEAD.
For optimal performance when handling range requests, it's recommended
that your response data object implements `seekable`, `seek` and `tell`
methods as described by :py:class:`io.IOBase`. Objects returned by
:meth:`~werkzeug.wsgi.wrap_file` automatically implement those methods.
It does not remove the body of the response because that's something
the :meth:`__call__` function does for us automatically.
Returns self so that you can do ``return resp.make_conditional(req)``
but modifies the object in-place.
:param request_or_environ: a request object or WSGI environment to be
used to make the response conditional
against.
:param accept_ranges: This parameter dictates the value of
`Accept-Ranges` header. If ``False`` (default),
the header is not set. If ``True``, it will be set
to ``"bytes"``. If ``None``, it will be set to
``"none"``. If it's a string, it will use this
value.
:param complete_length: Will be used only in valid Range Requests.
It will set `Content-Range` complete length
value and compute `Content-Length` real value.
This parameter is mandatory for successful
Range Requests completion.
:raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
if `Range` header could not be parsed or satisfied.
"""
environ = _get_environ(request_or_environ)
if environ["REQUEST_METHOD"] in ("GET", "HEAD"):
# if the date is not in the headers, add it now. We however
# will not override an already existing header. Unfortunately
# this header will be overridden by many WSGI servers including
# wsgiref.
if "date" not in self.headers:
self.headers["Date"] = http_date()
accept_ranges = _clean_accept_ranges(accept_ranges)
is206 = self._process_range_request(environ, complete_length, accept_ranges)
if not is206 and not is_resource_modified(
environ,
self.headers.get("etag"),
None,
self.headers.get("last-modified"),
):
if parse_etags(environ.get("HTTP_IF_MATCH")):
self.status_code = 412
else:
self.status_code = 304
if (
self.automatically_set_content_length
and "content-length" not in self.headers
):
length = self.calculate_content_length()
if length is not None:
self.headers["Content-Length"] = length
return self | [
"def",
"make_conditional",
"(",
"self",
",",
"request_or_environ",
",",
"accept_ranges",
"=",
"False",
",",
"complete_length",
"=",
"None",
")",
":",
"environ",
"=",
"_get_environ",
"(",
"request_or_environ",
")",
"if",
"environ",
"[",
"\"REQUEST_METHOD\"",
"]",
"in",
"(",
"\"GET\"",
",",
"\"HEAD\"",
")",
":",
"# if the date is not in the headers, add it now. We however",
"# will not override an already existing header. Unfortunately",
"# this header will be overriden by many WSGI servers including",
"# wsgiref.",
"if",
"\"date\"",
"not",
"in",
"self",
".",
"headers",
":",
"self",
".",
"headers",
"[",
"\"Date\"",
"]",
"=",
"http_date",
"(",
")",
"accept_ranges",
"=",
"_clean_accept_ranges",
"(",
"accept_ranges",
")",
"is206",
"=",
"self",
".",
"_process_range_request",
"(",
"environ",
",",
"complete_length",
",",
"accept_ranges",
")",
"if",
"not",
"is206",
"and",
"not",
"is_resource_modified",
"(",
"environ",
",",
"self",
".",
"headers",
".",
"get",
"(",
"\"etag\"",
")",
",",
"None",
",",
"self",
".",
"headers",
".",
"get",
"(",
"\"last-modified\"",
")",
",",
")",
":",
"if",
"parse_etags",
"(",
"environ",
".",
"get",
"(",
"\"HTTP_IF_MATCH\"",
")",
")",
":",
"self",
".",
"status_code",
"=",
"412",
"else",
":",
"self",
".",
"status_code",
"=",
"304",
"if",
"(",
"self",
".",
"automatically_set_content_length",
"and",
"\"content-length\"",
"not",
"in",
"self",
".",
"headers",
")",
":",
"length",
"=",
"self",
".",
"calculate_content_length",
"(",
")",
"if",
"length",
"is",
"not",
"None",
":",
"self",
".",
"headers",
"[",
"\"Content-Length\"",
"]",
"=",
"length",
"return",
"self"
] | [
167,
4
] | [
233,
19
] | python | en | ['en', 'en', 'en'] | True |
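A sketch of make_conditional answering a range request; the 1000-byte payload and the requested byte range are arbitrary illustration values:

from werkzeug.test import create_environ
from werkzeug.wrappers import Response

resp = Response(b'x' * 1000)
resp.add_etag()
environ = create_environ(headers={'Range': 'bytes=0-99'})  # GET by default
resp.make_conditional(environ, accept_ranges=True, complete_length=1000)
print(resp.status_code)                # 206
print(resp.headers['Content-Range'])   # 'bytes 0-99/1000'
print(resp.headers['Content-Length'])  # '100'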
ETagResponseMixin.add_etag | (self, overwrite=False, weak=False) | Add an etag for the current response if there is none yet. | Add an etag for the current response if there is none yet. | def add_etag(self, overwrite=False, weak=False):
"""Add an etag for the current response if there is none yet."""
if overwrite or "etag" not in self.headers:
self.set_etag(generate_etag(self.get_data()), weak) | [
"def",
"add_etag",
"(",
"self",
",",
"overwrite",
"=",
"False",
",",
"weak",
"=",
"False",
")",
":",
"if",
"overwrite",
"or",
"\"etag\"",
"not",
"in",
"self",
".",
"headers",
":",
"self",
".",
"set_etag",
"(",
"generate_etag",
"(",
"self",
".",
"get_data",
"(",
")",
")",
",",
"weak",
")"
] | [
235,
4
] | [
238,
63
] | python | en | ['en', 'en', 'en'] | True |
ETagResponseMixin.set_etag | (self, etag, weak=False) | Set the etag, and override the old one if there was one. | Set the etag, and override the old one if there was one. | def set_etag(self, etag, weak=False):
"""Set the etag, and override the old one if there was one."""
self.headers["ETag"] = quote_etag(etag, weak) | [
"def",
"set_etag",
"(",
"self",
",",
"etag",
",",
"weak",
"=",
"False",
")",
":",
"self",
".",
"headers",
"[",
"\"ETag\"",
"]",
"=",
"quote_etag",
"(",
"etag",
",",
"weak",
")"
] | [
240,
4
] | [
242,
53
] | python | en | ['en', 'en', 'en'] | True |
ETagResponseMixin.get_etag | (self) | Return a tuple in the form ``(etag, is_weak)``. If there is no
ETag the return value is ``(None, None)``.
| Return a tuple in the form ``(etag, is_weak)``. If there is no
ETag the return value is ``(None, None)``.
| def get_etag(self):
"""Return a tuple in the form ``(etag, is_weak)``. If there is no
ETag the return value is ``(None, None)``.
"""
return unquote_etag(self.headers.get("ETag")) | [
"def",
"get_etag",
"(",
"self",
")",
":",
"return",
"unquote_etag",
"(",
"self",
".",
"headers",
".",
"get",
"(",
"\"ETag\"",
")",
")"
] | [
244,
4
] | [
248,
53
] | python | en | ['en', 'en', 'en'] | True |
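The three etag helpers above (add_etag, set_etag, get_etag) compose into a simple round trip; a sketch on a plain Werkzeug Response:

from werkzeug.wrappers import Response

resp = Response('payload')
resp.add_etag()              # derives an etag from the body, since none is set
etag, weak = resp.get_etag()
print(weak)                  # False
resp.set_etag(etag, weak=True)
print(resp.headers['ETag'])  # now the weak form, W/"..."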
ETagResponseMixin.freeze | (self, no_etag=False) | Call this method if you want to make your response object ready for
pickling. This buffers the generator if there is one. This also
sets the etag unless `no_etag` is set to `True`.
| Call this method if you want to make your response object ready for
pickling. This buffers the generator if there is one. This also
sets the etag unless `no_etag` is set to `True`.
| def freeze(self, no_etag=False):
"""Call this method if you want to make your response object ready for
pickling. This buffers the generator if there is one. This also
sets the etag unless `no_etag` is set to `True`.
"""
if not no_etag:
self.add_etag()
super(ETagResponseMixin, self).freeze() | [
"def",
"freeze",
"(",
"self",
",",
"no_etag",
"=",
"False",
")",
":",
"if",
"not",
"no_etag",
":",
"self",
".",
"add_etag",
"(",
")",
"super",
"(",
"ETagResponseMixin",
",",
"self",
")",
".",
"freeze",
"(",
")"
] | [
250,
4
] | [
257,
47
] | python | en | ['en', 'en', 'en'] | True |
extract_from_ast | (node, gettext_functions=GETTEXT_FUNCTIONS,
babel_style=True) | Extract localizable strings from the given template node. By
default this function returns matches in Babel style, meaning non-string
parameters as well as keyword arguments are returned as `None`. This
allows Babel to figure out what you really meant if you are using
gettext functions that allow keyword arguments for placeholder expansion.
If you don't want that behavior set the `babel_style` parameter to `False`
which causes only strings to be returned and parameters are always stored
in tuples. As a consequence invalid gettext calls (calls without a single
string parameter or string parameters after non-string parameters) are
skipped.
This example explains the behavior:
>>> from jinja2 import Environment
>>> env = Environment()
>>> node = env.parse('{{ (_("foo"), _(), ngettext("foo", "bar", 42)) }}')
>>> list(extract_from_ast(node))
[(1, '_', 'foo'), (1, '_', ()), (1, 'ngettext', ('foo', 'bar', None))]
>>> list(extract_from_ast(node, babel_style=False))
[(1, '_', ('foo',)), (1, 'ngettext', ('foo', 'bar'))]
For every string found this function yields a ``(lineno, function,
message)`` tuple, where:
* ``lineno`` is the number of the line on which the string was found,
* ``function`` is the name of the ``gettext`` function used (if the
string was extracted from embedded Python code), and
* ``message`` is the string itself (a ``unicode`` object, or a tuple
of ``unicode`` objects for functions with multiple string arguments).
This extraction function operates on the AST and is therefore unable
to extract any comments. For comment support you have to use the babel
extraction interface or extract comments yourself.
| Extract localizable strings from the given template node. By
default this function returns matches in Babel style, meaning non-string
parameters as well as keyword arguments are returned as `None`. This
allows Babel to figure out what you really meant if you are using
gettext functions that allow keyword arguments for placeholder expansion.
If you don't want that behavior set the `babel_style` parameter to `False`
which causes only strings to be returned and parameters are always stored
in tuples. As a consequence invalid gettext calls (calls without a single
string parameter or string parameters after non-string parameters) are
skipped. | def extract_from_ast(node, gettext_functions=GETTEXT_FUNCTIONS,
babel_style=True):
"""Extract localizable strings from the given template node. Per
default this function returns matches in babel style that means non string
parameters as well as keyword arguments are returned as `None`. This
allows Babel to figure out what you really meant if you are using
gettext functions that allow keyword arguments for placeholder expansion.
If you don't want that behavior set the `babel_style` parameter to `False`
which causes only strings to be returned and parameters are always stored
in tuples. As a consequence invalid gettext calls (calls without a single
string parameter or string parameters after non-string parameters) are
skipped.
This example explains the behavior:
>>> from jinja2 import Environment
>>> env = Environment()
>>> node = env.parse('{{ (_("foo"), _(), ngettext("foo", "bar", 42)) }}')
>>> list(extract_from_ast(node))
[(1, '_', 'foo'), (1, '_', ()), (1, 'ngettext', ('foo', 'bar', None))]
>>> list(extract_from_ast(node, babel_style=False))
[(1, '_', ('foo',)), (1, 'ngettext', ('foo', 'bar'))]
For every string found this function yields a ``(lineno, function,
message)`` tuple, where:
* ``lineno`` is the number of the line on which the string was found,
* ``function`` is the name of the ``gettext`` function used (if the
string was extracted from embedded Python code), and
* ``message`` is the string itself (a ``unicode`` object, or a tuple
of ``unicode`` objects for functions with multiple string arguments).
This extraction function operates on the AST and is therefore unable
to extract any comments. For comment support you have to use the babel
extraction interface or extract comments yourself.
"""
for node in node.find_all(nodes.Call):
if not isinstance(node.node, nodes.Name) or \
node.node.name not in gettext_functions:
continue
strings = []
for arg in node.args:
if isinstance(arg, nodes.Const) and \
isinstance(arg.value, string_types):
strings.append(arg.value)
else:
strings.append(None)
for arg in node.kwargs:
strings.append(None)
if node.dyn_args is not None:
strings.append(None)
if node.dyn_kwargs is not None:
strings.append(None)
if not babel_style:
strings = tuple(x for x in strings if x is not None)
if not strings:
continue
else:
if len(strings) == 1:
strings = strings[0]
else:
strings = tuple(strings)
yield node.lineno, node.node.name, strings | [
"def",
"extract_from_ast",
"(",
"node",
",",
"gettext_functions",
"=",
"GETTEXT_FUNCTIONS",
",",
"babel_style",
"=",
"True",
")",
":",
"for",
"node",
"in",
"node",
".",
"find_all",
"(",
"nodes",
".",
"Call",
")",
":",
"if",
"not",
"isinstance",
"(",
"node",
".",
"node",
",",
"nodes",
".",
"Name",
")",
"or",
"node",
".",
"node",
".",
"name",
"not",
"in",
"gettext_functions",
":",
"continue",
"strings",
"=",
"[",
"]",
"for",
"arg",
"in",
"node",
".",
"args",
":",
"if",
"isinstance",
"(",
"arg",
",",
"nodes",
".",
"Const",
")",
"and",
"isinstance",
"(",
"arg",
".",
"value",
",",
"string_types",
")",
":",
"strings",
".",
"append",
"(",
"arg",
".",
"value",
")",
"else",
":",
"strings",
".",
"append",
"(",
"None",
")",
"for",
"arg",
"in",
"node",
".",
"kwargs",
":",
"strings",
".",
"append",
"(",
"None",
")",
"if",
"node",
".",
"dyn_args",
"is",
"not",
"None",
":",
"strings",
".",
"append",
"(",
"None",
")",
"if",
"node",
".",
"dyn_kwargs",
"is",
"not",
"None",
":",
"strings",
".",
"append",
"(",
"None",
")",
"if",
"not",
"babel_style",
":",
"strings",
"=",
"tuple",
"(",
"x",
"for",
"x",
"in",
"strings",
"if",
"x",
"is",
"not",
"None",
")",
"if",
"not",
"strings",
":",
"continue",
"else",
":",
"if",
"len",
"(",
"strings",
")",
"==",
"1",
":",
"strings",
"=",
"strings",
"[",
"0",
"]",
"else",
":",
"strings",
"=",
"tuple",
"(",
"strings",
")",
"yield",
"node",
".",
"lineno",
",",
"node",
".",
"node",
".",
"name",
",",
"strings"
] | [
436,
0
] | [
501,
50
] | python | en | ['en', 'en', 'en'] | True |
babel_extract | (fileobj, keywords, comment_tags, options) | Babel extraction method for Jinja templates.
.. versionchanged:: 2.3
Basic support for translation comments was added. If `comment_tags`
is now set to a list of keywords for extraction, the extractor will
try to find the best preceding comment that begins with one of the
keywords. For best results, make sure to not have more than one
gettext call in one line of code and the matching comment in the
same line or the line before.
.. versionchanged:: 2.5.1
The `newstyle_gettext` flag can be set to `True` to enable newstyle
gettext calls.
.. versionchanged:: 2.7
A `silent` option can now be provided. If set to `False` template
syntax errors are propagated instead of being ignored.
:param fileobj: the file-like object the messages should be extracted from
:param keywords: a list of keywords (i.e. function names) that should be
recognized as translation functions
:param comment_tags: a list of translator tags to search for and include
in the results.
:param options: a dictionary of additional options (optional)
:return: an iterator over ``(lineno, funcname, message, comments)`` tuples.
(comments will be empty currently)
| Babel extraction method for Jinja templates. | def babel_extract(fileobj, keywords, comment_tags, options):
"""Babel extraction method for Jinja templates.
.. versionchanged:: 2.3
Basic support for translation comments was added. If `comment_tags`
is now set to a list of keywords for extraction, the extractor will
try to find the best preceding comment that begins with one of the
keywords. For best results, make sure to not have more than one
gettext call in one line of code and the matching comment in the
same line or the line before.
.. versionchanged:: 2.5.1
The `newstyle_gettext` flag can be set to `True` to enable newstyle
gettext calls.
.. versionchanged:: 2.7
A `silent` option can now be provided. If set to `False` template
syntax errors are propagated instead of being ignored.
:param fileobj: the file-like object the messages should be extracted from
:param keywords: a list of keywords (i.e. function names) that should be
recognized as translation functions
:param comment_tags: a list of translator tags to search for and include
in the results.
:param options: a dictionary of additional options (optional)
:return: an iterator over ``(lineno, funcname, message, comments)`` tuples.
(comments will be empty currently)
"""
extensions = set()
for extension in options.get('extensions', '').split(','):
extension = extension.strip()
if not extension:
continue
extensions.add(import_string(extension))
if InternationalizationExtension not in extensions:
extensions.add(InternationalizationExtension)
def getbool(options, key, default=False):
return options.get(key, str(default)).lower() in \
('1', 'on', 'yes', 'true')
silent = getbool(options, 'silent', True)
environment = Environment(
options.get('block_start_string', BLOCK_START_STRING),
options.get('block_end_string', BLOCK_END_STRING),
options.get('variable_start_string', VARIABLE_START_STRING),
options.get('variable_end_string', VARIABLE_END_STRING),
options.get('comment_start_string', COMMENT_START_STRING),
options.get('comment_end_string', COMMENT_END_STRING),
options.get('line_statement_prefix') or LINE_STATEMENT_PREFIX,
options.get('line_comment_prefix') or LINE_COMMENT_PREFIX,
getbool(options, 'trim_blocks', TRIM_BLOCKS),
getbool(options, 'lstrip_blocks', LSTRIP_BLOCKS),
NEWLINE_SEQUENCE,
getbool(options, 'keep_trailing_newline', KEEP_TRAILING_NEWLINE),
frozenset(extensions),
cache_size=0,
auto_reload=False
)
if getbool(options, 'trimmed'):
environment.policies['ext.i18n.trimmed'] = True
if getbool(options, 'newstyle_gettext'):
environment.newstyle_gettext = True
source = fileobj.read().decode(options.get('encoding', 'utf-8'))
try:
node = environment.parse(source)
tokens = list(environment.lex(environment.preprocess(source)))
except TemplateSyntaxError as e:
if not silent:
raise
# skip templates with syntax errors
return
finder = _CommentFinder(tokens, comment_tags)
for lineno, func, message in extract_from_ast(node, keywords):
yield lineno, func, message, finder.find_comments(lineno) | [
"def",
"babel_extract",
"(",
"fileobj",
",",
"keywords",
",",
"comment_tags",
",",
"options",
")",
":",
"extensions",
"=",
"set",
"(",
")",
"for",
"extension",
"in",
"options",
".",
"get",
"(",
"'extensions'",
",",
"''",
")",
".",
"split",
"(",
"','",
")",
":",
"extension",
"=",
"extension",
".",
"strip",
"(",
")",
"if",
"not",
"extension",
":",
"continue",
"extensions",
".",
"add",
"(",
"import_string",
"(",
"extension",
")",
")",
"if",
"InternationalizationExtension",
"not",
"in",
"extensions",
":",
"extensions",
".",
"add",
"(",
"InternationalizationExtension",
")",
"def",
"getbool",
"(",
"options",
",",
"key",
",",
"default",
"=",
"False",
")",
":",
"return",
"options",
".",
"get",
"(",
"key",
",",
"str",
"(",
"default",
")",
")",
".",
"lower",
"(",
")",
"in",
"(",
"'1'",
",",
"'on'",
",",
"'yes'",
",",
"'true'",
")",
"silent",
"=",
"getbool",
"(",
"options",
",",
"'silent'",
",",
"True",
")",
"environment",
"=",
"Environment",
"(",
"options",
".",
"get",
"(",
"'block_start_string'",
",",
"BLOCK_START_STRING",
")",
",",
"options",
".",
"get",
"(",
"'block_end_string'",
",",
"BLOCK_END_STRING",
")",
",",
"options",
".",
"get",
"(",
"'variable_start_string'",
",",
"VARIABLE_START_STRING",
")",
",",
"options",
".",
"get",
"(",
"'variable_end_string'",
",",
"VARIABLE_END_STRING",
")",
",",
"options",
".",
"get",
"(",
"'comment_start_string'",
",",
"COMMENT_START_STRING",
")",
",",
"options",
".",
"get",
"(",
"'comment_end_string'",
",",
"COMMENT_END_STRING",
")",
",",
"options",
".",
"get",
"(",
"'line_statement_prefix'",
")",
"or",
"LINE_STATEMENT_PREFIX",
",",
"options",
".",
"get",
"(",
"'line_comment_prefix'",
")",
"or",
"LINE_COMMENT_PREFIX",
",",
"getbool",
"(",
"options",
",",
"'trim_blocks'",
",",
"TRIM_BLOCKS",
")",
",",
"getbool",
"(",
"options",
",",
"'lstrip_blocks'",
",",
"LSTRIP_BLOCKS",
")",
",",
"NEWLINE_SEQUENCE",
",",
"getbool",
"(",
"options",
",",
"'keep_trailing_newline'",
",",
"KEEP_TRAILING_NEWLINE",
")",
",",
"frozenset",
"(",
"extensions",
")",
",",
"cache_size",
"=",
"0",
",",
"auto_reload",
"=",
"False",
")",
"if",
"getbool",
"(",
"options",
",",
"'trimmed'",
")",
":",
"environment",
".",
"policies",
"[",
"'ext.i18n.trimmed'",
"]",
"=",
"True",
"if",
"getbool",
"(",
"options",
",",
"'newstyle_gettext'",
")",
":",
"environment",
".",
"newstyle_gettext",
"=",
"True",
"source",
"=",
"fileobj",
".",
"read",
"(",
")",
".",
"decode",
"(",
"options",
".",
"get",
"(",
"'encoding'",
",",
"'utf-8'",
")",
")",
"try",
":",
"node",
"=",
"environment",
".",
"parse",
"(",
"source",
")",
"tokens",
"=",
"list",
"(",
"environment",
".",
"lex",
"(",
"environment",
".",
"preprocess",
"(",
"source",
")",
")",
")",
"except",
"TemplateSyntaxError",
"as",
"e",
":",
"if",
"not",
"silent",
":",
"raise",
"# skip templates with syntax errors",
"return",
"finder",
"=",
"_CommentFinder",
"(",
"tokens",
",",
"comment_tags",
")",
"for",
"lineno",
",",
"func",
",",
"message",
"in",
"extract_from_ast",
"(",
"node",
",",
"keywords",
")",
":",
"yield",
"lineno",
",",
"func",
",",
"message",
",",
"finder",
".",
"find_comments",
"(",
"lineno",
")"
] | [
541,
0
] | [
618,
65
] | python | en | ['nb', 'en', 'en'] | True |
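A direct-invocation sketch of the extractor above; in normal use Babel calls it through its extraction-method mapping. The template string is invented, and the import path assumes this module is jinja2.ext:

from io import BytesIO
from jinja2.ext import babel_extract

source = BytesIO(b'<p>{% trans %}Hello {{ user }}!{% endtrans %}</p>')
for lineno, funcname, message, comments in babel_extract(
        source, ('gettext', 'ngettext', '_'), [], {}):
    print(lineno, funcname, message, comments)
# -> 1 gettext Hello %(user)s! []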
Extension.bind | (self, environment) | Create a copy of this extension bound to another environment. | Create a copy of this extension bound to another environment. | def bind(self, environment):
"""Create a copy of this extension bound to another environment."""
rv = object.__new__(self.__class__)
rv.__dict__.update(self.__dict__)
rv.environment = environment
return rv | [
"def",
"bind",
"(",
"self",
",",
"environment",
")",
":",
"rv",
"=",
"object",
".",
"__new__",
"(",
"self",
".",
"__class__",
")",
"rv",
".",
"__dict__",
".",
"update",
"(",
"self",
".",
"__dict__",
")",
"rv",
".",
"environment",
"=",
"environment",
"return",
"rv"
] | [
74,
4
] | [
79,
17
] | python | en | ['en', 'en', 'en'] | True |
Extension.preprocess | (self, source, name, filename=None) | This method is called before the actual lexing and can be used to
preprocess the source. The `filename` is optional. The return value
must be the preprocessed source.
| This method is called before the actual lexing and can be used to
preprocess the source. The `filename` is optional. The return value
must be the preprocessed source.
| def preprocess(self, source, name, filename=None):
"""This method is called before the actual lexing and can be used to
preprocess the source. The `filename` is optional. The return value
must be the preprocessed source.
"""
return source | [
"def",
"preprocess",
"(",
"self",
",",
"source",
",",
"name",
",",
"filename",
"=",
"None",
")",
":",
"return",
"source"
] | [
81,
4
] | [
86,
21
] | python | en | ['en', 'en', 'en'] | True |
Extension.filter_stream | (self, stream) | It's passed a :class:`~jinja2.lexer.TokenStream` that can be used
to filter tokens returned. This method has to return an iterable of
:class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
:class:`~jinja2.lexer.TokenStream`.
In the `ext` folder of the Jinja2 source distribution there is a file
called `inlinegettext.py` which implements a filter that utilizes this
method.
| It's passed a :class:`~jinja2.lexer.TokenStream` that can be used
to filter tokens returned. This method has to return an iterable of
:class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
:class:`~jinja2.lexer.TokenStream`. | def filter_stream(self, stream):
"""It's passed a :class:`~jinja2.lexer.TokenStream` that can be used
to filter tokens returned. This method has to return an iterable of
:class:`~jinja2.lexer.Token`\\s, but it doesn't have to return a
:class:`~jinja2.lexer.TokenStream`.
In the `ext` folder of the Jinja2 source distribution there is a file
called `inlinegettext.py` which implements a filter that utilizes this
method.
"""
return stream | [
"def",
"filter_stream",
"(",
"self",
",",
"stream",
")",
":",
"return",
"stream"
] | [
88,
4
] | [
98,
21
] | python | en | ['en', 'en', 'en'] | True |
Extension.parse | (self, parser) | If any of the :attr:`tags` matched this method is called with the
parser as first argument. The token the parser stream is pointing at
is the name token that matched. This method has to return one or a
list of multiple nodes.
| If any of the :attr:`tags` matched this method is called with the
parser as first argument. The token the parser stream is pointing at
is the name token that matched. This method has to return one or a
list of multiple nodes.
| def parse(self, parser):
"""If any of the :attr:`tags` matched this method is called with the
parser as first argument. The token the parser stream is pointing at
is the name token that matched. This method has to return one or a
list of multiple nodes.
"""
raise NotImplementedError() | [
"def",
"parse",
"(",
"self",
",",
"parser",
")",
":",
"raise",
"NotImplementedError",
"(",
")"
] | [
100,
4
] | [
106,
35
] | python | en | ['en', 'en', 'en'] | True |
Extension.attr | (self, name, lineno=None) | Return an attribute node for the current extension. This is useful
to pass constants on extensions to generated template code.
::
self.attr('_my_attribute', lineno=lineno)
| Return an attribute node for the current extension. This is useful
to pass constants on extensions to generated template code. | def attr(self, name, lineno=None):
"""Return an attribute node for the current extension. This is useful
to pass constants on extensions to generated template code.
::
self.attr('_my_attribute', lineno=lineno)
"""
return nodes.ExtensionAttribute(self.identifier, name, lineno=lineno) | [
"def",
"attr",
"(",
"self",
",",
"name",
",",
"lineno",
"=",
"None",
")",
":",
"return",
"nodes",
".",
"ExtensionAttribute",
"(",
"self",
".",
"identifier",
",",
"name",
",",
"lineno",
"=",
"lineno",
")"
] | [
108,
4
] | [
116,
77
] | python | en | ['en', 'en', 'en'] | True |
Extension.call_method | (self, name, args=None, kwargs=None, dyn_args=None,
dyn_kwargs=None, lineno=None) | Call a method of the extension. This is a shortcut for
:meth:`attr` + :class:`jinja2.nodes.Call`.
| Call a method of the extension. This is a shortcut for
:meth:`attr` + :class:`jinja2.nodes.Call`.
| def call_method(self, name, args=None, kwargs=None, dyn_args=None,
dyn_kwargs=None, lineno=None):
"""Call a method of the extension. This is a shortcut for
:meth:`attr` + :class:`jinja2.nodes.Call`.
"""
if args is None:
args = []
if kwargs is None:
kwargs = []
return nodes.Call(self.attr(name, lineno=lineno), args, kwargs,
dyn_args, dyn_kwargs, lineno=lineno) | [
"def",
"call_method",
"(",
"self",
",",
"name",
",",
"args",
"=",
"None",
",",
"kwargs",
"=",
"None",
",",
"dyn_args",
"=",
"None",
",",
"dyn_kwargs",
"=",
"None",
",",
"lineno",
"=",
"None",
")",
":",
"if",
"args",
"is",
"None",
":",
"args",
"=",
"[",
"]",
"if",
"kwargs",
"is",
"None",
":",
"kwargs",
"=",
"[",
"]",
"return",
"nodes",
".",
"Call",
"(",
"self",
".",
"attr",
"(",
"name",
",",
"lineno",
"=",
"lineno",
")",
",",
"args",
",",
"kwargs",
",",
"dyn_args",
",",
"dyn_kwargs",
",",
"lineno",
"=",
"lineno",
")"
] | [
118,
4
] | [
128,
62
] | python | en | ['en', 'en', 'en'] | True |
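Tying the Extension hooks above together: a hypothetical {% cache %}-style tag, with the tag name and behavior invented for illustration. It shows tags, parse, call_method and a CallBlock cooperating:

from jinja2 import Environment, nodes
from jinja2.ext import Extension

class CacheExtension(Extension):
    tags = set(['cache'])

    def parse(self, parser):
        lineno = next(parser.stream).lineno  # consume the 'cache' name token
        key = parser.parse_expression()      # the cache key expression
        body = parser.parse_statements(['name:endcache'], drop_needle=True)
        # call_method builds a Call node that dispatches back to this instance
        return nodes.CallBlock(self.call_method('_render', [key]),
                               [], [], body).set_lineno(lineno)

    def _render(self, key, caller):
        # a real implementation would consult a cache keyed by `key`
        return caller()

env = Environment(extensions=[CacheExtension])
print(env.from_string('{% cache "home" %}expensive{% endcache %}').render())
# -> expensive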
InternationalizationExtension.parse | (self, parser) | Parse a translatable tag. | Parse a translatable tag. | def parse(self, parser):
"""Parse a translatable tag."""
lineno = next(parser.stream).lineno
num_called_num = False
# find all the variables referenced. Additionally a variable can be
# defined in the body of the trans block too, but this is checked at
# a later state.
plural_expr = None
plural_expr_assignment = None
variables = {}
trimmed = None
while parser.stream.current.type != 'block_end':
if variables:
parser.stream.expect('comma')
# skip colon for python compatibility
if parser.stream.skip_if('colon'):
break
name = parser.stream.expect('name')
if name.value in variables:
parser.fail('translatable variable %r defined twice.' %
name.value, name.lineno,
exc=TemplateAssertionError)
# expressions
if parser.stream.current.type == 'assign':
next(parser.stream)
variables[name.value] = var = parser.parse_expression()
elif trimmed is None and name.value in ('trimmed', 'notrimmed'):
trimmed = name.value == 'trimmed'
continue
else:
variables[name.value] = var = nodes.Name(name.value, 'load')
if plural_expr is None:
if isinstance(var, nodes.Call):
plural_expr = nodes.Name('_trans', 'load')
variables[name.value] = plural_expr
plural_expr_assignment = nodes.Assign(
nodes.Name('_trans', 'store'), var)
else:
plural_expr = var
num_called_num = name.value == 'num'
parser.stream.expect('block_end')
plural = None
have_plural = False
referenced = set()
# now parse until endtrans or pluralize
singular_names, singular = self._parse_block(parser, True)
if singular_names:
referenced.update(singular_names)
if plural_expr is None:
plural_expr = nodes.Name(singular_names[0], 'load')
num_called_num = singular_names[0] == 'num'
# if we have a pluralize block, we parse that too
if parser.stream.current.test('name:pluralize'):
have_plural = True
next(parser.stream)
if parser.stream.current.type != 'block_end':
name = parser.stream.expect('name')
if name.value not in variables:
parser.fail('unknown variable %r for pluralization' %
name.value, name.lineno,
exc=TemplateAssertionError)
plural_expr = variables[name.value]
num_called_num = name.value == 'num'
parser.stream.expect('block_end')
plural_names, plural = self._parse_block(parser, False)
next(parser.stream)
referenced.update(plural_names)
else:
next(parser.stream)
# register free names as simple name expressions
for var in referenced:
if var not in variables:
variables[var] = nodes.Name(var, 'load')
if not have_plural:
plural_expr = None
elif plural_expr is None:
parser.fail('pluralize without variables', lineno)
if trimmed is None:
trimmed = self.environment.policies['ext.i18n.trimmed']
if trimmed:
singular = self._trim_whitespace(singular)
if plural:
plural = self._trim_whitespace(plural)
node = self._make_node(singular, plural, variables, plural_expr,
bool(referenced),
num_called_num and have_plural)
node.set_lineno(lineno)
if plural_expr_assignment is not None:
return [plural_expr_assignment, node]
else:
return node | [
"def",
"parse",
"(",
"self",
",",
"parser",
")",
":",
"lineno",
"=",
"next",
"(",
"parser",
".",
"stream",
")",
".",
"lineno",
"num_called_num",
"=",
"False",
"# find all the variables referenced. Additionally a variable can be",
"# defined in the body of the trans block too, but this is checked at",
"# a later state.",
"plural_expr",
"=",
"None",
"plural_expr_assignment",
"=",
"None",
"variables",
"=",
"{",
"}",
"trimmed",
"=",
"None",
"while",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"!=",
"'block_end'",
":",
"if",
"variables",
":",
"parser",
".",
"stream",
".",
"expect",
"(",
"'comma'",
")",
"# skip colon for python compatibility",
"if",
"parser",
".",
"stream",
".",
"skip_if",
"(",
"'colon'",
")",
":",
"break",
"name",
"=",
"parser",
".",
"stream",
".",
"expect",
"(",
"'name'",
")",
"if",
"name",
".",
"value",
"in",
"variables",
":",
"parser",
".",
"fail",
"(",
"'translatable variable %r defined twice.'",
"%",
"name",
".",
"value",
",",
"name",
".",
"lineno",
",",
"exc",
"=",
"TemplateAssertionError",
")",
"# expressions",
"if",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"==",
"'assign'",
":",
"next",
"(",
"parser",
".",
"stream",
")",
"variables",
"[",
"name",
".",
"value",
"]",
"=",
"var",
"=",
"parser",
".",
"parse_expression",
"(",
")",
"elif",
"trimmed",
"is",
"None",
"and",
"name",
".",
"value",
"in",
"(",
"'trimmed'",
",",
"'notrimmed'",
")",
":",
"trimmed",
"=",
"name",
".",
"value",
"==",
"'trimmed'",
"continue",
"else",
":",
"variables",
"[",
"name",
".",
"value",
"]",
"=",
"var",
"=",
"nodes",
".",
"Name",
"(",
"name",
".",
"value",
",",
"'load'",
")",
"if",
"plural_expr",
"is",
"None",
":",
"if",
"isinstance",
"(",
"var",
",",
"nodes",
".",
"Call",
")",
":",
"plural_expr",
"=",
"nodes",
".",
"Name",
"(",
"'_trans'",
",",
"'load'",
")",
"variables",
"[",
"name",
".",
"value",
"]",
"=",
"plural_expr",
"plural_expr_assignment",
"=",
"nodes",
".",
"Assign",
"(",
"nodes",
".",
"Name",
"(",
"'_trans'",
",",
"'store'",
")",
",",
"var",
")",
"else",
":",
"plural_expr",
"=",
"var",
"num_called_num",
"=",
"name",
".",
"value",
"==",
"'num'",
"parser",
".",
"stream",
".",
"expect",
"(",
"'block_end'",
")",
"plural",
"=",
"None",
"have_plural",
"=",
"False",
"referenced",
"=",
"set",
"(",
")",
"# now parse until endtrans or pluralize",
"singular_names",
",",
"singular",
"=",
"self",
".",
"_parse_block",
"(",
"parser",
",",
"True",
")",
"if",
"singular_names",
":",
"referenced",
".",
"update",
"(",
"singular_names",
")",
"if",
"plural_expr",
"is",
"None",
":",
"plural_expr",
"=",
"nodes",
".",
"Name",
"(",
"singular_names",
"[",
"0",
"]",
",",
"'load'",
")",
"num_called_num",
"=",
"singular_names",
"[",
"0",
"]",
"==",
"'num'",
"# if we have a pluralize block, we parse that too",
"if",
"parser",
".",
"stream",
".",
"current",
".",
"test",
"(",
"'name:pluralize'",
")",
":",
"have_plural",
"=",
"True",
"next",
"(",
"parser",
".",
"stream",
")",
"if",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"!=",
"'block_end'",
":",
"name",
"=",
"parser",
".",
"stream",
".",
"expect",
"(",
"'name'",
")",
"if",
"name",
".",
"value",
"not",
"in",
"variables",
":",
"parser",
".",
"fail",
"(",
"'unknown variable %r for pluralization'",
"%",
"name",
".",
"value",
",",
"name",
".",
"lineno",
",",
"exc",
"=",
"TemplateAssertionError",
")",
"plural_expr",
"=",
"variables",
"[",
"name",
".",
"value",
"]",
"num_called_num",
"=",
"name",
".",
"value",
"==",
"'num'",
"parser",
".",
"stream",
".",
"expect",
"(",
"'block_end'",
")",
"plural_names",
",",
"plural",
"=",
"self",
".",
"_parse_block",
"(",
"parser",
",",
"False",
")",
"next",
"(",
"parser",
".",
"stream",
")",
"referenced",
".",
"update",
"(",
"plural_names",
")",
"else",
":",
"next",
"(",
"parser",
".",
"stream",
")",
"# register free names as simple name expressions",
"for",
"var",
"in",
"referenced",
":",
"if",
"var",
"not",
"in",
"variables",
":",
"variables",
"[",
"var",
"]",
"=",
"nodes",
".",
"Name",
"(",
"var",
",",
"'load'",
")",
"if",
"not",
"have_plural",
":",
"plural_expr",
"=",
"None",
"elif",
"plural_expr",
"is",
"None",
":",
"parser",
".",
"fail",
"(",
"'pluralize without variables'",
",",
"lineno",
")",
"if",
"trimmed",
"is",
"None",
":",
"trimmed",
"=",
"self",
".",
"environment",
".",
"policies",
"[",
"'ext.i18n.trimmed'",
"]",
"if",
"trimmed",
":",
"singular",
"=",
"self",
".",
"_trim_whitespace",
"(",
"singular",
")",
"if",
"plural",
":",
"plural",
"=",
"self",
".",
"_trim_whitespace",
"(",
"plural",
")",
"node",
"=",
"self",
".",
"_make_node",
"(",
"singular",
",",
"plural",
",",
"variables",
",",
"plural_expr",
",",
"bool",
"(",
"referenced",
")",
",",
"num_called_num",
"and",
"have_plural",
")",
"node",
".",
"set_lineno",
"(",
"lineno",
")",
"if",
"plural_expr_assignment",
"is",
"not",
"None",
":",
"return",
"[",
"plural_expr_assignment",
",",
"node",
"]",
"else",
":",
"return",
"node"
] | [
216,
4
] | [
319,
23
] | python | en | ['en', 'en', 'it'] | True |
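What the parser above accepts, seen from the template side; a sketch that installs null translations so it runs without a message catalog:

from jinja2 import Environment

env = Environment(extensions=['jinja2.ext.i18n'])
env.install_null_translations(newstyle=True)
tmpl = env.from_string(
    '{% trans count=n %}{{ count }} item'
    '{% pluralize %}{{ count }} items{% endtrans %}')
print(tmpl.render(n=1))  # 1 item
print(tmpl.render(n=3))  # 3 items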
InternationalizationExtension._parse_block | (self, parser, allow_pluralize) | Parse until the next block tag with a given name. | Parse until the next block tag with a given name. | def _parse_block(self, parser, allow_pluralize):
"""Parse until the next block tag with a given name."""
referenced = []
buf = []
while 1:
if parser.stream.current.type == 'data':
buf.append(parser.stream.current.value.replace('%', '%%'))
next(parser.stream)
elif parser.stream.current.type == 'variable_begin':
next(parser.stream)
name = parser.stream.expect('name').value
referenced.append(name)
buf.append('%%(%s)s' % name)
parser.stream.expect('variable_end')
elif parser.stream.current.type == 'block_begin':
next(parser.stream)
if parser.stream.current.test('name:endtrans'):
break
elif parser.stream.current.test('name:pluralize'):
if allow_pluralize:
break
parser.fail('a translatable section can have only one '
'pluralize section')
parser.fail('control structures in translatable sections are '
'not allowed')
elif parser.stream.eos:
parser.fail('unclosed translation block')
else:
assert False, 'internal parser error'
return referenced, concat(buf) | [
"def",
"_parse_block",
"(",
"self",
",",
"parser",
",",
"allow_pluralize",
")",
":",
"referenced",
"=",
"[",
"]",
"buf",
"=",
"[",
"]",
"while",
"1",
":",
"if",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"==",
"'data'",
":",
"buf",
".",
"append",
"(",
"parser",
".",
"stream",
".",
"current",
".",
"value",
".",
"replace",
"(",
"'%'",
",",
"'%%'",
")",
")",
"next",
"(",
"parser",
".",
"stream",
")",
"elif",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"==",
"'variable_begin'",
":",
"next",
"(",
"parser",
".",
"stream",
")",
"name",
"=",
"parser",
".",
"stream",
".",
"expect",
"(",
"'name'",
")",
".",
"value",
"referenced",
".",
"append",
"(",
"name",
")",
"buf",
".",
"append",
"(",
"'%%(%s)s'",
"%",
"name",
")",
"parser",
".",
"stream",
".",
"expect",
"(",
"'variable_end'",
")",
"elif",
"parser",
".",
"stream",
".",
"current",
".",
"type",
"==",
"'block_begin'",
":",
"next",
"(",
"parser",
".",
"stream",
")",
"if",
"parser",
".",
"stream",
".",
"current",
".",
"test",
"(",
"'name:endtrans'",
")",
":",
"break",
"elif",
"parser",
".",
"stream",
".",
"current",
".",
"test",
"(",
"'name:pluralize'",
")",
":",
"if",
"allow_pluralize",
":",
"break",
"parser",
".",
"fail",
"(",
"'a translatable section can have only one '",
"'pluralize section'",
")",
"parser",
".",
"fail",
"(",
"'control structures in translatable sections are '",
"'not allowed'",
")",
"elif",
"parser",
".",
"stream",
".",
"eos",
":",
"parser",
".",
"fail",
"(",
"'unclosed translation block'",
")",
"else",
":",
"assert",
"False",
",",
"'internal parser error'",
"return",
"referenced",
",",
"concat",
"(",
"buf",
")"
] | [
324,
4
] | [
354,
38
] | python | en | ['en', 'en', 'en'] | True |
InternationalizationExtension._make_node | (self, singular, plural, variables, plural_expr,
vars_referenced, num_called_num) | Generates a useful node from the data provided. | Generates a useful node from the data provided. | def _make_node(self, singular, plural, variables, plural_expr,
vars_referenced, num_called_num):
"""Generates a useful node from the data provided."""
# no variables referenced? no need to escape for old style
# gettext invocations only if there are vars.
if not vars_referenced and not self.environment.newstyle_gettext:
singular = singular.replace('%%', '%')
if plural:
plural = plural.replace('%%', '%')
# singular only:
if plural_expr is None:
gettext = nodes.Name('gettext', 'load')
node = nodes.Call(gettext, [nodes.Const(singular)],
[], None, None)
# singular and plural
else:
ngettext = nodes.Name('ngettext', 'load')
node = nodes.Call(ngettext, [
nodes.Const(singular),
nodes.Const(plural),
plural_expr
], [], None, None)
# in case newstyle gettext is used, the method is powerful
# enough to handle the variable expansion and autoescape
# handling itself
if self.environment.newstyle_gettext:
for key, value in iteritems(variables):
# the function adds that later anyway in case num was
# called num, so just skip it.
if num_called_num and key == 'num':
continue
node.kwargs.append(nodes.Keyword(key, value))
# otherwise do that here
else:
# mark the return value as safe if we are in an
# environment with autoescaping turned on
node = nodes.MarkSafeIfAutoescape(node)
if variables:
node = nodes.Mod(node, nodes.Dict([
nodes.Pair(nodes.Const(key), value)
for key, value in variables.items()
]))
return nodes.Output([node]) | [
"def",
"_make_node",
"(",
"self",
",",
"singular",
",",
"plural",
",",
"variables",
",",
"plural_expr",
",",
"vars_referenced",
",",
"num_called_num",
")",
":",
"# no variables referenced? no need to escape for old style",
"# gettext invocations only if there are vars.",
"if",
"not",
"vars_referenced",
"and",
"not",
"self",
".",
"environment",
".",
"newstyle_gettext",
":",
"singular",
"=",
"singular",
".",
"replace",
"(",
"'%%'",
",",
"'%'",
")",
"if",
"plural",
":",
"plural",
"=",
"plural",
".",
"replace",
"(",
"'%%'",
",",
"'%'",
")",
"# singular only:",
"if",
"plural_expr",
"is",
"None",
":",
"gettext",
"=",
"nodes",
".",
"Name",
"(",
"'gettext'",
",",
"'load'",
")",
"node",
"=",
"nodes",
".",
"Call",
"(",
"gettext",
",",
"[",
"nodes",
".",
"Const",
"(",
"singular",
")",
"]",
",",
"[",
"]",
",",
"None",
",",
"None",
")",
"# singular and plural",
"else",
":",
"ngettext",
"=",
"nodes",
".",
"Name",
"(",
"'ngettext'",
",",
"'load'",
")",
"node",
"=",
"nodes",
".",
"Call",
"(",
"ngettext",
",",
"[",
"nodes",
".",
"Const",
"(",
"singular",
")",
",",
"nodes",
".",
"Const",
"(",
"plural",
")",
",",
"plural_expr",
"]",
",",
"[",
"]",
",",
"None",
",",
"None",
")",
"# in case newstyle gettext is used, the method is powerful",
"# enough to handle the variable expansion and autoescape",
"# handling itself",
"if",
"self",
".",
"environment",
".",
"newstyle_gettext",
":",
"for",
"key",
",",
"value",
"in",
"iteritems",
"(",
"variables",
")",
":",
"# the function adds that later anyways in case num was",
"# called num, so just skip it.",
"if",
"num_called_num",
"and",
"key",
"==",
"'num'",
":",
"continue",
"node",
".",
"kwargs",
".",
"append",
"(",
"nodes",
".",
"Keyword",
"(",
"key",
",",
"value",
")",
")",
"# otherwise do that here",
"else",
":",
"# mark the return value as safe if we are in an",
"# environment with autoescaping turned on",
"node",
"=",
"nodes",
".",
"MarkSafeIfAutoescape",
"(",
"node",
")",
"if",
"variables",
":",
"node",
"=",
"nodes",
".",
"Mod",
"(",
"node",
",",
"nodes",
".",
"Dict",
"(",
"[",
"nodes",
".",
"Pair",
"(",
"nodes",
".",
"Const",
"(",
"key",
")",
",",
"value",
")",
"for",
"key",
",",
"value",
"in",
"variables",
".",
"items",
"(",
")",
"]",
")",
")",
"return",
"nodes",
".",
"Output",
"(",
"[",
"node",
"]",
")"
] | [
356,
4
] | [
402,
35
] | python | en | ['en', 'en', 'en'] | True |
hello_monkey | () | Respond and greet the caller by name. | Respond and greet the caller by name. | def hello_monkey():
"""Respond and greet the caller by name."""
# Try adding your own number to this list!
callers = {
"+14158675308": "Curious George",
"+12349013030": "Boots",
"+12348134522": "Virgil",
}
from_number = request.values.get('From', None)
message = callers[from_number] if from_number in callers else "Monkey"
resp = MessagingResponse()
resp.message("{}, thanks for the message!".format(message))
return str(resp) | [
"def",
"hello_monkey",
"(",
")",
":",
"# Try adding your own number to this list!",
"callers",
"=",
"{",
"\"+14158675308\"",
":",
"\"Curious George\"",
",",
"\"+12349013030\"",
":",
"\"Boots\"",
",",
"\"+12348134522\"",
":",
"\"Virgil\"",
",",
"}",
"from_number",
"=",
"request",
".",
"values",
".",
"get",
"(",
"'From'",
",",
"None",
")",
"message",
"=",
"callers",
"[",
"from_number",
"]",
"if",
"from_number",
"in",
"callers",
"else",
"\"Monkey\"",
"resp",
"=",
"MessagingResponse",
"(",
")",
"resp",
".",
"message",
"(",
"\"{}, thanks for the message!\"",
".",
"format",
"(",
"message",
")",
")",
"return",
"str",
"(",
"resp",
")"
] | [
8,
0
] | [
22,
20
] | python | en | ['en', 'en', 'en'] | True |
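The view above can be exercised with Flask's test client. The /sms route, the POST method, and the availability of hello_monkey with its flask/twilio imports in scope are all assumptions here, since the route registration is not shown:

from flask import Flask

app = Flask(__name__)
app.add_url_rule('/sms', 'sms', hello_monkey, methods=['POST'])  # assumed route

with app.test_client() as client:
    rv = client.post('/sms', data={'From': '+12349013030'})
    print(rv.get_data(as_text=True))  # TwiML <Response> greeting "Boots"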
bdist_wininst.reinitialize_command | (self, command, reinit_subcommands=0) |
Supplement reinitialize_command to work around
http://bugs.python.org/issue20819
|
Supplement reinitialize_command to work around
http://bugs.python.org/issue20819
| def reinitialize_command(self, command, reinit_subcommands=0):
"""
Supplement reinitialize_command to work around
http://bugs.python.org/issue20819
"""
cmd = self.distribution.reinitialize_command(
command, reinit_subcommands)
if command in ('install', 'install_lib'):
cmd.install_lib = None
return cmd | [
"def",
"reinitialize_command",
"(",
"self",
",",
"command",
",",
"reinit_subcommands",
"=",
"0",
")",
":",
"cmd",
"=",
"self",
".",
"distribution",
".",
"reinitialize_command",
"(",
"command",
",",
"reinit_subcommands",
")",
"if",
"command",
"in",
"(",
"'install'",
",",
"'install_lib'",
")",
":",
"cmd",
".",
"install_lib",
"=",
"None",
"return",
"cmd"
] | [
4,
4
] | [
13,
18
] | python | en | ['en', 'error', 'th'] | False |
find_best_app | (module) | Given a module instance this tries to find the best possible
application in the module or raises an exception.
| Given a module instance this tries to find the best possible
application in the module or raises an exception.
| def find_best_app(module):
"""Given a module instance this tries to find the best possible
application in the module or raises an exception.
"""
from . import Flask
# Search for the most common names first.
for attr_name in 'app', 'application':
app = getattr(module, attr_name, None)
if app is not None and isinstance(app, Flask):
return app
# Otherwise find the only object that is a Flask instance.
matches = [v for k, v in iteritems(module.__dict__)
if isinstance(v, Flask)]
if len(matches) == 1:
return matches[0]
raise NoAppException('Failed to find application in module "%s". Are '
'you sure it contains a Flask application? Maybe '
'you wrapped it in a WSGI middleware or you are '
'using a factory function.' % module.__name__) | [
"def",
"find_best_app",
"(",
"module",
")",
":",
"from",
".",
"import",
"Flask",
"# Search for the most common names first.",
"for",
"attr_name",
"in",
"'app'",
",",
"'application'",
":",
"app",
"=",
"getattr",
"(",
"module",
",",
"attr_name",
",",
"None",
")",
"if",
"app",
"is",
"not",
"None",
"and",
"isinstance",
"(",
"app",
",",
"Flask",
")",
":",
"return",
"app",
"# Otherwise find the only object that is a Flask instance.",
"matches",
"=",
"[",
"v",
"for",
"k",
",",
"v",
"in",
"iteritems",
"(",
"module",
".",
"__dict__",
")",
"if",
"isinstance",
"(",
"v",
",",
"Flask",
")",
"]",
"if",
"len",
"(",
"matches",
")",
"==",
"1",
":",
"return",
"matches",
"[",
"0",
"]",
"raise",
"NoAppException",
"(",
"'Failed to find application in module \"%s\". Are '",
"'you sure it contains a Flask application? Maybe '",
"'you wrapped it in a WSGI middleware or you are '",
"'using a factory function.'",
"%",
"module",
".",
"__name__",
")"
] | [
26,
0
] | [
47,
71
] | python | en | ['en', 'en', 'en'] | True |
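
A hedged demonstration of the locator against throwaway module objects; it assumes Flask is installed and that find_best_app from the record above is in scope. The module names are hypothetical.

import types
from flask import Flask

mod = types.ModuleType("fake_module")
mod.app = Flask(mod.__name__)          # matched first via the 'app' attribute
assert find_best_app(mod) is mod.app

mod2 = types.ModuleType("other_module")
mod2.site = Flask(mod2.__name__)       # sole Flask instance, so any name works
assert find_best_app(mod2) is mod2.site
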
prepare_exec_for_file | (filename) | Given a filename this will try to calculate the python path, add it
to the search path and return the actual module name that is expected.
| Given a filename this will try to calculate the python path, add it
to the search path and return the actual module name that is expected.
| def prepare_exec_for_file(filename):
"""Given a filename this will try to calculate the python path, add it
to the search path and return the actual module name that is expected.
"""
module = []
# Chop off file extensions or package markers
if os.path.split(filename)[1] == '__init__.py':
filename = os.path.dirname(filename)
elif filename.endswith('.py'):
filename = filename[:-3]
else:
raise NoAppException('The file provided (%s) does exist but is not a '
'valid Python file. This means that it cannot '
'be used as an application. Please change the '
'extension to .py' % filename)
filename = os.path.realpath(filename)
dirpath = filename
while 1:
dirpath, extra = os.path.split(dirpath)
module.append(extra)
if not os.path.isfile(os.path.join(dirpath, '__init__.py')):
break
sys.path.insert(0, dirpath)
return '.'.join(module[::-1]) | [
"def",
"prepare_exec_for_file",
"(",
"filename",
")",
":",
"module",
"=",
"[",
"]",
"# Chop off file extensions or package markers",
"if",
"os",
".",
"path",
".",
"split",
"(",
"filename",
")",
"[",
"1",
"]",
"==",
"'__init__.py'",
":",
"filename",
"=",
"os",
".",
"path",
".",
"dirname",
"(",
"filename",
")",
"elif",
"filename",
".",
"endswith",
"(",
"'.py'",
")",
":",
"filename",
"=",
"filename",
"[",
":",
"-",
"3",
"]",
"else",
":",
"raise",
"NoAppException",
"(",
"'The file provided (%s) does exist but is not a '",
"'valid Python file. This means that it cannot '",
"'be used as application. Please change the '",
"'extension to .py'",
"%",
"filename",
")",
"filename",
"=",
"os",
".",
"path",
".",
"realpath",
"(",
"filename",
")",
"dirpath",
"=",
"filename",
"while",
"1",
":",
"dirpath",
",",
"extra",
"=",
"os",
".",
"path",
".",
"split",
"(",
"dirpath",
")",
"module",
".",
"append",
"(",
"extra",
")",
"if",
"not",
"os",
".",
"path",
".",
"isfile",
"(",
"os",
".",
"path",
".",
"join",
"(",
"dirpath",
",",
"'__init__.py'",
")",
")",
":",
"break",
"sys",
".",
"path",
".",
"insert",
"(",
"0",
",",
"dirpath",
")",
"return",
"'.'",
".",
"join",
"(",
"module",
"[",
":",
":",
"-",
"1",
"]",
")"
] | [
50,
0
] | [
76,
33
] | python | en | ['en', 'en', 'en'] | True |
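
An illustrative call under a temporary package layout; it assumes prepare_exec_for_file from the record above is in scope, and the paths are scratch directories.

import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(pkg, "web.py"), "w").close()

# The __init__.py marker makes the upward walk stop at `root`, which is
# then prepended (realpath-resolved) to sys.path.
print(prepare_exec_for_file(os.path.join(pkg, "web.py")))  # -> pkg.web
print(sys.path[0])
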
locate_app | (app_id) | Attempts to locate the application. | Attempts to locate the application. | def locate_app(app_id):
"""Attempts to locate the application."""
__traceback_hide__ = True
if ':' in app_id:
module, app_obj = app_id.split(':', 1)
else:
module = app_id
app_obj = None
try:
__import__(module)
except ImportError:
# Reraise the ImportError if it occurred within the imported module.
# Determine this by checking whether the trace has a depth > 1.
if sys.exc_info()[-1].tb_next:
raise
else:
raise NoAppException('The file/path provided (%s) does not appear'
' to exist. Please verify the path is '
'correct. If app is not on PYTHONPATH, '
'ensure the extension is .py' % module)
mod = sys.modules[module]
if app_obj is None:
app = find_best_app(mod)
else:
app = getattr(mod, app_obj, None)
if app is None:
raise RuntimeError('Failed to find application in module "%s"'
% module)
return app | [
"def",
"locate_app",
"(",
"app_id",
")",
":",
"__traceback_hide__",
"=",
"True",
"if",
"':'",
"in",
"app_id",
":",
"module",
",",
"app_obj",
"=",
"app_id",
".",
"split",
"(",
"':'",
",",
"1",
")",
"else",
":",
"module",
"=",
"app_id",
"app_obj",
"=",
"None",
"try",
":",
"__import__",
"(",
"module",
")",
"except",
"ImportError",
":",
"# Reraise the ImportError if it occurred within the imported module.",
"# Determine this by checking whether the trace has a depth > 1.",
"if",
"sys",
".",
"exc_info",
"(",
")",
"[",
"-",
"1",
"]",
".",
"tb_next",
":",
"raise",
"else",
":",
"raise",
"NoAppException",
"(",
"'The file/path provided (%s) does not appear'",
"' to exist. Please verify the path is '",
"'correct. If app is not on PYTHONPATH, '",
"'ensure the extension is .py'",
"%",
"module",
")",
"mod",
"=",
"sys",
".",
"modules",
"[",
"module",
"]",
"if",
"app_obj",
"is",
"None",
":",
"app",
"=",
"find_best_app",
"(",
"mod",
")",
"else",
":",
"app",
"=",
"getattr",
"(",
"mod",
",",
"app_obj",
",",
"None",
")",
"if",
"app",
"is",
"None",
":",
"raise",
"RuntimeError",
"(",
"'Failed to find application in module \"%s\"'",
"%",
"module",
")",
"return",
"app"
] | [
79,
0
] | [
110,
14
] | python | en | ['en', 'en', 'en'] | True |
with_appcontext | (f) | Wraps a callback so that it's guaranteed to be executed with the
script's application context. If callbacks are registered directly
to the ``app.cli`` object then they are wrapped with this function
by default unless it's disabled.
| Wraps a callback so that it's guaranteed to be executed with the
script's application context. If callbacks are registered directly
to the ``app.cli`` object then they are wrapped with this function
by default unless it's disabled.
| def with_appcontext(f):
"""Wraps a callback so that it's guaranteed to be executed with the
script's application context. If callbacks are registered directly
to the ``app.cli`` object then they are wrapped with this function
by default unless it's disabled.
"""
@click.pass_context
def decorator(__ctx, *args, **kwargs):
with __ctx.ensure_object(ScriptInfo).load_app().app_context():
return __ctx.invoke(f, *args, **kwargs)
return update_wrapper(decorator, f) | [
"def",
"with_appcontext",
"(",
"f",
")",
":",
"@",
"click",
".",
"pass_context",
"def",
"decorator",
"(",
"__ctx",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"with",
"__ctx",
".",
"ensure_object",
"(",
"ScriptInfo",
")",
".",
"load_app",
"(",
")",
".",
"app_context",
"(",
")",
":",
"return",
"__ctx",
".",
"invoke",
"(",
"f",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"return",
"update_wrapper",
"(",
"decorator",
",",
"f",
")"
] | [
247,
0
] | [
257,
39
] | python | en | ['en', 'en', 'en'] | True |
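
A hedged usage sketch of the decorator (Flask and click assumed installed; the command name is illustrative). Note that actually invoking it still requires the flask CLI machinery, since the decorator loads the app through a ScriptInfo object on the click context.

import click
from flask import Flask, current_app
from flask.cli import with_appcontext

app = Flask(__name__)

@click.command("show-name")
@with_appcontext
def show_name():
    # current_app resolves because the decorator pushed an app context
    click.echo(current_app.import_name)
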
run_command | (info, host, port, reload, debugger, eager_loading,
with_threads) | Runs a local development server for the Flask application.
This local server is recommended for development purposes only but it
can also be used for simple intranet deployments. By default it will
not support any sort of concurrency at all to simplify debugging. This
can be changed with the --with-threads option which will enable basic
multithreading.
The reloader and debugger are by default enabled if the debug flag of
Flask is enabled and disabled otherwise.
| Runs a local development server for the Flask application. | def run_command(info, host, port, reload, debugger, eager_loading,
with_threads):
"""Runs a local development server for the Flask application.
This local server is recommended for development purposes only but it
can also be used for simple intranet deployments. By default it will
not support any sort of concurrency at all to simplify debugging. This
can be changed with the --with-threads option which will enable basic
multithreading.
The reloader and debugger are by default enabled if the debug flag of
Flask is enabled and disabled otherwise.
"""
from werkzeug.serving import run_simple
debug = get_debug_flag()
if reload is None:
reload = bool(debug)
if debugger is None:
debugger = bool(debug)
if eager_loading is None:
eager_loading = not reload
app = DispatchingApp(info.load_app, use_eager_loading=eager_loading)
# Extra startup messages. This depends a bit on Werkzeug internals to
# not double execute when the reloader kicks in.
if os.environ.get('WERKZEUG_RUN_MAIN') != 'true':
# If we have an import path we can print it out now which can help
# people understand what's being served. If we do not have an
# import path because the app was loaded through a callback then
# we won't print anything.
if info.app_import_path is not None:
print(' * Serving Flask app "%s"' % info.app_import_path)
if debug is not None:
print(' * Forcing debug mode %s' % (debug and 'on' or 'off'))
run_simple(host, port, app, use_reloader=reload,
use_debugger=debugger, threaded=with_threads) | [
"def",
"run_command",
"(",
"info",
",",
"host",
",",
"port",
",",
"reload",
",",
"debugger",
",",
"eager_loading",
",",
"with_threads",
")",
":",
"from",
"werkzeug",
".",
"serving",
"import",
"run_simple",
"debug",
"=",
"get_debug_flag",
"(",
")",
"if",
"reload",
"is",
"None",
":",
"reload",
"=",
"bool",
"(",
"debug",
")",
"if",
"debugger",
"is",
"None",
":",
"debugger",
"=",
"bool",
"(",
"debug",
")",
"if",
"eager_loading",
"is",
"None",
":",
"eager_loading",
"=",
"not",
"reload",
"app",
"=",
"DispatchingApp",
"(",
"info",
".",
"load_app",
",",
"use_eager_loading",
"=",
"eager_loading",
")",
"# Extra startup messages. This depends a bit on Werkzeug internals to",
"# not double execute when the reloader kicks in.",
"if",
"os",
".",
"environ",
".",
"get",
"(",
"'WERKZEUG_RUN_MAIN'",
")",
"!=",
"'true'",
":",
"# If we have an import path we can print it out now which can help",
"# people understand what's being served. If we do not have an",
"# import path because the app was loaded through a callback then",
"# we won't print anything.",
"if",
"info",
".",
"app_import_path",
"is",
"not",
"None",
":",
"print",
"(",
"' * Serving Flask app \"%s\"'",
"%",
"info",
".",
"app_import_path",
")",
"if",
"debug",
"is",
"not",
"None",
":",
"print",
"(",
"' * Forcing debug mode %s'",
"%",
"(",
"debug",
"and",
"'on'",
"or",
"'off'",
")",
")",
"run_simple",
"(",
"host",
",",
"port",
",",
"app",
",",
"use_reloader",
"=",
"reload",
",",
"use_debugger",
"=",
"debugger",
",",
"threaded",
"=",
"with_threads",
")"
] | [
399,
0
] | [
437,
60
] | python | en | ['en', 'en', 'en'] | True |
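
For reference, a minimal sketch of the werkzeug entry point invoked above, serving a trivial WSGI app; the host and port are illustrative.

from werkzeug.serving import run_simple

def wsgi_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

if __name__ == "__main__":
    run_simple("127.0.0.1", 5000, wsgi_app,
               use_reloader=False, use_debugger=False, threaded=True)
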
shell_command | () | Runs an interactive Python shell in the context of a given
Flask application. The application will populate the default
namespace of this shell according to its configuration.
This is useful for executing small snippets of management code
without having to manually configure the application.
| Runs an interactive Python shell in the context of a given
Flask application. The application will populate the default
namespace of this shell according to its configuration. | def shell_command():
"""Runs an interactive Python shell in the context of a given
Flask application. The application will populate the default
namespace of this shell according to its configuration.
This is useful for executing small snippets of management code
without having to manually configure the application.
"""
import code
from flask.globals import _app_ctx_stack
app = _app_ctx_stack.top.app
banner = 'Python %s on %s\nApp: %s%s\nInstance: %s' % (
sys.version,
sys.platform,
app.import_name,
app.debug and ' [debug]' or '',
app.instance_path,
)
ctx = {}
# Support the regular Python interpreter startup script if someone
# is using it.
startup = os.environ.get('PYTHONSTARTUP')
if startup and os.path.isfile(startup):
with open(startup, 'r') as f:
eval(compile(f.read(), startup, 'exec'), ctx)
ctx.update(app.make_shell_context())
code.interact(banner=banner, local=ctx) | [
"def",
"shell_command",
"(",
")",
":",
"import",
"code",
"from",
"flask",
".",
"globals",
"import",
"_app_ctx_stack",
"app",
"=",
"_app_ctx_stack",
".",
"top",
".",
"app",
"banner",
"=",
"'Python %s on %s\\nApp: %s%s\\nInstance: %s'",
"%",
"(",
"sys",
".",
"version",
",",
"sys",
".",
"platform",
",",
"app",
".",
"import_name",
",",
"app",
".",
"debug",
"and",
"' [debug]'",
"or",
"''",
",",
"app",
".",
"instance_path",
",",
")",
"ctx",
"=",
"{",
"}",
"# Support the regular Python interpreter startup script if someone",
"# is using it.",
"startup",
"=",
"os",
".",
"environ",
".",
"get",
"(",
"'PYTHONSTARTUP'",
")",
"if",
"startup",
"and",
"os",
".",
"path",
".",
"isfile",
"(",
"startup",
")",
":",
"with",
"open",
"(",
"startup",
",",
"'r'",
")",
"as",
"f",
":",
"eval",
"(",
"compile",
"(",
"f",
".",
"read",
"(",
")",
",",
"startup",
",",
"'exec'",
")",
",",
"ctx",
")",
"ctx",
".",
"update",
"(",
"app",
".",
"make_shell_context",
"(",
")",
")",
"code",
".",
"interact",
"(",
"banner",
"=",
"banner",
",",
"local",
"=",
"ctx",
")"
] | [
442,
0
] | [
471,
43
] | python | en | ['en', 'en', 'en'] | True |
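
The interactive core of the command reduces to the stdlib call below; the banner and namespace contents are illustrative.

import code

# Opens a REPL whose namespace contains `answer`; exit with Ctrl-D.
code.interact(banner="demo shell", local={"answer": 42})
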
ScriptInfo.load_app | (self) | Loads the Flask app (if not yet loaded) and returns it. Calling
this multiple times will just result in the already loaded app being
returned.
| Loads the Flask app (if not yet loaded) and returns it. Calling
this multiple times will just result in the already loaded app being
returned.
| def load_app(self):
"""Loads the Flask app (if not yet loaded) and returns it. Calling
this multiple times will just result in the already loaded app being
returned.
"""
__traceback_hide__ = True
if self._loaded_app is not None:
return self._loaded_app
if self.create_app is not None:
rv = self.create_app(self)
else:
if not self.app_import_path:
raise NoAppException(
'Could not locate Flask application. You did not provide '
'the FLASK_APP environment variable.\n\nFor more '
'information see '
'http://flask.pocoo.org/docs/latest/quickstart/')
rv = locate_app(self.app_import_path)
debug = get_debug_flag()
if debug is not None:
rv.debug = debug
self._loaded_app = rv
return rv | [
"def",
"load_app",
"(",
"self",
")",
":",
"__traceback_hide__",
"=",
"True",
"if",
"self",
".",
"_loaded_app",
"is",
"not",
"None",
":",
"return",
"self",
".",
"_loaded_app",
"if",
"self",
".",
"create_app",
"is",
"not",
"None",
":",
"rv",
"=",
"self",
".",
"create_app",
"(",
"self",
")",
"else",
":",
"if",
"not",
"self",
".",
"app_import_path",
":",
"raise",
"NoAppException",
"(",
"'Could not locate Flask application. You did not provide '",
"'the FLASK_APP environment variable.\\n\\nFor more '",
"'information see '",
"'http://flask.pocoo.org/docs/latest/quickstart/'",
")",
"rv",
"=",
"locate_app",
"(",
"self",
".",
"app_import_path",
")",
"debug",
"=",
"get_debug_flag",
"(",
")",
"if",
"debug",
"is",
"not",
"None",
":",
"rv",
".",
"debug",
"=",
"debug",
"self",
".",
"_loaded_app",
"=",
"rv",
"return",
"rv"
] | [
219,
4
] | [
241,
17
] | python | en | ['en', 'en', 'en'] | True |
AppGroup.command | (self, *args, **kwargs) | This works exactly like the method of the same name on a regular
:class:`click.Group` but it wraps callbacks in :func:`with_appcontext`
unless it's disabled by passing ``with_appcontext=False``.
| This works exactly like the method of the same name on a regular
:class:`click.Group` but it wraps callbacks in :func:`with_appcontext`
unless it's disabled by passing ``with_appcontext=False``.
| def command(self, *args, **kwargs):
"""This works exactly like the method of the same name on a regular
:class:`click.Group` but it wraps callbacks in :func:`with_appcontext`
unless it's disabled by passing ``with_appcontext=False``.
"""
wrap_for_ctx = kwargs.pop('with_appcontext', True)
def decorator(f):
if wrap_for_ctx:
f = with_appcontext(f)
return click.Group.command(self, *args, **kwargs)(f)
return decorator | [
"def",
"command",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"wrap_for_ctx",
"=",
"kwargs",
".",
"pop",
"(",
"'with_appcontext'",
",",
"True",
")",
"def",
"decorator",
"(",
"f",
")",
":",
"if",
"wrap_for_ctx",
":",
"f",
"=",
"with_appcontext",
"(",
"f",
")",
"return",
"click",
".",
"Group",
".",
"command",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"(",
"f",
")",
"return",
"decorator"
] | [
268,
4
] | [
278,
24
] | python | en | ['en', 'en', 'en'] | True |
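
A hedged sketch of registering commands on app.cli (an AppGroup instance): callbacks are wrapped in an app context by default, with an opt-out via with_appcontext=False. Assumes Flask is installed; the command names are illustrative.

import click
from flask import Flask, current_app

app = Flask(__name__)

@app.cli.command("whoami")
def whoami():
    # Runs inside an app context, so current_app is usable.
    click.echo(current_app.import_name)

@app.cli.command("plain", with_appcontext=False)
def plain():
    click.echo("no app context needed here")
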
AppGroup.group | (self, *args, **kwargs) | This works exactly like the method of the same name on a regular
:class:`click.Group` but it defaults the group class to
:class:`AppGroup`.
| This works exactly like the method of the same name on a regular
:class:`click.Group` but it defaults the group class to
:class:`AppGroup`.
| def group(self, *args, **kwargs):
"""This works exactly like the method of the same name on a regular
:class:`click.Group` but it defaults the group class to
:class:`AppGroup`.
"""
kwargs.setdefault('cls', AppGroup)
return click.Group.group(self, *args, **kwargs) | [
"def",
"group",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"kwargs",
".",
"setdefault",
"(",
"'cls'",
",",
"AppGroup",
")",
"return",
"click",
".",
"Group",
".",
"group",
"(",
"self",
",",
"*",
"args",
",",
"*",
"*",
"kwargs",
")"
] | [
280,
4
] | [
286,
55
] | python | en | ['en', 'en', 'en'] | True |
make_breseq_folder | (folder: Path) |
Mocks a breseq run for a single sample.
.folder
|---- output/
|----|---- index.html
|---- data/
|----|---- output.vcf
|
Mocks a breseq run for a single sample.
.folder
|---- output/
|----|---- index.html
|---- data/
|----|---- output.vcf
| def make_breseq_folder(folder: Path) -> Dict[str, Path]:
"""
Mocks a breseq run for a single sample.
.folder
|---- output/
|----|---- index.html
|---- data/
|----|---- output.vcf
"""
folder = checkdir(folder)
folder_output = checkdir(folder / "output")
folder_data = checkdir(folder / "data")
folder_evidence = checkdir(folder_output / "evidence")
filename_index = make_file(folder_output, 'index.html')
filename_vcf = make_file(folder_data, 'output.vcf')
filename_gd = make_file(folder_evidence, "annotated.gd")
filename_summary = make_file(folder_data, "summary.json")
result = {
'index': filename_index,
'gd': filename_gd,
'summary': filename_summary,
'vcf': filename_vcf
}
return result | [
"def",
"make_breseq_folder",
"(",
"folder",
":",
"Path",
")",
"->",
"Dict",
"[",
"str",
",",
"Path",
"]",
":",
"folder",
"=",
"checkdir",
"(",
"folder",
")",
"folder_output",
"=",
"checkdir",
"(",
"folder",
"/",
"\"output\"",
")",
"folder_data",
"=",
"checkdir",
"(",
"folder",
"/",
"\"data\"",
")",
"folder_evidence",
"=",
"checkdir",
"(",
"folder_output",
"/",
"\"evidence\"",
")",
"filename_index",
"=",
"make_file",
"(",
"folder_output",
",",
"'index.html'",
")",
"filename_vcf",
"=",
"make_file",
"(",
"folder_data",
",",
"'output.vcf'",
")",
"filename_gd",
"=",
"make_file",
"(",
"folder_evidence",
",",
"\"annotated.gd\"",
")",
"filename_summary",
"=",
"make_file",
"(",
"folder_data",
",",
"\"summary.json\"",
")",
"result",
"=",
"{",
"'index'",
":",
"filename_index",
",",
"'gd'",
":",
"filename_gd",
",",
"'summary'",
":",
"filename_summary",
",",
"'vcf'",
":",
"filename_vcf",
"}",
"return",
"result"
] | [
31,
0
] | [
57,
14
] | python | en | ['en', 'error', 'th'] | False |
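
An illustrative call; it assumes make_breseq_folder and its helpers (checkdir, make_file) from the surrounding test module are in scope.

import tempfile
from pathlib import Path

files = make_breseq_folder(Path(tempfile.mkdtemp()) / "CeN0L1G5")
print(files["index"])  # .../CeN0L1G5/output/index.html
print(files["vcf"])    # .../CeN0L1G5/data/output.vcf
print(files["gd"])     # .../CeN0L1G5/output/evidence/annotated.gd
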
structure1 | (tmp_path) |
.parent
|---- CeN0L1G5/
|----|---- data/
|----|---- output/
|---- CeN0L1G3/
|----|---- data/
|----|---- output/
|---- CeN0L1G29/
|----|---- data/
|----|---- output/
|
.parent
|---- CeN0L1G5/
|----|---- data/
|----|---- output/
|---- CeN0L1G3/
|----|---- data/
|----|---- output/
|---- CeN0L1G29/
|----|---- data/
|----|---- output/
| def structure1(tmp_path) -> BreseqFolder:
"""
.parent
|---- CeN0L1G5/
|----|---- data/
|----|---- output/
|---- CeN0L1G3/
|----|---- data/
|----|---- output/
|---- CeN0L1G29/
|----|---- data/
|----|---- output/
"""
parent_folder = checkdir(tmp_path / "cefepime")
filenames_sample_1 = make_breseq_folder(parent_folder / "CeN0L1G5")
filenames_sample_2 = make_breseq_folder(parent_folder / "CeN0L1G3")
filenames_sample_3 = make_breseq_folder(parent_folder / "CeN0L1G29")
samples = [
filenames_sample_1,
filenames_sample_2,
filenames_sample_3
]
result = BreseqFolder(
parent = parent_folder,
samples = samples
)
return result | [
"def",
"structure1",
"(",
"tmp_path",
")",
"->",
"BreseqFolder",
":",
"parent_folder",
"=",
"checkdir",
"(",
"tmp_path",
"/",
"\"cefepime\"",
")",
"filenames_sample_1",
"=",
"make_breseq_folder",
"(",
"parent_folder",
"/",
"\"CeN0L1G5\"",
")",
"filenames_sample_2",
"=",
"make_breseq_folder",
"(",
"parent_folder",
"/",
"\"CeN0L1G3\"",
")",
"filenames_sample_3",
"=",
"make_breseq_folder",
"(",
"parent_folder",
"/",
"\"CeN0L1G29\"",
")",
"samples",
"=",
"[",
"filenames_sample_1",
",",
"filenames_sample_2",
",",
"filenames_sample_3",
"]",
"result",
"=",
"BreseqFolder",
"(",
"parent",
"=",
"parent_folder",
",",
"samples",
"=",
"samples",
")",
"return",
"result"
] | [
61,
0
] | [
92,
14
] | python | en | ['en', 'error', 'th'] | False |
structure2 | (tmp_path) |
.parent
|---- sample1/
|----|---- breseq
|---- sample2/
|----|---- breseq
|---- sample3/
|----|---- breseq
|
.parent
|---- sample1/
|----|---- breseq
|---- sample2/
|----|---- breseq
|---- sample3/
|----|---- breseq
| def structure2(tmp_path) -> BreseqFolder:
"""
.parent
|---- sample1/
|----|---- breseq
|---- sample2/
|----|---- breseq
|---- sample3/
|----|---- breseq
"""
parent_folder = checkdir(tmp_path / "cefepime")
folder_1 = checkdir(parent_folder / "CeN0L1G5")
folder_2 = checkdir(parent_folder / "CeN0L1G3")
folder_3 = checkdir(parent_folder / "CeN0L1G29")
filenames_sample_1 = make_breseq_folder(folder_1 / "breseq")
filenames_sample_2 = make_breseq_folder(folder_2 / "breseq")
filenames_sample_3 = make_breseq_folder(folder_3 / "breseq")
result = BreseqFolder(
parent = parent_folder,
samples = [
filenames_sample_1,
filenames_sample_2,
filenames_sample_3
]
)
return result | [
"def",
"structure2",
"(",
"tmp_path",
")",
"->",
"BreseqFolder",
":",
"parent_folder",
"=",
"checkdir",
"(",
"tmp_path",
"/",
"\"cefepime\"",
")",
"folder_1",
"=",
"checkdir",
"(",
"parent_folder",
"/",
"\"CeN0L1G5\"",
")",
"folder_2",
"=",
"checkdir",
"(",
"parent_folder",
"/",
"\"CeN0L1G3\"",
")",
"folder_3",
"=",
"checkdir",
"(",
"parent_folder",
"/",
"\"CeN0L1G29\"",
")",
"filenames_sample_1",
"=",
"make_breseq_folder",
"(",
"folder_1",
"/",
"\"breseq\"",
")",
"filenames_sample_2",
"=",
"make_breseq_folder",
"(",
"folder_2",
"/",
"\"breseq\"",
")",
"filenames_sample_3",
"=",
"make_breseq_folder",
"(",
"folder_3",
"/",
"\"breseq\"",
")",
"result",
"=",
"BreseqFolder",
"(",
"parent",
"=",
"parent_folder",
",",
"samples",
"=",
"[",
"filenames_sample_1",
",",
"filenames_sample_2",
",",
"filenames_sample_3",
"]",
")",
"return",
"result"
] | [
96,
0
] | [
125,
14
] | python | en | ['en', 'error', 'th'] | False |
upload_docs._build_multipart | (cls, data) |
Build up the MIME payload for the POST data
|
Build up the MIME payload for the POST data
| def _build_multipart(cls, data):
"""
Build up the MIME payload for the POST data
"""
boundary = b'--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
sep_boundary = b'\n--' + boundary
end_boundary = sep_boundary + b'--'
end_items = end_boundary, b"\n",
builder = functools.partial(
cls._build_part,
sep_boundary=sep_boundary,
)
part_groups = map(builder, data.items())
parts = itertools.chain.from_iterable(part_groups)
body_items = itertools.chain(parts, end_items)
content_type = 'multipart/form-data; boundary=%s' % boundary.decode('ascii')
return b''.join(body_items), content_type | [
"def",
"_build_multipart",
"(",
"cls",
",",
"data",
")",
":",
"boundary",
"=",
"b'--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'",
"sep_boundary",
"=",
"b'\\n--'",
"+",
"boundary",
"end_boundary",
"=",
"sep_boundary",
"+",
"b'--'",
"end_items",
"=",
"end_boundary",
",",
"b\"\\n\"",
",",
"builder",
"=",
"functools",
".",
"partial",
"(",
"cls",
".",
"_build_part",
",",
"sep_boundary",
"=",
"sep_boundary",
",",
")",
"part_groups",
"=",
"map",
"(",
"builder",
",",
"data",
".",
"items",
"(",
")",
")",
"parts",
"=",
"itertools",
".",
"chain",
".",
"from_iterable",
"(",
"part_groups",
")",
"body_items",
"=",
"itertools",
".",
"chain",
"(",
"parts",
",",
"end_items",
")",
"content_type",
"=",
"'multipart/form-data; boundary=%s'",
"%",
"boundary",
".",
"decode",
"(",
"'ascii'",
")",
"return",
"b''",
".",
"join",
"(",
"body_items",
")",
",",
"content_type"
] | [
125,
4
] | [
141,
49
] | python | en | ['en', 'error', 'th'] | False |
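
A hedged usage sketch. The upload_docs command lived in setuptools.command.upload_docs in the setuptools versions this record comes from and has since been deprecated, so treat the import and the field values as assumptions.

from setuptools.command.upload_docs import upload_docs

body, content_type = upload_docs._build_multipart(
    {":action": b"doc_upload", "name": b"example-project"})
print(content_type)  # multipart/form-data; boundary=...
print(len(body))     # raw bytes ready to POST
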
do_block | (parser, token) |
Define a block that can be overridden by child templates.
|
Define a block that can be overridden by child templates.
| def do_block(parser, token):
"""
Define a block that can be overridden by child templates.
"""
# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments
bits = token.contents.split()
if len(bits) != 2:
raise TemplateSyntaxError("'%s' tag takes only one argument" % bits[0])
block_name = bits[1]
# Keep track of the names of BlockNodes found in this template, so we can
# check for duplication.
try:
if block_name in parser.__loaded_blocks:
raise TemplateSyntaxError("'%s' tag with name '%s' appears more than once" % (bits[0], block_name))
parser.__loaded_blocks.append(block_name)
except AttributeError: # parser.__loaded_blocks isn't a list yet
parser.__loaded_blocks = [block_name]
nodelist = parser.parse(('endblock',))
# This check is kept for backwards-compatibility. See #3100.
endblock = parser.next_token()
acceptable_endblocks = ('endblock', 'endblock %s' % block_name)
if endblock.contents not in acceptable_endblocks:
parser.invalid_block_tag(endblock, 'endblock', acceptable_endblocks)
return BlockNode(block_name, nodelist) | [
"def",
"do_block",
"(",
"parser",
",",
"token",
")",
":",
"# token.split_contents() isn't useful here because this tag doesn't accept variable as arguments",
"bits",
"=",
"token",
".",
"contents",
".",
"split",
"(",
")",
"if",
"len",
"(",
"bits",
")",
"!=",
"2",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"'%s' tag takes only one argument\"",
"%",
"bits",
"[",
"0",
"]",
")",
"block_name",
"=",
"bits",
"[",
"1",
"]",
"# Keep track of the names of BlockNodes found in this template, so we can",
"# check for duplication.",
"try",
":",
"if",
"block_name",
"in",
"parser",
".",
"__loaded_blocks",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"'%s' tag with name '%s' appears more than once\"",
"%",
"(",
"bits",
"[",
"0",
"]",
",",
"block_name",
")",
")",
"parser",
".",
"__loaded_blocks",
".",
"append",
"(",
"block_name",
")",
"except",
"AttributeError",
":",
"# parser.__loaded_blocks isn't a list yet",
"parser",
".",
"__loaded_blocks",
"=",
"[",
"block_name",
"]",
"nodelist",
"=",
"parser",
".",
"parse",
"(",
"(",
"'endblock'",
",",
")",
")",
"# This check is kept for backwards-compatibility. See #3100.",
"endblock",
"=",
"parser",
".",
"next_token",
"(",
")",
"acceptable_endblocks",
"=",
"(",
"'endblock'",
",",
"'endblock %s'",
"%",
"block_name",
")",
"if",
"endblock",
".",
"contents",
"not",
"in",
"acceptable_endblocks",
":",
"parser",
".",
"invalid_block_tag",
"(",
"endblock",
",",
"'endblock'",
",",
"acceptable_endblocks",
")",
"return",
"BlockNode",
"(",
"block_name",
",",
"nodelist",
")"
] | [
198,
0
] | [
223,
42
] | python | en | ['en', 'error', 'th'] | False |
construct_relative_path | (current_template_name, relative_name) |
Convert a relative path (starting with './' or '../') to the full template
name based on the current_template_name.
|
Convert a relative path (starting with './' or '../') to the full template
name based on the current_template_name.
| def construct_relative_path(current_template_name, relative_name):
"""
Convert a relative path (starting with './' or '../') to the full template
name based on the current_template_name.
"""
has_quotes = (
(relative_name.startswith('"') and relative_name.endswith('"')) or
(relative_name.startswith("'") and relative_name.endswith("'"))
)
new_name = relative_name.strip('\'"')
if not new_name.startswith(('./', '../')):
# relative_name is a variable or a literal that doesn't contain a
# relative path.
return relative_name
new_name = posixpath.normpath(
posixpath.join(
posixpath.dirname(current_template_name.lstrip('/')),
new_name,
)
)
if new_name.startswith('../'):
raise TemplateSyntaxError(
"The relative path '%s' points outside the file hierarchy that "
"template '%s' is in." % (relative_name, current_template_name)
)
if current_template_name.lstrip('/') == new_name:
raise TemplateSyntaxError(
"The relative path '%s' was translated to template name '%s', the "
"same template in which the tag appears."
% (relative_name, current_template_name)
)
return f'"{new_name}"' if has_quotes else new_name | [
"def",
"construct_relative_path",
"(",
"current_template_name",
",",
"relative_name",
")",
":",
"has_quotes",
"=",
"(",
"(",
"relative_name",
".",
"startswith",
"(",
"'\"'",
")",
"and",
"relative_name",
".",
"endswith",
"(",
"'\"'",
")",
")",
"or",
"(",
"relative_name",
".",
"startswith",
"(",
"\"'\"",
")",
"and",
"relative_name",
".",
"endswith",
"(",
"\"'\"",
")",
")",
")",
"new_name",
"=",
"relative_name",
".",
"strip",
"(",
"'\\'\"'",
")",
"if",
"not",
"new_name",
".",
"startswith",
"(",
"(",
"'./'",
",",
"'../'",
")",
")",
":",
"# relative_name is a variable or a literal that doesn't contain a",
"# relative path.",
"return",
"relative_name",
"new_name",
"=",
"posixpath",
".",
"normpath",
"(",
"posixpath",
".",
"join",
"(",
"posixpath",
".",
"dirname",
"(",
"current_template_name",
".",
"lstrip",
"(",
"'/'",
")",
")",
",",
"new_name",
",",
")",
")",
"if",
"new_name",
".",
"startswith",
"(",
"'../'",
")",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"The relative path '%s' points outside the file hierarchy that \"",
"\"template '%s' is in.\"",
"%",
"(",
"relative_name",
",",
"current_template_name",
")",
")",
"if",
"current_template_name",
".",
"lstrip",
"(",
"'/'",
")",
"==",
"new_name",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"The relative path '%s' was translated to template name '%s', the \"",
"\"same template in which the tag appears.\"",
"%",
"(",
"relative_name",
",",
"current_template_name",
")",
")",
"return",
"f'\"{new_name}\"'",
"if",
"has_quotes",
"else",
"new_name"
] | [
226,
0
] | [
258,
54
] | python | en | ['en', 'error', 'th'] | False |
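
Illustrative calls, assuming construct_relative_path from the record above is in scope; the template names are hypothetical.

print(construct_relative_path("shop/cart.html", '"./summary.html"'))
# -> '"shop/summary.html"'
print(construct_relative_path("shop/cart.html", '"../base.html"'))
# -> '"base.html"'
print(construct_relative_path("shop/cart.html", "template_var"))
# -> 'template_var'  (not a relative literal, returned untouched)
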
do_extends | (parser, token) |
Signal that this template extends a parent template.
This tag may be used in two ways: ``{% extends "base" %}`` (with quotes)
uses the literal value "base" as the name of the parent template to extend,
or ``{% extends variable %}`` uses the value of ``variable`` as either the
name of the parent template to extend (if it evaluates to a string) or as
the parent template itself (if it evaluates to a Template object).
|
Signal that this template extends a parent template. | def do_extends(parser, token):
"""
Signal that this template extends a parent template.
This tag may be used in two ways: ``{% extends "base" %}`` (with quotes)
uses the literal value "base" as the name of the parent template to extend,
or ``{% extends variable %}`` uses the value of ``variable`` as either the
name of the parent template to extend (if it evaluates to a string) or as
the parent template itself (if it evaluates to a Template object).
"""
bits = token.split_contents()
if len(bits) != 2:
raise TemplateSyntaxError("'%s' takes one argument" % bits[0])
bits[1] = construct_relative_path(parser.origin.template_name, bits[1])
parent_name = parser.compile_filter(bits[1])
nodelist = parser.parse()
if nodelist.get_nodes_by_type(ExtendsNode):
raise TemplateSyntaxError("'%s' cannot appear more than once in the same template" % bits[0])
return ExtendsNode(nodelist, parent_name) | [
"def",
"do_extends",
"(",
"parser",
",",
"token",
")",
":",
"bits",
"=",
"token",
".",
"split_contents",
"(",
")",
"if",
"len",
"(",
"bits",
")",
"!=",
"2",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"'%s' takes one argument\"",
"%",
"bits",
"[",
"0",
"]",
")",
"bits",
"[",
"1",
"]",
"=",
"construct_relative_path",
"(",
"parser",
".",
"origin",
".",
"template_name",
",",
"bits",
"[",
"1",
"]",
")",
"parent_name",
"=",
"parser",
".",
"compile_filter",
"(",
"bits",
"[",
"1",
"]",
")",
"nodelist",
"=",
"parser",
".",
"parse",
"(",
")",
"if",
"nodelist",
".",
"get_nodes_by_type",
"(",
"ExtendsNode",
")",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"'%s' cannot appear more than once in the same template\"",
"%",
"bits",
"[",
"0",
"]",
")",
"return",
"ExtendsNode",
"(",
"nodelist",
",",
"parent_name",
")"
] | [
262,
0
] | [
280,
45
] | python | en | ['en', 'error', 'th'] | False |
do_include | (parser, token) |
Load a template and render it with the current context. You can pass
additional context using keyword arguments.
Example::
{% include "foo/some_include" %}
{% include "foo/some_include" with bar="BAZZ!" baz="BING!" %}
Use the ``only`` argument to exclude the current context when rendering
the included template::
{% include "foo/some_include" only %}
{% include "foo/some_include" with bar="1" only %}
|
Load a template and render it with the current context. You can pass
additional context using keyword arguments. | def do_include(parser, token):
"""
Load a template and render it with the current context. You can pass
additional context using keyword arguments.
Example::
{% include "foo/some_include" %}
{% include "foo/some_include" with bar="BAZZ!" baz="BING!" %}
Use the ``only`` argument to exclude the current context when rendering
the included template::
{% include "foo/some_include" only %}
{% include "foo/some_include" with bar="1" only %}
"""
bits = token.split_contents()
if len(bits) < 2:
raise TemplateSyntaxError(
"%r tag takes at least one argument: the name of the template to "
"be included." % bits[0]
)
options = {}
remaining_bits = bits[2:]
while remaining_bits:
option = remaining_bits.pop(0)
if option in options:
raise TemplateSyntaxError('The %r option was specified more '
'than once.' % option)
if option == 'with':
value = token_kwargs(remaining_bits, parser, support_legacy=False)
if not value:
raise TemplateSyntaxError('"with" in %r tag needs at least '
'one keyword argument.' % bits[0])
elif option == 'only':
value = True
else:
raise TemplateSyntaxError('Unknown argument for %r tag: %r.' %
(bits[0], option))
options[option] = value
isolated_context = options.get('only', False)
namemap = options.get('with', {})
bits[1] = construct_relative_path(parser.origin.template_name, bits[1])
return IncludeNode(parser.compile_filter(bits[1]), extra_context=namemap,
isolated_context=isolated_context) | [
"def",
"do_include",
"(",
"parser",
",",
"token",
")",
":",
"bits",
"=",
"token",
".",
"split_contents",
"(",
")",
"if",
"len",
"(",
"bits",
")",
"<",
"2",
":",
"raise",
"TemplateSyntaxError",
"(",
"\"%r tag takes at least one argument: the name of the template to \"",
"\"be included.\"",
"%",
"bits",
"[",
"0",
"]",
")",
"options",
"=",
"{",
"}",
"remaining_bits",
"=",
"bits",
"[",
"2",
":",
"]",
"while",
"remaining_bits",
":",
"option",
"=",
"remaining_bits",
".",
"pop",
"(",
"0",
")",
"if",
"option",
"in",
"options",
":",
"raise",
"TemplateSyntaxError",
"(",
"'The %r option was specified more '",
"'than once.'",
"%",
"option",
")",
"if",
"option",
"==",
"'with'",
":",
"value",
"=",
"token_kwargs",
"(",
"remaining_bits",
",",
"parser",
",",
"support_legacy",
"=",
"False",
")",
"if",
"not",
"value",
":",
"raise",
"TemplateSyntaxError",
"(",
"'\"with\" in %r tag needs at least '",
"'one keyword argument.'",
"%",
"bits",
"[",
"0",
"]",
")",
"elif",
"option",
"==",
"'only'",
":",
"value",
"=",
"True",
"else",
":",
"raise",
"TemplateSyntaxError",
"(",
"'Unknown argument for %r tag: %r.'",
"%",
"(",
"bits",
"[",
"0",
"]",
",",
"option",
")",
")",
"options",
"[",
"option",
"]",
"=",
"value",
"isolated_context",
"=",
"options",
".",
"get",
"(",
"'only'",
",",
"False",
")",
"namemap",
"=",
"options",
".",
"get",
"(",
"'with'",
",",
"{",
"}",
")",
"bits",
"[",
"1",
"]",
"=",
"construct_relative_path",
"(",
"parser",
".",
"origin",
".",
"template_name",
",",
"bits",
"[",
"1",
"]",
")",
"return",
"IncludeNode",
"(",
"parser",
".",
"compile_filter",
"(",
"bits",
"[",
"1",
"]",
")",
",",
"extra_context",
"=",
"namemap",
",",
"isolated_context",
"=",
"isolated_context",
")"
] | [
284,
0
] | [
328,
57
] | python | en | ['en', 'error', 'th'] | False |
ExtendsNode.find_template | (self, template_name, context) |
This is a wrapper around engine.find_template(). A history is kept in
the render_context attribute between successive extends calls and
passed as the skip argument. This enables extends to work recursively
without extending the same template twice.
|
This is a wrapper around engine.find_template(). A history is kept in
the render_context attribute between successive extends calls and
passed as the skip argument. This enables extends to work recursively
without extending the same template twice.
| def find_template(self, template_name, context):
"""
This is a wrapper around engine.find_template(). A history is kept in
the render_context attribute between successive extends calls and
passed as the skip argument. This enables extends to work recursively
without extending the same template twice.
"""
history = context.render_context.setdefault(
self.context_key, [self.origin],
)
template, origin = context.template.engine.find_template(
template_name, skip=history,
)
history.append(origin)
return template | [
"def",
"find_template",
"(",
"self",
",",
"template_name",
",",
"context",
")",
":",
"history",
"=",
"context",
".",
"render_context",
".",
"setdefault",
"(",
"self",
".",
"context_key",
",",
"[",
"self",
".",
"origin",
"]",
",",
")",
"template",
",",
"origin",
"=",
"context",
".",
"template",
".",
"engine",
".",
"find_template",
"(",
"template_name",
",",
"skip",
"=",
"history",
",",
")",
"history",
".",
"append",
"(",
"origin",
")",
"return",
"template"
] | [
92,
4
] | [
106,
23
] | python | en | ['en', 'error', 'th'] | False |
IncludeNode.render | (self, context) |
Render the specified template and context. Cache the template object
in render_context to avoid reparsing and loading when used in a for
loop.
|
Render the specified template and context. Cache the template object
in render_context to avoid reparsing and loading when used in a for
loop.
| def render(self, context):
"""
Render the specified template and context. Cache the template object
in render_context to avoid reparsing and loading when used in a for
loop.
"""
template = self.template.resolve(context)
# Does this quack like a Template?
if not callable(getattr(template, 'render', None)):
# If not, try the cache and select_template().
template_name = template or ()
if isinstance(template_name, str):
template_name = (construct_relative_path(
self.origin.template_name,
template_name,
),)
else:
template_name = tuple(template_name)
cache = context.render_context.dicts[0].setdefault(self, {})
template = cache.get(template_name)
if template is None:
template = context.template.engine.select_template(template_name)
cache[template_name] = template
# Use the base.Template of a backends.django.Template.
elif hasattr(template, 'template'):
template = template.template
values = {
name: var.resolve(context)
for name, var in self.extra_context.items()
}
if self.isolated_context:
return template.render(context.new(values))
with context.push(**values):
return template.render(context) | [
"def",
"render",
"(",
"self",
",",
"context",
")",
":",
"template",
"=",
"self",
".",
"template",
".",
"resolve",
"(",
"context",
")",
"# Does this quack like a Template?",
"if",
"not",
"callable",
"(",
"getattr",
"(",
"template",
",",
"'render'",
",",
"None",
")",
")",
":",
"# If not, try the cache and select_template().",
"template_name",
"=",
"template",
"or",
"(",
")",
"if",
"isinstance",
"(",
"template_name",
",",
"str",
")",
":",
"template_name",
"=",
"(",
"construct_relative_path",
"(",
"self",
".",
"origin",
".",
"template_name",
",",
"template_name",
",",
")",
",",
")",
"else",
":",
"template_name",
"=",
"tuple",
"(",
"template_name",
")",
"cache",
"=",
"context",
".",
"render_context",
".",
"dicts",
"[",
"0",
"]",
".",
"setdefault",
"(",
"self",
",",
"{",
"}",
")",
"template",
"=",
"cache",
".",
"get",
"(",
"template_name",
")",
"if",
"template",
"is",
"None",
":",
"template",
"=",
"context",
".",
"template",
".",
"engine",
".",
"select_template",
"(",
"template_name",
")",
"cache",
"[",
"template_name",
"]",
"=",
"template",
"# Use the base.Template of a backends.django.Template.",
"elif",
"hasattr",
"(",
"template",
",",
"'template'",
")",
":",
"template",
"=",
"template",
".",
"template",
"values",
"=",
"{",
"name",
":",
"var",
".",
"resolve",
"(",
"context",
")",
"for",
"name",
",",
"var",
"in",
"self",
".",
"extra_context",
".",
"items",
"(",
")",
"}",
"if",
"self",
".",
"isolated_context",
":",
"return",
"template",
".",
"render",
"(",
"context",
".",
"new",
"(",
"values",
")",
")",
"with",
"context",
".",
"push",
"(",
"*",
"*",
"values",
")",
":",
"return",
"template",
".",
"render",
"(",
"context",
")"
] | [
161,
4
] | [
194,
43
] | python | en | ['en', 'error', 'th'] | False |
save_agent | (output_dir, batch_id, model, optimizer, scheduler, keep_last=3) | Store agent to disk. | Store agent to disk. | def save_agent(output_dir, batch_id, model, optimizer, scheduler, keep_last=3):
"""Store agent to disk."""
fpath = os.path.join(output_dir, 'ckpt.%08d' % (batch_id))
logging.info('Saving: %s', fpath)
torch.save(
dict(model=model.state_dict(),
optim=optimizer.state_dict(),
done_batches=batch_id,
scheduler=scheduler and scheduler.state_dict()), fpath)
# Delete the older ones, keep the last few
all_ckpts = sorted(glob.glob(os.path.join(output_dir, 'ckpt.*')))
to_del = all_ckpts[:-keep_last]
for fpath in to_del:
os.remove(fpath) | [
"def",
"save_agent",
"(",
"output_dir",
",",
"batch_id",
",",
"model",
",",
"optimizer",
",",
"scheduler",
",",
"keep_last",
"=",
"3",
")",
":",
"fpath",
"=",
"os",
".",
"path",
".",
"join",
"(",
"output_dir",
",",
"'ckpt.%08d'",
"%",
"(",
"batch_id",
")",
")",
"logging",
".",
"info",
"(",
"'Saving: %s'",
",",
"fpath",
")",
"torch",
".",
"save",
"(",
"dict",
"(",
"model",
"=",
"model",
".",
"state_dict",
"(",
")",
",",
"optim",
"=",
"optimizer",
".",
"state_dict",
"(",
")",
",",
"done_batches",
"=",
"batch_id",
",",
"scheduler",
"=",
"scheduler",
"and",
"scheduler",
".",
"state_dict",
"(",
")",
")",
",",
"fpath",
")",
"# Delete the older ones, keep the last few",
"all_ckpts",
"=",
"sorted",
"(",
"glob",
".",
"glob",
"(",
"os",
".",
"path",
".",
"join",
"(",
"output_dir",
",",
"'ckpt.*'",
")",
")",
")",
"to_del",
"=",
"all_ckpts",
"[",
":",
"-",
"keep_last",
"]",
"for",
"fpath",
"in",
"to_del",
":",
"os",
".",
"remove",
"(",
"fpath",
")"
] | [
47,
0
] | [
60,
24
] | python | en | ['en', 'en', 'en'] | True |
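
A standalone sketch of the checkpoint rotation performed above; the directory and empty files are illustrative stand-ins for real checkpoints.

import glob
import os
import tempfile

output_dir = tempfile.mkdtemp()
for batch_id in (1, 2, 3, 4, 5):
    open(os.path.join(output_dir, "ckpt.%08d" % batch_id), "w").close()

keep_last = 3
all_ckpts = sorted(glob.glob(os.path.join(output_dir, "ckpt.*")))
for fpath in all_ckpts[:-keep_last]:
    os.remove(fpath)

print(sorted(os.listdir(output_dir)))
# -> ['ckpt.00000003', 'ckpt.00000004', 'ckpt.00000005']
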
convert_to_video_vis | (vid, is_solved=None) |
Generate a video visualization to go into tensorboard.
Args:
vid np.ndarray(BxTxHxW): Video in standard PHYRE style
is_solved (int): Whether this video solves the task or not.
Returns:
vid_vis (BxTxHxWx3)
|
Generate a video visualization to go into tensorboard.
Args:
vid np.ndarray(BxTxHxW): Video in standard PHYRE style
is_solved (int): Whether this video solves the task or not.
Returns:
vid_vis (BxTxHxWx3)
| def convert_to_video_vis(vid, is_solved=None):
"""
Generate a video visualization to go into tensorboard.
Args:
vid np.ndarray(BxTxHxW): Video in standard PHYRE style
is_solved (int): Whether this video solves the task or not.
Returns:
vid_vis (BxTxHxWx3)
"""
return np.stack([
# The is_solved argument adds a little bar to the top for visualization
# green if solved, red if not.
np.stack([
phyre.vis.observations_to_uint8_rgb(frame, is_solved=is_solved)
for frame in clip
]) for clip in vid
]) | [
"def",
"convert_to_video_vis",
"(",
"vid",
",",
"is_solved",
"=",
"None",
")",
":",
"return",
"np",
".",
"stack",
"(",
"[",
"# The is_solved argument adds a little bar to the top for visualization",
"# green if solved, red if not.",
"np",
".",
"stack",
"(",
"[",
"phyre",
".",
"vis",
".",
"observations_to_uint8_rgb",
"(",
"frame",
",",
"is_solved",
"=",
"is_solved",
")",
"for",
"frame",
"in",
"clip",
"]",
")",
"for",
"clip",
"in",
"vid",
"]",
")"
] | [
63,
0
] | [
79,
6
] | python | en | ['en', 'error', 'th'] | False |
get_num_workers | (num_workers, frames_per_clip) | Fine-tunes num_workers if batch size/frames per clip is too large, since
otherwise jobs crash. | Fine-tunes num_workers if batch size/frames per clip is too large, since
otherwise jobs crash. | def get_num_workers(num_workers, frames_per_clip):
"""Fine-tunes num_workers if batch size/frames per clip is too large, since
otherwise jobs crash."""
del frames_per_clip
return num_workers | [
"def",
"get_num_workers",
"(",
"num_workers",
",",
"frames_per_clip",
")",
":",
"del",
"frames_per_clip",
"return",
"num_workers"
] | [
82,
0
] | [
86,
22
] | python | en | ['en', 'en', 'en'] | True |
gen_vis_vid_preds | (orig_vid,
model,
n_fwd_times=None,
run_decode=True,
n_hist_frames=3) |
Generate a visualization of some training videos, along with model rollout
(actual autoregressive rollout, so need to test again).
Args:
orig_vid: (B, T, Nobj, H, W) video batch
model: the pytorch model for forward prediction
Returns:
RGB frames (B, T, 3, H, W) as torch tensor, in the standard format that
can be used with tensorboard.
|
Generate a visualization of some training videos, along with model rollout
(actual autoregressive rollout, so need to test again).
Args:
orig_vid: (B, T, Nobj, H, W) video batch
model: the pytorch model for forward prediction
Returns:
RGB frames (B, T, 3, H, W) as torch tensor, in the standard format that
can be used with tensorboard.
| def gen_vis_vid_preds(orig_vid,
model,
n_fwd_times=None,
run_decode=True,
n_hist_frames=3):
"""
Generate a visualization of some training videos, along with model rollout
(actual autoregressive rollout, so need to test again).
Args:
orig_vid: (B, T, Nobj, H, W) video batch
model: the pytorch model for forward prediction
Returns:
RGB frames (B, T, 3, H, W) as torch tensor, in the standard format that
can be used with tensorboard.
"""
# Generate the predictions
if n_fwd_times is None:
n_fwd_times = orig_vid.shape[1] - n_hist_frames # As many frames as we have GT for
# For vis, at least 1 frame would be needed for following code
n_fwd_times = max(n_fwd_times, 1)
vid = orig_vid[:, :n_hist_frames, ...] # crop out the first part for pred
with torch.no_grad():
model.eval()
all_preds, _ = model.forward(vid,
None,
n_hist_frames=n_hist_frames,
n_fwd_times=n_fwd_times,
compute_losses=False,
need_intermediate=True,
run_decode=run_decode,
nslices=1)
stacked, _, _, _ = ImgTrainer.vis_stacked_pred_gt(
nets.combine_obj_pixels(orig_vid, 2).cpu().numpy(),
nets.combine_obj_pixels(vid, 2),
all_preds['pixels'] if run_decode else None)
# For some reason need to flip the image in space and time for corr vis
stacked_rgb = np.array(
convert_to_video_vis(stacked).transpose((0, 1, 4, 2, 3)))
return torch.as_tensor(stacked_rgb) | [
"def",
"gen_vis_vid_preds",
"(",
"orig_vid",
",",
"model",
",",
"n_fwd_times",
"=",
"None",
",",
"run_decode",
"=",
"True",
",",
"n_hist_frames",
"=",
"3",
")",
":",
"# Generate the predictions",
"if",
"n_fwd_times",
"is",
"None",
":",
"n_fwd_times",
"=",
"orig_vid",
".",
"shape",
"[",
"1",
"]",
"-",
"n_hist_frames",
"# As many we've GT for",
"# For vis, at least 1 frame would be needed for following code",
"n_fwd_times",
"=",
"max",
"(",
"n_fwd_times",
",",
"1",
")",
"vid",
"=",
"orig_vid",
"[",
":",
",",
":",
"n_hist_frames",
",",
"...",
"]",
"# crop out the first part for pred",
"with",
"torch",
".",
"no_grad",
"(",
")",
":",
"model",
".",
"eval",
"(",
")",
"all_preds",
",",
"_",
"=",
"model",
".",
"forward",
"(",
"vid",
",",
"None",
",",
"n_hist_frames",
"=",
"n_hist_frames",
",",
"n_fwd_times",
"=",
"n_fwd_times",
",",
"compute_losses",
"=",
"False",
",",
"need_intermediate",
"=",
"True",
",",
"run_decode",
"=",
"run_decode",
",",
"nslices",
"=",
"1",
")",
"stacked",
",",
"_",
",",
"_",
",",
"_",
"=",
"ImgTrainer",
".",
"vis_stacked_pred_gt",
"(",
"nets",
".",
"combine_obj_pixels",
"(",
"orig_vid",
",",
"2",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
",",
"nets",
".",
"combine_obj_pixels",
"(",
"vid",
",",
"2",
")",
",",
"all_preds",
"[",
"'pixels'",
"]",
"if",
"run_decode",
"else",
"None",
")",
"# For some reason need to flip the image in space and time for corr vis",
"stacked_rgb",
"=",
"np",
".",
"array",
"(",
"convert_to_video_vis",
"(",
"stacked",
")",
".",
"transpose",
"(",
"(",
"0",
",",
"1",
",",
"4",
",",
"2",
",",
"3",
")",
")",
")",
"return",
"torch",
".",
"as_tensor",
"(",
"stacked_rgb",
")"
] | [
89,
0
] | [
128,
39
] | python | en | ['en', 'error', 'th'] | False |
phyre_batchvidresize | (t, shape) |
Args:
t: Input video tensor batch, Long dtype, BxTxHxW
shape: Output shape required, (H', W')
|
Args:
t: Input video tensor batch, Long dtype, BxTxHxW
shape: Output shape required, (H', W')
| def phyre_batchvidresize(t, shape):
"""
Args:
t: Input video tensor batch, Long dtype, BxTxHxW
shape: Output shape required, (H', W')
"""
return nn.functional.interpolate(t.to(torch.float),
size=list(shape),
mode='nearest').to(torch.long) | [
"def",
"phyre_batchvidresize",
"(",
"t",
",",
"shape",
")",
":",
"return",
"nn",
".",
"functional",
".",
"interpolate",
"(",
"t",
".",
"to",
"(",
"torch",
".",
"float",
")",
",",
"size",
"=",
"list",
"(",
"shape",
")",
",",
"mode",
"=",
"'nearest'",
")",
".",
"to",
"(",
"torch",
".",
"long",
")"
] | [
137,
0
] | [
145,
67
] | python | en | ['en', 'error', 'th'] | False |
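
An illustrative call, assuming phyre_batchvidresize from the record above is in scope; requires torch. The long-to-float round trip exists because interpolate rejects integer dtypes.

import torch

vid = torch.randint(0, 7, (2, 4, 32, 32))  # B x T x H x W, long dtype
out = phyre_batchvidresize(vid, (64, 64))
print(out.shape, out.dtype)  # torch.Size([2, 4, 64, 64]) torch.int64
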
overlay_pred_scores | (vid, scores, ch=1) |
Args:
vid (B, 1, H, W): PHYRE style video (torch.Tensor)
scores (B,) Scores for each batch element to be overlayed in text on the
frame
ch: Which channel to overlay on.
Returns:
vid (B, H, W) with the score overlayed
|
Args:
vid (B, 1, H, W): PHYRE style video (torch.Tensor)
scores (B,) Scores for each batch element to be overlayed in text on the
frame
ch: Which channel to overlay on.
Returns:
vid (B, H, W) with the score overlayed
| def overlay_pred_scores(vid, scores, ch=1):
"""
Args:
vid (B, 1, H, W): PHYRE style video (torch.Tensor)
scores (B,) Scores for each batch element to be overlayed in text on the
frame
ch: Which channel to overlay on.
Returns:
vid (B, H, W) with the score overlayed
"""
overlays = []
for batch_id in range(vid.shape[0]):
img = Image.new('1', vid.shape[1:][::-1], 0)
draw = ImageDraw.Draw(img)
draw.text((10, 10), '{:04f}'.format(scores[batch_id]), (1, ))
overlays.append(np.array(img)[::-1, :])
overlay = torch.LongTensor(np.stack(overlays)).to(vid.device)
vid = vid * (overlay == 0) + (overlay > 0) * ch
return vid | [
"def",
"overlay_pred_scores",
"(",
"vid",
",",
"scores",
",",
"ch",
"=",
"1",
")",
":",
"overlays",
"=",
"[",
"]",
"for",
"batch_id",
"in",
"range",
"(",
"vid",
".",
"shape",
"[",
"0",
"]",
")",
":",
"img",
"=",
"Image",
".",
"new",
"(",
"'1'",
",",
"vid",
".",
"shape",
"[",
"1",
":",
"]",
"[",
":",
":",
"-",
"1",
"]",
",",
"0",
")",
"draw",
"=",
"ImageDraw",
".",
"Draw",
"(",
"img",
")",
"draw",
".",
"text",
"(",
"(",
"10",
",",
"10",
")",
",",
"'{:04f}'",
".",
"format",
"(",
"scores",
"[",
"batch_id",
"]",
")",
",",
"(",
"1",
",",
")",
")",
"overlays",
".",
"append",
"(",
"np",
".",
"array",
"(",
"img",
")",
"[",
":",
":",
"-",
"1",
",",
":",
"]",
")",
"overlay",
"=",
"torch",
".",
"LongTensor",
"(",
"np",
".",
"stack",
"(",
"overlays",
")",
")",
".",
"to",
"(",
"vid",
".",
"device",
")",
"vid",
"=",
"vid",
"*",
"(",
"overlay",
"==",
"0",
")",
"+",
"(",
"overlay",
">",
"0",
")",
"*",
"ch",
"return",
"vid"
] | [
148,
0
] | [
166,
14
] | python | en | ['en', 'error', 'th'] | False |
compute_pixel_accuracy | (gt, pred) |
Args:
gt torch.Tensor(B, T, H, W)
pred torch.Tensor(B, T, H, W)
Returns:
acc torch.Tensor(B, phyre.NUM_COLORS)
|
Args:
gt torch.Tensor(B, T, H, W)
pred torch.Tensor(B, T, H, W)
Returns:
acc torch.Tensor(B, phyre.NUM_COLORS)
| def compute_pixel_accuracy(gt, pred):
"""
Args:
gt torch.Tensor(B, T, H, W)
pred torch.Tensor(B, T, H, W)
Returns:
acc torch.Tensor(B, phyre.NUM_COLORS)
"""
match = (gt == pred)
res = torch.zeros((
gt.shape[0],
phyre.NUM_COLORS,
))
for col in range(phyre.NUM_COLORS):
relevant = (gt == col)
res[:, col] = torch.sum(match * relevant, dim=(1, 2, 3)) / torch.sum(
relevant, dim=(1, 2, 3)).float()
return res | [
"def",
"compute_pixel_accuracy",
"(",
"gt",
",",
"pred",
")",
":",
"match",
"=",
"(",
"gt",
"==",
"pred",
")",
"res",
"=",
"torch",
".",
"zeros",
"(",
"(",
"gt",
".",
"shape",
"[",
"0",
"]",
",",
"phyre",
".",
"NUM_COLORS",
",",
")",
")",
"for",
"col",
"in",
"range",
"(",
"phyre",
".",
"NUM_COLORS",
")",
":",
"relevant",
"=",
"(",
"gt",
"==",
"col",
")",
"res",
"[",
":",
",",
"col",
"]",
"=",
"torch",
".",
"sum",
"(",
"match",
"*",
"relevant",
",",
"dim",
"=",
"(",
"1",
",",
"2",
",",
"3",
")",
")",
"/",
"torch",
".",
"sum",
"(",
"relevant",
",",
"dim",
"=",
"(",
"1",
",",
"2",
",",
"3",
")",
")",
".",
"float",
"(",
")",
"return",
"res"
] | [
169,
0
] | [
186,
14
] | python | en | ['en', 'error', 'th'] | False |
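
An illustrative call, assuming compute_pixel_accuracy from the record above is in scope; requires torch and phyre (phyre.NUM_COLORS is 7). Note that colors absent from the ground truth yield nan entries from the 0/0 division.

import torch

gt = torch.zeros((1, 2, 8, 8), dtype=torch.long)
pred = gt.clone()
pred[0, 0, 0, 0] = 1                # one mispredicted background pixel
acc = compute_pixel_accuracy(gt, pred)
print(acc[0, 0])                    # ~0.992 (127 of 128 pixels correct)
print(acc[0, 1])                    # nan: color 1 never appears in gt
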
store_frames | (frames, task_ids, outdir, subdir, actions) |
Args:
frames: (B, T, H, W)
outdir (path where to store all frames)
actions: (B, 3)
|
Args:
frames: (B, T, H, W)
outdir (path where to store all frames)
actions: (B, 3)
| def store_frames(frames, task_ids, outdir, subdir, actions):
"""
Args:
frames: (B, T, H, W)
outdir (path where to store all frames)
actions: (B, 3)
"""
assert frames.shape[0] == len(task_ids)
assert frames.shape[0] == actions.shape[0]
for i, task_id in enumerate(task_ids):
action = actions[i]
action_str = '{:.5f}_{:.5f}_{:.5f}'.format(action[0], action[1],
action[2])
template, _ = task_id.split(':')
this_outdir = os.path.join(outdir, 'eval_vis', template,
task_id + '_' + action_str, subdir)
os.makedirs(this_outdir, exist_ok=True)
all_rendered = []
for time_step in range(frames[i].shape[0]):
rendered = phyre.vis.observations_to_uint8_rgb(
frames[i][time_step])
# Storing individually was super slow!
# Image.fromarray(rendered).save(
# os.path.join(this_outdir, '%d.png' % time_step))
all_rendered.append(rendered)
imageio.mimwrite(os.path.join(this_outdir, 'combined.gif'),
all_rendered,
fps=2) | [
"def",
"store_frames",
"(",
"frames",
",",
"task_ids",
",",
"outdir",
",",
"subdir",
",",
"actions",
")",
":",
"assert",
"frames",
".",
"shape",
"[",
"0",
"]",
"==",
"len",
"(",
"task_ids",
")",
"assert",
"frames",
".",
"shape",
"[",
"0",
"]",
"==",
"actions",
".",
"shape",
"[",
"0",
"]",
"for",
"i",
",",
"task_id",
"in",
"enumerate",
"(",
"task_ids",
")",
":",
"action",
"=",
"actions",
"[",
"i",
"]",
"action_str",
"=",
"'{:.5f}_{:.5f}_{:.5f}'",
".",
"format",
"(",
"action",
"[",
"0",
"]",
",",
"action",
"[",
"1",
"]",
",",
"action",
"[",
"2",
"]",
")",
"template",
",",
"_",
"=",
"task_id",
".",
"split",
"(",
"':'",
")",
"this_outdir",
"=",
"os",
".",
"path",
".",
"join",
"(",
"outdir",
",",
"'eval_vis'",
",",
"template",
",",
"task_id",
"+",
"'_'",
"+",
"action_str",
",",
"subdir",
")",
"os",
".",
"makedirs",
"(",
"this_outdir",
",",
"exist_ok",
"=",
"True",
")",
"all_rendered",
"=",
"[",
"]",
"for",
"time_step",
"in",
"range",
"(",
"frames",
"[",
"i",
"]",
".",
"shape",
"[",
"0",
"]",
")",
":",
"rendered",
"=",
"phyre",
".",
"vis",
".",
"observations_to_uint8_rgb",
"(",
"frames",
"[",
"i",
"]",
"[",
"time_step",
"]",
")",
"# Storing individually was super slow!",
"# Image.fromarray(rendered).save(",
"# os.path.join(this_outdir, '%d.png' % time_step))",
"all_rendered",
".",
"append",
"(",
"rendered",
")",
"imageio",
".",
"mimwrite",
"(",
"os",
".",
"path",
".",
"join",
"(",
"this_outdir",
",",
"'combined.gif'",
")",
",",
"all_rendered",
",",
"fps",
"=",
"2",
")"
] | [
189,
0
] | [
216,
31
] | python | en | ['en', 'error', 'th'] | False |
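The record above writes one combined GIF per rollout instead of per-frame PNGs, since individual image writes were noted to be slow. A self-contained sketch of that pattern, using random frames and an assumed output path (imageio v2-style API):

import os
import imageio
import numpy as np

frames = [np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
          for _ in range(5)]  # stand-ins for rendered observations
out_dir = 'demo_out'  # assumed path, not taken from the record
os.makedirs(out_dir, exist_ok=True)
# One write for the whole clip is far cheaper than one file per frame.
imageio.mimwrite(os.path.join(out_dir, 'combined.gif'), frames, fps=2)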
ImgTrainer.load_agent_from_folder | (cls,
model: NeuralModel,
agent_folder: str,
strict: bool = True) |
This loader is used in the offline_agents code, to load at test time.
|
This loader is used in the offline_agents code, to load at test time.
| def load_agent_from_folder(cls,
model: NeuralModel,
agent_folder: str,
strict: bool = True) -> NeuralModel:
"""
This loader is used in the offline_agents code, to load at test time.
"""
last_checkpoint = get_latest_checkpoint(agent_folder)
assert last_checkpoint is not None, agent_folder
logging.info('Loading a model from: %s', last_checkpoint)
last_checkpoint = torch.load(last_checkpoint)
missing_keys, unexp_keys = model.load_state_dict(
last_checkpoint['model'], strict=strict)
logging.warning('Could not init: %s', missing_keys)
logging.warning('Unused keys in ckpt: %s', unexp_keys)
model.to(nets.DEVICE)
return model | [
"def",
"load_agent_from_folder",
"(",
"cls",
",",
"model",
":",
"NeuralModel",
",",
"agent_folder",
":",
"str",
",",
"strict",
":",
"bool",
"=",
"True",
")",
"->",
"NeuralModel",
":",
"last_checkpoint",
"=",
"get_latest_checkpoint",
"(",
"agent_folder",
")",
"assert",
"last_checkpoint",
"is",
"not",
"None",
",",
"agent_folder",
"logging",
".",
"info",
"(",
"'Loading a model from: %s'",
",",
"last_checkpoint",
")",
"last_checkpoint",
"=",
"torch",
".",
"load",
"(",
"last_checkpoint",
")",
"missing_keys",
",",
"unexp_keys",
"=",
"model",
".",
"load_state_dict",
"(",
"last_checkpoint",
"[",
"'model'",
"]",
",",
"strict",
"=",
"strict",
")",
"logging",
".",
"warning",
"(",
"'Could not init: %s'",
",",
"missing_keys",
")",
"logging",
".",
"warning",
"(",
"'Unused keys in ckpt: %s'",
",",
"unexp_keys",
")",
"model",
".",
"to",
"(",
"nets",
".",
"DEVICE",
")",
"return",
"model"
] | [
221,
4
] | [
237,
20
] | python | en | ['en', 'error', 'th'] | False |
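load_agent_from_folder depends on a get_latest_checkpoint helper that is not part of this record. A plausible sketch, assuming checkpoints are saved as iteration-numbered .pth files (the real naming scheme is not shown here):

import glob
import os
from typing import Optional

def get_latest_checkpoint(agent_folder: str) -> Optional[str]:
    # Hypothetical helper: pick the checkpoint whose file name encodes the
    # highest iteration number, or None if the folder holds no checkpoints.
    ckpts = glob.glob(os.path.join(agent_folder, '*.pth'))
    if not ckpts:
        return None
    return max(ckpts,
               key=lambda p: int(os.path.splitext(os.path.basename(p))[0]))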
ImgTrainer.gen_model | (cls, cfg) | Generate the random init model. | Generate the random init model. | def gen_model(cls, cfg):
"""Generate the random init model."""
model = nets.Fwd(agent_cfg=cfg.agent)
assert cfg.num_gpus <= torch.cuda.device_count()
model = torch.nn.DataParallel(model,
device_ids=list(range(cfg.num_gpus)))
return model | [
"def",
"gen_model",
"(",
"cls",
",",
"cfg",
")",
":",
"model",
"=",
"nets",
".",
"Fwd",
"(",
"agent_cfg",
"=",
"cfg",
".",
"agent",
")",
"assert",
"cfg",
".",
"num_gpus",
"<=",
"torch",
".",
"cuda",
".",
"device_count",
"(",
")",
"model",
"=",
"torch",
".",
"nn",
".",
"DataParallel",
"(",
"model",
",",
"device_ids",
"=",
"list",
"(",
"range",
"(",
"cfg",
".",
"num_gpus",
")",
")",
")",
"return",
"model"
] | [
240,
4
] | [
246,
20
] | python | en | ['en', 'ms', 'en'] | True |
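gen_model wraps the network in DataParallel after checking the GPU count. A stand-alone sketch of the same wrapping logic, with a toy module standing in for nets.Fwd and the hydra config:

import torch

num_gpus = torch.cuda.device_count()  # 0 on CPU-only machines
model = torch.nn.Linear(16, 4)        # stand-in for nets.Fwd(agent_cfg=...)
if num_gpus > 0:
    # Replicates the module across the listed devices; inputs are split
    # along the batch dimension at forward time.
    model = torch.nn.DataParallel(model, device_ids=list(range(num_gpus)))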
ImgTrainer.train | (cls, model, dataset, output_dir, summary_writer,
full_eval_from_model, cfg) | Main train function. | Main train function. | def train(cls, model, dataset, output_dir, summary_writer,
full_eval_from_model, cfg):
"""Main train function."""
updates = cfg.train.num_iter
report_every = cfg.train.report_every
save_checkpoints_every = cfg.train.save_checkpoints_every
full_eval_every = cfg.train.full_eval_every
train_batch_size = cfg.train.batch_size
max_frames_fwd = cfg.train.frames_per_clip
n_hist_frames = cfg.train.n_hist_frames # Frames used to predict the future
loss_cfg = cfg.train.loss
opt_params = cfg.opt
# action_tier_name = cfg.tier
n_fwd_times = cfg.train.n_fwd_times
n_fwd_times_incur_loss = cfg.train.n_fwd_times_incur_loss
run_decode = cfg.train.run_decode
train_modules_subset = cfg.train.modules_to_train
# nslices (slice out the input for training)
num_slices = cfg.train.num_slices
if max_frames_fwd is not None and (max_frames_fwd <= n_hist_frames):
logging.warning(
'Cannot train prediction model, max_frames_fwd too low')
assert loss_cfg.wt_pix == 0 or run_decode is True, (
'If the loss is non zero, the decoder should be running')
# logging.info('Creating eval subset from train')
# eval_train = create_balanced_eval_set(cache, dataset.task_ids,
# XE_EVAL_SIZE, action_tier_name)
# if dev_tasks_ids is not None:
# logging.info('Creating eval subset from dev')
# eval_dev = create_balanced_eval_set(cache, dev_tasks_ids, XE_EVAL_SIZE,
# action_tier_name)
# else:
# eval_dev = None
device = nets.DEVICE
model.train()
model.to(device)
logging.info("%s", model)
params_to_train = []
if train_modules_subset is not None:
mod_names = train_modules_subset.split('%')
logging.warning(
'Training only a few modules, listed below. NOTE: '
'BNs/dropout will still be in train mode. Explicitly '
'set those to eval mode if that is not desired.')
for mod_name in mod_names:
mod = getattr(model.module, mod_name)
logging.warning('Training %s: %s', mod_name, mod)
params_to_train.extend(mod.parameters())
else:
params_to_train = model.parameters()
optimizer = hydra.utils.instantiate(opt_params, params_to_train)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
T_max=updates)
logging.info('Starting actual training for %d updates', updates)
last_checkpoint = get_latest_checkpoint(output_dir)
batch_start = 0 # By default, starting from iteration 0, unless loading a model
if last_checkpoint is not None:
logging.info('Going to load from %s', last_checkpoint)
last_checkpoint = torch.load(last_checkpoint)
model.load_state_dict(last_checkpoint['model'])
optimizer.load_state_dict(last_checkpoint['optim'])
# Subtracting 1 since we store batch_id + 1 when calling save_agent
batch_start = last_checkpoint['done_batches'] - 1
if scheduler is not None:
scheduler.load_state_dict(last_checkpoint['scheduler'])
def run_full_eval(batch_id):
logging.info('Running full eval')
results = {} # To store to a json
eval_stats = full_eval_from_model(model)
metric = eval_stats.compute_all_metrics()
results['metrics'] = metric
results[
'metrics_rollout'] = eval_stats.compute_all_metrics_over_rollout(
)
results[
'metrics_per_task'] = eval_stats.compute_all_metrics_per_task(
)
max_test_attempts_per_task = (cfg.max_test_attempts_per_task
or phyre.MAX_TEST_ATTEMPTS)
results['parsed_args'] = dict(
# cfg=cfg, # Not json serializable, anyway will be stored in dir
main_kwargs=dict(
eval_setup_name=cfg.eval_setup_name,
fold_id=cfg.fold_id,
use_test_split=cfg.use_test_split,
agent_type=cfg.agent.type,
max_test_attempts_per_task=max_test_attempts_per_task,
output_dir=output_dir))
results['target_metric'] = (
results['metrics']['independent_solved_by_aucs']
[max_test_attempts_per_task])
results['target_metric_over_time'] = [
el['independent_solved_by_aucs'][max_test_attempts_per_task]
for el in results['metrics_rollout']
]
logging.info('Iter %d: %s; Over rollout: %s', (batch_id + 1),
results['target_metric'],
results['target_metric_over_time'])
score = metric['independent_solved_by_aucs'][-1]
summary_writer.add_scalar('FullEval/AUCCESS', score, batch_id + 1)
for solved_by_iter in metric['global_solved_by']:
summary_writer.add_scalar(
'FullEval/solved_by_{}'.format(solved_by_iter),
metric['global_solved_by'][solved_by_iter], batch_id + 1)
logging.info('Full eval perf @ %d: %s', batch_id + 1, score)
for i, metric in enumerate(results['metrics_rollout']):
summary_writer.add_scalar(
'FullEvalRollout/AUCCESS/{}'.format(i + 1),
metric['independent_solved_by_aucs'][-1], batch_id + 1)
summary_writer.add_scalar(
'FullEvalRollout/solved_by_100/{}'.format(i),
metric['global_solved_by'][100], batch_id + 1)
respath = os.path.join(
output_dir,
'results_intermediate/{:08d}.json'.format(batch_id + 1))
os.makedirs(os.path.dirname(respath), exist_ok=True)
with open(respath, 'w') as fout:
json.dump(results, fout)
logging.info('Report every %d; full eval every %d', report_every,
full_eval_every)
if save_checkpoints_every > full_eval_every:
save_checkpoints_every -= save_checkpoints_every % full_eval_every
losses_report = {}
last_time = time.time()
assert train_batch_size > 1 and train_batch_size % 2 == 0, (
'Needs to get 2 elements at least to balance out')
for batch_data_id, batch_data in enumerate(
torch.utils.data.DataLoader(
dataset,
num_workers=get_num_workers(
cfg.train.data_loader.num_workers,
dataset.frames_per_clip),
pin_memory=False,
# Asking for half the batch size since the dataloader is designed
# to give 2 elements per batch (for class balancing)
batch_size=train_batch_size // 2)):
# When the training restarts, it resets to the start of the data loader
batch_id = batch_data_id + batch_start
if (batch_id + 1) >= updates:
save_agent(output_dir, batch_id + 1, model, optimizer,
scheduler)
break
model.train()
batch_is_solved = batch_data['is_solved']
batch_is_solved = batch_is_solved.to(device, non_blocking=True)
batch_is_solved = batch_is_solved.reshape((-1, ))
batch_vid_obs = batch_data['vid_obs']
batch_vid_obs = batch_vid_obs.reshape(
[-1] + list(batch_vid_obs.shape[2:]))
batch_vid_obs = batch_vid_obs.to(device)
# Run the forward image model on the video
_, batch_losses = model.forward(
batch_vid_obs,
batch_is_solved,
n_hist_frames=n_hist_frames,
n_fwd_times=n_fwd_times,
n_fwd_times_incur_loss=n_fwd_times_incur_loss,
run_decode=run_decode,
compute_losses=True,
need_intermediate=loss_cfg.on_intermediate,
autoenc_loss_ratio=loss_cfg.autoenc_loss_ratio,
nslices=num_slices)
optimizer.zero_grad()
total_loss = 0
# Mean over each loss type from each replica
for loss_type in batch_losses:
loss_wt = getattr(loss_cfg, 'wt_' + loss_type)
if loss_wt <= 0:
continue
loss_val = loss_wt * torch.mean(batch_losses[loss_type], dim=0)
if loss_type not in losses_report:
losses_report[loss_type] = []
losses_report[loss_type].append(loss_val.item())
total_loss += loss_val
total_loss.backward()
optimizer.step()
if (save_checkpoints_every > 0
and (batch_id + 1) % save_checkpoints_every == 0):
save_agent(output_dir, batch_id + 1, model, optimizer,
scheduler)
# Removing intermediate eval since it doesn't seem very useful, using the
# full eval for now.
# if (batch_id + 1) % eval_every == 0:
# print_eval_stats(batch_id)
if (batch_id + 1) % report_every == 0:
speed = report_every / (time.time() - last_time)
last_time = time.time()
loss_stats = {
typ: np.mean(losses_report[typ][-report_every:])
for typ in losses_report if len(losses_report[typ]) > 0
}
logging.info(
'Iter: %s, examples: %d, mean loss: %s, speed: %.1f batch/sec,'
' lr: %f', batch_id + 1, (batch_id + 1) * train_batch_size,
loss_stats, speed, get_lr(optimizer))
for typ in loss_stats:
summary_writer.add_scalar('Loss/{}'.format(typ),
loss_stats[typ], batch_id + 1)
summary_writer.add_scalar('Loss/Total',
sum(loss_stats.values()),
batch_id + 1)
summary_writer.add_scalar('LR', get_lr(optimizer),
batch_id + 1)
summary_writer.add_scalar('Speed', speed, batch_id + 1)
# Add a histogram of the batch task IDs, to make sure it picks a
# variety of task
batch_templates = np.array(
dataset.task_ids)[batch_data['task_indices'].reshape(
(-1, ))].tolist()
batch_templates = np.array(
[int(el.split(':')[0]) for el in batch_templates])
gpu_mem_max = max([
torch.cuda.max_memory_allocated(device=i)
for i in range(torch.cuda.device_count())
])
summary_writer.add_scalar('GPU/Mem/Max', gpu_mem_max,
batch_id + 1)
summary_writer.add_histogram('Templates',
batch_templates,
global_step=(batch_id + 1),
bins=25)
# Visualize a couple train videos, and actual rollouts if pix is
# being trained
# Just visualizing the first 256 videos in case the batch size is
# larger; somehow the visualizations get corrupted (grey bg) for
# more. Also no point filling up the memory.
# Storing less frequently than the rest of the logs (takes lot of space)
if n_fwd_times > 0 and (batch_id + 1) % (report_every * 10) == 0:
summary_writer.add_video(
'InputAndRollout/train',
gen_vis_vid_preds(batch_vid_obs[:256],
model,
n_fwd_times=None,
run_decode=run_decode,
n_hist_frames=n_hist_frames),
(batch_id + 1))
if (batch_id + 1) % full_eval_every == 0:
run_full_eval(batch_id)
if scheduler is not None:
scheduler.step()
return model.cpu() | [
"def",
"train",
"(",
"cls",
",",
"model",
",",
"dataset",
",",
"output_dir",
",",
"summary_writer",
",",
"full_eval_from_model",
",",
"cfg",
")",
":",
"updates",
"=",
"cfg",
".",
"train",
".",
"num_iter",
"report_every",
"=",
"cfg",
".",
"train",
".",
"report_every",
"save_checkpoints_every",
"=",
"cfg",
".",
"train",
".",
"save_checkpoints_every",
"full_eval_every",
"=",
"cfg",
".",
"train",
".",
"full_eval_every",
"train_batch_size",
"=",
"cfg",
".",
"train",
".",
"batch_size",
"max_frames_fwd",
"=",
"cfg",
".",
"train",
".",
"frames_per_clip",
"n_hist_frames",
"=",
"cfg",
".",
"train",
".",
"n_hist_frames",
"# Frames used to predict the future",
"loss_cfg",
"=",
"cfg",
".",
"train",
".",
"loss",
"opt_params",
"=",
"cfg",
".",
"opt",
"# action_tier_name = cfg.tier",
"n_fwd_times",
"=",
"cfg",
".",
"train",
".",
"n_fwd_times",
"n_fwd_times_incur_loss",
"=",
"cfg",
".",
"train",
".",
"n_fwd_times_incur_loss",
"run_decode",
"=",
"cfg",
".",
"train",
".",
"run_decode",
"train_modules_subset",
"=",
"cfg",
".",
"train",
".",
"modules_to_train",
"# nslices (slice out the input for training)",
"num_slices",
"=",
"cfg",
".",
"train",
".",
"num_slices",
"if",
"max_frames_fwd",
"is",
"not",
"None",
"and",
"(",
"max_frames_fwd",
"<=",
"n_hist_frames",
")",
":",
"logging",
".",
"warning",
"(",
"'Cant train prediction model, max_frames_fwd too low'",
")",
"assert",
"loss_cfg",
".",
"wt_pix",
"==",
"0",
"or",
"run_decode",
"is",
"True",
",",
"(",
"'If the loss is non zero, the decoder should be running'",
")",
"# logging.info('Creating eval subset from train')",
"# eval_train = create_balanced_eval_set(cache, dataset.task_ids,",
"# XE_EVAL_SIZE, action_tier_name)",
"# if dev_tasks_ids is not None:",
"# logging.info('Creating eval subset from dev')",
"# eval_dev = create_balanced_eval_set(cache, dev_tasks_ids, XE_EVAL_SIZE,",
"# action_tier_name)",
"# else:",
"# eval_dev = None",
"device",
"=",
"nets",
".",
"DEVICE",
"model",
".",
"train",
"(",
")",
"model",
".",
"to",
"(",
"device",
")",
"logging",
".",
"info",
"(",
"\"%s\"",
",",
"model",
")",
"params_to_train",
"=",
"[",
"]",
"if",
"train_modules_subset",
"is",
"not",
"None",
":",
"mod_names",
"=",
"train_modules_subset",
".",
"split",
"(",
"'%'",
")",
"logging",
".",
"warning",
"(",
"'Training only a few modules, listed below. NOTE: '",
"'BNs/dropout will still be in train mode. Explicitly '",
"'set those to eval mode if thats not desired.'",
")",
"for",
"mod_name",
"in",
"mod_names",
":",
"mod",
"=",
"getattr",
"(",
"model",
".",
"module",
",",
"mod_name",
")",
"logging",
".",
"warning",
"(",
"'Training %s: %s'",
",",
"mod_name",
",",
"mod",
")",
"params_to_train",
".",
"extend",
"(",
"mod",
".",
"parameters",
"(",
")",
")",
"else",
":",
"params_to_train",
"=",
"model",
".",
"parameters",
"(",
")",
"optimizer",
"=",
"hydra",
".",
"utils",
".",
"instantiate",
"(",
"opt_params",
",",
"params_to_train",
")",
"scheduler",
"=",
"torch",
".",
"optim",
".",
"lr_scheduler",
".",
"CosineAnnealingLR",
"(",
"optimizer",
",",
"T_max",
"=",
"updates",
")",
"logging",
".",
"info",
"(",
"'Starting actual training for %d updates'",
",",
"updates",
")",
"last_checkpoint",
"=",
"get_latest_checkpoint",
"(",
"output_dir",
")",
"batch_start",
"=",
"0",
"# By default, starting from iteration 0, unles loading mdl",
"if",
"last_checkpoint",
"is",
"not",
"None",
":",
"logging",
".",
"info",
"(",
"'Going to load from %s'",
",",
"last_checkpoint",
")",
"last_checkpoint",
"=",
"torch",
".",
"load",
"(",
"last_checkpoint",
")",
"model",
".",
"load_state_dict",
"(",
"last_checkpoint",
"[",
"'model'",
"]",
")",
"optimizer",
".",
"load_state_dict",
"(",
"last_checkpoint",
"[",
"'optim'",
"]",
")",
"# Subtracting 1 since we store batch_id + 1 when calling save_agent",
"batch_start",
"=",
"last_checkpoint",
"[",
"'done_batches'",
"]",
"-",
"1",
"if",
"scheduler",
"is",
"not",
"None",
":",
"scheduler",
".",
"load_state_dict",
"(",
"last_checkpoint",
"[",
"'scheduler'",
"]",
")",
"def",
"run_full_eval",
"(",
"batch_id",
")",
":",
"logging",
".",
"info",
"(",
"'Running full eval'",
")",
"results",
"=",
"{",
"}",
"# To store to a json",
"eval_stats",
"=",
"full_eval_from_model",
"(",
"model",
")",
"metric",
"=",
"eval_stats",
".",
"compute_all_metrics",
"(",
")",
"results",
"[",
"'metrics'",
"]",
"=",
"metric",
"results",
"[",
"'metrics_rollout'",
"]",
"=",
"eval_stats",
".",
"compute_all_metrics_over_rollout",
"(",
")",
"results",
"[",
"'metrics_per_task'",
"]",
"=",
"eval_stats",
".",
"compute_all_metrics_per_task",
"(",
")",
"max_test_attempts_per_task",
"=",
"(",
"cfg",
".",
"max_test_attempts_per_task",
"or",
"phyre",
".",
"MAX_TEST_ATTEMPTS",
")",
"results",
"[",
"'parsed_args'",
"]",
"=",
"dict",
"(",
"# cfg=cfg, # Not json serializable, anyway will be stored in dir",
"main_kwargs",
"=",
"dict",
"(",
"eval_setup_name",
"=",
"cfg",
".",
"eval_setup_name",
",",
"fold_id",
"=",
"cfg",
".",
"fold_id",
",",
"use_test_split",
"=",
"cfg",
".",
"use_test_split",
",",
"agent_type",
"=",
"cfg",
".",
"agent",
".",
"type",
",",
"max_test_attempts_per_task",
"=",
"max_test_attempts_per_task",
",",
"output_dir",
"=",
"output_dir",
")",
")",
"results",
"[",
"'target_metric'",
"]",
"=",
"(",
"results",
"[",
"'metrics'",
"]",
"[",
"'independent_solved_by_aucs'",
"]",
"[",
"max_test_attempts_per_task",
"]",
")",
"results",
"[",
"'target_metric_over_time'",
"]",
"=",
"[",
"el",
"[",
"'independent_solved_by_aucs'",
"]",
"[",
"max_test_attempts_per_task",
"]",
"for",
"el",
"in",
"results",
"[",
"'metrics_rollout'",
"]",
"]",
"logging",
".",
"info",
"(",
"'Iter %d: %s; Over rollout: %s'",
",",
"(",
"batch_id",
"+",
"1",
")",
",",
"results",
"[",
"'target_metric'",
"]",
",",
"results",
"[",
"'target_metric_over_time'",
"]",
")",
"score",
"=",
"metric",
"[",
"'independent_solved_by_aucs'",
"]",
"[",
"-",
"1",
"]",
"summary_writer",
".",
"add_scalar",
"(",
"'FullEval/AUCCESS'",
",",
"score",
",",
"batch_id",
"+",
"1",
")",
"for",
"solved_by_iter",
"in",
"metric",
"[",
"'global_solved_by'",
"]",
":",
"summary_writer",
".",
"add_scalar",
"(",
"'FullEval/solved_by_{}'",
".",
"format",
"(",
"solved_by_iter",
")",
",",
"metric",
"[",
"'global_solved_by'",
"]",
"[",
"solved_by_iter",
"]",
",",
"batch_id",
"+",
"1",
")",
"logging",
".",
"info",
"(",
"'Full eval perf @ %d: %s'",
",",
"batch_id",
"+",
"1",
",",
"score",
")",
"for",
"i",
",",
"metric",
"in",
"enumerate",
"(",
"results",
"[",
"'metrics_rollout'",
"]",
")",
":",
"summary_writer",
".",
"add_scalar",
"(",
"'FullEvalRollout/AUCCESS/{}'",
".",
"format",
"(",
"i",
"+",
"1",
")",
",",
"metric",
"[",
"'independent_solved_by_aucs'",
"]",
"[",
"-",
"1",
"]",
",",
"batch_id",
"+",
"1",
")",
"summary_writer",
".",
"add_scalar",
"(",
"'FullEvalRollout/solved_by_100/{}'",
".",
"format",
"(",
"i",
")",
",",
"metric",
"[",
"'global_solved_by'",
"]",
"[",
"100",
"]",
",",
"batch_id",
"+",
"1",
")",
"respath",
"=",
"os",
".",
"path",
".",
"join",
"(",
"output_dir",
",",
"'results_intermediate/{:08d}.json'",
".",
"format",
"(",
"batch_id",
"+",
"1",
")",
")",
"os",
".",
"makedirs",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"respath",
")",
",",
"exist_ok",
"=",
"True",
")",
"with",
"open",
"(",
"respath",
",",
"'w'",
")",
"as",
"fout",
":",
"json",
".",
"dump",
"(",
"results",
",",
"fout",
")",
"logging",
".",
"info",
"(",
"'Report every %d; full eval every %d'",
",",
"report_every",
",",
"full_eval_every",
")",
"if",
"save_checkpoints_every",
">",
"full_eval_every",
":",
"save_checkpoints_every",
"-=",
"save_checkpoints_every",
"%",
"full_eval_every",
"losses_report",
"=",
"{",
"}",
"last_time",
"=",
"time",
".",
"time",
"(",
")",
"assert",
"train_batch_size",
">",
"1",
"and",
"train_batch_size",
"%",
"2",
"==",
"0",
",",
"(",
"'Needs to get 2 elements at least to balance out'",
")",
"for",
"batch_data_id",
",",
"batch_data",
"in",
"enumerate",
"(",
"torch",
".",
"utils",
".",
"data",
".",
"DataLoader",
"(",
"dataset",
",",
"num_workers",
"=",
"get_num_workers",
"(",
"cfg",
".",
"train",
".",
"data_loader",
".",
"num_workers",
",",
"dataset",
".",
"frames_per_clip",
")",
",",
"pin_memory",
"=",
"False",
",",
"# Asking for half the batch size since the dataloader is designed",
"# to give 2 elements per batch (for class balancing)",
"batch_size",
"=",
"train_batch_size",
"//",
"2",
")",
")",
":",
"# When the training restarts, it resets to the start of the data loader",
"batch_id",
"=",
"batch_data_id",
"+",
"batch_start",
"if",
"(",
"batch_id",
"+",
"1",
")",
">=",
"updates",
":",
"save_agent",
"(",
"output_dir",
",",
"batch_id",
"+",
"1",
",",
"model",
",",
"optimizer",
",",
"scheduler",
")",
"break",
"model",
".",
"train",
"(",
")",
"batch_is_solved",
"=",
"batch_data",
"[",
"'is_solved'",
"]",
"batch_is_solved",
"=",
"batch_is_solved",
".",
"to",
"(",
"device",
",",
"non_blocking",
"=",
"True",
")",
"batch_is_solved",
"=",
"batch_is_solved",
".",
"reshape",
"(",
"(",
"-",
"1",
",",
")",
")",
"batch_vid_obs",
"=",
"batch_data",
"[",
"'vid_obs'",
"]",
"batch_vid_obs",
"=",
"batch_vid_obs",
".",
"reshape",
"(",
"[",
"-",
"1",
"]",
"+",
"list",
"(",
"batch_vid_obs",
".",
"shape",
"[",
"2",
":",
"]",
")",
")",
"batch_vid_obs",
"=",
"batch_vid_obs",
".",
"to",
"(",
"device",
")",
"# Run the forward image model on the video",
"_",
",",
"batch_losses",
"=",
"model",
".",
"forward",
"(",
"batch_vid_obs",
",",
"batch_is_solved",
",",
"n_hist_frames",
"=",
"n_hist_frames",
",",
"n_fwd_times",
"=",
"n_fwd_times",
",",
"n_fwd_times_incur_loss",
"=",
"n_fwd_times_incur_loss",
",",
"run_decode",
"=",
"run_decode",
",",
"compute_losses",
"=",
"True",
",",
"need_intermediate",
"=",
"loss_cfg",
".",
"on_intermediate",
",",
"autoenc_loss_ratio",
"=",
"loss_cfg",
".",
"autoenc_loss_ratio",
",",
"nslices",
"=",
"num_slices",
")",
"optimizer",
".",
"zero_grad",
"(",
")",
"total_loss",
"=",
"0",
"# Mean over each loss type from each replica",
"for",
"loss_type",
"in",
"batch_losses",
":",
"loss_wt",
"=",
"getattr",
"(",
"loss_cfg",
",",
"'wt_'",
"+",
"loss_type",
")",
"if",
"loss_wt",
"<=",
"0",
":",
"continue",
"loss_val",
"=",
"loss_wt",
"*",
"torch",
".",
"mean",
"(",
"batch_losses",
"[",
"loss_type",
"]",
",",
"dim",
"=",
"0",
")",
"if",
"loss_type",
"not",
"in",
"losses_report",
":",
"losses_report",
"[",
"loss_type",
"]",
"=",
"[",
"]",
"losses_report",
"[",
"loss_type",
"]",
".",
"append",
"(",
"loss_val",
".",
"item",
"(",
")",
")",
"total_loss",
"+=",
"loss_val",
"total_loss",
".",
"backward",
"(",
")",
"optimizer",
".",
"step",
"(",
")",
"if",
"(",
"save_checkpoints_every",
">",
"0",
"and",
"(",
"batch_id",
"+",
"1",
")",
"%",
"save_checkpoints_every",
"==",
"0",
")",
":",
"save_agent",
"(",
"output_dir",
",",
"batch_id",
"+",
"1",
",",
"model",
",",
"optimizer",
",",
"scheduler",
")",
"# Removing intermediate eval since it doesnt seem very useful, using the",
"# full eval for now.",
"# if (batch_id + 1) % eval_every == 0:",
"# print_eval_stats(batch_id)",
"if",
"(",
"batch_id",
"+",
"1",
")",
"%",
"report_every",
"==",
"0",
":",
"speed",
"=",
"report_every",
"/",
"(",
"time",
".",
"time",
"(",
")",
"-",
"last_time",
")",
"last_time",
"=",
"time",
".",
"time",
"(",
")",
"loss_stats",
"=",
"{",
"typ",
":",
"np",
".",
"mean",
"(",
"losses_report",
"[",
"typ",
"]",
"[",
"-",
"report_every",
":",
"]",
")",
"for",
"typ",
"in",
"losses_report",
"if",
"len",
"(",
"losses_report",
"[",
"typ",
"]",
")",
">",
"0",
"}",
"logging",
".",
"info",
"(",
"'Iter: %s, examples: %d, mean loss: %s, speed: %.1f batch/sec,'",
"' lr: %f'",
",",
"batch_id",
"+",
"1",
",",
"(",
"batch_id",
"+",
"1",
")",
"*",
"train_batch_size",
",",
"loss_stats",
",",
"speed",
",",
"get_lr",
"(",
"optimizer",
")",
")",
"for",
"typ",
"in",
"loss_stats",
":",
"summary_writer",
".",
"add_scalar",
"(",
"'Loss/{}'",
".",
"format",
"(",
"typ",
")",
",",
"loss_stats",
"[",
"typ",
"]",
",",
"batch_id",
"+",
"1",
")",
"summary_writer",
".",
"add_scalar",
"(",
"'Loss/Total'",
",",
"sum",
"(",
"loss_stats",
".",
"values",
"(",
")",
")",
",",
"batch_id",
"+",
"1",
")",
"summary_writer",
".",
"add_scalar",
"(",
"'LR'",
",",
"get_lr",
"(",
"optimizer",
")",
",",
"batch_id",
"+",
"1",
")",
"summary_writer",
".",
"add_scalar",
"(",
"'Speed'",
",",
"speed",
",",
"batch_id",
"+",
"1",
")",
"# Add a histogram of the batch task IDs, to make sure it picks a",
"# variety of task",
"batch_templates",
"=",
"np",
".",
"array",
"(",
"dataset",
".",
"task_ids",
")",
"[",
"batch_data",
"[",
"'task_indices'",
"]",
".",
"reshape",
"(",
"(",
"-",
"1",
",",
")",
")",
"]",
".",
"tolist",
"(",
")",
"batch_templates",
"=",
"np",
".",
"array",
"(",
"[",
"int",
"(",
"el",
".",
"split",
"(",
"':'",
")",
"[",
"0",
"]",
")",
"for",
"el",
"in",
"batch_templates",
"]",
")",
"gpu_mem_max",
"=",
"max",
"(",
"[",
"torch",
".",
"cuda",
".",
"max_memory_allocated",
"(",
"device",
"=",
"i",
")",
"for",
"i",
"in",
"range",
"(",
"torch",
".",
"cuda",
".",
"device_count",
"(",
")",
")",
"]",
")",
"summary_writer",
".",
"add_scalar",
"(",
"'GPU/Mem/Max'",
",",
"gpu_mem_max",
",",
"batch_id",
"+",
"1",
")",
"summary_writer",
".",
"add_histogram",
"(",
"'Templates'",
",",
"batch_templates",
",",
"global_step",
"=",
"(",
"batch_id",
"+",
"1",
")",
",",
"bins",
"=",
"25",
")",
"# Visualize a couple train videos, and actual rollouts if pix is",
"# being trained",
"# Just visualizing the first 256 videos in case the batch size is",
"# larger; somehow the visualizations get corrupted (grey bg) for",
"# more. Also no point filling up the memory.",
"# Storing less frequently than the rest of the logs (takes lot of space)",
"if",
"n_fwd_times",
">",
"0",
"and",
"(",
"batch_id",
"+",
"1",
")",
"%",
"(",
"report_every",
"*",
"10",
")",
"==",
"0",
":",
"summary_writer",
".",
"add_video",
"(",
"'InputAndRollout/train'",
",",
"gen_vis_vid_preds",
"(",
"batch_vid_obs",
"[",
":",
"256",
"]",
",",
"model",
",",
"n_fwd_times",
"=",
"None",
",",
"run_decode",
"=",
"run_decode",
",",
"n_hist_frames",
"=",
"n_hist_frames",
")",
",",
"(",
"batch_id",
"+",
"1",
")",
")",
"if",
"(",
"batch_id",
"+",
"1",
")",
"%",
"full_eval_every",
"==",
"0",
":",
"run_full_eval",
"(",
"batch_id",
")",
"if",
"scheduler",
"is",
"not",
"None",
":",
"scheduler",
".",
"step",
"(",
")",
"return",
"model",
".",
"cpu",
"(",
")"
] | [
249,
4
] | [
501,
26
] | python | en | ['en', 'ja', 'en'] | True |
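The resume block in train() reads 'model', 'optim', 'scheduler', and 'done_batches' from the latest checkpoint, but the save_agent counterpart is not included in this record. A sketch that stores the same keys; the file-naming convention is an assumption:

import os
import torch

def save_agent(output_dir, done_batches, model, optimizer, scheduler):
    # Hypothetical counterpart to the resume logic above; the keys must
    # match what train() loads back.
    state = {
        'model': model.state_dict(),
        'optim': optimizer.state_dict(),
        'scheduler': scheduler.state_dict() if scheduler is not None else None,
        'done_batches': done_batches,
    }
    os.makedirs(output_dir, exist_ok=True)
    torch.save(state, os.path.join(output_dir,
                                   '{:08d}.pth'.format(done_batches)))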
ImgTrainer.eval_actions | (cls, model, dataset, nactionsXtasks, batch_size, cfg) | Evaluate likelihood of actions solving the task. | Evaluate likelihood of actions solving the task. | def eval_actions(cls, model, dataset, nactionsXtasks, batch_size, cfg):
"""Evaluate likelihood of actions solving the task."""
init_frames_to_sim = cfg.eval.init_frames_to_sim # Simulate this many initial frames
n_hist_frames = cfg.eval.n_hist_frames
n_fwd_times = cfg.eval.n_fwd_times
store_vis = cfg.eval.store_vis
train_run_decode = cfg.train.run_decode
assert init_frames_to_sim >= n_hist_frames, 'Need at least n_hist_frames to start prediction'
def pad_tensor(tensor: torch.Tensor, sz: int):
"""
Pad the tensor's bottom (along batch dim), using the last element,
sz times.
"""
bottom_tensor_rep = [tensor[-1:, ...]] * sz
return torch.cat([tensor] + bottom_tensor_rep, dim=0)
def unpad_tensor(tensor: torch.Tensor, sz: int):
return tensor[:tensor.shape[0] - sz, ...]
# Clear the directory of older vis if any
out_dir = 'vis/'
if store_vis:
logging.warning('Removing older vis from %s/%s', os.getcwd(),
out_dir)
subprocess.call(f'rm {out_dir}/*.gif', shell=True)
subprocess.call(f'rm -r {out_dir}/eval_vis', shell=True)
if os.path.exists('temp_storage'):
final_scores = torch.load(os.path.join('temp_storage', 'final_scores.pt'))
final_actions = torch.load(os.path.join('temp_storage', 'final_actions.pt'))
final_task_indices = torch.load(os.path.join('temp_storage', 'final_task_indices.pt'))
final_pixel_accs = torch.load(os.path.join('temp_storage', 'final_pixel_accs.pt'))
else:
scores = []
actions = []
task_indices = []
pixel_accs = [] # If generating output, how accurately did we do
with torch.no_grad():
model.eval()
for batch_id, batch_data in enumerate(
tqdm(torch.utils.data.DataLoader(
dataset,
num_workers=get_num_workers(
cfg.eval.data_loader.num_workers,
dataset.frames_per_clip),
pin_memory=False,
batch_size=batch_size,
drop_last=False),
desc='All tasks X actions batches',
total=nactionsXtasks // batch_size)):
batch_task_indices = batch_data['task_indices']
print(batch_id)
batch_vid_obs = batch_data[
f'{cfg.agent.input_space}_obs'].squeeze(1)
batch_vid_obs_orig = batch_data[
f'{cfg.agent.input_space}_obs_orig'].squeeze(1)
batch_actions = batch_data['actions']
# Since the code might be run with DataParallel, need to make sure
# the batch size is divisible by the ngpus, so stick to the
# requested batch size by padding actions.
uniq_batch_size = batch_actions.shape[0]
pad_len = max(batch_size - uniq_batch_size, 0)
batch_vid_obs = pad_tensor(batch_vid_obs, pad_len)
# Set run_decode whenever it was true in training (earlier it was only
# set when visualizing); evaluation might depend on the decoded frames,
# so run the decoder here as well.
other_kwargs = {
'need_intermediate': True,
'run_decode': train_run_decode,
'nslices': 1
}
all_preds, batch_losses = model.forward(
batch_vid_obs,
None,
n_hist_frames=n_hist_frames,
n_fwd_times=n_fwd_times,
compute_losses=False,
**other_kwargs)
# Unpad parts of all_preds that will be used further
# Since the model is trained with BCELoss, normalize using sigmoid
# On 2020/02/11, I changed it to return only one prediction for
# any n_fwd_times (max-pool all to give 1 prediction), hence this
# list will only contain a single element. To stay consistent with
# prior code that expects a prediction at each time step, simply
# repeating that prediction n_fwd_times.
batch_scores = nn.Sigmoid()(unpad_tensor(
all_preds['is_solved'], pad_len))
batch_vid_obs = unpad_tensor(batch_vid_obs, pad_len)
if store_vis:
# Sum the vid obs over the channels, in case it was split into
# components
if cfg.agent.input_space == 'obj':
# update to videos, for storing vis
batch_vid_obs = batch_data['vid_obs'].squeeze(1)
batch_vid_obs_orig = batch_data[
'vid_obs_orig'].squeeze(1)
batch_vid_obs = pad_tensor(batch_vid_obs, pad_len)
batch_vid_obs = unpad_tensor(batch_vid_obs, pad_len)
task_ids = batch_data['task_ids']
_, pixel_acc, gt_frames, pred_frames = cls.vis_stacked_pred_gt(
torch.sum(batch_vid_obs_orig, axis=-3).cpu().numpy(),
torch.sum(batch_vid_obs, dim=-3), [
unpad_tensor(el.cpu(), pad_len)
for el in all_preds['pixels']
])
'''
[batch_scores] * len(all_preds['pixels']),
# Could take any batch_task_indices, all are same
'{}/{:04d}_{:04d}.gif'.format(out_dir,
batch_task_indices[0],
batch_id))
'''
# Also store pure frames individually, will be used for rollout
# accuracy evaluation
store_frames(gt_frames, task_ids, out_dir, 'gt',
batch_actions)
store_frames(pred_frames, task_ids, out_dir, 'predictions',
batch_actions)
else:
pixel_acc = torch.zeros(
(batch_scores.shape[0], phyre.NUM_COLORS))
assert len(batch_scores) == len(batch_actions), (
batch_actions.shape, batch_scores.shape)
# IMP: Don't convert to cpu() numpy() here.. it makes the function
# much slower. Convert in one go at the end when returning
scores.append(deepcopy(batch_scores))
pixel_accs.append(deepcopy(pixel_acc))
actions.append(deepcopy(batch_actions))
task_indices.append(deepcopy(batch_task_indices))
# There is only 1 element in scores, but unsqueezing so that
# it's compatible with following code that expects a score prediction
# over time. Here it will give 1 prediction, the final one.
final_scores = torch.cat(scores, dim=0).unsqueeze(0).cpu().numpy()
final_actions = torch.cat(actions, dim=0).cpu().numpy()
final_task_indices = torch.cat(task_indices, dim=0).cpu().numpy()
final_pixel_accs = torch.cat(pixel_accs, dim=0).cpu().numpy()
if not os.path.exists('temp_storage'):
os.makedirs('temp_storage')
torch.save(final_scores, os.path.join('temp_storage', 'final_scores.pt'))
torch.save(final_actions, os.path.join('temp_storage', 'final_actions.pt'))
torch.save(final_task_indices, os.path.join('temp_storage', 'final_task_indices.pt'))
torch.save(final_pixel_accs, os.path.join('temp_storage', 'final_pixel_accs.pt'))
if nactionsXtasks != len(final_actions):
logging.warning('Only evaluated %d actions instead of full %d',
len(final_actions), nactionsXtasks)
assert (nactionsXtasks - len(final_actions)) <= batch_size, (
'Should not miss more than one batch')
return final_scores, final_actions, final_task_indices, final_pixel_accs | [
"def",
"eval_actions",
"(",
"cls",
",",
"model",
",",
"dataset",
",",
"nactionsXtasks",
",",
"batch_size",
",",
"cfg",
")",
":",
"init_frames_to_sim",
"=",
"cfg",
".",
"eval",
".",
"init_frames_to_sim",
"# Run it for these many",
"n_hist_frames",
"=",
"cfg",
".",
"eval",
".",
"n_hist_frames",
"n_fwd_times",
"=",
"cfg",
".",
"eval",
".",
"n_fwd_times",
"store_vis",
"=",
"cfg",
".",
"eval",
".",
"store_vis",
"train_run_decode",
"=",
"cfg",
".",
"train",
".",
"run_decode",
"assert",
"init_frames_to_sim",
">=",
"n_hist_frames",
",",
"'Need those many to start pred'",
"def",
"pad_tensor",
"(",
"tensor",
":",
"torch",
".",
"Tensor",
",",
"sz",
":",
"int",
")",
":",
"\"\"\"\n Pad the tensor's bottom (along batch dim), using the last element,\n sz times.\n \"\"\"",
"bottom_tensor_rep",
"=",
"[",
"tensor",
"[",
"-",
"1",
":",
",",
"...",
"]",
"]",
"*",
"sz",
"return",
"torch",
".",
"cat",
"(",
"[",
"tensor",
"]",
"+",
"bottom_tensor_rep",
",",
"dim",
"=",
"0",
")",
"def",
"unpad_tensor",
"(",
"tensor",
":",
"torch",
".",
"Tensor",
",",
"sz",
":",
"int",
")",
":",
"return",
"tensor",
"[",
":",
"tensor",
".",
"shape",
"[",
"0",
"]",
"-",
"sz",
",",
"...",
"]",
"# Clear the directory of older vis if any",
"out_dir",
"=",
"'vis/'",
"if",
"store_vis",
":",
"logging",
".",
"warning",
"(",
"'Removing older vis from %s/%s'",
",",
"os",
".",
"getcwd",
"(",
")",
",",
"out_dir",
")",
"subprocess",
".",
"call",
"(",
"f'rm {out_dir}/*.gif'",
",",
"shell",
"=",
"True",
")",
"subprocess",
".",
"call",
"(",
"f'rm -r {out_dir}/eval_vis'",
",",
"shell",
"=",
"True",
")",
"if",
"os",
".",
"path",
".",
"exists",
"(",
"'temp_storage'",
")",
":",
"final_scores",
"=",
"torch",
".",
"load",
"(",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_scores.pt'",
")",
")",
"final_actions",
"=",
"torch",
".",
"load",
"(",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_actions.pt'",
")",
")",
"final_task_indices",
"=",
"torch",
".",
"load",
"(",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_task_indices.pt'",
")",
")",
"final_pixel_accs",
"=",
"torch",
".",
"load",
"(",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_pixel_accs.pt'",
")",
")",
"else",
":",
"scores",
"=",
"[",
"]",
"actions",
"=",
"[",
"]",
"task_indices",
"=",
"[",
"]",
"pixel_accs",
"=",
"[",
"]",
"# If generating output, how accurately did we do",
"with",
"torch",
".",
"no_grad",
"(",
")",
":",
"model",
".",
"eval",
"(",
")",
"for",
"batch_id",
",",
"batch_data",
"in",
"enumerate",
"(",
"tqdm",
"(",
"torch",
".",
"utils",
".",
"data",
".",
"DataLoader",
"(",
"dataset",
",",
"num_workers",
"=",
"get_num_workers",
"(",
"cfg",
".",
"eval",
".",
"data_loader",
".",
"num_workers",
",",
"dataset",
".",
"frames_per_clip",
")",
",",
"pin_memory",
"=",
"False",
",",
"batch_size",
"=",
"batch_size",
",",
"drop_last",
"=",
"False",
")",
",",
"desc",
"=",
"'All tasks X actions batches'",
",",
"total",
"=",
"nactionsXtasks",
"//",
"batch_size",
")",
")",
":",
"batch_task_indices",
"=",
"batch_data",
"[",
"'task_indices'",
"]",
"print",
"(",
"batch_id",
")",
"batch_vid_obs",
"=",
"batch_data",
"[",
"f'{cfg.agent.input_space}_obs'",
"]",
".",
"squeeze",
"(",
"1",
")",
"batch_vid_obs_orig",
"=",
"batch_data",
"[",
"f'{cfg.agent.input_space}_obs_orig'",
"]",
".",
"squeeze",
"(",
"1",
")",
"batch_actions",
"=",
"batch_data",
"[",
"'actions'",
"]",
"# Since the code might be run with DataParallel, need to make sure",
"# the batch size is divisible by the ngpus, so stick to the",
"# requested batch size by padding actions.",
"uniq_batch_size",
"=",
"batch_actions",
".",
"shape",
"[",
"0",
"]",
"pad_len",
"=",
"max",
"(",
"batch_size",
"-",
"uniq_batch_size",
",",
"0",
")",
"batch_vid_obs",
"=",
"pad_tensor",
"(",
"batch_vid_obs",
",",
"pad_len",
")",
"# Setting run_decode always true when true in training..",
"# (earlier only when visualizing)",
"# Sometimes evaluation might depend on the decoded frame, so might",
"# as well...",
"other_kwargs",
"=",
"{",
"'need_intermediate'",
":",
"True",
",",
"'run_decode'",
":",
"train_run_decode",
",",
"'nslices'",
":",
"1",
"}",
"all_preds",
",",
"batch_losses",
"=",
"model",
".",
"forward",
"(",
"batch_vid_obs",
",",
"None",
",",
"n_hist_frames",
"=",
"n_hist_frames",
",",
"n_fwd_times",
"=",
"n_fwd_times",
",",
"compute_losses",
"=",
"False",
",",
"*",
"*",
"other_kwargs",
")",
"# Unpad parts of all_preds that will be used further",
"# Since the model is trained with BCELoss, normalize using sigmoid",
"# On 2020/02/11, I changed it to return only one prediction for",
"# any n_fwd_times (max-pool all to give 1 prediction), hence this",
"# list will only contain a single element. To stay consistent with",
"# prior code that expects a prediction at each time step, simply",
"# repeating that prediction n_fwd_times.",
"batch_scores",
"=",
"nn",
".",
"Sigmoid",
"(",
")",
"(",
"unpad_tensor",
"(",
"all_preds",
"[",
"'is_solved'",
"]",
",",
"pad_len",
")",
")",
"batch_vid_obs",
"=",
"unpad_tensor",
"(",
"batch_vid_obs",
",",
"pad_len",
")",
"if",
"store_vis",
":",
"# Sum the vid obs over the channels, in case it was split into",
"# components",
"if",
"cfg",
".",
"agent",
".",
"input_space",
"==",
"'obj'",
":",
"# update to videos, for storing vis",
"batch_vid_obs",
"=",
"batch_data",
"[",
"'vid_obs'",
"]",
".",
"squeeze",
"(",
"1",
")",
"batch_vid_obs_orig",
"=",
"batch_data",
"[",
"'vid_obs_orig'",
"]",
".",
"squeeze",
"(",
"1",
")",
"batch_vid_obs",
"=",
"pad_tensor",
"(",
"batch_vid_obs",
",",
"pad_len",
")",
"batch_vid_obs",
"=",
"unpad_tensor",
"(",
"batch_vid_obs",
",",
"pad_len",
")",
"task_ids",
"=",
"batch_data",
"[",
"'task_ids'",
"]",
"_",
",",
"pixel_acc",
",",
"gt_frames",
",",
"pred_frames",
"=",
"cls",
".",
"vis_stacked_pred_gt",
"(",
"torch",
".",
"sum",
"(",
"batch_vid_obs_orig",
",",
"axis",
"=",
"-",
"3",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
",",
"torch",
".",
"sum",
"(",
"batch_vid_obs",
",",
"dim",
"=",
"-",
"3",
")",
",",
"[",
"unpad_tensor",
"(",
"el",
".",
"cpu",
"(",
")",
",",
"pad_len",
")",
"for",
"el",
"in",
"all_preds",
"[",
"'pixels'",
"]",
"]",
")",
"'''\n [batch_scores] * len(all_preds['pixels']),\n # Could take any batch_task_indices, all are same\n '{}/{:04d}_{:04d}.gif'.format(out_dir,\n batch_task_indices[0],\n batch_id))\n '''",
"# Also store pure frames individually, will be used for rollout",
"# accuracy evaluation",
"store_frames",
"(",
"gt_frames",
",",
"task_ids",
",",
"out_dir",
",",
"'gt'",
",",
"batch_actions",
")",
"store_frames",
"(",
"pred_frames",
",",
"task_ids",
",",
"out_dir",
",",
"'predictions'",
",",
"batch_actions",
")",
"else",
":",
"pixel_acc",
"=",
"torch",
".",
"zeros",
"(",
"(",
"batch_scores",
".",
"shape",
"[",
"0",
"]",
",",
"phyre",
".",
"NUM_COLORS",
")",
")",
"assert",
"len",
"(",
"batch_scores",
")",
"==",
"len",
"(",
"batch_actions",
")",
",",
"(",
"batch_actions",
".",
"shape",
",",
"batch_scores",
".",
"shape",
")",
"# IMP: Don't convert to cpu() numpy() here.. it makes the function",
"# much slower. Convert in one go at the end when returning",
"scores",
".",
"append",
"(",
"deepcopy",
"(",
"batch_scores",
")",
")",
"pixel_accs",
".",
"append",
"(",
"deepcopy",
"(",
"pixel_acc",
")",
")",
"actions",
".",
"append",
"(",
"deepcopy",
"(",
"batch_actions",
")",
")",
"task_indices",
".",
"append",
"(",
"deepcopy",
"(",
"batch_task_indices",
")",
")",
"# There is only 1 element in scores, but unsqueezing so that",
"# it's compatible with following code that expects a score prediction",
"# over time. Here it will give 1 prediction, the final one.",
"final_scores",
"=",
"torch",
".",
"cat",
"(",
"scores",
",",
"dim",
"=",
"0",
")",
".",
"unsqueeze",
"(",
"0",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
"final_actions",
"=",
"torch",
".",
"cat",
"(",
"actions",
",",
"dim",
"=",
"0",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
"final_task_indices",
"=",
"torch",
".",
"cat",
"(",
"task_indices",
",",
"dim",
"=",
"0",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
"final_pixel_accs",
"=",
"torch",
".",
"cat",
"(",
"pixel_accs",
",",
"dim",
"=",
"0",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
"if",
"not",
"os",
".",
"path",
".",
"exists",
"(",
"'temp_storage'",
")",
":",
"os",
".",
"makedirs",
"(",
"'temp_storage'",
")",
"torch",
".",
"save",
"(",
"final_scores",
",",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_scores.pt'",
")",
")",
"torch",
".",
"save",
"(",
"final_actions",
",",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_actions.pt'",
")",
")",
"torch",
".",
"save",
"(",
"final_task_indices",
",",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_task_indices.pt'",
")",
")",
"torch",
".",
"save",
"(",
"final_pixel_accs",
",",
"os",
".",
"path",
".",
"join",
"(",
"'temp_storage'",
",",
"'final_pixel_accs.pt'",
")",
")",
"if",
"nactionsXtasks",
"!=",
"len",
"(",
"final_actions",
")",
":",
"logging",
".",
"warning",
"(",
"'Only evaluated %d actions instead of full %d'",
",",
"len",
"(",
"final_actions",
")",
",",
"nactionsXtasks",
")",
"assert",
"(",
"nactionsXtasks",
"-",
"len",
"(",
"actions",
")",
")",
"<=",
"batch_size",
",",
"(",
"'Shouldnt miss more'",
")",
"return",
"final_scores",
",",
"final_actions",
",",
"final_task_indices",
",",
"final_pixel_accs"
] | [
504,
4
] | [
666,
80
] | python | en | ['en', 'en', 'en'] | True |
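eval_actions pads each batch to a fixed size so DataParallel can split it evenly across GPUs, then unpads the outputs. The round trip in isolation, copied from the inner helpers above:

import torch

def pad_tensor(tensor: torch.Tensor, sz: int) -> torch.Tensor:
    # Repeat the last batch element sz times along dim 0.
    return torch.cat([tensor] + [tensor[-1:, ...]] * sz, dim=0)

def unpad_tensor(tensor: torch.Tensor, sz: int) -> torch.Tensor:
    return tensor[:tensor.shape[0] - sz, ...]

batch = torch.arange(6.).reshape(3, 2)   # 3 real examples
padded = pad_tensor(batch, 1)            # pad to 4 so 2 replicas get 2 each
assert padded.shape[0] == 4
assert torch.equal(unpad_tensor(padded, 1), batch)  # lossless round trip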
ImgTrainer.vis_stacked_pred_gt | (cls,
orig_vid_full,
orig_vid,
pred_vid_qnt,
pred_solved=None,
store_path=None) |
Args:
orig_vid_full: list of videos [T'x256x256] for each batch element in
orig_vid, for even the frames that are going to be predicted
orig_vid (BxTx256x256)
pred_vid_qnt [(BxHxW)] (or None, if not available) (unprocessed; i.e.
argmaxed from 1-hot if need be, done already)
pred_solved: [(B,)] list of is_solved scores from the model. Or can be
None
store_path (str): Path to store the video. None if not store.
Returns:
(B, T, H, W) Combined output
(B, phyre.NUM_COLORS) pixel accuracy of the generated video
|
Args:
orig_vid_full: list of videos [T'x256x256] for each batch element in
orig_vid, for even the frames that are going to be predicted
orig_vid (BxTx256x256)
pred_vid_qnt [(BxHxW)] (or None, if not available) (unprocessed; i.e.
argmaxed from 1-hot if need be, done already)
pred_solved: [(B,)] list of is_solved scores from the model. Or can be
None
store_path (str): Path to store the video. None if not store.
Returns:
(B, T, H, W) Combined output
(B, phyre.NUM_COLORS) pixel accuracy of the generated video
| def vis_stacked_pred_gt(cls,
orig_vid_full,
orig_vid,
pred_vid_qnt,
pred_solved=None,
store_path=None):
"""
Args:
orig_vid_full: list of videos [T'x256x256] for each batch element in
orig_vid, for even the frames that are going to be predicted
orig_vid (BxTx256x256)
pred_vid_qnt [(BxHxW)] (or None, if not available) (unprocessed; i.e.
argmaxed from 1-hot if need be, done already)
pred_solved: [(B,)] list of is_solved scores from the model. Or can be
None
store_path (str): Path to store the video. None if not store.
Returns:
(B, T, H, W) Combined output
(B, phyre.NUM_COLORS) pixel accuracy of the generated video
"""
if pred_vid_qnt is None:
return (orig_vid.cpu().numpy(),
np.zeros(
(orig_vid.shape[0], phyre.NUM_COLORS)), None, None)
assert len(orig_vid_full) == orig_vid.shape[0]
# Prepare full GT predictions to go below each clip
orig_vid_full_padded = []
all_t = min(orig_vid_full[0].shape[0], len(pred_vid_qnt))
for vid in orig_vid_full:
if vid.shape[0] >= all_t:
orig_vid_full_padded.append(vid.astype(np.long))
else:
# Pad the videos with white frames if that frame is not returned
raise NotImplementedError('This should not happen')
gt_video = phyre_batchvidresize(
torch.stack([torch.as_tensor(el) for el in orig_vid_full_padded]),
pred_vid_qnt[0].shape[1:]).cpu()
gt_video = gt_video.numpy()
# Convert the gt clip to same size as predictions, add temporal dim
orig_vid = phyre_batchvidresize(orig_vid, pred_vid_qnt[0].shape[1:])
frames_quantized = torch.cat([orig_vid] +
[el.unsqueeze(1) for el in pred_vid_qnt],
dim=1).cpu().numpy()
# Pad the video with empty frames to match the size of predicted videos
padder = np.tile(
np.zeros_like(gt_video[:, -1:]),
(1, abs(frames_quantized.shape[1] - gt_video.shape[1]), 1, 1))
gt_video_padded = gt_video
frames_quantized_padded = frames_quantized
if gt_video.shape[1] > frames_quantized.shape[1]:
frames_quantized_padded = np.concatenate(
[frames_quantized, padder], axis=1)
else:
gt_video_padded = np.concatenate([gt_video, padder], axis=1)
# Compute the accuracy between the generated frames, and the GT frames
# Only do for generated frames (so taking the last few ones)
# If few GT frames are given (eg, when just training classifier), it will
# be comparing to empty frames and get low score but that is okay, we don't
# care about the pixel numbers at that point anyway
# Update April 2 2020: This is using frames with numbers overlayed etc... so
# deprecating this eval.
pix_acc = compute_pixel_accuracy(
torch.as_tensor(gt_video_padded[:, -len(pred_vid_qnt):, ...]),
torch.as_tensor(
frames_quantized_padded[:, -len(pred_vid_qnt):, ...]))
# Stack them on the height axis
final_vid = np.concatenate([gt_video_padded, frames_quantized_padded],
axis=-2)
if store_path:
os.makedirs(os.path.dirname(store_path), exist_ok=True)
phyre.vis.save_observation_series_to_gif(
[frames_quantized_padded, gt_video_padded],
store_path,
# Piggy-backing on the solved_state markers to show which parts are
# GT and which parts are being predicted
solved_states=([True] * orig_vid.shape[1] +
[False] * final_vid.shape[1]),
solved_wrt_step=True,
fps=2)
return final_vid, pix_acc, gt_video, frames_quantized | [
"def",
"vis_stacked_pred_gt",
"(",
"cls",
",",
"orig_vid_full",
",",
"orig_vid",
",",
"pred_vid_qnt",
",",
"pred_solved",
"=",
"None",
",",
"store_path",
"=",
"None",
")",
":",
"if",
"pred_vid_qnt",
"is",
"None",
":",
"return",
"(",
"orig_vid",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
",",
"np",
".",
"zeros",
"(",
"(",
"orig_vid",
".",
"shape",
"[",
"0",
"]",
",",
"phyre",
".",
"NUM_COLORS",
")",
")",
",",
"None",
",",
"None",
")",
"assert",
"len",
"(",
"orig_vid_full",
")",
"==",
"orig_vid",
".",
"shape",
"[",
"0",
"]",
"# Prepare full GT predictions to go below each clip",
"orig_vid_full_padded",
"=",
"[",
"]",
"all_t",
"=",
"min",
"(",
"orig_vid_full",
"[",
"0",
"]",
".",
"shape",
"[",
"0",
"]",
",",
"len",
"(",
"pred_vid_qnt",
")",
")",
"for",
"vid",
"in",
"orig_vid_full",
":",
"if",
"vid",
".",
"shape",
"[",
"0",
"]",
">=",
"all_t",
":",
"orig_vid_full_padded",
".",
"append",
"(",
"vid",
".",
"astype",
"(",
"np",
".",
"long",
")",
")",
"else",
":",
"# Pad the videos with white frames if that frame is not returned",
"raise",
"NotImplementedError",
"(",
"'This should not happen'",
")",
"gt_video",
"=",
"phyre_batchvidresize",
"(",
"torch",
".",
"stack",
"(",
"[",
"torch",
".",
"as_tensor",
"(",
"el",
")",
"for",
"el",
"in",
"orig_vid_full_padded",
"]",
")",
",",
"pred_vid_qnt",
"[",
"0",
"]",
".",
"shape",
"[",
"1",
":",
"]",
")",
".",
"cpu",
"(",
")",
"gt_video",
"=",
"gt_video",
".",
"numpy",
"(",
")",
"# Convert the gt clip to same size as predictions, add temporal dim",
"orig_vid",
"=",
"phyre_batchvidresize",
"(",
"orig_vid",
",",
"pred_vid_qnt",
"[",
"0",
"]",
".",
"shape",
"[",
"1",
":",
"]",
")",
"frames_quantized",
"=",
"torch",
".",
"cat",
"(",
"[",
"orig_vid",
"]",
"+",
"[",
"el",
".",
"unsqueeze",
"(",
"1",
")",
"for",
"el",
"in",
"pred_vid_qnt",
"]",
",",
"dim",
"=",
"1",
")",
".",
"cpu",
"(",
")",
".",
"numpy",
"(",
")",
"# Pad the video with empty frames to match the size of predicted videos",
"padder",
"=",
"np",
".",
"tile",
"(",
"np",
".",
"zeros_like",
"(",
"gt_video",
"[",
":",
",",
"-",
"1",
":",
"]",
")",
",",
"(",
"1",
",",
"abs",
"(",
"frames_quantized",
".",
"shape",
"[",
"1",
"]",
"-",
"gt_video",
".",
"shape",
"[",
"1",
"]",
")",
",",
"1",
",",
"1",
")",
")",
"gt_video_padded",
"=",
"gt_video",
"frames_quantized_padded",
"=",
"frames_quantized",
"if",
"gt_video",
".",
"shape",
"[",
"1",
"]",
">",
"frames_quantized",
".",
"shape",
"[",
"1",
"]",
":",
"frames_quantized_padded",
"=",
"np",
".",
"concatenate",
"(",
"[",
"frames_quantized",
",",
"padder",
"]",
",",
"axis",
"=",
"1",
")",
"else",
":",
"gt_video_padded",
"=",
"np",
".",
"concatenate",
"(",
"[",
"gt_video",
",",
"padder",
"]",
",",
"axis",
"=",
"1",
")",
"# Compute the accuracy between the generated frames, and the GT frames",
"# Only do for generated frames (so taking the last few ones)",
"# If few GT frames are given (eg, when just training classifier), it will",
"# be comparing to empty frames and get low score but that is okay, we don't",
"# care about the pixel numbers at that point anyway",
"# Update April 2 2020: This is using frames with numbers overlayed etc... so",
"# deprecating this eval.",
"pix_acc",
"=",
"compute_pixel_accuracy",
"(",
"torch",
".",
"as_tensor",
"(",
"gt_video_padded",
"[",
":",
",",
"-",
"len",
"(",
"pred_vid_qnt",
")",
":",
",",
"...",
"]",
")",
",",
"torch",
".",
"as_tensor",
"(",
"frames_quantized_padded",
"[",
":",
",",
"-",
"len",
"(",
"pred_vid_qnt",
")",
":",
",",
"...",
"]",
")",
")",
"# Stack them on the height axis",
"final_vid",
"=",
"np",
".",
"concatenate",
"(",
"[",
"gt_video_padded",
",",
"frames_quantized_padded",
"]",
",",
"axis",
"=",
"-",
"2",
")",
"if",
"store_path",
":",
"os",
".",
"makedirs",
"(",
"os",
".",
"path",
".",
"dirname",
"(",
"store_path",
")",
",",
"exist_ok",
"=",
"True",
")",
"phyre",
".",
"vis",
".",
"save_observation_series_to_gif",
"(",
"[",
"frames_quantized_padded",
",",
"gt_video_padded",
"]",
",",
"store_path",
",",
"# Piggy-backing on the solved_state markers to show which parts are",
"# GT and which parts are being predicted",
"solved_states",
"=",
"(",
"[",
"True",
"]",
"*",
"orig_vid",
".",
"shape",
"[",
"1",
"]",
"+",
"[",
"False",
"]",
"*",
"final_vid",
".",
"shape",
"[",
"1",
"]",
")",
",",
"solved_wrt_step",
"=",
"True",
",",
"fps",
"=",
"2",
")",
"return",
"final_vid",
",",
"pix_acc",
",",
"gt_video",
",",
"frames_quantized"
] | [
669,
4
] | [
749,
61
] | python | en | ['en', 'error', 'th'] | False |
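The core layout trick in vis_stacked_pred_gt, shown in isolation: ground-truth and predicted clips of equal length are concatenated along the height axis, so one video shows GT on top and predictions below:

import numpy as np

gt_video = np.zeros((2, 5, 64, 64), dtype=np.int64)    # (B, T, H, W)
pred_video = np.ones((2, 5, 64, 64), dtype=np.int64)   # same shape as gt
combined = np.concatenate([gt_video, pred_video], axis=-2)
assert combined.shape == (2, 5, 128, 64)  # heights stack, widths align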
retry | (*dargs: t.Any, **dkw: t.Any) | Wrap a function with a new `Retrying` object.
:param dargs: positional arguments passed to the Retrying object
:param dkw: keyword arguments passed to the Retrying object
| Wrap a function with a new `Retrying` object. | def retry(*dargs: t.Any, **dkw: t.Any) -> t.Union[WrappedFn, t.Callable[[WrappedFn], WrappedFn]]: # noqa
"""Wrap a function with a new `Retrying` object.
:param dargs: positional arguments passed to the Retrying object
:param dkw: keyword arguments passed to the Retrying object
"""
# support both @retry and @retry() as valid syntax
if len(dargs) == 1 and callable(dargs[0]):
return retry()(dargs[0])
else:
def wrap(f: WrappedFn) -> WrappedFn:
if isinstance(f, retry_base):
warnings.warn(
f"Got retry_base instance ({f.__class__.__name__}) as callable argument, "
f"this will probably hang indefinitely (did you mean retry={f.__class__.__name__}(...)?)"
)
if iscoroutinefunction(f):
r: "BaseRetrying" = AsyncRetrying(*dargs, **dkw)
elif tornado and hasattr(tornado.gen, "is_coroutine_function") and tornado.gen.is_coroutine_function(f):
r = TornadoRetrying(*dargs, **dkw)
else:
r = Retrying(*dargs, **dkw)
return r.wraps(f)
return wrap | [
"def",
"retry",
"(",
"*",
"dargs",
":",
"t",
".",
"Any",
",",
"*",
"*",
"dkw",
":",
"t",
".",
"Any",
")",
"->",
"t",
".",
"Union",
"[",
"WrappedFn",
",",
"t",
".",
"Callable",
"[",
"[",
"WrappedFn",
"]",
",",
"WrappedFn",
"]",
"]",
":",
"# noqa",
"# support both @retry and @retry() as valid syntax",
"if",
"len",
"(",
"dargs",
")",
"==",
"1",
"and",
"callable",
"(",
"dargs",
"[",
"0",
"]",
")",
":",
"return",
"retry",
"(",
")",
"(",
"dargs",
"[",
"0",
"]",
")",
"else",
":",
"def",
"wrap",
"(",
"f",
":",
"WrappedFn",
")",
"->",
"WrappedFn",
":",
"if",
"isinstance",
"(",
"f",
",",
"retry_base",
")",
":",
"warnings",
".",
"warn",
"(",
"f\"Got retry_base instance ({f.__class__.__name__}) as callable argument, \"",
"f\"this will probably hang indefinitely (did you mean retry={f.__class__.__name__}(...)?)\"",
")",
"if",
"iscoroutinefunction",
"(",
"f",
")",
":",
"r",
":",
"\"BaseRetrying\"",
"=",
"AsyncRetrying",
"(",
"*",
"dargs",
",",
"*",
"*",
"dkw",
")",
"elif",
"tornado",
"and",
"hasattr",
"(",
"tornado",
".",
"gen",
",",
"\"is_coroutine_function\"",
")",
"and",
"tornado",
".",
"gen",
".",
"is_coroutine_function",
"(",
"f",
")",
":",
"r",
"=",
"TornadoRetrying",
"(",
"*",
"dargs",
",",
"*",
"*",
"dkw",
")",
"else",
":",
"r",
"=",
"Retrying",
"(",
"*",
"dargs",
",",
"*",
"*",
"dkw",
")",
"return",
"r",
".",
"wraps",
"(",
"f",
")",
"return",
"wrap"
] | [
106,
0
] | [
132,
19
] | python | en | ['en', 'en', 'en'] | True |
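This retry record matches the tenacity library's decorator, which accepts both the bare @retry form and @retry(...) with Retrying arguments. A usage sketch (assuming tenacity is installed):

import random
from tenacity import retry, stop_after_attempt, wait_fixed

@retry  # bare form: retries forever with default settings
def flaky():
    if random.random() < 0.5:
        raise RuntimeError('transient failure')
    return 'ok'

@retry(stop=stop_after_attempt(3), wait=wait_fixed(0.1))
def bounded():
    # Gives up after 3 attempts, waiting 0.1 s between them; a persistent
    # failure surfaces as tenacity.RetryError.
    raise RuntimeError('always fails')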
BaseRetrying.copy | (
self,
sleep: t.Union[t.Callable[[t.Union[int, float]], None], object] = _unset,
stop: t.Union["stop_base", object] = _unset,
wait: t.Union["wait_base", object] = _unset,
retry: t.Union[retry_base, object] = _unset,
before: t.Union[t.Callable[["RetryCallState"], None], object] = _unset,
after: t.Union[t.Callable[["RetryCallState"], None], object] = _unset,
before_sleep: t.Union[t.Optional[t.Callable[["RetryCallState"], None]], object] = _unset,
reraise: t.Union[bool, object] = _unset,
retry_error_cls: t.Union[t.Type[RetryError], object] = _unset,
retry_error_callback: t.Union[t.Optional[t.Callable[["RetryCallState"], t.Any]], object] = _unset,
) | Copy this object with some parameters changed if needed. | Copy this object with some parameters changed if needed. | def copy(
self,
sleep: t.Union[t.Callable[[t.Union[int, float]], None], object] = _unset,
stop: t.Union["stop_base", object] = _unset,
wait: t.Union["wait_base", object] = _unset,
retry: t.Union[retry_base, object] = _unset,
before: t.Union[t.Callable[["RetryCallState"], None], object] = _unset,
after: t.Union[t.Callable[["RetryCallState"], None], object] = _unset,
before_sleep: t.Union[t.Optional[t.Callable[["RetryCallState"], None]], object] = _unset,
reraise: t.Union[bool, object] = _unset,
retry_error_cls: t.Union[t.Type[RetryError], object] = _unset,
retry_error_callback: t.Union[t.Optional[t.Callable[["RetryCallState"], t.Any]], object] = _unset,
) -> "BaseRetrying":
"""Copy this object with some parameters changed if needed."""
return self.__class__(
sleep=_first_set(sleep, self.sleep),
stop=_first_set(stop, self.stop),
wait=_first_set(wait, self.wait),
retry=_first_set(retry, self.retry),
before=_first_set(before, self.before),
after=_first_set(after, self.after),
before_sleep=_first_set(before_sleep, self.before_sleep),
reraise=_first_set(reraise, self.reraise),
retry_error_cls=_first_set(retry_error_cls, self.retry_error_cls),
retry_error_callback=_first_set(retry_error_callback, self.retry_error_callback),
) | [
"def",
"copy",
"(",
"self",
",",
"sleep",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Callable",
"[",
"[",
"t",
".",
"Union",
"[",
"int",
",",
"float",
"]",
"]",
",",
"None",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
"stop",
":",
"t",
".",
"Union",
"[",
"\"stop_base\"",
",",
"object",
"]",
"=",
"_unset",
",",
"wait",
":",
"t",
".",
"Union",
"[",
"\"wait_base\"",
",",
"object",
"]",
"=",
"_unset",
",",
"retry",
":",
"t",
".",
"Union",
"[",
"retry_base",
",",
"object",
"]",
"=",
"_unset",
",",
"before",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Callable",
"[",
"[",
"\"RetryCallState\"",
"]",
",",
"None",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
"after",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Callable",
"[",
"[",
"\"RetryCallState\"",
"]",
",",
"None",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
"before_sleep",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Optional",
"[",
"t",
".",
"Callable",
"[",
"[",
"\"RetryCallState\"",
"]",
",",
"None",
"]",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
"reraise",
":",
"t",
".",
"Union",
"[",
"bool",
",",
"object",
"]",
"=",
"_unset",
",",
"retry_error_cls",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Type",
"[",
"RetryError",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
"retry_error_callback",
":",
"t",
".",
"Union",
"[",
"t",
".",
"Optional",
"[",
"t",
".",
"Callable",
"[",
"[",
"\"RetryCallState\"",
"]",
",",
"t",
".",
"Any",
"]",
"]",
",",
"object",
"]",
"=",
"_unset",
",",
")",
"->",
"\"BaseRetrying\"",
":",
"return",
"self",
".",
"__class__",
"(",
"sleep",
"=",
"_first_set",
"(",
"sleep",
",",
"self",
".",
"sleep",
")",
",",
"stop",
"=",
"_first_set",
"(",
"stop",
",",
"self",
".",
"stop",
")",
",",
"wait",
"=",
"_first_set",
"(",
"wait",
",",
"self",
".",
"wait",
")",
",",
"retry",
"=",
"_first_set",
"(",
"retry",
",",
"self",
".",
"retry",
")",
",",
"before",
"=",
"_first_set",
"(",
"before",
",",
"self",
".",
"before",
")",
",",
"after",
"=",
"_first_set",
"(",
"after",
",",
"self",
".",
"after",
")",
",",
"before_sleep",
"=",
"_first_set",
"(",
"before_sleep",
",",
"self",
".",
"before_sleep",
")",
",",
"reraise",
"=",
"_first_set",
"(",
"reraise",
",",
"self",
".",
"reraise",
")",
",",
"retry_error_cls",
"=",
"_first_set",
"(",
"retry_error_cls",
",",
"self",
".",
"retry_error_cls",
")",
",",
"retry_error_callback",
"=",
"_first_set",
"(",
"retry_error_callback",
",",
"self",
".",
"retry_error_callback",
")",
",",
")"
] | [
251,
4
] | [
276,
9
] | python | en | ['en', 'en', 'en'] | True |
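A minimal usage sketch for the record above (assumption: the `tenacity` package is installed and exposes `Retrying`, `stop_after_attempt`, and `wait_fixed`, its documented public API). Any argument left at `_unset` falls back to the original instance's value through `_first_set`, so only the parameters you pass change:

# Sketch only; assumes tenacity's public API.
from tenacity import Retrying, stop_after_attempt, wait_fixed

base = Retrying(stop=stop_after_attempt(3), wait=wait_fixed(1))
# copy() builds a fresh instance; unset parameters fall back to the
# originals via _first_set, so only `stop` differs here.
patient = base.copy(stop=stop_after_attempt(10))

assert patient.wait is base.wait      # carried over unchanged
assert patient.stop is not base.stop  # replaced by the copy() argument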
BaseRetrying.statistics | (self) | Return a dictionary of runtime statistics.
This dictionary will be empty when the controller has never
run. When it is running or has run previously, it should (but
may not) contain useful and/or informational keys and values
while a run is underway and/or after it has completed.
.. warning:: The keys in this dictionary **should** be somewhat
stable (not changing), but their existence **may**
change between major releases as new statistics are
gathered or removed, so before accessing a key ensure that
it actually exists and handle the case when it does not.
.. note:: The values in this dictionary are local to the thread
running the call (so if multiple threads share the same retrying
object - either directly or indirectly - each thread will have its
own view of the statistics it has collected; in the
future we may provide a way to aggregate the various
statistics from each thread).
| Return a dictionary of runtime statistics. | def statistics(self) -> t.Dict[str, t.Any]:
"""Return a dictionary of runtime statistics.
This dictionary will be empty when the controller has never
run. When it is running or has run previously, it should (but
may not) contain useful and/or informational keys and values
while a run is underway and/or after it has completed.
.. warning:: The keys in this dictionary **should** be somewhat
stable (not changing), but their existence **may**
change between major releases as new statistics are
gathered or removed, so before accessing a key ensure that
it actually exists and handle the case when it does not.
.. note:: The values in this dictionary are local to the thread
running the call (so if multiple threads share the same retrying
object - either directly or indirectly - each thread will have its
own view of the statistics it has collected; in the
future we may provide a way to aggregate the various
statistics from each thread).
"""
try:
return self._local.statistics
except AttributeError:
self._local.statistics = {}
return self._local.statistics | [
"def",
"statistics",
"(",
"self",
")",
"->",
"t",
".",
"Dict",
"[",
"str",
",",
"t",
".",
"Any",
"]",
":",
"try",
":",
"return",
"self",
".",
"_local",
".",
"statistics",
"except",
"AttributeError",
":",
"self",
".",
"_local",
".",
"statistics",
"=",
"{",
"}",
"return",
"self",
".",
"_local",
".",
"statistics"
] | [
290,
4
] | [
315,
41
] | python | en | ['en', 'en', 'en'] | True |
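A hedged sketch of the thread-local behaviour described in the note above (assumption: tenacity's public `Retrying` class; the exact statistics keys vary by release, so they are only illustrated):

import threading
from tenacity import Retrying, stop_after_attempt

retryer = Retrying(stop=stop_after_attempt(2), reraise=True)
retryer(lambda: "ok")  # one successful attempt populates this thread's stats
print(retryer.statistics)  # e.g. contains 'attempt_number'; keys may vary

def peek() -> None:
    # The property reads self._local.statistics, so a fresh thread
    # hits the AttributeError branch and gets an empty dict.
    print(retryer.statistics)  # {}

t = threading.Thread(target=peek)
t.start()
t.join()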
BaseRetrying.wraps | (self, f: WrappedFn) | Wrap a function for retrying.
:param f: A function to wrap for retrying.
| Wrap a function for retrying. | def wraps(self, f: WrappedFn) -> WrappedFn:
"""Wrap a function for retrying.
:param f: A function to wrap for retrying.
"""
@functools.wraps(f)
def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
return self(f, *args, **kw)
def retry_with(*args: t.Any, **kwargs: t.Any) -> WrappedFn:
return self.copy(*args, **kwargs).wraps(f)
wrapped_f.retry = self
wrapped_f.retry_with = retry_with
return wrapped_f | [
"def",
"wraps",
"(",
"self",
",",
"f",
":",
"WrappedFn",
")",
"->",
"WrappedFn",
":",
"@",
"functools",
".",
"wraps",
"(",
"f",
")",
"def",
"wrapped_f",
"(",
"*",
"args",
":",
"t",
".",
"Any",
",",
"*",
"*",
"kw",
":",
"t",
".",
"Any",
")",
"->",
"t",
".",
"Any",
":",
"return",
"self",
"(",
"f",
",",
"*",
"args",
",",
"*",
"*",
"kw",
")",
"def",
"retry_with",
"(",
"*",
"args",
":",
"t",
".",
"Any",
",",
"*",
"*",
"kwargs",
":",
"t",
".",
"Any",
")",
"->",
"WrappedFn",
":",
"return",
"self",
".",
"copy",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
".",
"wraps",
"(",
"f",
")",
"wrapped_f",
".",
"retry",
"=",
"self",
"wrapped_f",
".",
"retry_with",
"=",
"retry_with",
"return",
"wrapped_f"
] | [
317,
4
] | [
333,
24
] | python | en | ['en', 'en', 'en'] | True |
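A sketch of both attributes that `wraps` attaches to the wrapper (assumptions: tenacity is installed; `reraise=True` is used so the original exception surfaces instead of a `RetryError`):

from tenacity import Retrying, stop_after_attempt

retryer = Retrying(stop=stop_after_attempt(3), reraise=True)

@retryer.wraps
def flaky() -> str:
    raise RuntimeError("still failing")

assert flaky.retry is retryer  # the controller is exposed on the wrapper

# retry_with() copies the controller with changed parameters and rewraps f.
stubborn = flaky.retry_with(stop=stop_after_attempt(5))
try:
    stubborn()
except RuntimeError:
    pass  # reraise=True surfaces the last exception after five attempts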
Future.failed | (self) | Return whether an exception is being held in this future. | Return whether an exception is being held in this future. | def failed(self) -> bool:
"""Return whether an exception is being held in this future."""
return self.exception() is not None | [
"def",
"failed",
"(",
"self",
")",
"->",
"bool",
":",
"return",
"self",
".",
"exception",
"(",
")",
"is",
"not",
"None"
] | [
428,
4
] | [
430,
43
] | python | en | ['en', 'en', 'en'] | True |
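A small sketch (assumptions: tenacity's `Future` subclasses `concurrent.futures.Future` and takes an attempt number; in released tenacity `failed` is exposed as a property, so it is read without parentheses below):

from tenacity import Future

f = Future(attempt_number=1)
f.set_exception(ValueError("boom"))
# failed simply checks that exception() is not None.
print(f.failed)  # True (call f.failed() instead if defined as a plain method)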
Future.construct | (cls, attempt_number: int, value: t.Any, has_exception: bool) | Construct a new Future object. | Construct a new Future object. | def construct(cls, attempt_number: int, value: t.Any, has_exception: bool) -> "Future":
"""Construct a new Future object."""
fut = cls(attempt_number)
if has_exception:
fut.set_exception(value)
else:
fut.set_result(value)
return fut | [
"def",
"construct",
"(",
"cls",
",",
"attempt_number",
":",
"int",
",",
"value",
":",
"t",
".",
"Any",
",",
"has_exception",
":",
"bool",
")",
"->",
"\"Future\"",
":",
"fut",
"=",
"cls",
"(",
"attempt_number",
")",
"if",
"has_exception",
":",
"fut",
".",
"set_exception",
"(",
"value",
")",
"else",
":",
"fut",
".",
"set_result",
"(",
"value",
")",
"return",
"fut"
] | [
433,
4
] | [
440,
18
] | python | en | ['en', 'en', 'en'] | True |
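The classmethod pairs naturally with `failed`; a sketch under the same assumptions:

from tenacity import Future

ok = Future.construct(attempt_number=1, value=42, has_exception=False)
bad = Future.construct(attempt_number=2, value=KeyError("missing"), has_exception=True)

assert ok.result() == 42
assert bad.exception() is not None  # the value was stored as an exception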
SharedDataMiddleware.is_allowed | (self, filename) | Subclasses can override this method to disallow access to
certain files. However, if `disallow` is passed to the constructor,
it replaces this method.
| Subclasses can override this method to disallow access to
certain files. However, if `disallow` is passed to the constructor,
it replaces this method.
| def is_allowed(self, filename):
"""Subclasses can override this method to disallow access to
certain files. However, if `disallow` is passed to the constructor,
it replaces this method.
"""
return True | [
"def",
"is_allowed",
"(",
"self",
",",
"filename",
")",
":",
"return",
"True"
] | [
123,
4
] | [
128,
19
] | python | en | ['en', 'en', 'en'] | True |
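A hedged subclassing sketch (assumptions: Werkzeug is installed and the import path matches recent releases; because `disallow` passed to the constructor replaces this hook, the override only takes effect when `disallow` is not given):

from werkzeug.middleware.shared_data import SharedDataMiddleware

class NoHiddenFiles(SharedDataMiddleware):
    def is_allowed(self, filename: str) -> bool:
        # Deny any path component that starts with a dot.
        # (filename may use os.sep on Windows; "/" is assumed here.)
        return not any(part.startswith(".") for part in filename.split("/"))

# Hypothetical wiring; `app` and the static directory are placeholders:
# app = NoHiddenFiles(app, {"/static": "/path/to/static"})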