| in_source_id (string, 13-58 chars) | issue (string, 3-241k chars) | before_files (list, 0-3 items) | after_files (list, 0-3 items) | pr_diff (string, 109-107M chars, nullable ⌀) |
---|---|---|---|---|
streamlit__streamlit-6377 | Streamlit logger working on root
### Summary
Upon import, Streamlit adds a new **global** log handler that dumps logs in text format. Packages should not do that, because it can break the logging conventions of the host system.
In our case, for example, we dump logs in JSON format and push them to our logging aggregation system. Streamlit's log messages break the format, so the only service we can't debug properly is Streamlit itself.
### Steps to reproduce
Nothing special, logging comes out of the box.
**Expected behavior:**
Streamlit should attach its handler to a specific logger namespace (e.g. `streamlit`) instead of attaching it to the root logger.
**Actual behavior:**
Streamlit attaches a stream handler to the root logger.
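Below is a minimal sketch of the difference using only the standard-library `logging` module (an illustration, not Streamlit's actual code): a handler on the root logger intercepts every record in the process, while a handler on a named `streamlit` logger only affects Streamlit's own records.
```python
import logging

# Handler on the root logger: every record in the process goes through it,
# so it can override a host application's JSON logging setup.
logging.getLogger().addHandler(logging.StreamHandler())

# Handler on a named "streamlit" logger: only records emitted under the
# "streamlit" namespace are formatted by it; other loggers are untouched.
streamlit_logger = logging.getLogger("streamlit")
streamlit_logger.addHandler(logging.StreamHandler())
streamlit_logger.propagate = False  # keep these records out of the root handlers
```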
### Is this a regression?
That is, did this use to work the way you expected in the past?
no
### Debug info
- Streamlit version: 1.1.0
- Python version: 3.8
- Using Conda? PipEnv? PyEnv? Pex?
- OS version: Any
- Browser version: Irrelevant
---
Community voting on feature requests enables the Streamlit team to understand which features are most important to our users.
**If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.**
| [
{
"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Logging module.\"\"\"\n\nimport logging\nimport sys\nfrom typing import Dict, Union\n\nfrom typing_extensions import Final\n\nDEFAULT_LOG_MESSAGE: Final = \"%(asctime)s %(levelname) -7s \" \"%(name)s: %(message)s\"\n\n# Loggers for each name are saved here.\n_loggers: Dict[str, logging.Logger] = {}\n\n# The global log level is set here across all names.\n_global_log_level = logging.INFO\n\n\ndef set_log_level(level: Union[str, int]) -> None:\n \"\"\"Set log level.\"\"\"\n logger = get_logger(__name__)\n\n if isinstance(level, str):\n level = level.upper()\n if level == \"CRITICAL\" or level == logging.CRITICAL:\n log_level = logging.CRITICAL\n elif level == \"ERROR\" or level == logging.ERROR:\n log_level = logging.ERROR\n elif level == \"WARNING\" or level == logging.WARNING:\n log_level = logging.WARNING\n elif level == \"INFO\" or level == logging.INFO:\n log_level = logging.INFO\n elif level == \"DEBUG\" or level == logging.DEBUG:\n log_level = logging.DEBUG\n else:\n msg = 'undefined log level \"%s\"' % level\n logger.critical(msg)\n sys.exit(1)\n\n for log in _loggers.values():\n log.setLevel(log_level)\n\n global _global_log_level\n _global_log_level = log_level\n\n\ndef setup_formatter(logger: logging.Logger) -> None:\n \"\"\"Set up the console formatter for a given logger.\"\"\"\n # Deregister any previous console loggers.\n if hasattr(logger, \"streamlit_console_handler\"):\n logger.removeHandler(logger.streamlit_console_handler)\n\n logger.streamlit_console_handler = logging.StreamHandler() # type: ignore[attr-defined]\n\n # Import here to avoid circular imports\n from streamlit import config\n\n if config._config_options:\n # logger is required in ConfigOption.set_value\n # Getting the config option before the config file has been parsed\n # can create an infinite loop\n message_format = config.get_option(\"logger.messageFormat\")\n else:\n message_format = DEFAULT_LOG_MESSAGE\n formatter = logging.Formatter(fmt=message_format)\n formatter.default_msec_format = \"%s.%03d\"\n logger.streamlit_console_handler.setFormatter(formatter) # type: ignore[attr-defined]\n\n # Register the new console logger.\n logger.addHandler(logger.streamlit_console_handler) # type: ignore[attr-defined]\n\n\ndef update_formatter() -> None:\n for log in _loggers.values():\n setup_formatter(log)\n\n\ndef init_tornado_logs() -> None:\n \"\"\"Set Tornado log levels.\n\n This function does not import any Tornado code, so it's safe to call even\n when Server is not running.\n \"\"\"\n # http://www.tornadoweb.org/en/stable/log.html\n for log in (\"access\", \"application\", \"general\"):\n # get_logger will set the log level for the logger with the given name.\n get_logger(f\"tornado.{log}\")\n\n\ndef get_logger(name: str) -> logging.Logger:\n \"\"\"Return a logger.\n\n Parameters\n ----------\n name : str\n The name of the logger to use. 
You should just pass in __name__.\n\n Returns\n -------\n Logger\n\n \"\"\"\n if name in _loggers.keys():\n return _loggers[name]\n\n if name == \"root\":\n logger = logging.getLogger()\n else:\n logger = logging.getLogger(name)\n\n logger.setLevel(_global_log_level)\n logger.propagate = False\n setup_formatter(logger)\n\n _loggers[name] = logger\n\n return logger\n",
"path": "lib/streamlit/logger.py"
}
] | [
{
"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Logging module.\"\"\"\n\nimport logging\nimport sys\nfrom typing import Dict, Union\n\nfrom typing_extensions import Final\n\nDEFAULT_LOG_MESSAGE: Final = \"%(asctime)s %(levelname) -7s \" \"%(name)s: %(message)s\"\n\n# Loggers for each name are saved here.\n_loggers: Dict[str, logging.Logger] = {}\n\n# The global log level is set here across all names.\n_global_log_level = logging.INFO\n\n\ndef set_log_level(level: Union[str, int]) -> None:\n \"\"\"Set log level.\"\"\"\n logger = get_logger(__name__)\n\n if isinstance(level, str):\n level = level.upper()\n if level == \"CRITICAL\" or level == logging.CRITICAL:\n log_level = logging.CRITICAL\n elif level == \"ERROR\" or level == logging.ERROR:\n log_level = logging.ERROR\n elif level == \"WARNING\" or level == logging.WARNING:\n log_level = logging.WARNING\n elif level == \"INFO\" or level == logging.INFO:\n log_level = logging.INFO\n elif level == \"DEBUG\" or level == logging.DEBUG:\n log_level = logging.DEBUG\n else:\n msg = 'undefined log level \"%s\"' % level\n logger.critical(msg)\n sys.exit(1)\n\n for log in _loggers.values():\n log.setLevel(log_level)\n\n global _global_log_level\n _global_log_level = log_level\n\n\ndef setup_formatter(logger: logging.Logger) -> None:\n \"\"\"Set up the console formatter for a given logger.\"\"\"\n # Deregister any previous console loggers.\n if hasattr(logger, \"streamlit_console_handler\"):\n logger.removeHandler(logger.streamlit_console_handler)\n\n logger.streamlit_console_handler = logging.StreamHandler() # type: ignore[attr-defined]\n\n # Import here to avoid circular imports\n from streamlit import config\n\n if config._config_options:\n # logger is required in ConfigOption.set_value\n # Getting the config option before the config file has been parsed\n # can create an infinite loop\n message_format = config.get_option(\"logger.messageFormat\")\n else:\n message_format = DEFAULT_LOG_MESSAGE\n formatter = logging.Formatter(fmt=message_format)\n formatter.default_msec_format = \"%s.%03d\"\n logger.streamlit_console_handler.setFormatter(formatter) # type: ignore[attr-defined]\n\n # Register the new console logger.\n logger.addHandler(logger.streamlit_console_handler) # type: ignore[attr-defined]\n\n\ndef update_formatter() -> None:\n for log in _loggers.values():\n setup_formatter(log)\n\n\ndef init_tornado_logs() -> None:\n \"\"\"Set Tornado log levels.\n\n This function does not import any Tornado code, so it's safe to call even\n when Server is not running.\n \"\"\"\n # http://www.tornadoweb.org/en/stable/log.html\n for log in (\"access\", \"application\", \"general\"):\n # get_logger will set the log level for the logger with the given name.\n get_logger(f\"tornado.{log}\")\n\n\ndef get_logger(name: str) -> logging.Logger:\n \"\"\"Return a logger.\n\n Parameters\n ----------\n name : str\n The name of the logger to use. 
You should just pass in __name__.\n\n Returns\n -------\n Logger\n\n \"\"\"\n if name in _loggers.keys():\n return _loggers[name]\n\n if name == \"root\":\n logger = logging.getLogger(\"streamlit\")\n else:\n logger = logging.getLogger(name)\n\n logger.setLevel(_global_log_level)\n logger.propagate = False\n setup_formatter(logger)\n\n _loggers[name] = logger\n\n return logger\n",
"path": "lib/streamlit/logger.py"
}
] | diff --git a/lib/streamlit/logger.py b/lib/streamlit/logger.py
index 6f91af7432e4..779195acc001 100644
--- a/lib/streamlit/logger.py
+++ b/lib/streamlit/logger.py
@@ -117,7 +117,7 @@ def get_logger(name: str) -> logging.Logger:
return _loggers[name]
if name == "root":
- logger = logging.getLogger()
+ logger = logging.getLogger("streamlit")
else:
logger = logging.getLogger(name)
diff --git a/lib/tests/streamlit/delta_generator_test.py b/lib/tests/streamlit/delta_generator_test.py
index 65645ff0199c..05c57daacca3 100644
--- a/lib/tests/streamlit/delta_generator_test.py
+++ b/lib/tests/streamlit/delta_generator_test.py
@@ -55,7 +55,7 @@ class RunWarningTest(unittest.TestCase):
@patch("streamlit.runtime.Runtime.exists", MagicMock(return_value=False))
def test_run_warning_presence(self):
"""Using Streamlit without `streamlit run` produces a warning."""
- with self.assertLogs(level=logging.WARNING) as logs:
+ with self.assertLogs("streamlit", level=logging.WARNING) as logs:
delta_generator._use_warning_has_been_displayed = False
st.write("Using delta generator")
output = "".join(logs.output)
@@ -66,7 +66,7 @@ def test_run_warning_presence(self):
def test_run_warning_absence(self):
"""Using Streamlit through the CLI results in a Runtime being instantiated,
so it produces no usage warning."""
- with self.assertLogs(level=logging.WARNING) as logs:
+ with self.assertLogs("streamlit", level=logging.WARNING) as logs:
delta_generator._use_warning_has_been_displayed = False
st.write("Using delta generator")
# assertLogs is being used as a context manager, but it also checks
diff --git a/lib/tests/streamlit/logger_test.py b/lib/tests/streamlit/logger_test.py
index 42f83f40c84f..4b5a859d45de 100644
--- a/lib/tests/streamlit/logger_test.py
+++ b/lib/tests/streamlit/logger_test.py
@@ -56,7 +56,7 @@ def test_set_log_level_by_constant(self):
]
for k in data:
logger.set_log_level(k)
- self.assertEqual(k, logging.getLogger().getEffectiveLevel())
+ self.assertEqual(k, logging.getLogger("streamlit").getEffectiveLevel())
def test_set_log_level_error(self):
"""Test streamlit.logger.set_log_level."""
|
obspy__obspy-2148 | FDSN routing client has a locale dependency
There's a dummy call to `time.strptime` in the module init that uses locale-specific formatting, which fails under locales that don't use the same month names (i.e., "Nov" for the 11th month of the year).
```
>>> import locale
>>> locale.setlocale(locale.LC_TIME, ('zh_CN', 'UTF-8'))
'zh_CN.UTF-8'
>>> from obspy.clients.fdsn.routing.routing_client import RoutingClient
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/__init__.py", line 242, in <module>
from .routing.routing_client import RoutingClient # NOQA
File "/workspace/anaconda/envs/django/lib/python2.7/site-packages/obspy/clients/fdsn/routing/__init__.py", line 25, in <module>
time.strptime("30 Nov 00", "%d %b %y")
File "/workspace/anaconda/envs/django/lib/python2.7/_strptime.py", line 478, in _strptime_time
return _strptime(data_string, format)[0]
File "/workspace/anaconda/envs/django/lib/python2.7/_strptime.py", line 332, in _strptime
(data_string, format))
ValueError: time data u'30 Nov 00' does not match format u'%d %b %y'
```
I believe switching this to an ISO8601-like string would be locale-agnostic:
time.strptime("2000/11/30", "%Y/%m/%d")
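A minimal sketch of the reproduction and the proposed fix, assuming the `zh_CN.UTF-8` locale is installed on the system (otherwise `setlocale` itself raises `locale.Error`):
```python
import locale
import time

# "%b" parses locale-dependent month abbreviations, so this call can fail
# under a locale that does not abbreviate the 11th month as "Nov".
locale.setlocale(locale.LC_TIME, ("zh_CN", "UTF-8"))
try:
    time.strptime("30 Nov 00", "%d %b %y")
except ValueError as exc:
    print("locale-dependent format failed:", exc)

# A purely numeric format parses the same way under every locale.
time.strptime("2000/11/30", "%Y/%m/%d")
```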
| [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nobspy.clients.fdsn.routing - Routing services for FDSN web services\n===================================================================\n\n:copyright:\n The ObsPy Development Team ([email protected])\n Celso G Reyes, 2017\n IRIS-DMC\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\n\n# Extremely ugly way to avoid a race condition the first time strptime is\n# imported which is not thread safe...\n#\n# See https://bugs.python.org/issue7980\nimport time\ntime.strptime(\"30 Nov 00\", \"%d %b %y\")\n\n\nif __name__ == '__main__': # pragma: no cover\n import doctest\n doctest.testmod(exclude_empty=True)\n",
"path": "obspy/clients/fdsn/routing/__init__.py"
}
] | [
{
"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nobspy.clients.fdsn.routing - Routing services for FDSN web services\n===================================================================\n\n:copyright:\n The ObsPy Development Team ([email protected])\n Celso G Reyes, 2017\n IRIS-DMC\n:license:\n GNU Lesser General Public License, Version 3\n (https://www.gnu.org/copyleft/lesser.html)\n\"\"\"\nfrom __future__ import (absolute_import, division, print_function,\n unicode_literals)\nfrom future.builtins import * # NOQA\n\n\n# Extremely ugly way to avoid a race condition the first time strptime is\n# imported which is not thread safe...\n#\n# See https://bugs.python.org/issue7980\nimport time\ntime.strptime(\"2000/11/30\", \"%Y/%m/%d\")\n\n\nif __name__ == '__main__': # pragma: no cover\n import doctest\n doctest.testmod(exclude_empty=True)\n",
"path": "obspy/clients/fdsn/routing/__init__.py"
}
] | diff --git a/CHANGELOG.txt b/CHANGELOG.txt
index 879f573662a..d9f50e50979 100644
--- a/CHANGELOG.txt
+++ b/CHANGELOG.txt
@@ -22,6 +22,7 @@
and/or `location` are set (see #1810, #2031, #2047).
* A few fixes and stability improvements for the mass downloader
(see #2081).
+ * Fixed routing startup error when running under certain locales (see #2147)
- obspy.imaging:
* Normalize moment tensors prior to plotting in the mopad wrapper to
stabilize the algorithm (see #2114, #2125).
diff --git a/obspy/clients/fdsn/routing/__init__.py b/obspy/clients/fdsn/routing/__init__.py
index 372357b6d5f..ba4f8a7c8d2 100644
--- a/obspy/clients/fdsn/routing/__init__.py
+++ b/obspy/clients/fdsn/routing/__init__.py
@@ -22,7 +22,7 @@
#
# See https://bugs.python.org/issue7980
import time
-time.strptime("30 Nov 00", "%d %b %y")
+time.strptime("2000/11/30", "%Y/%m/%d")
if __name__ == '__main__': # pragma: no cover
|
cupy__cupy-7448 | [RFC] Renaming the development branch to `main`
Now that many projects around the scientific Python community converged to use `main` as the default branch for their repositories, I think it could make sense to do that for CuPy too.
According to https://github.com/github/renaming, the side effects of renaming a branch are very limited, and I believe the change is unlikely to cause confusion:
- Re-target any open pull requests
- Update any draft releases based on the branch
- Move any branch protection rules that explicitly reference the old name
- Update the branch used to build GitHub Pages, if applicable
- Show a notice to repository contributors, maintainers, and admins on the repository homepage with instructions to update local copies of the repository
- Show a notice to contributors who `git push` to the old branch
- Redirect web requests for the old branch name to the new branch name
- Return a "Moved Permanently" response in API requests for the old branch name
| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# CuPy documentation build configuration file, created by\n# sphinx-quickstart on Sun May 10 12:22:10 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport importlib\nimport inspect\nimport os\nimport sys\n\nimport cupy\n\nsys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))\nimport _comparison_generator\n\n\n__version__ = cupy.__version__\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nrtd_version = os.environ.get('READTHEDOCS_VERSION')\nif rtd_version == 'latest':\n tag = 'master'\nelse:\n tag = 'v{}'.format(__version__)\nextlinks = {\n 'blob': ('https://github.com/cupy/cupy/blob/{}/%s'.format(tag), '%s'),\n 'tree': ('https://github.com/cupy/cupy/tree/{}/%s'.format(tag), '%s'),\n}\n\n\n# Generate comparison table.\nwith open('reference/comparison_table.rst.inc', 'w') as f:\n f.write(_comparison_generator.generate())\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.linkcode',\n 'sphinx_copybutton']\n\ntry:\n import sphinxcontrib.spelling # noqa\n extensions.append('sphinxcontrib.spelling')\nexcept ImportError:\n pass\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'CuPy'\ncopyright = u'2015, Preferred Networks, Inc. and Preferred Infrastructure, Inc.'\nauthor = u'Preferred Networks, Inc. and Preferred Infrastructure, Inc.'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = __version__\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = []\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# Suppress a warning that multiple targets are found for a cross-reference.\n# See #3250\nsuppress_warnings = ['ref.python']\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n# Napoleon settings\nnapoleon_use_ivar = True\nnapoleon_include_special_with_doc = True\n\n# -- Copybutton settings --------------------------------------------------\n\n# Only copy lines starting with the input prompts,\n# valid prompt styles: [\n# Python Repl + continuation (e.g., '>>> ', '... '),\n# Bash (e.g., '$ '),\n# ipython and qtconsole + continuation (e.g., 'In [29]: ', ' ...: '),\n# jupyter-console + continuation (e.g., 'In [29]: ', ' ...: ')\n# ]\n# regex taken from https://sphinx-copybutton.readthedocs.io/en/latest/#using-regexp-prompt-identifiers\ncopybutton_prompt_text = r\">>> |\\.\\.\\. |\\$ |In \\[\\d*\\]: | {2,5}\\.\\.\\.: | {5,8}: \"\ncopybutton_prompt_is_regexp = True\n\n# Continue copying lines as long as they end with this character\ncopybutton_line_continuation_character = \"\\\\\"\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'pydata_sphinx_theme'\n\nhtml_logo = '../image/cupy_logo_1000px.png'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/configuring.html\nhtml_theme_options = {\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/cupy/cupy\",\n \"icon\": \"fab fa-github-square\",\n },\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/CuPy_Team\",\n \"icon\": \"fab fa-twitter-square\",\n },\n ],\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. 
Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CuPydoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n\n # Latex figure (float) alignment\n #'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'CuPy.tex', u'CuPy Documentation',\n u'Preferred Networks, inc. and Preferred Infrastructure, inc.', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'cupy', u'CuPy Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'CuPy', u'CuPy Documentation',\n author, 'CuPy', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\nautosummary_generate = True\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('https://numpy.org/doc/stable/', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/', None),\n 'numba': ('https://numba.readthedocs.io/en/stable', None),\n 'cuquantum': ('https://docs.nvidia.com/cuda/cuquantum/', None),\n # blocked by data-apis/array-api#428\n #'array-api': ('https://data-apis.org/array-api/2021.12/', None),\n}\n\ndoctest_global_setup = '''\nimport numpy as np\nimport cupy # TODO(okuta) : Remove this line\nimport cupyx\nimport cupy as cp\nnp.random.seed(0)\n'''\n\nspelling_lang = 'en_US'\nspelling_word_list_filename = 'spelling_wordlist.txt'\n\n\ndef _import_object_from_name(module_name, fullname):\n obj = sys.modules.get(module_name)\n if obj is None:\n return None\n for comp in fullname.split('.'):\n obj = getattr(obj, comp)\n return obj\n\n\n# note: cupy_backends is excluded as it is undocumented\n_top_modules = ['cupy', 'cupyx']\n_source_root = None\n\n\ndef _find_source_root(source_abs_path):\n # Note that READTHEDOCS* environment variable cannot be used, because they\n # are not set under the CI environment.\n global _source_root\n if _source_root is not None:\n return _source_root\n\n dirname = os.path.dirname(source_abs_path)\n while True:\n parent = os.path.dirname(dirname)\n if os.path.basename(dirname) in _top_modules:\n _source_root = parent\n return _source_root\n if len(parent) == len(dirname):\n raise RuntimeError(\n 'Couldn\\'t parse root directory from '\n 'source file: {}'.format(source_abs_path))\n dirname = parent\n\n\ndef 
_get_source_relative_path(source_abs_path):\n return os.path.relpath(source_abs_path, _find_source_root(source_abs_path))\n\n\ndef linkcode_resolve(domain, info):\n if domain != 'py' or not info['module']:\n return None\n\n # Import the object from module path\n obj = _import_object_from_name(info['module'], info['fullname'])\n\n # If it's not defined in the internal module, return None.\n mod = inspect.getmodule(obj)\n if mod is None:\n return None\n if not mod.__name__.split('.')[0] in _top_modules:\n return None\n\n # If it's wrapped (e.g., by `contextlib.contextmanager`), unwrap it\n obj = inspect.unwrap(obj)\n\n # Get the source file name and line number at which obj is defined.\n try:\n filename = inspect.getsourcefile(obj)\n except TypeError:\n # obj is not a module, class, function, ..etc.\n return None\n\n def get_pyx_file(obj):\n filename = inspect.getfile(obj)\n for ext in importlib.machinery.EXTENSION_SUFFIXES:\n if filename.endswith(ext):\n filename = filename[:-len(ext)] + '.pyx'\n return filename\n else:\n return None\n\n # `cupy.ndarray` (aka. `cupy._core.core.ndarray`) has `__module__`\n # attribute overwritten and `inspect.getsourcefile` doesn't work on it,\n # so use `cupy._core.core`'s source location instead\n if obj is cupy.ndarray:\n filename = get_pyx_file(cupy._core.core)\n if filename is None:\n return None\n linenum = None\n # `inspect.getsourcefile` returns None for C-extension objects\n elif filename is None:\n filename = get_pyx_file(obj)\n if filename is None:\n return None\n linenum = None\n else:\n # Get the source line number\n _, linenum = inspect.getsourcelines(obj)\n assert isinstance(linenum, int)\n\n filename = os.path.realpath(filename)\n relpath = _get_source_relative_path(filename)\n\n fragment = '' if linenum is None else f'#L{linenum}'\n return f'https://github.com/cupy/cupy/blob/{tag}/{relpath}{fragment}'\n\n\n# Python Array API methods have type hints, which do not render\n# nicely by default. This option moves the type hints to the\n# doc content so as to make the function signatures shorter and\n# look nicer.\nautodoc_typehints = 'description'\n\n\ndef remove_array_api_module_docstring(app, what, name, obj, options, lines):\n # We don't want to take the docstring in cupyx.array_api because:\n # 1. It's not how we document module-level stuff\n # 2. The docstring is taken from numpy.array_api, which requires rewriting\n # Here we remove the docstring and will add our own description in array_api.rst\n if what == \"module\" and 'array_api' in name:\n del lines[:]\n\ndef fix_jit_callable_signature(\n app, what, name, obj, options, signature, return_annotation):\n if 'cupyx.jit' in name and callable(obj) and signature is None:\n return (f'{inspect.signature(obj)}', None)\n\ndef fix_ndarray_signature(\n app, what, name, obj, options, signature, return_annotation):\n # Replace `_ndarray_base` with `ndarray` for signatures and return types\n # on docs.\n if signature is not None:\n signature = signature.replace('_ndarray_base', 'ndarray')\n if return_annotation == '_ndarray_base':\n return_annotation = 'ndarray'\n return (signature, return_annotation)\n\ndef setup(app):\n app.connect(\"autodoc-process-docstring\", remove_array_api_module_docstring)\n app.connect(\"autodoc-process-signature\", fix_jit_callable_signature)\n app.connect(\"autodoc-process-signature\", fix_ndarray_signature)\n",
"path": "docs/source/conf.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# CuPy documentation build configuration file, created by\n# sphinx-quickstart on Sun May 10 12:22:10 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport importlib\nimport inspect\nimport os\nimport sys\n\nimport cupy\n\nsys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))\nimport _comparison_generator\n\n\n__version__ = cupy.__version__\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nrtd_version = os.environ.get('READTHEDOCS_VERSION')\nif rtd_version == 'latest':\n tag = 'main'\nelse:\n tag = 'v{}'.format(__version__)\nextlinks = {\n 'blob': ('https://github.com/cupy/cupy/blob/{}/%s'.format(tag), '%s'),\n 'tree': ('https://github.com/cupy/cupy/tree/{}/%s'.format(tag), '%s'),\n}\n\n\n# Generate comparison table.\nwith open('reference/comparison_table.rst.inc', 'w') as f:\n f.write(_comparison_generator.generate())\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#sys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.extlinks',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.linkcode',\n 'sphinx_copybutton']\n\ntry:\n import sphinxcontrib.spelling # noqa\n extensions.append('sphinxcontrib.spelling')\nexcept ImportError:\n pass\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'CuPy'\ncopyright = u'2015, Preferred Networks, Inc. and Preferred Infrastructure, Inc.'\nauthor = u'Preferred Networks, Inc. and Preferred Infrastructure, Inc.'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = __version__\n# The full version, including alpha/beta/rc tags.\nrelease = __version__\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = []\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# Suppress a warning that multiple targets are found for a cross-reference.\n# See #3250\nsuppress_warnings = ['ref.python']\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n# Napoleon settings\nnapoleon_use_ivar = True\nnapoleon_include_special_with_doc = True\n\n# -- Copybutton settings --------------------------------------------------\n\n# Only copy lines starting with the input prompts,\n# valid prompt styles: [\n# Python Repl + continuation (e.g., '>>> ', '... '),\n# Bash (e.g., '$ '),\n# ipython and qtconsole + continuation (e.g., 'In [29]: ', ' ...: '),\n# jupyter-console + continuation (e.g., 'In [29]: ', ' ...: ')\n# ]\n# regex taken from https://sphinx-copybutton.readthedocs.io/en/latest/#using-regexp-prompt-identifiers\ncopybutton_prompt_text = r\">>> |\\.\\.\\. |\\$ |In \\[\\d*\\]: | {2,5}\\.\\.\\.: | {5,8}: \"\ncopybutton_prompt_is_regexp = True\n\n# Continue copying lines as long as they end with this character\ncopybutton_line_continuation_character = \"\\\\\"\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'pydata_sphinx_theme'\n\nhtml_logo = '../image/cupy_logo_1000px.png'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# https://pydata-sphinx-theme.readthedocs.io/en/latest/user_guide/configuring.html\nhtml_theme_options = {\n \"icon_links\": [\n {\n \"name\": \"GitHub\",\n \"url\": \"https://github.com/cupy/cupy\",\n \"icon\": \"fab fa-github-square\",\n },\n {\n \"name\": \"Twitter\",\n \"url\": \"https://twitter.com/CuPy_Team\",\n \"icon\": \"fab fa-twitter-square\",\n },\n ],\n}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. 
Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = '_static/favicon.ico'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = False\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'CuPydoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #'preamble': '',\n\n # Latex figure (float) alignment\n #'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'CuPy.tex', u'CuPy Documentation',\n u'Preferred Networks, inc. and Preferred Infrastructure, inc.', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'cupy', u'CuPy Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'CuPy', u'CuPy Documentation',\n author, 'CuPy', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n\nautosummary_generate = True\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'numpy': ('https://numpy.org/doc/stable/', None),\n 'scipy': ('https://docs.scipy.org/doc/scipy/', None),\n 'numba': ('https://numba.readthedocs.io/en/stable', None),\n 'cuquantum': ('https://docs.nvidia.com/cuda/cuquantum/', None),\n # blocked by data-apis/array-api#428\n #'array-api': ('https://data-apis.org/array-api/2021.12/', None),\n}\n\ndoctest_global_setup = '''\nimport numpy as np\nimport cupy # TODO(okuta) : Remove this line\nimport cupyx\nimport cupy as cp\nnp.random.seed(0)\n'''\n\nspelling_lang = 'en_US'\nspelling_word_list_filename = 'spelling_wordlist.txt'\n\n\ndef _import_object_from_name(module_name, fullname):\n obj = sys.modules.get(module_name)\n if obj is None:\n return None\n for comp in fullname.split('.'):\n obj = getattr(obj, comp)\n return obj\n\n\n# note: cupy_backends is excluded as it is undocumented\n_top_modules = ['cupy', 'cupyx']\n_source_root = None\n\n\ndef _find_source_root(source_abs_path):\n # Note that READTHEDOCS* environment variable cannot be used, because they\n # are not set under the CI environment.\n global _source_root\n if _source_root is not None:\n return _source_root\n\n dirname = os.path.dirname(source_abs_path)\n while True:\n parent = os.path.dirname(dirname)\n if os.path.basename(dirname) in _top_modules:\n _source_root = parent\n return _source_root\n if len(parent) == len(dirname):\n raise RuntimeError(\n 'Couldn\\'t parse root directory from '\n 'source file: {}'.format(source_abs_path))\n dirname = parent\n\n\ndef 
_get_source_relative_path(source_abs_path):\n return os.path.relpath(source_abs_path, _find_source_root(source_abs_path))\n\n\ndef linkcode_resolve(domain, info):\n if domain != 'py' or not info['module']:\n return None\n\n # Import the object from module path\n obj = _import_object_from_name(info['module'], info['fullname'])\n\n # If it's not defined in the internal module, return None.\n mod = inspect.getmodule(obj)\n if mod is None:\n return None\n if not mod.__name__.split('.')[0] in _top_modules:\n return None\n\n # If it's wrapped (e.g., by `contextlib.contextmanager`), unwrap it\n obj = inspect.unwrap(obj)\n\n # Get the source file name and line number at which obj is defined.\n try:\n filename = inspect.getsourcefile(obj)\n except TypeError:\n # obj is not a module, class, function, ..etc.\n return None\n\n def get_pyx_file(obj):\n filename = inspect.getfile(obj)\n for ext in importlib.machinery.EXTENSION_SUFFIXES:\n if filename.endswith(ext):\n filename = filename[:-len(ext)] + '.pyx'\n return filename\n else:\n return None\n\n # `cupy.ndarray` (aka. `cupy._core.core.ndarray`) has `__module__`\n # attribute overwritten and `inspect.getsourcefile` doesn't work on it,\n # so use `cupy._core.core`'s source location instead\n if obj is cupy.ndarray:\n filename = get_pyx_file(cupy._core.core)\n if filename is None:\n return None\n linenum = None\n # `inspect.getsourcefile` returns None for C-extension objects\n elif filename is None:\n filename = get_pyx_file(obj)\n if filename is None:\n return None\n linenum = None\n else:\n # Get the source line number\n _, linenum = inspect.getsourcelines(obj)\n assert isinstance(linenum, int)\n\n filename = os.path.realpath(filename)\n relpath = _get_source_relative_path(filename)\n\n fragment = '' if linenum is None else f'#L{linenum}'\n return f'https://github.com/cupy/cupy/blob/{tag}/{relpath}{fragment}'\n\n\n# Python Array API methods have type hints, which do not render\n# nicely by default. This option moves the type hints to the\n# doc content so as to make the function signatures shorter and\n# look nicer.\nautodoc_typehints = 'description'\n\n\ndef remove_array_api_module_docstring(app, what, name, obj, options, lines):\n # We don't want to take the docstring in cupyx.array_api because:\n # 1. It's not how we document module-level stuff\n # 2. The docstring is taken from numpy.array_api, which requires rewriting\n # Here we remove the docstring and will add our own description in array_api.rst\n if what == \"module\" and 'array_api' in name:\n del lines[:]\n\ndef fix_jit_callable_signature(\n app, what, name, obj, options, signature, return_annotation):\n if 'cupyx.jit' in name and callable(obj) and signature is None:\n return (f'{inspect.signature(obj)}', None)\n\ndef fix_ndarray_signature(\n app, what, name, obj, options, signature, return_annotation):\n # Replace `_ndarray_base` with `ndarray` for signatures and return types\n # on docs.\n if signature is not None:\n signature = signature.replace('_ndarray_base', 'ndarray')\n if return_annotation == '_ndarray_base':\n return_annotation = 'ndarray'\n return (signature, return_annotation)\n\ndef setup(app):\n app.connect(\"autodoc-process-docstring\", remove_array_api_module_docstring)\n app.connect(\"autodoc-process-signature\", fix_jit_callable_signature)\n app.connect(\"autodoc-process-signature\", fix_ndarray_signature)\n",
"path": "docs/source/conf.py"
}
] | diff --git a/.github/workflows/backport.yml b/.github/workflows/backport.yml
index 151b0d18c1f..179694d231e 100644
--- a/.github/workflows/backport.yml
+++ b/.github/workflows/backport.yml
@@ -4,7 +4,7 @@ on:
pull_request_target:
types: [closed, labeled]
branches:
- - master
+ - main
jobs:
backport:
diff --git a/.github/workflows/flexci.yml b/.github/workflows/flexci.yml
index 85b4b9bceb3..30466a47953 100644
--- a/.github/workflows/flexci.yml
+++ b/.github/workflows/flexci.yml
@@ -2,7 +2,7 @@ name: "FlexCI"
on:
push:
- branches: ["master", "v[0-9]+", "hotfix-*"]
+ branches: ["main", "v[0-9]+", "hotfix-*"]
issue_comment:
types: [created]
diff --git a/.pfnci/BRANCH b/.pfnci/BRANCH
index 1f7391f92b6..ba2906d0666 100644
--- a/.pfnci/BRANCH
+++ b/.pfnci/BRANCH
@@ -1 +1 @@
-master
+main
diff --git a/.pfnci/coverage.rst b/.pfnci/coverage.rst
index b371ea85285..537480ac972 100644
--- a/.pfnci/coverage.rst
+++ b/.pfnci/coverage.rst
@@ -3168,13 +3168,13 @@ CuPy CI Test Coverage
.. _t21: https://ci.preferred.jp/cupy.linux.cuda120.multi/
.. _d21: linux/tests/cuda120.multi.Dockerfile
.. _s21: linux/tests/cuda120.multi.sh
-.. _t22: https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-4-3,label=mnj-mi50/
+.. _t22: https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-4-3,label=mnj-mi50/
.. _d22: linux/tests/rocm-4-3.Dockerfile
.. _s22: linux/tests/rocm-4-3.sh
-.. _t23: https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-5-0,label=mnj-mi50/
+.. _t23: https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-5-0,label=mnj-mi50/
.. _d23: linux/tests/rocm-5-0.Dockerfile
.. _s23: linux/tests/rocm-5-0.sh
-.. _t24: https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-5-3,label=mnj-mi50/
+.. _t24: https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-5-3,label=mnj-mi50/
.. _d24: linux/tests/rocm-5-3.Dockerfile
.. _s24: linux/tests/rocm-5-3.sh
.. _t25: https://ci.preferred.jp/cupy.linux.cuda-slow/
diff --git a/.pfnci/linux/tests/actions/benchmark.sh b/.pfnci/linux/tests/actions/benchmark.sh
index 7e537015a0c..452ec99c58c 100755
--- a/.pfnci/linux/tests/actions/benchmark.sh
+++ b/.pfnci/linux/tests/actions/benchmark.sh
@@ -12,7 +12,7 @@ python3 prof.py benchmarks/bench_ufunc_cupy.py -c
mkdir target
mv *.csv target/
-# Run benchmarks for master branch
+# Run benchmarks for main branch
# Since GCP instance may change and use diff gen processsors/GPUs
# we just recompile and run to avoid false errors
python3 -m pip uninstall -y cupy
@@ -23,10 +23,10 @@ if [[ "${PULL_REQUEST:-}" == "" ]]; then
# For branches we compare against the latest release
# TODO(ecastill) find a programatical way of doing this
# sorting tags, or just checking the dates may mix the
- # stable & master branches
+ # stable & main branches
git checkout tags/v11.0.0a2 -b v11.0.0a2
else
- git checkout master
+ git checkout main
fi
git submodule update --init
python3 -m pip install --user -v .
diff --git a/.pfnci/matrix.yaml b/.pfnci/matrix.yaml
index b10453beb07..248e6a069d2 100644
--- a/.pfnci/matrix.yaml
+++ b/.pfnci/matrix.yaml
@@ -347,7 +347,7 @@
# ROCm 4.3 | Linux
# The lowest ROCm version matrix is intended to cover the lowest supported combination.
- project: "cupy.linux.rocm-4-3"
- _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-4-3,label=mnj-mi50/"
+ _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-4-3,label=mnj-mi50/"
tags: null # Jenkins
target: "rocm-4-3"
system: "linux"
@@ -370,7 +370,7 @@
# ROCm 5.0 | Linux
- project: "cupy.linux.rocm-5-0"
- _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-5-0,label=mnj-mi50/"
+ _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-5-0,label=mnj-mi50/"
tags: null # Jenkins
target: "rocm-5-0"
system: "linux"
@@ -394,7 +394,7 @@
# ROCm 5.3 | Linux
# The latest ROCm version matrix is intended to cover the highest supported combination.
- project: "cupy.linux.rocm-5-3"
- _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_master/TEST=rocm-5-3,label=mnj-mi50/"
+ _url: "https://jenkins.preferred.jp/job/chainer/job/cupy_main/TEST=rocm-5-3,label=mnj-mi50/"
tags: null # Jenkins
target: "rocm-5-3"
system: "linux"
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 72f8c9706c8..14b2cd1710d 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -8,8 +8,8 @@ repos:
# Git
- id: check-added-large-files
- id: no-commit-to-branch
- name: "ensure no direct commit to master/vXX branch"
- args: [--branch, "master", --pattern, "v\\d+"]
+ name: "ensure no direct commit to main/vXX branch"
+ args: [--branch, "main", --pattern, "v\\d+"]
- id: check-case-conflict
# Contents
- id: mixed-line-ending
diff --git a/README.md b/README.md
index a3441672298..d324f2ab898 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-<div align="center"><img src="https://raw.githubusercontent.com/cupy/cupy/master/docs/image/cupy_logo_1000px.png" width="400"/></div>
+<div align="center"><img src="https://raw.githubusercontent.com/cupy/cupy/main/docs/image/cupy_logo_1000px.png" width="400"/></div>
# CuPy : NumPy & SciPy for GPU
@@ -12,7 +12,7 @@
[**Website**](https://cupy.dev/)
| [**Install**](https://docs.cupy.dev/en/stable/install.html)
| [**Tutorial**](https://docs.cupy.dev/en/stable/user_guide/basic.html)
-| [**Examples**](https://github.com/cupy/cupy/tree/master/examples)
+| [**Examples**](https://github.com/cupy/cupy/tree/main/examples)
| [**Documentation**](https://docs.cupy.dev/en/stable/)
| [**API Reference**](https://docs.cupy.dev/en/stable/reference/)
| [**Forum**](https://groups.google.com/forum/#!forum/cupy)
diff --git a/docs/source/conf.py b/docs/source/conf.py
index be8f7a583c8..0ab3dd19045 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -28,7 +28,7 @@
rtd_version = os.environ.get('READTHEDOCS_VERSION')
if rtd_version == 'latest':
- tag = 'master'
+ tag = 'main'
else:
tag = 'v{}'.format(__version__)
extlinks = {
diff --git a/docs/source/contribution.rst b/docs/source/contribution.rst
index 9018369b3bd..b11a8422ac8 100644
--- a/docs/source/contribution.rst
+++ b/docs/source/contribution.rst
@@ -81,24 +81,24 @@ The GitHub milestone is basically used for collecting the issues and PRs resolve
Git Branches
~~~~~~~~~~~~
-The ``master`` branch is used to develop pre-release versions.
-It means that **alpha, beta, and RC updates are developed at the** ``master`` **branch**.
+The ``main`` branch is used to develop pre-release versions.
+It means that **alpha, beta, and RC updates are developed at the** ``main`` **branch**.
This branch contains the most up-to-date source tree that includes features newly added after the latest major version.
The stable version is developed at the individual branch named as ``vN`` where "N" reflects the version number (we call it a *versioned branch*).
For example, v1.0.0, v1.0.1, and v1.0.2 will be developed at the ``v1`` branch.
**Notes for contributors:**
-When you send a pull request, you basically have to send it to the ``master`` branch.
+When you send a pull request, you basically have to send it to the ``main`` branch.
If the change can also be applied to the stable version, a core team member will apply the same change to the stable version so that the change is also included in the next revision update.
-If the change is only applicable to the stable version and not to the ``master`` branch, please send it to the versioned branch.
+If the change is only applicable to the stable version and not to the ``main`` branch, please send it to the versioned branch.
We basically only accept changes to the latest versioned branch (where the stable version is developed) unless the fix is critical.
-If you want to make a new feature of the ``master`` branch available in the current stable version, please send a *backport PR* to the stable version (the latest ``vN`` branch).
+If you want to make a new feature of the ``main`` branch available in the current stable version, please send a *backport PR* to the stable version (the latest ``vN`` branch).
See the next section for details.
-*Note: a change that can be applied to both branches should be sent to the* ``master`` *branch.*
+*Note: a change that can be applied to both branches should be sent to the* ``main`` *branch.*
*Each release of the stable version is also merged to the development version so that the change is also reflected to the next major version.*
Feature Backport PRs
@@ -134,7 +134,7 @@ First of all, before starting to write any code, do not forget to confirm the fo
- Read through the :ref:`coding-guide` and :ref:`testing-guide`.
- Check the appropriate branch that you should send the PR following :ref:`contrib-git-branches`.
- If you do not have any idea about selecting a branch, please choose the ``master`` branch.
+ If you do not have any idea about selecting a branch, please choose the ``main`` branch.
In particular, **check the branch before writing any code.**
The current source tree of the chosen branch is the starting point of your change.
@@ -149,7 +149,7 @@ Note that this automatic PR test only includes CPU tests.
.. note::
- We are also running continuous integration with GPU tests for the ``master`` branch and the versioned branch of the latest major version.
+ We are also running continuous integration with GPU tests for the ``main`` branch and the versioned branch of the latest major version.
Since this service is currently running on our internal server, we do not use it for automatic PR tests to keep the server secure.
If you are planning to add a new feature or modify existing APIs, **it is recommended to open an issue and discuss the design first.**
@@ -389,7 +389,7 @@ When adding a new feature to the framework, you also need to document it in the
If you are unsure about how to fix the documentation, you can submit a pull request without doing so.
Reviewers will help you fix the documentation appropriately.
-The documentation source is stored under `docs directory <https://github.com/cupy/cupy/tree/master/docs>`_ and written in `reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html>`_ format.
+The documentation source is stored under `docs directory <https://github.com/cupy/cupy/tree/main/docs>`_ and written in `reStructuredText <http://www.sphinx-doc.org/en/master/usage/restructuredtext/index.html>`_ format.
To build the documentation, you need to install `Sphinx <http://www.sphinx-doc.org/>`_::
diff --git a/docs/source/user_guide/kernel.rst b/docs/source/user_guide/kernel.rst
index a6814152066..b077886b220 100644
--- a/docs/source/user_guide/kernel.rst
+++ b/docs/source/user_guide/kernel.rst
@@ -381,7 +381,7 @@ It may be important to note that this dedicated memory bank is not shared with t
For now, CuPy offers no helper routines to create user defined composite types.
Such composite types can however be built recursively using NumPy dtype `offsets` and `itemsize` capabilities,
-see `cupy/examples/custum_struct <https://github.com/cupy/cupy/tree/master/examples/custom_struct>`_ for examples of advanced usage.
+see `cupy/examples/custum_struct <https://github.com/cupy/cupy/tree/main/examples/custom_struct>`_ for examples of advanced usage.
.. warning::
You cannot directly pass static arrays as kernel arguments with the ``type arg[N]`` syntax where N is a compile time constant. The signature of ``__global__ void kernel(float arg[5])`` is seen as ``__global__ void kernel(float* arg)`` by the compiler. If you want to pass five floats to the kernel by value you need to define a custom structure ``struct float5 { float val[5]; };`` and modify the kernel signature to ``__global__ void kernel(float5 arg)``.
|
PrefectHQ__prefect-2056 | AuthorizationError when watching logs from CLI
When running with `prefect run cloud --logs`, after a few minutes I see the following error:
```
prefect.utilities.exceptions.AuthorizationError: [{'message': 'AuthenticationError', 'locations': [], 'path': ['flow_run'], 'extensions': {'code': 'UNAUTHENTICATED'}}]
```
The run itself succeeds but the logs stop at that point, so I guess the token is initially valid but just expires...?
cc @joshmeek @cicdw
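For context, the fix that was merged (see the diff further down in this record) stops constructing a fresh `Client()` on every polling iteration and reuses the client that was authenticated for the initial query. A minimal sketch of that pattern, with the GraphQL query reduced to a labeled placeholder:

```
import time

from prefect.client import Client

client = Client()  # authenticate once and reuse it for every poll

# Placeholder for the GraphQL log query the CLI builds with `with_args`.
query = {"query": {"flow_run": {"id": True}}}

while True:
    # Before the fix this line read `Client().graphql(query)`, constructing a
    # brand-new client object on every iteration of the log-streaming loop.
    result = client.graphql(query)
    if result.data.flow_run:
        break
    time.sleep(3)
```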
| [
{
"content": "import json\nimport time\n\nimport click\nfrom tabulate import tabulate\n\nfrom prefect.client import Client\nfrom prefect.utilities.graphql import EnumValue, with_args\n\n\[email protected](hidden=True)\ndef run():\n \"\"\"\n Run Prefect flows.\n\n \\b\n Usage:\n $ prefect run [STORAGE/PLATFORM]\n\n \\b\n Arguments:\n cloud Run flows in Prefect Cloud\n\n \\b\n Examples:\n $ prefect run cloud --name Test-Flow --project My-Project\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n\n \\b\n $ prefect run cloud --name Test-Flow --project My-Project --watch\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n Scheduled -> Submitted -> Running -> Success\n \"\"\"\n\n\[email protected](hidden=True)\[email protected](\n \"--name\", \"-n\", required=True, help=\"The name of a flow to run.\", hidden=True\n)\[email protected](\n \"--project\",\n \"-p\",\n required=True,\n help=\"The project that contains the flow.\",\n hidden=True,\n)\[email protected](\"--version\", \"-v\", type=int, help=\"A flow version to run.\", hidden=True)\[email protected](\n \"--parameters-file\",\n \"-pf\",\n help=\"A parameters JSON file.\",\n hidden=True,\n type=click.Path(exists=True),\n)\[email protected](\n \"--parameters-string\", \"-ps\", help=\"A parameters JSON string.\", hidden=True\n)\[email protected](\"--run-name\", \"-rn\", help=\"A name to assign for this run.\", hidden=True)\[email protected](\n \"--watch\",\n \"-w\",\n is_flag=True,\n help=\"Watch current state of the flow run.\",\n hidden=True,\n)\[email protected](\n \"--logs\", \"-l\", is_flag=True, help=\"Live logs of the flow run.\", hidden=True\n)\ndef cloud(\n name, project, version, parameters_file, parameters_string, run_name, watch, logs\n):\n \"\"\"\n Run a registered flow in Prefect Cloud.\n\n \\b\n Options:\n --name, -n TEXT The name of a flow to run [required]\n --project, -p TEXT The name of a project that contains the flow [required]\n --version, -v INTEGER A flow version to run\n --parameters-file, -pf FILE PATH A filepath of a JSON file containing parameters\n --parameters-string, -ps TEXT A string of JSON parameters\n --run-name, -rn TEXT A name to assign for this run\n --watch, -w Watch current state of the flow run, stream output to stdout\n --logs, -l Get logs of the flow run, stream output to stdout\n\n \\b\n If both `--parameters-file` and `--parameters-string` are provided then the values passed\n in through the string will override the values provided from the file.\n\n \\b\n e.g.\n File contains: {\"a\": 1, \"b\": 2}\n String: '{\"a\": 3}'\n Parameters passed to the flow run: {\"a\": 3, \"b\": 2}\n \"\"\"\n\n if watch and logs:\n click.secho(\n \"Streaming state and logs not currently supported together.\", fg=\"red\"\n )\n return\n\n query = {\n \"query\": {\n with_args(\n \"flow\",\n {\n \"where\": {\n \"_and\": {\n \"name\": {\"_eq\": name},\n \"version\": {\"_eq\": version},\n \"project\": {\"name\": {\"_eq\": project}},\n }\n },\n \"order_by\": {\n \"name\": EnumValue(\"asc\"),\n \"version\": EnumValue(\"desc\"),\n },\n \"distinct_on\": EnumValue(\"name\"),\n },\n ): {\"id\": True}\n }\n }\n\n client = Client()\n result = client.graphql(query)\n\n flow_data = result.data.flow\n\n if flow_data:\n flow_id = flow_data[0].id\n else:\n click.secho(\"{} not found\".format(name), fg=\"red\")\n return\n\n # Load parameters from file if provided\n file_params = {}\n if parameters_file:\n with open(parameters_file) as params_file:\n file_params = json.load(params_file)\n\n # Load parameters from string if provided\n 
string_params = {}\n if parameters_string:\n string_params = json.loads(parameters_string)\n\n flow_run_id = client.create_flow_run(\n flow_id=flow_id, parameters={**file_params, **string_params}, run_name=run_name\n )\n click.echo(\"Flow Run ID: {}\".format(flow_run_id))\n\n if watch:\n current_states = []\n while True:\n query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\n with_args(\n \"states\",\n {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}},\n ): {\"state\": True, \"timestamp\": True}\n }\n }\n }\n\n result = client.graphql(query)\n\n # Filter through retrieved states and output in order\n for state_index in result.data.flow_run_by_pk.states:\n state = state_index.state\n if state not in current_states:\n if state != \"Success\" and state != \"Failed\":\n click.echo(\"{} -> \".format(state), nl=False)\n else:\n click.echo(state)\n return\n\n current_states.append(state)\n\n time.sleep(3)\n\n if logs:\n all_logs = []\n\n log_query = {\n with_args(\n \"logs\", {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}}\n ): {\"timestamp\": True, \"message\": True, \"level\": True},\n \"start_time\": True,\n }\n\n query = {\n \"query\": {\n with_args(\n \"flow_run\",\n {\n \"where\": {\"id\": {\"_eq\": flow_run_id}},\n \"order_by\": {EnumValue(\"start_time\"): EnumValue(\"desc\")},\n },\n ): log_query\n }\n }\n\n while True:\n result = Client().graphql(query)\n\n flow_run = result.data.flow_run\n if not flow_run:\n click.secho(\"{} not found\".format(flow_run_id), fg=\"red\")\n return\n\n new_run = flow_run[0]\n logs = new_run.logs\n output = []\n\n for i in logs:\n if [i.timestamp, i.level, i.message] not in all_logs:\n\n if not len(all_logs):\n click.echo(\n tabulate(\n [[i.timestamp, i.level, i.message]],\n headers=[\"TIMESTAMP\", \"LEVEL\", \"MESSAGE\"],\n tablefmt=\"plain\",\n numalign=\"left\",\n stralign=\"left\",\n )\n )\n all_logs.append([i.timestamp, i.level, i.message])\n continue\n\n output.append([i.timestamp, i.level, i.message])\n all_logs.append([i.timestamp, i.level, i.message])\n\n if output:\n click.echo(\n tabulate(output, tablefmt=\"plain\", numalign=\"left\", stralign=\"left\")\n )\n\n # Check if state is either Success or Failed, exit if it is\n pk_query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\"state\": True}\n }\n }\n result = client.graphql(pk_query)\n\n if (\n result.data.flow_run_by_pk.state == \"Success\"\n or result.data.flow_run_by_pk.state == \"Failed\"\n ):\n return\n\n time.sleep(3)\n",
"path": "src/prefect/cli/run.py"
}
] | [
{
"content": "import json\nimport time\n\nimport click\nfrom tabulate import tabulate\n\nfrom prefect.client import Client\nfrom prefect.utilities.graphql import EnumValue, with_args\n\n\[email protected](hidden=True)\ndef run():\n \"\"\"\n Run Prefect flows.\n\n \\b\n Usage:\n $ prefect run [STORAGE/PLATFORM]\n\n \\b\n Arguments:\n cloud Run flows in Prefect Cloud\n\n \\b\n Examples:\n $ prefect run cloud --name Test-Flow --project My-Project\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n\n \\b\n $ prefect run cloud --name Test-Flow --project My-Project --watch\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n Scheduled -> Submitted -> Running -> Success\n \"\"\"\n\n\[email protected](hidden=True)\[email protected](\n \"--name\", \"-n\", required=True, help=\"The name of a flow to run.\", hidden=True\n)\[email protected](\n \"--project\",\n \"-p\",\n required=True,\n help=\"The project that contains the flow.\",\n hidden=True,\n)\[email protected](\"--version\", \"-v\", type=int, help=\"A flow version to run.\", hidden=True)\[email protected](\n \"--parameters-file\",\n \"-pf\",\n help=\"A parameters JSON file.\",\n hidden=True,\n type=click.Path(exists=True),\n)\[email protected](\n \"--parameters-string\", \"-ps\", help=\"A parameters JSON string.\", hidden=True\n)\[email protected](\"--run-name\", \"-rn\", help=\"A name to assign for this run.\", hidden=True)\[email protected](\n \"--watch\",\n \"-w\",\n is_flag=True,\n help=\"Watch current state of the flow run.\",\n hidden=True,\n)\[email protected](\n \"--logs\", \"-l\", is_flag=True, help=\"Live logs of the flow run.\", hidden=True\n)\ndef cloud(\n name, project, version, parameters_file, parameters_string, run_name, watch, logs\n):\n \"\"\"\n Run a registered flow in Prefect Cloud.\n\n \\b\n Options:\n --name, -n TEXT The name of a flow to run [required]\n --project, -p TEXT The name of a project that contains the flow [required]\n --version, -v INTEGER A flow version to run\n --parameters-file, -pf FILE PATH A filepath of a JSON file containing parameters\n --parameters-string, -ps TEXT A string of JSON parameters\n --run-name, -rn TEXT A name to assign for this run\n --watch, -w Watch current state of the flow run, stream output to stdout\n --logs, -l Get logs of the flow run, stream output to stdout\n\n \\b\n If both `--parameters-file` and `--parameters-string` are provided then the values passed\n in through the string will override the values provided from the file.\n\n \\b\n e.g.\n File contains: {\"a\": 1, \"b\": 2}\n String: '{\"a\": 3}'\n Parameters passed to the flow run: {\"a\": 3, \"b\": 2}\n \"\"\"\n\n if watch and logs:\n click.secho(\n \"Streaming state and logs not currently supported together.\", fg=\"red\"\n )\n return\n\n query = {\n \"query\": {\n with_args(\n \"flow\",\n {\n \"where\": {\n \"_and\": {\n \"name\": {\"_eq\": name},\n \"version\": {\"_eq\": version},\n \"project\": {\"name\": {\"_eq\": project}},\n }\n },\n \"order_by\": {\n \"name\": EnumValue(\"asc\"),\n \"version\": EnumValue(\"desc\"),\n },\n \"distinct_on\": EnumValue(\"name\"),\n },\n ): {\"id\": True}\n }\n }\n\n client = Client()\n result = client.graphql(query)\n\n flow_data = result.data.flow\n\n if flow_data:\n flow_id = flow_data[0].id\n else:\n click.secho(\"{} not found\".format(name), fg=\"red\")\n return\n\n # Load parameters from file if provided\n file_params = {}\n if parameters_file:\n with open(parameters_file) as params_file:\n file_params = json.load(params_file)\n\n # Load parameters from string if provided\n 
string_params = {}\n if parameters_string:\n string_params = json.loads(parameters_string)\n\n flow_run_id = client.create_flow_run(\n flow_id=flow_id, parameters={**file_params, **string_params}, run_name=run_name\n )\n click.echo(\"Flow Run ID: {}\".format(flow_run_id))\n\n if watch:\n current_states = []\n while True:\n query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\n with_args(\n \"states\",\n {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}},\n ): {\"state\": True, \"timestamp\": True}\n }\n }\n }\n\n result = client.graphql(query)\n\n # Filter through retrieved states and output in order\n for state_index in result.data.flow_run_by_pk.states:\n state = state_index.state\n if state not in current_states:\n if state != \"Success\" and state != \"Failed\":\n click.echo(\"{} -> \".format(state), nl=False)\n else:\n click.echo(state)\n return\n\n current_states.append(state)\n\n time.sleep(3)\n\n if logs:\n all_logs = []\n\n log_query = {\n with_args(\n \"logs\", {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}}\n ): {\"timestamp\": True, \"message\": True, \"level\": True},\n \"start_time\": True,\n }\n\n query = {\n \"query\": {\n with_args(\n \"flow_run\",\n {\n \"where\": {\"id\": {\"_eq\": flow_run_id}},\n \"order_by\": {EnumValue(\"start_time\"): EnumValue(\"desc\")},\n },\n ): log_query\n }\n }\n\n while True:\n result = client.graphql(query)\n\n flow_run = result.data.flow_run\n if not flow_run:\n click.secho(\"{} not found\".format(flow_run_id), fg=\"red\")\n return\n\n new_run = flow_run[0]\n logs = new_run.logs\n output = []\n\n for i in logs:\n if [i.timestamp, i.level, i.message] not in all_logs:\n\n if not len(all_logs):\n click.echo(\n tabulate(\n [[i.timestamp, i.level, i.message]],\n headers=[\"TIMESTAMP\", \"LEVEL\", \"MESSAGE\"],\n tablefmt=\"plain\",\n numalign=\"left\",\n stralign=\"left\",\n )\n )\n all_logs.append([i.timestamp, i.level, i.message])\n continue\n\n output.append([i.timestamp, i.level, i.message])\n all_logs.append([i.timestamp, i.level, i.message])\n\n if output:\n click.echo(\n tabulate(output, tablefmt=\"plain\", numalign=\"left\", stralign=\"left\")\n )\n\n # Check if state is either Success or Failed, exit if it is\n pk_query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\"state\": True}\n }\n }\n result = client.graphql(pk_query)\n\n if (\n result.data.flow_run_by_pk.state == \"Success\"\n or result.data.flow_run_by_pk.state == \"Failed\"\n ):\n return\n\n time.sleep(3)\n",
"path": "src/prefect/cli/run.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 2669873011ce..72e62747e516 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,7 @@ These changes are available in the [master branch](https://github.com/PrefectHQ/
### Fixes
- Ensure microseconds are respected on `start_date` provided to CronClock - [#2031](https://github.com/PrefectHQ/prefect/pull/2031)
+- Fix duplicate Client connections when using `--logs` flag from `run cloud` CLI command - [#2056](https://github.com/PrefectHQ/prefect/pull/2056)
### Deprecations
diff --git a/src/prefect/cli/run.py b/src/prefect/cli/run.py
index 359394a650c1..19c3f8d15e85 100644
--- a/src/prefect/cli/run.py
+++ b/src/prefect/cli/run.py
@@ -202,7 +202,7 @@ def cloud(
}
while True:
- result = Client().graphql(query)
+ result = client.graphql(query)
flow_run = result.data.flow_run
if not flow_run:
|
django-cms__django-cms-3415 | Groups cannot be deleted if a custom user model is used
If a custom user model is used, groups cannot be deleted, because the pre_delete signal clears the user permission cache. The users are accessed through the reverse descriptor, but with a custom user model this accessor is not necessarily called `user_set`, so an `AttributeError` is raised:
```
def pre_delete_group(instance, **kwargs):
    for user in instance.user_set.all():
        clear_user_permission_cache(user)
```
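A minimal sketch of the direction a fix could take (it mirrors the change in the pr_diff for this record): resolve the reverse accessor through the same compatibility helper that `pre_save_group` already uses, instead of hard-coding `user_set`:

```
from cms.cache.permissions import clear_user_permission_cache
from cms.utils.compat.dj import user_related_name


def pre_delete_group(instance, **kwargs):
    # Look the accessor up by its configured related name so custom user
    # models whose reverse descriptor is not called `user_set` still work.
    user_set = getattr(instance, user_related_name)
    for user in user_set.all():
        clear_user_permission_cache(user)
```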
| [
{
"content": "# -*- coding: utf-8 -*-\n\nfrom cms.cache.permissions import clear_user_permission_cache\nfrom cms.models import PageUser, PageUserGroup\nfrom cms.utils.compat.dj import user_related_name\nfrom menus.menu_pool import menu_pool\n\n\ndef post_save_user(instance, raw, created, **kwargs):\n \"\"\"Signal called when new user is created, required only when CMS_PERMISSION.\n Assigns creator of the user to PageUserInfo model, so we know who had created\n this user account.\n\n requires: CurrentUserMiddleware\n \"\"\"\n from cms.utils.permissions import get_current_user\n # read current user from thread locals\n creator = get_current_user()\n if not creator or not created or creator.is_anonymous():\n return\n\n page_user = PageUser(user_ptr_id=instance.pk, created_by=creator)\n page_user.__dict__.update(instance.__dict__)\n page_user.save()\n\n\ndef post_save_user_group(instance, raw, created, **kwargs):\n \"\"\"The same like post_save_user, but for Group, required only when\n CMS_PERMISSION.\n Assigns creator of the group to PageUserGroupInfo model, so we know who had\n created this user account.\n\n requires: CurrentUserMiddleware\n \"\"\"\n from cms.utils.permissions import get_current_user\n # read current user from thread locals\n creator = get_current_user()\n if not creator or not created or creator.is_anonymous():\n return\n page_user = PageUserGroup(group_ptr_id=instance.pk, created_by=creator)\n page_user.__dict__.update(instance.__dict__)\n page_user.save()\n\n\ndef pre_save_user(instance, raw, **kwargs):\n clear_user_permission_cache(instance)\n\n\ndef pre_delete_user(instance, **kwargs):\n clear_user_permission_cache(instance)\n\n\ndef pre_save_group(instance, raw, **kwargs):\n if instance.pk:\n user_set = getattr(instance, user_related_name)\n for user in user_set.all():\n clear_user_permission_cache(user)\n\n\ndef pre_delete_group(instance, **kwargs):\n for user in instance.user_set.all():\n clear_user_permission_cache(user)\n\n\ndef _clear_users_permissions(instance):\n if instance.user:\n clear_user_permission_cache(instance.user)\n if instance.group:\n user_set = getattr(instance.group, user_related_name)\n for user in user_set.all():\n clear_user_permission_cache(user)\n\n\ndef pre_save_pagepermission(instance, raw, **kwargs):\n _clear_users_permissions(instance)\n\n\ndef pre_delete_pagepermission(instance, **kwargs):\n _clear_users_permissions(instance)\n\n\ndef pre_save_globalpagepermission(instance, raw, **kwargs):\n _clear_users_permissions(instance)\n menu_pool.clear(all=True)\n\n\ndef pre_delete_globalpagepermission(instance, **kwargs):\n _clear_users_permissions(instance)\n\n\n",
"path": "cms/signals/permissions.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\nfrom cms.cache.permissions import clear_user_permission_cache\nfrom cms.models import PageUser, PageUserGroup\nfrom cms.utils.compat.dj import user_related_name\nfrom menus.menu_pool import menu_pool\n\n\ndef post_save_user(instance, raw, created, **kwargs):\n \"\"\"Signal called when new user is created, required only when CMS_PERMISSION.\n Assigns creator of the user to PageUserInfo model, so we know who had created\n this user account.\n\n requires: CurrentUserMiddleware\n \"\"\"\n from cms.utils.permissions import get_current_user\n # read current user from thread locals\n creator = get_current_user()\n if not creator or not created or creator.is_anonymous():\n return\n\n page_user = PageUser(user_ptr_id=instance.pk, created_by=creator)\n page_user.__dict__.update(instance.__dict__)\n page_user.save()\n\n\ndef post_save_user_group(instance, raw, created, **kwargs):\n \"\"\"The same like post_save_user, but for Group, required only when\n CMS_PERMISSION.\n Assigns creator of the group to PageUserGroupInfo model, so we know who had\n created this user account.\n\n requires: CurrentUserMiddleware\n \"\"\"\n from cms.utils.permissions import get_current_user\n # read current user from thread locals\n creator = get_current_user()\n if not creator or not created or creator.is_anonymous():\n return\n page_user = PageUserGroup(group_ptr_id=instance.pk, created_by=creator)\n page_user.__dict__.update(instance.__dict__)\n page_user.save()\n\n\ndef pre_save_user(instance, raw, **kwargs):\n clear_user_permission_cache(instance)\n\n\ndef pre_delete_user(instance, **kwargs):\n clear_user_permission_cache(instance)\n\n\ndef pre_save_group(instance, raw, **kwargs):\n if instance.pk:\n user_set = getattr(instance, user_related_name)\n for user in user_set.all():\n clear_user_permission_cache(user)\n\n\ndef pre_delete_group(instance, **kwargs):\n user_set = getattr(instance, user_related_name)\n for user in user_set.all():\n clear_user_permission_cache(user)\n\n\ndef _clear_users_permissions(instance):\n if instance.user:\n clear_user_permission_cache(instance.user)\n if instance.group:\n user_set = getattr(instance.group, user_related_name)\n for user in user_set.all():\n clear_user_permission_cache(user)\n\n\ndef pre_save_pagepermission(instance, raw, **kwargs):\n _clear_users_permissions(instance)\n\n\ndef pre_delete_pagepermission(instance, **kwargs):\n _clear_users_permissions(instance)\n\n\ndef pre_save_globalpagepermission(instance, raw, **kwargs):\n _clear_users_permissions(instance)\n menu_pool.clear(all=True)\n\n\ndef pre_delete_globalpagepermission(instance, **kwargs):\n _clear_users_permissions(instance)\n\n\n",
"path": "cms/signals/permissions.py"
}
] | diff --git a/cms/signals/permissions.py b/cms/signals/permissions.py
index 9d306bbd1a0..2a8f7075612 100644
--- a/cms/signals/permissions.py
+++ b/cms/signals/permissions.py
@@ -58,7 +58,8 @@ def pre_save_group(instance, raw, **kwargs):
def pre_delete_group(instance, **kwargs):
- for user in instance.user_set.all():
+ user_set = getattr(instance, user_related_name)
+ for user in user_set.all():
clear_user_permission_cache(user)
|
networkx__networkx-4431 | Documentation: Make classes AtlasView et al. from networkx/classes/coreviews.py accessible from documentation
Lest I seem ungrateful, I like networkx a lot, and rely on it for two of my main personal projects [fake-data-for-learning](https://github.com/munichpavel/fake-data-for-learning) and the WIP [clovek-ne-jezi-se](https://github.com/munichpavel/clovek-ne-jezi-se).
As I was trying to understand `AtlasView`s, I could find only examples in the documentation (see [this search](https://networkx.org/documentation/stable//search.html?q=AtlasView&check_keywords=yes&area=default#)), none of which pointed to the (well-documented) source code [networkx/classes/coreviews.py](https://github.com/networkx/networkx/blob/master/networkx/classes/coreviews.py).
I think the fix should just be a matter of tweaking how you have configured Sphinx to run.
| [
{
"content": "\"\"\"\n\"\"\"\nimport warnings\nfrom collections.abc import Mapping\n\n__all__ = [\n \"AtlasView\",\n \"AdjacencyView\",\n \"MultiAdjacencyView\",\n \"UnionAtlas\",\n \"UnionAdjacency\",\n \"UnionMultiInner\",\n \"UnionMultiAdjacency\",\n \"FilterAtlas\",\n \"FilterAdjacency\",\n \"FilterMultiInner\",\n \"FilterMultiAdjacency\",\n]\n\n\nclass AtlasView(Mapping):\n \"\"\"An AtlasView is a Read-only Mapping of Mappings.\n\n It is a View into a dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer level is read-only.\n\n See Also\n ========\n AdjacencyView - View into dict-of-dict-of-dict\n MultiAdjacencyView - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_atlas\",)\n\n def __getstate__(self):\n return {\"_atlas\": self._atlas}\n\n def __setstate__(self, state):\n self._atlas = state[\"_atlas\"]\n\n def __init__(self, d):\n self._atlas = d\n\n def __len__(self):\n return len(self._atlas)\n\n def __iter__(self):\n return iter(self._atlas)\n\n def __getitem__(self, key):\n return self._atlas[key]\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n def __str__(self):\n return str(self._atlas) # {nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._atlas!r})\"\n\n\nclass AdjacencyView(AtlasView):\n \"\"\"An AdjacencyView is a Read-only Map of Maps of Maps.\n\n It is a View into a dict-of-dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer levels are read-only.\n\n See Also\n ========\n AtlasView - View into dict-of-dict\n MultiAdjacencyView - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses AtlasView slots names _atlas\n\n def __getitem__(self, name):\n return AtlasView(self._atlas[name])\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n\nclass MultiAdjacencyView(AdjacencyView):\n \"\"\"An MultiAdjacencyView is a Read-only Map of Maps of Maps of Maps.\n\n It is a View into a dict-of-dict-of-dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer levels are read-only.\n\n See Also\n ========\n AtlasView - View into dict-of-dict\n AdjacencyView - View into dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses AtlasView slots names _atlas\n\n def __getitem__(self, name):\n return AdjacencyView(self._atlas[name])\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n\nclass UnionAtlas(Mapping):\n \"\"\"A read-only union of two atlases (dict-of-dict).\n\n The two dict-of-dicts represent the inner dict of\n an Adjacency: `G.succ[node]` and `G.pred[node]`.\n The inner level of dict of both hold attribute key:value\n pairs and is read-write. 
But the outer level is read-only.\n\n See Also\n ========\n UnionAdjacency - View into dict-of-dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_succ\", \"_pred\")\n\n def __getstate__(self):\n return {\"_succ\": self._succ, \"_pred\": self._pred}\n\n def __setstate__(self, state):\n self._succ = state[\"_succ\"]\n self._pred = state[\"_pred\"]\n\n def __init__(self, succ, pred):\n self._succ = succ\n self._pred = pred\n\n def __len__(self):\n return len(self._succ) + len(self._pred)\n\n def __iter__(self):\n return iter(set(self._succ.keys()) | set(self._pred.keys()))\n\n def __getitem__(self, key):\n try:\n return self._succ[key]\n except KeyError:\n return self._pred[key]\n\n def copy(self):\n result = {nbr: dd.copy() for nbr, dd in self._succ.items()}\n for nbr, dd in self._pred.items():\n if nbr in result:\n result[nbr].update(dd)\n else:\n result[nbr] = dd.copy()\n return result\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._succ!r}, {self._pred!r})\"\n\n\nclass UnionAdjacency(Mapping):\n \"\"\"A read-only union of dict Adjacencies as a Map of Maps of Maps.\n\n The two input dict-of-dict-of-dicts represent the union of\n `G.succ` and `G.pred`. Return values are UnionAtlas\n The inner level of dict is read-write. But the\n middle and outer levels are read-only.\n\n succ : a dict-of-dict-of-dict {node: nbrdict}\n pred : a dict-of-dict-of-dict {node: nbrdict}\n The keys for the two dicts should be the same\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_succ\", \"_pred\")\n\n def __getstate__(self):\n return {\"_succ\": self._succ, \"_pred\": self._pred}\n\n def __setstate__(self, state):\n self._succ = state[\"_succ\"]\n self._pred = state[\"_pred\"]\n\n def __init__(self, succ, pred):\n # keys must be the same for two input dicts\n assert len(set(succ.keys()) ^ set(pred.keys())) == 0\n self._succ = succ\n self._pred = pred\n\n def __len__(self):\n return len(self._succ) # length of each dict should be the same\n\n def __iter__(self):\n return iter(self._succ)\n\n def __getitem__(self, nbr):\n return UnionAtlas(self._succ[nbr], self._pred[nbr])\n\n def copy(self):\n return {n: self[n].copy() for n in self._succ}\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._succ!r}, {self._pred!r})\"\n\n\nclass UnionMultiInner(UnionAtlas):\n \"\"\"A read-only union of two inner dicts of MultiAdjacencies.\n\n The two input dict-of-dict-of-dicts represent the union of\n `G.succ[node]` and `G.pred[node]` for MultiDiGraphs.\n Return values are UnionAtlas.\n The inner level of dict is read-write. 
But the outer levels are read-only.\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionAdjacency - View into dict-of-dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses UnionAtlas slots names _succ, _pred\n\n def __getitem__(self, node):\n in_succ = node in self._succ\n in_pred = node in self._pred\n if in_succ:\n if in_pred:\n return UnionAtlas(self._succ[node], self._pred[node])\n return UnionAtlas(self._succ[node], {})\n return UnionAtlas({}, self._pred[node])\n\n def copy(self):\n nodes = set(self._succ.keys()) | set(self._pred.keys())\n return {n: self[n].copy() for n in nodes}\n\n\nclass UnionMultiAdjacency(UnionAdjacency):\n \"\"\"A read-only union of two dict MultiAdjacencies.\n\n The two input dict-of-dict-of-dict-of-dicts represent the union of\n `G.succ` and `G.pred` for MultiDiGraphs. Return values are UnionAdjacency.\n The inner level of dict is read-write. But the outer levels are read-only.\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionMultiInner - View into dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses UnionAdjacency slots names _succ, _pred\n\n def __getitem__(self, node):\n return UnionMultiInner(self._succ[node], self._pred[node])\n\n\nclass FilterAtlas(Mapping): # nodedict, nbrdict, keydict\n def __init__(self, d, NODE_OK):\n self._atlas = d\n self.NODE_OK = NODE_OK\n\n def __len__(self):\n return sum(1 for n in self)\n\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return (n for n in self.NODE_OK.nodes if n in self._atlas)\n return (n for n in self._atlas if self.NODE_OK(n))\n\n def __getitem__(self, key):\n if key in self._atlas and self.NODE_OK(key):\n return self._atlas[key]\n raise KeyError(f\"Key {key} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterAtlas.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. 
We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {u: self._atlas[u] for u in self.NODE_OK.nodes if u in self._atlas}\n return {u: d for u, d in self._atlas.items() if self.NODE_OK(u)}\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._atlas!r}, {self.NODE_OK!r})\"\n\n\nclass FilterAdjacency(Mapping): # edgedict\n def __init__(self, d, NODE_OK, EDGE_OK):\n self._atlas = d\n self.NODE_OK = NODE_OK\n self.EDGE_OK = EDGE_OK\n\n def __len__(self):\n return sum(1 for n in self)\n\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return (n for n in self.NODE_OK.nodes if n in self._atlas)\n return (n for n in self._atlas if self.NODE_OK(n))\n\n def __getitem__(self, node):\n if node in self._atlas and self.NODE_OK(node):\n\n def new_node_ok(nbr):\n return self.NODE_OK(nbr) and self.EDGE_OK(node, nbr)\n\n return FilterAtlas(self._atlas[node], new_node_ok)\n raise KeyError(f\"Key {node} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterAdjacency.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {\n u: {\n v: d\n for v, d in self._atlas[u].items()\n if self.NODE_OK(v)\n if self.EDGE_OK(u, v)\n }\n for u in self.NODE_OK.nodes\n if u in self._atlas\n }\n return {\n u: {v: d for v, d in nbrs.items() if self.NODE_OK(v) if self.EDGE_OK(u, v)}\n for u, nbrs in self._atlas.items()\n if self.NODE_OK(u)\n }\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n name = self.__class__.__name__\n return f\"{name}({self._atlas!r}, {self.NODE_OK!r}, {self.EDGE_OK!r})\"\n\n\nclass FilterMultiInner(FilterAdjacency): # muliedge_seconddict\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n my_nodes = (n for n in self.NODE_OK.nodes if n in self._atlas)\n else:\n my_nodes = (n for n in self._atlas if self.NODE_OK(n))\n for n in my_nodes:\n some_keys_ok = False\n for key in self._atlas[n]:\n if self.EDGE_OK(n, key):\n some_keys_ok = True\n break\n if some_keys_ok is True:\n yield n\n\n def __getitem__(self, nbr):\n if nbr in self._atlas and self.NODE_OK(nbr):\n\n def new_node_ok(key):\n return self.EDGE_OK(nbr, key)\n\n return FilterAtlas(self._atlas[nbr], new_node_ok)\n raise KeyError(f\"Key {nbr} not found\")\n\n # FIXME should this just be removed? 
we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterMultiInner.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {\n v: {k: d for k, d in self._atlas[v].items() if self.EDGE_OK(v, k)}\n for v in self.NODE_OK.nodes\n if v in self._atlas\n }\n return {\n v: {k: d for k, d in nbrs.items() if self.EDGE_OK(v, k)}\n for v, nbrs in self._atlas.items()\n if self.NODE_OK(v)\n }\n\n\nclass FilterMultiAdjacency(FilterAdjacency): # multiedgedict\n def __getitem__(self, node):\n if node in self._atlas and self.NODE_OK(node):\n\n def edge_ok(nbr, key):\n return self.NODE_OK(nbr) and self.EDGE_OK(node, nbr, key)\n\n return FilterMultiInner(self._atlas[node], self.NODE_OK, edge_ok)\n raise KeyError(f\"Key {node} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterMultiAdjacency.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n my_nodes = self.NODE_OK.nodes\n return {\n u: {\n v: {k: d for k, d in kd.items() if self.EDGE_OK(u, v, k)}\n for v, kd in self._atlas[u].items()\n if v in my_nodes\n }\n for u in my_nodes\n if u in self._atlas\n }\n return {\n u: {\n v: {k: d for k, d in kd.items() if self.EDGE_OK(u, v, k)}\n for v, kd in nbrs.items()\n if self.NODE_OK(v)\n }\n for u, nbrs in self._atlas.items()\n if self.NODE_OK(u)\n }\n",
"path": "networkx/classes/coreviews.py"
}
] | [
{
"content": "\"\"\"Views of core data structures such as nested Mappings (e.g. dict-of-dicts).\nThese ``Views`` often restrict element access, with either the entire view or\nlayers of nested mappings being read-only.\n\"\"\"\nimport warnings\nfrom collections.abc import Mapping\n\n__all__ = [\n \"AtlasView\",\n \"AdjacencyView\",\n \"MultiAdjacencyView\",\n \"UnionAtlas\",\n \"UnionAdjacency\",\n \"UnionMultiInner\",\n \"UnionMultiAdjacency\",\n \"FilterAtlas\",\n \"FilterAdjacency\",\n \"FilterMultiInner\",\n \"FilterMultiAdjacency\",\n]\n\n\nclass AtlasView(Mapping):\n \"\"\"An AtlasView is a Read-only Mapping of Mappings.\n\n It is a View into a dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer level is read-only.\n\n See Also\n ========\n AdjacencyView - View into dict-of-dict-of-dict\n MultiAdjacencyView - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_atlas\",)\n\n def __getstate__(self):\n return {\"_atlas\": self._atlas}\n\n def __setstate__(self, state):\n self._atlas = state[\"_atlas\"]\n\n def __init__(self, d):\n self._atlas = d\n\n def __len__(self):\n return len(self._atlas)\n\n def __iter__(self):\n return iter(self._atlas)\n\n def __getitem__(self, key):\n return self._atlas[key]\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n def __str__(self):\n return str(self._atlas) # {nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._atlas!r})\"\n\n\nclass AdjacencyView(AtlasView):\n \"\"\"An AdjacencyView is a Read-only Map of Maps of Maps.\n\n It is a View into a dict-of-dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer levels are read-only.\n\n See Also\n ========\n AtlasView - View into dict-of-dict\n MultiAdjacencyView - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses AtlasView slots names _atlas\n\n def __getitem__(self, name):\n return AtlasView(self._atlas[name])\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n\nclass MultiAdjacencyView(AdjacencyView):\n \"\"\"An MultiAdjacencyView is a Read-only Map of Maps of Maps of Maps.\n\n It is a View into a dict-of-dict-of-dict-of-dict data structure.\n The inner level of dict is read-write. But the\n outer levels are read-only.\n\n See Also\n ========\n AtlasView - View into dict-of-dict\n AdjacencyView - View into dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses AtlasView slots names _atlas\n\n def __getitem__(self, name):\n return AdjacencyView(self._atlas[name])\n\n def copy(self):\n return {n: self[n].copy() for n in self._atlas}\n\n\nclass UnionAtlas(Mapping):\n \"\"\"A read-only union of two atlases (dict-of-dict).\n\n The two dict-of-dicts represent the inner dict of\n an Adjacency: `G.succ[node]` and `G.pred[node]`.\n The inner level of dict of both hold attribute key:value\n pairs and is read-write. 
But the outer level is read-only.\n\n See Also\n ========\n UnionAdjacency - View into dict-of-dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_succ\", \"_pred\")\n\n def __getstate__(self):\n return {\"_succ\": self._succ, \"_pred\": self._pred}\n\n def __setstate__(self, state):\n self._succ = state[\"_succ\"]\n self._pred = state[\"_pred\"]\n\n def __init__(self, succ, pred):\n self._succ = succ\n self._pred = pred\n\n def __len__(self):\n return len(self._succ) + len(self._pred)\n\n def __iter__(self):\n return iter(set(self._succ.keys()) | set(self._pred.keys()))\n\n def __getitem__(self, key):\n try:\n return self._succ[key]\n except KeyError:\n return self._pred[key]\n\n def copy(self):\n result = {nbr: dd.copy() for nbr, dd in self._succ.items()}\n for nbr, dd in self._pred.items():\n if nbr in result:\n result[nbr].update(dd)\n else:\n result[nbr] = dd.copy()\n return result\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._succ!r}, {self._pred!r})\"\n\n\nclass UnionAdjacency(Mapping):\n \"\"\"A read-only union of dict Adjacencies as a Map of Maps of Maps.\n\n The two input dict-of-dict-of-dicts represent the union of\n `G.succ` and `G.pred`. Return values are UnionAtlas\n The inner level of dict is read-write. But the\n middle and outer levels are read-only.\n\n succ : a dict-of-dict-of-dict {node: nbrdict}\n pred : a dict-of-dict-of-dict {node: nbrdict}\n The keys for the two dicts should be the same\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = (\"_succ\", \"_pred\")\n\n def __getstate__(self):\n return {\"_succ\": self._succ, \"_pred\": self._pred}\n\n def __setstate__(self, state):\n self._succ = state[\"_succ\"]\n self._pred = state[\"_pred\"]\n\n def __init__(self, succ, pred):\n # keys must be the same for two input dicts\n assert len(set(succ.keys()) ^ set(pred.keys())) == 0\n self._succ = succ\n self._pred = pred\n\n def __len__(self):\n return len(self._succ) # length of each dict should be the same\n\n def __iter__(self):\n return iter(self._succ)\n\n def __getitem__(self, nbr):\n return UnionAtlas(self._succ[nbr], self._pred[nbr])\n\n def copy(self):\n return {n: self[n].copy() for n in self._succ}\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._succ!r}, {self._pred!r})\"\n\n\nclass UnionMultiInner(UnionAtlas):\n \"\"\"A read-only union of two inner dicts of MultiAdjacencies.\n\n The two input dict-of-dict-of-dicts represent the union of\n `G.succ[node]` and `G.pred[node]` for MultiDiGraphs.\n Return values are UnionAtlas.\n The inner level of dict is read-write. 
But the outer levels are read-only.\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionAdjacency - View into dict-of-dict-of-dict\n UnionMultiAdjacency - View into dict-of-dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses UnionAtlas slots names _succ, _pred\n\n def __getitem__(self, node):\n in_succ = node in self._succ\n in_pred = node in self._pred\n if in_succ:\n if in_pred:\n return UnionAtlas(self._succ[node], self._pred[node])\n return UnionAtlas(self._succ[node], {})\n return UnionAtlas({}, self._pred[node])\n\n def copy(self):\n nodes = set(self._succ.keys()) | set(self._pred.keys())\n return {n: self[n].copy() for n in nodes}\n\n\nclass UnionMultiAdjacency(UnionAdjacency):\n \"\"\"A read-only union of two dict MultiAdjacencies.\n\n The two input dict-of-dict-of-dict-of-dicts represent the union of\n `G.succ` and `G.pred` for MultiDiGraphs. Return values are UnionAdjacency.\n The inner level of dict is read-write. But the outer levels are read-only.\n\n See Also\n ========\n UnionAtlas - View into dict-of-dict\n UnionMultiInner - View into dict-of-dict-of-dict\n \"\"\"\n\n __slots__ = () # Still uses UnionAdjacency slots names _succ, _pred\n\n def __getitem__(self, node):\n return UnionMultiInner(self._succ[node], self._pred[node])\n\n\nclass FilterAtlas(Mapping): # nodedict, nbrdict, keydict\n def __init__(self, d, NODE_OK):\n self._atlas = d\n self.NODE_OK = NODE_OK\n\n def __len__(self):\n return sum(1 for n in self)\n\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return (n for n in self.NODE_OK.nodes if n in self._atlas)\n return (n for n in self._atlas if self.NODE_OK(n))\n\n def __getitem__(self, key):\n if key in self._atlas and self.NODE_OK(key):\n return self._atlas[key]\n raise KeyError(f\"Key {key} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterAtlas.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. 
We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {u: self._atlas[u] for u in self.NODE_OK.nodes if u in self._atlas}\n return {u: d for u, d in self._atlas.items() if self.NODE_OK(u)}\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({self._atlas!r}, {self.NODE_OK!r})\"\n\n\nclass FilterAdjacency(Mapping): # edgedict\n def __init__(self, d, NODE_OK, EDGE_OK):\n self._atlas = d\n self.NODE_OK = NODE_OK\n self.EDGE_OK = EDGE_OK\n\n def __len__(self):\n return sum(1 for n in self)\n\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return (n for n in self.NODE_OK.nodes if n in self._atlas)\n return (n for n in self._atlas if self.NODE_OK(n))\n\n def __getitem__(self, node):\n if node in self._atlas and self.NODE_OK(node):\n\n def new_node_ok(nbr):\n return self.NODE_OK(nbr) and self.EDGE_OK(node, nbr)\n\n return FilterAtlas(self._atlas[node], new_node_ok)\n raise KeyError(f\"Key {node} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterAdjacency.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {\n u: {\n v: d\n for v, d in self._atlas[u].items()\n if self.NODE_OK(v)\n if self.EDGE_OK(u, v)\n }\n for u in self.NODE_OK.nodes\n if u in self._atlas\n }\n return {\n u: {v: d for v, d in nbrs.items() if self.NODE_OK(v) if self.EDGE_OK(u, v)}\n for u, nbrs in self._atlas.items()\n if self.NODE_OK(u)\n }\n\n def __str__(self):\n return str({nbr: self[nbr] for nbr in self})\n\n def __repr__(self):\n name = self.__class__.__name__\n return f\"{name}({self._atlas!r}, {self.NODE_OK!r}, {self.EDGE_OK!r})\"\n\n\nclass FilterMultiInner(FilterAdjacency): # muliedge_seconddict\n def __iter__(self):\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n my_nodes = (n for n in self.NODE_OK.nodes if n in self._atlas)\n else:\n my_nodes = (n for n in self._atlas if self.NODE_OK(n))\n for n in my_nodes:\n some_keys_ok = False\n for key in self._atlas[n]:\n if self.EDGE_OK(n, key):\n some_keys_ok = True\n break\n if some_keys_ok is True:\n yield n\n\n def __getitem__(self, nbr):\n if nbr in self._atlas and self.NODE_OK(nbr):\n\n def new_node_ok(key):\n return self.EDGE_OK(nbr, key)\n\n return FilterAtlas(self._atlas[nbr], new_node_ok)\n raise KeyError(f\"Key {nbr} not found\")\n\n # FIXME should this just be removed? 
we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterMultiInner.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n return {\n v: {k: d for k, d in self._atlas[v].items() if self.EDGE_OK(v, k)}\n for v in self.NODE_OK.nodes\n if v in self._atlas\n }\n return {\n v: {k: d for k, d in nbrs.items() if self.EDGE_OK(v, k)}\n for v, nbrs in self._atlas.items()\n if self.NODE_OK(v)\n }\n\n\nclass FilterMultiAdjacency(FilterAdjacency): # multiedgedict\n def __getitem__(self, node):\n if node in self._atlas and self.NODE_OK(node):\n\n def edge_ok(nbr, key):\n return self.NODE_OK(nbr) and self.EDGE_OK(node, nbr, key)\n\n return FilterMultiInner(self._atlas[node], self.NODE_OK, edge_ok)\n raise KeyError(f\"Key {node} not found\")\n\n # FIXME should this just be removed? we don't use it, but someone might\n def copy(self):\n warnings.warn(\n (\n \"FilterMultiAdjacency.copy is deprecated.\\n\"\n \"It will be removed in NetworkX 3.0.\\n\"\n \"Please open an Issue on https://github.com/networkx/networkx/issues\\n\"\n \"if you use this feature. We think that no one does use it.\"\n ),\n DeprecationWarning,\n )\n try: # check that NODE_OK has attr 'nodes'\n node_ok_shorter = 2 * len(self.NODE_OK.nodes) < len(self._atlas)\n except AttributeError:\n node_ok_shorter = False\n if node_ok_shorter:\n my_nodes = self.NODE_OK.nodes\n return {\n u: {\n v: {k: d for k, d in kd.items() if self.EDGE_OK(u, v, k)}\n for v, kd in self._atlas[u].items()\n if v in my_nodes\n }\n for u in my_nodes\n if u in self._atlas\n }\n return {\n u: {\n v: {k: d for k, d in kd.items() if self.EDGE_OK(u, v, k)}\n for v, kd in nbrs.items()\n if self.NODE_OK(v)\n }\n for u, nbrs in self._atlas.items()\n if self.NODE_OK(u)\n }\n",
"path": "networkx/classes/coreviews.py"
}
] | diff --git a/doc/reference/classes/index.rst b/doc/reference/classes/index.rst
index 0747795410c..acd9e259099 100644
--- a/doc/reference/classes/index.rst
+++ b/doc/reference/classes/index.rst
@@ -59,6 +59,25 @@ Graph Views
subgraph_view
reverse_view
+Core Views
+==========
+
+.. automodule:: networkx.classes.coreviews
+.. autosummary::
+ :toctree: generated/
+
+ AtlasView
+ AdjacencyView
+ MultiAdjacencyView
+ UnionAtlas
+ UnionAdjacency
+ UnionMultiInner
+ UnionMultiAdjacency
+ FilterAtlas
+ FilterAdjacency
+ FilterMultiInner
+ FilterMultiAdjacency
+
Filters
=======
diff --git a/networkx/classes/coreviews.py b/networkx/classes/coreviews.py
index f824e45391e..61a0a768d70 100644
--- a/networkx/classes/coreviews.py
+++ b/networkx/classes/coreviews.py
@@ -1,4 +1,6 @@
-"""
+"""Views of core data structures such as nested Mappings (e.g. dict-of-dicts).
+These ``Views`` often restrict element access, with either the entire view or
+layers of nested mappings being read-only.
"""
import warnings
from collections.abc import Mapping
|
vyperlang__vyper-3202 | `pc_pos_map` for small methods is empty
### Version Information
* vyper Version (output of `vyper --version`): 0.3.7
* OS: osx
* Python Version (output of `python --version`): 3.10.4
### Bug
```
(vyper) ~/vyper $ cat tmp/baz.vy
@external
def foo():
    pass
(vyper) ~/vyper $ vyc -f source_map tmp/baz.vy
{"breakpoints": [], "error_map": {"51": "fallback function"}, "pc_breakpoints": [], "pc_jump_map": {"0": "-", "7": "-", "11": "-", "12": "-", "23": "-", "34": "-", "42": "-", "44": "-", "46": "-", "52": "-"}, "pc_pos_map": {}, "pc_pos_map_compressed": "-1:-1:0:-;;;;:::-;;:::-;:::-;;;;;;;:::-;;;;;:::-;;;;;:::-;;:::-;;:::-;;;;:::-;;;"}
```
pc_pos_map should not be empty.
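The same behavior can be reproduced through the Python API (a sketch, assuming vyper 0.3.7 where `vyper.compile_code` accepts an `output_formats` list and returns a dict keyed by format):

```
import vyper

SOURCE = """
@external
def foo():
    pass
"""

# "source_map" yields the same structure as `vyc -f source_map`.
out = vyper.compile_code(SOURCE, output_formats=["source_map"])

# Expected: a mapping from program counters to source positions.
# Observed on 0.3.7: an empty dict.
print(out["source_map"]["pc_pos_map"])
```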
| [
{
"content": "from typing import Any, List\n\nimport vyper.utils as util\nfrom vyper.address_space import CALLDATA, DATA, MEMORY\nfrom vyper.ast.signatures.function_signature import FunctionSignature, VariableRecord\nfrom vyper.codegen.abi_encoder import abi_encoding_matches_vyper\nfrom vyper.codegen.context import Context\nfrom vyper.codegen.core import get_element_ptr, getpos, make_setter, needs_clamp\nfrom vyper.codegen.expr import Expr\nfrom vyper.codegen.function_definitions.utils import get_nonreentrant_lock\nfrom vyper.codegen.ir_node import Encoding, IRnode\nfrom vyper.codegen.stmt import parse_body\nfrom vyper.codegen.types.types import TupleType\n\n\n# register function args with the local calling context.\n# also allocate the ones that live in memory (i.e. kwargs)\ndef _register_function_args(context: Context, sig: FunctionSignature) -> List[IRnode]:\n ret = []\n\n # the type of the calldata\n base_args_t = TupleType([arg.typ for arg in sig.base_args])\n\n # tuple with the abi_encoded args\n if sig.is_init_func:\n base_args_ofst = IRnode(0, location=DATA, typ=base_args_t, encoding=Encoding.ABI)\n else:\n base_args_ofst = IRnode(4, location=CALLDATA, typ=base_args_t, encoding=Encoding.ABI)\n\n for i, arg in enumerate(sig.base_args):\n\n arg_ir = get_element_ptr(base_args_ofst, i)\n\n if needs_clamp(arg.typ, Encoding.ABI):\n # allocate a memory slot for it and copy\n p = context.new_variable(arg.name, arg.typ, is_mutable=False)\n dst = IRnode(p, typ=arg.typ, location=MEMORY)\n\n copy_arg = make_setter(dst, arg_ir)\n copy_arg.source_pos = getpos(arg.ast_source)\n ret.append(copy_arg)\n else:\n assert abi_encoding_matches_vyper(arg.typ)\n # leave it in place\n context.vars[arg.name] = VariableRecord(\n name=arg.name,\n pos=arg_ir,\n typ=arg.typ,\n mutable=False,\n location=arg_ir.location,\n encoding=Encoding.ABI,\n )\n\n return ret\n\n\ndef _annotated_method_id(abi_sig):\n method_id = util.method_id_int(abi_sig)\n annotation = f\"{hex(method_id)}: {abi_sig}\"\n return IRnode(method_id, annotation=annotation)\n\n\ndef _generate_kwarg_handlers(context: Context, sig: FunctionSignature) -> List[Any]:\n # generate kwarg handlers.\n # since they might come in thru calldata or be default,\n # allocate them in memory and then fill it in based on calldata or default,\n # depending on the signature\n # a kwarg handler looks like\n # (if (eq _method_id <method_id>)\n # copy calldata args to memory\n # write default args to memory\n # goto external_function_common_ir\n\n def handler_for(calldata_kwargs, default_kwargs):\n calldata_args = sig.base_args + calldata_kwargs\n # create a fake type so that get_element_ptr works\n calldata_args_t = TupleType(list(arg.typ for arg in calldata_args))\n\n abi_sig = sig.abi_signature_for_kwargs(calldata_kwargs)\n method_id = _annotated_method_id(abi_sig)\n\n calldata_kwargs_ofst = IRnode(\n 4, location=CALLDATA, typ=calldata_args_t, encoding=Encoding.ABI\n )\n\n # a sequence of statements to strictify kwargs into memory\n ret = [\"seq\"]\n\n # ensure calldata is at least of minimum length\n args_abi_t = calldata_args_t.abi_type\n calldata_min_size = args_abi_t.min_size() + 4\n ret.append([\"assert\", [\"ge\", \"calldatasize\", calldata_min_size]])\n\n # TODO optimize make_setter by using\n # TupleType(list(arg.typ for arg in calldata_kwargs + default_kwargs))\n # (must ensure memory area is contiguous)\n\n n_base_args = len(sig.base_args)\n\n for i, arg_meta in enumerate(calldata_kwargs):\n k = n_base_args + i\n\n dst = 
context.lookup_var(arg_meta.name).pos\n\n lhs = IRnode(dst, location=MEMORY, typ=arg_meta.typ)\n\n rhs = get_element_ptr(calldata_kwargs_ofst, k, array_bounds_check=False)\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(arg_meta.ast_source)\n ret.append(copy_arg)\n\n for x in default_kwargs:\n dst = context.lookup_var(x.name).pos\n lhs = IRnode(dst, location=MEMORY, typ=x.typ)\n lhs.source_pos = getpos(x.ast_source)\n kw_ast_val = sig.default_values[x.name] # e.g. `3` in x: int = 3\n rhs = Expr(kw_ast_val, context).ir_node\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(x.ast_source)\n ret.append(copy_arg)\n\n ret.append([\"goto\", sig.external_function_base_entry_label])\n\n ret = [\"if\", [\"eq\", \"_calldata_method_id\", method_id], ret]\n return ret\n\n ret = [\"seq\"]\n\n keyword_args = sig.default_args\n\n # allocate variable slots in memory\n for arg in keyword_args:\n context.new_variable(arg.name, arg.typ, is_mutable=False)\n\n for i, _ in enumerate(keyword_args):\n calldata_kwargs = keyword_args[:i]\n default_kwargs = keyword_args[i:]\n\n ret.append(handler_for(calldata_kwargs, default_kwargs))\n\n ret.append(handler_for(keyword_args, []))\n\n return ret\n\n\n# TODO it would be nice if this returned a data structure which were\n# amenable to generating a jump table instead of the linear search for\n# method_id we have now.\ndef generate_ir_for_external_function(code, sig, context, skip_nonpayable_check):\n # TODO type hints:\n # def generate_ir_for_external_function(\n # code: vy_ast.FunctionDef, sig: FunctionSignature, context: Context, check_nonpayable: bool,\n # ) -> IRnode:\n \"\"\"Return the IR for an external function. Includes code to inspect the method_id,\n enter the function (nonpayable and reentrancy checks), handle kwargs and exit\n the function (clean up reentrancy storage variables)\n \"\"\"\n func_type = code._metadata[\"type\"]\n\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(func_type)\n\n # generate handlers for base args and register the variable records\n handle_base_args = _register_function_args(context, sig)\n\n # generate handlers for kwargs and register the variable records\n kwarg_handlers = _generate_kwarg_handlers(context, sig)\n\n body = [\"seq\"]\n # once optional args have been handled,\n # generate the main body of the function\n body += handle_base_args\n\n if sig.mutability != \"payable\" and not skip_nonpayable_check:\n # if the contract contains payable functions, but this is not one of them\n # add an assertion that the value of the call is zero\n body += [[\"assert\", [\"iszero\", \"callvalue\"]]]\n\n body += nonreentrant_pre\n\n body += [parse_body(code.body, context, ensure_terminated=True)]\n\n # wrap the body in labeled block\n body = [\"label\", sig.external_function_base_entry_label, [\"var_list\"], body]\n\n exit_sequence = [\"seq\"] + nonreentrant_post\n if sig.is_init_func:\n pass # init func has special exit sequence generated by module.py\n elif context.return_type is None:\n exit_sequence += [[\"stop\"]]\n else:\n exit_sequence += [[\"return\", \"ret_ofst\", \"ret_len\"]]\n\n exit_sequence_args = [\"var_list\"]\n if context.return_type is not None:\n exit_sequence_args += [\"ret_ofst\", \"ret_len\"]\n # wrap the exit in a labeled block\n exit = [\"label\", sig.exit_sequence_label, exit_sequence_args, exit_sequence]\n\n # the ir which comprises the main body of the function,\n # besides any kwarg handling\n func_common_ir = [\"seq\", body, exit]\n\n if sig.is_default_func or 
sig.is_init_func:\n ret = [\"seq\"]\n # add a goto to make the function entry look like other functions\n # (for zksync interpreter)\n ret.append([\"goto\", sig.external_function_base_entry_label])\n ret.append(func_common_ir)\n else:\n ret = kwarg_handlers\n # sneak the base code into the kwarg handler\n # TODO rethink this / make it clearer\n ret[-1][-1].append(func_common_ir)\n\n return IRnode.from_list(ret)\n",
"path": "vyper/codegen/function_definitions/external_function.py"
}
] | [
{
"content": "from typing import Any, List\n\nimport vyper.utils as util\nfrom vyper.address_space import CALLDATA, DATA, MEMORY\nfrom vyper.ast.signatures.function_signature import FunctionSignature, VariableRecord\nfrom vyper.codegen.abi_encoder import abi_encoding_matches_vyper\nfrom vyper.codegen.context import Context\nfrom vyper.codegen.core import get_element_ptr, getpos, make_setter, needs_clamp\nfrom vyper.codegen.expr import Expr\nfrom vyper.codegen.function_definitions.utils import get_nonreentrant_lock\nfrom vyper.codegen.ir_node import Encoding, IRnode\nfrom vyper.codegen.stmt import parse_body\nfrom vyper.codegen.types.types import TupleType\n\n\n# register function args with the local calling context.\n# also allocate the ones that live in memory (i.e. kwargs)\ndef _register_function_args(context: Context, sig: FunctionSignature) -> List[IRnode]:\n ret = []\n\n # the type of the calldata\n base_args_t = TupleType([arg.typ for arg in sig.base_args])\n\n # tuple with the abi_encoded args\n if sig.is_init_func:\n base_args_ofst = IRnode(0, location=DATA, typ=base_args_t, encoding=Encoding.ABI)\n else:\n base_args_ofst = IRnode(4, location=CALLDATA, typ=base_args_t, encoding=Encoding.ABI)\n\n for i, arg in enumerate(sig.base_args):\n\n arg_ir = get_element_ptr(base_args_ofst, i)\n\n if needs_clamp(arg.typ, Encoding.ABI):\n # allocate a memory slot for it and copy\n p = context.new_variable(arg.name, arg.typ, is_mutable=False)\n dst = IRnode(p, typ=arg.typ, location=MEMORY)\n\n copy_arg = make_setter(dst, arg_ir)\n copy_arg.source_pos = getpos(arg.ast_source)\n ret.append(copy_arg)\n else:\n assert abi_encoding_matches_vyper(arg.typ)\n # leave it in place\n context.vars[arg.name] = VariableRecord(\n name=arg.name,\n pos=arg_ir,\n typ=arg.typ,\n mutable=False,\n location=arg_ir.location,\n encoding=Encoding.ABI,\n )\n\n return ret\n\n\ndef _annotated_method_id(abi_sig):\n method_id = util.method_id_int(abi_sig)\n annotation = f\"{hex(method_id)}: {abi_sig}\"\n return IRnode(method_id, annotation=annotation)\n\n\ndef _generate_kwarg_handlers(context: Context, sig: FunctionSignature) -> List[Any]:\n # generate kwarg handlers.\n # since they might come in thru calldata or be default,\n # allocate them in memory and then fill it in based on calldata or default,\n # depending on the signature\n # a kwarg handler looks like\n # (if (eq _method_id <method_id>)\n # copy calldata args to memory\n # write default args to memory\n # goto external_function_common_ir\n\n def handler_for(calldata_kwargs, default_kwargs):\n calldata_args = sig.base_args + calldata_kwargs\n # create a fake type so that get_element_ptr works\n calldata_args_t = TupleType(list(arg.typ for arg in calldata_args))\n\n abi_sig = sig.abi_signature_for_kwargs(calldata_kwargs)\n method_id = _annotated_method_id(abi_sig)\n\n calldata_kwargs_ofst = IRnode(\n 4, location=CALLDATA, typ=calldata_args_t, encoding=Encoding.ABI\n )\n\n # a sequence of statements to strictify kwargs into memory\n ret = [\"seq\"]\n\n # ensure calldata is at least of minimum length\n args_abi_t = calldata_args_t.abi_type\n calldata_min_size = args_abi_t.min_size() + 4\n ret.append([\"assert\", [\"ge\", \"calldatasize\", calldata_min_size]])\n\n # TODO optimize make_setter by using\n # TupleType(list(arg.typ for arg in calldata_kwargs + default_kwargs))\n # (must ensure memory area is contiguous)\n\n n_base_args = len(sig.base_args)\n\n for i, arg_meta in enumerate(calldata_kwargs):\n k = n_base_args + i\n\n dst = 
context.lookup_var(arg_meta.name).pos\n\n lhs = IRnode(dst, location=MEMORY, typ=arg_meta.typ)\n\n rhs = get_element_ptr(calldata_kwargs_ofst, k, array_bounds_check=False)\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(arg_meta.ast_source)\n ret.append(copy_arg)\n\n for x in default_kwargs:\n dst = context.lookup_var(x.name).pos\n lhs = IRnode(dst, location=MEMORY, typ=x.typ)\n lhs.source_pos = getpos(x.ast_source)\n kw_ast_val = sig.default_values[x.name] # e.g. `3` in x: int = 3\n rhs = Expr(kw_ast_val, context).ir_node\n\n copy_arg = make_setter(lhs, rhs)\n copy_arg.source_pos = getpos(x.ast_source)\n ret.append(copy_arg)\n\n ret.append([\"goto\", sig.external_function_base_entry_label])\n\n ret = [\"if\", [\"eq\", \"_calldata_method_id\", method_id], ret]\n return ret\n\n ret = [\"seq\"]\n\n keyword_args = sig.default_args\n\n # allocate variable slots in memory\n for arg in keyword_args:\n context.new_variable(arg.name, arg.typ, is_mutable=False)\n\n for i, _ in enumerate(keyword_args):\n calldata_kwargs = keyword_args[:i]\n default_kwargs = keyword_args[i:]\n\n ret.append(handler_for(calldata_kwargs, default_kwargs))\n\n ret.append(handler_for(keyword_args, []))\n\n return ret\n\n\n# TODO it would be nice if this returned a data structure which were\n# amenable to generating a jump table instead of the linear search for\n# method_id we have now.\ndef generate_ir_for_external_function(code, sig, context, skip_nonpayable_check):\n # TODO type hints:\n # def generate_ir_for_external_function(\n # code: vy_ast.FunctionDef, sig: FunctionSignature, context: Context, check_nonpayable: bool,\n # ) -> IRnode:\n \"\"\"Return the IR for an external function. Includes code to inspect the method_id,\n enter the function (nonpayable and reentrancy checks), handle kwargs and exit\n the function (clean up reentrancy storage variables)\n \"\"\"\n func_type = code._metadata[\"type\"]\n\n nonreentrant_pre, nonreentrant_post = get_nonreentrant_lock(func_type)\n\n # generate handlers for base args and register the variable records\n handle_base_args = _register_function_args(context, sig)\n\n # generate handlers for kwargs and register the variable records\n kwarg_handlers = _generate_kwarg_handlers(context, sig)\n\n body = [\"seq\"]\n # once optional args have been handled,\n # generate the main body of the function\n body += handle_base_args\n\n if sig.mutability != \"payable\" and not skip_nonpayable_check:\n # if the contract contains payable functions, but this is not one of them\n # add an assertion that the value of the call is zero\n body += [[\"assert\", [\"iszero\", \"callvalue\"]]]\n\n body += nonreentrant_pre\n\n body += [parse_body(code.body, context, ensure_terminated=True)]\n\n # wrap the body in labeled block\n body = [\"label\", sig.external_function_base_entry_label, [\"var_list\"], body]\n\n exit_sequence = [\"seq\"] + nonreentrant_post\n if sig.is_init_func:\n pass # init func has special exit sequence generated by module.py\n elif context.return_type is None:\n exit_sequence += [[\"stop\"]]\n else:\n exit_sequence += [[\"return\", \"ret_ofst\", \"ret_len\"]]\n\n exit_sequence_args = [\"var_list\"]\n if context.return_type is not None:\n exit_sequence_args += [\"ret_ofst\", \"ret_len\"]\n # wrap the exit in a labeled block\n exit = [\"label\", sig.exit_sequence_label, exit_sequence_args, exit_sequence]\n\n # the ir which comprises the main body of the function,\n # besides any kwarg handling\n func_common_ir = [\"seq\", body, exit]\n\n if sig.is_default_func or 
sig.is_init_func:\n ret = [\"seq\"]\n # add a goto to make the function entry look like other functions\n # (for zksync interpreter)\n ret.append([\"goto\", sig.external_function_base_entry_label])\n ret.append(func_common_ir)\n else:\n ret = kwarg_handlers\n # sneak the base code into the kwarg handler\n # TODO rethink this / make it clearer\n ret[-1][-1].append(func_common_ir)\n\n return IRnode.from_list(ret, source_pos=getpos(sig.func_ast_code))\n",
"path": "vyper/codegen/function_definitions/external_function.py"
}
] | diff --git a/vyper/codegen/function_definitions/external_function.py b/vyper/codegen/function_definitions/external_function.py
index 3f0d89c4d6..06d2946558 100644
--- a/vyper/codegen/function_definitions/external_function.py
+++ b/vyper/codegen/function_definitions/external_function.py
@@ -214,4 +214,4 @@ def generate_ir_for_external_function(code, sig, context, skip_nonpayable_check)
# TODO rethink this / make it clearer
ret[-1][-1].append(func_common_ir)
- return IRnode.from_list(ret)
+ return IRnode.from_list(ret, source_pos=getpos(sig.func_ast_code))
|
dbt-labs__dbt-core-2599 | yaml quoting not working with NativeEnvironment jinja evaluator
### Describe the bug
dbt's NativeEnvironment introduced a functional change to how Jinja strings are evaluated. In dbt v0.17.0, a schema test can no longer be configured with a quoted column name.
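For illustration, the same difference can be reproduced with plain Jinja2 (a minimal sketch, assuming `Jinja2==2.11.2` is installed; dbt's native renderer is built on `jinja2.nativetypes`):
```python
from jinja2 import Environment
from jinja2.nativetypes import NativeEnvironment

template_text = '"MyId"'  # the value read from schema.yml

# classic environment: the text comes back unchanged, quotes included
print(Environment().from_string(template_text).render())        # prints "MyId" (quotes preserved)

# native environment: the single string result goes through ast.literal_eval,
# which evaluates the quoted literal and drops the quotes
print(NativeEnvironment().from_string(template_text).render())  # prints MyId (quotes stripped)
```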
### Steps To Reproduce
```
# schema.yml
version: 2
models:
  - name: debug
    columns:
      - name: MyId
        quote: true
        tests:
          - relationships:
              to: ref('debug')
              field: '"MyId"'
```
```
-- models/debug.sql
select 1 as "MyId"
```
**Results:**
```
Database Error in test relationships_debug__MyId____MyId___ref_debug_ (models/schema.yml)
column "myid" does not exist
LINE 12: select MyId as id from "analytics"."test_schema"."debug"
^
HINT: Perhaps you meant to reference the column "debug.MyId" or the column "child.id".
compiled SQL at target/compiled/neondwh/models/schema.yml/schema_test/relationships_debug__MyId____MyId___ref_debug_.sql
```
### Expected behavior
I would expect the yaml/jinja string `'"MyId"'` to be resolved to the string `"MyId"`, not `MyId`.
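For context, the stripping happens in `quoted_native_concat` (see `core/dbt/clients/jinja.py`), where a single string result is passed to `ast.literal_eval`. A stdlib-only sketch of that step:
```python
import ast

raw = '"MyId"'                # what the native renderer receives for field: '"MyId"'
print(ast.literal_eval(raw))  # prints MyId -- the surrounding quotes are evaluated away

# the test SQL then contains the unquoted identifier MyId, which Postgres
# folds to lowercase, hence the 'column "myid" does not exist' error above
```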
**The output of `dbt --version`:**
```
dbt v0.17.0
```
**The operating system you're using:** macOS
**The output of `python --version`:** 3.7.7
### Additional context
Using `Jinja2==2.11.2`
| [
{
"content": "import codecs\nimport linecache\nimport os\nimport re\nimport tempfile\nimport threading\nfrom ast import literal_eval\nfrom contextlib import contextmanager\nfrom itertools import chain, islice\nfrom typing import (\n List, Union, Set, Optional, Dict, Any, Iterator, Type, NoReturn, Tuple\n)\n\nimport jinja2\nimport jinja2.ext\nimport jinja2.nativetypes # type: ignore\nimport jinja2.nodes\nimport jinja2.parser\nimport jinja2.sandbox\n\nfrom dbt.utils import (\n get_dbt_macro_name, get_docs_macro_name, get_materialization_macro_name,\n deep_map\n)\n\nfrom dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag\nfrom dbt.contracts.graph.compiled import CompiledSchemaTestNode\nfrom dbt.contracts.graph.parsed import ParsedSchemaTestNode\nfrom dbt.exceptions import (\n InternalException, raise_compiler_error, CompilationException,\n invalid_materialization_argument, MacroReturn\n)\nfrom dbt.flags import MACRO_DEBUGGING\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\n\n\ndef _linecache_inject(source, write):\n if write:\n # this is the only reliable way to accomplish this. Obviously, it's\n # really darn noisy and will fill your temporary directory\n tmp_file = tempfile.NamedTemporaryFile(\n prefix='dbt-macro-compiled-',\n suffix='.py',\n delete=False,\n mode='w+',\n encoding='utf-8',\n )\n tmp_file.write(source)\n filename = tmp_file.name\n else:\n # `codecs.encode` actually takes a `bytes` as the first argument if\n # the second argument is 'hex' - mypy does not know this.\n rnd = codecs.encode(os.urandom(12), 'hex') # type: ignore\n filename = rnd.decode('ascii')\n\n # put ourselves in the cache\n cache_entry = (\n len(source),\n None,\n [line + '\\n' for line in source.splitlines()],\n filename\n )\n # linecache does in fact have an attribute `cache`, thanks\n linecache.cache[filename] = cache_entry # type: ignore\n return filename\n\n\nclass MacroFuzzParser(jinja2.parser.Parser):\n def parse_macro(self):\n node = jinja2.nodes.Macro(lineno=next(self.stream).lineno)\n\n # modified to fuzz macros defined in the same file. this way\n # dbt can understand the stack of macros being called.\n # - @cmcarthur\n node.name = get_dbt_macro_name(\n self.parse_assign_target(name_only=True).name)\n\n self.parse_signature(node)\n node.body = self.parse_statements(('name:endmacro',),\n drop_needle=True)\n return node\n\n\nclass MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):\n def _parse(self, source, name, filename):\n return MacroFuzzParser(self, source, name, filename).parse()\n\n def _compile(self, source, filename):\n \"\"\"Override jinja's compilation to stash the rendered source inside\n the python linecache for debugging when the appropriate environment\n variable is set.\n\n If the value is 'write', also write the files to disk.\n WARNING: This can write a ton of data if you aren't careful.\n \"\"\"\n if filename == '<template>' and MACRO_DEBUGGING:\n write = MACRO_DEBUGGING == 'write'\n filename = _linecache_inject(source, write)\n\n return super()._compile(source, filename) # type: ignore\n\n\nclass NativeSandboxEnvironment(MacroFuzzEnvironment):\n code_generator_class = jinja2.nativetypes.NativeCodeGenerator\n\n\nclass TextMarker(str):\n \"\"\"A special native-env marker that indicates that a value is text and is\n not to be evaluated. 
Use this to prevent your numbery-strings from becoming\n numbers!\n \"\"\"\n\n\ndef quoted_native_concat(nodes):\n \"\"\"This is almost native_concat from the NativeTemplate, except in the\n special case of a single argument that is a quoted string and returns a\n string, the quotes are re-inserted.\n \"\"\"\n head = list(islice(nodes, 2))\n\n if not head:\n return None\n\n if len(head) == 1:\n raw = head[0]\n if isinstance(raw, TextMarker):\n return str(raw)\n else:\n raw = \"\".join([str(v) for v in chain(head, nodes)])\n\n try:\n result = literal_eval(raw)\n except (ValueError, SyntaxError, MemoryError):\n return raw\n\n return result\n\n\nclass NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore\n environment_class = NativeSandboxEnvironment\n\n def render(self, *args, **kwargs):\n \"\"\"Render the template to produce a native Python type. If the\n result is a single node, its value is returned. Otherwise, the\n nodes are concatenated as strings. If the result can be parsed\n with :func:`ast.literal_eval`, the parsed value is returned.\n Otherwise, the string is returned.\n \"\"\"\n vars = dict(*args, **kwargs)\n\n try:\n return quoted_native_concat(\n self.root_render_func(self.new_context(vars))\n )\n except Exception:\n return self.environment.handle_exception()\n\n\nNativeSandboxEnvironment.template_class = NativeSandboxTemplate # type: ignore\n\n\nclass TemplateCache:\n def __init__(self):\n self.file_cache: Dict[str, jinja2.Template] = {}\n\n def get_node_template(self, node) -> jinja2.Template:\n key = node.macro_sql\n\n if key in self.file_cache:\n return self.file_cache[key]\n\n template = get_template(\n string=node.macro_sql,\n ctx={},\n node=node,\n )\n\n self.file_cache[key] = template\n return template\n\n def clear(self):\n self.file_cache.clear()\n\n\ntemplate_cache = TemplateCache()\n\n\nclass BaseMacroGenerator:\n def __init__(self, context: Optional[Dict[str, Any]] = None) -> None:\n self.context: Optional[Dict[str, Any]] = context\n\n def get_template(self):\n raise NotImplementedError('get_template not implemented!')\n\n def get_name(self) -> str:\n raise NotImplementedError('get_name not implemented!')\n\n def get_macro(self):\n name = self.get_name()\n template = self.get_template()\n # make the module. 
previously we set both vars and local, but that's\n # redundant: They both end up in the same place\n module = template.make_module(vars=self.context, shared=False)\n macro = module.__dict__[get_dbt_macro_name(name)]\n module.__dict__.update(self.context)\n return macro\n\n @contextmanager\n def exception_handler(self) -> Iterator[None]:\n try:\n yield\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n raise_compiler_error(str(e))\n\n def call_macro(self, *args, **kwargs):\n if self.context is None:\n raise InternalException(\n 'Context is still None in call_macro!'\n )\n assert self.context is not None\n\n macro = self.get_macro()\n\n with self.exception_handler():\n try:\n return macro(*args, **kwargs)\n except MacroReturn as e:\n return e.value\n\n\nclass MacroStack(threading.local):\n def __init__(self):\n super().__init__()\n self.call_stack = []\n\n @property\n def depth(self) -> int:\n return len(self.call_stack)\n\n def push(self, name):\n self.call_stack.append(name)\n\n def pop(self, name):\n got = self.call_stack.pop()\n if got != name:\n raise InternalException(f'popped {got}, expected {name}')\n\n\nclass MacroGenerator(BaseMacroGenerator):\n def __init__(\n self,\n macro,\n context: Optional[Dict[str, Any]] = None,\n node: Optional[Any] = None,\n stack: Optional[MacroStack] = None\n ) -> None:\n super().__init__(context)\n self.macro = macro\n self.node = node\n self.stack = stack\n\n def get_template(self):\n return template_cache.get_node_template(self.macro)\n\n def get_name(self) -> str:\n return self.macro.name\n\n @contextmanager\n def exception_handler(self) -> Iterator[None]:\n try:\n yield\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n raise_compiler_error(str(e), self.macro)\n except CompilationException as e:\n e.stack.append(self.macro)\n raise e\n\n @contextmanager\n def track_call(self):\n if self.stack is None or self.node is None:\n yield\n else:\n unique_id = self.macro.unique_id\n depth = self.stack.depth\n # only mark depth=0 as a dependency\n if depth == 0:\n self.node.depends_on.add_macro(unique_id)\n self.stack.push(unique_id)\n try:\n yield\n finally:\n self.stack.pop(unique_id)\n\n def __call__(self, *args, **kwargs):\n with self.track_call():\n return self.call_macro(*args, **kwargs)\n\n\nclass QueryStringGenerator(BaseMacroGenerator):\n def __init__(\n self, template_str: str, context: Dict[str, Any]\n ) -> None:\n super().__init__(context)\n self.template_str: str = template_str\n env = get_environment()\n self.template = env.from_string(\n self.template_str,\n globals=self.context,\n )\n\n def get_name(self) -> str:\n return 'query_comment_macro'\n\n def get_template(self):\n \"\"\"Don't use the template cache, we don't have a node\"\"\"\n return self.template\n\n def __call__(self, connection_name: str, node) -> str:\n return str(self.call_macro(connection_name, node))\n\n\nclass MaterializationExtension(jinja2.ext.Extension):\n tags = ['materialization']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n materialization_name = \\\n parser.parse_assign_target(name_only=True).name\n\n adapter_name = 'default'\n node.args = []\n node.defaults = []\n\n while parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n if target.name == 'default':\n pass\n\n elif target.name == 'adapter':\n parser.stream.expect('assign')\n value = parser.parse_expression()\n adapter_name = value.value\n\n else:\n invalid_materialization_argument(\n 
materialization_name, target.name\n )\n\n node.name = get_materialization_macro_name(\n materialization_name, adapter_name\n )\n\n node.body = parser.parse_statements(('name:endmaterialization',),\n drop_needle=True)\n\n return node\n\n\nclass DocumentationExtension(jinja2.ext.Extension):\n tags = ['docs']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n docs_name = parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n node.name = get_docs_macro_name(docs_name)\n node.body = parser.parse_statements(('name:enddocs',),\n drop_needle=True)\n return node\n\n\ndef _is_dunder_name(name):\n return name.startswith('__') and name.endswith('__')\n\n\ndef create_undefined(node=None):\n class Undefined(jinja2.Undefined):\n def __init__(self, hint=None, obj=None, name=None, exc=None):\n super().__init__(hint=hint, name=name)\n self.node = node\n self.name = name\n self.hint = hint\n # jinja uses these for safety, so we have to override them.\n # see https://github.com/pallets/jinja/blob/master/jinja2/sandbox.py#L332-L339 # noqa\n self.unsafe_callable = False\n self.alters_data = False\n\n def __getitem__(self, name):\n # Propagate the undefined value if a caller accesses this as if it\n # were a dictionary\n return self\n\n def __getattr__(self, name):\n if name == 'name' or _is_dunder_name(name):\n raise AttributeError(\n \"'{}' object has no attribute '{}'\"\n .format(type(self).__name__, name)\n )\n\n self.name = name\n\n return self.__class__(hint=self.hint, name=self.name)\n\n def __call__(self, *args, **kwargs):\n return self\n\n def __reduce__(self):\n raise_compiler_error(f'{self.name} is undefined', node=node)\n\n return Undefined\n\n\ndef get_environment(\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n) -> jinja2.Environment:\n args: Dict[str, List[Union[str, Type[jinja2.ext.Extension]]]] = {\n 'extensions': ['jinja2.ext.do']\n }\n\n if capture_macros:\n args['undefined'] = create_undefined(node)\n\n args['extensions'].append(MaterializationExtension)\n args['extensions'].append(DocumentationExtension)\n\n env_cls: Type[jinja2.Environment]\n text_filter: Type\n if native:\n env_cls = NativeSandboxEnvironment\n text_filter = TextMarker\n else:\n env_cls = MacroFuzzEnvironment\n text_filter = str\n\n env = env_cls(**args)\n env.filters['as_text'] = text_filter\n\n return env\n\n\n@contextmanager\ndef catch_jinja(node=None) -> Iterator[None]:\n try:\n yield\n except jinja2.exceptions.TemplateSyntaxError as e:\n e.translated = False\n raise CompilationException(str(e), node) from e\n except jinja2.exceptions.UndefinedError as e:\n raise CompilationException(str(e), node) from e\n except CompilationException as exc:\n exc.add_node(node)\n raise\n\n\ndef parse(string):\n with catch_jinja():\n return get_environment().parse(str(string))\n\n\ndef get_template(\n string: str,\n ctx: Dict[str, Any],\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n):\n with catch_jinja(node):\n env = get_environment(node, capture_macros, native=native)\n\n template_source = str(string)\n return env.from_string(template_source, globals=ctx)\n\n\ndef render_template(template, ctx: Dict[str, Any], node=None) -> str:\n with catch_jinja(node):\n return template.render(ctx)\n\n\ndef _requote_result(raw_value: str, rendered: str) -> str:\n double_quoted = raw_value.startswith('\"') and raw_value.endswith('\"')\n single_quoted = raw_value.startswith(\"'\") and raw_value.endswith(\"'\")\n if 
double_quoted:\n quote_char = '\"'\n elif single_quoted:\n quote_char = \"'\"\n else:\n quote_char = ''\n return f'{quote_char}{rendered}{quote_char}'\n\n\n# performance note: Local benmcharking (so take it with a big grain of salt!)\n# on this indicates that it is is on average slightly slower than\n# checking two separate patterns, but the standard deviation is smaller with\n# one pattern. The time difference between the two was ~2 std deviations, which\n# is small enough that I've just chosen the more readable option.\n_HAS_RENDER_CHARS_PAT = re.compile(r'({[{%]|[}%]})')\n\n\ndef get_rendered(\n string: str,\n ctx: Dict[str, Any],\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n) -> str:\n # performance optimization: if there are no jinja control characters in the\n # string, we can just return the input. Fall back to jinja if the type is\n # not a string or if native rendering is enabled (so '1' -> 1, etc...)\n # If this is desirable in the native env as well, we could handle the\n # native=True case by passing the input string to ast.literal_eval, like\n # the native renderer does.\n if (\n not native and\n isinstance(string, str) and\n _HAS_RENDER_CHARS_PAT.search(string) is None\n ):\n return string\n template = get_template(\n string,\n ctx,\n node,\n capture_macros=capture_macros,\n native=native,\n )\n return render_template(template, ctx, node)\n\n\ndef undefined_error(msg) -> NoReturn:\n raise jinja2.exceptions.UndefinedError(msg)\n\n\ndef extract_toplevel_blocks(\n data: str,\n allowed_blocks: Optional[Set[str]] = None,\n collect_raw_data: bool = True,\n) -> List[Union[BlockData, BlockTag]]:\n \"\"\"Extract the top level blocks with matching block types from a jinja\n file, with some special handling for block nesting.\n\n :param data: The data to extract blocks from.\n :param allowed_blocks: The names of the blocks to extract from the file.\n They may not be nested within if/for blocks. If None, use the default\n values.\n :param collect_raw_data: If set, raw data between matched blocks will also\n be part of the results, as `BlockData` objects. 
They have a\n `block_type_name` field of `'__dbt_data'` and will never have a\n `block_name`.\n :return: A list of `BlockTag`s matching the allowed block types and (if\n `collect_raw_data` is `True`) `BlockData` objects.\n \"\"\"\n return BlockIterator(data).lex_for_blocks(\n allowed_blocks=allowed_blocks,\n collect_raw_data=collect_raw_data\n )\n\n\nSCHEMA_TEST_KWARGS_NAME = '_dbt_schema_test_kwargs'\n\n\ndef add_rendered_test_kwargs(\n context: Dict[str, Any],\n node: Union[ParsedSchemaTestNode, CompiledSchemaTestNode],\n capture_macros: bool = False,\n) -> None:\n \"\"\"Render each of the test kwargs in the given context using the native\n renderer, then insert that value into the given context as the special test\n keyword arguments member.\n \"\"\"\n looks_like_func = r'^\\s*(env_var|ref|var|source|doc)\\s*\\(.+\\)\\s*$'\n\n def _convert_function(\n value: Any, keypath: Tuple[Union[str, int], ...]\n ) -> Any:\n if isinstance(value, str):\n if keypath == ('column_name',):\n # special case: Don't render column names as native, make them\n # be strings\n return value\n\n if re.match(looks_like_func, value) is not None:\n # curly braces to make rendering happy\n value = f'{{{{ {value} }}}}'\n\n value = get_rendered(\n value, context, node, capture_macros=capture_macros,\n native=True\n )\n\n return value\n\n kwargs = deep_map(_convert_function, node.test_metadata.kwargs)\n context[SCHEMA_TEST_KWARGS_NAME] = kwargs\n",
"path": "core/dbt/clients/jinja.py"
}
] | [
{
"content": "import codecs\nimport linecache\nimport os\nimport re\nimport tempfile\nimport threading\nfrom ast import literal_eval\nfrom contextlib import contextmanager\nfrom itertools import chain, islice\nfrom typing import (\n List, Union, Set, Optional, Dict, Any, Iterator, Type, NoReturn, Tuple\n)\n\nimport jinja2\nimport jinja2.ext\nimport jinja2.nativetypes # type: ignore\nimport jinja2.nodes\nimport jinja2.parser\nimport jinja2.sandbox\n\nfrom dbt.utils import (\n get_dbt_macro_name, get_docs_macro_name, get_materialization_macro_name,\n deep_map\n)\n\nfrom dbt.clients._jinja_blocks import BlockIterator, BlockData, BlockTag\nfrom dbt.contracts.graph.compiled import CompiledSchemaTestNode\nfrom dbt.contracts.graph.parsed import ParsedSchemaTestNode\nfrom dbt.exceptions import (\n InternalException, raise_compiler_error, CompilationException,\n invalid_materialization_argument, MacroReturn\n)\nfrom dbt.flags import MACRO_DEBUGGING\nfrom dbt.logger import GLOBAL_LOGGER as logger # noqa\n\n\ndef _linecache_inject(source, write):\n if write:\n # this is the only reliable way to accomplish this. Obviously, it's\n # really darn noisy and will fill your temporary directory\n tmp_file = tempfile.NamedTemporaryFile(\n prefix='dbt-macro-compiled-',\n suffix='.py',\n delete=False,\n mode='w+',\n encoding='utf-8',\n )\n tmp_file.write(source)\n filename = tmp_file.name\n else:\n # `codecs.encode` actually takes a `bytes` as the first argument if\n # the second argument is 'hex' - mypy does not know this.\n rnd = codecs.encode(os.urandom(12), 'hex') # type: ignore\n filename = rnd.decode('ascii')\n\n # put ourselves in the cache\n cache_entry = (\n len(source),\n None,\n [line + '\\n' for line in source.splitlines()],\n filename\n )\n # linecache does in fact have an attribute `cache`, thanks\n linecache.cache[filename] = cache_entry # type: ignore\n return filename\n\n\nclass MacroFuzzParser(jinja2.parser.Parser):\n def parse_macro(self):\n node = jinja2.nodes.Macro(lineno=next(self.stream).lineno)\n\n # modified to fuzz macros defined in the same file. this way\n # dbt can understand the stack of macros being called.\n # - @cmcarthur\n node.name = get_dbt_macro_name(\n self.parse_assign_target(name_only=True).name)\n\n self.parse_signature(node)\n node.body = self.parse_statements(('name:endmacro',),\n drop_needle=True)\n return node\n\n\nclass MacroFuzzEnvironment(jinja2.sandbox.SandboxedEnvironment):\n def _parse(self, source, name, filename):\n return MacroFuzzParser(self, source, name, filename).parse()\n\n def _compile(self, source, filename):\n \"\"\"Override jinja's compilation to stash the rendered source inside\n the python linecache for debugging when the appropriate environment\n variable is set.\n\n If the value is 'write', also write the files to disk.\n WARNING: This can write a ton of data if you aren't careful.\n \"\"\"\n if filename == '<template>' and MACRO_DEBUGGING:\n write = MACRO_DEBUGGING == 'write'\n filename = _linecache_inject(source, write)\n\n return super()._compile(source, filename) # type: ignore\n\n\nclass NativeSandboxEnvironment(MacroFuzzEnvironment):\n code_generator_class = jinja2.nativetypes.NativeCodeGenerator\n\n\nclass TextMarker(str):\n \"\"\"A special native-env marker that indicates that a value is text and is\n not to be evaluated. 
Use this to prevent your numbery-strings from becoming\n numbers!\n \"\"\"\n\n\ndef quoted_native_concat(nodes):\n \"\"\"This is almost native_concat from the NativeTemplate, except in the\n special case of a single argument that is a quoted string and returns a\n string, the quotes are re-inserted.\n \"\"\"\n head = list(islice(nodes, 2))\n\n if not head:\n return None\n\n if len(head) == 1:\n raw = head[0]\n if isinstance(raw, TextMarker):\n return str(raw)\n else:\n raw = \"\".join([str(v) for v in chain(head, nodes)])\n\n try:\n result = literal_eval(raw)\n except (ValueError, SyntaxError, MemoryError):\n return raw\n\n # if it was a str and it still is a str, return it as-is.\n if isinstance(result, str):\n result = raw\n\n return result\n\n\nclass NativeSandboxTemplate(jinja2.nativetypes.NativeTemplate): # mypy: ignore\n environment_class = NativeSandboxEnvironment\n\n def render(self, *args, **kwargs):\n \"\"\"Render the template to produce a native Python type. If the\n result is a single node, its value is returned. Otherwise, the\n nodes are concatenated as strings. If the result can be parsed\n with :func:`ast.literal_eval`, the parsed value is returned.\n Otherwise, the string is returned.\n \"\"\"\n vars = dict(*args, **kwargs)\n\n try:\n return quoted_native_concat(\n self.root_render_func(self.new_context(vars))\n )\n except Exception:\n return self.environment.handle_exception()\n\n\nNativeSandboxEnvironment.template_class = NativeSandboxTemplate # type: ignore\n\n\nclass TemplateCache:\n def __init__(self):\n self.file_cache: Dict[str, jinja2.Template] = {}\n\n def get_node_template(self, node) -> jinja2.Template:\n key = node.macro_sql\n\n if key in self.file_cache:\n return self.file_cache[key]\n\n template = get_template(\n string=node.macro_sql,\n ctx={},\n node=node,\n )\n\n self.file_cache[key] = template\n return template\n\n def clear(self):\n self.file_cache.clear()\n\n\ntemplate_cache = TemplateCache()\n\n\nclass BaseMacroGenerator:\n def __init__(self, context: Optional[Dict[str, Any]] = None) -> None:\n self.context: Optional[Dict[str, Any]] = context\n\n def get_template(self):\n raise NotImplementedError('get_template not implemented!')\n\n def get_name(self) -> str:\n raise NotImplementedError('get_name not implemented!')\n\n def get_macro(self):\n name = self.get_name()\n template = self.get_template()\n # make the module. 
previously we set both vars and local, but that's\n # redundant: They both end up in the same place\n module = template.make_module(vars=self.context, shared=False)\n macro = module.__dict__[get_dbt_macro_name(name)]\n module.__dict__.update(self.context)\n return macro\n\n @contextmanager\n def exception_handler(self) -> Iterator[None]:\n try:\n yield\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n raise_compiler_error(str(e))\n\n def call_macro(self, *args, **kwargs):\n if self.context is None:\n raise InternalException(\n 'Context is still None in call_macro!'\n )\n assert self.context is not None\n\n macro = self.get_macro()\n\n with self.exception_handler():\n try:\n return macro(*args, **kwargs)\n except MacroReturn as e:\n return e.value\n\n\nclass MacroStack(threading.local):\n def __init__(self):\n super().__init__()\n self.call_stack = []\n\n @property\n def depth(self) -> int:\n return len(self.call_stack)\n\n def push(self, name):\n self.call_stack.append(name)\n\n def pop(self, name):\n got = self.call_stack.pop()\n if got != name:\n raise InternalException(f'popped {got}, expected {name}')\n\n\nclass MacroGenerator(BaseMacroGenerator):\n def __init__(\n self,\n macro,\n context: Optional[Dict[str, Any]] = None,\n node: Optional[Any] = None,\n stack: Optional[MacroStack] = None\n ) -> None:\n super().__init__(context)\n self.macro = macro\n self.node = node\n self.stack = stack\n\n def get_template(self):\n return template_cache.get_node_template(self.macro)\n\n def get_name(self) -> str:\n return self.macro.name\n\n @contextmanager\n def exception_handler(self) -> Iterator[None]:\n try:\n yield\n except (TypeError, jinja2.exceptions.TemplateRuntimeError) as e:\n raise_compiler_error(str(e), self.macro)\n except CompilationException as e:\n e.stack.append(self.macro)\n raise e\n\n @contextmanager\n def track_call(self):\n if self.stack is None or self.node is None:\n yield\n else:\n unique_id = self.macro.unique_id\n depth = self.stack.depth\n # only mark depth=0 as a dependency\n if depth == 0:\n self.node.depends_on.add_macro(unique_id)\n self.stack.push(unique_id)\n try:\n yield\n finally:\n self.stack.pop(unique_id)\n\n def __call__(self, *args, **kwargs):\n with self.track_call():\n return self.call_macro(*args, **kwargs)\n\n\nclass QueryStringGenerator(BaseMacroGenerator):\n def __init__(\n self, template_str: str, context: Dict[str, Any]\n ) -> None:\n super().__init__(context)\n self.template_str: str = template_str\n env = get_environment()\n self.template = env.from_string(\n self.template_str,\n globals=self.context,\n )\n\n def get_name(self) -> str:\n return 'query_comment_macro'\n\n def get_template(self):\n \"\"\"Don't use the template cache, we don't have a node\"\"\"\n return self.template\n\n def __call__(self, connection_name: str, node) -> str:\n return str(self.call_macro(connection_name, node))\n\n\nclass MaterializationExtension(jinja2.ext.Extension):\n tags = ['materialization']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n materialization_name = \\\n parser.parse_assign_target(name_only=True).name\n\n adapter_name = 'default'\n node.args = []\n node.defaults = []\n\n while parser.stream.skip_if('comma'):\n target = parser.parse_assign_target(name_only=True)\n\n if target.name == 'default':\n pass\n\n elif target.name == 'adapter':\n parser.stream.expect('assign')\n value = parser.parse_expression()\n adapter_name = value.value\n\n else:\n invalid_materialization_argument(\n 
materialization_name, target.name\n )\n\n node.name = get_materialization_macro_name(\n materialization_name, adapter_name\n )\n\n node.body = parser.parse_statements(('name:endmaterialization',),\n drop_needle=True)\n\n return node\n\n\nclass DocumentationExtension(jinja2.ext.Extension):\n tags = ['docs']\n\n def parse(self, parser):\n node = jinja2.nodes.Macro(lineno=next(parser.stream).lineno)\n docs_name = parser.parse_assign_target(name_only=True).name\n\n node.args = []\n node.defaults = []\n node.name = get_docs_macro_name(docs_name)\n node.body = parser.parse_statements(('name:enddocs',),\n drop_needle=True)\n return node\n\n\ndef _is_dunder_name(name):\n return name.startswith('__') and name.endswith('__')\n\n\ndef create_undefined(node=None):\n class Undefined(jinja2.Undefined):\n def __init__(self, hint=None, obj=None, name=None, exc=None):\n super().__init__(hint=hint, name=name)\n self.node = node\n self.name = name\n self.hint = hint\n # jinja uses these for safety, so we have to override them.\n # see https://github.com/pallets/jinja/blob/master/jinja2/sandbox.py#L332-L339 # noqa\n self.unsafe_callable = False\n self.alters_data = False\n\n def __getitem__(self, name):\n # Propagate the undefined value if a caller accesses this as if it\n # were a dictionary\n return self\n\n def __getattr__(self, name):\n if name == 'name' or _is_dunder_name(name):\n raise AttributeError(\n \"'{}' object has no attribute '{}'\"\n .format(type(self).__name__, name)\n )\n\n self.name = name\n\n return self.__class__(hint=self.hint, name=self.name)\n\n def __call__(self, *args, **kwargs):\n return self\n\n def __reduce__(self):\n raise_compiler_error(f'{self.name} is undefined', node=node)\n\n return Undefined\n\n\ndef get_environment(\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n) -> jinja2.Environment:\n args: Dict[str, List[Union[str, Type[jinja2.ext.Extension]]]] = {\n 'extensions': ['jinja2.ext.do']\n }\n\n if capture_macros:\n args['undefined'] = create_undefined(node)\n\n args['extensions'].append(MaterializationExtension)\n args['extensions'].append(DocumentationExtension)\n\n env_cls: Type[jinja2.Environment]\n text_filter: Type\n if native:\n env_cls = NativeSandboxEnvironment\n text_filter = TextMarker\n else:\n env_cls = MacroFuzzEnvironment\n text_filter = str\n\n env = env_cls(**args)\n env.filters['as_text'] = text_filter\n\n return env\n\n\n@contextmanager\ndef catch_jinja(node=None) -> Iterator[None]:\n try:\n yield\n except jinja2.exceptions.TemplateSyntaxError as e:\n e.translated = False\n raise CompilationException(str(e), node) from e\n except jinja2.exceptions.UndefinedError as e:\n raise CompilationException(str(e), node) from e\n except CompilationException as exc:\n exc.add_node(node)\n raise\n\n\ndef parse(string):\n with catch_jinja():\n return get_environment().parse(str(string))\n\n\ndef get_template(\n string: str,\n ctx: Dict[str, Any],\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n):\n with catch_jinja(node):\n env = get_environment(node, capture_macros, native=native)\n\n template_source = str(string)\n return env.from_string(template_source, globals=ctx)\n\n\ndef render_template(template, ctx: Dict[str, Any], node=None) -> str:\n with catch_jinja(node):\n return template.render(ctx)\n\n\ndef _requote_result(raw_value: str, rendered: str) -> str:\n double_quoted = raw_value.startswith('\"') and raw_value.endswith('\"')\n single_quoted = raw_value.startswith(\"'\") and raw_value.endswith(\"'\")\n if 
double_quoted:\n quote_char = '\"'\n elif single_quoted:\n quote_char = \"'\"\n else:\n quote_char = ''\n return f'{quote_char}{rendered}{quote_char}'\n\n\n# performance note: Local benmcharking (so take it with a big grain of salt!)\n# on this indicates that it is is on average slightly slower than\n# checking two separate patterns, but the standard deviation is smaller with\n# one pattern. The time difference between the two was ~2 std deviations, which\n# is small enough that I've just chosen the more readable option.\n_HAS_RENDER_CHARS_PAT = re.compile(r'({[{%]|[}%]})')\n\n\ndef get_rendered(\n string: str,\n ctx: Dict[str, Any],\n node=None,\n capture_macros: bool = False,\n native: bool = False,\n) -> str:\n # performance optimization: if there are no jinja control characters in the\n # string, we can just return the input. Fall back to jinja if the type is\n # not a string or if native rendering is enabled (so '1' -> 1, etc...)\n # If this is desirable in the native env as well, we could handle the\n # native=True case by passing the input string to ast.literal_eval, like\n # the native renderer does.\n if (\n not native and\n isinstance(string, str) and\n _HAS_RENDER_CHARS_PAT.search(string) is None\n ):\n return string\n template = get_template(\n string,\n ctx,\n node,\n capture_macros=capture_macros,\n native=native,\n )\n return render_template(template, ctx, node)\n\n\ndef undefined_error(msg) -> NoReturn:\n raise jinja2.exceptions.UndefinedError(msg)\n\n\ndef extract_toplevel_blocks(\n data: str,\n allowed_blocks: Optional[Set[str]] = None,\n collect_raw_data: bool = True,\n) -> List[Union[BlockData, BlockTag]]:\n \"\"\"Extract the top level blocks with matching block types from a jinja\n file, with some special handling for block nesting.\n\n :param data: The data to extract blocks from.\n :param allowed_blocks: The names of the blocks to extract from the file.\n They may not be nested within if/for blocks. If None, use the default\n values.\n :param collect_raw_data: If set, raw data between matched blocks will also\n be part of the results, as `BlockData` objects. 
They have a\n `block_type_name` field of `'__dbt_data'` and will never have a\n `block_name`.\n :return: A list of `BlockTag`s matching the allowed block types and (if\n `collect_raw_data` is `True`) `BlockData` objects.\n \"\"\"\n return BlockIterator(data).lex_for_blocks(\n allowed_blocks=allowed_blocks,\n collect_raw_data=collect_raw_data\n )\n\n\nSCHEMA_TEST_KWARGS_NAME = '_dbt_schema_test_kwargs'\n\n\ndef add_rendered_test_kwargs(\n context: Dict[str, Any],\n node: Union[ParsedSchemaTestNode, CompiledSchemaTestNode],\n capture_macros: bool = False,\n) -> None:\n \"\"\"Render each of the test kwargs in the given context using the native\n renderer, then insert that value into the given context as the special test\n keyword arguments member.\n \"\"\"\n looks_like_func = r'^\\s*(env_var|ref|var|source|doc)\\s*\\(.+\\)\\s*$'\n\n def _convert_function(\n value: Any, keypath: Tuple[Union[str, int], ...]\n ) -> Any:\n if isinstance(value, str):\n if keypath == ('column_name',):\n # special case: Don't render column names as native, make them\n # be strings\n return value\n\n if re.match(looks_like_func, value) is not None:\n # curly braces to make rendering happy\n value = f'{{{{ {value} }}}}'\n\n value = get_rendered(\n value, context, node, capture_macros=capture_macros,\n native=True\n )\n\n return value\n\n kwargs = deep_map(_convert_function, node.test_metadata.kwargs)\n context[SCHEMA_TEST_KWARGS_NAME] = kwargs\n",
"path": "core/dbt/clients/jinja.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 91a5c7833b7..c7b4c5dbc9b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,11 +1,11 @@
## dbt 0.17.1 (Release TBD)
### Fixes
+- dbt native rendering now avoids turning quoted strings into unquoted strings ([#2597](https://github.com/fishtown-analytics/dbt/issues/2597), [#2599](https://github.com/fishtown-analytics/dbt/pull/2599))
- Hash name of local packages ([#2600](https://github.com/fishtown-analytics/dbt/pull/2600))
-## dbt 0.17.1rc2 (June 25, 2020)
-
+## dbt 0.17.1rc2 (June 25, 2020)
### Fixes
- dbt config-version: 2 now properly defers rendering `+pre-hook` and `+post-hook` fields. ([#2583](https://github.com/fishtown-analytics/dbt/issues/2583), [#2854](https://github.com/fishtown-analytics/dbt/pull/2854))
diff --git a/core/dbt/clients/jinja.py b/core/dbt/clients/jinja.py
index 6138e8902c8..5ebb206042f 100644
--- a/core/dbt/clients/jinja.py
+++ b/core/dbt/clients/jinja.py
@@ -133,6 +133,10 @@ def quoted_native_concat(nodes):
except (ValueError, SyntaxError, MemoryError):
return raw
+ # if it was a str and it still is a str, return it as-is.
+ if isinstance(result, str):
+ result = raw
+
return result
diff --git a/test/unit/test_jinja.py b/test/unit/test_jinja.py
index 491348de914..b3d273f4b65 100644
--- a/test/unit/test_jinja.py
+++ b/test/unit/test_jinja.py
@@ -1,4 +1,6 @@
+import pytest
import unittest
+import yaml
from dbt.clients.jinja import get_rendered
from dbt.clients.jinja import get_template
@@ -413,3 +415,63 @@ def test_if_endfor_newlines(self):
'''
+native_expected_behaviors = [
+ # strings
+ ('''foo: bar''', 'bar'),
+ ('''foo: "bar"''', 'bar'),
+ ('''foo: "'bar'"''', "'bar'"),
+ ("""foo: '"bar"'""", '"bar"'),
+ # ints
+ ('''foo: 1''', 1),
+ ('''foo: "1"''', 1),
+ ('''foo: "'1'"''', "'1'"),
+ ('''foo: "{{ 1 }}"''', 1),
+ ('''foo: "{{ '1' }}"''', 1),
+ ('''foo: "'{{ 1 }}'"''', "'1'"),
+ ('''foo: "'{{ '1' }}'"''', "'1'"),
+ ('''foo: "{{ 1 | as_text }}"''', '1'),
+ ('''foo: "{{ '1' | as_text }}"''', '1'),
+ # booleans.
+ # Note the discrepancy with true vs True: `true` is recognized by jinja but
+ # not literal_eval, but `True` is recognized by ast.literal_eval.
+ # For extra fun, yaml recognizes both.
+ ('''foo: "{{ true }}"''', True),
+ ('''foo: "{{ 'true' }}"''', 'true'),
+ ('''foo: "'{{ true }}'"''', "'True'"),
+ ('''foo: "{{ true | as_text }}"''', "True"), # true -> boolean True -> text -> str(True) -> 'True'
+ ('''foo: "{{ 'true' | as_text }}"''', "true"), # 'true' -> string 'true' -> text -> str('true') -> 'true'
+ ('''foo: "{{ True }}"''', True),
+ ('''foo: "{{ 'True' }}"''', True),
+ ('''foo: "'{{ True }}'"''', "'True'"),
+ ('''foo: "{{ True | as_text }}"''', "True"), # True -> string 'True' -> text -> str('True') -> 'True'
+ ('''foo: "{{ 'True' | as_text }}"''', "True"), # 'True' -> string 'True' -> text -> str('True') -> 'True'
+ ('''foo: yes''', True), # yaml turns 'yes' into a boolean true
+ ('''foo: "yes"''', "yes"),
+ # concatenation
+ ('''foo: "{{ a_int + 100 }}"''', 200),
+ ('''foo: "{{ a_str ~ 100 }}"''', 100100),
+ ('''foo: "{{ a_int ~ 100 }}"''', 100100),
+ ('''foo: "{{ a_str }}{{ a_str }}"''', 100100),
+ ('''foo: "{{ a_int }}{{ a_int }}"''', 100100),
+ ('''foo: "'{{ a_int }}{{ a_int }}'"''', "'100100'"),
+
+]
+
+
+def expected_id(arg):
+ if isinstance(arg, list):
+ return '_'.join(arg)
+
+
[email protected](
+ 'inputvalue,expected', native_expected_behaviors, ids=expected_id
+)
+def test_native_rendering(inputvalue, expected):
+ # this test is pretty useless without preprocessing things in yaml.
+ value = yaml.safe_load(inputvalue)['foo']
+ ctx = {
+ 'a_str': '100',
+ 'a_int': 100,
+ 'b_str': 'hello'
+ }
+ assert get_rendered(value, ctx, native=True) == expected
|
ansible__ansible-42557 | The ios_linkagg search for interfaces may return wrong interface names
##### SUMMARY
We are trying to create a Port-channel with the ios_linkagg module, and we found that the way interfaces are parsed with the regexp seems wrong: it picks up interface names from configuration sections other than the top-level `interface` stanzas.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ios_linkagg
##### ANSIBLE VERSION
```
ansible 2.5.3
config file = /local/home/ta-admin-ng5898b/ansible.cfg
configured module search path = [u'/local/home/ta-admin-ng5898b/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /dns/development/ctrebuchet_sandbox/zion_ansible/lib/python2.7/site-packages/ansible
executable location = /dns/development/ctrebuchet_sandbox/zion_ansible/bin/ansible
python version = 2.7.8 (default, Oct 9 2014, 10:48:46) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
Red Hat Enterprise Linux Server release 6.9 (Santiago)
##### STEPS TO REPRODUCE
- Given a configuration like below:
```bash
!
interface TenGigabitEthernet1/5/1
no switchport
no ip address
no cdp enable
channel-group 1 mode on
!
interface TenGigabitEthernet1/5/2
no switchport
no ip address
no cdp enable
channel-group 1 mode on
!
interface TenGigabitEthernet1/5/3
no switchport
no ip address
no cdp enable
dual-active fast-hello
!
interface TenGigabitEthernet1/5/4
description Link to m880gbca1
no switchport
mtu 9216
no ip address
logging event link-status
logging event bundle-status
channel-group 11 mode active
!
interface TenGigabitEthernet1/5/5
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/6
no switchport
no ip address
logging event link-status
logging event bundle-status
shutdown
!
interface TenGigabitEthernet1/5/7
no switchport
no ip address
logging event link-status
logging event bundle-status
shutdown
!
interface TenGigabitEthernet1/5/8
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/9
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/10
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/11
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/12
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/13
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/14
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/15
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet1/5/16
no switchport
no ip address
shutdown
!
interface mgmt0
ip address 10.126.127.51 255.255.255.0
!
interface TenGigabitEthernet2/5/1
no switchport
no ip address
no cdp enable
channel-group 2 mode on
!
interface TenGigabitEthernet2/5/2
no switchport
no ip address
no cdp enable
channel-group 2 mode on
!
interface TenGigabitEthernet2/5/3
no switchport
no ip address
no cdp enable
dual-active fast-hello
!
interface TenGigabitEthernet2/5/4
description Link to m880gbca1
no switchport
mtu 9216
no ip address
logging event link-status
logging event bundle-status
channel-group 11 mode active
!
interface TenGigabitEthernet2/5/5
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/6
no switchport
no ip address
logging event link-status
logging event bundle-status
shutdown
!
interface TenGigabitEthernet2/5/7
no switchport
no ip address
logging event link-status
logging event bundle-status
shutdown
!
interface TenGigabitEthernet2/5/8
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/9
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/10
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/11
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/12
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/13
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/14
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/15
no switchport
no ip address
shutdown
!
interface TenGigabitEthernet2/5/16
no switchport
no ip address
shutdown
!
interface Vlan1
no ip address
shutdown
!
router ospf 1
router-id 10.126.16.4
passive-interface default
no passive-interface Port-channel11
network 0.0.0.0 255.255.255.255 area 0
!
```
- Given a task like below:
```yaml
- name: 4/ create link aggregation group.
  ios_linkagg:
    group: "{{ item.group }}"
    state: present
  loop: "{{ network_interfaces }}"
```
- Given variables like below:
```yaml
network_interfaces:
  - name: "Port-channel100"
    group: 100
    interface_1: "TenGigabitEthernet1/5/6"
    interface_2: "TenGigabitEthernet2/5/6"
  - name: "Port-channel101"
    group: "101"
    interface_1: "TenGigabitEthernet1/5/7"
    interface_2: "TenGigabitEthernet2/5/7"
```
##### EXPECTED RESULTS
- The PO is created
##### ACTUAL RESULTS
- The PO is not created
- When the playbook runs, we get this error:
```bash
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: TypeError: expected string or buffer
failed: [smm88-mockup-cudi-1.mgt.airbus.corp] (item={'name': 'Port-channel100', 'group': 100, 'interface_1': 'TenGigabitEthernet1/5/6', 'interface_2': 'TenGigabitEthernet2/5/6'}) => changed=false
item:
group: 100
interface_1: TenGigabitEthernet1/5/6
interface_2: TenGigabitEthernet2/5/6
name: Port-channel100
module_stderr: |-
Traceback (most recent call last):
File "/tmp/ansible_vTMuhg/ansible_module_ios_linkagg.py", line 315, in <module>
main()
File "/tmp/ansible_vTMuhg/ansible_module_ios_linkagg.py", line 302, in main
have = map_config_to_obj(module)
File "/tmp/ansible_vTMuhg/ansible_module_ios_linkagg.py", line 254, in map_config_to_obj
obj.update(get_channel(module, config, group))
File "/tmp/ansible_vTMuhg/ansible_module_ios_linkagg.py", line 237, in get_channel
channel['mode'] = parse_mode(module, config, group, member)
File "/tmp/ansible_vTMuhg/ansible_module_ios_linkagg.py", line 204, in parse_mode
match_int = re.findall(r'interface {0}\n'.format(member), body, re.M)
File "/usr/lib/python2.7/re.py", line 181, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or buffer
module_stdout: ''
msg: MODULE FAILURE
rc: 1
```
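- For reference, this TypeError is simply what `re.findall` raises when its second argument is not a string; presumably the section lookup for a non-existent interface name such as `default` hands `parse_mode` a non-string body. A minimal, self-contained sketch of that failure mode (the `None` value below is only illustrative, not taken from the module):
```python
import re

# Hypothetical stand-in: parse_mode() apparently receives a body that is
# not a string (e.g. None or an empty list) for a bogus member name.
body = None

try:
    re.findall(r'interface {0}\n'.format('default'), body, re.M)
except TypeError as exc:
    print(exc)  # on Python 2.7: "expected string or buffer"
```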
- It seems that changing the regexp in the get_channel function does the trick
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/network/ios/ios_linkagg.py#L230
```python
# current implementation
def get_channel(module, config, group):
    match = re.findall(r'interface (\S+)', config, re.M)

# proposed fix: anchor the pattern to the start of the line
def get_channel(module, config, group):
    match = re.findall(r'^interface (\S+)', config, re.M)
```
- With a "global" (unanchored) findall, wrong interface names (default and Port-channel11) are picked up from the router ospf section.
```
!
router ospf 1
router-id 10.126.16.4
passive-interface default
no passive-interface Port-channel11
network 0.0.0.0 255.255.255.255 area 0
!
```
```python
>>> matches = re.findall(r'interface (\S+)', config, re.M)
>>> matches
['Loopback0', 'Loopback1', 'Loopback12008', 'Port-channel1', 'Port-channel2', 'Port-channel11', 'Port-channel12', 'Tunnel1', 'TenGigabitEthernet1/5/1', 'TenGigabitEthernet1/5/2', 'TenGigabitEthernet1/5/3', 'TenGigabitEthernet1/5/4', 'TenGigabitEthernet1/5/5', 'TenGigabitEthernet1/5/6', 'TenGigabitEthernet1/5/7', 'TenGigabitEthernet1/5/8', 'TenGigabitEthernet1/5/9', 'TenGigabitEthernet1/5/10', 'TenGigabitEthernet1/5/11', 'TenGigabitEthernet1/5/12', 'TenGigabitEthernet1/5/13', 'TenGigabitEthernet1/5/14', 'TenGigabitEthernet1/5/15', 'TenGigabitEthernet1/5/16', 'mgmt0', 'TenGigabitEthernet2/5/1', 'TenGigabitEthernet2/5/2', 'TenGigabitEthernet2/5/3', 'TenGigabitEthernet2/5/4', 'TenGigabitEthernet2/5/5', 'TenGigabitEthernet2/5/6', 'TenGigabitEthernet2/5/7', 'TenGigabitEthernet2/5/8', 'TenGigabitEthernet2/5/9', 'TenGigabitEthernet2/5/10', 'TenGigabitEthernet2/5/11', 'TenGigabitEthernet2/5/12', 'TenGigabitEthernet2/5/13', 'TenGigabitEthernet2/5/14', 'TenGigabitEthernet2/5/15', 'TenGigabitEthernet2/5/16', 'Vlan1', 'default', 'Port-channel11', 'mgmt0']
```
- Changing the regexp to match only lines that begin with interface works.
```python
>>> matches = re.findall(r'^interface (\S+)', config, re.M)
>>> matches
['Loopback0', 'Loopback1', 'Loopback12008', 'Port-channel1', 'Port-channel2', 'Port-channel11', 'Port-channel12', 'Tunnel1', 'TenGigabitEthernet1/5/1', 'TenGigabitEthernet1/5/2', 'TenGigabitEthernet1/5/3', 'TenGigabitEthernet1/5/4', 'TenGigabitEthernet1/5/5', 'TenGigabitEthernet1/5/6', 'TenGigabitEthernet1/5/7', 'TenGigabitEthernet1/5/8', 'TenGigabitEthernet1/5/9', 'TenGigabitEthernet1/5/10', 'TenGigabitEthernet1/5/11', 'TenGigabitEthernet1/5/12', 'TenGigabitEthernet1/5/13', 'TenGigabitEthernet1/5/14', 'TenGigabitEthernet1/5/15', 'TenGigabitEthernet1/5/16', 'mgmt0', 'TenGigabitEthernet2/5/1', 'TenGigabitEthernet2/5/2', 'TenGigabitEthernet2/5/3', 'TenGigabitEthernet2/5/4', 'TenGigabitEthernet2/5/5', 'TenGigabitEthernet2/5/6', 'TenGigabitEthernet2/5/7', 'TenGigabitEthernet2/5/8', 'TenGigabitEthernet2/5/9', 'TenGigabitEthernet2/5/10', 'TenGigabitEthernet2/5/11', 'TenGigabitEthernet2/5/12', 'TenGigabitEthernet2/5/13', 'TenGigabitEthernet2/5/14', 'TenGigabitEthernet2/5/15', 'TenGigabitEthernet2/5/16', 'Vlan1']
```
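- To make the difference concrete, here is a small standalone sketch; the miniature config fragment below is made up for illustration, not taken from the device output above:
```python
import re

# Made-up miniature config: one real interface stanza plus a "router ospf"
# section whose passive-interface lines confuse the unanchored pattern.
config = (
    "interface Port-channel11\n"
    " no ip address\n"
    "!\n"
    "router ospf 1\n"
    " passive-interface default\n"
    " no passive-interface Port-channel11\n"
    "!\n"
)

print(re.findall(r'interface (\S+)', config, re.M))
# ['Port-channel11', 'default', 'Port-channel11']

print(re.findall(r'^interface (\S+)', config, re.M))
# ['Port-channel11']  -- only lines that actually start an interface stanza
```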
| [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2017, Ansible by Red Hat, inc\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'network'}\n\nDOCUMENTATION = \"\"\"\n---\nmodule: ios_linkagg\nversion_added: \"2.5\"\nauthor: \"Trishna Guha (@trishnaguha)\"\nshort_description: Manage link aggregation groups on Cisco IOS network devices\ndescription:\n - This module provides declarative management of link aggregation groups\n on Cisco IOS network devices.\nnotes:\n - Tested against IOS 15.2\noptions:\n group:\n description:\n - Channel-group number for the port-channel\n Link aggregation group. Range 1-255.\n mode:\n description:\n - Mode of the link aggregation group.\n choices: ['active', 'on', 'passive', 'auto', 'desirable']\n members:\n description:\n - List of members of the link aggregation group.\n aggregate:\n description: List of link aggregation definitions.\n state:\n description:\n - State of the link aggregation group.\n default: present\n choices: ['present', 'absent']\n purge:\n description:\n - Purge links not defined in the I(aggregate) parameter.\n default: no\nextends_documentation_fragment: ios\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: create link aggregation group\n ios_linkagg:\n group: 10\n state: present\n\n- name: delete link aggregation group\n ios_linkagg:\n group: 10\n state: absent\n\n- name: set link aggregation group to members\n ios_linkagg:\n group: 200\n mode: active\n members:\n - GigabitEthernet0/0\n - GigabitEthernet0/1\n\n- name: remove link aggregation group from GigabitEthernet0/0\n ios_linkagg:\n group: 200\n mode: active\n members:\n - GigabitEthernet0/1\n\n- name: Create aggregate of linkagg definitions\n ios_linkagg:\n aggregate:\n - { group: 3, mode: on, members: [GigabitEthernet0/1] }\n - { group: 100, mode: passive, members: [GigabitEthernet0/2] }\n\"\"\"\n\nRETURN = \"\"\"\ncommands:\n description: The list of configuration mode commands to send to the device\n returned: always, except for the platforms that use Netconf transport to manage the device.\n type: list\n sample:\n - interface port-channel 30\n - interface GigabitEthernet0/3\n - channel-group 30 mode on\n - no interface port-channel 30\n\"\"\"\n\nimport re\nfrom copy import deepcopy\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.network.common.config import CustomNetworkConfig\nfrom ansible.module_utils.network.common.utils import remove_default_spec\nfrom ansible.module_utils.network.ios.ios import get_config, load_config\nfrom ansible.module_utils.network.ios.ios import ios_argument_spec\n\n\ndef search_obj_in_list(group, lst):\n for o in lst:\n if o['group'] == group:\n return o\n\n\ndef map_obj_to_commands(updates, module):\n commands = list()\n want, have = updates\n purge = module.params['purge']\n\n for w in want:\n group = w['group']\n mode = w['mode']\n members = w.get('members') or []\n state = w['state']\n del w['state']\n\n obj_in_have = search_obj_in_list(group, have)\n\n if state == 'absent':\n if obj_in_have:\n commands.append('no interface port-channel {0}'.format(group))\n\n elif state == 'present':\n cmd = ['interface port-channel {0}'.format(group),\n 'end']\n if not obj_in_have:\n if not group:\n module.fail_json(msg='group is a required option')\n commands.extend(cmd)\n\n if members:\n for m 
in members:\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n else:\n if members:\n if 'members' not in obj_in_have.keys():\n for m in members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n elif set(members) != set(obj_in_have['members']):\n missing_members = list(set(members) - set(obj_in_have['members']))\n for m in missing_members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n superfluous_members = list(set(obj_in_have['members']) - set(members))\n for m in superfluous_members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('no channel-group {0} mode {1}'.format(group, mode))\n\n if purge:\n for h in have:\n obj_in_want = search_obj_in_list(h['group'], want)\n if not obj_in_want:\n commands.append('no interface port-channel {0}'.format(h['group']))\n\n return commands\n\n\ndef map_params_to_obj(module):\n obj = []\n\n aggregate = module.params.get('aggregate')\n if aggregate:\n for item in aggregate:\n for key in item:\n if item.get(key) is None:\n item[key] = module.params[key]\n\n d = item.copy()\n d['group'] = str(d['group'])\n\n obj.append(d)\n else:\n obj.append({\n 'group': str(module.params['group']),\n 'mode': module.params['mode'],\n 'members': module.params['members'],\n 'state': module.params['state']\n })\n\n return obj\n\n\ndef parse_mode(module, config, group, member):\n mode = None\n netcfg = CustomNetworkConfig(indent=1, contents=config)\n parents = ['interface {0}'.format(member)]\n body = netcfg.get_section(parents)\n\n match_int = re.findall(r'interface {0}\\n'.format(member), body, re.M)\n if match_int:\n match = re.search(r'channel-group {0} mode (\\S+)'.format(group), body, re.M)\n if match:\n mode = match.group(1)\n\n return mode\n\n\ndef parse_members(module, config, group):\n members = []\n\n for line in config.strip().split('!'):\n l = line.strip()\n if l.startswith('interface'):\n match_group = re.findall(r'channel-group {0} mode'.format(group), l, re.M)\n if match_group:\n match = re.search(r'interface (\\S+)', l, re.M)\n if match:\n members.append(match.group(1))\n\n return members\n\n\ndef get_channel(module, config, group):\n match = re.findall(r'interface (\\S+)', config, re.M)\n\n if not match:\n return {}\n\n channel = {}\n for item in set(match):\n member = item\n channel['mode'] = parse_mode(module, config, group, member)\n channel['members'] = parse_members(module, config, group)\n\n return channel\n\n\ndef map_config_to_obj(module):\n objs = list()\n config = get_config(module)\n\n for line in config.split('\\n'):\n l = line.strip()\n match = re.search(r'interface Port-channel(\\S+)', l, re.M)\n if match:\n obj = {}\n group = match.group(1)\n obj['group'] = group\n obj.update(get_channel(module, config, group))\n objs.append(obj)\n\n return objs\n\n\ndef main():\n \"\"\" main entry point for module execution\n \"\"\"\n element_spec = dict(\n group=dict(type='int'),\n mode=dict(choices=['active', 'on', 'passive', 'auto', 'desirable']),\n members=dict(type='list'),\n state=dict(default='present',\n choices=['present', 'absent'])\n )\n\n aggregate_spec = deepcopy(element_spec)\n aggregate_spec['group'] = dict(required=True)\n\n required_one_of = [['group', 'aggregate']]\n required_together = [['members', 'mode']]\n mutually_exclusive = [['group', 'aggregate']]\n\n 
# remove default in aggregate spec, to handle common arguments\n remove_default_spec(aggregate_spec)\n\n argument_spec = dict(\n aggregate=dict(type='list', elements='dict', options=aggregate_spec,\n required_together=required_together),\n purge=dict(default=False, type='bool')\n )\n\n argument_spec.update(element_spec)\n argument_spec.update(ios_argument_spec)\n\n module = AnsibleModule(argument_spec=argument_spec,\n required_one_of=required_one_of,\n required_together=required_together,\n mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n\n warnings = list()\n result = {'changed': False}\n if warnings:\n result['warnings'] = warnings\n\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands((want, have), module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n load_config(module, commands)\n result['changed'] = True\n\n module.exit_json(**result)\n\nif __name__ == '__main__':\n main()\n",
"path": "lib/ansible/modules/network/ios/ios_linkagg.py"
}
] | [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2017, Ansible by Red Hat, inc\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'network'}\n\nDOCUMENTATION = \"\"\"\n---\nmodule: ios_linkagg\nversion_added: \"2.5\"\nauthor: \"Trishna Guha (@trishnaguha)\"\nshort_description: Manage link aggregation groups on Cisco IOS network devices\ndescription:\n - This module provides declarative management of link aggregation groups\n on Cisco IOS network devices.\nnotes:\n - Tested against IOS 15.2\noptions:\n group:\n description:\n - Channel-group number for the port-channel\n Link aggregation group. Range 1-255.\n mode:\n description:\n - Mode of the link aggregation group.\n choices: ['active', 'on', 'passive', 'auto', 'desirable']\n members:\n description:\n - List of members of the link aggregation group.\n aggregate:\n description: List of link aggregation definitions.\n state:\n description:\n - State of the link aggregation group.\n default: present\n choices: ['present', 'absent']\n purge:\n description:\n - Purge links not defined in the I(aggregate) parameter.\n default: no\nextends_documentation_fragment: ios\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: create link aggregation group\n ios_linkagg:\n group: 10\n state: present\n\n- name: delete link aggregation group\n ios_linkagg:\n group: 10\n state: absent\n\n- name: set link aggregation group to members\n ios_linkagg:\n group: 200\n mode: active\n members:\n - GigabitEthernet0/0\n - GigabitEthernet0/1\n\n- name: remove link aggregation group from GigabitEthernet0/0\n ios_linkagg:\n group: 200\n mode: active\n members:\n - GigabitEthernet0/1\n\n- name: Create aggregate of linkagg definitions\n ios_linkagg:\n aggregate:\n - { group: 3, mode: on, members: [GigabitEthernet0/1] }\n - { group: 100, mode: passive, members: [GigabitEthernet0/2] }\n\"\"\"\n\nRETURN = \"\"\"\ncommands:\n description: The list of configuration mode commands to send to the device\n returned: always, except for the platforms that use Netconf transport to manage the device.\n type: list\n sample:\n - interface port-channel 30\n - interface GigabitEthernet0/3\n - channel-group 30 mode on\n - no interface port-channel 30\n\"\"\"\n\nimport re\nfrom copy import deepcopy\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.network.common.config import CustomNetworkConfig\nfrom ansible.module_utils.network.common.utils import remove_default_spec\nfrom ansible.module_utils.network.ios.ios import get_config, load_config\nfrom ansible.module_utils.network.ios.ios import ios_argument_spec\n\n\ndef search_obj_in_list(group, lst):\n for o in lst:\n if o['group'] == group:\n return o\n\n\ndef map_obj_to_commands(updates, module):\n commands = list()\n want, have = updates\n purge = module.params['purge']\n\n for w in want:\n group = w['group']\n mode = w['mode']\n members = w.get('members') or []\n state = w['state']\n del w['state']\n\n obj_in_have = search_obj_in_list(group, have)\n\n if state == 'absent':\n if obj_in_have:\n commands.append('no interface port-channel {0}'.format(group))\n\n elif state == 'present':\n cmd = ['interface port-channel {0}'.format(group),\n 'end']\n if not obj_in_have:\n if not group:\n module.fail_json(msg='group is a required option')\n commands.extend(cmd)\n\n if members:\n for m 
in members:\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n else:\n if members:\n if 'members' not in obj_in_have.keys():\n for m in members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n elif set(members) != set(obj_in_have['members']):\n missing_members = list(set(members) - set(obj_in_have['members']))\n for m in missing_members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('channel-group {0} mode {1}'.format(group, mode))\n\n superfluous_members = list(set(obj_in_have['members']) - set(members))\n for m in superfluous_members:\n commands.extend(cmd)\n commands.append('interface {0}'.format(m))\n commands.append('no channel-group {0} mode {1}'.format(group, mode))\n\n if purge:\n for h in have:\n obj_in_want = search_obj_in_list(h['group'], want)\n if not obj_in_want:\n commands.append('no interface port-channel {0}'.format(h['group']))\n\n return commands\n\n\ndef map_params_to_obj(module):\n obj = []\n\n aggregate = module.params.get('aggregate')\n if aggregate:\n for item in aggregate:\n for key in item:\n if item.get(key) is None:\n item[key] = module.params[key]\n\n d = item.copy()\n d['group'] = str(d['group'])\n\n obj.append(d)\n else:\n obj.append({\n 'group': str(module.params['group']),\n 'mode': module.params['mode'],\n 'members': module.params['members'],\n 'state': module.params['state']\n })\n\n return obj\n\n\ndef parse_mode(module, config, group, member):\n mode = None\n netcfg = CustomNetworkConfig(indent=1, contents=config)\n parents = ['interface {0}'.format(member)]\n body = netcfg.get_section(parents)\n\n match_int = re.findall(r'interface {0}\\n'.format(member), body, re.M)\n if match_int:\n match = re.search(r'channel-group {0} mode (\\S+)'.format(group), body, re.M)\n if match:\n mode = match.group(1)\n\n return mode\n\n\ndef parse_members(module, config, group):\n members = []\n\n for line in config.strip().split('!'):\n l = line.strip()\n if l.startswith('interface'):\n match_group = re.findall(r'channel-group {0} mode'.format(group), l, re.M)\n if match_group:\n match = re.search(r'interface (\\S+)', l, re.M)\n if match:\n members.append(match.group(1))\n\n return members\n\n\ndef get_channel(module, config, group):\n match = re.findall(r'^interface (\\S+)', config, re.M)\n\n if not match:\n return {}\n\n channel = {}\n for item in set(match):\n member = item\n channel['mode'] = parse_mode(module, config, group, member)\n channel['members'] = parse_members(module, config, group)\n\n return channel\n\n\ndef map_config_to_obj(module):\n objs = list()\n config = get_config(module)\n\n for line in config.split('\\n'):\n l = line.strip()\n match = re.search(r'interface Port-channel(\\S+)', l, re.M)\n if match:\n obj = {}\n group = match.group(1)\n obj['group'] = group\n obj.update(get_channel(module, config, group))\n objs.append(obj)\n\n return objs\n\n\ndef main():\n \"\"\" main entry point for module execution\n \"\"\"\n element_spec = dict(\n group=dict(type='int'),\n mode=dict(choices=['active', 'on', 'passive', 'auto', 'desirable']),\n members=dict(type='list'),\n state=dict(default='present',\n choices=['present', 'absent'])\n )\n\n aggregate_spec = deepcopy(element_spec)\n aggregate_spec['group'] = dict(required=True)\n\n required_one_of = [['group', 'aggregate']]\n required_together = [['members', 'mode']]\n mutually_exclusive = [['group', 
'aggregate']]\n\n # remove default in aggregate spec, to handle common arguments\n remove_default_spec(aggregate_spec)\n\n argument_spec = dict(\n aggregate=dict(type='list', elements='dict', options=aggregate_spec,\n required_together=required_together),\n purge=dict(default=False, type='bool')\n )\n\n argument_spec.update(element_spec)\n argument_spec.update(ios_argument_spec)\n\n module = AnsibleModule(argument_spec=argument_spec,\n required_one_of=required_one_of,\n required_together=required_together,\n mutually_exclusive=mutually_exclusive,\n supports_check_mode=True)\n\n warnings = list()\n result = {'changed': False}\n if warnings:\n result['warnings'] = warnings\n\n want = map_params_to_obj(module)\n have = map_config_to_obj(module)\n\n commands = map_obj_to_commands((want, have), module)\n result['commands'] = commands\n\n if commands:\n if not module.check_mode:\n load_config(module, commands)\n result['changed'] = True\n\n module.exit_json(**result)\n\nif __name__ == '__main__':\n main()\n",
"path": "lib/ansible/modules/network/ios/ios_linkagg.py"
}
] | diff --git a/lib/ansible/modules/network/ios/ios_linkagg.py b/lib/ansible/modules/network/ios/ios_linkagg.py
index 840576eee7bd1a..e07c18b0e57dca 100644
--- a/lib/ansible/modules/network/ios/ios_linkagg.py
+++ b/lib/ansible/modules/network/ios/ios_linkagg.py
@@ -227,7 +227,7 @@ def parse_members(module, config, group):
def get_channel(module, config, group):
- match = re.findall(r'interface (\S+)', config, re.M)
+ match = re.findall(r'^interface (\S+)', config, re.M)
if not match:
return {}
|
dotkom__onlineweb4-1359 | Option to post video in article
Make it possible to post a video in an article from the dashboard.
| [
{
"content": "# -*- encoding: utf-8 -*-\nfrom django import forms\n\nfrom apps.article.models import Article\nfrom apps.dashboard.widgets import DatetimePickerInput, multiple_widget_generator\nfrom apps.gallery.widgets import SingleImageInput\n\nfrom taggit.forms import TagWidget\n\n\nclass ArticleForm(forms.ModelForm):\n\n class Meta(object):\n \"\"\"\n Add fields that should have DTP activated in the datetimepicker_fields list\n \"\"\"\n\n model = Article\n fields = [\n 'heading',\n 'ingress_short',\n 'ingress',\n 'content',\n 'image',\n 'published_date',\n 'authors',\n 'tags',\n 'featured'\n ]\n\n # Fields should be a mapping between field name and an attribute dictionary\n img_fields = [('image', {'id': 'responsive-image-id'})]\n dtp_fields = [('published_date', {})]\n widgetlist = [\n (DatetimePickerInput, dtp_fields),\n (SingleImageInput, img_fields)\n ]\n\n # Multiple widget generator merges results from regular widget_generator into a single widget dict\n widgets = multiple_widget_generator(widgetlist)\n widgets.update({'tags': TagWidget(attrs={'placeholder': 'Eksempel: åre, online, kjelleren'})})\n labels = {\n 'tags': u'Tags'\n }\n",
"path": "apps/article/dashboard/forms.py"
}
] | [
{
"content": "# -*- encoding: utf-8 -*-\nfrom django import forms\n\nfrom apps.article.models import Article\nfrom apps.dashboard.widgets import DatetimePickerInput, multiple_widget_generator\nfrom apps.gallery.widgets import SingleImageInput\n\nfrom taggit.forms import TagWidget\n\n\nclass ArticleForm(forms.ModelForm):\n\n class Meta(object):\n \"\"\"\n Add fields that should have DTP activated in the datetimepicker_fields list\n \"\"\"\n\n model = Article\n fields = [\n 'heading',\n 'ingress_short',\n 'ingress',\n 'content',\n 'image',\n 'video',\n 'published_date',\n 'authors',\n 'tags',\n 'featured'\n ]\n\n # Fields should be a mapping between field name and an attribute dictionary\n img_fields = [('image', {'id': 'responsive-image-id'})]\n dtp_fields = [('published_date', {})]\n widgetlist = [\n (DatetimePickerInput, dtp_fields),\n (SingleImageInput, img_fields)\n ]\n\n # Multiple widget generator merges results from regular widget_generator into a single widget dict\n widgets = multiple_widget_generator(widgetlist)\n widgets.update({'tags': TagWidget(attrs={'placeholder': 'Eksempel: åre, online, kjelleren'})})\n labels = {\n 'tags': u'Tags'\n }\n",
"path": "apps/article/dashboard/forms.py"
}
] | diff --git a/apps/article/dashboard/forms.py b/apps/article/dashboard/forms.py
index 43ba4ef9a..fed85caa3 100644
--- a/apps/article/dashboard/forms.py
+++ b/apps/article/dashboard/forms.py
@@ -22,6 +22,7 @@ class Meta(object):
'ingress',
'content',
'image',
+ 'video',
'published_date',
'authors',
'tags',
|
Gallopsled__pwntools-1893 | 'pwn cyclic -o afca' throws a BytesWarning
```
$ pwn cyclic -o afca
/Users/heapcrash/pwntools/pwnlib/commandline/cyclic.py:74: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
pat = flat(pat, bytes=args.length)
506
```
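The warning comes from handing a text string (the -o/--lookup argument) to flat(); the patch further down in this record guards against that by encoding the value first. A minimal sketch of that idea (the literal 'afca' stands in for args.lookup):
```python
import six

pat = 'afca'  # stand-in for args.lookup, which argparse delivers as text

# Encode to bytes on Python 3 before the value reaches flat(), so the
# BytesWarning about implicit ASCII conversion is not triggered.
if six.PY3:
    pat = bytes(pat, encoding='utf-8')
```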
| [
{
"content": "#!/usr/bin/env python2\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nimport argparse\nimport six\nimport string\nimport sys\n\nimport pwnlib.args\npwnlib.args.free_form = False\n\nfrom pwn import *\nfrom pwnlib.commandline import common\n\nparser = common.parser_commands.add_parser(\n 'cyclic',\n help = \"Cyclic pattern creator/finder\",\n description = \"Cyclic pattern creator/finder\"\n)\n\nparser.add_argument(\n '-a', '--alphabet',\n metavar = 'alphabet',\n default = string.ascii_lowercase.encode(),\n type = packing._encode,\n help = 'The alphabet to use in the cyclic pattern (defaults to all lower case letters)',\n)\n\nparser.add_argument(\n '-n', '--length',\n metavar = 'length',\n default = 4,\n type = int,\n help = 'Size of the unique subsequences (defaults to 4).'\n)\n\nparser.add_argument(\n '-c', '--context',\n metavar = 'context',\n action = 'append',\n type = common.context_arg,\n choices = common.choices,\n help = 'The os/architecture/endianness/bits the shellcode will run in (default: linux/i386), choose from: %s' % common.choices,\n)\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n '-l', '-o', '--offset', '--lookup',\n dest = 'lookup',\n metavar = 'lookup_value',\n help = 'Do a lookup instead printing the alphabet',\n)\n\ngroup.add_argument(\n 'count',\n type=int,\n nargs='?',\n default=None,\n help='Number of characters to print'\n)\n\ndef main(args):\n alphabet = args.alphabet\n subsize = args.length\n\n if args.lookup:\n pat = args.lookup\n\n try:\n pat = int(pat, 0)\n except ValueError:\n pass\n pat = flat(pat, bytes=args.length)\n\n if len(pat) != subsize:\n log.critical('Subpattern must be %d bytes' % subsize)\n sys.exit(1)\n\n if not all(c in alphabet for c in pat):\n log.critical('Pattern contains characters not present in the alphabet')\n sys.exit(1)\n\n offset = cyclic_find(pat, alphabet, subsize)\n\n if offset == -1:\n log.critical('Given pattern does not exist in cyclic pattern')\n sys.exit(1)\n else:\n print(offset)\n else:\n want = args.count\n result = cyclic(want, alphabet, subsize)\n got = len(result)\n if want is not None and got < want:\n log.failure(\"Alphabet too small (max length = %i)\" % got)\n\n out = getattr(sys.stdout, 'buffer', sys.stdout)\n out.write(result)\n\n if out.isatty():\n out.write(b'\\n')\n\nif __name__ == '__main__':\n pwnlib.commandline.common.main(__file__)\n",
"path": "pwnlib/commandline/cyclic.py"
}
] | [
{
"content": "#!/usr/bin/env python2\nfrom __future__ import absolute_import\nfrom __future__ import division\n\nimport argparse\nimport six\nimport string\nimport sys\n\nimport pwnlib.args\npwnlib.args.free_form = False\n\nfrom pwn import *\nfrom pwnlib.commandline import common\n\nparser = common.parser_commands.add_parser(\n 'cyclic',\n help = \"Cyclic pattern creator/finder\",\n description = \"Cyclic pattern creator/finder\"\n)\n\nparser.add_argument(\n '-a', '--alphabet',\n metavar = 'alphabet',\n default = string.ascii_lowercase.encode(),\n type = packing._encode,\n help = 'The alphabet to use in the cyclic pattern (defaults to all lower case letters)',\n)\n\nparser.add_argument(\n '-n', '--length',\n metavar = 'length',\n default = 4,\n type = int,\n help = 'Size of the unique subsequences (defaults to 4).'\n)\n\nparser.add_argument(\n '-c', '--context',\n metavar = 'context',\n action = 'append',\n type = common.context_arg,\n choices = common.choices,\n help = 'The os/architecture/endianness/bits the shellcode will run in (default: linux/i386), choose from: %s' % common.choices,\n)\n\ngroup = parser.add_mutually_exclusive_group(required=False)\ngroup.add_argument(\n '-l', '-o', '--offset', '--lookup',\n dest = 'lookup',\n metavar = 'lookup_value',\n help = 'Do a lookup instead printing the alphabet',\n)\n\ngroup.add_argument(\n 'count',\n type=int,\n nargs='?',\n default=None,\n help='Number of characters to print'\n)\n\ndef main(args):\n alphabet = args.alphabet\n subsize = args.length\n\n if args.lookup:\n pat = args.lookup\n\n if six.PY3:\n pat = bytes(pat, encoding='utf-8')\n\n try:\n pat = int(pat, 0)\n except ValueError:\n pass\n pat = flat(pat, bytes=args.length)\n\n if len(pat) != subsize:\n log.critical('Subpattern must be %d bytes' % subsize)\n sys.exit(1)\n\n if not all(c in alphabet for c in pat):\n log.critical('Pattern contains characters not present in the alphabet')\n sys.exit(1)\n\n offset = cyclic_find(pat, alphabet, subsize)\n\n if offset == -1:\n log.critical('Given pattern does not exist in cyclic pattern')\n sys.exit(1)\n else:\n print(offset)\n else:\n want = args.count\n result = cyclic(want, alphabet, subsize)\n got = len(result)\n if want is not None and got < want:\n log.failure(\"Alphabet too small (max length = %i)\" % got)\n\n out = getattr(sys.stdout, 'buffer', sys.stdout)\n out.write(result)\n\n if out.isatty():\n out.write(b'\\n')\n\nif __name__ == '__main__':\n pwnlib.commandline.common.main(__file__)\n",
"path": "pwnlib/commandline/cyclic.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 462646ddb..1d67a0b6d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -64,12 +64,14 @@ The table below shows which release corresponds to each branch, and what date th
- [#1733][1733] Update libc headers -> more syscalls available!
- [#1876][1876] add `self.message` and change `sys.exc_type` to `sys.exec_info()` in PwnlibException
- [#1877][1877] encoders error message handles when `avoid` is bytes in python3
-- [#1892](1892) Silence SIGPIPE error for "pwn phd"
+- [#1892][1892] Silence SIGPIPE error for "pwn phd"
+- [#1893][1893] Fix bytes warning in "pwn cyclic"
[1733]: https://github.com/Gallopsled/pwntools/pull/1733
[1876]: https://github.com/Gallopsled/pwntools/pull/1876
[1877]: https://github.com/Gallopsled/pwntools/pull/1877
[1892]: https://github.com/Gallopsled/pwntools/pull/1892
+[1893]: https://github.com/Gallopsled/pwntools/pull/1893
## 4.6.0 (`beta`)
diff --git a/pwnlib/commandline/cyclic.py b/pwnlib/commandline/cyclic.py
index c0cb19002..9adac3b6c 100644
--- a/pwnlib/commandline/cyclic.py
+++ b/pwnlib/commandline/cyclic.py
@@ -67,6 +67,9 @@ def main(args):
if args.lookup:
pat = args.lookup
+ if six.PY3:
+ pat = bytes(pat, encoding='utf-8')
+
try:
pat = int(pat, 0)
except ValueError:
|
dynaconf__dynaconf-672 | [bug] UnicodeEncodeError upon dynaconf init
**Describe the bug**
`dynaconf init -f yaml` results in a `UnicodeEncodeError`.
**To Reproduce**
Steps to reproduce the behavior:
1. `git clone -b dynaconf https://github.com/ebenh/django-flex-user.git`
2. `py -m pipenv install --dev`
3. `py -m pipenv shell`
4. `export DJANGO_SETTINGS_MODULE=test_project.settings`
5. `dynaconf init -f yaml`
**Error Message**
```
Traceback (most recent call last):
File "C:\Users\eben\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\eben\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\eben\.virtualenvs\django-flex-user-ab_cVlY8\Scripts\dynaconf.exe\__main__.py", line 7, in <module>
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\core.py", line 221, in __call__
def __call__(A,*B,**C):return A.main(*B,**C)
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\core.py", line 205, in main
H=E.invoke(F)
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\core.py", line 345, in invoke
with C:return F(C.command.invoke(C))
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\core.py", line 288, in invoke
if A.callback is not _A:return ctx.invoke(A.callback,**ctx.params)
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\core.py", line 170, in invoke
with G:return A(*B,**E)
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\decorators.py", line 21, in A
def A(*A,**B):return f(get_current_context(),*A,**B)
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\cli.py", line 257, in init
click.echo("\u2699\ufe0f Configuring your Dynaconf environment")
File "c:\users\eben\.virtualenvs\django-flex-user-ab_cvly8\lib\site-packages\dynaconf\vendor\click\utils.py", line 82, in echo
if A:B.write(A)
File "C:\Users\eben\AppData\Local\Programs\Python\Python37\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-1: character maps to <undefined>
```
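The traceback shows click.echo writing the "⚙️" emoji through a cp1252-encoded Windows console, which cannot represent it. A minimal, hypothetical workaround sketch (not project documentation): run the CLI with Python's stdio forced to UTF-8 so the cp1252 codec is bypassed.
```python
import os
import subprocess

# Launch the CLI in a child process with PYTHONIOENCODING set to UTF-8,
# so its stdout/stderr use a codec that can encode the emoji output.
env = dict(os.environ, PYTHONIOENCODING="utf-8")
subprocess.run(["dynaconf", "init", "-f", "yaml"], env=env, check=True)
```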
| [
{
"content": "import importlib\nimport io\nimport os\nimport pprint\nimport sys\nimport warnings\nimport webbrowser\nfrom contextlib import suppress\nfrom pathlib import Path\n\nfrom dynaconf import constants\nfrom dynaconf import default_settings\nfrom dynaconf import LazySettings\nfrom dynaconf import loaders\nfrom dynaconf import settings as legacy_settings\nfrom dynaconf.loaders.py_loader import get_module\nfrom dynaconf.utils import upperfy\nfrom dynaconf.utils.files import read_file\nfrom dynaconf.utils.functional import empty\nfrom dynaconf.utils.parse_conf import parse_conf_data\nfrom dynaconf.validator import ValidationError\nfrom dynaconf.validator import Validator\nfrom dynaconf.vendor import click\nfrom dynaconf.vendor import toml\n\n\nCWD = Path.cwd()\nEXTS = [\"ini\", \"toml\", \"yaml\", \"json\", \"py\", \"env\"]\nWRITERS = [\"ini\", \"toml\", \"yaml\", \"json\", \"py\", \"redis\", \"vault\", \"env\"]\n\nENC = default_settings.ENCODING_FOR_DYNACONF\n\n\ndef set_settings(ctx, instance=None):\n \"\"\"Pick correct settings instance and set it to a global variable.\"\"\"\n\n global settings\n\n settings = None\n\n if instance is not None:\n if ctx.invoked_subcommand in [\"init\"]:\n raise click.UsageError(\n \"-i/--instance option is not allowed for `init` command\"\n )\n sys.path.insert(0, \".\")\n settings = import_settings(instance)\n elif \"FLASK_APP\" in os.environ: # pragma: no cover\n with suppress(ImportError, click.UsageError):\n from flask.cli import ScriptInfo # noqa\n\n flask_app = ScriptInfo().load_app()\n settings = flask_app.config\n click.echo(\n click.style(\n \"Flask app detected\", fg=\"white\", bg=\"bright_black\"\n )\n )\n elif \"DJANGO_SETTINGS_MODULE\" in os.environ: # pragma: no cover\n sys.path.insert(0, os.path.abspath(os.getcwd()))\n try:\n # Django extension v2\n from django.conf import settings # noqa\n\n settings.DYNACONF.configure()\n except AttributeError:\n settings = LazySettings()\n\n if settings is not None:\n click.echo(\n click.style(\n \"Django app detected\", fg=\"white\", bg=\"bright_black\"\n )\n )\n\n if settings is None:\n\n if instance is None and \"--help\" not in click.get_os_args():\n if ctx.invoked_subcommand and ctx.invoked_subcommand not in [\n \"init\",\n ]:\n warnings.warn(\n \"Starting on 3.x the param --instance/-i is now required. 
\"\n \"try passing it `dynaconf -i path.to.settings <cmd>` \"\n \"Example `dynaconf -i config.settings list` \"\n )\n settings = legacy_settings\n else:\n settings = LazySettings(create_new_settings=True)\n else:\n settings = LazySettings()\n\n\ndef import_settings(dotted_path):\n \"\"\"Import settings instance from python dotted path.\n\n Last item in dotted path must be settings instace.\n\n Example: import_settings('path.to.settings')\n \"\"\"\n if \".\" in dotted_path:\n module, name = dotted_path.rsplit(\".\", 1)\n else:\n raise click.UsageError(\n f\"invalid path to settings instance: {dotted_path}\"\n )\n try:\n module = importlib.import_module(module)\n except ImportError as e:\n raise click.UsageError(e)\n try:\n return getattr(module, name)\n except AttributeError as e:\n raise click.UsageError(e)\n\n\ndef split_vars(_vars):\n \"\"\"Splits values like foo=bar=zaz in {'foo': 'bar=zaz'}\"\"\"\n return (\n {\n upperfy(k.strip()): parse_conf_data(\n v.strip(), tomlfy=True, box_settings=settings\n )\n for k, _, v in [item.partition(\"=\") for item in _vars]\n }\n if _vars\n else {}\n )\n\n\ndef read_file_in_root_directory(*names, **kwargs):\n \"\"\"Read a file on root dir.\"\"\"\n return read_file(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf-8\"),\n )\n\n\ndef print_version(ctx, param, value):\n if not value or ctx.resilient_parsing:\n return\n click.echo(read_file_in_root_directory(\"VERSION\"))\n ctx.exit()\n\n\ndef open_docs(ctx, param, value): # pragma: no cover\n if not value or ctx.resilient_parsing:\n return\n url = \"https://dynaconf.com/\"\n webbrowser.open(url, new=2)\n click.echo(f\"{url} opened in browser\")\n ctx.exit()\n\n\ndef show_banner(ctx, param, value):\n \"\"\"Shows dynaconf awesome banner\"\"\"\n if not value or ctx.resilient_parsing:\n return\n set_settings(ctx)\n click.echo(settings.dynaconf_banner)\n click.echo(\"Learn more at: http://github.com/rochacbruno/dynaconf\")\n ctx.exit()\n\n\[email protected]()\[email protected](\n \"--version\",\n is_flag=True,\n callback=print_version,\n expose_value=False,\n is_eager=True,\n help=\"Show dynaconf version\",\n)\[email protected](\n \"--docs\",\n is_flag=True,\n callback=open_docs,\n expose_value=False,\n is_eager=True,\n help=\"Open documentation in browser\",\n)\[email protected](\n \"--banner\",\n is_flag=True,\n callback=show_banner,\n expose_value=False,\n is_eager=True,\n help=\"Show awesome banner\",\n)\[email protected](\n \"--instance\",\n \"-i\",\n default=None,\n envvar=\"INSTANCE_FOR_DYNACONF\",\n help=\"Custom instance of LazySettings\",\n)\[email protected]_context\ndef main(ctx, instance):\n \"\"\"Dynaconf - Command Line Interface\\n\n Documentation: https://dynaconf.com/\n \"\"\"\n set_settings(ctx, instance)\n\n\[email protected]()\[email protected](\n \"--format\", \"fileformat\", \"-f\", default=\"toml\", type=click.Choice(EXTS)\n)\[email protected](\n \"--path\", \"-p\", default=CWD, help=\"defaults to current directory\"\n)\[email protected](\n \"--env\",\n \"-e\",\n default=None,\n help=\"deprecated command (kept for compatibility but unused)\",\n)\[email protected](\n \"--vars\",\n \"_vars\",\n \"-v\",\n multiple=True,\n default=None,\n help=(\n \"extra values to write to settings file \"\n \"e.g: `dynaconf init -v NAME=foo -v X=2`\"\n ),\n)\[email protected](\n \"--secrets\",\n \"_secrets\",\n \"-s\",\n multiple=True,\n default=None,\n help=(\n \"secret key values to be written in .secrets \"\n \"e.g: `dynaconf init -s TOKEN=kdslmflds\"\n 
),\n)\[email protected](\"--wg/--no-wg\", default=True)\[email protected](\"-y\", default=False, is_flag=True)\[email protected](\"--django\", default=os.environ.get(\"DJANGO_SETTINGS_MODULE\"))\[email protected]_context\ndef init(ctx, fileformat, path, env, _vars, _secrets, wg, y, django):\n \"\"\"Inits a dynaconf project\n By default it creates a settings.toml and a .secrets.toml\n for [default|development|staging|testing|production|global] envs.\n\n The format of the files can be changed passing\n --format=yaml|json|ini|py.\n\n This command must run on the project's root folder or you must pass\n --path=/myproject/root/folder.\n\n The --env/-e is deprecated (kept for compatibility but unused)\n \"\"\"\n click.echo(\"⚙️ Configuring your Dynaconf environment\")\n click.echo(\"-\" * 42)\n path = Path(path)\n\n if env is not None:\n click.secho(\n \"⚠️ The --env/-e option is deprecated (kept for\\n\"\n \" compatibility but unused)\\n\",\n fg=\"red\",\n bold=True,\n # stderr=True,\n )\n\n if settings.get(\"create_new_settings\") is True:\n filename = Path(\"config.py\")\n if not filename.exists():\n with open(filename, \"w\") as new_settings:\n new_settings.write(\n constants.INSTANCE_TEMPLATE.format(\n settings_files=[\n f\"settings.{fileformat}\",\n f\".secrets.{fileformat}\",\n ]\n )\n )\n click.echo(\n \"🐍 The file `config.py` was generated.\\n\"\n \" on your code now use `from config import settings`.\\n\"\n \" (you must have `config` importable in your PYTHONPATH).\\n\"\n )\n else:\n click.echo(\n f\"⁉️ You already have a {filename} so it is not going to be\\n\"\n \" generated for you, you will need to create your own \\n\"\n \" settings instance e.g: config.py \\n\"\n \" from dynaconf import Dynaconf \\n\"\n \" settings = Dynaconf(**options)\\n\"\n )\n sys.path.append(str(path))\n set_settings(ctx, \"config.settings\")\n\n env = settings.current_env.lower()\n\n loader = importlib.import_module(f\"dynaconf.loaders.{fileformat}_loader\")\n # Turn foo=bar=zaz in {'foo': 'bar=zaz'}\n env_data = split_vars(_vars)\n _secrets = split_vars(_secrets)\n\n # create placeholder data for every env\n settings_data = {}\n secrets_data = {}\n if env_data:\n settings_data[env] = env_data\n settings_data[\"default\"] = {k: \"a default value\" for k in env_data}\n if _secrets:\n secrets_data[env] = _secrets\n secrets_data[\"default\"] = {k: \"a default value\" for k in _secrets}\n\n if str(path).endswith(\n constants.ALL_EXTENSIONS + (\"py\",)\n ): # pragma: no cover # noqa\n settings_path = path\n secrets_path = path.parent / f\".secrets.{fileformat}\"\n gitignore_path = path.parent / \".gitignore\"\n else:\n if fileformat == \"env\":\n if str(path) in (\".env\", \"./.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\"/.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\".env\"): # pragma: no cover\n settings_path = path.parent / \".env\"\n else:\n settings_path = path / \".env\"\n Path.touch(settings_path)\n secrets_path = None\n else:\n settings_path = path / f\"settings.{fileformat}\"\n secrets_path = path / f\".secrets.{fileformat}\"\n gitignore_path = path / \".gitignore\"\n\n if fileformat in [\"py\", \"env\"] or env == \"main\":\n # for Main env, Python and .env formats writes a single env\n settings_data = settings_data.get(env, {})\n secrets_data = secrets_data.get(env, {})\n\n if not y and settings_path and settings_path.exists(): # pragma: no cover\n click.confirm(\n f\"⁉ {settings_path} exists do you want to overwrite it?\",\n 
abort=True,\n )\n\n if not y and secrets_path and secrets_path.exists(): # pragma: no cover\n click.confirm(\n f\"⁉ {secrets_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if settings_path:\n loader.write(settings_path, settings_data, merge=True)\n click.echo(\n f\"🎛️ {settings_path.name} created to hold your settings.\\n\"\n )\n\n if secrets_path:\n loader.write(secrets_path, secrets_data, merge=True)\n click.echo(f\"🔑 {secrets_path.name} created to hold your secrets.\\n\")\n ignore_line = \".secrets.*\"\n comment = \"\\n# Ignore dynaconf secret files\\n\"\n if not gitignore_path.exists():\n with io.open(str(gitignore_path), \"w\", encoding=ENC) as f:\n f.writelines([comment, ignore_line, \"\\n\"])\n else:\n existing = (\n ignore_line\n in io.open(str(gitignore_path), encoding=ENC).read()\n )\n if not existing: # pragma: no cover\n with io.open(str(gitignore_path), \"a+\", encoding=ENC) as f:\n f.writelines([comment, ignore_line, \"\\n\"])\n\n click.echo(\n f\"🙈 the {secrets_path.name} is also included in `.gitignore` \\n\"\n \" beware to not push your secrets to a public repo \\n\"\n \" or use dynaconf builtin support for Vault Servers.\\n\"\n )\n\n if django: # pragma: no cover\n dj_module, _ = get_module({}, django)\n dj_filename = dj_module.__file__\n if Path(dj_filename).exists():\n click.confirm(\n f\"⁉ {dj_filename} is found do you want to add dynaconf?\",\n abort=True,\n )\n with open(dj_filename, \"a\") as dj_file:\n dj_file.write(constants.DJANGO_PATCH)\n click.echo(\"🎠 Now your Django settings are managed by Dynaconf\")\n else:\n click.echo(\"❌ Django settings file not written.\")\n else:\n click.echo(\n \"🎉 Dynaconf is configured! read more on https://dynaconf.com\\n\"\n \" Use `dynaconf -i config.settings list` to see your settings\\n\"\n )\n\n\[email protected](name=\"list\")\[email protected](\n \"--env\", \"-e\", default=None, help=\"Filters the env to get the values\"\n)\[email protected](\"--key\", \"-k\", default=None, help=\"Filters a single key\")\[email protected](\n \"--more\",\n \"-m\",\n default=None,\n help=\"Pagination more|less style\",\n is_flag=True,\n)\[email protected](\n \"--loader\",\n \"-l\",\n default=None,\n help=\"a loader identifier to filter e.g: toml|yaml\",\n)\[email protected](\n \"--all\",\n \"_all\",\n \"-a\",\n default=False,\n is_flag=True,\n help=\"show dynaconf internal settings?\",\n)\[email protected](\n \"--output\",\n \"-o\",\n type=click.Path(writable=True, dir_okay=False),\n default=None,\n help=\"Filepath to write the listed values as json\",\n)\[email protected](\n \"--output-flat\",\n \"flat\",\n is_flag=True,\n default=False,\n help=\"Output file is flat (do not include [env] name)\",\n)\ndef _list(env, key, more, loader, _all=False, output=None, flat=False):\n \"\"\"Lists all user defined config values\n and if `--all` is passed it also shows dynaconf internal variables.\n \"\"\"\n if env:\n env = env.strip()\n if key:\n key = key.strip()\n if loader:\n loader = loader.strip()\n\n if env:\n settings.setenv(env)\n\n cur_env = settings.current_env.lower()\n\n if cur_env == \"main\":\n flat = True\n\n click.echo(\n click.style(\n f\"Working in {cur_env} environment \",\n bold=True,\n bg=\"bright_blue\",\n fg=\"bright_white\",\n )\n )\n\n if not loader:\n data = settings.as_dict(env=env, internal=_all)\n else:\n identifier = f\"{loader}_{cur_env}\"\n data = settings._loaded_by_loaders.get(identifier, {})\n data = data or settings._loaded_by_loaders.get(loader, {})\n\n # remove to avoid displaying twice\n 
data.pop(\"SETTINGS_MODULE\", None)\n\n def color(_k):\n if _k in dir(default_settings):\n return \"blue\"\n return \"magenta\"\n\n def format_setting(_k, _v):\n key = click.style(_k, bg=color(_k), fg=\"bright_white\")\n data_type = click.style(\n f\"<{type(_v).__name__}>\", bg=\"bright_black\", fg=\"bright_white\"\n )\n value = pprint.pformat(_v)\n return f\"{key}{data_type} {value}\"\n\n if not key:\n datalines = \"\\n\".join(\n format_setting(k, v)\n for k, v in data.items()\n if k not in data.get(\"RENAMED_VARS\", [])\n )\n (click.echo_via_pager if more else click.echo)(datalines)\n if output:\n loaders.write(output, data, env=not flat and cur_env)\n else:\n key = upperfy(key)\n\n try:\n value = settings.get(key, empty)\n except AttributeError:\n value = empty\n\n if value is empty:\n click.echo(click.style(\"Key not found\", bg=\"red\", fg=\"white\"))\n return\n\n click.echo(format_setting(key, value))\n if output:\n loaders.write(output, {key: value}, env=not flat and cur_env)\n\n if env:\n settings.setenv()\n\n\[email protected]()\[email protected](\"to\", required=True, type=click.Choice(WRITERS))\[email protected](\n \"--vars\",\n \"_vars\",\n \"-v\",\n multiple=True,\n default=None,\n help=(\n \"key values to be written \"\n \"e.g: `dynaconf write toml -e NAME=foo -e X=2\"\n ),\n)\[email protected](\n \"--secrets\",\n \"_secrets\",\n \"-s\",\n multiple=True,\n default=None,\n help=(\n \"secret key values to be written in .secrets \"\n \"e.g: `dynaconf write toml -s TOKEN=kdslmflds -s X=2\"\n ),\n)\[email protected](\n \"--path\",\n \"-p\",\n default=CWD,\n help=\"defaults to current directory/settings.{ext}\",\n)\[email protected](\n \"--env\",\n \"-e\",\n default=\"default\",\n help=(\n \"env to write to defaults to DEVELOPMENT for files \"\n \"for external sources like Redis and Vault \"\n \"it will be DYNACONF or the value set in \"\n \"$ENVVAR_PREFIX_FOR_DYNACONF\"\n ),\n)\[email protected](\"-y\", default=False, is_flag=True)\ndef write(to, _vars, _secrets, path, env, y):\n \"\"\"Writes data to specific source\"\"\"\n _vars = split_vars(_vars)\n _secrets = split_vars(_secrets)\n loader = importlib.import_module(f\"dynaconf.loaders.{to}_loader\")\n\n if to in EXTS:\n\n # Lets write to a file\n path = Path(path)\n\n if str(path).endswith(constants.ALL_EXTENSIONS + (\"py\",)):\n settings_path = path\n secrets_path = path.parent / f\".secrets.{to}\"\n else:\n if to == \"env\":\n if str(path) in (\".env\", \"./.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\"/.env\"):\n settings_path = path\n elif str(path).endswith(\".env\"):\n settings_path = path.parent / \".env\"\n else:\n settings_path = path / \".env\"\n Path.touch(settings_path)\n secrets_path = None\n _vars.update(_secrets)\n else:\n settings_path = path / f\"settings.{to}\"\n secrets_path = path / f\".secrets.{to}\"\n\n if (\n _vars and not y and settings_path and settings_path.exists()\n ): # pragma: no cover # noqa\n click.confirm(\n f\"{settings_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if (\n _secrets and not y and secrets_path and secrets_path.exists()\n ): # pragma: no cover # noqa\n click.confirm(\n f\"{secrets_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if to not in [\"py\", \"env\"]:\n if _vars:\n _vars = {env: _vars}\n if _secrets:\n _secrets = {env: _secrets}\n\n if _vars and settings_path:\n loader.write(settings_path, _vars, merge=True)\n click.echo(f\"Data successful written to {settings_path}\")\n\n if _secrets and 
secrets_path:\n loader.write(secrets_path, _secrets, merge=True)\n click.echo(f\"Data successful written to {secrets_path}\")\n\n else: # pragma: no cover\n # lets write to external source\n with settings.using_env(env):\n # make sure we're in the correct environment\n loader.write(settings, _vars, **_secrets)\n click.echo(f\"Data successful written to {to}\")\n\n\[email protected]()\[email protected](\n \"--path\", \"-p\", default=CWD, help=\"defaults to current directory\"\n)\ndef validate(path): # pragma: no cover\n \"\"\"Validates Dynaconf settings based on rules defined in\n dynaconf_validators.toml\"\"\"\n # reads the 'dynaconf_validators.toml' from path\n # for each section register the validator for specific env\n # call validate\n\n path = Path(path)\n\n if not str(path).endswith(\".toml\"):\n path = path / \"dynaconf_validators.toml\"\n\n if not path.exists(): # pragma: no cover # noqa\n click.echo(click.style(f\"{path} not found\", fg=\"white\", bg=\"red\"))\n sys.exit(1)\n\n validation_data = toml.load(open(str(path)))\n\n success = True\n for env, name_data in validation_data.items():\n for name, data in name_data.items():\n if not isinstance(data, dict): # pragma: no cover\n click.echo(\n click.style(\n f\"Invalid rule for parameter '{name}'\",\n fg=\"white\",\n bg=\"yellow\",\n )\n )\n else:\n data.setdefault(\"env\", env)\n click.echo(\n click.style(\n f\"Validating '{name}' with '{data}'\",\n fg=\"white\",\n bg=\"blue\",\n )\n )\n try:\n Validator(name, **data).validate(settings)\n except ValidationError as e:\n click.echo(\n click.style(f\"Error: {e}\", fg=\"white\", bg=\"red\")\n )\n success = False\n\n if success:\n click.echo(click.style(\"Validation success!\", fg=\"white\", bg=\"green\"))\n else:\n click.echo(click.style(\"Validation error!\", fg=\"white\", bg=\"red\"))\n sys.exit(1)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n main()\n",
"path": "dynaconf/cli.py"
}
] | [
{
"content": "import importlib\nimport io\nimport os\nimport pprint\nimport sys\nimport warnings\nimport webbrowser\nfrom contextlib import suppress\nfrom pathlib import Path\n\nfrom dynaconf import constants\nfrom dynaconf import default_settings\nfrom dynaconf import LazySettings\nfrom dynaconf import loaders\nfrom dynaconf import settings as legacy_settings\nfrom dynaconf.loaders.py_loader import get_module\nfrom dynaconf.utils import upperfy\nfrom dynaconf.utils.files import read_file\nfrom dynaconf.utils.functional import empty\nfrom dynaconf.utils.parse_conf import parse_conf_data\nfrom dynaconf.validator import ValidationError\nfrom dynaconf.validator import Validator\nfrom dynaconf.vendor import click\nfrom dynaconf.vendor import toml\n\nos.environ[\"PYTHONIOENCODING\"] = \"utf-8\"\n\nCWD = Path.cwd()\nEXTS = [\"ini\", \"toml\", \"yaml\", \"json\", \"py\", \"env\"]\nWRITERS = [\"ini\", \"toml\", \"yaml\", \"json\", \"py\", \"redis\", \"vault\", \"env\"]\n\nENC = default_settings.ENCODING_FOR_DYNACONF\n\n\ndef set_settings(ctx, instance=None):\n \"\"\"Pick correct settings instance and set it to a global variable.\"\"\"\n\n global settings\n\n settings = None\n\n if instance is not None:\n if ctx.invoked_subcommand in [\"init\"]:\n raise click.UsageError(\n \"-i/--instance option is not allowed for `init` command\"\n )\n sys.path.insert(0, \".\")\n settings = import_settings(instance)\n elif \"FLASK_APP\" in os.environ: # pragma: no cover\n with suppress(ImportError, click.UsageError):\n from flask.cli import ScriptInfo # noqa\n\n flask_app = ScriptInfo().load_app()\n settings = flask_app.config\n click.echo(\n click.style(\n \"Flask app detected\", fg=\"white\", bg=\"bright_black\"\n )\n )\n elif \"DJANGO_SETTINGS_MODULE\" in os.environ: # pragma: no cover\n sys.path.insert(0, os.path.abspath(os.getcwd()))\n try:\n # Django extension v2\n from django.conf import settings # noqa\n\n settings.DYNACONF.configure()\n except AttributeError:\n settings = LazySettings()\n\n if settings is not None:\n click.echo(\n click.style(\n \"Django app detected\", fg=\"white\", bg=\"bright_black\"\n )\n )\n\n if settings is None:\n\n if instance is None and \"--help\" not in click.get_os_args():\n if ctx.invoked_subcommand and ctx.invoked_subcommand not in [\n \"init\",\n ]:\n warnings.warn(\n \"Starting on 3.x the param --instance/-i is now required. 
\"\n \"try passing it `dynaconf -i path.to.settings <cmd>` \"\n \"Example `dynaconf -i config.settings list` \"\n )\n settings = legacy_settings\n else:\n settings = LazySettings(create_new_settings=True)\n else:\n settings = LazySettings()\n\n\ndef import_settings(dotted_path):\n \"\"\"Import settings instance from python dotted path.\n\n Last item in dotted path must be settings instace.\n\n Example: import_settings('path.to.settings')\n \"\"\"\n if \".\" in dotted_path:\n module, name = dotted_path.rsplit(\".\", 1)\n else:\n raise click.UsageError(\n f\"invalid path to settings instance: {dotted_path}\"\n )\n try:\n module = importlib.import_module(module)\n except ImportError as e:\n raise click.UsageError(e)\n try:\n return getattr(module, name)\n except AttributeError as e:\n raise click.UsageError(e)\n\n\ndef split_vars(_vars):\n \"\"\"Splits values like foo=bar=zaz in {'foo': 'bar=zaz'}\"\"\"\n return (\n {\n upperfy(k.strip()): parse_conf_data(\n v.strip(), tomlfy=True, box_settings=settings\n )\n for k, _, v in [item.partition(\"=\") for item in _vars]\n }\n if _vars\n else {}\n )\n\n\ndef read_file_in_root_directory(*names, **kwargs):\n \"\"\"Read a file on root dir.\"\"\"\n return read_file(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf-8\"),\n )\n\n\ndef print_version(ctx, param, value):\n if not value or ctx.resilient_parsing:\n return\n click.echo(read_file_in_root_directory(\"VERSION\"))\n ctx.exit()\n\n\ndef open_docs(ctx, param, value): # pragma: no cover\n if not value or ctx.resilient_parsing:\n return\n url = \"https://dynaconf.com/\"\n webbrowser.open(url, new=2)\n click.echo(f\"{url} opened in browser\")\n ctx.exit()\n\n\ndef show_banner(ctx, param, value):\n \"\"\"Shows dynaconf awesome banner\"\"\"\n if not value or ctx.resilient_parsing:\n return\n set_settings(ctx)\n click.echo(settings.dynaconf_banner)\n click.echo(\"Learn more at: http://github.com/rochacbruno/dynaconf\")\n ctx.exit()\n\n\[email protected]()\[email protected](\n \"--version\",\n is_flag=True,\n callback=print_version,\n expose_value=False,\n is_eager=True,\n help=\"Show dynaconf version\",\n)\[email protected](\n \"--docs\",\n is_flag=True,\n callback=open_docs,\n expose_value=False,\n is_eager=True,\n help=\"Open documentation in browser\",\n)\[email protected](\n \"--banner\",\n is_flag=True,\n callback=show_banner,\n expose_value=False,\n is_eager=True,\n help=\"Show awesome banner\",\n)\[email protected](\n \"--instance\",\n \"-i\",\n default=None,\n envvar=\"INSTANCE_FOR_DYNACONF\",\n help=\"Custom instance of LazySettings\",\n)\[email protected]_context\ndef main(ctx, instance):\n \"\"\"Dynaconf - Command Line Interface\\n\n Documentation: https://dynaconf.com/\n \"\"\"\n set_settings(ctx, instance)\n\n\[email protected]()\[email protected](\n \"--format\", \"fileformat\", \"-f\", default=\"toml\", type=click.Choice(EXTS)\n)\[email protected](\n \"--path\", \"-p\", default=CWD, help=\"defaults to current directory\"\n)\[email protected](\n \"--env\",\n \"-e\",\n default=None,\n help=\"deprecated command (kept for compatibility but unused)\",\n)\[email protected](\n \"--vars\",\n \"_vars\",\n \"-v\",\n multiple=True,\n default=None,\n help=(\n \"extra values to write to settings file \"\n \"e.g: `dynaconf init -v NAME=foo -v X=2`\"\n ),\n)\[email protected](\n \"--secrets\",\n \"_secrets\",\n \"-s\",\n multiple=True,\n default=None,\n help=(\n \"secret key values to be written in .secrets \"\n \"e.g: `dynaconf init -s TOKEN=kdslmflds\"\n 
),\n)\[email protected](\"--wg/--no-wg\", default=True)\[email protected](\"-y\", default=False, is_flag=True)\[email protected](\"--django\", default=os.environ.get(\"DJANGO_SETTINGS_MODULE\"))\[email protected]_context\ndef init(ctx, fileformat, path, env, _vars, _secrets, wg, y, django):\n \"\"\"Inits a dynaconf project\n By default it creates a settings.toml and a .secrets.toml\n for [default|development|staging|testing|production|global] envs.\n\n The format of the files can be changed passing\n --format=yaml|json|ini|py.\n\n This command must run on the project's root folder or you must pass\n --path=/myproject/root/folder.\n\n The --env/-e is deprecated (kept for compatibility but unused)\n \"\"\"\n click.echo(\"⚙️ Configuring your Dynaconf environment\")\n click.echo(\"-\" * 42)\n path = Path(path)\n\n if env is not None:\n click.secho(\n \"⚠️ The --env/-e option is deprecated (kept for\\n\"\n \" compatibility but unused)\\n\",\n fg=\"red\",\n bold=True,\n # stderr=True,\n )\n\n if settings.get(\"create_new_settings\") is True:\n filename = Path(\"config.py\")\n if not filename.exists():\n with open(filename, \"w\") as new_settings:\n new_settings.write(\n constants.INSTANCE_TEMPLATE.format(\n settings_files=[\n f\"settings.{fileformat}\",\n f\".secrets.{fileformat}\",\n ]\n )\n )\n click.echo(\n \"🐍 The file `config.py` was generated.\\n\"\n \" on your code now use `from config import settings`.\\n\"\n \" (you must have `config` importable in your PYTHONPATH).\\n\"\n )\n else:\n click.echo(\n f\"⁉️ You already have a {filename} so it is not going to be\\n\"\n \" generated for you, you will need to create your own \\n\"\n \" settings instance e.g: config.py \\n\"\n \" from dynaconf import Dynaconf \\n\"\n \" settings = Dynaconf(**options)\\n\"\n )\n sys.path.append(str(path))\n set_settings(ctx, \"config.settings\")\n\n env = settings.current_env.lower()\n\n loader = importlib.import_module(f\"dynaconf.loaders.{fileformat}_loader\")\n # Turn foo=bar=zaz in {'foo': 'bar=zaz'}\n env_data = split_vars(_vars)\n _secrets = split_vars(_secrets)\n\n # create placeholder data for every env\n settings_data = {}\n secrets_data = {}\n if env_data:\n settings_data[env] = env_data\n settings_data[\"default\"] = {k: \"a default value\" for k in env_data}\n if _secrets:\n secrets_data[env] = _secrets\n secrets_data[\"default\"] = {k: \"a default value\" for k in _secrets}\n\n if str(path).endswith(\n constants.ALL_EXTENSIONS + (\"py\",)\n ): # pragma: no cover # noqa\n settings_path = path\n secrets_path = path.parent / f\".secrets.{fileformat}\"\n gitignore_path = path.parent / \".gitignore\"\n else:\n if fileformat == \"env\":\n if str(path) in (\".env\", \"./.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\"/.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\".env\"): # pragma: no cover\n settings_path = path.parent / \".env\"\n else:\n settings_path = path / \".env\"\n Path.touch(settings_path)\n secrets_path = None\n else:\n settings_path = path / f\"settings.{fileformat}\"\n secrets_path = path / f\".secrets.{fileformat}\"\n gitignore_path = path / \".gitignore\"\n\n if fileformat in [\"py\", \"env\"] or env == \"main\":\n # for Main env, Python and .env formats writes a single env\n settings_data = settings_data.get(env, {})\n secrets_data = secrets_data.get(env, {})\n\n if not y and settings_path and settings_path.exists(): # pragma: no cover\n click.confirm(\n f\"⁉ {settings_path} exists do you want to overwrite it?\",\n 
abort=True,\n )\n\n if not y and secrets_path and secrets_path.exists(): # pragma: no cover\n click.confirm(\n f\"⁉ {secrets_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if settings_path:\n loader.write(settings_path, settings_data, merge=True)\n click.echo(\n f\"🎛️ {settings_path.name} created to hold your settings.\\n\"\n )\n\n if secrets_path:\n loader.write(secrets_path, secrets_data, merge=True)\n click.echo(f\"🔑 {secrets_path.name} created to hold your secrets.\\n\")\n ignore_line = \".secrets.*\"\n comment = \"\\n# Ignore dynaconf secret files\\n\"\n if not gitignore_path.exists():\n with io.open(str(gitignore_path), \"w\", encoding=ENC) as f:\n f.writelines([comment, ignore_line, \"\\n\"])\n else:\n existing = (\n ignore_line\n in io.open(str(gitignore_path), encoding=ENC).read()\n )\n if not existing: # pragma: no cover\n with io.open(str(gitignore_path), \"a+\", encoding=ENC) as f:\n f.writelines([comment, ignore_line, \"\\n\"])\n\n click.echo(\n f\"🙈 the {secrets_path.name} is also included in `.gitignore` \\n\"\n \" beware to not push your secrets to a public repo \\n\"\n \" or use dynaconf builtin support for Vault Servers.\\n\"\n )\n\n if django: # pragma: no cover\n dj_module, _ = get_module({}, django)\n dj_filename = dj_module.__file__\n if Path(dj_filename).exists():\n click.confirm(\n f\"⁉ {dj_filename} is found do you want to add dynaconf?\",\n abort=True,\n )\n with open(dj_filename, \"a\") as dj_file:\n dj_file.write(constants.DJANGO_PATCH)\n click.echo(\"🎠 Now your Django settings are managed by Dynaconf\")\n else:\n click.echo(\"❌ Django settings file not written.\")\n else:\n click.echo(\n \"🎉 Dynaconf is configured! read more on https://dynaconf.com\\n\"\n \" Use `dynaconf -i config.settings list` to see your settings\\n\"\n )\n\n\[email protected](name=\"list\")\[email protected](\n \"--env\", \"-e\", default=None, help=\"Filters the env to get the values\"\n)\[email protected](\"--key\", \"-k\", default=None, help=\"Filters a single key\")\[email protected](\n \"--more\",\n \"-m\",\n default=None,\n help=\"Pagination more|less style\",\n is_flag=True,\n)\[email protected](\n \"--loader\",\n \"-l\",\n default=None,\n help=\"a loader identifier to filter e.g: toml|yaml\",\n)\[email protected](\n \"--all\",\n \"_all\",\n \"-a\",\n default=False,\n is_flag=True,\n help=\"show dynaconf internal settings?\",\n)\[email protected](\n \"--output\",\n \"-o\",\n type=click.Path(writable=True, dir_okay=False),\n default=None,\n help=\"Filepath to write the listed values as json\",\n)\[email protected](\n \"--output-flat\",\n \"flat\",\n is_flag=True,\n default=False,\n help=\"Output file is flat (do not include [env] name)\",\n)\ndef _list(env, key, more, loader, _all=False, output=None, flat=False):\n \"\"\"Lists all user defined config values\n and if `--all` is passed it also shows dynaconf internal variables.\n \"\"\"\n if env:\n env = env.strip()\n if key:\n key = key.strip()\n if loader:\n loader = loader.strip()\n\n if env:\n settings.setenv(env)\n\n cur_env = settings.current_env.lower()\n\n if cur_env == \"main\":\n flat = True\n\n click.echo(\n click.style(\n f\"Working in {cur_env} environment \",\n bold=True,\n bg=\"bright_blue\",\n fg=\"bright_white\",\n )\n )\n\n if not loader:\n data = settings.as_dict(env=env, internal=_all)\n else:\n identifier = f\"{loader}_{cur_env}\"\n data = settings._loaded_by_loaders.get(identifier, {})\n data = data or settings._loaded_by_loaders.get(loader, {})\n\n # remove to avoid displaying twice\n 
data.pop(\"SETTINGS_MODULE\", None)\n\n def color(_k):\n if _k in dir(default_settings):\n return \"blue\"\n return \"magenta\"\n\n def format_setting(_k, _v):\n key = click.style(_k, bg=color(_k), fg=\"bright_white\")\n data_type = click.style(\n f\"<{type(_v).__name__}>\", bg=\"bright_black\", fg=\"bright_white\"\n )\n value = pprint.pformat(_v)\n return f\"{key}{data_type} {value}\"\n\n if not key:\n datalines = \"\\n\".join(\n format_setting(k, v)\n for k, v in data.items()\n if k not in data.get(\"RENAMED_VARS\", [])\n )\n (click.echo_via_pager if more else click.echo)(datalines)\n if output:\n loaders.write(output, data, env=not flat and cur_env)\n else:\n key = upperfy(key)\n\n try:\n value = settings.get(key, empty)\n except AttributeError:\n value = empty\n\n if value is empty:\n click.echo(click.style(\"Key not found\", bg=\"red\", fg=\"white\"))\n return\n\n click.echo(format_setting(key, value))\n if output:\n loaders.write(output, {key: value}, env=not flat and cur_env)\n\n if env:\n settings.setenv()\n\n\[email protected]()\[email protected](\"to\", required=True, type=click.Choice(WRITERS))\[email protected](\n \"--vars\",\n \"_vars\",\n \"-v\",\n multiple=True,\n default=None,\n help=(\n \"key values to be written \"\n \"e.g: `dynaconf write toml -e NAME=foo -e X=2\"\n ),\n)\[email protected](\n \"--secrets\",\n \"_secrets\",\n \"-s\",\n multiple=True,\n default=None,\n help=(\n \"secret key values to be written in .secrets \"\n \"e.g: `dynaconf write toml -s TOKEN=kdslmflds -s X=2\"\n ),\n)\[email protected](\n \"--path\",\n \"-p\",\n default=CWD,\n help=\"defaults to current directory/settings.{ext}\",\n)\[email protected](\n \"--env\",\n \"-e\",\n default=\"default\",\n help=(\n \"env to write to defaults to DEVELOPMENT for files \"\n \"for external sources like Redis and Vault \"\n \"it will be DYNACONF or the value set in \"\n \"$ENVVAR_PREFIX_FOR_DYNACONF\"\n ),\n)\[email protected](\"-y\", default=False, is_flag=True)\ndef write(to, _vars, _secrets, path, env, y):\n \"\"\"Writes data to specific source\"\"\"\n _vars = split_vars(_vars)\n _secrets = split_vars(_secrets)\n loader = importlib.import_module(f\"dynaconf.loaders.{to}_loader\")\n\n if to in EXTS:\n\n # Lets write to a file\n path = Path(path)\n\n if str(path).endswith(constants.ALL_EXTENSIONS + (\"py\",)):\n settings_path = path\n secrets_path = path.parent / f\".secrets.{to}\"\n else:\n if to == \"env\":\n if str(path) in (\".env\", \"./.env\"): # pragma: no cover\n settings_path = path\n elif str(path).endswith(\"/.env\"):\n settings_path = path\n elif str(path).endswith(\".env\"):\n settings_path = path.parent / \".env\"\n else:\n settings_path = path / \".env\"\n Path.touch(settings_path)\n secrets_path = None\n _vars.update(_secrets)\n else:\n settings_path = path / f\"settings.{to}\"\n secrets_path = path / f\".secrets.{to}\"\n\n if (\n _vars and not y and settings_path and settings_path.exists()\n ): # pragma: no cover # noqa\n click.confirm(\n f\"{settings_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if (\n _secrets and not y and secrets_path and secrets_path.exists()\n ): # pragma: no cover # noqa\n click.confirm(\n f\"{secrets_path} exists do you want to overwrite it?\",\n abort=True,\n )\n\n if to not in [\"py\", \"env\"]:\n if _vars:\n _vars = {env: _vars}\n if _secrets:\n _secrets = {env: _secrets}\n\n if _vars and settings_path:\n loader.write(settings_path, _vars, merge=True)\n click.echo(f\"Data successful written to {settings_path}\")\n\n if _secrets and 
secrets_path:\n loader.write(secrets_path, _secrets, merge=True)\n click.echo(f\"Data successful written to {secrets_path}\")\n\n else: # pragma: no cover\n # lets write to external source\n with settings.using_env(env):\n # make sure we're in the correct environment\n loader.write(settings, _vars, **_secrets)\n click.echo(f\"Data successful written to {to}\")\n\n\[email protected]()\[email protected](\n \"--path\", \"-p\", default=CWD, help=\"defaults to current directory\"\n)\ndef validate(path): # pragma: no cover\n \"\"\"Validates Dynaconf settings based on rules defined in\n dynaconf_validators.toml\"\"\"\n # reads the 'dynaconf_validators.toml' from path\n # for each section register the validator for specific env\n # call validate\n\n path = Path(path)\n\n if not str(path).endswith(\".toml\"):\n path = path / \"dynaconf_validators.toml\"\n\n if not path.exists(): # pragma: no cover # noqa\n click.echo(click.style(f\"{path} not found\", fg=\"white\", bg=\"red\"))\n sys.exit(1)\n\n validation_data = toml.load(open(str(path)))\n\n success = True\n for env, name_data in validation_data.items():\n for name, data in name_data.items():\n if not isinstance(data, dict): # pragma: no cover\n click.echo(\n click.style(\n f\"Invalid rule for parameter '{name}'\",\n fg=\"white\",\n bg=\"yellow\",\n )\n )\n else:\n data.setdefault(\"env\", env)\n click.echo(\n click.style(\n f\"Validating '{name}' with '{data}'\",\n fg=\"white\",\n bg=\"blue\",\n )\n )\n try:\n Validator(name, **data).validate(settings)\n except ValidationError as e:\n click.echo(\n click.style(f\"Error: {e}\", fg=\"white\", bg=\"red\")\n )\n success = False\n\n if success:\n click.echo(click.style(\"Validation success!\", fg=\"white\", bg=\"green\"))\n else:\n click.echo(click.style(\"Validation error!\", fg=\"white\", bg=\"red\"))\n sys.exit(1)\n\n\nif __name__ == \"__main__\": # pragma: no cover\n main()\n",
"path": "dynaconf/cli.py"
}
] | diff --git a/dynaconf/cli.py b/dynaconf/cli.py
index 5bb8316d3..5aae070cc 100644
--- a/dynaconf/cli.py
+++ b/dynaconf/cli.py
@@ -23,6 +23,7 @@
from dynaconf.vendor import click
from dynaconf.vendor import toml
+os.environ["PYTHONIOENCODING"] = "utf-8"
CWD = Path.cwd()
EXTS = ["ini", "toml", "yaml", "json", "py", "env"]
|
nyu-mll__jiant-615 | ${NFS_PROJECT_PREFIX} and ${JIANT_PROJECT_PREFIX}
Do we need two separate sets of environment variables?
We also have ${NFS_DATA_DIR} and ${JIANT_DATA_DIR}. I don't know about other potential users of jiant, but at least for me it's pretty confusing.
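Purely as an illustration of the ambiguity (this snippet is not part of jiant), any consumer of these variables ends up having to fall back from one naming scheme to the other:

```python
import os

# Illustrative only: with two parallel variable sets, a script that needs the
# experiment prefix or data directory has to guess which name is authoritative.
project_prefix = os.environ.get("JIANT_PROJECT_PREFIX") or os.environ.get("NFS_PROJECT_PREFIX")
data_dir = os.environ.get("JIANT_DATA_DIR") or os.environ.get("NFS_DATA_DIR")
print("experiments go to:", project_prefix)
print("data is read from:", data_dir)
```

Consolidating on a single JIANT_* set, as the accompanying patch does, removes that ambiguity.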
| [
{
"content": "\"\"\"Train a multi-task model using AllenNLP\n\nTo debug this, run with -m ipdb:\n\n python -m ipdb main.py --config_file ...\n\"\"\"\n# pylint: disable=no-member\nimport argparse\nimport glob\nimport io\nimport logging as log\nimport os\nimport random\nimport subprocess\nimport sys\nimport time\n\nimport torch\n\nfrom src import evaluate\nfrom src.models import build_model\nfrom src.preprocess import build_tasks\nfrom src.trainer import build_trainer\nfrom src.utils import config\nfrom src.utils.utils import assert_for_log, check_arg_name, load_model_state, maybe_make_dir\n\nlog.basicConfig(format=\"%(asctime)s: %(message)s\", datefmt=\"%m/%d %I:%M:%S %p\", level=log.INFO)\n\n\n# Global notification handler, can be accessed outside main() during exception handling.\nEMAIL_NOTIFIER = None\n\n\ndef handle_arguments(cl_arguments):\n parser = argparse.ArgumentParser(description=\"\")\n # Configuration files\n parser.add_argument(\n \"--config_file\",\n \"-c\",\n type=str,\n nargs=\"+\",\n help=\"Config file(s) (.conf) for model parameters.\",\n )\n parser.add_argument(\n \"--overrides\",\n \"-o\",\n type=str,\n default=None,\n help=\"Parameter overrides, as valid HOCON string.\",\n )\n\n parser.add_argument(\n \"--remote_log\", \"-r\", action=\"store_true\", help=\"If true, enable remote logging on GCP.\"\n )\n\n parser.add_argument(\n \"--notify\", type=str, default=\"\", help=\"Email address for job notifications.\"\n )\n\n parser.add_argument(\n \"--tensorboard\",\n \"-t\",\n action=\"store_true\",\n help=\"If true, will run Tensorboard server in a \"\n \"subprocess, serving on the port given by \"\n \"--tensorboard_port.\",\n )\n parser.add_argument(\"--tensorboard_port\", type=int, default=6006)\n\n return parser.parse_args(cl_arguments)\n\n\ndef setup_target_task_training(args, target_tasks, model, strict):\n \"\"\"\n Saves model states from pretraining if applicable, and\n loads the correct model state for the target task training\n stage.\n\n Parameters\n ----------------\n args: Params object\n target_tasks: list of target Task objects\n mdoel: a MultiTaskModel object\n\n Returns\n ----------------\n task_names_to_avoid_loading: list of strings, if we don't allow for\n use of pretrained target specific module parameters, then this list will\n consist of all the task names so that we avoid loading the\n pretrained parameters. 
Else, it will be an empty list.\n\n \"\"\"\n if args.do_target_task_training and not args.allow_reuse_of_pretraining_parameters:\n # If we're training models for evaluation, which is always done from scratch with a fresh\n # optimizer, we shouldn't load parameters for those models.\n # Usually, there won't be trained parameters to skip, but this can happen if a run is killed\n # during the do_target_task_training phase.\n task_names_to_avoid_loading = [task.name for task in target_tasks]\n else:\n task_names_to_avoid_loading = []\n\n if not args.load_target_train_checkpoint == \"none\":\n # This is to load a particular target train checkpoint.\n log.info(\"Loading existing model from %s...\", args.load_target_train_checkpoint)\n load_model_state(\n model,\n args.load_target_train_checkpoint,\n args.cuda,\n task_names_to_avoid_loading,\n strict=strict,\n )\n else:\n # Look for target train checkpoints (available only if we're restoring from a run that already\n # finished), then look for training checkpoints.\n\n best_path = get_best_checkpoint_path(args.run_dir)\n if best_path:\n load_model_state(\n model, best_path, args.cuda, task_names_to_avoid_loading, strict=strict\n )\n else:\n assert_for_log(\n args.allow_untrained_encoder_parameters, \"No best checkpoint found to evaluate.\"\n )\n\n if args.transfer_paradigm == \"finetune\":\n # Save model so we have a checkpoint to go back to after each\n # task-specific finetune.\n model_state = model.state_dict()\n model_path = os.path.join(args.run_dir, \"model_state_untrained_pre_target_train.th\")\n torch.save(model_state, model_path)\n\n log.warning(\"Evaluating untrained encoder parameters!\")\n return task_names_to_avoid_loading\n\n\ndef check_configurations(args, pretrain_tasks, target_tasks):\n \"\"\"\n Checks configurations for any obvious logical flaws\n and that necessary parameters are set for each step -\n throws asserts and exits if found.\n\n Parameters\n ----------------\n args: Params object\n pretrain_tasks: list of pretraining Task objects\n target_tasks: list of target task training Task objects\n\n Returns\n ----------------\n None\n \"\"\"\n steps_log = io.StringIO()\n if any([t.val_metric_decreases for t in pretrain_tasks]) and any(\n [not t.val_metric_decreases for t in pretrain_tasks]\n ):\n log.warn(\"\\tMixing training tasks with increasing and decreasing val metrics!\")\n\n if args.load_target_train_checkpoint != \"none\":\n assert_for_log(\n os.path.exists(args.load_target_train_checkpoint),\n \"Error: Attempting to load model from non-existent path: [%s]\"\n % args.load_target_train_checkpoint,\n )\n assert_for_log(\n not args.do_pretrain,\n \"Error: Attempting to train a model and then replace that model with one from a checkpoint.\",\n )\n steps_log.write(\"Loading model from path: %s \\n\" % args.load_target_train_checkpoint)\n\n assert_for_log(\n args.transfer_paradigm in [\"finetune\", \"frozen\"],\n \"Transfer paradigm %s not supported!\" % args.transfer_paradigm,\n )\n\n if args.do_pretrain:\n assert_for_log(\n args.pretrain_tasks != \"none\",\n \"Error: Must specify at least one pretraining task: [%s]\" % args.pretrain_tasks,\n )\n steps_log.write(\"Training model on tasks: %s \\n\" % args.pretrain_tasks)\n\n if args.do_target_task_training:\n assert_for_log(\n args.target_tasks != \"none\",\n \"Error: Must specify at least one target task: [%s]\" % args.target_tasks,\n )\n steps_log.write(\"Re-training model for individual target tasks \\n\")\n assert_for_log(\n 
len(set(pretrain_tasks).intersection(target_tasks)) == 0\n or args.allow_reuse_of_pretraining_parameters\n or args.do_pretrain == 0,\n \"If you're pretraining on a task you plan to reuse as a target task, set\\n\"\n \"allow_reuse_of_pretraining_parameters = 1 (risky), or train in two steps:\\n\"\n \"train with do_pretrain = 1, do_target_task_training = 0, stop, and restart with\\n\"\n \"do_pretrain = 0 and do_target_task_training = 1.\",\n )\n if args.do_full_eval:\n assert_for_log(\n args.target_tasks != \"none\",\n \"Error: Must specify at least one target task: [%s]\" % args.target_tasks,\n )\n steps_log.write(\"Evaluating model on tasks: %s \\n\" % args.target_tasks)\n\n log.info(\"Will run the following steps:\\n%s\", steps_log.getvalue())\n steps_log.close()\n\n\ndef _log_git_info():\n try:\n log.info(\"Waiting on git info....\")\n c = subprocess.run(\n [\"git\", \"rev-parse\", \"--abbrev-ref\", \"HEAD\"], timeout=10, stdout=subprocess.PIPE\n )\n git_branch_name = c.stdout.decode().strip()\n log.info(\"Git branch: %s\", git_branch_name)\n c = subprocess.run([\"git\", \"rev-parse\", \"HEAD\"], timeout=10, stdout=subprocess.PIPE)\n git_sha = c.stdout.decode().strip()\n log.info(\"Git SHA: %s\", git_sha)\n except subprocess.TimeoutExpired as e:\n log.exception(e)\n log.warn(\"Git info not found. Moving right along...\")\n\n\ndef _run_background_tensorboard(logdir, port):\n \"\"\"Run a TensorBoard server in the background.\"\"\"\n import atexit\n\n tb_args = [\"tensorboard\", \"--logdir\", logdir, \"--port\", str(port)]\n log.info(\"Starting TensorBoard server on port %d ...\", port)\n tb_process = subprocess.Popen(tb_args)\n log.info(\"TensorBoard process: %d\", tb_process.pid)\n\n def _kill_tb_child():\n log.info(\"Shutting down TensorBoard server on port %d ...\", port)\n tb_process.terminate()\n\n atexit.register(_kill_tb_child)\n\n\n# TODO(Yada): Move logic for checkpointing finetuned vs frozen pretrained tasks\n# from here to trainer.py.\n\n\ndef get_best_checkpoint_path(run_dir):\n \"\"\" Look in run_dir for model checkpoint to load.\n Hierarchy is\n 1) best checkpoint from target_task_training\n 2) best checkpoint from pretraining\n 3) checkpoint created from before any target task training\n 4) nothing found (empty string) \"\"\"\n target_task_best = glob.glob(os.path.join(run_dir, \"model_state_target_train_best.th\"))\n\n if len(target_task_best) > 0:\n assert_for_log(len(target_task_best) == 1, \"Too many best checkpoints. Something is wrong.\")\n return target_task_best[0]\n macro_best = glob.glob(os.path.join(run_dir, \"model_state_pretrain_epoch_*.best_macro.th\"))\n if len(macro_best) > 0:\n assert_for_log(len(macro_best) == 1, \"Too many best checkpoints. Something is wrong.\")\n return macro_best[0]\n\n pre_target_train = glob.glob(os.path.join(run_dir, \"model_state_untrained_pre_target_train.th\"))\n\n if len(pre_target_train) > 0:\n assert_for_log(len(pre_target_train) == 1, \"Too many best checkpoints. 
Something is wrong.\")\n return pre_target_train[0]\n\n return \"\"\n\n\ndef evaluate_and_write(args, model, tasks, splits_to_write):\n \"\"\" Evaluate a model on dev and/or test, then write predictions \"\"\"\n val_results, val_preds = evaluate.evaluate(\n model, tasks, args.batch_size, args.cuda, \"val\")\n if 'val' in splits_to_write:\n evaluate.write_preds(tasks, val_preds, args.run_dir, 'val',\n strict_glue_format=args.write_strict_glue_format)\n if 'test' in splits_to_write:\n _, te_preds = evaluate.evaluate(model, tasks, args.batch_size, args.cuda, \"test\")\n evaluate.write_preds(tasks, te_preds, args.run_dir, 'test',\n strict_glue_format=args.write_strict_glue_format)\n run_name = args.get(\"run_name\", os.path.basename(args.run_dir))\n\n results_tsv = os.path.join(args.exp_dir, \"results.tsv\")\n log.info(\"Writing results for split 'val' to %s\", results_tsv)\n evaluate.write_results(val_results, results_tsv, run_name=run_name)\n\n\n\ndef initial_setup(args, cl_args):\n \"\"\"\n Sets up email hook, creating seed, and cuda settings.\n\n Parameters\n ----------------\n args: Params object\n cl_args: list of arguments\n\n Returns\n ----------------\n tasks: list of Task objects\n pretrain_tasks: list of pretraining tasks\n target_tasks: list of target tasks\n vocab: list of vocab\n word_embs: loaded word embeddings, may be None if args.word_embs = none\n model: a MultiTaskModel object\n\n \"\"\"\n output = io.StringIO()\n maybe_make_dir(args.project_dir) # e.g. /nfs/jsalt/exp/$HOSTNAME\n maybe_make_dir(args.exp_dir) # e.g. <project_dir>/jiant-demo\n maybe_make_dir(args.run_dir) # e.g. <project_dir>/jiant-demo/sst\n log.getLogger().addHandler(log.FileHandler(args.local_log_path))\n\n if cl_args.remote_log:\n from src.utils import gcp\n\n gcp.configure_remote_logging(args.remote_log_name)\n\n if cl_args.notify:\n from src.utils import emails\n\n global EMAIL_NOTIFIER\n log.info(\"Registering email notifier for %s\", cl_args.notify)\n EMAIL_NOTIFIER = emails.get_notifier(cl_args.notify, args)\n\n if EMAIL_NOTIFIER:\n EMAIL_NOTIFIER(body=\"Starting run.\", prefix=\"\")\n\n _log_git_info()\n\n log.info(\"Parsed args: \\n%s\", args)\n\n config_file = os.path.join(args.run_dir, \"params.conf\")\n config.write_params(args, config_file)\n log.info(\"Saved config to %s\", config_file)\n\n seed = random.randint(1, 10000) if args.random_seed < 0 else args.random_seed\n random.seed(seed)\n torch.manual_seed(seed)\n log.info(\"Using random seed %d\", seed)\n if args.cuda >= 0:\n try:\n if not torch.cuda.is_available():\n raise EnvironmentError(\"CUDA is not available, or not detected\" \" by PyTorch.\")\n log.info(\"Using GPU %d\", args.cuda)\n torch.cuda.set_device(args.cuda)\n torch.cuda.manual_seed_all(seed)\n except Exception:\n log.warning(\n \"GPU access failed. You might be using a CPU-only installation of PyTorch. 
Falling back to CPU.\"\n )\n args.cuda = -1\n\n return args, seed\n\n\ndef main(cl_arguments):\n \"\"\" Train a model for multitask-training.\"\"\"\n cl_args = handle_arguments(cl_arguments)\n args = config.params_from_file(cl_args.config_file, cl_args.overrides)\n # Check for deprecated arg names\n check_arg_name(args)\n args, seed = initial_setup(args, cl_args)\n # Load tasks\n log.info(\"Loading tasks...\")\n start_time = time.time()\n pretrain_tasks, target_tasks, vocab, word_embs = build_tasks(args)\n tasks = sorted(set(pretrain_tasks + target_tasks), key=lambda x: x.name)\n log.info(\"\\tFinished loading tasks in %.3fs\", time.time() - start_time)\n log.info(\"\\t Tasks: {}\".format([task.name for task in tasks]))\n\n # Build model\n log.info(\"Building model...\")\n start_time = time.time()\n model = build_model(args, vocab, word_embs, tasks)\n log.info(\"\\tFinished building model in %.3fs\", time.time() - start_time)\n\n # Start Tensorboard if requested\n if cl_args.tensorboard:\n tb_logdir = os.path.join(args.run_dir, \"tensorboard\")\n _run_background_tensorboard(tb_logdir, cl_args.tensorboard_port)\n\n check_configurations(args, pretrain_tasks, target_tasks)\n\n if args.do_pretrain:\n # Train on pretrain tasks\n log.info(\"Training...\")\n stop_metric = pretrain_tasks[0].val_metric if len(pretrain_tasks) == 1 else \"macro_avg\"\n should_decrease = (\n pretrain_tasks[0].val_metric_decreases if len(pretrain_tasks) == 1 else False\n )\n trainer, _, opt_params, schd_params = build_trainer(\n args, [], model, args.run_dir, should_decrease, phase=\"pretrain\"\n )\n to_train = [(n, p) for n, p in model.named_parameters() if p.requires_grad]\n _ = trainer.train(\n pretrain_tasks,\n stop_metric,\n args.batch_size,\n args.weighting_method,\n args.scaling_method,\n to_train,\n opt_params,\n schd_params,\n args.shared_optimizer,\n args.load_model,\n phase=\"pretrain\",\n )\n\n # For checkpointing logic\n if not args.do_target_task_training:\n log.info(\n \"In strict mode because do_target_task_training is off. \"\n \"Will crash if any tasks are missing from the checkpoint.\"\n )\n strict = True\n else:\n strict = False\n\n if args.do_target_task_training:\n # Train on target tasks\n task_names_to_avoid_loading = setup_target_task_training(args, target_tasks, model, strict)\n if args.transfer_paradigm == \"frozen\":\n # might be empty if elmo = 0. scalar_mix_0 should always be\n # pretrain scalars\n elmo_scalars = [\n (n, p)\n for n, p in model.named_parameters()\n if \"scalar_mix\" in n and \"scalar_mix_0\" not in n\n ]\n # Fails when sep_embs_for_skip is 0 and elmo_scalars has nonzero\n # length.\n assert_for_log(\n not elmo_scalars or args.sep_embs_for_skip,\n \"Error: ELMo scalars loaded and will be updated in do_target_task_training but \"\n \"they should not be updated! Check sep_embs_for_skip flag or make an issue.\",\n )\n for task in target_tasks:\n # Skip mnli-diagnostic\n # This has to be handled differently than probing tasks because probing tasks require the \"is_probing_task\"\n # to be set to True. 
For mnli-diagnostic this flag will be False because it is part of GLUE and\n # \"is_probing_task is global flag specific to a run, not to a task.\n if task.name == \"mnli-diagnostic\":\n continue\n\n if args.transfer_paradigm == \"finetune\":\n # Train both the task specific models as well as sentence\n # encoder.\n to_train = [(n, p) for n, p in model.named_parameters() if p.requires_grad]\n else: # args.transfer_paradigm == \"frozen\":\n # Only train task-specific module\n pred_module = getattr(model, \"%s_mdl\" % task.name)\n to_train = [(n, p) for n, p in pred_module.named_parameters() if p.requires_grad]\n to_train += elmo_scalars\n\n trainer, _, opt_params, schd_params = build_trainer(\n args,\n [task.name, \"target_train\"],\n model,\n args.run_dir,\n task.val_metric_decreases,\n phase=\"target_train\",\n )\n _ = trainer.train(\n tasks=[task],\n stop_metric=task.val_metric,\n batch_size=args.batch_size,\n weighting_method=args.weighting_method,\n scaling_method=args.scaling_method,\n train_params=to_train,\n optimizer_params=opt_params,\n scheduler_params=schd_params,\n shared_optimizer=args.shared_optimizer,\n load_model=False,\n phase=\"target_train\",\n )\n\n # Now that we've trained a model, revert to the normal checkpoint\n # logic for this task.\n if task.name in task_names_to_avoid_loading:\n task_names_to_avoid_loading.remove(task.name)\n\n # The best checkpoint will accumulate the best parameters for each\n # task.\n layer_path = os.path.join(args.run_dir, \"model_state_target_train_best.th\")\n\n if args.transfer_paradigm == \"finetune\":\n # Save this fine-tune model with a task specific name.\n finetune_path = os.path.join(args.run_dir, \"model_state_%s_best.th\" % task.name)\n os.rename(layer_path, finetune_path)\n\n # Reload the original best model from before target-task\n # training.\n pre_finetune_path = get_best_checkpoint_path(args.run_dir)\n load_model_state(\n model, pre_finetune_path, args.cuda, skip_task_models=[], strict=strict\n )\n else: # args.transfer_paradigm == \"frozen\":\n # Load the current overall best model.\n # Save the best checkpoint from that target task training to be\n # specific to that target task.\n load_model_state(\n model,\n layer_path,\n args.cuda,\n strict=strict,\n skip_task_models=task_names_to_avoid_loading,\n )\n\n if args.do_full_eval:\n # Evaluate\n log.info(\"Evaluating...\")\n splits_to_write = evaluate.parse_write_preds_arg(args.write_preds)\n if args.transfer_paradigm == \"finetune\":\n for task in target_tasks:\n if task.name == \"mnli-diagnostic\":\n # we'll load mnli-diagnostic during mnli\n continue\n # Special checkpointing logic here since we train the sentence encoder\n # and have a best set of sent encoder model weights per task.\n finetune_path = os.path.join(args.run_dir, \"model_state_%s_best.th\" % task.name)\n if os.path.exists(finetune_path):\n ckpt_path = finetune_path\n else:\n ckpt_path = get_best_checkpoint_path(args.run_dir)\n load_model_state(model, ckpt_path, args.cuda, skip_task_models=[], strict=strict)\n\n tasks = [task]\n if task.name == \"mnli\":\n tasks += [t for t in target_tasks if t.name == \"mnli-diagnostic\"]\n evaluate_and_write(args, model, tasks, splits_to_write)\n\n elif args.transfer_paradigm == \"frozen\":\n # Don't do any special checkpointing logic here\n # since model already has all the trained task specific modules.\n evaluate_and_write(args, model, target_tasks, splits_to_write)\n\n log.info(\"Done!\")\n\n\nif __name__ == \"__main__\":\n try:\n main(sys.argv[1:])\n if 
EMAIL_NOTIFIER is not None:\n EMAIL_NOTIFIER(body=\"Run completed successfully!\", prefix=\"\")\n except BaseException as e:\n # Make sure we log the trace for any crashes before exiting.\n log.exception(\"Fatal error in main():\")\n if EMAIL_NOTIFIER is not None:\n import traceback\n\n tb_lines = traceback.format_exception(*sys.exc_info())\n EMAIL_NOTIFIER(body=\"\".join(tb_lines), prefix=\"FAILED\")\n raise e # re-raise exception, in case debugger is attached.\n sys.exit(1)\n sys.exit(0)\n",
"path": "main.py"
}
] | [
{
"content": "\"\"\"Train a multi-task model using AllenNLP\n\nTo debug this, run with -m ipdb:\n\n python -m ipdb main.py --config_file ...\n\"\"\"\n# pylint: disable=no-member\nimport argparse\nimport glob\nimport io\nimport logging as log\nimport os\nimport random\nimport subprocess\nimport sys\nimport time\n\nimport torch\n\nfrom src import evaluate\nfrom src.models import build_model\nfrom src.preprocess import build_tasks\nfrom src.trainer import build_trainer\nfrom src.utils import config\nfrom src.utils.utils import assert_for_log, check_arg_name, load_model_state, maybe_make_dir\n\nlog.basicConfig(format=\"%(asctime)s: %(message)s\", datefmt=\"%m/%d %I:%M:%S %p\", level=log.INFO)\n\n\n# Global notification handler, can be accessed outside main() during exception handling.\nEMAIL_NOTIFIER = None\n\n\ndef handle_arguments(cl_arguments):\n parser = argparse.ArgumentParser(description=\"\")\n # Configuration files\n parser.add_argument(\n \"--config_file\",\n \"-c\",\n type=str,\n nargs=\"+\",\n default=\"config/defaults.conf\",\n help=\"Config file(s) (.conf) for model parameters.\",\n )\n parser.add_argument(\n \"--overrides\",\n \"-o\",\n type=str,\n default=None,\n help=\"Parameter overrides, as valid HOCON string.\",\n )\n\n parser.add_argument(\n \"--remote_log\", \"-r\", action=\"store_true\", help=\"If true, enable remote logging on GCP.\"\n )\n\n parser.add_argument(\n \"--notify\", type=str, default=\"\", help=\"Email address for job notifications.\"\n )\n\n parser.add_argument(\n \"--tensorboard\",\n \"-t\",\n action=\"store_true\",\n help=\"If true, will run Tensorboard server in a \"\n \"subprocess, serving on the port given by \"\n \"--tensorboard_port.\",\n )\n parser.add_argument(\"--tensorboard_port\", type=int, default=6006)\n\n return parser.parse_args(cl_arguments)\n\n\ndef setup_target_task_training(args, target_tasks, model, strict):\n \"\"\"\n Saves model states from pretraining if applicable, and\n loads the correct model state for the target task training\n stage.\n\n Parameters\n ----------------\n args: Params object\n target_tasks: list of target Task objects\n mdoel: a MultiTaskModel object\n\n Returns\n ----------------\n task_names_to_avoid_loading: list of strings, if we don't allow for\n use of pretrained target specific module parameters, then this list will\n consist of all the task names so that we avoid loading the\n pretrained parameters. 
Else, it will be an empty list.\n\n \"\"\"\n if args.do_target_task_training and not args.allow_reuse_of_pretraining_parameters:\n # If we're training models for evaluation, which is always done from scratch with a fresh\n # optimizer, we shouldn't load parameters for those models.\n # Usually, there won't be trained parameters to skip, but this can happen if a run is killed\n # during the do_target_task_training phase.\n task_names_to_avoid_loading = [task.name for task in target_tasks]\n else:\n task_names_to_avoid_loading = []\n\n if not args.load_target_train_checkpoint == \"none\":\n # This is to load a particular target train checkpoint.\n log.info(\"Loading existing model from %s...\", args.load_target_train_checkpoint)\n load_model_state(\n model,\n args.load_target_train_checkpoint,\n args.cuda,\n task_names_to_avoid_loading,\n strict=strict,\n )\n else:\n # Look for target train checkpoints (available only if we're restoring from a run that already\n # finished), then look for training checkpoints.\n\n best_path = get_best_checkpoint_path(args.run_dir)\n if best_path:\n load_model_state(\n model, best_path, args.cuda, task_names_to_avoid_loading, strict=strict\n )\n else:\n assert_for_log(\n args.allow_untrained_encoder_parameters, \"No best checkpoint found to evaluate.\"\n )\n\n if args.transfer_paradigm == \"finetune\":\n # Save model so we have a checkpoint to go back to after each\n # task-specific finetune.\n model_state = model.state_dict()\n model_path = os.path.join(args.run_dir, \"model_state_untrained_pre_target_train.th\")\n torch.save(model_state, model_path)\n\n log.warning(\"Evaluating untrained encoder parameters!\")\n return task_names_to_avoid_loading\n\n\ndef check_configurations(args, pretrain_tasks, target_tasks):\n \"\"\"\n Checks configurations for any obvious logical flaws\n and that necessary parameters are set for each step -\n throws asserts and exits if found.\n\n Parameters\n ----------------\n args: Params object\n pretrain_tasks: list of pretraining Task objects\n target_tasks: list of target task training Task objects\n\n Returns\n ----------------\n None\n \"\"\"\n steps_log = io.StringIO()\n if any([t.val_metric_decreases for t in pretrain_tasks]) and any(\n [not t.val_metric_decreases for t in pretrain_tasks]\n ):\n log.warn(\"\\tMixing training tasks with increasing and decreasing val metrics!\")\n\n if args.load_target_train_checkpoint != \"none\":\n assert_for_log(\n os.path.exists(args.load_target_train_checkpoint),\n \"Error: Attempting to load model from non-existent path: [%s]\"\n % args.load_target_train_checkpoint,\n )\n assert_for_log(\n not args.do_pretrain,\n \"Error: Attempting to train a model and then replace that model with one from a checkpoint.\",\n )\n steps_log.write(\"Loading model from path: %s \\n\" % args.load_target_train_checkpoint)\n\n assert_for_log(\n args.transfer_paradigm in [\"finetune\", \"frozen\"],\n \"Transfer paradigm %s not supported!\" % args.transfer_paradigm,\n )\n\n if args.do_pretrain:\n assert_for_log(\n args.pretrain_tasks != \"none\",\n \"Error: Must specify at least one pretraining task: [%s]\" % args.pretrain_tasks,\n )\n steps_log.write(\"Training model on tasks: %s \\n\" % args.pretrain_tasks)\n\n if args.do_target_task_training:\n assert_for_log(\n args.target_tasks != \"none\",\n \"Error: Must specify at least one target task: [%s]\" % args.target_tasks,\n )\n steps_log.write(\"Re-training model for individual target tasks \\n\")\n assert_for_log(\n 
len(set(pretrain_tasks).intersection(target_tasks)) == 0\n or args.allow_reuse_of_pretraining_parameters\n or args.do_pretrain == 0,\n \"If you're pretraining on a task you plan to reuse as a target task, set\\n\"\n \"allow_reuse_of_pretraining_parameters = 1 (risky), or train in two steps:\\n\"\n \"train with do_pretrain = 1, do_target_task_training = 0, stop, and restart with\\n\"\n \"do_pretrain = 0 and do_target_task_training = 1.\",\n )\n if args.do_full_eval:\n assert_for_log(\n args.target_tasks != \"none\",\n \"Error: Must specify at least one target task: [%s]\" % args.target_tasks,\n )\n steps_log.write(\"Evaluating model on tasks: %s \\n\" % args.target_tasks)\n\n log.info(\"Will run the following steps:\\n%s\", steps_log.getvalue())\n steps_log.close()\n\n\ndef _log_git_info():\n try:\n log.info(\"Waiting on git info....\")\n c = subprocess.run(\n [\"git\", \"rev-parse\", \"--abbrev-ref\", \"HEAD\"], timeout=10, stdout=subprocess.PIPE\n )\n git_branch_name = c.stdout.decode().strip()\n log.info(\"Git branch: %s\", git_branch_name)\n c = subprocess.run([\"git\", \"rev-parse\", \"HEAD\"], timeout=10, stdout=subprocess.PIPE)\n git_sha = c.stdout.decode().strip()\n log.info(\"Git SHA: %s\", git_sha)\n except subprocess.TimeoutExpired as e:\n log.exception(e)\n log.warn(\"Git info not found. Moving right along...\")\n\n\ndef _run_background_tensorboard(logdir, port):\n \"\"\"Run a TensorBoard server in the background.\"\"\"\n import atexit\n\n tb_args = [\"tensorboard\", \"--logdir\", logdir, \"--port\", str(port)]\n log.info(\"Starting TensorBoard server on port %d ...\", port)\n tb_process = subprocess.Popen(tb_args)\n log.info(\"TensorBoard process: %d\", tb_process.pid)\n\n def _kill_tb_child():\n log.info(\"Shutting down TensorBoard server on port %d ...\", port)\n tb_process.terminate()\n\n atexit.register(_kill_tb_child)\n\n\n# TODO(Yada): Move logic for checkpointing finetuned vs frozen pretrained tasks\n# from here to trainer.py.\n\n\ndef get_best_checkpoint_path(run_dir):\n \"\"\" Look in run_dir for model checkpoint to load.\n Hierarchy is\n 1) best checkpoint from target_task_training\n 2) best checkpoint from pretraining\n 3) checkpoint created from before any target task training\n 4) nothing found (empty string) \"\"\"\n target_task_best = glob.glob(os.path.join(run_dir, \"model_state_target_train_best.th\"))\n\n if len(target_task_best) > 0:\n assert_for_log(len(target_task_best) == 1, \"Too many best checkpoints. Something is wrong.\")\n return target_task_best[0]\n macro_best = glob.glob(os.path.join(run_dir, \"model_state_pretrain_epoch_*.best_macro.th\"))\n if len(macro_best) > 0:\n assert_for_log(len(macro_best) == 1, \"Too many best checkpoints. Something is wrong.\")\n return macro_best[0]\n\n pre_target_train = glob.glob(os.path.join(run_dir, \"model_state_untrained_pre_target_train.th\"))\n\n if len(pre_target_train) > 0:\n assert_for_log(len(pre_target_train) == 1, \"Too many best checkpoints. 
Something is wrong.\")\n return pre_target_train[0]\n\n return \"\"\n\n\ndef evaluate_and_write(args, model, tasks, splits_to_write):\n \"\"\" Evaluate a model on dev and/or test, then write predictions \"\"\"\n val_results, val_preds = evaluate.evaluate(\n model, tasks, args.batch_size, args.cuda, \"val\")\n if 'val' in splits_to_write:\n evaluate.write_preds(tasks, val_preds, args.run_dir, 'val',\n strict_glue_format=args.write_strict_glue_format)\n if 'test' in splits_to_write:\n _, te_preds = evaluate.evaluate(model, tasks, args.batch_size, args.cuda, \"test\")\n evaluate.write_preds(tasks, te_preds, args.run_dir, 'test',\n strict_glue_format=args.write_strict_glue_format)\n run_name = args.get(\"run_name\", os.path.basename(args.run_dir))\n\n results_tsv = os.path.join(args.exp_dir, \"results.tsv\")\n log.info(\"Writing results for split 'val' to %s\", results_tsv)\n evaluate.write_results(val_results, results_tsv, run_name=run_name)\n\n\n\ndef initial_setup(args, cl_args):\n \"\"\"\n Sets up email hook, creating seed, and cuda settings.\n\n Parameters\n ----------------\n args: Params object\n cl_args: list of arguments\n\n Returns\n ----------------\n tasks: list of Task objects\n pretrain_tasks: list of pretraining tasks\n target_tasks: list of target tasks\n vocab: list of vocab\n word_embs: loaded word embeddings, may be None if args.word_embs = none\n model: a MultiTaskModel object\n\n \"\"\"\n output = io.StringIO()\n maybe_make_dir(args.project_dir) # e.g. /nfs/jsalt/exp/$HOSTNAME\n maybe_make_dir(args.exp_dir) # e.g. <project_dir>/jiant-demo\n maybe_make_dir(args.run_dir) # e.g. <project_dir>/jiant-demo/sst\n log.getLogger().addHandler(log.FileHandler(args.local_log_path))\n\n if cl_args.remote_log:\n from src.utils import gcp\n\n gcp.configure_remote_logging(args.remote_log_name)\n\n if cl_args.notify:\n from src.utils import emails\n\n global EMAIL_NOTIFIER\n log.info(\"Registering email notifier for %s\", cl_args.notify)\n EMAIL_NOTIFIER = emails.get_notifier(cl_args.notify, args)\n\n if EMAIL_NOTIFIER:\n EMAIL_NOTIFIER(body=\"Starting run.\", prefix=\"\")\n\n _log_git_info()\n\n log.info(\"Parsed args: \\n%s\", args)\n\n config_file = os.path.join(args.run_dir, \"params.conf\")\n config.write_params(args, config_file)\n log.info(\"Saved config to %s\", config_file)\n\n seed = random.randint(1, 10000) if args.random_seed < 0 else args.random_seed\n random.seed(seed)\n torch.manual_seed(seed)\n log.info(\"Using random seed %d\", seed)\n if args.cuda >= 0:\n try:\n if not torch.cuda.is_available():\n raise EnvironmentError(\"CUDA is not available, or not detected\" \" by PyTorch.\")\n log.info(\"Using GPU %d\", args.cuda)\n torch.cuda.set_device(args.cuda)\n torch.cuda.manual_seed_all(seed)\n except Exception:\n log.warning(\n \"GPU access failed. You might be using a CPU-only installation of PyTorch. 
Falling back to CPU.\"\n )\n args.cuda = -1\n\n return args, seed\n\n\ndef main(cl_arguments):\n \"\"\" Train a model for multitask-training.\"\"\"\n cl_args = handle_arguments(cl_arguments)\n args = config.params_from_file(cl_args.config_file, cl_args.overrides)\n # Check for deprecated arg names\n check_arg_name(args)\n args, seed = initial_setup(args, cl_args)\n # Load tasks\n log.info(\"Loading tasks...\")\n start_time = time.time()\n pretrain_tasks, target_tasks, vocab, word_embs = build_tasks(args)\n tasks = sorted(set(pretrain_tasks + target_tasks), key=lambda x: x.name)\n log.info(\"\\tFinished loading tasks in %.3fs\", time.time() - start_time)\n log.info(\"\\t Tasks: {}\".format([task.name for task in tasks]))\n\n # Build model\n log.info(\"Building model...\")\n start_time = time.time()\n model = build_model(args, vocab, word_embs, tasks)\n log.info(\"\\tFinished building model in %.3fs\", time.time() - start_time)\n\n # Start Tensorboard if requested\n if cl_args.tensorboard:\n tb_logdir = os.path.join(args.run_dir, \"tensorboard\")\n _run_background_tensorboard(tb_logdir, cl_args.tensorboard_port)\n\n check_configurations(args, pretrain_tasks, target_tasks)\n\n if args.do_pretrain:\n # Train on pretrain tasks\n log.info(\"Training...\")\n stop_metric = pretrain_tasks[0].val_metric if len(pretrain_tasks) == 1 else \"macro_avg\"\n should_decrease = (\n pretrain_tasks[0].val_metric_decreases if len(pretrain_tasks) == 1 else False\n )\n trainer, _, opt_params, schd_params = build_trainer(\n args, [], model, args.run_dir, should_decrease, phase=\"pretrain\"\n )\n to_train = [(n, p) for n, p in model.named_parameters() if p.requires_grad]\n _ = trainer.train(\n pretrain_tasks,\n stop_metric,\n args.batch_size,\n args.weighting_method,\n args.scaling_method,\n to_train,\n opt_params,\n schd_params,\n args.shared_optimizer,\n args.load_model,\n phase=\"pretrain\",\n )\n\n # For checkpointing logic\n if not args.do_target_task_training:\n log.info(\n \"In strict mode because do_target_task_training is off. \"\n \"Will crash if any tasks are missing from the checkpoint.\"\n )\n strict = True\n else:\n strict = False\n\n if args.do_target_task_training:\n # Train on target tasks\n task_names_to_avoid_loading = setup_target_task_training(args, target_tasks, model, strict)\n if args.transfer_paradigm == \"frozen\":\n # might be empty if elmo = 0. scalar_mix_0 should always be\n # pretrain scalars\n elmo_scalars = [\n (n, p)\n for n, p in model.named_parameters()\n if \"scalar_mix\" in n and \"scalar_mix_0\" not in n\n ]\n # Fails when sep_embs_for_skip is 0 and elmo_scalars has nonzero\n # length.\n assert_for_log(\n not elmo_scalars or args.sep_embs_for_skip,\n \"Error: ELMo scalars loaded and will be updated in do_target_task_training but \"\n \"they should not be updated! Check sep_embs_for_skip flag or make an issue.\",\n )\n for task in target_tasks:\n # Skip mnli-diagnostic\n # This has to be handled differently than probing tasks because probing tasks require the \"is_probing_task\"\n # to be set to True. 
For mnli-diagnostic this flag will be False because it is part of GLUE and\n # \"is_probing_task is global flag specific to a run, not to a task.\n if task.name == \"mnli-diagnostic\":\n continue\n\n if args.transfer_paradigm == \"finetune\":\n # Train both the task specific models as well as sentence\n # encoder.\n to_train = [(n, p) for n, p in model.named_parameters() if p.requires_grad]\n else: # args.transfer_paradigm == \"frozen\":\n # Only train task-specific module\n pred_module = getattr(model, \"%s_mdl\" % task.name)\n to_train = [(n, p) for n, p in pred_module.named_parameters() if p.requires_grad]\n to_train += elmo_scalars\n\n trainer, _, opt_params, schd_params = build_trainer(\n args,\n [task.name, \"target_train\"],\n model,\n args.run_dir,\n task.val_metric_decreases,\n phase=\"target_train\",\n )\n _ = trainer.train(\n tasks=[task],\n stop_metric=task.val_metric,\n batch_size=args.batch_size,\n weighting_method=args.weighting_method,\n scaling_method=args.scaling_method,\n train_params=to_train,\n optimizer_params=opt_params,\n scheduler_params=schd_params,\n shared_optimizer=args.shared_optimizer,\n load_model=False,\n phase=\"target_train\",\n )\n\n # Now that we've trained a model, revert to the normal checkpoint\n # logic for this task.\n if task.name in task_names_to_avoid_loading:\n task_names_to_avoid_loading.remove(task.name)\n\n # The best checkpoint will accumulate the best parameters for each\n # task.\n layer_path = os.path.join(args.run_dir, \"model_state_target_train_best.th\")\n\n if args.transfer_paradigm == \"finetune\":\n # Save this fine-tune model with a task specific name.\n finetune_path = os.path.join(args.run_dir, \"model_state_%s_best.th\" % task.name)\n os.rename(layer_path, finetune_path)\n\n # Reload the original best model from before target-task\n # training.\n pre_finetune_path = get_best_checkpoint_path(args.run_dir)\n load_model_state(\n model, pre_finetune_path, args.cuda, skip_task_models=[], strict=strict\n )\n else: # args.transfer_paradigm == \"frozen\":\n # Load the current overall best model.\n # Save the best checkpoint from that target task training to be\n # specific to that target task.\n load_model_state(\n model,\n layer_path,\n args.cuda,\n strict=strict,\n skip_task_models=task_names_to_avoid_loading,\n )\n\n if args.do_full_eval:\n # Evaluate\n log.info(\"Evaluating...\")\n splits_to_write = evaluate.parse_write_preds_arg(args.write_preds)\n if args.transfer_paradigm == \"finetune\":\n for task in target_tasks:\n if task.name == \"mnli-diagnostic\":\n # we'll load mnli-diagnostic during mnli\n continue\n # Special checkpointing logic here since we train the sentence encoder\n # and have a best set of sent encoder model weights per task.\n finetune_path = os.path.join(args.run_dir, \"model_state_%s_best.th\" % task.name)\n if os.path.exists(finetune_path):\n ckpt_path = finetune_path\n else:\n ckpt_path = get_best_checkpoint_path(args.run_dir)\n load_model_state(model, ckpt_path, args.cuda, skip_task_models=[], strict=strict)\n\n tasks = [task]\n if task.name == \"mnli\":\n tasks += [t for t in target_tasks if t.name == \"mnli-diagnostic\"]\n evaluate_and_write(args, model, tasks, splits_to_write)\n\n elif args.transfer_paradigm == \"frozen\":\n # Don't do any special checkpointing logic here\n # since model already has all the trained task specific modules.\n evaluate_and_write(args, model, target_tasks, splits_to_write)\n\n log.info(\"Done!\")\n\n\nif __name__ == \"__main__\":\n try:\n main(sys.argv[1:])\n if 
EMAIL_NOTIFIER is not None:\n EMAIL_NOTIFIER(body=\"Run completed successfully!\", prefix=\"\")\n except BaseException as e:\n # Make sure we log the trace for any crashes before exiting.\n log.exception(\"Fatal error in main():\")\n if EMAIL_NOTIFIER is not None:\n import traceback\n\n tb_lines = traceback.format_exception(*sys.exc_info())\n EMAIL_NOTIFIER(body=\"\".join(tb_lines), prefix=\"FAILED\")\n raise e # re-raise exception, in case debugger is attached.\n sys.exit(1)\n sys.exit(0)\n",
"path": "main.py"
}
] | diff --git a/Dockerfile b/Dockerfile
index 82304093d..ea07afb76 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -92,5 +92,4 @@ ENV PATH_TO_COVE "$JSALT_SHARE_DIR/cove"
ENV ELMO_SRC_DIR "$JSALT_SHARE_DIR/elmo"
# Set these manually with -e or via Kuberentes config YAML.
-# ENV NFS_PROJECT_PREFIX "/nfs/jsalt/exp/docker"
# ENV JIANT_PROJECT_PREFIX "$NFS_PROJECT_PREFIX"
diff --git a/README.md b/README.md
index a48ecc28c..51a1c038b 100644
--- a/README.md
+++ b/README.md
@@ -2,8 +2,6 @@
[](https://circleci.com/gh/nyu-mll/jiant/tree/master)
-This repo contains the `jiant` sentence representation learning toolkit created at the [2018 JSALT Workshop](https://www.clsp.jhu.edu/workshops/18-workshop/) by the [General-Purpose Sentence Representation Learning](https://jsalt18-sentence-repl.github.io/) team. It is an extensible platform meant to make it easy to run experiments that involve multitask and transfer learning across sentence-level NLP tasks.
-
`jiant` is a work-in-progress software toolkit for natural language processing research, designed to facilitate work on multitask learning and transfer learning for sentence understanding tasks.
A few things you might want to know about `jiant`:
diff --git a/config/edgeprobe_bare.conf b/config/edgeprobe_bare.conf
index e2ba73ec0..8f467dfdf 100644
--- a/config/edgeprobe_bare.conf
+++ b/config/edgeprobe_bare.conf
@@ -6,7 +6,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // configure this
diff --git a/config/edgeprobe_bert.conf b/config/edgeprobe_bert.conf
index 44a51f3d5..67f499bd5 100644
--- a/config/edgeprobe_bert.conf
+++ b/config/edgeprobe_bert.conf
@@ -14,7 +14,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // default
diff --git a/config/edgeprobe_cove.conf b/config/edgeprobe_cove.conf
index e7fc520ea..4d245aeaa 100644
--- a/config/edgeprobe_cove.conf
+++ b/config/edgeprobe_cove.conf
@@ -6,7 +6,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // configure this
@@ -27,7 +26,6 @@ patience = 20 // vals until early-stopping
tokenizer = "MosesTokenizer"
cove = 1
word_embs = "glove"
-word_embs_file = ${GLOVE_EMBS_FILE}
elmo = 0
elmo_chars_only = 1
diff --git a/config/edgeprobe_demo.conf b/config/edgeprobe_demo.conf
index 212c454f7..8b8588a7c 100644
--- a/config/edgeprobe_demo.conf
+++ b/config/edgeprobe_demo.conf
@@ -2,10 +2,8 @@
include "defaults.conf" // relative path to this file
// write to local storage by default for this demo
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "edgeprobe-demo"
run_name = "run"
-global_ro_exp_dir = "/nfs/jsalt/share/exp/demo"
reload_tasks = 1
diff --git a/config/edgeprobe_existing.conf b/config/edgeprobe_existing.conf
index df85b5672..b16dbfebd 100644
--- a/config/edgeprobe_existing.conf
+++ b/config/edgeprobe_existing.conf
@@ -9,7 +9,6 @@
// Override paths from params.conf, since these might point to paths on a
// different system.
-global_ro_exp_dir = "/nfs/jsalt/share/exp/default"
project_dir = ${JIANT_PROJECT_PREFIX}
data_dir = ${JIANT_DATA_DIR} // required - should point to data on NFS.
diff --git a/config/edgeprobe_glove.conf b/config/edgeprobe_glove.conf
index 5bd7287ee..baf1f0f27 100644
--- a/config/edgeprobe_glove.conf
+++ b/config/edgeprobe_glove.conf
@@ -7,7 +7,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // configure this
@@ -28,7 +27,6 @@ patience = 20 // vals until early-stopping
tokenizer = "MosesTokenizer" // for consistency with CoVe
cove = 0
word_embs = "glove"
-word_embs_file = ${GLOVE_EMBS_FILE}
elmo = 0
elmo_chars_only = 1
diff --git a/config/edgeprobe_openai.conf b/config/edgeprobe_openai.conf
index 8ba23d69b..4c6278f33 100644
--- a/config/edgeprobe_openai.conf
+++ b/config/edgeprobe_openai.conf
@@ -6,7 +6,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // default
diff --git a/config/edgeprobe_train.conf b/config/edgeprobe_train.conf
index 055b9a734..62a160321 100644
--- a/config/edgeprobe_train.conf
+++ b/config/edgeprobe_train.conf
@@ -6,7 +6,6 @@
// This imports the defaults, which can be overridden below.
include "defaults.conf" // relative path to this file
-project_dir = ${JIANT_PROJECT_PREFIX}
exp_name = "" // configure this
run_name = "run" // configure this
diff --git a/config/spring19_seminar/bert.conf b/config/spring19_seminar/bert.conf
index c1a51eb11..5958c96b0 100644
--- a/config/spring19_seminar/bert.conf
+++ b/config/spring19_seminar/bert.conf
@@ -2,10 +2,6 @@
include "../final.conf"
-// Output path
-project_dir = ${JIANT_PROJECT_PREFIX}
-
-
// Optimization
batch_size = 16
dropout = 0.1 // following BERT paper
@@ -38,4 +34,4 @@ bert_embeddings_mode = "none" // How to handle the embedding layer of the BERT
// "none" for only top-layer activation,
sep_embs_for_skip = 1 // Skip embedding uses the same embedder object as the original embedding (before skip)
elmo = 0
-elmo_chars_only = 0
\ No newline at end of file
+elmo_chars_only = 0
diff --git a/gcp/config/jsalt_paths.1.2.sh b/gcp/config/jsalt_paths.1.2.sh
index 5f71f2ccd..0cb94d992 100644
--- a/gcp/config/jsalt_paths.1.2.sh
+++ b/gcp/config/jsalt_paths.1.2.sh
@@ -11,12 +11,10 @@ export JIANT_DATA_DIR="$JSALT_SHARE_DIR/glue_data"
# Default experiment directory
export JIANT_PROJECT_PREFIX="$HOME/exp"
-export NFS_PROJECT_PREFIX="/nfs/jsalt/exp/$HOSTNAME"
export GLOVE_EMBS_FILE="$JSALT_SHARE_DIR/glove/glove.840B.300d.txt"
export FASTTEXT_EMBS_FILE="$JSALT_SHARE_DIR/fasttext/crawl-300d-2M.vec"
export WORD_EMBS_FILE="$FASTTEXT_EMBS_FILE"
-export FASTTEXT_MODEL_FILE="." # not yet supported
export PATH_TO_COVE="$JSALT_SHARE_DIR/cove"
diff --git a/gcp/kubernetes/run_batch.sh b/gcp/kubernetes/run_batch.sh
index bb3c030c7..73b6b7853 100755
--- a/gcp/kubernetes/run_batch.sh
+++ b/gcp/kubernetes/run_batch.sh
@@ -93,8 +93,6 @@ spec:
- mountPath: /nfs/jsalt
name: nfs-jsalt
env:
- - name: NFS_PROJECT_PREFIX
- value: ${PROJECT_DIR}
- name: JIANT_PROJECT_PREFIX
value: ${PROJECT_DIR}
- name: NOTIFY_EMAIL
diff --git a/main.py b/main.py
index 6a17d40f6..e1c7289cc 100644
--- a/main.py
+++ b/main.py
@@ -39,6 +39,7 @@ def handle_arguments(cl_arguments):
"-c",
type=str,
nargs="+",
+ default="config/defaults.conf",
help="Config file(s) (.conf) for model parameters.",
)
parser.add_argument(
diff --git a/path_config.sh b/path_config.sh
index 9912db2b6..14ed47718 100644
--- a/path_config.sh
+++ b/path_config.sh
@@ -17,7 +17,6 @@
# Example of custom paths for a local installation:
# export JIANT_PROJECT_PREFIX=/Users/Bowman/Drive/JSALT
# export JIANT_DATA_DIR=/Users/Bowman/Drive/JSALT/jiant/glue_data
-# export WORD_EMBS_FILE=~/glove.840B.300d.txt
# The base directory for model output.
export JIANT_PROJECT_PREFIX=~
@@ -29,7 +28,10 @@ export JIANT_DATA_DIR=~
# A word embeddings file in GloVe/fastText format. Not used when using
# ELMo, GPT, or BERT. To use more than one different set of embeddings
# in your environment, create an additional environment variable (like)
-# FASTTEXT_WORD_EMBS_FILE, and reference it in each of your config files
-# with a line like:
+# FASTTEXT_WORD_EMBS_FILE, and reference it in each of your .conf config
+# files with a line like:
# word_embs_file = ${FASTTEXT_WORD_EMBS_FILE}
export WORD_EMBS_FILE=None
+
+# Optional:
+# echo "Loaded custom config."
diff --git a/scripts/demo.with_docker.sh b/scripts/demo.with_docker.sh
index 786e4aff3..9253d7df2 100755
--- a/scripts/demo.with_docker.sh
+++ b/scripts/demo.with_docker.sh
@@ -44,7 +44,6 @@ COMMAND+=( -o "exp_name=jiant-demo" )
# Run demo.conf in the docker container.
sudo docker run --runtime=nvidia --rm -v "$TEMP_DIR:/nfs/jsalt" \
-v "$JIANT_PATH:/share/jiant" \
- -e "NFS_PROJECT_PREFIX=/nfs/jsalt/exp" \
-e "JIANT_PROJECT_PREFIX=/nfs/jsalt/exp" \
-e "PYTORCH_PRETRAINED_BERT_CACHE=/nfs/jsalt/share/bert_cache" \
-e "ELMO_SRC_DIR=" \
|
holoviz__holoviews-5436 | Game of Life example needs update
### Package versions
```
panel = 0.13.1
holoviews = 1.15.0
bokeh = 2.4.3
```
### Bug description
In the Game of Life example in the holoviews documentation (https://holoviews.org/gallery/apps/bokeh/game_of_life.html),
I needed to update the second-to-last line
```python
panel.add_periodic_callback(advance, 50)
```
to
```python
pn.state.add_periodic_callback(advance, period=50) # 50 msec
# note: the `period=` is not necessary, but I think it adds clarity
```
It seems this is due to a change in the `panel` interface: periodic callbacks are now registered via `pn.state.add_periodic_callback`.
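
For reference, here is a minimal, self-contained sketch of the updated pattern, separate from the Game of Life app itself. The counter dict, `Markdown` label, and 50 ms period are illustrative choices, not part of the original example; it assumes a Panel version where periodic callbacks live on `pn.state` (such as the 0.13.1 reported above):

```python
import panel as pn

pn.extension()

ticks = {"count": 0}
label = pn.pane.Markdown("ticks: 0")

def advance():
    # Called every `period` milliseconds while the server session is alive.
    ticks["count"] += 1
    label.object = f"ticks: {ticks['count']}"

# Old pattern: some_panel_object.add_periodic_callback(advance, 50)
# New pattern: register the callback on the application state instead.
pn.state.add_periodic_callback(advance, period=50)  # 50 msec

pn.Column(label).servable()
```

Serve it with `panel serve <script>.py`; the label should update roughly twenty times per second while the session is open.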
| [
{
"content": "import numpy as np\nimport holoviews as hv\nimport panel as pn\n\nfrom holoviews import opts\nfrom holoviews.streams import Tap, Counter, DoubleTap\nfrom scipy.signal import convolve2d\n\nhv.extension('bokeh')\n\ndiehard = [[0, 0, 0, 0, 0, 0, 1, 0],\n [1, 1, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 1, 1, 1]]\n\nboat = [[1, 1, 0],\n [1, 0, 1],\n [0, 1, 0]]\n\nr_pentomino = [[0, 1, 1],\n [1, 1, 0],\n [0, 1, 0]]\n\nbeacon = [[0, 0, 1, 1],\n [0, 0, 1, 1],\n [1, 1, 0, 0],\n [1, 1, 0, 0]]\n\nacorn = [[0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0],\n [1, 1, 0, 0, 1, 1, 1]]\n\nspaceship = [[0, 0, 1, 1, 0],\n [1, 1, 0, 1, 1],\n [1, 1, 1, 1, 0],\n [0, 1, 1, 0, 0]]\n\nblock_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 1, 1],\n [0, 0, 0, 0, 1, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0],\n [1, 0, 1, 0, 0, 0, 0, 0]]\n\nglider = [[1, 0, 0], [0, 1, 1], [1, 1, 0]]\n\nunbounded = [[1, 1, 1, 0, 1],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 1, 1],\n [0, 1, 1, 0, 1],\n [1, 0, 1, 0, 1]]\n\nshapes = {'Glider': glider, 'Block Switch Engine': block_switch_engine,\n 'Spaceship': spaceship, 'Acorn': acorn, 'Beacon': beacon,\n 'Diehard': diehard, 'Unbounded': unbounded}\n\ndef step(X):\n nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X\n return (nbrs_count == 3) | (X & (nbrs_count == 2))\n\ndef update(pattern, counter, x, y):\n if x and y:\n pattern = np.array(shapes[pattern])\n r, c = pattern.shape\n y, x = img.sheet2matrixidx(x,y)\n img.data[y:y+r,x:x+c] = pattern[::-1]\n else:\n img.data = step(img.data)\n return hv.Image(img)\n\n# Set up plot which advances on counter and adds pattern on tap\ntitle = 'Game of Life - Tap to place pattern, Doubletap to clear'\nimg = hv.Image(np.zeros((100, 200), dtype=np.uint8))\ncounter, tap = Counter(transient=True), Tap(transient=True),\npattern_dim = hv.Dimension('Pattern', values=sorted(shapes.keys()))\ndmap = hv.DynamicMap(update, kdims=[pattern_dim], streams=[counter, tap])\n\nplot = dmap.opts(\n opts.Image(cmap='gray', clim=(0, 1), toolbar=None, responsive=True,\n min_height=800, title=title, xaxis=None, yaxis=None)\n)\n\n# Add callback to clear on double tap\ndef reset_data(x, y):\n img.data[:] = 0\n\nreset = DoubleTap(transient=True, source=plot)\nreset.add_subscriber(reset_data)\n\n# Set up Panel app and periodic callback\npanel = pn.pane.HoloViews(plot, center=True, widget_location='right')\n\ndef advance():\n counter.event(counter=counter.counter+1)\npanel.add_periodic_callback(advance, 50)\n\npanel.servable('Game of Life')\n",
"path": "examples/gallery/apps/bokeh/game_of_life.py"
}
] | [
{
"content": "import numpy as np\nimport holoviews as hv\nimport panel as pn\n\nfrom holoviews import opts\nfrom holoviews.streams import Tap, Counter, DoubleTap\nfrom scipy.signal import convolve2d\n\nhv.extension('bokeh')\n\ndiehard = [[0, 0, 0, 0, 0, 0, 1, 0],\n [1, 1, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 1, 1, 1]]\n\nboat = [[1, 1, 0],\n [1, 0, 1],\n [0, 1, 0]]\n\nr_pentomino = [[0, 1, 1],\n [1, 1, 0],\n [0, 1, 0]]\n\nbeacon = [[0, 0, 1, 1],\n [0, 0, 1, 1],\n [1, 1, 0, 0],\n [1, 1, 0, 0]]\n\nacorn = [[0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0],\n [1, 1, 0, 0, 1, 1, 1]]\n\nspaceship = [[0, 0, 1, 1, 0],\n [1, 1, 0, 1, 1],\n [1, 1, 1, 1, 0],\n [0, 1, 1, 0, 0]]\n\nblock_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 1, 1],\n [0, 0, 0, 0, 1, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0],\n [1, 0, 1, 0, 0, 0, 0, 0]]\n\nglider = [[1, 0, 0], [0, 1, 1], [1, 1, 0]]\n\nunbounded = [[1, 1, 1, 0, 1],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 1, 1],\n [0, 1, 1, 0, 1],\n [1, 0, 1, 0, 1]]\n\nshapes = {'Glider': glider, 'Block Switch Engine': block_switch_engine,\n 'Spaceship': spaceship, 'Acorn': acorn, 'Beacon': beacon,\n 'Diehard': diehard, 'Unbounded': unbounded}\n\ndef step(X):\n nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X\n return (nbrs_count == 3) | (X & (nbrs_count == 2))\n\ndef update(pattern, counter, x, y):\n if x and y:\n pattern = np.array(shapes[pattern])\n r, c = pattern.shape\n y, x = img.sheet2matrixidx(x,y)\n img.data[y:y+r,x:x+c] = pattern[::-1]\n else:\n img.data = step(img.data)\n return hv.Image(img)\n\n# Set up plot which advances on counter and adds pattern on tap\ntitle = 'Game of Life - Tap to place pattern, Doubletap to clear'\nimg = hv.Image(np.zeros((100, 200), dtype=np.uint8))\ncounter, tap = Counter(transient=True), Tap(transient=True),\npattern_dim = hv.Dimension('Pattern', values=sorted(shapes.keys()))\ndmap = hv.DynamicMap(update, kdims=[pattern_dim], streams=[counter, tap])\n\nplot = dmap.opts(\n opts.Image(cmap='gray', clim=(0, 1), toolbar=None, responsive=True,\n min_height=800, title=title, xaxis=None, yaxis=None)\n)\n\n# Add callback to clear on double tap\ndef reset_data(x, y):\n img.data[:] = 0\n\nreset = DoubleTap(transient=True, source=plot)\nreset.add_subscriber(reset_data)\n\n# Set up Panel app and periodic callback\npanel = pn.pane.HoloViews(plot, center=True, widget_location='right')\n\ndef advance():\n counter.event(counter=counter.counter+1)\npn.state.add_periodic_callback(advance, period=50, start=False)\n\npanel.servable('Game of Life')\n",
"path": "examples/gallery/apps/bokeh/game_of_life.py"
}
] | diff --git a/examples/gallery/apps/bokeh/game_of_life.py b/examples/gallery/apps/bokeh/game_of_life.py
index 62ddf783be..37d0088f1e 100644
--- a/examples/gallery/apps/bokeh/game_of_life.py
+++ b/examples/gallery/apps/bokeh/game_of_life.py
@@ -91,6 +91,6 @@ def reset_data(x, y):
def advance():
counter.event(counter=counter.counter+1)
-panel.add_periodic_callback(advance, 50)
+pn.state.add_periodic_callback(advance, period=50, start=False)
panel.servable('Game of Life')
|
buildbot__buildbot-986 | Remove googlecode
This fixes the following test on Python 3:
```
trial buildbot.test.unit.test_www_hooks_googlecode
```
| [
{
"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport urllib\n\nfrom twisted.internet import defer\nfrom twisted.python import components\nfrom twisted.python import log\nfrom zope.interface import implements\n\nimport locale\nimport operator\nimport time\n\nfrom buildbot import interfaces\nfrom buildbot import util\nfrom buildbot.changes import changes\nfrom buildbot.status import build\nfrom buildbot.status import builder\nfrom buildbot.status import buildstep\n\nfrom buildbot.status.web.base import Box\nfrom buildbot.status.web.base import HtmlResource\nfrom buildbot.status.web.base import IBox\nfrom buildbot.status.web.base import ICurrentBox\nfrom buildbot.status.web.base import ITopBox\nfrom buildbot.status.web.base import build_get_class\nfrom buildbot.status.web.base import map_branches\nfrom buildbot.status.web.base import path_to_build\nfrom buildbot.status.web.base import path_to_root\nfrom buildbot.status.web.base import path_to_step\n\n\ndef earlier(old, new):\n # minimum of two things, but \"None\" counts as +infinity\n if old:\n if new < old:\n return new\n return old\n return new\n\n\ndef later(old, new):\n # maximum of two things, but \"None\" counts as -infinity\n if old:\n if new > old:\n return new\n return old\n return new\n\n\nclass CurrentBox(components.Adapter):\n # this provides the \"current activity\" box, just above the builder name\n implements(ICurrentBox)\n\n def formatETA(self, prefix, eta):\n if eta is None:\n return []\n if eta < 60:\n return [\"< 1 min\"]\n eta_parts = [\"~\"]\n eta_secs = eta\n if eta_secs > 3600:\n eta_parts.append(\"%d hrs\" % (eta_secs / 3600))\n eta_secs %= 3600\n if eta_secs > 60:\n eta_parts.append(\"%d mins\" % (eta_secs / 60))\n eta_secs %= 60\n abstime = time.strftime(\"%H:%M\", time.localtime(util.now() + eta))\n return [prefix, \" \".join(eta_parts), \"at %s\" % abstime]\n\n def getBox(self, status, brcounts):\n # getState() returns offline, idle, or building\n state, builds = self.original.getState()\n\n # look for upcoming builds. We say the state is \"waiting\" if the\n # builder is otherwise idle and there is a scheduler which tells us a\n # build will be performed some time in the near future. TODO: this\n # functionality used to be in BuilderStatus.. 
maybe this code should\n # be merged back into it.\n upcoming = []\n builderName = self.original.getName()\n for s in status.getSchedulers():\n if builderName in s.listBuilderNames():\n upcoming.extend(s.getPendingBuildTimes())\n if state == \"idle\" and upcoming:\n state = \"waiting\"\n\n if state == \"building\":\n text = [\"building\"]\n if builds:\n for b in builds:\n eta = b.getETA()\n text.extend(self.formatETA(\"ETA in\", eta))\n elif state == \"offline\":\n text = [\"offline\"]\n elif state == \"idle\":\n text = [\"idle\"]\n elif state == \"waiting\":\n text = [\"waiting\"]\n else:\n # just in case I add a state and forget to update this\n text = [state]\n\n # TODO: for now, this pending/upcoming stuff is in the \"current\n # activity\" box, but really it should go into a \"next activity\" row\n # instead. The only times it should show up in \"current activity\" is\n # when the builder is otherwise idle.\n\n # are any builds pending? (waiting for a slave to be free)\n brcount = brcounts[builderName]\n if brcount:\n text.append(\"%d pending\" % brcount)\n for t in sorted(upcoming):\n if t is not None:\n eta = t - util.now()\n text.extend(self.formatETA(\"next in\", eta))\n return Box(text, class_=\"Activity \" + state)\n\ncomponents.registerAdapter(CurrentBox, builder.BuilderStatus, ICurrentBox)\n\n\nclass BuildTopBox(components.Adapter):\n # this provides a per-builder box at the very top of the display,\n # showing the results of the most recent build\n implements(IBox)\n\n def getBox(self, req):\n assert interfaces.IBuilderStatus(self.original)\n branches = [b for b in req.args.get(\"branch\", []) if b]\n builder = self.original\n builds = list(builder.generateFinishedBuilds(map_branches(branches),\n num_builds=1))\n if not builds:\n return Box([\"none\"], class_=\"LastBuild\")\n b = builds[0]\n url = path_to_build(req, b)\n text = b.getText()\n tests_failed = b.getSummaryStatistic('tests-failed', operator.add, 0)\n if tests_failed:\n text.extend([\"Failed tests: %d\" % tests_failed])\n # TODO: maybe add logs?\n class_ = build_get_class(b)\n return Box(text, urlbase=url, class_=\"LastBuild %s\" % class_)\ncomponents.registerAdapter(BuildTopBox, builder.BuilderStatus, ITopBox)\n\n\nclass BuildBox(components.Adapter):\n # this provides the yellow \"starting line\" box for each build\n implements(IBox)\n\n def getBox(self, req):\n b = self.original\n number = b.getNumber()\n url = path_to_build(req, b)\n reason = b.getReason()\n template = req.site.buildbot_service.templates.get_template(\"box_macros.html\")\n text = template.module.build_box(reason=reason, url=url, number=number)\n class_ = \"start\"\n if b.isFinished() and not b.getSteps():\n # the steps have been pruned, so there won't be any indication\n # of whether it succeeded or failed.\n class_ = build_get_class(b)\n return Box([text], class_=\"BuildStep \" + class_)\ncomponents.registerAdapter(BuildBox, build.BuildStatus, IBox)\n\n\nclass StepBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n urlbase = path_to_step(req, self.original)\n text = self.original.getText()\n if text is None:\n log.msg(\"getText() gave None\", urlbase)\n text = []\n text = text[:]\n logs = self.original.getLogs()\n\n cxt = dict(text=text, logs=[], urls=[], stepinfo=self)\n\n for num in range(len(logs)):\n name = logs[num].getName()\n if logs[num].hasContents():\n url = urlbase + \"/logs/%s\" % urllib.quote(name)\n else:\n url = None\n cxt['logs'].append(dict(name=name, url=url))\n\n for name, target in 
self.original.getURLs().items():\n cxt['urls'].append(dict(link=target, name=name))\n\n template = req.site.buildbot_service.templates.get_template(\"box_macros.html\")\n text = template.module.step_box(**cxt)\n\n class_ = \"BuildStep \" + build_get_class(self.original)\n return Box(text, class_=class_)\ncomponents.registerAdapter(StepBox, buildstep.BuildStepStatus, IBox)\n\n\nclass EventBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n text = self.original.getText()\n class_ = \"Event\"\n return Box(text, class_=class_)\ncomponents.registerAdapter(EventBox, builder.Event, IBox)\n\n\nclass Spacer:\n implements(interfaces.IStatusEvent)\n\n def __init__(self, start, finish):\n self.started = start\n self.finished = finish\n\n def getTimes(self):\n return (self.started, self.finished)\n\n def getText(self):\n return []\n\n\nclass SpacerBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n #b = Box([\"spacer\"], \"white\")\n b = Box([])\n b.spacer = True\n return b\ncomponents.registerAdapter(SpacerBox, Spacer, IBox)\n\n\ndef insertGaps(g, showEvents, lastEventTime, idleGap=2):\n debug = False\n\n e = g.next()\n starts, finishes = e.getTimes()\n if debug:\n log.msg(\"E0\", starts, finishes)\n if finishes == 0:\n finishes = starts\n if debug:\n log.msg(\"E1 finishes=%s, gap=%s, lET=%s\" %\n (finishes, idleGap, lastEventTime))\n if finishes is not None and finishes + idleGap < lastEventTime:\n if debug:\n log.msg(\" spacer0\")\n yield Spacer(finishes, lastEventTime)\n\n followingEventStarts = starts\n if debug:\n log.msg(\" fES0\", starts)\n yield e\n\n while True:\n e = g.next()\n if not showEvents and isinstance(e, builder.Event):\n continue\n starts, finishes = e.getTimes()\n if debug:\n log.msg(\"E2\", starts, finishes)\n if finishes == 0:\n finishes = starts\n if finishes is not None and finishes + idleGap < followingEventStarts:\n # there is a gap between the end of this event and the beginning\n # of the next one. 
Insert an idle event so the waterfall display\n # shows a gap here.\n if debug:\n log.msg(\" finishes=%s, gap=%s, fES=%s\" %\n (finishes, idleGap, followingEventStarts))\n yield Spacer(finishes, followingEventStarts)\n yield e\n followingEventStarts = starts\n if debug:\n log.msg(\" fES1\", starts)\n\n\nclass WaterfallHelp(HtmlResource):\n pageTitle = \"Waterfall Help\"\n\n def __init__(self, categories=None):\n HtmlResource.__init__(self)\n self.categories = categories\n\n def content(self, request, cxt):\n status = self.getStatus(request)\n\n cxt['show_events_checked'] = request.args.get(\"show_events\", [\"false\"])[0].lower() == \"true\"\n cxt['branches'] = [b for b in request.args.get(\"branch\", []) if b]\n cxt['failures_only'] = request.args.get(\"failures_only\", [\"false\"])[0].lower() == \"true\"\n cxt['committers'] = [c for c in request.args.get(\"committer\", []) if c]\n cxt['projects'] = [p for p in request.args.get(\"project\", []) if p]\n\n # this has a set of toggle-buttons to let the user choose the\n # builders\n show_builders = request.args.get(\"show\", [])\n show_builders.extend(request.args.get(\"builder\", []))\n cxt['show_builders'] = show_builders\n cxt['all_builders'] = status.getBuilderNames(categories=self.categories)\n\n # this has a set of toggle-buttons to let the user choose the\n # categories\n show_categories = request.args.get(\"category\", [])\n allBuilderNames = status.getBuilderNames()\n builders = [status.getBuilder(name) for name in allBuilderNames]\n allCategories = [builder.getCategory() for builder in builders]\n cxt['show_categories'] = show_categories\n cxt['all_categories'] = allCategories\n\n # a couple of radio-button selectors for refresh time will appear\n # just after that text\n times = [(\"none\", \"None\"),\n (\"60\", \"60 seconds\"),\n (\"300\", \"5 minutes\"),\n (\"600\", \"10 minutes\"),\n ]\n current_reload_time = request.args.get(\"reload\", [\"none\"])\n if current_reload_time:\n current_reload_time = current_reload_time[0]\n if current_reload_time not in [t[0] for t in times]:\n times.insert(0, (current_reload_time, current_reload_time))\n\n cxt['times'] = times\n cxt['current_reload_time'] = current_reload_time\n\n template = request.site.buildbot_service.templates.get_template(\"waterfallhelp.html\")\n return template.render(**cxt)\n\n\nclass ChangeEventSource(object):\n\n \"A wrapper around a list of changes to supply the IEventSource interface\"\n\n def __init__(self, changes):\n self.changes = changes\n # we want them in newest-to-oldest order\n self.changes.reverse()\n\n def eventGenerator(self, branches, categories, committers, projects, minTime):\n for change in self.changes:\n if branches and change.branch not in branches:\n continue\n if categories and change.category not in categories:\n continue\n if committers and change.author not in committers:\n continue\n if minTime and change.when < minTime:\n continue\n yield change\n\n\nclass WaterfallStatusResource(HtmlResource):\n\n \"\"\"This builds the main status page, with the waterfall display, and\n all child pages.\"\"\"\n\n def __init__(self, categories=None, num_events=200, num_events_max=None):\n HtmlResource.__init__(self)\n self.categories = categories\n self.num_events = num_events\n self.num_events_max = num_events_max\n self.putChild(\"help\", WaterfallHelp(categories))\n\n def getPageTitle(self, request):\n status = self.getStatus(request)\n p = status.getTitle()\n if p:\n return \"BuildBot: %s\" % p\n else:\n return \"BuildBot\"\n\n def 
getChangeManager(self, request):\n # TODO: this wants to go away, access it through IStatus\n return request.site.buildbot_service.getChangeSvc()\n\n def get_reload_time(self, request):\n if \"reload\" in request.args:\n try:\n reload_time = int(request.args[\"reload\"][0])\n return max(reload_time, 15)\n except ValueError:\n pass\n return None\n\n def isSuccess(self, builderStatus):\n # Helper function to return True if the builder is not failing.\n # The function will return false if the current state is \"offline\",\n # the last build was not successful, or if a step from the current\n # build(s) failed.\n\n # Make sure the builder is online.\n if builderStatus.getState()[0] == 'offline':\n return False\n\n # Look at the last finished build to see if it was success or not.\n lastBuild = builderStatus.getLastFinishedBuild()\n if lastBuild and lastBuild.getResults() != builder.SUCCESS:\n return False\n\n # Check all the current builds to see if one step is already\n # failing.\n currentBuilds = builderStatus.getCurrentBuilds()\n if currentBuilds:\n for build in currentBuilds:\n for step in build.getSteps():\n if step.getResults()[0] == builder.FAILURE:\n return False\n\n # The last finished build was successful, and all the current builds\n # don't have any failed steps.\n return True\n\n def content(self, request, ctx):\n status = self.getStatus(request)\n master = request.site.buildbot_service.master\n\n # before calling content_with_db_data, make a bunch of database\n # queries. This is a sick hack, but beats rewriting the entire\n # waterfall around asynchronous calls\n\n results = {}\n\n # recent changes\n changes_d = master.db.changes.getRecentChanges(40)\n\n def to_changes(chdicts):\n return defer.gatherResults([\n changes.Change.fromChdict(master, chdict)\n for chdict in chdicts])\n changes_d.addCallback(to_changes)\n\n def keep_changes(changes):\n results['changes'] = changes\n changes_d.addCallback(keep_changes)\n\n # build request counts for each builder\n allBuilderNames = status.getBuilderNames(categories=self.categories)\n brstatus_ds = []\n brcounts = {}\n\n def keep_count(statuses, builderName):\n brcounts[builderName] = len(statuses)\n for builderName in allBuilderNames:\n builder_status = status.getBuilder(builderName)\n d = builder_status.getPendingBuildRequestStatuses()\n d.addCallback(keep_count, builderName)\n brstatus_ds.append(d)\n\n # wait for it all to finish\n d = defer.gatherResults([changes_d] + brstatus_ds)\n\n def call_content(_):\n return self.content_with_db_data(results['changes'],\n brcounts, request, ctx)\n d.addCallback(call_content)\n return d\n\n def content_with_db_data(self, changes, brcounts, request, ctx):\n status = self.getStatus(request)\n ctx['refresh'] = self.get_reload_time(request)\n\n # we start with all Builders available to this Waterfall: this is\n # limited by the config-file -time categories= argument, and defaults\n # to all defined Builders.\n allBuilderNames = status.getBuilderNames(categories=self.categories)\n builders = [status.getBuilder(name) for name in allBuilderNames]\n\n # but if the URL has one or more builder= arguments (or the old show=\n # argument, which is still accepted for backwards compatibility), we\n # use that set of builders instead. 
We still don't show anything\n # outside the config-file time set limited by categories=.\n showBuilders = request.args.get(\"show\", [])\n showBuilders.extend(request.args.get(\"builder\", []))\n if showBuilders:\n builders = [b for b in builders if b.name in showBuilders]\n\n # now, if the URL has one or category= arguments, use them as a\n # filter: only show those builders which belong to one of the given\n # categories.\n showCategories = request.args.get(\"category\", [])\n if showCategories:\n builders = [b for b in builders if b.category in showCategories]\n\n # If the URL has the failures_only=true argument, we remove all the\n # builders that are not currently red or won't be turning red at the end\n # of their current run.\n failuresOnly = request.args.get(\"failures_only\", [\"false\"])[0]\n if failuresOnly.lower() == \"true\":\n builders = [b for b in builders if not self.isSuccess(b)]\n\n (changeNames, builderNames, timestamps, eventGrid, sourceEvents) = \\\n self.buildGrid(request, builders, changes)\n\n # start the table: top-header material\n locale_enc = locale.getdefaultlocale()[1]\n if locale_enc is not None:\n locale_tz = unicode(time.tzname[time.localtime()[-1]], locale_enc)\n else:\n locale_tz = unicode(time.tzname[time.localtime()[-1]])\n ctx['tz'] = locale_tz\n ctx['changes_url'] = request.childLink(\"../changes\")\n\n bn = ctx['builders'] = []\n\n for name in builderNames:\n builder = status.getBuilder(name)\n top_box = ITopBox(builder).getBox(request)\n current_box = ICurrentBox(builder).getBox(status, brcounts)\n bn.append({'name': name,\n 'url': request.childLink(\"../builders/%s\" % urllib.quote(name, safe='')),\n 'top': top_box.text,\n 'top_class': top_box.class_,\n 'status': current_box.text,\n 'status_class': current_box.class_,\n })\n\n ctx.update(self.phase2(request, changeNames + builderNames, timestamps, eventGrid,\n sourceEvents))\n\n def with_args(req, remove_args=[], new_args=[], new_path=None):\n # sigh, nevow makes this sort of manipulation easier\n newargs = req.args.copy()\n for argname in remove_args:\n newargs[argname] = []\n if \"branch\" in newargs:\n newargs[\"branch\"] = [b for b in newargs[\"branch\"] if b]\n for k, v in new_args:\n if k in newargs:\n newargs[k].append(v)\n else:\n newargs[k] = [v]\n newquery = \"&\".join([\"%s=%s\" % (urllib.quote(k), urllib.quote(v))\n for k in newargs\n for v in newargs[k]\n ])\n if new_path:\n new_url = new_path\n elif req.prepath:\n new_url = req.prepath[-1]\n else:\n new_url = ''\n if newquery:\n new_url += \"?\" + newquery\n return new_url\n\n if timestamps:\n bottom = timestamps[-1]\n ctx['nextpage'] = with_args(request, [\"last_time\"],\n [(\"last_time\", str(int(bottom)))])\n\n helpurl = path_to_root(request) + \"waterfall/help\"\n ctx['help_url'] = with_args(request, new_path=helpurl)\n\n if self.get_reload_time(request) is not None:\n ctx['no_reload_page'] = with_args(request, remove_args=[\"reload\"])\n\n template = request.site.buildbot_service.templates.get_template(\"waterfall.html\")\n data = template.render(**ctx)\n return data\n\n def buildGrid(self, request, builders, changes):\n debug = False\n # TODO: see if we can use a cached copy\n\n showEvents = False\n if request.args.get(\"show_events\", [\"false\"])[0].lower() == \"true\":\n showEvents = True\n filterCategories = request.args.get('category', [])\n filterBranches = [b for b in request.args.get(\"branch\", []) if b]\n filterBranches = map_branches(filterBranches)\n filterCommitters = [c for c in request.args.get(\"committer\", 
[]) if c]\n filterProjects = [p for p in request.args.get(\"project\", []) if p]\n maxTime = int(request.args.get(\"last_time\", [util.now()])[0])\n if \"show_time\" in request.args:\n minTime = maxTime - int(request.args[\"show_time\"][0])\n elif \"first_time\" in request.args:\n minTime = int(request.args[\"first_time\"][0])\n elif filterBranches or filterCommitters:\n minTime = util.now() - 24 * 60 * 60\n else:\n minTime = 0\n spanLength = 10 # ten-second chunks\n req_events = int(request.args.get(\"num_events\", [self.num_events])[0])\n if self.num_events_max and req_events > self.num_events_max:\n maxPageLen = self.num_events_max\n else:\n maxPageLen = req_events\n\n # first step is to walk backwards in time, asking each column\n # (commit, all builders) if they have any events there. Build up the\n # array of events, and stop when we have a reasonable number.\n\n commit_source = ChangeEventSource(changes)\n\n lastEventTime = util.now()\n sources = [commit_source] + builders\n changeNames = [\"changes\"]\n builderNames = map(lambda builder: builder.getName(), builders)\n sourceNames = changeNames + builderNames\n sourceEvents = []\n sourceGenerators = []\n\n def get_event_from(g):\n try:\n while True:\n e = g.next()\n # e might be buildstep.BuildStepStatus,\n # builder.BuildStatus, builder.Event,\n # waterfall.Spacer(builder.Event), or changes.Change .\n # The showEvents=False flag means we should hide\n # builder.Event .\n if not showEvents and isinstance(e, builder.Event):\n continue\n\n if isinstance(e, buildstep.BuildStepStatus):\n # unfinished steps are always shown\n if e.isFinished() and e.isHidden():\n continue\n\n break\n event = interfaces.IStatusEvent(e)\n if debug:\n log.msg(\"gen %s gave1 %s\" % (g, event.getText()))\n except StopIteration:\n event = None\n return event\n\n for s in sources:\n gen = insertGaps(s.eventGenerator(filterBranches,\n filterCategories,\n filterCommitters,\n filterProjects,\n minTime),\n showEvents,\n lastEventTime)\n sourceGenerators.append(gen)\n # get the first event\n sourceEvents.append(get_event_from(gen))\n eventGrid = []\n timestamps = []\n\n lastEventTime = 0\n for e in sourceEvents:\n if e and e.getTimes()[0] > lastEventTime:\n lastEventTime = e.getTimes()[0]\n if lastEventTime == 0:\n lastEventTime = util.now()\n\n spanStart = lastEventTime - spanLength\n debugGather = 0\n\n while True:\n if debugGather:\n log.msg(\"checking (%s,]\" % spanStart)\n # the tableau of potential events is in sourceEvents[]. The\n # window crawls backwards, and we examine one source at a time.\n # If the source's top-most event is in the window, is it pushed\n # onto the events[] array and the tableau is refilled. This\n # continues until the tableau event is not in the window (or is\n # missing).\n\n spanEvents = [] # for all sources, in this span. row of eventGrid\n firstTimestamp = None # timestamp of first event in the span\n lastTimestamp = None # last pre-span event, for next span\n\n for c in range(len(sourceGenerators)):\n events = [] # for this source, in this span. 
cell of eventGrid\n event = sourceEvents[c]\n while event and spanStart < event.getTimes()[0]:\n # to look at windows that don't end with the present,\n # condition the .append on event.time <= spanFinish\n if not IBox(event, None):\n log.msg(\"BAD EVENT\", event, event.getText())\n assert 0\n if debug:\n log.msg(\"pushing\", event.getText(), event)\n events.append(event)\n starts, finishes = event.getTimes()\n firstTimestamp = earlier(firstTimestamp, starts)\n event = get_event_from(sourceGenerators[c])\n if debug:\n log.msg(\"finished span\")\n\n if event:\n # this is the last pre-span event for this source\n lastTimestamp = later(lastTimestamp,\n event.getTimes()[0])\n if debugGather:\n log.msg(\" got %s from %s\" % (events, sourceNames[c]))\n sourceEvents[c] = event # refill the tableau\n spanEvents.append(events)\n\n # only show events older than maxTime. This makes it possible to\n # visit a page that shows what it would be like to scroll off the\n # bottom of this one.\n if firstTimestamp is not None and firstTimestamp <= maxTime:\n eventGrid.append(spanEvents)\n timestamps.append(firstTimestamp)\n\n if lastTimestamp:\n spanStart = lastTimestamp - spanLength\n else:\n # no more events\n break\n if minTime is not None and lastTimestamp < minTime:\n break\n\n if len(timestamps) > maxPageLen:\n break\n\n # now loop\n # loop is finished. now we have eventGrid[] and timestamps[]\n if debugGather:\n log.msg(\"finished loop\")\n assert(len(timestamps) == len(eventGrid))\n return (changeNames, builderNames, timestamps, eventGrid, sourceEvents)\n\n def phase2(self, request, sourceNames, timestamps, eventGrid,\n sourceEvents):\n\n if not timestamps:\n return dict(grid=[], gridlen=0)\n\n # first pass: figure out the height of the chunks, populate grid\n grid = []\n for i in range(1 + len(sourceNames)):\n grid.append([])\n # grid is a list of columns, one for the timestamps, and one per\n # event source. Each column is exactly the same height. Each element\n # of the list is a single <td> box.\n lastDate = time.strftime(\"%d %b %Y\",\n time.localtime(util.now()))\n for r in range(0, len(timestamps)):\n chunkstrip = eventGrid[r]\n # chunkstrip is a horizontal strip of event blocks. 
Each block\n # is a vertical list of events, all for the same source.\n assert(len(chunkstrip) == len(sourceNames))\n maxRows = reduce(lambda x, y: max(x, y),\n map(lambda x: len(x), chunkstrip))\n for i in range(maxRows):\n if i != maxRows - 1:\n grid[0].append(None)\n else:\n # timestamp goes at the bottom of the chunk\n stuff = []\n # add the date at the beginning (if it is not the same as\n # today's date), and each time it changes\n todayday = time.strftime(\"%a\",\n time.localtime(timestamps[r]))\n today = time.strftime(\"%d %b %Y\",\n time.localtime(timestamps[r]))\n if today != lastDate:\n stuff.append(todayday)\n stuff.append(today)\n lastDate = today\n stuff.append(\n time.strftime(\"%H:%M:%S\",\n time.localtime(timestamps[r])))\n grid[0].append(Box(text=stuff, class_=\"Time\",\n valign=\"bottom\", align=\"center\"))\n\n # at this point the timestamp column has been populated with\n # maxRows boxes, most None but the last one has the time string\n for c in range(0, len(chunkstrip)):\n block = chunkstrip[c]\n assert(block is not None) # should be [] instead\n for i in range(maxRows - len(block)):\n # fill top of chunk with blank space\n grid[c + 1].append(None)\n for i in range(len(block)):\n # so the events are bottom-justified\n b = IBox(block[i]).getBox(request)\n b.parms['valign'] = \"top\"\n b.parms['align'] = \"center\"\n grid[c + 1].append(b)\n # now all the other columns have maxRows new boxes too\n # populate the last row, if empty\n gridlen = len(grid[0])\n for i in range(len(grid)):\n strip = grid[i]\n assert(len(strip) == gridlen)\n if strip[-1] is None:\n if sourceEvents[i - 1]:\n filler = IBox(sourceEvents[i - 1]).getBox(request)\n else:\n # this can happen if you delete part of the build history\n filler = Box(text=[\"?\"], align=\"center\")\n strip[-1] = filler\n strip[-1].parms['rowspan'] = 1\n # second pass: bubble the events upwards to un-occupied locations\n # Every square of the grid that has a None in it needs to have\n # something else take its place.\n noBubble = request.args.get(\"nobubble\", ['0'])\n noBubble = int(noBubble[0])\n if not noBubble:\n for col in range(len(grid)):\n strip = grid[col]\n if col == 1: # changes are handled differently\n for i in range(2, len(strip) + 1):\n # only merge empty boxes. Don't bubble commit boxes.\n if strip[-i] is None:\n next = strip[-i + 1]\n assert(next)\n if next:\n # if not next.event:\n if next.spacer:\n # bubble the empty box up\n strip[-i] = next\n strip[-i].parms['rowspan'] += 1\n strip[-i + 1] = None\n else:\n # we are above a commit box. Leave it\n # be, and turn the current box into an\n # empty one\n strip[-i] = Box([], rowspan=1,\n comment=\"commit bubble\")\n strip[-i].spacer = True\n else:\n # we are above another empty box, which\n # somehow wasn't already converted.\n # Shouldn't happen\n pass\n else:\n for i in range(2, len(strip) + 1):\n # strip[-i] will go from next-to-last back to first\n if strip[-i] is None:\n # bubble previous item up\n assert(strip[-i + 1] is not None)\n strip[-i] = strip[-i + 1]\n strip[-i].parms['rowspan'] += 1\n strip[-i + 1] = None\n else:\n strip[-i].parms['rowspan'] = 1\n\n # convert to dicts\n for i in range(gridlen):\n for strip in grid:\n if strip[i]:\n strip[i] = strip[i].td()\n\n return dict(grid=grid, gridlen=gridlen, no_bubble=noBubble, time=lastDate)\n",
"path": "master/buildbot/status/web/waterfall.py"
}
] | [
{
"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport urllib\n\nfrom twisted.internet import defer\nfrom twisted.python import components\nfrom twisted.python import log\nfrom zope.interface import implements\n\nimport locale\nimport operator\nimport time\n\nfrom buildbot import interfaces\nfrom buildbot import util\nfrom buildbot.changes import changes\nfrom buildbot.status import build\nfrom buildbot.status import builder\nfrom buildbot.status import buildstep\n\nfrom buildbot.status.web.base import Box\nfrom buildbot.status.web.base import HtmlResource\nfrom buildbot.status.web.base import IBox\nfrom buildbot.status.web.base import ICurrentBox\nfrom buildbot.status.web.base import ITopBox\nfrom buildbot.status.web.base import build_get_class\nfrom buildbot.status.web.base import map_branches\nfrom buildbot.status.web.base import path_to_build\nfrom buildbot.status.web.base import path_to_root\nfrom buildbot.status.web.base import path_to_step\n\n\ndef earlier(old, new):\n # minimum of two things, but \"None\" counts as +infinity\n if old:\n if new < old:\n return new\n return old\n return new\n\n\ndef later(old, new):\n # maximum of two things, but \"None\" counts as -infinity\n if old:\n if new > old:\n return new\n return old\n return new\n\n\nclass CurrentBox(components.Adapter):\n # this provides the \"current activity\" box, just above the builder name\n implements(ICurrentBox)\n\n def formatETA(self, prefix, eta):\n if eta is None:\n return []\n if eta < 60:\n return [\"< 1 min\"]\n eta_parts = [\"~\"]\n eta_secs = eta\n if eta_secs > 3600:\n eta_parts.append(\"%d hrs\" % (eta_secs / 3600))\n eta_secs %= 3600\n if eta_secs > 60:\n eta_parts.append(\"%d mins\" % (eta_secs / 60))\n eta_secs %= 60\n abstime = time.strftime(\"%H:%M\", time.localtime(util.now() + eta))\n return [prefix, \" \".join(eta_parts), \"at %s\" % abstime]\n\n def getBox(self, status, brcounts):\n # getState() returns offline, idle, or building\n state, builds = self.original.getState()\n\n # look for upcoming builds. We say the state is \"waiting\" if the\n # builder is otherwise idle and there is a scheduler which tells us a\n # build will be performed some time in the near future. TODO: this\n # functionality used to be in BuilderStatus.. 
maybe this code should\n # be merged back into it.\n upcoming = []\n builderName = self.original.getName()\n for s in status.getSchedulers():\n if builderName in s.listBuilderNames():\n upcoming.extend(s.getPendingBuildTimes())\n if state == \"idle\" and upcoming:\n state = \"waiting\"\n\n if state == \"building\":\n text = [\"building\"]\n if builds:\n for b in builds:\n eta = b.getETA()\n text.extend(self.formatETA(\"ETA in\", eta))\n elif state == \"offline\":\n text = [\"offline\"]\n elif state == \"idle\":\n text = [\"idle\"]\n elif state == \"waiting\":\n text = [\"waiting\"]\n else:\n # just in case I add a state and forget to update this\n text = [state]\n\n # TODO: for now, this pending/upcoming stuff is in the \"current\n # activity\" box, but really it should go into a \"next activity\" row\n # instead. The only times it should show up in \"current activity\" is\n # when the builder is otherwise idle.\n\n # are any builds pending? (waiting for a slave to be free)\n brcount = brcounts[builderName]\n if brcount:\n text.append(\"%d pending\" % brcount)\n for t in sorted(upcoming):\n if t is not None:\n eta = t - util.now()\n text.extend(self.formatETA(\"next in\", eta))\n return Box(text, class_=\"Activity \" + state)\n\ncomponents.registerAdapter(CurrentBox, builder.BuilderStatus, ICurrentBox)\n\n\nclass BuildTopBox(components.Adapter):\n # this provides a per-builder box at the very top of the display,\n # showing the results of the most recent build\n implements(IBox)\n\n def getBox(self, req):\n assert interfaces.IBuilderStatus(self.original)\n branches = [b for b in req.args.get(\"branch\", []) if b]\n builder = self.original\n builds = list(builder.generateFinishedBuilds(map_branches(branches),\n num_builds=1))\n if not builds:\n return Box([\"none\"], class_=\"LastBuild\")\n b = builds[0]\n url = path_to_build(req, b)\n text = b.getText()\n tests_failed = b.getSummaryStatistic('tests-failed', operator.add, 0)\n if tests_failed:\n text.extend([\"Failed tests: %d\" % tests_failed])\n # TODO: maybe add logs?\n class_ = build_get_class(b)\n return Box(text, urlbase=url, class_=\"LastBuild %s\" % class_)\ncomponents.registerAdapter(BuildTopBox, builder.BuilderStatus, ITopBox)\n\n\nclass BuildBox(components.Adapter):\n # this provides the yellow \"starting line\" box for each build\n implements(IBox)\n\n def getBox(self, req):\n b = self.original\n number = b.getNumber()\n url = path_to_build(req, b)\n reason = b.getReason()\n template = req.site.buildbot_service.templates.get_template(\"box_macros.html\")\n text = template.module.build_box(reason=reason, url=url, number=number)\n class_ = \"start\"\n if b.isFinished() and not b.getSteps():\n # the steps have been pruned, so there won't be any indication\n # of whether it succeeded or failed.\n class_ = build_get_class(b)\n return Box([text], class_=\"BuildStep \" + class_)\ncomponents.registerAdapter(BuildBox, build.BuildStatus, IBox)\n\n\nclass StepBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n urlbase = path_to_step(req, self.original)\n text = self.original.getText()\n if text is None:\n log.msg(\"getText() gave None\", urlbase)\n text = []\n text = text[:]\n logs = self.original.getLogs()\n\n cxt = dict(text=text, logs=[], urls=[], stepinfo=self)\n\n for num in range(len(logs)):\n name = logs[num].getName()\n if logs[num].hasContents():\n url = urlbase + \"/logs/%s\" % urllib.quote(name)\n else:\n url = None\n cxt['logs'].append(dict(name=name, url=url))\n\n for name, target in 
self.original.getURLs().items():\n cxt['urls'].append(dict(link=target, name=name))\n\n template = req.site.buildbot_service.templates.get_template(\"box_macros.html\")\n text = template.module.step_box(**cxt)\n\n class_ = \"BuildStep \" + build_get_class(self.original)\n return Box(text, class_=class_)\ncomponents.registerAdapter(StepBox, buildstep.BuildStepStatus, IBox)\n\n\nclass EventBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n text = self.original.getText()\n class_ = \"Event\"\n return Box(text, class_=class_)\ncomponents.registerAdapter(EventBox, builder.Event, IBox)\n\n\nclass Spacer:\n implements(interfaces.IStatusEvent)\n\n def __init__(self, start, finish):\n self.started = start\n self.finished = finish\n\n def getTimes(self):\n return (self.started, self.finished)\n\n def getText(self):\n return []\n\n\nclass SpacerBox(components.Adapter):\n implements(IBox)\n\n def getBox(self, req):\n #b = Box([\"spacer\"], \"white\")\n b = Box([])\n b.spacer = True\n return b\ncomponents.registerAdapter(SpacerBox, Spacer, IBox)\n\n\ndef insertGaps(g, showEvents, lastEventTime, idleGap=2):\n debug = False\n\n e = g.next()\n starts, finishes = e.getTimes()\n if debug:\n log.msg(\"E0\", starts, finishes)\n if finishes == 0:\n finishes = starts\n if debug:\n log.msg(\"E1 finishes=%s, gap=%s, lET=%s\" %\n (finishes, idleGap, lastEventTime))\n if finishes is not None and finishes + idleGap < lastEventTime:\n if debug:\n log.msg(\" spacer0\")\n yield Spacer(finishes, lastEventTime)\n\n followingEventStarts = starts\n if debug:\n log.msg(\" fES0\", starts)\n yield e\n\n while True:\n e = g.next()\n if not showEvents and isinstance(e, builder.Event):\n continue\n starts, finishes = e.getTimes()\n if debug:\n log.msg(\"E2\", starts, finishes)\n if finishes == 0:\n finishes = starts\n if finishes is not None and finishes + idleGap < followingEventStarts:\n # there is a gap between the end of this event and the beginning\n # of the next one. 
Insert an idle event so the waterfall display\n # shows a gap here.\n if debug:\n log.msg(\" finishes=%s, gap=%s, fES=%s\" %\n (finishes, idleGap, followingEventStarts))\n yield Spacer(finishes, followingEventStarts)\n yield e\n followingEventStarts = starts\n if debug:\n log.msg(\" fES1\", starts)\n\n\nclass WaterfallHelp(HtmlResource):\n pageTitle = \"Waterfall Help\"\n\n def __init__(self, categories=None):\n HtmlResource.__init__(self)\n self.categories = categories\n\n def content(self, request, cxt):\n status = self.getStatus(request)\n\n cxt['show_events_checked'] = request.args.get(\"show_events\", [\"false\"])[0].lower() == \"true\"\n cxt['branches'] = [b for b in request.args.get(\"branch\", []) if b]\n cxt['failures_only'] = request.args.get(\"failures_only\", [\"false\"])[0].lower() == \"true\"\n cxt['committers'] = [c for c in request.args.get(\"committer\", []) if c]\n cxt['projects'] = [p for p in request.args.get(\"project\", []) if p]\n\n # this has a set of toggle-buttons to let the user choose the\n # builders\n show_builders = request.args.get(\"show\", [])\n show_builders.extend(request.args.get(\"builder\", []))\n cxt['show_builders'] = show_builders\n cxt['all_builders'] = status.getBuilderNames(categories=self.categories)\n\n # this has a set of toggle-buttons to let the user choose the\n # categories\n show_categories = request.args.get(\"category\", [])\n allBuilderNames = status.getBuilderNames()\n builders = [status.getBuilder(name) for name in allBuilderNames]\n allCategories = [builder.getCategory() for builder in builders]\n cxt['show_categories'] = show_categories\n cxt['all_categories'] = allCategories\n\n # a couple of radio-button selectors for refresh time will appear\n # just after that text\n times = [(\"none\", \"None\"),\n (\"60\", \"60 seconds\"),\n (\"300\", \"5 minutes\"),\n (\"600\", \"10 minutes\"),\n ]\n current_reload_time = request.args.get(\"reload\", [\"none\"])\n if current_reload_time:\n current_reload_time = current_reload_time[0]\n if current_reload_time not in [t[0] for t in times]:\n times.insert(0, (current_reload_time, current_reload_time))\n\n cxt['times'] = times\n cxt['current_reload_time'] = current_reload_time\n\n template = request.site.buildbot_service.templates.get_template(\"waterfallhelp.html\")\n return template.render(**cxt)\n\n\nclass ChangeEventSource(object):\n\n \"A wrapper around a list of changes to supply the IEventSource interface\"\n\n def __init__(self, changes):\n self.changes = changes\n # we want them in newest-to-oldest order\n self.changes.reverse()\n\n def eventGenerator(self, branches, categories, committers, projects, minTime):\n for change in self.changes:\n if branches and change.branch not in branches:\n continue\n if categories and change.category not in categories:\n continue\n if committers and change.author not in committers:\n continue\n if minTime and change.when < minTime:\n continue\n yield change\n\n\nclass WaterfallStatusResource(HtmlResource):\n\n \"\"\"This builds the main status page, with the waterfall display, and\n all child pages.\"\"\"\n\n def __init__(self, categories=None, num_events=200, num_events_max=None):\n HtmlResource.__init__(self)\n self.categories = categories\n self.num_events = num_events\n self.num_events_max = num_events_max\n self.putChild(\"help\", WaterfallHelp(categories))\n\n def getPageTitle(self, request):\n status = self.getStatus(request)\n p = status.getTitle()\n if p:\n return \"BuildBot: %s\" % p\n else:\n return \"BuildBot\"\n\n def 
getChangeManager(self, request):\n # TODO: this wants to go away, access it through IStatus\n return request.site.buildbot_service.getChangeSvc()\n\n def get_reload_time(self, request):\n if \"reload\" in request.args:\n try:\n reload_time = int(request.args[\"reload\"][0])\n return max(reload_time, 15)\n except ValueError:\n pass\n return None\n\n def isSuccess(self, builderStatus):\n # Helper function to return True if the builder is not failing.\n # The function will return false if the current state is \"offline\",\n # the last build was not successful, or if a step from the current\n # build(s) failed.\n\n # Make sure the builder is online.\n if builderStatus.getState()[0] == 'offline':\n return False\n\n # Look at the last finished build to see if it was success or not.\n lastBuild = builderStatus.getLastFinishedBuild()\n if lastBuild and lastBuild.getResults() != builder.SUCCESS:\n return False\n\n # Check all the current builds to see if one step is already\n # failing.\n currentBuilds = builderStatus.getCurrentBuilds()\n if currentBuilds:\n for build in currentBuilds:\n for step in build.getSteps():\n if step.getResults()[0] == builder.FAILURE:\n return False\n\n # The last finished build was successful, and all the current builds\n # don't have any failed steps.\n return True\n\n def content(self, request, ctx):\n status = self.getStatus(request)\n master = request.site.buildbot_service.master\n\n # before calling content_with_db_data, make a bunch of database\n # queries. This is a sick hack, but beats rewriting the entire\n # waterfall around asynchronous calls\n\n results = {}\n\n # recent changes\n changes_d = master.db.changes.getRecentChanges(40)\n\n def to_changes(chdicts):\n return defer.gatherResults([\n changes.Change.fromChdict(master, chdict)\n for chdict in chdicts])\n changes_d.addCallback(to_changes)\n\n def keep_changes(changes):\n results['changes'] = changes\n changes_d.addCallback(keep_changes)\n\n # build request counts for each builder\n allBuilderNames = status.getBuilderNames(categories=self.categories)\n brstatus_ds = []\n brcounts = {}\n\n def keep_count(statuses, builderName):\n brcounts[builderName] = len(statuses)\n for builderName in allBuilderNames:\n builder_status = status.getBuilder(builderName)\n d = builder_status.getPendingBuildRequestStatuses()\n d.addCallback(keep_count, builderName)\n brstatus_ds.append(d)\n\n # wait for it all to finish\n d = defer.gatherResults([changes_d] + brstatus_ds)\n\n def call_content(_):\n return self.content_with_db_data(results['changes'],\n brcounts, request, ctx)\n d.addCallback(call_content)\n return d\n\n def content_with_db_data(self, changes, brcounts, request, ctx):\n status = self.getStatus(request)\n ctx['refresh'] = self.get_reload_time(request)\n\n # we start with all Builders available to this Waterfall: this is\n # limited by the config-file -time categories= argument, and defaults\n # to all defined Builders.\n allBuilderNames = status.getBuilderNames(categories=self.categories)\n builders = [status.getBuilder(name) for name in allBuilderNames]\n\n # but if the URL has one or more builder= arguments (or the old show=\n # argument, which is still accepted for backwards compatibility), we\n # use that set of builders instead. 
We still don't show anything\n # outside the config-file time set limited by categories=.\n showBuilders = request.args.get(\"show\", [])\n showBuilders.extend(request.args.get(\"builder\", []))\n if showBuilders:\n builders = [b for b in builders if b.name in showBuilders]\n\n # now, if the URL has one or category= arguments, use them as a\n # filter: only show those builders which belong to one of the given\n # categories.\n showCategories = request.args.get(\"category\", [])\n if showCategories:\n builders = [b for b in builders if b.category in showCategories]\n\n # If the URL has the failures_only=true argument, we remove all the\n # builders that are not currently red or won't be turning red at the end\n # of their current run.\n failuresOnly = request.args.get(\"failures_only\", [\"false\"])[0]\n if failuresOnly.lower() == \"true\":\n builders = [b for b in builders if not self.isSuccess(b)]\n\n (changeNames, builderNames, timestamps, eventGrid, sourceEvents) = \\\n self.buildGrid(request, builders, changes)\n\n # start the table: top-header material\n locale_enc = locale.getdefaultlocale()[1]\n if locale_enc is not None:\n locale_tz = unicode(time.tzname[time.localtime()[-1]], locale_enc)\n else:\n locale_tz = unicode(time.tzname[time.localtime()[-1]])\n ctx['tz'] = locale_tz\n ctx['changes_url'] = request.childLink(\"../changes\")\n\n bn = ctx['builders'] = []\n\n for name in builderNames:\n builder = status.getBuilder(name)\n top_box = ITopBox(builder).getBox(request)\n current_box = ICurrentBox(builder).getBox(status, brcounts)\n bn.append({'name': name,\n 'url': request.childLink(\"../builders/%s\" % urllib.quote(name, safe='')),\n 'top': top_box.text,\n 'top_class': top_box.class_,\n 'status': current_box.text,\n 'status_class': current_box.class_,\n })\n\n ctx.update(self.phase2(request, changeNames + builderNames, timestamps, eventGrid,\n sourceEvents))\n\n def with_args(req, remove_args=[], new_args=[], new_path=None):\n # sigh, nevow makes this sort of manipulation easier\n newargs = req.args.copy()\n for argname in remove_args:\n newargs[argname] = []\n if \"branch\" in newargs:\n newargs[\"branch\"] = [b for b in newargs[\"branch\"] if b]\n for k, v in new_args:\n if k in newargs:\n newargs[k].append(v)\n else:\n newargs[k] = [v]\n newquery = \"&\".join([\"%s=%s\" % (urllib.quote(k), urllib.quote(v))\n for k in newargs\n for v in newargs[k]\n ])\n if new_path:\n new_url = new_path\n elif req.prepath:\n new_url = req.prepath[-1]\n else:\n new_url = ''\n if newquery:\n new_url += \"?\" + newquery\n return new_url\n\n if timestamps:\n bottom = timestamps[-1]\n ctx['nextpage'] = with_args(request, [\"last_time\"],\n [(\"last_time\", str(int(bottom)))])\n\n helpurl = path_to_root(request) + \"waterfall/help\"\n ctx['help_url'] = with_args(request, new_path=helpurl)\n\n if self.get_reload_time(request) is not None:\n ctx['no_reload_page'] = with_args(request, remove_args=[\"reload\"])\n\n template = request.site.buildbot_service.templates.get_template(\"waterfall.html\")\n data = template.render(**ctx)\n return data\n\n def buildGrid(self, request, builders, changes):\n debug = False\n # TODO: see if we can use a cached copy\n\n showEvents = False\n if request.args.get(\"show_events\", [\"false\"])[0].lower() == \"true\":\n showEvents = True\n filterCategories = request.args.get('category', [])\n filterBranches = [b for b in request.args.get(\"branch\", []) if b]\n filterBranches = map_branches(filterBranches)\n filterCommitters = [c for c in request.args.get(\"committer\", 
[]) if c]\n filterProjects = [p for p in request.args.get(\"project\", []) if p]\n maxTime = int(request.args.get(\"last_time\", [util.now()])[0])\n if \"show_time\" in request.args:\n minTime = maxTime - int(request.args[\"show_time\"][0])\n elif \"first_time\" in request.args:\n minTime = int(request.args[\"first_time\"][0])\n elif filterBranches or filterCommitters:\n minTime = util.now() - 24 * 60 * 60\n else:\n minTime = 0\n spanLength = 10 # ten-second chunks\n req_events = int(request.args.get(\"num_events\", [self.num_events])[0])\n if self.num_events_max and req_events > self.num_events_max:\n maxPageLen = self.num_events_max\n else:\n maxPageLen = req_events\n\n # first step is to walk backwards in time, asking each column\n # (commit, all builders) if they have any events there. Build up the\n # array of events, and stop when we have a reasonable number.\n\n commit_source = ChangeEventSource(changes)\n\n lastEventTime = util.now()\n sources = [commit_source] + builders\n changeNames = [\"changes\"]\n builderNames = map(lambda builder: builder.getName(), builders)\n sourceNames = changeNames + builderNames\n sourceEvents = []\n sourceGenerators = []\n\n def get_event_from(g):\n try:\n while True:\n e = g.next()\n # e might be buildstep.BuildStepStatus,\n # builder.BuildStatus, builder.Event,\n # waterfall.Spacer(builder.Event), or changes.Change .\n # The showEvents=False flag means we should hide\n # builder.Event .\n if not showEvents and isinstance(e, builder.Event):\n continue\n\n if isinstance(e, buildstep.BuildStepStatus):\n # unfinished steps are always shown\n if e.isFinished() and e.isHidden():\n continue\n\n break\n event = interfaces.IStatusEvent(e)\n if debug:\n log.msg(\"gen %s gave1 %s\" % (g, event.getText()))\n except StopIteration:\n event = None\n return event\n\n for s in sources:\n gen = insertGaps(s.eventGenerator(filterBranches,\n filterCategories,\n filterCommitters,\n filterProjects,\n minTime),\n showEvents,\n lastEventTime)\n sourceGenerators.append(gen)\n # get the first event\n sourceEvents.append(get_event_from(gen))\n eventGrid = []\n timestamps = []\n\n lastEventTime = 0\n for e in sourceEvents:\n if e and e.getTimes()[0] > lastEventTime:\n lastEventTime = e.getTimes()[0]\n if lastEventTime == 0:\n lastEventTime = util.now()\n\n spanStart = lastEventTime - spanLength\n debugGather = 0\n\n while True:\n if debugGather:\n log.msg(\"checking (%s,]\" % spanStart)\n # the tableau of potential events is in sourceEvents[]. The\n # window crawls backwards, and we examine one source at a time.\n # If the source's top-most event is in the window, is it pushed\n # onto the events[] array and the tableau is refilled. This\n # continues until the tableau event is not in the window (or is\n # missing).\n\n spanEvents = [] # for all sources, in this span. row of eventGrid\n firstTimestamp = None # timestamp of first event in the span\n lastTimestamp = None # last pre-span event, for next span\n\n for c in range(len(sourceGenerators)):\n events = [] # for this source, in this span. 
cell of eventGrid\n event = sourceEvents[c]\n while event and spanStart < event.getTimes()[0]:\n # to look at windows that don't end with the present,\n # condition the .append on event.time <= spanFinish\n if not IBox(event, None):\n log.msg(\"BAD EVENT\", event, event.getText())\n assert 0\n if debug:\n log.msg(\"pushing\", event.getText(), event)\n events.append(event)\n starts, finishes = event.getTimes()\n firstTimestamp = earlier(firstTimestamp, starts)\n event = get_event_from(sourceGenerators[c])\n if debug:\n log.msg(\"finished span\")\n\n if event:\n # this is the last pre-span event for this source\n lastTimestamp = later(lastTimestamp,\n event.getTimes()[0])\n if debugGather:\n log.msg(\" got %s from %s\" % (events, sourceNames[c]))\n sourceEvents[c] = event # refill the tableau\n spanEvents.append(events)\n\n # only show events older than maxTime. This makes it possible to\n # visit a page that shows what it would be like to scroll off the\n # bottom of this one.\n if firstTimestamp is not None and firstTimestamp <= maxTime:\n eventGrid.append(spanEvents)\n timestamps.append(firstTimestamp)\n\n if lastTimestamp:\n spanStart = lastTimestamp - spanLength\n else:\n # no more events\n break\n if minTime is not None and lastTimestamp < minTime:\n break\n\n if len(timestamps) > maxPageLen:\n break\n\n # now loop\n # loop is finished. now we have eventGrid[] and timestamps[]\n if debugGather:\n log.msg(\"finished loop\")\n assert(len(timestamps) == len(eventGrid))\n return (changeNames, builderNames, timestamps, eventGrid, sourceEvents)\n\n def phase2(self, request, sourceNames, timestamps, eventGrid,\n sourceEvents):\n\n if not timestamps:\n return dict(grid=[], gridlen=0)\n\n # first pass: figure out the height of the chunks, populate grid\n grid = []\n for i in range(1 + len(sourceNames)):\n grid.append([])\n # grid is a list of columns, one for the timestamps, and one per\n # event source. Each column is exactly the same height. Each element\n # of the list is a single <td> box.\n lastDate = time.strftime(\"%d %b %Y\",\n time.localtime(util.now()))\n for r in range(0, len(timestamps)):\n chunkstrip = eventGrid[r]\n # chunkstrip is a horizontal strip of event blocks. 
Each block\n # is a vertical list of events, all for the same source.\n assert(len(chunkstrip) == len(sourceNames))\n maxRows = reduce(lambda x, y: max(x, y),\n map(lambda x: len(x), chunkstrip))\n for i in range(maxRows):\n if i != maxRows - 1:\n grid[0].append(None)\n else:\n # timestamp goes at the bottom of the chunk\n stuff = []\n # add the date at the beginning (if it is not the same as\n # today's date), and each time it changes\n todayday = time.strftime(\"%a\",\n time.localtime(timestamps[r]))\n today = time.strftime(\"%d %b %Y\",\n time.localtime(timestamps[r]))\n if today != lastDate:\n stuff.append(todayday)\n stuff.append(today)\n lastDate = today\n stuff.append(\n time.strftime(\"%H:%M:%S\",\n time.localtime(timestamps[r])))\n grid[0].append(Box(text=stuff, class_=\"Time\",\n valign=\"bottom\", align=\"center\"))\n\n # at this point the timestamp column has been populated with\n # maxRows boxes, most None but the last one has the time string\n for c in range(0, len(chunkstrip)):\n block = chunkstrip[c]\n assert(block is not None) # should be [] instead\n for i in range(maxRows - len(block)):\n # fill top of chunk with blank space\n grid[c + 1].append(None)\n for i in range(len(block)):\n # so the events are bottom-justified\n b = IBox(block[i]).getBox(request)\n b.parms['valign'] = \"top\"\n b.parms['align'] = \"center\"\n grid[c + 1].append(b)\n # now all the other columns have maxRows new boxes too\n # populate the last row, if empty\n gridlen = len(grid[0])\n for i in range(len(grid)):\n strip = grid[i]\n assert(len(strip) == gridlen)\n if strip[-1] is None:\n if sourceEvents[i - 1]:\n filler = IBox(sourceEvents[i - 1]).getBox(request)\n else:\n # this can happen if you delete part of the build history\n filler = Box(text=[\"?\"], align=\"center\")\n strip[-1] = filler\n strip[-1].parms['rowspan'] = 1\n # second pass: bubble the events upwards to un-occupied locations\n # Every square of the grid that has a None in it needs to have\n # something else take its place.\n noBubble = request.args.get(\"nobubble\", ['0'])\n noBubble = int(noBubble[0])\n if not noBubble:\n for col in range(len(grid)):\n strip = grid[col]\n if col == 1: # changes are handled differently\n for i in range(2, len(strip) + 1):\n # only merge empty boxes. Don't bubble commit boxes.\n if strip[-i] is None:\n next = strip[-i + 1]\n assert(next)\n if next:\n # if not next.event:\n if next.spacer:\n # bubble the empty box up\n strip[-i] = next\n strip[-i].parms['rowspan'] += 1\n strip[-i + 1] = None\n else:\n # we are above a commit box. Leave it\n # be, and turn the current box into an\n # empty one\n strip[-i] = Box([], rowspan=1,\n comment=\"commit bubble\")\n strip[-i].spacer = True\n else:\n # we are above another empty box, which\n # somehow wasn't already converted.\n # Shouldn't happen\n pass\n else:\n for i in range(2, len(strip) + 1):\n # strip[-i] will go from next-to-last back to first\n if strip[-i] is None:\n # bubble previous item up\n assert(strip[-i + 1] is not None)\n strip[-i] = strip[-i + 1]\n strip[-i].parms['rowspan'] += 1\n strip[-i + 1] = None\n else:\n strip[-i].parms['rowspan'] = 1\n\n # convert to dicts\n for i in range(gridlen):\n for strip in grid:\n if strip[i]:\n strip[i] = strip[i].td()\n\n return dict(grid=grid, gridlen=gridlen, no_bubble=noBubble)\n",
"path": "master/buildbot/status/web/waterfall.py"
}
] | diff --git a/master/buildbot/status/web/waterfall.py b/master/buildbot/status/web/waterfall.py
index 698a2ebf5f2a..6ffd5fd4e1bf 100644
--- a/master/buildbot/status/web/waterfall.py
+++ b/master/buildbot/status/web/waterfall.py
@@ -854,4 +854,4 @@ def phase2(self, request, sourceNames, timestamps, eventGrid,
if strip[i]:
strip[i] = strip[i].td()
- return dict(grid=grid, gridlen=gridlen, no_bubble=noBubble, time=lastDate)
+ return dict(grid=grid, gridlen=gridlen, no_bubble=noBubble)
|
microsoft__botbuilder-python-1451 | dependency conflict between botframework 4.11.0 and azure-identity 1.5.0
## Version
4.11 (also happening with 4.10)
## Describe the bug
`botframework-connector == 4.11.0` (current) requires `msal == 1.2.0`
`azure-identity == 1.5.0` (current) requires `msal >=1.6.0,<2.0.0`
This created a dependency conflict where bot libraries can't coexist in the same program. This used to work a couple of months ago (I bumped into this issue after revisiting some code I had worked on before).
## To Reproduce
This is my `requirements.txt` file; just add it and run `pipenv install -r requirements.txt` (versions pinned as shown below):
```
botbuilder-core == 4.11
azure-keyvault-secrets
azure-identity == 1.5
botbuilder-ai == 4.11
```
## Expected behavior
Packages should install without conflict
## Screenshots
Extract from the error message `pipenv install` shows:
```
[pipenv.exceptions.ResolutionFailure]: Warning: Your dependencies could not be resolved. You likely have a mismatch in your sub-dependencies.
First try clearing your dependency cache with $ pipenv lock --clear, then try the original command again.
Alternatively, you can use $ pipenv install --skip-lock to bypass this mechanism, then run $ pipenv graph to inspect the situation.
Hint: try $ pipenv lock --pre if it is a pre-release dependency.
ERROR: ERROR: Could not find a version that matches msal<2.0.0,==1.2.0,>=0.4.1,>=1.6.0
Tried: 0.1.0, 0.1.0, 0.2.0, 0.2.0, 0.3.0, 0.3.0, 0.3.1, 0.3.1, 0.4.0, 0.4.0, 0.4.1, 0.4.1, 0.5.0, 0.5.0, 0.5.1, 0.5.1, 0.6.0, 0.6.0, 0.6.1, 0.6.1, 0.7.0, 0.7.0, 0.8.0, 0.8.0, 0.8.0, 0.9.0, 0.9.0, 1.0.0, 1.0.0, 1.1.0, 1.1.0, 1.2.0, 1.2.0, 1.3.0, 1.3.0, 1.4.0, 1.4.0, 1.4.1, 1.4.1, 1.4.2, 1.4.2, 1.4.3, 1.4.3, 1.5.0, 1.5.0, 1.5.1, 1.5.1, 1.6.0, 1.6.0, 1.7.0, 1.7.0, 1.8.0, 1.8.0
There are incompatible versions in the resolved dependencies.
```
Relevant extract from the output of `pipenv graph` as per the suggestion above:
```
azure-identity==1.5.0
- msal [required: >=1.6.0,<2.0.0, installed: 1.2.0]
- msal-extensions [required: ~=0.3.0, installed: 0.3.0]
- msal [required: >=0.4.1,<2.0.0, installed: 1.2.0]
azure-keyvault-secrets==4.2.0
botbuilder-ai==4.11.0
- botbuilder-core [required: ==4.11.0, installed: 4.11.0]
- botframework-connector [required: ==4.11.0, installed: 4.11.0]
- msal [required: ==1.2.0, installed: 1.2.0]
```
## Additional context
This issue was also reported in [botbuilder-samples repo's issue 2978](https://github.com/microsoft/BotBuilder-Samples/issues/2978)
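For illustration, a minimal sketch (assuming the `packaging` library is available) of why the merged specifier from the resolver output can never be satisfied:
```python
# Sketch only: the pipenv error effectively merges msal <2.0.0, ==1.2.0 and >=1.6.0,
# and no release can satisfy ==1.2.0 and >=1.6.0 at the same time.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

combined = SpecifierSet("<2.0.0,==1.2.0,>=1.6.0")
candidates = [Version(v) for v in ("1.2.0", "1.6.0", "1.8.0")]
print([str(v) for v in candidates if v in combined])  # [] -> unsatisfiable
```
Relaxing the `botframework-connector` pin (the fix below bumps its `msal` requirement to 1.6.0) makes the intersection with `azure-identity`'s `>=1.6.0,<2.0.0` non-empty.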
| [
{
"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.12.0\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests==2.23.0\",\n \"cryptography==3.2\",\n \"PyJWT==1.5.3\",\n \"botbuilder-schema==4.12.0\",\n \"adal==1.2.1\",\n \"msal==1.2.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n",
"path": "libraries/botframework-connector/setup.py"
}
] | [
{
"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\nimport os\nfrom setuptools import setup\n\nNAME = \"botframework-connector\"\nVERSION = os.environ[\"packageVersion\"] if \"packageVersion\" in os.environ else \"4.12.0\"\nREQUIRES = [\n \"msrest==0.6.10\",\n \"requests==2.23.0\",\n \"cryptography==3.2\",\n \"PyJWT==1.5.3\",\n \"botbuilder-schema==4.12.0\",\n \"adal==1.2.1\",\n \"msal==1.6.0\",\n]\n\nroot = os.path.abspath(os.path.dirname(__file__))\n\nwith open(os.path.join(root, \"README.rst\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nsetup(\n name=NAME,\n version=VERSION,\n description=\"Microsoft Bot Framework Bot Builder SDK for Python.\",\n author=\"Microsoft\",\n url=\"https://www.github.com/Microsoft/botbuilder-python\",\n keywords=[\"BotFrameworkConnector\", \"bots\", \"ai\", \"botframework\", \"botbuilder\"],\n install_requires=REQUIRES,\n packages=[\n \"botframework.connector\",\n \"botframework.connector.auth\",\n \"botframework.connector.async_mixin\",\n \"botframework.connector.operations\",\n \"botframework.connector.models\",\n \"botframework.connector.aio\",\n \"botframework.connector.aio.operations_async\",\n \"botframework.connector.teams\",\n \"botframework.connector.teams.operations\",\n \"botframework.connector.token_api\",\n \"botframework.connector.token_api.aio\",\n \"botframework.connector.token_api.models\",\n \"botframework.connector.token_api.operations\",\n ],\n include_package_data=True,\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=\"MIT\",\n classifiers=[\n \"Programming Language :: Python :: 3.7\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n)\n",
"path": "libraries/botframework-connector/setup.py"
}
] | diff --git a/libraries/botframework-connector/setup.py b/libraries/botframework-connector/setup.py
index 04bf09257..09a82d646 100644
--- a/libraries/botframework-connector/setup.py
+++ b/libraries/botframework-connector/setup.py
@@ -12,7 +12,7 @@
"PyJWT==1.5.3",
"botbuilder-schema==4.12.0",
"adal==1.2.1",
- "msal==1.2.0",
+ "msal==1.6.0",
]
root = os.path.abspath(os.path.dirname(__file__))
|
pypi__warehouse-3598 | Set samesite=lax on session cookies
This is a strong defense-in-depth mechanism for protecting against CSRF. It's currently only respected by Chrome, but Firefox will add it as well.
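For illustration, a minimal sketch of what opting the session cookie into SameSite could look like on a Pyramid response; this assumes a WebOb release (1.8+) whose `set_cookie` accepts the `samesite` keyword, and is not Warehouse's actual code:
```python
# Hedged sketch: older WebOb releases (e.g. 1.7.x) do not accept the samesite
# keyword at all, so this only works once the underlying WebOb supports it.
from pyramid.response import Response

response = Response()
response.set_cookie(
    "session_id",
    "signed-session-id",   # placeholder; normally the signed session id
    max_age=12 * 60 * 60,
    httponly=True,
    secure=True,
    samesite="lax",        # WebOb validates the value; only honored on 1.8+
)
```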
| [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport functools\nimport time\n\nimport msgpack\nimport msgpack.exceptions\nimport redis\n\nfrom pyramid import viewderivers\nfrom pyramid.interfaces import ISession, ISessionFactory\nfrom zope.interface import implementer\n\nfrom warehouse.cache.http import add_vary\nfrom warehouse.utils import crypto\n\n\ndef _invalid_method(method):\n @functools.wraps(method)\n def wrapped(self, *args, **kwargs):\n self._error_message()\n return wrapped\n\n\n@implementer(ISession)\nclass InvalidSession(dict):\n\n __contains__ = _invalid_method(dict.__contains__)\n __delitem__ = _invalid_method(dict.__delitem__)\n __getitem__ = _invalid_method(dict.__getitem__)\n __iter__ = _invalid_method(dict.__iter__)\n __len__ = _invalid_method(dict.__len__)\n __setitem__ = _invalid_method(dict.__setitem__)\n clear = _invalid_method(dict.clear)\n copy = _invalid_method(dict.copy)\n fromkeys = _invalid_method(dict.fromkeys)\n get = _invalid_method(dict.get)\n items = _invalid_method(dict.items)\n keys = _invalid_method(dict.keys)\n pop = _invalid_method(dict.pop)\n popitem = _invalid_method(dict.popitem)\n setdefault = _invalid_method(dict.setdefault)\n update = _invalid_method(dict.update)\n values = _invalid_method(dict.values)\n\n def _error_message(self):\n raise RuntimeError(\n \"Cannot use request.session in a view without uses_session=True.\"\n )\n\n def __getattr__(self, name):\n self._error_message()\n\n @property\n def created(self):\n self._error_message()\n\n\ndef _changed_method(method):\n @functools.wraps(method)\n def wrapped(self, *args, **kwargs):\n self.changed()\n return method(self, *args, **kwargs)\n return wrapped\n\n\n@implementer(ISession)\nclass Session(dict):\n\n _csrf_token_key = \"_csrf_token\"\n _flash_key = \"_flash_messages\"\n\n # A number of our methods need to be decorated so that they also call\n # self.changed()\n __delitem__ = _changed_method(dict.__delitem__)\n __setitem__ = _changed_method(dict.__setitem__)\n clear = _changed_method(dict.clear)\n pop = _changed_method(dict.pop)\n popitem = _changed_method(dict.popitem)\n setdefault = _changed_method(dict.setdefault)\n update = _changed_method(dict.update)\n\n def __init__(self, data=None, session_id=None, new=True):\n # Brand new sessions don't have any data, so we'll just create an empty\n # dictionary for them.\n if data is None:\n data = {}\n\n # Initialize our actual dictionary here.\n super().__init__(data)\n\n # We need to track the state of our Session.\n self._sid = session_id\n self._changed = False\n self.new = new\n self.created = int(time.time())\n\n # We'll track all of the IDs that have been invalidated here\n self.invalidated = set()\n\n @property\n def sid(self):\n if self._sid is None:\n self._sid = crypto.random_token()\n return self._sid\n\n def changed(self):\n self._changed = True\n\n def invalidate(self):\n self.clear()\n self.new = True\n self.created = int(time.time())\n self._changed = False\n\n # If the current session id isn't None we'll want to 
record it as one\n # of the ones that have been invalidated.\n if self._sid is not None:\n self.invalidated.add(self._sid)\n self._sid = None\n\n def should_save(self):\n return self._changed\n\n # Flash Messages Methods\n def _get_flash_queue_key(self, queue):\n return \".\".join(filter(None, [self._flash_key, queue]))\n\n def flash(self, msg, queue=\"\", allow_duplicate=True):\n queue_key = self._get_flash_queue_key(queue)\n\n # If we're not allowing duplicates check if this message is already\n # in the queue, and if it is just return immediately.\n if not allow_duplicate and msg in self[queue_key]:\n return\n\n self.setdefault(queue_key, []).append(msg)\n\n def peek_flash(self, queue=\"\"):\n return self.get(self._get_flash_queue_key(queue), [])\n\n def pop_flash(self, queue=\"\"):\n queue_key = self._get_flash_queue_key(queue)\n messages = self.get(queue_key, [])\n self.pop(queue_key, None)\n return messages\n\n # CSRF Methods\n def new_csrf_token(self):\n self[self._csrf_token_key] = crypto.random_token()\n return self[self._csrf_token_key]\n\n def get_csrf_token(self):\n token = self.get(self._csrf_token_key)\n if token is None:\n token = self.new_csrf_token()\n return token\n\n\n@implementer(ISessionFactory)\nclass SessionFactory:\n\n cookie_name = \"session_id\"\n max_age = 12 * 60 * 60 # 12 hours\n\n def __init__(self, secret, url):\n self.redis = redis.StrictRedis.from_url(url)\n self.signer = crypto.TimestampSigner(secret, salt=\"session\")\n\n def __call__(self, request):\n return self._process_request(request)\n\n def _redis_key(self, session_id):\n return \"warehouse/session/data/{}\".format(session_id)\n\n def _process_request(self, request):\n # Register a callback with the request so we can save the session once\n # it's finished.\n request.add_response_callback(self._process_response)\n\n # Load our session ID from the request.\n session_id = request.cookies.get(self.cookie_name)\n\n # If we do not have a session ID then we'll just use a new empty\n # session.\n if session_id is None:\n return Session()\n\n # Check to make sure we have a valid session id\n try:\n session_id = self.signer.unsign(session_id, max_age=self.max_age)\n session_id = session_id.decode(\"utf8\")\n except crypto.BadSignature:\n return Session()\n\n # Fetch the serialized data from redis\n bdata = self.redis.get(self._redis_key(session_id))\n\n # If the session didn't exist in redis, we'll give the user a new\n # session.\n if bdata is None:\n return Session()\n\n # De-serialize our session data\n try:\n data = msgpack.unpackb(bdata, encoding=\"utf8\", use_list=True)\n except (msgpack.exceptions.UnpackException,\n msgpack.exceptions.ExtraData):\n # If the session data was invalid we'll give the user a new session\n return Session()\n\n # If we were able to load existing session data, load it into a\n # Session class\n session = Session(data, session_id, False)\n\n return session\n\n def _process_response(self, request, response):\n # If the request has an InvalidSession, then the view can't have\n # accessed the session, and we can just skip all of this anyways.\n if isinstance(request.session, InvalidSession):\n return\n\n # Check to see if the session has been marked to be deleted, if it has\n # benn then we'll delete it, and tell our response to delete the\n # session cookie as well.\n if request.session.invalidated:\n for session_id in request.session.invalidated:\n self.redis.delete(self._redis_key(session_id))\n\n if not request.session.should_save():\n 
response.delete_cookie(self.cookie_name)\n\n # Check to see if the session has been marked to be saved, generally\n # this means that the session data has been modified and thus we need\n # to store the new data.\n if request.session.should_save():\n # Save our session in Redis\n self.redis.setex(\n self._redis_key(request.session.sid),\n self.max_age,\n msgpack.packb(\n request.session,\n encoding=\"utf8\",\n use_bin_type=True,\n ),\n )\n\n # Send our session cookie to the client\n response.set_cookie(\n self.cookie_name,\n self.signer.sign(request.session.sid.encode(\"utf8\")),\n max_age=self.max_age,\n httponly=True,\n secure=request.scheme == \"https\",\n samesite=b\"lax\"\n )\n\n\ndef session_view(view, info):\n if info.options.get(\"uses_session\"):\n # If we're using the session, then we'll just return the original view\n # with a small wrapper around it to ensure that it has a Vary: Cookie\n # header.\n return add_vary(\"Cookie\")(view)\n elif info.exception_only:\n return view\n else:\n # If we're not using the session on this view, then we'll wrap the view\n # with a wrapper that just ensures that the session cannot be used.\n @functools.wraps(view)\n def wrapped(context, request):\n # This whole method is a little bit of an odd duck, we want to make\n # sure that we don't actually *access* request.session, because\n # doing so triggers the machinery to create a new session. So\n # instead we will dig into the request object __dict__ to\n # effectively do the same thing, jsut without triggering an access\n # on request.session.\n\n # Save the original session so that we can restore it once the\n # inner views have been called.\n nothing = object()\n original_session = request.__dict__.get(\"session\", nothing)\n\n # This particular view hasn't been set to allow access to the\n # session, so we'll just assign an InvalidSession to\n # request.session\n request.__dict__[\"session\"] = InvalidSession()\n\n try:\n # Invoke the real view\n return view(context, request)\n finally:\n # Restore the original session so that things like\n # pyramid_debugtoolbar can access it.\n if original_session is nothing:\n del request.__dict__[\"session\"]\n else:\n request.__dict__[\"session\"] = original_session\n\n return wrapped\n\n\nsession_view.options = {\"uses_session\"}\n\n\ndef includeme(config):\n config.set_session_factory(\n SessionFactory(\n config.registry.settings[\"sessions.secret\"],\n config.registry.settings[\"sessions.url\"],\n ),\n )\n\n config.add_view_deriver(\n session_view,\n over=\"csrf_view\",\n under=viewderivers.INGRESS,\n )\n",
"path": "warehouse/sessions.py"
}
] | [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport functools\nimport time\n\nimport msgpack\nimport msgpack.exceptions\nimport redis\n\nfrom pyramid import viewderivers\nfrom pyramid.interfaces import ISession, ISessionFactory\nfrom zope.interface import implementer\n\nfrom warehouse.cache.http import add_vary\nfrom warehouse.utils import crypto\n\n\ndef _invalid_method(method):\n @functools.wraps(method)\n def wrapped(self, *args, **kwargs):\n self._error_message()\n return wrapped\n\n\n@implementer(ISession)\nclass InvalidSession(dict):\n\n __contains__ = _invalid_method(dict.__contains__)\n __delitem__ = _invalid_method(dict.__delitem__)\n __getitem__ = _invalid_method(dict.__getitem__)\n __iter__ = _invalid_method(dict.__iter__)\n __len__ = _invalid_method(dict.__len__)\n __setitem__ = _invalid_method(dict.__setitem__)\n clear = _invalid_method(dict.clear)\n copy = _invalid_method(dict.copy)\n fromkeys = _invalid_method(dict.fromkeys)\n get = _invalid_method(dict.get)\n items = _invalid_method(dict.items)\n keys = _invalid_method(dict.keys)\n pop = _invalid_method(dict.pop)\n popitem = _invalid_method(dict.popitem)\n setdefault = _invalid_method(dict.setdefault)\n update = _invalid_method(dict.update)\n values = _invalid_method(dict.values)\n\n def _error_message(self):\n raise RuntimeError(\n \"Cannot use request.session in a view without uses_session=True.\"\n )\n\n def __getattr__(self, name):\n self._error_message()\n\n @property\n def created(self):\n self._error_message()\n\n\ndef _changed_method(method):\n @functools.wraps(method)\n def wrapped(self, *args, **kwargs):\n self.changed()\n return method(self, *args, **kwargs)\n return wrapped\n\n\n@implementer(ISession)\nclass Session(dict):\n\n _csrf_token_key = \"_csrf_token\"\n _flash_key = \"_flash_messages\"\n\n # A number of our methods need to be decorated so that they also call\n # self.changed()\n __delitem__ = _changed_method(dict.__delitem__)\n __setitem__ = _changed_method(dict.__setitem__)\n clear = _changed_method(dict.clear)\n pop = _changed_method(dict.pop)\n popitem = _changed_method(dict.popitem)\n setdefault = _changed_method(dict.setdefault)\n update = _changed_method(dict.update)\n\n def __init__(self, data=None, session_id=None, new=True):\n # Brand new sessions don't have any data, so we'll just create an empty\n # dictionary for them.\n if data is None:\n data = {}\n\n # Initialize our actual dictionary here.\n super().__init__(data)\n\n # We need to track the state of our Session.\n self._sid = session_id\n self._changed = False\n self.new = new\n self.created = int(time.time())\n\n # We'll track all of the IDs that have been invalidated here\n self.invalidated = set()\n\n @property\n def sid(self):\n if self._sid is None:\n self._sid = crypto.random_token()\n return self._sid\n\n def changed(self):\n self._changed = True\n\n def invalidate(self):\n self.clear()\n self.new = True\n self.created = int(time.time())\n self._changed = False\n\n # If the current session id isn't None we'll want to 
record it as one\n # of the ones that have been invalidated.\n if self._sid is not None:\n self.invalidated.add(self._sid)\n self._sid = None\n\n def should_save(self):\n return self._changed\n\n # Flash Messages Methods\n def _get_flash_queue_key(self, queue):\n return \".\".join(filter(None, [self._flash_key, queue]))\n\n def flash(self, msg, queue=\"\", allow_duplicate=True):\n queue_key = self._get_flash_queue_key(queue)\n\n # If we're not allowing duplicates check if this message is already\n # in the queue, and if it is just return immediately.\n if not allow_duplicate and msg in self[queue_key]:\n return\n\n self.setdefault(queue_key, []).append(msg)\n\n def peek_flash(self, queue=\"\"):\n return self.get(self._get_flash_queue_key(queue), [])\n\n def pop_flash(self, queue=\"\"):\n queue_key = self._get_flash_queue_key(queue)\n messages = self.get(queue_key, [])\n self.pop(queue_key, None)\n return messages\n\n # CSRF Methods\n def new_csrf_token(self):\n self[self._csrf_token_key] = crypto.random_token()\n return self[self._csrf_token_key]\n\n def get_csrf_token(self):\n token = self.get(self._csrf_token_key)\n if token is None:\n token = self.new_csrf_token()\n return token\n\n\n@implementer(ISessionFactory)\nclass SessionFactory:\n\n cookie_name = \"session_id\"\n max_age = 12 * 60 * 60 # 12 hours\n\n def __init__(self, secret, url):\n self.redis = redis.StrictRedis.from_url(url)\n self.signer = crypto.TimestampSigner(secret, salt=\"session\")\n\n def __call__(self, request):\n return self._process_request(request)\n\n def _redis_key(self, session_id):\n return \"warehouse/session/data/{}\".format(session_id)\n\n def _process_request(self, request):\n # Register a callback with the request so we can save the session once\n # it's finished.\n request.add_response_callback(self._process_response)\n\n # Load our session ID from the request.\n session_id = request.cookies.get(self.cookie_name)\n\n # If we do not have a session ID then we'll just use a new empty\n # session.\n if session_id is None:\n return Session()\n\n # Check to make sure we have a valid session id\n try:\n session_id = self.signer.unsign(session_id, max_age=self.max_age)\n session_id = session_id.decode(\"utf8\")\n except crypto.BadSignature:\n return Session()\n\n # Fetch the serialized data from redis\n bdata = self.redis.get(self._redis_key(session_id))\n\n # If the session didn't exist in redis, we'll give the user a new\n # session.\n if bdata is None:\n return Session()\n\n # De-serialize our session data\n try:\n data = msgpack.unpackb(bdata, encoding=\"utf8\", use_list=True)\n except (msgpack.exceptions.UnpackException,\n msgpack.exceptions.ExtraData):\n # If the session data was invalid we'll give the user a new session\n return Session()\n\n # If we were able to load existing session data, load it into a\n # Session class\n session = Session(data, session_id, False)\n\n return session\n\n def _process_response(self, request, response):\n # If the request has an InvalidSession, then the view can't have\n # accessed the session, and we can just skip all of this anyways.\n if isinstance(request.session, InvalidSession):\n return\n\n # Check to see if the session has been marked to be deleted, if it has\n # benn then we'll delete it, and tell our response to delete the\n # session cookie as well.\n if request.session.invalidated:\n for session_id in request.session.invalidated:\n self.redis.delete(self._redis_key(session_id))\n\n if not request.session.should_save():\n 
response.delete_cookie(self.cookie_name)\n\n # Check to see if the session has been marked to be saved, generally\n # this means that the session data has been modified and thus we need\n # to store the new data.\n if request.session.should_save():\n # Save our session in Redis\n self.redis.setex(\n self._redis_key(request.session.sid),\n self.max_age,\n msgpack.packb(\n request.session,\n encoding=\"utf8\",\n use_bin_type=True,\n ),\n )\n\n # Send our session cookie to the client\n response.set_cookie(\n self.cookie_name,\n self.signer.sign(request.session.sid.encode(\"utf8\")),\n max_age=self.max_age,\n httponly=True,\n secure=request.scheme == \"https\",\n )\n\n\ndef session_view(view, info):\n if info.options.get(\"uses_session\"):\n # If we're using the session, then we'll just return the original view\n # with a small wrapper around it to ensure that it has a Vary: Cookie\n # header.\n return add_vary(\"Cookie\")(view)\n elif info.exception_only:\n return view\n else:\n # If we're not using the session on this view, then we'll wrap the view\n # with a wrapper that just ensures that the session cannot be used.\n @functools.wraps(view)\n def wrapped(context, request):\n # This whole method is a little bit of an odd duck, we want to make\n # sure that we don't actually *access* request.session, because\n # doing so triggers the machinery to create a new session. So\n # instead we will dig into the request object __dict__ to\n # effectively do the same thing, jsut without triggering an access\n # on request.session.\n\n # Save the original session so that we can restore it once the\n # inner views have been called.\n nothing = object()\n original_session = request.__dict__.get(\"session\", nothing)\n\n # This particular view hasn't been set to allow access to the\n # session, so we'll just assign an InvalidSession to\n # request.session\n request.__dict__[\"session\"] = InvalidSession()\n\n try:\n # Invoke the real view\n return view(context, request)\n finally:\n # Restore the original session so that things like\n # pyramid_debugtoolbar can access it.\n if original_session is nothing:\n del request.__dict__[\"session\"]\n else:\n request.__dict__[\"session\"] = original_session\n\n return wrapped\n\n\nsession_view.options = {\"uses_session\"}\n\n\ndef includeme(config):\n config.set_session_factory(\n SessionFactory(\n config.registry.settings[\"sessions.secret\"],\n config.registry.settings[\"sessions.url\"],\n ),\n )\n\n config.add_view_deriver(\n session_view,\n over=\"csrf_view\",\n under=viewderivers.INGRESS,\n )\n",
"path": "warehouse/sessions.py"
}
] | diff --git a/requirements/main.txt b/requirements/main.txt
index 9e51b303408e..defd6b964f59 100644
--- a/requirements/main.txt
+++ b/requirements/main.txt
@@ -464,9 +464,9 @@ vine==1.1.4 \
webencodings==0.5.1 \
--hash=sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78 \
--hash=sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923
-WebOb==1.8.0 \
- --hash=sha256:ae809c05b667c3457a2937cdb4a7c7f07e90f26c651a340d37fdd1d5cf1fed27 \
- --hash=sha256:6fca7aa39bd2f6d2ff71f15a22223ff256c91f60b1ab52dac0ab38dc6ea9142f
+WebOb==1.7.4 \
+ --hash=sha256:63f4220492476c5c716b615baed7bf3d27040b3105014375787160dee0943115 \
+ --hash=sha256:8d10af182fda4b92193113ee1edeb687ab9dc44336b37d6804e413f0240d40d9
whitenoise==3.3.1 \
--hash=sha256:15f43b2e701821b95c9016cf469d29e2a546cb1c7dead584ba82c36f843995cf \
--hash=sha256:9d81515f2b5b27051910996e1e860b1332e354d9e7bcf30c98f21dcb6713e0dd
diff --git a/requirements/tests.txt b/requirements/tests.txt
index f8094f4a357c..e701d7ddbb66 100644
--- a/requirements/tests.txt
+++ b/requirements/tests.txt
@@ -224,9 +224,9 @@ urllib3==1.22 \
waitress==1.1.0 \
--hash=sha256:40b0f297a7f3af61fbfbdc67e59090c70dc150a1601c39ecc9f5f1d283fb931b \
--hash=sha256:d33cd3d62426c0f1b3cd84ee3d65779c7003aae3fc060dee60524d10a57f05a9
-WebOb==1.8.0 \
- --hash=sha256:ae809c05b667c3457a2937cdb4a7c7f07e90f26c651a340d37fdd1d5cf1fed27 \
- --hash=sha256:6fca7aa39bd2f6d2ff71f15a22223ff256c91f60b1ab52dac0ab38dc6ea9142f
+WebOb==1.7.4 \
+ --hash=sha256:63f4220492476c5c716b615baed7bf3d27040b3105014375787160dee0943115 \
+ --hash=sha256:8d10af182fda4b92193113ee1edeb687ab9dc44336b37d6804e413f0240d40d9
WebTest==2.0.29 \
--hash=sha256:9136514159a2e76a21751bf4ab5d3371e539c8ada8b950fcf68e307d9e584a07 \
--hash=sha256:dbbccc15ac2465066c95dc3a7de0d30cde3791e886ccbd7e91d5d2a2580c922d
diff --git a/tests/unit/test_sessions.py b/tests/unit/test_sessions.py
index 8bc57b3c27b0..0baee1c117b5 100644
--- a/tests/unit/test_sessions.py
+++ b/tests/unit/test_sessions.py
@@ -497,7 +497,7 @@ def test_invalidated_deletes_save_non_secure(self, monkeypatch,
)
response = pretend.stub(
set_cookie=pretend.call_recorder(
- lambda cookie, data, max_age, httponly, secure, samesite: None
+ lambda cookie, data, max_age, httponly, secure: None
)
)
session_factory._process_response(pyramid_request, response)
@@ -532,7 +532,6 @@ def test_invalidated_deletes_save_non_secure(self, monkeypatch,
max_age=12 * 60 * 60,
httponly=True,
secure=False,
- samesite=b"lax",
),
]
diff --git a/tests/unit/utils/test_compression.py b/tests/unit/utils/test_compression.py
index ca30c68f575f..8fba42cc342e 100644
--- a/tests/unit/utils/test_compression.py
+++ b/tests/unit/utils/test_compression.py
@@ -14,7 +14,7 @@
import pytest
from pyramid.response import Response
-from webob.acceptparse import AcceptEncodingValidHeader, AcceptEncodingNoHeader
+from webob.acceptparse import Accept, NoAccept
from webob.response import gzip_app_iter
from warehouse.utils.compression import _compressor as compressor
@@ -54,7 +54,7 @@ def test_bails_if_content_encoding(self):
],
)
def test_sets_vary(self, vary, expected):
- request = pretend.stub(accept_encoding=AcceptEncodingNoHeader())
+ request = pretend.stub(accept_encoding=NoAccept())
response = Response(body=b"foo")
response.vary = vary
@@ -66,9 +66,7 @@ def test_compresses_non_streaming(self):
decompressed_body = b"foofoofoofoofoofoofoofoofoofoofoofoofoofoo"
compressed_body = b"".join(list(gzip_app_iter([decompressed_body])))
- request = pretend.stub(
- accept_encoding=AcceptEncodingValidHeader("gzip")
- )
+ request = pretend.stub(accept_encoding=Accept("gzip"))
response = Response(body=decompressed_body)
response.md5_etag()
@@ -85,9 +83,7 @@ def test_compresses_streaming(self):
decompressed_body = b"foofoofoofoofoofoofoofoofoofoofoofoofoofoo"
compressed_body = b"".join(list(gzip_app_iter([decompressed_body])))
- request = pretend.stub(
- accept_encoding=AcceptEncodingValidHeader("gzip")
- )
+ request = pretend.stub(accept_encoding=Accept("gzip"))
response = Response(app_iter=iter([decompressed_body]))
compressor(request, response)
@@ -100,9 +96,7 @@ def test_compresses_streaming_with_etag(self):
decompressed_body = b"foofoofoofoofoofoofoofoofoofoofoofoofoofoo"
compressed_body = b"".join(list(gzip_app_iter([decompressed_body])))
- request = pretend.stub(
- accept_encoding=AcceptEncodingValidHeader("gzip")
- )
+ request = pretend.stub(accept_encoding=Accept("gzip"))
response = Response(app_iter=iter([decompressed_body]))
response.etag = "foo"
@@ -117,9 +111,7 @@ def test_buffers_small_streaming(self):
decompressed_body = b"foofoofoofoofoofoofoofoofoofoofoofoofoofoo"
compressed_body = b"".join(list(gzip_app_iter([decompressed_body])))
- request = pretend.stub(
- accept_encoding=AcceptEncodingValidHeader("gzip")
- )
+ request = pretend.stub(accept_encoding=Accept("gzip"))
response = Response(
app_iter=iter([decompressed_body]),
content_length=len(decompressed_body),
@@ -132,9 +124,7 @@ def test_buffers_small_streaming(self):
assert response.body == compressed_body
def test_doesnt_compress_too_small(self):
- request = pretend.stub(
- accept_encoding=AcceptEncodingValidHeader("gzip")
- )
+ request = pretend.stub(accept_encoding=Accept("gzip"))
response = Response(body=b"foo")
compressor(request, response)
diff --git a/warehouse/sessions.py b/warehouse/sessions.py
index 548f760c757a..a52318f0eb7c 100644
--- a/warehouse/sessions.py
+++ b/warehouse/sessions.py
@@ -263,7 +263,6 @@ def _process_response(self, request, response):
max_age=self.max_age,
httponly=True,
secure=request.scheme == "https",
- samesite=b"lax"
)
|
pypi__warehouse-3292 | Warehouse file order differs from legacy PyPI file list
Tonight, while load testing of pypi.org was ongoing, we saw some failures in automated systems that use `--require-hashes` with `pip install`, as ordering on the package file list page changed.
The specific package we saw break was `pandas` at version `0.12.0`. We had a single hash for `pandas-0.12.0.tar.gz`. A few of our hosts were served from the legacy PyPI service, which succeeded as normal. The Warehouse endpoint, however, failed, since `pandas-0.12.0.zip` now preceded `pandas-0.12.0.tar.gz` in the file list.
At the moment, you can see that https://pypi.org/simple/pandas/ and https://pypi.python.org/simple/pandas/ differ by searching for `pandas-0.12.0.tar.gz` and `pandas-0.12.0.zip` and comparing the position.
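For illustration, a toy sketch of the deterministic ordering being asked for: sort release files by parsed version first and then by filename, so the `.tar.gz`/`.zip` pair for 0.12.0 always appears in the same relative order (the filename parsing here is simplified and illustrative only):
```python
# Toy example: (parsed version, filename) as the sort key keeps
# pandas-0.12.0.tar.gz ahead of pandas-0.12.0.zip regardless of insertion order.
from packaging.version import parse

files = ["pandas-0.12.0.zip", "pandas-0.12.0.tar.gz", "pandas-0.13.0.tar.gz"]

def sort_key(filename):
    # naive version extraction, good enough for this illustration
    version = filename.rsplit("-", 1)[1].replace(".tar.gz", "").replace(".zip", "")
    return (parse(version), filename)

print(sorted(files, key=sort_key))
# ['pandas-0.12.0.tar.gz', 'pandas-0.12.0.zip', 'pandas-0.13.0.tar.gz']
```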
| [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom packaging.version import parse\nfrom pyramid.httpexceptions import HTTPMovedPermanently\nfrom pyramid.view import view_config\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import joinedload\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import JournalEntry, File, Project, Release\n\n\n@view_config(\n route_name=\"legacy.api.simple.index\",\n renderer=\"legacy/api/simple/index.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_index(request):\n # Get the latest serial number\n serial = request.db.query(func.max(JournalEntry.id)).scalar() or 0\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(serial)\n\n # Fetch the name and normalized name for all of our projects\n projects = (\n request.db.query(Project.name, Project.normalized_name)\n .order_by(Project.normalized_name)\n .all()\n )\n\n return {\"projects\": projects}\n\n\n@view_config(\n route_name=\"legacy.api.simple.detail\",\n renderer=\"legacy/api/simple/detail.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_detail(project, request):\n # TODO: Handle files which are not hosted on PyPI\n\n # Make sure that we're using the normalized version of the URL.\n if (project.normalized_name !=\n request.matchdict.get(\"name\", project.normalized_name)):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.normalized_name),\n )\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the files for this project.\n files = sorted(\n request.db.query(File)\n .options(joinedload(File.release))\n .filter(\n File.name == project.name,\n File.version.in_(\n request.db.query(Release)\n .filter(Release.project == project)\n .with_entities(Release.version)\n )\n )\n .all(),\n key=lambda f: (parse(f.version), f.packagetype)\n )\n\n return {\"project\": project, \"files\": files}\n",
"path": "warehouse/legacy/api/simple.py"
}
] | [
{
"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom packaging.version import parse\nfrom pyramid.httpexceptions import HTTPMovedPermanently\nfrom pyramid.view import view_config\nfrom sqlalchemy import func\nfrom sqlalchemy.orm import joinedload\n\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.models import JournalEntry, File, Project, Release\n\n\n@view_config(\n route_name=\"legacy.api.simple.index\",\n renderer=\"legacy/api/simple/index.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_index(request):\n # Get the latest serial number\n serial = request.db.query(func.max(JournalEntry.id)).scalar() or 0\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(serial)\n\n # Fetch the name and normalized name for all of our projects\n projects = (\n request.db.query(Project.name, Project.normalized_name)\n .order_by(Project.normalized_name)\n .all()\n )\n\n return {\"projects\": projects}\n\n\n@view_config(\n route_name=\"legacy.api.simple.detail\",\n renderer=\"legacy/api/simple/detail.html\",\n decorator=[\n cache_control(10 * 60), # 10 minutes\n origin_cache(\n 1 * 24 * 60 * 60, # 1 day\n stale_while_revalidate=5 * 60, # 5 minutes\n stale_if_error=1 * 24 * 60 * 60, # 1 day\n ),\n ],\n)\ndef simple_detail(project, request):\n # TODO: Handle files which are not hosted on PyPI\n\n # Make sure that we're using the normalized version of the URL.\n if (project.normalized_name !=\n request.matchdict.get(\"name\", project.normalized_name)):\n return HTTPMovedPermanently(\n request.current_route_path(name=project.normalized_name),\n )\n\n # Get the latest serial number for this project.\n request.response.headers[\"X-PyPI-Last-Serial\"] = str(project.last_serial)\n\n # Get all of the files for this project.\n files = sorted(\n request.db.query(File)\n .options(joinedload(File.release))\n .filter(\n File.name == project.name,\n File.version.in_(\n request.db.query(Release)\n .filter(Release.project == project)\n .with_entities(Release.version)\n )\n )\n .all(),\n key=lambda f: (parse(f.version), f.filename)\n )\n\n return {\"project\": project, \"files\": files}\n",
"path": "warehouse/legacy/api/simple.py"
}
] | diff --git a/tests/unit/legacy/api/test_simple.py b/tests/unit/legacy/api/test_simple.py
index 4a23369eeadb..004b99caa628 100644
--- a/tests/unit/legacy/api/test_simple.py
+++ b/tests/unit/legacy/api/test_simple.py
@@ -202,7 +202,7 @@ def test_with_files_with_version_multi_digit(self, db_request):
files = []
for files_release in \
- zip(egg_files, wheel_files, tar_files):
+ zip(egg_files, tar_files, wheel_files):
files += files_release
db_request.matchdict["name"] = project.normalized_name
@@ -212,9 +212,6 @@ def test_with_files_with_version_multi_digit(self, db_request):
# Make sure that we get any changes made since the JournalEntry was
# saved.
db_request.db.refresh(project)
- import pprint
- pprint.pprint(simple.simple_detail(project, db_request)['files'])
- pprint.pprint(files)
assert simple.simple_detail(project, db_request) == {
"project": project,
diff --git a/warehouse/legacy/api/simple.py b/warehouse/legacy/api/simple.py
index d26e2c2fe335..cea32ee4d6b2 100644
--- a/warehouse/legacy/api/simple.py
+++ b/warehouse/legacy/api/simple.py
@@ -87,7 +87,7 @@ def simple_detail(project, request):
)
)
.all(),
- key=lambda f: (parse(f.version), f.packagetype)
+ key=lambda f: (parse(f.version), f.filename)
)
return {"project": project, "files": files}
|
digitalfabrik__integreat-cms-169 | Change development environment from docker-compose to venv
- [ ] Remove the django docker container
- [ ] Install package and requirements in venv
- [ ] Keep database docker container and manage connection to django (see the settings sketch after this list)
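For the database item, a minimal sketch of the Django `DATABASES` setting once the CMS runs in a venv on the host and only Postgres stays in Docker; it assumes the container publishes port 5432 on localhost, and the credentials are the development placeholders used in this repository:
```python
# Illustrative only: HOST changes from the docker-compose service name to
# localhost because Django no longer runs inside the compose network.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "integreat",
        "USER": "integreat",
        "PASSWORD": "password",   # development-only placeholder
        "HOST": "localhost",      # dockerized Postgres published on the host
        "PORT": "5432",
    }
}
```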
| [
{
"content": "\"\"\"\nDjango settings for backend project.\n\nGenerated by 'django-admin startproject' using Django 1.11.11.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.11/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\n\"\"\"\n\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = [\n 'localhost',\n '127.0.0.1',\n '0.0.0.0'\n]\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'cms.apps.CmsConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.messages',\n 'django.contrib.sessions',\n 'django.contrib.staticfiles',\n 'widget_tweaks',\n 'easy_thumbnails',\n 'filer',\n 'drf_yasg',\n 'mptt',\n 'rest_framework',\n 'rules.apps.AutodiscoverRulesConfig',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'backend.urls'\nTHUMBNAIL_HIGH_RESOLUTION = True\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'backend.context_processors.site_slug_processor',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'backend.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'integreat',\n 'USER': 'integreat',\n 'PASSWORD': 'password',\n 'HOST': 'postgres',\n 'PORT': '5432',\n }\n}\n\n# Directory for initial database contents\n\nFIXTURE_DIRS = (\n os.path.join(BASE_DIR, 'cms/fixtures/'),\n)\n\n# Authentication backends\n\nAUTHENTICATION_BACKENDS = (\n 'rules.permissions.ObjectPermissionBackend',\n 'django.contrib.auth.backends.ModelBackend', # this is default\n)\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.11/topics/i18n/\n\nLANGUAGES = (\n ('en-us', 'English'),\n ('de-de', 'Deutsch'),\n)\n\nLOCALE_PATHS = (\n 
os.path.join(BASE_DIR, 'locale'),\n)\n\nLANGUAGE_CODE = 'de-de'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\n\nSTATIC_URL = '/static/'\n\n\n# Login\nLOGIN_URL = '/login'\nLOGIN_REDIRECT_URL = '/'\nLOGOUT_REDIRECT_URL = '/login'\n\n# API FRAMEWORK\nREST_FRAMEWORK = {\n # Use Django's standard `django.contrib.auth` permissions,\n # or allow read-only access for unauthenticated users.\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly'\n ]\n}\n\n# Miscellaneous\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\nCSRF_FAILURE_VIEW = 'cms.views.general.csrf_failure'\n\nMEDIA_URL = '/media/'\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nFILER_CANONICAL_URL = 'media/'\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'class': 'logging.StreamHandler'\n },\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'level': 'WARN',\n 'propagate': True,\n },\n 'api': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'cms': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'rules': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': True,\n },\n }\n}\n",
"path": "backend/backend/settings.py"
}
] | [
{
"content": "\"\"\"\nDjango settings for backend project.\n\nGenerated by 'django-admin startproject' using Django 1.11.11.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.11/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\n\"\"\"\n\nimport os\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = '-!v282$zj815_q@htaxcubylo)(l%a+k*-xi78hw*#s2@i86@_'\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = True\n\nALLOWED_HOSTS = [\n 'localhost',\n '127.0.0.1',\n '0.0.0.0'\n]\n\n\n# Application definition\n\nINSTALLED_APPS = [\n 'cms.apps.CmsConfig',\n 'django.contrib.admin',\n 'django.contrib.auth',\n 'django.contrib.contenttypes',\n 'django.contrib.messages',\n 'django.contrib.sessions',\n 'django.contrib.staticfiles',\n 'widget_tweaks',\n 'easy_thumbnails',\n 'filer',\n 'drf_yasg',\n 'mptt',\n 'rest_framework',\n 'rules.apps.AutodiscoverRulesConfig',\n]\n\nMIDDLEWARE = [\n 'django.middleware.security.SecurityMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.locale.LocaleMiddleware',\n 'django.middleware.common.CommonMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n]\n\nROOT_URLCONF = 'backend.urls'\nTHUMBNAIL_HIGH_RESOLUTION = True\n\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [],\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n 'backend.context_processors.site_slug_processor',\n ],\n },\n },\n]\n\nWSGI_APPLICATION = 'backend.wsgi.application'\n\n\n# Database\n# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\n\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.postgresql_psycopg2',\n 'NAME': 'integreat',\n 'USER': 'integreat',\n 'PASSWORD': 'password',\n 'HOST': 'localhost',\n 'PORT': '5432',\n }\n}\n\n# Directory for initial database contents\n\nFIXTURE_DIRS = (\n os.path.join(BASE_DIR, 'cms/fixtures/'),\n)\n\n# Authentication backends\n\nAUTHENTICATION_BACKENDS = (\n 'rules.permissions.ObjectPermissionBackend',\n 'django.contrib.auth.backends.ModelBackend', # this is default\n)\n\n\n# Password validation\n# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',\n },\n {\n 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.11/topics/i18n/\n\nLANGUAGES = (\n ('en-us', 'English'),\n ('de-de', 'Deutsch'),\n)\n\nLOCALE_PATHS = (\n 
os.path.join(BASE_DIR, 'locale'),\n)\n\nLANGUAGE_CODE = 'de-de'\n\nTIME_ZONE = 'UTC'\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\n\nSTATIC_URL = '/static/'\n\n\n# Login\nLOGIN_URL = '/login'\nLOGIN_REDIRECT_URL = '/'\nLOGOUT_REDIRECT_URL = '/login'\n\n# API FRAMEWORK\nREST_FRAMEWORK = {\n # Use Django's standard `django.contrib.auth` permissions,\n # or allow read-only access for unauthenticated users.\n 'DEFAULT_PERMISSION_CLASSES': [\n 'rest_framework.permissions.DjangoModelPermissionsOrAnonReadOnly'\n ]\n}\n\n# Miscellaneous\nEMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'\nCSRF_FAILURE_VIEW = 'cms.views.general.csrf_failure'\n\nMEDIA_URL = '/media/'\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media')\nFILER_CANONICAL_URL = 'media/'\n\nLOGGING = {\n 'version': 1,\n 'disable_existing_loggers': False,\n 'handlers': {\n 'console': {\n 'class': 'logging.StreamHandler'\n },\n },\n 'loggers': {\n 'django': {\n 'handlers': ['console'],\n 'level': 'WARN',\n 'propagate': True,\n },\n 'api': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'cms': {\n 'handlers': ['console'],\n 'level': 'INFO',\n 'propagate': True,\n },\n 'rules': {\n 'handlers': ['console'],\n 'level': 'DEBUG',\n 'propagate': True,\n },\n }\n}\n",
"path": "backend/backend/settings.py"
}
] | diff --git a/.gitignore b/.gitignore
index 014fe4432f..44f1a9168c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -46,3 +46,6 @@ backend/media/*
# XLIFF files folder
**/xliffs/
+
+# Postgres folder
+.postgres
\ No newline at end of file
diff --git a/README.md b/README.md
index 097c6ea507..35a8db1883 100644
--- a/README.md
+++ b/README.md
@@ -1,40 +1,53 @@
# Integreat Django CMS
This project aims to develop a content management system tailored to the needs of municipalities to provide multilingual local information. It aims at being easy to use and easy to maintain over a long time. This project uses Python3 and Django 1.11 and aims at being run on a Ubuntu 18.04.
-## Development
-There are several ways to run this project locally: install as a package (Ubuntu, openSUSE), run in local Python3 venv, and also in a Docker container. Each method is detailed below.
+## Setup a local development environment
+To run the project locally you can either install it as a package (Ubuntu, openSUSE), run it in a local Python3 **virtualenv**, or run it in a Docker container. Using a **virtualenv** is the recommended way to set up a local development environment.
-To get started, run
+First of all, clone the project:
````
git clone [email protected]:Integreat/cms-django.git
cd cms-django
````
-### Development Tools
+### Setup the database
+You can run Postgres either on your local machine or in a Docker container.
+
+* Install Postgres on your machine ([Tutorial for Ubuntu 18.04](https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-18-04))
+* Run Postgres in a Docker container: `./dev-tools/start_db_docker.sh`
+
+### virtualenv
+1. Run `./install-venv.sh`
+2. If you have installed Postgres on your machine, you may have to adjust database credentials in `backend/backend/settings.py`
+3. Do the database migrations: `integreat-cms migrate`
+4. Create the initial superuser: `integreat-cms createsuperuser`
+5. Fire up the CMS: `integreat-cms runserver localhost:8000`
+6. Go to your browser and open the URL `http://localhost:8000`
+
+You may need to activate the `virtualenv` explicitly via `source .venv/bin/activate`.
+
+## Development
+### Migrations
+After changing a model you have to migrate via `./dev-tools/migrate.sh`
+
+### i18n
+To make use of the translated backend, compile the django.po file as follows:
+
+`django-admin compilemessages`
-- Delete docker environment to start over again: `dev-tools/prune_docker.sh`
- (be careful: This will delete all your other docker images as well)
-- Delete database to start over again: `dev-tools/prune_database.sh`
-- Migrate database: `dev-tools/migrate.sh`
-- Create superuser: `dev-tools/create_superuser.sh`
+If you are using a virtual python environment, be sure to use the `--exclude` parameter or execute this command in the backend or cms directory, otherwise all the translation files in your venv will be compiled, too.
-### Run CMS in Python3 venv
-1. Install a local PostgreSQL server, for example with `apt install postgresql` and create a database and database user with the name `integreat`.
-2. Run `./install-venv.sh`
-3. Open the `backend/backend/settings.py` and adjust the database credentials. Also change the hostname to `localhost`.
-4. Do the database migrations: `integreat-cms migrate`
-5. Create the initial superuser: `integreat-cms createsuperuser`
-6. Fire up the CMS: `integreat-cms runserver localhost:8000`
-7. Go to your browser and open the URL `http://localhost:8000`
-8. Run Django unittest: `integreat-cms test cms/`
+### Testing
+Run Django unittest: `integreat-cms test cms/`
-### Run CMS in Docker container
-A docker-compose file is provided in the the repository. It will start one container with a PostgreSQL database and another one with the CMS.
-* `docker-compose up`
-* enter [http://localhost:8000](http://localhost:8000)
-* as long as there is no standard SQL dump, you have to create your own user: `docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms createsuperuser"`
+### Miscellaneous
+* Keep in mind that we are using Python 3.x, so use `python3` and `pip3` with any command
+* Access the Postgres database running in Docker container: `docker exec -it integreat_django_postgres psql -U integreat`
+* To ensure that you do not accidentally push your changes in `settings.py`, you can ignore the file locally via `git update-index --assume-unchanged ./backend/backend/settings.py`
+* Delete the database to start over again: `dev-tools/prune_database.sh`
+* Create superuser: `dev-tools/create_superuser.sh`
-### Packaging and installing on Ubuntu 18.04
+## Packaging and installing on Ubuntu 18.04
Packaging for Debian can be done with setuptools.
```
$ python3 -m venv .venv
@@ -56,35 +69,4 @@ Then install both packages with gdebi:
# gdebi django-widget-tweaks/deb_dist/python3-django-widget-tweaks_1.4.3-1_all.deb
# gebi cms-django/deb_dist/python3-integreat-cms_0.0.13-1_all.deb
````
-In the end, create a PostgreSQL user and database and adjust the `/usr/lib/python3/dist-packages/backend/settings.py`.
-
-
-### Troubleshooting
-#### Cleaning up Docker environment
-* stop all conntainers: `docker stop $(docker ps -a -q)`
-* remove all images: `docker rmi $(docker images -a -q)`
-* remove all volumes: `docker system prune`
-#### Misc
-* keep in mind that we are using Python 3.x, so use `python3` and `pip3` on your bash commands
-* get a bash shell in the django container: `docker exec -it $(docker-compose ps -q django) bash`
-* enter postgres container: `docker exec -it $(docker-compose ps -q postgres) psql -U"integreat" -d "integreat"`
-
-### Migrations
-* change models
-* `docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms makemigrations [app]"`
-* optional, if you want to inspect the corresponding SQL syntax: `docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms sqlmigrate [app] [number]"`
-* `docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms migrate"`
-
-### Docker clean up
-* `docker stop $(docker ps -a -q)`
-* `docker rm $(docker ps -a -q)`
-* remove all images: `docker rmi $(docker images -a -q)`
-* remove all volumes: `docker volume prune`
-
-### i18n
-To make use of the translated backend, compile the django.po file as follows:
-
-`django-admin compilemessages`
-
-If you use a virtual python environment, be sure to use the ´--exclude´ parameter or execute this command in the backend or cms directory, otherwise all the translation files in your venv will be compiled, too.
-
+In the end, create a PostgreSQL user and database and adjust the `/usr/lib/python3/dist-packages/backend/settings.py`.
\ No newline at end of file
diff --git a/_docker/django/development/Dockerfile b/_docker/django/development/Dockerfile
deleted file mode 100644
index 5cd8e41355..0000000000
--- a/_docker/django/development/Dockerfile
+++ /dev/null
@@ -1,18 +0,0 @@
-FROM ubuntu
-
-COPY ./ /opt/integreat-cms
-RUN echo $PWD
-WORKDIR /opt/integreat-cms
-RUN who
-
-RUN apt update
-RUN DEBIAN_FRONTEND=noninteractive apt install --yes --force-yes python3 python3-pip python3-setuptools libpq-dev python3-venv
-
-# remove deprecated pycrypto package
-RUN DEBIAN_FRONTEND=noninteractive apt remove --yes --force-yes python3-crypto
-
-RUN python3 -m venv .venv
-
-RUN echo "source /opt/integreat-cms/.venv/bin/activate" >> /root/.bashrc
-
-EXPOSE 8000
\ No newline at end of file
diff --git a/_docker/django/production/Dockerfile b/_docker/django/production/Dockerfile
deleted file mode 100644
index 5d95d702f4..0000000000
--- a/_docker/django/production/Dockerfile
+++ /dev/null
@@ -1,13 +0,0 @@
-FROM ubuntu
-
-RUN echo $PWD
-RUN who
-
-RUN apt-add-repository ...
-RUN apt update
-RUN DEBIAN_FRONTEND=noninteractive apt install --yes --force-yes integreat-cms
-
-# remove deprecated pycrypto package
-RUN DEBIAN_FRONTEND=noninteractive apt remove --yes --force-yes python3-crypto
-
-EXPOSE 8000
diff --git a/backend/backend/settings.py b/backend/backend/settings.py
index 03db85be54..3389027fa1 100644
--- a/backend/backend/settings.py
+++ b/backend/backend/settings.py
@@ -94,7 +94,7 @@
'NAME': 'integreat',
'USER': 'integreat',
'PASSWORD': 'password',
- 'HOST': 'postgres',
+ 'HOST': 'localhost',
'PORT': '5432',
}
}
diff --git a/dev-tools/create_superuser.sh b/dev-tools/create_superuser.sh
index f32495baca..8fe53e16f2 100755
--- a/dev-tools/create_superuser.sh
+++ b/dev-tools/create_superuser.sh
@@ -1,3 +1,4 @@
#!/bin/sh
-docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms createsuperuser --username root --email ''"
+source .venv/bin/activate
+integreat-cms createsuperuser --username root --email ''
\ No newline at end of file
diff --git a/dev-tools/migrate.sh b/dev-tools/migrate.sh
index 7bed4f66cf..02d1fa165e 100755
--- a/dev-tools/migrate.sh
+++ b/dev-tools/migrate.sh
@@ -1,5 +1,6 @@
#!/bin/sh
-docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms makemigrations cms"
-docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms migrate"
-docker exec -it $(docker-compose ps -q django) bash -ic "integreat-cms loaddata backend/cms/fixtures/roles.json"
+source .venv/bin/activate
+integreat-cms makemigrations cms
+integreat-cms migrate
+integreat-cms loaddata backend/cms/fixtures/roles.json
\ No newline at end of file
diff --git a/dev-tools/prune_database.sh b/dev-tools/prune_database.sh
index 0c5415b5f1..5a09d05a55 100755
--- a/dev-tools/prune_database.sh
+++ b/dev-tools/prune_database.sh
@@ -2,5 +2,5 @@
script_dir=$(dirname "$BASH_SOURCE")
-rm -rfv $script_dir/../_postgres
+rm -rfv $script_dir/../.postgres
rm -rfv $script_dir/../backend/cms/migrations
diff --git a/dev-tools/prune_docker.sh b/dev-tools/prune_docker.sh
deleted file mode 100755
index b358204895..0000000000
--- a/dev-tools/prune_docker.sh
+++ /dev/null
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-docker-compose down --rmi local
diff --git a/dev-tools/start_db_docker.sh b/dev-tools/start_db_docker.sh
new file mode 100755
index 0000000000..02da169668
--- /dev/null
+++ b/dev-tools/start_db_docker.sh
@@ -0,0 +1,15 @@
+#!/bin/sh
+
+# Change connection string
+sed -i 's/5432/5433/g' ./backend/backend/settings.py
+
+# Start Postgres Docker container
+if [ ! "$(docker ps -q -f name='integreat_django_postgres')" ]; then
+ if [ "$(docker ps -aq -f status=exited -f name='integreat_django_postgres')" ]; then
+ # Start the existing container
+ docker start integreat_django_postgres
+ else
+ # Run new container
+ docker run --name "integreat_django_postgres" -e "POSTGRES_USER=integreat" -e "POSTGRES_PASSWORD=password" -e "POSTGRES_DB=integreat" -v "$(pwd)/.postgres:/var/lib/postgresql" -p 5433:5432 postgres
+ fi
+fi
\ No newline at end of file
diff --git a/docker-compose.yml b/docker-compose.yml
deleted file mode 100644
index abccfb632a..0000000000
--- a/docker-compose.yml
+++ /dev/null
@@ -1,26 +0,0 @@
-version: '2'
-
-services:
- django:
- build:
- context: .
- dockerfile: ./_docker/django/development/Dockerfile
- command: bash -c "source .venv/bin/activate && python3 setup.py develop && integreat-cms runserver 0.0.0.0:8000"
- depends_on:
- - postgres
- ports:
- - 8000:8000
- restart: always
- #tty: true
- volumes:
- - "./backend:/opt/integreat-cms/backend"
-
- postgres:
- environment:
- - POSTGRES_USER=integreat
- - POSTGRES_PASSWORD=password
- - POSTGRES_DB=integreat
- image: postgres
- restart: always
- volumes:
- - "./_postgres:/var/lib/postgresql"
diff --git a/install-venv.sh b/install-venv.sh
index 4d551e4375..52ce5df5a6 100755
--- a/install-venv.sh
+++ b/install-venv.sh
@@ -1,7 +1,9 @@
#!/bin/bash
+
# This script installs the CMS in a local virtual environment without
# the need for docker or any other virtualization technology. A Postgres
# SQL server is needed to run the CMS.
python3 -m venv .venv
source .venv/bin/activate
python3 setup.py develop
+source .venv/bin/activate
\ No newline at end of file
|
modin-project__modin-2173 | [OmniSci] Add float32 dtype support
Looks like our calcite serializer doesn't support float32 type.
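For illustration, a minimal sketch of the kind of change this needs, assuming the serializer keeps its `dtype_strings` lookup and that `FLOAT` is the matching Calcite type name (as in the fix below):

```python
# Sketch: extend the dtype -> Calcite type lookup so float32 columns
# serialize to FLOAT, alongside the existing float64 -> DOUBLE mapping.
dtype_strings = {
    "int8": "TINYINT",
    "int16": "SMALLINT",
    "int32": "INTEGER",
    "int64": "BIGINT",
    "bool": "BOOLEAN",
    "float32": "FLOAT",  # the missing entry; serialize_dtype() would otherwise raise KeyError for float32 frames
    "float64": "DOUBLE",
}
```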
| [
{
"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .expr import (\n BaseExpr,\n LiteralExpr,\n OpExpr,\n AggregateExpr,\n)\nfrom .calcite_algebra import (\n CalciteBaseNode,\n CalciteInputRefExpr,\n CalciteInputIdxExpr,\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteCollation,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n)\nimport json\nimport numpy as np\n\n\nclass CalciteSerializer:\n dtype_strings = {\n \"int8\": \"TINYINT\",\n \"int16\": \"SMALLINT\",\n \"int32\": \"INTEGER\",\n \"int64\": \"BIGINT\",\n \"bool\": \"BOOLEAN\",\n \"float64\": \"DOUBLE\",\n }\n\n def serialize(self, plan):\n return json.dumps({\"rels\": [self.serialize_item(node) for node in plan]})\n\n def expect_one_of(self, val, *types):\n for t in types:\n if isinstance(val, t):\n return\n raise TypeError(\"Can not serialize {}\".format(type(val).__name__))\n\n def serialize_item(self, item):\n if isinstance(item, CalciteBaseNode):\n return self.serialize_node(item)\n elif isinstance(item, BaseExpr):\n return self.serialize_expr(item)\n elif isinstance(item, CalciteCollation):\n return self.serialize_obj(item)\n elif isinstance(item, list):\n return [self.serialize_item(v) for v in item]\n\n self.expect_one_of(item, str, int)\n return item\n\n def serialize_node(self, node):\n # We need to setup context for proper references\n # serialization\n if isinstance(\n node,\n (\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n ),\n ):\n return self.serialize_obj(node)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(node).__name__)\n )\n\n def serialize_obj(self, obj):\n res = {}\n for k, v in obj.__dict__.items():\n if k[0] != \"_\":\n res[k] = self.serialize_item(v)\n return res\n\n def serialize_typed_obj(self, obj):\n res = self.serialize_obj(obj)\n res[\"type\"] = self.serialize_dtype(obj._dtype)\n return res\n\n def serialize_expr(self, expr):\n if isinstance(expr, LiteralExpr):\n return self.serialize_literal(expr)\n elif isinstance(expr, CalciteInputRefExpr):\n return self.serialize_obj(expr)\n elif isinstance(expr, CalciteInputIdxExpr):\n return self.serialize_input_idx(expr)\n elif isinstance(expr, OpExpr):\n return self.serialize_typed_obj(expr)\n elif isinstance(expr, AggregateExpr):\n return self.serialize_typed_obj(expr)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(expr).__name__)\n )\n\n def serialize_literal(self, literal):\n if literal.val is None:\n return {\n \"literal\": None,\n \"type\": \"BIGINT\",\n \"target_type\": \"BIGINT\",\n \"scale\": 0,\n \"precision\": 19,\n \"type_scale\": 0,\n \"type_precision\": 19,\n }\n if type(literal.val) is 
str:\n return {\n \"literal\": literal.val,\n \"type\": \"CHAR\",\n \"target_type\": \"CHAR\",\n \"scale\": -2147483648,\n \"precision\": len(literal.val),\n \"type_scale\": -2147483648,\n \"type_precision\": len(literal.val),\n }\n if type(literal.val) in (int, np.int8, np.int16, np.int32, np.int64):\n target_type, precision = self.opts_for_int_type(type(literal.val))\n return {\n \"literal\": int(literal.val),\n \"type\": \"DECIMAL\",\n \"target_type\": target_type,\n \"scale\": 0,\n \"precision\": len(str(literal.val)),\n \"type_scale\": 0,\n \"type_precision\": precision,\n }\n if type(literal.val) in (float, np.float64):\n str_val = f\"{literal.val:f}\"\n precision = len(str_val) - 1\n scale = precision - str_val.index(\".\")\n return {\n \"literal\": int(str_val.replace(\".\", \"\")),\n \"type\": \"DECIMAL\",\n \"target_type\": \"DOUBLE\",\n \"scale\": scale,\n \"precision\": precision,\n \"type_scale\": -2147483648,\n \"type_precision\": 15,\n }\n if type(literal.val) is bool:\n return {\n \"literal\": literal.val,\n \"type\": \"BOOLEAN\",\n \"target_type\": \"BOOLEAN\",\n \"scale\": -2147483648,\n \"precision\": 1,\n \"type_scale\": -2147483648,\n \"type_precision\": 1,\n }\n raise NotImplementedError(f\"Can not serialize {type(literal.val).__name__}\")\n\n def opts_for_int_type(self, int_type):\n if int_type is np.int8:\n return \"TINYINT\", 3\n if int_type is np.int16:\n return \"SMALLINT\", 5\n if int_type is np.int32:\n return \"INTEGER\", 10\n if int_type in (np.int64, int):\n return \"BIGINT\", 19\n raise NotImplementedError(f\"Unsupported integer type {int_type.__name__}\")\n\n def serialize_dtype(self, dtype):\n return {\"type\": type(self).dtype_strings[dtype.name], \"nullable\": True}\n\n def serialize_input_idx(self, expr):\n return expr.input\n",
"path": "modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py"
}
] | [
{
"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\nfrom .expr import (\n BaseExpr,\n LiteralExpr,\n OpExpr,\n AggregateExpr,\n)\nfrom .calcite_algebra import (\n CalciteBaseNode,\n CalciteInputRefExpr,\n CalciteInputIdxExpr,\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteCollation,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n)\nimport json\nimport numpy as np\n\n\nclass CalciteSerializer:\n dtype_strings = {\n \"int8\": \"TINYINT\",\n \"int16\": \"SMALLINT\",\n \"int32\": \"INTEGER\",\n \"int64\": \"BIGINT\",\n \"bool\": \"BOOLEAN\",\n \"float32\": \"FLOAT\",\n \"float64\": \"DOUBLE\",\n }\n\n def serialize(self, plan):\n return json.dumps({\"rels\": [self.serialize_item(node) for node in plan]})\n\n def expect_one_of(self, val, *types):\n for t in types:\n if isinstance(val, t):\n return\n raise TypeError(\"Can not serialize {}\".format(type(val).__name__))\n\n def serialize_item(self, item):\n if isinstance(item, CalciteBaseNode):\n return self.serialize_node(item)\n elif isinstance(item, BaseExpr):\n return self.serialize_expr(item)\n elif isinstance(item, CalciteCollation):\n return self.serialize_obj(item)\n elif isinstance(item, list):\n return [self.serialize_item(v) for v in item]\n\n self.expect_one_of(item, str, int)\n return item\n\n def serialize_node(self, node):\n # We need to setup context for proper references\n # serialization\n if isinstance(\n node,\n (\n CalciteScanNode,\n CalciteProjectionNode,\n CalciteFilterNode,\n CalciteAggregateNode,\n CalciteSortNode,\n CalciteJoinNode,\n CalciteUnionNode,\n ),\n ):\n return self.serialize_obj(node)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(node).__name__)\n )\n\n def serialize_obj(self, obj):\n res = {}\n for k, v in obj.__dict__.items():\n if k[0] != \"_\":\n res[k] = self.serialize_item(v)\n return res\n\n def serialize_typed_obj(self, obj):\n res = self.serialize_obj(obj)\n res[\"type\"] = self.serialize_dtype(obj._dtype)\n return res\n\n def serialize_expr(self, expr):\n if isinstance(expr, LiteralExpr):\n return self.serialize_literal(expr)\n elif isinstance(expr, CalciteInputRefExpr):\n return self.serialize_obj(expr)\n elif isinstance(expr, CalciteInputIdxExpr):\n return self.serialize_input_idx(expr)\n elif isinstance(expr, OpExpr):\n return self.serialize_typed_obj(expr)\n elif isinstance(expr, AggregateExpr):\n return self.serialize_typed_obj(expr)\n else:\n raise NotImplementedError(\n \"Can not serialize {}\".format(type(expr).__name__)\n )\n\n def serialize_literal(self, literal):\n if literal.val is None:\n return {\n \"literal\": None,\n \"type\": \"BIGINT\",\n \"target_type\": \"BIGINT\",\n \"scale\": 0,\n \"precision\": 19,\n \"type_scale\": 0,\n \"type_precision\": 19,\n 
}\n if type(literal.val) is str:\n return {\n \"literal\": literal.val,\n \"type\": \"CHAR\",\n \"target_type\": \"CHAR\",\n \"scale\": -2147483648,\n \"precision\": len(literal.val),\n \"type_scale\": -2147483648,\n \"type_precision\": len(literal.val),\n }\n if type(literal.val) in (int, np.int8, np.int16, np.int32, np.int64):\n target_type, precision = self.opts_for_int_type(type(literal.val))\n return {\n \"literal\": int(literal.val),\n \"type\": \"DECIMAL\",\n \"target_type\": target_type,\n \"scale\": 0,\n \"precision\": len(str(literal.val)),\n \"type_scale\": 0,\n \"type_precision\": precision,\n }\n if type(literal.val) in (float, np.float64):\n str_val = f\"{literal.val:f}\"\n precision = len(str_val) - 1\n scale = precision - str_val.index(\".\")\n return {\n \"literal\": int(str_val.replace(\".\", \"\")),\n \"type\": \"DECIMAL\",\n \"target_type\": \"DOUBLE\",\n \"scale\": scale,\n \"precision\": precision,\n \"type_scale\": -2147483648,\n \"type_precision\": 15,\n }\n if type(literal.val) is bool:\n return {\n \"literal\": literal.val,\n \"type\": \"BOOLEAN\",\n \"target_type\": \"BOOLEAN\",\n \"scale\": -2147483648,\n \"precision\": 1,\n \"type_scale\": -2147483648,\n \"type_precision\": 1,\n }\n raise NotImplementedError(f\"Can not serialize {type(literal.val).__name__}\")\n\n def opts_for_int_type(self, int_type):\n if int_type is np.int8:\n return \"TINYINT\", 3\n if int_type is np.int16:\n return \"SMALLINT\", 5\n if int_type is np.int32:\n return \"INTEGER\", 10\n if int_type in (np.int64, int):\n return \"BIGINT\", 19\n raise NotImplementedError(f\"Unsupported integer type {int_type.__name__}\")\n\n def serialize_dtype(self, dtype):\n return {\"type\": type(self).dtype_strings[dtype.name], \"nullable\": True}\n\n def serialize_input_idx(self, expr):\n return expr.input\n",
"path": "modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py"
}
] | diff --git a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
index 0156cfbc3d9..f460868cd5d 100644
--- a/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
+++ b/modin/experimental/engines/omnisci_on_ray/frame/calcite_serializer.py
@@ -41,6 +41,7 @@ class CalciteSerializer:
"int32": "INTEGER",
"int64": "BIGINT",
"bool": "BOOLEAN",
+ "float32": "FLOAT",
"float64": "DOUBLE",
}
diff --git a/modin/experimental/engines/omnisci_on_ray/test/test_dataframe.py b/modin/experimental/engines/omnisci_on_ray/test/test_dataframe.py
index 86632635e1b..3fca2092b7f 100644
--- a/modin/experimental/engines/omnisci_on_ray/test/test_dataframe.py
+++ b/modin/experimental/engines/omnisci_on_ray/test/test_dataframe.py
@@ -275,6 +275,20 @@ def test_sep_delimiter(self, kwargs):
df_equals(modin_df, pandas_df)
+ @pytest.mark.skip(reason="https://github.com/modin-project/modin/issues/2174")
+ def test_float32(self):
+ csv_file = os.path.join(self.root, "modin/pandas/test/data", "test_usecols.csv")
+ kwargs = {
+ "dtype": {"a": "float32", "b": "float32"},
+ }
+
+ pandas_df = pandas.read_csv(csv_file, **kwargs)
+ pandas_df["a"] = pandas_df["a"] + pandas_df["b"]
+ modin_df = pd.read_csv(csv_file, **kwargs, engine="arrow")
+ modin_df["a"] = modin_df["a"] + modin_df["b"]
+
+ df_equals(modin_df, pandas_df)
+
class TestMasks:
data = {
|
huggingface__diffusers-680 | LDM Bert `config.json` path
### Describe the bug
### Problem
There is a reference to an LDM Bert that 404's
```bash
src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py: "ldm-bert": "https://huggingface.co/ldm-bert/resolve/main/config.json",
```
I was able to locate a `config.json` at `https://huggingface.co/valhalla/ldm-bert/blob/main/config.json`
Is this the correct `config.json`?
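One way to sanity-check that it resolves (a sketch, assuming the repo id is `valhalla/ldm-bert` and that the `huggingface_hub` client is available):

```python
# Download the candidate config.json from the Hub and print it for inspection.
import json

from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="valhalla/ldm-bert", filename="config.json")
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```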
#### Notes for reviewer
Happy to send a PR to update this if needed, or feel free to do it on your own if that's faster/easier :)
### Reproduction
na
### Logs
```shell
na
```
### System Info
na
| [
{
"content": "import inspect\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport torch.utils.checkpoint\n\nfrom transformers.activations import ACT2FN\nfrom transformers.configuration_utils import PretrainedConfig\nfrom transformers.modeling_outputs import BaseModelOutput\nfrom transformers.modeling_utils import PreTrainedModel\nfrom transformers.tokenization_utils import PreTrainedTokenizer\nfrom transformers.utils import logging\n\nfrom ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel\nfrom ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput\nfrom ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler\n\n\nclass LDMTextToImagePipeline(DiffusionPipeline):\n r\"\"\"\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the\n library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)\n\n Parameters:\n vqvae ([`VQModel`]):\n Vector-quantized (VQ) Model to encode and decode images to and from latent representations.\n bert ([`LDMBertModel`]):\n Text-encoder model based on [BERT](https://huggingface.co/docs/transformers/model_doc/bert) architecture.\n tokenizer (`transformers.BertTokenizer`):\n Tokenizer of class\n [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer).\n unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of\n [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].\n \"\"\"\n\n def __init__(\n self,\n vqvae: Union[VQModel, AutoencoderKL],\n bert: PreTrainedModel,\n tokenizer: PreTrainedTokenizer,\n unet: Union[UNet2DModel, UNet2DConditionModel],\n scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],\n ):\n super().__init__()\n self.register_modules(vqvae=vqvae, bert=bert, tokenizer=tokenizer, unet=unet, scheduler=scheduler)\n\n @torch.no_grad()\n def __call__(\n self,\n prompt: Union[str, List[str]],\n height: Optional[int] = 256,\n width: Optional[int] = 256,\n num_inference_steps: Optional[int] = 50,\n guidance_scale: Optional[float] = 1.0,\n eta: Optional[float] = 0.0,\n generator: Optional[torch.Generator] = None,\n output_type: Optional[str] = \"pil\",\n return_dict: bool = True,\n **kwargs,\n ) -> Union[Tuple, ImagePipelineOutput]:\n r\"\"\"\n Args:\n prompt (`str` or `List[str]`):\n The prompt or prompts to guide the image generation.\n height (`int`, *optional*, defaults to 256):\n The height in pixels of the generated image.\n width (`int`, *optional*, defaults to 256):\n The width in pixels of the generated image.\n num_inference_steps (`int`, *optional*, defaults to 50):\n The number of denoising steps. More denoising steps usually lead to a higher quality image at the\n expense of slower inference.\n guidance_scale (`float`, *optional*, defaults to 1.0):\n Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).\n `guidance_scale` is defined as `w` of equation 2. of [Imagen\n Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >\n 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt` at\n the, usually at the expense of lower image quality.\n generator (`torch.Generator`, *optional*):\n A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation\n deterministic.\n output_type (`str`, *optional*, defaults to `\"pil\"`):\n The output format of the generate image. Choose between\n [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.\n return_dict (`bool`, *optional*):\n Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.\n\n Returns:\n [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if\n `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the\n generated images.\n \"\"\"\n\n if isinstance(prompt, str):\n batch_size = 1\n elif isinstance(prompt, list):\n batch_size = len(prompt)\n else:\n raise ValueError(f\"`prompt` has to be of type `str` or `list` but is {type(prompt)}\")\n\n if height % 8 != 0 or width % 8 != 0:\n raise ValueError(f\"`height` and `width` have to be divisible by 8 but are {height} and {width}.\")\n\n # get unconditional embeddings for classifier free guidance\n if guidance_scale != 1.0:\n uncond_input = self.tokenizer([\"\"] * batch_size, padding=\"max_length\", max_length=77, return_tensors=\"pt\")\n uncond_embeddings = self.bert(uncond_input.input_ids.to(self.device))[0]\n\n # get prompt text embeddings\n text_input = self.tokenizer(prompt, padding=\"max_length\", max_length=77, return_tensors=\"pt\")\n text_embeddings = self.bert(text_input.input_ids.to(self.device))[0]\n\n latents = torch.randn(\n (batch_size, self.unet.in_channels, height // 8, width // 8),\n generator=generator,\n )\n latents = latents.to(self.device)\n\n self.scheduler.set_timesteps(num_inference_steps)\n\n # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature\n accepts_eta = \"eta\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n\n extra_kwargs = {}\n if accepts_eta:\n extra_kwargs[\"eta\"] = eta\n\n for t in self.progress_bar(self.scheduler.timesteps):\n if guidance_scale == 1.0:\n # guidance_scale of 1 means no guidance\n latents_input = latents\n context = text_embeddings\n else:\n # For classifier free guidance, we need to do two forward passes.\n # Here we concatenate the unconditional and text embeddings into a single batch\n # to avoid doing two forward passes\n latents_input = torch.cat([latents] * 2)\n context = torch.cat([uncond_embeddings, text_embeddings])\n\n # predict the noise residual\n noise_pred = self.unet(latents_input, t, encoder_hidden_states=context).sample\n # perform guidance\n if guidance_scale != 1.0:\n noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2)\n noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond)\n\n # compute the previous noisy sample x_t -> x_t-1\n latents = self.scheduler.step(noise_pred, t, latents, **extra_kwargs).prev_sample\n\n # scale and decode the image latents with vae\n latents = 1 / 0.18215 * latents\n image = self.vqvae.decode(latents).sample\n\n image = (image / 2 + 0.5).clamp(0, 1)\n image = image.cpu().permute(0, 2, 3, 1).numpy()\n if output_type == \"pil\":\n image = self.numpy_to_pil(image)\n\n if not return_dict:\n return (image,)\n\n return 
ImagePipelineOutput(images=image)\n\n\n################################################################################\n# Code for the text transformer model\n################################################################################\n\"\"\" PyTorch LDMBERT model.\"\"\"\n\n\nlogger = logging.get_logger(__name__)\n\nLDMBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [\n \"ldm-bert\",\n # See all LDMBert models at https://huggingface.co/models?filter=ldmbert\n]\n\n\nLDMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {\n \"ldm-bert\": \"https://huggingface.co/ldm-bert/resolve/main/config.json\",\n}\n\n\n\"\"\" LDMBERT model configuration\"\"\"\n\n\nclass LDMBertConfig(PretrainedConfig):\n model_type = \"ldmbert\"\n keys_to_ignore_at_inference = [\"past_key_values\"]\n attribute_map = {\"num_attention_heads\": \"encoder_attention_heads\", \"hidden_size\": \"d_model\"}\n\n def __init__(\n self,\n vocab_size=30522,\n max_position_embeddings=77,\n encoder_layers=32,\n encoder_ffn_dim=5120,\n encoder_attention_heads=8,\n head_dim=64,\n encoder_layerdrop=0.0,\n activation_function=\"gelu\",\n d_model=1280,\n dropout=0.1,\n attention_dropout=0.0,\n activation_dropout=0.0,\n init_std=0.02,\n classifier_dropout=0.0,\n scale_embedding=False,\n use_cache=True,\n pad_token_id=0,\n **kwargs,\n ):\n self.vocab_size = vocab_size\n self.max_position_embeddings = max_position_embeddings\n self.d_model = d_model\n self.encoder_ffn_dim = encoder_ffn_dim\n self.encoder_layers = encoder_layers\n self.encoder_attention_heads = encoder_attention_heads\n self.head_dim = head_dim\n self.dropout = dropout\n self.attention_dropout = attention_dropout\n self.activation_dropout = activation_dropout\n self.activation_function = activation_function\n self.init_std = init_std\n self.encoder_layerdrop = encoder_layerdrop\n self.classifier_dropout = classifier_dropout\n self.use_cache = use_cache\n self.num_hidden_layers = encoder_layers\n self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True\n\n super().__init__(pad_token_id=pad_token_id, **kwargs)\n\n\ndef _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):\n \"\"\"\n Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.\n \"\"\"\n bsz, src_len = mask.size()\n tgt_len = tgt_len if tgt_len is not None else src_len\n\n expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)\n\n inverted_mask = 1.0 - expanded_mask\n\n return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)\n\n\n# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->LDMBert\nclass LDMBertAttention(nn.Module):\n \"\"\"Multi-headed attention from 'Attention Is All You Need' paper\"\"\"\n\n def __init__(\n self,\n embed_dim: int,\n num_heads: int,\n head_dim: int,\n dropout: float = 0.0,\n is_decoder: bool = False,\n bias: bool = False,\n ):\n super().__init__()\n self.embed_dim = embed_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.head_dim = head_dim\n self.inner_dim = head_dim * num_heads\n\n self.scaling = self.head_dim**-0.5\n self.is_decoder = is_decoder\n\n self.k_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias)\n self.v_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias)\n self.q_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias)\n self.out_proj = nn.Linear(self.inner_dim, embed_dim)\n\n def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):\n return tensor.view(bsz, seq_len, self.num_heads, 
self.head_dim).transpose(1, 2).contiguous()\n\n def forward(\n self,\n hidden_states: torch.Tensor,\n key_value_states: Optional[torch.Tensor] = None,\n past_key_value: Optional[Tuple[torch.Tensor]] = None,\n attention_mask: Optional[torch.Tensor] = None,\n layer_head_mask: Optional[torch.Tensor] = None,\n output_attentions: bool = False,\n ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:\n \"\"\"Input shape: Batch x Time x Channel\"\"\"\n\n # if key_value_states are provided this layer is used as a cross-attention layer\n # for the decoder\n is_cross_attention = key_value_states is not None\n\n bsz, tgt_len, _ = hidden_states.size()\n\n # get query proj\n query_states = self.q_proj(hidden_states) * self.scaling\n # get key, value proj\n if is_cross_attention and past_key_value is not None:\n # reuse k,v, cross_attentions\n key_states = past_key_value[0]\n value_states = past_key_value[1]\n elif is_cross_attention:\n # cross_attentions\n key_states = self._shape(self.k_proj(key_value_states), -1, bsz)\n value_states = self._shape(self.v_proj(key_value_states), -1, bsz)\n elif past_key_value is not None:\n # reuse k, v, self_attention\n key_states = self._shape(self.k_proj(hidden_states), -1, bsz)\n value_states = self._shape(self.v_proj(hidden_states), -1, bsz)\n key_states = torch.cat([past_key_value[0], key_states], dim=2)\n value_states = torch.cat([past_key_value[1], value_states], dim=2)\n else:\n # self_attention\n key_states = self._shape(self.k_proj(hidden_states), -1, bsz)\n value_states = self._shape(self.v_proj(hidden_states), -1, bsz)\n\n if self.is_decoder:\n # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.\n # Further calls to cross_attention layer can then reuse all cross-attention\n # key/value_states (first \"if\" case)\n # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of\n # all previous decoder key/value_states. 
Further calls to uni-directional self-attention\n # can concat previous decoder key/value_states to current projected key/value_states (third \"elif\" case)\n # if encoder bi-directional self-attention `past_key_value` is always `None`\n past_key_value = (key_states, value_states)\n\n proj_shape = (bsz * self.num_heads, -1, self.head_dim)\n query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)\n key_states = key_states.view(*proj_shape)\n value_states = value_states.view(*proj_shape)\n\n src_len = key_states.size(1)\n attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))\n\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\n raise ValueError(\n f\"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is\"\n f\" {attn_weights.size()}\"\n )\n\n if attention_mask is not None:\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\n raise ValueError(\n f\"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}\"\n )\n attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask\n attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)\n\n attn_weights = nn.functional.softmax(attn_weights, dim=-1)\n\n if layer_head_mask is not None:\n if layer_head_mask.size() != (self.num_heads,):\n raise ValueError(\n f\"Head mask for a single layer should be of size {(self.num_heads,)}, but is\"\n f\" {layer_head_mask.size()}\"\n )\n attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)\n attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)\n\n if output_attentions:\n # this operation is a bit awkward, but it's required to\n # make sure that attn_weights keeps its gradient.\n # In order to do so, attn_weights have to be reshaped\n # twice and have to be reused in the following\n attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)\n attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)\n else:\n attn_weights_reshaped = None\n\n attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)\n\n attn_output = torch.bmm(attn_probs, value_states)\n\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\n raise ValueError(\n f\"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is\"\n f\" {attn_output.size()}\"\n )\n\n attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)\n attn_output = attn_output.transpose(1, 2)\n\n # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be\n # partitioned across GPUs when using tensor-parallelism.\n attn_output = attn_output.reshape(bsz, tgt_len, self.inner_dim)\n\n attn_output = self.out_proj(attn_output)\n\n return attn_output, attn_weights_reshaped, past_key_value\n\n\nclass LDMBertEncoderLayer(nn.Module):\n def __init__(self, config: LDMBertConfig):\n super().__init__()\n self.embed_dim = config.d_model\n self.self_attn = LDMBertAttention(\n embed_dim=self.embed_dim,\n num_heads=config.encoder_attention_heads,\n head_dim=config.head_dim,\n dropout=config.attention_dropout,\n )\n self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)\n self.dropout = config.dropout\n self.activation_fn = ACT2FN[config.activation_function]\n self.activation_dropout = config.activation_dropout\n self.fc1 = nn.Linear(self.embed_dim, 
config.encoder_ffn_dim)\n self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)\n self.final_layer_norm = nn.LayerNorm(self.embed_dim)\n\n def forward(\n self,\n hidden_states: torch.FloatTensor,\n attention_mask: torch.FloatTensor,\n layer_head_mask: torch.FloatTensor,\n output_attentions: Optional[bool] = False,\n ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:\n \"\"\"\n Args:\n hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`\n attention_mask (`torch.FloatTensor`): attention mask of size\n `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.\n layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size\n `(encoder_attention_heads,)`.\n output_attentions (`bool`, *optional*):\n Whether or not to return the attentions tensors of all attention layers. See `attentions` under\n returned tensors for more detail.\n \"\"\"\n residual = hidden_states\n hidden_states = self.self_attn_layer_norm(hidden_states)\n hidden_states, attn_weights, _ = self.self_attn(\n hidden_states=hidden_states,\n attention_mask=attention_mask,\n layer_head_mask=layer_head_mask,\n output_attentions=output_attentions,\n )\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n hidden_states = residual + hidden_states\n\n residual = hidden_states\n hidden_states = self.final_layer_norm(hidden_states)\n hidden_states = self.activation_fn(self.fc1(hidden_states))\n hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)\n hidden_states = self.fc2(hidden_states)\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n hidden_states = residual + hidden_states\n\n if hidden_states.dtype == torch.float16 and (\n torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()\n ):\n clamp_value = torch.finfo(hidden_states.dtype).max - 1000\n hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\n\n outputs = (hidden_states,)\n\n if output_attentions:\n outputs += (attn_weights,)\n\n return outputs\n\n\n# Copied from transformers.models.bart.modeling_bart.BartPretrainedModel with Bart->LDMBert\nclass LDMBertPreTrainedModel(PreTrainedModel):\n config_class = LDMBertConfig\n base_model_prefix = \"model\"\n _supports_gradient_checkpointing = True\n _keys_to_ignore_on_load_unexpected = [r\"encoder\\.version\", r\"decoder\\.version\"]\n\n def _init_weights(self, module):\n std = self.config.init_std\n if isinstance(module, nn.Linear):\n module.weight.data.normal_(mean=0.0, std=std)\n if module.bias is not None:\n module.bias.data.zero_()\n elif isinstance(module, nn.Embedding):\n module.weight.data.normal_(mean=0.0, std=std)\n if module.padding_idx is not None:\n module.weight.data[module.padding_idx].zero_()\n\n def _set_gradient_checkpointing(self, module, value=False):\n if isinstance(module, (LDMBertEncoder,)):\n module.gradient_checkpointing = value\n\n @property\n def dummy_inputs(self):\n pad_token = self.config.pad_token_id\n input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device)\n dummy_inputs = {\n \"attention_mask\": input_ids.ne(pad_token),\n \"input_ids\": input_ids,\n }\n return dummy_inputs\n\n\nclass LDMBertEncoder(LDMBertPreTrainedModel):\n \"\"\"\n Transformer encoder consisting of *config.encoder_layers* self attention layers. 
Each layer is a\n [`LDMBertEncoderLayer`].\n\n Args:\n config: LDMBertConfig\n embed_tokens (nn.Embedding): output embedding\n \"\"\"\n\n def __init__(self, config: LDMBertConfig):\n super().__init__(config)\n\n self.dropout = config.dropout\n\n embed_dim = config.d_model\n self.padding_idx = config.pad_token_id\n self.max_source_positions = config.max_position_embeddings\n\n self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim)\n self.embed_positions = nn.Embedding(config.max_position_embeddings, embed_dim)\n self.layers = nn.ModuleList([LDMBertEncoderLayer(config) for _ in range(config.encoder_layers)])\n self.layer_norm = nn.LayerNorm(embed_dim)\n\n self.gradient_checkpointing = False\n # Initialize weights and apply final processing\n self.post_init()\n\n def get_input_embeddings(self):\n return self.embed_tokens\n\n def set_input_embeddings(self, value):\n self.embed_tokens = value\n\n def forward(\n self,\n input_ids: torch.LongTensor = None,\n attention_mask: Optional[torch.Tensor] = None,\n position_ids: Optional[torch.LongTensor] = None,\n head_mask: Optional[torch.Tensor] = None,\n inputs_embeds: Optional[torch.FloatTensor] = None,\n output_attentions: Optional[bool] = None,\n output_hidden_states: Optional[bool] = None,\n return_dict: Optional[bool] = None,\n ) -> Union[Tuple, BaseModelOutput]:\n r\"\"\"\n Args:\n input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):\n Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you\n provide it.\n\n Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and\n [`PreTrainedTokenizer.__call__`] for details.\n\n [What are input IDs?](../glossary#input-ids)\n attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):\n Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:\n\n - 1 for tokens that are **not masked**,\n - 0 for tokens that are **masked**.\n\n [What are attention masks?](../glossary#attention-mask)\n head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):\n Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:\n\n - 1 indicates the head is **not masked**,\n - 0 indicates the head is **masked**.\n\n inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):\n Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.\n This is useful if you want more control over how to convert `input_ids` indices into associated vectors\n than the model's internal embedding lookup matrix.\n output_attentions (`bool`, *optional*):\n Whether or not to return the attentions tensors of all attention layers. See `attentions` under\n returned tensors for more detail.\n output_hidden_states (`bool`, *optional*):\n Whether or not to return the hidden states of all layers. 
See `hidden_states` under returned tensors\n for more detail.\n return_dict (`bool`, *optional*):\n Whether or not to return a [`~utils.BaseModelOutput`] instead of a plain tuple.\n \"\"\"\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n output_hidden_states = (\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\n )\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n # retrieve input_ids and inputs_embeds\n if input_ids is not None and inputs_embeds is not None:\n raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\n elif input_ids is not None:\n input_shape = input_ids.size()\n input_ids = input_ids.view(-1, input_shape[-1])\n elif inputs_embeds is not None:\n input_shape = inputs_embeds.size()[:-1]\n else:\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\n\n if inputs_embeds is None:\n inputs_embeds = self.embed_tokens(input_ids)\n\n seq_len = input_shape[1]\n if position_ids is None:\n position_ids = torch.arange(seq_len, dtype=torch.long, device=inputs_embeds.device).expand((1, -1))\n embed_pos = self.embed_positions(position_ids)\n\n hidden_states = inputs_embeds + embed_pos\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n\n # expand attention_mask\n if attention_mask is not None:\n # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]\n attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)\n\n encoder_states = () if output_hidden_states else None\n all_attentions = () if output_attentions else None\n\n # check if head_mask has a correct number of layers specified if desired\n if head_mask is not None:\n if head_mask.size()[0] != (len(self.layers)):\n raise ValueError(\n f\"The head_mask should be specified for {len(self.layers)} layers, but it is for\"\n f\" {head_mask.size()[0]}.\"\n )\n\n for idx, encoder_layer in enumerate(self.layers):\n if output_hidden_states:\n encoder_states = encoder_states + (hidden_states,)\n if self.gradient_checkpointing and self.training:\n\n def create_custom_forward(module):\n def custom_forward(*inputs):\n return module(*inputs, output_attentions)\n\n return custom_forward\n\n layer_outputs = torch.utils.checkpoint.checkpoint(\n create_custom_forward(encoder_layer),\n hidden_states,\n attention_mask,\n (head_mask[idx] if head_mask is not None else None),\n )\n else:\n layer_outputs = encoder_layer(\n hidden_states,\n attention_mask,\n layer_head_mask=(head_mask[idx] if head_mask is not None else None),\n output_attentions=output_attentions,\n )\n\n hidden_states = layer_outputs[0]\n\n if output_attentions:\n all_attentions = all_attentions + (layer_outputs[1],)\n\n hidden_states = self.layer_norm(hidden_states)\n\n if output_hidden_states:\n encoder_states = encoder_states + (hidden_states,)\n\n if not return_dict:\n return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)\n return BaseModelOutput(\n last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions\n )\n\n\nclass LDMBertModel(LDMBertPreTrainedModel):\n def __init__(self, config: LDMBertConfig):\n super().__init__(config)\n self.model = LDMBertEncoder(config)\n self.to_logits = nn.Linear(config.hidden_size, config.vocab_size)\n\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n 
output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n outputs = self.model(\n input_ids,\n attention_mask=attention_mask,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n return outputs\n",
"path": "src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py"
}
] | [
{
"content": "import inspect\nimport warnings\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\nimport torch.nn as nn\nimport torch.utils.checkpoint\n\nfrom transformers.activations import ACT2FN\nfrom transformers.configuration_utils import PretrainedConfig\nfrom transformers.modeling_outputs import BaseModelOutput\nfrom transformers.modeling_utils import PreTrainedModel\nfrom transformers.tokenization_utils import PreTrainedTokenizer\nfrom transformers.utils import logging\n\nfrom ...models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel\nfrom ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput\nfrom ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler\n\n\nclass LDMTextToImagePipeline(DiffusionPipeline):\n r\"\"\"\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the\n library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)\n\n Parameters:\n vqvae ([`VQModel`]):\n Vector-quantized (VQ) Model to encode and decode images to and from latent representations.\n bert ([`LDMBertModel`]):\n Text-encoder model based on [BERT](https://huggingface.co/docs/transformers/model_doc/bert) architecture.\n tokenizer (`transformers.BertTokenizer`):\n Tokenizer of class\n [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer).\n unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of\n [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].\n \"\"\"\n\n def __init__(\n self,\n vqvae: Union[VQModel, AutoencoderKL],\n bert: PreTrainedModel,\n tokenizer: PreTrainedTokenizer,\n unet: Union[UNet2DModel, UNet2DConditionModel],\n scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler],\n ):\n super().__init__()\n scheduler = scheduler.set_format(\"pt\")\n self.register_modules(vqvae=vqvae, bert=bert, tokenizer=tokenizer, unet=unet, scheduler=scheduler)\n\n @torch.no_grad()\n def __call__(\n self,\n prompt: Union[str, List[str]],\n height: Optional[int] = 256,\n width: Optional[int] = 256,\n num_inference_steps: Optional[int] = 50,\n guidance_scale: Optional[float] = 1.0,\n eta: Optional[float] = 0.0,\n generator: Optional[torch.Generator] = None,\n output_type: Optional[str] = \"pil\",\n return_dict: bool = True,\n **kwargs,\n ) -> Union[Tuple, ImagePipelineOutput]:\n r\"\"\"\n Args:\n prompt (`str` or `List[str]`):\n The prompt or prompts to guide the image generation.\n height (`int`, *optional*, defaults to 256):\n The height in pixels of the generated image.\n width (`int`, *optional*, defaults to 256):\n The width in pixels of the generated image.\n num_inference_steps (`int`, *optional*, defaults to 50):\n The number of denoising steps. More denoising steps usually lead to a higher quality image at the\n expense of slower inference.\n guidance_scale (`float`, *optional*, defaults to 1.0):\n Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).\n `guidance_scale` is defined as `w` of equation 2. of [Imagen\n Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >\n 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt` at\n the, usually at the expense of lower image quality.\n generator (`torch.Generator`, *optional*):\n A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation\n deterministic.\n output_type (`str`, *optional*, defaults to `\"pil\"`):\n The output format of the generate image. Choose between\n [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.\n return_dict (`bool`, *optional*):\n Whether or not to return a [`~pipeline_utils.ImagePipelineOutput`] instead of a plain tuple.\n\n Returns:\n [`~pipeline_utils.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if\n `return_dict` is True, otherwise a `tuple. When returning a tuple, the first element is a list with the\n generated images.\n \"\"\"\n if \"torch_device\" in kwargs:\n device = kwargs.pop(\"torch_device\")\n warnings.warn(\n \"`torch_device` is deprecated as an input argument to `__call__` and will be removed in v0.3.0.\"\n \" Consider using `pipe.to(torch_device)` instead.\"\n )\n\n # Set device as before (to be removed in 0.3.0)\n if device is None:\n device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n self.to(device)\n\n if isinstance(prompt, str):\n batch_size = 1\n elif isinstance(prompt, list):\n batch_size = len(prompt)\n else:\n raise ValueError(f\"`prompt` has to be of type `str` or `list` but is {type(prompt)}\")\n\n if height % 8 != 0 or width % 8 != 0:\n raise ValueError(f\"`height` and `width` have to be divisible by 8 but are {height} and {width}.\")\n\n # get unconditional embeddings for classifier free guidance\n if guidance_scale != 1.0:\n uncond_input = self.tokenizer([\"\"] * batch_size, padding=\"max_length\", max_length=77, return_tensors=\"pt\")\n uncond_embeddings = self.bert(uncond_input.input_ids.to(self.device))[0]\n\n # get prompt text embeddings\n text_input = self.tokenizer(prompt, padding=\"max_length\", max_length=77, return_tensors=\"pt\")\n text_embeddings = self.bert(text_input.input_ids.to(self.device))[0]\n\n latents = torch.randn(\n (batch_size, self.unet.in_channels, height // 8, width // 8),\n generator=generator,\n )\n latents = latents.to(self.device)\n\n self.scheduler.set_timesteps(num_inference_steps)\n\n # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature\n accepts_eta = \"eta\" in set(inspect.signature(self.scheduler.step).parameters.keys())\n\n extra_kwargs = {}\n if accepts_eta:\n extra_kwargs[\"eta\"] = eta\n\n for t in self.progress_bar(self.scheduler.timesteps):\n if guidance_scale == 1.0:\n # guidance_scale of 1 means no guidance\n latents_input = latents\n context = text_embeddings\n else:\n # For classifier free guidance, we need to do two forward passes.\n # Here we concatenate the unconditional and text embeddings into a single batch\n # to avoid doing two forward passes\n latents_input = torch.cat([latents] * 2)\n context = torch.cat([uncond_embeddings, text_embeddings])\n\n # predict the noise residual\n noise_pred = self.unet(latents_input, t, encoder_hidden_states=context).sample\n # perform guidance\n if guidance_scale != 1.0:\n noise_pred_uncond, noise_prediction_text = noise_pred.chunk(2)\n noise_pred = noise_pred_uncond + guidance_scale * (noise_prediction_text - noise_pred_uncond)\n\n # compute the previous noisy sample x_t -> x_t-1\n latents = self.scheduler.step(noise_pred, t, latents, **extra_kwargs).prev_sample\n\n # 
scale and decode the image latents with vae\n latents = 1 / 0.18215 * latents\n image = self.vqvae.decode(latents).sample\n\n image = (image / 2 + 0.5).clamp(0, 1)\n image = image.cpu().permute(0, 2, 3, 1).numpy()\n if output_type == \"pil\":\n image = self.numpy_to_pil(image)\n\n if not return_dict:\n return (image,)\n\n return ImagePipelineOutput(images=image)\n\n\n################################################################################\n# Code for the text transformer model\n################################################################################\n\"\"\" PyTorch LDMBERT model.\"\"\"\n\n\nlogger = logging.get_logger(__name__)\n\nLDMBERT_PRETRAINED_MODEL_ARCHIVE_LIST = [\n \"ldm-bert\",\n # See all LDMBert models at https://huggingface.co/models?filter=ldmbert\n]\n\n\nLDMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {\n \"ldm-bert\": \"https://huggingface.co/valhalla/ldm-bert/blob/main/config.json\",\n}\n\n\n\"\"\" LDMBERT model configuration\"\"\"\n\n\nclass LDMBertConfig(PretrainedConfig):\n model_type = \"ldmbert\"\n keys_to_ignore_at_inference = [\"past_key_values\"]\n attribute_map = {\"num_attention_heads\": \"encoder_attention_heads\", \"hidden_size\": \"d_model\"}\n\n def __init__(\n self,\n vocab_size=30522,\n max_position_embeddings=77,\n encoder_layers=32,\n encoder_ffn_dim=5120,\n encoder_attention_heads=8,\n head_dim=64,\n encoder_layerdrop=0.0,\n activation_function=\"gelu\",\n d_model=1280,\n dropout=0.1,\n attention_dropout=0.0,\n activation_dropout=0.0,\n init_std=0.02,\n classifier_dropout=0.0,\n scale_embedding=False,\n use_cache=True,\n pad_token_id=0,\n **kwargs,\n ):\n self.vocab_size = vocab_size\n self.max_position_embeddings = max_position_embeddings\n self.d_model = d_model\n self.encoder_ffn_dim = encoder_ffn_dim\n self.encoder_layers = encoder_layers\n self.encoder_attention_heads = encoder_attention_heads\n self.head_dim = head_dim\n self.dropout = dropout\n self.attention_dropout = attention_dropout\n self.activation_dropout = activation_dropout\n self.activation_function = activation_function\n self.init_std = init_std\n self.encoder_layerdrop = encoder_layerdrop\n self.classifier_dropout = classifier_dropout\n self.use_cache = use_cache\n self.num_hidden_layers = encoder_layers\n self.scale_embedding = scale_embedding # scale factor will be sqrt(d_model) if True\n\n super().__init__(pad_token_id=pad_token_id, **kwargs)\n\n\ndef _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):\n \"\"\"\n Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`.\n \"\"\"\n bsz, src_len = mask.size()\n tgt_len = tgt_len if tgt_len is not None else src_len\n\n expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)\n\n inverted_mask = 1.0 - expanded_mask\n\n return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min)\n\n\n# Copied from transformers.models.bart.modeling_bart.BartAttention with Bart->LDMBert\nclass LDMBertAttention(nn.Module):\n \"\"\"Multi-headed attention from 'Attention Is All You Need' paper\"\"\"\n\n def __init__(\n self,\n embed_dim: int,\n num_heads: int,\n head_dim: int,\n dropout: float = 0.0,\n is_decoder: bool = False,\n bias: bool = False,\n ):\n super().__init__()\n self.embed_dim = embed_dim\n self.num_heads = num_heads\n self.dropout = dropout\n self.head_dim = head_dim\n self.inner_dim = head_dim * num_heads\n\n self.scaling = self.head_dim**-0.5\n self.is_decoder = is_decoder\n\n self.k_proj = nn.Linear(embed_dim, 
self.inner_dim, bias=bias)\n self.v_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias)\n self.q_proj = nn.Linear(embed_dim, self.inner_dim, bias=bias)\n self.out_proj = nn.Linear(self.inner_dim, embed_dim)\n\n def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int):\n return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous()\n\n def forward(\n self,\n hidden_states: torch.Tensor,\n key_value_states: Optional[torch.Tensor] = None,\n past_key_value: Optional[Tuple[torch.Tensor]] = None,\n attention_mask: Optional[torch.Tensor] = None,\n layer_head_mask: Optional[torch.Tensor] = None,\n output_attentions: bool = False,\n ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:\n \"\"\"Input shape: Batch x Time x Channel\"\"\"\n\n # if key_value_states are provided this layer is used as a cross-attention layer\n # for the decoder\n is_cross_attention = key_value_states is not None\n\n bsz, tgt_len, _ = hidden_states.size()\n\n # get query proj\n query_states = self.q_proj(hidden_states) * self.scaling\n # get key, value proj\n if is_cross_attention and past_key_value is not None:\n # reuse k,v, cross_attentions\n key_states = past_key_value[0]\n value_states = past_key_value[1]\n elif is_cross_attention:\n # cross_attentions\n key_states = self._shape(self.k_proj(key_value_states), -1, bsz)\n value_states = self._shape(self.v_proj(key_value_states), -1, bsz)\n elif past_key_value is not None:\n # reuse k, v, self_attention\n key_states = self._shape(self.k_proj(hidden_states), -1, bsz)\n value_states = self._shape(self.v_proj(hidden_states), -1, bsz)\n key_states = torch.cat([past_key_value[0], key_states], dim=2)\n value_states = torch.cat([past_key_value[1], value_states], dim=2)\n else:\n # self_attention\n key_states = self._shape(self.k_proj(hidden_states), -1, bsz)\n value_states = self._shape(self.v_proj(hidden_states), -1, bsz)\n\n if self.is_decoder:\n # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.\n # Further calls to cross_attention layer can then reuse all cross-attention\n # key/value_states (first \"if\" case)\n # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of\n # all previous decoder key/value_states. 
Further calls to uni-directional self-attention\n # can concat previous decoder key/value_states to current projected key/value_states (third \"elif\" case)\n # if encoder bi-directional self-attention `past_key_value` is always `None`\n past_key_value = (key_states, value_states)\n\n proj_shape = (bsz * self.num_heads, -1, self.head_dim)\n query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)\n key_states = key_states.view(*proj_shape)\n value_states = value_states.view(*proj_shape)\n\n src_len = key_states.size(1)\n attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))\n\n if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):\n raise ValueError(\n f\"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is\"\n f\" {attn_weights.size()}\"\n )\n\n if attention_mask is not None:\n if attention_mask.size() != (bsz, 1, tgt_len, src_len):\n raise ValueError(\n f\"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}\"\n )\n attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask\n attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)\n\n attn_weights = nn.functional.softmax(attn_weights, dim=-1)\n\n if layer_head_mask is not None:\n if layer_head_mask.size() != (self.num_heads,):\n raise ValueError(\n f\"Head mask for a single layer should be of size {(self.num_heads,)}, but is\"\n f\" {layer_head_mask.size()}\"\n )\n attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)\n attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)\n\n if output_attentions:\n # this operation is a bit awkward, but it's required to\n # make sure that attn_weights keeps its gradient.\n # In order to do so, attn_weights have to be reshaped\n # twice and have to be reused in the following\n attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)\n attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)\n else:\n attn_weights_reshaped = None\n\n attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)\n\n attn_output = torch.bmm(attn_probs, value_states)\n\n if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):\n raise ValueError(\n f\"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is\"\n f\" {attn_output.size()}\"\n )\n\n attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)\n attn_output = attn_output.transpose(1, 2)\n\n # Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be\n # partitioned across GPUs when using tensor-parallelism.\n attn_output = attn_output.reshape(bsz, tgt_len, self.inner_dim)\n\n attn_output = self.out_proj(attn_output)\n\n return attn_output, attn_weights_reshaped, past_key_value\n\n\nclass LDMBertEncoderLayer(nn.Module):\n def __init__(self, config: LDMBertConfig):\n super().__init__()\n self.embed_dim = config.d_model\n self.self_attn = LDMBertAttention(\n embed_dim=self.embed_dim,\n num_heads=config.encoder_attention_heads,\n head_dim=config.head_dim,\n dropout=config.attention_dropout,\n )\n self.self_attn_layer_norm = nn.LayerNorm(self.embed_dim)\n self.dropout = config.dropout\n self.activation_fn = ACT2FN[config.activation_function]\n self.activation_dropout = config.activation_dropout\n self.fc1 = nn.Linear(self.embed_dim, 
config.encoder_ffn_dim)\n self.fc2 = nn.Linear(config.encoder_ffn_dim, self.embed_dim)\n self.final_layer_norm = nn.LayerNorm(self.embed_dim)\n\n def forward(\n self,\n hidden_states: torch.FloatTensor,\n attention_mask: torch.FloatTensor,\n layer_head_mask: torch.FloatTensor,\n output_attentions: Optional[bool] = False,\n ) -> Tuple[torch.FloatTensor, Optional[torch.FloatTensor]]:\n \"\"\"\n Args:\n hidden_states (`torch.FloatTensor`): input to the layer of shape `(seq_len, batch, embed_dim)`\n attention_mask (`torch.FloatTensor`): attention mask of size\n `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.\n layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size\n `(encoder_attention_heads,)`.\n output_attentions (`bool`, *optional*):\n Whether or not to return the attentions tensors of all attention layers. See `attentions` under\n returned tensors for more detail.\n \"\"\"\n residual = hidden_states\n hidden_states = self.self_attn_layer_norm(hidden_states)\n hidden_states, attn_weights, _ = self.self_attn(\n hidden_states=hidden_states,\n attention_mask=attention_mask,\n layer_head_mask=layer_head_mask,\n output_attentions=output_attentions,\n )\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n hidden_states = residual + hidden_states\n\n residual = hidden_states\n hidden_states = self.final_layer_norm(hidden_states)\n hidden_states = self.activation_fn(self.fc1(hidden_states))\n hidden_states = nn.functional.dropout(hidden_states, p=self.activation_dropout, training=self.training)\n hidden_states = self.fc2(hidden_states)\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n hidden_states = residual + hidden_states\n\n if hidden_states.dtype == torch.float16 and (\n torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()\n ):\n clamp_value = torch.finfo(hidden_states.dtype).max - 1000\n hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)\n\n outputs = (hidden_states,)\n\n if output_attentions:\n outputs += (attn_weights,)\n\n return outputs\n\n\n# Copied from transformers.models.bart.modeling_bart.BartPretrainedModel with Bart->LDMBert\nclass LDMBertPreTrainedModel(PreTrainedModel):\n config_class = LDMBertConfig\n base_model_prefix = \"model\"\n _supports_gradient_checkpointing = True\n _keys_to_ignore_on_load_unexpected = [r\"encoder\\.version\", r\"decoder\\.version\"]\n\n def _init_weights(self, module):\n std = self.config.init_std\n if isinstance(module, nn.Linear):\n module.weight.data.normal_(mean=0.0, std=std)\n if module.bias is not None:\n module.bias.data.zero_()\n elif isinstance(module, nn.Embedding):\n module.weight.data.normal_(mean=0.0, std=std)\n if module.padding_idx is not None:\n module.weight.data[module.padding_idx].zero_()\n\n def _set_gradient_checkpointing(self, module, value=False):\n if isinstance(module, (LDMBertEncoder,)):\n module.gradient_checkpointing = value\n\n @property\n def dummy_inputs(self):\n pad_token = self.config.pad_token_id\n input_ids = torch.tensor([[0, 6, 10, 4, 2], [0, 8, 12, 2, pad_token]], device=self.device)\n dummy_inputs = {\n \"attention_mask\": input_ids.ne(pad_token),\n \"input_ids\": input_ids,\n }\n return dummy_inputs\n\n\nclass LDMBertEncoder(LDMBertPreTrainedModel):\n \"\"\"\n Transformer encoder consisting of *config.encoder_layers* self attention layers. 
Each layer is a\n [`LDMBertEncoderLayer`].\n\n Args:\n config: LDMBertConfig\n embed_tokens (nn.Embedding): output embedding\n \"\"\"\n\n def __init__(self, config: LDMBertConfig):\n super().__init__(config)\n\n self.dropout = config.dropout\n\n embed_dim = config.d_model\n self.padding_idx = config.pad_token_id\n self.max_source_positions = config.max_position_embeddings\n\n self.embed_tokens = nn.Embedding(config.vocab_size, embed_dim)\n self.embed_positions = nn.Embedding(config.max_position_embeddings, embed_dim)\n self.layers = nn.ModuleList([LDMBertEncoderLayer(config) for _ in range(config.encoder_layers)])\n self.layer_norm = nn.LayerNorm(embed_dim)\n\n self.gradient_checkpointing = False\n # Initialize weights and apply final processing\n self.post_init()\n\n def get_input_embeddings(self):\n return self.embed_tokens\n\n def set_input_embeddings(self, value):\n self.embed_tokens = value\n\n def forward(\n self,\n input_ids: torch.LongTensor = None,\n attention_mask: Optional[torch.Tensor] = None,\n position_ids: Optional[torch.LongTensor] = None,\n head_mask: Optional[torch.Tensor] = None,\n inputs_embeds: Optional[torch.FloatTensor] = None,\n output_attentions: Optional[bool] = None,\n output_hidden_states: Optional[bool] = None,\n return_dict: Optional[bool] = None,\n ) -> Union[Tuple, BaseModelOutput]:\n r\"\"\"\n Args:\n input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):\n Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you\n provide it.\n\n Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and\n [`PreTrainedTokenizer.__call__`] for details.\n\n [What are input IDs?](../glossary#input-ids)\n attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):\n Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:\n\n - 1 for tokens that are **not masked**,\n - 0 for tokens that are **masked**.\n\n [What are attention masks?](../glossary#attention-mask)\n head_mask (`torch.Tensor` of shape `(encoder_layers, encoder_attention_heads)`, *optional*):\n Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`:\n\n - 1 indicates the head is **not masked**,\n - 0 indicates the head is **masked**.\n\n inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):\n Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.\n This is useful if you want more control over how to convert `input_ids` indices into associated vectors\n than the model's internal embedding lookup matrix.\n output_attentions (`bool`, *optional*):\n Whether or not to return the attentions tensors of all attention layers. See `attentions` under\n returned tensors for more detail.\n output_hidden_states (`bool`, *optional*):\n Whether or not to return the hidden states of all layers. 
See `hidden_states` under returned tensors\n for more detail.\n return_dict (`bool`, *optional*):\n Whether or not to return a [`~utils.BaseModelOutput`] instead of a plain tuple.\n \"\"\"\n output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions\n output_hidden_states = (\n output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states\n )\n return_dict = return_dict if return_dict is not None else self.config.use_return_dict\n\n # retrieve input_ids and inputs_embeds\n if input_ids is not None and inputs_embeds is not None:\n raise ValueError(\"You cannot specify both input_ids and inputs_embeds at the same time\")\n elif input_ids is not None:\n input_shape = input_ids.size()\n input_ids = input_ids.view(-1, input_shape[-1])\n elif inputs_embeds is not None:\n input_shape = inputs_embeds.size()[:-1]\n else:\n raise ValueError(\"You have to specify either input_ids or inputs_embeds\")\n\n if inputs_embeds is None:\n inputs_embeds = self.embed_tokens(input_ids)\n\n seq_len = input_shape[1]\n if position_ids is None:\n position_ids = torch.arange(seq_len, dtype=torch.long, device=inputs_embeds.device).expand((1, -1))\n embed_pos = self.embed_positions(position_ids)\n\n hidden_states = inputs_embeds + embed_pos\n hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)\n\n # expand attention_mask\n if attention_mask is not None:\n # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len]\n attention_mask = _expand_mask(attention_mask, inputs_embeds.dtype)\n\n encoder_states = () if output_hidden_states else None\n all_attentions = () if output_attentions else None\n\n # check if head_mask has a correct number of layers specified if desired\n if head_mask is not None:\n if head_mask.size()[0] != (len(self.layers)):\n raise ValueError(\n f\"The head_mask should be specified for {len(self.layers)} layers, but it is for\"\n f\" {head_mask.size()[0]}.\"\n )\n\n for idx, encoder_layer in enumerate(self.layers):\n if output_hidden_states:\n encoder_states = encoder_states + (hidden_states,)\n if self.gradient_checkpointing and self.training:\n\n def create_custom_forward(module):\n def custom_forward(*inputs):\n return module(*inputs, output_attentions)\n\n return custom_forward\n\n layer_outputs = torch.utils.checkpoint.checkpoint(\n create_custom_forward(encoder_layer),\n hidden_states,\n attention_mask,\n (head_mask[idx] if head_mask is not None else None),\n )\n else:\n layer_outputs = encoder_layer(\n hidden_states,\n attention_mask,\n layer_head_mask=(head_mask[idx] if head_mask is not None else None),\n output_attentions=output_attentions,\n )\n\n hidden_states = layer_outputs[0]\n\n if output_attentions:\n all_attentions = all_attentions + (layer_outputs[1],)\n\n hidden_states = self.layer_norm(hidden_states)\n\n if output_hidden_states:\n encoder_states = encoder_states + (hidden_states,)\n\n if not return_dict:\n return tuple(v for v in [hidden_states, encoder_states, all_attentions] if v is not None)\n return BaseModelOutput(\n last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions\n )\n\n\nclass LDMBertModel(LDMBertPreTrainedModel):\n def __init__(self, config: LDMBertConfig):\n super().__init__(config)\n self.model = LDMBertEncoder(config)\n self.to_logits = nn.Linear(config.hidden_size, config.vocab_size)\n\n def forward(\n self,\n input_ids=None,\n attention_mask=None,\n position_ids=None,\n head_mask=None,\n inputs_embeds=None,\n 
output_attentions=None,\n output_hidden_states=None,\n return_dict=None,\n ):\n outputs = self.model(\n input_ids,\n attention_mask=attention_mask,\n position_ids=position_ids,\n head_mask=head_mask,\n inputs_embeds=inputs_embeds,\n output_attentions=output_attentions,\n output_hidden_states=output_hidden_states,\n return_dict=return_dict,\n )\n return outputs\n",
"path": "src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py"
}
] | diff --git a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
index 4a4f29be7f75..2efde98f772e 100644
--- a/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
+++ b/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
@@ -192,7 +192,7 @@ def __call__(
LDMBERT_PRETRAINED_CONFIG_ARCHIVE_MAP = {
- "ldm-bert": "https://huggingface.co/ldm-bert/resolve/main/config.json",
+ "ldm-bert": "https://huggingface.co/valhalla/ldm-bert/blob/main/config.json",
}
|
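The record above documents `LDMTextToImagePipeline.__call__` and its classifier-free guidance arguments (`prompt`, `num_inference_steps`, `guidance_scale`, `eta`, `output_type`). A minimal usage sketch of those documented parameters follows; the checkpoint id `CompVis/ldm-text2im-large-256` is an assumption for illustration and is not taken from the record itself.

```python
import torch
from diffusers import DiffusionPipeline

# Assumed checkpoint id for illustration; any LDM text-to-image checkpoint is used the same way.
pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
pipe = pipe.to("cuda" if torch.cuda.is_available() else "cpu")

# guidance_scale > 1 enables the classifier-free guidance path described in the docstring above.
result = pipe(
    "a painting of a squirrel eating a burger",
    num_inference_steps=50,
    guidance_scale=6.0,
    eta=0.3,
)
result.images[0].save("ldm_sample.png")
```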
hedyorg__hedy-654 | Turtle should not be shown in level 6 programs with numbers
Turtle is now shown in some cases:

Violating code:
```
nummer is 5
nummertwee is 6
getal is nummer * nummertwee
print getal
```
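For context on the report above: in the `hedy.py` snapshot that follows, `UsesTurtle` walks the parse tree and decides whether the turtle canvas should be shown. Its `__default__` returns `any(children)` whenever the first transformed child is a boolean, so any other truthy child (a leftover token or subtree rather than an explicit `True` produced by `forward`/`turn`) can mark a purely arithmetic program as a turtle program. The sketch below shows one way to make that aggregation stricter; it is an illustration, not necessarily the patch that closed this issue.

```python
from lark import Transformer

class UsesTurtle(Transformer):
    """Bottom-up check: does the parsed program contain a turtle command?"""

    def forward(self, args):
        return True

    def turn(self, args):
        return True

    def __default__(self, args, children, meta):
        # Only literal True flags (produced by forward/turn above) may bubble up.
        # Tokens, strings and sub-trees that merely happen to be truthy are ignored,
        # so arithmetic programs such as `getal is nummer * nummertwee` stay False.
        return any(child is True for child in children)
```

With this stricter rule, `UsesTurtle().transform(program_root)` (as called in `transpile_inner`) still returns `True` for programs that contain `forward` or `turn`, because those handlers return a literal `True` that propagates upward through every parent node.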
| [
{
"content": "from lark import Lark\nfrom lark.exceptions import LarkError, UnexpectedEOF, UnexpectedCharacters\nfrom lark import Tree, Transformer, visitors\nfrom os import path\nimport sys\nimport utils\nfrom collections import namedtuple\n\n\n# Some useful constants\nHEDY_MAX_LEVEL = 22\n\nreserved_words = ['and','except','lambda','with','as','finally','nonlocal','while','assert','False','None','yield','break','for','not','class','from','or','continue','global','pass','def','if','raise','del','import','return','elif','in','True','else','is','try']\n\n#\n# Commands per Hedy level which are used to suggest the closest command when kids make a mistake\n#\n\ncommands_per_level = {1: ['print', 'ask', 'echo'] ,\n 2: ['print', 'ask', 'echo', 'is'],\n 3: ['print', 'ask', 'is'],\n 4: ['print', 'ask', 'is', 'if'],\n 5: ['print', 'ask', 'is', 'if', 'repeat'],\n 6: ['print', 'ask', 'is', 'if', 'repeat'],\n 7: ['print', 'ask', 'is', 'if', 'repeat'],\n 8: ['print', 'ask', 'is', 'if', 'for'],\n 9: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 10: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 11: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 12: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 13: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 14: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 15: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 16: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 17: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 18: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 19: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 20: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 21: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 22: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while']\n }\n\n#\n# closest_command() searches for known commands in an invalid command.\n#\n# It will return the known command which is closest positioned at the beginning.\n# It will return '' if the invalid command does not contain any known command.\n#\n\ndef closest_command(invalid_command, known_commands):\n # First search for 100% match of known commands\n min_position = len(invalid_command)\n min_command = ''\n for known_command in known_commands:\n position = invalid_command.find(known_command)\n if position != -1 and position < min_position:\n min_position = position\n min_command = known_command\n\n # If not found, search for partial match of know commands\n if min_command == '':\n min_command = closest_command_with_min_distance(invalid_command, known_commands)\n\n # Check if we are not returning the found command\n # In that case we have no suggestion\n # This is to prevent \"print is not a command in Hedy level 3, did you mean print?\" error message\n\n if min_command == invalid_command:\n return None\n\n return min_command\n\n\ndef closest_command_with_min_distance(command, commands):\n #simple string distance, could be more sophisticated MACHINE LEARNING!\n min = 1000\n min_command = ''\n for c in commands:\n min_c = minimum_distance(c, command)\n if min_c < min:\n min = min_c\n min_command = c\n return min_command\n\ndef minimum_distance(s1, s2):\n \"\"\"Return string distance between 2 strings.\"\"\"\n if len(s1) > len(s2):\n s1, s2 = s2, s1\n distances = range(len(s1) + 1)\n for index2, char2 in enumerate(s2):\n new_distances = [index2 + 1]\n for index1, char1 in enumerate(s1):\n if char1 == char2:\n new_distances.append(distances[index1])\n else:\n new_distances.append(1 + min((distances[index1], distances[index1 + 1], new_distances[-1])))\n distances = 
new_distances\n return distances[-1]\n\nclass HedyException(Exception):\n def __init__(self, message, **arguments):\n self.error_code = message\n self.arguments = arguments\n\nclass ExtractAST(Transformer):\n # simplifies the tree: f.e. flattens arguments of text, var and punctuation for further processing\n def text(self, args):\n return Tree('text', [''.join([str(c) for c in args])])\n\n #level 2\n def var(self, args):\n return Tree('var', [''.join([str(c) for c in args])])\n def punctuation(self, args):\n return Tree('punctuation', [''.join([str(c) for c in args])])\n def index(self, args):\n return ''.join([str(c) for c in args])\n def list_access(self, args):\n if type(args[1]) == Tree:\n if \"random\" in args[1].data:\n return Tree('list_access', [args[0], 'random'])\n else:\n return Tree('list_access', [args[0], args[1].children[0]])\n else:\n return Tree('list_access', [args[0], args[1]])\n\n #level 5\n def number(self, args):\n return Tree('number', ''.join([str(c) for c in args]))\n\nclass AllAssignmentCommands(Transformer):\n # returns a list of variable and list access\n # so these can be excluded when printing\n\n # relevant nodes (list acces, ask, assign) are transformed into strings\n # higher in the tree (through default rule), we filter on only string arguments, of lists with string arguments\n\n def filter_ask_assign(self, args):\n ask_assign = []\n for a in args:\n # strings (vars remaining in the tree) are added directly\n if type(a) is str:\n ask_assign.append(a)\n #lists are seached further for string members (vars)\n elif type(a) is list:\n sub_a_ask_assign = self.filter_ask_assign(a)\n for sub_a in sub_a_ask_assign:\n ask_assign.append(sub_a)\n return ask_assign\n\n def for_loop(self, args):\n # for loop iterator is a var so should be added to the list of vars\n iterator = str(args[0])\n commands = args[1:]\n return [iterator] + self.filter_ask_assign(args)\n\n def input(self, args):\n #return left side of the =\n return args[0]\n\n def ask(self, args):\n #try is needed cause in level 1 sk has not variable in front\n try:\n return args[0]\n except:\n return None\n\n def assign(self, args):\n return args[0]\n\n def assign_list(self, args):\n return args[0]\n\n # list access is accessing a variable, so must be escaped\n # for example we print(dieren[1]) not print('dieren[1]')\n def list_access(self, args):\n listname = args[0][0]\n if args[1] == 'random':\n return 'random.choice(' + listname + ')'\n else:\n return listname + '[' + args[1] + ']'\n\n\n # additions Laura, to be checked for higher levels:\n def list_access_var(self, args):\n return args[0]\n def var_access(self,args):\n return args[0]\n def change_list_item(self, args):\n return args[0]\n\n def text(self, args):\n #text never contains a variable\n return None\n\n def var(self, args):\n return args\n\n def punctuation(self, args):\n #is never a variable (but should be removed from the tree or it will be seen as one!)\n return None\n\n def __default__(self, args, children, meta):\n return self.filter_ask_assign(children)\n\n\n\ndef are_all_arguments_true(args):\n bool_arguments = [x[0] for x in args]\n arguments_of_false_nodes = [x[1] for x in args if not x[0]]\n return all(bool_arguments), arguments_of_false_nodes\n\n# this class contains code shared between IsValid and IsComplete, which are quite similar\n# because both filter out some types of 'wrong' nodes\n# TODO: this could also use a default lark rule like AllAssignmentCommands does now\nclass Filter(Transformer):\n def __default__(self, args, 
children, meta):\n return are_all_arguments_true(children)\n\n def program(self, args):\n bool_arguments = [x[0] for x in args]\n if all(bool_arguments):\n return [True] #all complete\n else:\n command_num = 1\n for a in args:\n if not a[0]:\n return False, a[1], command_num\n command_num += 1\n\n #leafs are treated differently, they are True + their arguments flattened\n def var(self, args):\n return True, ''.join([str(c) for c in args])\n def random(self, args):\n return True, 'random'\n def index(self, args):\n return True, ''.join([str(c) for c in args])\n def punctuation(self, args):\n return True, ''.join([c for c in args])\n def number(self, args):\n return True, ''.join([c for c in args])\n def text(self, args):\n return all(args), ''.join([c for c in args])\n\nclass UsesTurtle(Transformer):\n # returns true if Forward or Turn are in the tree, false otherwise\n def __default__(self, args, children, meta):\n if len(children) == 0: # no children? you are a leaf that is not Turn or Forward, so you are no Turtle command\n return False\n else:\n if type(children[0]) == bool:\n return any(children) # children? if any is true there is a Turtle leaf\n else:\n return False # some nodes like text and punctuation have text children (their letters) these are not turtles\n\n def forward(self, args):\n return True\n\n def turn(self, args):\n return True\n\n\n\n\n\n\n\nclass IsValid(Filter):\n # all rules are valid except for the \"Invalid\" production rule\n # this function is used to generate more informative error messages\n # tree is transformed to a node of [Bool, args, linenumber]\n\n def invalid_space(self, args):\n # return space to indicate that line starts in a space\n return False, \" \"\n\n def print_nq(self, args):\n # return error source to indicate what went wrong\n return False, \"print without quotes\"\n\n def invalid(self, args):\n # return the first argument to place in the error message\n # TODO: this will not work for misspelling 'at', needs to be improved!\n return False, args[0][1]\n\n #other rules are inherited from Filter\n\nclass IsComplete(Filter):\n # print, ask an echo can miss arguments and then are not complete\n # used to generate more informative error messages\n # tree is transformed to a node of [True] or [False, args, line_number]\n\n def ask(self, args):\n return args != [], 'ask'\n def print(self, args):\n return args != [], 'print'\n def input(self, args):\n return args != [], 'input'\n def length(self, args):\n return args != [], 'len'\n def print_nq(self, args):\n return args != [], 'print level 2'\n def echo(self, args):\n #echo may miss an argument\n return True, 'echo'\n\n #other rules are inherited from Filter\n\nclass ConvertToPython_1(Transformer):\n\n def process_single_quote(self, value):\n # defines what happens if a kids uses ' in a string\n value = value.replace(\"'\", \"\\\\'\")\n return value\n\n\n def __init__(self, punctuation_symbols, lookup):\n self.punctuation_symbols = punctuation_symbols\n self.lookup = lookup\n\n def program(self, args):\n return '\\n'.join([str(c) for c in args])\n def command(self, args):\n return args[0]\n\n def text(self, args):\n return ''.join([str(c) for c in args])\n def print(self, args):\n # escape quotes if kids accidentally use them at level 1\n argument = self.process_single_quote(args[0])\n\n return \"print('\" + argument + \"')\"\n def echo(self, args):\n if len(args) == 0:\n return \"print(answer)\" #no arguments, just print answer\n\n argument = self.process_single_quote(args[0])\n return \"print('\" + 
argument + \"'+answer)\"\n def ask(self, args):\n argument = self.process_single_quote(args[0])\n return \"answer = input('\" + argument + \"')\"\n def forward(self,args):\n # when a not-number is given, we simply use 50 as default\n try:\n parameter = int(args[0])\n except:\n parameter = 50\n return f\"t.forward({parameter})\"\"\"\n def turn(self, args):\n if len(args) == 0:\n return \"t.right(90)\" #no arguments works, and means a right turn\n\n if args[0] == 'left':\n return \"t.left(90)\"\n else:\n return \"t.right(90)\" #something else also defaults to right turn\n\ndef wrap_non_var_in_quotes(argument, lookup):\n if argument in lookup:\n return argument\n else:\n return \"'\" + argument + \"'\"\n\nclass ConvertToPython_2(ConvertToPython_1):\n def punctuation(self, args):\n return ''.join([str(c) for c in args])\n def var(self, args):\n name = ''.join(args)\n return \"_\" + name if name in reserved_words else name\n def print(self, args):\n all_arguments_converted = []\n i = 0\n\n for argument in args:\n # escape quotes if kids accidentally use them at level 2\n argument = self.process_single_quote(argument)\n\n # final argument and punctuation arguments do not have to be separated with a space, other do\n if i == len(args)-1 or args[i+1] in self.punctuation_symbols:\n space = ''\n else:\n space = \"+' '\"\n all_arguments_converted.append(wrap_non_var_in_quotes(argument, self.lookup) + space)\n i = i + 1\n return 'print(' + '+'.join(all_arguments_converted) + ')'\n def forward(self, args):\n parameter = args[0]\n #if the parameter is a variable, print as is\n if parameter in self.lookup:\n return f\"t.forward({parameter})\"\n\n # otherwise, see if we got a number. if not, simply use 50 as default\n try:\n parameter = int(args[0])\n except:\n parameter = 50\n return f\"t.forward({parameter})\"\"\"\n\n def ask(self, args):\n var = args[0]\n all_parameters = [\"'\" + self.process_single_quote(a) + \"'\" for a in args[1:]]\n return f'{var} = input(' + '+'.join(all_parameters) + \")\"\n def assign(self, args):\n parameter = args[0]\n value = args[1]\n #if the assigned value contains single quotes, escape them\n value = self.process_single_quote(value)\n return parameter + \" = '\" + value + \"'\"\n\n def assign_list(self, args):\n parameter = args[0]\n values = [\"'\" + a + \"'\" for a in args[1:]]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def list_access(self, args):\n if args[1] == 'random':\n return 'random.choice(' + args[0] + ')'\n else:\n return args[0] + '[' + args[1] + ']'\n\n\n\n#TODO: lookuptable and punctuation chars not be needed for level2 and up anymore, could be removed\nclass ConvertToPython_3(ConvertToPython_2):\n def text(self, args):\n return ''.join([str(c) for c in args])\n def print(self, args):\n #opzoeken is nu niet meer nodig\n return \"print(\" + '+'.join(args) + ')'\n def print_nq(self, args):\n return ConvertToPython_2.print(self, args)\n def ask(self, args):\n args_new = []\n var = args[0]\n remaining_args = args[1:]\n\n return f'{var} = input(' + '+'.join(remaining_args) + \")\"\n\ndef indent(s):\n lines = s.split('\\n')\n return '\\n'.join([' ' + l for l in lines])\n\nclass ConvertToPython_4(ConvertToPython_3):\n def list_access_var(self, args):\n var = args[0]\n if args[2].data == 'random':\n return var + '=random.choice(' + args[1] + ')'\n else:\n return var + '=' + args[1] + '[' + args[2].children[0] + ']'\n\n def ifs(self, args):\n return f\"\"\"if {args[0]}:\n{indent(args[1])}\"\"\"\n\n def ifelse(self, args):\n return f\"\"\"if 
{args[0]}:\n{indent(args[1])}\nelse:\n{indent(args[2])}\"\"\"\n def condition(self, args):\n return ' and '.join(args)\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n return f\"{arg0} == {arg1}\" #no and statements\n def in_list_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n return f\"{arg0} in {arg1}\"\n\nclass ConvertToPython_5(ConvertToPython_4):\n def number(self, args):\n return ''.join(args)\n\n def repeat(self, args):\n times = wrap_non_var_in_quotes(args[0], self.lookup)\n command = args[1]\n return f\"\"\"for i in range(int({str(times)})):\n{indent(command)}\"\"\"\n\nclass ConvertToPython_6(ConvertToPython_5):\n\n def print(self, args):\n #force all to be printed as strings (since there can not be int arguments)\n args_new = []\n for a in args:\n if type(a) is Tree:\n args_new.append(f'str({a.children})')\n elif \"'\" not in a:\n args_new.append(f'str({a})')\n else:\n args_new.append(a)\n\n return \"print(\" + '+'.join(args_new) + ')'\n\n #we can now have ints as types so chck must force str\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) == str({arg1})\" #no and statements\n else:\n return f\"str({arg0}) == str({arg1}) and {args[2]}\"\n\n def assign(self, args):\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n\n def addition(self, args):\n return Tree('sum', f'int({str(args[0])}) + int({str(args[1])})')\n\n def substraction(self, args):\n return Tree('sum', f'int({str(args[0])}) - int({str(args[1])})')\n\n def multiplication(self, args):\n return Tree('sum', f'int({str(args[0])}) * int({str(args[1])})')\n\n def division(self, args):\n return Tree('sum', f'int({str(args[0])}) // int({str(args[1])})')\n\nclass ConvertToPython_7(ConvertToPython_6):\n def __init__(self, punctuation_symbols, lookup):\n self.punctuation_symbols = punctuation_symbols\n self.lookup = lookup\n\n def command(self, args):\n return \"\".join(args)\n\n def repeat(self, args):\n all_lines = [indent(x) for x in args[1:]]\n return \"for i in range(int(\" + str(args[0]) + \")):\\n\" + \"\\n\".join(all_lines)\n\n def ifs(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n\n all_lines = [indent(x) for x in args[1:]]\n\n return \"if \" + args[0] + \":\\n\" + \"\\n\".join(all_lines)\n\n def elses(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n\n all_lines = [indent(x) for x in args]\n\n return \"\\nelse:\\n\" + \"\\n\".join(all_lines)\n\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + 
\"]\"\n\n def var_access(self, args):\n if len(args) == 1: #accessing a var\n return wrap_non_var_in_quotes(args[0], self.lookup)\n # this was used to produce better error messages, but needs more work\n # (because plain text strings are now also var_access and not textwithoutspaces\n # since we no longer have priority rules\n # if args[0] in self.lookup:\n # return args[0]\n # else:\n # raise HedyException('VarUndefined', level=7, name=args[0])\n else:\n # dit was list_access\n return args[0] + \"[\" + str(args[1]) + \"]\" if type(args[1]) is not Tree else \"random.choice(\" + str(args[0]) + \")\"\n\nclass ConvertToPython_8(ConvertToPython_7):\n def for_loop(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[3:]]\n return \"for \" + args[0] + \" in range(\" + \"int(\" + args[1] + \")\" + \", \" + \"int(\" + args[2] + \")+1\" + \"):\\n\"+\"\\n\".join(all_lines)\n\nclass ConvertToPython_9_10(ConvertToPython_8):\n def elifs(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[1:]]\n return \"\\nelif \" + args[0] + \":\\n\" + \"\\n\".join(all_lines)\n\nclass ConvertToPython_11(ConvertToPython_9_10):\n def input(self, args):\n args_new = []\n var = args[0]\n for a in args[1:]:\n if type(a) is Tree:\n args_new.append(f'str({a.children})')\n elif \"'\" not in a:\n args_new.append(f'str({a})')\n else:\n args_new.append(a)\n\n return f'{var} = input(' + '+'.join(args_new) + \")\"\n\nclass ConvertToPython_12(ConvertToPython_11):\n def assign_list(self, args):\n parameter = args[0]\n values = [a for a in args[1:]]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def list_access_var(self, args):\n var = args[0]\n if not isinstance(args[2], str):\n if args[2].data == 'random':\n return var + '=random.choice(' + args[1] + ')'\n else:\n return var + '=' + args[1] + '[' + args[2] + '-1]'\n\n def list_access(self, args):\n if args[1] == 'random':\n return 'random.choice(' + args[0] + ')'\n else:\n return args[0] + '[' + args[1] + '-1]'\n\n def change_list_item(self, args):\n return args[0] + '[' + args[1] + '-1] = ' + args[2]\n# Custom transformer that can both be used bottom-up or top-down\n\nclass ConvertToPython_13(ConvertToPython_12):\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n else:\n # FH, June 21 the addition of _true/false is a bit of a hack. 
cause they are first seen as vars that at reserved words, they egt and _ and we undo that here.\n # could/should be fixed in the grammar!\n if value == 'true' or value == 'True' or value == '_True':\n return parameter + \" = True\"\n elif value == 'false' or value == 'False' or value == '_False':\n return parameter + \" = False\"\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if arg1 == '\\'True\\'' or arg1 == '\\'true\\'':\n return f\"{arg0} == True\"\n elif arg1 == '\\'False\\'' or arg1 == '\\'false\\'':\n return f\"{arg0} == False\"\n else:\n return f\"str({arg0}) == str({arg1})\" #no and statements\n\nclass ConvertToPython_14(ConvertToPython_13):\n def andcondition(self, args):\n return ' and '.join(args)\n def orcondition(self, args):\n return ' or '.join(args)\n\nclass ConvertToPython_15(ConvertToPython_14):\n def comment(self, args):\n return f\"# {args}\"\n\nclass ConvertToPython_16(ConvertToPython_15):\n def smaller(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) < str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) < str({arg1}) and {args[2]}\"\n\n def bigger(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) > str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) > str({arg1}) and {args[2]}\"\n\nclass ConvertToPython_17(ConvertToPython_16):\n def while_loop(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[1:]]\n return \"while \" + args[0] + \":\\n\"+\"\\n\".join(all_lines)\n\nclass ConvertToPython_18_19(ConvertToPython_17):\n def length(self, args):\n arg0 = args[0]\n return f\"len({arg0})\"\n\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n elif \"len(\" in value:\n return parameter + \" = \" + value\n else:\n if value == 'true' or value == 'True':\n return parameter + \" = True\"\n elif value == 'false' or value == 'False':\n return parameter + \" = False\"\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\nclass ConvertToPython_20(ConvertToPython_18_19):\n def equality_check(self, args):\n if type(args[0]) is Tree:\n return args[0].children + \" == int(\" + args[1] + \")\"\n if type(args[1]) is Tree:\n return \"int(\" + args[0] + \") == \" + args[1].children\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if arg1 == '\\'True\\'' or arg1 == '\\'true\\'':\n return f\"{arg0} == True\"\n elif arg1 == '\\'False\\'' or arg1 == '\\'false\\'':\n return f\"{arg0} == False\"\n else:\n return f\"str({arg0}) == str({arg1})\" # no and statements\n\nclass 
ConvertToPython_21(ConvertToPython_20):\n def not_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) != str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) != str({arg1}) and {args[2]}\"\n\nclass ConvertToPython_22(ConvertToPython_21):\n def smaller_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) <= str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) <= str({arg1}) and {args[2]}\"\n\n def bigger_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) >= str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) >= str({arg1}) and {args[2]}\"\n\n\ndef merge_grammars(grammar_text_1, grammar_text_2):\n # this function takes two grammar files and merges them into one\n # rules that are redefined in the second file are overridden\n # rule that are new in the second file are added (remaining_rules_grammar_2)\n\n merged_grammar = []\n\n rules_grammar_1 = grammar_text_1.split('\\n')\n remaining_rules_grammar_2 = grammar_text_2.split('\\n')\n for line_1 in rules_grammar_1:\n if line_1 == '' or line_1[0] == '/': #skip comments and empty lines:\n continue\n parts = line_1.split(':')\n name_1, definition_1 = parts[0], ''.join(parts[1:]) #get part before are after : (this is a join because there can be : in the rule)\n\n rules_grammar_2 = grammar_text_2.split('\\n')\n override_found = False\n for line_2 in rules_grammar_2:\n if line_2 == '' or line_2[0] == '/': # skip comments and empty lines:\n continue\n parts = line_2.split(':')\n name_2, definition_2 = parts[0], ''.join(parts[1]) #get part before are after :\n if name_1 == name_2:\n override_found = True\n new_rule = line_2\n # this rule is now in the grammar, remove form this list\n remaining_rules_grammar_2.remove(new_rule)\n break\n\n # new rule found? print that. nothing found? 
print org rule\n if override_found:\n merged_grammar.append(new_rule)\n else:\n merged_grammar.append(line_1)\n\n #all rules that were not overlapping are new in the grammar, add these too\n for rule in remaining_rules_grammar_2:\n if not(rule == '' or rule[0] == '/'):\n merged_grammar.append(rule)\n\n merged_grammar = sorted(merged_grammar)\n return '\\n'.join(merged_grammar)\n\n\ndef create_grammar(level, sub):\n # Load Lark grammars relative to directory of current file\n script_dir = path.abspath(path.dirname(__file__))\n\n # Load Lark grammars relative to directory of current file\n script_dir = path.abspath(path.dirname(__file__))\n\n # we start with creating the grammar for level 1\n grammar_text_1 = get_full_grammar_for_level(1)\n \n if sub:\n #grep\n if level == 1:\n # this is a level 1 sublevel, so get the sublevel grammar and return\n grammar_text_sub = get_additional_rules_for_level(1, sub)\n grammar_text = merge_grammars(grammar_text_1, grammar_text_sub)\n return grammar_text\n\n grammar_text_2 = get_additional_rules_for_level(2)\n\n #start at 1 and keep merging new grammars in\n new = merge_grammars(grammar_text_1, grammar_text_2)\n\n for i in range(3, level+1):\n grammar_text_i = get_additional_rules_for_level(i)\n new = merge_grammars(new, grammar_text_i)\n\n # get grammar for the sublevel and merge it\n grammar_text_sub = get_additional_rules_for_level(level, sub)\n new = merge_grammars(new, grammar_text_sub)\n \n # ready? Save to file to ease debugging\n # this could also be done on each merge for performance reasons\n filename = \"level\" + str(level) + \"-\" + str(sub) + \"-Total.lark\"\n loc = path.join(script_dir, \"grammars-Total\", filename)\n file = open(loc, \"w\", encoding=\"utf-8\")\n file.write(new)\n file.close()\n else:\n #grep\n if level == 1:\n grammar_text = get_full_grammar_for_level(level)\n return grammar_text\n\n grammar_text_2 = get_additional_rules_for_level(2)\n\n #start at 1 and keep merging new grammars in\n new = merge_grammars(grammar_text_1, grammar_text_2)\n\n for i in range(3, level+1):\n grammar_text_i = get_additional_rules_for_level(i)\n new = merge_grammars(new, grammar_text_i)\n\n # ready? 
Save to file to ease debugging\n # this could also be done on each merge for performance reasons\n filename = \"level\" + str(level) + \"-Total.lark\"\n loc = path.join(script_dir, \"grammars-Total\", filename)\n file = open(loc, \"w\", encoding=\"utf-8\")\n file.write(new)\n file.close()\n\n return new\n\ndef get_additional_rules_for_level(level, sub = 0):\n script_dir = path.abspath(path.dirname(__file__))\n if sub:\n filename = \"level\" + str(level) + \"-\" + str(sub) + \"-Additions.lark\"\n else:\n filename = \"level\" + str(level) + \"-Additions.lark\"\n with open(path.join(script_dir, \"grammars\", filename), \"r\", encoding=\"utf-8\") as file:\n grammar_text = file.read()\n return grammar_text\n\ndef get_full_grammar_for_level(level):\n script_dir = path.abspath(path.dirname(__file__))\n filename = \"level\" + str(level) + \".lark\"\n with open(path.join(script_dir, \"grammars\", filename), \"r\", encoding=\"utf-8\") as file:\n grammar_text = file.read()\n return grammar_text\n\nPARSER_CACHE = {}\n\n\ndef get_parser(level, sub):\n \"\"\"Return the Lark parser for a given level.\n\n Uses caching if Hedy is NOT running in development mode.\n \"\"\"\n key = str(level) + \".\" + str(sub)\n existing = PARSER_CACHE.get(key)\n if existing and not utils.is_debug_mode():\n return existing\n grammar = create_grammar(level, sub)\n ret = Lark(grammar)\n PARSER_CACHE[key] = ret\n return ret\n\nParseResult = namedtuple('ParseResult', ['code', 'has_turtle'])\n\ndef transpile(input_string, level, sub = 0):\n try:\n input_string = input_string.replace('\\r\\n', '\\n')\n transpile_result = transpile_inner(input_string, level, sub)\n return transpile_result\n except Exception as E:\n # This is the 'fall back' transpilation\n # that should surely be improved!!\n # we retry HedyExceptions of the type Parse (and Lark Errors) but we raise Invalids\n if E.args[0] == 'Parse':\n #try 1 level lower\n if level > 1 and sub == 0:\n try:\n new_level = level - 1\n result = transpile_inner(input_string, new_level, sub)\n except (LarkError, HedyException) as innerE:\n # Parse at `level - 1` failed as well, just re-raise original error\n raise E\n # If the parse at `level - 1` succeeded, then a better error is \"wrong level\"\n raise HedyException('Wrong Level', correct_code=result, original_level=level, working_level=new_level) from E\n raise E\n\ndef repair(input_string):\n #the only repair we can do now is remove leading spaces, more can be added!\n return '\\n'.join([x.lstrip() for x in input_string.split('\\n')])\n\ndef translate_characters(s):\n# this method is used to make it more clear to kids what is meant in error messages\n# for example ' ' is hard to read, space is easier\n# this could (should?) 
be localized so we can call a ' \"Hoge komma\" for example (Felienne, dd Feb 25, 2021)\n if s == ' ':\n return 'space'\n elif s == ',':\n return 'comma'\n elif s == '?':\n return 'question mark'\n elif s == '\\\\n':\n return 'newline'\n elif s == '.':\n return 'period'\n elif s == '!':\n return 'exclamation mark'\n elif s == '*':\n return 'star'\n elif s == \"'\":\n return 'single quotes'\n elif s == '\"':\n return 'double quotes'\n elif s == '/':\n return 'slash'\n elif s == '-':\n return 'dash'\n elif s >= 'a' and s <= 'z' or s >= 'A' and s <= 'Z':\n return s\n else:\n return s\n\ndef filter_and_translate_terminals(list):\n # in giving error messages, it does not make sense to include\n # ANONs, and some things like EOL need kid friendly translations\n new_terminals = []\n for terminal in list:\n if terminal[:4] == \"ANON\":\n continue\n\n if terminal == \"EOL\":\n new_terminals.append(\"Newline\")\n break\n\n #not translated or filtered out? simply add as is:\n new_terminals.append(terminal)\n\n return new_terminals\n\ndef beautify_parse_error(error_message):\n character_found = error_message.split(\"'\")[1]\n character_found = translate_characters(character_found)\n return character_found\n\ndef find_indent_length(line):\n number_of_spaces = 0\n for x in line:\n if x == ' ':\n number_of_spaces += 1\n else:\n break\n return number_of_spaces\n\ndef preprocess_blocks(code):\n processed_code = []\n lines = code.split(\"\\n\")\n current_number_of_indents = 0\n previous_number_of_indents = 0\n indent_size = None #we don't fix indent size but the first encounter sets it\n for line in lines:\n leading_spaces = find_indent_length(line)\n\n #first encounter sets indent size for this program\n if indent_size == None and leading_spaces > 0:\n indent_size = leading_spaces\n\n #calculate nuber of indents if possible\n if indent_size != None:\n current_number_of_indents = leading_spaces // indent_size\n\n if current_number_of_indents < previous_number_of_indents:\n # we springen 'terug' dus er moeten end-blocken in\n # bij meerdere terugsprongen sluiten we ook meerdere blokken\n\n difference_in_indents = (previous_number_of_indents - current_number_of_indents)\n\n for i in range(difference_in_indents):\n processed_code.append('end-block')\n\n #save to compare for next line\n previous_number_of_indents = current_number_of_indents\n\n #if indent remains the same, do nothing, just add line\n processed_code.append(line)\n\n # if the last line is indented, the end of the program is also the end of all indents\n # so close all blocks\n for i in range(current_number_of_indents):\n processed_code.append('end-block')\n return \"\\n\".join(processed_code)\n\n\ndef transpile_inner(input_string, level, sub = 0):\n punctuation_symbols = ['!', '?', '.']\n level = int(level)\n\n parser = get_parser(level, sub)\n\n if level >= 7:\n input_string = preprocess_blocks(input_string)\n # print(input_string)\n\n try:\n program_root = parser.parse(input_string+ '\\n').children[0] # getting rid of the root could also be done in the transformer would be nicer\n abstract_syntaxtree = ExtractAST().transform(program_root)\n lookup_table = AllAssignmentCommands().transform(abstract_syntaxtree)\n\n except UnexpectedCharacters as e:\n try:\n location = e.line, e.column\n characters_expected = str(e.allowed) #not yet in use, could be used in the future (when our parser rules are better organize, now it says ANON*__12 etc way too often!)\n character_found = beautify_parse_error(e.args[0])\n # print(e.args[0])\n # print(location, 
character_found, characters_expected)\n raise HedyException('Parse', level=level, location=location, character_found=character_found) from e\n except UnexpectedEOF:\n # this one can't be beautified (for now), so give up :)\n raise e\n\n # IsValid returns (True,) or (False, args, line)\n is_valid = IsValid().transform(program_root)\n\n if not is_valid[0]:\n _, args, line = is_valid\n\n # Apparently, sometimes 'args' is a string, sometimes it's a list of\n # strings ( are these production rule names?). If it's a list of\n # strings, just take the first string and proceed.\n if isinstance(args, list):\n args = args[0]\n if args == ' ':\n #the error here is a space at the beginning of a line, we can fix that!\n fixed_code = repair(input_string)\n if fixed_code != input_string: #only if we have made a successful fix\n result = transpile_inner(fixed_code, level, sub)\n raise HedyException('Invalid Space', level=level, line_number=line, fixed_code = result)\n elif args == 'print without quotes':\n # grammar rule is ignostic of line number so we can't easily return that here\n raise HedyException('Unquoted Text', level=level)\n else:\n invalid_command = args\n closest = closest_command(invalid_command, commands_per_level[level])\n if closest == None: #we couldn't find a suggestion because the command itself was found\n # clearly the error message here should be better or it should be a different one!\n raise HedyException('Parse', level=level, location=[\"?\", \"?\"], keyword_found=invalid_command)\n raise HedyException('Invalid', invalid_command=invalid_command, level=level, guessed_command=closest)\n\n is_complete = IsComplete().transform(program_root)\n if not is_complete[0]:\n incomplete_command = is_complete[1]\n line = is_complete[2]\n raise HedyException('Incomplete', incomplete_command=incomplete_command, level=level, line_number=line)\n\n if level == 1:\n python = ConvertToPython_1(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 2:\n python = ConvertToPython_2(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 3:\n python = ConvertToPython_3(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 4:\n # Sublevel has the same grammar\n python = ConvertToPython_4(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 5:\n python = ConvertToPython_5(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 6:\n python = ConvertToPython_6(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 7:\n python = ConvertToPython_7(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 8:\n # Sublevel has the same conversion\n python = ConvertToPython_8(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 9:\n python = ConvertToPython_9_10(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 10:\n # Code does not change for nesting\n python = ConvertToPython_9_10(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 11:\n python = ConvertToPython_11(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 12:\n python = ConvertToPython_12(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 13:\n python = ConvertToPython_13(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 14:\n python = ConvertToPython_14(punctuation_symbols, 
lookup_table).transform(abstract_syntaxtree)\n elif level == 15:\n python = ConvertToPython_15(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 16:\n python = ConvertToPython_16(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 17:\n python = ConvertToPython_17(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 18 or level == 19:\n python = ConvertToPython_18_19(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 20:\n python = ConvertToPython_20(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 21:\n python = ConvertToPython_21(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 22:\n python = ConvertToPython_22(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n else:\n raise Exception('Levels over 22 are not implemented yet')\n\n has_turtle = UsesTurtle().transform(program_root)\n\n return ParseResult(python, has_turtle)\n\ndef execute(input_string, level):\n python = transpile(input_string, level)\n exec(python)\n\n# f = open('output.py', 'w+')\n# f.write(python)\n# f.close()\n",
"path": "hedy.py"
}
] | [
{
"content": "from lark import Lark\nfrom lark.exceptions import LarkError, UnexpectedEOF, UnexpectedCharacters\nfrom lark import Tree, Transformer, visitors\nfrom os import path\nimport sys\nimport utils\nfrom collections import namedtuple\n\n\n# Some useful constants\nHEDY_MAX_LEVEL = 22\n\nreserved_words = ['and','except','lambda','with','as','finally','nonlocal','while','assert','False','None','yield','break','for','not','class','from','or','continue','global','pass','def','if','raise','del','import','return','elif','in','True','else','is','try']\n\n#\n# Commands per Hedy level which are used to suggest the closest command when kids make a mistake\n#\n\ncommands_per_level = {1: ['print', 'ask', 'echo'] ,\n 2: ['print', 'ask', 'echo', 'is'],\n 3: ['print', 'ask', 'is'],\n 4: ['print', 'ask', 'is', 'if'],\n 5: ['print', 'ask', 'is', 'if', 'repeat'],\n 6: ['print', 'ask', 'is', 'if', 'repeat'],\n 7: ['print', 'ask', 'is', 'if', 'repeat'],\n 8: ['print', 'ask', 'is', 'if', 'for'],\n 9: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 10: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 11: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 12: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 13: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 14: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 15: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 16: ['print', 'ask', 'is', 'if', 'for', 'elif'],\n 17: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 18: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 19: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 20: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 21: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while'],\n 22: ['print', 'ask', 'is', 'if', 'for', 'elif', 'while']\n }\n\n#\n# closest_command() searches for known commands in an invalid command.\n#\n# It will return the known command which is closest positioned at the beginning.\n# It will return '' if the invalid command does not contain any known command.\n#\n\ndef closest_command(invalid_command, known_commands):\n # First search for 100% match of known commands\n min_position = len(invalid_command)\n min_command = ''\n for known_command in known_commands:\n position = invalid_command.find(known_command)\n if position != -1 and position < min_position:\n min_position = position\n min_command = known_command\n\n # If not found, search for partial match of know commands\n if min_command == '':\n min_command = closest_command_with_min_distance(invalid_command, known_commands)\n\n # Check if we are not returning the found command\n # In that case we have no suggestion\n # This is to prevent \"print is not a command in Hedy level 3, did you mean print?\" error message\n\n if min_command == invalid_command:\n return None\n\n return min_command\n\n\ndef closest_command_with_min_distance(command, commands):\n #simple string distance, could be more sophisticated MACHINE LEARNING!\n min = 1000\n min_command = ''\n for c in commands:\n min_c = minimum_distance(c, command)\n if min_c < min:\n min = min_c\n min_command = c\n return min_command\n\ndef minimum_distance(s1, s2):\n \"\"\"Return string distance between 2 strings.\"\"\"\n if len(s1) > len(s2):\n s1, s2 = s2, s1\n distances = range(len(s1) + 1)\n for index2, char2 in enumerate(s2):\n new_distances = [index2 + 1]\n for index1, char1 in enumerate(s1):\n if char1 == char2:\n new_distances.append(distances[index1])\n else:\n new_distances.append(1 + min((distances[index1], distances[index1 + 1], new_distances[-1])))\n distances = 
new_distances\n return distances[-1]\n\nclass HedyException(Exception):\n def __init__(self, message, **arguments):\n self.error_code = message\n self.arguments = arguments\n\nclass ExtractAST(Transformer):\n # simplifies the tree: f.e. flattens arguments of text, var and punctuation for further processing\n def text(self, args):\n return Tree('text', [''.join([str(c) for c in args])])\n\n #level 2\n def var(self, args):\n return Tree('var', [''.join([str(c) for c in args])])\n def punctuation(self, args):\n return Tree('punctuation', [''.join([str(c) for c in args])])\n def index(self, args):\n return ''.join([str(c) for c in args])\n def list_access(self, args):\n if type(args[1]) == Tree:\n if \"random\" in args[1].data:\n return Tree('list_access', [args[0], 'random'])\n else:\n return Tree('list_access', [args[0], args[1].children[0]])\n else:\n return Tree('list_access', [args[0], args[1]])\n\n #level 5\n def number(self, args):\n return Tree('number', ''.join([str(c) for c in args]))\n\nclass AllAssignmentCommands(Transformer):\n # returns a list of variable and list access\n # so these can be excluded when printing\n\n # relevant nodes (list acces, ask, assign) are transformed into strings\n # higher in the tree (through default rule), we filter on only string arguments, of lists with string arguments\n\n def filter_ask_assign(self, args):\n ask_assign = []\n for a in args:\n # strings (vars remaining in the tree) are added directly\n if type(a) is str:\n ask_assign.append(a)\n #lists are seached further for string members (vars)\n elif type(a) is list:\n sub_a_ask_assign = self.filter_ask_assign(a)\n for sub_a in sub_a_ask_assign:\n ask_assign.append(sub_a)\n return ask_assign\n\n def for_loop(self, args):\n # for loop iterator is a var so should be added to the list of vars\n iterator = str(args[0])\n commands = args[1:]\n return [iterator] + self.filter_ask_assign(args)\n\n def input(self, args):\n #return left side of the =\n return args[0]\n\n def ask(self, args):\n #try is needed cause in level 1 sk has not variable in front\n try:\n return args[0]\n except:\n return None\n\n def assign(self, args):\n return args[0]\n\n def assign_list(self, args):\n return args[0]\n\n # list access is accessing a variable, so must be escaped\n # for example we print(dieren[1]) not print('dieren[1]')\n def list_access(self, args):\n listname = args[0][0]\n if args[1] == 'random':\n return 'random.choice(' + listname + ')'\n else:\n return listname + '[' + args[1] + ']'\n\n\n # additions Laura, to be checked for higher levels:\n def list_access_var(self, args):\n return args[0]\n def var_access(self,args):\n return args[0]\n def change_list_item(self, args):\n return args[0]\n\n def text(self, args):\n #text never contains a variable\n return None\n\n def var(self, args):\n return args\n\n def punctuation(self, args):\n #is never a variable (but should be removed from the tree or it will be seen as one!)\n return None\n\n def __default__(self, args, children, meta):\n return self.filter_ask_assign(children)\n\n\n\ndef are_all_arguments_true(args):\n bool_arguments = [x[0] for x in args]\n arguments_of_false_nodes = [x[1] for x in args if not x[0]]\n return all(bool_arguments), arguments_of_false_nodes\n\n# this class contains code shared between IsValid and IsComplete, which are quite similar\n# because both filter out some types of 'wrong' nodes\n# TODO: this could also use a default lark rule like AllAssignmentCommands does now\nclass Filter(Transformer):\n def __default__(self, args, 
children, meta):\n return are_all_arguments_true(children)\n\n def program(self, args):\n bool_arguments = [x[0] for x in args]\n if all(bool_arguments):\n return [True] #all complete\n else:\n command_num = 1\n for a in args:\n if not a[0]:\n return False, a[1], command_num\n command_num += 1\n\n #leafs are treated differently, they are True + their arguments flattened\n def var(self, args):\n return True, ''.join([str(c) for c in args])\n def random(self, args):\n return True, 'random'\n def index(self, args):\n return True, ''.join([str(c) for c in args])\n def punctuation(self, args):\n return True, ''.join([c for c in args])\n def number(self, args):\n return True, ''.join([c for c in args])\n def text(self, args):\n return all(args), ''.join([c for c in args])\n\nclass UsesTurtle(Transformer):\n # returns true if Forward or Turn are in the tree, false otherwise\n def __default__(self, args, children, meta):\n if len(children) == 0: # no children? you are a leaf that is not Turn or Forward, so you are no Turtle command\n return False\n else:\n if type(children[0]) == bool:\n return any(children) # children? if any is true there is a Turtle leaf\n else:\n return False # some nodes like text and punctuation have text children (their letters) these are not turtles\n\n def forward(self, args):\n return True\n\n def turn(self, args):\n return True\n\n # somehow a token (or only this token?) is not picked up by the default rule so it needs\n # its own rule\n def NUMBER(self, args):\n return False\n\n def NAME(self, args):\n return False\n\n\n\n\n\nclass IsValid(Filter):\n # all rules are valid except for the \"Invalid\" production rule\n # this function is used to generate more informative error messages\n # tree is transformed to a node of [Bool, args, linenumber]\n\n def invalid_space(self, args):\n # return space to indicate that line starts in a space\n return False, \" \"\n\n def print_nq(self, args):\n # return error source to indicate what went wrong\n return False, \"print without quotes\"\n\n def invalid(self, args):\n # return the first argument to place in the error message\n # TODO: this will not work for misspelling 'at', needs to be improved!\n return False, args[0][1]\n\n #other rules are inherited from Filter\n\nclass IsComplete(Filter):\n # print, ask an echo can miss arguments and then are not complete\n # used to generate more informative error messages\n # tree is transformed to a node of [True] or [False, args, line_number]\n\n def ask(self, args):\n return args != [], 'ask'\n def print(self, args):\n return args != [], 'print'\n def input(self, args):\n return args != [], 'input'\n def length(self, args):\n return args != [], 'len'\n def print_nq(self, args):\n return args != [], 'print level 2'\n def echo(self, args):\n #echo may miss an argument\n return True, 'echo'\n\n #other rules are inherited from Filter\n\nclass ConvertToPython_1(Transformer):\n\n def process_single_quote(self, value):\n # defines what happens if a kids uses ' in a string\n value = value.replace(\"'\", \"\\\\'\")\n return value\n\n\n def __init__(self, punctuation_symbols, lookup):\n self.punctuation_symbols = punctuation_symbols\n self.lookup = lookup\n\n def program(self, args):\n return '\\n'.join([str(c) for c in args])\n def command(self, args):\n return args[0]\n\n def text(self, args):\n return ''.join([str(c) for c in args])\n def print(self, args):\n # escape quotes if kids accidentally use them at level 1\n argument = self.process_single_quote(args[0])\n\n return \"print('\" + 
argument + \"')\"\n def echo(self, args):\n if len(args) == 0:\n return \"print(answer)\" #no arguments, just print answer\n\n argument = self.process_single_quote(args[0])\n return \"print('\" + argument + \"'+answer)\"\n def ask(self, args):\n argument = self.process_single_quote(args[0])\n return \"answer = input('\" + argument + \"')\"\n def forward(self,args):\n # when a not-number is given, we simply use 50 as default\n try:\n parameter = int(args[0])\n except:\n parameter = 50\n return f\"t.forward({parameter})\"\"\"\n def turn(self, args):\n if len(args) == 0:\n return \"t.right(90)\" #no arguments works, and means a right turn\n\n if args[0] == 'left':\n return \"t.left(90)\"\n else:\n return \"t.right(90)\" #something else also defaults to right turn\n\ndef wrap_non_var_in_quotes(argument, lookup):\n if argument in lookup:\n return argument\n else:\n return \"'\" + argument + \"'\"\n\nclass ConvertToPython_2(ConvertToPython_1):\n def punctuation(self, args):\n return ''.join([str(c) for c in args])\n def var(self, args):\n name = ''.join(args)\n return \"_\" + name if name in reserved_words else name\n def print(self, args):\n all_arguments_converted = []\n i = 0\n\n for argument in args:\n # escape quotes if kids accidentally use them at level 2\n argument = self.process_single_quote(argument)\n\n # final argument and punctuation arguments do not have to be separated with a space, other do\n if i == len(args)-1 or args[i+1] in self.punctuation_symbols:\n space = ''\n else:\n space = \"+' '\"\n all_arguments_converted.append(wrap_non_var_in_quotes(argument, self.lookup) + space)\n i = i + 1\n return 'print(' + '+'.join(all_arguments_converted) + ')'\n def forward(self, args):\n parameter = args[0]\n #if the parameter is a variable, print as is\n if parameter in self.lookup:\n return f\"t.forward({parameter})\"\n\n # otherwise, see if we got a number. 
if not, simply use 50 as default\n try:\n parameter = int(args[0])\n except:\n parameter = 50\n return f\"t.forward({parameter})\"\"\"\n\n def ask(self, args):\n var = args[0]\n all_parameters = [\"'\" + self.process_single_quote(a) + \"'\" for a in args[1:]]\n return f'{var} = input(' + '+'.join(all_parameters) + \")\"\n def assign(self, args):\n parameter = args[0]\n value = args[1]\n #if the assigned value contains single quotes, escape them\n value = self.process_single_quote(value)\n return parameter + \" = '\" + value + \"'\"\n\n def assign_list(self, args):\n parameter = args[0]\n values = [\"'\" + a + \"'\" for a in args[1:]]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def list_access(self, args):\n if args[1] == 'random':\n return 'random.choice(' + args[0] + ')'\n else:\n return args[0] + '[' + args[1] + ']'\n\n\n\n#TODO: lookuptable and punctuation chars not be needed for level2 and up anymore, could be removed\nclass ConvertToPython_3(ConvertToPython_2):\n def text(self, args):\n return ''.join([str(c) for c in args])\n def print(self, args):\n #opzoeken is nu niet meer nodig\n return \"print(\" + '+'.join(args) + ')'\n def print_nq(self, args):\n return ConvertToPython_2.print(self, args)\n def ask(self, args):\n args_new = []\n var = args[0]\n remaining_args = args[1:]\n\n return f'{var} = input(' + '+'.join(remaining_args) + \")\"\n\ndef indent(s):\n lines = s.split('\\n')\n return '\\n'.join([' ' + l for l in lines])\n\nclass ConvertToPython_4(ConvertToPython_3):\n def list_access_var(self, args):\n var = args[0]\n if args[2].data == 'random':\n return var + '=random.choice(' + args[1] + ')'\n else:\n return var + '=' + args[1] + '[' + args[2].children[0] + ']'\n\n def ifs(self, args):\n return f\"\"\"if {args[0]}:\n{indent(args[1])}\"\"\"\n\n def ifelse(self, args):\n return f\"\"\"if {args[0]}:\n{indent(args[1])}\nelse:\n{indent(args[2])}\"\"\"\n def condition(self, args):\n return ' and '.join(args)\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n return f\"{arg0} == {arg1}\" #no and statements\n def in_list_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n return f\"{arg0} in {arg1}\"\n\nclass ConvertToPython_5(ConvertToPython_4):\n def number(self, args):\n return ''.join(args)\n\n def repeat(self, args):\n times = wrap_non_var_in_quotes(args[0], self.lookup)\n command = args[1]\n return f\"\"\"for i in range(int({str(times)})):\n{indent(command)}\"\"\"\n\nclass ConvertToPython_6(ConvertToPython_5):\n\n def print(self, args):\n #force all to be printed as strings (since there can not be int arguments)\n args_new = []\n for a in args:\n if type(a) is Tree:\n args_new.append(f'str({a.children})')\n elif \"'\" not in a:\n args_new.append(f'str({a})')\n else:\n args_new.append(a)\n\n return \"print(\" + '+'.join(args_new) + ')'\n\n #we can now have ints as types so chck must force str\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) == str({arg1})\" #no and statements\n else:\n return f\"str({arg0}) == str({arg1}) and {args[2]}\"\n\n def assign(self, args):\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n return parameter + \" = '\" + value + \"'\"\n 
else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n\n def addition(self, args):\n return Tree('sum', f'int({str(args[0])}) + int({str(args[1])})')\n\n def substraction(self, args):\n return Tree('sum', f'int({str(args[0])}) - int({str(args[1])})')\n\n def multiplication(self, args):\n return Tree('sum', f'int({str(args[0])}) * int({str(args[1])})')\n\n def division(self, args):\n return Tree('sum', f'int({str(args[0])}) // int({str(args[1])})')\n\nclass ConvertToPython_7(ConvertToPython_6):\n def __init__(self, punctuation_symbols, lookup):\n self.punctuation_symbols = punctuation_symbols\n self.lookup = lookup\n\n def command(self, args):\n return \"\".join(args)\n\n def repeat(self, args):\n all_lines = [indent(x) for x in args[1:]]\n return \"for i in range(int(\" + str(args[0]) + \")):\\n\" + \"\\n\".join(all_lines)\n\n def ifs(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n\n all_lines = [indent(x) for x in args[1:]]\n\n return \"if \" + args[0] + \":\\n\" + \"\\n\".join(all_lines)\n\n def elses(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n\n all_lines = [indent(x) for x in args]\n\n return \"\\nelse:\\n\" + \"\\n\".join(all_lines)\n\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def var_access(self, args):\n if len(args) == 1: #accessing a var\n return wrap_non_var_in_quotes(args[0], self.lookup)\n # this was used to produce better error messages, but needs more work\n # (because plain text strings are now also var_access and not textwithoutspaces\n # since we no longer have priority rules\n # if args[0] in self.lookup:\n # return args[0]\n # else:\n # raise HedyException('VarUndefined', level=7, name=args[0])\n else:\n # dit was list_access\n return args[0] + \"[\" + str(args[1]) + \"]\" if type(args[1]) is not Tree else \"random.choice(\" + str(args[0]) + \")\"\n\nclass ConvertToPython_8(ConvertToPython_7):\n def for_loop(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[3:]]\n return \"for \" + args[0] + \" in range(\" + \"int(\" + args[1] + \")\" + \", \" + \"int(\" + args[2] + \")+1\" + \"):\\n\"+\"\\n\".join(all_lines)\n\nclass ConvertToPython_9_10(ConvertToPython_8):\n def elifs(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[1:]]\n return \"\\nelif \" + args[0] + \":\\n\" + \"\\n\".join(all_lines)\n\nclass ConvertToPython_11(ConvertToPython_9_10):\n def input(self, args):\n args_new = []\n var = args[0]\n for a in args[1:]:\n if type(a) is Tree:\n args_new.append(f'str({a.children})')\n elif \"'\" not in a:\n args_new.append(f'str({a})')\n else:\n args_new.append(a)\n\n return f'{var} = input(' + '+'.join(args_new) + \")\"\n\nclass ConvertToPython_12(ConvertToPython_11):\n def assign_list(self, args):\n parameter = args[0]\n values = [a for a in args[1:]]\n return parameter + \" = [\" 
+ \", \".join(values) + \"]\"\n\n def list_access_var(self, args):\n var = args[0]\n if not isinstance(args[2], str):\n if args[2].data == 'random':\n return var + '=random.choice(' + args[1] + ')'\n else:\n return var + '=' + args[1] + '[' + args[2] + '-1]'\n\n def list_access(self, args):\n if args[1] == 'random':\n return 'random.choice(' + args[0] + ')'\n else:\n return args[0] + '[' + args[1] + '-1]'\n\n def change_list_item(self, args):\n return args[0] + '[' + args[1] + '-1] = ' + args[2]\n# Custom transformer that can both be used bottom-up or top-down\n\nclass ConvertToPython_13(ConvertToPython_12):\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n else:\n # FH, June 21 the addition of _true/false is a bit of a hack. cause they are first seen as vars that at reserved words, they egt and _ and we undo that here.\n # could/should be fixed in the grammar!\n if value == 'true' or value == 'True' or value == '_True':\n return parameter + \" = True\"\n elif value == 'false' or value == 'False' or value == '_False':\n return parameter + \" = False\"\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\n def equality_check(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if arg1 == '\\'True\\'' or arg1 == '\\'true\\'':\n return f\"{arg0} == True\"\n elif arg1 == '\\'False\\'' or arg1 == '\\'false\\'':\n return f\"{arg0} == False\"\n else:\n return f\"str({arg0}) == str({arg1})\" #no and statements\n\nclass ConvertToPython_14(ConvertToPython_13):\n def andcondition(self, args):\n return ' and '.join(args)\n def orcondition(self, args):\n return ' or '.join(args)\n\nclass ConvertToPython_15(ConvertToPython_14):\n def comment(self, args):\n return f\"# {args}\"\n\nclass ConvertToPython_16(ConvertToPython_15):\n def smaller(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) < str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) < str({arg1}) and {args[2]}\"\n\n def bigger(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) > str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) > str({arg1}) and {args[2]}\"\n\nclass ConvertToPython_17(ConvertToPython_16):\n def while_loop(self, args):\n args = [a for a in args if a != \"\"] # filter out in|dedent tokens\n all_lines = [indent(x) for x in args[1:]]\n return \"while \" + args[0] + \":\\n\"+\"\\n\".join(all_lines)\n\nclass ConvertToPython_18_19(ConvertToPython_17):\n def length(self, args):\n arg0 = args[0]\n return f\"len({arg0})\"\n\n def assign(self, args): # TODO: needs to be merged with 6, when 6 is improved to with printing expressions directly\n if len(args) == 2:\n parameter = args[0]\n value = args[1]\n if type(value) is Tree:\n return parameter + \" = \" + value.children\n else:\n if \"'\" in value or 'random.choice' in value: # TODO: should be a call to 
wrap nonvarargument is quotes!\n return parameter + \" = \" + value\n elif \"len(\" in value:\n return parameter + \" = \" + value\n else:\n if value == 'true' or value == 'True':\n return parameter + \" = True\"\n elif value == 'false' or value == 'False':\n return parameter + \" = False\"\n else:\n return parameter + \" = '\" + value + \"'\"\n else:\n parameter = args[0]\n values = args[1:]\n return parameter + \" = [\" + \", \".join(values) + \"]\"\n\nclass ConvertToPython_20(ConvertToPython_18_19):\n def equality_check(self, args):\n if type(args[0]) is Tree:\n return args[0].children + \" == int(\" + args[1] + \")\"\n if type(args[1]) is Tree:\n return \"int(\" + args[0] + \") == \" + args[1].children\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if arg1 == '\\'True\\'' or arg1 == '\\'true\\'':\n return f\"{arg0} == True\"\n elif arg1 == '\\'False\\'' or arg1 == '\\'false\\'':\n return f\"{arg0} == False\"\n else:\n return f\"str({arg0}) == str({arg1})\" # no and statements\n\nclass ConvertToPython_21(ConvertToPython_20):\n def not_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) != str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) != str({arg1}) and {args[2]}\"\n\nclass ConvertToPython_22(ConvertToPython_21):\n def smaller_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) <= str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) <= str({arg1}) and {args[2]}\"\n\n def bigger_equal(self, args):\n arg0 = wrap_non_var_in_quotes(args[0], self.lookup)\n arg1 = wrap_non_var_in_quotes(args[1], self.lookup)\n if len(args) == 2:\n return f\"str({arg0}) >= str({arg1})\" # no and statements\n else:\n return f\"str({arg0}) >= str({arg1}) and {args[2]}\"\n\n\ndef merge_grammars(grammar_text_1, grammar_text_2):\n # this function takes two grammar files and merges them into one\n # rules that are redefined in the second file are overridden\n # rule that are new in the second file are added (remaining_rules_grammar_2)\n\n merged_grammar = []\n\n rules_grammar_1 = grammar_text_1.split('\\n')\n remaining_rules_grammar_2 = grammar_text_2.split('\\n')\n for line_1 in rules_grammar_1:\n if line_1 == '' or line_1[0] == '/': #skip comments and empty lines:\n continue\n parts = line_1.split(':')\n name_1, definition_1 = parts[0], ''.join(parts[1:]) #get part before are after : (this is a join because there can be : in the rule)\n\n rules_grammar_2 = grammar_text_2.split('\\n')\n override_found = False\n for line_2 in rules_grammar_2:\n if line_2 == '' or line_2[0] == '/': # skip comments and empty lines:\n continue\n parts = line_2.split(':')\n name_2, definition_2 = parts[0], ''.join(parts[1]) #get part before are after :\n if name_1 == name_2:\n override_found = True\n new_rule = line_2\n # this rule is now in the grammar, remove form this list\n remaining_rules_grammar_2.remove(new_rule)\n break\n\n # new rule found? print that. nothing found? 
print org rule\n if override_found:\n merged_grammar.append(new_rule)\n else:\n merged_grammar.append(line_1)\n\n #all rules that were not overlapping are new in the grammar, add these too\n for rule in remaining_rules_grammar_2:\n if not(rule == '' or rule[0] == '/'):\n merged_grammar.append(rule)\n\n merged_grammar = sorted(merged_grammar)\n return '\\n'.join(merged_grammar)\n\n\ndef create_grammar(level, sub):\n # Load Lark grammars relative to directory of current file\n script_dir = path.abspath(path.dirname(__file__))\n\n # Load Lark grammars relative to directory of current file\n script_dir = path.abspath(path.dirname(__file__))\n\n # we start with creating the grammar for level 1\n grammar_text_1 = get_full_grammar_for_level(1)\n \n if sub:\n #grep\n if level == 1:\n # this is a level 1 sublevel, so get the sublevel grammar and return\n grammar_text_sub = get_additional_rules_for_level(1, sub)\n grammar_text = merge_grammars(grammar_text_1, grammar_text_sub)\n return grammar_text\n\n grammar_text_2 = get_additional_rules_for_level(2)\n\n #start at 1 and keep merging new grammars in\n new = merge_grammars(grammar_text_1, grammar_text_2)\n\n for i in range(3, level+1):\n grammar_text_i = get_additional_rules_for_level(i)\n new = merge_grammars(new, grammar_text_i)\n\n # get grammar for the sublevel and merge it\n grammar_text_sub = get_additional_rules_for_level(level, sub)\n new = merge_grammars(new, grammar_text_sub)\n \n # ready? Save to file to ease debugging\n # this could also be done on each merge for performance reasons\n filename = \"level\" + str(level) + \"-\" + str(sub) + \"-Total.lark\"\n loc = path.join(script_dir, \"grammars-Total\", filename)\n file = open(loc, \"w\", encoding=\"utf-8\")\n file.write(new)\n file.close()\n else:\n #grep\n if level == 1:\n grammar_text = get_full_grammar_for_level(level)\n return grammar_text\n\n grammar_text_2 = get_additional_rules_for_level(2)\n\n #start at 1 and keep merging new grammars in\n new = merge_grammars(grammar_text_1, grammar_text_2)\n\n for i in range(3, level+1):\n grammar_text_i = get_additional_rules_for_level(i)\n new = merge_grammars(new, grammar_text_i)\n\n # ready? 
Save to file to ease debugging\n # this could also be done on each merge for performance reasons\n filename = \"level\" + str(level) + \"-Total.lark\"\n loc = path.join(script_dir, \"grammars-Total\", filename)\n file = open(loc, \"w\", encoding=\"utf-8\")\n file.write(new)\n file.close()\n\n return new\n\ndef get_additional_rules_for_level(level, sub = 0):\n script_dir = path.abspath(path.dirname(__file__))\n if sub:\n filename = \"level\" + str(level) + \"-\" + str(sub) + \"-Additions.lark\"\n else:\n filename = \"level\" + str(level) + \"-Additions.lark\"\n with open(path.join(script_dir, \"grammars\", filename), \"r\", encoding=\"utf-8\") as file:\n grammar_text = file.read()\n return grammar_text\n\ndef get_full_grammar_for_level(level):\n script_dir = path.abspath(path.dirname(__file__))\n filename = \"level\" + str(level) + \".lark\"\n with open(path.join(script_dir, \"grammars\", filename), \"r\", encoding=\"utf-8\") as file:\n grammar_text = file.read()\n return grammar_text\n\nPARSER_CACHE = {}\n\n\ndef get_parser(level, sub):\n \"\"\"Return the Lark parser for a given level.\n\n Uses caching if Hedy is NOT running in development mode.\n \"\"\"\n key = str(level) + \".\" + str(sub)\n existing = PARSER_CACHE.get(key)\n if existing and not utils.is_debug_mode():\n return existing\n grammar = create_grammar(level, sub)\n ret = Lark(grammar)\n PARSER_CACHE[key] = ret\n return ret\n\nParseResult = namedtuple('ParseResult', ['code', 'has_turtle'])\n\ndef transpile(input_string, level, sub = 0):\n try:\n input_string = input_string.replace('\\r\\n', '\\n')\n transpile_result = transpile_inner(input_string, level, sub)\n return transpile_result\n except Exception as E:\n # This is the 'fall back' transpilation\n # that should surely be improved!!\n # we retry HedyExceptions of the type Parse (and Lark Errors) but we raise Invalids\n if E.args[0] == 'Parse':\n #try 1 level lower\n if level > 1 and sub == 0:\n try:\n new_level = level - 1\n result = transpile_inner(input_string, new_level, sub)\n except (LarkError, HedyException) as innerE:\n # Parse at `level - 1` failed as well, just re-raise original error\n raise E\n # If the parse at `level - 1` succeeded, then a better error is \"wrong level\"\n raise HedyException('Wrong Level', correct_code=result, original_level=level, working_level=new_level) from E\n raise E\n\ndef repair(input_string):\n #the only repair we can do now is remove leading spaces, more can be added!\n return '\\n'.join([x.lstrip() for x in input_string.split('\\n')])\n\ndef translate_characters(s):\n# this method is used to make it more clear to kids what is meant in error messages\n# for example ' ' is hard to read, space is easier\n# this could (should?) 
be localized so we can call a ' \"Hoge komma\" for example (Felienne, dd Feb 25, 2021)\n if s == ' ':\n return 'space'\n elif s == ',':\n return 'comma'\n elif s == '?':\n return 'question mark'\n elif s == '\\\\n':\n return 'newline'\n elif s == '.':\n return 'period'\n elif s == '!':\n return 'exclamation mark'\n elif s == '*':\n return 'star'\n elif s == \"'\":\n return 'single quotes'\n elif s == '\"':\n return 'double quotes'\n elif s == '/':\n return 'slash'\n elif s == '-':\n return 'dash'\n elif s >= 'a' and s <= 'z' or s >= 'A' and s <= 'Z':\n return s\n else:\n return s\n\ndef filter_and_translate_terminals(list):\n # in giving error messages, it does not make sense to include\n # ANONs, and some things like EOL need kid friendly translations\n new_terminals = []\n for terminal in list:\n if terminal[:4] == \"ANON\":\n continue\n\n if terminal == \"EOL\":\n new_terminals.append(\"Newline\")\n break\n\n #not translated or filtered out? simply add as is:\n new_terminals.append(terminal)\n\n return new_terminals\n\ndef beautify_parse_error(error_message):\n character_found = error_message.split(\"'\")[1]\n character_found = translate_characters(character_found)\n return character_found\n\ndef find_indent_length(line):\n number_of_spaces = 0\n for x in line:\n if x == ' ':\n number_of_spaces += 1\n else:\n break\n return number_of_spaces\n\ndef preprocess_blocks(code):\n processed_code = []\n lines = code.split(\"\\n\")\n current_number_of_indents = 0\n previous_number_of_indents = 0\n indent_size = None #we don't fix indent size but the first encounter sets it\n for line in lines:\n leading_spaces = find_indent_length(line)\n\n #first encounter sets indent size for this program\n if indent_size == None and leading_spaces > 0:\n indent_size = leading_spaces\n\n #calculate nuber of indents if possible\n if indent_size != None:\n current_number_of_indents = leading_spaces // indent_size\n\n if current_number_of_indents < previous_number_of_indents:\n # we springen 'terug' dus er moeten end-blocken in\n # bij meerdere terugsprongen sluiten we ook meerdere blokken\n\n difference_in_indents = (previous_number_of_indents - current_number_of_indents)\n\n for i in range(difference_in_indents):\n processed_code.append('end-block')\n\n #save to compare for next line\n previous_number_of_indents = current_number_of_indents\n\n #if indent remains the same, do nothing, just add line\n processed_code.append(line)\n\n # if the last line is indented, the end of the program is also the end of all indents\n # so close all blocks\n for i in range(current_number_of_indents):\n processed_code.append('end-block')\n return \"\\n\".join(processed_code)\n\n\ndef transpile_inner(input_string, level, sub = 0):\n punctuation_symbols = ['!', '?', '.']\n level = int(level)\n\n parser = get_parser(level, sub)\n\n if level >= 7:\n input_string = preprocess_blocks(input_string)\n # print(input_string)\n\n try:\n program_root = parser.parse(input_string+ '\\n').children[0] # getting rid of the root could also be done in the transformer would be nicer\n abstract_syntaxtree = ExtractAST().transform(program_root)\n lookup_table = AllAssignmentCommands().transform(abstract_syntaxtree)\n\n except UnexpectedCharacters as e:\n try:\n location = e.line, e.column\n characters_expected = str(e.allowed) #not yet in use, could be used in the future (when our parser rules are better organize, now it says ANON*__12 etc way too often!)\n character_found = beautify_parse_error(e.args[0])\n # print(e.args[0])\n # print(location, 
character_found, characters_expected)\n raise HedyException('Parse', level=level, location=location, character_found=character_found) from e\n except UnexpectedEOF:\n # this one can't be beautified (for now), so give up :)\n raise e\n\n # IsValid returns (True,) or (False, args, line)\n is_valid = IsValid().transform(program_root)\n\n if not is_valid[0]:\n _, args, line = is_valid\n\n # Apparently, sometimes 'args' is a string, sometimes it's a list of\n # strings ( are these production rule names?). If it's a list of\n # strings, just take the first string and proceed.\n if isinstance(args, list):\n args = args[0]\n if args == ' ':\n #the error here is a space at the beginning of a line, we can fix that!\n fixed_code = repair(input_string)\n if fixed_code != input_string: #only if we have made a successful fix\n result = transpile_inner(fixed_code, level, sub)\n raise HedyException('Invalid Space', level=level, line_number=line, fixed_code = result)\n elif args == 'print without quotes':\n # grammar rule is ignostic of line number so we can't easily return that here\n raise HedyException('Unquoted Text', level=level)\n else:\n invalid_command = args\n closest = closest_command(invalid_command, commands_per_level[level])\n if closest == None: #we couldn't find a suggestion because the command itself was found\n # clearly the error message here should be better or it should be a different one!\n raise HedyException('Parse', level=level, location=[\"?\", \"?\"], keyword_found=invalid_command)\n raise HedyException('Invalid', invalid_command=invalid_command, level=level, guessed_command=closest)\n\n is_complete = IsComplete().transform(program_root)\n if not is_complete[0]:\n incomplete_command = is_complete[1]\n line = is_complete[2]\n raise HedyException('Incomplete', incomplete_command=incomplete_command, level=level, line_number=line)\n\n if level == 1:\n python = ConvertToPython_1(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 2:\n python = ConvertToPython_2(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 3:\n python = ConvertToPython_3(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 4:\n # Sublevel has the same grammar\n python = ConvertToPython_4(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 5:\n python = ConvertToPython_5(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 6:\n python = ConvertToPython_6(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 7:\n python = ConvertToPython_7(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 8:\n # Sublevel has the same conversion\n python = ConvertToPython_8(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 9:\n python = ConvertToPython_9_10(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 10:\n # Code does not change for nesting\n python = ConvertToPython_9_10(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 11:\n python = ConvertToPython_11(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 12:\n python = ConvertToPython_12(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 13:\n python = ConvertToPython_13(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 14:\n python = ConvertToPython_14(punctuation_symbols, 
lookup_table).transform(abstract_syntaxtree)\n elif level == 15:\n python = ConvertToPython_15(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 16:\n python = ConvertToPython_16(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 17:\n python = ConvertToPython_17(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 18 or level == 19:\n python = ConvertToPython_18_19(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 20:\n python = ConvertToPython_20(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 21:\n python = ConvertToPython_21(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n elif level == 22:\n python = ConvertToPython_22(punctuation_symbols, lookup_table).transform(abstract_syntaxtree)\n else:\n raise Exception('Levels over 22 are not implemented yet')\n\n has_turtle = UsesTurtle().transform(program_root)\n\n return ParseResult(python, has_turtle)\n\ndef execute(input_string, level):\n python = transpile(input_string, level)\n exec(python)\n\n# f = open('output.py', 'w+')\n# f.write(python)\n# f.close()\n",
"path": "hedy.py"
}
] | diff --git a/coursedata/adventures/nl.yaml b/coursedata/adventures/nl.yaml
index 1be31c858af..538e04c366a 100644
--- a/coursedata/adventures/nl.yaml
+++ b/coursedata/adventures/nl.yaml
@@ -484,7 +484,8 @@ adventures:
keuzes is 1, 2, 3, 4, 5, regenworm
worp is ...
print 'je hebt ' ... ' gegooid'
- if ... is regenworm print 'Je mag stoppen met gooien.' ... print 'Je moet nog een keer hoor!'
+ if ... is regenworm print 'Je mag stoppen met gooien.'
+ ... print 'Je moet nog een keer hoor!'
```
start_code: "print Wat zal de dobbelsteen deze keer aangeven?"
5:
diff --git a/hedy.py b/hedy.py
index d0d213ece0c..407371dd5c5 100644
--- a/hedy.py
+++ b/hedy.py
@@ -258,7 +258,13 @@ def forward(self, args):
def turn(self, args):
return True
+ # somehow a token (or only this token?) is not picked up by the default rule so it needs
+ # its own rule
+ def NUMBER(self, args):
+ return False
+ def NAME(self, args):
+ return False
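
Not part of the PR — a minimal, hypothetical sketch in plain Python (no Lark) of the idea behind the `UsesTurtle` change above: leaf tokens such as `NUMBER` and `NAME` must explicitly contribute `False` to the bottom-up any()-style check, so that a bare number or name never makes the boolean aggregation misbehave; only `forward`/`turn` nodes should flip the result to `True`. The tree shape and helper name below are illustrative, not the project's real data structures.

```
# Hypothetical sketch: bottom-up "does this program use the turtle?" check.
# Tokens (plain strings) always count as False; only forward/turn nodes count as True.

def uses_turtle(node):
    """node is a (rule_name, children) tuple; children are tuples or token strings."""
    rule, children = node
    if rule in ("forward", "turn"):
        return True
    results = []
    for child in children:
        if isinstance(child, tuple):      # nested rule: recurse
            results.append(uses_turtle(child))
        else:                             # leaf token (number, name, text)
            results.append(False)         # a token alone is never a turtle command
    return any(results)

# A tiny program: print 'hi' then forward 50
program = ("program", [
    ("print", ["'hi'"]),
    ("forward", ["50"]),   # the "50" token itself contributes False; the forward node gives True
])
print(uses_turtle(program))                               # True
print(uses_turtle(("program", [("print", ["'hi'"])])))    # False
```

This mirrors what the new tests below assert via `result.has_turtle`: turtle programs report `True`, everything else `False`.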
diff --git a/tests/tests_level_05.py b/tests/tests_level_05.py
index 48971bccdb0..9f0a0d139cc 100644
--- a/tests/tests_level_05.py
+++ b/tests/tests_level_05.py
@@ -40,6 +40,7 @@ def test_print_with_var(self):
print('ik heet'+naam)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_comma(self):
@@ -54,6 +55,7 @@ def test_print_with_comma(self):
print('ik heet,'+naam)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_turtle_basic(self):
result = hedy.transpile("forward 50\nturn\nforward 100", self.level)
@@ -62,6 +64,7 @@ def test_transpile_turtle_basic(self):
t.right(90)
t.forward(100)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(True, result.has_turtle)
def test_transpile_turtle_with_ask(self):
code = textwrap.dedent("""\
@@ -72,6 +75,7 @@ def test_transpile_turtle_with_ask(self):
afstand = input('hoe ver dan?')
t.forward(afstand)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(True, result.has_turtle)
def test_print_Spanish(self):
code = textwrap.dedent("""\
@@ -83,6 +87,7 @@ def test_print_Spanish(self):
print('Cuál es tu color favorito?')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask_Spanish(self):
code = textwrap.dedent("""\
@@ -94,6 +99,7 @@ def test_transpile_ask_Spanish(self):
color = input('Cuál es tu color favorito?')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_other(self):
with self.assertRaises(Exception) as context:
@@ -115,6 +121,7 @@ def test_repeat_basic_print(self):
print('me wants a cookie!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
expected_output = textwrap.dedent("""\
me wants a cookie!
@@ -140,6 +147,7 @@ def test_repeat_with_variable_print(self):
print('me wants a cookie!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
expected_output = textwrap.dedent("""\
me wants a cookie!
@@ -165,6 +173,7 @@ def test_repeat_nested_in_if(self):
print('mooi!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_repeat_over_9_times(self):
@@ -178,6 +187,7 @@ def test_repeat_over_9_times(self):
print('me wants a cookie!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
expected_output = textwrap.dedent("""\
me wants a cookie!
diff --git a/tests/tests_level_06.py b/tests/tests_level_06.py
index 7fd031d2d53..f42b2fb58a4 100644
--- a/tests/tests_level_06.py
+++ b/tests/tests_level_06.py
@@ -38,6 +38,7 @@ def test_print_with_var(self):
print('ik heet'+str(naam))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
@@ -50,6 +51,7 @@ def test_transpile_ask(self):
antwoord = input('wat is je lievelingskleur?')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_repeat_nested_in_if(self):
@@ -66,6 +68,7 @@ def test_repeat_nested_in_if(self):
print('mooi!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_repeat_with_variable_print(self):
code = textwrap.dedent("""\
@@ -80,6 +83,7 @@ def test_repeat_with_variable_print(self):
print('me wants a cookie!')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
expected_output = textwrap.dedent("""\
me wants a cookie!
@@ -97,6 +101,7 @@ def test_simple_calculation(self):
expected = "nummer = int(4) + int(5)"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_simple_calculation_without_space(self):
code = "nummer is 4+5"
@@ -104,6 +109,7 @@ def test_simple_calculation_without_space(self):
expected = "nummer = int(4) + int(5)"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_turtle_basic(self):
@@ -113,6 +119,7 @@ def test_transpile_turtle_basic(self):
t.right(90)
t.forward(100)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(True, result.has_turtle)
def test_transpile_turtle_with_ask(self):
code = textwrap.dedent("""\
@@ -123,6 +130,7 @@ def test_transpile_turtle_with_ask(self):
afstand = input('hoe ver dan?')
t.forward(afstand)""")
self.assertEqual(expected, result.code)
+ self.assertEqual(True, result.has_turtle)
def test_calculation_and_printing(self):
@@ -137,6 +145,7 @@ def test_calculation_and_printing(self):
print(str(nummer))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("9", run_code(result))
def test_calculation_with_vars(self):
@@ -155,6 +164,7 @@ def test_calculation_with_vars(self):
print(str(getal))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
def test_print_calculation_times_directly(self):
@@ -171,6 +181,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
def test_print_calculation_divide_directly(self):
@@ -187,6 +198,7 @@ def test_print_calculation_divide_directly(self):
print(str(int(nummer) // int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("0", run_code(result))
def test_issue_andras(self):
@@ -213,5 +225,6 @@ def test_issue_andras(self):
print('ok bedankt dan wordt het '+str(prijs)+' euro')""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
diff --git a/tests/tests_level_08.py b/tests/tests_level_08.py
index 71218bb88a2..e1e1ed2e318 100644
--- a/tests/tests_level_08.py
+++ b/tests/tests_level_08.py
@@ -29,16 +29,19 @@ def test_print(self):
result = hedy.transpile("print 'ik heet'", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint 'ik heet' naam", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print '5 keer 5 is ' 5*5", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -54,6 +57,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -61,6 +65,7 @@ def test_transpile_ask(self):
result = hedy.transpile("antwoord is ask 'wat is je lievelingskleur?'", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -74,6 +79,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -97,6 +103,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -110,6 +117,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -128,6 +136,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -146,6 +155,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
@@ -186,6 +197,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#fails, issue 363
@@ -207,6 +219,7 @@ def test_for_ifbug(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loopbug599(self):
code = textwrap.dedent("""\
@@ -222,6 +235,7 @@ def test_for_loopbug599(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#programs with issues to see if we catch them properly
diff --git a/tests/tests_level_09.py b/tests/tests_level_09.py
index 8f261d0380b..94fafba2005 100644
--- a/tests/tests_level_09.py
+++ b/tests/tests_level_09.py
@@ -29,16 +29,19 @@ def test_print(self):
result = hedy.transpile("print 'ik heet'", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint 'ik heet' naam", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print '5 keer 5 is ' 5*5", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -54,6 +57,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -61,6 +65,7 @@ def test_transpile_ask(self):
result = hedy.transpile("antwoord is ask 'wat is je lievelingskleur?'", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -74,6 +79,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -97,6 +103,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -110,6 +117,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -128,6 +136,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -146,6 +155,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
@@ -186,6 +197,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_elif(self):
code = textwrap.dedent("""\
@@ -204,6 +216,7 @@ def test_if_elif(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_multiple_elifs(self):
code = textwrap.dedent("""\
@@ -226,6 +239,7 @@ def test_if_with_multiple_elifs(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_10.py b/tests/tests_level_10.py
index c2d3939bc7a..111eeb56f26 100644
--- a/tests/tests_level_10.py
+++ b/tests/tests_level_10.py
@@ -29,16 +29,19 @@ def test_print(self):
result = hedy.transpile("print 'ik heet'", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint 'ik heet' naam", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print '5 keer 5 is ' 5*5", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -54,6 +57,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -61,6 +65,7 @@ def test_transpile_ask(self):
result = hedy.transpile("antwoord is ask 'wat is je lievelingskleur?'", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -75,6 +80,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -98,6 +104,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -112,6 +119,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +138,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -148,6 +157,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -161,6 +171,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -175,6 +186,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -193,6 +205,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -218,6 +231,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#programs with issues to see if we catch them properly
# (so this should fail, for now)
# at one point we want a real "Indent" error and a better error message
diff --git a/tests/tests_level_11.py b/tests/tests_level_11.py
index 2e5734f561b..1db8fdfa234 100644
--- a/tests/tests_level_11.py
+++ b/tests/tests_level_11.py
@@ -32,16 +32,19 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -57,6 +60,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -64,6 +68,7 @@ def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -78,6 +83,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -101,6 +107,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -115,6 +122,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -133,6 +141,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -150,6 +159,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -164,6 +174,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -178,6 +189,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -196,6 +208,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -212,6 +225,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -237,6 +251,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_12.py b/tests/tests_level_12.py
index ec79d68d36e..2ec9b5a0167 100644
--- a/tests/tests_level_12.py
+++ b/tests/tests_level_12.py
@@ -30,16 +30,19 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -55,6 +58,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -62,6 +66,7 @@ def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -76,6 +81,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -99,6 +105,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +120,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -131,6 +139,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -149,6 +158,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -163,6 +173,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -177,6 +188,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -195,6 +207,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -211,6 +224,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -223,6 +237,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -237,6 +252,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -251,6 +267,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -276,6 +293,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -301,6 +319,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
#programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_13.py b/tests/tests_level_13.py
index 53bdb830e44..699b3d2c5d7 100644
--- a/tests/tests_level_13.py
+++ b/tests/tests_level_13.py
@@ -32,16 +32,19 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
@@ -58,13 +61,14 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
-
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_with_indent(self):
code = textwrap.dedent("""\
@@ -78,6 +82,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -100,6 +105,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +119,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +137,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -147,6 +155,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +169,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -173,6 +183,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -190,6 +201,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -205,6 +217,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -216,6 +229,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -229,6 +243,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -242,6 +257,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -266,6 +282,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -290,6 +307,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -303,6 +321,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -315,6 +334,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -328,6 +348,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -341,6 +362,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -364,6 +386,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_14.py b/tests/tests_level_14.py
index 9e41b33913d..5b661e512a8 100644
--- a/tests/tests_level_14.py
+++ b/tests/tests_level_14.py
@@ -33,21 +33,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -63,6 +67,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -78,6 +83,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -100,6 +106,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +120,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +138,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -147,6 +156,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -173,6 +184,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -190,6 +202,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -205,6 +218,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -216,6 +230,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -229,6 +244,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -242,6 +258,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -266,6 +283,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -290,6 +308,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -303,6 +322,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -315,6 +335,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -328,6 +349,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -341,6 +363,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -364,6 +387,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -375,6 +399,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -386,6 +411,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_15.py b/tests/tests_level_15.py
index 2fd9a0298db..9787f1048c0 100644
--- a/tests/tests_level_15.py
+++ b/tests/tests_level_15.py
@@ -32,21 +32,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -62,6 +66,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -78,6 +83,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -100,6 +106,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +120,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +138,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -147,6 +156,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -173,6 +184,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -190,6 +202,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -205,6 +218,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -216,6 +230,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -229,6 +244,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -242,6 +258,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -266,6 +283,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -290,6 +308,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -303,6 +322,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -315,6 +335,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -328,6 +349,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -341,6 +363,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -364,6 +387,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -375,6 +399,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -386,6 +411,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -398,6 +424,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -411,6 +438,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -424,6 +452,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_16.py b/tests/tests_level_16.py
index 850d3908040..31ac3393875 100644
--- a/tests/tests_level_16.py
+++ b/tests/tests_level_16.py
@@ -32,21 +32,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -62,6 +66,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -78,6 +83,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -100,6 +106,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +120,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +138,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -147,6 +156,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -173,6 +184,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -190,6 +202,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -205,6 +218,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -216,6 +230,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -229,6 +244,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -242,6 +258,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -266,6 +283,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -290,6 +308,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -303,6 +322,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -315,6 +335,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -328,6 +349,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -341,6 +363,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -364,6 +387,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -375,6 +399,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -386,6 +411,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -398,6 +424,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -411,6 +438,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -424,6 +452,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -437,6 +466,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -450,6 +480,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -467,6 +498,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_17.py b/tests/tests_level_17.py
index 8e11d910883..4f90748b95f 100644
--- a/tests/tests_level_17.py
+++ b/tests/tests_level_17.py
@@ -32,21 +32,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -62,6 +66,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -77,6 +82,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -99,6 +105,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -112,6 +119,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -129,6 +137,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -146,6 +155,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -159,6 +169,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -172,6 +183,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -189,6 +201,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -204,6 +217,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -215,6 +229,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -228,6 +243,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -241,6 +257,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -265,6 +282,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -289,6 +307,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -302,6 +321,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -314,6 +334,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -327,6 +348,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -340,6 +362,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -363,6 +386,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -374,6 +398,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -385,6 +410,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -397,6 +423,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -410,6 +437,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -423,6 +451,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -436,6 +465,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -449,6 +479,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -466,6 +497,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -483,6 +515,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -502,6 +535,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -523,6 +557,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_18.py b/tests/tests_level_18.py
index 4422ed0e455..06d7b6be1a2 100644
--- a/tests/tests_level_18.py
+++ b/tests/tests_level_18.py
@@ -32,21 +32,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -62,6 +66,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -78,6 +83,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -100,6 +106,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -113,6 +120,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -130,6 +138,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -147,6 +156,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -160,6 +170,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -173,6 +184,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -190,6 +202,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -205,6 +218,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -216,6 +230,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -229,6 +244,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -242,6 +258,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -266,6 +283,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -290,6 +308,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -303,6 +322,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -315,6 +335,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -328,6 +349,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -341,6 +363,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -364,6 +387,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -375,6 +399,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -386,6 +411,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -398,6 +424,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -411,6 +438,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -424,6 +452,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -437,6 +466,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -450,6 +480,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -467,6 +498,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -484,6 +516,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -503,6 +536,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -524,6 +558,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_access_plus(self):
code = textwrap.dedent("""\
@@ -541,6 +576,7 @@ def test_access_plus(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_19.py b/tests/tests_level_19.py
index 232824a2473..4e2b69cd77b 100644
--- a/tests/tests_level_19.py
+++ b/tests/tests_level_19.py
@@ -30,21 +30,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam is Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord is input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -60,6 +64,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -75,6 +80,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -97,6 +103,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -110,6 +117,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -127,6 +135,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -144,6 +153,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -157,6 +167,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -170,6 +181,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -187,6 +199,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -202,6 +215,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -213,6 +227,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -226,6 +241,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -239,6 +255,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -263,6 +280,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -287,6 +305,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -300,6 +319,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -312,6 +332,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -325,6 +346,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -338,6 +360,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -361,6 +384,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -372,6 +396,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -383,6 +408,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -395,6 +421,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -408,6 +435,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -421,6 +449,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -434,6 +463,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -447,6 +477,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -464,6 +495,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -481,6 +513,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -500,6 +533,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -521,6 +555,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_access_plus(self):
code = textwrap.dedent("""\
@@ -538,6 +573,7 @@ def test_access_plus(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length(self):
code = textwrap.dedent("""\
fruit is ['appel', 'banaan', 'kers']
@@ -550,6 +586,7 @@ def test_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length2(self):
code = textwrap.dedent("""\
@@ -563,6 +600,7 @@ def test_length2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_length(self):
code = textwrap.dedent("""\
@@ -578,6 +616,7 @@ def test_print_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_20.py b/tests/tests_level_20.py
index 4c87e96b8bb..eca801011b9 100644
--- a/tests/tests_level_20.py
+++ b/tests/tests_level_20.py
@@ -31,21 +31,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam = Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord = input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -61,6 +65,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -76,6 +81,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -98,6 +104,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -111,6 +118,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -128,6 +136,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -145,6 +154,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -158,6 +168,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -171,6 +182,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -188,6 +200,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -203,6 +216,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -214,6 +228,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -227,6 +242,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -240,6 +256,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -264,6 +281,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -288,6 +306,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -301,6 +320,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -313,6 +333,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -326,6 +347,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -339,6 +361,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -362,6 +385,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -373,6 +397,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -384,6 +409,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -396,6 +422,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -409,6 +436,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -422,6 +450,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -435,6 +464,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -448,6 +478,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -465,6 +496,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -482,6 +514,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -501,6 +534,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -522,6 +556,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_access_plus(self):
code = textwrap.dedent("""\
@@ -539,6 +574,7 @@ def test_access_plus(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length(self):
code = textwrap.dedent("""\
fruit = ['appel', 'banaan', 'kers']
@@ -551,6 +587,7 @@ def test_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length2(self):
code = textwrap.dedent("""\
@@ -564,6 +601,7 @@ def test_length2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_length(self):
code = textwrap.dedent("""\
@@ -579,6 +617,7 @@ def test_print_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_if(self):
code = textwrap.dedent("""\
@@ -594,6 +633,7 @@ def test_sum_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_right_side_if(self):
code = textwrap.dedent("""\
@@ -609,6 +649,7 @@ def test_sum_in_right_side_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_min_in_if(self):
code = textwrap.dedent("""\
@@ -624,6 +665,7 @@ def test_min_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_multiply_in_if(self):
code = textwrap.dedent("""\
@@ -639,6 +681,7 @@ def test_multiply_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
diff --git a/tests/tests_level_21.py b/tests/tests_level_21.py
index 9b8a073f0d6..4cdaa3e47f5 100644
--- a/tests/tests_level_21.py
+++ b/tests/tests_level_21.py
@@ -31,21 +31,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam = Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord = input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -61,6 +65,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -76,6 +81,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -98,6 +104,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -111,6 +118,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -128,6 +136,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -145,6 +154,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -158,6 +168,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -171,6 +182,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -188,6 +200,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -203,6 +216,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -214,6 +228,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -227,6 +242,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -240,6 +256,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -264,6 +281,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -288,6 +306,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -301,6 +320,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -313,6 +333,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -326,6 +347,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -339,6 +361,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -362,6 +385,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -373,6 +397,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -384,6 +409,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -396,6 +422,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -409,6 +436,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -422,6 +450,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -435,6 +464,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -448,6 +478,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -465,6 +496,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -482,6 +514,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -501,6 +534,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -522,6 +556,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_access_plus(self):
code = textwrap.dedent("""\
@@ -539,6 +574,7 @@ def test_access_plus(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length(self):
code = textwrap.dedent("""\
fruit = ['appel', 'banaan', 'kers']
@@ -551,6 +587,7 @@ def test_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length2(self):
code = textwrap.dedent("""\
@@ -564,6 +601,7 @@ def test_length2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_length(self):
code = textwrap.dedent("""\
@@ -579,6 +617,7 @@ def test_print_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_not_equal_one(self):
code = textwrap.dedent("""\
@@ -595,6 +634,7 @@ def test_not_equal_one(self):
print('Ik kom ook uit Nederland!')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_not_equal_two(self):
code = textwrap.dedent("""\
@@ -611,6 +651,7 @@ def test_not_equal_two(self):
print('Fout! Je mocht geen 5 zeggen')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_if(self):
code = textwrap.dedent("""\
@@ -626,6 +667,7 @@ def test_sum_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_right_side_if(self):
code = textwrap.dedent("""\
@@ -641,6 +683,7 @@ def test_sum_in_right_side_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_min_in_if(self):
code = textwrap.dedent("""\
@@ -656,6 +699,7 @@ def test_min_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_multiply_in_if(self):
code = textwrap.dedent("""\
@@ -671,6 +715,7 @@ def test_multiply_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
# at one point we want a real "Indent" error and a better error message
diff --git a/tests/tests_level_22.py b/tests/tests_level_22.py
index 179e1cce48e..f181e0c1499 100644
--- a/tests/tests_level_22.py
+++ b/tests/tests_level_22.py
@@ -31,21 +31,25 @@ def test_print(self):
result = hedy.transpile("print('ik heet')", self.level)
expected = "print('ik heet')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_var(self):
result = hedy.transpile("naam = Hedy\nprint('ik heet' naam)", self.level)
expected = "naam = 'Hedy'\nprint('ik heet'+str(naam))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_with_calc_no_spaces(self):
result = hedy.transpile("print('5 keer 5 is ' 5*5)", self.level)
expected = "print('5 keer 5 is '+str(int(5) * int(5)))"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_transpile_ask(self):
result = hedy.transpile("antwoord = input('wat is je lievelingskleur?')", self.level)
expected = "antwoord = input('wat is je lievelingskleur?')"
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_calculation_times_directly(self):
code = textwrap.dedent("""\
@@ -61,6 +65,7 @@ def test_print_calculation_times_directly(self):
print(str(int(nummer) * int(nummertwee)))""")
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
self.assertEqual("30", run_code(result))
@@ -76,6 +81,7 @@ def test_if_with_indent(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_else(self):
code = textwrap.dedent("""\
@@ -98,6 +104,7 @@ def test_if_else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_random(self):
code = textwrap.dedent("""\
@@ -111,6 +118,7 @@ def test_print_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_loop(self):
code = textwrap.dedent("""\
@@ -128,6 +136,7 @@ def test_for_loop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if__else(self):
code = textwrap.dedent("""\
@@ -145,6 +154,7 @@ def test_if__else(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_forloop(self):
code = textwrap.dedent("""\
@@ -158,6 +168,7 @@ def test_forloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_for_nesting(self):
code = textwrap.dedent("""\
@@ -171,6 +182,7 @@ def test_for_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_nesting(self):
code = textwrap.dedent("""\
@@ -188,6 +200,7 @@ def test_if_nesting(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_newprint(self):
code = textwrap.dedent("""\
@@ -203,6 +216,7 @@ def test_newprint(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_list(self):
code = textwrap.dedent("""\
@@ -214,6 +228,7 @@ def test_list(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_random(self):
code = textwrap.dedent("""\
@@ -227,6 +242,7 @@ def test_random(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_specific_access(self):
code = textwrap.dedent("""\
@@ -240,6 +256,7 @@ def test_specific_access(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# note that print(str(highscore)) will not print as it will compare 'score[i]' as str to a variable
def test_everything_combined(self):
@@ -264,6 +281,7 @@ def test_everything_combined(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_if_under_else_in_for(self):
code = textwrap.dedent("""\
@@ -288,6 +306,7 @@ def test_if_under_else_in_for(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true(self):
code = textwrap.dedent("""\
@@ -301,6 +320,7 @@ def test_bool_true(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false(self):
code = textwrap.dedent("""\
@@ -313,6 +333,7 @@ def test_bool_false(self):
print('ja')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_true2(self):
code = textwrap.dedent("""\
@@ -326,6 +347,7 @@ def test_bool_true2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_false2(self):
code = textwrap.dedent("""\
@@ -339,6 +361,7 @@ def test_bool_false2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bool_total(self):
code = textwrap.dedent("""\
@@ -362,6 +385,7 @@ def test_bool_total(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_and(self):
code = textwrap.dedent("""\
@@ -373,6 +397,7 @@ def test_and(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_or(self):
code = textwrap.dedent("""\
@@ -384,6 +409,7 @@ def test_or(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_comment(self):
code = textwrap.dedent("""\
@@ -396,6 +422,7 @@ def test_comment(self):
# ['comment']""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentbegin(self):
code = textwrap.dedent("""\
@@ -409,6 +436,7 @@ def test_commentbegin(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_commentresult(self):
code = textwrap.dedent("""\
@@ -422,6 +450,7 @@ def test_commentresult(self):
print('hallo')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller(self):
code = textwrap.dedent("""\
@@ -435,6 +464,7 @@ def test_smaller(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger(self):
code = textwrap.dedent("""\
@@ -448,6 +478,7 @@ def test_bigger(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_big_and_small(self):
code = textwrap.dedent("""\
@@ -465,6 +496,7 @@ def test_big_and_small(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop(self):
code = textwrap.dedent("""\
@@ -482,6 +514,7 @@ def test_whileloop(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop2(self):
code = textwrap.dedent("""\
@@ -501,6 +534,7 @@ def test_whileloop2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_whileloop3(self):
code = textwrap.dedent("""\
@@ -522,6 +556,7 @@ def test_whileloop3(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_access_plus(self):
code = textwrap.dedent("""\
@@ -539,6 +574,7 @@ def test_access_plus(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length(self):
code = textwrap.dedent("""\
fruit = ['appel', 'banaan', 'kers']
@@ -551,6 +587,7 @@ def test_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_length2(self):
code = textwrap.dedent("""\
@@ -564,6 +601,7 @@ def test_length2(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_print_length(self):
code = textwrap.dedent("""\
@@ -579,6 +617,7 @@ def test_print_length(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_not_equal_one(self):
code = textwrap.dedent("""\
@@ -595,6 +634,7 @@ def test_not_equal_one(self):
print('Ik kom ook uit Nederland!')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_not_equal_two(self):
code = textwrap.dedent("""\
@@ -611,6 +651,7 @@ def test_not_equal_two(self):
print('Fout! Je mocht geen 5 zeggen')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller_equal(self):
code = textwrap.dedent("""\
@@ -623,6 +664,7 @@ def test_smaller_equal(self):
print('Dan ben je jonger dan ik!')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_bigger_equal(self):
code = textwrap.dedent("""\
@@ -635,6 +677,7 @@ def test_bigger_equal(self):
print('Dan ben je jonger dan ik!')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_smaller_bigger_equal(self):
code = textwrap.dedent("""\
@@ -651,6 +694,7 @@ def test_smaller_bigger_equal(self):
print('Dan ben je ouder dan ik!')""")
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_if(self):
code = textwrap.dedent("""\
@@ -666,6 +710,7 @@ def test_sum_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_sum_in_right_side_if(self):
code = textwrap.dedent("""\
@@ -681,6 +726,7 @@ def test_sum_in_right_side_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_min_in_if(self):
code = textwrap.dedent("""\
@@ -696,6 +742,7 @@ def test_min_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
def test_multiply_in_if(self):
code = textwrap.dedent("""\
@@ -711,6 +758,7 @@ def test_multiply_in_if(self):
result = hedy.transpile(code, self.level)
self.assertEqual(expected, result.code)
+ self.assertEqual(False, result.has_turtle)
# programs with issues to see if we catch them properly
# (so this should fail, for now)
# at one point we want a real "Indent" error and a better error message
|
vacanza__python-holidays-451 | Can't un-pickle a `HolidayBase`
It seems that once a holidays object, e.g. `holidays.UnitedStates()`, has been used, it can no longer be un-pickled.
For example, this snippet:
```python
import holidays
import pickle
from datetime import datetime
# Works:
us_holidays = holidays.UnitedStates()
us_holidays_ = pickle.loads(pickle.dumps(us_holidays))
b = datetime.fromisoformat("2020-01-01") in us_holidays_
# Fails:
us_holidays = holidays.UnitedStates()
b = datetime.fromisoformat("2020-01-01") in us_holidays
dump = pickle.dumps(us_holidays)
pickle.loads(dump) # <- exception
```
The last line raises the following exception:
```
~/.local/share/virtualenvs/sibylla-v2-LxBhzJgn/lib/python3.8/site-packages/holidays/holiday_base.py in __setitem__(self, key, value)
116
117 def __setitem__(self, key, value):
--> 118 if key in self:
119 if self.get(key).find(value) < 0 \
120 and value.find(self.get(key)) < 0:
~/.local/share/virtualenvs/sibylla-v2-LxBhzJgn/lib/python3.8/site-packages/holidays/holiday_base.py in __contains__(self, key)
73
74 def __contains__(self, key):
---> 75 return dict.__contains__(self, self.__keytransform__(key))
76
77 def __getitem__(self, key):
~/.local/share/virtualenvs/sibylla-v2-LxBhzJgn/lib/python3.8/site-packages/holidays/holiday_base.py in __keytransform__(self, key)
67 raise TypeError("Cannot convert type '%s' to date." % type(key))
68
---> 69 if self.expand and key.year not in self.years:
70 self.years.add(key.year)
71 self._populate(key.year)
```
The `expand` attribute is set by `__init__`, but it is not present during deserialization via unpickling.
I think this is because `HolidayBase` inherits from `dict`: when unpickling, the dict items are repopulated in the deserialized object first, and only afterwards are the instance attributes restored from the saved state. Since `HolidayBase` overrides `__setitem__` and that override reads instance attributes that have not been restored yet, the `expand` attribute is missing at that point.
Tested with `holidays=='0.10.4'`.
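For reference, a minimal, self-contained sketch (not the python-holidays code; the class and attribute names are made up) illustrating the ordering described above: pickle restores a dict subclass's items through `__setitem__` before it restores the instance attributes, so an attribute-dependent `__setitem__` fails during unpickling.
```python
import pickle


class AttrDict(dict):
    def __init__(self):
        self.expand = True  # instance attribute consulted by __setitem__

    def __setitem__(self, key, value):
        # During unpickling this runs before 'expand' has been restored.
        if self.expand:
            value = value.upper()
        dict.__setitem__(self, key, value)


d = AttrDict()
d["a"] = "holiday"  # works: __init__ already set 'expand'

try:
    pickle.loads(pickle.dumps(d))
except AttributeError as exc:
    # AttributeError: 'AttrDict' object has no attribute 'expand'
    print("unpickling failed:", exc)
```
An empty instance round-trips fine because there are no items to restore, which matches the working first snippet above.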
| [
{
"content": "# -*- coding: utf-8 -*-\n\n# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2021\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import timedelta, datetime, date\n\nimport six\nfrom dateutil.parser import parse\n\n\nclass HolidayBase(dict):\n PROVINCES = []\n\n def __init__(\n self, years=[], expand=True, observed=True, prov=None, state=None\n ):\n self.observed = observed\n self.expand = expand\n if isinstance(years, int):\n years = [\n years,\n ]\n self.years = set(years)\n if not getattr(self, \"prov\", False):\n self.prov = prov\n self.state = state\n for year in list(self.years):\n self._populate(year)\n\n def __setattr__(self, key, value):\n if key == \"observed\" and len(self) > 0:\n dict.__setattr__(self, key, value)\n if value is True:\n # Add (Observed) dates\n years = list(self.years)\n self.years = set()\n self.clear()\n for year in years:\n self._populate(year)\n else:\n # Remove (Observed) dates\n for k, v in list(self.items()):\n if v.find(\"Observed\") >= 0:\n del self[k]\n else:\n return dict.__setattr__(self, key, value)\n\n def __keytransform__(self, key):\n if isinstance(key, datetime):\n key = key.date()\n elif isinstance(key, date):\n key = key\n elif isinstance(key, int) or isinstance(key, float):\n key = datetime.utcfromtimestamp(key).date()\n elif isinstance(key, six.string_types):\n try:\n key = parse(key).date()\n except (ValueError, OverflowError):\n raise ValueError(\"Cannot parse date from string '%s'\" % key)\n else:\n raise TypeError(\"Cannot convert type '%s' to date.\" % type(key))\n\n if self.expand and key.year not in self.years:\n self.years.add(key.year)\n self._populate(key.year)\n return key\n\n def __contains__(self, key):\n return dict.__contains__(self, self.__keytransform__(key))\n\n def __getitem__(self, key):\n if isinstance(key, slice):\n if not key.start or not key.stop:\n raise ValueError(\"Both start and stop must be given.\")\n\n start = self.__keytransform__(key.start)\n stop = self.__keytransform__(key.stop)\n\n if key.step is None:\n step = 1\n elif isinstance(key.step, timedelta):\n step = key.step.days\n elif isinstance(key.step, int):\n step = key.step\n else:\n raise TypeError(\n \"Cannot convert type '%s' to int.\" % type(key.step)\n )\n\n if step == 0:\n raise ValueError(\"Step value must not be zero.\")\n\n date_diff = stop - start\n if date_diff.days < 0 <= step or date_diff.days >= 0 > step:\n step *= -1\n\n days_in_range = []\n for delta_days in range(0, date_diff.days, step):\n day = start + timedelta(days=delta_days)\n try:\n dict.__getitem__(self, day)\n days_in_range.append(day)\n except KeyError:\n pass\n return days_in_range\n return dict.__getitem__(self, self.__keytransform__(key))\n\n def __setitem__(self, key, value):\n if key in self:\n if self.get(key).find(value) < 0 and value.find(self.get(key)) < 0:\n value = \"%s, %s\" % (value, self.get(key))\n else:\n value = self.get(key)\n return dict.__setitem__(self, self.__keytransform__(key), value)\n\n def update(self, *args):\n args = list(args)\n for arg in args:\n if isinstance(arg, dict):\n for key, value in list(arg.items()):\n self[key] = value\n elif isinstance(arg, list):\n for item in 
arg:\n self[item] = \"Holiday\"\n else:\n self[arg] = \"Holiday\"\n\n def append(self, *args):\n return self.update(*args)\n\n def get(self, key, default=None):\n return dict.get(self, self.__keytransform__(key), default)\n\n def get_list(self, key):\n return [h for h in self.get(key, \"\").split(\", \") if h]\n\n def get_named(self, name):\n # find all dates matching provided name (accepting partial\n # strings too, case insensitive), returning them in a list\n original_expand = self.expand\n self.expand = False\n matches = [key for key in self if name.lower() in self[key].lower()]\n self.expand = original_expand\n return matches\n\n def pop(self, key, default=None):\n if default is None:\n return dict.pop(self, self.__keytransform__(key))\n return dict.pop(self, self.__keytransform__(key), default)\n\n def pop_named(self, name):\n to_pop = self.get_named(name)\n if not to_pop:\n raise KeyError(name)\n for key in to_pop:\n self.pop(key)\n return to_pop\n\n def __eq__(self, other):\n return dict.__eq__(self, other) and self.__dict__ == other.__dict__\n\n def __ne__(self, other):\n return dict.__ne__(self, other) or self.__dict__ != other.__dict__\n\n def __add__(self, other):\n if isinstance(other, int) and other == 0:\n # Required to sum() list of holidays\n # sum([h1, h2]) is equivalent to (0 + h1 + h2)\n return self\n elif not isinstance(other, HolidayBase):\n raise TypeError()\n HolidaySum = createHolidaySum(self, other)\n country = getattr(self, \"country\", None) or getattr(\n other, \"country\", None\n )\n if self.country and other.country and self.country != other.country:\n c1 = self.country\n if not isinstance(c1, list):\n c1 = [c1]\n c2 = other.country\n if not isinstance(c2, list):\n c2 = [c2]\n country = c1 + c2\n prov = getattr(self, \"prov\", None) or getattr(other, \"prov\", None)\n if self.prov and other.prov and self.prov != other.prov:\n p1 = self.prov if isinstance(self.prov, list) else [self.prov]\n p2 = other.prov if isinstance(other.prov, list) else [other.prov]\n prov = p1 + p2\n return HolidaySum(\n years=(self.years | other.years),\n expand=(self.expand or other.expand),\n observed=(self.observed or other.observed),\n country=country,\n prov=prov,\n )\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def _populate(self, year):\n pass\n\n\ndef createHolidaySum(h1, h2):\n class HolidaySum(HolidayBase):\n def __init__(self, country, **kwargs):\n self.country = country\n self.holidays = []\n if getattr(h1, \"holidays\", False):\n for h in h1.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h1)\n if getattr(h2, \"holidays\", False):\n for h in h2.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h2)\n HolidayBase.__init__(self, **kwargs)\n\n def _populate(self, year):\n for h in self.holidays[::-1]:\n h._populate(year)\n self.update(h)\n\n return HolidaySum\n",
"path": "holidays/holiday_base.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\n# python-holidays\n# ---------------\n# A fast, efficient Python library for generating country, province and state\n# specific sets of holidays on the fly. It aims to make determining whether a\n# specific date is a holiday as fast and flexible as possible.\n#\n# Author: ryanss <[email protected]> (c) 2014-2017\n# dr-prodigy <[email protected]> (c) 2017-2021\n# Website: https://github.com/dr-prodigy/python-holidays\n# License: MIT (see LICENSE file)\n\nfrom datetime import timedelta, datetime, date\n\nimport six\nfrom dateutil.parser import parse\n\n\nclass HolidayBase(dict):\n PROVINCES = []\n\n def __init__(\n self, years=[], expand=True, observed=True, prov=None, state=None\n ):\n self.observed = observed\n self.expand = expand\n if isinstance(years, int):\n years = [\n years,\n ]\n self.years = set(years)\n if not getattr(self, \"prov\", False):\n self.prov = prov\n self.state = state\n for year in list(self.years):\n self._populate(year)\n\n def __setattr__(self, key, value):\n if key == \"observed\" and len(self) > 0:\n dict.__setattr__(self, key, value)\n if value is True:\n # Add (Observed) dates\n years = list(self.years)\n self.years = set()\n self.clear()\n for year in years:\n self._populate(year)\n else:\n # Remove (Observed) dates\n for k, v in list(self.items()):\n if v.find(\"Observed\") >= 0:\n del self[k]\n else:\n return dict.__setattr__(self, key, value)\n\n def __keytransform__(self, key):\n if isinstance(key, datetime):\n key = key.date()\n elif isinstance(key, date):\n key = key\n elif isinstance(key, int) or isinstance(key, float):\n key = datetime.utcfromtimestamp(key).date()\n elif isinstance(key, six.string_types):\n try:\n key = parse(key).date()\n except (ValueError, OverflowError):\n raise ValueError(\"Cannot parse date from string '%s'\" % key)\n else:\n raise TypeError(\"Cannot convert type '%s' to date.\" % type(key))\n\n if self.expand and key.year not in self.years:\n self.years.add(key.year)\n self._populate(key.year)\n return key\n\n def __contains__(self, key):\n return dict.__contains__(self, self.__keytransform__(key))\n\n def __getitem__(self, key):\n if isinstance(key, slice):\n if not key.start or not key.stop:\n raise ValueError(\"Both start and stop must be given.\")\n\n start = self.__keytransform__(key.start)\n stop = self.__keytransform__(key.stop)\n\n if key.step is None:\n step = 1\n elif isinstance(key.step, timedelta):\n step = key.step.days\n elif isinstance(key.step, int):\n step = key.step\n else:\n raise TypeError(\n \"Cannot convert type '%s' to int.\" % type(key.step)\n )\n\n if step == 0:\n raise ValueError(\"Step value must not be zero.\")\n\n date_diff = stop - start\n if date_diff.days < 0 <= step or date_diff.days >= 0 > step:\n step *= -1\n\n days_in_range = []\n for delta_days in range(0, date_diff.days, step):\n day = start + timedelta(days=delta_days)\n try:\n dict.__getitem__(self, day)\n days_in_range.append(day)\n except KeyError:\n pass\n return days_in_range\n return dict.__getitem__(self, self.__keytransform__(key))\n\n def __setitem__(self, key, value):\n if key in self:\n if self.get(key).find(value) < 0 and value.find(self.get(key)) < 0:\n value = \"%s, %s\" % (value, self.get(key))\n else:\n value = self.get(key)\n return dict.__setitem__(self, self.__keytransform__(key), value)\n\n def update(self, *args):\n args = list(args)\n for arg in args:\n if isinstance(arg, dict):\n for key, value in list(arg.items()):\n self[key] = value\n elif isinstance(arg, list):\n for item in 
arg:\n self[item] = \"Holiday\"\n else:\n self[arg] = \"Holiday\"\n\n def append(self, *args):\n return self.update(*args)\n\n def get(self, key, default=None):\n return dict.get(self, self.__keytransform__(key), default)\n\n def get_list(self, key):\n return [h for h in self.get(key, \"\").split(\", \") if h]\n\n def get_named(self, name):\n # find all dates matching provided name (accepting partial\n # strings too, case insensitive), returning them in a list\n original_expand = self.expand\n self.expand = False\n matches = [key for key in self if name.lower() in self[key].lower()]\n self.expand = original_expand\n return matches\n\n def pop(self, key, default=None):\n if default is None:\n return dict.pop(self, self.__keytransform__(key))\n return dict.pop(self, self.__keytransform__(key), default)\n\n def pop_named(self, name):\n to_pop = self.get_named(name)\n if not to_pop:\n raise KeyError(name)\n for key in to_pop:\n self.pop(key)\n return to_pop\n\n def __eq__(self, other):\n return dict.__eq__(self, other) and self.__dict__ == other.__dict__\n\n def __ne__(self, other):\n return dict.__ne__(self, other) or self.__dict__ != other.__dict__\n\n def __add__(self, other):\n if isinstance(other, int) and other == 0:\n # Required to sum() list of holidays\n # sum([h1, h2]) is equivalent to (0 + h1 + h2)\n return self\n elif not isinstance(other, HolidayBase):\n raise TypeError()\n HolidaySum = createHolidaySum(self, other)\n country = getattr(self, \"country\", None) or getattr(\n other, \"country\", None\n )\n if self.country and other.country and self.country != other.country:\n c1 = self.country\n if not isinstance(c1, list):\n c1 = [c1]\n c2 = other.country\n if not isinstance(c2, list):\n c2 = [c2]\n country = c1 + c2\n prov = getattr(self, \"prov\", None) or getattr(other, \"prov\", None)\n if self.prov and other.prov and self.prov != other.prov:\n p1 = self.prov if isinstance(self.prov, list) else [self.prov]\n p2 = other.prov if isinstance(other.prov, list) else [other.prov]\n prov = p1 + p2\n return HolidaySum(\n years=(self.years | other.years),\n expand=(self.expand or other.expand),\n observed=(self.observed or other.observed),\n country=country,\n prov=prov,\n )\n\n def __radd__(self, other):\n return self.__add__(other)\n\n def _populate(self, year):\n pass\n\n def __reduce__(self):\n return super(HolidayBase, self).__reduce__()\n\n\ndef createHolidaySum(h1, h2):\n class HolidaySum(HolidayBase):\n def __init__(self, country, **kwargs):\n self.country = country\n self.holidays = []\n if getattr(h1, \"holidays\", False):\n for h in h1.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h1)\n if getattr(h2, \"holidays\", False):\n for h in h2.holidays:\n self.holidays.append(h)\n else:\n self.holidays.append(h2)\n HolidayBase.__init__(self, **kwargs)\n\n def _populate(self, year):\n for h in self.holidays[::-1]:\n h._populate(year)\n self.update(h)\n\n return HolidaySum\n",
"path": "holidays/holiday_base.py"
}
] | diff --git a/holidays/holiday_base.py b/holidays/holiday_base.py
index 1ca61fccb..a24410150 100644
--- a/holidays/holiday_base.py
+++ b/holidays/holiday_base.py
@@ -209,6 +209,9 @@ def __radd__(self, other):
def _populate(self, year):
pass
+ def __reduce__(self):
+ return super(HolidayBase, self).__reduce__()
+
def createHolidaySum(h1, h2):
class HolidaySum(HolidayBase):
diff --git a/test/test_holiday_base.py b/test/test_holiday_base.py
index 9211bf3c4..962616751 100644
--- a/test/test_holiday_base.py
+++ b/test/test_holiday_base.py
@@ -11,6 +11,7 @@
# Website: https://github.com/dr-prodigy/python-holidays
# License: MIT (see LICENSE file)
+import pickle
import unittest
from datetime import date, datetime, timedelta
@@ -447,6 +448,16 @@ def test_observed(self):
self.holidays.observed = True
self.assertIn(date(2018, 7, 2), self.holidays)
+ def test_serialization(self):
+ loaded_holidays = pickle.loads(pickle.dumps(self.holidays))
+ assert loaded_holidays == self.holidays
+
+ dt = datetime(2020, 1, 1)
+ res = dt in self.holidays
+ loaded_holidays = pickle.loads(pickle.dumps(self.holidays))
+ assert loaded_holidays == self.holidays
+ assert (dt in loaded_holidays) == res
+
class TestKeyTransforms(unittest.TestCase):
def setUp(self):
|
ivy-llc__ivy-28478 | Fix Frontend Failing Test: jax - manipulation.paddle.tile
| [
{
"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n with_supported_device_and_dtypes,\n)\n\n\n@with_unsupported_dtypes({\"2.6.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef broadcast_to(x, shape, name=None):\n return ivy.broadcast_to(x, shape)\n\n\n@with_supported_dtypes(\n {\n \"2.6.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n\n\n@with_unsupported_dtypes({\"2.6.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef expand(x, shape, name=None):\n return ivy.expand(x, shape)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef flip(x, axis, name=None):\n return ivy.flip(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef gather(params, indices, axis=-1, batch_dims=0, name=None):\n return ivy.gather(params, indices, axis=axis, batch_dims=batch_dims)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"uint16\", \"float16\", \"bfloat16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef gather_nd(x, index, name=None):\n return ivy.gather_nd(x, index)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef index_add(x, index, axis, value, *, name=None):\n x = ivy.swapaxes(x, axis, 0)\n value = ivy.swapaxes(value, axis, 0)\n _to_adds = []\n index = sorted(zip(ivy.to_list(index), range(len(index))), key=(lambda i: i[0]))\n while index:\n _curr_idx = index[0][0]\n while len(_to_adds) < _curr_idx:\n _to_adds.append(ivy.zeros_like(value[0]))\n _to_add_cum = ivy.get_item(value, index[0][1])\n while (len(index)) > 1 and (index[0][0] == index[1][0]):\n _to_add_cum = _to_add_cum + ivy.get_item(value, index.pop(1)[1])\n index.pop(0)\n _to_adds.append(_to_add_cum)\n while len(_to_adds) < x.shape[0]:\n _to_adds.append(ivy.zeros_like(value[0]))\n _to_adds = ivy.stack(_to_adds)\n if len(x.shape) < 2:\n # Added this line due to the paddle backend treating scalars as 1-d arrays\n _to_adds = ivy.flatten(_to_adds)\n\n ret = ivy.add(x, _to_adds)\n ret = ivy.swapaxes(ret, axis, 0)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef put_along_axis(arr, indices, values, axis, reduce=\"assign\"):\n result = ivy.put_along_axis(arr, indices, values, axis)\n return result\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"int32\", \"int64\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef repeat_interleave(x, repeats, axis=None, name=None):\n return ivy.repeat(x, repeats, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef 
reshape(x, shape, name=None):\n return ivy.reshape(x, shape)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef roll(x, shifts, axis=None, name=None):\n return ivy.roll(x, shifts, axis=axis)\n\n\n@with_supported_device_and_dtypes(\n {\n \"2.6.0 and above\": {\n \"cpu\": (\n \"bool\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\"float16\",),\n },\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef rot90(x, k=1, axes=(0, 1), name=None):\n return ivy.rot90(x, k=k, axes=axes)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis):\n return ivy.take_along_axis(arr, indices, axis)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@to_ivy_arrays_and_back\ndef tolist(x):\n return ivy.to_list(x)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unbind(input, axis=0):\n shape = list(input.shape)\n num_splits = shape[axis]\n shape.pop(axis)\n return tuple(x.reshape(tuple(shape)) for x in split(input, num_splits, axis=axis))\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unique_consecutive(x, axis=0):\n return ivy.unique_consecutive(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.6.0 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unstack(x, axis=0, name=None):\n return ivy.unstack(x, axis=axis)\n\n\nabsolute = abs\n",
"path": "ivy/functional/frontends/paddle/manipulation.py"
}
] | [
{
"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n with_supported_device_and_dtypes,\n)\n\n\n@with_unsupported_dtypes({\"2.6.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef broadcast_to(x, shape, name=None):\n return ivy.broadcast_to(x, shape)\n\n\n@with_supported_dtypes(\n {\n \"2.6.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n\n\n@with_unsupported_dtypes({\"2.6.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef expand(x, shape, name=None):\n return ivy.expand(x, shape)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef flip(x, axis, name=None):\n return ivy.flip(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef gather(params, indices, axis=-1, batch_dims=0, name=None):\n return ivy.gather(params, indices, axis=axis, batch_dims=batch_dims)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"uint16\", \"float16\", \"bfloat16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef gather_nd(x, index, name=None):\n return ivy.gather_nd(x, index)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef index_add(x, index, axis, value, *, name=None):\n x = ivy.swapaxes(x, axis, 0)\n value = ivy.swapaxes(value, axis, 0)\n _to_adds = []\n index = sorted(zip(ivy.to_list(index), range(len(index))), key=(lambda i: i[0]))\n while index:\n _curr_idx = index[0][0]\n while len(_to_adds) < _curr_idx:\n _to_adds.append(ivy.zeros_like(value[0]))\n _to_add_cum = ivy.get_item(value, index[0][1])\n while (len(index)) > 1 and (index[0][0] == index[1][0]):\n _to_add_cum = _to_add_cum + ivy.get_item(value, index.pop(1)[1])\n index.pop(0)\n _to_adds.append(_to_add_cum)\n while len(_to_adds) < x.shape[0]:\n _to_adds.append(ivy.zeros_like(value[0]))\n _to_adds = ivy.stack(_to_adds)\n if len(x.shape) < 2:\n # Added this line due to the paddle backend treating scalars as 1-d arrays\n _to_adds = ivy.flatten(_to_adds)\n\n ret = ivy.add(x, _to_adds)\n ret = ivy.swapaxes(ret, axis, 0)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef put_along_axis(arr, indices, values, axis, reduce=\"assign\"):\n result = ivy.put_along_axis(arr, indices, values, axis)\n return result\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"int32\", \"int64\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef repeat_interleave(x, repeats, axis=None, name=None):\n return ivy.repeat(x, repeats, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef 
reshape(x, shape, name=None):\n return ivy.reshape(x, shape)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"complex64\",\n \"complex128\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef roll(x, shifts, axis=None, name=None):\n return ivy.roll(x, shifts, axis=axis)\n\n\n@with_supported_device_and_dtypes(\n {\n \"2.6.0 and above\": {\n \"cpu\": (\n \"bool\",\n \"int32\",\n \"int64\",\n \"float32\",\n \"float64\",\n ),\n \"gpu\": (\"float16\",),\n },\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef rot90(x, k=1, axes=(0, 1), name=None):\n return ivy.rot90(x, k=k, axes=axes)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis):\n return ivy.take_along_axis(arr, indices, axis)\n\n\n@with_unsupported_dtypes(\n {\"2.6.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\", \"bfloat16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@to_ivy_arrays_and_back\ndef tolist(x):\n return ivy.to_list(x)\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unbind(input, axis=0):\n shape = list(input.shape)\n num_splits = shape[axis]\n shape.pop(axis)\n return tuple(x.reshape(tuple(shape)) for x in split(input, num_splits, axis=axis))\n\n\n@with_supported_dtypes(\n {\"2.6.0 and below\": (\"bool\", \"int32\", \"int64\", \"float16\", \"float32\", \"float64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unique_consecutive(x, axis=0):\n return ivy.unique_consecutive(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.6.0 and below\": (\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef unstack(x, axis=0, name=None):\n return ivy.unstack(x, axis=axis)\n\n\nabsolute = abs\n",
"path": "ivy/functional/frontends/paddle/manipulation.py"
}
] | diff --git a/ivy/functional/frontends/paddle/manipulation.py b/ivy/functional/frontends/paddle/manipulation.py
index dd7c7e79a28f9..6c2c8d6a90adc 100644
--- a/ivy/functional/frontends/paddle/manipulation.py
+++ b/ivy/functional/frontends/paddle/manipulation.py
@@ -208,7 +208,7 @@ def take_along_axis(arr, indices, axis):
@with_unsupported_dtypes(
- {"2.6.0 and below": ("int8", "uint8", "int16", "float16")},
+ {"2.6.0 and below": ("int8", "uint8", "int16", "float16", "bfloat16")},
"paddle",
)
@to_ivy_arrays_and_back
|
kubeflow__pipelines-1666 | `pip install kfp` does not install CLI
**What happened:**
```
$ virtualenv .venv
...
$ pip install kfp==0.1.23
...
$ kfp
Traceback (most recent call last):
File "/private/tmp/.venv/bin/kfp", line 6, in <module>
from kfp.__main__ import main
File "/private/tmp/.venv/lib/python3.7/site-packages/kfp/__main__.py", line 15, in <module>
from .cli.cli import main
ModuleNotFoundError: No module named 'kfp.cli'
```
**What did you expect to happen:**
To run the CLI.
**Anything else you would like to add:**
I could be confused about what is expected to be available after installing the `kfp` package from pip: setup.py declares an entry point named `kfp` in
https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L74
but `__main__.py` imports the `kfp.cli` package https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/kfp/__main__.py#L15
which is not included in the distribution https://github.com/kubeflow/pipelines/blob/812ca7f8836c47039c3b1f3daf23e68fbcee1a92/sdk/python/setup.py#L46-L54
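For reference, a minimal sketch of the packaging fix this points to (mirroring this repo's setup.py; other metadata such as install_requires and classifiers omitted for brevity): listing `kfp.cli` in `packages` lets `kfp.__main__`'s `from .cli.cli import main` resolve after a pip install.
```python
from setuptools import setup

setup(
    name='kfp',
    version='0.1.24',
    packages=[
        'kfp',
        'kfp.cli',  # previously missing, hence ModuleNotFoundError: No module named 'kfp.cli'
        'kfp.compiler',
        'kfp.components',
        'kfp.components.structures',
        'kfp.components.structures.kubernetes',
        'kfp.dsl',
        'kfp.notebook',
    ],
    entry_points={'console_scripts': [
        'dsl-compile = kfp.compiler.main:main',
        'kfp=kfp.__main__:main',
    ]},
)
```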
| [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\n\nNAME = 'kfp'\nVERSION = '0.1.24'\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n 'kfp-server-api >= 0.1.18, < 0.1.19', #Update the upper version whenever a new version of the kfp-server-api package is released. Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0'\n]\n\nsetup(\n name=NAME,\n version=VERSION,\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'kfp=kfp.__main__:main']})\n",
"path": "sdk/python/setup.py"
}
] | [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import setup\n\nNAME = 'kfp'\nVERSION = '0.1.24'\n\nREQUIRES = [\n 'urllib3>=1.15,<1.25', #Fixing the version conflict with the \"requests\" package\n 'six >= 1.10',\n 'certifi',\n 'python-dateutil',\n 'PyYAML',\n 'google-cloud-storage>=1.13.0',\n 'kubernetes>=8.0.0, <=9.0.0',\n 'PyJWT>=1.6.4',\n 'cryptography>=2.4.2',\n 'google-auth>=1.6.1',\n 'requests_toolbelt>=0.8.0',\n 'cloudpickle',\n 'kfp-server-api >= 0.1.18, < 0.1.19', #Update the upper version whenever a new version of the kfp-server-api package is released. Update the lower version when there is a breaking change in kfp-server-api.\n 'argo-models == 2.2.1a', #2.2.1a is equivalent to argo 2.2.1\n 'jsonschema >= 3.0.1',\n 'tabulate == 0.8.3',\n 'click == 7.0'\n]\n\nsetup(\n name=NAME,\n version=VERSION,\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.components.structures.kubernetes',\n 'kfp.dsl',\n 'kfp.notebook',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'kfp=kfp.__main__:main']})\n",
"path": "sdk/python/setup.py"
}
] | diff --git a/sdk/python/setup.py b/sdk/python/setup.py
index ee581414370..dac70636a4e 100644
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -45,6 +45,7 @@
install_requires=REQUIRES,
packages=[
'kfp',
+ 'kfp.cli',
'kfp.compiler',
'kfp.components',
'kfp.components.structures',
|
streamlit__streamlit-5184 | The `st.success` docstring example uses `icon:"✅"` where it should be `icon="✅"`, here:
https://github.com/streamlit/streamlit/blob/535f11765817657892506d6904bbbe04908dbdf3/lib/streamlit/elements/alert.py#L145
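That is, the example should pass `icon` as a keyword argument with `=` rather than `:`; a corrected call looks like this:
```python
import streamlit as st

# `icon` is a keyword-only argument, so it must be written icon="..." rather than icon:"..."
st.success('This is a success message!', icon="✅")
```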
| [
{
"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, Optional, TYPE_CHECKING\n\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Alert_pb2 import Alert as AlertProto\nfrom streamlit.string_util import clean_text, is_emoji\n\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\n from streamlit.type_util import SupportsStr\n\n\ndef validate_emoji(maybe_emoji: Optional[str]) -> str:\n if maybe_emoji is None:\n return \"\"\n elif is_emoji(maybe_emoji):\n return maybe_emoji\n else:\n raise StreamlitAPIException(\n f'The value \"{maybe_emoji}\" is not a valid emoji. Shortcodes are not allowed, please use a single character instead.'\n )\n\n\nclass AlertMixin:\n def error(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display error message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n body : str\n The error text to display.\n\n Example\n -------\n >>> st.error('This is an error', icon=\"🚨\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.icon = validate_emoji(icon)\n alert_proto.body = clean_text(body)\n alert_proto.format = AlertProto.ERROR\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def warning(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display warning message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The warning text to display.\n\n Example\n -------\n >>> st.warning('This is a warning', icon=\"⚠️\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.WARNING\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def info(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display an informational message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The info text to display.\n\n Example\n -------\n >>> st.info('This is a purely informational message', icon=\"ℹ️\")\n\n \"\"\"\n\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.INFO\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def success(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display a success message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji 
to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The success text to display.\n\n Example\n -------\n >>> st.success('This is a success message!', icon:\"✅\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.SUCCESS\n return self.dg._enqueue(\"alert\", alert_proto)\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"DeltaGenerator\", self)\n",
"path": "lib/streamlit/elements/alert.py"
}
] | [
{
"content": "# Copyright 2018-2022 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import cast, Optional, TYPE_CHECKING\n\nfrom streamlit.errors import StreamlitAPIException\nfrom streamlit.proto.Alert_pb2 import Alert as AlertProto\nfrom streamlit.string_util import clean_text, is_emoji\n\nif TYPE_CHECKING:\n from streamlit.delta_generator import DeltaGenerator\n from streamlit.type_util import SupportsStr\n\n\ndef validate_emoji(maybe_emoji: Optional[str]) -> str:\n if maybe_emoji is None:\n return \"\"\n elif is_emoji(maybe_emoji):\n return maybe_emoji\n else:\n raise StreamlitAPIException(\n f'The value \"{maybe_emoji}\" is not a valid emoji. Shortcodes are not allowed, please use a single character instead.'\n )\n\n\nclass AlertMixin:\n def error(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display error message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n body : str\n The error text to display.\n\n Example\n -------\n >>> st.error('This is an error', icon=\"🚨\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.icon = validate_emoji(icon)\n alert_proto.body = clean_text(body)\n alert_proto.format = AlertProto.ERROR\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def warning(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display warning message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The warning text to display.\n\n Example\n -------\n >>> st.warning('This is a warning', icon=\"⚠️\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.WARNING\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def info(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display an informational message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The info text to display.\n\n Example\n -------\n >>> st.info('This is a purely informational message', icon=\"ℹ️\")\n\n \"\"\"\n\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.INFO\n return self.dg._enqueue(\"alert\", alert_proto)\n\n def success(\n self,\n body: \"SupportsStr\",\n *, # keyword-only args:\n icon: Optional[str] = None,\n ) -> \"DeltaGenerator\":\n \"\"\"Display a success message.\n\n Parameters\n ----------\n icon : None\n An optional parameter, that adds an emoji 
to the alert.\n The default is None.\n This argument can only be supplied by keyword.\n\n body : str\n The success text to display.\n\n Example\n -------\n >>> st.success('This is a success message!', icon=\"✅\")\n\n \"\"\"\n alert_proto = AlertProto()\n alert_proto.body = clean_text(body)\n alert_proto.icon = validate_emoji(icon)\n alert_proto.format = AlertProto.SUCCESS\n return self.dg._enqueue(\"alert\", alert_proto)\n\n @property\n def dg(self) -> \"DeltaGenerator\":\n \"\"\"Get our DeltaGenerator.\"\"\"\n return cast(\"DeltaGenerator\", self)\n",
"path": "lib/streamlit/elements/alert.py"
}
] | diff --git a/lib/streamlit/elements/alert.py b/lib/streamlit/elements/alert.py
index d9d5f2fe5f82..65458e9162b3 100644
--- a/lib/streamlit/elements/alert.py
+++ b/lib/streamlit/elements/alert.py
@@ -142,7 +142,7 @@ def success(
Example
-------
- >>> st.success('This is a success message!', icon:"✅")
+ >>> st.success('This is a success message!', icon="✅")
"""
alert_proto = AlertProto()
|
django-cms__django-cms-1994 | make django-admin-style a fixed dependency
and add it to the tutorial
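A minimal sketch of the dependency change (the PyPI package is `djangocms-admin-style`; the surrounding requirements are as in the project's setup.py):
```python
# setup.py excerpt: pin the admin skin as a hard dependency
install_requires = [
    'Django>=1.4,<1.6',
    'django-classy-tags>=0.3.4.1',
    'south>=0.7.2',
    'html5lib',
    'django-mptt>=0.5.1,<0.5.3',
    'django-sekizai>=0.7',
    'djangocms-admin-style',  # newly added admin skin dependency
]
```
Per the tutorial change in the diff below, the admin-style app also needs to be listed in `INSTALLED_APPS` before `django.contrib.admin` so its templates take precedence.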
| [
{
"content": "from setuptools import setup, find_packages\nimport os\nimport cms\n\n\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.3\",\n]\n\nsetup(\n author=\"Patrick Lauber\",\n author_email=\"[email protected]\",\n name='django-cms',\n version=cms.__version__,\n description='An Advanced Django CMS',\n long_description=open(os.path.join(os.path.dirname(__file__), 'README.rst')).read(),\n url='https://www.django-cms.org/',\n license='BSD License',\n platforms=['OS Independent'],\n classifiers=CLASSIFIERS,\n install_requires=[\n 'Django>=1.4,<1.6',\n 'django-classy-tags>=0.3.4.1',\n 'south>=0.7.2',\n 'html5lib',\n 'django-mptt>=0.5.1,<0.5.3',\n 'django-sekizai>=0.7',\n ],\n tests_require=[\n 'django-reversion>=1.6.6',\n 'Pillow==1.7.7',\n 'Sphinx==1.1.3',\n 'Jinja2==2.6',\n 'Pygments==1.5',\n 'dj-database-url==0.2.1',\n 'django-hvad',\n ],\n packages=find_packages(exclude=[\"project\", \"project.*\"]),\n include_package_data=True,\n zip_safe=False,\n test_suite='runtests.main',\n)\n",
"path": "setup.py"
}
] | [
{
"content": "from setuptools import setup, find_packages\nimport os\nimport cms\n\n\nCLASSIFIERS = [\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Framework :: Django',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3.3\",\n]\n\nsetup(\n author=\"Patrick Lauber\",\n author_email=\"[email protected]\",\n name='django-cms',\n version=cms.__version__,\n description='An Advanced Django CMS',\n long_description=open(os.path.join(os.path.dirname(__file__), 'README.rst')).read(),\n url='https://www.django-cms.org/',\n license='BSD License',\n platforms=['OS Independent'],\n classifiers=CLASSIFIERS,\n install_requires=[\n 'Django>=1.4,<1.6',\n 'django-classy-tags>=0.3.4.1',\n 'south>=0.7.2',\n 'html5lib',\n 'django-mptt>=0.5.1,<0.5.3',\n 'django-sekizai>=0.7',\n\t'djangocms-admin-style'\n ],\n tests_require=[\n 'django-reversion>=1.6.6',\n 'Pillow==1.7.7',\n 'Sphinx==1.1.3',\n 'Jinja2==2.6',\n 'Pygments==1.5',\n 'dj-database-url==0.2.1',\n 'django-hvad',\n ],\n packages=find_packages(exclude=[\"project\", \"project.*\"]),\n include_package_data=True,\n zip_safe=False,\n test_suite='runtests.main',\n)\n",
"path": "setup.py"
}
] | diff --git a/docs/getting_started/installation.rst b/docs/getting_started/installation.rst
index fa131805ebc..6f425a8e2e2 100644
--- a/docs/getting_started/installation.rst
+++ b/docs/getting_started/installation.rst
@@ -16,6 +16,7 @@ Requirements
* `django-mptt`_ 0.5.2 (strict due to API compatibility issues)
* `django-sekizai`_ 0.7 or higher
* `html5lib`_ 0.90 or higher
+* `djangocms-admin-style`_
* `django-i18nurls`_ (if using django 1.3.X)
* An installed and working instance of one of the databases listed in the
`Databases`_ section.
@@ -32,6 +33,7 @@ Requirements
.. _django-sekizai: https://github.com/ojii/django-sekizai
.. _html5lib: http://code.google.com/p/html5lib/
.. _django-i18nurls: https://github.com/brocaar/django-i18nurls
+.. _djangocms-admin-style: https://github.com/divio/djangocms-admin-style
Recommended
===========
@@ -94,7 +96,7 @@ following is an example requirements.txt file that can be used with pip to insta
::
# Bare minimum
- django-cms==2.4.1
+ django-cms==3.0
#These dependencies are brought in by django-cms, but if you want to lock-in their version, specify them
Django==1.5.1
@@ -104,6 +106,7 @@ following is an example requirements.txt file that can be used with pip to insta
django-mptt==0.5.2
django-sekizai==0.7
six==1.3.0
+ djangocms-admin-style==0.1.2
#Optional, recommended packages
Pillow==2.0.0
diff --git a/docs/getting_started/tutorial.rst b/docs/getting_started/tutorial.rst
index 59b0d249264..635e4b2d207 100644
--- a/docs/getting_started/tutorial.rst
+++ b/docs/getting_started/tutorial.rst
@@ -72,6 +72,8 @@ other highly recommended applications/libraries:
* ``'menus'``, helper for model independent hierarchical website navigation
* ``'south'``, intelligent schema and data migrations
* ``'sekizai'``, for javascript and css management
+* ``'django_admin_style'``, for the admin skin. You **must** add
+ ``'django_admin_style'`` in the list before ``'django.contrib.admin'``.
Also add any (or all) of the following plugins, depending on your needs:
diff --git a/docs/upgrade/3.0.rst b/docs/upgrade/3.0.rst
index 66686e7b242..9fbbd71a6a1 100644
--- a/docs/upgrade/3.0.rst
+++ b/docs/upgrade/3.0.rst
@@ -18,7 +18,8 @@ What's new in 3.0
New Frontend Editing
====================
-django CMS 3.0 introduces a new Frontend Editing system.
+django CMS 3.0 introduces a new Frontend Editing system as well as a customizable
+django admin skin.
In the new system, Placeholders and their Plugins are no longer managed in the
Admin site, but only from the Frontend.
diff --git a/setup.py b/setup.py
index 6ef04cabeb4..caaa0a16a63 100644
--- a/setup.py
+++ b/setup.py
@@ -37,6 +37,7 @@
'html5lib',
'django-mptt>=0.5.1,<0.5.3',
'django-sekizai>=0.7',
+ 'djangocms-admin-style'
],
tests_require=[
'django-reversion>=1.6.6',
|
ansible__ansible-modules-extras-387 | Freshly installed bower raises json error
I ran into an issue where the Ansible bower module, when attempting to run `bower install`, can't parse the JSON from `bower list --json`.
Here is the stack trace:
```
failed: [default] => {"failed": true, "parsed": false}
BECOME-SUCCESS-bcokpjdhrlrcdlrfpmvdgmahrbmtzoqk
Traceback (most recent call last):
File "/home/vagrant/.ansible/tmp/ansible-tmp-1427221462.07-279423510478512/bower", line 1781, in <module>
main()
File "/home/vagrant/.ansible/tmp/ansible-tmp-1427221462.07-279423510478512/bower", line 168, in main
installed, missing, outdated = bower.list()
File "/home/vagrant/.ansible/tmp/ansible-tmp-1427221462.07-279423510478512/bower", line 116, in list
data = json.loads(self._exec(cmd, True, False))
File "/usr/lib/python2.7/json/__init__.py", line 338, in loads
return _default_decoder.decode(s)
File "/usr/lib/python2.7/json/decoder.py", line 366, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python2.7/json/decoder.py", line 384, in raw_decode
raise ValueError("No JSON object could be decoded")
```
So, when I logged in to run the `bower list --json` command manually, I saw this:
```
vagrant@vagrant-ubuntu-trusty-64:~/catdoor/opus$ bower list --json
[?] May bower anonymously report usage statistics to improve the tool over time? Yes
```
This makes me wonder whether a freshly installed bower will always ask that question, and thus not produce JSON output.
When I subsequently re-run the provision, it fails the same way.
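A way to see the intent of the fix outside Ansible (a rough sketch; the working directory `/app/location` is just a placeholder for a project containing a bower.json): run bower with the flags the module now passes, so the first-run prompt can never pollute the JSON.
```python
import json
import subprocess

# --config.interactive=false suppresses the first-run analytics prompt;
# --allow-root lets the command run when the task escalates privileges.
out = subprocess.check_output(
    ["bower", "list", "--json", "--config.interactive=false", "--allow-root"],
    cwd="/app/location",  # placeholder path
)
deps = json.loads(out).get("dependencies", {})
print(sorted(deps))
```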
| [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, Michael Warkentin <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: bower\nshort_description: Manage bower packages with bower\ndescription:\n - Manage bower packages with bower\nversion_added: 1.9\nauthor: Michael Warkentin\noptions:\n name:\n description:\n - The name of a bower package to install\n required: false\n offline:\n description:\n - Install packages from local cache, if the packages were installed before\n required: false\n default: no\n choices: [ \"yes\", \"no\" ]\n path:\n description:\n - The base path where to install the bower packages\n required: true\n state:\n description:\n - The state of the bower package\n required: false\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n version:\n description:\n - The version to be installed\n required: false\n'''\n\nEXAMPLES = '''\ndescription: Install \"bootstrap\" bower package.\n- bower: name=bootstrap\n\ndescription: Install \"bootstrap\" bower package on version 3.1.1.\n- bower: name=bootstrap version=3.1.1\n\ndescription: Remove the \"bootstrap\" bower package.\n- bower: name=bootstrap state=absent\n\ndescription: Install packages based on bower.json.\n- bower: path=/app/location\n\ndescription: Update packages based on bower.json to their latest version.\n- bower: path=/app/location state=latest\n'''\n\n\nclass Bower(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.name = kwargs['name']\n self.offline = kwargs['offline']\n self.path = kwargs['path']\n self.version = kwargs['version']\n\n if kwargs['version']:\n self.name_version = self.name + '#' + self.version\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = [\"bower\"] + args\n\n if self.name:\n cmd.append(self.name_version)\n\n if self.offline:\n cmd.append('--offline')\n\n # If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json']\n\n installed = list()\n missing = list()\n outdated = list()\n data = json.loads(self._exec(cmd, True, False))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif data['dependencies'][dep]['pkgMeta']['version'] != data['dependencies'][dep]['update']['latest']:\n outdated.append(dep)\n elif 'incompatible' in 
data['dependencies'][dep] and data['dependencies'][dep]['incompatible']:\n outdated.append(dep)\n else:\n installed.append(dep)\n # Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing, outdated\n\n def install(self):\n return self._exec(['install'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None),\n offline=dict(default='no', type='bool'),\n path=dict(required=True),\n state=dict(default='present', choices=['present', 'absent', 'latest', ]),\n version=dict(default=None),\n )\n module = AnsibleModule(\n argument_spec=arg_spec\n )\n\n name = module.params['name']\n offline = module.params['offline']\n path = module.params['path']\n state = module.params['state']\n version = module.params['version']\n\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n bower = Bower(module, name=name, offline=offline, path=path, version=version)\n\n changed = False\n if state == 'present':\n installed, missing, outdated = bower.list()\n if len(missing):\n changed = True\n bower.install()\n elif state == 'latest':\n installed, missing, outdated = bower.list()\n if len(missing) or len(outdated):\n changed = True\n bower.update()\n else: # Absent\n installed, missing, outdated = bower.list()\n if name in installed:\n changed = True\n bower.uninstall()\n\n module.exit_json(changed=changed)\n\n# Import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n",
"path": "packaging/language/bower.py"
}
] | [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# (c) 2014, Michael Warkentin <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\nDOCUMENTATION = '''\n---\nmodule: bower\nshort_description: Manage bower packages with bower\ndescription:\n - Manage bower packages with bower\nversion_added: 1.9\nauthor: Michael Warkentin\noptions:\n name:\n description:\n - The name of a bower package to install\n required: false\n offline:\n description:\n - Install packages from local cache, if the packages were installed before\n required: false\n default: no\n choices: [ \"yes\", \"no\" ]\n path:\n description:\n - The base path where to install the bower packages\n required: true\n state:\n description:\n - The state of the bower package\n required: false\n default: present\n choices: [ \"present\", \"absent\", \"latest\" ]\n version:\n description:\n - The version to be installed\n required: false\n'''\n\nEXAMPLES = '''\ndescription: Install \"bootstrap\" bower package.\n- bower: name=bootstrap\n\ndescription: Install \"bootstrap\" bower package on version 3.1.1.\n- bower: name=bootstrap version=3.1.1\n\ndescription: Remove the \"bootstrap\" bower package.\n- bower: name=bootstrap state=absent\n\ndescription: Install packages based on bower.json.\n- bower: path=/app/location\n\ndescription: Update packages based on bower.json to their latest version.\n- bower: path=/app/location state=latest\n'''\n\n\nclass Bower(object):\n def __init__(self, module, **kwargs):\n self.module = module\n self.name = kwargs['name']\n self.offline = kwargs['offline']\n self.path = kwargs['path']\n self.version = kwargs['version']\n\n if kwargs['version']:\n self.name_version = self.name + '#' + self.version\n else:\n self.name_version = self.name\n\n def _exec(self, args, run_in_check_mode=False, check_rc=True):\n if not self.module.check_mode or (self.module.check_mode and run_in_check_mode):\n cmd = [\"bower\"] + args\n\n if self.name:\n cmd.append(self.name_version)\n\n if self.offline:\n cmd.append('--offline')\n\n # If path is specified, cd into that path and run the command.\n cwd = None\n if self.path:\n if not os.path.exists(self.path):\n os.makedirs(self.path)\n if not os.path.isdir(self.path):\n self.module.fail_json(msg=\"path %s is not a directory\" % self.path)\n cwd = self.path\n\n rc, out, err = self.module.run_command(cmd, check_rc=check_rc, cwd=cwd)\n return out\n return ''\n\n def list(self):\n cmd = ['list', '--json', '--config.interactive=false', '--allow-root']\n\n installed = list()\n missing = list()\n outdated = list()\n data = json.loads(self._exec(cmd, True, False))\n if 'dependencies' in data:\n for dep in data['dependencies']:\n if 'missing' in data['dependencies'][dep] and data['dependencies'][dep]['missing']:\n missing.append(dep)\n elif data['dependencies'][dep]['pkgMeta']['version'] != data['dependencies'][dep]['update']['latest']:\n outdated.append(dep)\n 
elif 'incompatible' in data['dependencies'][dep] and data['dependencies'][dep]['incompatible']:\n outdated.append(dep)\n else:\n installed.append(dep)\n # Named dependency not installed\n else:\n missing.append(self.name)\n\n return installed, missing, outdated\n\n def install(self):\n return self._exec(['install'])\n\n def update(self):\n return self._exec(['update'])\n\n def uninstall(self):\n return self._exec(['uninstall'])\n\n\ndef main():\n arg_spec = dict(\n name=dict(default=None),\n offline=dict(default='no', type='bool'),\n path=dict(required=True),\n state=dict(default='present', choices=['present', 'absent', 'latest', ]),\n version=dict(default=None),\n )\n module = AnsibleModule(\n argument_spec=arg_spec\n )\n\n name = module.params['name']\n offline = module.params['offline']\n path = module.params['path']\n state = module.params['state']\n version = module.params['version']\n\n if state == 'absent' and not name:\n module.fail_json(msg='uninstalling a package is only available for named packages')\n\n bower = Bower(module, name=name, offline=offline, path=path, version=version)\n\n changed = False\n if state == 'present':\n installed, missing, outdated = bower.list()\n if len(missing):\n changed = True\n bower.install()\n elif state == 'latest':\n installed, missing, outdated = bower.list()\n if len(missing) or len(outdated):\n changed = True\n bower.update()\n else: # Absent\n installed, missing, outdated = bower.list()\n if name in installed:\n changed = True\n bower.uninstall()\n\n module.exit_json(changed=changed)\n\n# Import module snippets\nfrom ansible.module_utils.basic import *\nmain()\n",
"path": "packaging/language/bower.py"
}
] | diff --git a/packaging/language/bower.py b/packaging/language/bower.py
index 3fccf51056b..085f454e639 100644
--- a/packaging/language/bower.py
+++ b/packaging/language/bower.py
@@ -108,7 +108,7 @@ def _exec(self, args, run_in_check_mode=False, check_rc=True):
return ''
def list(self):
- cmd = ['list', '--json']
+ cmd = ['list', '--json', '--config.interactive=false', '--allow-root']
installed = list()
missing = list()
|
cupy__cupy-764 | cupy.array(cupy_array, order=None) raises error
When I do this:
```python
>>> x = cupy.ones(3)
>>> xx = cupy.array(x, order=None)
```
I get this traceback:
```
File "[...]/cupy/cupy/creation/from_data.py", line 41, in array
return core.array(obj, dtype, copy, order, subok, ndmin)
File "cupy/core/core.pyx", line 2026, in cupy.core.core.array
cpdef ndarray array(obj, dtype=None, bint copy=True, str order='K',
File "cupy/core/core.pyx", line 2039, in cupy.core.core.array
a = src.astype(dtype, order=order, copy=copy)
File "cupy/core/core.pyx", line 276, in cupy.core.core.ndarray.astype
cpdef ndarray astype(
File "cupy/core/core.pyx", line 313, in cupy.core.core.ndarray.astype
raise TypeError('order not understood')
TypeError: order not understood
```
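A minimal sketch of the behaviour the fix aims for (the helper name below is made up; the actual change, per the diff, normalizes `order=None` inside `ndarray.astype` and the array constructor):
```python
# Hypothetical helper mirroring the core.pyx fix: treat order=None as the default 'K'
def normalize_astype_order(order):
    if order is None:
        order = 'K'
    if order not in ('C', 'F', 'A', 'K'):
        raise TypeError('order not understood')
    return order


assert normalize_astype_order(None) == 'K'  # no longer raises
assert normalize_astype_order('F') == 'F'
```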
| [
{
"content": "# flake8: NOQA\n# \"flake8: NOQA\" to suppress warning \"H104 File contains nothing but comments\"\n\n# class s_(object):\n\nimport numpy\nimport six\n\nimport cupy\nfrom cupy import core\nfrom cupy.creation import from_data\nfrom cupy.manipulation import join\n\n\nclass AxisConcatenator(object):\n \"\"\"Translates slice objects to concatenation along an axis.\n\n For detailed documentation on usage, see :func:`cupy.r_`.\n This implementation is partially borrowed from NumPy's one.\n\n \"\"\"\n\n def _output_obj(self, obj, ndim, ndmin, trans1d):\n k2 = ndmin - ndim\n if trans1d < 0:\n trans1d += k2 + 1\n defaxes = list(six.moves.range(ndmin))\n k1 = trans1d\n axes = defaxes[:k1] + defaxes[k2:] + \\\n defaxes[k1:k2]\n return obj.transpose(axes)\n\n def __init__(self, axis=0, matrix=False, ndmin=1, trans1d=-1):\n self.axis = axis\n self.trans1d = trans1d\n self.matrix = matrix\n self.ndmin = ndmin\n\n def __getitem__(self, key):\n trans1d = self.trans1d\n ndmin = self.ndmin\n objs = []\n scalars = []\n arraytypes = []\n scalartypes = []\n if isinstance(key, six.string_types):\n raise NotImplementedError\n if not isinstance(key, tuple):\n key = (key,)\n\n for i, k in enumerate(key):\n scalar = False\n if isinstance(k, slice):\n raise NotImplementedError\n elif isinstance(k, six.string_types):\n if i != 0:\n raise ValueError(\n 'special directives must be the first entry.')\n raise NotImplementedError\n elif type(k) in numpy.ScalarType:\n newobj = from_data.array(k, ndmin=ndmin)\n scalars.append(i)\n scalar = True\n scalartypes.append(newobj.dtype)\n else:\n newobj = from_data.array(k, copy=False, ndmin=ndmin)\n if ndmin > 1:\n ndim = from_data.array(k, copy=False).ndim\n if trans1d != -1 and ndim < ndmin:\n newobj = self._output_obj(newobj, ndim, ndmin, trans1d)\n\n objs.append(newobj)\n if not scalar and isinstance(newobj, core.ndarray):\n arraytypes.append(newobj.dtype)\n\n final_dtype = numpy.find_common_type(arraytypes, scalartypes)\n if final_dtype is not None:\n for k in scalars:\n objs[k] = objs[k].astype(final_dtype)\n\n return join.concatenate(tuple(objs), axis=self.axis)\n\n def __len__(self):\n return 0\n\n\nclass CClass(AxisConcatenator):\n\n def __init__(self):\n super(CClass, self).__init__(-1, ndmin=2, trans1d=0)\n\n\nc_ = CClass()\n\"\"\"Translates slice objects to concatenation along the second axis.\n\nThis is a CuPy object that corresponds to :func:`cupy.r_`, which is\nuseful because of its common occurrence. In particular, arrays will be\nstacked along their last axis after being upgraded to at least 2-D with\n1's post-pended to the shape (column vectors made out of 1-D arrays).\n\nFor detailed documentation, see :func:`r_`.\n\nThis implementation is partially borrowed from NumPy's one.\n\nArgs:\n Not a function, so takes no parameters\n\nReturns:\n cupy.ndarray: Joined array.\n\n.. seealso:: :func:`numpy.c_`\n\nExamples\n--------\n>>> a = cupy.array([[1, 2, 3]], dtype=np.int32)\n>>> b = cupy.array([[4, 5, 6]], dtype=np.int32)\n>>> cupy.c_[a, 0, 0, b]\narray([[1, 2, 3, 0, 0, 4, 5, 6]], dtype=int32)\n\n\"\"\"\n\n\nclass RClass(AxisConcatenator):\n\n def __init__(self):\n super(RClass, self).__init__()\n\n\nr_ = RClass()\n\"\"\"Translates slice objects to concatenation along the first axis.\n\nThis is a simple way to build up arrays quickly.\nIf the index expression contains comma separated arrays, then stack\nthem along their first axis.\n\nThis object can build up from normal CuPy arrays.\nTherefore, the other objects (e.g. 
writing strings like '2,3,4',\nor using imaginary numbers like [1,2,3j],\nor using string integers like '-1') are not implemented yet\ncompared with NumPy.\n\nThis implementation is partially borrowed from NumPy's one.\n\nArgs:\n Not a function, so takes no parameters\n\nReturns:\n cupy.ndarray: Joined array.\n\n.. seealso:: :func:`numpy.r_`\n\nExamples\n--------\n>>> a = cupy.array([1, 2, 3], dtype=np.int32)\n>>> b = cupy.array([4, 5, 6], dtype=np.int32)\n>>> cupy.r_[a, 0, 0, b]\narray([1, 2, 3, 0, 0, 4, 5, 6], dtype=int32)\n\n\"\"\"\n\n\ndef indices(dimensions, dtype=int):\n \"\"\"Returns an array representing the indices of a grid.\n\n Computes an array where the subarrays contain index values 0,1,...\n varying only along the corresponding axis.\n\n Args:\n dimensions: The shape of the grid.\n dtype: Data type specifier. It is int by default.\n\n Returns:\n ndarray:\n The array of grid indices,\n ``grid.shape = (len(dimensions),) + tuple(dimensions)``.\n\n Examples\n --------\n >>> grid = cupy.indices((2, 3))\n >>> grid.shape\n (2, 2, 3)\n >>> grid[0] # row indices\n array([[0, 0, 0],\n [1, 1, 1]])\n >>> grid[1] # column indices\n array([[0, 1, 2],\n [0, 1, 2]])\n\n .. seealso:: :func:`numpy.indices`\n\n \"\"\"\n dimensions = tuple(dimensions)\n N = len(dimensions)\n shape = (1,) * N\n res = cupy.empty((N,) + dimensions, dtype=dtype)\n for i, dim in enumerate(dimensions):\n res[i] = cupy.arange(dim, dtype=dtype).reshape(\n shape[:i] + (dim,) + shape[i + 1:]\n )\n return res\n\n\ndef ix_(*args):\n \"\"\"Construct an open mesh from multiple sequences.\n\n This function takes N 1-D sequences and returns N outputs with N\n dimensions each, such that the shape is 1 in all but one dimension\n and the dimension with the non-unit shape value cycles through all\n N dimensions.\n\n Using `ix_` one can quickly construct index arrays that will index\n the cross product. ``a[cupy.ix_([1,3],[2,5])]`` returns the array\n ``[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]``.\n\n Args:\n *args: 1-D sequences\n\n Returns:\n tuple of ndarrays:\n N arrays with N dimensions each, with N the number of input sequences.\n Together these arrays form an open mesh.\n\n Examples\n --------\n >>> a = cupy.arange(10).reshape(2, 5)\n >>> a\n array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\n >>> ixgrid = cupy.ix_([0,1], [2,4])\n >>> ixgrid\n (array([[0],\n [1]]), array([[2, 4]]))\n\n .. 
seealso:: :func:`numpy.ix_`\n\n \"\"\"\n out = []\n nd = len(args)\n for k, new in enumerate(args):\n new = from_data.asarray(new)\n if new.ndim != 1:\n raise ValueError('Cross index must be 1 dimensional')\n if new.size == 0:\n # Explicitly type empty arrays to avoid float default\n new = new.astype(numpy.intp)\n if cupy.issubdtype(new.dtype, cupy.bool_):\n new, = new.nonzero()\n new = new.reshape((1,) * k + (new.size,) + (1,) * (nd - k - 1))\n out.append(new)\n return tuple(out)\n\n# TODO(okuta): Implement ravel_multi_index\n\n\ndef unravel_index(indices, dims, order='C'):\n \"\"\"Converts a flat index or array of flat indices into a tuple of coordinate arrays.\n\n Args:\n indices (cupy.ndarray): An integer array whose elements are indices\n into the flattened version of an array of dimensions :obj:`dims`.\n dims (tuple of ints): The shape of the array to use for unraveling indices.\n order ('C' or 'F'): Determines whether the indices should be viewed as indexing\n in row-major (C-style) or column-major (Fortran-style) order.\n\n Returns: tuple of ndarrays:\n Each array in the tuple has the same shape as the indices array.\n\n Examples\n --------\n >>> cupy.unravel_index(cupy.array([22, 41, 37]), (7, 6))\n (array([3, 6, 6]), array([4, 5, 1]))\n >>> cupy.unravel_index(cupy.array([31, 41, 13]), (7, 6), order='F')\n (array([3, 6, 6]), array([4, 5, 1]))\n\n .. seealso:: :func:`numpy.unravel_index`\n\n \"\"\"\n if order == 'C':\n dims = reversed(dims)\n elif order == 'F':\n pass\n else:\n raise TypeError('order not understood')\n\n if not cupy.can_cast(indices, cupy.int64, 'same_kind'):\n raise TypeError(\n 'Iterator operand 0 dtype could not be cast '\n 'from dtype(\\'{}\\') to dtype(\\'{}\\') '\n 'according to the rule \\'same_kind\\''.format(\n indices.dtype, cupy.int64().dtype))\n\n if (indices < 0).any():\n raise ValueError('invalid entry in index array')\n\n unraveled_coords = []\n for dim in dims:\n unraveled_coords.append(indices % dim)\n indices = indices // dim\n\n if (indices > 0).any():\n raise ValueError('invalid entry in index array')\n\n if order == 'C':\n unraveled_coords = reversed(unraveled_coords)\n return tuple(unraveled_coords)\n\n\n# TODO(okuta): Implement diag_indices\n\n\n# TODO(okuta): Implement diag_indices_from\n\n\n# TODO(okuta): Implement mask_indices\n\n\n# TODO(okuta): Implement tril_indices\n\n\n# TODO(okuta): Implement tril_indices_from\n\n\n# TODO(okuta): Implement triu_indices\n\n\n# TODO(okuta): Implement triu_indices_from\n",
"path": "cupy/indexing/generate.py"
}
] | [
{
"content": "# flake8: NOQA\n# \"flake8: NOQA\" to suppress warning \"H104 File contains nothing but comments\"\n\n# class s_(object):\n\nimport numpy\nimport six\n\nimport cupy\nfrom cupy import core\nfrom cupy.creation import from_data\nfrom cupy.manipulation import join\n\n\nclass AxisConcatenator(object):\n \"\"\"Translates slice objects to concatenation along an axis.\n\n For detailed documentation on usage, see :func:`cupy.r_`.\n This implementation is partially borrowed from NumPy's one.\n\n \"\"\"\n\n def _output_obj(self, obj, ndim, ndmin, trans1d):\n k2 = ndmin - ndim\n if trans1d < 0:\n trans1d += k2 + 1\n defaxes = list(six.moves.range(ndmin))\n k1 = trans1d\n axes = defaxes[:k1] + defaxes[k2:] + \\\n defaxes[k1:k2]\n return obj.transpose(axes)\n\n def __init__(self, axis=0, matrix=False, ndmin=1, trans1d=-1):\n self.axis = axis\n self.trans1d = trans1d\n self.matrix = matrix\n self.ndmin = ndmin\n\n def __getitem__(self, key):\n trans1d = self.trans1d\n ndmin = self.ndmin\n objs = []\n scalars = []\n arraytypes = []\n scalartypes = []\n if isinstance(key, six.string_types):\n raise NotImplementedError\n if not isinstance(key, tuple):\n key = (key,)\n\n for i, k in enumerate(key):\n scalar = False\n if isinstance(k, slice):\n raise NotImplementedError\n elif isinstance(k, six.string_types):\n if i != 0:\n raise ValueError(\n 'special directives must be the first entry.')\n raise NotImplementedError\n elif type(k) in numpy.ScalarType:\n newobj = from_data.array(k, ndmin=ndmin)\n scalars.append(i)\n scalar = True\n scalartypes.append(newobj.dtype)\n else:\n newobj = from_data.array(k, copy=False, ndmin=ndmin)\n if ndmin > 1:\n ndim = from_data.array(k, copy=False).ndim\n if trans1d != -1 and ndim < ndmin:\n newobj = self._output_obj(newobj, ndim, ndmin, trans1d)\n\n objs.append(newobj)\n if not scalar and isinstance(newobj, core.ndarray):\n arraytypes.append(newobj.dtype)\n\n final_dtype = numpy.find_common_type(arraytypes, scalartypes)\n if final_dtype is not None:\n for k in scalars:\n objs[k] = objs[k].astype(final_dtype)\n\n return join.concatenate(tuple(objs), axis=self.axis)\n\n def __len__(self):\n return 0\n\n\nclass CClass(AxisConcatenator):\n\n def __init__(self):\n super(CClass, self).__init__(-1, ndmin=2, trans1d=0)\n\n\nc_ = CClass()\n\"\"\"Translates slice objects to concatenation along the second axis.\n\nThis is a CuPy object that corresponds to :func:`cupy.r_`, which is\nuseful because of its common occurrence. In particular, arrays will be\nstacked along their last axis after being upgraded to at least 2-D with\n1's post-pended to the shape (column vectors made out of 1-D arrays).\n\nFor detailed documentation, see :func:`r_`.\n\nThis implementation is partially borrowed from NumPy's one.\n\nArgs:\n Not a function, so takes no parameters\n\nReturns:\n cupy.ndarray: Joined array.\n\n.. seealso:: :func:`numpy.c_`\n\nExamples\n--------\n>>> a = cupy.array([[1, 2, 3]], dtype=np.int32)\n>>> b = cupy.array([[4, 5, 6]], dtype=np.int32)\n>>> cupy.c_[a, 0, 0, b]\narray([[1, 2, 3, 0, 0, 4, 5, 6]], dtype=int32)\n\n\"\"\"\n\n\nclass RClass(AxisConcatenator):\n\n def __init__(self):\n super(RClass, self).__init__()\n\n\nr_ = RClass()\n\"\"\"Translates slice objects to concatenation along the first axis.\n\nThis is a simple way to build up arrays quickly.\nIf the index expression contains comma separated arrays, then stack\nthem along their first axis.\n\nThis object can build up from normal CuPy arrays.\nTherefore, the other objects (e.g. 
writing strings like '2,3,4',\nor using imaginary numbers like [1,2,3j],\nor using string integers like '-1') are not implemented yet\ncompared with NumPy.\n\nThis implementation is partially borrowed from NumPy's one.\n\nArgs:\n Not a function, so takes no parameters\n\nReturns:\n cupy.ndarray: Joined array.\n\n.. seealso:: :func:`numpy.r_`\n\nExamples\n--------\n>>> a = cupy.array([1, 2, 3], dtype=np.int32)\n>>> b = cupy.array([4, 5, 6], dtype=np.int32)\n>>> cupy.r_[a, 0, 0, b]\narray([1, 2, 3, 0, 0, 4, 5, 6], dtype=int32)\n\n\"\"\"\n\n\ndef indices(dimensions, dtype=int):\n \"\"\"Returns an array representing the indices of a grid.\n\n Computes an array where the subarrays contain index values 0,1,...\n varying only along the corresponding axis.\n\n Args:\n dimensions: The shape of the grid.\n dtype: Data type specifier. It is int by default.\n\n Returns:\n ndarray:\n The array of grid indices,\n ``grid.shape = (len(dimensions),) + tuple(dimensions)``.\n\n Examples\n --------\n >>> grid = cupy.indices((2, 3))\n >>> grid.shape\n (2, 2, 3)\n >>> grid[0] # row indices\n array([[0, 0, 0],\n [1, 1, 1]])\n >>> grid[1] # column indices\n array([[0, 1, 2],\n [0, 1, 2]])\n\n .. seealso:: :func:`numpy.indices`\n\n \"\"\"\n dimensions = tuple(dimensions)\n N = len(dimensions)\n shape = (1,) * N\n res = cupy.empty((N,) + dimensions, dtype=dtype)\n for i, dim in enumerate(dimensions):\n res[i] = cupy.arange(dim, dtype=dtype).reshape(\n shape[:i] + (dim,) + shape[i + 1:]\n )\n return res\n\n\ndef ix_(*args):\n \"\"\"Construct an open mesh from multiple sequences.\n\n This function takes N 1-D sequences and returns N outputs with N\n dimensions each, such that the shape is 1 in all but one dimension\n and the dimension with the non-unit shape value cycles through all\n N dimensions.\n\n Using `ix_` one can quickly construct index arrays that will index\n the cross product. ``a[cupy.ix_([1,3],[2,5])]`` returns the array\n ``[[a[1,2] a[1,5]], [a[3,2] a[3,5]]]``.\n\n Args:\n *args: 1-D sequences\n\n Returns:\n tuple of ndarrays:\n N arrays with N dimensions each, with N the number of input sequences.\n Together these arrays form an open mesh.\n\n Examples\n --------\n >>> a = cupy.arange(10).reshape(2, 5)\n >>> a\n array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\n >>> ixgrid = cupy.ix_([0,1], [2,4])\n >>> ixgrid\n (array([[0],\n [1]]), array([[2, 4]]))\n\n .. 
seealso:: :func:`numpy.ix_`\n\n \"\"\"\n out = []\n nd = len(args)\n for k, new in enumerate(args):\n new = from_data.asarray(new)\n if new.ndim != 1:\n raise ValueError('Cross index must be 1 dimensional')\n if new.size == 0:\n # Explicitly type empty arrays to avoid float default\n new = new.astype(numpy.intp)\n if cupy.issubdtype(new.dtype, cupy.bool_):\n new, = new.nonzero()\n new = new.reshape((1,) * k + (new.size,) + (1,) * (nd - k - 1))\n out.append(new)\n return tuple(out)\n\n# TODO(okuta): Implement ravel_multi_index\n\n\ndef unravel_index(indices, dims, order='C'):\n \"\"\"Converts a flat index or array of flat indices into a tuple of coordinate arrays.\n\n Args:\n indices (cupy.ndarray): An integer array whose elements are indices\n into the flattened version of an array of dimensions :obj:`dims`.\n dims (tuple of ints): The shape of the array to use for unraveling indices.\n order ('C' or 'F'): Determines whether the indices should be viewed as indexing\n in row-major (C-style) or column-major (Fortran-style) order.\n\n Returns: tuple of ndarrays:\n Each array in the tuple has the same shape as the indices array.\n\n Examples\n --------\n >>> cupy.unravel_index(cupy.array([22, 41, 37]), (7, 6))\n (array([3, 6, 6]), array([4, 5, 1]))\n >>> cupy.unravel_index(cupy.array([31, 41, 13]), (7, 6), order='F')\n (array([3, 6, 6]), array([4, 5, 1]))\n\n .. seealso:: :func:`numpy.unravel_index`\n\n \"\"\"\n if order in ('C', None):\n dims = reversed(dims)\n elif order == 'F':\n pass\n else:\n raise TypeError('order not understood')\n\n if not cupy.can_cast(indices, cupy.int64, 'same_kind'):\n raise TypeError(\n 'Iterator operand 0 dtype could not be cast '\n 'from dtype(\\'{}\\') to dtype(\\'{}\\') '\n 'according to the rule \\'same_kind\\''.format(\n indices.dtype, cupy.int64().dtype))\n\n if (indices < 0).any():\n raise ValueError('invalid entry in index array')\n\n unraveled_coords = []\n for dim in dims:\n unraveled_coords.append(indices % dim)\n indices = indices // dim\n\n if (indices > 0).any():\n raise ValueError('invalid entry in index array')\n\n if order == 'C':\n unraveled_coords = reversed(unraveled_coords)\n return tuple(unraveled_coords)\n\n\n# TODO(okuta): Implement diag_indices\n\n\n# TODO(okuta): Implement diag_indices_from\n\n\n# TODO(okuta): Implement mask_indices\n\n\n# TODO(okuta): Implement tril_indices\n\n\n# TODO(okuta): Implement tril_indices_from\n\n\n# TODO(okuta): Implement triu_indices\n\n\n# TODO(okuta): Implement triu_indices_from\n",
"path": "cupy/indexing/generate.py"
}
] | diff --git a/cupy/core/core.pyx b/cupy/core/core.pyx
index 940f2ce7d32..a27510307ca 100644
--- a/cupy/core/core.pyx
+++ b/cupy/core/core.pyx
@@ -97,7 +97,7 @@ cdef class ndarray:
self.data = memptr
self.base = None
- if order == 'C':
+ if order in ('C', None):
self._strides = internal.get_contiguous_strides(
self._shape, self.itemsize, is_c_contiguous=True)
self._c_contiguous = True
@@ -309,6 +309,8 @@ cdef class ndarray:
if subok is not None:
raise TypeError('subok is not supported yet')
+ if order is None:
+ order = 'K'
if order not in ['C', 'F', 'A', 'K']:
raise TypeError('order not understood')
diff --git a/cupy/indexing/generate.py b/cupy/indexing/generate.py
index d04f53edbd8..bd4afe02696 100644
--- a/cupy/indexing/generate.py
+++ b/cupy/indexing/generate.py
@@ -275,7 +275,7 @@ def unravel_index(indices, dims, order='C'):
.. seealso:: :func:`numpy.unravel_index`
"""
- if order == 'C':
+ if order in ('C', None):
dims = reversed(dims)
elif order == 'F':
pass
diff --git a/tests/cupy_tests/core_tests/test_ndarray.py b/tests/cupy_tests/core_tests/test_ndarray.py
index d1fc1564e83..b0675ff83c1 100644
--- a/tests/cupy_tests/core_tests/test_ndarray.py
+++ b/tests/cupy_tests/core_tests/test_ndarray.py
@@ -63,6 +63,13 @@ def test_order(self):
self.assertTrue(a.flags.f_contiguous)
self.assertTrue(not a.flags.c_contiguous)
+ def test_order_none(self):
+ a = core.ndarray(self.shape, order=None)
+ a_cpu = numpy.ndarray(self.shape, order=None)
+ self.assertEqual(a.flags.c_contiguous, a_cpu.flags.c_contiguous)
+ self.assertEqual(a.flags.f_contiguous, a_cpu.flags.f_contiguous)
+ self.assertTupleEqual(a.strides, a_cpu.strides)
+
@testing.gpu
class TestNdarrayInitRaise(unittest.TestCase):
diff --git a/tests/cupy_tests/core_tests/test_ndarray_copy_and_view.py b/tests/cupy_tests/core_tests/test_ndarray_copy_and_view.py
index b128b4fb712..ab9a85eb906 100644
--- a/tests/cupy_tests/core_tests/test_ndarray_copy_and_view.py
+++ b/tests/cupy_tests/core_tests/test_ndarray_copy_and_view.py
@@ -70,7 +70,7 @@ def test_transposed_fill(self, xp, dtype):
b.fill(1)
return b
- @testing.for_orders('CFAK')
+ @testing.for_orders(['C', 'F', 'A', 'K', None])
@testing.for_all_dtypes(name='src_dtype', no_complex=True)
@testing.for_all_dtypes(name='dst_dtype')
@testing.numpy_cupy_array_equal()
|
databricks__koalas-747 | [DO NOT MERGE] Test
| [
{
"content": "#!/usr/bin/env python\n\n#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport sys\nfrom setuptools import setup\nfrom os import path\n\nDESCRIPTION = \"Koalas: pandas API on Apache Spark\"\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n LONG_DESCRIPTION = f.read()\n\ntry:\n exec(open('databricks/koalas/version.py').read())\nexcept IOError:\n print(\"Failed to load Koalas version file for packaging. You must be in Koalas root dir.\",\n file=sys.stderr)\n sys.exit(-1)\nVERSION = __version__ # noqa\n\nsetup(\n name='koalas',\n version=VERSION,\n packages=['databricks', 'databricks.koalas', 'databricks.koalas.missing',\n 'databricks.koalas.usage_logging'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n 'mlflow': ['mlflow>=1.0'],\n },\n python_requires='>=3.5',\n install_requires=[\n 'pandas>=0.23',\n 'pyarrow>=0.10',\n 'numpy>=1.14',\n 'matplotlib>=3.0.0',\n ],\n maintainer=\"Databricks\",\n maintainer_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n url=\"https://github.com/databricks/koalas\",\n project_urls={\n 'Bug Tracker': 'https://github.com/databricks/koalas/issues',\n 'Documentation': 'https://koalas.readthedocs.io/',\n 'Source Code': 'https://github.com/databricks/koalas'\n },\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\n\n#\n# Copyright (C) 2019 Databricks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nimport sys\nfrom setuptools import setup\nfrom os import path\n\nDESCRIPTION = \"Koalas: pandas API on Apache Spark\"\n\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, 'README.md'), encoding='utf-8') as f:\n LONG_DESCRIPTION = f.read()\n\ntry:\n exec(open('databricks/koalas/version.py').read())\nexcept IOError:\n print(\"Failed to load Koalas version file for packaging. You must be in Koalas root dir.\",\n file=sys.stderr)\n sys.exit(-1)\nVERSION = __version__ # noqa\n\nsetup(\n name='koalas',\n version=VERSION,\n packages=['databricks', 'databricks.koalas', 'databricks.koalas.missing',\n 'databricks.koalas.usage_logging'],\n extras_require={\n 'spark': ['pyspark>=2.4.0'],\n 'mlflow': ['mlflow>=1.0'],\n },\n python_requires='>=3.5',\n install_requires=[\n 'pandas>=0.23.2',\n 'pyarrow>=0.10',\n 'numpy>=1.14',\n 'matplotlib>=3.0.0',\n ],\n maintainer=\"Databricks\",\n maintainer_email=\"[email protected]\",\n license='http://www.apache.org/licenses/LICENSE-2.0',\n url=\"https://github.com/databricks/koalas\",\n project_urls={\n 'Bug Tracker': 'https://github.com/databricks/koalas/issues',\n 'Documentation': 'https://koalas.readthedocs.io/',\n 'Source Code': 'https://github.com/databricks/koalas'\n },\n description=DESCRIPTION,\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n classifiers=[\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n",
"path": "setup.py"
}
] | diff --git a/requirements-dev.txt b/requirements-dev.txt
index 4be44500ee..5e0c474b0f 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -1,5 +1,5 @@
# Dependencies in Koalas
-pandas>=0.23
+pandas>=0.23.2
pyarrow>=0.10
matplotlib>=3.0.0
numpy>=1.14
diff --git a/setup.py b/setup.py
index 6b84679c9e..9c7a5547cd 100644
--- a/setup.py
+++ b/setup.py
@@ -45,7 +45,7 @@
},
python_requires='>=3.5',
install_requires=[
- 'pandas>=0.23',
+ 'pandas>=0.23.2',
'pyarrow>=0.10',
'numpy>=1.14',
'matplotlib>=3.0.0',
|
ansible-collections__community.aws-1712 | Broken example in iam_access_key
### Summary
The "Delete the access key" example in the `iam_access_key` module is broken. It's currently:
```yaml
- name: Delete the access_key
  community.aws.iam_access_key:
    name: example_user
    access_key_id: AKIA1EXAMPLE1EXAMPLE
    state: absent
```
There are two issues:
- the `name` attribute doesn't exist - it should be `user_name` (or the `username` alias).
- the `access_key_id` attribute should just be `id`. The `access_key_id` attribute specifies credentials for the module to use to access the API, not the ID of the access key we're trying to delete (which is specified by `id`).
Corrected example:
```yaml
- name: Delete the access_key
  community.aws.iam_access_key:
    user_name: example_user
    id: AKIA1EXAMPLE1EXAMPLE
    state: absent
```
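For completeness, here are two related usage sketches built only from options the module itself documents (`user_name`, `id`, `state`, `active`/`enabled`, `rotate_keys` — see the module source further down); the values are placeholders and the tasks are illustrative, not taken from the module's own examples:
```yaml
# Illustrative sketches only - values are placeholders.
- name: Disable an access key without deleting it
  community.aws.iam_access_key:
    user_name: example_user
    id: AKIA1EXAMPLE1EXAMPLE
    active: false
    state: present

- name: Create a new key, rotating out the oldest one when two already exist
  community.aws.iam_access_key:
    user_name: example_user
    rotate_keys: true
    state: present
```
Both sketches stick to the documented `user_name`/`id` parameters, so they avoid the `name`/`access_key_id` mix-up described above.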
### Issue Type
Documentation Report
### Component Name
iam_access_key
### Ansible Version
```console (paste below)
ansible [core 2.14.2]
config file = None
configured module search path = ['/Users/grt006/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/lib/python3.10/site-packages/ansible
ansible collection location = /Users/grt006/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/ansible
python version = 3.10.9 (main, Dec 15 2022, 17:11:09) [Clang 14.0.0 (clang-1400.0.29.202)] (/Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Collection Versions
```console (paste below)
Collection Version
----------------------------- -------
amazon.aws 5.2.0
ansible.netcommon 4.1.0
ansible.posix 1.5.1
ansible.utils 2.9.0
ansible.windows 1.13.0
arista.eos 6.0.0
awx.awx 21.11.0
azure.azcollection 1.14.0
check_point.mgmt 4.0.0
chocolatey.chocolatey 1.4.0
cisco.aci 2.3.0
cisco.asa 4.0.0
cisco.dnac 6.6.3
cisco.intersight 1.0.23
cisco.ios 4.3.1
cisco.iosxr 4.1.0
cisco.ise 2.5.12
cisco.meraki 2.15.0
cisco.mso 2.2.1
cisco.nso 1.0.3
cisco.nxos 4.0.1
cisco.ucs 1.8.0
cloud.common 2.1.2
cloudscale_ch.cloud 2.2.4
community.aws 5.2.0
community.azure 2.0.0
community.ciscosmb 1.0.5
community.crypto 2.10.0
community.digitalocean 1.23.0
community.dns 2.5.0
community.docker 3.4.0
community.fortios 1.0.0
community.general 6.3.0
community.google 1.0.0
community.grafana 1.5.3
community.hashi_vault 4.1.0
community.hrobot 1.7.0
community.libvirt 1.2.0
community.mongodb 1.4.2
community.mysql 3.5.1
community.network 5.0.0
community.okd 2.2.0
community.postgresql 2.3.2
community.proxysql 1.5.1
community.rabbitmq 1.2.3
community.routeros 2.7.0
community.sap 1.0.0
community.sap_libs 1.4.0
community.skydive 1.0.0
community.sops 1.6.0
community.vmware 3.3.0
community.windows 1.12.0
community.zabbix 1.9.1
containers.podman 1.10.1
cyberark.conjur 1.2.0
cyberark.pas 1.0.17
dellemc.enterprise_sonic 2.0.0
dellemc.openmanage 6.3.0
dellemc.os10 1.1.1
dellemc.os6 1.0.7
dellemc.os9 1.0.4
dellemc.powerflex 1.5.0
dellemc.unity 1.5.0
f5networks.f5_modules 1.22.0
fortinet.fortimanager 2.1.7
fortinet.fortios 2.2.2
frr.frr 2.0.0
gluster.gluster 1.0.2
google.cloud 1.1.2
grafana.grafana 1.1.0
hetzner.hcloud 1.9.1
hpe.nimble 1.1.4
ibm.qradar 2.1.0
ibm.spectrum_virtualize 1.11.0
infinidat.infinibox 1.3.12
infoblox.nios_modules 1.4.1
inspur.ispim 1.2.0
inspur.sm 2.3.0
junipernetworks.junos 4.1.0
kubernetes.core 2.3.2
lowlydba.sqlserver 1.3.1
mellanox.onyx 1.0.0
netapp.aws 21.7.0
netapp.azure 21.10.0
netapp.cloudmanager 21.22.0
netapp.elementsw 21.7.0
netapp.ontap 22.2.0
netapp.storagegrid 21.11.1
netapp.um_info 21.8.0
netapp_eseries.santricity 1.4.0
netbox.netbox 3.10.0
ngine_io.cloudstack 2.3.0
ngine_io.exoscale 1.0.0
ngine_io.vultr 1.1.3
openstack.cloud 1.10.0
openvswitch.openvswitch 2.1.0
ovirt.ovirt 2.4.1
purestorage.flasharray 1.16.2
purestorage.flashblade 1.10.0
purestorage.fusion 1.3.0
sensu.sensu_go 1.13.2
splunk.es 2.1.0
t_systems_mms.icinga_director 1.32.0
theforeman.foreman 3.8.0
vmware.vmware_rest 2.2.0
vultr.cloud 1.7.0
vyos.vyos 4.0.0
wti.remote 1.0.4
```
### Configuration
```console (paste below)
CONFIG_FILE() = None
```
### OS / Environment
Linux
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
| [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n name: example_user\n access_key_id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.core 
import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n access_keys = results.get('access_key_metadata', [])\n if 
not access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n user_name: example_user\n id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import 
is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n access_keys = results.get('access_key_metadata', [])\n if not 
access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | diff --git a/changelogs/fragments/iam_access_key_docs_fix.yml b/changelogs/fragments/iam_access_key_docs_fix.yml
new file mode 100644
index 00000000000..f47a15eb91f
--- /dev/null
+++ b/changelogs/fragments/iam_access_key_docs_fix.yml
@@ -0,0 +1,2 @@
+trivial:
+ - iam_access_key - Use correct parameter names in the docs example section (https://github.com/ansible-collections/community.aws/pull/1711).
\ No newline at end of file
diff --git a/plugins/modules/iam_access_key.py b/plugins/modules/iam_access_key.py
index 1d5701e9d74..6e3f47bfd4b 100644
--- a/plugins/modules/iam_access_key.py
+++ b/plugins/modules/iam_access_key.py
@@ -68,8 +68,8 @@
- name: Delete the access_key
community.aws.iam_access_key:
- name: example_user
- access_key_id: AKIA1EXAMPLE1EXAMPLE
+ user_name: example_user
+ id: AKIA1EXAMPLE1EXAMPLE
state: absent
'''
|
ansible-collections__community.aws-1713 | Broken example in iam_access_key
### Summary
The "Delete the access key" example in the `iam_access_key` module is broken. It's currently:
```yaml
- name: Delete the access_key
  community.aws.iam_access_key:
    name: example_user
    access_key_id: AKIA1EXAMPLE1EXAMPLE
    state: absent
```
There are two issues:
- the `name` attribute doesn't exist - it should be `user_name` (or the `username` alias).
- the `access_key_id` attribute should just be `id`. The `access_key_id` attribute specifies credentials for the module to use to access the API, not the ID of the access key we're trying to delete (which is specified by `id`).
Corrected example:
```yaml
- name: Delete the access_key
  community.aws.iam_access_key:
    user_name: example_user
    id: AKIA1EXAMPLE1EXAMPLE
    state: absent
```
### Issue Type
Documentation Report
### Component Name
iam_access_key
### Ansible Version
```console (paste below)
ansible [core 2.14.2]
config file = None
configured module search path = ['/Users/grt006/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/lib/python3.10/site-packages/ansible
ansible collection location = /Users/grt006/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/ansible
python version = 3.10.9 (main, Dec 15 2022, 17:11:09) [Clang 14.0.0 (clang-1400.0.29.202)] (/Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Collection Versions
```console (paste below)
Collection Version
----------------------------- -------
amazon.aws 5.2.0
ansible.netcommon 4.1.0
ansible.posix 1.5.1
ansible.utils 2.9.0
ansible.windows 1.13.0
arista.eos 6.0.0
awx.awx 21.11.0
azure.azcollection 1.14.0
check_point.mgmt 4.0.0
chocolatey.chocolatey 1.4.0
cisco.aci 2.3.0
cisco.asa 4.0.0
cisco.dnac 6.6.3
cisco.intersight 1.0.23
cisco.ios 4.3.1
cisco.iosxr 4.1.0
cisco.ise 2.5.12
cisco.meraki 2.15.0
cisco.mso 2.2.1
cisco.nso 1.0.3
cisco.nxos 4.0.1
cisco.ucs 1.8.0
cloud.common 2.1.2
cloudscale_ch.cloud 2.2.4
community.aws 5.2.0
community.azure 2.0.0
community.ciscosmb 1.0.5
community.crypto 2.10.0
community.digitalocean 1.23.0
community.dns 2.5.0
community.docker 3.4.0
community.fortios 1.0.0
community.general 6.3.0
community.google 1.0.0
community.grafana 1.5.3
community.hashi_vault 4.1.0
community.hrobot 1.7.0
community.libvirt 1.2.0
community.mongodb 1.4.2
community.mysql 3.5.1
community.network 5.0.0
community.okd 2.2.0
community.postgresql 2.3.2
community.proxysql 1.5.1
community.rabbitmq 1.2.3
community.routeros 2.7.0
community.sap 1.0.0
community.sap_libs 1.4.0
community.skydive 1.0.0
community.sops 1.6.0
community.vmware 3.3.0
community.windows 1.12.0
community.zabbix 1.9.1
containers.podman 1.10.1
cyberark.conjur 1.2.0
cyberark.pas 1.0.17
dellemc.enterprise_sonic 2.0.0
dellemc.openmanage 6.3.0
dellemc.os10 1.1.1
dellemc.os6 1.0.7
dellemc.os9 1.0.4
dellemc.powerflex 1.5.0
dellemc.unity 1.5.0
f5networks.f5_modules 1.22.0
fortinet.fortimanager 2.1.7
fortinet.fortios 2.2.2
frr.frr 2.0.0
gluster.gluster 1.0.2
google.cloud 1.1.2
grafana.grafana 1.1.0
hetzner.hcloud 1.9.1
hpe.nimble 1.1.4
ibm.qradar 2.1.0
ibm.spectrum_virtualize 1.11.0
infinidat.infinibox 1.3.12
infoblox.nios_modules 1.4.1
inspur.ispim 1.2.0
inspur.sm 2.3.0
junipernetworks.junos 4.1.0
kubernetes.core 2.3.2
lowlydba.sqlserver 1.3.1
mellanox.onyx 1.0.0
netapp.aws 21.7.0
netapp.azure 21.10.0
netapp.cloudmanager 21.22.0
netapp.elementsw 21.7.0
netapp.ontap 22.2.0
netapp.storagegrid 21.11.1
netapp.um_info 21.8.0
netapp_eseries.santricity 1.4.0
netbox.netbox 3.10.0
ngine_io.cloudstack 2.3.0
ngine_io.exoscale 1.0.0
ngine_io.vultr 1.1.3
openstack.cloud 1.10.0
openvswitch.openvswitch 2.1.0
ovirt.ovirt 2.4.1
purestorage.flasharray 1.16.2
purestorage.flashblade 1.10.0
purestorage.fusion 1.3.0
sensu.sensu_go 1.13.2
splunk.es 2.1.0
t_systems_mms.icinga_director 1.32.0
theforeman.foreman 3.8.0
vmware.vmware_rest 2.2.0
vultr.cloud 1.7.0
vyos.vyos 4.0.0
wti.remote 1.0.4
```
### Configuration
```console (paste below)
CONFIG_FILE() = None
```
### OS / Environment
Linux
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
| [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n- amazon.aws.boto3\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n name: example_user\n access_key_id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n 
access_keys = results.get('access_key_metadata', [])\n if not access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n- amazon.aws.boto3\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n user_name: example_user\n id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n 
access_keys = results.get('access_key_metadata', [])\n if not access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | diff --git a/changelogs/fragments/iam_access_key_docs_fix.yml b/changelogs/fragments/iam_access_key_docs_fix.yml
new file mode 100644
index 00000000000..f47a15eb91f
--- /dev/null
+++ b/changelogs/fragments/iam_access_key_docs_fix.yml
@@ -0,0 +1,2 @@
+trivial:
+ - iam_access_key - Use correct parameter names in the docs example section (https://github.com/ansible-collections/community.aws/pull/1711).
\ No newline at end of file
diff --git a/plugins/modules/iam_access_key.py b/plugins/modules/iam_access_key.py
index 3207741ab94..ad61b5b2ad3 100644
--- a/plugins/modules/iam_access_key.py
+++ b/plugins/modules/iam_access_key.py
@@ -69,8 +69,8 @@
- name: Delete the access_key
community.aws.iam_access_key:
- name: example_user
- access_key_id: AKIA1EXAMPLE1EXAMPLE
+ user_name: example_user
+ id: AKIA1EXAMPLE1EXAMPLE
state: absent
'''
|
ansible-collections__community.aws-1711 | Broken example in iam_access_key
### Summary
The "Delete the access key" example in the `iam_access_key` module is broken. It's currently:
```yaml
- name: Delete the access_key
community.aws.iam_access_key:
name: example_user
access_key_id: AKIA1EXAMPLE1EXAMPLE
state: absent
```
There are two issues:
- the `name` attribute doesn't exist - it should be `user_name` (or the `username` alias).
- the `access_key_id` attribute should just be `id`. The `access_key_id` attribute specifies credentials for the module to use to access the API, not the ID of the access key we're trying to delete (which is specified by `id`).
Corrected example:
```yaml
- name: Delete the access_key
community.aws.iam_access_key:
user_name: example_user
id: AKIA1EXAMPLE1EXAMPLE
state: absent
```
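For reference, the module simply forwards these two values to the IAM `DeleteAccessKey` API; a minimal boto3 sketch of that call (illustrative values only, with the module's retry and error handling omitted) looks like this:
```python
# Minimal sketch of the underlying IAM call for state=absent.
# Values are illustrative; retries and error handling are omitted.
import boto3

iam = boto3.client("iam")
iam.delete_access_key(
    UserName="example_user",             # the module's user_name
    AccessKeyId="AKIA1EXAMPLE1EXAMPLE",   # the module's id
)
```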
### Issue Type
Documentation Report
### Component Name
iam_access_key
### Ansible Version
```console (paste below)
ansible [core 2.14.2]
config file = None
configured module search path = ['/Users/grt006/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/lib/python3.10/site-packages/ansible
ansible collection location = /Users/grt006/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/ansible
python version = 3.10.9 (main, Dec 15 2022, 17:11:09) [Clang 14.0.0 (clang-1400.0.29.202)] (/Users/grt006/ws/argocd/.scratch/external_secrets/iam/ansible/.venv/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Collection Versions
```console (paste below)
Collection Version
----------------------------- -------
amazon.aws 5.2.0
ansible.netcommon 4.1.0
ansible.posix 1.5.1
ansible.utils 2.9.0
ansible.windows 1.13.0
arista.eos 6.0.0
awx.awx 21.11.0
azure.azcollection 1.14.0
check_point.mgmt 4.0.0
chocolatey.chocolatey 1.4.0
cisco.aci 2.3.0
cisco.asa 4.0.0
cisco.dnac 6.6.3
cisco.intersight 1.0.23
cisco.ios 4.3.1
cisco.iosxr 4.1.0
cisco.ise 2.5.12
cisco.meraki 2.15.0
cisco.mso 2.2.1
cisco.nso 1.0.3
cisco.nxos 4.0.1
cisco.ucs 1.8.0
cloud.common 2.1.2
cloudscale_ch.cloud 2.2.4
community.aws 5.2.0
community.azure 2.0.0
community.ciscosmb 1.0.5
community.crypto 2.10.0
community.digitalocean 1.23.0
community.dns 2.5.0
community.docker 3.4.0
community.fortios 1.0.0
community.general 6.3.0
community.google 1.0.0
community.grafana 1.5.3
community.hashi_vault 4.1.0
community.hrobot 1.7.0
community.libvirt 1.2.0
community.mongodb 1.4.2
community.mysql 3.5.1
community.network 5.0.0
community.okd 2.2.0
community.postgresql 2.3.2
community.proxysql 1.5.1
community.rabbitmq 1.2.3
community.routeros 2.7.0
community.sap 1.0.0
community.sap_libs 1.4.0
community.skydive 1.0.0
community.sops 1.6.0
community.vmware 3.3.0
community.windows 1.12.0
community.zabbix 1.9.1
containers.podman 1.10.1
cyberark.conjur 1.2.0
cyberark.pas 1.0.17
dellemc.enterprise_sonic 2.0.0
dellemc.openmanage 6.3.0
dellemc.os10 1.1.1
dellemc.os6 1.0.7
dellemc.os9 1.0.4
dellemc.powerflex 1.5.0
dellemc.unity 1.5.0
f5networks.f5_modules 1.22.0
fortinet.fortimanager 2.1.7
fortinet.fortios 2.2.2
frr.frr 2.0.0
gluster.gluster 1.0.2
google.cloud 1.1.2
grafana.grafana 1.1.0
hetzner.hcloud 1.9.1
hpe.nimble 1.1.4
ibm.qradar 2.1.0
ibm.spectrum_virtualize 1.11.0
infinidat.infinibox 1.3.12
infoblox.nios_modules 1.4.1
inspur.ispim 1.2.0
inspur.sm 2.3.0
junipernetworks.junos 4.1.0
kubernetes.core 2.3.2
lowlydba.sqlserver 1.3.1
mellanox.onyx 1.0.0
netapp.aws 21.7.0
netapp.azure 21.10.0
netapp.cloudmanager 21.22.0
netapp.elementsw 21.7.0
netapp.ontap 22.2.0
netapp.storagegrid 21.11.1
netapp.um_info 21.8.0
netapp_eseries.santricity 1.4.0
netbox.netbox 3.10.0
ngine_io.cloudstack 2.3.0
ngine_io.exoscale 1.0.0
ngine_io.vultr 1.1.3
openstack.cloud 1.10.0
openvswitch.openvswitch 2.1.0
ovirt.ovirt 2.4.1
purestorage.flasharray 1.16.2
purestorage.flashblade 1.10.0
purestorage.fusion 1.3.0
sensu.sensu_go 1.13.2
splunk.es 2.1.0
t_systems_mms.icinga_director 1.32.0
theforeman.foreman 3.8.0
vmware.vmware_rest 2.2.0
vultr.cloud 1.7.0
vyos.vyos 4.0.0
wti.remote 1.0.4
```
### Configuration
```console (paste below)
CONFIG_FILE() = None
```
### OS / Environment
Linux
### Additional Information
_No response_
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
| [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n- amazon.aws.boto3\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n name: example_user\n access_key_id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.community.aws.plugins.module_utils.modules import AnsibleCommunityAWSModule as AnsibleAWSModule\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n 
access_keys = results.get('access_key_metadata', [])\n if not access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | [
{
"content": "#!/usr/bin/python\n# Copyright (c) 2021 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: iam_access_key\nversion_added: 2.1.0\nshort_description: Manage AWS IAM User access keys\ndescription:\n - Manage AWS IAM user access keys.\nauthor: Mark Chappell (@tremble)\noptions:\n user_name:\n description:\n - The name of the IAM User to which the key belongs.\n required: true\n type: str\n aliases: ['username']\n id:\n description:\n - The ID of the access key.\n - Required when I(state=absent).\n - Mutually exclusive with I(rotate_keys).\n required: false\n type: str\n state:\n description:\n - Create or remove the access key.\n - When I(state=present) and I(id) is not defined a new key will be created.\n required: false\n type: str\n default: 'present'\n choices: [ 'present', 'absent' ]\n active:\n description:\n - Whether the key should be enabled or disabled.\n - Defaults to C(true) when creating a new key.\n required: false\n type: bool\n aliases: ['enabled']\n rotate_keys:\n description:\n - When there are already 2 access keys attached to the IAM user the oldest\n key will be removed and a new key created.\n - Ignored if I(state=absent)\n - Mutually exclusive with I(id).\n required: false\n type: bool\n default: false\n\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n- amazon.aws.boto3\n'''\n\nEXAMPLES = r'''\n# Note: These examples do not set authentication details, see the AWS Guide for details.\n\n- name: Create a new access key\n community.aws.iam_access_key:\n user_name: example_user\n state: present\n\n- name: Delete the access_key\n community.aws.iam_access_key:\n user_name: example_user\n id: AKIA1EXAMPLE1EXAMPLE\n state: absent\n'''\n\nRETURN = r'''\naccess_key:\n description: A dictionary containing all the access key information.\n returned: When the key exists.\n type: complex\n contains:\n access_key_id:\n description: The ID for the access key.\n returned: success\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n create_date:\n description: The date and time, in ISO 8601 date-time format, when the access key was created.\n returned: success\n type: str\n sample: \"2021-10-09T13:25:42+00:00\"\n user_name:\n description: The name of the IAM user to which the key is attached.\n returned: success\n type: str\n sample: example_user\n status:\n description:\n - The status of the key.\n - C(Active) means it can be used.\n - C(Inactive) means it can not be used.\n returned: success\n type: str\n sample: Inactive\nsecret_access_key:\n description:\n - The secret access key.\n - A secret access key is the equivalent of a password which can not be changed and as such should be considered sensitive data.\n - Secret access keys can only be accessed at creation time.\n returned: When a new key is created.\n type: str\n sample: example/Example+EXAMPLE+example/Example\ndeleted_access_key_id:\n description:\n - The access key deleted during rotation.\n returned: When a key was deleted during the rotation of access keys\n type: str\n sample: AKIA1EXAMPLE1EXAMPLE\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # caught by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.community.aws.plugins.module_utils.modules import AnsibleCommunityAWSModule as AnsibleAWSModule\nfrom 
ansible_collections.amazon.aws.plugins.module_utils.core import is_boto3_error_code\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import normalize_boto3_result\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import scrub_none_parameters\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\ndef delete_access_key(access_keys, user, access_key_id):\n if not access_key_id:\n return False\n\n if access_key_id not in access_keys:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.delete_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n )\n except is_boto3_error_code('NoSuchEntityException'):\n # Generally occurs when race conditions have happened and someone\n # deleted the key while we were checking to see if it existed.\n return False\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e: # pylint: disable=duplicate-except\n module.fail_json_aws(\n e, msg='Failed to delete access key \"{0}\" for user \"{1}\"'.format(access_key_id, user)\n )\n\n return True\n\n\ndef update_access_key(access_keys, user, access_key_id, enabled):\n if access_key_id not in access_keys:\n module.fail_json(\n msg='Access key \"{0}\" not found attached to User \"{1}\"'.format(access_key_id, user),\n )\n\n changes = dict()\n access_key = access_keys.get(access_key_id)\n\n if enabled is not None:\n desired_status = 'Active' if enabled else 'Inactive'\n if access_key.get('status') != desired_status:\n changes['Status'] = desired_status\n\n if not changes:\n return False\n\n if module.check_mode:\n return True\n\n try:\n client.update_access_key(\n aws_retry=True,\n UserName=user,\n AccessKeyId=access_key_id,\n **changes\n )\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, changes=changes,\n msg='Failed to update access key \"{0}\" for user \"{1}\"'.format(access_key_id, user),\n )\n return True\n\n\ndef create_access_key(access_keys, user, rotate_keys, enabled):\n changed = False\n oldest_key = False\n\n if len(access_keys) > 1 and rotate_keys:\n sorted_keys = sorted(list(access_keys), key=lambda k: access_keys[k].get('create_date', None))\n oldest_key = sorted_keys[0]\n changed |= delete_access_key(access_keys, user, oldest_key)\n\n if module.check_mode:\n if changed:\n return dict(deleted_access_key=oldest_key)\n return True\n\n try:\n results = client.create_access_key(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to create access key for user \"{0}\"'.format(user))\n results = camel_dict_to_snake_dict(results)\n access_key = results.get('access_key')\n access_key = normalize_boto3_result(access_key)\n\n # Update settings which can't be managed on creation\n if enabled is False:\n access_key_id = access_key['access_key_id']\n access_keys = {access_key_id: access_key}\n update_access_key(access_keys, user, access_key_id, enabled)\n access_key['status'] = 'Inactive'\n\n if oldest_key:\n access_key['deleted_access_key'] = oldest_key\n\n return access_key\n\n\ndef get_access_keys(user):\n try:\n results = client.list_access_keys(aws_retry=True, UserName=user)\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(\n e, msg='Failed to get access keys for user \"{0}\"'.format(user)\n )\n if not results:\n return None\n\n results = camel_dict_to_snake_dict(results)\n 
access_keys = results.get('access_key_metadata', [])\n if not access_keys:\n return []\n\n access_keys = normalize_boto3_result(access_keys)\n access_keys = {k['access_key_id']: k for k in access_keys}\n return access_keys\n\n\ndef main():\n\n global module\n global client\n\n argument_spec = dict(\n user_name=dict(required=True, type='str', aliases=['username']),\n id=dict(required=False, type='str'),\n state=dict(required=False, choices=['present', 'absent'], default='present'),\n active=dict(required=False, type='bool', aliases=['enabled']),\n rotate_keys=dict(required=False, type='bool', default=False),\n )\n\n required_if = [\n ['state', 'absent', ('id')],\n ]\n mutually_exclusive = [\n ['rotate_keys', 'id'],\n ]\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True\n )\n\n client = module.client('iam', retry_decorator=AWSRetry.jittered_backoff())\n\n changed = False\n state = module.params.get('state')\n user = module.params.get('user_name')\n access_key_id = module.params.get('id')\n rotate_keys = module.params.get('rotate_keys')\n enabled = module.params.get('active')\n\n access_keys = get_access_keys(user)\n results = dict()\n\n if state == 'absent':\n changed |= delete_access_key(access_keys, user, access_key_id)\n else:\n # If we have an ID then we should try to update it\n if access_key_id:\n changed |= update_access_key(access_keys, user, access_key_id, enabled)\n access_keys = get_access_keys(user)\n results['access_key'] = access_keys.get(access_key_id, None)\n # Otherwise we try to create a new one\n else:\n secret_key = create_access_key(access_keys, user, rotate_keys, enabled)\n if isinstance(secret_key, bool):\n changed |= secret_key\n else:\n changed = True\n results['access_key_id'] = secret_key.get('access_key_id', None)\n results['secret_access_key'] = secret_key.pop('secret_access_key', None)\n results['deleted_access_key_id'] = secret_key.pop('deleted_access_key', None)\n if secret_key:\n results['access_key'] = secret_key\n results = scrub_none_parameters(results)\n\n module.exit_json(changed=changed, **results)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/iam_access_key.py"
}
] | diff --git a/changelogs/fragments/iam_access_key_docs_fix.yml b/changelogs/fragments/iam_access_key_docs_fix.yml
new file mode 100644
index 00000000000..f47a15eb91f
--- /dev/null
+++ b/changelogs/fragments/iam_access_key_docs_fix.yml
@@ -0,0 +1,2 @@
+trivial:
+ - iam_access_key - Use correct parameter names in the docs example section (https://github.com/ansible-collections/community.aws/pull/1711).
\ No newline at end of file
diff --git a/plugins/modules/iam_access_key.py b/plugins/modules/iam_access_key.py
index ab3e9110604..32220a216e3 100644
--- a/plugins/modules/iam_access_key.py
+++ b/plugins/modules/iam_access_key.py
@@ -69,8 +69,8 @@
- name: Delete the access_key
community.aws.iam_access_key:
- name: example_user
- access_key_id: AKIA1EXAMPLE1EXAMPLE
+ user_name: example_user
+ id: AKIA1EXAMPLE1EXAMPLE
state: absent
'''
|
pytorch__vision-1501 | Deprecate PILLOW_VERSION
torchvision now uses PILLOW_VERSION
https://github.com/pytorch/vision/blob/1e857d93c8de081e61695dd43e6f06e3e7c2b0a2/torchvision/transforms/functional.py#L5
However, this constant has been deprecated since Pillow 5.2 and is soon to be removed completely: https://github.com/python-pillow/Pillow/blob/master/CHANGES.rst#700-unreleased
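Pillow has exposed the same information as `PIL.__version__` since 5.2, so a deprecation-safe lookup is straightforward. A minimal sketch (the helper below is illustrative, not something torchvision defines) of reading and branching on the version without `PILLOW_VERSION`:
```python
# Minimal sketch of a PILLOW_VERSION-free version check.
# Assumes Pillow >= 5.2, where PIL.__version__ is available.
import PIL


def pillow_at_least(major):
    """Return True if the installed Pillow major version is >= major."""
    return int(PIL.__version__.split(".")[0]) >= major


if __name__ == "__main__":
    # e.g. gate a keyword argument that only newer Pillow accepts
    kwargs = {"fillcolor": 0} if pillow_at_least(5) else {}
    print(PIL.__version__, kwargs)
```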
| [
{
"content": "from __future__ import division\nimport torch\nimport sys\nimport math\nfrom PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION\ntry:\n import accimage\nexcept ImportError:\n accimage = None\nimport numpy as np\nimport numbers\nimport collections\nimport warnings\n\nif sys.version_info < (3, 3):\n Sequence = collections.Sequence\n Iterable = collections.Iterable\nelse:\n Sequence = collections.abc.Sequence\n Iterable = collections.abc.Iterable\n\n\ndef _is_pil_image(img):\n if accimage is not None:\n return isinstance(img, (Image.Image, accimage.Image))\n else:\n return isinstance(img, Image.Image)\n\n\ndef _is_tensor_image(img):\n return torch.is_tensor(img) and img.ndimension() == 3\n\n\ndef _is_numpy(img):\n return isinstance(img, np.ndarray)\n\n\ndef _is_numpy_image(img):\n return img.ndim in {2, 3}\n\n\ndef to_tensor(pic):\n \"\"\"Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.\n\n See ``ToTensor`` for more details.\n\n Args:\n pic (PIL Image or numpy.ndarray): Image to be converted to tensor.\n\n Returns:\n Tensor: Converted image.\n \"\"\"\n if not(_is_pil_image(pic) or _is_numpy(pic)):\n raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))\n\n if _is_numpy(pic) and not _is_numpy_image(pic):\n raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))\n\n if isinstance(pic, np.ndarray):\n # handle numpy array\n if pic.ndim == 2:\n pic = pic[:, :, None]\n\n img = torch.from_numpy(pic.transpose((2, 0, 1)))\n # backward compatibility\n if isinstance(img, torch.ByteTensor):\n return img.float().div(255)\n else:\n return img\n\n if accimage is not None and isinstance(pic, accimage.Image):\n nppic = np.zeros([pic.channels, pic.height, pic.width], dtype=np.float32)\n pic.copyto(nppic)\n return torch.from_numpy(nppic)\n\n # handle PIL Image\n if pic.mode == 'I':\n img = torch.from_numpy(np.array(pic, np.int32, copy=False))\n elif pic.mode == 'I;16':\n img = torch.from_numpy(np.array(pic, np.int16, copy=False))\n elif pic.mode == 'F':\n img = torch.from_numpy(np.array(pic, np.float32, copy=False))\n elif pic.mode == '1':\n img = 255 * torch.from_numpy(np.array(pic, np.uint8, copy=False))\n else:\n img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes()))\n # PIL image mode: L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK\n if pic.mode == 'YCbCr':\n nchannel = 3\n elif pic.mode == 'I;16':\n nchannel = 1\n else:\n nchannel = len(pic.mode)\n img = img.view(pic.size[1], pic.size[0], nchannel)\n # put it from HWC to CHW format\n # yikes, this transpose takes 80% of the loading time/CPU\n img = img.transpose(0, 1).transpose(0, 2).contiguous()\n if isinstance(img, torch.ByteTensor):\n return img.float().div(255)\n else:\n return img\n\n\ndef to_pil_image(pic, mode=None):\n \"\"\"Convert a tensor or an ndarray to PIL Image.\n\n See :class:`~torchvision.transforms.ToPILImage` for more details.\n\n Args:\n pic (Tensor or numpy.ndarray): Image to be converted to PIL Image.\n mode (`PIL.Image mode`_): color space and pixel depth of input data (optional).\n\n .. _PIL.Image mode: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#concept-modes\n\n Returns:\n PIL Image: Image converted to PIL Image.\n \"\"\"\n if not(isinstance(pic, torch.Tensor) or isinstance(pic, np.ndarray)):\n raise TypeError('pic should be Tensor or ndarray. Got {}.'.format(type(pic)))\n\n elif isinstance(pic, torch.Tensor):\n if pic.ndimension() not in {2, 3}:\n raise ValueError('pic should be 2/3 dimensional. 
Got {} dimensions.'.format(pic.ndimension()))\n\n elif pic.ndimension() == 2:\n # if 2D image, add channel dimension (CHW)\n pic = pic.unsqueeze(0)\n\n elif isinstance(pic, np.ndarray):\n if pic.ndim not in {2, 3}:\n raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))\n\n elif pic.ndim == 2:\n # if 2D image, add channel dimension (HWC)\n pic = np.expand_dims(pic, 2)\n\n npimg = pic\n if isinstance(pic, torch.FloatTensor) and mode != 'F':\n pic = pic.mul(255).byte()\n if isinstance(pic, torch.Tensor):\n npimg = np.transpose(pic.numpy(), (1, 2, 0))\n\n if not isinstance(npimg, np.ndarray):\n raise TypeError('Input pic must be a torch.Tensor or NumPy ndarray, ' +\n 'not {}'.format(type(npimg)))\n\n if npimg.shape[2] == 1:\n expected_mode = None\n npimg = npimg[:, :, 0]\n if npimg.dtype == np.uint8:\n expected_mode = 'L'\n elif npimg.dtype == np.int16:\n expected_mode = 'I;16'\n elif npimg.dtype == np.int32:\n expected_mode = 'I'\n elif npimg.dtype == np.float32:\n expected_mode = 'F'\n if mode is not None and mode != expected_mode:\n raise ValueError(\"Incorrect mode ({}) supplied for input type {}. Should be {}\"\n .format(mode, np.dtype, expected_mode))\n mode = expected_mode\n\n elif npimg.shape[2] == 2:\n permitted_2_channel_modes = ['LA']\n if mode is not None and mode not in permitted_2_channel_modes:\n raise ValueError(\"Only modes {} are supported for 2D inputs\".format(permitted_2_channel_modes))\n\n if mode is None and npimg.dtype == np.uint8:\n mode = 'LA'\n\n elif npimg.shape[2] == 4:\n permitted_4_channel_modes = ['RGBA', 'CMYK', 'RGBX']\n if mode is not None and mode not in permitted_4_channel_modes:\n raise ValueError(\"Only modes {} are supported for 4D inputs\".format(permitted_4_channel_modes))\n\n if mode is None and npimg.dtype == np.uint8:\n mode = 'RGBA'\n else:\n permitted_3_channel_modes = ['RGB', 'YCbCr', 'HSV']\n if mode is not None and mode not in permitted_3_channel_modes:\n raise ValueError(\"Only modes {} are supported for 3D inputs\".format(permitted_3_channel_modes))\n if mode is None and npimg.dtype == np.uint8:\n mode = 'RGB'\n\n if mode is None:\n raise TypeError('Input type {} is not supported'.format(npimg.dtype))\n\n return Image.fromarray(npimg, mode=mode)\n\n\ndef normalize(tensor, mean, std, inplace=False):\n \"\"\"Normalize a tensor image with mean and standard deviation.\n\n .. note::\n This transform acts out of place by default, i.e., it does not mutates the input tensor.\n\n See :class:`~torchvision.transforms.Normalize` for more details.\n\n Args:\n tensor (Tensor): Tensor image of size (C, H, W) to be normalized.\n mean (sequence): Sequence of means for each channel.\n std (sequence): Sequence of standard deviations for each channel.\n inplace(bool,optional): Bool to make this operation inplace.\n\n Returns:\n Tensor: Normalized Tensor image.\n \"\"\"\n if not _is_tensor_image(tensor):\n raise TypeError('tensor is not a torch image.')\n\n if not inplace:\n tensor = tensor.clone()\n\n dtype = tensor.dtype\n mean = torch.as_tensor(mean, dtype=dtype, device=tensor.device)\n std = torch.as_tensor(std, dtype=dtype, device=tensor.device)\n tensor.sub_(mean[:, None, None]).div_(std[:, None, None])\n return tensor\n\n\ndef resize(img, size, interpolation=Image.BILINEAR):\n r\"\"\"Resize the input PIL Image to the given size.\n\n Args:\n img (PIL Image): Image to be resized.\n size (sequence or int): Desired output size. If size is a sequence like\n (h, w), the output size will be matched to this. 
If size is an int,\n the smaller edge of the image will be matched to this number maintaing\n the aspect ratio. i.e, if height > width, then image will be rescaled to\n :math:`\\left(\\text{size} \\times \\frac{\\text{height}}{\\text{width}}, \\text{size}\\right)`\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``\n\n Returns:\n PIL Image: Resized image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n if not (isinstance(size, int) or (isinstance(size, Iterable) and len(size) == 2)):\n raise TypeError('Got inappropriate size arg: {}'.format(size))\n\n if isinstance(size, int):\n w, h = img.size\n if (w <= h and w == size) or (h <= w and h == size):\n return img\n if w < h:\n ow = size\n oh = int(size * h / w)\n return img.resize((ow, oh), interpolation)\n else:\n oh = size\n ow = int(size * w / h)\n return img.resize((ow, oh), interpolation)\n else:\n return img.resize(size[::-1], interpolation)\n\n\ndef scale(*args, **kwargs):\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n \"please use transforms.Resize instead.\")\n return resize(*args, **kwargs)\n\n\ndef pad(img, padding, fill=0, padding_mode='constant'):\n r\"\"\"Pad the given PIL Image on all sides with specified padding mode and fill value.\n\n Args:\n img (PIL Image): Image to be padded.\n padding (int or tuple): Padding on each border. If a single int is provided this\n is used to pad all borders. If tuple of length 2 is provided this is the padding\n on left/right and top/bottom respectively. If a tuple of length 4 is provided\n this is the padding for the left, top, right and bottom borders\n respectively.\n fill: Pixel fill value for constant fill. Default is 0. If a tuple of\n length 3, it is used to fill R, G, B channels respectively.\n This value is only used when the padding_mode is constant\n padding_mode: Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.\n\n - constant: pads with a constant value, this value is specified with fill\n\n - edge: pads with the last value on the edge of the image\n\n - reflect: pads with reflection of image (without repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\n will result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n - symmetric: pads with reflection of image (repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\n will result in [2, 1, 1, 2, 3, 4, 4, 3]\n\n Returns:\n PIL Image: Padded image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if not isinstance(padding, (numbers.Number, tuple)):\n raise TypeError('Got inappropriate padding arg')\n if not isinstance(fill, (numbers.Number, str, tuple)):\n raise TypeError('Got inappropriate fill arg')\n if not isinstance(padding_mode, str):\n raise TypeError('Got inappropriate padding_mode arg')\n\n if isinstance(padding, Sequence) and len(padding) not in [2, 4]:\n raise ValueError(\"Padding must be an int or a 2, or 4 element tuple, not a \" +\n \"{} element tuple\".format(len(padding)))\n\n assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'], \\\n 'Padding mode should be either constant, edge, reflect or symmetric'\n\n if padding_mode == 'constant':\n if img.mode == 'P':\n palette = img.getpalette()\n image = ImageOps.expand(img, border=padding, fill=fill)\n image.putpalette(palette)\n return image\n\n return ImageOps.expand(img, border=padding, fill=fill)\n else:\n if isinstance(padding, int):\n pad_left = pad_right = pad_top = pad_bottom = padding\n if isinstance(padding, Sequence) and len(padding) == 2:\n pad_left = pad_right = padding[0]\n pad_top = pad_bottom = padding[1]\n if isinstance(padding, Sequence) and len(padding) == 4:\n pad_left = padding[0]\n pad_top = padding[1]\n pad_right = padding[2]\n pad_bottom = padding[3]\n\n if img.mode == 'P':\n palette = img.getpalette()\n img = np.asarray(img)\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), padding_mode)\n img = Image.fromarray(img)\n img.putpalette(palette)\n return img\n\n img = np.asarray(img)\n # RGB image\n if len(img.shape) == 3:\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right), (0, 0)), padding_mode)\n # Grayscale image\n if len(img.shape) == 2:\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), padding_mode)\n\n return Image.fromarray(img)\n\n\ndef crop(img, top, left, height, width):\n \"\"\"Crop the given PIL Image.\n Args:\n img (PIL Image): Image to be cropped. (0,0) denotes the top left corner of the image.\n top (int): Vertical component of the top left corner of the crop box.\n left (int): Horizontal component of the top left corner of the crop box.\n height (int): Height of the crop box.\n width (int): Width of the crop box.\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.crop((left, top, left + width, top + height))\n\n\ndef center_crop(img, output_size):\n \"\"\"Crop the given PIL Image and resize it to desired size.\n\n Args:\n img (PIL Image): Image to be cropped. (0,0) denotes the top left corner of the image.\n output_size (sequence or int): (height, width) of the crop box. If int,\n it is used for both directions\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n if isinstance(output_size, numbers.Number):\n output_size = (int(output_size), int(output_size))\n image_width, image_height = img.size\n crop_height, crop_width = output_size\n crop_top = int(round((image_height - crop_height) / 2.))\n crop_left = int(round((image_width - crop_width) / 2.))\n return crop(img, crop_top, crop_left, crop_height, crop_width)\n\n\ndef resized_crop(img, top, left, height, width, size, interpolation=Image.BILINEAR):\n \"\"\"Crop the given PIL Image and resize it to desired size.\n\n Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.\n\n Args:\n img (PIL Image): Image to be cropped. 
(0,0) denotes the top left corner of the image.\n top (int): Vertical component of the top left corner of the crop box.\n left (int): Horizontal component of the top left corner of the crop box.\n height (int): Height of the crop box.\n width (int): Width of the crop box.\n size (sequence or int): Desired output size. Same semantics as ``resize``.\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``.\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n assert _is_pil_image(img), 'img should be PIL Image'\n img = crop(img, top, left, height, width)\n img = resize(img, size, interpolation)\n return img\n\n\ndef hflip(img):\n \"\"\"Horizontally flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Horizontall flipped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.transpose(Image.FLIP_LEFT_RIGHT)\n\n\ndef _get_perspective_coeffs(startpoints, endpoints):\n \"\"\"Helper function to get the coefficients (a, b, c, d, e, f, g, h) for the perspective transforms.\n\n In Perspective Transform each pixel (x, y) in the orignal image gets transformed as,\n (x, y) -> ( (ax + by + c) / (gx + hy + 1), (dx + ey + f) / (gx + hy + 1) )\n\n Args:\n List containing [top-left, top-right, bottom-right, bottom-left] of the orignal image,\n List containing [top-left, top-right, bottom-right, bottom-left] of the transformed\n image\n Returns:\n octuple (a, b, c, d, e, f, g, h) for transforming each pixel.\n \"\"\"\n matrix = []\n\n for p1, p2 in zip(endpoints, startpoints):\n matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0] * p1[0], -p2[0] * p1[1]])\n matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1] * p1[0], -p2[1] * p1[1]])\n\n A = torch.tensor(matrix, dtype=torch.float)\n B = torch.tensor(startpoints, dtype=torch.float).view(8)\n res = torch.lstsq(B, A)[0]\n return res.squeeze_(1).tolist()\n\n\ndef perspective(img, startpoints, endpoints, interpolation=Image.BICUBIC):\n \"\"\"Perform perspective transform of the given PIL Image.\n\n Args:\n img (PIL Image): Image to be transformed.\n startpoints: List containing [top-left, top-right, bottom-right, bottom-left] of the orignal image\n endpoints: List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image\n interpolation: Default- Image.BICUBIC\n Returns:\n PIL Image: Perspectively transformed Image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n coeffs = _get_perspective_coeffs(startpoints, endpoints)\n return img.transform(img.size, Image.PERSPECTIVE, coeffs, interpolation)\n\n\ndef vflip(img):\n \"\"\"Vertically flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Vertically flipped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.transpose(Image.FLIP_TOP_BOTTOM)\n\n\ndef five_crop(img, size):\n \"\"\"Crop the given PIL Image into four corners and the central crop.\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. 
If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center)\n Corresponding top left, top right, bottom left, bottom right and center crop.\n \"\"\"\n if isinstance(size, numbers.Number):\n size = (int(size), int(size))\n else:\n assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n\n image_width, image_height = img.size\n crop_height, crop_width = size\n if crop_width > image_width or crop_height > image_height:\n msg = \"Requested crop size {} is bigger than input size {}\"\n raise ValueError(msg.format(size, (image_height, image_width)))\n\n tl = img.crop((0, 0, crop_width, crop_height))\n tr = img.crop((image_width - crop_width, 0, image_width, crop_height))\n bl = img.crop((0, image_height - crop_height, crop_width, image_height))\n br = img.crop((image_width - crop_width, image_height - crop_height,\n image_width, image_height))\n center = center_crop(img, (crop_height, crop_width))\n return (tl, tr, bl, br, center)\n\n\ndef ten_crop(img, size, vertical_flip=False):\n r\"\"\"Crop the given PIL Image into four corners and the central crop plus the\n flipped version of these (horizontal flipping is used by default).\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n vertical_flip (bool): Use vertical flipping instead of horizontal\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center, tl_flip, tr_flip, bl_flip, br_flip, center_flip)\n Corresponding top left, top right, bottom left, bottom right and center crop\n and same for the flipped image.\n \"\"\"\n if isinstance(size, numbers.Number):\n size = (int(size), int(size))\n else:\n assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n\n first_five = five_crop(img, size)\n\n if vertical_flip:\n img = vflip(img)\n else:\n img = hflip(img)\n\n second_five = five_crop(img, size)\n return first_five + second_five\n\n\ndef adjust_brightness(img, brightness_factor):\n \"\"\"Adjust brightness of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n brightness_factor (float): How much to adjust the brightness. Can be\n any non negative number. 0 gives a black image, 1 gives the\n original image while 2 increases the brightness by a factor of 2.\n\n Returns:\n PIL Image: Brightness adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Brightness(img)\n img = enhancer.enhance(brightness_factor)\n return img\n\n\ndef adjust_contrast(img, contrast_factor):\n \"\"\"Adjust contrast of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n contrast_factor (float): How much to adjust the contrast. Can be any\n non negative number. 0 gives a solid gray image, 1 gives the\n original image while 2 increases the contrast by a factor of 2.\n\n Returns:\n PIL Image: Contrast adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Contrast(img)\n img = enhancer.enhance(contrast_factor)\n return img\n\n\ndef adjust_saturation(img, saturation_factor):\n \"\"\"Adjust color saturation of an image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n saturation_factor (float): How much to adjust the saturation. 0 will\n give a black and white image, 1 will give the original image while\n 2 will enhance the saturation by a factor of 2.\n\n Returns:\n PIL Image: Saturation adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Color(img)\n img = enhancer.enhance(saturation_factor)\n return img\n\n\ndef adjust_hue(img, hue_factor):\n \"\"\"Adjust hue of an image.\n\n The image hue is adjusted by converting the image to HSV and\n cyclically shifting the intensities in the hue channel (H).\n The image is then converted back to original image mode.\n\n `hue_factor` is the amount of shift in H channel and must be in the\n interval `[-0.5, 0.5]`.\n\n See `Hue`_ for more details.\n\n .. _Hue: https://en.wikipedia.org/wiki/Hue\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n hue_factor (float): How much to shift the hue channel. Should be in\n [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in\n HSV space in positive and negative direction respectively.\n 0 means no shift. Therefore, both -0.5 and 0.5 will give an image\n with complementary colors while 0 gives the original image.\n\n Returns:\n PIL Image: Hue adjusted image.\n \"\"\"\n if not(-0.5 <= hue_factor <= 0.5):\n raise ValueError('hue_factor is not in [-0.5, 0.5].'.format(hue_factor))\n\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n input_mode = img.mode\n if input_mode in {'L', '1', 'I', 'F'}:\n return img\n\n h, s, v = img.convert('HSV').split()\n\n np_h = np.array(h, dtype=np.uint8)\n # uint8 addition take cares of rotation across boundaries\n with np.errstate(over='ignore'):\n np_h += np.uint8(hue_factor * 255)\n h = Image.fromarray(np_h, 'L')\n\n img = Image.merge('HSV', (h, s, v)).convert(input_mode)\n return img\n\n\ndef adjust_gamma(img, gamma, gain=1):\n r\"\"\"Perform gamma correction on an image.\n\n Also known as Power Law Transform. Intensities in RGB mode are adjusted\n based on the following equation:\n\n .. math::\n I_{\\text{out}} = 255 \\times \\text{gain} \\times \\left(\\frac{I_{\\text{in}}}{255}\\right)^{\\gamma}\n\n See `Gamma Correction`_ for more details.\n\n .. _Gamma Correction: https://en.wikipedia.org/wiki/Gamma_correction\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n gamma (float): Non negative real number, same as :math:`\\gamma` in the equation.\n gamma larger than 1 make the shadows darker,\n while gamma smaller than 1 make dark regions lighter.\n gain (float): The constant multiplier.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if gamma < 0:\n raise ValueError('Gamma should be a non-negative real number')\n\n input_mode = img.mode\n img = img.convert('RGB')\n\n gamma_map = [255 * gain * pow(ele / 255., gamma) for ele in range(256)] * 3\n img = img.point(gamma_map) # use PIL's point-function to accelerate this part\n\n img = img.convert(input_mode)\n return img\n\n\ndef rotate(img, angle, resample=False, expand=False, center=None, fill=0):\n \"\"\"Rotate the image by angle.\n\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): In degrees degrees counter clockwise order.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter. See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n expand (bool, optional): Optional expansion flag.\n If true, expands the output image to make it large enough to hold the entire rotated image.\n If false or omitted, make the output image the same size as the input image.\n Note that the expand flag assumes rotation around the center and no translation.\n center (2-tuple, optional): Optional center of rotation.\n Origin is the upper left corner.\n Default is the center of the image.\n fill (3-tuple or int): RGB pixel fill value for area outside the rotated image.\n If int, it is used for all channels respectively.\n\n .. _filters: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#filters\n\n \"\"\"\n\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n if isinstance(fill, int):\n fill = tuple([fill] * 3)\n\n return img.rotate(angle, resample, expand, center, fillcolor=fill)\n\n\ndef _get_inverse_affine_matrix(center, angle, translate, scale, shear):\n # Helper method to compute inverse matrix for affine transformation\n\n # As it is explained in PIL.Image.rotate\n # We need compute INVERSE of affine transformation matrix: M = T * C * RSS * C^-1\n # where T is translation matrix: [1, 0, tx | 0, 1, ty | 0, 0, 1]\n # C is translation matrix to keep center: [1, 0, cx | 0, 1, cy | 0, 0, 1]\n # RSS is rotation with scale and shear matrix\n # RSS(a, scale, shear) = [ cos(a + shear_y)*scale -sin(a + shear_x)*scale 0]\n # [ sin(a + shear_y)*scale cos(a + shear_x)*scale 0]\n # [ 0 0 1]\n # Thus, the inverse is M^-1 = C * RSS^-1 * C^-1 * T^-1\n\n angle = math.radians(angle)\n if isinstance(shear, (tuple, list)) and len(shear) == 2:\n shear = [math.radians(s) for s in shear]\n elif isinstance(shear, numbers.Number):\n shear = math.radians(shear)\n shear = [shear, 0]\n else:\n raise ValueError(\n \"Shear should be a single value or a tuple/list containing \" +\n \"two values. 
Got {}\".format(shear))\n scale = 1.0 / scale\n\n # Inverted rotation matrix with scale and shear\n d = math.cos(angle + shear[0]) * math.cos(angle + shear[1]) + \\\n math.sin(angle + shear[0]) * math.sin(angle + shear[1])\n matrix = [\n math.cos(angle + shear[0]), math.sin(angle + shear[0]), 0,\n -math.sin(angle + shear[1]), math.cos(angle + shear[1]), 0\n ]\n matrix = [scale / d * m for m in matrix]\n\n # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1\n matrix[2] += matrix[0] * (-center[0] - translate[0]) + matrix[1] * (-center[1] - translate[1])\n matrix[5] += matrix[3] * (-center[0] - translate[0]) + matrix[4] * (-center[1] - translate[1])\n\n # Apply center translation: C * RSS^-1 * C^-1 * T^-1\n matrix[2] += center[0]\n matrix[5] += center[1]\n return matrix\n\n\ndef affine(img, angle, translate, scale, shear, resample=0, fillcolor=None):\n \"\"\"Apply affine transformation on the image keeping image center invariant\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): rotation angle in degrees between -180 and 180, clockwise direction.\n translate (list or tuple of integers): horizontal and vertical translations (post-rotation translation)\n scale (float): overall scale\n shear (float or tuple or list): shear angle value in degrees between -180 to 180, clockwise direction.\n If a tuple of list is specified, the first value corresponds to a shear parallel to the x axis, while\n the second value corresponds to a shear parallel to the y axis.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter.\n See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n fillcolor (int): Optional fill color for the area outside the transform in the output image. (Pillow>=5.0.0)\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n assert isinstance(translate, (tuple, list)) and len(translate) == 2, \\\n \"Argument translate should be a list or tuple of length 2\"\n\n assert scale > 0.0, \"Argument scale should be positive\"\n\n output_size = img.size\n center = (img.size[0] * 0.5 + 0.5, img.size[1] * 0.5 + 0.5)\n matrix = _get_inverse_affine_matrix(center, angle, translate, scale, shear)\n kwargs = {\"fillcolor\": fillcolor} if PILLOW_VERSION[0] >= '5' else {}\n return img.transform(output_size, Image.AFFINE, matrix, resample, **kwargs)\n\n\ndef to_grayscale(img, num_output_channels=1):\n \"\"\"Convert image to grayscale version of image.\n\n Args:\n img (PIL Image): Image to be converted to grayscale.\n\n Returns:\n PIL Image: Grayscale version of the image.\n if num_output_channels = 1 : returned image is single channel\n\n if num_output_channels = 3 : returned image is 3 channel with r = g = b\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if num_output_channels == 1:\n img = img.convert('L')\n elif num_output_channels == 3:\n img = img.convert('L')\n np_img = np.array(img, dtype=np.uint8)\n np_img = np.dstack([np_img, np_img, np_img])\n img = Image.fromarray(np_img, 'RGB')\n else:\n raise ValueError('num_output_channels should be either 1 or 3')\n\n return img\n\n\ndef erase(img, i, j, h, w, v, inplace=False):\n \"\"\" Erase the input Tensor Image with given value.\n\n Args:\n img (Tensor Image): Tensor image of size (C, H, W) to be erased\n i (int): i in (i,j) i.e coordinates of the upper left corner.\n j (int): j in (i,j) i.e coordinates of the upper left corner.\n h (int): Height of the erased region.\n w (int): Width of the erased region.\n v: Erasing value.\n inplace(bool, optional): For in-place operations. By default is set False.\n\n Returns:\n Tensor Image: Erased image.\n \"\"\"\n if not isinstance(img, torch.Tensor):\n raise TypeError('img should be Tensor Image. Got {}'.format(type(img)))\n\n if not inplace:\n img = img.clone()\n\n img[:, i:i + h, j:j + w] = v\n return img\n",
"path": "torchvision/transforms/functional.py"
}
] | [
{
"content": "from __future__ import division\nimport torch\nimport sys\nimport math\nfrom PIL import Image, ImageOps, ImageEnhance, __version__ as PILLOW_VERSION\ntry:\n import accimage\nexcept ImportError:\n accimage = None\nimport numpy as np\nimport numbers\nimport collections\nimport warnings\n\nif sys.version_info < (3, 3):\n Sequence = collections.Sequence\n Iterable = collections.Iterable\nelse:\n Sequence = collections.abc.Sequence\n Iterable = collections.abc.Iterable\n\n\ndef _is_pil_image(img):\n if accimage is not None:\n return isinstance(img, (Image.Image, accimage.Image))\n else:\n return isinstance(img, Image.Image)\n\n\ndef _is_tensor_image(img):\n return torch.is_tensor(img) and img.ndimension() == 3\n\n\ndef _is_numpy(img):\n return isinstance(img, np.ndarray)\n\n\ndef _is_numpy_image(img):\n return img.ndim in {2, 3}\n\n\ndef to_tensor(pic):\n \"\"\"Convert a ``PIL Image`` or ``numpy.ndarray`` to tensor.\n\n See ``ToTensor`` for more details.\n\n Args:\n pic (PIL Image or numpy.ndarray): Image to be converted to tensor.\n\n Returns:\n Tensor: Converted image.\n \"\"\"\n if not(_is_pil_image(pic) or _is_numpy(pic)):\n raise TypeError('pic should be PIL Image or ndarray. Got {}'.format(type(pic)))\n\n if _is_numpy(pic) and not _is_numpy_image(pic):\n raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))\n\n if isinstance(pic, np.ndarray):\n # handle numpy array\n if pic.ndim == 2:\n pic = pic[:, :, None]\n\n img = torch.from_numpy(pic.transpose((2, 0, 1)))\n # backward compatibility\n if isinstance(img, torch.ByteTensor):\n return img.float().div(255)\n else:\n return img\n\n if accimage is not None and isinstance(pic, accimage.Image):\n nppic = np.zeros([pic.channels, pic.height, pic.width], dtype=np.float32)\n pic.copyto(nppic)\n return torch.from_numpy(nppic)\n\n # handle PIL Image\n if pic.mode == 'I':\n img = torch.from_numpy(np.array(pic, np.int32, copy=False))\n elif pic.mode == 'I;16':\n img = torch.from_numpy(np.array(pic, np.int16, copy=False))\n elif pic.mode == 'F':\n img = torch.from_numpy(np.array(pic, np.float32, copy=False))\n elif pic.mode == '1':\n img = 255 * torch.from_numpy(np.array(pic, np.uint8, copy=False))\n else:\n img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes()))\n # PIL image mode: L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK\n if pic.mode == 'YCbCr':\n nchannel = 3\n elif pic.mode == 'I;16':\n nchannel = 1\n else:\n nchannel = len(pic.mode)\n img = img.view(pic.size[1], pic.size[0], nchannel)\n # put it from HWC to CHW format\n # yikes, this transpose takes 80% of the loading time/CPU\n img = img.transpose(0, 1).transpose(0, 2).contiguous()\n if isinstance(img, torch.ByteTensor):\n return img.float().div(255)\n else:\n return img\n\n\ndef to_pil_image(pic, mode=None):\n \"\"\"Convert a tensor or an ndarray to PIL Image.\n\n See :class:`~torchvision.transforms.ToPILImage` for more details.\n\n Args:\n pic (Tensor or numpy.ndarray): Image to be converted to PIL Image.\n mode (`PIL.Image mode`_): color space and pixel depth of input data (optional).\n\n .. _PIL.Image mode: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#concept-modes\n\n Returns:\n PIL Image: Image converted to PIL Image.\n \"\"\"\n if not(isinstance(pic, torch.Tensor) or isinstance(pic, np.ndarray)):\n raise TypeError('pic should be Tensor or ndarray. Got {}.'.format(type(pic)))\n\n elif isinstance(pic, torch.Tensor):\n if pic.ndimension() not in {2, 3}:\n raise ValueError('pic should be 2/3 dimensional. 
Got {} dimensions.'.format(pic.ndimension()))\n\n elif pic.ndimension() == 2:\n # if 2D image, add channel dimension (CHW)\n pic = pic.unsqueeze(0)\n\n elif isinstance(pic, np.ndarray):\n if pic.ndim not in {2, 3}:\n raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndim))\n\n elif pic.ndim == 2:\n # if 2D image, add channel dimension (HWC)\n pic = np.expand_dims(pic, 2)\n\n npimg = pic\n if isinstance(pic, torch.FloatTensor) and mode != 'F':\n pic = pic.mul(255).byte()\n if isinstance(pic, torch.Tensor):\n npimg = np.transpose(pic.numpy(), (1, 2, 0))\n\n if not isinstance(npimg, np.ndarray):\n raise TypeError('Input pic must be a torch.Tensor or NumPy ndarray, ' +\n 'not {}'.format(type(npimg)))\n\n if npimg.shape[2] == 1:\n expected_mode = None\n npimg = npimg[:, :, 0]\n if npimg.dtype == np.uint8:\n expected_mode = 'L'\n elif npimg.dtype == np.int16:\n expected_mode = 'I;16'\n elif npimg.dtype == np.int32:\n expected_mode = 'I'\n elif npimg.dtype == np.float32:\n expected_mode = 'F'\n if mode is not None and mode != expected_mode:\n raise ValueError(\"Incorrect mode ({}) supplied for input type {}. Should be {}\"\n .format(mode, np.dtype, expected_mode))\n mode = expected_mode\n\n elif npimg.shape[2] == 2:\n permitted_2_channel_modes = ['LA']\n if mode is not None and mode not in permitted_2_channel_modes:\n raise ValueError(\"Only modes {} are supported for 2D inputs\".format(permitted_2_channel_modes))\n\n if mode is None and npimg.dtype == np.uint8:\n mode = 'LA'\n\n elif npimg.shape[2] == 4:\n permitted_4_channel_modes = ['RGBA', 'CMYK', 'RGBX']\n if mode is not None and mode not in permitted_4_channel_modes:\n raise ValueError(\"Only modes {} are supported for 4D inputs\".format(permitted_4_channel_modes))\n\n if mode is None and npimg.dtype == np.uint8:\n mode = 'RGBA'\n else:\n permitted_3_channel_modes = ['RGB', 'YCbCr', 'HSV']\n if mode is not None and mode not in permitted_3_channel_modes:\n raise ValueError(\"Only modes {} are supported for 3D inputs\".format(permitted_3_channel_modes))\n if mode is None and npimg.dtype == np.uint8:\n mode = 'RGB'\n\n if mode is None:\n raise TypeError('Input type {} is not supported'.format(npimg.dtype))\n\n return Image.fromarray(npimg, mode=mode)\n\n\ndef normalize(tensor, mean, std, inplace=False):\n \"\"\"Normalize a tensor image with mean and standard deviation.\n\n .. note::\n This transform acts out of place by default, i.e., it does not mutates the input tensor.\n\n See :class:`~torchvision.transforms.Normalize` for more details.\n\n Args:\n tensor (Tensor): Tensor image of size (C, H, W) to be normalized.\n mean (sequence): Sequence of means for each channel.\n std (sequence): Sequence of standard deviations for each channel.\n inplace(bool,optional): Bool to make this operation inplace.\n\n Returns:\n Tensor: Normalized Tensor image.\n \"\"\"\n if not _is_tensor_image(tensor):\n raise TypeError('tensor is not a torch image.')\n\n if not inplace:\n tensor = tensor.clone()\n\n dtype = tensor.dtype\n mean = torch.as_tensor(mean, dtype=dtype, device=tensor.device)\n std = torch.as_tensor(std, dtype=dtype, device=tensor.device)\n tensor.sub_(mean[:, None, None]).div_(std[:, None, None])\n return tensor\n\n\ndef resize(img, size, interpolation=Image.BILINEAR):\n r\"\"\"Resize the input PIL Image to the given size.\n\n Args:\n img (PIL Image): Image to be resized.\n size (sequence or int): Desired output size. If size is a sequence like\n (h, w), the output size will be matched to this. 
If size is an int,\n the smaller edge of the image will be matched to this number maintaing\n the aspect ratio. i.e, if height > width, then image will be rescaled to\n :math:`\\left(\\text{size} \\times \\frac{\\text{height}}{\\text{width}}, \\text{size}\\right)`\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``\n\n Returns:\n PIL Image: Resized image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n if not (isinstance(size, int) or (isinstance(size, Iterable) and len(size) == 2)):\n raise TypeError('Got inappropriate size arg: {}'.format(size))\n\n if isinstance(size, int):\n w, h = img.size\n if (w <= h and w == size) or (h <= w and h == size):\n return img\n if w < h:\n ow = size\n oh = int(size * h / w)\n return img.resize((ow, oh), interpolation)\n else:\n oh = size\n ow = int(size * w / h)\n return img.resize((ow, oh), interpolation)\n else:\n return img.resize(size[::-1], interpolation)\n\n\ndef scale(*args, **kwargs):\n warnings.warn(\"The use of the transforms.Scale transform is deprecated, \" +\n \"please use transforms.Resize instead.\")\n return resize(*args, **kwargs)\n\n\ndef pad(img, padding, fill=0, padding_mode='constant'):\n r\"\"\"Pad the given PIL Image on all sides with specified padding mode and fill value.\n\n Args:\n img (PIL Image): Image to be padded.\n padding (int or tuple): Padding on each border. If a single int is provided this\n is used to pad all borders. If tuple of length 2 is provided this is the padding\n on left/right and top/bottom respectively. If a tuple of length 4 is provided\n this is the padding for the left, top, right and bottom borders\n respectively.\n fill: Pixel fill value for constant fill. Default is 0. If a tuple of\n length 3, it is used to fill R, G, B channels respectively.\n This value is only used when the padding_mode is constant\n padding_mode: Type of padding. Should be: constant, edge, reflect or symmetric. Default is constant.\n\n - constant: pads with a constant value, this value is specified with fill\n\n - edge: pads with the last value on the edge of the image\n\n - reflect: pads with reflection of image (without repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in reflect mode\n will result in [3, 2, 1, 2, 3, 4, 3, 2]\n\n - symmetric: pads with reflection of image (repeating the last value on the edge)\n\n padding [1, 2, 3, 4] with 2 elements on both sides in symmetric mode\n will result in [2, 1, 1, 2, 3, 4, 4, 3]\n\n Returns:\n PIL Image: Padded image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if not isinstance(padding, (numbers.Number, tuple)):\n raise TypeError('Got inappropriate padding arg')\n if not isinstance(fill, (numbers.Number, str, tuple)):\n raise TypeError('Got inappropriate fill arg')\n if not isinstance(padding_mode, str):\n raise TypeError('Got inappropriate padding_mode arg')\n\n if isinstance(padding, Sequence) and len(padding) not in [2, 4]:\n raise ValueError(\"Padding must be an int or a 2, or 4 element tuple, not a \" +\n \"{} element tuple\".format(len(padding)))\n\n assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'], \\\n 'Padding mode should be either constant, edge, reflect or symmetric'\n\n if padding_mode == 'constant':\n if img.mode == 'P':\n palette = img.getpalette()\n image = ImageOps.expand(img, border=padding, fill=fill)\n image.putpalette(palette)\n return image\n\n return ImageOps.expand(img, border=padding, fill=fill)\n else:\n if isinstance(padding, int):\n pad_left = pad_right = pad_top = pad_bottom = padding\n if isinstance(padding, Sequence) and len(padding) == 2:\n pad_left = pad_right = padding[0]\n pad_top = pad_bottom = padding[1]\n if isinstance(padding, Sequence) and len(padding) == 4:\n pad_left = padding[0]\n pad_top = padding[1]\n pad_right = padding[2]\n pad_bottom = padding[3]\n\n if img.mode == 'P':\n palette = img.getpalette()\n img = np.asarray(img)\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), padding_mode)\n img = Image.fromarray(img)\n img.putpalette(palette)\n return img\n\n img = np.asarray(img)\n # RGB image\n if len(img.shape) == 3:\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right), (0, 0)), padding_mode)\n # Grayscale image\n if len(img.shape) == 2:\n img = np.pad(img, ((pad_top, pad_bottom), (pad_left, pad_right)), padding_mode)\n\n return Image.fromarray(img)\n\n\ndef crop(img, top, left, height, width):\n \"\"\"Crop the given PIL Image.\n Args:\n img (PIL Image): Image to be cropped. (0,0) denotes the top left corner of the image.\n top (int): Vertical component of the top left corner of the crop box.\n left (int): Horizontal component of the top left corner of the crop box.\n height (int): Height of the crop box.\n width (int): Width of the crop box.\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.crop((left, top, left + width, top + height))\n\n\ndef center_crop(img, output_size):\n \"\"\"Crop the given PIL Image and resize it to desired size.\n\n Args:\n img (PIL Image): Image to be cropped. (0,0) denotes the top left corner of the image.\n output_size (sequence or int): (height, width) of the crop box. If int,\n it is used for both directions\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n if isinstance(output_size, numbers.Number):\n output_size = (int(output_size), int(output_size))\n image_width, image_height = img.size\n crop_height, crop_width = output_size\n crop_top = int(round((image_height - crop_height) / 2.))\n crop_left = int(round((image_width - crop_width) / 2.))\n return crop(img, crop_top, crop_left, crop_height, crop_width)\n\n\ndef resized_crop(img, top, left, height, width, size, interpolation=Image.BILINEAR):\n \"\"\"Crop the given PIL Image and resize it to desired size.\n\n Notably used in :class:`~torchvision.transforms.RandomResizedCrop`.\n\n Args:\n img (PIL Image): Image to be cropped. 
(0,0) denotes the top left corner of the image.\n top (int): Vertical component of the top left corner of the crop box.\n left (int): Horizontal component of the top left corner of the crop box.\n height (int): Height of the crop box.\n width (int): Width of the crop box.\n size (sequence or int): Desired output size. Same semantics as ``resize``.\n interpolation (int, optional): Desired interpolation. Default is\n ``PIL.Image.BILINEAR``.\n Returns:\n PIL Image: Cropped image.\n \"\"\"\n assert _is_pil_image(img), 'img should be PIL Image'\n img = crop(img, top, left, height, width)\n img = resize(img, size, interpolation)\n return img\n\n\ndef hflip(img):\n \"\"\"Horizontally flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Horizontall flipped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.transpose(Image.FLIP_LEFT_RIGHT)\n\n\ndef _get_perspective_coeffs(startpoints, endpoints):\n \"\"\"Helper function to get the coefficients (a, b, c, d, e, f, g, h) for the perspective transforms.\n\n In Perspective Transform each pixel (x, y) in the orignal image gets transformed as,\n (x, y) -> ( (ax + by + c) / (gx + hy + 1), (dx + ey + f) / (gx + hy + 1) )\n\n Args:\n List containing [top-left, top-right, bottom-right, bottom-left] of the orignal image,\n List containing [top-left, top-right, bottom-right, bottom-left] of the transformed\n image\n Returns:\n octuple (a, b, c, d, e, f, g, h) for transforming each pixel.\n \"\"\"\n matrix = []\n\n for p1, p2 in zip(endpoints, startpoints):\n matrix.append([p1[0], p1[1], 1, 0, 0, 0, -p2[0] * p1[0], -p2[0] * p1[1]])\n matrix.append([0, 0, 0, p1[0], p1[1], 1, -p2[1] * p1[0], -p2[1] * p1[1]])\n\n A = torch.tensor(matrix, dtype=torch.float)\n B = torch.tensor(startpoints, dtype=torch.float).view(8)\n res = torch.lstsq(B, A)[0]\n return res.squeeze_(1).tolist()\n\n\ndef perspective(img, startpoints, endpoints, interpolation=Image.BICUBIC):\n \"\"\"Perform perspective transform of the given PIL Image.\n\n Args:\n img (PIL Image): Image to be transformed.\n startpoints: List containing [top-left, top-right, bottom-right, bottom-left] of the orignal image\n endpoints: List containing [top-left, top-right, bottom-right, bottom-left] of the transformed image\n interpolation: Default- Image.BICUBIC\n Returns:\n PIL Image: Perspectively transformed Image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n coeffs = _get_perspective_coeffs(startpoints, endpoints)\n return img.transform(img.size, Image.PERSPECTIVE, coeffs, interpolation)\n\n\ndef vflip(img):\n \"\"\"Vertically flip the given PIL Image.\n\n Args:\n img (PIL Image): Image to be flipped.\n\n Returns:\n PIL Image: Vertically flipped image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n return img.transpose(Image.FLIP_TOP_BOTTOM)\n\n\ndef five_crop(img, size):\n \"\"\"Crop the given PIL Image into four corners and the central crop.\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. 
If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center)\n Corresponding top left, top right, bottom left, bottom right and center crop.\n \"\"\"\n if isinstance(size, numbers.Number):\n size = (int(size), int(size))\n else:\n assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n\n image_width, image_height = img.size\n crop_height, crop_width = size\n if crop_width > image_width or crop_height > image_height:\n msg = \"Requested crop size {} is bigger than input size {}\"\n raise ValueError(msg.format(size, (image_height, image_width)))\n\n tl = img.crop((0, 0, crop_width, crop_height))\n tr = img.crop((image_width - crop_width, 0, image_width, crop_height))\n bl = img.crop((0, image_height - crop_height, crop_width, image_height))\n br = img.crop((image_width - crop_width, image_height - crop_height,\n image_width, image_height))\n center = center_crop(img, (crop_height, crop_width))\n return (tl, tr, bl, br, center)\n\n\ndef ten_crop(img, size, vertical_flip=False):\n r\"\"\"Crop the given PIL Image into four corners and the central crop plus the\n flipped version of these (horizontal flipping is used by default).\n\n .. Note::\n This transform returns a tuple of images and there may be a\n mismatch in the number of inputs and targets your ``Dataset`` returns.\n\n Args:\n size (sequence or int): Desired output size of the crop. If size is an\n int instead of sequence like (h, w), a square crop (size, size) is\n made.\n vertical_flip (bool): Use vertical flipping instead of horizontal\n\n Returns:\n tuple: tuple (tl, tr, bl, br, center, tl_flip, tr_flip, bl_flip, br_flip, center_flip)\n Corresponding top left, top right, bottom left, bottom right and center crop\n and same for the flipped image.\n \"\"\"\n if isinstance(size, numbers.Number):\n size = (int(size), int(size))\n else:\n assert len(size) == 2, \"Please provide only two dimensions (h, w) for size.\"\n\n first_five = five_crop(img, size)\n\n if vertical_flip:\n img = vflip(img)\n else:\n img = hflip(img)\n\n second_five = five_crop(img, size)\n return first_five + second_five\n\n\ndef adjust_brightness(img, brightness_factor):\n \"\"\"Adjust brightness of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n brightness_factor (float): How much to adjust the brightness. Can be\n any non negative number. 0 gives a black image, 1 gives the\n original image while 2 increases the brightness by a factor of 2.\n\n Returns:\n PIL Image: Brightness adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Brightness(img)\n img = enhancer.enhance(brightness_factor)\n return img\n\n\ndef adjust_contrast(img, contrast_factor):\n \"\"\"Adjust contrast of an Image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n contrast_factor (float): How much to adjust the contrast. Can be any\n non negative number. 0 gives a solid gray image, 1 gives the\n original image while 2 increases the contrast by a factor of 2.\n\n Returns:\n PIL Image: Contrast adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Contrast(img)\n img = enhancer.enhance(contrast_factor)\n return img\n\n\ndef adjust_saturation(img, saturation_factor):\n \"\"\"Adjust color saturation of an image.\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n saturation_factor (float): How much to adjust the saturation. 0 will\n give a black and white image, 1 will give the original image while\n 2 will enhance the saturation by a factor of 2.\n\n Returns:\n PIL Image: Saturation adjusted image.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n enhancer = ImageEnhance.Color(img)\n img = enhancer.enhance(saturation_factor)\n return img\n\n\ndef adjust_hue(img, hue_factor):\n \"\"\"Adjust hue of an image.\n\n The image hue is adjusted by converting the image to HSV and\n cyclically shifting the intensities in the hue channel (H).\n The image is then converted back to original image mode.\n\n `hue_factor` is the amount of shift in H channel and must be in the\n interval `[-0.5, 0.5]`.\n\n See `Hue`_ for more details.\n\n .. _Hue: https://en.wikipedia.org/wiki/Hue\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n hue_factor (float): How much to shift the hue channel. Should be in\n [-0.5, 0.5]. 0.5 and -0.5 give complete reversal of hue channel in\n HSV space in positive and negative direction respectively.\n 0 means no shift. Therefore, both -0.5 and 0.5 will give an image\n with complementary colors while 0 gives the original image.\n\n Returns:\n PIL Image: Hue adjusted image.\n \"\"\"\n if not(-0.5 <= hue_factor <= 0.5):\n raise ValueError('hue_factor is not in [-0.5, 0.5].'.format(hue_factor))\n\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n input_mode = img.mode\n if input_mode in {'L', '1', 'I', 'F'}:\n return img\n\n h, s, v = img.convert('HSV').split()\n\n np_h = np.array(h, dtype=np.uint8)\n # uint8 addition take cares of rotation across boundaries\n with np.errstate(over='ignore'):\n np_h += np.uint8(hue_factor * 255)\n h = Image.fromarray(np_h, 'L')\n\n img = Image.merge('HSV', (h, s, v)).convert(input_mode)\n return img\n\n\ndef adjust_gamma(img, gamma, gain=1):\n r\"\"\"Perform gamma correction on an image.\n\n Also known as Power Law Transform. Intensities in RGB mode are adjusted\n based on the following equation:\n\n .. math::\n I_{\\text{out}} = 255 \\times \\text{gain} \\times \\left(\\frac{I_{\\text{in}}}{255}\\right)^{\\gamma}\n\n See `Gamma Correction`_ for more details.\n\n .. _Gamma Correction: https://en.wikipedia.org/wiki/Gamma_correction\n\n Args:\n img (PIL Image): PIL Image to be adjusted.\n gamma (float): Non negative real number, same as :math:`\\gamma` in the equation.\n gamma larger than 1 make the shadows darker,\n while gamma smaller than 1 make dark regions lighter.\n gain (float): The constant multiplier.\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if gamma < 0:\n raise ValueError('Gamma should be a non-negative real number')\n\n input_mode = img.mode\n img = img.convert('RGB')\n\n gamma_map = [255 * gain * pow(ele / 255., gamma) for ele in range(256)] * 3\n img = img.point(gamma_map) # use PIL's point-function to accelerate this part\n\n img = img.convert(input_mode)\n return img\n\n\ndef rotate(img, angle, resample=False, expand=False, center=None, fill=0):\n \"\"\"Rotate the image by angle.\n\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): In degrees degrees counter clockwise order.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter. See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n expand (bool, optional): Optional expansion flag.\n If true, expands the output image to make it large enough to hold the entire rotated image.\n If false or omitted, make the output image the same size as the input image.\n Note that the expand flag assumes rotation around the center and no translation.\n center (2-tuple, optional): Optional center of rotation.\n Origin is the upper left corner.\n Default is the center of the image.\n fill (3-tuple or int): RGB pixel fill value for area outside the rotated image.\n If int, it is used for all channels respectively.\n\n .. _filters: https://pillow.readthedocs.io/en/latest/handbook/concepts.html#filters\n\n \"\"\"\n\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n if isinstance(fill, int):\n fill = tuple([fill] * 3)\n\n return img.rotate(angle, resample, expand, center, fillcolor=fill)\n\n\ndef _get_inverse_affine_matrix(center, angle, translate, scale, shear):\n # Helper method to compute inverse matrix for affine transformation\n\n # As it is explained in PIL.Image.rotate\n # We need compute INVERSE of affine transformation matrix: M = T * C * RSS * C^-1\n # where T is translation matrix: [1, 0, tx | 0, 1, ty | 0, 0, 1]\n # C is translation matrix to keep center: [1, 0, cx | 0, 1, cy | 0, 0, 1]\n # RSS is rotation with scale and shear matrix\n # RSS(a, scale, shear) = [ cos(a + shear_y)*scale -sin(a + shear_x)*scale 0]\n # [ sin(a + shear_y)*scale cos(a + shear_x)*scale 0]\n # [ 0 0 1]\n # Thus, the inverse is M^-1 = C * RSS^-1 * C^-1 * T^-1\n\n angle = math.radians(angle)\n if isinstance(shear, (tuple, list)) and len(shear) == 2:\n shear = [math.radians(s) for s in shear]\n elif isinstance(shear, numbers.Number):\n shear = math.radians(shear)\n shear = [shear, 0]\n else:\n raise ValueError(\n \"Shear should be a single value or a tuple/list containing \" +\n \"two values. 
Got {}\".format(shear))\n scale = 1.0 / scale\n\n # Inverted rotation matrix with scale and shear\n d = math.cos(angle + shear[0]) * math.cos(angle + shear[1]) + \\\n math.sin(angle + shear[0]) * math.sin(angle + shear[1])\n matrix = [\n math.cos(angle + shear[0]), math.sin(angle + shear[0]), 0,\n -math.sin(angle + shear[1]), math.cos(angle + shear[1]), 0\n ]\n matrix = [scale / d * m for m in matrix]\n\n # Apply inverse of translation and of center translation: RSS^-1 * C^-1 * T^-1\n matrix[2] += matrix[0] * (-center[0] - translate[0]) + matrix[1] * (-center[1] - translate[1])\n matrix[5] += matrix[3] * (-center[0] - translate[0]) + matrix[4] * (-center[1] - translate[1])\n\n # Apply center translation: C * RSS^-1 * C^-1 * T^-1\n matrix[2] += center[0]\n matrix[5] += center[1]\n return matrix\n\n\ndef affine(img, angle, translate, scale, shear, resample=0, fillcolor=None):\n \"\"\"Apply affine transformation on the image keeping image center invariant\n\n Args:\n img (PIL Image): PIL Image to be rotated.\n angle (float or int): rotation angle in degrees between -180 and 180, clockwise direction.\n translate (list or tuple of integers): horizontal and vertical translations (post-rotation translation)\n scale (float): overall scale\n shear (float or tuple or list): shear angle value in degrees between -180 to 180, clockwise direction.\n If a tuple of list is specified, the first value corresponds to a shear parallel to the x axis, while\n the second value corresponds to a shear parallel to the y axis.\n resample (``PIL.Image.NEAREST`` or ``PIL.Image.BILINEAR`` or ``PIL.Image.BICUBIC``, optional):\n An optional resampling filter.\n See `filters`_ for more information.\n If omitted, or if the image has mode \"1\" or \"P\", it is set to ``PIL.Image.NEAREST``.\n fillcolor (int): Optional fill color for the area outside the transform in the output image. (Pillow>=5.0.0)\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. Got {}'.format(type(img)))\n\n assert isinstance(translate, (tuple, list)) and len(translate) == 2, \\\n \"Argument translate should be a list or tuple of length 2\"\n\n assert scale > 0.0, \"Argument scale should be positive\"\n\n output_size = img.size\n center = (img.size[0] * 0.5 + 0.5, img.size[1] * 0.5 + 0.5)\n matrix = _get_inverse_affine_matrix(center, angle, translate, scale, shear)\n kwargs = {\"fillcolor\": fillcolor} if PILLOW_VERSION[0] >= '5' else {}\n return img.transform(output_size, Image.AFFINE, matrix, resample, **kwargs)\n\n\ndef to_grayscale(img, num_output_channels=1):\n \"\"\"Convert image to grayscale version of image.\n\n Args:\n img (PIL Image): Image to be converted to grayscale.\n\n Returns:\n PIL Image: Grayscale version of the image.\n if num_output_channels = 1 : returned image is single channel\n\n if num_output_channels = 3 : returned image is 3 channel with r = g = b\n \"\"\"\n if not _is_pil_image(img):\n raise TypeError('img should be PIL Image. 
Got {}'.format(type(img)))\n\n if num_output_channels == 1:\n img = img.convert('L')\n elif num_output_channels == 3:\n img = img.convert('L')\n np_img = np.array(img, dtype=np.uint8)\n np_img = np.dstack([np_img, np_img, np_img])\n img = Image.fromarray(np_img, 'RGB')\n else:\n raise ValueError('num_output_channels should be either 1 or 3')\n\n return img\n\n\ndef erase(img, i, j, h, w, v, inplace=False):\n \"\"\" Erase the input Tensor Image with given value.\n\n Args:\n img (Tensor Image): Tensor image of size (C, H, W) to be erased\n i (int): i in (i,j) i.e coordinates of the upper left corner.\n j (int): j in (i,j) i.e coordinates of the upper left corner.\n h (int): Height of the erased region.\n w (int): Width of the erased region.\n v: Erasing value.\n inplace(bool, optional): For in-place operations. By default is set False.\n\n Returns:\n Tensor Image: Erased image.\n \"\"\"\n if not isinstance(img, torch.Tensor):\n raise TypeError('img should be Tensor Image. Got {}'.format(type(img)))\n\n if not inplace:\n img = img.clone()\n\n img[:, i:i + h, j:j + w] = v\n return img\n",
"path": "torchvision/transforms/functional.py"
}
] | diff --git a/torchvision/transforms/functional.py b/torchvision/transforms/functional.py
index 6f43d5d263f..a8fdbef86bf 100644
--- a/torchvision/transforms/functional.py
+++ b/torchvision/transforms/functional.py
@@ -2,7 +2,7 @@
import torch
import sys
import math
-from PIL import Image, ImageOps, ImageEnhance, PILLOW_VERSION
+from PIL import Image, ImageOps, ImageEnhance, __version__ as PILLOW_VERSION
try:
import accimage
except ImportError:
|
bridgecrewio__checkov-1497 | checkov fails with junit-xml==1.8
**Describe the bug**
checkov fails with junit-xml==1.8
**To Reproduce**
Steps to reproduce the behavior:
1. pip3 install junit-xml==1.8
2. checkov -d .
3. See error:
```
Traceback (most recent call last):
File "/usr/local/bin/checkov", line 2, in <module>
from checkov.main import run
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/main.py", line 12, in <module>
from checkov.arm.runner import Runner as arm_runner
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/arm/runner.py", line 7, in <module>
from checkov.common.output.report import Report
File "/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/checkov/common/output/report.py", line 5, in <module>
from junit_xml import TestCase, TestSuite, to_xml_report_string
ImportError: cannot import name 'to_xml_report_string' from 'junit_xml' (/opt/rh/rh-python38/root/usr/local/lib/python3.8/site-packages/junit_xml/__init__.py)
```
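For context, the missing name is a module-level helper that only newer `junit-xml` releases provide; a quick probe of the installed package (a hypothetical check, not part of checkov itself) makes the mismatch visible:
```python
# Probe the installed junit-xml for the symbol checkov tries to import.
import junit_xml

# True on junit-xml>=1.9; False on 1.8, which is why the import above fails.
print(hasattr(junit_xml, "to_xml_report_string"))
```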
**Expected behavior**
checkov runs fine with junit-xml==1.9, so a reasonable fix would be to pin that version (or greater) in setup.py's `install_requires`.
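A sketch of what that pin could look like in `setup.py` (only the relevant entry is shown; the surrounding requirements stay as they are):
```python
from setuptools import setup

setup(
    name="checkov",
    # ... other arguments unchanged ...
    install_requires=[
        # ... other requirements unchanged ...
        "junit-xml>=1.9",  # pin to a release that provides to_xml_report_string
    ],
)
```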
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: RHEL 7
- Checkov Version: 2.0.350
**Additional context**
Add any other context about the problem here (e.g. code snippets).
| [
{
"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\"\n },\n package_data={\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ]\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n ],\n)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\",\n \"jsonschema\",\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml>=1.9\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\",\n \"configargparse\",\n \"detect-secrets\",\n \"policyuniverse\",\n \"typing-extensions\",\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/bridgecrewio/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\", \"integration_tests*\"]),\n include_package_data=True,\n package_dir={\n \"checkov.terraform.checks.graph_checks\": \"checkov/terraform/checks/graph_checks\"\n },\n package_data={\n \"checkov.terraform.checks.graph_checks\": [\n \"aws/*.yaml\",\n \"gcp/*.yaml\",\n \"azure/*.yaml\",\n ]\n },\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Topic :: Security\",\n \"Topic :: Software Development :: Build Tools\",\n ],\n)\n",
"path": "setup.py"
}
] | diff --git a/Pipfile b/Pipfile
index ef3df7dbb1..c2f19fac07 100644
--- a/Pipfile
+++ b/Pipfile
@@ -23,7 +23,7 @@ deep_merge = "*"
tabulate = "*"
colorama="*"
termcolor="*"
-junit-xml ="*"
+junit-xml = ">=1.9"
dpath = ">=1.5.0,<2"
pyyaml = ">=5.4.1"
boto3 = "==1.17.*"
diff --git a/Pipfile.lock b/Pipfile.lock
index 0ba6bf46d4..4e5d5b33a4 100644
--- a/Pipfile.lock
+++ b/Pipfile.lock
@@ -1,7 +1,7 @@
{
"_meta": {
"hash": {
- "sha256": "8dded0accadc2382e9bf421a3643aa1a4eb0a7ced54bffdbcb0a8e0e5502f2ac"
+ "sha256": "59ae28dfc33196758545ef134178198dde9a1bbf23289701f45c74a5aac9efe4"
},
"pipfile-spec": 6,
"requires": {
@@ -183,11 +183,11 @@
},
"importlib-metadata": {
"hashes": [
- "sha256:0645585859e9a6689c523927a5032f2ba5919f1f7d0e84bd4533312320de1ff9",
- "sha256:51c6635429c77cf1ae634c997ff9e53ca3438b495f10a55ba28594dd69764a8b"
+ "sha256:7b30a78db2922d78a6f47fb30683156a14f3c6aa5cc23f77cc8967e9ab2d002f",
+ "sha256:ed5157fef23a4bc4594615a0dd8eba94b2bb36bf2a343fa3d8bb2fa0a62a99d5"
],
"index": "pypi",
- "version": "==4.6.3"
+ "version": "==4.6.4"
},
"jinja2": {
"hashes": [
@@ -233,30 +233,50 @@
"sha256:0446679737af14f45767963a1a9ef7620189912317d095f2d9ffa183a4d25d2b",
"sha256:0717a7390a68be14b8c793ba258e075c6f4ca819f15edfc2a3a027c823718567",
"sha256:0955295dd5eec6cb6cc2fe1698f4c6d84af2e92de33fbcac4111913cd100a6ff",
+ "sha256:0d4b31cc67ab36e3392bbf3862cfbadac3db12bdd8b02a2731f509ed5b829724",
"sha256:10f82115e21dc0dfec9ab5c0223652f7197feb168c940f3ef61563fc2d6beb74",
+ "sha256:168cd0a3642de83558a5153c8bd34f175a9a6e7f6dc6384b9655d2697312a646",
"sha256:1d609f577dc6e1aa17d746f8bd3c31aa4d258f4070d61b2aa5c4166c1539de35",
+ "sha256:1f2ade76b9903f39aa442b4aadd2177decb66525062db244b35d71d0ee8599b6",
+ "sha256:2a7d351cbd8cfeb19ca00de495e224dea7e7d919659c2841bbb7f420ad03e2d6",
+ "sha256:2d7d807855b419fc2ed3e631034685db6079889a1f01d5d9dac950f764da3dad",
"sha256:2ef54abee730b502252bcdf31b10dacb0a416229b72c18b19e24a4509f273d26",
+ "sha256:36bc903cbb393720fad60fc28c10de6acf10dc6cc883f3e24ee4012371399a38",
+ "sha256:37205cac2a79194e3750b0af2a5720d95f786a55ce7df90c3af697bfa100eaac",
"sha256:3c112550557578c26af18a1ccc9e090bfe03832ae994343cfdacd287db6a6ae7",
+ "sha256:3dd007d54ee88b46be476e293f48c85048603f5f516008bee124ddd891398ed6",
"sha256:47ab1e7b91c098ab893b828deafa1203de86d0bc6ab587b160f78fe6c4011f75",
"sha256:49e3ceeabbfb9d66c3aef5af3a60cc43b85c33df25ce03d0031a608b0a8b2e3f",
"sha256:4efca8f86c54b22348a5467704e3fec767b2db12fc39c6d963168ab1d3fc9135",
"sha256:53edb4da6925ad13c07b6d26c2a852bd81e364f95301c66e930ab2aef5b5ddd8",
+ "sha256:5855f8438a7d1d458206a2466bf82b0f104a3724bf96a1c781ab731e4201731a",
"sha256:594c67807fb16238b30c44bdf74f36c02cdf22d1c8cda91ef8a0ed8dabf5620a",
+ "sha256:5bb28c636d87e840583ee3adeb78172efc47c8b26127267f54a9c0ec251d41a9",
+ "sha256:60bf42e36abfaf9aff1f50f52644b336d4f0a3fd6d8a60ca0d054ac9f713a864",
"sha256:611d1ad9a4288cf3e3c16014564df047fe08410e628f89805e475368bd304914",
"sha256:6557b31b5e2c9ddf0de32a691f2312a32f77cd7681d8af66c2692efdbef84c18",
"sha256:693ce3f9e70a6cf7d2fb9e6c9d8b204b6b39897a2c4a1aa65728d5ac97dcc1d8",
"sha256:6a7fae0dd14cf60ad5ff42baa2e95727c3d81ded453457771d02b7d2b3f9c0c2",
"sha256:6c4ca60fa24e85fe25b912b01e62cb969d69a23a5d5867682dd3e80b5b02581d",
+ "sha256:6fcf051089389abe060c9cd7caa212c707e58153afa2c649f00346ce6d260f1b",
"sha256:7d91275b0245b1da4d4cfa07e0faedd5b0812efc15b702576d103293e252af1b",
"sha256:905fec760bd2fa1388bb5b489ee8ee5f7291d692638ea5f67982d968366bef9f",
"sha256:97383d78eb34da7e1fa37dd273c20ad4320929af65d156e35a5e2d89566d9dfb",
"sha256:984d76483eb32f1bcb536dc27e4ad56bba4baa70be32fa87152832cdd9db0833",
+ "sha256:99df47edb6bda1249d3e80fdabb1dab8c08ef3975f69aed437cb69d0a5de1e28",
"sha256:a30e67a65b53ea0a5e62fe23682cfe22712e01f453b95233b25502f7c61cb415",
"sha256:ab3ef638ace319fa26553db0624c4699e31a28bb2a835c5faca8f8acf6a5a902",
+ "sha256:add36cb2dbb8b736611303cd3bfcee00afd96471b09cda130da3581cbdc56a6d",
"sha256:b2f4bf27480f5e5e8ce285a8c8fd176c0b03e93dcc6646477d4630e83440c6a9",
"sha256:b7f2d075102dc8c794cbde1947378051c4e5180d52d276987b8d28a3bd58c17d",
+ "sha256:baa1a4e8f868845af802979fcdbf0bb11f94f1cb7ced4c4b8a351bb60d108145",
"sha256:be98f628055368795d818ebf93da628541e10b75b41c559fdf36d104c5787066",
+ "sha256:bf5d821ffabf0ef3533c39c518f3357b171a1651c1ff6827325e4489b0e46c3c",
+ "sha256:c47adbc92fc1bb2b3274c4b3a43ae0e4573d9fbff4f54cd484555edbf030baf1",
"sha256:d7f9850398e85aba693bb640262d3611788b1f29a79f0c93c565694658f4071f",
+ "sha256:d8446c54dc28c01e5a2dbac5a25f071f6653e6e40f3a8818e8b45d790fe6ef53",
+ "sha256:e0f138900af21926a02425cf736db95be9f4af72ba1bb21453432a07f6082134",
+ "sha256:e9936f0b261d4df76ad22f8fee3ae83b60d7c3e871292cd42f40b81b70afae85",
"sha256:f5653a225f31e113b152e56f154ccbe59eeb1c7487b39b9d9f9cdb58e6c79dc5",
"sha256:f826e31d18b516f653fe296d967d700fddad5901ae07c622bb3705955e1faa94",
"sha256:f8ba0e8349a38d3001fae7eadded3f6606f0da5d748ee53cc1dab1d6527b9509",
@@ -420,11 +440,11 @@
},
"tqdm": {
"hashes": [
- "sha256:3642d483b558eec80d3c831e23953582c34d7e4540db86d9e5ed9dad238dabc6",
- "sha256:706dea48ee05ba16e936ee91cb3791cd2ea6da348a0e50b46863ff4363ff4340"
+ "sha256:07856e19a1fe4d2d9621b539d3f072fa88c9c1ef1f3b7dd4d4953383134c3164",
+ "sha256:35540feeaca9ac40c304e916729e6b78045cbbeccd3e941b2868f09306798ac9"
],
"index": "pypi",
- "version": "==4.62.0"
+ "version": "==4.62.1"
},
"typing-extensions": {
"hashes": [
@@ -453,11 +473,11 @@
},
"websocket-client": {
"hashes": [
- "sha256:4cf754af7e3b3ba76589d49f9e09fd9a6c0aae9b799a89124d656009c01a261d",
- "sha256:8d07f155f8ed14ae3ced97bd7582b08f280bb1bfd27945f023ba2aceff05ab52"
+ "sha256:0133d2f784858e59959ce82ddac316634229da55b498aac311f1620567a710ec",
+ "sha256:8dfb715d8a992f5712fff8c843adae94e22b22a99b2c5e6b0ec4a1a981cc4e0d"
],
"markers": "python_version >= '3.6'",
- "version": "==1.1.1"
+ "version": "==1.2.1"
},
"zipp": {
"hashes": [
diff --git a/setup.py b/setup.py
index d9f0cd62a1..991fef5940 100644
--- a/setup.py
+++ b/setup.py
@@ -39,7 +39,7 @@
"tabulate",
"colorama",
"termcolor",
- "junit-xml",
+ "junit-xml>=1.9",
"dpath>=1.5.0,<2",
"pyyaml>=5.4.1",
"boto3==1.17.*",
|
dbt-labs__dbt-core-5507 | [CT-876] Could we also now remove our upper bound on `MarkupSafe`, which we put in place earlier this year due to incompatibility with Jinja2?
Remove our upper bound on `MarkupSafe`, which we put in place earlier this year due to an incompatibility with Jinja2 (#4745). Also bump the minimum requirement to match [Jinja2's requirements](https://github.com/pallets/jinja/blob/1c4066a4fad5aaeb2ac55809d1d38477cd23a0f6/setup.py#L6).
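A sketch of the simplest change in `core/setup.py` (assuming we drop the explicit entry entirely and let Jinja2 constrain `MarkupSafe`; Jinja2 3.x itself declares `MarkupSafe>=2.0`):
```python
from setuptools import setup

setup(
    name="dbt-core",
    # ... other arguments unchanged ...
    install_requires=[
        "Jinja2==3.1.2",
        # "MarkupSafe>=0.23,<2.1" removed: Jinja2 3.1.2 already requires MarkupSafe>=2.0
        "agate>=1.6,<1.6.4",
        # ... other requirements unchanged ...
    ],
)
```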
| [
{
"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.3.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.0.3\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n",
"path": "core/setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.3.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro==2.9\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n",
"path": "core/setup.py"
}
] | diff --git a/.changes/unreleased/Dependencies-20220721-093233.yaml b/.changes/unreleased/Dependencies-20220721-093233.yaml
new file mode 100644
index 00000000000..f5c623e9581
--- /dev/null
+++ b/.changes/unreleased/Dependencies-20220721-093233.yaml
@@ -0,0 +1,7 @@
+kind: Dependencies
+body: Remove pin for MarkUpSafe from >=0.23,<2.1
+time: 2022-07-21T09:32:33.494002-05:00
+custom:
+ Author: emmyoop
+ Issue: "5506"
+ PR: "5507"
diff --git a/core/setup.py b/core/setup.py
index 2aa2340f10c..d8b415e1b0a 100644
--- a/core/setup.py
+++ b/core/setup.py
@@ -49,7 +49,6 @@
},
install_requires=[
"Jinja2==3.1.2",
- "MarkupSafe>=0.23,<2.1",
"agate>=1.6,<1.6.4",
"click>=7.0,<9",
"colorama>=0.3.9,<0.4.6",
|
microsoft__Qcodes-867 | missing dependency `jsonschema` in requirements.txt
The latest pip-installable version of QCoDeS does not list `jsonschema` as a dependency but requires it.
This problem came to light when running tests on a project that depends on QCoDeS: part of my build script installs qcodes (`pip install qcodes`), and importing qcodes then raises an exception because jsonschema is missing.
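A sketch of the corresponding fix in QCoDeS' `setup.py`, adding the missing runtime dependency to `install_requires` (the existing entries are kept as-is):
```python
from setuptools import setup, find_packages

setup(
    name='qcodes',
    # ... other arguments unchanged ...
    packages=find_packages(),
    install_requires=[
        'numpy>=1.10',
        'pyvisa>=1.8',
        'h5py>=2.6',
        'websockets>=3.2,<3.4',
        'jsonschema',  # missing runtime dependency; importing qcodes fails without it
    ],
)
```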
| [
{
"content": "from setuptools import setup, find_packages\nfrom distutils.version import StrictVersion\nfrom importlib import import_module\nimport re\n\ndef get_version(verbose=1):\n \"\"\" Extract version information from source code \"\"\"\n\n try:\n with open('qcodes/version.py', 'r') as f:\n ln = f.readline()\n # print(ln)\n m = re.search('.* ''(.*)''', ln)\n version = (m.group(1)).strip('\\'')\n except Exception as E:\n print(E)\n version = 'none'\n if verbose:\n print('get_version: %s' % version)\n return version\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\nextras = {\n 'MatPlot': ('matplotlib', '2.0.2'),\n 'QtPlot': ('pyqtgraph', '0.10.0'),\n 'coverage tests': ('coverage', '4.0'),\n 'Slack': ('slacker', '0.9.42')\n}\nextras_require = {k: '>='.join(v) for k, v in extras.items()}\n\nsetup(name='qcodes',\n version=get_version(),\n use_2to3=False,\n\n maintainer='Jens H Nielsen',\n maintainer_email='[email protected]',\n description='Python-based data acquisition framework developed by the '\n 'Copenhagen / Delft / Sydney / Microsoft quantum computing '\n 'consortium',\n long_description=readme(),\n url='https://github.com/QCoDeS/Qcodes',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering'\n ],\n license='MIT',\n # if we want to install without tests:\n # packages=find_packages(exclude=[\"*.tests\", \"tests\"]),\n packages=find_packages(),\n package_data={'qcodes': ['monitor/dist/*', 'monitor/dist/js/*',\n 'monitor/dist/css/*', 'config/*.json']},\n install_requires=[\n 'numpy>=1.10',\n 'pyvisa>=1.8',\n 'h5py>=2.6',\n 'websockets>=3.2,<3.4'\n ],\n\n test_suite='qcodes.tests',\n extras_require=extras_require,\n\n # I think the only part of qcodes that would care about zip_safe\n # is utils.helpers.reload_code; users of a zip-installed package\n # shouldn't be needing to do this anyway, but we should test first.\n zip_safe=False)\n\nversion_template = '''\n*****\n***** package {0} must be at least version {1}.\n***** Please upgrade it (pip install -U {0} or conda install {0})\n***** in order to use {2}\n*****\n'''\n\nmissing_template = '''\n*****\n***** package {0} not found\n***** Please install it (pip install {0} or conda install {0})\n***** in order to use {1}\n*****\n'''\n\nvalueerror_template = '''\n*****\n***** package {0} version not understood\n***** Please make sure the installed version ({1})\n***** is compatible with the minimum required version ({2})\n***** in order to use {3}\n*****\n'''\n\n# now test the versions of extras\nfor extra, (module_name, min_version) in extras.items():\n try:\n module = import_module(module_name)\n if StrictVersion(module.__version__) < StrictVersion(min_version):\n print(version_template.format(module_name, min_version, extra))\n except ImportError:\n print(missing_template.format(module_name, extra))\n except ValueError:\n print(valueerror_template.format(\n module_name, module.__version__, min_version, extra))\n",
"path": "setup.py"
}
] | [
{
"content": "from setuptools import setup, find_packages\nfrom distutils.version import StrictVersion\nfrom importlib import import_module\nimport re\n\ndef get_version(verbose=1):\n \"\"\" Extract version information from source code \"\"\"\n\n try:\n with open('qcodes/version.py', 'r') as f:\n ln = f.readline()\n # print(ln)\n m = re.search('.* ''(.*)''', ln)\n version = (m.group(1)).strip('\\'')\n except Exception as E:\n print(E)\n version = 'none'\n if verbose:\n print('get_version: %s' % version)\n return version\n\n\ndef readme():\n with open('README.rst') as f:\n return f.read()\n\nextras = {\n 'MatPlot': ('matplotlib', '2.0.2'),\n 'QtPlot': ('pyqtgraph', '0.10.0'),\n 'coverage tests': ('coverage', '4.0'),\n 'Slack': ('slacker', '0.9.42')\n}\nextras_require = {k: '>='.join(v) for k, v in extras.items()}\n\nsetup(name='qcodes',\n version=get_version(),\n use_2to3=False,\n\n maintainer='Jens H Nielsen',\n maintainer_email='[email protected]',\n description='Python-based data acquisition framework developed by the '\n 'Copenhagen / Delft / Sydney / Microsoft quantum computing '\n 'consortium',\n long_description=readme(),\n url='https://github.com/QCoDeS/Qcodes',\n classifiers=[\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Science/Research',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Scientific/Engineering'\n ],\n license='MIT',\n # if we want to install without tests:\n # packages=find_packages(exclude=[\"*.tests\", \"tests\"]),\n packages=find_packages(),\n package_data={'qcodes': ['monitor/dist/*', 'monitor/dist/js/*',\n 'monitor/dist/css/*', 'config/*.json']},\n install_requires=[\n 'numpy>=1.10',\n 'pyvisa>=1.8',\n 'h5py>=2.6',\n 'websockets>=3.2,<3.4',\n 'jsonschema'\n ],\n\n test_suite='qcodes.tests',\n extras_require=extras_require,\n\n # I think the only part of qcodes that would care about zip_safe\n # is utils.helpers.reload_code; users of a zip-installed package\n # shouldn't be needing to do this anyway, but we should test first.\n zip_safe=False)\n\nversion_template = '''\n*****\n***** package {0} must be at least version {1}.\n***** Please upgrade it (pip install -U {0} or conda install {0})\n***** in order to use {2}\n*****\n'''\n\nmissing_template = '''\n*****\n***** package {0} not found\n***** Please install it (pip install {0} or conda install {0})\n***** in order to use {1}\n*****\n'''\n\nvalueerror_template = '''\n*****\n***** package {0} version not understood\n***** Please make sure the installed version ({1})\n***** is compatible with the minimum required version ({2})\n***** in order to use {3}\n*****\n'''\n\n# now test the versions of extras\nfor extra, (module_name, min_version) in extras.items():\n try:\n module = import_module(module_name)\n if StrictVersion(module.__version__) < StrictVersion(min_version):\n print(version_template.format(module_name, min_version, extra))\n except ImportError:\n print(missing_template.format(module_name, extra))\n except ValueError:\n print(valueerror_template.format(\n module_name, module.__version__, min_version, extra))\n",
"path": "setup.py"
}
] | diff --git a/docs_requirements.txt b/docs_requirements.txt
index a8abb189efe..e9b853631b8 100644
--- a/docs_requirements.txt
+++ b/docs_requirements.txt
@@ -1,6 +1,5 @@
sphinx
sphinx_rtd_theme
-jsonschema
sphinxcontrib-jsonschema
nbconvert
ipython
diff --git a/requirements.txt b/requirements.txt
index 32d5add6fb3..77f0dd3edbc 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
-numpy==1.13.1
+numpy==1.13.3
matplotlib==2.0.2
pyqtgraph==0.10.0
PyVISA==1.8
@@ -6,3 +6,4 @@ PyQt5==5.9
sip==4.19.3
QtPy==1.3.1
h5py==2.7.1
+jsonschema
diff --git a/setup.py b/setup.py
index d8ca9bf8632..2dcd4cef701 100644
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,8 @@ def readme():
'numpy>=1.10',
'pyvisa>=1.8',
'h5py>=2.6',
- 'websockets>=3.2,<3.4'
+ 'websockets>=3.2,<3.4',
+ 'jsonschema'
],
test_suite='qcodes.tests',
|
cupy__cupy-1944 | incorrect FFT results for Fortran-order arrays?
* Conditions (from `python -c 'import cupy; cupy.show_config()'`)
Tested in two environments with different CuPy versions:
```bash
CuPy Version : 4.4.1
CUDA Root : /usr/local/cuda
CUDA Build Version : 9010
CUDA Driver Version : 9010
CUDA Runtime Version : 9010
cuDNN Build Version : 7102
cuDNN Version : 7102
NCCL Build Version : 2115
```
and (this CuPy is built from the latest master branch)
```
CuPy Version : 6.0.0b1
CUDA Root : /usr/local/cuda
CUDA Build Version : 9010
CUDA Driver Version : 9010
CUDA Runtime Version : 9010
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
```
* Code to reproduce
```python
import numpy as np
import cupy as cp
AXES=[(0,), (1,), (2,), (0,1), (1,2), (0,2), (0,1,2)]
a_np = np.random.random((3,4,5))+1j*np.random.random((3,4,5))
print("In C order:")
a_np = np.ascontiguousarray(a_np)
a_cp = cp.asarray(a_np)
a_cp = cp.ascontiguousarray(a_cp)
assert np.allclose(cp.asnumpy(a_cp), a_np)
for axes in AXES:
result_np = np.fft.fftn(a_np, axes=axes)
result_cp = cp.fft.fftn(a_cp, axes=axes)
print(axes, ":", np.allclose(cp.asnumpy(result_cp), result_np))
print("\nIn F order:")
a_np = np.asfortranarray(a_np)
a_cp = cp.asarray(a_np)
a_cp = cp.asfortranarray(a_cp)
assert np.allclose(cp.asnumpy(a_cp), a_np)
for axes in AXES:
result_np = np.fft.fftn(a_np, axes=axes)
result_cp = cp.fft.fftn(a_cp, axes=axes)
print(axes, ":", np.allclose(cp.asnumpy(result_cp), result_np))
```
```
* Error messages, stack traces, or logs
The outputs from both environments are identical:
```bash
In C order:
(0,) : True
(1,) : True
(2,) : True
(0, 1) : True
(1, 2) : True
(0, 2) : True
(0, 1, 2) : True
In F order:
(0,) : False
(1,) : True
(2,) : False
(0, 1) : True
(1, 2) : False
(0, 2) : False
(0, 1, 2) : True
```
```
But it's expected to be `True` for all of the axes choices. It seems to me the bug was not introduced by the recent changes adding support for cuFFT plans (#1669, #1745, #1746) but by something much older. For now I have not tracked down the problem; I will update here if I find it. I hope I didn't do something stupid in the test...
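Until the root cause is found, a possible workaround (just a sketch, assuming the discrepancy is tied purely to the memory layout of the input, and reusing the names from the reproduction script above) is to hand cuFFT a C-ordered copy:
```python
# Forcing a C-contiguous copy sidesteps the F-order mismatches observed above.
result_cp = cp.fft.fftn(cp.ascontiguousarray(a_cp), axes=axes)
assert np.allclose(cp.asnumpy(result_cp), np.fft.fftn(a_np, axes=axes))
```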
Thanks.
| [
{
"content": "from copy import copy\n\nimport six\n\nimport numpy as np\n\nimport cupy\nfrom cupy.cuda import cufft\nfrom math import sqrt\nfrom cupy.fft import config\n\n\ndef _output_dtype(a, value_type):\n if value_type != 'R2C':\n if a.dtype in [np.float16, np.float32]:\n return np.complex64\n elif a.dtype not in [np.complex64, np.complex128]:\n return np.complex128\n else:\n if a.dtype in [np.complex64, np.complex128]:\n return a.real.dtype\n elif a.dtype == np.float16:\n return np.float32\n elif a.dtype not in [np.float32, np.float64]:\n return np.float64\n return a.dtype\n\n\ndef _convert_dtype(a, value_type):\n out_dtype = _output_dtype(a, value_type)\n return a.astype(out_dtype, copy=False)\n\n\ndef _cook_shape(a, s, axes, value_type, order='C'):\n if s is None or s == a.shape:\n return a\n if (value_type == 'C2R') and (s[-1] is not None):\n s = list(s)\n s[-1] = s[-1] // 2 + 1\n for sz, axis in zip(s, axes):\n if (sz is not None) and (sz != a.shape[axis]):\n shape = list(a.shape)\n if shape[axis] > sz:\n index = [slice(None)] * a.ndim\n index[axis] = slice(0, sz)\n a = a[index]\n else:\n index = [slice(None)] * a.ndim\n index[axis] = slice(0, shape[axis])\n shape[axis] = sz\n z = cupy.zeros(shape, a.dtype.char, order=order)\n z[index] = a\n a = z\n return a\n\n\ndef _convert_fft_type(a, value_type):\n if value_type == 'C2C' and a.dtype == np.complex64:\n return cufft.CUFFT_C2C\n elif value_type == 'R2C' and a.dtype == np.float32:\n return cufft.CUFFT_R2C\n elif value_type == 'C2R' and a.dtype == np.complex64:\n return cufft.CUFFT_C2R\n elif value_type == 'C2C' and a.dtype == np.complex128:\n return cufft.CUFFT_Z2Z\n elif value_type == 'R2C' and a.dtype == np.float64:\n return cufft.CUFFT_D2Z\n else:\n return cufft.CUFFT_Z2D\n\n\ndef _exec_fft(a, direction, value_type, norm, axis, overwrite_x,\n out_size=None, out=None):\n fft_type = _convert_fft_type(a, value_type)\n\n if axis % a.ndim != a.ndim - 1:\n a = a.swapaxes(axis, -1)\n\n if a.base is not None:\n a = a.copy()\n\n if out_size is None:\n out_size = a.shape[-1]\n\n batch = a.size // a.shape[-1]\n plan = cufft.Plan1d(out_size, fft_type, batch)\n if overwrite_x and value_type == 'C2C':\n out = a\n elif out is not None:\n # verify that out has the expected shape and dtype\n plan.check_output_array(a, out)\n else:\n out = plan.get_output_array(a)\n plan.fft(a, out, direction)\n\n sz = out.shape[-1]\n if fft_type == cufft.CUFFT_R2C or fft_type == cufft.CUFFT_D2Z:\n sz = a.shape[-1]\n if norm is None:\n if direction == cufft.CUFFT_INVERSE:\n out /= sz\n else:\n out /= sqrt(sz)\n\n if axis % a.ndim != a.ndim - 1:\n out = out.swapaxes(axis, -1)\n\n return out\n\n\ndef _fft_c2c(a, direction, norm, axes, overwrite_x):\n for axis in axes:\n a = _exec_fft(a, direction, 'C2C', norm, axis, overwrite_x)\n return a\n\n\ndef _fft(a, s, axes, norm, direction, value_type='C2C', overwrite_x=False):\n if norm not in (None, 'ortho'):\n raise ValueError('Invalid norm value %s, should be None or \\\"ortho\\\".'\n % norm)\n\n if s is not None:\n for n in s:\n if (n is not None) and (n < 1):\n raise ValueError(\n \"Invalid number of FFT data points (%d) specified.\" % n)\n\n if (s is not None) and (axes is not None) and len(s) != len(axes):\n raise ValueError(\"Shape and axes have different lengths.\")\n\n a = _convert_dtype(a, value_type)\n if axes is None:\n if s is None:\n dim = a.ndim\n else:\n dim = len(s)\n axes = [i for i in six.moves.range(-dim, 0)]\n a = _cook_shape(a, s, axes, value_type)\n\n if value_type == 'C2C':\n a = _fft_c2c(a, 
direction, norm, axes, overwrite_x)\n elif value_type == 'R2C':\n a = _exec_fft(a, direction, value_type, norm, axes[-1], overwrite_x)\n a = _fft_c2c(a, direction, norm, axes[:-1], overwrite_x)\n else:\n a = _fft_c2c(a, direction, norm, axes[:-1], overwrite_x)\n if (s is None) or (s[-1] is None):\n out_size = a.shape[axes[-1]] * 2 - 2\n else:\n out_size = s[-1]\n a = _exec_fft(a, direction, value_type, norm, axes[-1], overwrite_x,\n out_size)\n\n return a\n\n\ndef get_cufft_plan_nd(shape, fft_type, axes=None, order='C'):\n \"\"\"Generate a CUDA FFT plan for transforming up to three axes.\n\n Args:\n shape (tuple of int): The shape of the array to transform\n fft_type ({cufft.CUFFT_C2C, cufft.CUFFT_Z2Z}): The FFT type to perform.\n Currently only complex-to-complex transforms are supported.\n axes (None or int or tuple of int): The axes of the array to\n transform. Currently, these must be a set of up to three adjacent\n axes and must include either the first or the last axis of the\n array. If `None`, it is assumed that all axes are transformed.\n order ({'C', 'F'}): Specify whether the data to be transformed has C or\n Fortran ordered data layout.\n\n Returns:\n plan (cufft.PlanNd): The CUFFT Plan. This can be used with\n cufft.fft.fftn or cufft.fft.ifftn.\n \"\"\"\n ndim = len(shape)\n\n if fft_type not in [cufft.CUFFT_C2C, cufft.CUFFT_Z2Z]:\n raise NotImplementedError(\n \"Only cufft.CUFFT_C2C and cufft.CUFFT_Z2Z are supported.\")\n\n if axes is None:\n # transform over all axes\n fft_axes = tuple(range(ndim))\n else:\n if np.isscalar(axes):\n axes = (axes, )\n axes = tuple(axes)\n\n if np.min(axes) < -ndim or np.max(axes) > ndim - 1:\n raise ValueError(\"The specified axes exceed the array dimensions.\")\n\n # sort the provided axes in ascending order\n fft_axes = tuple(sorted(np.mod(axes, ndim)))\n\n # make sure the specified axes meet the expectations made below\n if not np.all(np.diff(fft_axes) == 1):\n raise ValueError(\n \"The axes to be transformed must be contiguous and repeated \"\n \"axes are not allowed.\")\n if (0 not in fft_axes) and ((ndim - 1) not in fft_axes):\n raise ValueError(\n \"Either the first or the last axis of the array must be in \"\n \"axes.\")\n\n if len(fft_axes) < 1 or len(fft_axes) > 3:\n raise ValueError(\n (\"CUFFT can only transform along 1, 2 or 3 axes, but {} axes were \"\n \"specified.\").format(len(fft_axes)))\n\n if order not in ['C', 'F']:\n raise ValueError(\"order must be 'C' or 'F'\")\n\n \"\"\"\n For full details on idist, istride, iembed, etc. 
see:\n http://docs.nvidia.com/cuda/cufft/index.html#advanced-data-layout\n\n in 1D:\n input[b * idist + x * istride]\n output[b * odist + x * ostride]\n\n in 2D:\n input[b * idist + (x * inembed[1] + y) * istride]\n output[b * odist + (x * onembed[1] + y) * ostride]\n\n in 3D:\n input[b * idist + ((x * inembed[1] + y) * inembed[2] + z) * istride]\n output[b * odist + ((x * onembed[1] + y) * onembed[2] + z) * ostride]\n \"\"\"\n if fft_axes == tuple(np.arange(ndim)):\n # tranfsorm over all axes\n plan_dimensions = copy(shape)\n if order == 'F':\n plan_dimensions = plan_dimensions[::-1]\n idist = np.intp(np.prod(shape))\n odist = np.intp(np.prod(shape))\n istride = ostride = 1\n inembed = onembed = None\n nbatch = 1\n else:\n plan_dimensions = []\n for d in range(ndim):\n if d in fft_axes:\n plan_dimensions.append(shape[d])\n plan_dimensions = tuple(plan_dimensions)\n if order == 'F':\n plan_dimensions = plan_dimensions[::-1]\n inembed = tuple(np.asarray(plan_dimensions, dtype=int))\n onembed = tuple(np.asarray(plan_dimensions, dtype=int))\n if 0 not in fft_axes:\n # don't FFT along the first min_axis_fft axes\n min_axis_fft = np.min(fft_axes)\n nbatch = np.prod(shape[:min_axis_fft])\n if order == 'C':\n # C-ordered GPU array with batch along first dim\n idist = np.prod(plan_dimensions)\n odist = np.prod(plan_dimensions)\n istride = 1\n ostride = 1\n elif order == 'F':\n # F-ordered GPU array with batch along first dim\n idist = 1\n odist = 1\n istride = nbatch\n ostride = nbatch\n elif (ndim - 1) not in fft_axes:\n # don't FFT along the last axis\n num_axes_batch = ndim - len(fft_axes)\n nbatch = np.prod(shape[-num_axes_batch:])\n if order == 'C':\n # C-ordered GPU array with batch along last dim\n idist = 1\n odist = 1\n istride = nbatch\n ostride = nbatch\n elif order == 'F':\n # F-ordered GPU array with batch along last dim\n idist = np.prod(plan_dimensions)\n odist = np.prod(plan_dimensions)\n istride = 1\n ostride = 1\n else:\n raise ValueError(\n \"General subsets of FFT axes not currently supported for \"\n \"GPU case (Can only batch FFT over the first or last \"\n \"spatial axes).\")\n\n plan = cufft.PlanNd(shape=plan_dimensions,\n istride=istride,\n ostride=ostride,\n inembed=inembed,\n onembed=onembed,\n idist=idist,\n odist=odist,\n fft_type=fft_type,\n batch=nbatch)\n return plan\n\n\ndef _exec_fftn(a, direction, value_type, norm, axes, overwrite_x,\n plan=None, out=None):\n\n fft_type = _convert_fft_type(a, value_type)\n if fft_type not in [cufft.CUFFT_C2C, cufft.CUFFT_Z2Z]:\n raise NotImplementedError(\"Only C2C and Z2Z are supported.\")\n\n if a.base is not None:\n a = a.copy()\n\n if a.flags.c_contiguous:\n order = 'C'\n elif a.flags.f_contiguous:\n order = 'F'\n else:\n raise ValueError(\"a must be contiguous\")\n\n if plan is None:\n # generate a plan\n plan = get_cufft_plan_nd(a.shape, fft_type, axes=axes, order=order)\n else:\n if not isinstance(plan, cufft.PlanNd):\n raise ValueError(\"expected plan to have type cufft.PlanNd\")\n if a.flags.c_contiguous:\n expected_shape = tuple(a.shape[ax] for ax in axes)\n else:\n # plan.shape will be reversed for Fortran-ordered inputs\n expected_shape = tuple(a.shape[ax] for ax in axes[::-1])\n if expected_shape != plan.shape:\n raise ValueError(\n \"The CUFFT plan and a.shape do not match: \"\n \"plan.shape = {}, expected_shape={}, a.shape = {}\".format(\n plan.shape, expected_shape, a.shape))\n if fft_type != plan.fft_type:\n raise ValueError(\"CUFFT plan dtype mismatch.\")\n # TODO: also check the strides and axes of the 
plan?\n\n if overwrite_x and value_type == 'C2C':\n out = a\n elif out is None:\n out = plan.get_output_array(a, order=order)\n else:\n plan.check_output_array(a, out)\n plan.fft(a, out, direction)\n\n # normalize by the product of the shape along the transformed axes\n sz = np.prod([out.shape[ax] for ax in axes])\n if norm is None:\n if direction == cufft.CUFFT_INVERSE:\n out /= sz\n else:\n out /= sqrt(sz)\n\n return out\n\n\ndef _fftn(a, s, axes, norm, direction, value_type='C2C', order='A', plan=None,\n overwrite_x=False, out=None):\n if norm not in (None, 'ortho'):\n raise ValueError('Invalid norm value %s, should be None or \\\"ortho\\\".'\n % norm)\n\n a = _convert_dtype(a, value_type)\n if axes is None:\n dim = a.ndim\n axes = [i for i in six.moves.range(-dim, 0)]\n axes = tuple(axes)\n\n if (s is not None) and len(s) != len(axes):\n raise ValueError(\"Shape and axes have different lengths.\")\n\n # sort the provided axes in ascending order\n axes = tuple(sorted(np.mod(axes, a.ndim)))\n\n if order == 'A':\n if a.flags.f_contiguous:\n order = 'F'\n elif a.flags.c_contiguous:\n order = 'C'\n else:\n a = cupy.ascontiguousarray(a)\n order = 'C'\n elif order not in ['C', 'F']:\n raise ValueError(\"Unsupported order: {}\".format(order))\n\n a = _cook_shape(a, s, axes, value_type, order=order)\n if order == 'C' and not a.flags.c_contiguous:\n a = cupy.ascontiguousarray(a)\n elif order == 'F' and not a.flags.f_contiguous:\n a = cupy.asfortranarray(a)\n\n a = _exec_fftn(a, direction, value_type, norm=norm, axes=axes,\n overwrite_x=overwrite_x, plan=plan, out=out)\n return a\n\n\ndef _default_plan_type(a, s=None, axes=None):\n \"\"\"Determine whether to use separable 1d planning or nd planning.\"\"\"\n ndim = a.ndim\n if ndim == 1 or not config.enable_nd_planning:\n return '1d'\n\n if axes is None:\n if s is None:\n dim = ndim\n else:\n dim = len(s)\n axes = tuple([i % ndim for i in six.moves.range(-dim, 0)])\n else:\n # sort the provided axes in ascending order\n axes = tuple(sorted([i % ndim for i in axes]))\n\n if len(axes) == 1:\n # use Plan1d to transform a single axis\n return '1d'\n if len(axes) > 3 or not (np.all(np.diff(sorted(axes)) == 1)):\n # PlanNd supports 1d, 2d or 3d transforms over contiguous axes\n return '1d'\n if (0 not in axes) and ((ndim - 1) not in axes):\n # PlanNd only possible if the first or last axis is in axes.\n return '1d'\n return 'nd'\n\n\ndef _default_fft_func(a, s=None, axes=None):\n plan_type = _default_plan_type(a, s, axes)\n if plan_type == 'nd':\n return _fftn\n else:\n return _fft\n\n\ndef fft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. If ``n``\n is not given, the length of the input along the axis specified by\n ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cupy.cuda.cufft.CUFFT_FORWARD)\n\n\ndef ifft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. 
If ``n``\n is not given, the length of the input along the axis specified by\n ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.ifft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_INVERSE)\n\n\ndef fft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fft2`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_FORWARD)\n\n\ndef ifft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.ifft2`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_INVERSE)\n\n\ndef fftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fftn`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_FORWARD)\n\n\ndef ifftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. 
seealso:: :func:`numpy.fft.ifftn`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_INVERSE)\n\n\ndef rfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Number of points along transformation axis in the\n input to use. If ``n`` is not given, the length of the input along\n the axis specified by ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. The length of the\n transformed axis is ``n//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. For\n ``n`` output points, ``n//2+1`` input points are necessary. If\n ``n`` is not given, it is determined from the length of the input\n along the axis specified by ``axis``.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. If ``n`` is not\n given, the length of the transformed axis is`2*(m-1)` where `m`\n is the length of the transformed axis of the input.\n\n .. seealso:: :func:`numpy.fft.irfft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef rfft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape to use from the input. If ``s`` is not\n given, the lengths of the input along the axes specified by\n ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. The length of the\n last axis transformed will be ``s[-1]//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfft2`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the output. If ``s`` is not given,\n they are determined from the lengths of the input along the axes\n specified by ``axes``.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. If ``s`` is not\n given, the length of final transformed axis of output will be\n `2*(m-1)` where `m` is the length of the final transformed axis of\n the input.\n\n .. 
seealso:: :func:`numpy.fft.irfft2`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef rfftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape to use from the input. If ``s`` is not\n given, the lengths of the input along the axes specified by\n ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. The length of the\n last axis transformed will be ``s[-1]//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfftn`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the output. If ``s`` is not given,\n they are determined from the lengths of the input along the axes\n specified by ``axes``.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. If ``s`` is not\n given, the length of final transformed axis of output will be\n ``2*(m-1)`` where `m` is the length of the final transformed axis\n of the input.\n\n .. seealso:: :func:`numpy.fft.irfftn`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef hfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. For\n ``n`` output points, ``n//2+1`` input points are necessary. If\n ``n`` is not given, it is determined from the length of the input\n along the axis specified by ``axis``.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. If ``n`` is not\n given, the length of the transformed axis is ``2*(m-1)`` where `m`\n is the length of the transformed axis of the input.\n\n .. seealso:: :func:`numpy.fft.hfft`\n \"\"\"\n a = irfft(a.conj(), n, axis)\n return a * (a.shape[axis] if norm is None else\n cupy.sqrt(a.shape[axis], dtype=a.dtype))\n\n\ndef ihfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Number of points along transformation axis in the\n input to use. If ``n`` is not given, the length of the input along\n the axis specified by ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. The length of the\n transformed axis is ``n//2+1``.\n\n .. 
seealso:: :func:`numpy.fft.ihfft`\n \"\"\"\n if n is None:\n n = a.shape[axis]\n return rfft(a, n, axis, norm).conj() / (n if norm is None else 1)\n\n\ndef fftfreq(n, d=1.0):\n \"\"\"Return the FFT sample frequencies.\n\n Args:\n n (int): Window length.\n d (scalar): Sample spacing.\n\n Returns:\n cupy.ndarray: Array of length ``n`` containing the sample frequencies.\n\n .. seealso:: :func:`numpy.fft.fftfreq`\n \"\"\"\n return cupy.hstack((cupy.arange(0, (n - 1) // 2 + 1, dtype=np.float64),\n cupy.arange(-(n // 2), 0, dtype=np.float64))) / n / d\n\n\ndef rfftfreq(n, d=1.0):\n \"\"\"Return the FFT sample frequencies for real input.\n\n Args:\n n (int): Window length.\n d (scalar): Sample spacing.\n\n Returns:\n cupy.ndarray:\n Array of length ``n//2+1`` containing the sample frequencies.\n\n .. seealso:: :func:`numpy.fft.rfftfreq`\n \"\"\"\n return cupy.arange(0, n // 2 + 1, dtype=np.float64) / n / d\n\n\ndef fftshift(x, axes=None):\n \"\"\"Shift the zero-frequency component to the center of the spectrum.\n\n Args:\n x (cupy.ndarray): Input array.\n axes (int or tuple of ints): Axes over which to shift. Default is\n ``None``, which shifts all axes.\n\n Returns:\n cupy.ndarray: The shifted array.\n\n .. seealso:: :func:`numpy.fft.fftshift`\n \"\"\"\n x = cupy.asarray(x)\n if axes is None:\n axes = list(six.moves.range(x.ndim))\n elif isinstance(axes, np.compat.integer_types):\n axes = (axes,)\n for axis in axes:\n x = cupy.roll(x, x.shape[axis] // 2, axis)\n return x\n\n\ndef ifftshift(x, axes=None):\n \"\"\"The inverse of :meth:`fftshift`.\n\n Args:\n x (cupy.ndarray): Input array.\n axes (int or tuple of ints): Axes over which to shift. Default is\n ``None``, which shifts all axes.\n\n Returns:\n cupy.ndarray: The shifted array.\n\n .. seealso:: :func:`numpy.fft.ifftshift`\n \"\"\"\n x = cupy.asarray(x)\n if axes is None:\n axes = list(six.moves.range(x.ndim))\n elif isinstance(axes, np.compat.integer_types):\n axes = (axes,)\n for axis in axes:\n x = cupy.roll(x, -(x.shape[axis] // 2), axis)\n return x\n",
"path": "cupy/fft/fft.py"
}
] | [
{
"content": "from copy import copy\n\nimport six\n\nimport numpy as np\n\nimport cupy\nfrom cupy.cuda import cufft\nfrom math import sqrt\nfrom cupy.fft import config\n\n\ndef _output_dtype(a, value_type):\n if value_type != 'R2C':\n if a.dtype in [np.float16, np.float32]:\n return np.complex64\n elif a.dtype not in [np.complex64, np.complex128]:\n return np.complex128\n else:\n if a.dtype in [np.complex64, np.complex128]:\n return a.real.dtype\n elif a.dtype == np.float16:\n return np.float32\n elif a.dtype not in [np.float32, np.float64]:\n return np.float64\n return a.dtype\n\n\ndef _convert_dtype(a, value_type):\n out_dtype = _output_dtype(a, value_type)\n return a.astype(out_dtype, copy=False)\n\n\ndef _cook_shape(a, s, axes, value_type, order='C'):\n if s is None or s == a.shape:\n return a\n if (value_type == 'C2R') and (s[-1] is not None):\n s = list(s)\n s[-1] = s[-1] // 2 + 1\n for sz, axis in zip(s, axes):\n if (sz is not None) and (sz != a.shape[axis]):\n shape = list(a.shape)\n if shape[axis] > sz:\n index = [slice(None)] * a.ndim\n index[axis] = slice(0, sz)\n a = a[index]\n else:\n index = [slice(None)] * a.ndim\n index[axis] = slice(0, shape[axis])\n shape[axis] = sz\n z = cupy.zeros(shape, a.dtype.char, order=order)\n z[index] = a\n a = z\n return a\n\n\ndef _convert_fft_type(a, value_type):\n if value_type == 'C2C' and a.dtype == np.complex64:\n return cufft.CUFFT_C2C\n elif value_type == 'R2C' and a.dtype == np.float32:\n return cufft.CUFFT_R2C\n elif value_type == 'C2R' and a.dtype == np.complex64:\n return cufft.CUFFT_C2R\n elif value_type == 'C2C' and a.dtype == np.complex128:\n return cufft.CUFFT_Z2Z\n elif value_type == 'R2C' and a.dtype == np.float64:\n return cufft.CUFFT_D2Z\n else:\n return cufft.CUFFT_Z2D\n\n\ndef _exec_fft(a, direction, value_type, norm, axis, overwrite_x,\n out_size=None, out=None):\n fft_type = _convert_fft_type(a, value_type)\n\n if axis % a.ndim != a.ndim - 1:\n a = a.swapaxes(axis, -1)\n\n if a.base is not None or not a.flags.c_contiguous:\n a = a.copy()\n\n if out_size is None:\n out_size = a.shape[-1]\n\n batch = a.size // a.shape[-1]\n plan = cufft.Plan1d(out_size, fft_type, batch)\n if overwrite_x and value_type == 'C2C':\n out = a\n elif out is not None:\n # verify that out has the expected shape and dtype\n plan.check_output_array(a, out)\n else:\n out = plan.get_output_array(a)\n plan.fft(a, out, direction)\n\n sz = out.shape[-1]\n if fft_type == cufft.CUFFT_R2C or fft_type == cufft.CUFFT_D2Z:\n sz = a.shape[-1]\n if norm is None:\n if direction == cufft.CUFFT_INVERSE:\n out /= sz\n else:\n out /= sqrt(sz)\n\n if axis % a.ndim != a.ndim - 1:\n out = out.swapaxes(axis, -1)\n\n return out\n\n\ndef _fft_c2c(a, direction, norm, axes, overwrite_x):\n for axis in axes:\n a = _exec_fft(a, direction, 'C2C', norm, axis, overwrite_x)\n return a\n\n\ndef _fft(a, s, axes, norm, direction, value_type='C2C', overwrite_x=False):\n if norm not in (None, 'ortho'):\n raise ValueError('Invalid norm value %s, should be None or \\\"ortho\\\".'\n % norm)\n\n if s is not None:\n for n in s:\n if (n is not None) and (n < 1):\n raise ValueError(\n \"Invalid number of FFT data points (%d) specified.\" % n)\n\n if (s is not None) and (axes is not None) and len(s) != len(axes):\n raise ValueError(\"Shape and axes have different lengths.\")\n\n a = _convert_dtype(a, value_type)\n if axes is None:\n if s is None:\n dim = a.ndim\n else:\n dim = len(s)\n axes = [i for i in six.moves.range(-dim, 0)]\n a = _cook_shape(a, s, axes, value_type)\n\n if value_type == 
'C2C':\n a = _fft_c2c(a, direction, norm, axes, overwrite_x)\n elif value_type == 'R2C':\n a = _exec_fft(a, direction, value_type, norm, axes[-1], overwrite_x)\n a = _fft_c2c(a, direction, norm, axes[:-1], overwrite_x)\n else:\n a = _fft_c2c(a, direction, norm, axes[:-1], overwrite_x)\n if (s is None) or (s[-1] is None):\n out_size = a.shape[axes[-1]] * 2 - 2\n else:\n out_size = s[-1]\n a = _exec_fft(a, direction, value_type, norm, axes[-1], overwrite_x,\n out_size)\n\n return a\n\n\ndef get_cufft_plan_nd(shape, fft_type, axes=None, order='C'):\n \"\"\"Generate a CUDA FFT plan for transforming up to three axes.\n\n Args:\n shape (tuple of int): The shape of the array to transform\n fft_type ({cufft.CUFFT_C2C, cufft.CUFFT_Z2Z}): The FFT type to perform.\n Currently only complex-to-complex transforms are supported.\n axes (None or int or tuple of int): The axes of the array to\n transform. Currently, these must be a set of up to three adjacent\n axes and must include either the first or the last axis of the\n array. If `None`, it is assumed that all axes are transformed.\n order ({'C', 'F'}): Specify whether the data to be transformed has C or\n Fortran ordered data layout.\n\n Returns:\n plan (cufft.PlanNd): The CUFFT Plan. This can be used with\n cufft.fft.fftn or cufft.fft.ifftn.\n \"\"\"\n ndim = len(shape)\n\n if fft_type not in [cufft.CUFFT_C2C, cufft.CUFFT_Z2Z]:\n raise NotImplementedError(\n \"Only cufft.CUFFT_C2C and cufft.CUFFT_Z2Z are supported.\")\n\n if axes is None:\n # transform over all axes\n fft_axes = tuple(range(ndim))\n else:\n if np.isscalar(axes):\n axes = (axes, )\n axes = tuple(axes)\n\n if np.min(axes) < -ndim or np.max(axes) > ndim - 1:\n raise ValueError(\"The specified axes exceed the array dimensions.\")\n\n # sort the provided axes in ascending order\n fft_axes = tuple(sorted(np.mod(axes, ndim)))\n\n # make sure the specified axes meet the expectations made below\n if not np.all(np.diff(fft_axes) == 1):\n raise ValueError(\n \"The axes to be transformed must be contiguous and repeated \"\n \"axes are not allowed.\")\n if (0 not in fft_axes) and ((ndim - 1) not in fft_axes):\n raise ValueError(\n \"Either the first or the last axis of the array must be in \"\n \"axes.\")\n\n if len(fft_axes) < 1 or len(fft_axes) > 3:\n raise ValueError(\n (\"CUFFT can only transform along 1, 2 or 3 axes, but {} axes were \"\n \"specified.\").format(len(fft_axes)))\n\n if order not in ['C', 'F']:\n raise ValueError(\"order must be 'C' or 'F'\")\n\n \"\"\"\n For full details on idist, istride, iembed, etc. 
see:\n http://docs.nvidia.com/cuda/cufft/index.html#advanced-data-layout\n\n in 1D:\n input[b * idist + x * istride]\n output[b * odist + x * ostride]\n\n in 2D:\n input[b * idist + (x * inembed[1] + y) * istride]\n output[b * odist + (x * onembed[1] + y) * ostride]\n\n in 3D:\n input[b * idist + ((x * inembed[1] + y) * inembed[2] + z) * istride]\n output[b * odist + ((x * onembed[1] + y) * onembed[2] + z) * ostride]\n \"\"\"\n if fft_axes == tuple(np.arange(ndim)):\n # tranfsorm over all axes\n plan_dimensions = copy(shape)\n if order == 'F':\n plan_dimensions = plan_dimensions[::-1]\n idist = np.intp(np.prod(shape))\n odist = np.intp(np.prod(shape))\n istride = ostride = 1\n inembed = onembed = None\n nbatch = 1\n else:\n plan_dimensions = []\n for d in range(ndim):\n if d in fft_axes:\n plan_dimensions.append(shape[d])\n plan_dimensions = tuple(plan_dimensions)\n if order == 'F':\n plan_dimensions = plan_dimensions[::-1]\n inembed = tuple(np.asarray(plan_dimensions, dtype=int))\n onembed = tuple(np.asarray(plan_dimensions, dtype=int))\n if 0 not in fft_axes:\n # don't FFT along the first min_axis_fft axes\n min_axis_fft = np.min(fft_axes)\n nbatch = np.prod(shape[:min_axis_fft])\n if order == 'C':\n # C-ordered GPU array with batch along first dim\n idist = np.prod(plan_dimensions)\n odist = np.prod(plan_dimensions)\n istride = 1\n ostride = 1\n elif order == 'F':\n # F-ordered GPU array with batch along first dim\n idist = 1\n odist = 1\n istride = nbatch\n ostride = nbatch\n elif (ndim - 1) not in fft_axes:\n # don't FFT along the last axis\n num_axes_batch = ndim - len(fft_axes)\n nbatch = np.prod(shape[-num_axes_batch:])\n if order == 'C':\n # C-ordered GPU array with batch along last dim\n idist = 1\n odist = 1\n istride = nbatch\n ostride = nbatch\n elif order == 'F':\n # F-ordered GPU array with batch along last dim\n idist = np.prod(plan_dimensions)\n odist = np.prod(plan_dimensions)\n istride = 1\n ostride = 1\n else:\n raise ValueError(\n \"General subsets of FFT axes not currently supported for \"\n \"GPU case (Can only batch FFT over the first or last \"\n \"spatial axes).\")\n\n plan = cufft.PlanNd(shape=plan_dimensions,\n istride=istride,\n ostride=ostride,\n inembed=inembed,\n onembed=onembed,\n idist=idist,\n odist=odist,\n fft_type=fft_type,\n batch=nbatch)\n return plan\n\n\ndef _exec_fftn(a, direction, value_type, norm, axes, overwrite_x,\n plan=None, out=None):\n\n fft_type = _convert_fft_type(a, value_type)\n if fft_type not in [cufft.CUFFT_C2C, cufft.CUFFT_Z2Z]:\n raise NotImplementedError(\"Only C2C and Z2Z are supported.\")\n\n if a.base is not None:\n a = a.copy()\n\n if a.flags.c_contiguous:\n order = 'C'\n elif a.flags.f_contiguous:\n order = 'F'\n else:\n raise ValueError(\"a must be contiguous\")\n\n if plan is None:\n # generate a plan\n plan = get_cufft_plan_nd(a.shape, fft_type, axes=axes, order=order)\n else:\n if not isinstance(plan, cufft.PlanNd):\n raise ValueError(\"expected plan to have type cufft.PlanNd\")\n if a.flags.c_contiguous:\n expected_shape = tuple(a.shape[ax] for ax in axes)\n else:\n # plan.shape will be reversed for Fortran-ordered inputs\n expected_shape = tuple(a.shape[ax] for ax in axes[::-1])\n if expected_shape != plan.shape:\n raise ValueError(\n \"The CUFFT plan and a.shape do not match: \"\n \"plan.shape = {}, expected_shape={}, a.shape = {}\".format(\n plan.shape, expected_shape, a.shape))\n if fft_type != plan.fft_type:\n raise ValueError(\"CUFFT plan dtype mismatch.\")\n # TODO: also check the strides and axes of the 
plan?\n\n if overwrite_x and value_type == 'C2C':\n out = a\n elif out is None:\n out = plan.get_output_array(a, order=order)\n else:\n plan.check_output_array(a, out)\n plan.fft(a, out, direction)\n\n # normalize by the product of the shape along the transformed axes\n sz = np.prod([out.shape[ax] for ax in axes])\n if norm is None:\n if direction == cufft.CUFFT_INVERSE:\n out /= sz\n else:\n out /= sqrt(sz)\n\n return out\n\n\ndef _fftn(a, s, axes, norm, direction, value_type='C2C', order='A', plan=None,\n overwrite_x=False, out=None):\n if norm not in (None, 'ortho'):\n raise ValueError('Invalid norm value %s, should be None or \\\"ortho\\\".'\n % norm)\n\n a = _convert_dtype(a, value_type)\n if axes is None:\n dim = a.ndim\n axes = [i for i in six.moves.range(-dim, 0)]\n axes = tuple(axes)\n\n if (s is not None) and len(s) != len(axes):\n raise ValueError(\"Shape and axes have different lengths.\")\n\n # sort the provided axes in ascending order\n axes = tuple(sorted(np.mod(axes, a.ndim)))\n\n if order == 'A':\n if a.flags.f_contiguous:\n order = 'F'\n elif a.flags.c_contiguous:\n order = 'C'\n else:\n a = cupy.ascontiguousarray(a)\n order = 'C'\n elif order not in ['C', 'F']:\n raise ValueError(\"Unsupported order: {}\".format(order))\n\n a = _cook_shape(a, s, axes, value_type, order=order)\n if order == 'C' and not a.flags.c_contiguous:\n a = cupy.ascontiguousarray(a)\n elif order == 'F' and not a.flags.f_contiguous:\n a = cupy.asfortranarray(a)\n\n a = _exec_fftn(a, direction, value_type, norm=norm, axes=axes,\n overwrite_x=overwrite_x, plan=plan, out=out)\n return a\n\n\ndef _default_plan_type(a, s=None, axes=None):\n \"\"\"Determine whether to use separable 1d planning or nd planning.\"\"\"\n ndim = a.ndim\n if ndim == 1 or not config.enable_nd_planning:\n return '1d'\n\n if axes is None:\n if s is None:\n dim = ndim\n else:\n dim = len(s)\n axes = tuple([i % ndim for i in six.moves.range(-dim, 0)])\n else:\n # sort the provided axes in ascending order\n axes = tuple(sorted([i % ndim for i in axes]))\n\n if len(axes) == 1:\n # use Plan1d to transform a single axis\n return '1d'\n if len(axes) > 3 or not (np.all(np.diff(sorted(axes)) == 1)):\n # PlanNd supports 1d, 2d or 3d transforms over contiguous axes\n return '1d'\n if (0 not in axes) and ((ndim - 1) not in axes):\n # PlanNd only possible if the first or last axis is in axes.\n return '1d'\n return 'nd'\n\n\ndef _default_fft_func(a, s=None, axes=None):\n plan_type = _default_plan_type(a, s, axes)\n if plan_type == 'nd':\n return _fftn\n else:\n return _fft\n\n\ndef fft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. If ``n``\n is not given, the length of the input along the axis specified by\n ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cupy.cuda.cufft.CUFFT_FORWARD)\n\n\ndef ifft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. 
If ``n``\n is not given, the length of the input along the axis specified by\n ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.ifft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_INVERSE)\n\n\ndef fft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fft2`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_FORWARD)\n\n\ndef ifft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.ifft2`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_INVERSE)\n\n\ndef fftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. seealso:: :func:`numpy.fft.fftn`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_FORWARD)\n\n\ndef ifftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional inverse FFT.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the transformed axes of the\n output. If ``s`` is not given, the lengths of the input along the\n axes specified by ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other.\n\n .. 
seealso:: :func:`numpy.fft.ifftn`\n \"\"\"\n func = _default_fft_func(a, s, axes)\n return func(a, s, axes, norm, cufft.CUFFT_INVERSE)\n\n\ndef rfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Number of points along transformation axis in the\n input to use. If ``n`` is not given, the length of the input along\n the axis specified by ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. The length of the\n transformed axis is ``n//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the one-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. For\n ``n`` output points, ``n//2+1`` input points are necessary. If\n ``n`` is not given, it is determined from the length of the input\n along the axis specified by ``axis``.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. If ``n`` is not\n given, the length of the transformed axis is`2*(m-1)` where `m`\n is the length of the transformed axis of the input.\n\n .. seealso:: :func:`numpy.fft.irfft`\n \"\"\"\n return _fft(a, (n,), (axis,), norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef rfft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape to use from the input. If ``s`` is not\n given, the lengths of the input along the axes specified by\n ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. The length of the\n last axis transformed will be ``s[-1]//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfft2`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfft2(a, s=None, axes=(-2, -1), norm=None):\n \"\"\"Compute the two-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the output. If ``s`` is not given,\n they are determined from the lengths of the input along the axes\n specified by ``axes``.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. If ``s`` is not\n given, the length of final transformed axis of output will be\n `2*(m-1)` where `m` is the length of the final transformed axis of\n the input.\n\n .. 
seealso:: :func:`numpy.fft.irfft2`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef rfftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape to use from the input. If ``s`` is not\n given, the lengths of the input along the axes specified by\n ``axes`` are used.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. The length of the\n last axis transformed will be ``s[-1]//2+1``.\n\n .. seealso:: :func:`numpy.fft.rfftn`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_FORWARD, 'R2C')\n\n\ndef irfftn(a, s=None, axes=None, norm=None):\n \"\"\"Compute the N-dimensional inverse FFT for real input.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n s (None or tuple of ints): Shape of the output. If ``s`` is not given,\n they are determined from the lengths of the input along the axes\n specified by ``axes``.\n axes (tuple of ints): Axes over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``s`` and type\n will convert to complex if the input is other. If ``s`` is not\n given, the length of final transformed axis of output will be\n ``2*(m-1)`` where `m` is the length of the final transformed axis\n of the input.\n\n .. seealso:: :func:`numpy.fft.irfftn`\n \"\"\"\n return _fft(a, s, axes, norm, cufft.CUFFT_INVERSE, 'C2R')\n\n\ndef hfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Length of the transformed axis of the output. For\n ``n`` output points, ``n//2+1`` input points are necessary. If\n ``n`` is not given, it is determined from the length of the input\n along the axis specified by ``axis``.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. If ``n`` is not\n given, the length of the transformed axis is ``2*(m-1)`` where `m`\n is the length of the transformed axis of the input.\n\n .. seealso:: :func:`numpy.fft.hfft`\n \"\"\"\n a = irfft(a.conj(), n, axis)\n return a * (a.shape[axis] if norm is None else\n cupy.sqrt(a.shape[axis], dtype=a.dtype))\n\n\ndef ihfft(a, n=None, axis=-1, norm=None):\n \"\"\"Compute the FFT of a signal that has Hermitian symmetry.\n\n Args:\n a (cupy.ndarray): Array to be transform.\n n (None or int): Number of points along transformation axis in the\n input to use. If ``n`` is not given, the length of the input along\n the axis specified by ``axis`` is used.\n axis (int): Axis over which to compute the FFT.\n norm (None or ``\"ortho\"``): Keyword to specify the normalization mode.\n\n Returns:\n cupy.ndarray:\n The transformed array which shape is specified by ``n`` and type\n will convert to complex if the input is other. The length of the\n transformed axis is ``n//2+1``.\n\n .. 
seealso:: :func:`numpy.fft.ihfft`\n \"\"\"\n if n is None:\n n = a.shape[axis]\n return rfft(a, n, axis, norm).conj() / (n if norm is None else 1)\n\n\ndef fftfreq(n, d=1.0):\n \"\"\"Return the FFT sample frequencies.\n\n Args:\n n (int): Window length.\n d (scalar): Sample spacing.\n\n Returns:\n cupy.ndarray: Array of length ``n`` containing the sample frequencies.\n\n .. seealso:: :func:`numpy.fft.fftfreq`\n \"\"\"\n return cupy.hstack((cupy.arange(0, (n - 1) // 2 + 1, dtype=np.float64),\n cupy.arange(-(n // 2), 0, dtype=np.float64))) / n / d\n\n\ndef rfftfreq(n, d=1.0):\n \"\"\"Return the FFT sample frequencies for real input.\n\n Args:\n n (int): Window length.\n d (scalar): Sample spacing.\n\n Returns:\n cupy.ndarray:\n Array of length ``n//2+1`` containing the sample frequencies.\n\n .. seealso:: :func:`numpy.fft.rfftfreq`\n \"\"\"\n return cupy.arange(0, n // 2 + 1, dtype=np.float64) / n / d\n\n\ndef fftshift(x, axes=None):\n \"\"\"Shift the zero-frequency component to the center of the spectrum.\n\n Args:\n x (cupy.ndarray): Input array.\n axes (int or tuple of ints): Axes over which to shift. Default is\n ``None``, which shifts all axes.\n\n Returns:\n cupy.ndarray: The shifted array.\n\n .. seealso:: :func:`numpy.fft.fftshift`\n \"\"\"\n x = cupy.asarray(x)\n if axes is None:\n axes = list(six.moves.range(x.ndim))\n elif isinstance(axes, np.compat.integer_types):\n axes = (axes,)\n for axis in axes:\n x = cupy.roll(x, x.shape[axis] // 2, axis)\n return x\n\n\ndef ifftshift(x, axes=None):\n \"\"\"The inverse of :meth:`fftshift`.\n\n Args:\n x (cupy.ndarray): Input array.\n axes (int or tuple of ints): Axes over which to shift. Default is\n ``None``, which shifts all axes.\n\n Returns:\n cupy.ndarray: The shifted array.\n\n .. seealso:: :func:`numpy.fft.ifftshift`\n \"\"\"\n x = cupy.asarray(x)\n if axes is None:\n axes = list(six.moves.range(x.ndim))\n elif isinstance(axes, np.compat.integer_types):\n axes = (axes,)\n for axis in axes:\n x = cupy.roll(x, -(x.shape[axis] // 2), axis)\n return x\n",
"path": "cupy/fft/fft.py"
}
] | diff --git a/cupy/fft/fft.py b/cupy/fft/fft.py
index e72a4906f5c..cee323e6c2f 100644
--- a/cupy/fft/fft.py
+++ b/cupy/fft/fft.py
@@ -76,7 +76,7 @@ def _exec_fft(a, direction, value_type, norm, axis, overwrite_x,
if axis % a.ndim != a.ndim - 1:
a = a.swapaxes(axis, -1)
- if a.base is not None:
+ if a.base is not None or not a.flags.c_contiguous:
a = a.copy()
if out_size is None:
diff --git a/tests/cupy_tests/fft_tests/test_fft.py b/tests/cupy_tests/fft_tests/test_fft.py
index a93de2d2679..1ad15a6514d 100644
--- a/tests/cupy_tests/fft_tests/test_fft.py
+++ b/tests/cupy_tests/fft_tests/test_fft.py
@@ -84,6 +84,45 @@ def test_ifft(self, xp, dtype):
return out
+@testing.parameterize(*testing.product({
+ 'shape': [(10, 10), (10, 5, 10)],
+ 'data_order': ['F', 'C'],
+ 'axis': [0, 1, -1],
+}))
+@testing.gpu
+@testing.with_requires('numpy>=1.10.0')
+class TestFftOrder(unittest.TestCase):
+
+ @testing.for_all_dtypes()
+ @testing.numpy_cupy_allclose(rtol=1e-4, atol=1e-7, accept_error=ValueError,
+ contiguous_check=False)
+ def test_fft(self, xp, dtype):
+ a = testing.shaped_random(self.shape, xp, dtype)
+ if self.data_order == 'F':
+ a = xp.asfortranarray(a)
+ out = xp.fft.fft(a, axis=self.axis)
+
+ # np.fft.fft alway returns np.complex128
+ if xp == np and dtype in [np.float16, np.float32, np.complex64]:
+ out = out.astype(np.complex64)
+
+ return out
+
+ @testing.for_all_dtypes()
+ @testing.numpy_cupy_allclose(rtol=1e-4, atol=1e-7, accept_error=ValueError,
+ contiguous_check=False)
+ def test_ifft(self, xp, dtype):
+ a = testing.shaped_random(self.shape, xp, dtype)
+ if self.data_order == 'F':
+ a = xp.asfortranarray(a)
+ out = xp.fft.ifft(a, axis=self.axis)
+
+ if xp == np and dtype in [np.float16, np.float32, np.complex64]:
+ out = out.astype(np.complex64)
+
+ return out
+
+
@testing.gpu
class TestDefaultPlanType(unittest.TestCase):
|
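A quick note on the one-line change in `_exec_fft` above: the old `if a.base is not None` test only forced a copy for views, so a Fortran-ordered array that owns its memory could reach the batched `cufft.Plan1d` call in a layout the 1-D plan does not expect, which appears to be exactly the case the new `or not a.flags.c_contiguous` clause guards against. Below is a rough sketch of that situation, not part of the patch itself; it assumes a GPU with CuPy installed and uses an arbitrary shape:

```python
import cupy as cp

# A Fortran-ordered 2-D array is not C-contiguous, which is the case the
# added `or not a.flags.c_contiguous` clause catches before cufft.Plan1d
# is built over the last axis.
a = cp.asfortranarray(cp.arange(100.0).reshape(10, 10))
print(a.flags.c_contiguous)   # False
print(a.flags.f_contiguous)   # True

out = cp.fft.fft(a, axis=0)   # with the patch, `a` is copied to C order first
print(out.shape)              # (10, 10)
```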
privacyidea__privacyidea-1746 | Fix typo in registration token
The example of the registration token contains a typo.
The tokentype of course is "registration", not "register".
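For reference, a minimal sketch of what the corrected example request amounts to, using only the parameters and response fields from the docstring below; the host is a placeholder and admin authentication is omitted for brevity:

```python
import requests

# Hypothetical call illustrating the corrected parameter: the enrollment type
# is "registration" (the class type), not "register".
resp = requests.post(
    "https://pi.example.com/token/init",   # placeholder host
    data={"type": "registration", "user": "cornelius", "realm": "realm1"},
)
print(resp.json()["detail"]["registrationcode"])
```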
| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA\n# Aug 12, 2014 Cornelius Kölbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# 2015-01-29 Adapt during migration to flask\n# Cornelius Kölbel <[email protected]>\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n\"\"\"\nThis file contains the definition of the RegisterToken class.\n\nThe code is tested in test_lib_tokens_registration.py.\n\"\"\"\n\nimport logging\n\nfrom privacyidea.lib.utils import to_unicode\nfrom privacyidea.lib.tokens.passwordtoken import PasswordTokenClass\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.crypto import generate_password\nfrom privacyidea.lib.decorators import check_token_locked\nfrom privacyidea.lib import _\n\noptional = True\nrequired = False\n\nlog = logging.getLogger(__name__)\n\n\nclass RegistrationTokenClass(PasswordTokenClass):\n \"\"\"\n Token to implement a registration code.\n It can be used to create a registration code or a \"TAN\" which can be used\n once by a user to authenticate somewhere. After this registration code is\n used, the token is automatically deleted.\n\n The idea is to provide a workflow, where the user can get a registration code\n by e.g. postal mail and then use this code as the initial first factor to\n authenticate to the UI to enroll real tokens.\n\n A registration code can be created by an administrative task with the\n token/init api like this:\n\n **Example Authentication Request**:\n\n .. sourcecode:: http\n\n POST /token/init HTTP/1.1\n Host: example.com\n Accept: application/json\n\n type=register\n user=cornelius\n realm=realm1\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"detail\": {\n \"registrationcode\": \"12345808124095097608\"\n },\n \"id\": 1,\n \"jsonrpc\": \"2.0\",\n \"result\": {\n \"status\": true,\n \"value\": true\n },\n \"version\": \"privacyIDEA unknown\"\n }\n\n \"\"\"\n\n def __init__(self, aToken):\n PasswordTokenClass.__init__(self, aToken)\n self.hKeyRequired = False\n self.set_type(u\"registration\")\n self.otp_len = 24\n\n @staticmethod\n def get_class_type():\n return \"registration\"\n\n @staticmethod\n def get_class_prefix():\n return \"REG\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n returns a subtree of the token definition\n\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dict or scalar\n \"\"\"\n res = {'type': 'registration',\n 'title': 'Registration Code Token',\n 'description': _('Registration: A token that creates a '\n 'registration code that '\n 'can be used as a second factor once.'),\n 'init': {},\n 'config': {},\n 'user': [],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\"],\n 'policy': {},\n }\n\n if key:\n ret = res.get(key)\n else:\n if ret == 'all':\n ret = res\n return ret\n\n def update(self, param):\n \"\"\"\n This method is called during the initialization process.\n :param param: parameters from the token init\n :type param: dict\n :return: None\n \"\"\"\n if \"genkey\" in param:\n # We do not need the genkey! We generate anyway.\n # Otherwise genkey and otpkey will raise an exception in\n # PasswordTokenClass\n del param[\"genkey\"]\n param[\"otpkey\"] = generate_password(size=self.otp_len)\n PasswordTokenClass.update(self, param)\n\n @log_with(log, log_entry=False)\n @check_token_locked\n def inc_count_auth_success(self):\n \"\"\"\n Increase the counter, that counts successful authentications\n In case of successful authentication the token does needs to be deleted.\n \"\"\"\n self.delete_token()\n return 1\n\n @log_with(log)\n def get_init_detail(self, params=None, user=None):\n \"\"\"\n At the end of the initialization we return the registration code.\n \"\"\"\n response_detail = PasswordTokenClass.get_init_detail(self, params, user)\n params = params or {}\n secretHOtp = self.token.get_otpkey()\n registrationcode = secretHOtp.getKey()\n response_detail[\"registrationcode\"] = to_unicode(registrationcode)\n return response_detail\n",
"path": "privacyidea/lib/tokens/registrationtoken.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# privacyIDEA\n# Aug 12, 2014 Cornelius Kölbel\n# License: AGPLv3\n# contact: http://www.privacyidea.org\n#\n# 2015-01-29 Adapt during migration to flask\n# Cornelius Kölbel <[email protected]>\n#\n# This code is free software; you can redistribute it and/or\n# modify it under the terms of the GNU AFFERO GENERAL PUBLIC LICENSE\n# License as published by the Free Software Foundation; either\n# version 3 of the License, or any later version.\n#\n# This code is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU AFFERO GENERAL PUBLIC LICENSE for more details.\n#\n# You should have received a copy of the GNU Affero General Public\n# License along with this program. If not, see <http://www.gnu.org/licenses/>.\n#\n\"\"\"\nThis file contains the definition of the RegisterToken class.\n\nThe code is tested in test_lib_tokens_registration.py.\n\"\"\"\n\nimport logging\n\nfrom privacyidea.lib.utils import to_unicode\nfrom privacyidea.lib.tokens.passwordtoken import PasswordTokenClass\nfrom privacyidea.lib.log import log_with\nfrom privacyidea.lib.crypto import generate_password\nfrom privacyidea.lib.decorators import check_token_locked\nfrom privacyidea.lib import _\n\noptional = True\nrequired = False\n\nlog = logging.getLogger(__name__)\n\n\nclass RegistrationTokenClass(PasswordTokenClass):\n \"\"\"\n Token to implement a registration code.\n It can be used to create a registration code or a \"TAN\" which can be used\n once by a user to authenticate somewhere. After this registration code is\n used, the token is automatically deleted.\n\n The idea is to provide a workflow, where the user can get a registration code\n by e.g. postal mail and then use this code as the initial first factor to\n authenticate to the UI to enroll real tokens.\n\n A registration code can be created by an administrative task with the\n token/init api like this:\n\n **Example Authentication Request**:\n\n .. sourcecode:: http\n\n POST /token/init HTTP/1.1\n Host: example.com\n Accept: application/json\n\n type=registration\n user=cornelius\n realm=realm1\n\n **Example response**:\n\n .. 
sourcecode:: http\n\n HTTP/1.1 200 OK\n Content-Type: application/json\n\n {\n \"detail\": {\n \"registrationcode\": \"12345808124095097608\"\n },\n \"id\": 1,\n \"jsonrpc\": \"2.0\",\n \"result\": {\n \"status\": true,\n \"value\": true\n },\n \"version\": \"privacyIDEA unknown\"\n }\n\n \"\"\"\n\n def __init__(self, aToken):\n PasswordTokenClass.__init__(self, aToken)\n self.hKeyRequired = False\n self.set_type(u\"registration\")\n self.otp_len = 24\n\n @staticmethod\n def get_class_type():\n return \"registration\"\n\n @staticmethod\n def get_class_prefix():\n return \"REG\"\n\n @staticmethod\n @log_with(log)\n def get_class_info(key=None, ret='all'):\n \"\"\"\n returns a subtree of the token definition\n\n :param key: subsection identifier\n :type key: string\n :param ret: default return value, if nothing is found\n :type ret: user defined\n :return: subsection if key exists or user defined\n :rtype: dict or scalar\n \"\"\"\n res = {'type': 'registration',\n 'title': 'Registration Code Token',\n 'description': _('Registration: A token that creates a '\n 'registration code that '\n 'can be used as a second factor once.'),\n 'init': {},\n 'config': {},\n 'user': [],\n # This tokentype is enrollable in the UI for...\n 'ui_enroll': [\"admin\"],\n 'policy': {},\n }\n\n if key:\n ret = res.get(key)\n else:\n if ret == 'all':\n ret = res\n return ret\n\n def update(self, param):\n \"\"\"\n This method is called during the initialization process.\n :param param: parameters from the token init\n :type param: dict\n :return: None\n \"\"\"\n if \"genkey\" in param:\n # We do not need the genkey! We generate anyway.\n # Otherwise genkey and otpkey will raise an exception in\n # PasswordTokenClass\n del param[\"genkey\"]\n param[\"otpkey\"] = generate_password(size=self.otp_len)\n PasswordTokenClass.update(self, param)\n\n @log_with(log, log_entry=False)\n @check_token_locked\n def inc_count_auth_success(self):\n \"\"\"\n Increase the counter, that counts successful authentications\n In case of successful authentication the token does needs to be deleted.\n \"\"\"\n self.delete_token()\n return 1\n\n @log_with(log)\n def get_init_detail(self, params=None, user=None):\n \"\"\"\n At the end of the initialization we return the registration code.\n \"\"\"\n response_detail = PasswordTokenClass.get_init_detail(self, params, user)\n params = params or {}\n secretHOtp = self.token.get_otpkey()\n registrationcode = secretHOtp.getKey()\n response_detail[\"registrationcode\"] = to_unicode(registrationcode)\n return response_detail\n",
"path": "privacyidea/lib/tokens/registrationtoken.py"
}
] | diff --git a/privacyidea/lib/tokens/registrationtoken.py b/privacyidea/lib/tokens/registrationtoken.py
index 54beeb5ed4..8aa4df8c8f 100644
--- a/privacyidea/lib/tokens/registrationtoken.py
+++ b/privacyidea/lib/tokens/registrationtoken.py
@@ -64,7 +64,7 @@ class RegistrationTokenClass(PasswordTokenClass):
Host: example.com
Accept: application/json
- type=register
+ type=registration
user=cornelius
realm=realm1
|
getpelican__pelican-1507 | abbr support doesn't work for multiline
Eg:
``` rst
this is an :abbr:`TLA (Three Letter
Abbreviation)`
```
will output
`<abbr>TLA (Three Letter Abbreviation)</abbr>`
instead of
`<abbr title="Three Letter Abbreviation">TLA</abbr>`
I believe this could be fixed by adding the `re.M` flag to the `re.compile` call on this line: https://github.com/getpelican/pelican/blob/636fd6cc380f2537924532a587c70e96a386e25c/pelican/rstdirectives.py#L101
This refs ticket #395
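A minimal standalone check of the regex behaviour, using only the `_abbr_re` pattern from `pelican/rstdirectives.py` (shown below); the merged fix in the diff below adds `re.DOTALL` so that `.` can span the line break:

```python
import re

text = "TLA (Three Letter\nAbbreviation)"

# Default flags: '.' does not match '\n', so '\((.*)\)$' cannot span the
# line break and the role falls back to plain <abbr> output.
print(re.search(r'\((.*)\)$', text))  # None

# With re.DOTALL, '.' also matches '\n', so the explanation is captured
# and the abbreviation can be split off cleanly.
m = re.search(r'\((.*)\)$', text, re.DOTALL)
print(text[:m.start()].strip())  # 'TLA'
print(m.group(1))                # 'Three Letter\nAbbreviation' (newline included)
```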
| [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\n\nfrom docutils import nodes, utils\nfrom docutils.parsers.rst import directives, roles, Directive\nfrom pygments.formatters import HtmlFormatter\nfrom pygments import highlight\nfrom pygments.lexers import get_lexer_by_name, TextLexer\nimport re\nimport six\nimport pelican.settings as pys\n\n\nclass Pygments(Directive):\n \"\"\" Source code syntax highlighting.\n \"\"\"\n required_arguments = 1\n optional_arguments = 0\n final_argument_whitespace = True\n option_spec = {\n 'anchorlinenos': directives.flag,\n 'classprefix': directives.unchanged,\n 'hl_lines': directives.unchanged,\n 'lineanchors': directives.unchanged,\n 'linenos': directives.unchanged,\n 'linenospecial': directives.nonnegative_int,\n 'linenostart': directives.nonnegative_int,\n 'linenostep': directives.nonnegative_int,\n 'lineseparator': directives.unchanged,\n 'linespans': directives.unchanged,\n 'nobackground': directives.flag,\n 'nowrap': directives.flag,\n 'tagsfile': directives.unchanged,\n 'tagurlformat': directives.unchanged,\n }\n has_content = True\n\n def run(self):\n self.assert_has_content()\n try:\n lexer = get_lexer_by_name(self.arguments[0])\n except ValueError:\n # no lexer found - use the text one instead of an exception\n lexer = TextLexer()\n\n # Fetch the defaults\n if pys.PYGMENTS_RST_OPTIONS is not None:\n for k, v in six.iteritems(pys.PYGMENTS_RST_OPTIONS):\n # Locally set options overrides the defaults\n if k not in self.options:\n self.options[k] = v\n\n if ('linenos' in self.options and\n self.options['linenos'] not in ('table', 'inline')):\n if self.options['linenos'] == 'none':\n self.options.pop('linenos')\n else:\n self.options['linenos'] = 'table'\n\n for flag in ('nowrap', 'nobackground', 'anchorlinenos'):\n if flag in self.options:\n self.options[flag] = True\n\n # noclasses should already default to False, but just in case...\n formatter = HtmlFormatter(noclasses=False, **self.options)\n parsed = highlight('\\n'.join(self.content), lexer, formatter)\n return [nodes.raw('', parsed, format='html')]\n\ndirectives.register_directive('code-block', Pygments)\ndirectives.register_directive('sourcecode', Pygments)\n\n\n_abbr_re = re.compile('\\((.*)\\)$')\n\n\nclass abbreviation(nodes.Inline, nodes.TextElement):\n pass\n\n\ndef abbr_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):\n text = utils.unescape(text)\n m = _abbr_re.search(text)\n if m is None:\n return [abbreviation(text, text)], []\n abbr = text[:m.start()].strip()\n expl = m.group(1)\n return [abbreviation(abbr, abbr, explanation=expl)], []\n\nroles.register_local_role('abbr', abbr_role)\n",
"path": "pelican/rstdirectives.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals, print_function\n\nfrom docutils import nodes, utils\nfrom docutils.parsers.rst import directives, roles, Directive\nfrom pygments.formatters import HtmlFormatter\nfrom pygments import highlight\nfrom pygments.lexers import get_lexer_by_name, TextLexer\nimport re\nimport six\nimport pelican.settings as pys\n\n\nclass Pygments(Directive):\n \"\"\" Source code syntax highlighting.\n \"\"\"\n required_arguments = 1\n optional_arguments = 0\n final_argument_whitespace = True\n option_spec = {\n 'anchorlinenos': directives.flag,\n 'classprefix': directives.unchanged,\n 'hl_lines': directives.unchanged,\n 'lineanchors': directives.unchanged,\n 'linenos': directives.unchanged,\n 'linenospecial': directives.nonnegative_int,\n 'linenostart': directives.nonnegative_int,\n 'linenostep': directives.nonnegative_int,\n 'lineseparator': directives.unchanged,\n 'linespans': directives.unchanged,\n 'nobackground': directives.flag,\n 'nowrap': directives.flag,\n 'tagsfile': directives.unchanged,\n 'tagurlformat': directives.unchanged,\n }\n has_content = True\n\n def run(self):\n self.assert_has_content()\n try:\n lexer = get_lexer_by_name(self.arguments[0])\n except ValueError:\n # no lexer found - use the text one instead of an exception\n lexer = TextLexer()\n\n # Fetch the defaults\n if pys.PYGMENTS_RST_OPTIONS is not None:\n for k, v in six.iteritems(pys.PYGMENTS_RST_OPTIONS):\n # Locally set options overrides the defaults\n if k not in self.options:\n self.options[k] = v\n\n if ('linenos' in self.options and\n self.options['linenos'] not in ('table', 'inline')):\n if self.options['linenos'] == 'none':\n self.options.pop('linenos')\n else:\n self.options['linenos'] = 'table'\n\n for flag in ('nowrap', 'nobackground', 'anchorlinenos'):\n if flag in self.options:\n self.options[flag] = True\n\n # noclasses should already default to False, but just in case...\n formatter = HtmlFormatter(noclasses=False, **self.options)\n parsed = highlight('\\n'.join(self.content), lexer, formatter)\n return [nodes.raw('', parsed, format='html')]\n\ndirectives.register_directive('code-block', Pygments)\ndirectives.register_directive('sourcecode', Pygments)\n\n\n_abbr_re = re.compile('\\((.*)\\)$', re.DOTALL)\n\n\nclass abbreviation(nodes.Inline, nodes.TextElement):\n pass\n\n\ndef abbr_role(typ, rawtext, text, lineno, inliner, options={}, content=[]):\n text = utils.unescape(text)\n m = _abbr_re.search(text)\n if m is None:\n return [abbreviation(text, text)], []\n abbr = text[:m.start()].strip()\n expl = m.group(1)\n return [abbreviation(abbr, abbr, explanation=expl)], []\n\nroles.register_local_role('abbr', abbr_role)\n",
"path": "pelican/rstdirectives.py"
}
] | diff --git a/pelican/rstdirectives.py b/pelican/rstdirectives.py
index 1bf6971ca..1c25cc42a 100644
--- a/pelican/rstdirectives.py
+++ b/pelican/rstdirectives.py
@@ -70,7 +70,7 @@ def run(self):
directives.register_directive('sourcecode', Pygments)
-_abbr_re = re.compile('\((.*)\)$')
+_abbr_re = re.compile('\((.*)\)$', re.DOTALL)
class abbreviation(nodes.Inline, nodes.TextElement):
diff --git a/pelican/tests/test_rstdirectives.py b/pelican/tests/test_rstdirectives.py
new file mode 100644
index 000000000..ae863b309
--- /dev/null
+++ b/pelican/tests/test_rstdirectives.py
@@ -0,0 +1,32 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals, print_function
+
+from mock import Mock
+from pelican.tests.support import unittest
+
+class Test_abbr_role(unittest.TestCase):
+ def call_it(self, text):
+ from pelican.rstdirectives import abbr_role
+ rawtext = text
+ lineno = 42
+ inliner = Mock(name='inliner')
+ nodes, system_messages = abbr_role(
+ 'abbr', rawtext, text, lineno, inliner)
+ self.assertEqual(system_messages, [])
+ self.assertEqual(len(nodes), 1)
+ return nodes[0]
+
+ def test(self):
+ node = self.call_it("Abbr (Abbreviation)")
+ self.assertEqual(node.astext(), "Abbr")
+ self.assertEqual(node['explanation'], "Abbreviation")
+
+ def test_newlines_in_explanation(self):
+ node = self.call_it("CUL (See you\nlater)")
+ self.assertEqual(node.astext(), "CUL")
+ self.assertEqual(node['explanation'], "See you\nlater")
+
+ def test_newlines_in_abbr(self):
+ node = self.call_it("US of\nA \n (USA)")
+ self.assertEqual(node.astext(), "US of\nA")
+ self.assertEqual(node['explanation'], "USA")
|
awslabs__gluonts-644 | Index of forecast is wrong in multivariate Time Series
## Description
When forecasting multivariate time series, the forecast index has the length of the target dimension instead of the prediction length.
## To Reproduce
```python
import numpy as np
from gluonts.dataset.common import ListDataset
from gluonts.distribution import MultivariateGaussianOutput
from gluonts.model.deepar import DeepAREstimator
from gluonts.trainer import Trainer
from gluonts.evaluation.backtest import make_evaluation_predictions
train_dataset = ListDataset(
data_iter=[
{
"start": "2019-01-01 00:00:00",
"target": np.ones(shape=(4, 4)),
},
],
freq="W",
one_dim_target=False,
)
test_dataset = ListDataset(
data_iter=[
{
"start": "2019-01-01 00:00:00",
"target": np.ones(shape=(4, 5)),
},
],
freq="W",
one_dim_target=False,
)
estimator = DeepAREstimator(
'W', prediction_length=1, trainer=Trainer(epochs=3, hybridize=False),
distr_output=MultivariateGaussianOutput(dim=4),
)
predictor = estimator.train(train_dataset)
forecast_it, ts_it = make_evaluation_predictions(dataset=test_dataset, predictor=predictor, num_samples=10)
forecast_list = list(forecast_it)
ts_list = list(ts_it)
```
## Error Message
DatetimeIndex(['2019-02-03', '2019-02-10', '2019-02-17', '2019-02-24'], dtype='datetime64[ns]', freq='W-SUN')
While it should only be ['2019-02-03']
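A small sketch of the shape mix-up, assuming the multivariate sample layout `(num_samples, prediction_length, target_dim)` documented in `SampleForecast` below:

```python
import numpy as np

# 10 sample paths, prediction_length=1, target_dim=4 (matching the repro above)
samples = np.random.normal(size=(10, 1, 4))

print(samples.shape[-1])  # 4 -> the target dimension, which currently sizes the index
print(samples.shape[1])   # 1 -> the actual prediction length, which should size it
```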
## Environment
- Operating system: Amazon Linux
- Python version: 3.6
- GluonTS version: a96a0cc4 internal
| [
{
"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\n# Standard library imports\nimport re\nfrom enum import Enum\nfrom typing import Dict, List, NamedTuple, Optional, Set, Union, Callable\n\n# Third-party imports\nimport mxnet as mx\nimport numpy as np\nimport pandas as pd\nimport pydantic\n\n# First-party imports\nfrom gluonts.core.exception import GluonTSUserError\nfrom gluonts.distribution import Distribution\nfrom gluonts.core.component import validated\n\n\nclass Quantile(NamedTuple):\n value: float\n name: str\n\n @property\n def loss_name(self):\n return f\"QuantileLoss[{self.name}]\"\n\n @property\n def weighted_loss_name(self):\n return f\"wQuantileLoss[{self.name}]\"\n\n @property\n def coverage_name(self):\n return f\"Coverage[{self.name}]\"\n\n @classmethod\n def checked(cls, value: float, name: str) -> \"Quantile\":\n if not 0 <= value <= 1:\n raise GluonTSUserError(\n f\"quantile value should be in [0, 1] but found {value}\"\n )\n\n return Quantile(value, name)\n\n @classmethod\n def from_float(cls, quantile: float) -> \"Quantile\":\n assert isinstance(quantile, float)\n return cls.checked(value=quantile, name=str(quantile))\n\n @classmethod\n def from_str(cls, quantile: str) -> \"Quantile\":\n assert isinstance(quantile, str)\n try:\n return cls.checked(value=float(quantile), name=quantile)\n except ValueError:\n m = re.match(r\"^p(\\d{2})$\", quantile)\n\n if m is None:\n raise GluonTSUserError(\n \"Quantile string should be of the form \"\n f'\"p10\", \"p50\", ... or \"0.1\", \"0.5\", ... but found {quantile}'\n )\n else:\n quantile_float: float = int(m.group(1)) / 100\n return cls(value=quantile_float, name=str(quantile_float))\n\n @classmethod\n def parse(cls, quantile: Union[\"Quantile\", float, str]) -> \"Quantile\":\n \"\"\"Produces equivalent float and string representation of a given\n quantile level.\n\n >>> Quantile.parse(0.1)\n Quantile(value=0.1, name='0.1')\n\n >>> Quantile.parse('0.2')\n Quantile(value=0.2, name='0.2')\n\n >>> Quantile.parse('0.20')\n Quantile(value=0.2, name='0.20')\n\n >>> Quantile.parse('p99')\n Quantile(value=0.99, name='0.99')\n\n Parameters\n ----------\n quantile\n Quantile, can be a float a str representing a float e.g. 
'0.1' or a\n quantile string of the form 'p0.1'.\n\n Returns\n -------\n Quantile\n A tuple containing both a float and a string representation of the\n input quantile level.\n \"\"\"\n if isinstance(quantile, Quantile):\n return quantile\n elif isinstance(quantile, float):\n return cls.from_float(quantile)\n else:\n return cls.from_str(quantile)\n\n\nclass Forecast:\n \"\"\"\n A abstract class representing predictions.\n \"\"\"\n\n start_date: pd.Timestamp\n freq: str\n item_id: Optional[str]\n info: Optional[Dict]\n prediction_length: int\n mean: np.ndarray\n _index = None\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n \"\"\"\n Computes a quantile from the predicted distribution.\n\n Parameters\n ----------\n q\n Quantile to compute.\n\n Returns\n -------\n numpy.ndarray\n Value of the quantile across the prediction range.\n \"\"\"\n raise NotImplementedError()\n\n def quantile_ts(self, q: Union[float, str]) -> pd.Series:\n return pd.Series(index=self.index, data=self.quantile(q))\n\n @property\n def median(self) -> np.ndarray:\n return self.quantile(0.5)\n\n def plot(\n self,\n prediction_intervals=(50.0, 90.0),\n show_mean=False,\n color=\"b\",\n label=None,\n output_file=None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Plots the median of the forecast as well as confidence bounds.\n (requires matplotlib and pandas).\n\n Parameters\n ----------\n prediction_intervals : float or list of floats in [0, 100]\n Confidence interval size(s). If a list, it will stack the error\n plots for each confidence interval. Only relevant for error styles\n with \"ci\" in the name.\n show_mean : boolean\n Whether to also show the mean of the forecast.\n color : matplotlib color name or dictionary\n The color used for plotting the forecast.\n label : string\n A label (prefix) that is used for the forecast\n output_file : str or None, default None\n Output path for the plot file. 
If None, plot is not saved to file.\n args :\n Other arguments are passed to main plot() call\n kwargs :\n Other keyword arguments are passed to main plot() call\n \"\"\"\n\n # matplotlib==2.0.* gives errors in Brazil builds and has to be\n # imported locally\n import matplotlib.pyplot as plt\n\n label_prefix = \"\" if label is None else label + \"-\"\n\n for c in prediction_intervals:\n assert 0.0 <= c <= 100.0\n\n ps = [50.0] + [\n 50.0 + f * c / 2.0\n for c in prediction_intervals\n for f in [-1.0, +1.0]\n ]\n percentiles_sorted = sorted(set(ps))\n\n def alpha_for_percentile(p):\n return (p / 100.0) ** 0.3\n\n ps_data = [self.quantile(p / 100.0) for p in percentiles_sorted]\n i_p50 = len(percentiles_sorted) // 2\n\n p50_data = ps_data[i_p50]\n p50_series = pd.Series(data=p50_data, index=self.index)\n p50_series.plot(color=color, ls=\"-\", label=f\"{label_prefix}median\")\n\n if show_mean:\n mean_data = np.mean(self._sorted_samples, axis=0)\n pd.Series(data=mean_data, index=self.index).plot(\n color=color,\n ls=\":\",\n label=f\"{label_prefix}mean\",\n *args,\n **kwargs,\n )\n\n for i in range(len(percentiles_sorted) // 2):\n ptile = percentiles_sorted[i]\n alpha = alpha_for_percentile(ptile)\n plt.fill_between(\n self.index,\n ps_data[i],\n ps_data[-i - 1],\n facecolor=color,\n alpha=alpha,\n interpolate=True,\n *args,\n **kwargs,\n )\n # Hack to create labels for the error intervals.\n # Doesn't actually plot anything, because we only pass a single data point\n pd.Series(data=p50_data[:1], index=self.index[:1]).plot(\n color=color,\n alpha=alpha,\n linewidth=10,\n label=f\"{label_prefix}{100 - ptile * 2}%\",\n *args,\n **kwargs,\n )\n if output_file:\n plt.savefig(output_file)\n\n @property\n def index(self) -> pd.DatetimeIndex:\n if self._index is None:\n self._index = pd.date_range(\n self.start_date, periods=self.prediction_length, freq=self.freq\n )\n return self._index\n\n def dim(self) -> int:\n \"\"\"\n Returns the dimensionality of the forecast object.\n \"\"\"\n raise NotImplementedError()\n\n def copy_dim(self, dim: int):\n \"\"\"\n Returns a new Forecast object with only the selected sub-dimension.\n\n Parameters\n ----------\n dim\n The returned forecast object will only represent this dimension.\n \"\"\"\n raise NotImplementedError()\n\n def copy_aggregate(self, agg_fun: Callable):\n \"\"\"\n Returns a new Forecast object with a time series aggregated over the\n dimension axis.\n\n Parameters\n ----------\n agg_fun\n Aggregation function that defines the aggregation operation\n (typically mean or sum).\n \"\"\"\n raise NotImplementedError()\n\n def as_json_dict(self, config: \"Config\") -> dict:\n result = {}\n\n if OutputType.mean in config.output_types:\n result[\"mean\"] = self.mean.tolist()\n\n if OutputType.quantiles in config.output_types:\n quantiles = map(Quantile.parse, config.quantiles)\n\n result[\"quantiles\"] = {\n quantile.name: self.quantile(quantile.value).tolist()\n for quantile in quantiles\n }\n\n if OutputType.samples in config.output_types:\n result[\"samples\"] = []\n\n return result\n\n\nclass SampleForecast(Forecast):\n \"\"\"\n A `Forecast` object, where the predicted distribution is represented\n internally as samples.\n\n Parameters\n ----------\n samples\n Array of size (num_samples, prediction_length) (1D case) or\n (num_samples, prediction_length, target_dim) (multivariate case)\n start_date\n start of the forecast\n freq\n forecast frequency\n info\n additional information that the forecaster may provide e.g. 
estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n samples: Union[mx.nd.NDArray, np.ndarray],\n start_date: pd.Timestamp,\n freq: str,\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n assert isinstance(\n samples, (np.ndarray, mx.ndarray.ndarray.NDArray)\n ), \"samples should be either a numpy or an mxnet array\"\n assert (\n len(np.shape(samples)) == 2 or len(np.shape(samples)) == 3\n ), \"samples should be a 2-dimensional or 3-dimensional array. Dimensions found: {}\".format(\n len(np.shape(samples))\n )\n self.samples = (\n samples if (isinstance(samples, np.ndarray)) else samples.asnumpy()\n )\n self._sorted_samples_value = None\n self._mean = None\n self._dim = None\n self.item_id = item_id\n self.info = info\n\n assert isinstance(\n start_date, pd.Timestamp\n ), \"start_date should be a pandas Timestamp object\"\n self.start_date = start_date\n\n assert isinstance(freq, str), \"freq should be a string\"\n self.freq = freq\n\n @property\n def _sorted_samples(self):\n if self._sorted_samples_value is None:\n self._sorted_samples_value = np.sort(self.samples, axis=0)\n return self._sorted_samples_value\n\n @property\n def num_samples(self):\n \"\"\"\n The number of samples representing the forecast.\n \"\"\"\n return self.samples.shape[0]\n\n @property\n def prediction_length(self):\n \"\"\"\n Time length of the forecast.\n \"\"\"\n return self.samples.shape[-1]\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n if self._mean is not None:\n return self._mean\n else:\n return np.mean(self.samples, axis=0)\n\n @property\n def mean_ts(self) -> pd.Series:\n \"\"\"\n Forecast mean, as a pandas.Series object.\n \"\"\"\n return pd.Series(self.mean, index=self.index)\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n q = Quantile.parse(q).value\n sample_idx = int(np.round((self.num_samples - 1) * q))\n return self._sorted_samples[sample_idx, :]\n\n def copy_dim(self, dim: int) -> \"SampleForecast\":\n if len(self.samples.shape) == 2:\n samples = self.samples\n else:\n target_dim = self.samples.shape[2]\n assert dim < target_dim, (\n f\"must set 0 <= dim < target_dim, but got dim={dim},\"\n f\" target_dim={target_dim}\"\n )\n samples = self.samples[:, :, dim]\n\n return SampleForecast(\n samples=samples,\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n def copy_aggregate(self, agg_fun: Callable) -> \"SampleForecast\":\n if len(self.samples.shape) == 2:\n samples = self.samples\n else:\n # Aggregate over target dimension axis\n samples = agg_fun(self.samples, axis=2)\n return SampleForecast(\n samples=samples,\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n def dim(self) -> int:\n if self._dim is not None:\n return self._dim\n else:\n if len(self.samples.shape) == 2:\n # univariate target\n # shape: (num_samples, prediction_length)\n return 1\n else:\n # multivariate target\n # shape: (num_samples, prediction_length, target_dim)\n return self.samples.shape[2]\n\n def as_json_dict(self, config: \"Config\") -> dict:\n result = super().as_json_dict(config)\n\n if OutputType.samples in config.output_types:\n result[\"samples\"] = self.samples.tolist()\n\n return result\n\n def __repr__(self):\n return \", \".join(\n [\n f\"SampleForecast({self.samples!r})\",\n f\"{self.start_date!r}\",\n f\"{self.freq!r}\",\n f\"item_id={self.item_id!r}\",\n f\"info={self.info!r})\",\n ]\n 
)\n\n\nclass QuantileForecast(Forecast):\n \"\"\"\n A Forecast that contains arrays (i.e. time series) for quantiles and mean\n\n Parameters\n ----------\n forecast_arrays\n An array of forecasts\n start_date\n start of the forecast\n freq\n forecast frequency\n forecast_keys\n A list of quantiles of the form '0.1', '0.9', etc.,\n and potentially 'mean'. Each entry corresponds to one array in\n forecast_arrays.\n info\n additional information that the forecaster may provide e.g. estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n def __init__(\n self,\n forecast_arrays: np.ndarray,\n start_date: pd.Timestamp,\n freq: str,\n forecast_keys: List[str],\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n self.forecast_array = forecast_arrays\n self.start_date = pd.Timestamp(start_date, freq=freq)\n self.freq = freq\n\n # normalize keys\n self.forecast_keys = [\n Quantile.from_str(key).name if key != \"mean\" else key\n for key in forecast_keys\n ]\n self.item_id = item_id\n self.info = info\n self._dim = None\n\n shape = self.forecast_array.shape\n assert shape[0] == len(self.forecast_keys), (\n f\"The forecast_array (shape={shape} should have the same \"\n f\"length as the forecast_keys (len={len(self.forecast_keys)}).\"\n )\n self.prediction_length = shape[-1]\n self._forecast_dict = {\n k: self.forecast_array[i] for i, k in enumerate(self.forecast_keys)\n }\n\n self._nan_out = np.array([np.nan] * self.prediction_length)\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n q_str = Quantile.parse(q).name\n # We return nan here such that evaluation runs through\n return self._forecast_dict.get(q_str, self._nan_out)\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n return self._forecast_dict.get(\"mean\", self._nan_out)\n\n def dim(self) -> int:\n if self._dim is not None:\n return self._dim\n else:\n if (\n len(self.forecast_array.shape) == 2\n ): # 1D target. shape: (num_samples, prediction_length)\n return 1\n else:\n return self.forecast_array.shape[\n 1\n ] # 2D target. shape: (num_samples, target_dim, prediction_length)\n\n def __repr__(self):\n return \", \".join(\n [\n f\"QuantileForecast({self.forecast_array!r})\",\n f\"start_date={self.start_date!r}\",\n f\"freq={self.freq!r}\",\n f\"forecast_keys={self.forecast_keys!r}\",\n f\"item_id={self.item_id!r}\",\n f\"info={self.info!r})\",\n ]\n )\n\n\nclass DistributionForecast(Forecast):\n \"\"\"\n A `Forecast` object that uses a GluonTS distribution directly.\n This can for instance be used to represent marginal probability\n distributions for each time point -- although joint distributions are\n also possible, e.g. when using MultiVariateGaussian).\n\n Parameters\n ----------\n distribution\n Distribution object. This should represent the entire prediction\n length, i.e., if we draw `num_samples` samples from the distribution,\n the sample shape should be\n\n samples = trans_dist.sample(num_samples)\n samples.shape -> (num_samples, prediction_length)\n\n start_date\n start of the forecast\n freq\n forecast frequency\n info\n additional information that the forecaster may provide e.g. 
estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n distribution: Distribution,\n start_date: pd.Timestamp,\n freq: str,\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n self.distribution = distribution\n self.shape = (\n self.distribution.batch_shape + self.distribution.event_shape\n )\n self.prediction_length = self.shape[0]\n self.item_id = item_id\n self.info = info\n\n assert isinstance(\n start_date, pd.Timestamp\n ), \"start_date should be a pandas Timestamp object\"\n self.start_date = start_date\n\n assert isinstance(freq, str), \"freq should be a string\"\n self.freq = freq\n self._mean = None\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n if self._mean is not None:\n return self._mean\n else:\n self._mean = self.distribution.mean.asnumpy()\n return self._mean\n\n @property\n def mean_ts(self) -> pd.Series:\n \"\"\"\n Forecast mean, as a pandas.Series object.\n \"\"\"\n return pd.Series(self.mean, index=self.index)\n\n def quantile(self, level: Union[float, str]) -> np.ndarray:\n level = Quantile.parse(level).value\n q = self.distribution.quantile(mx.nd.array([level])).asnumpy()[0]\n return q\n\n def to_sample_forecast(self, num_samples: int = 200) -> SampleForecast:\n return SampleForecast(\n samples=self.distribution.sample(num_samples),\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n\nclass OutputType(str, Enum):\n mean = \"mean\"\n samples = \"samples\"\n quantiles = \"quantiles\"\n\n\nclass Config(pydantic.BaseModel):\n num_samples: int = pydantic.Field(100, alias=\"num_eval_samples\")\n output_types: Set[OutputType] = {OutputType.quantiles, OutputType.mean}\n # FIXME: validate list elements\n quantiles: List[str] = [\"0.1\", \"0.5\", \"0.9\"]\n\n class Config:\n allow_population_by_field_name = True\n # store additional fields\n extra = \"allow\"\n",
"path": "src/gluonts/model/forecast.py"
}
] | [
{
"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\n# Standard library imports\nimport re\nfrom enum import Enum\nfrom typing import Dict, List, NamedTuple, Optional, Set, Union, Callable\n\n# Third-party imports\nimport mxnet as mx\nimport numpy as np\nimport pandas as pd\nimport pydantic\n\n# First-party imports\nfrom gluonts.core.exception import GluonTSUserError\nfrom gluonts.distribution import Distribution\nfrom gluonts.core.component import validated\n\n\nclass Quantile(NamedTuple):\n value: float\n name: str\n\n @property\n def loss_name(self):\n return f\"QuantileLoss[{self.name}]\"\n\n @property\n def weighted_loss_name(self):\n return f\"wQuantileLoss[{self.name}]\"\n\n @property\n def coverage_name(self):\n return f\"Coverage[{self.name}]\"\n\n @classmethod\n def checked(cls, value: float, name: str) -> \"Quantile\":\n if not 0 <= value <= 1:\n raise GluonTSUserError(\n f\"quantile value should be in [0, 1] but found {value}\"\n )\n\n return Quantile(value, name)\n\n @classmethod\n def from_float(cls, quantile: float) -> \"Quantile\":\n assert isinstance(quantile, float)\n return cls.checked(value=quantile, name=str(quantile))\n\n @classmethod\n def from_str(cls, quantile: str) -> \"Quantile\":\n assert isinstance(quantile, str)\n try:\n return cls.checked(value=float(quantile), name=quantile)\n except ValueError:\n m = re.match(r\"^p(\\d{2})$\", quantile)\n\n if m is None:\n raise GluonTSUserError(\n \"Quantile string should be of the form \"\n f'\"p10\", \"p50\", ... or \"0.1\", \"0.5\", ... but found {quantile}'\n )\n else:\n quantile_float: float = int(m.group(1)) / 100\n return cls(value=quantile_float, name=str(quantile_float))\n\n @classmethod\n def parse(cls, quantile: Union[\"Quantile\", float, str]) -> \"Quantile\":\n \"\"\"Produces equivalent float and string representation of a given\n quantile level.\n\n >>> Quantile.parse(0.1)\n Quantile(value=0.1, name='0.1')\n\n >>> Quantile.parse('0.2')\n Quantile(value=0.2, name='0.2')\n\n >>> Quantile.parse('0.20')\n Quantile(value=0.2, name='0.20')\n\n >>> Quantile.parse('p99')\n Quantile(value=0.99, name='0.99')\n\n Parameters\n ----------\n quantile\n Quantile, can be a float a str representing a float e.g. 
'0.1' or a\n quantile string of the form 'p0.1'.\n\n Returns\n -------\n Quantile\n A tuple containing both a float and a string representation of the\n input quantile level.\n \"\"\"\n if isinstance(quantile, Quantile):\n return quantile\n elif isinstance(quantile, float):\n return cls.from_float(quantile)\n else:\n return cls.from_str(quantile)\n\n\nclass Forecast:\n \"\"\"\n A abstract class representing predictions.\n \"\"\"\n\n start_date: pd.Timestamp\n freq: str\n item_id: Optional[str]\n info: Optional[Dict]\n prediction_length: int\n mean: np.ndarray\n _index = None\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n \"\"\"\n Computes a quantile from the predicted distribution.\n\n Parameters\n ----------\n q\n Quantile to compute.\n\n Returns\n -------\n numpy.ndarray\n Value of the quantile across the prediction range.\n \"\"\"\n raise NotImplementedError()\n\n def quantile_ts(self, q: Union[float, str]) -> pd.Series:\n return pd.Series(index=self.index, data=self.quantile(q))\n\n @property\n def median(self) -> np.ndarray:\n return self.quantile(0.5)\n\n def plot(\n self,\n prediction_intervals=(50.0, 90.0),\n show_mean=False,\n color=\"b\",\n label=None,\n output_file=None,\n *args,\n **kwargs,\n ):\n \"\"\"\n Plots the median of the forecast as well as confidence bounds.\n (requires matplotlib and pandas).\n\n Parameters\n ----------\n prediction_intervals : float or list of floats in [0, 100]\n Confidence interval size(s). If a list, it will stack the error\n plots for each confidence interval. Only relevant for error styles\n with \"ci\" in the name.\n show_mean : boolean\n Whether to also show the mean of the forecast.\n color : matplotlib color name or dictionary\n The color used for plotting the forecast.\n label : string\n A label (prefix) that is used for the forecast\n output_file : str or None, default None\n Output path for the plot file. 
If None, plot is not saved to file.\n args :\n Other arguments are passed to main plot() call\n kwargs :\n Other keyword arguments are passed to main plot() call\n \"\"\"\n\n # matplotlib==2.0.* gives errors in Brazil builds and has to be\n # imported locally\n import matplotlib.pyplot as plt\n\n label_prefix = \"\" if label is None else label + \"-\"\n\n for c in prediction_intervals:\n assert 0.0 <= c <= 100.0\n\n ps = [50.0] + [\n 50.0 + f * c / 2.0\n for c in prediction_intervals\n for f in [-1.0, +1.0]\n ]\n percentiles_sorted = sorted(set(ps))\n\n def alpha_for_percentile(p):\n return (p / 100.0) ** 0.3\n\n ps_data = [self.quantile(p / 100.0) for p in percentiles_sorted]\n i_p50 = len(percentiles_sorted) // 2\n\n p50_data = ps_data[i_p50]\n p50_series = pd.Series(data=p50_data, index=self.index)\n p50_series.plot(color=color, ls=\"-\", label=f\"{label_prefix}median\")\n\n if show_mean:\n mean_data = np.mean(self._sorted_samples, axis=0)\n pd.Series(data=mean_data, index=self.index).plot(\n color=color,\n ls=\":\",\n label=f\"{label_prefix}mean\",\n *args,\n **kwargs,\n )\n\n for i in range(len(percentiles_sorted) // 2):\n ptile = percentiles_sorted[i]\n alpha = alpha_for_percentile(ptile)\n plt.fill_between(\n self.index,\n ps_data[i],\n ps_data[-i - 1],\n facecolor=color,\n alpha=alpha,\n interpolate=True,\n *args,\n **kwargs,\n )\n # Hack to create labels for the error intervals.\n # Doesn't actually plot anything, because we only pass a single data point\n pd.Series(data=p50_data[:1], index=self.index[:1]).plot(\n color=color,\n alpha=alpha,\n linewidth=10,\n label=f\"{label_prefix}{100 - ptile * 2}%\",\n *args,\n **kwargs,\n )\n if output_file:\n plt.savefig(output_file)\n\n @property\n def index(self) -> pd.DatetimeIndex:\n if self._index is None:\n self._index = pd.date_range(\n self.start_date, periods=self.prediction_length, freq=self.freq\n )\n return self._index\n\n def dim(self) -> int:\n \"\"\"\n Returns the dimensionality of the forecast object.\n \"\"\"\n raise NotImplementedError()\n\n def copy_dim(self, dim: int):\n \"\"\"\n Returns a new Forecast object with only the selected sub-dimension.\n\n Parameters\n ----------\n dim\n The returned forecast object will only represent this dimension.\n \"\"\"\n raise NotImplementedError()\n\n def copy_aggregate(self, agg_fun: Callable):\n \"\"\"\n Returns a new Forecast object with a time series aggregated over the\n dimension axis.\n\n Parameters\n ----------\n agg_fun\n Aggregation function that defines the aggregation operation\n (typically mean or sum).\n \"\"\"\n raise NotImplementedError()\n\n def as_json_dict(self, config: \"Config\") -> dict:\n result = {}\n\n if OutputType.mean in config.output_types:\n result[\"mean\"] = self.mean.tolist()\n\n if OutputType.quantiles in config.output_types:\n quantiles = map(Quantile.parse, config.quantiles)\n\n result[\"quantiles\"] = {\n quantile.name: self.quantile(quantile.value).tolist()\n for quantile in quantiles\n }\n\n if OutputType.samples in config.output_types:\n result[\"samples\"] = []\n\n return result\n\n\nclass SampleForecast(Forecast):\n \"\"\"\n A `Forecast` object, where the predicted distribution is represented\n internally as samples.\n\n Parameters\n ----------\n samples\n Array of size (num_samples, prediction_length) (1D case) or\n (num_samples, prediction_length, target_dim) (multivariate case)\n start_date\n start of the forecast\n freq\n forecast frequency\n info\n additional information that the forecaster may provide e.g. 
estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n samples: Union[mx.nd.NDArray, np.ndarray],\n start_date: pd.Timestamp,\n freq: str,\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n assert isinstance(\n samples, (np.ndarray, mx.ndarray.ndarray.NDArray)\n ), \"samples should be either a numpy or an mxnet array\"\n assert (\n len(np.shape(samples)) == 2 or len(np.shape(samples)) == 3\n ), \"samples should be a 2-dimensional or 3-dimensional array. Dimensions found: {}\".format(\n len(np.shape(samples))\n )\n self.samples = (\n samples if (isinstance(samples, np.ndarray)) else samples.asnumpy()\n )\n self._sorted_samples_value = None\n self._mean = None\n self._dim = None\n self.item_id = item_id\n self.info = info\n\n assert isinstance(\n start_date, pd.Timestamp\n ), \"start_date should be a pandas Timestamp object\"\n self.start_date = start_date\n\n assert isinstance(freq, str), \"freq should be a string\"\n self.freq = freq\n\n @property\n def _sorted_samples(self):\n if self._sorted_samples_value is None:\n self._sorted_samples_value = np.sort(self.samples, axis=0)\n return self._sorted_samples_value\n\n @property\n def num_samples(self):\n \"\"\"\n The number of samples representing the forecast.\n \"\"\"\n return self.samples.shape[0]\n\n @property\n def prediction_length(self):\n \"\"\"\n Time length of the forecast.\n \"\"\"\n return self.samples.shape[1]\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n if self._mean is not None:\n return self._mean\n else:\n return np.mean(self.samples, axis=0)\n\n @property\n def mean_ts(self) -> pd.Series:\n \"\"\"\n Forecast mean, as a pandas.Series object.\n \"\"\"\n return pd.Series(self.mean, index=self.index)\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n q = Quantile.parse(q).value\n sample_idx = int(np.round((self.num_samples - 1) * q))\n return self._sorted_samples[sample_idx, :]\n\n def copy_dim(self, dim: int) -> \"SampleForecast\":\n if len(self.samples.shape) == 2:\n samples = self.samples\n else:\n target_dim = self.samples.shape[2]\n assert dim < target_dim, (\n f\"must set 0 <= dim < target_dim, but got dim={dim},\"\n f\" target_dim={target_dim}\"\n )\n samples = self.samples[:, :, dim]\n\n return SampleForecast(\n samples=samples,\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n def copy_aggregate(self, agg_fun: Callable) -> \"SampleForecast\":\n if len(self.samples.shape) == 2:\n samples = self.samples\n else:\n # Aggregate over target dimension axis\n samples = agg_fun(self.samples, axis=2)\n return SampleForecast(\n samples=samples,\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n def dim(self) -> int:\n if self._dim is not None:\n return self._dim\n else:\n if len(self.samples.shape) == 2:\n # univariate target\n # shape: (num_samples, prediction_length)\n return 1\n else:\n # multivariate target\n # shape: (num_samples, prediction_length, target_dim)\n return self.samples.shape[2]\n\n def as_json_dict(self, config: \"Config\") -> dict:\n result = super().as_json_dict(config)\n\n if OutputType.samples in config.output_types:\n result[\"samples\"] = self.samples.tolist()\n\n return result\n\n def __repr__(self):\n return \", \".join(\n [\n f\"SampleForecast({self.samples!r})\",\n f\"{self.start_date!r}\",\n f\"{self.freq!r}\",\n f\"item_id={self.item_id!r}\",\n f\"info={self.info!r})\",\n ]\n 
)\n\n\nclass QuantileForecast(Forecast):\n \"\"\"\n A Forecast that contains arrays (i.e. time series) for quantiles and mean\n\n Parameters\n ----------\n forecast_arrays\n An array of forecasts\n start_date\n start of the forecast\n freq\n forecast frequency\n forecast_keys\n A list of quantiles of the form '0.1', '0.9', etc.,\n and potentially 'mean'. Each entry corresponds to one array in\n forecast_arrays.\n info\n additional information that the forecaster may provide e.g. estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n def __init__(\n self,\n forecast_arrays: np.ndarray,\n start_date: pd.Timestamp,\n freq: str,\n forecast_keys: List[str],\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n self.forecast_array = forecast_arrays\n self.start_date = pd.Timestamp(start_date, freq=freq)\n self.freq = freq\n\n # normalize keys\n self.forecast_keys = [\n Quantile.from_str(key).name if key != \"mean\" else key\n for key in forecast_keys\n ]\n self.item_id = item_id\n self.info = info\n self._dim = None\n\n shape = self.forecast_array.shape\n assert shape[0] == len(self.forecast_keys), (\n f\"The forecast_array (shape={shape} should have the same \"\n f\"length as the forecast_keys (len={len(self.forecast_keys)}).\"\n )\n self.prediction_length = shape[-1]\n self._forecast_dict = {\n k: self.forecast_array[i] for i, k in enumerate(self.forecast_keys)\n }\n\n self._nan_out = np.array([np.nan] * self.prediction_length)\n\n def quantile(self, q: Union[float, str]) -> np.ndarray:\n q_str = Quantile.parse(q).name\n # We return nan here such that evaluation runs through\n return self._forecast_dict.get(q_str, self._nan_out)\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n return self._forecast_dict.get(\"mean\", self._nan_out)\n\n def dim(self) -> int:\n if self._dim is not None:\n return self._dim\n else:\n if (\n len(self.forecast_array.shape) == 2\n ): # 1D target. shape: (num_samples, prediction_length)\n return 1\n else:\n return self.forecast_array.shape[\n 1\n ] # 2D target. shape: (num_samples, target_dim, prediction_length)\n\n def __repr__(self):\n return \", \".join(\n [\n f\"QuantileForecast({self.forecast_array!r})\",\n f\"start_date={self.start_date!r}\",\n f\"freq={self.freq!r}\",\n f\"forecast_keys={self.forecast_keys!r}\",\n f\"item_id={self.item_id!r}\",\n f\"info={self.info!r})\",\n ]\n )\n\n\nclass DistributionForecast(Forecast):\n \"\"\"\n A `Forecast` object that uses a GluonTS distribution directly.\n This can for instance be used to represent marginal probability\n distributions for each time point -- although joint distributions are\n also possible, e.g. when using MultiVariateGaussian).\n\n Parameters\n ----------\n distribution\n Distribution object. This should represent the entire prediction\n length, i.e., if we draw `num_samples` samples from the distribution,\n the sample shape should be\n\n samples = trans_dist.sample(num_samples)\n samples.shape -> (num_samples, prediction_length)\n\n start_date\n start of the forecast\n freq\n forecast frequency\n info\n additional information that the forecaster may provide e.g. 
estimated\n parameters, number of iterations ran etc.\n \"\"\"\n\n @validated()\n def __init__(\n self,\n distribution: Distribution,\n start_date: pd.Timestamp,\n freq: str,\n item_id: Optional[str] = None,\n info: Optional[Dict] = None,\n ) -> None:\n self.distribution = distribution\n self.shape = (\n self.distribution.batch_shape + self.distribution.event_shape\n )\n self.prediction_length = self.shape[0]\n self.item_id = item_id\n self.info = info\n\n assert isinstance(\n start_date, pd.Timestamp\n ), \"start_date should be a pandas Timestamp object\"\n self.start_date = start_date\n\n assert isinstance(freq, str), \"freq should be a string\"\n self.freq = freq\n self._mean = None\n\n @property\n def mean(self) -> np.ndarray:\n \"\"\"\n Forecast mean.\n \"\"\"\n if self._mean is not None:\n return self._mean\n else:\n self._mean = self.distribution.mean.asnumpy()\n return self._mean\n\n @property\n def mean_ts(self) -> pd.Series:\n \"\"\"\n Forecast mean, as a pandas.Series object.\n \"\"\"\n return pd.Series(self.mean, index=self.index)\n\n def quantile(self, level: Union[float, str]) -> np.ndarray:\n level = Quantile.parse(level).value\n q = self.distribution.quantile(mx.nd.array([level])).asnumpy()[0]\n return q\n\n def to_sample_forecast(self, num_samples: int = 200) -> SampleForecast:\n return SampleForecast(\n samples=self.distribution.sample(num_samples),\n start_date=self.start_date,\n freq=self.freq,\n item_id=self.item_id,\n info=self.info,\n )\n\n\nclass OutputType(str, Enum):\n mean = \"mean\"\n samples = \"samples\"\n quantiles = \"quantiles\"\n\n\nclass Config(pydantic.BaseModel):\n num_samples: int = pydantic.Field(100, alias=\"num_eval_samples\")\n output_types: Set[OutputType] = {OutputType.quantiles, OutputType.mean}\n # FIXME: validate list elements\n quantiles: List[str] = [\"0.1\", \"0.5\", \"0.9\"]\n\n class Config:\n allow_population_by_field_name = True\n # store additional fields\n extra = \"allow\"\n",
"path": "src/gluonts/model/forecast.py"
}
] | diff --git a/src/gluonts/model/forecast.py b/src/gluonts/model/forecast.py
index 19ffc7abc1..6ad91fe9da 100644
--- a/src/gluonts/model/forecast.py
+++ b/src/gluonts/model/forecast.py
@@ -373,7 +373,7 @@ def prediction_length(self):
"""
Time length of the forecast.
"""
- return self.samples.shape[-1]
+ return self.samples.shape[1]
@property
def mean(self) -> np.ndarray:
diff --git a/test/model/test_forecast.py b/test/model/test_forecast.py
index d05c8f231b..252f550e3b 100644
--- a/test/model/test_forecast.py
+++ b/test/model/test_forecast.py
@@ -94,3 +94,38 @@ def percentile(value):
assert forecast.prediction_length == pred_length
assert len(forecast.index) == pred_length
assert forecast.index[0] == pd.Timestamp(START_DATE)
+
+
[email protected](
+ "forecast, exp_index",
+ [
+ (
+ SampleForecast(
+ samples=np.random.normal(size=(100, 7, 3)),
+ start_date=pd.Timestamp("2020-01-01 00:00:00"),
+ freq="1D",
+ ),
+ pd.date_range(
+ start=pd.Timestamp("2020-01-01 00:00:00"),
+ freq="1D",
+ periods=7,
+ ),
+ ),
+ (
+ DistributionForecast(
+ Uniform(
+ low=mx.nd.zeros(shape=(5, 2)),
+ high=mx.nd.ones(shape=(5, 2)),
+ ),
+ start_date=pd.Timestamp("2020-01-01 00:00:00"),
+ freq="W",
+ ),
+ pd.date_range(
+ start=pd.Timestamp("2020-01-01 00:00:00"), freq="W", periods=5,
+ ),
+ ),
+ ],
+)
+def test_forecast_multivariate(forecast, exp_index):
+ assert forecast.prediction_length == len(exp_index)
+ assert np.all(forecast.index == exp_index)
|
python__peps-3263 | Infra: Check Sphinx warnings on CI
This is similar to what we have in the CPython repo, most recently: https://github.com/python/cpython/pull/106460, and will help us gradually remove Sphinx warnings and avoid new ones being introduced.
It checks three things:
1. If a file previously had no warnings (not listed in `.nitignore`), and new ones are introduced, it fails
* -> To prevent regressions
2. If a file previously had warnings (it's lsited in `.nitignore`), but now has none, it fails and tells us to remove it from `.nitignore`
* To help us incrementally improve over time
3. If a file previously had warnings (it's listed in `.nitignore`), and still has warnings, it doesn't fail, but it will annotate the PR to show the warning
* To make them more visible, and give us the opportunity to fix them
I've intentionally kept the code and layout as close as possible to the CPython version (see https://github.com/python/cpython/tree/main/Doc/tools) for easier future maintenance.
| [
{
"content": "# This file is placed in the public domain or under the\n# CC0-1.0-Universal license, whichever is more permissive.\n\n\"\"\"Configuration for building PEPs using Sphinx.\"\"\"\n\nfrom pathlib import Path\nimport sys\n\nsys.path.append(str(Path(\".\").absolute()))\n\n# -- Project information -----------------------------------------------------\n\nproject = \"PEPs\"\nmaster_doc = \"contents\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings.\nextensions = [\n \"pep_sphinx_extensions\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.githubpages\",\n]\n\n# The file extensions of source files. Sphinx uses these suffixes as sources.\nsource_suffix = {\n \".rst\": \"pep\",\n \".txt\": \"pep\",\n}\n\n# List of patterns (relative to source dir) to ignore when looking for source files.\ninclude_patterns = [\n # Required for Sphinx\n \"contents.rst\",\n # PEP files\n \"pep-????.rst\",\n \"pep-????.txt\",\n # PEP ancillary files\n \"pep-????/*.rst\",\n # Documentation\n \"docs/*.rst\",\n]\nexclude_patterns = [\n # PEP Template\n \"pep-0012/pep-NNNN.rst\",\n]\n\n# Intersphinx configuration\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'packaging': ('https://packaging.python.org/en/latest/', None),\n 'devguide': ('https://devguide.python.org/', None),\n 'py3.11': ('https://docs.python.org/3.11/', None),\n 'py3.12': ('https://docs.python.org/3.12/', None),\n}\nintersphinx_disabled_reftypes = []\n\n# -- Options for HTML output -------------------------------------------------\n\n# HTML output settings\nhtml_math_renderer = \"maths_to_html\" # Maths rendering\n\n# Theme settings\nhtml_theme_path = [\"pep_sphinx_extensions\"]\nhtml_theme = \"pep_theme\" # The actual theme directory (child of html_theme_path)\nhtml_use_index = False # Disable index (we use PEP 0)\nhtml_style = \"\" # must be defined here or in theme.conf, but is unused\nhtml_permalinks = False # handled in the PEPContents transform\nhtml_baseurl = \"https://peps.python.org\" # to create the CNAME file\ngettext_auto_build = False # speed-ups\n\ntemplates_path = [\"pep_sphinx_extensions/pep_theme/templates\"] # Theme template relative paths from `confdir`\n",
"path": "conf.py"
}
] | [
{
"content": "# This file is placed in the public domain or under the\n# CC0-1.0-Universal license, whichever is more permissive.\n\n\"\"\"Configuration for building PEPs using Sphinx.\"\"\"\n\nfrom pathlib import Path\nimport sys\n\nsys.path.append(str(Path(\".\").absolute()))\n\n# -- Project information -----------------------------------------------------\n\nproject = \"PEPs\"\nmaster_doc = \"contents\"\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings.\nextensions = [\n \"pep_sphinx_extensions\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.githubpages\",\n]\n\n# The file extensions of source files. Sphinx uses these suffixes as sources.\nsource_suffix = {\n \".rst\": \"pep\",\n \".txt\": \"pep\",\n}\n\n# List of patterns (relative to source dir) to ignore when looking for source files.\ninclude_patterns = [\n # Required for Sphinx\n \"contents.rst\",\n # PEP files\n \"pep-????.rst\",\n \"pep-????.txt\",\n # PEP ancillary files\n \"pep-????/*.rst\",\n # Documentation\n \"docs/*.rst\",\n]\nexclude_patterns = [\n # PEP Template\n \"pep-0012/pep-NNNN.rst\",\n]\n\n# Warn on missing references\nnitpicky = True\n\n# Intersphinx configuration\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3/', None),\n 'packaging': ('https://packaging.python.org/en/latest/', None),\n 'devguide': ('https://devguide.python.org/', None),\n 'py3.11': ('https://docs.python.org/3.11/', None),\n 'py3.12': ('https://docs.python.org/3.12/', None),\n}\nintersphinx_disabled_reftypes = []\n\n# -- Options for HTML output -------------------------------------------------\n\n# HTML output settings\nhtml_math_renderer = \"maths_to_html\" # Maths rendering\n\n# Theme settings\nhtml_theme_path = [\"pep_sphinx_extensions\"]\nhtml_theme = \"pep_theme\" # The actual theme directory (child of html_theme_path)\nhtml_use_index = False # Disable index (we use PEP 0)\nhtml_style = \"\" # must be defined here or in theme.conf, but is unused\nhtml_permalinks = False # handled in the PEPContents transform\nhtml_baseurl = \"https://peps.python.org\" # to create the CNAME file\ngettext_auto_build = False # speed-ups\n\ntemplates_path = [\"pep_sphinx_extensions/pep_theme/templates\"] # Theme template relative paths from `confdir`\n",
"path": "conf.py"
}
] | diff --git a/conf.py b/conf.py
index 8e2ae485f06..95a1debd451 100644
--- a/conf.py
+++ b/conf.py
@@ -45,6 +45,9 @@
"pep-0012/pep-NNNN.rst",
]
+# Warn on missing references
+nitpicky = True
+
# Intersphinx configuration
intersphinx_mapping = {
'python': ('https://docs.python.org/3/', None),
|
azavea__raster-vision-1557 | Query is invisible in interactive docs search
## 🐛 Bug
When I search for something in the docs using the new interactive search bar it seems to work except the query is not visible in the search box. Instead a bunch of dots appear. This was in Chrome Version 107.0.5304.110 (Official Build) (arm64) with the extension turned off.

| [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\nfrom typing import TYPE_CHECKING, List\nimport sys\nfrom unittest.mock import MagicMock\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n\n\n# https://read-the-docs.readthedocs.io/en/latest/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules\nclass Mock(MagicMock):\n @classmethod\n def __getattr__(cls, name):\n return MagicMock()\n\n\nMOCK_MODULES = ['pyproj', 'h5py', 'osgeo']\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# -- Allow Jinja templates in non-template .rst files -------------------------\n\n\ndef rstjinja(app: 'Sphinx', docname: str, source: List[str]) -> None:\n \"\"\"Allow use of jinja templating in all doc pages.\n\n Adapted from:\n https://www.ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/\n \"\"\"\n # Make sure we're outputting HTML\n if app.builder.format != 'html':\n return\n\n src = source[0]\n rendered = app.builder.templates.render_string(src,\n app.config.html_context)\n source[0] = rendered\n\n\ndef setup(app: 'Sphinx') -> None:\n \"\"\"Register event handler for ``source-read`` event.\n\n See: https://www.sphinx-doc.org/en/master/extdev/appapi.html\n \"\"\"\n app.connect('source-read', rstjinja)\n\n\n# -- Path setup ---------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- Project information ------------------------------------------------------\n\nproject = u'Raster Vision'\ncopyright = u'2018, Azavea'\nauthor = u'Azavea'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u'0.20'\n# The full version, including alpha/beta/rc tags\nrelease = u'0.20-dev'\n\n# -- Extension configuration --------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '4'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n # https://www.sphinx-doc.org/en/master/tutorial/automatic-doc-generation.html\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n # support Google-style docstrings\n # https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html\n 'sphinx.ext.napoleon',\n # mardown support\n 'myst_parser',\n # allow linking to python docs; see intersphinx_mapping below\n 'sphinx.ext.intersphinx',\n # better rendering of pydantic Configs\n 'sphinxcontrib.autodoc_pydantic',\n # for linking to source files from docs\n 'sphinx.ext.viewcode',\n # for rendering examples in docstrings\n 'sphinx.ext.doctest',\n # jupyter notebooks\n 'nbsphinx',\n # jupyter notebooks in a gallery\n 'sphinx_gallery.load_style',\n # add a copy button to code blocks\n 'sphinx_copybutton',\n # search-as-you-type\n 'sphinx_search.extension',\n]\n\n#########################\n# autodoc, autosummary\n# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html\n# https://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html\n#########################\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\nadd_module_names = False\n\nautosummary_generate = True\nautosummary_ignore_module_all = False\n\nautodoc_typehints = 'both'\nautodoc_class_signature = 'separated'\nautodoc_member_order = 'groupwise'\nautodoc_mock_imports = ['torch', 'torchvision', 'pycocotools', 'geopandas']\n#########################\n\n#########################\n# nbsphinx options\n#########################\nnbsphinx_execute = 'never'\nsphinx_gallery_conf = {\n 'line_numbers': True,\n}\n# external thumnails\nnbsphinx_thumbnails = {\n # The _images dir is under build/html. This looks brittle but using the\n # more natural img/tensorboard.png path does not work.\n 'tutorials/train': '_images/tensorboard.png',\n}\nnbsphinx_prolog = r\"\"\"\n{% set docpath = env.doc2path(env.docname, base=False) %}\n{% set docname = docpath.split('/')|last %}\n\n.. only:: html\n\n .. role:: raw-html(raw)\n :format: html\n\n .. note:: This page was generated from `{{ docname }} <https://github.com/azavea/raster-vision/blob/master/docs/{{ docpath }}>`__.\n\"\"\" # noqa\n#########################\n\n#########################\n# intersphinx\n#########################\n\n# connect docs in other projects\nintersphinx_mapping = {\n 'python': (\n 'https://docs.python.org/3',\n 'https://docs.python.org/3/objects.inv',\n ),\n 'rasterio': (\n 'https://rasterio.readthedocs.io/en/stable/',\n 'https://rasterio.readthedocs.io/en/stable/objects.inv',\n ),\n 'shapely': (\n 'https://shapely.readthedocs.io/en/stable/',\n 'https://shapely.readthedocs.io/en/stable/objects.inv',\n ),\n 'matplotlib': (\n 'https://matplotlib.org/stable/',\n 'https://matplotlib.org/stable/objects.inv',\n ),\n 'geopandas': (\n 'https://geopandas.org/en/stable/',\n 'https://geopandas.org/en/stable/objects.inv',\n ),\n 'numpy': (\n 'https://numpy.org/doc/stable/',\n 'https://numpy.org/doc/stable/objects.inv',\n ),\n 'pytorch': (\n 'https://pytorch.org/docs/stable/',\n 'https://pytorch.org/docs/stable/objects.inv',\n ),\n}\n\n#########################\n\n#########################\n# sphinx_copybutton\n# https://sphinx-copybutton.readthedocs.io/en/latest/index.html\n#########################\n\ncopybutton_prompt_text = r'>>> |\\.\\.\\. 
|\\$ |In \\[\\d*\\]: | {2,5}\\.\\.\\.: | {5,8}: |> '\ncopybutton_prompt_is_regexp = True\ncopybutton_only_copy_prompt_lines = True\ncopybutton_line_continuation_character = '\\\\'\n\n#########################\n\n# -- General configuration ----------------------------------------------------\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n '.rst': 'restructuredtext',\n '.md': 'markdown',\n}\n\n# The encoding of source files.\n#\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nroot_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#\n# today = ''\n#\n# Else, today_fmt is used as the format for a strftime call.\n#\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# These patterns also affect html_static_path and html_extra_path\nexclude_patterns = [\n '_build', 'Thumbs.db', '.DS_Store', 'README.md', '**.ipynb_checkpoints'\n]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n#\n# To see all availabel values:\n# >>> from pygments.styles import get_all_styles\n# >>> styles = list(get_all_styles())\n#\n# pygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'furo'\n# html_theme = 'pydata_sphinx_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# Furo theme options: https://pradyunsg.me/furo/customisation/\nhtml_theme_options = {\n 'sidebar_hide_name': True,\n 'top_of_page_button': None,\n 'navigation_with_keys': True,\n}\n\n# A dictionary of values to pass into the template engine’s context for all\n# pages. 
Single values can also be put in this dictionary using the -A\n# command-line option of sphinx-build.\n#\n# yapf: disable\nhtml_context = dict(\n version=version,\n release=release,\n s3_model_zoo=f'https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-{version}', # noqa\n)\n# yapf: enable\n\n# Add any paths that contain custom themes here, relative to this directory.\n#\n# html_theme_path = []\n\n# The name for this set of Sphinx documents.\n# \"<project> v<release> documentation\" by default.\nhtml_title = f'{project} v{release} documentation'\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = 'img/raster-vision-logo.png'\n\n# The name of an image file (relative to this directory) to use as a favicon of\n# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = 'img/raster-vision-icon.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of CSS files. The entry must be a filename string or a tuple\n# containing the filename string and the attributes dictionary. The filename\n# must be relative to the html_static_path, or a full URI with scheme like\n# https://example.org/style.css. The attributes is used for attributes of\n# <link> tag. It defaults to an empty list.\nhtml_css_files = ['custom.css']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#\n# html_extra_path = []\n\n# If not None, a 'Last updated on:' timestamp is inserted at every page\n# bottom, using the given strftime format.\n# The empty string is equivalent to '%b %d, %Y'.\n#\n# html_last_updated_fmt = None\n\n# Custom sidebar templates, maps document names to template names.\n#\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n#\n# html_domain_indices = True\n\n# If false, no index is generated.\n#\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'\n#\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# 'ja' uses this config value.\n# 'zh' user can custom change `jieba` dictionary path.\n#\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'RasterVisiondoc'\n\n# -- Options for LaTeX output -------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (root_doc, 'RasterVision.tex', 'Raster Vision Documentation', 'Azavea',\n 'manual'),\n]\n\n# -- Options for manual page output -------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(root_doc, 'RasterVisoin-{}.tex', html_title, [author], 'manual')]\n\n# -- Options for Texinfo output -----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (root_doc, 'RasterVision', 'Raster Vision Documentation', author,\n 'RasterVision', 'One line description of project.', 'Miscellaneous'),\n]\n",
"path": "docs/conf.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\nfrom typing import TYPE_CHECKING, List\nimport sys\nfrom unittest.mock import MagicMock\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n\n\n# https://read-the-docs.readthedocs.io/en/latest/faq.html#i-get-import-errors-on-libraries-that-depend-on-c-modules\nclass Mock(MagicMock):\n @classmethod\n def __getattr__(cls, name):\n return MagicMock()\n\n\nMOCK_MODULES = ['pyproj', 'h5py', 'osgeo']\nsys.modules.update((mod_name, Mock()) for mod_name in MOCK_MODULES)\n\n# -- Allow Jinja templates in non-template .rst files -------------------------\n\n\ndef rstjinja(app: 'Sphinx', docname: str, source: List[str]) -> None:\n \"\"\"Allow use of jinja templating in all doc pages.\n\n Adapted from:\n https://www.ericholscher.com/blog/2016/jul/25/integrating-jinja-rst-sphinx/\n \"\"\"\n # Make sure we're outputting HTML\n if app.builder.format != 'html':\n return\n\n src = source[0]\n rendered = app.builder.templates.render_string(src,\n app.config.html_context)\n source[0] = rendered\n\n\ndef setup(app: 'Sphinx') -> None:\n \"\"\"Register event handler for ``source-read`` event.\n\n See: https://www.sphinx-doc.org/en/master/extdev/appapi.html\n \"\"\"\n app.connect('source-read', rstjinja)\n\n\n# -- Path setup ---------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- Project information ------------------------------------------------------\n\nproject = u'Raster Vision'\ncopyright = u'2018, Azavea'\nauthor = u'Azavea'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u'0.20'\n# The full version, including alpha/beta/rc tags\nrelease = u'0.20-dev'\n\n# -- Extension configuration --------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '4'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n # https://www.sphinx-doc.org/en/master/tutorial/automatic-doc-generation.html\n 'sphinx.ext.autodoc',\n 'sphinx.ext.autosummary',\n # support Google-style docstrings\n # https://www.sphinx-doc.org/en/master/usage/extensions/napoleon.html\n 'sphinx.ext.napoleon',\n # mardown support\n 'myst_parser',\n # allow linking to python docs; see intersphinx_mapping below\n 'sphinx.ext.intersphinx',\n # better rendering of pydantic Configs\n 'sphinxcontrib.autodoc_pydantic',\n # for linking to source files from docs\n 'sphinx.ext.viewcode',\n # for rendering examples in docstrings\n 'sphinx.ext.doctest',\n # jupyter notebooks\n 'nbsphinx',\n # jupyter notebooks in a gallery\n 'sphinx_gallery.load_style',\n # add a copy button to code blocks\n 'sphinx_copybutton',\n]\n\n#########################\n# autodoc, autosummary\n# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html\n# https://www.sphinx-doc.org/en/master/usage/extensions/autosummary.html\n#########################\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\nadd_module_names = False\n\nautosummary_generate = True\nautosummary_ignore_module_all = False\n\nautodoc_typehints = 'both'\nautodoc_class_signature = 'separated'\nautodoc_member_order = 'groupwise'\nautodoc_mock_imports = ['torch', 'torchvision', 'pycocotools', 'geopandas']\n#########################\n\n#########################\n# nbsphinx options\n#########################\nnbsphinx_execute = 'never'\nsphinx_gallery_conf = {\n 'line_numbers': True,\n}\n# external thumnails\nnbsphinx_thumbnails = {\n # The _images dir is under build/html. This looks brittle but using the\n # more natural img/tensorboard.png path does not work.\n 'tutorials/train': '_images/tensorboard.png',\n}\nnbsphinx_prolog = r\"\"\"\n{% set docpath = env.doc2path(env.docname, base=False) %}\n{% set docname = docpath.split('/')|last %}\n\n.. only:: html\n\n .. role:: raw-html(raw)\n :format: html\n\n .. note:: This page was generated from `{{ docname }} <https://github.com/azavea/raster-vision/blob/master/docs/{{ docpath }}>`__.\n\"\"\" # noqa\n#########################\n\n#########################\n# intersphinx\n#########################\n\n# connect docs in other projects\nintersphinx_mapping = {\n 'python': (\n 'https://docs.python.org/3',\n 'https://docs.python.org/3/objects.inv',\n ),\n 'rasterio': (\n 'https://rasterio.readthedocs.io/en/stable/',\n 'https://rasterio.readthedocs.io/en/stable/objects.inv',\n ),\n 'shapely': (\n 'https://shapely.readthedocs.io/en/stable/',\n 'https://shapely.readthedocs.io/en/stable/objects.inv',\n ),\n 'matplotlib': (\n 'https://matplotlib.org/stable/',\n 'https://matplotlib.org/stable/objects.inv',\n ),\n 'geopandas': (\n 'https://geopandas.org/en/stable/',\n 'https://geopandas.org/en/stable/objects.inv',\n ),\n 'numpy': (\n 'https://numpy.org/doc/stable/',\n 'https://numpy.org/doc/stable/objects.inv',\n ),\n 'pytorch': (\n 'https://pytorch.org/docs/stable/',\n 'https://pytorch.org/docs/stable/objects.inv',\n ),\n}\n\n#########################\n\n#########################\n# sphinx_copybutton\n# https://sphinx-copybutton.readthedocs.io/en/latest/index.html\n#########################\n\ncopybutton_prompt_text = r'>>> |\\.\\.\\. 
|\\$ |In \\[\\d*\\]: | {2,5}\\.\\.\\.: | {5,8}: |> '\ncopybutton_prompt_is_regexp = True\ncopybutton_only_copy_prompt_lines = True\ncopybutton_line_continuation_character = '\\\\'\n\n#########################\n\n# -- General configuration ----------------------------------------------------\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = {\n '.rst': 'restructuredtext',\n '.md': 'markdown',\n}\n\n# The encoding of source files.\n#\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nroot_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = 'en'\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#\n# today = ''\n#\n# Else, today_fmt is used as the format for a strftime call.\n#\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# These patterns also affect html_static_path and html_extra_path\nexclude_patterns = [\n '_build', 'Thumbs.db', '.DS_Store', 'README.md', '**.ipynb_checkpoints'\n]\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\n#\n# To see all availabel values:\n# >>> from pygments.styles import get_all_styles\n# >>> styles = list(get_all_styles())\n#\n# pygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n# -- Options for HTML output --------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'furo'\n# html_theme = 'pydata_sphinx_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n# Furo theme options: https://pradyunsg.me/furo/customisation/\nhtml_theme_options = {\n 'sidebar_hide_name': True,\n 'top_of_page_button': None,\n 'navigation_with_keys': True,\n}\n\n# A dictionary of values to pass into the template engine’s context for all\n# pages. 
Single values can also be put in this dictionary using the -A\n# command-line option of sphinx-build.\n#\n# yapf: disable\nhtml_context = dict(\n version=version,\n release=release,\n s3_model_zoo=f'https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo-{version}', # noqa\n)\n# yapf: enable\n\n# Add any paths that contain custom themes here, relative to this directory.\n#\n# html_theme_path = []\n\n# The name for this set of Sphinx documents.\n# \"<project> v<release> documentation\" by default.\nhtml_title = f'{project} v{release} documentation'\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\nhtml_logo = 'img/raster-vision-logo.png'\n\n# The name of an image file (relative to this directory) to use as a favicon of\n# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\nhtml_favicon = 'img/raster-vision-icon.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# A list of CSS files. The entry must be a filename string or a tuple\n# containing the filename string and the attributes dictionary. The filename\n# must be relative to the html_static_path, or a full URI with scheme like\n# https://example.org/style.css. The attributes is used for attributes of\n# <link> tag. It defaults to an empty list.\nhtml_css_files = ['custom.css']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#\n# html_extra_path = []\n\n# If not None, a 'Last updated on:' timestamp is inserted at every page\n# bottom, using the given strftime format.\n# The empty string is equivalent to '%b %d, %Y'.\n#\n# html_last_updated_fmt = None\n\n# Custom sidebar templates, maps document names to template names.\n#\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n#\n# html_domain_indices = True\n\n# If false, no index is generated.\n#\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#\n# html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#\n# html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. 
\".xhtml\").\n# html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr', 'zh'\n#\n# html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# 'ja' uses this config value.\n# 'zh' user can custom change `jieba` dictionary path.\n#\n# html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#\n# html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'RasterVisiondoc'\n\n# -- Options for LaTeX output -------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (root_doc, 'RasterVision.tex', 'Raster Vision Documentation', 'Azavea',\n 'manual'),\n]\n\n# -- Options for manual page output -------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(root_doc, 'RasterVisoin-{}.tex', html_title, [author], 'manual')]\n\n# -- Options for Texinfo output -----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (root_doc, 'RasterVision', 'Raster Vision Documentation', author,\n 'RasterVision', 'One line description of project.', 'Miscellaneous'),\n]\n",
"path": "docs/conf.py"
}
] | diff --git a/.readthedocs.yml b/.readthedocs.yml
index a0bdb3900..e99691356 100644
--- a/.readthedocs.yml
+++ b/.readthedocs.yml
@@ -45,4 +45,4 @@ python:
search:
ranking:
# down-rank source code pages
- '*/_modules/*': -10
+ _modules/*: -10
diff --git a/docs/conf.py b/docs/conf.py
index 28bf2a200..5fe593053 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -107,8 +107,6 @@ def setup(app: 'Sphinx') -> None:
'sphinx_gallery.load_style',
# add a copy button to code blocks
'sphinx_copybutton',
- # search-as-you-type
- 'sphinx_search.extension',
]
#########################
diff --git a/docs/requirements.txt b/docs/requirements.txt
index 3508e6c1d..983790dac 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -4,7 +4,6 @@ myst-parser==0.18.1
autodoc-pydantic==1.8.0
nbsphinx==0.8.9
sphinx-copybutton==0.5.*
-readthedocs-sphinx-search==0.1.*
# update when this is resolved: https://github.com/spatialaudio/nbsphinx/issues/655
sphinx-gallery>=0.10,<0.11
|
scrapy__scrapy-5880 | _sent_failed cut the errback chain in MailSender
`MailSender._sent_failed` returns `None` instead of the `failure`. This cuts the errback call chain, making it impossible for client code to detect mail-sending failures.
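A minimal Twisted sketch (not part of the original report) of why an errback has to return the failure for later errbacks in the chain to fire; returning `None` converts the failure into a successful result and hides it from client code:

```python
from twisted.internet import defer
from twisted.python.failure import Failure

def logging_errback(failure):
    # Log the error, then return the failure so later errbacks still see it.
    print("logged:", failure.value)
    return failure

d = defer.Deferred()
d.addErrback(logging_errback)                        # analogous to a fixed _sent_failed
d.addErrback(lambda f: print("client sees:", f.value))
d.errback(Failure(RuntimeError("SMTP refused")))     # both errbacks fire
```

If `logging_errback` returned `None`, the second errback would never be called, which is exactly the behavior described above.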
| [
{
"content": "\"\"\"\nMail sending helpers\n\nSee documentation in docs/topics/email.rst\n\"\"\"\nimport logging\nfrom email import encoders as Encoders\nfrom email.mime.base import MIMEBase\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.nonmultipart import MIMENonMultipart\nfrom email.mime.text import MIMEText\nfrom email.utils import formatdate\nfrom io import BytesIO\n\nfrom twisted import version as twisted_version\nfrom twisted.internet import defer, ssl\nfrom twisted.python.versions import Version\n\nfrom scrapy.utils.misc import arg_to_iter\nfrom scrapy.utils.python import to_bytes\n\nlogger = logging.getLogger(__name__)\n\n\n# Defined in the email.utils module, but undocumented:\n# https://github.com/python/cpython/blob/v3.9.0/Lib/email/utils.py#L42\nCOMMASPACE = \", \"\n\n\ndef _to_bytes_or_none(text):\n if text is None:\n return None\n return to_bytes(text)\n\n\nclass MailSender:\n def __init__(\n self,\n smtphost=\"localhost\",\n mailfrom=\"scrapy@localhost\",\n smtpuser=None,\n smtppass=None,\n smtpport=25,\n smtptls=False,\n smtpssl=False,\n debug=False,\n ):\n self.smtphost = smtphost\n self.smtpport = smtpport\n self.smtpuser = _to_bytes_or_none(smtpuser)\n self.smtppass = _to_bytes_or_none(smtppass)\n self.smtptls = smtptls\n self.smtpssl = smtpssl\n self.mailfrom = mailfrom\n self.debug = debug\n\n @classmethod\n def from_settings(cls, settings):\n return cls(\n smtphost=settings[\"MAIL_HOST\"],\n mailfrom=settings[\"MAIL_FROM\"],\n smtpuser=settings[\"MAIL_USER\"],\n smtppass=settings[\"MAIL_PASS\"],\n smtpport=settings.getint(\"MAIL_PORT\"),\n smtptls=settings.getbool(\"MAIL_TLS\"),\n smtpssl=settings.getbool(\"MAIL_SSL\"),\n )\n\n def send(\n self,\n to,\n subject,\n body,\n cc=None,\n attachs=(),\n mimetype=\"text/plain\",\n charset=None,\n _callback=None,\n ):\n from twisted.internet import reactor\n\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart(*mimetype.split(\"/\", 1))\n\n to = list(arg_to_iter(to))\n cc = list(arg_to_iter(cc))\n\n msg[\"From\"] = self.mailfrom\n msg[\"To\"] = COMMASPACE.join(to)\n msg[\"Date\"] = formatdate(localtime=True)\n msg[\"Subject\"] = subject\n rcpts = to[:]\n if cc:\n rcpts.extend(cc)\n msg[\"Cc\"] = COMMASPACE.join(cc)\n\n if charset:\n msg.set_charset(charset)\n\n if attachs:\n msg.attach(MIMEText(body, \"plain\", charset or \"us-ascii\"))\n for attach_name, mimetype, f in attachs:\n part = MIMEBase(*mimetype.split(\"/\"))\n part.set_payload(f.read())\n Encoders.encode_base64(part)\n part.add_header(\n \"Content-Disposition\", \"attachment\", filename=attach_name\n )\n msg.attach(part)\n else:\n msg.set_payload(body)\n\n if _callback:\n _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg)\n\n if self.debug:\n logger.debug(\n \"Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n {\n \"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": len(attachs),\n },\n )\n return\n\n dfd = self._sendmail(rcpts, msg.as_string().encode(charset or \"utf-8\"))\n dfd.addCallbacks(\n callback=self._sent_ok,\n errback=self._sent_failed,\n callbackArgs=[to, cc, subject, len(attachs)],\n errbackArgs=[to, cc, subject, len(attachs)],\n )\n reactor.addSystemEventTrigger(\"before\", \"shutdown\", lambda: dfd)\n return dfd\n\n def _sent_ok(self, result, to, cc, subject, nattachs):\n logger.info(\n \"Mail sent OK: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n {\n 
\"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": nattachs,\n },\n )\n\n def _sent_failed(self, failure, to, cc, subject, nattachs):\n errstr = str(failure.value)\n logger.error(\n \"Unable to send mail: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d'\n \"- %(mailerr)s\",\n {\n \"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": nattachs,\n \"mailerr\": errstr,\n },\n )\n\n def _sendmail(self, to_addrs, msg):\n from twisted.internet import reactor\n\n msg = BytesIO(msg)\n d = defer.Deferred()\n\n factory = self._create_sender_factory(to_addrs, msg, d)\n\n if self.smtpssl:\n reactor.connectSSL(\n self.smtphost, self.smtpport, factory, ssl.ClientContextFactory()\n )\n else:\n reactor.connectTCP(self.smtphost, self.smtpport, factory)\n\n return d\n\n def _create_sender_factory(self, to_addrs, msg, d):\n from twisted.mail.smtp import ESMTPSenderFactory\n\n factory_keywords = {\n \"heloFallback\": True,\n \"requireAuthentication\": False,\n \"requireTransportSecurity\": self.smtptls,\n }\n\n # Newer versions of twisted require the hostname to use STARTTLS\n if twisted_version >= Version(\"twisted\", 21, 2, 0):\n factory_keywords[\"hostname\"] = self.smtphost\n\n factory = ESMTPSenderFactory(\n self.smtpuser,\n self.smtppass,\n self.mailfrom,\n to_addrs,\n msg,\n d,\n **factory_keywords\n )\n factory.noisy = False\n return factory\n",
"path": "scrapy/mail.py"
}
] | [
{
"content": "\"\"\"\nMail sending helpers\n\nSee documentation in docs/topics/email.rst\n\"\"\"\nimport logging\nfrom email import encoders as Encoders\nfrom email.mime.base import MIMEBase\nfrom email.mime.multipart import MIMEMultipart\nfrom email.mime.nonmultipart import MIMENonMultipart\nfrom email.mime.text import MIMEText\nfrom email.utils import formatdate\nfrom io import BytesIO\n\nfrom twisted import version as twisted_version\nfrom twisted.internet import defer, ssl\nfrom twisted.python.versions import Version\n\nfrom scrapy.utils.misc import arg_to_iter\nfrom scrapy.utils.python import to_bytes\n\nlogger = logging.getLogger(__name__)\n\n\n# Defined in the email.utils module, but undocumented:\n# https://github.com/python/cpython/blob/v3.9.0/Lib/email/utils.py#L42\nCOMMASPACE = \", \"\n\n\ndef _to_bytes_or_none(text):\n if text is None:\n return None\n return to_bytes(text)\n\n\nclass MailSender:\n def __init__(\n self,\n smtphost=\"localhost\",\n mailfrom=\"scrapy@localhost\",\n smtpuser=None,\n smtppass=None,\n smtpport=25,\n smtptls=False,\n smtpssl=False,\n debug=False,\n ):\n self.smtphost = smtphost\n self.smtpport = smtpport\n self.smtpuser = _to_bytes_or_none(smtpuser)\n self.smtppass = _to_bytes_or_none(smtppass)\n self.smtptls = smtptls\n self.smtpssl = smtpssl\n self.mailfrom = mailfrom\n self.debug = debug\n\n @classmethod\n def from_settings(cls, settings):\n return cls(\n smtphost=settings[\"MAIL_HOST\"],\n mailfrom=settings[\"MAIL_FROM\"],\n smtpuser=settings[\"MAIL_USER\"],\n smtppass=settings[\"MAIL_PASS\"],\n smtpport=settings.getint(\"MAIL_PORT\"),\n smtptls=settings.getbool(\"MAIL_TLS\"),\n smtpssl=settings.getbool(\"MAIL_SSL\"),\n )\n\n def send(\n self,\n to,\n subject,\n body,\n cc=None,\n attachs=(),\n mimetype=\"text/plain\",\n charset=None,\n _callback=None,\n ):\n from twisted.internet import reactor\n\n if attachs:\n msg = MIMEMultipart()\n else:\n msg = MIMENonMultipart(*mimetype.split(\"/\", 1))\n\n to = list(arg_to_iter(to))\n cc = list(arg_to_iter(cc))\n\n msg[\"From\"] = self.mailfrom\n msg[\"To\"] = COMMASPACE.join(to)\n msg[\"Date\"] = formatdate(localtime=True)\n msg[\"Subject\"] = subject\n rcpts = to[:]\n if cc:\n rcpts.extend(cc)\n msg[\"Cc\"] = COMMASPACE.join(cc)\n\n if charset:\n msg.set_charset(charset)\n\n if attachs:\n msg.attach(MIMEText(body, \"plain\", charset or \"us-ascii\"))\n for attach_name, mimetype, f in attachs:\n part = MIMEBase(*mimetype.split(\"/\"))\n part.set_payload(f.read())\n Encoders.encode_base64(part)\n part.add_header(\n \"Content-Disposition\", \"attachment\", filename=attach_name\n )\n msg.attach(part)\n else:\n msg.set_payload(body)\n\n if _callback:\n _callback(to=to, subject=subject, body=body, cc=cc, attach=attachs, msg=msg)\n\n if self.debug:\n logger.debug(\n \"Debug mail sent OK: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n {\n \"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": len(attachs),\n },\n )\n return\n\n dfd = self._sendmail(rcpts, msg.as_string().encode(charset or \"utf-8\"))\n dfd.addCallbacks(\n callback=self._sent_ok,\n errback=self._sent_failed,\n callbackArgs=[to, cc, subject, len(attachs)],\n errbackArgs=[to, cc, subject, len(attachs)],\n )\n reactor.addSystemEventTrigger(\"before\", \"shutdown\", lambda: dfd)\n return dfd\n\n def _sent_ok(self, result, to, cc, subject, nattachs):\n logger.info(\n \"Mail sent OK: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d',\n {\n 
\"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": nattachs,\n },\n )\n\n def _sent_failed(self, failure, to, cc, subject, nattachs):\n errstr = str(failure.value)\n logger.error(\n \"Unable to send mail: To=%(mailto)s Cc=%(mailcc)s \"\n 'Subject=\"%(mailsubject)s\" Attachs=%(mailattachs)d'\n \"- %(mailerr)s\",\n {\n \"mailto\": to,\n \"mailcc\": cc,\n \"mailsubject\": subject,\n \"mailattachs\": nattachs,\n \"mailerr\": errstr,\n },\n )\n return failure\n\n def _sendmail(self, to_addrs, msg):\n from twisted.internet import reactor\n\n msg = BytesIO(msg)\n d = defer.Deferred()\n\n factory = self._create_sender_factory(to_addrs, msg, d)\n\n if self.smtpssl:\n reactor.connectSSL(\n self.smtphost, self.smtpport, factory, ssl.ClientContextFactory()\n )\n else:\n reactor.connectTCP(self.smtphost, self.smtpport, factory)\n\n return d\n\n def _create_sender_factory(self, to_addrs, msg, d):\n from twisted.mail.smtp import ESMTPSenderFactory\n\n factory_keywords = {\n \"heloFallback\": True,\n \"requireAuthentication\": False,\n \"requireTransportSecurity\": self.smtptls,\n }\n\n # Newer versions of twisted require the hostname to use STARTTLS\n if twisted_version >= Version(\"twisted\", 21, 2, 0):\n factory_keywords[\"hostname\"] = self.smtphost\n\n factory = ESMTPSenderFactory(\n self.smtpuser,\n self.smtppass,\n self.mailfrom,\n to_addrs,\n msg,\n d,\n **factory_keywords\n )\n factory.noisy = False\n return factory\n",
"path": "scrapy/mail.py"
}
] | diff --git a/scrapy/mail.py b/scrapy/mail.py
index 43115c53ea9..c11f3898d0d 100644
--- a/scrapy/mail.py
+++ b/scrapy/mail.py
@@ -164,6 +164,7 @@ def _sent_failed(self, failure, to, cc, subject, nattachs):
"mailerr": errstr,
},
)
+ return failure
def _sendmail(self, to_addrs, msg):
from twisted.internet import reactor
|
OpenEnergyPlatform__oeplatform-1338 | Django compressor seems to produce unexpected cache behavior.
## Description of the issue
@Darynarli and I experienced unexpected behavior triggered by the new package `django-compressor`. This behavior prevents updating the compressed sources like js or css files entirely. It also happens somewhat silently, because the compressed files (e.g. `static/CACHE/js/....`) are created as expected (using the management command `manage.py compress`). The first error can be found by inspecting the template that imports the source script (html template) with the browser dev tools (e.g. Chrome).
It was observed in the local dev environments; production might also be affected by the next release.
If you want to know what is part of the compression, search the templates for this template tag:
``` jinja2
{% compress js %}
<script src="update source name here "></script>
{% endcompress %}
```
To avoid this behavior in development, you can deactivate the compressor in `settings.py`:
COMPRESS_ENABLED = False
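For context, a hedged sketch of the django-compressor settings that govern this caching behavior; the option names come from the django-compressor settings documentation, and the zero timeouts mirror the change the linked fix ends up making (see the diff later in this record), so treat the values as illustrative:

```python
# settings.py -- django-compressor options relevant to the stale-cache behavior
COMPRESS_ENABLED = True        # master switch; set to False to bypass compression in dev
COMPRESS_OFFLINE = True        # serve pre-compressed files from the offline manifest
                               # generated by `manage.py compress`
COMPRESS_REBUILD_TIMEOUT = 0   # seconds before a cached compressed file may be rebuilt
COMPRESS_MTIME_DELAY = 0       # seconds a source file's modification time stays cached
```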
## Steps to Reproduce
We noticed this during the development of the open peer review. We updated the source code of a JS file that is part of the compression, and the script import of the compressed file (named like output18471749.js) was not updated in the html template after a new compressed file was created.
## Ideas of solution
- Fix django-compressor settings
- or Report bug
- or Fix template inheritance
- other???
## Context and Environment
* Version used:
* Operating system:
* Environment setup and (python) version:
## Workflow checklist
- [ ] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)
| [
{
"content": "\"\"\"\nDjango settings for oeplatform project.\n\nGenerated by 'django-admin startproject' using Django 1.8.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.8/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.8/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\n\ntry:\n from .securitysettings import * # noqa\nexcept ImportError:\n import logging\n import os\n\n logging.error(\"No securitysettings found. Triggerd in oeplatform/settings.py\")\n SECRET_KEY = os.environ.get(\"SECRET_KEY\", \"0\")\n DEFAULT_FROM_EMAIL = os.environ.get(\"DEFAULT_FROM_EMAIL\")\n URL = os.environ.get(\"URL\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/\n\n# Application definition\n\nINSTALLED_APPS = (\n \"django.contrib.sites\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sessions.backends.signed_cookies\",\n \"django_bootstrap5\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"modelview\",\n \"modelview.templatetags.modelview_extras\",\n \"login\",\n \"base\",\n \"base.templatetags.base_tags\",\n \"widget_tweaks\",\n \"dataedit\",\n \"colorfield\",\n \"api\",\n \"ontology\",\n \"axes\",\n \"captcha\",\n \"django.contrib.postgres\",\n \"fontawesome_5\",\n \"django_better_admin_arrayfield\",\n \"oeo_viewer\",\n \"compressor\",\n)\n\nMIDDLEWARE = (\n \"django.contrib.sites.middleware.CurrentSiteMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"login.middleware.DetachMiddleware\",\n \"axes.middleware.AxesMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n)\n\nROOT_URLCONF = \"oeplatform.urls\"\n\nEXTERNAL_URLS = {\n \"tutorials_index\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_faq\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_api1\": \"https://openenergyplatform.github.io/academy/tutorials/01_api/01_api_download/\", # noqa E501\n \"tutorials_licenses\": \"https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/\", # noqa E501\n \"readthedocs\": \"https://oeplatform.readthedocs.io/en/latest/?badge=latest\",\n \"compendium\": \"https://openenergyplatform.github.io/organisation/\",\n}\n\n\ndef external_urls_context_processor(request):\n \"\"\"Define hard coded external urls here.\n Use in templates like this: {{ EXTERNAL_URLS.<name_of_url> }}\n Also, you may want to add an icon indicating external links, e.g.\n \"\"\"\n return {\"EXTERNAL_URLS\": EXTERNAL_URLS}\n\n\nSITE_ID = 1\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n 
\"oeplatform.settings.external_urls_context_processor\",\n ]\n },\n }\n]\n\nCORS_ORIGIN_WHITELIST = [\"http://localhost:3000\", \"http://127.0.0.1:3000\"]\n\nGRAPHENE = {\"SCHEMA\": \"factsheet.schema.schema\"}\n\nWSGI_APPLICATION = \"oeplatform.wsgi.application\"\n\ntry:\n ONTOLOGY_FOLDER # noqa\nexcept NameError:\n ONTOLOGY_FOLDER = \"/tmp\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.8/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.8/howto/static-files/\n\nAUTH_USER_MODEL = \"login.myuser\"\nLOGIN_URL = \"/user/login\"\nLOGIN_REDIRECT_URL = \"/\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.TokenAuthentication\",\n )\n}\n\nAUTHENTICATION_BACKENDS = [\n # AxesBackend should be the first backend in the AUTHENTICATION_BACKENDS list.\n \"axes.backends.AxesBackend\",\n # custom class extenging Django ModelBackend for login with username OR email\n \"login.backends.ModelBackendWithEmail\",\n]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\nSTATICFILES_FINDERS = {\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n}\n\nCOMPRESS_ENABLED = True\nCOMPRESS_OFFLINE = True\n",
"path": "oeplatform/settings.py"
}
] | [
{
"content": "\"\"\"\nDjango settings for oeplatform project.\n\nGenerated by 'django-admin startproject' using Django 1.8.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.8/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.8/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\n\ntry:\n from .securitysettings import * # noqa\nexcept ImportError:\n import logging\n import os\n\n logging.error(\"No securitysettings found. Triggerd in oeplatform/settings.py\")\n SECRET_KEY = os.environ.get(\"SECRET_KEY\", \"0\")\n DEFAULT_FROM_EMAIL = os.environ.get(\"DEFAULT_FROM_EMAIL\")\n URL = os.environ.get(\"URL\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/\n\n# Application definition\n\nINSTALLED_APPS = (\n \"django.contrib.sites\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sessions.backends.signed_cookies\",\n \"django_bootstrap5\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"modelview\",\n \"modelview.templatetags.modelview_extras\",\n \"login\",\n \"base\",\n \"base.templatetags.base_tags\",\n \"widget_tweaks\",\n \"dataedit\",\n \"colorfield\",\n \"api\",\n \"ontology\",\n \"axes\",\n \"captcha\",\n \"django.contrib.postgres\",\n \"fontawesome_5\",\n \"django_better_admin_arrayfield\",\n \"oeo_viewer\",\n \"compressor\",\n)\n\nMIDDLEWARE = (\n \"django.contrib.sites.middleware.CurrentSiteMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"login.middleware.DetachMiddleware\",\n \"axes.middleware.AxesMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n)\n\nROOT_URLCONF = \"oeplatform.urls\"\n\nEXTERNAL_URLS = {\n \"tutorials_index\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_faq\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_api1\": \"https://openenergyplatform.github.io/academy/tutorials/01_api/01_api_download/\", # noqa E501\n \"tutorials_licenses\": \"https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/\", # noqa E501\n \"readthedocs\": \"https://oeplatform.readthedocs.io/en/latest/?badge=latest\",\n \"compendium\": \"https://openenergyplatform.github.io/organisation/\",\n}\n\n\ndef external_urls_context_processor(request):\n \"\"\"Define hard coded external urls here.\n Use in templates like this: {{ EXTERNAL_URLS.<name_of_url> }}\n Also, you may want to add an icon indicating external links, e.g.\n \"\"\"\n return {\"EXTERNAL_URLS\": EXTERNAL_URLS}\n\n\nSITE_ID = 1\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n 
\"oeplatform.settings.external_urls_context_processor\",\n ]\n },\n }\n]\n\nCORS_ORIGIN_WHITELIST = [\"http://localhost:3000\", \"http://127.0.0.1:3000\"]\n\nGRAPHENE = {\"SCHEMA\": \"factsheet.schema.schema\"}\n\nWSGI_APPLICATION = \"oeplatform.wsgi.application\"\n\ntry:\n ONTOLOGY_FOLDER # noqa\nexcept NameError:\n ONTOLOGY_FOLDER = \"/tmp\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.8/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.8/howto/static-files/\n\nAUTH_USER_MODEL = \"login.myuser\"\nLOGIN_URL = \"/user/login\"\nLOGIN_REDIRECT_URL = \"/\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.TokenAuthentication\",\n )\n}\n\nAUTHENTICATION_BACKENDS = [\n # AxesBackend should be the first backend in the AUTHENTICATION_BACKENDS list.\n \"axes.backends.AxesBackend\",\n # custom class extenging Django ModelBackend for login with username OR email\n \"login.backends.ModelBackendWithEmail\",\n]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\nSTATICFILES_FINDERS = {\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n}\n\n\n# https://django-compressor.readthedocs.io/en/stable/settings.html\nCOMPRESS_ENABLED = True\nCOMPRESS_OFFLINE = True\nCOMPRESS_REBUILD_TIMEOUT = 0\nCOMPRESS_MTIME_DELAY = 0\n",
"path": "oeplatform/settings.py"
}
] | diff --git a/oeplatform/settings.py b/oeplatform/settings.py
index 6f0696bdf..bc7e34b87 100644
--- a/oeplatform/settings.py
+++ b/oeplatform/settings.py
@@ -166,5 +166,9 @@ def external_urls_context_processor(request):
"compressor.finders.CompressorFinder",
}
+
+# https://django-compressor.readthedocs.io/en/stable/settings.html
COMPRESS_ENABLED = True
COMPRESS_OFFLINE = True
+COMPRESS_REBUILD_TIMEOUT = 0
+COMPRESS_MTIME_DELAY = 0
diff --git a/versions/changelogs/current.md b/versions/changelogs/current.md
index 0598b515b..db641d3f1 100644
--- a/versions/changelogs/current.md
+++ b/versions/changelogs/current.md
@@ -7,5 +7,6 @@
### Bugs
- Open Peer Review: Fix a bug in the review backend to handle reviews that are finished in one go (without any feedback). [(#1333)](https://github.com/OpenEnergyPlatform/oeplatform/pull/1333)
+- The django-compressor integration now updates the compressed sources and cache as expected [(#1338)](https://github.com/OpenEnergyPlatform/oeplatform/pull/1338)
### Removed
|
Azure__azure-cli-extensions-3046 | vmware 2.0.0 does not work in azure-cli:2.7.0
- If the issue is to do with Azure CLI 2.0 in particular, create an issue here at [Azure/azure-cli](https://github.com/Azure/azure-cli/issues)
### Extension name (the extension in question)
vmware
### Description of issue (in as much detail as possible)
The vmware 2.0.0 extension released yesterday does not work with the az cli 2.7.0 released on 2020-06-01, about 9 months ago. I'm not sure exactly what the minimum version should be set to.
I believe this needs to be updated, but I'm not sure what it should be or what the best process is for updating it.
https://github.com/Azure/azure-cli-extensions/blob/master/src/vmware/azext_vmware/azext_metadata.json
```
"azext.minCliCoreVersion": "2.0.66"
```

steps to reproduce:
```
docker run --rm -it mcr.microsoft.com/azure-cli:2.7.0
az extension add -n vmware
az vmware private-cloud show -g taggac-2020-12 -n taggac-20210219
```
Here is the full output:
```
PS C:\Users\cataggar\io\fct> docker run --rm -it mcr.microsoft.com/azure-cli:2.7.0
bash-5.0# az extension add -n vmware
bash-5.0# az vmware private-cloud show -g taggac-2020-12 -n taggac-20210219
The command failed with an unexpected error. Here is the traceback:
cannot import name 'ARMHttpLoggingPolicy'
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/knack/cli.py", line 215, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 553, in execute
self.commands_loader.load_arguments(command)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 344, in load_arguments
self.command_table[command].load_arguments() # this loads the arguments via reflection
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 310, in load_arguments
super(AzCliCommand, self).load_arguments()
File "/usr/local/lib/python3.6/site-packages/knack/commands.py", line 106, in load_arguments
cmd_args = self.arguments_loader()
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/arm.py", line 723, in generic_show_arguments_loader
cmd_args = get_arguments_loader(context, getter_op, operation_group=kwargs.get('operation_group'))
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/arm.py", line 402, in get_arguments_loader
getter_args = dict(extract_args_from_signature(context.get_op_handler(getter_op, operation_group=operation_group),
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 588, in get_op_handler
op = import_module(mod_to_import)
File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/root/.azure/cliextensions/vmware/azext_vmware/custom.py", line 7, in <module>
from azext_vmware.vendored_sdks.avs_client import AVSClient
File "/root/.azure/cliextensions/vmware/azext_vmware/vendored_sdks/avs_client/__init__.py", line 7, in <module>
from ._avs_client import AVSClient
File "/root/.azure/cliextensions/vmware/azext_vmware/vendored_sdks/avs_client/_avs_client.py", line 18, in <module>
from ._configuration import AVSClientConfiguration
File "/root/.azure/cliextensions/vmware/azext_vmware/vendored_sdks/avs_client/_configuration.py", line 11, in <module> from azure.mgmt.core.policies import ARMHttpLoggingPolicy
ImportError: cannot import name 'ARMHttpLoggingPolicy'
To open an issue, please run: 'az feedback'
bash-5.0#
```
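The traceback above comes down to the `azure-mgmt-core` bundled with CLI 2.7.0 being too old to provide `ARMHttpLoggingPolicy`. A small, hedged way to check whether a given CLI's Python environment is new enough (illustrative only; the exact azure-mgmt-core version that introduced this policy is not stated in the issue):

```python
# Run inside the CLI's Python environment (e.g. the azure-cli docker image).
try:
    # Import path taken verbatim from the traceback above.
    from azure.mgmt.core.policies import ARMHttpLoggingPolicy
    print("azure-mgmt-core provides ARMHttpLoggingPolicy; vmware 2.0.0 should import fine")
except ImportError:
    print("azure-mgmt-core is too old for vmware 2.0.0; upgrade the Azure CLI")
```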
-----
| [
{
"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom io import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"2.0.0\"\n\nwith open('README.md', encoding='utf-8') as f:\n readme = f.read()\nwith open('CHANGELOG.md', encoding='utf-8') as f:\n changelog = f.read()\n\nsetup(\n name='vmware',\n version=VERSION,\n description='Azure VMware Solution commands.',\n long_description=readme + '\\n\\n' + changelog,\n long_description_content_type='text/markdown',\n license='MIT',\n author='Microsoft',\n author_email='[email protected]',\n url='https://github.com/Azure/az-vmware-cli',\n packages=find_packages(exclude=[\"tests\"]),\n install_requires=[],\n package_data={'azext_vmware': ['azext_metadata.json']}\n)\n",
"path": "src/vmware/setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\n\n# --------------------------------------------------------------------------------------------\n# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License. See License.txt in the project root for license information.\n# --------------------------------------------------------------------------------------------\n\nfrom io import open\nfrom setuptools import setup, find_packages\n\nVERSION = \"2.0.1\"\n\nwith open('README.md', encoding='utf-8') as f:\n readme = f.read()\nwith open('CHANGELOG.md', encoding='utf-8') as f:\n changelog = f.read()\n\nsetup(\n name='vmware',\n version=VERSION,\n description='Azure VMware Solution commands.',\n long_description=readme + '\\n\\n' + changelog,\n long_description_content_type='text/markdown',\n license='MIT',\n author='Microsoft',\n author_email='[email protected]',\n url='https://github.com/Azure/az-vmware-cli',\n packages=find_packages(exclude=[\"tests\"]),\n install_requires=[],\n package_data={'azext_vmware': ['azext_metadata.json']}\n)\n",
"path": "src/vmware/setup.py"
}
] | diff --git a/src/vmware/CHANGELOG.md b/src/vmware/CHANGELOG.md
index 32cde2511f8..c0910f47bdb 100644
--- a/src/vmware/CHANGELOG.md
+++ b/src/vmware/CHANGELOG.md
@@ -1,5 +1,8 @@
# Release History
+## 2.0.1 (2021-02)
+- Update the minimum az cli version to 2.11.0 [#3045](https://github.com/Azure/azure-cli-extensions/issues/3045)
+
## 2.0.0 (2021-02)
This version has **breaking changes** for scripts.
diff --git a/src/vmware/azext_vmware/azext_metadata.json b/src/vmware/azext_vmware/azext_metadata.json
index 341daf2272c..4e44ef19715 100644
--- a/src/vmware/azext_vmware/azext_metadata.json
+++ b/src/vmware/azext_vmware/azext_metadata.json
@@ -1,4 +1,4 @@
{
"azext.isPreview": false,
- "azext.minCliCoreVersion": "2.0.66"
+ "azext.minCliCoreVersion": "2.11.0"
}
\ No newline at end of file
diff --git a/src/vmware/setup.py b/src/vmware/setup.py
index 195ba52bb15..9649cf71529 100644
--- a/src/vmware/setup.py
+++ b/src/vmware/setup.py
@@ -8,7 +8,7 @@
from io import open
from setuptools import setup, find_packages
-VERSION = "2.0.0"
+VERSION = "2.0.1"
with open('README.md', encoding='utf-8') as f:
readme = f.read()
|
aio-libs__aiohttp-493 | [bug] URL parsing error in the web server
If you run this simple server example :
``` python
import asyncio
from aiohttp import web
@asyncio.coroutine
def handle(request):
    return web.Response(body=request.path.encode('utf8'))
@asyncio.coroutine
def init(loop):
app = web.Application(loop=loop)
app.router.add_route('GET', '/', handle)
srv = yield from loop.create_server(app.make_handler(), '127.0.0.1', 5555)
return srv
loop = asyncio.get_event_loop()
loop.run_until_complete(init(loop))
try:
loop.run_forever()
except KeyboardInterrupt:
pass
```
The following requests will get a `200 OK` response with the parsed path as content:
```
$ curl -X GET http://127.0.0.1:5555///
/
$ curl -X GET http://127.0.0.1:5555//foo/
/
```
As you can see, the path of the URL is not well parsed.
This bug resides in the `_splitted_path` non-data descriptor, which uses `urlsplit` on a bare path rather than a full URL. The consequence is that the second path segment is interpreted as a network location when the first one is empty.
I've not quite investigated a fix at the moment, but `_splitted_path` only being used by `raw_path` and `query_string` seems to be a plea to use a fake scheme and netloc for `urlsplit` to behave the expected way.
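A minimal illustration (not from the original report) of the `urlsplit` behavior described above, plus the fake scheme/netloc trick suggested as a possible fix:

```python
from urllib.parse import urlsplit

# Splitting a bare path: with an empty first segment, '//foo/' is read as a netloc.
print(urlsplit('//foo/'))
# SplitResult(scheme='', netloc='foo', path='/', query='', fragment='')

# Prefixing a fake scheme and netloc keeps the raw path intact.
print(urlsplit('http://dummy' + '//foo/').path)
# '//foo/'
```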
| [
{
"content": "__all__ = ('ContentCoding', 'Request', 'StreamResponse', 'Response')\n\nimport asyncio\nimport binascii\nimport cgi\nimport collections\nimport datetime\nimport http.cookies\nimport io\nimport json\nimport math\nimport time\nimport warnings\n\nimport enum\n\nfrom email.utils import parsedate\nfrom types import MappingProxyType\nfrom urllib.parse import urlsplit, parse_qsl, unquote\n\nfrom . import hdrs\nfrom .helpers import reify\nfrom .multidict import (CIMultiDictProxy,\n CIMultiDict,\n MultiDictProxy,\n MultiDict)\nfrom .protocol import Response as ResponseImpl, HttpVersion10\nfrom .streams import EOF_MARKER\n\n\nsentinel = object()\n\n\nclass HeadersMixin:\n\n _content_type = None\n _content_dict = None\n _stored_content_type = sentinel\n\n def _parse_content_type(self, raw):\n self._stored_content_type = raw\n if raw is None:\n # default value according to RFC 2616\n self._content_type = 'application/octet-stream'\n self._content_dict = {}\n else:\n self._content_type, self._content_dict = cgi.parse_header(raw)\n\n @property\n def content_type(self, _CONTENT_TYPE=hdrs.CONTENT_TYPE):\n \"\"\"The value of content part for Content-Type HTTP header.\"\"\"\n raw = self.headers.get(_CONTENT_TYPE)\n if self._stored_content_type != raw:\n self._parse_content_type(raw)\n return self._content_type\n\n @property\n def charset(self, _CONTENT_TYPE=hdrs.CONTENT_TYPE):\n \"\"\"The value of charset part for Content-Type HTTP header.\"\"\"\n raw = self.headers.get(_CONTENT_TYPE)\n if self._stored_content_type != raw:\n self._parse_content_type(raw)\n return self._content_dict.get('charset')\n\n @property\n def content_length(self, _CONTENT_LENGTH=hdrs.CONTENT_LENGTH):\n \"\"\"The value of Content-Length HTTP header.\"\"\"\n l = self.headers.get(_CONTENT_LENGTH)\n if l is None:\n return None\n else:\n return int(l)\n\nFileField = collections.namedtuple('Field', 'name filename file content_type')\n\n\nclass ContentCoding(enum.Enum):\n # The content codings that we have support for.\n #\n # Additional registered codings are listed at:\n # https://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding\n deflate = 'deflate'\n gzip = 'gzip'\n identity = 'identity'\n\n\n############################################################\n# HTTP Request\n############################################################\n\n\nclass Request(dict, HeadersMixin):\n\n POST_METHODS = {hdrs.METH_PATCH, hdrs.METH_POST, hdrs.METH_PUT,\n hdrs.METH_TRACE, hdrs.METH_DELETE}\n\n def __init__(self, app, message, payload, transport, reader, writer, *,\n _HOST=hdrs.HOST, secure_proxy_ssl_header=None):\n self._app = app\n self._version = message.version\n self._transport = transport\n self._reader = reader\n self._writer = writer\n self._method = message.method\n self._host = message.headers.get(_HOST)\n self._path_qs = message.path\n self._post = None\n self._post_files_cache = None\n self._headers = CIMultiDictProxy(message.headers)\n if self._version < HttpVersion10:\n self._keep_alive = False\n else:\n self._keep_alive = not message.should_close\n\n # matchdict, route_name, handler\n # or information about traversal lookup\n self._match_info = None # initialized after route resolving\n\n self._payload = payload\n self._cookies = None\n\n self._read_bytes = None\n self._has_body = not payload.at_eof()\n\n self._secure_proxy_ssl_header = secure_proxy_ssl_header\n\n @property\n def scheme(self):\n \"\"\"A string representing the scheme of the request.\n\n 'http' or 'https'.\n \"\"\"\n if 
self._transport.get_extra_info('sslcontext'):\n return 'https'\n secure_proxy_ssl_header = self._secure_proxy_ssl_header\n if secure_proxy_ssl_header is not None:\n header, value = secure_proxy_ssl_header\n if self._headers.get(header) == value:\n return 'https'\n return 'http'\n\n @property\n def method(self):\n \"\"\"Read only property for getting HTTP method.\n\n The value is upper-cased str like 'GET', 'POST', 'PUT' etc.\n \"\"\"\n return self._method\n\n @property\n def version(self):\n \"\"\"Read only property for getting HTTP version of request.\n\n Returns aiohttp.protocol.HttpVersion instance.\n \"\"\"\n return self._version\n\n @property\n def host(self):\n \"\"\"Read only property for getting *HOST* header of request.\n\n Returns str or None if HTTP request has no HOST header.\n \"\"\"\n return self._host\n\n @property\n def path_qs(self):\n \"\"\"The URL including PATH_INFO and the query string.\n\n E.g, /app/blog?id=10\n \"\"\"\n return self._path_qs\n\n @reify\n def _splitted_path(self):\n return urlsplit(self._path_qs)\n\n @property\n def raw_path(self):\n \"\"\" The URL including raw *PATH INFO* without the host or scheme.\n Warning, the path is unquoted and may contains non valid URL characters\n\n E.g., ``/my%2Fpath%7Cwith%21some%25strange%24characters``\n \"\"\"\n return self._splitted_path.path\n\n @reify\n def path(self):\n \"\"\"The URL including *PATH INFO* without the host or scheme.\n\n E.g., ``/app/blog``\n \"\"\"\n return unquote(self.raw_path)\n\n @reify\n def query_string(self):\n \"\"\"The query string in the URL.\n\n E.g., id=10\n \"\"\"\n return self._splitted_path.query\n\n @reify\n def GET(self):\n \"\"\"A multidict with all the variables in the query string.\n\n Lazy property.\n \"\"\"\n return MultiDictProxy(MultiDict(parse_qsl(self.query_string,\n keep_blank_values=True)))\n\n @reify\n def POST(self):\n \"\"\"A multidict with all the variables in the POST parameters.\n\n post() methods has to be called before using this attribute.\n \"\"\"\n if self._post is None:\n raise RuntimeError(\"POST is not available before post()\")\n return self._post\n\n @property\n def headers(self):\n \"\"\"A case-insensitive multidict proxy with all headers.\"\"\"\n return self._headers\n\n @property\n def if_modified_since(self, _IF_MODIFIED_SINCE=hdrs.IF_MODIFIED_SINCE):\n \"\"\"The value of If-Modified-Since HTTP header, or None.\n\n This header is represented as a `datetime` object.\n \"\"\"\n httpdate = self.headers.get(_IF_MODIFIED_SINCE)\n if httpdate is not None:\n timetuple = parsedate(httpdate)\n if timetuple is not None:\n return datetime.datetime(*timetuple[:6],\n tzinfo=datetime.timezone.utc)\n return None\n\n @property\n def keep_alive(self):\n \"\"\"Is keepalive enabled by client?\"\"\"\n return self._keep_alive\n\n @property\n def match_info(self):\n \"\"\"Result of route resolving.\"\"\"\n return self._match_info\n\n @property\n def app(self):\n \"\"\"Application instance.\"\"\"\n return self._app\n\n @property\n def transport(self):\n \"\"\"Transport used for request processing.\"\"\"\n return self._transport\n\n @property\n def cookies(self):\n \"\"\"Return request cookies.\n\n A read-only dictionary-like object.\n \"\"\"\n if self._cookies is None:\n raw = self.headers.get(hdrs.COOKIE, '')\n parsed = http.cookies.SimpleCookie(raw)\n self._cookies = MappingProxyType(\n {key: val.value for key, val in parsed.items()})\n return self._cookies\n\n @property\n def payload(self):\n \"\"\"Return raw payload stream.\"\"\"\n warnings.warn('use Request.content 
instead', DeprecationWarning)\n return self._payload\n\n @property\n def content(self):\n \"\"\"Return raw payload stream.\"\"\"\n return self._payload\n\n @property\n def has_body(self):\n \"\"\"Return True if request has HTTP BODY, False otherwise.\"\"\"\n return self._has_body\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Release request.\n\n Eat unread part of HTTP BODY if present.\n \"\"\"\n chunk = yield from self._payload.readany()\n while chunk is not EOF_MARKER or chunk:\n chunk = yield from self._payload.readany()\n\n @asyncio.coroutine\n def read(self):\n \"\"\"Read request body if present.\n\n Returns bytes object with full request content.\n \"\"\"\n if self._read_bytes is None:\n body = bytearray()\n while True:\n chunk = yield from self._payload.readany()\n body.extend(chunk)\n if chunk is EOF_MARKER:\n break\n self._read_bytes = bytes(body)\n return self._read_bytes\n\n @asyncio.coroutine\n def text(self):\n \"\"\"Return BODY as text using encoding from .charset.\"\"\"\n bytes_body = yield from self.read()\n encoding = self.charset or 'utf-8'\n return bytes_body.decode(encoding)\n\n @asyncio.coroutine\n def json(self, *, loader=json.loads):\n \"\"\"Return BODY as JSON.\"\"\"\n body = yield from self.text()\n return loader(body)\n\n @asyncio.coroutine\n def post(self):\n \"\"\"Return POST parameters.\"\"\"\n if self._post is not None:\n return self._post\n if self.method not in self.POST_METHODS:\n self._post = MultiDictProxy(MultiDict())\n return self._post\n\n content_type = self.content_type\n if (content_type not in ('',\n 'application/x-www-form-urlencoded',\n 'multipart/form-data')):\n self._post = MultiDictProxy(MultiDict())\n return self._post\n\n body = yield from self.read()\n content_charset = self.charset or 'utf-8'\n\n environ = {'REQUEST_METHOD': self.method,\n 'CONTENT_LENGTH': str(len(body)),\n 'QUERY_STRING': '',\n 'CONTENT_TYPE': self.headers.get(hdrs.CONTENT_TYPE)}\n\n fs = cgi.FieldStorage(fp=io.BytesIO(body),\n environ=environ,\n keep_blank_values=True,\n encoding=content_charset)\n\n supported_transfer_encoding = {\n 'base64': binascii.a2b_base64,\n 'quoted-printable': binascii.a2b_qp\n }\n\n out = MultiDict()\n _count = 1\n for field in fs.list or ():\n transfer_encoding = field.headers.get(\n hdrs.CONTENT_TRANSFER_ENCODING, None)\n if field.filename:\n ff = FileField(field.name,\n field.filename,\n field.file, # N.B. 
file closed error\n field.type)\n if self._post_files_cache is None:\n self._post_files_cache = {}\n self._post_files_cache[field.name+str(_count)] = field\n _count += 1\n out.add(field.name, ff)\n else:\n value = field.value\n if transfer_encoding in supported_transfer_encoding:\n # binascii accepts bytes\n value = value.encode('utf-8')\n value = supported_transfer_encoding[\n transfer_encoding](value)\n out.add(field.name, value)\n\n self._post = MultiDictProxy(out)\n return self._post\n\n def __repr__(self):\n return \"<{} {} {} >\".format(self.__class__.__name__,\n self.method, self.path)\n\n\n############################################################\n# HTTP Response classes\n############################################################\n\n\nclass StreamResponse(HeadersMixin):\n\n def __init__(self, *, status=200, reason=None, headers=None):\n self._body = None\n self._keep_alive = None\n self._chunked = False\n self._chunk_size = None\n self._compression = False\n self._compression_force = False\n self._headers = CIMultiDict()\n self._cookies = http.cookies.SimpleCookie()\n self.set_status(status, reason)\n\n self._req = None\n self._resp_impl = None\n self._eof_sent = False\n\n if headers is not None:\n self._headers.extend(headers)\n\n def _copy_cookies(self):\n for cookie in self._cookies.values():\n value = cookie.output(header='')[1:]\n self.headers.add(hdrs.SET_COOKIE, value)\n\n @property\n def started(self):\n return self._resp_impl is not None\n\n @property\n def status(self):\n return self._status\n\n @property\n def chunked(self):\n return self._chunked\n\n @property\n def compression(self):\n return self._compression\n\n @property\n def reason(self):\n return self._reason\n\n def set_status(self, status, reason=None):\n self._status = int(status)\n if reason is None:\n reason = ResponseImpl.calc_reason(status)\n self._reason = reason\n\n @property\n def keep_alive(self):\n return self._keep_alive\n\n def force_close(self):\n self._keep_alive = False\n\n def enable_chunked_encoding(self, chunk_size=None):\n \"\"\"Enables automatic chunked transfer encoding.\"\"\"\n self._chunked = True\n self._chunk_size = chunk_size\n\n def enable_compression(self, force=None):\n \"\"\"Enables response compression encoding.\"\"\"\n # Backwards compatibility for when force was a bool <0.17.\n if type(force) == bool:\n force = ContentCoding.deflate if force else ContentCoding.identity\n\n self._compression = True\n self._compression_force = force\n\n @property\n def headers(self):\n return self._headers\n\n @property\n def cookies(self):\n return self._cookies\n\n def set_cookie(self, name, value, *, expires=None,\n domain=None, max_age=None, path='/',\n secure=None, httponly=None, version=None):\n \"\"\"Set or update response cookie.\n\n Sets new cookie or updates existent with new value.\n Also updates only those params which are not None.\n \"\"\"\n\n old = self._cookies.get(name)\n if old is not None and old.coded_value == '':\n # deleted cookie\n self._cookies.pop(name, None)\n\n self._cookies[name] = value\n c = self._cookies[name]\n if expires is not None:\n c['expires'] = expires\n if domain is not None:\n c['domain'] = domain\n if max_age is not None:\n c['max-age'] = max_age\n if path is not None:\n c['path'] = path\n if secure is not None:\n c['secure'] = secure\n if httponly is not None:\n c['httponly'] = httponly\n if version is not None:\n c['version'] = version\n\n def del_cookie(self, name, *, domain=None, path='/'):\n \"\"\"Delete cookie.\n\n Creates new empty expired 
cookie.\n \"\"\"\n # TODO: do we need domain/path here?\n self._cookies.pop(name, None)\n self.set_cookie(name, '', max_age=0, domain=domain, path=path)\n\n @property\n def content_length(self):\n # Just a placeholder for adding setter\n return super().content_length\n\n @content_length.setter\n def content_length(self, value):\n if value is not None:\n value = int(value)\n # TODO: raise error if chunked enabled\n self.headers[hdrs.CONTENT_LENGTH] = str(value)\n else:\n self.headers.pop(hdrs.CONTENT_LENGTH, None)\n\n @property\n def content_type(self):\n # Just a placeholder for adding setter\n return super().content_type\n\n @content_type.setter\n def content_type(self, value):\n self.content_type # read header values if needed\n self._content_type = str(value)\n self._generate_content_type_header()\n\n @property\n def charset(self):\n # Just a placeholder for adding setter\n return super().charset\n\n @charset.setter\n def charset(self, value):\n ctype = self.content_type # read header values if needed\n if ctype == 'application/octet-stream':\n raise RuntimeError(\"Setting charset for application/octet-stream \"\n \"doesn't make sense, setup content_type first\")\n if value is None:\n self._content_dict.pop('charset', None)\n else:\n self._content_dict['charset'] = str(value).lower()\n self._generate_content_type_header()\n\n @property\n def last_modified(self, _LAST_MODIFIED=hdrs.LAST_MODIFIED):\n \"\"\"The value of Last-Modified HTTP header, or None.\n\n This header is represented as a `datetime` object.\n \"\"\"\n httpdate = self.headers.get(_LAST_MODIFIED)\n if httpdate is not None:\n timetuple = parsedate(httpdate)\n if timetuple is not None:\n return datetime.datetime(*timetuple[:6],\n tzinfo=datetime.timezone.utc)\n return None\n\n @last_modified.setter\n def last_modified(self, value):\n if value is None:\n if hdrs.LAST_MODIFIED in self.headers:\n del self.headers[hdrs.LAST_MODIFIED]\n elif isinstance(value, (int, float)):\n self.headers[hdrs.LAST_MODIFIED] = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\", time.gmtime(math.ceil(value)))\n elif isinstance(value, datetime.datetime):\n self.headers[hdrs.LAST_MODIFIED] = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\", value.utctimetuple())\n elif isinstance(value, str):\n self.headers[hdrs.LAST_MODIFIED] = value\n\n def _generate_content_type_header(self, CONTENT_TYPE=hdrs.CONTENT_TYPE):\n params = '; '.join(\"%s=%s\" % i for i in self._content_dict.items())\n if params:\n ctype = self._content_type + '; ' + params\n else:\n ctype = self._content_type\n self.headers[CONTENT_TYPE] = ctype\n\n def _start_pre_check(self, request):\n if self._resp_impl is not None:\n if self._req is not request:\n raise RuntimeError(\n 'Response has been started with different request.')\n else:\n return self._resp_impl\n else:\n return None\n\n def _start_compression(self, request):\n def start(coding):\n if coding != ContentCoding.identity:\n self.headers[hdrs.CONTENT_ENCODING] = coding.value\n self._resp_impl.add_compression_filter(coding.value)\n self.content_length = None\n\n if self._compression_force:\n start(self._compression_force)\n else:\n accept_encoding = request.headers.get(\n hdrs.ACCEPT_ENCODING, '').lower()\n for coding in ContentCoding:\n if coding.value in accept_encoding:\n start(coding)\n return\n\n def start(self, request):\n resp_impl = self._start_pre_check(request)\n if resp_impl is not None:\n return resp_impl\n\n self._req = request\n keep_alive = self._keep_alive\n if keep_alive is None:\n keep_alive = request.keep_alive\n 
self._keep_alive = keep_alive\n\n resp_impl = self._resp_impl = ResponseImpl(\n request._writer,\n self._status,\n request.version,\n not keep_alive,\n self._reason)\n\n self._copy_cookies()\n\n if self._compression:\n self._start_compression(request)\n\n if self._chunked:\n resp_impl.enable_chunked_encoding()\n if self._chunk_size:\n resp_impl.add_chunking_filter(self._chunk_size)\n\n headers = self.headers.items()\n for key, val in headers:\n resp_impl.add_header(key, val)\n\n resp_impl.send_headers()\n return resp_impl\n\n def write(self, data):\n assert isinstance(data, (bytes, bytearray, memoryview)), \\\n 'data argument must be byte-ish (%r)' % type(data)\n\n if self._eof_sent:\n raise RuntimeError(\"Cannot call write() after write_eof()\")\n if self._resp_impl is None:\n raise RuntimeError(\"Cannot call write() before start()\")\n\n if data:\n return self._resp_impl.write(data)\n else:\n return ()\n\n @asyncio.coroutine\n def drain(self):\n if self._resp_impl is None:\n raise RuntimeError(\"Response has not been started\")\n yield from self._resp_impl.transport.drain()\n\n @asyncio.coroutine\n def write_eof(self):\n if self._eof_sent:\n return\n if self._resp_impl is None:\n raise RuntimeError(\"Response has not been started\")\n\n yield from self._resp_impl.write_eof()\n self._eof_sent = True\n\n def __repr__(self):\n if self.started:\n info = \"{} {} \".format(self._req.method, self._req.path)\n else:\n info = \"not started\"\n return \"<{} {} {}>\".format(self.__class__.__name__,\n self.reason, info)\n\n\nclass Response(StreamResponse):\n\n def __init__(self, *, body=None, status=200,\n reason=None, text=None, headers=None, content_type=None):\n super().__init__(status=status, reason=reason, headers=headers)\n\n if body is not None and text is not None:\n raise ValueError(\"body and text are not allowed together.\")\n\n if text is not None:\n if hdrs.CONTENT_TYPE not in self.headers:\n # fast path for filling headers\n if not isinstance(text, str):\n raise TypeError('text argument must be str (%r)' %\n type(text))\n if content_type is None:\n content_type = 'text/plain'\n self.headers[hdrs.CONTENT_TYPE] = (\n content_type + '; charset=utf-8')\n self._content_type = content_type\n self._content_dict = {'charset': 'utf-8'}\n self.body = text.encode('utf-8')\n else:\n self.text = text\n else:\n if content_type:\n self.content_type = content_type\n if body is not None:\n self.body = body\n else:\n self.body = None\n\n @property\n def body(self):\n return self._body\n\n @body.setter\n def body(self, body):\n if body is not None and not isinstance(body, bytes):\n raise TypeError('body argument must be bytes (%r)' % type(body))\n self._body = body\n if body is not None:\n self.content_length = len(body)\n else:\n self.content_length = 0\n\n @property\n def text(self):\n if self._body is None:\n return None\n return self._body.decode(self.charset or 'utf-8')\n\n @text.setter\n def text(self, text):\n if text is not None and not isinstance(text, str):\n raise TypeError('text argument must be str (%r)' % type(text))\n\n if self.content_type == 'application/octet-stream':\n self.content_type = 'text/plain'\n if self.charset is None:\n self.charset = 'utf-8'\n\n self.body = text.encode(self.charset)\n\n @asyncio.coroutine\n def write_eof(self):\n body = self._body\n if body is not None:\n self.write(body)\n yield from super().write_eof()\n",
"path": "aiohttp/web_reqrep.py"
}
] | [
{
"content": "__all__ = ('ContentCoding', 'Request', 'StreamResponse', 'Response')\n\nimport asyncio\nimport binascii\nimport cgi\nimport collections\nimport datetime\nimport http.cookies\nimport io\nimport json\nimport math\nimport time\nimport warnings\n\nimport enum\n\nfrom email.utils import parsedate\nfrom types import MappingProxyType\nfrom urllib.parse import urlsplit, parse_qsl, unquote\n\nfrom . import hdrs\nfrom .helpers import reify\nfrom .multidict import (CIMultiDictProxy,\n CIMultiDict,\n MultiDictProxy,\n MultiDict)\nfrom .protocol import Response as ResponseImpl, HttpVersion10\nfrom .streams import EOF_MARKER\n\n\nsentinel = object()\n\n\nclass HeadersMixin:\n\n _content_type = None\n _content_dict = None\n _stored_content_type = sentinel\n\n def _parse_content_type(self, raw):\n self._stored_content_type = raw\n if raw is None:\n # default value according to RFC 2616\n self._content_type = 'application/octet-stream'\n self._content_dict = {}\n else:\n self._content_type, self._content_dict = cgi.parse_header(raw)\n\n @property\n def content_type(self, _CONTENT_TYPE=hdrs.CONTENT_TYPE):\n \"\"\"The value of content part for Content-Type HTTP header.\"\"\"\n raw = self.headers.get(_CONTENT_TYPE)\n if self._stored_content_type != raw:\n self._parse_content_type(raw)\n return self._content_type\n\n @property\n def charset(self, _CONTENT_TYPE=hdrs.CONTENT_TYPE):\n \"\"\"The value of charset part for Content-Type HTTP header.\"\"\"\n raw = self.headers.get(_CONTENT_TYPE)\n if self._stored_content_type != raw:\n self._parse_content_type(raw)\n return self._content_dict.get('charset')\n\n @property\n def content_length(self, _CONTENT_LENGTH=hdrs.CONTENT_LENGTH):\n \"\"\"The value of Content-Length HTTP header.\"\"\"\n l = self.headers.get(_CONTENT_LENGTH)\n if l is None:\n return None\n else:\n return int(l)\n\nFileField = collections.namedtuple('Field', 'name filename file content_type')\n\n\nclass ContentCoding(enum.Enum):\n # The content codings that we have support for.\n #\n # Additional registered codings are listed at:\n # https://www.iana.org/assignments/http-parameters/http-parameters.xhtml#content-coding\n deflate = 'deflate'\n gzip = 'gzip'\n identity = 'identity'\n\n\n############################################################\n# HTTP Request\n############################################################\n\n\nclass Request(dict, HeadersMixin):\n\n POST_METHODS = {hdrs.METH_PATCH, hdrs.METH_POST, hdrs.METH_PUT,\n hdrs.METH_TRACE, hdrs.METH_DELETE}\n\n def __init__(self, app, message, payload, transport, reader, writer, *,\n _HOST=hdrs.HOST, secure_proxy_ssl_header=None):\n self._app = app\n self._version = message.version\n self._transport = transport\n self._reader = reader\n self._writer = writer\n self._method = message.method\n self._host = message.headers.get(_HOST)\n self._path_qs = message.path\n self._post = None\n self._post_files_cache = None\n self._headers = CIMultiDictProxy(message.headers)\n if self._version < HttpVersion10:\n self._keep_alive = False\n else:\n self._keep_alive = not message.should_close\n\n # matchdict, route_name, handler\n # or information about traversal lookup\n self._match_info = None # initialized after route resolving\n\n self._payload = payload\n self._cookies = None\n\n self._read_bytes = None\n self._has_body = not payload.at_eof()\n\n self._secure_proxy_ssl_header = secure_proxy_ssl_header\n\n @property\n def scheme(self):\n \"\"\"A string representing the scheme of the request.\n\n 'http' or 'https'.\n \"\"\"\n if 
self._transport.get_extra_info('sslcontext'):\n return 'https'\n secure_proxy_ssl_header = self._secure_proxy_ssl_header\n if secure_proxy_ssl_header is not None:\n header, value = secure_proxy_ssl_header\n if self._headers.get(header) == value:\n return 'https'\n return 'http'\n\n @property\n def method(self):\n \"\"\"Read only property for getting HTTP method.\n\n The value is upper-cased str like 'GET', 'POST', 'PUT' etc.\n \"\"\"\n return self._method\n\n @property\n def version(self):\n \"\"\"Read only property for getting HTTP version of request.\n\n Returns aiohttp.protocol.HttpVersion instance.\n \"\"\"\n return self._version\n\n @property\n def host(self):\n \"\"\"Read only property for getting *HOST* header of request.\n\n Returns str or None if HTTP request has no HOST header.\n \"\"\"\n return self._host\n\n @property\n def path_qs(self):\n \"\"\"The URL including PATH_INFO and the query string.\n\n E.g, /app/blog?id=10\n \"\"\"\n return self._path_qs\n\n @reify\n def _splitted_path(self):\n url = '{}://{}{}'.format(self.scheme, self.host, self._path_qs)\n return urlsplit(url)\n\n @property\n def raw_path(self):\n \"\"\" The URL including raw *PATH INFO* without the host or scheme.\n Warning, the path is unquoted and may contains non valid URL characters\n\n E.g., ``/my%2Fpath%7Cwith%21some%25strange%24characters``\n \"\"\"\n return self._splitted_path.path\n\n @reify\n def path(self):\n \"\"\"The URL including *PATH INFO* without the host or scheme.\n\n E.g., ``/app/blog``\n \"\"\"\n return unquote(self.raw_path)\n\n @reify\n def query_string(self):\n \"\"\"The query string in the URL.\n\n E.g., id=10\n \"\"\"\n return self._splitted_path.query\n\n @reify\n def GET(self):\n \"\"\"A multidict with all the variables in the query string.\n\n Lazy property.\n \"\"\"\n return MultiDictProxy(MultiDict(parse_qsl(self.query_string,\n keep_blank_values=True)))\n\n @reify\n def POST(self):\n \"\"\"A multidict with all the variables in the POST parameters.\n\n post() methods has to be called before using this attribute.\n \"\"\"\n if self._post is None:\n raise RuntimeError(\"POST is not available before post()\")\n return self._post\n\n @property\n def headers(self):\n \"\"\"A case-insensitive multidict proxy with all headers.\"\"\"\n return self._headers\n\n @property\n def if_modified_since(self, _IF_MODIFIED_SINCE=hdrs.IF_MODIFIED_SINCE):\n \"\"\"The value of If-Modified-Since HTTP header, or None.\n\n This header is represented as a `datetime` object.\n \"\"\"\n httpdate = self.headers.get(_IF_MODIFIED_SINCE)\n if httpdate is not None:\n timetuple = parsedate(httpdate)\n if timetuple is not None:\n return datetime.datetime(*timetuple[:6],\n tzinfo=datetime.timezone.utc)\n return None\n\n @property\n def keep_alive(self):\n \"\"\"Is keepalive enabled by client?\"\"\"\n return self._keep_alive\n\n @property\n def match_info(self):\n \"\"\"Result of route resolving.\"\"\"\n return self._match_info\n\n @property\n def app(self):\n \"\"\"Application instance.\"\"\"\n return self._app\n\n @property\n def transport(self):\n \"\"\"Transport used for request processing.\"\"\"\n return self._transport\n\n @property\n def cookies(self):\n \"\"\"Return request cookies.\n\n A read-only dictionary-like object.\n \"\"\"\n if self._cookies is None:\n raw = self.headers.get(hdrs.COOKIE, '')\n parsed = http.cookies.SimpleCookie(raw)\n self._cookies = MappingProxyType(\n {key: val.value for key, val in parsed.items()})\n return self._cookies\n\n @property\n def payload(self):\n \"\"\"Return raw 
payload stream.\"\"\"\n warnings.warn('use Request.content instead', DeprecationWarning)\n return self._payload\n\n @property\n def content(self):\n \"\"\"Return raw payload stream.\"\"\"\n return self._payload\n\n @property\n def has_body(self):\n \"\"\"Return True if request has HTTP BODY, False otherwise.\"\"\"\n return self._has_body\n\n @asyncio.coroutine\n def release(self):\n \"\"\"Release request.\n\n Eat unread part of HTTP BODY if present.\n \"\"\"\n chunk = yield from self._payload.readany()\n while chunk is not EOF_MARKER or chunk:\n chunk = yield from self._payload.readany()\n\n @asyncio.coroutine\n def read(self):\n \"\"\"Read request body if present.\n\n Returns bytes object with full request content.\n \"\"\"\n if self._read_bytes is None:\n body = bytearray()\n while True:\n chunk = yield from self._payload.readany()\n body.extend(chunk)\n if chunk is EOF_MARKER:\n break\n self._read_bytes = bytes(body)\n return self._read_bytes\n\n @asyncio.coroutine\n def text(self):\n \"\"\"Return BODY as text using encoding from .charset.\"\"\"\n bytes_body = yield from self.read()\n encoding = self.charset or 'utf-8'\n return bytes_body.decode(encoding)\n\n @asyncio.coroutine\n def json(self, *, loader=json.loads):\n \"\"\"Return BODY as JSON.\"\"\"\n body = yield from self.text()\n return loader(body)\n\n @asyncio.coroutine\n def post(self):\n \"\"\"Return POST parameters.\"\"\"\n if self._post is not None:\n return self._post\n if self.method not in self.POST_METHODS:\n self._post = MultiDictProxy(MultiDict())\n return self._post\n\n content_type = self.content_type\n if (content_type not in ('',\n 'application/x-www-form-urlencoded',\n 'multipart/form-data')):\n self._post = MultiDictProxy(MultiDict())\n return self._post\n\n body = yield from self.read()\n content_charset = self.charset or 'utf-8'\n\n environ = {'REQUEST_METHOD': self.method,\n 'CONTENT_LENGTH': str(len(body)),\n 'QUERY_STRING': '',\n 'CONTENT_TYPE': self.headers.get(hdrs.CONTENT_TYPE)}\n\n fs = cgi.FieldStorage(fp=io.BytesIO(body),\n environ=environ,\n keep_blank_values=True,\n encoding=content_charset)\n\n supported_transfer_encoding = {\n 'base64': binascii.a2b_base64,\n 'quoted-printable': binascii.a2b_qp\n }\n\n out = MultiDict()\n _count = 1\n for field in fs.list or ():\n transfer_encoding = field.headers.get(\n hdrs.CONTENT_TRANSFER_ENCODING, None)\n if field.filename:\n ff = FileField(field.name,\n field.filename,\n field.file, # N.B. 
file closed error\n field.type)\n if self._post_files_cache is None:\n self._post_files_cache = {}\n self._post_files_cache[field.name+str(_count)] = field\n _count += 1\n out.add(field.name, ff)\n else:\n value = field.value\n if transfer_encoding in supported_transfer_encoding:\n # binascii accepts bytes\n value = value.encode('utf-8')\n value = supported_transfer_encoding[\n transfer_encoding](value)\n out.add(field.name, value)\n\n self._post = MultiDictProxy(out)\n return self._post\n\n def __repr__(self):\n return \"<{} {} {} >\".format(self.__class__.__name__,\n self.method, self.path)\n\n\n############################################################\n# HTTP Response classes\n############################################################\n\n\nclass StreamResponse(HeadersMixin):\n\n def __init__(self, *, status=200, reason=None, headers=None):\n self._body = None\n self._keep_alive = None\n self._chunked = False\n self._chunk_size = None\n self._compression = False\n self._compression_force = False\n self._headers = CIMultiDict()\n self._cookies = http.cookies.SimpleCookie()\n self.set_status(status, reason)\n\n self._req = None\n self._resp_impl = None\n self._eof_sent = False\n\n if headers is not None:\n self._headers.extend(headers)\n\n def _copy_cookies(self):\n for cookie in self._cookies.values():\n value = cookie.output(header='')[1:]\n self.headers.add(hdrs.SET_COOKIE, value)\n\n @property\n def started(self):\n return self._resp_impl is not None\n\n @property\n def status(self):\n return self._status\n\n @property\n def chunked(self):\n return self._chunked\n\n @property\n def compression(self):\n return self._compression\n\n @property\n def reason(self):\n return self._reason\n\n def set_status(self, status, reason=None):\n self._status = int(status)\n if reason is None:\n reason = ResponseImpl.calc_reason(status)\n self._reason = reason\n\n @property\n def keep_alive(self):\n return self._keep_alive\n\n def force_close(self):\n self._keep_alive = False\n\n def enable_chunked_encoding(self, chunk_size=None):\n \"\"\"Enables automatic chunked transfer encoding.\"\"\"\n self._chunked = True\n self._chunk_size = chunk_size\n\n def enable_compression(self, force=None):\n \"\"\"Enables response compression encoding.\"\"\"\n # Backwards compatibility for when force was a bool <0.17.\n if type(force) == bool:\n force = ContentCoding.deflate if force else ContentCoding.identity\n\n self._compression = True\n self._compression_force = force\n\n @property\n def headers(self):\n return self._headers\n\n @property\n def cookies(self):\n return self._cookies\n\n def set_cookie(self, name, value, *, expires=None,\n domain=None, max_age=None, path='/',\n secure=None, httponly=None, version=None):\n \"\"\"Set or update response cookie.\n\n Sets new cookie or updates existent with new value.\n Also updates only those params which are not None.\n \"\"\"\n\n old = self._cookies.get(name)\n if old is not None and old.coded_value == '':\n # deleted cookie\n self._cookies.pop(name, None)\n\n self._cookies[name] = value\n c = self._cookies[name]\n if expires is not None:\n c['expires'] = expires\n if domain is not None:\n c['domain'] = domain\n if max_age is not None:\n c['max-age'] = max_age\n if path is not None:\n c['path'] = path\n if secure is not None:\n c['secure'] = secure\n if httponly is not None:\n c['httponly'] = httponly\n if version is not None:\n c['version'] = version\n\n def del_cookie(self, name, *, domain=None, path='/'):\n \"\"\"Delete cookie.\n\n Creates new empty expired 
cookie.\n \"\"\"\n # TODO: do we need domain/path here?\n self._cookies.pop(name, None)\n self.set_cookie(name, '', max_age=0, domain=domain, path=path)\n\n @property\n def content_length(self):\n # Just a placeholder for adding setter\n return super().content_length\n\n @content_length.setter\n def content_length(self, value):\n if value is not None:\n value = int(value)\n # TODO: raise error if chunked enabled\n self.headers[hdrs.CONTENT_LENGTH] = str(value)\n else:\n self.headers.pop(hdrs.CONTENT_LENGTH, None)\n\n @property\n def content_type(self):\n # Just a placeholder for adding setter\n return super().content_type\n\n @content_type.setter\n def content_type(self, value):\n self.content_type # read header values if needed\n self._content_type = str(value)\n self._generate_content_type_header()\n\n @property\n def charset(self):\n # Just a placeholder for adding setter\n return super().charset\n\n @charset.setter\n def charset(self, value):\n ctype = self.content_type # read header values if needed\n if ctype == 'application/octet-stream':\n raise RuntimeError(\"Setting charset for application/octet-stream \"\n \"doesn't make sense, setup content_type first\")\n if value is None:\n self._content_dict.pop('charset', None)\n else:\n self._content_dict['charset'] = str(value).lower()\n self._generate_content_type_header()\n\n @property\n def last_modified(self, _LAST_MODIFIED=hdrs.LAST_MODIFIED):\n \"\"\"The value of Last-Modified HTTP header, or None.\n\n This header is represented as a `datetime` object.\n \"\"\"\n httpdate = self.headers.get(_LAST_MODIFIED)\n if httpdate is not None:\n timetuple = parsedate(httpdate)\n if timetuple is not None:\n return datetime.datetime(*timetuple[:6],\n tzinfo=datetime.timezone.utc)\n return None\n\n @last_modified.setter\n def last_modified(self, value):\n if value is None:\n if hdrs.LAST_MODIFIED in self.headers:\n del self.headers[hdrs.LAST_MODIFIED]\n elif isinstance(value, (int, float)):\n self.headers[hdrs.LAST_MODIFIED] = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\", time.gmtime(math.ceil(value)))\n elif isinstance(value, datetime.datetime):\n self.headers[hdrs.LAST_MODIFIED] = time.strftime(\n \"%a, %d %b %Y %H:%M:%S GMT\", value.utctimetuple())\n elif isinstance(value, str):\n self.headers[hdrs.LAST_MODIFIED] = value\n\n def _generate_content_type_header(self, CONTENT_TYPE=hdrs.CONTENT_TYPE):\n params = '; '.join(\"%s=%s\" % i for i in self._content_dict.items())\n if params:\n ctype = self._content_type + '; ' + params\n else:\n ctype = self._content_type\n self.headers[CONTENT_TYPE] = ctype\n\n def _start_pre_check(self, request):\n if self._resp_impl is not None:\n if self._req is not request:\n raise RuntimeError(\n 'Response has been started with different request.')\n else:\n return self._resp_impl\n else:\n return None\n\n def _start_compression(self, request):\n def start(coding):\n if coding != ContentCoding.identity:\n self.headers[hdrs.CONTENT_ENCODING] = coding.value\n self._resp_impl.add_compression_filter(coding.value)\n self.content_length = None\n\n if self._compression_force:\n start(self._compression_force)\n else:\n accept_encoding = request.headers.get(\n hdrs.ACCEPT_ENCODING, '').lower()\n for coding in ContentCoding:\n if coding.value in accept_encoding:\n start(coding)\n return\n\n def start(self, request):\n resp_impl = self._start_pre_check(request)\n if resp_impl is not None:\n return resp_impl\n\n self._req = request\n keep_alive = self._keep_alive\n if keep_alive is None:\n keep_alive = request.keep_alive\n 
self._keep_alive = keep_alive\n\n resp_impl = self._resp_impl = ResponseImpl(\n request._writer,\n self._status,\n request.version,\n not keep_alive,\n self._reason)\n\n self._copy_cookies()\n\n if self._compression:\n self._start_compression(request)\n\n if self._chunked:\n resp_impl.enable_chunked_encoding()\n if self._chunk_size:\n resp_impl.add_chunking_filter(self._chunk_size)\n\n headers = self.headers.items()\n for key, val in headers:\n resp_impl.add_header(key, val)\n\n resp_impl.send_headers()\n return resp_impl\n\n def write(self, data):\n assert isinstance(data, (bytes, bytearray, memoryview)), \\\n 'data argument must be byte-ish (%r)' % type(data)\n\n if self._eof_sent:\n raise RuntimeError(\"Cannot call write() after write_eof()\")\n if self._resp_impl is None:\n raise RuntimeError(\"Cannot call write() before start()\")\n\n if data:\n return self._resp_impl.write(data)\n else:\n return ()\n\n @asyncio.coroutine\n def drain(self):\n if self._resp_impl is None:\n raise RuntimeError(\"Response has not been started\")\n yield from self._resp_impl.transport.drain()\n\n @asyncio.coroutine\n def write_eof(self):\n if self._eof_sent:\n return\n if self._resp_impl is None:\n raise RuntimeError(\"Response has not been started\")\n\n yield from self._resp_impl.write_eof()\n self._eof_sent = True\n\n def __repr__(self):\n if self.started:\n info = \"{} {} \".format(self._req.method, self._req.path)\n else:\n info = \"not started\"\n return \"<{} {} {}>\".format(self.__class__.__name__,\n self.reason, info)\n\n\nclass Response(StreamResponse):\n\n def __init__(self, *, body=None, status=200,\n reason=None, text=None, headers=None, content_type=None):\n super().__init__(status=status, reason=reason, headers=headers)\n\n if body is not None and text is not None:\n raise ValueError(\"body and text are not allowed together.\")\n\n if text is not None:\n if hdrs.CONTENT_TYPE not in self.headers:\n # fast path for filling headers\n if not isinstance(text, str):\n raise TypeError('text argument must be str (%r)' %\n type(text))\n if content_type is None:\n content_type = 'text/plain'\n self.headers[hdrs.CONTENT_TYPE] = (\n content_type + '; charset=utf-8')\n self._content_type = content_type\n self._content_dict = {'charset': 'utf-8'}\n self.body = text.encode('utf-8')\n else:\n self.text = text\n else:\n if content_type:\n self.content_type = content_type\n if body is not None:\n self.body = body\n else:\n self.body = None\n\n @property\n def body(self):\n return self._body\n\n @body.setter\n def body(self, body):\n if body is not None and not isinstance(body, bytes):\n raise TypeError('body argument must be bytes (%r)' % type(body))\n self._body = body\n if body is not None:\n self.content_length = len(body)\n else:\n self.content_length = 0\n\n @property\n def text(self):\n if self._body is None:\n return None\n return self._body.decode(self.charset or 'utf-8')\n\n @text.setter\n def text(self, text):\n if text is not None and not isinstance(text, str):\n raise TypeError('text argument must be str (%r)' % type(text))\n\n if self.content_type == 'application/octet-stream':\n self.content_type = 'text/plain'\n if self.charset is None:\n self.charset = 'utf-8'\n\n self.body = text.encode(self.charset)\n\n @asyncio.coroutine\n def write_eof(self):\n body = self._body\n if body is not None:\n self.write(body)\n yield from super().write_eof()\n",
"path": "aiohttp/web_reqrep.py"
}
] | diff --git a/aiohttp/web_reqrep.py b/aiohttp/web_reqrep.py
index 6f3e566c8c6..c3e8abc4067 100644
--- a/aiohttp/web_reqrep.py
+++ b/aiohttp/web_reqrep.py
@@ -173,7 +173,8 @@ def path_qs(self):
@reify
def _splitted_path(self):
- return urlsplit(self._path_qs)
+ url = '{}://{}{}'.format(self.scheme, self.host, self._path_qs)
+ return urlsplit(url)
@property
def raw_path(self):
diff --git a/tests/test_web.py b/tests/test_web.py
index b7626d80e2c..3968eab1843 100644
--- a/tests/test_web.py
+++ b/tests/test_web.py
@@ -16,6 +16,7 @@ def setUp(self):
def tearDown(self):
self.loop.close()
+ @unittest.skip('moved to test_web_functional')
def test_handler_returns_not_response(self):
app = web.Application(loop=self.loop)
diff --git a/tests/test_web_functional.py b/tests/test_web_functional.py
index c1ec94cb4ae..8493bdeec2c 100644
--- a/tests/test_web_functional.py
+++ b/tests/test_web_functional.py
@@ -66,6 +66,20 @@ def go():
self.loop.run_until_complete(go())
+ def test_handler_returns_not_response(self):
+
+ @asyncio.coroutine
+ def handler(request):
+ return 'abc'
+
+ @asyncio.coroutine
+ def go():
+ _, _, url = yield from self.create_server('GET', '/', handler)
+ resp = yield from request('GET', url, loop=self.loop)
+ self.assertEqual(500, resp.status)
+
+ self.loop.run_until_complete(go())
+
def test_post_form(self):
@asyncio.coroutine
diff --git a/tests/test_web_request.py b/tests/test_web_request.py
index 2dc3bf9b1b3..60c77bf3af5 100644
--- a/tests/test_web_request.py
+++ b/tests/test_web_request.py
@@ -64,6 +64,10 @@ def test_ctor(self):
self.assertIs(self.transport, req.transport)
self.assertTrue(req.keep_alive)
+ def test_doubleslashes(self):
+ req = self.make_request('GET', '//foo/')
+ self.assertEqual('//foo/', req.path)
+
def test_POST(self):
req = self.make_request('POST', '/')
with self.assertRaises(RuntimeError):
|
psf__black-4019 | Internal error on a specific file
**Describe the bug**
`black` reports an internal error when formatting a specific file.
```
error: cannot format /home/nicpa/codes/sisl/src/sisl_toolbox/siesta/minimizer/_metric_siesta.py: INTERNAL ERROR: Black produced code that is not equivalent to the source. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk__3mh1ucd.log
```
**To Reproduce**
1. Download [this file](https://github.com/zerothi/sisl/blob/5a63302b57fcb38d7460507bf000f077655ac664/src/sisl_toolbox/siesta/minimizer/_metric_siesta.py)
2. Run `black` on the file. (I have done `pip install -U black` as of today!)
My pyproject.toml configuration file has this:
```toml
[tool.black]
line-length = 88
target-version = ["py38", "py39", "py310", "py311", "py312"]
```
The resulting error is:
```
error: cannot format /home/nicpa/codes/sisl/src/sisl_toolbox/siesta/minimizer/_metric_siesta.py: INTERNAL ERROR: Black produced code that is not equivalent to the source. Please report a bug on https://github.com/psf/black/issues. This diff might be helpful: /tmp/blk__3mh1ucd.log
```
Here is the attached diff-log file:
```diff
--- src
+++ dst
@@ -2307,16 +2307,10 @@
body=
Expr(
value=
JoinedStr(
values=
- Constant(
- kind=
- None, # NoneType
- value=
- '', # str
- ) # /Constant
FormattedValue(
conversion=
-1, # int
format_spec=
None, # NoneType
@@ -3263,16 +3257,10 @@
body=
Expr(
value=
JoinedStr(
values=
- Constant(
- kind=
- None, # NoneType
- value=
- '', # str
- ) # /Constant
FormattedValue(
conversion=
-1, # int
format_spec=
None, # NoneType
@@ -4273,16 +4261,10 @@
body=
Expr(
value=
JoinedStr(
values=
- Constant(
- kind=
- None, # NoneType
- value=
- '', # str
- ) # /Constant
FormattedValue(
conversion=
-1, # int
format_spec=
None, # NoneType
```
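
The diff above shows the reformatted file losing a constant segment of an f-string (`JoinedStr`), which is exactly the kind of difference Black's AST-equivalence safety check is designed to catch. As a rough, simplified stand-in for that check (this is not Black's actual implementation, and `value` is just a placeholder name), a comparison along these lines would flag such a change:

```python
import ast

def ast_fingerprint(source: str) -> str:
    # Dump the parsed tree without line/column attributes so that purely
    # cosmetic reformatting compares equal.
    return ast.dump(ast.parse(source), include_attributes=False)

# Hypothetical snippet: dropping the literal part of an f-string changes
# the AST (the JoinedStr loses a Constant node), so the check fails.
src = 'f" {value}"'
dst = 'f"{value}"'
print(ast_fingerprint(src) == ast_fingerprint(dst))  # False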
**Expected behavior**
**Environment**
- Black's version:
> black --version
> black, 23.10.1 (compiled: yes)
> Python (CPython) 3.11.6
- OS and Python version: Linux/debian, Python 3.11.6
| [
{
"content": "\"\"\"\nblib2to3 Node/Leaf transformation-related utility functions.\n\"\"\"\n\nimport sys\nfrom typing import Final, Generic, Iterator, List, Optional, Set, Tuple, TypeVar, Union\n\nif sys.version_info >= (3, 10):\n from typing import TypeGuard\nelse:\n from typing_extensions import TypeGuard\n\nfrom mypy_extensions import mypyc_attr\n\nfrom black.cache import CACHE_DIR\nfrom black.mode import Mode, Preview\nfrom black.strings import get_string_prefix, has_triple_quotes\nfrom blib2to3 import pygram\nfrom blib2to3.pgen2 import token\nfrom blib2to3.pytree import NL, Leaf, Node, type_repr\n\npygram.initialize(CACHE_DIR)\nsyms: Final = pygram.python_symbols\n\n\n# types\nT = TypeVar(\"T\")\nLN = Union[Leaf, Node]\nLeafID = int\nNodeType = int\n\n\nWHITESPACE: Final = {token.DEDENT, token.INDENT, token.NEWLINE}\nSTATEMENT: Final = {\n syms.if_stmt,\n syms.while_stmt,\n syms.for_stmt,\n syms.try_stmt,\n syms.except_clause,\n syms.with_stmt,\n syms.funcdef,\n syms.classdef,\n syms.match_stmt,\n syms.case_block,\n}\nSTANDALONE_COMMENT: Final = 153\ntoken.tok_name[STANDALONE_COMMENT] = \"STANDALONE_COMMENT\"\nLOGIC_OPERATORS: Final = {\"and\", \"or\"}\nCOMPARATORS: Final = {\n token.LESS,\n token.GREATER,\n token.EQEQUAL,\n token.NOTEQUAL,\n token.LESSEQUAL,\n token.GREATEREQUAL,\n}\nMATH_OPERATORS: Final = {\n token.VBAR,\n token.CIRCUMFLEX,\n token.AMPER,\n token.LEFTSHIFT,\n token.RIGHTSHIFT,\n token.PLUS,\n token.MINUS,\n token.STAR,\n token.SLASH,\n token.DOUBLESLASH,\n token.PERCENT,\n token.AT,\n token.TILDE,\n token.DOUBLESTAR,\n}\nSTARS: Final = {token.STAR, token.DOUBLESTAR}\nVARARGS_SPECIALS: Final = STARS | {token.SLASH}\nVARARGS_PARENTS: Final = {\n syms.arglist,\n syms.argument, # double star in arglist\n syms.trailer, # single argument to call\n syms.typedargslist,\n syms.varargslist, # lambdas\n}\nUNPACKING_PARENTS: Final = {\n syms.atom, # single element of a list or set literal\n syms.dictsetmaker,\n syms.listmaker,\n syms.testlist_gexp,\n syms.testlist_star_expr,\n syms.subject_expr,\n syms.pattern,\n}\nTEST_DESCENDANTS: Final = {\n syms.test,\n syms.lambdef,\n syms.or_test,\n syms.and_test,\n syms.not_test,\n syms.comparison,\n syms.star_expr,\n syms.expr,\n syms.xor_expr,\n syms.and_expr,\n syms.shift_expr,\n syms.arith_expr,\n syms.trailer,\n syms.term,\n syms.power,\n}\nTYPED_NAMES: Final = {syms.tname, syms.tname_star}\nASSIGNMENTS: Final = {\n \"=\",\n \"+=\",\n \"-=\",\n \"*=\",\n \"@=\",\n \"/=\",\n \"%=\",\n \"&=\",\n \"|=\",\n \"^=\",\n \"<<=\",\n \">>=\",\n \"**=\",\n \"//=\",\n}\n\nIMPLICIT_TUPLE: Final = {syms.testlist, syms.testlist_star_expr, syms.exprlist}\nBRACKET: Final = {\n token.LPAR: token.RPAR,\n token.LSQB: token.RSQB,\n token.LBRACE: token.RBRACE,\n}\nOPENING_BRACKETS: Final = set(BRACKET.keys())\nCLOSING_BRACKETS: Final = set(BRACKET.values())\nBRACKETS: Final = OPENING_BRACKETS | CLOSING_BRACKETS\nALWAYS_NO_SPACE: Final = CLOSING_BRACKETS | {token.COMMA, STANDALONE_COMMENT}\n\nRARROW = 55\n\n\n@mypyc_attr(allow_interpreted_subclasses=True)\nclass Visitor(Generic[T]):\n \"\"\"Basic lib2to3 visitor that yields things of type `T` on `visit()`.\"\"\"\n\n def visit(self, node: LN) -> Iterator[T]:\n \"\"\"Main method to visit `node` and its children.\n\n It tries to find a `visit_*()` method for the given `node.type`, like\n `visit_simple_stmt` for Node objects or `visit_INDENT` for Leaf objects.\n If no dedicated `visit_*()` method is found, chooses `visit_default()`\n instead.\n\n Then yields objects of type `T` from the selected visitor.\n 
\"\"\"\n if node.type < 256:\n name = token.tok_name[node.type]\n else:\n name = str(type_repr(node.type))\n # We explicitly branch on whether a visitor exists (instead of\n # using self.visit_default as the default arg to getattr) in order\n # to save needing to create a bound method object and so mypyc can\n # generate a native call to visit_default.\n visitf = getattr(self, f\"visit_{name}\", None)\n if visitf:\n yield from visitf(node)\n else:\n yield from self.visit_default(node)\n\n def visit_default(self, node: LN) -> Iterator[T]:\n \"\"\"Default `visit_*()` implementation. Recurses to children of `node`.\"\"\"\n if isinstance(node, Node):\n for child in node.children:\n yield from self.visit(child)\n\n\ndef whitespace(leaf: Leaf, *, complex_subscript: bool, mode: Mode) -> str: # noqa: C901\n \"\"\"Return whitespace prefix if needed for the given `leaf`.\n\n `complex_subscript` signals whether the given leaf is part of a subscription\n which has non-trivial arguments, like arithmetic expressions or function calls.\n \"\"\"\n NO: Final[str] = \"\"\n SPACE: Final[str] = \" \"\n DOUBLESPACE: Final[str] = \" \"\n t = leaf.type\n p = leaf.parent\n v = leaf.value\n if t in ALWAYS_NO_SPACE:\n return NO\n\n if t == token.COMMENT:\n return DOUBLESPACE\n\n assert p is not None, f\"INTERNAL ERROR: hand-made leaf without parent: {leaf!r}\"\n if t == token.COLON and p.type not in {\n syms.subscript,\n syms.subscriptlist,\n syms.sliceop,\n }:\n return NO\n\n prev = leaf.prev_sibling\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type in OPENING_BRACKETS:\n return NO\n\n if t == token.COLON:\n if prevp.type == token.COLON:\n return NO\n\n elif prevp.type != token.COMMA and not complex_subscript:\n return NO\n\n return SPACE\n\n if prevp.type == token.EQUAL:\n if prevp.parent:\n if prevp.parent.type in {\n syms.arglist,\n syms.argument,\n syms.parameters,\n syms.varargslist,\n }:\n return NO\n\n elif prevp.parent.type == syms.typedargslist:\n # A bit hacky: if the equal sign has whitespace, it means we\n # previously found it's a typed argument. So, we're using\n # that, too.\n return prevp.prefix\n\n elif (\n prevp.type == token.STAR\n and parent_type(prevp) == syms.star_expr\n and parent_type(prevp.parent) == syms.subscriptlist\n ):\n # No space between typevar tuples.\n return NO\n\n elif prevp.type in VARARGS_SPECIALS:\n if is_vararg(prevp, within=VARARGS_PARENTS | UNPACKING_PARENTS):\n return NO\n\n elif prevp.type == token.COLON:\n if prevp.parent and prevp.parent.type in {syms.subscript, syms.sliceop}:\n return SPACE if complex_subscript else NO\n\n elif (\n prevp.parent\n and prevp.parent.type == syms.factor\n and prevp.type in MATH_OPERATORS\n ):\n return NO\n\n elif prevp.type == token.AT and p.parent and p.parent.type == syms.decorator:\n # no space in decorators\n return NO\n\n elif prev.type in OPENING_BRACKETS:\n return NO\n\n if p.type in {syms.parameters, syms.arglist}:\n # untyped function signatures or calls\n if not prev or prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.varargslist:\n # lambdas\n if prev and prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.typedargslist:\n # typed function signatures\n if not prev:\n return NO\n\n if t == token.EQUAL:\n if prev.type not in TYPED_NAMES:\n return NO\n\n elif prev.type == token.EQUAL:\n # A bit hacky: if the equal sign has whitespace, it means we\n # previously found it's a typed argument. 
So, we're using that, too.\n return prev.prefix\n\n elif prev.type != token.COMMA:\n return NO\n\n elif p.type in TYPED_NAMES:\n # type names\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type != token.COMMA:\n return NO\n\n elif p.type == syms.trailer:\n # attributes and calls\n if t == token.LPAR or t == token.RPAR:\n return NO\n\n if not prev:\n if t == token.DOT or t == token.LSQB:\n return NO\n\n elif prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.argument:\n # single argument\n if t == token.EQUAL:\n return NO\n\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type == token.LPAR:\n return NO\n\n elif prev.type in {token.EQUAL} | VARARGS_SPECIALS:\n return NO\n\n elif p.type == syms.decorator:\n # decorators\n return NO\n\n elif p.type == syms.dotted_name:\n if prev:\n return NO\n\n prevp = preceding_leaf(p)\n if not prevp or prevp.type == token.AT or prevp.type == token.DOT:\n return NO\n\n elif p.type == syms.classdef:\n if t == token.LPAR:\n return NO\n\n if prev and prev.type == token.LPAR:\n return NO\n\n elif p.type in {syms.subscript, syms.sliceop}:\n # indexing\n if not prev:\n assert p.parent is not None, \"subscripts are always parented\"\n if p.parent.type == syms.subscriptlist:\n return SPACE\n\n return NO\n\n elif Preview.walrus_subscript in mode and (\n t == token.COLONEQUAL or prev.type == token.COLONEQUAL\n ):\n return SPACE\n\n elif not complex_subscript:\n return NO\n\n elif p.type == syms.atom:\n if prev and t == token.DOT:\n # dots, but not the first one.\n return NO\n\n elif p.type == syms.dictsetmaker:\n # dict unpacking\n if prev and prev.type == token.DOUBLESTAR:\n return NO\n\n elif p.type in {syms.factor, syms.star_expr}:\n # unary ops\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type in OPENING_BRACKETS:\n return NO\n\n prevp_parent = prevp.parent\n assert prevp_parent is not None\n if prevp.type == token.COLON and prevp_parent.type in {\n syms.subscript,\n syms.sliceop,\n }:\n return NO\n\n elif prevp.type == token.EQUAL and prevp_parent.type == syms.argument:\n return NO\n\n elif t in {token.NAME, token.NUMBER, token.STRING}:\n return NO\n\n elif p.type == syms.import_from:\n if t == token.DOT:\n if prev and prev.type == token.DOT:\n return NO\n\n elif t == token.NAME:\n if v == \"import\":\n return SPACE\n\n if prev and prev.type == token.DOT:\n return NO\n\n elif p.type == syms.sliceop:\n return NO\n\n elif p.type == syms.except_clause:\n if t == token.STAR:\n return NO\n\n return SPACE\n\n\ndef preceding_leaf(node: Optional[LN]) -> Optional[Leaf]:\n \"\"\"Return the first leaf that precedes `node`, if any.\"\"\"\n while node:\n res = node.prev_sibling\n if res:\n if isinstance(res, Leaf):\n return res\n\n try:\n return list(res.leaves())[-1]\n\n except IndexError:\n return None\n\n node = node.parent\n return None\n\n\ndef prev_siblings_are(node: Optional[LN], tokens: List[Optional[NodeType]]) -> bool:\n \"\"\"Return if the `node` and its previous siblings match types against the provided\n list of tokens; the provided `node`has its type matched against the last element in\n the list. 
`None` can be used as the first element to declare that the start of the\n list is anchored at the start of its parent's children.\"\"\"\n if not tokens:\n return True\n if tokens[-1] is None:\n return node is None\n if not node:\n return False\n if node.type != tokens[-1]:\n return False\n return prev_siblings_are(node.prev_sibling, tokens[:-1])\n\n\ndef parent_type(node: Optional[LN]) -> Optional[NodeType]:\n \"\"\"\n Returns:\n @node.parent.type, if @node is not None and has a parent.\n OR\n None, otherwise.\n \"\"\"\n if node is None or node.parent is None:\n return None\n\n return node.parent.type\n\n\ndef child_towards(ancestor: Node, descendant: LN) -> Optional[LN]:\n \"\"\"Return the child of `ancestor` that contains `descendant`.\"\"\"\n node: Optional[LN] = descendant\n while node and node.parent != ancestor:\n node = node.parent\n return node\n\n\ndef replace_child(old_child: LN, new_child: LN) -> None:\n \"\"\"\n Side Effects:\n * If @old_child.parent is set, replace @old_child with @new_child in\n @old_child's underlying Node structure.\n OR\n * Otherwise, this function does nothing.\n \"\"\"\n parent = old_child.parent\n if not parent:\n return\n\n child_idx = old_child.remove()\n if child_idx is not None:\n parent.insert_child(child_idx, new_child)\n\n\ndef container_of(leaf: Leaf) -> LN:\n \"\"\"Return `leaf` or one of its ancestors that is the topmost container of it.\n\n By \"container\" we mean a node where `leaf` is the very first child.\n \"\"\"\n same_prefix = leaf.prefix\n container: LN = leaf\n while container:\n parent = container.parent\n if parent is None:\n break\n\n if parent.children[0].prefix != same_prefix:\n break\n\n if parent.type == syms.file_input:\n break\n\n if parent.prev_sibling is not None and parent.prev_sibling.type in BRACKETS:\n break\n\n container = parent\n return container\n\n\ndef first_leaf_of(node: LN) -> Optional[Leaf]:\n \"\"\"Returns the first leaf of the node tree.\"\"\"\n if isinstance(node, Leaf):\n return node\n if node.children:\n return first_leaf_of(node.children[0])\n else:\n return None\n\n\ndef is_arith_like(node: LN) -> bool:\n \"\"\"Whether node is an arithmetic or a binary arithmetic expression\"\"\"\n return node.type in {\n syms.arith_expr,\n syms.shift_expr,\n syms.xor_expr,\n syms.and_expr,\n }\n\n\ndef is_docstring(leaf: Leaf) -> bool:\n if leaf.type != token.STRING:\n return False\n\n prefix = get_string_prefix(leaf.value)\n if \"b\" in prefix or \"B\" in prefix:\n return False\n\n if prev_siblings_are(\n leaf.parent, [None, token.NEWLINE, token.INDENT, syms.simple_stmt]\n ):\n return True\n\n # Multiline docstring on the same line as the `def`.\n if prev_siblings_are(leaf.parent, [syms.parameters, token.COLON, syms.simple_stmt]):\n # `syms.parameters` is only used in funcdefs and async_funcdefs in the Python\n # grammar. 
We're safe to return True without further checks.\n return True\n\n return False\n\n\ndef is_empty_tuple(node: LN) -> bool:\n \"\"\"Return True if `node` holds an empty tuple.\"\"\"\n return (\n node.type == syms.atom\n and len(node.children) == 2\n and node.children[0].type == token.LPAR\n and node.children[1].type == token.RPAR\n )\n\n\ndef is_one_tuple(node: LN) -> bool:\n \"\"\"Return True if `node` holds a tuple with one element, with or without parens.\"\"\"\n if node.type == syms.atom:\n gexp = unwrap_singleton_parenthesis(node)\n if gexp is None or gexp.type != syms.testlist_gexp:\n return False\n\n return len(gexp.children) == 2 and gexp.children[1].type == token.COMMA\n\n return (\n node.type in IMPLICIT_TUPLE\n and len(node.children) == 2\n and node.children[1].type == token.COMMA\n )\n\n\ndef is_tuple_containing_walrus(node: LN) -> bool:\n \"\"\"Return True if `node` holds a tuple that contains a walrus operator.\"\"\"\n if node.type != syms.atom:\n return False\n gexp = unwrap_singleton_parenthesis(node)\n if gexp is None or gexp.type != syms.testlist_gexp:\n return False\n\n return any(child.type == syms.namedexpr_test for child in gexp.children)\n\n\ndef is_one_sequence_between(\n opening: Leaf,\n closing: Leaf,\n leaves: List[Leaf],\n brackets: Tuple[int, int] = (token.LPAR, token.RPAR),\n) -> bool:\n \"\"\"Return True if content between `opening` and `closing` is a one-sequence.\"\"\"\n if (opening.type, closing.type) != brackets:\n return False\n\n depth = closing.bracket_depth + 1\n for _opening_index, leaf in enumerate(leaves):\n if leaf is opening:\n break\n\n else:\n raise LookupError(\"Opening paren not found in `leaves`\")\n\n commas = 0\n _opening_index += 1\n for leaf in leaves[_opening_index:]:\n if leaf is closing:\n break\n\n bracket_depth = leaf.bracket_depth\n if bracket_depth == depth and leaf.type == token.COMMA:\n commas += 1\n if leaf.parent and leaf.parent.type in {\n syms.arglist,\n syms.typedargslist,\n }:\n commas += 1\n break\n\n return commas < 2\n\n\ndef is_walrus_assignment(node: LN) -> bool:\n \"\"\"Return True iff `node` is of the shape ( test := test )\"\"\"\n inner = unwrap_singleton_parenthesis(node)\n return inner is not None and inner.type == syms.namedexpr_test\n\n\ndef is_simple_decorator_trailer(node: LN, last: bool = False) -> bool:\n \"\"\"Return True iff `node` is a trailer valid in a simple decorator\"\"\"\n return node.type == syms.trailer and (\n (\n len(node.children) == 2\n and node.children[0].type == token.DOT\n and node.children[1].type == token.NAME\n )\n # last trailer can be an argument-less parentheses pair\n or (\n last\n and len(node.children) == 2\n and node.children[0].type == token.LPAR\n and node.children[1].type == token.RPAR\n )\n # last trailer can be arguments\n or (\n last\n and len(node.children) == 3\n and node.children[0].type == token.LPAR\n # and node.children[1].type == syms.argument\n and node.children[2].type == token.RPAR\n )\n )\n\n\ndef is_simple_decorator_expression(node: LN) -> bool:\n \"\"\"Return True iff `node` could be a 'dotted name' decorator\n\n This function takes the node of the 'namedexpr_test' of the new decorator\n grammar and test if it would be valid under the old decorator grammar.\n\n The old grammar was: decorator: @ dotted_name [arguments] NEWLINE\n The new grammar is : decorator: @ namedexpr_test NEWLINE\n \"\"\"\n if node.type == token.NAME:\n return True\n if node.type == syms.power:\n if node.children:\n return (\n node.children[0].type == token.NAME\n and 
all(map(is_simple_decorator_trailer, node.children[1:-1]))\n and (\n len(node.children) < 2\n or is_simple_decorator_trailer(node.children[-1], last=True)\n )\n )\n return False\n\n\ndef is_yield(node: LN) -> bool:\n \"\"\"Return True if `node` holds a `yield` or `yield from` expression.\"\"\"\n if node.type == syms.yield_expr:\n return True\n\n if is_name_token(node) and node.value == \"yield\":\n return True\n\n if node.type != syms.atom:\n return False\n\n if len(node.children) != 3:\n return False\n\n lpar, expr, rpar = node.children\n if lpar.type == token.LPAR and rpar.type == token.RPAR:\n return is_yield(expr)\n\n return False\n\n\ndef is_vararg(leaf: Leaf, within: Set[NodeType]) -> bool:\n \"\"\"Return True if `leaf` is a star or double star in a vararg or kwarg.\n\n If `within` includes VARARGS_PARENTS, this applies to function signatures.\n If `within` includes UNPACKING_PARENTS, it applies to right hand-side\n extended iterable unpacking (PEP 3132) and additional unpacking\n generalizations (PEP 448).\n \"\"\"\n if leaf.type not in VARARGS_SPECIALS or not leaf.parent:\n return False\n\n p = leaf.parent\n if p.type == syms.star_expr:\n # Star expressions are also used as assignment targets in extended\n # iterable unpacking (PEP 3132). See what its parent is instead.\n if not p.parent:\n return False\n\n p = p.parent\n\n return p.type in within\n\n\ndef is_multiline_string(leaf: Leaf) -> bool:\n \"\"\"Return True if `leaf` is a multiline string that actually spans many lines.\"\"\"\n return has_triple_quotes(leaf.value) and \"\\n\" in leaf.value\n\n\ndef is_funcdef(node: Node) -> bool:\n return node.type == syms.funcdef\n\n\ndef is_stub_suite(node: Node) -> bool:\n \"\"\"Return True if `node` is a suite with a stub body.\"\"\"\n\n # If there is a comment, we want to keep it.\n if node.prefix.strip():\n return False\n\n if (\n len(node.children) != 4\n or node.children[0].type != token.NEWLINE\n or node.children[1].type != token.INDENT\n or node.children[3].type != token.DEDENT\n ):\n return False\n\n if node.children[3].prefix.strip():\n return False\n\n return is_stub_body(node.children[2])\n\n\ndef is_stub_body(node: LN) -> bool:\n \"\"\"Return True if `node` is a simple statement containing an ellipsis.\"\"\"\n if not isinstance(node, Node) or node.type != syms.simple_stmt:\n return False\n\n if len(node.children) != 2:\n return False\n\n child = node.children[0]\n return (\n not child.prefix.strip()\n and child.type == syms.atom\n and len(child.children) == 3\n and all(leaf == Leaf(token.DOT, \".\") for leaf in child.children)\n )\n\n\ndef is_atom_with_invisible_parens(node: LN) -> bool:\n \"\"\"Given a `LN`, determines whether it's an atom `node` with invisible\n parens. 
Useful in dedupe-ing and normalizing parens.\n \"\"\"\n if isinstance(node, Leaf) or node.type != syms.atom:\n return False\n\n first, last = node.children[0], node.children[-1]\n return (\n isinstance(first, Leaf)\n and first.type == token.LPAR\n and first.value == \"\"\n and isinstance(last, Leaf)\n and last.type == token.RPAR\n and last.value == \"\"\n )\n\n\ndef is_empty_par(leaf: Leaf) -> bool:\n return is_empty_lpar(leaf) or is_empty_rpar(leaf)\n\n\ndef is_empty_lpar(leaf: Leaf) -> bool:\n return leaf.type == token.LPAR and leaf.value == \"\"\n\n\ndef is_empty_rpar(leaf: Leaf) -> bool:\n return leaf.type == token.RPAR and leaf.value == \"\"\n\n\ndef is_import(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts an import statement.\"\"\"\n p = leaf.parent\n t = leaf.type\n v = leaf.value\n return bool(\n t == token.NAME\n and (\n (v == \"import\" and p and p.type == syms.import_name)\n or (v == \"from\" and p and p.type == syms.import_from)\n )\n )\n\n\ndef is_with_or_async_with_stmt(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts a with or async with statement.\"\"\"\n return bool(\n leaf.type == token.NAME\n and leaf.value == \"with\"\n and leaf.parent\n and leaf.parent.type == syms.with_stmt\n ) or bool(\n leaf.type == token.ASYNC\n and leaf.next_sibling\n and leaf.next_sibling.type == syms.with_stmt\n )\n\n\ndef is_async_stmt_or_funcdef(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts an async def/for/with statement.\n\n Note that `async def` can be either an `async_stmt` or `async_funcdef`,\n the latter is used when it has decorators.\n \"\"\"\n return bool(\n leaf.type == token.ASYNC\n and leaf.parent\n and leaf.parent.type in {syms.async_stmt, syms.async_funcdef}\n )\n\n\ndef is_type_comment(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf is a type comment. This function should only\n be used for general type comments (excluding ignore annotations, which should\n use `is_type_ignore_comment`). Note that general type comments are no longer\n used in modern version of Python, this function may be deprecated in the future.\"\"\"\n t = leaf.type\n v = leaf.value\n return t in {token.COMMENT, STANDALONE_COMMENT} and v.startswith(\"# type:\")\n\n\ndef is_type_ignore_comment(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf is a type comment with ignore annotation.\"\"\"\n t = leaf.type\n v = leaf.value\n return t in {token.COMMENT, STANDALONE_COMMENT} and is_type_ignore_comment_string(v)\n\n\ndef is_type_ignore_comment_string(value: str) -> bool:\n \"\"\"Return True if the given string match with type comment with\n ignore annotation.\"\"\"\n return value.startswith(\"# type: ignore\")\n\n\ndef wrap_in_parentheses(parent: Node, child: LN, *, visible: bool = True) -> None:\n \"\"\"Wrap `child` in parentheses.\n\n This replaces `child` with an atom holding the parentheses and the old\n child. That requires moving the prefix.\n\n If `visible` is False, the leaves will be valueless (and thus invisible).\n \"\"\"\n lpar = Leaf(token.LPAR, \"(\" if visible else \"\")\n rpar = Leaf(token.RPAR, \")\" if visible else \"\")\n prefix = child.prefix\n child.prefix = \"\"\n index = child.remove() or 0\n new_child = Node(syms.atom, [lpar, child, rpar])\n new_child.prefix = prefix\n parent.insert_child(index, new_child)\n\n\ndef unwrap_singleton_parenthesis(node: LN) -> Optional[LN]:\n \"\"\"Returns `wrapped` if `node` is of the shape ( wrapped ).\n\n Parenthesis can be optional. 
Returns None otherwise\"\"\"\n if len(node.children) != 3:\n return None\n\n lpar, wrapped, rpar = node.children\n if not (lpar.type == token.LPAR and rpar.type == token.RPAR):\n return None\n\n return wrapped\n\n\ndef ensure_visible(leaf: Leaf) -> None:\n \"\"\"Make sure parentheses are visible.\n\n They could be invisible as part of some statements (see\n :func:`normalize_invisible_parens` and :func:`visit_import_from`).\n \"\"\"\n if leaf.type == token.LPAR:\n leaf.value = \"(\"\n elif leaf.type == token.RPAR:\n leaf.value = \")\"\n\n\ndef is_name_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.NAME\n\n\ndef is_lpar_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.LPAR\n\n\ndef is_rpar_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.RPAR\n\n\ndef is_string_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.STRING\n\n\ndef is_number_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.NUMBER\n\n\ndef is_part_of_annotation(leaf: Leaf) -> bool:\n \"\"\"Returns whether this leaf is part of type annotations.\"\"\"\n ancestor = leaf.parent\n while ancestor is not None:\n if ancestor.prev_sibling and ancestor.prev_sibling.type == token.RARROW:\n return True\n if ancestor.parent and ancestor.parent.type == syms.tname:\n return True\n ancestor = ancestor.parent\n return False\n",
"path": "src/black/nodes.py"
}
] | [
{
"content": "\"\"\"\nblib2to3 Node/Leaf transformation-related utility functions.\n\"\"\"\n\nimport sys\nfrom typing import Final, Generic, Iterator, List, Optional, Set, Tuple, TypeVar, Union\n\nif sys.version_info >= (3, 10):\n from typing import TypeGuard\nelse:\n from typing_extensions import TypeGuard\n\nfrom mypy_extensions import mypyc_attr\n\nfrom black.cache import CACHE_DIR\nfrom black.mode import Mode, Preview\nfrom black.strings import get_string_prefix, has_triple_quotes\nfrom blib2to3 import pygram\nfrom blib2to3.pgen2 import token\nfrom blib2to3.pytree import NL, Leaf, Node, type_repr\n\npygram.initialize(CACHE_DIR)\nsyms: Final = pygram.python_symbols\n\n\n# types\nT = TypeVar(\"T\")\nLN = Union[Leaf, Node]\nLeafID = int\nNodeType = int\n\n\nWHITESPACE: Final = {token.DEDENT, token.INDENT, token.NEWLINE}\nSTATEMENT: Final = {\n syms.if_stmt,\n syms.while_stmt,\n syms.for_stmt,\n syms.try_stmt,\n syms.except_clause,\n syms.with_stmt,\n syms.funcdef,\n syms.classdef,\n syms.match_stmt,\n syms.case_block,\n}\nSTANDALONE_COMMENT: Final = 153\ntoken.tok_name[STANDALONE_COMMENT] = \"STANDALONE_COMMENT\"\nLOGIC_OPERATORS: Final = {\"and\", \"or\"}\nCOMPARATORS: Final = {\n token.LESS,\n token.GREATER,\n token.EQEQUAL,\n token.NOTEQUAL,\n token.LESSEQUAL,\n token.GREATEREQUAL,\n}\nMATH_OPERATORS: Final = {\n token.VBAR,\n token.CIRCUMFLEX,\n token.AMPER,\n token.LEFTSHIFT,\n token.RIGHTSHIFT,\n token.PLUS,\n token.MINUS,\n token.STAR,\n token.SLASH,\n token.DOUBLESLASH,\n token.PERCENT,\n token.AT,\n token.TILDE,\n token.DOUBLESTAR,\n}\nSTARS: Final = {token.STAR, token.DOUBLESTAR}\nVARARGS_SPECIALS: Final = STARS | {token.SLASH}\nVARARGS_PARENTS: Final = {\n syms.arglist,\n syms.argument, # double star in arglist\n syms.trailer, # single argument to call\n syms.typedargslist,\n syms.varargslist, # lambdas\n}\nUNPACKING_PARENTS: Final = {\n syms.atom, # single element of a list or set literal\n syms.dictsetmaker,\n syms.listmaker,\n syms.testlist_gexp,\n syms.testlist_star_expr,\n syms.subject_expr,\n syms.pattern,\n}\nTEST_DESCENDANTS: Final = {\n syms.test,\n syms.lambdef,\n syms.or_test,\n syms.and_test,\n syms.not_test,\n syms.comparison,\n syms.star_expr,\n syms.expr,\n syms.xor_expr,\n syms.and_expr,\n syms.shift_expr,\n syms.arith_expr,\n syms.trailer,\n syms.term,\n syms.power,\n}\nTYPED_NAMES: Final = {syms.tname, syms.tname_star}\nASSIGNMENTS: Final = {\n \"=\",\n \"+=\",\n \"-=\",\n \"*=\",\n \"@=\",\n \"/=\",\n \"%=\",\n \"&=\",\n \"|=\",\n \"^=\",\n \"<<=\",\n \">>=\",\n \"**=\",\n \"//=\",\n}\n\nIMPLICIT_TUPLE: Final = {syms.testlist, syms.testlist_star_expr, syms.exprlist}\nBRACKET: Final = {\n token.LPAR: token.RPAR,\n token.LSQB: token.RSQB,\n token.LBRACE: token.RBRACE,\n}\nOPENING_BRACKETS: Final = set(BRACKET.keys())\nCLOSING_BRACKETS: Final = set(BRACKET.values())\nBRACKETS: Final = OPENING_BRACKETS | CLOSING_BRACKETS\nALWAYS_NO_SPACE: Final = CLOSING_BRACKETS | {token.COMMA, STANDALONE_COMMENT}\n\nRARROW = 55\n\n\n@mypyc_attr(allow_interpreted_subclasses=True)\nclass Visitor(Generic[T]):\n \"\"\"Basic lib2to3 visitor that yields things of type `T` on `visit()`.\"\"\"\n\n def visit(self, node: LN) -> Iterator[T]:\n \"\"\"Main method to visit `node` and its children.\n\n It tries to find a `visit_*()` method for the given `node.type`, like\n `visit_simple_stmt` for Node objects or `visit_INDENT` for Leaf objects.\n If no dedicated `visit_*()` method is found, chooses `visit_default()`\n instead.\n\n Then yields objects of type `T` from the selected visitor.\n 
\"\"\"\n if node.type < 256:\n name = token.tok_name[node.type]\n else:\n name = str(type_repr(node.type))\n # We explicitly branch on whether a visitor exists (instead of\n # using self.visit_default as the default arg to getattr) in order\n # to save needing to create a bound method object and so mypyc can\n # generate a native call to visit_default.\n visitf = getattr(self, f\"visit_{name}\", None)\n if visitf:\n yield from visitf(node)\n else:\n yield from self.visit_default(node)\n\n def visit_default(self, node: LN) -> Iterator[T]:\n \"\"\"Default `visit_*()` implementation. Recurses to children of `node`.\"\"\"\n if isinstance(node, Node):\n for child in node.children:\n yield from self.visit(child)\n\n\ndef whitespace(leaf: Leaf, *, complex_subscript: bool, mode: Mode) -> str: # noqa: C901\n \"\"\"Return whitespace prefix if needed for the given `leaf`.\n\n `complex_subscript` signals whether the given leaf is part of a subscription\n which has non-trivial arguments, like arithmetic expressions or function calls.\n \"\"\"\n NO: Final[str] = \"\"\n SPACE: Final[str] = \" \"\n DOUBLESPACE: Final[str] = \" \"\n t = leaf.type\n p = leaf.parent\n v = leaf.value\n if t in ALWAYS_NO_SPACE:\n return NO\n\n if t == token.COMMENT:\n return DOUBLESPACE\n\n assert p is not None, f\"INTERNAL ERROR: hand-made leaf without parent: {leaf!r}\"\n if t == token.COLON and p.type not in {\n syms.subscript,\n syms.subscriptlist,\n syms.sliceop,\n }:\n return NO\n\n prev = leaf.prev_sibling\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type in OPENING_BRACKETS:\n return NO\n\n if t == token.COLON:\n if prevp.type == token.COLON:\n return NO\n\n elif prevp.type != token.COMMA and not complex_subscript:\n return NO\n\n return SPACE\n\n if prevp.type == token.EQUAL:\n if prevp.parent:\n if prevp.parent.type in {\n syms.arglist,\n syms.argument,\n syms.parameters,\n syms.varargslist,\n }:\n return NO\n\n elif prevp.parent.type == syms.typedargslist:\n # A bit hacky: if the equal sign has whitespace, it means we\n # previously found it's a typed argument. So, we're using\n # that, too.\n return prevp.prefix\n\n elif (\n prevp.type == token.STAR\n and parent_type(prevp) == syms.star_expr\n and parent_type(prevp.parent) == syms.subscriptlist\n ):\n # No space between typevar tuples.\n return NO\n\n elif prevp.type in VARARGS_SPECIALS:\n if is_vararg(prevp, within=VARARGS_PARENTS | UNPACKING_PARENTS):\n return NO\n\n elif prevp.type == token.COLON:\n if prevp.parent and prevp.parent.type in {syms.subscript, syms.sliceop}:\n return SPACE if complex_subscript else NO\n\n elif (\n prevp.parent\n and prevp.parent.type == syms.factor\n and prevp.type in MATH_OPERATORS\n ):\n return NO\n\n elif prevp.type == token.AT and p.parent and p.parent.type == syms.decorator:\n # no space in decorators\n return NO\n\n elif prev.type in OPENING_BRACKETS:\n return NO\n\n if p.type in {syms.parameters, syms.arglist}:\n # untyped function signatures or calls\n if not prev or prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.varargslist:\n # lambdas\n if prev and prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.typedargslist:\n # typed function signatures\n if not prev:\n return NO\n\n if t == token.EQUAL:\n if prev.type not in TYPED_NAMES:\n return NO\n\n elif prev.type == token.EQUAL:\n # A bit hacky: if the equal sign has whitespace, it means we\n # previously found it's a typed argument. 
So, we're using that, too.\n return prev.prefix\n\n elif prev.type != token.COMMA:\n return NO\n\n elif p.type in TYPED_NAMES:\n # type names\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type != token.COMMA:\n return NO\n\n elif p.type == syms.trailer:\n # attributes and calls\n if t == token.LPAR or t == token.RPAR:\n return NO\n\n if not prev:\n if t == token.DOT or t == token.LSQB:\n return NO\n\n elif prev.type != token.COMMA:\n return NO\n\n elif p.type == syms.argument:\n # single argument\n if t == token.EQUAL:\n return NO\n\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type == token.LPAR:\n return NO\n\n elif prev.type in {token.EQUAL} | VARARGS_SPECIALS:\n return NO\n\n elif p.type == syms.decorator:\n # decorators\n return NO\n\n elif p.type == syms.dotted_name:\n if prev:\n return NO\n\n prevp = preceding_leaf(p)\n if not prevp or prevp.type == token.AT or prevp.type == token.DOT:\n return NO\n\n elif p.type == syms.classdef:\n if t == token.LPAR:\n return NO\n\n if prev and prev.type == token.LPAR:\n return NO\n\n elif p.type in {syms.subscript, syms.sliceop}:\n # indexing\n if not prev:\n assert p.parent is not None, \"subscripts are always parented\"\n if p.parent.type == syms.subscriptlist:\n return SPACE\n\n return NO\n\n elif Preview.walrus_subscript in mode and (\n t == token.COLONEQUAL or prev.type == token.COLONEQUAL\n ):\n return SPACE\n\n elif not complex_subscript:\n return NO\n\n elif p.type == syms.atom:\n if prev and t == token.DOT:\n # dots, but not the first one.\n return NO\n\n elif p.type == syms.dictsetmaker:\n # dict unpacking\n if prev and prev.type == token.DOUBLESTAR:\n return NO\n\n elif p.type in {syms.factor, syms.star_expr}:\n # unary ops\n if not prev:\n prevp = preceding_leaf(p)\n if not prevp or prevp.type in OPENING_BRACKETS:\n return NO\n\n prevp_parent = prevp.parent\n assert prevp_parent is not None\n if prevp.type == token.COLON and prevp_parent.type in {\n syms.subscript,\n syms.sliceop,\n }:\n return NO\n\n elif prevp.type == token.EQUAL and prevp_parent.type == syms.argument:\n return NO\n\n elif t in {token.NAME, token.NUMBER, token.STRING}:\n return NO\n\n elif p.type == syms.import_from:\n if t == token.DOT:\n if prev and prev.type == token.DOT:\n return NO\n\n elif t == token.NAME:\n if v == \"import\":\n return SPACE\n\n if prev and prev.type == token.DOT:\n return NO\n\n elif p.type == syms.sliceop:\n return NO\n\n elif p.type == syms.except_clause:\n if t == token.STAR:\n return NO\n\n return SPACE\n\n\ndef preceding_leaf(node: Optional[LN]) -> Optional[Leaf]:\n \"\"\"Return the first leaf that precedes `node`, if any.\"\"\"\n while node:\n res = node.prev_sibling\n if res:\n if isinstance(res, Leaf):\n return res\n\n try:\n return list(res.leaves())[-1]\n\n except IndexError:\n return None\n\n node = node.parent\n return None\n\n\ndef prev_siblings_are(node: Optional[LN], tokens: List[Optional[NodeType]]) -> bool:\n \"\"\"Return if the `node` and its previous siblings match types against the provided\n list of tokens; the provided `node`has its type matched against the last element in\n the list. 
`None` can be used as the first element to declare that the start of the\n list is anchored at the start of its parent's children.\"\"\"\n if not tokens:\n return True\n if tokens[-1] is None:\n return node is None\n if not node:\n return False\n if node.type != tokens[-1]:\n return False\n return prev_siblings_are(node.prev_sibling, tokens[:-1])\n\n\ndef parent_type(node: Optional[LN]) -> Optional[NodeType]:\n \"\"\"\n Returns:\n @node.parent.type, if @node is not None and has a parent.\n OR\n None, otherwise.\n \"\"\"\n if node is None or node.parent is None:\n return None\n\n return node.parent.type\n\n\ndef child_towards(ancestor: Node, descendant: LN) -> Optional[LN]:\n \"\"\"Return the child of `ancestor` that contains `descendant`.\"\"\"\n node: Optional[LN] = descendant\n while node and node.parent != ancestor:\n node = node.parent\n return node\n\n\ndef replace_child(old_child: LN, new_child: LN) -> None:\n \"\"\"\n Side Effects:\n * If @old_child.parent is set, replace @old_child with @new_child in\n @old_child's underlying Node structure.\n OR\n * Otherwise, this function does nothing.\n \"\"\"\n parent = old_child.parent\n if not parent:\n return\n\n child_idx = old_child.remove()\n if child_idx is not None:\n parent.insert_child(child_idx, new_child)\n\n\ndef container_of(leaf: Leaf) -> LN:\n \"\"\"Return `leaf` or one of its ancestors that is the topmost container of it.\n\n By \"container\" we mean a node where `leaf` is the very first child.\n \"\"\"\n same_prefix = leaf.prefix\n container: LN = leaf\n while container:\n parent = container.parent\n if parent is None:\n break\n\n if parent.children[0].prefix != same_prefix:\n break\n\n if parent.type == syms.file_input:\n break\n\n if parent.prev_sibling is not None and parent.prev_sibling.type in BRACKETS:\n break\n\n container = parent\n return container\n\n\ndef first_leaf_of(node: LN) -> Optional[Leaf]:\n \"\"\"Returns the first leaf of the node tree.\"\"\"\n if isinstance(node, Leaf):\n return node\n if node.children:\n return first_leaf_of(node.children[0])\n else:\n return None\n\n\ndef is_arith_like(node: LN) -> bool:\n \"\"\"Whether node is an arithmetic or a binary arithmetic expression\"\"\"\n return node.type in {\n syms.arith_expr,\n syms.shift_expr,\n syms.xor_expr,\n syms.and_expr,\n }\n\n\ndef is_docstring(leaf: Leaf) -> bool:\n if leaf.type != token.STRING:\n return False\n\n prefix = get_string_prefix(leaf.value)\n if set(prefix).intersection(\"bBfF\"):\n return False\n\n if prev_siblings_are(\n leaf.parent, [None, token.NEWLINE, token.INDENT, syms.simple_stmt]\n ):\n return True\n\n # Multiline docstring on the same line as the `def`.\n if prev_siblings_are(leaf.parent, [syms.parameters, token.COLON, syms.simple_stmt]):\n # `syms.parameters` is only used in funcdefs and async_funcdefs in the Python\n # grammar. 
We're safe to return True without further checks.\n return True\n\n return False\n\n\ndef is_empty_tuple(node: LN) -> bool:\n \"\"\"Return True if `node` holds an empty tuple.\"\"\"\n return (\n node.type == syms.atom\n and len(node.children) == 2\n and node.children[0].type == token.LPAR\n and node.children[1].type == token.RPAR\n )\n\n\ndef is_one_tuple(node: LN) -> bool:\n \"\"\"Return True if `node` holds a tuple with one element, with or without parens.\"\"\"\n if node.type == syms.atom:\n gexp = unwrap_singleton_parenthesis(node)\n if gexp is None or gexp.type != syms.testlist_gexp:\n return False\n\n return len(gexp.children) == 2 and gexp.children[1].type == token.COMMA\n\n return (\n node.type in IMPLICIT_TUPLE\n and len(node.children) == 2\n and node.children[1].type == token.COMMA\n )\n\n\ndef is_tuple_containing_walrus(node: LN) -> bool:\n \"\"\"Return True if `node` holds a tuple that contains a walrus operator.\"\"\"\n if node.type != syms.atom:\n return False\n gexp = unwrap_singleton_parenthesis(node)\n if gexp is None or gexp.type != syms.testlist_gexp:\n return False\n\n return any(child.type == syms.namedexpr_test for child in gexp.children)\n\n\ndef is_one_sequence_between(\n opening: Leaf,\n closing: Leaf,\n leaves: List[Leaf],\n brackets: Tuple[int, int] = (token.LPAR, token.RPAR),\n) -> bool:\n \"\"\"Return True if content between `opening` and `closing` is a one-sequence.\"\"\"\n if (opening.type, closing.type) != brackets:\n return False\n\n depth = closing.bracket_depth + 1\n for _opening_index, leaf in enumerate(leaves):\n if leaf is opening:\n break\n\n else:\n raise LookupError(\"Opening paren not found in `leaves`\")\n\n commas = 0\n _opening_index += 1\n for leaf in leaves[_opening_index:]:\n if leaf is closing:\n break\n\n bracket_depth = leaf.bracket_depth\n if bracket_depth == depth and leaf.type == token.COMMA:\n commas += 1\n if leaf.parent and leaf.parent.type in {\n syms.arglist,\n syms.typedargslist,\n }:\n commas += 1\n break\n\n return commas < 2\n\n\ndef is_walrus_assignment(node: LN) -> bool:\n \"\"\"Return True iff `node` is of the shape ( test := test )\"\"\"\n inner = unwrap_singleton_parenthesis(node)\n return inner is not None and inner.type == syms.namedexpr_test\n\n\ndef is_simple_decorator_trailer(node: LN, last: bool = False) -> bool:\n \"\"\"Return True iff `node` is a trailer valid in a simple decorator\"\"\"\n return node.type == syms.trailer and (\n (\n len(node.children) == 2\n and node.children[0].type == token.DOT\n and node.children[1].type == token.NAME\n )\n # last trailer can be an argument-less parentheses pair\n or (\n last\n and len(node.children) == 2\n and node.children[0].type == token.LPAR\n and node.children[1].type == token.RPAR\n )\n # last trailer can be arguments\n or (\n last\n and len(node.children) == 3\n and node.children[0].type == token.LPAR\n # and node.children[1].type == syms.argument\n and node.children[2].type == token.RPAR\n )\n )\n\n\ndef is_simple_decorator_expression(node: LN) -> bool:\n \"\"\"Return True iff `node` could be a 'dotted name' decorator\n\n This function takes the node of the 'namedexpr_test' of the new decorator\n grammar and test if it would be valid under the old decorator grammar.\n\n The old grammar was: decorator: @ dotted_name [arguments] NEWLINE\n The new grammar is : decorator: @ namedexpr_test NEWLINE\n \"\"\"\n if node.type == token.NAME:\n return True\n if node.type == syms.power:\n if node.children:\n return (\n node.children[0].type == token.NAME\n and 
all(map(is_simple_decorator_trailer, node.children[1:-1]))\n and (\n len(node.children) < 2\n or is_simple_decorator_trailer(node.children[-1], last=True)\n )\n )\n return False\n\n\ndef is_yield(node: LN) -> bool:\n \"\"\"Return True if `node` holds a `yield` or `yield from` expression.\"\"\"\n if node.type == syms.yield_expr:\n return True\n\n if is_name_token(node) and node.value == \"yield\":\n return True\n\n if node.type != syms.atom:\n return False\n\n if len(node.children) != 3:\n return False\n\n lpar, expr, rpar = node.children\n if lpar.type == token.LPAR and rpar.type == token.RPAR:\n return is_yield(expr)\n\n return False\n\n\ndef is_vararg(leaf: Leaf, within: Set[NodeType]) -> bool:\n \"\"\"Return True if `leaf` is a star or double star in a vararg or kwarg.\n\n If `within` includes VARARGS_PARENTS, this applies to function signatures.\n If `within` includes UNPACKING_PARENTS, it applies to right hand-side\n extended iterable unpacking (PEP 3132) and additional unpacking\n generalizations (PEP 448).\n \"\"\"\n if leaf.type not in VARARGS_SPECIALS or not leaf.parent:\n return False\n\n p = leaf.parent\n if p.type == syms.star_expr:\n # Star expressions are also used as assignment targets in extended\n # iterable unpacking (PEP 3132). See what its parent is instead.\n if not p.parent:\n return False\n\n p = p.parent\n\n return p.type in within\n\n\ndef is_multiline_string(leaf: Leaf) -> bool:\n \"\"\"Return True if `leaf` is a multiline string that actually spans many lines.\"\"\"\n return has_triple_quotes(leaf.value) and \"\\n\" in leaf.value\n\n\ndef is_funcdef(node: Node) -> bool:\n return node.type == syms.funcdef\n\n\ndef is_stub_suite(node: Node) -> bool:\n \"\"\"Return True if `node` is a suite with a stub body.\"\"\"\n\n # If there is a comment, we want to keep it.\n if node.prefix.strip():\n return False\n\n if (\n len(node.children) != 4\n or node.children[0].type != token.NEWLINE\n or node.children[1].type != token.INDENT\n or node.children[3].type != token.DEDENT\n ):\n return False\n\n if node.children[3].prefix.strip():\n return False\n\n return is_stub_body(node.children[2])\n\n\ndef is_stub_body(node: LN) -> bool:\n \"\"\"Return True if `node` is a simple statement containing an ellipsis.\"\"\"\n if not isinstance(node, Node) or node.type != syms.simple_stmt:\n return False\n\n if len(node.children) != 2:\n return False\n\n child = node.children[0]\n return (\n not child.prefix.strip()\n and child.type == syms.atom\n and len(child.children) == 3\n and all(leaf == Leaf(token.DOT, \".\") for leaf in child.children)\n )\n\n\ndef is_atom_with_invisible_parens(node: LN) -> bool:\n \"\"\"Given a `LN`, determines whether it's an atom `node` with invisible\n parens. 
Useful in dedupe-ing and normalizing parens.\n \"\"\"\n if isinstance(node, Leaf) or node.type != syms.atom:\n return False\n\n first, last = node.children[0], node.children[-1]\n return (\n isinstance(first, Leaf)\n and first.type == token.LPAR\n and first.value == \"\"\n and isinstance(last, Leaf)\n and last.type == token.RPAR\n and last.value == \"\"\n )\n\n\ndef is_empty_par(leaf: Leaf) -> bool:\n return is_empty_lpar(leaf) or is_empty_rpar(leaf)\n\n\ndef is_empty_lpar(leaf: Leaf) -> bool:\n return leaf.type == token.LPAR and leaf.value == \"\"\n\n\ndef is_empty_rpar(leaf: Leaf) -> bool:\n return leaf.type == token.RPAR and leaf.value == \"\"\n\n\ndef is_import(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts an import statement.\"\"\"\n p = leaf.parent\n t = leaf.type\n v = leaf.value\n return bool(\n t == token.NAME\n and (\n (v == \"import\" and p and p.type == syms.import_name)\n or (v == \"from\" and p and p.type == syms.import_from)\n )\n )\n\n\ndef is_with_or_async_with_stmt(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts a with or async with statement.\"\"\"\n return bool(\n leaf.type == token.NAME\n and leaf.value == \"with\"\n and leaf.parent\n and leaf.parent.type == syms.with_stmt\n ) or bool(\n leaf.type == token.ASYNC\n and leaf.next_sibling\n and leaf.next_sibling.type == syms.with_stmt\n )\n\n\ndef is_async_stmt_or_funcdef(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf starts an async def/for/with statement.\n\n Note that `async def` can be either an `async_stmt` or `async_funcdef`,\n the latter is used when it has decorators.\n \"\"\"\n return bool(\n leaf.type == token.ASYNC\n and leaf.parent\n and leaf.parent.type in {syms.async_stmt, syms.async_funcdef}\n )\n\n\ndef is_type_comment(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf is a type comment. This function should only\n be used for general type comments (excluding ignore annotations, which should\n use `is_type_ignore_comment`). Note that general type comments are no longer\n used in modern version of Python, this function may be deprecated in the future.\"\"\"\n t = leaf.type\n v = leaf.value\n return t in {token.COMMENT, STANDALONE_COMMENT} and v.startswith(\"# type:\")\n\n\ndef is_type_ignore_comment(leaf: Leaf) -> bool:\n \"\"\"Return True if the given leaf is a type comment with ignore annotation.\"\"\"\n t = leaf.type\n v = leaf.value\n return t in {token.COMMENT, STANDALONE_COMMENT} and is_type_ignore_comment_string(v)\n\n\ndef is_type_ignore_comment_string(value: str) -> bool:\n \"\"\"Return True if the given string match with type comment with\n ignore annotation.\"\"\"\n return value.startswith(\"# type: ignore\")\n\n\ndef wrap_in_parentheses(parent: Node, child: LN, *, visible: bool = True) -> None:\n \"\"\"Wrap `child` in parentheses.\n\n This replaces `child` with an atom holding the parentheses and the old\n child. That requires moving the prefix.\n\n If `visible` is False, the leaves will be valueless (and thus invisible).\n \"\"\"\n lpar = Leaf(token.LPAR, \"(\" if visible else \"\")\n rpar = Leaf(token.RPAR, \")\" if visible else \"\")\n prefix = child.prefix\n child.prefix = \"\"\n index = child.remove() or 0\n new_child = Node(syms.atom, [lpar, child, rpar])\n new_child.prefix = prefix\n parent.insert_child(index, new_child)\n\n\ndef unwrap_singleton_parenthesis(node: LN) -> Optional[LN]:\n \"\"\"Returns `wrapped` if `node` is of the shape ( wrapped ).\n\n Parenthesis can be optional. 
Returns None otherwise\"\"\"\n if len(node.children) != 3:\n return None\n\n lpar, wrapped, rpar = node.children\n if not (lpar.type == token.LPAR and rpar.type == token.RPAR):\n return None\n\n return wrapped\n\n\ndef ensure_visible(leaf: Leaf) -> None:\n \"\"\"Make sure parentheses are visible.\n\n They could be invisible as part of some statements (see\n :func:`normalize_invisible_parens` and :func:`visit_import_from`).\n \"\"\"\n if leaf.type == token.LPAR:\n leaf.value = \"(\"\n elif leaf.type == token.RPAR:\n leaf.value = \")\"\n\n\ndef is_name_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.NAME\n\n\ndef is_lpar_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.LPAR\n\n\ndef is_rpar_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.RPAR\n\n\ndef is_string_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.STRING\n\n\ndef is_number_token(nl: NL) -> TypeGuard[Leaf]:\n return nl.type == token.NUMBER\n\n\ndef is_part_of_annotation(leaf: Leaf) -> bool:\n \"\"\"Returns whether this leaf is part of type annotations.\"\"\"\n ancestor = leaf.parent\n while ancestor is not None:\n if ancestor.prev_sibling and ancestor.prev_sibling.type == token.RARROW:\n return True\n if ancestor.parent and ancestor.parent.type == syms.tname:\n return True\n ancestor = ancestor.parent\n return False\n",
"path": "src/black/nodes.py"
}
] | diff --git a/CHANGES.md b/CHANGES.md
index 5ce37943693..4f90f493ad8 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -13,6 +13,9 @@
- Fix crash on formatting code like `await (a ** b)` (#3994)
+- No longer treat leading f-strings as docstrings. This matches Python's behaviour and
+ fixes a crash (#4019)
+
### Preview style
- Multiline dictionaries and lists that are the sole argument to a function are now
diff --git a/src/black/nodes.py b/src/black/nodes.py
index 5f6b280c035..fff8e05a118 100644
--- a/src/black/nodes.py
+++ b/src/black/nodes.py
@@ -529,7 +529,7 @@ def is_docstring(leaf: Leaf) -> bool:
return False
prefix = get_string_prefix(leaf.value)
- if "b" in prefix or "B" in prefix:
+ if set(prefix).intersection("bBfF"):
return False
if prev_siblings_are(
diff --git a/tests/data/cases/docstring_preview.py b/tests/data/cases/docstring_preview.py
index ff4819acb67..a3c656be2f8 100644
--- a/tests/data/cases/docstring_preview.py
+++ b/tests/data/cases/docstring_preview.py
@@ -58,7 +58,8 @@ def docstring_almost_at_line_limit():
def docstring_almost_at_line_limit_with_prefix():
- f"""long docstring................................................................"""
+ f"""long docstring................................................................
+ """
def mulitline_docstring_almost_at_line_limit():
diff --git a/tests/data/cases/f_docstring.py b/tests/data/cases/f_docstring.py
new file mode 100644
index 00000000000..667f550b353
--- /dev/null
+++ b/tests/data/cases/f_docstring.py
@@ -0,0 +1,20 @@
+def foo(e):
+ f""" {'.'.join(e)}"""
+
+def bar(e):
+ f"{'.'.join(e)}"
+
+def baz(e):
+ F""" {'.'.join(e)}"""
+
+# output
+def foo(e):
+ f""" {'.'.join(e)}"""
+
+
+def bar(e):
+ f"{'.'.join(e)}"
+
+
+def baz(e):
+ f""" {'.'.join(e)}"""
diff --git a/tests/data/cases/docstring_preview_no_string_normalization.py b/tests/data/cases/preview_docstring_no_string_normalization.py
similarity index 100%
rename from tests/data/cases/docstring_preview_no_string_normalization.py
rename to tests/data/cases/preview_docstring_no_string_normalization.py
|
helmholtz-analytics__heat-1268 | Fix PyTorch release tracking workflows
## Due Diligence
<!--- Please address the following points before setting your PR "ready for review".
--->
- General:
- [x] **base branch** must be `main` for new features, latest release branch (e.g. `release/1.3.x`) for bug fixes
- [x] **title** of the PR is suitable to appear in the [Release Notes](https://github.com/helmholtz-analytics/heat/releases/latest)
- Implementation:
- [x] unit tests: all split configurations tested
- [x] unit tests: multiple dtypes tested
- [x] documentation updated where needed
## Description
<!--- Include a summary of the change/s.
Please also include relevant motivation and context. List any dependencies that are required for this change.
--->
Issue/s resolved: #1241
## Changes proposed:
- upgrade to the latest version of the checkout action
- delete the token parameter so that the default action token is used (see the sketch below)
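
For reference, the resulting checkout step (as it appears in the workflow diff further down) would look like this; the `ref` value is unchanged from the existing workflows and, with `token:` removed, checkout falls back to the default `GITHUB_TOKEN`:

```yaml
# Checkout step after the change: pinned to v4, no explicit token.
- uses: actions/checkout@v4
  with:
    ref: '${{ env.base_branch }}'
```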
## Type of change
<!--
i.e.
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to not work as expected)
- Documentation update
--->
## Memory requirements
<!--- Compare memory requirements to previous implementation / relevant torch operations if applicable:
- in distributed and non-distributed mode
- with `split=None` and `split not None`
This can be done using https://github.com/pythonprofilers/memory_profiler for CPU memory measurements,
GPU measurements can be done with https://pytorch.org/docs/master/generated/torch.cuda.max_memory_allocated.html.
These tools only profile the memory used by each process, not the entire function.
--->
## Performance
<!--- Compare performance to previous implementation / relevant torch operations if applicable:
- in distributed and non-distributed mode
- with `split=None` and `split not None`
Python has an embedded profiler: https://docs.python.org/3.9/library/profile.html
Again, this will only profile the performance on each process. Printing the results with many processes
may be illegible. It may be easiest to save the output of each to a file.
--->
#### Does this change modify the behaviour of other functions? If so, which?
no
| [
{
"content": "\"\"\"This module contains Heat's version information.\"\"\"\n\n\nmajor: int = 1\n\"\"\"Indicates Heat's main version.\"\"\"\nminor: int = 3\n\"\"\"Indicates feature extension.\"\"\"\nmicro: int = 0\n\"\"\"Indicates revisions for bugfixes.\"\"\"\nextension: str = \"dev\"\n\"\"\"Indicates special builds, e.g. for specific hardware.\"\"\"\n\nif not extension:\n __version__: str = f\"{major}.{minor}.{micro}\"\n \"\"\"The combined version string, consisting out of major, minor, micro and possibly extension.\"\"\"\nelse:\n __version__: str = f\"{major}.{minor}.{micro}-{extension}\"\n",
"path": "heat/core/version.py"
}
] | [
{
"content": "\"\"\"This module contains Heat's version information.\"\"\"\n\n\nmajor: int = 1\n\"\"\"Indicates Heat's main version.\"\"\"\nminor: int = 4\n\"\"\"Indicates feature extension.\"\"\"\nmicro: int = 0\n\"\"\"Indicates revisions for bugfixes.\"\"\"\nextension: str = \"dev\"\n\"\"\"Indicates special builds, e.g. for specific hardware.\"\"\"\n\nif not extension:\n __version__: str = f\"{major}.{minor}.{micro}\"\n \"\"\"The combined version string, consisting out of major, minor, micro and possibly extension.\"\"\"\nelse:\n __version__: str = f\"{major}.{minor}.{micro}-{extension}\"\n",
"path": "heat/core/version.py"
}
] | diff --git a/.github/workflows/pytorch-latest-main.yml b/.github/workflows/pytorch-latest-main.yml
index 139c66d82c..c59736c82f 100644
--- a/.github/workflows/pytorch-latest-main.yml
+++ b/.github/workflows/pytorch-latest-main.yml
@@ -11,9 +11,8 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository }} == 'hemlholtz-analytics/heat'
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
- token: ${{ secrets.GHACTIONS }}
ref: '${{ env.base_branch }}'
- name: Fetch PyTorch release version
run: |
diff --git a/.github/workflows/pytorch-latest-release.yml b/.github/workflows/pytorch-latest-release.yml
index 0daa39fc2c..6126fa610a 100644
--- a/.github/workflows/pytorch-latest-release.yml
+++ b/.github/workflows/pytorch-latest-release.yml
@@ -11,9 +11,8 @@ jobs:
runs-on: ubuntu-latest
if: ${{ github.repository }} == 'hemlholtz-analytics/heat'
steps:
- - uses: actions/checkout@v3
+ - uses: actions/checkout@v4
with:
- token: ${{ secrets.GHACTIONS }}
ref: '${{ env.base_branch }}'
- name: Fetch PyTorch release version
run: |
diff --git a/heat/core/linalg/tests/test_solver.py b/heat/core/linalg/tests/test_solver.py
index 5bf2cbff08..f8f9889a9d 100644
--- a/heat/core/linalg/tests/test_solver.py
+++ b/heat/core/linalg/tests/test_solver.py
@@ -65,15 +65,8 @@ def test_lanczos(self):
lanczos_B = V_out @ T_out @ V_inv
self.assertTrue(ht.allclose(lanczos_B, B))
- # single precision tolerance
- if (
- int(torch.__version__.split(".")[0]) == 1
- and int(torch.__version__.split(".")[1]) >= 13
- or int(torch.__version__.split(".")[0]) > 1
- ):
- tolerance = 1e-3
- else:
- tolerance = 1e-4
+ # single precision tolerance for torch.inv() is pretty bad
+ tolerance = 1e-3
# float32, pre_defined v0, split mismatch
A = ht.random.randn(n, n, dtype=ht.float32, split=0)
diff --git a/heat/core/version.py b/heat/core/version.py
index 4b3e384aea..d680344436 100644
--- a/heat/core/version.py
+++ b/heat/core/version.py
@@ -3,7 +3,7 @@
major: int = 1
"""Indicates Heat's main version."""
-minor: int = 3
+minor: int = 4
"""Indicates feature extension."""
micro: int = 0
"""Indicates revisions for bugfixes."""
|
biolab__orange3-3530 | Report window and clipboard
Can't copy from Reports
##### Orange version
<!-- From menu _Help→About→Version_ or code `Orange.version.full_version` -->
3.17.0.dev0+8f507ed
##### Expected behavior
If items are selected in the Report window, it should be possible to copy them to the clipboard for use in a presentation or a document.
##### Actual behavior
Can't copy anything.
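
A minimal sketch of one possible direction, assuming the QWebEngine backend of `WebviewWidget` (an illustration only, not necessarily the fix ultimately adopted for this issue): forward the standard Copy shortcut from the report window to the web view's own Copy action. The fragment below would live inside `OWReport._setup_ui_`, where `self.report_view` is already defined.

```python
# Hypothetical sketch, assuming a QWebEngineView-based report_view;
# intended to sit inside OWReport._setup_ui_().
from AnyQt.QtGui import QKeySequence
from AnyQt.QtWidgets import QShortcut
from AnyQt.QtWebEngineWidgets import QWebEnginePage

# Bind the platform's standard Copy shortcut (Ctrl+C / Cmd+C) on the web view
# and forward it to the page's built-in Copy action.
copy_shortcut = QShortcut(QKeySequence(QKeySequence.Copy), self.report_view)
copy_shortcut.activated.connect(
    lambda: self.report_view.page().triggerAction(QWebEnginePage.Copy)
)
```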
| [
{
"content": "import os\nimport logging\nimport warnings\nimport pickle\nfrom collections import OrderedDict\nfrom enum import IntEnum\n\nfrom typing import Optional\n\nimport pkg_resources\n\nfrom AnyQt.QtCore import Qt, QObject, pyqtSlot\nfrom AnyQt.QtGui import QIcon, QCursor, QStandardItemModel, QStandardItem\nfrom AnyQt.QtWidgets import (\n QApplication, QDialog, QFileDialog, QTableView, QHeaderView\n)\nfrom AnyQt.QtPrintSupport import QPrinter, QPrintDialog\n\nfrom Orange.util import deprecated\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import OWWidget\nfrom Orange.widgets.settings import Setting\nfrom Orange.canvas.application.canvasmain import CanvasMainWindow\nfrom Orange.canvas.gui.utils import message_critical\n\n# Importing WebviewWidget can fail if neither QWebKit (old, deprecated) nor\n# QWebEngine (bleeding-edge, hard to install) are available\ntry:\n from Orange.widgets.utils.webview import WebviewWidget\nexcept ImportError:\n from unittest.mock import Mock\n WebviewWidget = Mock\n\n\nlog = logging.getLogger(__name__)\n\nclass Column(IntEnum):\n item = 0\n remove = 1\n scheme = 2\n\n\nclass ReportItem(QStandardItem):\n def __init__(self, name, html, scheme, module, icon_name, comment=\"\"):\n self.name = name\n self.html = html\n self.scheme = scheme\n self.module = module\n self.icon_name = icon_name\n self.comment = comment\n try:\n path = pkg_resources.resource_filename(module, icon_name)\n except ImportError:\n path = \"\"\n except ValueError:\n path = \"\"\n icon = QIcon(path)\n self.id = id(icon)\n super().__init__(icon, name)\n\n def __getnewargs__(self):\n return (self.name, self.html, self.scheme, self.module, self.icon_name,\n self.comment)\n\n\nclass ReportItemModel(QStandardItemModel):\n def __init__(self, rows, columns, parent=None):\n super().__init__(rows, columns, parent)\n\n def add_item(self, item):\n row = self.rowCount()\n self.setItem(row, Column.item, item)\n self.setItem(row, Column.remove, self._icon_item(\"Remove\"))\n self.setItem(row, Column.scheme, self._icon_item(\"Open Scheme\"))\n\n def get_item_by_id(self, item_id):\n for i in range(self.rowCount()):\n item = self.item(i)\n if str(item.id) == item_id:\n return item\n return None\n\n @staticmethod\n def _icon_item(tooltip):\n item = QStandardItem()\n item.setEditable(False)\n item.setToolTip(tooltip)\n return item\n\n\nclass ReportTable(QTableView):\n def __init__(self, parent):\n super().__init__(parent)\n self._icon_remove = QIcon(pkg_resources.resource_filename(\n __name__, \"icons/delete.svg\"))\n self._icon_scheme = QIcon(pkg_resources.resource_filename(\n __name__, \"icons/scheme.svg\"))\n\n def mouseMoveEvent(self, event):\n self._clear_icons()\n self._repaint(self.indexAt(event.pos()))\n\n def mouseReleaseEvent(self, event):\n if event.button() == Qt.LeftButton:\n super().mouseReleaseEvent(event)\n self._clear_icons()\n self._repaint(self.indexAt(event.pos()))\n\n def leaveEvent(self, _):\n self._clear_icons()\n\n def _repaint(self, index):\n row, column = index.row(), index.column()\n if column in (Column.remove, Column.scheme):\n self.setCursor(QCursor(Qt.PointingHandCursor))\n else:\n self.setCursor(QCursor(Qt.ArrowCursor))\n if row >= 0:\n self.model().item(row, Column.remove).setIcon(self._icon_remove)\n self.model().item(row, Column.scheme).setIcon(self._icon_scheme)\n\n def _clear_icons(self):\n model = self.model()\n for i in range(model.rowCount()):\n model.item(i, Column.remove).setIcon(QIcon())\n model.item(i, Column.scheme).setIcon(QIcon())\n\n\nclass 
OWReport(OWWidget):\n name = \"Report\"\n save_dir = Setting(\"\")\n open_dir = Setting(\"\")\n\n def __init__(self):\n super().__init__()\n self._setup_ui_()\n self.report_changed = False\n\n index_file = pkg_resources.resource_filename(__name__, \"index.html\")\n with open(index_file, \"r\") as f:\n self.report_html_template = f.read()\n\n def _setup_ui_(self):\n self.table_model = ReportItemModel(0, len(Column.__members__))\n self.table = ReportTable(self.controlArea)\n self.table.setModel(self.table_model)\n self.table.setShowGrid(False)\n self.table.setSelectionBehavior(QTableView.SelectRows)\n self.table.setSelectionMode(QTableView.SingleSelection)\n self.table.setWordWrap(False)\n self.table.setMouseTracking(True)\n self.table.verticalHeader().setSectionResizeMode(QHeaderView.Fixed)\n self.table.verticalHeader().setDefaultSectionSize(20)\n self.table.verticalHeader().setVisible(False)\n self.table.horizontalHeader().setVisible(False)\n self.table.setFixedWidth(250)\n self.table.setColumnWidth(Column.item, 200)\n self.table.setColumnWidth(Column.remove, 23)\n self.table.setColumnWidth(Column.scheme, 25)\n self.table.clicked.connect(self._table_clicked)\n self.table.selectionModel().selectionChanged.connect(\n self._table_selection_changed)\n self.controlArea.layout().addWidget(self.table)\n\n self.last_scheme = None\n self.scheme_button = gui.button(\n self.controlArea, self, \"Back to Last Scheme\",\n callback=self._show_last_scheme\n )\n box = gui.hBox(self.controlArea)\n box.setContentsMargins(-6, 0, -6, 0)\n self.save_button = gui.button(\n box, self, \"Save\", callback=self.save_report, disabled=True\n )\n self.print_button = gui.button(\n box, self, \"Print\", callback=self._print_report, disabled=True\n )\n\n class PyBridge(QObject):\n @pyqtSlot(str)\n def _select_item(myself, item_id):\n item = self.table_model.get_item_by_id(item_id)\n self.table.selectRow(self.table_model.indexFromItem(item).row())\n self._change_selected_item(item)\n\n @pyqtSlot(str, str)\n def _add_comment(myself, item_id, value):\n item = self.table_model.get_item_by_id(item_id)\n item.comment = value\n self.report_changed = True\n\n self.report_view = WebviewWidget(self.mainArea, bridge=PyBridge(self))\n self.mainArea.layout().addWidget(self.report_view)\n\n @deprecated(\"Widgets should not be pickled\")\n def __getstate__(self):\n rep_dict = self.__dict__.copy()\n for key in ('_OWWidget__env', 'controlArea', 'mainArea',\n 'report_view', 'table', 'table_model'):\n del rep_dict[key]\n items_len = self.table_model.rowCount()\n return rep_dict, [self.table_model.item(i) for i in range(items_len)]\n\n @deprecated(\"Widgets should not be pickled\")\n def __setstate__(self, state):\n rep_dict, items = state\n self.__dict__.update(rep_dict)\n self._setup_ui_()\n for i in range(len(items)):\n item = items[i]\n self.table_model.add_item(\n ReportItem(item.name, item.html, item.scheme,\n item.module, item.icon_name, item.comment)\n )\n\n def _table_clicked(self, index):\n if index.column() == Column.remove:\n self._remove_item(index.row())\n indexes = self.table.selectionModel().selectedIndexes()\n if indexes:\n item = self.table_model.item(indexes[0].row())\n self._scroll_to_item(item)\n self._change_selected_item(item)\n if index.column() == Column.scheme:\n self._show_scheme(index.row())\n\n def _table_selection_changed(self, new_selection, _):\n if new_selection.indexes():\n item = self.table_model.item(new_selection.indexes()[0].row())\n self._scroll_to_item(item)\n self._change_selected_item(item)\n\n def 
_remove_item(self, row):\n self.table_model.removeRow(row)\n self._empty_report()\n self.report_changed = True\n self._build_html()\n\n def clear(self):\n self.table_model.clear()\n self._empty_report()\n self.report_changed = True\n self._build_html()\n\n def _add_item(self, widget):\n name = widget.get_widget_name_extension()\n name = \"{} - {}\".format(widget.name, name) if name else widget.name\n item = ReportItem(name, widget.report_html, self._get_scheme(),\n widget.__module__, widget.icon)\n self.table_model.add_item(item)\n self._empty_report()\n self.report_changed = True\n return item\n\n def _empty_report(self):\n # disable save and print if no reports\n self.save_button.setEnabled(self.table_model.rowCount())\n self.print_button.setEnabled(self.table_model.rowCount())\n\n def _build_html(self):\n html = self.report_html_template\n html += \"<body>\"\n for i in range(self.table_model.rowCount()):\n item = self.table_model.item(i)\n html += \"<div id='{}' class='normal' \" \\\n \"onClick='pybridge._select_item(this.id)'>{}<div \" \\\n \"class='textwrapper'><textarea \" \\\n \"placeholder='Write a comment...'\" \\\n \"onInput='this.innerHTML = this.value;\" \\\n \"pybridge._add_comment(this.parentNode.parentNode.id, this.value);'\" \\\n \">{}</textarea></div>\" \\\n \"</div>\".format(item.id, item.html, item.comment)\n html += \"</body></html>\"\n self.report_view.setHtml(html)\n\n def _scroll_to_item(self, item):\n self.report_view.evalJS(\n \"document.getElementById('{}').scrollIntoView();\".format(item.id)\n )\n\n def _change_selected_item(self, item):\n self.report_view.evalJS(\n \"var sel_el = document.getElementsByClassName('selected')[0]; \"\n \"if (sel_el.id != {}) \"\n \" sel_el.className = 'normal';\".format(item.id))\n self.report_view.evalJS(\n \"document.getElementById('{}').className = 'selected';\"\n .format(item.id))\n self.report_changed = True\n\n def make_report(self, widget):\n item = self._add_item(widget)\n self._build_html()\n self._scroll_to_item(item)\n self.table.selectRow(self.table_model.rowCount() - 1)\n\n def _get_scheme(self):\n canvas = self.get_canvas_instance()\n return canvas.get_scheme_xml() if canvas else None\n\n def _show_scheme(self, row):\n scheme = self.table_model.item(row).scheme\n canvas = self.get_canvas_instance()\n if canvas:\n document = canvas.current_document()\n if document.isModifiedStrict():\n self.last_scheme = canvas.get_scheme_xml()\n self._load_scheme(scheme)\n\n def _show_last_scheme(self):\n if self.last_scheme:\n self._load_scheme(self.last_scheme)\n\n def _load_scheme(self, contents):\n # forcibly load the contents into the associated CanvasMainWindow\n # instance if one exists. 
Preserve `self` as the designated report.\n canvas = self.get_canvas_instance()\n if canvas is not None:\n document = canvas.current_document()\n old = document.scheme()\n if old.has_report() and old.report_view() is self:\n # remove self so it is not closed\n old.set_report_view(None)\n canvas.load_scheme_xml(contents)\n scheme = canvas.current_document().scheme()\n scheme.set_report_view(self)\n\n def save_report(self):\n \"\"\"Save report\"\"\"\n formats = OrderedDict((('HTML (*.html)', '.html'),\n ('PDF (*.pdf)', '.pdf'),\n ('Report (*.report)', '.report')))\n\n filename, selected_format = QFileDialog.getSaveFileName(\n self, \"Save Report\", self.save_dir, ';;'.join(formats.keys()))\n if not filename:\n return QDialog.Rejected\n\n # Set appropriate extension if not set by the user\n expect_ext = formats[selected_format]\n if not filename.endswith(expect_ext):\n filename += expect_ext\n\n self.save_dir = os.path.dirname(filename)\n self.saveSettings()\n _, extension = os.path.splitext(filename)\n if extension == \".pdf\":\n printer = QPrinter()\n printer.setPageSize(QPrinter.A4)\n printer.setOutputFormat(QPrinter.PdfFormat)\n printer.setOutputFileName(filename)\n self._print_to_printer(printer)\n elif extension == \".report\":\n self.save(filename)\n else:\n def save_html(contents):\n try:\n with open(filename, \"w\", encoding=\"utf-8\") as f:\n f.write(contents)\n except PermissionError:\n self.permission_error(filename)\n\n save_html(self.report_view.html())\n self.report_changed = False\n return QDialog.Accepted\n\n def _print_to_printer(self, printer):\n filename = printer.outputFileName()\n if filename:\n try:\n # QtWebEngine\n return self.report_view.page().printToPdf(filename)\n except AttributeError:\n try:\n # QtWebKit\n return self.report_view.print_(printer)\n except AttributeError:\n # QtWebEngine 5.6\n pass\n # Fallback to printing widget as an image\n self.report_view.render(printer)\n\n def _print_report(self):\n printer = QPrinter()\n print_dialog = QPrintDialog(printer, self)\n print_dialog.setWindowTitle(\"Print report\")\n if print_dialog.exec_() != QDialog.Accepted:\n return\n self._print_to_printer(printer)\n\n def save(self, filename):\n attributes = {}\n for key in ('last_scheme', 'open_dir'):\n attributes[key] = getattr(self, key, None)\n items = [self.table_model.item(i)\n for i in range(self.table_model.rowCount())]\n report = dict(__version__=1,\n attributes=attributes,\n items=items)\n\n try:\n with open(filename, 'wb') as f:\n pickle.dump(report, f)\n except PermissionError:\n self.permission_error(filename)\n\n @classmethod\n def load(cls, filename):\n with open(filename, 'rb') as f:\n report = pickle.load(f)\n\n if not isinstance(report, dict):\n return report\n\n self = cls()\n self.__dict__.update(report['attributes'])\n for item in report['items']:\n self.table_model.add_item(\n ReportItem(item.name, item.html, item.scheme,\n item.module, item.icon_name, item.comment)\n )\n return self\n\n def permission_error(self, filename):\n message_critical(\n self.tr(\"Permission error when trying to write report.\"),\n title=self.tr(\"Error\"),\n informative_text=self.tr(\"Permission error occurred \"\n \"while saving '{}'.\").format(filename),\n exc_info=True,\n parent=self)\n log.error(\"PermissionError when trying to write report.\", exc_info=True)\n\n def is_empty(self):\n return not self.table_model.rowCount()\n\n def is_changed(self):\n return self.report_changed\n\n @staticmethod\n def set_instance(report):\n warnings.warn(\n \"OWReport.set_instance is 
deprecated\",\n DeprecationWarning, stacklevel=2\n )\n app_inst = QApplication.instance()\n app_inst._report_window = report\n\n @staticmethod\n def get_instance():\n warnings.warn(\n \"OWReport.get_instance is deprecated\",\n DeprecationWarning, stacklevel=2\n )\n app_inst = QApplication.instance()\n if not hasattr(app_inst, \"_report_window\"):\n report = OWReport()\n app_inst._report_window = report\n return app_inst._report_window\n\n def get_canvas_instance(self):\n # type: () -> Optional[CanvasMainWindow]\n \"\"\"\n Return a CanvasMainWindow instance to which this report is attached.\n\n Return None if not associated with any window.\n\n Returns\n -------\n window : Optional[CanvasMainWindow]\n \"\"\"\n # Run up the parent/window chain\n parent = self.parent()\n if parent is not None:\n window = parent.window()\n if isinstance(window, CanvasMainWindow):\n return window\n return None\n\n\nif __name__ == \"__main__\":\n import sys\n from Orange.data import Table\n from Orange.widgets.data.owfile import OWFile\n from Orange.widgets.data.owtable import OWDataTable\n from Orange.widgets.data.owdiscretize import OWDiscretize\n from Orange.widgets.model.owrandomforest import OWRandomForest\n\n iris = Table(\"iris\")\n app = QApplication(sys.argv)\n\n main = OWReport.get_instance()\n file = OWFile()\n file.create_report_html()\n main.make_report(file)\n\n table = OWDataTable()\n table.set_dataset(iris)\n table.create_report_html()\n main.make_report(table)\n\n main = OWReport.get_instance()\n disc = OWDiscretize()\n disc.create_report_html()\n main.make_report(disc)\n\n learner = OWRandomForest()\n learner.create_report_html()\n main.make_report(learner)\n\n main.show()\n main.saveSettings()\n assert main.table_model.rowCount() == 4\n\n sys.exit(app.exec_())\n",
"path": "Orange/widgets/report/owreport.py"
}
] | [
{
"content": "import os\nimport logging\nimport warnings\nimport pickle\nfrom collections import OrderedDict\nfrom enum import IntEnum\n\nfrom typing import Optional\n\nimport pkg_resources\n\nfrom AnyQt.QtCore import Qt, QObject, pyqtSlot\nfrom AnyQt.QtGui import QIcon, QCursor, QStandardItemModel, QStandardItem\nfrom AnyQt.QtWidgets import (\n QApplication, QDialog, QFileDialog, QTableView, QHeaderView\n)\nfrom AnyQt.QtPrintSupport import QPrinter, QPrintDialog\n\nfrom Orange.util import deprecated\nfrom Orange.widgets import gui\nfrom Orange.widgets.widget import OWWidget\nfrom Orange.widgets.settings import Setting\nfrom Orange.canvas.application.canvasmain import CanvasMainWindow\nfrom Orange.canvas.gui.utils import message_critical\n\n# Importing WebviewWidget can fail if neither QWebKit (old, deprecated) nor\n# QWebEngine (bleeding-edge, hard to install) are available\ntry:\n from Orange.widgets.utils.webview import WebviewWidget\nexcept ImportError:\n from unittest.mock import Mock\n WebviewWidget = Mock\n\n\nlog = logging.getLogger(__name__)\n\nclass Column(IntEnum):\n item = 0\n remove = 1\n scheme = 2\n\n\nclass ReportItem(QStandardItem):\n def __init__(self, name, html, scheme, module, icon_name, comment=\"\"):\n self.name = name\n self.html = html\n self.scheme = scheme\n self.module = module\n self.icon_name = icon_name\n self.comment = comment\n try:\n path = pkg_resources.resource_filename(module, icon_name)\n except ImportError:\n path = \"\"\n except ValueError:\n path = \"\"\n icon = QIcon(path)\n self.id = id(icon)\n super().__init__(icon, name)\n\n def __getnewargs__(self):\n return (self.name, self.html, self.scheme, self.module, self.icon_name,\n self.comment)\n\n\nclass ReportItemModel(QStandardItemModel):\n def __init__(self, rows, columns, parent=None):\n super().__init__(rows, columns, parent)\n\n def add_item(self, item):\n row = self.rowCount()\n self.setItem(row, Column.item, item)\n self.setItem(row, Column.remove, self._icon_item(\"Remove\"))\n self.setItem(row, Column.scheme, self._icon_item(\"Open Scheme\"))\n\n def get_item_by_id(self, item_id):\n for i in range(self.rowCount()):\n item = self.item(i)\n if str(item.id) == item_id:\n return item\n return None\n\n @staticmethod\n def _icon_item(tooltip):\n item = QStandardItem()\n item.setEditable(False)\n item.setToolTip(tooltip)\n return item\n\n\nclass ReportTable(QTableView):\n def __init__(self, parent):\n super().__init__(parent)\n self._icon_remove = QIcon(pkg_resources.resource_filename(\n __name__, \"icons/delete.svg\"))\n self._icon_scheme = QIcon(pkg_resources.resource_filename(\n __name__, \"icons/scheme.svg\"))\n\n def mouseMoveEvent(self, event):\n self._clear_icons()\n self._repaint(self.indexAt(event.pos()))\n\n def mouseReleaseEvent(self, event):\n if event.button() == Qt.LeftButton:\n super().mouseReleaseEvent(event)\n self._clear_icons()\n self._repaint(self.indexAt(event.pos()))\n\n def leaveEvent(self, _):\n self._clear_icons()\n\n def _repaint(self, index):\n row, column = index.row(), index.column()\n if column in (Column.remove, Column.scheme):\n self.setCursor(QCursor(Qt.PointingHandCursor))\n else:\n self.setCursor(QCursor(Qt.ArrowCursor))\n if row >= 0:\n self.model().item(row, Column.remove).setIcon(self._icon_remove)\n self.model().item(row, Column.scheme).setIcon(self._icon_scheme)\n\n def _clear_icons(self):\n model = self.model()\n for i in range(model.rowCount()):\n model.item(i, Column.remove).setIcon(QIcon())\n model.item(i, Column.scheme).setIcon(QIcon())\n\n\nclass 
OWReport(OWWidget):\n name = \"Report\"\n save_dir = Setting(\"\")\n open_dir = Setting(\"\")\n\n def __init__(self):\n super().__init__()\n self._setup_ui_()\n self.report_changed = False\n\n index_file = pkg_resources.resource_filename(__name__, \"index.html\")\n with open(index_file, \"r\") as f:\n self.report_html_template = f.read()\n\n def _setup_ui_(self):\n self.table_model = ReportItemModel(0, len(Column.__members__))\n self.table = ReportTable(self.controlArea)\n self.table.setModel(self.table_model)\n self.table.setShowGrid(False)\n self.table.setSelectionBehavior(QTableView.SelectRows)\n self.table.setSelectionMode(QTableView.SingleSelection)\n self.table.setWordWrap(False)\n self.table.setMouseTracking(True)\n self.table.verticalHeader().setSectionResizeMode(QHeaderView.Fixed)\n self.table.verticalHeader().setDefaultSectionSize(20)\n self.table.verticalHeader().setVisible(False)\n self.table.horizontalHeader().setVisible(False)\n self.table.setFixedWidth(250)\n self.table.setColumnWidth(Column.item, 200)\n self.table.setColumnWidth(Column.remove, 23)\n self.table.setColumnWidth(Column.scheme, 25)\n self.table.clicked.connect(self._table_clicked)\n self.table.selectionModel().selectionChanged.connect(\n self._table_selection_changed)\n self.controlArea.layout().addWidget(self.table)\n\n self.last_scheme = None\n self.scheme_button = gui.button(\n self.controlArea, self, \"Back to Last Scheme\",\n callback=self._show_last_scheme\n )\n box = gui.hBox(self.controlArea)\n box.setContentsMargins(-6, 0, -6, 0)\n self.save_button = gui.button(\n box, self, \"Save\", callback=self.save_report, disabled=True\n )\n self.print_button = gui.button(\n box, self, \"Print\", callback=self._print_report, disabled=True\n )\n\n class PyBridge(QObject):\n @pyqtSlot(str)\n def _select_item(myself, item_id):\n item = self.table_model.get_item_by_id(item_id)\n self.table.selectRow(self.table_model.indexFromItem(item).row())\n self._change_selected_item(item)\n\n @pyqtSlot(str, str)\n def _add_comment(myself, item_id, value):\n item = self.table_model.get_item_by_id(item_id)\n item.comment = value\n self.report_changed = True\n\n self.report_view = WebviewWidget(self.mainArea, bridge=PyBridge(self))\n self.mainArea.layout().addWidget(self.report_view)\n\n @deprecated(\"Widgets should not be pickled\")\n def __getstate__(self):\n rep_dict = self.__dict__.copy()\n for key in ('_OWWidget__env', 'controlArea', 'mainArea',\n 'report_view', 'table', 'table_model'):\n del rep_dict[key]\n items_len = self.table_model.rowCount()\n return rep_dict, [self.table_model.item(i) for i in range(items_len)]\n\n @deprecated(\"Widgets should not be pickled\")\n def __setstate__(self, state):\n rep_dict, items = state\n self.__dict__.update(rep_dict)\n self._setup_ui_()\n for i in range(len(items)):\n item = items[i]\n self.table_model.add_item(\n ReportItem(item.name, item.html, item.scheme,\n item.module, item.icon_name, item.comment)\n )\n\n def _table_clicked(self, index):\n if index.column() == Column.remove:\n self._remove_item(index.row())\n indexes = self.table.selectionModel().selectedIndexes()\n if indexes:\n item = self.table_model.item(indexes[0].row())\n self._scroll_to_item(item)\n self._change_selected_item(item)\n if index.column() == Column.scheme:\n self._show_scheme(index.row())\n\n def _table_selection_changed(self, new_selection, _):\n if new_selection.indexes():\n item = self.table_model.item(new_selection.indexes()[0].row())\n self._scroll_to_item(item)\n self._change_selected_item(item)\n\n def 
_remove_item(self, row):\n self.table_model.removeRow(row)\n self._empty_report()\n self.report_changed = True\n self._build_html()\n\n def clear(self):\n self.table_model.clear()\n self._empty_report()\n self.report_changed = True\n self._build_html()\n\n def _add_item(self, widget):\n name = widget.get_widget_name_extension()\n name = \"{} - {}\".format(widget.name, name) if name else widget.name\n item = ReportItem(name, widget.report_html, self._get_scheme(),\n widget.__module__, widget.icon)\n self.table_model.add_item(item)\n self._empty_report()\n self.report_changed = True\n return item\n\n def _empty_report(self):\n # disable save and print if no reports\n self.save_button.setEnabled(self.table_model.rowCount())\n self.print_button.setEnabled(self.table_model.rowCount())\n\n def _build_html(self):\n html = self.report_html_template\n html += \"<body>\"\n for i in range(self.table_model.rowCount()):\n item = self.table_model.item(i)\n html += \"<div id='{}' class='normal' \" \\\n \"onClick='pybridge._select_item(this.id)'>{}<div \" \\\n \"class='textwrapper'><textarea \" \\\n \"placeholder='Write a comment...'\" \\\n \"onInput='this.innerHTML = this.value;\" \\\n \"pybridge._add_comment(this.parentNode.parentNode.id, this.value);'\" \\\n \">{}</textarea></div>\" \\\n \"</div>\".format(item.id, item.html, item.comment)\n html += \"</body></html>\"\n self.report_view.setHtml(html)\n\n def _scroll_to_item(self, item):\n self.report_view.evalJS(\n \"document.getElementById('{}').scrollIntoView();\".format(item.id)\n )\n\n def _change_selected_item(self, item):\n self.report_view.evalJS(\n \"var sel_el = document.getElementsByClassName('selected')[0]; \"\n \"if (sel_el.id != {}) \"\n \" sel_el.className = 'normal';\".format(item.id))\n self.report_view.evalJS(\n \"document.getElementById('{}').className = 'selected';\"\n .format(item.id))\n self.report_changed = True\n\n def make_report(self, widget):\n item = self._add_item(widget)\n self._build_html()\n self._scroll_to_item(item)\n self.table.selectRow(self.table_model.rowCount() - 1)\n\n def _get_scheme(self):\n canvas = self.get_canvas_instance()\n return canvas.get_scheme_xml() if canvas else None\n\n def _show_scheme(self, row):\n scheme = self.table_model.item(row).scheme\n canvas = self.get_canvas_instance()\n if canvas:\n document = canvas.current_document()\n if document.isModifiedStrict():\n self.last_scheme = canvas.get_scheme_xml()\n self._load_scheme(scheme)\n\n def _show_last_scheme(self):\n if self.last_scheme:\n self._load_scheme(self.last_scheme)\n\n def _load_scheme(self, contents):\n # forcibly load the contents into the associated CanvasMainWindow\n # instance if one exists. 
Preserve `self` as the designated report.\n canvas = self.get_canvas_instance()\n if canvas is not None:\n document = canvas.current_document()\n old = document.scheme()\n if old.has_report() and old.report_view() is self:\n # remove self so it is not closed\n old.set_report_view(None)\n canvas.load_scheme_xml(contents)\n scheme = canvas.current_document().scheme()\n scheme.set_report_view(self)\n\n def save_report(self):\n \"\"\"Save report\"\"\"\n formats = OrderedDict((('HTML (*.html)', '.html'),\n ('PDF (*.pdf)', '.pdf'),\n ('Report (*.report)', '.report')))\n\n filename, selected_format = QFileDialog.getSaveFileName(\n self, \"Save Report\", self.save_dir, ';;'.join(formats.keys()))\n if not filename:\n return QDialog.Rejected\n\n # Set appropriate extension if not set by the user\n expect_ext = formats[selected_format]\n if not filename.endswith(expect_ext):\n filename += expect_ext\n\n self.save_dir = os.path.dirname(filename)\n self.saveSettings()\n _, extension = os.path.splitext(filename)\n if extension == \".pdf\":\n printer = QPrinter()\n printer.setPageSize(QPrinter.A4)\n printer.setOutputFormat(QPrinter.PdfFormat)\n printer.setOutputFileName(filename)\n self._print_to_printer(printer)\n elif extension == \".report\":\n self.save(filename)\n else:\n def save_html(contents):\n try:\n with open(filename, \"w\", encoding=\"utf-8\") as f:\n f.write(contents)\n except PermissionError:\n self.permission_error(filename)\n\n save_html(self.report_view.html())\n self.report_changed = False\n return QDialog.Accepted\n\n def _print_to_printer(self, printer):\n filename = printer.outputFileName()\n if filename:\n try:\n # QtWebEngine\n return self.report_view.page().printToPdf(filename)\n except AttributeError:\n try:\n # QtWebKit\n return self.report_view.print_(printer)\n except AttributeError:\n # QtWebEngine 5.6\n pass\n # Fallback to printing widget as an image\n self.report_view.render(printer)\n\n def _print_report(self):\n printer = QPrinter()\n print_dialog = QPrintDialog(printer, self)\n print_dialog.setWindowTitle(\"Print report\")\n if print_dialog.exec_() != QDialog.Accepted:\n return\n self._print_to_printer(printer)\n\n def save(self, filename):\n attributes = {}\n for key in ('last_scheme', 'open_dir'):\n attributes[key] = getattr(self, key, None)\n items = [self.table_model.item(i)\n for i in range(self.table_model.rowCount())]\n report = dict(__version__=1,\n attributes=attributes,\n items=items)\n\n try:\n with open(filename, 'wb') as f:\n pickle.dump(report, f)\n except PermissionError:\n self.permission_error(filename)\n\n @classmethod\n def load(cls, filename):\n with open(filename, 'rb') as f:\n report = pickle.load(f)\n\n if not isinstance(report, dict):\n return report\n\n self = cls()\n self.__dict__.update(report['attributes'])\n for item in report['items']:\n self.table_model.add_item(\n ReportItem(item.name, item.html, item.scheme,\n item.module, item.icon_name, item.comment)\n )\n return self\n\n def permission_error(self, filename):\n message_critical(\n self.tr(\"Permission error when trying to write report.\"),\n title=self.tr(\"Error\"),\n informative_text=self.tr(\"Permission error occurred \"\n \"while saving '{}'.\").format(filename),\n exc_info=True,\n parent=self)\n log.error(\"PermissionError when trying to write report.\", exc_info=True)\n\n def is_empty(self):\n return not self.table_model.rowCount()\n\n def is_changed(self):\n return self.report_changed\n\n @staticmethod\n def set_instance(report):\n warnings.warn(\n \"OWReport.set_instance is 
deprecated\",\n DeprecationWarning, stacklevel=2\n )\n app_inst = QApplication.instance()\n app_inst._report_window = report\n\n @staticmethod\n def get_instance():\n warnings.warn(\n \"OWReport.get_instance is deprecated\",\n DeprecationWarning, stacklevel=2\n )\n app_inst = QApplication.instance()\n if not hasattr(app_inst, \"_report_window\"):\n report = OWReport()\n app_inst._report_window = report\n return app_inst._report_window\n\n def get_canvas_instance(self):\n # type: () -> Optional[CanvasMainWindow]\n \"\"\"\n Return a CanvasMainWindow instance to which this report is attached.\n\n Return None if not associated with any window.\n\n Returns\n -------\n window : Optional[CanvasMainWindow]\n \"\"\"\n # Run up the parent/window chain\n parent = self.parent()\n if parent is not None:\n window = parent.window()\n if isinstance(window, CanvasMainWindow):\n return window\n return None\n\n def copy_to_clipboard(self):\n self.report_view.triggerPageAction(self.report_view.page().Copy)\n\n\nif __name__ == \"__main__\":\n import sys\n from Orange.data import Table\n from Orange.widgets.data.owfile import OWFile\n from Orange.widgets.data.owtable import OWDataTable\n from Orange.widgets.data.owdiscretize import OWDiscretize\n from Orange.widgets.model.owrandomforest import OWRandomForest\n\n iris = Table(\"iris\")\n app = QApplication(sys.argv)\n\n main = OWReport.get_instance()\n file = OWFile()\n file.create_report_html()\n main.make_report(file)\n\n table = OWDataTable()\n table.set_dataset(iris)\n table.create_report_html()\n main.make_report(table)\n\n main = OWReport.get_instance()\n disc = OWDiscretize()\n disc.create_report_html()\n main.make_report(disc)\n\n learner = OWRandomForest()\n learner.create_report_html()\n main.make_report(learner)\n\n main.show()\n main.saveSettings()\n assert main.table_model.rowCount() == 4\n\n sys.exit(app.exec_())\n",
"path": "Orange/widgets/report/owreport.py"
}
] | diff --git a/Orange/widgets/report/owreport.py b/Orange/widgets/report/owreport.py
index e2a70d8aa71..47a99ab4766 100644
--- a/Orange/widgets/report/owreport.py
+++ b/Orange/widgets/report/owreport.py
@@ -477,6 +477,9 @@ def get_canvas_instance(self):
return window
return None
+ def copy_to_clipboard(self):
+ self.report_view.triggerPageAction(self.report_view.page().Copy)
+
if __name__ == "__main__":
import sys
|
falconry__falcon-801 | Default OPTIONS responder does not set Content-Length to "0"
Per RFC 7231:
> A server MUST generate a Content-Length field with a value of "0" if no payload body is to be sent in the response.
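For illustration, a minimal sketch of the default OPTIONS responder with the missing header added (this mirrors the change to `falcon/responders.py` shown in the diff below):

```python
from falcon.status_codes import HTTP_204

def create_default_options(allowed_methods):
    allowed = ', '.join(allowed_methods)

    def on_options(req, resp, **kwargs):
        resp.status = HTTP_204
        resp.set_header('Allow', allowed)
        # RFC 7231: no payload body is sent, so Content-Length must be "0"
        resp.set_header('Content-Length', '0')

    return on_options
```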
| [
{
"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.errors import HTTPBadRequest\nfrom falcon.errors import HTTPNotFound\nfrom falcon.status_codes import HTTP_204\nfrom falcon.status_codes import HTTP_405\n\n\ndef path_not_found(req, resp, **kwargs):\n \"\"\"Raise 404 HTTPNotFound error\"\"\"\n raise HTTPNotFound()\n\n\ndef bad_request(req, resp, **kwargs):\n \"\"\"Raise 400 HTTPBadRequest error\"\"\"\n raise HTTPBadRequest('Bad request', 'Invalid HTTP method')\n\n\ndef create_method_not_allowed(allowed_methods):\n \"\"\"Creates a responder for \"405 Method Not Allowed\"\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def method_not_allowed(req, resp, **kwargs):\n resp.status = HTTP_405\n resp.set_header('Allow', allowed)\n\n return method_not_allowed\n\n\ndef create_default_options(allowed_methods):\n \"\"\"Creates a default responder for the OPTIONS method\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def on_options(req, resp, **kwargs):\n resp.status = HTTP_204\n resp.set_header('Allow', allowed)\n\n return on_options\n",
"path": "falcon/responders.py"
}
] | [
{
"content": "# Copyright 2013 by Rackspace Hosting, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom falcon.errors import HTTPBadRequest\nfrom falcon.errors import HTTPNotFound\nfrom falcon.status_codes import HTTP_204\nfrom falcon.status_codes import HTTP_405\n\n\ndef path_not_found(req, resp, **kwargs):\n \"\"\"Raise 404 HTTPNotFound error\"\"\"\n raise HTTPNotFound()\n\n\ndef bad_request(req, resp, **kwargs):\n \"\"\"Raise 400 HTTPBadRequest error\"\"\"\n raise HTTPBadRequest('Bad request', 'Invalid HTTP method')\n\n\ndef create_method_not_allowed(allowed_methods):\n \"\"\"Creates a responder for \"405 Method Not Allowed\"\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def method_not_allowed(req, resp, **kwargs):\n resp.status = HTTP_405\n resp.set_header('Allow', allowed)\n\n return method_not_allowed\n\n\ndef create_default_options(allowed_methods):\n \"\"\"Creates a default responder for the OPTIONS method\n\n Args:\n allowed_methods: A list of HTTP methods (uppercase) that should be\n returned in the Allow header.\n\n \"\"\"\n allowed = ', '.join(allowed_methods)\n\n def on_options(req, resp, **kwargs):\n resp.status = HTTP_204\n resp.set_header('Allow', allowed)\n resp.set_header('Content-Length', '0')\n\n return on_options\n",
"path": "falcon/responders.py"
}
] | diff --git a/falcon/responders.py b/falcon/responders.py
index b5f61866d..34da8075b 100644
--- a/falcon/responders.py
+++ b/falcon/responders.py
@@ -58,5 +58,6 @@ def create_default_options(allowed_methods):
def on_options(req, resp, **kwargs):
resp.status = HTTP_204
resp.set_header('Allow', allowed)
+ resp.set_header('Content-Length', '0')
return on_options
diff --git a/tests/test_headers.py b/tests/test_headers.py
index 88809923f..838755d8f 100644
--- a/tests/test_headers.py
+++ b/tests/test_headers.py
@@ -534,6 +534,12 @@ def test_add_link_complex(self):
self._check_link_header(resource, expected_value)
+ def test_content_length_options(self):
+ result = self.simulate_options()
+
+ content_length = '0'
+ self.assertEqual(result.headers['Content-Length'], content_length)
+
# ----------------------------------------------------------------------
# Helpers
# ----------------------------------------------------------------------
|
openvinotoolkit__datumaro-743 | Wrong annotated return type in Registry class
https://github.com/openvinotoolkit/datumaro/blob/0d4a73d3bbe3a93585af7a0148a0e344fd1106b3/datumaro/components/environment.py#L41-L42
In the referenced code, the return type annotation of the method appears to be wrong.
Either the annotation should be `Iterator[str]`, since iterating over a dict yields its keys (which are of type `str`), or the return statement should be `return iter(self.items.values())`.
When using the library with static type checkers, this annotation causes type-check errors. When the annotation is removed, type checkers correctly infer the type `Iterator[str]`.
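A standalone sketch (not the library source) illustrating why `Iterator[str]` is the correct annotation, matching the fix in the diff below:

```python
from typing import Dict, Generic, Iterator, TypeVar

T = TypeVar("T")

class Registry(Generic[T]):
    def __init__(self) -> None:
        self.items: Dict[str, T] = {}

    def __iter__(self) -> Iterator[str]:
        # iter(dict) yields the keys, which are str here
        return iter(self.items)
```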
| [
{
"content": "# Copyright (C) 2020-2022 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport glob\nimport importlib\nimport logging as log\nimport os.path as osp\nfrom functools import partial\nfrom inspect import isclass\nfrom typing import Callable, Dict, Generic, Iterable, Iterator, List, Optional, Type, TypeVar\n\nfrom datumaro.components.cli_plugin import CliPlugin, plugin_types\nfrom datumaro.components.format_detection import RejectionReason, detect_dataset_format\nfrom datumaro.util.os_util import import_foreign_module, split_path\n\nT = TypeVar(\"T\")\n\n\nclass Registry(Generic[T]):\n def __init__(self):\n self.items: Dict[str, T] = {}\n\n def register(self, name: str, value: T) -> T:\n self.items[name] = value\n return value\n\n def unregister(self, name: str) -> Optional[T]:\n return self.items.pop(name, None)\n\n def get(self, key: str):\n \"\"\"Returns a class or a factory function\"\"\"\n return self.items[key]\n\n def __getitem__(self, key: str) -> T:\n return self.get(key)\n\n def __contains__(self, key) -> bool:\n return key in self.items\n\n def __iter__(self) -> Iterator[T]:\n return iter(self.items)\n\n\nclass PluginRegistry(Registry[Type[CliPlugin]]):\n def __init__(\n self, filter: Callable[[Type[CliPlugin]], bool] = None\n ): # pylint: disable=redefined-builtin\n super().__init__()\n self._filter = filter\n\n def batch_register(self, values: Iterable[CliPlugin]):\n for v in values:\n if self._filter and not self._filter(v):\n continue\n\n self.register(v.NAME, v)\n\n\nclass Environment:\n _builtin_plugins = None\n\n @classmethod\n def _make_filter(cls, accept, skip=None):\n accept = (accept,) if isclass(accept) else tuple(accept)\n skip = {skip} if isclass(skip) else set(skip or [])\n skip = tuple(skip | set(accept))\n return partial(cls._check_type, accept=accept, skip=skip)\n\n @staticmethod\n def _check_type(t, *, accept, skip):\n return issubclass(t, accept) and t not in skip\n\n def __init__(self):\n from datumaro.components.converter import Converter\n from datumaro.components.dataset_generator import DatasetGenerator\n from datumaro.components.extractor import (\n Extractor,\n Importer,\n ItemTransform,\n SourceExtractor,\n Transform,\n )\n from datumaro.components.launcher import Launcher\n from datumaro.components.validator import Validator\n\n _filter = self._make_filter\n self._extractors = PluginRegistry(_filter(Extractor, skip=SourceExtractor))\n self._importers = PluginRegistry(_filter(Importer))\n self._launchers = PluginRegistry(_filter(Launcher))\n self._converters = PluginRegistry(_filter(Converter))\n self._generators = PluginRegistry(_filter(DatasetGenerator))\n self._transforms = PluginRegistry(_filter(Transform, skip=ItemTransform))\n self._validators = PluginRegistry(_filter(Validator))\n self._builtins_initialized = False\n\n def _get_plugin_registry(self, name):\n if not self._builtins_initialized:\n self._builtins_initialized = True\n self._register_builtin_plugins()\n return getattr(self, name)\n\n @property\n def extractors(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_extractors\")\n\n @property\n def importers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_importers\")\n\n @property\n def launchers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_launchers\")\n\n @property\n def converters(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_converters\")\n\n @property\n def generators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_generators\")\n\n 
@property\n def transforms(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_transforms\")\n\n @property\n def validators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_validators\")\n\n @staticmethod\n def _find_plugins(plugins_dir):\n plugins = []\n\n for pattern in (\"*.py\", \"*/*.py\"):\n for path in glob.glob(osp.join(glob.escape(plugins_dir), pattern)):\n if not osp.isfile(path):\n continue\n\n path_rel = osp.relpath(path, plugins_dir)\n name_parts = split_path(osp.splitext(path_rel)[0])\n\n # a module with a dot in the name won't load correctly\n if any(\".\" in part for part in name_parts):\n log.warning(\n \"Python file '%s' in directory '%s' can't be imported \"\n \"due to a dot in the name; skipping.\",\n path_rel,\n plugins_dir,\n )\n continue\n plugins.append(\".\".join(name_parts))\n\n return plugins\n\n @classmethod\n def _get_plugin_exports(cls, module, types):\n exports = []\n if hasattr(module, \"exports\"):\n exports = module.exports\n else:\n for symbol in dir(module):\n if symbol.startswith(\"_\"):\n continue\n exports.append(getattr(module, symbol))\n\n exports = [s for s in exports if isclass(s) and issubclass(s, types) and not s in types]\n\n return exports\n\n @classmethod\n def _load_plugins(cls, module_names, *, importer, types=None):\n types = tuple(types or plugin_types())\n\n all_exports = []\n for module_name in module_names:\n try:\n module = importer(module_name)\n exports = cls._get_plugin_exports(module, types)\n except Exception as e:\n module_search_error = ModuleNotFoundError\n\n message = [\"Failed to import module '%s': %s\", module_name, e]\n if isinstance(e, module_search_error):\n log.debug(*message)\n else:\n log.warning(*message)\n continue\n\n log.debug(\n \"Imported the following symbols from %s: %s\"\n % (module_name, \", \".join(s.__name__ for s in exports))\n )\n all_exports.extend(exports)\n\n return all_exports\n\n @classmethod\n def _load_builtin_plugins(cls):\n if cls._builtin_plugins is None:\n import datumaro.plugins\n\n plugins_dir = osp.dirname(datumaro.plugins.__file__)\n module_names = [\n datumaro.plugins.__name__ + \".\" + name for name in cls._find_plugins(plugins_dir)\n ]\n cls._builtin_plugins = cls._load_plugins(module_names, importer=importlib.import_module)\n return cls._builtin_plugins\n\n def load_plugins(self, plugins_dir):\n module_names = self._find_plugins(plugins_dir)\n plugins = self._load_plugins(\n module_names, importer=partial(import_foreign_module, path=plugins_dir)\n )\n self._register_plugins(plugins)\n\n def _register_builtin_plugins(self):\n self._register_plugins(self._load_builtin_plugins())\n\n def _register_plugins(self, plugins):\n self.extractors.batch_register(plugins)\n self.importers.batch_register(plugins)\n self.launchers.batch_register(plugins)\n self.converters.batch_register(plugins)\n self.generators.batch_register(plugins)\n self.transforms.batch_register(plugins)\n self.validators.batch_register(plugins)\n\n def make_extractor(self, name, *args, **kwargs):\n return self.extractors.get(name)(*args, **kwargs)\n\n def make_importer(self, name, *args, **kwargs):\n return self.importers.get(name)(*args, **kwargs)\n\n def make_launcher(self, name, *args, **kwargs):\n return self.launchers.get(name)(*args, **kwargs)\n\n def make_converter(self, name, *args, **kwargs):\n result = self.converters.get(name)\n if isclass(result):\n result = result.convert\n return partial(result, *args, **kwargs)\n\n def make_transform(self, name, *args, **kwargs):\n return 
partial(self.transforms.get(name), *args, **kwargs)\n\n def is_format_known(self, name):\n return name in self.importers or name in self.extractors\n\n def detect_dataset(\n self,\n path: str,\n depth: int = 1,\n rejection_callback: Optional[Callable[[str, RejectionReason, str], None]] = None,\n ) -> List[str]:\n ignore_dirs = {\"__MSOSX\", \"__MACOSX\"}\n matched_formats = set()\n for _ in range(depth + 1):\n detected_formats = detect_dataset_format(\n (\n (format_name, importer.detect)\n for format_name, importer in self.importers.items.items()\n ),\n path,\n rejection_callback=rejection_callback,\n )\n\n if detected_formats and len(detected_formats) == 1:\n return detected_formats\n elif detected_formats:\n matched_formats |= set(detected_formats)\n\n paths = glob.glob(osp.join(path, \"*\"))\n path = \"\" if len(paths) != 1 else paths[0]\n if not osp.isdir(path) or osp.basename(path) in ignore_dirs:\n break\n\n return list(matched_formats)\n",
"path": "datumaro/components/environment.py"
}
] | [
{
"content": "# Copyright (C) 2020-2022 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport glob\nimport importlib\nimport logging as log\nimport os.path as osp\nfrom functools import partial\nfrom inspect import isclass\nfrom typing import Callable, Dict, Generic, Iterable, Iterator, List, Optional, Type, TypeVar\n\nfrom datumaro.components.cli_plugin import CliPlugin, plugin_types\nfrom datumaro.components.format_detection import RejectionReason, detect_dataset_format\nfrom datumaro.util.os_util import import_foreign_module, split_path\n\nT = TypeVar(\"T\")\n\n\nclass Registry(Generic[T]):\n def __init__(self):\n self.items: Dict[str, T] = {}\n\n def register(self, name: str, value: T) -> T:\n self.items[name] = value\n return value\n\n def unregister(self, name: str) -> Optional[T]:\n return self.items.pop(name, None)\n\n def get(self, key: str):\n \"\"\"Returns a class or a factory function\"\"\"\n return self.items[key]\n\n def __getitem__(self, key: str) -> T:\n return self.get(key)\n\n def __contains__(self, key) -> bool:\n return key in self.items\n\n def __iter__(self) -> Iterator[str]:\n return iter(self.items)\n\n\nclass PluginRegistry(Registry[Type[CliPlugin]]):\n def __init__(\n self, filter: Callable[[Type[CliPlugin]], bool] = None\n ): # pylint: disable=redefined-builtin\n super().__init__()\n self._filter = filter\n\n def batch_register(self, values: Iterable[CliPlugin]):\n for v in values:\n if self._filter and not self._filter(v):\n continue\n\n self.register(v.NAME, v)\n\n\nclass Environment:\n _builtin_plugins = None\n\n @classmethod\n def _make_filter(cls, accept, skip=None):\n accept = (accept,) if isclass(accept) else tuple(accept)\n skip = {skip} if isclass(skip) else set(skip or [])\n skip = tuple(skip | set(accept))\n return partial(cls._check_type, accept=accept, skip=skip)\n\n @staticmethod\n def _check_type(t, *, accept, skip):\n return issubclass(t, accept) and t not in skip\n\n def __init__(self):\n from datumaro.components.converter import Converter\n from datumaro.components.dataset_generator import DatasetGenerator\n from datumaro.components.extractor import (\n Extractor,\n Importer,\n ItemTransform,\n SourceExtractor,\n Transform,\n )\n from datumaro.components.launcher import Launcher\n from datumaro.components.validator import Validator\n\n _filter = self._make_filter\n self._extractors = PluginRegistry(_filter(Extractor, skip=SourceExtractor))\n self._importers = PluginRegistry(_filter(Importer))\n self._launchers = PluginRegistry(_filter(Launcher))\n self._converters = PluginRegistry(_filter(Converter))\n self._generators = PluginRegistry(_filter(DatasetGenerator))\n self._transforms = PluginRegistry(_filter(Transform, skip=ItemTransform))\n self._validators = PluginRegistry(_filter(Validator))\n self._builtins_initialized = False\n\n def _get_plugin_registry(self, name):\n if not self._builtins_initialized:\n self._builtins_initialized = True\n self._register_builtin_plugins()\n return getattr(self, name)\n\n @property\n def extractors(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_extractors\")\n\n @property\n def importers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_importers\")\n\n @property\n def launchers(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_launchers\")\n\n @property\n def converters(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_converters\")\n\n @property\n def generators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_generators\")\n\n 
@property\n def transforms(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_transforms\")\n\n @property\n def validators(self) -> PluginRegistry:\n return self._get_plugin_registry(\"_validators\")\n\n @staticmethod\n def _find_plugins(plugins_dir):\n plugins = []\n\n for pattern in (\"*.py\", \"*/*.py\"):\n for path in glob.glob(osp.join(glob.escape(plugins_dir), pattern)):\n if not osp.isfile(path):\n continue\n\n path_rel = osp.relpath(path, plugins_dir)\n name_parts = split_path(osp.splitext(path_rel)[0])\n\n # a module with a dot in the name won't load correctly\n if any(\".\" in part for part in name_parts):\n log.warning(\n \"Python file '%s' in directory '%s' can't be imported \"\n \"due to a dot in the name; skipping.\",\n path_rel,\n plugins_dir,\n )\n continue\n plugins.append(\".\".join(name_parts))\n\n return plugins\n\n @classmethod\n def _get_plugin_exports(cls, module, types):\n exports = []\n if hasattr(module, \"exports\"):\n exports = module.exports\n else:\n for symbol in dir(module):\n if symbol.startswith(\"_\"):\n continue\n exports.append(getattr(module, symbol))\n\n exports = [s for s in exports if isclass(s) and issubclass(s, types) and not s in types]\n\n return exports\n\n @classmethod\n def _load_plugins(cls, module_names, *, importer, types=None):\n types = tuple(types or plugin_types())\n\n all_exports = []\n for module_name in module_names:\n try:\n module = importer(module_name)\n exports = cls._get_plugin_exports(module, types)\n except Exception as e:\n module_search_error = ModuleNotFoundError\n\n message = [\"Failed to import module '%s': %s\", module_name, e]\n if isinstance(e, module_search_error):\n log.debug(*message)\n else:\n log.warning(*message)\n continue\n\n log.debug(\n \"Imported the following symbols from %s: %s\"\n % (module_name, \", \".join(s.__name__ for s in exports))\n )\n all_exports.extend(exports)\n\n return all_exports\n\n @classmethod\n def _load_builtin_plugins(cls):\n if cls._builtin_plugins is None:\n import datumaro.plugins\n\n plugins_dir = osp.dirname(datumaro.plugins.__file__)\n module_names = [\n datumaro.plugins.__name__ + \".\" + name for name in cls._find_plugins(plugins_dir)\n ]\n cls._builtin_plugins = cls._load_plugins(module_names, importer=importlib.import_module)\n return cls._builtin_plugins\n\n def load_plugins(self, plugins_dir):\n module_names = self._find_plugins(plugins_dir)\n plugins = self._load_plugins(\n module_names, importer=partial(import_foreign_module, path=plugins_dir)\n )\n self._register_plugins(plugins)\n\n def _register_builtin_plugins(self):\n self._register_plugins(self._load_builtin_plugins())\n\n def _register_plugins(self, plugins):\n self.extractors.batch_register(plugins)\n self.importers.batch_register(plugins)\n self.launchers.batch_register(plugins)\n self.converters.batch_register(plugins)\n self.generators.batch_register(plugins)\n self.transforms.batch_register(plugins)\n self.validators.batch_register(plugins)\n\n def make_extractor(self, name, *args, **kwargs):\n return self.extractors.get(name)(*args, **kwargs)\n\n def make_importer(self, name, *args, **kwargs):\n return self.importers.get(name)(*args, **kwargs)\n\n def make_launcher(self, name, *args, **kwargs):\n return self.launchers.get(name)(*args, **kwargs)\n\n def make_converter(self, name, *args, **kwargs):\n result = self.converters.get(name)\n if isclass(result):\n result = result.convert\n return partial(result, *args, **kwargs)\n\n def make_transform(self, name, *args, **kwargs):\n return 
partial(self.transforms.get(name), *args, **kwargs)\n\n def is_format_known(self, name):\n return name in self.importers or name in self.extractors\n\n def detect_dataset(\n self,\n path: str,\n depth: int = 1,\n rejection_callback: Optional[Callable[[str, RejectionReason, str], None]] = None,\n ) -> List[str]:\n ignore_dirs = {\"__MSOSX\", \"__MACOSX\"}\n matched_formats = set()\n for _ in range(depth + 1):\n detected_formats = detect_dataset_format(\n (\n (format_name, importer.detect)\n for format_name, importer in self.importers.items.items()\n ),\n path,\n rejection_callback=rejection_callback,\n )\n\n if detected_formats and len(detected_formats) == 1:\n return detected_formats\n elif detected_formats:\n matched_formats |= set(detected_formats)\n\n paths = glob.glob(osp.join(path, \"*\"))\n path = \"\" if len(paths) != 1 else paths[0]\n if not osp.isdir(path) or osp.basename(path) in ignore_dirs:\n break\n\n return list(matched_formats)\n",
"path": "datumaro/components/environment.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 84c35ac044..e9706cb94b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -11,6 +11,22 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Add jupyter sample introducing how to merge datasets
(<https://github.com/openvinotoolkit/datumaro/pull/738>)
+### Changed
+- N/A
+
+### Deprecated
+- N/A
+
+### Removed
+- N/A
+
+### Fixed
+- Fix static type checking
+ (<https://github.com/openvinotoolkit/datumaro/pull/743>)
+
+### Security
+- N/A
+
## 06/09/2022 - Release v0.3.1
### Added
- Support for custom media types, new `PointCloud` media type,
diff --git a/datumaro/components/environment.py b/datumaro/components/environment.py
index 4e2b9e72c3..9490e9239e 100644
--- a/datumaro/components/environment.py
+++ b/datumaro/components/environment.py
@@ -38,7 +38,7 @@ def __getitem__(self, key: str) -> T:
def __contains__(self, key) -> bool:
return key in self.items
- def __iter__(self) -> Iterator[T]:
+ def __iter__(self) -> Iterator[str]:
return iter(self.items)
|
rasterio__rasterio-437 | Check for "ndarray-like" instead of ndarray in _warp; other places
I want to use `rasterio.warp.reproject` on an `xray.Dataset` with `xray.Dataset.apply` (http://xray.readthedocs.org/en/stable/). xray has a feature to turn the dataset into a `np.ndarray`, but that means losing all my metadata.
At https://github.com/mapbox/rasterio/blob/master/rasterio/_warp.pyx#L249, _warp checks that the source is an `np.ndarray` (whereas the source in my case is an `xray.DataArray` - satisfying the same interfaces as `np.ndarray`), so I get an invalid source error.
It could be a good idea to check for something like
```
def is_ndarray_like(source):
    return hasattr(source, '__array__')
```
instead of
```
isinstance(source, np.ndarray)
```
so other numpy-like arrays can be used.
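A quick standalone sketch of the duck-typed check (the `FakeDataArray` class below is only a hypothetical stand-in for something like `xray.DataArray`; the combined check mirrors the `is_ndarray` helper added in the diff below):

```python
import numpy as np

def is_ndarray_like(source):
    # accept real ndarrays as well as anything exposing the __array__ protocol
    return isinstance(source, np.ndarray) or hasattr(source, '__array__')

class FakeDataArray:
    def __array__(self):
        return np.zeros((2, 2))

assert is_ndarray_like(np.zeros((2, 2)))   # real ndarray passes
assert is_ndarray_like(FakeDataArray())    # array-like object passes
assert not is_ndarray_like([0, 1])         # plain lists do not
```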
| [
{
"content": "# Mapping of GDAL to Numpy data types.\n#\n# Since 0.13 we are not importing numpy here and data types are strings.\n# Happily strings can be used throughout Numpy and so existing code will\n# break.\n#\n# Within Rasterio, to test data types, we use Numpy's dtype() factory to \n# do something like this:\n#\n# if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...\n#\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\n# Not supported:\n# GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_, # GDT_CInt16\n 9: complex_, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128 } # GDT_CFloat64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\ndtype_rev['uint8'] = 1\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64' }\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\ndef check_dtype(dt):\n if dt not in dtype_rev:\n try:\n return dt().dtype.name in dtype_rev\n except:\n return False\n return True\n\n\ndef get_minimum_int_dtype(values):\n \"\"\"\n Uses range checking to determine the minimum integer data type required\n to represent values.\n\n :param values: numpy array\n :return: named data type that can be later used to create a numpy dtype\n \"\"\"\n\n min_value = values.min()\n max_value = values.max()\n \n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n",
"path": "rasterio/dtypes.py"
}
] | [
{
"content": "# Mapping of GDAL to Numpy data types.\n#\n# Since 0.13 we are not importing numpy here and data types are strings.\n# Happily strings can be used throughout Numpy and so existing code will\n# break.\n#\n# Within Rasterio, to test data types, we use Numpy's dtype() factory to \n# do something like this:\n#\n# if np.dtype(destination.dtype) == np.dtype(rasterio.uint8): ...\n#\n\nbool_ = 'bool'\nubyte = uint8 = 'uint8'\nuint16 = 'uint16'\nint16 = 'int16'\nuint32 = 'uint32'\nint32 = 'int32'\nfloat32 = 'float32'\nfloat64 = 'float64'\ncomplex_ = 'complex'\ncomplex64 = 'complex64'\ncomplex128 = 'complex128'\n\n# Not supported:\n# GDT_CInt16 = 8, GDT_CInt32 = 9, GDT_CFloat32 = 10, GDT_CFloat64 = 11\n\ndtype_fwd = {\n 0: None, # GDT_Unknown\n 1: ubyte, # GDT_Byte\n 2: uint16, # GDT_UInt16\n 3: int16, # GDT_Int16\n 4: uint32, # GDT_UInt32\n 5: int32, # GDT_Int32\n 6: float32, # GDT_Float32\n 7: float64, # GDT_Float64\n 8: complex_, # GDT_CInt16\n 9: complex_, # GDT_CInt32\n 10: complex64, # GDT_CFloat32\n 11: complex128 } # GDT_CFloat64\n\ndtype_rev = dict((v, k) for k, v in dtype_fwd.items())\ndtype_rev['uint8'] = 1\n\ntypename_fwd = {\n 0: 'Unknown',\n 1: 'Byte',\n 2: 'UInt16',\n 3: 'Int16',\n 4: 'UInt32',\n 5: 'Int32',\n 6: 'Float32',\n 7: 'Float64',\n 8: 'CInt16',\n 9: 'CInt32',\n 10: 'CFloat32',\n 11: 'CFloat64' }\n\ntypename_rev = dict((v, k) for k, v in typename_fwd.items())\n\ndef _gdal_typename(dt):\n try:\n return typename_fwd[dtype_rev[dt]]\n except KeyError:\n return typename_fwd[dtype_rev[dt().dtype.name]]\n\ndef check_dtype(dt):\n if dt not in dtype_rev:\n try:\n return dt().dtype.name in dtype_rev\n except:\n return False\n return True\n\n\ndef get_minimum_int_dtype(values):\n \"\"\"\n Uses range checking to determine the minimum integer data type required\n to represent values.\n\n :param values: numpy array\n :return: named data type that can be later used to create a numpy dtype\n \"\"\"\n\n min_value = values.min()\n max_value = values.max()\n \n if min_value >= 0:\n if max_value <= 255:\n return uint8\n elif max_value <= 65535:\n return uint16\n elif max_value <= 4294967295:\n return uint32\n elif min_value >= -32768 and max_value <= 32767:\n return int16\n elif min_value >= -2147483648 and max_value <= 2147483647:\n return int32\n\n\ndef is_ndarray(array):\n import numpy\n\n return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')\n",
"path": "rasterio/dtypes.py"
}
] | diff --git a/rasterio/_features.pyx b/rasterio/_features.pyx
index bc83a65fd..028930303 100644
--- a/rasterio/_features.pyx
+++ b/rasterio/_features.pyx
@@ -67,7 +67,7 @@ def _shapes(image, mask, connectivity, transform):
if is_float:
fieldtp = 2
- if isinstance(image, np.ndarray):
+ if dtypes.is_ndarray(image):
mem_ds = InMemoryRaster(image, transform)
hband = mem_ds.band
elif isinstance(image, tuple):
@@ -76,7 +76,7 @@ def _shapes(image, mask, connectivity, transform):
else:
raise ValueError("Invalid source image")
- if isinstance(mask, np.ndarray):
+ if dtypes.is_ndarray(mask):
# A boolean mask must be converted to uint8 for GDAL
mask_ds = InMemoryRaster(mask.astype('uint8'), transform)
hmaskband = mask_ds.band
@@ -168,7 +168,7 @@ def _sieve(image, size, output, mask, connectivity):
cdef _io.RasterUpdater udr
cdef _io.RasterReader mask_reader
- if isinstance(image, np.ndarray):
+ if dtypes.is_ndarray(image):
in_mem_ds = InMemoryRaster(image)
in_band = in_mem_ds.band
elif isinstance(image, tuple):
@@ -177,7 +177,7 @@ def _sieve(image, size, output, mask, connectivity):
else:
raise ValueError("Invalid source image")
- if isinstance(output, np.ndarray):
+ if dtypes.is_ndarray(output):
log.debug("Output array: %r", output)
out_mem_ds = InMemoryRaster(output)
out_band = out_mem_ds.band
@@ -187,7 +187,7 @@ def _sieve(image, size, output, mask, connectivity):
else:
raise ValueError("Invalid output image")
- if isinstance(mask, np.ndarray):
+ if dtypes.is_ndarray(mask):
# A boolean mask must be converted to uint8 for GDAL
mask_mem_ds = InMemoryRaster(mask.astype('uint8'))
mask_band = mask_mem_ds.band
diff --git a/rasterio/_fill.pyx b/rasterio/_fill.pyx
index 2515564f4..723af0a30 100644
--- a/rasterio/_fill.pyx
+++ b/rasterio/_fill.pyx
@@ -23,7 +23,7 @@ def _fillnodata(image, mask, double max_search_distance=100.0,
cdef _io.RasterReader mrdr
cdef char **alg_options = NULL
- if isinstance(image, np.ndarray):
+ if dtypes.is_ndarray(image):
# copy numpy ndarray into an in-memory dataset.
image_dataset = _gdal.GDALCreate(
memdriver,
@@ -38,7 +38,7 @@ def _fillnodata(image, mask, double max_search_distance=100.0,
else:
raise ValueError("Invalid source image")
- if isinstance(mask, np.ndarray):
+ if dtypes.is_ndarray(mask):
mask_cast = mask.astype('uint8')
mask_dataset = _gdal.GDALCreate(
memdriver,
diff --git a/rasterio/_warp.pyx b/rasterio/_warp.pyx
index 1e267afb9..7d4e4f471 100644
--- a/rasterio/_warp.pyx
+++ b/rasterio/_warp.pyx
@@ -246,7 +246,7 @@ def _reproject(
# If the source is an ndarray, we copy to a MEM dataset.
# We need a src_transform and src_dst in this case. These will
# be copied to the MEM dataset.
- if isinstance(source, np.ndarray):
+ if dtypes.is_ndarray(source):
# Convert 2D single-band arrays to 3D multi-band.
if len(source.shape) == 2:
source = source.reshape(1, *source.shape)
@@ -300,7 +300,7 @@ def _reproject(
raise ValueError("Invalid source")
# Next, do the same for the destination raster.
- if isinstance(destination, np.ndarray):
+ if dtypes.is_ndarray(destination):
if len(destination.shape) == 2:
destination = destination.reshape(1, *destination.shape)
if destination.shape[0] != src_count:
@@ -489,11 +489,11 @@ def _reproject(
# _gdal.GDALDestroyApproxTransformer(psWOptions.pTransformerArg)
if psWOptions != NULL:
_gdal.GDALDestroyWarpOptions(psWOptions)
- if isinstance(source, np.ndarray):
+ if dtypes.is_ndarray(source):
if hdsin != NULL:
_gdal.GDALClose(hdsin)
- if reprojected and isinstance(destination, np.ndarray):
+ if reprojected and dtypes.is_ndarray(destination):
retval = _io.io_auto(destination, hdsout, 0)
# TODO: handle errors (by retval).
diff --git a/rasterio/dtypes.py b/rasterio/dtypes.py
index e08f14e67..449ec5f0f 100644
--- a/rasterio/dtypes.py
+++ b/rasterio/dtypes.py
@@ -96,3 +96,9 @@ def get_minimum_int_dtype(values):
return int16
elif min_value >= -2147483648 and max_value <= 2147483647:
return int32
+
+
+def is_ndarray(array):
+ import numpy
+
+ return isinstance(array, numpy.ndarray) or hasattr(array, '__array__')
diff --git a/tests/test_dtypes.py b/tests/test_dtypes.py
index 7a41fc778..1826ec52c 100644
--- a/tests/test_dtypes.py
+++ b/tests/test_dtypes.py
@@ -1,14 +1,23 @@
import numpy as np
-import rasterio.dtypes
+from rasterio import dtypes, ubyte
+
+
+def test_is_ndarray():
+ assert dtypes.is_ndarray(np.zeros((1,)))
+ assert dtypes.is_ndarray([0]) == False
+ assert dtypes.is_ndarray((0,)) == False
+
def test_np_dt_uint8():
- assert rasterio.dtypes.check_dtype(np.uint8)
+ assert dtypes.check_dtype(np.uint8)
+
def test_dt_ubyte():
- assert rasterio.dtypes.check_dtype(rasterio.ubyte)
+ assert dtypes.check_dtype(ubyte)
+
def test_gdal_name():
- assert rasterio.dtypes._gdal_typename(rasterio.ubyte) == 'Byte'
- assert rasterio.dtypes._gdal_typename(np.uint8) == 'Byte'
- assert rasterio.dtypes._gdal_typename(np.uint16) == 'UInt16'
+ assert dtypes._gdal_typename(ubyte) == 'Byte'
+ assert dtypes._gdal_typename(np.uint8) == 'Byte'
+ assert dtypes._gdal_typename(np.uint16) == 'UInt16'
|
Lightning-Universe__lightning-flash-665 | ImageEmbedder default behavior is not a flattened output
## 🐛 Bug
I discovered this issue while testing PR #655. If you run the [Image Embedding README example code](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), it returns a 3D tensor.
My understanding from the use of embeddings in general, and how they are used in [Fifty One](https://voxel51.com/docs/fiftyone/tutorials/image_embeddings.html), is that they expect each embedding to be 1D.
The reason it returns a 3D tensor is that the output shape depends on the backbone used. The default there is `resnet101`, which returns a `2048x7x7` tensor. Others, like Inception, return a flat 1D tensor, i.e. length-X.
### To Reproduce
Steps to reproduce the behavior:
Run the [README example](https://github.com/PyTorchLightning/lightning-flash#example-1-image-embedding), but remove the `embedding_dim` parameter. See below for example.
Note: as-is, this will error on `print(embeddings.shape)`, regardless of configuration, since that is a list. But the question here is around the logic for the ImageEmbedder.
#### Code sample
```python
from flash.core.data.utils import download_data
from flash.image import ImageEmbedder
# 1. Download the data
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "data/")
# 2. Create an ImageEmbedder with resnet50 trained on imagenet.
embedder = ImageEmbedder(backbone="resnet50")
# 3. Generate an embedding from an image path.
embeddings = embedder.predict("data/hymenoptera_data/predict/153783656_85f9c3ac70.jpg")
# 4. Print embeddings shape
print(embeddings.shape)
```
### Expected behavior
Expect to see a 100352x1 shape tensor as the output, instead of 2048x7x7.
### Environment
- PyTorch Version (e.g., 1.0): 1.9
- OS (e.g., Linux): Linux
- How you installed PyTorch (`conda`, `pip`, source): pip
- Build command you used (if compiling from source): N/A
- Python version: 3.8.6
- CUDA/cuDNN version: N/A
- GPU models and configuration: N/A
- Any other relevant information: N/A
### Additional context
I believe the question is around what the logic should be here:
https://github.com/PyTorchLightning/lightning-flash/blob/075de3a46d74d9fc0e769401063fede1f12d0518/flash/image/embedding/model.py#L85-L92
If `embedding_dim` is None, then the head is `nn.Identity()`. **If we desire a flat 1D embedding, then the question is: should `nn.Identity()` change to `nn.Flatten()`?**
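For a quick standalone illustration of the shape difference (plain PyTorch, not flash code; the input shape assumes the resnet101-style `2048x7x7` feature map mentioned above):

```python
import torch
from torch import nn

features = torch.randn(1, 2048, 7, 7)   # backbone feature map for one image
print(nn.Identity()(features).shape)    # torch.Size([1, 2048, 7, 7])
print(nn.Flatten()(features).shape)     # torch.Size([1, 100352]), a flat embedding
```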
It could be argued that the user should be left to flatten it afterwards on their own, but per the contributing guidelines, I thought this would align with "[Force User Decisions To Best Practices](https://github.com/PyTorchLightning/lightning-flash/blob/ddd942d3dfe3884a97a855446410166c3c9f16d9/.github/CONTRIBUTING.md#force-user-decisions-to-best-practices)"
Let me know your thoughts. If that makes sense, then I can update the code, run some tests, and update docs in a PR.
| [
{
"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fiftyone as fo\nimport fiftyone.brain as fob\nimport numpy as np\n\nfrom flash.core.data.utils import download_data\nfrom flash.image import ImageEmbedder\n\n# 1 Download data\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip\")\n\n# 2 Load data into FiftyOne\ndataset = fo.Dataset.from_dir(\n \"data/hymenoptera_data/test/\",\n fo.types.ImageClassificationDirectoryTree,\n)\n\n# 3 Load model\nembedder = ImageEmbedder(backbone=\"resnet101\", embedding_dim=128)\n\n# 4 Generate embeddings\nfilepaths = dataset.values(\"filepath\")\nembeddings = np.stack(embedder.predict(filepaths))\n\n# 5 Visualize in FiftyOne App\nresults = fob.compute_visualization(dataset, embeddings=embeddings)\nsession = fo.launch_app(dataset)\nplot = results.visualize(labels=\"ground_truth.label\")\nplot.show()\n\n# Optional: block execution until App is closed\nsession.wait()\n",
"path": "flash_examples/integrations/fiftyone/image_embedding.py"
}
] | [
{
"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport fiftyone as fo\nimport fiftyone.brain as fob\nimport numpy as np\n\nfrom flash.core.data.utils import download_data\nfrom flash.image import ImageEmbedder\n\n# 1 Download data\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip\")\n\n# 2 Load data into FiftyOne\ndataset = fo.Dataset.from_dir(\n \"data/hymenoptera_data/test/\",\n fo.types.ImageClassificationDirectoryTree,\n)\n\n# 3 Load model\nembedder = ImageEmbedder(backbone=\"resnet101\")\n\n# 4 Generate embeddings\nfilepaths = dataset.values(\"filepath\")\nembeddings = np.stack(embedder.predict(filepaths))\n\n# 5 Visualize in FiftyOne App\nresults = fob.compute_visualization(dataset, embeddings=embeddings)\nsession = fo.launch_app(dataset)\nplot = results.visualize(labels=\"ground_truth.label\")\nplot.show()\n\n# Optional: block execution until App is closed\nsession.wait()\n",
"path": "flash_examples/integrations/fiftyone/image_embedding.py"
}
] | diff --git a/README.md b/README.md
index be19cb06f9..9b840d3476 100644
--- a/README.md
+++ b/README.md
@@ -206,13 +206,13 @@ from flash.image import ImageEmbedder
download_data("https://pl-flash-data.s3.amazonaws.com/hymenoptera_data.zip", "data/")
# 2. Create an ImageEmbedder with resnet50 trained on imagenet.
-embedder = ImageEmbedder(backbone="resnet50", embedding_dim=128)
+embedder = ImageEmbedder(backbone="resnet50")
# 3. Generate an embedding from an image path.
embeddings = embedder.predict("data/hymenoptera_data/predict/153783656_85f9c3ac70.jpg")
# 4. Print embeddings shape
-print(embeddings.shape)
+print(embeddings[0].shape)
```
</details>
diff --git a/flash_examples/integrations/fiftyone/image_embedding.py b/flash_examples/integrations/fiftyone/image_embedding.py
index b9d1651ceb..019bd9cffe 100644
--- a/flash_examples/integrations/fiftyone/image_embedding.py
+++ b/flash_examples/integrations/fiftyone/image_embedding.py
@@ -28,7 +28,7 @@
)
# 3 Load model
-embedder = ImageEmbedder(backbone="resnet101", embedding_dim=128)
+embedder = ImageEmbedder(backbone="resnet101")
# 4 Generate embeddings
filepaths = dataset.values("filepath")
|
getmoto__moto-1613 | Running lambda invoke with govcloud results in a KeyError
moto version: 1.3.3
botocore version: 1.10.4
When using moto to invoke a Lambda function in a GovCloud region, you run into a `KeyError` from the `lambda_backends` lookup. This is because `boto.awslambda.regions()` does not include the GovCloud region, despite it being available for use.
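As an illustrative sketch (not the exact moto source) of why the lookup fails: the per-region backends are keyed only by the regions boto reports, and the dummy backend object below just stands in for moto's `LambdaBackend`:

```python
import boto.awslambda

# one backend per region that boto knows about; 'us-gov-west-1' is absent
lambda_backends = {region.name: object() for region in boto.awslambda.regions()}

print('us-gov-west-1' in lambda_backends)  # False, hence the KeyError on lookup
```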
I've made a pull request that fixes the issue: #1613
Trace of the error:
```
Traceback (most recent call last):
File "/Users/eric/nimbis/sites/tss/apps/session_aws/tasks/dns.py", line 84, in run
Payload=lambda_payload)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/client.py", line 314, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/client.py", line 599, in _make_api_call
operation_model, request_dict)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/endpoint.py", line 148, in make_request
return self._send_request(request_dict, operation_model)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/endpoint.py", line 177, in _send_request
success_response, exception):
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/endpoint.py", line 273, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 269, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/Users/eric/.virtualenvs/tss/lib/python2.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
KeyError: u'us-gov-west-1'
```
| [
{
"content": "from __future__ import unicode_literals\n\nimport base64\nfrom collections import defaultdict\nimport copy\nimport datetime\nimport docker.errors\nimport hashlib\nimport io\nimport logging\nimport os\nimport json\nimport re\nimport zipfile\nimport uuid\nimport functools\nimport tarfile\nimport calendar\nimport threading\nimport traceback\nimport weakref\nimport requests.adapters\n\nimport boto.awslambda\nfrom moto.core import BaseBackend, BaseModel\nfrom moto.core.exceptions import RESTError\nfrom moto.core.utils import unix_time_millis\nfrom moto.s3.models import s3_backend\nfrom moto.logs.models import logs_backends\nfrom moto.s3.exceptions import MissingBucket, MissingKey\nfrom moto import settings\nfrom .utils import make_function_arn\n\nlogger = logging.getLogger(__name__)\n\nACCOUNT_ID = '123456789012'\n\n\ntry:\n from tempfile import TemporaryDirectory\nexcept ImportError:\n from backports.tempfile import TemporaryDirectory\n\n\n_stderr_regex = re.compile(r'START|END|REPORT RequestId: .*')\n_orig_adapter_send = requests.adapters.HTTPAdapter.send\n\n\ndef zip2tar(zip_bytes):\n with TemporaryDirectory() as td:\n tarname = os.path.join(td, 'data.tar')\n timeshift = int((datetime.datetime.now() -\n datetime.datetime.utcnow()).total_seconds())\n with zipfile.ZipFile(io.BytesIO(zip_bytes), 'r') as zipf, \\\n tarfile.TarFile(tarname, 'w') as tarf:\n for zipinfo in zipf.infolist():\n if zipinfo.filename[-1] == '/': # is_dir() is py3.6+\n continue\n\n tarinfo = tarfile.TarInfo(name=zipinfo.filename)\n tarinfo.size = zipinfo.file_size\n tarinfo.mtime = calendar.timegm(zipinfo.date_time) - timeshift\n infile = zipf.open(zipinfo.filename)\n tarf.addfile(tarinfo, infile)\n\n with open(tarname, 'rb') as f:\n tar_data = f.read()\n return tar_data\n\n\nclass _VolumeRefCount:\n __slots__ = \"refcount\", \"volume\"\n\n def __init__(self, refcount, volume):\n self.refcount = refcount\n self.volume = volume\n\n\nclass _DockerDataVolumeContext:\n _data_vol_map = defaultdict(lambda: _VolumeRefCount(0, None)) # {sha256: _VolumeRefCount}\n _lock = threading.Lock()\n\n def __init__(self, lambda_func):\n self._lambda_func = lambda_func\n self._vol_ref = None\n\n @property\n def name(self):\n return self._vol_ref.volume.name\n\n def __enter__(self):\n # See if volume is already known\n with self.__class__._lock:\n self._vol_ref = self.__class__._data_vol_map[self._lambda_func.code_sha_256]\n self._vol_ref.refcount += 1\n if self._vol_ref.refcount > 1:\n return self\n\n # See if the volume already exists\n for vol in self._lambda_func.docker_client.volumes.list():\n if vol.name == self._lambda_func.code_sha_256:\n self._vol_ref.volume = vol\n return self\n\n # It doesn't exist so we need to create it\n self._vol_ref.volume = self._lambda_func.docker_client.volumes.create(self._lambda_func.code_sha_256)\n container = self._lambda_func.docker_client.containers.run('alpine', 'sleep 100', volumes={self.name: {'bind': '/tmp/data', 'mode': 'rw'}}, detach=True)\n try:\n tar_bytes = zip2tar(self._lambda_func.code_bytes)\n container.put_archive('/tmp/data', tar_bytes)\n finally:\n container.remove(force=True)\n\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n with self.__class__._lock:\n self._vol_ref.refcount -= 1\n if self._vol_ref.refcount == 0:\n try:\n self._vol_ref.volume.remove()\n except docker.errors.APIError as e:\n if e.status_code != 409:\n raise\n\n raise # multiple processes trying to use same volume?\n\n\nclass LambdaFunction(BaseModel):\n def __init__(self, spec, region, 
validate_s3=True, version=1):\n # required\n self.region = region\n self.code = spec['Code']\n self.function_name = spec['FunctionName']\n self.handler = spec['Handler']\n self.role = spec['Role']\n self.run_time = spec['Runtime']\n self.logs_backend = logs_backends[self.region]\n self.environment_vars = spec.get('Environment', {}).get('Variables', {})\n self.docker_client = docker.from_env()\n self.policy = \"\"\n\n # Unfortunately mocking replaces this method w/o fallback enabled, so we\n # need to replace it if we detect it's been mocked\n if requests.adapters.HTTPAdapter.send != _orig_adapter_send:\n _orig_get_adapter = self.docker_client.api.get_adapter\n\n def replace_adapter_send(*args, **kwargs):\n adapter = _orig_get_adapter(*args, **kwargs)\n\n if isinstance(adapter, requests.adapters.HTTPAdapter):\n adapter.send = functools.partial(_orig_adapter_send, adapter)\n return adapter\n self.docker_client.api.get_adapter = replace_adapter_send\n\n # optional\n self.description = spec.get('Description', '')\n self.memory_size = spec.get('MemorySize', 128)\n self.publish = spec.get('Publish', False) # this is ignored currently\n self.timeout = spec.get('Timeout', 3)\n\n self.logs_group_name = '/aws/lambda/{}'.format(self.function_name)\n self.logs_backend.ensure_log_group(self.logs_group_name, [])\n\n # this isn't finished yet. it needs to find out the VpcId value\n self._vpc_config = spec.get(\n 'VpcConfig', {'SubnetIds': [], 'SecurityGroupIds': []})\n\n # auto-generated\n self.version = version\n self.last_modified = datetime.datetime.utcnow().strftime(\n '%Y-%m-%d %H:%M:%S')\n\n if 'ZipFile' in self.code:\n # more hackery to handle unicode/bytes/str in python3 and python2 -\n # argh!\n try:\n to_unzip_code = base64.b64decode(\n bytes(self.code['ZipFile'], 'utf-8'))\n except Exception:\n to_unzip_code = base64.b64decode(self.code['ZipFile'])\n\n self.code_bytes = to_unzip_code\n self.code_size = len(to_unzip_code)\n self.code_sha_256 = hashlib.sha256(to_unzip_code).hexdigest()\n\n # TODO: we should be putting this in a lambda bucket\n self.code['UUID'] = str(uuid.uuid4())\n self.code['S3Key'] = '{}-{}'.format(self.function_name, self.code['UUID'])\n else:\n # validate s3 bucket and key\n key = None\n try:\n # FIXME: does not validate bucket region\n key = s3_backend.get_key(\n self.code['S3Bucket'], self.code['S3Key'])\n except MissingBucket:\n if do_validate_s3():\n raise ValueError(\n \"InvalidParameterValueException\",\n \"Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist\")\n except MissingKey:\n if do_validate_s3():\n raise ValueError(\n \"InvalidParameterValueException\",\n \"Error occurred while GetObject. S3 Error Code: NoSuchKey. 
S3 Error Message: The specified key does not exist.\")\n if key:\n self.code_bytes = key.value\n self.code_size = key.size\n self.code_sha_256 = hashlib.sha256(key.value).hexdigest()\n\n self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)\n\n self.tags = dict()\n\n def set_version(self, version):\n self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)\n self.version = version\n self.last_modified = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')\n\n @property\n def vpc_config(self):\n config = self._vpc_config.copy()\n if config['SecurityGroupIds']:\n config.update({\"VpcId\": \"vpc-123abc\"})\n return config\n\n def __repr__(self):\n return json.dumps(self.get_configuration())\n\n def get_configuration(self):\n config = {\n \"CodeSha256\": self.code_sha_256,\n \"CodeSize\": self.code_size,\n \"Description\": self.description,\n \"FunctionArn\": self.function_arn,\n \"FunctionName\": self.function_name,\n \"Handler\": self.handler,\n \"LastModified\": self.last_modified,\n \"MemorySize\": self.memory_size,\n \"Role\": self.role,\n \"Runtime\": self.run_time,\n \"Timeout\": self.timeout,\n \"Version\": str(self.version),\n \"VpcConfig\": self.vpc_config,\n }\n\n if self.environment_vars:\n config['Environment'] = {\n 'Variables': self.environment_vars\n }\n\n return config\n\n def get_code(self):\n return {\n \"Code\": {\n \"Location\": \"s3://awslambda-{0}-tasks.s3-{0}.amazonaws.com/{1}\".format(self.region, self.code['S3Key']),\n \"RepositoryType\": \"S3\"\n },\n \"Configuration\": self.get_configuration(),\n }\n\n @staticmethod\n def convert(s):\n try:\n return str(s, encoding='utf-8')\n except Exception:\n return s\n\n @staticmethod\n def is_json(test_str):\n try:\n response = json.loads(test_str)\n except Exception:\n response = test_str\n return response\n\n def _invoke_lambda(self, code, event=None, context=None):\n # TODO: context not yet implemented\n if event is None:\n event = dict()\n if context is None:\n context = {}\n\n try:\n # TODO: I believe we can keep the container running and feed events as needed\n # also need to hook it up to the other services so it can make kws/s3 etc calls\n # Should get invoke_id /RequestId from invovation\n env_vars = {\n \"AWS_LAMBDA_FUNCTION_TIMEOUT\": self.timeout,\n \"AWS_LAMBDA_FUNCTION_NAME\": self.function_name,\n \"AWS_LAMBDA_FUNCTION_MEMORY_SIZE\": self.memory_size,\n \"AWS_LAMBDA_FUNCTION_VERSION\": self.version,\n \"AWS_REGION\": self.region,\n }\n\n env_vars.update(self.environment_vars)\n\n container = output = exit_code = None\n with _DockerDataVolumeContext(self) as data_vol:\n try:\n run_kwargs = dict(links={'motoserver': 'motoserver'}) if settings.TEST_SERVER_MODE else {}\n container = self.docker_client.containers.run(\n \"lambci/lambda:{}\".format(self.run_time),\n [self.handler, json.dumps(event)], remove=False,\n mem_limit=\"{}m\".format(self.memory_size),\n volumes=[\"{}:/var/task\".format(data_vol.name)], environment=env_vars, detach=True, **run_kwargs)\n finally:\n if container:\n try:\n exit_code = container.wait(timeout=300)['StatusCode']\n except requests.exceptions.ReadTimeout:\n exit_code = -1\n container.stop()\n container.kill()\n output = container.logs(stdout=False, stderr=True)\n output += container.logs(stdout=True, stderr=False)\n container.remove()\n\n output = output.decode('utf-8')\n\n # Send output to \"logs\" backend\n invoke_id = uuid.uuid4().hex\n log_stream_name = 
\"{date.year}/{date.month:02d}/{date.day:02d}/[{version}]{invoke_id}\".format(\n date=datetime.datetime.utcnow(), version=self.version, invoke_id=invoke_id\n )\n\n self.logs_backend.create_log_stream(self.logs_group_name, log_stream_name)\n\n log_events = [{'timestamp': unix_time_millis(), \"message\": line}\n for line in output.splitlines()]\n self.logs_backend.put_log_events(self.logs_group_name, log_stream_name, log_events, None)\n\n if exit_code != 0:\n raise Exception(\n 'lambda invoke failed output: {}'.format(output))\n\n # strip out RequestId lines\n output = os.linesep.join([line for line in self.convert(output).splitlines() if not _stderr_regex.match(line)])\n return output, False\n except BaseException as e:\n traceback.print_exc()\n return \"error running lambda: {}\".format(e), True\n\n def invoke(self, body, request_headers, response_headers):\n payload = dict()\n\n if body:\n body = json.loads(body)\n\n # Get the invocation type:\n res, errored = self._invoke_lambda(code=self.code, event=body)\n if request_headers.get(\"x-amz-invocation-type\") == \"RequestResponse\":\n encoded = base64.b64encode(res.encode('utf-8'))\n response_headers[\"x-amz-log-result\"] = encoded.decode('utf-8')\n payload['result'] = response_headers[\"x-amz-log-result\"]\n result = res.encode('utf-8')\n else:\n result = json.dumps(payload)\n if errored:\n response_headers['x-amz-function-error'] = \"Handled\"\n\n return result\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n\n # required\n spec = {\n 'Code': properties['Code'],\n 'FunctionName': resource_name,\n 'Handler': properties['Handler'],\n 'Role': properties['Role'],\n 'Runtime': properties['Runtime'],\n }\n optional_properties = 'Description MemorySize Publish Timeout VpcConfig'.split()\n # NOTE: Not doing `properties.get(k, DEFAULT)` to avoid duplicating the\n # default logic\n for prop in optional_properties:\n if prop in properties:\n spec[prop] = properties[prop]\n\n # when ZipFile is present in CloudFormation, per the official docs,\n # the code it's a plaintext code snippet up to 4096 bytes.\n # this snippet converts this plaintext code to a proper base64-encoded ZIP file.\n if 'ZipFile' in properties['Code']:\n spec['Code']['ZipFile'] = base64.b64encode(\n cls._create_zipfile_from_plaintext_code(\n spec['Code']['ZipFile']))\n\n backend = lambda_backends[region_name]\n fn = backend.create_function(spec)\n return fn\n\n def get_cfn_attribute(self, attribute_name):\n from moto.cloudformation.exceptions import \\\n UnformattedGetAttTemplateException\n if attribute_name == 'Arn':\n return make_function_arn(self.region, ACCOUNT_ID, self.function_name)\n raise UnformattedGetAttTemplateException()\n\n @staticmethod\n def _create_zipfile_from_plaintext_code(code):\n zip_output = io.BytesIO()\n zip_file = zipfile.ZipFile(zip_output, 'w', zipfile.ZIP_DEFLATED)\n zip_file.writestr('lambda_function.zip', code)\n zip_file.close()\n zip_output.seek(0)\n return zip_output.read()\n\n\nclass EventSourceMapping(BaseModel):\n def __init__(self, spec):\n # required\n self.function_name = spec['FunctionName']\n self.event_source_arn = spec['EventSourceArn']\n self.starting_position = spec['StartingPosition']\n\n # optional\n self.batch_size = spec.get('BatchSize', 100)\n self.enabled = spec.get('Enabled', True)\n self.starting_position_timestamp = spec.get('StartingPositionTimestamp',\n None)\n\n @classmethod\n def 
create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n spec = {\n 'FunctionName': properties['FunctionName'],\n 'EventSourceArn': properties['EventSourceArn'],\n 'StartingPosition': properties['StartingPosition']\n }\n optional_properties = 'BatchSize Enabled StartingPositionTimestamp'.split()\n for prop in optional_properties:\n if prop in properties:\n spec[prop] = properties[prop]\n return EventSourceMapping(spec)\n\n\nclass LambdaVersion(BaseModel):\n def __init__(self, spec):\n self.version = spec['Version']\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n spec = {\n 'Version': properties.get('Version')\n }\n return LambdaVersion(spec)\n\n\nclass LambdaStorage(object):\n def __init__(self):\n # Format 'func_name' {'alias': {}, 'versions': []}\n self._functions = {}\n self._arns = weakref.WeakValueDictionary()\n\n def _get_latest(self, name):\n return self._functions[name]['latest']\n\n def _get_version(self, name, version):\n index = version - 1\n\n try:\n return self._functions[name]['versions'][index]\n except IndexError:\n return None\n\n def _get_alias(self, name, alias):\n return self._functions[name]['alias'].get(alias, None)\n\n def get_function(self, name, qualifier=None):\n if name not in self._functions:\n return None\n\n if qualifier is None:\n return self._get_latest(name)\n\n try:\n return self._get_version(name, int(qualifier))\n except ValueError:\n return self._functions[name]['latest']\n\n def get_arn(self, arn):\n return self._arns.get(arn, None)\n\n def put_function(self, fn):\n \"\"\"\n :param fn: Function\n :type fn: LambdaFunction\n \"\"\"\n if fn.function_name in self._functions:\n self._functions[fn.function_name]['latest'] = fn\n else:\n self._functions[fn.function_name] = {\n 'latest': fn,\n 'versions': [],\n 'alias': weakref.WeakValueDictionary()\n }\n\n self._arns[fn.function_arn] = fn\n\n def publish_function(self, name):\n if name not in self._functions:\n return None\n if not self._functions[name]['latest']:\n return None\n\n new_version = len(self._functions[name]['versions']) + 1\n fn = copy.copy(self._functions[name]['latest'])\n fn.set_version(new_version)\n\n self._functions[name]['versions'].append(fn)\n return fn\n\n def del_function(self, name, qualifier=None):\n if name in self._functions:\n if not qualifier:\n # Something is still reffing this so delete all arns\n latest = self._functions[name]['latest'].function_arn\n del self._arns[latest]\n\n for fn in self._functions[name]['versions']:\n del self._arns[fn.function_arn]\n\n del self._functions[name]\n\n return True\n\n elif qualifier == '$LATEST':\n self._functions[name]['latest'] = None\n\n # If theres no functions left\n if not self._functions[name]['versions'] and not self._functions[name]['latest']:\n del self._functions[name]\n\n return True\n\n else:\n fn = self.get_function(name, qualifier)\n if fn:\n self._functions[name]['versions'].remove(fn)\n\n # If theres no functions left\n if not self._functions[name]['versions'] and not self._functions[name]['latest']:\n del self._functions[name]\n\n return True\n\n return False\n\n def all(self):\n result = []\n\n for function_group in self._functions.values():\n if function_group['latest'] is not None:\n result.append(function_group['latest'])\n\n result.extend(function_group['versions'])\n\n return result\n\n\nclass 
LambdaBackend(BaseBackend):\n def __init__(self, region_name):\n self._lambdas = LambdaStorage()\n self.region_name = region_name\n\n def reset(self):\n region_name = self.region_name\n self.__dict__ = {}\n self.__init__(region_name)\n\n def create_function(self, spec):\n function_name = spec.get('FunctionName', None)\n if function_name is None:\n raise RESTError('InvalidParameterValueException', 'Missing FunctionName')\n\n fn = LambdaFunction(spec, self.region_name, version='$LATEST')\n\n self._lambdas.put_function(fn)\n\n return fn\n\n def publish_function(self, function_name):\n return self._lambdas.publish_function(function_name)\n\n def get_function(self, function_name, qualifier=None):\n return self._lambdas.get_function(function_name, qualifier)\n\n def get_function_by_arn(self, function_arn):\n return self._lambdas.get_arn(function_arn)\n\n def delete_function(self, function_name, qualifier=None):\n return self._lambdas.del_function(function_name, qualifier)\n\n def list_functions(self):\n return self._lambdas.all()\n\n def send_message(self, function_name, message, subject=None, qualifier=None):\n event = {\n \"Records\": [\n {\n \"EventVersion\": \"1.0\",\n \"EventSubscriptionArn\": \"arn:aws:sns:EXAMPLE\",\n \"EventSource\": \"aws:sns\",\n \"Sns\": {\n \"SignatureVersion\": \"1\",\n \"Timestamp\": \"1970-01-01T00:00:00.000Z\",\n \"Signature\": \"EXAMPLE\",\n \"SigningCertUrl\": \"EXAMPLE\",\n \"MessageId\": \"95df01b4-ee98-5cb9-9903-4c221d41eb5e\",\n \"Message\": message,\n \"MessageAttributes\": {\n \"Test\": {\n \"Type\": \"String\",\n \"Value\": \"TestString\"\n },\n \"TestBinary\": {\n \"Type\": \"Binary\",\n \"Value\": \"TestBinary\"\n }\n },\n \"Type\": \"Notification\",\n \"UnsubscribeUrl\": \"EXAMPLE\",\n \"TopicArn\": \"arn:aws:sns:EXAMPLE\",\n \"Subject\": subject or \"TestInvoke\"\n }\n }\n ]\n\n }\n func = self._lambdas.get_function(function_name, qualifier)\n func.invoke(json.dumps(event), {}, {})\n\n def list_tags(self, resource):\n return self.get_function_by_arn(resource).tags\n\n def tag_resource(self, resource, tags):\n fn = self.get_function_by_arn(resource)\n if not fn:\n return False\n\n fn.tags.update(tags)\n return True\n\n def untag_resource(self, resource, tagKeys):\n fn = self.get_function_by_arn(resource)\n if fn:\n for key in tagKeys:\n try:\n del fn.tags[key]\n except KeyError:\n pass\n # Don't care\n return True\n return False\n\n def add_policy(self, function_name, policy):\n self.get_function(function_name).policy = policy\n\n\ndef do_validate_s3():\n return os.environ.get('VALIDATE_LAMBDA_S3', '') in ['', '1', 'true']\n\n\n# Handle us forgotten regions, unless Lambda truly only runs out of US and\nlambda_backends = {_region.name: LambdaBackend(_region.name)\n for _region in boto.awslambda.regions()}\n\nlambda_backends['ap-southeast-2'] = LambdaBackend('ap-southeast-2')\n",
"path": "moto/awslambda/models.py"
}
] | [
{
"content": "from __future__ import unicode_literals\n\nimport base64\nfrom collections import defaultdict\nimport copy\nimport datetime\nimport docker.errors\nimport hashlib\nimport io\nimport logging\nimport os\nimport json\nimport re\nimport zipfile\nimport uuid\nimport functools\nimport tarfile\nimport calendar\nimport threading\nimport traceback\nimport weakref\nimport requests.adapters\n\nimport boto.awslambda\nfrom moto.core import BaseBackend, BaseModel\nfrom moto.core.exceptions import RESTError\nfrom moto.core.utils import unix_time_millis\nfrom moto.s3.models import s3_backend\nfrom moto.logs.models import logs_backends\nfrom moto.s3.exceptions import MissingBucket, MissingKey\nfrom moto import settings\nfrom .utils import make_function_arn\n\nlogger = logging.getLogger(__name__)\n\nACCOUNT_ID = '123456789012'\n\n\ntry:\n from tempfile import TemporaryDirectory\nexcept ImportError:\n from backports.tempfile import TemporaryDirectory\n\n\n_stderr_regex = re.compile(r'START|END|REPORT RequestId: .*')\n_orig_adapter_send = requests.adapters.HTTPAdapter.send\n\n\ndef zip2tar(zip_bytes):\n with TemporaryDirectory() as td:\n tarname = os.path.join(td, 'data.tar')\n timeshift = int((datetime.datetime.now() -\n datetime.datetime.utcnow()).total_seconds())\n with zipfile.ZipFile(io.BytesIO(zip_bytes), 'r') as zipf, \\\n tarfile.TarFile(tarname, 'w') as tarf:\n for zipinfo in zipf.infolist():\n if zipinfo.filename[-1] == '/': # is_dir() is py3.6+\n continue\n\n tarinfo = tarfile.TarInfo(name=zipinfo.filename)\n tarinfo.size = zipinfo.file_size\n tarinfo.mtime = calendar.timegm(zipinfo.date_time) - timeshift\n infile = zipf.open(zipinfo.filename)\n tarf.addfile(tarinfo, infile)\n\n with open(tarname, 'rb') as f:\n tar_data = f.read()\n return tar_data\n\n\nclass _VolumeRefCount:\n __slots__ = \"refcount\", \"volume\"\n\n def __init__(self, refcount, volume):\n self.refcount = refcount\n self.volume = volume\n\n\nclass _DockerDataVolumeContext:\n _data_vol_map = defaultdict(lambda: _VolumeRefCount(0, None)) # {sha256: _VolumeRefCount}\n _lock = threading.Lock()\n\n def __init__(self, lambda_func):\n self._lambda_func = lambda_func\n self._vol_ref = None\n\n @property\n def name(self):\n return self._vol_ref.volume.name\n\n def __enter__(self):\n # See if volume is already known\n with self.__class__._lock:\n self._vol_ref = self.__class__._data_vol_map[self._lambda_func.code_sha_256]\n self._vol_ref.refcount += 1\n if self._vol_ref.refcount > 1:\n return self\n\n # See if the volume already exists\n for vol in self._lambda_func.docker_client.volumes.list():\n if vol.name == self._lambda_func.code_sha_256:\n self._vol_ref.volume = vol\n return self\n\n # It doesn't exist so we need to create it\n self._vol_ref.volume = self._lambda_func.docker_client.volumes.create(self._lambda_func.code_sha_256)\n container = self._lambda_func.docker_client.containers.run('alpine', 'sleep 100', volumes={self.name: {'bind': '/tmp/data', 'mode': 'rw'}}, detach=True)\n try:\n tar_bytes = zip2tar(self._lambda_func.code_bytes)\n container.put_archive('/tmp/data', tar_bytes)\n finally:\n container.remove(force=True)\n\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n with self.__class__._lock:\n self._vol_ref.refcount -= 1\n if self._vol_ref.refcount == 0:\n try:\n self._vol_ref.volume.remove()\n except docker.errors.APIError as e:\n if e.status_code != 409:\n raise\n\n raise # multiple processes trying to use same volume?\n\n\nclass LambdaFunction(BaseModel):\n def __init__(self, spec, region, 
validate_s3=True, version=1):\n # required\n self.region = region\n self.code = spec['Code']\n self.function_name = spec['FunctionName']\n self.handler = spec['Handler']\n self.role = spec['Role']\n self.run_time = spec['Runtime']\n self.logs_backend = logs_backends[self.region]\n self.environment_vars = spec.get('Environment', {}).get('Variables', {})\n self.docker_client = docker.from_env()\n self.policy = \"\"\n\n # Unfortunately mocking replaces this method w/o fallback enabled, so we\n # need to replace it if we detect it's been mocked\n if requests.adapters.HTTPAdapter.send != _orig_adapter_send:\n _orig_get_adapter = self.docker_client.api.get_adapter\n\n def replace_adapter_send(*args, **kwargs):\n adapter = _orig_get_adapter(*args, **kwargs)\n\n if isinstance(adapter, requests.adapters.HTTPAdapter):\n adapter.send = functools.partial(_orig_adapter_send, adapter)\n return adapter\n self.docker_client.api.get_adapter = replace_adapter_send\n\n # optional\n self.description = spec.get('Description', '')\n self.memory_size = spec.get('MemorySize', 128)\n self.publish = spec.get('Publish', False) # this is ignored currently\n self.timeout = spec.get('Timeout', 3)\n\n self.logs_group_name = '/aws/lambda/{}'.format(self.function_name)\n self.logs_backend.ensure_log_group(self.logs_group_name, [])\n\n # this isn't finished yet. it needs to find out the VpcId value\n self._vpc_config = spec.get(\n 'VpcConfig', {'SubnetIds': [], 'SecurityGroupIds': []})\n\n # auto-generated\n self.version = version\n self.last_modified = datetime.datetime.utcnow().strftime(\n '%Y-%m-%d %H:%M:%S')\n\n if 'ZipFile' in self.code:\n # more hackery to handle unicode/bytes/str in python3 and python2 -\n # argh!\n try:\n to_unzip_code = base64.b64decode(\n bytes(self.code['ZipFile'], 'utf-8'))\n except Exception:\n to_unzip_code = base64.b64decode(self.code['ZipFile'])\n\n self.code_bytes = to_unzip_code\n self.code_size = len(to_unzip_code)\n self.code_sha_256 = hashlib.sha256(to_unzip_code).hexdigest()\n\n # TODO: we should be putting this in a lambda bucket\n self.code['UUID'] = str(uuid.uuid4())\n self.code['S3Key'] = '{}-{}'.format(self.function_name, self.code['UUID'])\n else:\n # validate s3 bucket and key\n key = None\n try:\n # FIXME: does not validate bucket region\n key = s3_backend.get_key(\n self.code['S3Bucket'], self.code['S3Key'])\n except MissingBucket:\n if do_validate_s3():\n raise ValueError(\n \"InvalidParameterValueException\",\n \"Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist\")\n except MissingKey:\n if do_validate_s3():\n raise ValueError(\n \"InvalidParameterValueException\",\n \"Error occurred while GetObject. S3 Error Code: NoSuchKey. 
S3 Error Message: The specified key does not exist.\")\n if key:\n self.code_bytes = key.value\n self.code_size = key.size\n self.code_sha_256 = hashlib.sha256(key.value).hexdigest()\n\n self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)\n\n self.tags = dict()\n\n def set_version(self, version):\n self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)\n self.version = version\n self.last_modified = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')\n\n @property\n def vpc_config(self):\n config = self._vpc_config.copy()\n if config['SecurityGroupIds']:\n config.update({\"VpcId\": \"vpc-123abc\"})\n return config\n\n def __repr__(self):\n return json.dumps(self.get_configuration())\n\n def get_configuration(self):\n config = {\n \"CodeSha256\": self.code_sha_256,\n \"CodeSize\": self.code_size,\n \"Description\": self.description,\n \"FunctionArn\": self.function_arn,\n \"FunctionName\": self.function_name,\n \"Handler\": self.handler,\n \"LastModified\": self.last_modified,\n \"MemorySize\": self.memory_size,\n \"Role\": self.role,\n \"Runtime\": self.run_time,\n \"Timeout\": self.timeout,\n \"Version\": str(self.version),\n \"VpcConfig\": self.vpc_config,\n }\n\n if self.environment_vars:\n config['Environment'] = {\n 'Variables': self.environment_vars\n }\n\n return config\n\n def get_code(self):\n return {\n \"Code\": {\n \"Location\": \"s3://awslambda-{0}-tasks.s3-{0}.amazonaws.com/{1}\".format(self.region, self.code['S3Key']),\n \"RepositoryType\": \"S3\"\n },\n \"Configuration\": self.get_configuration(),\n }\n\n @staticmethod\n def convert(s):\n try:\n return str(s, encoding='utf-8')\n except Exception:\n return s\n\n @staticmethod\n def is_json(test_str):\n try:\n response = json.loads(test_str)\n except Exception:\n response = test_str\n return response\n\n def _invoke_lambda(self, code, event=None, context=None):\n # TODO: context not yet implemented\n if event is None:\n event = dict()\n if context is None:\n context = {}\n\n try:\n # TODO: I believe we can keep the container running and feed events as needed\n # also need to hook it up to the other services so it can make kws/s3 etc calls\n # Should get invoke_id /RequestId from invovation\n env_vars = {\n \"AWS_LAMBDA_FUNCTION_TIMEOUT\": self.timeout,\n \"AWS_LAMBDA_FUNCTION_NAME\": self.function_name,\n \"AWS_LAMBDA_FUNCTION_MEMORY_SIZE\": self.memory_size,\n \"AWS_LAMBDA_FUNCTION_VERSION\": self.version,\n \"AWS_REGION\": self.region,\n }\n\n env_vars.update(self.environment_vars)\n\n container = output = exit_code = None\n with _DockerDataVolumeContext(self) as data_vol:\n try:\n run_kwargs = dict(links={'motoserver': 'motoserver'}) if settings.TEST_SERVER_MODE else {}\n container = self.docker_client.containers.run(\n \"lambci/lambda:{}\".format(self.run_time),\n [self.handler, json.dumps(event)], remove=False,\n mem_limit=\"{}m\".format(self.memory_size),\n volumes=[\"{}:/var/task\".format(data_vol.name)], environment=env_vars, detach=True, **run_kwargs)\n finally:\n if container:\n try:\n exit_code = container.wait(timeout=300)['StatusCode']\n except requests.exceptions.ReadTimeout:\n exit_code = -1\n container.stop()\n container.kill()\n output = container.logs(stdout=False, stderr=True)\n output += container.logs(stdout=True, stderr=False)\n container.remove()\n\n output = output.decode('utf-8')\n\n # Send output to \"logs\" backend\n invoke_id = uuid.uuid4().hex\n log_stream_name = 
\"{date.year}/{date.month:02d}/{date.day:02d}/[{version}]{invoke_id}\".format(\n date=datetime.datetime.utcnow(), version=self.version, invoke_id=invoke_id\n )\n\n self.logs_backend.create_log_stream(self.logs_group_name, log_stream_name)\n\n log_events = [{'timestamp': unix_time_millis(), \"message\": line}\n for line in output.splitlines()]\n self.logs_backend.put_log_events(self.logs_group_name, log_stream_name, log_events, None)\n\n if exit_code != 0:\n raise Exception(\n 'lambda invoke failed output: {}'.format(output))\n\n # strip out RequestId lines\n output = os.linesep.join([line for line in self.convert(output).splitlines() if not _stderr_regex.match(line)])\n return output, False\n except BaseException as e:\n traceback.print_exc()\n return \"error running lambda: {}\".format(e), True\n\n def invoke(self, body, request_headers, response_headers):\n payload = dict()\n\n if body:\n body = json.loads(body)\n\n # Get the invocation type:\n res, errored = self._invoke_lambda(code=self.code, event=body)\n if request_headers.get(\"x-amz-invocation-type\") == \"RequestResponse\":\n encoded = base64.b64encode(res.encode('utf-8'))\n response_headers[\"x-amz-log-result\"] = encoded.decode('utf-8')\n payload['result'] = response_headers[\"x-amz-log-result\"]\n result = res.encode('utf-8')\n else:\n result = json.dumps(payload)\n if errored:\n response_headers['x-amz-function-error'] = \"Handled\"\n\n return result\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n\n # required\n spec = {\n 'Code': properties['Code'],\n 'FunctionName': resource_name,\n 'Handler': properties['Handler'],\n 'Role': properties['Role'],\n 'Runtime': properties['Runtime'],\n }\n optional_properties = 'Description MemorySize Publish Timeout VpcConfig'.split()\n # NOTE: Not doing `properties.get(k, DEFAULT)` to avoid duplicating the\n # default logic\n for prop in optional_properties:\n if prop in properties:\n spec[prop] = properties[prop]\n\n # when ZipFile is present in CloudFormation, per the official docs,\n # the code it's a plaintext code snippet up to 4096 bytes.\n # this snippet converts this plaintext code to a proper base64-encoded ZIP file.\n if 'ZipFile' in properties['Code']:\n spec['Code']['ZipFile'] = base64.b64encode(\n cls._create_zipfile_from_plaintext_code(\n spec['Code']['ZipFile']))\n\n backend = lambda_backends[region_name]\n fn = backend.create_function(spec)\n return fn\n\n def get_cfn_attribute(self, attribute_name):\n from moto.cloudformation.exceptions import \\\n UnformattedGetAttTemplateException\n if attribute_name == 'Arn':\n return make_function_arn(self.region, ACCOUNT_ID, self.function_name)\n raise UnformattedGetAttTemplateException()\n\n @staticmethod\n def _create_zipfile_from_plaintext_code(code):\n zip_output = io.BytesIO()\n zip_file = zipfile.ZipFile(zip_output, 'w', zipfile.ZIP_DEFLATED)\n zip_file.writestr('lambda_function.zip', code)\n zip_file.close()\n zip_output.seek(0)\n return zip_output.read()\n\n\nclass EventSourceMapping(BaseModel):\n def __init__(self, spec):\n # required\n self.function_name = spec['FunctionName']\n self.event_source_arn = spec['EventSourceArn']\n self.starting_position = spec['StartingPosition']\n\n # optional\n self.batch_size = spec.get('BatchSize', 100)\n self.enabled = spec.get('Enabled', True)\n self.starting_position_timestamp = spec.get('StartingPositionTimestamp',\n None)\n\n @classmethod\n def 
create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n spec = {\n 'FunctionName': properties['FunctionName'],\n 'EventSourceArn': properties['EventSourceArn'],\n 'StartingPosition': properties['StartingPosition']\n }\n optional_properties = 'BatchSize Enabled StartingPositionTimestamp'.split()\n for prop in optional_properties:\n if prop in properties:\n spec[prop] = properties[prop]\n return EventSourceMapping(spec)\n\n\nclass LambdaVersion(BaseModel):\n def __init__(self, spec):\n self.version = spec['Version']\n\n @classmethod\n def create_from_cloudformation_json(cls, resource_name, cloudformation_json,\n region_name):\n properties = cloudformation_json['Properties']\n spec = {\n 'Version': properties.get('Version')\n }\n return LambdaVersion(spec)\n\n\nclass LambdaStorage(object):\n def __init__(self):\n # Format 'func_name' {'alias': {}, 'versions': []}\n self._functions = {}\n self._arns = weakref.WeakValueDictionary()\n\n def _get_latest(self, name):\n return self._functions[name]['latest']\n\n def _get_version(self, name, version):\n index = version - 1\n\n try:\n return self._functions[name]['versions'][index]\n except IndexError:\n return None\n\n def _get_alias(self, name, alias):\n return self._functions[name]['alias'].get(alias, None)\n\n def get_function(self, name, qualifier=None):\n if name not in self._functions:\n return None\n\n if qualifier is None:\n return self._get_latest(name)\n\n try:\n return self._get_version(name, int(qualifier))\n except ValueError:\n return self._functions[name]['latest']\n\n def get_arn(self, arn):\n return self._arns.get(arn, None)\n\n def put_function(self, fn):\n \"\"\"\n :param fn: Function\n :type fn: LambdaFunction\n \"\"\"\n if fn.function_name in self._functions:\n self._functions[fn.function_name]['latest'] = fn\n else:\n self._functions[fn.function_name] = {\n 'latest': fn,\n 'versions': [],\n 'alias': weakref.WeakValueDictionary()\n }\n\n self._arns[fn.function_arn] = fn\n\n def publish_function(self, name):\n if name not in self._functions:\n return None\n if not self._functions[name]['latest']:\n return None\n\n new_version = len(self._functions[name]['versions']) + 1\n fn = copy.copy(self._functions[name]['latest'])\n fn.set_version(new_version)\n\n self._functions[name]['versions'].append(fn)\n return fn\n\n def del_function(self, name, qualifier=None):\n if name in self._functions:\n if not qualifier:\n # Something is still reffing this so delete all arns\n latest = self._functions[name]['latest'].function_arn\n del self._arns[latest]\n\n for fn in self._functions[name]['versions']:\n del self._arns[fn.function_arn]\n\n del self._functions[name]\n\n return True\n\n elif qualifier == '$LATEST':\n self._functions[name]['latest'] = None\n\n # If theres no functions left\n if not self._functions[name]['versions'] and not self._functions[name]['latest']:\n del self._functions[name]\n\n return True\n\n else:\n fn = self.get_function(name, qualifier)\n if fn:\n self._functions[name]['versions'].remove(fn)\n\n # If theres no functions left\n if not self._functions[name]['versions'] and not self._functions[name]['latest']:\n del self._functions[name]\n\n return True\n\n return False\n\n def all(self):\n result = []\n\n for function_group in self._functions.values():\n if function_group['latest'] is not None:\n result.append(function_group['latest'])\n\n result.extend(function_group['versions'])\n\n return result\n\n\nclass 
LambdaBackend(BaseBackend):\n def __init__(self, region_name):\n self._lambdas = LambdaStorage()\n self.region_name = region_name\n\n def reset(self):\n region_name = self.region_name\n self.__dict__ = {}\n self.__init__(region_name)\n\n def create_function(self, spec):\n function_name = spec.get('FunctionName', None)\n if function_name is None:\n raise RESTError('InvalidParameterValueException', 'Missing FunctionName')\n\n fn = LambdaFunction(spec, self.region_name, version='$LATEST')\n\n self._lambdas.put_function(fn)\n\n return fn\n\n def publish_function(self, function_name):\n return self._lambdas.publish_function(function_name)\n\n def get_function(self, function_name, qualifier=None):\n return self._lambdas.get_function(function_name, qualifier)\n\n def get_function_by_arn(self, function_arn):\n return self._lambdas.get_arn(function_arn)\n\n def delete_function(self, function_name, qualifier=None):\n return self._lambdas.del_function(function_name, qualifier)\n\n def list_functions(self):\n return self._lambdas.all()\n\n def send_message(self, function_name, message, subject=None, qualifier=None):\n event = {\n \"Records\": [\n {\n \"EventVersion\": \"1.0\",\n \"EventSubscriptionArn\": \"arn:aws:sns:EXAMPLE\",\n \"EventSource\": \"aws:sns\",\n \"Sns\": {\n \"SignatureVersion\": \"1\",\n \"Timestamp\": \"1970-01-01T00:00:00.000Z\",\n \"Signature\": \"EXAMPLE\",\n \"SigningCertUrl\": \"EXAMPLE\",\n \"MessageId\": \"95df01b4-ee98-5cb9-9903-4c221d41eb5e\",\n \"Message\": message,\n \"MessageAttributes\": {\n \"Test\": {\n \"Type\": \"String\",\n \"Value\": \"TestString\"\n },\n \"TestBinary\": {\n \"Type\": \"Binary\",\n \"Value\": \"TestBinary\"\n }\n },\n \"Type\": \"Notification\",\n \"UnsubscribeUrl\": \"EXAMPLE\",\n \"TopicArn\": \"arn:aws:sns:EXAMPLE\",\n \"Subject\": subject or \"TestInvoke\"\n }\n }\n ]\n\n }\n func = self._lambdas.get_function(function_name, qualifier)\n func.invoke(json.dumps(event), {}, {})\n\n def list_tags(self, resource):\n return self.get_function_by_arn(resource).tags\n\n def tag_resource(self, resource, tags):\n fn = self.get_function_by_arn(resource)\n if not fn:\n return False\n\n fn.tags.update(tags)\n return True\n\n def untag_resource(self, resource, tagKeys):\n fn = self.get_function_by_arn(resource)\n if fn:\n for key in tagKeys:\n try:\n del fn.tags[key]\n except KeyError:\n pass\n # Don't care\n return True\n return False\n\n def add_policy(self, function_name, policy):\n self.get_function(function_name).policy = policy\n\n\ndef do_validate_s3():\n return os.environ.get('VALIDATE_LAMBDA_S3', '') in ['', '1', 'true']\n\n\n# Handle us forgotten regions, unless Lambda truly only runs out of US and\nlambda_backends = {_region.name: LambdaBackend(_region.name)\n for _region in boto.awslambda.regions()}\n\nlambda_backends['ap-southeast-2'] = LambdaBackend('ap-southeast-2')\nlambda_backends['us-gov-west-1'] = LambdaBackend('us-gov-west-1')\n",
"path": "moto/awslambda/models.py"
}
] | diff --git a/moto/awslambda/models.py b/moto/awslambda/models.py
index 80b4ffba3e71..d49df81c753a 100644
--- a/moto/awslambda/models.py
+++ b/moto/awslambda/models.py
@@ -675,3 +675,4 @@ def do_validate_s3():
for _region in boto.awslambda.regions()}
lambda_backends['ap-southeast-2'] = LambdaBackend('ap-southeast-2')
+lambda_backends['us-gov-west-1'] = LambdaBackend('us-gov-west-1')
|
ESMCI__cime-993 | scripts_regression_tests.py O_TestTestScheduler
This test fails with error SystemExit: ERROR: Leftover threads?
when run as part of the full scripts_regression_tests.py
but passes when run using ctest or when run as an individual test.
| [
{
"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"utils\",\"python\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\", \"unit_test\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files --full-name %s\" % cimeroot, verbose=False).splitlines()\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n 
files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n return dict(results)\n",
"path": "utils/python/CIME/code_checker.py"
}
] | [
{
"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, is_python_executable\n\nfrom multiprocessing.dummy import Pool as ThreadPool\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import,fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cimeroot = get_cime_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"utils\",\"python\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\", \"unit_test\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip in filepath:\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files --full-name %s\" % cimeroot, verbose=False).splitlines()\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n repo_files = run_cmd_no_fail('git ls-files --full-name %s' % get_cime_root(), verbose=False).splitlines()\n files_to_check = []\n if files:\n for filearg in files:\n if os.path.exists(filearg):\n 
files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n",
"path": "utils/python/CIME/code_checker.py"
}
] | diff --git a/utils/python/CIME/code_checker.py b/utils/python/CIME/code_checker.py
index e98e3b21315..e1df4262e98 100644
--- a/utils/python/CIME/code_checker.py
+++ b/utils/python/CIME/code_checker.py
@@ -106,4 +106,6 @@ def check_code(files, num_procs=10, interactive=False):
pool = ThreadPool(num_procs)
results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
+ pool.close()
+ pool.join()
return dict(results)
|
jschneier__django-storages-589 | Is it correct in the `get_available_overwrite_name` function?
Hi,
Please tell me what the following code.
When `name`'s length equals `max_length` in the `get_available_overwrite_name`, `get_available_overwrite_name` returns overwritten `name`.
The `name` must be less than or equal to `max_length` isn't it?
https://github.com/jschneier/django-storages/blob/master/storages/utils.py#L105
Regards,
Chihiro
| [
{
"content": "import os\nimport posixpath\n\nfrom django.conf import settings\nfrom django.core.exceptions import (\n ImproperlyConfigured, SuspiciousFileOperation,\n)\nfrom django.utils.encoding import force_text\n\n\ndef setting(name, default=None):\n \"\"\"\n Helper function to get a Django setting by name. If setting doesn't exists\n it will return a default.\n\n :param name: Name of setting\n :type name: str\n :param default: Value if setting is unfound\n :returns: Setting's value\n \"\"\"\n return getattr(settings, name, default)\n\n\ndef clean_name(name):\n \"\"\"\n Cleans the name so that Windows style paths work\n \"\"\"\n # Normalize Windows style paths\n clean_name = posixpath.normpath(name).replace('\\\\', '/')\n\n # os.path.normpath() can strip trailing slashes so we implement\n # a workaround here.\n if name.endswith('/') and not clean_name.endswith('/'):\n # Add a trailing slash as it was stripped.\n clean_name = clean_name + '/'\n\n # Given an empty string, os.path.normpath() will return ., which we don't want\n if clean_name == '.':\n clean_name = ''\n\n return clean_name\n\n\ndef safe_join(base, *paths):\n \"\"\"\n A version of django.utils._os.safe_join for S3 paths.\n\n Joins one or more path components to the base path component\n intelligently. Returns a normalized version of the final path.\n\n The final path must be located inside of the base path component\n (otherwise a ValueError is raised).\n\n Paths outside the base path indicate a possible security\n sensitive operation.\n \"\"\"\n base_path = force_text(base)\n base_path = base_path.rstrip('/')\n paths = [force_text(p) for p in paths]\n\n final_path = base_path + '/'\n for path in paths:\n _final_path = posixpath.normpath(posixpath.join(final_path, path))\n # posixpath.normpath() strips the trailing /. Add it back.\n if path.endswith('/') or _final_path + '/' == final_path:\n _final_path += '/'\n final_path = _final_path\n if final_path == base_path:\n final_path += '/'\n\n # Ensure final_path starts with base_path and that the next character after\n # the base path is /.\n base_path_len = len(base_path)\n if (not final_path.startswith(base_path) or final_path[base_path_len] != '/'):\n raise ValueError('the joined path is located outside of the base path'\n ' component')\n\n return final_path.lstrip('/')\n\n\ndef check_location(storage):\n if storage.location.startswith('/'):\n correct = storage.location.lstrip('/')\n raise ImproperlyConfigured(\n \"%s.location cannot begin with a leading slash. Found '%s'. Use '%s' instead.\" % (\n storage.__class__.__name__,\n storage.location,\n correct,\n )\n )\n\n\ndef lookup_env(names):\n \"\"\"\n Look up for names in environment. Returns the first element\n found.\n \"\"\"\n for name in names:\n value = os.environ.get(name)\n if value:\n return value\n\n\ndef get_available_overwrite_name(name, max_length):\n if max_length is None or len(name) < max_length:\n return name\n\n # Adapted from Django\n dir_name, file_name = os.path.split(name)\n file_root, file_ext = os.path.splitext(file_name)\n truncation = len(name) - max_length\n\n file_root = file_root[:-truncation]\n if not file_root:\n raise SuspiciousFileOperation(\n 'Storage tried to truncate away entire filename \"%s\". '\n 'Please make sure that the corresponding file field '\n 'allows sufficient \"max_length\".' % name\n )\n return os.path.join(dir_name, \"%s%s\" % (file_root, file_ext))\n",
"path": "storages/utils.py"
}
] | [
{
"content": "import os\nimport posixpath\n\nfrom django.conf import settings\nfrom django.core.exceptions import (\n ImproperlyConfigured, SuspiciousFileOperation,\n)\nfrom django.utils.encoding import force_text\n\n\ndef setting(name, default=None):\n \"\"\"\n Helper function to get a Django setting by name. If setting doesn't exists\n it will return a default.\n\n :param name: Name of setting\n :type name: str\n :param default: Value if setting is unfound\n :returns: Setting's value\n \"\"\"\n return getattr(settings, name, default)\n\n\ndef clean_name(name):\n \"\"\"\n Cleans the name so that Windows style paths work\n \"\"\"\n # Normalize Windows style paths\n clean_name = posixpath.normpath(name).replace('\\\\', '/')\n\n # os.path.normpath() can strip trailing slashes so we implement\n # a workaround here.\n if name.endswith('/') and not clean_name.endswith('/'):\n # Add a trailing slash as it was stripped.\n clean_name = clean_name + '/'\n\n # Given an empty string, os.path.normpath() will return ., which we don't want\n if clean_name == '.':\n clean_name = ''\n\n return clean_name\n\n\ndef safe_join(base, *paths):\n \"\"\"\n A version of django.utils._os.safe_join for S3 paths.\n\n Joins one or more path components to the base path component\n intelligently. Returns a normalized version of the final path.\n\n The final path must be located inside of the base path component\n (otherwise a ValueError is raised).\n\n Paths outside the base path indicate a possible security\n sensitive operation.\n \"\"\"\n base_path = force_text(base)\n base_path = base_path.rstrip('/')\n paths = [force_text(p) for p in paths]\n\n final_path = base_path + '/'\n for path in paths:\n _final_path = posixpath.normpath(posixpath.join(final_path, path))\n # posixpath.normpath() strips the trailing /. Add it back.\n if path.endswith('/') or _final_path + '/' == final_path:\n _final_path += '/'\n final_path = _final_path\n if final_path == base_path:\n final_path += '/'\n\n # Ensure final_path starts with base_path and that the next character after\n # the base path is /.\n base_path_len = len(base_path)\n if (not final_path.startswith(base_path) or final_path[base_path_len] != '/'):\n raise ValueError('the joined path is located outside of the base path'\n ' component')\n\n return final_path.lstrip('/')\n\n\ndef check_location(storage):\n if storage.location.startswith('/'):\n correct = storage.location.lstrip('/')\n raise ImproperlyConfigured(\n \"%s.location cannot begin with a leading slash. Found '%s'. Use '%s' instead.\" % (\n storage.__class__.__name__,\n storage.location,\n correct,\n )\n )\n\n\ndef lookup_env(names):\n \"\"\"\n Look up for names in environment. Returns the first element\n found.\n \"\"\"\n for name in names:\n value = os.environ.get(name)\n if value:\n return value\n\n\ndef get_available_overwrite_name(name, max_length):\n if max_length is None or len(name) <= max_length:\n return name\n\n # Adapted from Django\n dir_name, file_name = os.path.split(name)\n file_root, file_ext = os.path.splitext(file_name)\n truncation = len(name) - max_length\n\n file_root = file_root[:-truncation]\n if not file_root:\n raise SuspiciousFileOperation(\n 'Storage tried to truncate away entire filename \"%s\". '\n 'Please make sure that the corresponding file field '\n 'allows sufficient \"max_length\".' % name\n )\n return os.path.join(dir_name, \"%s%s\" % (file_root, file_ext))\n",
"path": "storages/utils.py"
}
] | diff --git a/CHANGELOG.rst b/CHANGELOG.rst
index 7d4b23bbb..a99adc176 100644
--- a/CHANGELOG.rst
+++ b/CHANGELOG.rst
@@ -1,6 +1,14 @@
django-storages CHANGELOG
=========================
+1.7.1 (2018-09-XX)
+******************
+
+- Fix off-by-1 error in ``get_available_name`` whenever ``file_overwrite`` or ``overwrite_files`` is ``True`` (`#588`_, `#589`_)
+
+.. _#588: https://github.com/jschneier/django-storages/issues/588
+.. _#589: https://github.com/jschneier/django-storages/pull/589
+
1.7 (2018-09-03)
****************
diff --git a/storages/utils.py b/storages/utils.py
index 5e3997352..14d47b067 100644
--- a/storages/utils.py
+++ b/storages/utils.py
@@ -102,7 +102,7 @@ def lookup_env(names):
def get_available_overwrite_name(name, max_length):
- if max_length is None or len(name) < max_length:
+ if max_length is None or len(name) <= max_length:
return name
# Adapted from Django
diff --git a/tests/test_gcloud.py b/tests/test_gcloud.py
index 06f0dd4ec..1d3a8f723 100644
--- a/tests/test_gcloud.py
+++ b/tests/test_gcloud.py
@@ -8,9 +8,7 @@
import mimetypes
from datetime import datetime, timedelta
-from django.core.exceptions import (
- ImproperlyConfigured, SuspiciousFileOperation,
-)
+from django.core.exceptions import ImproperlyConfigured
from django.core.files.base import ContentFile
from django.test import TestCase
from django.utils import timezone
@@ -376,14 +374,6 @@ def test_get_available_name_unicode(self):
filename = 'ủⓝï℅ⅆℇ.txt'
self.assertEqual(self.storage.get_available_name(filename), filename)
- def test_get_available_name_overwrite_maxlength(self):
- self.storage.file_overwrite = True
-
- self.assertEqual(self.storage.get_available_name('test/foo.txt', 11), 'test/fo.txt')
- self.assertEqual(self.storage.get_available_name('test_a/foobar.txt', None), 'test_a/foobar.txt')
- with self.assertRaises(SuspiciousFileOperation):
- self.storage.get_available_name('test_a/foobar.txt', 10)
-
def test_cache_control(self):
data = 'This is some test content.'
filename = 'cache_control_file.txt'
diff --git a/tests/test_utils.py b/tests/test_utils.py
index 2fcee3ac3..e411d232c 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -1,9 +1,11 @@
import datetime
from django.conf import settings
+from django.core.exceptions import SuspiciousFileOperation
from django.test import TestCase
from storages import utils
+from storages.utils import get_available_overwrite_name as gaon
class SettingTest(TestCase):
@@ -108,3 +110,26 @@ def test_join_nothing(self):
def test_with_base_url_join_nothing(self):
path = utils.safe_join('base_url')
self.assertEqual(path, 'base_url/')
+
+
+class TestGetAvailableOverwriteName(TestCase):
+ def test_maxlength_is_none(self):
+ name = 'superlong/file/with/path.txt'
+ self.assertEqual(gaon(name, None), name)
+
+ def test_maxlength_equals_name(self):
+ name = 'parent/child.txt'
+ self.assertEqual(gaon(name, len(name)), name)
+
+ def test_maxlength_is_greater_than_name(self):
+ name = 'parent/child.txt'
+ self.assertEqual(gaon(name, len(name) + 1), name)
+
+ def test_maxlength_less_than_name(self):
+ name = 'parent/child.txt'
+ self.assertEqual(gaon(name, len(name) - 1), 'parent/chil.txt')
+
+ def test_truncates_away_filename_raises(self):
+ name = 'parent/child.txt'
+ with self.assertRaises(SuspiciousFileOperation):
+ gaon(name, len(name) - 5)
|
bridgecrewio__checkov-1228 | boto3 is fixed at the patch level version
**Is your feature request related to a problem? Please describe.**
free boto3 dependency patch version.
**Describe the solution you'd like**
replace the line here:
https://github.com/bridgecrewio/checkov/blob/master/Pipfile#L29
with
```
boto3 = "==1.17.*"
```
**Describe alternatives you've considered**
There are no alternatives; I don't see why the patch version is locked.
It can cause conflicts with an already installed boto3 library.
**Additional context**
The boto3 dependency should install the latest patch version.
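To illustrate the effect of the proposed change, here is a minimal sketch using the `packaging` library (the version-matching implementation that pip and setuptools rely on); the version strings checked here are just examples:
```python
from packaging.specifiers import SpecifierSet

wildcard_pin = SpecifierSet("==1.17.*")   # proposed: accepts any 1.17.x patch release
exact_pin = SpecifierSet("==1.17.27")     # current: accepts exactly one release

for version in ("1.17.27", "1.17.74", "1.18.0"):
    print(version, wildcard_pin.contains(version), exact_pin.contains(version))

# 1.17.27 True True
# 1.17.74 True False
# 1.18.0  False False
```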
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
Fixes #1211
| [
{
"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\"\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.27\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/nimrodkor/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n include_package_data=True,\n package_dir={'checkov.terraform.checks.graph_checks': 'checkov/terraform/checks/graph_checks'},\n package_data = {'checkov.terraform.checks.graph_checks': ['aws/*.yaml', 'gcp/*.yaml', 'azure/*.yaml']},\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\nimport logging\nimport os\nfrom importlib import util\nfrom os import path\n\nimport setuptools\nfrom setuptools import setup\n\n# read the contents of your README file\nthis_directory = path.abspath(path.dirname(__file__))\nwith open(path.join(this_directory, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = f.read()\n\nlogger = logging.getLogger(__name__)\nspec = util.spec_from_file_location(\n \"checkov.version\", os.path.join(\"checkov\", \"version.py\")\n)\n# noinspection PyUnresolvedReferences\nmod = util.module_from_spec(spec)\nspec.loader.exec_module(mod) # type: ignore\nversion = mod.version # type: ignore\n\nsetup(\n extras_require={\n \"dev\": [\n \"pytest==5.3.1\",\n \"coverage\",\n \"coverage-badge\",\n \"GitPython==3.1.7\",\n \"bandit\"\n ]\n },\n install_requires=[\n \"bc-python-hcl2>=0.3.18\",\n \"cloudsplaining>=0.4.1\",\n \"deep_merge\",\n \"tabulate\",\n \"colorama\",\n \"termcolor\",\n \"junit-xml\",\n \"dpath>=1.5.0,<2\",\n \"pyyaml>=5.4.1\",\n \"boto3==1.17.*\",\n \"GitPython\",\n \"six==1.15.0\",\n \"jmespath\",\n \"tqdm\",\n \"update_checker\",\n \"semantic_version\",\n \"packaging\",\n \"networkx\",\n \"dockerfile-parse\",\n \"docker\"\n ],\n license=\"Apache License 2.0\",\n name=\"checkov\",\n version=version,\n python_requires=\">=3.7\",\n description=\"Infrastructure as code static analysis\",\n author=\"bridgecrew\",\n author_email=\"[email protected]\",\n url=\"https://github.com/nimrodkor/checkov\",\n packages=setuptools.find_packages(exclude=[\"tests*\",\"integration_tests*\"]),\n include_package_data=True,\n package_dir={'checkov.terraform.checks.graph_checks': 'checkov/terraform/checks/graph_checks'},\n package_data = {'checkov.terraform.checks.graph_checks': ['aws/*.yaml', 'gcp/*.yaml', 'azure/*.yaml']},\n scripts=[\"bin/checkov\", \"bin/checkov.cmd\"],\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Security',\n 'Topic :: Software Development :: Build Tools'\n ]\n)\n",
"path": "setup.py"
}
] | diff --git a/Pipfile b/Pipfile
index 679055be77..6a18ef5e20 100644
--- a/Pipfile
+++ b/Pipfile
@@ -26,7 +26,7 @@ termcolor="*"
junit-xml ="*"
dpath = ">=1.5.0,<2"
pyyaml = ">=5.4.1"
-boto3 = "==1.17.27"
+boto3 = "==1.17.*"
GitPython = "*"
six = "==1.15.0"
jmespath = "*"
diff --git a/Pipfile.lock b/Pipfile.lock
index b9dcab95b2..1a449cab1a 100644
--- a/Pipfile.lock
+++ b/Pipfile.lock
@@ -1,7 +1,7 @@
{
"_meta": {
"hash": {
- "sha256": "9e2b5b0b254c8c74d8f2e0268c436ccfe24f00078f540c6f4e38c0851734bc77"
+ "sha256": "a0c8170968925bd035d8f5aa085dadda7500510070e78b9466b80c5ab68e866e"
},
"pipfile-spec": 6,
"requires": {
@@ -34,19 +34,19 @@
},
"boto3": {
"hashes": [
- "sha256:6758751f1181b9363e4e7559dcbd5ac0fc7147b73f429c976ec5ecd1688c9ec7",
- "sha256:fa41987f9f71368013767306d9522b627946a01b4843938a26fb19cc8adb06c0"
+ "sha256:0a21893db156c0938d0a06b622c3dd3d2da2dcd9d06d343c8f9536ac9de4ec7f",
+ "sha256:c83a33fff7d20027386552967355508ce71fb7406ab0cc8e627e257c94754d43"
],
"index": "pypi",
- "version": "==1.17.27"
+ "version": "==1.17.74"
},
"botocore": {
"hashes": [
- "sha256:e4f8cb923edf035c2ae5f6169c70e77e31df70b88919b92b826a6b9bd14511b1",
- "sha256:f7c2c5c5ed5212b2628d8fb1c587b31c6e8d413ecbbd1a1cdf6f96ed6f5c8d5e"
+ "sha256:2061cf3d17615aa4114c91dbed8917adc5287a88354a7693c96aa8e9f9dedd6e",
+ "sha256:6937954ce6dabc00eb157e9fbd21edd45b4dfe3de738e68dbca4c042bfda0954"
],
"markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4, 3.5'",
- "version": "==1.20.62"
+ "version": "==1.20.74"
},
"cached-property": {
"hashes": [
@@ -72,19 +72,19 @@
},
"click": {
"hashes": [
- "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a",
- "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"
+ "sha256:7d8c289ee437bcb0316820ccee14aefcb056e58d31830ecab8e47eda6540e136",
+ "sha256:e90e62ced43dc8105fb9a26d62f0d9340b5c8db053a814e25d95c19873ae87db"
],
- "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
- "version": "==7.1.2"
+ "markers": "python_full_version >= '3.6.0'",
+ "version": "==8.0.0"
},
"click-option-group": {
"hashes": [
- "sha256:1b4b2ecf87ba8dea78060cffd294b38eea5af81f28a5f9be223c01b8c5ea9ab0",
- "sha256:743733a0f564438b6b761f49ddf37d845f9a662294ecabe0e832e597208bcf31"
+ "sha256:9653a2297357335d7325a1827e71ac1245d91c97d959346a7decabd4a52d5354",
+ "sha256:a6e924f3c46b657feb5b72679f7e930f8e5b224b766ab35c91ae4019b4e0615e"
],
- "markers": "python_version >= '3.6' and python_version < '4'",
- "version": "==0.5.2"
+ "markers": "python_version < '4' and python_full_version >= '3.6.0'",
+ "version": "==0.5.3"
},
"cloudsplaining": {
"hashes": [
@@ -158,11 +158,11 @@
},
"gitpython": {
"hashes": [
- "sha256:3283ae2fba31c913d857e12e5ba5f9a7772bbc064ae2bb09efafa71b0dd4939b",
- "sha256:be27633e7509e58391f10207cd32b2a6cf5b908f92d9cd30da2e514e1137af61"
+ "sha256:29fe82050709760081f588dd50ce83504feddbebdc4da6956d02351552b1c135",
+ "sha256:ee24bdc93dce357630764db659edaf6b8d664d4ff5447ccfeedd2dc5c253f41e"
],
"index": "pypi",
- "version": "==3.1.14"
+ "version": "==3.1.17"
},
"idna": {
"hashes": [
@@ -182,11 +182,11 @@
},
"jinja2": {
"hashes": [
- "sha256:03e47ad063331dd6a3f04a43eddca8a966a26ba0c5b7207a9a9e4e08f1b29419",
- "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6"
+ "sha256:2f2de5285cf37f33d33ecd4a9080b75c87cd0c1994d5a9c6df17131ea1f049c6",
+ "sha256:ea8d7dd814ce9df6de6a761ec7f1cac98afe305b8cdc4aaae4e114b8d8ce24c5"
],
- "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
- "version": "==2.11.3"
+ "markers": "python_full_version >= '3.6.0'",
+ "version": "==3.0.0"
},
"jmespath": {
"hashes": [
@@ -214,66 +214,48 @@
"sha256:31b5b491868dcc87d6c24b7e3d19a0d730d59d3e46f4eea6430a321bed387a49",
"sha256:96c3ba1261de2f7547b46a00ea8463832c921d3f9d6aba3f255a6f71386db20c"
],
- "markers": "python_version >= '3.6'",
+ "markers": "python_full_version >= '3.6.0'",
"version": "==3.3.4"
},
"markupsafe": {
"hashes": [
- "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473",
- "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161",
- "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235",
- "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5",
- "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42",
- "sha256:195d7d2c4fbb0ee8139a6cf67194f3973a6b3042d742ebe0a9ed36d8b6f0c07f",
- "sha256:22c178a091fc6630d0d045bdb5992d2dfe14e3259760e713c490da5323866c39",
- "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff",
- "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b",
- "sha256:2beec1e0de6924ea551859edb9e7679da6e4870d32cb766240ce17e0a0ba2014",
- "sha256:3b8a6499709d29c2e2399569d96719a1b21dcd94410a586a18526b143ec8470f",
- "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1",
- "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e",
- "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183",
- "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66",
- "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b",
- "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1",
- "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15",
- "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1",
- "sha256:6f1e273a344928347c1290119b493a1f0303c52f5a5eae5f16d74f48c15d4a85",
- "sha256:6fffc775d90dcc9aed1b89219549b329a9250d918fd0b8fa8d93d154918422e1",
- "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e",
- "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b",
- "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905",
- "sha256:7fed13866cf14bba33e7176717346713881f56d9d2bcebab207f7a036f41b850",
- "sha256:84dee80c15f1b560d55bcfe6d47b27d070b4681c699c572af2e3c7cc90a3b8e0",
- "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735",
- "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d",
- "sha256:98bae9582248d6cf62321dcb52aaf5d9adf0bad3b40582925ef7c7f0ed85fceb",
- "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e",
- "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d",
- "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c",
- "sha256:a6a744282b7718a2a62d2ed9d993cad6f5f585605ad352c11de459f4108df0a1",
- "sha256:acf08ac40292838b3cbbb06cfe9b2cb9ec78fce8baca31ddb87aaac2e2dc3bc2",
- "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21",
- "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2",
- "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5",
- "sha256:b1dba4527182c95a0db8b6060cc98ac49b9e2f5e64320e2b56e47cb2831978c7",
- "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b",
- "sha256:b7d644ddb4dbd407d31ffb699f1d140bc35478da613b441c582aeb7c43838dd8",
- "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6",
- "sha256:bf5aa3cbcfdf57fa2ee9cd1822c862ef23037f5c832ad09cfea57fa846dec193",
- "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f",
- "sha256:caabedc8323f1e93231b52fc32bdcde6db817623d33e100708d9a68e1f53b26b",
- "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f",
- "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2",
- "sha256:d53bc011414228441014aa71dbec320c66468c1030aae3a6e29778a3382d96e5",
- "sha256:d73a845f227b0bfe8a7455ee623525ee656a9e2e749e4742706d80a6065d5e2c",
- "sha256:d9be0ba6c527163cbed5e0857c451fcd092ce83947944d6c14bc95441203f032",
- "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7",
- "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be",
- "sha256:feb7b34d6325451ef96bc0e36e1a6c0c1c64bc1fbec4b854f4529e51887b1621"
- ],
- "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
- "version": "==1.1.1"
+ "sha256:007dc055dbce5b1104876acee177dbfd18757e19d562cd440182e1f492e96b95",
+ "sha256:031bf79a27d1c42f69c276d6221172417b47cb4b31cdc73d362a9bf5a1889b9f",
+ "sha256:161d575fa49395860b75da5135162481768b11208490d5a2143ae6785123e77d",
+ "sha256:24bbc3507fb6dfff663af7900a631f2aca90d5a445f272db5fc84999fa5718bc",
+ "sha256:2efaeb1baff547063bad2b2893a8f5e9c459c4624e1a96644bbba08910ae34e0",
+ "sha256:32200f562daaab472921a11cbb63780f1654552ae49518196fc361ed8e12e901",
+ "sha256:3261fae28155e5c8634dd7710635fe540a05b58f160cef7713c7700cb9980e66",
+ "sha256:3b54a9c68995ef4164567e2cd1a5e16db5dac30b2a50c39c82db8d4afaf14f63",
+ "sha256:3c352ff634e289061711608f5e474ec38dbaa21e3e168820d53d5f4015e5b91b",
+ "sha256:3fb47f97f1d338b943126e90b79cad50d4fcfa0b80637b5a9f468941dbbd9ce5",
+ "sha256:441ce2a8c17683d97e06447fcbccbdb057cbf587c78eb75ae43ea7858042fe2c",
+ "sha256:45535241baa0fc0ba2a43961a1ac7562ca3257f46c4c3e9c0de38b722be41bd1",
+ "sha256:4aca81a687975b35e3e80bcf9aa93fe10cd57fac37bf18b2314c186095f57e05",
+ "sha256:4cc563836f13c57f1473bc02d1e01fc37bab70ad4ee6be297d58c1d66bc819bf",
+ "sha256:4fae0677f712ee090721d8b17f412f1cbceefbf0dc180fe91bab3232f38b4527",
+ "sha256:58bc9fce3e1557d463ef5cee05391a05745fd95ed660f23c1742c711712c0abb",
+ "sha256:664832fb88b8162268928df233f4b12a144a0c78b01d38b81bdcf0fc96668ecb",
+ "sha256:70820a1c96311e02449591cbdf5cd1c6a34d5194d5b55094ab725364375c9eb2",
+ "sha256:79b2ae94fa991be023832e6bcc00f41dbc8e5fe9d997a02db965831402551730",
+ "sha256:83cf0228b2f694dcdba1374d5312f2277269d798e65f40344964f642935feac1",
+ "sha256:87de598edfa2230ff274c4de7fcf24c73ffd96208c8e1912d5d0fee459767d75",
+ "sha256:8f806bfd0f218477d7c46a11d3e52dc7f5fdfaa981b18202b7dc84bbc287463b",
+ "sha256:90053234a6479738fd40d155268af631c7fca33365f964f2208867da1349294b",
+ "sha256:a00dce2d96587651ef4fa192c17e039e8cfab63087c67e7d263a5533c7dad715",
+ "sha256:a08cd07d3c3c17cd33d9e66ea9dee8f8fc1c48e2d11bd88fd2dc515a602c709b",
+ "sha256:a19d39b02a24d3082856a5b06490b714a9d4179321225bbf22809ff1e1887cc8",
+ "sha256:d00a669e4a5bec3ee6dbeeeedd82a405ced19f8aeefb109a012ea88a45afff96",
+ "sha256:dab0c685f21f4a6c95bfc2afd1e7eae0033b403dd3d8c1b6d13a652ada75b348",
+ "sha256:df561f65049ed3556e5b52541669310e88713fdae2934845ec3606f283337958",
+ "sha256:e4570d16f88c7f3032ed909dc9e905a17da14a1c4cfd92608e3fda4cb1208bbd",
+ "sha256:e77e4b983e2441aff0c0d07ee711110c106b625f440292dfe02a2f60c8218bd6",
+ "sha256:e79212d09fc0e224d20b43ad44bb0a0a3416d1e04cf6b45fed265114a5d43d20",
+ "sha256:f58b5ba13a5689ca8317b98439fccfbcc673acaaf8241c1869ceea40f5d585bf",
+ "sha256:fef86115fdad7ae774720d7103aa776144cf9b66673b4afa9bcaa7af990ed07b"
+ ],
+ "markers": "python_full_version >= '3.6.0'",
+ "version": "==2.0.0"
},
"networkx": {
"hashes": [
@@ -296,7 +278,7 @@
"sha256:15c1aa5e4d887d07df495518445126182d4a551e177c192a46169593ce971fbc",
"sha256:2c3e4405a72f8284f7a3c987fbd666b3ae63fd095101e004e9ee6a1fb1ab76ff"
],
- "markers": "python_version >= '3.6'",
+ "markers": "python_full_version >= '3.6.0'",
"version": "==0.11.10"
},
"pyparsing": {
@@ -360,10 +342,10 @@
},
"s3transfer": {
"hashes": [
- "sha256:35627b86af8ff97e7ac27975fe0a98a312814b46c6333d8a6b889627bcd80994",
- "sha256:efa5bd92a897b6a8d5c1383828dca3d52d0790e0756d49740563a3fb6ed03246"
+ "sha256:9b3752887a2880690ce628bc263d6d13a3864083aeacff4890c1c9839a5eb0bc",
+ "sha256:cb022f4b16551edebbb31a377d3f09600dbada7363d8c5db7976e7f47732e1b2"
],
- "version": "==0.3.7"
+ "version": "==0.4.2"
},
"schema": {
"hashes": [
@@ -454,29 +436,29 @@
},
"websocket-client": {
"hashes": [
- "sha256:44b5df8f08c74c3d82d28100fdc81f4536809ce98a17f0757557813275fbb663",
- "sha256:63509b41d158ae5b7f67eb4ad20fecbb4eee99434e73e140354dc3ff8e09716f"
+ "sha256:5051b38a2f4c27fbd7ca077ebb23ec6965a626ded5a95637f36be1b35b6c4f81",
+ "sha256:57f876f1af4731cacb806cf54d02f5fbf75dee796053b9a5b94fd7c1d9621db9"
],
- "markers": "python_version >= '2.6' and python_version not in '3.0, 3.1, 3.2, 3.3'",
- "version": "==0.58.0"
+ "markers": "python_full_version >= '3.6.0'",
+ "version": "==1.0.0"
},
"zipp": {
"hashes": [
"sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76",
"sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"
],
- "markers": "python_version >= '3.6'",
+ "markers": "python_full_version >= '3.6.0'",
"version": "==3.4.1"
}
},
"develop": {
"attrs": {
"hashes": [
- "sha256:31b2eced602aa8423c2aea9c76a724617ed67cf9513173fd3a4f03e3a929c7e6",
- "sha256:832aa3cde19744e49938b91fea06d69ecb9e649c93ba974535d08ad92164f700"
+ "sha256:149e90d6d8ac20db7a955ad60cf0e6881a3f20d37096140088356da6c716b0b1",
+ "sha256:ef6aaac3ca6cd92904cdd0d83f629a15f18053ec84e6432106f7a4d04ae4f5fb"
],
- "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'",
- "version": "==20.3.0"
+ "markers": "python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'",
+ "version": "==21.2.0"
},
"bandit": {
"hashes": [
@@ -562,11 +544,11 @@
},
"gitpython": {
"hashes": [
- "sha256:3283ae2fba31c913d857e12e5ba5f9a7772bbc064ae2bb09efafa71b0dd4939b",
- "sha256:be27633e7509e58391f10207cd32b2a6cf5b908f92d9cd30da2e514e1137af61"
+ "sha256:29fe82050709760081f588dd50ce83504feddbebdc4da6956d02351552b1c135",
+ "sha256:ee24bdc93dce357630764db659edaf6b8d664d4ff5447ccfeedd2dc5c253f41e"
],
"index": "pypi",
- "version": "==3.1.14"
+ "version": "==3.1.17"
},
"importlib-metadata": {
"hashes": [
@@ -625,11 +607,11 @@
},
"pytest": {
"hashes": [
- "sha256:671238a46e4df0f3498d1c3270e5deb9b32d25134c99b7d75370a68cfbe9b634",
- "sha256:6ad9c7bdf517a808242b998ac20063c41532a570d088d77eec1ee12b0b5574bc"
+ "sha256:50bcad0a0b9c5a72c8e4e7c9855a3ad496ca6a881a3641b4260605450772c54b",
+ "sha256:91ef2131a9bd6be8f76f1f08eac5c5317221d6ad1e143ae03894b862e8976890"
],
"index": "pypi",
- "version": "==6.2.3"
+ "version": "==6.2.4"
},
"pyyaml": {
"hashes": [
@@ -720,7 +702,7 @@
"sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76",
"sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"
],
- "markers": "python_version >= '3.6'",
+ "markers": "python_full_version >= '3.6.0'",
"version": "==3.4.1"
}
}
diff --git a/setup.py b/setup.py
index 2c264adb76..0b629f92e5 100644
--- a/setup.py
+++ b/setup.py
@@ -41,7 +41,7 @@
"junit-xml",
"dpath>=1.5.0,<2",
"pyyaml>=5.4.1",
- "boto3==1.17.27",
+ "boto3==1.17.*",
"GitPython",
"six==1.15.0",
"jmespath",
|
mitmproxy__mitmproxy-2072 | [web] Failed to dump flows into json when visiting https website.
##### Steps to reproduce the problem:
1. Start mitmweb and set the correct proxy configuration in the browser.
2. Visit [github](https://github.com), or any other website served over HTTPS.
3. mitmweb gets stuck and throws an exception:
```python
ERROR:tornado.application:Exception in callback <function WebMaster.run.<locals>.<lambda> at 0x7f8871ebb378>
Traceback (most recent call last):
File "/home/matthew/Hack/mitmproxy/venv3.5/lib/python3.5/site-packages/tornado/ioloop.py", line 1041, in _run
return self.callback()
File "/home/matthew/Hack/mitmproxy/mitmproxy/tools/web/master.py", line 109, in <lambda>
tornado.ioloop.PeriodicCallback(lambda: self.tick(timeout=0), 5).start()
File "/home/matthew/Hack/mitmproxy/mitmproxy/master.py", line 109, in tick
handle_func(obj)
File "/home/matthew/Hack/mitmproxy/mitmproxy/controller.py", line 70, in wrapper
master.addons(f.__name__, message)
File "/home/matthew/Hack/mitmproxy/mitmproxy/addonmanager.py", line 90, in __call__
self.invoke(i, name, *args, **kwargs)
File "/home/matthew/Hack/mitmproxy/mitmproxy/addonmanager.py", line 85, in invoke
func(*args, **kwargs)
File "/home/matthew/Hack/mitmproxy/mitmproxy/addons/view.py", line 327, in request
self.add(f)
File "/home/matthew/Hack/mitmproxy/mitmproxy/addons/view.py", line 255, in add
self.sig_view_add.send(self, flow=f)
File "/home/matthew/Hack/mitmproxy/venv3.5/lib/python3.5/site-packages/blinker/base.py", line 267, in send
for receiver in self.receivers_for(sender)]
File "/home/matthew/Hack/mitmproxy/venv3.5/lib/python3.5/site-packages/blinker/base.py", line 267, in <listcomp>
for receiver in self.receivers_for(sender)]
File "/home/matthew/Hack/mitmproxy/mitmproxy/tools/web/master.py", line 58, in _sig_view_add
data=app.flow_to_json(flow)
File "/home/matthew/Hack/mitmproxy/mitmproxy/tools/web/app.py", line 197, in broadcast
message = json.dumps(kwargs, ensure_ascii=False).encode("utf8", "surrogateescape")
File "/usr/lib/python3.5/json/__init__.py", line 237, in dumps
**kw).encode(obj)
File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.5/json/encoder.py", line 179, in default
raise TypeError(repr(o) + " is not JSON serializable")
TypeError: b'-----BEGIN CERTIFICATE-----\nMIIC5jCCAc6gAwIBAgIGDYj0HL5MMA0GCSqGSIb3DQEBCwUAMCgxEjAQBgNVBAMM\nCW1pdG1wcm94eTESMBAGA1UECgwJbWl0bXByb3h5MB4XDTE3MDIyNTA5MDM0M1oX\nDTIwMDIyNzA5MDM0M1owFTETMBEGA1UEAwwKZ2l0aHViLmNvbTCCASIwDQYJKoZI\nhvcNAQEBBQADggEPADCCAQoCggEBAMTLqdlVNA4h2xzkX5XhLO1wtqZX0X0JpsXC\nHUO+KE3Pf2IBHWFAzeB3SVuaTSIa55UvRUDgZm+gYpl/qswf3MpPB8rkosLtwSJt\ns7ziAYF0JlrwYW+ZBaH/baQZ4JmgpY3qFzrkNhXwrVW+Wg3uO47w/9GaIuUNVv5t\nElfbCDBO0wvWt9tgEuaFNLVwOnibN4LEioQw/xnnUZu4JU6u+16rWasARxU7vlGs\no+CB6wgoK62W4VnSK7aQv6PMAOR49tyzhLXO6LKHQtZA4DG34zXWTYfXhuTC7rnA\nQ6haZ9qyVyeYclIXpJkmf10q2eJTjQbj8ff4Cj3LYlVmBtC2qbsCAwEAAaMpMCcw\nJQYDVR0RBB4wHIIKZ2l0aHViLmNvbYIOd3d3LmdpdGh1Yi5jb20wDQYJKoZIhvcN\nAQELBQADggEBABRJcH+lDB6ec343S+tNYDtr+wWgSiGw7WggKcUpMawBuqY61K4L\nLoxous98ie5XFfLbZI2rW/sIbMEuhjjamEMNmt83ZmZxo/YzMTXO/HlmHZYm+Vjw\nTdhGxe5cGTxjCwXhygRHX+IupDjanniwmh2jfg/0SlW7S4YE/MQJ1mcbGyzppwkg\n4hZ6sEcGe+RC7Sn1tJWlVpA3V8a6udZE8ejlaZV0/PYbJUWyRxAl00PlvRG2sPu5\nEJM7Xbd0TxtqVX7oagImBhqlhf0CyJfRMq0DU34j0oeUqtV/0FaapMumOODcnloI\nJeldz1QeX2hHksE1hYeVjZNFNKQLtzvEpgg=\n-----END CERTIFICATE-----\n' is not JSON serializable
```
##### Any other comments? What have you tried so far?
`Flow.client_conn.mitmcert` is of type `bytes`, so `json.dumps()` cannot handle it and throws an exception saying that it is not JSON serializable.
I noticed that in the `flow_to_json()` function, there is a comment saying:
> Remove flow message content and cert to save transmission space.
We do indeed remove `server_conn.cert` from the returned dict, but leave `client_conn.mitmcert` in place.
I have tried adding one more line of code to remove `client_conn.mitmcert` from the returned dict, and it resolves this exception.
However, I am not sure whether it is appropriate to remove this item, or whether we should convert it into a string and keep it in the returned dict.
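As a rough, self-contained sketch of the failure and of the two options above (the dict below is a made-up stand-in for what `flow_to_json()` builds from `flow.client_conn.get_state()`):
```python
import json

cert_bytes = b"-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"
flow = {"client_conn": {"mitmcert": cert_bytes, "address": ("127.0.0.1", 54321)}}

try:
    json.dumps(flow)
except TypeError as exc:
    print("fails:", exc)  # bytes are not JSON serializable

# Option 1: drop the certificate, mirroring how server_conn's "cert" is popped.
flow["client_conn"].pop("mitmcert", None)
print(json.dumps(flow))

# Option 2: keep the certificate, but decode the PEM bytes to str first.
flow["client_conn"]["mitmcert"] = cert_bytes.decode("ascii")
print(json.dumps(flow))
```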
##### System information
Mitmproxy version: 3.0.0 (2.0.0dev0028-0x0fdf2c0)
Python version: 3.5.2
Platform: Linux-4.4.0-63-generic-x86_64-with-Ubuntu-16.04-xenial
SSL version: OpenSSL 1.0.2g 1 Mar 2016
Linux distro: Ubuntu 16.04 xenial
| [
{
"content": "import hashlib\nimport json\nimport logging\nimport os.path\nimport re\nfrom io import BytesIO\n\nimport mitmproxy.addons.view\nimport mitmproxy.flow\nimport tornado.escape\nimport tornado.web\nimport tornado.websocket\nfrom mitmproxy import contentviews\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import http\nfrom mitmproxy import io\nfrom mitmproxy import log\nfrom mitmproxy import version\n\n\ndef flow_to_json(flow: mitmproxy.flow.Flow) -> dict:\n \"\"\"\n Remove flow message content and cert to save transmission space.\n\n Args:\n flow: The original flow.\n \"\"\"\n f = {\n \"id\": flow.id,\n \"intercepted\": flow.intercepted,\n \"client_conn\": flow.client_conn.get_state(),\n \"server_conn\": flow.server_conn.get_state(),\n \"type\": flow.type,\n \"modified\": flow.modified(),\n \"marked\": flow.marked,\n }\n # .alpn_proto_negotiated is bytes, we need to decode that.\n for conn in \"client_conn\", \"server_conn\":\n if f[conn][\"alpn_proto_negotiated\"] is None:\n continue\n f[conn][\"alpn_proto_negotiated\"] = \\\n f[conn][\"alpn_proto_negotiated\"].decode(errors=\"backslashreplace\")\n if flow.error:\n f[\"error\"] = flow.error.get_state()\n\n if isinstance(flow, http.HTTPFlow):\n if flow.request:\n if flow.request.raw_content:\n content_length = len(flow.request.raw_content)\n content_hash = hashlib.sha256(flow.request.raw_content).hexdigest()\n else:\n content_length = None\n content_hash = None\n f[\"request\"] = {\n \"method\": flow.request.method,\n \"scheme\": flow.request.scheme,\n \"host\": flow.request.host,\n \"port\": flow.request.port,\n \"path\": flow.request.path,\n \"http_version\": flow.request.http_version,\n \"headers\": tuple(flow.request.headers.items(True)),\n \"contentLength\": content_length,\n \"contentHash\": content_hash,\n \"timestamp_start\": flow.request.timestamp_start,\n \"timestamp_end\": flow.request.timestamp_end,\n \"is_replay\": flow.request.is_replay,\n }\n if flow.response:\n if flow.response.raw_content:\n content_length = len(flow.response.raw_content)\n content_hash = hashlib.sha256(flow.response.raw_content).hexdigest()\n else:\n content_length = None\n content_hash = None\n f[\"response\"] = {\n \"http_version\": flow.response.http_version,\n \"status_code\": flow.response.status_code,\n \"reason\": flow.response.reason,\n \"headers\": tuple(flow.response.headers.items(True)),\n \"contentLength\": content_length,\n \"contentHash\": content_hash,\n \"timestamp_start\": flow.response.timestamp_start,\n \"timestamp_end\": flow.response.timestamp_end,\n \"is_replay\": flow.response.is_replay,\n }\n f.get(\"server_conn\", {}).pop(\"cert\", None)\n\n return f\n\n\ndef logentry_to_json(e: log.LogEntry) -> dict:\n return {\n \"id\": id(e), # we just need some kind of id.\n \"message\": e.msg,\n \"level\": e.level\n }\n\n\nclass APIError(tornado.web.HTTPError):\n pass\n\n\nclass RequestHandler(tornado.web.RequestHandler):\n def write(self, chunk):\n # Writing arrays on the top level is ok nowadays.\n # http://flask.pocoo.org/docs/0.11/security/#json-security\n if isinstance(chunk, list):\n chunk = tornado.escape.json_encode(chunk)\n self.set_header(\"Content-Type\", \"application/json; charset=UTF-8\")\n super(RequestHandler, self).write(chunk)\n\n def set_default_headers(self):\n super().set_default_headers()\n self.set_header(\"Server\", version.MITMPROXY)\n self.set_header(\"X-Frame-Options\", \"DENY\")\n self.add_header(\"X-XSS-Protection\", \"1; mode=block\")\n 
self.add_header(\"X-Content-Type-Options\", \"nosniff\")\n self.add_header(\n \"Content-Security-Policy\",\n \"default-src 'self'; \"\n \"connect-src 'self' ws://* ; \"\n \"style-src 'self' 'unsafe-inline'\"\n )\n\n @property\n def json(self):\n if not self.request.headers.get(\"Content-Type\", \"\").startswith(\"application/json\"):\n raise APIError(400, \"Invalid Content-Type, expected application/json.\")\n try:\n return json.loads(self.request.body.decode())\n except Exception as e:\n raise APIError(400, \"Malformed JSON: {}\".format(str(e)))\n\n @property\n def filecontents(self):\n \"\"\"\n Accept either a multipart/form file upload or just take the plain request body.\n\n \"\"\"\n if self.request.files:\n return next(iter(self.request.files.values()))[0].body\n else:\n return self.request.body\n\n @property\n def view(self) -> mitmproxy.addons.view.View:\n return self.application.master.view\n\n @property\n def master(self) -> \"mitmproxy.tools.web.master.WebMaster\":\n return self.application.master\n\n @property\n def flow(self) -> mitmproxy.flow.Flow:\n flow_id = str(self.path_kwargs[\"flow_id\"])\n # FIXME: Add a facility to addon.view to safely access the store\n flow = self.view.get_by_id(flow_id)\n if flow:\n return flow\n else:\n raise APIError(404, \"Flow not found.\")\n\n def write_error(self, status_code: int, **kwargs):\n if \"exc_info\" in kwargs and isinstance(kwargs[\"exc_info\"][1], APIError):\n self.finish(kwargs[\"exc_info\"][1].log_message)\n else:\n super().write_error(status_code, **kwargs)\n\n\nclass IndexHandler(RequestHandler):\n def get(self):\n token = self.xsrf_token # https://github.com/tornadoweb/tornado/issues/645\n assert token\n self.render(\"index.html\")\n\n\nclass FilterHelp(RequestHandler):\n def get(self):\n self.write(dict(\n commands=flowfilter.help\n ))\n\n\nclass WebSocketEventBroadcaster(tornado.websocket.WebSocketHandler):\n # raise an error if inherited class doesn't specify its own instance.\n connections = None # type: set\n\n def open(self):\n self.connections.add(self)\n\n def on_close(self):\n self.connections.remove(self)\n\n @classmethod\n def broadcast(cls, **kwargs):\n message = json.dumps(kwargs, ensure_ascii=False).encode(\"utf8\", \"surrogateescape\")\n\n for conn in cls.connections:\n try:\n conn.write_message(message)\n except Exception: # pragma: no cover\n logging.error(\"Error sending message\", exc_info=True)\n\n\nclass ClientConnection(WebSocketEventBroadcaster):\n connections = set() # type: set\n\n\nclass Flows(RequestHandler):\n def get(self):\n self.write([flow_to_json(f) for f in self.view])\n\n\nclass DumpFlows(RequestHandler):\n def get(self):\n self.set_header(\"Content-Disposition\", \"attachment; filename=flows\")\n self.set_header(\"Content-Type\", \"application/octet-stream\")\n\n bio = BytesIO()\n fw = io.FlowWriter(bio)\n for f in self.view:\n fw.add(f)\n\n self.write(bio.getvalue())\n bio.close()\n\n def post(self):\n self.view.clear()\n bio = BytesIO(self.filecontents)\n self.master.load_flows(io.FlowReader(bio))\n bio.close()\n\n\nclass ClearAll(RequestHandler):\n def post(self):\n self.view.clear()\n self.master.events.clear()\n\n\nclass ResumeFlows(RequestHandler):\n def post(self):\n for f in self.view:\n f.resume()\n self.view.update(f)\n\n\nclass KillFlows(RequestHandler):\n def post(self):\n for f in self.view:\n if f.killable:\n f.kill()\n self.view.update(f)\n\n\nclass ResumeFlow(RequestHandler):\n def post(self, flow_id):\n self.flow.resume()\n self.view.update(self.flow)\n\n\nclass 
KillFlow(RequestHandler):\n def post(self, flow_id):\n if self.flow.killable:\n self.flow.kill()\n self.view.update(self.flow)\n\n\nclass FlowHandler(RequestHandler):\n def delete(self, flow_id):\n if self.flow.killable:\n self.flow.kill()\n self.view.remove(self.flow)\n\n def put(self, flow_id):\n flow = self.flow\n flow.backup()\n try:\n for a, b in self.json.items():\n if a == \"request\" and hasattr(flow, \"request\"):\n request = flow.request\n for k, v in b.items():\n if k in [\"method\", \"scheme\", \"host\", \"path\", \"http_version\"]:\n setattr(request, k, str(v))\n elif k == \"port\":\n request.port = int(v)\n elif k == \"headers\":\n request.headers.clear()\n for header in v:\n request.headers.add(*header)\n elif k == \"content\":\n request.text = v\n else:\n raise APIError(400, \"Unknown update request.{}: {}\".format(k, v))\n\n elif a == \"response\" and hasattr(flow, \"response\"):\n response = flow.response\n for k, v in b.items():\n if k in [\"msg\", \"http_version\"]:\n setattr(response, k, str(v))\n elif k == \"code\":\n response.status_code = int(v)\n elif k == \"headers\":\n response.headers.clear()\n for header in v:\n response.headers.add(*header)\n elif k == \"content\":\n response.text = v\n else:\n raise APIError(400, \"Unknown update response.{}: {}\".format(k, v))\n else:\n raise APIError(400, \"Unknown update {}: {}\".format(a, b))\n except APIError:\n flow.revert()\n raise\n self.view.update(flow)\n\n\nclass DuplicateFlow(RequestHandler):\n def post(self, flow_id):\n f = self.flow.copy()\n self.view.add(f)\n self.write(f.id)\n\n\nclass RevertFlow(RequestHandler):\n def post(self, flow_id):\n if self.flow.modified():\n self.flow.revert()\n self.view.update(self.flow)\n\n\nclass ReplayFlow(RequestHandler):\n def post(self, flow_id):\n self.flow.backup()\n self.flow.response = None\n self.view.update(self.flow)\n\n try:\n self.master.replay_request(self.flow)\n except exceptions.ReplayException as e:\n raise APIError(400, str(e))\n\n\nclass FlowContent(RequestHandler):\n def post(self, flow_id, message):\n self.flow.backup()\n message = getattr(self.flow, message)\n message.content = self.filecontents\n self.view.update(self.flow)\n\n def get(self, flow_id, message):\n message = getattr(self.flow, message)\n\n if not message.raw_content:\n raise APIError(400, \"No content.\")\n\n content_encoding = message.headers.get(\"Content-Encoding\", None)\n if content_encoding:\n content_encoding = re.sub(r\"[^\\w]\", \"\", content_encoding)\n self.set_header(\"Content-Encoding\", content_encoding)\n\n original_cd = message.headers.get(\"Content-Disposition\", None)\n filename = None\n if original_cd:\n filename = re.search('filename=([-\\w\" .()]+)', original_cd)\n if filename:\n filename = filename.group(1)\n if not filename:\n filename = self.flow.request.path.split(\"?\")[0].split(\"/\")[-1]\n\n filename = re.sub(r'[^-\\w\" .()]', \"\", filename)\n cd = \"attachment; filename={}\".format(filename)\n self.set_header(\"Content-Disposition\", cd)\n self.set_header(\"Content-Type\", \"application/text\")\n self.set_header(\"X-Content-Type-Options\", \"nosniff\")\n self.set_header(\"X-Frame-Options\", \"DENY\")\n self.write(message.raw_content)\n\n\nclass FlowContentView(RequestHandler):\n def get(self, flow_id, message, content_view):\n message = getattr(self.flow, message)\n\n description, lines, error = contentviews.get_message_content_view(\n content_view.replace('_', ' '), message\n )\n # if error:\n # add event log\n\n self.write(dict(\n lines=list(lines),\n 
description=description\n ))\n\n\nclass Events(RequestHandler):\n def get(self):\n self.write([logentry_to_json(e) for e in self.master.events.data])\n\n\nclass Settings(RequestHandler):\n def get(self):\n self.write(dict(\n version=version.VERSION,\n mode=str(self.master.options.mode),\n intercept=self.master.options.intercept,\n showhost=self.master.options.showhost,\n no_upstream_cert=self.master.options.no_upstream_cert,\n rawtcp=self.master.options.rawtcp,\n http2=self.master.options.http2,\n websocket=self.master.options.websocket,\n anticache=self.master.options.anticache,\n anticomp=self.master.options.anticomp,\n stickyauth=self.master.options.stickyauth,\n stickycookie=self.master.options.stickycookie,\n stream=self.master.options.stream_large_bodies,\n contentViews=[v.name.replace(' ', '_') for v in contentviews.views],\n listen_host=self.master.options.listen_host,\n listen_port=self.master.options.listen_port,\n ))\n\n def put(self):\n update = self.json\n option_whitelist = {\n \"intercept\", \"showhost\", \"no_upstream_cert\",\n \"rawtcp\", \"http2\", \"websocket\", \"anticache\", \"anticomp\",\n \"stickycookie\", \"stickyauth\", \"stream_large_bodies\"\n }\n for k in update:\n if k not in option_whitelist:\n raise APIError(400, \"Unknown setting {}\".format(k))\n self.master.options.update(**update)\n\n\nclass Application(tornado.web.Application):\n def __init__(self, master, debug):\n self.master = master\n handlers = [\n (r\"/\", IndexHandler),\n (r\"/filter-help\", FilterHelp),\n (r\"/updates\", ClientConnection),\n (r\"/events\", Events),\n (r\"/flows\", Flows),\n (r\"/flows/dump\", DumpFlows),\n (r\"/flows/resume\", ResumeFlows),\n (r\"/flows/kill\", KillFlows),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)\", FlowHandler),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/resume\", ResumeFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/kill\", KillFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/duplicate\", DuplicateFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/replay\", ReplayFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/revert\", RevertFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/(?P<message>request|response)/content\", FlowContent),\n (\n r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/(?P<message>request|response)/content/(?P<content_view>[0-9a-zA-Z\\-\\_]+)\",\n FlowContentView),\n (r\"/settings\", Settings),\n (r\"/clear\", ClearAll),\n ]\n settings = dict(\n template_path=os.path.join(os.path.dirname(__file__), \"templates\"),\n static_path=os.path.join(os.path.dirname(__file__), \"static\"),\n xsrf_cookies=True,\n cookie_secret=os.urandom(256),\n debug=debug,\n autoreload=False,\n )\n super().__init__(handlers, **settings)\n",
"path": "mitmproxy/tools/web/app.py"
}
] | [
{
"content": "import hashlib\nimport json\nimport logging\nimport os.path\nimport re\nfrom io import BytesIO\n\nimport mitmproxy.addons.view\nimport mitmproxy.flow\nimport tornado.escape\nimport tornado.web\nimport tornado.websocket\nfrom mitmproxy import contentviews\nfrom mitmproxy import exceptions\nfrom mitmproxy import flowfilter\nfrom mitmproxy import http\nfrom mitmproxy import io\nfrom mitmproxy import log\nfrom mitmproxy import version\n\n\ndef flow_to_json(flow: mitmproxy.flow.Flow) -> dict:\n \"\"\"\n Remove flow message content and cert to save transmission space.\n\n Args:\n flow: The original flow.\n \"\"\"\n f = {\n \"id\": flow.id,\n \"intercepted\": flow.intercepted,\n \"client_conn\": flow.client_conn.get_state(),\n \"server_conn\": flow.server_conn.get_state(),\n \"type\": flow.type,\n \"modified\": flow.modified(),\n \"marked\": flow.marked,\n }\n # .alpn_proto_negotiated is bytes, we need to decode that.\n for conn in \"client_conn\", \"server_conn\":\n if f[conn][\"alpn_proto_negotiated\"] is None:\n continue\n f[conn][\"alpn_proto_negotiated\"] = \\\n f[conn][\"alpn_proto_negotiated\"].decode(errors=\"backslashreplace\")\n if flow.error:\n f[\"error\"] = flow.error.get_state()\n\n if isinstance(flow, http.HTTPFlow):\n if flow.request:\n if flow.request.raw_content:\n content_length = len(flow.request.raw_content)\n content_hash = hashlib.sha256(flow.request.raw_content).hexdigest()\n else:\n content_length = None\n content_hash = None\n f[\"request\"] = {\n \"method\": flow.request.method,\n \"scheme\": flow.request.scheme,\n \"host\": flow.request.host,\n \"port\": flow.request.port,\n \"path\": flow.request.path,\n \"http_version\": flow.request.http_version,\n \"headers\": tuple(flow.request.headers.items(True)),\n \"contentLength\": content_length,\n \"contentHash\": content_hash,\n \"timestamp_start\": flow.request.timestamp_start,\n \"timestamp_end\": flow.request.timestamp_end,\n \"is_replay\": flow.request.is_replay,\n }\n if flow.response:\n if flow.response.raw_content:\n content_length = len(flow.response.raw_content)\n content_hash = hashlib.sha256(flow.response.raw_content).hexdigest()\n else:\n content_length = None\n content_hash = None\n f[\"response\"] = {\n \"http_version\": flow.response.http_version,\n \"status_code\": flow.response.status_code,\n \"reason\": flow.response.reason,\n \"headers\": tuple(flow.response.headers.items(True)),\n \"contentLength\": content_length,\n \"contentHash\": content_hash,\n \"timestamp_start\": flow.response.timestamp_start,\n \"timestamp_end\": flow.response.timestamp_end,\n \"is_replay\": flow.response.is_replay,\n }\n f.get(\"server_conn\", {}).pop(\"cert\", None)\n f.get(\"client_conn\", {}).pop(\"mitmcert\", None)\n\n return f\n\n\ndef logentry_to_json(e: log.LogEntry) -> dict:\n return {\n \"id\": id(e), # we just need some kind of id.\n \"message\": e.msg,\n \"level\": e.level\n }\n\n\nclass APIError(tornado.web.HTTPError):\n pass\n\n\nclass RequestHandler(tornado.web.RequestHandler):\n def write(self, chunk):\n # Writing arrays on the top level is ok nowadays.\n # http://flask.pocoo.org/docs/0.11/security/#json-security\n if isinstance(chunk, list):\n chunk = tornado.escape.json_encode(chunk)\n self.set_header(\"Content-Type\", \"application/json; charset=UTF-8\")\n super(RequestHandler, self).write(chunk)\n\n def set_default_headers(self):\n super().set_default_headers()\n self.set_header(\"Server\", version.MITMPROXY)\n self.set_header(\"X-Frame-Options\", \"DENY\")\n self.add_header(\"X-XSS-Protection\", 
\"1; mode=block\")\n self.add_header(\"X-Content-Type-Options\", \"nosniff\")\n self.add_header(\n \"Content-Security-Policy\",\n \"default-src 'self'; \"\n \"connect-src 'self' ws://* ; \"\n \"style-src 'self' 'unsafe-inline'\"\n )\n\n @property\n def json(self):\n if not self.request.headers.get(\"Content-Type\", \"\").startswith(\"application/json\"):\n raise APIError(400, \"Invalid Content-Type, expected application/json.\")\n try:\n return json.loads(self.request.body.decode())\n except Exception as e:\n raise APIError(400, \"Malformed JSON: {}\".format(str(e)))\n\n @property\n def filecontents(self):\n \"\"\"\n Accept either a multipart/form file upload or just take the plain request body.\n\n \"\"\"\n if self.request.files:\n return next(iter(self.request.files.values()))[0].body\n else:\n return self.request.body\n\n @property\n def view(self) -> mitmproxy.addons.view.View:\n return self.application.master.view\n\n @property\n def master(self) -> \"mitmproxy.tools.web.master.WebMaster\":\n return self.application.master\n\n @property\n def flow(self) -> mitmproxy.flow.Flow:\n flow_id = str(self.path_kwargs[\"flow_id\"])\n # FIXME: Add a facility to addon.view to safely access the store\n flow = self.view.get_by_id(flow_id)\n if flow:\n return flow\n else:\n raise APIError(404, \"Flow not found.\")\n\n def write_error(self, status_code: int, **kwargs):\n if \"exc_info\" in kwargs and isinstance(kwargs[\"exc_info\"][1], APIError):\n self.finish(kwargs[\"exc_info\"][1].log_message)\n else:\n super().write_error(status_code, **kwargs)\n\n\nclass IndexHandler(RequestHandler):\n def get(self):\n token = self.xsrf_token # https://github.com/tornadoweb/tornado/issues/645\n assert token\n self.render(\"index.html\")\n\n\nclass FilterHelp(RequestHandler):\n def get(self):\n self.write(dict(\n commands=flowfilter.help\n ))\n\n\nclass WebSocketEventBroadcaster(tornado.websocket.WebSocketHandler):\n # raise an error if inherited class doesn't specify its own instance.\n connections = None # type: set\n\n def open(self):\n self.connections.add(self)\n\n def on_close(self):\n self.connections.remove(self)\n\n @classmethod\n def broadcast(cls, **kwargs):\n message = json.dumps(kwargs, ensure_ascii=False).encode(\"utf8\", \"surrogateescape\")\n\n for conn in cls.connections:\n try:\n conn.write_message(message)\n except Exception: # pragma: no cover\n logging.error(\"Error sending message\", exc_info=True)\n\n\nclass ClientConnection(WebSocketEventBroadcaster):\n connections = set() # type: set\n\n\nclass Flows(RequestHandler):\n def get(self):\n self.write([flow_to_json(f) for f in self.view])\n\n\nclass DumpFlows(RequestHandler):\n def get(self):\n self.set_header(\"Content-Disposition\", \"attachment; filename=flows\")\n self.set_header(\"Content-Type\", \"application/octet-stream\")\n\n bio = BytesIO()\n fw = io.FlowWriter(bio)\n for f in self.view:\n fw.add(f)\n\n self.write(bio.getvalue())\n bio.close()\n\n def post(self):\n self.view.clear()\n bio = BytesIO(self.filecontents)\n self.master.load_flows(io.FlowReader(bio))\n bio.close()\n\n\nclass ClearAll(RequestHandler):\n def post(self):\n self.view.clear()\n self.master.events.clear()\n\n\nclass ResumeFlows(RequestHandler):\n def post(self):\n for f in self.view:\n f.resume()\n self.view.update(f)\n\n\nclass KillFlows(RequestHandler):\n def post(self):\n for f in self.view:\n if f.killable:\n f.kill()\n self.view.update(f)\n\n\nclass ResumeFlow(RequestHandler):\n def post(self, flow_id):\n self.flow.resume()\n 
self.view.update(self.flow)\n\n\nclass KillFlow(RequestHandler):\n def post(self, flow_id):\n if self.flow.killable:\n self.flow.kill()\n self.view.update(self.flow)\n\n\nclass FlowHandler(RequestHandler):\n def delete(self, flow_id):\n if self.flow.killable:\n self.flow.kill()\n self.view.remove(self.flow)\n\n def put(self, flow_id):\n flow = self.flow\n flow.backup()\n try:\n for a, b in self.json.items():\n if a == \"request\" and hasattr(flow, \"request\"):\n request = flow.request\n for k, v in b.items():\n if k in [\"method\", \"scheme\", \"host\", \"path\", \"http_version\"]:\n setattr(request, k, str(v))\n elif k == \"port\":\n request.port = int(v)\n elif k == \"headers\":\n request.headers.clear()\n for header in v:\n request.headers.add(*header)\n elif k == \"content\":\n request.text = v\n else:\n raise APIError(400, \"Unknown update request.{}: {}\".format(k, v))\n\n elif a == \"response\" and hasattr(flow, \"response\"):\n response = flow.response\n for k, v in b.items():\n if k in [\"msg\", \"http_version\"]:\n setattr(response, k, str(v))\n elif k == \"code\":\n response.status_code = int(v)\n elif k == \"headers\":\n response.headers.clear()\n for header in v:\n response.headers.add(*header)\n elif k == \"content\":\n response.text = v\n else:\n raise APIError(400, \"Unknown update response.{}: {}\".format(k, v))\n else:\n raise APIError(400, \"Unknown update {}: {}\".format(a, b))\n except APIError:\n flow.revert()\n raise\n self.view.update(flow)\n\n\nclass DuplicateFlow(RequestHandler):\n def post(self, flow_id):\n f = self.flow.copy()\n self.view.add(f)\n self.write(f.id)\n\n\nclass RevertFlow(RequestHandler):\n def post(self, flow_id):\n if self.flow.modified():\n self.flow.revert()\n self.view.update(self.flow)\n\n\nclass ReplayFlow(RequestHandler):\n def post(self, flow_id):\n self.flow.backup()\n self.flow.response = None\n self.view.update(self.flow)\n\n try:\n self.master.replay_request(self.flow)\n except exceptions.ReplayException as e:\n raise APIError(400, str(e))\n\n\nclass FlowContent(RequestHandler):\n def post(self, flow_id, message):\n self.flow.backup()\n message = getattr(self.flow, message)\n message.content = self.filecontents\n self.view.update(self.flow)\n\n def get(self, flow_id, message):\n message = getattr(self.flow, message)\n\n if not message.raw_content:\n raise APIError(400, \"No content.\")\n\n content_encoding = message.headers.get(\"Content-Encoding\", None)\n if content_encoding:\n content_encoding = re.sub(r\"[^\\w]\", \"\", content_encoding)\n self.set_header(\"Content-Encoding\", content_encoding)\n\n original_cd = message.headers.get(\"Content-Disposition\", None)\n filename = None\n if original_cd:\n filename = re.search('filename=([-\\w\" .()]+)', original_cd)\n if filename:\n filename = filename.group(1)\n if not filename:\n filename = self.flow.request.path.split(\"?\")[0].split(\"/\")[-1]\n\n filename = re.sub(r'[^-\\w\" .()]', \"\", filename)\n cd = \"attachment; filename={}\".format(filename)\n self.set_header(\"Content-Disposition\", cd)\n self.set_header(\"Content-Type\", \"application/text\")\n self.set_header(\"X-Content-Type-Options\", \"nosniff\")\n self.set_header(\"X-Frame-Options\", \"DENY\")\n self.write(message.raw_content)\n\n\nclass FlowContentView(RequestHandler):\n def get(self, flow_id, message, content_view):\n message = getattr(self.flow, message)\n\n description, lines, error = contentviews.get_message_content_view(\n content_view.replace('_', ' '), message\n )\n # if error:\n # add event log\n\n 
self.write(dict(\n lines=list(lines),\n description=description\n ))\n\n\nclass Events(RequestHandler):\n def get(self):\n self.write([logentry_to_json(e) for e in self.master.events.data])\n\n\nclass Settings(RequestHandler):\n def get(self):\n self.write(dict(\n version=version.VERSION,\n mode=str(self.master.options.mode),\n intercept=self.master.options.intercept,\n showhost=self.master.options.showhost,\n no_upstream_cert=self.master.options.no_upstream_cert,\n rawtcp=self.master.options.rawtcp,\n http2=self.master.options.http2,\n websocket=self.master.options.websocket,\n anticache=self.master.options.anticache,\n anticomp=self.master.options.anticomp,\n stickyauth=self.master.options.stickyauth,\n stickycookie=self.master.options.stickycookie,\n stream=self.master.options.stream_large_bodies,\n contentViews=[v.name.replace(' ', '_') for v in contentviews.views],\n listen_host=self.master.options.listen_host,\n listen_port=self.master.options.listen_port,\n ))\n\n def put(self):\n update = self.json\n option_whitelist = {\n \"intercept\", \"showhost\", \"no_upstream_cert\",\n \"rawtcp\", \"http2\", \"websocket\", \"anticache\", \"anticomp\",\n \"stickycookie\", \"stickyauth\", \"stream_large_bodies\"\n }\n for k in update:\n if k not in option_whitelist:\n raise APIError(400, \"Unknown setting {}\".format(k))\n self.master.options.update(**update)\n\n\nclass Application(tornado.web.Application):\n def __init__(self, master, debug):\n self.master = master\n handlers = [\n (r\"/\", IndexHandler),\n (r\"/filter-help\", FilterHelp),\n (r\"/updates\", ClientConnection),\n (r\"/events\", Events),\n (r\"/flows\", Flows),\n (r\"/flows/dump\", DumpFlows),\n (r\"/flows/resume\", ResumeFlows),\n (r\"/flows/kill\", KillFlows),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)\", FlowHandler),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/resume\", ResumeFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/kill\", KillFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/duplicate\", DuplicateFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/replay\", ReplayFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/revert\", RevertFlow),\n (r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/(?P<message>request|response)/content\", FlowContent),\n (\n r\"/flows/(?P<flow_id>[0-9a-f\\-]+)/(?P<message>request|response)/content/(?P<content_view>[0-9a-zA-Z\\-\\_]+)\",\n FlowContentView),\n (r\"/settings\", Settings),\n (r\"/clear\", ClearAll),\n ]\n settings = dict(\n template_path=os.path.join(os.path.dirname(__file__), \"templates\"),\n static_path=os.path.join(os.path.dirname(__file__), \"static\"),\n xsrf_cookies=True,\n cookie_secret=os.urandom(256),\n debug=debug,\n autoreload=False,\n )\n super().__init__(handlers, **settings)\n",
"path": "mitmproxy/tools/web/app.py"
}
] | diff --git a/mitmproxy/tools/web/app.py b/mitmproxy/tools/web/app.py
index 1f3467cce5..893c3dde0a 100644
--- a/mitmproxy/tools/web/app.py
+++ b/mitmproxy/tools/web/app.py
@@ -85,6 +85,7 @@ def flow_to_json(flow: mitmproxy.flow.Flow) -> dict:
"is_replay": flow.response.is_replay,
}
f.get("server_conn", {}).pop("cert", None)
+ f.get("client_conn", {}).pop("mitmcert", None)
return f
|
praw-dev__praw-982 | mark_visited function appears to be broken or using the wrong endpoint
## Issue Description
I was tooling around in an interactive session so I don't have a super clean source code snippet, but I tried to mark a submission as visited and got this error instead.
```
In [44]: submi = reddit.submission(s.id)
In [45]: submi.mark_visited()
---------------------------------------------------------------------------
Forbidden Traceback (most recent call last)
~/reddit_opioid_mining/scraper.py in <module>()
----> 1 submi.mark_visited()
/usr/local/lib/python3.5/dist-packages/praw/models/reddit/submission.py in mark_visited(self)
181 """
182 data = {'links': self.fullname}
--> 183 self._reddit.post(API_PATH['store_visits'], data=data)
184
185 def hide(self, other_submissions=None):
/usr/local/lib/python3.5/dist-packages/praw/reddit.py in post(self, path, data, files, params)
463 """
464 data = self.request('POST', path, data=data or {}, files=files,
--> 465 params=params)
466 return self._objector.objectify(data)
467
/usr/local/lib/python3.5/dist-packages/praw/reddit.py in request(self, method, path, params, data, files)
504 """
505 return self._core.request(method, path, data=data, files=files,
--> 506 params=params)
507
508 def submission( # pylint: disable=invalid-name,redefined-builtin
/usr/local/lib/python3.5/dist-packages/prawcore/sessions.py in request(self, method, path, data, files, json, params)
183 return self._request_with_retries(
184 data=data, files=files, json=json, method=method,
--> 185 params=params, url=url)
186
187
/usr/local/lib/python3.5/dist-packages/prawcore/sessions.py in _request_with_retries(self, data, files, json, method, params, url, retries)
128 retries, saved_exception, url)
129 elif response.status_code in self.STATUS_EXCEPTIONS:
--> 130 raise self.STATUS_EXCEPTIONS[response.status_code](response)
131 elif response.status_code == codes['no_content']:
132 return
Forbidden: received 403 HTTP response
```
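For completeness, a cleaner minimal reproduction than the interactive session above, assuming a script-type OAuth app; the credentials and submission id are placeholders:
```python
import praw

reddit = praw.Reddit(
    client_id="CLIENT_ID",            # placeholder credentials
    client_secret="CLIENT_SECRET",
    username="USERNAME",
    password="PASSWORD",
    user_agent="mark_visited repro by /u/USERNAME",
)

submission = reddit.submission(id="5or86n")  # any submission id
submission.mark_visited()  # raises the Forbidden (403 HTTP response) error shown above
```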
## System Information
- PRAW Version: 6.0.0
- Python Version: 3.5.2
- Operating System: Ubuntu 16.04
| [
{
"content": "\"\"\"Provide the Submission class.\"\"\"\nfrom ...const import API_PATH, urljoin\nfrom ...exceptions import ClientException\nfrom ..comment_forest import CommentForest\nfrom ..listing.mixins import SubmissionListingMixin\nfrom .base import RedditBase\nfrom .mixins import ThingModerationMixin, UserContentMixin\nfrom .redditor import Redditor\nfrom .subreddit import Subreddit\n\n\nclass Submission(RedditBase, SubmissionListingMixin, UserContentMixin):\n \"\"\"A class for submissions to reddit.\n\n **Typical Attributes**\n\n This table describes attributes that typically belong to objects of this\n class. Since attributes are dynamically provided (see\n :ref:`determine-available-attributes-of-an-object`), there is not a\n guarantee that these attributes will always be present, nor is this list\n comprehensive in any way.\n\n ======================== ==================================================\n Attribute Description\n ======================== ==================================================\n ``author`` Provides an instance of :class:`.Redditor`.\n ``clicked`` Whether or not the submission has been clicked by\n the client.\n ``comments`` Provides an instance of :class:`.CommentForest`.\n ``created_utc`` Time the submission was created, represented in\n `Unix Time`_.\n ``distinguished`` Whether or not the submission is distinguished.\n ``edited`` Whether or not the submission has been edited.\n ``id`` The ID of the submission.\n ``is_video`` Whether or not the submission is a Reddit-hosted\n video.\n ``link_flair_css_class`` The CSS class for the submissions' flair.\n ``link_flair_text`` The flair text for the submissions' flair.\n ``locked`` Whether or not the submission has been locked.\n ``num_comments`` The number of comments on the submission.\n ``over_18`` Whether or not the submission has been marked as\n NSFW.\n ``permalink`` A permalink for the submission.\n ``score`` The number of upvotes for the submission.\n ``selftext`` The submissions' selftext.\n ``stickied`` Whether or not the submission is stickied.\n ``subreddit`` Provides an instance of :class:`.Subreddit`.\n ``subreddit_id`` The ID of the subreddit that the submission\n belongs to.\n ``title`` The title of the submission.\n ``upvote_ratio`` The percentage of upvotes from all votes on the\n submission.\n ======================== ==================================================\n\n\n .. _Unix Time: https://en.wikipedia.org/wiki/Unix_time\n\n \"\"\"\n\n STR_FIELD = 'id'\n\n @staticmethod\n def id_from_url(url):\n \"\"\"Return the ID contained within a submission URL.\n\n :param url: A url to a submission in one of the following formats (http\n urls will also work):\n * https://redd.it/2gmzqe\n * https://reddit.com/comments/2gmzqe/\n * https://www.reddit.com/r/redditdev/comments/2gmzqe/praw_https/\n\n Raise :class:`.ClientException` if URL is not a valid submission URL.\n\n \"\"\"\n parts = RedditBase._url_parts(url)\n if 'comments' not in parts:\n submission_id = parts[-1]\n if 'r' in parts:\n raise ClientException('Invalid URL (subreddit, '\n 'not submission): {}'.format(url))\n else:\n submission_id = parts[parts.index('comments') + 1]\n\n if not submission_id.isalnum():\n raise ClientException('Invalid URL: {}'.format(url))\n return submission_id\n\n @property\n def comments(self):\n \"\"\"Provide an instance of :class:`.CommentForest`.\n\n This attribute can use used, for example, to obtain a flat list of\n comments, with any :class:`.MoreComments` removed:\n\n .. 
code:: python\n\n submission.comments.replace_more(limit=0)\n comments = submission.comments.list()\n\n Sort order and comment limit can be set with the ``comment_sort`` and\n ``comment_limit`` attributes before comments are fetched, including\n any call to :meth:`.replace_more`:\n\n .. code:: python\n\n submission.comment_sort = 'new'\n comments = submission.comments.list()\n\n See :ref:`extracting_comments` for more on working with a\n :class:`.CommentForest`.\n\n \"\"\"\n # This assumes _comments is set so that _fetch is called when it's not.\n return self._comments\n\n @property\n def flair(self):\n \"\"\"Provide an instance of :class:`.SubmissionFlair`.\n\n This attribute is used to work with flair as a regular user of the\n subreddit the submission belongs to. Moderators can directly use\n :meth:`.flair`.\n\n For example, to select an arbitrary editable flair text (assuming there\n is one) and set a custom value try:\n\n .. code:: python\n\n choices = submission.flair.choices()\n template_id = next(x for x in choices\n if x['flair_text_editable'])['flair_template_id']\n submission.flair.select(template_id, 'my custom value')\n\n \"\"\"\n if self._flair is None:\n self._flair = SubmissionFlair(self)\n return self._flair\n\n @property\n def mod(self):\n \"\"\"Provide an instance of :class:`.SubmissionModeration`.\"\"\"\n if self._mod is None:\n self._mod = SubmissionModeration(self)\n return self._mod\n\n @property\n def shortlink(self):\n \"\"\"Return a shortlink to the submission.\n\n For example http://redd.it/eorhm is a shortlink for\n https://www.reddit.com/r/announcements/comments/eorhm/reddit_30_less_typing/.\n\n \"\"\"\n return urljoin(self._reddit.config.short_url, self.id)\n\n def __init__(self, reddit, id=None, # pylint: disable=redefined-builtin\n url=None, _data=None):\n \"\"\"Initialize a Submission instance.\n\n :param reddit: An instance of :class:`~.Reddit`.\n :param id: A reddit base36 submission ID, e.g., ``2gmzqe``.\n :param url: A URL supported by\n :meth:`~praw.models.Submission.id_from_url`.\n\n Either ``id`` or ``url`` can be provided, but not both.\n\n \"\"\"\n if [id, url, _data].count(None) != 2:\n raise TypeError('Exactly one of `id`, `url`, or `_data` must be '\n 'provided.')\n super(Submission, self).__init__(reddit, _data)\n self.comment_limit = 2048\n\n #: Specify the sort order for ``comments``\n self.comment_sort = 'best'\n\n if id is not None:\n self.id = id # pylint: disable=invalid-name\n elif url is not None:\n self.id = self.id_from_url(url)\n self._flair = self._mod = None\n\n self._comments_by_id = {}\n\n def __setattr__(self, attribute, value):\n \"\"\"Objectify author, and subreddit attributes.\"\"\"\n if attribute == 'author':\n value = Redditor.from_data(self._reddit, value)\n elif attribute == 'subreddit':\n value = Subreddit(self._reddit, value)\n super(Submission, self).__setattr__(attribute, value)\n\n def _chunk(self, other_submissions, chunk_size):\n all_submissions = [self.fullname]\n if other_submissions:\n all_submissions += [x.fullname for x in other_submissions]\n\n for position in range(0, len(all_submissions), chunk_size):\n yield ','.join(all_submissions[position:position + 50])\n\n def _fetch(self):\n other, comments = self._reddit.get(self._info_path(),\n params={'limit': self.comment_limit,\n 'sort': self.comment_sort})\n other = other.children[0]\n delattr(other, 'comment_limit')\n delattr(other, 'comment_sort')\n other._comments = CommentForest(self)\n self.__dict__.update(other.__dict__)\n 
self.comments._update(comments.children)\n self._fetched = True\n\n def _info_path(self):\n return API_PATH['submission'].format(id=self.id)\n\n def mark_visited(self):\n \"\"\"Mark submission as visited.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mark_visited()\n\n \"\"\"\n data = {'links': self.fullname}\n self._reddit.post(API_PATH['store_visits'], data=data)\n\n def hide(self, other_submissions=None):\n \"\"\"Hide Submission.\n\n :param other_submissions: When provided, additionally\n hide this list of :class:`.Submission` instances\n as part of a single request (default: None).\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.hide()\n\n See also :meth:`~.unhide`\n\n \"\"\"\n for submissions in self._chunk(other_submissions, 50):\n self._reddit.post(API_PATH['hide'], data={'id': submissions})\n\n def unhide(self, other_submissions=None):\n \"\"\"Unhide Submission.\n\n :param other_submissions: When provided, additionally\n unhide this list of :class:`.Submission` instances\n as part of a single request (default: None).\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.unhide()\n\n See also :meth:`~.hide`\n\n \"\"\"\n for submissions in self._chunk(other_submissions, 50):\n self._reddit.post(API_PATH['unhide'], data={'id': submissions})\n\n def crosspost(self, subreddit, title=None, send_replies=True):\n \"\"\"Crosspost the submission to a subreddit.\n\n :param subreddit: Name of the subreddit or :class:`~.Subreddit`\n object to crosspost into.\n :param title: Title of the submission. Will use this submission's\n title if `None` (default: None).\n :param send_replies: When True, messages will be sent to the\n submission author when comments are made to the submission\n (default: True).\n :returns: A :class:`~.Submission` object for the newly created\n submission.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n cross_post = submission.crosspost(subreddit=\"learnprogramming\",\n send_replies=False)\n\n See also :meth:`~.hide`\n\n \"\"\"\n if title is None:\n title = self.title\n\n data = {'sr': str(subreddit),\n 'title': title,\n 'sendreplies': bool(send_replies),\n 'kind': 'crosspost',\n 'crosspost_fullname': self.fullname}\n return self._reddit.post(API_PATH['submit'], data=data)\n\n\nclass SubmissionFlair(object):\n \"\"\"Provide a set of functions pertaining to Submission flair.\"\"\"\n\n def __init__(self, submission):\n \"\"\"Create a SubmissionFlair instance.\n\n :param submission: The submission associated with the flair functions.\n\n \"\"\"\n self.submission = submission\n\n def choices(self):\n \"\"\"Return list of available flair choices.\n\n Choices are required in order to use :meth:`.select`.\n\n Example:\n\n .. code:: python\n\n choices = submission.flair.choices()\n\n \"\"\"\n url = API_PATH['flairselector'].format(\n subreddit=self.submission.subreddit)\n return self.submission._reddit.post(url, data={\n 'link': self.submission.fullname})['choices']\n\n def select(self, flair_template_id, text=None):\n \"\"\"Select flair for submission.\n\n :param flair_template_id: The flair template to select. 
The possible\n ``flair_template_id`` values can be discovered through\n :meth:`.choices`.\n :param text: If the template's ``flair_text_editable`` value is True,\n this value will set a custom text (default: None).\n\n For example, to select an arbitrary editable flair text (assuming there\n is one) and set a custom value try:\n\n .. code:: python\n\n choices = submission.flair.choices()\n template_id = next(x for x in choices\n if x['flair_text_editable'])['flair_template_id']\n submission.flair.select(template_id, 'my custom value')\n\n \"\"\"\n data = {'flair_template_id': flair_template_id,\n 'link': self.submission.fullname, 'text': text}\n url = API_PATH['select_flair'].format(\n subreddit=self.submission.subreddit)\n self.submission._reddit.post(url, data=data)\n\n\nclass SubmissionModeration(ThingModerationMixin):\n \"\"\"Provide a set of functions pertaining to Submission moderation.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id=\"8dmv8z\")\n submission.mod.approve()\n\n \"\"\"\n\n def __init__(self, submission):\n \"\"\"Create a SubmissionModeration instance.\n\n :param submission: The submission to moderate.\n\n \"\"\"\n self.thing = submission\n\n def contest_mode(self, state=True):\n \"\"\"Set contest mode for the comments of this submission.\n\n :param state: (boolean) True enables contest mode, False, disables\n (default: True).\n\n Contest mode have the following effects:\n * The comment thread will default to being sorted randomly.\n * Replies to top-level comments will be hidden behind\n \"[show replies]\" buttons.\n * Scores will be hidden from non-moderators.\n * Scores accessed through the API (mobile apps, bots) will be\n obscured to \"1\" for non-moderators.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.contest_mode(state=True)\n\n \"\"\"\n self.thing._reddit.post(API_PATH['contest_mode'], data={\n 'id': self.thing.fullname, 'state': state})\n\n def flair(self, text='', css_class=''):\n \"\"\"Set flair for the submission.\n\n :param text: The flair text to associate with the Submission (default:\n '').\n :param css_class: The css class to associate with the flair html\n (default: '').\n\n This method can only be used by an authenticated user who is a\n moderator of the Submission's Subreddit.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.flair(text='PRAW', css_class='bot')\n\n \"\"\"\n data = {'css_class': css_class, 'link': self.thing.fullname,\n 'text': text}\n url = API_PATH['flair'].format(subreddit=self.thing.subreddit)\n self.thing._reddit.post(url, data=data)\n\n def lock(self):\n \"\"\"Lock the submission.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.lock()\n\n See also :meth:`~.unlock`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['lock'],\n data={'id': self.thing.fullname})\n\n def nsfw(self):\n \"\"\"Mark as not safe for work.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. 
code:: python\n\n submission = reddit.subreddit('test').submit('nsfw test',\n selftext='nsfw')\n submission.mod.nsfw()\n\n See also :meth:`~.sfw`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['marknsfw'],\n data={'id': self.thing.fullname})\n\n def sfw(self):\n \"\"\"Mark as safe for work.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.sfw()\n\n See also :meth:`~.nsfw`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unmarknsfw'],\n data={'id': self.thing.fullname})\n\n def spoiler(self):\n \"\"\"Indicate that the submission contains spoilers.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.spoiler()\n\n See also :meth:`~.unspoiler`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['spoiler'],\n data={'id': self.thing.fullname})\n\n def sticky(self, state=True, bottom=True):\n \"\"\"Set the submission's sticky state in its subreddit.\n\n :param state: (boolean) True sets the sticky for the submission, false\n unsets (default: True).\n :param bottom: (boolean) When true, set the submission as the bottom\n sticky. If no top sticky exists, this submission will become the\n top sticky regardless (default: True).\n\n This submission will replace an existing stickied submission if one\n exists.\n\n Example:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.sticky()\n\n \"\"\"\n data = {'id': self.thing.fullname, 'state': state}\n if not bottom:\n data['num'] = 1\n return self.thing._reddit.post(API_PATH['sticky_submission'],\n data=data)\n\n def suggested_sort(self, sort='blank'):\n \"\"\"Set the suggested sort for the comments of the submission.\n\n :param sort: Can be one of: confidence, top, new, controversial, old,\n random, qa, blank (default: blank).\n\n \"\"\"\n self.thing._reddit.post(API_PATH['suggested_sort'], data={\n 'id': self.thing.fullname, 'sort': sort})\n\n def unlock(self):\n \"\"\"Unlock the submission.\n\n Example:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.unlock()\n\n See also :meth:`~.lock`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unlock'],\n data={'id': self.thing.fullname})\n\n def unspoiler(self):\n \"\"\"Indicate that the submission does not contain spoilers.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example:\n\n .. code:: python\n\n submission = reddit.subreddit('test').submit('not spoiler',\n selftext='spoiler')\n submission.mod.unspoiler()\n\n See also :meth:`~.spoiler`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unspoiler'],\n data={'id': self.thing.fullname})\n\n\nSubreddit._submission_class = Submission\n",
"path": "praw/models/reddit/submission.py"
}
] | [
{
"content": "\"\"\"Provide the Submission class.\"\"\"\nfrom ...const import API_PATH, urljoin\nfrom ...exceptions import ClientException\nfrom ..comment_forest import CommentForest\nfrom ..listing.mixins import SubmissionListingMixin\nfrom .base import RedditBase\nfrom .mixins import ThingModerationMixin, UserContentMixin\nfrom .redditor import Redditor\nfrom .subreddit import Subreddit\n\n\nclass Submission(RedditBase, SubmissionListingMixin, UserContentMixin):\n \"\"\"A class for submissions to reddit.\n\n **Typical Attributes**\n\n This table describes attributes that typically belong to objects of this\n class. Since attributes are dynamically provided (see\n :ref:`determine-available-attributes-of-an-object`), there is not a\n guarantee that these attributes will always be present, nor is this list\n comprehensive in any way.\n\n ======================== ==================================================\n Attribute Description\n ======================== ==================================================\n ``author`` Provides an instance of :class:`.Redditor`.\n ``clicked`` Whether or not the submission has been clicked by\n the client.\n ``comments`` Provides an instance of :class:`.CommentForest`.\n ``created_utc`` Time the submission was created, represented in\n `Unix Time`_.\n ``distinguished`` Whether or not the submission is distinguished.\n ``edited`` Whether or not the submission has been edited.\n ``id`` The ID of the submission.\n ``is_video`` Whether or not the submission is a Reddit-hosted\n video.\n ``link_flair_css_class`` The CSS class for the submissions' flair.\n ``link_flair_text`` The flair text for the submissions' flair.\n ``locked`` Whether or not the submission has been locked.\n ``num_comments`` The number of comments on the submission.\n ``over_18`` Whether or not the submission has been marked as\n NSFW.\n ``permalink`` A permalink for the submission.\n ``score`` The number of upvotes for the submission.\n ``selftext`` The submissions' selftext.\n ``stickied`` Whether or not the submission is stickied.\n ``subreddit`` Provides an instance of :class:`.Subreddit`.\n ``subreddit_id`` The ID of the subreddit that the submission\n belongs to.\n ``title`` The title of the submission.\n ``upvote_ratio`` The percentage of upvotes from all votes on the\n submission.\n ======================== ==================================================\n\n\n .. _Unix Time: https://en.wikipedia.org/wiki/Unix_time\n\n \"\"\"\n\n STR_FIELD = 'id'\n\n @staticmethod\n def id_from_url(url):\n \"\"\"Return the ID contained within a submission URL.\n\n :param url: A url to a submission in one of the following formats (http\n urls will also work):\n * https://redd.it/2gmzqe\n * https://reddit.com/comments/2gmzqe/\n * https://www.reddit.com/r/redditdev/comments/2gmzqe/praw_https/\n\n Raise :class:`.ClientException` if URL is not a valid submission URL.\n\n \"\"\"\n parts = RedditBase._url_parts(url)\n if 'comments' not in parts:\n submission_id = parts[-1]\n if 'r' in parts:\n raise ClientException('Invalid URL (subreddit, '\n 'not submission): {}'.format(url))\n else:\n submission_id = parts[parts.index('comments') + 1]\n\n if not submission_id.isalnum():\n raise ClientException('Invalid URL: {}'.format(url))\n return submission_id\n\n @property\n def comments(self):\n \"\"\"Provide an instance of :class:`.CommentForest`.\n\n This attribute can use used, for example, to obtain a flat list of\n comments, with any :class:`.MoreComments` removed:\n\n .. 
code:: python\n\n submission.comments.replace_more(limit=0)\n comments = submission.comments.list()\n\n Sort order and comment limit can be set with the ``comment_sort`` and\n ``comment_limit`` attributes before comments are fetched, including\n any call to :meth:`.replace_more`:\n\n .. code:: python\n\n submission.comment_sort = 'new'\n comments = submission.comments.list()\n\n See :ref:`extracting_comments` for more on working with a\n :class:`.CommentForest`.\n\n \"\"\"\n # This assumes _comments is set so that _fetch is called when it's not.\n return self._comments\n\n @property\n def flair(self):\n \"\"\"Provide an instance of :class:`.SubmissionFlair`.\n\n This attribute is used to work with flair as a regular user of the\n subreddit the submission belongs to. Moderators can directly use\n :meth:`.flair`.\n\n For example, to select an arbitrary editable flair text (assuming there\n is one) and set a custom value try:\n\n .. code:: python\n\n choices = submission.flair.choices()\n template_id = next(x for x in choices\n if x['flair_text_editable'])['flair_template_id']\n submission.flair.select(template_id, 'my custom value')\n\n \"\"\"\n if self._flair is None:\n self._flair = SubmissionFlair(self)\n return self._flair\n\n @property\n def mod(self):\n \"\"\"Provide an instance of :class:`.SubmissionModeration`.\"\"\"\n if self._mod is None:\n self._mod = SubmissionModeration(self)\n return self._mod\n\n @property\n def shortlink(self):\n \"\"\"Return a shortlink to the submission.\n\n For example http://redd.it/eorhm is a shortlink for\n https://www.reddit.com/r/announcements/comments/eorhm/reddit_30_less_typing/.\n\n \"\"\"\n return urljoin(self._reddit.config.short_url, self.id)\n\n def __init__(self, reddit, id=None, # pylint: disable=redefined-builtin\n url=None, _data=None):\n \"\"\"Initialize a Submission instance.\n\n :param reddit: An instance of :class:`~.Reddit`.\n :param id: A reddit base36 submission ID, e.g., ``2gmzqe``.\n :param url: A URL supported by\n :meth:`~praw.models.Submission.id_from_url`.\n\n Either ``id`` or ``url`` can be provided, but not both.\n\n \"\"\"\n if [id, url, _data].count(None) != 2:\n raise TypeError('Exactly one of `id`, `url`, or `_data` must be '\n 'provided.')\n super(Submission, self).__init__(reddit, _data)\n self.comment_limit = 2048\n\n #: Specify the sort order for ``comments``\n self.comment_sort = 'best'\n\n if id is not None:\n self.id = id # pylint: disable=invalid-name\n elif url is not None:\n self.id = self.id_from_url(url)\n self._flair = self._mod = None\n\n self._comments_by_id = {}\n\n def __setattr__(self, attribute, value):\n \"\"\"Objectify author, and subreddit attributes.\"\"\"\n if attribute == 'author':\n value = Redditor.from_data(self._reddit, value)\n elif attribute == 'subreddit':\n value = Subreddit(self._reddit, value)\n super(Submission, self).__setattr__(attribute, value)\n\n def _chunk(self, other_submissions, chunk_size):\n all_submissions = [self.fullname]\n if other_submissions:\n all_submissions += [x.fullname for x in other_submissions]\n\n for position in range(0, len(all_submissions), chunk_size):\n yield ','.join(all_submissions[position:position + 50])\n\n def _fetch(self):\n other, comments = self._reddit.get(self._info_path(),\n params={'limit': self.comment_limit,\n 'sort': self.comment_sort})\n other = other.children[0]\n delattr(other, 'comment_limit')\n delattr(other, 'comment_sort')\n other._comments = CommentForest(self)\n self.__dict__.update(other.__dict__)\n 
self.comments._update(comments.children)\n self._fetched = True\n\n def _info_path(self):\n return API_PATH['submission'].format(id=self.id)\n\n def mark_visited(self):\n \"\"\"Mark submission as visited.\n\n This method requires a subscription to reddit premium.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mark_visited()\n\n \"\"\"\n data = {'links': self.fullname}\n self._reddit.post(API_PATH['store_visits'], data=data)\n\n def hide(self, other_submissions=None):\n \"\"\"Hide Submission.\n\n :param other_submissions: When provided, additionally\n hide this list of :class:`.Submission` instances\n as part of a single request (default: None).\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.hide()\n\n See also :meth:`~.unhide`\n\n \"\"\"\n for submissions in self._chunk(other_submissions, 50):\n self._reddit.post(API_PATH['hide'], data={'id': submissions})\n\n def unhide(self, other_submissions=None):\n \"\"\"Unhide Submission.\n\n :param other_submissions: When provided, additionally\n unhide this list of :class:`.Submission` instances\n as part of a single request (default: None).\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.unhide()\n\n See also :meth:`~.hide`\n\n \"\"\"\n for submissions in self._chunk(other_submissions, 50):\n self._reddit.post(API_PATH['unhide'], data={'id': submissions})\n\n def crosspost(self, subreddit, title=None, send_replies=True):\n \"\"\"Crosspost the submission to a subreddit.\n\n :param subreddit: Name of the subreddit or :class:`~.Subreddit`\n object to crosspost into.\n :param title: Title of the submission. Will use this submission's\n title if `None` (default: None).\n :param send_replies: When True, messages will be sent to the\n submission author when comments are made to the submission\n (default: True).\n :returns: A :class:`~.Submission` object for the newly created\n submission.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n cross_post = submission.crosspost(subreddit=\"learnprogramming\",\n send_replies=False)\n\n See also :meth:`~.hide`\n\n \"\"\"\n if title is None:\n title = self.title\n\n data = {'sr': str(subreddit),\n 'title': title,\n 'sendreplies': bool(send_replies),\n 'kind': 'crosspost',\n 'crosspost_fullname': self.fullname}\n return self._reddit.post(API_PATH['submit'], data=data)\n\n\nclass SubmissionFlair(object):\n \"\"\"Provide a set of functions pertaining to Submission flair.\"\"\"\n\n def __init__(self, submission):\n \"\"\"Create a SubmissionFlair instance.\n\n :param submission: The submission associated with the flair functions.\n\n \"\"\"\n self.submission = submission\n\n def choices(self):\n \"\"\"Return list of available flair choices.\n\n Choices are required in order to use :meth:`.select`.\n\n Example:\n\n .. code:: python\n\n choices = submission.flair.choices()\n\n \"\"\"\n url = API_PATH['flairselector'].format(\n subreddit=self.submission.subreddit)\n return self.submission._reddit.post(url, data={\n 'link': self.submission.fullname})['choices']\n\n def select(self, flair_template_id, text=None):\n \"\"\"Select flair for submission.\n\n :param flair_template_id: The flair template to select. 
The possible\n ``flair_template_id`` values can be discovered through\n :meth:`.choices`.\n :param text: If the template's ``flair_text_editable`` value is True,\n this value will set a custom text (default: None).\n\n For example, to select an arbitrary editable flair text (assuming there\n is one) and set a custom value try:\n\n .. code:: python\n\n choices = submission.flair.choices()\n template_id = next(x for x in choices\n if x['flair_text_editable'])['flair_template_id']\n submission.flair.select(template_id, 'my custom value')\n\n \"\"\"\n data = {'flair_template_id': flair_template_id,\n 'link': self.submission.fullname, 'text': text}\n url = API_PATH['select_flair'].format(\n subreddit=self.submission.subreddit)\n self.submission._reddit.post(url, data=data)\n\n\nclass SubmissionModeration(ThingModerationMixin):\n \"\"\"Provide a set of functions pertaining to Submission moderation.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id=\"8dmv8z\")\n submission.mod.approve()\n\n \"\"\"\n\n def __init__(self, submission):\n \"\"\"Create a SubmissionModeration instance.\n\n :param submission: The submission to moderate.\n\n \"\"\"\n self.thing = submission\n\n def contest_mode(self, state=True):\n \"\"\"Set contest mode for the comments of this submission.\n\n :param state: (boolean) True enables contest mode, False, disables\n (default: True).\n\n Contest mode have the following effects:\n * The comment thread will default to being sorted randomly.\n * Replies to top-level comments will be hidden behind\n \"[show replies]\" buttons.\n * Scores will be hidden from non-moderators.\n * Scores accessed through the API (mobile apps, bots) will be\n obscured to \"1\" for non-moderators.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.contest_mode(state=True)\n\n \"\"\"\n self.thing._reddit.post(API_PATH['contest_mode'], data={\n 'id': self.thing.fullname, 'state': state})\n\n def flair(self, text='', css_class=''):\n \"\"\"Set flair for the submission.\n\n :param text: The flair text to associate with the Submission (default:\n '').\n :param css_class: The css class to associate with the flair html\n (default: '').\n\n This method can only be used by an authenticated user who is a\n moderator of the Submission's Subreddit.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.flair(text='PRAW', css_class='bot')\n\n \"\"\"\n data = {'css_class': css_class, 'link': self.thing.fullname,\n 'text': text}\n url = API_PATH['flair'].format(subreddit=self.thing.subreddit)\n self.thing._reddit.post(url, data=data)\n\n def lock(self):\n \"\"\"Lock the submission.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.lock()\n\n See also :meth:`~.unlock`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['lock'],\n data={'id': self.thing.fullname})\n\n def nsfw(self):\n \"\"\"Mark as not safe for work.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. 
code:: python\n\n submission = reddit.subreddit('test').submit('nsfw test',\n selftext='nsfw')\n submission.mod.nsfw()\n\n See also :meth:`~.sfw`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['marknsfw'],\n data={'id': self.thing.fullname})\n\n def sfw(self):\n \"\"\"Mark as safe for work.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.sfw()\n\n See also :meth:`~.nsfw`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unmarknsfw'],\n data={'id': self.thing.fullname})\n\n def spoiler(self):\n \"\"\"Indicate that the submission contains spoilers.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example usage:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.spoiler()\n\n See also :meth:`~.unspoiler`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['spoiler'],\n data={'id': self.thing.fullname})\n\n def sticky(self, state=True, bottom=True):\n \"\"\"Set the submission's sticky state in its subreddit.\n\n :param state: (boolean) True sets the sticky for the submission, false\n unsets (default: True).\n :param bottom: (boolean) When true, set the submission as the bottom\n sticky. If no top sticky exists, this submission will become the\n top sticky regardless (default: True).\n\n This submission will replace an existing stickied submission if one\n exists.\n\n Example:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.sticky()\n\n \"\"\"\n data = {'id': self.thing.fullname, 'state': state}\n if not bottom:\n data['num'] = 1\n return self.thing._reddit.post(API_PATH['sticky_submission'],\n data=data)\n\n def suggested_sort(self, sort='blank'):\n \"\"\"Set the suggested sort for the comments of the submission.\n\n :param sort: Can be one of: confidence, top, new, controversial, old,\n random, qa, blank (default: blank).\n\n \"\"\"\n self.thing._reddit.post(API_PATH['suggested_sort'], data={\n 'id': self.thing.fullname, 'sort': sort})\n\n def unlock(self):\n \"\"\"Unlock the submission.\n\n Example:\n\n .. code:: python\n\n submission = reddit.submission(id='5or86n')\n submission.mod.unlock()\n\n See also :meth:`~.lock`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unlock'],\n data={'id': self.thing.fullname})\n\n def unspoiler(self):\n \"\"\"Indicate that the submission does not contain spoilers.\n\n This method can be used both by the submission author and moderators of\n the subreddit that the submission belongs to.\n\n Example:\n\n .. code:: python\n\n submission = reddit.subreddit('test').submit('not spoiler',\n selftext='spoiler')\n submission.mod.unspoiler()\n\n See also :meth:`~.spoiler`\n\n \"\"\"\n self.thing._reddit.post(API_PATH['unspoiler'],\n data={'id': self.thing.fullname})\n\n\nSubreddit._submission_class = Submission\n",
"path": "praw/models/reddit/submission.py"
}
] | diff --git a/praw/models/reddit/submission.py b/praw/models/reddit/submission.py
index f987186a8..fe75d0a4e 100644
--- a/praw/models/reddit/submission.py
+++ b/praw/models/reddit/submission.py
@@ -216,6 +216,8 @@ def _info_path(self):
def mark_visited(self):
"""Mark submission as visited.
+ This method requires a subscription to reddit premium.
+
Example usage:
.. code:: python
|
python-telegram-bot__python-telegram-bot-1063 | User.full_name doesn't handle non-ASCII (in Python 2?)
### Steps to reproduce
```python
updater = ext.Updater(token=settings.telegram_token())
def F(bot, update):
user = update.effective_user
print repr(user.first_name), repr(user.last_name)
print '%s %s' % (user.first_name, user.last_name)
print user.full_name
updater.dispatcher.add_handler(ext.MessageHandler(0, F))
updater.start_polling()
updater.idle()
```
### Expected behaviour
```
u'Dan\u2022iel' u'Reed'
Dan•iel Reed
Dan•iel Reed
```
### Actual behaviour
```
u'Dan\u2022iel' u'Reed'
Dan•iel Reed
ERROR dispatcher.py:301] An uncaught error was raised while processing the update
Traceback (most recent call last):
File "local/lib/python2.7/site-packages/telegram/ext/dispatcher.py", line 279, in process_update
handler.handle_update(update, self)
File "local/lib/python2.7/site-packages/telegram/ext/messagehandler.py", line 169, in handle_update
return self.callback(dispatcher.bot, update, **optional_args)
File "<stdin>", line 5, in F
File "local/lib/python2.7/site-packages/telegram/user.py", line 91, in full_name
return '{} {}'.format(self.first_name, self.last_name)
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2022' in position 3: ordinal not in range(128)
```
### Configuration
**Operating System:**
**Version of Python, python-telegram-bot & dependencies:**
```
python-telegram-bot 10.0.1
certifi 2018.01.18
future 0.16.0
Python 2.7.14 (default, Sep 23 2017, 22:06:14) [GCC 7.2.0]
```
I'm a little rushed, but this works for me:
```python
@property
def full_name(self):
"""
:obj:`str`: Convenience property. The user's :attr:`first_name`, followed by (if available)
:attr:`last_name`.
"""
if self.last_name:
! return u'{} {}'.format(self.first_name, self.last_name)
return self.first_name
```
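
For illustration, here is a minimal sketch of the failure mode (assuming Python 2; the variable names are just placeholders, not part of the library): `str.format` on a byte-string template tries to encode unicode arguments with the default ASCII codec, which is exactly the `UnicodeEncodeError` in the traceback above, while a `u''` template keeps the result as unicode.

```python
# Minimal sketch, Python 2 semantics: a byte-string template forces an
# implicit ASCII encode of unicode arguments; a unicode template does not.
first, last = u'Dan\u2022iel', u'Reed'

try:
    '{} {}'.format(first, last)            # str template -> UnicodeEncodeError
except UnicodeEncodeError as exc:
    print(repr(exc))

print(repr(u'{} {}'.format(first, last)))  # -> u'Dan\u2022iel Reed'
```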
| [
{
"content": "#!/usr/bin/env python\n# pylint: disable=C0103,W0622\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram User.\"\"\"\n\nfrom telegram import TelegramObject\nfrom telegram.utils.helpers import mention_html as util_mention_html\nfrom telegram.utils.helpers import mention_markdown as util_mention_markdown\n\n\nclass User(TelegramObject):\n \"\"\"This object represents a Telegram user or bot.\n\n Attributes:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`): Optional. User's or bot's last name.\n username (:obj:`str`): Optional. User's or bot's username.\n language_code (:obj:`str`): Optional. IETF language tag of the user's language.\n bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.\n\n Args:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`, optional): User's or bot's last name.\n username (:obj:`str`, optional): User's or bot's username.\n language_code (:obj:`str`, optional): IETF language tag of the user's language.\n bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.\n\n \"\"\"\n\n def __init__(self,\n id,\n first_name,\n is_bot,\n last_name=None,\n username=None,\n language_code=None,\n bot=None,\n **kwargs):\n # Required\n self.id = int(id)\n self.first_name = first_name\n self.is_bot = is_bot\n # Optionals\n self.last_name = last_name\n self.username = username\n self.language_code = language_code\n\n self.bot = bot\n\n self._id_attrs = (self.id,)\n\n @property\n def name(self):\n \"\"\"\n :obj:`str`: Convenience property. If available, returns the user's :attr:`username`\n prefixed with \"@\". If :attr:`username` is not available, returns :attr:`full_name`.\n\n \"\"\"\n if self.username:\n return '@{}'.format(self.username)\n return self.full_name\n\n @property\n def full_name(self):\n \"\"\"\n :obj:`str`: Convenience property. 
The user's :attr:`first_name`, followed by (if available)\n :attr:`last_name`.\n\n \"\"\"\n if self.last_name:\n return '{} {}'.format(self.first_name, self.last_name)\n return self.first_name\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n data = super(User, cls).de_json(data, bot)\n\n return cls(bot=bot, **data)\n\n def get_profile_photos(self, *args, **kwargs):\n \"\"\"\n Shortcut for::\n\n bot.get_user_profile_photos(update.message.from_user.id, *args, **kwargs)\n\n \"\"\"\n\n return self.bot.get_user_profile_photos(self.id, *args, **kwargs)\n\n @classmethod\n def de_list(cls, data, bot):\n if not data:\n return []\n\n users = list()\n for user in data:\n users.append(cls.de_json(user, bot))\n\n return users\n\n def mention_markdown(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): If provided, will overwrite the user's name.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown.\n \"\"\"\n if not name:\n return util_mention_markdown(self.id, self.name)\n else:\n return util_mention_markdown(self.id, name)\n\n def mention_html(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): If provided, will overwrite the user's name.\n\n Returns:\n :obj:`str`: The inline mention for the user as HTML.\n \"\"\"\n if not name:\n return util_mention_html(self.id, self.name)\n else:\n return util_mention_html(self.id, name)\n\n def send_message(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_message(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_message(self.id, *args, **kwargs)\n\n def send_photo(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_photo(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_photo(self.id, *args, **kwargs)\n\n def send_audio(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_audio(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_audio(self.id, *args, **kwargs)\n\n def send_document(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_document(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_document(self.id, *args, **kwargs)\n\n def send_sticker(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_sticker(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_sticker(self.id, *args, **kwargs)\n\n def send_video(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_video(self.id, *args, **kwargs)\n\n def send_video_note(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video_note(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance 
representing the message posted.\n\n \"\"\"\n return self.bot.send_video_note(self.id, *args, **kwargs)\n\n def send_voice(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_voice(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_voice(self.id, *args, **kwargs)\n",
"path": "telegram/user.py"
}
] | [
{
"content": "#!/usr/bin/env python\n# pylint: disable=C0103,W0622\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains an object that represents a Telegram User.\"\"\"\n\nfrom telegram import TelegramObject\nfrom telegram.utils.helpers import mention_html as util_mention_html\nfrom telegram.utils.helpers import mention_markdown as util_mention_markdown\n\n\nclass User(TelegramObject):\n \"\"\"This object represents a Telegram user or bot.\n\n Attributes:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`): Optional. User's or bot's last name.\n username (:obj:`str`): Optional. User's or bot's username.\n language_code (:obj:`str`): Optional. IETF language tag of the user's language.\n bot (:class:`telegram.Bot`): Optional. The Bot to use for instance methods.\n\n Args:\n id (:obj:`int`): Unique identifier for this user or bot.\n is_bot (:obj:`bool`): True, if this user is a bot\n first_name (:obj:`str`): User's or bot's first name.\n last_name (:obj:`str`, optional): User's or bot's last name.\n username (:obj:`str`, optional): User's or bot's username.\n language_code (:obj:`str`, optional): IETF language tag of the user's language.\n bot (:class:`telegram.Bot`, optional): The Bot to use for instance methods.\n\n \"\"\"\n\n def __init__(self,\n id,\n first_name,\n is_bot,\n last_name=None,\n username=None,\n language_code=None,\n bot=None,\n **kwargs):\n # Required\n self.id = int(id)\n self.first_name = first_name\n self.is_bot = is_bot\n # Optionals\n self.last_name = last_name\n self.username = username\n self.language_code = language_code\n\n self.bot = bot\n\n self._id_attrs = (self.id,)\n\n @property\n def name(self):\n \"\"\"\n :obj:`str`: Convenience property. If available, returns the user's :attr:`username`\n prefixed with \"@\". If :attr:`username` is not available, returns :attr:`full_name`.\n\n \"\"\"\n if self.username:\n return '@{}'.format(self.username)\n return self.full_name\n\n @property\n def full_name(self):\n \"\"\"\n :obj:`str`: Convenience property. 
The user's :attr:`first_name`, followed by (if available)\n :attr:`last_name`.\n\n \"\"\"\n if self.last_name:\n return u'{} {}'.format(self.first_name, self.last_name)\n return self.first_name\n\n @classmethod\n def de_json(cls, data, bot):\n if not data:\n return None\n\n data = super(User, cls).de_json(data, bot)\n\n return cls(bot=bot, **data)\n\n def get_profile_photos(self, *args, **kwargs):\n \"\"\"\n Shortcut for::\n\n bot.get_user_profile_photos(update.message.from_user.id, *args, **kwargs)\n\n \"\"\"\n\n return self.bot.get_user_profile_photos(self.id, *args, **kwargs)\n\n @classmethod\n def de_list(cls, data, bot):\n if not data:\n return []\n\n users = list()\n for user in data:\n users.append(cls.de_json(user, bot))\n\n return users\n\n def mention_markdown(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): If provided, will overwrite the user's name.\n\n Returns:\n :obj:`str`: The inline mention for the user as markdown.\n \"\"\"\n if not name:\n return util_mention_markdown(self.id, self.name)\n else:\n return util_mention_markdown(self.id, name)\n\n def mention_html(self, name=None):\n \"\"\"\n Args:\n name (:obj:`str`): If provided, will overwrite the user's name.\n\n Returns:\n :obj:`str`: The inline mention for the user as HTML.\n \"\"\"\n if not name:\n return util_mention_html(self.id, self.name)\n else:\n return util_mention_html(self.id, name)\n\n def send_message(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_message(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_message(self.id, *args, **kwargs)\n\n def send_photo(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_photo(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_photo(self.id, *args, **kwargs)\n\n def send_audio(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_audio(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_audio(self.id, *args, **kwargs)\n\n def send_document(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_document(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_document(self.id, *args, **kwargs)\n\n def send_sticker(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_sticker(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_sticker(self.id, *args, **kwargs)\n\n def send_video(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_video(self.id, *args, **kwargs)\n\n def send_video_note(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_video_note(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance 
representing the message posted.\n\n \"\"\"\n return self.bot.send_video_note(self.id, *args, **kwargs)\n\n def send_voice(self, *args, **kwargs):\n \"\"\"Shortcut for::\n\n bot.send_voice(User.chat_id, *args, **kwargs)\n\n Where User is the current instance.\n\n Returns:\n :class:`telegram.Message`: On success, instance representing the message posted.\n\n \"\"\"\n return self.bot.send_voice(self.id, *args, **kwargs)\n",
"path": "telegram/user.py"
}
] | diff --git a/telegram/user.py b/telegram/user.py
index 018dde25ee6..a407888bde9 100644
--- a/telegram/user.py
+++ b/telegram/user.py
@@ -88,7 +88,7 @@ def full_name(self):
"""
if self.last_name:
- return '{} {}'.format(self.first_name, self.last_name)
+ return u'{} {}'.format(self.first_name, self.last_name)
return self.first_name
@classmethod
diff --git a/tests/test_user.py b/tests/test_user.py
index 67975ca0bc4..216ed33d88d 100644
--- a/tests/test_user.py
+++ b/tests/test_user.py
@@ -42,8 +42,8 @@ def user(bot):
class TestUser(object):
id = 1
is_bot = True
- first_name = 'first_name'
- last_name = 'last_name'
+ first_name = u'first\u2022name'
+ last_name = u'last\u2022name'
username = 'username'
language_code = 'en_us'
@@ -85,16 +85,16 @@ def test_de_json_without_username_and_last_name(self, json_dict, bot):
def test_name(self, user):
assert user.name == '@username'
user.username = None
- assert user.name == 'first_name last_name'
+ assert user.name == u'first\u2022name last\u2022name'
user.last_name = None
- assert user.name == 'first_name'
+ assert user.name == u'first\u2022name'
user.username = self.username
assert user.name == '@username'
def test_full_name(self, user):
- assert user.full_name == 'first_name last_name'
+ assert user.full_name == u'first\u2022name last\u2022name'
user.last_name = None
- assert user.full_name == 'first_name'
+ assert user.full_name == u'first\u2022name'
def test_get_profile_photos(self, monkeypatch, user):
def test(_, *args, **kwargs):
|
ansible-collections__community.vmware-1030 | Documentation fix needed in community.vmware.vsphere_file module
##### SUMMARY
The **community.vmware.vsphere_file** module documentation contains a task named _Query a file on a datastore_, which is meant to return information about a file that already exists on the datastore. However, the documented example uses **state: touch**, which creates a new blank file on vSphere rather than querying an existing one. To query a file, the state attribute value should be **state: file**, not **touch**.

Corrected code:
  - name: Query a file on a datastore
    community.vmware.vsphere_file:
      host: '{{ vhost }}'
      username: '{{ vuser }}'
      password: '{{ vpass }}'
      datacenter: DC1 Someplace
      datastore: datastore1
      path: some/remote/file
      **state: file**
    delegate_to: localhost
    ignore_errors: true
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
community.vmware.vsphere_file
##### ANSIBLE VERSION
```
ansible 2.10.9
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.8 (default, Aug 13 2020, 07:46:32) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
| [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2017, Dag Wieers (@dagwieers) <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: vsphere_file\nshort_description: Manage files on a vCenter datastore\ndescription:\n- Manage files on a vCenter datastore.\nauthor:\n- Dag Wieers (@dagwieers)\noptions:\n host:\n description:\n - The vCenter server on which the datastore is available.\n type: str\n required: true\n aliases: [ hostname ]\n username:\n description:\n - The user name to authenticate on the vCenter server.\n type: str\n required: true\n password:\n description:\n - The password to authenticate on the vCenter server.\n type: str\n required: true\n datacenter:\n description:\n - The datacenter on the vCenter server that holds the datastore.\n type: str\n required: true\n datastore:\n description:\n - The datastore on the vCenter server to push files to.\n type: str\n required: true\n path:\n description:\n - The file or directory on the datastore on the vCenter server.\n type: str\n required: true\n aliases: [ dest ]\n validate_certs:\n description:\n - If C(false), SSL certificates will not be validated. This should only be\n set to C(false) when no other option exists.\n type: bool\n default: true\n timeout:\n description:\n - The timeout in seconds for the upload to the datastore.\n type: int\n default: 10\n state:\n description:\n - The state of or the action on the provided path.\n - If C(absent), the file will be removed.\n - If C(directory), the directory will be created.\n - If C(file), more information of the (existing) file will be returned.\n - If C(touch), an empty file will be created if the path does not exist.\n type: str\n choices: [ absent, directory, file, touch ]\n default: file\nnotes:\n- The vSphere folder API does not allow to remove directory objects.\n'''\n\nEXAMPLES = r'''\n- name: Create an empty file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC1 Someplace\n datastore: datastore1\n path: some/remote/file\n state: touch\n delegate_to: localhost\n\n- name: Create a directory on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC2 Someplace\n datastore: datastore2\n path: other/remote/file\n state: directory\n delegate_to: localhost\n\n- name: Query a file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC1 Someplace\n datastore: datastore1\n path: some/remote/file\n state: touch\n delegate_to: localhost\n ignore_errors: true\n\n- name: Delete a file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC2 Someplace\n datastore: datastore2\n path: other/remote/file\n state: absent\n delegate_to: localhost\n'''\n\nRETURN = r'''\n'''\n\nimport socket\nimport sys\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.six import PY2\nfrom ansible.module_utils.six.moves.urllib.error import HTTPError\nfrom ansible.module_utils.six.moves.urllib.parse import quote, urlencode\nfrom ansible.module_utils.urls import open_url\nfrom ansible.module_utils._text import to_native\n\n\ndef 
vmware_path(datastore, datacenter, path):\n ''' Constructs a URL path that VSphere accepts reliably '''\n path = '/folder/{path}'.format(path=quote(path.strip('/')))\n # Due to a software bug in vSphere, it fails to handle ampersand in datacenter names\n # The solution is to do what vSphere does (when browsing) and double-encode ampersands, maybe others ?\n datacenter = datacenter.replace('&', '%26')\n if not path.startswith('/'):\n path = '/' + path\n params = dict(dsName=datastore)\n if datacenter:\n params['dcPath'] = datacenter\n return '{0}?{1}'.format(path, urlencode(params))\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec=dict(\n host=dict(type='str', required=True, aliases=['hostname']),\n username=dict(type='str', required=True),\n password=dict(type='str', required=True, no_log=True),\n datacenter=dict(type='str', required=True),\n datastore=dict(type='str', required=True),\n path=dict(type='str', required=True, aliases=['dest']),\n state=dict(type='str', default='file', choices=['absent', 'directory', 'file', 'touch']),\n timeout=dict(type='int', default=10),\n validate_certs=dict(type='bool', default=True),\n ),\n supports_check_mode=True,\n )\n\n host = module.params.get('host')\n username = module.params.get('username')\n password = module.params.get('password')\n datacenter = module.params.get('datacenter')\n datastore = module.params.get('datastore')\n path = module.params.get('path')\n validate_certs = module.params.get('validate_certs')\n timeout = module.params.get('timeout')\n state = module.params.get('state')\n\n remote_path = vmware_path(datastore, datacenter, path)\n url = 'https://%s%s' % (host, remote_path)\n\n result = dict(\n path=path,\n size=None,\n state=state,\n status=None,\n url=url,\n )\n\n # Check if the file/directory exists\n try:\n r = open_url(url, method='HEAD', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=dir(e), reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n status = r.getcode()\n if status == 200:\n exists = True\n result['size'] = int(r.headers.get('content-length', None))\n elif status == 404:\n exists = False\n else:\n result['reason'] = r.msg\n result['status'] = status\n module.fail_json(msg=\"Failed to query for file '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n if state == 'absent':\n if not exists:\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'No Content'\n result['status'] = 204\n else:\n try:\n r = open_url(url, method='DELETE', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n\n if result['status'] == 405:\n result['state'] = 'directory'\n module.fail_json(msg='Directories cannot be removed with this module', errno=None, 
headers=dict(r.headers), **result)\n elif result['status'] != 204:\n module.fail_json(msg=\"Failed to remove '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n result['size'] = None\n module.exit_json(changed=True, **result)\n\n # NOTE: Creating a file in a non-existing directory, then remove the file\n elif state == 'directory':\n if exists:\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'Created'\n result['status'] = 201\n else:\n # Create a temporary file in the new directory\n remote_path = vmware_path(datastore, datacenter, path + '/foobar.tmp')\n temp_url = 'https://%s%s' % (host, remote_path)\n\n try:\n r = open_url(temp_url, method='PUT', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n if result['status'] != 201:\n result['url'] = temp_url\n module.fail_json(msg='Failed to create temporary file', errno=None, headers=dict(r.headers), **result)\n\n try:\n r = open_url(temp_url, method='DELETE', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n status = r.getcode()\n if status != 204:\n result['reason'] = r.msg\n result['status'] = status\n module.warn('Failed to remove temporary file ({reason})'.format(**result))\n\n module.exit_json(changed=True, **result)\n\n elif state == 'file':\n\n if not exists:\n result['state'] = 'absent'\n result['status'] = status\n module.fail_json(msg=\"File '%s' is absent, cannot continue\" % path, **result)\n\n result['status'] = status\n module.exit_json(changed=False, **result)\n\n elif state == 'touch':\n if exists:\n result['state'] = 'file'\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'Created'\n result['status'] = 201\n else:\n try:\n r = open_url(url, method='PUT', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n if result['status'] != 201:\n module.fail_json(msg=\"Failed to touch '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n result['size'] = 0\n result['state'] = 'file'\n module.exit_json(changed=True, **result)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/vsphere_file.py"
}
] | [
{
"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright: (c) 2017, Dag Wieers (@dagwieers) <[email protected]>\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: vsphere_file\nshort_description: Manage files on a vCenter datastore\ndescription:\n- Manage files on a vCenter datastore.\nauthor:\n- Dag Wieers (@dagwieers)\noptions:\n host:\n description:\n - The vCenter server on which the datastore is available.\n type: str\n required: true\n aliases: [ hostname ]\n username:\n description:\n - The user name to authenticate on the vCenter server.\n type: str\n required: true\n password:\n description:\n - The password to authenticate on the vCenter server.\n type: str\n required: true\n datacenter:\n description:\n - The datacenter on the vCenter server that holds the datastore.\n type: str\n required: true\n datastore:\n description:\n - The datastore on the vCenter server to push files to.\n type: str\n required: true\n path:\n description:\n - The file or directory on the datastore on the vCenter server.\n type: str\n required: true\n aliases: [ dest ]\n validate_certs:\n description:\n - If C(false), SSL certificates will not be validated. This should only be\n set to C(false) when no other option exists.\n type: bool\n default: true\n timeout:\n description:\n - The timeout in seconds for the upload to the datastore.\n type: int\n default: 10\n state:\n description:\n - The state of or the action on the provided path.\n - If C(absent), the file will be removed.\n - If C(directory), the directory will be created.\n - If C(file), more information of the (existing) file will be returned.\n - If C(touch), an empty file will be created if the path does not exist.\n type: str\n choices: [ absent, directory, file, touch ]\n default: file\nnotes:\n- The vSphere folder API does not allow to remove directory objects.\n'''\n\nEXAMPLES = r'''\n- name: Create an empty file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC1 Someplace\n datastore: datastore1\n path: some/remote/file\n state: touch\n delegate_to: localhost\n\n- name: Create a directory on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC2 Someplace\n datastore: datastore2\n path: other/remote/file\n state: directory\n delegate_to: localhost\n\n- name: Query a file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC1 Someplace\n datastore: datastore1\n path: some/remote/file\n state: file\n delegate_to: localhost\n ignore_errors: true\n\n- name: Delete a file on a datastore\n community.vmware.vsphere_file:\n host: '{{ vhost }}'\n username: '{{ vuser }}'\n password: '{{ vpass }}'\n datacenter: DC2 Someplace\n datastore: datastore2\n path: other/remote/file\n state: absent\n delegate_to: localhost\n'''\n\nRETURN = r'''\n'''\n\nimport socket\nimport sys\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.six import PY2\nfrom ansible.module_utils.six.moves.urllib.error import HTTPError\nfrom ansible.module_utils.six.moves.urllib.parse import quote, urlencode\nfrom ansible.module_utils.urls import open_url\nfrom ansible.module_utils._text import to_native\n\n\ndef 
vmware_path(datastore, datacenter, path):\n ''' Constructs a URL path that VSphere accepts reliably '''\n path = '/folder/{path}'.format(path=quote(path.strip('/')))\n # Due to a software bug in vSphere, it fails to handle ampersand in datacenter names\n # The solution is to do what vSphere does (when browsing) and double-encode ampersands, maybe others ?\n datacenter = datacenter.replace('&', '%26')\n if not path.startswith('/'):\n path = '/' + path\n params = dict(dsName=datastore)\n if datacenter:\n params['dcPath'] = datacenter\n return '{0}?{1}'.format(path, urlencode(params))\n\n\ndef main():\n\n module = AnsibleModule(\n argument_spec=dict(\n host=dict(type='str', required=True, aliases=['hostname']),\n username=dict(type='str', required=True),\n password=dict(type='str', required=True, no_log=True),\n datacenter=dict(type='str', required=True),\n datastore=dict(type='str', required=True),\n path=dict(type='str', required=True, aliases=['dest']),\n state=dict(type='str', default='file', choices=['absent', 'directory', 'file', 'touch']),\n timeout=dict(type='int', default=10),\n validate_certs=dict(type='bool', default=True),\n ),\n supports_check_mode=True,\n )\n\n host = module.params.get('host')\n username = module.params.get('username')\n password = module.params.get('password')\n datacenter = module.params.get('datacenter')\n datastore = module.params.get('datastore')\n path = module.params.get('path')\n validate_certs = module.params.get('validate_certs')\n timeout = module.params.get('timeout')\n state = module.params.get('state')\n\n remote_path = vmware_path(datastore, datacenter, path)\n url = 'https://%s%s' % (host, remote_path)\n\n result = dict(\n path=path,\n size=None,\n state=state,\n status=None,\n url=url,\n )\n\n # Check if the file/directory exists\n try:\n r = open_url(url, method='HEAD', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=dir(e), reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n status = r.getcode()\n if status == 200:\n exists = True\n result['size'] = int(r.headers.get('content-length', None))\n elif status == 404:\n exists = False\n else:\n result['reason'] = r.msg\n result['status'] = status\n module.fail_json(msg=\"Failed to query for file '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n if state == 'absent':\n if not exists:\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'No Content'\n result['status'] = 204\n else:\n try:\n r = open_url(url, method='DELETE', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n\n if result['status'] == 405:\n result['state'] = 'directory'\n module.fail_json(msg='Directories cannot be removed with this module', errno=None, 
headers=dict(r.headers), **result)\n elif result['status'] != 204:\n module.fail_json(msg=\"Failed to remove '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n result['size'] = None\n module.exit_json(changed=True, **result)\n\n # NOTE: Creating a file in a non-existing directory, then remove the file\n elif state == 'directory':\n if exists:\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'Created'\n result['status'] = 201\n else:\n # Create a temporary file in the new directory\n remote_path = vmware_path(datastore, datacenter, path + '/foobar.tmp')\n temp_url = 'https://%s%s' % (host, remote_path)\n\n try:\n r = open_url(temp_url, method='PUT', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n if result['status'] != 201:\n result['url'] = temp_url\n module.fail_json(msg='Failed to create temporary file', errno=None, headers=dict(r.headers), **result)\n\n try:\n r = open_url(temp_url, method='DELETE', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n status = r.getcode()\n if status != 204:\n result['reason'] = r.msg\n result['status'] = status\n module.warn('Failed to remove temporary file ({reason})'.format(**result))\n\n module.exit_json(changed=True, **result)\n\n elif state == 'file':\n\n if not exists:\n result['state'] = 'absent'\n result['status'] = status\n module.fail_json(msg=\"File '%s' is absent, cannot continue\" % path, **result)\n\n result['status'] = status\n module.exit_json(changed=False, **result)\n\n elif state == 'touch':\n if exists:\n result['state'] = 'file'\n module.exit_json(changed=False, **result)\n\n if module.check_mode:\n result['reason'] = 'Created'\n result['status'] = 201\n else:\n try:\n r = open_url(url, method='PUT', timeout=timeout,\n url_username=username, url_password=password,\n validate_certs=validate_certs, force_basic_auth=True)\n except HTTPError as e:\n r = e\n except socket.error as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n except Exception as e:\n module.fail_json(msg=to_native(e), errno=e[0], reason=to_native(e), **result)\n\n if PY2:\n sys.exc_clear() # Avoid false positive traceback in fail_json() on Python 2\n\n result['reason'] = r.msg\n result['status'] = r.getcode()\n if result['status'] != 201:\n module.fail_json(msg=\"Failed to touch '%s'\" % path, errno=None, headers=dict(r.headers), **result)\n\n result['size'] = 0\n result['state'] = 'file'\n module.exit_json(changed=True, **result)\n\n\nif __name__ == '__main__':\n main()\n",
"path": "plugins/modules/vsphere_file.py"
}
] | diff --git a/plugins/modules/vsphere_file.py b/plugins/modules/vsphere_file.py
index 28fc54649b..17415699e2 100644
--- a/plugins/modules/vsphere_file.py
+++ b/plugins/modules/vsphere_file.py
@@ -105,7 +105,7 @@
datacenter: DC1 Someplace
datastore: datastore1
path: some/remote/file
- state: touch
+ state: file
delegate_to: localhost
ignore_errors: true
|
docker__docker-py-1473 | DaemonApiMixin.events does not propagate HttpHeaders from config.json
The `docker.api.daemon.DaemonApiMixin.events` method does not make use of `config.json`, which may contain custom HTTP headers that should be passed to the server.
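The diff below switches the call from the bare session `get` to the internal `_get` helper so that the headers configured in `config.json` reach the daemon. A minimal, standalone sketch of the general principle, using plain `requests` and hypothetical names rather than docker-py's actual objects:

```python
import requests

# Hypothetical illustration (not docker-py's real code): headers loaded from a
# user config are only attached when the call goes through the helper that
# applies them; calling the session directly bypasses them. This mirrors the
# self.get -> self._get change in the diff below.
CUSTOM_HEADERS = {"X-Meta-Source": "example"}  # stand-in for config.json HttpHeaders

session = requests.Session()

def _get(url):
    # Helper that injects the configured headers into every request it builds.
    request = requests.Request("GET", url, headers=dict(CUSTOM_HEADERS))
    return session.prepare_request(request)

with_headers = _get("http://localhost/events")
without_headers = session.prepare_request(requests.Request("GET", "http://localhost/events"))

print("X-Meta-Source" in with_headers.headers)     # True
print("X-Meta-Source" in without_headers.headers)  # False
```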
| [
{
"content": "import os\nimport warnings\nfrom datetime import datetime\n\nfrom .. import auth, utils\nfrom ..constants import INSECURE_REGISTRY_DEPRECATION_WARNING\n\n\nclass DaemonApiMixin(object):\n def events(self, since=None, until=None, filters=None, decode=None):\n \"\"\"\n Get real-time events from the server. Similar to the ``docker events``\n command.\n\n Args:\n since (UTC datetime or int): Get events from this point\n until (UTC datetime or int): Get events until this point\n filters (dict): Filter the events by event time, container or image\n decode (bool): If set to true, stream will be decoded into dicts on\n the fly. False by default.\n\n Returns:\n (generator): A blocking generator you can iterate over to retrieve\n events as they happen.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> for event in client.events()\n ... print event\n {u'from': u'image/with:tag',\n u'id': u'container-id',\n u'status': u'start',\n u'time': 1423339459}\n ...\n \"\"\"\n\n if isinstance(since, datetime):\n since = utils.datetime_to_timestamp(since)\n\n if isinstance(until, datetime):\n until = utils.datetime_to_timestamp(until)\n\n if filters:\n filters = utils.convert_filters(filters)\n\n params = {\n 'since': since,\n 'until': until,\n 'filters': filters\n }\n\n return self._stream_helper(\n self.get(self._url('/events'), params=params, stream=True),\n decode=decode\n )\n\n def info(self):\n \"\"\"\n Display system-wide information. Identical to the ``docker info``\n command.\n\n Returns:\n (dict): The info as a dict\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url(\"/info\")), True)\n\n def login(self, username, password=None, email=None, registry=None,\n reauth=False, insecure_registry=False, dockercfg_path=None):\n \"\"\"\n Authenticate with a registry. Similar to the ``docker login`` command.\n\n Args:\n username (str): The registry username\n password (str): The plaintext password\n email (str): The email for the registry account\n registry (str): URL to the registry. 
E.g.\n ``https://index.docker.io/v1/``\n reauth (bool): Whether refresh existing authentication on the\n Docker server.\n dockercfg_path (str): Use a custom path for the ``.dockercfg`` file\n (default ``$HOME/.dockercfg``)\n\n Returns:\n (dict): The response from the login request\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),\n DeprecationWarning\n )\n\n # If we don't have any auth data so far, try reloading the config file\n # one more time in case anything showed up in there.\n # If dockercfg_path is passed check to see if the config file exists,\n # if so load that config.\n if dockercfg_path and os.path.exists(dockercfg_path):\n self._auth_configs = auth.load_config(dockercfg_path)\n elif not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # If we found an existing auth config for this registry and username\n # combination, we can return it immediately unless reauth is requested.\n if authcfg and authcfg.get('username', None) == username \\\n and not reauth:\n return authcfg\n\n req_data = {\n 'username': username,\n 'password': password,\n 'email': email,\n 'serveraddress': registry,\n }\n\n response = self._post_json(self._url('/auth'), data=req_data)\n if response.status_code == 200:\n self._auth_configs[registry or auth.INDEX_NAME] = req_data\n return self._result(response, json=True)\n\n def ping(self):\n \"\"\"\n Checks the server is responsive. An exception will be raised if it\n isn't responding.\n\n Returns:\n (bool) The response from the server.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url('/_ping'))) == 'OK'\n\n def version(self, api_version=True):\n \"\"\"\n Returns version information from the server. Similar to the ``docker\n version`` command.\n\n Returns:\n (dict): The server version information\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url(\"/version\", versioned_api=api_version)\n return self._result(self._get(url), json=True)\n",
"path": "docker/api/daemon.py"
}
] | [
{
"content": "import os\nimport warnings\nfrom datetime import datetime\n\nfrom .. import auth, utils\nfrom ..constants import INSECURE_REGISTRY_DEPRECATION_WARNING\n\n\nclass DaemonApiMixin(object):\n def events(self, since=None, until=None, filters=None, decode=None):\n \"\"\"\n Get real-time events from the server. Similar to the ``docker events``\n command.\n\n Args:\n since (UTC datetime or int): Get events from this point\n until (UTC datetime or int): Get events until this point\n filters (dict): Filter the events by event time, container or image\n decode (bool): If set to true, stream will be decoded into dicts on\n the fly. False by default.\n\n Returns:\n (generator): A blocking generator you can iterate over to retrieve\n events as they happen.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n\n Example:\n\n >>> for event in client.events()\n ... print event\n {u'from': u'image/with:tag',\n u'id': u'container-id',\n u'status': u'start',\n u'time': 1423339459}\n ...\n \"\"\"\n\n if isinstance(since, datetime):\n since = utils.datetime_to_timestamp(since)\n\n if isinstance(until, datetime):\n until = utils.datetime_to_timestamp(until)\n\n if filters:\n filters = utils.convert_filters(filters)\n\n params = {\n 'since': since,\n 'until': until,\n 'filters': filters\n }\n\n return self._stream_helper(\n self._get(self._url('/events'), params=params, stream=True),\n decode=decode\n )\n\n def info(self):\n \"\"\"\n Display system-wide information. Identical to the ``docker info``\n command.\n\n Returns:\n (dict): The info as a dict\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url(\"/info\")), True)\n\n def login(self, username, password=None, email=None, registry=None,\n reauth=False, insecure_registry=False, dockercfg_path=None):\n \"\"\"\n Authenticate with a registry. Similar to the ``docker login`` command.\n\n Args:\n username (str): The registry username\n password (str): The plaintext password\n email (str): The email for the registry account\n registry (str): URL to the registry. 
E.g.\n ``https://index.docker.io/v1/``\n reauth (bool): Whether refresh existing authentication on the\n Docker server.\n dockercfg_path (str): Use a custom path for the ``.dockercfg`` file\n (default ``$HOME/.dockercfg``)\n\n Returns:\n (dict): The response from the login request\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n if insecure_registry:\n warnings.warn(\n INSECURE_REGISTRY_DEPRECATION_WARNING.format('login()'),\n DeprecationWarning\n )\n\n # If we don't have any auth data so far, try reloading the config file\n # one more time in case anything showed up in there.\n # If dockercfg_path is passed check to see if the config file exists,\n # if so load that config.\n if dockercfg_path and os.path.exists(dockercfg_path):\n self._auth_configs = auth.load_config(dockercfg_path)\n elif not self._auth_configs:\n self._auth_configs = auth.load_config()\n\n authcfg = auth.resolve_authconfig(self._auth_configs, registry)\n # If we found an existing auth config for this registry and username\n # combination, we can return it immediately unless reauth is requested.\n if authcfg and authcfg.get('username', None) == username \\\n and not reauth:\n return authcfg\n\n req_data = {\n 'username': username,\n 'password': password,\n 'email': email,\n 'serveraddress': registry,\n }\n\n response = self._post_json(self._url('/auth'), data=req_data)\n if response.status_code == 200:\n self._auth_configs[registry or auth.INDEX_NAME] = req_data\n return self._result(response, json=True)\n\n def ping(self):\n \"\"\"\n Checks the server is responsive. An exception will be raised if it\n isn't responding.\n\n Returns:\n (bool) The response from the server.\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n return self._result(self._get(self._url('/_ping'))) == 'OK'\n\n def version(self, api_version=True):\n \"\"\"\n Returns version information from the server. Similar to the ``docker\n version`` command.\n\n Returns:\n (dict): The server version information\n\n Raises:\n :py:class:`docker.errors.APIError`\n If the server returns an error.\n \"\"\"\n url = self._url(\"/version\", versioned_api=api_version)\n return self._result(self._get(url), json=True)\n",
"path": "docker/api/daemon.py"
}
] | diff --git a/docker/api/daemon.py b/docker/api/daemon.py
index d40631f59..033458491 100644
--- a/docker/api/daemon.py
+++ b/docker/api/daemon.py
@@ -54,7 +54,7 @@ def events(self, since=None, until=None, filters=None, decode=None):
}
return self._stream_helper(
- self.get(self._url('/events'), params=params, stream=True),
+ self._get(self._url('/events'), params=params, stream=True),
decode=decode
)
diff --git a/tests/unit/api_test.py b/tests/unit/api_test.py
index 15e4d7cc6..b632d209b 100644
--- a/tests/unit/api_test.py
+++ b/tests/unit/api_test.py
@@ -228,7 +228,8 @@ def test_events(self):
'GET',
url_prefix + 'events',
params={'since': None, 'until': None, 'filters': None},
- stream=True
+ stream=True,
+ timeout=DEFAULT_TIMEOUT_SECONDS
)
def test_events_with_since_until(self):
@@ -247,7 +248,8 @@ def test_events_with_since_until(self):
'until': ts + 10,
'filters': None
},
- stream=True
+ stream=True,
+ timeout=DEFAULT_TIMEOUT_SECONDS
)
def test_events_with_filters(self):
@@ -265,7 +267,8 @@ def test_events_with_filters(self):
'until': None,
'filters': expected_filters
},
- stream=True
+ stream=True,
+ timeout=DEFAULT_TIMEOUT_SECONDS
)
def _socket_path_for_client_session(self, client):
|
flairNLP__flair-1375 | print function for Dictionary class
Currently, the `Dictionary` class only prints the default object representation (its type and memory address):
```python
corpus = flair.datasets.UD_ENGLISH(in_memory=False)
tag_dictionary = corpus.make_tag_dictionary(tag_type='upos')
print(tag_dictionary)
```
This prints:
```console
<flair.data.Dictionary object at 0x7f83187fcb50>
```
Better would be a printout that shows the number of items in the dictionary and lists them.
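A minimal, self-contained sketch of the kind of `__str__` that would produce such a printout. The class below is a toy stand-in, not flair's actual `Dictionary` (which stores items as encoded bytes); the 30-item cap is an arbitrary choice for readability:

```python
class Dictionary:
    """Toy stand-in for flair.data.Dictionary, only to illustrate the idea."""

    def __init__(self, items):
        self.idx2item = list(items)

    def __len__(self):
        return len(self.idx2item)

    def get_item_for_index(self, idx):
        return self.idx2item[idx]

    def __str__(self):
        # Show the size and (up to) the first 30 items instead of the default
        # "<... object at 0x...>" representation.
        shown = ", ".join(self.get_item_for_index(i) for i in range(min(len(self), 30)))
        return f"Dictionary with {len(self)} items: {shown}"


print(Dictionary(["<unk>", "O", "NOUN", "VERB"]))
# Dictionary with 4 items: <unk>, O, NOUN, VERB
```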
| [
{
"content": "from abc import abstractmethod\nfrom operator import itemgetter\nfrom typing import List, Dict, Union, Callable\nimport re\n\nimport torch, flair\nimport logging\n\nfrom collections import Counter\nfrom collections import defaultdict\n\nfrom segtok.segmenter import split_single\nfrom segtok.tokenizer import split_contractions\nfrom segtok.tokenizer import word_tokenizer\nfrom torch.utils.data import Dataset, random_split\nfrom torch.utils.data.dataset import ConcatDataset, Subset\n\nfrom flair.file_utils import Tqdm\n\nlog = logging.getLogger(\"flair\")\n\n\nclass Dictionary:\n \"\"\"\n This class holds a dictionary that maps strings to IDs, used to generate one-hot encodings of strings.\n \"\"\"\n\n def __init__(self, add_unk=True):\n # init dictionaries\n self.item2idx: Dict[str, int] = {}\n self.idx2item: List[str] = []\n self.multi_label: bool = False\n\n # in order to deal with unknown tokens, add <unk>\n if add_unk:\n self.add_item(\"<unk>\")\n\n def add_item(self, item: str) -> int:\n \"\"\"\n add string - if already in dictionary returns its ID. if not in dictionary, it will get a new ID.\n :param item: a string for which to assign an id.\n :return: ID of string\n \"\"\"\n item = item.encode(\"utf-8\")\n if item not in self.item2idx:\n self.idx2item.append(item)\n self.item2idx[item] = len(self.idx2item) - 1\n return self.item2idx[item]\n\n def get_idx_for_item(self, item: str) -> int:\n \"\"\"\n returns the ID of the string, otherwise 0\n :param item: string for which ID is requested\n :return: ID of string, otherwise 0\n \"\"\"\n item = item.encode(\"utf-8\")\n if item in self.item2idx.keys():\n return self.item2idx[item]\n else:\n return 0\n\n def get_idx_for_items(self, items: List[str]) -> List[int]:\n \"\"\"\n returns the IDs for each item of the list of string, otherwise 0 if not found\n :param items: List of string for which IDs are requested\n :return: List of ID of strings\n \"\"\"\n if not hasattr(self, \"item2idx_not_encoded\"):\n d = dict(\n [(key.decode(\"UTF-8\"), value) for key, value in self.item2idx.items()]\n )\n self.item2idx_not_encoded = defaultdict(int, d)\n\n if not items:\n return []\n results = itemgetter(*items)(self.item2idx_not_encoded)\n if isinstance(results, int):\n return [results]\n return list(results)\n\n def get_items(self) -> List[str]:\n items = []\n for item in self.idx2item:\n items.append(item.decode(\"UTF-8\"))\n return items\n\n def __len__(self) -> int:\n return len(self.idx2item)\n\n def get_item_for_index(self, idx):\n return self.idx2item[idx].decode(\"UTF-8\")\n\n def save(self, savefile):\n import pickle\n\n with open(savefile, \"wb\") as f:\n mappings = {\"idx2item\": self.idx2item, \"item2idx\": self.item2idx}\n pickle.dump(mappings, f)\n\n @classmethod\n def load_from_file(cls, filename: str):\n import pickle\n\n dictionary: Dictionary = Dictionary()\n with open(filename, \"rb\") as f:\n mappings = pickle.load(f, encoding=\"latin1\")\n idx2item = mappings[\"idx2item\"]\n item2idx = mappings[\"item2idx\"]\n dictionary.item2idx = item2idx\n dictionary.idx2item = idx2item\n return dictionary\n\n @classmethod\n def load(cls, name: str):\n from flair.file_utils import cached_path\n\n if name == \"chars\" or name == \"common-chars\":\n base_path = \"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n if name == \"chars-large\" or name == \"common-chars-large\":\n base_path = 
\"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters_large\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n if name == \"chars-xl\" or name == \"common-chars-xl\":\n base_path = \"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters_xl\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n return Dictionary.load_from_file(name)\n\n\nclass Label:\n \"\"\"\n This class represents a label of a sentence. Each label has a value and optionally a confidence score. The\n score needs to be between 0.0 and 1.0. Default value for the score is 1.0.\n \"\"\"\n\n def __init__(self, value: str, score: float = 1.0):\n self.value = value\n self.score = score\n super().__init__()\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n if not value and value != \"\":\n raise ValueError(\n \"Incorrect label value provided. Label value needs to be set.\"\n )\n else:\n self._value = value\n\n @property\n def score(self):\n return self._score\n\n @score.setter\n def score(self, score):\n if 0.0 <= score <= 1.0:\n self._score = score\n else:\n self._score = 1.0\n\n def to_dict(self):\n return {\"value\": self.value, \"confidence\": self.score}\n\n def __str__(self):\n return \"{} ({})\".format(self._value, self._score)\n\n def __repr__(self):\n return \"{} ({})\".format(self._value, self._score)\n\n\nclass DataPoint:\n @property\n @abstractmethod\n def embedding(self):\n pass\n\n @abstractmethod\n def to(self, device: str, pin_memory: bool = False):\n pass\n\n @abstractmethod\n def clear_embeddings(self, embedding_names: List[str] = None):\n pass\n\n\nclass DataPair(DataPoint):\n def __init__(self, first: DataPoint, second: DataPoint):\n self.first = first\n self.second = second\n\n def to(self, device: str, pin_memory: bool = False):\n self.first.to(device, pin_memory)\n self.second.to(device, pin_memory)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n self.first.clear_embeddings(embedding_names)\n self.second.clear_embeddings(embedding_names)\n\n def embedding(self):\n return torch.cat([self.first.embedding, self.second.embedding])\n\n def __str__(self):\n return f\"DataPoint:\\n first: {self.first}\\n second: {self.second}\"\n\n\nclass Token(DataPoint):\n \"\"\"\n This class represents one word in a tokenized sentence. Each token may have any number of tags. 
It may also point\n to its head in a dependency tree.\n \"\"\"\n\n def __init__(\n self,\n text: str,\n idx: int = None,\n head_id: int = None,\n whitespace_after: bool = True,\n start_position: int = None,\n ):\n self.text: str = text\n self.idx: int = idx\n self.head_id: int = head_id\n self.whitespace_after: bool = whitespace_after\n\n self.start_pos = start_position\n self.end_pos = (\n start_position + len(text) if start_position is not None else None\n )\n\n self.sentence: Sentence = None\n self._embeddings: Dict = {}\n self.tags: Dict[str, Label] = {}\n self.tags_proba_dist: Dict[str, List[Label]] = {}\n\n def add_tag_label(self, tag_type: str, tag: Label):\n self.tags[tag_type] = tag\n\n def add_tags_proba_dist(self, tag_type: str, tags: List[Label]):\n self.tags_proba_dist[tag_type] = tags\n\n def add_tag(self, tag_type: str, tag_value: str, confidence=1.0):\n tag = Label(tag_value, confidence)\n self.tags[tag_type] = tag\n\n def get_tag(self, tag_type: str) -> Label:\n if tag_type in self.tags:\n return self.tags[tag_type]\n return Label(\"\")\n\n def get_tags_proba_dist(self, tag_type: str) -> List[Label]:\n if tag_type in self.tags_proba_dist:\n return self.tags_proba_dist[tag_type]\n return []\n\n def get_head(self):\n return self.sentence.get_token(self.head_id)\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def to(self, device: str, pin_memory: bool = False):\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n ).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n def get_each_embedding(self) -> torch.tensor:\n embeddings = []\n for embed in sorted(self._embeddings.keys()):\n embed = self._embeddings[embed].to(flair.device)\n if (flair.embedding_storage_mode == \"cpu\") and embed.device != flair.device:\n embed = embed.to(flair.device)\n embeddings.append(embed)\n return embeddings\n\n def get_embedding(self) -> torch.tensor:\n embeddings = self.get_each_embedding()\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.tensor([], device=flair.device)\n\n @property\n def start_position(self) -> int:\n return self.start_pos\n\n @property\n def end_position(self) -> int:\n return self.end_pos\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def __str__(self) -> str:\n return (\n \"Token: {} {}\".format(self.idx, self.text)\n if self.idx is not None\n else \"Token: {}\".format(self.text)\n )\n\n def __repr__(self) -> str:\n return (\n \"Token: {} {}\".format(self.idx, self.text)\n if self.idx is not None\n else \"Token: {}\".format(self.text)\n )\n\n\nclass Span:\n \"\"\"\n This class represents one textual span consisting of Tokens. 
A span may have a tag.\n \"\"\"\n\n def __init__(self, tokens: List[Token], tag: str = None, score=1.0):\n self.tokens = tokens\n self.tag = tag\n self.score = score\n self.start_pos = None\n self.end_pos = None\n\n if tokens:\n self.start_pos = tokens[0].start_position\n self.end_pos = tokens[len(tokens) - 1].end_position\n\n @property\n def text(self) -> str:\n return \" \".join([t.text for t in self.tokens])\n\n def to_original_text(self) -> str:\n pos = self.tokens[0].start_pos\n if pos is None:\n return \" \".join([t.text for t in self.tokens])\n str = \"\"\n for t in self.tokens:\n while t.start_pos != pos:\n str += \" \"\n pos += 1\n\n str += t.text\n pos += len(t.text)\n\n return str\n\n def to_dict(self):\n return {\n \"text\": self.to_original_text(),\n \"start_pos\": self.start_pos,\n \"end_pos\": self.end_pos,\n \"type\": self.tag,\n \"confidence\": self.score,\n }\n\n def __str__(self) -> str:\n ids = \",\".join([str(t.idx) for t in self.tokens])\n return (\n '{}-span [{}]: \"{}\"'.format(self.tag, ids, self.text)\n if self.tag is not None\n else 'span [{}]: \"{}\"'.format(ids, self.text)\n )\n\n def __repr__(self) -> str:\n ids = \",\".join([str(t.idx) for t in self.tokens])\n return (\n '<{}-span ({}): \"{}\">'.format(self.tag, ids, self.text)\n if self.tag is not None\n else '<span ({}): \"{}\">'.format(ids, self.text)\n )\n\n\ndef space_tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer based on space character only.\n \"\"\"\n tokens: List[Token] = []\n word = \"\"\n index = -1\n for index, char in enumerate(text):\n if char == \" \":\n if len(word) > 0:\n start_position = index - len(word)\n tokens.append(\n Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n )\n\n word = \"\"\n else:\n word += char\n # increment for last token in sentence if not followed by whitespace\n index += 1\n if len(word) > 0:\n start_position = index - len(word)\n tokens.append(\n Token(text=word, start_position=start_position, whitespace_after=False)\n )\n return tokens\n\n\ndef build_japanese_tokenizer(tokenizer: str = \"MeCab\"):\n if tokenizer.lower() != \"mecab\":\n raise NotImplementedError(\"Currently, MeCab is only supported.\")\n\n try:\n import konoha\n except ModuleNotFoundError:\n log.warning(\"-\" * 100)\n log.warning('ATTENTION! 
The library \"konoha\" is not installed!')\n log.warning(\n 'To use Japanese tokenizer, please first install with the following steps:'\n )\n log.warning(\n '- Install mecab with \"sudo apt install mecab libmecab-dev mecab-ipadic\"'\n )\n log.warning('- Install konoha with \"pip install konoha[mecab]\"')\n log.warning(\"-\" * 100)\n pass\n\n sentence_tokenizer = konoha.SentenceTokenizer()\n word_tokenizer = konoha.WordTokenizer(tokenizer)\n\n def tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer using konoha, a third party library which supports\n multiple Japanese tokenizer such as MeCab, KyTea and SudachiPy.\n \"\"\"\n tokens: List[Token] = []\n words: List[str] = []\n\n sentences = sentence_tokenizer.tokenize(text)\n for sentence in sentences:\n konoha_tokens = word_tokenizer.tokenize(sentence)\n words.extend(list(map(str, konoha_tokens)))\n\n # determine offsets for whitespace_after field\n index = text.index\n current_offset = 0\n previous_word_offset = -1\n previous_token = None\n for word in words:\n try:\n word_offset = index(word, current_offset)\n start_position = word_offset\n except:\n word_offset = previous_word_offset + 1\n start_position = (\n current_offset + 1 if current_offset > 0 else current_offset\n )\n\n token = Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and word_offset - 1 == previous_word_offset:\n previous_token.whitespace_after = False\n\n current_offset = word_offset + len(word)\n previous_word_offset = current_offset - 1\n previous_token = token\n\n return tokens\n\n return tokenizer\n\n\ndef segtok_tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer using segtok, a third party library dedicated to rules-based Indo-European languages.\n https://github.com/fnl/segtok\n \"\"\"\n tokens: List[Token] = []\n\n words: List[str] = []\n sentences = split_single(text)\n for sentence in sentences:\n contractions = split_contractions(word_tokenizer(sentence))\n words.extend(contractions)\n\n # determine offsets for whitespace_after field\n index = text.index\n current_offset = 0\n previous_word_offset = -1\n previous_token = None\n for word in words:\n try:\n word_offset = index(word, current_offset)\n start_position = word_offset\n except:\n word_offset = previous_word_offset + 1\n start_position = (\n current_offset + 1 if current_offset > 0 else current_offset\n )\n\n if word:\n token = Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and word_offset - 1 == previous_word_offset:\n previous_token.whitespace_after = False\n\n current_offset = word_offset + len(word)\n previous_word_offset = current_offset - 1\n previous_token = token\n\n return tokens\n\n\ndef build_spacy_tokenizer(model) -> Callable[[str], List[Token]]:\n \"\"\"\n Wrap Spacy model to build a tokenizer for the Sentence class.\n :param model a Spacy V2 model\n :return a tokenizer function to provide to Sentence class constructor\n \"\"\"\n try:\n from spacy.language import Language\n from spacy.tokens.doc import Doc\n from spacy.tokens.token import Token as SpacyToken\n except ImportError:\n raise ImportError(\n \"Please install Spacy v2.0 or better before using the Spacy tokenizer, otherwise you can use segtok_tokenizer as advanced tokenizer.\"\n )\n\n model: Language = model\n\n def tokenizer(text: str) -> List[Token]:\n doc: Doc = model.make_doc(text)\n previous_token = None\n tokens: List[Token] = []\n for word 
in doc:\n word: SpacyToken = word\n token = Token(\n text=word.text, start_position=word.idx, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and (\n token.start_pos - 1\n == previous_token.start_pos + len(previous_token.text)\n ):\n previous_token.whitespace_after = False\n\n previous_token = token\n return tokens\n\n return tokenizer\n\n\nclass Sentence(DataPoint):\n \"\"\"\n A Sentence is a list of Tokens and is used to represent a sentence or text fragment.\n \"\"\"\n\n def __init__(\n self,\n text: str = None,\n use_tokenizer: Union[bool, Callable[[str], List[Token]]] = space_tokenizer,\n labels: Union[List[Label], List[str]] = None,\n language_code: str = None,\n ):\n \"\"\"\n Class to hold all meta related to a text (tokens, predictions, language code, ...)\n :param text: original string\n :param use_tokenizer: a custom tokenizer (default is space based tokenizer,\n more advanced options are segtok_tokenizer to use segtok or build_spacy_tokenizer to use Spacy library\n if available). Check the code of space_tokenizer to implement your own (if you need it).\n If instead of providing a function, this parameter is just set to True, segtok will be used.\n :param labels:\n :param language_code:\n \"\"\"\n super(Sentence, self).__init__()\n\n self.tokens: List[Token] = []\n\n self.labels: List[Label] = []\n if labels is not None:\n self.add_labels(labels)\n\n self._embeddings: Dict = {}\n\n self.language_code: str = language_code\n\n tokenizer = use_tokenizer\n if type(use_tokenizer) == bool:\n tokenizer = segtok_tokenizer if use_tokenizer else space_tokenizer\n\n # if text is passed, instantiate sentence with tokens (words)\n if text is not None:\n text = self._restore_windows_1252_characters(text)\n [self.add_token(token) for token in tokenizer(text)]\n\n # log a warning if the dataset is empty\n if text == \"\":\n log.warning(\n \"ACHTUNG: An empty Sentence was created! 
Are there empty strings in your dataset?\"\n )\n\n self.tokenized = None\n\n def get_token(self, token_id: int) -> Token:\n for token in self.tokens:\n if token.idx == token_id:\n return token\n\n def add_token(self, token: Union[Token, str]):\n\n if type(token) is str:\n token = Token(token)\n\n self.tokens.append(token)\n\n # set token idx if not set\n token.sentence = self\n if token.idx is None:\n token.idx = len(self.tokens)\n\n def get_spans(self, tag_type: str, min_score=-1) -> List[Span]:\n\n spans: List[Span] = []\n\n current_span = []\n\n tags = defaultdict(lambda: 0.0)\n\n previous_tag_value: str = \"O\"\n for token in self:\n\n tag: Label = token.get_tag(tag_type)\n tag_value = tag.value\n\n # non-set tags are OUT tags\n if tag_value == \"\" or tag_value == \"O\":\n tag_value = \"O-\"\n\n # anything that is not a BIOES tag is a SINGLE tag\n if tag_value[0:2] not in [\"B-\", \"I-\", \"O-\", \"E-\", \"S-\"]:\n tag_value = \"S-\" + tag_value\n\n # anything that is not OUT is IN\n in_span = False\n if tag_value[0:2] not in [\"O-\"]:\n in_span = True\n\n # single and begin tags start a new span\n starts_new_span = False\n if tag_value[0:2] in [\"B-\", \"S-\"]:\n starts_new_span = True\n\n if (\n previous_tag_value[0:2] in [\"S-\"]\n and previous_tag_value[2:] != tag_value[2:]\n and in_span\n ):\n starts_new_span = True\n\n if (starts_new_span or not in_span) and len(current_span) > 0:\n scores = [t.get_tag(tag_type).score for t in current_span]\n span_score = sum(scores) / len(scores)\n if span_score > min_score:\n spans.append(\n Span(\n current_span,\n tag=sorted(\n tags.items(), key=lambda k_v: k_v[1], reverse=True\n )[0][0],\n score=span_score,\n )\n )\n current_span = []\n tags = defaultdict(lambda: 0.0)\n\n if in_span:\n current_span.append(token)\n weight = 1.1 if starts_new_span else 1.0\n tags[tag_value[2:]] += weight\n\n # remember previous tag\n previous_tag_value = tag_value\n\n if len(current_span) > 0:\n scores = [t.get_tag(tag_type).score for t in current_span]\n span_score = sum(scores) / len(scores)\n if span_score > min_score:\n spans.append(\n Span(\n current_span,\n tag=sorted(tags.items(), key=lambda k_v: k_v[1], reverse=True)[\n 0\n ][0],\n score=span_score,\n )\n )\n\n return spans\n\n def add_label(self, label: Union[Label, str]):\n if type(label) is Label:\n self.labels.append(label)\n\n elif type(label) is str:\n self.labels.append(Label(label))\n\n def add_labels(self, labels: Union[List[Label], List[str]]):\n for label in labels:\n self.add_label(label)\n\n def get_label_names(self) -> List[str]:\n return [label.value for label in self.labels]\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def get_embedding(self) -> torch.tensor:\n embeddings = []\n for embed in sorted(self._embeddings.keys()):\n embedding = self._embeddings[embed]\n embeddings.append(embedding)\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.Tensor()\n\n def to(self, device: str, pin_memory: bool = False):\n\n # move sentence embeddings to device\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n 
).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n # move token embeddings to device\n for token in self:\n token.to(device, pin_memory)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n\n # clear sentence embeddings\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n # clear token embeddings\n for token in self:\n token.clear_embeddings(embedding_names)\n\n def to_tagged_string(self, main_tag=None) -> str:\n list = []\n for token in self.tokens:\n list.append(token.text)\n\n tags: List[str] = []\n for tag_type in token.tags.keys():\n\n if main_tag is not None and main_tag != tag_type:\n continue\n\n if (\n token.get_tag(tag_type).value == \"\"\n or token.get_tag(tag_type).value == \"O\"\n ):\n continue\n tags.append(token.get_tag(tag_type).value)\n all_tags = \"<\" + \"/\".join(tags) + \">\"\n if all_tags != \"<>\":\n list.append(all_tags)\n return \" \".join(list)\n\n def to_tokenized_string(self) -> str:\n\n if self.tokenized is None:\n self.tokenized = \" \".join([t.text for t in self.tokens])\n\n return self.tokenized\n\n def to_plain_string(self):\n plain = \"\"\n for token in self.tokens:\n plain += token.text\n if token.whitespace_after:\n plain += \" \"\n return plain.rstrip()\n\n def convert_tag_scheme(self, tag_type: str = \"ner\", target_scheme: str = \"iob\"):\n\n tags: List[Label] = []\n for token in self.tokens:\n tags.append(token.get_tag(tag_type))\n\n if target_scheme == \"iob\":\n iob2(tags)\n\n if target_scheme == \"iobes\":\n iob2(tags)\n tags = iob_iobes(tags)\n\n for index, tag in enumerate(tags):\n self.tokens[index].add_tag(tag_type, tag)\n\n def infer_space_after(self):\n \"\"\"\n Heuristics in case you wish to infer whitespace_after values for tokenized text. 
This is useful for some old NLP\n tasks (such as CoNLL-03 and CoNLL-2000) that provide only tokenized data with no info of original whitespacing.\n :return:\n \"\"\"\n last_token = None\n quote_count: int = 0\n # infer whitespace after field\n\n for token in self.tokens:\n if token.text == '\"':\n quote_count += 1\n if quote_count % 2 != 0:\n token.whitespace_after = False\n elif last_token is not None:\n last_token.whitespace_after = False\n\n if last_token is not None:\n\n if token.text in [\".\", \":\", \",\", \";\", \")\", \"n't\", \"!\", \"?\"]:\n last_token.whitespace_after = False\n\n if token.text.startswith(\"'\"):\n last_token.whitespace_after = False\n\n if token.text in [\"(\"]:\n token.whitespace_after = False\n\n last_token = token\n return self\n\n def to_original_text(self) -> str:\n if len(self.tokens) > 0 and (self.tokens[0].start_pos is None):\n return \" \".join([t.text for t in self.tokens])\n str = \"\"\n pos = 0\n for t in self.tokens:\n while t.start_pos != pos:\n str += \" \"\n pos += 1\n\n str += t.text\n pos += len(t.text)\n\n return str\n\n def to_dict(self, tag_type: str = None):\n labels = []\n entities = []\n\n if tag_type:\n entities = [span.to_dict() for span in self.get_spans(tag_type)]\n if self.labels:\n labels = [l.to_dict() for l in self.labels]\n\n return {\"text\": self.to_original_text(), \"labels\": labels, \"entities\": entities}\n\n def __getitem__(self, idx: int) -> Token:\n return self.tokens[idx]\n\n def __iter__(self):\n return iter(self.tokens)\n\n def __repr__(self):\n return 'Sentence: \"{}\" - {} Tokens'.format(\n \" \".join([t.text for t in self.tokens]), len(self)\n )\n\n def __copy__(self):\n s = Sentence()\n for token in self.tokens:\n nt = Token(token.text)\n for tag_type in token.tags:\n nt.add_tag(\n tag_type,\n token.get_tag(tag_type).value,\n token.get_tag(tag_type).score,\n )\n\n s.add_token(nt)\n return s\n\n def __str__(self) -> str:\n\n if self.labels:\n return f'Sentence: \"{self.to_tokenized_string()}\" - {len(self)} Tokens - Labels: {self.labels} '\n else:\n return f'Sentence: \"{self.to_tokenized_string()}\" - {len(self)} Tokens'\n\n def __len__(self) -> int:\n return len(self.tokens)\n\n def get_language_code(self) -> str:\n if self.language_code is None:\n import langdetect\n\n try:\n self.language_code = langdetect.detect(self.to_plain_string())\n except:\n self.language_code = \"en\"\n\n return self.language_code\n\n @staticmethod\n def _restore_windows_1252_characters(text: str) -> str:\n def to_windows_1252(match):\n try:\n return bytes([ord(match.group(0))]).decode(\"windows-1252\")\n except UnicodeDecodeError:\n # No character at the corresponding code point: remove it\n return \"\"\n\n return re.sub(r\"[\\u0080-\\u0099]\", to_windows_1252, text)\n\n\nclass Image(DataPoint):\n def __init__(self, data=None, imageURL=None):\n self.data = data\n self._embeddings: Dict = {}\n self.imageURL = imageURL\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def __str__(self):\n\n image_repr = self.data.size() if self.data else \"\"\n image_url = self.imageURL if self.imageURL else \"\"\n\n return f\"Image: {image_repr} {image_url}\"\n\n def get_embedding(self) -> torch.tensor:\n embeddings = [\n self._embeddings[embed] for embed in sorted(self._embeddings.keys())\n ]\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.tensor([], device=flair.device)\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and 
len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def to(self, device: str, pin_memory: bool = False):\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n ).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n\nclass FlairDataset(Dataset):\n @abstractmethod\n def is_in_memory(self) -> bool:\n pass\n\n\nclass Corpus:\n def __init__(\n self,\n train: FlairDataset,\n dev: FlairDataset,\n test: FlairDataset,\n name: str = \"corpus\",\n ):\n self._train: FlairDataset = train\n self._dev: FlairDataset = dev\n self._test: FlairDataset = test\n self.name: str = name\n\n @property\n def train(self) -> FlairDataset:\n return self._train\n\n @property\n def dev(self) -> FlairDataset:\n return self._dev\n\n @property\n def test(self) -> FlairDataset:\n return self._test\n\n def downsample(self, percentage: float = 0.1, only_downsample_train=False):\n\n self._train = self._downsample_to_proportion(self.train, percentage)\n if not only_downsample_train:\n self._dev = self._downsample_to_proportion(self.dev, percentage)\n self._test = self._downsample_to_proportion(self.test, percentage)\n\n return self\n\n def filter_empty_sentences(self):\n log.info(\"Filtering empty sentences\")\n self._train = Corpus._filter_empty_sentences(self._train)\n self._test = Corpus._filter_empty_sentences(self._test)\n self._dev = Corpus._filter_empty_sentences(self._dev)\n log.info(self)\n\n @staticmethod\n def _filter_empty_sentences(dataset) -> Dataset:\n\n # find out empty sentence indices\n empty_sentence_indices = []\n non_empty_sentence_indices = []\n index = 0\n\n from flair.datasets import DataLoader\n\n for batch in DataLoader(dataset):\n for sentence in batch:\n if len(sentence) == 0:\n empty_sentence_indices.append(index)\n else:\n non_empty_sentence_indices.append(index)\n index += 1\n\n # create subset of non-empty sentence indices\n subset = Subset(dataset, non_empty_sentence_indices)\n\n return subset\n\n def make_vocab_dictionary(self, max_tokens=-1, min_freq=1) -> Dictionary:\n \"\"\"\n Creates a dictionary of all tokens contained in the corpus.\n By defining `max_tokens` you can set the maximum number of tokens that should be contained in the dictionary.\n If there are more than `max_tokens` tokens in the corpus, the most frequent tokens are added first.\n If `min_freq` is set the a value greater than 1 only tokens occurring more than `min_freq` times are considered\n to be added to the dictionary.\n :param max_tokens: the maximum number of tokens that should be added to the dictionary (-1 = take all tokens)\n :param min_freq: a token needs to occur at least `min_freq` times to be added to the dictionary (-1 = there is no limitation)\n :return: dictionary of tokens\n \"\"\"\n tokens = self._get_most_common_tokens(max_tokens, min_freq)\n\n vocab_dictionary: Dictionary = Dictionary()\n for token in tokens:\n vocab_dictionary.add_item(token)\n\n return vocab_dictionary\n\n def _get_most_common_tokens(self, max_tokens, min_freq) -> List[str]:\n tokens_and_frequencies = 
Counter(self._get_all_tokens())\n tokens_and_frequencies = tokens_and_frequencies.most_common()\n\n tokens = []\n for token, freq in tokens_and_frequencies:\n if (min_freq != -1 and freq < min_freq) or (\n max_tokens != -1 and len(tokens) == max_tokens\n ):\n break\n tokens.append(token)\n return tokens\n\n def _get_all_tokens(self) -> List[str]:\n tokens = list(map((lambda s: s.tokens), self.train))\n tokens = [token for sublist in tokens for token in sublist]\n return list(map((lambda t: t.text), tokens))\n\n @staticmethod\n def _downsample_to_proportion(dataset: Dataset, proportion: float):\n\n sampled_size: int = round(len(dataset) * proportion)\n splits = random_split(dataset, [len(dataset) - sampled_size, sampled_size])\n return splits[1]\n\n def obtain_statistics(\n self, tag_type: str = None, pretty_print: bool = True\n ) -> dict:\n \"\"\"\n Print statistics about the class distribution (only labels of sentences are taken into account) and sentence\n sizes.\n \"\"\"\n json_string = {\n \"TRAIN\": self._obtain_statistics_for(self.train, \"TRAIN\", tag_type),\n \"TEST\": self._obtain_statistics_for(self.test, \"TEST\", tag_type),\n \"DEV\": self._obtain_statistics_for(self.dev, \"DEV\", tag_type),\n }\n if pretty_print:\n import json\n\n json_string = json.dumps(json_string, indent=4)\n return json_string\n\n @staticmethod\n def _obtain_statistics_for(sentences, name, tag_type) -> dict:\n if len(sentences) == 0:\n return {}\n\n classes_to_count = Corpus._get_class_to_count(sentences)\n tags_to_count = Corpus._get_tag_to_count(sentences, tag_type)\n tokens_per_sentence = Corpus._get_tokens_per_sentence(sentences)\n\n label_size_dict = {}\n for l, c in classes_to_count.items():\n label_size_dict[l] = c\n\n tag_size_dict = {}\n for l, c in tags_to_count.items():\n tag_size_dict[l] = c\n\n return {\n \"dataset\": name,\n \"total_number_of_documents\": len(sentences),\n \"number_of_documents_per_class\": label_size_dict,\n \"number_of_tokens_per_tag\": tag_size_dict,\n \"number_of_tokens\": {\n \"total\": sum(tokens_per_sentence),\n \"min\": min(tokens_per_sentence),\n \"max\": max(tokens_per_sentence),\n \"avg\": sum(tokens_per_sentence) / len(sentences),\n },\n }\n\n @staticmethod\n def _get_tokens_per_sentence(sentences):\n return list(map(lambda x: len(x.tokens), sentences))\n\n @staticmethod\n def _get_class_to_count(sentences):\n class_to_count = defaultdict(lambda: 0)\n for sent in sentences:\n for label in sent.labels:\n class_to_count[label.value] += 1\n return class_to_count\n\n @staticmethod\n def _get_tag_to_count(sentences, tag_type):\n tag_to_count = defaultdict(lambda: 0)\n for sent in sentences:\n for word in sent.tokens:\n if tag_type in word.tags:\n label = word.tags[tag_type]\n tag_to_count[label.value] += 1\n return tag_to_count\n\n def __str__(self) -> str:\n return \"Corpus: %d train + %d dev + %d test sentences\" % (\n len(self.train),\n len(self.dev),\n len(self.test),\n )\n\n def make_label_dictionary(self) -> Dictionary:\n \"\"\"\n Creates a dictionary of all labels assigned to the sentences in the corpus.\n :return: dictionary of labels\n \"\"\"\n label_dictionary: Dictionary = Dictionary(add_unk=False)\n label_dictionary.multi_label = False\n\n from flair.datasets import DataLoader\n\n loader = DataLoader(self.train, batch_size=1)\n\n log.info(\"Computing label dictionary. 
Progress:\")\n for batch in Tqdm.tqdm(iter(loader)):\n\n for sentence in batch:\n\n for label in sentence.labels:\n label_dictionary.add_item(label.value)\n\n if not label_dictionary.multi_label:\n if len(sentence.labels) > 1:\n label_dictionary.multi_label = True\n\n log.info(label_dictionary.idx2item)\n\n return label_dictionary\n\n def get_label_distribution(self):\n class_to_count = defaultdict(lambda: 0)\n for sent in self.train:\n for label in sent.labels:\n class_to_count[label.value] += 1\n return class_to_count\n\n def get_all_sentences(self) -> Dataset:\n return ConcatDataset([self.train, self.dev, self.test])\n\n def make_tag_dictionary(self, tag_type: str) -> Dictionary:\n\n # Make the tag dictionary\n tag_dictionary: Dictionary = Dictionary()\n tag_dictionary.add_item(\"O\")\n for sentence in self.get_all_sentences():\n for token in sentence.tokens:\n tag_dictionary.add_item(token.get_tag(tag_type).value)\n tag_dictionary.add_item(\"<START>\")\n tag_dictionary.add_item(\"<STOP>\")\n return tag_dictionary\n\n\nclass MultiCorpus(Corpus):\n def __init__(self, corpora: List[Corpus], name: str = \"multicorpus\"):\n self.corpora: List[Corpus] = corpora\n\n super(MultiCorpus, self).__init__(\n ConcatDataset([corpus.train for corpus in self.corpora]),\n ConcatDataset([corpus.dev for corpus in self.corpora]),\n ConcatDataset([corpus.test for corpus in self.corpora]),\n name=name,\n )\n\n def __str__(self):\n return \"\\n\".join([str(corpus) for corpus in self.corpora])\n\n\ndef iob2(tags):\n \"\"\"\n Check that tags have a valid IOB format.\n Tags in IOB1 format are converted to IOB2.\n \"\"\"\n for i, tag in enumerate(tags):\n if tag.value == \"O\":\n continue\n split = tag.value.split(\"-\")\n if len(split) != 2 or split[0] not in [\"I\", \"B\"]:\n return False\n if split[0] == \"B\":\n continue\n elif i == 0 or tags[i - 1].value == \"O\": # conversion IOB1 to IOB2\n tags[i].value = \"B\" + tag.value[1:]\n elif tags[i - 1].value[1:] == tag.value[1:]:\n continue\n else: # conversion IOB1 to IOB2\n tags[i].value = \"B\" + tag.value[1:]\n return True\n\n\ndef iob_iobes(tags):\n \"\"\"\n IOB -> IOBES\n \"\"\"\n new_tags = []\n for i, tag in enumerate(tags):\n if tag.value == \"O\":\n new_tags.append(tag.value)\n elif tag.value.split(\"-\")[0] == \"B\":\n if i + 1 != len(tags) and tags[i + 1].value.split(\"-\")[0] == \"I\":\n new_tags.append(tag.value)\n else:\n new_tags.append(tag.value.replace(\"B-\", \"S-\"))\n elif tag.value.split(\"-\")[0] == \"I\":\n if i + 1 < len(tags) and tags[i + 1].value.split(\"-\")[0] == \"I\":\n new_tags.append(tag.value)\n else:\n new_tags.append(tag.value.replace(\"I-\", \"E-\"))\n else:\n raise Exception(\"Invalid IOB format!\")\n return new_tags\n",
"path": "flair/data.py"
}
] | [
{
"content": "from abc import abstractmethod\nfrom operator import itemgetter\nfrom typing import List, Dict, Union, Callable\nimport re\n\nimport torch, flair\nimport logging\n\nfrom collections import Counter\nfrom collections import defaultdict\n\nfrom segtok.segmenter import split_single\nfrom segtok.tokenizer import split_contractions\nfrom segtok.tokenizer import word_tokenizer\nfrom torch.utils.data import Dataset, random_split\nfrom torch.utils.data.dataset import ConcatDataset, Subset\n\nfrom flair.file_utils import Tqdm\n\nlog = logging.getLogger(\"flair\")\n\n\nclass Dictionary:\n \"\"\"\n This class holds a dictionary that maps strings to IDs, used to generate one-hot encodings of strings.\n \"\"\"\n\n def __init__(self, add_unk=True):\n # init dictionaries\n self.item2idx: Dict[str, int] = {}\n self.idx2item: List[str] = []\n self.multi_label: bool = False\n\n # in order to deal with unknown tokens, add <unk>\n if add_unk:\n self.add_item(\"<unk>\")\n\n def add_item(self, item: str) -> int:\n \"\"\"\n add string - if already in dictionary returns its ID. if not in dictionary, it will get a new ID.\n :param item: a string for which to assign an id.\n :return: ID of string\n \"\"\"\n item = item.encode(\"utf-8\")\n if item not in self.item2idx:\n self.idx2item.append(item)\n self.item2idx[item] = len(self.idx2item) - 1\n return self.item2idx[item]\n\n def get_idx_for_item(self, item: str) -> int:\n \"\"\"\n returns the ID of the string, otherwise 0\n :param item: string for which ID is requested\n :return: ID of string, otherwise 0\n \"\"\"\n item = item.encode(\"utf-8\")\n if item in self.item2idx.keys():\n return self.item2idx[item]\n else:\n return 0\n\n def get_idx_for_items(self, items: List[str]) -> List[int]:\n \"\"\"\n returns the IDs for each item of the list of string, otherwise 0 if not found\n :param items: List of string for which IDs are requested\n :return: List of ID of strings\n \"\"\"\n if not hasattr(self, \"item2idx_not_encoded\"):\n d = dict(\n [(key.decode(\"UTF-8\"), value) for key, value in self.item2idx.items()]\n )\n self.item2idx_not_encoded = defaultdict(int, d)\n\n if not items:\n return []\n results = itemgetter(*items)(self.item2idx_not_encoded)\n if isinstance(results, int):\n return [results]\n return list(results)\n\n def get_items(self) -> List[str]:\n items = []\n for item in self.idx2item:\n items.append(item.decode(\"UTF-8\"))\n return items\n\n def __len__(self) -> int:\n return len(self.idx2item)\n\n def get_item_for_index(self, idx):\n return self.idx2item[idx].decode(\"UTF-8\")\n\n def save(self, savefile):\n import pickle\n\n with open(savefile, \"wb\") as f:\n mappings = {\"idx2item\": self.idx2item, \"item2idx\": self.item2idx}\n pickle.dump(mappings, f)\n\n @classmethod\n def load_from_file(cls, filename: str):\n import pickle\n\n dictionary: Dictionary = Dictionary()\n with open(filename, \"rb\") as f:\n mappings = pickle.load(f, encoding=\"latin1\")\n idx2item = mappings[\"idx2item\"]\n item2idx = mappings[\"item2idx\"]\n dictionary.item2idx = item2idx\n dictionary.idx2item = idx2item\n return dictionary\n\n @classmethod\n def load(cls, name: str):\n from flair.file_utils import cached_path\n\n if name == \"chars\" or name == \"common-chars\":\n base_path = \"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n if name == \"chars-large\" or name == \"common-chars-large\":\n base_path = 
\"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters_large\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n if name == \"chars-xl\" or name == \"common-chars-xl\":\n base_path = \"https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models/common_characters_xl\"\n char_dict = cached_path(base_path, cache_dir=\"datasets\")\n return Dictionary.load_from_file(char_dict)\n\n return Dictionary.load_from_file(name)\n\n def __str__(self):\n tags = ', '.join(self.get_item_for_index(i) for i in range(min(len(self), 30)))\n return f\"Dictionary with {len(self)} tags: {tags}\"\n\n\nclass Label:\n \"\"\"\n This class represents a label of a sentence. Each label has a value and optionally a confidence score. The\n score needs to be between 0.0 and 1.0. Default value for the score is 1.0.\n \"\"\"\n\n def __init__(self, value: str, score: float = 1.0):\n self.value = value\n self.score = score\n super().__init__()\n\n @property\n def value(self):\n return self._value\n\n @value.setter\n def value(self, value):\n if not value and value != \"\":\n raise ValueError(\n \"Incorrect label value provided. Label value needs to be set.\"\n )\n else:\n self._value = value\n\n @property\n def score(self):\n return self._score\n\n @score.setter\n def score(self, score):\n if 0.0 <= score <= 1.0:\n self._score = score\n else:\n self._score = 1.0\n\n def to_dict(self):\n return {\"value\": self.value, \"confidence\": self.score}\n\n def __str__(self):\n return \"{} ({})\".format(self._value, self._score)\n\n def __repr__(self):\n return \"{} ({})\".format(self._value, self._score)\n\n\nclass DataPoint:\n @property\n @abstractmethod\n def embedding(self):\n pass\n\n @abstractmethod\n def to(self, device: str, pin_memory: bool = False):\n pass\n\n @abstractmethod\n def clear_embeddings(self, embedding_names: List[str] = None):\n pass\n\n\nclass DataPair(DataPoint):\n def __init__(self, first: DataPoint, second: DataPoint):\n self.first = first\n self.second = second\n\n def to(self, device: str, pin_memory: bool = False):\n self.first.to(device, pin_memory)\n self.second.to(device, pin_memory)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n self.first.clear_embeddings(embedding_names)\n self.second.clear_embeddings(embedding_names)\n\n def embedding(self):\n return torch.cat([self.first.embedding, self.second.embedding])\n\n def __str__(self):\n return f\"DataPoint:\\n first: {self.first}\\n second: {self.second}\"\n\n\nclass Token(DataPoint):\n \"\"\"\n This class represents one word in a tokenized sentence. Each token may have any number of tags. 
It may also point\n to its head in a dependency tree.\n \"\"\"\n\n def __init__(\n self,\n text: str,\n idx: int = None,\n head_id: int = None,\n whitespace_after: bool = True,\n start_position: int = None,\n ):\n self.text: str = text\n self.idx: int = idx\n self.head_id: int = head_id\n self.whitespace_after: bool = whitespace_after\n\n self.start_pos = start_position\n self.end_pos = (\n start_position + len(text) if start_position is not None else None\n )\n\n self.sentence: Sentence = None\n self._embeddings: Dict = {}\n self.tags: Dict[str, Label] = {}\n self.tags_proba_dist: Dict[str, List[Label]] = {}\n\n def add_tag_label(self, tag_type: str, tag: Label):\n self.tags[tag_type] = tag\n\n def add_tags_proba_dist(self, tag_type: str, tags: List[Label]):\n self.tags_proba_dist[tag_type] = tags\n\n def add_tag(self, tag_type: str, tag_value: str, confidence=1.0):\n tag = Label(tag_value, confidence)\n self.tags[tag_type] = tag\n\n def get_tag(self, tag_type: str) -> Label:\n if tag_type in self.tags:\n return self.tags[tag_type]\n return Label(\"\")\n\n def get_tags_proba_dist(self, tag_type: str) -> List[Label]:\n if tag_type in self.tags_proba_dist:\n return self.tags_proba_dist[tag_type]\n return []\n\n def get_head(self):\n return self.sentence.get_token(self.head_id)\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def to(self, device: str, pin_memory: bool = False):\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n ).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n def get_each_embedding(self) -> torch.tensor:\n embeddings = []\n for embed in sorted(self._embeddings.keys()):\n embed = self._embeddings[embed].to(flair.device)\n if (flair.embedding_storage_mode == \"cpu\") and embed.device != flair.device:\n embed = embed.to(flair.device)\n embeddings.append(embed)\n return embeddings\n\n def get_embedding(self) -> torch.tensor:\n embeddings = self.get_each_embedding()\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.tensor([], device=flair.device)\n\n @property\n def start_position(self) -> int:\n return self.start_pos\n\n @property\n def end_position(self) -> int:\n return self.end_pos\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def __str__(self) -> str:\n return (\n \"Token: {} {}\".format(self.idx, self.text)\n if self.idx is not None\n else \"Token: {}\".format(self.text)\n )\n\n def __repr__(self) -> str:\n return (\n \"Token: {} {}\".format(self.idx, self.text)\n if self.idx is not None\n else \"Token: {}\".format(self.text)\n )\n\n\nclass Span:\n \"\"\"\n This class represents one textual span consisting of Tokens. 
A span may have a tag.\n \"\"\"\n\n def __init__(self, tokens: List[Token], tag: str = None, score=1.0):\n self.tokens = tokens\n self.tag = tag\n self.score = score\n self.start_pos = None\n self.end_pos = None\n\n if tokens:\n self.start_pos = tokens[0].start_position\n self.end_pos = tokens[len(tokens) - 1].end_position\n\n @property\n def text(self) -> str:\n return \" \".join([t.text for t in self.tokens])\n\n def to_original_text(self) -> str:\n pos = self.tokens[0].start_pos\n if pos is None:\n return \" \".join([t.text for t in self.tokens])\n str = \"\"\n for t in self.tokens:\n while t.start_pos != pos:\n str += \" \"\n pos += 1\n\n str += t.text\n pos += len(t.text)\n\n return str\n\n def to_dict(self):\n return {\n \"text\": self.to_original_text(),\n \"start_pos\": self.start_pos,\n \"end_pos\": self.end_pos,\n \"type\": self.tag,\n \"confidence\": self.score,\n }\n\n def __str__(self) -> str:\n ids = \",\".join([str(t.idx) for t in self.tokens])\n return (\n '{}-span [{}]: \"{}\"'.format(self.tag, ids, self.text)\n if self.tag is not None\n else 'span [{}]: \"{}\"'.format(ids, self.text)\n )\n\n def __repr__(self) -> str:\n ids = \",\".join([str(t.idx) for t in self.tokens])\n return (\n '<{}-span ({}): \"{}\">'.format(self.tag, ids, self.text)\n if self.tag is not None\n else '<span ({}): \"{}\">'.format(ids, self.text)\n )\n\n\ndef space_tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer based on space character only.\n \"\"\"\n tokens: List[Token] = []\n word = \"\"\n index = -1\n for index, char in enumerate(text):\n if char == \" \":\n if len(word) > 0:\n start_position = index - len(word)\n tokens.append(\n Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n )\n\n word = \"\"\n else:\n word += char\n # increment for last token in sentence if not followed by whitespace\n index += 1\n if len(word) > 0:\n start_position = index - len(word)\n tokens.append(\n Token(text=word, start_position=start_position, whitespace_after=False)\n )\n return tokens\n\n\ndef build_japanese_tokenizer(tokenizer: str = \"MeCab\"):\n if tokenizer.lower() != \"mecab\":\n raise NotImplementedError(\"Currently, MeCab is only supported.\")\n\n try:\n import konoha\n except ModuleNotFoundError:\n log.warning(\"-\" * 100)\n log.warning('ATTENTION! 
The library \"konoha\" is not installed!')\n log.warning(\n 'To use Japanese tokenizer, please first install with the following steps:'\n )\n log.warning(\n '- Install mecab with \"sudo apt install mecab libmecab-dev mecab-ipadic\"'\n )\n log.warning('- Install konoha with \"pip install konoha[mecab]\"')\n log.warning(\"-\" * 100)\n pass\n\n sentence_tokenizer = konoha.SentenceTokenizer()\n word_tokenizer = konoha.WordTokenizer(tokenizer)\n\n def tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer using konoha, a third party library which supports\n multiple Japanese tokenizer such as MeCab, KyTea and SudachiPy.\n \"\"\"\n tokens: List[Token] = []\n words: List[str] = []\n\n sentences = sentence_tokenizer.tokenize(text)\n for sentence in sentences:\n konoha_tokens = word_tokenizer.tokenize(sentence)\n words.extend(list(map(str, konoha_tokens)))\n\n # determine offsets for whitespace_after field\n index = text.index\n current_offset = 0\n previous_word_offset = -1\n previous_token = None\n for word in words:\n try:\n word_offset = index(word, current_offset)\n start_position = word_offset\n except:\n word_offset = previous_word_offset + 1\n start_position = (\n current_offset + 1 if current_offset > 0 else current_offset\n )\n\n token = Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and word_offset - 1 == previous_word_offset:\n previous_token.whitespace_after = False\n\n current_offset = word_offset + len(word)\n previous_word_offset = current_offset - 1\n previous_token = token\n\n return tokens\n\n return tokenizer\n\n\ndef segtok_tokenizer(text: str) -> List[Token]:\n \"\"\"\n Tokenizer using segtok, a third party library dedicated to rules-based Indo-European languages.\n https://github.com/fnl/segtok\n \"\"\"\n tokens: List[Token] = []\n\n words: List[str] = []\n sentences = split_single(text)\n for sentence in sentences:\n contractions = split_contractions(word_tokenizer(sentence))\n words.extend(contractions)\n\n # determine offsets for whitespace_after field\n index = text.index\n current_offset = 0\n previous_word_offset = -1\n previous_token = None\n for word in words:\n try:\n word_offset = index(word, current_offset)\n start_position = word_offset\n except:\n word_offset = previous_word_offset + 1\n start_position = (\n current_offset + 1 if current_offset > 0 else current_offset\n )\n\n if word:\n token = Token(\n text=word, start_position=start_position, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and word_offset - 1 == previous_word_offset:\n previous_token.whitespace_after = False\n\n current_offset = word_offset + len(word)\n previous_word_offset = current_offset - 1\n previous_token = token\n\n return tokens\n\n\ndef build_spacy_tokenizer(model) -> Callable[[str], List[Token]]:\n \"\"\"\n Wrap Spacy model to build a tokenizer for the Sentence class.\n :param model a Spacy V2 model\n :return a tokenizer function to provide to Sentence class constructor\n \"\"\"\n try:\n from spacy.language import Language\n from spacy.tokens.doc import Doc\n from spacy.tokens.token import Token as SpacyToken\n except ImportError:\n raise ImportError(\n \"Please install Spacy v2.0 or better before using the Spacy tokenizer, otherwise you can use segtok_tokenizer as advanced tokenizer.\"\n )\n\n model: Language = model\n\n def tokenizer(text: str) -> List[Token]:\n doc: Doc = model.make_doc(text)\n previous_token = None\n tokens: List[Token] = []\n for word 
in doc:\n word: SpacyToken = word\n token = Token(\n text=word.text, start_position=word.idx, whitespace_after=True\n )\n tokens.append(token)\n\n if (previous_token is not None) and (\n token.start_pos - 1\n == previous_token.start_pos + len(previous_token.text)\n ):\n previous_token.whitespace_after = False\n\n previous_token = token\n return tokens\n\n return tokenizer\n\n\nclass Sentence(DataPoint):\n \"\"\"\n A Sentence is a list of Tokens and is used to represent a sentence or text fragment.\n \"\"\"\n\n def __init__(\n self,\n text: str = None,\n use_tokenizer: Union[bool, Callable[[str], List[Token]]] = space_tokenizer,\n labels: Union[List[Label], List[str]] = None,\n language_code: str = None,\n ):\n \"\"\"\n Class to hold all meta related to a text (tokens, predictions, language code, ...)\n :param text: original string\n :param use_tokenizer: a custom tokenizer (default is space based tokenizer,\n more advanced options are segtok_tokenizer to use segtok or build_spacy_tokenizer to use Spacy library\n if available). Check the code of space_tokenizer to implement your own (if you need it).\n If instead of providing a function, this parameter is just set to True, segtok will be used.\n :param labels:\n :param language_code:\n \"\"\"\n super(Sentence, self).__init__()\n\n self.tokens: List[Token] = []\n\n self.labels: List[Label] = []\n if labels is not None:\n self.add_labels(labels)\n\n self._embeddings: Dict = {}\n\n self.language_code: str = language_code\n\n tokenizer = use_tokenizer\n if type(use_tokenizer) == bool:\n tokenizer = segtok_tokenizer if use_tokenizer else space_tokenizer\n\n # if text is passed, instantiate sentence with tokens (words)\n if text is not None:\n text = self._restore_windows_1252_characters(text)\n [self.add_token(token) for token in tokenizer(text)]\n\n # log a warning if the dataset is empty\n if text == \"\":\n log.warning(\n \"ACHTUNG: An empty Sentence was created! 
Are there empty strings in your dataset?\"\n )\n\n self.tokenized = None\n\n def get_token(self, token_id: int) -> Token:\n for token in self.tokens:\n if token.idx == token_id:\n return token\n\n def add_token(self, token: Union[Token, str]):\n\n if type(token) is str:\n token = Token(token)\n\n self.tokens.append(token)\n\n # set token idx if not set\n token.sentence = self\n if token.idx is None:\n token.idx = len(self.tokens)\n\n def get_spans(self, tag_type: str, min_score=-1) -> List[Span]:\n\n spans: List[Span] = []\n\n current_span = []\n\n tags = defaultdict(lambda: 0.0)\n\n previous_tag_value: str = \"O\"\n for token in self:\n\n tag: Label = token.get_tag(tag_type)\n tag_value = tag.value\n\n # non-set tags are OUT tags\n if tag_value == \"\" or tag_value == \"O\":\n tag_value = \"O-\"\n\n # anything that is not a BIOES tag is a SINGLE tag\n if tag_value[0:2] not in [\"B-\", \"I-\", \"O-\", \"E-\", \"S-\"]:\n tag_value = \"S-\" + tag_value\n\n # anything that is not OUT is IN\n in_span = False\n if tag_value[0:2] not in [\"O-\"]:\n in_span = True\n\n # single and begin tags start a new span\n starts_new_span = False\n if tag_value[0:2] in [\"B-\", \"S-\"]:\n starts_new_span = True\n\n if (\n previous_tag_value[0:2] in [\"S-\"]\n and previous_tag_value[2:] != tag_value[2:]\n and in_span\n ):\n starts_new_span = True\n\n if (starts_new_span or not in_span) and len(current_span) > 0:\n scores = [t.get_tag(tag_type).score for t in current_span]\n span_score = sum(scores) / len(scores)\n if span_score > min_score:\n spans.append(\n Span(\n current_span,\n tag=sorted(\n tags.items(), key=lambda k_v: k_v[1], reverse=True\n )[0][0],\n score=span_score,\n )\n )\n current_span = []\n tags = defaultdict(lambda: 0.0)\n\n if in_span:\n current_span.append(token)\n weight = 1.1 if starts_new_span else 1.0\n tags[tag_value[2:]] += weight\n\n # remember previous tag\n previous_tag_value = tag_value\n\n if len(current_span) > 0:\n scores = [t.get_tag(tag_type).score for t in current_span]\n span_score = sum(scores) / len(scores)\n if span_score > min_score:\n spans.append(\n Span(\n current_span,\n tag=sorted(tags.items(), key=lambda k_v: k_v[1], reverse=True)[\n 0\n ][0],\n score=span_score,\n )\n )\n\n return spans\n\n def add_label(self, label: Union[Label, str]):\n if type(label) is Label:\n self.labels.append(label)\n\n elif type(label) is str:\n self.labels.append(Label(label))\n\n def add_labels(self, labels: Union[List[Label], List[str]]):\n for label in labels:\n self.add_label(label)\n\n def get_label_names(self) -> List[str]:\n return [label.value for label in self.labels]\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def get_embedding(self) -> torch.tensor:\n embeddings = []\n for embed in sorted(self._embeddings.keys()):\n embedding = self._embeddings[embed]\n embeddings.append(embedding)\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.Tensor()\n\n def to(self, device: str, pin_memory: bool = False):\n\n # move sentence embeddings to device\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n 
).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n # move token embeddings to device\n for token in self:\n token.to(device, pin_memory)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n\n # clear sentence embeddings\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n # clear token embeddings\n for token in self:\n token.clear_embeddings(embedding_names)\n\n def to_tagged_string(self, main_tag=None) -> str:\n list = []\n for token in self.tokens:\n list.append(token.text)\n\n tags: List[str] = []\n for tag_type in token.tags.keys():\n\n if main_tag is not None and main_tag != tag_type:\n continue\n\n if (\n token.get_tag(tag_type).value == \"\"\n or token.get_tag(tag_type).value == \"O\"\n ):\n continue\n tags.append(token.get_tag(tag_type).value)\n all_tags = \"<\" + \"/\".join(tags) + \">\"\n if all_tags != \"<>\":\n list.append(all_tags)\n return \" \".join(list)\n\n def to_tokenized_string(self) -> str:\n\n if self.tokenized is None:\n self.tokenized = \" \".join([t.text for t in self.tokens])\n\n return self.tokenized\n\n def to_plain_string(self):\n plain = \"\"\n for token in self.tokens:\n plain += token.text\n if token.whitespace_after:\n plain += \" \"\n return plain.rstrip()\n\n def convert_tag_scheme(self, tag_type: str = \"ner\", target_scheme: str = \"iob\"):\n\n tags: List[Label] = []\n for token in self.tokens:\n tags.append(token.get_tag(tag_type))\n\n if target_scheme == \"iob\":\n iob2(tags)\n\n if target_scheme == \"iobes\":\n iob2(tags)\n tags = iob_iobes(tags)\n\n for index, tag in enumerate(tags):\n self.tokens[index].add_tag(tag_type, tag)\n\n def infer_space_after(self):\n \"\"\"\n Heuristics in case you wish to infer whitespace_after values for tokenized text. 
This is useful for some old NLP\n tasks (such as CoNLL-03 and CoNLL-2000) that provide only tokenized data with no info of original whitespacing.\n :return:\n \"\"\"\n last_token = None\n quote_count: int = 0\n # infer whitespace after field\n\n for token in self.tokens:\n if token.text == '\"':\n quote_count += 1\n if quote_count % 2 != 0:\n token.whitespace_after = False\n elif last_token is not None:\n last_token.whitespace_after = False\n\n if last_token is not None:\n\n if token.text in [\".\", \":\", \",\", \";\", \")\", \"n't\", \"!\", \"?\"]:\n last_token.whitespace_after = False\n\n if token.text.startswith(\"'\"):\n last_token.whitespace_after = False\n\n if token.text in [\"(\"]:\n token.whitespace_after = False\n\n last_token = token\n return self\n\n def to_original_text(self) -> str:\n if len(self.tokens) > 0 and (self.tokens[0].start_pos is None):\n return \" \".join([t.text for t in self.tokens])\n str = \"\"\n pos = 0\n for t in self.tokens:\n while t.start_pos != pos:\n str += \" \"\n pos += 1\n\n str += t.text\n pos += len(t.text)\n\n return str\n\n def to_dict(self, tag_type: str = None):\n labels = []\n entities = []\n\n if tag_type:\n entities = [span.to_dict() for span in self.get_spans(tag_type)]\n if self.labels:\n labels = [l.to_dict() for l in self.labels]\n\n return {\"text\": self.to_original_text(), \"labels\": labels, \"entities\": entities}\n\n def __getitem__(self, idx: int) -> Token:\n return self.tokens[idx]\n\n def __iter__(self):\n return iter(self.tokens)\n\n def __repr__(self):\n return 'Sentence: \"{}\" - {} Tokens'.format(\n \" \".join([t.text for t in self.tokens]), len(self)\n )\n\n def __copy__(self):\n s = Sentence()\n for token in self.tokens:\n nt = Token(token.text)\n for tag_type in token.tags:\n nt.add_tag(\n tag_type,\n token.get_tag(tag_type).value,\n token.get_tag(tag_type).score,\n )\n\n s.add_token(nt)\n return s\n\n def __str__(self) -> str:\n\n if self.labels:\n return f'Sentence: \"{self.to_tokenized_string()}\" - {len(self)} Tokens - Labels: {self.labels} '\n else:\n return f'Sentence: \"{self.to_tokenized_string()}\" - {len(self)} Tokens'\n\n def __len__(self) -> int:\n return len(self.tokens)\n\n def get_language_code(self) -> str:\n if self.language_code is None:\n import langdetect\n\n try:\n self.language_code = langdetect.detect(self.to_plain_string())\n except:\n self.language_code = \"en\"\n\n return self.language_code\n\n @staticmethod\n def _restore_windows_1252_characters(text: str) -> str:\n def to_windows_1252(match):\n try:\n return bytes([ord(match.group(0))]).decode(\"windows-1252\")\n except UnicodeDecodeError:\n # No character at the corresponding code point: remove it\n return \"\"\n\n return re.sub(r\"[\\u0080-\\u0099]\", to_windows_1252, text)\n\n\nclass Image(DataPoint):\n def __init__(self, data=None, imageURL=None):\n self.data = data\n self._embeddings: Dict = {}\n self.imageURL = imageURL\n\n @property\n def embedding(self):\n return self.get_embedding()\n\n def __str__(self):\n\n image_repr = self.data.size() if self.data else \"\"\n image_url = self.imageURL if self.imageURL else \"\"\n\n return f\"Image: {image_repr} {image_url}\"\n\n def get_embedding(self) -> torch.tensor:\n embeddings = [\n self._embeddings[embed] for embed in sorted(self._embeddings.keys())\n ]\n\n if embeddings:\n return torch.cat(embeddings, dim=0)\n\n return torch.tensor([], device=flair.device)\n\n def set_embedding(self, name: str, vector: torch.tensor):\n device = flair.device\n if (flair.embedding_storage_mode == \"cpu\") and 
len(self._embeddings.keys()) > 0:\n device = next(iter(self._embeddings.values())).device\n if device != vector.device:\n vector = vector.to(device)\n self._embeddings[name] = vector\n\n def to(self, device: str, pin_memory: bool = False):\n for name, vector in self._embeddings.items():\n if str(vector.device) != str(device):\n if pin_memory:\n self._embeddings[name] = vector.to(\n device, non_blocking=True\n ).pin_memory()\n else:\n self._embeddings[name] = vector.to(device, non_blocking=True)\n\n def clear_embeddings(self, embedding_names: List[str] = None):\n if embedding_names is None:\n self._embeddings: Dict = {}\n else:\n for name in embedding_names:\n if name in self._embeddings.keys():\n del self._embeddings[name]\n\n\nclass FlairDataset(Dataset):\n @abstractmethod\n def is_in_memory(self) -> bool:\n pass\n\n\nclass Corpus:\n def __init__(\n self,\n train: FlairDataset,\n dev: FlairDataset,\n test: FlairDataset,\n name: str = \"corpus\",\n ):\n self._train: FlairDataset = train\n self._dev: FlairDataset = dev\n self._test: FlairDataset = test\n self.name: str = name\n\n @property\n def train(self) -> FlairDataset:\n return self._train\n\n @property\n def dev(self) -> FlairDataset:\n return self._dev\n\n @property\n def test(self) -> FlairDataset:\n return self._test\n\n def downsample(self, percentage: float = 0.1, only_downsample_train=False):\n\n self._train = self._downsample_to_proportion(self.train, percentage)\n if not only_downsample_train:\n self._dev = self._downsample_to_proportion(self.dev, percentage)\n self._test = self._downsample_to_proportion(self.test, percentage)\n\n return self\n\n def filter_empty_sentences(self):\n log.info(\"Filtering empty sentences\")\n self._train = Corpus._filter_empty_sentences(self._train)\n self._test = Corpus._filter_empty_sentences(self._test)\n self._dev = Corpus._filter_empty_sentences(self._dev)\n log.info(self)\n\n @staticmethod\n def _filter_empty_sentences(dataset) -> Dataset:\n\n # find out empty sentence indices\n empty_sentence_indices = []\n non_empty_sentence_indices = []\n index = 0\n\n from flair.datasets import DataLoader\n\n for batch in DataLoader(dataset):\n for sentence in batch:\n if len(sentence) == 0:\n empty_sentence_indices.append(index)\n else:\n non_empty_sentence_indices.append(index)\n index += 1\n\n # create subset of non-empty sentence indices\n subset = Subset(dataset, non_empty_sentence_indices)\n\n return subset\n\n def make_vocab_dictionary(self, max_tokens=-1, min_freq=1) -> Dictionary:\n \"\"\"\n Creates a dictionary of all tokens contained in the corpus.\n By defining `max_tokens` you can set the maximum number of tokens that should be contained in the dictionary.\n If there are more than `max_tokens` tokens in the corpus, the most frequent tokens are added first.\n If `min_freq` is set the a value greater than 1 only tokens occurring more than `min_freq` times are considered\n to be added to the dictionary.\n :param max_tokens: the maximum number of tokens that should be added to the dictionary (-1 = take all tokens)\n :param min_freq: a token needs to occur at least `min_freq` times to be added to the dictionary (-1 = there is no limitation)\n :return: dictionary of tokens\n \"\"\"\n tokens = self._get_most_common_tokens(max_tokens, min_freq)\n\n vocab_dictionary: Dictionary = Dictionary()\n for token in tokens:\n vocab_dictionary.add_item(token)\n\n return vocab_dictionary\n\n def _get_most_common_tokens(self, max_tokens, min_freq) -> List[str]:\n tokens_and_frequencies = 
Counter(self._get_all_tokens())\n tokens_and_frequencies = tokens_and_frequencies.most_common()\n\n tokens = []\n for token, freq in tokens_and_frequencies:\n if (min_freq != -1 and freq < min_freq) or (\n max_tokens != -1 and len(tokens) == max_tokens\n ):\n break\n tokens.append(token)\n return tokens\n\n def _get_all_tokens(self) -> List[str]:\n tokens = list(map((lambda s: s.tokens), self.train))\n tokens = [token for sublist in tokens for token in sublist]\n return list(map((lambda t: t.text), tokens))\n\n @staticmethod\n def _downsample_to_proportion(dataset: Dataset, proportion: float):\n\n sampled_size: int = round(len(dataset) * proportion)\n splits = random_split(dataset, [len(dataset) - sampled_size, sampled_size])\n return splits[1]\n\n def obtain_statistics(\n self, tag_type: str = None, pretty_print: bool = True\n ) -> dict:\n \"\"\"\n Print statistics about the class distribution (only labels of sentences are taken into account) and sentence\n sizes.\n \"\"\"\n json_string = {\n \"TRAIN\": self._obtain_statistics_for(self.train, \"TRAIN\", tag_type),\n \"TEST\": self._obtain_statistics_for(self.test, \"TEST\", tag_type),\n \"DEV\": self._obtain_statistics_for(self.dev, \"DEV\", tag_type),\n }\n if pretty_print:\n import json\n\n json_string = json.dumps(json_string, indent=4)\n return json_string\n\n @staticmethod\n def _obtain_statistics_for(sentences, name, tag_type) -> dict:\n if len(sentences) == 0:\n return {}\n\n classes_to_count = Corpus._get_class_to_count(sentences)\n tags_to_count = Corpus._get_tag_to_count(sentences, tag_type)\n tokens_per_sentence = Corpus._get_tokens_per_sentence(sentences)\n\n label_size_dict = {}\n for l, c in classes_to_count.items():\n label_size_dict[l] = c\n\n tag_size_dict = {}\n for l, c in tags_to_count.items():\n tag_size_dict[l] = c\n\n return {\n \"dataset\": name,\n \"total_number_of_documents\": len(sentences),\n \"number_of_documents_per_class\": label_size_dict,\n \"number_of_tokens_per_tag\": tag_size_dict,\n \"number_of_tokens\": {\n \"total\": sum(tokens_per_sentence),\n \"min\": min(tokens_per_sentence),\n \"max\": max(tokens_per_sentence),\n \"avg\": sum(tokens_per_sentence) / len(sentences),\n },\n }\n\n @staticmethod\n def _get_tokens_per_sentence(sentences):\n return list(map(lambda x: len(x.tokens), sentences))\n\n @staticmethod\n def _get_class_to_count(sentences):\n class_to_count = defaultdict(lambda: 0)\n for sent in sentences:\n for label in sent.labels:\n class_to_count[label.value] += 1\n return class_to_count\n\n @staticmethod\n def _get_tag_to_count(sentences, tag_type):\n tag_to_count = defaultdict(lambda: 0)\n for sent in sentences:\n for word in sent.tokens:\n if tag_type in word.tags:\n label = word.tags[tag_type]\n tag_to_count[label.value] += 1\n return tag_to_count\n\n def __str__(self) -> str:\n return \"Corpus: %d train + %d dev + %d test sentences\" % (\n len(self.train),\n len(self.dev),\n len(self.test),\n )\n\n def make_label_dictionary(self) -> Dictionary:\n \"\"\"\n Creates a dictionary of all labels assigned to the sentences in the corpus.\n :return: dictionary of labels\n \"\"\"\n label_dictionary: Dictionary = Dictionary(add_unk=False)\n label_dictionary.multi_label = False\n\n from flair.datasets import DataLoader\n\n loader = DataLoader(self.train, batch_size=1)\n\n log.info(\"Computing label dictionary. 
Progress:\")\n for batch in Tqdm.tqdm(iter(loader)):\n\n for sentence in batch:\n\n for label in sentence.labels:\n label_dictionary.add_item(label.value)\n\n if not label_dictionary.multi_label:\n if len(sentence.labels) > 1:\n label_dictionary.multi_label = True\n\n log.info(label_dictionary.idx2item)\n\n return label_dictionary\n\n def get_label_distribution(self):\n class_to_count = defaultdict(lambda: 0)\n for sent in self.train:\n for label in sent.labels:\n class_to_count[label.value] += 1\n return class_to_count\n\n def get_all_sentences(self) -> Dataset:\n return ConcatDataset([self.train, self.dev, self.test])\n\n def make_tag_dictionary(self, tag_type: str) -> Dictionary:\n\n # Make the tag dictionary\n tag_dictionary: Dictionary = Dictionary()\n tag_dictionary.add_item(\"O\")\n for sentence in self.get_all_sentences():\n for token in sentence.tokens:\n tag_dictionary.add_item(token.get_tag(tag_type).value)\n tag_dictionary.add_item(\"<START>\")\n tag_dictionary.add_item(\"<STOP>\")\n return tag_dictionary\n\n\nclass MultiCorpus(Corpus):\n def __init__(self, corpora: List[Corpus], name: str = \"multicorpus\"):\n self.corpora: List[Corpus] = corpora\n\n super(MultiCorpus, self).__init__(\n ConcatDataset([corpus.train for corpus in self.corpora]),\n ConcatDataset([corpus.dev for corpus in self.corpora]),\n ConcatDataset([corpus.test for corpus in self.corpora]),\n name=name,\n )\n\n def __str__(self):\n return \"\\n\".join([str(corpus) for corpus in self.corpora])\n\n\ndef iob2(tags):\n \"\"\"\n Check that tags have a valid IOB format.\n Tags in IOB1 format are converted to IOB2.\n \"\"\"\n for i, tag in enumerate(tags):\n if tag.value == \"O\":\n continue\n split = tag.value.split(\"-\")\n if len(split) != 2 or split[0] not in [\"I\", \"B\"]:\n return False\n if split[0] == \"B\":\n continue\n elif i == 0 or tags[i - 1].value == \"O\": # conversion IOB1 to IOB2\n tags[i].value = \"B\" + tag.value[1:]\n elif tags[i - 1].value[1:] == tag.value[1:]:\n continue\n else: # conversion IOB1 to IOB2\n tags[i].value = \"B\" + tag.value[1:]\n return True\n\n\ndef iob_iobes(tags):\n \"\"\"\n IOB -> IOBES\n \"\"\"\n new_tags = []\n for i, tag in enumerate(tags):\n if tag.value == \"O\":\n new_tags.append(tag.value)\n elif tag.value.split(\"-\")[0] == \"B\":\n if i + 1 != len(tags) and tags[i + 1].value.split(\"-\")[0] == \"I\":\n new_tags.append(tag.value)\n else:\n new_tags.append(tag.value.replace(\"B-\", \"S-\"))\n elif tag.value.split(\"-\")[0] == \"I\":\n if i + 1 < len(tags) and tags[i + 1].value.split(\"-\")[0] == \"I\":\n new_tags.append(tag.value)\n else:\n new_tags.append(tag.value.replace(\"I-\", \"E-\"))\n else:\n raise Exception(\"Invalid IOB format!\")\n return new_tags\n",
"path": "flair/data.py"
}
] | diff --git a/flair/data.py b/flair/data.py
index 0e35dcaa8b..8f6bafed92 100644
--- a/flair/data.py
+++ b/flair/data.py
@@ -131,6 +131,10 @@ def load(cls, name: str):
return Dictionary.load_from_file(name)
+ def __str__(self):
+ tags = ', '.join(self.get_item_for_index(i) for i in range(min(len(self), 30)))
+ return f"Dictionary with {len(self)} tags: {tags}"
+
class Label:
"""
|
lra__mackup-1412 | AssertionError on Ubuntu 18.04.2 LTS, Mackup 0.8.25, Python 3.6.7
I'm trying to `mackup restore` on a machine running
- Ubuntu 18.04.2 LTS
- Mackup 0.8.25
- Python 3.6.7
It fails immediately with the following:
```
Traceback (most recent call last):
File "/home/REDACTED/.pyenv/versions/3.6.7/bin/mackup", line 10, in <module>
sys.exit(main())
File "/home/REDACTED/.pyenv/versions/3.6.7/lib/python3.6/site-packages/mackup/main.py", line 102, in main
verbose)
File "/home/REDACTED/.pyenv/versions/3.6.7/lib/python3.6/site-packages/mackup/application.py", line 26, in __init__
assert isinstance(files, set)
AssertionError
```
I sync via Dropbox, and to debug I made a tar.gz of the original Mackup folder and copied/extracted it directly, with no luck :( I'm not sure how to proceed with further debugging; I've also tried `mackup restore -v` with no luck.
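The traceback above points at `Application.__init__` asserting `isinstance(files, set)`, while `ApplicationsDatabase.get_files` in 0.8.25 returns `sorted(...)` (a list), which matches the `get_files` change in the diff below. A minimal, self-contained sketch of that mismatch (the function names here are illustrative, not Mackup's API):
```python
def get_files_v0_8_25(configuration_files):
    # sorted() always returns a list, even when given a set
    return sorted(configuration_files)

def get_files_fixed(configuration_files):
    # the hotfix in the diff below returns the stored set unchanged
    return configuration_files

class Application:
    def __init__(self, files):
        # mirrors the assert from mackup/application.py in the traceback
        assert isinstance(files, set)
        self.files = files

config_files = {".bashrc", ".gitconfig"}
try:
    Application(get_files_v0_8_25(config_files))
except AssertionError:
    print("AssertionError, as in the report")  # a list is not a set
Application(get_files_fixed(config_files))  # the set is accepted after the fix
```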
| [
{
"content": "\"\"\"\nThe applications database.\n\nThe Applications Database provides an easy to use interface to load application\ndata from the Mackup Database (files).\n\"\"\"\nimport os\n\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\n\nfrom .constants import APPS_DIR\nfrom .constants import CUSTOM_APPS_DIR\n\n\nclass ApplicationsDatabase(object):\n\n \"\"\"Database containing all the configured applications.\"\"\"\n\n def __init__(self):\n \"\"\"Create a ApplicationsDatabase instance.\"\"\"\n # Build the dict that will contain the properties of each application\n self.apps = dict()\n\n for config_file in ApplicationsDatabase.get_config_files():\n config = configparser.SafeConfigParser(allow_no_value=True)\n\n # Needed to not lowercase the configuration_files in the ini files\n config.optionxform = str\n\n if config.read(config_file):\n # Get the filename without the directory name\n filename = os.path.basename(config_file)\n # The app name is the cfg filename with the extension\n app_name = filename[:-len('.cfg')]\n\n # Start building a dict for this app\n self.apps[app_name] = dict()\n\n # Add the fancy name for the app, for display purpose\n app_pretty_name = config.get('application', 'name')\n self.apps[app_name]['name'] = app_pretty_name\n\n # Add the configuration files to sync\n self.apps[app_name]['configuration_files'] = set()\n if config.has_section('configuration_files'):\n for path in config.options('configuration_files'):\n if path.startswith('/'):\n raise ValueError('Unsupported absolute path: {}'\n .format(path))\n self.apps[app_name]['configuration_files'].add(path)\n\n # Add the XDG configuration files to sync\n home = os.path.expanduser('~/')\n failobj = \"{}.config\".format(home)\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME', failobj)\n if not xdg_config_home.startswith(home):\n raise ValueError('$XDG_CONFIG_HOME: {} must be '\n 'somewhere within your home '\n 'directory: {}'\n .format(xdg_config_home, home))\n if config.has_section('xdg_configuration_files'):\n for path in config.options('xdg_configuration_files'):\n if path.startswith('/'):\n raise ValueError('Unsupported absolute path: '\n '{}'\n .format(path))\n path = os.path.join(xdg_config_home, path)\n path = path.replace(home, '')\n (self.apps[app_name]['configuration_files']\n .add(path))\n\n @staticmethod\n def get_config_files():\n \"\"\"\n Return the application configuration files.\n\n Return a list of configuration files describing the apps supported by\n Mackup. The files returned are absolute full path to those files.\n e.g. 
/usr/lib/mackup/applications/bash.cfg\n\n Only one config file per application should be returned, custom config\n having a priority over stock config.\n\n Returns:\n set of strings.\n \"\"\"\n # Configure the config parser\n apps_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)),\n APPS_DIR)\n custom_apps_dir = os.path.join(os.environ['HOME'], CUSTOM_APPS_DIR)\n\n # List of stock application config files\n config_files = set()\n\n # Temp list of user added app config file names\n custom_files = set()\n\n # Get the list of custom application config files first\n if os.path.isdir(custom_apps_dir):\n for filename in os.listdir(custom_apps_dir):\n if filename.endswith('.cfg'):\n config_files.add(os.path.join(custom_apps_dir,\n filename))\n # Also add it to the set of custom apps, so that we don't\n # add the stock config for the same app too\n custom_files.add(filename)\n\n # Add the default provided app config files, but only if those are not\n # customized, as we don't want to overwrite custom app config.\n for filename in os.listdir(apps_dir):\n if filename.endswith('.cfg') and filename not in custom_files:\n config_files.add(os.path.join(apps_dir, filename))\n\n return config_files\n\n def get_name(self, name):\n \"\"\"\n Return the fancy name of an application.\n\n Args:\n name (str)\n\n Returns:\n str\n \"\"\"\n return self.apps[name]['name']\n\n def get_files(self, name):\n \"\"\"\n Return the list of config files of an application.\n\n Args:\n name (str)\n\n Returns:\n set of str.\n \"\"\"\n return sorted(self.apps[name]['configuration_files'])\n\n def get_app_names(self):\n \"\"\"\n Return application names.\n\n Return the list of application names that are available in the\n database.\n\n Returns:\n set of str.\n \"\"\"\n app_names = set()\n for name in self.apps:\n app_names.add(name)\n\n return app_names\n\n def get_pretty_app_names(self):\n \"\"\"\n Return the list of pretty app names that are available in the database.\n\n Returns:\n set of str.\n \"\"\"\n pretty_app_names = set()\n for app_name in self.get_app_names():\n pretty_app_names.add(self.get_name(app_name))\n\n return pretty_app_names\n",
"path": "mackup/appsdb.py"
}
] | [
{
"content": "\"\"\"\nThe applications database.\n\nThe Applications Database provides an easy to use interface to load application\ndata from the Mackup Database (files).\n\"\"\"\nimport os\n\ntry:\n import configparser\nexcept ImportError:\n import ConfigParser as configparser\n\n\nfrom .constants import APPS_DIR\nfrom .constants import CUSTOM_APPS_DIR\n\n\nclass ApplicationsDatabase(object):\n\n \"\"\"Database containing all the configured applications.\"\"\"\n\n def __init__(self):\n \"\"\"Create a ApplicationsDatabase instance.\"\"\"\n # Build the dict that will contain the properties of each application\n self.apps = dict()\n\n for config_file in ApplicationsDatabase.get_config_files():\n config = configparser.SafeConfigParser(allow_no_value=True)\n\n # Needed to not lowercase the configuration_files in the ini files\n config.optionxform = str\n\n if config.read(config_file):\n # Get the filename without the directory name\n filename = os.path.basename(config_file)\n # The app name is the cfg filename with the extension\n app_name = filename[:-len('.cfg')]\n\n # Start building a dict for this app\n self.apps[app_name] = dict()\n\n # Add the fancy name for the app, for display purpose\n app_pretty_name = config.get('application', 'name')\n self.apps[app_name]['name'] = app_pretty_name\n\n # Add the configuration files to sync\n self.apps[app_name]['configuration_files'] = set()\n if config.has_section('configuration_files'):\n for path in config.options('configuration_files'):\n if path.startswith('/'):\n raise ValueError('Unsupported absolute path: {}'\n .format(path))\n self.apps[app_name]['configuration_files'].add(path)\n\n # Add the XDG configuration files to sync\n home = os.path.expanduser('~/')\n failobj = \"{}.config\".format(home)\n xdg_config_home = os.environ.get('XDG_CONFIG_HOME', failobj)\n if not xdg_config_home.startswith(home):\n raise ValueError('$XDG_CONFIG_HOME: {} must be '\n 'somewhere within your home '\n 'directory: {}'\n .format(xdg_config_home, home))\n if config.has_section('xdg_configuration_files'):\n for path in config.options('xdg_configuration_files'):\n if path.startswith('/'):\n raise ValueError('Unsupported absolute path: '\n '{}'\n .format(path))\n path = os.path.join(xdg_config_home, path)\n path = path.replace(home, '')\n (self.apps[app_name]['configuration_files']\n .add(path))\n\n @staticmethod\n def get_config_files():\n \"\"\"\n Return the application configuration files.\n\n Return a list of configuration files describing the apps supported by\n Mackup. The files returned are absolute full path to those files.\n e.g. 
/usr/lib/mackup/applications/bash.cfg\n\n Only one config file per application should be returned, custom config\n having a priority over stock config.\n\n Returns:\n set of strings.\n \"\"\"\n # Configure the config parser\n apps_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)),\n APPS_DIR)\n custom_apps_dir = os.path.join(os.environ['HOME'], CUSTOM_APPS_DIR)\n\n # List of stock application config files\n config_files = set()\n\n # Temp list of user added app config file names\n custom_files = set()\n\n # Get the list of custom application config files first\n if os.path.isdir(custom_apps_dir):\n for filename in os.listdir(custom_apps_dir):\n if filename.endswith('.cfg'):\n config_files.add(os.path.join(custom_apps_dir,\n filename))\n # Also add it to the set of custom apps, so that we don't\n # add the stock config for the same app too\n custom_files.add(filename)\n\n # Add the default provided app config files, but only if those are not\n # customized, as we don't want to overwrite custom app config.\n for filename in os.listdir(apps_dir):\n if filename.endswith('.cfg') and filename not in custom_files:\n config_files.add(os.path.join(apps_dir, filename))\n\n return config_files\n\n def get_name(self, name):\n \"\"\"\n Return the fancy name of an application.\n\n Args:\n name (str)\n\n Returns:\n str\n \"\"\"\n return self.apps[name]['name']\n\n def get_files(self, name):\n \"\"\"\n Return the list of config files of an application.\n\n Args:\n name (str)\n\n Returns:\n set of str.\n \"\"\"\n return self.apps[name]['configuration_files']\n\n def get_app_names(self):\n \"\"\"\n Return application names.\n\n Return the list of application names that are available in the\n database.\n\n Returns:\n set of str.\n \"\"\"\n app_names = set()\n for name in self.apps:\n app_names.add(name)\n\n return app_names\n\n def get_pretty_app_names(self):\n \"\"\"\n Return the list of pretty app names that are available in the database.\n\n Returns:\n set of str.\n \"\"\"\n pretty_app_names = set()\n for app_name in self.get_app_names():\n pretty_app_names.add(self.get_name(app_name))\n\n return pretty_app_names\n",
"path": "mackup/appsdb.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 04c4026cb..485ce9631 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,8 @@
## WIP
+- Hotfix, Mackup could not run in most scenarios
+
## Mackup 0.8.25
- Add support for yabai (via @mbdmbd)
diff --git a/mackup/appsdb.py b/mackup/appsdb.py
index 88df5fb07..01e881a20 100644
--- a/mackup/appsdb.py
+++ b/mackup/appsdb.py
@@ -139,7 +139,7 @@ def get_files(self, name):
Returns:
set of str.
"""
- return sorted(self.apps[name]['configuration_files'])
+ return self.apps[name]['configuration_files']
def get_app_names(self):
"""
|
aws-cloudformation__cfn-lint-1081 | Error running cfn-lint with pipe (|)
cfn-lint version: *v0.23.0*
Hello, we have a problem running cfn-lint with the `find` command. Only this version is affected, as far as we know.
We keep a couple of templates in a folder and lint them like this:
```
find ./templates -type f | xargs cfn-lint -f parseable -c I -t
```
It worked flawlessly before, but with the new update we are getting this error:
> 2019-08-02 15:37:01,818 - cfnlint.decode - ERROR - Template file not found: None
None:1:1:1:2:E0000:Template file not found: None
Splitting the files onto separate lines with `xargs -L 1` doesn't help.
If you run the cfn-lint command on its own, it works as expected.
This example **doesn't** work:
```
find ./templates -type f | xargs -t cfn-lint -f parseable -c I -t
cfn-lint -f parseable -c I -t ./templates/t1.yml ./templates/t2.yml ./templates/t3.yml
2019-08-02 15:50:20,891 - cfnlint.decode - ERROR - Template file not found: None
None:1:1:1:2:E0000:Template file not found: None
```
This example works:
```
cfn-lint -f parseable -c I -t ./templates/t1.yml ./templates/t2.yml ./templates/t3.yml
```
Regards TT
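Based on the `get_args_filenames` code and the diff below, the likely trigger is that piping `find` into `xargs` leaves stdin attached to a pipe, so `sys.stdin.isatty()` is False and the `-t` filenames are replaced by `[None]`. A minimal sketch of that branch (the wrapper functions are illustrative; only the `isatty()` / `config.templates` condition comes from cfn-lint):
```python
import sys

def filenames_v0_23_0(templates):
    # under `find ... | xargs cfn-lint -t ...` stdin is a pipe, so isatty()
    # is False and the -t arguments are discarded in favour of stdin
    if not sys.stdin.isatty():
        return [None]
    return templates

def filenames_fixed(templates):
    # the fix only falls back to stdin when no templates were passed
    if not sys.stdin.isatty() and not templates:
        return [None]
    return templates

templates = ["./templates/t1.yml", "./templates/t2.yml"]
print(filenames_v0_23_0(templates))  # [None] when stdin is piped
print(filenames_fixed(templates))    # the template list either way
```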
| [
{
"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport logging\nimport os\nimport sys\nfrom jsonschema.exceptions import ValidationError\nfrom cfnlint import RulesCollection\nimport cfnlint.config\nimport cfnlint.formatters\nimport cfnlint.decode\nimport cfnlint.maintenance\nfrom cfnlint.helpers import REGIONS\n\n\nLOGGER = logging.getLogger('cfnlint')\nDEFAULT_RULESDIR = os.path.join(os.path.dirname(__file__), 'rules')\n\n\nclass CfnLintExitException(Exception):\n \"\"\"Generic exception used when the cli should exit\"\"\"\n def __init__(self, msg=None, exit_code=1):\n if msg is None:\n msg = 'process failed with exit code %s' % exit_code\n super(CfnLintExitException, self).__init__(msg)\n self.exit_code = exit_code\n\n\nclass InvalidRegionException(CfnLintExitException):\n \"\"\"When an unsupported/invalid region is supplied\"\"\"\n\n\nclass UnexpectedRuleException(CfnLintExitException):\n \"\"\"When processing a rule fails in an unexpected way\"\"\"\n\n\ndef run_cli(filename, template, rules, regions, override_spec):\n \"\"\"Process args and run\"\"\"\n\n if override_spec:\n cfnlint.helpers.override_specs(override_spec)\n\n return run_checks(filename, template, rules, regions)\n\n\ndef get_exit_code(matches):\n \"\"\" Determine exit code \"\"\"\n exit_code = 0\n for match in matches:\n if match.rule.id[0] == 'I':\n exit_code = exit_code | 8\n elif match.rule.id[0] == 'W':\n exit_code = exit_code | 4\n elif match.rule.id[0] == 'E':\n exit_code = exit_code | 2\n\n return exit_code\n\n\ndef get_formatter(fmt):\n \"\"\" Get Formatter\"\"\"\n formatter = {}\n if fmt:\n if fmt == 'quiet':\n formatter = cfnlint.formatters.QuietFormatter()\n elif fmt == 'parseable':\n # pylint: disable=bad-option-value\n formatter = cfnlint.formatters.ParseableFormatter()\n elif fmt == 'json':\n formatter = cfnlint.formatters.JsonFormatter()\n else:\n formatter = cfnlint.formatters.Formatter()\n\n return formatter\n\n\ndef get_rules(rulesdir, ignore_rules, include_rules, configure_rules=None, include_experimental=False):\n \"\"\"Get rules\"\"\"\n rules = RulesCollection(ignore_rules, include_rules, configure_rules, include_experimental)\n rules_dirs = [DEFAULT_RULESDIR] + rulesdir\n try:\n for rules_dir in rules_dirs:\n rules.create_from_directory(rules_dir)\n except OSError as e:\n raise UnexpectedRuleException('Tried to append rules but got an error: %s' % str(e), 1)\n return rules\n\n\ndef configure_logging(debug_logging):\n \"\"\" Backwards compatibility for integrators \"\"\"\n LOGGER.info('Update your integrations to use \"cfnlint.config.configure_logging\" instead')\n 
cfnlint.config.configure_logging(debug_logging, False)\n\n\ndef get_args_filenames(cli_args):\n \"\"\" Get Template Configuration items and set them as default values\"\"\"\n try:\n config = cfnlint.config.ConfigMixIn(cli_args)\n except ValidationError as e:\n LOGGER.error('Error parsing config file: %s', str(e))\n exit(1)\n\n fmt = config.format\n formatter = get_formatter(fmt)\n\n if config.update_specs:\n cfnlint.maintenance.update_resource_specs()\n exit(0)\n\n if config.update_documentation:\n # Get ALL rules (ignore the CLI settings))\n documentation_rules = cfnlint.core.get_rules([], [], ['I', 'E', 'W'], {}, True)\n cfnlint.maintenance.update_documentation(documentation_rules)\n exit(0)\n\n if config.update_iam_policies:\n cfnlint.maintenance.update_iam_policies()\n exit(0)\n\n if config.listrules:\n rules = cfnlint.core.get_rules(\n config.append_rules,\n config.ignore_checks,\n config.include_checks,\n config.configure_rules\n )\n print(rules)\n exit(0)\n\n if not sys.stdin.isatty():\n return(config, [None], formatter)\n\n if not config.templates:\n # Not specified, print the help\n config.parser.print_help()\n exit(1)\n\n return(config, config.templates, formatter)\n\n\ndef get_template_rules(filename, args):\n \"\"\" Get Template Configuration items and set them as default values\"\"\"\n\n (template, matches) = cfnlint.decode.decode(filename, args.ignore_bad_template)\n\n if matches:\n return(template, [], matches)\n\n args.template_args = template\n\n rules = cfnlint.core.get_rules(\n args.append_rules,\n args.ignore_checks,\n args.include_checks,\n args.configure_rules,\n args.include_experimental,\n )\n\n return(template, rules, [])\n\n\ndef run_checks(filename, template, rules, regions):\n \"\"\"Run Checks against the template\"\"\"\n if regions:\n if not set(regions).issubset(set(REGIONS)):\n unsupported_regions = list(set(regions).difference(set(REGIONS)))\n msg = 'Regions %s are unsupported. Supported regions are %s' % (unsupported_regions, REGIONS)\n raise InvalidRegionException(msg, 32)\n\n matches = []\n\n runner = cfnlint.Runner(rules, filename, template, regions)\n matches.extend(runner.transform())\n # Only do rule analysis if Transform was successful\n if not matches:\n try:\n matches.extend(runner.run())\n except Exception as err: # pylint: disable=W0703\n msg = 'Tried to process rules on file %s but got an error: %s' % (filename, str(err))\n UnexpectedRuleException(msg, 1)\n matches.sort(key=lambda x: (x.filename, x.linenumber, x.rule.id))\n\n return(matches)\n",
"path": "src/cfnlint/core.py"
}
] | [
{
"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport logging\nimport os\nimport sys\nfrom jsonschema.exceptions import ValidationError\nfrom cfnlint import RulesCollection\nimport cfnlint.config\nimport cfnlint.formatters\nimport cfnlint.decode\nimport cfnlint.maintenance\nfrom cfnlint.helpers import REGIONS\n\n\nLOGGER = logging.getLogger('cfnlint')\nDEFAULT_RULESDIR = os.path.join(os.path.dirname(__file__), 'rules')\n\n\nclass CfnLintExitException(Exception):\n \"\"\"Generic exception used when the cli should exit\"\"\"\n def __init__(self, msg=None, exit_code=1):\n if msg is None:\n msg = 'process failed with exit code %s' % exit_code\n super(CfnLintExitException, self).__init__(msg)\n self.exit_code = exit_code\n\n\nclass InvalidRegionException(CfnLintExitException):\n \"\"\"When an unsupported/invalid region is supplied\"\"\"\n\n\nclass UnexpectedRuleException(CfnLintExitException):\n \"\"\"When processing a rule fails in an unexpected way\"\"\"\n\n\ndef run_cli(filename, template, rules, regions, override_spec):\n \"\"\"Process args and run\"\"\"\n\n if override_spec:\n cfnlint.helpers.override_specs(override_spec)\n\n return run_checks(filename, template, rules, regions)\n\n\ndef get_exit_code(matches):\n \"\"\" Determine exit code \"\"\"\n exit_code = 0\n for match in matches:\n if match.rule.id[0] == 'I':\n exit_code = exit_code | 8\n elif match.rule.id[0] == 'W':\n exit_code = exit_code | 4\n elif match.rule.id[0] == 'E':\n exit_code = exit_code | 2\n\n return exit_code\n\n\ndef get_formatter(fmt):\n \"\"\" Get Formatter\"\"\"\n formatter = {}\n if fmt:\n if fmt == 'quiet':\n formatter = cfnlint.formatters.QuietFormatter()\n elif fmt == 'parseable':\n # pylint: disable=bad-option-value\n formatter = cfnlint.formatters.ParseableFormatter()\n elif fmt == 'json':\n formatter = cfnlint.formatters.JsonFormatter()\n else:\n formatter = cfnlint.formatters.Formatter()\n\n return formatter\n\n\ndef get_rules(rulesdir, ignore_rules, include_rules, configure_rules=None, include_experimental=False):\n \"\"\"Get rules\"\"\"\n rules = RulesCollection(ignore_rules, include_rules, configure_rules, include_experimental)\n rules_dirs = [DEFAULT_RULESDIR] + rulesdir\n try:\n for rules_dir in rules_dirs:\n rules.create_from_directory(rules_dir)\n except OSError as e:\n raise UnexpectedRuleException('Tried to append rules but got an error: %s' % str(e), 1)\n return rules\n\n\ndef configure_logging(debug_logging):\n \"\"\" Backwards compatibility for integrators \"\"\"\n LOGGER.info('Update your integrations to use \"cfnlint.config.configure_logging\" instead')\n 
cfnlint.config.configure_logging(debug_logging, False)\n\n\ndef get_args_filenames(cli_args):\n \"\"\" Get Template Configuration items and set them as default values\"\"\"\n try:\n config = cfnlint.config.ConfigMixIn(cli_args)\n except ValidationError as e:\n LOGGER.error('Error parsing config file: %s', str(e))\n exit(1)\n\n fmt = config.format\n formatter = get_formatter(fmt)\n\n if config.update_specs:\n cfnlint.maintenance.update_resource_specs()\n exit(0)\n\n if config.update_documentation:\n # Get ALL rules (ignore the CLI settings))\n documentation_rules = cfnlint.core.get_rules([], [], ['I', 'E', 'W'], {}, True)\n cfnlint.maintenance.update_documentation(documentation_rules)\n exit(0)\n\n if config.update_iam_policies:\n cfnlint.maintenance.update_iam_policies()\n exit(0)\n\n if config.listrules:\n rules = cfnlint.core.get_rules(\n config.append_rules,\n config.ignore_checks,\n config.include_checks,\n config.configure_rules\n )\n print(rules)\n exit(0)\n\n if not sys.stdin.isatty() and not config.templates:\n return(config, [None], formatter)\n\n if not config.templates:\n # Not specified, print the help\n config.parser.print_help()\n exit(1)\n\n return(config, config.templates, formatter)\n\n\ndef get_template_rules(filename, args):\n \"\"\" Get Template Configuration items and set them as default values\"\"\"\n\n (template, matches) = cfnlint.decode.decode(filename, args.ignore_bad_template)\n\n if matches:\n return(template, [], matches)\n\n args.template_args = template\n\n rules = cfnlint.core.get_rules(\n args.append_rules,\n args.ignore_checks,\n args.include_checks,\n args.configure_rules,\n args.include_experimental,\n )\n\n return(template, rules, [])\n\n\ndef run_checks(filename, template, rules, regions):\n \"\"\"Run Checks against the template\"\"\"\n if regions:\n if not set(regions).issubset(set(REGIONS)):\n unsupported_regions = list(set(regions).difference(set(REGIONS)))\n msg = 'Regions %s are unsupported. Supported regions are %s' % (unsupported_regions, REGIONS)\n raise InvalidRegionException(msg, 32)\n\n matches = []\n\n runner = cfnlint.Runner(rules, filename, template, regions)\n matches.extend(runner.transform())\n # Only do rule analysis if Transform was successful\n if not matches:\n try:\n matches.extend(runner.run())\n except Exception as err: # pylint: disable=W0703\n msg = 'Tried to process rules on file %s but got an error: %s' % (filename, str(err))\n UnexpectedRuleException(msg, 1)\n matches.sort(key=lambda x: (x.filename, x.linenumber, x.rule.id))\n\n return(matches)\n",
"path": "src/cfnlint/core.py"
}
] | diff --git a/src/cfnlint/core.py b/src/cfnlint/core.py
index 1e81225e53..bd129e2ec5 100644
--- a/src/cfnlint/core.py
+++ b/src/cfnlint/core.py
@@ -140,7 +140,7 @@ def get_args_filenames(cli_args):
print(rules)
exit(0)
- if not sys.stdin.isatty():
+ if not sys.stdin.isatty() and not config.templates:
return(config, [None], formatter)
if not config.templates:
diff --git a/test/module/core/test_run_cli.py b/test/module/core/test_run_cli.py
index 32fea1073b..8dfdbf91d0 100644
--- a/test/module/core/test_run_cli.py
+++ b/test/module/core/test_run_cli.py
@@ -92,6 +92,10 @@ def test_template_via_stdin(self):
(_, filenames, _) = cfnlint.core.get_args_filenames([])
assert filenames == [None]
+ with patch('sys.stdin', StringIO(file_content)):
+ (_, filenames, _) = cfnlint.core.get_args_filenames(['--template', filename])
+ assert filenames == [filename]
+
@patch('cfnlint.config.ConfigFileArgs._read_config', create=True)
def test_template_config(self, yaml_mock):
"""Test template config"""
|
googleapis__google-cloud-python-6262 | Redis: regen README.rst (DO NOT MERGE)
This PR was generated using Autosynth. :rainbow:
Here's the log from Synthtool:
```
synthtool > Cloning googleapis.
synthtool > Running generator for google/cloud/redis/artman_redis_v1beta1.yaml.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/redis-v1beta1.
synthtool > Running generator for google/cloud/redis/artman_redis_v1.yaml.
synthtool > Ensuring dependencies.
synthtool > Pulling artman image.
synthtool > Generated code into /home/kbuilder/.cache/synthtool/googleapis/artman-genfiles/python/redis-v1.
synthtool > Replaced 'resources of the form:\\n ``' in google/cloud/redis_v1/gapic/cloud_redis_client.py.
synthtool > Replaced 'resources of the form:\\n ``' in google/cloud/redis_v1beta1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n parent \\(str\\): Required. The resource name of the instance location using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}`\n where ``location_id`` refers to a GCP region' in google/cloud/redis_v1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n parent \\(str\\): Required. The resource name of the instance location using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}`\n where ``location_id`` refers to a GCP region' in google/cloud/redis_v1beta1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n with the following restrictions:\n\n \\* Must contain only lowercase letters, numbers, and hyphens\\.' in google/cloud/redis_v1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n with the following restrictions:\n\n \\* Must contain only lowercase letters, numbers, and hyphens\\.' in google/cloud/redis_v1beta1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n name \\(str\\): Required. Redis instance resource name using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}/instances/{instance_id}`\n where ``location_id`` refers to a GCP region' in google/cloud/redis_v1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n name \\(str\\): Required. Redis instance resource name using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}/instances/{instance_id}`\n where ``location_id`` refers to a GCP region' in google/cloud/redis_v1beta1/gapic/cloud_redis_client.py.
synthtool > Replaced '\n fields from ``Instance``:\n\n \\* ``displayName``\n \\* ``labels``\n \\* ``memorySizeGb``\n \\* ``redisConfig``' in google/cloud/redis_v1/gapic/cloud_redis_client.py.
synthtool > Replaced '(release_status = )(.*)$' in setup.py.
synthtool > Replaced '.. _Enable the Google Cloud Memorystore for Redis API.: https://cloud.google.com/redis' in README.rst.
synthtool > Replaced 'https://cloud.google.com/redis' in README.rst.
synthtool > Replaced 'https://googlecloudplatform.github.io/google-cloud-python/stable/redis/usage.html' in README.rst.
synthtool > Replaced 'https://googlecloudplatform.github.io/google-cloud-python/stable/core/auth.html' in README.rst.
synthtool > Cleaned up 1 temporary directories.
```
| [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is used to synthesize generated parts of this library.\"\"\"\n\nimport synthtool as s\nimport synthtool.gcp as gcp\nimport logging\n\nlogging.basicConfig(level=logging.DEBUG)\n\ngapic = gcp.GAPICGenerator()\ncommon = gcp.CommonTemplates()\nexcludes = [\n 'setup.py',\n 'nox.py',\n 'docs/conf.py',\n 'docs/index.rst',\n]\n\nfor version in ['v1beta1', 'v1']:\n library = gapic.py_library(\n 'redis', version,\n config_path=f'artman_redis_{version}.yaml')\n\n s.copy(library, excludes=excludes)\n\n\n# Fix docstrings\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r'resources of the form:\\n ``',\n r'resources of the form:\\n\\n ``',)\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n parent \\(str\\): Required. The resource name of the instance location using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}`\n where ``location_id`` refers to a GCP region\"\"\",\n\n r\"\"\"\n parent (str): Required. The resource name of the instance location using the form ``projects/{project_id}/locations/{location_id}``\n where ``location_id`` refers to a GCP region\"\"\",)\n\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n with the following restrictions:\n\n \\* Must contain only lowercase letters, numbers, and hyphens\\.\"\"\",\n r\"\"\"\n with the following restrictions:\n * Must contain only lowercase letters, numbers, and hyphens.\"\"\")\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n name \\(str\\): Required. Redis instance resource name using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}/instances/{instance_id}`\n where ``location_id`` refers to a GCP region\"\"\",\n r\"\"\"\n name (str): Required. Redis instance resource name using the form ``projects/{project_id}/locations/{location_id}/instances/{instance_id}```\n where ``location_id`` refers to a GCP region\"\"\")\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n fields from ``Instance``:\n\n \\* ``displayName``\n \\* ``labels``\n \\* ``memorySizeGb``\n \\* ``redisConfig``\"\"\",\n\n r\"\"\"\n fields from ``Instance``: ``displayName``, ``labels``, ``memorySizeGb``, and ``redisConfig``.\"\"\",)\n\n# Set Release Status\nrelease_status = 'Development Status :: 3 - Alpha'\ns.replace('setup.py',\n '(release_status = )(.*)$',\n f\"\\\\1'{release_status}'\")\n\n# Fix the enable API link\ns.replace(\n 'README.rst',\n r'.. _Enable the Google Cloud Memorystore for Redis API.: https://cloud.google.com/redis',\n '.. 
_Enable the Google Cloud Memorystore for Redis API.: https://console.cloud.google.com/apis/'\n 'library/redis.googleapis.com')\n\n# Fix link to product page\ns.replace(\n 'README.rst',\n r'https://cloud.google.com/redis',\n 'https://cloud.google.com/memorystore/')\n\n# Fix link to Client Library Documentation\ns.replace(\n 'README.rst',\n r'https://googlecloudplatform.github.io/google-cloud-python/stable/redis/usage.html',\n 'https://googlecloudplatform.github.io/google-cloud-python/latest/redis/index.html')\n\n# Fix link to Auth instructions\ns.replace(\n 'README.rst',\n r'https://googlecloudplatform.github.io/google-cloud-python/stable/core/auth.html',\n 'https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html')\n",
"path": "redis/synth.py"
}
] | [
{
"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"This script is used to synthesize generated parts of this library.\"\"\"\n\nimport synthtool as s\nimport synthtool.gcp as gcp\nimport logging\n\nlogging.basicConfig(level=logging.DEBUG)\n\ngapic = gcp.GAPICGenerator()\ncommon = gcp.CommonTemplates()\nexcludes = [\n 'README.rst',\n 'setup.py',\n 'nox*.py',\n 'docs/conf.py',\n 'docs/index.rst',\n]\n\nfor version in ['v1beta1', 'v1']:\n library = gapic.py_library(\n 'redis', version,\n config_path=f'artman_redis_{version}.yaml')\n\n s.copy(library, excludes=excludes)\n\n\n# Fix docstrings\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r'resources of the form:\\n ``',\n r'resources of the form:\\n\\n ``',)\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n parent \\(str\\): Required. The resource name of the instance location using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}`\n where ``location_id`` refers to a GCP region\"\"\",\n\n r\"\"\"\n parent (str): Required. The resource name of the instance location using the form ``projects/{project_id}/locations/{location_id}``\n where ``location_id`` refers to a GCP region\"\"\",)\n\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n with the following restrictions:\n\n \\* Must contain only lowercase letters, numbers, and hyphens\\.\"\"\",\n r\"\"\"\n with the following restrictions:\n * Must contain only lowercase letters, numbers, and hyphens.\"\"\")\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n name \\(str\\): Required. Redis instance resource name using the form:\n ::\n\n `projects/{project_id}/locations/{location_id}/instances/{instance_id}`\n where ``location_id`` refers to a GCP region\"\"\",\n r\"\"\"\n name (str): Required. Redis instance resource name using the form ``projects/{project_id}/locations/{location_id}/instances/{instance_id}```\n where ``location_id`` refers to a GCP region\"\"\")\n\ns.replace(\n 'google/cloud/**/cloud_redis_client.py',\n r\"\"\"\n fields from ``Instance``:\n\n \\* ``displayName``\n \\* ``labels``\n \\* ``memorySizeGb``\n \\* ``redisConfig``\"\"\",\n\n r\"\"\"\n fields from ``Instance``: ``displayName``, ``labels``, ``memorySizeGb``, and ``redisConfig``.\"\"\",)\n\n# Set Release Status\nrelease_status = 'Development Status :: 3 - Alpha'\ns.replace('setup.py',\n '(release_status = )(.*)$',\n f\"\\\\1'{release_status}'\")\n\n# Fix the enable API link\ns.replace(\n 'README.rst',\n r'.. _Enable the Google Cloud Memorystore for Redis API.: https://cloud.google.com/redis',\n '.. 
_Enable the Google Cloud Memorystore for Redis API.: https://console.cloud.google.com/apis/'\n 'library/redis.googleapis.com')\n\n# Fix link to product page\ns.replace(\n 'README.rst',\n r'https://cloud.google.com/redis',\n 'https://cloud.google.com/memorystore/')\n\n# Fix link to Client Library Documentation\ns.replace(\n 'README.rst',\n r'https://googlecloudplatform.github.io/google-cloud-python/stable/redis/usage.html',\n 'https://googlecloudplatform.github.io/google-cloud-python/latest/redis/index.html')\n\n# Fix link to Auth instructions\ns.replace(\n 'README.rst',\n r'https://googlecloudplatform.github.io/google-cloud-python/stable/core/auth.html',\n 'https://googlecloudplatform.github.io/google-cloud-python/latest/core/auth.html')\n",
"path": "redis/synth.py"
}
] | diff --git a/redis/synth.py b/redis/synth.py
index 21c92a28c279..b0942bd4500e 100644
--- a/redis/synth.py
+++ b/redis/synth.py
@@ -23,8 +23,9 @@
gapic = gcp.GAPICGenerator()
common = gcp.CommonTemplates()
excludes = [
+ 'README.rst',
'setup.py',
- 'nox.py',
+ 'nox*.py',
'docs/conf.py',
'docs/index.rst',
]
|
zostera__django-bootstrap3-473 | Fix simple typo: attrivute -> attribute
There is a small typo in src/bootstrap3/templatetags/bootstrap3.py.
Should read `attribute` rather than `attrivute`.
| [
{
"content": "import re\nfrom math import floor\n\nfrom django import template\nfrom django.contrib.messages import constants as message_constants\nfrom django.template import Context\nfrom django.utils.safestring import mark_safe\n\nfrom ..bootstrap import css_url, get_bootstrap_setting, javascript_url, jquery_url, theme_url\nfrom ..components import render_alert, render_icon\nfrom ..forms import (\n render_button,\n render_field,\n render_field_and_label,\n render_form,\n render_form_errors,\n render_form_group,\n render_formset,\n render_formset_errors,\n render_label,\n)\nfrom ..text import force_text\nfrom ..utils import (\n handle_var,\n parse_token_contents,\n render_link_tag,\n render_script_tag,\n render_template_file,\n url_replace_param,\n)\n\nMESSAGE_LEVEL_CLASSES = {\n message_constants.DEBUG: \"alert alert-warning\",\n message_constants.INFO: \"alert alert-info\",\n message_constants.SUCCESS: \"alert alert-success\",\n message_constants.WARNING: \"alert alert-warning\",\n message_constants.ERROR: \"alert alert-danger\",\n}\n\nregister = template.Library()\n\n\[email protected]\ndef bootstrap_setting(value):\n \"\"\"\n Return value of a setting.\n\n Please consider this filter private for now, do not use it in your own templates.\n \"\"\"\n return get_bootstrap_setting(value)\n\n\[email protected]\ndef bootstrap_message_classes(message):\n \"\"\"Return the message classes for a message.\"\"\"\n extra_tags = None\n try:\n extra_tags = message.extra_tags\n except AttributeError:\n pass\n if not extra_tags:\n extra_tags = \"\"\n classes = [extra_tags]\n try:\n level = message.level\n except AttributeError:\n pass\n else:\n try:\n classes.append(MESSAGE_LEVEL_CLASSES[level])\n except KeyError:\n classes.append(\"alert alert-danger\")\n return \" \".join(classes).strip()\n\n\[email protected]_tag\ndef bootstrap_jquery_url():\n \"\"\"\n Return url to jquery resource.\n\n **Tag name**::\n\n bootstrap_jquery_url\n\n Return the full url to jQuery file to use\n\n Default value: ``//code.jquery.com/jquery.min.js``\n\n This value is configurable, see Settings section\n\n **Usage**::\n\n {% bootstrap_jquery_url %}\n\n **Example**::\n\n {% bootstrap_jquery_url %}\n \"\"\"\n return jquery_url()\n\n\[email protected]_tag\ndef bootstrap_javascript_url():\n \"\"\"\n Return the full url to the Bootstrap JavaScript library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_javascript_url\n\n **Usage**::\n\n {% bootstrap_javascript_url %}\n\n **Example**::\n\n {% bootstrap_javascript_url %}\n \"\"\"\n return javascript_url()\n\n\[email protected]_tag\ndef bootstrap_css_url():\n \"\"\"\n Return the full url to the Bootstrap CSS library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_css_url\n\n **Usage**::\n\n {% bootstrap_css_url %}\n\n **Example**::\n\n {% bootstrap_css_url %}\n \"\"\"\n return css_url()\n\n\[email protected]_tag\ndef bootstrap_theme_url():\n \"\"\"\n Return the full url to a Bootstrap theme CSS library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_theme_url\n\n **Usage**::\n\n {% bootstrap_theme_url %}\n\n **Example**::\n\n {% bootstrap_theme_url %}\n \"\"\"\n return theme_url()\n\n\[email protected]_tag\ndef bootstrap_css():\n \"\"\"\n Return HTML for Bootstrap CSS. Adjust url in settings.\n\n If no url is returned, we don't want this statement to return any HTML. 
This is intended behavior.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_css\n\n **Usage**::\n\n {% bootstrap_css %}\n\n **Example**::\n\n {% bootstrap_css %}\n \"\"\"\n rendered_urls = [render_link_tag(bootstrap_css_url())]\n if bootstrap_theme_url():\n rendered_urls.append(render_link_tag(bootstrap_theme_url()))\n return mark_safe(\"\".join([url for url in rendered_urls]))\n\n\[email protected]_tag\ndef bootstrap_javascript(jquery=None):\n \"\"\"\n Return HTML for Bootstrap JavaScript.\n\n Adjust url in settings. If no url is returned, we don't want this\n statement to return any HTML.\n This is intended behavior.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_javascript\n\n **Parameters**:\n\n :jquery: Truthy to include jQuery as well as Bootstrap\n\n **Usage**::\n\n {% bootstrap_javascript %}\n\n **Example**::\n\n {% bootstrap_javascript jquery=1 %}\n \"\"\"\n\n javascript = \"\"\n # See if we have to include jQuery\n if jquery is None:\n jquery = get_bootstrap_setting(\"include_jquery\", False)\n # NOTE: No async on scripts, not mature enough. See issue #52 and #56\n if jquery:\n url = bootstrap_jquery_url()\n if url:\n javascript += render_script_tag(url)\n url = bootstrap_javascript_url()\n if url:\n javascript += render_script_tag(url)\n return mark_safe(javascript)\n\n\[email protected]_tag\ndef bootstrap_formset(*args, **kwargs):\n \"\"\"\n Render a formset.\n\n **Tag name**::\n\n bootstrap_formset\n\n **Parameters**:\n\n formset\n The formset that is being rendered\n\n\n See bootstrap_field_ for other arguments\n\n **Usage**::\n\n {% bootstrap_formset formset %}\n\n **Example**::\n\n {% bootstrap_formset formset layout='horizontal' %}\n \"\"\"\n return render_formset(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_formset_errors(*args, **kwargs):\n \"\"\"\n Render formset errors.\n\n **Tag name**::\n\n bootstrap_formset_errors\n\n **Parameters**:\n\n formset\n The formset that is being rendered\n\n layout\n Context value that is available in the template ``bootstrap3/form_errors.html``\n as ``layout``.\n\n **Usage**::\n\n {% bootstrap_formset_errors formset %}\n\n **Example**::\n\n {% bootstrap_formset_errors formset layout='inline' %}\n \"\"\"\n return render_formset_errors(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_form(*args, **kwargs):\n \"\"\"\n Render a form.\n\n **Tag name**::\n\n bootstrap_form\n\n **Parameters**:\n\n form\n The form that is to be rendered\n\n exclude\n A list of field names (comma separated) that should not be rendered\n E.g. exclude=subject,bcc\n\n error_types\n This controls the types of errors that are rendered above the form.\n Choices are: \"all\", \"field_errors\", \"non_field_errors\" or \"none\". 
This will not\n affect the display of errors on the fields themselves.\n\n Default is \"non_field_errors\".\n\n See bootstrap_field_ for other arguments\n\n **Usage**::\n\n {% bootstrap_form form %}\n\n **Example**::\n\n {% bootstrap_form form layout='inline' %}\n \"\"\"\n return render_form(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_form_errors(*args, **kwargs):\n \"\"\"\n Render form errors.\n\n **Tag name**::\n\n bootstrap_form_errors\n\n **Parameters**:\n\n form\n The form that is to be rendered\n\n error_types\n Control which type of errors should be rendered.\n\n One of the following values:\n\n * ``'all'``\n * ``'field_errors'``\n * ``'non_field_errors'``\n\n :default: ``'non_field_errors'``\n\n layout\n Context value that is available in the template ``bootstrap3/form_errors.html`` as ``layout``.\n\n **Usage**::\n\n {% bootstrap_form_errors form %}\n\n **Example**::\n\n {% bootstrap_form_errors form layout='inline' %}\n \"\"\"\n return render_form_errors(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_field(*args, **kwargs):\n \"\"\"\n Render a field.\n\n **Tag name**::\n\n bootstrap_field\n\n **Parameters**:\n\n\n field\n The form field to be rendered\n\n layout\n If set to ``'horizontal'`` then the field and label will be rendered side-by-side, as long as there\n is no ``field_class`` set as well.\n\n form_group_class\n CSS class of the ``div`` that wraps the field and label.\n\n :default: ``'form-group'``\n\n field_class\n CSS class of the ``div`` that wraps the field.\n\n label_class\n CSS class of the ``label`` element. Will always have ``control-label`` as the last CSS class.\n\n show_help\n Show the field's help text, if the field has help text.\n\n :default: ``True``\n\n show_label\n Whether the show the label of the field.\n\n :default: ``True``\n\n exclude\n A list of field names that should not be rendered\n\n size\n Controls the size of the rendered ``div.form-group`` through the use of CSS classes.\n\n One of the following values:\n\n * ``'small'``\n * ``'medium'``\n * ``'large'``\n\n placeholder\n Set/overwrite the field's placeholder.\n\n label\n Overwrite the field's label.\n\n horizontal_label_class\n Class used on the label when the ``layout`` is set to ``horizontal``.\n\n :default: ``'col-md-3'``. Can be changed in :doc:`settings`\n\n horizontal_field_class\n Class used on the field when the ``layout`` is set to ``horizontal``.\n\n :default: ``'col-md-9'``. Can be changed in :doc:`settings`\n\n addon_before\n Text that should be prepended to the form field. Can also be an icon, e.g.\n ``'<span class=\"glyphicon glyphicon-calendar\"></span>'``\n\n See the `Bootstrap docs <http://getbootstrap.com/components/#input-groups-basic>` for more examples.\n\n addon_after\n Text that should be appended to the form field. Can also be an icon, e.g.\n ``'<span class=\"glyphicon glyphicon-calendar\"></span>'``\n\n See the `Bootstrap docs <http://getbootstrap.com/components/#input-groups-basic>` for more examples.\n\n addon_before_class\n Class used on the span when ``addon_before`` is used.\n\n One of the following values:\n\n * ``'input-group-addon'``\n * ``'input-group-btn'``\n\n :default: ``input-group-addon``\n\n addon_after_class\n Class used on the span when ``addon_after`` is used.\n\n One of the following values:\n\n * ``'input-group-addon'``\n * ``'input-group-btn'``\n\n :default: ``input-group-addon``\n\n error_css_class\n CSS class used when the field has an error\n\n :default: ``'has-error'``. 
Can be changed :doc:`settings`\n\n required_css_class\n CSS class used on the ``div.form-group`` to indicate a field is required\n\n :default: ``''``. Can be changed :doc:`settings`\n\n bound_css_class\n CSS class used when the field is bound\n\n :default: ``'has-success'``. Can be changed :doc:`settings`\n\n **Usage**::\n\n {% bootstrap_field field %}\n\n **Example**::\n\n {% bootstrap_field field show_label=False %}\n \"\"\"\n return render_field(*args, **kwargs)\n\n\[email protected]_tag()\ndef bootstrap_label(*args, **kwargs):\n \"\"\"\n Render a label.\n\n **Tag name**::\n\n bootstrap_label\n\n **Parameters**:\n\n content\n The label's text\n\n label_for\n The value that will be in the ``for`` attribute of the rendered ``<label>``\n\n label_class\n The CSS class for the rendered ``<label>``\n\n label_title\n The value that will be in the ``title`` attribute of the rendered ``<label>``\n\n **Usage**::\n\n {% bootstrap_label content %}\n\n **Example**::\n\n {% bootstrap_label \"Email address\" label_for=\"exampleInputEmail1\" %}\n \"\"\"\n return render_label(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_button(*args, **kwargs):\n \"\"\"\n Render a button.\n\n **Tag name**::\n\n bootstrap_button\n\n **Parameters**:\n\n content\n The text to be displayed in the button\n\n button_type\n Optional field defining what type of button this is.\n\n Accepts one of the following values:\n\n * ``'submit'``\n * ``'reset'``\n * ``'button'``\n * ``'link'``\n icon\n Name of an icon to render in the button's visible content. See bootstrap_icon_ for acceptable values.\n\n button_class\n The class of button to use. If none is given, btn-default will be used.\n\n extra_classes\n Any extra CSS classes that should be added to the button.\n\n size\n Optional field to control the size of the button.\n\n Accepts one of the following values:\n\n * ``'xs'``\n * ``'sm'``\n * ``'small'``\n * ``'md'``\n * ``'medium'``\n * ``'lg'``\n * ``'large'``\n\n\n href\n Render the button as an ``<a>`` element. The ``href`` attribute is set with this value.\n\n name\n Value of the ``name`` attribute of the rendered element.\n\n value\n Value of the ``value`` attribute of the rendered element.\n\n **Usage**::\n\n {% bootstrap_button content %}\n\n **Example**::\n\n {% bootstrap_button \"Save\" button_type=\"submit\" button_class=\"btn-primary\" %}\n \"\"\"\n return render_button(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_icon(icon, **kwargs):\n \"\"\"\n Render an icon.\n\n **Tag name**::\n\n bootstrap_icon\n\n **Parameters**:\n\n icon\n Icon name. 
See the `Bootstrap docs <http://getbootstrap.com/components/#glyphicons>`_ for all icons.\n\n extra_classes\n Extra CSS classes to add to the icon HTML\n\n title\n A title for the icon (HTML title attrivute)\n\n **Usage**::\n\n {% bootstrap_icon icon %}\n\n **Example**::\n\n {% bootstrap_icon \"star\" %}\n \"\"\"\n return render_icon(icon, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_alert(content, alert_type=\"info\", dismissable=True):\n \"\"\"\n Render an alert.\n\n **Tag name**::\n\n bootstrap_alert\n\n **Parameters**:\n\n content\n HTML content of alert\n\n alert_type\n * ``'info'``\n * ``'warning'``\n * ``'danger'``\n * ``'success'``\n\n :default: ``'info'``\n\n dismissable\n boolean, is alert dismissable\n\n :default: ``True``\n\n **Usage**::\n\n {% bootstrap_alert content %}\n\n **Example**::\n\n {% bootstrap_alert \"Something went wrong\" alert_type='danger' %}\n \"\"\"\n return render_alert(content, alert_type, dismissable)\n\n\[email protected](\"buttons\")\ndef bootstrap_buttons(parser, token):\n \"\"\"\n Render buttons for form.\n\n **Tag name**::\n\n buttons\n\n **Parameters**:\n\n submit\n Text for a submit button\n\n reset\n Text for a reset button\n\n **Usage**::\n\n {% buttons %}{% endbuttons %}\n\n **Example**::\n\n {% buttons submit='OK' reset=\"Cancel\" %}{% endbuttons %}\n \"\"\"\n kwargs = parse_token_contents(parser, token)\n kwargs[\"nodelist\"] = parser.parse((\"endbuttons\",))\n parser.delete_first_token()\n return ButtonsNode(**kwargs)\n\n\nclass ButtonsNode(template.Node):\n def __init__(self, nodelist, args, kwargs, asvar, **kwargs2):\n self.nodelist = nodelist\n self.args = args\n self.kwargs = kwargs\n self.asvar = asvar\n\n def render(self, context):\n output_kwargs = {}\n for key in self.kwargs:\n output_kwargs[key] = handle_var(self.kwargs[key], context)\n buttons = []\n submit = output_kwargs.get(\"submit\", None)\n reset = output_kwargs.get(\"reset\", None)\n if submit:\n buttons.append(bootstrap_button(submit, \"submit\"))\n if reset:\n buttons.append(bootstrap_button(reset, \"reset\"))\n buttons = \" \".join(buttons) + self.nodelist.render(context)\n output_kwargs.update({\"label\": None, \"field\": buttons})\n output = render_form_group(render_field_and_label(**output_kwargs))\n if self.asvar:\n context[self.asvar] = output\n return \"\"\n else:\n return output\n\n\[email protected]_tag(takes_context=True)\ndef bootstrap_messages(context, *args, **kwargs):\n \"\"\"\n Show django.contrib.messages Messages in Bootstrap alert containers.\n\n In order to make the alerts dismissable (with the close button),\n we have to set the jquery parameter too when using the\n bootstrap_javascript tag.\n\n Uses the template ``bootstrap3/messages.html``.\n\n **Tag name**::\n\n bootstrap_messages\n\n **Parameters**:\n\n None.\n\n **Usage**::\n\n {% bootstrap_messages %}\n\n **Example**::\n\n {% bootstrap_javascript jquery=1 %}\n {% bootstrap_messages %}\n \"\"\"\n # Custom template tags with takes_context=True somehow return Context objects. 
These\n # should be forced to dict, using Context.flatten()\n if isinstance(context, Context):\n context = context.flatten()\n context.update({\"message_constants\": message_constants})\n return render_template_file(\"bootstrap3/messages.html\", context=context)\n\n\[email protected]_tag(\"bootstrap3/pagination.html\")\ndef bootstrap_pagination(page, **kwargs):\n \"\"\"\n Render pagination for a page.\n\n **Tag name**::\n\n bootstrap_pagination\n\n **Parameters**:\n\n page\n The page of results to show.\n\n pages_to_show\n Number of pages in total\n\n :default: ``11``\n\n url\n URL to navigate to for pagination forward and pagination back.\n\n :default: ``None``\n\n size\n Controls the size of the pagination through CSS.\n Defaults to being normal sized.\n\n One of the following:\n\n * ``'small'``\n * ``'large'``\n\n :default: ``None``\n\n extra\n Any extra page parameters.\n\n :default: ``None``\n\n parameter_name\n Name of the paging URL parameter.\n\n :default: ``'page'``\n\n **Usage**::\n\n {% bootstrap_pagination page %}\n\n **Example**::\n\n {% bootstrap_pagination lines url=\"/pagination?page=1\" size=\"large\" %}\n {% bootstrap_pagination page_obj extra=request.GET.urlencode %}\n \"\"\"\n\n pagination_kwargs = kwargs.copy()\n pagination_kwargs[\"page\"] = page\n return get_pagination_context(**pagination_kwargs)\n\n\[email protected]_tag\ndef bootstrap_url_replace_param(url, name, value):\n return url_replace_param(url, name, value)\n\n\ndef get_pagination_context(page, pages_to_show=11, url=None, size=None, extra=None, parameter_name=\"page\"):\n \"\"\"Generate Bootstrap pagination context from a page object.\"\"\"\n pages_to_show = int(pages_to_show)\n if pages_to_show < 1:\n raise ValueError(\n \"Pagination pages_to_show should be a positive integer, you specified {pages}\".format(pages=pages_to_show)\n )\n num_pages = page.paginator.num_pages\n current_page = page.number\n half_page_num = int(floor(pages_to_show / 2))\n if half_page_num < 0:\n half_page_num = 0\n first_page = current_page - half_page_num\n if first_page <= 1:\n first_page = 1\n if first_page > 1:\n pages_back = first_page - half_page_num\n if pages_back < 1:\n pages_back = 1\n else:\n pages_back = None\n last_page = first_page + pages_to_show - 1\n if pages_back is None:\n last_page += 1\n if last_page > num_pages:\n last_page = num_pages\n if last_page < num_pages:\n pages_forward = last_page + half_page_num\n if pages_forward > num_pages:\n pages_forward = num_pages\n else:\n pages_forward = None\n if first_page > 1:\n first_page -= 1\n if pages_back is not None and pages_back > 1:\n pages_back -= 1\n else:\n pages_back = None\n pages_shown = []\n for i in range(first_page, last_page + 1):\n pages_shown.append(i)\n # Append proper character to url\n if url:\n # Remove existing page GET parameters\n url = force_text(url)\n url = re.sub(r\"\\?{0}\\=[^\\&]+\".format(parameter_name), \"?\", url)\n url = re.sub(r\"\\&{0}\\=[^\\&]+\".format(parameter_name), \"\", url)\n # Append proper separator\n if \"?\" in url:\n url += \"&\"\n else:\n url += \"?\"\n # Append extra string to url\n if extra:\n if not url:\n url = \"?\"\n url += force_text(extra) + \"&\"\n if url:\n url = url.replace(\"?&\", \"?\")\n # Set CSS classes, see http://getbootstrap.com/components/#pagination\n pagination_css_classes = [\"pagination\"]\n if size == \"small\":\n pagination_css_classes.append(\"pagination-sm\")\n elif size == \"large\":\n pagination_css_classes.append(\"pagination-lg\")\n # Build context object\n return {\n 
\"bootstrap_pagination_url\": url,\n \"num_pages\": num_pages,\n \"current_page\": current_page,\n \"first_page\": first_page,\n \"last_page\": last_page,\n \"pages_shown\": pages_shown,\n \"pages_back\": pages_back,\n \"pages_forward\": pages_forward,\n \"pagination_css_classes\": \" \".join(pagination_css_classes),\n \"parameter_name\": parameter_name,\n }\n",
"path": "src/bootstrap3/templatetags/bootstrap3.py"
}
] | [
{
"content": "import re\nfrom math import floor\n\nfrom django import template\nfrom django.contrib.messages import constants as message_constants\nfrom django.template import Context\nfrom django.utils.safestring import mark_safe\n\nfrom ..bootstrap import css_url, get_bootstrap_setting, javascript_url, jquery_url, theme_url\nfrom ..components import render_alert, render_icon\nfrom ..forms import (\n render_button,\n render_field,\n render_field_and_label,\n render_form,\n render_form_errors,\n render_form_group,\n render_formset,\n render_formset_errors,\n render_label,\n)\nfrom ..text import force_text\nfrom ..utils import (\n handle_var,\n parse_token_contents,\n render_link_tag,\n render_script_tag,\n render_template_file,\n url_replace_param,\n)\n\nMESSAGE_LEVEL_CLASSES = {\n message_constants.DEBUG: \"alert alert-warning\",\n message_constants.INFO: \"alert alert-info\",\n message_constants.SUCCESS: \"alert alert-success\",\n message_constants.WARNING: \"alert alert-warning\",\n message_constants.ERROR: \"alert alert-danger\",\n}\n\nregister = template.Library()\n\n\[email protected]\ndef bootstrap_setting(value):\n \"\"\"\n Return value of a setting.\n\n Please consider this filter private for now, do not use it in your own templates.\n \"\"\"\n return get_bootstrap_setting(value)\n\n\[email protected]\ndef bootstrap_message_classes(message):\n \"\"\"Return the message classes for a message.\"\"\"\n extra_tags = None\n try:\n extra_tags = message.extra_tags\n except AttributeError:\n pass\n if not extra_tags:\n extra_tags = \"\"\n classes = [extra_tags]\n try:\n level = message.level\n except AttributeError:\n pass\n else:\n try:\n classes.append(MESSAGE_LEVEL_CLASSES[level])\n except KeyError:\n classes.append(\"alert alert-danger\")\n return \" \".join(classes).strip()\n\n\[email protected]_tag\ndef bootstrap_jquery_url():\n \"\"\"\n Return url to jquery resource.\n\n **Tag name**::\n\n bootstrap_jquery_url\n\n Return the full url to jQuery file to use\n\n Default value: ``//code.jquery.com/jquery.min.js``\n\n This value is configurable, see Settings section\n\n **Usage**::\n\n {% bootstrap_jquery_url %}\n\n **Example**::\n\n {% bootstrap_jquery_url %}\n \"\"\"\n return jquery_url()\n\n\[email protected]_tag\ndef bootstrap_javascript_url():\n \"\"\"\n Return the full url to the Bootstrap JavaScript library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_javascript_url\n\n **Usage**::\n\n {% bootstrap_javascript_url %}\n\n **Example**::\n\n {% bootstrap_javascript_url %}\n \"\"\"\n return javascript_url()\n\n\[email protected]_tag\ndef bootstrap_css_url():\n \"\"\"\n Return the full url to the Bootstrap CSS library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_css_url\n\n **Usage**::\n\n {% bootstrap_css_url %}\n\n **Example**::\n\n {% bootstrap_css_url %}\n \"\"\"\n return css_url()\n\n\[email protected]_tag\ndef bootstrap_theme_url():\n \"\"\"\n Return the full url to a Bootstrap theme CSS library.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_theme_url\n\n **Usage**::\n\n {% bootstrap_theme_url %}\n\n **Example**::\n\n {% bootstrap_theme_url %}\n \"\"\"\n return theme_url()\n\n\[email protected]_tag\ndef bootstrap_css():\n \"\"\"\n Return HTML for Bootstrap CSS. Adjust url in settings.\n\n If no url is returned, we don't want this statement to return any HTML. 
This is intended behavior.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_css\n\n **Usage**::\n\n {% bootstrap_css %}\n\n **Example**::\n\n {% bootstrap_css %}\n \"\"\"\n rendered_urls = [render_link_tag(bootstrap_css_url())]\n if bootstrap_theme_url():\n rendered_urls.append(render_link_tag(bootstrap_theme_url()))\n return mark_safe(\"\".join([url for url in rendered_urls]))\n\n\[email protected]_tag\ndef bootstrap_javascript(jquery=None):\n \"\"\"\n Return HTML for Bootstrap JavaScript.\n\n Adjust url in settings. If no url is returned, we don't want this\n statement to return any HTML.\n This is intended behavior.\n\n Default value: ``None``\n\n This value is configurable, see Settings section\n\n **Tag name**::\n\n bootstrap_javascript\n\n **Parameters**:\n\n :jquery: Truthy to include jQuery as well as Bootstrap\n\n **Usage**::\n\n {% bootstrap_javascript %}\n\n **Example**::\n\n {% bootstrap_javascript jquery=1 %}\n \"\"\"\n\n javascript = \"\"\n # See if we have to include jQuery\n if jquery is None:\n jquery = get_bootstrap_setting(\"include_jquery\", False)\n # NOTE: No async on scripts, not mature enough. See issue #52 and #56\n if jquery:\n url = bootstrap_jquery_url()\n if url:\n javascript += render_script_tag(url)\n url = bootstrap_javascript_url()\n if url:\n javascript += render_script_tag(url)\n return mark_safe(javascript)\n\n\[email protected]_tag\ndef bootstrap_formset(*args, **kwargs):\n \"\"\"\n Render a formset.\n\n **Tag name**::\n\n bootstrap_formset\n\n **Parameters**:\n\n formset\n The formset that is being rendered\n\n\n See bootstrap_field_ for other arguments\n\n **Usage**::\n\n {% bootstrap_formset formset %}\n\n **Example**::\n\n {% bootstrap_formset formset layout='horizontal' %}\n \"\"\"\n return render_formset(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_formset_errors(*args, **kwargs):\n \"\"\"\n Render formset errors.\n\n **Tag name**::\n\n bootstrap_formset_errors\n\n **Parameters**:\n\n formset\n The formset that is being rendered\n\n layout\n Context value that is available in the template ``bootstrap3/form_errors.html``\n as ``layout``.\n\n **Usage**::\n\n {% bootstrap_formset_errors formset %}\n\n **Example**::\n\n {% bootstrap_formset_errors formset layout='inline' %}\n \"\"\"\n return render_formset_errors(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_form(*args, **kwargs):\n \"\"\"\n Render a form.\n\n **Tag name**::\n\n bootstrap_form\n\n **Parameters**:\n\n form\n The form that is to be rendered\n\n exclude\n A list of field names (comma separated) that should not be rendered\n E.g. exclude=subject,bcc\n\n error_types\n This controls the types of errors that are rendered above the form.\n Choices are: \"all\", \"field_errors\", \"non_field_errors\" or \"none\". 
This will not\n affect the display of errors on the fields themselves.\n\n Default is \"non_field_errors\".\n\n See bootstrap_field_ for other arguments\n\n **Usage**::\n\n {% bootstrap_form form %}\n\n **Example**::\n\n {% bootstrap_form form layout='inline' %}\n \"\"\"\n return render_form(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_form_errors(*args, **kwargs):\n \"\"\"\n Render form errors.\n\n **Tag name**::\n\n bootstrap_form_errors\n\n **Parameters**:\n\n form\n The form that is to be rendered\n\n error_types\n Control which type of errors should be rendered.\n\n One of the following values:\n\n * ``'all'``\n * ``'field_errors'``\n * ``'non_field_errors'``\n\n :default: ``'non_field_errors'``\n\n layout\n Context value that is available in the template ``bootstrap3/form_errors.html`` as ``layout``.\n\n **Usage**::\n\n {% bootstrap_form_errors form %}\n\n **Example**::\n\n {% bootstrap_form_errors form layout='inline' %}\n \"\"\"\n return render_form_errors(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_field(*args, **kwargs):\n \"\"\"\n Render a field.\n\n **Tag name**::\n\n bootstrap_field\n\n **Parameters**:\n\n\n field\n The form field to be rendered\n\n layout\n If set to ``'horizontal'`` then the field and label will be rendered side-by-side, as long as there\n is no ``field_class`` set as well.\n\n form_group_class\n CSS class of the ``div`` that wraps the field and label.\n\n :default: ``'form-group'``\n\n field_class\n CSS class of the ``div`` that wraps the field.\n\n label_class\n CSS class of the ``label`` element. Will always have ``control-label`` as the last CSS class.\n\n show_help\n Show the field's help text, if the field has help text.\n\n :default: ``True``\n\n show_label\n Whether the show the label of the field.\n\n :default: ``True``\n\n exclude\n A list of field names that should not be rendered\n\n size\n Controls the size of the rendered ``div.form-group`` through the use of CSS classes.\n\n One of the following values:\n\n * ``'small'``\n * ``'medium'``\n * ``'large'``\n\n placeholder\n Set/overwrite the field's placeholder.\n\n label\n Overwrite the field's label.\n\n horizontal_label_class\n Class used on the label when the ``layout`` is set to ``horizontal``.\n\n :default: ``'col-md-3'``. Can be changed in :doc:`settings`\n\n horizontal_field_class\n Class used on the field when the ``layout`` is set to ``horizontal``.\n\n :default: ``'col-md-9'``. Can be changed in :doc:`settings`\n\n addon_before\n Text that should be prepended to the form field. Can also be an icon, e.g.\n ``'<span class=\"glyphicon glyphicon-calendar\"></span>'``\n\n See the `Bootstrap docs <http://getbootstrap.com/components/#input-groups-basic>` for more examples.\n\n addon_after\n Text that should be appended to the form field. Can also be an icon, e.g.\n ``'<span class=\"glyphicon glyphicon-calendar\"></span>'``\n\n See the `Bootstrap docs <http://getbootstrap.com/components/#input-groups-basic>` for more examples.\n\n addon_before_class\n Class used on the span when ``addon_before`` is used.\n\n One of the following values:\n\n * ``'input-group-addon'``\n * ``'input-group-btn'``\n\n :default: ``input-group-addon``\n\n addon_after_class\n Class used on the span when ``addon_after`` is used.\n\n One of the following values:\n\n * ``'input-group-addon'``\n * ``'input-group-btn'``\n\n :default: ``input-group-addon``\n\n error_css_class\n CSS class used when the field has an error\n\n :default: ``'has-error'``. 
Can be changed :doc:`settings`\n\n required_css_class\n CSS class used on the ``div.form-group`` to indicate a field is required\n\n :default: ``''``. Can be changed :doc:`settings`\n\n bound_css_class\n CSS class used when the field is bound\n\n :default: ``'has-success'``. Can be changed :doc:`settings`\n\n **Usage**::\n\n {% bootstrap_field field %}\n\n **Example**::\n\n {% bootstrap_field field show_label=False %}\n \"\"\"\n return render_field(*args, **kwargs)\n\n\[email protected]_tag()\ndef bootstrap_label(*args, **kwargs):\n \"\"\"\n Render a label.\n\n **Tag name**::\n\n bootstrap_label\n\n **Parameters**:\n\n content\n The label's text\n\n label_for\n The value that will be in the ``for`` attribute of the rendered ``<label>``\n\n label_class\n The CSS class for the rendered ``<label>``\n\n label_title\n The value that will be in the ``title`` attribute of the rendered ``<label>``\n\n **Usage**::\n\n {% bootstrap_label content %}\n\n **Example**::\n\n {% bootstrap_label \"Email address\" label_for=\"exampleInputEmail1\" %}\n \"\"\"\n return render_label(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_button(*args, **kwargs):\n \"\"\"\n Render a button.\n\n **Tag name**::\n\n bootstrap_button\n\n **Parameters**:\n\n content\n The text to be displayed in the button\n\n button_type\n Optional field defining what type of button this is.\n\n Accepts one of the following values:\n\n * ``'submit'``\n * ``'reset'``\n * ``'button'``\n * ``'link'``\n icon\n Name of an icon to render in the button's visible content. See bootstrap_icon_ for acceptable values.\n\n button_class\n The class of button to use. If none is given, btn-default will be used.\n\n extra_classes\n Any extra CSS classes that should be added to the button.\n\n size\n Optional field to control the size of the button.\n\n Accepts one of the following values:\n\n * ``'xs'``\n * ``'sm'``\n * ``'small'``\n * ``'md'``\n * ``'medium'``\n * ``'lg'``\n * ``'large'``\n\n\n href\n Render the button as an ``<a>`` element. The ``href`` attribute is set with this value.\n\n name\n Value of the ``name`` attribute of the rendered element.\n\n value\n Value of the ``value`` attribute of the rendered element.\n\n **Usage**::\n\n {% bootstrap_button content %}\n\n **Example**::\n\n {% bootstrap_button \"Save\" button_type=\"submit\" button_class=\"btn-primary\" %}\n \"\"\"\n return render_button(*args, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_icon(icon, **kwargs):\n \"\"\"\n Render an icon.\n\n **Tag name**::\n\n bootstrap_icon\n\n **Parameters**:\n\n icon\n Icon name. 
See the `Bootstrap docs <http://getbootstrap.com/components/#glyphicons>`_ for all icons.\n\n extra_classes\n Extra CSS classes to add to the icon HTML\n\n title\n A title for the icon (HTML title attribute)\n\n **Usage**::\n\n {% bootstrap_icon icon %}\n\n **Example**::\n\n {% bootstrap_icon \"star\" %}\n \"\"\"\n return render_icon(icon, **kwargs)\n\n\[email protected]_tag\ndef bootstrap_alert(content, alert_type=\"info\", dismissable=True):\n \"\"\"\n Render an alert.\n\n **Tag name**::\n\n bootstrap_alert\n\n **Parameters**:\n\n content\n HTML content of alert\n\n alert_type\n * ``'info'``\n * ``'warning'``\n * ``'danger'``\n * ``'success'``\n\n :default: ``'info'``\n\n dismissable\n boolean, is alert dismissable\n\n :default: ``True``\n\n **Usage**::\n\n {% bootstrap_alert content %}\n\n **Example**::\n\n {% bootstrap_alert \"Something went wrong\" alert_type='danger' %}\n \"\"\"\n return render_alert(content, alert_type, dismissable)\n\n\[email protected](\"buttons\")\ndef bootstrap_buttons(parser, token):\n \"\"\"\n Render buttons for form.\n\n **Tag name**::\n\n buttons\n\n **Parameters**:\n\n submit\n Text for a submit button\n\n reset\n Text for a reset button\n\n **Usage**::\n\n {% buttons %}{% endbuttons %}\n\n **Example**::\n\n {% buttons submit='OK' reset=\"Cancel\" %}{% endbuttons %}\n \"\"\"\n kwargs = parse_token_contents(parser, token)\n kwargs[\"nodelist\"] = parser.parse((\"endbuttons\",))\n parser.delete_first_token()\n return ButtonsNode(**kwargs)\n\n\nclass ButtonsNode(template.Node):\n def __init__(self, nodelist, args, kwargs, asvar, **kwargs2):\n self.nodelist = nodelist\n self.args = args\n self.kwargs = kwargs\n self.asvar = asvar\n\n def render(self, context):\n output_kwargs = {}\n for key in self.kwargs:\n output_kwargs[key] = handle_var(self.kwargs[key], context)\n buttons = []\n submit = output_kwargs.get(\"submit\", None)\n reset = output_kwargs.get(\"reset\", None)\n if submit:\n buttons.append(bootstrap_button(submit, \"submit\"))\n if reset:\n buttons.append(bootstrap_button(reset, \"reset\"))\n buttons = \" \".join(buttons) + self.nodelist.render(context)\n output_kwargs.update({\"label\": None, \"field\": buttons})\n output = render_form_group(render_field_and_label(**output_kwargs))\n if self.asvar:\n context[self.asvar] = output\n return \"\"\n else:\n return output\n\n\[email protected]_tag(takes_context=True)\ndef bootstrap_messages(context, *args, **kwargs):\n \"\"\"\n Show django.contrib.messages Messages in Bootstrap alert containers.\n\n In order to make the alerts dismissable (with the close button),\n we have to set the jquery parameter too when using the\n bootstrap_javascript tag.\n\n Uses the template ``bootstrap3/messages.html``.\n\n **Tag name**::\n\n bootstrap_messages\n\n **Parameters**:\n\n None.\n\n **Usage**::\n\n {% bootstrap_messages %}\n\n **Example**::\n\n {% bootstrap_javascript jquery=1 %}\n {% bootstrap_messages %}\n \"\"\"\n # Custom template tags with takes_context=True somehow return Context objects. 
These\n # should be forced to dict, using Context.flatten()\n if isinstance(context, Context):\n context = context.flatten()\n context.update({\"message_constants\": message_constants})\n return render_template_file(\"bootstrap3/messages.html\", context=context)\n\n\[email protected]_tag(\"bootstrap3/pagination.html\")\ndef bootstrap_pagination(page, **kwargs):\n \"\"\"\n Render pagination for a page.\n\n **Tag name**::\n\n bootstrap_pagination\n\n **Parameters**:\n\n page\n The page of results to show.\n\n pages_to_show\n Number of pages in total\n\n :default: ``11``\n\n url\n URL to navigate to for pagination forward and pagination back.\n\n :default: ``None``\n\n size\n Controls the size of the pagination through CSS.\n Defaults to being normal sized.\n\n One of the following:\n\n * ``'small'``\n * ``'large'``\n\n :default: ``None``\n\n extra\n Any extra page parameters.\n\n :default: ``None``\n\n parameter_name\n Name of the paging URL parameter.\n\n :default: ``'page'``\n\n **Usage**::\n\n {% bootstrap_pagination page %}\n\n **Example**::\n\n {% bootstrap_pagination lines url=\"/pagination?page=1\" size=\"large\" %}\n {% bootstrap_pagination page_obj extra=request.GET.urlencode %}\n \"\"\"\n\n pagination_kwargs = kwargs.copy()\n pagination_kwargs[\"page\"] = page\n return get_pagination_context(**pagination_kwargs)\n\n\[email protected]_tag\ndef bootstrap_url_replace_param(url, name, value):\n return url_replace_param(url, name, value)\n\n\ndef get_pagination_context(page, pages_to_show=11, url=None, size=None, extra=None, parameter_name=\"page\"):\n \"\"\"Generate Bootstrap pagination context from a page object.\"\"\"\n pages_to_show = int(pages_to_show)\n if pages_to_show < 1:\n raise ValueError(\n \"Pagination pages_to_show should be a positive integer, you specified {pages}\".format(pages=pages_to_show)\n )\n num_pages = page.paginator.num_pages\n current_page = page.number\n half_page_num = int(floor(pages_to_show / 2))\n if half_page_num < 0:\n half_page_num = 0\n first_page = current_page - half_page_num\n if first_page <= 1:\n first_page = 1\n if first_page > 1:\n pages_back = first_page - half_page_num\n if pages_back < 1:\n pages_back = 1\n else:\n pages_back = None\n last_page = first_page + pages_to_show - 1\n if pages_back is None:\n last_page += 1\n if last_page > num_pages:\n last_page = num_pages\n if last_page < num_pages:\n pages_forward = last_page + half_page_num\n if pages_forward > num_pages:\n pages_forward = num_pages\n else:\n pages_forward = None\n if first_page > 1:\n first_page -= 1\n if pages_back is not None and pages_back > 1:\n pages_back -= 1\n else:\n pages_back = None\n pages_shown = []\n for i in range(first_page, last_page + 1):\n pages_shown.append(i)\n # Append proper character to url\n if url:\n # Remove existing page GET parameters\n url = force_text(url)\n url = re.sub(r\"\\?{0}\\=[^\\&]+\".format(parameter_name), \"?\", url)\n url = re.sub(r\"\\&{0}\\=[^\\&]+\".format(parameter_name), \"\", url)\n # Append proper separator\n if \"?\" in url:\n url += \"&\"\n else:\n url += \"?\"\n # Append extra string to url\n if extra:\n if not url:\n url = \"?\"\n url += force_text(extra) + \"&\"\n if url:\n url = url.replace(\"?&\", \"?\")\n # Set CSS classes, see http://getbootstrap.com/components/#pagination\n pagination_css_classes = [\"pagination\"]\n if size == \"small\":\n pagination_css_classes.append(\"pagination-sm\")\n elif size == \"large\":\n pagination_css_classes.append(\"pagination-lg\")\n # Build context object\n return {\n 
\"bootstrap_pagination_url\": url,\n \"num_pages\": num_pages,\n \"current_page\": current_page,\n \"first_page\": first_page,\n \"last_page\": last_page,\n \"pages_shown\": pages_shown,\n \"pages_back\": pages_back,\n \"pages_forward\": pages_forward,\n \"pagination_css_classes\": \" \".join(pagination_css_classes),\n \"parameter_name\": parameter_name,\n }\n",
"path": "src/bootstrap3/templatetags/bootstrap3.py"
}
] | diff --git a/src/bootstrap3/templatetags/bootstrap3.py b/src/bootstrap3/templatetags/bootstrap3.py
index 23da590a..f163e711 100644
--- a/src/bootstrap3/templatetags/bootstrap3.py
+++ b/src/bootstrap3/templatetags/bootstrap3.py
@@ -623,7 +623,7 @@ def bootstrap_icon(icon, **kwargs):
Extra CSS classes to add to the icon HTML
title
- A title for the icon (HTML title attrivute)
+ A title for the icon (HTML title attribute)
**Usage**::
|
dotkom__onlineweb4-501 | UserResource in API should not display last login date publicly
Somewhat sensitive information...
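For illustration only, a minimal sketch of the intended restriction, mirroring the Tastypie resource defined in this record's files (the model import, resource name, and field names are taken from those files, not invented):

```python
from tastypie.resources import ModelResource

from apps.authentication.models import OnlineUser as User


class UserResource(ModelResource):
    class Meta:
        queryset = User.objects.all()
        resource_name = 'user'
        # 'last_login' is deliberately omitted so it is never serialized publicly
        fields = ['username', 'first_name', 'last_name', 'email']
```

Because Tastypie's `fields` option is a whitelist, leaving `last_login` out keeps it (and any other unlisted column) private by default.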
| [
{
"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]\n",
"path": "apps/api/v0/authentication.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\nfrom tastypie import fields\nfrom tastypie.resources import ModelResource\nfrom tastypie.authorization import Authorization\n\nfrom apps.authentication.models import OnlineUser as User\n\nclass UserResource(ModelResource):\n\n class Meta:\n queryset = User.objects.all()\n resource_name = 'user'\n fields = ['username', 'first_name', 'last_name', 'email', ]\n",
"path": "apps/api/v0/authentication.py"
}
] | diff --git a/apps/api/v0/authentication.py b/apps/api/v0/authentication.py
index c299928ea..c79905101 100644
--- a/apps/api/v0/authentication.py
+++ b/apps/api/v0/authentication.py
@@ -11,4 +11,4 @@ class UserResource(ModelResource):
class Meta:
queryset = User.objects.all()
resource_name = 'user'
- fields = ['username', 'first_name', 'last_name', 'last_login', 'email', ]
+ fields = ['username', 'first_name', 'last_name', 'email', ]
|
open-telemetry__opentelemetry-python-contrib-566 | AWS X-Ray propagator should be registered with xray environment variable
In the spec, we have a definition for the environment variable as `xray`
https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/sdk-environment-variables.md#general-sdk-configuration
Currently, the Python implementation uses `aws_xray`.
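As a hedged illustration (not the fix itself), the propagator can also be selected explicitly in code, bypassing the entry-point name entirely; this usage comes straight from the module's own docstring in this record:

```python
from opentelemetry.propagate import set_global_textmap
from opentelemetry.sdk.extension.aws.trace.propagation.aws_xray_format import AwsXRayFormat

# Install the AWS X-Ray propagator globally, independent of OTEL_PROPAGATORS
set_global_textmap(AwsXRayFormat())
```

Once the entry point is registered as `xray` per the spec, environment-based selection would be `export OTEL_PROPAGATORS=xray` instead of `aws_xray`.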
| [
{
"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAWS X-Ray Propagator\n--------------------\n\nThe **AWS X-Ray Propagator** provides a propagator that when used, adds a `trace\nheader`_ to outgoing traces that is compatible with the AWS X-Ray backend service.\nThis allows the trace context to be propagated when a trace span multiple AWS\nservices.\n\nUsage\n-----\n\nUse the provided AWS X-Ray Propagator to inject the necessary context into\ntraces sent to external systems.\n\nThis can be done by either setting this environment variable:\n\n::\n\n export OTEL_PROPAGATORS = aws_xray\n\n\nOr by setting this propagator in your instrumented application:\n\n.. code-block:: python\n\n from opentelemetry.propagate import set_global_textmap\n from opentelemetry.sdk.extension.aws.trace.propagation.aws_xray_format import AwsXRayFormat\n\n set_global_textmap(AwsXRayFormat())\n\nAPI\n---\n.. _trace header: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n\"\"\"\n\nimport logging\nimport typing\n\nfrom opentelemetry import trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\n\nTRACE_HEADER_KEY = \"X-Amzn-Trace-Id\"\nKV_PAIR_DELIMITER = \";\"\nKEY_AND_VALUE_DELIMITER = \"=\"\n\nTRACE_ID_KEY = \"Root\"\nTRACE_ID_LENGTH = 35\nTRACE_ID_VERSION = \"1\"\nTRACE_ID_DELIMITER = \"-\"\nTRACE_ID_DELIMITER_INDEX_1 = 1\nTRACE_ID_DELIMITER_INDEX_2 = 10\nTRACE_ID_FIRST_PART_LENGTH = 8\n\nPARENT_ID_KEY = \"Parent\"\nPARENT_ID_LENGTH = 16\n\nSAMPLED_FLAG_KEY = \"Sampled\"\nSAMPLED_FLAG_LENGTH = 1\nIS_SAMPLED = \"1\"\nNOT_SAMPLED = \"0\"\n\n\n_logger = logging.getLogger(__name__)\n\n\nclass AwsParseTraceHeaderError(Exception):\n def __init__(self, message):\n super().__init__()\n self.message = message\n\n\nclass AwsXRayFormat(TextMapPropagator):\n \"\"\"Propagator for the AWS X-Ray Trace Header propagation protocol.\n\n See: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n \"\"\"\n\n # AWS\n\n def extract(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n if context is None:\n context = Context()\n\n trace_header_list = getter.get(carrier, TRACE_HEADER_KEY)\n\n if not trace_header_list or len(trace_header_list) != 1:\n return context\n\n trace_header = trace_header_list[0]\n\n if not trace_header:\n return context\n\n try:\n (\n trace_id,\n span_id,\n sampled,\n ) = AwsXRayFormat._extract_span_properties(trace_header)\n except AwsParseTraceHeaderError as err:\n _logger.debug(err.message)\n return context\n\n options = 0\n if sampled:\n options |= trace.TraceFlags.SAMPLED\n\n span_context = trace.SpanContext(\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n\n if not 
span_context.is_valid:\n _logger.debug(\n \"Invalid Span Extracted. Insertting INVALID span into provided context.\"\n )\n return context\n\n return trace.set_span_in_context(\n trace.NonRecordingSpan(span_context), context=context\n )\n\n @staticmethod\n def _extract_span_properties(trace_header):\n trace_id = trace.INVALID_TRACE_ID\n span_id = trace.INVALID_SPAN_ID\n sampled = False\n\n for kv_pair_str in trace_header.split(KV_PAIR_DELIMITER):\n try:\n key_str, value_str = kv_pair_str.split(KEY_AND_VALUE_DELIMITER)\n key, value = key_str.strip(), value_str.strip()\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Error parsing X-Ray trace header. Invalid key value pair: %s. Returning INVALID span context.\",\n kv_pair_str,\n )\n ) from ex\n if key == TRACE_ID_KEY:\n if not AwsXRayFormat._validate_trace_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n trace_id = AwsXRayFormat._parse_trace_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == PARENT_ID_KEY:\n if not AwsXRayFormat._validate_span_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid ParentId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n span_id = AwsXRayFormat._parse_span_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == SAMPLED_FLAG_KEY:\n if not AwsXRayFormat._validate_sampled_flag(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid Sampling flag in X-Ray trace header: '%s' with value '%s'. 
Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n sampled = AwsXRayFormat._parse_sampled_flag(value)\n\n return trace_id, span_id, sampled\n\n @staticmethod\n def _validate_trace_id(trace_id_str):\n return (\n len(trace_id_str) == TRACE_ID_LENGTH\n and trace_id_str.startswith(TRACE_ID_VERSION)\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_1] == TRACE_ID_DELIMITER\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_2] == TRACE_ID_DELIMITER\n )\n\n @staticmethod\n def _parse_trace_id(trace_id_str):\n timestamp_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_1 + 1 : TRACE_ID_DELIMITER_INDEX_2\n ]\n unique_id_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_2 + 1 : TRACE_ID_LENGTH\n ]\n return int(timestamp_subset + unique_id_subset, 16)\n\n @staticmethod\n def _validate_span_id(span_id_str):\n return len(span_id_str) == PARENT_ID_LENGTH\n\n @staticmethod\n def _parse_span_id(span_id_str):\n return int(span_id_str, 16)\n\n @staticmethod\n def _validate_sampled_flag(sampled_flag_str):\n return len(\n sampled_flag_str\n ) == SAMPLED_FLAG_LENGTH and sampled_flag_str in (\n IS_SAMPLED,\n NOT_SAMPLED,\n )\n\n @staticmethod\n def _parse_sampled_flag(sampled_flag_str):\n return sampled_flag_str[0] == IS_SAMPLED\n\n def inject(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n span_context = span.get_span_context()\n if not span_context.is_valid:\n return\n\n otel_trace_id = \"{:032x}\".format(span_context.trace_id)\n xray_trace_id = TRACE_ID_DELIMITER.join(\n [\n TRACE_ID_VERSION,\n otel_trace_id[:TRACE_ID_FIRST_PART_LENGTH],\n otel_trace_id[TRACE_ID_FIRST_PART_LENGTH:],\n ]\n )\n\n parent_id = \"{:016x}\".format(span_context.span_id)\n\n sampling_flag = (\n IS_SAMPLED\n if span_context.trace_flags & trace.TraceFlags.SAMPLED\n else NOT_SAMPLED\n )\n\n # TODO: Add OT trace state to the X-Ray trace header\n\n trace_header = KV_PAIR_DELIMITER.join(\n [\n KEY_AND_VALUE_DELIMITER.join([key, value])\n for key, value in [\n (TRACE_ID_KEY, xray_trace_id),\n (PARENT_ID_KEY, parent_id),\n (SAMPLED_FLAG_KEY, sampling_flag),\n ]\n ]\n )\n\n setter.set(\n carrier, TRACE_HEADER_KEY, trace_header,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n\n return {TRACE_HEADER_KEY}\n",
"path": "sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py"
}
] | [
{
"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nAWS X-Ray Propagator\n--------------------\n\nThe **AWS X-Ray Propagator** provides a propagator that when used, adds a `trace\nheader`_ to outgoing traces that is compatible with the AWS X-Ray backend service.\nThis allows the trace context to be propagated when a trace span multiple AWS\nservices.\n\nUsage\n-----\n\nUse the provided AWS X-Ray Propagator to inject the necessary context into\ntraces sent to external systems.\n\nThis can be done by either setting this environment variable:\n\n::\n\n export OTEL_PROPAGATORS = xray\n\n\nOr by setting this propagator in your instrumented application:\n\n.. code-block:: python\n\n from opentelemetry.propagate import set_global_textmap\n from opentelemetry.sdk.extension.aws.trace.propagation.aws_xray_format import AwsXRayFormat\n\n set_global_textmap(AwsXRayFormat())\n\nAPI\n---\n.. _trace header: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n\"\"\"\n\nimport logging\nimport typing\n\nfrom opentelemetry import trace\nfrom opentelemetry.context import Context\nfrom opentelemetry.propagators.textmap import (\n CarrierT,\n Getter,\n Setter,\n TextMapPropagator,\n default_getter,\n default_setter,\n)\n\nTRACE_HEADER_KEY = \"X-Amzn-Trace-Id\"\nKV_PAIR_DELIMITER = \";\"\nKEY_AND_VALUE_DELIMITER = \"=\"\n\nTRACE_ID_KEY = \"Root\"\nTRACE_ID_LENGTH = 35\nTRACE_ID_VERSION = \"1\"\nTRACE_ID_DELIMITER = \"-\"\nTRACE_ID_DELIMITER_INDEX_1 = 1\nTRACE_ID_DELIMITER_INDEX_2 = 10\nTRACE_ID_FIRST_PART_LENGTH = 8\n\nPARENT_ID_KEY = \"Parent\"\nPARENT_ID_LENGTH = 16\n\nSAMPLED_FLAG_KEY = \"Sampled\"\nSAMPLED_FLAG_LENGTH = 1\nIS_SAMPLED = \"1\"\nNOT_SAMPLED = \"0\"\n\n\n_logger = logging.getLogger(__name__)\n\n\nclass AwsParseTraceHeaderError(Exception):\n def __init__(self, message):\n super().__init__()\n self.message = message\n\n\nclass AwsXRayFormat(TextMapPropagator):\n \"\"\"Propagator for the AWS X-Ray Trace Header propagation protocol.\n\n See: https://docs.aws.amazon.com/xray/latest/devguide/xray-concepts.html#xray-concepts-tracingheader\n \"\"\"\n\n # AWS\n\n def extract(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n getter: Getter = default_getter,\n ) -> Context:\n if context is None:\n context = Context()\n\n trace_header_list = getter.get(carrier, TRACE_HEADER_KEY)\n\n if not trace_header_list or len(trace_header_list) != 1:\n return context\n\n trace_header = trace_header_list[0]\n\n if not trace_header:\n return context\n\n try:\n (\n trace_id,\n span_id,\n sampled,\n ) = AwsXRayFormat._extract_span_properties(trace_header)\n except AwsParseTraceHeaderError as err:\n _logger.debug(err.message)\n return context\n\n options = 0\n if sampled:\n options |= trace.TraceFlags.SAMPLED\n\n span_context = trace.SpanContext(\n trace_id=trace_id,\n span_id=span_id,\n is_remote=True,\n trace_flags=trace.TraceFlags(options),\n trace_state=trace.TraceState(),\n )\n\n if not 
span_context.is_valid:\n _logger.debug(\n \"Invalid Span Extracted. Insertting INVALID span into provided context.\"\n )\n return context\n\n return trace.set_span_in_context(\n trace.NonRecordingSpan(span_context), context=context\n )\n\n @staticmethod\n def _extract_span_properties(trace_header):\n trace_id = trace.INVALID_TRACE_ID\n span_id = trace.INVALID_SPAN_ID\n sampled = False\n\n for kv_pair_str in trace_header.split(KV_PAIR_DELIMITER):\n try:\n key_str, value_str = kv_pair_str.split(KEY_AND_VALUE_DELIMITER)\n key, value = key_str.strip(), value_str.strip()\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Error parsing X-Ray trace header. Invalid key value pair: %s. Returning INVALID span context.\",\n kv_pair_str,\n )\n ) from ex\n if key == TRACE_ID_KEY:\n if not AwsXRayFormat._validate_trace_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n trace_id = AwsXRayFormat._parse_trace_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == PARENT_ID_KEY:\n if not AwsXRayFormat._validate_span_id(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid ParentId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n try:\n span_id = AwsXRayFormat._parse_span_id(value)\n except ValueError as ex:\n raise AwsParseTraceHeaderError(\n (\n \"Invalid TraceId in X-Ray trace header: '%s' with value '%s'. Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n ) from ex\n elif key == SAMPLED_FLAG_KEY:\n if not AwsXRayFormat._validate_sampled_flag(value):\n raise AwsParseTraceHeaderError(\n (\n \"Invalid Sampling flag in X-Ray trace header: '%s' with value '%s'. 
Returning INVALID span context.\",\n TRACE_HEADER_KEY,\n trace_header,\n )\n )\n\n sampled = AwsXRayFormat._parse_sampled_flag(value)\n\n return trace_id, span_id, sampled\n\n @staticmethod\n def _validate_trace_id(trace_id_str):\n return (\n len(trace_id_str) == TRACE_ID_LENGTH\n and trace_id_str.startswith(TRACE_ID_VERSION)\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_1] == TRACE_ID_DELIMITER\n and trace_id_str[TRACE_ID_DELIMITER_INDEX_2] == TRACE_ID_DELIMITER\n )\n\n @staticmethod\n def _parse_trace_id(trace_id_str):\n timestamp_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_1 + 1 : TRACE_ID_DELIMITER_INDEX_2\n ]\n unique_id_subset = trace_id_str[\n TRACE_ID_DELIMITER_INDEX_2 + 1 : TRACE_ID_LENGTH\n ]\n return int(timestamp_subset + unique_id_subset, 16)\n\n @staticmethod\n def _validate_span_id(span_id_str):\n return len(span_id_str) == PARENT_ID_LENGTH\n\n @staticmethod\n def _parse_span_id(span_id_str):\n return int(span_id_str, 16)\n\n @staticmethod\n def _validate_sampled_flag(sampled_flag_str):\n return len(\n sampled_flag_str\n ) == SAMPLED_FLAG_LENGTH and sampled_flag_str in (\n IS_SAMPLED,\n NOT_SAMPLED,\n )\n\n @staticmethod\n def _parse_sampled_flag(sampled_flag_str):\n return sampled_flag_str[0] == IS_SAMPLED\n\n def inject(\n self,\n carrier: CarrierT,\n context: typing.Optional[Context] = None,\n setter: Setter = default_setter,\n ) -> None:\n span = trace.get_current_span(context=context)\n\n span_context = span.get_span_context()\n if not span_context.is_valid:\n return\n\n otel_trace_id = \"{:032x}\".format(span_context.trace_id)\n xray_trace_id = TRACE_ID_DELIMITER.join(\n [\n TRACE_ID_VERSION,\n otel_trace_id[:TRACE_ID_FIRST_PART_LENGTH],\n otel_trace_id[TRACE_ID_FIRST_PART_LENGTH:],\n ]\n )\n\n parent_id = \"{:016x}\".format(span_context.span_id)\n\n sampling_flag = (\n IS_SAMPLED\n if span_context.trace_flags & trace.TraceFlags.SAMPLED\n else NOT_SAMPLED\n )\n\n # TODO: Add OT trace state to the X-Ray trace header\n\n trace_header = KV_PAIR_DELIMITER.join(\n [\n KEY_AND_VALUE_DELIMITER.join([key, value])\n for key, value in [\n (TRACE_ID_KEY, xray_trace_id),\n (PARENT_ID_KEY, parent_id),\n (SAMPLED_FLAG_KEY, sampling_flag),\n ]\n ]\n )\n\n setter.set(\n carrier, TRACE_HEADER_KEY, trace_header,\n )\n\n @property\n def fields(self):\n \"\"\"Returns a set with the fields set in `inject`.\"\"\"\n\n return {TRACE_HEADER_KEY}\n",
"path": "sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 5661998af0..2d2a12f78f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased](https://github.com/open-telemetry/opentelemetry-python/compare/v1.3.0-0.22b0...HEAD)
+- `opentelemetry-sdk-extension-aws` Update AWS entry points to match spec
+ ([#566](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/566))
- Include Flask 2.0 as compatible with existing flask instrumentation
([#545](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/545))
diff --git a/sdk-extension/opentelemetry-sdk-extension-aws/README.rst b/sdk-extension/opentelemetry-sdk-extension-aws/README.rst
index 790f6cb1dc..e95b44411e 100644
--- a/sdk-extension/opentelemetry-sdk-extension-aws/README.rst
+++ b/sdk-extension/opentelemetry-sdk-extension-aws/README.rst
@@ -53,7 +53,7 @@ This can be done by either setting this environment variable:
::
- export OTEL_PROPAGATORS = aws_xray
+ export OTEL_PROPAGATORS = xray
Or by setting this propagator in your instrumented application:
diff --git a/sdk-extension/opentelemetry-sdk-extension-aws/setup.cfg b/sdk-extension/opentelemetry-sdk-extension-aws/setup.cfg
index bec34df03b..6f312fa51c 100644
--- a/sdk-extension/opentelemetry-sdk-extension-aws/setup.cfg
+++ b/sdk-extension/opentelemetry-sdk-extension-aws/setup.cfg
@@ -42,9 +42,9 @@ install_requires =
[options.entry_points]
opentelemetry_propagator =
- aws_xray = opentelemetry.sdk.extension.aws.trace.propagation.aws_xray_format:AwsXRayFormat
+ xray = opentelemetry.sdk.extension.aws.trace.propagation.aws_xray_format:AwsXRayFormat
opentelemetry_id_generator =
- aws_xray = opentelemetry.sdk.extension.aws.trace.aws_xray_id_generator:AwsXRayIdGenerator
+ xray = opentelemetry.sdk.extension.aws.trace.aws_xray_id_generator:AwsXRayIdGenerator
[options.extras_require]
test =
diff --git a/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py b/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py
index 2e5e913252..2bc145b791 100644
--- a/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py
+++ b/sdk-extension/opentelemetry-sdk-extension-aws/src/opentelemetry/sdk/extension/aws/trace/propagation/aws_xray_format.py
@@ -31,7 +31,7 @@
::
- export OTEL_PROPAGATORS = aws_xray
+ export OTEL_PROPAGATORS = xray
Or by setting this propagator in your instrumented application:
|
mlflow__mlflow-9827 | [DOC-FIX] Doc for Run.inputs erroneously refers to Run.data
### Willingness to contribute
No. I cannot contribute a documentation fix at this time.
### URL(s) with the issue
https://www.mlflow.org/docs/latest/python_api/mlflow.entities.html#mlflow.entities.Run
### Description of proposal (what needs changing)
On the Run doc page, the return type documented for Run.inputs refers to mlflow.entities.RunData when it should refer to mlflow.entities.RunInputs.
> `property inputs`
> The run inputs, including dataset inputs
> Return type: `mlflow.entities.RunData`
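The fix itself is a one-word change to the `:rtype:` line of the property's docstring (it can also be seen in the diff below). A minimal excerpt of the corrected property, with the rest of the class omitted:

```python
class Run:
    # Excerpt only: the sole change is RunData -> RunInputs in the :rtype: line.
    @property
    def inputs(self) -> "RunInputs":
        """
        The run inputs, including dataset inputs

        :rtype: :py:class:`mlflow.entities.RunInputs`
        """
        return self._inputs
```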
| [
{
"content": "from typing import Any, Dict, Optional\n\nfrom mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.run_data import RunData\nfrom mlflow.entities.run_info import RunInfo\nfrom mlflow.entities.run_inputs import RunInputs\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.service_pb2 import Run as ProtoRun\n\n\nclass Run(_MLflowObject):\n \"\"\"\n Run object.\n \"\"\"\n\n def __init__(\n self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None\n ) -> None:\n if run_info is None:\n raise MlflowException(\"run_info cannot be None\")\n self._info = run_info\n self._data = run_data\n self._inputs = run_inputs\n\n @property\n def info(self) -> RunInfo:\n \"\"\"\n The run metadata, such as the run id, start time, and status.\n\n :rtype: :py:class:`mlflow.entities.RunInfo`\n \"\"\"\n return self._info\n\n @property\n def data(self) -> RunData:\n \"\"\"\n The run data, including metrics, parameters, and tags.\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._data\n\n @property\n def inputs(self) -> RunInputs:\n \"\"\"\n The run inputs, including dataset inputs\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._inputs\n\n def to_proto(self):\n run = ProtoRun()\n run.info.MergeFrom(self.info.to_proto())\n if self.data:\n run.data.MergeFrom(self.data.to_proto())\n if self.inputs:\n run.inputs.MergeFrom(self.inputs.to_proto())\n return run\n\n @classmethod\n def from_proto(cls, proto):\n return cls(\n RunInfo.from_proto(proto.info),\n RunData.from_proto(proto.data),\n RunInputs.from_proto(proto.inputs),\n )\n\n def to_dictionary(self) -> Dict[Any, Any]:\n run_dict = {\n \"info\": dict(self.info),\n }\n if self.data:\n run_dict[\"data\"] = self.data.to_dictionary()\n if self.inputs:\n run_dict[\"inputs\"] = self.inputs.to_dictionary()\n return run_dict\n",
"path": "mlflow/entities/run.py"
}
] | [
{
"content": "from typing import Any, Dict, Optional\n\nfrom mlflow.entities._mlflow_object import _MLflowObject\nfrom mlflow.entities.run_data import RunData\nfrom mlflow.entities.run_info import RunInfo\nfrom mlflow.entities.run_inputs import RunInputs\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.protos.service_pb2 import Run as ProtoRun\n\n\nclass Run(_MLflowObject):\n \"\"\"\n Run object.\n \"\"\"\n\n def __init__(\n self, run_info: RunInfo, run_data: RunData, run_inputs: Optional[RunInputs] = None\n ) -> None:\n if run_info is None:\n raise MlflowException(\"run_info cannot be None\")\n self._info = run_info\n self._data = run_data\n self._inputs = run_inputs\n\n @property\n def info(self) -> RunInfo:\n \"\"\"\n The run metadata, such as the run id, start time, and status.\n\n :rtype: :py:class:`mlflow.entities.RunInfo`\n \"\"\"\n return self._info\n\n @property\n def data(self) -> RunData:\n \"\"\"\n The run data, including metrics, parameters, and tags.\n\n :rtype: :py:class:`mlflow.entities.RunData`\n \"\"\"\n return self._data\n\n @property\n def inputs(self) -> RunInputs:\n \"\"\"\n The run inputs, including dataset inputs\n\n :rtype: :py:class:`mlflow.entities.RunInputs`\n \"\"\"\n return self._inputs\n\n def to_proto(self):\n run = ProtoRun()\n run.info.MergeFrom(self.info.to_proto())\n if self.data:\n run.data.MergeFrom(self.data.to_proto())\n if self.inputs:\n run.inputs.MergeFrom(self.inputs.to_proto())\n return run\n\n @classmethod\n def from_proto(cls, proto):\n return cls(\n RunInfo.from_proto(proto.info),\n RunData.from_proto(proto.data),\n RunInputs.from_proto(proto.inputs),\n )\n\n def to_dictionary(self) -> Dict[Any, Any]:\n run_dict = {\n \"info\": dict(self.info),\n }\n if self.data:\n run_dict[\"data\"] = self.data.to_dictionary()\n if self.inputs:\n run_dict[\"inputs\"] = self.inputs.to_dictionary()\n return run_dict\n",
"path": "mlflow/entities/run.py"
}
] | diff --git a/mlflow/entities/run.py b/mlflow/entities/run.py
index 84718209c1954..731ffd900c0b0 100644
--- a/mlflow/entities/run.py
+++ b/mlflow/entities/run.py
@@ -45,7 +45,7 @@ def inputs(self) -> RunInputs:
"""
The run inputs, including dataset inputs
- :rtype: :py:class:`mlflow.entities.RunData`
+ :rtype: :py:class:`mlflow.entities.RunInputs`
"""
return self._inputs
|
awslabs__gluonts-2148 | `PandasDataset` slow at creating when many large `DataFrame`s are given
## Description
The `PandasDataset` class is slow to construct when several large DataFrames are given. It appears that [this check](https://github.com/awslabs/gluon-ts/blob/94247a9c0d4768aeb4a17a8bb44252706c519a6a/src/gluonts/dataset/pandas.py#L296-L308) is to blame (reproduced below for reference).
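For reference, this is the check in question, copied from the `src/gluonts/dataset/pandas.py` content included later in this record (only the comment is added here):

```python
import pandas as pd

def is_uniform(index: pd.PeriodIndex) -> bool:
    """
    Check if ``index`` contains monotonically increasing periods, evenly spaced
    with frequency ``index.freq``.
    """
    # The element-wise Period subtraction below is what the issue suspects is
    # slow for long indexes, since it runs once per DataFrame at construction.
    return (index[1:] - index[:-1] == index.freq).all()
```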
## To Reproduce
The following snippet takes something like 14 seconds to run on my machine:
```python
import pandas as pd
from gluonts.dataset.pandas import PandasDataset
df = pd.DataFrame(
{
k: [1.0] * 5000
for k in range(200)
},
index=pd.period_range("2005-01-01", periods=5000, freq="2H")
)
dataset = PandasDataset(dict(df))
```
## What I tried
Changing the definition of [`is_uniform`](https://github.com/awslabs/gluon-ts/blob/94247a9c0d4768aeb4a17a8bb44252706c519a6a/src/gluonts/dataset/pandas.py#L296-L308) to
```python
def is_uniform(index: pd.PeriodIndex) -> bool:
ts_index = index.to_timestamp()
return (ts_index[1:] - ts_index[:-1] == index.freq).all()
```
drastically reduces the runtime. However, this doesn't work with irregular offsets like `MonthEnd` (in fact, a test using the `3M` frequency fails): converting `MonthEnd` periods to timestamps makes their differences irregular in terms of days:
```python
import pandas as pd
pi = pd.period_range("2012-01", periods=3, freq="M")
print(pi[1:] - pi[:-1]) # Index([<MonthEnd>, <MonthEnd>], dtype='object')
dti = pi.to_timestamp()
print(dti[1:] - dti[:-1]) # TimedeltaIndex(['31 days', '29 days'], dtype='timedelta64[ns]', freq=None)
```
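For what it's worth, the change made by the diff further down in this record stays in period space entirely: it rebuilds an evenly spaced `pd.period_range` starting at `index[0]` and requires an exact match with the given index. A minimal sketch of that approach, with a quick check on the monthly case from above (the example values here are illustrative, not from the issue):

```python
import pandas as pd

def is_uniform(index: pd.PeriodIndex) -> bool:
    # Build the evenly spaced PeriodIndex that would start at index[0] with
    # index.freq, and require an exact element-wise match. No conversion to
    # timestamps, so irregular offsets such as MonthEnd behave correctly.
    other = pd.period_range(index[0], periods=len(index), freq=index.freq)
    return (other == index).all()

print(is_uniform(pd.period_range("2012-01", periods=3, freq="M")))              # True
print(is_uniform(pd.PeriodIndex(["2012-01", "2012-02", "2012-04"], freq="M")))  # False
```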
| [
{
"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom copy import deepcopy\nfrom dataclasses import dataclass, field\nfrom typing import Any, cast, Dict, Iterator, List, Optional, Union\n\nimport pandas as pd\nfrom pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin\nfrom toolz import valmap\n\nfrom gluonts.dataset.common import DataEntry, ProcessDataEntry\nfrom gluonts.dataset.field_names import FieldName\n\n\n@dataclass\nclass PandasDataset:\n \"\"\"\n A pandas.DataFrame-based dataset type.\n\n This class is constructed with a collection of pandas.DataFrame-objects\n where each ``DataFrame`` is representing one time series.\n A ``target`` and a ``timestamp`` columns are essential. Furthermore,\n static/dynamic real/categorical features can be specified.\n\n Parameters\n ----------\n dataframes\n Single ``pd.DataFrame``/``pd.Series`` or a collection as list or dict\n containing at least ``timestamp`` and ``target`` values.\n If a Dict is provided, the key will be the associated ``item_id``.\n target\n Name of the column that contains the ``target`` time series.\n For multivariate targets, a list of column names should be provided.\n timestamp\n Name of the column that contains the timestamp information.\n freq\n Frequency of observations in the time series. Must be a valid pandas\n frequency.\n feat_dynamic_real\n List of column names that contain dynamic real features.\n feat_dynamic_cat\n List of column names that contain dynamic categorical features.\n feat_static_real\n List of column names that contain static real features.\n feat_static_cat\n List of column names that contain static categorical features.\n past_feat_dynamic_real\n List of column names that contain dynamic real features only for the\n history.\n ignore_last_n_targets\n For target and past dynamic features last ``ignore_last_n_targets``\n elements are removed when iterating over the data set. 
This becomes\n important when the predictor is called.\n \"\"\"\n\n dataframes: Union[\n pd.DataFrame,\n pd.Series,\n List[pd.DataFrame],\n List[pd.Series],\n Dict[str, pd.DataFrame],\n Dict[str, pd.Series],\n ]\n target: Union[str, List[str]] = \"target\"\n timestamp: Optional[str] = None\n freq: Optional[str] = None\n feat_dynamic_real: List[str] = field(default_factory=list)\n feat_dynamic_cat: List[str] = field(default_factory=list)\n feat_static_real: List[str] = field(default_factory=list)\n feat_static_cat: List[str] = field(default_factory=list)\n past_feat_dynamic_real: List[str] = field(default_factory=list)\n ignore_last_n_targets: int = 0\n\n def __post_init__(self) -> None:\n if isinstance(self.target, list) and len(self.target) == 1:\n self.target = self.target[0]\n self.one_dim_target = not isinstance(self.target, list)\n\n if is_series(self.dataframes):\n self.dataframes = series_to_dataframe(self.dataframes)\n # store data internally as List[Tuple[str, pandas.DataFrame]]\n # if str is not empty it will be set in ``DataEntry`` as ``item_id``.\n if isinstance(self.dataframes, dict):\n self._dataframes = list(self.dataframes.items())\n elif isinstance(self.dataframes, list):\n self._dataframes = [(None, df) for df in self.dataframes]\n else: # case single dataframe\n self._dataframes = [(None, self.dataframes)]\n\n for i, (item_id, df) in enumerate(self._dataframes):\n if self.timestamp:\n df = df.set_index(keys=self.timestamp)\n\n if not isinstance(df.index, pd.PeriodIndex):\n df.index = pd.to_datetime(df.index)\n df = df.to_period(freq=self.freq)\n\n df.sort_index(inplace=True)\n\n assert is_uniform(df.index), (\n \"Dataframe index is not uniformly spaced. \"\n \"If your dataframe contains data from multiple series in the \"\n 'same column (\"long\" format), consider constructing the '\n \"dataset with `PandasDataset.from_long_dataframe` instead.\"\n )\n\n self._dataframes[i] = (item_id, df)\n\n if not self.freq: # infer frequency from index\n self.freq = self._dataframes[0][1].index.freqstr\n\n self.process = ProcessDataEntry(\n cast(str, self.freq), one_dim_target=self.one_dim_target\n )\n\n def _dataentry(\n self, item_id: Optional[str], df: pd.DataFrame\n ) -> DataEntry:\n dataentry = as_dataentry(\n data=df,\n target=self.target,\n feat_dynamic_real=self.feat_dynamic_real,\n feat_dynamic_cat=self.feat_dynamic_cat,\n feat_static_real=self.feat_static_real,\n feat_static_cat=self.feat_static_cat,\n past_feat_dynamic_real=self.past_feat_dynamic_real,\n )\n if item_id is not None:\n dataentry[\"item_id\"] = item_id\n return dataentry\n\n def __iter__(self) -> Iterator[DataEntry]:\n for item_id, df in self._dataframes:\n dataentry = self.process(self._dataentry(item_id, df))\n if self.ignore_last_n_targets:\n dataentry = prepare_prediction_data(\n dataentry, self.ignore_last_n_targets\n )\n yield dataentry\n\n def __len__(self) -> int:\n return len(self._dataframes)\n\n @classmethod\n def from_long_dataframe(\n cls, dataframe: pd.DataFrame, item_id: str, **kwargs\n ) -> \"PandasDataset\":\n \"\"\"\n Construct ``PandasDataset`` out of a long dataframe.\n A long dataframe uses the long format for each variable. Target time\n series values, for example, are stacked on top of each other rather\n than side-by-side. 
The same is true for other dynamic or categorical\n features.\n\n Parameters\n ----------\n dataframe\n pandas.DataFrame containing at least ``timestamp``, ``target`` and\n ``item_id`` columns.\n item_id\n Name of the column that, when grouped by, gives the different time\n series.\n **kwargs\n Additional arguments. Same as of PandasDataset class.\n\n Returns\n -------\n PandasDataset\n Gluonts dataset based on ``pandas.DataFrame``s.\n \"\"\"\n return cls(dataframes=dict(list(dataframe.groupby(item_id))), **kwargs)\n\n\ndef series_to_dataframe(\n series: Union[pd.Series, List[pd.Series], Dict[str, pd.Series]]\n) -> Union[pd.DataFrame, List[pd.DataFrame], Dict[str, pd.DataFrame]]:\n def to_df(series):\n assert isinstance(\n series.index, DatetimeIndexOpsMixin\n ), \"series index has to be a DatetimeIndex.\"\n return series.to_frame(name=\"target\")\n\n if isinstance(series, list):\n return list(map(to_df, series))\n elif isinstance(series, dict):\n return valmap(to_df, series)\n return to_df(series)\n\n\ndef is_series(series: Any) -> bool:\n \"\"\"\n return True if ``series`` is ``pd.Series`` or a collection of\n ``pd.Series``.\n \"\"\"\n if isinstance(series, list):\n return is_series(series[0])\n elif isinstance(series, dict):\n return is_series(list(series.values()))\n return isinstance(series, pd.Series)\n\n\ndef as_dataentry(\n data: pd.DataFrame,\n target: Union[str, List[str]],\n timestamp: Optional[str] = None,\n feat_dynamic_real: List[str] = [],\n feat_dynamic_cat: List[str] = [],\n feat_static_real: List[str] = [],\n feat_static_cat: List[str] = [],\n past_feat_dynamic_real: List[str] = [],\n) -> DataEntry:\n \"\"\"\n Convert a single time series (uni- or multi-variate) that is given in\n a pandas.DataFrame format to a DataEntry.\n\n Parameters\n ----------\n data\n pandas.DataFrame containing at least ``timestamp``, ``target`` and\n ``item_id`` columns.\n target\n Name of the column that contains the ``target`` time series.\n For multivariate targets ``target`` is expecting a list of column\n names.\n timestamp\n Name of the column that contains the timestamp information.\n If ``None`` the index of ``data`` is assumed to be the time.\n feat_dynamic_real\n List of column names that contain dynamic real features.\n feat_dynamic_cat\n List of column names that contain dynamic categorical features.\n feat_static_real\n List of column names that contain static real features.\n feat_static_cat\n List of column names that contain static categorical features.\n past_feat_dynamic_real\n List of column names that contain dynamic real features only for\n the history.\n\n Returns\n -------\n DataEntry\n A dictionary with at least ``target`` and ``start`` field.\n \"\"\"\n start = data.loc[:, timestamp].iloc[0] if timestamp else data.index[0]\n dataentry = {FieldName.START: start}\n\n def set_field(fieldname, col_names, f=lambda x: x):\n if col_names:\n dataentry[fieldname] = [\n f(data.loc[:, n].to_list()) for n in col_names\n ]\n\n if isinstance(target, str):\n dataentry[FieldName.TARGET] = data.loc[:, target].to_list()\n else:\n set_field(FieldName.TARGET, target)\n set_field(FieldName.FEAT_DYNAMIC_REAL, feat_dynamic_real)\n set_field(FieldName.FEAT_DYNAMIC_CAT, feat_dynamic_cat)\n set_field(FieldName.FEAT_STATIC_REAL, feat_static_real, lambda x: x[0])\n set_field(FieldName.FEAT_STATIC_CAT, feat_static_cat, lambda x: x[0])\n set_field(FieldName.PAST_FEAT_DYNAMIC_REAL, past_feat_dynamic_real)\n return dataentry\n\n\ndef prepare_prediction_data(\n dataentry: DataEntry, 
ignore_last_n_targets: int\n) -> DataEntry:\n \"\"\"\n Remove ``ignore_last_n_targets`` values from ``target`` and\n ``past_feat_dynamic_real``. Works in univariate and multivariate case.\n\n >>> prepare_prediction_data(\n >>> {\"target\": np.array([1., 2., 3., 4.])}, ignore_last_n_targets=2\n >>> )\n {'target': array([1., 2.])}\n \"\"\"\n entry = deepcopy(dataentry)\n for fname in [FieldName.TARGET, FieldName.PAST_FEAT_DYNAMIC_REAL]:\n if fname in entry:\n entry[fname] = entry[fname][..., :-ignore_last_n_targets]\n return entry\n\n\ndef is_uniform(index: pd.PeriodIndex) -> bool:\n \"\"\"\n Check if ``index`` contains monotonically increasing periods, evenly spaced\n with frequency ``index.freq``.\n\n >>> ts = [\"2021-01-01 00:00\", \"2021-01-01 02:00\", \"2021-01-01 04:00\"]\n >>> is_uniform(pd.DatetimeIndex(ts).to_period(\"2H\"))\n True\n >>> ts = [\"2021-01-01 00:00\", \"2021-01-01 04:00\"]\n >>> is_uniform(pd.DatetimeIndex(ts).to_period(\"2H\"))\n False\n \"\"\"\n return (index[1:] - index[:-1] == index.freq).all()\n",
"path": "src/gluonts/dataset/pandas.py"
}
] | [
{
"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom copy import deepcopy\nfrom dataclasses import dataclass, field\nfrom typing import Any, cast, Dict, Iterator, List, Optional, Union\n\nimport pandas as pd\nfrom pandas.core.indexes.datetimelike import DatetimeIndexOpsMixin\nfrom toolz import valmap\n\nfrom gluonts.dataset.common import DataEntry, ProcessDataEntry\nfrom gluonts.dataset.field_names import FieldName\n\n\n@dataclass\nclass PandasDataset:\n \"\"\"\n A pandas.DataFrame-based dataset type.\n\n This class is constructed with a collection of pandas.DataFrame-objects\n where each ``DataFrame`` is representing one time series.\n A ``target`` and a ``timestamp`` columns are essential. Furthermore,\n static/dynamic real/categorical features can be specified.\n\n Parameters\n ----------\n dataframes\n Single ``pd.DataFrame``/``pd.Series`` or a collection as list or dict\n containing at least ``timestamp`` and ``target`` values.\n If a Dict is provided, the key will be the associated ``item_id``.\n target\n Name of the column that contains the ``target`` time series.\n For multivariate targets, a list of column names should be provided.\n timestamp\n Name of the column that contains the timestamp information.\n freq\n Frequency of observations in the time series. Must be a valid pandas\n frequency.\n feat_dynamic_real\n List of column names that contain dynamic real features.\n feat_dynamic_cat\n List of column names that contain dynamic categorical features.\n feat_static_real\n List of column names that contain static real features.\n feat_static_cat\n List of column names that contain static categorical features.\n past_feat_dynamic_real\n List of column names that contain dynamic real features only for the\n history.\n ignore_last_n_targets\n For target and past dynamic features last ``ignore_last_n_targets``\n elements are removed when iterating over the data set. 
This becomes\n important when the predictor is called.\n \"\"\"\n\n dataframes: Union[\n pd.DataFrame,\n pd.Series,\n List[pd.DataFrame],\n List[pd.Series],\n Dict[str, pd.DataFrame],\n Dict[str, pd.Series],\n ]\n target: Union[str, List[str]] = \"target\"\n timestamp: Optional[str] = None\n freq: Optional[str] = None\n feat_dynamic_real: List[str] = field(default_factory=list)\n feat_dynamic_cat: List[str] = field(default_factory=list)\n feat_static_real: List[str] = field(default_factory=list)\n feat_static_cat: List[str] = field(default_factory=list)\n past_feat_dynamic_real: List[str] = field(default_factory=list)\n ignore_last_n_targets: int = 0\n\n def __post_init__(self) -> None:\n if isinstance(self.target, list) and len(self.target) == 1:\n self.target = self.target[0]\n self.one_dim_target = not isinstance(self.target, list)\n\n if is_series(self.dataframes):\n self.dataframes = series_to_dataframe(self.dataframes)\n # store data internally as List[Tuple[str, pandas.DataFrame]]\n # if str is not empty it will be set in ``DataEntry`` as ``item_id``.\n if isinstance(self.dataframes, dict):\n self._dataframes = list(self.dataframes.items())\n elif isinstance(self.dataframes, list):\n self._dataframes = [(None, df) for df in self.dataframes]\n else: # case single dataframe\n self._dataframes = [(None, self.dataframes)]\n\n for i, (item_id, df) in enumerate(self._dataframes):\n if self.timestamp:\n df = df.set_index(keys=self.timestamp)\n\n if not isinstance(df.index, pd.PeriodIndex):\n df.index = pd.to_datetime(df.index)\n df = df.to_period(freq=self.freq)\n\n df.sort_index(inplace=True)\n\n assert is_uniform(df.index), (\n \"Dataframe index is not uniformly spaced. \"\n \"If your dataframe contains data from multiple series in the \"\n 'same column (\"long\" format), consider constructing the '\n \"dataset with `PandasDataset.from_long_dataframe` instead.\"\n )\n\n self._dataframes[i] = (item_id, df)\n\n if not self.freq: # infer frequency from index\n self.freq = self._dataframes[0][1].index.freqstr\n\n self.process = ProcessDataEntry(\n cast(str, self.freq), one_dim_target=self.one_dim_target\n )\n\n def _dataentry(\n self, item_id: Optional[str], df: pd.DataFrame\n ) -> DataEntry:\n dataentry = as_dataentry(\n data=df,\n target=self.target,\n feat_dynamic_real=self.feat_dynamic_real,\n feat_dynamic_cat=self.feat_dynamic_cat,\n feat_static_real=self.feat_static_real,\n feat_static_cat=self.feat_static_cat,\n past_feat_dynamic_real=self.past_feat_dynamic_real,\n )\n if item_id is not None:\n dataentry[\"item_id\"] = item_id\n return dataentry\n\n def __iter__(self) -> Iterator[DataEntry]:\n for item_id, df in self._dataframes:\n dataentry = self.process(self._dataentry(item_id, df))\n if self.ignore_last_n_targets:\n dataentry = prepare_prediction_data(\n dataentry, self.ignore_last_n_targets\n )\n yield dataentry\n\n def __len__(self) -> int:\n return len(self._dataframes)\n\n @classmethod\n def from_long_dataframe(\n cls, dataframe: pd.DataFrame, item_id: str, **kwargs\n ) -> \"PandasDataset\":\n \"\"\"\n Construct ``PandasDataset`` out of a long dataframe.\n A long dataframe uses the long format for each variable. Target time\n series values, for example, are stacked on top of each other rather\n than side-by-side. 
The same is true for other dynamic or categorical\n features.\n\n Parameters\n ----------\n dataframe\n pandas.DataFrame containing at least ``timestamp``, ``target`` and\n ``item_id`` columns.\n item_id\n Name of the column that, when grouped by, gives the different time\n series.\n **kwargs\n Additional arguments. Same as of PandasDataset class.\n\n Returns\n -------\n PandasDataset\n Gluonts dataset based on ``pandas.DataFrame``s.\n \"\"\"\n return cls(dataframes=dict(list(dataframe.groupby(item_id))), **kwargs)\n\n\ndef series_to_dataframe(\n series: Union[pd.Series, List[pd.Series], Dict[str, pd.Series]]\n) -> Union[pd.DataFrame, List[pd.DataFrame], Dict[str, pd.DataFrame]]:\n def to_df(series):\n assert isinstance(\n series.index, DatetimeIndexOpsMixin\n ), \"series index has to be a DatetimeIndex.\"\n return series.to_frame(name=\"target\")\n\n if isinstance(series, list):\n return list(map(to_df, series))\n elif isinstance(series, dict):\n return valmap(to_df, series)\n return to_df(series)\n\n\ndef is_series(series: Any) -> bool:\n \"\"\"\n return True if ``series`` is ``pd.Series`` or a collection of\n ``pd.Series``.\n \"\"\"\n if isinstance(series, list):\n return is_series(series[0])\n elif isinstance(series, dict):\n return is_series(list(series.values()))\n return isinstance(series, pd.Series)\n\n\ndef as_dataentry(\n data: pd.DataFrame,\n target: Union[str, List[str]],\n timestamp: Optional[str] = None,\n feat_dynamic_real: List[str] = [],\n feat_dynamic_cat: List[str] = [],\n feat_static_real: List[str] = [],\n feat_static_cat: List[str] = [],\n past_feat_dynamic_real: List[str] = [],\n) -> DataEntry:\n \"\"\"\n Convert a single time series (uni- or multi-variate) that is given in\n a pandas.DataFrame format to a DataEntry.\n\n Parameters\n ----------\n data\n pandas.DataFrame containing at least ``timestamp``, ``target`` and\n ``item_id`` columns.\n target\n Name of the column that contains the ``target`` time series.\n For multivariate targets ``target`` is expecting a list of column\n names.\n timestamp\n Name of the column that contains the timestamp information.\n If ``None`` the index of ``data`` is assumed to be the time.\n feat_dynamic_real\n List of column names that contain dynamic real features.\n feat_dynamic_cat\n List of column names that contain dynamic categorical features.\n feat_static_real\n List of column names that contain static real features.\n feat_static_cat\n List of column names that contain static categorical features.\n past_feat_dynamic_real\n List of column names that contain dynamic real features only for\n the history.\n\n Returns\n -------\n DataEntry\n A dictionary with at least ``target`` and ``start`` field.\n \"\"\"\n start = data.loc[:, timestamp].iloc[0] if timestamp else data.index[0]\n dataentry = {FieldName.START: start}\n\n def set_field(fieldname, col_names, f=lambda x: x):\n if col_names:\n dataentry[fieldname] = [\n f(data.loc[:, n].to_list()) for n in col_names\n ]\n\n if isinstance(target, str):\n dataentry[FieldName.TARGET] = data.loc[:, target].to_list()\n else:\n set_field(FieldName.TARGET, target)\n set_field(FieldName.FEAT_DYNAMIC_REAL, feat_dynamic_real)\n set_field(FieldName.FEAT_DYNAMIC_CAT, feat_dynamic_cat)\n set_field(FieldName.FEAT_STATIC_REAL, feat_static_real, lambda x: x[0])\n set_field(FieldName.FEAT_STATIC_CAT, feat_static_cat, lambda x: x[0])\n set_field(FieldName.PAST_FEAT_DYNAMIC_REAL, past_feat_dynamic_real)\n return dataentry\n\n\ndef prepare_prediction_data(\n dataentry: DataEntry, 
ignore_last_n_targets: int\n) -> DataEntry:\n \"\"\"\n Remove ``ignore_last_n_targets`` values from ``target`` and\n ``past_feat_dynamic_real``. Works in univariate and multivariate case.\n\n >>> prepare_prediction_data(\n >>> {\"target\": np.array([1., 2., 3., 4.])}, ignore_last_n_targets=2\n >>> )\n {'target': array([1., 2.])}\n \"\"\"\n entry = deepcopy(dataentry)\n for fname in [FieldName.TARGET, FieldName.PAST_FEAT_DYNAMIC_REAL]:\n if fname in entry:\n entry[fname] = entry[fname][..., :-ignore_last_n_targets]\n return entry\n\n\ndef is_uniform(index: pd.PeriodIndex) -> bool:\n \"\"\"\n Check if ``index`` contains monotonically increasing periods, evenly spaced\n with frequency ``index.freq``.\n\n >>> ts = [\"2021-01-01 00:00\", \"2021-01-01 02:00\", \"2021-01-01 04:00\"]\n >>> is_uniform(pd.DatetimeIndex(ts).to_period(\"2H\"))\n True\n >>> ts = [\"2021-01-01 00:00\", \"2021-01-01 04:00\"]\n >>> is_uniform(pd.DatetimeIndex(ts).to_period(\"2H\"))\n False\n \"\"\"\n other = pd.period_range(index[0], periods=len(index), freq=index.freq)\n return (other == index).all()\n",
"path": "src/gluonts/dataset/pandas.py"
}
] | diff --git a/src/gluonts/dataset/pandas.py b/src/gluonts/dataset/pandas.py
index 72ecbefc65..4c21c12a34 100644
--- a/src/gluonts/dataset/pandas.py
+++ b/src/gluonts/dataset/pandas.py
@@ -305,4 +305,5 @@ def is_uniform(index: pd.PeriodIndex) -> bool:
>>> is_uniform(pd.DatetimeIndex(ts).to_period("2H"))
False
"""
- return (index[1:] - index[:-1] == index.freq).all()
+ other = pd.period_range(index[0], periods=len(index), freq=index.freq)
+ return (other == index).all()
|
d2l-ai__d2l-vi-115 | test
| [
{
"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\n# reload(sys)\n# sys.setdefaultencoding('utf8')\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*dịch đoạn phía trên*'\nHEADER_INDICATOR = ' *dịch tiêu đề phía trên*\\n'\nIMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'\nSTART_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. :label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='Dịch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, 
in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n",
"path": "utils.py"
}
] | [
{
"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*dịch đoạn phía trên*'\nHEADER_INDICATOR = ' *dịch tiêu đề phía trên*\\n'\nIMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'\nSTART_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. :label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='Dịch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = 
False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n",
"path": "utils.py"
}
] | diff --git a/utils.py b/utils.py
index 242fea1d92..89e1f0c35a 100644
--- a/utils.py
+++ b/utils.py
@@ -5,9 +5,6 @@
import sys
import argparse
-# reload(sys)
-# sys.setdefaultencoding('utf8')
-
BEGIN_BLOCK_COMMENT = '<!--\n'
END_BLOCK_COMMENT = '-->\n\n'
TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
|
PaddlePaddle__PaddleSpeech-2364 | Added pre-install doc for G2P and TN modules and updated the dependency version of pypinyin
Added pre-install doc for G2P and TN modules and updated the dependency version of pypinyin
| [
{
"content": "# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport contextlib\nimport inspect\nimport io\nimport os\nimport subprocess as sp\nimport sys\nfrom pathlib import Path\n\nfrom setuptools import Command\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\nHERE = Path(os.path.abspath(os.path.dirname(__file__)))\n\nVERSION = '0.0.0'\nCOMMITID = 'none'\n\nbase = [\n \"editdistance\",\n \"g2p_en\",\n \"g2pM\",\n \"h5py\",\n \"inflect\",\n \"jieba\",\n \"jsonlines\",\n \"kaldiio\",\n \"librosa==0.8.1\",\n \"loguru\",\n \"matplotlib\",\n \"nara_wpe\",\n \"onnxruntime==1.10.0\",\n \"opencc\",\n \"pandas\",\n \"paddlenlp\",\n \"paddlespeech_feat\",\n \"Pillow>=9.0.0\",\n \"praatio==5.0.0\",\n \"protobuf>=3.1.0, <=3.20.0\",\n \"pypinyin\",\n \"pypinyin-dict\",\n \"python-dateutil\",\n \"pyworld==0.2.12\",\n \"resampy==0.2.2\",\n \"sacrebleu\",\n \"scipy\",\n \"sentencepiece~=0.1.96\",\n \"soundfile~=0.10\",\n \"textgrid\",\n \"timer\",\n \"tqdm\",\n \"typeguard\",\n \"visualdl\",\n \"webrtcvad\",\n \"yacs~=0.1.8\",\n \"prettytable\",\n \"zhon\",\n \"colorlog\",\n \"pathos == 0.2.8\",\n \"braceexpand\",\n \"pyyaml\",\n \"pybind11\",\n]\n\nserver = [\"fastapi\", \"uvicorn\", \"pattern_singleton\", \"websockets\"]\n\nrequirements = {\n \"install\":\n base + server,\n \"develop\": [\n \"ConfigArgParse\",\n \"coverage\",\n \"gpustat\",\n \"paddlespeech_ctcdecoders\",\n \"phkit\",\n \"pypi-kenlm\",\n \"snakeviz\",\n \"sox\",\n \"soxbindings\",\n \"unidecode\",\n \"yq\",\n \"pre-commit\",\n ]\n}\n\n\ndef check_call(cmd: str, shell=False, executable=None):\n try:\n sp.check_call(\n cmd.split(),\n shell=shell,\n executable=\"/bin/bash\" if shell else executable)\n except sp.CalledProcessError as e:\n print(\n f\"{__file__}:{inspect.currentframe().f_lineno}: CMD: {cmd}, Error:\",\n e.output,\n file=sys.stderr)\n raise e\n\n\ndef check_output(cmd: str, shell=False):\n try:\n out_bytes = sp.check_output(cmd.split())\n except sp.CalledProcessError as e:\n out_bytes = e.output # Output generated before error\n code = e.returncode # Return code\n print(\n f\"{__file__}:{inspect.currentframe().f_lineno}: CMD: {cmd}, Error:\",\n out_bytes,\n file=sys.stderr)\n return out_bytes.strip().decode('utf8')\n\n\[email protected]\ndef pushd(new_dir):\n old_dir = os.getcwd()\n os.chdir(new_dir)\n print(new_dir)\n yield\n os.chdir(old_dir)\n print(old_dir)\n\n\ndef read(*names, **kwargs):\n with io.open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef _remove(files: str):\n for f in files:\n f.unlink()\n\n\n################################# Install ##################################\n\n\ndef _post_install(install_lib_dir):\n # tools/make\n tool_dir = HERE / \"tools\"\n _remove(tool_dir.glob(\"*.done\"))\n with 
pushd(tool_dir):\n check_call(\"make\")\n print(\"tools install.\")\n\n # ctcdecoder\n ctcdecoder_dir = HERE / 'third_party/ctc_decoders'\n with pushd(ctcdecoder_dir):\n check_call(\"bash -e setup.sh\")\n print(\"ctcdecoder install.\")\n\n\nclass DevelopCommand(develop):\n def run(self):\n develop.run(self)\n # must after develop.run, or pkg install by shell will not see\n self.execute(_post_install, (self.install_lib, ), msg=\"Post Install...\")\n\n\nclass InstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass TestCommand(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # Run nose ensuring that argv simulates running nosetests directly\n import nose\n nose.run_exit(argv=['nosetests', '-w', 'tests'])\n\n\n# cmd: python setup.py upload\nclass UploadCommand(Command):\n description = \"Build and publish the package.\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n print(\"Removing previous dist/ ...\")\n shutil.rmtree(str(HERE / \"dist\"))\n except OSError:\n pass\n print(\"Building source distribution...\")\n sp.check_call([sys.executable, \"setup.py\", \"sdist\"])\n print(\"Uploading package to PyPi...\")\n sp.check_call([\"twine\", \"upload\", \"dist/*\"])\n sys.exit()\n\n\n################################# Version ##################################\ndef write_version_py(filename='paddlespeech/__init__.py'):\n import paddlespeech\n if hasattr(paddlespeech,\n \"__version__\") and paddlespeech.__version__ == VERSION:\n return\n with open(filename, \"a\") as f:\n out_str = f\"\\n__version__ = '{VERSION}'\\n\"\n print(out_str)\n f.write(f\"\\n__version__ = '{VERSION}'\\n\")\n\n COMMITID = check_output(\"git rev-parse HEAD\")\n with open(filename, 'a') as f:\n out_str = f\"\\n__commit__ = '{COMMITID}'\\n\"\n print(out_str)\n f.write(f\"\\n__commit__ = '{COMMITID}'\\n\")\n\n print(f\"{inspect.currentframe().f_code.co_name} done\")\n\n\ndef remove_version_py(filename='paddlespeech/__init__.py'):\n with open(filename, \"r\") as f:\n lines = f.readlines()\n with open(filename, \"w\") as f:\n for line in lines:\n if \"__version__\" in line or \"__commit__\" in line:\n continue\n f.write(line)\n print(f\"{inspect.currentframe().f_code.co_name} done\")\n\n\[email protected]\ndef version_info():\n write_version_py()\n yield\n remove_version_py()\n\n\n################################# Steup ##################################\nsetup_info = dict(\n # Metadata\n name='paddlespeech',\n version=VERSION,\n author='PaddlePaddle Speech and Language Team',\n author_email='[email protected]',\n url='https://github.com/PaddlePaddle/PaddleSpeech',\n license='Apache 2.0',\n description='Speech tools and models based on Paddlepaddle',\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n keywords=[\n \"speech\",\n \"asr\",\n \"tts\",\n \"speaker verfication\",\n \"speech classfication\",\n \"text frontend\",\n \"MFA\",\n \"paddlepaddle\",\n \"beam search\",\n \"ctcdecoder\",\n \"deepspeech2\",\n \"transformer\",\n \"conformer\",\n \"fastspeech\",\n \"vocoder\",\n \"pwgan\",\n \"gan\",\n ],\n python_requires='>=3.7',\n install_requires=requirements[\"install\"],\n extras_require={\n 'develop':\n requirements[\"develop\"],\n 'doc': [\n \"sphinx\", \"sphinx-rtd-theme\", \"numpydoc\", \"myst_parser\",\n \"recommonmark>=0.5.0\", \"sphinx-markdown-tables\", \"sphinx-autobuild\"\n ],\n 'test': 
['nose', 'torchaudio==0.10.2'],\n },\n cmdclass={\n 'develop': DevelopCommand,\n 'install': InstallCommand,\n 'upload': UploadCommand,\n 'test': TestCommand,\n },\n\n # Package info\n packages=find_packages(include=('paddlespeech*')),\n zip_safe=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n entry_points={\n 'console_scripts': [\n 'paddlespeech=paddlespeech.cli.entry:_execute',\n 'paddlespeech_server=paddlespeech.server.entry:server_execute',\n 'paddlespeech_client=paddlespeech.server.entry:client_execute'\n ]\n })\n\nwith version_info():\n setup(**setup_info, include_package_data=True)\n",
"path": "setup.py"
}
] | [
{
"content": "# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport contextlib\nimport inspect\nimport io\nimport os\nimport subprocess as sp\nimport sys\nfrom pathlib import Path\n\nfrom setuptools import Command\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.command.develop import develop\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\nHERE = Path(os.path.abspath(os.path.dirname(__file__)))\n\nVERSION = '0.0.0'\nCOMMITID = 'none'\n\nbase = [\n \"editdistance\",\n \"g2p_en\",\n \"g2pM\",\n \"h5py\",\n \"inflect\",\n \"jieba\",\n \"jsonlines\",\n \"kaldiio\",\n \"librosa==0.8.1\",\n \"loguru\",\n \"matplotlib\",\n \"nara_wpe\",\n \"onnxruntime==1.10.0\",\n \"opencc\",\n \"pandas\",\n \"paddlenlp\",\n \"paddlespeech_feat\",\n \"Pillow>=9.0.0\",\n \"praatio==5.0.0\",\n \"protobuf>=3.1.0, <=3.20.0\",\n \"pypinyin<=0.44.0\",\n \"pypinyin-dict\",\n \"python-dateutil\",\n \"pyworld==0.2.12\",\n \"resampy==0.2.2\",\n \"sacrebleu\",\n \"scipy\",\n \"sentencepiece~=0.1.96\",\n \"soundfile~=0.10\",\n \"textgrid\",\n \"timer\",\n \"tqdm\",\n \"typeguard\",\n \"visualdl\",\n \"webrtcvad\",\n \"yacs~=0.1.8\",\n \"prettytable\",\n \"zhon\",\n \"colorlog\",\n \"pathos == 0.2.8\",\n \"braceexpand\",\n \"pyyaml\",\n \"pybind11\",\n]\n\nserver = [\"fastapi\", \"uvicorn\", \"pattern_singleton\", \"websockets\"]\n\nrequirements = {\n \"install\":\n base + server,\n \"develop\": [\n \"ConfigArgParse\",\n \"coverage\",\n \"gpustat\",\n \"paddlespeech_ctcdecoders\",\n \"phkit\",\n \"pypi-kenlm\",\n \"snakeviz\",\n \"sox\",\n \"soxbindings\",\n \"unidecode\",\n \"yq\",\n \"pre-commit\",\n ]\n}\n\n\ndef check_call(cmd: str, shell=False, executable=None):\n try:\n sp.check_call(\n cmd.split(),\n shell=shell,\n executable=\"/bin/bash\" if shell else executable)\n except sp.CalledProcessError as e:\n print(\n f\"{__file__}:{inspect.currentframe().f_lineno}: CMD: {cmd}, Error:\",\n e.output,\n file=sys.stderr)\n raise e\n\n\ndef check_output(cmd: str, shell=False):\n try:\n out_bytes = sp.check_output(cmd.split())\n except sp.CalledProcessError as e:\n out_bytes = e.output # Output generated before error\n code = e.returncode # Return code\n print(\n f\"{__file__}:{inspect.currentframe().f_lineno}: CMD: {cmd}, Error:\",\n out_bytes,\n file=sys.stderr)\n return out_bytes.strip().decode('utf8')\n\n\[email protected]\ndef pushd(new_dir):\n old_dir = os.getcwd()\n os.chdir(new_dir)\n print(new_dir)\n yield\n os.chdir(old_dir)\n print(old_dir)\n\n\ndef read(*names, **kwargs):\n with io.open(\n os.path.join(os.path.dirname(__file__), *names),\n encoding=kwargs.get(\"encoding\", \"utf8\")) as fp:\n return fp.read()\n\n\ndef _remove(files: str):\n for f in files:\n f.unlink()\n\n\n################################# Install ##################################\n\n\ndef _post_install(install_lib_dir):\n # tools/make\n tool_dir = HERE / \"tools\"\n _remove(tool_dir.glob(\"*.done\"))\n 
with pushd(tool_dir):\n check_call(\"make\")\n print(\"tools install.\")\n\n # ctcdecoder\n ctcdecoder_dir = HERE / 'third_party/ctc_decoders'\n with pushd(ctcdecoder_dir):\n check_call(\"bash -e setup.sh\")\n print(\"ctcdecoder install.\")\n\n\nclass DevelopCommand(develop):\n def run(self):\n develop.run(self)\n # must after develop.run, or pkg install by shell will not see\n self.execute(_post_install, (self.install_lib, ), msg=\"Post Install...\")\n\n\nclass InstallCommand(install):\n def run(self):\n install.run(self)\n\n\nclass TestCommand(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n def run_tests(self):\n # Run nose ensuring that argv simulates running nosetests directly\n import nose\n nose.run_exit(argv=['nosetests', '-w', 'tests'])\n\n\n# cmd: python setup.py upload\nclass UploadCommand(Command):\n description = \"Build and publish the package.\"\n user_options = []\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n print(\"Removing previous dist/ ...\")\n shutil.rmtree(str(HERE / \"dist\"))\n except OSError:\n pass\n print(\"Building source distribution...\")\n sp.check_call([sys.executable, \"setup.py\", \"sdist\"])\n print(\"Uploading package to PyPi...\")\n sp.check_call([\"twine\", \"upload\", \"dist/*\"])\n sys.exit()\n\n\n################################# Version ##################################\ndef write_version_py(filename='paddlespeech/__init__.py'):\n import paddlespeech\n if hasattr(paddlespeech,\n \"__version__\") and paddlespeech.__version__ == VERSION:\n return\n with open(filename, \"a\") as f:\n out_str = f\"\\n__version__ = '{VERSION}'\\n\"\n print(out_str)\n f.write(f\"\\n__version__ = '{VERSION}'\\n\")\n\n COMMITID = check_output(\"git rev-parse HEAD\")\n with open(filename, 'a') as f:\n out_str = f\"\\n__commit__ = '{COMMITID}'\\n\"\n print(out_str)\n f.write(f\"\\n__commit__ = '{COMMITID}'\\n\")\n\n print(f\"{inspect.currentframe().f_code.co_name} done\")\n\n\ndef remove_version_py(filename='paddlespeech/__init__.py'):\n with open(filename, \"r\") as f:\n lines = f.readlines()\n with open(filename, \"w\") as f:\n for line in lines:\n if \"__version__\" in line or \"__commit__\" in line:\n continue\n f.write(line)\n print(f\"{inspect.currentframe().f_code.co_name} done\")\n\n\[email protected]\ndef version_info():\n write_version_py()\n yield\n remove_version_py()\n\n\n################################# Steup ##################################\nsetup_info = dict(\n # Metadata\n name='paddlespeech',\n version=VERSION,\n author='PaddlePaddle Speech and Language Team',\n author_email='[email protected]',\n url='https://github.com/PaddlePaddle/PaddleSpeech',\n license='Apache 2.0',\n description='Speech tools and models based on Paddlepaddle',\n long_description=read(\"README.md\"),\n long_description_content_type=\"text/markdown\",\n keywords=[\n \"speech\",\n \"asr\",\n \"tts\",\n \"speaker verfication\",\n \"speech classfication\",\n \"text frontend\",\n \"MFA\",\n \"paddlepaddle\",\n \"beam search\",\n \"ctcdecoder\",\n \"deepspeech2\",\n \"transformer\",\n \"conformer\",\n \"fastspeech\",\n \"vocoder\",\n \"pwgan\",\n \"gan\",\n ],\n python_requires='>=3.7',\n install_requires=requirements[\"install\"],\n extras_require={\n 'develop':\n requirements[\"develop\"],\n 'doc': [\n \"sphinx\", \"sphinx-rtd-theme\", \"numpydoc\", \"myst_parser\",\n \"recommonmark>=0.5.0\", \"sphinx-markdown-tables\", \"sphinx-autobuild\"\n ],\n 'test': 
['nose', 'torchaudio==0.10.2'],\n },\n cmdclass={\n 'develop': DevelopCommand,\n 'install': InstallCommand,\n 'upload': UploadCommand,\n 'test': TestCommand,\n },\n\n # Package info\n packages=find_packages(include=('paddlespeech*')),\n zip_safe=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Science/Research',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n entry_points={\n 'console_scripts': [\n 'paddlespeech=paddlespeech.cli.entry:_execute',\n 'paddlespeech_server=paddlespeech.server.entry:server_execute',\n 'paddlespeech_client=paddlespeech.server.entry:client_execute'\n ]\n })\n\nwith version_info():\n setup(**setup_info, include_package_data=True)\n",
"path": "setup.py"
}
] | diff --git a/docs/requirements.txt b/docs/requirements.txt
index bd071e7e20c..3fb82367f64 100644
--- a/docs/requirements.txt
+++ b/docs/requirements.txt
@@ -27,7 +27,7 @@ pattern_singleton
Pillow>=9.0.0
praatio==5.0.0
prettytable
-pypinyin
+pypinyin<=0.44.0
pypinyin-dict
python-dateutil
pyworld==0.2.12
diff --git a/examples/other/g2p/README.md b/examples/other/g2p/README.md
index 85c9535d1f3..a1911b2f6a4 100644
--- a/examples/other/g2p/README.md
+++ b/examples/other/g2p/README.md
@@ -9,6 +9,9 @@ We use `WER` as an evaluation criterion.
Run the command below to get the results of the test.
```bash
+cd ../../../tools
+bash extras/install_sclite.sh
+cd -
./run.sh
```
diff --git a/examples/other/tn/README.md b/examples/other/tn/README.md
index 3b80de661e0..cae89a36a39 100644
--- a/examples/other/tn/README.md
+++ b/examples/other/tn/README.md
@@ -5,6 +5,9 @@ We use `CER` as an evaluation criterion.
## Start
Run the command below to get the results of the test.
```bash
+cd ../../../tools
+bash extras/install_sclite.sh
+cd -
./run.sh
```
The `avg CER` of text normalization is: 0.00730093543235227
diff --git a/setup.py b/setup.py
index fac9e1207d8..e551d9fa6f7 100644
--- a/setup.py
+++ b/setup.py
@@ -52,7 +52,7 @@
"Pillow>=9.0.0",
"praatio==5.0.0",
"protobuf>=3.1.0, <=3.20.0",
- "pypinyin",
+ "pypinyin<=0.44.0",
"pypinyin-dict",
"python-dateutil",
"pyworld==0.2.12",
|
AUTOMATIC1111__stable-diffusion-webui-60 | FileNotFoundError after new update
Getting a FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\admin\\stable-diffusion-webui\\scripts' after the new update.
I'm not very experienced with the coding side of things. It was working just fine yesterday, but back then I had downloaded the repo as an archive instead of using git clone. For the sake of easier updates I started a fresh installation by git cloning into my user folder; the installation went well, but I ran into this while launching through webui.py:
```
Python 3.10.6
venv C:\Users\admin\stable-diffusion-webui\venv\Scripts\Python.exe
Launching webui.py...
Loading model from C:\Users\admin\stable-diffusion-webui\model.ckpt
Global Step: 470000
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Traceback (most recent call last):
  File "C:\Users\admin\stable-diffusion-webui\webui.py", line 135, in <module>
    modules.scripts.load_scripts(os.path.join(script_path, "scripts"))
  File "C:\Users\admin\stable-diffusion-webui\modules\scripts.py", line 32, in load_scripts
    for filename in os.listdir(basedir):
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\admin\\stable-diffusion-webui\\scripts'
```
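The traceback points at `load_scripts` calling `os.listdir` on a `scripts` folder that a fresh clone does not contain. The repository's fix (see the diff below) guards against the missing directory; a minimal sketch of that guard, with the loop body elided:

```python
import os

def load_scripts(basedir):
    # A fresh git clone may not ship a scripts/ folder; skip silently instead
    # of letting os.listdir raise FileNotFoundError.
    if not os.path.exists(basedir):
        return

    for filename in os.listdir(basedir):
        path = os.path.join(basedir, filename)
        if not os.path.isfile(path):
            continue
        # ... load the script module as before ...
```

Creating an empty `scripts` folder next to `webui.py` is an equivalent user-side workaround.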
| [
{
"content": "import os\r\nimport sys\r\nimport traceback\r\n\r\nimport modules.ui as ui\r\nimport gradio as gr\r\n\r\nfrom modules.processing import StableDiffusionProcessing\r\n\r\nclass Script:\r\n filename = None\r\n args_from = None\r\n args_to = None\r\n\r\n def title(self):\r\n raise NotImplementedError()\r\n\r\n def ui(self, is_img2img):\r\n pass\r\n\r\n def run(self, *args):\r\n raise NotImplementedError()\r\n\r\n def describe(self):\r\n return \"\"\r\n\r\n\r\nscripts = []\r\n\r\n\r\ndef load_scripts(basedir):\r\n for filename in os.listdir(basedir):\r\n path = os.path.join(basedir, filename)\r\n\r\n if not os.path.isfile(path):\r\n continue\r\n\r\n with open(path, \"r\", encoding=\"utf8\") as file:\r\n text = file.read()\r\n\r\n from types import ModuleType\r\n compiled = compile(text, path, 'exec')\r\n module = ModuleType(filename)\r\n exec(compiled, module.__dict__)\r\n\r\n for key, script_class in module.__dict__.items():\r\n if type(script_class) == type and issubclass(script_class, Script):\r\n obj = script_class()\r\n obj.filename = path\r\n\r\n scripts.append(obj)\r\n\r\n\r\ndef wrap_call(func, filename, funcname, *args, default=None, **kwargs):\r\n try:\r\n res = func(*args, **kwargs)\r\n return res\r\n except Exception:\r\n print(f\"Error calling: {filename}/{funcname}\", file=sys.stderr)\r\n print(traceback.format_exc(), file=sys.stderr)\r\n\r\n return default\r\n\r\n\r\ndef setup_ui(is_img2img):\r\n titles = [wrap_call(script.title, script.filename, \"title\") or f\"{script.filename} [error]\" for script in scripts]\r\n\r\n dropdown = gr.Dropdown(label=\"Script\", choices=[\"None\"] + titles, value=\"None\", type=\"index\")\r\n\r\n inputs = [dropdown]\r\n\r\n for script in scripts:\r\n script.args_from = len(inputs)\r\n controls = script.ui(is_img2img)\r\n\r\n for control in controls:\r\n control.visible = False\r\n\r\n inputs += controls\r\n script.args_to = len(inputs)\r\n\r\n def select_script(index):\r\n if index > 0:\r\n script = scripts[index-1]\r\n args_from = script.args_from\r\n args_to = script.args_to\r\n else:\r\n args_from = 0\r\n args_to = 0\r\n\r\n return [ui.gr_show(True if i == 0 else args_from <= i < args_to) for i in range(len(inputs))]\r\n\r\n dropdown.change(\r\n fn=select_script,\r\n inputs=[dropdown],\r\n outputs=inputs\r\n )\r\n\r\n return inputs\r\n\r\n\r\ndef run(p: StableDiffusionProcessing, *args):\r\n script_index = args[0] - 1\r\n\r\n if script_index < 0 or script_index >= len(scripts):\r\n return None\r\n\r\n script = scripts[script_index]\r\n\r\n script_args = args[script.args_from:script.args_to]\r\n processed = script.run(p, *script_args)\r\n\r\n return processed\r\n",
"path": "modules/scripts.py"
}
] | [
{
"content": "import os\r\nimport sys\r\nimport traceback\r\n\r\nimport modules.ui as ui\r\nimport gradio as gr\r\n\r\nfrom modules.processing import StableDiffusionProcessing\r\n\r\nclass Script:\r\n filename = None\r\n args_from = None\r\n args_to = None\r\n\r\n def title(self):\r\n raise NotImplementedError()\r\n\r\n def ui(self, is_img2img):\r\n pass\r\n\r\n def run(self, *args):\r\n raise NotImplementedError()\r\n\r\n def describe(self):\r\n return \"\"\r\n\r\n\r\nscripts = []\r\n\r\n\r\ndef load_scripts(basedir):\r\n if not os.path.exists(basedir):\r\n return\r\n\r\n for filename in os.listdir(basedir):\r\n path = os.path.join(basedir, filename)\r\n\r\n if not os.path.isfile(path):\r\n continue\r\n\r\n with open(path, \"r\", encoding=\"utf8\") as file:\r\n text = file.read()\r\n\r\n from types import ModuleType\r\n compiled = compile(text, path, 'exec')\r\n module = ModuleType(filename)\r\n exec(compiled, module.__dict__)\r\n\r\n for key, script_class in module.__dict__.items():\r\n if type(script_class) == type and issubclass(script_class, Script):\r\n obj = script_class()\r\n obj.filename = path\r\n\r\n scripts.append(obj)\r\n\r\n\r\ndef wrap_call(func, filename, funcname, *args, default=None, **kwargs):\r\n try:\r\n res = func(*args, **kwargs)\r\n return res\r\n except Exception:\r\n print(f\"Error calling: {filename}/{funcname}\", file=sys.stderr)\r\n print(traceback.format_exc(), file=sys.stderr)\r\n\r\n return default\r\n\r\n\r\ndef setup_ui(is_img2img):\r\n titles = [wrap_call(script.title, script.filename, \"title\") or f\"{script.filename} [error]\" for script in scripts]\r\n\r\n dropdown = gr.Dropdown(label=\"Script\", choices=[\"None\"] + titles, value=\"None\", type=\"index\")\r\n\r\n inputs = [dropdown]\r\n\r\n for script in scripts:\r\n script.args_from = len(inputs)\r\n controls = script.ui(is_img2img)\r\n\r\n for control in controls:\r\n control.visible = False\r\n\r\n inputs += controls\r\n script.args_to = len(inputs)\r\n\r\n def select_script(index):\r\n if index > 0:\r\n script = scripts[index-1]\r\n args_from = script.args_from\r\n args_to = script.args_to\r\n else:\r\n args_from = 0\r\n args_to = 0\r\n\r\n return [ui.gr_show(True if i == 0 else args_from <= i < args_to) for i in range(len(inputs))]\r\n\r\n dropdown.change(\r\n fn=select_script,\r\n inputs=[dropdown],\r\n outputs=inputs\r\n )\r\n\r\n return inputs\r\n\r\n\r\ndef run(p: StableDiffusionProcessing, *args):\r\n script_index = args[0] - 1\r\n\r\n if script_index < 0 or script_index >= len(scripts):\r\n return None\r\n\r\n script = scripts[script_index]\r\n\r\n script_args = args[script.args_from:script.args_to]\r\n processed = script.run(p, *script_args)\r\n\r\n return processed\r\n",
"path": "modules/scripts.py"
}
] | diff --git a/modules/scripts.py b/modules/scripts.py
index be348a70481..37a236827c4 100644
--- a/modules/scripts.py
+++ b/modules/scripts.py
@@ -29,6 +29,9 @@ def describe(self):
def load_scripts(basedir):
+ if not os.path.exists(basedir):
+ return
+
for filename in os.listdir(basedir):
path = os.path.join(basedir, filename)
|
mitmproxy__mitmproxy-5603 | libGL error when starting latest version of mitmweb 8.1.1 on Debian
#### Problem Description
I was using the old version of mitmproxy, 6.0.2, installed from the Debian unstable repository, and it works just fine. Today I decided to download the latest version of mitmproxy, 8.1.1, and I got the errors below immediately after typing `./mitmweb`:
```
Web server listening at http://127.0.0.1:8081/
Opening in existing browser session.
Proxy server listening at *:8080
libGL error: MESA-LOADER: failed to open crocus: /usr/lib/dri/crocus_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: crocus
libGL error: MESA-LOADER: failed to open crocus: /usr/lib/dri/crocus_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: crocus
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
[5508:5508:0100/000000.622195:ERROR:angle_platform_impl.cc(43)] Display.cpp:992 (initialize): ANGLE Display::initialize error 12289: Could not create a backing OpenGL context.
[5508:5508:0100/000000.622454:ERROR:gl_surface_egl.cc(831)] EGL Driver message (Critical) eglInitialize: Could not create a backing OpenGL context.
[5508:5508:0100/000000.622599:ERROR:gl_surface_egl.cc(1353)] eglInitialize OpenGL failed with error EGL_NOT_INITIALIZED, trying next display type
[5508:5508:0100/000000.625277:ERROR:angle_platform_impl.cc(43)] Display.cpp:992 (initialize): ANGLE Display::initialize error 12289: Could not create a backing OpenGL context.
[5508:5508:0100/000000.625508:ERROR:gl_surface_egl.cc(831)] EGL Driver message (Critical) eglInitialize: Could not create a backing OpenGL context.
[5508:5508:0100/000000.625555:ERROR:gl_surface_egl.cc(1353)] eglInitialize OpenGLES failed with error EGL_NOT_INITIALIZED
[5508:5508:0100/000000.625654:ERROR:gl_ozone_egl.cc(23)] GLSurfaceEGL::InitializeOneOff failed.
```
And the URL at http://127.0.0.1:8081 loads just a blank page.
Note that I checked, and I already have the `libgl1-mesa-dri` package installed.
#### Steps to reproduce the behavior:
1. Download the latest version of mitmproxy (8.1.1).
2. Open a terminal and run `./mitmweb`.
#### System Information
Output of `./mitmproxy --version`:
```
Mitmproxy: 8.1.1 binary
Python: 3.10.5
OpenSSL: OpenSSL 3.0.3 3 May 2022
Platform: Linux-5.18.0-3-amd64-x86_64-with-glibc2.34
```
I will also include the output for mitmproxy 6.0.2, which I have installed on the same system, since I noticed that the Python and OpenSSL versions differ:
```
Mitmproxy: 6.0.2
Python: 3.10.6
OpenSSL: OpenSSL 3.0.5 5 Jul 2022
Platform: Linux-5.18.0-3-amd64-x86_64-with-glibc2.34
```
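For context on the one-line fix in the diff below: mitmweb's `open_browser` helper hands each candidate name to Python's `webbrowser.get`. A name containing `%s` is treated as a literal command line and yields a `GenericBrowser`, whose `open()` waits for the launched process to exit, which can leave mitmweb stuck; a plain registered name yields a `BackgroundBrowser` that returns immediately. A small sketch of that difference (assuming a Debian-like system where `x-www-browser` exists and a display is available):

```python
import webbrowser

# "x-www-browser %s" is parsed as a literal command line, so the stdlib
# returns a GenericBrowser; its open() blocks until the process exits.
blocking = webbrowser.get("x-www-browser %s")
print(type(blocking).__name__)      # GenericBrowser

# "x-www-browser" is looked up among the registered controllers; on Linux
# with a display it is a BackgroundBrowser, whose open() returns right away.
# (webbrowser.Error is raised if the command is absent.)
non_blocking = webbrowser.get("x-www-browser")
print(type(non_blocking).__name__)  # BackgroundBrowser
```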
| [
{
"content": "import logging\nimport webbrowser\nfrom collections.abc import Sequence\n\nfrom mitmproxy import ctx\n\n\nclass WebAddon:\n def load(self, loader):\n loader.add_option(\"web_open_browser\", bool, True, \"Start a browser.\")\n loader.add_option(\"web_debug\", bool, False, \"Enable mitmweb debugging.\")\n loader.add_option(\"web_port\", int, 8081, \"Web UI port.\")\n loader.add_option(\"web_host\", str, \"127.0.0.1\", \"Web UI host.\")\n loader.add_option(\n \"web_columns\",\n Sequence[str],\n [\"tls\", \"icon\", \"path\", \"method\", \"status\", \"size\", \"time\"],\n \"Columns to show in the flow list\",\n )\n\n def running(self):\n if hasattr(ctx.options, \"web_open_browser\") and ctx.options.web_open_browser:\n web_url = f\"http://{ctx.options.web_host}:{ctx.options.web_port}/\"\n success = open_browser(web_url)\n if not success:\n logging.info(\n f\"No web browser found. Please open a browser and point it to {web_url}\",\n )\n\n\ndef open_browser(url: str) -> bool:\n \"\"\"\n Open a URL in a browser window.\n In contrast to webbrowser.open, we limit the list of suitable browsers.\n This gracefully degrades to a no-op on headless servers, where webbrowser.open\n would otherwise open lynx.\n\n Returns:\n True, if a browser has been opened\n False, if no suitable browser has been found.\n \"\"\"\n browsers = (\n \"windows-default\",\n \"macosx\",\n \"wslview %s\",\n \"gio\",\n \"x-www-browser %s\",\n \"gnome-open %s\",\n \"xdg-open\",\n \"google-chrome\",\n \"chrome\",\n \"chromium\",\n \"chromium-browser\",\n \"firefox\",\n \"opera\",\n \"safari\",\n )\n for browser in browsers:\n try:\n b = webbrowser.get(browser)\n except webbrowser.Error:\n pass\n else:\n if b.open(url):\n return True\n return False\n",
"path": "mitmproxy/tools/web/webaddons.py"
}
] | [
{
"content": "import logging\nimport webbrowser\nfrom collections.abc import Sequence\n\nfrom mitmproxy import ctx\n\n\nclass WebAddon:\n def load(self, loader):\n loader.add_option(\"web_open_browser\", bool, True, \"Start a browser.\")\n loader.add_option(\"web_debug\", bool, False, \"Enable mitmweb debugging.\")\n loader.add_option(\"web_port\", int, 8081, \"Web UI port.\")\n loader.add_option(\"web_host\", str, \"127.0.0.1\", \"Web UI host.\")\n loader.add_option(\n \"web_columns\",\n Sequence[str],\n [\"tls\", \"icon\", \"path\", \"method\", \"status\", \"size\", \"time\"],\n \"Columns to show in the flow list\",\n )\n\n def running(self):\n if hasattr(ctx.options, \"web_open_browser\") and ctx.options.web_open_browser:\n web_url = f\"http://{ctx.options.web_host}:{ctx.options.web_port}/\"\n success = open_browser(web_url)\n if not success:\n logging.info(\n f\"No web browser found. Please open a browser and point it to {web_url}\",\n )\n\n\ndef open_browser(url: str) -> bool:\n \"\"\"\n Open a URL in a browser window.\n In contrast to webbrowser.open, we limit the list of suitable browsers.\n This gracefully degrades to a no-op on headless servers, where webbrowser.open\n would otherwise open lynx.\n\n Returns:\n True, if a browser has been opened\n False, if no suitable browser has been found.\n \"\"\"\n browsers = (\n \"windows-default\",\n \"macosx\",\n \"wslview %s\",\n \"gio\",\n \"x-www-browser\",\n \"gnome-open %s\",\n \"xdg-open\",\n \"google-chrome\",\n \"chrome\",\n \"chromium\",\n \"chromium-browser\",\n \"firefox\",\n \"opera\",\n \"safari\",\n )\n for browser in browsers:\n try:\n b = webbrowser.get(browser)\n except webbrowser.Error:\n pass\n else:\n if b.open(url):\n return True\n return False\n",
"path": "mitmproxy/tools/web/webaddons.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 8476be5201..6b3a73a5a8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -39,6 +39,9 @@
([#5588](https://github.com/mitmproxy/mitmproxy/pull/5588), @nikitastupin, @abbbe)
* Add WireGuard mode to enable userspace transparent proxying via WireGuard.
([#5562](https://github.com/mitmproxy/mitmproxy/pull/5562), @decathorpe, @mhils)
+* Fix mitmweb not properly opening a browser and being stuck on some Linux.
+ ([#5522](https://github.com/mitmproxy/mitmproxy/issues/5522), @Prinzhorn)
+
## 28 June 2022: mitmproxy 8.1.1
diff --git a/mitmproxy/tools/web/webaddons.py b/mitmproxy/tools/web/webaddons.py
index 3ba42f12a2..6d5970988a 100644
--- a/mitmproxy/tools/web/webaddons.py
+++ b/mitmproxy/tools/web/webaddons.py
@@ -44,7 +44,7 @@ def open_browser(url: str) -> bool:
"macosx",
"wslview %s",
"gio",
- "x-www-browser %s",
+ "x-www-browser",
"gnome-open %s",
"xdg-open",
"google-chrome",
|
wemake-services__wemake-python-styleguide-204 | Feature: ignore async function definitions from jones complexity check
Currently we only ignore `ClassDef` and `FunctionDef`: https://github.com/wemake-services/wemake-python-styleguide/blob/master/wemake_python_styleguide/visitors/ast/complexity/jones.py#L38-L41
What needs to be done:
1. ignore `AsyncFunctionDef` in the check as well
2. add a dedicated test case for ignored nodes, since we do not have one yet. We can call it `test_that_some_nodes_are_ignored` and place it in https://github.com/wemake-services/wemake-python-styleguide/blob/master/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py. It should cover all three ignored nodes: with the lowest complexity threshold there should be no errors (a rough sketch follows below).
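A rough sketch of what such a test could look like (names mirror the existing tests; `assert_errors`, `parse_ast_tree` and `default_options` are assumed to be the fixtures already used in `test_line_complexity.py`, and the real test should additionally pin the lowest `max_line_complexity`):

```python
import pytest

from wemake_python_styleguide.visitors.ast.complexity.jones import (
    JonesComplexityVisitor,
)

# The three definition nodes whose own lines should not be counted.
ignored_definitions = [
    'def some_function(): ...',
    'async def some_function(): ...',
    'class SomeClass(object): ...',
]


@pytest.mark.parametrize('code', ignored_definitions)
def test_that_some_nodes_are_ignored(
    assert_errors, parse_ast_tree, default_options, code,
):
    """Ignored definitions should not trigger line complexity violations."""
    tree = parse_ast_tree(code)

    visitor = JonesComplexityVisitor(default_options, tree=tree)
    visitor.run()

    assert_errors(visitor, [])
```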
| [
{
"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nJones Complexity to count inline complexity.\n\nBased on the original `jones-complexity` project:\nhttps://github.com/Miserlou/JonesComplexity\n\nOriginal project is licensed under MIT.\n\"\"\"\n\nimport ast\nfrom collections import defaultdict\nfrom statistics import median\nfrom typing import DefaultDict, List\n\nfrom wemake_python_styleguide.logics.nodes import is_subtype_of_any\nfrom wemake_python_styleguide.violations.complexity import (\n JonesScoreViolation,\n LineComplexityViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\nclass JonesComplexityVisitor(BaseNodeVisitor): # TODO: consider `logical_line`\n \"\"\"\n This visitor is used to find complex lines in the code.\n\n Calculates the number of AST nodes per line of code.\n Also calculates the median nodes/line score.\n Then compares these numbers to the given tressholds.\n\n Some nodes are ignored because there's no sense in analyzing them.\n Some nodes like type annotations are not affecting line complexity,\n so we do not count them.\n \"\"\"\n\n _ignored_nodes = (\n ast.FunctionDef,\n ast.ClassDef,\n )\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initializes line number counter.\"\"\"\n super().__init__(*args, **kwargs)\n self._lines: DefaultDict[int, List[ast.AST]] = defaultdict(list)\n self._to_ignore: List[ast.AST] = []\n\n def _post_visit(self) -> None:\n \"\"\"\n Triggers after the whole module was processed.\n\n Checks each line for its complexity, compares it to the tresshold.\n We also calculate the final Jones score for the whole module.\n \"\"\"\n for line_nodes in self._lines.values():\n complexity = len(line_nodes)\n if complexity > self.options.max_line_complexity:\n self.add_violation(LineComplexityViolation(\n line_nodes[0], text=str(complexity),\n ))\n\n node_counts = [len(nodes) for nodes in self._lines.values()]\n total_count = median(node_counts) if node_counts else 0\n if total_count > self.options.max_jones_score:\n self.add_violation(JonesScoreViolation())\n\n def _maybe_ignore_child(self, node: ast.AST) -> bool:\n if isinstance(node, ast.AnnAssign):\n self._to_ignore.append(node.annotation)\n\n return node in self._to_ignore\n\n def visit(self, node: ast.AST) -> None:\n \"\"\"\n Visits all nodes, sums the number of nodes per line.\n\n Then calculates the median value of all line results.\n\n Raises:\n JonesScoreViolation\n LineComplexityViolation\n\n \"\"\"\n line_number = getattr(node, 'lineno', None)\n is_ignored = is_subtype_of_any(node, self._ignored_nodes)\n if line_number is not None and not is_ignored:\n if not self._maybe_ignore_child(node):\n self._lines[line_number].append(node)\n\n self.generic_visit(node)\n",
"path": "wemake_python_styleguide/visitors/ast/complexity/jones.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nJones Complexity to count inline complexity.\n\nBased on the original `jones-complexity` project:\nhttps://github.com/Miserlou/JonesComplexity\n\nOriginal project is licensed under MIT.\n\"\"\"\n\nimport ast\nfrom collections import defaultdict\nfrom statistics import median\nfrom typing import DefaultDict, List\n\nfrom wemake_python_styleguide.logics.nodes import is_subtype_of_any\nfrom wemake_python_styleguide.violations.complexity import (\n JonesScoreViolation,\n LineComplexityViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\n\n\nclass JonesComplexityVisitor(BaseNodeVisitor): # TODO: consider `logical_line`\n \"\"\"\n This visitor is used to find complex lines in the code.\n\n Calculates the number of AST nodes per line of code.\n Also calculates the median nodes/line score.\n Then compares these numbers to the given tressholds.\n\n Some nodes are ignored because there's no sense in analyzing them.\n Some nodes like type annotations are not affecting line complexity,\n so we do not count them.\n \"\"\"\n\n _ignored_nodes = (\n ast.FunctionDef,\n ast.ClassDef,\n ast.AsyncFunctionDef,\n )\n\n def __init__(self, *args, **kwargs) -> None:\n \"\"\"Initializes line number counter.\"\"\"\n super().__init__(*args, **kwargs)\n self._lines: DefaultDict[int, List[ast.AST]] = defaultdict(list)\n self._to_ignore: List[ast.AST] = []\n\n def _post_visit(self) -> None:\n \"\"\"\n Triggers after the whole module was processed.\n\n Checks each line for its complexity, compares it to the tresshold.\n We also calculate the final Jones score for the whole module.\n \"\"\"\n for line_nodes in self._lines.values():\n complexity = len(line_nodes)\n if complexity > self.options.max_line_complexity:\n self.add_violation(LineComplexityViolation(\n line_nodes[0], text=str(complexity),\n ))\n\n node_counts = [len(nodes) for nodes in self._lines.values()]\n total_count = median(node_counts) if node_counts else 0\n if total_count > self.options.max_jones_score:\n self.add_violation(JonesScoreViolation())\n\n def _maybe_ignore_child(self, node: ast.AST) -> bool:\n if isinstance(node, ast.AnnAssign):\n self._to_ignore.append(node.annotation)\n\n return node in self._to_ignore\n\n def visit(self, node: ast.AST) -> None:\n \"\"\"\n Visits all nodes, sums the number of nodes per line.\n\n Then calculates the median value of all line results.\n\n Raises:\n JonesScoreViolation\n LineComplexityViolation\n\n \"\"\"\n line_number = getattr(node, 'lineno', None)\n is_ignored = is_subtype_of_any(node, self._ignored_nodes)\n if line_number is not None and not is_ignored:\n if not self._maybe_ignore_child(node):\n self._lines[line_number].append(node)\n\n self.generic_visit(node)\n",
"path": "wemake_python_styleguide/visitors/ast/complexity/jones.py"
}
] | diff --git a/CHANGELOG.md b/CHANGELOG.md
index 358a18761..7011c6fdf 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,7 @@ We used to have incremental versioning before `0.1.0`.
- We now count `async` methods as method for classes complexity check
- We now count `async` functions as functions for module complexity check
- We now count `async` functions complexity
+- We now ignore `async` functions in jones complexity check
### Misc
diff --git a/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py b/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py
index 5aea60576..20e807e9c 100644
--- a/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py
+++ b/tests/test_visitors/test_ast/test_complexity/test_jones/test_line_complexity.py
@@ -16,12 +16,39 @@ def some_function():
return 2 + 1
"""
+line_inside_async_function = """
+async def some_function():
+ return 2 + 1
+"""
+
line_inside_class = """
class SomeClass():
field = 13 / 2
"""
+class_with_function = """
+class First:
+ def second():
+ return 2 + 1
+"""
+
+class_with_async_function = """
+class First:
+ async def second():
+ return 2 + 1
+"""
+
+class_with_usual_and_async_function = """
+class First:
+ async def second():
+ return 2 + 1
+
+ def third():
+ return 2 + 2
+"""
+
function_declaration = 'def some_function(): ...'
+async_function_declaration = 'async def some_function(): ...'
class_declaration = 'class SomeClass(object): ...'
empty_module = ''
@@ -32,10 +59,15 @@ class SomeClass():
line_with_comprehension,
line_with_math,
line_inside_function,
+ line_inside_async_function,
line_inside_class,
function_declaration,
+ async_function_declaration,
class_declaration,
empty_module,
+ class_with_function,
+ class_with_async_function,
+ class_with_usual_and_async_function,
])
def test_regular_nodes(assert_errors, parse_ast_tree, code, default_options):
"""Testing that regular nodes do not raise violations."""
@@ -50,6 +82,7 @@ def test_regular_nodes(assert_errors, parse_ast_tree, code, default_options):
@pytest.mark.parametrize('code', [
line_simple,
line_inside_function,
+ line_inside_async_function,
line_inside_class,
])
def test_complex_lines(assert_errors, parse_ast_tree, code, options):
@@ -96,3 +129,22 @@ def test_exact_complexity(parse_ast_tree, default_options, code, complexity):
assert len(visitor._lines) == 1
assert len(visitor._lines[1]) == complexity
+
+
[email protected]('code, number_of_lines', [
+ (line_inside_function, 1),
+ (line_inside_async_function, 1),
+ (class_with_async_function, 1),
+ (class_with_function, 1),
+ (class_with_usual_and_async_function, 2),
+])
+def test_that_some_nodes_are_ignored(
+ parse_ast_tree, default_options, code, assert_errors, number_of_lines,
+):
+ """Ensures that complexity is counted correctly."""
+ tree = parse_ast_tree(code)
+
+ visitor = JonesComplexityVisitor(default_options, tree=tree)
+ visitor.run()
+
+ assert len(visitor._lines) == number_of_lines
diff --git a/wemake_python_styleguide/visitors/ast/complexity/jones.py b/wemake_python_styleguide/visitors/ast/complexity/jones.py
index 59dab73cc..1d9e40e1d 100644
--- a/wemake_python_styleguide/visitors/ast/complexity/jones.py
+++ b/wemake_python_styleguide/visitors/ast/complexity/jones.py
@@ -38,6 +38,7 @@ class JonesComplexityVisitor(BaseNodeVisitor): # TODO: consider `logical_line`
_ignored_nodes = (
ast.FunctionDef,
ast.ClassDef,
+ ast.AsyncFunctionDef,
)
def __init__(self, *args, **kwargs) -> None:
|
dmlc__dgl-2897 | Moving a graph to GPU will change the default CUDA device
## 🐛 Bug
## To Reproduce
```
import torch
import dgl
torch.cuda.set_device(1)
print(torch.cuda.current_device()) # print 1
device = 'cuda' # 'cuda:1'
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3]))).to(device)
print(torch.cuda.current_device()) # print 0
```
## Expected behavior
The index of the current device should not be changed.
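Until that holds, a possible user-side workaround (only a sketch; it assumes a machine with at least two visible GPUs and may not be needed on all versions) is to pass an explicit device index instead of the bare `'cuda'` string and to re-assert the selection afterwards:

```python
import torch
import dgl

torch.cuda.set_device(1)
g = dgl.graph((torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])))

# Spell out the device index rather than relying on the bare 'cuda' string,
# then restore the selection in case the move changed it behind our back.
target = torch.device('cuda', torch.cuda.current_device())
g = g.to(target)
torch.cuda.set_device(target)
assert torch.cuda.current_device() == 1
```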
## Environment
- DGL Version (e.g., 1.0): 0.6
- Backend Library & Version (e.g., PyTorch 0.4.1, MXNet/Gluon 1.3): PyTorch 1.9.0a0+gitaeaa91b
- OS (e.g., Linux): RHEL
- How you installed DGL (`conda`, `pip`, source): source
- Python version: 3.8
- CUDA/cuDNN version (if applicable): 11.0
- GPU models and configuration (e.g. V100): NVIDIA GeForce RTX 2080 Ti
| [
{
"content": "from __future__ import absolute_import\n\nfrom distutils.version import LooseVersion\n\nimport scipy # Weird bug in new pytorch when import scipy after import torch\nimport torch as th\nimport builtins\nimport numbers\nfrom torch.utils import dlpack\n\nfrom ... import ndarray as nd\nfrom ..._deprecate import kernel as K\nfrom ...function.base import TargetCode\nfrom ...base import dgl_warning\n\nif LooseVersion(th.__version__) < LooseVersion(\"1.5.0\"):\n raise Exception(\"Detected an old version of PyTorch. Please update torch>=1.5.0 \"\n \"for the best experience.\")\n\ndef data_type_dict():\n return {'float16' : th.float16,\n 'float32' : th.float32,\n 'float64' : th.float64,\n 'uint8' : th.uint8,\n 'int8' : th.int8,\n 'int16' : th.int16,\n 'int32' : th.int32,\n 'int64' : th.int64,\n 'bool' : th.bool}\n\ndef cpu():\n return th.device('cpu')\n\ndef tensor(data, dtype=None):\n if isinstance(data, numbers.Number):\n data = [data]\n if isinstance(data, list) and len(data) > 0 and isinstance(data[0], th.Tensor):\n # prevent GPU->CPU->GPU copies\n if data[0].ndim == 0:\n # zero dimenion scalar tensors\n return th.stack(data)\n if isinstance(data, th.Tensor):\n return th.as_tensor(data, dtype=dtype, device=data.device)\n else:\n return th.as_tensor(data, dtype=dtype)\n\ndef as_scalar(data):\n return data.item()\n\ndef get_preferred_sparse_format():\n \"\"\"Get the preferred sparse matrix format supported by the backend.\n\n Different backends have their preferred backend. This info is useful when\n constructing a sparse matrix.\n \"\"\"\n return \"coo\"\n\ndef sparse_matrix(data, index, shape, force_format=False):\n fmt = index[0]\n if fmt != 'coo':\n raise TypeError('Pytorch backend only supports COO format. But got %s.' % fmt)\n spmat = th.sparse_coo_tensor(index[1], data, shape)\n return spmat, None\n\ndef sparse_matrix_indices(spmat):\n return ('coo', spmat._indices())\n\ndef is_tensor(obj):\n return isinstance(obj, th.Tensor)\n\ndef shape(input):\n return input.shape\n\ndef dtype(input):\n return input.dtype\n\ndef ndim(input):\n return input.dim()\n\ndef context(input):\n return input.device\n\ndef device_type(ctx):\n return th.device(ctx).type\n\ndef device_id(ctx):\n ctx = th.device(ctx)\n if ctx.index is None:\n return 0\n else:\n return ctx.index\n\ndef to_backend_ctx(dglctx):\n dev_type = dglctx.device_type\n if dev_type == 1:\n return th.device('cpu')\n elif dev_type == 2:\n return th.device('cuda', dglctx.device_id)\n else:\n raise ValueError('Unsupported DGL device context:', dglctx)\n\ndef astype(input, ty):\n return input.type(ty)\n\ndef asnumpy(input):\n if isinstance(input, th.sparse.FloatTensor):\n return input.to_dense().cpu().detach().numpy()\n else:\n return input.cpu().detach().numpy()\n\ndef copy_to(input, ctx, **kwargs):\n ctx = th.device(ctx)\n if ctx.type == 'cpu':\n return input.cpu()\n elif ctx.type == 'cuda':\n if ctx.index is not None:\n th.cuda.set_device(ctx.index)\n return input.cuda(**kwargs)\n else:\n raise RuntimeError('Invalid context', ctx)\n\ndef sum(input, dim, keepdims=False):\n return th.sum(input, dim=dim, keepdim=keepdims)\n\ndef floor_div(in1, in2):\n return in1 // in2\n\ndef reduce_sum(input):\n return input.sum()\n\ndef cumsum(input, dim):\n return th.cumsum(input, dim=dim)\n\ndef mean(input, dim):\n return th.mean(input, dim=dim)\n\ndef reduce_mean(input):\n return input.mean()\n\ndef max(input, dim):\n # NOTE: the second argmax array is not returned\n return th.max(input, dim=dim)[0]\n\ndef reduce_max(input):\n return 
input.max()\n\ndef min(input, dim):\n # NOTE: the second argmin array is not returned\n return th.min(input, dim=dim)[0]\n\ndef reduce_min(input):\n return input.min()\n\ndef argsort(input, dim, descending):\n return th.argsort(input, dim=dim, descending=descending)\n\ndef topk(input, k, dim, descending=True):\n return th.topk(input, k, dim, largest=descending)[0]\n\ndef argtopk(input, k, dim, descending=True):\n return th.topk(input, k, dim, largest=descending)[1]\n\ndef exp(input):\n return th.exp(input)\n\ndef sqrt(input):\n return th.sqrt(input)\n\ndef softmax(input, dim=-1):\n return th.softmax(input, dim=dim)\n\ndef cat(seq, dim):\n return th.cat(seq, dim=dim)\n\ndef stack(seq, dim):\n return th.stack(seq, dim=dim)\n\ndef split(input, sizes_or_sections, dim):\n return th.split(input, sizes_or_sections, dim)\n\ndef repeat(input, repeats, dim):\n return th.repeat_interleave(input, repeats, dim) # PyTorch 1.1\n\ndef gather_row(data, row_index):\n return th.index_select(data, 0, row_index.long())\n\ndef slice_axis(data, axis, begin, end):\n return th.narrow(data, axis, begin, end - begin)\n\ndef take(data, indices, dim):\n new_shape = data.shape[:dim] + indices.shape + data.shape[dim+1:]\n return th.index_select(data, dim, indices.view(-1)).view(new_shape)\n\ndef narrow_row(x, start, stop):\n return x[start:stop]\n\ndef index_add_inplace(data, row_idx, value):\n data.index_add_(0, row_idx, value)\n\ndef scatter_row(data, row_index, value):\n return data.index_copy(0, row_index.long(), value)\n\ndef scatter_row_inplace(data, row_index, value):\n data[row_index.long()] = value\n\ndef squeeze(input, dim):\n return th.squeeze(input, dim)\n\ndef unsqueeze(input, dim):\n return th.unsqueeze(input, dim)\n\ndef reshape(input, shape):\n return th.reshape(input ,shape)\n\ndef swapaxes(input, axis1, axis2):\n return th.transpose(input, axis1, axis2)\n\ndef zeros(shape, dtype, ctx):\n return th.zeros(shape, dtype=dtype, device=ctx)\n\ndef zeros_like(input):\n return th.zeros_like(input)\n\ndef ones(shape, dtype, ctx):\n return th.ones(shape, dtype=dtype, device=ctx)\n\ndef uniform(shape, dtype, ctx, low, high):\n return th.empty(shape, dtype=dtype, device=ctx).uniform_(low, high)\n\ndef randint(shape, dtype, ctx, low, high):\n return th.randint(low, high, shape, dtype=dtype, device=ctx)\n\ndef pad_packed_tensor(input, lengths, value, l_min=None):\n old_shape = input.shape\n device = input.device\n if not is_tensor(lengths):\n lengths = th.tensor(lengths, dtype=th.int64, device=device)\n else:\n lengths = lengths.to(device)\n max_len = as_scalar(lengths.max())\n\n if l_min is not None:\n max_len = builtins.max(max_len, l_min)\n\n batch_size = len(lengths)\n x = input.new(batch_size * max_len, *old_shape[1:])\n x.fill_(value)\n index = th.ones(len(input), dtype=th.int64, device=device)\n cum_lengths = th.cumsum(lengths, 0)\n index[cum_lengths[:-1]] += (max_len - lengths[:-1])\n index = th.cumsum(index, 0) - 1\n x[index] = input\n return x.view(batch_size, max_len, *old_shape[1:])\n\ndef pack_padded_tensor(input, lengths):\n max_len = input.shape[1]\n device = input.device\n if not is_tensor(lengths):\n lengths = th.tensor(lengths, dtype=th.int64, device=device)\n else:\n lengths = lengths.to(device)\n input = input.view(-1, *input.shape[2:])\n out_len = lengths.sum().item()\n index = th.ones(out_len, dtype=th.int64, device=device)\n cum_lengths = th.cumsum(lengths, 0)\n index[cum_lengths[:-1]] += (max_len - lengths[:-1])\n index = th.cumsum(index, 0) - 1\n return input[index]\n\ndef 
boolean_mask(input, mask):\n if 'bool' not in str(mask.dtype):\n mask = th.tensor(mask, dtype=th.bool)\n return input[mask]\n\ndef equal(x, y):\n return x == y\n\ndef logical_not(input):\n return ~input\n\ndef logical_and(input1, input2):\n return input1 & input2\n\ndef clone(input):\n return input.clone()\n\ndef clamp(data, min_val, max_val):\n return th.clamp(data, min_val, max_val)\n\ndef replace_inf_with_zero(x):\n return th.masked_fill(x, th.isinf(x), 0)\n\ndef unique(input):\n if input.dtype == th.bool:\n input = input.type(th.int8)\n return th.unique(input)\n\ndef full_1d(length, fill_value, dtype, ctx):\n return th.full((length,), fill_value, dtype=dtype, device=ctx)\n\ndef nonzero_1d(input):\n x = th.nonzero(input, as_tuple=False).squeeze()\n return x if x.dim() == 1 else x.view(-1)\n\ndef sort_1d(input):\n return th.sort(input)\n\ndef arange(start, stop, dtype=th.int64, ctx=None):\n return th.arange(start, stop, dtype=dtype, device=ctx)\n\ndef rand_shuffle(arr):\n idx = th.randperm(len(arr))\n return arr[idx]\n\ndef zerocopy_to_dlpack(input):\n return dlpack.to_dlpack(input.contiguous())\n\ndef zerocopy_from_dlpack(dlpack_tensor):\n return dlpack.from_dlpack(dlpack_tensor)\n\ndef zerocopy_to_numpy(input):\n # NOTE: not zerocopy\n return asnumpy(input)\n\ndef zerocopy_from_numpy(np_array):\n return th.as_tensor(np_array)\n\ndef zerocopy_to_dgl_ndarray(data):\n return nd.from_dlpack(dlpack.to_dlpack(data.contiguous()))\n\ndef zerocopy_to_dgl_ndarray_for_write(input):\n return zerocopy_to_dgl_ndarray(input)\n\ndef zerocopy_from_dgl_ndarray(data):\n if data.shape == (0,):\n # NOTE: PyTorch v1.5 does not accept DLPack object representing empty CUDA tensor.\n # Related issue: https://github.com/pytorch/pytorch/issues/41182\n # The issue will be fixed in v1.6 and later.\n return th.tensor([], dtype=getattr(th, data.dtype),\n device=to_backend_ctx(data.ctx))\n else:\n return dlpack.from_dlpack(data.to_dlpack())\n\n\nclass BinaryReduce(th.autograd.Function):\n @staticmethod\n def forward(ctx, reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data, out_data,\n out_size, lhs_map, rhs_map, out_map):\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n feat_shape = K.infer_binary_feature_shape(binary_op, lhs_data_nd, rhs_data_nd)\n out_shape = feat_shape\n if binary_op == 'dot':\n out_shape = feat_shape[:-1]\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n K.binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, lhs_map[0], rhs_map[0], out_map[0])\n # normalize if mean reducer\n # NOTE(zihao): this is a temporary hack and we should have better solution in the future.\n if reducer == 'mean':\n degs = lhs_data.new_empty((out_data.shape[0],))\n degs_nd = zerocopy_to_dgl_ndarray(degs)\n if lhs != TargetCode.DST: # src or edge\n target = lhs\n n = lhs_data.shape[0]\n in_map = lhs_map[0]\n else: # rhs != TargetCode.DST\n target = rhs\n n = rhs_data.shape[0]\n in_map = rhs_map[0]\n in_ones = lhs_data.new_ones((n,))\n in_ones_nd = zerocopy_to_dgl_ndarray(in_ones)\n K.copy_reduce(\n 'sum', graph, target, in_ones_nd, degs_nd, in_map, out_map[0])\n # reshape\n degs = degs.reshape((out_data.shape[0],) + (1,) * (out_data.dim() - 1)).clamp(min=1)\n out_data = out_data / degs\n else:\n degs = None\n # save_for_backward can only save variables\n ctx.backward_cache = (reducer, binary_op, graph, lhs, rhs, lhs_map,\n rhs_map, out_map, feat_shape, degs)\n 
ctx.save_for_backward(lhs_data, rhs_data, out_data)\n return out_data\n\n @staticmethod\n def backward(ctx, grad_out):\n reducer, binary_op, graph, lhs, rhs, lhs_map, rhs_map, out_map, \\\n feat_shape, degs = ctx.backward_cache\n lhs_data, rhs_data, out_data = ctx.saved_tensors\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n grad_lhs = None\n grad_rhs = None\n if reducer == 'mean':\n grad_out = grad_out / degs\n grad_out_nd = zerocopy_to_dgl_ndarray(grad_out)\n if ctx.needs_input_grad[5]:\n grad_lhs = grad_out.new_empty((lhs_data_nd.shape[0],) + feat_shape)\n K.backward_lhs_binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, grad_out_nd, zerocopy_to_dgl_ndarray(grad_lhs),\n lhs_map[1], rhs_map[1], out_map[1])\n grad_lhs = _reduce_grad(grad_lhs, lhs_data_nd.shape)\n if ctx.needs_input_grad[6]:\n grad_rhs = grad_out.new_empty((rhs_data_nd.shape[0],) + feat_shape)\n K.backward_rhs_binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, grad_out_nd, zerocopy_to_dgl_ndarray(grad_rhs),\n lhs_map[1], rhs_map[1], out_map[1])\n grad_rhs = _reduce_grad(grad_rhs, rhs_data_nd.shape)\n\n return None, None, None, None, None, grad_lhs, grad_rhs, None, None, None, \\\n None, None\n\n\ndef binary_reduce(reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data,\n out_size, lhs_map=(None, None), rhs_map=(None, None), out_map=(None, None)):\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n feat_shape = K.infer_binary_feature_shape(binary_op, lhs_data_nd, rhs_data_nd)\n\n out_shape = feat_shape\n if binary_op == 'dot':\n out_shape = feat_shape[:-1]\n out_data = lhs_data.new_empty((out_size,) + out_shape)\n\n return BinaryReduce.apply(\n reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data, out_data,\n out_size, lhs_map, rhs_map, out_map)\n\n\nclass CopyReduce(th.autograd.Function):\n @staticmethod\n def forward(ctx, reducer, graph, target, in_data, out_data, out_size, in_map,\n out_map):\n in_data_nd = zerocopy_to_dgl_ndarray(in_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n K.copy_reduce(\n reducer if reducer != 'mean' else 'sum',\n graph, target, in_data_nd, out_data_nd, in_map[0], out_map[0])\n # normalize if mean reducer\n # NOTE(zihao): this is a temporary hack and we should have better solution in the future.\n if reducer == 'mean':\n in_ones = in_data.new_ones((in_data.shape[0],))\n degs = in_data.new_empty((out_data.shape[0],))\n in_ones_nd = zerocopy_to_dgl_ndarray(in_ones)\n degs_nd = zerocopy_to_dgl_ndarray(degs)\n K.copy_reduce(\n 'sum', graph, target, in_ones_nd, degs_nd, in_map[0], out_map[0])\n # reshape\n degs = degs.reshape((out_data.shape[0],) + (1,) * (out_data.dim() - 1)).clamp(min=1)\n out_data = out_data / degs\n else:\n degs = None\n # save_for_backward can only save variables\n ctx.backward_cache = (reducer, graph, target, in_map, out_map, degs)\n ctx.save_for_backward(in_data, out_data)\n return out_data\n\n @staticmethod\n def backward(ctx, grad_out):\n reducer, graph, target, in_map, out_map, degs = ctx.backward_cache\n in_data, out_data = ctx.saved_tensors\n in_data_nd = zerocopy_to_dgl_ndarray(in_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n grad_in = None\n if reducer == 'mean':\n grad_out = grad_out / degs\n grad_out_nd = 
zerocopy_to_dgl_ndarray(grad_out)\n if ctx.needs_input_grad[3]:\n grad_in = grad_out.new_empty(in_data_nd.shape)\n K.backward_copy_reduce(\n reducer if reducer != 'mean' else 'sum',\n graph, target, in_data_nd, out_data_nd, grad_out_nd,\n zerocopy_to_dgl_ndarray(grad_in), in_map[1], out_map[1])\n return None, None, None, grad_in, None, None, None, None\n\n\ndef copy_reduce(reducer, graph, target, in_data, out_size, in_map=(None, None),\n out_map=(None, None)):\n out_data = in_data.new_empty((out_size,) + in_data.shape[1:])\n return CopyReduce.apply(reducer, graph, target, in_data, out_data, out_size, in_map, out_map)\n\n\ndef _reduce_grad(grad, shape):\n \"\"\"Reduce gradient on the broadcast dimension\n\n If there is broadcast in forward pass, gradients need to be reduced on\n broadcast dimension. This function checks the input tensor shape and\n gradient shape and perform the reduction.\n\n Parameters\n ----------\n grad: Tensor\n Gradient tensor\n shape: tuple\n Shape of input tensor\n\n Returns\n -------\n Tensor\n \"\"\"\n grad_shape = grad.shape[1:]\n in_shape = shape[1:]\n if in_shape == grad_shape:\n # no need to reduce\n return grad\n num_to_squeeze = len(grad_shape) - len(in_shape)\n # pad inshape\n in_shape = (1,) * num_to_squeeze + in_shape\n reduce_idx = th.nonzero(th.tensor(grad_shape) - th.tensor(in_shape), as_tuple=False)\n reduce_idx += 1 # skip batch dim\n grad = grad.sum(dim=tuple(reduce_idx), keepdim=True)\n return grad.view(shape)\n\ndef sync():\n # Pytorch performs computation synchronously, so no need for synchronization.\n pass\n\ndef attach_grad(x):\n if x.grad is not None:\n x.grad.zero_()\n return x\n else:\n return x.requires_grad_()\n\ndef backward(x, head_gradient=None):\n if head_gradient is not None and head_gradient.shape[0] == 1 and len(head_gradient.shape) == 1:\n # Fix for torch 1.3.1\n head_gradient = th.tensor(head_gradient.item()).to(head_gradient.device)\n x.backward(head_gradient)\n\ndef grad(x):\n return x.grad\n\ndef is_no_grad(x):\n return x.grad is None or (x.grad == 0).all()\n\ndef is_recording():\n return th.is_grad_enabled()\n\nclass record_grad(object):\n def __init__(self):\n pass\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_value, exc_traceback):\n pass\n\nno_grad = th.no_grad\n",
"path": "python/dgl/backend/pytorch/tensor.py"
}
] | [
{
"content": "from __future__ import absolute_import\n\nfrom distutils.version import LooseVersion\n\nimport scipy # Weird bug in new pytorch when import scipy after import torch\nimport torch as th\nimport builtins\nimport numbers\nfrom torch.utils import dlpack\n\nfrom ... import ndarray as nd\nfrom ..._deprecate import kernel as K\nfrom ...function.base import TargetCode\nfrom ...base import dgl_warning\n\nif LooseVersion(th.__version__) < LooseVersion(\"1.5.0\"):\n raise Exception(\"Detected an old version of PyTorch. Please update torch>=1.5.0 \"\n \"for the best experience.\")\n\ndef data_type_dict():\n return {'float16' : th.float16,\n 'float32' : th.float32,\n 'float64' : th.float64,\n 'uint8' : th.uint8,\n 'int8' : th.int8,\n 'int16' : th.int16,\n 'int32' : th.int32,\n 'int64' : th.int64,\n 'bool' : th.bool}\n\ndef cpu():\n return th.device('cpu')\n\ndef tensor(data, dtype=None):\n if isinstance(data, numbers.Number):\n data = [data]\n if isinstance(data, list) and len(data) > 0 and isinstance(data[0], th.Tensor):\n # prevent GPU->CPU->GPU copies\n if data[0].ndim == 0:\n # zero dimenion scalar tensors\n return th.stack(data)\n if isinstance(data, th.Tensor):\n return th.as_tensor(data, dtype=dtype, device=data.device)\n else:\n return th.as_tensor(data, dtype=dtype)\n\ndef as_scalar(data):\n return data.item()\n\ndef get_preferred_sparse_format():\n \"\"\"Get the preferred sparse matrix format supported by the backend.\n\n Different backends have their preferred backend. This info is useful when\n constructing a sparse matrix.\n \"\"\"\n return \"coo\"\n\ndef sparse_matrix(data, index, shape, force_format=False):\n fmt = index[0]\n if fmt != 'coo':\n raise TypeError('Pytorch backend only supports COO format. But got %s.' % fmt)\n spmat = th.sparse_coo_tensor(index[1], data, shape)\n return spmat, None\n\ndef sparse_matrix_indices(spmat):\n return ('coo', spmat._indices())\n\ndef is_tensor(obj):\n return isinstance(obj, th.Tensor)\n\ndef shape(input):\n return input.shape\n\ndef dtype(input):\n return input.dtype\n\ndef ndim(input):\n return input.dim()\n\ndef context(input):\n return input.device\n\ndef device_type(ctx):\n return th.device(ctx).type\n\ndef device_id(ctx):\n ctx = th.device(ctx)\n if ctx.index is None:\n return 0 if ctx.type == 'cpu' else th.cuda.current_device()\n else:\n return ctx.index\n\ndef to_backend_ctx(dglctx):\n dev_type = dglctx.device_type\n if dev_type == 1:\n return th.device('cpu')\n elif dev_type == 2:\n return th.device('cuda', dglctx.device_id)\n else:\n raise ValueError('Unsupported DGL device context:', dglctx)\n\ndef astype(input, ty):\n return input.type(ty)\n\ndef asnumpy(input):\n if isinstance(input, th.sparse.FloatTensor):\n return input.to_dense().cpu().detach().numpy()\n else:\n return input.cpu().detach().numpy()\n\ndef copy_to(input, ctx, **kwargs):\n ctx = th.device(ctx)\n if ctx.type == 'cpu':\n return input.cpu()\n elif ctx.type == 'cuda':\n if ctx.index is not None:\n th.cuda.set_device(ctx.index)\n return input.cuda(**kwargs)\n else:\n raise RuntimeError('Invalid context', ctx)\n\ndef sum(input, dim, keepdims=False):\n return th.sum(input, dim=dim, keepdim=keepdims)\n\ndef floor_div(in1, in2):\n return in1 // in2\n\ndef reduce_sum(input):\n return input.sum()\n\ndef cumsum(input, dim):\n return th.cumsum(input, dim=dim)\n\ndef mean(input, dim):\n return th.mean(input, dim=dim)\n\ndef reduce_mean(input):\n return input.mean()\n\ndef max(input, dim):\n # NOTE: the second argmax array is not returned\n return th.max(input, 
dim=dim)[0]\n\ndef reduce_max(input):\n return input.max()\n\ndef min(input, dim):\n # NOTE: the second argmin array is not returned\n return th.min(input, dim=dim)[0]\n\ndef reduce_min(input):\n return input.min()\n\ndef argsort(input, dim, descending):\n return th.argsort(input, dim=dim, descending=descending)\n\ndef topk(input, k, dim, descending=True):\n return th.topk(input, k, dim, largest=descending)[0]\n\ndef argtopk(input, k, dim, descending=True):\n return th.topk(input, k, dim, largest=descending)[1]\n\ndef exp(input):\n return th.exp(input)\n\ndef sqrt(input):\n return th.sqrt(input)\n\ndef softmax(input, dim=-1):\n return th.softmax(input, dim=dim)\n\ndef cat(seq, dim):\n return th.cat(seq, dim=dim)\n\ndef stack(seq, dim):\n return th.stack(seq, dim=dim)\n\ndef split(input, sizes_or_sections, dim):\n return th.split(input, sizes_or_sections, dim)\n\ndef repeat(input, repeats, dim):\n return th.repeat_interleave(input, repeats, dim) # PyTorch 1.1\n\ndef gather_row(data, row_index):\n return th.index_select(data, 0, row_index.long())\n\ndef slice_axis(data, axis, begin, end):\n return th.narrow(data, axis, begin, end - begin)\n\ndef take(data, indices, dim):\n new_shape = data.shape[:dim] + indices.shape + data.shape[dim+1:]\n return th.index_select(data, dim, indices.view(-1)).view(new_shape)\n\ndef narrow_row(x, start, stop):\n return x[start:stop]\n\ndef index_add_inplace(data, row_idx, value):\n data.index_add_(0, row_idx, value)\n\ndef scatter_row(data, row_index, value):\n return data.index_copy(0, row_index.long(), value)\n\ndef scatter_row_inplace(data, row_index, value):\n data[row_index.long()] = value\n\ndef squeeze(input, dim):\n return th.squeeze(input, dim)\n\ndef unsqueeze(input, dim):\n return th.unsqueeze(input, dim)\n\ndef reshape(input, shape):\n return th.reshape(input ,shape)\n\ndef swapaxes(input, axis1, axis2):\n return th.transpose(input, axis1, axis2)\n\ndef zeros(shape, dtype, ctx):\n return th.zeros(shape, dtype=dtype, device=ctx)\n\ndef zeros_like(input):\n return th.zeros_like(input)\n\ndef ones(shape, dtype, ctx):\n return th.ones(shape, dtype=dtype, device=ctx)\n\ndef uniform(shape, dtype, ctx, low, high):\n return th.empty(shape, dtype=dtype, device=ctx).uniform_(low, high)\n\ndef randint(shape, dtype, ctx, low, high):\n return th.randint(low, high, shape, dtype=dtype, device=ctx)\n\ndef pad_packed_tensor(input, lengths, value, l_min=None):\n old_shape = input.shape\n device = input.device\n if not is_tensor(lengths):\n lengths = th.tensor(lengths, dtype=th.int64, device=device)\n else:\n lengths = lengths.to(device)\n max_len = as_scalar(lengths.max())\n\n if l_min is not None:\n max_len = builtins.max(max_len, l_min)\n\n batch_size = len(lengths)\n x = input.new(batch_size * max_len, *old_shape[1:])\n x.fill_(value)\n index = th.ones(len(input), dtype=th.int64, device=device)\n cum_lengths = th.cumsum(lengths, 0)\n index[cum_lengths[:-1]] += (max_len - lengths[:-1])\n index = th.cumsum(index, 0) - 1\n x[index] = input\n return x.view(batch_size, max_len, *old_shape[1:])\n\ndef pack_padded_tensor(input, lengths):\n max_len = input.shape[1]\n device = input.device\n if not is_tensor(lengths):\n lengths = th.tensor(lengths, dtype=th.int64, device=device)\n else:\n lengths = lengths.to(device)\n input = input.view(-1, *input.shape[2:])\n out_len = lengths.sum().item()\n index = th.ones(out_len, dtype=th.int64, device=device)\n cum_lengths = th.cumsum(lengths, 0)\n index[cum_lengths[:-1]] += (max_len - lengths[:-1])\n index = th.cumsum(index, 0) - 
1\n return input[index]\n\ndef boolean_mask(input, mask):\n if 'bool' not in str(mask.dtype):\n mask = th.tensor(mask, dtype=th.bool)\n return input[mask]\n\ndef equal(x, y):\n return x == y\n\ndef logical_not(input):\n return ~input\n\ndef logical_and(input1, input2):\n return input1 & input2\n\ndef clone(input):\n return input.clone()\n\ndef clamp(data, min_val, max_val):\n return th.clamp(data, min_val, max_val)\n\ndef replace_inf_with_zero(x):\n return th.masked_fill(x, th.isinf(x), 0)\n\ndef unique(input):\n if input.dtype == th.bool:\n input = input.type(th.int8)\n return th.unique(input)\n\ndef full_1d(length, fill_value, dtype, ctx):\n return th.full((length,), fill_value, dtype=dtype, device=ctx)\n\ndef nonzero_1d(input):\n x = th.nonzero(input, as_tuple=False).squeeze()\n return x if x.dim() == 1 else x.view(-1)\n\ndef sort_1d(input):\n return th.sort(input)\n\ndef arange(start, stop, dtype=th.int64, ctx=None):\n return th.arange(start, stop, dtype=dtype, device=ctx)\n\ndef rand_shuffle(arr):\n idx = th.randperm(len(arr))\n return arr[idx]\n\ndef zerocopy_to_dlpack(input):\n return dlpack.to_dlpack(input.contiguous())\n\ndef zerocopy_from_dlpack(dlpack_tensor):\n return dlpack.from_dlpack(dlpack_tensor)\n\ndef zerocopy_to_numpy(input):\n # NOTE: not zerocopy\n return asnumpy(input)\n\ndef zerocopy_from_numpy(np_array):\n return th.as_tensor(np_array)\n\ndef zerocopy_to_dgl_ndarray(data):\n return nd.from_dlpack(dlpack.to_dlpack(data.contiguous()))\n\ndef zerocopy_to_dgl_ndarray_for_write(input):\n return zerocopy_to_dgl_ndarray(input)\n\ndef zerocopy_from_dgl_ndarray(data):\n if data.shape == (0,):\n # NOTE: PyTorch v1.5 does not accept DLPack object representing empty CUDA tensor.\n # Related issue: https://github.com/pytorch/pytorch/issues/41182\n # The issue will be fixed in v1.6 and later.\n return th.tensor([], dtype=getattr(th, data.dtype),\n device=to_backend_ctx(data.ctx))\n else:\n return dlpack.from_dlpack(data.to_dlpack())\n\n\nclass BinaryReduce(th.autograd.Function):\n @staticmethod\n def forward(ctx, reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data, out_data,\n out_size, lhs_map, rhs_map, out_map):\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n feat_shape = K.infer_binary_feature_shape(binary_op, lhs_data_nd, rhs_data_nd)\n out_shape = feat_shape\n if binary_op == 'dot':\n out_shape = feat_shape[:-1]\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n K.binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, lhs_map[0], rhs_map[0], out_map[0])\n # normalize if mean reducer\n # NOTE(zihao): this is a temporary hack and we should have better solution in the future.\n if reducer == 'mean':\n degs = lhs_data.new_empty((out_data.shape[0],))\n degs_nd = zerocopy_to_dgl_ndarray(degs)\n if lhs != TargetCode.DST: # src or edge\n target = lhs\n n = lhs_data.shape[0]\n in_map = lhs_map[0]\n else: # rhs != TargetCode.DST\n target = rhs\n n = rhs_data.shape[0]\n in_map = rhs_map[0]\n in_ones = lhs_data.new_ones((n,))\n in_ones_nd = zerocopy_to_dgl_ndarray(in_ones)\n K.copy_reduce(\n 'sum', graph, target, in_ones_nd, degs_nd, in_map, out_map[0])\n # reshape\n degs = degs.reshape((out_data.shape[0],) + (1,) * (out_data.dim() - 1)).clamp(min=1)\n out_data = out_data / degs\n else:\n degs = None\n # save_for_backward can only save variables\n ctx.backward_cache = (reducer, binary_op, graph, lhs, rhs, lhs_map,\n rhs_map, out_map, feat_shape, 
degs)\n ctx.save_for_backward(lhs_data, rhs_data, out_data)\n return out_data\n\n @staticmethod\n def backward(ctx, grad_out):\n reducer, binary_op, graph, lhs, rhs, lhs_map, rhs_map, out_map, \\\n feat_shape, degs = ctx.backward_cache\n lhs_data, rhs_data, out_data = ctx.saved_tensors\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n grad_lhs = None\n grad_rhs = None\n if reducer == 'mean':\n grad_out = grad_out / degs\n grad_out_nd = zerocopy_to_dgl_ndarray(grad_out)\n if ctx.needs_input_grad[5]:\n grad_lhs = grad_out.new_empty((lhs_data_nd.shape[0],) + feat_shape)\n K.backward_lhs_binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, grad_out_nd, zerocopy_to_dgl_ndarray(grad_lhs),\n lhs_map[1], rhs_map[1], out_map[1])\n grad_lhs = _reduce_grad(grad_lhs, lhs_data_nd.shape)\n if ctx.needs_input_grad[6]:\n grad_rhs = grad_out.new_empty((rhs_data_nd.shape[0],) + feat_shape)\n K.backward_rhs_binary_op_reduce(\n reducer if reducer != 'mean' else 'sum',\n binary_op, graph, lhs, rhs, lhs_data_nd, rhs_data_nd,\n out_data_nd, grad_out_nd, zerocopy_to_dgl_ndarray(grad_rhs),\n lhs_map[1], rhs_map[1], out_map[1])\n grad_rhs = _reduce_grad(grad_rhs, rhs_data_nd.shape)\n\n return None, None, None, None, None, grad_lhs, grad_rhs, None, None, None, \\\n None, None\n\n\ndef binary_reduce(reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data,\n out_size, lhs_map=(None, None), rhs_map=(None, None), out_map=(None, None)):\n lhs_data_nd = zerocopy_to_dgl_ndarray(lhs_data)\n rhs_data_nd = zerocopy_to_dgl_ndarray(rhs_data)\n feat_shape = K.infer_binary_feature_shape(binary_op, lhs_data_nd, rhs_data_nd)\n\n out_shape = feat_shape\n if binary_op == 'dot':\n out_shape = feat_shape[:-1]\n out_data = lhs_data.new_empty((out_size,) + out_shape)\n\n return BinaryReduce.apply(\n reducer, binary_op, graph, lhs, rhs, lhs_data, rhs_data, out_data,\n out_size, lhs_map, rhs_map, out_map)\n\n\nclass CopyReduce(th.autograd.Function):\n @staticmethod\n def forward(ctx, reducer, graph, target, in_data, out_data, out_size, in_map,\n out_map):\n in_data_nd = zerocopy_to_dgl_ndarray(in_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n K.copy_reduce(\n reducer if reducer != 'mean' else 'sum',\n graph, target, in_data_nd, out_data_nd, in_map[0], out_map[0])\n # normalize if mean reducer\n # NOTE(zihao): this is a temporary hack and we should have better solution in the future.\n if reducer == 'mean':\n in_ones = in_data.new_ones((in_data.shape[0],))\n degs = in_data.new_empty((out_data.shape[0],))\n in_ones_nd = zerocopy_to_dgl_ndarray(in_ones)\n degs_nd = zerocopy_to_dgl_ndarray(degs)\n K.copy_reduce(\n 'sum', graph, target, in_ones_nd, degs_nd, in_map[0], out_map[0])\n # reshape\n degs = degs.reshape((out_data.shape[0],) + (1,) * (out_data.dim() - 1)).clamp(min=1)\n out_data = out_data / degs\n else:\n degs = None\n # save_for_backward can only save variables\n ctx.backward_cache = (reducer, graph, target, in_map, out_map, degs)\n ctx.save_for_backward(in_data, out_data)\n return out_data\n\n @staticmethod\n def backward(ctx, grad_out):\n reducer, graph, target, in_map, out_map, degs = ctx.backward_cache\n in_data, out_data = ctx.saved_tensors\n in_data_nd = zerocopy_to_dgl_ndarray(in_data)\n out_data_nd = zerocopy_to_dgl_ndarray(out_data)\n grad_in = None\n if reducer == 'mean':\n grad_out = grad_out / degs\n grad_out_nd = 
zerocopy_to_dgl_ndarray(grad_out)\n if ctx.needs_input_grad[3]:\n grad_in = grad_out.new_empty(in_data_nd.shape)\n K.backward_copy_reduce(\n reducer if reducer != 'mean' else 'sum',\n graph, target, in_data_nd, out_data_nd, grad_out_nd,\n zerocopy_to_dgl_ndarray(grad_in), in_map[1], out_map[1])\n return None, None, None, grad_in, None, None, None, None\n\n\ndef copy_reduce(reducer, graph, target, in_data, out_size, in_map=(None, None),\n out_map=(None, None)):\n out_data = in_data.new_empty((out_size,) + in_data.shape[1:])\n return CopyReduce.apply(reducer, graph, target, in_data, out_data, out_size, in_map, out_map)\n\n\ndef _reduce_grad(grad, shape):\n \"\"\"Reduce gradient on the broadcast dimension\n\n If there is broadcast in forward pass, gradients need to be reduced on\n broadcast dimension. This function checks the input tensor shape and\n gradient shape and perform the reduction.\n\n Parameters\n ----------\n grad: Tensor\n Gradient tensor\n shape: tuple\n Shape of input tensor\n\n Returns\n -------\n Tensor\n \"\"\"\n grad_shape = grad.shape[1:]\n in_shape = shape[1:]\n if in_shape == grad_shape:\n # no need to reduce\n return grad\n num_to_squeeze = len(grad_shape) - len(in_shape)\n # pad inshape\n in_shape = (1,) * num_to_squeeze + in_shape\n reduce_idx = th.nonzero(th.tensor(grad_shape) - th.tensor(in_shape), as_tuple=False)\n reduce_idx += 1 # skip batch dim\n grad = grad.sum(dim=tuple(reduce_idx), keepdim=True)\n return grad.view(shape)\n\ndef sync():\n # Pytorch performs computation synchronously, so no need for synchronization.\n pass\n\ndef attach_grad(x):\n if x.grad is not None:\n x.grad.zero_()\n return x\n else:\n return x.requires_grad_()\n\ndef backward(x, head_gradient=None):\n if head_gradient is not None and head_gradient.shape[0] == 1 and len(head_gradient.shape) == 1:\n # Fix for torch 1.3.1\n head_gradient = th.tensor(head_gradient.item()).to(head_gradient.device)\n x.backward(head_gradient)\n\ndef grad(x):\n return x.grad\n\ndef is_no_grad(x):\n return x.grad is None or (x.grad == 0).all()\n\ndef is_recording():\n return th.is_grad_enabled()\n\nclass record_grad(object):\n def __init__(self):\n pass\n\n def __enter__(self):\n pass\n\n def __exit__(self, exc_type, exc_value, exc_traceback):\n pass\n\nno_grad = th.no_grad\n",
"path": "python/dgl/backend/pytorch/tensor.py"
}
] | diff --git a/python/dgl/backend/pytorch/tensor.py b/python/dgl/backend/pytorch/tensor.py
index 7c99a31847ea..34284cd9ec7a 100644
--- a/python/dgl/backend/pytorch/tensor.py
+++ b/python/dgl/backend/pytorch/tensor.py
@@ -86,7 +86,7 @@ def device_type(ctx):
def device_id(ctx):
ctx = th.device(ctx)
if ctx.index is None:
- return 0
+ return 0 if ctx.type == 'cpu' else th.cuda.current_device()
else:
return ctx.index
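
For reference, a standalone sketch of the patched `device_id` (same logic as the hunk above; it assumes PyTorch is importable and, for the CUDA branch, that CUDA is initialized):

```python
import torch as th

def device_id(ctx):
    """Return the index of a torch device spec.

    For an index-less CUDA device such as th.device('cuda'), fall back to the
    currently selected GPU instead of hard-coding 0; CPU still maps to 0.
    """
    ctx = th.device(ctx)
    if ctx.index is None:
        return 0 if ctx.type == 'cpu' else th.cuda.current_device()
    return ctx.index

print(device_id('cpu'))     # 0
print(device_id('cuda:1'))  # 1 (explicit index is returned unchanged)
# device_id('cuda') now resolves to th.cuda.current_device() rather than always 0.
```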
|
facebookresearch__ParlAI-581 | Can we keep an MTurk task outside parlai/mturk/tasks?
Hi @JackUrb, I have a few questions regarding the MTurk evaluation:
1. This link (http://parl.ai/static/docs/mturk.html#running-a-task) says that
> to run an MTurk task, first ensure that the task directory is in `parlai/mturk/tasks/`.
Is this by design? I tried to keep my task in another directory (outside the root parlai directory) and to import parlai as a package, but that doesn't seem to work. Basically, I am trying to use ParlAI as a dependency of one of my projects for human-in-the-loop evaluation.
2. How easy or hard would it be to support keeping an MTurk task anywhere?
| [
{
"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\n\nfrom setuptools import setup, find_packages\nimport sys\n\nif sys.version_info < (3,):\n sys.exit('Sorry, Python3 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n readme = f.read()\n\nwith open('LICENSE') as f:\n license = f.read()\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\nsetup(\n name='parlai',\n version='0.1.0',\n description='Unified API for accessing dialog datasets.',\n long_description=readme,\n url='http://parl.ai/',\n license=license,\n packages=find_packages(exclude=(\n 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),\n install_requires=reqs.strip().split('\\n'),\n)\n",
"path": "setup.py"
}
] | [
{
"content": "# Copyright (c) 2017-present, Facebook, Inc.\n# All rights reserved.\n# This source code is licensed under the BSD-style license found in the\n# LICENSE file in the root directory of this source tree. An additional grant\n# of patent rights can be found in the PATENTS file in the same directory.\n\n\nfrom setuptools import setup, find_packages\nimport sys\n\nif sys.version_info < (3,):\n sys.exit('Sorry, Python3 is required for ParlAI.')\n\nwith open('README.md', encoding=\"utf8\") as f:\n readme = f.read()\n\nwith open('LICENSE') as f:\n license = f.read()\n\nwith open('requirements.txt') as f:\n reqs = f.read()\n\nsetup(\n name='parlai',\n version='0.1.0',\n description='Unified API for accessing dialog datasets.',\n long_description=readme,\n url='http://parl.ai/',\n license=license,\n packages=find_packages(exclude=(\n 'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),\n install_requires=reqs.strip().split('\\n'),\n include_package_data=True,\n)\n",
"path": "setup.py"
}
] | diff --git a/MANIFEST.in b/MANIFEST.in
new file mode 100644
index 00000000000..fa9ea114284
--- /dev/null
+++ b/MANIFEST.in
@@ -0,0 +1,3 @@
+include parlai/mturk/core/server/html/*
+include parlai/mturk/core/server/server.js
+include parlai/mturk/core/server/package.json
diff --git a/setup.py b/setup.py
index 4e895050e87..936b86aae45 100644
--- a/setup.py
+++ b/setup.py
@@ -30,4 +30,5 @@
packages=find_packages(exclude=(
'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),
install_requires=reqs.strip().split('\n'),
+ include_package_data=True,
)
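
As a side note, a minimal sketch of why this works: with `include_package_data=True`, setuptools ships the files matched by `MANIFEST.in` (the MTurk server assets listed above) inside the installed `parlai` package, so a task kept outside the ParlAI source tree can still find them after installation.

```python
# Sketch of the packaging fix: MANIFEST.in declares the non-Python assets
# (html templates, server.js, package.json) and include_package_data=True
# tells setuptools to install them together with the parlai package.
from setuptools import setup, find_packages

setup(
    name='parlai',
    packages=find_packages(exclude=(
        'data', 'docs', 'downloads', 'examples', 'logs', 'tests')),
    include_package_data=True,
)
```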
|
netbox-community__netbox-2144 | PUTs to Site Endpoint Require Value for time_zone
### Issue type
[ ] Feature request <!-- An enhancement of existing functionality -->
[X] Bug report <!-- Unexpected or erroneous behavior -->
[ ] Documentation <!-- A modification to the documentation -->
### Environment
* Python version: 2.6.7
* NetBox version: 2.4-dev, but includes previous versions as well.
### Description
More details over at digitalocean/pynetbox#59, but when the `time_zone` field is present and null we get an error saying it can't be null. Omitting the field doesn't return an error.
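
For what it's worth, a minimal self-contained DRF sketch of the distinction involved (the serializer and field below are simplified stand-ins for NetBox's `WritableSiteSerializer` and custom `TimeZoneField`): `required=False` only lets a client omit the key, while `allow_null=True` additionally accepts an explicit `null` in the payload.

```python
import django
from django.conf import settings

settings.configure()  # minimal config so DRF serializers can run outside a project
django.setup()

from rest_framework import serializers

class SiteSketchSerializer(serializers.Serializer):
    # Stand-in for the real TimeZoneField: required=False lets clients omit the
    # key, but only allow_null=True makes an explicit null pass validation.
    name = serializers.CharField()
    time_zone = serializers.CharField(required=False, allow_null=True)

ser = SiteSketchSerializer(data={'name': 'HQ', 'time_zone': None})
assert ser.is_valid(), ser.errors  # valid only because allow_null=True is set
```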
| [
{
"content": "from __future__ import unicode_literals\n\nfrom collections import OrderedDict\n\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom circuits.models import Circuit, CircuitTermination\nfrom dcim.constants import (\n CONNECTION_STATUS_CHOICES, DEVICE_STATUS_CHOICES, IFACE_FF_CHOICES, IFACE_MODE_CHOICES, IFACE_ORDERING_CHOICES,\n RACK_FACE_CHOICES, RACK_TYPE_CHOICES, RACK_WIDTH_CHOICES, SITE_STATUS_CHOICES, SUBDEVICE_ROLE_CHOICES,\n)\nfrom dcim.models import (\n ConsolePort, ConsolePortTemplate, ConsoleServerPort, ConsoleServerPortTemplate, Device, DeviceBay,\n DeviceBayTemplate, DeviceType, DeviceRole, Interface, InterfaceConnection, InterfaceTemplate, Manufacturer,\n InventoryItem, Platform, PowerOutlet, PowerOutletTemplate, PowerPort, PowerPortTemplate, Rack, RackGroup,\n RackReservation, RackRole, Region, Site, VirtualChassis,\n)\nfrom extras.api.customfields import CustomFieldModelSerializer\nfrom ipam.models import IPAddress, VLAN\nfrom tenancy.api.serializers import NestedTenantSerializer\nfrom users.api.serializers import NestedUserSerializer\nfrom utilities.api import ChoiceFieldSerializer, TimeZoneField, ValidatedModelSerializer\nfrom virtualization.models import Cluster\n\n\n#\n# Regions\n#\n\nclass NestedRegionSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:region-detail')\n\n class Meta:\n model = Region\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass RegionSerializer(serializers.ModelSerializer):\n parent = NestedRegionSerializer()\n\n class Meta:\n model = Region\n fields = ['id', 'name', 'slug', 'parent']\n\n\nclass WritableRegionSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Region\n fields = ['id', 'name', 'slug', 'parent']\n\n\n#\n# Sites\n#\n\nclass SiteSerializer(CustomFieldModelSerializer):\n status = ChoiceFieldSerializer(choices=SITE_STATUS_CHOICES)\n region = NestedRegionSerializer()\n tenant = NestedTenantSerializer()\n time_zone = TimeZoneField(required=False)\n\n class Meta:\n model = Site\n fields = [\n 'id', 'name', 'slug', 'status', 'region', 'tenant', 'facility', 'asn', 'time_zone', 'description',\n 'physical_address', 'shipping_address', 'contact_name', 'contact_phone', 'contact_email', 'comments',\n 'custom_fields', 'created', 'last_updated', 'count_prefixes', 'count_vlans', 'count_racks', 'count_devices',\n 'count_circuits',\n ]\n\n\nclass NestedSiteSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:site-detail')\n\n class Meta:\n model = Site\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass WritableSiteSerializer(CustomFieldModelSerializer):\n time_zone = TimeZoneField(required=False)\n\n class Meta:\n model = Site\n fields = [\n 'id', 'name', 'slug', 'status', 'region', 'tenant', 'facility', 'asn', 'time_zone', 'description',\n 'physical_address', 'shipping_address', 'contact_name', 'contact_phone', 'contact_email', 'comments',\n 'custom_fields', 'created', 'last_updated',\n ]\n\n\n#\n# Rack groups\n#\n\nclass RackGroupSerializer(serializers.ModelSerializer):\n site = NestedSiteSerializer()\n\n class Meta:\n model = RackGroup\n fields = ['id', 'name', 'slug', 'site']\n\n\nclass NestedRackGroupSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rackgroup-detail')\n\n class Meta:\n model = RackGroup\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass 
WritableRackGroupSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackGroup\n fields = ['id', 'name', 'slug', 'site']\n\n\n#\n# Rack roles\n#\n\nclass RackRoleSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackRole\n fields = ['id', 'name', 'slug', 'color']\n\n\nclass NestedRackRoleSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rackrole-detail')\n\n class Meta:\n model = RackRole\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Racks\n#\n\nclass RackSerializer(CustomFieldModelSerializer):\n site = NestedSiteSerializer()\n group = NestedRackGroupSerializer()\n tenant = NestedTenantSerializer()\n role = NestedRackRoleSerializer()\n type = ChoiceFieldSerializer(choices=RACK_TYPE_CHOICES)\n width = ChoiceFieldSerializer(choices=RACK_WIDTH_CHOICES)\n\n class Meta:\n model = Rack\n fields = [\n 'id', 'name', 'facility_id', 'display_name', 'site', 'group', 'tenant', 'role', 'serial', 'type', 'width',\n 'u_height', 'desc_units', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n\n\nclass NestedRackSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rack-detail')\n\n class Meta:\n model = Rack\n fields = ['id', 'url', 'name', 'display_name']\n\n\nclass WritableRackSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = Rack\n fields = [\n 'id', 'name', 'facility_id', 'site', 'group', 'tenant', 'role', 'serial', 'type', 'width', 'u_height',\n 'desc_units', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n # Omit the UniqueTogetherValidator that would be automatically added to validate (site, facility_id). This\n # prevents facility_id from being interpreted as a required field.\n validators = [\n UniqueTogetherValidator(queryset=Rack.objects.all(), fields=('site', 'name'))\n ]\n\n def validate(self, data):\n\n # Validate uniqueness of (site, facility_id) since we omitted the automatically-created validator from Meta.\n if data.get('facility_id', None):\n validator = UniqueTogetherValidator(queryset=Rack.objects.all(), fields=('site', 'facility_id'))\n validator.set_context(self)\n validator(data)\n\n # Enforce model validation\n super(WritableRackSerializer, self).validate(data)\n\n return data\n\n\n#\n# Rack units\n#\n\nclass NestedDeviceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:device-detail')\n\n class Meta:\n model = Device\n fields = ['id', 'url', 'name', 'display_name']\n\n\nclass RackUnitSerializer(serializers.Serializer):\n \"\"\"\n A rack unit is an abstraction formed by the set (rack, position, face); it does not exist as a row in the database.\n \"\"\"\n id = serializers.IntegerField(read_only=True)\n name = serializers.CharField(read_only=True)\n face = serializers.IntegerField(read_only=True)\n device = NestedDeviceSerializer(read_only=True)\n\n\n#\n# Rack reservations\n#\n\nclass RackReservationSerializer(serializers.ModelSerializer):\n rack = NestedRackSerializer()\n user = NestedUserSerializer()\n tenant = NestedTenantSerializer()\n\n class Meta:\n model = RackReservation\n fields = ['id', 'rack', 'units', 'created', 'user', 'tenant', 'description']\n\n\nclass WritableRackReservationSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackReservation\n fields = ['id', 'rack', 'units', 'user', 'tenant', 'description']\n\n\n#\n# Manufacturers\n#\n\nclass ManufacturerSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = 
Manufacturer\n fields = ['id', 'name', 'slug']\n\n\nclass NestedManufacturerSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:manufacturer-detail')\n\n class Meta:\n model = Manufacturer\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Device types\n#\n\nclass DeviceTypeSerializer(CustomFieldModelSerializer):\n manufacturer = NestedManufacturerSerializer()\n interface_ordering = ChoiceFieldSerializer(choices=IFACE_ORDERING_CHOICES)\n subdevice_role = ChoiceFieldSerializer(choices=SUBDEVICE_ROLE_CHOICES)\n instance_count = serializers.IntegerField(source='instances.count', read_only=True)\n\n class Meta:\n model = DeviceType\n fields = [\n 'id', 'manufacturer', 'model', 'slug', 'part_number', 'u_height', 'is_full_depth', 'interface_ordering',\n 'is_console_server', 'is_pdu', 'is_network_device', 'subdevice_role', 'comments', 'custom_fields',\n 'instance_count',\n ]\n\n\nclass NestedDeviceTypeSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicetype-detail')\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = DeviceType\n fields = ['id', 'url', 'manufacturer', 'model', 'slug']\n\n\nclass WritableDeviceTypeSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = DeviceType\n fields = [\n 'id', 'manufacturer', 'model', 'slug', 'part_number', 'u_height', 'is_full_depth', 'interface_ordering',\n 'is_console_server', 'is_pdu', 'is_network_device', 'subdevice_role', 'comments', 'custom_fields',\n ]\n\n\n#\n# Console port templates\n#\n\nclass ConsolePortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = ConsolePortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableConsolePortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsolePortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Console server port templates\n#\n\nclass ConsoleServerPortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = ConsoleServerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableConsoleServerPortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsoleServerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Power port templates\n#\n\nclass PowerPortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = PowerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritablePowerPortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Power outlet templates\n#\n\nclass PowerOutletTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = PowerOutletTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritablePowerOutletTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerOutletTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Interface templates\n#\n\nclass InterfaceTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n\n class Meta:\n model = InterfaceTemplate\n fields = ['id', 'device_type', 'name', 'form_factor', 'mgmt_only']\n\n\nclass 
WritableInterfaceTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = InterfaceTemplate\n fields = ['id', 'device_type', 'name', 'form_factor', 'mgmt_only']\n\n\n#\n# Device bay templates\n#\n\nclass DeviceBayTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = DeviceBayTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableDeviceBayTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceBayTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Device roles\n#\n\nclass DeviceRoleSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceRole\n fields = ['id', 'name', 'slug', 'color', 'vm_role']\n\n\nclass NestedDeviceRoleSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicerole-detail')\n\n class Meta:\n model = DeviceRole\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Platforms\n#\n\nclass PlatformSerializer(serializers.ModelSerializer):\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = Platform\n fields = ['id', 'name', 'slug', 'manufacturer', 'napalm_driver', 'rpc_client']\n\n\nclass NestedPlatformSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:platform-detail')\n\n class Meta:\n model = Platform\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass WritablePlatformSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Platform\n fields = ['id', 'name', 'slug', 'manufacturer', 'napalm_driver', 'rpc_client']\n\n\n#\n# Devices\n#\n\n# Cannot import ipam.api.NestedIPAddressSerializer due to circular dependency\nclass DeviceIPAddressSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='ipam-api:ipaddress-detail')\n\n class Meta:\n model = IPAddress\n fields = ['id', 'url', 'family', 'address']\n\n\n# Cannot import virtualization.api.NestedClusterSerializer due to circular dependency\nclass NestedClusterSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='virtualization-api:cluster-detail')\n\n class Meta:\n model = Cluster\n fields = ['id', 'url', 'name']\n\n\n# Cannot import NestedVirtualChassisSerializer due to circular dependency\nclass DeviceVirtualChassisSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:virtualchassis-detail')\n master = NestedDeviceSerializer()\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'url', 'master']\n\n\nclass DeviceSerializer(CustomFieldModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n device_role = NestedDeviceRoleSerializer()\n tenant = NestedTenantSerializer()\n platform = NestedPlatformSerializer()\n site = NestedSiteSerializer()\n rack = NestedRackSerializer()\n face = ChoiceFieldSerializer(choices=RACK_FACE_CHOICES)\n status = ChoiceFieldSerializer(choices=DEVICE_STATUS_CHOICES)\n primary_ip = DeviceIPAddressSerializer()\n primary_ip4 = DeviceIPAddressSerializer()\n primary_ip6 = DeviceIPAddressSerializer()\n parent_device = serializers.SerializerMethodField()\n cluster = NestedClusterSerializer()\n virtual_chassis = DeviceVirtualChassisSerializer()\n\n class Meta:\n model = Device\n fields = [\n 'id', 'name', 'display_name', 'device_type', 'device_role', 'tenant', 'platform', 'serial', 'asset_tag',\n 'site', 'rack', 'position', 'face', 'parent_device', 'status', 'primary_ip', 'primary_ip4', 
'primary_ip6',\n 'cluster', 'virtual_chassis', 'vc_position', 'vc_priority', 'comments', 'custom_fields', 'created',\n 'last_updated',\n ]\n\n def get_parent_device(self, obj):\n try:\n device_bay = obj.parent_bay\n except DeviceBay.DoesNotExist:\n return None\n context = {'request': self.context['request']}\n data = NestedDeviceSerializer(instance=device_bay.device, context=context).data\n data['device_bay'] = NestedDeviceBaySerializer(instance=device_bay, context=context).data\n return data\n\n\nclass WritableDeviceSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = Device\n fields = [\n 'id', 'name', 'device_type', 'device_role', 'tenant', 'platform', 'serial', 'asset_tag', 'site', 'rack',\n 'position', 'face', 'status', 'primary_ip4', 'primary_ip6', 'cluster', 'virtual_chassis', 'vc_position',\n 'vc_priority', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n validators = []\n\n def validate(self, data):\n\n # Validate uniqueness of (rack, position, face) since we omitted the automatically-created validator from Meta.\n if data.get('rack') and data.get('position') and data.get('face'):\n validator = UniqueTogetherValidator(queryset=Device.objects.all(), fields=('rack', 'position', 'face'))\n validator.set_context(self)\n validator(data)\n\n # Enforce model validation\n super(WritableDeviceSerializer, self).validate(data)\n\n return data\n\n\n#\n# Console server ports\n#\n\nclass ConsoleServerPortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n\n class Meta:\n model = ConsoleServerPort\n fields = ['id', 'device', 'name', 'connected_console']\n read_only_fields = ['connected_console']\n\n\nclass WritableConsoleServerPortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsoleServerPort\n fields = ['id', 'device', 'name']\n\n\n#\n# Console ports\n#\n\nclass ConsolePortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n cs_port = ConsoleServerPortSerializer()\n\n class Meta:\n model = ConsolePort\n fields = ['id', 'device', 'name', 'cs_port', 'connection_status']\n\n\nclass WritableConsolePortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsolePort\n fields = ['id', 'device', 'name', 'cs_port', 'connection_status']\n\n\n#\n# Power outlets\n#\n\nclass PowerOutletSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n\n class Meta:\n model = PowerOutlet\n fields = ['id', 'device', 'name', 'connected_port']\n read_only_fields = ['connected_port']\n\n\nclass WritablePowerOutletSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerOutlet\n fields = ['id', 'device', 'name']\n\n\n#\n# Power ports\n#\n\nclass PowerPortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n power_outlet = PowerOutletSerializer()\n\n class Meta:\n model = PowerPort\n fields = ['id', 'device', 'name', 'power_outlet', 'connection_status']\n\n\nclass WritablePowerPortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerPort\n fields = ['id', 'device', 'name', 'power_outlet', 'connection_status']\n\n\n#\n# Interfaces\n#\n\nclass NestedInterfaceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interface-detail')\n\n class Meta:\n model = Interface\n fields = ['id', 'url', 'name']\n\n\nclass InterfaceNestedCircuitSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='circuits-api:circuit-detail')\n\n class Meta:\n model = Circuit\n 
fields = ['id', 'url', 'cid']\n\n\nclass InterfaceCircuitTerminationSerializer(serializers.ModelSerializer):\n circuit = InterfaceNestedCircuitSerializer()\n\n class Meta:\n model = CircuitTermination\n fields = [\n 'id', 'circuit', 'term_side', 'port_speed', 'upstream_speed', 'xconnect_id', 'pp_info',\n ]\n\n\n# Cannot import ipam.api.NestedVLANSerializer due to circular dependency\nclass InterfaceVLANSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='ipam-api:vlan-detail')\n\n class Meta:\n model = VLAN\n fields = ['id', 'url', 'vid', 'name', 'display_name']\n\n\nclass InterfaceSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n lag = NestedInterfaceSerializer()\n is_connected = serializers.SerializerMethodField(read_only=True)\n interface_connection = serializers.SerializerMethodField(read_only=True)\n circuit_termination = InterfaceCircuitTerminationSerializer()\n untagged_vlan = InterfaceVLANSerializer()\n mode = ChoiceFieldSerializer(choices=IFACE_MODE_CHOICES)\n tagged_vlans = InterfaceVLANSerializer(many=True)\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only', 'description',\n 'is_connected', 'interface_connection', 'circuit_termination', 'mode', 'untagged_vlan', 'tagged_vlans',\n ]\n\n def get_is_connected(self, obj):\n \"\"\"\n Return True if the interface has a connected interface or circuit termination.\n \"\"\"\n if obj.connection:\n return True\n try:\n circuit_termination = obj.circuit_termination\n return True\n except CircuitTermination.DoesNotExist:\n pass\n return False\n\n def get_interface_connection(self, obj):\n if obj.connection:\n return OrderedDict((\n ('interface', PeerInterfaceSerializer(obj.connected_interface, context=self.context).data),\n ('status', obj.connection.connection_status),\n ))\n return None\n\n\nclass PeerInterfaceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interface-detail')\n device = NestedDeviceSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n lag = NestedInterfaceSerializer()\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'url', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only',\n 'description',\n ]\n\n\nclass WritableInterfaceSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only', 'description',\n 'mode', 'untagged_vlan', 'tagged_vlans',\n ]\n\n def validate(self, data):\n\n # All associated VLANs be global or assigned to the parent device's site.\n device = self.instance.device if self.instance else data.get('device')\n untagged_vlan = data.get('untagged_vlan')\n if untagged_vlan and untagged_vlan.site not in [device.site, None]:\n raise serializers.ValidationError({\n 'untagged_vlan': \"VLAN {} must belong to the same site as the interface's parent device, or it must be \"\n \"global.\".format(untagged_vlan)\n })\n for vlan in data.get('tagged_vlans', []):\n if vlan.site not in [device.site, None]:\n raise serializers.ValidationError({\n 'tagged_vlans': \"VLAN {} must belong to the same site as the interface's parent device, or it must \"\n \"be global.\".format(vlan)\n })\n\n return super(WritableInterfaceSerializer, self).validate(data)\n\n\n#\n# Device 
bays\n#\n\nclass DeviceBaySerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n installed_device = NestedDeviceSerializer()\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'device', 'name', 'installed_device']\n\n\nclass NestedDeviceBaySerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicebay-detail')\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'url', 'name']\n\n\nclass WritableDeviceBaySerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'device', 'name', 'installed_device']\n\n\n#\n# Inventory items\n#\n\nclass InventoryItemSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = InventoryItem\n fields = [\n 'id', 'device', 'parent', 'name', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'discovered',\n 'description',\n ]\n\n\nclass WritableInventoryItemSerializer(ValidatedModelSerializer):\n # Provide a default value to satisfy UniqueTogetherValidator\n parent = serializers.PrimaryKeyRelatedField(queryset=InventoryItem.objects.all(), allow_null=True, default=None)\n\n class Meta:\n model = InventoryItem\n fields = [\n 'id', 'device', 'parent', 'name', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'discovered',\n 'description',\n ]\n\n\n#\n# Interface connections\n#\n\nclass InterfaceConnectionSerializer(serializers.ModelSerializer):\n interface_a = PeerInterfaceSerializer()\n interface_b = PeerInterfaceSerializer()\n connection_status = ChoiceFieldSerializer(choices=CONNECTION_STATUS_CHOICES)\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'interface_a', 'interface_b', 'connection_status']\n\n\nclass NestedInterfaceConnectionSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interfaceconnection-detail')\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'url', 'connection_status']\n\n\nclass WritableInterfaceConnectionSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'interface_a', 'interface_b', 'connection_status']\n\n\n#\n# Virtual chassis\n#\n\nclass VirtualChassisSerializer(serializers.ModelSerializer):\n master = NestedDeviceSerializer()\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'master', 'domain']\n\n\nclass NestedVirtualChassisSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:virtualchassis-detail')\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'url']\n\n\nclass WritableVirtualChassisSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'master', 'domain']\n",
"path": "netbox/dcim/api/serializers.py"
}
] | [
{
"content": "from __future__ import unicode_literals\n\nfrom collections import OrderedDict\n\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueTogetherValidator\n\nfrom circuits.models import Circuit, CircuitTermination\nfrom dcim.constants import (\n CONNECTION_STATUS_CHOICES, DEVICE_STATUS_CHOICES, IFACE_FF_CHOICES, IFACE_MODE_CHOICES, IFACE_ORDERING_CHOICES,\n RACK_FACE_CHOICES, RACK_TYPE_CHOICES, RACK_WIDTH_CHOICES, SITE_STATUS_CHOICES, SUBDEVICE_ROLE_CHOICES,\n)\nfrom dcim.models import (\n ConsolePort, ConsolePortTemplate, ConsoleServerPort, ConsoleServerPortTemplate, Device, DeviceBay,\n DeviceBayTemplate, DeviceType, DeviceRole, Interface, InterfaceConnection, InterfaceTemplate, Manufacturer,\n InventoryItem, Platform, PowerOutlet, PowerOutletTemplate, PowerPort, PowerPortTemplate, Rack, RackGroup,\n RackReservation, RackRole, Region, Site, VirtualChassis,\n)\nfrom extras.api.customfields import CustomFieldModelSerializer\nfrom ipam.models import IPAddress, VLAN\nfrom tenancy.api.serializers import NestedTenantSerializer\nfrom users.api.serializers import NestedUserSerializer\nfrom utilities.api import ChoiceFieldSerializer, TimeZoneField, ValidatedModelSerializer\nfrom virtualization.models import Cluster\n\n\n#\n# Regions\n#\n\nclass NestedRegionSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:region-detail')\n\n class Meta:\n model = Region\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass RegionSerializer(serializers.ModelSerializer):\n parent = NestedRegionSerializer()\n\n class Meta:\n model = Region\n fields = ['id', 'name', 'slug', 'parent']\n\n\nclass WritableRegionSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Region\n fields = ['id', 'name', 'slug', 'parent']\n\n\n#\n# Sites\n#\n\nclass SiteSerializer(CustomFieldModelSerializer):\n status = ChoiceFieldSerializer(choices=SITE_STATUS_CHOICES)\n region = NestedRegionSerializer()\n tenant = NestedTenantSerializer()\n time_zone = TimeZoneField(required=False)\n\n class Meta:\n model = Site\n fields = [\n 'id', 'name', 'slug', 'status', 'region', 'tenant', 'facility', 'asn', 'time_zone', 'description',\n 'physical_address', 'shipping_address', 'contact_name', 'contact_phone', 'contact_email', 'comments',\n 'custom_fields', 'created', 'last_updated', 'count_prefixes', 'count_vlans', 'count_racks', 'count_devices',\n 'count_circuits',\n ]\n\n\nclass NestedSiteSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:site-detail')\n\n class Meta:\n model = Site\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass WritableSiteSerializer(CustomFieldModelSerializer):\n time_zone = TimeZoneField(required=False, allow_null=True)\n\n class Meta:\n model = Site\n fields = [\n 'id', 'name', 'slug', 'status', 'region', 'tenant', 'facility', 'asn', 'time_zone', 'description',\n 'physical_address', 'shipping_address', 'contact_name', 'contact_phone', 'contact_email', 'comments',\n 'custom_fields', 'created', 'last_updated',\n ]\n\n\n#\n# Rack groups\n#\n\nclass RackGroupSerializer(serializers.ModelSerializer):\n site = NestedSiteSerializer()\n\n class Meta:\n model = RackGroup\n fields = ['id', 'name', 'slug', 'site']\n\n\nclass NestedRackGroupSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rackgroup-detail')\n\n class Meta:\n model = RackGroup\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass 
WritableRackGroupSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackGroup\n fields = ['id', 'name', 'slug', 'site']\n\n\n#\n# Rack roles\n#\n\nclass RackRoleSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackRole\n fields = ['id', 'name', 'slug', 'color']\n\n\nclass NestedRackRoleSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rackrole-detail')\n\n class Meta:\n model = RackRole\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Racks\n#\n\nclass RackSerializer(CustomFieldModelSerializer):\n site = NestedSiteSerializer()\n group = NestedRackGroupSerializer()\n tenant = NestedTenantSerializer()\n role = NestedRackRoleSerializer()\n type = ChoiceFieldSerializer(choices=RACK_TYPE_CHOICES)\n width = ChoiceFieldSerializer(choices=RACK_WIDTH_CHOICES)\n\n class Meta:\n model = Rack\n fields = [\n 'id', 'name', 'facility_id', 'display_name', 'site', 'group', 'tenant', 'role', 'serial', 'type', 'width',\n 'u_height', 'desc_units', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n\n\nclass NestedRackSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:rack-detail')\n\n class Meta:\n model = Rack\n fields = ['id', 'url', 'name', 'display_name']\n\n\nclass WritableRackSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = Rack\n fields = [\n 'id', 'name', 'facility_id', 'site', 'group', 'tenant', 'role', 'serial', 'type', 'width', 'u_height',\n 'desc_units', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n # Omit the UniqueTogetherValidator that would be automatically added to validate (site, facility_id). This\n # prevents facility_id from being interpreted as a required field.\n validators = [\n UniqueTogetherValidator(queryset=Rack.objects.all(), fields=('site', 'name'))\n ]\n\n def validate(self, data):\n\n # Validate uniqueness of (site, facility_id) since we omitted the automatically-created validator from Meta.\n if data.get('facility_id', None):\n validator = UniqueTogetherValidator(queryset=Rack.objects.all(), fields=('site', 'facility_id'))\n validator.set_context(self)\n validator(data)\n\n # Enforce model validation\n super(WritableRackSerializer, self).validate(data)\n\n return data\n\n\n#\n# Rack units\n#\n\nclass NestedDeviceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:device-detail')\n\n class Meta:\n model = Device\n fields = ['id', 'url', 'name', 'display_name']\n\n\nclass RackUnitSerializer(serializers.Serializer):\n \"\"\"\n A rack unit is an abstraction formed by the set (rack, position, face); it does not exist as a row in the database.\n \"\"\"\n id = serializers.IntegerField(read_only=True)\n name = serializers.CharField(read_only=True)\n face = serializers.IntegerField(read_only=True)\n device = NestedDeviceSerializer(read_only=True)\n\n\n#\n# Rack reservations\n#\n\nclass RackReservationSerializer(serializers.ModelSerializer):\n rack = NestedRackSerializer()\n user = NestedUserSerializer()\n tenant = NestedTenantSerializer()\n\n class Meta:\n model = RackReservation\n fields = ['id', 'rack', 'units', 'created', 'user', 'tenant', 'description']\n\n\nclass WritableRackReservationSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = RackReservation\n fields = ['id', 'rack', 'units', 'user', 'tenant', 'description']\n\n\n#\n# Manufacturers\n#\n\nclass ManufacturerSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = 
Manufacturer\n fields = ['id', 'name', 'slug']\n\n\nclass NestedManufacturerSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:manufacturer-detail')\n\n class Meta:\n model = Manufacturer\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Device types\n#\n\nclass DeviceTypeSerializer(CustomFieldModelSerializer):\n manufacturer = NestedManufacturerSerializer()\n interface_ordering = ChoiceFieldSerializer(choices=IFACE_ORDERING_CHOICES)\n subdevice_role = ChoiceFieldSerializer(choices=SUBDEVICE_ROLE_CHOICES)\n instance_count = serializers.IntegerField(source='instances.count', read_only=True)\n\n class Meta:\n model = DeviceType\n fields = [\n 'id', 'manufacturer', 'model', 'slug', 'part_number', 'u_height', 'is_full_depth', 'interface_ordering',\n 'is_console_server', 'is_pdu', 'is_network_device', 'subdevice_role', 'comments', 'custom_fields',\n 'instance_count',\n ]\n\n\nclass NestedDeviceTypeSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicetype-detail')\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = DeviceType\n fields = ['id', 'url', 'manufacturer', 'model', 'slug']\n\n\nclass WritableDeviceTypeSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = DeviceType\n fields = [\n 'id', 'manufacturer', 'model', 'slug', 'part_number', 'u_height', 'is_full_depth', 'interface_ordering',\n 'is_console_server', 'is_pdu', 'is_network_device', 'subdevice_role', 'comments', 'custom_fields',\n ]\n\n\n#\n# Console port templates\n#\n\nclass ConsolePortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = ConsolePortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableConsolePortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsolePortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Console server port templates\n#\n\nclass ConsoleServerPortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = ConsoleServerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableConsoleServerPortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsoleServerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Power port templates\n#\n\nclass PowerPortTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = PowerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritablePowerPortTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerPortTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Power outlet templates\n#\n\nclass PowerOutletTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = PowerOutletTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritablePowerOutletTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerOutletTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Interface templates\n#\n\nclass InterfaceTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n\n class Meta:\n model = InterfaceTemplate\n fields = ['id', 'device_type', 'name', 'form_factor', 'mgmt_only']\n\n\nclass 
WritableInterfaceTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = InterfaceTemplate\n fields = ['id', 'device_type', 'name', 'form_factor', 'mgmt_only']\n\n\n#\n# Device bay templates\n#\n\nclass DeviceBayTemplateSerializer(serializers.ModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n\n class Meta:\n model = DeviceBayTemplate\n fields = ['id', 'device_type', 'name']\n\n\nclass WritableDeviceBayTemplateSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceBayTemplate\n fields = ['id', 'device_type', 'name']\n\n\n#\n# Device roles\n#\n\nclass DeviceRoleSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceRole\n fields = ['id', 'name', 'slug', 'color', 'vm_role']\n\n\nclass NestedDeviceRoleSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicerole-detail')\n\n class Meta:\n model = DeviceRole\n fields = ['id', 'url', 'name', 'slug']\n\n\n#\n# Platforms\n#\n\nclass PlatformSerializer(serializers.ModelSerializer):\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = Platform\n fields = ['id', 'name', 'slug', 'manufacturer', 'napalm_driver', 'rpc_client']\n\n\nclass NestedPlatformSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:platform-detail')\n\n class Meta:\n model = Platform\n fields = ['id', 'url', 'name', 'slug']\n\n\nclass WritablePlatformSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Platform\n fields = ['id', 'name', 'slug', 'manufacturer', 'napalm_driver', 'rpc_client']\n\n\n#\n# Devices\n#\n\n# Cannot import ipam.api.NestedIPAddressSerializer due to circular dependency\nclass DeviceIPAddressSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='ipam-api:ipaddress-detail')\n\n class Meta:\n model = IPAddress\n fields = ['id', 'url', 'family', 'address']\n\n\n# Cannot import virtualization.api.NestedClusterSerializer due to circular dependency\nclass NestedClusterSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='virtualization-api:cluster-detail')\n\n class Meta:\n model = Cluster\n fields = ['id', 'url', 'name']\n\n\n# Cannot import NestedVirtualChassisSerializer due to circular dependency\nclass DeviceVirtualChassisSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:virtualchassis-detail')\n master = NestedDeviceSerializer()\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'url', 'master']\n\n\nclass DeviceSerializer(CustomFieldModelSerializer):\n device_type = NestedDeviceTypeSerializer()\n device_role = NestedDeviceRoleSerializer()\n tenant = NestedTenantSerializer()\n platform = NestedPlatformSerializer()\n site = NestedSiteSerializer()\n rack = NestedRackSerializer()\n face = ChoiceFieldSerializer(choices=RACK_FACE_CHOICES)\n status = ChoiceFieldSerializer(choices=DEVICE_STATUS_CHOICES)\n primary_ip = DeviceIPAddressSerializer()\n primary_ip4 = DeviceIPAddressSerializer()\n primary_ip6 = DeviceIPAddressSerializer()\n parent_device = serializers.SerializerMethodField()\n cluster = NestedClusterSerializer()\n virtual_chassis = DeviceVirtualChassisSerializer()\n\n class Meta:\n model = Device\n fields = [\n 'id', 'name', 'display_name', 'device_type', 'device_role', 'tenant', 'platform', 'serial', 'asset_tag',\n 'site', 'rack', 'position', 'face', 'parent_device', 'status', 'primary_ip', 'primary_ip4', 
'primary_ip6',\n 'cluster', 'virtual_chassis', 'vc_position', 'vc_priority', 'comments', 'custom_fields', 'created',\n 'last_updated',\n ]\n\n def get_parent_device(self, obj):\n try:\n device_bay = obj.parent_bay\n except DeviceBay.DoesNotExist:\n return None\n context = {'request': self.context['request']}\n data = NestedDeviceSerializer(instance=device_bay.device, context=context).data\n data['device_bay'] = NestedDeviceBaySerializer(instance=device_bay, context=context).data\n return data\n\n\nclass WritableDeviceSerializer(CustomFieldModelSerializer):\n\n class Meta:\n model = Device\n fields = [\n 'id', 'name', 'device_type', 'device_role', 'tenant', 'platform', 'serial', 'asset_tag', 'site', 'rack',\n 'position', 'face', 'status', 'primary_ip4', 'primary_ip6', 'cluster', 'virtual_chassis', 'vc_position',\n 'vc_priority', 'comments', 'custom_fields', 'created', 'last_updated',\n ]\n validators = []\n\n def validate(self, data):\n\n # Validate uniqueness of (rack, position, face) since we omitted the automatically-created validator from Meta.\n if data.get('rack') and data.get('position') and data.get('face'):\n validator = UniqueTogetherValidator(queryset=Device.objects.all(), fields=('rack', 'position', 'face'))\n validator.set_context(self)\n validator(data)\n\n # Enforce model validation\n super(WritableDeviceSerializer, self).validate(data)\n\n return data\n\n\n#\n# Console server ports\n#\n\nclass ConsoleServerPortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n\n class Meta:\n model = ConsoleServerPort\n fields = ['id', 'device', 'name', 'connected_console']\n read_only_fields = ['connected_console']\n\n\nclass WritableConsoleServerPortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsoleServerPort\n fields = ['id', 'device', 'name']\n\n\n#\n# Console ports\n#\n\nclass ConsolePortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n cs_port = ConsoleServerPortSerializer()\n\n class Meta:\n model = ConsolePort\n fields = ['id', 'device', 'name', 'cs_port', 'connection_status']\n\n\nclass WritableConsolePortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = ConsolePort\n fields = ['id', 'device', 'name', 'cs_port', 'connection_status']\n\n\n#\n# Power outlets\n#\n\nclass PowerOutletSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n\n class Meta:\n model = PowerOutlet\n fields = ['id', 'device', 'name', 'connected_port']\n read_only_fields = ['connected_port']\n\n\nclass WritablePowerOutletSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerOutlet\n fields = ['id', 'device', 'name']\n\n\n#\n# Power ports\n#\n\nclass PowerPortSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n power_outlet = PowerOutletSerializer()\n\n class Meta:\n model = PowerPort\n fields = ['id', 'device', 'name', 'power_outlet', 'connection_status']\n\n\nclass WritablePowerPortSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = PowerPort\n fields = ['id', 'device', 'name', 'power_outlet', 'connection_status']\n\n\n#\n# Interfaces\n#\n\nclass NestedInterfaceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interface-detail')\n\n class Meta:\n model = Interface\n fields = ['id', 'url', 'name']\n\n\nclass InterfaceNestedCircuitSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='circuits-api:circuit-detail')\n\n class Meta:\n model = Circuit\n 
fields = ['id', 'url', 'cid']\n\n\nclass InterfaceCircuitTerminationSerializer(serializers.ModelSerializer):\n circuit = InterfaceNestedCircuitSerializer()\n\n class Meta:\n model = CircuitTermination\n fields = [\n 'id', 'circuit', 'term_side', 'port_speed', 'upstream_speed', 'xconnect_id', 'pp_info',\n ]\n\n\n# Cannot import ipam.api.NestedVLANSerializer due to circular dependency\nclass InterfaceVLANSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='ipam-api:vlan-detail')\n\n class Meta:\n model = VLAN\n fields = ['id', 'url', 'vid', 'name', 'display_name']\n\n\nclass InterfaceSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n lag = NestedInterfaceSerializer()\n is_connected = serializers.SerializerMethodField(read_only=True)\n interface_connection = serializers.SerializerMethodField(read_only=True)\n circuit_termination = InterfaceCircuitTerminationSerializer()\n untagged_vlan = InterfaceVLANSerializer()\n mode = ChoiceFieldSerializer(choices=IFACE_MODE_CHOICES)\n tagged_vlans = InterfaceVLANSerializer(many=True)\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only', 'description',\n 'is_connected', 'interface_connection', 'circuit_termination', 'mode', 'untagged_vlan', 'tagged_vlans',\n ]\n\n def get_is_connected(self, obj):\n \"\"\"\n Return True if the interface has a connected interface or circuit termination.\n \"\"\"\n if obj.connection:\n return True\n try:\n circuit_termination = obj.circuit_termination\n return True\n except CircuitTermination.DoesNotExist:\n pass\n return False\n\n def get_interface_connection(self, obj):\n if obj.connection:\n return OrderedDict((\n ('interface', PeerInterfaceSerializer(obj.connected_interface, context=self.context).data),\n ('status', obj.connection.connection_status),\n ))\n return None\n\n\nclass PeerInterfaceSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interface-detail')\n device = NestedDeviceSerializer()\n form_factor = ChoiceFieldSerializer(choices=IFACE_FF_CHOICES)\n lag = NestedInterfaceSerializer()\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'url', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only',\n 'description',\n ]\n\n\nclass WritableInterfaceSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = Interface\n fields = [\n 'id', 'device', 'name', 'form_factor', 'enabled', 'lag', 'mtu', 'mac_address', 'mgmt_only', 'description',\n 'mode', 'untagged_vlan', 'tagged_vlans',\n ]\n\n def validate(self, data):\n\n # All associated VLANs be global or assigned to the parent device's site.\n device = self.instance.device if self.instance else data.get('device')\n untagged_vlan = data.get('untagged_vlan')\n if untagged_vlan and untagged_vlan.site not in [device.site, None]:\n raise serializers.ValidationError({\n 'untagged_vlan': \"VLAN {} must belong to the same site as the interface's parent device, or it must be \"\n \"global.\".format(untagged_vlan)\n })\n for vlan in data.get('tagged_vlans', []):\n if vlan.site not in [device.site, None]:\n raise serializers.ValidationError({\n 'tagged_vlans': \"VLAN {} must belong to the same site as the interface's parent device, or it must \"\n \"be global.\".format(vlan)\n })\n\n return super(WritableInterfaceSerializer, self).validate(data)\n\n\n#\n# Device 
bays\n#\n\nclass DeviceBaySerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n installed_device = NestedDeviceSerializer()\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'device', 'name', 'installed_device']\n\n\nclass NestedDeviceBaySerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:devicebay-detail')\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'url', 'name']\n\n\nclass WritableDeviceBaySerializer(ValidatedModelSerializer):\n\n class Meta:\n model = DeviceBay\n fields = ['id', 'device', 'name', 'installed_device']\n\n\n#\n# Inventory items\n#\n\nclass InventoryItemSerializer(serializers.ModelSerializer):\n device = NestedDeviceSerializer()\n manufacturer = NestedManufacturerSerializer()\n\n class Meta:\n model = InventoryItem\n fields = [\n 'id', 'device', 'parent', 'name', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'discovered',\n 'description',\n ]\n\n\nclass WritableInventoryItemSerializer(ValidatedModelSerializer):\n # Provide a default value to satisfy UniqueTogetherValidator\n parent = serializers.PrimaryKeyRelatedField(queryset=InventoryItem.objects.all(), allow_null=True, default=None)\n\n class Meta:\n model = InventoryItem\n fields = [\n 'id', 'device', 'parent', 'name', 'manufacturer', 'part_id', 'serial', 'asset_tag', 'discovered',\n 'description',\n ]\n\n\n#\n# Interface connections\n#\n\nclass InterfaceConnectionSerializer(serializers.ModelSerializer):\n interface_a = PeerInterfaceSerializer()\n interface_b = PeerInterfaceSerializer()\n connection_status = ChoiceFieldSerializer(choices=CONNECTION_STATUS_CHOICES)\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'interface_a', 'interface_b', 'connection_status']\n\n\nclass NestedInterfaceConnectionSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:interfaceconnection-detail')\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'url', 'connection_status']\n\n\nclass WritableInterfaceConnectionSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = InterfaceConnection\n fields = ['id', 'interface_a', 'interface_b', 'connection_status']\n\n\n#\n# Virtual chassis\n#\n\nclass VirtualChassisSerializer(serializers.ModelSerializer):\n master = NestedDeviceSerializer()\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'master', 'domain']\n\n\nclass NestedVirtualChassisSerializer(serializers.ModelSerializer):\n url = serializers.HyperlinkedIdentityField(view_name='dcim-api:virtualchassis-detail')\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'url']\n\n\nclass WritableVirtualChassisSerializer(ValidatedModelSerializer):\n\n class Meta:\n model = VirtualChassis\n fields = ['id', 'master', 'domain']\n",
"path": "netbox/dcim/api/serializers.py"
}
] | diff --git a/netbox/dcim/api/serializers.py b/netbox/dcim/api/serializers.py
index e37354d47f6..988a2d59f69 100644
--- a/netbox/dcim/api/serializers.py
+++ b/netbox/dcim/api/serializers.py
@@ -80,7 +80,7 @@ class Meta:
class WritableSiteSerializer(CustomFieldModelSerializer):
- time_zone = TimeZoneField(required=False)
+ time_zone = TimeZoneField(required=False, allow_null=True)
class Meta:
model = Site
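For context on the one-line change in the diff above: in Django REST Framework, `required=False` only allows the key to be omitted from the payload; an explicit `null` value is still rejected unless `allow_null=True` is also set. A minimal hedged sketch of that behaviour (the serializer and field below are illustrative, not NetBox's actual `TimeZoneField`):

```python
from rest_framework import serializers


class ExampleSiteSerializer(serializers.Serializer):
    # required=False: the key may be omitted entirely.
    # allow_null=True: an explicit null value is also accepted.
    time_zone = serializers.CharField(required=False, allow_null=True)
```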
|
vas3k__vas3k.club-260 | The check_PR action is broken for new pull requests
Things went wrong here after a couple of changes to the requirements and dockerfiles: https://github.com/vas3k/vas3k.club/blob/master/.github/workflows/CI.yml
Because of this, every new pull request comes up red and can only be merged by a firm admin hand. We should rethink this CI somehow. Does anyone have ideas?
Essentially, all I care about is the linters and that Docker comes up successfully with the new code. There is nothing else for now.
| [
{
"content": "import io\nimport logging\nimport os\nfrom urllib.parse import urlparse\n\nimport requests\nfrom PIL import Image\nfrom django.conf import settings\n\nlog = logging.getLogger(__name__)\n\n\ndef upload_image_bytes(\n filename, data, resize=(192, 192), convert_to=None, quality=None\n):\n if not data:\n return None\n\n if resize:\n try:\n image = Image.open(data)\n except Exception as ex:\n log.warning(f\"Bad image data: {ex}\")\n return None\n\n image.thumbnail(resize)\n saved_image = io.BytesIO()\n saved_image.name = filename\n\n try:\n image.save(saved_image)\n except OSError:\n log.warning(f\"Error saving image data: {ex}\")\n return None\n\n data = saved_image.getvalue()\n\n upload_params = {\n \"code\": settings.MEDIA_UPLOAD_CODE\n }\n\n if convert_to:\n upload_params[\"convert_to\"] = convert_to\n\n if quality:\n upload_params[\"quality\"] = quality\n\n try:\n uploaded = requests.post(\n url=settings.MEDIA_UPLOAD_URL,\n params=upload_params,\n files={\"media\": (filename, data)},\n )\n except requests.exceptions.RequestException as ex:\n log.error(f\"Image upload error: {ex}\")\n return None\n\n if 200 <= uploaded.status_code <= 299:\n try:\n response_data = uploaded.json()\n except Exception as ex:\n log.error(f\"Image upload error: {ex} ({uploaded.content})\")\n return None\n\n return response_data[\"uploaded\"][0]\n\n return None\n\n\ndef upload_image_from_url(url, resize=(192, 192), convert_to=\"jpg\", quality=90):\n if settings.DEBUG or not settings.MEDIA_UPLOAD_URL or not settings.MEDIA_UPLOAD_CODE:\n return url\n\n if not url:\n return None\n\n image_name = os.path.basename(urlparse(url).path)\n if \".\" not in image_name:\n image_name += \".jpg\"\n\n try:\n image_data = io.BytesIO(requests.get(url).content)\n except requests.exceptions.RequestException:\n return None\n\n return upload_image_bytes(image_name, image_data, resize=resize, convert_to=convert_to, quality=quality)\n",
"path": "utils/images.py"
}
] | [
{
"content": "import io\nimport logging\nimport os\nfrom urllib.parse import urlparse\n\nimport requests\nfrom PIL import Image\nfrom django.conf import settings\n\nlog = logging.getLogger(__name__)\n\n\ndef upload_image_bytes(\n filename, data, resize=(192, 192), convert_to=None, quality=None\n):\n if not data:\n return None\n\n if resize:\n try:\n image = Image.open(data)\n except Exception as ex:\n log.warning(f\"Bad image data: {ex}\")\n return None\n\n image.thumbnail(resize)\n saved_image = io.BytesIO()\n saved_image.name = filename\n\n try:\n image.save(saved_image)\n except OSError as ex:\n log.warning(f\"Error saving image data: {ex}\")\n return None\n\n data = saved_image.getvalue()\n\n upload_params = {\n \"code\": settings.MEDIA_UPLOAD_CODE\n }\n\n if convert_to:\n upload_params[\"convert_to\"] = convert_to\n\n if quality:\n upload_params[\"quality\"] = quality\n\n try:\n uploaded = requests.post(\n url=settings.MEDIA_UPLOAD_URL,\n params=upload_params,\n files={\"media\": (filename, data)},\n )\n except requests.exceptions.RequestException as ex:\n log.error(f\"Image upload error: {ex}\")\n return None\n\n if 200 <= uploaded.status_code <= 299:\n try:\n response_data = uploaded.json()\n except Exception as ex:\n log.error(f\"Image upload error: {ex} ({uploaded.content})\")\n return None\n\n return response_data[\"uploaded\"][0]\n\n return None\n\n\ndef upload_image_from_url(url, resize=(192, 192), convert_to=\"jpg\", quality=90):\n if settings.DEBUG or not settings.MEDIA_UPLOAD_URL or not settings.MEDIA_UPLOAD_CODE:\n return url\n\n if not url:\n return None\n\n image_name = os.path.basename(urlparse(url).path)\n if \".\" not in image_name:\n image_name += \".jpg\"\n\n try:\n image_data = io.BytesIO(requests.get(url).content)\n except requests.exceptions.RequestException:\n return None\n\n return upload_image_bytes(image_name, image_data, resize=resize, convert_to=convert_to, quality=quality)\n",
"path": "utils/images.py"
}
] | diff --git a/.github/workflows/CI.yml b/.github/workflows/CI.yml
index 65f731e55..0d6194288 100644
--- a/.github/workflows/CI.yml
+++ b/.github/workflows/CI.yml
@@ -3,29 +3,28 @@ name: check_pr
on: [pull_request]
jobs:
+ lint:
+ runs-on: ubuntu-latest
+
+ steps:
+ - uses: actions/checkout@master
+ - uses: actions/setup-python@v2
+ with:
+ python-version: '3.8'
+ architecture: 'x64'
+ - name: Install requirements
+ run: |
+ pip install --no-cache-dir flake8
+ - name: run flake8
+ run: |
+ # stop the build if there are Python syntax errors or undefined names
+ flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
+ # exit-zero treats all errors as warnings.
+ flake8 . --count --exit-zero --statistics
-# Disabled due errors with gdal installation
-# lint:
-# runs-on: ubuntu-latest
-#
-# steps:
-# - uses: actions/checkout@master
-# - uses: actions/setup-python@v2
-# with:
-# python-version: '3.8'
-# architecture: 'x64'
-# - name: Install requirements
-# run: |
-# pip install --no-cache-dir pipenv
-# pipenv install --dev
-# - name: run lint
-# run: make test-ci
-# # continue-on-error: true
dockerize:
runs-on: ubuntu-latest
-# needs: lint
-
steps:
- uses: actions/checkout@master
- name: Build the docker-compose stack
@@ -36,7 +35,7 @@ jobs:
time: '20s'
- name: Check db migrate on container
run: |
- docker-compose exec -T club_app make migrate
+ docker-compose exec -T club_app make docker-migrate
- name: Check build frontend on container
run: |
docker-compose exec -T webpack npm run build
diff --git a/Makefile b/Makefile
index fd55484ad..1763cfffe 100644
--- a/Makefile
+++ b/Makefile
@@ -11,16 +11,16 @@ run-dev: ## Runs dev server
run-queue: ## Runs task broker
pipenv run python manage.py qcluster
-run-queue-production:
+docker-run-queue:
python manage.py qcluster
run-uvicorn: ## Runs uvicorn (ASGI) server in managed mode
pipenv run uvicorn --fd 0 --lifespan off club.asgi:application
docker-run-dev: ## Runs dev server in docker
- pipenv run python ./utils/wait_for_postgres.py
- pipenv run python manage.py migrate
- pipenv run python manage.py runserver 0.0.0.0:8000
+ python ./utils/wait_for_postgres.py
+ python manage.py migrate
+ python manage.py runserver 0.0.0.0:8000
docker-run-production: ## Runs production server in docker
python3 manage.py migrate
@@ -40,6 +40,9 @@ requirements: ## Generate requirements.txt for production
migrate: ## Migrate database to the latest version
pipenv run python3 manage.py migrate
+docker-migrate:
+ python3 manage.py migrate
+
build-frontend: ## Runs webpack
npm run --prefix frontend build
diff --git a/docker-compose.production.yml b/docker-compose.production.yml
index fc4c2c5ac..6255725b2 100644
--- a/docker-compose.production.yml
+++ b/docker-compose.production.yml
@@ -30,7 +30,7 @@ services:
queue:
<<: *app
- command: make run-queue-production
+ command: make docker-run-queue
container_name: club_queue
depends_on:
- postgres
diff --git a/docker-compose.yml b/docker-compose.yml
index 9288da34f..5a1d42140 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -31,7 +31,7 @@ services:
build:
dockerfile: dev.dockerfile
context: .
- command: make run-queue
+ command: make docker-run-queue
environment:
- DEBUG=true
- PYTHONUNBUFFERED=1
@@ -70,10 +70,3 @@ services:
volumes:
- .:/app:delegated
working_dir: /app/frontend
-
- migrate_and_init:
- <<: *app
- container_name: club_migrate_and_init
- restart: "no"
- ports: []
- command: make migrate
diff --git a/requirements.txt b/requirements.txt
index e54cc9b28..ff4b88c03 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -13,6 +13,7 @@ cryptography==2.8
cssselect==1.1.0
cssutils==1.0.2
decorator==4.4.2
+django-debug-toolbar==2.2
django-picklefield==2.1.1
django-q-sentry==0.1.1
django-q[sentry]==1.2.1
diff --git a/utils/images.py b/utils/images.py
index cf0fd5646..73b166b2f 100644
--- a/utils/images.py
+++ b/utils/images.py
@@ -29,7 +29,7 @@ def upload_image_bytes(
try:
image.save(saved_image)
- except OSError:
+ except OSError as ex:
log.warning(f"Error saving image data: {ex}")
return None
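To make the one-character change above concrete: in the pre-fix handler, `ex` is referenced in the log call without ever being bound, so any `OSError` raised by `image.save()` surfaces as a `NameError` instead of being logged. A minimal self-contained sketch of the corrected pattern (the function name is illustrative; `image` is assumed to be a `PIL.Image.Image`, as in `utils/images.py`):

```python
import io
import logging

log = logging.getLogger(__name__)


def save_image_bytes(image, filename):
    saved_image = io.BytesIO()
    saved_image.name = filename
    try:
        image.save(saved_image)
    except OSError as ex:  # "as ex" binds the exception so it can be logged
        log.warning(f"Error saving image data: {ex}")
        return None
    return saved_image.getvalue()
```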
|
akvo__akvo-rsr-1603 | Transaction admin creates internal server error
| [
{
"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\nfrom akvo.codelists.models import (Currency, DisbursementChannel,TransactionType, Country, Region,\n RegionVocabulary, Sector, SectorCategory, SectorVocabulary)\nfrom akvo.codelists.store.codelists_v201 import (AID_TYPE, CURRENCY, DISBURSEMENT_CHANNEL,\n FINANCE_TYPE, FLOW_TYPE, TIED_STATUS,\n TRANSACTION_TYPE, COUNTRY, REGION,\n REGION_VOCABULARY, SECTOR_VOCABULARY)\nfrom akvo.utils import codelist_choices, codelist_value\n\n\nclass Transaction(models.Model):\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='transactions')\n reference = ValidXMLCharField(\n _(u'reference'), blank=True, max_length=25,\n help_text=_(u'Enter a reference for the transaction. (25 characters)')\n )\n aid_type = ValidXMLCharField(\n _(u'aid type'), blank=True, max_length=3, choices=codelist_choices(AID_TYPE)\n )\n description = ValidXMLCharField(\n _(u'description'), max_length=255, blank=True,\n help_text=_(u'Enter a description for the transaction. (255 characters)')\n )\n disbursement_channel = ValidXMLCharField(\n _(u'disbursement channel'), blank=True, max_length=1,\n choices=codelist_choices(DISBURSEMENT_CHANNEL)\n )\n finance_type = ValidXMLCharField(\n _(u'finance type'), max_length=3, blank=True, choices=codelist_choices(FINANCE_TYPE)\n )\n flow_type = ValidXMLCharField(\n _(u'flow type'), max_length=2, blank=True, choices=codelist_choices(FLOW_TYPE)\n )\n tied_status = ValidXMLCharField(\n _(u'tied status'), blank=True, max_length=1, choices=codelist_choices(TIED_STATUS)\n )\n transaction_date = models.DateField(\n _(u'transaction date'), blank=True, null=True,\n help_text=_(u'Enter the financial reporting date that '\n u'the transaction was/will be undertaken.')\n )\n transaction_type = ValidXMLCharField(\n _(u'transaction type'), blank=True, max_length=2,\n choices=codelist_choices(TRANSACTION_TYPE),\n help_text=_(u'Select the type of transaction from the list.')\n )\n value = models.DecimalField(\n _(u'value'), blank=True, null=True, max_digits=11, decimal_places=2,\n help_text=_(u'Enter the transaction amount.')\n )\n value_date = models.DateField(_(u'value date'), blank=True, null=True)\n currency = ValidXMLCharField(\n _(u'currency'), blank=True, max_length=3, choices=codelist_choices(CURRENCY)\n )\n provider_organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'provider organisation'),\n related_name='providing_transactions', blank=True, null=True, on_delete=models.SET_NULL\n )\n provider_organisation_activity = ValidXMLCharField(\n _(u'provider organisation activity id'), blank=True, max_length=50\n )\n receiver_organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'receiver organisation'),\n related_name='receiving_transactions', blank=True, null=True, on_delete=models.SET_NULL\n )\n receiver_organisation_activity = ValidXMLCharField(\n _(u'receiver organisation activity id'), blank=True, max_length=50\n )\n recipient_country = ValidXMLCharField(\n _(u'recipient country'), blank=True, max_length=2, choices=codelist_choices(COUNTRY)\n )\n recipient_region = ValidXMLCharField(\n _(u'recipient region'), 
blank=True, max_length=3, choices=codelist_choices(REGION)\n )\n recipient_region_vocabulary = ValidXMLCharField(\n _(u'recipient region vocabulary'), blank=True, max_length=1,\n choices=codelist_choices(REGION_VOCABULARY)\n )\n\n def __unicode__(self):\n return self.value\n\n def iati_currency(self):\n return codelist_value(Currency, self, 'currency')\n\n def iati_transaction_type(self):\n return codelist_value(TransactionType, self, 'transaction_type')\n\n def iati_disbursement_channel(self):\n return codelist_value(DisbursementChannel, self, 'disbursement_channel')\n\n def iati_recipient_country(self):\n return codelist_value(Country, self, 'recipient_country')\n\n def iati_recipient_region(self):\n return codelist_value(Region, self, 'recipient_region')\n\n def iati_recipient_region_vocabulary(self):\n return codelist_value(RegionVocabulary, self, 'recipient_region_vocabulary')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'transaction')\n verbose_name_plural = _(u'transactions')\n\n\nclass TransactionSector(models.Model):\n project = models.ForeignKey(\n 'Transaction', verbose_name=_(u'transaction'), related_name='sectors'\n )\n code = ValidXMLCharField(_(u'sector'), blank=True, max_length=5)\n text = ValidXMLCharField(\n _(u'description'), blank=True, max_length=100, help_text=_(u'(max 100 characters)')\n )\n vocabulary = ValidXMLCharField(\n _(u'vocabulary'), blank=True, max_length=5, choices=codelist_choices(SECTOR_VOCABULARY)\n )\n\n def iati_sector(self):\n if self.code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):\n return codelist_value(Sector, self, 'code')\n elif self.code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):\n return codelist_value(SectorCategory, self, 'code')\n else:\n return self.code\n\n def iati_vocabulary(self):\n return codelist_value(SectorVocabulary, self, 'vocabulary')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'transaction sector')\n verbose_name_plural = _(u'transaction sectors')\n unique_together = ('project', 'vocabulary')\n",
"path": "akvo/rsr/models/transaction.py"
}
] | [
{
"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\nfrom akvo.codelists.models import (Currency, DisbursementChannel,TransactionType, Country, Region,\n RegionVocabulary, Sector, SectorCategory, SectorVocabulary)\nfrom akvo.codelists.store.codelists_v201 import (AID_TYPE, CURRENCY, DISBURSEMENT_CHANNEL,\n FINANCE_TYPE, FLOW_TYPE, TIED_STATUS,\n TRANSACTION_TYPE, COUNTRY, REGION,\n REGION_VOCABULARY, SECTOR_VOCABULARY)\nfrom akvo.utils import codelist_choices, codelist_value\n\n\nclass Transaction(models.Model):\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='transactions')\n reference = ValidXMLCharField(\n _(u'reference'), blank=True, max_length=25,\n help_text=_(u'Enter a reference for the transaction. (25 characters)')\n )\n aid_type = ValidXMLCharField(\n _(u'aid type'), blank=True, max_length=3, choices=codelist_choices(AID_TYPE)\n )\n description = ValidXMLCharField(\n _(u'description'), max_length=255, blank=True,\n help_text=_(u'Enter a description for the transaction. (255 characters)')\n )\n disbursement_channel = ValidXMLCharField(\n _(u'disbursement channel'), blank=True, max_length=1,\n choices=codelist_choices(DISBURSEMENT_CHANNEL)\n )\n finance_type = ValidXMLCharField(\n _(u'finance type'), max_length=3, blank=True, choices=codelist_choices(FINANCE_TYPE)\n )\n flow_type = ValidXMLCharField(\n _(u'flow type'), max_length=2, blank=True, choices=codelist_choices(FLOW_TYPE)\n )\n tied_status = ValidXMLCharField(\n _(u'tied status'), blank=True, max_length=1, choices=codelist_choices(TIED_STATUS)\n )\n transaction_date = models.DateField(\n _(u'transaction date'), blank=True, null=True,\n help_text=_(u'Enter the financial reporting date that '\n u'the transaction was/will be undertaken.')\n )\n transaction_type = ValidXMLCharField(\n _(u'transaction type'), blank=True, max_length=2,\n choices=codelist_choices(TRANSACTION_TYPE),\n help_text=_(u'Select the type of transaction from the list.')\n )\n value = models.DecimalField(\n _(u'value'), blank=True, null=True, max_digits=11, decimal_places=2,\n help_text=_(u'Enter the transaction amount.')\n )\n value_date = models.DateField(_(u'value date'), blank=True, null=True)\n currency = ValidXMLCharField(\n _(u'currency'), blank=True, max_length=3, choices=codelist_choices(CURRENCY)\n )\n provider_organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'provider organisation'),\n related_name='providing_transactions', blank=True, null=True, on_delete=models.SET_NULL\n )\n provider_organisation_activity = ValidXMLCharField(\n _(u'provider organisation activity id'), blank=True, max_length=50\n )\n receiver_organisation = models.ForeignKey(\n 'Organisation', verbose_name=_(u'receiver organisation'),\n related_name='receiving_transactions', blank=True, null=True, on_delete=models.SET_NULL\n )\n receiver_organisation_activity = ValidXMLCharField(\n _(u'receiver organisation activity id'), blank=True, max_length=50\n )\n recipient_country = ValidXMLCharField(\n _(u'recipient country'), blank=True, max_length=2, choices=codelist_choices(COUNTRY)\n )\n recipient_region = ValidXMLCharField(\n _(u'recipient region'), 
blank=True, max_length=3, choices=codelist_choices(REGION)\n )\n recipient_region_vocabulary = ValidXMLCharField(\n _(u'recipient region vocabulary'), blank=True, max_length=1,\n choices=codelist_choices(REGION_VOCABULARY)\n )\n\n def __unicode__(self):\n return unicode(self.value)\n\n def iati_currency(self):\n return codelist_value(Currency, self, 'currency')\n\n def iati_transaction_type(self):\n return codelist_value(TransactionType, self, 'transaction_type')\n\n def iati_disbursement_channel(self):\n return codelist_value(DisbursementChannel, self, 'disbursement_channel')\n\n def iati_recipient_country(self):\n return codelist_value(Country, self, 'recipient_country')\n\n def iati_recipient_region(self):\n return codelist_value(Region, self, 'recipient_region')\n\n def iati_recipient_region_vocabulary(self):\n return codelist_value(RegionVocabulary, self, 'recipient_region_vocabulary')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'transaction')\n verbose_name_plural = _(u'transactions')\n\n\nclass TransactionSector(models.Model):\n project = models.ForeignKey(\n 'Transaction', verbose_name=_(u'transaction'), related_name='sectors'\n )\n code = ValidXMLCharField(_(u'sector'), blank=True, max_length=5)\n text = ValidXMLCharField(\n _(u'description'), blank=True, max_length=100, help_text=_(u'(max 100 characters)')\n )\n vocabulary = ValidXMLCharField(\n _(u'vocabulary'), blank=True, max_length=5, choices=codelist_choices(SECTOR_VOCABULARY)\n )\n\n def iati_sector(self):\n if self.code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):\n return codelist_value(Sector, self, 'code')\n elif self.code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):\n return codelist_value(SectorCategory, self, 'code')\n else:\n return self.code\n\n def iati_vocabulary(self):\n return codelist_value(SectorVocabulary, self, 'vocabulary')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'transaction sector')\n verbose_name_plural = _(u'transaction sectors')\n unique_together = ('project', 'vocabulary')\n",
"path": "akvo/rsr/models/transaction.py"
}
] | diff --git a/akvo/rsr/models/transaction.py b/akvo/rsr/models/transaction.py
index 65687712e4..30bf931422 100644
--- a/akvo/rsr/models/transaction.py
+++ b/akvo/rsr/models/transaction.py
@@ -89,7 +89,7 @@ class Transaction(models.Model):
)
def __unicode__(self):
- return self.value
+ return unicode(self.value)
def iati_currency(self):
return codelist_value(Currency, self, 'currency')
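A short hedged sketch of why the admin page presumably 500s: Django's admin renders each row through `unicode()`/`force_text()`, and `__unicode__` must return a text string; returning the raw `Decimal` stored in `value` raises a `TypeError` instead. Python 2 example, matching the codebase shown above (class names are illustrative):

```python
from decimal import Decimal


class BrokenTransaction(object):
    value = Decimal("12.50")

    def __unicode__(self):
        return self.value  # not a string -> TypeError when unicode(obj) is called


class FixedTransaction(object):
    value = Decimal("12.50")

    def __unicode__(self):
        return unicode(self.value)  # u'12.50'
```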
|
speechbrain__speechbrain-1127 | Broken docs for `speechbrain.alignment.ctc_segmentation`
Hi, thanks for maintaining such a wonderful library.
Looks like the documentation for `speechbrain.alignment.ctc_segmentation` is broken:
https://speechbrain.readthedocs.io/en/latest/API/speechbrain.alignment.ctc_segmentation.html
I guess this is caused by an unneeded shebang, as shown in the following:
https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/alignment/ctc_segmentation.py#L1-L2
Perhaps this could be related to #819?
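Whether the culprit is the leading shebang or a dependency that fails to import under autodoc's mocks, a quick hedged check is to import the documented module locally — Sphinx autodoc tends to render a broken or empty page when the import fails (this assumes a local install of speechbrain and its optional `ctc_segmentation` dependency):

```python
import importlib

# If this raises ImportError, autodoc will produce a broken/empty API page
# for the module unless the missing dependency is installed or mocked.
mod = importlib.import_module("speechbrain.alignment.ctc_segmentation")
print(mod.__doc__)
```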
| [
{
"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport hyperpyyaml\n\n\nsys.path.insert(0, os.path.abspath(\"../speechbrain\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"SpeechBrain\"\ncopyright = \"2021, SpeechBrain\"\nauthor = \"SpeechBrain\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"0.5.0\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.napoleon\",\n \"recommonmark\",\n]\n\n\n# Napoleon settings\nnapoleon_google_docstring = False\nnapoleon_numpy_docstring = True\nnapoleon_include_init_with_doc = True\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = True\nnapoleon_use_admonition_for_references = False\nnapoleon_use_ivar = False\nnapoleon_use_param = True\nnapoleon_use_rtype = True\n\n# Intersphinx mapping:\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/\", None),\n \"numpy\": (\"http://docs.scipy.org/doc/numpy/\", None),\n \"torch\": (\"https://pytorch.org/docs/master/\", None),\n}\n\n# AUTODOC:\n\nautodoc_default_options = {}\n\n# Autodoc mock extra dependencies:\nautodoc_mock_imports = [\"numba\", \"sklearn\"]\n\n# Order of API items:\nautodoc_member_order = \"bysource\"\nautodoc_default_options = {\"member-order\": \"bysource\"}\n\n# Don't show inherited docstrings:\nautodoc_inherit_docstrings = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_apidoc_templates\"]\n\n# -- Better apidoc -----------------------------------------------------------\n\n\ndef run_apidoc(app):\n \"\"\"Generage API documentation\"\"\"\n import better_apidoc\n\n better_apidoc.APP = app\n\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.dirname(hyperpyyaml.__file__),\n ]\n )\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.join(\"../\", \"speechbrain\"),\n ]\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n# See https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n# for rtd theme options\nhtml_theme_options = {\n # Toc options\n \"collapse_navigation\": False,\n \"sticky_navigation\": True,\n \"navigation_depth\": 4,\n \"includehidden\": True,\n}\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\ndef setup(app):\n app.connect(\"builder-inited\", run_apidoc)\n",
"path": "docs/conf.py"
}
] | [
{
"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport hyperpyyaml\n\n\nsys.path.insert(0, os.path.abspath(\"../speechbrain\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"SpeechBrain\"\ncopyright = \"2021, SpeechBrain\"\nauthor = \"SpeechBrain\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"0.5.0\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.napoleon\",\n \"recommonmark\",\n]\n\n\n# Napoleon settings\nnapoleon_google_docstring = False\nnapoleon_numpy_docstring = True\nnapoleon_include_init_with_doc = True\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = True\nnapoleon_use_admonition_for_references = False\nnapoleon_use_ivar = False\nnapoleon_use_param = True\nnapoleon_use_rtype = True\n\n# Intersphinx mapping:\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/\", None),\n \"numpy\": (\"http://docs.scipy.org/doc/numpy/\", None),\n \"torch\": (\"https://pytorch.org/docs/master/\", None),\n}\n\n# AUTODOC:\n\nautodoc_default_options = {}\n\n# Autodoc mock extra dependencies:\nautodoc_mock_imports = [\"sklearn\"]\n\n# Order of API items:\nautodoc_member_order = \"bysource\"\nautodoc_default_options = {\"member-order\": \"bysource\"}\n\n# Don't show inherited docstrings:\nautodoc_inherit_docstrings = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_apidoc_templates\"]\n\n# -- Better apidoc -----------------------------------------------------------\n\n\ndef run_apidoc(app):\n \"\"\"Generage API documentation\"\"\"\n import better_apidoc\n\n better_apidoc.APP = app\n\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.dirname(hyperpyyaml.__file__),\n ]\n )\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.join(\"../\", \"speechbrain\"),\n ]\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n# See https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n# for rtd theme options\nhtml_theme_options = {\n # Toc options\n \"collapse_navigation\": False,\n \"sticky_navigation\": True,\n \"navigation_depth\": 4,\n \"includehidden\": True,\n}\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\ndef setup(app):\n app.connect(\"builder-inited\", run_apidoc)\n",
"path": "docs/conf.py"
}
] | diff --git a/docs/conf.py b/docs/conf.py
index 435bf01b80..a774420cbf 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -69,7 +69,7 @@
autodoc_default_options = {}
# Autodoc mock extra dependencies:
-autodoc_mock_imports = ["numba", "sklearn"]
+autodoc_mock_imports = ["sklearn"]
# Order of API items:
autodoc_member_order = "bysource"
diff --git a/docs/docs-requirements.txt b/docs/docs-requirements.txt
index 9a506e0f15..b029209d95 100644
--- a/docs/docs-requirements.txt
+++ b/docs/docs-requirements.txt
@@ -1,6 +1,7 @@
better-apidoc>=0.3.1
-numba
+numba>=0.54.1
recommonmark>=0.7.1
six
sphinx-rtd-theme>=0.4.3
Sphinx>=3.4.3
+ctc-segmentation>=1.7.0
|
tensorflow__addons-1941 | Usage with tf.keras API
https://github.com/tensorflow/addons/blob/5f618fdb92d9737da059de2a33fa606e97505398/tensorflow_addons/losses/focal_loss.py#L52-L53
The usage example for the `tf.keras` API is incorrect. It should be replaced with:
```python
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())
```
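For completeness, a minimal end-to-end sketch of the corrected usage (assumes `tensorflow` and `tensorflow-addons` are installed; the input shape and layer are arbitrary illustration values):

```python
import tensorflow as tf
import tensorflow_addons as tfa

inputs = tf.keras.Input(shape=(16,))
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(inputs)

model = tf.keras.Model(inputs, outputs)
# The loss lives in tfa.losses, not tf.keras.losses, hence the correction above.
model.compile("sgd", loss=tfa.losses.SigmoidFocalCrossEntropy())
```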
| [
{
"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n y_true = [[1.0], [1.0], [0.0]],\n y_pred = [[0.97], [0.91], [0.03]])\n print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n 1.9097870e-04,\n 2.0559824e-05]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. 
If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n",
"path": "tensorflow_addons/losses/focal_loss.py"
}
] | [
{
"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n y_true = [[1.0], [1.0], [0.0]],\n y_pred = [[0.97], [0.91], [0.03]])\n print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n 1.9097870e-04,\n 2.0559824e-05]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. 
If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n",
"path": "tensorflow_addons/losses/focal_loss.py"
}
] | diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py
index 973bfccb1f..550a82a614 100644
--- a/tensorflow_addons/losses/focal_loss.py
+++ b/tensorflow_addons/losses/focal_loss.py
@@ -50,7 +50,7 @@ class SigmoidFocalCrossEntropy(LossFunctionWrapper):
```python
model = tf.keras.Model(inputs, outputs)
- model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())
+ model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())
```
Args
|
Gallopsled__pwntools-669 | Need import
util/iters.py does not import `time` and `context`, which causes a problem when using `mbruteforce`.
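A hedged sketch of what the missing imports presumably amount to near the top of `pwnlib/util/iters.py` (the exact names and relative-import form in the real fix may differ):

```python
# Hypothetical sketch -- both names are used by the multiprocessing
# bruteforcer (mbruteforce) but were never imported in this module.
import time

from ..context import context
```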
| [
{
"content": "\"\"\"\nThis module includes and extends the standard module :mod:`itertools`.\n\"\"\"\n\n__all__ = [\n 'bruteforce' ,\n 'mbruteforce' ,\n 'chained' ,\n 'consume' ,\n 'cyclen' ,\n 'dotproduct' ,\n 'flatten' ,\n 'group' ,\n 'iter_except' ,\n 'lexicographic' ,\n 'lookahead' ,\n 'nth' ,\n 'pad' ,\n 'pairwise' ,\n 'powerset' ,\n 'quantify' ,\n 'random_combination' ,\n 'random_combination_with_replacement' ,\n 'random_permutation' ,\n 'random_product' ,\n 'repeat_func' ,\n 'roundrobin' ,\n 'tabulate' ,\n 'take' ,\n 'unique_everseen' ,\n 'unique_justseen' ,\n 'unique_window' ,\n # these are re-exported from itertools\n 'chain' ,\n 'combinations' ,\n 'combinations_with_replacement' ,\n 'compress' ,\n 'count' ,\n 'cycle' ,\n 'dropwhile' ,\n 'groupby' ,\n 'ifilter' ,\n 'ifilterfalse' ,\n 'imap' ,\n 'islice' ,\n 'izip' ,\n 'izip_longest' ,\n 'permutations' ,\n 'product' ,\n 'repeat' ,\n 'starmap' ,\n 'takewhile' ,\n 'tee'\n]\n\nimport collections\nimport copy\nimport multiprocessing\nimport operator\nimport random\nfrom itertools import *\n\nfrom ..log import getLogger\n\nlog = getLogger(__name__)\n\ndef take(n, iterable):\n \"\"\"take(n, iterable) -> list\n\n Returns first `n` elements of `iterable`. If `iterable` is a iterator it\n will be advanced.\n\n Arguments:\n n(int): Number of elements to take.\n iterable: An iterable.\n\n Returns:\n A list of the first `n` elements of `iterable`. If there are fewer than\n `n` elements in `iterable` they will all be returned.\n\n Examples:\n >>> take(2, range(10))\n [0, 1]\n >>> i = count()\n >>> take(2, i)\n [0, 1]\n >>> take(2, i)\n [2, 3]\n >>> take(9001, [1, 2, 3])\n [1, 2, 3]\n \"\"\"\n return list(islice(iterable, n))\n\ndef tabulate(func, start = 0):\n \"\"\"tabulate(func, start = 0) -> iterator\n\n Arguments:\n func(function): The function to tabulate over.\n start(int): Number to start on.\n\n Returns:\n An iterator with the elements ``func(start), func(start + 1), ...``.\n\n Examples:\n >>> take(2, tabulate(str))\n ['0', '1']\n >>> take(5, tabulate(lambda x: x**2, start = 1))\n [1, 4, 9, 16, 25]\n \"\"\"\n return imap(func, count(start))\n\ndef consume(n, iterator):\n \"\"\"consume(n, iterator)\n\n Advance the iterator `n` steps ahead. If `n is :const:`None`, consume\n everything.\n\n Arguments:\n n(int): Number of elements to consume.\n iterator(iterator): An iterator.\n\n Returns:\n :const:`None`.\n\n Examples:\n >>> i = count()\n >>> consume(5, i)\n >>> i.next()\n 5\n >>> i = iter([1, 2, 3, 4, 5])\n >>> consume(2, i)\n >>> list(i)\n [3, 4, 5]\n \"\"\"\n # Use functions that consume iterators at C speed.\n if n is None:\n # feed the entire iterator into a zero-length deque\n collections.deque(iterator, maxlen = 0)\n else:\n # advance to the empty slice starting at position n\n next(islice(iterator, n, n), None)\n\ndef nth(n, iterable, default = None):\n \"\"\"nth(n, iterable, default = None) -> object\n\n Returns the element at index `n` in `iterable`. 
If `iterable` is a\n iterator it will be advanced.\n\n Arguments:\n n(int): Index of the element to return.\n iterable: An iterable.\n default(objext): A default value.\n\n Returns:\n The element at index `n` in `iterable` or `default` if `iterable` has too\n few elements.\n\n Examples:\n >>> nth(2, [0, 1, 2, 3])\n 2\n >>> nth(2, [0, 1], 42)\n 42\n >>> i = count()\n >>> nth(42, i)\n 42\n >>> nth(42, i)\n 85\n \"\"\"\n return next(islice(iterable, n, None), default)\n\ndef quantify(iterable, pred = bool):\n \"\"\"quantify(iterable, pred = bool) -> int\n\n Count how many times the predicate `pred` is :const:`True`.\n\n Arguments:\n iterable: An iterable.\n pred: A function that given an element from `iterable` returns either\n ``True`` or ``False``.\n\n Returns:\n The number of elements in `iterable` for which `pred` returns\n ``True``.\n\n Examples:\n >>> quantify([1, 2, 3, 4], lambda x: x % 2 == 0)\n 2\n >>> quantify(['1', 'two', '3', '42'], str.isdigit)\n 3\n \"\"\"\n return sum(imap(pred, iterable))\n\ndef pad(iterable, value = None):\n \"\"\"pad(iterable, value = None) -> iterator\n\n Pad an `iterable` with `value`, i.e. returns an iterator whoose elements are\n first the elements of `iterable` then `value` indefinitely.\n\n Arguments:\n iterable: An iterable.\n value: The value to pad with.\n\n Returns:\n An iterator whoose elements are first the elements of `iterable` then\n `value` indefinitely.\n\n Examples:\n >>> take(3, pad([1, 2]))\n [1, 2, None]\n >>> i = pad(iter([1, 2, 3]), 42)\n >>> take(2, i)\n [1, 2]\n >>> take(2, i)\n [3, 42]\n >>> take(2, i)\n [42, 42]\n \"\"\"\n return chain(iterable, repeat(value))\n\ndef cyclen(n, iterable):\n \"\"\"cyclen(n, iterable) -> iterator\n\n Repeats the elements of `iterable` `n` times.\n\n Arguments:\n n(int): The number of times to repeat `iterable`.\n iterable: An iterable.\n\n Returns:\n An iterator whoose elements are the elements of `iterator` repeated `n`\n times.\n\n Examples:\n >>> take(4, cyclen(2, [1, 2]))\n [1, 2, 1, 2]\n >>> list(cyclen(10, []))\n []\n \"\"\"\n return chain.from_iterable(repeat(tuple(iterable), n))\n\ndef dotproduct(x, y):\n \"\"\"dotproduct(x, y) -> int\n\n Computes the dot product of `x` and `y`.\n\n Arguments:\n x(iterable): An iterable.\n x(iterable): An iterable.\n\n Returns:\n The dot product of `x` and `y`, i.e.: ``x[0] * y[0] + x[1] * y[1] + ...``.\n\n Example:\n >>> dotproduct([1, 2, 3], [4, 5, 6])\n ... # 1 * 4 + 2 * 5 + 3 * 6 == 32\n 32\n \"\"\"\n return sum(imap(operator.mul, x, y))\n\ndef flatten(xss):\n \"\"\"flatten(xss) -> iterator\n\n Flattens one level of nesting; when `xss` is an iterable of iterables,\n returns an iterator whoose elements is the concatenation of the elements of\n `xss`.\n\n Arguments:\n xss: An iterable of iterables.\n\n Returns:\n An iterator whoose elements are the concatenation of the iterables in\n `xss`.\n\n Examples:\n >>> list(flatten([[1, 2], [3, 4]]))\n [1, 2, 3, 4]\n >>> take(6, flatten([[43, 42], [41, 40], count()]))\n [43, 42, 41, 40, 0, 1]\n \"\"\"\n return chain.from_iterable(xss)\n\ndef repeat_func(func, *args, **kwargs):\n \"\"\"repeat_func(func, *args, **kwargs) -> iterator\n\n Repeatedly calls `func` with positional arguments `args` and keyword\n arguments `kwargs`. 
If no keyword arguments is given the resulting iterator\n will be computed using only functions from :mod:`itertools` which are very\n fast.\n\n Arguments:\n func(function): The function to call.\n args: Positional arguments.\n kwargs: Keyword arguments.\n\n Returns:\n An iterator whoose elements are the results of calling ``func(*args,\n **kwargs)`` repeatedly.\n\n Examples:\n >>> def f(x):\n ... x[0] += 1\n ... return x[0]\n >>> i = repeat_func(f, [0])\n >>> take(2, i)\n [1, 2]\n >>> take(2, i)\n [3, 4]\n >>> def f(**kwargs):\n ... return kwargs.get('x', 43)\n >>> i = repeat_func(f, x = 42)\n >>> take(2, i)\n [42, 42]\n >>> i = repeat_func(f, 42)\n >>> take(2, i)\n Traceback (most recent call last):\n ...\n TypeError: f() takes exactly 0 arguments (1 given)\n \"\"\"\n if kwargs:\n return starmap(lambda args, kwargs: func(*args, **kwargs),\n repeat((args, kwargs))\n )\n else:\n return starmap(func, repeat(args))\n\ndef pairwise(iterable):\n \"\"\"pairwise(iterable) -> iterator\n\n Arguments:\n iterable: An iterable.\n\n Returns:\n An iterator whoose elements are pairs of neighbouring elements of\n `iterable`.\n\n Examples:\n >>> list(pairwise([1, 2, 3, 4]))\n [(1, 2), (2, 3), (3, 4)]\n >>> i = starmap(operator.add, pairwise(count()))\n >>> take(5, i)\n [1, 3, 5, 7, 9]\n \"\"\"\n a, b = tee(iterable)\n next(b, None)\n return izip(a, b)\n\ndef group(n, iterable, fill_value = None):\n \"\"\"group(n, iterable, fill_value = None) -> iterator\n\n Similar to :func:`pwnlib.util.lists.group`, but returns an iterator and uses\n :mod:`itertools` fast build-in functions.\n\n Arguments:\n n(int): The group size.\n iterable: An iterable.\n fill_value: The value to fill into the remaining slots of the last group\n if the `n` does not divide the number of elements in `iterable`.\n\n Returns:\n An iterator whoose elements are `n`-tuples of the elements of `iterable`.\n\n Examples:\n >>> list(group(2, range(5)))\n [(0, 1), (2, 3), (4, None)]\n >>> take(3, group(2, count()))\n [(0, 1), (2, 3), (4, 5)]\n >>> [''.join(x) for x in group(3, 'ABCDEFG', 'x')]\n ['ABC', 'DEF', 'Gxx']\n \"\"\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue = fill_value, *args)\n\ndef roundrobin(*iterables):\n \"\"\"roundrobin(*iterables)\n\n Take elements from `iterables` in a round-robin fashion.\n\n Arguments:\n *iterables: One or more iterables.\n\n Returns:\n An iterator whoose elements are taken from `iterables` in a round-robin\n fashion.\n\n Examples:\n >>> ''.join(roundrobin('ABC', 'D', 'EF'))\n 'ADEBFC'\n >>> ''.join(take(10, roundrobin('ABC', 'DE', repeat('x'))))\n 'ADxBExCxxx'\n \"\"\"\n # Recipe credited to George Sakkis\n pending = len(iterables)\n nexts = cycle(iter(it).next for it in iterables)\n while pending:\n try:\n for next in nexts:\n yield next()\n except StopIteration:\n pending -= 1\n nexts = cycle(islice(nexts, pending))\n\ndef powerset(iterable, include_empty = True):\n \"\"\"powerset(iterable, include_empty = True) -> iterator\n\n The powerset of an iterable.\n\n Arguments:\n iterable: An iterable.\n include_empty(bool): Whether to include the empty set.\n\n Returns:\n The powerset of `iterable` as an interator of tuples.\n\n Examples:\n >>> list(powerset(range(3)))\n [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]\n >>> list(powerset(range(2), include_empty = False))\n [(0,), (1,), (0, 1)]\n \"\"\"\n s = list(iterable)\n i = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))\n if not include_empty:\n next(i)\n return i\n\ndef unique_everseen(iterable, key = 
None):\n \"\"\"unique_everseen(iterable, key = None) -> iterator\n\n Get unique elements, preserving order. Remember all elements ever seen. If\n `key` is not :const:`None` then for each element ``elm`` in `iterable` the\n element that will be rememberes is ``key(elm)``. Otherwise ``elm`` is\n remembered.\n\n Arguments:\n iterable: An iterable.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_everseen('AAAABBBCCDAABBB'))\n 'ABCD'\n >>> ''.join(unique_everseen('ABBCcAD', str.lower))\n 'ABCD'\n \"\"\"\n seen = set()\n seen_add = seen.add\n if key is None:\n for element in ifilterfalse(seen.__contains__, iterable):\n seen_add(element)\n yield element\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n seen_add(k)\n yield element\n\ndef unique_justseen(iterable, key = None):\n \"\"\"unique_everseen(iterable, key = None) -> iterator\n\n Get unique elements, preserving order. Remember only the elements just seen.\n If `key` is not :const:`None` then for each element ``elm`` in `iterable`\n the element that will be rememberes is ``key(elm)``. Otherwise ``elm`` is\n remembered.\n\n Arguments:\n iterable: An iterable.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_justseen('AAAABBBCCDAABBB'))\n 'ABCDAB'\n >>> ''.join(unique_justseen('ABBCcAD', str.lower))\n 'ABCAD'\n \"\"\"\n return imap(next, imap(operator.itemgetter(1), groupby(iterable, key)))\n\ndef unique_window(iterable, window, key = None):\n \"\"\"unique_everseen(iterable, window, key = None) -> iterator\n\n Get unique elements, preserving order. Remember only the last `window`\n elements seen. If `key` is not :const:`None` then for each element ``elm``\n in `iterable` the element that will be rememberes is ``key(elm)``.\n Otherwise ``elm`` is remembered.\n\n Arguments:\n iterable: An iterable.\n window(int): The number of elements to remember.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_window('AAAABBBCCDAABBB', 6))\n 'ABCDA'\n >>> ''.join(unique_window('ABBCcAD', 5, str.lower))\n 'ABCD'\n >>> ''.join(unique_window('ABBCcAD', 4, str.lower))\n 'ABCAD'\n \"\"\"\n seen = collections.deque(maxlen = window)\n seen_add = seen.append\n if key is None:\n for element in iterable:\n if element not in seen:\n yield element\n seen_add(element)\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n yield element\n seen_add(k)\n\ndef iter_except(func, exception):\n \"\"\"iter_except(func, exception)\n\n Calls `func` repeatedly until an exception is raised. Works like the\n build-in :func:`iter` but uses an exception instead of a sentinel to signal\n the end.\n\n Arguments:\n func: The function to call.\n exception(exception): The exception that signals the end. 
Other\n exceptions will not be caught.\n\n Returns:\n An iterator whoose elements are the results of calling ``func()`` until an\n exception matching `exception` is raised.\n\n Examples:\n >>> s = {1, 2, 3}\n >>> i = iter_except(s.pop, KeyError)\n >>> i.next()\n 1\n >>> i.next()\n 2\n >>> i.next()\n 3\n >>> i.next()\n Traceback (most recent call last):\n ...\n StopIteration\n \"\"\"\n try:\n while True:\n yield func()\n except exception:\n pass\n\ndef random_product(*args, **kwargs):\n \"\"\"random_product(*args, repeat = 1) -> tuple\n\n Arguments:\n args: One or more iterables\n repeat(int): Number of times to repeat `args`.\n\n Returns:\n A random element from ``itertools.product(*args, repeat = repeat)``.\n\n Examples:\n >>> args = (range(2), range(2))\n >>> random_product(*args) in {(0, 0), (0, 1), (1, 0), (1, 1)}\n True\n >>> args = (range(3), range(3), range(3))\n >>> random_product(*args, repeat = 2) in product(*args, repeat = 2)\n True\n \"\"\"\n repeat = kwargs.pop('repeat', 1)\n\n if kwargs != {}:\n raise TypeError('random_product() does not support argument %s' % kwargs.popitem())\n\n pools = map(tuple, args) * repeat\n return tuple(random.choice(pool) for pool in pools)\n\ndef random_permutation(iterable, r = None):\n \"\"\"random_product(iterable, r = None) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the permutation. If :const:`None` select all elements in\n `iterable`.\n\n Returns:\n A random element from ``itertools.permutations(iterable, r = r)``.\n\n Examples:\n >>> random_permutation(range(2)) in {(0, 1), (1, 0)}\n True\n >>> random_permutation(range(10), r = 2) in permutations(range(10), r = 2)\n True\n \"\"\"\n pool = tuple(iterable)\n r = len(pool) if r is None else r\n return tuple(random.sample(pool, r))\n\ndef random_combination(iterable, r):\n \"\"\"random_combination(iterable, r) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the combination.\n\n Returns:\n A random element from ``itertools.combinations(iterable, r = r)``.\n\n Examples:\n >>> random_combination(range(2), 2)\n (0, 1)\n >>> random_combination(range(10), r = 2) in combinations(range(10), r = 2)\n True\n \"\"\"\n pool = tuple(iterable)\n n = len(pool)\n indices = sorted(random.sample(xrange(n), r))\n return tuple(pool[i] for i in indices)\n\ndef random_combination_with_replacement(iterable, r):\n \"\"\"random_combination(iterable, r) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the combination.\n\n Returns:\n A random element from ``itertools.combinations_with_replacement(iterable,\n r = r)``.\n\n Examples:\n >>> cs = {(0, 0), (0, 1), (1, 1)}\n >>> random_combination_with_replacement(range(2), 2) in cs\n True\n >>> i = combinations_with_replacement(range(10), r = 2)\n >>> random_combination_with_replacement(range(10), r = 2) in i\n True\n \"\"\"\n pool = tuple(iterable)\n n = len(pool)\n indices = sorted(random.randrange(n) for i in xrange(r))\n return tuple(pool[i] for i in indices)\n\ndef lookahead(n, iterable):\n \"\"\"lookahead(n, iterable) -> object\n\n Inspects the upcoming element at index `n` without advancing the iterator.\n Raises ``IndexError`` if `iterable` has too few elements.\n\n Arguments:\n n(int): Index of the element to return.\n iterable: An iterable.\n\n Returns:\n The element in `iterable` at index `n`.\n\n Examples:\n >>> i = count()\n >>> lookahead(4, i)\n 4\n >>> i.next()\n 0\n >>> i = count()\n >>> nth(4, i)\n 4\n >>> i.next()\n 5\n >>> lookahead(4, i)\n 10\n \"\"\"\n for value in islice(copy.copy(iterable), n, 
None):\n return value\n raise IndexError(n)\n\ndef lexicographic(alphabet):\n \"\"\"lexicographic(alphabet) -> iterator\n\n The words with symbols in `alphabet`, in lexicographic order (determined by\n the order of `alphabet`).\n\n Arguments:\n alphabet: The alphabet to draw symbols from.\n\n Returns:\n An iterator of the words with symbols in `alphabet`, in lexicographic\n order.\n\n Example:\n >>> take(8, imap(lambda x: ''.join(x), lexicographic('01')))\n ['', '0', '1', '00', '01', '10', '11', '000']\n \"\"\"\n for n in count():\n for e in product(alphabet, repeat = n):\n yield e\n\ndef chained(func):\n \"\"\"chained(func)\n\n A decorator chaining the results of `func`. Useful for generators.\n\n Arguments:\n func(function): The function being decorated.\n\n Returns:\n A generator function whoose elements are the concatenation of the return\n values from ``func(*args, **kwargs)``.\n\n Example:\n >>> @chained\n ... def g():\n ... for x in count():\n ... yield (x, -x)\n >>> take(6, g())\n [0, 0, 1, -1, 2, -2]\n \"\"\"\n def wrapper(*args, **kwargs):\n for xs in func(*args, **kwargs):\n for x in xs:\n yield x\n return wrapper\n\ndef bruteforce(func, alphabet, length, method = 'upto', start = None, databag = None):\n \"\"\"bruteforce(func, alphabet, length, method = 'upto', start = None)\n\n Bruteforce `func` to return :const:`True`. `func` should take a string\n input and return a :func:`bool`. `func` will be called with strings from\n `alphabet` until it returns :const:`True` or the search space has been\n exhausted.\n\n The argument `start` can be used to split the search space, which is useful\n if multiple CPU cores are available.\n\n Arguments:\n func(function): The function to bruteforce.\n alphabet: The alphabet to draw symbols from.\n length: Longest string to try.\n method: If 'upto' try strings of length ``1 .. length``, if 'fixed' only\n try strings of length ``length`` and if 'downfrom' try strings of length\n ``length .. 1``.\n start: a tuple ``(i, N)`` which splits the search space up into `N` pieces\n and starts at piece `i` (1..N). 
:const:`None` is equivalent to ``(1, 1)``.\n\n Returns:\n A string `s` such that ``func(s)`` returns :const:`True` or :const:`None`\n if the search space was exhausted.\n\n Example:\n >>> bruteforce(lambda x: x == 'hello', string.lowercase, length = 10)\n 'hello'\n >>> bruteforce(lambda x: x == 'hello', 'hllo', 5) is None\n True\n \"\"\"\n\n if method == 'upto' and length > 1:\n iterator = product(alphabet, repeat = 1)\n for i in xrange(2, length + 1):\n iterator = chain(iterator, product(alphabet, repeat = i))\n\n elif method == 'downfrom' and length > 1:\n iterator = product(alphabet, repeat = length)\n for i in xrange(length - 1, 1, -1):\n iterator = chain(iterator, product(alphabet, repeat = i))\n\n elif method == 'fixed':\n iterator = product(alphabet, repeat = length)\n\n else:\n raise TypeError('bruteforce(): unknown method')\n\n if method == 'fixed':\n total_iterations = len(alphabet) ** length\n else:\n total_iterations = (len(alphabet) ** (length + 1) / (len(alphabet) - 1)) - 1\n\n if start is not None:\n i, N = start\n if i > N:\n raise ValueError('bruteforce(): invalid starting point')\n\n i -= 1\n chunk_size = total_iterations / N\n rest = total_iterations % N\n starting_point = 0\n\n for chunk in range(N):\n if chunk >= i:\n break\n if chunk <= rest:\n starting_point += chunk_size + 1\n else:\n starting_point += chunk_size\n\n if rest >= i:\n chunk_size += 1\n\n total_iterations = chunk_size\n\n h = log.waitfor('Bruteforcing')\n cur_iteration = 0\n if start != None:\n consume(i, iterator)\n for e in iterator:\n cur = ''.join(e)\n cur_iteration += 1\n if cur_iteration % 2000 == 0:\n progress = 100.0 * cur_iteration / total_iterations\n h.status('Trying \"%s\", %0.3f%%' % (cur, progress))\n if databag:\n databag[\"current_item\"] = cur\n databag[\"items_done\"] = cur_iteration\n databag[\"items_total\"] = total_iterations\n res = func(cur)\n if res:\n h.success('Found key: \"%s\"' % cur)\n return cur\n if start != None:\n consume(N - 1, iterator)\n\n h.failure('No matches found')\n\n\n\ndef mbruteforce(func, alphabet, length, method = 'upto', start = None, threads = None):\n \"\"\"mbruteforce(func, alphabet, length, method = 'upto', start = None, threads = None)\n\n Same functionality as bruteforce(), but multithreaded.\n\n Arguments:\n func, alphabet, length, method, start: same as for bruteforce()\n threads: Amount of threads to spawn, default is the amount of cores.\n \"\"\"\n\n def bruteforcewrap(func, alphabet, length, method, start, databag):\n oldloglevel = context.log_level\n context.log_level = 'critical'\n res = bruteforce(func, alphabet, length, method=method, start=start, databag=databag)\n context.log_level = oldloglevel\n databag[\"result\"] = res\n\n if start == None:\n start = (1, 1)\n\n if threads == None:\n try:\n threads = multiprocessing.cpu_count()\n except NotImplementedError:\n threads = 1\n\n h = log.waitfor('MBruteforcing')\n processes = [None] * threads\n shareddata = [None] * threads\n\n (i2, N2) = start\n totalchunks = threads * N2\n\n for i in range(threads):\n shareddata[i] = multiprocessing.Manager().dict()\n shareddata[i]['result'] = None\n shareddata[i]['current_item'] = \"\"\n shareddata[i]['items_done'] = 0\n shareddata[i]['items_total'] = 0\n\n chunkid = (i2-1) + (i * N2) + 1\n\n processes[i] = multiprocessing.Process(target=bruteforcewrap,\n args=(func, alphabet, length, method, (chunkid, totalchunks),\n shareddata[i]))\n processes[i].start()\n\n done = False\n\n while not done:\n # log status\n current_item_list = \",\".join([\"\\\"%s\\\"\" 
% x[\"current_item\"]\n for x in shareddata if x != None])\n items_done = sum([x[\"items_done\"] for x in shareddata if x != None])\n items_total = sum([x[\"items_total\"] for x in shareddata if x != None])\n\n progress = 100.0 * items_done / items_total if items_total != 0 else 0.0\n\n h.status('Trying %s -- %0.3f%%' % (current_item_list, progress))\n\n # handle finished threads\n for i in range(threads):\n if processes[i] and processes[i].exitcode != None:\n # thread has terminated\n res = shareddata[i][\"result\"]\n processes[i].join()\n processes[i] = None\n\n # if successful, kill all other threads and return success\n if res != None:\n for i in range(threads):\n if processes[i] != None:\n processes[i].terminate()\n processes[i].join()\n processes[i] = None\n h.success('Found key: \"%s\"' % res)\n return res\n\n if all([x == None for x in processes]):\n done = True\n time.sleep(0.3)\n h.failure('No matches found')\n",
"path": "pwnlib/util/iters.py"
}
] | [
{
"content": "\"\"\"\nThis module includes and extends the standard module :mod:`itertools`.\n\"\"\"\n\n__all__ = [\n 'bruteforce' ,\n 'mbruteforce' ,\n 'chained' ,\n 'consume' ,\n 'cyclen' ,\n 'dotproduct' ,\n 'flatten' ,\n 'group' ,\n 'iter_except' ,\n 'lexicographic' ,\n 'lookahead' ,\n 'nth' ,\n 'pad' ,\n 'pairwise' ,\n 'powerset' ,\n 'quantify' ,\n 'random_combination' ,\n 'random_combination_with_replacement' ,\n 'random_permutation' ,\n 'random_product' ,\n 'repeat_func' ,\n 'roundrobin' ,\n 'tabulate' ,\n 'take' ,\n 'unique_everseen' ,\n 'unique_justseen' ,\n 'unique_window' ,\n # these are re-exported from itertools\n 'chain' ,\n 'combinations' ,\n 'combinations_with_replacement' ,\n 'compress' ,\n 'count' ,\n 'cycle' ,\n 'dropwhile' ,\n 'groupby' ,\n 'ifilter' ,\n 'ifilterfalse' ,\n 'imap' ,\n 'islice' ,\n 'izip' ,\n 'izip_longest' ,\n 'permutations' ,\n 'product' ,\n 'repeat' ,\n 'starmap' ,\n 'takewhile' ,\n 'tee'\n]\n\nimport collections\nimport copy\nimport multiprocessing\nimport operator\nimport random\nimport time\nfrom itertools import *\n\nfrom ..context import context\nfrom ..log import getLogger\n\nlog = getLogger(__name__)\n\ndef take(n, iterable):\n \"\"\"take(n, iterable) -> list\n\n Returns first `n` elements of `iterable`. If `iterable` is a iterator it\n will be advanced.\n\n Arguments:\n n(int): Number of elements to take.\n iterable: An iterable.\n\n Returns:\n A list of the first `n` elements of `iterable`. If there are fewer than\n `n` elements in `iterable` they will all be returned.\n\n Examples:\n >>> take(2, range(10))\n [0, 1]\n >>> i = count()\n >>> take(2, i)\n [0, 1]\n >>> take(2, i)\n [2, 3]\n >>> take(9001, [1, 2, 3])\n [1, 2, 3]\n \"\"\"\n return list(islice(iterable, n))\n\ndef tabulate(func, start = 0):\n \"\"\"tabulate(func, start = 0) -> iterator\n\n Arguments:\n func(function): The function to tabulate over.\n start(int): Number to start on.\n\n Returns:\n An iterator with the elements ``func(start), func(start + 1), ...``.\n\n Examples:\n >>> take(2, tabulate(str))\n ['0', '1']\n >>> take(5, tabulate(lambda x: x**2, start = 1))\n [1, 4, 9, 16, 25]\n \"\"\"\n return imap(func, count(start))\n\ndef consume(n, iterator):\n \"\"\"consume(n, iterator)\n\n Advance the iterator `n` steps ahead. If `n is :const:`None`, consume\n everything.\n\n Arguments:\n n(int): Number of elements to consume.\n iterator(iterator): An iterator.\n\n Returns:\n :const:`None`.\n\n Examples:\n >>> i = count()\n >>> consume(5, i)\n >>> i.next()\n 5\n >>> i = iter([1, 2, 3, 4, 5])\n >>> consume(2, i)\n >>> list(i)\n [3, 4, 5]\n \"\"\"\n # Use functions that consume iterators at C speed.\n if n is None:\n # feed the entire iterator into a zero-length deque\n collections.deque(iterator, maxlen = 0)\n else:\n # advance to the empty slice starting at position n\n next(islice(iterator, n, n), None)\n\ndef nth(n, iterable, default = None):\n \"\"\"nth(n, iterable, default = None) -> object\n\n Returns the element at index `n` in `iterable`. 
If `iterable` is a\n iterator it will be advanced.\n\n Arguments:\n n(int): Index of the element to return.\n iterable: An iterable.\n default(objext): A default value.\n\n Returns:\n The element at index `n` in `iterable` or `default` if `iterable` has too\n few elements.\n\n Examples:\n >>> nth(2, [0, 1, 2, 3])\n 2\n >>> nth(2, [0, 1], 42)\n 42\n >>> i = count()\n >>> nth(42, i)\n 42\n >>> nth(42, i)\n 85\n \"\"\"\n return next(islice(iterable, n, None), default)\n\ndef quantify(iterable, pred = bool):\n \"\"\"quantify(iterable, pred = bool) -> int\n\n Count how many times the predicate `pred` is :const:`True`.\n\n Arguments:\n iterable: An iterable.\n pred: A function that given an element from `iterable` returns either\n ``True`` or ``False``.\n\n Returns:\n The number of elements in `iterable` for which `pred` returns\n ``True``.\n\n Examples:\n >>> quantify([1, 2, 3, 4], lambda x: x % 2 == 0)\n 2\n >>> quantify(['1', 'two', '3', '42'], str.isdigit)\n 3\n \"\"\"\n return sum(imap(pred, iterable))\n\ndef pad(iterable, value = None):\n \"\"\"pad(iterable, value = None) -> iterator\n\n Pad an `iterable` with `value`, i.e. returns an iterator whoose elements are\n first the elements of `iterable` then `value` indefinitely.\n\n Arguments:\n iterable: An iterable.\n value: The value to pad with.\n\n Returns:\n An iterator whoose elements are first the elements of `iterable` then\n `value` indefinitely.\n\n Examples:\n >>> take(3, pad([1, 2]))\n [1, 2, None]\n >>> i = pad(iter([1, 2, 3]), 42)\n >>> take(2, i)\n [1, 2]\n >>> take(2, i)\n [3, 42]\n >>> take(2, i)\n [42, 42]\n \"\"\"\n return chain(iterable, repeat(value))\n\ndef cyclen(n, iterable):\n \"\"\"cyclen(n, iterable) -> iterator\n\n Repeats the elements of `iterable` `n` times.\n\n Arguments:\n n(int): The number of times to repeat `iterable`.\n iterable: An iterable.\n\n Returns:\n An iterator whoose elements are the elements of `iterator` repeated `n`\n times.\n\n Examples:\n >>> take(4, cyclen(2, [1, 2]))\n [1, 2, 1, 2]\n >>> list(cyclen(10, []))\n []\n \"\"\"\n return chain.from_iterable(repeat(tuple(iterable), n))\n\ndef dotproduct(x, y):\n \"\"\"dotproduct(x, y) -> int\n\n Computes the dot product of `x` and `y`.\n\n Arguments:\n x(iterable): An iterable.\n x(iterable): An iterable.\n\n Returns:\n The dot product of `x` and `y`, i.e.: ``x[0] * y[0] + x[1] * y[1] + ...``.\n\n Example:\n >>> dotproduct([1, 2, 3], [4, 5, 6])\n ... # 1 * 4 + 2 * 5 + 3 * 6 == 32\n 32\n \"\"\"\n return sum(imap(operator.mul, x, y))\n\ndef flatten(xss):\n \"\"\"flatten(xss) -> iterator\n\n Flattens one level of nesting; when `xss` is an iterable of iterables,\n returns an iterator whoose elements is the concatenation of the elements of\n `xss`.\n\n Arguments:\n xss: An iterable of iterables.\n\n Returns:\n An iterator whoose elements are the concatenation of the iterables in\n `xss`.\n\n Examples:\n >>> list(flatten([[1, 2], [3, 4]]))\n [1, 2, 3, 4]\n >>> take(6, flatten([[43, 42], [41, 40], count()]))\n [43, 42, 41, 40, 0, 1]\n \"\"\"\n return chain.from_iterable(xss)\n\ndef repeat_func(func, *args, **kwargs):\n \"\"\"repeat_func(func, *args, **kwargs) -> iterator\n\n Repeatedly calls `func` with positional arguments `args` and keyword\n arguments `kwargs`. 
If no keyword arguments is given the resulting iterator\n will be computed using only functions from :mod:`itertools` which are very\n fast.\n\n Arguments:\n func(function): The function to call.\n args: Positional arguments.\n kwargs: Keyword arguments.\n\n Returns:\n An iterator whoose elements are the results of calling ``func(*args,\n **kwargs)`` repeatedly.\n\n Examples:\n >>> def f(x):\n ... x[0] += 1\n ... return x[0]\n >>> i = repeat_func(f, [0])\n >>> take(2, i)\n [1, 2]\n >>> take(2, i)\n [3, 4]\n >>> def f(**kwargs):\n ... return kwargs.get('x', 43)\n >>> i = repeat_func(f, x = 42)\n >>> take(2, i)\n [42, 42]\n >>> i = repeat_func(f, 42)\n >>> take(2, i)\n Traceback (most recent call last):\n ...\n TypeError: f() takes exactly 0 arguments (1 given)\n \"\"\"\n if kwargs:\n return starmap(lambda args, kwargs: func(*args, **kwargs),\n repeat((args, kwargs))\n )\n else:\n return starmap(func, repeat(args))\n\ndef pairwise(iterable):\n \"\"\"pairwise(iterable) -> iterator\n\n Arguments:\n iterable: An iterable.\n\n Returns:\n An iterator whoose elements are pairs of neighbouring elements of\n `iterable`.\n\n Examples:\n >>> list(pairwise([1, 2, 3, 4]))\n [(1, 2), (2, 3), (3, 4)]\n >>> i = starmap(operator.add, pairwise(count()))\n >>> take(5, i)\n [1, 3, 5, 7, 9]\n \"\"\"\n a, b = tee(iterable)\n next(b, None)\n return izip(a, b)\n\ndef group(n, iterable, fill_value = None):\n \"\"\"group(n, iterable, fill_value = None) -> iterator\n\n Similar to :func:`pwnlib.util.lists.group`, but returns an iterator and uses\n :mod:`itertools` fast build-in functions.\n\n Arguments:\n n(int): The group size.\n iterable: An iterable.\n fill_value: The value to fill into the remaining slots of the last group\n if the `n` does not divide the number of elements in `iterable`.\n\n Returns:\n An iterator whoose elements are `n`-tuples of the elements of `iterable`.\n\n Examples:\n >>> list(group(2, range(5)))\n [(0, 1), (2, 3), (4, None)]\n >>> take(3, group(2, count()))\n [(0, 1), (2, 3), (4, 5)]\n >>> [''.join(x) for x in group(3, 'ABCDEFG', 'x')]\n ['ABC', 'DEF', 'Gxx']\n \"\"\"\n args = [iter(iterable)] * n\n return izip_longest(fillvalue = fill_value, *args)\n\ndef roundrobin(*iterables):\n \"\"\"roundrobin(*iterables)\n\n Take elements from `iterables` in a round-robin fashion.\n\n Arguments:\n *iterables: One or more iterables.\n\n Returns:\n An iterator whoose elements are taken from `iterables` in a round-robin\n fashion.\n\n Examples:\n >>> ''.join(roundrobin('ABC', 'D', 'EF'))\n 'ADEBFC'\n >>> ''.join(take(10, roundrobin('ABC', 'DE', repeat('x'))))\n 'ADxBExCxxx'\n \"\"\"\n # Recipe credited to George Sakkis\n pending = len(iterables)\n nexts = cycle(iter(it).next for it in iterables)\n while pending:\n try:\n for next in nexts:\n yield next()\n except StopIteration:\n pending -= 1\n nexts = cycle(islice(nexts, pending))\n\ndef powerset(iterable, include_empty = True):\n \"\"\"powerset(iterable, include_empty = True) -> iterator\n\n The powerset of an iterable.\n\n Arguments:\n iterable: An iterable.\n include_empty(bool): Whether to include the empty set.\n\n Returns:\n The powerset of `iterable` as an interator of tuples.\n\n Examples:\n >>> list(powerset(range(3)))\n [(), (0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]\n >>> list(powerset(range(2), include_empty = False))\n [(0,), (1,), (0, 1)]\n \"\"\"\n s = list(iterable)\n i = chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))\n if not include_empty:\n next(i)\n return i\n\ndef unique_everseen(iterable, key = 
None):\n \"\"\"unique_everseen(iterable, key = None) -> iterator\n\n Get unique elements, preserving order. Remember all elements ever seen. If\n `key` is not :const:`None` then for each element ``elm`` in `iterable` the\n element that will be rememberes is ``key(elm)``. Otherwise ``elm`` is\n remembered.\n\n Arguments:\n iterable: An iterable.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_everseen('AAAABBBCCDAABBB'))\n 'ABCD'\n >>> ''.join(unique_everseen('ABBCcAD', str.lower))\n 'ABCD'\n \"\"\"\n seen = set()\n seen_add = seen.add\n if key is None:\n for element in ifilterfalse(seen.__contains__, iterable):\n seen_add(element)\n yield element\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n seen_add(k)\n yield element\n\ndef unique_justseen(iterable, key = None):\n \"\"\"unique_everseen(iterable, key = None) -> iterator\n\n Get unique elements, preserving order. Remember only the elements just seen.\n If `key` is not :const:`None` then for each element ``elm`` in `iterable`\n the element that will be rememberes is ``key(elm)``. Otherwise ``elm`` is\n remembered.\n\n Arguments:\n iterable: An iterable.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_justseen('AAAABBBCCDAABBB'))\n 'ABCDAB'\n >>> ''.join(unique_justseen('ABBCcAD', str.lower))\n 'ABCAD'\n \"\"\"\n return imap(next, imap(operator.itemgetter(1), groupby(iterable, key)))\n\ndef unique_window(iterable, window, key = None):\n \"\"\"unique_everseen(iterable, window, key = None) -> iterator\n\n Get unique elements, preserving order. Remember only the last `window`\n elements seen. If `key` is not :const:`None` then for each element ``elm``\n in `iterable` the element that will be rememberes is ``key(elm)``.\n Otherwise ``elm`` is remembered.\n\n Arguments:\n iterable: An iterable.\n window(int): The number of elements to remember.\n key: A function to map over each element in `iterable` before remembering\n it. Setting to :const:`None` is equivalent to the identity function.\n\n Returns:\n An iterator of the unique elements in `iterable`.\n\n Examples:\n >>> ''.join(unique_window('AAAABBBCCDAABBB', 6))\n 'ABCDA'\n >>> ''.join(unique_window('ABBCcAD', 5, str.lower))\n 'ABCD'\n >>> ''.join(unique_window('ABBCcAD', 4, str.lower))\n 'ABCAD'\n \"\"\"\n seen = collections.deque(maxlen = window)\n seen_add = seen.append\n if key is None:\n for element in iterable:\n if element not in seen:\n yield element\n seen_add(element)\n else:\n for element in iterable:\n k = key(element)\n if k not in seen:\n yield element\n seen_add(k)\n\ndef iter_except(func, exception):\n \"\"\"iter_except(func, exception)\n\n Calls `func` repeatedly until an exception is raised. Works like the\n build-in :func:`iter` but uses an exception instead of a sentinel to signal\n the end.\n\n Arguments:\n func: The function to call.\n exception(exception): The exception that signals the end. 
Other\n exceptions will not be caught.\n\n Returns:\n An iterator whoose elements are the results of calling ``func()`` until an\n exception matching `exception` is raised.\n\n Examples:\n >>> s = {1, 2, 3}\n >>> i = iter_except(s.pop, KeyError)\n >>> i.next()\n 1\n >>> i.next()\n 2\n >>> i.next()\n 3\n >>> i.next()\n Traceback (most recent call last):\n ...\n StopIteration\n \"\"\"\n try:\n while True:\n yield func()\n except exception:\n pass\n\ndef random_product(*args, **kwargs):\n \"\"\"random_product(*args, repeat = 1) -> tuple\n\n Arguments:\n args: One or more iterables\n repeat(int): Number of times to repeat `args`.\n\n Returns:\n A random element from ``itertools.product(*args, repeat = repeat)``.\n\n Examples:\n >>> args = (range(2), range(2))\n >>> random_product(*args) in {(0, 0), (0, 1), (1, 0), (1, 1)}\n True\n >>> args = (range(3), range(3), range(3))\n >>> random_product(*args, repeat = 2) in product(*args, repeat = 2)\n True\n \"\"\"\n repeat = kwargs.pop('repeat', 1)\n\n if kwargs != {}:\n raise TypeError('random_product() does not support argument %s' % kwargs.popitem())\n\n pools = map(tuple, args) * repeat\n return tuple(random.choice(pool) for pool in pools)\n\ndef random_permutation(iterable, r = None):\n \"\"\"random_product(iterable, r = None) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the permutation. If :const:`None` select all elements in\n `iterable`.\n\n Returns:\n A random element from ``itertools.permutations(iterable, r = r)``.\n\n Examples:\n >>> random_permutation(range(2)) in {(0, 1), (1, 0)}\n True\n >>> random_permutation(range(10), r = 2) in permutations(range(10), r = 2)\n True\n \"\"\"\n pool = tuple(iterable)\n r = len(pool) if r is None else r\n return tuple(random.sample(pool, r))\n\ndef random_combination(iterable, r):\n \"\"\"random_combination(iterable, r) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the combination.\n\n Returns:\n A random element from ``itertools.combinations(iterable, r = r)``.\n\n Examples:\n >>> random_combination(range(2), 2)\n (0, 1)\n >>> random_combination(range(10), r = 2) in combinations(range(10), r = 2)\n True\n \"\"\"\n pool = tuple(iterable)\n n = len(pool)\n indices = sorted(random.sample(xrange(n), r))\n return tuple(pool[i] for i in indices)\n\ndef random_combination_with_replacement(iterable, r):\n \"\"\"random_combination(iterable, r) -> tuple\n\n Arguments:\n iterable: An iterable.\n r(int): Size of the combination.\n\n Returns:\n A random element from ``itertools.combinations_with_replacement(iterable,\n r = r)``.\n\n Examples:\n >>> cs = {(0, 0), (0, 1), (1, 1)}\n >>> random_combination_with_replacement(range(2), 2) in cs\n True\n >>> i = combinations_with_replacement(range(10), r = 2)\n >>> random_combination_with_replacement(range(10), r = 2) in i\n True\n \"\"\"\n pool = tuple(iterable)\n n = len(pool)\n indices = sorted(random.randrange(n) for i in xrange(r))\n return tuple(pool[i] for i in indices)\n\ndef lookahead(n, iterable):\n \"\"\"lookahead(n, iterable) -> object\n\n Inspects the upcoming element at index `n` without advancing the iterator.\n Raises ``IndexError`` if `iterable` has too few elements.\n\n Arguments:\n n(int): Index of the element to return.\n iterable: An iterable.\n\n Returns:\n The element in `iterable` at index `n`.\n\n Examples:\n >>> i = count()\n >>> lookahead(4, i)\n 4\n >>> i.next()\n 0\n >>> i = count()\n >>> nth(4, i)\n 4\n >>> i.next()\n 5\n >>> lookahead(4, i)\n 10\n \"\"\"\n for value in islice(copy.copy(iterable), n, 
None):\n return value\n raise IndexError(n)\n\ndef lexicographic(alphabet):\n \"\"\"lexicographic(alphabet) -> iterator\n\n The words with symbols in `alphabet`, in lexicographic order (determined by\n the order of `alphabet`).\n\n Arguments:\n alphabet: The alphabet to draw symbols from.\n\n Returns:\n An iterator of the words with symbols in `alphabet`, in lexicographic\n order.\n\n Example:\n >>> take(8, imap(lambda x: ''.join(x), lexicographic('01')))\n ['', '0', '1', '00', '01', '10', '11', '000']\n \"\"\"\n for n in count():\n for e in product(alphabet, repeat = n):\n yield e\n\ndef chained(func):\n \"\"\"chained(func)\n\n A decorator chaining the results of `func`. Useful for generators.\n\n Arguments:\n func(function): The function being decorated.\n\n Returns:\n A generator function whoose elements are the concatenation of the return\n values from ``func(*args, **kwargs)``.\n\n Example:\n >>> @chained\n ... def g():\n ... for x in count():\n ... yield (x, -x)\n >>> take(6, g())\n [0, 0, 1, -1, 2, -2]\n \"\"\"\n def wrapper(*args, **kwargs):\n for xs in func(*args, **kwargs):\n for x in xs:\n yield x\n return wrapper\n\ndef bruteforce(func, alphabet, length, method = 'upto', start = None, databag = None):\n \"\"\"bruteforce(func, alphabet, length, method = 'upto', start = None)\n\n Bruteforce `func` to return :const:`True`. `func` should take a string\n input and return a :func:`bool`. `func` will be called with strings from\n `alphabet` until it returns :const:`True` or the search space has been\n exhausted.\n\n The argument `start` can be used to split the search space, which is useful\n if multiple CPU cores are available.\n\n Arguments:\n func(function): The function to bruteforce.\n alphabet: The alphabet to draw symbols from.\n length: Longest string to try.\n method: If 'upto' try strings of length ``1 .. length``, if 'fixed' only\n try strings of length ``length`` and if 'downfrom' try strings of length\n ``length .. 1``.\n start: a tuple ``(i, N)`` which splits the search space up into `N` pieces\n and starts at piece `i` (1..N). 
:const:`None` is equivalent to ``(1, 1)``.\n\n Returns:\n A string `s` such that ``func(s)`` returns :const:`True` or :const:`None`\n if the search space was exhausted.\n\n Example:\n >>> bruteforce(lambda x: x == 'hello', string.lowercase, length = 10)\n 'hello'\n >>> bruteforce(lambda x: x == 'hello', 'hllo', 5) is None\n True\n \"\"\"\n\n if method == 'upto' and length > 1:\n iterator = product(alphabet, repeat = 1)\n for i in xrange(2, length + 1):\n iterator = chain(iterator, product(alphabet, repeat = i))\n\n elif method == 'downfrom' and length > 1:\n iterator = product(alphabet, repeat = length)\n for i in xrange(length - 1, 1, -1):\n iterator = chain(iterator, product(alphabet, repeat = i))\n\n elif method == 'fixed':\n iterator = product(alphabet, repeat = length)\n\n else:\n raise TypeError('bruteforce(): unknown method')\n\n if method == 'fixed':\n total_iterations = len(alphabet) ** length\n else:\n total_iterations = (len(alphabet) ** (length + 1) / (len(alphabet) - 1)) - 1\n\n if start is not None:\n i, N = start\n if i > N:\n raise ValueError('bruteforce(): invalid starting point')\n\n i -= 1\n chunk_size = total_iterations / N\n rest = total_iterations % N\n starting_point = 0\n\n for chunk in range(N):\n if chunk >= i:\n break\n if chunk <= rest:\n starting_point += chunk_size + 1\n else:\n starting_point += chunk_size\n\n if rest >= i:\n chunk_size += 1\n\n total_iterations = chunk_size\n\n h = log.waitfor('Bruteforcing')\n cur_iteration = 0\n if start != None:\n consume(i, iterator)\n for e in iterator:\n cur = ''.join(e)\n cur_iteration += 1\n if cur_iteration % 2000 == 0:\n progress = 100.0 * cur_iteration / total_iterations\n h.status('Trying \"%s\", %0.3f%%' % (cur, progress))\n if databag:\n databag[\"current_item\"] = cur\n databag[\"items_done\"] = cur_iteration\n databag[\"items_total\"] = total_iterations\n res = func(cur)\n if res:\n h.success('Found key: \"%s\"' % cur)\n return cur\n if start != None:\n consume(N - 1, iterator)\n\n h.failure('No matches found')\n\n\n\ndef mbruteforce(func, alphabet, length, method = 'upto', start = None, threads = None):\n \"\"\"mbruteforce(func, alphabet, length, method = 'upto', start = None, threads = None)\n\n Same functionality as bruteforce(), but multithreaded.\n\n Arguments:\n func, alphabet, length, method, start: same as for bruteforce()\n threads: Amount of threads to spawn, default is the amount of cores.\n \"\"\"\n\n def bruteforcewrap(func, alphabet, length, method, start, databag):\n oldloglevel = context.log_level\n context.log_level = 'critical'\n res = bruteforce(func, alphabet, length, method=method, start=start, databag=databag)\n context.log_level = oldloglevel\n databag[\"result\"] = res\n\n if start == None:\n start = (1, 1)\n\n if threads == None:\n try:\n threads = multiprocessing.cpu_count()\n except NotImplementedError:\n threads = 1\n\n h = log.waitfor('MBruteforcing')\n processes = [None] * threads\n shareddata = [None] * threads\n\n (i2, N2) = start\n totalchunks = threads * N2\n\n for i in range(threads):\n shareddata[i] = multiprocessing.Manager().dict()\n shareddata[i]['result'] = None\n shareddata[i]['current_item'] = \"\"\n shareddata[i]['items_done'] = 0\n shareddata[i]['items_total'] = 0\n\n chunkid = (i2-1) + (i * N2) + 1\n\n processes[i] = multiprocessing.Process(target=bruteforcewrap,\n args=(func, alphabet, length, method, (chunkid, totalchunks),\n shareddata[i]))\n processes[i].start()\n\n done = False\n\n while not done:\n # log status\n current_item_list = \",\".join([\"\\\"%s\\\"\" 
% x[\"current_item\"]\n for x in shareddata if x != None])\n items_done = sum([x[\"items_done\"] for x in shareddata if x != None])\n items_total = sum([x[\"items_total\"] for x in shareddata if x != None])\n\n progress = 100.0 * items_done / items_total if items_total != 0 else 0.0\n\n h.status('Trying %s -- %0.3f%%' % (current_item_list, progress))\n\n # handle finished threads\n for i in range(threads):\n if processes[i] and processes[i].exitcode != None:\n # thread has terminated\n res = shareddata[i][\"result\"]\n processes[i].join()\n processes[i] = None\n\n # if successful, kill all other threads and return success\n if res != None:\n for i in range(threads):\n if processes[i] != None:\n processes[i].terminate()\n processes[i].join()\n processes[i] = None\n h.success('Found key: \"%s\"' % res)\n return res\n\n if all([x == None for x in processes]):\n done = True\n time.sleep(0.3)\n h.failure('No matches found')\n",
"path": "pwnlib/util/iters.py"
}
] | diff --git a/pwnlib/util/iters.py b/pwnlib/util/iters.py
index 835ef78ce..b5a68bf49 100644
--- a/pwnlib/util/iters.py
+++ b/pwnlib/util/iters.py
@@ -58,8 +58,10 @@
import multiprocessing
import operator
import random
+import time
from itertools import *
+from ..context import context
from ..log import getLogger
log = getLogger(__name__)
|
robocorp__rpaframework-550 | `RPA.JSON` RecursionError: maximum recursion depth exceeded
This error currently breaks our [Certificate level 3](https://robocorp.com/docs/courses/work-data-management/validate-business-data) course with `rpaframework==15.0.0`.
It works correctly with `rpaframework==14.0.0`.
```
*** Keywords ***
Validate traffic data
[Arguments] ${traffic_data}
${country}= Get Value From Json ${traffic_data} $.country
${valid}= Evaluate len("${country}") == 3
RETURN ${valid}
```
Example content of `${traffic_data}`:
```
{
"country": "ISR",
"year": 2019,
"rate": 3.90874
}
```
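For context, the keyword above boils down to a single call into `RPA.JSON`. A minimal Python sketch of that call, using the `get_value_from_json` method defined in the library file below (equivalent to the `Get Value From Json` keyword), looks like this:
```python
# Minimal sketch of the lookup reported in the issue.
from RPA.JSON import JSON

traffic_data = {"country": "ISR", "year": 2019, "rate": 3.90874}

# Reported behaviour: rpaframework==14.0.0 returns "ISR",
# while rpaframework==15.0.0 raises RecursionError on the same call.
country = JSON().get_value_from_json(traffic_data, "$.country")
print(country, len(country) == 3)
```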
| [
{
"content": "import json\nimport logging\nfrom typing import Any, Callable, Dict, Hashable, List, Optional, Union\n\nfrom jsonpath_ng import Index, Fields\nfrom jsonpath_ng.ext.filter import Filter\nfrom jsonpath_ng.ext.parser import ExtentedJsonPathParser\n\nfrom robot.api.deco import keyword\n\n\nJSONValue = Optional[Union[str, int, float, bool]]\nJSONType = Union[Dict[Hashable, \"JSONType\"], List[\"JSONType\"], JSONValue]\n\n\nclass RPAFilter(Filter):\n \"\"\"Extends default filtering JSON path logic.\"\"\"\n\n def filter(self, fn: Callable[[JSONType], bool], data: JSONType) -> JSONType:\n for datum in reversed(self.find(data)):\n index_obj = datum.path\n if isinstance(data, dict):\n index_obj.index = list(data)[index_obj.index]\n index_obj.filter(fn, data)\n return data\n\n\nclass RPAJsonPathParser(ExtentedJsonPathParser):\n \"\"\"Extends the default JSON path parser found in `jsonpath_ng.ext`.\"\"\"\n\n def p_filter(self, p):\n \"\"\"filter : '?' expressions\"\"\"\n p[0] = RPAFilter(p[2])\n\n\ndef parse(path: str, debug: bool = False) -> RPAJsonPathParser:\n return RPAJsonPathParser(debug=debug).parse(path)\n\n\nclass JSON:\n r\"\"\"`JSON` is a library for manipulating `JSON`_ files and strings.\n\n JSON is a common data interchange format inspired by a subset of\n the Javascript programming language, but these days is a de facto\n standard in modern web APIs and is language agnostic.\n\n .. _JSON: http://json.org/\n\n Serialization\n =============\n\n The term `serialization` refers to the process of converting\n Robot Framework or Python types to JSON or the other way around.\n\n Basic types can be easily converted between the domains,\n and the mapping is as follows:\n\n ============= =======\n JSON Python\n ============= =======\n object dict\n array list\n string str\n number (int) int\n number (real) float\n true True\n false False\n null None\n ============= =======\n\n About JSONPath\n ==============\n\n Reading and writing values from/to JSON serializable objects is done\n using `JSONPath`_. It's a syntax designed to quickly and easily refer to\n specific elements in a JSON structure. The specific flavor used in this\n library is based on `jsonpath-ng`_.\n\n Compared to Python's normal dictionary access, JSONPath expressions can\n target multiple elements through features such as conditionals and wildcards,\n which can simplify many JSON-related operations. It's analogous to XPath\n for XML structures.\n\n .. _JSONPath: http://goessner.net/articles/JsonPath/\n .. _jsonpath-ng: https://pypi.org/project/jsonpath-ng/#description\n\n Syntax example\n --------------\n\n For this example consider the following structure:\n\n .. code-block:: json\n\n {\n \"clients\": [\n {\n \"name\": \"Johnny Example\",\n \"email\": \"[email protected]\",\n \"orders\": [\n {\"address\": \"Streetroad 123\", \"price\": 103.20},\n {\"address\": \"Streetroad 123\", \"price\": 98.99}\n ]\n },\n {\n \"name\": \"Jane Example\",\n \"email\": \"[email protected]\",\n \"orders\": [\n {\"address\": \"Waypath 321\", \"price\": 22.00},\n {\"address\": \"Streetroad 123\", \"price\": 2330.01}\n ]\n }\n ]\n }\n\n In the simplest case JSONPath can replace nested access:\n\n .. code-block:: robotframework\n\n *** Tasks ***\n Nested access\n # First order of first client, with direct dictionary access\n ${value}= Set variable ${json}[\"clients\"][0][\"orders\"][0]\n\n # JSONPath access\n ${value}= Get value from JSON ${json} $.clients[0].orders[0]\n\n But the power comes from complicated expressions:\n\n .. 
code-block:: robotframework\n\n *** Tasks ***\n Complicated expressions\n # Find delivery addresses for all orders\n ${prices}= Get values from JSON $..address\n\n # Find orders that cost over 100\n ${expensives}= Get values from JSON $..orders[?(@.price>100)]\n\n\n Supported Expressions\n ---------------------\n\n The supported syntax elements are:\n\n ======================= ===========\n Element Description\n ======================= ===========\n ``$`` Root object/element\n ``@`` Current object/element inside expressions\n ``.`` or ``[]`` Child operator\n ``..`` Recursive descendant operator\n ````parent```` Parent operator, see `functions`_\n ``*`` Wilcard, any element\n ``,`` Select multiple fields\n ``[n]`` Array index\n ``[a:b:c]`` Array slice (start, end, step)\n ``[a,b]`` Union of indices or names\n ``[?()]`` Apply a filter expression\n ``()`` Script expression\n ``[\\\\field]`` Sort descending by ``field``, cannot be combined with\n filters.\n ``[/field]`` Sort ascending by ``field``, cannot be combined with\n filters.\n ````str()```` Convert value to string, see `functions`_\n ````sub()```` Regex substitution function, see `functions`_\n ````len```` Calculate value's length, see `functions`_\n ````split()```` String split function, see `functions`_\n ``+`` ``-`` ``*`` ``/`` Arithmetic functions, see `functions`_\n ======================= ===========\n\n Functions\n ^^^^^^^^^\n\n This library allows JSON path expressions to include certain functions\n which can provide additional benefit to users. These functions are\n generally encapsulated in backticks (`````). Some functions require\n you to pass arguments similar to a Python function.\n\n For example, let's say a JSON has nodes on the JSON path\n ``$.books[*].genres`` which are represented as strings of genres with\n commas separating each genre. So for one book, this node might have a\n value like ``horror,young-adult``. You can return a list of first genre\n for each book by using the ``split`` function like so:\n\n .. code-block:: robotframework\n\n *** Task ***\n Get genres\n ${genres}= Get values from JSON $.books[*].genres.`split(,, 0, -1)`\n\n Each functions parameters are defined here:\n\n =================================== =====\n Function Usage\n =================================== =====\n ``str()`` No parameters, but parenthesis are required\n ``sub(/regex/, repl)`` The regex pattern must be provided in *regex*\n and the replacement value provided in *repl*\n ``len`` No parameters and no parenthesis\n ``split(char, segment, max_split)`` Separator character provided as *char*, which\n index from the resulting array to be returns\n provided as *segment*, and maximum number of\n splits to perform provided as *max_split*,\n ``-1`` for all splits.\n ``parent`` No parameters, no parenthesis\n =================================== =====\n\n **Arithmetic Functions**\n\n JSON Path can be written and combined to concatenate string values\n or perform arithmetic functions on numerical values. Each JSONPath\n expression used must return the same type, and when performing\n such functions between returned lists, each list must be the same\n length. 
An example is included in documentation for the keyword\n \\`Get values from JSON\\`.\n\n Additional Information\n ^^^^^^^^^^^^^^^^^^^^^^\n\n There are a multitude of different script expressions\n in addition to the elements listed above, which can\n be seen in the `aforementioned article`__.\n\n For further library usage examples, see the individual keywords.\n\n __ JSONPath_\n \"\"\"\n\n # TODO: Add more logging about affected rows, at least on debug level\n\n ROBOT_LIBRARY_SCOPE = \"GLOBAL\"\n ROBOT_LIBRARY_DOC_FORMAT = \"REST\"\n\n def __init__(self):\n self.logger = logging.getLogger(__name__)\n\n @keyword(\"Load JSON from file\")\n def load_json_from_file(self, filename: str, encoding=\"utf-8\") -> JSONType:\n \"\"\"Load JSON data from a file, and return it as JSON serializable object.\n Depending on the input file the object can be either a dictionary,\n a list, or a scalar value.\n\n :param filename: path to input file\n :param encoding: file character encoding\n :return: JSON serializable object of the JSON file\n\n Example:\n\n .. code:: robotframework\n\n *** Task ***\n Load json\n &{auth}= Load JSON from file auth.json\n Log Current auth token: ${auth.token}\n\n \"\"\"\n self.logger.info(\"Loading JSON from file: %s\", filename)\n with open(filename, \"r\", encoding=encoding) as json_file:\n return json.load(json_file)\n\n @keyword(\"Save JSON to file\")\n def save_json_to_file(\n self,\n doc: JSONType,\n filename: str,\n indent: Optional[int] = None,\n encoding: str = \"utf-8\",\n ) -> None:\n \"\"\"Save a JSON serializable object or a string containing\n a JSON value into a file.\n\n :param doc: JSON serializable object or string\n :param filename: path to output file\n :param indent: if given this value is used for json file indent\n :param encoding: file character encoding\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Tasks ***\n Save dictionary to file\n ${john}= Create dictionary name=John [email protected]\n Save JSON to file ${john} john.json\n\n Save string to file\n ${mark}= Set variable {\"name\": \"Mark\", \"mail\": \"[email protected]\"}\n Save JSON to file ${mark} mark.json\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Save dictionary to file.\n john = {\"name\": \"John\", \"mail\": \"[email protected]\"}\n JSON().save_json_to_file(john, \"john.json\")\n\n \"\"\"\n self.logger.info(\"Saving JSON to file: %s\", filename)\n extra_args = {}\n if indent:\n extra_args[\"indent\"] = indent\n doc = self.convert_string_to_json(doc) if isinstance(doc, str) else doc\n with open(filename, \"w\", encoding=encoding) as outfile:\n json.dump(doc, outfile, **extra_args)\n\n @keyword(\"Convert JSON to String\")\n def convert_json_to_string(self, doc: JSONType) -> str:\n \"\"\"Convert a JSON serializable object to a string and return it.\n\n :param doc: JSON serializable object\n :return: string of the JSON serializable object\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Convert to string\n ${obj}= Create dictionary Key=Value\n ${json}= Convert JSON to string ${obj}\n Should be equal ${json} {\"Key\": \"Value\"}\n\n Python Example:\n\n .. 
code:: python\n\n from RPA.JSON import JSON\n from robot.libraries.BuiltIn import BuiltIn\n\n obj = {\"Key\": \"Value\"}\n json = JSON().convert_json_to_string(obj)\n BuiltIn().should_be_equal(json, '{\"Key\": \"Value\"}')\n\n \"\"\"\n return json.dumps(doc)\n\n @keyword(\"Convert String to JSON\")\n def convert_string_to_json(self, doc: str) -> JSONType:\n \"\"\"Convert a string to a JSON serializable object and return it.\n\n :param doc: JSON string\n :return: JSON serializable object of the string\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Convert to json\n ${json}= Set variable {\"Key\": \"Value\"}\n &{obj}= Convert string to JSON ${json}\n Should be equal ${obj.Key} Value\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n from robot.libraries.BuiltIn import BuiltIn\n\n json = '{\"Key\": \"Value\"}'\n obj = JSON().convert_string_to_json(json)\n BuiltIn().should_be_equal(obj[\"Key\"], \"Value\")\n\n \"\"\"\n return json.loads(doc)\n\n @keyword(\"Add to JSON\")\n def add_to_json(self, doc: JSONType, expr: str, value: JSONType) -> JSONType:\n \"\"\"Add items into a JSON serializable object and return the result.\n\n If the target is a list, the values are appended to the end.\n If the target is a dict, the keys are either added or updated.\n\n :param doc: JSON serializable object\n :param expr: JSONPath expression\n :param value: values to either append or update\n :return: JSON serializable object of the updated JSON\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Change the name value for all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{person}= Create dictionary Name=John\n &{after}= Add to JSON ${before} $.People ${person}\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Change the name value for all people\n js = JSON()\n before = js.convert_string_to_json('{\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}')\n person = {\"Name\": \"John\"}\n after = js.add_to_json(before, \"$.People\", person)\n\n print(after)\n\n \"\"\" # noqa: E501\n self.logger.info(\"Add to JSON with expression: %r\", expr)\n for match in parse(expr).find(doc):\n if isinstance(match.value, dict):\n match.value.update(value)\n if isinstance(match.value, list):\n match.value.append(value)\n return doc\n\n @keyword(\"Get value from JSON\")\n def get_value_from_json(\n self, doc: JSONType, expr: str, default: Optional[Any] = None\n ) -> str:\n \"\"\"Get a single value from a JSON serializable object that matches the given expression.\n\n Raises a ValueError if there is more than one match.\n Returns the given default argument (or None) if there\n were no matches.\n\n :param doc: JSON serializable object or string\n :param expr: jsonpath expression\n :param default: default value to return in the absence of a match\n :return: string containing the match OR `default` if there are no matches\n :raises ValueError: if more than one match is discovered\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Get the name value for the first person\n &{people}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n ${first}= Get value from JSON ${people} $.People[0].Name\n\n Short Python Example:\n\n .. 
code:: python\n\n from RPA.JSON import JSON\n\n # Get the name value for the second person.\n people = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n second = JSON().get_value_from_json(people, \"$.People[1].Name\")\n print(second)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Get email for specific order id\n ${email}= Get value from json ${JSON_DOC} $.clients[?(@..id==\"${ID}\")].email\n Log \\\\nOUTPUT IS\\\\n ${email} console=${True}\n Should be equal as strings ${email} [email protected]\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Get value from JSON with expression: %r\", expr)\n result = [match.value for match in parse(expr).find(doc)]\n if len(result) > 1:\n raise ValueError(\n \"Found {count} matches: {values}\".format(\n count=len(result), values=\", \".join(str(r) for r in result)\n )\n )\n\n return result[0] if result else default\n\n @keyword(\"Get values from JSON\")\n def get_values_from_json(self, doc: JSONType, expr: str) -> list:\n \"\"\"Get all values from a JSON serializable object that match the given expression.\n\n :param doc: JSON serializable object or string\n :param expr: JSONPath expression\n :return: list of values that match\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Get all the names for all people\n &{people}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n @{names}= Get values from JSON ${people} $.People[*].Name\n\n Short Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Get all the names for all people\n people = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n names = JSON().get_values_from_json(people, \"$.People[*].Name\")\n print(second)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... 
{\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Get All Prices and Order Ids\n # Arithmetic operations only work when lists are of equal lengths and types.\n ${prices}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[*].id + \" has price \" + $.clients[*].orders[*].price.`str()`\n Log \\\\nOUTPUT IS\\\\n ${prices} console=${True}\n Should be equal as strings ${prices}\n ... ['guid-001 has price 103.2', 'guid-002 has price 98.99', 'guid-003 has price 22.0', 'guid-004 has price 2330.01', 'guid-005 has price 152.12']\n\n Find Only Valid Emails With Regex\n # The regex used in this example is simplistic and\n # will not work with all email addresses\n ${emails}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[?(@.email =~ \"[a-zA-Z]+@[a-zA-Z]+\\\\.[a-zA-Z]+\")].email\n Log \\\\nOUTPUT IS\\\\n ${emails} console=${True}\n Should be equal as strings ${emails} ['[email protected]', '[email protected]']\n\n Find Orders From Texas Over 100\n # The regex used in this example is simplistic and\n # will not work with all email addresses\n ${orders}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[?(@.price > 100 & @.state == \"TX\")]\n Log \\\\nOUTPUT IS\\\\n ${orders} console=${True}\n Should be equal as strings ${orders}\n ... [{'address': 'Streetroad 123', 'state': 'TX', 'price': 103.2, 'id': 'guid-001'}, {'address': 'Streetroad 123', 'state': 'TX', 'price': 2330.01, 'id': 'guid-004'}]\n\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Get values from JSON with expression: %r\", expr)\n return [match.value for match in parse(expr).find(doc)]\n\n @keyword(\"Update value to JSON\")\n def update_value_to_json(\n self, doc: JSONType, expr: str, value: JSONType\n ) -> JSONType:\n \"\"\"Update existing values in a JSON serializable object and return the result.\n Will change all values that match the expression.\n\n :param doc: JSON or string\n :param expr: JSONPath expression\n :param value: New value for the matching item(s)\n :return: JSON serializable object with updated results\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Tasks ***\n Change the name key for all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{after}= Update value to JSON ${before} $.People[*].Name JohnMalkovich\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Change the name key for all people\n before = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n after = JSON().update_value_to_json(before, \"$.People[*].Name\",\"JohnMalkovich\")\n print(after)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Library Collections\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"id\": \"user-001\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... 
\"email\": \"[email protected]\",\n ... \"id\": \"user-002\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Update user email\n ${updated_doc}= Update value to json\n ... ${JSON_DOC}\n ... $.clients[?(@.id==\"user-001\")].email\n ... [email protected]\n Log \\\\nNEW JSON IS\\\\n ${updated_doc} console=${True}\n ${new_email}= Get value from json ${updated_doc} $.clients[?(@.id==\"user-001\")].email\n Should be equal as strings ${new_email} [email protected]\n\n Add additional charge to all prices in WA\n # This example also shows how the update keyword changes the original JSON doc in memory.\n ${id_price}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[?(@.state==\"WA\")].id,price\n FOR ${order_id} ${price} IN @{id_price}\n Update value to json ${JSON_DOC} $.clients[*].orders[?(@.id==\"${order_id}\")].price ${{${price} * 1.06}}\n END\n Log \\\\nNEW JSON IS\\\\n ${JSON_DOC} console=${True}\n ${one_price}= Get value from json ${JSON_DOC} $..orders[?(@.id==${ID})].price\n Should be equal as numbers ${one_price} 23.32\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Update JSON with expression: %r\", expr)\n for match in parse(expr).find(doc):\n path = match.path\n if isinstance(path, Index):\n match.context.value[match.path.index] = value\n elif isinstance(path, Fields):\n match.context.value[match.path.fields[0]] = value\n return doc\n\n @keyword(\"Delete from JSON\")\n def delete_from_json(self, doc: JSONType, expr: str) -> JSONType:\n \"\"\"Delete values from a JSON serializable object and return the result.\n Will delete all values that match the expression.\n\n :param doc: JSON serializable object or string\n :param expr: JSONPath expression\n :return: JSON serializable object with values removed\n\n Example:\n\n .. code:: robotframework\n\n *** Task ***\n Delete all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{after}= Delete from JSON ${before} $.People[*]\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Delete all people\n before = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n after = JSON().delete_from_json(before, \"$.People[*]\")\n print(after)\n\n \"\"\" # noqa: E501\n self.logger.info(\"Delete from JSON with expression: %r\", expr)\n return parse(expr).filter(lambda _: True, doc)\n",
"path": "packages/main/src/RPA/JSON.py"
}
] | [
{
"content": "import json\nimport logging\nfrom typing import Any, Callable, Dict, Hashable, List, Optional, Union\n\nfrom jsonpath_ng import Index, Fields\nfrom jsonpath_ng.ext.filter import Filter\nfrom jsonpath_ng.ext.parser import ExtentedJsonPathParser\n\nfrom robot.api.deco import keyword\n\n\nJSONValue = Optional[Union[str, int, float, bool]]\nJSONType = Union[Dict[Hashable, JSONValue], List[JSONValue], JSONValue]\n\n\nclass RPAFilter(Filter):\n \"\"\"Extends default filtering JSON path logic.\"\"\"\n\n def filter(self, fn: Callable[[JSONType], bool], data: JSONType) -> JSONType:\n for datum in reversed(self.find(data)):\n index_obj = datum.path\n if isinstance(data, dict):\n index_obj.index = list(data)[index_obj.index]\n index_obj.filter(fn, data)\n return data\n\n\nclass RPAJsonPathParser(ExtentedJsonPathParser):\n \"\"\"Extends the default JSON path parser found in `jsonpath_ng.ext`.\"\"\"\n\n def p_filter(self, p):\n \"\"\"filter : '?' expressions\"\"\"\n p[0] = RPAFilter(p[2])\n\n\ndef parse(path: str, debug: bool = False) -> RPAJsonPathParser:\n return RPAJsonPathParser(debug=debug).parse(path)\n\n\nclass JSON:\n r\"\"\"`JSON` is a library for manipulating `JSON`_ files and strings.\n\n JSON is a common data interchange format inspired by a subset of\n the Javascript programming language, but these days is a de facto\n standard in modern web APIs and is language agnostic.\n\n .. _JSON: http://json.org/\n\n Serialization\n =============\n\n The term `serialization` refers to the process of converting\n Robot Framework or Python types to JSON or the other way around.\n\n Basic types can be easily converted between the domains,\n and the mapping is as follows:\n\n ============= =======\n JSON Python\n ============= =======\n object dict\n array list\n string str\n number (int) int\n number (real) float\n true True\n false False\n null None\n ============= =======\n\n About JSONPath\n ==============\n\n Reading and writing values from/to JSON serializable objects is done\n using `JSONPath`_. It's a syntax designed to quickly and easily refer to\n specific elements in a JSON structure. The specific flavor used in this\n library is based on `jsonpath-ng`_.\n\n Compared to Python's normal dictionary access, JSONPath expressions can\n target multiple elements through features such as conditionals and wildcards,\n which can simplify many JSON-related operations. It's analogous to XPath\n for XML structures.\n\n .. _JSONPath: http://goessner.net/articles/JsonPath/\n .. _jsonpath-ng: https://pypi.org/project/jsonpath-ng/#description\n\n Syntax example\n --------------\n\n For this example consider the following structure:\n\n .. code-block:: json\n\n {\n \"clients\": [\n {\n \"name\": \"Johnny Example\",\n \"email\": \"[email protected]\",\n \"orders\": [\n {\"address\": \"Streetroad 123\", \"price\": 103.20},\n {\"address\": \"Streetroad 123\", \"price\": 98.99}\n ]\n },\n {\n \"name\": \"Jane Example\",\n \"email\": \"[email protected]\",\n \"orders\": [\n {\"address\": \"Waypath 321\", \"price\": 22.00},\n {\"address\": \"Streetroad 123\", \"price\": 2330.01}\n ]\n }\n ]\n }\n\n In the simplest case JSONPath can replace nested access:\n\n .. code-block:: robotframework\n\n *** Tasks ***\n Nested access\n # First order of first client, with direct dictionary access\n ${value}= Set variable ${json}[\"clients\"][0][\"orders\"][0]\n\n # JSONPath access\n ${value}= Get value from JSON ${json} $.clients[0].orders[0]\n\n But the power comes from complicated expressions:\n\n .. 
code-block:: robotframework\n\n *** Tasks ***\n Complicated expressions\n # Find delivery addresses for all orders\n ${prices}= Get values from JSON $..address\n\n # Find orders that cost over 100\n ${expensives}= Get values from JSON $..orders[?(@.price>100)]\n\n\n Supported Expressions\n ---------------------\n\n The supported syntax elements are:\n\n ======================= ===========\n Element Description\n ======================= ===========\n ``$`` Root object/element\n ``@`` Current object/element inside expressions\n ``.`` or ``[]`` Child operator\n ``..`` Recursive descendant operator\n ````parent```` Parent operator, see `functions`_\n ``*`` Wilcard, any element\n ``,`` Select multiple fields\n ``[n]`` Array index\n ``[a:b:c]`` Array slice (start, end, step)\n ``[a,b]`` Union of indices or names\n ``[?()]`` Apply a filter expression\n ``()`` Script expression\n ``[\\\\field]`` Sort descending by ``field``, cannot be combined with\n filters.\n ``[/field]`` Sort ascending by ``field``, cannot be combined with\n filters.\n ````str()```` Convert value to string, see `functions`_\n ````sub()```` Regex substitution function, see `functions`_\n ````len```` Calculate value's length, see `functions`_\n ````split()```` String split function, see `functions`_\n ``+`` ``-`` ``*`` ``/`` Arithmetic functions, see `functions`_\n ======================= ===========\n\n Functions\n ^^^^^^^^^\n\n This library allows JSON path expressions to include certain functions\n which can provide additional benefit to users. These functions are\n generally encapsulated in backticks (`````). Some functions require\n you to pass arguments similar to a Python function.\n\n For example, let's say a JSON has nodes on the JSON path\n ``$.books[*].genres`` which are represented as strings of genres with\n commas separating each genre. So for one book, this node might have a\n value like ``horror,young-adult``. You can return a list of first genre\n for each book by using the ``split`` function like so:\n\n .. code-block:: robotframework\n\n *** Task ***\n Get genres\n ${genres}= Get values from JSON $.books[*].genres.`split(,, 0, -1)`\n\n Each functions parameters are defined here:\n\n =================================== =====\n Function Usage\n =================================== =====\n ``str()`` No parameters, but parenthesis are required\n ``sub(/regex/, repl)`` The regex pattern must be provided in *regex*\n and the replacement value provided in *repl*\n ``len`` No parameters and no parenthesis\n ``split(char, segment, max_split)`` Separator character provided as *char*, which\n index from the resulting array to be returns\n provided as *segment*, and maximum number of\n splits to perform provided as *max_split*,\n ``-1`` for all splits.\n ``parent`` No parameters, no parenthesis\n =================================== =====\n\n **Arithmetic Functions**\n\n JSON Path can be written and combined to concatenate string values\n or perform arithmetic functions on numerical values. Each JSONPath\n expression used must return the same type, and when performing\n such functions between returned lists, each list must be the same\n length. 
An example is included in documentation for the keyword\n \\`Get values from JSON\\`.\n\n Additional Information\n ^^^^^^^^^^^^^^^^^^^^^^\n\n There are a multitude of different script expressions\n in addition to the elements listed above, which can\n be seen in the `aforementioned article`__.\n\n For further library usage examples, see the individual keywords.\n\n __ JSONPath_\n \"\"\"\n\n # TODO: Add more logging about affected rows, at least on debug level\n\n ROBOT_LIBRARY_SCOPE = \"GLOBAL\"\n ROBOT_LIBRARY_DOC_FORMAT = \"REST\"\n\n def __init__(self):\n self.logger = logging.getLogger(__name__)\n\n @keyword(\"Load JSON from file\")\n def load_json_from_file(self, filename: str, encoding=\"utf-8\") -> JSONType:\n \"\"\"Load JSON data from a file, and return it as JSON serializable object.\n Depending on the input file the object can be either a dictionary,\n a list, or a scalar value.\n\n :param filename: path to input file\n :param encoding: file character encoding\n :return: JSON serializable object of the JSON file\n\n Example:\n\n .. code:: robotframework\n\n *** Task ***\n Load json\n &{auth}= Load JSON from file auth.json\n Log Current auth token: ${auth.token}\n\n \"\"\"\n self.logger.info(\"Loading JSON from file: %s\", filename)\n with open(filename, \"r\", encoding=encoding) as json_file:\n return json.load(json_file)\n\n @keyword(\"Save JSON to file\")\n def save_json_to_file(\n self,\n doc: JSONType,\n filename: str,\n indent: Optional[int] = None,\n encoding: str = \"utf-8\",\n ) -> None:\n \"\"\"Save a JSON serializable object or a string containing\n a JSON value into a file.\n\n :param doc: JSON serializable object or string\n :param filename: path to output file\n :param indent: if given this value is used for json file indent\n :param encoding: file character encoding\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Tasks ***\n Save dictionary to file\n ${john}= Create dictionary name=John [email protected]\n Save JSON to file ${john} john.json\n\n Save string to file\n ${mark}= Set variable {\"name\": \"Mark\", \"mail\": \"[email protected]\"}\n Save JSON to file ${mark} mark.json\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Save dictionary to file.\n john = {\"name\": \"John\", \"mail\": \"[email protected]\"}\n JSON().save_json_to_file(john, \"john.json\")\n\n \"\"\"\n self.logger.info(\"Saving JSON to file: %s\", filename)\n extra_args = {}\n if indent:\n extra_args[\"indent\"] = indent\n doc = self.convert_string_to_json(doc) if isinstance(doc, str) else doc\n with open(filename, \"w\", encoding=encoding) as outfile:\n json.dump(doc, outfile, **extra_args)\n\n @keyword(\"Convert JSON to String\")\n def convert_json_to_string(self, doc: JSONType) -> str:\n \"\"\"Convert a JSON serializable object to a string and return it.\n\n :param doc: JSON serializable object\n :return: string of the JSON serializable object\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Convert to string\n ${obj}= Create dictionary Key=Value\n ${json}= Convert JSON to string ${obj}\n Should be equal ${json} {\"Key\": \"Value\"}\n\n Python Example:\n\n .. 
code:: python\n\n from RPA.JSON import JSON\n from robot.libraries.BuiltIn import BuiltIn\n\n obj = {\"Key\": \"Value\"}\n json = JSON().convert_json_to_string(obj)\n BuiltIn().should_be_equal(json, '{\"Key\": \"Value\"}')\n\n \"\"\"\n return json.dumps(doc)\n\n @keyword(\"Convert String to JSON\")\n def convert_string_to_json(self, doc: str) -> JSONType:\n \"\"\"Convert a string to a JSON serializable object and return it.\n\n :param doc: JSON string\n :return: JSON serializable object of the string\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Convert to json\n ${json}= Set variable {\"Key\": \"Value\"}\n &{obj}= Convert string to JSON ${json}\n Should be equal ${obj.Key} Value\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n from robot.libraries.BuiltIn import BuiltIn\n\n json = '{\"Key\": \"Value\"}'\n obj = JSON().convert_string_to_json(json)\n BuiltIn().should_be_equal(obj[\"Key\"], \"Value\")\n\n \"\"\"\n return json.loads(doc)\n\n @keyword(\"Add to JSON\")\n def add_to_json(self, doc: JSONType, expr: str, value: JSONType) -> JSONType:\n \"\"\"Add items into a JSON serializable object and return the result.\n\n If the target is a list, the values are appended to the end.\n If the target is a dict, the keys are either added or updated.\n\n :param doc: JSON serializable object\n :param expr: JSONPath expression\n :param value: values to either append or update\n :return: JSON serializable object of the updated JSON\n\n Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Change the name value for all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{person}= Create dictionary Name=John\n &{after}= Add to JSON ${before} $.People ${person}\n\n Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Change the name value for all people\n js = JSON()\n before = js.convert_string_to_json('{\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}')\n person = {\"Name\": \"John\"}\n after = js.add_to_json(before, \"$.People\", person)\n\n print(after)\n\n \"\"\" # noqa: E501\n self.logger.info(\"Add to JSON with expression: %r\", expr)\n for match in parse(expr).find(doc):\n if isinstance(match.value, dict):\n match.value.update(value)\n if isinstance(match.value, list):\n match.value.append(value)\n return doc\n\n @keyword(\"Get value from JSON\")\n def get_value_from_json(\n self, doc: JSONType, expr: str, default: Optional[Any] = None\n ) -> str:\n \"\"\"Get a single value from a JSON serializable object that matches the given expression.\n\n Raises a ValueError if there is more than one match.\n Returns the given default argument (or None) if there\n were no matches.\n\n :param doc: JSON serializable object or string\n :param expr: jsonpath expression\n :param default: default value to return in the absence of a match\n :return: string containing the match OR `default` if there are no matches\n :raises ValueError: if more than one match is discovered\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Get the name value for the first person\n &{people}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n ${first}= Get value from JSON ${people} $.People[0].Name\n\n Short Python Example:\n\n .. 
code:: python\n\n from RPA.JSON import JSON\n\n # Get the name value for the second person.\n people = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n second = JSON().get_value_from_json(people, \"$.People[1].Name\")\n print(second)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Get email for specific order id\n ${email}= Get value from json ${JSON_DOC} $.clients[?(@..id==\"${ID}\")].email\n Log \\\\nOUTPUT IS\\\\n ${email} console=${True}\n Should be equal as strings ${email} [email protected]\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Get value from JSON with expression: %r\", expr)\n result = [match.value for match in parse(expr).find(doc)]\n if len(result) > 1:\n raise ValueError(\n \"Found {count} matches: {values}\".format(\n count=len(result), values=\", \".join(str(r) for r in result)\n )\n )\n\n return result[0] if result else default\n\n @keyword(\"Get values from JSON\")\n def get_values_from_json(self, doc: JSONType, expr: str) -> list:\n \"\"\"Get all values from a JSON serializable object that match the given expression.\n\n :param doc: JSON serializable object or string\n :param expr: JSONPath expression\n :return: list of values that match\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Task ***\n Get all the names for all people\n &{people}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n @{names}= Get values from JSON ${people} $.People[*].Name\n\n Short Python Example:\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Get all the names for all people\n people = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n names = JSON().get_values_from_json(people, \"$.People[*].Name\")\n print(second)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... \"email\": \"[email protected]\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... 
{\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Get All Prices and Order Ids\n # Arithmetic operations only work when lists are of equal lengths and types.\n ${prices}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[*].id + \" has price \" + $.clients[*].orders[*].price.`str()`\n Log \\\\nOUTPUT IS\\\\n ${prices} console=${True}\n Should be equal as strings ${prices}\n ... ['guid-001 has price 103.2', 'guid-002 has price 98.99', 'guid-003 has price 22.0', 'guid-004 has price 2330.01', 'guid-005 has price 152.12']\n\n Find Only Valid Emails With Regex\n # The regex used in this example is simplistic and\n # will not work with all email addresses\n ${emails}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[?(@.email =~ \"[a-zA-Z]+@[a-zA-Z]+\\\\.[a-zA-Z]+\")].email\n Log \\\\nOUTPUT IS\\\\n ${emails} console=${True}\n Should be equal as strings ${emails} ['[email protected]', '[email protected]']\n\n Find Orders From Texas Over 100\n # The regex used in this example is simplistic and\n # will not work with all email addresses\n ${orders}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[?(@.price > 100 & @.state == \"TX\")]\n Log \\\\nOUTPUT IS\\\\n ${orders} console=${True}\n Should be equal as strings ${orders}\n ... [{'address': 'Streetroad 123', 'state': 'TX', 'price': 103.2, 'id': 'guid-001'}, {'address': 'Streetroad 123', 'state': 'TX', 'price': 2330.01, 'id': 'guid-004'}]\n\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Get values from JSON with expression: %r\", expr)\n return [match.value for match in parse(expr).find(doc)]\n\n @keyword(\"Update value to JSON\")\n def update_value_to_json(\n self, doc: JSONType, expr: str, value: JSONType\n ) -> JSONType:\n \"\"\"Update existing values in a JSON serializable object and return the result.\n Will change all values that match the expression.\n\n :param doc: JSON or string\n :param expr: JSONPath expression\n :param value: New value for the matching item(s)\n :return: JSON serializable object with updated results\n\n Short Robot Framework Example:\n\n .. code:: robotframework\n\n *** Tasks ***\n Change the name key for all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{after}= Update value to JSON ${before} $.People[*].Name JohnMalkovich\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Change the name key for all people\n before = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n after = JSON().update_value_to_json(before, \"$.People[*].Name\",\"JohnMalkovich\")\n print(after)\n\n Extended Robot Framework Example:\n\n .. code:: robotframework\n\n *** Settings ***\n Library RPA.JSON\n Library Collections\n Suite Setup Ingest JSON\n\n *** Variables ***\n ${JSON_STRING} {\n ... \"clients\": [\n ... {\n ... \"name\": \"Johnny Example\",\n ... \"email\": \"[email protected]\",\n ... \"id\": \"user-001\",\n ... \"orders\": [\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 103.20, \"id\":\"guid-001\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 98.99, \"id\":\"guid-002\"}\n ... ]\n ... },\n ... {\n ... \"name\": \"Jane Example\",\n ... 
\"email\": \"[email protected]\",\n ... \"id\": \"user-002\",\n ... \"orders\": [\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 22.00, \"id\":\"guid-003\"},\n ... {\"address\": \"Streetroad 123\", \"state\": \"TX\", \"price\": 2330.01, \"id\":\"guid-004\"},\n ... {\"address\": \"Waypath 321\", \"state\": \"WA\", \"price\": 152.12, \"id\":\"guid-005\"}\n ... ]\n ... }\n ... ]\n ... }\n ${ID} guid-003\n\n *** Tasks ***\n Update user email\n ${updated_doc}= Update value to json\n ... ${JSON_DOC}\n ... $.clients[?(@.id==\"user-001\")].email\n ... [email protected]\n Log \\\\nNEW JSON IS\\\\n ${updated_doc} console=${True}\n ${new_email}= Get value from json ${updated_doc} $.clients[?(@.id==\"user-001\")].email\n Should be equal as strings ${new_email} [email protected]\n\n Add additional charge to all prices in WA\n # This example also shows how the update keyword changes the original JSON doc in memory.\n ${id_price}= Get values from json\n ... ${JSON_DOC}\n ... $.clients[*].orders[?(@.state==\"WA\")].id,price\n FOR ${order_id} ${price} IN @{id_price}\n Update value to json ${JSON_DOC} $.clients[*].orders[?(@.id==\"${order_id}\")].price ${{${price} * 1.06}}\n END\n Log \\\\nNEW JSON IS\\\\n ${JSON_DOC} console=${True}\n ${one_price}= Get value from json ${JSON_DOC} $..orders[?(@.id==${ID})].price\n Should be equal as numbers ${one_price} 23.32\n\n *** Keywords ***\n Ingest JSON\n ${doc}= Convert string to json ${JSON_STRING}\n Set suite variable ${JSON_DOC} ${doc}\n\n \"\"\" # noqa: E501\n self.logger.info(\"Update JSON with expression: %r\", expr)\n for match in parse(expr).find(doc):\n path = match.path\n if isinstance(path, Index):\n match.context.value[match.path.index] = value\n elif isinstance(path, Fields):\n match.context.value[match.path.fields[0]] = value\n return doc\n\n @keyword(\"Delete from JSON\")\n def delete_from_json(self, doc: JSONType, expr: str) -> JSONType:\n \"\"\"Delete values from a JSON serializable object and return the result.\n Will delete all values that match the expression.\n\n :param doc: JSON serializable object or string\n :param expr: JSONPath expression\n :return: JSON serializable object with values removed\n\n Example:\n\n .. code:: robotframework\n\n *** Task ***\n Delete all people\n &{before}= Convert string to JSON {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n &{after}= Delete from JSON ${before} $.People[*]\n\n .. code:: python\n\n from RPA.JSON import JSON\n\n # Delete all people\n before = {\"People\": [{\"Name\": \"Mark\"}, {\"Name\": \"Jane\"}]}\n after = JSON().delete_from_json(before, \"$.People[*]\")\n print(after)\n\n \"\"\" # noqa: E501\n self.logger.info(\"Delete from JSON with expression: %r\", expr)\n return parse(expr).filter(lambda _: True, doc)\n",
"path": "packages/main/src/RPA/JSON.py"
}
] | diff --git a/docs/source/releasenotes.rst b/docs/source/releasenotes.rst
index 3399a9b7ff..2d4efdd348 100644
--- a/docs/source/releasenotes.rst
+++ b/docs/source/releasenotes.rst
@@ -5,12 +5,17 @@ Release notes
`Upcoming release <https://github.com/robocorp/rpaframework/projects/3#column-16713994>`_
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
-- Deprecate *Lab* references under documentation.
-
`Released <https://pypi.org/project/rpaframework/#history>`_
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+15.1.1 - 17 June 2022
+---------------------
+
+- Library **RPA.JSON** (:issue:`548`): Fix *libspec* infinite recursion on ``JSONType``
+ type.
+- Deprecate *Lab* references under documentation.
+
15.1.0 - 15 June 2022
---------------------
diff --git a/packages/main/poetry.lock b/packages/main/poetry.lock
index 25fc7bfdda..1dc5caa88b 100644
--- a/packages/main/poetry.lock
+++ b/packages/main/poetry.lock
@@ -143,14 +143,14 @@ uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "boto3"
-version = "1.24.10"
+version = "1.24.11"
description = "The AWS SDK for Python"
category = "main"
optional = true
python-versions = ">= 3.7"
[package.dependencies]
-botocore = ">=1.27.10,<1.28.0"
+botocore = ">=1.27.11,<1.28.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
@@ -159,7 +159,7 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
-version = "1.27.10"
+version = "1.27.11"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = true
@@ -381,7 +381,7 @@ pyflakes = ">=2.3.0,<2.4.0"
[[package]]
name = "fpdf2"
-version = "2.5.4"
+version = "2.5.5"
description = "Simple & fast PDF generation for Python"
category = "main"
optional = false
@@ -389,7 +389,7 @@ python-versions = "*"
[package.dependencies]
defusedxml = "*"
-Pillow = ">=9.1.0"
+Pillow = ">=6.2.2"
[[package]]
name = "furl"
@@ -2187,12 +2187,12 @@ black = [
{file = "black-22.3.0.tar.gz", hash = "sha256:35020b8886c022ced9282b51b5a875b6d1ab0c387b31a065b84db7c33085ca79"},
]
boto3 = [
- {file = "boto3-1.24.10-py3-none-any.whl", hash = "sha256:32ffc0fd50408acc710cf5ce40037aa3c14926d6e3f6fbf61ed5990fb63cd881"},
- {file = "boto3-1.24.10.tar.gz", hash = "sha256:88fd816274d4b64bcf90889441d4efa5f16a0048ed670bc33cbd0f5a678313a6"},
+ {file = "boto3-1.24.11-py3-none-any.whl", hash = "sha256:19d6fb2b5e51f10e7b5d551a111cf9c64b9a5144b2838493ac41be0706e590cf"},
+ {file = "boto3-1.24.11.tar.gz", hash = "sha256:79fc9699006af26de4413105e458af5f1626ba32d1f00fa0b3e8b94c2b16e2dc"},
]
botocore = [
- {file = "botocore-1.27.10-py3-none-any.whl", hash = "sha256:24ec42b4f29a50f7ef78f9f863c3c25e00f65b5a48db669c8068457789a90803"},
- {file = "botocore-1.27.10.tar.gz", hash = "sha256:b39da97452c9e2c856e7778d8c908252394da81e2e5792f1d4cb0ece4ce1043a"},
+ {file = "botocore-1.27.11-py3-none-any.whl", hash = "sha256:8efab7f85156705cbe532aeb17b065b67ba32addc3270d9000964b98c07bb20a"},
+ {file = "botocore-1.27.11.tar.gz", hash = "sha256:92f099a36df832d7f151682e1efa8e1d47d23a5cedde8692adcaa6420bcb18aa"},
]
cached-property = [
{file = "cached-property-1.5.2.tar.gz", hash = "sha256:9fa5755838eecbb2d234c3aa390bd80fbd3ac6b6869109bfc1b499f7bd89a130"},
@@ -2370,8 +2370,8 @@ flake8 = [
{file = "flake8-3.9.2.tar.gz", hash = "sha256:07528381786f2a6237b061f6e96610a4167b226cb926e2aa2b6b1d78057c576b"},
]
fpdf2 = [
- {file = "fpdf2-2.5.4-py2.py3-none-any.whl", hash = "sha256:0f5bb5059d6049ad6b6fa985120bd81ca2beecff60ec48735272e2ab4f1b39d7"},
- {file = "fpdf2-2.5.4.tar.gz", hash = "sha256:24b045c8bab16ce0b52769f4066385b5255dc6a01c474b0e41cb6d8bbfebe3ff"},
+ {file = "fpdf2-2.5.5-py2.py3-none-any.whl", hash = "sha256:72deaec4d0172e10025f4febddaa306edc5cfad28a3fa0069a368d9d896caa46"},
+ {file = "fpdf2-2.5.5.tar.gz", hash = "sha256:2dace3a7cfa9ebfbfa08a4d40d97d8944838370b3cee739e4b1549c48afc4811"},
]
furl = [
{file = "furl-2.1.3-py2.py3-none-any.whl", hash = "sha256:9ab425062c4217f9802508e45feb4a83e54324273ac4b202f1850363309666c0"},
diff --git a/packages/main/pyproject.toml b/packages/main/pyproject.toml
index 078cbecb03..62b1e7051f 100644
--- a/packages/main/pyproject.toml
+++ b/packages/main/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "rpaframework"
-version = "15.1.0"
+version = "15.1.1"
description = "A collection of tools and libraries for RPA"
authors = ["RPA Framework <[email protected]>"]
license = "Apache-2.0"
diff --git a/packages/main/src/RPA/JSON.py b/packages/main/src/RPA/JSON.py
index 92c73bec70..a73bb77c44 100644
--- a/packages/main/src/RPA/JSON.py
+++ b/packages/main/src/RPA/JSON.py
@@ -10,7 +10,7 @@
JSONValue = Optional[Union[str, int, float, bool]]
-JSONType = Union[Dict[Hashable, "JSONType"], List["JSONType"], JSONValue]
+JSONType = Union[Dict[Hashable, JSONValue], List[JSONValue], JSONValue]
class RPAFilter(Filter):
diff --git a/poetry.lock b/poetry.lock
index 9a0346e503..a4dd8d1d8e 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -135,14 +135,14 @@ uvloop = ["uvloop (>=0.15.2)"]
[[package]]
name = "boto3"
-version = "1.24.10"
+version = "1.24.11"
description = "The AWS SDK for Python"
category = "main"
optional = false
python-versions = ">= 3.7"
[package.dependencies]
-botocore = ">=1.27.10,<1.28.0"
+botocore = ">=1.27.11,<1.28.0"
jmespath = ">=0.7.1,<2.0.0"
s3transfer = ">=0.6.0,<0.7.0"
@@ -151,7 +151,7 @@ crt = ["botocore[crt] (>=1.21.0,<2.0a0)"]
[[package]]
name = "botocore"
-version = "1.27.10"
+version = "1.27.11"
description = "Low-level, data-driven core of boto 3."
category = "main"
optional = false
@@ -356,7 +356,7 @@ termcolor = "*"
[[package]]
name = "fpdf2"
-version = "2.5.4"
+version = "2.5.5"
description = "Simple & fast PDF generation for Python"
category = "main"
optional = false
@@ -364,7 +364,7 @@ python-versions = "*"
[package.dependencies]
defusedxml = "*"
-Pillow = ">=9.1.0"
+Pillow = ">=6.2.2"
[[package]]
name = "furl"
@@ -1682,7 +1682,7 @@ python-versions = "*"
[[package]]
name = "rpaframework"
-version = "15.1.0"
+version = "15.1.1"
description = "A collection of tools and libraries for RPA"
category = "main"
optional = false
@@ -2444,12 +2444,12 @@ black = [
{file = "black-22.3.0.tar.gz", hash = "sha256:35020b8886c022ced9282b51b5a875b6d1ab0c387b31a065b84db7c33085ca79"},
]
boto3 = [
- {file = "boto3-1.24.10-py3-none-any.whl", hash = "sha256:32ffc0fd50408acc710cf5ce40037aa3c14926d6e3f6fbf61ed5990fb63cd881"},
- {file = "boto3-1.24.10.tar.gz", hash = "sha256:88fd816274d4b64bcf90889441d4efa5f16a0048ed670bc33cbd0f5a678313a6"},
+ {file = "boto3-1.24.11-py3-none-any.whl", hash = "sha256:19d6fb2b5e51f10e7b5d551a111cf9c64b9a5144b2838493ac41be0706e590cf"},
+ {file = "boto3-1.24.11.tar.gz", hash = "sha256:79fc9699006af26de4413105e458af5f1626ba32d1f00fa0b3e8b94c2b16e2dc"},
]
botocore = [
- {file = "botocore-1.27.10-py3-none-any.whl", hash = "sha256:24ec42b4f29a50f7ef78f9f863c3c25e00f65b5a48db669c8068457789a90803"},
- {file = "botocore-1.27.10.tar.gz", hash = "sha256:b39da97452c9e2c856e7778d8c908252394da81e2e5792f1d4cb0ece4ce1043a"},
+ {file = "botocore-1.27.11-py3-none-any.whl", hash = "sha256:8efab7f85156705cbe532aeb17b065b67ba32addc3270d9000964b98c07bb20a"},
+ {file = "botocore-1.27.11.tar.gz", hash = "sha256:92f099a36df832d7f151682e1efa8e1d47d23a5cedde8692adcaa6420bcb18aa"},
]
cached-property = [
{file = "cached-property-1.5.2.tar.gz", hash = "sha256:9fa5755838eecbb2d234c3aa390bd80fbd3ac6b6869109bfc1b499f7bd89a130"},
@@ -2584,8 +2584,8 @@ fire = [
{file = "fire-0.4.0.tar.gz", hash = "sha256:c5e2b8763699d1142393a46d0e3e790c5eb2f0706082df8f647878842c216a62"},
]
fpdf2 = [
- {file = "fpdf2-2.5.4-py2.py3-none-any.whl", hash = "sha256:0f5bb5059d6049ad6b6fa985120bd81ca2beecff60ec48735272e2ab4f1b39d7"},
- {file = "fpdf2-2.5.4.tar.gz", hash = "sha256:24b045c8bab16ce0b52769f4066385b5255dc6a01c474b0e41cb6d8bbfebe3ff"},
+ {file = "fpdf2-2.5.5-py2.py3-none-any.whl", hash = "sha256:72deaec4d0172e10025f4febddaa306edc5cfad28a3fa0069a368d9d896caa46"},
+ {file = "fpdf2-2.5.5.tar.gz", hash = "sha256:2dace3a7cfa9ebfbfa08a4d40d97d8944838370b3cee739e4b1549c48afc4811"},
]
furl = [
{file = "furl-2.1.3-py2.py3-none-any.whl", hash = "sha256:9ab425062c4217f9802508e45feb4a83e54324273ac4b202f1850363309666c0"},
|
joke2k__faker-993 | text-unidecode is released under the Artistic license
`text-unidecode` is released under the Artistic license v1.0, which is considered non-free by the FSF (and therefore not compatible with the GPL). I believe the following clause is also of concern to commercial users of faker:
> 5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own.
Not being able to charge a fee for the software is problematic for those of us who are contractors, for example.
I realise there aren't really any good alternatives (`unidecode` is GPL licensed as pointed out in #628, `isounidecode` doesn't support Python 3), so would a patch making `text-unidecode` an optional dependency be acceptable?
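For illustration, here is a minimal, hypothetical sketch of what such an optional-dependency patch could look like (the extra name `unidecode` and the fallback function are made up for this sketch; the change that eventually shipped in this record simply pins `text-unidecode==1.3` instead):

```python
# Hypothetical sketch, not the merged fix: declare text-unidecode as an extra
# instead of a hard requirement, and degrade gracefully when it is absent.

# setup.py: replace the install_requires pin with an extra
extras_require = {
    # opt-in via: pip install Faker[unidecode]
    "unidecode": ["text-unidecode==1.2"],
}

# faker side: guard the import and fall back to a no-op transliteration
try:
    from text_unidecode import unidecode
except ImportError:
    def unidecode(value):
        # fallback when text-unidecode is not installed: return input unchanged
        return value
```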
| [
{
"content": "#!/usr/bin/env python\n# coding=utf-8\n\nimport io\nimport os\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\nwith io.open(os.path.join(here, 'README.rst'), encoding='utf-8') as fp:\n README = fp.read()\n\nwith io.open(os.path.join(here, 'VERSION')) as version_file:\n VERSION = version_file.read().strip()\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n zip_safe = hasattr(zipimport.zipimporter, \"iter_modules\") or \\\n zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\nexcept (ImportError, AttributeError):\n zip_safe = False\n\nsetup(\n name='Faker',\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n 'console_scripts': ['faker=faker.cli:execute_from_command_line'],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Testing',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: MIT License',\n ],\n keywords='faker fixtures data test mock generator',\n author='joke2k',\n author_email='[email protected]',\n url='https://github.com/joke2k/faker',\n license='MIT License',\n packages=find_packages(exclude=[\"docs\", \"tests\", \"tests.*\"]),\n platforms=[\"any\"],\n test_suite='tests',\n zip_safe=zip_safe,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*\",\n setup_requires=[\"pytest-runner\"],\n install_requires=[\n \"python-dateutil>=2.4\",\n \"six>=1.10\",\n \"text-unidecode==1.2\",\n ],\n tests_require=[\n \"validators>=0.13.0\",\n \"ukpostcodeparser>=1.1.1\",\n \"mock ; python_version < '3.3'\",\n \"pytest>=3.8.0,<3.9\",\n \"more-itertools<6.0.0 ; python_version < '3.0'\",\n # restricted because they may drop python2 support in future versions\n # https://github.com/joke2k/faker/issues/970\n \"random2<1.1\",\n \"freezegun<0.4\",\n ],\n extras_require={\n ':python_version<\"3.3\"': [\n 'ipaddress',\n ],\n },\n)\n",
"path": "setup.py"
}
] | [
{
"content": "#!/usr/bin/env python\n# coding=utf-8\n\nimport io\nimport os\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\nwith io.open(os.path.join(here, 'README.rst'), encoding='utf-8') as fp:\n README = fp.read()\n\nwith io.open(os.path.join(here, 'VERSION')) as version_file:\n VERSION = version_file.read().strip()\n\n\n# this module can be zip-safe if the zipimporter implements iter_modules or if\n# pkgutil.iter_importer_modules has registered a dispatch for the zipimporter.\ntry:\n import pkgutil\n import zipimport\n zip_safe = hasattr(zipimport.zipimporter, \"iter_modules\") or \\\n zipimport.zipimporter in pkgutil.iter_importer_modules.registry.keys()\nexcept (ImportError, AttributeError):\n zip_safe = False\n\nsetup(\n name='Faker',\n version=VERSION,\n description=\"Faker is a Python package that generates fake data for you.\",\n long_description=README,\n entry_points={\n 'console_scripts': ['faker=faker.cli:execute_from_command_line'],\n },\n classifiers=[\n # See https://pypi.org/pypi?%3Aaction=list_classifiers\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'Intended Audience :: Developers',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Topic :: Software Development :: Testing',\n 'Topic :: Utilities',\n 'License :: OSI Approved :: MIT License',\n ],\n keywords='faker fixtures data test mock generator',\n author='joke2k',\n author_email='[email protected]',\n url='https://github.com/joke2k/faker',\n license='MIT License',\n packages=find_packages(exclude=[\"docs\", \"tests\", \"tests.*\"]),\n platforms=[\"any\"],\n test_suite='tests',\n zip_safe=zip_safe,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*\",\n setup_requires=[\"pytest-runner\"],\n install_requires=[\n \"python-dateutil>=2.4\",\n \"six>=1.10\",\n \"text-unidecode==1.3\",\n ],\n tests_require=[\n \"validators>=0.13.0\",\n \"ukpostcodeparser>=1.1.1\",\n \"mock ; python_version < '3.3'\",\n \"pytest>=3.8.0,<3.9\",\n \"more-itertools<6.0.0 ; python_version < '3.0'\",\n # restricted because they may drop python2 support in future versions\n # https://github.com/joke2k/faker/issues/970\n \"random2<1.1\",\n \"freezegun<0.4\",\n ],\n extras_require={\n ':python_version<\"3.3\"': [\n 'ipaddress',\n ],\n },\n)\n",
"path": "setup.py"
}
] | diff --git a/setup.py b/setup.py
index e0ef8ee22c..67d70d304d 100644
--- a/setup.py
+++ b/setup.py
@@ -66,7 +66,7 @@
install_requires=[
"python-dateutil>=2.4",
"six>=1.10",
- "text-unidecode==1.2",
+ "text-unidecode==1.3",
],
tests_require=[
"validators>=0.13.0",
|
dynamiqs__dynamiqs-196 | implement a ver() method
As a user, I want to be able to call dq.ver() to know which version I am running, so that I can make sure my setup is up to date with the latest version.
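The change attached to this record takes a slightly different route: it exposes the package version as `dynamiqs.__version__`, read from the installed package metadata, rather than adding a `ver()` function. A small sketch of both follows (the `ver()` helper is hypothetical):

```python
# dynamiqs/__init__.py sketch: read the version declared in pyproject.toml
# from the installed package metadata (importlib.metadata needs Python >= 3.8).
from importlib.metadata import version

__version__ = version("dynamiqs")


def ver() -> str:
    """Hypothetical convenience helper matching this request."""
    return __version__
```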
| [
{
"content": "from .mesolve import mesolve\nfrom .sesolve import sesolve\nfrom .smesolve import smesolve\nfrom .utils import *\n",
"path": "dynamiqs/__init__.py"
}
] | [
{
"content": "from importlib.metadata import version\n\nfrom .mesolve import mesolve\nfrom .sesolve import sesolve\nfrom .smesolve import smesolve\nfrom .utils import *\n\n# get version from pyproject.toml\n__version__ = version(__package__)\n",
"path": "dynamiqs/__init__.py"
}
] | diff --git a/dynamiqs/__init__.py b/dynamiqs/__init__.py
index 3167dd305..e89e489f3 100644
--- a/dynamiqs/__init__.py
+++ b/dynamiqs/__init__.py
@@ -1,4 +1,9 @@
+from importlib.metadata import version
+
from .mesolve import mesolve
from .sesolve import sesolve
from .smesolve import smesolve
from .utils import *
+
+# get version from pyproject.toml
+__version__ = version(__package__)
|
Pyomo__pyomo-429 | Review objects exposed by environ
At the request of @jsiirola, after I brought this to his attention: some Pyomo objects that one would expect to be exposed by `pyomo.environ` are not. One that I have encountered is `TerminationCondition`, which instead needs to be imported from `pyomo.opt`.
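For context, a short sketch of the inconsistency on a Pyomo version from before the referenced change (it assumes a solver such as GLPK is installed):

```python
# Modeling objects come from pyomo.environ, but the solver-status enums do not,
# so checking a solve result forces a second import from pyomo.opt.
from pyomo.environ import ConcreteModel, Objective, SolverFactory, Var
from pyomo.opt import SolverStatus, TerminationCondition

model = ConcreteModel()
model.x = Var(bounds=(0, 10))
model.obj = Objective(expr=model.x)

results = SolverFactory("glpk").solve(model)
if results.solver.termination_condition == TerminationCondition.optimal:
    print("ok:", results.solver.status == SolverStatus.ok)
```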
| [
{
"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport sys as _sys\nif _sys.version_info[0] >= 3:\n import importlib\n\n def _do_import(pkg_name):\n importlib.import_module(pkg_name)\nelse:\n def _do_import(pkg_name):\n __import__(pkg_name, globals(), locals(), [], -1)\n\n#\n# These packages contain plugins that need to be loaded\n#\n_packages = [\n 'pyomo.opt',\n 'pyomo.core',\n 'pyomo.checker',\n 'pyomo.repn',\n 'pyomo.pysp',\n 'pyomo.neos',\n 'pyomo.solvers',\n 'pyomo.gdp',\n 'pyomo.mpec',\n 'pyomo.dae',\n 'pyomo.bilevel',\n 'pyomo.scripting',\n]\n#\n#\n# These packages also contain plugins that need to be loaded, but\n# we silently ignore any import errors because these\n# packages are optional and/or under development.\n#\n_optional_packages = set([\n 'pyomo.contrib.example',\n 'pyomo.contrib.preprocessing',\n 'pyomo.contrib.gdpopt',\n 'pyomo.contrib.trustregion',\n])\n\n\ndef _import_packages():\n #\n # Import required packages\n #\n for name in _packages:\n pname = name+'.plugins'\n try:\n _do_import(pname)\n except ImportError:\n exctype, err, tb = _sys.exc_info() # BUG?\n import traceback\n msg = \"pyomo.environ failed to import %s:\\nOriginal %s: %s\\n\"\\\n \"Traceback:\\n%s\" \\\n % (pname, exctype.__name__, err,\n ''.join(traceback.format_tb(tb)),)\n # clear local variables to remove circular references\n exctype = err = tb = None\n # TODO: Should this just log an error and re-raise the\n # original exception?\n raise ImportError(msg)\n\n pkg = _sys.modules[pname]\n pkg.load()\n #\n # Import optional packages\n #\n for name in _optional_packages:\n pname = name+'.plugins'\n try:\n _do_import(pname)\n except ImportError:\n continue\n pkg = _sys.modules[pname]\n pkg.load()\n\nfrom pyomo.util.plugin import PluginGlobals as _PG\n_PG.add_env(\"pyomo\")\n_import_packages()\n_PG.pop_env()\n\n#\n# Expose the symbols from pyomo.core\n#\nfrom pyomo.core import *\nfrom pyomo.opt import SolverFactory, SolverManagerFactory, UnknownSolver\n",
"path": "pyomo/environ/__init__.py"
}
] | [
{
"content": "# ___________________________________________________________________________\n#\n# Pyomo: Python Optimization Modeling Objects\n# Copyright 2017 National Technology and Engineering Solutions of Sandia, LLC\n# Under the terms of Contract DE-NA0003525 with National Technology and\n# Engineering Solutions of Sandia, LLC, the U.S. Government retains certain\n# rights in this software.\n# This software is distributed under the 3-clause BSD License.\n# ___________________________________________________________________________\n\nimport sys as _sys\nif _sys.version_info[0] >= 3:\n import importlib\n\n def _do_import(pkg_name):\n importlib.import_module(pkg_name)\nelse:\n def _do_import(pkg_name):\n __import__(pkg_name, globals(), locals(), [], -1)\n\n#\n# These packages contain plugins that need to be loaded\n#\n_packages = [\n 'pyomo.opt',\n 'pyomo.core',\n 'pyomo.checker',\n 'pyomo.repn',\n 'pyomo.pysp',\n 'pyomo.neos',\n 'pyomo.solvers',\n 'pyomo.gdp',\n 'pyomo.mpec',\n 'pyomo.dae',\n 'pyomo.bilevel',\n 'pyomo.scripting',\n]\n#\n#\n# These packages also contain plugins that need to be loaded, but\n# we silently ignore any import errors because these\n# packages are optional and/or under development.\n#\n_optional_packages = set([\n 'pyomo.contrib.example',\n 'pyomo.contrib.preprocessing',\n 'pyomo.contrib.gdpopt',\n 'pyomo.contrib.trustregion',\n])\n\n\ndef _import_packages():\n #\n # Import required packages\n #\n for name in _packages:\n pname = name+'.plugins'\n try:\n _do_import(pname)\n except ImportError:\n exctype, err, tb = _sys.exc_info() # BUG?\n import traceback\n msg = \"pyomo.environ failed to import %s:\\nOriginal %s: %s\\n\"\\\n \"Traceback:\\n%s\" \\\n % (pname, exctype.__name__, err,\n ''.join(traceback.format_tb(tb)),)\n # clear local variables to remove circular references\n exctype = err = tb = None\n # TODO: Should this just log an error and re-raise the\n # original exception?\n raise ImportError(msg)\n\n pkg = _sys.modules[pname]\n pkg.load()\n #\n # Import optional packages\n #\n for name in _optional_packages:\n pname = name+'.plugins'\n try:\n _do_import(pname)\n except ImportError:\n continue\n pkg = _sys.modules[pname]\n pkg.load()\n\nfrom pyomo.util.plugin import PluginGlobals as _PG\n_PG.add_env(\"pyomo\")\n_import_packages()\n_PG.pop_env()\n\n#\n# Expose the symbols from pyomo.core\n#\nfrom pyomo.core import *\nfrom pyomo.opt import (\n SolverFactory, SolverManagerFactory, UnknownSolver,\n TerminationCondition, SolverStatus,\n)\n",
"path": "pyomo/environ/__init__.py"
}
] | diff --git a/pyomo/environ/__init__.py b/pyomo/environ/__init__.py
index e8d3de7d3b3..01d842e0b29 100644
--- a/pyomo/environ/__init__.py
+++ b/pyomo/environ/__init__.py
@@ -93,4 +93,7 @@ def _import_packages():
# Expose the symbols from pyomo.core
#
from pyomo.core import *
-from pyomo.opt import SolverFactory, SolverManagerFactory, UnknownSolver
+from pyomo.opt import (
+ SolverFactory, SolverManagerFactory, UnknownSolver,
+ TerminationCondition, SolverStatus,
+)
|
nilearn__nilearn-1936 | _threshold_maps_ratio changes the input map
My maps images keep changing when I use RegionExtractor. I think we need to make a copy [here](https://github.com/nilearn/nilearn/blob/master/nilearn/regions/region_extractor.py#L58).
For instance, the following code throws an `AssertionError: Arrays are not equal`:
```Python
from nilearn._utils.data_gen import generate_maps
import numpy as np
from nilearn.regions.region_extractor import _threshold_maps_ratio

maps, mask_img = generate_maps((10, 10, 10), 30)
maps.get_data()[:5] = 100
maps_data = maps.get_data().copy()  # snapshot of the input data before thresholding
thresholded_maps = _threshold_maps_ratio(maps, threshold=1)
# Fails because _threshold_maps_ratio modified `maps` in place:
np.testing.assert_array_equal(maps.get_data(), maps_data)
```
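A sketch of the fix being suggested above: the tail of `_threshold_maps_ratio` as it currently stands, with a `.copy()` added so the caller's image is left untouched (names such as `maps`, `n_maps` and `ratio` are the ones already defined in that function):

```python
# Fragment of _threshold_maps_ratio, after `threshold` has been validated as `ratio`:
maps_data = _safe_get_data(maps, ensure_finite=True).copy()  # copy protects the input image

abs_maps = np.abs(maps_data)
cutoff_threshold = scoreatpercentile(
    abs_maps, 100. - (100. / n_maps) * ratio)
maps_data[abs_maps < cutoff_threshold] = 0.

threshold_maps_img = new_img_like(maps, maps_data)
```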
| [
{
"content": "\"\"\"\nBetter brain parcellations for Region of Interest analysis\n\"\"\"\n\nimport numbers\nimport collections\nimport numpy as np\n\nfrom scipy import ndimage\nfrom scipy.stats import scoreatpercentile\n\nfrom sklearn.externals.joblib import Memory\n\nfrom .. import masking\nfrom ..input_data import NiftiMapsMasker\nfrom .._utils import check_niimg, check_niimg_3d, check_niimg_4d\nfrom ..image import new_img_like, resample_img\nfrom ..image.image import _smooth_array, threshold_img\nfrom .._utils.niimg_conversions import concat_niimgs, _check_same_fov\nfrom .._utils.niimg import _safe_get_data\nfrom .._utils.compat import _basestring\nfrom .._utils.ndimage import _peak_local_max\nfrom .._utils.segmentation import _random_walker\n\n\ndef _threshold_maps_ratio(maps_img, threshold):\n \"\"\" Automatic thresholding of atlas maps image.\n\n Considers the given threshold as a ratio to the total number of voxels\n in the brain volume. This gives a certain number within the data\n voxel size which means that nonzero voxels which fall above than this\n size will be kept across all the maps.\n\n Parameters\n ----------\n maps_img: Niimg-like object\n an image of brain atlas maps.\n threshold: float\n If float, value is used as a ratio to n_voxels to get a certain threshold\n size in number to threshold the image. The value should be positive and\n within the range of number of maps (i.e. n_maps in 4th dimension).\n\n Returns\n -------\n threshold_maps_img: Nifti1Image\n gives us thresholded image.\n \"\"\"\n maps = check_niimg(maps_img)\n n_maps = maps.shape[-1]\n if not isinstance(threshold, numbers.Real) or threshold <= 0 or threshold > n_maps:\n raise ValueError(\"threshold given as ratio to the number of voxels must \"\n \"be Real number and should be positive and between 0 and \"\n \"total number of maps i.e. n_maps={0}. \"\n \"You provided {1}\".format(n_maps, threshold))\n else:\n ratio = threshold\n\n maps_data = _safe_get_data(maps, ensure_finite=True)\n\n abs_maps = np.abs(maps_data)\n # thresholding\n cutoff_threshold = scoreatpercentile(\n abs_maps, 100. - (100. / n_maps) * ratio)\n maps_data[abs_maps < cutoff_threshold] = 0.\n\n threshold_maps_img = new_img_like(maps, maps_data)\n\n return threshold_maps_img\n\n\ndef _remove_small_regions(input_data, index, affine, min_size):\n \"\"\"Remove small regions in volume from input_data of specified min_size.\n\n min_size should be specified in mm^3 (region size in volume).\n\n Parameters\n ----------\n input_data : numpy.ndarray\n Values inside the regions defined by labels contained in input_data\n are summed together to get the size and compare with given min_size.\n For example, see scipy.ndimage.label\n\n index : numpy.ndarray\n A sequence of label numbers of the regions to be measured corresponding\n to input_data. 
For example, sequence can be generated using\n np.arange(n_labels + 1)\n\n affine : numpy.ndarray\n Affine of input_data is used to convert size in voxels to size in\n volume of region in mm^3.\n\n min_size : float in mm^3\n Size of regions in input_data which falls below the specified min_size\n of volume in mm^3 will be discarded.\n\n Returns\n -------\n out : numpy.ndarray\n Data returned will have regions removed specified by min_size\n Otherwise, if criterion is not met then same input data will be\n returned.\n \"\"\"\n # with return_counts argument is introduced from numpy 1.9.0.\n # _, region_sizes = np.unique(input_data, return_counts=True)\n\n # For now, to count the region sizes, we use return_inverse from\n # np.unique and then use np.bincount to count the region sizes.\n\n _, region_indices = np.unique(input_data, return_inverse=True)\n region_sizes = np.bincount(region_indices)\n size_in_vox = min_size / np.abs(np.linalg.det(affine[:3, :3]))\n labels_kept = region_sizes > size_in_vox\n if not np.all(labels_kept):\n # Put to zero the indices not kept\n rejected_labels_mask = np.in1d(input_data,\n np.where(np.logical_not(labels_kept))[0]\n ).reshape(input_data.shape)\n # Avoid modifying the input:\n input_data = input_data.copy()\n input_data[rejected_labels_mask] = 0\n # Reorder the indices to avoid gaps\n input_data = np.searchsorted(np.unique(input_data), input_data)\n return input_data\n\n\ndef connected_regions(maps_img, min_region_size=1350,\n extract_type='local_regions', smoothing_fwhm=6,\n mask_img=None):\n \"\"\" Extraction of brain connected regions into separate regions.\n\n Note: the region size should be defined in mm^3. See the documentation for\n more details.\n\n .. versionadded:: 0.2\n\n Parameters\n ----------\n maps_img: Niimg-like object\n an image of brain activation or atlas maps to be extracted into set of\n separate brain regions.\n\n min_region_size: int, default 1350 mm^3, optional\n Minimum volume in mm3 for a region to be kept. For example, if the voxel\n size is 3x3x3 mm then the volume of the voxel is 27mm^3. By default, it\n is 1350mm^3 which means we take minimum size of 1350 / 27 = 50 voxels.\n\n extract_type: str {'connected_components', 'local_regions'} \\\n default local_regions, optional\n If 'connected_components', each component/region in the image is extracted\n automatically by labelling each region based upon the presence of unique\n features in their respective regions.\n If 'local_regions', each component/region is extracted based on their\n maximum peak value to define a seed marker and then using random walker\n segementation algorithm on these markers for region separation.\n\n smoothing_fwhm: scalar, default 6mm, optional\n To smooth an image to extract most sparser regions. This parameter\n is passed `_smooth_array` and exists only for extract_type 'local_regions'.\n\n mask_img: Niimg-like object, default None\n If given, mask image is applied to input data.\n If None, no masking is applied.\n\n Returns\n -------\n regions_extracted_img: Nifti1Image\n gives the image in 4D of extracted brain regions. 
Each 3D image consists\n of only one separated region.\n\n index_of_each_map: numpy array\n an array of list of indices where each index denotes the identity\n of each extracted region to their family of brain maps.\n\n See Also\n --------\n nilearn.regions.connected_label_regions : A function can be used for\n extraction of regions on labels based atlas images.\n\n nilearn.regions.RegionExtractor : A class can be used for both\n region extraction on continuous type atlas images and\n also time series signals extraction from regions extracted.\n \"\"\"\n all_regions_imgs = []\n index_of_each_map = []\n maps_img = check_niimg(maps_img, atleast_4d=True)\n maps = _safe_get_data(maps_img).copy()\n affine = maps_img.affine\n min_region_size = min_region_size / np.abs(np.linalg.det(affine[:3, :3]))\n\n allowed_extract_types = ['connected_components', 'local_regions']\n if extract_type not in allowed_extract_types:\n message = (\"'extract_type' should be given either of these {0} \"\n \"You provided extract_type='{1}'\").format(allowed_extract_types, extract_type)\n raise ValueError(message)\n\n if mask_img is not None:\n if not _check_same_fov(maps_img, mask_img):\n mask_img = resample_img(mask_img,\n target_affine=maps_img.affine,\n target_shape=maps_img.shape[:3],\n interpolation=\"nearest\")\n mask_data, _ = masking._load_mask_img(mask_img)\n # Set as 0 to the values which are outside of the mask\n maps[mask_data == 0.] = 0.\n\n for index in range(maps.shape[-1]):\n regions = []\n map_3d = maps[..., index]\n # Mark the seeds using random walker\n if extract_type == 'local_regions':\n smooth_map = _smooth_array(map_3d, affine=affine, fwhm=smoothing_fwhm)\n seeds = _peak_local_max(smooth_map)\n seeds_label, seeds_id = ndimage.label(seeds)\n # Assign -1 to values which are 0. to indicate to ignore\n seeds_label[map_3d == 0.] = -1\n rw_maps = _random_walker(map_3d, seeds_label)\n # Now simply replace \"-1\" with \"0\" for regions separation\n rw_maps[rw_maps == -1] = 0.\n label_maps = rw_maps\n else:\n # Connected component extraction\n label_maps, n_labels = ndimage.label(map_3d)\n\n # Takes the size of each labelized region data\n labels_size = np.bincount(label_maps.ravel())\n # set background labels sitting in zero index to zero\n labels_size[0] = 0.\n for label_id, label_size in enumerate(labels_size):\n if label_size > min_region_size:\n region_data = (label_maps == label_id) * map_3d\n region_img = new_img_like(maps_img, region_data)\n regions.append(region_img)\n\n index_of_each_map.extend([index] * len(regions))\n all_regions_imgs.extend(regions)\n\n regions_extracted_img = concat_niimgs(all_regions_imgs)\n\n return regions_extracted_img, index_of_each_map\n\n\nclass RegionExtractor(NiftiMapsMasker):\n \"\"\"Class for brain region extraction.\n\n Region Extraction is a post processing technique which\n is implemented to automatically segment each brain atlas maps\n into different set of separated brain activated region.\n Particularly, to show that each decomposed brain maps can be\n used to focus on a target specific Regions of Interest analysis.\n\n .. versionadded:: 0.2\n\n Parameters\n ----------\n maps_img: 4D Niimg-like object\n Image containing a set of whole brain atlas maps or statistically\n decomposed brain maps.\n\n mask_img: Niimg-like object or None, default None, optional\n Mask to be applied to input data, passed to NiftiMapsMasker.\n If None, no masking is applied.\n\n min_region_size: float, default 1350 mm^3, optional\n Minimum volume in mm3 for a region to be kept. 
For example, if\n the voxel size is 3x3x3 mm then the volume of the voxel is\n 27mm^3. By default, it is 1350mm^3 which means we take minimum\n size of 1350 / 27 = 50 voxels.\n\n threshold: number, default 1., optional\n A value used either in ratio_n_voxels or img_value or percentile\n `thresholding_strategy` based upon the choice of selection.\n\n thresholding_strategy: str {'ratio_n_voxels', 'img_value', 'percentile'}, optional\n If default 'ratio_n_voxels', we apply thresholding that will keep\n the more intense nonzero brain voxels (denoted as n_voxels)\n across all maps (n_voxels being the number of voxels in the brain\n volume). A float value given in `threshold` parameter indicates\n the ratio of voxels to keep meaning (if float=2. then maps will\n together have 2. x n_voxels non-zero voxels). If set to\n 'percentile', images are thresholded based on the score obtained\n with the given percentile on the data and the voxel intensities\n which are survived above this obtained score will be kept. If set\n to 'img_value', we apply thresholding based on the non-zero voxel\n intensities across all maps. A value given in `threshold`\n parameter indicates that we keep only those voxels which have\n intensities more than this value.\n\n extractor: str {'connected_components', 'local_regions'} default 'local_regions', optional\n If 'connected_components', each component/region in the image is\n extracted automatically by labelling each region based upon the\n presence of unique features in their respective regions. If\n 'local_regions', each component/region is extracted based on\n their maximum peak value to define a seed marker and then using\n random walker segementation algorithm on these markers for region\n separation.\n\n smoothing_fwhm: scalar, default 6mm, optional\n To smooth an image to extract most sparser regions. This parameter\n is passed to `connected_regions` and exists only for extractor\n 'local_regions'. Please set this parameter according to maps\n resolution, otherwise extraction will fail.\n\n standardize: bool, True or False, default False, optional\n If True, the time series signals are centered and normalized by\n putting their mean to 0 and variance to 1. Recommended to\n set as True if signals are not already standardized.\n passed to class NiftiMapsMasker.\n\n detrend: bool, True or False, default False, optional\n This parameter is passed to nilearn.signal.clean basically\n indicates whether to detrend timeseries signals or not.\n passed to class NiftiMapsMasker.\n\n low_pass: float, default None, optional\n This value will be applied on the signals by passing to signal.clean\n Please see the related documentation signal.clean for more details.\n passed to class NiftiMapsMasker.\n\n high_pass: float, default None, optional\n This value will be applied on the signals by passing to signal.clean\n Please see the related documentation signal.clean for more details.\n passed to NiftiMapsMasker.\n\n t_r: float, default None, optional\n Repetition time in sec. This value is given to signal.clean\n Please see the related documentation for details.\n passed to NiftiMapsMasker.\n\n memory: instance of joblib.Memory, string, default None, optional\n Used to cache the masking process. If a string is given, the path\n is set with this string as a folder name in the directory.\n passed to NiftiMapsMasker.\n\n memory_level: int, default 0, optional\n Aggressiveness of memory catching. The higher the number, the higher\n the number of functions that will be cached. 
Zero mean no caching.\n passed to NiftiMapsMasker.\n\n verbose: int, default 0, optional\n Indicates the level of verbosity by printing the message. Zero\n indicates nothing is printed.\n\n Attributes\n ----------\n `index_` : numpy array\n array of list of indices where each index value is assigned to\n each separate region of its corresponding family of brain maps.\n\n `regions_img_` : Nifti1Image\n List of separated regions with each region lying on an\n original volume concatenated into a 4D image.\n\n References\n ----------\n * Abraham et al. \"Region segmentation for sparse decompositions:\n better brain parcellations from rest fMRI\", Sparsity Techniques in\n Medical Imaging, Sep 2014, Boston, United States. pp.8\n\n See Also\n --------\n nilearn.regions.connected_label_regions : A function can be readily\n used for extraction of regions on labels based atlas images.\n\n \"\"\"\n def __init__(self, maps_img, mask_img=None, min_region_size=1350,\n threshold=1., thresholding_strategy='ratio_n_voxels',\n extractor='local_regions', smoothing_fwhm=6,\n standardize=False, detrend=False,\n low_pass=None, high_pass=None, t_r=None,\n memory=Memory(cachedir=None), memory_level=0, verbose=0):\n super(RegionExtractor, self).__init__(\n maps_img=maps_img, mask_img=mask_img,\n smoothing_fwhm=smoothing_fwhm,\n standardize=standardize, detrend=detrend, low_pass=low_pass,\n high_pass=high_pass, t_r=t_r, memory=memory,\n memory_level=memory_level, verbose=verbose)\n self.maps_img = maps_img\n self.min_region_size = min_region_size\n self.thresholding_strategy = thresholding_strategy\n self.threshold = threshold\n self.extractor = extractor\n self.smoothing_fwhm = smoothing_fwhm\n\n def fit(self, X=None, y=None):\n \"\"\" Prepare the data and setup for the region extraction\n \"\"\"\n maps_img = check_niimg_4d(self.maps_img)\n\n list_of_strategies = ['ratio_n_voxels', 'img_value', 'percentile']\n if self.thresholding_strategy not in list_of_strategies:\n message = (\"'thresholding_strategy' should be \"\n \"either of these {0}\").format(list_of_strategies)\n raise ValueError(message)\n\n if self.threshold is None or isinstance(self.threshold, _basestring):\n raise ValueError(\"The given input to threshold is not valid. \"\n \"Please submit a valid number specific to either of \"\n \"the strategy in {0}\".format(list_of_strategies))\n elif isinstance(self.threshold, numbers.Number):\n # foreground extraction\n if self.thresholding_strategy == 'ratio_n_voxels':\n threshold_maps = _threshold_maps_ratio(maps_img, self.threshold)\n else:\n if self.thresholding_strategy == 'percentile':\n self.threshold = \"{0}%\".format(self.threshold)\n threshold_maps = threshold_img(maps_img, mask_img=self.mask_img,\n threshold=self.threshold)\n\n # connected component extraction\n self.regions_img_, self.index_ = connected_regions(threshold_maps,\n self.min_region_size,\n self.extractor,\n self.smoothing_fwhm)\n\n self.maps_img = self.regions_img_\n super(RegionExtractor, self).fit()\n\n return self\n\n\ndef connected_label_regions(labels_img, min_size=None, connect_diag=True,\n labels=None):\n \"\"\" Extract connected regions from a brain atlas image defined by labels\n (integers).\n\n For each label in an parcellations, separates out connected\n components and assigns to each separated region a unique label.\n\n Parameters\n ----------\n\n labels_img : Nifti-like image\n A 3D image which contains regions denoted as labels. 
Each region\n is assigned with integers.\n\n min_size : float, in mm^3 optional (default None)\n Minimum region size in volume required to keep after extraction.\n Removes small or spurious regions.\n\n connect_diag : bool (default True)\n If 'connect_diag' is True, two voxels are considered in the same region\n if they are connected along the diagonal (26-connectivity). If it is\n False, two voxels are considered connected only if they are within the\n same x, y, or z direction.\n\n labels : 1D numpy array or list of str, (default None), optional\n Each string in a list or array denote the name of the brain atlas\n regions given in labels_img input. If provided, same names will be\n re-assigned corresponding to each connected component based extraction\n of regions relabelling. The total number of names should match with the\n number of labels assigned in the image.\n\n NOTE: The order of the names given in labels should be appropriately\n matched with the unique labels (integers) assigned to each region\n given in labels_img (also excluding 'Background' label).\n\n Returns\n -------\n new_labels_img : Nifti-like image\n A new image comprising of regions extracted on an input labels_img.\n\n new_labels : list, optional\n If labels are provided, new labels assigned to region extracted will\n be returned. Otherwise, only new labels image will be returned.\n\n See Also\n --------\n nilearn.datasets.fetch_atlas_harvard_oxford : For an example of atlas with\n labels.\n\n nilearn.regions.RegionExtractor : A class can be used for region extraction\n on continuous type atlas images.\n\n nilearn.regions.connected_regions : A function used for region extraction\n on continuous type atlas images.\n\n \"\"\"\n labels_img = check_niimg_3d(labels_img)\n labels_data = _safe_get_data(labels_img, ensure_finite=True)\n affine = labels_img.affine\n\n check_unique_labels = np.unique(labels_data)\n\n if min_size is not None and not isinstance(min_size, numbers.Number):\n raise ValueError(\"Expected 'min_size' to be specified as integer. \"\n \"You provided {0}\".format(min_size))\n if not isinstance(connect_diag, bool):\n raise ValueError(\"'connect_diag' must be specified as True or False. \"\n \"You provided {0}\".format(connect_diag))\n if np.any(check_unique_labels < 0):\n raise ValueError(\"The 'labels_img' you provided has unknown/negative \"\n \"integers as labels {0} assigned to regions. \"\n \"All regions in an image should have positive \"\n \"integers assigned as labels.\"\n .format(check_unique_labels))\n\n unique_labels = set(check_unique_labels)\n # check for background label indicated as 0\n if np.any(check_unique_labels == 0):\n unique_labels.remove(0)\n\n if labels is not None:\n if (not isinstance(labels, collections.Iterable) or\n isinstance(labels, _basestring)):\n labels = [labels, ]\n if len(unique_labels) != len(labels):\n raise ValueError(\"The number of labels: {0} provided as input \"\n \"in labels={1} does not match with the number \"\n \"of unique labels in labels_img: {2}. 
\"\n \"Please provide appropriate match with unique \"\n \"number of labels in labels_img.\"\n .format(len(labels), labels, len(unique_labels)))\n new_names = []\n\n if labels is None:\n this_labels = [None] * len(unique_labels)\n else:\n this_labels = labels\n\n new_labels_data = np.zeros(labels_data.shape, dtype=np.int)\n current_max_label = 0\n for label_id, name in zip(unique_labels, this_labels):\n this_label_mask = (labels_data == label_id)\n # Extract regions assigned to each label id\n if connect_diag:\n structure = np.ones((3, 3, 3), dtype=np.int)\n regions, this_n_labels = ndimage.label(\n this_label_mask.astype(np.int), structure=structure)\n else:\n regions, this_n_labels = ndimage.label(this_label_mask.astype(np.int))\n\n if min_size is not None:\n index = np.arange(this_n_labels + 1)\n regions = _remove_small_regions(regions, index, affine,\n min_size=min_size)\n this_n_labels = regions.max()\n\n cur_regions = regions[regions != 0] + current_max_label\n new_labels_data[regions != 0] = cur_regions\n current_max_label += this_n_labels\n if name is not None:\n new_names.extend([name] * this_n_labels)\n\n new_labels_img = new_img_like(labels_img, new_labels_data, affine=affine)\n if labels is not None:\n new_labels = new_names\n return new_labels_img, new_labels\n\n return new_labels_img\n",
"path": "nilearn/regions/region_extractor.py"
}
] | [
{
"content": "\"\"\"\nBetter brain parcellations for Region of Interest analysis\n\"\"\"\n\nimport numbers\nimport collections\nimport numpy as np\n\nfrom scipy import ndimage\nfrom scipy.stats import scoreatpercentile\n\nfrom sklearn.externals.joblib import Memory\n\nfrom .. import masking\nfrom ..input_data import NiftiMapsMasker\nfrom .._utils import check_niimg, check_niimg_3d, check_niimg_4d\nfrom ..image import new_img_like, resample_img\nfrom ..image.image import _smooth_array, threshold_img\nfrom .._utils.niimg_conversions import concat_niimgs, _check_same_fov\nfrom .._utils.niimg import _safe_get_data\nfrom .._utils.compat import _basestring\nfrom .._utils.ndimage import _peak_local_max\nfrom .._utils.segmentation import _random_walker\n\n\ndef _threshold_maps_ratio(maps_img, threshold):\n \"\"\" Automatic thresholding of atlas maps image.\n\n Considers the given threshold as a ratio to the total number of voxels\n in the brain volume. This gives a certain number within the data\n voxel size which means that nonzero voxels which fall above than this\n size will be kept across all the maps.\n\n Parameters\n ----------\n maps_img: Niimg-like object\n an image of brain atlas maps.\n threshold: float\n If float, value is used as a ratio to n_voxels to get a certain threshold\n size in number to threshold the image. The value should be positive and\n within the range of number of maps (i.e. n_maps in 4th dimension).\n\n Returns\n -------\n threshold_maps_img: Nifti1Image\n gives us thresholded image.\n \"\"\"\n maps = check_niimg(maps_img)\n n_maps = maps.shape[-1]\n if not isinstance(threshold, numbers.Real) or threshold <= 0 or threshold > n_maps:\n raise ValueError(\"threshold given as ratio to the number of voxels must \"\n \"be Real number and should be positive and between 0 and \"\n \"total number of maps i.e. n_maps={0}. \"\n \"You provided {1}\".format(n_maps, threshold))\n else:\n ratio = threshold\n\n maps_data = _safe_get_data(maps, ensure_finite=True).copy()\n\n abs_maps = np.abs(maps_data)\n # thresholding\n cutoff_threshold = scoreatpercentile(\n abs_maps, 100. - (100. / n_maps) * ratio)\n maps_data[abs_maps < cutoff_threshold] = 0.\n\n threshold_maps_img = new_img_like(maps, maps_data)\n\n return threshold_maps_img\n\n\ndef _remove_small_regions(input_data, index, affine, min_size):\n \"\"\"Remove small regions in volume from input_data of specified min_size.\n\n min_size should be specified in mm^3 (region size in volume).\n\n Parameters\n ----------\n input_data : numpy.ndarray\n Values inside the regions defined by labels contained in input_data\n are summed together to get the size and compare with given min_size.\n For example, see scipy.ndimage.label\n\n index : numpy.ndarray\n A sequence of label numbers of the regions to be measured corresponding\n to input_data. 
For example, sequence can be generated using\n np.arange(n_labels + 1)\n\n affine : numpy.ndarray\n Affine of input_data is used to convert size in voxels to size in\n volume of region in mm^3.\n\n min_size : float in mm^3\n Size of regions in input_data which falls below the specified min_size\n of volume in mm^3 will be discarded.\n\n Returns\n -------\n out : numpy.ndarray\n Data returned will have regions removed specified by min_size\n Otherwise, if criterion is not met then same input data will be\n returned.\n \"\"\"\n # with return_counts argument is introduced from numpy 1.9.0.\n # _, region_sizes = np.unique(input_data, return_counts=True)\n\n # For now, to count the region sizes, we use return_inverse from\n # np.unique and then use np.bincount to count the region sizes.\n\n _, region_indices = np.unique(input_data, return_inverse=True)\n region_sizes = np.bincount(region_indices)\n size_in_vox = min_size / np.abs(np.linalg.det(affine[:3, :3]))\n labels_kept = region_sizes > size_in_vox\n if not np.all(labels_kept):\n # Put to zero the indices not kept\n rejected_labels_mask = np.in1d(input_data,\n np.where(np.logical_not(labels_kept))[0]\n ).reshape(input_data.shape)\n # Avoid modifying the input:\n input_data = input_data.copy()\n input_data[rejected_labels_mask] = 0\n # Reorder the indices to avoid gaps\n input_data = np.searchsorted(np.unique(input_data), input_data)\n return input_data\n\n\ndef connected_regions(maps_img, min_region_size=1350,\n extract_type='local_regions', smoothing_fwhm=6,\n mask_img=None):\n \"\"\" Extraction of brain connected regions into separate regions.\n\n Note: the region size should be defined in mm^3. See the documentation for\n more details.\n\n .. versionadded:: 0.2\n\n Parameters\n ----------\n maps_img: Niimg-like object\n an image of brain activation or atlas maps to be extracted into set of\n separate brain regions.\n\n min_region_size: int, default 1350 mm^3, optional\n Minimum volume in mm3 for a region to be kept. For example, if the voxel\n size is 3x3x3 mm then the volume of the voxel is 27mm^3. By default, it\n is 1350mm^3 which means we take minimum size of 1350 / 27 = 50 voxels.\n\n extract_type: str {'connected_components', 'local_regions'} \\\n default local_regions, optional\n If 'connected_components', each component/region in the image is extracted\n automatically by labelling each region based upon the presence of unique\n features in their respective regions.\n If 'local_regions', each component/region is extracted based on their\n maximum peak value to define a seed marker and then using random walker\n segementation algorithm on these markers for region separation.\n\n smoothing_fwhm: scalar, default 6mm, optional\n To smooth an image to extract most sparser regions. This parameter\n is passed `_smooth_array` and exists only for extract_type 'local_regions'.\n\n mask_img: Niimg-like object, default None\n If given, mask image is applied to input data.\n If None, no masking is applied.\n\n Returns\n -------\n regions_extracted_img: Nifti1Image\n gives the image in 4D of extracted brain regions. 
Each 3D image consists\n of only one separated region.\n\n index_of_each_map: numpy array\n an array of list of indices where each index denotes the identity\n of each extracted region to their family of brain maps.\n\n See Also\n --------\n nilearn.regions.connected_label_regions : A function can be used for\n extraction of regions on labels based atlas images.\n\n nilearn.regions.RegionExtractor : A class can be used for both\n region extraction on continuous type atlas images and\n also time series signals extraction from regions extracted.\n \"\"\"\n all_regions_imgs = []\n index_of_each_map = []\n maps_img = check_niimg(maps_img, atleast_4d=True)\n maps = _safe_get_data(maps_img).copy()\n affine = maps_img.affine\n min_region_size = min_region_size / np.abs(np.linalg.det(affine[:3, :3]))\n\n allowed_extract_types = ['connected_components', 'local_regions']\n if extract_type not in allowed_extract_types:\n message = (\"'extract_type' should be given either of these {0} \"\n \"You provided extract_type='{1}'\").format(allowed_extract_types, extract_type)\n raise ValueError(message)\n\n if mask_img is not None:\n if not _check_same_fov(maps_img, mask_img):\n mask_img = resample_img(mask_img,\n target_affine=maps_img.affine,\n target_shape=maps_img.shape[:3],\n interpolation=\"nearest\")\n mask_data, _ = masking._load_mask_img(mask_img)\n # Set as 0 to the values which are outside of the mask\n maps[mask_data == 0.] = 0.\n\n for index in range(maps.shape[-1]):\n regions = []\n map_3d = maps[..., index]\n # Mark the seeds using random walker\n if extract_type == 'local_regions':\n smooth_map = _smooth_array(map_3d, affine=affine, fwhm=smoothing_fwhm)\n seeds = _peak_local_max(smooth_map)\n seeds_label, seeds_id = ndimage.label(seeds)\n # Assign -1 to values which are 0. to indicate to ignore\n seeds_label[map_3d == 0.] = -1\n rw_maps = _random_walker(map_3d, seeds_label)\n # Now simply replace \"-1\" with \"0\" for regions separation\n rw_maps[rw_maps == -1] = 0.\n label_maps = rw_maps\n else:\n # Connected component extraction\n label_maps, n_labels = ndimage.label(map_3d)\n\n # Takes the size of each labelized region data\n labels_size = np.bincount(label_maps.ravel())\n # set background labels sitting in zero index to zero\n labels_size[0] = 0.\n for label_id, label_size in enumerate(labels_size):\n if label_size > min_region_size:\n region_data = (label_maps == label_id) * map_3d\n region_img = new_img_like(maps_img, region_data)\n regions.append(region_img)\n\n index_of_each_map.extend([index] * len(regions))\n all_regions_imgs.extend(regions)\n\n regions_extracted_img = concat_niimgs(all_regions_imgs)\n\n return regions_extracted_img, index_of_each_map\n\n\nclass RegionExtractor(NiftiMapsMasker):\n \"\"\"Class for brain region extraction.\n\n Region Extraction is a post processing technique which\n is implemented to automatically segment each brain atlas maps\n into different set of separated brain activated region.\n Particularly, to show that each decomposed brain maps can be\n used to focus on a target specific Regions of Interest analysis.\n\n .. versionadded:: 0.2\n\n Parameters\n ----------\n maps_img: 4D Niimg-like object\n Image containing a set of whole brain atlas maps or statistically\n decomposed brain maps.\n\n mask_img: Niimg-like object or None, default None, optional\n Mask to be applied to input data, passed to NiftiMapsMasker.\n If None, no masking is applied.\n\n min_region_size: float, default 1350 mm^3, optional\n Minimum volume in mm3 for a region to be kept. 
For example, if\n the voxel size is 3x3x3 mm then the volume of the voxel is\n 27mm^3. By default, it is 1350mm^3 which means we take minimum\n size of 1350 / 27 = 50 voxels.\n\n threshold: number, default 1., optional\n A value used either in ratio_n_voxels or img_value or percentile\n `thresholding_strategy` based upon the choice of selection.\n\n thresholding_strategy: str {'ratio_n_voxels', 'img_value', 'percentile'}, optional\n If default 'ratio_n_voxels', we apply thresholding that will keep\n the more intense nonzero brain voxels (denoted as n_voxels)\n across all maps (n_voxels being the number of voxels in the brain\n volume). A float value given in `threshold` parameter indicates\n the ratio of voxels to keep meaning (if float=2. then maps will\n together have 2. x n_voxels non-zero voxels). If set to\n 'percentile', images are thresholded based on the score obtained\n with the given percentile on the data and the voxel intensities\n which are survived above this obtained score will be kept. If set\n to 'img_value', we apply thresholding based on the non-zero voxel\n intensities across all maps. A value given in `threshold`\n parameter indicates that we keep only those voxels which have\n intensities more than this value.\n\n extractor: str {'connected_components', 'local_regions'} default 'local_regions', optional\n If 'connected_components', each component/region in the image is\n extracted automatically by labelling each region based upon the\n presence of unique features in their respective regions. If\n 'local_regions', each component/region is extracted based on\n their maximum peak value to define a seed marker and then using\n random walker segementation algorithm on these markers for region\n separation.\n\n smoothing_fwhm: scalar, default 6mm, optional\n To smooth an image to extract most sparser regions. This parameter\n is passed to `connected_regions` and exists only for extractor\n 'local_regions'. Please set this parameter according to maps\n resolution, otherwise extraction will fail.\n\n standardize: bool, True or False, default False, optional\n If True, the time series signals are centered and normalized by\n putting their mean to 0 and variance to 1. Recommended to\n set as True if signals are not already standardized.\n passed to class NiftiMapsMasker.\n\n detrend: bool, True or False, default False, optional\n This parameter is passed to nilearn.signal.clean basically\n indicates whether to detrend timeseries signals or not.\n passed to class NiftiMapsMasker.\n\n low_pass: float, default None, optional\n This value will be applied on the signals by passing to signal.clean\n Please see the related documentation signal.clean for more details.\n passed to class NiftiMapsMasker.\n\n high_pass: float, default None, optional\n This value will be applied on the signals by passing to signal.clean\n Please see the related documentation signal.clean for more details.\n passed to NiftiMapsMasker.\n\n t_r: float, default None, optional\n Repetition time in sec. This value is given to signal.clean\n Please see the related documentation for details.\n passed to NiftiMapsMasker.\n\n memory: instance of joblib.Memory, string, default None, optional\n Used to cache the masking process. If a string is given, the path\n is set with this string as a folder name in the directory.\n passed to NiftiMapsMasker.\n\n memory_level: int, default 0, optional\n Aggressiveness of memory catching. The higher the number, the higher\n the number of functions that will be cached. 
Zero mean no caching.\n passed to NiftiMapsMasker.\n\n verbose: int, default 0, optional\n Indicates the level of verbosity by printing the message. Zero\n indicates nothing is printed.\n\n Attributes\n ----------\n `index_` : numpy array\n array of list of indices where each index value is assigned to\n each separate region of its corresponding family of brain maps.\n\n `regions_img_` : Nifti1Image\n List of separated regions with each region lying on an\n original volume concatenated into a 4D image.\n\n References\n ----------\n * Abraham et al. \"Region segmentation for sparse decompositions:\n better brain parcellations from rest fMRI\", Sparsity Techniques in\n Medical Imaging, Sep 2014, Boston, United States. pp.8\n\n See Also\n --------\n nilearn.regions.connected_label_regions : A function can be readily\n used for extraction of regions on labels based atlas images.\n\n \"\"\"\n def __init__(self, maps_img, mask_img=None, min_region_size=1350,\n threshold=1., thresholding_strategy='ratio_n_voxels',\n extractor='local_regions', smoothing_fwhm=6,\n standardize=False, detrend=False,\n low_pass=None, high_pass=None, t_r=None,\n memory=Memory(cachedir=None), memory_level=0, verbose=0):\n super(RegionExtractor, self).__init__(\n maps_img=maps_img, mask_img=mask_img,\n smoothing_fwhm=smoothing_fwhm,\n standardize=standardize, detrend=detrend, low_pass=low_pass,\n high_pass=high_pass, t_r=t_r, memory=memory,\n memory_level=memory_level, verbose=verbose)\n self.maps_img = maps_img\n self.min_region_size = min_region_size\n self.thresholding_strategy = thresholding_strategy\n self.threshold = threshold\n self.extractor = extractor\n self.smoothing_fwhm = smoothing_fwhm\n\n def fit(self, X=None, y=None):\n \"\"\" Prepare the data and setup for the region extraction\n \"\"\"\n maps_img = check_niimg_4d(self.maps_img)\n\n list_of_strategies = ['ratio_n_voxels', 'img_value', 'percentile']\n if self.thresholding_strategy not in list_of_strategies:\n message = (\"'thresholding_strategy' should be \"\n \"either of these {0}\").format(list_of_strategies)\n raise ValueError(message)\n\n if self.threshold is None or isinstance(self.threshold, _basestring):\n raise ValueError(\"The given input to threshold is not valid. \"\n \"Please submit a valid number specific to either of \"\n \"the strategy in {0}\".format(list_of_strategies))\n elif isinstance(self.threshold, numbers.Number):\n # foreground extraction\n if self.thresholding_strategy == 'ratio_n_voxels':\n threshold_maps = _threshold_maps_ratio(maps_img, self.threshold)\n else:\n if self.thresholding_strategy == 'percentile':\n self.threshold = \"{0}%\".format(self.threshold)\n threshold_maps = threshold_img(maps_img, mask_img=self.mask_img,\n threshold=self.threshold)\n\n # connected component extraction\n self.regions_img_, self.index_ = connected_regions(threshold_maps,\n self.min_region_size,\n self.extractor,\n self.smoothing_fwhm)\n\n self.maps_img = self.regions_img_\n super(RegionExtractor, self).fit()\n\n return self\n\n\ndef connected_label_regions(labels_img, min_size=None, connect_diag=True,\n labels=None):\n \"\"\" Extract connected regions from a brain atlas image defined by labels\n (integers).\n\n For each label in an parcellations, separates out connected\n components and assigns to each separated region a unique label.\n\n Parameters\n ----------\n\n labels_img : Nifti-like image\n A 3D image which contains regions denoted as labels. 
Each region\n is assigned with integers.\n\n min_size : float, in mm^3 optional (default None)\n Minimum region size in volume required to keep after extraction.\n Removes small or spurious regions.\n\n connect_diag : bool (default True)\n If 'connect_diag' is True, two voxels are considered in the same region\n if they are connected along the diagonal (26-connectivity). If it is\n False, two voxels are considered connected only if they are within the\n same x, y, or z direction.\n\n labels : 1D numpy array or list of str, (default None), optional\n Each string in a list or array denote the name of the brain atlas\n regions given in labels_img input. If provided, same names will be\n re-assigned corresponding to each connected component based extraction\n of regions relabelling. The total number of names should match with the\n number of labels assigned in the image.\n\n NOTE: The order of the names given in labels should be appropriately\n matched with the unique labels (integers) assigned to each region\n given in labels_img (also excluding 'Background' label).\n\n Returns\n -------\n new_labels_img : Nifti-like image\n A new image comprising of regions extracted on an input labels_img.\n\n new_labels : list, optional\n If labels are provided, new labels assigned to region extracted will\n be returned. Otherwise, only new labels image will be returned.\n\n See Also\n --------\n nilearn.datasets.fetch_atlas_harvard_oxford : For an example of atlas with\n labels.\n\n nilearn.regions.RegionExtractor : A class can be used for region extraction\n on continuous type atlas images.\n\n nilearn.regions.connected_regions : A function used for region extraction\n on continuous type atlas images.\n\n \"\"\"\n labels_img = check_niimg_3d(labels_img)\n labels_data = _safe_get_data(labels_img, ensure_finite=True)\n affine = labels_img.affine\n\n check_unique_labels = np.unique(labels_data)\n\n if min_size is not None and not isinstance(min_size, numbers.Number):\n raise ValueError(\"Expected 'min_size' to be specified as integer. \"\n \"You provided {0}\".format(min_size))\n if not isinstance(connect_diag, bool):\n raise ValueError(\"'connect_diag' must be specified as True or False. \"\n \"You provided {0}\".format(connect_diag))\n if np.any(check_unique_labels < 0):\n raise ValueError(\"The 'labels_img' you provided has unknown/negative \"\n \"integers as labels {0} assigned to regions. \"\n \"All regions in an image should have positive \"\n \"integers assigned as labels.\"\n .format(check_unique_labels))\n\n unique_labels = set(check_unique_labels)\n # check for background label indicated as 0\n if np.any(check_unique_labels == 0):\n unique_labels.remove(0)\n\n if labels is not None:\n if (not isinstance(labels, collections.Iterable) or\n isinstance(labels, _basestring)):\n labels = [labels, ]\n if len(unique_labels) != len(labels):\n raise ValueError(\"The number of labels: {0} provided as input \"\n \"in labels={1} does not match with the number \"\n \"of unique labels in labels_img: {2}. 
\"\n \"Please provide appropriate match with unique \"\n \"number of labels in labels_img.\"\n .format(len(labels), labels, len(unique_labels)))\n new_names = []\n\n if labels is None:\n this_labels = [None] * len(unique_labels)\n else:\n this_labels = labels\n\n new_labels_data = np.zeros(labels_data.shape, dtype=np.int)\n current_max_label = 0\n for label_id, name in zip(unique_labels, this_labels):\n this_label_mask = (labels_data == label_id)\n # Extract regions assigned to each label id\n if connect_diag:\n structure = np.ones((3, 3, 3), dtype=np.int)\n regions, this_n_labels = ndimage.label(\n this_label_mask.astype(np.int), structure=structure)\n else:\n regions, this_n_labels = ndimage.label(this_label_mask.astype(np.int))\n\n if min_size is not None:\n index = np.arange(this_n_labels + 1)\n regions = _remove_small_regions(regions, index, affine,\n min_size=min_size)\n this_n_labels = regions.max()\n\n cur_regions = regions[regions != 0] + current_max_label\n new_labels_data[regions != 0] = cur_regions\n current_max_label += this_n_labels\n if name is not None:\n new_names.extend([name] * this_n_labels)\n\n new_labels_img = new_img_like(labels_img, new_labels_data, affine=affine)\n if labels is not None:\n new_labels = new_names\n return new_labels_img, new_labels\n\n return new_labels_img\n",
"path": "nilearn/regions/region_extractor.py"
}
] | diff --git a/nilearn/regions/region_extractor.py b/nilearn/regions/region_extractor.py
index c84bfc9c22..5701c5afa8 100644
--- a/nilearn/regions/region_extractor.py
+++ b/nilearn/regions/region_extractor.py
@@ -55,7 +55,7 @@ def _threshold_maps_ratio(maps_img, threshold):
else:
ratio = threshold
- maps_data = _safe_get_data(maps, ensure_finite=True)
+ maps_data = _safe_get_data(maps, ensure_finite=True).copy()
abs_maps = np.abs(maps_data)
# thresholding
diff --git a/nilearn/regions/tests/test_region_extractor.py b/nilearn/regions/tests/test_region_extractor.py
index 4687b9a67b..dff8c9c5ea 100644
--- a/nilearn/regions/tests/test_region_extractor.py
+++ b/nilearn/regions/tests/test_region_extractor.py
@@ -51,10 +51,16 @@ def test_threshold_maps_ratio():
# smoke test for function _threshold_maps_ratio with randomly
# generated maps
- # make sure that n_regions (4th dimension) are kept same even
- # in thresholded image
maps, _ = generate_maps((6, 8, 10), n_regions=3)
+
+ # test that there is no side effect
+ maps.get_data()[:3] = 100
+ maps_data = maps.get_data().copy()
thr_maps = _threshold_maps_ratio(maps, threshold=1.0)
+ np.testing.assert_array_equal(maps.get_data(), maps_data)
+
+ # make sure that n_regions (4th dimension) are kept same even
+ # in thresholded image
assert_true(thr_maps.shape[-1] == maps.shape[-1])
# check that the size should be same for 3D image
|
pulp__pulpcore-2498 | As a developer, I can have pytest run the unit tests
Author: @bmbouter (bmbouter)
Redmine Issue: 9643, https://pulp.plan.io/issues/9643
---
As part of the testing effort, it would be nice to have pytest run the unit tests in addition to our functional tests.
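A minimal sketch of one way this could look, assuming a root-level `conftest.py` and that database-backed tests are handled by an extra plugin such as pytest-django; the settings path mirrors `pulpcore/app/settings.py`, but the hook below is illustrative only and is not pulpcore's actual test configuration:

```python
# conftest.py (hypothetical sketch): configure Django before pytest collects
# unit-test modules that import models. Database-backed TestCases would still
# need something like pytest-django to create the test database.
import os

import django


def pytest_configure(config):
    # Assumed settings module; mirrors the pulpcore/app/settings.py shown in this record.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "pulpcore.app.settings")
    django.setup()
```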
| [
{
"content": "\"\"\"\nDjango settings for the Pulp Platform application\n\nNever import this module directly, instead `from django.conf import settings`, see\nhttps://docs.djangoproject.com/en/1.11/topics/settings/#using-settings-in-python-code\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\n\"\"\"\n\nimport sys\n\nfrom contextlib import suppress\nfrom gettext import gettext as _\nfrom importlib import import_module\nfrom logging import getLogger\nfrom pathlib import Path\nfrom pkg_resources import iter_entry_points\n\nfrom cryptography.fernet import Fernet\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection\n\nfrom pulpcore import constants\n\n# Build paths inside the project like this: BASE_DIR / ...\nBASE_DIR = Path(__file__).absolute().parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = False\n\nALLOWED_HOSTS = [\"*\"]\n\nDEPLOY_ROOT = Path(\"/var/lib/pulp\")\nMEDIA_ROOT = str(DEPLOY_ROOT / \"media\") # Django 3.1 adds support for pathlib.Path\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\n\nSTATIC_URL = \"/assets/\"\nSTATIC_ROOT = DEPLOY_ROOT / STATIC_URL.strip(\"/\")\n\nDEFAULT_FILE_STORAGE = \"pulpcore.app.models.storage.FileSystem\"\n\nWORKING_DIRECTORY = DEPLOY_ROOT / \"tmp\"\nFILE_UPLOAD_TEMP_DIR = WORKING_DIRECTORY\n\nCHUNKED_UPLOAD_DIR = \"upload\"\n\n# List of upload handler classes to be applied in order.\nFILE_UPLOAD_HANDLERS = (\"pulpcore.app.files.HashingFileUploadHandler\",)\n\nSECRET_KEY = True\n\n# Key used to encrypt fields in the database\nDB_ENCRYPTION_KEY = \"/etc/pulp/certs/database_fields.symmetric.key\"\n\n# API Root\nAPI_ROOT = \"/pulp/\"\n\n# Application definition\n\nINSTALLED_APPS = [\n # django stuff\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"import_export\",\n # third-party\n \"django_filters\",\n \"django_guid\",\n \"drf_spectacular\",\n \"guardian\",\n \"rest_framework\",\n # pulp core app\n \"pulpcore.app\",\n]\n\n# Enumerate the installed Pulp plugins during the loading process for use in the status API\nINSTALLED_PULP_PLUGINS = []\n\nfor entry_point in iter_entry_points(\"pulpcore.plugin\"):\n plugin_app_config = entry_point.load()\n INSTALLED_PULP_PLUGINS.append(entry_point.module_name)\n INSTALLED_APPS.append(plugin_app_config)\n\n# Optional apps that help with development, or augment Pulp in some non-critical way\nOPTIONAL_APPS = [\n \"crispy_forms\",\n \"django_extensions\",\n \"storages\",\n]\n\nfor app in OPTIONAL_APPS:\n # only import if app is installed\n with suppress(ImportError):\n import_module(app)\n INSTALLED_APPS.append(app)\n\nMIDDLEWARE = [\n \"django_guid.middleware.guid_middleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n 
\"django_currentuser.middleware.ThreadLocalUserMiddleware\",\n]\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n \"pulpcore.backends.ObjectRolePermissionBackend\",\n]\n\n# Disable django guardian anonymous user\n# https://django-guardian.readthedocs.io/en/stable/configuration.html#anonymous-user-name\nANONYMOUS_USER_NAME = None\n\nROOT_URLCONF = \"pulpcore.app.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR / \"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"pulpcore.app.wsgi.application\"\n\nREST_FRAMEWORK = {\n \"URL_FIELD_NAME\": \"pulp_href\",\n \"DEFAULT_FILTER_BACKENDS\": (\"django_filters.rest_framework.DjangoFilterBackend\",),\n \"DEFAULT_PAGINATION_CLASS\": \"rest_framework.pagination.LimitOffsetPagination\",\n \"PAGE_SIZE\": 100,\n \"DEFAULT_PERMISSION_CLASSES\": (\"pulpcore.plugin.access_policy.AccessPolicyFromDB\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.BasicAuthentication\",\n ),\n \"UPLOADED_FILES_USE_URL\": False,\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"pulpcore.openapi.PulpAutoSchema\",\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"},\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.11/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = \"USE_I18N\", True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# A set of default settings to use if the configuration file in\n# /etc/pulp/ is missing or if it does not have values for every setting\n\n# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"pulp\",\n \"USER\": \"pulp\",\n \"CONN_MAX_AGE\": 0,\n },\n}\n\n# Redis default config\nREDIS_URL = None\nREDIS_HOST = None\nREDIS_PORT = None\nREDIS_DB = 0\nREDIS_PASSWORD = None\nREDIS_SSL = False\nREDIS_SSL_CA_CERTS = None\n\n# https://docs.djangoproject.com/en/1.11/ref/settings/#logging and\n# https://docs.python.org/3/library/logging.config.html\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"simple\": {\"format\": \"pulp [%(correlation_id)s]: %(name)s:%(levelname)s: %(message)s\"}\n },\n \"filters\": {\"correlation_id\": {\"()\": \"django_guid.log_filters.CorrelationId\"}},\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"simple\",\n \"filters\": [\"correlation_id\"],\n }\n },\n \"loggers\": {\n \"\": {\n # The root logger\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n \"filters\": [\"correlation_id\"],\n },\n 
\"django_guid\": {\n \"handlers\": [\"console\"],\n \"level\": \"WARNING\",\n \"propagate\": False,\n },\n },\n}\n\nDRF_ACCESS_POLICY = {\"reusable_conditions\": [\"pulpcore.app.global_access_conditions\"]}\n\nCONTENT_PATH_PREFIX = \"/pulp/content/\"\nCONTENT_APP_TTL = 30\n\nWORKER_TTL = 30\n\n# how long to protect orphan content in minutes\nORPHAN_PROTECTION_TIME = 24 * 60\n\nREMOTE_USER_ENVIRON_NAME = \"REMOTE_USER\"\n\nALLOWED_IMPORT_PATHS = []\n\nALLOWED_EXPORT_PATHS = []\n\nPROFILE_STAGES_API = False\n\n# https://docs.pulpproject.org/pulpcore/configuration/settings.html#pulp-cache\nCACHE_ENABLED = False\nCACHE_SETTINGS = {\n \"EXPIRES_TTL\": 600, # 10 minutes\n}\n\nSPECTACULAR_SETTINGS = {\n \"SERVE_URLCONF\": ROOT_URLCONF,\n \"DEFAULT_GENERATOR_CLASS\": \"pulpcore.openapi.PulpSchemaGenerator\",\n \"DEFAULT_SCHEMA_CLASS\": \"pulpcore.openapi.PulpAutoSchema\",\n \"ENUM_ADD_EXPLICIT_BLANK_NULL_CHOICE\": False,\n \"COMPONENT_SPLIT_REQUEST\": True,\n \"COMPONENT_NO_READ_ONLY_REQUIRED\": True,\n \"GENERIC_ADDITIONAL_PROPERTIES\": None,\n \"DISABLE_ERRORS_AND_WARNINGS\": not DEBUG,\n \"TITLE\": \"Pulp 3 API\",\n \"DESCRIPTION\": \"Fetch, Upload, Organize, and Distribute Software Packages\",\n \"VERSION\": \"v3\",\n \"CONTACT\": {\n \"name\": \"Pulp Team\",\n \"email\": \"[email protected]\",\n \"url\": \"https://pulpproject.org\",\n },\n \"LICENSE\": {\n \"name\": \"GPLv2+\",\n \"url\": \"https://raw.githubusercontent.com/pulp/pulpcore/master/LICENSE\",\n },\n}\n\n# What kinds of checksums is this pulp-instance _allowed to use_ ?\n# NOTE : \"sha256\"\" IS REQUIRED - Pulp will fail to start if it is not found in this set\n# NOTE: specifying checksums that are not listed under ALL_KNOWN_CONTENT_CHECKSUMS will fail\n# at startup\nALLOWED_CONTENT_CHECKSUMS = [\"sha224\", \"sha256\", \"sha384\", \"sha512\"]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\nTASK_DIAGNOSTICS = False\n\n# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)\n# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html\nfrom dynaconf import DjangoDynaconf, Validator # noqa\n\n# Validators\ncontent_origin_validator = Validator(\n \"CONTENT_ORIGIN\",\n must_exist=True,\n messages={\n \"must_exist_true\": _(\n \"CONTENT_ORIGIN is a required setting but it was not configured. This may be caused \"\n \"by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by \"\n \"the installer automatically.\"\n )\n },\n)\n\ncache_enabled_validator = Validator(\"CACHE_ENABLED\", eq=True)\nredis_url_validator = Validator(\"REDIS_URL\", must_exist=True, when=cache_enabled_validator)\nredis_host_validator = Validator(\"REDIS_HOST\", must_exist=True, when=cache_enabled_validator)\nredis_port_validator = Validator(\"REDIS_PORT\", must_exist=True, when=cache_enabled_validator)\ncache_validator = redis_url_validator | (redis_host_validator & redis_port_validator)\ncache_validator.messages[\"combined\"] = _(\n \"CACHE_ENABLED is enabled but it requires to have REDIS configured. 
Please check \"\n \"https://docs.pulpproject.org/pulpcore/configuration/settings.html#redis-settings \"\n \"for more information.\"\n)\n\nsha256_validator = Validator(\n \"ALLOWED_CONTENT_CHECKSUMS\",\n cont=\"sha256\",\n messages={\n \"operations\": \"ALLOWED_CONTENT_CHECKSUMS MUST contain 'sha256' - Pulp's \"\n \"content addressable storage relies on sha256 to identify entities.\"\n },\n)\n\nunknown_algs_validator = Validator(\n \"ALLOWED_CONTENT_CHECKSUMS\",\n condition=lambda x: len(set(x).difference(constants.ALL_KNOWN_CONTENT_CHECKSUMS)) == 0,\n messages={\n \"condition\": _(\n \"ALLOWED_CONTENT_CHECKSUMS may only contain algorithms known to pulp - see \"\n \"constants.ALL_KNOWN_CONTENT_CHECKSUMS for the allowed list.\"\n )\n },\n)\n\napi_root_validator = Validator(\n \"API_ROOT\",\n condition=lambda x: x.startswith(\"/\") and x.endswith(\"/\"),\n messages={\n \"condition\": _(\"The API_ROOT must start and end with a '/', currently it is '{value}'\")\n },\n)\n\n\nsettings = DjangoDynaconf(\n __name__,\n GLOBAL_ENV_FOR_DYNACONF=\"PULP\",\n ENV_SWITCHER_FOR_DYNACONF=\"PULP_ENV\",\n PRELOAD_FOR_DYNACONF=[\n \"{}.app.settings\".format(plugin_name) for plugin_name in INSTALLED_PULP_PLUGINS\n ],\n ENVVAR_FOR_DYNACONF=\"PULP_SETTINGS\",\n load_dotenv=False,\n validators=[\n content_origin_validator,\n cache_validator,\n sha256_validator,\n unknown_algs_validator,\n api_root_validator,\n ],\n)\n# HERE ENDS DYNACONF EXTENSION LOAD (No more code below this line)\n\n_logger = getLogger(__name__)\n\n\nif not (\n Path(sys.argv[0]).name == \"sphinx-build\"\n or (len(sys.argv) >= 2 and sys.argv[1] == \"collectstatic\")\n):\n try:\n with open(DB_ENCRYPTION_KEY, \"rb\") as key_file:\n Fernet(key_file.read())\n except Exception as ex:\n raise ImproperlyConfigured(\n _(\"Could not load DB_ENCRYPTION_KEY file '{file}': {err}\").format(\n file=DB_ENCRYPTION_KEY, err=ex\n )\n )\n\n\nFORBIDDEN_CHECKSUMS = set(constants.ALL_KNOWN_CONTENT_CHECKSUMS).difference(\n ALLOWED_CONTENT_CHECKSUMS\n)\n\n_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [\"handle-artifact-checksums\", \"migrate\", \"collectstatic\"]\n\nif not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):\n try:\n with connection.cursor() as cursor:\n for checksum in ALLOWED_CONTENT_CHECKSUMS:\n # can't import Artifact here so use a direct db connection\n cursor.execute(f\"SELECT count(pulp_id) FROM core_artifact WHERE {checksum} IS NULL\")\n row = cursor.fetchone()\n if row[0] > 0:\n raise ImproperlyConfigured(\n _(\n \"There have been identified artifacts missing checksum '{}'. \"\n \"Run 'pulpcore-manager handle-artifact-checksums' first to populate \"\n \"missing artifact checksums.\"\n ).format(checksum)\n )\n for checksum in FORBIDDEN_CHECKSUMS:\n # can't import Artifact here so use a direct db connection\n cursor.execute(\n f\"SELECT count(pulp_id) FROM core_artifact WHERE {checksum} IS NOT NULL\"\n )\n row = cursor.fetchone()\n if row[0] > 0:\n raise ImproperlyConfigured(\n _(\n \"There have been identified artifacts with forbidden checksum '{}'. 
\"\n \"Run 'pulpcore-manager handle-artifact-checksums' first to unset \"\n \"forbidden checksums.\"\n ).format(checksum)\n )\n\n # warn if there are remote artifacts with checksums but no allowed checksums\n cond = \" AND \".join([f\"{c} IS NULL\" for c in constants.ALL_KNOWN_CONTENT_CHECKSUMS])\n no_checksum_query = f\"SELECT pulp_id FROM core_remoteartifact WHERE {cond}\"\n cond = \" AND \".join([f\"{c} IS NULL\" for c in ALLOWED_CONTENT_CHECKSUMS])\n cursor.execute(\n f\"SELECT count(pulp_id) FROM core_remoteartifact WHERE {cond} AND \"\n f\"pulp_id NOT IN ({no_checksum_query})\"\n )\n row = cursor.fetchone()\n if row[0] > 0:\n _logger.warn(\n _(\n \"Warning: detected remote content without allowed checksums. \"\n \"Run 'pulpcore-manager handle-artifact-checksums --report' to \"\n \"view this content.\"\n )\n )\n\n except ImproperlyConfigured as e:\n raise e\n except Exception:\n # our check could fail if the table hasn't been created yet or we can't get a db connection\n pass\n finally:\n connection.close()\n\nsettings.set(\"V3_API_ROOT\", settings.API_ROOT + \"api/v3/\") # Not user configurable\nsettings.set(\n \"V3_API_ROOT_NO_FRONT_SLASH\", settings.V3_API_ROOT.lstrip(\"/\")\n) # Not user configurable\n",
"path": "pulpcore/app/settings.py"
}
] | [
{
"content": "\"\"\"\nDjango settings for the Pulp Platform application\n\nNever import this module directly, instead `from django.conf import settings`, see\nhttps://docs.djangoproject.com/en/1.11/topics/settings/#using-settings-in-python-code\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.11/ref/settings/\n\"\"\"\n\nimport sys\n\nfrom contextlib import suppress\nfrom gettext import gettext as _\nfrom importlib import import_module\nfrom logging import getLogger\nfrom pathlib import Path\nfrom pkg_resources import iter_entry_points\n\nfrom cryptography.fernet import Fernet\nfrom django.core.exceptions import ImproperlyConfigured\nfrom django.db import connection\n\nfrom pulpcore import constants\n\n# Build paths inside the project like this: BASE_DIR / ...\nBASE_DIR = Path(__file__).absolute().parent\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = False\n\nALLOWED_HOSTS = [\"*\"]\n\nDEPLOY_ROOT = Path(\"/var/lib/pulp\")\nMEDIA_ROOT = str(DEPLOY_ROOT / \"media\") # Django 3.1 adds support for pathlib.Path\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.11/howto/static-files/\n\nSTATIC_URL = \"/assets/\"\nSTATIC_ROOT = DEPLOY_ROOT / STATIC_URL.strip(\"/\")\n\nDEFAULT_FILE_STORAGE = \"pulpcore.app.models.storage.FileSystem\"\n\nWORKING_DIRECTORY = DEPLOY_ROOT / \"tmp\"\nFILE_UPLOAD_TEMP_DIR = WORKING_DIRECTORY\n\nCHUNKED_UPLOAD_DIR = \"upload\"\n\n# List of upload handler classes to be applied in order.\nFILE_UPLOAD_HANDLERS = (\"pulpcore.app.files.HashingFileUploadHandler\",)\n\nSECRET_KEY = True\n\n# Key used to encrypt fields in the database\nDB_ENCRYPTION_KEY = \"/etc/pulp/certs/database_fields.symmetric.key\"\n\n# API Root\nAPI_ROOT = \"/pulp/\"\n\n# Application definition\n\nINSTALLED_APPS = [\n # django stuff\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"import_export\",\n # third-party\n \"django_filters\",\n \"django_guid\",\n \"drf_spectacular\",\n \"guardian\",\n \"rest_framework\",\n # pulp core app\n \"pulpcore.app\",\n]\n\n# Enumerate the installed Pulp plugins during the loading process for use in the status API\nINSTALLED_PULP_PLUGINS = []\n\nfor entry_point in iter_entry_points(\"pulpcore.plugin\"):\n plugin_app_config = entry_point.load()\n INSTALLED_PULP_PLUGINS.append(entry_point.module_name)\n INSTALLED_APPS.append(plugin_app_config)\n\n# Optional apps that help with development, or augment Pulp in some non-critical way\nOPTIONAL_APPS = [\n \"crispy_forms\",\n \"django_extensions\",\n \"storages\",\n]\n\nfor app in OPTIONAL_APPS:\n # only import if app is installed\n with suppress(ImportError):\n import_module(app)\n INSTALLED_APPS.append(app)\n\nMIDDLEWARE = [\n \"django_guid.middleware.guid_middleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"whitenoise.middleware.WhiteNoiseMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n 
\"django_currentuser.middleware.ThreadLocalUserMiddleware\",\n]\n\nAUTHENTICATION_BACKENDS = [\n \"django.contrib.auth.backends.ModelBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n \"pulpcore.backends.ObjectRolePermissionBackend\",\n]\n\n# Disable django guardian anonymous user\n# https://django-guardian.readthedocs.io/en/stable/configuration.html#anonymous-user-name\nANONYMOUS_USER_NAME = None\n\nROOT_URLCONF = \"pulpcore.app.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [BASE_DIR / \"templates\"],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"pulpcore.app.wsgi.application\"\n\nREST_FRAMEWORK = {\n \"URL_FIELD_NAME\": \"pulp_href\",\n \"DEFAULT_FILTER_BACKENDS\": (\"django_filters.rest_framework.DjangoFilterBackend\",),\n \"DEFAULT_PAGINATION_CLASS\": \"rest_framework.pagination.LimitOffsetPagination\",\n \"PAGE_SIZE\": 100,\n \"DEFAULT_PERMISSION_CLASSES\": (\"pulpcore.plugin.access_policy.AccessPolicyFromDB\",),\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.BasicAuthentication\",\n ),\n \"UPLOADED_FILES_USE_URL\": False,\n \"DEFAULT_VERSIONING_CLASS\": \"rest_framework.versioning.URLPathVersioning\",\n \"DEFAULT_SCHEMA_CLASS\": \"pulpcore.openapi.PulpAutoSchema\",\n}\n\n# Password validation\n# https://docs.djangoproject.com/en/dev/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\"},\n {\"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\"},\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.11/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = \"USE_I18N\", True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# A set of default settings to use if the configuration file in\n# /etc/pulp/ is missing or if it does not have values for every setting\n\n# https://docs.djangoproject.com/en/1.11/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"NAME\": \"pulp\",\n \"USER\": \"pulp\",\n \"CONN_MAX_AGE\": 0,\n },\n}\n\n# Redis default config\nREDIS_URL = None\nREDIS_HOST = None\nREDIS_PORT = None\nREDIS_DB = 0\nREDIS_PASSWORD = None\nREDIS_SSL = False\nREDIS_SSL_CA_CERTS = None\n\n# https://docs.djangoproject.com/en/1.11/ref/settings/#logging and\n# https://docs.python.org/3/library/logging.config.html\nLOGGING = {\n \"version\": 1,\n \"disable_existing_loggers\": False,\n \"formatters\": {\n \"simple\": {\"format\": \"pulp [%(correlation_id)s]: %(name)s:%(levelname)s: %(message)s\"}\n },\n \"filters\": {\"correlation_id\": {\"()\": \"django_guid.log_filters.CorrelationId\"}},\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"formatter\": \"simple\",\n \"filters\": [\"correlation_id\"],\n }\n },\n \"loggers\": {\n \"\": {\n # The root logger\n \"handlers\": [\"console\"],\n \"level\": \"INFO\",\n \"filters\": [\"correlation_id\"],\n },\n 
\"django_guid\": {\n \"handlers\": [\"console\"],\n \"level\": \"WARNING\",\n \"propagate\": False,\n },\n },\n}\n\nDRF_ACCESS_POLICY = {\"reusable_conditions\": [\"pulpcore.app.global_access_conditions\"]}\n\nCONTENT_PATH_PREFIX = \"/pulp/content/\"\nCONTENT_APP_TTL = 30\n\nWORKER_TTL = 30\n\n# how long to protect orphan content in minutes\nORPHAN_PROTECTION_TIME = 24 * 60\n\nREMOTE_USER_ENVIRON_NAME = \"REMOTE_USER\"\n\nALLOWED_IMPORT_PATHS = []\n\nALLOWED_EXPORT_PATHS = []\n\nPROFILE_STAGES_API = False\n\n# https://docs.pulpproject.org/pulpcore/configuration/settings.html#pulp-cache\nCACHE_ENABLED = False\nCACHE_SETTINGS = {\n \"EXPIRES_TTL\": 600, # 10 minutes\n}\n\nSPECTACULAR_SETTINGS = {\n \"SERVE_URLCONF\": ROOT_URLCONF,\n \"DEFAULT_GENERATOR_CLASS\": \"pulpcore.openapi.PulpSchemaGenerator\",\n \"DEFAULT_SCHEMA_CLASS\": \"pulpcore.openapi.PulpAutoSchema\",\n \"ENUM_ADD_EXPLICIT_BLANK_NULL_CHOICE\": False,\n \"COMPONENT_SPLIT_REQUEST\": True,\n \"COMPONENT_NO_READ_ONLY_REQUIRED\": True,\n \"GENERIC_ADDITIONAL_PROPERTIES\": None,\n \"DISABLE_ERRORS_AND_WARNINGS\": not DEBUG,\n \"TITLE\": \"Pulp 3 API\",\n \"DESCRIPTION\": \"Fetch, Upload, Organize, and Distribute Software Packages\",\n \"VERSION\": \"v3\",\n \"CONTACT\": {\n \"name\": \"Pulp Team\",\n \"email\": \"[email protected]\",\n \"url\": \"https://pulpproject.org\",\n },\n \"LICENSE\": {\n \"name\": \"GPLv2+\",\n \"url\": \"https://raw.githubusercontent.com/pulp/pulpcore/master/LICENSE\",\n },\n}\n\n# What kinds of checksums is this pulp-instance _allowed to use_ ?\n# NOTE : \"sha256\"\" IS REQUIRED - Pulp will fail to start if it is not found in this set\n# NOTE: specifying checksums that are not listed under ALL_KNOWN_CONTENT_CHECKSUMS will fail\n# at startup\nALLOWED_CONTENT_CHECKSUMS = [\"sha224\", \"sha256\", \"sha384\", \"sha512\"]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\nTASK_DIAGNOSTICS = False\n\n# HERE STARTS DYNACONF EXTENSION LOAD (Keep at the very bottom of settings.py)\n# Read more at https://dynaconf.readthedocs.io/en/latest/guides/django.html\nfrom dynaconf import DjangoDynaconf, Validator # noqa\n\n# Validators\ncontent_origin_validator = Validator(\n \"CONTENT_ORIGIN\",\n must_exist=True,\n messages={\n \"must_exist_true\": _(\n \"CONTENT_ORIGIN is a required setting but it was not configured. This may be caused \"\n \"by invalid read permissions of the settings file. Note that CONTENT_ORIGIN is set by \"\n \"the installer automatically.\"\n )\n },\n)\n\ncache_enabled_validator = Validator(\"CACHE_ENABLED\", eq=True)\nredis_url_validator = Validator(\"REDIS_URL\", must_exist=True, when=cache_enabled_validator)\nredis_host_validator = Validator(\"REDIS_HOST\", must_exist=True, when=cache_enabled_validator)\nredis_port_validator = Validator(\"REDIS_PORT\", must_exist=True, when=cache_enabled_validator)\ncache_validator = redis_url_validator | (redis_host_validator & redis_port_validator)\ncache_validator.messages[\"combined\"] = _(\n \"CACHE_ENABLED is enabled but it requires to have REDIS configured. 
Please check \"\n \"https://docs.pulpproject.org/pulpcore/configuration/settings.html#redis-settings \"\n \"for more information.\"\n)\n\nsha256_validator = Validator(\n \"ALLOWED_CONTENT_CHECKSUMS\",\n cont=\"sha256\",\n messages={\n \"operations\": \"ALLOWED_CONTENT_CHECKSUMS MUST contain 'sha256' - Pulp's \"\n \"content addressable storage relies on sha256 to identify entities.\"\n },\n)\n\nunknown_algs_validator = Validator(\n \"ALLOWED_CONTENT_CHECKSUMS\",\n condition=lambda x: len(set(x).difference(constants.ALL_KNOWN_CONTENT_CHECKSUMS)) == 0,\n messages={\n \"condition\": _(\n \"ALLOWED_CONTENT_CHECKSUMS may only contain algorithms known to pulp - see \"\n \"constants.ALL_KNOWN_CONTENT_CHECKSUMS for the allowed list.\"\n )\n },\n)\n\napi_root_validator = Validator(\n \"API_ROOT\",\n condition=lambda x: x.startswith(\"/\") and x.endswith(\"/\"),\n messages={\n \"condition\": _(\"The API_ROOT must start and end with a '/', currently it is '{value}'\")\n },\n)\n\n\nsettings = DjangoDynaconf(\n __name__,\n GLOBAL_ENV_FOR_DYNACONF=\"PULP\",\n ENV_SWITCHER_FOR_DYNACONF=\"PULP_ENV\",\n PRELOAD_FOR_DYNACONF=[\n \"{}.app.settings\".format(plugin_name) for plugin_name in INSTALLED_PULP_PLUGINS\n ],\n ENVVAR_FOR_DYNACONF=\"PULP_SETTINGS\",\n load_dotenv=False,\n validators=[\n content_origin_validator,\n cache_validator,\n sha256_validator,\n unknown_algs_validator,\n api_root_validator,\n ],\n)\n# HERE ENDS DYNACONF EXTENSION LOAD (No more code below this line)\n\n_logger = getLogger(__name__)\n\n\nif not (\n Path(sys.argv[0]).name == \"pytest\"\n or Path(sys.argv[0]).name == \"sphinx-build\"\n or (len(sys.argv) >= 2 and sys.argv[1] == \"collectstatic\")\n):\n try:\n with open(DB_ENCRYPTION_KEY, \"rb\") as key_file:\n Fernet(key_file.read())\n except Exception as ex:\n raise ImproperlyConfigured(\n _(\"Could not load DB_ENCRYPTION_KEY file '{file}': {err}\").format(\n file=DB_ENCRYPTION_KEY, err=ex\n )\n )\n\n\nFORBIDDEN_CHECKSUMS = set(constants.ALL_KNOWN_CONTENT_CHECKSUMS).difference(\n ALLOWED_CONTENT_CHECKSUMS\n)\n\n_SKIPPED_COMMANDS_FOR_CONTENT_CHECKS = [\"handle-artifact-checksums\", \"migrate\", \"collectstatic\"]\n\nif not (len(sys.argv) >= 2 and sys.argv[1] in _SKIPPED_COMMANDS_FOR_CONTENT_CHECKS):\n try:\n with connection.cursor() as cursor:\n for checksum in ALLOWED_CONTENT_CHECKSUMS:\n # can't import Artifact here so use a direct db connection\n cursor.execute(f\"SELECT count(pulp_id) FROM core_artifact WHERE {checksum} IS NULL\")\n row = cursor.fetchone()\n if row[0] > 0:\n raise ImproperlyConfigured(\n _(\n \"There have been identified artifacts missing checksum '{}'. \"\n \"Run 'pulpcore-manager handle-artifact-checksums' first to populate \"\n \"missing artifact checksums.\"\n ).format(checksum)\n )\n for checksum in FORBIDDEN_CHECKSUMS:\n # can't import Artifact here so use a direct db connection\n cursor.execute(\n f\"SELECT count(pulp_id) FROM core_artifact WHERE {checksum} IS NOT NULL\"\n )\n row = cursor.fetchone()\n if row[0] > 0:\n raise ImproperlyConfigured(\n _(\n \"There have been identified artifacts with forbidden checksum '{}'. 
\"\n \"Run 'pulpcore-manager handle-artifact-checksums' first to unset \"\n \"forbidden checksums.\"\n ).format(checksum)\n )\n\n # warn if there are remote artifacts with checksums but no allowed checksums\n cond = \" AND \".join([f\"{c} IS NULL\" for c in constants.ALL_KNOWN_CONTENT_CHECKSUMS])\n no_checksum_query = f\"SELECT pulp_id FROM core_remoteartifact WHERE {cond}\"\n cond = \" AND \".join([f\"{c} IS NULL\" for c in ALLOWED_CONTENT_CHECKSUMS])\n cursor.execute(\n f\"SELECT count(pulp_id) FROM core_remoteartifact WHERE {cond} AND \"\n f\"pulp_id NOT IN ({no_checksum_query})\"\n )\n row = cursor.fetchone()\n if row[0] > 0:\n _logger.warn(\n _(\n \"Warning: detected remote content without allowed checksums. \"\n \"Run 'pulpcore-manager handle-artifact-checksums --report' to \"\n \"view this content.\"\n )\n )\n\n except ImproperlyConfigured as e:\n raise e\n except Exception:\n # our check could fail if the table hasn't been created yet or we can't get a db connection\n pass\n finally:\n connection.close()\n\nsettings.set(\"V3_API_ROOT\", settings.API_ROOT + \"api/v3/\") # Not user configurable\nsettings.set(\n \"V3_API_ROOT_NO_FRONT_SLASH\", settings.V3_API_ROOT.lstrip(\"/\")\n) # Not user configurable\n",
"path": "pulpcore/app/settings.py"
}
] | diff --git a/.github/workflows/scripts/script.sh b/.github/workflows/scripts/script.sh
index 44f447e338..1b81d816e2 100755
--- a/.github/workflows/scripts/script.sh
+++ b/.github/workflows/scripts/script.sh
@@ -111,7 +111,7 @@ cmd_prefix bash -c "django-admin makemigrations --check --dry-run"
if [[ "$TEST" != "upgrade" ]]; then
# Run unit tests.
- cmd_prefix bash -c "PULP_DATABASES__default__USER=postgres django-admin test --noinput /usr/local/lib/python3.8/site-packages/pulpcore/tests/unit/"
+ cmd_prefix bash -c "PULP_DATABASES__default__USER=postgres pytest -v -r sx --color=yes --pyargs pulpcore.tests.unit"
fi
# Run functional tests
diff --git a/CHANGES/2070.misc b/CHANGES/2070.misc
new file mode 100644
index 0000000000..39a0115674
--- /dev/null
+++ b/CHANGES/2070.misc
@@ -0,0 +1 @@
+Switches the unit test runner to use pytest, and port unit tests accordingly.
diff --git a/MANIFEST.in b/MANIFEST.in
index 247bac5420..5dafae0ada 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -7,7 +7,7 @@ include COMMITMENT
include functest_requirements.txt
include unittest_requirements.txt
recursive-include pulpcore/tests/functional/api/using_plugin/artifacts *
-recursive-exclude pulpcore/tests/fixtures/ *
+recursive-exclude pulpcore/tests/functional/fixtures/ *
include CODE_OF_CONDUCT.md
include CONTRIBUTING.md
include COPYRIGHT
diff --git a/pulpcore/app/settings.py b/pulpcore/app/settings.py
index 288e41b3b5..96d93155f7 100644
--- a/pulpcore/app/settings.py
+++ b/pulpcore/app/settings.py
@@ -375,7 +375,8 @@
if not (
- Path(sys.argv[0]).name == "sphinx-build"
+ Path(sys.argv[0]).name == "pytest"
+ or Path(sys.argv[0]).name == "sphinx-build"
or (len(sys.argv) >= 2 and sys.argv[1] == "collectstatic")
):
try:
diff --git a/pulpcore/tests/conftest.py b/pulpcore/tests/functional/conftest.py
similarity index 100%
rename from pulpcore/tests/conftest.py
rename to pulpcore/tests/functional/conftest.py
diff --git a/pulpcore/tests/conftest_pulp_file.py b/pulpcore/tests/functional/conftest_pulp_file.py
similarity index 100%
rename from pulpcore/tests/conftest_pulp_file.py
rename to pulpcore/tests/functional/conftest_pulp_file.py
diff --git a/pulpcore/tests/fixtures/basic/1.iso b/pulpcore/tests/functional/fixtures/basic/1.iso
similarity index 100%
rename from pulpcore/tests/fixtures/basic/1.iso
rename to pulpcore/tests/functional/fixtures/basic/1.iso
diff --git a/pulpcore/tests/fixtures/basic/2.iso b/pulpcore/tests/functional/fixtures/basic/2.iso
similarity index 100%
rename from pulpcore/tests/fixtures/basic/2.iso
rename to pulpcore/tests/functional/fixtures/basic/2.iso
diff --git a/pulpcore/tests/fixtures/basic/3.iso b/pulpcore/tests/functional/fixtures/basic/3.iso
similarity index 100%
rename from pulpcore/tests/fixtures/basic/3.iso
rename to pulpcore/tests/functional/fixtures/basic/3.iso
diff --git a/pulpcore/tests/fixtures/basic/PULP_MANIFEST b/pulpcore/tests/functional/fixtures/basic/PULP_MANIFEST
similarity index 100%
rename from pulpcore/tests/fixtures/basic/PULP_MANIFEST
rename to pulpcore/tests/functional/fixtures/basic/PULP_MANIFEST
diff --git a/pulpcore/tests/unit/serializers/test_content.py b/pulpcore/tests/unit/serializers/test_content.py
deleted file mode 100644
index 0aad81dbdc..0000000000
--- a/pulpcore/tests/unit/serializers/test_content.py
+++ /dev/null
@@ -1,37 +0,0 @@
-from unittest import TestCase
-
-import mock
-from pulpcore.app.models import Artifact
-from pulpcore.app.serializers import ArtifactSerializer
-from pulpcore.constants import ALL_KNOWN_CONTENT_CHECKSUMS
-from rest_framework import serializers
-
-
-class TestArtifactSerializer(TestCase):
- def test_validate_file_checksum(self):
- mock_file = mock.MagicMock(size=42)
- mock_file.hashers.__getitem__.return_value.hexdigest.return_value = "asdf"
-
- data = {"file": mock_file}
- serializer = ArtifactSerializer(data=data)
- self.assertTrue(serializer.is_valid())
- new_data = serializer.validated_data
- self.assertEqual(new_data["file"], mock_file)
- self.assertEqual(new_data["size"], 42)
- for csum in Artifact.DIGEST_FIELDS:
- self.assertEqual(new_data[csum], "asdf")
-
- for csum in ALL_KNOWN_CONTENT_CHECKSUMS.difference(Artifact.DIGEST_FIELDS):
- self.assertFalse(csum in new_data, f"Found forbidden checksum {csum}")
-
- # This part of the test will only fire if the system-under-test has forbidden
- # use of 'md5'
- if "md5" not in Artifact.DIGEST_FIELDS:
- data = {"file": mock_file, "md5": "asdf"}
- with self.assertRaises(serializers.ValidationError) as cm: # noqa
- serializer.validate(data)
-
- def test_emtpy_data(self):
- data = {}
- serializer = ArtifactSerializer(data=data)
- self.assertFalse(serializer.is_valid())
diff --git a/pulpcore/tests/unit/serializers/test_repository.py b/pulpcore/tests/unit/serializers/test_repository.py
index 4c661a20e2..927d92b3e2 100644
--- a/pulpcore/tests/unit/serializers/test_repository.py
+++ b/pulpcore/tests/unit/serializers/test_repository.py
@@ -4,9 +4,7 @@
import mock
from rest_framework import serializers
-from pulpcore.app.models import Distribution
from pulpcore.app.serializers import (
- DistributionSerializer,
PublicationSerializer,
RemoteSerializer,
)
@@ -15,38 +13,6 @@
class TestRemoteSerializer(TestCase):
minimal_data = {"name": "test", "url": "http://whatever"}
- def test_minimal_data(self):
- data = {}
- data.update(self.minimal_data)
- serializer = RemoteSerializer(data=data)
- serializer.is_valid(raise_exception=True)
-
- def test_validate_proxy(self):
- data = {"proxy_url": "http://whatever"}
- data.update(self.minimal_data)
- serializer = RemoteSerializer(data=data)
- serializer.is_valid(raise_exception=True)
-
- def test_validate_proxy_invalid(self):
- data = {"proxy_url": "http://user:pass@whatever"}
- data.update(self.minimal_data)
- serializer = RemoteSerializer(data=data)
- with self.assertRaises(serializers.ValidationError):
- serializer.is_valid(raise_exception=True)
-
- def test_validate_proxy_creds(self):
- data = {"proxy_url": "http://whatever", "proxy_username": "user", "proxy_password": "pass"}
- data.update(self.minimal_data)
- serializer = RemoteSerializer(data=data)
- serializer.is_valid(raise_exception=True)
-
- def test_validate_proxy_creds_invalid(self):
- data = {"proxy_url": "http://whatever", "proxy_username": "user"}
- data.update(self.minimal_data)
- serializer = RemoteSerializer(data=data)
- with self.assertRaises(serializers.ValidationError):
- serializer.is_valid(raise_exception=True)
-
def test_validate_proxy_creds_update(self):
Remote = SimpleNamespace(
proxy_url="http://whatever",
@@ -115,58 +81,3 @@ def test_validate_repository_version_only_unknown_field(self):
serializer = PublicationSerializer(data=data)
with self.assertRaises(serializers.ValidationError):
serializer.validate(data)
-
-
-class TestDistributionPath(TestCase):
- def test_overlap(self):
- Distribution.objects.create(base_path="foo/bar", name="foobar")
- overlap_errors = {"base_path": ["Overlaps with existing distribution 'foobar'"]}
-
- # test that the new distribution cannot be nested in an existing path
- data = {"name": "foobarbaz", "base_path": "foo/bar/baz"}
- serializer = DistributionSerializer(data=data)
- self.assertFalse(serializer.is_valid())
- self.assertDictEqual(overlap_errors, serializer.errors)
-
- # test that the new distribution cannot nest an existing path
- data = {"name": "foo", "base_path": "foo"}
- serializer = DistributionSerializer(data=data)
- self.assertFalse(serializer.is_valid())
- self.assertDictEqual(overlap_errors, serializer.errors)
-
- def test_no_overlap(self):
- Distribution.objects.create(base_path="fu/bar", name="fubar")
-
- # different path
- data = {"name": "fufu", "base_path": "fubar"}
- serializer = DistributionSerializer(data=data)
- self.assertTrue(serializer.is_valid())
- self.assertDictEqual({}, serializer.errors)
-
- # common base path but different path
- data = {"name": "fufu", "base_path": "fu/baz"}
- serializer = DistributionSerializer(data=data)
- self.assertTrue(serializer.is_valid())
- self.assertDictEqual({}, serializer.errors)
-
- def test_slashes(self):
- overlap_errors = {"base_path": ["Relative path cannot begin or end with slashes."]}
-
- data = {"name": "fefe", "base_path": "fefe/"}
- serializer = DistributionSerializer(data=data)
- self.assertFalse(serializer.is_valid())
- self.assertDictEqual(overlap_errors, serializer.errors)
-
- data = {"name": "fefe", "base_path": "/fefe/foo"}
- serializer = DistributionSerializer(data=data)
- self.assertFalse(serializer.is_valid())
- self.assertDictEqual(overlap_errors, serializer.errors)
-
- def test_uniqueness(self):
- Distribution.objects.create(base_path="fizz/buzz", name="fizzbuzz")
- data = {"name": "feefee", "base_path": "fizz/buzz"}
- overlap_errors = {"base_path": ["This field must be unique."]}
-
- serializer = DistributionSerializer(data=data)
- self.assertFalse(serializer.is_valid())
- self.assertDictEqual(overlap_errors, serializer.errors)
diff --git a/unittest_requirements.txt b/unittest_requirements.txt
index 4c0775506b..ed1dc5f387 100644
--- a/unittest_requirements.txt
+++ b/unittest_requirements.txt
@@ -1,3 +1,4 @@
# Unit test requirements
asynctest
mock
+pytest-django
|
zestedesavoir__zds-site-5586 | SEO and signature: <a rel="nofollow" />
In the signature, we should see whether we can easily add a `rel="nofollow"` attribute in order to preserve our SEO. https://github.com/zestedesavoir/zmarkdown/blob/1dded309a2670689a4a3353f9e38b80624c6df1a/packages/zmarkdown/server/handlers.js#L139
> limit links in signatures to nofollow or internal links.
Sharing a link is fine (:evil), but if A-312 replies 4 times on the same page, he sends link juice 4 times to his Twitter account, 4 times to his CodinGame profile, … this has several negative effects.
Source: https://zestedesavoir.com/forums/sujet/12099/seo-et-spam/?page=1#p199005
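For illustration, a minimal sketch of how the rendered signature HTML could be post-processed on the Python side, assuming the renderer returns a plain HTML string in which every link begins with `<a href=` (the helper name `add_nofollow` is hypothetical, not part of the codebase):
```
from django.utils.safestring import mark_safe


def add_nofollow(rendered_html):
    # Tag every anchor in the rendered signature as nofollow so that
    # repeated signature links do not pass SEO weight to external sites.
    return mark_safe(rendered_html.replace('<a href=', '<a rel="nofollow" href='))
```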
| [
{
"content": "import re\nimport json\nimport logging\nfrom requests import post, HTTPError\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import stringfilter\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext_lazy as _\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constants\nMAX_ATTEMPTS = 3\nMD_PARSING_ERROR = _('Une erreur est survenue dans la génération de texte Markdown. Veuillez rapporter le bug.')\n\nFORMAT_ENDPOINTS = {\n 'html': '/html',\n 'texfile': '/latex-document',\n 'epub': '/epub',\n 'tex': '/latex',\n}\n\n\ndef _render_markdown_once(md_input, *, output_format='html', **kwargs):\n \"\"\"\n Returns None on error (error details are logged). No retry mechanism.\n \"\"\"\n def log_args():\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n inline = kwargs.get('inline', False) is True\n\n if settings.ZDS_APP['zmd']['disable_pings'] is True:\n kwargs['disable_ping'] = True\n\n endpoint = FORMAT_ENDPOINTS[output_format]\n\n try:\n timeout = 10\n if output_format.startswith('tex'):\n # latex may be really long to generate but it is also restrained by server configuration\n timeout = 120\n response = post('{}{}'.format(settings.ZDS_APP['zmd']['server'], endpoint), json={\n 'opts': kwargs,\n 'md': str(md_input),\n }, timeout=timeout)\n except HTTPError:\n logger.exception('An HTTP error happened, markdown rendering failed')\n log_args()\n return '', {}, []\n\n if response.status_code == 413:\n return '', {}, [{'message': str(_('Texte trop volumineux.'))}]\n\n if response.status_code != 200:\n logger.error('The markdown server replied with status {} (expected 200)'.format(response.status_code))\n log_args()\n return '', {}, []\n\n try:\n content, metadata, messages = response.json()\n logger.debug('Result %s, %s, %s', content, metadata, messages)\n if messages:\n logger.error('Markdown errors %s', json.dumps(messages))\n content = content.strip()\n if inline:\n content = content.replace('</p>\\n', '\\n\\n').replace('\\n<p>', '\\n')\n return mark_safe(content), metadata, messages\n except: # noqa\n logger.exception('Unexpected exception raised')\n log_args()\n return '', {}, []\n\n\ndef render_markdown(md_input, *, on_error=None, **kwargs):\n \"\"\"Render a markdown string.\n\n Returns a tuple ``(rendered_content, metadata)``, where\n ``rendered_content`` is a string and ``metadata`` is a dict.\n\n Handles errors gracefully by returning an user-friendly HTML\n string which explains that the Markdown rendering has failed\n (without any technical details).\n\n \"\"\"\n content, metadata, messages = _render_markdown_once(md_input, **kwargs)\n if messages and on_error:\n on_error([m['message'] for m in messages])\n if content is not None:\n # Success!\n return content, metadata, messages\n\n # Oops, something went wrong\n\n attempts = kwargs.get('attempts', 0)\n inline = kwargs.get('inline', False) is True\n\n if attempts < MAX_ATTEMPTS:\n if not kwargs:\n kwargs = dict()\n return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))\n\n logger.error('Max attempt count reached, giving up')\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n # FIXME: This cannot work with LaTeX.\n if inline:\n return mark_safe('<p>{}</p>'.format(json.dumps(messages))), metadata, []\n else:\n return mark_safe('<div class=\"error 
ico-after\"><p>{}</p></div>'.format(json.dumps(messages))), metadata, []\n\n\[email protected](name='epub_markdown', needs_autoescape=False)\ndef epub_markdown(md_input, image_directory):\n return emarkdown(md_input, output_format='epub', images_download_dir=image_directory.absolute,\n local_url_to_local_path=[settings.MEDIA_URL + 'galleries/[0-9]+', image_directory.relative])\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n on_error=lambda m: logger.error('Markdown errors %s', str(m)),\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n return content or ''\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_preview(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n Filter markdown string and render it to html.\n\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n if messages:\n content = _('</div><div class=\"preview-error\"><strong>Erreur du serveur Markdown:</strong>\\n{}'\n .format('<br>- '.join([m['message'] for m in messages])))\n content = mark_safe(content)\n\n return content\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_inline(text):\n \"\"\"\n Parses inline elements only and renders HTML. Mainly for member signatures.\n Although they are inline elements, pings are disabled.\n\n :param str text: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n return rendered\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n subt = match.group(1)\n lvl = match.group('level')\n header = match.group('header')\n end = match.group(4)\n\n new_content = subt + '#' * count + lvl + header + end\n\n return new_content\n\n\ndef shift_heading(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n text_by_code = re.split('(```|~~~)', text)\n starting_code = None\n for i, element in enumerate(text_by_code):\n if element in ['```', '~~~'] and not starting_code:\n starting_code = element\n elif element == starting_code:\n starting_code = None\n elif starting_code is None:\n text_by_code[i] = re.sub(r'(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)',\n lambda t: sub_hd(t, count), text_by_code[i])\n\n return ''.join(text_by_code)\n\n\[email protected]('shift_heading_1')\ndef shift_heading_1(text):\n return shift_heading(text, 1)\n\n\[email protected]('shift_heading_2')\ndef shift_heading_2(text):\n return shift_heading(text, 2)\n\n\[email protected]('shift_heading_3')\ndef shift_heading_3(text):\n return shift_heading(text, 3)\n",
"path": "zds/utils/templatetags/emarkdown.py"
}
] | [
{
"content": "import re\nimport json\nimport logging\nfrom requests import post, HTTPError\n\nfrom django import template\nfrom django.conf import settings\nfrom django.template.defaultfilters import stringfilter\nfrom django.utils.safestring import mark_safe\nfrom django.utils.translation import ugettext_lazy as _\n\nlogger = logging.getLogger(__name__)\nregister = template.Library()\n\"\"\"\nMarkdown related filters.\n\"\"\"\n\n# Constants\nMAX_ATTEMPTS = 3\nMD_PARSING_ERROR = _('Une erreur est survenue dans la génération de texte Markdown. Veuillez rapporter le bug.')\n\nFORMAT_ENDPOINTS = {\n 'html': '/html',\n 'texfile': '/latex-document',\n 'epub': '/epub',\n 'tex': '/latex',\n}\n\n\ndef _render_markdown_once(md_input, *, output_format='html', **kwargs):\n \"\"\"\n Returns None on error (error details are logged). No retry mechanism.\n \"\"\"\n def log_args():\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n inline = kwargs.get('inline', False) is True\n\n if settings.ZDS_APP['zmd']['disable_pings'] is True:\n kwargs['disable_ping'] = True\n\n endpoint = FORMAT_ENDPOINTS[output_format]\n\n try:\n timeout = 10\n if output_format.startswith('tex'):\n # latex may be really long to generate but it is also restrained by server configuration\n timeout = 120\n response = post('{}{}'.format(settings.ZDS_APP['zmd']['server'], endpoint), json={\n 'opts': kwargs,\n 'md': str(md_input),\n }, timeout=timeout)\n except HTTPError:\n logger.exception('An HTTP error happened, markdown rendering failed')\n log_args()\n return '', {}, []\n\n if response.status_code == 413:\n return '', {}, [{'message': str(_('Texte trop volumineux.'))}]\n\n if response.status_code != 200:\n logger.error('The markdown server replied with status {} (expected 200)'.format(response.status_code))\n log_args()\n return '', {}, []\n\n try:\n content, metadata, messages = response.json()\n logger.debug('Result %s, %s, %s', content, metadata, messages)\n if messages:\n logger.error('Markdown errors %s', json.dumps(messages))\n content = content.strip()\n if inline:\n content = content.replace('</p>\\n', '\\n\\n').replace('\\n<p>', '\\n')\n return mark_safe(content), metadata, messages\n except: # noqa\n logger.exception('Unexpected exception raised')\n log_args()\n return '', {}, []\n\n\ndef render_markdown(md_input, *, on_error=None, **kwargs):\n \"\"\"Render a markdown string.\n\n Returns a tuple ``(rendered_content, metadata)``, where\n ``rendered_content`` is a string and ``metadata`` is a dict.\n\n Handles errors gracefully by returning an user-friendly HTML\n string which explains that the Markdown rendering has failed\n (without any technical details).\n\n \"\"\"\n content, metadata, messages = _render_markdown_once(md_input, **kwargs)\n if messages and on_error:\n on_error([m['message'] for m in messages])\n if content is not None:\n # Success!\n return content, metadata, messages\n\n # Oops, something went wrong\n\n attempts = kwargs.get('attempts', 0)\n inline = kwargs.get('inline', False) is True\n\n if attempts < MAX_ATTEMPTS:\n if not kwargs:\n kwargs = dict()\n return render_markdown(md_input, **dict(kwargs, attempts=attempts + 1))\n\n logger.error('Max attempt count reached, giving up')\n logger.error('md_input: {!r}'.format(md_input))\n logger.error('kwargs: {!r}'.format(kwargs))\n\n # FIXME: This cannot work with LaTeX.\n if inline:\n return mark_safe('<p>{}</p>'.format(json.dumps(messages))), metadata, []\n else:\n return mark_safe('<div class=\"error 
ico-after\"><p>{}</p></div>'.format(json.dumps(messages))), metadata, []\n\n\[email protected](name='epub_markdown', needs_autoescape=False)\ndef epub_markdown(md_input, image_directory):\n return emarkdown(md_input, output_format='epub', images_download_dir=image_directory.absolute,\n local_url_to_local_path=[settings.MEDIA_URL + 'galleries/[0-9]+', image_directory.relative])\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n on_error=lambda m: logger.error('Markdown errors %s', str(m)),\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n return content or ''\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_preview(md_input, use_jsfiddle='', **kwargs):\n \"\"\"\n Filter markdown string and render it to html.\n\n :param str md_input: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n disable_jsfiddle = (use_jsfiddle != 'js')\n\n content, metadata, messages = render_markdown(\n md_input,\n **dict(kwargs, disable_jsfiddle=disable_jsfiddle))\n\n if messages:\n content = _('</div><div class=\"preview-error\"><strong>Erreur du serveur Markdown:</strong>\\n{}'\n .format('<br>- '.join([m['message'] for m in messages])))\n content = mark_safe(content)\n\n return content\n\n\[email protected](needs_autoescape=False)\n@stringfilter\ndef emarkdown_inline(text):\n \"\"\"\n Parses inline elements only and renders HTML. Mainly for member signatures.\n Although they are inline elements, pings are disabled.\n\n :param str text: Markdown string.\n :return: HTML string.\n :rtype: str\n \"\"\"\n rendered = emarkdown(text, inline=True)\n return mark_safe(rendered.replace('<a href=', '<a rel=\"nofollow\" href='))\n\n\ndef sub_hd(match, count):\n \"\"\"Replace header shifted.\"\"\"\n subt = match.group(1)\n lvl = match.group('level')\n header = match.group('header')\n end = match.group(4)\n\n new_content = subt + '#' * count + lvl + header + end\n\n return new_content\n\n\ndef shift_heading(text, count):\n \"\"\"\n Shift header in markdown document.\n\n :param str text: Text to filter.\n :param int count:\n :return: Filtered text.\n :rtype: str\n \"\"\"\n text_by_code = re.split('(```|~~~)', text)\n starting_code = None\n for i, element in enumerate(text_by_code):\n if element in ['```', '~~~'] and not starting_code:\n starting_code = element\n elif element == starting_code:\n starting_code = None\n elif starting_code is None:\n text_by_code[i] = re.sub(r'(^|\\n)(?P<level>#{1,4})(?P<header>.*?)#*(\\n|$)',\n lambda t: sub_hd(t, count), text_by_code[i])\n\n return ''.join(text_by_code)\n\n\[email protected]('shift_heading_1')\ndef shift_heading_1(text):\n return shift_heading(text, 1)\n\n\[email protected]('shift_heading_2')\ndef shift_heading_2(text):\n return shift_heading(text, 2)\n\n\[email protected]('shift_heading_3')\ndef shift_heading_3(text):\n return shift_heading(text, 3)\n",
"path": "zds/utils/templatetags/emarkdown.py"
}
] | diff --git a/zds/utils/templatetags/emarkdown.py b/zds/utils/templatetags/emarkdown.py
index e933290542..b403e746d2 100644
--- a/zds/utils/templatetags/emarkdown.py
+++ b/zds/utils/templatetags/emarkdown.py
@@ -178,7 +178,7 @@ def emarkdown_inline(text):
:rtype: str
"""
rendered = emarkdown(text, inline=True)
- return rendered
+ return mark_safe(rendered.replace('<a href=', '<a rel="nofollow" href='))
def sub_hd(match, count):
diff --git a/zds/utils/tests/tests_emarkdown.py b/zds/utils/tests/tests_emarkdown.py
index 416798c150..4b172ba0f7 100644
--- a/zds/utils/tests/tests_emarkdown.py
+++ b/zds/utils/tests/tests_emarkdown.py
@@ -40,7 +40,13 @@ def test_emarkdown_inline(self):
self.assertEqual(tr, expected)
- # Todo: Find a way to force parsing crash or simulate it.
+ def test_emarkdown_inline_with_link(self):
+ # The goal is not to test zmarkdown but test that template tag correctly call it
+ self.context['content'] = '[zds](zestedesavoir.com)'
+ tr = Template('{% load emarkdown %}{{ content | emarkdown_inline}}').render(self.context)
+
+ expected = '<p><a rel="nofollow" href="zestedesavoir.com">zds</a></p>'
+ self.assertEqual(tr, expected)
def test_shift_heading(self):
tr = Template('{% load emarkdown %}{{ content | shift_heading_1}}').render(self.context)
|
TileDB-Inc__TileDB-Py-151 | Reading dense array doesn't free memory
Hi,
I'm wondering if this is expected behavior or if you have any tips to fix it. This is on Ubuntu 16, Python 3.7, and _tiledb_ 0.4.1:
Create a toy array:
```
import numpy as np
import tiledb

x = np.ones(10000000)
ctx = tiledb.Ctx()
path = 'test_tile_db'
d1 = tiledb.Dim(
    'test_domain', domain=(0, x.shape[0] - 1), tile=10000, dtype="uint32"
)
domain = tiledb.Domain(d1)
v = tiledb.Attr(
    'test_value',
    dtype="float32",
)
schema = tiledb.ArraySchema(
    domain=domain, attrs=(v,), cell_order="row-major", tile_order="row-major"
)
A = tiledb.DenseArray.create(path, schema)
values = x.astype(np.float32)
with tiledb.DenseArray(path, mode="w", ctx=ctx) as A:
    A[:] = {'test_value': values}
```
Read from the array:
```
import psutil

for i in range(10):
    with tiledb.DenseArray(path, mode='r') as data:
        data[:]
    print('Gigs:', round(psutil.virtual_memory().used / (10**9), 2))
Gigs: 0.84
Gigs: 0.89
Gigs: 0.93
Gigs: 0.97
Gigs: 1.01
Gigs: 1.05
Gigs: 1.1
Gigs: 1.14
Gigs: 1.18
Gigs: 1.22
```
Basically, the memory never seems to get released, even when I don't assign the result of `data[:]` to any variable. I've tried playing around with garbage collection (`import gc; gc.collect()`), but Python doesn't seem to be aware of the allocation. I have also tried explicitly closing the DenseArray. Eventually I have to restart the Jupyter notebook to get the memory freed.
In my real use case I am iterating over several TileDB arrays, pulling the full array data out of each, applying some transforms, and writing new TileDB arrays with the transformed data. This works okay, except that every read call adds around 2 GB to the used memory and never releases it, so the machine eventually runs out of memory. My current workaround is to spin up a new process for every iteration (see the sketch below).
Thanks!
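For reference, a rough sketch of the process-per-iteration workaround mentioned above, assuming the same `test_tile_db` array created earlier (the helper name `read_once` is made up for the example). Because each read happens in a short-lived child process, its memory is returned to the OS when that process exits:
```
import multiprocessing as mp

import tiledb


def read_once(path):
    # Open the array and read it fully inside a throwaway process.
    with tiledb.DenseArray(path, mode='r') as data:
        data[:]


if __name__ == '__main__':
    for _ in range(10):
        p = mp.Process(target=read_once, args=('test_tile_db',))
        p.start()
        p.join()
```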
| [
{
"content": "from __future__ import absolute_import, print_function\n\nimport multiprocessing\nimport os\nimport shutil\nimport subprocess\nimport zipfile\nimport platform\nfrom distutils.sysconfig import get_config_var\nfrom distutils.version import LooseVersion\n\n\ntry:\n # For Python 3\n from urllib.request import urlopen\n import io\n\n def get_zipfile(url):\n \"\"\"Returns a ZipFile constructed from the file at the given URL.\"\"\"\n r = urlopen(url)\n return zipfile.ZipFile(io.BytesIO(r.read()))\nexcept ImportError:\n # Python 2\n from urllib2 import urlopen\n import StringIO\n\n def get_zipfile(url):\n \"\"\"Returns a ZipFile constructed from the file at the given URL.\"\"\"\n r = urlopen(url)\n return zipfile.ZipFile(StringIO.StringIO(r.read()))\n\nfrom setuptools import setup, Extension, find_packages\nfrom pkg_resources import resource_filename\n\nimport sys\nfrom sys import version_info as ver\n\n# Target branch\nTILEDB_VERSION = \"dev\"\n\n# Use `setup.py [] --debug` for a debug build of libtiledb\nTILEDB_DEBUG_BUILD = False\n\n# Directory containing this file\nCONTAINING_DIR = os.path.abspath(os.path.dirname(__file__))\n\n# Build directory path\nBUILD_DIR = os.path.join(CONTAINING_DIR, \"build\")\n\n# TileDB package source directory\nTILEDB_PKG_DIR = os.path.join(CONTAINING_DIR, \"tiledb\")\n\n# Set deployment target for mac\n#\n# Need to ensure thatextensions are built for macos 10.9 when compiling on a\n# 10.9 system or above, overriding distutils behaviour which is to target\n# the version used to build the current python binary.\n#\n# TO OVERRIDE:\n# set MACOSX_DEPLOYMENT_TARGET before calling setup.py\n#\n# From https://github.com/pandas-dev/pandas/pull/24274\n# 3-Clause BSD License: https://github.com/pandas-dev/pandas/blob/master/LICENSE\nif sys.platform == 'darwin':\n if 'MACOSX_DEPLOYMENT_TARGET' not in os.environ:\n current_system = LooseVersion(platform.mac_ver()[0])\n python_target = LooseVersion(\n get_config_var('MACOSX_DEPLOYMENT_TARGET'))\n if python_target < '10.9' and current_system >= '10.9':\n os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9'\n\ndef is_windows():\n return os.name == 'nt'\n\ndef _libtiledb_exists(library_dirs):\n \"\"\"\n Checks the given list of paths and returns true if any contain the TileDB library.\n :return: The path to the TileDB library, or None.\n \"\"\"\n\n print(\"libtiledb_exists checking 'library_dirs': {}\".format(library_dirs))\n\n if len(library_dirs) > 0:\n names = libtiledb_library_names()\n paths = [os.path.join(d, n) for d in library_dirs for n in names]\n for p in paths:\n if os.path.exists(p):\n return p\n raise RuntimeError(\"Could not find given --tiledb library path(s):\\n{}\"\n .format(\"\\n\".join(paths)))\n # If no explicit path is given check to see if TileDB is globally installed.\n import ctypes\n if os.name == \"posix\":\n if sys.platform == \"darwin\":\n lib_name = \"libtiledb.dylib\"\n else:\n lib_name = \"libtiledb.so\"\n elif os.name == \"nt\":\n lib_name = \"tiledb.dll\"\n try:\n # note: this is a relative path on linux\n # https://bugs.python.org/issue21042\n ctypes.CDLL(lib_name)\n return lib_name\n except:\n pass\n\n return None\n\ndef libtiledb_exists(library_dirs):\n lib = _libtiledb_exists(library_dirs)\n print(\"libtiledb_exists found: '{}'\".format(lib))\n return lib\n\n\ndef libtiledb_library_names():\n \"\"\"\n :return: List of TileDB shared library names.\n \"\"\"\n if os.name == \"posix\":\n if sys.platform == \"darwin\":\n return [\"libtiledb.dylib\"]\n else:\n return [\"libtiledb.so\"]\n elif 
os.name == \"nt\":\n return [\"tiledb.dll\"]\n else:\n raise RuntimeError(\"Unsupported OS name \" + os.name)\n\n\ndef download_libtiledb():\n \"\"\"\n Downloads the native TileDB source.\n :return: Path to extracted source directory.\n \"\"\"\n dest_name = \"TileDB-{}\".format(TILEDB_VERSION)\n dest = os.path.join(BUILD_DIR, dest_name)\n if not os.path.exists(dest):\n url = \"https://github.com/TileDB-Inc/TileDB/archive/{}.zip\".format(TILEDB_VERSION)\n print(\"Downloading TileDB package from {}...\".format(TILEDB_VERSION))\n with get_zipfile(url) as z:\n z.extractall(BUILD_DIR)\n return dest\n\n\ndef build_libtiledb(src_dir):\n \"\"\"\n Builds and installs the native TileDB library.\n :param src_dir: Path to libtiledb source directory.\n :return: Path to the directory where the library was installed.\n \"\"\"\n libtiledb_build_dir = os.path.join(src_dir, \"build\")\n libtiledb_install_dir = os.path.join(src_dir, \"dist\")\n if not os.path.exists(libtiledb_build_dir):\n os.makedirs(libtiledb_build_dir)\n\n print(\"Building libtiledb in directory {}...\".format(libtiledb_build_dir))\n cmake = os.environ.get(\"CMAKE\", \"cmake\")\n cmake_cmd = [cmake,\n \"-DCMAKE_INSTALL_PREFIX={}\".format(libtiledb_install_dir),\n \"-DTILEDB_TESTS=OFF\",\n \"-DTILEDB_S3=ON\",\n \"-DTILEDB_HDFS={}\".format(\"ON\" if os.name == \"posix\" else \"OFF\"),\n \"-DTILEDB_INSTALL_LIBDIR=lib\"\n ]\n\n extra_cmake_args = os.environ.get(\"CMAKE_ARGS\", [])\n if extra_cmake_args:\n cmake_cmd.extend(extra_cmake_args.split())\n\n if TILEDB_DEBUG_BUILD:\n build_type = \"Debug\"\n else:\n build_type = \"Release\"\n\n cmake_cmd.append(\"-DCMAKE_BUILD_TYPE={}\".format(build_type))\n\n if os.name == 'nt':\n cmake_cmd.extend(['-A', 'x64', \"-DMSVC_MP_FLAG=/MP4\"])\n\n # cmake target directory -- important\n cmake_cmd.append(src_dir)\n\n print(\"CMake configure command: {}\".format(cmake_cmd))\n\n have_make = True\n try:\n subprocess.check_call([\"make\", \"-v\"])\n except:\n have_make = False\n\n if have_make and not os.name == 'nt':\n njobs = multiprocessing.cpu_count() or 2\n build_cmd = [\"make\", \"-j{:d}\".format(njobs)]\n install_cmd = [\"make\", \"install-tiledb\"]\n else:\n build_cmd = [\"cmake\", \"--build\", \".\", \"--config\", \"Release\"]\n install_cmd = [\"cmake\", \"--build\", \".\", \"--config\", \"Release\", \"--target\", \"install-tiledb\"]\n\n # Build and install libtiledb\n subprocess.check_call(cmake_cmd, cwd=libtiledb_build_dir)\n subprocess.check_call(build_cmd, cwd=libtiledb_build_dir)\n subprocess.check_call(install_cmd, cwd=libtiledb_build_dir)\n\n if not 'TILEDB_PATH' in os.environ:\n os.environ['TILEDB_PATH'] = libtiledb_install_dir\n return libtiledb_install_dir\n\n\ndef find_or_install_libtiledb(setuptools_cmd):\n \"\"\"\n Find the TileDB library required for building the Cython extension. 
If not found,\n download, build and install TileDB, copying the resulting shared libraries\n into a path where they will be found by package_data.\n\n :param setuptools_cmd: The setuptools command instance.\n \"\"\"\n tiledb_ext = None\n for ext in setuptools_cmd.distribution.ext_modules:\n if ext.name == \"tiledb.libtiledb\":\n tiledb_ext = ext\n break\n\n # Download, build and locally install TileDB if needed.\n if not libtiledb_exists(tiledb_ext.library_dirs):\n src_dir = download_libtiledb()\n install_dir = build_libtiledb(src_dir)\n lib_subdir = 'bin' if os.name=='nt' else 'lib'\n native_subdir = '' if is_windows() else 'native'\n # Copy libtiledb shared object(s) to the package directory so they can be found\n # with package_data.\n dest_dir = os.path.join(TILEDB_PKG_DIR, native_subdir)\n for libname in libtiledb_library_names():\n src = os.path.join(install_dir, lib_subdir, libname)\n if not os.path.exists(dest_dir):\n os.makedirs(dest_dir)\n dest = os.path.join(dest_dir, libname)\n print(\"Copying file {0} to {1}\".format(src, dest))\n shutil.copy(src, dest)\n\n # TODO hack\n # also copy the lib file for dependees\n # this needs to come before\n if is_windows():\n def do_copy(src, dest):\n print(\"Copying file {0} to {1}\".format(src, dest))\n shutil.copy(src, dest)\n\n # lib files for linking\n src = os.path.join(install_dir, \"lib\", \"tiledb.lib\")\n dest = os.path.join(dest_dir, \"tiledb.lib\")\n do_copy(src, dest)\n\n # tbb\n src = os.path.join(install_dir, \"bin\", \"tbb.dll\")\n dest = os.path.join(dest_dir, \"tbb.dll\")\n do_copy(src, dest)\n src = os.path.join(install_dir, \"lib\", \"tbb.lib\")\n dest = os.path.join(dest_dir, \"tbb.lib\")\n do_copy(src, dest)\n\n #\n tiledb_ext.library_dirs += [os.path.join(install_dir, \"lib\")]\n\n # Update the TileDB Extension instance with correct paths.\n tiledb_ext.library_dirs += [os.path.join(install_dir, lib_subdir)]\n tiledb_ext.include_dirs += [os.path.join(install_dir, \"include\")]\n # Update package_data so the shared object gets installed with the Python module.\n libtiledb_objects = [os.path.join(native_subdir, libname) for libname in libtiledb_library_names()]\n if is_windows():\n libtiledb_objects.extend(\n [os.path.join(native_subdir, libname) for libname in\n [\"tiledb.lib\", \"tbb.dll\", \"tbb.lib\"]])\n print(\"libtiledb_objects: \", libtiledb_objects)\n setuptools_cmd.distribution.package_data.update({\"tiledb\": libtiledb_objects})\n\n\nclass LazyCommandClass(dict):\n \"\"\"\n Lazy command class that defers operations requiring Cython and numpy until\n they've actually been downloaded and installed by setup_requires.\n \"\"\"\n\n def __contains__(self, key):\n return (\n key in ['build_ext', 'bdist_wheel', 'bdist_egg']\n or super(LazyCommandClass, self).__contains__(key)\n )\n\n def __setitem__(self, key, value):\n if key == 'build_ext':\n raise AssertionError(\"build_ext overridden!\")\n super(LazyCommandClass, self).__setitem__(key, value)\n\n def __getitem__(self, key):\n if key == 'build_ext':\n return self.make_build_ext_cmd()\n elif key == 'bdist_wheel':\n return self.make_bdist_wheel_cmd()\n elif key == 'bdist_egg':\n return self.make_bdist_egg_cmd()\n else:\n return super(LazyCommandClass, self).__getitem__(key)\n\n def make_build_ext_cmd(self):\n \"\"\"\n :return: A command class implementing 'build_ext'.\n \"\"\"\n from Cython.Distutils import build_ext as cython_build_ext\n\n class build_ext(cython_build_ext):\n \"\"\"\n Custom build_ext command that lazily adds numpy's include_dir to\n extensions.\n 
\"\"\"\n\n def build_extensions(self):\n \"\"\"\n Lazily append numpy's include directory to Extension includes.\n\n This is done here rather than at module scope because setup.py\n may be run before numpy has been installed, in which case\n importing numpy and calling `numpy.get_include()` will fail.\n \"\"\"\n numpy_incl = resource_filename('numpy', 'core/include')\n for ext in self.extensions:\n ext.include_dirs.append(numpy_incl)\n\n find_or_install_libtiledb(self)\n\n # This explicitly calls the superclass method rather than the\n # usual super() invocation because distutils' build_class, of\n # which Cython's build_ext is a subclass, is an old-style class\n # in Python 2, which doesn't support `super`.\n cython_build_ext.build_extensions(self)\n\n return build_ext\n\n def make_bdist_wheel_cmd(self):\n \"\"\"\n :return: A command class implementing 'bdist_wheel'.\n \"\"\"\n from wheel.bdist_wheel import bdist_wheel\n\n class bdist_wheel_cmd(bdist_wheel):\n def run(self):\n # This may modify package_data:\n find_or_install_libtiledb(self)\n bdist_wheel.run(self)\n\n return bdist_wheel_cmd\n\n def make_bdist_egg_cmd(self):\n \"\"\"\n :return: A command class implementing 'bdist_egg'.\n \"\"\"\n from setuptools.command.bdist_egg import bdist_egg\n\n class bdist_egg_cmd(bdist_egg):\n def run(self):\n # This may modify package_data:\n find_or_install_libtiledb(self)\n bdist_egg.run(self)\n\n return bdist_egg_cmd\n\n\ndef cmake_available():\n \"\"\"\n Checks whether CMake command is available and >= version 3.3.\n :return:\n \"\"\"\n try:\n output = subprocess.check_output(['cmake', '--version']).split()\n version = output[2].decode('utf-8').split('.')\n return int(version[0]) >= 3 and int(version[1]) >= 3\n except:\n return False\n\n\ndef setup_requires():\n req = ['cython>=0.27',\n 'numpy>=1.7',\n 'setuptools>=18.0',\n 'setuptools_scm>=1.5.4',\n 'wheel>=0.30']\n # Add cmake requirement if libtiledb is not found and cmake is not available.\n if not libtiledb_exists(LIB_DIRS) and not cmake_available():\n req.append('cmake>=3.11.0')\n return req\n\n\nTESTS_REQUIRE = []\nif ver < (3,):\n TESTS_REQUIRE.extend([\"unittest2\", \"mock\"])\n\n# Globals variables\nCXXFLAGS = os.environ.get(\"CXXFLAGS\", \"-std=c++11\" if not is_windows() else \"\").split()\nLFLAGS = os.environ.get(\"LFLAGS\", \"\").split()\n\n# Allow setting (lib) TileDB directory if it is installed on the system\nTILEDB_PATH = os.environ.get(\"TILEDB_PATH\", \"\")\n\n# Sources & libraries\nINC_DIRS = []\nLIB_DIRS = []\nLIBS = [\"tiledb\"]\nDEF_MACROS = []\nSOURCES = [\"tiledb/libtiledb.pyx\"]\n\n# Pass command line flags to setup.py script\n# handle --tiledb=[PATH] --lflags=[FLAGS] --cxxflags=[FLAGS]\nargs = sys.argv[:]\nfor arg in args:\n if arg.find('--tiledb=') == 0:\n TILEDB_PATH = os.path.expanduser(arg.split('=')[1])\n sys.argv.remove(arg)\n if arg.find('--lflags=') == 0:\n LFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n if arg.find('--cxxflags=') == 0:\n CXXFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n if arg.find('--debug') == 0:\n TILEDB_DEBUG_BUILD = True\n sys.argv.remove(arg)\n\n\nif TILEDB_PATH != '':\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'lib')]\n if sys.platform.startswith(\"linux\"):\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'lib64'),\n os.path.join(TILEDB_PATH, 'lib', 'x86_64-linux-gnu')]\n elif os.name == 'nt':\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'bin')]\n INC_DIRS += [os.path.join(TILEDB_PATH, 'include')]\n\nwith open('README.rst') as f:\n README_RST = 
f.read()\n\ncy_extension=Extension(\n \"tiledb.libtiledb\",\n include_dirs=INC_DIRS,\n define_macros=DEF_MACROS,\n sources=SOURCES,\n library_dirs=LIB_DIRS,\n libraries=LIBS,\n extra_link_args=LFLAGS,\n extra_compile_args=CXXFLAGS,\n language=\"c++\"\n )\nif TILEDB_DEBUG_BUILD:\n # monkey patch to tell Cython to generate debug mapping\n # files (in `cython_debug`)\n if sys.version_info < (3,0):\n cy_extension.__dict__['cython_gdb'] = True\n else:\n cy_extension.__setattr__('cython_gdb', True)\n\nsetup(\n name='tiledb',\n description=\"Pythonic interface to the TileDB array storage manager\",\n long_description=README_RST,\n author='TileDB, Inc.',\n author_email='[email protected]',\n maintainer='TileDB, Inc.',\n maintainer_email='[email protected]',\n url='https://github.com/TileDB-Inc/TileDB-Py',\n license='MIT',\n platforms=['any'],\n use_scm_version={\n 'version_scheme': 'guess-next-dev',\n 'local_scheme': 'dirty-tag',\n 'write_to': 'tiledb/version.py'\n },\n ext_modules=[\n cy_extension\n ],\n setup_requires=setup_requires(),\n install_requires=[\n 'numpy>=1.7',\n 'wheel>=0.30'\n ],\n tests_require=TESTS_REQUIRE,\n packages=find_packages(),\n cmdclass=LazyCommandClass(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Operating System :: Unix',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n",
"path": "setup.py"
}
] | [
{
"content": "from __future__ import absolute_import, print_function\n\nimport multiprocessing\nimport os\nimport shutil\nimport subprocess\nimport zipfile\nimport platform\nfrom distutils.sysconfig import get_config_var\nfrom distutils.version import LooseVersion\n\n\ntry:\n # For Python 3\n from urllib.request import urlopen\n import io\n\n def get_zipfile(url):\n \"\"\"Returns a ZipFile constructed from the file at the given URL.\"\"\"\n r = urlopen(url)\n return zipfile.ZipFile(io.BytesIO(r.read()))\nexcept ImportError:\n # Python 2\n from urllib2 import urlopen\n import StringIO\n\n def get_zipfile(url):\n \"\"\"Returns a ZipFile constructed from the file at the given URL.\"\"\"\n r = urlopen(url)\n return zipfile.ZipFile(StringIO.StringIO(r.read()))\n\nfrom setuptools import setup, Extension, find_packages\nfrom pkg_resources import resource_filename\n\nimport sys\nfrom sys import version_info as ver\n\n# Target branch\nTILEDB_VERSION = \"1.5.1\"\n\n# Use `setup.py [] --debug` for a debug build of libtiledb\nTILEDB_DEBUG_BUILD = False\n\n# Directory containing this file\nCONTAINING_DIR = os.path.abspath(os.path.dirname(__file__))\n\n# Build directory path\nBUILD_DIR = os.path.join(CONTAINING_DIR, \"build\")\n\n# TileDB package source directory\nTILEDB_PKG_DIR = os.path.join(CONTAINING_DIR, \"tiledb\")\n\n# Set deployment target for mac\n#\n# Need to ensure thatextensions are built for macos 10.9 when compiling on a\n# 10.9 system or above, overriding distutils behaviour which is to target\n# the version used to build the current python binary.\n#\n# TO OVERRIDE:\n# set MACOSX_DEPLOYMENT_TARGET before calling setup.py\n#\n# From https://github.com/pandas-dev/pandas/pull/24274\n# 3-Clause BSD License: https://github.com/pandas-dev/pandas/blob/master/LICENSE\nif sys.platform == 'darwin':\n if 'MACOSX_DEPLOYMENT_TARGET' not in os.environ:\n current_system = LooseVersion(platform.mac_ver()[0])\n python_target = LooseVersion(\n get_config_var('MACOSX_DEPLOYMENT_TARGET'))\n if python_target < '10.9' and current_system >= '10.9':\n os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9'\n\ndef is_windows():\n return os.name == 'nt'\n\ndef _libtiledb_exists(library_dirs):\n \"\"\"\n Checks the given list of paths and returns true if any contain the TileDB library.\n :return: The path to the TileDB library, or None.\n \"\"\"\n\n print(\"libtiledb_exists checking 'library_dirs': {}\".format(library_dirs))\n\n if len(library_dirs) > 0:\n names = libtiledb_library_names()\n paths = [os.path.join(d, n) for d in library_dirs for n in names]\n for p in paths:\n if os.path.exists(p):\n return p\n raise RuntimeError(\"Could not find given --tiledb library path(s):\\n{}\"\n .format(\"\\n\".join(paths)))\n # If no explicit path is given check to see if TileDB is globally installed.\n import ctypes\n if os.name == \"posix\":\n if sys.platform == \"darwin\":\n lib_name = \"libtiledb.dylib\"\n else:\n lib_name = \"libtiledb.so\"\n elif os.name == \"nt\":\n lib_name = \"tiledb.dll\"\n try:\n # note: this is a relative path on linux\n # https://bugs.python.org/issue21042\n ctypes.CDLL(lib_name)\n return lib_name\n except:\n pass\n\n return None\n\ndef libtiledb_exists(library_dirs):\n lib = _libtiledb_exists(library_dirs)\n print(\"libtiledb_exists found: '{}'\".format(lib))\n return lib\n\n\ndef libtiledb_library_names():\n \"\"\"\n :return: List of TileDB shared library names.\n \"\"\"\n if os.name == \"posix\":\n if sys.platform == \"darwin\":\n return [\"libtiledb.dylib\"]\n else:\n return [\"libtiledb.so\"]\n 
elif os.name == \"nt\":\n return [\"tiledb.dll\"]\n else:\n raise RuntimeError(\"Unsupported OS name \" + os.name)\n\n\ndef download_libtiledb():\n \"\"\"\n Downloads the native TileDB source.\n :return: Path to extracted source directory.\n \"\"\"\n dest_name = \"TileDB-{}\".format(TILEDB_VERSION)\n dest = os.path.join(BUILD_DIR, dest_name)\n if not os.path.exists(dest):\n url = \"https://github.com/TileDB-Inc/TileDB/archive/{}.zip\".format(TILEDB_VERSION)\n print(\"Downloading TileDB package from {}...\".format(TILEDB_VERSION))\n with get_zipfile(url) as z:\n z.extractall(BUILD_DIR)\n return dest\n\n\ndef build_libtiledb(src_dir):\n \"\"\"\n Builds and installs the native TileDB library.\n :param src_dir: Path to libtiledb source directory.\n :return: Path to the directory where the library was installed.\n \"\"\"\n libtiledb_build_dir = os.path.join(src_dir, \"build\")\n libtiledb_install_dir = os.path.join(src_dir, \"dist\")\n if not os.path.exists(libtiledb_build_dir):\n os.makedirs(libtiledb_build_dir)\n\n print(\"Building libtiledb in directory {}...\".format(libtiledb_build_dir))\n cmake = os.environ.get(\"CMAKE\", \"cmake\")\n cmake_cmd = [cmake,\n \"-DCMAKE_INSTALL_PREFIX={}\".format(libtiledb_install_dir),\n \"-DTILEDB_TESTS=OFF\",\n \"-DTILEDB_S3=ON\",\n \"-DTILEDB_HDFS={}\".format(\"ON\" if os.name == \"posix\" else \"OFF\"),\n \"-DTILEDB_INSTALL_LIBDIR=lib\"\n ]\n\n extra_cmake_args = os.environ.get(\"CMAKE_ARGS\", [])\n if extra_cmake_args:\n cmake_cmd.extend(extra_cmake_args.split())\n\n if TILEDB_DEBUG_BUILD:\n build_type = \"Debug\"\n else:\n build_type = \"Release\"\n\n cmake_cmd.append(\"-DCMAKE_BUILD_TYPE={}\".format(build_type))\n\n if os.name == 'nt':\n cmake_cmd.extend(['-A', 'x64', \"-DMSVC_MP_FLAG=/MP4\"])\n\n # cmake target directory -- important\n cmake_cmd.append(src_dir)\n\n print(\"CMake configure command: {}\".format(cmake_cmd))\n\n have_make = True\n try:\n subprocess.check_call([\"make\", \"-v\"])\n except:\n have_make = False\n\n if have_make and not os.name == 'nt':\n njobs = multiprocessing.cpu_count() or 2\n build_cmd = [\"make\", \"-j{:d}\".format(njobs)]\n install_cmd = [\"make\", \"install-tiledb\"]\n else:\n build_cmd = [\"cmake\", \"--build\", \".\", \"--config\", \"Release\"]\n install_cmd = [\"cmake\", \"--build\", \".\", \"--config\", \"Release\", \"--target\", \"install-tiledb\"]\n\n # Build and install libtiledb\n subprocess.check_call(cmake_cmd, cwd=libtiledb_build_dir)\n subprocess.check_call(build_cmd, cwd=libtiledb_build_dir)\n subprocess.check_call(install_cmd, cwd=libtiledb_build_dir)\n\n if not 'TILEDB_PATH' in os.environ:\n os.environ['TILEDB_PATH'] = libtiledb_install_dir\n return libtiledb_install_dir\n\n\ndef find_or_install_libtiledb(setuptools_cmd):\n \"\"\"\n Find the TileDB library required for building the Cython extension. 
If not found,\n download, build and install TileDB, copying the resulting shared libraries\n into a path where they will be found by package_data.\n\n :param setuptools_cmd: The setuptools command instance.\n \"\"\"\n tiledb_ext = None\n for ext in setuptools_cmd.distribution.ext_modules:\n if ext.name == \"tiledb.libtiledb\":\n tiledb_ext = ext\n break\n\n # Download, build and locally install TileDB if needed.\n if not libtiledb_exists(tiledb_ext.library_dirs):\n src_dir = download_libtiledb()\n install_dir = build_libtiledb(src_dir)\n lib_subdir = 'bin' if os.name=='nt' else 'lib'\n native_subdir = '' if is_windows() else 'native'\n # Copy libtiledb shared object(s) to the package directory so they can be found\n # with package_data.\n dest_dir = os.path.join(TILEDB_PKG_DIR, native_subdir)\n for libname in libtiledb_library_names():\n src = os.path.join(install_dir, lib_subdir, libname)\n if not os.path.exists(dest_dir):\n os.makedirs(dest_dir)\n dest = os.path.join(dest_dir, libname)\n print(\"Copying file {0} to {1}\".format(src, dest))\n shutil.copy(src, dest)\n\n # TODO hack\n # also copy the lib file for dependees\n # this needs to come before\n if is_windows():\n def do_copy(src, dest):\n print(\"Copying file {0} to {1}\".format(src, dest))\n shutil.copy(src, dest)\n\n # lib files for linking\n src = os.path.join(install_dir, \"lib\", \"tiledb.lib\")\n dest = os.path.join(dest_dir, \"tiledb.lib\")\n do_copy(src, dest)\n\n # tbb\n src = os.path.join(install_dir, \"bin\", \"tbb.dll\")\n dest = os.path.join(dest_dir, \"tbb.dll\")\n do_copy(src, dest)\n src = os.path.join(install_dir, \"lib\", \"tbb.lib\")\n dest = os.path.join(dest_dir, \"tbb.lib\")\n do_copy(src, dest)\n\n #\n tiledb_ext.library_dirs += [os.path.join(install_dir, \"lib\")]\n\n # Update the TileDB Extension instance with correct paths.\n tiledb_ext.library_dirs += [os.path.join(install_dir, lib_subdir)]\n tiledb_ext.include_dirs += [os.path.join(install_dir, \"include\")]\n # Update package_data so the shared object gets installed with the Python module.\n libtiledb_objects = [os.path.join(native_subdir, libname) for libname in libtiledb_library_names()]\n if is_windows():\n libtiledb_objects.extend(\n [os.path.join(native_subdir, libname) for libname in\n [\"tiledb.lib\", \"tbb.dll\", \"tbb.lib\"]])\n print(\"libtiledb_objects: \", libtiledb_objects)\n setuptools_cmd.distribution.package_data.update({\"tiledb\": libtiledb_objects})\n\n\nclass LazyCommandClass(dict):\n \"\"\"\n Lazy command class that defers operations requiring Cython and numpy until\n they've actually been downloaded and installed by setup_requires.\n \"\"\"\n\n def __contains__(self, key):\n return (\n key in ['build_ext', 'bdist_wheel', 'bdist_egg']\n or super(LazyCommandClass, self).__contains__(key)\n )\n\n def __setitem__(self, key, value):\n if key == 'build_ext':\n raise AssertionError(\"build_ext overridden!\")\n super(LazyCommandClass, self).__setitem__(key, value)\n\n def __getitem__(self, key):\n if key == 'build_ext':\n return self.make_build_ext_cmd()\n elif key == 'bdist_wheel':\n return self.make_bdist_wheel_cmd()\n elif key == 'bdist_egg':\n return self.make_bdist_egg_cmd()\n else:\n return super(LazyCommandClass, self).__getitem__(key)\n\n def make_build_ext_cmd(self):\n \"\"\"\n :return: A command class implementing 'build_ext'.\n \"\"\"\n from Cython.Distutils import build_ext as cython_build_ext\n\n class build_ext(cython_build_ext):\n \"\"\"\n Custom build_ext command that lazily adds numpy's include_dir to\n extensions.\n 
\"\"\"\n\n def build_extensions(self):\n \"\"\"\n Lazily append numpy's include directory to Extension includes.\n\n This is done here rather than at module scope because setup.py\n may be run before numpy has been installed, in which case\n importing numpy and calling `numpy.get_include()` will fail.\n \"\"\"\n numpy_incl = resource_filename('numpy', 'core/include')\n for ext in self.extensions:\n ext.include_dirs.append(numpy_incl)\n\n find_or_install_libtiledb(self)\n\n # This explicitly calls the superclass method rather than the\n # usual super() invocation because distutils' build_class, of\n # which Cython's build_ext is a subclass, is an old-style class\n # in Python 2, which doesn't support `super`.\n cython_build_ext.build_extensions(self)\n\n return build_ext\n\n def make_bdist_wheel_cmd(self):\n \"\"\"\n :return: A command class implementing 'bdist_wheel'.\n \"\"\"\n from wheel.bdist_wheel import bdist_wheel\n\n class bdist_wheel_cmd(bdist_wheel):\n def run(self):\n # This may modify package_data:\n find_or_install_libtiledb(self)\n bdist_wheel.run(self)\n\n return bdist_wheel_cmd\n\n def make_bdist_egg_cmd(self):\n \"\"\"\n :return: A command class implementing 'bdist_egg'.\n \"\"\"\n from setuptools.command.bdist_egg import bdist_egg\n\n class bdist_egg_cmd(bdist_egg):\n def run(self):\n # This may modify package_data:\n find_or_install_libtiledb(self)\n bdist_egg.run(self)\n\n return bdist_egg_cmd\n\n\ndef cmake_available():\n \"\"\"\n Checks whether CMake command is available and >= version 3.3.\n :return:\n \"\"\"\n try:\n output = subprocess.check_output(['cmake', '--version']).split()\n version = output[2].decode('utf-8').split('.')\n return int(version[0]) >= 3 and int(version[1]) >= 3\n except:\n return False\n\n\ndef setup_requires():\n req = ['cython>=0.27',\n 'numpy>=1.7',\n 'setuptools>=18.0',\n 'setuptools_scm>=1.5.4',\n 'wheel>=0.30']\n # Add cmake requirement if libtiledb is not found and cmake is not available.\n if not libtiledb_exists(LIB_DIRS) and not cmake_available():\n req.append('cmake>=3.11.0')\n return req\n\n\nTESTS_REQUIRE = []\nif ver < (3,):\n TESTS_REQUIRE.extend([\"unittest2\", \"mock\"])\n\n# Globals variables\nCXXFLAGS = os.environ.get(\"CXXFLAGS\", \"-std=c++11\" if not is_windows() else \"\").split()\nLFLAGS = os.environ.get(\"LFLAGS\", \"\").split()\n\n# Allow setting (lib) TileDB directory if it is installed on the system\nTILEDB_PATH = os.environ.get(\"TILEDB_PATH\", \"\")\n\n# Sources & libraries\nINC_DIRS = []\nLIB_DIRS = []\nLIBS = [\"tiledb\"]\nDEF_MACROS = []\nSOURCES = [\"tiledb/libtiledb.pyx\"]\n\n# Pass command line flags to setup.py script\n# handle --tiledb=[PATH] --lflags=[FLAGS] --cxxflags=[FLAGS]\nargs = sys.argv[:]\nfor arg in args:\n if arg.find('--tiledb=') == 0:\n TILEDB_PATH = os.path.expanduser(arg.split('=')[1])\n sys.argv.remove(arg)\n if arg.find('--lflags=') == 0:\n LFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n if arg.find('--cxxflags=') == 0:\n CXXFLAGS = arg.split('=')[1].split()\n sys.argv.remove(arg)\n if arg.find('--debug') == 0:\n TILEDB_DEBUG_BUILD = True\n sys.argv.remove(arg)\n\n\nif TILEDB_PATH != '':\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'lib')]\n if sys.platform.startswith(\"linux\"):\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'lib64'),\n os.path.join(TILEDB_PATH, 'lib', 'x86_64-linux-gnu')]\n elif os.name == 'nt':\n LIB_DIRS += [os.path.join(TILEDB_PATH, 'bin')]\n INC_DIRS += [os.path.join(TILEDB_PATH, 'include')]\n\nwith open('README.rst') as f:\n README_RST = 
f.read()\n\ncy_extension=Extension(\n \"tiledb.libtiledb\",\n include_dirs=INC_DIRS,\n define_macros=DEF_MACROS,\n sources=SOURCES,\n library_dirs=LIB_DIRS,\n libraries=LIBS,\n extra_link_args=LFLAGS,\n extra_compile_args=CXXFLAGS,\n language=\"c++\"\n )\nif TILEDB_DEBUG_BUILD:\n # monkey patch to tell Cython to generate debug mapping\n # files (in `cython_debug`)\n if sys.version_info < (3,0):\n cy_extension.__dict__['cython_gdb'] = True\n else:\n cy_extension.__setattr__('cython_gdb', True)\n\nsetup(\n name='tiledb',\n description=\"Pythonic interface to the TileDB array storage manager\",\n long_description=README_RST,\n author='TileDB, Inc.',\n author_email='[email protected]',\n maintainer='TileDB, Inc.',\n maintainer_email='[email protected]',\n url='https://github.com/TileDB-Inc/TileDB-Py',\n license='MIT',\n platforms=['any'],\n use_scm_version={\n 'version_scheme': 'guess-next-dev',\n 'local_scheme': 'dirty-tag',\n 'write_to': 'tiledb/version.py'\n },\n ext_modules=[\n cy_extension\n ],\n setup_requires=setup_requires(),\n install_requires=[\n 'numpy>=1.7',\n 'wheel>=0.30'\n ],\n tests_require=TESTS_REQUIRE,\n packages=find_packages(),\n cmdclass=LazyCommandClass(),\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Information Technology',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: MIT License',\n 'Programming Language :: Python',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n 'Operating System :: Unix',\n 'Operating System :: POSIX :: Linux',\n 'Operating System :: MacOS :: MacOS X',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n",
"path": "setup.py"
}
] | diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index cfbf0ade25..35e00249b0 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -27,7 +27,7 @@ steps:
architecture: 'x64'
- script: |
- python -m pip install --upgrade pip setuptools wheel numpy tox setuptools-scm cython
+ python -m pip install --upgrade pip setuptools wheel numpy tox setuptools-scm cython psutil
displayName: 'Install dependencies'
- script: |
diff --git a/requirements.txt b/requirements.txt
index 8c34793759..6db5ef6f8c 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -4,3 +4,4 @@ numpy>=1.7.2
setuptools>=18.0.1
setuptools-scm>=1.5.4
wheel>=0.30.0
+psutil
diff --git a/requirements_dev.txt b/requirements_dev.txt
index b0e1dc77cc..130724f89b 100644
--- a/requirements_dev.txt
+++ b/requirements_dev.txt
@@ -6,3 +6,4 @@ setuptools==40.8.0
setuptools-scm==1.5.4
wheel==0.30.0
tox==3.0.0
+psutil
diff --git a/setup.py b/setup.py
index 44def48710..1e0652eaf9 100644
--- a/setup.py
+++ b/setup.py
@@ -36,7 +36,7 @@ def get_zipfile(url):
from sys import version_info as ver
# Target branch
-TILEDB_VERSION = "dev"
+TILEDB_VERSION = "1.5.1"
# Use `setup.py [] --debug` for a debug build of libtiledb
TILEDB_DEBUG_BUILD = False
diff --git a/tiledb/libtiledb.pyx b/tiledb/libtiledb.pyx
index fef27d6196..937d0bb9c2 100644
--- a/tiledb/libtiledb.pyx
+++ b/tiledb/libtiledb.pyx
@@ -11,9 +11,6 @@ from cpython.bytes cimport (PyBytes_GET_SIZE,
PyBytes_FromString,
PyBytes_FromStringAndSize)
-from cpython.mem cimport (PyMem_Malloc,
- PyMem_Realloc,
- PyMem_Free)
from cpython.ref cimport (Py_INCREF, Py_DECREF, PyTypeObject)
@@ -52,6 +49,10 @@ cdef extern from "numpy/arrayobject.h":
object obj)
# Steals a reference to dtype, need to incref the dtype
object PyArray_Scalar(void* ptr, np.dtype descr, object itemsize)
+ void PyArray_ENABLEFLAGS(np.ndarray arr, int flags)
+ void* PyDataMem_NEW(size_t nbytes)
+ void* PyDataMem_RENEW(void* data, size_t nbytes)
+ void PyDataMem_FREE(void* data)
import sys
from os.path import abspath
@@ -72,7 +73,8 @@ _MB = 1024 * _KB
# The native int type for this platform
IntType = np.dtype(np.int_)
-# Numpy initialization code
+# Numpy initialization code (critical)
+# https://docs.scipy.org/doc/numpy/reference/c-api.array.html#c.import_array
np.import_array()
def version():
@@ -2375,7 +2377,7 @@ cdef class KV(object):
@staticmethod
def create(uri, KVSchema schema, key=None, Ctx ctx=default_ctx()):
- """Creates a persistent KV at the given URI, returns a KV class instance
+ """Creates a persistent KV at the given URI
"""
cdef tiledb_ctx_t* ctx_ptr = ctx.ptr
cdef bytes buri = unicode_path(uri)
@@ -2402,7 +2404,7 @@ cdef class KV(object):
if rc != TILEDB_OK:
_raise_ctx_err(ctx_ptr, rc)
- return KV(uri, key=key, ctx=ctx)
+ return
def __init__(self, uri, mode='r', key=None, timestamp=None, Ctx ctx=default_ctx()):
cdef tiledb_ctx_t* ctx_ptr = ctx.ptr
@@ -3403,6 +3405,7 @@ cdef class Array(object):
cdef uint64_t _timestamp = 0
if timestamp is not None:
_timestamp = <uint64_t> timestamp
+
# allocate and then open the array
cdef tiledb_array_t* array_ptr = NULL
cdef int rc = TILEDB_OK
@@ -3429,9 +3432,12 @@ cdef class Array(object):
# view on a single attribute
if attr and not any(attr == schema.attr(i).name for i in range(schema.nattr)):
+ tiledb_array_close(ctx_ptr, array_ptr)
+ tiledb_array_free(&array_ptr)
raise KeyError("No attribute matching '{}'".format(attr))
else:
self.view_attr = unicode(attr) if (attr is not None) else None
+
self.ctx = ctx
self.uri = unicode(uri)
self.mode = unicode(mode)
@@ -3817,11 +3823,11 @@ cdef class Array(object):
# note: must not divide by itemsize for a string, because it may be zero (e.g 'S0')
dims[0] = el_bytelen / el_dtype.base.itemsize
newobj = \
- PyArray_NewFromDescr(
+ np.copy(PyArray_NewFromDescr(
<PyTypeObject*> np.ndarray,
el_dtype.base, 1, dims, NULL,
el_ptr,
- np.NPY_ENSURECOPY, <object> NULL)
+ 0, <object> NULL))
# set the output object
out_flat[el] = newobj
@@ -3840,7 +3846,6 @@ cdef class ReadQuery(object):
@property
def _offsets(self): return self._offsets
-
def __init__(self, Array array, np.ndarray subarray, list attr_names, tiledb_layout_t layout):
self._buffers = dict()
self._offsets = dict()
@@ -3854,8 +3859,10 @@ cdef class ReadQuery(object):
cdef:
vector [void*] buffer_ptrs
vector [uint64_t*] offsets_ptrs
+ void* tmp_ptr = NULL
void* subarray_ptr = NULL
np.npy_intp dims[1]
+ np.ndarray tmparray
bytes battr_name
Py_ssize_t nattr = len(attr_names)
@@ -3880,11 +3887,13 @@ cdef class ReadQuery(object):
tiledb_query_free(&query_ptr)
_raise_ctx_err(ctx_ptr, rc)
- cdef uint64_t* buffer_sizes_ptr = <uint64_t*> PyMem_Malloc(nattr * sizeof(uint64_t))
+ # lifetime: free in finally clause
+ cdef uint64_t* buffer_sizes_ptr = <uint64_t*> PyDataMem_NEW(nattr * sizeof(uint64_t))
if buffer_sizes_ptr == NULL:
tiledb_query_free(&query_ptr)
raise MemoryError()
- cdef uint64_t* offsets_sizes_ptr = <uint64_t*> PyMem_Malloc(nattr * sizeof(uint64_t))
+ # lifetime: free in finally clause
+ cdef uint64_t* offsets_sizes_ptr = <uint64_t*> PyDataMem_NEW(nattr * sizeof(uint64_t))
if offsets_sizes_ptr == NULL:
tiledb_query_free(&query_ptr)
raise MemoryError()
@@ -3911,19 +3920,31 @@ cdef class ReadQuery(object):
# allocate buffer to hold offsets for var-length attribute
# NOTE offsets_sizes is in BYTES
- offsets_ptrs.push_back(<uint64_t*> PyMem_Malloc(<size_t>(offsets_sizes_ptr[i])))
- #self._offsets[name] = np.empty(offsets_sizes_ptr[i], dtype=np.uint8)
+
+ # lifetime:
+ # - free on exception
+ # - otherwise, ownership transferred to NumPy
+ tmp_ptr = PyDataMem_NEW(<size_t>(offsets_sizes_ptr[i]))
+ if tmp_ptr == NULL:
+ raise MemoryError()
+ offsets_ptrs.push_back(<uint64_t*> tmp_ptr)
+ tmp_ptr = NULL
else:
rc = tiledb_array_max_buffer_size(ctx_ptr, array_ptr, battr_name,
subarray_ptr, &(buffer_sizes_ptr[i]))
-
if rc != TILEDB_OK:
_raise_ctx_err(ctx_ptr, rc)
offsets_ptrs.push_back(NULL)
- buffer_ptrs.push_back(<void*> PyMem_Malloc(<size_t>(buffer_sizes_ptr[i])))
- #self._buffers[name] = np.empty(buffer_sizes_ptr[i], dtype=np.uint8)
+ # lifetime:
+ # - free on exception
+ # - otherwise, ownership transferred to NumPy
+ tmp_ptr = PyDataMem_NEW(<size_t>(buffer_sizes_ptr[i]))
+ if tmp_ptr == NULL:
+ raise MemoryError()
+ buffer_ptrs.push_back(tmp_ptr)
+ tmp_ptr = NULL
# set the query buffers
for i in range(nattr):
@@ -3956,39 +3977,34 @@ cdef class ReadQuery(object):
for i in range(nattr):
name = attr_names[i]
- dtype = np.dtype('uint8')
-
# Note: we don't know the actual read size until *after* the query executes
# so the realloc below is very important as consumers of this buffer
# rely on the size corresponding to actual bytes read.
if name != "coords" and schema.attr(name).isvar:
dims[0] = offsets_sizes_ptr[i]
- Py_INCREF(dtype)
+ tmp_ptr = PyDataMem_RENEW(offsets_ptrs[i], <size_t>(offsets_sizes_ptr[i]))
self._offsets[name] = \
- PyArray_NewFromDescr(
- <PyTypeObject*> np.ndarray,
- dtype, 1, dims, NULL,
- PyMem_Realloc(offsets_ptrs[i], <size_t>(offsets_sizes_ptr[i])),
- np.NPY_OWNDATA, <object> NULL)
+ np.PyArray_SimpleNewFromData(1, dims, np.NPY_UINT8, tmp_ptr)
+ PyArray_ENABLEFLAGS(self._offsets[name], np.NPY_OWNDATA)
dims[0] = buffer_sizes_ptr[i]
- Py_INCREF(dtype)
+ tmp_ptr = PyDataMem_RENEW(buffer_ptrs[i], <size_t>(buffer_sizes_ptr[i]))
self._buffers[name] = \
- PyArray_NewFromDescr(
- <PyTypeObject*> np.ndarray,
- dtype, 1, dims, NULL,
- PyMem_Realloc(buffer_ptrs[i], <size_t>(buffer_sizes_ptr[i])),
- np.NPY_OWNDATA, <object> NULL)
+ np.PyArray_SimpleNewFromData(1, dims, np.NPY_UINT8, tmp_ptr)
+ PyArray_ENABLEFLAGS(self._buffers[name], np.NPY_OWNDATA)
+
except:
+ # we only free the PyDataMem_NEW'd buffers on exception,
+ # otherwise NumPy manages them
for i in range(nattr):
if buffer_ptrs[i] != NULL:
- PyMem_Free(buffer_ptrs[i])
+ PyDataMem_FREE(buffer_ptrs[i])
if offsets_ptrs[i] != NULL:
- PyMem_Free(offsets_ptrs[i])
+ PyDataMem_FREE(offsets_ptrs[i])
raise
finally:
- PyMem_Free(buffer_sizes_ptr)
- PyMem_Free(offsets_sizes_ptr)
+ PyDataMem_FREE(buffer_sizes_ptr)
+ PyDataMem_FREE(offsets_sizes_ptr)
tiledb_query_free(&query_ptr)
diff --git a/tiledb/tests/common.py b/tiledb/tests/common.py
index af594cf91e..c2acaf85e6 100644
--- a/tiledb/tests/common.py
+++ b/tiledb/tests/common.py
@@ -4,9 +4,11 @@
import os
import shutil
import tempfile
+import traceback
from unittest import TestCase
class DiskTestCase(TestCase):
+ pathmap = dict()
def setUp(self):
prefix = 'tiledb-' + self.__class__.__name__
@@ -20,8 +22,14 @@ def tearDown(self):
except OSError as exc:
print("test '{}' error deleting '{}'".format(self.__class__.__name__,
dirpath))
- raise
+ print("registered paths and originating functions:")
+ for path,frame in self.pathmap.items():
+ print(" '{}' <- '{}'".format(path,frame))
+ raise exc
def path(self, path):
- return os.path.abspath(os.path.join(self.rootdir, path))
+ out = os.path.abspath(os.path.join(self.rootdir, path))
+ frame = traceback.extract_stack(limit=2)[-2][2]
+ self.pathmap[out] = frame
+ return out
diff --git a/tiledb/tests/test_libtiledb.py b/tiledb/tests/test_libtiledb.py
index 84d5144ddb..bead3d02cc 100644
--- a/tiledb/tests/test_libtiledb.py
+++ b/tiledb/tests/test_libtiledb.py
@@ -1046,14 +1046,13 @@ def test_varlen_write_floats(self):
att = tiledb.Attr(dtype=np.float64, var=True, ctx=ctx)
schema = tiledb.ArraySchema(dom, (att,), ctx=ctx)
-
tiledb.DenseArray.create(self.path("foo"), schema)
with tiledb.DenseArray(self.path("foo"), mode='w', ctx=ctx) as T:
T[:] = A
with tiledb.DenseArray(self.path("foo"), mode='r', ctx=ctx) as T:
T_ = T[:]
- self.assertEqual(len(A), len(T))
+ self.assertEqual(len(A), len(T_))
# can't use assert_array_equal w/ np.object array
self.assertTrue(all(np.array_equal(x,A[i]) for i,x in enumerate(T_)))
@@ -1560,9 +1559,9 @@ def test_pickle_roundtrip(self):
with io.BytesIO() as buf, tiledb.DenseArray(uri) as V:
pickle.dump(V, buf)
buf.seek(0)
- V2 = pickle.load(buf)
- # make sure anonymous view pickles and round-trips
- assert_array_equal(V, V2)
+ with pickle.load(buf) as V2:
+ # make sure anonymous view pickles and round-trips
+ assert_array_equal(V, V2)
def test_pickle_with_config(self):
import io, pickle
@@ -1606,13 +1605,13 @@ def test_view_multiattr(self):
anon_ar = np.random.rand(3, 3)
named_ar = np.random.rand(3, 3)
- with tiledb.DenseArray(uri, 'w') as T:
+ with tiledb.DenseArray(uri, 'w', ctx=ctx) as T:
T[:] = {'': anon_ar, 'named': named_ar}
with self.assertRaises(KeyError):
- T = tiledb.DenseArray(uri, 'r', attr="foo111")
+ T = tiledb.DenseArray(uri, 'r', attr="foo111", ctx=ctx)
- with tiledb.DenseArray(uri, 'r', attr="named") as T:
+ with tiledb.DenseArray(uri, 'r', attr="named", ctx=ctx) as T:
assert_array_equal(T, named_ar)
# make sure each attr view can pickle and round-trip
with io.BytesIO() as buf:
@@ -1621,7 +1620,7 @@ def test_view_multiattr(self):
with pickle.load(buf) as T_rt:
assert_array_equal(T, T_rt)
- with tiledb.DenseArray(uri, 'r', attr="") as T:
+ with tiledb.DenseArray(uri, 'r', attr="", ctx=ctx) as T:
assert_array_equal(T, anon_ar)
with io.BytesIO() as buf:
@@ -1632,10 +1631,10 @@ def test_view_multiattr(self):
# set subarray on multi-attribute
range_ar = np.arange(0,9).reshape(3,3)
- with tiledb.DenseArray(uri, 'w', attr='named') as V_named:
+ with tiledb.DenseArray(uri, 'w', attr='named', ctx=ctx) as V_named:
V_named[1:3,1:3] = range_ar[1:3,1:3]
- with tiledb.DenseArray(uri, 'r', attr='named') as V_named:
+ with tiledb.DenseArray(uri, 'r', attr='named', ctx=ctx) as V_named:
assert_array_equal(V_named[1:3,1:3], range_ar[1:3,1:3])
@@ -1749,8 +1748,7 @@ def test_kv_write_schema_load(self):
a1 = tiledb.Attr("value", dtype=bytes, ctx=ctx)
schema = tiledb.KVSchema(ctx, attrs=(a1,))
# persist kv schema
- kv = tiledb.KV.create(self.path("foo"), schema, ctx=ctx)
- self.assertNotEqual(kv, None)
+ tiledb.KV.create(self.path("foo"), schema, ctx=ctx)
self.assertEqual(tiledb.KVSchema.load(self.path("foo"), ctx=ctx), schema)
def test_kv_contains(self):
@@ -1798,8 +1796,7 @@ def test_kv_write_consolidate(self):
schema = tiledb.KVSchema(attrs=(a1,), ctx=ctx)
# persist kv schema
- kv = tiledb.KV.create(self.path("foo1"), schema, ctx=ctx)
- kv.close()
+ tiledb.KV.create(self.path("foo1"), schema, ctx=ctx)
def append_kv(path, k, v):
kv = tiledb.KV(path, mode='w', ctx=ctx)
@@ -1850,18 +1847,19 @@ def test_kv_write_load_read_encrypted(self):
def test_kv_update_reload(self):
# create a kv array
- ctx = tiledb.Ctx()
- a1 = tiledb.Attr("val", ctx=ctx, dtype=bytes)
+ ctx1 = tiledb.Ctx()
+ ctx2 = tiledb.Ctx()
+ a1 = tiledb.Attr("val", ctx=ctx1, dtype=bytes)
# persist kv schema
- schema = tiledb.KVSchema(attrs=(a1,), ctx=ctx)
- tiledb.KV.create(self.path("foo"), schema, ctx=ctx)
+ schema = tiledb.KVSchema(attrs=(a1,), ctx=ctx1)
+ tiledb.KV.create(self.path("foo"), schema, ctx=ctx1)
# load kv array
- with tiledb.KV(self.path("foo"), mode='w', ctx=ctx) as kv1:
+ with tiledb.KV(self.path("foo"), mode='w', ctx=ctx1) as kv1:
kv1['foo'] = 'bar'
kv1.flush()
- with tiledb.KV(self.path("foo"), mode='r', ctx=ctx) as kv2:
+ with tiledb.KV(self.path("foo"), mode='r', ctx=ctx2) as kv2:
self.assertTrue('foo' in kv2)
kv1['bar'] = 'baz'
kv1.flush()
@@ -2113,6 +2111,75 @@ def test_io(self):
self.assertEqual(io.readall(), b"")
+class MemoryTest(DiskTestCase):
+ # sanity check that memory usage doesn't increase more than 10% reading 40MB 100x
+ # https://github.com/TileDB-Inc/TileDB-Py/issues/150
+
+ def setUp(self):
+ super(MemoryTest, self).setUp()
+ import sys
+ if not sys.platform.startswith("linux"):
+ self.skipTest("Only run MemoryTest on linux")
+
+ @staticmethod
+ def use_many_buffers(path):
+ import psutil, os
+ # https://stackoverflow.com/questions/938733/total-memory-used-by-python-process
+ process = psutil.Process(os.getpid())
+
+ x = np.ones(10000000, dtype=np.float32)
+ ctx = tiledb.Ctx()
+ d1 = tiledb.Dim(
+ 'test_domain', domain=(0, x.shape[0] - 1), tile=10000, dtype="uint32")
+ domain = tiledb.Domain(d1)
+ v = tiledb.Attr(
+ 'test_value',
+ dtype="float32")
+
+ schema = tiledb.ArraySchema(
+ domain=domain, attrs=(v,), cell_order="row-major", tile_order="row-major")
+
+ A = tiledb.DenseArray.create(path, schema)
+
+ with tiledb.DenseArray(path, mode="w", ctx=ctx) as A:
+ A[:] = {'test_value': x}
+
+ with tiledb.DenseArray(path, mode='r') as data:
+ data[:]
+ initial = process.memory_info().rss
+ print(" initial RSS: {}".format(round(initial / (10 ** 6)), 2))
+ for i in range(100):
+ # read but don't store: this memory should be freed
+ data[:]
+
+ if i % 10 == 0:
+ print(' read iter {}, RSS (MB): {}'.format(
+ i, round(process.memory_info().rss / (10 ** 6), 2)))
+
+ return initial
+
+ def test_memory_cleanup(self):
+ import tiledb, numpy as np
+ import psutil, os
+
+ # run function which reads 100x from a 40MB test array
+ # TODO: RSS is too loose to do this end-to-end, so should use instrumentation.
+ print("Starting TileDB-Py memory test:")
+ initial = self.use_many_buffers(self.path('test_memory_cleanup'))
+
+ process = psutil.Process(os.getpid())
+ final = process.memory_info().rss
+ print(" final RSS: {}".format(round(final / (10 ** 6)), 2))
+
+ import gc
+ gc.collect()
+
+ final_gc = process.memory_info().rss
+ print(" final RSS after forced GC: {}".format(round(final_gc / (10 ** 6)), 2))
+
+ self.assertTrue((final - initial) < (.1 * initial))
+
+
#if __name__ == '__main__':
# # run a single example for in-process debugging
# # better to use `pytest --gdb` if available
|
AUTOMATIC1111__stable-diffusion-webui-7353 | [Bug]: thumbnail cards are not loading the preview image
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
I just get a black image, and if I try to update an image, it goes black too.
It was working before checkpoints were added; I don't know if that's related.
### Steps to reproduce the problem
1. Go to ....
2. Press ....
3. ...
### What should have happened?
should see the preview images
### Commit where the problem happens
0a8515085ef258d4b76fdc000f7ed9d55751d6b8
### What platforms do you use to access the UI ?
_No response_
### What browsers do you use to access the UI ?
_No response_
### Command Line Arguments
```Shell
--api --cors-allow-origins http://localhost:5173 --administrator --no-half-vae --no-half --disable-safe-unpickle --force-cpu --xformers
```
### List of extensions
all of them
### Console logs
```Shell
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 94, in receive
return self.receive_nowait()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 77, in call_next
message = await recv_stream.receive()
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\streams\memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 407, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "D:\stable-diffusion-webui\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\extensions\auto-sd-paint-ext\backend\app.py", line 391, in app_encryption_middleware
res: StreamingResponse = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 106, in __call__
response = await self.dispatch_func(request, call_next)
File "D:\stable-diffusion-webui\modules\api\api.py", line 96, in log_and_time
res: Response = await call_next(req)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next
raise app_exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\base.py", line 69, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\gzip.py", line 43, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 706, in __call__
await route.handle(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\fastapi\routing.py", line 163, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "D:\stable-diffusion-webui\venv\lib\site-packages\starlette\concurrency.py", line 41, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in fetch_file
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\stable-diffusion-webui\modules\ui_extra_networks.py", line 28, in <listcomp>
if not any([Path(x).resolve() in Path(filename).resolve().parents for x in allowed_dirs]):
File "D:\Python\Python310\lib\pathlib.py", line 960, in __new__
self = cls._from_parts(args)
File "D:\Python\Python310\lib\pathlib.py", line 594, in _from_parts
drv, root, parts = self._parse_args(args)
File "D:\Python\Python310\lib\pathlib.py", line 578, in _parse_args
a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
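The `TypeError` at the bottom of the log comes from `Path(x)` being called with `x = None` inside the list comprehension over `allowed_dirs`, which suggests one of the allowed preview directories (likely the checkpoint directory when `--ckpt-dir` is not set) is `None`. A minimal sketch of the kind of guard that avoids this; the helper name and its integration point are illustrative only:

```python
from pathlib import Path

def is_under_allowed_dir(filename, allowed_dirs):
    # Drop None entries (e.g. an unset --ckpt-dir) before building Path objects,
    # so Path(None) can never raise the TypeError seen in the log above.
    valid_dirs = [Path(d).resolve() for d in allowed_dirs if d is not None]
    target = Path(filename).resolve()
    return any(d in target.parents for d in valid_dirs)
```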
### Additional information
_No response_
| [
{
"content": "import html\r\nimport json\r\nimport os\r\nimport urllib.parse\r\n\r\nfrom modules import shared, ui_extra_networks, sd_models\r\n\r\n\r\nclass ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):\r\n def __init__(self):\r\n super().__init__('Checkpoints')\r\n\r\n def refresh(self):\r\n shared.refresh_checkpoints()\r\n\r\n def list_items(self):\r\n for name, checkpoint in sd_models.checkpoints_list.items():\r\n path, ext = os.path.splitext(checkpoint.filename)\r\n previews = [path + \".png\", path + \".preview.png\"]\r\n\r\n preview = None\r\n for file in previews:\r\n if os.path.isfile(file):\r\n preview = self.link_preview(file)\r\n break\r\n\r\n yield {\r\n \"name\": checkpoint.name_for_extra,\r\n \"filename\": path,\r\n \"preview\": preview,\r\n \"search_term\": self.search_terms_from_path(checkpoint.filename),\r\n \"onclick\": '\"' + html.escape(f\"\"\"return selectCheckpoint({json.dumps(name)})\"\"\") + '\"',\r\n \"local_preview\": path + \".png\",\r\n }\r\n\r\n def allowed_directories_for_previews(self):\r\n return [shared.cmd_opts.ckpt_dir, sd_models.model_path]\r\n\r\n",
"path": "modules/ui_extra_networks_checkpoints.py"
}
] | [
{
"content": "import html\r\nimport json\r\nimport os\r\nimport urllib.parse\r\n\r\nfrom modules import shared, ui_extra_networks, sd_models\r\n\r\n\r\nclass ExtraNetworksPageCheckpoints(ui_extra_networks.ExtraNetworksPage):\r\n def __init__(self):\r\n super().__init__('Checkpoints')\r\n\r\n def refresh(self):\r\n shared.refresh_checkpoints()\r\n\r\n def list_items(self):\r\n for name, checkpoint1 in sd_models.checkpoints_list.items():\r\n checkpoint: sd_models.CheckpointInfo = checkpoint1\r\n path, ext = os.path.splitext(checkpoint.filename)\r\n previews = [path + \".png\", path + \".preview.png\"]\r\n\r\n preview = None\r\n for file in previews:\r\n if os.path.isfile(file):\r\n preview = self.link_preview(file)\r\n break\r\n\r\n yield {\r\n \"name\": checkpoint.model_name,\r\n \"filename\": path,\r\n \"preview\": preview,\r\n \"onclick\": '\"' + html.escape(f\"\"\"return selectCheckpoint({json.dumps(name)})\"\"\") + '\"',\r\n \"local_preview\": path + \".png\",\r\n }\r\n\r\n def allowed_directories_for_previews(self):\r\n return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]\r\n\r\n",
"path": "modules/ui_extra_networks_checkpoints.py"
}
] | diff --git a/modules/ui_extra_networks_checkpoints.py b/modules/ui_extra_networks_checkpoints.py
index c66cb8307ad..5b471671a09 100644
--- a/modules/ui_extra_networks_checkpoints.py
+++ b/modules/ui_extra_networks_checkpoints.py
@@ -34,5 +34,5 @@ def list_items(self):
}
def allowed_directories_for_previews(self):
- return [shared.cmd_opts.ckpt_dir, sd_models.model_path]
+ return [v for v in [shared.cmd_opts.ckpt_dir, sd_models.model_path] if v is not None]
|
praw-dev__praw-1304 | Sphinx stops emitting warnings if it encounters only one
**Describe the bug**
When running pre_push, if Sphinx runs into a warning, it does not print any more warnings after it. When there are lots of warnings, this takes a lot of time, since pre_push has to be re-run once per warning.
I recommend adding the command line argument `--keep-going`. This will cause it to print all warnings.
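A minimal, self-contained sketch of the suggested invocation; the temporary build directory mirrors what pre_push.py already does, and the exact paths are illustrative:

```python
import subprocess
import tempfile

# -W still turns warnings into errors, while --keep-going makes Sphinx finish
# the build and report every warning before exiting non-zero, so one run
# surfaces the whole list instead of a single warning at a time.
with tempfile.TemporaryDirectory() as tmp_dir:
    subprocess.check_call(["sphinx-build", "-W", "--keep-going", "docs", tmp_dir])
```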
**System Info**
- PRAW Version: Latest
| [
{
"content": "#!/usr/bin/env python3\n\"\"\"Run static analysis on the project.\"\"\"\n\nimport argparse\nimport sys\nfrom os import path\nfrom shutil import rmtree\nfrom subprocess import CalledProcessError, check_call\nfrom tempfile import mkdtemp\n\ncurrent_directory = path.abspath(path.join(__file__, \"..\"))\n\n\ndef do_process(args, shell=False):\n \"\"\"Run program provided by args.\n\n Return True on success.\n\n Output failed message on non-zero exit and return False.\n\n Exit if command is not found.\n \"\"\"\n print(\"Running: {}\".format(\" \".join(args)))\n try:\n check_call(args, shell=shell)\n except CalledProcessError:\n print(\"\\nFailed: {}\".format(\" \".join(args)))\n return False\n except Exception as exc:\n sys.stderr.write(str(exc) + \"\\n\")\n sys.exit(1)\n return True\n\n\ndef run_static():\n \"\"\"Runs the static tests.\n\n Returns a statuscode of 0 if everything ran correctly.\n Otherwise, it will return statuscode 1\n \"\"\"\n success = True\n success &= do_process(\n [\n sys.executable,\n path.join(current_directory, \"tools\", \"static_word_checks.py\"),\n \"--replace\",\n ]\n )\n success &= do_process([\"black .\"], shell=True)\n success &= do_process([\"flake8\", \"--exclude=.eggs,build,docs\"])\n success &= do_process([\"pydocstyle\", \"praw\"])\n # success &= do_process([\"pylint\", \"--rcfile=.pylintrc\", \"praw\"])\n\n tmp_dir = mkdtemp()\n try:\n success &= do_process([\"sphinx-build\", \"-W\", \"docs\", tmp_dir])\n finally:\n rmtree(tmp_dir)\n\n return success\n\n\ndef run_unit():\n \"\"\"Runs the unit-tests.\n\n Follows the behavior of the static tests,\n where any failed tests cause pre_push.py to fail.\n \"\"\"\n return do_process(\n [sys.executable, path.join(current_directory, \"setup.py\"), \"test\"]\n )\n\n\ndef main():\n \"\"\"Runs the main function.\n\n usage: pre_push.py [-h] [-n] [-u] [-a]\n\n Run static and/or unit-tests\n \"\"\"\n parser = argparse.ArgumentParser(\n description=\"Run static and/or unit-tests\"\n )\n parser.add_argument(\n \"-n\",\n \"--unstatic\",\n action=\"store_true\",\n help=\"Do not run static tests (black/flake8/pydocstyle/sphinx-build)\",\n default=False,\n )\n parser.add_argument(\n \"-u\",\n \"--unit-tests\",\n \"--unit\",\n action=\"store_true\",\n default=False,\n help=\"Run the unit tests\",\n )\n parser.add_argument(\n \"-a\",\n \"--all\",\n action=\"store_true\",\n default=False,\n help=\"Run all of the tests (static and unit). \"\n \"Overrides the unstatic argument.\",\n )\n args = parser.parse_args()\n success = True\n try:\n if not args.unstatic or args.all:\n success &= run_static()\n if args.all or args.unit_tests:\n success &= run_unit()\n except KeyboardInterrupt:\n return int(not False)\n return int(not success)\n\n\nif __name__ == \"__main__\":\n exit_code = main()\n print(\n \"\\npre_push.py: Success!\" if not exit_code else \"\\npre_push.py: Fail\"\n )\n sys.exit(exit_code)\n",
"path": "pre_push.py"
}
] | [
{
"content": "#!/usr/bin/env python3\n\"\"\"Run static analysis on the project.\"\"\"\n\nimport argparse\nimport sys\nfrom os import path\nfrom shutil import rmtree\nfrom subprocess import CalledProcessError, check_call\nfrom tempfile import mkdtemp\n\ncurrent_directory = path.abspath(path.join(__file__, \"..\"))\n\n\ndef do_process(args, shell=False):\n \"\"\"Run program provided by args.\n\n Return True on success.\n\n Output failed message on non-zero exit and return False.\n\n Exit if command is not found.\n \"\"\"\n print(\"Running: {}\".format(\" \".join(args)))\n try:\n check_call(args, shell=shell)\n except CalledProcessError:\n print(\"\\nFailed: {}\".format(\" \".join(args)))\n return False\n except Exception as exc:\n sys.stderr.write(str(exc) + \"\\n\")\n sys.exit(1)\n return True\n\n\ndef run_static():\n \"\"\"Runs the static tests.\n\n Returns a statuscode of 0 if everything ran correctly.\n Otherwise, it will return statuscode 1\n \"\"\"\n success = True\n success &= do_process(\n [\n sys.executable,\n path.join(current_directory, \"tools\", \"static_word_checks.py\"),\n \"--replace\",\n ]\n )\n success &= do_process([\"black .\"], shell=True)\n success &= do_process([\"flake8\", \"--exclude=.eggs,build,docs\"])\n success &= do_process([\"pydocstyle\", \"praw\"])\n # success &= do_process([\"pylint\", \"--rcfile=.pylintrc\", \"praw\"])\n\n tmp_dir = mkdtemp()\n try:\n success &= do_process(\n [\"sphinx-build\", \"-W\", \"--keep-going\", \"docs\", tmp_dir]\n )\n finally:\n rmtree(tmp_dir)\n\n return success\n\n\ndef run_unit():\n \"\"\"Runs the unit-tests.\n\n Follows the behavior of the static tests,\n where any failed tests cause pre_push.py to fail.\n \"\"\"\n return do_process(\n [sys.executable, path.join(current_directory, \"setup.py\"), \"test\"]\n )\n\n\ndef main():\n \"\"\"Runs the main function.\n\n usage: pre_push.py [-h] [-n] [-u] [-a]\n\n Run static and/or unit-tests\n \"\"\"\n parser = argparse.ArgumentParser(\n description=\"Run static and/or unit-tests\"\n )\n parser.add_argument(\n \"-n\",\n \"--unstatic\",\n action=\"store_true\",\n help=\"Do not run static tests (black/flake8/pydocstyle/sphinx-build)\",\n default=False,\n )\n parser.add_argument(\n \"-u\",\n \"--unit-tests\",\n \"--unit\",\n action=\"store_true\",\n default=False,\n help=\"Run the unit tests\",\n )\n parser.add_argument(\n \"-a\",\n \"--all\",\n action=\"store_true\",\n default=False,\n help=\"Run all of the tests (static and unit). \"\n \"Overrides the unstatic argument.\",\n )\n args = parser.parse_args()\n success = True\n try:\n if not args.unstatic or args.all:\n success &= run_static()\n if args.all or args.unit_tests:\n success &= run_unit()\n except KeyboardInterrupt:\n return int(not False)\n return int(not success)\n\n\nif __name__ == \"__main__\":\n exit_code = main()\n print(\n \"\\npre_push.py: Success!\" if not exit_code else \"\\npre_push.py: Fail\"\n )\n sys.exit(exit_code)\n",
"path": "pre_push.py"
}
] | diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index dcb104835..cf5962247 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -36,7 +36,7 @@ jobs:
- name: Run pydocstyle
run: pydocstyle praw
- name: Run sphinx
- run: sphinx-build -W docs/ /tmp/foo
+ run: sphinx-build -W --keep-going docs/ /tmp/foo
strategy:
matrix:
os: [macOS-latest, ubuntu-latest, windows-latest]
@@ -69,7 +69,7 @@ jobs:
- name: Run pydocstyle
run: pydocstyle praw
- name: Run sphinx
- run: sphinx-build -W docs/ /tmp/foo
+ run: sphinx-build -W --keep-going docs/ /tmp/foo
strategy:
matrix:
python-version: [3.6, 3.7, 3.8]
diff --git a/pre_push.py b/pre_push.py
index 3be3c1937..49d13522f 100755
--- a/pre_push.py
+++ b/pre_push.py
@@ -53,7 +53,9 @@ def run_static():
tmp_dir = mkdtemp()
try:
- success &= do_process(["sphinx-build", "-W", "docs", tmp_dir])
+ success &= do_process(
+ ["sphinx-build", "-W", "--keep-going", "docs", tmp_dir]
+ )
finally:
rmtree(tmp_dir)
|
python-pillow__Pillow-399 | Image opened twice if imagemagick and xv are installed
If ImageMagick and xv are both installed and you call Image.show(), it will open the image twice: once with display and once with xv. This is probably related to the for loop over registered viewers. I'm not sure why the `return 1` isn't working, though.
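In `PIL/ImageShow.py`, the per-viewer `show_file()` does `return 1`, but the base `Viewer.show()` calls `self.show_image(image, **options)` without returning its value, so each viewer's `show()` evaluates to `None` and the module-level `show()` loop falls through to the next registered viewer. A stripped-down, self-contained sketch of that control flow (class names mirror ImageShow, but this is only an illustration):

```python
_viewers = []

class Viewer:
    def show(self, image, **options):
        # The result of show_image() is not returned, so this method
        # always evaluates to None for the caller below.
        self.show_image(image, **options)

    def show_image(self, image, **options):
        print("displaying with", type(self).__name__)
        return 1

class DisplayViewer(Viewer):
    pass

class XVViewer(Viewer):
    pass

def show(image, **options):
    for viewer in _viewers:
        if viewer.show(image, **options):  # None is falsy, so the loop continues
            return 1
    return 0

_viewers.extend([DisplayViewer(), XVViewer()])
show(object())  # prints twice; adding `return` in Viewer.show() stops after the first
```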
| [
{
"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# im.show() drivers\n#\n# History:\n# 2008-04-06 fl Created\n#\n# Copyright (c) Secret Labs AB 2008.\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom __future__ import print_function\n\nfrom PIL import Image\nimport os, sys\n\n_viewers = []\n\ndef register(viewer, order=1):\n try:\n if issubclass(viewer, Viewer):\n viewer = viewer()\n except TypeError:\n pass # raised if viewer wasn't a class\n if order > 0:\n _viewers.append(viewer)\n elif order < 0:\n _viewers.insert(0, viewer)\n\n##\n# Displays a given image.\n#\n# @param image An image object.\n# @param title Optional title. Not all viewers can display the title.\n# @param **options Additional viewer options.\n# @return True if a suitable viewer was found, false otherwise.\n\ndef show(image, title=None, **options):\n for viewer in _viewers:\n if viewer.show(image, title=title, **options):\n return 1\n return 0\n\n##\n# Base class for viewers.\n\nclass Viewer:\n\n # main api\n\n def show(self, image, **options):\n\n # save temporary image to disk\n if image.mode[:4] == \"I;16\":\n # @PIL88 @PIL101\n # \"I;16\" isn't an 'official' mode, but we still want to\n # provide a simple way to show 16-bit images.\n base = \"L\"\n # FIXME: auto-contrast if max() > 255?\n else:\n base = Image.getmodebase(image.mode)\n if base != image.mode and image.mode != \"1\":\n image = image.convert(base)\n\n self.show_image(image, **options)\n\n # hook methods\n\n format = None\n\n def get_format(self, image):\n # return format name, or None to save as PGM/PPM\n return self.format\n\n def get_command(self, file, **options):\n raise NotImplementedError\n\n def save_image(self, image):\n # save to temporary file, and return filename\n return image._dump(format=self.get_format(image))\n\n def show_image(self, image, **options):\n # display given image\n return self.show_file(self.save_image(image), **options)\n\n def show_file(self, file, **options):\n # display given file\n os.system(self.get_command(file, **options))\n return 1\n\n# --------------------------------------------------------------------\n\nif sys.platform == \"win32\":\n\n class WindowsViewer(Viewer):\n format = \"BMP\"\n def get_command(self, file, **options):\n return (\"start /wait %s && ping -n 2 127.0.0.1 >NUL \"\n \"&& del /f %s\" % (file, file))\n\n register(WindowsViewer)\n\nelif sys.platform == \"darwin\":\n\n class MacViewer(Viewer):\n format = \"BMP\"\n def get_command(self, file, **options):\n # on darwin open returns immediately resulting in the temp\n # file removal while app is opening\n command = \"open -a /Applications/Preview.app\"\n command = \"(%s %s; sleep 20; rm -f %s)&\" % (command, file, file)\n return command\n\n register(MacViewer)\n\nelse:\n\n # unixoids\n\n def which(executable):\n path = os.environ.get(\"PATH\")\n if not path:\n return None\n for dirname in path.split(os.pathsep):\n filename = os.path.join(dirname, executable)\n if os.path.isfile(filename):\n # FIXME: make sure it's executable\n return filename\n return None\n\n class UnixViewer(Viewer):\n def show_file(self, file, **options):\n command, executable = self.get_command_ex(file, **options)\n command = \"(%s %s; rm -f %s)&\" % (command, file, file)\n os.system(command)\n return 1\n\n # implementations\n\n class DisplayViewer(UnixViewer):\n def get_command_ex(self, file, **options):\n command = executable = \"display\"\n return command, executable\n\n if which(\"display\"):\n register(DisplayViewer)\n\n class 
XVViewer(UnixViewer):\n def get_command_ex(self, file, title=None, **options):\n # note: xv is pretty outdated. most modern systems have\n # imagemagick's display command instead.\n command = executable = \"xv\"\n if title:\n # FIXME: do full escaping\n command = command + \" -name \\\"%s\\\"\" % title\n return command, executable\n\n if which(\"xv\"):\n register(XVViewer)\n\nif __name__ == \"__main__\":\n # usage: python ImageShow.py imagefile [title]\n print(show(Image.open(sys.argv[1]), *sys.argv[2:]))\n",
"path": "PIL/ImageShow.py"
}
] | [
{
"content": "#\n# The Python Imaging Library.\n# $Id$\n#\n# im.show() drivers\n#\n# History:\n# 2008-04-06 fl Created\n#\n# Copyright (c) Secret Labs AB 2008.\n#\n# See the README file for information on usage and redistribution.\n#\n\nfrom __future__ import print_function\n\nfrom PIL import Image\nimport os, sys\n\n_viewers = []\n\ndef register(viewer, order=1):\n try:\n if issubclass(viewer, Viewer):\n viewer = viewer()\n except TypeError:\n pass # raised if viewer wasn't a class\n if order > 0:\n _viewers.append(viewer)\n elif order < 0:\n _viewers.insert(0, viewer)\n\n##\n# Displays a given image.\n#\n# @param image An image object.\n# @param title Optional title. Not all viewers can display the title.\n# @param **options Additional viewer options.\n# @return True if a suitable viewer was found, false otherwise.\n\ndef show(image, title=None, **options):\n for viewer in _viewers:\n if viewer.show(image, title=title, **options):\n return 1\n return 0\n\n##\n# Base class for viewers.\n\nclass Viewer:\n\n # main api\n\n def show(self, image, **options):\n\n # save temporary image to disk\n if image.mode[:4] == \"I;16\":\n # @PIL88 @PIL101\n # \"I;16\" isn't an 'official' mode, but we still want to\n # provide a simple way to show 16-bit images.\n base = \"L\"\n # FIXME: auto-contrast if max() > 255?\n else:\n base = Image.getmodebase(image.mode)\n if base != image.mode and image.mode != \"1\":\n image = image.convert(base)\n\n return self.show_image(image, **options)\n\n # hook methods\n\n format = None\n\n def get_format(self, image):\n # return format name, or None to save as PGM/PPM\n return self.format\n\n def get_command(self, file, **options):\n raise NotImplementedError\n\n def save_image(self, image):\n # save to temporary file, and return filename\n return image._dump(format=self.get_format(image))\n\n def show_image(self, image, **options):\n # display given image\n return self.show_file(self.save_image(image), **options)\n\n def show_file(self, file, **options):\n # display given file\n os.system(self.get_command(file, **options))\n return 1\n\n# --------------------------------------------------------------------\n\nif sys.platform == \"win32\":\n\n class WindowsViewer(Viewer):\n format = \"BMP\"\n def get_command(self, file, **options):\n return (\"start /wait %s && ping -n 2 127.0.0.1 >NUL \"\n \"&& del /f %s\" % (file, file))\n\n register(WindowsViewer)\n\nelif sys.platform == \"darwin\":\n\n class MacViewer(Viewer):\n format = \"BMP\"\n def get_command(self, file, **options):\n # on darwin open returns immediately resulting in the temp\n # file removal while app is opening\n command = \"open -a /Applications/Preview.app\"\n command = \"(%s %s; sleep 20; rm -f %s)&\" % (command, file, file)\n return command\n\n register(MacViewer)\n\nelse:\n\n # unixoids\n\n def which(executable):\n path = os.environ.get(\"PATH\")\n if not path:\n return None\n for dirname in path.split(os.pathsep):\n filename = os.path.join(dirname, executable)\n if os.path.isfile(filename):\n # FIXME: make sure it's executable\n return filename\n return None\n\n class UnixViewer(Viewer):\n def show_file(self, file, **options):\n command, executable = self.get_command_ex(file, **options)\n command = \"(%s %s; rm -f %s)&\" % (command, file, file)\n os.system(command)\n return 1\n\n # implementations\n\n class DisplayViewer(UnixViewer):\n def get_command_ex(self, file, **options):\n command = executable = \"display\"\n return command, executable\n\n if which(\"display\"):\n register(DisplayViewer)\n\n class 
XVViewer(UnixViewer):\n def get_command_ex(self, file, title=None, **options):\n # note: xv is pretty outdated. most modern systems have\n # imagemagick's display command instead.\n command = executable = \"xv\"\n if title:\n # FIXME: do full escaping\n command = command + \" -name \\\"%s\\\"\" % title\n return command, executable\n\n if which(\"xv\"):\n register(XVViewer)\n\nif __name__ == \"__main__\":\n # usage: python ImageShow.py imagefile [title]\n print(show(Image.open(sys.argv[1]), *sys.argv[2:]))\n",
"path": "PIL/ImageShow.py"
}
] | diff --git a/PIL/ImageShow.py b/PIL/ImageShow.py
index 7e3d63ba3cd..78bc210f3d6 100644
--- a/PIL/ImageShow.py
+++ b/PIL/ImageShow.py
@@ -65,7 +65,7 @@ def show(self, image, **options):
if base != image.mode and image.mode != "1":
image = image.convert(base)
- self.show_image(image, **options)
+ return self.show_image(image, **options)
# hook methods
|
nilearn__nilearn-2792 | `FirstLevelModel._get_voxelwise_model_attribute` only returns first design matrix's attribute
The FirstLevelModel attributes which use `_get_voxelwise_model_attribute()` only return the img for the first design matrix, rather than all of the design matrices' associated imgs.
Nilearn version: ~0.7.1 (`master` at c4839dd)
### Expected behavior
Accessing one of the voxelwise attributes which rely on `FirstLevelModel._get_voxelwise_model_attribute()`, such as `FirstLevelModel.residuals`, `FirstLevelModel.predicted`, or `FirstLevelModel.r_square`, should return a list of Nifti1Image objects with the same length as `FirstLevelModel.design_matrices_`.
### Actual behavior
The attributes are lists with only one item.
### The associated code
https://github.com/nilearn/nilearn/blob/c4839ddfe68ddf15775def1fc0ce9ea23544a527/nilearn/glm/first_level/first_level.py#L668-L686
### The solution
Unindenting line 686 should fix it, I think. There should also be at least one test to make sure that the length of the attribute list is the same as the length of `model.design_matrices_`.
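A sketch of the kind of check that last sentence describes; the helper and its argument are illustrative rather than part of nilearn's test suite, and assume a model fitted on more than one run with `minimize_memory=False` so the voxelwise attributes are populated:

```python
def check_voxelwise_attribute_lengths(model):
    # `model` is assumed to be a FirstLevelModel fitted on several runs
    # with minimize_memory=False.
    n_runs = len(model.design_matrices_)
    for attribute in ("residuals", "predicted", "r_square"):
        assert len(getattr(model, attribute)) == n_runs, attribute
```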
| [
{
"content": "\"\"\"\nThis module presents an interface to use the glm implemented in\nnistats.regression.\n\nIt contains the GLM and contrast classes that are meant to be the main objects\nof fMRI data analyses.\n\nAuthor: Bertrand Thirion, Martin Perez-Guevara, 2016\n\n\"\"\"\nimport glob\nimport json\nimport os\nimport sys\nimport time\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom joblib import Memory, Parallel, delayed\nfrom nibabel import Nifti1Image\nfrom nibabel.onetime import auto_attr\nfrom sklearn.base import clone\n\nfrom nilearn._utils.glm import (_check_events_file_uses_tab_separators,\n _check_run_tables, get_bids_files,\n parse_bids_filename)\nfrom nilearn._utils.niimg_conversions import check_niimg\nfrom nilearn.glm.contrasts import (_compute_fixed_effect_contrast,\n expression_to_contrast_vector)\nfrom nilearn.glm.first_level.design_matrix import \\\n make_first_level_design_matrix\nfrom nilearn.image import get_data\nfrom nilearn.glm.regression import (ARModel, OLSModel, RegressionResults,\n SimpleRegressionResults)\nfrom nilearn.glm._base import BaseGLM\n\n\ndef mean_scaling(Y, axis=0):\n \"\"\"Scaling of the data to have percent of baseline change along the\n specified axis\n\n Parameters\n ----------\n Y : array of shape (n_time_points, n_voxels)\n The input data.\n\n axis : int, optional\n Axis along which the scaling mean should be calculated. Default=0.\n\n Returns\n -------\n Y : array of shape (n_time_points, n_voxels),\n The data after mean-scaling, de-meaning and multiplication by 100.\n\n mean : array of shape (n_voxels,)\n The data mean.\n\n \"\"\"\n mean = Y.mean(axis=axis)\n if (mean == 0).any():\n warn('Mean values of 0 observed.'\n 'The data have probably been centered.'\n 'Scaling might not work as expected')\n mean = np.maximum(mean, 1)\n Y = 100 * (Y / mean - 1)\n return Y, mean\n\n\ndef _ar_model_fit(X, val, Y):\n \"\"\"Wrapper for fit method of ARModel to allow joblib parallelization\"\"\"\n return ARModel(X, val).fit(Y)\n\n\ndef run_glm(Y, X, noise_model='ar1', bins=100, n_jobs=1, verbose=0):\n \"\"\" GLM fit for an fMRI data matrix\n\n Parameters\n ----------\n Y : array of shape (n_time_points, n_voxels)\n The fMRI data.\n\n X : array of shape (n_time_points, n_regressors)\n The design matrix.\n\n noise_model : {'ar1', 'ols'}, optional\n The temporal variance model. Default='ar1'.\n\n bins : int, optional\n Maximum number of discrete bins for the AR(1) coef histogram.\n Default=100.\n\n n_jobs : int, optional\n The number of CPUs to use to do the computation. -1 means\n 'all CPUs'. Default=1.\n\n verbose : int, optional\n The verbosity level. Defaut=0.\n\n Returns\n -------\n labels : array of shape (n_voxels,),\n A map of values on voxels used to identify the corresponding model.\n\n results : dict,\n Keys correspond to the different labels values\n values are RegressionResults instances corresponding to the voxels.\n\n \"\"\"\n acceptable_noise_models = ['ar1', 'ols']\n if noise_model not in acceptable_noise_models:\n raise ValueError(\n \"Acceptable noise models are {0}. 
You provided \"\n \"'noise_model={1}'\".format(acceptable_noise_models,\n noise_model)\n )\n if Y.shape[0] != X.shape[0]:\n raise ValueError('The number of rows of Y '\n 'should match the number of rows of X.'\n ' You provided X with shape {0} '\n 'and Y with shape {1}'.\n format(X.shape, Y.shape))\n\n # Create the model\n ols_result = OLSModel(X).fit(Y)\n\n if noise_model == 'ar1':\n # compute and discretize the AR1 coefs\n ar1 = (\n (ols_result.residuals[1:]\n * ols_result.residuals[:-1]).sum(axis=0)\n / (ols_result.residuals ** 2).sum(axis=0)\n )\n del ols_result\n ar1 = (ar1 * bins).astype(np.int) * 1. / bins\n # Fit the AR model acccording to current AR(1) estimates\n results = {}\n labels = ar1\n # Parallelize by creating a job per ARModel\n vals = np.unique(ar1)\n ar_result = Parallel(n_jobs=n_jobs, verbose=verbose)(\n delayed(_ar_model_fit)(X, val, Y[:, labels == val])\n for val in vals)\n for val, result in zip(vals, ar_result):\n results[val] = result\n del vals\n del ar_result\n\n else:\n labels = np.zeros(Y.shape[1])\n results = {0.0: ols_result}\n\n return labels, results\n\n\nclass FirstLevelModel(BaseGLM):\n \"\"\" Implementation of the General Linear Model\n for single session fMRI data.\n\n Parameters\n ----------\n t_r : float\n This parameter indicates repetition times of the experimental runs.\n In seconds. It is necessary to correctly consider times in the design\n matrix. This parameter is also passed to nilearn.signal.clean.\n Please see the related documentation for details.\n\n slice_time_ref : float, optional\n This parameter indicates the time of the reference slice used in the\n slice timing preprocessing step of the experimental runs. It is\n expressed as a percentage of the t_r (time repetition), so it can have\n values between 0. and 1. Default=0.\n\n hrf_model : {'glover', 'spm', 'spm + derivative', 'spm + derivative + dispersion',\n 'glover + derivative', 'glover + derivative + dispersion', 'fir', None}, optional\n String that specifies the hemodynamic response function.\n Default='glover'.\n\n drift_model : string, optional\n This parameter specifies the desired drift model for the design\n matrices. It can be 'polynomial', 'cosine' or None.\n Default='cosine'.\n\n high_pass : float, optional\n This parameter specifies the cut frequency of the high-pass filter in\n Hz for the design matrices. Used only if drift_model is 'cosine'.\n Default=0.01.\n\n drift_order : int, optional\n This parameter specifices the order of the drift model (in case it is\n polynomial) for the design matrices. Default=1.\n\n fir_delays : array of shape(n_onsets) or list, optional\n In case of FIR design, yields the array of delays used in the FIR\n model, in scans. Default=[0].\n\n min_onset : float, optional\n This parameter specifies the minimal onset relative to the design\n (in seconds). Events that start before (slice_time_ref * t_r +\n min_onset) are not considered. Default=-24.\n\n mask_img : Niimg-like, NiftiMasker object or False, optional\n Mask to be used on data. If an instance of masker is passed,\n then its mask will be used. If no mask is given,\n it will be computed automatically by a NiftiMasker with default\n parameters. 
If False is given then the data will not be masked.\n\n target_affine : 3x3 or 4x4 matrix, optional\n This parameter is passed to nilearn.image.resample_img.\n Please see the related documentation for details.\n\n target_shape : 3-tuple of integers, optional\n This parameter is passed to nilearn.image.resample_img.\n Please see the related documentation for details.\n\n smoothing_fwhm : float, optional\n If smoothing_fwhm is not None, it gives the size in millimeters of\n the spatial smoothing to apply to the signal.\n\n memory : string, optional\n Path to the directory used to cache the masking process and the glm\n fit. By default, no caching is done.\n Creates instance of joblib.Memory.\n\n memory_level : integer, optional\n Rough estimator of the amount of memory used by caching. Higher value\n means more memory for caching.\n\n standardize : boolean, optional\n If standardize is True, the time-series are centered and normed:\n their variance is put to 1 in the time dimension. Default=False.\n\n signal_scaling : False, int or (int, int), optional\n If not False, fMRI signals are\n scaled to the mean value of scaling_axis given,\n which can be 0, 1 or (0, 1).\n 0 refers to mean scaling each voxel with respect to time,\n 1 refers to mean scaling each time point with respect to all voxels &\n (0, 1) refers to scaling with respect to voxels and time,\n which is known as grand mean scaling.\n Incompatible with standardize (standardize=False is enforced when\n signal_scaling is not False).\n Default=0.\n\n noise_model : {'ar1', 'ols'}, optional\n The temporal variance model. Default='ar1'.\n\n verbose : integer, optional\n Indicate the level of verbosity. By default, nothing is printed.\n If 0 prints nothing. If 1 prints progress by computation of\n each run. If 2 prints timing details of masker and GLM. If 3\n prints masker computation details. Default=0.\n\n n_jobs : integer, optional\n The number of CPUs to use to do the computation. -1 means\n 'all CPUs', -2 'all CPUs but one', and so on.\n Default=1.\n\n minimize_memory : boolean, optional\n Gets rid of some variables on the model fit results that are not\n necessary for contrast computation and would only be useful for\n further inspection of model details. This has an important impact\n on memory consumption. 
Default=True.\n\n subject_label : string, optional\n This id will be used to identify a `FirstLevelModel` when passed to\n a `SecondLevelModel` object.\n\n Attributes\n ----------\n labels_ : array of shape (n_voxels,),\n a map of values on voxels used to identify the corresponding model\n\n results_ : dict,\n with keys corresponding to the different labels values.\n Values are SimpleRegressionResults corresponding to the voxels,\n if minimize_memory is True,\n RegressionResults if minimize_memory is False\n\n Notes\n -----\n This class is experimental.\n It may change in any future release of Nilearn.\n\n \"\"\"\n def __init__(self, t_r=None, slice_time_ref=0., hrf_model='glover',\n drift_model='cosine', high_pass=.01, drift_order=1,\n fir_delays=[0], min_onset=-24, mask_img=None,\n target_affine=None, target_shape=None, smoothing_fwhm=None,\n memory=Memory(None), memory_level=1, standardize=False,\n signal_scaling=0, noise_model='ar1', verbose=0, n_jobs=1,\n minimize_memory=True, subject_label=None):\n # design matrix parameters\n self.t_r = t_r\n self.slice_time_ref = slice_time_ref\n self.hrf_model = hrf_model\n self.drift_model = drift_model\n self.high_pass = high_pass\n self.drift_order = drift_order\n self.fir_delays = fir_delays\n self.min_onset = min_onset\n # glm parameters\n self.mask_img = mask_img\n self.target_affine = target_affine\n self.target_shape = target_shape\n self.smoothing_fwhm = smoothing_fwhm\n if isinstance(memory, str):\n self.memory = Memory(memory)\n else:\n self.memory = memory\n self.memory_level = memory_level\n self.standardize = standardize\n if signal_scaling is False:\n self.signal_scaling = signal_scaling\n elif signal_scaling in [0, 1, (0, 1)]:\n self.scaling_axis = signal_scaling\n self.signal_scaling = True\n self.standardize = False\n else:\n raise ValueError('signal_scaling must be \"False\", \"0\", \"1\"'\n ' or \"(0, 1)\"')\n\n self.noise_model = noise_model\n self.verbose = verbose\n self.n_jobs = n_jobs\n self.minimize_memory = minimize_memory\n # attributes\n self.labels_ = None\n self.results_ = None\n self.subject_label = subject_label\n\n def fit(self, run_imgs, events=None, confounds=None,\n design_matrices=None):\n \"\"\"Fit the GLM\n\n For each run:\n 1. create design matrix X\n 2. do a masker job: fMRI_data -> Y\n 3. fit regression to (Y, X)\n\n Parameters\n ----------\n run_imgs : Niimg-like object or list of Niimg-like objects,\n Data on which the GLM will be fitted. If this is a list,\n the affine is considered the same for all.\n\n events : pandas Dataframe or string or list of pandas DataFrames or strings, optional\n fMRI events used to build design matrices. One events object\n expected per run_img. Ignored in case designs is not None.\n If string, then a path to a csv file is expected.\n\n confounds : pandas Dataframe, numpy array or string or\n list of pandas DataFrames, numpy arays or strings, optional\n Each column in a DataFrame corresponds to a confound variable\n to be included in the regression model of the respective run_img.\n The number of rows must match the number of volumes in the\n respective run_img. Ignored in case designs is not None.\n If string, then a path to a csv file is expected.\n\n design_matrices : pandas DataFrame or list of pandas DataFrames, optional\n Design matrices that will be used to fit the GLM. 
If given it\n takes precedence over events and confounds.\n\n \"\"\"\n # Initialize masker_ to None such that attribute exists\n self.masker_ = None\n\n # Raise a warning if both design_matrices and confounds are provided\n if design_matrices is not None and (confounds is not None or events is not None):\n warn('If design matrices are supplied, confounds and events will be ignored.')\n # Local import to prevent circular imports\n from nilearn.input_data import NiftiMasker # noqa\n\n # Check arguments\n # Check imgs type\n if events is not None:\n _check_events_file_uses_tab_separators(events_files=events)\n if not isinstance(run_imgs, (list, tuple)):\n run_imgs = [run_imgs]\n if design_matrices is None:\n if events is None:\n raise ValueError('events or design matrices must be provided')\n if self.t_r is None:\n raise ValueError('t_r not given to FirstLevelModel object'\n ' to compute design from events')\n else:\n design_matrices = _check_run_tables(run_imgs, design_matrices,\n 'design_matrices')\n # Check that number of events and confound files match number of runs\n # Also check that events and confound files can be loaded as DataFrame\n if events is not None:\n events = _check_run_tables(run_imgs, events, 'events')\n if confounds is not None:\n confounds = _check_run_tables(run_imgs, confounds, 'confounds')\n\n # Learn the mask\n if self.mask_img is False:\n # We create a dummy mask to preserve functionality of api\n ref_img = check_niimg(run_imgs[0])\n self.mask_img = Nifti1Image(np.ones(ref_img.shape[:3]),\n ref_img.affine)\n if not isinstance(self.mask_img, NiftiMasker):\n self.masker_ = NiftiMasker(mask_img=self.mask_img,\n smoothing_fwhm=self.smoothing_fwhm,\n target_affine=self.target_affine,\n standardize=self.standardize,\n mask_strategy='epi',\n t_r=self.t_r,\n memory=self.memory,\n verbose=max(0, self.verbose - 2),\n target_shape=self.target_shape,\n memory_level=self.memory_level\n )\n self.masker_.fit(run_imgs[0])\n else:\n # Make sure masker has been fitted otherwise no attribute mask_img_\n self.mask_img._check_fitted()\n if self.mask_img.mask_img_ is None and self.masker_ is None:\n self.masker_ = clone(self.mask_img)\n for param_name in ['target_affine', 'target_shape',\n 'smoothing_fwhm', 't_r', 'memory',\n 'memory_level']:\n our_param = getattr(self, param_name)\n if our_param is None:\n continue\n if getattr(self.masker_, param_name) is not None:\n warn('Parameter %s of the masker'\n ' overriden' % param_name)\n setattr(self.masker_, param_name, our_param)\n self.masker_.fit(run_imgs[0])\n else:\n self.masker_ = self.mask_img\n\n # For each run fit the model and keep only the regression results.\n self.labels_, self.results_, self.design_matrices_ = [], [], []\n n_runs = len(run_imgs)\n t0 = time.time()\n for run_idx, run_img in enumerate(run_imgs):\n # Report progress\n if self.verbose > 0:\n percent = float(run_idx) / n_runs\n percent = round(percent * 100, 2)\n dt = time.time() - t0\n # We use a max to avoid a division by zero\n if run_idx == 0:\n remaining = 'go take a coffee, a big one'\n else:\n remaining = (100. 
- percent) / max(0.01, percent) * dt\n remaining = '%i seconds remaining' % remaining\n\n sys.stderr.write(\n \"Computing run %d out of %d runs (%s)\\n\"\n % (run_idx + 1, n_runs, remaining))\n\n # Build the experimental design for the glm\n run_img = check_niimg(run_img, ensure_ndim=4)\n if design_matrices is None:\n n_scans = get_data(run_img).shape[3]\n if confounds is not None:\n confounds_matrix = confounds[run_idx].values\n if confounds_matrix.shape[0] != n_scans:\n raise ValueError('Rows in confounds does not match'\n 'n_scans in run_img at index %d'\n % (run_idx,))\n confounds_names = confounds[run_idx].columns.tolist()\n else:\n confounds_matrix = None\n confounds_names = None\n start_time = self.slice_time_ref * self.t_r\n end_time = (n_scans - 1 + self.slice_time_ref) * self.t_r\n frame_times = np.linspace(start_time, end_time, n_scans)\n design = make_first_level_design_matrix(frame_times,\n events[run_idx],\n self.hrf_model,\n self.drift_model,\n self.high_pass,\n self.drift_order,\n self.fir_delays,\n confounds_matrix,\n confounds_names,\n self.min_onset\n )\n else:\n design = design_matrices[run_idx]\n self.design_matrices_.append(design)\n\n # Mask and prepare data for GLM\n if self.verbose > 1:\n t_masking = time.time()\n sys.stderr.write('Starting masker computation \\r')\n\n Y = self.masker_.transform(run_img)\n del run_img # Delete unmasked image to save memory\n\n if self.verbose > 1:\n t_masking = time.time() - t_masking\n sys.stderr.write('Masker took %d seconds \\n'\n % t_masking)\n\n if self.signal_scaling:\n Y, _ = mean_scaling(Y, self.scaling_axis)\n if self.memory:\n mem_glm = self.memory.cache(run_glm, ignore=['n_jobs'])\n else:\n mem_glm = run_glm\n\n # compute GLM\n if self.verbose > 1:\n t_glm = time.time()\n sys.stderr.write('Performing GLM computation\\r')\n labels, results = mem_glm(Y, design.values,\n noise_model=self.noise_model,\n bins=100, n_jobs=self.n_jobs)\n if self.verbose > 1:\n t_glm = time.time() - t_glm\n sys.stderr.write('GLM took %d seconds \\n' % t_glm)\n\n self.labels_.append(labels)\n # We save memory if inspecting model details is not necessary\n if self.minimize_memory:\n for key in results:\n results[key] = SimpleRegressionResults(results[key])\n self.results_.append(results)\n del Y\n\n # Report progress\n if self.verbose > 0:\n sys.stderr.write(\"\\nComputation of %d runs done in %i seconds\\n\\n\"\n % (n_runs, time.time() - t0))\n return self\n\n def compute_contrast(self, contrast_def, stat_type=None,\n output_type='z_score'):\n \"\"\"Generate different outputs corresponding to\n the contrasts provided e.g. z_map, t_map, effects and variance.\n In multi-session case, outputs the fixed effects map.\n\n Parameters\n ----------\n contrast_def : str or array of shape (n_col) or list of (string or\n array of shape (n_col))\n\n where ``n_col`` is the number of columns of the design matrix,\n (one array per run). If only one array is provided when there\n are several runs, it will be assumed that the same contrast is\n desired for all runs. The string can be a formula compatible with\n `pandas.DataFrame.eval`. Basically one can use the name of the\n conditions as they appear in the design matrix of the fitted model\n combined with operators +- and combined with numbers\n with operators +-`*`/.\n\n stat_type : {'t', 'F'}, optional\n type of the contrast\n\n output_type : str, optional\n Type of the output map. 
Can be 'z_score', 'stat', 'p_value',\n 'effect_size', 'effect_variance' or 'all'.\n Default='z-score'.\n\n Returns\n -------\n output : Nifti1Image or dict\n The desired output image(s). If ``output_type == 'all'``, then\n the output is a dictionary of images, keyed by the type of image.\n\n \"\"\"\n if self.labels_ is None or self.results_ is None:\n raise ValueError('The model has not been fit yet')\n\n if isinstance(contrast_def, (np.ndarray, str)):\n con_vals = [contrast_def]\n elif isinstance(contrast_def, (list, tuple)):\n con_vals = contrast_def\n else:\n raise ValueError('contrast_def must be an array or str or list of'\n ' (array or str)')\n\n n_runs = len(self.labels_)\n n_contrasts = len(con_vals)\n if n_contrasts == 1 and n_runs > 1:\n warn('One contrast given, assuming it for all %d runs' % n_runs)\n con_vals = con_vals * n_runs\n elif n_contrasts != n_runs:\n raise ValueError('%n contrasts given, while there are %n runs' %\n (n_contrasts, n_runs))\n\n # Translate formulas to vectors\n for cidx, (con, design_mat) in enumerate(zip(con_vals,\n self.design_matrices_)\n ):\n design_columns = design_mat.columns.tolist()\n if isinstance(con, str):\n con_vals[cidx] = expression_to_contrast_vector(\n con, design_columns)\n\n valid_types = ['z_score', 'stat', 'p_value', 'effect_size',\n 'effect_variance']\n valid_types.append('all') # ensuring 'all' is the final entry.\n if output_type not in valid_types:\n raise ValueError(\n 'output_type must be one of {}'.format(valid_types))\n contrast = _compute_fixed_effect_contrast(self.labels_, self.results_,\n con_vals, stat_type)\n output_types = (valid_types[:-1]\n if output_type == 'all' else [output_type])\n outputs = {}\n for output_type_ in output_types:\n estimate_ = getattr(contrast, output_type_)()\n # Prepare the returned images\n output = self.masker_.inverse_transform(estimate_)\n contrast_name = str(con_vals)\n output.header['descrip'] = (\n '%s of contrast %s' % (output_type_, contrast_name))\n outputs[output_type_] = output\n\n return outputs if output_type == 'all' else output\n\n def _get_voxelwise_model_attribute(self, attribute,\n result_as_time_series):\n \"\"\"Transform RegressionResults instances within a dictionary\n (whose keys represent the autoregressive coefficient under the 'ar1'\n noise model or only 0.0 under 'ols' noise_model and values are the\n RegressionResults instances) into input nifti space.\n\n Parameters\n ----------\n attribute : str\n an attribute of a RegressionResults instance.\n possible values include: resid, norm_resid, predicted,\n SSE, r_square, MSE.\n\n result_as_time_series : bool\n whether the RegressionResult attribute has a value\n per timepoint of the input nifti image.\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n # check if valid attribute is being accessed.\n all_attributes = dict(vars(RegressionResults)).keys()\n possible_attributes = [prop\n for prop in all_attributes\n if '__' not in prop\n ]\n if attribute not in possible_attributes:\n msg = (\"attribute must be one of: \"\n \"{attr}\".format(attr=possible_attributes)\n )\n raise ValueError(msg)\n\n if self.minimize_memory:\n raise ValueError(\n 'To access voxelwise attributes like '\n 'R-squared, residuals, and predictions, '\n 'the `FirstLevelModel`-object needs to store '\n 'there attributes. 
'\n 'To do so, set `minimize_memory` to `False` '\n 'when initializing the `FirstLevelModel`-object.')\n\n if self.labels_ is None or self.results_ is None:\n raise ValueError('The model has not been fit yet')\n\n output = []\n\n for design_matrix, labels, results in zip(self.design_matrices_,\n self.labels_,\n self.results_\n ):\n if result_as_time_series:\n voxelwise_attribute = np.zeros((design_matrix.shape[0],\n len(labels))\n )\n else:\n voxelwise_attribute = np.zeros((1, len(labels)))\n\n for label_ in results:\n label_mask = labels == label_\n voxelwise_attribute[:, label_mask] = getattr(results[label_],\n attribute)\n\n output.append(self.masker_.inverse_transform(voxelwise_attribute))\n\n return output\n\n @auto_attr\n def residuals(self):\n \"\"\"Transform voxelwise residuals to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('resid',\n result_as_time_series=True)\n\n @auto_attr\n def predicted(self):\n \"\"\"Transform voxelwise predicted values to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('predicted',\n result_as_time_series=True)\n\n @auto_attr\n def r_square(self):\n \"\"\"Transform voxelwise r-squared values to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('r_square',\n result_as_time_series=False\n )\n\n\ndef first_level_from_bids(dataset_path, task_label, space_label=None,\n img_filters=None, t_r=None, slice_time_ref=0.,\n hrf_model='glover', drift_model='cosine',\n high_pass=.01, drift_order=1, fir_delays=[0],\n min_onset=-24, mask_img=None,\n target_affine=None, target_shape=None,\n smoothing_fwhm=None, memory=Memory(None),\n memory_level=1, standardize=False,\n signal_scaling=0, noise_model='ar1',\n verbose=0, n_jobs=1,\n minimize_memory=True,\n derivatives_folder='derivatives'):\n \"\"\"Create FirstLevelModel objects and fit arguments from a BIDS dataset.\n\n It t_r is not specified this function will attempt to load it from a\n bold.json file alongside slice_time_ref. Otherwise t_r and slice_time_ref\n are taken as given.\n\n Parameters\n ----------\n dataset_path : str\n Directory of the highest level folder of the BIDS dataset. Should\n contain subject folders and a derivatives folder.\n\n task_label : str\n Task_label as specified in the file names like _task-<task_label>_.\n\n space_label : str, optional\n Specifies the space label of the preprocessed bold.nii images.\n As they are specified in the file names like _space-<space_label>_.\n\n img_filters : list of tuples (str, str), optional\n Filters are of the form (field, label). Only one filter per field\n allowed. A file that does not match a filter will be discarded.\n Possible filters are 'acq', 'ce', 'dir', 'rec', 'run', 'echo', 'res',\n 'den', and 'desc'. Filter examples would be ('desc', 'preproc'),\n ('dir', 'pa') and ('run', '10').\n\n derivatives_folder : str, optional\n derivatives and app folder path containing preprocessed files.\n Like \"derivatives/FMRIPREP\". Default=\"derivatives\".\n\n All other parameters correspond to a `FirstLevelModel` object, which\n contains their documentation. 
The subject label of the model will be\n determined directly from the BIDS dataset.\n\n Returns\n -------\n models : list of `FirstLevelModel` objects\n Each FirstLevelModel object corresponds to a subject. All runs from\n different sessions are considered together for the same subject to run\n a fixed effects analysis on them.\n\n models_run_imgs : list of list of Niimg-like objects,\n Items for the FirstLevelModel fit function of their respective model.\n\n models_events : list of list of pandas DataFrames,\n Items for the FirstLevelModel fit function of their respective model.\n\n models_confounds : list of list of pandas DataFrames or None,\n Items for the FirstLevelModel fit function of their respective model.\n\n \"\"\"\n # check arguments\n img_filters = img_filters if img_filters else []\n if not isinstance(dataset_path, str):\n raise TypeError(\n 'dataset_path must be a string, instead %s was given' %\n type(task_label))\n if not os.path.exists(dataset_path):\n raise ValueError('given path do not exist: %s' % dataset_path)\n if not isinstance(task_label, str):\n raise TypeError('task_label must be a string, instead %s was given' %\n type(task_label))\n if space_label is not None and not isinstance(space_label, str):\n raise TypeError('space_label must be a string, instead %s was given' %\n type(space_label))\n if not isinstance(img_filters, list):\n raise TypeError('img_filters must be a list, instead %s was given' %\n type(img_filters))\n for img_filter in img_filters:\n if (not isinstance(img_filter[0], str)\n or not isinstance(img_filter[1], str)):\n raise TypeError('filters in img filters must be (str, str), '\n 'instead %s was given' % type(img_filter))\n if img_filter[0] not in ['acq', 'ce', 'dir', 'rec', 'run',\n 'echo', 'desc', 'res', 'den',\n ]:\n raise ValueError(\n \"field %s is not a possible filter. Only \"\n \"'acq', 'ce', 'dir', 'rec', 'run', 'echo', \"\n \"'desc', 'res', 'den' are allowed.\" % img_filter[0])\n\n # check derivatives folder is present\n derivatives_path = os.path.join(dataset_path, derivatives_folder)\n if not os.path.exists(derivatives_path):\n raise ValueError('derivatives folder does not exist in given dataset')\n\n # Get acq specs for models. RepetitionTime and SliceTimingReference.\n # Throw warning if no bold.json is found\n if t_r is not None:\n warn('RepetitionTime given in model_init as %d' % t_r)\n warn('slice_time_ref is %d percent of the repetition '\n 'time' % slice_time_ref)\n else:\n filters = [('task', task_label)]\n for img_filter in img_filters:\n if img_filter[0] in ['acq', 'rec', 'run']:\n filters.append(img_filter)\n\n img_specs = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='bold', file_type='json',\n filters=filters)\n # If we dont find the parameter information in the derivatives folder\n # we try to search in the raw data folder\n if not img_specs:\n img_specs = get_bids_files(dataset_path, modality_folder='func',\n file_tag='bold', file_type='json',\n filters=filters)\n if not img_specs:\n warn('No bold.json found in derivatives folder or '\n 'in dataset folder. t_r can not be inferred and will need to'\n ' be set manually in the list of models, otherwise their fit'\n ' will throw an exception')\n else:\n specs = json.load(open(img_specs[0], 'r'))\n if 'RepetitionTime' in specs:\n t_r = float(specs['RepetitionTime'])\n else:\n warn('RepetitionTime not found in file %s. t_r can not be '\n 'inferred and will need to be set manually in the '\n 'list of models. 
Otherwise their fit will throw an '\n ' exception' % img_specs[0])\n if 'SliceTimingRef' in specs:\n slice_time_ref = float(specs['SliceTimingRef'])\n else:\n warn('SliceTimingRef not found in file %s. It will be assumed'\n ' that the slice timing reference is 0.0 percent of the '\n 'repetition time. If it is not the case it will need to '\n 'be set manually in the generated list of models' %\n img_specs[0])\n\n # Infer subjects in dataset\n sub_folders = glob.glob(os.path.join(derivatives_path, 'sub-*/'))\n sub_labels = [os.path.basename(s[:-1]).split('-')[1] for s in sub_folders]\n sub_labels = sorted(list(set(sub_labels)))\n\n # Build fit_kwargs dictionaries to pass to their respective models fit\n # Events and confounds files must match number of imgs (runs)\n models = []\n models_run_imgs = []\n models_events = []\n models_confounds = []\n for sub_label in sub_labels:\n # Create model\n model = FirstLevelModel(\n t_r=t_r, slice_time_ref=slice_time_ref, hrf_model=hrf_model,\n drift_model=drift_model, high_pass=high_pass,\n drift_order=drift_order, fir_delays=fir_delays,\n min_onset=min_onset, mask_img=mask_img,\n target_affine=target_affine, target_shape=target_shape,\n smoothing_fwhm=smoothing_fwhm, memory=memory,\n memory_level=memory_level, standardize=standardize,\n signal_scaling=signal_scaling, noise_model=noise_model,\n verbose=verbose, n_jobs=n_jobs,\n minimize_memory=minimize_memory, subject_label=sub_label)\n models.append(model)\n\n # Get preprocessed imgs\n if space_label is None:\n filters = [('task', task_label)] + img_filters\n else:\n filters = [('task', task_label),\n ('space', space_label)] + img_filters\n imgs = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='bold', file_type='nii*',\n sub_label=sub_label, filters=filters)\n # If there is more than one file for the same (ses, run), likely we\n # have an issue of underspecification of filters.\n run_check_list = []\n # If more than one run is present the run field is mandatory in BIDS\n # as well as the ses field if more than one session is present.\n if len(imgs) > 1:\n for img in imgs:\n img_dict = parse_bids_filename(img)\n if (\n '_ses-' in img_dict['file_basename']\n and '_run-' in img_dict['file_basename']\n ):\n if (img_dict['ses'], img_dict['run']) in run_check_list:\n raise ValueError(\n 'More than one nifti image found '\n 'for the same run %s and session %s. '\n 'Please verify that the '\n 'desc_label and space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' %\n (img_dict['run'], img_dict['ses']))\n else:\n run_check_list.append((img_dict['ses'],\n img_dict['run']))\n\n elif '_ses-' in img_dict['file_basename']:\n if img_dict['ses'] in run_check_list:\n raise ValueError(\n 'More than one nifti image '\n 'found for the same ses %s, while '\n 'no additional run specification present'\n '. Please verify that the desc_label and '\n 'space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' %\n img_dict['ses'])\n else:\n run_check_list.append(img_dict['ses'])\n\n elif '_run-' in img_dict['file_basename']:\n if img_dict['run'] in run_check_list:\n raise ValueError(\n 'More than one nifti image '\n 'found for the same run %s. '\n 'Please verify that the desc_label and '\n 'space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' 
%\n img_dict['run'])\n else:\n run_check_list.append(img_dict['run'])\n models_run_imgs.append(imgs)\n\n # Get events and extra confounds\n filters = [('task', task_label)]\n for img_filter in img_filters:\n if img_filter[0] in ['acq', 'rec', 'run']:\n filters.append(img_filter)\n\n # Get events files\n events = get_bids_files(dataset_path, modality_folder='func',\n file_tag='events', file_type='tsv',\n sub_label=sub_label, filters=filters)\n if events:\n if len(events) != len(imgs):\n raise ValueError('%d events.tsv files found for %d bold '\n 'files. Same number of event files as '\n 'the number of runs is expected' %\n (len(events), len(imgs)))\n events = [pd.read_csv(event, sep='\\t', index_col=None)\n for event in events]\n models_events.append(events)\n else:\n raise ValueError('No events.tsv files found')\n\n # Get confounds. If not found it will be assumed there are none.\n # If there are confounds, they are assumed to be present for all runs.\n confounds = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='desc-confounds*',\n file_type='tsv', sub_label=sub_label,\n filters=filters)\n\n if confounds:\n if len(confounds) != len(imgs):\n raise ValueError('%d confounds.tsv files found for %d bold '\n 'files. Same number of confound files as '\n 'the number of runs is expected' %\n (len(events), len(imgs)))\n confounds = [pd.read_csv(c, sep='\\t', index_col=None)\n for c in confounds]\n models_confounds.append(confounds)\n\n return models, models_run_imgs, models_events, models_confounds\n",
"path": "nilearn/glm/first_level/first_level.py"
}
] | [
{
"content": "\"\"\"\nThis module presents an interface to use the glm implemented in\nnistats.regression.\n\nIt contains the GLM and contrast classes that are meant to be the main objects\nof fMRI data analyses.\n\nAuthor: Bertrand Thirion, Martin Perez-Guevara, 2016\n\n\"\"\"\nimport glob\nimport json\nimport os\nimport sys\nimport time\nfrom warnings import warn\n\nimport numpy as np\nimport pandas as pd\nfrom joblib import Memory, Parallel, delayed\nfrom nibabel import Nifti1Image\nfrom nibabel.onetime import auto_attr\nfrom sklearn.base import clone\n\nfrom nilearn._utils.glm import (_check_events_file_uses_tab_separators,\n _check_run_tables, get_bids_files,\n parse_bids_filename)\nfrom nilearn._utils.niimg_conversions import check_niimg\nfrom nilearn.glm.contrasts import (_compute_fixed_effect_contrast,\n expression_to_contrast_vector)\nfrom nilearn.glm.first_level.design_matrix import \\\n make_first_level_design_matrix\nfrom nilearn.image import get_data\nfrom nilearn.glm.regression import (ARModel, OLSModel, RegressionResults,\n SimpleRegressionResults)\nfrom nilearn.glm._base import BaseGLM\n\n\ndef mean_scaling(Y, axis=0):\n \"\"\"Scaling of the data to have percent of baseline change along the\n specified axis\n\n Parameters\n ----------\n Y : array of shape (n_time_points, n_voxels)\n The input data.\n\n axis : int, optional\n Axis along which the scaling mean should be calculated. Default=0.\n\n Returns\n -------\n Y : array of shape (n_time_points, n_voxels),\n The data after mean-scaling, de-meaning and multiplication by 100.\n\n mean : array of shape (n_voxels,)\n The data mean.\n\n \"\"\"\n mean = Y.mean(axis=axis)\n if (mean == 0).any():\n warn('Mean values of 0 observed.'\n 'The data have probably been centered.'\n 'Scaling might not work as expected')\n mean = np.maximum(mean, 1)\n Y = 100 * (Y / mean - 1)\n return Y, mean\n\n\ndef _ar_model_fit(X, val, Y):\n \"\"\"Wrapper for fit method of ARModel to allow joblib parallelization\"\"\"\n return ARModel(X, val).fit(Y)\n\n\ndef run_glm(Y, X, noise_model='ar1', bins=100, n_jobs=1, verbose=0):\n \"\"\" GLM fit for an fMRI data matrix\n\n Parameters\n ----------\n Y : array of shape (n_time_points, n_voxels)\n The fMRI data.\n\n X : array of shape (n_time_points, n_regressors)\n The design matrix.\n\n noise_model : {'ar1', 'ols'}, optional\n The temporal variance model. Default='ar1'.\n\n bins : int, optional\n Maximum number of discrete bins for the AR(1) coef histogram.\n Default=100.\n\n n_jobs : int, optional\n The number of CPUs to use to do the computation. -1 means\n 'all CPUs'. Default=1.\n\n verbose : int, optional\n The verbosity level. Defaut=0.\n\n Returns\n -------\n labels : array of shape (n_voxels,),\n A map of values on voxels used to identify the corresponding model.\n\n results : dict,\n Keys correspond to the different labels values\n values are RegressionResults instances corresponding to the voxels.\n\n \"\"\"\n acceptable_noise_models = ['ar1', 'ols']\n if noise_model not in acceptable_noise_models:\n raise ValueError(\n \"Acceptable noise models are {0}. 
You provided \"\n \"'noise_model={1}'\".format(acceptable_noise_models,\n noise_model)\n )\n if Y.shape[0] != X.shape[0]:\n raise ValueError('The number of rows of Y '\n 'should match the number of rows of X.'\n ' You provided X with shape {0} '\n 'and Y with shape {1}'.\n format(X.shape, Y.shape))\n\n # Create the model\n ols_result = OLSModel(X).fit(Y)\n\n if noise_model == 'ar1':\n # compute and discretize the AR1 coefs\n ar1 = (\n (ols_result.residuals[1:]\n * ols_result.residuals[:-1]).sum(axis=0)\n / (ols_result.residuals ** 2).sum(axis=0)\n )\n del ols_result\n ar1 = (ar1 * bins).astype(np.int) * 1. / bins\n # Fit the AR model acccording to current AR(1) estimates\n results = {}\n labels = ar1\n # Parallelize by creating a job per ARModel\n vals = np.unique(ar1)\n ar_result = Parallel(n_jobs=n_jobs, verbose=verbose)(\n delayed(_ar_model_fit)(X, val, Y[:, labels == val])\n for val in vals)\n for val, result in zip(vals, ar_result):\n results[val] = result\n del vals\n del ar_result\n\n else:\n labels = np.zeros(Y.shape[1])\n results = {0.0: ols_result}\n\n return labels, results\n\n\nclass FirstLevelModel(BaseGLM):\n \"\"\" Implementation of the General Linear Model\n for single session fMRI data.\n\n Parameters\n ----------\n t_r : float\n This parameter indicates repetition times of the experimental runs.\n In seconds. It is necessary to correctly consider times in the design\n matrix. This parameter is also passed to nilearn.signal.clean.\n Please see the related documentation for details.\n\n slice_time_ref : float, optional\n This parameter indicates the time of the reference slice used in the\n slice timing preprocessing step of the experimental runs. It is\n expressed as a percentage of the t_r (time repetition), so it can have\n values between 0. and 1. Default=0.\n\n hrf_model : {'glover', 'spm', 'spm + derivative', 'spm + derivative + dispersion',\n 'glover + derivative', 'glover + derivative + dispersion', 'fir', None}, optional\n String that specifies the hemodynamic response function.\n Default='glover'.\n\n drift_model : string, optional\n This parameter specifies the desired drift model for the design\n matrices. It can be 'polynomial', 'cosine' or None.\n Default='cosine'.\n\n high_pass : float, optional\n This parameter specifies the cut frequency of the high-pass filter in\n Hz for the design matrices. Used only if drift_model is 'cosine'.\n Default=0.01.\n\n drift_order : int, optional\n This parameter specifices the order of the drift model (in case it is\n polynomial) for the design matrices. Default=1.\n\n fir_delays : array of shape(n_onsets) or list, optional\n In case of FIR design, yields the array of delays used in the FIR\n model, in scans. Default=[0].\n\n min_onset : float, optional\n This parameter specifies the minimal onset relative to the design\n (in seconds). Events that start before (slice_time_ref * t_r +\n min_onset) are not considered. Default=-24.\n\n mask_img : Niimg-like, NiftiMasker object or False, optional\n Mask to be used on data. If an instance of masker is passed,\n then its mask will be used. If no mask is given,\n it will be computed automatically by a NiftiMasker with default\n parameters. 
If False is given then the data will not be masked.\n\n target_affine : 3x3 or 4x4 matrix, optional\n This parameter is passed to nilearn.image.resample_img.\n Please see the related documentation for details.\n\n target_shape : 3-tuple of integers, optional\n This parameter is passed to nilearn.image.resample_img.\n Please see the related documentation for details.\n\n smoothing_fwhm : float, optional\n If smoothing_fwhm is not None, it gives the size in millimeters of\n the spatial smoothing to apply to the signal.\n\n memory : string, optional\n Path to the directory used to cache the masking process and the glm\n fit. By default, no caching is done.\n Creates instance of joblib.Memory.\n\n memory_level : integer, optional\n Rough estimator of the amount of memory used by caching. Higher value\n means more memory for caching.\n\n standardize : boolean, optional\n If standardize is True, the time-series are centered and normed:\n their variance is put to 1 in the time dimension. Default=False.\n\n signal_scaling : False, int or (int, int), optional\n If not False, fMRI signals are\n scaled to the mean value of scaling_axis given,\n which can be 0, 1 or (0, 1).\n 0 refers to mean scaling each voxel with respect to time,\n 1 refers to mean scaling each time point with respect to all voxels &\n (0, 1) refers to scaling with respect to voxels and time,\n which is known as grand mean scaling.\n Incompatible with standardize (standardize=False is enforced when\n signal_scaling is not False).\n Default=0.\n\n noise_model : {'ar1', 'ols'}, optional\n The temporal variance model. Default='ar1'.\n\n verbose : integer, optional\n Indicate the level of verbosity. By default, nothing is printed.\n If 0 prints nothing. If 1 prints progress by computation of\n each run. If 2 prints timing details of masker and GLM. If 3\n prints masker computation details. Default=0.\n\n n_jobs : integer, optional\n The number of CPUs to use to do the computation. -1 means\n 'all CPUs', -2 'all CPUs but one', and so on.\n Default=1.\n\n minimize_memory : boolean, optional\n Gets rid of some variables on the model fit results that are not\n necessary for contrast computation and would only be useful for\n further inspection of model details. This has an important impact\n on memory consumption. 
Default=True.\n\n subject_label : string, optional\n This id will be used to identify a `FirstLevelModel` when passed to\n a `SecondLevelModel` object.\n\n Attributes\n ----------\n labels_ : array of shape (n_voxels,),\n a map of values on voxels used to identify the corresponding model\n\n results_ : dict,\n with keys corresponding to the different labels values.\n Values are SimpleRegressionResults corresponding to the voxels,\n if minimize_memory is True,\n RegressionResults if minimize_memory is False\n\n Notes\n -----\n This class is experimental.\n It may change in any future release of Nilearn.\n\n \"\"\"\n def __init__(self, t_r=None, slice_time_ref=0., hrf_model='glover',\n drift_model='cosine', high_pass=.01, drift_order=1,\n fir_delays=[0], min_onset=-24, mask_img=None,\n target_affine=None, target_shape=None, smoothing_fwhm=None,\n memory=Memory(None), memory_level=1, standardize=False,\n signal_scaling=0, noise_model='ar1', verbose=0, n_jobs=1,\n minimize_memory=True, subject_label=None):\n # design matrix parameters\n self.t_r = t_r\n self.slice_time_ref = slice_time_ref\n self.hrf_model = hrf_model\n self.drift_model = drift_model\n self.high_pass = high_pass\n self.drift_order = drift_order\n self.fir_delays = fir_delays\n self.min_onset = min_onset\n # glm parameters\n self.mask_img = mask_img\n self.target_affine = target_affine\n self.target_shape = target_shape\n self.smoothing_fwhm = smoothing_fwhm\n if isinstance(memory, str):\n self.memory = Memory(memory)\n else:\n self.memory = memory\n self.memory_level = memory_level\n self.standardize = standardize\n if signal_scaling is False:\n self.signal_scaling = signal_scaling\n elif signal_scaling in [0, 1, (0, 1)]:\n self.scaling_axis = signal_scaling\n self.signal_scaling = True\n self.standardize = False\n else:\n raise ValueError('signal_scaling must be \"False\", \"0\", \"1\"'\n ' or \"(0, 1)\"')\n\n self.noise_model = noise_model\n self.verbose = verbose\n self.n_jobs = n_jobs\n self.minimize_memory = minimize_memory\n # attributes\n self.labels_ = None\n self.results_ = None\n self.subject_label = subject_label\n\n def fit(self, run_imgs, events=None, confounds=None,\n design_matrices=None):\n \"\"\"Fit the GLM\n\n For each run:\n 1. create design matrix X\n 2. do a masker job: fMRI_data -> Y\n 3. fit regression to (Y, X)\n\n Parameters\n ----------\n run_imgs : Niimg-like object or list of Niimg-like objects,\n Data on which the GLM will be fitted. If this is a list,\n the affine is considered the same for all.\n\n events : pandas Dataframe or string or list of pandas DataFrames or strings, optional\n fMRI events used to build design matrices. One events object\n expected per run_img. Ignored in case designs is not None.\n If string, then a path to a csv file is expected.\n\n confounds : pandas Dataframe, numpy array or string or\n list of pandas DataFrames, numpy arays or strings, optional\n Each column in a DataFrame corresponds to a confound variable\n to be included in the regression model of the respective run_img.\n The number of rows must match the number of volumes in the\n respective run_img. Ignored in case designs is not None.\n If string, then a path to a csv file is expected.\n\n design_matrices : pandas DataFrame or list of pandas DataFrames, optional\n Design matrices that will be used to fit the GLM. 
If given it\n takes precedence over events and confounds.\n\n \"\"\"\n # Initialize masker_ to None such that attribute exists\n self.masker_ = None\n\n # Raise a warning if both design_matrices and confounds are provided\n if design_matrices is not None and (confounds is not None or events is not None):\n warn('If design matrices are supplied, confounds and events will be ignored.')\n # Local import to prevent circular imports\n from nilearn.input_data import NiftiMasker # noqa\n\n # Check arguments\n # Check imgs type\n if events is not None:\n _check_events_file_uses_tab_separators(events_files=events)\n if not isinstance(run_imgs, (list, tuple)):\n run_imgs = [run_imgs]\n if design_matrices is None:\n if events is None:\n raise ValueError('events or design matrices must be provided')\n if self.t_r is None:\n raise ValueError('t_r not given to FirstLevelModel object'\n ' to compute design from events')\n else:\n design_matrices = _check_run_tables(run_imgs, design_matrices,\n 'design_matrices')\n # Check that number of events and confound files match number of runs\n # Also check that events and confound files can be loaded as DataFrame\n if events is not None:\n events = _check_run_tables(run_imgs, events, 'events')\n if confounds is not None:\n confounds = _check_run_tables(run_imgs, confounds, 'confounds')\n\n # Learn the mask\n if self.mask_img is False:\n # We create a dummy mask to preserve functionality of api\n ref_img = check_niimg(run_imgs[0])\n self.mask_img = Nifti1Image(np.ones(ref_img.shape[:3]),\n ref_img.affine)\n if not isinstance(self.mask_img, NiftiMasker):\n self.masker_ = NiftiMasker(mask_img=self.mask_img,\n smoothing_fwhm=self.smoothing_fwhm,\n target_affine=self.target_affine,\n standardize=self.standardize,\n mask_strategy='epi',\n t_r=self.t_r,\n memory=self.memory,\n verbose=max(0, self.verbose - 2),\n target_shape=self.target_shape,\n memory_level=self.memory_level\n )\n self.masker_.fit(run_imgs[0])\n else:\n # Make sure masker has been fitted otherwise no attribute mask_img_\n self.mask_img._check_fitted()\n if self.mask_img.mask_img_ is None and self.masker_ is None:\n self.masker_ = clone(self.mask_img)\n for param_name in ['target_affine', 'target_shape',\n 'smoothing_fwhm', 't_r', 'memory',\n 'memory_level']:\n our_param = getattr(self, param_name)\n if our_param is None:\n continue\n if getattr(self.masker_, param_name) is not None:\n warn('Parameter %s of the masker'\n ' overriden' % param_name)\n setattr(self.masker_, param_name, our_param)\n self.masker_.fit(run_imgs[0])\n else:\n self.masker_ = self.mask_img\n\n # For each run fit the model and keep only the regression results.\n self.labels_, self.results_, self.design_matrices_ = [], [], []\n n_runs = len(run_imgs)\n t0 = time.time()\n for run_idx, run_img in enumerate(run_imgs):\n # Report progress\n if self.verbose > 0:\n percent = float(run_idx) / n_runs\n percent = round(percent * 100, 2)\n dt = time.time() - t0\n # We use a max to avoid a division by zero\n if run_idx == 0:\n remaining = 'go take a coffee, a big one'\n else:\n remaining = (100. 
- percent) / max(0.01, percent) * dt\n remaining = '%i seconds remaining' % remaining\n\n sys.stderr.write(\n \"Computing run %d out of %d runs (%s)\\n\"\n % (run_idx + 1, n_runs, remaining))\n\n # Build the experimental design for the glm\n run_img = check_niimg(run_img, ensure_ndim=4)\n if design_matrices is None:\n n_scans = get_data(run_img).shape[3]\n if confounds is not None:\n confounds_matrix = confounds[run_idx].values\n if confounds_matrix.shape[0] != n_scans:\n raise ValueError('Rows in confounds does not match'\n 'n_scans in run_img at index %d'\n % (run_idx,))\n confounds_names = confounds[run_idx].columns.tolist()\n else:\n confounds_matrix = None\n confounds_names = None\n start_time = self.slice_time_ref * self.t_r\n end_time = (n_scans - 1 + self.slice_time_ref) * self.t_r\n frame_times = np.linspace(start_time, end_time, n_scans)\n design = make_first_level_design_matrix(frame_times,\n events[run_idx],\n self.hrf_model,\n self.drift_model,\n self.high_pass,\n self.drift_order,\n self.fir_delays,\n confounds_matrix,\n confounds_names,\n self.min_onset\n )\n else:\n design = design_matrices[run_idx]\n self.design_matrices_.append(design)\n\n # Mask and prepare data for GLM\n if self.verbose > 1:\n t_masking = time.time()\n sys.stderr.write('Starting masker computation \\r')\n\n Y = self.masker_.transform(run_img)\n del run_img # Delete unmasked image to save memory\n\n if self.verbose > 1:\n t_masking = time.time() - t_masking\n sys.stderr.write('Masker took %d seconds \\n'\n % t_masking)\n\n if self.signal_scaling:\n Y, _ = mean_scaling(Y, self.scaling_axis)\n if self.memory:\n mem_glm = self.memory.cache(run_glm, ignore=['n_jobs'])\n else:\n mem_glm = run_glm\n\n # compute GLM\n if self.verbose > 1:\n t_glm = time.time()\n sys.stderr.write('Performing GLM computation\\r')\n labels, results = mem_glm(Y, design.values,\n noise_model=self.noise_model,\n bins=100, n_jobs=self.n_jobs)\n if self.verbose > 1:\n t_glm = time.time() - t_glm\n sys.stderr.write('GLM took %d seconds \\n' % t_glm)\n\n self.labels_.append(labels)\n # We save memory if inspecting model details is not necessary\n if self.minimize_memory:\n for key in results:\n results[key] = SimpleRegressionResults(results[key])\n self.results_.append(results)\n del Y\n\n # Report progress\n if self.verbose > 0:\n sys.stderr.write(\"\\nComputation of %d runs done in %i seconds\\n\\n\"\n % (n_runs, time.time() - t0))\n return self\n\n def compute_contrast(self, contrast_def, stat_type=None,\n output_type='z_score'):\n \"\"\"Generate different outputs corresponding to\n the contrasts provided e.g. z_map, t_map, effects and variance.\n In multi-session case, outputs the fixed effects map.\n\n Parameters\n ----------\n contrast_def : str or array of shape (n_col) or list of (string or\n array of shape (n_col))\n\n where ``n_col`` is the number of columns of the design matrix,\n (one array per run). If only one array is provided when there\n are several runs, it will be assumed that the same contrast is\n desired for all runs. The string can be a formula compatible with\n `pandas.DataFrame.eval`. Basically one can use the name of the\n conditions as they appear in the design matrix of the fitted model\n combined with operators +- and combined with numbers\n with operators +-`*`/.\n\n stat_type : {'t', 'F'}, optional\n type of the contrast\n\n output_type : str, optional\n Type of the output map. 
Can be 'z_score', 'stat', 'p_value',\n 'effect_size', 'effect_variance' or 'all'.\n Default='z-score'.\n\n Returns\n -------\n output : Nifti1Image or dict\n The desired output image(s). If ``output_type == 'all'``, then\n the output is a dictionary of images, keyed by the type of image.\n\n \"\"\"\n if self.labels_ is None or self.results_ is None:\n raise ValueError('The model has not been fit yet')\n\n if isinstance(contrast_def, (np.ndarray, str)):\n con_vals = [contrast_def]\n elif isinstance(contrast_def, (list, tuple)):\n con_vals = contrast_def\n else:\n raise ValueError('contrast_def must be an array or str or list of'\n ' (array or str)')\n\n n_runs = len(self.labels_)\n n_contrasts = len(con_vals)\n if n_contrasts == 1 and n_runs > 1:\n warn('One contrast given, assuming it for all %d runs' % n_runs)\n con_vals = con_vals * n_runs\n elif n_contrasts != n_runs:\n raise ValueError('%n contrasts given, while there are %n runs' %\n (n_contrasts, n_runs))\n\n # Translate formulas to vectors\n for cidx, (con, design_mat) in enumerate(zip(con_vals,\n self.design_matrices_)\n ):\n design_columns = design_mat.columns.tolist()\n if isinstance(con, str):\n con_vals[cidx] = expression_to_contrast_vector(\n con, design_columns)\n\n valid_types = ['z_score', 'stat', 'p_value', 'effect_size',\n 'effect_variance']\n valid_types.append('all') # ensuring 'all' is the final entry.\n if output_type not in valid_types:\n raise ValueError(\n 'output_type must be one of {}'.format(valid_types))\n contrast = _compute_fixed_effect_contrast(self.labels_, self.results_,\n con_vals, stat_type)\n output_types = (valid_types[:-1]\n if output_type == 'all' else [output_type])\n outputs = {}\n for output_type_ in output_types:\n estimate_ = getattr(contrast, output_type_)()\n # Prepare the returned images\n output = self.masker_.inverse_transform(estimate_)\n contrast_name = str(con_vals)\n output.header['descrip'] = (\n '%s of contrast %s' % (output_type_, contrast_name))\n outputs[output_type_] = output\n\n return outputs if output_type == 'all' else output\n\n def _get_voxelwise_model_attribute(self, attribute,\n result_as_time_series):\n \"\"\"Transform RegressionResults instances within a dictionary\n (whose keys represent the autoregressive coefficient under the 'ar1'\n noise model or only 0.0 under 'ols' noise_model and values are the\n RegressionResults instances) into input nifti space.\n\n Parameters\n ----------\n attribute : str\n an attribute of a RegressionResults instance.\n possible values include: resid, norm_resid, predicted,\n SSE, r_square, MSE.\n\n result_as_time_series : bool\n whether the RegressionResult attribute has a value\n per timepoint of the input nifti image.\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n # check if valid attribute is being accessed.\n all_attributes = dict(vars(RegressionResults)).keys()\n possible_attributes = [prop\n for prop in all_attributes\n if '__' not in prop\n ]\n if attribute not in possible_attributes:\n msg = (\"attribute must be one of: \"\n \"{attr}\".format(attr=possible_attributes)\n )\n raise ValueError(msg)\n\n if self.minimize_memory:\n raise ValueError(\n 'To access voxelwise attributes like '\n 'R-squared, residuals, and predictions, '\n 'the `FirstLevelModel`-object needs to store '\n 'there attributes. 
'\n 'To do so, set `minimize_memory` to `False` '\n 'when initializing the `FirstLevelModel`-object.')\n\n if self.labels_ is None or self.results_ is None:\n raise ValueError('The model has not been fit yet')\n\n output = []\n\n for design_matrix, labels, results in zip(self.design_matrices_,\n self.labels_,\n self.results_\n ):\n if result_as_time_series:\n voxelwise_attribute = np.zeros((design_matrix.shape[0],\n len(labels))\n )\n else:\n voxelwise_attribute = np.zeros((1, len(labels)))\n\n for label_ in results:\n label_mask = labels == label_\n voxelwise_attribute[:, label_mask] = getattr(results[label_],\n attribute)\n\n output.append(self.masker_.inverse_transform(voxelwise_attribute))\n\n return output\n\n @auto_attr\n def residuals(self):\n \"\"\"Transform voxelwise residuals to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('resid',\n result_as_time_series=True)\n\n @auto_attr\n def predicted(self):\n \"\"\"Transform voxelwise predicted values to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('predicted',\n result_as_time_series=True)\n\n @auto_attr\n def r_square(self):\n \"\"\"Transform voxelwise r-squared values to the same shape\n as the input Nifti1Image(s)\n\n Returns\n -------\n output : list\n A list of Nifti1Image(s).\n\n \"\"\"\n return self._get_voxelwise_model_attribute('r_square',\n result_as_time_series=False\n )\n\n\ndef first_level_from_bids(dataset_path, task_label, space_label=None,\n img_filters=None, t_r=None, slice_time_ref=0.,\n hrf_model='glover', drift_model='cosine',\n high_pass=.01, drift_order=1, fir_delays=[0],\n min_onset=-24, mask_img=None,\n target_affine=None, target_shape=None,\n smoothing_fwhm=None, memory=Memory(None),\n memory_level=1, standardize=False,\n signal_scaling=0, noise_model='ar1',\n verbose=0, n_jobs=1,\n minimize_memory=True,\n derivatives_folder='derivatives'):\n \"\"\"Create FirstLevelModel objects and fit arguments from a BIDS dataset.\n\n It t_r is not specified this function will attempt to load it from a\n bold.json file alongside slice_time_ref. Otherwise t_r and slice_time_ref\n are taken as given.\n\n Parameters\n ----------\n dataset_path : str\n Directory of the highest level folder of the BIDS dataset. Should\n contain subject folders and a derivatives folder.\n\n task_label : str\n Task_label as specified in the file names like _task-<task_label>_.\n\n space_label : str, optional\n Specifies the space label of the preprocessed bold.nii images.\n As they are specified in the file names like _space-<space_label>_.\n\n img_filters : list of tuples (str, str), optional\n Filters are of the form (field, label). Only one filter per field\n allowed. A file that does not match a filter will be discarded.\n Possible filters are 'acq', 'ce', 'dir', 'rec', 'run', 'echo', 'res',\n 'den', and 'desc'. Filter examples would be ('desc', 'preproc'),\n ('dir', 'pa') and ('run', '10').\n\n derivatives_folder : str, optional\n derivatives and app folder path containing preprocessed files.\n Like \"derivatives/FMRIPREP\". Default=\"derivatives\".\n\n All other parameters correspond to a `FirstLevelModel` object, which\n contains their documentation. 
The subject label of the model will be\n determined directly from the BIDS dataset.\n\n Returns\n -------\n models : list of `FirstLevelModel` objects\n Each FirstLevelModel object corresponds to a subject. All runs from\n different sessions are considered together for the same subject to run\n a fixed effects analysis on them.\n\n models_run_imgs : list of list of Niimg-like objects,\n Items for the FirstLevelModel fit function of their respective model.\n\n models_events : list of list of pandas DataFrames,\n Items for the FirstLevelModel fit function of their respective model.\n\n models_confounds : list of list of pandas DataFrames or None,\n Items for the FirstLevelModel fit function of their respective model.\n\n \"\"\"\n # check arguments\n img_filters = img_filters if img_filters else []\n if not isinstance(dataset_path, str):\n raise TypeError(\n 'dataset_path must be a string, instead %s was given' %\n type(task_label))\n if not os.path.exists(dataset_path):\n raise ValueError('given path do not exist: %s' % dataset_path)\n if not isinstance(task_label, str):\n raise TypeError('task_label must be a string, instead %s was given' %\n type(task_label))\n if space_label is not None and not isinstance(space_label, str):\n raise TypeError('space_label must be a string, instead %s was given' %\n type(space_label))\n if not isinstance(img_filters, list):\n raise TypeError('img_filters must be a list, instead %s was given' %\n type(img_filters))\n for img_filter in img_filters:\n if (not isinstance(img_filter[0], str)\n or not isinstance(img_filter[1], str)):\n raise TypeError('filters in img filters must be (str, str), '\n 'instead %s was given' % type(img_filter))\n if img_filter[0] not in ['acq', 'ce', 'dir', 'rec', 'run',\n 'echo', 'desc', 'res', 'den',\n ]:\n raise ValueError(\n \"field %s is not a possible filter. Only \"\n \"'acq', 'ce', 'dir', 'rec', 'run', 'echo', \"\n \"'desc', 'res', 'den' are allowed.\" % img_filter[0])\n\n # check derivatives folder is present\n derivatives_path = os.path.join(dataset_path, derivatives_folder)\n if not os.path.exists(derivatives_path):\n raise ValueError('derivatives folder does not exist in given dataset')\n\n # Get acq specs for models. RepetitionTime and SliceTimingReference.\n # Throw warning if no bold.json is found\n if t_r is not None:\n warn('RepetitionTime given in model_init as %d' % t_r)\n warn('slice_time_ref is %d percent of the repetition '\n 'time' % slice_time_ref)\n else:\n filters = [('task', task_label)]\n for img_filter in img_filters:\n if img_filter[0] in ['acq', 'rec', 'run']:\n filters.append(img_filter)\n\n img_specs = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='bold', file_type='json',\n filters=filters)\n # If we dont find the parameter information in the derivatives folder\n # we try to search in the raw data folder\n if not img_specs:\n img_specs = get_bids_files(dataset_path, modality_folder='func',\n file_tag='bold', file_type='json',\n filters=filters)\n if not img_specs:\n warn('No bold.json found in derivatives folder or '\n 'in dataset folder. t_r can not be inferred and will need to'\n ' be set manually in the list of models, otherwise their fit'\n ' will throw an exception')\n else:\n specs = json.load(open(img_specs[0], 'r'))\n if 'RepetitionTime' in specs:\n t_r = float(specs['RepetitionTime'])\n else:\n warn('RepetitionTime not found in file %s. t_r can not be '\n 'inferred and will need to be set manually in the '\n 'list of models. 
Otherwise their fit will throw an '\n ' exception' % img_specs[0])\n if 'SliceTimingRef' in specs:\n slice_time_ref = float(specs['SliceTimingRef'])\n else:\n warn('SliceTimingRef not found in file %s. It will be assumed'\n ' that the slice timing reference is 0.0 percent of the '\n 'repetition time. If it is not the case it will need to '\n 'be set manually in the generated list of models' %\n img_specs[0])\n\n # Infer subjects in dataset\n sub_folders = glob.glob(os.path.join(derivatives_path, 'sub-*/'))\n sub_labels = [os.path.basename(s[:-1]).split('-')[1] for s in sub_folders]\n sub_labels = sorted(list(set(sub_labels)))\n\n # Build fit_kwargs dictionaries to pass to their respective models fit\n # Events and confounds files must match number of imgs (runs)\n models = []\n models_run_imgs = []\n models_events = []\n models_confounds = []\n for sub_label in sub_labels:\n # Create model\n model = FirstLevelModel(\n t_r=t_r, slice_time_ref=slice_time_ref, hrf_model=hrf_model,\n drift_model=drift_model, high_pass=high_pass,\n drift_order=drift_order, fir_delays=fir_delays,\n min_onset=min_onset, mask_img=mask_img,\n target_affine=target_affine, target_shape=target_shape,\n smoothing_fwhm=smoothing_fwhm, memory=memory,\n memory_level=memory_level, standardize=standardize,\n signal_scaling=signal_scaling, noise_model=noise_model,\n verbose=verbose, n_jobs=n_jobs,\n minimize_memory=minimize_memory, subject_label=sub_label)\n models.append(model)\n\n # Get preprocessed imgs\n if space_label is None:\n filters = [('task', task_label)] + img_filters\n else:\n filters = [('task', task_label),\n ('space', space_label)] + img_filters\n imgs = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='bold', file_type='nii*',\n sub_label=sub_label, filters=filters)\n # If there is more than one file for the same (ses, run), likely we\n # have an issue of underspecification of filters.\n run_check_list = []\n # If more than one run is present the run field is mandatory in BIDS\n # as well as the ses field if more than one session is present.\n if len(imgs) > 1:\n for img in imgs:\n img_dict = parse_bids_filename(img)\n if (\n '_ses-' in img_dict['file_basename']\n and '_run-' in img_dict['file_basename']\n ):\n if (img_dict['ses'], img_dict['run']) in run_check_list:\n raise ValueError(\n 'More than one nifti image found '\n 'for the same run %s and session %s. '\n 'Please verify that the '\n 'desc_label and space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' %\n (img_dict['run'], img_dict['ses']))\n else:\n run_check_list.append((img_dict['ses'],\n img_dict['run']))\n\n elif '_ses-' in img_dict['file_basename']:\n if img_dict['ses'] in run_check_list:\n raise ValueError(\n 'More than one nifti image '\n 'found for the same ses %s, while '\n 'no additional run specification present'\n '. Please verify that the desc_label and '\n 'space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' %\n img_dict['ses'])\n else:\n run_check_list.append(img_dict['ses'])\n\n elif '_run-' in img_dict['file_basename']:\n if img_dict['run'] in run_check_list:\n raise ValueError(\n 'More than one nifti image '\n 'found for the same run %s. '\n 'Please verify that the desc_label and '\n 'space_label labels '\n 'corresponding to the BIDS spec '\n 'were correctly specified.' 
%\n img_dict['run'])\n else:\n run_check_list.append(img_dict['run'])\n models_run_imgs.append(imgs)\n\n # Get events and extra confounds\n filters = [('task', task_label)]\n for img_filter in img_filters:\n if img_filter[0] in ['acq', 'rec', 'run']:\n filters.append(img_filter)\n\n # Get events files\n events = get_bids_files(dataset_path, modality_folder='func',\n file_tag='events', file_type='tsv',\n sub_label=sub_label, filters=filters)\n if events:\n if len(events) != len(imgs):\n raise ValueError('%d events.tsv files found for %d bold '\n 'files. Same number of event files as '\n 'the number of runs is expected' %\n (len(events), len(imgs)))\n events = [pd.read_csv(event, sep='\\t', index_col=None)\n for event in events]\n models_events.append(events)\n else:\n raise ValueError('No events.tsv files found')\n\n # Get confounds. If not found it will be assumed there are none.\n # If there are confounds, they are assumed to be present for all runs.\n confounds = get_bids_files(derivatives_path, modality_folder='func',\n file_tag='desc-confounds*',\n file_type='tsv', sub_label=sub_label,\n filters=filters)\n\n if confounds:\n if len(confounds) != len(imgs):\n raise ValueError('%d confounds.tsv files found for %d bold '\n 'files. Same number of confound files as '\n 'the number of runs is expected' %\n (len(events), len(imgs)))\n confounds = [pd.read_csv(c, sep='\\t', index_col=None)\n for c in confounds]\n models_confounds.append(confounds)\n\n return models, models_run_imgs, models_events, models_confounds\n",
"path": "nilearn/glm/first_level/first_level.py"
}
] | diff --git a/doc/whats_new.rst b/doc/whats_new.rst
index 79546fe1bc..df432c18c0 100644
--- a/doc/whats_new.rst
+++ b/doc/whats_new.rst
@@ -11,6 +11,9 @@ Fixes
in :func:`nilearn.signal.clean`, so that these operations are applied
in the same order as for the signals, i.e., first detrending and
then temporal filtering (https://github.com/nilearn/nilearn/issues/2730).
+- Fix number of attributes returned by the
+ :func:`nilearn.glm.first_level.FirstLevelModel._get_voxelwise_model_attribute` method in the first level model.
+ It used to return only the first attribute, and now returns as many attributes as design matrices.
Enhancements
diff --git a/nilearn/glm/first_level/first_level.py b/nilearn/glm/first_level/first_level.py
index a26d9e084b..f7dfbb44c1 100644
--- a/nilearn/glm/first_level/first_level.py
+++ b/nilearn/glm/first_level/first_level.py
@@ -683,7 +683,7 @@ def _get_voxelwise_model_attribute(self, attribute,
output.append(self.masker_.inverse_transform(voxelwise_attribute))
-            return output
+        return output
@auto_attr
def residuals(self):
diff --git a/nilearn/glm/tests/test_first_level.py b/nilearn/glm/tests/test_first_level.py
index 8cafb03686..136afdfb1c 100644
--- a/nilearn/glm/tests/test_first_level.py
+++ b/nilearn/glm/tests/test_first_level.py
@@ -286,6 +286,7 @@ def test_compute_contrast_num_contrasts():
with pytest.warns(UserWarning, match='One contrast given, assuming it for all 3 runs'):
multi_session_model.compute_contrast([np.eye(rk)[1]])
+
def test_run_glm():
rng = np.random.RandomState(42)
n, p, q = 100, 80, 10
@@ -703,6 +704,24 @@ def test_first_level_residuals():
assert_array_almost_equal(mean_residuals, 0)
+@pytest.mark.parametrize("shapes", [
+ [(10, 10, 10, 25)],
+ [(10, 10, 10, 25), (10, 10, 10, 100)],
+])
+def test_get_voxelwise_attributes_should_return_as_many_as_design_matrices(shapes):
+ mask, fmri_data, design_matrices = generate_fake_fmri_data_and_design(shapes)
+
+ for i in range(len(design_matrices)):
+ design_matrices[i].iloc[:, 0] = 1
+
+ model = FirstLevelModel(mask_img=mask, minimize_memory=False,
+ noise_model='ols')
+ model.fit(fmri_data, design_matrices=design_matrices)
+
+ # Check that length of outputs is the same as the number of design matrices
+ assert len(model._get_voxelwise_model_attribute("resid", True)) == len(shapes)
+
+
def test_first_level_predictions_r_square():
shapes, rk = [(10, 10, 10, 25)], 3
mask, fmri_data, design_matrices = generate_fake_fmri_data_and_design(shapes, rk)
@@ -726,4 +745,4 @@ def test_first_level_predictions_r_square():
assert_almost_equal(np.mean(y_predicted - y_measured), 0)
r_square_2d = model.masker_.transform(r_square_3d)
- assert_array_less(0., r_square_2d)
\ No newline at end of file
+ assert_array_less(0., r_square_2d)
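The one-line change above is easy to misread because only the indentation differs: before the fix, `return output` sat inside the loop over design matrices, so `_get_voxelwise_model_attribute` handed back the list after processing only the first run (which is what the whats_new entry and the new parametrized test describe). A minimal standalone sketch of that pattern — generic names, not nilearn code:

```python
def collect_buggy(items):
    output = []
    for item in items:
        output.append(item * 2)
        return output  # inside the loop: gives up after the first item


def collect_fixed(items):
    output = []
    for item in items:
        output.append(item * 2)
    return output  # after the loop: one entry per input


assert collect_buggy([1, 2, 3]) == [2]
assert collect_fixed([1, 2, 3]) == [2, 4, 6]
```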
|
e-valuation__EvaP-817 | +x on update.sh, earlier apache restart
update_production.sh is missing the executable (x) bit; also, because of the cache clearing, Apache is only restarted about two minutes after the code has changed.
| [
{
"content": "from django.core.management.base import BaseCommand\nfrom django.core.serializers.base import ProgressBar\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n total_count = Course.objects.count()\n\n self.stdout.write(\"Calculating results for all courses...\")\n\n self.stdout.ending = None\n progress_bar = ProgressBar(self.stdout, total_count)\n\n for counter, course in enumerate(Course.objects.all()):\n progress_bar.update(counter + 1)\n calculate_results(course)\n\n self.stdout.write(\"Done with updating cache.\\n\")\n",
"path": "evap/evaluation/management/commands/refresh_results_cache.py"
}
] | [
{
"content": "from django.core.management.base import BaseCommand\nfrom django.core.serializers.base import ProgressBar\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n total_count = Course.objects.count()\n\n self.stdout.write(\"Calculating results for all courses...\")\n\n self.stdout.ending = None\n progress_bar = ProgressBar(self.stdout, total_count)\n\n for counter, course in enumerate(Course.objects.all()):\n progress_bar.update(counter + 1)\n calculate_results(course)\n\n self.stdout.write(\"Results cache has been refreshed.\\n\")\n",
"path": "evap/evaluation/management/commands/refresh_results_cache.py"
}
] | diff --git a/deployment/update_production.sh b/deployment/update_production.sh
old mode 100644
new mode 100755
index 44c3c4f261..9cc89634a8
--- a/deployment/update_production.sh
+++ b/deployment/update_production.sh
@@ -6,13 +6,16 @@ set -x # print executed commands
sudo -u evap git fetch
sudo -u evap git checkout origin/release
sudo pip3 install -r requirements.txt
-sudo -u evap ./manage.py migrate
-sudo -u evap ./manage.py collectstatic --noinput
sudo -u evap ./manage.py compilemessages
+sudo -u evap ./manage.py collectstatic --noinput
sudo -u evap ./manage.py compress --verbosity=0
+sudo -u evap ./manage.py migrate
+# reload only after static files are updated, so the new code finds all the files it expects.
+# also, reload after migrations happened. see https://github.com/fsr-itse/EvaP/pull/817 for a discussion.
+sudo service apache2 reload
+# update caches. this can take minutes but doesn't need a reload.
sudo -u evap ./manage.py clear_cache
sudo -u evap ./manage.py refresh_results_cache
-sudo service apache2 restart
{ set +x; } 2>/dev/null # don't print the echo command, and don't print the 'set +x' itself
diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py
index 300b802123..3aaecd26e4 100644
--- a/evap/evaluation/management/commands/refresh_results_cache.py
+++ b/evap/evaluation/management/commands/refresh_results_cache.py
@@ -24,4 +24,4 @@ def handle(self, *args, **options):
progress_bar.update(counter + 1)
calculate_results(course)
- self.stdout.write("Done with updating cache.\n")
+ self.stdout.write("Results cache has been refreshed.\n")
diff --git a/evap/evaluation/tests/test_commands.py b/evap/evaluation/tests/test_commands.py
index 5d3a856840..abc1671cd3 100644
--- a/evap/evaluation/tests/test_commands.py
+++ b/evap/evaluation/tests/test_commands.py
@@ -3,7 +3,7 @@
from unittest.mock import patch
from django.conf import settings
-from django.utils.six import StringIO
+from io import StringIO
from django.core import management, mail
from django.test import TestCase
from django.test.utils import override_settings
diff --git a/evap/evaluation/tests/test_misc.py b/evap/evaluation/tests/test_misc.py
index bc12d0765b..9fe8344e12 100644
--- a/evap/evaluation/tests/test_misc.py
+++ b/evap/evaluation/tests/test_misc.py
@@ -1,4 +1,5 @@
import os.path
+from io import StringIO
from django.conf import settings
from django.contrib.auth.models import Group
@@ -59,3 +60,15 @@ def load_test_data(self):
call_command("loaddata", "test_data", verbosity=0)
except Exception:
self.fail("Test data failed to load.")
+
+
+class TestMissingMigrations(TestCase):
+ def test_for_missing_migrations(self):
+ output = StringIO()
+ try:
+ call_command('makemigrations', interactive=False, dry_run=True, exit_code=True, stdout=output)
+ except SystemExit as e:
+ # The exit code will be 1 when there are no missing migrations
+ self.assertEqual(str(e), '1')
+ else:
+ self.fail("There are missing migrations:\n %s" % output.getvalue())
|
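As an aside on the missing x bit described in the issue above, this is a minimal Python sketch of one way to add execute permission to the script; the actual fix in the diff above records the equivalent mode change (100644 → 100755) directly in git. The path is taken from the issue and the snippet assumes it runs from the repository root.

```python
import os
import stat

# Path taken from the issue; assumes this runs from the repository root.
script = "deployment/update_production.sh"

# Add the execute bit for owner, group, and others on top of the current mode,
# mirroring the 100644 -> 100755 mode change recorded in the diff above.
current_mode = os.stat(script).st_mode
os.chmod(script, current_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```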
lnbits__lnbits-1183 | [BUG] LNDhub extension returns unusable `getinfo` response
**Describe the bug**
The [getinfo call](https://github.com/lnbits/lnbits/blob/main/lnbits/extensions/lndhub/views_api.py#L22) simply returns `bad auth` every time, which breaks integrations such as ours in BTCPay Server (see btcpayserver/btcpayserver#4414).
**Expected behavior**
Return [valid information](https://github.com/BlueWallet/LndHub/blob/master/doc/Send-requirements.md#get-getinfo) that we can use to connect. For us, that would mean having a list of `uris` and a `block_height` set.
| [
{
"content": "import asyncio\nimport time\nfrom base64 import urlsafe_b64encode\nfrom http import HTTPStatus\n\nfrom fastapi.param_functions import Query\nfrom fastapi.params import Depends\nfrom pydantic import BaseModel\nfrom starlette.exceptions import HTTPException\n\nfrom lnbits import bolt11\nfrom lnbits.core.crud import delete_expired_invoices, get_payments\nfrom lnbits.core.services import create_invoice, pay_invoice\nfrom lnbits.decorators import WalletTypeInfo\nfrom lnbits.settings import LNBITS_SITE_TITLE, WALLET\n\nfrom . import lndhub_ext\nfrom .decorators import check_wallet, require_admin_key\nfrom .utils import decoded_as_lndhub, to_buffer\n\n\n@lndhub_ext.get(\"/ext/getinfo\")\nasync def lndhub_getinfo():\n raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail=\"bad auth\")\n\n\nclass AuthData(BaseModel):\n login: str = Query(None)\n password: str = Query(None)\n refresh_token: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/auth\")\nasync def lndhub_auth(data: AuthData):\n token = (\n data.refresh_token\n if data.refresh_token\n else urlsafe_b64encode(\n (data.login + \":\" + data.password).encode(\"utf-8\")\n ).decode(\"ascii\")\n )\n return {\"refresh_token\": token, \"access_token\": token}\n\n\nclass AddInvoice(BaseModel):\n amt: str = Query(...)\n memo: str = Query(...)\n preimage: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/addinvoice\")\nasync def lndhub_addinvoice(\n data: AddInvoice, wallet: WalletTypeInfo = Depends(check_wallet)\n):\n try:\n _, pr = await create_invoice(\n wallet_id=wallet.wallet.id,\n amount=int(data.amt),\n memo=data.memo or LNBITS_SITE_TITLE,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(\n status_code=HTTPStatus.NOT_FOUND, detail=\"Failed to create invoice\"\n )\n invoice = bolt11.decode(pr)\n return {\n \"pay_req\": pr,\n \"payment_request\": pr,\n \"add_index\": \"500\",\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"hash\": invoice.payment_hash,\n }\n\n\nclass Invoice(BaseModel):\n invoice: str = Query(...)\n\n\n@lndhub_ext.post(\"/ext/payinvoice\")\nasync def lndhub_payinvoice(\n r_invoice: Invoice, wallet: WalletTypeInfo = Depends(require_admin_key)\n):\n try:\n await pay_invoice(\n wallet_id=wallet.wallet.id,\n payment_request=r_invoice.invoice,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(status_code=HTTPStatus.NOT_FOUND, detail=\"Payment failed\")\n\n invoice: bolt11.Invoice = bolt11.decode(r_invoice.invoice)\n\n return {\n \"payment_error\": \"\",\n \"payment_preimage\": \"0\" * 64,\n \"route\": {},\n \"payment_hash\": invoice.payment_hash,\n \"decoded\": decoded_as_lndhub(invoice),\n \"fee_msat\": 0,\n \"type\": \"paid_invoice\",\n \"fee\": 0,\n \"value\": invoice.amount_msat / 1000,\n \"timestamp\": int(time.time()),\n \"memo\": invoice.description,\n }\n\n\n@lndhub_ext.get(\"/ext/balance\")\nasync def lndhub_balance(\n wallet: WalletTypeInfo = Depends(check_wallet),\n):\n return {\"BTC\": {\"AvailableBalance\": wallet.wallet.balance}}\n\n\n@lndhub_ext.get(\"/ext/gettxs\")\nasync def lndhub_gettxs(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for payment in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await payment.check_status()\n\n return [\n {\n \"payment_preimage\": payment.preimage,\n \"payment_hash\": payment.payment_hash,\n \"fee_msat\": payment.fee 
* 1000,\n \"type\": \"paid_invoice\",\n \"fee\": payment.fee,\n \"value\": int(payment.amount / 1000),\n \"timestamp\": payment.time,\n \"memo\": payment.memo if not payment.pending else \"Payment in transition\",\n }\n for payment in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getuserinvoices\")\nasync def lndhub_getuserinvoices(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for invoice in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=False,\n incoming=True,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await invoice.set_pending(\n (await WALLET.get_invoice_status(invoice.checking_id)).pending\n )\n\n return [\n {\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"payment_request\": invoice.bolt11,\n \"add_index\": \"500\",\n \"description\": invoice.memo,\n \"payment_hash\": invoice.payment_hash,\n \"ispaid\": not invoice.pending,\n \"amt\": int(invoice.amount / 1000),\n \"expire_time\": int(time.time() + 1800),\n \"timestamp\": invoice.time,\n \"type\": \"user_invoice\",\n }\n for invoice in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n incoming=True,\n outgoing=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getbtc\")\nasync def lndhub_getbtc(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"load an address for incoming onchain btc\"\n return []\n\n\n@lndhub_ext.get(\"/ext/getpending\")\nasync def lndhub_getpending(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"pending onchain transactions\"\n return []\n\n\n@lndhub_ext.get(\"/ext/decodeinvoice\")\nasync def lndhub_decodeinvoice(invoice: str = Query(None)):\n inv = bolt11.decode(invoice)\n return decoded_as_lndhub(inv)\n\n\n@lndhub_ext.get(\"/ext/checkrouteinvoice\")\nasync def lndhub_checkrouteinvoice():\n \"not implemented on canonical lndhub\"\n pass\n",
"path": "lnbits/extensions/lndhub/views_api.py"
}
] | [
{
"content": "import asyncio\nimport time\nfrom base64 import urlsafe_b64encode\nfrom http import HTTPStatus\n\nfrom fastapi.param_functions import Query\nfrom fastapi.params import Depends\nfrom pydantic import BaseModel\nfrom starlette.exceptions import HTTPException\n\nfrom lnbits import bolt11\nfrom lnbits.core.crud import delete_expired_invoices, get_payments\nfrom lnbits.core.services import create_invoice, pay_invoice\nfrom lnbits.decorators import WalletTypeInfo\nfrom lnbits.settings import LNBITS_SITE_TITLE, WALLET\n\nfrom . import lndhub_ext\nfrom .decorators import check_wallet, require_admin_key\nfrom .utils import decoded_as_lndhub, to_buffer\n\n\n@lndhub_ext.get(\"/ext/getinfo\")\nasync def lndhub_getinfo():\n return {\"alias\": LNBITS_SITE_TITLE}\n\n\nclass AuthData(BaseModel):\n login: str = Query(None)\n password: str = Query(None)\n refresh_token: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/auth\")\nasync def lndhub_auth(data: AuthData):\n token = (\n data.refresh_token\n if data.refresh_token\n else urlsafe_b64encode(\n (data.login + \":\" + data.password).encode(\"utf-8\")\n ).decode(\"ascii\")\n )\n return {\"refresh_token\": token, \"access_token\": token}\n\n\nclass AddInvoice(BaseModel):\n amt: str = Query(...)\n memo: str = Query(...)\n preimage: str = Query(None)\n\n\n@lndhub_ext.post(\"/ext/addinvoice\")\nasync def lndhub_addinvoice(\n data: AddInvoice, wallet: WalletTypeInfo = Depends(check_wallet)\n):\n try:\n _, pr = await create_invoice(\n wallet_id=wallet.wallet.id,\n amount=int(data.amt),\n memo=data.memo or LNBITS_SITE_TITLE,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(\n status_code=HTTPStatus.NOT_FOUND, detail=\"Failed to create invoice\"\n )\n invoice = bolt11.decode(pr)\n return {\n \"pay_req\": pr,\n \"payment_request\": pr,\n \"add_index\": \"500\",\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"hash\": invoice.payment_hash,\n }\n\n\nclass Invoice(BaseModel):\n invoice: str = Query(...)\n\n\n@lndhub_ext.post(\"/ext/payinvoice\")\nasync def lndhub_payinvoice(\n r_invoice: Invoice, wallet: WalletTypeInfo = Depends(require_admin_key)\n):\n try:\n await pay_invoice(\n wallet_id=wallet.wallet.id,\n payment_request=r_invoice.invoice,\n extra={\"tag\": \"lndhub\"},\n )\n except:\n raise HTTPException(status_code=HTTPStatus.NOT_FOUND, detail=\"Payment failed\")\n\n invoice: bolt11.Invoice = bolt11.decode(r_invoice.invoice)\n\n return {\n \"payment_error\": \"\",\n \"payment_preimage\": \"0\" * 64,\n \"route\": {},\n \"payment_hash\": invoice.payment_hash,\n \"decoded\": decoded_as_lndhub(invoice),\n \"fee_msat\": 0,\n \"type\": \"paid_invoice\",\n \"fee\": 0,\n \"value\": invoice.amount_msat / 1000,\n \"timestamp\": int(time.time()),\n \"memo\": invoice.description,\n }\n\n\n@lndhub_ext.get(\"/ext/balance\")\nasync def lndhub_balance(\n wallet: WalletTypeInfo = Depends(check_wallet),\n):\n return {\"BTC\": {\"AvailableBalance\": wallet.wallet.balance}}\n\n\n@lndhub_ext.get(\"/ext/gettxs\")\nasync def lndhub_gettxs(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for payment in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await payment.check_status()\n\n return [\n {\n \"payment_preimage\": payment.preimage,\n \"payment_hash\": payment.payment_hash,\n \"fee_msat\": payment.fee * 1000,\n \"type\": \"paid_invoice\",\n 
\"fee\": payment.fee,\n \"value\": int(payment.amount / 1000),\n \"timestamp\": payment.time,\n \"memo\": payment.memo if not payment.pending else \"Payment in transition\",\n }\n for payment in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n outgoing=True,\n incoming=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getuserinvoices\")\nasync def lndhub_getuserinvoices(\n wallet: WalletTypeInfo = Depends(check_wallet),\n limit: int = Query(20, ge=1, le=20),\n offset: int = Query(0, ge=0),\n):\n for invoice in await get_payments(\n wallet_id=wallet.wallet.id,\n complete=False,\n pending=True,\n outgoing=False,\n incoming=True,\n limit=limit,\n offset=offset,\n exclude_uncheckable=True,\n ):\n await invoice.set_pending(\n (await WALLET.get_invoice_status(invoice.checking_id)).pending\n )\n\n return [\n {\n \"r_hash\": to_buffer(invoice.payment_hash),\n \"payment_request\": invoice.bolt11,\n \"add_index\": \"500\",\n \"description\": invoice.memo,\n \"payment_hash\": invoice.payment_hash,\n \"ispaid\": not invoice.pending,\n \"amt\": int(invoice.amount / 1000),\n \"expire_time\": int(time.time() + 1800),\n \"timestamp\": invoice.time,\n \"type\": \"user_invoice\",\n }\n for invoice in reversed(\n (\n await get_payments(\n wallet_id=wallet.wallet.id,\n pending=True,\n complete=True,\n incoming=True,\n outgoing=False,\n limit=limit,\n offset=offset,\n )\n )\n )\n ]\n\n\n@lndhub_ext.get(\"/ext/getbtc\")\nasync def lndhub_getbtc(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"load an address for incoming onchain btc\"\n return []\n\n\n@lndhub_ext.get(\"/ext/getpending\")\nasync def lndhub_getpending(wallet: WalletTypeInfo = Depends(check_wallet)):\n \"pending onchain transactions\"\n return []\n\n\n@lndhub_ext.get(\"/ext/decodeinvoice\")\nasync def lndhub_decodeinvoice(invoice: str = Query(None)):\n inv = bolt11.decode(invoice)\n return decoded_as_lndhub(inv)\n\n\n@lndhub_ext.get(\"/ext/checkrouteinvoice\")\nasync def lndhub_checkrouteinvoice():\n \"not implemented on canonical lndhub\"\n pass\n",
"path": "lnbits/extensions/lndhub/views_api.py"
}
] | diff --git a/lnbits/extensions/lndhub/views_api.py b/lnbits/extensions/lndhub/views_api.py
index 8cbe5a6bfd..2acdc4ec93 100644
--- a/lnbits/extensions/lndhub/views_api.py
+++ b/lnbits/extensions/lndhub/views_api.py
@@ -21,7 +21,7 @@
@lndhub_ext.get("/ext/getinfo")
async def lndhub_getinfo():
- raise HTTPException(status_code=HTTPStatus.UNAUTHORIZED, detail="bad auth")
+ return {"alias": LNBITS_SITE_TITLE}
class AuthData(BaseModel):
|
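The expected behavior in the issue above asks for `uris` and a `block_height`, while the merged change only returns an alias. The following is a hedged sketch of the fuller LndHub-style getinfo payload the reporter describes; the field names come from the BlueWallet document linked in the issue, and the values are placeholders, not what the merged fix produces.

```python
# Illustrative response shape only; not the payload the merged fix returns.
def sketch_getinfo_response(alias: str) -> dict:
    return {
        "alias": alias,     # the merged fix returns only this field
        "block_height": 0,  # placeholder; a real backend would report its chain tip
        "uris": [],         # placeholder; public node URIs a client could connect to
    }

print(sketch_getinfo_response("LNbits"))
```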
boto__boto-2166 | Invalid path check in euca-bundle-image
The -i option uses convert_file in boto/roboto/param.py to verify that the path passed is, indeed, a file. This fails unless the path specified is a boring old regular file, which is not necessary. Indeed, it not being necessary is sort of the whole point of Unix having a /dev in the first place. Everything is a file.
The code calls os.path.isfile(value) in convert_file(). It should instead check os.path.exists(value) and not os.path.isdir(value). Directories are the only type of file that needs to be treated as special in the normal course of events.
| [
{
"content": "# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2010, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nimport os\n\nclass Converter(object):\n\n @classmethod\n def convert_string(cls, param, value):\n # TODO: could do length validation, etc. here\n if not isinstance(value, basestring):\n raise ValueError\n return value\n\n @classmethod\n def convert_integer(cls, param, value):\n # TODO: could do range checking here\n return int(value)\n\n @classmethod\n def convert_boolean(cls, param, value):\n \"\"\"\n For command line arguments, just the presence\n of the option means True so just return True\n \"\"\"\n return True\n\n @classmethod\n def convert_file(cls, param, value):\n if os.path.isfile(value):\n return value\n raise ValueError\n\n @classmethod\n def convert_dir(cls, param, value):\n if os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert(cls, param, value):\n try:\n if hasattr(cls, 'convert_'+param.ptype):\n mthd = getattr(cls, 'convert_'+param.ptype)\n else:\n mthd = cls.convert_string\n return mthd(param, value)\n except:\n raise ValidationException(param, '')\n\nclass Param(Converter):\n\n def __init__(self, name=None, ptype='string', optional=True,\n short_name=None, long_name=None, doc='',\n metavar=None, cardinality=1, default=None,\n choices=None, encoder=None, request_param=True):\n self.name = name\n self.ptype = ptype\n self.optional = optional\n self.short_name = short_name\n self.long_name = long_name\n self.doc = doc\n self.metavar = metavar\n self.cardinality = cardinality\n self.default = default\n self.choices = choices\n self.encoder = encoder\n self.request_param = request_param\n\n @property\n def optparse_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def synopsis_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def getopt_long_name(self):\n ln = None\n if self.long_name:\n ln = '%s' % self.long_name\n if self.ptype != 'boolean':\n ln += '='\n return ln\n\n @property\n def optparse_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def synopsis_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def getopt_short_name(self):\n sn = None\n if self.short_name:\n sn = '%s' % self.short_name\n if self.ptype != 'boolean':\n sn += ':'\n 
return sn\n\n def convert(self, value):\n \"\"\"\n Convert a string value as received in the command line\n tools and convert to the appropriate type of value.\n Raise a ValidationError if the value can't be converted.\n\n :type value: str\n :param value: The value to convert. This should always\n be a string.\n \"\"\"\n return super(Param, self).convert(self,value)\n\n\n",
"path": "boto/roboto/param.py"
}
] | [
{
"content": "# Copyright (c) 2010 Mitch Garnaat http://garnaat.org/\n# Copyright (c) 2010, Eucalyptus Systems, Inc.\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nimport os\n\nclass Converter(object):\n\n @classmethod\n def convert_string(cls, param, value):\n # TODO: could do length validation, etc. here\n if not isinstance(value, basestring):\n raise ValueError\n return value\n\n @classmethod\n def convert_integer(cls, param, value):\n # TODO: could do range checking here\n return int(value)\n\n @classmethod\n def convert_boolean(cls, param, value):\n \"\"\"\n For command line arguments, just the presence\n of the option means True so just return True\n \"\"\"\n return True\n\n @classmethod\n def convert_file(cls, param, value):\n if os.path.exists(value) and not os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert_dir(cls, param, value):\n if os.path.isdir(value):\n return value\n raise ValueError\n\n @classmethod\n def convert(cls, param, value):\n try:\n if hasattr(cls, 'convert_'+param.ptype):\n mthd = getattr(cls, 'convert_'+param.ptype)\n else:\n mthd = cls.convert_string\n return mthd(param, value)\n except:\n raise ValidationException(param, '')\n\nclass Param(Converter):\n\n def __init__(self, name=None, ptype='string', optional=True,\n short_name=None, long_name=None, doc='',\n metavar=None, cardinality=1, default=None,\n choices=None, encoder=None, request_param=True):\n self.name = name\n self.ptype = ptype\n self.optional = optional\n self.short_name = short_name\n self.long_name = long_name\n self.doc = doc\n self.metavar = metavar\n self.cardinality = cardinality\n self.default = default\n self.choices = choices\n self.encoder = encoder\n self.request_param = request_param\n\n @property\n def optparse_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def synopsis_long_name(self):\n ln = None\n if self.long_name:\n ln = '--%s' % self.long_name\n return ln\n\n @property\n def getopt_long_name(self):\n ln = None\n if self.long_name:\n ln = '%s' % self.long_name\n if self.ptype != 'boolean':\n ln += '='\n return ln\n\n @property\n def optparse_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def synopsis_short_name(self):\n sn = None\n if self.short_name:\n sn = '-%s' % self.short_name\n return sn\n\n @property\n def getopt_short_name(self):\n sn = None\n if self.short_name:\n sn = '%s' % self.short_name\n if self.ptype != 
'boolean':\n sn += ':'\n return sn\n\n def convert(self, value):\n \"\"\"\n Convert a string value as received in the command line\n tools and convert to the appropriate type of value.\n Raise a ValidationError if the value can't be converted.\n\n :type value: str\n :param value: The value to convert. This should always\n be a string.\n \"\"\"\n return super(Param, self).convert(self,value)\n\n\n",
"path": "boto/roboto/param.py"
}
] | diff --git a/boto/roboto/param.py b/boto/roboto/param.py
index ed3e6be9b9..35a25b4af5 100644
--- a/boto/roboto/param.py
+++ b/boto/roboto/param.py
@@ -46,7 +46,7 @@ def convert_boolean(cls, param, value):
@classmethod
def convert_file(cls, param, value):
- if os.path.isfile(value):
+ if os.path.exists(value) and not os.path.isdir(value):
return value
raise ValueError
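Following on from the issue above, here is a small standalone sketch of the check the reporter proposes, showing that a device node such as /dev/null is accepted while a directory is rejected; the example paths are assumed to exist on a typical Unix system.

```python
import os

def is_usable_file(path: str) -> bool:
    # Accept anything that exists and is not a directory:
    # regular files, device nodes, FIFOs, and so on.
    return os.path.exists(path) and not os.path.isdir(path)

print(is_usable_file("/dev/null"))  # True on a typical Unix system
print(is_usable_file("/tmp"))       # False: directories stay special
```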
|